Wáng, Yì Xiáng J; Li, Yáo T; Chevallier, Olivier; Huang, Hua; Leung, Jason Chi Shun; Chen, Weitian; Lu, Pu-Xuan
2018-01-01
Background Intravoxel incoherent motion (IVIM) tissue parameters depend on the threshold b-value. Purpose To explore how threshold b-value impacts PF (f), Dslow (D), and Dfast (D*) values and their performance for liver fibrosis detection. Material and Methods Fifteen healthy volunteers and 33 hepatitis B patients were included. With a 1.5-T magnetic resonance (MR) scanner and respiration gating, IVIM data were acquired with ten b-values of 10, 20, 40, 60, 80, 100, 150, 200, 400, and 800 s/mm². Signal measurement was performed on the right liver. Segmented-unconstrained analysis was used to compute IVIM parameters, and six threshold b-values in the range of 40-200 s/mm² were compared. PF, Dslow, and Dfast values were placed along the x-axis, y-axis, and z-axis, and a plane was defined to separate volunteers from patients. Results Higher threshold b-values were associated with higher PF measurements, while lower threshold b-values led to higher Dslow and Dfast measurements. The dependence of PF, Dslow, and Dfast on threshold b-value differed between healthy livers and fibrotic livers, with the healthy livers showing a higher dependence. Threshold b-value = 60 s/mm² showed the largest mean distance between healthy-liver and fibrotic-liver data points, and a classification and regression tree showed that a combination of PF (PF < 9.5%), Dslow (Dslow < 1.239 × 10⁻³ mm²/s), and Dfast (Dfast < 20.85 × 10⁻³ mm²/s) differentiated healthy individuals from all individual fibrotic livers with an area under the curve of logistic regression (AUC) of 1. Conclusion For segmented-unconstrained analysis, the selection of threshold b-value = 60 s/mm² improves IVIM differentiation between healthy livers and fibrotic livers.
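As a concrete illustration of the segmented-unconstrained approach, the sketch below fits Dslow from the b-values at or above the threshold and then fits PF and Dfast with Dslow held fixed. This is a minimal sketch assuming the standard bi-exponential IVIM model S(b)/S0 = PF·exp(−b·Dfast) + (1 − PF)·exp(−b·Dslow), with synthetic signal values, not the authors' actual pipeline.

```python
# Minimal segmented-unconstrained IVIM fit (sketch, synthetic data).
import numpy as np
from scipy.optimize import curve_fit

b = np.array([10, 20, 40, 60, 80, 100, 150, 200, 400, 800.0])  # s/mm^2, as in the study
PF_t, Ds_t, Df_t = 0.12, 1.1e-3, 0.02                          # illustrative "true" values
S = PF_t * np.exp(-b * Df_t) + (1 - PF_t) * np.exp(-b * Ds_t)  # normalized signal S(b)/S0

def segmented_ivim(b, S, b_threshold=60.0):
    hi = b >= b_threshold
    # Step 1: mono-exponential fit of the perfusion-free tail yields Dslow.
    slope, _ = np.polyfit(b[hi], np.log(S[hi]), 1)
    Dslow = -slope
    # Step 2: fit PF and Dfast over all b-values with Dslow held fixed.
    model = lambda bb, PF, Dfast: PF * np.exp(-bb * Dfast) + (1 - PF) * np.exp(-bb * Dslow)
    (PF, Dfast), _ = curve_fit(model, b, S, p0=(0.1, 0.01), bounds=([0, 0], [1, 1]))
    return PF, Dslow, Dfast

PF, Dslow, Dfast = segmented_ivim(b, S)
print(f"PF={PF:.1%}, Dslow={Dslow:.2e} mm^2/s, Dfast={Dfast:.2e} mm^2/s")
```

Rerunning `segmented_ivim` with different `b_threshold` values (40-200 s/mm²) reproduces the kind of threshold dependence the study reports.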
A new edge detection algorithm based on Canny idea
NASA Astrophysics Data System (ADS)
Feng, Yingke; Zhang, Jinmin; Wang, Siming
2017-10-01
The traditional Canny algorithm lacks an adaptive threshold and is sensitive to noise. To overcome these drawbacks, this paper proposes a new edge detection method based on the Canny algorithm. First, median filtering and a Euclidean-distance-based filter are applied to the image; second, the Frei-Chen operator is used to calculate the gradient amplitude; finally, the Otsu algorithm is applied to local regions of the gradient amplitude to obtain threshold values for the image. The average of all calculated thresholds is then taken: half of this average is used as the high threshold, and half of the high threshold is used as the low threshold. Experimental results show that the new method effectively suppresses noise, preserves edge information, and improves edge detection accuracy.
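A minimal OpenCV sketch of the thresholding scheme just described (median filtering, block-wise Otsu on the gradient magnitude, high threshold = half the mean Otsu value, low = half the high). The block size is an assumption, and a Sobel gradient stands in for the paper's Frei-Chen operator.

```python
import cv2
import numpy as np

def otsu_canny(img_gray, block=64):
    # 1. Noise suppression with a median filter.
    den = cv2.medianBlur(img_gray, 5)
    # 2. Gradient magnitude (Sobel here; the paper uses the Frei-Chen operator).
    gx = cv2.Sobel(den, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(den, cv2.CV_32F, 0, 1)
    mag = cv2.convertScaleAbs(cv2.magnitude(gx, gy))
    # 3. Block-wise Otsu thresholds on the gradient magnitude.
    thresholds = []
    for y in range(0, mag.shape[0], block):
        for x in range(0, mag.shape[1], block):
            patch = mag[y:y + block, x:x + block]
            t, _ = cv2.threshold(patch, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
            thresholds.append(t)
    # 4. high = half the mean Otsu threshold, low = half of high.
    high = 0.5 * float(np.mean(thresholds))
    low = 0.5 * high
    return cv2.Canny(den, low, high)
```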
Kim, Dae-Young; Seo, Byoung-Do; Choi, Pan-Am
2014-04-01
[Purpose] This study was conducted to determine the influence of Taekwondo training, as practiced in security martial arts, on anaerobic threshold, cardiorespiratory fitness, and blood lactate recovery. [Subjects and Methods] Fourteen healthy university students were recruited and divided into an exercise group and a control group (n = 7 in each group). The subjects underwent an exercise loading test in which anaerobic threshold, ventilation volume, oxygen uptake, maximal oxygen uptake, heart rate, and the maximal ventilation/heart rate ratio were measured during exercise, immediately after maximum exercise loading, and at 1, 3, 5, 10, and 15 min of recovery. [Results] The exercise group took significantly longer to reach the anaerobic threshold and showed significantly higher values for the time to reach VO2max, maximal ventilation, maximal oxygen uptake, and the maximal ventilation/heart rate ratio. Within the exercise group, significant changes were observed in ventilation volume at the 1- and 5-min recovery time points; oxygen uptake and maximal oxygen uptake differed significantly at the 5- and 10-min time points; heart rate differed significantly at the 1- and 3-min time points; and the maximal ventilation/heart rate ratio differed significantly at the 5-min time point. The exercise group also showed significant decreases in blood lactate levels at the 15- and 30-min recovery time points. [Conclusion] The results revealed that Taekwondo training as a security martial art increases maximal oxygen uptake and the anaerobic threshold and accelerates recovery of cardiorespiratory fitness and blood lactate to their normal states. These results are expected to contribute to the execution of more effective security services in emergencies in which violence can occur.
Nadkarni, Tanvi N; Andreoli, Matthew J; Nair, Veena A; Yin, Peng; Young, Brittany M; Kundu, Bornali; Pankratz, Joshua; Radtke, Andrew; Holdsworth, Ryan; Kuo, John S; Field, Aaron S; Baskaya, Mustafa K; Moritz, Chad H; Meyerand, M Elizabeth; Prabhakaran, Vivek
2015-01-01
Functional magnetic resonance imaging (fMRI) is a non-invasive pre-surgical tool used to assess localization and lateralization of language function in brain tumor and vascular lesion patients in order to guide neurosurgeons as they devise a surgical approach to treat these lesions. We investigated the effect of varying the statistical thresholds as well as the type of language tasks on functional activation patterns and language lateralization. We hypothesized that language lateralization indices (LIs) would be threshold- and task-dependent. Imaging data were collected from brain tumor patients (n = 67, average age 48 years) and vascular lesion patients (n = 25, average age 43 years) who received pre-operative fMRI scanning. Both patient groups performed expressive (antonym and/or letter-word generation) and receptive (tumor patients performed text-reading; vascular lesion patients performed text-listening) language tasks. A control group (n = 25, average age 45 years) performed the letter-word generation task. Brain tumor patients showed left-lateralization during the antonym-word generation and text-reading tasks at high threshold values and bilateral activation during the letter-word generation task, irrespective of the threshold values. Vascular lesion patients showed left-lateralization during the antonym and letter-word generation, and text-listening tasks at high threshold values. Our results suggest that the type of task and the applied statistical threshold influence LI and that the threshold effects on LI may be task-specific. Thus identifying critical functional regions and computing LIs should be conducted on an individual subject basis, using a continuum of threshold values with different tasks to provide the most accurate information for surgical planning to minimize post-operative language deficits.
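The threshold dependence of the lateralization index is easy to see in a minimal sketch, assuming the common count-based definition LI = (L − R)/(L + R) over suprathreshold voxels in homologous left/right regions; the t-value arrays are synthetic, not patient data.

```python
import numpy as np

def lateralization_index(tmap_left, tmap_right, threshold):
    # Count suprathreshold voxels in homologous left/right language regions.
    L = np.count_nonzero(tmap_left >= threshold)
    R = np.count_nonzero(tmap_right >= threshold)
    return (L - R) / (L + R) if (L + R) > 0 else 0.0

# LI is threshold-dependent: sweep a continuum of thresholds, as recommended above.
rng = np.random.default_rng(0)
tL = rng.normal(2.5, 1.5, 10_000)   # illustrative voxelwise t-values, left ROI
tR = rng.normal(1.8, 1.5, 10_000)   # illustrative voxelwise t-values, right ROI
for t in (2.0, 3.0, 4.0, 5.0):
    print(t, round(lateralization_index(tL, tR, t), 2))
```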
Salicylate-induced changes in auditory thresholds of adolescent and adult rats.
Brennan, J F; Brown, C A; Jastreboff, P J
1996-01-01
Shifts in auditory intensity thresholds after salicylate administration were examined in postweanling and adult pigmented rats at frequencies ranging from 1 to 35 kHz. A total of 132 subjects from both age levels were tested under two-way or one-way active avoidance paradigms. Estimated thresholds were inferred from behavioral responses to descending and ascending series of intensities at each test frequency. Reliable threshold estimates were obtained under both avoidance conditioning methods; compared to controls, subjects at both age levels showed threshold shifts at selected higher frequencies after salicylate injection, and the extent of the shifts was related to the salicylate dose level.
NASA Astrophysics Data System (ADS)
Cai, Wenli; Yoshida, Hiroyuki; Harris, Gordon J.
2007-03-01
Measurement of the volume of focal liver tumors, called liver tumor volumetry, is indispensable for assessing the growth of tumors and for monitoring the response of tumors to oncology treatments. Traditional edge models, such as the maximum gradient and zero-crossing methods, often fail to detect the accurate boundary of a fuzzy object such as a liver tumor. As a result, the computerized volumetry based on these edge models tends to differ from manual segmentation results performed by physicians. In this study, we developed a novel computerized volumetry method for fuzzy objects, called dynamic-thresholding level set (DT level set). An optimal threshold value computed from a histogram tends to shift, relative to the theoretical threshold value obtained from a normal distribution model, toward a smaller region in the histogram. We thus designed a mobile shell structure, called a propagating shell, which is a thick region encompassing the level set front. The optimal threshold calculated from the histogram of the shell drives the level set front toward the boundary of a liver tumor. When the volume ratio between the object and the background in the shell approaches one, the optimal threshold value best fits the theoretical threshold value and the shell stops propagating. Application of the DT level set to 26 hepatic CT cases with 63 biopsy-confirmed hepatocellular carcinomas (HCCs) and metastases showed that the computer measured volumes were highly correlated with those of tumors measured manually by physicians. Our preliminary results showed that DT level set was effective and accurate in estimating the volumes of liver tumors detected in hepatic CT images.
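A schematic sketch of the propagating-shell idea: a binary front is dilated and eroded to form a thick shell, Otsu's method stands in for the paper's optimal-threshold computation on the shell histogram, and propagation stops when object and background occupy the shell in roughly equal volume. The shell width and stopping tolerance are assumptions.

```python
import numpy as np
from scipy import ndimage
from skimage.filters import threshold_otsu

def dt_levelset(volume, seed_mask, shell_width=3, max_iter=100):
    mask = seed_mask.copy()
    for _ in range(max_iter):
        # Propagating shell: a thick band around the current front.
        dil = ndimage.binary_dilation(mask, iterations=shell_width)
        ero = ndimage.binary_erosion(mask, iterations=shell_width)
        shell = dil & ~ero
        t = threshold_otsu(volume[shell])    # dynamic threshold from shell histogram
        new_mask = dil & (volume >= t)       # front moves toward the tumor boundary
        inside = np.count_nonzero(shell & new_mask)
        outside = np.count_nonzero(shell) - inside
        if outside > 0 and 0.9 < inside / outside < 1.1:
            break                            # object/background ratio in shell ~ 1
        if np.array_equal(new_mask, mask):
            break
        mask = new_mask
    return mask
```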
Digital audio watermarking using moment-preserving thresholding
NASA Astrophysics Data System (ADS)
Choi, DooSeop; Jung, Hae Kyung; Choi, Hyuk; Kim, Taejeong
2007-09-01
The moment-preserving thresholding (MPT) technique for digital images has been used in digital image processing for decades, especially in image binarization and image compression. Its main strength is that the binary values MPT produces, called representative values, are usually unaffected when the thresholded signal goes through a signal processing operation. The two representative values, together with the threshold value, are obtained by solving the system of preservation equations for the first, second, and third moments. Relying on this robustness of the representative values to the various signal processing attacks considered in the watermarking context, this paper proposes a new watermarking scheme for audio signals. The watermark is embedded in the root-sum-square (RSS) of the two representative values of each signal block using a quantization technique. As a result, the RSS values are modified by scaling the signal according to the watermark bit sequence under the constraint of inaudibility relative to the human psycho-acoustic model. We also address and suggest solutions to the problems of synchronization and power-scaling attacks. Experimental results show that the proposed scheme maintains high audio quality and robustness to various attacks including MP3 compression, re-sampling, jittering, and DA/AD conversion.
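The closed-form solution of the preservation equations (Tsai's construction) is compact enough to sketch. The watermark embedding is only indicated: the quantization step and the psycho-acoustic scaling of the paper are omitted, and the step size below is an assumption.

```python
import numpy as np

def moment_preserving_threshold(x):
    """Tsai's moment-preserving threshold: returns (threshold, z0, z1)."""
    m1, m2, m3 = (np.mean(x ** k) for k in (1, 2, 3))
    cd = m2 - m1 ** 2                              # determinant (m0 = 1)
    c0 = (m1 * m3 - m2 ** 2) / cd
    c1 = (m1 * m2 - m3) / cd
    disc = np.sqrt(c1 ** 2 - 4 * c0)
    z0, z1 = (-c1 - disc) / 2, (-c1 + disc) / 2    # representative values (roots)
    p0 = (z1 - m1) / (z1 - z0)                     # fraction of samples assigned to z0
    return np.quantile(x, p0), z0, z1              # threshold = p0-tile of the block

# Watermark embedding (sketch): quantize the RSS of the representative values.
block = np.random.default_rng(1).normal(size=1024)   # one audio block, illustrative
t, z0, z1 = moment_preserving_threshold(block)
rss = np.hypot(z0, z1)          # root-sum-square that carries one watermark bit
step = 0.05                     # quantization step (inaudibility constraint omitted)
bit = 1
rss_marked = (np.floor(rss / step) + (0.75 if bit else 0.25)) * step
print(t, rss, rss_marked)
```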
Noh, Ji-Woong; Park, Byoung-Sun; Kim, Mee-Young; Lee, Lim-Kyu; Yang, Seung-Min; Lee, Won-Deok; Shin, Yong-Sub; Kang, Ji-Hye; Kim, Ju-Hyun; Lee, Jeong-Uk; Kwak, Taek-Yong; Lee, Tae-Hyun; Kim, Ju-Young; Kim, Junghwan
2015-06-01
[Purpose] This study investigated two-point discrimination (TPD) and the electrical sensory threshold of the blind to define the effect of using Braille on the tactile and electrical senses. [Subjects and Methods] Twenty-eight blind participants were divided equally into a text-reading and a Braille-reading group. We measured tactile sensory and electrical thresholds using the TPD method and a transcutaneous electrical nerve stimulator. [Results] The left palm TPD values were significantly different between the groups. The values of the electrical sensory threshold in the left hand, the electrical pain threshold in the left hand, and the electrical pain threshold in the right hand were significantly lower in the Braille group than in the text group. [Conclusion] These findings make it difficult to explain the difference in tactility between groups, excluding both palms. However, our data show that using Braille can enhance development of the sensory median nerve in the blind, particularly in terms of the electrical sensory and pain thresholds.
The stability of color discrimination threshold determined using pseudoisochromatic test plates
NASA Astrophysics Data System (ADS)
Zutere, B.; Jurasevska Luse, K.; Livzane, A.
2014-09-01
Congenital red-green color vision deficiency is one of the most common genetic disorders. A previously printed set of pseudoisochromatic plates (KAMS test, 2012) was created for individual discrimination threshold determination in cases of mild congenital red-green color vision deficiency using neutral colors (colors confused with gray). Color-blind subjects were diagnosed with the Richmond HRR (4th edition, 2002) test and an Oculus HMC anomaloscope, and the examination then proceeded with the KAMS test. Four male subjects aged 20-24 years participated in the study; all of them were diagnosed with deuteranomaly. Due to the design of the plates, the threshold of every subject in each trial was defined as the plate total color difference value ΔE at which the stimulus was detected 75% of the time, so the just-noticeable difference (jnd) was calculated in CIELAB ΔE units. The authors performed repeated discrimination threshold measurements (5 times) for all four subjects under controlled illumination conditions. Psychophysical data were collected by sampling an observer's performance on a psychophysical task at a number of different stimulus saturation levels. The results show that a total color difference threshold ΔE exists for each individual tested with the KAMS pseudoisochromatic plates, and this threshold value does not change significantly across repeated measurements. Deuteranomal threshold values acquired using the greenish plates of the KAMS test are significantly higher than thresholds acquired using the reddish plates. A strong positive correlation exists between the anomaloscope matching range (MR) and deuteranomal thresholds acquired with the KAMS test (R = 0.94), and between the error score in the Richmond HRR test and thresholds acquired with the KAMS test (R = 0.81).
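A minimal sketch of how such a 75%-detection threshold can be extracted, assuming a logistic psychometric function fitted to detection proportions at each ΔE level; the trial data are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

dE   = np.array([2, 4, 6, 8, 10, 12.0])           # stimulus color difference (CIELAB dE)
phit = np.array([0.1, 0.25, 0.5, 0.8, 0.95, 1.0]) # detection proportion, illustrative

logistic = lambda x, a, b: 1.0 / (1.0 + np.exp(-(x - a) / b))
(a, b), _ = curve_fit(logistic, dE, phit, p0=(6.0, 1.0))

# Threshold = dE detected 75% of the time: invert the fitted logistic.
thr75 = a + b * np.log(0.75 / 0.25)
print(f"75% detection threshold: dE = {thr75:.2f}")
```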
Hard decoding algorithm for optimizing thresholds under general Markovian noise
NASA Astrophysics Data System (ADS)
Chamberland, Christopher; Wallman, Joel; Beale, Stefanie; Laflamme, Raymond
2017-04-01
Quantum error correction is instrumental in protecting quantum systems from noise in quantum computing and communication settings. Pauli channels can be efficiently simulated and threshold values for Pauli error rates under a variety of error-correcting codes have been obtained. However, realistic quantum systems can undergo noise processes that differ significantly from Pauli noise. In this paper, we present an efficient hard decoding algorithm for optimizing thresholds and lowering failure rates of an error-correcting code under general completely positive and trace-preserving (i.e., Markovian) noise. We use our hard decoding algorithm to study the performance of several error-correcting codes under various non-Pauli noise models by computing threshold values and failure rates for these codes. We compare the performance of our hard decoding algorithm to decoders optimized for depolarizing noise and show improvements in thresholds and reductions in failure rates by several orders of magnitude. Our hard decoding algorithm can also be adapted to take advantage of a code's non-Pauli transversal gates to further suppress noise. For example, we show that using the transversal gates of the 5-qubit code allows arbitrary rotations around certain axes to be perfectly corrected. Furthermore, we show that Pauli twirling can increase or decrease the threshold depending upon the code properties. Lastly, we show that even if the physical noise model differs slightly from the hypothesized noise model used to determine an optimized decoder, failure rates can still be reduced by applying our hard decoding algorithm.
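The Pauli-twirling observation can be illustrated compactly: twirling a single-qubit channel over the Pauli group keeps only the diagonal of its Pauli transfer matrix, i.e., it maps general Markovian noise to a Pauli channel. A minimal sketch under that standard definition, using amplitude damping as an example of non-Pauli noise:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
PAULIS = [I2, X, Y, Z]

def ptm(kraus):
    """Pauli transfer matrix R_ij = (1/2) Tr[P_i * sum_k K P_j K^dag]."""
    R = np.zeros((4, 4))
    for j, Pj in enumerate(PAULIS):
        out = sum(K @ Pj @ K.conj().T for K in kraus)
        for i, Pi in enumerate(PAULIS):
            R[i, j] = 0.5 * np.real(np.trace(Pi @ out))
    return R

# Amplitude damping (non-Pauli, Markovian noise) with damping probability g.
g = 0.1
kraus_ad = [np.array([[1, 0], [0, np.sqrt(1 - g)]], dtype=complex),
            np.array([[0, np.sqrt(g)], [0, 0]], dtype=complex)]

R = ptm(kraus_ad)
R_twirled = np.diag(np.diag(R))   # Pauli twirl keeps only the PTM diagonal
print(np.round(R, 3), np.round(R_twirled, 3), sep="\n\n")
```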
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, F; Shandong Cancer Hospital and Insititute, Jinan, Shandong; Bowsher, J
2014-06-01
Purpose: PET imaging with F18-FDG is utilized for treatment planning, treatment assessment, and prognosis. A region of interest (ROI) encompassing the tumor may be determined on the PET image, often by a threshold T on the PET standard uptake values (SUVs). Several studies have shown prognostic value for relevant ROI properties including maximum SUV value (SUVmax), metabolic tumor volume (MTV), and total glycolytic activity (TGA). The choice of threshold T may affect mean SUV value (SUVmean), MTV, and TGA. Recently spatial resolution modeling (SRM) has been introduced on many PET systems. SRM may also affect these ROI properties. The purpose of this work is to investigate the relative influence of SRM and threshold choice T on SUVmean, MTV, TGA, and SUVmax. Methods: For 9 anal cancer patients, 18F-FDG PET scans were performed prior to treatment. PET images were reconstructed by 2 iterations of Ordered Subsets Expectation Maximization (OSEM), with and without SRM. ROI contours were generated by 5 different SUV threshold values T: 2.5, 3.0, 30%, 40%, and 50% of SUVmax. Paired-samples t tests were used to compare SUVmean, MTV, and TGA (a) for SRM on versus off and (b) between each pair of threshold values T. SUVmax was also compared for SRM on versus off. Results: For almost all (57/60) comparisons of 2 different threshold values, SUVmean, MTV, and TGA showed statistically significant variation. For comparison of SRM on versus off, there were no statistically significant changes in SUVmax and TGA, but there were statistically significant changes in MTV for T=2.5 and T=3.0 and in SUVmean for all T. Conclusion: The near-universal statistical significance of threshold choice T suggests that, regarding harmonization across sites, threshold choice may be a greater concern than choice of SRM. However, broader study is warranted, e.g. other iterations of OSEM should be considered.
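A minimal sketch of the ROI quantities compared in this work, assuming a NumPy SUV volume and a binary ROI from either a fixed-SUV or %SUVmax threshold rule; the SUV array and voxel size are illustrative.

```python
import numpy as np

def roi_metrics(suv, voxel_ml, threshold):
    """SUVmax, SUVmean, MTV (ml), and TGA (= SUVmean * MTV) for a threshold rule."""
    roi = suv >= threshold
    suvmax = float(suv.max())
    suvmean = float(suv[roi].mean())
    mtv = float(roi.sum()) * voxel_ml
    return suvmax, suvmean, mtv, suvmean * mtv   # TGA, also called TLG

suv = np.random.default_rng(2).gamma(2.0, 1.5, size=(64, 64, 32))  # illustrative SUVs
voxel_ml = 4.0 * 4.0 * 4.0 / 1000.0   # 4x4x4 mm voxel in ml -- illustrative

# Fixed and relative thresholds compared in the study:
for rule, T in [("T=2.5", 2.5), ("T=3.0", 3.0), ("40% SUVmax", 0.4 * suv.max())]:
    print(rule, [round(v, 2) for v in roi_metrics(suv, voxel_ml, T)])
```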
Self-Organization on Social Media: Endo-Exo Bursts and Baseline Fluctuations
Oka, Mizuki; Hashimoto, Yasuhiro; Ikegami, Takashi
2014-01-01
A salient dynamic property of social media is bursting behavior. In this paper, we study bursting behavior in terms of the temporal relation between a preceding baseline fluctuation and the successive burst response using a frequency time series of 3,000 keywords on Twitter. We found that there is a fluctuation threshold up to which the burst size increases as the fluctuation increases and that above the threshold, there appears a variety of burst sizes. We call this threshold the critical threshold. Investigating this threshold in relation to endogenous bursts and exogenous bursts based on peak ratio and burst size reveals that the bursts below this threshold are endogenously caused and above this threshold, exogenous bursts emerge. Analysis of the 3,000 keywords shows that all the nouns have both endogenous and exogenous origins of bursts and that each keyword has a critical threshold in the baseline fluctuation value to distinguish between the two. Having a threshold for an input value for activating the system implies that Twitter is an excitable medium. These findings are useful for characterizing how excitable a keyword is on Twitter and could be used, for example, to predict the response to particular information on social media. PMID:25329610
Bettinger, Nicolas; Khalique, Omar K; Krepp, Joseph M; Hamid, Nadira B; Bae, David J; Pulerwitz, Todd C; Liao, Ming; Hahn, Rebecca T; Vahl, Torsten P; Nazif, Tamim M; George, Isaac; Leon, Martin B; Einstein, Andrew J; Kodali, Susheel K
The threshold for the optimal computed tomography (CT) number in Hounsfield units (HU) to quantify aortic valvular calcium on contrast-enhanced scans has not been standardized. Our aim was to find the most accurate threshold to predict paravalvular regurgitation (PVR) after transcatheter aortic valve replacement (TAVR). 104 patients who underwent TAVR with the CoreValve prosthesis were studied retrospectively. Luminal attenuation (LA) in HU was measured at the level of the aortic annulus. The calcium volume score for the aortic valvular complex was measured using 6 threshold cutoffs (650 HU, 850 HU, LA × 1.25, LA × 1.5, LA+50, LA+100). Receiver-operating characteristic (ROC) analysis was performed to assess the predictive value for > mild PVR (n = 16). Multivariable analysis was performed to determine the accuracy in predicting > mild PVR after adjustment for depth and perimeter oversizing. ROC analysis showed lower area under the curve (AUC) values for the fixed threshold cutoffs (650 or 850 HU) compared to thresholds relative to LA. The LA+100 threshold had the highest AUC (0.81), higher than all other studied protocols, except the LA × 1.25 and LA+50 protocols, where the difference approached statistical significance (p = 0.05 and 0.068, respectively). Multivariable analysis showed calcium volume determined by the LA × 1.25, LA × 1.5, LA+50, and LA+100 HU protocols to independently predict PVR. Calcium volume scoring thresholds that are relative to LA are more predictive of PVR post-TAVR than those using fixed cutoffs. A threshold of LA+100 HU had the highest predictive value. Copyright © 2017 Society of Cardiovascular Computed Tomography. Published by Elsevier Inc. All rights reserved.
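A minimal sketch of a contrast-adjusted calcium volume score, assuming a NumPy array of HU values, a measured luminal attenuation (LA), and the LA+100 HU rule that performed best above; the data and voxel size are illustrative.

```python
import numpy as np

def calcium_volume_mm3(hu, voxel_mm3, luminal_attenuation, offset=100.0):
    """Calcium volume above a threshold set relative to luminal attenuation."""
    threshold = luminal_attenuation + offset      # LA+100 HU rule
    return float(np.count_nonzero(hu >= threshold)) * voxel_mm3

hu = np.random.default_rng(3).normal(300, 120, size=(128, 128, 40))  # illustrative HU
la = 350.0                        # luminal attenuation measured at the annulus
print(calcium_volume_mm3(hu, voxel_mm3=0.45 * 0.45 * 3.0, luminal_attenuation=la))
```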
Tamrin, Shamsul Bahri Mohd; Jamalohdin, Mohd Nazri; Ng, Yee Guan; Maeda, Setsuo; Ali, Nurul Asyiqin Mohd
2012-01-01
The objectives of this study are to determine the prevalence of hand-arm vibration syndrome (HAVS) and the characteristics of the vibrotactile perception threshold (VPT) among users of hand-held vibrating tools working in a tropical environment. A cross-sectional study was done among 47 shipyard workers using instruments and a questionnaire to determine HAVS-related symptoms. The vibration acceleration magnitude was determined using a Human Vibration Meter (Maestro). A P8 Pallesthesiometer (EMSON-MAT, Poland) was used to determine the VPT of the index and little fingers at frequencies of 31.5 Hz and 125 Hz. The mean reference threshold shift was determined from the reference threshold shift derived from the VPT value. The results show a moderate prevalence of HAVS (49%) among the shipyard workers. They were exposed to a high level of hand-transmitted vibration (mean = 4.19 ± 1.94 m/s²) from the use of vibrating hand-held tools. The VPT values were found to be higher for both fingers and both frequencies (index, 31.5 Hz = 110.91 ± 7.36 dB, 125 Hz = 117.0 ± 10.25 dB; little, 31.5 Hz = 110.70 ± 6.75 dB, 125 Hz = 117.71 ± 10.25 dB) compared to the normal healthy population, with a mean threshold shift of between 9.20 and 10.61 decibels. The frequency of 31.5 Hz had a higher percentage of positive mean reference threshold shift (index finger = 93.6%, little finger = 100%) compared to 125 Hz (index finger = 85.1%, little finger = 78.7%). In conclusion, the prevalence of HAVS was lower than among those working in a cold environment; however, all workers had a higher mean VPT value compared to the normal population, and all those reported as having HAVS showed a positive mean reference threshold shift of the VPT value.
Bettembourg, Charles; Diot, Christian; Dameron, Olivier
2015-01-01
Background The analysis of gene annotations referencing back to Gene Ontology plays an important role in the interpretation of high-throughput experiments results. This analysis typically involves semantic similarity and particularity measures that quantify the importance of the Gene Ontology annotations. However, there is currently no sound method supporting the interpretation of the similarity and particularity values in order to determine whether two genes are similar or whether one gene has some significant particular function. Interpretation is frequently based either on an implicit threshold, or an arbitrary one (typically 0.5). Here we investigate a method for determining thresholds supporting the interpretation of the results of a semantic comparison. Results We propose a method for determining the optimal similarity threshold by minimizing the proportions of false-positive and false-negative similarity matches. We compared the distributions of the similarity values of pairs of similar genes and pairs of non-similar genes. These comparisons were performed separately for all three branches of the Gene Ontology. In all situations, we found overlap between the similar and the non-similar distributions, indicating that some similar genes had a similarity value lower than the similarity value of some non-similar genes. We then extend this method to the semantic particularity measure and to a similarity measure applied to the ChEBI ontology. Thresholds were evaluated over the whole HomoloGene database. For each group of homologous genes, we computed all the similarity and particularity values between pairs of genes. Finally, we focused on the PPAR multigene family to show that the similarity and particularity patterns obtained with our thresholds were better at discriminating orthologs and paralogs than those obtained using default thresholds. Conclusion We developed a method for determining optimal semantic similarity and particularity thresholds. We applied this method on the GO and ChEBI ontologies. Qualitative analysis using the thresholds on the PPAR multigene family yielded biologically-relevant patterns. PMID:26230274
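A minimal sketch of the threshold-selection principle, assuming two empirical samples of similarity values (similar vs. non-similar pairs) and a grid scan minimizing the summed false-positive and false-negative proportions; the beta-distributed samples are illustrative stand-ins for GO similarity scores.

```python
import numpy as np

def optimal_threshold(sim_similar, sim_nonsimilar, grid=1001):
    """Threshold minimizing FN proportion (similar pairs below t)
    plus FP proportion (non-similar pairs at or above t)."""
    ts = np.linspace(0.0, 1.0, grid)
    fn = np.array([(sim_similar < t).mean() for t in ts])
    fp = np.array([(sim_nonsimilar >= t).mean() for t in ts])
    return ts[np.argmin(fn + fp)]

rng = np.random.default_rng(4)
similar = rng.beta(8, 2, 5000)       # similarity values of similar gene pairs
nonsimilar = rng.beta(3, 6, 5000)    # similarity values of non-similar gene pairs
print(f"optimal threshold = {optimal_threshold(similar, nonsimilar):.3f}")  # vs. default 0.5
```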
Preclinical studies of photodynamic therapy of intracranial tissues
NASA Astrophysics Data System (ADS)
Lilge, Lothar D.; Sepers, Marja; Park, Jane; O'Carroll, Cindy; Pournazari, Poupak; Prosper, Joe; Wilson, Brian C.
1997-05-01
The applicability and limitations of the photodynamic threshold model were investigated for an intracranial tumor (VX2) and normal brain tissues in a rabbit model. Photodynamic threshold values for four different photosensitizers, i.e., Photofrin, 5-aminolaevulinic acid (5-ALA)-induced protoporphyrin IX (PpIX), tin ethyl etiopurpurin (SnET2), and chloroaluminum phthalocyanine (AlClPc), were determined based on measured light fluence distributions, macroscopic photosensitizer concentration in various brain structures, and the histologically determined extent of tissue necrosis following PDT. For Photofrin, AlClPc, and SnET2, normal brain displayed a significantly lower threshold value than VX2 tumor. For 5-ALA-induced PpIX and SnET2, no or very little white matter damage, equating to very high or infinite threshold values, was observed. Additionally, the latter two photosensitizers showed significantly lower uptake in white matter compared to other brain structures and VX2 tumor. Normal brain structures lacking a blood-brain barrier, such as the choroid plexus and the meninges, showed high photosensitizer uptake for all photosensitizers and, hence, are at risk when exposed to light. Results to date suggest that the photodynamic threshold values are valid for white matter, cortex, and VX2 tumor. For clinical PDT of intracranial neoplasms, 5-ALA-induced PpIX and SnET2 appear to be the most promising for selective tumor necrosis. However, the photosensitizer concentration in each normal brain structure and the fluence distribution throughout the treatment volume and adjacent tissues at risk must be monitored to maximize the selectivity of PDT for intracranial tumors.
Does systemic steroid deficiency affect inner ear functions?
Dogan, Remzi; Merıc, Ayşenur; Gedık, Ozge; Tugrul, Selahattin; Eren, Sabri Baki; Ozturan, Orhan
2015-01-01
Today corticosteroids are employed for the treatment of various inner ear disorders. In this study we investigated probable changes in hearing functions resulting from a deficiency of systemic steroid secretions. Twenty-four healthy female rats were used in our study, allocated into three groups (medical adrenalectomy, medical adrenalectomy + dexamethasone, no treatment). Audiological evaluations were conducted at the beginning of the study and on days 7, 14 and 21. Blood samples were taken at the beginning and at the end of the study and blood corticosterone levels were determined. While there were no significant differences between the basal, 7th, 14th and 21st day DPOAE values of group 1, their ABR threshold values showed significant increases. In group 2, there were no significant differences between the basal, 7th, 14th and 21st day DPOAE values. ABR thresholds of group 2 showed significant increases on days 7 and 14 as compared to their basal values, but there were no significant differences between the 21st day and basal ABR threshold values. There were no significant differences between the basal cortisol levels of the three groups. The mean cortisol level of group 1 on day 21 was found to be significantly lower than those of groups 2 and 3. The results of the study demonstrated that there were no significant changes in DPOAE values with the cessation of cortisol secretion, while there was a progressive increase in ABR thresholds, which could be overcome with cortisone replacement. Copyright © 2015 Elsevier Inc. All rights reserved.
Kang, Hyunchul
2015-01-01
We investigate the in-network processing of an iceberg join query in wireless sensor networks (WSNs). An iceberg join is a special type of join where only those joined tuples whose cardinality exceeds a certain threshold (called iceberg threshold) are qualified for the result. Processing such a join involves the value matching for the join predicate as well as the checking of the cardinality constraint for the iceberg threshold. In the previous scheme, the value matching is carried out as the main task for filtering non-joinable tuples while the iceberg threshold is treated as an additional constraint. We take an alternative approach, meeting the cardinality constraint first and matching values next. In this approach, with a logical fragmentation of the join operand relations on the aggregate counts of the joining attribute values, the optimal sequence of 2-way fragment semijoins is generated, where each fragment semijoin employs a Bloom filter as a synopsis of the joining attribute values. This sequence filters non-joinable tuples in an energy-efficient way in WSNs. Through implementation and a set of detailed experiments, we show that our alternative approach considerably outperforms the previous one. PMID:25774710
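A minimal sketch of a Bloom-filter-based fragment semijoin of the kind described, assuming a simple double-hashing Bloom filter and per-fragment cardinality counts; a real WSN implementation would size the filter to the radio payload and order the semijoins by the optimizer.

```python
import hashlib

class BloomFilter:
    """Tiny Bloom filter synopsis (double hashing into an integer bitmask)."""
    def __init__(self, m=1024, k=4):
        self.m, self.k, self.bits = m, k, 0

    def _positions(self, value):
        h = hashlib.sha256(str(value).encode()).digest()
        h1 = int.from_bytes(h[:8], "big")
        h2 = int.from_bytes(h[8:16], "big") | 1
        return [(h1 + i * h2) % self.m for i in range(self.k)]

    def add(self, value):
        for p in self._positions(value):
            self.bits |= 1 << p

    def __contains__(self, value):
        return all(self.bits >> p & 1 for p in self._positions(value))

# Fragment semijoin sketch: node A ships a synopsis of its join values; node B
# keeps only tuples whose fragment count meets the iceberg threshold (checked
# first) and whose value may join (matched next).
r_values = list(range(0, 100, 3))            # joining attribute values at node A
synopsis = BloomFilter()
for v in r_values:
    synopsis.add(v)

s_tuples = [(v % 12, f"s{v}") for v in range(100)]   # node B's fragment
iceberg_threshold = 5
counts = {}
for v, _ in s_tuples:
    counts[v] = counts.get(v, 0) + 1

survivors = [(v, t) for v, t in s_tuples
             if counts[v] >= iceberg_threshold and v in synopsis]
print(len(survivors), sorted({v for v, _ in survivors}))
```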
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Huajun; Dong, Yongqi; Cherukara, Matthew J.
Memristive devices are an emerging technology that enables both rich interdisciplinary science and novel device functionalities, such as nonvolatile memories and nanoionics-based synaptic electronics. Recent work has shown that the reproducibility and variability of the devices depend sensitively on the defect structures created during electroforming as well as their continued evolution under dynamic electric fields. However, a fundamental principle guiding the material design of defect structures is still lacking due to the difficulty in understanding dynamic defect behavior under different resistance states. Here, we unravel the existence of threshold behavior by studying model, single-crystal devices: resistive switching requires that the pristine oxygen vacancy concentration reside near a critical value. Theoretical calculations show that the threshold oxygen vacancy concentration lies at the boundary for both electronic and atomic phase transitions. Through operando, multimodal X-ray imaging, we show that field tuning of the local oxygen vacancy concentration below or above the threshold value is responsible for switching between different electrical states. These results provide a general strategy for designing functional defect structures around threshold concentrations to create dynamic, field-controlled phases for memristive devices.
Ultrasonically triggered ignition at liquid surfaces.
Simon, Lars Hendrik; Meyer, Lennart; Wilkens, Volker; Beyer, Michael
2015-01-01
Ultrasound is considered to be an ignition source according to international standards, which set a threshold value of 1 mW/mm² [1] that is based on theoretical estimations but lacks experimental verification. It is therefore assumed that this threshold includes a large safety margin. At the same time, ultrasound is used in a variety of industrial applications where it can come into contact with explosive atmospheres. However, until now, no explosion accidents have been reported in connection with ultrasound, so it has been unclear whether the current threshold value is reasonable. Within this paper, it is shown that focused ultrasound coupled into a liquid can in fact ignite explosive atmospheres if a specific target positioned at the liquid's surface converts the acoustic energy into a hot spot. Based on ignition tests, conditions could be derived that are necessary for an ultrasonically triggered explosion. These conditions show that the current threshold value can be significantly raised. Copyright © 2014 Elsevier B.V. All rights reserved.
Noise thresholds for optical quantum computers.
Dawson, Christopher M; Haselgrove, Henry L; Nielsen, Michael A
2006-01-20
In this Letter we numerically investigate the fault-tolerant threshold for optical cluster-state quantum computing. We allow both photon loss noise and depolarizing noise (as a general proxy for all local noise), and obtain a threshold region of allowed pairs of values for the two types of noise. Roughly speaking, our results show that scalable optical quantum computing is possible for photon loss probabilities < 3 × 10⁻³, and for depolarization probabilities < 10⁻⁴.
[Clinical experiences with four newly developed, surface modified stimulation electrodes].
Winter, U J; Fritsch, J; Liebing, J; Höpp, H W; Hilger, H H
1993-05-01
Newly developed pacing electrodes with so-called porous surfaces promise significantly improved postoperative pacing and sensing thresholds. We therefore investigated four newly developed leads (ELA-PMCF-860, n = 10; Biotronik-60/4-DNP, n = 10; CPI-4010, n = 10; Intermedics-421-03-Biopore, n = 6) connected to two different pacing devices (Intermedics NOVA II, Medtronic PASYS) in 36 patients (18 men, 18 women; age 69.7 ± 9.8 years) suffering from symptomatic bradycardia. The individual electrode maturation process was investigated by repeated measurements of pacing threshold and electrode impedance in the acute, subacute, and chronic phases, as well as energy consumption and sensing behavior in the chronic phase. With the exception of the 4010, the investigated leads showed widely varying pacing-threshold values, with individual peaks occurring from the second up to the 13th week. All leads had nearly similar chronic pacing thresholds (PMCF 0.13 ± 0.07; DNP 0.25 ± 0.18; Biopore 0.15 ± 0.05; 4010 0.14 ± 0.05 ms). Impedance measurements revealed higher, but not significantly different, values for the DNP (PMCF 582 ± 112, DNP 755 ± 88, Biopore 650 ± 15, 4010 718 ± 104 Ohm). Despite differing values for pacing threshold and impedance, the energy consumption in the chronic phase during threshold-adapted but secure stimulation (3 × impulse width at pacing threshold) was comparable.
Queiroz, Polyane Mazucatto; Rovaris, Karla; Santaella, Gustavo Machado; Haiter-Neto, Francisco; Freitas, Deborah Queiroz
2017-01-01
To calculate root canal volume and surface area in micro-CT images, image segmentation by selecting threshold values is required; these can be determined visually or automatically. Visual determination is influenced by the operator's visual acuity, while the automatic method is done entirely by computer algorithms. The aim was to compare visual and automatic segmentation and to determine the influence of the operator's visual acuity on the reproducibility of root canal volume and area measurements. Images from 31 extracted human anterior teeth were scanned with a μCT scanner. Three experienced examiners performed visual image segmentation, and threshold values were recorded. Automatic segmentation was done using the "Automatic Threshold Tool" available in the dedicated software provided by the scanner's manufacturer. Volume and area measurements were performed using the threshold values determined both visually and automatically. The paired Student's t-test showed no significant difference between the visual and automatic segmentation methods regarding root canal volume (p = 0.93) and root canal surface area (p = 0.79). Although both visual and automatic segmentation methods can be used to determine the threshold and calculate root canal volume and surface area, the automatic method may be the most suitable for ensuring the reproducibility of threshold determination.
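A minimal sketch of the automatic route, assuming Otsu's method as a stand-in for the manufacturer's "Automatic Threshold Tool" and a marching-cubes estimate of the canal surface; the voxel size and the darker-than-dentine assumption are illustrative.

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import marching_cubes, mesh_surface_area

def canal_volume_and_area(stack, voxel_mm=0.02):
    t = threshold_otsu(stack)        # automatic threshold, no operator visual input
    canal = stack < t                # canal voxels are darker than dentine (assumption)
    volume = canal.sum() * voxel_mm ** 3
    verts, faces, _, _ = marching_cubes(canal.astype(np.float32), 0.5,
                                        spacing=(voxel_mm,) * 3)
    return volume, mesh_surface_area(verts, faces)
```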
Extreme events in total ozone over Arosa - Part 1: Application of extreme value theory
NASA Astrophysics Data System (ADS)
Rieder, H. E.; Staehelin, J.; Maeder, J. A.; Peter, T.; Ribatet, M.; Davison, A. C.; Stübi, R.; Weihs, P.; Holawe, F.
2010-10-01
In this study ideas from extreme value theory are for the first time applied in the field of stratospheric ozone research, because statistical analysis showed that previously used concepts assuming a Gaussian distribution (e.g. fixed deviations from mean values) of total ozone data do not adequately address the structure of the extremes. We show that statistical extreme value methods are appropriate to identify ozone extremes and to describe the tails of the Arosa (Switzerland) total ozone time series. In order to accommodate the seasonal cycle in total ozone, a daily moving threshold was determined and used, with tools from extreme value theory, to analyse the frequency of days with extreme low (termed ELOs) and high (termed EHOs) total ozone at Arosa. The analysis shows that the Generalized Pareto Distribution (GPD) provides an appropriate model for the frequency distribution of total ozone above or below a mathematically well-defined threshold, thus providing a statistical description of ELOs and EHOs. The results show an increase in ELOs and a decrease in EHOs during the last decades. The fitted model represents the tails of the total ozone data set with high accuracy over the entire range (including absolute monthly minima and maxima), and enables a precise computation of the frequency distribution of ozone mini-holes (using constant thresholds). Analyzing the tails instead of a small fraction of days below constant thresholds provides deeper insight into the time series properties. Fingerprints of dynamical (e.g. ENSO, NAO) and chemical features (e.g. strong polar vortex ozone loss), and major volcanic eruptions, can be identified in the observed frequency of extreme events throughout the time series. Overall the new approach to analysis of extremes provides more information on time series properties and variability than previous approaches that use only monthly averages and/or mini-holes and mini-highs.
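A minimal sketch of the peaks-over-threshold step, assuming a synthetic daily total-ozone series, a moving day-of-year quantile threshold, and a GPD fit to the exceedances via scipy.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(5)
days = np.arange(3650)
ozone = 330 + 40 * np.sin(2 * np.pi * days / 365.25) + rng.gumbel(0, 12, days.size)

# Moving threshold: day-of-year 95th percentile (EHOs; use the 5th for ELOs).
doy = days % 365
threshold = np.array([np.quantile(ozone[doy == d % 365], 0.95) for d in days])

excess = ozone - threshold
exceed = excess[excess > 0]                    # EHO exceedances
shape, loc, scale = genpareto.fit(exceed, floc=0.0)
print(f"GPD shape={shape:.3f}, scale={scale:.1f}, n={exceed.size}")
```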
Crack Growth Behavior in the Threshold Region for High Cyclic Loading
NASA Technical Reports Server (NTRS)
Forman, R.; Figert, J.; Beek, J.; Ventura, J.; Martinez, J.; Samonski, F.
2011-01-01
The present studies show that fanning in the threshold regime is likely caused by factors other than a plastic wake developed during load shedding. Fanning at low R-values results from localized roughness, mainly the formation of a faceted crack surface morphology, plus crack bifurcations, which alter crack closure at low R-values. Crack growth behavior in the threshold regime involves both crack closure theory and the dislocation theory of metals. Research will continue on numerous other metal alloys and on more extensive analysis, such as the variation in dislocation properties (e.g., stacking fault energy) and its effects in different materials.
Optical spectral singularities as threshold resonances
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mostafazadeh, Ali
2011-04-15
Spectral singularities are among generic mathematical features of complex scattering potentials. Physically they correspond to scattering states that behave like zero-width resonances. For a simple optical system, we show that a spectral singularity appears whenever the gain coefficient coincides with its threshold value and other parameters of the system are selected properly. We explore a concrete realization of spectral singularities for a typical semiconductor gain medium and propose a method of constructing a tunable laser that operates at threshold gain.
A derivation of the stable cavitation threshold accounting for bubble-bubble interactions.
Guédra, Matthieu; Cornu, Corentin; Inserra, Claude
2017-09-01
The subharmonic emission of sound coming from the nonlinear response of a bubble population is the most used indicator for stable cavitation. When driven at twice their resonance frequency, bubbles can exhibit subharmonic spherical oscillations if the acoustic pressure amplitude exceeds a threshold value. Although various theoretical derivations exist for the subharmonic emission by free or coated bubbles, they all rest on the single bubble model. In this paper, we propose an analytical expression of the subharmonic threshold for interacting bubbles in a homogeneous, monodisperse cloud. This theory predicts a shift of the subharmonic resonance frequency and a decrease of the corresponding pressure threshold due to the interactions. For a given sonication frequency, these results show that an optimal value of the interaction strength (i.e. the number density of bubbles) can be found for which the subharmonic threshold is minimum, which is consistent with recently published experiments conducted on ultrasound contrast agents. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Jiang, Y.; Liu, J.-R.; Luo, Y.; Yang, Y.; Tian, F.; Lei, K.-C.
2015-11-01
Groundwater in Beijing has been excessively exploited for a long time, causing the groundwater level to decline continuously and land subsidence areas to expand, which has constrained sustainable economic and social development. Many years of study show a good spatio-temporal correspondence between groundwater level and land subsidence. To provide a scientific basis for subsequent land subsidence prevention and treatment, quantitative research on the relationship between groundwater level and settlement is necessary. Multiple linear regression models were set up using long time series of monitoring data on layered water table and settlement from the Tianzhu monitoring station. The results show that layered settlement is closely related to the water table, water level variation and amplitude, especially the water table. Finally, according to the threshold values in the land subsidence prevention and control plan of China (45, 30, 25 mm), the minimum allowable layered water level in this region when settlement reaches the threshold value is calculated to be between -18.448 and -10.082 m. The results provide a reasonable and operable groundwater level control target for rational adjustment of the groundwater exploitation horizon in the future.
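A minimal sketch of the inversion implied above: fit settlement against water-table predictors by ordinary least squares, then solve for the water level at which predicted settlement reaches a control threshold. The coefficients and data are invented, not the Tianzhu station values.

```python
import numpy as np

rng = np.random.default_rng(6)
water_level = rng.uniform(-25, -5, 120)      # layered water table (m), illustrative
amplitude = rng.uniform(0.5, 4.0, 120)       # annual variation amplitude (m)
settlement = 30 - 2.2 * water_level + 1.5 * amplitude + rng.normal(0, 2, 120)  # mm/yr

# Multiple linear regression: settlement ~ b0 + b1*level + b2*amplitude.
X = np.column_stack([np.ones_like(water_level), water_level, amplitude])
b0, b1, b2 = np.linalg.lstsq(X, settlement, rcond=None)[0]

# Minimum allowable water level for a subsidence control threshold (e.g., 45 mm/yr),
# holding amplitude at its mean:
threshold_mm = 45.0
allowable_level = (threshold_mm - b0 - b2 * amplitude.mean()) / b1
print(f"allowable water level ~ {allowable_level:.2f} m")
```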
NASA Astrophysics Data System (ADS)
Fereydooni, H.; Mojeddifar, S.
2017-09-01
This study introduces a different procedure for implementing the matched filtering (MF) algorithm on ASTER images to obtain a distribution map of alteration minerals in the northwestern part of the Kerman Cenozoic Magmatic Arc (KCMA). This region contains many areas with porphyry copper mineralization, such as Meiduk, Abdar, Kader, Godekolvari, Iju, Serenu, Chahfiroozeh and Parkam; argillization, sericitization and propylitization are the most common types of hydrothermal alteration in the area. Matched filtering produces, for each alteration mineral, a matched filtering score image (MF image). To identify the pixels that contain only one material (endmember), an appropriate threshold value must be applied to the MF image; the chosen threshold classifies the MF image into background and target pixels. This article argues that this thresholding process (the choice of a single threshold) misclassifies the MF image. To address the issue, the paper introduces the directed matched filtering (DMF) algorithm, in which a spectral signature-based filter (SSF) is used instead of the thresholding process. The SSF is a user-defined rule package containing numerical descriptions of the spectral reflectance of alteration minerals: each spectral band is bounded by an upper and a lower limit for each mineral. SSFs were developed for chlorite, kaolinite, alunite and muscovite to map the alteration zones. Validation showed, first, that selecting a contiguous range of MF values could not identify desirable results and, second, that an unexpectedly large number of pure pixels had MF scores below the threshold value. A comparison between the DMF results and field studies showed an accuracy of 88.51%.
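A minimal sketch of a spectral signature-based filter of the kind described, assuming a rule package of per-band reflectance bounds for each mineral; the band indices and bounds are invented for illustration.

```python
import numpy as np

# Rule package: per mineral, (band index, lower bound, upper bound) on reflectance.
SSF_RULES = {
    "kaolinite": [(4, 0.30, 0.60), (5, 0.25, 0.55), (6, 0.10, 0.35)],   # illustrative
    "alunite":   [(4, 0.35, 0.65), (6, 0.05, 0.30), (8, 0.20, 0.50)],
}

def ssf_mask(cube, rules):
    """cube: (bands, rows, cols) reflectance. True where every band rule holds."""
    mask = np.ones(cube.shape[1:], dtype=bool)
    for band, lo, hi in rules:
        mask &= (cube[band] >= lo) & (cube[band] <= hi)
    return mask

cube = np.random.default_rng(7).uniform(0, 1, size=(9, 200, 200))  # fake ASTER bands
kaolinite_pixels = ssf_mask(cube, SSF_RULES["kaolinite"])
print(kaolinite_pixels.sum(), "pixels pass all kaolinite band rules")
```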
Meier, Kimberly; Sum, Brian; Giaschi, Deborah
2016-10-01
Global motion sensitivity in typically developing children depends on the spatial (Δx) and temporal (Δt) displacement parameters of the motion stimulus. Specifically, sensitivity for small Δx values matures at a later age, suggesting it may be the most vulnerable to damage by amblyopia. To explore this possibility, we compared motion coherence thresholds of children with amblyopia (7-14 years old) to age-matched controls. Three Δx values were used with two Δt values, yielding six conditions covering a range of speeds (0.3-30 deg/s). We predicted children with amblyopia would show normal coherence thresholds for the same parameters on which 5-year-olds previously demonstrated mature performance, and elevated coherence thresholds for parameters on which 5-year-olds demonstrated immaturities. Consistent with this, we found that children with amblyopia showed deficits with amblyopic eye viewing compared to controls for small and medium Δx values, regardless of Δt value. The fellow eye showed similar results at the smaller Δt. These results confirm that global motion perception in children with amblyopia is particularly deficient at the finer spatial scales that typically mature later in development. An additional implication is that carefully designed stimuli that are adequately sensitive must be used to assess global motion function in developmental disorders. Stimulus parameters for which performance matures early in life may not reveal global motion perception deficits. Copyright © 2016 Elsevier Ltd. All rights reserved.
Effect of randomness in logistic maps
NASA Astrophysics Data System (ADS)
Khaleque, Abdul; Sen, Parongama
2015-01-01
We study a random logistic map x_{t+1} = a_t x_t [1 - x_t], where the a_t are bounded random variables (q_1 ≤ a_t ≤ q_2) drawn independently from a distribution. x_t does not show any regular behavior in time. We find that x_t shows fully ergodic behavior when the maximum allowed value of a_t is 4. However
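A minimal simulation sketch of the map, drawing a_t uniformly from [q1, q2]; the parameters are illustrative.

```python
import numpy as np

def random_logistic(q1=3.5, q2=4.0, x0=0.3, n=10_000, seed=8):
    rng = np.random.default_rng(seed)
    a = rng.uniform(q1, q2, n)        # a_t drawn independently at each step
    x = np.empty(n + 1)
    x[0] = x0
    for t in range(n):
        x[t + 1] = a[t] * x[t] * (1.0 - x[t])
    return x

x = random_logistic()
print(x[-5:], x.min(), x.max())       # irregular orbit; ergodic when q2 = 4
```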
Threshold-based insulin-pump interruption for reduction of hypoglycemia.
Bergenstal, Richard M; Klonoff, David C; Garg, Satish K; Bode, Bruce W; Meredith, Melissa; Slover, Robert H; Ahmann, Andrew J; Welsh, John B; Lee, Scott W; Kaufman, Francine R
2013-07-18
The threshold-suspend feature of sensor-augmented insulin pumps is designed to minimize the risk of hypoglycemia by interrupting insulin delivery at a preset sensor glucose value. We evaluated sensor-augmented insulin-pump therapy with and without the threshold-suspend feature in patients with nocturnal hypoglycemia. We randomly assigned patients with type 1 diabetes and documented nocturnal hypoglycemia to receive sensor-augmented insulin-pump therapy with or without the threshold-suspend feature for 3 months. The primary safety outcome was the change in the glycated hemoglobin level. The primary efficacy outcome was the area under the curve (AUC) for nocturnal hypoglycemic events. Two-hour threshold-suspend events were analyzed with respect to subsequent sensor glucose values. A total of 247 patients were randomly assigned to receive sensor-augmented insulin-pump therapy with the threshold-suspend feature (threshold-suspend group, 121 patients) or standard sensor-augmented insulin-pump therapy (control group, 126 patients). The changes in glycated hemoglobin values were similar in the two groups. The mean AUC for nocturnal hypoglycemic events was 37.5% lower in the threshold-suspend group than in the control group (980 ± 1200 mg per deciliter [54.4 ± 66.6 mmol per liter] × minutes vs. 1568 ± 1995 mg per deciliter [87.0 ± 110.7 mmol per liter] × minutes, P<0.001). Nocturnal hypoglycemic events occurred 31.8% less frequently in the threshold-suspend group than in the control group (1.5 ± 1.0 vs. 2.2 ± 1.3 per patient-week, P<0.001). The percentages of nocturnal sensor glucose values of less than 50 mg per deciliter (2.8 mmol per liter), 50 to less than 60 mg per deciliter (3.3 mmol per liter), and 60 to less than 70 mg per deciliter (3.9 mmol per liter) were significantly reduced in the threshold-suspend group (P<0.001 for each range). After 1438 instances at night in which the pump was stopped for 2 hours, the mean sensor glucose value was 92.6 ± 40.7 mg per deciliter (5.1 ± 2.3 mmol per liter). Four patients (all in the control group) had a severe hypoglycemic event; no patients had diabetic ketoacidosis. This study showed that over a 3-month period the use of sensor-augmented insulin-pump therapy with the threshold-suspend feature reduced nocturnal hypoglycemia, without increasing glycated hemoglobin values. (Funded by Medtronic MiniMed; ASPIRE ClinicalTrials.gov number, NCT01497938.).
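A minimal sketch of the nocturnal hypoglycemia AUC outcome, assuming evenly spaced sensor-glucose samples and the convention that the AUC accumulates the depth below a hypoglycemia bound times duration; the 70 mg/dl bound and the trace are illustrative.

```python
import numpy as np

def hypoglycemia_auc(glucose_mgdl, sample_min=5.0, bound=70.0):
    """Area below the hypoglycemia bound, in mg/dl x minutes."""
    depth = np.clip(bound - np.asarray(glucose_mgdl, dtype=float), 0.0, None)
    return float(depth.sum() * sample_min)     # rectangle rule per CGM sample

night = [95, 88, 74, 66, 58, 52, 55, 63, 72, 81]   # illustrative 5-min sensor trace
print(hypoglycemia_auc(night), "mg/dl x min below 70")
```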
Effect of threshold disorder on the quorum percolation model
NASA Astrophysics Data System (ADS)
Monceau, Pascal; Renault, Renaud; Métens, Stéphane; Bottani, Samuel
2016-07-01
We study the modifications induced in the behavior of the quorum percolation model on neural networks with Gaussian in-degree by taking into account an uncorrelated Gaussian thresholds variability. We derive a mean-field approach and show its relevance by carrying out explicit Monte Carlo simulations. It turns out that such a disorder shifts the position of the percolation transition, impacts the size of the giant cluster, and can even destroy the transition. Moreover, we highlight the occurrence of disorder independent fixed points above the quorum critical value. The mean-field approach enables us to interpret these effects in terms of activation probability. A finite-size analysis enables us to show that the order parameter is weakly self-averaging with an exponent independent on the thresholds disorder. Last, we show that the effects of the thresholds and connectivity disorders cannot be easily discriminated from the measured averaged physical quantities.
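A minimal Monte Carlo sketch of quorum percolation with threshold disorder: each neuron activates once the number of active in-neighbors reaches its own Gaussian-distributed threshold; in-degrees are Gaussian as in the model, and all parameters are illustrative.

```python
import numpy as np

def quorum_percolation(n=2000, k_mean=30, k_sd=5, theta_mean=10, theta_sd=2,
                       f0=0.05, seed=9):
    rng = np.random.default_rng(seed)
    k = np.clip(rng.normal(k_mean, k_sd, n).round().astype(int), 1, n - 1)
    inputs = [rng.choice(n, size=ki, replace=False) for ki in k]   # random in-neighbors
    theta = np.clip(rng.normal(theta_mean, theta_sd, n).round(), 1, None)
    active = rng.random(n) < f0              # initially activated fraction f0
    while True:
        fired = np.array([active[inp].sum() >= th for inp, th in zip(inputs, theta)])
        new = active | fired
        if (new == active).all():
            return new.mean()                # final active fraction (order parameter)
        active = new

print(quorum_percolation())
```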
Economic values under inappropriate normal distribution assumptions.
Sadeghi-Sefidmazgi, A; Nejati-Javaremi, A; Moradi-Shahrbabak, M; Miraei-Ashtiani, S R; Amer, P R
2012-08-01
The objectives of this study were to quantify the errors in economic values (EVs) for traits affected by cost or price thresholds when skewed or kurtotic distributions of varying degree are assumed to be normal, and when data with a normal distribution are subject to censoring. EVs were estimated for a continuous trait with dichotomous economic implications because of a price premium or penalty arising from a threshold ranging between -4 and 4 standard deviations from the mean. To evaluate the impacts of skewness and of positive and negative excess kurtosis, the standard skew normal, Pearson, and raised cosine distributions were used, respectively. For the various evaluable levels of skewness and kurtosis, the results showed that EVs can be underestimated or overestimated by more than 100% when price-determining thresholds fall within a range from the mean that might be expected in practice. Estimates of EVs were very sensitive to censoring or missing data. In contrast to practical genetic evaluation, economic evaluation is very sensitive to lack of normality and missing data. Although in some special situations the presence of multiple thresholds may attenuate the combined effect of errors at each threshold point, in practical situations there is a tendency for a few key thresholds to dominate the EV, and there are many situations where errors could be compounded across multiple thresholds. In the development of breeding objectives for non-normal continuous traits influenced by value thresholds, it is necessary to select a transformation that will resolve problems of non-normality or to consider alternative methods that are less sensitive to non-normality.
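A rough illustration of why the EV is so sensitive to the assumed distribution: for a price premium triggered when the trait exceeds a threshold t, the marginal EV of shifting the trait mean is proportional to the probability density at t. The sketch below compares that density under a skew-normal and a moment-matched normal distribution; the shape parameter is illustrative, and reading the EV as proportional to the density at the threshold is an assumption, not the paper's exact formulation.

```python
import numpy as np
from scipy.stats import norm, skewnorm

a = 4.0                              # skew-normal shape parameter (illustrative)
sn = skewnorm(a)
mu, sd = sn.mean(), sn.std()

for t in np.arange(-3, 4):           # threshold position in phenotypic SD units
    true_ev = sn.pdf(mu + t * sd)                 # density under the skewed truth
    assumed_ev = norm(mu, sd).pdf(mu + t * sd)    # density assuming normality
    err = 100 * (assumed_ev - true_ev) / true_ev if true_ev > 1e-12 else float("inf")
    print(f"t = {t:+d} SD: relative EV error = {err:10.1f} %")
```

Thresholds in the far tail of the skewed distribution produce relative errors far beyond the 100% level noted in the abstract.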
Unipolar Terminal-Attractor Based Neural Associative Memory with Adaptive Threshold
NASA Technical Reports Server (NTRS)
Liu, Hua-Kuang (Inventor); Barhen, Jacob (Inventor); Farhat, Nabil H. (Inventor); Wu, Chwan-Hwa (Inventor)
1996-01-01
A unipolar terminal-attractor based neural associative memory (TABAM) system with adaptive threshold for perfect convergence is presented. By adaptively setting the threshold values for the dynamic iteration for the unipolar binary neuron states with terminal-attractors for the purpose of reducing the spurious states in a Hopfield neural network for associative memory and using the inner-product approach, perfect convergence and correct retrieval is achieved. Simulation is completed with a small number of stored states (M) and a small number of neurons (N) but a large M/N ratio. An experiment with optical exclusive-OR logic operation using LCTV SLMs shows the feasibility of optoelectronic implementation of the models. A complete inner-product TABAM is implemented using a PC for calculation of adaptive threshold values to achieve a unipolar TABAM (UIT) in the case where there is no crosstalk, and a crosstalk model (CRIT) in the case where crosstalk corrupts the desired state.
Unipolar terminal-attractor based neural associative memory with adaptive threshold
NASA Technical Reports Server (NTRS)
Liu, Hua-Kuang (Inventor); Barhen, Jacob (Inventor); Farhat, Nabil H. (Inventor); Wu, Chwan-Hwa (Inventor)
1993-01-01
A unipolar terminal-attractor based neural associative memory (TABAM) system with adaptive threshold for perfect convergence is presented. By adaptively setting the threshold values for the dynamic iteration for the unipolar binary neuron states with terminal-attractors for the purpose of reducing the spurious states in a Hopfield neural network for associative memory and using the inner product approach, perfect convergence and correct retrieval is achieved. Simulation is completed with a small number of stored states (M) and a small number of neurons (N) but a large M/N ratio. An experiment with optical exclusive-OR logic operation using LCTV SLMs shows the feasibility of optoelectronic implementation of the models. A complete inner-product TABAM is implemented using a PC for calculation of adaptive threshold values to achieve a unipolar TABAM (UIT) in the case where there is no crosstalk, and a crosstalk model (CRIT) in the case where crosstalk corrupts the desired state.
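A minimal stand-in for the adaptive-threshold idea above (not the patented TABAM, and without terminal-attractor dynamics): a unipolar Hopfield memory whose per-neuron thresholds are derived from the stored patterns rather than fixed in advance. With unipolar {0,1} states, the bipolar update sign(W(2s-1)) is equivalent to comparing Ws against a threshold of half the weight-row sum.

```python
import numpy as np

rng = np.random.default_rng(1)
N, M = 64, 4                                 # neurons, stored patterns (small M/N)
P = (rng.random((M, N)) < 0.5).astype(int)   # unipolar {0,1} stored states
B = 2 * P - 1                                # bipolar copies for the weights

W = B.T @ B                                  # Hebbian (inner-product) weights
np.fill_diagonal(W, 0)

# Adaptive per-neuron thresholds computed from the stored patterns:
# W s > theta  is equivalent to the bipolar update  W (2s - 1) > 0.
theta = 0.5 * W.sum(axis=1)

def recall(s, steps=50):
    """Iterate the unipolar threshold update until a fixed point."""
    for _ in range(steps):
        s_new = (W @ s > theta).astype(int)
        if np.array_equal(s_new, s):
            break
        s = s_new
    return s

probe = P[0].copy()
flip = rng.choice(N, size=6, replace=False)
probe[flip] = 1 - probe[flip]                # corrupt 6 of 64 bits
print("bits recovered:", int((recall(probe) == P[0]).sum()), "of", N)
```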
[Research on the threshold of Chl-a in Lake Taihu based on microcystins].
Wei, Dai-chun; Su, Jing; Ji, Dan-feng; Fu, Xiao-yong; Wang, Ji; Huo, Shou-liang; Cui, Chi-fei; Tang, Jun; Xi, Bei-dou
2014-12-01
Water samples were collected in Lake Taihu from June to October 2013 in order to investigate the threshold of chlorophyll a (Chl-a). The concentrations of three microcystin isomers (MC-LR, MC-RR, MC-YR) were detected by means of solid phase extraction and high performance liquid chromatography-tandem mass spectrometry. The correlations between the various MCs and eutrophication factors, such as total nitrogen (TN), total phosphorus (TP), Chl-a, and the permanganate index, were analyzed. The threshold of Chl-a was studied based on the relationships between MC-LR, MCs, and Chl-a. The results showed that Lake Taihu was severely polluted by MCs, and the spatial distribution could be described as follows: the concentration in Meiliang Bay was the highest, followed by Gonghu Bay, the Western Lake, and the Lake Center; the least polluted areas were Lake Xuhu and the Southern Lake. The concentration of MC-LR was the highest among the three MCs. The correlation analysis indicated that MC-LR, MC-RR, MC-YR, and total MCs were strongly positively correlated with the permanganate index, TN, TP, and Chl-a (P < 0.01). The threshold value of Chl-a was 12.26 mg x m(-3) according to the standard thresholds of MC-LR and MCs in drinking water. The threshold value of Chl-a in Lake Taihu was very close to the standard in the State of North Carolina, which demonstrates that the threshold value provided in this study is reasonable.
A Bayesian Approach to the Overlap Analysis of Epidemiologically Linked Traits.
Asimit, Jennifer L; Panoutsopoulou, Kalliope; Wheeler, Eleanor; Berndt, Sonja I; Cordell, Heather J; Morris, Andrew P; Zeggini, Eleftheria; Barroso, Inês
2015-12-01
Diseases often co-occur in individuals more often than expected by chance, and this may be explained by shared underlying genetic etiology. A common approach to genetic overlap analyses is to use summary genome-wide association study data to identify single-nucleotide polymorphisms (SNPs) that are associated with multiple traits at a selected P-value threshold. However, P-values do not account for differences in power, whereas Bayes' factors (BFs) do, and they may be approximated using summary statistics. We use simulation studies to compare the power of frequentist and Bayesian approaches to overlap analyses, and to decide on appropriate thresholds for comparison between the two methods. It is empirically illustrated that BFs have the advantage over P-values of a decreasing type I error rate as study size increases for single-disease associations. Consequently, the overlap analysis of traits from different-sized studies encounters issues in fair P-value threshold selection, whereas BFs are adjusted automatically. Extensive simulations show that Bayesian overlap analyses tend to have higher power than those that assess association strength with P-values, particularly in low-power scenarios. Calibration tables between BFs and P-values are provided for a range of sample sizes, as well as an approximation approach for sample sizes that are not in the calibration table. Although P-values are sometimes thought to be more intuitive, these tables assist in removing the opaqueness of Bayesian thresholds and may also be used in the selection of a BF threshold to meet a certain type I error rate. An application of our methods is used to identify variants associated with both obesity and osteoarthritis. © 2015 The Authors. Genetic Epidemiology published by Wiley Periodicals, Inc.
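One common way to approximate a BF from summary statistics, in the spirit described above, is the Wakefield-style approximate Bayes factor; the prior variance value below is illustrative, not prescriptive.

```python
import numpy as np

def approx_log_bf(beta_hat, se, w=0.04):
    """Approximate log Bayes factor in favour of association from summary
    statistics: beta_hat is the estimated effect, se its standard error,
    and w the prior variance of the true effect under H1 (here 0.2^2)."""
    v = se ** 2
    z2 = (beta_hat / se) ** 2
    r = w / (v + w)                     # shrinkage factor
    return 0.5 * np.log(1 - r) + 0.5 * z2 * r

# Same z-score, different study sizes: the BF automatically adjusts for power.
z = 3.0
for se in (0.10, 0.02):                 # a larger study has a smaller standard error
    print(f"se = {se:.2f}, z = {z}: log BF = {approx_log_bf(z * se, se):.2f}")
```

Because the shrinkage factor depends on the standard error, the same z-score yields a smaller BF in a larger study, which is precisely the decreasing-type-I-error property the abstract exploits.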
Mercury demethylation in waterbird livers: Dose-response thresholds and differences among species
Eagles-Smith, Collin A.; Ackerman, Joshua T.; Yee, Julie; Adelsbach, T.L.
2009-01-01
We assessed methylmercury (MeHg) demethylation in the livers of adults and chicks of four waterbird species that commonly breed in San Francisco Bay: American avocets, black-necked stilts, Caspian terns, and Forster's terns. In adults (all species combined), we found strong evidence for a threshold model where MeHg demethylation occurred above a hepatic total mercury concentration threshold of 8.51 ± 0.93 µg/g dry weight, and there was a strong decline in %MeHg values as total mercury (THg) concentrations increased above 8.51 µg/g dry weight. Conversely, there was no evidence for a demethylation threshold in chicks, and we found that %MeHg values declined linearly with increasing THg concentrations. For adults, we also found taxonomic differences in the demethylation responses, with avocets and stilts showing a higher demethylation rate than that of terns when concentrations exceeded the threshold, whereas terns had a lower demethylation threshold (7.48 ± 1.48 µg/g dry wt) than that of avocets and stilts (9.91 ± 1.29 µg/g dry wt). Finally, we assessed the role of selenium (Se) in the demethylation process. Selenium concentrations were positively correlated with inorganic Hg in livers of birds above the demethylation threshold but not below. This suggests that Se may act as a binding site for demethylated Hg and may reduce the potential for secondary toxicity. Our findings indicate that waterbirds demethylate mercury in their livers if exposure exceeds a threshold value and suggest that taxonomic differences in demethylation ability may be an important factor in evaluating species-specific risk of MeHg exposure. Further, we provide strong evidence for a threshold of approximately 8.5 µg/g dry weight of THg in the liver at which demethylation is initiated. © 2009 SETAC.
Oil-in-Water Emulsion Exhibits Bitterness-Suppressing Effects in a Sensory Threshold Study.
Torrico, Damir Dennis; Sae-Eaw, Amporn; Sriwattana, Sujinda; Boeneke, Charles; Prinyawiwatkul, Witoon
2015-06-01
Little is known about how emulsion characteristics affect saltiness/bitterness perception. Sensory detection and recognition thresholds of NaCl, caffeine, and KCl in aqueous solution compared with oil-in-water emulsion systems were evaluated. For emulsions, NaCl, KCl, or caffeine were dissolved in water + emulsifier and mixed with canola oil (20% by weight). Two emulsions were prepared: emulsion 1 (viscosity = 257 cP) and emulsion 2 (viscosity = 59 cP). The forced-choice ascending concentration series method of limits (ASTM E-679-04) was used to determine detection and/or recognition thresholds at 25 °C. Group best estimate threshold (GBET) geometric means were expressed as g/100 mL. Comparing NaCl with KCl, there were no significant differences in detection GBET values for all systems (0.0197 - 0.0354). For saltiness recognition thresholds, KCl GBET values were higher compared with NaCl GBET (0.0822 - 0.1070 compared with 0.0471 - 0.0501). For NaCl and KCl, emulsion 1 and/or emulsion 2 did not significantly affect the saltiness recognition threshold compared with that of the aqueous solution. However, the bitterness recognition thresholds of caffeine and KCl in solution were significantly lower than in the emulsions (0.0242 - 0.0586 compared with 0.0754 - 0.1025). Gender generally had a marginal effect on threshold values. This study showed that, compared with the aqueous solutions, emulsions did not significantly affect the saltiness recognition threshold of NaCl and KCl, but exhibited bitterness-suppressing effects on KCl and/or caffeine. © 2015 Institute of Food Technologists®
NASA Astrophysics Data System (ADS)
Panagoulia, D.; Trichakis, I.
2012-04-01
Considering the growing interest in simulating hydrological phenomena with artificial neural networks (ANNs), it is useful to establish the potential and limits of these models. In this study, the main objective is to examine how to improve the ability of an ANN model to simulate extreme values of flow by utilizing a priori knowledge of threshold values. A three-layer feedforward ANN was trained using the back-propagation algorithm with the logistic function as activation function. Using the thresholds, the flow was partitioned into low (x < μ), medium (μ ≤ x ≤ μ + 2σ), and high (x > μ + 2σ) values. The ANN model was trained separately on the high-flow partition and on all flow data. The developed methodology was implemented over a mountainous river catchment (the Mesochora catchment in northwestern Greece). The ANN model received as inputs pseudo-precipitation (rain plus melt) and previously observed flow data. After the training was completed, the bootstrapping methodology was applied to calculate the ANN confidence intervals (CIs) for a 95% nominal coverage. The calculated CIs included only the uncertainty that comes from the calibration procedure. The results showed that an ANN model trained specifically for high flows, with a priori knowledge of the thresholds, can simulate these extreme values much better (RMSE is 31.4% less) than an ANN model trained with all data of the available time series and using a posteriori threshold values. On the other hand, the width of the CIs increases by 54.9%, with a simultaneous increase of 64.4% in the actual coverage for the high flows (a priori partition). The narrower CIs of the high flows trained with all data may be attributed to the smoothing effect produced by the use of the full data sets. Overall, the results suggest that an ANN model trained with a priori knowledge of the threshold values has an increased ability to simulate extreme values compared with an ANN model trained with all the data and a posteriori knowledge of the thresholds.
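The a priori partition rule itself is simple; a sketch, with synthetic flows standing in for the Mesochora series:

```python
import numpy as np

def partition_flows(q):
    """Split a flow series into the study's three classes:
    low x < mu, medium mu <= x <= mu + 2*sigma, high x > mu + 2*sigma."""
    mu, sigma = q.mean(), q.std()
    low = q < mu
    high = q > mu + 2 * sigma
    medium = ~low & ~high
    return low, medium, high

# Illustrative synthetic daily flows (m^3/s); a real application would use
# the observed series instead.
rng = np.random.default_rng(7)
q = rng.lognormal(mean=2.0, sigma=0.6, size=3650)
low, med, high = partition_flows(q)
print(f"low: {low.mean():.1%}, medium: {med.mean():.1%}, high: {high.mean():.1%}")
# A dedicated ANN would then be trained only on time steps in the high class.
```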
A Gompertz population model with Allee effect and fuzzy initial values
NASA Astrophysics Data System (ADS)
Amarti, Zenia; Nurkholipah, Nenden Siti; Anggriani, Nursanti; Supriatna, Asep K.
2018-03-01
Growth and population dynamics models are important tools for preparing good management strategies and predicting the future of a population or species. This has been done by various known methods, one of which is developing a mathematical model that describes population growth. Models are usually formed as differential equations or systems of differential equations, depending on the complexity of the underlying properties of the population. One example of biological complexity is the Allee effect, a phenomenon in which the mean individual fitness of a population is strongly correlated with population size when the population is very small. In this paper the population growth model used is the Gompertz equation model, taking the Allee effect on the population into account. We explore the properties of the solution to the model numerically using the Runge-Kutta method. Further exploration is done via a fuzzy theoretical approach to accommodate uncertainty in the initial values of the model. It is known that an initial value greater than the Allee threshold causes the solution to rise towards the carrying capacity asymptotically, whereas an initial value smaller than the Allee threshold causes the solution to decrease towards zero asymptotically, meaning the population eventually becomes extinct. Numerical solutions show that modeling an uncertain initial value around the critical point A (the Allee threshold) rather than a crisp initial value can lead to extinction of the population to a certain possibilistic degree, depending on the predetermined membership function of the initial value.
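A minimal numerical sketch, assuming one common Gompertz-type form with an Allee threshold A (the paper's exact equation may differ), integrated with the classical fourth-order Runge-Kutta method as described; the parameter values are illustrative.

```python
import numpy as np

# Assumed Gompertz growth law with an Allee threshold A:
#   dN/dt = r * N * ln(K / N) * (N / A - 1)
# For N > A the solution rises towards K; for N < A it decays towards zero.
r, K, A = 0.5, 100.0, 10.0

def f(n):
    return r * n * np.log(K / n) * (n / A - 1.0)

def rk4(n0, dt=0.01, t_end=40.0):
    """Classical fourth-order Runge-Kutta integration of the scalar ODE."""
    n = n0
    for _ in range(int(t_end / dt)):
        k1 = f(n)
        k2 = f(n + 0.5 * dt * k1)
        k3 = f(n + 0.5 * dt * k2)
        k4 = f(n + dt * k3)
        n += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
    return n

for n0 in (9.0, 11.0):   # just below and just above the Allee threshold
    print(f"N(0) = {n0:5.1f} -> N(40) = {rk4(n0):7.3f}")
```

Starting just below A drives the population towards extinction; starting just above drives it towards K, which is the crisp behaviour that the fuzzy initial value blurs into a possibilistic degree of extinction.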
Cost-effectiveness thresholds: pros and cons.
Bertram, Melanie Y; Lauer, Jeremy A; De Joncheere, Kees; Edejer, Tessa; Hutubessy, Raymond; Kieny, Marie-Paule; Hill, Suzanne R
2016-12-01
Cost-effectiveness analysis is used to compare the costs and outcomes of alternative policy options. Each resulting cost-effectiveness ratio represents the magnitude of additional health gained per additional unit of resources spent. Cost-effectiveness thresholds allow cost-effectiveness ratios that represent good or very good value for money to be identified. In 2001, the World Health Organization's Commission on Macroeconomics in Health suggested cost-effectiveness thresholds based on multiples of a country's per-capita gross domestic product (GDP). In some contexts, in choosing which health interventions to fund and which not to fund, these thresholds have been used as decision rules. However, experience with the use of such GDP-based thresholds in decision-making processes at country level shows them to lack country specificity and this - in addition to uncertainty in the modelled cost-effectiveness ratios - can lead to the wrong decision on how to spend health-care resources. Cost-effectiveness information should be used alongside other considerations - e.g. budget impact and feasibility considerations - in a transparent decision-making process, rather than in isolation based on a single threshold value. Although cost-effectiveness ratios are undoubtedly informative in assessing value for money, countries should be encouraged to develop a context-specific process for decision-making that is supported by legislation, has stakeholder buy-in, for example the involvement of civil society organizations and patient groups, and is transparent, consistent and fair.
Chaotic Signal Denoising Based on Hierarchical Threshold Synchrosqueezed Wavelet Transform
NASA Astrophysics Data System (ADS)
Wang, Wen-Bo; Jing, Yun-yu; Zhao, Yan-chao; Zhang, Lian-Hua; Wang, Xiang-Li
2017-12-01
To overcome the shortcomings of the single-threshold synchrosqueezed wavelet transform (SWT) denoising method, an adaptive hierarchical-threshold SWT method for chaotic signal denoising is proposed. First, a new SWT threshold function, twice continuously differentiable, is constructed based on Stein's unbiased risk estimate. Then, using the new threshold function, a thresholding process based on the minimum mean square error is implemented, and the optimal estimate of each layer's threshold in SWT chaotic denoising is obtained. Experimental results on a simulated chaotic signal and on measured sunspot signals show that the proposed method filters the noise of chaotic signals well and recovers the intrinsic chaotic characteristics of the original signal. Compared with the EEMD denoising method and the single-threshold SWT denoising method, the proposed method obtains better denoising results for chaotic signals.
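The paper's SURE-based threshold function is not reproduced here; as a generic stand-in, the sketch below applies a standard soft threshold with a separate (universal) threshold per transform layer, which is the hierarchical-threshold idea in its simplest form.

```python
import numpy as np

def soft_threshold(c, t):
    """Standard soft-threshold function (a stand-in; the paper's SURE-based
    function is smoother, being twice continuously differentiable)."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def hierarchical_denoise(coeff_layers, sigma):
    """Apply a separate per-layer threshold (here the universal threshold)
    rather than one global threshold for all layers."""
    out = []
    for c in coeff_layers:
        t = sigma * np.sqrt(2.0 * np.log(c.size))   # per-layer threshold
        out.append(soft_threshold(c, t))
    return out

# Toy demonstration with random arrays standing in for SWT coefficient layers.
rng = np.random.default_rng(3)
layers = [rng.normal(0, 1, 256) for _ in range(4)]
den = hierarchical_denoise(layers, sigma=1.0)
print([f"{np.abs(d).mean():.3f}" for d in den])
```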
NASA Astrophysics Data System (ADS)
Härer, Stefan; Bernhardt, Matthias; Siebers, Matthias; Schulz, Karsten
2018-05-01
Knowledge of the current snow cover extent is essential for characterizing energy and moisture fluxes at the Earth's surface. The snow-covered area (SCA) is often estimated by using optical satellite information in combination with the normalized-difference snow index (NDSI). The NDSI thereby uses a threshold to define whether a satellite pixel is classified as snow covered or snow free. The spatiotemporal representativeness of the standard threshold of 0.4 is, however, questionable at the local scale. Here, we use local snow cover maps derived from ground-based photography to continuously calibrate the NDSI threshold values (NDSIthr) of Landsat satellite images at two European mountain sites over the period from 2010 to 2015. The Research Catchment Zugspitzplatt (RCZ, Germany) and the Vernagtferner area (VF, Austria) are both located within a single Landsat scene. Nevertheless, the long-term analysis of the NDSIthr demonstrated that the NDSIthr values at these sites are not correlated (r = 0.17) and differ from the standard threshold of 0.4. For further comparison, a dynamic and locally optimized NDSI threshold was used, as well as another locally optimized literature threshold value (0.7). It was shown that large uncertainties in the prediction of the SCA of up to 24.1% exist in satellite snow cover maps in cases where the standard threshold of 0.4 is used, but a newly developed calibrated quadratic polynomial model which accounts for seasonal threshold dynamics can reduce this error. The model minimizes the SCA uncertainties at the calibration site VF by 50% in the evaluation period and also significantly improved the results at RCZ. Additionally, a scaling experiment shows that the positive effect of a locally adapted threshold diminishes at pixel sizes of 500 m or larger, underlining the general applicability of the standard threshold at larger scales.
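For reference, the NDSI classification amounts to one line of arithmetic per pixel; the reflectance values below are illustrative.

```python
import numpy as np

def snow_mask(green, swir, ndsi_thr=0.4):
    """Classify pixels as snow using the normalized-difference snow index,
    NDSI = (green - SWIR) / (green + SWIR); 0.4 is the standard threshold
    that the study re-calibrates locally."""
    ndsi = (green - swir) / np.maximum(green + swir, 1e-9)
    return ndsi > ndsi_thr

# Illustrative reflectances: snow is bright in green and dark in SWIR.
green = np.array([0.80, 0.30, 0.60])
swir = np.array([0.10, 0.25, 0.35])
print(snow_mask(green, swir))      # -> [ True False False]
```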
I. RENAL THRESHOLDS FOR HEMOGLOBIN IN DOGS
Lichty, John A.; Havill, William H.; Whipple, George H.
1932-01-01
We use the term "renal threshold for hemoglobin" to indicate the smallest amount of hemoglobin which given intravenously will effect the appearance of recognizable hemoglobin in the urine. The initial renal threshold level for dog hemoglobin is established by the methods employed at an average value of 155 mg. hemoglobin per kilo body weight with maximal values of 210 and minimal of 124. Repeated daily injections of hemoglobin will depress this initial renal threshold level on the average 46 per cent with maximal values of 110 and minimal values of 60 mg. hemoglobin per kilo body weight. This minimal or depression threshold is relatively constant if the injections are continued. Rest periods without injections cause a return of the renal threshold for hemoglobin toward the initial threshold levels—recovery threshold level. Injections of hemoglobin below the initial threshold level but above the minimal or depression threshold will eventually reduce the renal threshold for hemoglobin to its depression threshold level. We believe the depression threshold or minimal renal threshold level due to repeated hemoglobin injections is a little above the glomerular threshold which we assume is the base line threshold for hemoglobin. Our reasons for this belief in the glomerular threshold are given above and in the other papers of this series. PMID:19870016
NASA Astrophysics Data System (ADS)
Ampil, L. J. Y.; Yao, J. G.; Lagrosas, N.; Lorenzo, G. R. H.; Simpas, J.
2017-12-01
The Global Precipitation Measurement (GPM) mission is a group of satellites that provides global observations of precipitation. Satellite-based observations act as an alternative if ground-based measurements are inadequate or unavailable. Data provided by satellites, however, must be validated to be reliable and used effectively. In this study, the Integrated Multisatellite Retrievals for GPM (IMERG) Final Run v3 half-hourly product is validated by comparison against interpolated ground measurements derived from sixteen ground stations in Metro Manila. The area considered in this study is the region 14.4° - 14.8° latitude and 120.9° - 121.2° longitude, subdivided into twelve 0.1° x 0.1° grid squares. Satellite data from June 1 - August 31, 2014, aggregated to 1-day temporal resolution, are used in this study. The satellite data are also compared directly to measurements from individual ground stations, to isolate the effect of the interpolation by contrast with the comparison of satellite data against interpolated measurements. The comparisons are quantified by a fractional root-mean-square error (F-RMSE) between the two datasets. The results show that interpolation improves errors compared to using raw station data except during days with very small amounts of rainfall. F-RMSE reaches extreme values of up to 654 without a rainfall threshold. A rainfall threshold is inferred to remove extreme error values and make the distribution of F-RMSE more consistent. Results show that the rainfall threshold varies slightly per month. The threshold for June is inferred to be 0.5 mm, reducing the maximum F-RMSE to 9.78, while the threshold for July and August is inferred to be 0.1 mm, reducing the maximum F-RMSE to 4.8 and 10.7, respectively. The maximum F-RMSE is reduced further as the threshold is increased: it falls to 3.06 when a rainfall threshold of 10 mm is applied over the entire duration of JJA. These results indicate that IMERG performs well for moderate to high intensity rainfall and that the interpolation remains effective only when rainfall exceeds a certain threshold value. Over Metro Manila, a rainfall threshold of 0.5 mm indicated better correspondence between ground-measured and satellite-measured rainfall.
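A sketch of the F-RMSE scoring with a rainfall threshold; the exact normalization used in the study is not stated in the abstract, so dividing the RMSE by the mean ground value is an assumption.

```python
import numpy as np

def f_rmse(sat, ground):
    """Fractional RMSE between satellite and ground rainfall (here: RMSE
    divided by the mean ground value, one plausible reading)."""
    rmse = np.sqrt(np.mean((sat - ground) ** 2))
    return rmse / np.mean(ground)

def f_rmse_thresholded(sat, ground, thr):
    """Score only days on which the ground measurement reaches the rainfall
    threshold, as the study does to remove extreme error values."""
    keep = ground >= thr
    return f_rmse(sat[keep], ground[keep])

sat = np.array([0.0, 0.2, 5.0, 12.0, 30.0])   # mm/day, illustrative
gnd = np.array([0.1, 0.0, 6.0, 10.0, 28.0])
print(f"no threshold: {f_rmse(sat, gnd):.2f}")
print(f"thr = 0.5 mm: {f_rmse_thresholded(sat, gnd, 0.5):.2f}")
```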
Cheng, H; Zhang, X C; Duan, L; Ma, Y; Wang, J X
1995-01-01
The vibrotactile sense thresholds (VSTs) of the middle fingers of 60 healthy persons and 97 patients with Hand-Arm Vibration Syndrome (HAVS) or subclinical HAVS were measured quantitatively. Intermittent vibratory stimuli were used, with stimulus frequencies of 8, 16, 31.5, 63, 125, 250, and 500 Hz. The equal-VST contours of the fingers were mapped. Results showed that the VSTs of the normal group were not correlated with sex or handedness. From 8 Hz to 250 Hz the equal-VST contours of the normal group were relatively flat; above 250 Hz the contours began an abrupt ascent. The VST values showed a logarithmic rising tendency with the increasing age of the subjects. In the equal-VST contours, the frequency of the most sensitive threshold value was 125 Hz in the normal group and 8 Hz in the HAVS group. The patients' VST values were higher than those of the healthy persons. The vibrotactilegram showed that the VST values of the patient groups shifted first at high frequencies, and the VST loss displayed a "V"-type hollow at 125 Hz and 250 Hz. The quantitative VST test was a valuable auxiliary detection method for HAVS, and the "V"-type hollow of the VST was an early clinical manifestation of HAVS.
Sharp, Madeleine E.; Viswanathan, Jayalakshmi; Lanyon, Linda J.; Barton, Jason J. S.
2012-01-01
Background There are few clinical tools that assess decision-making under risk. Tests that characterize sensitivity and bias in decisions between prospects varying in magnitude and probability of gain may provide insights into conditions with anomalous reward-related behaviour. Objective We designed a simple test of how subjects integrate information about the magnitude and the probability of reward, which can determine discriminative thresholds and choice bias in decisions under risk. Design/Methods Twenty subjects were required to choose between two explicitly described prospects, one with higher probability but lower magnitude of reward than the other, with the difference in expected value between the two prospects varying from 3 to 23%. Results Subjects showed a mean threshold sensitivity of 43% difference in expected value. Regarding choice bias, there was a ‘risk premium’ of 38%, indicating a tendency to choose higher probability over higher reward. An analysis using prospect theory showed that this risk premium is the predicted outcome of hypothesized non-linearities in the subjective perception of reward value and probability. Conclusions This simple test provides a robust measure of discriminative value thresholds and biases in decisions under risk. Prospect theory can also make predictions about decisions when subjective perception of reward or probability is anomalous, as may occur in populations with dopaminergic or striatal dysfunction, such as Parkinson's disease and schizophrenia. PMID:22493669
Sharp, Madeleine E; Viswanathan, Jayalakshmi; Lanyon, Linda J; Barton, Jason J S
2012-01-01
There are few clinical tools that assess decision-making under risk. Tests that characterize sensitivity and bias in decisions between prospects varying in magnitude and probability of gain may provide insights into conditions with anomalous reward-related behaviour. We designed a simple test of how subjects integrate information about the magnitude and the probability of reward, which can determine discriminative thresholds and choice bias in decisions under risk. Twenty subjects were required to choose between two explicitly described prospects, one with higher probability but lower magnitude of reward than the other, with the difference in expected value between the two prospects varying from 3 to 23%. Subjects showed a mean threshold sensitivity of 43% difference in expected value. Regarding choice bias, there was a 'risk premium' of 38%, indicating a tendency to choose higher probability over higher reward. An analysis using prospect theory showed that this risk premium is the predicted outcome of hypothesized non-linearities in the subjective perception of reward value and probability. This simple test provides a robust measure of discriminative value thresholds and biases in decisions under risk. Prospect theory can also make predictions about decisions when subjective perception of reward or probability is anomalous, as may occur in populations with dopaminergic or striatal dysfunction, such as Parkinson's disease and schizophrenia.
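The prospect-theory account of the risk premium can be sketched with the standard Tversky-Kahneman value and weighting functions. The parameter values below are illustrative (stronger value-function curvature than the usual median estimates) and are chosen so that the nonlinearities visibly favour the higher-probability prospect despite its lower expected value.

```python
def value(x, alpha=0.5):
    """Power value function for gains, v(x) = x^alpha. alpha = 0.5 is
    illustrative; Tversky and Kahneman's median estimate is about 0.88."""
    return x ** alpha

def weight(p, gamma=0.61):
    """Tversky-Kahneman probability weighting function for gains."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

def prospect_utility(x, p):
    return weight(p) * value(x)

# A higher-probability/lower-reward prospect versus a riskier one whose
# expected value is 25% larger: with these nonlinearities the safer prospect
# still wins, which is the mechanism behind the observed 'risk premium'.
safe, risky = (20.0, 0.80), (50.0, 0.40)
print("expected values :", safe[0] * safe[1], risky[0] * risky[1])
print("prospect values :", round(prospect_utility(*safe), 3),
      round(prospect_utility(*risky), 3))
```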
An adaptive design for updating the threshold value of a continuous biomarker
Spencer, Amy V.; Harbron, Chris; Mander, Adrian; Wason, James; Peers, Ian
2017-01-01
Potential predictive biomarkers are often measured on a continuous scale, but in practice, a threshold value to divide the patient population into biomarker ‘positive’ and ‘negative’ is desirable. Early phase clinical trials are increasingly using biomarkers for patient selection, but at this stage, it is likely that little will be known about the relationship between the biomarker and the treatment outcome. We describe a single-arm trial design with adaptive enrichment, which can increase power to demonstrate efficacy within a patient subpopulation, the parameters of which are also estimated. Our design enables us to learn about the biomarker and optimally adjust the threshold during the study, using a combination of generalised linear modelling and Bayesian prediction. At the final analysis, a binomial exact test is carried out, allowing the hypothesis that ‘no population subset exists in which the novel treatment has a desirable response rate’ to be tested. Through extensive simulations, we are able to show increased power over fixed threshold methods in many situations without increasing the type-I error rate. We also show that estimates of the threshold, which defines the population subset, are unbiased and often more precise than those from fixed threshold studies. We provide an example of the method applied (retrospectively) to publicly available data from a study of the use of tamoxifen after mastectomy by the German Breast Study Group, where progesterone receptor is the biomarker of interest. PMID:27417407
Thresholds of information leakage for speech security outside meeting rooms.
Robinson, Matthew; Hopkins, Carl; Worrall, Ken; Jackson, Tim
2014-09-01
This paper describes an approach to provide speech security outside meeting rooms where a covert listener might attempt to extract confidential information. Decision-based experiments are used to establish a relationship between an objective measurement of the Speech Transmission Index (STI) and a subjective assessment relating to the threshold of information leakage. This threshold is defined for a specific percentage of English words that are identifiable with a maximum safe vocal effort (e.g., "normal" speech) used by the meeting participants. The results demonstrate that it is possible to quantify an offset that links STI with a specific threshold of information leakage which describes the percentage of words identified. The offsets for male talkers are shown to be approximately 10 dB larger than for female talkers. Hence for speech security it is possible to determine offsets for the threshold of information leakage using male talkers as the "worst case scenario." To define a suitable threshold of information leakage, the results show that a robust definition can be based upon 1%, 2%, or 5% of words identified. For these percentages, results are presented for offset values corresponding to different STI values in a range from 0.1 to 0.3.
Yates, K.K.; Halley, R.B.
2006-01-01
The severity of the impact of elevated atmospheric pCO2 on coral reef ecosystems depends, in part, on how seawater pCO2 affects the balance between calcification and dissolution of carbonate sediments. Presently, there are insufficient published data that relate concentrations of pCO2 and CO32- to in situ rates of reef calcification in natural settings to accurately predict the impact of elevated atmospheric pCO2 on calcification and dissolution processes. Rates of net calcification and dissolution, CO32- concentrations, and pCO2 were measured, in situ, on patch reefs, bare sand, and coral rubble on the Molokai reef flat in Hawaii. Rates of calcification ranged from 0.03 to 2.30 mmol CaCO3 m(-2) h(-1) and dissolution ranged from -0.05 to -3.3 mmol CaCO3 m(-2) h(-1). Calcification and dissolution varied diurnally, with net calcification primarily occurring during the day and net dissolution occurring at night. These data were used to calculate threshold values for pCO2 and CO32- at which rates of calcification and dissolution are equivalent. Results indicate that calcification and dissolution are linearly correlated with both CO32- and pCO2. Threshold pCO2 and CO32- values for individual substrate types showed considerable variation. The average pCO2 threshold value for all substrate types was 654 ± 195 µatm and ranged from 467 to 1003 µatm. The average CO32- threshold value was 152 ± 24 µmol kg(-1), ranging from 113 to 184 µmol kg(-1). Ambient seawater measurements of pCO2 and CO32- indicate that the CO32- and pCO2 threshold values for all substrate types were both exceeded, simultaneously, 13% of the time at present-day atmospheric pCO2 concentrations. It is predicted that atmospheric pCO2 will exceed the average pCO2 threshold value for calcification and dissolution on the Molokai reef flat by the year 2100.
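Because net calcification was found to vary linearly with pCO2, the threshold is simply the zero crossing of the fitted line; the data points below are illustrative, not the study's measurements.

```python
import numpy as np

# Net calcification G varies linearly with pCO2, so the threshold at which
# calcification and dissolution balance is the zero crossing of the fit:
#   G = a * pCO2 + b = 0  =>  pCO2* = -b / a
pco2 = np.array([300., 400., 500., 600., 700., 800.])   # uatm (illustrative)
g = np.array([1.8, 1.2, 0.9, 0.2, -0.3, -0.9])          # mmol CaCO3 m^-2 h^-1

a, b = np.polyfit(pco2, g, 1)
print(f"threshold pCO2 = {-b / a:.0f} uatm")
```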
[Selection of distance thresholds of urban forest landscape connectivity in Shenyang City].
Liu, Chang-fu; Zhou, Bin; He, Xing-yuan; Chen, Wei
2010-10-01
By using the QuickBird remote sensing image interpretation data of urban forests in Shenyang City in 2006, and with the help of a geographical information system, this paper analyzed the landscape patches of the urban forests in the area inside the third ring-road of Shenyang. Based on habitat availability and the dispersal potential of animal and plant species, eight distance thresholds (50, 100, 200, 400, 600, 800, 1000, and 1200 m) were selected to compute the integral index of connectivity, the probability of connectivity, and the importance values of the landscape patches, and the computed values were used for analyzing and screening the distance thresholds of urban forest landscape connectivity in the city. The results showed that the appropriate distance thresholds of urban forest landscape connectivity in Shenyang City in 2006 ranged from 100 to 400 m, with 200 m being most appropriate. It is suggested that the distance thresholds be increased or decreased according to the attainability of urban forest landscape connectivity and the different demands at the landscape level.
Education-Adjusted Normality Thresholds for FDG-PET in the Diagnosis of Alzheimer Disease.
Mainta, Ismini C; Trombella, Sara; Morbelli, Silvia; Frisoni, Giovanni B; Garibotto, Valentina
2018-06-05
A corollary of the reserve hypothesis is that what is regarded as pathological cortical metabolism in patients might vary according to education. The aim of this study is to assess the incremental diagnostic value of education-adjusted over unadjusted thresholds on the diagnostic accuracy of FDG-PET as a biomarker for Alzheimer disease (AD). We compared cortical metabolism in 90 healthy controls and 181 AD patients from the Alzheimer Disease Neuroimaging Initiative (ADNI) database. The AUC of the ROC curve did not differ significantly between the whole group and the higher-education patients or the lower-education subjects. The threshold of wMetaROI values providing 80% sensitivity was lower in higher-education patients and higher in the lower-education patients, compared to the standard threshold derived over the whole AD collective, without, however, significant changes in sensitivity and specificity. These data show that education, as a proxy of reserve, is not a major confounder in the diagnostic accuracy of FDG-PET in AD and the adoption of education-adjusted thresholds is not required in daily practice. © 2018 S. Karger AG, Basel.
CHANGES IN THE ANAEROBIC THRESHOLD IN AN ANNUAL CYCLE OF SPORT TRAINING OF YOUNG SOCCER PLAYERS
Andrzejewski, M.; Wieczorek, A.; Barinow-Wojewódzki, A.; Jadczak, Ł.; Adrian, S.; Pietrzak, M.; Wieczorek, S.
2013-01-01
The aim of the study was to assess changes in the anaerobic threshold of young soccer players in an annual training cycle. A group of highly trained 15-18 year old players of KKS Lech Poznań were tested. The tests included an annual training macrocycle, and its individual stages resulted from the time structure of the sports training. In order to assess the level of exercise capacities of the players, a field exercise test of increasing intensity was carried out on a soccer pitch. The test made it possible to determine the 4 millimolar lactate threshold (T LA 4 mmol · l-1) on the basis of the lactate concentration in blood [LA], to establish the threshold running speed and the threshold heart rate [HR]. The threshold running speed at the level of the 4 millimolar lactate threshold was established using the two-point form of the equation of a straight line. The obtained indicators of the threshold running speed allowed for precise establishment of effort intensity used in individual training in developing aerobic endurance. In order to test the significance of differences in mean values between four dates of tests, a non-parametric Friedman ANOVA test was used. The significance of differences between consecutive dates of tests was determined using a post-hoc Friedman ANOVA test. The tests showed significant differences in values of selected indicators determined at the anaerobic threshold in various stages of an annual training cycle of young soccer players. The most beneficial changes in terms of the threshold running speed were noted on the fourth date of tests, when the participants had the highest values of 4.01 m · s-1 for older juniors, and 3.80 m · s-1 for younger juniors. This may be indicative of effective application of an individualized programme of training loads and of good preparation of teams for competition in terms of players’ aerobic endurance. PMID:24744480
Changes in the anaerobic threshold in an annual cycle of sport training of young soccer players.
Sliwowski, R; Andrzejewski, M; Wieczorek, A; Barinow-Wojewódzki, A; Jadczak, L; Adrian, S; Pietrzak, M; Wieczorek, S
2013-06-01
The aim of the study was to assess changes in the anaerobic threshold of young soccer players in an annual training cycle. A group of highly trained 15-18 year old players of KKS Lech Poznań were tested. The tests included an annual training macrocycle, and its individual stages resulted from the time structure of the sports training. In order to assess the level of exercise capacities of the players, a field exercise test of increasing intensity was carried out on a soccer pitch. The test made it possible to determine the 4 millimolar lactate threshold (T LA 4 mmol · l(-1)) on the basis of the lactate concentration in blood [LA], to establish the threshold running speed and the threshold heart rate [HR]. The threshold running speed at the level of the 4 millimolar lactate threshold was established using the two-point form of the equation of a straight line. The obtained indicators of the threshold running speed allowed for precise establishment of effort intensity used in individual training in developing aerobic endurance. In order to test the significance of differences in mean values between four dates of tests, a non-parametric Friedman ANOVA test was used. The significance of differences between consecutive dates of tests was determined using a post-hoc Friedman ANOVA test. The tests showed significant differences in values of selected indicators determined at the anaerobic threshold in various stages of an annual training cycle of young soccer players. The most beneficial changes in terms of the threshold running speed were noted on the fourth date of tests, when the participants had the highest values of 4.01 m · s(-1) for older juniors, and 3.80 m · s(-1) for younger juniors. This may be indicative of effective application of an individualized programme of training loads and of good preparation of teams for competition in terms of players' aerobic endurance.
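The two-point determination of the threshold running speed described above is a linear interpolation at the 4 mmol/l lactate level; the test data below are illustrative.

```python
import numpy as np

def threshold_speed(speeds, lactate, la_ref=4.0):
    """Threshold running speed at the 4 mmol/l lactate level, from the
    two-point form of the straight line through the stages that bracket it
    (assumes la_ref lies between two measured stages)."""
    speeds, lactate = np.asarray(speeds), np.asarray(lactate)
    i = np.searchsorted(lactate, la_ref)     # first stage at/above 4 mmol/l
    v1, v2 = speeds[i - 1], speeds[i]
    la1, la2 = lactate[i - 1], lactate[i]
    return v1 + (la_ref - la1) * (v2 - v1) / (la2 - la1)

# Illustrative incremental-test data (speed in m/s, blood lactate in mmol/l).
speeds = [3.0, 3.4, 3.8, 4.2]
lactate = [1.6, 2.4, 4.3, 7.1]
print(f"threshold speed = {threshold_speed(speeds, lactate):.2f} m/s")
```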
The interplay between cooperativity and diversity in model threshold ensembles
Cervera, Javier; Manzanares, José A.; Mafe, Salvador
2014-01-01
The interplay between cooperativity and diversity is crucial for biological ensembles because single molecule experiments show a significant degree of heterogeneity and also for artificial nanostructures because of the high individual variability characteristic of nanoscale units. We study the cross-effects between cooperativity and diversity in model threshold ensembles composed of individually different units that show a cooperative behaviour. The units are modelled as statistical distributions of parameters (the individual threshold potentials here) characterized by central and width distribution values. The simulations show that the interplay between cooperativity and diversity results in ensemble-averaged responses of interest for the understanding of electrical transduction in cell membranes, the experimental characterization of heterogeneous groups of biomolecules and the development of biologically inspired engineering designs with individually different building blocks. PMID:25142516
Perceived area and the luminosity threshold.
Bonato, F; Gilchrist, A L
1999-07-01
Observers made forced-choice opaque/luminous responses to targets of varying luminance and varying size presented (1) on the wall of a laboratory, (2) as a disk within an annulus, and (3) embedded within a Mondrian array presented within a vision tunnel. Lightness matches were also made for nearby opaque surfaces. The results show that the threshold luminance value at which a target begins to appear self-luminous increases with its size, defined as perceived size, not retinal size. More generally, the larger the target, the more an increase in its luminance induces grayness/blackness into the surround and the less it induces luminosity into the target, and vice versa. Corresponding to this luminosity/grayness tradeoff, there appears to be an invariant: Across a wide variety of conditions, a target begins to appear luminous when its luminance is about 1.7 times that of a surface that would appear white in the same illumination. These results show that the luminosity threshold behaves like a surface lightness value--the maximum lightness value, in fact--and is subject to the same laws of anchoring (such as the area rule proposed by Li & Gilchrist, 1999) as surface lightness.
41 CFR 102-73.40 - What happens if the dollar value of the project exceeds the prospectus threshold?
Code of Federal Regulations, 2010 CFR
2010-07-01
... dollar value of the project exceeds the prospectus threshold? 102-73.40 Section 102-73.40 Public... § 102-73.40 What happens if the dollar value of the project exceeds the prospectus threshold? Projects... the prospectus threshold. To obtain this approval, the Administrator of General Services will transmit...
41 CFR 102-73.40 - What happens if the dollar value of the project exceeds the prospectus threshold?
Code of Federal Regulations, 2011 CFR
2011-01-01
... dollar value of the project exceeds the prospectus threshold? 102-73.40 Section 102-73.40 Public... § 102-73.40 What happens if the dollar value of the project exceeds the prospectus threshold? Projects... the prospectus threshold. To obtain this approval, the Administrator of General Services will transmit...
41 CFR 102-73.40 - What happens if the dollar value of the project exceeds the prospectus threshold?
Code of Federal Regulations, 2012 CFR
2012-01-01
... dollar value of the project exceeds the prospectus threshold? 102-73.40 Section 102-73.40 Public... § 102-73.40 What happens if the dollar value of the project exceeds the prospectus threshold? Projects... the prospectus threshold. To obtain this approval, the Administrator of General Services will transmit...
Methods for automatic trigger threshold adjustment
Welch, Benjamin J; Partridge, Michael E
2014-03-18
Methods are presented for adjusting trigger threshold values to compensate for drift in the quiescent level of a signal monitored for initiating a data recording event, thereby avoiding false triggering conditions. Initial threshold values are periodically adjusted by re-measuring the quiescent signal level and adjusting the threshold values by an offset computation based upon the measured quiescent signal level drift. Re-computation of the trigger threshold values can be implemented on time-based or counter-based criteria. Additionally, a qualification width counter can be utilized to implement a requirement that a trigger threshold criterion be met a given number of times prior to initiating a data recording event, further reducing the possibility of a false triggering situation.
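A minimal sketch of the scheme, with illustrative offset, qualification width, and baseline values (the patent's actual parameters and interfaces are not reproduced here):

```python
import numpy as np

class AdaptiveTrigger:
    """The thresholds track drift in the quiescent signal level, and a
    qualification width counter requires the trigger criterion to hold for
    several consecutive samples before a recording event starts."""

    def __init__(self, offset=5.0, qual_width=3):
        self.offset = offset          # trigger offset around the quiescent level
        self.qual_width = qual_width  # consecutive samples required to qualify
        self.baseline = 0.0
        self.count = 0

    def recalibrate(self, quiet_samples):
        """Periodically re-measure the quiescent level; the thresholds
        (baseline +/- offset) shift with it, compensating for drift."""
        self.baseline = float(np.mean(quiet_samples))

    def step(self, x):
        """Feed one sample; return True when a recording event should start."""
        self.count = self.count + 1 if abs(x - self.baseline) > self.offset else 0
        return self.count >= self.qual_width

rng = np.random.default_rng(0)
trig = AdaptiveTrigger()
trig.recalibrate(rng.normal(0.2, 0.3, 200))   # quiescent level has drifted
signal = [0.3, 0.1, 6.0, 6.2, 6.1, 0.2]       # brief excursion above threshold
print([trig.step(x) for x in signal])         # third consecutive excursion fires
```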
Cost–effectiveness thresholds: pros and cons
Lauer, Jeremy A; De Joncheere, Kees; Edejer, Tessa; Hutubessy, Raymond; Kieny, Marie-Paule; Hill, Suzanne R
2016-01-01
Cost–effectiveness analysis is used to compare the costs and outcomes of alternative policy options. Each resulting cost–effectiveness ratio represents the magnitude of additional health gained per additional unit of resources spent. Cost–effectiveness thresholds allow cost–effectiveness ratios that represent good or very good value for money to be identified. In 2001, the World Health Organization’s Commission on Macroeconomics in Health suggested cost–effectiveness thresholds based on multiples of a country’s per-capita gross domestic product (GDP). In some contexts, in choosing which health interventions to fund and which not to fund, these thresholds have been used as decision rules. However, experience with the use of such GDP-based thresholds in decision-making processes at country level shows them to lack country specificity and this – in addition to uncertainty in the modelled cost–effectiveness ratios – can lead to the wrong decision on how to spend health-care resources. Cost–effectiveness information should be used alongside other considerations – e.g. budget impact and feasibility considerations – in a transparent decision-making process, rather than in isolation based on a single threshold value. Although cost–effectiveness ratios are undoubtedly informative in assessing value for money, countries should be encouraged to develop a context-specific process for decision-making that is supported by legislation, has stakeholder buy-in, for example the involvement of civil society organizations and patient groups, and is transparent, consistent and fair. PMID:27994285
[The new German general threshold limit value for dust--pro and contra the adoption in Austria].
Godnic-Cvar, Jasminka; Ponocny, Ivo
2004-01-01
Since it has been realised that inhalation of inert dust is one of the important confounding variables for the development of chronic bronchitis, the threshold values for occupational exposure to these dusts need to be further decreased. The German Commission for the Investigation of Health Hazards of Chemical Compounds in the Work Area (MAK-Commission) set a new threshold (MAK-Value) for inert dusts (4 mg/m3 for inhalable dust, 1.5 mg/m3 for respirable dust) in 1997. This value is much lower than the threshold values currently used worldwide. The aim of the present article is to assess the scientific plausibility of the methodology (databases and statistics) used to set these new German MAK-Values, with regard to their adoption in Austria. Although we believe that it is essential to lower the MAK-Value for inert dust in order to prevent the development of chronic bronchitis as a consequence of occupational exposure to inert dusts, the methodology used by the German MAK-Commission in 1997 to set the new MAK-Values does not justify the reduction of the threshold limit value. A carefully designed study to establish an appropriate scientific basis for setting a new threshold value for inert dusts in the workplace should be carried out. Meanwhile, at least the currently internationally applied threshold values should be adopted in Austria.
Boyd, Paul J
2006-12-01
The principal task in the programming of a cochlear implant (CI) speech processor is the setting of the electrical dynamic range (output) for each electrode, to ensure that a comfortable loudness percept is obtained for a range of input levels. This typically involves separate psychophysical measurement of the electrical threshold (θe) and upper tolerance levels using short current bursts generated by the fitting software. Anecdotal clinical experience and some experimental studies suggest that the measurement of θe is relatively unimportant and that the setting of upper tolerance limits is more critical for processor programming. The present study aims to test this hypothesis and examines in detail how acoustic thresholds and speech recognition are affected by the setting of the lower limit of the output ("programming threshold" or "PT"), to better understand the influence of this parameter and how it interacts with certain other programming parameters. Test programs (maps) were generated with PT set to artificially high and low values and tested on users of the MED-EL COMBI 40+ CI system. Acoustic thresholds and speech recognition scores (sentence tests) were measured for each of the test maps. Acoustic thresholds were also measured using maps with a range of output compression functions ("maplaws"). In addition, subjective reports were recorded regarding the presence of "background threshold stimulation", which is occasionally reported by CI users if PT is set to relatively high values when using the CIS strategy. Manipulation of PT was found to have very little effect. Setting PT to minimum produced a mean 5 dB (S.D. = 6.25) increase in acoustic thresholds, relative to thresholds with PT set normally, and had no statistically significant effect on speech recognition scores on a sentence test. On the other hand, the maplaw setting was found to have a significant effect on acoustic thresholds (raised as the maplaw is made more linear), which provides some theoretical explanation as to why PT has little effect when using the default maplaw of c = 500. Subjective reports of background threshold stimulation showed that most users could perceive a relatively loud auditory percept, in the absence of microphone input, when PT was set to double the behaviorally measured electrical thresholds (θe), but that this produced little intrusion when microphone input was present. The results of these investigations have direct clinical relevance, showing that the setting of PT is indeed relatively unimportant in terms of speech discrimination, but that it is worth ensuring that PT is not set excessively high, as this can produce distracting background stimulation. Indeed, it may even be set to minimum values without deleterious effect.
How to Assess the Value of Medicines?
Simoens, Steven
2010-01-01
This study aims to discuss approaches to assessing the value of medicines. Economic evaluation assesses value by means of the incremental cost-effectiveness ratio (ICER). Health is maximized by selecting medicines with increasing ICERs until the budget is exhausted. The budget size determines the value of the threshold ICER and vice versa. Alternatively, the threshold value can be inferred from pricing/reimbursement decisions, although such values vary between countries. Threshold values derived from the value-of-life literature depend on the technique used. The World Health Organization has proposed a threshold value tied to the national GDP. As decision makers may wish to consider multiple criteria, variable threshold values and weighted ICERs have been suggested. Other approaches (i.e., replacement approach, program budgeting and marginal analysis) have focused on improving resource allocation, rather than maximizing health subject to a budget constraint. Alternatively, the generalized optimization framework and multi-criteria decision analysis make it possible to consider other criteria in addition to value. PMID:21607066
How to assess the value of medicines?
Simoens, Steven
2010-01-01
This study aims to discuss approaches to assessing the value of medicines. Economic evaluation assesses value by means of the incremental cost-effectiveness ratio (ICER). Health is maximized by selecting medicines with increasing ICERs until the budget is exhausted. The budget size determines the value of the threshold ICER and vice versa. Alternatively, the threshold value can be inferred from pricing/reimbursement decisions, although such values vary between countries. Threshold values derived from the value-of-life literature depend on the technique used. The World Health Organization has proposed a threshold value tied to the national GDP. As decision makers may wish to consider multiple criteria, variable threshold values and weighted ICERs have been suggested. Other approaches (i.e., replacement approach, program budgeting and marginal analysis) have focused on improving resource allocation, rather than maximizing health subject to a budget constraint. Alternatively, the generalized optimization framework and multi-criteria decision analysis make it possible to consider other criteria in addition to value.
Fujisawa, Jun-Ichi; Osawa, Ayumi; Hanaya, Minoru
2016-08-10
Photoinduced carrier injection from dyes to inorganic semiconductors is a crucial process in various dye-sensitized solar energy conversions such as photovoltaics and photocatalysis. It has been reported that an energy offset larger than 0.2-0.3 eV (the threshold value) is required for efficient electron injection from excited dyes to metal-oxide semiconductors such as titanium dioxide (TiO2). Because the energy offset directly causes loss in the potential of injected electrons, minimizing the energy offset is a crucial issue for efficient solar energy conversions. However, a fundamental understanding of the energy offset, especially the threshold value, has not yet been obtained. In this paper, we report the origin of the threshold value of the energy offset, solving the long-standing questions of why such a large energy offset is necessary for electron injection and which factors govern the threshold value, and suggest a strategy to minimize the threshold value. The threshold value is determined by the sum of two reorganization energies in the one-electron reduction of semiconductors and of typically used donor-acceptor (D-A) dyes. In fact, the estimated values (0.21-0.31 eV) for several D-A dyes are in good agreement with the threshold value, supporting our conclusion. In addition, our results reveal that the threshold value can be reduced by enlarging the π-conjugated system of the acceptor moiety in dyes and enhancing its structural rigidity. Furthermore, we extend the analysis to hole injection from excited dyes to semiconductors. In this case, the threshold value is given by the sum of two reorganization energies in the one-electron oxidation of semiconductors and D-A dyes.
Ran, Yang; Su, Rongtao; Ma, Pengfei; Wang, Xiaolin; Zhou, Pu; Si, Lei
2016-05-10
We present a new quantitative index, the standard deviation, to measure the homogeneity of spectral lines in a fiber amplifier system, so as to find the relation between the stimulated Brillouin scattering (SBS) threshold and the homogeneity of the corresponding spectral lines. A theoretical model is built and a simulation framework established to estimate the SBS threshold when input spectra with different homogeneities are set. In our experiment, by setting the phase modulation voltage to a constant value and the modulation frequency to different values, spectral lines with different homogeneities can be obtained. The experimental results show that the SBS threshold is negatively correlated with the standard deviation of the modulated spectrum, which is in good agreement with the theoretical results. When the phase modulation voltage is confined to 10 V and the modulation frequency is set to 80 MHz, the standard deviation of the modulated spectrum equals 0.0051, the lowest value in our experiment; thus, at this setting, the highest SBS threshold is achieved. This standard deviation can be a good quantitative index for evaluating the power-scaling potential of a fiber amplifier system, and a design guideline for suppressing SBS more effectively.
NASA Astrophysics Data System (ADS)
Sheshukov, Aleksey Y.; Sekaluvu, Lawrence; Hutchinson, Stacy L.
2018-04-01
Topographic index (TI) models have been widely used to predict trajectories and initiation points of ephemeral gullies (EGs) in agricultural landscapes. Prediction of EGs strongly relies on the selected value of the critical TI threshold, and the accuracy depends on topographic features, agricultural management, and datasets of observed EGs. This study statistically evaluated the predictions of TI models in two paired watersheds in Central Kansas that had different levels of structural disturbance due to implemented conservation practices. Four TI models with sole dependency on the topographic factors of slope, contributing area, and planform curvature were used in this study. The observed EGs were obtained by field reconnaissance and through the process of hydrological reconditioning of digital elevation models (DEMs). Kernel Density Estimation analysis was used to evaluate the TI distribution within a 10-m buffer of the observed EG trajectories. The EG occurrence within catchments was analyzed using kappa statistics of the error matrix approach, while the lengths of predicted EGs were compared with the observed dataset using the Nash-Sutcliffe Efficiency (NSE) statistic. The TI frequency analysis produced a bi-modal distribution of topographic indices, with the pixels within the EG trajectory having a higher peak. The graphs of kappa and NSE versus critical TI threshold showed similar profiles for all four TI models and both watersheds, with the maximum value representing the best comparison with the observed data. The Compound Topographic Index (CTI) model presented the overall best accuracy, with an NSE of 0.55 and a kappa of 0.32. The statistics for the disturbed watershed showed higher best critical TI threshold values than for the undisturbed watershed. Structural conservation practices implemented in the disturbed watershed reduced ephemeral channels in headwater catchments, thus producing less variability in catchments with EGs. The variation in critical thresholds for all TI models suggested that TI models tend to predict EG occurrence and length over a range of thresholds rather than at a single best value.
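For reference, both skill scores used above can be computed in a few lines (a generic sketch, not the authors' code):

    import numpy as np

    def nse(obs, sim):
        """Nash-Sutcliffe Efficiency: 1 is perfect; 0 is no better than the mean."""
        o, s = np.asarray(obs, float), np.asarray(sim, float)
        return 1.0 - np.sum((o - s) ** 2) / np.sum((o - o.mean()) ** 2)

    def kappa(tp, fp, fn, tn):
        """Cohen's kappa from a 2x2 error matrix of EG presence/absence."""
        n = tp + fp + fn + tn
        po = (tp + tn) / n                                              # observed agreement
        pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2   # chance agreement
        return (po - pe) / (1 - pe)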
A Size Effect on the Fatigue Crack Growth Rate Threshold of Alloy 718
NASA Technical Reports Server (NTRS)
Garr, K. R.; Hresko, G. C., III
1998-01-01
Fatigue crack growth rate (FCGR) tests were conducted on Alloy 718 in the solution annealed and aged condition at room temperature. In each test, the FCGR threshold was measured using the decreasing ΔK method. Initial testing was at two facilities, one of which used C(T) specimens with W = 127 mm. Previous data at the other facility had been obtained with specimens with W = 50.8 mm. A comparison of test results at R = 0.1 showed that the threshold for the 127 mm specimen was considerably higher than that of the 50.8 mm specimen. A check showed that this difference was not due to a heat-to-heat or lab-to-lab variation. Additional tests were conducted on specimens with W = 25.4 mm and at other R values. Data for the various specimens are presented along with the parameters usually used to describe threshold behavior.
Dervaux, Benoît; Baseilhac, Eric; Fagon, Jean-Yves; Biot, Claire; Blachier, Corinne; Braun, Eric; Debroucker, Frédérique; Detournay, Bruno; Ferretti, Carine; Granger, Muriel; Jouan-Flahault, Chrystel; Lussier, Marie-Dominique; Meyer, Arlette; Muller, Sophie; Pigeon, Martine; De Sahb, Rima; Sannié, Thomas; Sapède, Claudine; Vray, Muriel
2014-01-01
Decree No. 2012-1116 of 2 October 2012 on medico-economic assignments of the French National Authority for Health (Haute autorité de santé, HAS) significantly alters the conditions for accessing the health products market in France. This paper presents a theoretical framework for interpreting the results of the economic evaluation of health technologies and summarises the facts available in France for developing benchmarks that will be used to interpret incremental cost-effectiveness ratios. This literature review shows that it is difficult to determine a threshold value, but it is also difficult to interpret the incremental cost-effectiveness ratio (ICER) results without one. In this context, round table participants favour a pragmatic approach based on "benchmarks", as opposed to a threshold value, from an interpretative and normative perspective, i.e. benchmarks that can change over time based on feedback. © 2014 Société Française de Pharmacologie et de Thérapeutique.
An improved PRoPHET routing protocol in delay tolerant network.
Han, Seung Deok; Chung, Yun Won
2015-01-01
In a delay tolerant network (DTN), an end-to-end path is not guaranteed and packets are delivered from a source node to a destination node via store-carry-forward based routing. In a DTN, a source node or an intermediate node stores packets in a buffer and carries them while it moves around. These packets are forwarded to other nodes based on predefined criteria and are finally delivered to a destination node via multiple hops. In this paper, we improve the dissemination speed of the PRoPHET (Probabilistic Routing Protocol using History of Encounters and Transitivity) protocol by employing the epidemic protocol for disseminating a message m whenever its forwarding counter and hop counter values are smaller than or equal to the threshold values, as sketched below. The performance of the proposed protocol was analyzed in terms of delivery probability, average delay, and overhead ratio. Numerical results show that the proposed protocol can improve the delivery probability, average delay, and overhead ratio of the PRoPHET protocol by appropriately selecting the threshold forwarding counter and threshold hop counter values.
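The hybrid forwarding rule reduces to a small decision function (a sketch; the field names and threshold values are ours, not from the paper):

    FORWARD_THRESHOLD = 3   # hypothetical threshold forwarding counter
    HOP_THRESHOLD = 2       # hypothetical threshold hop counter

    def should_forward(msg, my_pred, peer_pred):
        # Epidemic phase: flood while both counters are within their thresholds.
        if msg["forward_count"] <= FORWARD_THRESHOLD and msg["hop_count"] <= HOP_THRESHOLD:
            return True
        # PRoPHET phase: forward only to peers with higher delivery predictability.
        return peer_pred > my_pred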
Simple Model for Identifying Critical Regions in Atrial Fibrillation
NASA Astrophysics Data System (ADS)
Christensen, Kim; Manani, Kishan A.; Peters, Nicholas S.
2015-01-01
Atrial fibrillation (AF) is the most common abnormal heart rhythm and the single biggest cause of stroke. Ablation, destroying regions of the atria, is applied largely empirically and can be curative but with a disappointing clinical success rate. We design a simple model of activation wave front propagation on an anisotropic structure mimicking the branching network of heart muscle cells. This integration of phenomenological dynamics and pertinent structure shows how AF emerges spontaneously when the transverse cell-to-cell coupling decreases, as occurs with age, beyond a threshold value. We identify critical regions responsible for the initiation and maintenance of AF, the ablation of which terminates AF. The simplicity of the model allows us to calculate analytically the risk of arrhythmia and express the threshold value of transversal cell-to-cell coupling as a function of the model parameters. This threshold value decreases with increasing refractory period by reducing the number of critical regions which can initiate and sustain microreentrant circuits. These biologically testable predictions might inform ablation therapies and arrhythmic risk assessment.
Lindenblatt, G.; Silny, J.
2006-01-01
Leakage currents, tiny currents flowing from an everyday-life appliance through the body to the ground, can cause a non-adequate perception (called electrocutaneous sensation, ECS) or even pain and should be avoided. Safety standards for low-frequency range are based on experimental results of current thresholds of electrocutaneous sensations, which however show a wide range between about 50 μA (rms) and 1000 μA (rms). In order to be able to explain these differences, the perception threshold was measured repeatedly in experiments with test persons under identical experimental setup, but by means of different methods (measuring strategies), namely: direct adjustment, classical threshold as amperage of 50% perception probability, and confidence rating procedure of signal detection theory. The current is injected using a 1 cm2 electrode at the highly touch sensitive part of the index fingertip. These investigations show for the first time that the threshold of electrocutaneous sensations is influenced both by adaptation to the non-adequate stimulus and individual, emotional factors. Therefore, classical methods, on which the majority of the safety investigations are based, cannot be used to determine a leakage current threshold. The confidence rating procedure of the modern signal detection theory yields a value of 179.5 μA (rms) at 50 Hz power supply net frequency as the lower end of the 95% confidence range considering the variance in the investigated group. This value is expected to be free of adaptation influences, and is distinctly lower than the European limits and supports the stricter regulations of Canada and USA. PMID:17111461
An adaptive design for updating the threshold value of a continuous biomarker.
Spencer, Amy V; Harbron, Chris; Mander, Adrian; Wason, James; Peers, Ian
2016-11-30
Potential predictive biomarkers are often measured on a continuous scale, but in practice, a threshold value to divide the patient population into biomarker 'positive' and 'negative' is desirable. Early phase clinical trials are increasingly using biomarkers for patient selection, but at this stage, it is likely that little will be known about the relationship between the biomarker and the treatment outcome. We describe a single-arm trial design with adaptive enrichment, which can increase power to demonstrate efficacy within a patient subpopulation, the parameters of which are also estimated. Our design enables us to learn about the biomarker and optimally adjust the threshold during the study, using a combination of generalised linear modelling and Bayesian prediction. At the final analysis, a binomial exact test is carried out, allowing the hypothesis that 'no population subset exists in which the novel treatment has a desirable response rate' to be tested. Through extensive simulations, we are able to show increased power over fixed threshold methods in many situations without increasing the type-I error rate. We also show that estimates of the threshold, which defines the population subset, are unbiased and often more precise than those from fixed threshold studies. We provide an example of the method applied (retrospectively) to publicly available data from a study of the use of tamoxifen after mastectomy by the German Breast Study Group, where progesterone receptor is the biomarker of interest. © 2016 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
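The final analysis reduces to an exact binomial test in the selected subset; a sketch under assumed counts (the Bayesian threshold-updating machinery is not reproduced here):

    from scipy.stats import binomtest

    # Hypothetical final analysis: 14 responders among 30 biomarker-positive
    # patients, against a null response rate of 20%.
    result = binomtest(k=14, n=30, p=0.20, alternative="greater")
    print(result.pvalue)   # a small p-value rejects 'no desirable subset exists'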
McCaffrey, Nikki; Agar, Meera; Harlum, Janeane; Karnon, Jonathon; Currow, David; Eckermann, Simon
2015-01-01
Introduction Comparing multiple, diverse outcomes with cost-effectiveness analysis (CEA) is important, yet challenging in areas like palliative care where domains are unamenable to integration with survival. Generic multi-attribute utility values exclude important domains and non-health outcomes, while partial analyses—where outcomes are considered separately, with their joint relationship under uncertainty ignored—lead to incorrect inference regarding preferred strategies. Objective The objective of this paper is to consider whether such decision making can be better informed with alternative presentation and summary measures, extending methods previously shown to have advantages in multiple strategy comparison. Methods A multiple outcomes CEA of a home-based palliative care model (PEACH) relative to usual care is undertaken in cost disutility (CDU) space and compared with analysis on the cost-effectiveness plane. Summary measures developed for comparing strategies across potential threshold values for multiple outcomes include: expected net loss (ENL) planes quantifying differences in expected net benefit; the ENL contour identifying preferred strategies minimising ENL and their expected value of perfect information; and cost-effectiveness acceptability planes showing the probability of strategies minimising ENL. Results Conventional analysis suggests PEACH is cost-effective when the threshold value per additional day at home (λ1) exceeds $1,068, or is dominated by usual care when only the proportion of home deaths is considered. In contrast, neither alternative dominates in CDU space, where cost and outcomes are jointly considered and the optimal strategy depends on threshold values. For example, PEACH minimises ENL when λ1 = $2,000 and λ2 = $2,000 (the threshold value for dying at home), with a 51.6% chance of PEACH being cost-effective. Conclusion Comparison in CDU space and associated summary measures have distinct advantages for multiple domain comparisons, aiding transparent and robust joint comparison of costs and multiple effects under uncertainty across potential threshold values for effect, better informing net benefit assessment and related reimbursement and research decisions. PMID:25751629
Martin, J.; Runge, M.C.; Nichols, J.D.; Lubow, B.C.; Kendall, W.L.
2009-01-01
Thresholds and their relevance to conservation have become a major topic of discussion in the ecological literature. Unfortunately, in many cases the lack of a clear conceptual framework for thinking about thresholds may have led to confusion in attempts to apply the concept of thresholds to conservation decisions. Here, we advocate a framework for thinking about thresholds in terms of a structured decision making process. The purpose of this framework is to promote a logical and transparent process for making informed decisions for conservation. Specification of such a framework leads naturally to consideration of definitions and roles of different kinds of thresholds in the process. We distinguish among three categories of thresholds. Ecological thresholds are values of system state variables at which small changes bring about substantial changes in system dynamics. Utility thresholds are components of management objectives (determined by human values) and are values of state or performance variables at which small changes yield substantial changes in the value of the management outcome. Decision thresholds are values of system state variables at which small changes prompt changes in management actions in order to reach specified management objectives. The approach that we present focuses directly on the objectives of management, with an aim to providing decisions that are optimal with respect to those objectives. This approach clearly distinguishes the components of the decision process that are inherently subjective (management objectives, potential management actions) from those that are more objective (system models, estimates of system state). Optimization based on these components then leads to decision matrices specifying optimal actions to be taken at various values of system state variables. Values of state variables separating different actions in such matrices are viewed as decision thresholds. Utility thresholds are included in the objectives component, and ecological thresholds may be embedded in models projecting consequences of management actions. Decision thresholds are determined by the above-listed components of a structured decision process. These components may themselves vary over time, inducing variation in the decision thresholds inherited from them. These dynamic decision thresholds can then be determined using adaptive management. We provide numerical examples (that are based on patch occupancy models) of structured decision processes that include all three kinds of thresholds. © 2009 by the Ecological Society of America.
NASA Astrophysics Data System (ADS)
de Smet, Filip; Aeyels, Dirk
2010-12-01
We consider the stationary and the partially synchronous regimes in an all-to-all coupled neural network consisting of an infinite number of leaky integrate-and-fire neurons. Using analytical tools as well as simulation results, we show that two threshold values for the coupling strength may be distinguished. Below the lower threshold, no synchronization is possible; above the upper threshold, the stationary regime is unstable and partial synchrony prevails. In between there is a range of values for the coupling strength where both regimes may be observed. The assumption of an infinite number of neurons is crucial: simulations with a finite number of neurons indicate that above the lower threshold partial synchrony always prevails—but with a transient time that may be unbounded with increasing system size. For values of the coupling strength in a neighborhood of the lower threshold, the finite model repeatedly builds up toward synchronous behavior, followed by a sudden breakdown, after which the synchronization is slowly built up again. The “transient” time needed to build up synchronization again increases with increasing system size, and in the limit of an infinite number of neurons we retrieve stationary behavior. Similarly, within some range for the coupling strength in this neighborhood, a stable synchronous solution may exist for an infinite number of neurons.
The interplay between cooperativity and diversity in model threshold ensembles.
Cervera, Javier; Manzanares, José A; Mafe, Salvador
2014-10-06
The interplay between cooperativity and diversity is crucial for biological ensembles because single molecule experiments show a significant degree of heterogeneity and also for artificial nanostructures because of the high individual variability characteristic of nanoscale units. We study the cross-effects between cooperativity and diversity in model threshold ensembles composed of individually different units that show a cooperative behaviour. The units are modelled as statistical distributions of parameters (the individual threshold potentials here) characterized by central and width distribution values. The simulations show that the interplay between cooperativity and diversity results in ensemble-averaged responses of interest for the understanding of electrical transduction in cell membranes, the experimental characterization of heterogeneous groups of biomolecules and the development of biologically inspired engineering designs with individually different building blocks. © 2014 The Author(s) Published by the Royal Society. All rights reserved.
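A minimal sketch of such an ensemble (diversity only; the cooperative coupling between units is omitted for brevity): each unit fires when the input exceeds its own threshold potential, drawn from a Gaussian with a central value and a width:

    import numpy as np

    rng = np.random.default_rng(0)

    def ensemble_response(inputs, mu=0.0, sigma=0.2, n_units=1000):
        thresholds = rng.normal(mu, sigma, n_units)
        # Ensemble-averaged response: fraction of units whose threshold is exceeded.
        return [float((x > thresholds).mean()) for x in inputs]

    print(ensemble_response([-0.5, 0.0, 0.5]))   # diversity smooths the step response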
Ko, Hae-Youn; Kang, Si-Mook; Kim, Hee Eun; Kwon, Ho-Keun; Kim, Baek-Il
2015-05-01
Detection of approximal caries lesions can be difficult due to their anatomical position. This study aimed to assess the ability of quantitative light-induced fluorescence-digital (QLF-D) to detect approximal caries, and to compare its performance with those of the International Caries Detection and Assessment System II (ICDAS II) and digital radiography (DR). Extracted permanent teeth (n=100) were selected and mounted in pairs. The simulation pairs were assessed by one calibrated dentist using each detection method. After all the examinations, the teeth (n=95) were sectioned and examined histologically as the gold standard. The modalities were compared in terms of sensitivity, specificity, and areas under receiver operating characteristic curves (AUROC) at enamel (D1) and dentine (D3) levels. Intra-examiner reliability was assessed for all modalities. At the D1 threshold, ICDAS II presented the highest sensitivity (0.80) while DR showed the highest specificity (0.89); however, the methods with the greatest AUC values at the D1 threshold were DR and QLF-D (0.80 and 0.80, respectively). At the D3 threshold, the methods with the highest sensitivity were ICDAS II and QLF-D (0.64 and 0.64, respectively) while the method with the lowest sensitivity was DR (0.50). With regard to the AUC values at the D3 threshold, QLF-D presented the highest value (0.76). All modalities showed excellent intra-examiner reliability. The newly developed QLF-D was not only able to detect approximal caries, but also showed performance comparable to visual inspection and radiography in doing so. QLF-D has the potential to be a useful detection method for approximal caries. Copyright © 2015 Elsevier Ltd. All rights reserved.
Randomness fault detection system
NASA Technical Reports Server (NTRS)
Russell, B. Don (Inventor); Aucoin, B. Michael (Inventor); Benner, Carl L. (Inventor)
1996-01-01
A method and apparatus are provided for detecting a fault on a power line carrying a line parameter such as a load current. The apparatus monitors and analyzes the load current to obtain an energy value. The energy value is compared to a threshold value stored in a buffer. If the energy value is greater than the threshold value, a counter is incremented. If the energy value is greater than a high value threshold or less than a low value threshold, then a second counter is incremented. If the difference between two subsequent energy values is greater than a constant, then a third counter is incremented. A fault signal is issued if the counter is greater than a counter limit value and either the second counter is greater than a second limit value or the third counter is greater than a third limit value.
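The counting logic reads directly as code (constants are placeholders, not values from the patent; the 'difference' test is taken as an absolute difference):

    def detect_fault(energies, threshold, high, low, delta, limit1, limit2, limit3):
        c1 = c2 = c3 = 0
        prev = None
        for e in energies:
            if e > threshold:
                c1 += 1                                  # first counter
            if e > high or e < low:
                c2 += 1                                  # second counter
            if prev is not None and abs(e - prev) > delta:
                c3 += 1                                  # third counter
            prev = e
        return c1 > limit1 and (c2 > limit2 or c3 > limit3)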
El-Zaatari, Bassil M; Shete, Abhishek U; Adzima, Brian J; Kloxin, Christopher J
2016-09-14
The kinetic behaviour of the photo-induced copper(I) catalyzed azide-alkyne cycloaddition (CuAAC) reaction was studied in detail using real-time Fourier transform infrared (FTIR) spectroscopy in both a solvent-based monofunctional system and a neat polymer-network-forming system. The results in the solvent-based system showed near first-order kinetics in copper and photoinitiator concentrations up to a threshold value, at which the kinetics switch to zeroth-order. This kinetic shift shows that the photo-CuAAC reaction is not susceptible to side reactions such as copper disproportionation, copper(I) reduction, and radical termination at the early stages of the reaction. The overall reaction rate and conversion are highly dependent on the initial concentrations of photoinitiator and copper(II) as well as their relative ratios. The conversion decreased when an excess of photoinitiator, relative to its threshold value, was utilized. Interestingly, the reaction showed an induction period at relatively low intensities; the induction period is shortened by increasing light intensity and photoinitiator concentration. The reaction trends and limitations were further observed in a solventless polymer-network-forming system, which exhibited a similar copper and photoinitiator threshold behaviour.
On the Estimation of the Cost-Effectiveness Threshold: Why, What, How?
Vallejo-Torres, Laura; García-Lorenzo, Borja; Castilla, Iván; Valcárcel-Nazco, Cristina; García-Pérez, Lidia; Linertová, Renata; Polentinos-Castro, Elena; Serrano-Aguilar, Pedro
2016-01-01
Many health care systems claim to incorporate the cost-effectiveness criterion in their investment decisions. Information on the system's willingness to pay per effectiveness unit, normally measured as quality-adjusted life-years (QALYs), however, is not available in most countries. This is partly because of the controversy that remains around the use of a cost-effectiveness threshold, about what the threshold ought to represent, and about the appropriate methodology to arrive at a threshold value. The aim of this article was to identify and critically appraise the conceptual perspectives and methodologies used to date to estimate the cost-effectiveness threshold. We provided an in-depth discussion of different conceptual views and undertook a systematic review of empirical analyses. Identified studies were categorized into the two main conceptual perspectives that argue that the threshold should reflect 1) the value that society places on a QALY and 2) the opportunity cost of investment to the system given budget constraints. These studies showed different underpinning assumptions, strengths, and limitations, which are highlighted and discussed. Furthermore, this review allowed us to compare the cost-effectiveness threshold estimates derived from different types of studies. We found that thresholds based on society's valuation of a QALY are generally larger than thresholds resulting from estimating the opportunity cost to the health care system. This implies that some interventions with positive social net benefits, as informed by individuals' preferences, might not be an appropriate use of resources under fixed budget constraints. Copyright © 2016 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
The limits of thresholds: silica and the politics of science, 1935 to 1990.
Markowitz, G; Rosner, D
1995-01-01
Since the 1930s threshold limit values have been presented as an objectively established measure of US industrial safety. However, there have been important questions raised regarding the adequacy of these thresholds for protecting workers from silicosis. This paper explores the historical debates over silica threshold limit values and the intense political negotiation that accompanied their establishment. In the 1930s and early 1940s, a coalition of business, public health, insurance, and political interests formed in response to a widely perceived "silicosis crisis." Part of the resulting program aimed at containing the crisis was the establishment of threshold limit values. Yet silicosis cases continued to be documented. By the 1960s these cases had become the basis for a number of revisions to the thresholds. In the 1970s, following a National Institute for Occupational Safety and Health recommendation to lower the threshold limit value for silica and to eliminate sand as an abrasive in blasting, industry fought attempts to make the existing values more stringent. This paper traces the process by which threshold limit values became part of a compromise between the health of workers and the economic interests of industry. PMID:7856788
NASA Astrophysics Data System (ADS)
Amanda, A. R.; Widita, R.
2016-03-01
The aim of this research is to compare several lung image segmentation methods on the basis of performance evaluation parameters (Mean Square Error (MSE) and Peak Signal-to-Noise Ratio (PSNR)). In this study, the methods compared were connected threshold, neighborhood connected, and threshold level set segmentation on images of the lungs. These three methods require one important parameter, i.e., the threshold. The threshold interval was obtained from the histogram of the original image. The software used to segment the images was InsightToolkit-4.7.0 (ITK). Five lung images were analyzed, and the results were compared using the performance evaluation parameters computed in MATLAB. A segmentation method is considered to be of good quality if it has the smallest MSE value and the highest PSNR. The results show that connected threshold gave the best values for four of the five sample images, while threshold level set segmentation was best for the remaining one. Therefore, it can be concluded that the connected threshold method is better than the other two methods for these cases.
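Both evaluation parameters are simple to compute (a generic sketch in Python rather than MATLAB, assuming 8-bit images):

    import numpy as np

    def mse(a, b):
        a, b = np.asarray(a, float), np.asarray(b, float)
        return np.mean((a - b) ** 2)

    def psnr(a, b, peak=255.0):
        m = mse(a, b)
        # Higher PSNR (in dB) means the segmentation is closer to the reference.
        return float("inf") if m == 0 else 10.0 * np.log10(peak ** 2 / m)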
Hill, Emily; Bleck, Thomas P; Singh, Kamaljit; Ouyang, Bichun; Busl, Katharina M
2017-06-01
In a febrile patient with a ventriculostomy, diagnosing or excluding bacterial or microbial ventriculitis is difficult, as conventional markers in the analysis of cerebrospinal fluid (CSF) are not applicable due to the presence of blood and inflammation. CSF lactate has been shown to be a useful indicator of bacterial meningitis in CSF obtained via lumbar puncture, but little and heterogeneous data exist on patients with ventriculostomies. We reviewed all CSF analyses obtained via ventriculostomy in patients admitted to our tertiary medical center between 2008 and 2013, and constructed receiver operating characteristic (ROC) curves to evaluate the accuracy of CSF lactate concentration in discriminating a positive CSF culture from a negative one in the setting of ventriculostomy and prophylactic antibiosis. Among 467 CSF lactate values, there were 22 corresponding CSF cultures with bacterial growth. Sensitivity and specificity were evaluated for CSF lactate at threshold values of 3, 4, 5, and 6 mmol/L; both exceeded 70% at the 4 mmol/L threshold. The lowest threshold value of 3 mmol/L gave a higher sensitivity of 81.8%, and the highest threshold value gave a high specificity of 94.2%, but these values had poor corresponding specificity and sensitivity, respectively. The area under the curve was 0.82 (95% CI 0.72, 0.91). Our data from a large sample of CSF studies in patients with ventriculostomy indicate that no single value of CSF lactate provided both sensitivity and specificity high enough to be regarded as a reliable test. Copyright © 2017 Elsevier B.V. All rights reserved.
Fernández-Caballero Rico, Jose Ángel; Chueca Porcuna, Natalia; Álvarez Estévez, Marta; Mosquera Gutiérrez, María Del Mar; Marcos Maeso, María Ángeles; García, Federico
2018-02-01
To show how to generate a consensus sequence from the massive parallel sequencing data obtained in routine HIV antiretroviral resistance studies, which may be suitable for molecular epidemiology studies. Paired Sanger (Trugene-Siemens) and next-generation sequencing (NGS) (454 GSJunior-Roche) HIV RT and protease sequences from 62 patients were studied. NGS consensus sequences were generated using Mesquite, using 10%, 15%, and 20% thresholds. Molecular evolutionary genetics analysis (MEGA) was used for phylogenetic studies. At a 10% threshold, NGS-Sanger sequences from 17/62 patients were phylogenetically related, with a median bootstrap value of 88% (IQR 83.5-95.5). Association increased to 36/62 sequences, with a median bootstrap of 94% (IQR 85.5-98), using a 15% threshold. Maximum association was at the 20% threshold, with 61/62 sequences associated and a median bootstrap value of 99% (IQR 98-100). A safe method is presented to generate consensus sequences from HIV-NGS data at a 20% threshold, which will prove useful for molecular epidemiological studies. Copyright © 2016 Elsevier España, S.L.U. and Sociedad Española de Enfermedades Infecciosas y Microbiología Clínica. All rights reserved.
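A minimal version of threshold-based consensus calling (our own sketch, not the Mesquite procedure; positions where several bases pass the threshold are collapsed here to the majority base rather than an IUPAC ambiguity code):

    from collections import Counter

    def consensus_base(column_bases, threshold=0.20):
        counts = Counter(column_bases)
        total = sum(counts.values())
        # Keep only bases whose within-column frequency reaches the threshold.
        kept = {b: c for b, c in counts.items() if c / total >= threshold}
        return max(kept, key=kept.get) if kept else "N"

    print(consensus_base("AAAAAAAG"))   # G at 12.5% < 20% threshold -> 'A'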
NASA Astrophysics Data System (ADS)
Chandra, B. P.; Chandra, V. K.; Jha, Piyush; Sonwane, V. D.
2016-06-01
The threshold pressure for elastico-mechanoluminescence (EML) of ZnS:Mn macrocrystals is 20 MPa, and ZnS:Cu,Al macrocrystals do not show ML during elastic deformation. However, the threshold pressure for EML of ZnS:Mn and ZnS:Cu,Cl microcrystals and nanocrystals is nearly 1 MPa. It thus seems that the high concentration of defects in microcrystalline and nanocrystalline ZnS:Mn and ZnS:Cu,Cl produces disorder and distortion in the lattice and changes the local crystal structure near impurities; consequently, the enhanced piezoelectric constant of the local region produces EML at low applied pressure. The threshold pressure for the ML of ZnS:Mn and ZnS:Cu,Al single macrocrystals is higher because such crystals possess comparatively few defects near the impurities, where the phase transition is not possible, and their ML appears only at high stress because the bulk piezoelectric constant is smaller. Thus, the size-dependent threshold pressure for ML supports an origin of EML in piezoelectricity in local regions of the crystals. The findings of the present investigation may be useful in tailoring phosphors emitting intense EML of different colours.
Dobie, Robert A; Wojcik, Nancy C
2015-01-01
Objectives The US Occupational Safety and Health Administration (OSHA) Noise Standard provides the option for employers to apply age corrections to employee audiograms to consider the contribution of ageing when determining whether a standard threshold shift has occurred. Current OSHA age-correction tables are based on 40-year-old data, with small samples and an upper age limit of 60 years. By comparison, recent data (1999–2006) show that hearing thresholds in the US population have improved. Because hearing thresholds have improved, and because older people are increasingly represented in noisy occupations, the OSHA tables no longer represent the current US workforce. This paper presents 2 options for updating the age-correction tables and extending values to age 75 years using recent population-based hearing survey data from the US National Health and Nutrition Examination Survey (NHANES). Both options provide scientifically derived age-correction values that can be easily adopted by OSHA to expand their regulatory guidance to include older workers. Methods Regression analysis was used to derive new age-correction values using audiometric data from the 1999–2006 US NHANES. Using the NHANES median, better-ear thresholds fit to simple polynomial equations, new age-correction values were generated for both men and women for ages 20–75 years. Results The new age-correction values are presented as 2 options. The preferred option is to replace the current OSHA tables with the values derived from the NHANES median better-ear thresholds for ages 20–75 years. The alternative option is to retain the current OSHA age-correction values up to age 60 years and use the NHANES-based values for ages 61–75 years. Conclusions Recent NHANES data offer a simple solution to the need for updated, population-based, age-correction tables for OSHA. The options presented here provide scientifically valid and relevant age-correction values which can be easily adopted by OSHA to expand their regulatory guidance to include older workers. PMID:26169804
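The regression step can be sketched as follows (the ages and thresholds below are made up, not NHANES values):

    import numpy as np

    ages = np.array([20, 30, 40, 50, 60, 70, 75])
    median_thresh_db = np.array([3, 4, 6, 10, 16, 25, 30])   # hypothetical medians

    coeffs = np.polyfit(ages, median_thresh_db, deg=2)       # simple polynomial fit
    # Age-correction value: fitted threshold shift relative to age 20.
    correction = np.polyval(coeffs, ages) - np.polyval(coeffs, 20)
    print(np.round(correction, 1))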
NASA Astrophysics Data System (ADS)
Hinsby, K.; Markager, S.; Kronvang, B.; Windolf, J.; Sonnenborg, T. O.; Thorling, L.
2012-02-01
Intensive farming has severe impacts on the chemical status of groundwater and streams and consequently on the ecological status of dependent ecosystems. Eutrophication is a widespread problem in lakes and marine waters. Common problems are hypoxia, algal blooms and fish kills, and loss of water clarity, underwater vegetation, biodiversity, and recreational value. In this paper we evaluate the nitrogen (N) and phosphorus (P) chemistry of groundwater and surface water in a coastal catchment, the loadings and sources of N and P and their effect on the ecological status of an estuary. We calculate the necessary reductions in N and P loadings to the estuary for obtaining a good ecological status, which we define based on the number of days with N and P limitation, and the equivalent stream and groundwater threshold values assuming two different management options. The calculations are performed by the combined use of empirical models and a physically based 3-D integrated hydrological model of the whole catchment. The assessment of the ecological status indicates that the N and P loads to the investigated estuary should be reduced by a factor of 0.52 and 0.56, respectively, to restore good ecological status. Model estimates show that threshold total N concentrations should be in the range of 2.9 to 3.1 mg l-1 in inlet freshwater to Horsens Estuary and 6.0 to 9.3 mg l-1 in shallow aerobic groundwater (∼27-41 mg l-1 of nitrate), depending on the management measures implemented in the catchment. The situation for total P is more complex but data indicate that groundwater threshold values are not needed. The inlet freshwater threshold value for total P to Horsens Estuary for the selected management options is 0.084 mg l-1. Regional climate models project increasing winter precipitation and runoff in the investigated region resulting in increasing runoff and nutrient loads to coastal waters if present land use and farming practices continue. Hence, lower threshold values are required in the future to ensure good status of all water bodies and ecosystems.
NASA Astrophysics Data System (ADS)
Hinsby, K.; Markager, S.; Kronvang, B.; Windolf, J.; Sonnenborg, T. O.; Thorling, L.
2012-08-01
Intensive farming has severe impacts on the chemical status of groundwater and streams and consequently on the ecological status of dependent ecosystems. Eutrophication is a widespread problem in lakes and marine waters. Common problems are hypoxia, algal blooms, fish kills, and loss of water clarity, underwater vegetation, biodiversity and recreational value. In this paper we evaluate the nitrogen (N) and phosphorus (P) concentrations of groundwater and surface water in a coastal catchment, the loadings and sources of N and P, and their effect on the ecological status of an estuary. We calculate the necessary reductions in N and P loadings to the estuary for obtaining a good ecological status, which we define based on the number of days with N and P limitation, and the corresponding stream and groundwater threshold values assuming two different management options. The calculations are performed by the combined use of empirical models and a physically based 3-D integrated hydrological model of the whole catchment. The assessment of the ecological status indicates that the N and P loads to the investigated estuary should be reduced to levels corresponding to 52 and 56% of the current loads, respectively, to restore good ecological status. Model estimates show that threshold total N (TN) concentrations should be in the range of 2.9 to 3.1 mg l-1 in inlet freshwater (streams) to Horsens estuary and 6.0 to 9.3 mg l-1 in shallow aerobic groundwater (∼ 27-41 mg l-1 of nitrate), depending on the management measures implemented in the catchment. The situation for total P (TP) is more complex, but data indicate that groundwater threshold values are not needed. The stream threshold value for TP to Horsens estuary for the selected management options is 0.084 mg l-1. Regional climate models project increasing winter precipitation and runoff in the investigated region resulting in increasing runoff and nutrient loads to the Horsens estuary and many other coastal waters if present land use and farming practices continue. Hence, lower threshold values are required in many coastal catchments in the future to ensure good status of water bodies and ecosystems.
Aeolian Erosion on Mars - a New Threshold for Saltation
NASA Astrophysics Data System (ADS)
Teiser, J.; Musiolik, G.; Kruss, M.; Demirci, T.; Schrinski, B.; Daerden, F.; Smith, M. D.; Neary, L.; Wurm, G.
2017-12-01
The Martian atmosphere shows a large variety of dust activity, ranging from local dust devils to global dust storms. Sand motion has also been observed in the form of moving dunes. The dust entrainment into the Martian atmosphere is not well understood due to the small atmospheric pressure of only a few mbar. Laboratory experiments on Earth and numerical models were developed to understand the processes leading to dust lifting and saltation. Experiments so far suggested that large wind velocities are needed to reach the threshold shear velocity and to entrain dust into the atmosphere. In global circulation models this threshold shear velocity is typically reduced artificially to reproduce the observed dust activity. Although preceding experiments were designed to simulate Martian conditions, no experiment so far could scale all parameters to Martian conditions, as either the atmospheric or the gravitational conditions were not scaled. In this work, a first experimental study of saltation under Martian conditions is presented. Martian gravity is reached by a centrifuge on a parabolic flight, while pressure (6 mbar) and atmospheric composition (95% CO2, 5% air) are adjusted to Martian levels. A sample of JSC 1A (grain sizes from 10-100 µm) was used to simulate Martian regolith. The experiments showed that the reduced gravity (0.38 g) not only affects the weight of the dust particles, but also influences the packing density within the soil and therefore the cohesive forces. The measured threshold shear velocity of 0.82 m/s is significantly lower than the value measured at 1 g in ground experiments (1.01 m/s). Feeding the measured value into a global circulation model showed that no artificial reduction of the threshold shear velocity may be needed to reproduce the global dust distribution in the Martian atmosphere.
High efficiency low threshold current 1.3 μm InAs quantum dot lasers on on-axis (001) GaP/Si
NASA Astrophysics Data System (ADS)
Jung, Daehwan; Norman, Justin; Kennedy, M. J.; Shang, Chen; Shin, Bongki; Wan, Yating; Gossard, Arthur C.; Bowers, John E.
2017-09-01
We demonstrate highly efficient, low threshold InAs quantum dot lasers epitaxially grown on on-axis (001) GaP/Si substrates using molecular beam epitaxy. Electron channeling contrast imaging measurements show a threading dislocation density of 7.3 × 10⁶ cm⁻² from an optimized GaAs template grown on GaP/Si. The high-quality GaAs templates enable as-cleaved quantum dot lasers to achieve a room-temperature continuous-wave (CW) threshold current of 9.5 mA, a threshold current density as low as 132 A/cm², a single-side output power of 175 mW, and a wall-plug-efficiency of 38.4% at room temperature. As-cleaved QD lasers show ground-state CW lasing up to 80 °C. The application of a 95% high-reflectivity coating on one laser facet results in a CW threshold current of 6.7 mA, which is a record-low value for any kind of Fabry-Perot laser grown on Si.
Impulsive control of a continuous-culture and flocculation harvest chemostat model
NASA Astrophysics Data System (ADS)
Zhang, Tongqian; Ma, Wanbiao; Meng, Xinzhu
2017-12-01
In this paper, a new mathematical model describing the process of continuous culture and harvest of microalgae is proposed. By inputting medium and flocculant periodically at two different fixed moments, continuous culture and harvest of microalgae is implemented. The mathematical analysis is conducted and the whole dynamics of the model are investigated using the theory of impulsive differential equations. We find that the model has a microalgae-extinction periodic solution that is globally asymptotically stable when a certain threshold value is less than unity, and that the model is permanent when the threshold value is larger than unity. Then, according to the threshold, the control strategies of continuous culture and harvest of microalgae are discussed. The results show that continuous culture and harvest of microalgae can be achieved by adjusting a suitable input time and input amount of medium or flocculant. Finally, some numerical simulations are carried out to verify the control strategy.
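A crude numerical sketch of such an impulsive system (the equations and constants are illustrative, not the paper's model): growth and dilution run continuously, a nutrient pulse is applied at t = nT and a flocculation harvest at t = nT + tau:

    T, tau, dt = 10.0, 6.0, 0.01
    mu, delta = 0.4, 0.1           # uptake and dilution rates (assumed)
    s_in, harvest = 1.0, 0.5       # nutrient pulse size, harvested fraction

    s, x = 0.5, 0.1                # nutrient, microalgae biomass
    for step in range(int(50 * T / dt)):
        t = step * dt
        s += (-mu * s * x - delta * s) * dt    # continuous dynamics (Euler step)
        x += (mu * s * x - delta * x) * dt
        phase = t % T
        if phase < dt / 2:                     # medium input impulse at t = nT
            s += s_in
        if abs(phase - tau) < dt / 2:          # flocculant harvest at t = nT + tau
            x *= (1 - harvest)
    print(round(s, 3), round(x, 3))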
Threshold network of a financial market using the P-value of correlation coefficients
NASA Astrophysics Data System (ADS)
Ha, Gyeong-Gyun; Lee, Jae Woo; Nobi, Ashadun
2015-06-01
Threshold methods in financial networks are important tools for obtaining information about the financial state of a market. Previously, absolute thresholds of correlation coefficients have been used; however, they have no relation to the length of the time window. We assign a threshold value depending on the size of the time window by using the P-value concept of statistics. We construct a threshold network (TN) at the same threshold value for two different time window sizes in the Korean Composite Stock Price Index (KOSPI). We measure network properties, such as the edge density, clustering coefficient, assortativity coefficient, and modularity. We determine that a significant difference exists between the network properties of the two time windows at the same threshold, especially during crises. This implies that the market information depends on the length of the time window when constructing the TN. We apply the same technique to the Standard and Poor's 500 (S&P500) and observe similar results.
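In sketch form (the exact return definition and significance level are assumed), the construction connects two stocks whenever their sample correlation over the window is significant at p < alpha, so the implied correlation threshold adapts to the window size automatically:

    import numpy as np
    from scipy import stats

    def threshold_network(returns, alpha=0.05):
        n_assets = returns.shape[1]
        adj = np.zeros((n_assets, n_assets), dtype=bool)
        for i in range(n_assets):
            for j in range(i + 1, n_assets):
                r, p = stats.pearsonr(returns[:, i], returns[:, j])
                adj[i, j] = adj[j, i] = p < alpha   # edge only if significant
        return adj

    rng = np.random.default_rng(1)
    print(int(threshold_network(rng.normal(size=(250, 4))).sum()) // 2)  # edge count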
NASA Astrophysics Data System (ADS)
Leonarduzzi, Elena; Molnar, Peter; McArdell, Brian W.
2017-08-01
A high-resolution gridded daily precipitation data set was combined with a landslide inventory containing over 2000 events in the period 1972-2012 to analyze rainfall thresholds which lead to landsliding in Switzerland. We colocated triggering rainfall to landslides, developed distributions of triggering and nontriggering rainfall event properties, and determined rainfall thresholds and intensity-duration ID curves and validated their performance. The best predictive performance was obtained by the intensity-duration ID threshold curve, followed by peak daily intensity Imax and mean event intensity Imean. Event duration by itself had very low predictive power. A single country-wide threshold of Imax = 28 mm/d was extended into space by regionalization based on surface erodibility and local climate (mean daily precipitation). It was found that wetter local climate and lower erodibility led to significantly higher rainfall thresholds required to trigger landslides. However, we showed that the improvement in model performance due to regionalization was marginal and much lower than what can be achieved by having a high-quality landslide database. Reference cases in which the landslide locations and timing were randomized and the landslide sample size was reduced showed the sensitivity of the Imax rainfall threshold model. Jack-knife and cross-validation experiments demonstrated that the model was robust. The results reported here highlight the potential of using rainfall ID threshold curves and rainfall threshold values for predicting the occurrence of landslides on a country or regional scale with possible applications in landslide warning systems, even with daily data.
Ab initio molecular dynamics simulations of low energy recoil events in MgO
NASA Astrophysics Data System (ADS)
Petersen, B. A.; Liu, B.; Weber, W. J.; Zhang, Y.
2017-04-01
Low-energy recoil events in MgO are studied using ab initio molecular dynamics simulations to reveal the dynamic displacement processes and final defect configurations. Threshold displacement energies, Ed, are obtained for Mg and O along three low-index crystallographic directions, [100], [110], and [111]. The minimum values for Ed are found along the [110] direction consisting of the same element, either Mg or O atoms. Minimum threshold values of 29.5 eV for Mg and 25.5 eV for O, respectively, are suggested from the calculations. For other directions, the threshold energies are considerably higher: 65.5 and 150.0 eV for O along [111] and [100], and 122.5 eV for Mg along both the [111] and [100] directions, respectively. These results show that the recoil events in MgO are partial-charge-transfer assisted processes in which the charge transfer plays an important role. A similar trend is found in other oxide materials, where the threshold displacement energy correlates linearly with the peak partial-charge transfer, suggesting this behavior might be universal in ceramic oxides.
Buttery, Ron G; Takeoka, Gary R
2013-09-25
MS with GC-RI evidence was found for the presence of linden ether in cooked carrot (Daucus carota). Evaluation of the GC effluent from cooked carrot volatiles using aroma extract dilution analysis (AEDA) found linden ether with the highest flavor dilution (FD) factor. Others with 10-fold lower FD factors were β-ionone, eugenol, the previously unidentified β-damascenone, (E)-2-nonenal, octanal (+ myrcene), and heptanal. All other previously identified volatiles showed lower FD factors. Odor thresholds, concentrations, and odor activity values of previously identified compounds are reviewed. This indicated that at least 20 compounds occur in cooked carrots above their odor thresholds (in water). Compounds showing the highest odor activity values included β-damascenone, (E)-2-nonenal, (E,E)-2,4-decadienal, β-ionone, octanal, (E)-2-decenal, eugenol, and p-vinylguaiacol.
2014-01-01
Background Thresholds for statistical significance are insufficiently demonstrated by 95% confidence intervals or P-values when assessing results from randomised clinical trials. First, a P-value only shows the probability of getting a result assuming that the null hypothesis is true and does not reflect the probability of getting a result assuming an alternative hypothesis to the null hypothesis is true. Second, a confidence interval or a P-value showing significance may be caused by multiplicity. Third, statistical significance does not necessarily result in clinical significance. Therefore, assessment of intervention effects in randomised clinical trials deserves more rigour in order to become more valid. Methods Several methodologies for assessing the statistical and clinical significance of intervention effects in randomised clinical trials were considered. Balancing simplicity and comprehensiveness, a simple five-step procedure was developed. Results For a more valid assessment of results from a randomised clinical trial we propose the following five-steps: (1) report the confidence intervals and the exact P-values; (2) report Bayes factor for the primary outcome, being the ratio of the probability that a given trial result is compatible with a ‘null’ effect (corresponding to the P-value) divided by the probability that the trial result is compatible with the intervention effect hypothesised in the sample size calculation; (3) adjust the confidence intervals and the statistical significance threshold if the trial is stopped early or if interim analyses have been conducted; (4) adjust the confidence intervals and the P-values for multiplicity due to number of outcome comparisons; and (5) assess clinical significance of the trial results. Conclusions If the proposed five-step procedure is followed, this may increase the validity of assessments of intervention effects in randomised clinical trials. PMID:24588900
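Step (2) can be sketched under a normal approximation (a simplification of the proposal; the effect sizes below are hypothetical):

    from scipy.stats import norm

    def bayes_factor(estimate, se, design_effect):
        like_null = norm.pdf(estimate, loc=0.0, scale=se)           # 'null' effect
        like_alt = norm.pdf(estimate, loc=design_effect, scale=se)  # designed effect
        return like_null / like_alt

    # Observed effect -0.20 (SE 0.10) vs a hypothesised effect of -0.35:
    print(round(bayes_factor(-0.20, 0.10, -0.35), 3))  # < 1 favours the alternative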
Abboud, Tammam; Schaper, Miriam; Dührsen, Lasse; Schwarz, Cindy; Schmidt, Nils Ole; Westphal, Manfred; Martens, Tobias
2016-10-01
OBJECTIVE Warning criteria for monitoring of motor evoked potentials (MEP) after direct cortical stimulation during surgery for supratentorial tumors have been well described. However, little is known about the value of MEP after transcranial electrical stimulation (TES) in predicting postoperative motor deficit when monitoring threshold level. The authors aimed to evaluate the feasibility and value of this method in glioma surgery by using a new approach for interpreting changes in threshold level involving contra- and ipsilateral MEP. METHODS Between November 2013 and December 2014, 93 patients underwent TES-MEP monitoring during resection of gliomas located close to central motor pathways but not involving the primary motor cortex. The MEP were elicited by transcranial repetitive anodal train stimulation. Bilateral MEP were continuously evaluated to assess percentage increase of threshold level (minimum voltage needed to evoke a stable motor response from each of the muscles being monitored) from the baseline set before dural opening. An increase in threshold level on the contralateral side (facial, arm, or leg muscles contralateral to the affected hemisphere) of more than 20% beyond the percentage increase on the ipsilateral side (facial, arm, or leg muscles ipsilateral to the affected hemisphere) was considered a significant alteration. Recorded alterations were subsequently correlated with postoperative neurological deterioration and MRI findings. RESULTS TES-MEP could be elicited in all patients, including those with recurrent glioma (31 patients) and preoperative paresis (20 patients). Five of 73 patients without preoperative paresis showed a significant increase in threshold level, and all of them developed new paresis postoperatively (transient in 4 patients and permanent in 1 patient). Eight of 20 patients with preoperative paresis showed a significant increase in threshold level, and all of them developed postoperative neurological deterioration (transient in 4 patients and permanent in 4 patients). In 80 patients no significant change in threshold level was detected, and none of them showed postoperative neurological deterioration. The specificity and sensitivity in this series were estimated at 100%. Postoperative MRI revealed gross-total tumor resection in 56 of 82 patients (68%) in whom complete tumor resection was attainable; territorial ischemia was detected in 4 patients. CONCLUSIONS The novel threshold criterion has made TES-MEP a useful method for predicting postoperative motor deficit in patients who undergo glioma surgery, and has been feasible in patients with preoperative paresis as well as in patients with recurrent glioma. Including contra- and ipsilateral changes in threshold level has led to a high sensitivity and specificity.
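The threshold criterion itself is a one-line comparison (a sketch of the stated rule; the variable names are ours):

    def significant_alteration(contra_base, contra_now, ipsi_base, ipsi_now):
        contra_rise = 100.0 * (contra_now - contra_base) / contra_base
        ipsi_rise = 100.0 * (ipsi_now - ipsi_base) / ipsi_base
        # Significant: contralateral rise exceeds ipsilateral rise by > 20 points.
        return contra_rise - ipsi_rise > 20.0

    print(significant_alteration(100, 135, 100, 110))   # True -> warn the surgeon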
Estimation of the geochemical threshold and its statistical significance
Miesch, A.T.
1981-01-01
A statistic is proposed for estimating the geochemical threshold and its statistical significance, or it may be used to identify a group of extreme values that can be tested for significance by other means. The statistic is the maximum gap between adjacent values in an ordered array after each gap has been adjusted for the expected frequency. The values in the ordered array are geochemical values transformed by either ln(?? - ??) or ln(?? - ??) and then standardized so that the mean is zero and the variance is unity. The expected frequency is taken from a fitted normal curve with unit area. The midpoint of an adjusted gap that exceeds the corresponding critical value may be taken as an estimate of the geochemical threshold, and the associated probability indicates the likelihood that the threshold separates two geochemical populations. The adjusted gap test may fail to identify threshold values if the variation tends to be continuous from background values to the higher values that reflect mineralized ground. However, the test will serve to identify other anomalies that may be too subtle to have been noted by other means. © 1981.
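A loose sketch of the statistic as described (our reading of the abstract; the transformation step and the critical values are omitted): each gap between adjacent ordered, standardized values is weighted by the normal density expected there, and the midpoint of the largest adjusted gap is the candidate threshold:

    import numpy as np
    from scipy.stats import norm

    def adjusted_gap_threshold(values):
        z = np.sort((values - values.mean()) / values.std())
        mids = (z[1:] + z[:-1]) / 2.0
        adj_gaps = np.diff(z) * norm.pdf(mids)   # adjust gaps for expected frequency
        i = int(np.argmax(adj_gaps))
        return mids[i], adj_gaps[i]

    rng = np.random.default_rng(2)
    vals = np.concatenate([rng.normal(0, 1, 200), rng.normal(6, 1, 10)])
    print(adjusted_gap_threshold(vals))   # midpoint falls near the population break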
Code of Federal Regulations, 2012 CFR
2012-07-01
30 CFR § 71.700 (Mineral Resources): Inhalation hazards; threshold limit values. Limits airborne contaminants (including dust containing quartz and asbestos dust) in surface work areas of underground coal mines on the basis of a time-weighted average.
Spatial modes of cooperation based on bounded rationality
NASA Astrophysics Data System (ADS)
Pan, Qiuhui; Wang, Lingxiao; Shi, Rongrong; Wang, Huan; He, Mingfeng
2014-12-01
Social factors such as public opinion, values, ethics, and moral standards can guide people's behavior to some degree. In this paper, we introduce social orientation as a motivator factor into the Nowak model and discuss how the proportion of cooperators varies under the joint action of the motivator factor and the temptation to betray. Results show that the motivator factor can promote cooperation and that there is a motivator-factor threshold: the cooperation proportion jumps at this value, while on either side of it the proportion changes only slightly. Reducing the temptation to betray also promotes cooperation, and there is likewise a betrayal-temptation threshold corresponding to a jump point, with little change in the cooperation proportion on either side of it. In addition, when betrayal temptation and the motivator factor both act on a system, cooperators and defectors always coexist.
Thresholds for conservation and management: structured decision making as a conceptual framework
Nichols, James D.; Eaton, Mitchell J.; Martin, Julien; Edited by Guntenspergen, Glenn R.
2014-01-01
Ecological thresholds are values of system state variables at which small changes produce substantial changes in system dynamics. They are frequently incorporated into ecological models used to project system responses to management actions. Utility thresholds are components of management objectives and are values of state or performance variables at which small changes yield substantial changes in the value of the management outcome. Decision thresholds are values of system state variables at which small changes prompt changes in management actions in order to reach specified management objectives. Decision thresholds are derived from the other components of the decision process. We advocate a structured decision making (SDM) approach within which the following components are identified: objectives (possibly including utility thresholds), potential actions, models (possibly including ecological thresholds), monitoring program, and a solution algorithm (which produces decision thresholds). Adaptive resource management (ARM) is described as a special case of SDM developed for recurrent decision problems that are characterized by uncertainty. We believe that SDM, in general, and ARM, in particular, provide good approaches to conservation and management. Use of SDM and ARM also clarifies the distinct roles of ecological thresholds, utility thresholds, and decision thresholds in informed decision processes.
Manning, F.W.; Groothuis, S.E.; Lykins, J.H.; Papke, D.M.
1962-06-12
An improved area radiation dose monitor is designed to compensate continuously for background radiation below a threshold dose rate and to give warning when the dose integral of an above-threshold radiation excursion exceeds a selected value. This is accomplished by providing means for continuously charging an ionization chamber. The chamber provides a first current proportional to the incident radiation dose rate. Means are provided for generating a second current, including means for nulling out the first current with the second current at all values of the first current corresponding to dose rates below a selected threshold dose rate value. The second current has a maximum value corresponding to that of the first current at the threshold dose rate. The excess of the first current over the second current, which occurs above the threshold, is integrated, and an alarm is given at a selected integrated value of the excess corresponding to a selected radiation dose. (AEC)
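In discrete time, this scheme amounts to integrating only the part of the dose-rate signal above the threshold rate and alarming when the integral exceeds the selected dose. An illustrative Python sketch (all numbers invented):

def dose_alarm(dose_rates, threshold_rate, alarm_dose, dt=1.0):
    """Integrate the excess of the dose rate above `threshold_rate`
    (background below it is nulled out) and return the time index at
    which the integrated excess first exceeds `alarm_dose`, else None."""
    integral = 0.0
    for i, rate in enumerate(dose_rates):
        integral += max(rate - threshold_rate, 0.0) * dt
        if integral > alarm_dose:
            return i
    return None

# Background at 0.5 units/s, excursion to 3.0 units/s from t = 10 s:
rates = [0.5] * 10 + [3.0] * 20
print(dose_alarm(rates, threshold_rate=1.0, alarm_dose=10.0))  # alarms at i=15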
NASA Astrophysics Data System (ADS)
Zha, N.; Capaldi, D. P. I.; Pike, D.; McCormack, D. G.; Cunningham, I. A.; Parraga, G.
2015-03-01
Pulmonary x-ray computed tomography (CT) may be used to characterize emphysema and airways disease in patients with chronic obstructive pulmonary disease (COPD). One analysis approach, parametric response mapping (PRM), utilizes registered inspiratory and expiratory CT image volumes and CT-density-histogram thresholds, but there is no consensus regarding the threshold values used, or their clinical meaning. Principal component analysis (PCA) of the CT density histogram can be exploited to quantify emphysema using data-driven CT-density-histogram thresholds. Thus, the objective of this proof-of-concept demonstration was to develop a PRM approach using PCA-derived thresholds in COPD patients and ex-smokers without airflow limitation. Methods: Fifteen COPD ex-smokers and 5 normal ex-smokers were evaluated. Thoracic CT images were acquired at full inspiration and full expiration, and these images were non-rigidly co-registered. PCA was performed on the CT density histograms, and the components with eigenvalues greater than one were summed. Since the values of the principal component curve correlate directly with the variability in the sample, the maximum and minimum points on the curve were used as threshold values for the PCA-adjusted PRM technique. Results: A significant correlation was determined between conventional and PCA-adjusted PRM with 3He MRI apparent diffusion coefficient (p<0.001), with CT RA950 (p<0.0001), as well as with 3He MRI ventilation defect percent, a measurement of both small airways disease (p=0.049 and p=0.06, respectively) and emphysema (p=0.02). Conclusions: PRM generated using PCA thresholds of the CT density histogram showed significant correlations with CT and 3He MRI measurements of emphysema, but not airways disease.
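A schematic Python version of the threshold-selection step, assuming one histogram per subject stacked row-wise and taking the eigenvalue-greater-than-one rule from the abstract at face value (the mock data are invented):

import numpy as np

def pca_histogram_thresholds(histograms, bin_centers):
    """histograms: (n_subjects, n_bins) CT density histograms.
    Sum the principal components with eigenvalues > 1 and return the bin
    centers at the minimum and maximum of the summed curve."""
    X = histograms - histograms.mean(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.cov(X, rowvar=False))
    curve = eigvecs[:, eigvals > 1.0].sum(axis=1)
    lo, hi = bin_centers[np.argmin(curve)], bin_centers[np.argmax(curve)]
    return min(lo, hi), max(lo, hi)

rng = np.random.default_rng(8)
mock_hists = rng.poisson(50, size=(20, 64)).astype(float)
mock_bins = np.linspace(-1000, -500, 64)     # HU bin centers (invented)
print(pca_histogram_thresholds(mock_hists, mock_bins))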
The variance of length of stay and the optimal DRG outlier payments.
Felder, Stefan
2009-09-01
Prospective payment schemes in health care often include supply-side insurance for cost outliers. In hospital reimbursement, prospective payments for patient discharges, based on their classification into diagnosis related groups (DRGs), are complemented by outlier payments for long-stay patients. The outlier scheme fixes the length-of-stay (LOS) threshold, constraining the profit risk of the hospitals. In most DRG systems, this threshold increases with the standard deviation of the LOS distribution. The present paper addresses the adequacy of this DRG outlier threshold rule for risk-averse hospitals with preferences depending on the expected value and the variance of profits. It first shows that the optimal threshold solves the hospital's tradeoff between higher profit risk and lower premium loading payments. It then demonstrates for normally distributed truncated LOS that the optimal outlier threshold indeed decreases with an increase in the standard deviation.
van der Hoek, Yntze; Renfrew, Rosalind; Manne, Lisa L
2013-01-01
Identifying persistence and extinction thresholds in species-habitat relationships is a major focal point of ecological research and conservation. However, one major concern regarding the incorporation of threshold analyses in conservation is the lack of knowledge on the generality and transferability of results across species and regions. We present a multi-region, multi-species approach of modeling threshold responses, which we use to investigate whether threshold effects are similar across species and regions. We modeled local persistence and extinction dynamics of 25 forest-associated breeding birds based on detection/non-detection data, which were derived from repeated breeding bird atlases for the state of Vermont. We did not find threshold responses to be particularly well-supported, with 9 species supporting extinction thresholds and 5 supporting persistence thresholds. This contrasts with a previous study based on breeding bird atlas data from adjacent New York State, which showed that most species support persistence and extinction threshold models (15 and 22 of 25 study species respectively). In addition, species that supported a threshold model in both states had associated average threshold estimates of 61.41% (SE = 6.11, persistence) and 66.45% (SE = 9.15, extinction) in New York, compared to 51.08% (SE = 10.60, persistence) and 73.67% (SE = 5.70, extinction) in Vermont. Across species, thresholds were found at 19.45-87.96% forest cover for persistence and 50.82-91.02% for extinction dynamics. Through an approach that allows for broad-scale comparisons of threshold responses, we show that species vary in their threshold responses with regard to habitat amount, and that differences between even nearby regions can be pronounced. We present both ecological and methodological factors that may contribute to the different model results, but propose that regardless of the reasons behind these differences, our results merit a warning that threshold values cannot simply be transferred across regions or interpreted as clear-cut targets for ecosystem management and conservation.
Controlled wavelet domain sparsity for x-ray tomography
NASA Astrophysics Data System (ADS)
Purisha, Zenith; Rimpeläinen, Juho; Bubba, Tatiana; Siltanen, Samuli
2018-01-01
Tomographic reconstruction is an ill-posed inverse problem that calls for regularization. One possibility is to require sparsity of the unknown in an orthonormal wavelet basis. This, in turn, can be achieved by variational regularization, where the penalty term is the sum of the absolute values of the wavelet coefficients. Using the primal-dual fixed point algorithm, the minimizer of the variational regularization functional can be computed iteratively with a soft-thresholding operation. Choosing the soft-thresholding parameter then controls the sparsity of the reconstruction in the wavelet domain.
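For the special case where the forward operator is the identity (plain denoising rather than tomography), the minimizer reduces to a single soft-thresholding of the wavelet coefficients. A small Python sketch using PyWavelets (the signal and parameter values are invented):

import numpy as np
import pywt

def soft(x, t):
    """Soft-thresholding operator, the proximal map of t * ||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def wavelet_l1_denoise(signal, mu, wavelet="db2", level=3):
    """Solve min_x 0.5*||x - signal||^2 + mu*||Wx||_1 for orthonormal W:
    soft-threshold the detail coefficients, keep the approximation."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    thresholded = [coeffs[0]] + [soft(c, mu) for c in coeffs[1:]]
    return pywt.waverec(thresholded, wavelet)

rng = np.random.default_rng(0)
x = np.repeat([0.0, 1.0, 0.2, 0.8], 64) + 0.1 * rng.standard_normal(256)
print(np.round(wavelet_l1_denoise(x, mu=0.3)[:4], 3))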
NASA Astrophysics Data System (ADS)
Román, Sebastián; Lund, Fernando; Bustos, Javier; Palza, Humberto
2018-01-01
In several technological applications, carbon nanotubes (CNT) are added to a polymer matrix in order to develop electrically conductive composite materials upon percolation of the CNT network. This percolation state depends on several parameters, such as particle characteristics, degree of dispersion, and filler orientation. For instance, CNT aggregation is currently avoided because it is thought to have a negative effect on the electrical behavior, despite some experimental evidence to the contrary. In this study, the effects of CNT waviness, degree of agglomeration, and external strain on the electrical percolation of polymer composites are studied by a three-dimensional Monte Carlo simulation. The simulation shows that the percolation threshold of CNT depends on the particle waviness, with rigid particles displaying the lowest values. Regarding the effect of CNT dispersion, our numerical results confirm that low levels of agglomeration reduce the percolation threshold of the composite. However, the threshold is shifted to larger values at high agglomeration states because of the appearance of isolated areas of high CNT concentration. These results imply, therefore, an optimum of agglomeration that further depends on the waviness and concentration of CNT. Significantly, CNT agglomeration can further explain the broad percolation transition found in these systems. When an external strain is applied to the composites, the percolation concentration shifts to higher values because CNT alignment increases the inter-particle distances. The strain sensitivity of the composites is affected by the percolation state of CNT, showing a maximum value at a certain filler concentration. These results open up the discussion about the relevance of the CNT dispersion state and filler flexibility for electrically conductive polymer composites.
Automatic threshold optimization in nonlinear energy operator based spike detection.
Malik, Muhammad H; Saeed, Maryam; Kamboh, Awais M
2016-08-01
In neural spike sorting systems, the performance of the spike detector has to be maximized because it affects the performance of all subsequent blocks. The nonlinear energy operator (NEO) is a popular spike detector due to its detection accuracy and its hardware-friendly architecture. However, it involves a thresholding stage, whose value is usually approximated and is thus not optimal. This approximation deteriorates the performance in real-time systems where signal-to-noise ratio (SNR) estimation is a challenge, especially at lower SNRs. In this paper, we propose an automatic and robust threshold calculation method using an empirical gradient technique. The method is tested on two different datasets. The results show that our optimized threshold improves the detection accuracy for both high-SNR and low-SNR signals. Boxplots are presented that provide a statistical analysis of the improvements in accuracy; for instance, the 75th percentile was at 98.7% for the optimized NEO threshold versus 93.5% for the traditional NEO threshold.
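The conventional NEO detector that the paper starts from can be written in a few lines of Python; the fixed scale factor c applied to the mean NEO value is the approximation that the authors replace with their empirical-gradient optimization (signal and constants here are invented):

import numpy as np

def neo_detect(x, c=8.0):
    """Nonlinear energy operator spike detection with the common
    scaled-mean threshold; c is the conventional, non-optimal scale."""
    psi = x[1:-1] ** 2 - x[:-2] * x[2:]       # NEO: x[n]^2 - x[n-1]*x[n+1]
    thr = c * psi.mean()                      # traditional threshold choice
    return np.flatnonzero(psi > thr) + 1      # super-threshold sample indices

rng = np.random.default_rng(2)
sig = 0.1 * rng.standard_normal(1000)
sig[300:305] += np.hanning(5) * 2.0           # injected spike
print(neo_detect(sig))                        # indices cluster around 300-305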
Borgeat, F; Pannetier, M F
1982-01-01
This exploratory study examined the usefulness of averaging electrodermal potential responses for research on subliminal auditory perception. Eighteen female subjects were exposed to three kinds of auditory stimulation (emotional, neutral, and a 1000-Hz tone), each repeated six times at three intensities (detection threshold, 10 dB below this threshold, and 10 dB above the identification threshold). Analysis of electrodermal potential responses showed that the number of responses was related to the emotionality of subliminal stimuli presented at the detection threshold, but not at 10 dB below it. The proposed interpretation of the data refers to perceptual defence theory. This study indicates that the electrodermal response count constitutes a useful measure for subliminal auditory perception research, but averaging those responses was not shown to bring additional information.
Occurrence analysis of daily rainfalls through non-homogeneous Poissonian processes
NASA Astrophysics Data System (ADS)
Sirangelo, B.; Ferrari, E.; de Luca, D. L.
2011-06-01
A stochastic model based on a non-homogeneous Poisson process, characterised by a time-dependent intensity of rainfall occurrence, is employed to describe seasonal effects in daily rainfalls exceeding prefixed threshold values. The observed daily rainfall data were partitioned into a calibration period for parameter estimation and a validation period for checking for changes in the occurrence process. The model has been applied to a set of rain gauges located in different geographical areas of Southern Italy. The results show a good fit of the time-varying intensity of the rainfall occurrence process by a 2-harmonic Fourier law and no statistically significant evidence of changes in the validation period for different threshold values.
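A non-homogeneous Poisson process with a 2-harmonic Fourier intensity can be simulated by Lewis-Shedler thinning; a Python sketch with invented coefficients:

import numpy as np

def fourier_intensity(t, a0, harmonics, period=365.0):
    """lambda(t) = a0 + sum_k [a_k cos(2 pi k t / T) + b_k sin(2 pi k t / T)]."""
    lam = a0
    for k, (a, b) in enumerate(harmonics, start=1):
        w = 2.0 * np.pi * k * t / period
        lam += a * np.cos(w) + b * np.sin(w)
    return lam

def simulate_nhpp(intensity, t_max, lam_max, rng):
    """Lewis-Shedler thinning: draw homogeneous events at rate lam_max and
    keep each with probability intensity(t) / lam_max."""
    t, events = 0.0, []
    while True:
        t += rng.exponential(1.0 / lam_max)
        if t > t_max:
            return np.array(events)
        if rng.uniform() < intensity(t) / lam_max:
            events.append(t)

rng = np.random.default_rng(3)
lam = lambda t: fourier_intensity(t, 0.10, [(0.05, 0.0), (0.02, 0.01)])
days = simulate_nhpp(lam, t_max=5 * 365.0, lam_max=0.2, rng=rng)
print(len(days), "threshold exceedances in 5 simulated years")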
Determination of the measurement threshold in gamma-ray spectrometry.
Korun, M; Vodenik, B; Zorko, B
2017-03-01
In gamma-ray spectrometry the measurement threshold describes the lower boundary of the interval of peak areas originating in the response of the spectrometer to gamma-rays from the sample measured. In this sense it represents a generalization of the net indication corresponding to the decision threshold, which is the measurement threshold at the quantity value zero for a predetermined probability of making errors of the first kind. Measurement thresholds were determined for peaks appearing in the spectra of the radon daughters 214Pb and 214Bi by measuring the spectrum 35 times under repeatable conditions. For the calculation of the measurement threshold, the probability of detection of the peaks and the mean relative uncertainty of the peak area were used. The relative measurement thresholds, the ratios between the measurement threshold and the mean peak-area uncertainty, were determined for 54 peaks whose probability of detection varied between a few percent and about 95% and whose relative peak-area uncertainty varied between 30% and 80%. The relative measurement thresholds vary considerably from peak to peak, although the nominal value of the sensitivity parameter defining the sensitivity for locating peaks was equal for all peaks. At the value of the sensitivity parameter used, the peak analysis does not locate peaks corresponding to the decision threshold with a probability in excess of 50%. This implies that peaks in the spectrum may not be located even though the true value of the measurand exceeds the decision threshold. Copyright © 2017 Elsevier Ltd. All rights reserved.
Pressure and cold pain threshold reference values in a large, young adult, pain-free population.
Waller, Robert; Smith, Anne Julia; O'Sullivan, Peter Bruce; Slater, Helen; Sterling, Michele; McVeigh, Joanne Alexandra; Straker, Leon Melville
2016-10-01
Currently there is a lack of large population studies that have investigated pain sensitivity distributions in healthy pain free people. The aims of this study were: (1) to provide sex-specific reference values of pressure and cold pain thresholds in young pain-free adults; (2) to examine the association of potential correlates of pain sensitivity with pain threshold values. This study investigated sex specific pressure and cold pain threshold estimates for young pain free adults aged 21-24 years. A cross-sectional design was utilised using participants (n=617) from the Western Australian Pregnancy Cohort (Raine) Study at the 22-year follow-up. The association of site, sex, height, weight, smoking, health related quality of life, psychological measures and activity with pain threshold values was examined. Pressure pain threshold (lumbar spine, tibialis anterior, neck and dorsal wrist) and cold pain threshold (dorsal wrist) were assessed using standardised quantitative sensory testing protocols. Reference values for pressure pain threshold (four body sites) stratified by sex and site, and cold pain threshold (dorsal wrist) stratified by sex are provided. Statistically significant, independent correlates of increased pressure pain sensitivity measures were site (neck, dorsal wrist), sex (female), higher waist-hip ratio and poorer mental health. Statistically significant, independent correlates of increased cold pain sensitivity measures were, sex (female), poorer mental health and smoking. These data provide the most comprehensive and robust sex specific reference values for pressure pain threshold specific to four body sites and cold pain threshold at the dorsal wrist for young adults aged 21-24 years. Establishing normative values in this young age group is important given that the transition from adolescence to adulthood is a critical temporal period during which trajectories for persistent pain can be established. These data will provide an important research resource to enable more accurate profiling and interpretation of pain sensitivity in clinical pain disorders in young adults. The robust and comprehensive data can assist interpretation of future clinical pain studies and provide further insight into the complex associations of pain sensitivity that can be used in future research. Crown Copyright © 2016. Published by Elsevier B.V. All rights reserved.
Gustafson, Samantha; Pittman, Andrea; Fanning, Robert
2013-06-01
This tutorial demonstrates the effects of tubing length and coupling type (i.e., foam tip or personal earmold) on hearing threshold and real-ear-to-coupler difference (RECD) measures. Hearing thresholds from 0.25 kHz through 8 kHz are reported at various tubing lengths for 28 normal-hearing adults between the ages of 22 and 31 years. RECD values are reported for 14 of the adults. All measures were made with an insert earphone coupled to a standard foam tip and with an insert earphone coupled to each participant's personal earmold. Threshold and RECD measures obtained with a personal earmold were significantly different from those obtained with a foam tip on repeated measures analyses of variance. One-sample t tests showed these differences to vary systematically with increasing tubing length, with the largest average differences (7-8 dB) occurring at 4 kHz. This systematic examination demonstrates the equal and opposite effects of tubing length on threshold and acoustic measures. Specifically, as tubing length increased, sound pressure level in the ear canal decreased, affecting both hearing thresholds and the real-ear portion of the RECDs. This demonstration shows that when the same coupling method is used to obtain the hearing thresholds and RECD, equal and accurate estimates of real-ear sound pressure level are obtained.
An n-material thresholding method for improving integerness of solutions in topology optimization
Watts, Seth; Tortorelli, Daniel A.
2016-04-10
It is common in solving topology optimization problems to replace an integer-valued characteristic function design field with the material volume fraction field, a real-valued approximation of the design field that permits "fictitious" mixtures of materials during intermediate iterations in the optimization process. This is reasonable so long as one can interpolate properties for such materials and so long as the final design is integer valued. For this purpose, we present a method for smoothly thresholding the volume fractions of an arbitrary number of material phases which specify the design. This method is trivial for two-material design problems, for example, the canonical topology design problem of specifying the presence or absence of a single material within a domain, but it becomes more complex when three or more materials are used, as often occurs in material design problems. We take advantage of the similarity in properties between the volume fractions and the barycentric coordinates on a simplex to derive a thresholding method which is applicable to an arbitrary number of materials. As we show in a sensitivity analysis, this method has smooth derivatives, allowing it to be used in gradient-based optimization algorithms. Finally, we present results which show synergistic effects when used with Solid Isotropic Material with Penalty and Rational Approximation of Material Properties material interpolation functions, popular methods of ensuring integerness of solutions.
NASA Astrophysics Data System (ADS)
Chen, Zhangwei; Wang, Xin; Giuliani, Finn; Atkinson, Alan
2015-01-01
Mechanical properties of porous SOFC electrodes are largely determined by their microstructures. Measurements of the elastic properties and microstructural parameters can be achieved by modelling digitally reconstructed 3D volumes based on the real electrode microstructures. However, the reliability of such measurements depends greatly on the processing of the raw images acquired for reconstruction. In this work, the actual microstructures of La0.6Sr0.4Co0.2Fe0.8O3-δ (LSCF) cathodes sintered at an elevated temperature were reconstructed by dual-beam FIB/SEM tomography. Key microstructural and elastic parameters were estimated and correlated, and their sensitivity to the grayscale threshold value applied in the image segmentation was analyzed. The important microstructural parameters included porosity, tortuosity, specific surface area, particle and pore size distributions, and inter-particle neck size distribution, which may have varying effects on the elastic properties simulated from the microstructures using FEM. Results showed that different threshold ranges resulted in different degrees of sensitivity for a given parameter. The estimated porosity and tortuosity were more sensitive than the surface-area-to-volume ratio. Pore and neck size were found to be less sensitive than particle size. Results also showed that the modulus was essentially sensitive to the porosity, which was largely controlled by the threshold value.
Value-at-risk estimation with wavelet-based extreme value theory: Evidence from emerging markets
NASA Astrophysics Data System (ADS)
Cifter, Atilla
2011-06-01
This paper introduces wavelet-based extreme value theory (EVT) for univariate value-at-risk estimation. Wavelets and EVT are combined in a hybrid model for volatility forecasting: in the first stage, wavelets are used to set the threshold of a generalized Pareto distribution, and in the second stage, EVT is applied with this wavelet-based threshold. The new model is applied to two major emerging stock markets: the Istanbul Stock Exchange (ISE) and the Budapest Stock Exchange (BUX). The relative performance of wavelet-based EVT is benchmarked against the Riskmetrics-EWMA, ARMA-GARCH, generalized Pareto distribution, and conditional generalized Pareto distribution models. The empirical results show that wavelet-based extreme value theory increases the predictive performance of financial forecasting according to the number of violations and tail-loss tests. The superior forecasting performance of the wavelet-based EVT model is also consistent with Basel II requirements, and this new model can be used by financial institutions as well.
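The peaks-over-threshold step of such an EVT model looks as follows in Python, with an ordinary fixed-quantile threshold standing in for the paper's wavelet-based one (scipy's generalized Pareto fit; the data are simulated):

import numpy as np
from scipy.stats import genpareto

def pot_var(losses, threshold_q=0.95, alpha=0.99):
    """Peaks-over-threshold VaR: fit a generalized Pareto distribution to
    exceedances over threshold u and invert the tail estimator
    VaR_alpha = u + (sigma/xi) * [((n/Nu) * (1 - alpha))^(-xi) - 1]."""
    losses = np.asarray(losses, dtype=float)
    u = np.quantile(losses, threshold_q)
    exc = losses[losses > u] - u
    xi, _, sigma = genpareto.fit(exc, floc=0.0)   # shape, loc (fixed 0), scale
    n, nu = len(losses), len(exc)
    return u + (sigma / xi) * (((n / nu) * (1.0 - alpha)) ** (-xi) - 1.0)

rng = np.random.default_rng(4)
sample = rng.standard_t(df=4, size=2000)          # heavy-tailed mock "losses"
print(round(pot_var(sample), 3))                  # 99% value-at-risk estimate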
Alasil, Tarek; Wang, Kaidi; Yu, Fei; Field, Matthew G.; Lee, Hang; Baniasadi, Neda; de Boer, Johannes F.; Coleman, Anne L.; Chen, Teresa C.
2015-01-01
Purpose To determine the retinal nerve fiber layer (RNFL) thickness at which visual field (VF) damage becomes detectable and associated with structural loss. Design Retrospective cross-sectional study. Methods Eighty-seven healthy and 108 glaucoma subjects (one eye per subject) were recruited from an academic institution. All patients had VF examinations (Swedish Interactive Threshold Algorithm 24-2 test of the Humphrey visual field analyzer 750i; Carl Zeiss Meditec, Dublin, CA) and spectral domain optical coherence tomography RNFL scans (Spectralis, Heidelberg Engineering, Heidelberg, Germany). Comparison of RNFL thickness values with VF threshold values showed a plateau of VF threshold values at high RNFL thickness values and then a sharp decrease at lower RNFL thickness values. A broken-stick statistical analysis was utilized to estimate the tipping point at which RNFL thickness values become associated with VF defects. The slope for the association between structure and function was computed for data above and below the tipping point. Results The mean RNFL thickness value associated with initial VF loss was 89 μm. The superior RNFL thickness value associated with initial corresponding inferior VF loss was 100 μm. The inferior RNFL thickness value associated with initial corresponding superior VF loss was 73 μm. The differences between all the slopes above and below the aforementioned tipping points were statistically significant (p<0.001). Conclusions In open angle glaucoma, substantial RNFL thinning or structural loss appears to be necessary before functional visual field defects become detectable. PMID:24487047
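The broken-stick analysis amounts to fitting a two-segment piecewise-linear model and scanning for the breakpoint with the smallest residual error; a minimal Python version (the 89-μm plateau appears only because we build it into the mock data):

import numpy as np

def broken_stick_fit(x, y):
    """Fit y = a + b*x + c*max(x - bp, 0) over candidate breakpoints bp;
    return the breakpoint with the smallest sum of squared errors."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    best = (np.inf, None)
    for bp in np.unique(x)[2:-2]:             # keep points on both sides
        X = np.column_stack([np.ones_like(x), x, np.maximum(x - bp, 0.0)])
        beta, res, *_ = np.linalg.lstsq(X, y, rcond=None)
        sse = float(res[0]) if res.size else np.sum((X @ beta - y) ** 2)
        if sse < best[0]:
            best = (sse, bp)
    return best[1]

rng = np.random.default_rng(5)
rnfl = rng.uniform(40, 120, 300)                           # thickness, um
vf = np.where(rnfl > 89, 30.0, 30.0 + 0.9 * (rnfl - 89))   # plateau above 89
vf += rng.normal(0.0, 1.0, rnfl.size)
print(broken_stick_fit(rnfl, vf))                          # close to 89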
Midline Shift Threshold Value for Hemiparesis in Chronic Subdural Hematoma.
Juković, Mirela F; Stojanović, Dejan B
2015-01-01
Chronic subdural hematoma (CSDH) has a variety of clinical presentations, with numerous neurological symptoms and signs. Hemiparesis is one of the leading signs that potentially indicates CSDH. The purpose of this study was to determine the threshold (cut-off) value of midsagittal line (MSL) shift beyond which hemiparesis is likely to appear. The study evaluated 83 patients with 53 unilateral and 30 bilateral CSDHs over a period of three years. The computed tomography (CT) findings evaluated in patients with CSDH were the diameter of the hematoma and the midsagittal line shift, measured on non-contrast CT scans in relation to the occurrence of hemiparesis. Threshold values of MSL shift for both types of CSDH were obtained as the point of maximal (equal) sensitivity and specificity (intersection of the curves). MSL shift is a good predictor of hemiparesis occurrence (total sample, AUROC 0.75, p=0.0001). Unilateral and bilateral CSDHs had different threshold values of MSL shift for hemiparesis development. The results suggested that in unilateral CSDH the threshold value of MSL shift could be 10 mm (AUROC=0.65; p=0.07), while for bilateral CSDH it was 4.5 mm (AUROC=0.77; p=0.01). Our study points to the phenomenon that midsagittal line shift can predict hemiparesis occurrence. Hemiparesis in patients with bilateral CSDH was more related to midsagittal line shift than in unilateral CSDH. When the value of midsagittal line shift exceeds the threshold level, hemiparesis occurs with a certain probability.
Thresholds and the Evolution of Bedrock Channels on the Hawaiian Islands
NASA Astrophysics Data System (ADS)
Raming, L. W.; Whipple, K. X.
2017-12-01
Erosional thresholds are a key component of the non-linear dynamics of bedrock channel incision and long-term landscape evolution. Erosion thresholds, however, have remained difficult to quantify and uniquely identify in landscape evolution. Here we present an analysis of the morphology of canyons on the Hawaiian Islands and put forth the hypothesis that they are threshold-dominated landforms. Geologic (USGS), topographic (USGS 10 m DEM), runoff (USGS), and meteorological data (Rainfall Atlas of Hawai`i) were used in an analysis of catchments on the islands of Hawai`i, Kaua`i, Lāna`i, Maui, and Moloka`i. Channel incision was estimated by differencing the present topography from reconstructed pre-incision volcanic surfaces. Four key results were obtained from our analysis: (1) Mean total incision ranged from 11 to 684 m and exhibited no correlation with incision duration. (2) In major canyons on the islands of Hawai`i and Kaua`i, rejuvenated-stage basalt flows outcropping at river level show that incision effectively ceased after a period no longer than 100 ka and 1.4 Ma, respectively. (3) Mean canyon wall gradient below knickpoints decreases with volcano age, with a median value of 1 measured on Hawai`i and of 0.7 on Kaua`i. (4) Downstream of the major knickpoints that demarcate the upper limits of deep canyons, channel profiles have near-uniform channel steepness (KSN), with most values ranging between 60 and 100. Uniform KSN implies uniform bed shear stress and is typically interpreted as a steady-state balance between uplift and incision in tectonically active landscapes. However, this is untenable for Hawaiian canyons, and we therefore posit that uniform KSN represents a condition where flood shear stress has been reduced to threshold values and incision reduced to near zero. Uniform KSN values decrease with rainfall, consistent with wetter regions generating threshold shear stress at lower KSN. This suggests that rapid incision occurred during brief intervals when thresholds were exceeded, through a combination of initial slope, over-steepening due to cliff formation, and available runoff as a function of climate. From this analysis, we find significant evidence for the role of thresholds in landscape evolution and an alternative framework for viewing the evolution of the Hawaiian Islands.
Characterization of local thermodynamic equilibrium in a laser-induced aluminum alloy plasma.
Zhang, Yong; Zhao, Zhenyang; Xu, Tao; Niu, GuangHui; Liu, Ying; Duan, Yixiang
2016-04-01
The electron temperature was evaluated using the line-to-continuum ratio method, and whether the plasma was close to the local thermodynamic equilibrium (LTE) state was investigated in detail. The results showed that approximately 5 μs after the plasma formed, the changes in the electron and excitation temperatures, the latter determined using a Boltzmann plot, overlapped within the 15% error range, which indicated that the LTE state was reached. The recombination of electrons and ions and the free-electron expansion process led to deviation from the LTE state. The plasma's expansion rate slowed over time, and when the expansion time was close to the ionization equilibrium time, the LTE state was almost reached. The McWhirter criterion was adopted to calculate the threshold electron density for different species, and the results showed that the experimental electron density was greater than the threshold electron density, which meant that the LTE state may have existed. However, for the nonmetal element N, the threshold electron density was greater than the experimental value approximately 0.8 μs after the plasma formed, which meant that the LTE state did not exist for N.
Accuracy of cancellous bone volume fraction measured by micro-CT scanning.
Ding, M; Odgaard, A; Hvid, I
1999-03-01
Volume fraction, the single most important parameter in describing trabecular microstructure, can easily be calculated from three-dimensional reconstructions of micro-CT images. This study sought to quantify the accuracy of this measurement. One hundred and sixty human cancellous bone specimens which covered a large range of volume fraction (9.8-39.8%) were produced. The specimens were micro-CT scanned, and the volume fraction based on Archimedes' principle was determined as a reference. After scanning, all micro-CT data were segmented using individual thresholds determined by the scanner supplied algorithm (method I). A significant deviation of volume fraction from method I was found: both the y-intercept and the slope of the regression line were significantly different from those of the Archimedes-based volume fraction (p < 0.001). New individual thresholds were determined based on a calibration of volume fraction to the Archimedes-based volume fractions (method II). The mean thresholds of the two methods were applied to segment 20 randomly selected specimens. The results showed that volume fraction using the mean threshold of method I was underestimated by 4% (p = 0.001), whereas the mean threshold of method II yielded accurate values. The precision of the measurement was excellent. Our data show that care must be taken when applying thresholds in generating 3-D data, and that a fixed threshold may be used to obtain reliable volume fraction data. This fixed threshold may be determined from the Archimedes-based volume fraction of a subgroup of specimens. The threshold may vary between different materials, and so it should be determined whenever a study series is performed.
[The analysis of threshold effect using Empower Stats software].
Lin, Lin; Chen, Chang-zhong; Yu, Xiao-dan
2013-11-01
In many biomedical studies of how a factor influences an outcome variable, the factor has no influence, or a positive effect, only within a certain range; beyond a certain threshold value, the size and/or direction of the effect changes. This is called a threshold effect. Whether a factor (x) has a threshold effect on the outcome variable (y) can be examined by fitting a smooth curve and observing whether a piecewise-linear relationship is present, and then analyzing the threshold effect using a segmented regression model, a likelihood ratio test (LRT), and bootstrap resampling. Empower Stats software, developed by the American company X & Y Solutions Inc., has a threshold effect analysis module. One can input a threshold value to simulate data segmented at that given threshold, or let the software determine the optimal threshold from the data automatically and calculate the confidence interval of the threshold.
Barboni, M.T.S.; Gomes, B.D.; Souza, G.S.; Rodrigues, A.R.; Ventura, D.F.; Silveira, L.C.L.
2013-01-01
The purpose of the present study was to measure contrast sensitivity to equiluminant gratings using steady-state visual evoked cortical potential (ssVECP) and psychophysics. Six healthy volunteers were evaluated with ssVECPs and psychophysics. The visual stimuli were red-green or blue-yellow horizontal sinusoidal gratings, 5° × 5°, 34.3 cd/m2 mean luminance, presented at 6 Hz. Eight spatial frequencies from 0.2 to 8 cpd were used, each presented at 8 contrast levels. Contrast threshold was obtained by extrapolating second harmonic amplitude values to zero. Psychophysical contrast thresholds were measured using stimuli at 6 Hz and static presentation. Contrast sensitivity was calculated as the inverse function of the pooled cone contrast threshold. ssVECP and both psychophysical contrast sensitivity functions (CSFs) were low-pass functions for red-green gratings. For electrophysiology, the highest contrast sensitivity values were found at 0.4 cpd (1.95 ± 0.15). ssVECP CSF was similar to dynamic psychophysical CSF, while static CSF had higher values ranging from 0.4 to 6 cpd (P < 0.05, ANOVA). Blue-yellow chromatic functions showed no specific tuning shape; however, at high spatial frequencies the evoked potentials showed higher contrast sensitivity than the psychophysical methods (P < 0.05, ANOVA). Evoked potentials can be used reliably to evaluate chromatic red-green CSFs in agreement with psychophysical thresholds, mainly if the same temporal properties are applied to the stimulus. For blue-yellow CSF, correlation between electrophysiology and psychophysics was poor at high spatial frequency, possibly due to a greater effect of chromatic aberration on this kind of stimulus. PMID:23369980
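The extrapolation-to-zero step for the ssVECP thresholds can be sketched as a linear regression of second-harmonic amplitude on contrast, taking the zero-amplitude intercept as the threshold (numbers invented):

import numpy as np

def vep_threshold(contrasts, amplitudes):
    """Regress second-harmonic amplitude on contrast; the contrast at which
    the fitted line reaches zero amplitude is taken as the threshold,
    as in the extrapolation procedure described in the abstract."""
    slope, intercept = np.polyfit(contrasts, amplitudes, 1)
    return -intercept / slope

contrast = np.array([0.05, 0.10, 0.20, 0.40, 0.80])
amp_uV = np.array([0.1, 0.5, 1.4, 3.1, 6.2])       # grows roughly linearly
print(round(vep_threshold(contrast, amp_uV), 3))   # extrapolated threshold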
Threshold concepts: implications for the management of natural resources
Guntenspergen, Glenn R.; Gross, John
2014-01-01
Threshold concepts can have broad relevance in natural resource management. However, the concept of ecological thresholds has not been widely incorporated or adopted in management goals. This largely stems from the uncertainty revolving around threshold levels and the post hoc analyses that have generally been used to identify them. Natural resource managers need new tools and approaches that will help them assess the existence and detection of conditions that demand management actions. Additional threshold concepts include utility thresholds (which are based on human values about ecological systems) and decision thresholds (which reflect management objectives and values and include ecological knowledge about a system), as well as ecological thresholds. Together these concepts provide a framework for considering the use of threshold concepts in natural resource decision making.
Near-Threshold Fatigue Crack Growth Behavior of Fine-Grain Nickel-Based Alloys
NASA Technical Reports Server (NTRS)
Newman, John A.; Piascik, Robert S.
2003-01-01
Constant-Kmax fatigue crack growth tests were performed on two fine-grain nickel-base alloys, Inconel 718 (DA) and René 95, to determine if these alloys exhibit the near-threshold time-dependent crack growth behavior observed for fine-grain aluminum alloys in room-temperature laboratory air. Test results showed that increases in Kmax values resulted in increased crack growth rates, but no evidence of time-dependent crack growth was observed for either nickel-base alloy at room temperature.
A new iterative triclass thresholding technique in image segmentation.
Cai, Hongmin; Yang, Zhong; Cao, Xinhua; Xia, Weiming; Xu, Xiaoyin
2014-03-01
We present a new method in image segmentation that is based on Otsu's method but iteratively searches subregions of the image for segmentation, instead of treating the full image as a whole region for processing. The iterative method starts with Otsu's threshold and computes the mean values of the two classes separated by the threshold. Based on Otsu's threshold and the two mean values, the method separates the image into three classes instead of the two of the standard Otsu's method. The first two classes are determined as the foreground and background, and they are not processed further. The third class is denoted as a to-be-determined (TBD) region that is processed at the next iteration. At the succeeding iteration, Otsu's method is applied to the TBD region to calculate a new threshold and two class means, and the TBD region is again separated into three classes, namely foreground, background, and a new TBD region, which by definition is smaller than the previous TBD regions. Then the new TBD region is processed in the same manner. The process stops when the difference between Otsu's thresholds calculated in two successive iterations is less than a preset value. All the intermediate foreground and background regions are then, respectively, combined to create the final segmentation result. Tests on synthetic and real images showed that the new iterative method can achieve better performance than the standard Otsu's method in many challenging cases, such as identifying weak objects and revealing fine structures of complex objects, while the added computational cost is minimal.
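A compact Python rendering of the iteration described above, using scikit-image's Otsu implementation (the stopping tolerance and the test image are ours):

import numpy as np
from skimage.filters import threshold_otsu

def iterative_triclass(image, tol=1e-3, max_iter=50):
    """Iterative triclass Otsu thresholding: repeatedly apply Otsu to the
    to-be-determined (TBD) region between the two class means until the
    threshold stabilizes, then split the last TBD region by the threshold."""
    img = image.astype(float)
    fg = np.zeros(img.shape, bool)            # accumulated definite foreground
    tbd = np.ones(img.shape, bool)            # current TBD region
    t_prev = np.inf
    for _ in range(max_iter):
        vals = img[tbd]
        t = threshold_otsu(vals)
        mu_bg = vals[vals <= t].mean()        # background class mean
        mu_fg = vals[vals > t].mean()         # foreground class mean
        fg |= tbd & (img > mu_fg)             # above both means: foreground
        new_tbd = tbd & (img > mu_bg) & (img <= mu_fg)
        if abs(t - t_prev) < tol or not new_tbd.any():
            break
        t_prev, tbd = t, new_tbd
    return fg | (tbd & (img > t))             # final split of the TBD region

rng = np.random.default_rng(6)
im = rng.normal(0.2, 0.05, (64, 64))
im[20:30, 20:30] += 0.5                       # bright synthetic object
print(iterative_triclass(im).sum(), "foreground pixels")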
3D SAPIV particle field reconstruction method based on adaptive threshold.
Qu, Xiangju; Song, Yang; Jin, Ying; Li, Zhenhua; Wang, Xuezhen; Guo, ZhenYan; Ji, Yunjing; He, Anzhi
2018-03-01
Particle image velocimetry (PIV) is an essential flow-field diagnostic technique that provides instantaneous velocimetry information non-intrusively. Three-dimensional (3D) PIV methods can provide a full understanding of a 3D structure, the complete stress tensor, and the vorticity vector in complex flows. In synthetic aperture particle image velocimetry (SAPIV), the flow field can be measured at large particle intensities from the same direction by different cameras. During SAPIV particle reconstruction, particles are commonly reconstructed by manually setting a threshold to filter out unfocused particles in the refocused images. In this paper, the particle intensity distribution in refocused images is analyzed, and a SAPIV particle field reconstruction method based on an adaptive threshold is presented. By using the adaptive threshold to filter the 3D measurement volume integrally, the three-dimensional location information of the focused particles can be reconstructed. The cross-correlations between the images captured by the cameras and the images projected by the reconstructed particle field are calculated for different threshold values. The optimal threshold is determined by cubic curve fitting and is defined as the threshold value at which the correlation coefficient reaches its maximum. A numerical simulation of a 16-camera array and a particle field at two adjacent time events quantitatively evaluates the performance of the proposed method. An experimental system consisting of an array of 16 cameras was used to reconstruct four adjacent frames in a vortex flow field. The results show that the proposed reconstruction method can effectively reconstruct 3D particle fields.
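The threshold search itself is generic: evaluate the reprojection cross-correlation at candidate thresholds, fit a cubic, and take its maximizer. A Python sketch with a mock correlation curve standing in for the actual reprojection step:

import numpy as np

def optimal_threshold(candidates, corr_values):
    """Fit a cubic to (threshold, correlation) samples and return the
    threshold at which the fitted curve is maximal, as in the SAPIV method."""
    coeffs = np.polyfit(candidates, corr_values, 3)
    fine = np.linspace(candidates.min(), candidates.max(), 1000)
    return fine[np.argmax(np.polyval(coeffs, fine))]

thr = np.linspace(0.1, 0.9, 9)
corr = -((thr - 0.55) ** 2) + 0.8 + 0.01 * np.sin(9 * thr)  # mock measurements
print(round(optimal_threshold(thr, corr), 3))               # near 0.55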
NASA Technical Reports Server (NTRS)
Iversen, J. D.; White, B. R.; Pollack, J. B.; Greeley, R.
1976-01-01
Results are reported for wind-tunnel experiments performed to determine the threshold friction speed of particles with different densities. Experimentally determined threshold speeds are plotted as a function of particle diameter and in terms of threshold parameter vs particle friction Reynolds number. The curves are compared with those of previous experiments, and an A-B curve is plotted to show differences in threshold speed due to differences in size distributions and particle shapes. Effects of particle diameter are investigated, an expression for threshold speed is derived by considering the equilibrium forces acting on a single particle, and other approximately valid expressions are evaluated. It is shown that the assumption of universality of the A-B curve is in error at very low pressures for small particles and that only predictions which take account of both Reynolds number and effects of interparticle forces yield reasonable agreement with experimental data. Effects of nonerodible surface roughness are examined, and threshold speeds computed with allowance for this factor are compared with experimental values. Threshold friction speeds on Mars are then estimated for a surface pressure of 5 mbar, taking into account all the factors considered.
NASA Astrophysics Data System (ADS)
Ren, Zhong; Liu, Guodong; Xiong, Zhihua
2016-10-01
Denoising of the photoacoustic signals of glucose is one of the most important steps in fruit quality identification because the real-time photoacoustic signals of glucose are easily contaminated by various kinds of noise. To remove the noise and other useless information, an improved wavelet threshold function is proposed. Compared with the traditional wavelet hard and soft threshold functions, the improved wavelet threshold function can overcome the pseudo-oscillation effect in the denoised photoacoustic signals, owing to its continuity, and the error between the denoised signals and the original signals can be decreased. To validate the feasibility of denoising with the improved wavelet threshold function, denoising simulation experiments based on MATLAB programming were performed. In the simulation experiments, a standard test signal was used, and three other denoising methods were compared with the improved wavelet threshold function. The signal-to-noise ratio (SNR) and the root-mean-square error (RMSE) were used to evaluate the denoising performance. The experimental results demonstrate that the improved wavelet threshold function gives the largest SNR value and the smallest RMSE value, which fully verifies that denoising with the improved wavelet threshold function is feasible. Finally, the improved wavelet threshold function was used to remove the noise from the photoacoustic signals of glucose solutions, with very good denoising results. Therefore, the improved wavelet threshold function denoising proposed in this paper has potential value for denoising photoacoustic signals.
Edwards, Rachael M; Godwin, J David; Hippe, Dan S; Kicska, Gregory
2016-01-01
It is known that atelectasis demonstrates greater contrast enhancement than pneumonia on computed tomography (CT). However, the effectiveness of using a Hounsfield unit (HU) threshold to distinguish pneumonia from atelectasis has never been shown. The objective of the study is to demonstrate that an HU threshold can be quantitatively used to effectively distinguish pneumonia from atelectasis. Retrospectively identified CT pulmonary angiogram examinations that did not show pulmonary embolism but contained nonaerated lungs were classified as atelectasis or pneumonia based on established clinical criteria. The HU attenuation was measured in these nonaerated lungs. Receiver operating characteristic (ROC) analysis was performed to determine the area under the ROC curve, sensitivity, and specificity of using the attenuation to distinguish pneumonia from atelectasis. Sixty-eight nonaerated lungs were measured in 55 patients. The mean (SD) enhancement was 62 (18) HU in pneumonia and 119 (24) HU in atelectasis (P < 0.001). A threshold of 92 HU diagnosed pneumonia with 97% sensitivity (confidence interval [CI], 80%-99%) and 85% specificity (CI, 70-93). Accuracy, measured as area under the ROC curve, was 0.97 (CI, 0.89-0.99). We have established that a threshold HU value can be used to confidently distinguish pneumonia from atelectasis with our standard CT pulmonary angiogram imaging protocol and patient population. This suggests that a similar threshold HU value may be determined for other scanning protocols, and application of this threshold may facilitate a more confident diagnosis of pneumonia and thus speed treatment.
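Threshold selection of this kind is commonly done on the ROC curve; a Python sketch with simulated HU values drawn from the means and standard deviations reported above (scikit-learn's roc_curve; Youden's J is our choice of cutoff rule, not necessarily the authors'):

import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Mock HU measurements (pneumonia enhances less than atelectasis).
rng = np.random.default_rng(7)
hu = np.concatenate([rng.normal(62, 18, 30), rng.normal(119, 24, 38)])
is_pneumonia = np.concatenate([np.ones(30), np.zeros(38)])

# Lower attenuation argues for pneumonia, so score with -HU.
fpr, tpr, thresholds = roc_curve(is_pneumonia, -hu)
youden = np.argmax(tpr - fpr)                       # Youden's J cutoff
print("AUC:", round(roc_auc_score(is_pneumonia, -hu), 3))
print("HU cutoff:", round(-thresholds[youden], 1))  # call pneumonia below this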
Different binarization processes validated against manual counts of fluorescent bacterial cells.
Tamminga, Gerrit G; Paulitsch-Fuchs, Astrid H; Jansen, Gijsbert J; Euverink, Gert-Jan W
2016-09-01
State-of-the-art software methods (such as fixed-value or statistical approaches) for creating a binary image of fluorescent bacterial cells are not as accurate and precise as they should be for counting bacteria and measuring their area. To overcome these bottlenecks, we introduce biological significance into the derivation of a binary image from a greyscale microscopic image. Using our biological-significance approach we are able to automatically count about the same number of cells as an individual researcher would by manual/visual counting, whereas using the fixed-value or statistical approach to obtain a binary image leads to about 20% fewer cells in automatic counting. In our procedure we included area measurements of the bacterial cells to determine the right parameters for background subtraction and threshold values. In an iterative process, the threshold and background-subtraction values were incremented until the number of particles smaller than a typical bacterial cell fell below the number of bacterial cells of a certain area. This research also shows that every image has a specific threshold with respect to the optical system, magnification, and staining procedure, as well as the exposure time. The biological-significance approach shows that automatic counting can be performed with the same accuracy, precision, and reproducibility as manual counting. The same approach can be used to count bacterial cells with different optical systems (Leica, Olympus and Navitar), magnification factors (200× and 400×), staining procedures (DNA (Propidium Iodide) and RNA (FISH)) and substrates (polycarbonate filter or glass). Copyright © 2016 Elsevier B.V. All rights reserved.
Soil contamination in landfills: a case study of a landfill in Czech Republic
NASA Astrophysics Data System (ADS)
Adamcová, D.; Vaverková, M. D.; Bartoň, S.; Havlíček, Z.; Břoušková, E.
2016-02-01
A phytotoxicity test was determined to assess ecotoxicity of landfill soil. Sinapis alba L. was used as a bioindicator of heavy metals. Soil samples 1-8, which were taken from the landfill body, edge of the landfill body, and its vicinity meet the limits for heavy metals Co, Cd, Pb, and Zn specified in the applicable legislation. Hg and Mn threshold values are not established in legislation, but values have been determined for the needs of the landfill operator. For heavy metals Cr, Cu, and Ni sample 2 exceeded the threshold values, which attained the highest values of all the samples tested for Cr, Cu, and Ni. For Cr and Ni the values were several times higher than values of the other samples. The second highest values for Cr, Cu, and Ni showed sample 6 and 7. Both samples exceeded the set limits. An increase in plant biomass was observed in plants growing on plates with soil samples, but no changes in appearance, slow growth, or necrotic lesions appeared. Ecotoxicity tests show that tested soils (concentration of 50 %) collected from the landfill body, edge of the landfill body, and its vicinity reach high percentage values of germination capacity of seeds of Sinapis alba L. (101-137 %). At a concentration of 25 %, tested soil samples exhibit lower values of germination capacity - in particular samples 3 to 8 - yet the seed germination capacity in all eight samples of tested soils ranges between 86 and 137 %.
Vuralli, Doga; Evren Boran, H; Cengiz, Bulent; Coskun, Ozlem; Bolay, Hayrunnisa
2016-10-01
Migraine headache attacks have been shown to be accompanied by significant prolongation of somatosensory temporal discrimination threshold values, supporting signs of disrupted sensorial processing in migraine. Chronic migraine is one of the most debilitating and challenging headache disorders, with no available biomarker. We aimed to test the diagnostic value of somatosensory temporal discrimination for chronic migraine in this prospective, controlled study. Fifteen chronic migraine patients and 15 healthy controls completed the study. Chronic migraine patients were evaluated twice, during a headache and during a headache-free period. Somatosensory temporal discrimination threshold values were evaluated in both hands. Duration of migraine and chronic migraine, headache intensity, clinical features accompanying headache such as nausea, photophobia, phonophobia and osmophobia, and pressure pain thresholds were also recorded. In the chronic migraine group, somatosensory temporal discrimination threshold values on the headache day (138.8 ± 21.8 ms for the right hand and 141.2 ± 17.4 ms for the left hand) were significantly higher than somatosensory temporal discrimination threshold values on the headache-free day (121.5 ± 13.8 ms for the right hand and 122.8 ± 12.6 ms for the left hand, P = .003 and P < .0001, respectively) and the somatosensory temporal discrimination thresholds of healthy volunteers (35.4 ± 5.5 ms for the right hand and 36.4 ± 5.4 ms for the left hand, P < .0001 and P < .0001, respectively). Somatosensory temporal discrimination threshold values of chronic migraine patients on the headache-free day were significantly prolonged compared to those of the control group (121.5 ± 13.8 ms vs 35.4 ± 5.5 ms for the right hand, P < .0001, and 122.8 ± 12.6 ms vs 36.4 ± 5.4 ms for the left hand, P < .0001). Somatosensory temporal discrimination threshold values of the hand contralateral to the headache lateralization (153.3 ± 13.7 ms) were significantly higher (P < .0001) than those of the ipsilateral hand (118.2 ± 11.9 ms) in chronic migraine patients when the headache was lateralized. The headache intensity of chronic migraine patients, rated with a visual analog score, was positively correlated with the contralateral somatosensory temporal discrimination threshold values. Somatosensory temporal discrimination thresholds remain elevated during headache-free intervals in patients with chronic migraine. By providing evidence, for the first time, of unremitting disruption of central sensory processing, the somatosensory temporal discrimination test stands out as a promising neurophysiological biomarker for chronic migraine. © 2016 American Headache Society.
Ab initio molecular dynamics simulations of low energy recoil events in MgO
Petersen, B. A.; Liu, B.; Weber, W. J.; ...
2017-01-11
In this paper, low-energy recoil events in MgO are studied using ab initio molecular dynamics simulations to reveal the dynamic displacement processes and final defect configurations. Threshold displacement energies, Ed, are obtained for Mg and O along three low-index crystallographic directions, [100], [110], and [111]. The minimum values for Ed are found along the [110] direction consisting of the same element, either Mg or O atoms. Minimum threshold values of 29.5 eV for Mg and 25.5 eV for O, respectively, are suggested from the calculations. For other directions, the threshold energies are considerably higher: 65.5 and 150.0 eV for O along [111] and [100], and 122.5 eV for Mg along both [111] and [100] directions, respectively. These results show that the recoil events in MgO are partial-charge-transfer assisted processes where the charge transfer plays an important role. Finally, there is a similar trend found in other oxide materials, where the threshold displacement energy correlates linearly with the peak partial-charge transfer, suggesting this behavior might be universal in ceramic oxides.
Liu, Xin; Klinkhammer, Sönke; Wang, Ziyao; Wienhold, Tobias; Vannahme, Christoph; Jakobs, Peter-Jürgen; Bacher, Andreas; Muslija, Alban; Mappes, Timo; Lemmer, Uli
2013-11-18
Optically excited organic semiconductor distributed feedback (DFB) lasers enable efficient lasing in the visible spectrum. Here, we report on the rapid and parallel fabrication of DFB lasers via transferring a nanograting structure from a flexible mold onto an unstructured film of the organic gain material. This geometrically well-defined structure allows for a systematic investigation of the laser threshold behavior. The laser thresholds for these devices show a strong dependence on the pump spot diameter. This experimental finding is in good qualitative agreement with calculations based on coupled-wave theory. With further investigations on various DFB laser geometries prepared by different routes and based on different organic gain materials, we found that these findings are quite general. This is important for the comparison of threshold values of various devices characterized under different excitation areas.
Yang, Junyuan; Martcheva, Maia; Wang, Lin
2015-10-01
Vaccination is the most effective method of preventing the spread of infectious diseases. For many diseases, vaccine-induced immunity is not life long and the duration of immunity is not always fixed. In this paper, we propose an SIVS model taking the waning of vaccine-induced immunity and general nonlinear incidence into consideration. Our analysis shows that the model exhibits global threshold dynamics in the sense that if the basic reproduction number is less than 1, then the disease-free equilibrium is globally asymptotically stable implying the disease dies out; while if the basic reproduction number is larger than 1, then the endemic equilibrium is globally asymptotically stable indicating that the disease persists. This global threshold result indicates that if the vaccination coverage rate is below a critical value, then the disease always persists and only if the vaccination coverage rate is above the critical value, the disease can be eradicated. Copyright © 2015 Elsevier Inc. All rights reserved.
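As a minimal numerical illustration of this kind of threshold result, the sketch below integrates an SIVS model with waning immunity, using bilinear incidence for concreteness (the paper treats a general nonlinear incidence; all parameter values are invented for illustration):

    from scipy.integrate import solve_ivp

    # Proportions with s + i + v = 1; phi = vaccination rate, w = waning rate.
    beta, gamma, mu, w, phi = 0.4, 0.1, 0.02, 0.05, 0.3

    def sivs(t, y):
        s, i, v = y
        ds = mu + gamma * i + w * v - beta * s * i - (mu + phi) * s
        di = beta * s * i - (gamma + mu) * i
        dv = phi * s - (w + mu) * v
        return [ds, di, dv]

    # Basic reproduction number for this special case: the disease dies out when
    # r0 < 1 and persists when r0 > 1, mirroring the global threshold dynamics.
    r0 = beta * (mu + w) / ((gamma + mu) * (mu + w + phi))
    sol = solve_ivp(sivs, (0, 400), [0.9, 0.01, 0.09])
    print(r0, sol.y[1, -1])   # with these parameters r0 < 1 and i(t) decays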
NASA Astrophysics Data System (ADS)
Zierenberg, Johannes; Fricke, Niklas; Marenz, Martin; Spitzner, F. P.; Blavatska, Viktoria; Janke, Wolfhard
2017-12-01
We study long-range power-law correlated disorder on square and cubic lattices. In particular, we present high-precision results for the percolation thresholds and the fractal dimension of the largest clusters as a function of the correlation strength. The correlations are generated using a discrete version of the Fourier filtering method. We consider two different metrics to set the length scales over which the correlations decay, showing that the percolation thresholds are highly sensitive to such system details. By contrast, we verify that the fractal dimension d_f is a universal quantity and unaffected by the choice of metric. We also show that for weak correlations, its value coincides with that for the uncorrelated system. In two dimensions we observe a clear increase of the fractal dimension with increasing correlation strength, approaching d_f → 2. The onset of this change does not seem to be determined by the extended Harris criterion.
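A minimal sketch of the Fourier filtering method named above, generating a Gaussian field with power-law correlations C(r) ~ r^-a on an L x L lattice (the spectral filter S(k) ~ k^(a-d) for correlation exponent a < d = 2 is the textbook convention; the paper's discrete variant may differ in details):

    import numpy as np

    def correlated_disorder(L, a, seed=0):
        rng = np.random.default_rng(seed)
        k = 2 * np.pi * np.fft.fftfreq(L)
        kx, ky = np.meshgrid(k, k)
        k2 = kx ** 2 + ky ** 2
        k2[0, 0] = np.inf                       # drop the zero mode
        filt = k2 ** ((a - 2) / 4.0)            # sqrt of S(k) ~ |k|^(a-2)
        noise = np.fft.fft2(rng.standard_normal((L, L)))
        field = np.real(np.fft.ifft2(noise * filt))
        return (field - field.mean()) / field.std()

A thresholded version of this field (for example, occupying the fraction p of sites with the largest values) then serves as the correlated occupation pattern whose percolation threshold is studied.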
An evaluation of the Johnson-Cook model to simulate puncture of 7075 aluminum plates.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Corona, Edmundo; Orient, George Edgar
The objective of this project was to evaluate the use of the Johnson-Cook strength and failure models in an adiabatic finite element model to simulate the puncture of 7075-T651 aluminum plates that were studied as part of an ASC L2 milestone by Corona et al (2012). The Johnson-Cook model parameters were determined from material test data. The results show a marked improvement, in particular in the calculated threshold velocity between no puncture and puncture, over those obtained in 2012. The threshold velocity calculated using a baseline model is just 4% higher than the mean value determined from experiment, in contrast to 60% in the 2012 predictions. Sensitivity studies showed that the threshold velocity predictions were improved by calibrating the relations between the equivalent plastic strain at failure and stress triaxiality, strain rate and temperature, as well as by the inclusion of adiabatic heating.
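For reference, the Johnson-Cook flow stress used in such simulations has the standard multiplicative form sketched below; the parameter values are placeholders, not the calibrated 7075-T651 set from the report:

    import numpy as np

    def jc_stress(eps_p, eps_rate, T, A=520e6, B=477e6, n=0.52, C=0.025,
                  m=1.0, eps0=1.0, T_room=293.0, T_melt=750.0):
        """Johnson-Cook flow stress (Pa): hardening x rate sensitivity x softening."""
        t_star = np.clip((T - T_room) / (T_melt - T_room), 0.0, 1.0)
        rate_term = 1.0 + C * np.log(np.maximum(eps_rate / eps0, 1e-12))
        return (A + B * eps_p ** n) * rate_term * (1.0 - t_star ** m)

    print(jc_stress(0.05, 1e3, 400.0) / 1e6, "MPa")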
Two-Photon Absorption and Two-Photon-Induced Gain in Perovskite Quantum Dots.
Nagamine, Gabriel; Rocha, Jaqueline O; Bonato, Luiz G; Nogueira, Ana F; Zaharieva, Zhanet; Watt, Andrew A R; de Brito Cruz, Carlos H; Padilha, Lazaro A
2018-06-21
Perovskite quantum dots (PQDs) emerged as a promising class of material for applications in lighting devices, including light-emitting diodes and lasers. In this work, we explore nonlinear absorption properties of PQDs, showing the spectral signatures and the size dependence of their two-photon absorption (2PA) cross-section, which can reach values higher than 10^6 GM. The large 2PA cross-section allows for low-threshold two-photon-induced amplified spontaneous emission (ASE), which can be as low as 1.6 mJ/cm^2. We also show that the ASE properties are strongly dependent on the nanomaterial size, and that the ASE threshold, in terms of the average number of excitons, decreases for smaller PQDs. Investigating the PQDs' biexciton binding energy, we observe a strong correlation between increasing biexciton binding energy and decreasing ASE threshold, suggesting that ASE in PQDs is a biexciton-assisted process.
Study of blur discrimination for 3D stereo viewing
NASA Astrophysics Data System (ADS)
Subedar, Mahesh; Karam, Lina J.
2014-03-01
Blur is an important attribute in the study and modeling of the human visual system. Blur discrimination has been studied extensively using 2D test patterns. In this study, we present the details of subjective tests performed to measure blur discrimination thresholds using stereoscopic 3D test patterns. Specifically, the effect of disparity on blur discrimination thresholds is studied on a passive stereoscopic 3D display. The blur discrimination thresholds are measured using stereoscopic 3D test patterns with positive, negative and zero disparity values, at multiple reference blur levels. A disparity value of zero represents the 2D viewing case, where both eyes observe the same image. The subjective test results indicate that the blur discrimination thresholds remain constant as the disparity value is varied. This further indicates that binocular disparity does not affect blur discrimination thresholds, and that models developed for 2D blur discrimination thresholds can be extended to stereoscopic 3D blur discrimination thresholds. We present a fit of the Weber model to the 3D blur discrimination thresholds measured in the subjective experiments.
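A generalized Weber-law fit of the kind mentioned in the last sentence can be reproduced in a few lines; the data points below are invented for illustration, and the paper's exact parameterization may differ:

    import numpy as np

    ref_blur = np.array([0.5, 1.0, 2.0, 4.0])   # reference blur sigma (pixels)
    jnd = np.array([0.25, 0.32, 0.55, 1.05])    # measured discrimination thresholds

    # Generalized Weber model: threshold = w * reference + c, where the
    # intercept c captures the non-zero threshold at zero reference blur.
    w, c = np.polyfit(ref_blur, jnd, 1)
    print(f"Weber fraction ~ {w:.2f}, intercept ~ {c:.2f}")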
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cain, W.S.; Shoaf, C.R.; Velasquez, S.F.
1992-03-01
In response to numerous requests for information related to odor thresholds, this document was prepared by the Air Risk Information Support Center in its role of providing technical assistance to State and Local government agencies on risk assessment of air pollutants. A discussion of basic concepts related to olfactory function and the measurement of odor thresholds is presented. A detailed discussion of the criteria used to evaluate the quality of published odor threshold values is provided. The use of odor threshold information in risk assessment is discussed. The results of a literature search and review of odor threshold information for the chemicals listed as hazardous air pollutants in the Clean Air Act amendments of 1990 are presented. The published odor threshold values are critically evaluated based on the criteria discussed, and the values of acceptable quality are used to determine a geometric mean or best estimate.
Simoneau, Gabrielle; Levis, Brooke; Cuijpers, Pim; Ioannidis, John P A; Patten, Scott B; Shrier, Ian; Bombardier, Charles H; de Lima Osório, Flavia; Fann, Jesse R; Gjerdingen, Dwenda; Lamers, Femke; Lotrakul, Manote; Löwe, Bernd; Shaaban, Juwita; Stafford, Lesley; van Weert, Henk C P M; Whooley, Mary A; Wittkampf, Karin A; Yeung, Albert S; Thombs, Brett D; Benedetti, Andrea
2017-11-01
Individual patient data (IPD) meta-analyses are increasingly common in the literature. In the context of estimating the diagnostic accuracy of ordinal or semi-continuous scale tests, sensitivity and specificity are often reported for a given threshold or a small set of thresholds, and a meta-analysis is conducted via a bivariate approach to account for their correlation. When IPD are available, sensitivity and specificity can be pooled for every possible threshold. Our objective was to compare the bivariate approach, which can be applied separately at every threshold, to two multivariate methods: the ordinal multivariate random-effects model and the Poisson correlated gamma-frailty model. Our comparison was empirical, using IPD from 13 studies that evaluated the diagnostic accuracy of the 9-item Patient Health Questionnaire depression screening tool, and included simulations. The empirical comparison showed that the implementation of the two multivariate methods is more laborious in terms of computational time and sensitivity to user-supplied values compared to the bivariate approach. Simulations showed that ignoring the within-study correlation of sensitivity and specificity across thresholds did not worsen inferences with the bivariate approach compared to the Poisson model. The ordinal approach was not suitable for simulations because the model was highly sensitive to user-supplied starting values. We tentatively recommend the bivariate approach rather than more complex multivariate methods for IPD diagnostic accuracy meta-analyses of ordinal scale tests, although the limited type of diagnostic data considered in the simulation study restricts the generalization of our findings. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Modeling T-cell proliferation: an investigation of the consequences of the Hayflick limit.
Pilyugin, S; Mittler, J; Antia, R
1997-05-07
Somatic cells, including immune cells such as T-cells, have a limited capacity for proliferation and can only replicate for a finite number of generations (known as the Hayflick limit) before dying. In this paper we use mathematical models to investigate the consequences of introducing a Hayflick limit on the dynamics of T-cells stimulated with specific antigen. We show that while the Hayflick limit does not alter the dynamics of the T-cell response to antigen over the short term, it may have a profound effect on the long-term immune response. In particular, we show that over the long term the Hayflick limit may be important in determining whether an immune response can be maintained to a persistent antigen (or parasite). The eventual outcome is determined by the magnitude of the Hayflick limit, the extent to which antigen reduces the input of T-cells from the thymus, and the rate of antigen-induced proliferation of T-cells. Counter to what might be expected, we show that the persistence of an immune response (immune memory) requires the density of persistent antigen to be less than a defined threshold value. If the amount of persistent antigen (or parasite) is greater than this threshold value then immune memory will be relatively short lived. The consequences of this threshold for persistent mycobacterial and HIV infections and for the generation of vaccines are discussed.
Is "No-Threshold" a "Non-Concept"?
NASA Astrophysics Data System (ADS)
Schaeffer, David J.
1981-11-01
A controversy prominent in the scientific literature that has carried over to newspapers, magazines, and popular books is having serious social and political repercussions today: “Is there, or is there not, a threshold below which exposure to a carcinogen will not induce cancer?” The distinction between establishing the existence of this threshold (which is a theoretical question) and its value (which is an experimental one) gets lost in the scientific arguments. Establishing the existence of this threshold has now become a philosophical question (and an emotional one). In this paper I qualitatively outline theoretical reasons why a threshold must exist, discuss experiments which measure thresholds for two chemicals, and describe and apply a statistical method for estimating the threshold value from exposure-response data.
Adaptive time-sequential binary sensing for high dynamic range imaging
NASA Astrophysics Data System (ADS)
Hu, Chenhui; Lu, Yue M.
2012-06-01
We present a novel image sensor for high dynamic range imaging. The sensor performs an adaptive one-bit quantization at each pixel, with the pixel output switched from 0 to 1 only if the number of photons reaching that pixel is greater than or equal to a quantization threshold. With an oracle knowledge of the incident light intensity, one can pick an optimal threshold (for that light intensity) and the corresponding Fisher information contained in the output sequence follows closely that of an ideal unquantized sensor over a wide range of intensity values. This observation suggests the potential gains one may achieve by adaptively updating the quantization thresholds. As the main contribution of this work, we propose a time-sequential threshold-updating rule that asymptotically approaches the performance of the oracle scheme. With every threshold mapped to a number of ordered states, the dynamics of the proposed scheme can be modeled as a parametric Markov chain. We show that the frequencies of different thresholds converge to a steady-state distribution that is concentrated around the optimal choice. Moreover, numerical experiments show that the theoretical performance measures (Fisher information and Cramér-Rao bounds) can be achieved by a maximum likelihood estimator, which is guaranteed to find the globally optimal solution due to the concavity of the log-likelihood functions. Compared with conventional image sensors and the strategy that utilizes a constant single-photon threshold considered in previous work, the proposed scheme attains orders of magnitude improvement in terms of sensor dynamic range.
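The paper's exact updating rule is not reproduced here, but a Robbins-Monro-style stand-in shows the idea of steering a one-bit threshold toward an informative operating point (all numbers are illustrative):

    import numpy as np

    rng = np.random.default_rng(0)
    intensity = 40.0                 # unknown light level (photons per exposure)
    q = 1.0                          # one-bit quantization threshold

    # Nudging the threshold up on a 1 and down on a 0 drives the firing
    # probability to 1/2, placing q near the median photon count, which for a
    # Poisson law sits close to the (unknown) intensity.
    for t in range(1, 5001):
        bit = rng.poisson(intensity) >= q
        q += (1.0 if bit else -1.0) * 50.0 / t   # decaying step size

    print(round(q, 1))   # settles in the vicinity of 40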
NASA Astrophysics Data System (ADS)
Obayashi, Takeshi; Kinoshita, Kengo
2013-01-01
Gene coexpression analysis is a powerful approach to elucidating gene function. We have established and developed this approach using the vast amount of publicly available gene expression data measured by microarray techniques. The coexpressed genes are used to estimate the function of a guide gene or to construct gene coexpression networks. To construct gene networks, researchers must introduce an arbitrary threshold of gene coexpression, because the coexpression measure is a continuous value. With regard to introducing a common threshold of gene coexpression, we previously reported that the rank of Pearson's correlation coefficient (PCC) is more useful than the original PCC value. In this manuscript, we re-assessed measures of gene coexpression for constructing gene coexpression networks and found that the mutual rank (MR) of PCC performs better than the rank of PCC and the original PCC at low false positive rates.
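A minimal sketch of the mutual-rank statistic described above, assuming an expression matrix with samples in rows and genes in columns (the function name is illustrative):

    import numpy as np

    def mutual_rank(expr):
        """MR of Pearson correlations; lower MR = stronger coexpression."""
        pcc = np.corrcoef(expr, rowvar=False)    # gene-by-gene PCC matrix
        np.fill_diagonal(pcc, -np.inf)           # exclude self-correlation
        order = np.argsort(-pcc, axis=1)         # columns by descending PCC
        ranks = np.empty_like(order)
        rows = np.arange(pcc.shape[0])[:, None]
        ranks[rows, order] = np.arange(1, pcc.shape[1] + 1)
        # MR(i, j) = geometric mean of the two directed ranks.
        return np.sqrt(ranks * ranks.T)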
NASA Technical Reports Server (NTRS)
Janoudi, A.; Poff, K. L.
1990-01-01
The relationship between the amount of light and the amount of response for any photobiological process can be based on the number of incident quanta per unit time (fluence rate-response) or on the number of incident quanta during a given period of irradiation (fluence-response). Fluence-response and fluence rate-response relationships have been measured for second positive phototropism by seedlings of Arabidopsis thaliana. The fluence-response relationships exhibit a single limiting threshold at about 0.01 micromole per square meter when measured at fluence rates from 2.4 × 10^-5 to 6.5 × 10^-3 micromoles per square meter per second. The threshold values in the fluence rate-response curves decrease with increasing time of irradiation, but show a common fluence threshold at about 0.01 micromole per square meter. These thresholds are the same as the threshold of about 0.01 micromole per square meter measured for first positive phototropism. Based on these data, it is suggested that second positive curvature has a threshold in time of about 10 minutes. Moreover, if the times of irradiation exceed the time threshold, there is a single limiting fluence threshold at about 0.01 micromole per square meter. Thus, the limiting fluence threshold for second positive phototropism is the same as the fluence threshold for first positive phototropism. Based on these data, we suggest that this common fluence threshold for first positive and second positive phototropism is set by a single photoreceptor pigment system.
van der Hoek, Yntze; Renfrew, Rosalind; Manne, Lisa L.
2013-01-01
Background Identifying persistence and extinction thresholds in species-habitat relationships is a major focal point of ecological research and conservation. However, one major concern regarding the incorporation of threshold analyses in conservation is the lack of knowledge on the generality and transferability of results across species and regions. We present a multi-region, multi-species approach of modeling threshold responses, which we use to investigate whether threshold effects are similar across species and regions. Methodology/Principal Findings We modeled local persistence and extinction dynamics of 25 forest-associated breeding birds based on detection/non-detection data, which were derived from repeated breeding bird atlases for the state of Vermont. We did not find threshold responses to be particularly well-supported, with 9 species supporting extinction thresholds and 5 supporting persistence thresholds. This contrasts with a previous study based on breeding bird atlas data from adjacent New York State, which showed that most species support persistence and extinction threshold models (15 and 22 of 25 study species respectively). In addition, species that supported a threshold model in both states had associated average threshold estimates of 61.41% (SE = 6.11, persistence) and 66.45% (SE = 9.15, extinction) in New York, compared to 51.08% (SE = 10.60, persistence) and 73.67% (SE = 5.70, extinction) in Vermont. Across species, thresholds were found at 19.45–87.96% forest cover for persistence and 50.82–91.02% for extinction dynamics. Conclusions/Significance Through an approach that allows for broad-scale comparisons of threshold responses, we show that species vary in their threshold responses with regard to habitat amount, and that differences between even nearby regions can be pronounced. We present both ecological and methodological factors that may contribute to the different model results, but propose that regardless of the reasons behind these differences, our results merit a warning that threshold values cannot simply be transferred across regions or interpreted as clear-cut targets for ecosystem management and conservation. PMID:23409106
Modified Discrete Grey Wolf Optimizer Algorithm for Multilevel Image Thresholding
Sun, Lijuan; Guo, Jian; Xu, Bin; Li, Shujing
2017-01-01
The computation of image segmentation has become more complicated with the increasing number of thresholds, and the selection and application of thresholds in image thresholding has at the same time become an NP-hard problem. The paper puts forward the modified discrete grey wolf optimizer algorithm (MDGWO), which improves the optimal-solution updating mechanism of the search agent by means of weights. Taking Kapur's entropy as the optimized function and based on the discreteness of thresholds in image segmentation, the paper first discretizes the grey wolf optimizer (GWO) and then proposes a new attack strategy that uses a weight coefficient to replace the search formula for the optimal solution used in the original algorithm. The experimental results show that MDGWO can search out the optimal thresholds efficiently and precisely, and that they are very close to the results obtained by exhaustive search. In comparison with electromagnetism optimization (EMO), differential evolution (DE), the Artificial Bee Colony (ABC), and the classical GWO, MDGWO has advantages in terms of image segmentation quality, objective function values, and their stability. PMID:28127305
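Kapur's entropy, the objective MDGWO maximizes, is straightforward to state in code; the sketch below computes the objective for a given threshold set and leaves the wolf-pack search itself aside:

    import numpy as np

    def kapur_entropy(hist, thresholds):
        """Sum of Shannon entropies of the classes cut by `thresholds`."""
        p = hist / hist.sum()
        edges = [0] + sorted(thresholds) + [len(p)]
        total = 0.0
        for lo, hi in zip(edges[:-1], edges[1:]):
            w = p[lo:hi].sum()                   # class probability mass
            if w > 0:
                q = p[lo:hi] / w
                q = q[q > 0]
                total -= (q * np.log(q)).sum()
        return total

An optimizer (MDGWO, EMO, DE, ABC, or plain exhaustive search when there are few thresholds) then seeks the threshold set that maximizes this value.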
Mitochondrial threshold effects.
Rossignol, Rodrigue; Faustin, Benjamin; Rocher, Christophe; Malgat, Monique; Mazat, Jean-Pierre; Letellier, Thierry
2003-01-01
The study of mitochondrial diseases has revealed dramatic variability in the phenotypic presentation of mitochondrial genetic defects. To attempt to understand this variability, different authors have studied energy metabolism in transmitochondrial cell lines carrying different proportions of various pathogenic mutations in their mitochondrial DNA. The same kinds of experiments have been performed on isolated mitochondria and on tissue biopsies taken from patients with mitochondrial diseases. The results have shown that, in most cases, phenotypic manifestation of the genetic defect occurs only when a threshold level is exceeded, and this phenomenon has been named the 'phenotypic threshold effect'. Subsequently, several authors showed that it was possible to inhibit considerably the activity of a respiratory chain complex, up to a critical value, without affecting the rate of mitochondrial respiration or ATP synthesis. This phenomenon was called the 'biochemical threshold effect'. More recently, quantitative analysis of the effects of various mutations in mitochondrial DNA on the rate of mitochondrial protein synthesis has revealed the existence of a 'translational threshold effect'. In this review these different mitochondrial threshold effects are discussed, along with their molecular bases and the roles that they play in the presentation of mitochondrial diseases. PMID:12467494
Excavated substrate modulates growth instability during nest building in ants
Toffin, Etienne; Kindekens, Jonathan; Deneubourg, Jean-Louis
2010-01-01
In social insects, the nests of the same species can show a large difference in size and shape. Despite these large variations, the nests share the same substructures, some appearing during nest growth. In ants, the interplay between nest size and digging activity leads to two successive morphological transitions from circular to branched shapes (budding along the perimeter of the circular cavity and tunnelling of the galleries). Like several other self-organized collective behaviours, this phenomenon, as well as the entire nest-digging process, is thought to be modulated by environmental properties. The present study investigates the effect of excavated substrate on the nest morphogenesis and the morphological transitions by using two materials with different cohesions. Here, we show that the two morphological transitions occur more frequently with a cohesive substrate than with a granular one: 96 per cent of cohesive experiments showed both transitions, whereas only 50 per cent did in granular experiments. We found that transitions and excavation cessation follow area–response thresholds: the shape transitions take place and the digging activity stops when the dug area reaches the corresponding threshold values. The shape transition thresholds are lower with the cohesive substrate and that of stopping digging is independent of nest shape and material. According to simulations, the experimental frequencies of transitions found their origin in the competition between transitions and activity cessation and in the difference between the transition threshold values of each substrate. Our results demonstrate how the substrate properties modulate the collective response and lead to various patterns. Considering the non-specific mechanisms at work, such effects of substrate coarseness have their counterparts in various collective behaviours, generating alternative patterns to colonize and exploit the environment. PMID:20410036
Novel methodologies for spectral classification of exon and intron sequences
NASA Astrophysics Data System (ADS)
Kwan, Hon Keung; Kwan, Benjamin Y. M.; Kwan, Jennifer Y. Y.
2012-12-01
Digital processing of a nucleotide sequence requires it to be mapped to a numerical sequence in which the choice of nucleotide to numeric mapping affects how well its biological properties can be preserved and reflected from nucleotide domain to numerical domain. Digital spectral analysis of nucleotide sequences unfolds a period-3 power spectral value which is more prominent in an exon sequence as compared to that of an intron sequence. The success of a period-3 based exon and intron classification depends on the choice of a threshold value. The main purposes of this article are to introduce novel codes for 1-sequence numerical representations for spectral analysis and compare them to existing codes to determine appropriate representation, and to introduce novel thresholding methods for more accurate period-3 based exon and intron classification of an unknown sequence. The main findings of this study are summarized as follows: Among sixteen 1-sequence numerical representations, the K-Quaternary Code I offers an attractive performance. A windowed 1-sequence numerical representation (with window length of 9, 15, and 24 bases) offers a possible speed gain over non-windowed 4-sequence Voss representation which increases as sequence length increases. A winner threshold value (chosen from the best among two defined threshold values and one other threshold value) offers a top precision for classifying an unknown sequence of specified fixed lengths. An interpolated winner threshold value applicable to an unknown and arbitrary length sequence can be estimated from the winner threshold values of fixed length sequences with a comparable performance. In general, precision increases as sequence length increases. The study contributes an effective spectral analysis of nucleotide sequences to better reveal embedded properties, and has potential applications in improved genome annotation.
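A sketch of the period-3 measure that underlies this classification, using the standard 4-sequence Voss mapping (the paper's 1-sequence K-Quaternary Code I is not reproduced here):

    import numpy as np

    def period3_snr(seq):
        """Power at the period-3 frequency over the mean spectral power."""
        seq = seq.upper()
        n = len(seq)
        spectrum = np.zeros(n)
        for base in "ACGT":                      # one indicator sequence per base
            x = np.fromiter((float(c == base) for c in seq), float, count=n)
            spectrum += np.abs(np.fft.fft(x)) ** 2
        return spectrum[n // 3] / spectrum[1 : n // 2].mean()

    print(round(period3_snr("ATG" * 120), 1))    # pure codon repeat: strong peak

A sequence is then called exon-like when this ratio exceeds the chosen threshold value, which is exactly where the winner-threshold estimation above comes in.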
Olfactory Threshold of Chlorine in Oxygen.
1977-09-01
The odor threshold of chlorine in oxygen was determined. Measurements were conducted in an altitude chamber, which provided an odor-free and noise-free background. Human male volunteers, with no previous olfactory acuity testing experience, served as panelists. Threshold values were affected by time intervals between trials and by age differences. The mean threshold value for 11 subjects was 0.08 ppm, obtained from positive responses to the lowest detectable level of chlorine in oxygen 50% of the time.
Wahren, L K
1990-09-01
In a previous study, allodynia to cold and vibratory stimuli was found in the finger stumps of 24 patients with amputations, control values being obtained from fingers of the intact contralateral hand. When treated with regional intravenous guanethidine block (RGB), some of the patients only had short-lasting relief of symptoms, whereas others experienced a more long-lasting beneficial effect. In the present long-term follow-up study the patients were re-examined 6 years after the RGB treatment. The aim was to investigate whether the earlier symptoms and signs persisted, and whether there were any differences in these respects, between patients with long-lasting (group 1) and short-lasting relief of symptoms after RGB (group 2). All 24 patients were asked to answer a questionnaire concerning their clinical symptoms. In addition, 14 of them visited the laboratory for determination of thermal and vibration-induced pain thresholds. Comparisons were made with values obtained at the first examination before RGB treatment and with values from 14 healthy subjects tested in a similar way on 2 occasions with an interval of 8 years. Twenty of 23 patients reported that cold exposure still evoked stump pain. However, the threshold measurements showed that with time the patients had become more tolerant to thermal stimuli not only in the injured but also in the uninjured hand. A rise in pain threshold was also observed when vibration-induced pain was tested in the injured hand. There was no significant difference between groups 1 and 2. Similar changes in pain thresholds with time were not observed in the group of healthy control subjects.
de Lemos Zingano, Bianca; Guarnieri, Ricardo; Diaz, Alexandre Paim; Schwarzbold, Marcelo Liborio; Bicalho, Maria Alice Horta; Claudino, Lucia Sukys; Markowitsch, Hans J; Wolf, Peter; Lin, Katia; Walz, Roger
2015-09-01
This study aimed to evaluate the diagnostic accuracy of the Hamilton Rating Scale for Depression (HRSD), the Beck Depression Inventory (BDI), the Hospital Anxiety and Depression Scale (HADS), and the Hospital Anxiety and Depression Scale-Depression subscale (HADS-D) as diagnostic tests for depressive disorder in drug-resistant mesial temporal lobe epilepsy with hippocampal sclerosis (MTLE-HS). One hundred three patients with drug-resistant MTLE-HS were enrolled. All patients underwent a neurological examination, interictal and ictal video-electroencephalogram (V-EEG) analyses, and magnetic resonance imaging (MRI). Psychiatric interviews were based on DSM-IV-TR criteria and ILAE Commission of Psychobiology classification as a gold standard; HRSD, BDI, HADS, and HADS-D were used as psychometric diagnostic tests, and receiver operating characteristic (ROC) curves were used to determine the optimal threshold scores. For all the scales, the areas under the curve (AUCs) were approximately 0.8, and they were able to identify depression in this sample. A threshold of ≥9 on the HRSD and a threshold of ≥8 on the HADS-D showed a sensitivity of 70% and specificity of 80%. A threshold of ≥19 on the BDI and HADS-D total showed a sensitivity of 55% and a specificity of approximately 90%. The instruments showed a negative predictive value of approximately 87% and a positive predictive value of approximately 65% for the BDI and HADS total and approximately 60% for the HRSD and HADS-D. HRSD≥9 and HADS-D≥8 had the best balance between sensitivity (approximately 70%) and specificity (approximately 80%). However, with these thresholds, these diagnostic tests do not appear useful in identifying depressive disorder in this population with epilepsy, and their specificity (approximately 80%) and PPV (approximately 55%) were lower than those of the other scales. We believe that the BDI and HADS total are valid diagnostic tests for depressive disorder in patients with MTLE-HS, as both scales showed acceptable (though not high) specificity and PPV for this type of study. Copyright © 2015 Elsevier Inc. All rights reserved.
Wang, Yi-Ting; Sung, Pei-Yuan; Lin, Peng-Lin; Yu, Ya-Wen; Chung, Ren-Hua
2015-05-15
Genome-wide association studies (GWAS) have become a common approach to identifying single nucleotide polymorphisms (SNPs) associated with complex diseases. As complex diseases are caused by the joint effects of multiple genes, while the effect of any individual gene or SNP is modest, a method considering the joint effects of multiple SNPs can be more powerful than testing individual SNPs. Multi-SNP analysis aims to test association based on a SNP set, usually defined based on biological knowledge such as a gene or pathway, which may contain only a portion of SNPs with effects on the disease. Therefore, a challenge for multi-SNP analysis is how to effectively select a subset of SNPs with promising association signals from the SNP set. We developed the Optimal P-value Threshold Pedigree Disequilibrium Test (OPTPDT). The OPTPDT uses general nuclear families. A variable p-value threshold algorithm is used to determine an optimal p-value threshold for selecting a subset of SNPs. A permutation procedure is used to assess the significance of the test. We used simulations to verify that the OPTPDT has correct type I error rates. Our power studies showed that the OPTPDT can be more powerful than the set-based test in PLINK, the multi-SNP FBAT test, and the p-value based test GATES. We applied the OPTPDT to a family-based autism GWAS dataset for gene-based association analysis and identified MACROD2-AS1 with genome-wide significance (p-value = 2.5 × 10^-6). Our simulation results suggested that the OPTPDT is a valid and powerful test. The OPTPDT will be helpful for gene-based or pathway association analysis. The method is ideal for the secondary analysis of existing GWAS datasets, which may identify a set of SNPs with joint effects on the disease.
NASA Astrophysics Data System (ADS)
Panziera, Luca; Gabella, Marco; Zanini, Stefano; Hering, Alessandro; Germann, Urs; Berne, Alexis
2016-06-01
This paper presents a regional extreme rainfall analysis based on 10 years of radar data for the 159 regions adopted for official natural hazard warnings in Switzerland. Moreover, a nowcasting tool aimed at issuing heavy precipitation regional alerts is introduced. The two topics are closely related, since the extreme rainfall analysis provides the thresholds used by the nowcasting system for the alerts. Warm and cold seasons' monthly maxima of several statistical quantities describing regional rainfall are fitted to a generalized extreme value distribution in order to derive the precipitation amounts corresponding to sub-annual return periods for durations of 1, 3, 6, 12, 24 and 48 h. It is shown that regional return levels exhibit a large spatial variability in Switzerland, and that their spatial distribution strongly depends on the duration of the aggregation period: for accumulations of 3 h and shorter, the largest return levels are found over the northerly alpine slopes, whereas for longer durations the southern Alps exhibit the largest values. The inner alpine chain shows the lowest values, in agreement with previous rainfall climatologies. The nowcasting system presented here is aimed to issue heavy rainfall alerts for a large variety of end users, who are interested in different precipitation characteristics and regions, such as, for example, small urban areas, remote alpine catchments or administrative districts. The alerts are issued not only if the rainfall measured in the immediate past or forecast in the near future exceeds some predefined thresholds but also as soon as the sum of past and forecast precipitation is larger than threshold values. This precipitation total, in fact, has primary importance in applications for which antecedent rainfall is as important as predicted one, such as urban floods early warning systems. The rainfall fields, the statistical quantity representing regional rainfall and the frequency of alerts issued in case of continuous threshold exceedance are some of the configurable parameters of the tool. The analysis of the urban flood which occurred in the city of Schaffhausen in May 2013 suggests that this alert tool might have complementary skill with respect to radar-based thunderstorm nowcasting systems for storms which do not show a clear convective signature.
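The return-level computation at the core of this analysis can be sketched with scipy's GEV tools (synthetic maxima; the quantities in the study are regional statistics, not single-gauge values):

    import numpy as np
    from scipy import stats

    monthly_maxima = np.array([12.0, 18.5, 25.1, 9.8, 30.2, 22.4, 15.7, 27.9,
                               11.3, 19.6, 33.0, 16.2])   # mm, stand-in data

    shape, loc, scale = stats.genextreme.fit(monthly_maxima)

    # Rainfall amount exceeded on average once every T months (sub-annual
    # return period); these levels supply the alert thresholds described above.
    T = 6
    return_level = stats.genextreme.ppf(1.0 - 1.0 / T, shape, loc, scale)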
NASA Astrophysics Data System (ADS)
Kumar, Manoj; Bhargava, P.; Biswas, A. K.; Sahu, Shasikiran; Mandloi, V.; Ittoop, M. O.; Khattak, B. Q.; Tiwari, M. K.; Kukreja, L. M.
2013-03-01
It is shown that the threshold fluence for laser paint stripping can be accurately estimated from the heat of gasification and the absorption coefficient of the epoxy-paint. The threshold fluence determined experimentally by stripping of the epoxy-paint on a substrate using a TEA CO2 laser matches closely with the calculated value. The calculated threshold fluence and the measured absorption coefficient of the paint allowed us to determine the epoxy paint thickness that would be removed per pulse at a given laser fluence even without experimental trials. This was used to predict the optimum scan speed required to strip the epoxy-paint of a given thickness using a high average power TEA CO2 laser. Energy Dispersive X-Ray Fluorescence (EDXRF) studies were also carried out on laser paint-stripped concrete substrate to show high efficacy of this modality.
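The estimate described above amounts to a one-line energy balance: gasification starts once the absorbed energy density alpha * F reaches the volumetric heat of gasification. The property values below are assumptions, not the paper's measurements:

    import math

    H_gas = 2.5e9   # heat of gasification per unit volume, J/m^3 (assumed)
    alpha = 1.0e5   # absorption coefficient at the CO2 laser line, 1/m (assumed)

    F_th = H_gas / alpha                # threshold fluence, J/m^2

    def depth_per_pulse(F):
        """Blow-off estimate of paint thickness removed by one pulse (m)."""
        return math.log(F / F_th) / alpha if F > F_th else 0.0

    print(1e6 * depth_per_pulse(4 * F_th), "um per pulse")

The depth-per-pulse expression is the standard blow-off model, consistent with the paper's claim that removal depth follows from the threshold fluence and absorption coefficient alone.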
Lerman, Tamara; Depenbusch, Marion; Schultze-Mosgau, Askan; von Otte, Soeren; Scheinhardt, Markus; Koenig, Inke; Kamischke, Axel; Macek, Milan; Schwennicke, Arne; Segerer, Sabine; Griesinger, Georg
2017-05-01
The incidence of low (<6 oocytes) and high (>18 oocytes) ovarian response to 150 µg corifollitropin alfa in relation to anti-Müllerian hormone (AMH) and other biomarkers was studied in a multi-centre (n = 5), multi-national, prospective, investigator-initiated, observational cohort study. Infertile women (n = 212), body weight >60 kg, underwent controlled ovarian stimulation in a gonadotrophin-releasing hormone-antagonist multiple-dose protocol. Demographic, sonographic and endocrine parameters were prospectively assessed on cycle day 2 or 3 of a spontaneous menstruation before the administration of 150 µg corifollitropin alfa. Serum AMH showed the best correlation with the number of oocytes obtained among all predictor variables. In receiver-operating characteristic analysis, AMH at a threshold of 0.91 ng/ml showed a sensitivity of 82.4%, specificity of 82.4%, positive predictive value of 52.9% and negative predictive value of 95.1% for predicting low response (area under the curve [AUC], 95% CI; P-value: 0.853, 0.769-0.936; <0.0001). For predicting high response, the optimal threshold for AMH was 2.58 ng/ml, relating to a sensitivity of 80.0%, specificity of 82.1%, positive predictive value of 42.5% and negative predictive value of 96.1% (AUC, 95% CI; P-value: 0.871, 0.787-0.955; <0.0001). In conclusion, patients with serum AMH concentrations between approximately 0.9 and 2.6 ng/ml were unlikely to show extremes of response. Copyright © 2017. Published by Elsevier Ltd.
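Threshold selection behind such ROC figures is commonly done by maximizing the Youden index; a sketch with invented AMH values (the study's actual optimization criterion is not stated here and may differ):

    import numpy as np
    from sklearn.metrics import roc_curve

    # Hypothetical data: serum AMH (ng/ml) and low-response labels (1 = <6 oocytes).
    amh = np.array([0.4, 0.7, 0.9, 1.2, 1.8, 2.1, 2.9, 3.5])
    low_response = np.array([1, 1, 1, 0, 0, 0, 0, 0])

    # Low response goes with *low* AMH, so score with the negated value.
    fpr, tpr, thr = roc_curve(low_response, -amh)
    best = np.argmax(tpr - fpr)                 # Youden index J = sens + spec - 1
    cutoff = -thr[best]                         # AMH below this predicts low response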
Uncertainty in determining extreme precipitation thresholds
NASA Astrophysics Data System (ADS)
Liu, Bingjun; Chen, Junfan; Chen, Xiaohong; Lian, Yanqing; Wu, Lili
2013-10-01
Extreme precipitation events are rare and occur mostly on a relatively small and local scale, which makes it difficult to set thresholds for extreme precipitation in a large basin. Based on long-term daily precipitation data from 62 observation stations in the Pearl River Basin, this study has assessed the applicability of non-parametric, parametric, and detrended fluctuation analysis (DFA) methods for determining the extreme precipitation threshold (EPT), and the certainty of the EPTs from each method. Analyses from this study show that the non-parametric absolute critical value method is easy to use but unable to reflect differences in the spatial distribution of rainfall. The non-parametric percentile method can account for the spatial distribution of precipitation, but the threshold value is sensitive to the size of the rainfall data series and to the selection of a percentile, which makes it difficult to determine reasonable threshold values for a large basin. The parametric method can provide the most apt description of extreme precipitation by fitting extreme precipitation distributions with probability distribution functions; however, the selection of probability distribution functions, the goodness-of-fit tests, and the size of the rainfall data series can greatly affect the fitting accuracy. In contrast to the non-parametric and parametric methods, which are unable to provide EPTs with certainty, the DFA method, although involving complicated computational processes, has proven to be the most appropriate method, able to provide a unique set of EPTs for a large basin with uneven spatio-temporal precipitation distribution. The consistency of the spatial distribution of DFA-based thresholds with the annual average precipitation, the coefficient of variation (CV), and the coefficient of skewness (CS) of daily precipitation further proves that EPTs determined by the DFA method are more reasonable and applicable for the Pearl River Basin.
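For comparison, the non-parametric percentile method discussed above fits in three lines; the wet-day cutoff and the percentile are precisely the user-set choices the authors flag as problematic:

    import numpy as np

    rng = np.random.default_rng(0)
    daily_precip = rng.gamma(0.5, 8.0, size=365 * 30)   # stand-in station record, mm

    wet_days = daily_precip[daily_precip >= 1.0]        # common wet-day definition
    ept_95 = np.percentile(wet_days, 95)                # percentile-based EPT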
Oldenkamp, Rik; Huijbregts, Mark A J; Ragas, Ad M J
2016-05-01
The selection of priority APIs (Active Pharmaceutical Ingredients) can benefit from a spatially explicit approach, since an API might exceed the threshold of environmental concern in one location, while staying below that same threshold in another. However, such a spatially explicit approach is relatively data intensive and subject to parameter uncertainty due to limited data. This raises the question to what extent a spatially explicit approach for the environmental prioritisation of APIs remains worthwhile when accounting for uncertainty in parameter settings. We show here that the inclusion of spatially explicit information enables a more efficient environmental prioritisation of APIs in Europe, compared with a non-spatial EU-wide approach, also under uncertain conditions. In a case study with nine antibiotics, uncertainty distributions of the PAF (Potentially Affected Fraction) of aquatic species were calculated in 100∗100km(2) environmental grid cells throughout Europe, and used for the selection of priority APIs. Two APIs have median PAF values that exceed a threshold PAF of 1% in at least one environmental grid cell in Europe, i.e., oxytetracycline and erythromycin. At a tenfold lower threshold PAF (i.e., 0.1%), two additional APIs would be selected, i.e., cefuroxime and ciprofloxacin. However, in 94% of the environmental grid cells in Europe, no APIs exceed either of the thresholds. This illustrates the advantage of following a location-specific approach in the prioritisation of APIs. This added value remains when accounting for uncertainty in parameter settings, i.e., if the 95th percentile of the PAF instead of its median value is compared with the threshold. In 96% of the environmental grid cells, the location-specific approach still enables a reduction of the selection of priority APIs of at least 50%, compared with a EU-wide prioritisation. Copyright © 2016 Elsevier Ltd. All rights reserved.
Thresholds for the cost-effectiveness of interventions: alternative approaches.
Marseille, Elliot; Larson, Bruce; Kazi, Dhruv S; Kahn, James G; Rosen, Sydney
2015-02-01
Many countries use the cost-effectiveness thresholds recommended by the World Health Organization's Choosing Interventions that are Cost-Effective project (WHO-CHOICE) when evaluating health interventions. This project sets the threshold for cost-effectiveness at a cost per disability-adjusted life-year (DALY) averted of less than three times the country's annual gross domestic product (GDP) per capita. Highly cost-effective interventions are defined as meeting a threshold per DALY averted of once the annual GDP per capita. We argue that reliance on these thresholds reduces the value of cost-effectiveness analyses and makes such analyses too blunt to be useful for most decision-making in the field of public health. Use of these thresholds has little theoretical justification, skirts the difficult but necessary ranking of the relative values of locally-applicable interventions and omits any consideration of what is truly affordable. The WHO-CHOICE thresholds set such a low bar for cost-effectiveness that very few interventions with evidence of efficacy can be ruled out. The thresholds have little value in assessing the trade-offs that decision-makers must confront. We present alternative approaches for applying cost-effectiveness criteria to choices in the allocation of health-care resources.
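The decision rule being criticized is simple enough to state exactly as described above:

    def who_choice_class(cost_per_daly, gdp_per_capita):
        """WHO-CHOICE labels, per DALY averted, as characterized in the text."""
        if cost_per_daly < gdp_per_capita:
            return "highly cost-effective"
        if cost_per_daly < 3 * gdp_per_capita:
            return "cost-effective"
        return "not cost-effective"

    print(who_choice_class(1500, 1000))   # -> 'cost-effective'

The authors' point is that almost any efficacious intervention clears this bar, which is why they argue for locally ranked alternatives instead.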
The influence of music and stress on musicians' hearing
NASA Astrophysics Data System (ADS)
Kähäri, Kim; Zachau, Gunilla; Eklöf, Mats; Möller, Claes
2004-10-01
Hearing and hearing disorders among classical and rock/jazz musicians were investigated. Pure-tone audiometry was performed in 140 classical and 139 rock/jazz musicians. The rock/jazz musicians answered a questionnaire concerning hearing disorders and psychosocial exposure. All results were compared to age-appropriate reference materials. Hearing thresholds showed a notch configuration in both classical and rock/jazz musicians, indicating exposure to high sound levels, but hearing thresholds overall were well preserved. Female musicians had significantly better hearing thresholds in the high-frequency range than males. Rock/jazz musicians showed slightly worse hearing thresholds than classical musicians. When assessing hearing disorders, a large number of rock/jazz musicians were found to suffer from them (74%). Hearing loss, tinnitus and hyperacusis were the most common disorders and were significantly more frequent in comparison with different reference populations. Among classical musicians, no further deterioration of pure-tone hearing threshold values was found despite 16 additional years of musical noise exposure. In rock/jazz musicians, there were no relationships between psychosocial factors at work and hearing disorders. The rock/jazz musicians reported low stress and a high degree of energy. On average, the rock/jazz musicians reported higher control, lower stress and higher energy than a reference material of white-collar workers.
NASA Astrophysics Data System (ADS)
Miao, Qinghua; Yang, Dawen; Yang, Hanbo; Li, Zhe
2016-10-01
Flash flooding is one of the most common natural hazards in China, particularly in mountainous areas, and usually causes heavy damage and casualties. However, the forecasting of flash flooding in mountainous regions remains challenging because of the short response time and limited monitoring capacity. This paper aims to establish a strategy for flash flood warnings in mountainous ungauged catchments across the humid, semi-humid and semi-arid regions of China. First, we implement a geomorphology-based hydrological model (GBHM) in four mountainous catchments with drainage areas that range from 493 to 1601 km^2. The results show that the GBHM can simulate flash floods appropriately in these four study catchments. We propose a method to determine the rainfall threshold for flood warning by using frequency analysis and binary classification based on long-term GBHM simulations forced by historical rainfall data, creating a simple and straightforward approach to flash flood forecasting in ungauged mountainous catchments with drainage areas from tens to hundreds of square kilometers. The results show that the rainfall threshold value decreases significantly with increasing antecedent soil moisture in humid regions, while this value decreases only slightly with increasing soil moisture in semi-humid and semi-arid regions. We also find that accumulated rainfall over a certain time span (or rainfall over a long time span) is an appropriate threshold for flash flood warnings in humid regions, because the runoff is dominated by saturation excess. However, the rainfall intensity (or rainfall over a short time span) is more suitable in semi-humid and semi-arid regions, because infiltration excess dominates the runoff in these regions. We conduct a comprehensive evaluation of the rainfall threshold and find that the proposed method produces reasonably accurate flash flood warnings in the study catchments. An evaluation of the performance at uncalibrated interior points in the four gauged catchments provides results that are indicative of the expected performance at ungauged locations. We also find that insufficient historical data lengths (13 years with a 5-year flood return period in this study) may introduce uncertainty into the estimation of the flood/rainfall threshold because of the small number of flood events used in binary classification. A data sample that contains enough flood events (10 events suggested in the present study) exceeding the threshold value is necessary to obtain acceptable results from binary classification.
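The binary-classification step can be sketched as a skill-score sweep over candidate rainfall thresholds; the critical success index used below is one plausible choice, and all data are synthetic stand-ins for the GBHM event set:

    import numpy as np

    rng = np.random.default_rng(2)
    rain = rng.gamma(2.0, 30.0, 200)                # event rainfall totals, mm
    flood = rain + rng.normal(0, 25, 200) > 90      # simulated flood exceedance flag

    def csi(t):
        hits = np.sum((rain >= t) & flood)
        misses = np.sum((rain < t) & flood)
        false_alarms = np.sum((rain >= t) & ~flood)
        return hits / (hits + misses + false_alarms)

    candidates = np.arange(20, 150, 5)
    best = candidates[np.argmax([csi(t) for t in candidates])]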
Thresholds of Extinction: Simulation Strategies in Environmental Values Education.
ERIC Educational Resources Information Center
Glew, Frank
1990-01-01
Describes a simulation exercise for campers and an accompanying curriculum unit--"Thresholds of Extinction"--that addresses the issues of endangered species. Uses this context to illustrate steps in the process of values development: awareness, gathering data, resolution (decision making), responsibility (acting on values), and…
NASA Astrophysics Data System (ADS)
Salam, Afifah Salmi Abdul; Isa, Mohd. Nazrin Md.; Ahmad, Muhammad Imran; Che Ismail, Rizalafande
2017-11-01
This paper focuses on studying and identifying threshold values for two commonly used edge detection techniques, Sobel and Canny edge detection. The aim is to determine which values give accurate results in identifying a particular leukemic cell. In addition, evaluating the suitability of edge detectors is essential, as feature extraction of the cell depends greatly on image segmentation (edge detection). An image of the M7 subtype of Acute Myelocytic Leukemia (AML) was chosen because diagnosis of this subtype has been found lacking. To enhance image quality, noise filters are applied, so that useful information can be acquired by comparing images with no filter, a median filter, and an average filter. Threshold values of 0, 0.25 and 0.5 are tested for each detector. The investigation found that, without any filter, Canny with a threshold value of 0.5 yields the best result.
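A sketch of the two detectors with the threshold values studied above (the file name is hypothetical; mapping the fractional thresholds 0/0.25/0.5 onto the 8-bit 0-255 scale for OpenCV's Canny is our assumption):

    import cv2
    import numpy as np

    img = cv2.imread("aml_m7_smear.png", cv2.IMREAD_GRAYSCALE)
    img = cv2.medianBlur(img, 5)                    # median noise filter, as above

    # Canny with low/high hysteresis thresholds (0.25 and 0.5 of full scale).
    edges_canny = cv2.Canny(img, 64, 128)

    # Sobel gradient magnitude thresholded at a fraction of its maximum.
    gx = cv2.Sobel(img, cv2.CV_64F, 1, 0)
    gy = cv2.Sobel(img, cv2.CV_64F, 0, 1)
    mag = np.hypot(gx, gy)
    edges_sobel = (mag > 0.25 * mag.max()).astype(np.uint8) * 255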
Gözübüyük, Gökhan; Koç, Mevlüt; Kaypaklı, Onur; Şahin, Durmuş Yıldıray
2016-11-01
There are not enough data about threshold changes in patients with CRT. In this study, we aimed to investigate the frequency of significant threshold increases of the left ventricular lead and to determine the clinical, demographic, medical and laboratory parameters associated with threshold increase in CRT-implanted patients. We included 200 CRT-implanted patients (124 males, 76 females; mean age 65.8 ± 10.3 years) in this study. Basal and third-month LV R-wave amplitude, electrode impedance, and threshold values were recorded. A threshold increase was defined as ≥0.1 V and a significant increase as >1 V. Patients were divided into two groups: increased threshold and non-increased threshold for the LV lead. The number of patients with an increased LV threshold was 68 (37.6%). Furthermore, 8% of patients had a severe increase (≥1 V) in LV threshold. We observed that serum levels of hs-CRP and 1,25(OH)2 vitamin D were independently associated with increased LV threshold. We showed that a 1 mg/dl increase in hs-CRP and a 1 mg/dl decrease in vitamin D are associated with 25.3% and 4.5% increases in the odds of an increased LV threshold, respectively. Increased hs-CRP and decreased 1,25(OH)2 vitamin D are the strongest predictors of increased LV lead thresholds. We suggest that hs-CRP and 1,25(OH)2 vitamin D may be used as markers to predict and follow patients with increased thresholds. It may be useful to finalize the CRT procedure with a more appropriate basal threshold in patients with high serum hs-CRP and low 1,25(OH)2 vitamin D levels.
Castelli, Joël; Depeursinge, Adrien; de Bari, Berardino; Devillers, Anne; de Crevoisier, Renaud; Bourhis, Jean; Prior, John O
2017-06-01
In the context of oropharyngeal cancer treated with definitive radiotherapy, the aim of this retrospective study was to identify the best threshold value to compute metabolic tumor volume (MTV) and/or total lesion glycolysis to predict local-regional control (LRC) and disease-free survival. One hundred twenty patients with a locally advanced oropharyngeal cancer from 2 different institutions treated with definitive radiotherapy underwent FDG PET/CT before treatment. Various MTVs and total lesion glycolysis were defined based on 2 segmentation methods: (i) an absolute threshold of SUV (0-20 g/mL) or (ii) a relative threshold for SUVmax (0%-100%). The parameters' predictive capabilities for disease-free survival and LRC were assessed using the Harrell C-index and Cox regression model. Relative thresholds between 40% and 68% and absolute threshold between 5.5 and 7 had a similar predictive value for LRC (C-index = 0.65 and 0.64, respectively). Metabolic tumor volume had a higher predictive value than gross tumor volume (C-index = 0.61) and SUVmax (C-index = 0.54). Metabolic tumor volume computed with a relative threshold of 51% of SUVmax was the best predictor of disease-free survival (hazard ratio, 1.23 [per 10 mL], P = 0.009) and LRC (hazard ratio: 1.22 [per 10 mL], P = 0.02). The use of different thresholds within a reasonable range (between 5.5 and 7 for an absolute threshold and between 40% and 68% for a relative threshold) seems to have no major impact on the predictive value of MTV. This parameter may be used to identify patient with a high risk of recurrence and who may benefit from treatment intensification.
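The two segmentation rules translate directly to array operations on a SUV volume; the array and voxel size below are synthetic stand-ins:

    import numpy as np

    rng = np.random.default_rng(0)
    suv = rng.gamma(2.0, 1.5, size=(40, 40, 20))    # stand-in PET SUV volume
    voxel_ml = 0.2                                  # voxel volume, mL

    mask_abs = suv >= 6.0                           # (i) absolute SUV threshold
    mask_rel = suv >= 0.51 * suv.max()              # (ii) 51% of SUVmax

    mtv = mask_rel.sum() * voxel_ml                 # metabolic tumor volume, mL
    tlg = suv[mask_rel].mean() * mtv                # total lesion glycolysis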
Movement asymmetry in working polo horses.
Pfau, T; Parkes, R S; Burden, E R; Bell, N; Fairhurst, H; Witte, T H
2016-07-01
The high, repetitive demands imposed on polo horses in training and competition may predispose them to musculoskeletal injuries and lameness. To quantify movement symmetry and lameness in a population of polo horses, and to investigate the existence of a relationship with age. Convenience sampled cross-sectional study. Sixty polo horses were equipped with inertial measurement units (IMUs) attached to the poll, and between the tubera sacrale. Six movement symmetry measures were calculated for vertical head and pelvic displacement during in-hand trot and compared with values for perfect symmetry, compared between left and right limb lame horses, and compared with published thresholds for lameness. Regression lines were calculated as a function of age of horse. Based on 2 different sets of published asymmetry thresholds 52-53% of the horses were quantified with head movement asymmetry and 27-50% with pelvic movement asymmetry resulting in 60-67% of horses being classified with movement asymmetry outside published guideline values for either the forelimbs, hindlimbs or both. Neither forelimb nor hindlimb asymmetries were preferentially left or right sided, with directional asymmetry values across all horses not different from perfect symmetry and absolute values not different between left and right lame horses (P values >0.6 for all forelimb symmetry measures and >0.2 for all hindlimb symmetry measures). None of the symmetry parameters increased or decreased significantly with age. A large proportion of polo horses show gait asymmetries consistent with previously defined thresholds for lameness. These do not appear to be lateralised or associated with age. © 2015 EVJ Ltd.
NASA Astrophysics Data System (ADS)
Nlandu Kamavuako, Ernest; Scheme, Erik Justin; Englehart, Kevin Brian
2016-08-01
Objective. For over two decades, Hudgins' set of time domain features has been applied extensively for the classification of hand motions. The calculation of the slope sign change and zero crossing features uses a threshold to attenuate the effect of background noise. However, there is no consensus on the optimum threshold value. In this study, we investigate for the first time the effect of threshold selection on the feature space and classification accuracy using multiple datasets. Approach. In the first part, four datasets were used, and classification error (CE), separability index, scatter matrix separability criterion, and cardinality of the features were used as performance measures. In the second part, data from eight classes were collected on two separate days, with two days in between, from eight able-bodied subjects. The threshold for each feature was computed as a factor (R = 0:0.01:4) times the average root mean square of the data during rest. For each day, we quantified the CE for R = 0 (CEr0) and the minimum error (CEbest). Moreover, a cross-day threshold validation was applied where, for example, the CE of day two (CEodt) is computed based on the optimum threshold from day one and vice versa. Finally, we quantified the effect of the threshold when using training data from one day and test data from the other. Main results. All performance metrics generally degraded with increasing threshold values. On average, CEbest (5.26 ± 2.42%) was significantly better than CEr0 (7.51 ± 2.41%, P = 0.018) and CEodt (7.50 ± 2.50%, P = 0.021). During the two-fold validation between days, CEbest performed similarly to CEr0. Interestingly, when using the threshold values optimized per subject from day one and day two, respectively, on the cross-days classification, the performance decreased. Significance. We have demonstrated that the threshold value has a strong impact on the feature space and that an optimum threshold can be quantified. However, this optimum threshold is highly data and subject driven and thus does not generalize well. There is strong evidence that R = 0 provides a good trade-off between system performance and generalization. These findings are important for the practical use of pattern recognition based myoelectric control.
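A sketch of the two thresholded features and the rest-RMS threshold described above (one common formulation of Hudgins' definitions; variants differ in exactly how the threshold enters):

    import numpy as np

    def zc_ssc(x, thr):
        """Zero crossings and slope sign changes with noise threshold thr."""
        zc = np.sum((x[:-1] * x[1:] < 0) & (np.abs(x[:-1] - x[1:]) >= thr))
        d1 = x[1:-1] - x[:-2]                       # backward difference
        d2 = x[1:-1] - x[2:]                        # forward difference
        ssc = np.sum((d1 * d2 > 0) & ((np.abs(d1) >= thr) | (np.abs(d2) >= thr)))
        return zc, ssc

    R = 1.5                                         # one value from the swept range 0..4
    rest = 0.01 * np.random.default_rng(0).standard_normal(1000)  # baseline EMG stand-in
    thr = R * np.sqrt(np.mean(rest ** 2))           # R times the average rest RMS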
Truscott, James E; Werkman, Marleen; Wright, James E; Farrell, Sam H; Sarkar, Rajiv; Ásbjörnsdóttir, Kristjana; Anderson, Roy M
2017-06-30
There is an increased focus on whether mass drug administration (MDA) programmes alone can interrupt the transmission of soil-transmitted helminths (STH). Mathematical models can be used to model these interventions and are increasingly being implemented to inform investigators about expected trial outcomes and the choice of optimum study design. One key factor is the choice of threshold for detecting elimination. However, there are currently no thresholds defined for STH regarding breaking transmission. We develop a simulation of an elimination study, based on the DeWorm3 project, using an individual-based stochastic disease transmission model in conjunction with models of MDA, sampling, diagnostics and the construction of study clusters. The simulation is then used to analyse the relationship between the study end-point elimination threshold and whether elimination is achieved in the long term within the model. We analyse the quality of a range of statistics in terms of positive predictive values (PPV) and how they depend on a range of covariates, including threshold values, baseline prevalence, measurement time point and how clusters are constructed. End-point infection prevalence performs well in discriminating between villages that achieve interruption of transmission and those that do not, although the quality of the threshold is sensitive to baseline prevalence and threshold value. The optimal post-treatment prevalence threshold for determining elimination is 2% or less when the baseline prevalence range is broad. For multiple clusters of communities, both the probability of elimination and the ability of thresholds to detect it are strongly dependent on the size of the cluster and the size distribution of the constituent communities. The number of communities in a cluster is a key indicator of the probability of elimination and of PPV. Extending the time after the study endpoint at which the threshold statistic is measured improves the PPV in discriminating between eliminating clusters and those that bounce back. The probability of elimination and the PPV are very sensitive to baseline prevalence for individual communities. However, most studies and programmes are constructed on the basis of clusters. Since elimination occurs within smaller population sub-units, the construction of clusters introduces new sensitivities of elimination threshold values to cluster size and the underlying population structure. Study simulation offers an opportunity to investigate key sources of sensitivity for elimination studies and programme designs in advance and to tailor interventions to prevailing local or national conditions.
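To make the threshold statistic concrete, here is a minimal sketch of how the positive predictive value of an end-point prevalence threshold could be computed against long-term model outcomes; the 2% cutoff follows the abstract, while the cluster data are hypothetical.

```python
def ppv(prevalence_at_endpoint, eliminated_long_term, threshold=0.02):
    """Positive predictive value of declaring elimination when end-point
    prevalence is at or below `threshold` (e.g. 2%)."""
    declared = [p <= threshold for p in prevalence_at_endpoint]
    tp = sum(d and e for d, e in zip(declared, eliminated_long_term))
    fp = sum(d and not e for d, e in zip(declared, eliminated_long_term))
    return tp / (tp + fp) if (tp + fp) else float("nan")

# Hypothetical cluster outcomes: end-point prevalence vs. eventual elimination.
prev = [0.01, 0.015, 0.03, 0.005, 0.04]
elim = [True, True, False, True, False]
print(ppv(prev, elim))  # 1.0 here: every declaration was a true elimination
```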
An enhanced fast scanning algorithm for image segmentation
NASA Astrophysics Data System (ADS)
Ismael, Ahmed Naser; Yusof, Yuhanis binti
2015-12-01
Segmentation is an essential and important process that separates an image into regions that have similar characteristics or features. This transforms the image for better image analysis and evaluation. An important benefit of segmentation is the identification of regions of interest in a particular image. Various algorithms have been proposed for image segmentation, including the Fast Scanning algorithm, which has been employed on food, sport and medical images. It scans all pixels in the image and clusters each pixel according to the upper and left neighbor pixels. The clustering process in the Fast Scanning algorithm is performed by merging pixels with similar neighbors based on an identified threshold. Such an approach leads to weak reliability and poor shape matching of the produced segments. This paper proposes an adaptive threshold function to be used in the clustering process of the Fast Scanning algorithm. This function uses the gray values of the image's pixels and their variance; pixel levels above the threshold are converted into intensity values between 0 and 1, while the remaining values are set to zero. The proposed enhanced Fast Scanning algorithm is realized on images of public and private transportation in Iraq. Evaluation is later made by comparing the images produced by the proposed algorithm and the standard Fast Scanning algorithm. The results showed that the proposed algorithm is faster than the standard Fast Scanning algorithm.
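A minimal sketch of the fast-scanning idea under stated assumptions: each pixel merges into the cluster of its upper or left neighbour when the gray-level difference falls below a threshold. The variance-scaled threshold shown is a simple stand-in for the paper's adaptive function, not the authors' exact formula, and cluster-mean updating is omitted.

```python
import numpy as np

def fast_scan(img: np.ndarray, thresh: float) -> np.ndarray:
    """Label pixels by merging each one into the cluster of its upper or left
    neighbour when the gray-level difference is below `thresh`
    (simplified fast scanning)."""
    h, w = img.shape
    labels = np.zeros((h, w), dtype=int)
    next_label = 1
    for y in range(h):
        for x in range(w):
            up = labels[y - 1, x] if y > 0 else 0
            left = labels[y, x - 1] if x > 0 else 0
            if up and abs(int(img[y, x]) - int(img[y - 1, x])) < thresh:
                labels[y, x] = up
            elif left and abs(int(img[y, x]) - int(img[y, x - 1])) < thresh:
                labels[y, x] = left
            else:
                labels[y, x] = next_label
                next_label += 1
    return labels

img = np.array([[10, 12, 200],
                [11, 13, 205],
                [90, 95, 210]], dtype=np.uint8)
# Adaptive stand-in: scale a base threshold by the image's gray-level variance.
thresh = 8 + 0.001 * img.var()
print(fast_scan(img, thresh))
```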
A New Cloud and Aerosol Layer Detection Method Based on Micropulse Lidar Measurements
NASA Astrophysics Data System (ADS)
Wang, Q.; Zhao, C.; Wang, Y.; Li, Z.; Wang, Z.; Liu, D.
2014-12-01
A new algorithm is developed to detect aerosols and clouds based on micropulse lidar (MPL) measurements. In this method, a semi-discretization processing (SDP) technique is first used to inhibit the impact of increasing noise with distance; then a value distribution equalization (VDE) method is introduced to reduce the magnitude of signal variations with distance. Combined with empirical threshold values, clouds and aerosols are detected and separated. This method can detect clouds and aerosols with high accuracy, although the classification of aerosols and clouds is sensitive to the thresholds selected. Compared with the existing Atmospheric Radiation Measurement (ARM) program lidar-based cloud product, the new method detects more high clouds. The algorithm was applied to a year of observations at both the U.S. Southern Great Plains (SGP) site and the Taihu site in China. At SGP, the cloud frequency shows a clear seasonal variation with maximum values in winter and spring, and shows bi-modal vertical distributions with maximum frequency at around 3-6 km and 8-12 km. The annual averaged cloud frequency is about 50%. By contrast, the cloud frequency at Taihu shows no clear seasonal variation and the maximum frequency is at around 1 km. The annual averaged cloud frequency there is about 15% higher than that at SGP.
Precipitation phase partitioning variability across the Northern Hemisphere
NASA Astrophysics Data System (ADS)
Jennings, K. S.; Winchell, T. S.; Livneh, B.; Molotch, N. P.
2017-12-01
Precipitation phase drives myriad hydrologic, climatic, and biogeochemical processes. Despite its importance, many of the land surface models used to simulate such processes and their sensitivity to climate warming rely on simple, spatially uniform air temperature thresholds to partition rainfall and snowfall. Our analysis of a 29-year dataset with 18.7 million observations of precipitation phase from 12,143 stations across the Northern Hemisphere land surface showed marked spatial variability in the near-surface air temperature at which precipitation is equally likely to fall as rain and snow, the 50% rain-snow threshold. This value averaged 1.0°C and ranged from -0.4°C to 2.4°C for 95% of the stations analyzed. High-elevation continental areas such as the Rocky Mountains of the western U.S. and the Tibetan Plateau of central Asia generally exhibited the warmest thresholds, in some cases exceeding 3.0°C. Conversely, the coldest thresholds were observed on the Pacific Coast of North America, the southeast U.S., and parts of Eurasia, with values dropping below -0.5°C. Analysis of the meteorological conditions during storm events showed relative humidity exerted the strongest control on phase partitioning, with surface pressure playing a secondary role. Lower relative humidity and surface pressure were both associated with warmer 50% rain-snow thresholds. Additionally, we trained a binary logistic regression model on the observations to classify rain and snow events and found including relative humidity as a predictor variable significantly increased model performance between 0.6°C and 3.8°C when phase partitioning is most uncertain. We then used the optimized model and a spatially continuous reanalysis product to map the 50% rain-snow threshold across the Northern Hemisphere. The map reproduced patterns in the observed thresholds with a mean bias of 0.5°C relative to the station data. The above results suggest land surface models could be improved by incorporating relative humidity into their precipitation phase prediction schemes or by using a spatially variable, optimized rain-snow temperature threshold. This is particularly important for climate warming simulations where misdiagnosing a shift from snow to rain or inaccurately quantifying snowfall fraction would likely lead to biased results.
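As an illustration of the humidity-aware phase prediction the abstract argues for, the sketch below fits a binary rain/snow logistic regression on toy storm observations and scans for the temperature at which snow probability crosses 50% at a given relative humidity. The data, fitted coefficients, and scan range are assumptions for demonstration, not the study's observations or trained model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy storm-event observations: [air temperature (°C), relative humidity (%)].
# Labels: 1 = snow, 0 = rain. Values are illustrative, not the study data.
X = np.array([[-2.0, 90], [0.5, 90], [1.5, 90], [2.5, 90],
              [0.5, 40], [1.5, 40], [2.5, 40], [3.5, 40]])
y = np.array([1, 1, 0, 0, 1, 1, 1, 0])
model = LogisticRegression().fit(X, y)

def rain_snow_threshold(rh: float) -> float:
    """Air temperature at which P(snow) = 0.5 for a given relative humidity,
    found by a coarse scan (a root-finder would work equally well)."""
    temps = np.linspace(-5, 6, 1101)
    grid = np.column_stack([temps, np.full_like(temps, rh)])
    probs = model.predict_proba(grid)[:, 1]
    return float(temps[np.argmin(np.abs(probs - 0.5))])

# Drier air should yield a warmer 50% rain-snow threshold, as in the study.
print(rain_snow_threshold(90.0), rain_snow_threshold(40.0))
```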
Defining operating rules for mitigation of drought effects on water supply systems
NASA Astrophysics Data System (ADS)
Rossi, G.; Caporali, E.; Garrote, L.; Federici, G. V.
2012-04-01
Reservoirs play a pivotal role in the regulation and management of water supply systems, especially during drought periods. Optimization of reservoir releases related to drought mitigation rules is particularly required. The hydrologic state of the system is evaluated by defining threshold values, expressed in probabilistic terms. Risk deficit curves are used to reduce the ensemble of possible rules for simulation. Threshold values can be linked to specific actions in an operational context at different levels of severity, i.e. normal, pre-alert, alert and emergency scenarios. A simplified model of the water resources system is built to evaluate the threshold values and the management rules. The threshold values are defined considering the probability of satisfying a given fraction of the demand in a certain time horizon, and are validated with a long-term simulation that takes into account the characteristics of the evaluated system. The threshold levels determine curves that define reservoir releases as a function of existing storage volume. A demand reduction is associated with each threshold level. The rules to manage the system in drought conditions, the threshold levels and the reductions are optimized using long-term simulations with different hypothesized states of the system. Synthetic sequences of flows with the same statistical properties as the historical ones are produced to evaluate the system behaviour. The performance of different reduction values and different threshold curves is evaluated using different objective functions and performance indices. The methodology is applied to the urban area Firenze-Prato-Pistoia in central Tuscany, Italy. The considered demand centres are Firenze and Bagno a Ripoli, which have, according to the 2001 ISTAT census, a total of 395,000 inhabitants.
Sato, Atsushi; Shimizu, Yusaku; Koyama, Junichi; Hongo, Kazuhiro
2017-06-01
Tissue plasminogen activator (tPA) is effective for the treatment of acute brain ischemia, but may trigger fatal brain edema or hemorrhage if the brain ischemia results in a large infarct. Herein, we attempted to predict the extent of infarcts by determining the optimal threshold of apparent diffusion coefficient (ADC) values on diffusion-weighted imaging (DWI) that predictively distinguishes between infarct and reversible areas, and by reconstructing color-coded images based on this threshold. The study subjects consisted of 36 patients with acute brain ischemia in whom MRA had confirmed reopening of the occluded arteries in a short time (mean: 99 min) after tPA treatment. We measured the ADC values in several small regions of interest over the white matter within high-intensity areas on the initial DWI; then, by comparing the findings to the follow-up images, we obtained the optimal threshold of ADC values using receiver-operating characteristic analysis. The threshold obtained (583 × 10⁻⁶ mm²/s) was lower than those previously reported; this threshold could distinguish between infarct and reversible areas with considerable accuracy (sensitivity: 0.87, specificity: 0.94). The threshold obtained and the reconstructed images were predictive of the final radiological result of tPA treatment, and this threshold may be helpful in determining the appropriate management of patients with acute brain ischemia. Copyright © 2017 Elsevier Masson SAS. All rights reserved.
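The receiver-operating characteristic step lends itself to a short sketch: the function below scans candidate ADC cutoffs and keeps the one maximizing Youden's J (sensitivity + specificity − 1). The ROI values and labels are invented for illustration; the study's exact ROC procedure may differ.

```python
import numpy as np

def optimal_threshold(adc_values, is_infarct):
    """Pick the ADC cutoff maximizing Youden's J = sensitivity + specificity - 1.
    Voxels at or below the cutoff are called infarct (low ADC = infarct)."""
    adc_values = np.asarray(adc_values, dtype=float)
    is_infarct = np.asarray(is_infarct, dtype=bool)
    best_j, best_t = -1.0, None
    for t in np.unique(adc_values):
        called = adc_values <= t
        sens = (called & is_infarct).sum() / is_infarct.sum()
        spec = (~called & ~is_infarct).sum() / (~is_infarct).sum()
        j = sens + spec - 1
        if j > best_j:
            best_j, best_t = j, t
    return best_t, best_j

# Hypothetical ROI ADC values (×10⁻⁶ mm²/s) with follow-up infarct labels.
adc = [450, 520, 560, 590, 610, 700, 760, 820]
lab = [1, 1, 1, 0, 1, 0, 0, 0]
print(optimal_threshold(adc, lab))
```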
Epidemic spreading with activity-driven awareness diffusion on multiplex network.
Guo, Quantong; Lei, Yanjun; Jiang, Xin; Ma, Yifang; Huo, Guanying; Zheng, Zhiming
2016-04-01
There has been growing interest in exploring the interplay between epidemic spreading and human response, since it is natural for people to take various measures when they become aware of epidemics. As a proper way to describe the multiple connections among people in reality, the multiplex network, a set of nodes interacting through multiple sets of edges, has attracted much attention. In this paper, to explore the coupled dynamical processes, a multiplex network with two layers is built. Specifically, the information spreading layer is a time-varying network generated by the activity-driven model, while the contagion layer is a static network. We extend the microscopic Markov chain approach to derive the epidemic threshold of the model. Compared with extensive Monte Carlo simulations, the method shows high accuracy in predicting the epidemic threshold. Besides, taking different spreading models of awareness into consideration, we explored the interplay between epidemic spreading and awareness spreading. The results show that awareness spreading can not only raise the epidemic threshold but also reduce the prevalence of epidemics. When the spreading of awareness is defined as a susceptible-infected-susceptible model, there exists a critical value at which the dynamical process on the awareness layer can control the onset of epidemics; if it is a threshold model, the epidemic threshold undergoes an abrupt transition as the local awareness ratio α approaches 0.5. Moreover, we also find that temporal changes in the topology hinder the spread of awareness, which directly affects the epidemic threshold, especially when the awareness layer follows the threshold model. Given that the threshold model is a widely used model for social contagion, this is an important and meaningful result. Our results could also lead to interesting future research about the different time-scales of structural changes in multiplex networks.
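The paper's threshold comes from a microscopic Markov chain analysis of the two-layer system; as a simpler point of comparison, the sketch below computes the classical quenched mean-field SIS threshold on a single static contact layer, β_c = μ/λ_max(A), where λ_max is the spectral radius of the adjacency matrix. The example network is hypothetical.

```python
import numpy as np

def sis_epidemic_threshold(adjacency: np.ndarray, recovery_rate: float) -> float:
    """Quenched mean-field SIS threshold beta_c = mu / lambda_max(A) for a
    single static contact layer (eigvalsh assumes a symmetric matrix)."""
    lam_max = np.linalg.eigvalsh(adjacency)[-1]
    return recovery_rate / lam_max

# Small undirected contact network (symmetric 0/1 adjacency matrix).
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)
print(sis_epidemic_threshold(A, recovery_rate=1.0))
```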
Janoudi, Abdul; Poff, Kenneth L.
1990-01-01
The relationship between the amount of light and the amount of response for any photobiological process can be based on the number of incident quanta per unit time (fluence rate-response) or on the number of incident quanta during a given period of irradiation (fluence-response). Fluence-response and fluence rate-response relationships have been measured for second positive phototropism by seedlings of Arabidopsis thaliana. The fluence-response relationships exhibit a single limiting threshold at about 0.01 micromole per square meter when measured at fluence rates from 2.4 × 10−5 to 6.5 × 10−3 micromoles per square meter per second. The threshold values in the fluence rate-response curves decrease with increasing time of irradiation, but show a common fluence threshold at about 0.01 micromole per square meter. These thresholds are the same as the threshold of about 0.01 micromole per square meter measured for first positive phototropism. Based on these data, it is suggested that second positive curvature has a threshold in time of about 10 minutes. Moreover, if the times of irradiation exceed the time threshold, there is a single limiting fluence threshold at about 0.01 micromole per square meter. Thus, the limiting fluence threshold for second positive phototropism is the same as the fluence threshold for first positive phototropism. Based on these data, we suggest that this common fluence threshold for first positive and second positive phototropism is set by a single photoreceptor pigment system. PMID:11537470
Novel threshold pressure sensors based on nonlinear dynamics of MEMS resonators
NASA Astrophysics Data System (ADS)
Hasan, Mohammad H.; Alsaleem, Fadi M.; Ouakad, Hassen M.
2018-06-01
Triggering an alarm in a car for low air-pressure in the tire or tripping an HVAC compressor if the refrigerant pressure is lower than a threshold value are examples for applications where measuring the amount of pressure is not as important as determining if the pressure has exceeded a threshold value for an action to occur. Unfortunately, current technology still relies on analog pressure sensors to perform this functionality by adding a complex interface (extra circuitry, controllers, and/or decision units). In this paper, we demonstrate two new smart tunable-threshold pressure switch concepts that can reduce the complexity of a threshold pressure sensor. The first concept is based on the nonlinear subharmonic resonance of a straight double cantilever microbeam with a proof mass and the other concept is based on the snap-through bi-stability of a clamped-clamped MEMS shallow arch. In both designs, the sensor operation concept is simple. Any actuation performed at a certain pressure lower than a threshold value will activate a nonlinear dynamic behavior (subharmonic resonance or snap-through bi-stability) yielding a large output that would be interpreted as a logic value of ONE, or ON. Once the pressure exceeds the threshold value, the nonlinear response ceases to exist, yielding a small output that would be interpreted as a logic value of ZERO, or OFF. A lumped, single degree of freedom model for the double cantilever beam, that is validated using experimental data, and a continuous beam model for the arch beam, are used to simulate the operation range of the proposed sensors by identifying the relationship between the excitation signal and the critical cut-off pressure.
Martin, Joannie; Beauparlant, Martin; Sauvé, Sébastien; L'Espérance, Gilles
2016-12-01
Asbestos amosite fibers were investigated to evaluate the damage caused by a transmission electron microscope (TEM) electron beam. Since elemental x-ray intensity ratios obtained by energy dispersive x-ray spectroscopy (EDS) are commonly used for asbestos identification, the impact of beam damage on these ratios was evaluated. It was determined that the magnesium/silicon ratio best represented the damage caused to the fiber. Various tests showed that most fibers have a current density threshold above which the chemical composition of the fiber is modified. The value of this threshold current density varied depending on the fiber, regardless of fiber diameter, and in some cases could not be determined. The existence of a threshold electron dose was also demonstrated. This value was dependent on the current density used and can be increased by providing a recovery period between exposures to the electron beam. This study also established that the electron beam current is directly related to the damage rate above a current density of 165 A/cm². The large number of different results obtained suggests that, in order to ensure that the amosite fibers are not damaged, analysis should be conducted below a current density of 100 A/cm².
On plant detection of intact tomato fruits using image analysis and machine learning methods.
Yamamoto, Kyosuke; Guo, Wei; Yoshioka, Yosuke; Ninomiya, Seishi
2014-07-09
Fully automated yield estimation of intact fruits prior to harvesting provides various benefits to farmers. Until now, several studies have been conducted to estimate fruit yield using image-processing technologies. However, most of these techniques require thresholds for features such as color, shape and size. In addition, their performance strongly depends on the thresholds used, although optimal thresholds tend to vary with images. Furthermore, most of these techniques have attempted to detect only mature and immature fruits, although the number of young fruits is more important for the prediction of long-term fluctuations in yield. In this study, we aimed to develop a method to accurately detect individual intact tomato fruits including mature, immature and young fruits on a plant using a conventional RGB digital camera in conjunction with machine learning approaches. The developed method did not require an adjustment of threshold values for fruit detection from each image because image segmentation was conducted based on classification models generated in accordance with the color, shape, texture and size of the images. The results of fruit detection in the test images showed that the developed method achieved a recall of 0.80, while the precision was 0.88. The recall values of mature, immature and young fruits were 1.00, 0.80 and 0.78, respectively.
Ji, Qing; Li, Fei; Pang, Xiaoping; Luo, Cong
2018-04-05
The threshold of sea ice concentration (SIC) is the basis for accurately calculating sea ice extent based on passive microwave (PM) remote sensing data. However, the PM SIC threshold at the sea ice edge used in previous studies and released sea ice products has not always been consistent. To determine a representative value of the PM SIC threshold corresponding on average to the position of the Arctic sea ice edge during summer in recent years, we extracted sea ice edge boundaries from the Moderate-resolution Imaging Spectroradiometer (MODIS) sea ice product (MOD29, with a spatial resolution of 1 km), MODIS images (250 m), and sea ice ship-based observation points (1 km) during the fifth (CHINARE-2012) and sixth (CHINARE-2014) Chinese National Arctic Research Expeditions, and performed an overlay and comparison analysis with PM SIC derived from the Special Sensor Microwave Imager Sounder (SSMIS, with a spatial resolution of 25 km) in the summers of 2012 and 2014. Results showed that the average SSMIS SIC threshold at the Arctic sea ice edge based on ice-water boundary lines extracted from MOD29 was 33%, which was higher than the commonly used 15% discriminant threshold. The average SIC threshold at the sea ice edge based on ice-water boundary lines extracted by visual interpretation from four scenes of MODIS imagery was 35%, compared to the average value of 36% from the MOD29-extracted ice edge pixels for the same days. The average SIC of 31% at the sea ice edge points extracted from ship-based observations also confirmed that choosing around 30% as the SIC threshold during summer is recommended for sea ice extent calculations based on SSMIS PM data. These results can provide a reference for further studying the variation of sea ice under the rapidly changing Arctic.
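Since sea ice extent is simply the total area of grid cells whose concentration clears the threshold, the sensitivity to that choice is easy to show; the grid values below are made up, with 625 km² standing in for one 25 km SSMIS cell.

```python
import numpy as np

def sea_ice_extent(sic_grid: np.ndarray, cell_area_km2: float,
                   threshold: float = 30.0) -> float:
    """Sea ice extent: total area of grid cells whose sea ice concentration
    (percent) meets or exceeds the threshold."""
    return float((sic_grid >= threshold).sum() * cell_area_km2)

sic = np.array([[80.0, 45.0, 20.0],
                [35.0, 28.0, 10.0],
                [90.0, 60.0, 5.0]])
# 25 km SSMIS grid: each cell covers roughly 625 km².
print(sea_ice_extent(sic, 625.0, threshold=30.0))  # 5 cells -> 3125 km²
print(sea_ice_extent(sic, 625.0, threshold=15.0))  # 7 cells -> 4375 km²
```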
A null model for Pearson coexpression networks.
Gobbi, Andrea; Jurman, Giuseppe
2015-01-01
Gene coexpression networks inferred by correlation from high-throughput profiling such as microarray data represent simple but effective structures for discovering and interpreting linear gene relationships. In recent years, several approaches have been proposed to tackle the problem of deciding when the resulting correlation values are statistically significant. This is most crucial when the number of samples is small, yielding a non-negligible chance that even high correlation values are due to random effects. Here we introduce a novel hard thresholding solution based on the assumption that a coexpression network inferred by randomly generated data is expected to be empty. The threshold is theoretically derived by means of an analytic approach and, as a deterministic independent null model, it depends only on the dimensions of the starting data matrix, with assumptions on the skewness of the data distribution compatible with the structure of gene expression levels data. We show, on synthetic and array datasets, that the proposed threshold is effective in eliminating all false positive links, with an offsetting cost in terms of false negative detected edges.
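A minimal sketch of the hard-thresholding construction the abstract describes: correlate expression profiles across samples and keep edges whose absolute Pearson correlation exceeds a cutoff. The 0.7 cutoff here is arbitrary for illustration; the paper's contribution is deriving the cutoff analytically from the data matrix dimensions alone.

```python
import numpy as np

def coexpression_network(expr: np.ndarray, threshold: float) -> np.ndarray:
    """Adjacency matrix of a Pearson coexpression network: genes (rows) are
    linked when the absolute correlation exceeds the hard threshold."""
    corr = np.corrcoef(expr)
    adj = (np.abs(corr) > threshold).astype(int)
    np.fill_diagonal(adj, 0)
    return adj

rng = np.random.default_rng(42)
expr = rng.normal(size=(6, 10))  # 6 "genes", only 10 samples of pure noise
# With few samples even pure noise produces large spurious correlations,
# which is the motivation for a dimension-dependent null threshold.
offdiag = np.abs(np.corrcoef(expr))[np.triu_indices(6, k=1)]
print("largest |r| in noise:", offdiag.max())
print("edges surviving |r| > 0.7:", coexpression_network(expr, 0.7).sum() // 2)
```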
Li, Mengshan; Zhang, Huaijing; Chen, Bingsheng; Wu, Yan; Guan, Lixin
2018-03-05
The pKa value of drugs is an important parameter in drug design and pharmacology. In this paper, an improved particle swarm optimization (PSO) algorithm was proposed based on population entropy diversity. In the improved algorithm, when the population entropy was higher than the set maximum threshold, the convergence strategy was adopted; when the population entropy was lower than the set minimum threshold, the divergence strategy was adopted; when the population entropy was between the maximum and minimum thresholds, the self-adaptive adjustment strategy was maintained. The improved PSO algorithm was applied in the training of a radial basis function artificial neural network (RBF ANN) model and the selection of molecular descriptors. A quantitative structure-activity relationship model based on the RBF ANN trained by the improved PSO algorithm was proposed to predict the pKa values of 74 kinds of neutral and basic drugs and was then validated on another database containing 20 molecules. The validation results showed that the model had a good prediction performance. The absolute average relative error, root mean square error, and squared correlation coefficient were 0.3105, 0.0411, and 0.9685, respectively. The model can be used as a reference for exploring other quantitative structure-activity relationships.
2012-01-01
values of E_AFP, E_AFN, and E_AF can be compared with three user-defined threshold values, T_AFP, T_AFN, and T_AF. These threshold values determine the update ... values were chosen as T_AFP = E0_AFP + 0.02, T_AFN = E0_AFN + 0.02, and T_AF = E0_AF + 0.02). We called the value of 0.02 the margin of error tolerance. In
Leischik, Roman; Spelsberg, Norman; Niggemann, Hiltrud; Dworrak, Birgit; Tiroch, Klaus
2014-01-01
Background: Exercise-induced arterial hypertension (EIAH) leads to myocardial hypertrophy and is associated with a poor prognosis. EIAH might be related to the “cardiac fatigue” caused by endurance training. The goal of this study was to examine whether there is any relationship between EIAH and left ventricular hypertrophy in Ironman-triathletes. Methods: We used echocardiography and spiroergometry to determine the left ventricular mass (LVM), the aerobic/anaerobic thresholds and the steady-state blood pressure of 51 healthy male triathletes. The main inclusion criterion was participation in at least one middle or long distance triathlon. Results: When comparing triathletes with LVM <220 g and athletes with LVM >220 g, there was a significant difference between blood pressure values (BP) at the anaerobic threshold (185.2 ± 21.5 mmHg vs. 198.8 ± 22.3 mmHg, p = 0.037). The spiroergometric results were: maximum oxygen uptake (relative VO2max) 57.3 ± 7.5 ml/min/kg vs. 59.8 ± 9.5 ml/min/kg (p = ns). Cut-point analysis for the relationship of BP >170 mmHg at the aerobic threshold and the probability of LVM >220 g showed a sensitivity of 95.8% and a specificity of 33.3%, with a positive predictive value of 56.8% and a good negative predictive value of 90%. The probability of LVM >220 g increased with higher BP during exercise (OR: 1.027, 95% CI 1.002-1.052, p = 0.034) or with higher training volume (OR: 1.23, 95% CI 1.04-1.47, p = 0.019). Echocardiography showed predominantly concentric remodelling, followed by concentric hypertrophy. Conclusion: Significant left ventricular hypertrophy with LVM >220 g is associated with higher arterial blood pressure at the aerobic or anaerobic threshold. Endurance athletes with EIAH may require a therapeutic intervention to at least prevent extensive stiffening of the heart muscle and exercise-induced cardiac fatigue. PMID:25132960
2010-01-01
Background The origin and stability of cooperation is a hot topic in social and behavioural sciences. A conundrum exists because defectors have an advantage over cooperators whenever cooperation is costly, so that not cooperating pays off. In addition, the discovery that humans and some animal populations, such as lions, are polymorphic, with cooperators and defectors stably living together while defectors are not punished, is even more puzzling. Here we offer a novel explanation based on a Threshold Public Good Game (PGG) that includes the interaction of individual and group level selection, where individuals can contribute to multiple collective actions, in our model group hunting and group defense. Results Our results show that there are polymorphic equilibria in Threshold PGGs; that multi-level selection does not select for the most cooperators per group but selects those close to the optimum number of cooperators (in terms of the Threshold PGG). In particular, for medium cost values division of labour evolves within the group with regard to the two types of cooperative actions (hunting vs. defense). Moreover, we show evidence that spatial population structure promotes cooperation in multiple PGGs. We also demonstrate that these results apply for a wide range of non-linear benefit function types. Conclusions We demonstrate that cooperation can be stable in the Threshold PGG, even when the proportion of so-called free riders is high in the population. A fundamentally new mechanism is proposed for how laggards, individuals that have a high tendency to defect during one specific group action, can actually contribute to the fitness of the group by playing a part in an optimal resource allocation in Threshold Public Good Games. In general, our results show that acknowledging a multilevel selection process will open up novel explanations for collective actions. PMID:21044340
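To pin down the incentive structure being described, here is a tiny payoff function for a threshold public goods game, with group size, threshold, benefit and cost chosen purely for illustration; it shows how a defector can out-earn a cooperator when the threshold is met anyway.

```python
def threshold_pgg_payoff(n_cooperators: int, group_size: int, threshold: int,
                         benefit: float, cost: float, cooperate: bool) -> float:
    """Payoff in a threshold public goods game: the shared benefit is produced
    only if at least `threshold` members cooperate; cooperators pay `cost`."""
    produced = benefit if n_cooperators >= threshold else 0.0
    return produced / group_size - (cost if cooperate else 0.0)

# With 5 cooperators needed in a group of 8, a defector free-rides on success:
print(threshold_pgg_payoff(5, 8, 5, benefit=8.0, cost=0.5, cooperate=False))  # 1.0
print(threshold_pgg_payoff(5, 8, 5, benefit=8.0, cost=0.5, cooperate=True))   # 0.5
print(threshold_pgg_payoff(4, 8, 5, benefit=8.0, cost=0.5, cooperate=True))   # -0.5
```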
Koyama, Kazuya; Mitsumoto, Takuya; Shiraishi, Takahiro; Tsuda, Keisuke; Nishiyama, Atsushi; Inoue, Kazumasa; Yoshikawa, Kyosan; Hatano, Kazuo; Kubota, Kazuo; Fukushi, Masahiro
2017-09-01
We aimed to determine the difference in tumor volume associated with the reconstruction model in positron-emission tomography (PET). To reduce the influence of the reconstruction model, we suggested a method to measure the tumor volume using the relative threshold method with a fixed threshold based on the peak standardized uptake value (SUVpeak). The efficacy of our method was verified using 18F-2-fluoro-2-deoxy-D-glucose PET/computed tomography images of 20 patients with lung cancer. The tumor volume was determined using the relative threshold method with a fixed threshold based on the SUVpeak. The PET data were reconstructed using the ordered-subset expectation maximization (OSEM) model, the OSEM + time-of-flight (TOF) model, and the OSEM + TOF + point-spread function (PSF) model. The volume differences associated with the reconstruction algorithm (%VD) were compared. For comparison, the tumor volume was measured using the relative threshold method based on the maximum SUV (SUVmax). For the OSEM and TOF models, the mean %VD values were -0.06 ± 8.07 and -2.04 ± 4.23% for the fixed 40% threshold according to the SUVmax and the SUVpeak, respectively. The effect of our method in this case seemed to be minor. For the OSEM and PSF models, the mean %VD values were -20.41 ± 14.47 and -13.87 ± 6.59% for the fixed 40% threshold according to the SUVmax and SUVpeak, respectively. Our new method enabled the measurement of tumor volume with a fixed threshold and reduced the influence of the changes in tumor volume associated with the reconstruction model.
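A hedged sketch of the relative threshold idea: voxels at or above 40% of a reference SUV are counted as tumor. The SUVpeak approximation below (mean of the hottest ~27 voxels) and the synthetic volume are assumptions; a proper SUVpeak averages a 1 cm³ sphere centred on the hottest voxel.

```python
import numpy as np

def tumor_volume_ml(suv: np.ndarray, voxel_ml: float, fraction: float = 0.40,
                    use_peak: bool = True) -> float:
    """Tumor volume from a PET SUV array: voxels at or above `fraction` of a
    reference SUV (SUVpeak or SUVmax)."""
    if use_peak:
        # Crude stand-in: average of the 27 hottest voxels (~1 cm³ for 3-4 mm voxels).
        reference = float(np.mean(np.sort(suv.ravel())[-27:]))
    else:
        reference = float(suv.max())
    return float((suv >= fraction * reference).sum() * voxel_ml)

rng = np.random.default_rng(7)
suv = np.clip(rng.normal(1.0, 0.3, size=(20, 20, 20)), 0, None)
suv[8:12, 8:12, 8:12] += 6.0                     # synthetic hot lesion
print(tumor_volume_ml(suv, voxel_ml=0.064))                  # SUVpeak-based
print(tumor_volume_ml(suv, voxel_ml=0.064, use_peak=False))  # SUVmax-based
```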
Erosive Augmentation of Solid Propellant Burning Rate: Motor Size Scaling Effect
NASA Technical Reports Server (NTRS)
Strand, L. D.; Cohen, Norman S.
1990-01-01
Two different independent variable forms, a difference form and a ratio form, were investigated for correlating the normalized magnitude of the measured erosive burning rate augmentation above the threshold in terms of the amount that the driving parameter (mass flux or Reynolds number) exceeds the threshold value for erosive augmentation at the test condition. The latter was calculated from the previously determined threshold correlation. Either variable form provided a correlation for each of the two motor size data bases individually. However, the data showed a motor size effect, supporting the general observation that the magnitude of erosive burning rate augmentation is reduced for larger rocket motors. For both independent variable forms, the required motor size scaling was attained by including the motor port radius raised to a power in the independent parameter. A boundary layer theory analysis confirmed the experimental finding, but showed that the magnitude of the scale effect is itself dependent upon scale, tending to diminish with increasing motor size.
Defect Detection of Steel Surfaces with Global Adaptive Percentile Thresholding of Gradient Image
NASA Astrophysics Data System (ADS)
Neogi, Nirbhar; Mohanta, Dusmanta K.; Dutta, Pranab K.
2017-12-01
Steel strips are used extensively for white goods, auto bodies and other purposes where surface defects are not acceptable. On-line surface inspection systems can effectively detect and classify defects and help in taking corrective actions. For detection of defects use of gradients is very popular in highlighting and subsequently segmenting areas of interest in a surface inspection system. Most of the time, segmentation by a fixed value threshold leads to unsatisfactory results. As defects can be both very small and large in size, segmentation of a gradient image based on percentile thresholding can lead to inadequate or excessive segmentation of defective regions. A global adaptive percentile thresholding of gradient image has been formulated for blister defect and water-deposit (a pseudo defect) in steel strips. The developed method adaptively changes the percentile value used for thresholding depending on the number of pixels above some specific values of gray level of the gradient image. The method is able to segment defective regions selectively preserving the characteristics of defects irrespective of the size of the defects. The developed method performs better than Otsu method of thresholding and an adaptive thresholding method based on local properties.
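As a rough sketch of global adaptive percentile thresholding under stated assumptions: the percentile applied to the gradient image is lowered when many pixels carry strong gradients, so large defects are not cut off. The Sobel gradient, the linear adaptation rule and its constants are stand-ins, not the authors' formulation.

```python
import numpy as np
from scipy import ndimage

def adaptive_percentile_mask(img: np.ndarray, base_pct: float = 99.0,
                             strong_grad: float = 200.0) -> np.ndarray:
    """Threshold the gradient image at a percentile that adapts to how many
    strong-gradient pixels are present: more strong edges lower the
    percentile, so large defects are not under-segmented."""
    gx = ndimage.sobel(img.astype(float), axis=1)
    gy = ndimage.sobel(img.astype(float), axis=0)
    grad = np.hypot(gx, gy)
    frac_strong = float((grad > strong_grad).mean())
    pct = float(np.clip(base_pct - 500.0 * frac_strong, 90.0, base_pct))
    return grad > np.percentile(grad, pct)

img = np.zeros((64, 64), dtype=np.uint8)
img[20:30, 20:40] = 180  # synthetic blister-like defect on a flat strip
print(adaptive_percentile_mask(img).sum(), "pixels flagged")
```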
NASA Astrophysics Data System (ADS)
Zhu, Yanli; Chen, Haiqiang
2017-05-01
In this paper, we revisit the issue whether U.S. monetary policy is asymmetric by estimating a forward-looking threshold Taylor rule with quarterly data from 1955 to 2015. In order to capture the potential heterogeneity for regime shift mechanism under different economic conditions, we modify the threshold model by assuming the threshold value as a latent variable following an autoregressive (AR) dynamic process. We use the unemployment rate as the threshold variable and separate the sample into two periods: expansion periods and recession periods. Our findings support that the U.S. monetary policy operations are asymmetric in these two regimes. More precisely, the monetary authority tends to implement an active Taylor rule with a weaker response to the inflation gap (the deviation of inflation from its target) and a stronger response to the output gap (the deviation of output from its potential level) in recession periods. The threshold value, interpreted as the targeted unemployment rate of monetary authorities, exhibits significant time-varying properties, confirming the conjecture that policy makers may adjust their reference point for the unemployment rate accordingly to reflect their attitude on the health of general economy.
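A minimal sketch of a regime-switching Taylor rule of the kind estimated here, with the unemployment rate as the threshold variable; all coefficients are illustrative, and the paper's latent, time-varying threshold is replaced by a fixed cutoff for simplicity.

```python
def taylor_rate(inflation_gap: float, output_gap: float, unemployment: float,
                u_threshold: float, r_star: float = 2.0, pi_target: float = 2.0) -> float:
    """Taylor rule with regime-dependent response coefficients: in recessions
    (unemployment above the threshold) the weight shifts from the inflation
    gap to the output gap. Coefficients are illustrative only."""
    if unemployment > u_threshold:        # recession regime
        a_pi, a_y = 0.8, 1.0
    else:                                 # expansion regime
        a_pi, a_y = 1.5, 0.5
    return r_star + pi_target + a_pi * inflation_gap + a_y * output_gap

print(taylor_rate(1.0, -2.0, 7.5, u_threshold=6.0))  # recession: softer on inflation
print(taylor_rate(1.0, -2.0, 4.5, u_threshold=6.0))  # expansion: stronger on inflation
```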
Prell, D; Kalender, W A; Kyriakou, Y
2010-12-01
The purpose of this study was to develop, implement and evaluate a dedicated metal artefact reduction (MAR) method for flat-detector CT (FDCT). The algorithm uses the multidimensional raw data space to calculate surrogate attenuation values for the original metal traces in the raw data domain. The metal traces are detected automatically by a three-dimensional, threshold-based segmentation algorithm in an initial reconstructed image volume, based on twofold histogram information for calculating appropriate metal thresholds. These thresholds are combined with constrained morphological operations in the projection domain. A subsequent reconstruction of the modified raw data yields an artefact-reduced image volume that is further processed by a combining procedure that reinserts the missing metal information. For image quality assessment, measurements on semi-anthropomorphic phantoms containing metallic inserts were evaluated in terms of CT value accuracy, image noise and spatial resolution before and after correction. Measurements of the same phantoms without prostheses were used as ground truth for comparison. Cadaver measurements were performed on complex and realistic cases and to determine the influences of our correction method on the tissue surrounding the prostheses. The results showed a significant reduction of metal-induced streak artefacts (CT value differences were reduced to below 22 HU and image noise reduction of up to 200%). The cadaver measurements showed excellent results for imaging areas close to the implant and exceptional artefact suppression in these areas. Furthermore, measurements in the knee and spine regions confirmed the superiority of our method to standard one-dimensional, linear interpolation.
NASA Astrophysics Data System (ADS)
Vujović, D.; Paskota, M.; Todorović, N.; Vučković, V.
2015-07-01
The pre-convective atmosphere over Serbia during a ten-year period (2001-2010) was investigated using radiosonde data from one meteorological station and thunderstorm observations from thirteen SYNOP meteorological stations. In order to verify their ability to forecast thunderstorms, several stability indices were examined. Rank sum scores (RSSs) were used to identify indices and parameters that can differentiate between thunderstorm and no-thunderstorm events. The following indices had the best RSS values: Lifted index (LI), K index (KI), Showalter index (SI), Boyden index (BI), Total totals (TT), dew-point temperature and mixing ratio. A threshold value test was used to determine the appropriate threshold values for these variables. The threshold with the best skill scores was chosen as the optimal one. The thresholds were validated in two ways: through a control data set, and by comparing the calculated index thresholds with the values of the indices for a randomly chosen day with an observed thunderstorm. The index with the highest skill for thunderstorm forecasting was LI, followed by SI, KI and TT. The BI had the poorest skill scores.
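Selecting the threshold with the best skill scores can be made concrete with a short sketch: for each candidate cutoff of a stability index, build the 2 × 2 contingency table against observed thunderstorm days and compute a skill score. The Heidke skill score is used here, and the index values are invented; the abstract does not state which skill scores were applied.

```python
import numpy as np

def heidke_skill_score(index_values, thunderstorm, threshold):
    """HSS for forecasting a thunderstorm when the index exceeds `threshold`."""
    f = np.asarray(index_values) > threshold
    o = np.asarray(thunderstorm, dtype=bool)
    a = (f & o).sum(); b = (f & ~o).sum(); c = (~f & o).sum(); d = (~f & ~o).sum()
    denom = (a + c) * (c + d) + (a + b) * (b + d)
    return 2.0 * (a * d - b * c) / denom if denom else float("nan")

# Hypothetical K-index values and observed thunderstorm days (1 = storm).
ki = np.array([18, 22, 26, 31, 35, 24, 29, 33, 15, 37])
ts = np.array([0, 0, 0, 1, 1, 0, 1, 1, 0, 1])
for t in (20, 25, 30):
    print(t, round(heidke_skill_score(ki, ts, t), 3))
```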
NASA Astrophysics Data System (ADS)
Lagomarsino, Daniela; Rosi, Ascanio; Rossi, Guglielmo; Segoni, Samuele; Catani, Filippo
2014-05-01
This work makes a quantitative comparison between the results of landslide forecasting obtained using two different rainfall threshold models, one using intensity-duration thresholds and the other based on cumulative rainfall thresholds, in an area of northern Tuscany of 116 km². The first methodology identifies rainfall intensity-duration thresholds by means of a software package called MaCumBA (Massive CUMulative Brisk Analyzer) that analyzes rain-gauge records, extracts the intensities (I) and durations (D) of the rainstorms associated with the initiation of landslides, plots these values on a diagram, and identifies thresholds that define the lower bounds of the I-D values. A back analysis using data from past events can be used to identify the threshold conditions associated with the fewest false alarms. The second method (SIGMA) is based on the hypothesis that anomalous or extreme values of rainfall are responsible for landslide triggering: the statistical distribution of the rainfall series is analyzed, and multiples of the standard deviation (σ) are used as thresholds to discriminate between ordinary and extraordinary rainfall events. The name of the model, SIGMA, reflects the central role of the standard deviations in the proposed methodology. The definition of intensity-duration rainfall thresholds requires the combined use of rainfall measurements and an inventory of dated landslides, whereas the SIGMA model can be implemented using only rainfall data. These two methodologies were applied in an area of 116 km² where a database of 1200 landslides was available for the period 2000-2012. The results obtained are compared and discussed. Although several examples of visual comparisons between different intensity-duration rainfall thresholds are reported in the international literature, a quantitative comparison between thresholds obtained in the same area using different techniques and approaches is a relatively undebated research topic.
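The σ-based idea lends itself to a compact sketch: flag any n-day rainfall cumulate exceeding the series mean plus a multiple of its standard deviation. The synthetic rainfall, the single 3-day window and the 2σ multiple are assumptions; SIGMA itself works across many window lengths with calibrated σ multiples.

```python
import numpy as np

def sigma_exceedance(rain_daily: np.ndarray, window_days: int,
                     n_sigma: float) -> np.ndarray:
    """Flag windows whose cumulative rainfall over `window_days` exceeds
    mean + n_sigma * std of the cumulates for that window length."""
    kernel = np.ones(window_days)
    cumulates = np.convolve(rain_daily, kernel, mode="valid")
    threshold = cumulates.mean() + n_sigma * cumulates.std()
    return cumulates > threshold

rng = np.random.default_rng(3)
rain = rng.gamma(shape=0.4, scale=8.0, size=365)  # synthetic daily rainfall (mm)
flags = sigma_exceedance(rain, window_days=3, n_sigma=2.0)
print(flags.sum(), "anomalous 3-day cumulates flagged at 2 sigma")
```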
Change of wandering pattern with anisotropy in step kinetics
NASA Astrophysics Data System (ADS)
Sato, Masahide; Uwaha, Makio
1999-03-01
We study the effect of anisotropy in step kinetics on the wandering instability of an isolated step. With asymmetry in the step kinetics, a straight step becomes unstable to long wavelength fluctuations and wanders when the step velocity exceeds a critical value. Near the threshold of the instability, an isotropic step obeys the Kuramoto-Sivashinsky equation, $H_T = -H_{XX} - H_{XXXX} + H_X^2/2$, and shows a chaotic pattern. A step with anisotropic kinetics obeys the Benney equation, $H_T = -H_{XX} - \delta H_{XXX} - H_{XXXX} + H_X^2/2$, and the wandering pattern changes: when the anisotropy is strong, $\delta \gg 1$, the step shows a regular pattern. Near the threshold of the instability, the anisotropy effect becomes strong while that of the step stiffness becomes weak.
A critique of the use of indicator-species scores for identifying thresholds in species responses
Cuffney, Thomas F.; Qian, Song S.
2013-01-01
Identification of ecological thresholds is important both for theoretical and applied ecology. Recently, Baker and King (2010, King and Baker 2010) proposed a method, threshold indicator analysis (TITAN), to calculate species and community thresholds based on indicator species scores adapted from Dufrêne and Legendre (1997). We tested the ability of TITAN to detect thresholds using models with (broken-stick, disjointed broken-stick, dose-response, step-function, Gaussian) and without (linear) definitive thresholds. TITAN accurately and consistently detected thresholds in step-function models, but not in models characterized by abrupt changes in response slopes or response direction. Threshold detection in TITAN was very sensitive to the distribution of 0 values, which caused TITAN to identify thresholds associated with relatively small differences in the distribution of 0 values while ignoring thresholds associated with large changes in abundance. Threshold identification and tests of statistical significance were based on the same data permutations resulting in inflated estimates of statistical significance. Application of bootstrapping to the split-point problem that underlies TITAN led to underestimates of the confidence intervals of thresholds. Bias in the derivation of the z-scores used to identify TITAN thresholds and skewedness in the distribution of data along the gradient produced TITAN thresholds that were much more similar than the actual thresholds. This tendency may account for the synchronicity of thresholds reported in TITAN analyses. The thresholds identified by TITAN represented disparate characteristics of species responses that, when coupled with the inability of TITAN to identify thresholds accurately and consistently, does not support the aggregation of individual species thresholds into a community threshold.
Wang, Rui; Zhou, Yongquan; Zhao, Chengyan; Wu, Haizhou
2015-01-01
Multi-threshold image segmentation is a powerful image processing technique that is used for the preprocessing of pattern recognition and computer vision. However, traditional multilevel thresholding methods are computationally expensive because they involve exhaustively searching the optimal thresholds to optimize the objective functions. To overcome this drawback, this paper proposes a flower pollination algorithm with a randomized location modification. The proposed algorithm is used to find optimal threshold values for maximizing Otsu's objective functions with regard to eight medical grayscale images. When benchmarked against other state-of-the-art evolutionary algorithms, the new algorithm proves itself to be robust and effective through numerical experimental results including Otsu's objective values and standard deviations.
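For reference, the quantity such optimizers maximize is Otsu's between-class variance; the sketch below evaluates it for candidate threshold pairs on a synthetic three-mode histogram, so a search algorithm (flower pollination or otherwise) would simply look for the pair scoring highest.

```python
import numpy as np

def otsu_objective(hist: np.ndarray, thresholds: tuple[int, ...]) -> float:
    """Between-class variance for a gray-level histogram split at `thresholds`
    (the objective a multilevel-thresholding optimizer maximizes)."""
    p = hist / hist.sum()
    levels = np.arange(len(hist))
    bounds = (0, *sorted(thresholds), len(hist))
    mu_total = (p * levels).sum()
    var_between = 0.0
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        w = p[lo:hi].sum()
        if w > 0:
            mu = (p[lo:hi] * levels[lo:hi]).sum() / w
            var_between += w * (mu - mu_total) ** 2
    return float(var_between)

rng = np.random.default_rng(1)
pixels = np.concatenate([rng.normal(60, 10, 4000), rng.normal(130, 12, 4000),
                         rng.normal(200, 8, 2000)]).clip(0, 255).astype(int)
hist = np.bincount(pixels, minlength=256)
print(otsu_objective(hist, (95, 170)))  # a well-placed pair of thresholds
print(otsu_objective(hist, (40, 220)))  # a poorly placed pair scores lower
```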
Branion-Calles, Michael C; Nelson, Trisalyn A; Henderson, Sarah B
2015-11-19
There is no safe concentration of radon gas, but guideline values provide threshold concentrations that are used to map areas at higher risk. These values vary between different regions, countries, and organizations, which can lead to differential classification of risk. For example, the World Health Organization suggests a 100 Bq m⁻³ value, while Health Canada recommends 200 Bq m⁻³. Our objective was to describe how different thresholds characterized ecological radon risk and their visual association with lung cancer mortality trends in British Columbia, Canada. Eight threshold values between 50 and 600 Bq m⁻³ were identified, and classes of radon vulnerability were defined based on whether the observed 95th percentile radon concentration was above or below each value. A balanced random forest algorithm was used to model vulnerability, and the results were mapped. We compared high vulnerability areas, their estimated populations, and differences in lung cancer mortality trends stratified by smoking prevalence and sex. Classification accuracy improved as the threshold concentrations decreased and the area classified as high vulnerability increased. The majority of the population lived within areas of lower vulnerability regardless of the threshold value. Thresholds as low as 50 Bq m⁻³ were associated with higher lung cancer mortality, even in areas with low smoking prevalence. Temporal trends in lung cancer mortality were increasing for women, while decreasing for men. Radon contributes to lung cancer in British Columbia. The results of the study contribute evidence supporting the use of a reference level lower than the current guideline of 200 Bq m⁻³ for the province.
Jafri, Nazia F; Newitt, David C; Kornak, John; Esserman, Laura J; Joe, Bonnie N; Hylton, Nola M
2014-08-01
To evaluate optimal contrast kinetics thresholds for measuring functional tumor volume (FTV) by breast magnetic resonance imaging (MRI) for assessment of recurrence-free survival (RFS). In this Institutional Review Board (IRB)-approved retrospective study of 64 patients (ages 29-72, median age 48.6) undergoing neoadjuvant chemotherapy (NACT) for breast cancer, all patients underwent pre- (MRI1) and post-chemotherapy (MRI4) MRI of the breast. Tumor was defined as voxels meeting thresholds for early percent enhancement (PEthresh) and early-to-late signal enhancement ratio (SERthresh), and FTV (PEthresh, SERthresh) was computed by summing all voxels meeting threshold criteria and minimum connectivity requirements. Ranges of PEthresh from 50% to 220% and SERthresh from 0.0 to 2.0 were evaluated. A Cox proportional hazards model determined associations between change in FTV over treatment and RFS at different PE and SER thresholds. The plot of hazard ratios for change in FTV from MRI1 to MRI4 showed a broad peak, with the maximum hazard ratio and highest significance occurring at a PE threshold of 70% and an SER threshold of 1.0 (hazard ratio = 8.71, 95% confidence interval 2.86-25.5, P < 0.00015), indicating optimal model fit. Enhancement thresholds affect the ability of MRI tumor volume to predict RFS. The value is robust over a wide range of thresholds, supporting the use of FTV as a biomarker. © 2013 Wiley Periodicals, Inc.
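A hedged sketch of the FTV computation from the definitions above: PE = (S_early − S0)/S0 and SER = (S_early − S0)/(S_late − S0), with voxels counted when both clear their thresholds (the 70%/1.0 pair reported as optimal). The signal arrays are synthetic, and the connectivity filtering mentioned in the abstract is omitted for brevity.

```python
import numpy as np

def functional_tumor_volume(s0, s_early, s_late, voxel_ml,
                            pe_thresh=0.70, ser_thresh=1.0):
    """FTV: volume of voxels with percent enhancement PE = (S_early - S0)/S0
    at or above pe_thresh AND signal enhancement ratio
    SER = (S_early - S0)/(S_late - S0) at or above ser_thresh."""
    s0 = np.asarray(s0, float)
    s_early = np.asarray(s_early, float)
    s_late = np.asarray(s_late, float)
    with np.errstate(divide="ignore", invalid="ignore"):
        pe = (s_early - s0) / s0
        ser = (s_early - s0) / (s_late - s0)
    mask = (pe >= pe_thresh) & (ser >= ser_thresh)
    return float(mask.sum() * voxel_ml)

# Tiny synthetic example: only the enhancing, washing-out voxel qualifies.
s0      = np.array([100.0, 100.0, 100.0])
s_early = np.array([180.0, 190.0, 120.0])  # PE  = 0.8, 0.9, 0.2
s_late  = np.array([160.0, 200.0, 140.0])  # SER = 1.33, 0.9, 0.5
print(functional_tumor_volume(s0, s_early, s_late, voxel_ml=0.01))  # 0.01 mL
```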
ERIC Educational Resources Information Center
Heene, Moritz; Hilbert, Sven; Draxler, Clemens; Ziegler, Matthias; Buhner, Markus
2011-01-01
Fit indices are widely used in order to test the model fit for structural equation models. In a highly influential study, Hu and Bentler (1999) showed that certain cutoff values for these indices could be derived, which, over time, has led to the reification of these suggested thresholds as "golden rules" for establishing the fit or other aspects…
Can gravity waves significantly impact PSC occurrence in the Antarctic?
NASA Astrophysics Data System (ADS)
McDonald, A. J.; George, S. E.; Woollands, R. M.
2009-11-01
A combination of POAM III aerosol extinction and CHAMP RO temperature measurements are used to examine the role of atmospheric gravity waves in the formation of Antarctic Polar Stratospheric Clouds (PSCs). POAM III aerosol extinction observations and quality flag information are used to identify Polar Stratospheric Clouds using an unsupervised clustering algorithm. A PSC proxy, derived by thresholding Met Office temperature analyses with the PSC Type Ia formation temperature (TNAT), shows general agreement with the results of the POAM III analysis. However, in June the POAM III observations of PSC are more abundant than expected from temperature threshold crossings in five out of the eight years examined. In addition, September and October PSC identified using temperature thresholding is often significantly higher than that derived from POAM III; this observation probably being due to dehydration and denitrification. Comparison of the Met Office temperature analyses with corresponding CHAMP observations also suggests a small warm bias in the Met Office data in June. However, this bias cannot fully explain the differences observed. Analysis of CHAMP data indicates that temperature perturbations associated with gravity waves may partially explain the enhanced PSC incidence observed in June (relative to the Met Office analyses). For this month, approximately 40% of the temperature threshold crossings observed using CHAMP RO data are associated with small-scale perturbations. Examination of the distribution of temperatures relative to TNAT shows a large proportion of June data to be close to this threshold, potentially enhancing the importance of gravity wave induced temperature perturbations. Inspection of the longitudinal structure of PSC occurrence in June 2005 also shows that regions of enhancement are geographically associated with the Antarctic Peninsula; a known mountain wave "hotspot". The latitudinal variation of POAM III observations means that we only observe this region in June-July, and thus the true pattern of enhanced PSC production may continue operating into later months. The analysis has shown that early in the Antarctic winter stratospheric background temperatures are close to the TNAT threshold (and PSC formation), and are thus sensitive to temperature perturbations associated with mountain wave activity near the Antarctic peninsula (40% of PSC formation). Later in the season, and at latitudes away from the peninsula, temperature perturbations associated with gravity waves contribute to about 15% of the observed PSC (a value which corresponds well to several previous studies). This lower value is likely to be due to colder background temperatures already achieving the TNAT threshold unaided. Additionally, there is a reduction in the magnitude of gravity waves perturbations observed as POAM III samples poleward of the peninsula.
NASA Technical Reports Server (NTRS)
Moore, E. N.; Altick, P. L.
1972-01-01
The research performed is briefly reviewed. A simple method was developed for the calculation of continuum states of atoms when autoionization is present. The method was employed to give the first theoretical cross sections for beryllium and magnesium; the results indicate that the values used previously at threshold were sometimes seriously in error. These threshold values have potential applications in astrophysical abundance estimates.
Appearance of bony lesions on 3-D CT reconstructions: a case study in variable renderings
NASA Astrophysics Data System (ADS)
Mankovich, Nicholas J.; White, Stuart C.
1992-05-01
This paper discusses conventional 3-D reconstruction for bone visualization and presents a case study to demonstrate the dangers of performing 3-D reconstructions without careful selection of the bone threshold. The visualization of midface bone lesions directly from axial CT images is difficult because of the complex anatomic relationships. Three-dimensional reconstructions were made from the CT to provide graphic images showing lesions in relation to adjacent facial bones. Most commercially available 3-D image reconstruction software requires that the radiologist or technologist identify a threshold image intensity value that can be used to distinguish bone from other tissues. Much has been made of the many disadvantages of this technique, but it continues as the predominant method in producing 3-D pictures for clinical use. This paper is intended to provide a clear demonstration for the physician of the caveats that should accompany 3-D reconstructions. We present a case of recurrent odontogenic keratocyst in the anterior maxilla where the 3-D reconstructions, made with different bone thresholds (windows), are compared to the resected specimen. A DMI 3200 computer was used to convert the scan data from a GE 9800 CT into a 3-D shaded surface image. Threshold values were assigned to (1) generate the most clinically pleasing image, (2) produce maximum theoretical fidelity (using the midpoint image intensity between average cortical bone and average soft tissue), and (3) cover stepped threshold intensities between these two methods. We compared the computer-rendered lesions with the resected specimen and noted measurement errors of up to 44 percent introduced by inappropriate bone threshold levels. We suggest clinically applicable standardization techniques in the 3-D reconstruction as well as cautionary language that should accompany the 3-D images.
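The "maximum theoretical fidelity" rule described above is simply the midpoint between the average cortical-bone and average soft-tissue intensities. A sketch on a synthetic volume, with hypothetical intensity statistics and masks standing in for real tissue segmentations:

    import numpy as np

    def midpoint_bone_threshold(ct, bone_mask, soft_mask):
        """Threshold at the midpoint between average cortical bone and
        average soft tissue intensity (the fidelity criterion above)."""
        return 0.5 * (ct[bone_mask].mean() + ct[soft_mask].mean())

    # Hypothetical CT volume with rough HU-like statistics for demonstration.
    rng = np.random.default_rng(1)
    ct = rng.normal(40, 30, size=(64, 64, 32))                      # mostly soft tissue
    ct[20:30, 20:30, 10:20] = rng.normal(1200, 150, (10, 10, 10))   # synthetic "bone"
    bone = ct > 700
    soft = (ct > -100) & (ct < 200)
    t = midpoint_bone_threshold(ct, bone, soft)
    surface = ct >= t        # binary volume handed to the surface renderer
    print(round(t, 1), int(surface.sum()))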
Monitoring Start of Season in Alaska
NASA Astrophysics Data System (ADS)
Robin, J.; Dubayah, R.; Sparrow, E.; Levine, E.
2006-12-01
In biomes that have distinct winter seasons, start of spring phenological events, specifically timing of budburst and green-up of leaves, coincides with transpiration. Seasons leave annual signatures that reflect the dynamic nature of the hydrologic cycle and link the different spheres of the Earth system. This paper evaluates whether continuity between AVHRR and MODIS normalized difference vegetation index (NDVI) is achievable for monitoring land surface phenology, specifically start of season (SOS), in Alaska. Additionally, two thresholds, one based on NDVI and the other on accumulated growing degree-days (GDD), are compared to determine which most accurately predicts SOS for Fairbanks. Ratio of maximum greenness at SOS was computed from biweekly AVHRR and MODIS composites for 2001 through 2004 for Anchorage and Fairbanks regions. SOS dates were determined from annual green-up observations made by GLOBE students. Results showed that different processing as well as spectral characteristics of each sensor restrict continuity between the two datasets. MODIS values were consistently higher and had less inter-annual variability during the height of the growing season than corresponding AVHRR values. Furthermore, a threshold of 131-175 accumulated GDD was a better predictor of SOS for Fairbanks than a NDVI threshold applied to AVHRR and MODIS datasets. The NDVI threshold was developed from biweekly AVHRR composites from 1982 through 2004 and corresponding annual green-up observations at University of Alaska-Fairbanks (UAF). The GDD threshold was developed from 20+ years of historic daily mean air temperature data and the same green-up observations. SOS dates computed with the GDD threshold most closely resembled actual green-up dates observed by GLOBE students and UAF researchers. Overall, biweekly composites and effects of clouds, snow, and conifers limit the ability of NDVI to monitor phenological changes in Alaska.
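The GDD rule reduces to a running sum of daily degree-days compared against a threshold in the reported 131-175 range. A minimal sketch, assuming a 0 °C base temperature (the abstract does not state one) and a synthetic warming curve:

    import numpy as np

    def start_of_season(daily_mean_temp_c, gdd_threshold=150.0, t_base=0.0):
        """Return the day of year on which accumulated growing degree-days
        first reach the threshold (131-175 GDD reported for Fairbanks;
        the 0 deg C base temperature is an assumption)."""
        gdd = np.cumsum(np.maximum(daily_mean_temp_c - t_base, 0.0))
        crossings = np.flatnonzero(gdd >= gdd_threshold)
        return int(crossings[0]) + 1 if crossings.size else None

    # Synthetic spring warming curve, for illustration only.
    days = np.arange(365)
    temps = -15 + 30 * np.sin((days - 80) * np.pi / 365)
    print(start_of_season(temps, gdd_threshold=150))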
Damian, Anne M; Jacobson, Sandra A; Hentz, Joseph G; Belden, Christine M; Shill, Holly A; Sabbagh, Marwan N; Caviness, John N; Adler, Charles H
2011-01-01
To perform an item analysis of the Montreal Cognitive Assessment (MoCA) versus the Mini-Mental State Examination (MMSE) in the prediction of cognitive impairment, and to examine the characteristics of different MoCA threshold scores. 135 subjects enrolled in a longitudinal clinicopathologic study were administered the MoCA by a single physician and the MMSE by a trained research assistant. Subjects were classified as cognitively impaired or cognitively normal based on independent neuropsychological testing. 89 subjects were found to be cognitively normal, and 46 cognitively impaired (20 with dementia, 26 with mild cognitive impairment). The MoCA was superior in both sensitivity and specificity to the MMSE, although not all MoCA tasks were of equal predictive value. A MoCA threshold score of 26 had a sensitivity of 98% and a specificity of 52% in this population. In a population with a 20% prevalence of cognitive impairment, a threshold of 24 was optimal (negative predictive value 96%, positive predictive value 47%). This analysis suggests the potential for creating an abbreviated MoCA. For screening in primary care, the MoCA threshold of 26 appears optimal. For testing in a memory disorders clinic, a lower threshold has better predictive value. Copyright © 2011 S. Karger AG, Basel.
Grant, Wally; Curthoys, Ian
2017-09-01
Vestibular otolithic organs are recognized as transducers of head acceleration and they function as such up to their corner frequency or undamped natural frequency. It is well recognized that these organs respond to frequencies above their corner frequency up to the 2-3 kHz range (Curthoys et al., 2016). A mechanics model for the transduction of these organs is developed that predicts the response below the undamped natural frequency as an accelerometer and above that frequency as a seismometer. The model is converted to a transfer function using hair cell bundle deflection. Measured threshold acceleration stimuli are used along with threshold deflections for threshold transfer function values. These are compared to model predicted values, both below and above their undamped natural frequency. Threshold deflection values are adjusted to match the model transfer function. The resulting threshold deflection values were well within measured threshold bundle deflection ranges. Vestibular Evoked Myogenic Potential (VEMP) testing today routinely uses stimulus frequencies of 500 and 1000 Hz, and otoliths have been established incontrovertibly by clinical and neural evidence as the stimulus source. The mechanism for stimulation at these frequencies, above the undamped natural frequency of the otoliths, is presented, in which the otoliths employ a seismometer mode of response for VEMP transduction. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Fan, Kuangang; Zhang, Yan; Gao, Shujing; Wei, Xiang
2017-09-01
A class of SIR epidemic model with generalized nonlinear incidence rate is presented in this paper. Temporary immunity and stochastic perturbation are also considered. The existence and uniqueness of the global positive solution is achieved. Sufficient conditions guaranteeing the extinction and persistence of the epidemic disease are established. Moreover, the threshold behavior is discussed, and the threshold value R0 is obtained. We show that if R0 < 1, the disease eventually becomes extinct with probability one, whereas if R0 > 1, then the system remains permanent in the mean.
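The extinction/persistence dichotomy around a threshold value can be illustrated numerically. A minimal sketch using Euler-Maruyama on a textbook stochastic SIR with demography and multiplicative noise on the transmission term; this is a stand-in, not the paper's generalized-incidence model with temporary immunity, and the quoted R0 ≈ β/(γ+μ) ignores the noise correction that enters the paper's threshold:

    import numpy as np

    def stochastic_sir(beta, gamma=0.2, mu=0.02, sigma=0.05,
                       s0=0.99, i0=0.01, dt=0.01, t_end=500.0, seed=0):
        """Euler-Maruyama integration of an SIR model with demography and
        noise on transmission. Illustrative textbook form only."""
        rng = np.random.default_rng(seed)
        s, i = s0, i0
        for _ in range(int(t_end / dt)):
            dw = rng.normal(0.0, np.sqrt(dt))
            inc = beta * s * i * dt + sigma * s * i * dw    # stochastic incidence
            s += mu * (1.0 - s) * dt - inc
            i += inc - (gamma + mu) * i * dt
            s, i = min(max(s, 0.0), 1.0), min(max(i, 0.0), 1.0)
        return i

    for beta in (0.1, 0.6):                  # R0 ~ beta/(gamma+mu): about 0.45 vs 2.7
        print(beta, stochastic_sir(beta))    # extinction vs persistence of infection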
Minding Impacting Events in a Model of Stochastic Variance
Duarte Queirós, Sílvio M.; Curado, Evaldo M. F.; Nobre, Fernando D.
2011-01-01
We introduce a generalization of the well-known ARCH process, widely used for generating uncorrelated stochastic time series with long-term non-Gaussian distributions and long-lasting correlations in the (instantaneous) standard deviation exhibiting a clustering profile. Specifically, inspired by the fact that in a variety of systems impacting events are hardly forgotten, we split the process into two different regimes: a first one for regular periods, where the average volatility of the fluctuations within a certain period of time is below a given threshold, and a second one for when the local standard deviation exceeds that threshold. In the former situation we use standard rules for heteroscedastic processes, whereas in the latter case the system starts recalling past values that surpassed the threshold. Our results show that for appropriate parameter values the model is able to provide fat-tailed probability density functions and strong persistence of the instantaneous variance, characterized by large values of the Hurst exponent, which are ubiquitous features in complex systems. PMID:21483864
Analysis of novel stochastic switched SILI epidemic models with continuous and impulsive control
NASA Astrophysics Data System (ADS)
Gao, Shujing; Zhong, Deming; Zhang, Yan
2018-04-01
In this paper, we establish two new stochastic switched epidemic models with continuous and impulsive control. The stochastic perturbations are considered for the natural death rate in each equation of the models. Firstly, a stochastic switched SILI model with continuous control schemes is investigated. By using the Lyapunov-Razumikhin method, sufficient conditions for extinction in mean are established. Our result shows that the disease could theoretically die out if the threshold value R is less than one, regardless of whether the disease-free solutions of the corresponding subsystems are stable or unstable. Then, a stochastic switched SILI model with continuous control schemes and pulse vaccination is studied. The threshold value R is derived. The global attractivity of the model is also obtained. Finally, numerical simulations are carried out to support our results.
Epidemic thresholds for bipartite networks
NASA Astrophysics Data System (ADS)
Hernández, D. G.; Risau-Gusman, S.
2013-11-01
It is well known that sexually transmitted diseases (STD) spread across a network of human sexual contacts. This network is most often bipartite, as most STD are transmitted between men and women. Even though network models in epidemiology have quite a long history now, there are few general results about bipartite networks. One of them is the simple dependence, predicted using the mean field approximation, between the epidemic threshold and the average and variance of the degree distribution of the network. Here we show that going beyond this approximation can lead to qualitatively different results that are supported by numerical simulations. One of the new features, that can be relevant for applications, is the existence of a critical value for the infectivity of each population, below which no epidemics can arise, regardless of the value of the infectivity of the other population.
Discounting and decision making in the economic evaluation of health-care technologies.
Claxton, Karl; Paulden, Mike; Gravelle, Hugh; Brouwer, Werner; Culyer, Anthony J
2011-01-01
Discounting costs and health benefits in cost-effectiveness analysis has been the subject of recent debate - some authors suggesting a common rate for both and others suggesting a lower rate for health. We show how these views turn on key judgments of fact and value: on whether the social objective is to maximise discounted health outcomes or the present consumption value of health; on whether the budget for health care is fixed; on the expected growth in the cost-effectiveness threshold; and on the expected growth in the consumption value of health. We demonstrate that if the budget for health care is fixed and decisions are based on incremental cost effectiveness ratios (ICERs), discounting costs and health gains at the same rate is correct only if the threshold remains constant. Expecting growth in the consumption value of health does not itself justify differential rates but implies a lower rate for both. However, whether one believes that the objective should be the maximisation of the present value of health or the present consumption value of health, adopting the social time preference rate for consumption as the discount rate for costs and health gains is valid only under strong and implausible assumptions about values and facts. 2010 John Wiley & Sons, Ltd.
The mutation-drift balance in spatially structured populations.
Schneider, David M; Martins, Ayana B; de Aguiar, Marcus A M
2016-08-07
In finite populations the action of neutral mutations is balanced by genetic drift, leading to a stationary distribution of alleles that displays a transition between two different behaviors. For small mutation rates most individuals will carry the same allele at equilibrium, whereas for high mutation rates the alleles will be randomly distributed with frequencies close to one half for a biallelic gene. For well-mixed haploid populations the mutation threshold is μc = 1/(2N), where N is the population size. In this paper we study how spatial structure affects this mutation threshold. Specifically, we study the stationary allele distribution for populations placed on regular networks where connected nodes represent potential mating partners. We show that the mutation threshold is sensitive to spatial structure only if the number of potential mates is very small. In this limit, the mutation threshold decreases substantially, increasing the diversity of the population at considerably low mutation rates. Defining kc as the degree of the network for which the mutation threshold drops to half of its value in well-mixed populations, we show that kc grows slowly as a function of the population size, following a power law. Our calculations and simulations are based on the Moran model and on a mapping between the Moran model with mutations and the voter model with opinion makers. Copyright © 2016 Elsevier Ltd. All rights reserved.
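The μc = 1/(2N) transition is easy to see in simulation. A minimal sketch of a biallelic Moran model with symmetric mutation; the time-averaged heterozygosity 2x(1-x) distinguishes the fixation-dominated regime (values near zero) from the mutation-dominated regime (values near one half). All parameters are illustrative:

    import numpy as np

    def moran_mean_heterozygosity(n_pop, mu, steps=200_000, seed=0):
        """Biallelic Moran model with symmetric mutation. Returns the
        time-averaged heterozygosity 2x(1-x): near 0 when drift dominates
        (mu below ~1/(2N)), substantially above 0 when mutation dominates."""
        rng = np.random.default_rng(seed)
        k, het = n_pop // 2, 0.0               # k = copies of allele A
        for _ in range(steps):
            x = k / n_pop
            het += 2.0 * x * (1.0 - x)
            offspring_is_a = (rng.random() < x) ^ (rng.random() < mu)  # birth + mutation
            dying_is_a = rng.random() < x       # random individual dies
            k += int(offspring_is_a) - int(dying_is_a)
        return het / steps

    n = 100
    for mu in (0.1 / (2 * n), 10.0 / (2 * n)):   # below vs above mu_c = 1/(2N)
        print(f"mu={mu:.1e}  mean heterozygosity={moran_mean_heterozygosity(n, mu):.3f}")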
Selection of entropy-measure parameters for knowledge discovery in heart rate variability data
2014-01-01
Background Heart rate variability is the variation of the time interval between consecutive heartbeats. Entropy is a commonly used tool to describe the regularity of data sets. Entropy functions are defined using multiple parameters, the selection of which is controversial and depends on the intended purpose. This study describes the results of tests conducted to support parameter selection, towards the goal of enabling further biomarker discovery. Methods This study deals with approximate, sample, fuzzy, and fuzzy measure entropies. All data were obtained from PhysioNet, a free-access, on-line archive of physiological signals, and represent various medical conditions. Five tests were defined and conducted to examine the influence of: varying the threshold value r (as multiples of the sample standard deviation σ, or the entropy-maximizing rChon), the data length N, the weighting factors n for fuzzy and fuzzy measure entropies, and the thresholds rF and rL for fuzzy measure entropy. The results were tested for normality using Lilliefors' composite goodness-of-fit test. Consequently, the p-value was calculated with either a two sample t-test or a Wilcoxon rank sum test. Results The first test shows a cross-over of entropy values with regard to a change of r. Thus, a clear statement that a higher entropy corresponds to a high irregularity is not possible, but is rather an indicator of differences in regularity. N should be at least 200 data points for r = 0.2 σ and should even exceed a length of 1000 for r = rChon. The results for the weighting parameters n for the fuzzy membership function show different behavior when coupled with different r values, therefore the weighting parameters have been chosen independently for the different threshold values. The tests concerning rF and rL showed that there is no optimal choice, but r = rF = rL is reasonable with r = rChon or r = 0.2σ. Conclusions Some of the tests showed a dependency of the test significance on the data at hand. Nevertheless, as the medical conditions are unknown beforehand, compromises had to be made. Optimal parameter combinations are suggested for the methods considered. Yet, due to the high number of potential parameter combinations, further investigations of entropy for heart rate variability data will be necessary. PMID:25078574
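For reference, the sample entropy at the study's default tolerance r = 0.2σ can be computed directly. A compact sketch (quadratic in memory, so suited to series of a few thousand points at most):

    import numpy as np

    def sample_entropy(x, m=2, r=None):
        """SampEn(m, r) of a 1-D series; r defaults to 0.2 * std(x), the
        setting examined in the study."""
        x = np.asarray(x, dtype=float)
        if r is None:
            r = 0.2 * x.std()
        n_templates = len(x) - m        # count both template lengths over N - m vectors

        def matches(length):
            emb = np.lib.stride_tricks.sliding_window_view(x, length)[:n_templates]
            d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=-1)  # Chebyshev
            iu = np.triu_indices(len(emb), k=1)
            return np.count_nonzero(d[iu] <= r)

        b, a = matches(m), matches(m + 1)
        return -np.log(a / b) if a and b else np.inf

    rng = np.random.default_rng(0)
    rr = rng.normal(0.8, 0.05, 1000)    # synthetic RR intervals, seconds
    print(sample_entropy(rr))           # white noise yields a relatively high value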
Detection and modulation of capsaicin perception in the human oral cavity.
Smutzer, Gregory; Jacob, Jeswin C; Tran, Joseph T; Shah, Darshan I; Gambhir, Shilpa; Devassy, Roni K; Tran, Eric B; Hoang, Brian T; McCune, Joseph F
2018-05-09
Capsaicin causes a burning or spicy sensation when this vanilloid compound comes in contact with trigeminal neurons of the tongue. This compound has low solubility in water, which presents difficulties in examining the psychophysical properties of capsaicin by standard aqueous chemosensory tests. This report describes a new approach that utilizes edible strips for delivering precise amounts of capsaicin to the human oral cavity for examining threshold and suprathreshold amounts of this irritant. When incorporated into pullulan-based edible strips, recognition thresholds for capsaicin occurred over a narrow range, with a mean value near 1 nmol. When incorporated into edible strips at suprathreshold amounts, capsaicin yielded robust intensity values that were readily measured in our subject population. Maximal capsaicin intensity was observed 20 s after the strips dissolved on the tongue surface, with perceived intensity decreasing thereafter. Suprathreshold studies showed that complete blockage of nasal airflow diminished capsaicin perception in the oral cavity. Oral rinses with vanillin-linoleic acid emulsions decreased mean intensity values for capsaicin by approximately 75%, but only modestly affected recognition threshold values. Also, oral rinses with isointense amounts of aqueous sucrose and sucralose solutions decreased mean intensity values for capsaicin by approximately 50%. In addition, this decrease in capsaicin intensity following an oral rinse with sucrose was partially reversed by the sweet taste inhibitor lactisole. These results suggest that blockage of nasal airflow, vanillin, sucrose, and sucralose modulate capsaicin perception in the human oral cavity. The results further suggest a chemosensory link between receptor cells that detect sweet taste stimuli and trigeminal neurons that detect capsaicin. Copyright © 2018 Elsevier Inc. All rights reserved.
A new function for estimating local rainfall thresholds for landslide triggering
NASA Astrophysics Data System (ADS)
Cepeda, J.; Nadim, F.; Høeg, K.; Elverhøi, A.
2009-04-01
The widely used power law for establishing rainfall thresholds for triggering of landslides was first proposed by N. Caine in 1980. The most updated global thresholds presented by F. Guzzetti and co-workers in 2008 were derived using Caine's power law and a rigorous and comprehensive collection of global data. Caine's function is defined as I = α×D^β, where I and D are the mean intensity and total duration of rainfall, and α and β are parameters estimated for a lower boundary curve to most or all the positive observations (i.e., landslide-triggering rainfall events). This function does not account for the effect of antecedent precipitation as a conditioning factor for slope instability, an approach that may be adequate for global or regional thresholds that include landslides in surface geologies with a wide range of subsurface drainage conditions and pore-pressure responses to sustained rainfall. However, at a local scale and in geological settings dominated by a narrow range of drainage conditions and pore-pressure responses, the inclusion of antecedent precipitation in the definition of thresholds becomes necessary in order to ensure their optimum performance, especially when used as part of early warning systems (i.e., false alarms and missed events must be kept to a minimum). Some authors have incorporated the effect of antecedent rainfall in a discrete manner by first comparing the accumulated precipitation during a specified number of days against a reference value and then using a Caine's function threshold only when that reference value is exceeded. Other authors have instead calculated threshold values as linear combinations of several triggering and antecedent parameters. The present study aims to propose a new threshold function based on a generalisation of Caine's power law. The proposed function has the form I = (α1×An^α2)×D^β, where I and D are defined as previously. The expression in parentheses is equivalent to Caine's α parameter. α1, α2 and β are parameters estimated for the threshold. An is the n-day cumulative rainfall. The suggested procedure to estimate the threshold is as follows: (1) Given N storms, assign one of the following flags to each storm: nL (non-triggering storms), yL (triggering storms), uL (uncertain-triggering storms). Successful predictions correspond to nL and yL storms occurring below and above the threshold, respectively. Storms flagged as uL are assigned either an nL or yL flag using a randomization procedure. (2) Establish a set of values of ni (e.g. 1, 4, 7, 10, 15 days, etc.) to test for accumulated precipitation. (3) For each storm and each ni value, obtain the antecedent accumulated precipitation in ni days, Ani. (4) Generate a 3D grid of values of α1, α2 and β. (5) For a certain value of ni, generate confusion matrices for the N storms at each grid point and estimate an evaluation metrics parameter EMP (e.g., accuracy, specificity, etc.). (6) Repeat the previous step for all the set of ni values. (7) From the 3D grid corresponding to each ni value, search for the optimum grid point EMPopti (the global minimum or maximum of the metric). (8) Search for the optimum value of ni in the space of ni vs. EMPopti. (9) The threshold is defined by the value of ni obtained in the previous step and the corresponding values of α1, α2 and β.
The procedure is illustrated using rainfall data and landslide observations from the San Salvador volcano, where a rainfall-triggered debris flow destroyed a neighbourhood in the capital city of El Salvador on 19 September 1982, killing no fewer than 300 people.
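Steps (4)-(7) above amount to a brute-force grid search scored with a confusion-matrix metric. A compressed sketch, using a randomly generated storm catalogue, plain accuracy as the evaluation metric EMP, and a single antecedent window of n = 7 days; all names and numbers are illustrative, not the study's data:

    import itertools
    import numpy as np

    def evaluate_threshold(storms, a1, a2, beta, n_days):
        """Classify each storm against I_thr = (a1 * An**a2) * D**beta and
        return accuracy. Storm rows: (I, D, antecedent dict, triggered flag)."""
        hits = 0
        for intensity, duration, ant, triggered in storms:
            i_thr = (a1 * ant[n_days] ** a2) * duration ** beta
            hits += (intensity >= i_thr) == triggered   # above threshold => predicted yL
        return hits / len(storms)

    # Hypothetical catalogue: (mean intensity mm/h, duration h,
    # antecedent rainfall mm by window, landslide-triggering flag).
    rng = np.random.default_rng(2)
    storms = [(rng.uniform(1, 30), rng.uniform(1, 48),
               {7: rng.uniform(0, 200)}, bool(rng.integers(2))) for _ in range(50)]

    best = max(
        ((a1, a2, beta, evaluate_threshold(storms, a1, a2, beta, 7))
         for a1, a2, beta in itertools.product(
             np.linspace(1, 20, 10), np.linspace(0.0, 1.0, 5), np.linspace(-1.0, -0.1, 5))),
        key=lambda t: t[-1])
    print("alpha1=%.1f alpha2=%.2f beta=%.2f accuracy=%.2f" % best)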
Effects of Acupuncture on Sensory Perception: A Systematic Review and Meta-Analysis
Baeumler, Petra I.; Fleckenstein, Johannes; Takayama, Shin; Simang, Michael; Seki, Takashi; Irnich, Dominik
2014-01-01
Background The effect of acupuncture on sensory perception has never been systematically reviewed, although studies on acupuncture mechanisms are frequently based on the idea that changes in sensory thresholds reflect its effect on the nervous system. Methods Pubmed, EMBASE and Scopus were screened for studies investigating the effect of acupuncture on thermal or mechanical detection or pain thresholds in humans published in English or German. A meta-analysis of high quality studies was performed. Results Out of 3007 identified articles 85 were included. Sixty five studies showed that acupuncture affects at least one sensory threshold. Most studies assessed the pressure pain threshold, of which 80% reported an increase after acupuncture. Significant short- and long-term effects on the pressure pain threshold in pain patients were revealed by two meta-analyses including four and two high quality studies, respectively. In over 60% of studies, acupuncture reduced sensitivity to noxious thermal stimuli, but measuring methods might influence results. Few but consistent data indicate that acupuncture reduces pin-prick like pain but not mechanical detection. Results on thermal detection are heterogeneous. Sensory threshold changes were reported equally frequently after manual acupuncture and after electroacupuncture. Among 48 sham-controlled studies, 25 showed stronger effects on sensory thresholds through verum than through sham acupuncture, but in 9 studies significant threshold changes were also observed after sham acupuncture. Overall, there is a lack of high quality acupuncture studies applying comprehensive assessments of sensory perception. Conclusions Our findings indicate that acupuncture affects sensory perception. Results are most compelling for the pressure pain threshold, especially in pain conditions associated with tenderness. Sham acupuncture can also cause such effects. Future studies should incorporate comprehensive, standardized assessments of sensory profiles in order to fully characterize its effect on sensory perception and to explore the predictive value of sensory profiles for the effectiveness of acupuncture. PMID:25502787
Dependence of Interfacial Excess on the Threshold Value of the Isoconcentration Surface
NASA Technical Reports Server (NTRS)
Yoon, Kevin E.; Noebe, Ronald D.; Hellman, Olof C.; Seidman, David N.
2004-01-01
The proximity histogram (or proxigram for short) is used for analyzing data collected by a three-dimensional atom probe microscope. The interfacial excess of Re (2.41 +/- 0.68 atoms/sq nm) is calculated by employing a proxigram in a completely geometrically independent way for gamma/gamma' interfaces in Rene N6, a third-generation single-crystal Ni-based superalloy. A possible dependence of interfacial excess on the variation of the threshold value of an isoconcentration surface is investigated using the data collected for Rene N6 alloy. It is demonstrated that the dependence of the interfacial excess value on the threshold value of the isoconcentration surface is weak.
Vernon, John A; Goldberg, Robert; Golec, Joseph
2009-01-01
In this article we describe how reimbursement cost-effectiveness thresholds, per unit of health benefit, whether set explicitly or observed implicitly via historical reimbursement decisions, serve as a signal to firms about the commercial viability of their R&D projects (including candidate products for in-licensing). Traditional finance methods for R&D project valuation, such as net present value (NPV) analysis, incorporate information from these payer reimbursement signals to help determine which R&D projects should be continued and which should be terminated (the latter because they yield an NPV < 0). Because the influence these signals have on firm R&D investment decisions is so significant, we argue that it is important for reimbursement thresholds to reflect the economic value of the unit of health benefit being considered for reimbursement. Thresholds set too low (below the economic value of the health benefit) will result in R&D investment levels that are too low relative to the economic value of R&D (on the margin). Similarly, thresholds set too high (above the economic value of the health benefit) will result in inefficiently high levels of R&D spending. The US in particular, which represents approximately half of the global pharmaceutical market (based on sales) and which seems poised to begin undertaking cost-effectiveness analysis in a systematic way, needs to exert caution in setting policies that explicitly or implicitly establish cost-effectiveness reimbursement thresholds for healthcare products and technologies, such as pharmaceuticals.
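The signaling mechanism can be made concrete with a toy valuation: if expected revenues are priced off the payer's threshold per QALY, raising the threshold can flip a candidate project from NPV < 0 (terminate) to NPV > 0 (continue). Every number below is illustrative, not from the article:

    def npv(cash_flows, rate):
        """Net present value of yearly cash flows, year 0 first."""
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

    # Hypothetical R&D candidate: two years of development costs, then eight
    # years of revenue priced off the payer's threshold per QALY gained.
    qalys_per_patient, patients_per_year = 0.5, 4_000
    for threshold_per_qaly in (30_000, 60_000):
        revenue_m = qalys_per_patient * patients_per_year * threshold_per_qaly / 1e6
        flows = [-300, -150] + [revenue_m] * 8                 # all figures in $M
        decision = "continue" if npv(flows, 0.10) > 0 else "terminate"
        print(threshold_per_qaly, round(npv(flows, 0.10), 1), decision)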
Crack Growth Behavior in the Threshold Region for High Cycle Fatigue Loading
NASA Technical Reports Server (NTRS)
Forman, Royce G.; Figert, J.; Beek, J.; Ventura, J.; Martinez, J.; Samonski, F.
2011-01-01
This presentation describes results obtained from a research project conducted at the NASA Johnson Space Center (JSC) that was jointly supported by the FAA Technical Center and JSC. The JSC effort was part of a multi-task FAA program involving several U.S. laboratories and initiated for the purpose of developing enhanced analysis tools to assess damage tolerance of rotorcraft and aircraft propeller systems. The research results to be covered in this presentation include a new understanding of the behavior of fatigue crack growth in the threshold region. This behavior is important for structural life analysis of aircraft propeller systems and certain rotorcraft structural components (e.g., the mast). These components are often designed to not allow fatigue crack propagation to exceed an experimentally determined fatigue crack growth threshold value. During the FAA review meetings for the program, disagreements occurred between the researchers regarding the observed fanning (spread between the da/dN curves of constant R) in the threshold region at low stress ratios, R. Some participants believed that the fanning was a result of the ASTM load shedding test method for threshold testing, and thus did not represent the true characteristics of the material. If the fanning portion of the threshold value is deleted or not included in a life analysis, a significant penalty in the calculated life and design of the component would occur. The crack growth threshold behavior was previously studied and reported by several research investigators in the period 1970-1980. Those investigators used electron microscopes to view the crack morphology of the fatigue fracture surfaces. Their results showed that just before reaching threshold, the crack morphology often changed from a striated to a faceted or cleavage-like morphology. This change was reported to have been caused by particular dislocation properties of the material. Based on the results of these early investigations, a program was initiated at JSC to repeat these examinations on a number of aircraft structural alloys that were currently being tested for obtaining fatigue crack growth properties. These new scanning electron microscope (SEM) examinations of the fatigue fracture faces confirmed the change in crack morphology in the threshold crack tip region. In addition, SEM examinations were further performed in the threshold crack-tip region before breaking the specimens open (not done in the earlier published studies). In these examinations, extensive crack forking and even 90-degree crack bifurcations were found to have occurred in the final threshold crack-tip region. The forking and bifurcations caused numerous closure points to occur that prevented full crack closure in the threshold region, and thus were the cause of the fanning at low-R values. Therefore, we have shown that the fanning behavior was caused by intrinsic dislocation properties of the different alloy materials and was not the result of a plastic wake that remains from the load-shedding test phase. Also, to accommodate the use of da/dN data which includes fanning at low R-values, an updated fanning factor term has been developed and will be implemented into the NASGRO fatigue crack growth software. The term can be set to zero if it is desired that the fanning behavior not be modeled for particular cases, such as when fanning is not a result of the intrinsic properties of a material.
A fuzzy optimal threshold technique for medical images
NASA Astrophysics Data System (ADS)
Thirupathi Kannan, Balaji; Krishnasamy, Krishnaveni; Pradeep Kumar Kenny, S.
2012-01-01
A new fuzzy based thresholding method for medical images, especially cervical cytology images having blob and mosaic structures, is proposed in this paper. Many existing thresholding algorithms can segment either blob or mosaic images, but no single algorithm can do both. In this paper, an input cervical cytology image is binarized, preprocessed, and the pixel value with the minimum Fuzzy Gaussian Index is identified as the optimal threshold value and used for segmentation. The proposed technique is tested on various cervical cytology images having blob or mosaic structures, compared with various existing algorithms, and proved better than the existing algorithms.
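The threshold-scanning idea can be sketched as follows; the fuzziness index below, built from Gaussian memberships around the two class means, is a generic stand-in for the paper's Fuzzy Gaussian Index, whose exact definition is not reproduced here:

    import numpy as np

    def fuzzy_optimal_threshold(gray):
        """Scan candidate thresholds and keep the one minimizing a fuzziness
        index (generic stand-in for the paper's Fuzzy Gaussian Index)."""
        g = gray.ravel().astype(float)
        sigma = g.std() + 1e-9
        best_t, best_score = None, np.inf
        for t in np.unique(g)[:-1]:
            center = np.where(g <= t, g[g <= t].mean(), g[g > t].mean())
            mu = np.exp(-((g - center) ** 2) / (2 * sigma ** 2))  # Gaussian membership
            score = np.minimum(mu, 1.0 - mu).mean()               # 0 = perfectly crisp
            if score < best_score:
                best_t, best_score = t, score
        return best_t

    rng = np.random.default_rng(7)
    img = np.concatenate([rng.normal(60, 10, 500), rng.normal(170, 15, 500)])
    print(round(fuzzy_optimal_threshold(img), 1))   # lands between the two modes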
Low-threshold field emission in planar cathodes with nanocarbon materials
NASA Astrophysics Data System (ADS)
Zhigalov, V.; Petukhov, V.; Emelianov, A.; Timoshenkov, V.; Chaplygin, Yu.; Pavlov, A.; Shamanaev, A.
2016-12-01
Nanocarbon materials are of great interest as field emission cathodes due to their low threshold voltage. In this work current-voltage characteristics of nanocarbon electrodes were studied. Low-threshold emission was found in planar samples where field enhancement is negligible (<10). Electron work function values calculated by Fowler-Nordheim theory are anomalously low (<1 eV) and conflict with directly measured work function values in the fabricated planar samples (4.1-4.4 eV). The non-applicability of Fowler-Nordheim theory to these nanocarbon materials was confirmed. The reasons for low-threshold emission in nanocarbon materials are discussed.
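For orientation, the standard Fowler-Nordheim form against which such work-function fits are made is (a textbook expression, not necessarily the exact fitting formula used in the paper):

$$ J = \frac{A\,(\gamma E)^{2}}{\phi}\,\exp\!\left(-\frac{B\,\phi^{3/2}}{\gamma E}\right), \qquad A \approx 1.54\times10^{-6}\ \mathrm{A\,eV\,V^{-2}}, \quad B \approx 6.83\times10^{9}\ \mathrm{eV^{-3/2}\,V\,m^{-1}}, $$

where J is the emission current density, E the applied macroscopic field, φ the work function, and γ the field-enhancement factor. On a Fowler-Nordheim plot of ln(J/E²) versus 1/E the slope scales as −φ^{3/2}/γ, so assuming γ ≈ 1 for a planar cathode forces the fitted φ to be anomalously low, which is the inconsistency the abstract reports.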
Kwuimy, C A Kitio; Nataraj, C; Litak, G
2011-12-01
We consider the problems of chaos and parametric control in nonlinear systems under an asymmetric potential subjected to a multiscale type excitation. The lower bound line for horseshoe chaos is analyzed using Melnikov's criterion for a transition to permanent or transient nonperiodic motions, complemented by the fractal or regular shape of the basin of attraction. Numerical simulations based on the basins of attraction, bifurcation diagrams, Poincaré sections, Lyapunov exponents, and phase portraits are used to show how stationary dissipative chaos occurs in the system. Our attention is focused on the effects of the asymmetric potential term and the driving frequency. It is shown that the threshold amplitude ∣γ(c)∣ of the excitation decreases for small values of the driving frequency ω and increases for large values of ω. This threshold value decreases with the asymmetric parameter α and becomes constant for sufficiently large values of α. γ(c) has its maximum value for asymmetric load in comparison with the symmetric load. Finally, we apply the Melnikov theorem to the controlled system to explore the gain control parameter dependencies.
Talukder, S; Thomson, P C; Kerrisk, K L; Clark, C E F; Celi, P
2015-03-01
This study was conducted to test the hypothesis that the specificity of infrared thermography (IRT) in detecting cows about to ovulate could be improved using different body parts that are less likely to be contaminated by fecal matter. In addition, the combined activity and rumination data captured by accelerometers were evaluated to provide a more accurate indication of ovulation than the activity and rumination data alone. Thermal images of 30 cows were captured for different body areas (eye, ear, muzzle, and vulva) twice daily after AM and PM milking sessions during the entire experimental period. Milk progesterone data and insemination records were used to determine the date of ovulation. Cows were fitted with SCR heat and rumination long-distance tags (SCR HR LD) for 1 month. Activity- and rumination-based estrus alerts were initially identified using default threshold values set by the manufacturer; however, a range of thresholds was also created and tested for both activity and rumination to determine the potential for higher levels of accuracy of ovulation detection. Visual assessment of mounting indicators resulted in 75% sensitivity (Se), 100% specificity (Sp), and 100% positive predictive value (PPV). Overall, IRT showed poor performance for detecting cows about to ovulate. Vulval temperature resulted in the greatest (80%) Sp but the poorest (21%) Se compared with the IRT temperatures of other body areas. The SCR HR LD tags' default threshold value resulted in 78% Se, 57% Sp, and 70% PPV. Lowering the activity threshold from the default value improved the sensitivity but created a large number of false positives, which resulted in a decrease in specificity. Lowering the activity threshold to 20 resulted in a detection performance of 80% Se, 94% Sp, and 67% PPV, whereas the rumination levels achieved 35% Se, 69% Sp, and 14% PPV. The areas under the curve for the activity level, rumination level, and the combined measures of activity and rumination levels were 0.82, 0.54, and 0.75, respectively. Alerts generated by SCR HR LD tags based on a lower activity threshold level had high sensitivity and may be able to detect a high proportion of cows in ovulatory periods in a pasture-based system; however, the specificity and positive predictive value were lower than those of the visual assessment of mounting indicators. Copyright © 2015 Elsevier Inc. All rights reserved.
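The reported trade-off (lowering the alert threshold raises sensitivity and lowers specificity) is straightforward to reproduce on synthetic alert scores; the score distributions below are hypothetical:

    import numpy as np

    def detection_metrics(scores, truth, threshold):
        """Sensitivity, specificity, and PPV of the alert rule score >= threshold."""
        alert = scores >= threshold
        tp = np.sum(alert & truth)
        fp = np.sum(alert & ~truth)
        fn = np.sum(~alert & truth)
        tn = np.sum(~alert & ~truth)
        se = tp / (tp + fn)
        sp = tn / (tn + fp)
        ppv = tp / (tp + fp) if tp + fp else float("nan")
        return se, sp, ppv

    # Hypothetical activity indices for ovulating vs non-ovulating cow-days.
    rng = np.random.default_rng(3)
    truth = rng.random(500) < 0.3
    scores = np.where(truth, rng.normal(35, 10, 500), rng.normal(15, 10, 500))
    for thr in (10, 20, 35):           # lowering the threshold raises Se, lowers Sp
        print(thr, [round(v, 2) for v in detection_metrics(scores, truth, thr)])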
Precision Laser Annealing of Focal Plane Arrays
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bender, Daniel A.; DeRose, Christopher; Starbuck, Andrew Lea
2015-09-01
We present results from laser annealing experiments in Si using a passively Q-switched Nd:YAG microlaser. Exposure with the laser at fluence values above the damage threshold of commercially available photodiodes results in electrical damage (as measured by an increase in photodiode dark current). We show that increasing the laser fluence to values in excess of the damage threshold can result in annealing of a damage site and a reduction in detector dark current by as much as 100x in some cases. A still further increase in fluence results in irreparable damage. Thus we demonstrate the presence of a laser annealing window over which performance of damaged detectors can be at least partially reconstituted. Moreover dark current reduction is observed over the entire operating range of the diode indicating that device performance has been improved for all values of reverse bias voltage. Additionally, we will present results of laser annealing in Si waveguides. By exposing a small (<10 um) length of a Si waveguide to an annealing laser pulse, the longitudinal phase of light acquired in propagating through the waveguide can be modified with high precision, <15 milliradian per laser pulse. Phase tuning by 180 degrees is exhibited with multiple exposures to one arm of a Mach-Zehnder interferometer at fluence values below the morphological damage threshold of an etched Si waveguide. No reduction in optical transmission at 1550 nm was found after 220 annealing laser shots. Modeling results for laser annealing in Si are also presented.
Comparison of Various Anthropometric Indices as Risk Factors for Hearing Impairment in Asian Women.
Kang, Seok Hui; Jung, Da Jung; Lee, Kyu Yup; Choi, Eun Woo; Do, Jun Young
2015-01-01
The objective of the present study was to examine the associations of various anthropometric measures with metabolic syndrome and hearing impairment in Asian women. We identified 11,755 women who underwent voluntary routine health checkups at Yeungnam University Hospital between June 2008 and April 2014. Of these, 2,485 participants were <40 years old and 1,072 lacked laboratory or hearing data and were therefore excluded, leaving 8,198 participants in our study. The AUROC value for metabolic syndrome was 0.790 for the waist-to-hip ratio (WHR), with a cutoff value of 0.939; the sensitivity and specificity for predicting metabolic syndrome were 72.7% and 71.7%, respectively. The AUROC value for hearing loss was 0.758 for WHR, with a cutoff value of 0.932; the sensitivity and specificity for predicting hearing loss were 65.8% and 73.4%, respectively. The WHR had the highest AUC and was the best predictor of metabolic syndrome and hearing loss. Univariate and multivariate linear regression analyses showed that WHR levels were positively associated with four hearing thresholds: the averaged hearing threshold and the low, middle, and high frequency thresholds. In addition, multivariate logistic analysis revealed that participants with a high WHR had a 1.347-fold increased risk of hearing loss compared with participants with a low WHR. Our results demonstrate that WHR may be a surrogate marker for predicting the risk of hearing loss resulting from metabolic syndrome.
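Cutoffs such as the 0.932 WHR value are conventionally read off the ROC curve, for example by maximizing Youden's J = Se + Sp - 1. A sketch on synthetic data (it will not reproduce the study's exact AUROC or cutoff):

    import numpy as np
    from sklearn.metrics import roc_curve, roc_auc_score

    # Hypothetical WHR values for subjects with and without hearing loss.
    rng = np.random.default_rng(4)
    y = rng.random(2000) < 0.2
    whr = np.where(y, rng.normal(0.95, 0.04, 2000), rng.normal(0.88, 0.05, 2000))

    fpr, tpr, thresholds = roc_curve(y, whr)
    k = int(np.argmax(tpr - fpr))        # index maximizing Youden's J
    print(f"AUROC={roc_auc_score(y, whr):.3f} cutoff={thresholds[k]:.3f} "
          f"Se={tpr[k]:.2f} Sp={1 - fpr[k]:.2f}")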
Bayesian methods for estimating GEBVs of threshold traits
Wang, C-L; Ding, X-D; Wang, J-Y; Liu, J-F; Fu, W-X; Zhang, Z; Yin, Z-J; Zhang, Q
2013-01-01
Estimation of genomic breeding values is the key step in genomic selection (GS). Many methods have been proposed for continuous traits, but methods for threshold traits are still scarce. Here we introduced the threshold model to the framework of GS; specifically, we extended the three Bayesian methods BayesA, BayesB and BayesCπ on the basis of the threshold model for estimating genomic breeding values of threshold traits, and the extended methods are correspondingly termed BayesTA, BayesTB and BayesTCπ. Computing procedures of the three BayesT methods using the Markov Chain Monte Carlo algorithm were derived. A simulation study was performed to investigate the benefit of the presented methods in accuracy with the genomic estimated breeding values (GEBVs) for threshold traits. Factors affecting the performance of the three BayesT methods were addressed. As expected, the three BayesT methods generally performed better than the corresponding normal Bayesian methods, in particular when the number of phenotypic categories was small. In the standard scenario (number of categories=2, incidence=30%, number of quantitative trait loci=50, h2=0.3), the accuracies were improved by 30.4, 2.4, and 5.7 percentage points, respectively. In most scenarios, BayesTB and BayesTCπ generated similar accuracies and both performed better than BayesTA. In conclusion, our work proved that the threshold model fits well for predicting GEBVs of threshold traits, and BayesTCπ is supposed to be the method of choice for GS of threshold traits. PMID:23149458
Method and apparatus for analog pulse pile-up rejection
De Geronimo, Gianluigi
2013-12-31
A method and apparatus for pulse pile-up rejection are disclosed. The apparatus comprises a delay value application constituent configured to receive a threshold-crossing time value, and provide an adjustable value according to a delay value and the threshold-crossing time value; and a comparison constituent configured to receive a peak-occurrence time value and the adjustable value, compare the peak-occurrence time value with the adjustable value, indicate pulse acceptance if the peak-occurrence time value is less than or equal to the adjustable value, and indicate pulse rejection if the peak-occurrence time value is greater than the adjustable value.
Method and apparatus for analog pulse pile-up rejection
De Geronimo, Gianluigi
2014-11-18
A method and apparatus for pulse pile-up rejection are disclosed. The apparatus comprises a delay value application constituent configured to receive a threshold-crossing time value, and provide an adjustable value according to a delay value and the threshold-crossing time value; and a comparison constituent configured to receive a peak-occurrence time value and the adjustable value, compare the peak-occurrence time value with the adjustable value, indicate pulse acceptance if the peak-occurrence time value is less than or equal to the adjustable value, and indicate pulse rejection if the peak-occurrence time value is greater than the adjustable value.
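The accept/reject rule in both patent claims reduces to a single comparison between the peak-occurrence time and the threshold-crossing time plus a configured delay; a sketch with hypothetical timing values:

    def accept_pulse(t_threshold_cross, t_peak, delay):
        """Pile-up test from the claims: accept the pulse when the peak occurs
        no later than the threshold-crossing time plus the configured delay."""
        return t_peak <= t_threshold_cross + delay

    # A prompt peak passes; a late peak (suggesting a piled-up second pulse) fails.
    print(accept_pulse(t_threshold_cross=10.0, t_peak=12.0, delay=5.0))  # True
    print(accept_pulse(t_threshold_cross=10.0, t_peak=18.0, delay=5.0))  # False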
INFLUENCE OF MASS ON DISPLACEMENT THRESHOLD
DOE Office of Scientific and Technical Information (OSTI.GOV)
Setyawan, Wahyu; Selby, A.; Nandipati, Giridhar
2014-12-30
Molecular dynamics simulations are performed to investigate the effect of mass on displacement threshold energy in Cr, Mo, Fe and W. For each interatomic potential, the mass of the atoms is varied among those metals for a total of 16 combinations. The average threshold energy over all crystal directions is calculated within the irreducible crystal directions using appropriate weighting factors. The weighting factors account for the different number of equivalent directions among the grid points and the different solid angle coverage of each grid point. The grid points are constructed with a Miller index increment of 1/24 for a total of 325 points. For each direction, 10 simulations each with a different primary-knock-on atom are performed. The results show that for each interatomic potential, the average threshold energy is insensitive to the mass; i.e., the values are the same within the standard error. In the future, the effect of mass on high-energy cascades for a given interatomic potential will be investigated.
Akagunduz, Ozlem Ozkaya; Savas, Recep; Yalman, Deniz; Kocacelebi, Kenan; Esassolak, Mustafa
2015-11-01
To evaluate the predictive value of adaptive threshold-based metabolic tumor volume (MTV), maximum standardized uptake value (SUVmax) and maximum lean body mass corrected SUV (SULmax) measured on pretreatment positron emission tomography and computed tomography (PET/CT) imaging in head and neck cancer patients treated with definitive radiotherapy/chemoradiotherapy. Pretreatment PET/CT scans of the 62 patients with locally advanced head and neck cancer who were treated consecutively between May 2010 and February 2013 were reviewed retrospectively. The maximum FDG uptake of the primary tumor was defined according to SUVmax and SULmax. Multiple threshold levels between 60% and 10% of the SUVmax and SULmax were tested with intervals of 5% to 10% in order to define the most suitable threshold value for the metabolic activity of each patient's tumor (adaptive threshold). MTV was calculated according to this value. We evaluated the relationship of the mean values of MTV, SUVmax and SULmax with treatment response, local recurrence, distant metastasis and disease-related death. Receiver-operating characteristic (ROC) curve analysis was done to obtain optimal predictive cut-off values for MTV and SULmax, which were found to have a predictive value. Local recurrence-free (LRFS), disease-free (DFS) and overall survival (OS) were examined according to these cut-offs. Forty-six patients had complete response, 15 had partial response, and 1 had stable disease 6 weeks after the completion of treatment. Median follow-up of the entire cohort was 18 months. Of 46 complete responders 10 had local recurrence, and of 16 partial or no responders 10 had local progression. Eighteen patients died. Adaptive threshold-based MTV had significant predictive value for treatment response (p=0.011), local recurrence/progression (p=0.050), and disease-related death (p=0.024). SULmax had a predictive value for local recurrence/progression (p=0.030). ROC curve analysis revealed a cut-off value of 14.00 mL for MTV and 10.15 for SULmax. Three-year LRFS and DFS rates were significantly lower in patients with MTV ≥ 14.00 mL (p=0.026, p=0.018 respectively) and SULmax ≥ 10.15 (p=0.017, p=0.022 respectively). SULmax did not have a significant predictive value for OS, whereas MTV did (p=0.025). Adaptive threshold-based MTV and SULmax could have a role in predicting local control and survival in head and neck cancer patients. Copyright © 2015 Elsevier Inc. All rights reserved.
The impact of manual threshold selection in medical additive manufacturing.
van Eijnatten, Maureen; Koivisto, Juha; Karhu, Kalle; Forouzanfar, Tymour; Wolff, Jan
2017-04-01
Medical additive manufacturing requires standard tessellation language (STL) models. Such models are commonly derived from computed tomography (CT) images using thresholding. Threshold selection can be performed manually or automatically. The aim of this study was to assess the impact of manual and default threshold selection on the reliability and accuracy of skull STL models using different CT technologies. One female and one male human cadaver head were imaged using multi-detector row CT, dual-energy CT, and two cone-beam CT scanners. Four medical engineers manually thresholded the bony structures on all CT images. The lowest and highest selected mean threshold values and the default threshold value were used to generate skull STL models. Geometric variations between all manually thresholded STL models were calculated. Furthermore, in order to calculate the accuracy of the manually and default thresholded STL models, all STL models were superimposed on an optical scan of the dry female and male skulls ("gold standard"). The intra- and inter-observer variability of the manual threshold selection was good (intra-class correlation coefficients >0.9). All engineers selected grey values closer to soft tissue to compensate for bone voids. Geometric variations between the manually thresholded STL models were 0.13 mm (multi-detector row CT), 0.59 mm (dual-energy CT), and 0.55 mm (cone-beam CT). All STL models demonstrated inaccuracies ranging from -0.8 to +1.1 mm (multi-detector row CT), -0.7 to +2.0 mm (dual-energy CT), and -2.3 to +4.8 mm (cone-beam CT). This study demonstrates that manual threshold selection results in better STL models than default thresholding. The use of dual-energy CT and cone-beam CT technology in its present form does not deliver reliable or accurate STL models for medical additive manufacturing. New approaches are required that are based on pattern recognition and machine learning algorithms.
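The threshold-to-surface step shared by the manual and default workflows can be sketched with scikit-image's marching cubes; the volume and the two threshold levels below are synthetic stand-ins, and in a real pipeline the resulting vertices and faces would be written to an STL file (e.g., with the numpy-stl package):

    import numpy as np
    from skimage import measure

    # Hypothetical CT volume; in practice this is the scanner output in HU.
    rng = np.random.default_rng(5)
    vol = rng.normal(0, 50, (64, 64, 64))
    vol[16:48, 16:48, 16:48] += 1200            # synthetic "bone" block

    for level in (400.0, 700.0):                # manual vs default-style thresholds
        verts, faces, _, _ = measure.marching_cubes(vol, level=level)
        print(level, len(verts), len(faces))    # mesh geometry shifts with threshold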
Sugar maple growth in relation to nutrition and stress in the northeastern United States.
Long, Robert P; Horsley, Stephen B; Hallett, Richard A; Bailey, Scott W
2009-09-01
Sugar maple, Acer saccharum, decline disease is incited by multiple disturbance factors when imbalanced calcium (Ca), magnesium (Mg), and manganese (Mn) act as predisposing stressors. Our objective in this study was to determine whether factors affecting sugar maple health also affect growth as estimated by basal area increment (BAI). We used 76 northern hardwood stands in northern Pennsylvania, New York, Vermont, and New Hampshire, USA, and found that sugar maple growth was positively related to foliar concentrations of Ca and Mg and stand level estimates of sugar maple crown health during a high stress period from 1987 to 1996. Foliar nutrient threshold values for Ca, Mg, and Mn were used to analyze long-term BAI trends from 1937 to 1996. Significant (P ≤ 0.05) nutrient threshold-by-time interactions indicate changing growth in relation to nutrition during this period. Healthy sugar maples sampled in the 1990s had decreased growth in the 1970s, 10-20 years in advance of the 1980s and 1990s decline episode in Pennsylvania. Even apparently healthy stands that had no defoliation, but had below-threshold amounts of Ca or Mg and above-threshold Mn (from foliage samples taken in the mid 1990s), had decreasing growth by the 1970s. Co-occurring black cherry, Prunus serotina, in a subset of the Pennsylvania and New York stands, showed opposite growth responses with greater growth in stands with below-threshold Ca and Mg compared with above-threshold stands. Sugar maple growing on sites with the highest concentrations of foliar Ca and Mg show a general increase in growth from 1937 to 1996 while other stands with lower Ca and Mg concentrations show a stable or decreasing growth trend. We conclude that acid deposition induced changes in soil nutrient status that crossed a threshold necessary to sustain sugar maple growth during the 1970s on some sites. While nutrition of these elements has not been considered in forest management decisions, our research shows species specific responses to Ca and Mg that may reduce health and growth of sugar maple or change species composition, if not addressed.
NASA Technical Reports Server (NTRS)
Alston, Erica J.; Sokolik, Irina, N.; Doddridge, Bruce G.
2011-01-01
Poor air quality episodes occur often in metropolitan Atlanta, Georgia. The primary focus of this research is to assess the capability of satellites as a tool in characterizing air quality in Atlanta. Results indicate that intra-city PM2.5 concentrations show patterns similar to those of other U.S. urban areas, with the highest concentrations occurring within the city. Both PM2.5 and MODIS AOD show greater increases in the summer than in spring, yet MODIS AOD doubles in the summer unlike PM2.5. A majority of OMI AI values are below 0.5. Using this value as an ambient measure of carbonaceous aerosols in the urban area, aerosol transport events can be identified. Our results indicate that MODIS AOD is well correlated with PM2.5 on a yearly and seasonal basis, with correlation coefficients as high as 0.8 for Terra and 0.7 for Aqua. A possible alternative view of the PM2.5 and AOD relationship is seen through the use of AOD thresholds. These probabilistic thresholds provide a means to describe the AQI through the use of past AOD for a specific area. We use the NAAQS to classify the AOD into different AQI codes, and probabilistically determine thresholds of AOD that represent the majority of a specific AQI category. For example, the majority (80%) of moderate AQI days have AOD values between 0.5 and 0.6. The development of thresholds could be a tool used to evaluate air quality from satellites in regions where there are sparse ground-based measurements of PM2.5.
Robust crop and weed segmentation under uncontrolled outdoor illumination
USDA-ARS?s Scientific Manuscript database
A new machine vision algorithm for weed detection was developed from RGB color model images. Processes included in the detection algorithm were excess-green conversion, threshold value computation by statistical analysis, adaptive image segmentation by adjusting the threshold value, median filtering, ...
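As a rough illustration of that pipeline, the sketch below converts an RGB image to an excess-green index, computes a threshold statistically (Otsu's method is used here as a stand-in for the manuscript's unspecified statistical analysis), segments, and median-filters the binary mask; these specific choices are assumptions, not the authors' exact procedure.

```python
# Minimal excess-green + threshold segmentation sketch, assuming an RGB
# image supplied as a NumPy array of shape (H, W, 3).
import numpy as np
from skimage.filters import threshold_otsu, median

def segment_vegetation(rgb: np.ndarray) -> np.ndarray:
    r, g, b = [rgb[..., i].astype(float) for i in range(3)]
    exg = 2 * g - r - b                      # excess green index
    t = threshold_otsu(exg)                  # statistical threshold computation
    mask = (exg > t).astype(np.uint8)        # binary segmentation
    return median(mask) > 0                  # median filter removes speckle
```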
Meir, Patrick; Wood, Tana E.; Galbraith, David R.; Brando, Paulo M.; Da Costa, Antonio C. L.; Rowland, Lucy; Ferreira, Leandro V.
2015-01-01
Many tropical rain forest regions are at risk of increased future drought. The net effects of drought on forest ecosystem functioning will be substantial if important ecological thresholds are passed. However, understanding and predicting these effects is challenging using observational studies alone. Field-based rainfall exclusion (canopy throughfall exclusion; TFE) experiments can offer mechanistic insight into the response to extended or severe drought and can be used to help improve model-based simulations, which are currently inadequate. Only eight TFE experiments have been reported for tropical rain forests. We examine them, synthesizing key results and focusing on two processes that have shown threshold behavior in response to drought: (1) tree mortality and (2) the efflux of carbon dioxide from soil (soil respiration). We show that: (a) where tested using large-scale field experiments, tropical rain forest tree mortality is resistant to long-term soil moisture deficit up to a threshold of 50% of the water that is extractable by vegetation from the soil, but high mortality occurs beyond this value, with evidence from one site of increased autotrophic respiration, and (b) soil respiration reaches its peak value in response to soil moisture at significantly higher soil moisture content for clay-rich soils than for clay-poor soils. This first synthesis of tropical TFE experiments offers the hypothesis that low soil moisture-related thresholds for key stress responses in soil and vegetation may prove to be widely applicable across tropical rain forests despite the diversity of these forests. PMID:26955085
NASA Astrophysics Data System (ADS)
Nakamura, Akiko M.; Yamane, Fumiya; Okamoto, Takaya; Takasawa, Susumu
2015-03-01
The outcome of a collision between small solid bodies is characterized by the threshold energy density Q*s, the specific energy to shatter, defined as the ratio of projectile kinetic energy to target mass (or the sum of target and projectile masses) needed to produce a largest intact fragment containing one half the target mass. Theory and numerical simulations indicate that the disruption threshold Q*s decreases with target size in the strength-dominated regime. This tendency was confirmed by laboratory impact experiments using non-porous rock targets (Housen and Holsapple, 1999; Nagaoka et al., 2014). In this study, we performed low-velocity impact disruption experiments on porous gypsum targets with porosity of 65-69% and of three different sizes to examine the size dependence of the disruption threshold for porous material. The gypsum specimens were shown to have a weaker volume dependence of static tensile strength than the non-porous rocks. The disruption threshold likewise had a weaker dependence on size scale, Q*s ∝ D^(-γ) with γ ≤ 0.25-0.26, while previous laboratory studies showed γ = 0.40 for the non-porous rocks. The low-velocity measurements yield a value of about 100 J kg⁻¹ for Q*s, which is roughly one order of magnitude lower than the Q*s of gypsum targets of 65% porosity impacted by projectiles at higher velocities. Such a clear dependence on impact velocity was also shown by previous studies of gypsum targets with porosity of 50%.
Pre-impact fall detection system using dynamic threshold and 3D bounding box
NASA Astrophysics Data System (ADS)
Otanasap, Nuth; Boonbrahm, Poonpong
2017-02-01
Fall prevention and detection systems must overcome many challenges to be efficient. Among the difficult problems are obtrusion, occlusion, and overlay in vision-based systems. Other associated issues are privacy, cost, noise, computational complexity, and the definition of threshold values. Estimating human motion with vision-based methods usually involves partial overlay, caused by the viewing direction between objects or body parts and the camera, and these issues have to be taken into consideration. This paper proposes a dynamic-threshold and bounding-box posture analysis method with a multiple-Kinect setup for human posture analysis and fall detection. The proposed work uses only two Kinect cameras to acquire distributed values and differentiate normal activities from falls. If the peak head velocity exceeds the dynamic threshold value, bounding-box posture analysis is used to confirm fall occurrence. Furthermore, information captured by multiple Kinects placed at right angles addresses the skeleton overlay problem of a single Kinect. This work contributes the fusion of multiple Kinect-based skeletons using dynamic thresholds and bounding-box posture analysis, which is the only such approach reported so far.
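A minimal sketch of that two-stage decision rule follows; the dynamic-threshold update (mean plus a multiple of the standard deviation of recent head velocities) and the bounding-box aspect-ratio test are illustrative assumptions, not the authors' exact formulas.

```python
# Two-stage fall decision: peak head velocity above a dynamic threshold
# triggers a bounding-box posture check. All parameters are placeholders.
import numpy as np

def dynamic_threshold(recent_velocities, k=2.0):
    v = np.asarray(recent_velocities, float)
    return v.mean() + k * v.std()            # assumed update rule

def is_fall(head_velocity_peak, bbox_width, bbox_height, recent_velocities):
    if head_velocity_peak <= dynamic_threshold(recent_velocities):
        return False                         # velocity below threshold: no fall
    # Posture confirmation: a lying posture yields a wide, flat bounding box.
    return bbox_width / bbox_height > 1.0
```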
The conventional tuning fork as a quantitative tool for vibration threshold.
Alanazy, Mohammed H; Alfurayh, Nuha A; Almweisheer, Shaza N; Aljafen, Bandar N; Muayqil, Taim
2018-01-01
This study was undertaken to describe a method for quantifying vibration when using a conventional tuning fork (CTF) in comparison to a Rydel-Seiffer tuning fork (RSTF) and to provide reference values. Vibration thresholds at the index finger and big toe were obtained in 281 participants. Spearman's correlations were performed. Age, weight, and height were analyzed for their covariate effects on vibration threshold. Reference values at the fifth percentile were obtained by quantile regression. The correlation coefficients between CTF and RSTF values at finger/toe were 0.59/0.64 (P = 0.001 for both). Among covariates, only age had a significant effect on vibration threshold. Reference values for CTF at finger/toe for the age groups 20-39 and 40-60 years were 7.4/4.9 and 5.8/4.6 s, respectively. Reference values for RSTF at finger/toe for the age groups 20-39 and 40-60 years were 6.9/5.5 and 6.2/4.7, respectively. The CTF provides quantitative values that are as good as those provided by the RSTF. Age-stratified reference data are provided. Muscle Nerve 57: 49-53, 2018.
Modeling heat stress under different environmental conditions.
Carabaño, M J; Logar, B; Bormann, J; Minet, J; Vanrobays, M-L; Díaz, C; Tychon, B; Gengler, N; Hammami, H
2016-05-01
Renewed interest in heat stress effects on livestock productivity derives from climate change, which is expected to increase temperatures and the frequency of extreme weather events. This study aimed at evaluating the effect of temperature and humidity on milk production in highly selected dairy cattle populations across 3 European regions differing in climate and production systems, to detect differences and similarities that can be used to optimize heat stress (HS) effect modeling. Milk, fat, and protein test-day data from official milk recording for 1999 to 2010 in 4 Holstein populations located in the Walloon Region of Belgium (BEL), Luxembourg (LUX), Slovenia (SLO), and southern Spain (SPA) were merged with temperature and humidity data provided by the state meteorological agencies. After merging, the number of test-day records/cows per trait ranged from 686,726/49,655 in SLO to 1,982,047/136,746 in BEL. The ranges of the daily average and maximum temperature-humidity indices (THIavg and THImax) were widest in SLO (22-74 for THIavg, 28-84 for THImax) and narrowest in SPA (39-76 and 46-83). Change-point techniques were used to determine comfort thresholds, which differed across traits and climatic regions. Milk yield showed an inverted-U-shaped pattern of response across the THI scale, with a HS threshold around 73 THImax units. For fat and protein, thresholds were lower than for milk yield and were shifted around 6 THI units toward larger values in SPA compared with the other countries. Fat showed lower HS thresholds than protein in all countries. The traditional broken-line model was compared with quadratic and cubic fits of the pattern of response in production to increasing heat loads. A cubic polynomial model allowing for individual variation in patterns of response, with THIavg as the heat load measure, showed the best statistical features. Higher-producing animals showed less persistent production (quantity and quality) across the THI scale, and lower-producing animals more persistent production. The estimated correlations between production at comfort and at a THIavg of 70 (the upper end of the THIavg scale in BEL-LUX) were lower for BEL-LUX (0.70-0.80) than for SPA (0.83-0.85). Overall, animals producing in the more temperate climates and semi-extensive grazing systems of BEL and LUX showed HS at lower heat loads and more re-ranking across the THI scale than animals producing in the warmer climate and intensive indoor system of SPA.
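The broken-line (change-point) model mentioned above has a simple least-squares form: production is flat up to a comfort threshold and changes linearly beyond it. The sketch below scans candidate thresholds and keeps the one minimizing the residual sum of squares; it is a simplified stand-in for the paper's change-point techniques, with all names and the candidate grid assumed.

```python
# Broken-line fit: yield = plateau + slope * max(THI - t, 0), with the
# threshold t chosen by grid search over candidates.
import numpy as np

def fit_broken_line(thi, yield_, candidates):
    thi, yield_ = np.asarray(thi, float), np.asarray(yield_, float)
    best = None
    for t in candidates:
        x = np.maximum(thi - t, 0.0)             # 0 below threshold, linear after
        A = np.column_stack([np.ones_like(thi), x])
        coef, res, *_ = np.linalg.lstsq(A, yield_, rcond=None)
        sse = float(res[0]) if res.size else float(np.sum((A @ coef - yield_) ** 2))
        if best is None or sse < best[0]:
            best = (sse, t, coef)
    return best[1], best[2]                      # threshold, (plateau, slope)
```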
ERIC Educational Resources Information Center
Johnson, Brittney; McCracken, I. Moriah
2016-01-01
In 2015, threshold concepts formed the foundation of two disciplinary documents: The "ACRL Framework for Information Literacy" (2015) and "Naming What We Know: Threshold Concepts of Writing Studies" (2015). While there is no consensus in the fields about the value of threshold concepts in teaching, reading the six Frames in the…
Automatic burst detection for the EEG of the preterm infant.
Jennekens, Ward; Ruijs, Loes S; Lommen, Charlotte M L; Niemarkt, Hendrik J; Pasman, Jaco W; van Kranen-Mastenbroek, Vivianne H J M; Wijn, Pieter F F; van Pul, Carola; Andriessen, Peter
2011-10-01
To aid with prognosis and stratification of clinical treatment for preterm infants, a method for automated detection of bursts, interburst intervals (IBIs) and continuous patterns in the electroencephalogram (EEG) was developed. Results are evaluated for preterm infants with normal neurological follow-up at 2 years. The detection algorithm (MATLAB®) for bursts, IBIs and continuous patterns is based on selection by amplitude, time span, number of channels and number of active electrodes. Annotations by two neurophysiologists were used to determine threshold values. The training set consisted of EEG recordings of four preterm infants with postmenstrual age (PMA, gestational age + postnatal age) of 29-34 weeks. Optimal threshold values were based on the overall highest sensitivity. For evaluation, both observers verified detections in an independent dataset of four EEG recordings with comparable PMA. Algorithm performance was assessed by calculation of sensitivity and positive predictive value. Algorithm evaluation yielded sensitivity values of 90% ± 6%, 80% ± 9% and 97% ± 5% for burst, IBI and continuous patterns, respectively. Corresponding positive predictive values were 88% ± 8%, 96% ± 3% and 85% ± 15%, respectively. In conclusion, the algorithm showed high sensitivity and positive predictive values for bursts, IBIs and continuous patterns in preterm EEG. Computer-assisted analysis of EEG may allow objective and reproducible analysis for clinical treatment.
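A minimal sketch of such amplitude/duration-based burst detection is given below (in Python rather than the authors' MATLAB): a sample joins a burst candidate when enough channels exceed an amplitude threshold, and candidates shorter than a minimum time span are discarded. All threshold values are placeholders, not the trained ones.

```python
# Amplitude/duration burst detector over a multichannel EEG array.
import numpy as np

def detect_bursts(eeg, fs, amp_uV=30.0, min_channels=2, min_dur_s=1.0):
    """eeg: (n_channels, n_samples) array in microvolts; fs in Hz."""
    active = (np.abs(eeg) > amp_uV).sum(axis=0) >= min_channels
    bursts, start = [], None
    for i, a in enumerate(np.append(active, False)):   # sentinel closes last run
        if a and start is None:
            start = i                                  # burst candidate opens
        elif not a and start is not None:
            if (i - start) / fs >= min_dur_s:          # enforce minimum time span
                bursts.append((start / fs, i / fs))    # (onset_s, offset_s)
            start = None
    return bursts
```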
A threshold method for immunological correlates of protection
2013-01-01
Background Immunological correlates of protection are biological markers such as disease-specific antibodies which correlate with protection against disease and which are measurable with immunological assays. It is common in vaccine research and in setting immunization policy to rely on threshold values for the correlate, where the accepted threshold differentiates between individuals who are considered to be protected against disease and those who are susceptible. Examples where thresholds are used include development of a new-generation 13-valent pneumococcal conjugate vaccine, which was required in clinical trials to meet accepted thresholds for the older 7-valent vaccine, and public health decision making on vaccination policy based on long-term maintenance of protective thresholds for hepatitis A, rubella, measles, Japanese encephalitis and others. Despite widespread use of such thresholds in vaccine policy and research, few statistical approaches have been formally developed which specifically incorporate a threshold parameter in order to estimate the value of the protective threshold from data. Methods We propose a 3-parameter statistical model called the a:b model which incorporates parameters for a threshold and constant but different infection probabilities below and above the threshold, estimated using profile likelihood or least squares methods. The estimated threshold can be evaluated by a significance test for the existence of a threshold using a modified likelihood ratio test which follows a chi-squared distribution with 3 degrees of freedom, and confidence intervals for the threshold can be obtained by bootstrapping. The model also permits assessment of the relative risk of infection in patients achieving the threshold or not. Goodness-of-fit of the a:b model may be assessed using the Hosmer-Lemeshow approach. The model is applied to 15 datasets from published clinical trials on pertussis, respiratory syncytial virus and varicella. Results Highly significant thresholds with p-values less than 0.01 were found for 13 of the 15 datasets. Considerable variability was seen in the widths of confidence intervals. Relative risks indicated around 70% or better protection in 11 datasets, supporting the relevance of the estimated thresholds as implying strong protection. Goodness-of-fit was generally acceptable. Conclusions The a:b model offers a formal statistical method of estimating thresholds differentiating susceptible from protected individuals, which has previously depended on putative statements based on visual inspection of data. PMID:23448322
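Under the a:b model as defined above, for a fixed threshold the maximum-likelihood estimates of the two infection probabilities are simply the observed infection rates below and above the threshold, so the profile likelihood over candidate thresholds can be computed directly. The sketch below does exactly that; variable names and the candidate grid are illustrative, and the significance test and bootstrap intervals from the paper are not reproduced.

```python
# Profile-likelihood fit of the a:b model: infection probability is a
# constant "a" below the threshold and "b" at or above it.
import numpy as np

def fit_ab_model(titers, infected, candidates):
    titers = np.asarray(titers, float)
    infected = np.asarray(infected, int)     # 1 = infected, 0 = protected

    def loglik(p, y):                        # binomial log-likelihood
        p = min(max(p, 1e-9), 1 - 1e-9)
        k = y.sum()
        return k * np.log(p) + (len(y) - k) * np.log(1 - p)

    best = None
    for t in candidates:
        below, above = infected[titers < t], infected[titers >= t]
        if len(below) == 0 or len(above) == 0:
            continue                         # threshold must split the data
        ll = loglik(below.mean(), below) + loglik(above.mean(), above)
        if best is None or ll > best[0]:
            best = (ll, t, below.mean(), above.mean())
    return best                              # (log-likelihood, threshold, a_hat, b_hat)
```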
Health hazards of ultrafine metal and metal oxide powders
NASA Technical Reports Server (NTRS)
Boylen, G. W., Jr.; Chamberlin, R. I.; Viles, F. J.
1969-01-01
Study reveals that suggested threshold limit values are from two to fifty times lower than current recommended threshold limit values. Proposed safe limits of exposure to the ultrafine dusts are based on known toxic potential of various materials as determined in particle size ranges.
48 CFR 41.401 - Monthly and annual review.
Code of Federal Regulations, 2010 CFR
2010-10-01
... values exceeding the simplified acquisition threshold, on an annual basis. Annual reviews of accounts with annual values at or below the simplified acquisition threshold shall be conducted when deemed... services to each facility under the utility's most economical, applicable rate and to examine competitive...
Spirtas, R; Steinberg, M; Wands, R C; Weisburger, E K
1986-01-01
The Chemical Substances Threshold Limit Value Committee of the American Conference of Governmental Industrial Hygienists has refined its procedures for evaluating carcinogens. Types of epidemiologic and toxicologic evidence used are reviewed and a discussion is presented on how the Committee evaluates data on carcinogenicity. Although it has not been conclusively determined whether biological thresholds exist for all types of carcinogens, the Committee will continue to develop guidelines for permissible exposures to carcinogens. The Committee will continue to use the safety factor approach to setting Threshold Limit Values for carcinogens, despite its shortcomings. A compilation has been developed for lists of substances considered to be carcinogenic by several scientific groups. The Committee will use this information to help to identify and classify carcinogens for its evaluation. PMID:3752326
Bakker, Marjan; Wicherts, Jelte M
2014-09-01
In psychology, outliers are often excluded before running an independent samples t test, and data are often nonnormal because of the use of sum scores based on tests and questionnaires. This article concerns the handling of outliers in the context of independent samples t tests applied to nonnormal sum scores. After reviewing common practice, we present results of simulations of artificial and actual psychological data, which show that the removal of outliers based on commonly used Z-value thresholds severely increases the Type I error rate. We found Type I error rates above 20% after removing outliers with a threshold value of Z = 2 in a short and difficult test. Inflations of Type I error rates are particularly severe when researchers are given the freedom to alter threshold values of Z after having seen the effects thereof on outcomes. We recommend the use of nonparametric Mann-Whitney-Wilcoxon tests or robust Yuen-Welch tests without removing outliers. These alternatives to independent samples t tests are found to have nominal Type I error rates with a minimal loss of power when no outliers are present in the data, and to have nominal Type I error rates and good power when outliers are present.
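The inflation mechanism is easy to reproduce. The simulation sketch below draws two identical skewed (exponential) groups, trims |Z| > 2 "outliers" within each group, runs a t test, and tallies rejections at the nominal 5% level; the sample sizes and distribution are illustrative assumptions, and under them the rejection rate can rise well above 0.05.

```python
# Null simulation: identical skewed groups, Z-based trimming, then t test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def trim(x, z=2.0):
    return x[np.abs(stats.zscore(x)) <= z]   # drop "outliers" by |Z| threshold

n_sim, rejections = 5000, 0
for _ in range(n_sim):
    a = rng.exponential(size=30)             # skewed null data, group 1
    b = rng.exponential(size=30)             # identical distribution, group 2
    _, p = stats.ttest_ind(trim(a), trim(b))
    rejections += p < 0.05

print(rejections / n_sim)                    # empirical Type I error rate
```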
Devlin, Michelle; Painting, Suzanne; Best, Mike
2007-01-01
The EU Water Framework Directive recognises that ecological status is supported by the prevailing physico-chemical conditions in each water body. This paper describes an approach to providing guidance on setting thresholds for nutrients taking account of the biological response to nutrient enrichment evident in different types of water. Indices of pressure, state and impact are used to achieve a robust nutrient (nitrogen) threshold by considering each individual index relative to a defined standard, scale or threshold. These indices include winter nitrogen concentrations relative to a predetermined reference value; the potential of the waterbody to support phytoplankton growth (estimated as primary production); and detection of an undesirable disturbance (measured as dissolved oxygen). Proposed reference values are based on a combination of historical records, offshore (limited human influence) nutrient concentrations, literature values and modelled data. Statistical confidence is based on a number of attributes, including distance of confidence limits away from a reference threshold and how well the model is populated with real data. This evidence based approach ensures that nutrient thresholds are based on knowledge of real and measurable biological responses in transitional and coastal waters.
A de-noising method using the improved wavelet threshold function based on noise variance estimation
NASA Astrophysics Data System (ADS)
Liu, Hui; Wang, Weida; Xiang, Changle; Han, Lijin; Nie, Haizhao
2018-01-01
The precise and efficient estimation of noise variance is very important for processing all kinds of signals when using the wavelet transform to analyze signals and extract signal features. In view of the problem that the accuracy of traditional noise variance estimation is greatly affected by fluctuations in noise values, this study puts forward the strategy of using a two-state Gaussian mixture model to classify the high-frequency wavelet coefficients at the minimum scale, which takes both efficiency and accuracy into account. Based on the noise variance estimate, a novel improved wavelet threshold function is proposed by combining the advantages of the hard and soft threshold functions, and on the basis of the noise variance estimation algorithm and the improved wavelet threshold function, the research puts forth a novel wavelet threshold de-noising method. The method is tested and validated using random signals and bench test data from an electro-mechanical transmission system. The test results indicate that the wavelet threshold de-noising method based on noise variance estimation shows preferable performance in processing the test signals of the electro-mechanical transmission system: it can effectively eliminate transient interference in the voltage, current, and oil pressure signals and favorably maintain their dynamic characteristics.
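The sketch below illustrates a soft/hard compromise threshold function of the general kind described (the paper's exact functional form and its Gaussian-mixture noise estimator are not reproduced): near the threshold it behaves like soft thresholding, and for large coefficients it approaches hard thresholding. The exponential transition, the db4 wavelet, and the MAD-based noise estimate are assumptions.

```python
# Wavelet de-noising with a soft/hard compromise threshold function.
import numpy as np
import pywt

def improved_threshold(w, thr, alpha=2.0):
    w = np.asarray(w, float)
    # Equals soft thresholding at |w| = thr, tends to hard thresholding as |w| grows.
    shrink = w - np.sign(w) * thr * np.exp(-alpha * (np.abs(w) - thr))
    return np.where(np.abs(w) > thr, shrink, 0.0)

def denoise(signal, wavelet="db4", level=4, k=3.0):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745   # noise std from finest scale (MAD)
    thr = k * sigma
    coeffs = [coeffs[0]] + [improved_threshold(c, thr) for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)
```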
NASA Astrophysics Data System (ADS)
Kurtzman, D.; Kanner, B.; Levy, Y.; Shapira, R. H.; Bar-Tal, A.
2017-12-01
Closed-root-zone experiments (e.g. pots, lysimeters) reveal in many cases a mineral-nitrogen (N) concentration above which root-N-uptake efficiency declines significantly and nitrate leaching below the root zone increases dramatically. A less direct way to reveal this threshold concentration in agricultural fields is to calibrate N-transport models of the unsaturated zone to nitrate data from deep samples (below the root zone) by fitting the threshold concentration of the nitrate-uptake function. Independent research efforts of these two types in light soils, where nitrate problems in underlying aquifers are common, revealed: 1) that the threshold exists for most crops (field crops, vegetables and orchards); 2) close agreement on the threshold value between the two very different research methodologies; and 3) that the threshold lies within 20-50 mg-N/L. Staying below the threshold is a relatively simple aim on the way to maintaining intensive agriculture with limited effects on nitrate concentrations in the underlying water resource. Our experience shows that in some crops this threshold coincides with the end-of-rise of the N-yield curve (e.g. corn); in this case, it is relatively easy to convince farmers to fertilize below the threshold. In other crops, although significant N is lost to leaching, the crop can still use higher N concentrations to increase yield (e.g. potato).
Higher criticism thresholding: Optimal feature selection when useful features are rare and weak.
Donoho, David; Jin, Jiashun
2008-09-30
In important application fields today, such as genomics and proteomics, selecting a small subset of useful features is crucial for success of Linear Classification Analysis. We study feature selection by thresholding of feature Z-scores and introduce a principle of threshold selection, based on the notion of higher criticism (HC). For i = 1, 2, ..., p, let π_i denote the two-sided P-value associated with the ith feature Z-score and π_(i) denote the ith order statistic of the collection of P-values. The HC threshold is the absolute Z-score corresponding to the P-value maximizing the HC objective (i/p − π_(i)) / sqrt((i/p)(1 − i/p)). We consider a rare/weak (RW) feature model, where the fraction of useful features is small and the useful features are each too weak to be of much use on their own. HC thresholding (HCT) has interesting behavior in this setting, with an intimate link between maximizing the HC objective and minimizing the error rate of the designed classifier, and very different behavior from popular threshold selection procedures such as false discovery rate thresholding (FDRT). In the most challenging RW settings, HCT uses an unconventionally low threshold; this keeps the missed-feature detection rate under better control than FDRT and yields a classifier with improved misclassification performance. Replacing cross-validated threshold selection in the popular Shrunken Centroid classifier with the computationally less expensive and simpler HCT reduces the variance of the selected threshold and the error rate of the constructed classifier. Results on standard real datasets and in asymptotic theory confirm the advantages of HCT.
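The HC threshold as defined above can be computed in a few lines. The sketch below sorts the two-sided P-values, evaluates the HC objective, and returns the |Z| of the maximizing feature; restricting the search to the lower half of the order statistics is a common convention and also avoids division by zero at i = p.

```python
# Higher criticism threshold from a vector of feature Z-scores.
import numpy as np
from scipy import stats

def hc_threshold(z):
    z = np.asarray(z, float)
    p = 2 * stats.norm.sf(np.abs(z))          # two-sided P-values
    order = np.argsort(p)                     # ascending order statistics
    p_sorted = p[order]
    n = len(p)
    i = np.arange(1, n + 1)
    hc = (i / n - p_sorted) / np.sqrt(i / n * (1 - i / n))
    i_star = int(np.argmax(hc[: n // 2]))     # search lower half of order stats
    return np.abs(z[order][i_star])           # HC threshold on the |Z| scale
```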
Vemer, Pepijn; Rutten-van Mölken, Maureen P M H
2011-10-01
Recently, several checklists have systematically assessed factors that affect the transferability of cost-effectiveness (CE) studies between jurisdictions. The role of the threshold value for a QALY has been given little consideration in these checklists, even though the importance of a factor as a cause of between-country differences in CE depends on this threshold. In this paper, we study the impact of the willingness-to-pay (WTP) per QALY on the importance of transferability factors in the case of smoking cessation support (SCS). We investigated, for several values of the WTP, how differences between six countries affect the incremental net monetary benefit (INMB) of SCS. The investigated factors were demography, smoking prevalence, mortality, epidemiology and costs of smoking-related diseases, resource use and unit costs of SCS, utility weights, and discount rates. We found that as the WTP decreased, factors that mainly affect health outcomes became less important and factors that mainly affect costs became more important. With a WTP below
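For context, the INMB quantity analyzed above has the standard form INMB = WTP × ΔQALYs − ΔCost, evaluated with each country's inputs; a minimal sketch with placeholder values follows.

```python
# Incremental net monetary benefit; all numeric inputs are placeholders.
def inmb(delta_qalys: float, delta_costs: float, wtp_per_qaly: float) -> float:
    return wtp_per_qaly * delta_qalys - delta_costs

print(inmb(0.05, 300.0, 20000.0))  # -> 700.0 (monetary units)
```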
NASA Technical Reports Server (NTRS)
Smith, Paul L.; VonderHaar, Thomas H.
1996-01-01
The principal goal of this project is to establish relationships that would allow application of area-time integral (ATI) calculations based upon satellite data to estimate rainfall volumes. The research is being carried out as a collaborative effort between the two participating organizations, with the satellite data analysis to determine values for the ATIs being done primarily by the STC-METSAT scientists and the associated radar data analysis to determine the 'ground-truth' rainfall estimates being done primarily at the South Dakota School of Mines and Technology (SDSM&T). Synthesis of the two separate kinds of data and investigation of the resulting rainfall-versus-ATI relationships is then carried out jointly. The research has been pursued using two different approaches, which for convenience can be designated as the 'fixed-threshold approach' and the 'adaptive-threshold approach'. In the former, an attempt is made to determine a single temperature threshold in the satellite infrared data that would yield ATI values for identifiable cloud clusters which are closely related to the corresponding rainfall amounts as determined by radar. Work on the second, or 'adaptive-threshold', approach for determining the satellite ATI values has explored two avenues: (1) one attempt involved choosing IR thresholds to match the satellite ATI values with ones separately calculated from the radar data on a case basis; and (2) another attempt involved a straightforward screening analysis to determine the (fixed) offset that would lead to the strongest correlation and lowest standard error of estimate in the relationship between the satellite ATI values and the corresponding rainfall volumes.
NASA Astrophysics Data System (ADS)
Zhang, Fengtian; Wang, Chao; Yuan, Mingquan; Tang, Bin; Xiong, Zhuang
2017-12-01
Most of the MEMS inertial switches developed in recent years are intended for shock and impact sensing with threshold values above 50 g. To meet the requirement of detecting linear acceleration signals at low-g levels, a silicon-based MEMS inertial switch with a threshold value of 5 g was designed, fabricated and characterized. The switch consists of a large proof mass supported by circular spiral springs. An analytical model of the structural stiffness of the proposed switch was derived and verified by finite-element simulation. Fabrication was based on a customized double-buried-layer silicon-on-insulator wafer, with encapsulation by glass wafers. Centrifugal and nanoindentation experiments were performed to measure the threshold value and the structural stiffness. The actual threshold values were measured to be 0.1-0.3 g lower than the designed value of 5 g due to dimension loss during non-contact lithography processing. Concerning reliability, a series of environmental experiments were conducted and the switches remained operational without excessive errors. However, both the random vibration and the shock tests indicate that metal particles generated during collision of the contact parts might affect contact reliability and long-term stability. Accordingly, a careful study of switch contact behavior should be included in future research.
Cost-effectiveness thresholds: methods for setting and examples from around the world.
Santos, André Soares; Guerra-Junior, Augusto Afonso; Godman, Brian; Morton, Alec; Ruas, Cristina Mariano
2018-06-01
Cost-effectiveness thresholds (CETs) are used to judge if an intervention represents sufficient value for money to merit adoption in healthcare systems. The study was motivated by the Brazilian context of HTA, where meetings are being conducted to decide on the definition of a threshold. Areas covered: An electronic search was conducted on Medline (via PubMed), Lilacs (via BVS) and ScienceDirect followed by a complementary search of references of included studies, Google Scholar and conference abstracts. Cost-effectiveness thresholds are usually calculated through three different approaches: the willingness-to-pay, representative of welfare economics; the precedent method, based on the value of an already funded technology; and the opportunity cost method, which links the threshold to the volume of health displaced. An explicit threshold has never been formally adopted in most places. Some countries have defined thresholds, with some flexibility to consider other factors. An implicit threshold could be determined by research of funded cases. Expert commentary: CETs have had an important role as a 'bridging concept' between the world of academic research and the 'real world' of healthcare prioritization. The definition of a cost-effectiveness threshold is paramount for the construction of a transparent and efficient Health Technology Assessment system.
Femtosecond ablation of ultrahard materials
NASA Astrophysics Data System (ADS)
Dumitru, G.; Romano, V.; Weber, H. P.; Sentis, M.; Marine, W.
Several ultrahard materials and coatings of definite interest for tribological applications were tested with respect to their response when irradiated with fs laser pulses. Results on cemented tungsten carbide and on titanium carbonitride are reported for the first time and compared with outcomes of investigations on diamond and titanium nitride. The experiments were carried out in air, in a regime of 5-8 J/cm² fluences, using the beam of a commercial Ti:sapphire laser. The changes induced in the surface morphology were analysed with a Nomarski optical microscope, and with SEM and AFM techniques. From the experimental data and from the calculated incident energy density distributions, the damage and ablation threshold values were determined. As expected, diamond showed the highest threshold, while the cemented tungsten carbide exhibited values typical of metallic surfaces. The ablation rates determined under the above-mentioned experimental conditions were in the range 0.1-0.2 μm per pulse for all the materials investigated.
Threshold Values for Identification of Contamination Predicted by Reduced-Order Models
Last, George V.; Murray, Christopher J.; Bott, Yi-Ju; ...
2014-12-31
The U.S. Department of Energy’s (DOE’s) National Risk Assessment Partnership (NRAP) Project is developing reduced-order models to evaluate potential impacts on underground sources of drinking water (USDWs) if CO2 or brine leaks from deep CO2 storage reservoirs. Threshold values, below which there would be no predicted impacts, were determined for portions of two aquifer systems. These threshold values were calculated using an interwell approach for determining background groundwater concentrations that is an adaptation of methods described in the U.S. Environmental Protection Agency’s Unified Guidance for Statistical Analysis of Groundwater Monitoring Data at RCRA Facilities.
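As a simple illustration of an interwell background threshold of the kind described above, the sketch below pools background (upgradient) measurements and sets the "no predicted impact" threshold at an upper percentile of that distribution; the 95th percentile is an illustrative choice, and the EPA Unified Guidance prescribes more formal prediction and tolerance limits than this.

```python
# Interwell background threshold from pooled background concentrations.
import numpy as np

def background_threshold(background_concentrations, q=95):
    """Upper q-th percentile of pooled background data (illustrative rule)."""
    return np.percentile(np.asarray(background_concentrations, float), q)
```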
Processing circuitry for single channel radiation detector
NASA Technical Reports Server (NTRS)
Holland, Samuel D. (Inventor); Delaune, Paul B. (Inventor); Turner, Kathryn M. (Inventor)
2009-01-01
Processing circuitry is provided for a high voltage operated radiation detector. An event detector utilizes a comparator configured to produce an event signal based on a leading edge threshold value. A preferred event detector does not produce another event signal until a trailing edge threshold value is satisfied. The event signal can be utilized for counting the number of particle hits and also for controlling data collection operation for a peak detect circuit and timer. The leading edge threshold value is programmable such that it can be reprogrammed by a remote computer. A digital high voltage control is preferably operable to monitor and adjust high voltage for the detector.
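In software terms, the leading/trailing-edge behaviour described above is a hysteresis comparator: an event fires on crossing the leading-edge threshold, and the detector re-arms only after the signal falls back through the trailing-edge threshold. The sketch below is a minimal digital analogue with illustrative threshold values, not the patented circuit.

```python
# Hysteresis event detection: fire on the leading edge, re-arm on the
# trailing edge so one particle hit produces exactly one event.
def detect_events(samples, leading=0.8, trailing=0.2):
    events, armed = [], True
    for i, v in enumerate(samples):
        if armed and v >= leading:
            events.append(i)        # event signal produced (count a hit)
            armed = False           # ignore until trailing edge satisfied
        elif not armed and v <= trailing:
            armed = True            # re-arm the detector
    return events
```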
Black, Robert W; Moran, Patrick W; Frankforter, Jill D
2011-04-01
Many streams within the United States are impaired due to nutrient enrichment, particularly in agricultural settings. The present study examines the response of benthic algal communities in agricultural and minimally disturbed sites from across the western United States to a suite of environmental factors, including nutrients, collected at multiple scales. The first objective was to identify the relative importance of nutrients, habitat and watershed features, and macroinvertebrate trophic structure to explain algal metrics derived from deposition and erosion habitats. The second objective was to determine if thresholds in total nitrogen (TN) and total phosphorus (TP) related to algal metrics could be identified and how these thresholds varied across metrics and habitats. Nutrient concentrations within the agricultural areas were elevated and greater than published threshold values. All algal metrics examined responded to nutrients as hypothesized. Although nutrients typically were the most important variables in explaining the variation in each of the algal metrics, environmental factors operating at multiple scales also were important. Calculated thresholds for TN or TP based on the algal metrics generated from samples collected from erosion and deposition habitats were not significantly different. Little variability in threshold values for each metric for TN and TP was observed. The consistency of the threshold values measured across multiple metrics and habitats suggest that the thresholds identified in this study are ecologically relevant. Additional work to characterize the relationship between algal metrics, physical and chemical features, and nuisance algal growth would be of benefit to the development of nutrient thresholds and criteria.
Measurement of erythema and tanning responses in human skin using a tri-stimulus colorimeter.
Seitz, J C; Whitmore, C G
1988-01-01
A Minolta Tri-Stimulus Colorimeter II was evaluated for obtaining objective measurements of early changes in erythema and tanning. The meter detected a subtle, continuous transition between the primary erythematous response and the delayed tanning of skin that was below the visual threshold for detection. Thereafter, the a* (redness) value of the meter showed a significant linear correlation with the dermatologist's perception of erythema, while the b* (yellow) value showed a significant correlation with the perception of tanning. This capability of the tri-stimulus colorimeter to simultaneously evaluate the hue and saturation of skin color affords an improved opportunity to quantitate the transition from erythema to tanning without subjective bias.
NASA Astrophysics Data System (ADS)
Segoni, Samuele; Rosi, Ascanio; Lagomarsino, Daniela; Fanti, Riccardo; Casagli, Nicola
2018-03-01
We communicate the results of a preliminary investigation aimed at improving a state-of-the-art RSLEWS (regional-scale landslide early warning system) based on rainfall thresholds by integrating mean soil moisture values averaged over the territorial units of the system. We tested two approaches. The simplest can be easily applied to improve other RSLEWS: it is based on a soil moisture threshold value under which rainfall thresholds are not used because landslides are not expected to occur. Another approach deeply modifies the original RSLEWS: thresholds based on antecedent rainfall accumulated over long periods are substituted with soil moisture thresholds. A back analysis demonstrated that both approaches consistently reduced false alarms, while the second approach reduced missed alarms as well.
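The simpler of the two approaches reduces to a two-line gate, sketched below: the rainfall threshold is consulted only when the unit-averaged soil moisture exceeds a minimum value below which landslides are not expected. Both threshold values in the sketch are placeholders, not the calibrated ones.

```python
# Soil-moisture-gated rainfall-threshold warning for one territorial unit.
def landslide_warning(rainfall_mm, soil_moisture, rain_thr=80.0, sm_thr=0.25):
    if soil_moisture < sm_thr:
        return False                # too dry: rainfall threshold not consulted
    return rainfall_mm > rain_thr   # standard rainfall-threshold exceedance
```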
NASA Astrophysics Data System (ADS)
Lagrosas, N.; Gacal, G. F. B.; Kuze, H.
2017-12-01
Detection of nighttime cloud from Himawari-8 is implemented using the difference of digital numbers from bands 13 (10.4 µm) and 7 (3.9 µm). A digital number difference of -1.39×10⁴ can be used as a threshold to separate clouds from clear-sky conditions. For ground observations over Chiba, a digital camera (Canon PowerShot A2300) is used to take images of the sky every 5 minutes at an exposure time of 5 s at the Center for Environmental Remote Sensing, Chiba University. From these images, cloud cover values are obtained using a threshold algorithm (Gacal et al., 2016). Ten-minute nighttime cloud cover values from these two datasets are compared and analyzed from 29 May to 05 June 2017 (20:00-03:00 JST). Comparison with lidar data shows that the camera can detect thick high-level clouds up to 10 km. During clear-sky conditions (02-03 June), both camera and satellite show 0% cloud cover. During cloudy conditions (05-06 June), the camera shows almost 100% cloud cover while satellite cloud cover values range from 60 to 100%; these lower values can be attributed to low-level thin clouds (about 2 km above the ground) observed by the National Institute for Environmental Studies lidar located inside Chiba University. This difference shows that the camera can produce accurate cloud cover values for low-level clouds that are sometimes missed by satellites. The opposite occurs when high-level clouds are present (01-02 June): derived satellite cloud cover is almost 100% throughout the night, while the ground-based camera shows cloud cover values fluctuating from 10 to 100% over the same interval, attributable to thin clouds at around 6 km above the ground together with low-level clouds (about 1 km). Since the camera relies on reflected city lights, high-level thin clouds may be missed by the camera yet observed by the satellite; in such conditions the cloud layers are not all captured by either instrument alone. The results of this study show that each instrument can be used to correct the other to provide better cloud cover values, with the correction depending on the height and thickness of the clouds. No correction is necessary when the sky is clear.
Influence of network dynamics on the spread of sexually transmitted diseases.
Risau-Gusman, Sebastián
2012-06-07
Network epidemiology often assumes that the relationships defining the social network of a population are static. The dynamics of relationships is only taken into account indirectly, by assuming that the relevant information for studying epidemic spread is encoded in the network obtained by considering numbers of partners accumulated over periods of time roughly proportional to the infectious period of the disease. On the other hand, models explicitly including social dynamics are often too schematic to provide a reasonable representation of a real population, or so detailed that no general conclusions can be drawn from them. Here, we present a model of social dynamics that is general enough that its parameters can be obtained by fitting data from surveys about sexual behaviour, but that can still be studied analytically using mean-field techniques. This allows us to obtain some general results about epidemic spreading. We show that using accumulated network data to estimate the static epidemic threshold leads to a significant underestimation of that threshold. We also show that, for a dynamic network, the relative epidemic threshold is an increasing function of the infectious period of the disease, implying that the static value is a lower bound to the real threshold. A practical example is given of how to apply the model to the study of a real population.
Bauer, Daniel; Averett, Lacey A; De Smedt, Ann; Kleinman, Mark H; Muster, Wolfgang; Pettersen, Betty A; Robles, Catherine
2014-02-01
Phototoxicity is a relatively common phenomenon and is an adverse effect of some systemic drugs. The fundamental initial step of photochemical reactivity is absorption of a photon; however, little guidance has been provided thus far regarding how ultraviolet-visible (UV-vis) light absorption spectra may be used to inform testing strategies for investigational drugs. Here we report the results of an inter-laboratory study comparing data from harmonized UV-vis light absorption spectra obtained in methanol with data from the in vitro 3T3 Neutral Red Uptake Phototoxicity Test. Six pharmaceutical companies submitted data according to predefined quality criteria for 76 compounds covering a wide range of chemical classes and showing a diverse but "positive"-enriched distribution of photo irritation factors (22%: PIF < 2; 12%: PIF 2-5; 66%: PIF > 5). For compounds that were formally positive (PIF above 5), the lowest reported molar extinction coefficient (MEC) was 1700 L mol⁻¹ cm⁻¹ in methanol. However, the majority of these formally positive compounds showed significantly higher MEC values (up to almost 40,000 L mol⁻¹ cm⁻¹). In conclusion, an MEC value of 1000 L mol⁻¹ cm⁻¹ may represent a reasonable and pragmatic threshold warranting further experimental photosafety evaluation.
Baykal, Mehmet; Gökmen, Necati; Doğan, Alper; Erbayraktar, Serhat; Yılmaz, Osman; Ocmen, Elvan; Erdost, Hale Aksu; Arkan, Atalay
The aim of this study was to investigate the effects of intracerebroventricularly administered rocuronium bromide on the central nervous system, determine the seizure threshold dose of rocuronium bromide in rats, and investigate the effects of rocuronium on the central nervous system at 1/5, 1/10, and 1/100 dilutions of the determined seizure threshold dose. A permanent cannula was placed in the lateral cerebral ventricle of the animals. The study was designed in two phases. In the first phase, the seizure threshold dose of rocuronium bromide was determined. In the second phase, Group R 1/5 (n=6), Group 1/10 (n=6), and Group 1/100 (n=6) were formed using doses of 1/5, 1/10, and 1/100, respectively, of the obtained rocuronium bromide seizure threshold dose. The rocuronium bromide seizure threshold was found to be 0.056 ± 0.009 μmol. The seizure threshold, as a function of the body weight of the rats, was calculated as 0.286 μmol/kg. A dose of 1/5 of the seizure threshold dose primarily caused splayed limbs, posturing, and tremors of the entire body, whereas a dose of 1/10 of the seizure threshold dose caused agitation and shivering. A dose of 1/100 of the seizure threshold dose was associated with decreased locomotor activity. This study showed that rocuronium bromide has dose-related deleterious effects on the central nervous system and can produce dose-dependent excitatory effects and seizures.
NASA Astrophysics Data System (ADS)
Teneva, Lida; Karnauskas, Mandy; Logan, Cheryl A.; Bianucci, Laura; Currie, Jock C.; Kleypas, Joan A.
2012-03-01
Sea surface temperature fields (1870-2100) forced by CO2-induced climate change under the IPCC SRES A1B CO2 scenario, from three World Climate Research Programme Coupled Model Intercomparison Project Phase 3 (WCRP CMIP3) models (CCSM3, CSIRO MK 3.5, and GFDL CM 2.1), were used to examine how coral sensitivity to thermal stress and rates of adaptation affect global projections of coral-reef bleaching. The focus of this study was two-fold: (1) to assess how the choice of Degree-Heating-Month (DHM) thermal stress threshold affects bleaching predictions and (2) to examine the effect of hypothetical adaptation rates of corals to rising temperature. DHM values were estimated using a conventional threshold of 1°C and a variability-based threshold of 2σ above the climatological maximum. Coral adaptation rates were simulated as a function of historical 100-year exposure to maximum annual SSTs, with a dynamic rather than static climatological maximum based on the previous 100 years for a given reef cell. Within CCSM3 simulations, the 1°C threshold predicted later onset of mild bleaching every 5 years for the fraction of reef grid cells where 1°C > 2σ of the climatology time series of annual SST maxima (1961-1990). Alternatively, DHM values using both thresholds, with CSIRO MK 3.5 and GFDL CM 2.1 SSTs, did not produce drastically different onset timing for bleaching every 5 years. Across models, DHMs based on the 1°C thermal stress threshold show that the most threatened reefs by 2100 could be in the Central and Western Equatorial Pacific, whereas use of the variability-based threshold for DHMs yields the Coral Triangle and parts of Micronesia and Melanesia as bleaching hotspots. Simulations that allow corals to adapt to increases in maximum SST drastically reduce the rates of bleaching. These findings highlight the importance of considering the thermal stress threshold in DHM estimates, as well as potential adaptation models, in future coral bleaching projections.
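As a rough sketch of the quantity involved, the code below accumulates monthly SST excesses over (climatological maximum + threshold) in a running window and shows both threshold choices described above; the 4-month window length is an assumption for illustration, not necessarily the study's accumulation period.

```python
# Degree-Heating-Month accumulation with either threshold convention.
import numpy as np

def dhm(sst_monthly, clim_max, threshold, window=4):
    """Running sum (degC-months) of monthly excess over clim_max + threshold."""
    excess = np.maximum(np.asarray(sst_monthly, float) - (clim_max + threshold), 0.0)
    return np.convolve(excess, np.ones(window), mode="valid")

def variability_threshold(annual_maxima):
    """Variability-based threshold: 2 sigma of the annual SST maxima series."""
    return 2.0 * np.std(np.asarray(annual_maxima, float))

# Conventional alternative: a fixed 1.0 degC threshold, e.g. dhm(sst, cmax, 1.0).
```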
Uribe, Juan S; Isaacs, Robert E; Youssef, Jim A; Khajavi, Kaveh; Balzer, Jeffrey R; Kanter, Adam S; Küelling, Fabrice A; Peterson, Mark D
2015-04-01
This multicenter study aims to evaluate the utility of triggered electromyography (t-EMG) recorded throughout psoas retraction during lateral transpsoas interbody fusion to predict postoperative changes in motor function. Three hundred and twenty-three patients undergoing L4-5 minimally invasive lateral interbody fusion at 21 sites were enrolled. Intraoperative data collection included initial t-EMG thresholds in response to posterior retractor blade stimulation and subsequent t-EMG threshold values collected every 5 min throughout retraction. Additional data collection included the dimensions and duration of retraction as well as pre- and postoperative lower extremity neurologic exams. Prior to expanding the retractor, the lowest t-EMG threshold was identified posterior to the retractor in 94% of cases. Postoperatively, 13 (4.5%) patients had a new motor weakness consistent with symptomatic neuropraxia (SN) of lumbar plexus nerves on the approach side. There were no significant differences between patients with and without a corresponding postoperative SN with respect to the initial posterior blade reading (p = 0.600) or retraction dimensions (p > 0.05). Retraction time was significantly longer in patients with SN than in those without (p = 0.031). Stepwise logistic regression showed a significant positive relationship between the presence of new postoperative SN and total retraction time (p < 0.001), as well as the change in t-EMG thresholds over time (p < 0.001), although false positive rates (increased threshold in patients with no new SN) remained high regardless of the absolute increase in threshold used to define an alarm criterion. Prolonged retraction time and coincident increases in t-EMG thresholds are predictors of declining nerve integrity. Increasing t-EMG thresholds, while predictive of injury, were also observed in a large number of patients without iatrogenic injury, with greater predictive value in cases of extended duration. In addition to a careful approach with minimal muscle retraction and consistent lumbar plexus directional retraction, the incidence of postoperative motor neuropraxia may be reduced by limiting retraction time and utilizing t-EMG throughout retraction, while understanding that the specificity of this monitoring technique is low during initial retraction and increases with longer retraction duration.
Temperature measurements in an ytterbium fiber amplifier up to the mode instability threshold
NASA Astrophysics Data System (ADS)
Beier, F.; Heinzig, M.; Sattler, Bettina; Walbaum, Till; Haarlammert, N.; Schreiber, T.; Eberhardt, R.; Tünnermann, A.
2016-03-01
We report on the measurement of the longitudinal temperature distribution along a fiber amplifier during high-power operation. The measurement signal of an optical frequency domain reflectometer is coupled into an ytterbium-doped amplifier fiber via a wavelength division multiplexer. The longitudinal temperature distribution was examined for different pump powers with sub-mm resolution. The results reveal even small temperature variations induced by slight changes in the environmental conditions along the fiber. The mode instability threshold of the fiber under investigation was determined to be 480 W, and temperatures could be measured across the full range of output power values.
Is it possible to shorten ambulatory blood pressure monitoring?
Wolak, Talya; Wilk, Lior; Paran, Esther; Wolak, Arik; Gutmacher, Bella; Shleyfer, Elena; Friger, Michael
2013-08-01
The aim of this investigation was to find a time segment in which average blood pressure (BP) has the best correlation with 24-hour BP control. A total of 240 patients with full ambulatory BP monitoring (ABPM) were included; 120 had controlled BP (systolic BP [SBP] ≤135 mm Hg and diastolic BP [DBP] ≤85 mm Hg) and 120 had uncontrolled BP (SBP >135 mm Hg and/or DBP >85 mm Hg). Each ABPM was divided into 6- and 8-hour segments. Correlation between the mean BP of each time segment and 24-hour BP control was evaluated using receiver operating characteristic curve analysis, with Youden's index identifying the threshold with the best sensitivity and specificity. The mean BP in the following segments showed the highest area under the curve (AUC) relative to controlled 24-hour BP: SBP 2 am to 8 am (AUC 0.918; threshold value 133.5 mm Hg, sensitivity 0.752, specificity 0.904); SBP 2 pm to 10 pm (AUC 0.911; threshold value 138.5 mm Hg, sensitivity 0.803, specificity 0.878); and SBP 6 am to 2 pm (AUC 0.903; threshold value 140.5 mm Hg, sensitivity 0.778, specificity 0.888). The time segment 2 pm to 10 pm showed good correlation with 24-hour BP control (AUC >0.9; sensitivity and specificity >80%). This time segment might replace full ABPM as a screening measure for BP control, or serve as an abbreviated ABPM for patients who have difficulty completing a full ABPM.
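Threshold selection via Youden's index, as used above, amounts to taking the ROC cut-point maximizing sensitivity + specificity − 1. A minimal sketch follows, with variable names assumed for illustration.

```python
# Youden's-index threshold from a segment's mean SBP vs. 24-hour control.
import numpy as np
from sklearn.metrics import roc_curve

def youden_threshold(y_true, segment_mean_sbp):
    """y_true: 1 = uncontrolled 24-h BP, 0 = controlled; returns best cut-point."""
    fpr, tpr, thresholds = roc_curve(y_true, segment_mean_sbp)
    j = tpr - fpr                       # Youden's J for every candidate cut
    return thresholds[int(np.argmax(j))]
```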
Cluster-based analysis improves predictive validity of spike-triggered receptive field estimates
Malone, Brian J.
2017-01-01
Spectrotemporal receptive field (STRF) characterization is a central goal of auditory physiology. STRFs are often approximated by the spike-triggered average (STA), which reflects the average stimulus preceding a spike. In many cases, the raw STA is subjected to a threshold defined by gain values expected by chance. However, such correction methods have not been universally adopted, and the consequences of specific gain-thresholding approaches have not been investigated systematically. Here, we evaluate two classes of statistical correction techniques, using the resulting STRF estimates to predict responses to a novel validation stimulus. The first, more traditional technique eliminated STRF pixels (time-frequency bins) with gain values expected by chance. This correction method yielded significant increases in prediction accuracy, including when the threshold setting was optimized for each unit. The second technique was a two-step thresholding procedure wherein clusters of contiguous pixels surviving an initial gain threshold were then subjected to a cluster mass threshold based on summed pixel values. This approach significantly improved upon even the best gain-thresholding techniques. Additional analyses suggested that allowing threshold settings to vary independently for excitatory and inhibitory subfields of the STRF resulted in only marginal additional gains, at best. In summary, augmenting reverse correlation techniques with principled statistical correction choices increased prediction accuracy by over 80% for multi-unit STRFs and by over 40% for single-unit STRFs, furthering the interpretational relevance of the recovered spectrotemporal filters for auditory systems analysis. PMID:28877194
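The two-step correction can be sketched as follows: pixels surviving a gain threshold are grouped into contiguous clusters, and only clusters whose summed magnitude exceeds a cluster-mass threshold are retained. This is a hedged illustration of the general technique; the threshold settings, array shape, and the use of scipy's connected-component labeling are assumptions rather than the paper's exact procedure.

```python
import numpy as np
from scipy import ndimage

def cluster_mass_threshold(sta, gain_thresh, mass_thresh):
    """Two-step STRF cleanup: keep only clusters of contiguous supra-threshold
    pixels whose summed magnitude is large enough. Thresholds are placeholders."""
    mask = np.abs(sta) > gain_thresh          # step 1: pixelwise gain threshold
    labels, n = ndimage.label(mask)           # connected components (clusters)
    out = np.zeros_like(sta)
    for k in range(1, n + 1):
        cluster = labels == k
        if np.abs(sta[cluster]).sum() > mass_thresh:   # step 2: cluster mass
            out[cluster] = sta[cluster]
    return out

# Example: a noisy 40x60 (frequency x time) STA with one embedded subfield
rng = np.random.default_rng(1)
sta = rng.normal(0, 1, (40, 60))
sta[15:20, 25:32] += 4.0                      # genuine excitatory subfield
clean = cluster_mass_threshold(sta, gain_thresh=2.0, mass_thresh=50.0)
```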
Investigations of ionospheric sporadic Es layer using oblique sounding method
NASA Astrophysics Data System (ADS)
Minullin, R.
The characteristics of the Es layer have been studied using oblique sounding over 28 radio paths at frequencies of 34 -- 73 MHz and path lengths of 400 -- 1600 km during 30 years. Reflections from the Es layer lasting a few hours were observed. The amplitude of the reflected signal reached 1000 μV, with a registration threshold of 0.1 μV. On the distributions of the duration of reflected signals for decameter waves, the borderlines between reflected and scattered signals appeared as sharp breaks in the 60 -- 100 s range. The duration of continuous Es reflections decreased as the oblique sounding frequency increased. The distributions of the duration of reflected signals for meter waves showed sharp breaks in the 200 -- 300 s range, representing the borderline between signals reflected from meteor trails and from the Es layer. The filling coefficient for oblique sounding, as well as the Es layer occurrence probability for vertical sounding, was shown to undergo daily, seasonal, and periodic variations. The daily variations of the filling coefficient of Es signals showed clear-cut maxima at 10 -- 12 and 18 -- 20 hours and a minimum at 4 -- 6 hours on all paths in summer, and a maximum at 12 -- 14 hours in winter. The values of the filling coefficient for the Es layer declined with increasing oblique sounding frequency. The minimal values of the filling coefficient were observed in winter and early spring, while the maximal values were observed from May to August. Normalized to an averaged filling coefficient of one in summer, the coefficient reaches 0.25 at the equinoxes and does not exceed 0.12 in winter, as evidenced by the oblique sounding data. The dependence of the filling coefficient on the voltage detection threshold was approximated by a power law. The filling coefficients for the summer period showed an exponential relation with equivalent sounding frequencies. The experimental evidence was generalized in an analytical model. Using this model, the averaged Es layer filling coefficient for a particular season of the year can be forecast for a given sounding frequency, path length, and voltage threshold.
Detection and quantification system for monitoring instruments
Dzenitis, John M [Danville, CA; Hertzog, Claudia K [Houston, TX; Makarewicz, Anthony J [Livermore, CA; Henderer, Bruce D [Livermore, CA; Riot, Vincent J [Oakland, CA
2008-08-12
A method of detecting real events by obtaining a set of recent signal results; calculating measures of the noise or variation based on the set of recent signal results; calculating an expected baseline value based on the set of recent signal results; determining the sample deviation; calculating an allowable deviation by multiplying the sample deviation by a threshold factor; setting an alarm threshold at the baseline value plus or minus the allowable deviation; and determining whether the signal results exceed the alarm threshold.
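Read as pseudocode, the claimed method reduces to a rolling baseline-plus-deviation test. The sketch below assumes a median baseline, a standard-deviation noise measure, and a threshold factor k; all three concrete choices are illustrative, since the abstract leaves them open.

```python
import numpy as np

def exceeds_alarm(recent, new_value, k=3.0):
    """Sketch of the scheme described in the abstract: estimate a baseline and
    noise from recent signal results, then flag values outside
    baseline +/- k * deviation. k (the threshold factor) is a placeholder."""
    baseline = np.median(recent)            # expected baseline value
    dev = np.std(recent, ddof=1)            # sample deviation (noise measure)
    allowable = k * dev                     # allowable deviation
    return abs(new_value - baseline) > allowable

window = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2]
print(exceeds_alarm(window, 13.5))   # True: likely a real event
print(exceeds_alarm(window, 10.4))   # False: within normal variation
```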
A study of surface ozone variability over the Iberian Peninsula during the last fifty years
NASA Astrophysics Data System (ADS)
Fernández-Fernández, M. I.; Gallego, M. C.; García, J. A.; Acero, F. J.
2011-02-01
There is good evidence for an increase in the global surface level of ozone over the past century. In this work we present an analysis of 18 surface ozone series over the Iberian Peninsula, considering the target values of ozone for the protection of human health and for the protection of vegetation, as well as the information and alert thresholds established by the current European Directive on ambient air quality and cleaner air for Europe (Directive 2008/50/EC). The results show that the stations located on the Cantabrian coast exceeded neither the target value for the protection of human health nor the target value for the protection of vegetation. The information threshold was exceeded at most of the stations, while the alert threshold was exceeded at only one. The seasonal and daily evolution of ozone concentrations was as expected. A trend analysis of three surface ozone concentration indices (monthly median, monthly 98th percentile, and monthly maximum of the daily maximum 8-h mean) was performed both for the whole period of each station and for the common period from 2001 to 2007 for all months of the year. Generally, the south of the Iberian Peninsula presented increasing trends for the three indices, especially in the last six months of the year, and the north decreasing trends. Finally, a correlation analysis was performed between the daily maximum 8-h mean and both daily mean temperature and daily mean solar radiation for the whole and the common periods. For all stations, there was a significant positive association at the 5% significance level between the daily maximum 8-h mean and the two meteorological variables, of up to approximately 0.5. The spatial distribution of these association values from 2001 to 2007 showed a positive northwest-to-southeast gradient over the Iberian Peninsula.
Investigating the Effects of the Interaction Intensity in a Weak Measurement.
Piacentini, Fabrizio; Avella, Alessio; Gramegna, Marco; Lussana, Rudi; Villa, Federica; Tosi, Alberto; Brida, Giorgio; Degiovanni, Ivo Pietro; Genovese, Marco
2018-05-03
Measurements are crucial in quantum mechanics, for fundamental research as well as for applicative fields like quantum metrology, quantum-enhanced measurements, and other quantum technologies. In recent years, weak-interaction-based protocols like Weak Measurements and Protective Measurements have been experimentally realized, showing peculiar features that lead to surprising advantages in several different applications. In this work we analyze the validity range of such measurement protocols, that is, how the interaction strength affects the weak value extraction, by measuring different polarization weak values on heralded single photons. We show that, even in the weak interaction regime, the coupling intensity limits the range of achievable weak values, setting a threshold on the signal amplification effect exploited in many weak-measurement-based experiments.
Global Sensitivity Analysis of Environmental Models: Convergence, Robustness and Validation
NASA Astrophysics Data System (ADS)
Sarrazin, Fanny; Pianosi, Francesca; Khorashadi Zadeh, Farkhondeh; Van Griensven, Ann; Wagener, Thorsten
2015-04-01
Global Sensitivity Analysis aims to characterize the impact that variations in model input factors (e.g. the parameters) have on the model output (e.g. simulated streamflow). In sampling-based Global Sensitivity Analysis, the sample size has to be chosen carefully in order to obtain reliable sensitivity estimates while spending computational resources efficiently. Furthermore, insensitive parameters are typically identified through the definition of a screening threshold: the theoretical value of their sensitivity index is zero, but in a sampling-based framework they regularly take non-zero values. However, little guidance is available for these two steps in environmental modelling. The objective of the present study is to support modellers in making appropriate choices, regarding both sample size and screening threshold, so that a robust sensitivity analysis can be implemented. We performed sensitivity analysis for the parameters of three hydrological models with increasing level of complexity (Hymod, HBV and SWAT), and tested three widely used sensitivity analysis methods (Elementary Effect Test or method of Morris, Regional Sensitivity Analysis, and Variance-Based Sensitivity Analysis). We defined criteria based on a bootstrap approach to assess three different types of convergence: the convergence of the value of the sensitivity indices, of the ranking (the ordering among the parameters) and of the screening (the identification of the insensitive parameters). We investigated the screening threshold through the definition of a validation procedure. The results showed that full convergence of the value of the sensitivity indices is not necessarily needed to rank or to screen the model input factors. Furthermore, typical values of the sample sizes that are reported in the literature can be well below the sample sizes that actually ensure convergence of ranking and screening.
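One way to implement a bootstrap-based convergence check is to resample the model evaluations, recompute the sensitivity indices, and ask whether the screening outcome (which parameters fall below the threshold) is stable across replicates. The sketch below is a generic outline under that reading; the crude correlation-based index, the screening threshold, and the replicate count are all assumptions, not the paper's settings.

```python
import numpy as np

def bootstrap_screening(index_estimator, X, Y, screen_thresh=0.05, n_boot=200):
    """Resample (X, Y), recompute one sensitivity index per parameter, and
    report how consistently each parameter is screened as insensitive."""
    rng = np.random.default_rng(42)
    n = len(Y)
    screened = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)            # resample with replacement
        s = index_estimator(X[idx], Y[idx])
        screened.append(s < screen_thresh)     # insensitive flags
    screened = np.array(screened)
    # fraction of replicates agreeing with the majority call, per parameter
    agreement = np.mean(screened == (screened.mean(0) > 0.5), axis=0)
    return agreement   # near 1.0 everywhere => screening has converged

def corr_index(X, Y):
    """Crude stand-in index: squared correlation of each input with the output."""
    Xc, Yc = X - X.mean(0), Y - Y.mean()
    r = (Xc * Yc[:, None]).sum(0) / np.sqrt((Xc**2).sum(0) * (Yc**2).sum())
    return r**2

rng0 = np.random.default_rng(0)
X = rng0.random((500, 3))
Y = 3 * X[:, 0] + 0.1 * X[:, 1] + rng0.normal(0, 0.1, 500)
print(bootstrap_screening(corr_index, X, Y))
```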
NASA Astrophysics Data System (ADS)
Jiang, C.; Rumyantsev, S. L.; Samnakay, R.; Shur, M. S.; Balandin, A. A.
2015-02-01
We report on the fabrication of MoS2 thin-film transistors (TFTs) and experimental investigations of their high-temperature current-voltage characteristics. The measurements show that MoS2 devices remain functional at temperatures of at least 500 K. The temperature increase results in decreased threshold voltage and mobility. The comparison of direct current (DC) and pulse measurements shows that the DC sub-linear and super-linear output characteristics of MoS2 thin-film devices result from Joule heating and the interplay of the threshold voltage and mobility temperature dependences. At temperatures above 450 K, a kink in the drain current occurs at zero gate voltage irrespective of the threshold voltage value. This intriguing phenomenon, referred to as a "memory step," was attributed to slow relaxation processes in thin films similar to those in graphene and electron glasses. The fabricated MoS2 thin-film transistors demonstrated stable operation after two months of aging. The obtained results suggest new applications for MoS2 thin-film transistors in extreme-temperature electronics and sensors.
Rogowski, W H; Grosse, S D; Meyer, E; John, J; Palmer, S
2012-05-01
Public decision makers face demands to invest in applied research in order to accelerate the adoption of new genetic tests. However, such an investment is profitable only if the results gained from further investigations have a significant impact on health care practice. An upper limit for the value of additional information aimed at improving the basis for reimbursement decisions is given by the expected value of perfect information (EVPI). This study illustrates the significance of the EVPI concept on the basis of a probabilistic cost-effectiveness model of screening for hereditary hemochromatosis among German men. In the present example, population-based screening can barely be recommended at threshold values of 50,000 or 100,000 Euro per life-year gained, and the value of additional research that might cause this decision to be overturned is also small: at these threshold values, the EVPI in the German public health care system was ca. 500,000 and 2,200,000 Euro, respectively. An analysis of EVPI by individual parameters or groups of parameters shows that additional research on adherence to preventive phlebotomy could potentially provide the highest benefit. The potential value of further research also depends on methodological assumptions regarding the decision maker's time horizon, as well as on scenarios affecting the number of affected patients and the cost-effectiveness of screening.
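The EVPI itself has a standard Monte Carlo form: the expected value of deciding with perfect knowledge of the uncertain parameters minus the expected value of deciding now. The sketch below uses made-up net-benefit numbers for a two-action decision (screen vs. no screen), not the inputs of the German hemochromatosis model.

```python
import numpy as np

# Per-person EVPI sketch under assumed numbers: net benefit
# NB = lambda * life-years gained - cost, with uncertain quantities
# sampled as in a probabilistic sensitivity analysis.
rng = np.random.default_rng(0)
lam = 50_000                                  # threshold value, Euro per LYG
ly_gain = rng.normal(0.010, 0.006, 100_000)   # uncertain life-years gained
cost = rng.normal(400, 50, 100_000)           # uncertain incremental cost
nb_screen = lam * ly_gain - cost              # NB of screening per simulation
nb_none = np.zeros_like(nb_screen)            # NB of no screening (reference)

ev_current = max(nb_screen.mean(), nb_none.mean())   # decide now
ev_perfect = np.maximum(nb_screen, nb_none).mean()   # decide knowing the truth
evpi = ev_perfect - ev_current
print(f"per-person EVPI: {evpi:.2f} Euro")
```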
Value of information and pricing new healthcare interventions.
Willan, Andrew R; Eckermann, Simon
2012-06-01
Previous applications of value-of-information methods to optimal clinical trial design have predominantly taken a societal decision-making perspective, implicitly assuming that healthcare costs are covered through public expenditure and trial research is funded by government or donation-based philanthropic agencies. In this paper, we consider the interaction between the interrelated perspectives of a societal decision maker (e.g. the National Institute for Health and Clinical Excellence [NICE] in the UK) charged with the responsibility for approving new health interventions for reimbursement and the company that holds the patent for a new intervention. We establish optimal decision making from societal and company perspectives, allowing for trade-offs between the value and cost of research and the price of the new intervention. Given the current level of evidence, there exists a maximum (threshold) price acceptable to the decision maker. Submissions for approval with prices above this threshold will be refused. Given the current level of evidence and the decision maker's threshold price, there exists a minimum (threshold) price acceptable to the company. If the decision maker's threshold price exceeds the company's, then current evidence is sufficient, since any price between the thresholds is acceptable to both. On the other hand, if the decision maker's threshold price is lower than the company's, then no price is acceptable to both and the company's optimal strategy is to commission additional research. The methods are illustrated using a recent example from the literature.
Characterizing Decision-Analysis Performances of Risk Prediction Models Using ADAPT Curves.
Lee, Wen-Chung; Wu, Yun-Chun
2016-01-01
The area under the receiver operating characteristic curve is a widely used index to characterize the performance of diagnostic tests and prediction models. However, the index does not explicitly acknowledge the utilities of risk predictions. Moreover, for most clinical settings, what counts is whether a prediction model can guide therapeutic decisions in a way that improves patient outcomes, rather than simply update probabilities. Based on decision theory, the authors propose an alternative index, the "average deviation about the probability threshold" (ADAPT). An ADAPT curve (a plot of ADAPT value against the probability threshold) neatly characterizes the decision-analysis performance of a risk prediction model. Several prediction models can be compared for their ADAPT values at a chosen probability threshold, for a range of plausible threshold values, or for the whole ADAPT curves. This should greatly facilitate the selection of diagnostic tests and prediction models.
Relationship between surface and free tropospheric ozone in the Western U.S.
Jaffe, Dan
2011-01-15
Ozone is an important air pollutant that affects lung function. In the U.S., the EPA has reduced the allowable O3 concentrations several times over the last few decades. This puts greater emphasis on understanding the interannual variability and the contributions to surface O3 from all sources. We have examined O3 data from 11 rural CASTNET sites in the western U.S. for the period 1995-2009. The 11 surface sites show a similar seasonal cycle and generally a good correlation in the deseasonalized monthly means, indicating that there are large-scale influences on O3 that operate across the entire western U.S. These sites also show a good correlation between site elevation and annual mean O3, indicating a significant contribution from the free troposphere. We examined the number of exceedance days for each site, defined as days when the Maximum Daily 8-h Average (MDA8) exceeds a threshold value. Over this time period, more than half of these sites exceeded an MDA8 threshold of 70 ppbv at least 4 times per year, and all sites exceeded a threshold value of 65 ppbv at least 4 times per year. The transition to lower threshold values substantially increases the number of exceedance days, especially during spring, reflecting the fact that background O3 peaks during spring. We next examined the correlation between surface O3 and free tropospheric O3 in the same region, as measured by routine balloon launches from Boulder, CO. Using ozone measured by the balloon sensor in the range of 3-6 km above sea level, we find statistically significant correlations between surface and free tropospheric O3 in spring and summer months using monthly means, daily MDA8 values, and the number of surface exceedance days. We suggest that during spring this correlation reflects variations in the flux of O3 transport from the free troposphere to the surface. In summer, free tropospheric and surface concentrations of O3 and the number of exceedance days are all significantly correlated with emissions from biomass burning in the western U.S. This indicates that wildfires significantly increase the number of exceedance days across the western U.S.
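Counting exceedance days from hourly data is mechanical once MDA8 is defined. The sketch below uses a simplified MDA8 (8-h windows confined to a single calendar day, whereas the regulatory definition lets windows run into the next day) and synthetic hourly values.

```python
import numpy as np

def mda8(hourly_o3):
    """Maximum daily 8-h average from 24 hourly ozone values (ppbv).
    Simplified: uses the 17 complete 8-h windows starting within the day."""
    windows = [np.mean(hourly_o3[i:i + 8]) for i in range(17)]
    return max(windows)

def exceedance_days(daily_hourly, threshold=70.0):
    """Count days whose MDA8 exceeds the threshold (70 or 65 ppbv in the paper)."""
    return sum(mda8(day) > threshold for day in daily_hourly)

# Toy year: 365 days of synthetic hourly ozone
rng = np.random.default_rng(3)
year = [np.clip(rng.normal(45, 12, 24), 0, None) for _ in range(365)]
print(exceedance_days(year, 70.0), exceedance_days(year, 65.0))
```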
Paul, Shirshendu; Russakow, Daniel; Rodgers, Tyler; Sarkar, Kausik; Cochran, Michael; Wheatley, Margaret
2013-01-01
The stabilizing encapsulation of a microbubble based ultrasound contrast agent (UCA) critically affects its acoustic properties. Polymers, which behave differently from commonly used materials—e.g. lipids or proteins—for the monolayer encapsulation, hold potential for better stability and control over encapsulation properties. Air-filled microbubbles coated with Poly (D, L-lactide) (PLA) are characterized here using in vitro acoustic experiments and several models of encapsulation. The interfacial rheological properties of the encapsulation are determined according to each of these models using attenuation of ultrasound through a suspension of these microbubbles. Then the model predictions are compared with scattered nonlinear—sub- and second harmonic—responses. For this microbubble population (average diameter 1.9 μm), the peak in attenuation measurement indicates a weighted average resonance frequency of 2.5–3 MHz, which, in contrast to other encapsulated microbubbles, is lower than the resonance frequency of a free bubble of similar size (diameter 1.9 μm). This apparently contradictory result stems from the extremely low surface dilatational elasticity (around 0.01–0.07 N/m) and the reduced surface tension of the PLA encapsulation as well as the polydispersity of the bubble population. All models considered here are shown to behave similarly even in the nonlinear regime because of the low value of the surface dilatational elasticity. Pressure dependent scattering measurements at two different excitation frequencies (2.25 and 3 MHz) show strongly non-linear behavior with 25–30 dB and 5–20 dB enhancements in fundamental and second-harmonic responses respectively for a concentration of 1.33 μg/mL of suspension. Subharmonic responses are registered above a relatively low generation threshold of 100–150 kPa with up to 20 dB enhancement beyond that pressure. Numerical predictions from all models show good agreement with the experimentally measured fundamental response, but not with the second harmonic response. The characteristic features of subharmonic response and the steady response beyond the threshold are matched well by model predictions. However, prediction of the threshold value depends on property values and the size distribution. The variation in size distribution from sample to sample leads to variation in estimated encapsulation property values—the lowest estimated value of surface dilatational viscosity better predicts the subharmonic threshold. PMID:23643050
Decomposition of fuzzy soft sets with finite value spaces.
Feng, Feng; Fujita, Hamido; Jun, Young Bae; Khan, Madad
2014-01-01
The notion of fuzzy soft sets is a hybrid soft computing model that integrates both gradualness and parameterization methods in harmony to deal with uncertainty. The decomposition of fuzzy soft sets is of great importance in both theory and practical applications with regard to decision making under uncertainty. This study aims to explore decomposition of fuzzy soft sets with finite value spaces. Scalar uni-product and int-product operations of fuzzy soft sets are introduced and some related properties are investigated. Using t-level soft sets, we define level equivalent relations and show that the quotient structure of the unit interval induced by level equivalent relations is isomorphic to the lattice consisting of all t-level soft sets of a given fuzzy soft set. We also introduce the concepts of crucial threshold values and complete threshold sets. Finally, some decomposition theorems for fuzzy soft sets with finite value spaces are established, illustrated by an example concerning the classification and rating of multimedia cell phones. The obtained results extend some classical decomposition theorems of fuzzy sets, since every fuzzy set can be viewed as a fuzzy soft set with a single parameter.
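The t-level construction is easy to state concretely: for each parameter, keep the crisp set of objects whose membership grade is at least t. A minimal sketch with a hypothetical two-parameter fuzzy soft set over three phones (the data are invented for illustration):

```python
# A fuzzy soft set F: parameters -> fuzzy sets over a universe U of phones.
fuzzy_soft = {                      # hypothetical multimedia-phone ratings
    "camera":  {"p1": 0.9, "p2": 0.4, "p3": 0.7},
    "battery": {"p1": 0.3, "p2": 0.8, "p3": 0.6},
}

def t_level(fss, t):
    """t-level soft set: per parameter, the crisp set {x : membership(x) >= t}."""
    return {param: {x for x, mu in fs.items() if mu >= t}
            for param, fs in fss.items()}

print(t_level(fuzzy_soft, 0.5))
# {'camera': {'p1', 'p3'}, 'battery': {'p2', 'p3'}}
```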
Intelligent person identification system using stereo camera-based height and stride estimation
NASA Astrophysics Data System (ADS)
Ko, Jung-Hwan; Jang, Jae-Hun; Kim, Eun-Soo
2005-05-01
In this paper, a stereo camera-based intelligent person identification system is suggested. In the proposed method, the face area of the moving target person is extracted from the left image of the input stereo image pair by thresholding in the YCbCr color model; by correlating this segmented face area with the right input image, the location coordinates of the target face are acquired, and these values are used to control the pan/tilt system through a modified PID-based recursive controller. Also, by using the geometric parameters between the target face and the stereo camera system, the vertical distance between the target and the stereo camera system can be calculated through a triangulation method. Using this calculated vertical distance and the pan and tilt angles, the target's real position in world space can be acquired, and from it the target's height and stride values can finally be extracted. Experiments with video images of 16 moving persons show that a person could be identified with these extracted height and stride parameters.
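The distance step rests on classic stereo triangulation (depth = focal length x baseline / disparity), after which height follows from the tilt angles subtended by the target at that depth. The sketch below assumes a pinhole model and illustrative rig parameters; it is not the paper's calibration.

```python
import math

def target_distance(baseline_m, focal_px, disparity_px):
    """Classic stereo triangulation: depth Z = f * B / d (pinhole model)."""
    return focal_px * baseline_m / disparity_px

def target_height(depth_m, tilt_top_deg, tilt_bottom_deg):
    """Height of a person spanning two tilt angles (measured from horizontal,
    downward negative) at a known horizontal depth."""
    return depth_m * (math.tan(math.radians(tilt_top_deg)) -
                      math.tan(math.radians(tilt_bottom_deg)))

z = target_distance(baseline_m=0.12, focal_px=800, disparity_px=24)  # 4.0 m
print(z, target_height(z, tilt_top_deg=5.0, tilt_bottom_deg=-19.0))  # ~1.73 m
```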
Comparison of Various Anthropometric Indices as Risk Factors for Hearing Impairment in Asian Women
Lee, Kyu Yup; Choi, Eun Woo; Do, Jun Young
2015-01-01
Background The objective of the present study was to examine the associations between various anthropometric measures and metabolic syndrome and hearing impairment in Asian women. Methods We identified 11,755 women who underwent voluntary routine health checkups at Yeungnam University Hospital between June 2008 and April 2014. Among these patients, 2,485 participants were <40 years old, and 1,072 participants lacked information regarding their laboratory findings or hearing and were therefore excluded. In total, 8,198 participants were recruited into our study. Results The AUROC value for metabolic syndrome was 0.790 for the waist-to-hip ratio (WHR). The cutoff value was 0.939. The sensitivity and specificity for predicting metabolic syndrome were 72.7% and 71.7%, respectively. The AUROC value for hearing loss was 0.758 for WHR. The cutoff value was 0.932. The sensitivity and specificity for predicting hearing loss were 65.8% and 73.4%, respectively. The WHR had the highest AUC and was the best predictor of metabolic syndrome and hearing loss. Univariate and multivariate linear regression analyses showed that WHR levels were positively associated with four hearing thresholds, including the averaged hearing threshold and the low-, middle-, and high-frequency thresholds. In addition, multivariate logistic analysis revealed that those with a high WHR had a 1.347-fold increased risk of hearing loss compared with participants with a low WHR. Conclusion Our results demonstrated that WHR may be a surrogate marker for predicting the risk of hearing loss resulting from metabolic syndrome. PMID:26575369
WE-H-207A-06: Hypoxia Quantification in Static PET Images: The Signal in the Noise
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keller, H; Yeung, I; Milosevic, M
2016-06-15
Purpose: Quantification of hypoxia from PET images is of considerable clinical interest. In the absence of dynamic PET imaging, the hypoxic fraction (HF) of a tumor has to be estimated from voxel values of activity concentration of a radioactive hypoxia tracer. This work is part of an effort to standardize quantification of the tumor hypoxic fraction from PET images. Methods: A simple hypoxia imaging model of the tumor was developed. The distribution of the tracer activity was described as the sum of two probability distributions, one for the normoxic (and necrotic) voxels, the other for the hypoxic voxels. The widths of the distributions arise from variability of the transport, tumor tissue inhomogeneity, tracer binding kinetics, and PET image noise. Quantification of HF was performed for various levels of variability using two different methodologies: (a) classification thresholds between normoxic and hypoxic voxels based on a non-hypoxic surrogate (muscle), and (b) estimation of the (posterior) probability distributions based on maximum-likelihood optimization, which does not require a surrogate. Data from the hypoxia imaging model and from 27 cervical cancer patients enrolled in a FAZA PET study were analyzed. Results: In the model, where the true value of HF is known, thresholds usually underestimate the value for large variability. For the patients, a significant uncertainty of the HF values (an average intra-patient range of 17%) was caused by spatial non-uniformity of image noise, which is a hallmark of all PET images. Maximum likelihood estimation (MLE) is able to directly optimize the weights of both distributions, but may suffer from poor optimization convergence. For some patients, MLE-based HF values showed significant differences from threshold-based HF values. Conclusion: HF values depend critically on the magnitude of the different sources of tracer uptake variability. A measure of confidence should also be reported.
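The surrogate-free route can be sketched with an off-the-shelf two-component Gaussian mixture: fit normoxic and hypoxic modes to the voxel activity distribution and read the hypoxic fraction off the weight of the high-uptake component. The distributional choices and parameter values below are assumptions for illustration, not the study's model.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic voxel activity: 80% normoxic, 20% hypoxic (true HF = 0.20)
rng = np.random.default_rng(7)
normoxic = rng.normal(1.0, 0.15, 8000)        # tumor-to-muscle ratio ~ 1
hypoxic = rng.normal(1.6, 0.25, 2000)         # elevated tracer uptake
voxels = np.r_[normoxic, hypoxic].reshape(-1, 1)

gmm = GaussianMixture(n_components=2, n_init=5, random_state=0).fit(voxels)
hf = gmm.weights_[np.argmax(gmm.means_.ravel())]   # weight of hypoxic mode
print(f"estimated hypoxic fraction: {hf:.2f}")
```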
Global Precipitation at One-Degree Daily Resolution From Multi-Satellite Observations
NASA Technical Reports Server (NTRS)
Huffman, George J.; Adler, Robert F.; Morrissey, Mark M.; Curtis, Scott; Joyce, Robert; McGavock, Brad; Susskind, Joel
2000-01-01
The One-Degree Daily (1DD) technique is described for producing globally complete daily estimates of precipitation on a 1 deg x 1 deg lat/long grid from currently available observational data. Where possible (40 deg N-40 deg S), the Threshold-Matched Precipitation Index (TMPI) provides precipitation estimates in which the 3-hourly infrared brightness temperatures (IR Tb) are thresholded and all "cold" pixels are given a single precipitation rate. This approach is an adaptation of the Geostationary Operational Environmental Satellite (GOES) Precipitation Index (GPI), but for the TMPI the IR Tb threshold and conditional rain rate are set locally by month from Special Sensor Microwave/Imager (SSM/I)-based precipitation frequency and the Global Precipitation Climatology Project (GPCP) satellite-gauge (SG) combined monthly precipitation estimate, respectively. At higher latitudes the 1DD features a rescaled daily Television Infrared Observation Satellite (TIROS) Operational Vertical Sounder (TOVS) precipitation. The frequency of rain days in the TOVS is scaled down to match that in the TMPI at the data boundaries, and the resulting non-zero TOVS values are scaled locally to sum to the SG (which is a globally complete monthly product). The time series of the daily 1DD global images shows good continuity in time and across the data boundaries. Various examples are shown to illustrate uses. Validation for individual grid-box values shows a very high root-mean-square error, but it improves quickly when users perform time/space averaging according to their own requirements.
Antunes, Amanda H; Alberton, Cristine L; Finatto, Paula; Pinto, Stephanie S; Cadore, Eduardo L; Zaffari, Paula; Kruel, Luiz F M
2015-01-01
Maximal tests conducted on land are not suitable for the prescription of aquatic exercises, which makes it difficult to optimize the intensity of water aerobics classes. The aim of the present study was to evaluate the maximal and anaerobic-threshold cardiorespiratory responses to 6 water aerobics exercises. Volunteers performed 3 of the exercises in the sagittal plane and 3 in the frontal plane. Twelve active female volunteers (aged 24 ± 2 years) performed 6 maximal progressive test sessions. Throughout the exercise tests, we measured heart rate (HR) and oxygen consumption (VO2). We randomized all sessions with a minimum interval of 48 hr between sessions. For statistical analysis, we used repeated-measures 1-way analysis of variance. Regarding the maximal responses, for peak VO2, abductor hop and jumping jacks (JJ) showed significantly lower values than frontal kick and cross-country skiing (CCS; p < 0.001; partial η² = 0.509), while for peak HR, JJ showed significantly lower responses compared with stationary running and CCS (p < 0.001; partial η² = 0.401). At anaerobic threshold intensity expressed as a percentage of the maximum values, no statistically significant differences were found among exercises. Cardiorespiratory responses are directly associated with the muscle mass involved in the exercise. Thus, it is worth emphasizing the importance of performing a maximal test that is specific to the analyzed exercise, so that the intensity prescription can be safer and valid.
Development of an epiphyte indicator of nutrient enrichment ...
Metrics of epiphyte load on macrophytes were evaluated for use as quantitative biological indicators of nutrient impacts in estuarine waters, based on a review and analysis of the literature on epiphytes and macrophytes, primarily seagrasses, but including some brackish and freshwater rooted macrophyte species. An approach is presented that empirically derives threshold epiphyte loads likely to cause specified levels of decrease in macrophyte response metrics such as biomass, shoot density, percent cover, production, and growth. Data from 36 studies of 10 macrophyte species were pooled to derive relationships between epiphyte load and 25% and 50% seagrass response (decline) levels, which are proposed as the primary basis for establishing critical threshold values. Given multiple sources of variability in the response data, threshold ranges based on the range of values falling between the median and the 75th quantiles of observations at a given seagrass response level are proposed, rather than single critical point values. Four epiphyte load threshold categories (low, moderate, high, very high) are proposed. Comparison of the epiphyte loads associated with 25% and 50% reductions in light to macrophytes suggests that the threshold ranges are realistic both in terms of the principal mechanism of impact to macrophytes and in terms of the magnitude of the resulting impacts expressed by the macrophytes. Some variability in response levels was observed among
Chimera Type Behavior in Nonlocal Coupling System with Two Different Inherent Frequencies
NASA Astrophysics Data System (ADS)
Lin, Larry; Li, Ping-Cheng; Tseng, Hseng-Che
2014-03-01
From the research of Kuramoto and Strogatz, arrays of identical oscillators can display a remarkable pattern, named the chimera state, in which phase-locked oscillators coexist with drifting ones in nonlocally coupled oscillator systems. In this study we consider, further, two groups of oscillators with different inherent frequencies arranged in a ring. When the difference of the inherent frequencies is within a specific parameter range, the nonlocally coupled system shows two distinct chimera states. When the parameter value exceeds some threshold value, the two chimera states disappear. The two states show different features. The statistical dynamic behavior of the system can be described by Kuramoto theory.
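A minimal numerical setup for a ring of nonlocally coupled phase oscillators with two inherent frequencies is sketched below, in the Kuramoto-with-phase-lag form commonly used in chimera studies. The kernel shape, phase lag, and frequency offset are illustrative assumptions, not the authors' parameter values.

```python
import numpy as np

# Ring of N phase oscillators; each couples to neighbors within radius R.
N, R, K, alpha, dt = 256, 40, 1.0, 1.46, 0.05
omega = np.where(np.arange(N) < N // 2, 0.0, 0.15)   # two frequency groups
theta = 2 * np.pi * np.random.default_rng(0).random(N)

idx = np.arange(N)
dist = np.minimum(np.abs(idx[:, None] - idx[None, :]),
                  N - np.abs(idx[:, None] - idx[None, :]))
G = (dist <= R).astype(float)                   # nonlocal top-hat kernel
G /= G.sum(1, keepdims=True)

for _ in range(4000):                           # explicit Euler integration
    coupling = (G * np.sin(theta[None, :] - theta[:, None] - alpha)).sum(1)
    theta = theta + dt * (omega + K * coupling)

# Local order parameter: near 1 in phase-locked regions, lower for drifters
r_local = np.abs((G * np.exp(1j * theta[None, :])).sum(1))
```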
Effect of a preventive vaccine on the dynamics of HIV transmission
NASA Astrophysics Data System (ADS)
Gumel, A. B.; Moghadas, S. M.; Mickens, R. E.
2004-12-01
A deterministic mathematical model for the transmission dynamics of HIV infection in the presence of a preventive vaccine is considered. Although the equilibria of the model could not be expressed in closed form, their existence and the threshold conditions for their stability are theoretically investigated. It is shown that the disease-free equilibrium is locally asymptotically stable if the basic reproductive number R < 1 (thus, HIV disease can be eradicated from the community) and unstable if R > 1 (leading to the persistence of HIV within the community). A robust, positivity-preserving, non-standard finite-difference method is constructed and used to solve the model equations. In addition to showing that the anti-HIV vaccine coverage level and the vaccine-induced protection are critically important in reducing the threshold quantity R, our study predicts the minimum threshold values of vaccine coverage and efficacy needed to eradicate HIV from the community.
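The role of coverage and efficacy in reducing the threshold quantity can be illustrated with the textbook relation R = R0(1 - coverage x efficacy); the paper's actual reproductive number has a more elaborate form, so treat this purely as a sketch of the threshold logic.

```python
def reproduction_number(r0, coverage, efficacy):
    """Textbook vaccinated reproduction number: R = R0 * (1 - phi * epsilon)."""
    return r0 * (1.0 - coverage * efficacy)

def critical_coverage(r0, efficacy):
    """Minimum vaccine coverage driving R below 1, if achievable."""
    phi = (1.0 - 1.0 / r0) / efficacy
    return phi if phi <= 1.0 else None   # None: efficacy too low to eradicate

print(reproduction_number(r0=2.5, coverage=0.8, efficacy=0.9))  # 0.7 < 1
print(critical_coverage(r0=2.5, efficacy=0.9))                  # ~0.667
```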
Estimating the exceedance probability of rain rate by logistic regression
NASA Technical Reports Server (NTRS)
Chiu, Long S.; Kedem, Benjamin
1990-01-01
Recent studies have shown that the fraction of an area with rain intensity above a fixed threshold is highly correlated with the area-averaged rain rate. To estimate the fractional rainy area, a logistic regression model, which estimates the conditional probability that rain rate over an area exceeds a fixed threshold given the values of related covariates, is developed. The problem of dependency in the data in the estimation procedure is bypassed by the method of partial likelihood. Analyses of simulated scanning multichannel microwave radiometer and observed electrically scanning microwave radiometer data during the Global Atlantic Tropical Experiment period show that the use of logistic regression in pixel classification is superior to multiple regression in predicting whether rain rate at each pixel exceeds a given threshold, even in the presence of noisy data. The potential of the logistic regression technique in satellite rain rate estimation is discussed.
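The core idea, modeling the conditional exceedance probability with a logistic link, can be sketched with standard tooling. The covariates, coefficients, and label-generating mechanism below are synthetic stand-ins, and the sketch ignores the data dependency that the paper handles via partial likelihood.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Model P(rain rate > threshold | covariates) with a logistic link.
rng = np.random.default_rng(5)
n = 5000
tb = rng.normal(250, 15, n)                 # brightness temperature (K)
neighbor_frac = rng.random(n)               # fraction of rainy neighbor pixels
logit = -0.15 * (tb - 250) + 3.0 * (neighbor_frac - 0.5)
exceeds = rng.random(n) < 1 / (1 + np.exp(-logit))   # synthetic labels

X = np.column_stack([tb, neighbor_frac])
model = LogisticRegression().fit(X, exceeds)
p = model.predict_proba(np.array([[235.0, 0.8]]))[0, 1]
print(f"P(rain rate exceeds threshold) = {p:.2f}")
```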
NASA Astrophysics Data System (ADS)
L'vov, Victor A.; Kosogor, Anna
2016-09-01
The application of a magnetic field leads to spatially inhomogeneous magnetostriction of twinned ferromagnetic martensite. When the increasing field and magnetostrictive strain reach certain threshold values, the motion of twin boundaries and magnetically induced reorientation (MIR) of twinned martensite start. The MIR leads to giant magnetically induced deformation of twinned martensite. In the present article, the threshold field (TF) and the temperature range of observability of MIR were calculated for Ni-Mn-Ga martensite assuming that the threshold strain (TS) is temperature-independent. The calculations show that if the TS is of the order of 10⁻⁴, the TF strongly depends on temperature and MIR can be observed only above a limiting temperature (~220 K). If the TS is of the order of 10⁻⁶, the TF weakly depends on temperature and MIR can be observed at extremely low temperatures. The obtained theoretical results are in agreement with available experimental data.
Enhancement of the Daytime MODIS Based Aircraft Icing Potential Algorithm Using Mesoscale Model Data
2006-03-01
[Front-matter fragments from the report's figure and table lists: ROC curves using 3-hour PIREPs and the Alexander Tmap, with symbols plotted at the 0.5 threshold values (Figures 25-26); results using T icing potential values from the Alexander Tmap and 3-hour PIREPs (Table 4).]
Measurand transient signal suppressor
NASA Technical Reports Server (NTRS)
Bozeman, Richard J., Jr. (Inventor)
1994-01-01
A transient signal suppressor for use in a control system that is adapted to respond to a change in a physical parameter whenever it crosses a predetermined threshold value in a selected direction of increasing or decreasing values and is sustained for a selected discrete time interval is presented. The suppressor includes a sensor transducer for sensing the physical parameter and generating an electrical input signal whenever the sensed physical parameter crosses the threshold level in the selected direction. A manually operated switch is provided for adapting the suppressor to produce an output drive signal whenever the physical parameter crosses the threshold value in the selected direction of increasing or decreasing values. A time-delay circuit is selectively adjustable to suppress the transducer input signal for a preselected one of a plurality of available discrete suppression times, producing an output signal only if the input signal is sustained for longer than the selected suppression time. An electronic gate is coupled to receive the transducer input signal and the timer output signal and to produce an output drive signal for energizing a control relay whenever the transducer input is a non-transient signal sustained beyond the selected time interval.
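In software terms this is debounce logic: the drive signal fires only when the threshold crossing persists for the selected interval. The sketch below counts samples rather than running an analog timer; the hold length and threshold are placeholders.

```python
def suppressor(samples, threshold, rising=True, hold_samples=5):
    """Emit a drive signal only when the parameter stays past the threshold,
    in the selected direction, for a sustained interval (counted in samples)."""
    run = 0
    for x in samples:
        crossed = x > threshold if rising else x < threshold
        run = run + 1 if crossed else 0
        if run >= hold_samples:        # non-transient: sustained past threshold
            return True
    return False                       # transient spikes are suppressed

spike = [0, 0, 9, 0, 0, 0, 0, 0, 0]            # brief transient
sustained = [0, 9, 9, 9, 9, 9, 0, 0, 0]        # genuine event
print(suppressor(spike, 5), suppressor(sustained, 5))   # False True
```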
Ding, Jiule; Xing, Wei; Chen, Jie; Dai, Yongming; Sun, Jun; Li, Dengfa
2014-01-21
To explore the influence of the signal-to-noise ratio (SNR) on the analysis of clear cell renal cell carcinoma (CCRCC) using DWI with multiple b values. The images of 17 cases with CCRCC were analyzed, including 17 masses and 9 pure cysts. The signal intensity of the cysts and masses was measured separately on DWI for each b value. The minimal SNR, taken as the threshold, was recorded when the signal curve manifested as a single exponential line. The SNR of the CCRCC was calculated on DWI for each b value and compared with the threshold by independent two-sample t test. The signal on DWI decreased with increasing b factors for both pure cysts and CCRCC. The threshold was 1.29 ± 0.17, and the signal intensity of the cysts on DWI with multiple b values followed a single exponential line when b ≤ 800 s/mm². For the CCRCC, the SNR was similar to the threshold when b = 1000 s/mm² (t = 0.40, P = 0.69) and lower when b = 1200 s/mm² (t = -2.38, P = 0.03). The SNR should be sufficient for quantitative analysis of DWI, and the maximal b value is 1000 s/mm² for CCRCC.
Jacobs, Julia; Zijlmans, Maeike; Zelmann, Rina; Olivier, André; Hall, Jeffery; Gotman, Jean; Dubeau, François
2013-01-01
Purpose: Electrical stimulation (ES) is used during intracranial electroencephalography (EEG) investigations to delineate epileptogenic areas and seizure-onset zones (SOZs) by provoking afterdischarges (ADs) or the patient's typical seizure. High-frequency oscillations (HFOs; ripples, 80-250 Hz; fast ripples, 250-500 Hz) are linked to seizure onset. This study investigates whether interictal HFOs are more frequent in areas with a low threshold for provoking ADs or seizures. Methods: Intracranial EEG studies were filtered at 500 Hz and sampled at 2,000 Hz. HFOs were visually identified. Twenty patients underwent ES with gradually increasing currents. Results were interpreted as agreeing or disagreeing with the intracranial study (clinical-EEG seizure onset defined the SOZ). Current thresholds provoking an AD or seizure were correlated with the rate of HFOs of each channel. Results: ES provoked a seizure in 12 and ADs in 19 patients. Sixteen patients showed an ES response inside the SOZ, and 10 had additional areas with ADs. The response was more specific for mesiotemporal than for neocortical channels. HFO rates were negatively correlated with thresholds for ES responses, especially in neocortical regions; areas with a low threshold and a high HFO rate were colocalized even outside the SOZ. Discussion: Areas showing epileptic HFOs colocalize with those reacting to ES. HFOs may represent a pathologic correlate of regions showing an ES response; both phenomena suggest a more widespread epileptogenicity. PMID:19845730
Ngwa, Gideon A; Teboh-Ewungkem, Miranda I
2016-01-01
A deterministic ordinary differential equation model for the dynamics and spread of Ebola Virus Disease is derived and studied. The model contains quarantine and nonquarantine states and can be used to evaluate transmission both in treatment centres and in the community. Possible sources of exposure to infection, including cadavers of Ebola Virus victims, are included in the model derivation and analysis. Our model's results show that there exists a threshold parameter, R0, with the property that when its value is above unity, an endemic equilibrium exists whose value and size are determined by the size of this threshold parameter, and when its value is less than unity, the infection does not spread into the community. The equilibrium state, when it exists, is locally and asymptotically stable with oscillatory returns to the equilibrium point. The basic reproduction number, R0, is shown to be strongly dependent on the initial response of the emergency services to suspected cases of Ebola infection. When intervention measures such as quarantining are instituted fully at the beginning, the value of the reproduction number reduces and any further infections can only occur at the treatment centres. Effective control measures, to reduce R0 to values below unity, are discussed.
Sazykina, Tatiana G; Kryshev, Alexander I
2016-12-01
Lower threshold dose rates and confidence limits are quantified for lifetime radiation effects in mammalian animals from internally deposited alpha-emitting radionuclides. Extensive datasets on effects from internal alpha-emitters were compiled from the International Radiobiological Archives. In total, the compiled database includes 257 records, which were analyzed by means of non-parametric order statistics. The generic lower threshold for alpha-emitters in mammalian animals (combined datasets) is 6.6·10⁻⁵ Gy day⁻¹. Thresholds for individual alpha-emitting elements differ considerably: plutonium and americium, 2.0·10⁻⁵ Gy day⁻¹; radium, 2.1·10⁻⁴ Gy day⁻¹. The threshold for chronic low-LET radiation was previously estimated at 1·10⁻³ Gy day⁻¹. For low exposures, the following values of the alpha radiation weighting factor w_R for internally deposited alpha-emitters in mammals are quantified: w_R(α) = 15 as a generic value for the whole group of alpha-emitters; w_R(Pu) = 50 for plutonium; w_R(Am) = 50 for americium; w_R(Ra) = 5 for radium. These values are proposed to serve as radiation weighting factors in calculations of equivalent doses to non-human biota. The lower threshold dose rate for long-lived mammals (dogs) is significantly lower than the threshold for short-lived mammals (mice): 2.7·10⁻⁵ Gy day⁻¹ and 2.0·10⁻⁴ Gy day⁻¹, respectively. The difference in thresholds exactly reflects the relationship between the natural longevity of these two species. A graded scale of severity of lifetime radiation effects in mammals was developed, based on the compiled datasets. Placed on this severity scale, the effects of internal alpha-emitters fall in zones of considerably lower dose rates than effects of the same severity caused by low-LET radiation. RBE values calculated for effects of equal severity were found to depend on the intensity of chronic exposure: different RBE values are characteristic of low, moderate, and high lifetime exposures (30, 70, and 13, respectively). The results of the study provide a basis for selecting correct values of radiation weighting factors in dose assessment for non-human biota. Copyright © 2016 Elsevier Ltd. All rights reserved.
Fransz, Duncan P; Huurnink, Arnold; de Boode, Vosse A; Kingma, Idsart; van Dieën, Jaap H
2016-02-08
We aimed to provide insight into how threshold selection affects time to stabilization (TTS) and its reliability, to support the selection of methods to determine TTS. Eighty-two elite youth soccer players performed six single-leg drop-jump landings. The TTS was calculated based on four processed signals: the raw ground reaction force (GRF) signal (RAW), a moving root-mean-square window (RMS), a sequential average (SA), or an unbounded third-order polynomial fit (TOP). For each trial and processing method, a wide range of thresholds was applied. Per threshold, the reliability of the TTS was assessed through intra-class correlation coefficients (ICCs) for the vertical (V), anteroposterior (AP), and mediolateral (ML) directions of force. Low thresholds resulted in a sharp increase of TTS values and of the percentage of trials in which TTS exceeded trial duration. The TTS and ICC were essentially similar for RAW and RMS in all directions; ICCs were mostly 'insufficient' (<0.4) to 'fair' (0.4-0.6) over the entire range of thresholds. The SA signals resulted in the most stable ICC values across thresholds, being 'substantial' (>0.8) for V and 'moderate' (0.6-0.8) for AP and ML. The ICCs for TOP were 'substantial' for V, 'moderate' for AP, and 'fair' for ML. The present findings did not reveal an optimal threshold to assess TTS in elite youth soccer players following a single-leg drop-jump landing. Irrespective of threshold selection, the SA and TOP methods yielded sufficiently reliable TTS values, while for RAW and RMS the reliability was insufficient to differentiate between players. Copyright © 2016 Elsevier Ltd. All rights reserved.
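As one concrete reading of the moving-RMS variant: TTS is the first time after which the windowed RMS of the force deviation stays below the threshold for the remainder of the trial. The window length, the reference used for the deviation, and the threshold itself are assumptions in the sketch below, not the study's settings.

```python
import numpy as np

def time_to_stabilization(grf, fs, threshold, window=0.25):
    """First time (s) after which the moving RMS of the mean-removed GRF
    stays below the threshold until the end of the trial."""
    x = grf - np.mean(grf[-int(fs):])            # deviation from final second
    w = int(window * fs)
    rms = np.sqrt(np.convolve(x ** 2, np.ones(w) / w, mode="valid"))
    below = rms < threshold
    for i in range(len(below)):
        if below[i:].all():                      # stays below to trial end
            return i / fs
    return None                                  # never stabilizes: TTS > trial

# Example: a 5 s trial at 1 kHz settling toward its final value
fs = 1000
t = np.arange(5 * fs) / fs
grf = (700 + 300 * np.exp(-3 * t) * np.sin(40 * t)
       + np.random.default_rng(2).normal(0, 5, len(t)))
print(time_to_stabilization(grf, fs, threshold=15.0))
```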
The impact of capillary backpressure on spontaneous counter-current imbibition in porous media
NASA Astrophysics Data System (ADS)
Foley, Amir Y.; Nooruddin, Hasan A.; Blunt, Martin J.
2017-09-01
We investigate the impact of capillary backpressure on spontaneous counter-current imbibition. For such displacements in strongly water-wet systems, the non-wetting phase is forced out through the inlet boundary as the wetting phase imbibes into the rock, creating a finite capillary backpressure. Under the assumption that capillary backpressure depends on the water saturation applied at the inlet boundary of the porous medium, its impact is determined using the continuum modelling approach by varying the imposed inlet saturation in the analytical solution. We present analytical solutions for the one-dimensional incompressible horizontal displacement of a non-wetting phase by a wetting phase in a porous medium. There exists an inlet saturation value above which any change in capillary backpressure has a negligible impact on the solutions. Above this threshold value, imbibition rates and front positions are largely invariant. A method for identifying this inlet saturation is proposed using an analytical procedure and we explore how varying multiphase flow properties affects the analytical solutions and this threshold saturation. We show the value of this analytical approach through the analysis of previously published experimental data.
Bai, Xiaohui; Zhi, Xinghua; Zhu, Huifeng; Meng, Mingqun; Zhang, Mingde
2015-01-01
This study investigates the effect of chloramine residual on bacterial growth and regrowth and the relationship between heterotrophic plate counts (HPCs) and the concentration of chloramine residual in the Shanghai drinking water distribution system (DWDS). Models to control HPCs in the water distribution system and at consumer taps are also developed. Real-time ArcGIS was applied to map the distribution and changes of the chloramine residual concentration in the pipe system using these models. Residual regression analysis was used to obtain a reasonable range of threshold values at which the chloramine residual efficiently inhibits bacterial growth in the Shanghai DWDS; the threshold values should be between 0.45 and 0.5 mg/L in pipe water and 0.2 and 0.25 mg/L in tap water. The low chloramine residual value (0.05 mg/L) of the Chinese drinking water quality standard may pose a potential microbial health risk and should be improved. Disinfection by-products (DBPs) were detected, but no health risk was identified.
Cabral, Ana Caroline; Stark, Jonathan S; Kolm, Hedda E; Martins, César C
2018-04-01
Sewage input and the relationship between chemical markers (linear alkylbenzenes and coprostanol) and fecal indicator bacteria (FIB; Escherichia coli and enterococci) were evaluated in order to establish threshold values for chemical markers in suspended particulate matter (SPM) as indicators of sewage contamination in two subtropical estuaries in South Atlantic Brazil. Neither chemical marker presented a linear relationship with FIB, owing to high spatial microbiological variability; however, microbiological water quality was related to coprostanol values when analyzed by logistic regression, indicating that linear models may not be the best representation of the relationship between the two classes of indicators. Logistic regression was performed with all data and separately for the two sampling seasons, using 800 and 100 MPN 100 mL⁻¹ of E. coli and enterococci, respectively, as the microbiological limits of sewage contamination. Threshold values of coprostanol varied depending on the FIB and season, ranging between 1.00 and 2.23 μg g⁻¹ SPM. The range of threshold values of coprostanol for SPM is relatively higher and more variable than those suggested in the literature for sediments (0.10-0.50 μg g⁻¹), probably owing to the higher concentration of coprostanol in SPM than in sediment. Temperature may affect the relationship between microbiological indicators and coprostanol, since the threshold value of coprostanol found here was similar to that of tropical areas but lower than those found during winter in temperate areas, reinforcing the idea that threshold values should be calibrated for different climatic conditions. Copyright © 2018 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Senese, Antonella; Maugeri, Maurizio; Vuillermoz, Elisa; Smiraglia, Claudio; Diolaiuti, Guglielmina
2014-05-01
Glacier melt occurs whenever the surface temperature is at the melting point (273.15 K) and the net energy budget is positive. These conditions can be assessed by analyzing meteorological and energy data acquired by a supraglacial Automatic Weather Station (AWS). When no such station is present at the glacier surface, assessing actual melting conditions and evaluating the melt amount is difficult, and degree-day (also called T-index) models are applied. These approaches require the choice of a correct temperature threshold: melt does not necessarily occur only at daily air temperatures above 273.15 K, since it is determined by the energy budget, which in turn is only indirectly affected by air temperature. This is the case in the late spring period, when ablation processes start at the glacier surface, progressively reducing snow thickness. In this study, to detect the air temperature threshold most indicative of melt conditions in the April-June period, we analyzed air temperature data recorded from 2006 to 2012 by a supraglacial AWS (at 2631 m a.s.l.) on the ablation tongue of the Forni Glacier (Italy) and by a weather station located near the studied glacier (at Bormio, 1225 m a.s.l.). Moreover, we evaluated the glacier energy budget (which gives the actual melt; Senese et al., 2012) and the snow water equivalent values during this time frame. The ablation amount was then estimated both from the surface energy balance (MEB, from supraglacial AWS data) and from the degree-day method (MT-INDEX; in this latter case applying the mean tropospheric lapse rate to the temperature data acquired at Bormio and varying the air temperature threshold), and the results were compared. We found that the mean tropospheric lapse rate permits a good and reliable reconstruction of daily glacier air temperature conditions, and that the major uncertainty in the computation of snow melt from degree-day models is driven by the choice of an appropriate air temperature threshold. To assess the most suitable threshold, we first analyzed hourly MEB values to detect whether ablation occurs and for how long (number of hours per day). The largest part of the melting (97.7%) occurred on days featuring at least 6 melting hours, suggesting that the minimum average daily temperature of such days be taken as the threshold (268.1 K). We then ran a simple T-index model applying different threshold values; the threshold that best reproduces snow melting is 268.1 K. Summarizing, a threshold 5.0 K lower than the widely applied 273.15 K permits the best reconstruction of glacier melt, in agreement with the findings of van den Broeke et al. (2010) for the Greenland ice sheet. The choice of a 268 K threshold for computing degree-day amounts could therefore probably be generalized and applied not only to Greenland glaciers but also to mid-latitude and Alpine ones. This work was carried out under the umbrella of the SHARE Stelvio Project, funded by the Lombardy Region and managed by FLA and the EvK2-CNR Committee.
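The T-index computation itself is compact: melt is proportional to the positive excess of daily temperature over the threshold. The sketch below uses the study's 268.1 K threshold and its 2631 m / 1225 m elevations, but the degree-day factor and the toy temperature series are assumed for illustration.

```python
import numpy as np

def t_index_melt(daily_temp_k, ddf=4.5, threshold_k=268.1):
    """Positive degree-day melt: melt (mm w.e.) = DDF * max(T - threshold, 0),
    summed over days. The DDF value here is a placeholder
    (mm w.e. K^-1 day^-1); the 268.1 K threshold is the study's."""
    excess = np.maximum(np.asarray(daily_temp_k) - threshold_k, 0.0)
    return ddf * excess.sum()

# Toy April-June series reconstructed at the AWS elevation via a lapse rate
valley_t = np.array([278.0, 280.5, 282.0, 279.5])      # Bormio daily means (K)
lapse = -0.0065                                        # K per m (tropospheric)
glacier_t = valley_t + lapse * (2631 - 1225)           # shift to 2631 m a.s.l.
print(t_index_melt(glacier_t))                         # ~50 mm w.e.
```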
Quintana, Penelope J E; Matt, Georg E; Chatfield, Dale; Zakarian, Joy M; Fortmann, Addie L; Hoh, Eunha
2013-09-01
Secondhand smoke contains a mixture of pollutants that can persist in air, dust, and on surfaces for months or longer. This persistent residue is known as thirdhand smoke (THS). Here, we detail a simple method of wipe sampling for nicotine as a marker of accumulated THS on surfaces. We analyzed findings from 5 real-world studies to investigate the performance of wipe sampling for nicotine on surfaces in homes, cars, and hotels in relation to smoking behavior and smoking restrictions. The intraclass correlation coefficient for side-by-side samples was 0.91 (95% CI: 0.87-0.94). Wipe sampling for nicotine reliably distinguished between private homes, private cars, rental cars, and hotels with and without smoking bans and was significantly positively correlated with other measures of tobacco smoke contamination such as air and dust nicotine. The sensitivity and specificity of possible threshold values (0.1, 1, and 10 μg/m^2) were evaluated for distinguishing between nonsmoking and smoking environments. Sensitivity was highest at a threshold of 0.1 μg/m^2, with 74%-100% of smoker environments showing nicotine levels above threshold. Specificity was highest at a threshold of 10 μg/m^2, with 81%-100% of nonsmoker environments showing nicotine levels below threshold. The optimal threshold will depend on the desired balance of sensitivity and specificity and on the types of smoking and nonsmoking environments. Surface wipe sampling for nicotine is a reliable, valid, and relatively simple collection method to quantify THS contamination on surfaces across a wide range of field settings and to distinguish between nonsmoking and smoking environments.
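The threshold screening described (sensitivity and specificity of candidate cutoffs) reduces to simple exceedance fractions; a sketch with invented surface-nicotine values:

```python
# Sketch: sensitivity/specificity of candidate nicotine thresholds for
# classifying smoking vs. nonsmoking environments. Data are invented.
import numpy as np

smoker = np.array([0.4, 2.1, 15.0, 48.0, 3.3, 120.0])       # ug/m^2, smoking sites
nonsmoker = np.array([0.02, 0.08, 0.5, 1.2, 0.3, 9.0])      # ug/m^2, nonsmoking sites

for thr in (0.1, 1.0, 10.0):
    sensitivity = (smoker > thr).mean()       # smoker sites above threshold
    specificity = (nonsmoker <= thr).mean()   # nonsmoker sites at/below threshold
    print(f"threshold {thr:>5} ug/m^2: sens={sensitivity:.2f} spec={specificity:.2f}")
```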
Grantz, Erin; Haggard, Brian; Scott, J Thad
2018-06-12
We calculated four median datasets for three parameters (chlorophyll a, Chl a; total phosphorus, TP; and transparency) using multiple approaches to handling censored observations, including substituting fractions of the quantification limit (QL; dataset 1 = 1QL, dataset 2 = 0.5QL) and statistical methods for censored datasets (datasets 3-4), for approximately 100 Texas, USA reservoirs. Trend analyses of differences between dataset 1 and 3 medians indicated that percent difference increased linearly above thresholds in percent censored data (%Cen). This relationship was extrapolated to estimate medians for site-parameter combinations with %Cen > 80%, which were combined with dataset 3 as dataset 4. Changepoint analysis of Chl a- and transparency-TP relationships indicated threshold differences of up to 50% between datasets. Recursive analysis identified secondary thresholds in dataset 4. Threshold differences show that information introduced via substitution, or missing due to limitations of the statistical methods, biased values, underestimated error, and inflated the strength of TP thresholds identified in datasets 1-3. Analysis of covariance identified differences in linear regression models relating transparency to TP between datasets 1, 2, and the more statistically robust datasets 3-4. Study findings identify high-risk scenarios for biased analytical outcomes when using substitution. These include a high probability of median overestimation when %Cen > 50-60% for a single QL, or when %Cen is as low as 16% for multiple QLs. Changepoint analysis was uniquely vulnerable to substitution effects when using medians from sites with %Cen > 50%. Linear regression analysis was less sensitive to substitution and missing-data effects, but differences in model parameters for transparency cannot be discounted and could be magnified by log-transformation of the variables.
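The substitution effect on medians is easy to reproduce; in the sketch below (invented values, one QL), more than half the observations are censored, so the median collapses onto whichever fraction of the QL is substituted, mirroring the >50-60% risk scenario identified above.

```python
# Sketch of the substitution effect on medians for a censored dataset;
# values and the quantification limit (QL) are invented.
import numpy as np

QL = 2.0
observed = np.array([np.nan, np.nan, np.nan, np.nan, 3.1, 4.0, 5.2])  # nan = censored (< QL)
censored = np.isnan(observed)   # ~57% censored here

median_1QL = np.median(np.where(censored, QL, observed))          # dataset-1 style
median_halfQL = np.median(np.where(censored, 0.5 * QL, observed)) # dataset-2 style

pct_diff = 100 * (median_1QL - median_halfQL) / median_halfQL
print(median_1QL, median_halfQL, f"{pct_diff:.0f}% difference")
```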
DOE Office of Scientific and Technical Information (OSTI.GOV)
Manzano-Santamaria, J.; Euratom/CIEMAT Fusion Association, Madrid; Olivares, J.
2012-10-08
We have determined the cross-section σ for color center generation under single Br ion impacts on amorphous SiO2. The evolution of the cross-sections σ(E) and σ(S_e) shows an initial flat stage that we associate with atomic collision mechanisms. Above a certain threshold value (S_e > 2 keV/nm), roughly coinciding with that reported for the onset of macroscopic disorder (compaction), σ shows a marked increase due to electronic processes. In this regime, an energetic cost of around 7.5 keV is necessary to create a non-bridging oxygen hole center-E' (NBOHC/E') pair, whatever the input energy. The data appear consistent with a non-radiative decay of self-trapped excitons.
Wang, Hui; Liu, Huifang; Cao, Zhiyong; Wang, Bowen
2016-01-01
This paper presents a new perspective: there is a double-threshold effect of the technology gap in the foreign direct investment (FDI) technology spillover process in different regional Chinese industrial sectors. A double-threshold regression model was established to examine the relation between the threshold effect of the technology gap and technology spillover. Based on provincial panel data for Chinese industrial sectors from 2000 to 2011, the empirical results reveal two threshold values of the technology gap, 1.254 and 2.163, in the industrial sector of eastern China. There are also two threshold values in the central and western industrial sectors: 1.516 and 2.694, and 1.635 and 2.714, respectively. Technology spillover is a decreasing function of the technology gap in both the eastern and western industrial sectors, but a concave function of the technology gap in the central industrial sectors. Furthermore, the FDI technology spillover has increased gradually in recent years. Based on the empirical results, suggestions are proposed concerning the introduction of FDI and the improvement of industrial added value in different regions of China.
A new cloud and aerosol layer detection method based on micropulse lidar measurements
NASA Astrophysics Data System (ADS)
Zhao, Chuanfeng; Wang, Yuzhao; Wang, Qianqian; Li, Zhanqing; Wang, Zhien; Liu, Dong
2014-06-01
This paper introduces a new algorithm to detect aerosols and clouds based on micropulse lidar measurements. A semidiscretization processing technique is first used to inhibit the impact of noise, which increases with distance. The value distribution equalization method, which reduces the magnitude of signal variations with distance, is then introduced. Combined with empirical threshold values, we determine whether the signal waves indicate clouds or aerosols. This method can separate clouds and aerosols with high accuracy, although the differentiation between aerosols and clouds is subject to more uncertainty depending on the thresholds selected. Compared with the existing Atmospheric Radiation Measurement program lidar-based cloud product, the new method appears more reliable and detects more clouds with high bases. The algorithm is applied to a year of observations at both the U.S. Southern Great Plains (SGP) and China Taihu sites. At the SGP site, the cloud frequency shows a clear seasonal variation with maximum values in winter and spring and shows bimodal vertical distributions with maximum occurrences at around 3-6 km and 8-12 km. The annual averaged cloud frequency is about 50%. The dominant clouds are stratiform in winter and convective in summer. By contrast, the cloud frequency at the Taihu site shows no clear seasonal variation and the maximum occurrence is at around 1 km. The annual averaged cloud frequency is about 15% higher than that at the SGP site. A seasonal analysis of cloud base occurrence frequency suggests that stratiform clouds dominate at the Taihu site.
Aquatic Rational Threshold Value (RTV) Concepts for Army Environmental Impact Assessment.
1979-07-01
…irreversible impacts. In aquatic systems, both the possible cause-effect relationships …namics, aqueous chemistry, toxicology, and aquatic ecology. In addition, when man's use … Examination of the etymology of "rational threshold value" … a driving function. The shading effects of riparian …
Code of Federal Regulations, 2014 CFR
2014-07-01
... gases, dust, fumes, mists, and vapors. 71.700 Section 71.700 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR COAL MINE SAFETY AND HEALTH MANDATORY HEALTH STANDARDS-SURFACE COAL MINES AND... limit values adopted by the American Conference of Governmental Industrial Hygienists in “Threshold...
Rainfall threshold definition using an entropy decision approach and radar data
NASA Astrophysics Data System (ADS)
Montesarchio, V.; Ridolfi, E.; Russo, F.; Napolitano, F.
2011-07-01
Flash flood events are floods characterised by a very rapid response of basins to storms, often resulting in loss of life and property damage. Due to the specific space-time scale of this type of flood, the lead time available for triggering civil protection measures is typically short. Rainfall threshold values specify the amount of precipitation for a given duration that generates a critical discharge in a given river cross-section; if they are exceeded, a critical situation can develop at river sites exposed to alluvial risk. It is therefore possible to directly compare observed or forecasted precipitation with critical reference values, without running online real-time forecasting systems. The focus of this study is the Mignone River basin, located in Central Italy. The critical rainfall threshold values are evaluated by minimising a utility function based on the informative entropy concept and by using a simulation approach based on radar data. The study concludes with a system performance analysis in terms of correctly issued warnings, false alarms and missed alarms.
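Operationally, using such thresholds amounts to comparing observed or forecast accumulations against duration-dependent critical values; a minimal sketch with invented numbers:

```python
# Sketch: issuing a warning by comparing forecast rainfall accumulations
# against duration-dependent critical thresholds. All numbers are invented.
critical_mm = {1: 35.0, 3: 60.0, 6: 85.0, 12: 120.0}  # duration (h) -> threshold (mm)

def check_warning(forecast_mm):
    """forecast_mm maps duration (h) to forecast accumulation (mm)."""
    return [d for d, mm in forecast_mm.items() if mm >= critical_mm.get(d, float("inf"))]

forecast = {1: 20.0, 3: 66.0, 6: 80.0, 12: 130.0}
print("warning for durations (h):", check_warning(forecast))  # [3, 12]
```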
A score-statistic approach for determining threshold values in QTL mapping.
Kao, Chen-Hung; Ho, Hsiang-An
2012-06-01
To date, issues in determining the threshold values of QTL mapping have mostly been investigated for backcross and F2 populations with relatively simple genome structures. Investigations of these issues in the progeny populations after F2 (advanced populations), which have relatively more complicated genomes, are generally inadequate. As these advanced populations have been well implemented in QTL mapping, it is important to address these issues for them in more detail. Due to the increasing number of meiosis cycles, the genomes of the advanced populations can be very different from the backcross and F2 genomes. Therefore, special devices that consider the specific genome structures present in the advanced populations are required to resolve these issues. By considering the differences in genome structure between populations, we formulate more general score test statistics and Gaussian processes to evaluate their threshold values. In general, we found that, given a significance level and a genome size, threshold values for QTL detection are higher for denser marker maps and for more advanced populations. Simulations were performed to validate our approach.
Statistical Study of Magnetic Nonpotential Measures in Confined and Eruptive Flares
NASA Astrophysics Data System (ADS)
Vasantharaju, N.; Vemareddy, P.; Ravindra, B.; Doddamani, V. H.
2018-06-01
Using Solar Dynamics Observatory/Helioseismic and Magnetic Imager vector magnetic field observations, we studied the relation between the degree of magnetic non-potentiality and the observed flare/coronal mass ejection (CME) in active regions (ARs). From a sample of 77 flare/CME cases, we found in general that the degree of non-potentiality is positively correlated with the flare strength and the associated CME speed. Since the magnetic flux in the flare-ribbon area is more related to the reconnection, we trace the strong-gradient polarity inversion line (SGPIL) and Schrijver's R value manually along the flare-ribbon extent. Manually detected SGPIL lengths and R values show higher correlation with the flare strength and CME speed than automatically traced values without flare-ribbon information. This highlights the difficulty of predicting the flare strength and CME speed a priori from the pre-flare magnetograms used in flare prediction models. Although the total potential magnetic energy proxies show a weak positive correlation, the decrease in free energy exhibits a higher correlation (0.56) with the flare strength and CME speed. Moreover, eruptive flares are distinguished from confined flares by thresholds of SGPIL length (31 Mm), R value (1.6 × 10^19 Mx), and free-energy decrease (2 × 10^31 erg). In 90% of eruptive flares, the decay-index curve is steeper, reaching n_crit = 1.5 within 42 Mm, whereas it is beyond this value in >70% of confined flares. While indicating improved statistics in the predictive capability of AR eruptive behavior with flare-ribbon information, our study provides threshold magnetic properties for a flare to be eruptive.
Variance analysis of forecasted streamflow maxima in a wet temperate climate
NASA Astrophysics Data System (ADS)
Al Aamery, Nabil; Fox, James F.; Snyder, Mark; Chandramouli, Chandra V.
2018-05-01
Coupling global climate models, hydrologic models and extreme value analysis provides a method to forecast streamflow maxima; however, the elusive variance structure of the results hinders confidence in application. Directly correcting the bias of forecasts using the relative change between forecast and control simulations has been shown to marginalize hydrologic uncertainty, reduce model bias, and remove systematic variance when predicting mean monthly and mean annual streamflow, prompting our investigation for streamflow maxima. We assess the variance structure of streamflow maxima using realizations of emission scenario, global climate model type and project phase, downscaling method, bias correction, extreme value method, and hydrologic model inputs and parameterization. Results show that the relative change of streamflow maxima did not depend on systematic variance from the annual maxima versus peak-over-threshold method applied, albeit we stress that researchers strictly adhere to the rules of extreme value theory when applying the peak-over-threshold method. Regardless of which method is applied, extreme value model fitting does add variance to the projection, and the variance is an increasing function of the return period. Unlike the relative change of mean streamflow, results show that the variance of the maxima's relative change depended on all climate model factors tested as well as hydrologic model inputs and calibration. Ensemble projections forecast an increase of streamflow maxima for 2050 with pronounced forecast standard error, including increases of +30(±21), +38(±34) and +51(±85)% for 2-, 20- and 100-year streamflow events for the wet temperate region studied. The variance of maxima projections was dominated by climate model factors and extreme value analyses.
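For the annual-maxima branch of such an analysis, fitting a generalized extreme value (GEV) distribution and reading off return-period flows might look like the following sketch; the streamflow series and the use of scipy's genextreme are illustrative assumptions.

```python
# Sketch: fitting annual maxima with a GEV distribution and reading off
# return-period flows; the streamflow series is invented.
import numpy as np
from scipy.stats import genextreme

annual_maxima = np.array([310, 420, 290, 515, 380, 610, 450, 330, 540, 475.0])  # m^3/s
shape, loc, scale = genextreme.fit(annual_maxima)

for T in (2, 20, 100):                 # return periods, years
    q = genextreme.ppf(1 - 1 / T, shape, loc=loc, scale=scale)
    print(f"{T:>3}-yr flow ~ {q:.0f} m^3/s")
```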
Shah, Anoop D.; Nicholas, Owen; Timmis, Adam D.; Feder, Gene; Abrams, Keith R.; Chen, Ruoling; Hingorani, Aroon D.; Hemingway, Harry
2011-01-01
Background Low haemoglobin concentration has been associated with adverse prognosis in patients with angina and myocardial infarction (MI), but the strength and shape of the association and the presence of any threshold has not been precisely evaluated. Methods and findings A retrospective cohort study was carried out using the UK General Practice Research Database. 20,131 people with a new diagnosis of stable angina and no previous acute coronary syndrome, and 14,171 people with first MI who survived for at least 7 days were followed up for a mean of 3.2 years. Using semi-parametric Cox regression and multiple adjustment, there was evidence of threshold haemoglobin values below which mortality increased in a graded continuous fashion. For men with MI, the threshold value was 13.5 g/dl (95% confidence interval [CI] 13.2-13.9); the 29.5% of patients with haemoglobin below this threshold had an associated hazard ratio for mortality of 2.00 (95% CI 1.76-2.29) compared to those with haemoglobin values in the lowest risk range. Women tended to have lower threshold haemoglobin values (e.g., for MI 12.8 g/dl; 95% CI 12.1-13.5), but the shape and strength of the association did not differ between the genders, nor between patients with angina and MI. We did a systematic review and meta-analysis that identified ten previously published studies, reporting a total of only 1,127 endpoints, but none evaluated thresholds of risk. Conclusions There is an association between low haemoglobin concentration and increased mortality. A large proportion of patients with coronary disease have haemoglobin concentrations below the thresholds of risk defined here. Intervention trials would clarify whether increasing the haemoglobin concentration reduces mortality. PMID:21655315
Threshold thickness for applying diffusion equation in thin tissue optical imaging
NASA Astrophysics Data System (ADS)
Zhang, Yunyao; Zhu, Jingping; Cui, Weiwen; Nie, Wei; Li, Jie; Xu, Zhenghong
2014-08-01
We investigated the suitability of the semi-infinite model of the diffusion equation when using diffuse optical imaging (DOI) to image thin tissues with double boundaries. Both the diffusion approximation and Monte Carlo methods were applied to simulate light propagation in the thin tissue model with variable optical parameters and tissue thicknesses. A threshold thickness was defined as the minimum thickness at which the semi-infinite model exhibits the same reflected intensity as the double-boundary model and was generated as the final result. In contrast to our initial hypothesis that all optical properties would affect the threshold thickness, our results show that the absorption coefficient is the dominant parameter and the others are negligible. The threshold thickness decreases from 1 cm to 4 mm as the absorption coefficient grows from 0.01 mm^-1 to 0.2 mm^-1. A look-up curve was derived to guide the selection of the appropriate model during the optical diagnosis of thin tissue cancers. These results are useful in guiding the development of endoscopic DOI for esophageal, cervical and colorectal cancers, among others.
Ultrahigh Error Threshold for Surface Codes with Biased Noise
NASA Astrophysics Data System (ADS)
Tuckett, David K.; Bartlett, Stephen D.; Flammia, Steven T.
2018-02-01
We show that a simple modification of the surface code can exhibit an enormous gain in the error correction threshold for a noise model in which Pauli Z errors occur more frequently than X or Y errors. Such biased noise, where dephasing dominates, is ubiquitous in many quantum architectures. In the limit of pure dephasing noise we find a threshold of 43.7(1)% using a tensor network decoder proposed by Bravyi, Suchara, and Vargo. The threshold remains surprisingly large in the regime of realistic noise bias ratios, for example 28.2(2)% at a bias of 10. The performance is, in fact, at or near the hashing bound for all values of the bias. The modified surface code still uses only weight-4 stabilizers on a square lattice, but merely requires measuring products of Y instead of Z around the faces, as this doubles the number of useful syndrome bits associated with the dominant Z errors. Our results demonstrate that large efficiency gains can be found by appropriately tailoring codes and decoders to realistic noise models, even under the locality constraints of topological codes.
Formulating face verification with semidefinite programming.
Yan, Shuicheng; Liu, Jianzhuang; Tang, Xiaoou; Huang, Thomas S
2007-11-01
This paper presents a unified solution to three open problems in face verification with subspace learning techniques: selection of the verification threshold, automatic determination of the subspace dimension, and deduction of feature fusing weights. In contrast to previous algorithms, which search for the projection matrix directly, our new algorithm investigates a similarity metric matrix (SMM). For a given verification threshold, this matrix is learned by a semidefinite programming approach, with the constraints that kindred pairs have similarity larger than the threshold and inhomogeneous pairs have similarity smaller than the threshold. Then, the subspace dimension and the feature fusing weights are simultaneously inferred from the singular value decomposition of the derived SMM. In addition, the weighted and tensor extensions are proposed to further improve the algorithmic effectiveness and efficiency, respectively. Essentially, the verification is conducted within an affine subspace in this new algorithm and is, hence, called the affine subspace for verification (ASV). Extensive experiments show that the ASV can achieve encouraging face verification accuracy in comparison to other subspace algorithms, even without the need to explore any parameters.
Revising two-point discrimination assessment in normal aging and in patients with polyneuropathies.
van Nes, S I; Faber, C G; Hamers, R M T P; Harschnitz, O; Bakkers, M; Hermans, M C E; Meijer, R J; van Doorn, P A; Merkies, I S J
2008-07-01
To revise the static and dynamic normative values for the two-point discrimination test and to examine its applicability and validity in patients with a polyneuropathy. Two-point discrimination threshold values were assessed in 427 healthy controls and 99 patients mildly affected by a polyneuropathy. The controls were divided into seven age groups ranging from 20-29, 30-39,..., up to 80 years and older; each group consisted of at least 30 men and 30 women. Two-point discrimination examination took place under standardised conditions on the index finger. Correlation studies were performed between the scores obtained and the values derived from the Weinstein Enhanced Sensory Test (WEST) and the arm grade of the Overall Disability SumScore (ODSS) in the patients' group (validity studies). Finally, the sensitivity to detect patients mildly affected by a polyneuropathy was evaluated for static and dynamic assessments. There was a significant age-dependent increase in the two-point discrimination values. No significant gender difference was found. The dynamic threshold values were lower than the static scores. The two-point discrimination values obtained correlated significantly with the arm grade of the ODSS (static values: r = 0.33, p = 0.04; dynamic values: r = 0.37, p = 0.02) and the scores of the WEST in patients (static values: r = 0.58, p = 0.0001; dynamic values: r = 0.55, p = 0.0002). The sensitivity for the static and dynamic threshold values was 28% and 33%, respectively. This study provides age-related normative two-point discrimination threshold values using a two-point discriminator (an aesthesiometer). This easily applicable instrument could be used as part of a more extensive neurological sensory evaluation.
NASA Astrophysics Data System (ADS)
Kaewkasi, Pitchaya; Widjaja, Joewono; Uozumi, Jun
2007-03-01
Effects of the threshold value on the detection performance of the modified amplitude-modulated joint transform correlator are quantitatively studied using computer simulation. Fingerprint and human face images are used as test scenes in the presence of noise and a contrast difference. Simulation results demonstrate that this correlator improves detection performance for both types of image, but more so for human face images. Optimal detection of low-contrast human face images obscured by strong noise can be obtained by selecting an appropriate threshold value.
The ablation threshold of Er,Cr:YSGG laser radiation in bone tissue
NASA Astrophysics Data System (ADS)
Benetti, Carolina; Zezell, Denise Maria
2015-06-01
In clinical laser-cutting applications, the use of energy densities lower than the ablation threshold causes an increase of temperature in the irradiated tissue, which might result in irreversible thermal damage. Hence, knowing the ablation threshold is crucial for ensuring the safety of these procedures. The aim of this study was to determine the ablation threshold of the Er,Cr:YSGG laser in bone tissue. Bone pieces from jaws of New Zealand rabbits were cut into blocks of 5 mm × 8 mm and polished with sandpaper. The Er,Cr:YSGG laser used in this study had a wavelength of 2780 nm and a repetition rate of 20 Hz, and the irradiation condition was chosen to simulate irradiation during a surgical procedure. The laser irradiation was performed at 12 different energy densities, between 3 J/cm2 and 42 J/cm2, for 3 seconds, resulting in the overlap of 60 pulses. This process was repeated in each sample for all energy densities. After irradiation, the samples were analyzed by scanning electron microscopy (SEM), and the crater diameter was measured for each energy density. By fitting a curve relating the energy density to the corresponding diameter of the ablation crater, it was possible to determine the ablation threshold. The results showed that the ablation threshold of the Er,Cr:YSGG laser in bone tissue was 1.95 ± 0.42 J/cm2.
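One common way to extract an ablation threshold from crater diameters is a Liu-style fit, D^2 = 2 w0^2 ln(F/F_th); the sketch below uses this approach with invented data and is not necessarily the authors' exact fitting procedure.

```python
# Sketch of a common ablation-threshold fit (Liu-style: D^2 = 2*w0^2*ln(F/Fth));
# crater diameters and fluences below are invented.
import numpy as np

fluence = np.array([3, 6, 12, 20, 30, 42.0])            # J/cm^2
diameter_um = np.array([150, 260, 360, 430, 490, 540.0])

# Linear in ln(F): D^2 = slope*ln(F) + intercept, slope = 2*w0^2
slope, intercept = np.polyfit(np.log(fluence), diameter_um**2, 1)
w0 = np.sqrt(slope / 2)                 # effective beam radius, um
f_th = np.exp(-intercept / slope)       # ablation threshold, J/cm^2
print(f"Fth ~ {f_th:.2f} J/cm^2, w0 ~ {w0:.0f} um")
```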
Pretorius, M L; Van Huyssteen, C W; Brown, L R
2017-10-13
A relationship between soil organic carbon and soil color is acknowledged, albeit not a direct one. Since heightened carbon contents can be an indicator of wetlands, a quantifiable relationship between color and carbon might assist in determining wetland boundaries by rapid, field-based appraisal. The overarching aim of this initial study was to determine the potential of topsoil color to indicate soil organic carbon, and by extension wetland boundaries, on a sandy coastal plain in South Africa. Data were collected from four wetland types in northern KwaZulu-Natal in South Africa. Soil samples were taken to a depth of 300 mm in three transects in each wetland type and analyzed for soil organic carbon. The matrix color was described using a Munsell soil color chart. Various color indices were correlated with soil organic carbon. The relationship between color and carbon was further elucidated using segmented quantile regression. This showed that potentially maximal carbon contents occur at low color-index values, and predictably minimal carbon contents occur at low or high color-index values. Threshold values can thus be used to make deductions such as: when the sum of dry and wet Value and Chroma is 9 or more, carbon content will be 4.79% or less. These threshold values can then be used to differentiate between wetland and non-wetland sites with 70 to 100% certainty. This study successfully developed a quantifiable correlation between color and carbon and showed that wetland boundaries can be determined based thereon.
Baldi, Pierre
2010-01-01
As repositories of chemical molecules continue to expand and become more open, it becomes increasingly important to develop tools to search them efficiently and to assess the statistical significance of chemical similarity scores. Here we develop a general framework for understanding, modeling, predicting, and approximating the distribution of chemical similarity scores and its extreme values in large databases. The framework can be applied to different chemical representations and similarity measures but is demonstrated here using the most common binary fingerprints with the Tanimoto similarity measure. After introducing several probabilistic models of fingerprints, including the Conditional Gaussian Uniform model, we show that the distribution of Tanimoto scores can be approximated by the distribution of the ratio of two correlated Normal random variables associated with the corresponding unions and intersections. This remains true when the distribution of similarity scores is conditioned on the size of the query molecules in order to derive more fine-grained results and improve chemical retrieval. The corresponding extreme value distributions for the maximum scores are approximated by Weibull distributions. From these various distributions and their analytical forms, Z-scores, E-values, and p-values are derived to assess the significance of similarity scores. In addition, the framework allows one to predict the value of standard chemical retrieval metrics, such as Sensitivity and Specificity at fixed thresholds, or ROC (Receiver Operating Characteristic) curves at multiple thresholds, and to detect outliers in the form of atypical molecules. Numerous and diverse experiments, carried out in part with large sets of molecules from ChemDB, show remarkable agreement between theory and empirical results. PMID:20540577
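A toy version of the pipeline: Tanimoto scores for binary fingerprints, block maxima, and a Weibull fit to the extremes. The fingerprint size, bit density, and block size are arbitrary stand-ins.

```python
# Sketch: Tanimoto similarity on binary fingerprints and a Weibull fit to the
# distribution of maximum scores; fingerprints are random stand-ins.
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(0)

def tanimoto(a, b):
    inter = np.sum(a & b)
    union = np.sum(a | b)
    return inter / union if union else 0.0

query = rng.random(1024) < 0.1                      # binary fingerprint
db = rng.random((5000, 1024)) < 0.1

scores = np.array([tanimoto(query, fp) for fp in db])
max_scores = scores.reshape(50, 100).max(axis=1)    # maxima over 100-molecule blocks

c, loc, scale = weibull_min.fit(max_scores)
print(f"Weibull shape={c:.2f}, loc={loc:.3f}, scale={scale:.3f}")
```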
Jamali, Mohsen; Mitchell, Diana E; Dale, Alexis; Carriot, Jerome; Sadeghi, Soroush G; Cullen, Kathleen E
2014-04-01
The vestibular system is responsible for processing self-motion, allowing normal subjects to discriminate the direction of rotational movements as slow as 1-2 deg s^-1. After unilateral vestibular injury patients' direction-discrimination thresholds worsen to ∼20 deg s^-1, and despite some improvement thresholds remain substantially elevated following compensation. To date, however, the underlying neural mechanisms of this recovery have not been addressed. Here, we recorded from first-order central neurons in the macaque monkey that provide vestibular information to higher brain areas for self-motion perception. Immediately following unilateral labyrinthectomy, neuronal detection thresholds increased by more than two-fold (from 14 to 30 deg s^-1). While thresholds showed slight improvement by week 3 (25 deg s^-1), they never recovered to control values - a trend mirroring the time course of perceptual thresholds in patients. We further discovered that changes in neuronal response variability paralleled changes in sensitivity for vestibular stimulation during compensation, thereby causing detection thresholds to remain elevated over time. However, we found that in a subset of neurons, the emergence of neck proprioceptive responses combined with residual vestibular modulation during head-on-body motion led to better neuronal detection thresholds. Taken together, our results emphasize that increases in response variability to vestibular inputs ultimately constrain neural thresholds and provide evidence that sensory substitution with extravestibular (i.e. proprioceptive) inputs at the first central stage of vestibular processing is a neural substrate for improvements in self-motion perception following vestibular loss. Thus, our results provide a neural correlate for the patient benefits provided by rehabilitative strategies that take advantage of the convergence of these multisensory cues.
On the optimal z-score threshold for SISCOM analysis to localize the ictal onset zone.
De Coster, Liesbeth; Van Laere, Koen; Cleeren, Evy; Baete, Kristof; Dupont, Patrick; Van Paesschen, Wim; Goffin, Karolien E
2018-04-17
In epilepsy patients, SISCOM (subtraction ictal single photon emission computed tomography co-registered to magnetic resonance imaging) has become a routinely used, non-invasive technique to localize the ictal onset zone (IOZ). Thresholding of clusters at a predefined number of standard deviations from normality (z-score) is generally accepted for localizing the IOZ. In this study, we aimed to assess the robustness of this parameter in a group of patients with well-characterized drug-resistant epilepsy in whom the exact location of the IOZ was known after successful epilepsy surgery. Eighty patients underwent preoperative SISCOM and were seizure-free for a postoperative period of at least 1 year. SISCOMs with z-thresholds of 2 and 1.5 were analyzed by two experienced readers separately, blinded to the clinical ground-truth data. Their reported location of the IOZ was compared with the operative resection zone. Furthermore, confidence scores for the SISCOM IOZ were compared for the two thresholds. Visual reporting with a z-score threshold of 1.5 versus 2 showed no statistically significant difference in localizing correspondence with the ground truth (70% vs. 72%, respectively; p = 0.17). Interrater agreement was moderate (κ = 0.65) at the threshold of 1.5 but high (κ = 0.84) at a threshold of 2, where reviewers were also significantly more confident (p < 0.01). SISCOM is a clinically useful, routinely used modality in the preoperative work-up in many epilepsy surgery centers. We found no significant difference in localizing value of the IOZ using a threshold of 1.5 or 2, but interrater agreement and reader confidence were higher using a z-score threshold of 2.
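The core SISCOM operation (subtract, z-score, threshold) can be sketched on toy volumes as below; co-registration and intensity normalization are assumed already done, and the arrays are synthetic.

```python
# Sketch of SISCOM-style z-score thresholding: subtract interictal from ictal
# SPECT, z-score the difference, and keep voxels above a chosen z threshold.
import numpy as np

rng = np.random.default_rng(1)
ictal = rng.normal(100, 10, size=(16, 16, 16))
interictal = rng.normal(100, 10, size=(16, 16, 16))
ictal[8:11, 8:11, 8:11] += 40          # synthetic ictal-onset hyperperfusion

diff = ictal - interictal
z = (diff - diff.mean()) / diff.std()

for z_thr in (1.5, 2.0):
    mask = z > z_thr
    print(f"z > {z_thr}: {mask.sum()} voxels above threshold")
```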
Seo, Joo-Hyun; Park, Jihyang; Kim, Eun-Mi; Kim, Juhan; Joo, Keehyoung; Lee, Jooyoung; Kim, Byung-Gee
2014-02-01
Sequence subgrouping for a given sequence set can enable various informative tasks, such as the functional discrimination of sequence subsets and the functional inference of unknown sequences. Because the identity threshold for sequence subgrouping may vary with the given sequence set, it is highly desirable to construct a robust subgrouping algorithm that automatically identifies an optimal identity threshold and generates subgroups for a given sequence set. To this end, an automatic sequence subgrouping method named 'Subgrouping Automata' (SA) was constructed. First, the tree-analysis module analyzes the structure of the tree and calculates all possible subgroups at each node. The sequence-similarity module calculates the average sequence similarity for all subgroups at each node. The representative-sequence module finds a representative sequence for each subgroup using profile analysis and self-scoring. Average sequence similarities are calculated for all nodes, and SA searches for the node showing the statistically maximal increase in sequence similarity using Student's t-value. The node showing the maximum t-value, which gives the most significant difference in average sequence similarity between two adjacent nodes, is determined as the optimum subgrouping node in the phylogenetic tree. Further analysis showed that the optimum subgrouping node from SA prevents both under-subgrouping and over-subgrouping.
NASA Astrophysics Data System (ADS)
Chefranov, Sergey; Chefranov, Alexander
2016-04-01
Linear hydrodynamic stability theory for the Hagen-Poiseuille (HP) flow yields an infinitely large threshold Reynolds number (Re). This contradiction with observation is usually bypassed by assuming that the HP flow instability is of hard type and possible only for disturbances of sufficiently high amplitude, so that HP flow disturbance evolution must be considered by nonlinear hydrodynamic stability theory. The case of the plane Couette (PC) flow is similar. For the plane Poiseuille (PP) flow, linear theory disagrees with experiment quantitatively: it defines a threshold Reynolds number Re = 5772 (S. A. Orszag, 1971), more than five-fold higher than the observed value Re = 1080 (S. J. Davies, C. M. White, 1928). In the present work, we show that the linear stability conclusions for the HP and PC flows (stability at any Reynolds number) and the evidently too high threshold estimate for the PP flow are related to the traditional disturbance representation, which assumes that the longitudinal variable (along the flow direction) can be separated from the other spatial variables. If this traditional form is abandoned, linear instability of the HP and PC flows is obtained at finite Reynolds numbers (Re > 704 for the HP flow and Re > 139 for the PC flow). We also fit the linear stability conclusion for the PP flow to the experimental data, obtaining an estimate of the minimal threshold Reynolds number of Re = 1040, and bring the minimal threshold estimate for the PC flow into agreement with the experimental data of S. Bottin et al. (1997), where the laminar PC flow stability threshold is Re = 150. A rogue-wave excitation mechanism in oppositely directed currents due to the PC flow linear instability is discussed. Results of the new linear hydrodynamic stability theory for the HP, PP, and PC flows are published in the following papers: 1. S.G. Chefranov, A.G. Chefranov, JETP, v.119, No.2, 331, 2014; 2. S.G. Chefranov, A.G. Chefranov, Doklady Physics, vol.60, No.7, 327-332, 2015; 3. S.G. Chefranov, A.G. Chefranov, arXiv:1509.08910v1 [physics.flu-dyn], 29 Sep 2015 (accepted to JETP).
Lilge, L.; Olivo, M. C.; Schatz, S. W.; MaGuire, J. A.; Patterson, M. S.; Wilson, B. C.
1996-01-01
The applicability and limitations of a photodynamic threshold model, used to describe quantitatively the in vivo response of tissues to photodynamic therapy, are currently being investigated in a variety of normal and malignant tumour tissues. The model states that tissue necrosis occurs when the number of photons absorbed by the photosensitiser per unit tissue volume exceeds a threshold. New Zealand White rabbits were sensitised with porphyrin-based photosensitisers. Normal brain or intracranially implanted VX2 tumours were illuminated via an optical fibre placed into the tissue at craniotomy. The light fluence distribution in the tissue was measured by multiple interstitial optical fibre detectors. The tissue concentration of the photosensitiser was determined post mortem by absorption spectroscopy. The derived photodynamic threshold values for normal brain are significantly lower than for VX2 tumour for all photosensitisers examined. Neuronal damage is evident beyond the zone of frank necrosis. For Photofrin the threshold decreases with the time delay between photosensitiser administration and light treatment. No significant difference in threshold is found between Photofrin and haematoporphyrin derivative. The threshold in normal brain (grey matter) is lowest for sensitisation by 5-aminolaevulinic acid. The results confirm the very high sensitivity of normal brain to porphyrin photodynamic therapy and show the importance of in situ light fluence monitoring during photodynamic irradiation. PMID:8562339
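A minimal numerical reading of the threshold model, assuming invented values for fluence, photosensitiser absorption, and the threshold photon density:

```python
# Sketch of the photodynamic threshold model described above: necrosis where
# photons absorbed by the photosensitiser per unit volume exceed a threshold.
# All parameter values are invented for illustration.
import numpy as np

fluence = np.array([120.0, 80.0, 50.0, 20.0, 5.0])   # J/cm^2 at increasing depth
epsilon = 0.01           # photosensitiser absorption coefficient, cm^-1 per ug/g
concentration = 10.0     # tissue photosensitiser concentration, ug/g
photon_energy = 3.1e-19  # J per photon at ~630 nm

# Photons absorbed per cm^3 = fluence * absorption / photon energy
absorbed = fluence * epsilon * concentration / photon_energy
threshold = 1e19         # photons/cm^3, order-of-magnitude placeholder

print(absorbed > threshold)   # True where the model predicts necrosis
```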
Fabre, Stéphanie; Clerson, Pierre; Launay, Jean-Marie; Gautier, Jean-François; Vidal-Trecan, Tiphaine; Riveline, Jean-Pierre; Platt, Adam; Abrahamsson, Anna; Miner, Jeffrey N; Hughes, Glen; Richette, Pascal; Bardin, Thomas
2018-05-02
The uric acid (UA) level in patients with gout is a key factor in disease management and is typically measured in the laboratory using plasma samples obtained by venous puncture. This study aimed to assess the reliability of immediate UA measurement in capillary blood samples obtained by fingertip puncture with the HumaSens plus point-of-care meter. UA levels were measured in 238 consenting diabetic patients using both the HumaSens plus meter in the clinic and the routine plasma UA method in the biochemistry laboratory. HumaSens plus capillary and routine plasma UA measurements were compared by linear regression, Bland-Altman plots, the intraclass correlation coefficient (ICC), and Lin's concordance coefficient. Values outside the dynamic range of the meter, low (LO) or high (HI), were analyzed separately. The best capillary UA thresholds for detecting hyperuricemia were determined by receiver operating characteristic (ROC) curves. The impact of potential confounding factors (demographic and biological parameters/treatments) was assessed. Capillary and routine plasma UA levels were compared to reference plasma UA measurements by liquid chromatography-mass spectrometry (LC-MS) for a subgroup of 67 patients. In total, 205 patients had both capillary and routine plasma UA measurements available. The ICC was 0.90 (95% confidence interval (CI) 0.87-0.92), Lin's coefficient was 0.91 (0.88-0.93), and the Bland-Altman plot showed good agreement over all tested values. Overall, 17 patients showed values outside the dynamic range; LO values were concordant with plasma values, but HI values were considered uninterpretable. Capillary UA thresholds of 299 and 340 μmol/l gave the best results for detecting hyperuricemia (corresponding to routine plasma UA thresholds of 300 and 360 μmol/l, respectively). No significant confounding factor was found among those tested, except for hematocrit; however, this had a negligible influence on assay reliability. When capillary and routine plasma results were discordant, comparison with LC-MS measurements showed that plasma measurements had better concordance: capillary UA, ICC 0.84 (95% CI 0.75-0.90), Lin's coefficient 0.84 (0.77-0.91); plasma UA, ICC 0.96 (0.94-0.98), Lin's coefficient 0.96 (0.94-0.98). UA measurements with the HumaSens plus meter were reasonably comparable with those of the laboratory assay. The meter is easy to use and may be useful in the clinic and in epidemiologic studies.
Prediction of Fracture Initiation in Hot Compression of Burn-Resistant Ti-35V-15Cr-0.3Si-0.1C Alloy
NASA Astrophysics Data System (ADS)
Zhang, Saifei; Zeng, Weidong; Zhou, Dadi; Lai, Yunjin
2015-11-01
An important concern in the hot working of metals is whether the desired deformation can be accomplished without fracture of the material. This paper builds a model to predict fracture initiation in hot compression of the burn-resistant beta-stabilized titanium alloy Ti-35V-15Cr-0.3Si-0.1C, using a combined approach of upsetting experiments, theoretical failure criteria and finite element (FE) simulation techniques. A series of isothermal compression experiments on cylindrical specimens was first conducted in the temperature range of 900-1150 °C and strain-rate range of 0.01-10 s^-1 to obtain fractured samples and primary reduction data. On that basis, eight commonly used theoretical failure criteria were compared; the Oh criterion was selected and coded into a subroutine. FE simulation of the upsetting experiments on cylindrical specimens was then performed to determine the fracture threshold values of the Oh criterion. By correlating the threshold values with the deformation parameters (temperature and strain rate, or the Zener-Hollomon parameter), a new fracture prediction model based on the Oh criterion was established. The new model shows an exponential decay relationship between the threshold values and the Zener-Hollomon parameter (Z), and its relative error is less than 15%. The model was then applied successfully to the cogging of Ti-35V-15Cr-0.3Si-0.1C billet.
NASA Technical Reports Server (NTRS)
Murri, Gretchen Bostaph; Martin, Roderick H.
1991-01-01
Static and fatigue double-cantilever beam (DCB) and end-notch flexure (ENF) tests were conducted to determine the effect of the simulated initial delamination on interlaminar fracture toughness, G_c, and fatigue fracture threshold, G_th. Unidirectional, 24-ply specimens of S2/SP250 glass/epoxy were tested using Kapton inserts of four different thicknesses (13, 25, 75, and 130 microns) at the midplane at one end, or with tension or shear precracks, to simulate an initial delamination. To determine G_th, the fatigue fracture threshold below which no delamination growth would occur in less than 1 × 10^6 cycles, fatigue tests were conducted by cyclically loading specimens until delamination growth was detected. Consistent values of mode I fracture toughness, G_Ic, were measured from DCB specimens with inserts of thickness 75 microns or thinner, or with shear precracks. The fatigue DCB tests gave similar values of G_Ith for the 13, 25, and 75 micron specimens. Results for the shear-precracked specimens were significantly lower than for specimens without precracks. Results for both the static and fatigue ENF tests showed that the measured G_IIc and G_IIth values decreased with decreasing insert thickness, so that no limiting thickness could be determined. Results for specimens with inserts of 75 microns or thicker were significantly higher than the results for precracked specimens or specimens with 13 or 25 micron inserts.
NASA Astrophysics Data System (ADS)
Braud, A.; Girard, S.; Doualan, J. L.; Thuau, M.; Moncorgé, R.; Tkachuk, A. M.
2000-02-01
Energy-transfer processes have been quantitatively studied in various Tm:Yb-doped fluoride crystals. A comparison between the three host crystals examined (KY3F10, LiYF4, and BaY2F8) clearly shows that the efficiency of the Yb → Tm energy transfers is larger in KY3F10 than in LiYF4 or BaY2F8. The dependence of the energy-transfer parameters upon the codopant concentrations has been experimentally measured and compared with the results calculated on the basis of migration-assisted energy-transfer models. Using these energy-transfer parameters and a rate-equation model, we have performed a theoretical calculation of the laser thresholds for the 3H4 → 3F4 and 3H4 → 3H5 laser transitions of the Tm ion around 1.5 and 2.3 μm, respectively. Laser experiments performed at 1.5 μm in Yb:Tm:LiYF4 then led to laser threshold values in good agreement with those derived theoretically. Based on these results, optimized values for the Yb and Tm dopant concentrations for typical laser cavity and pump modes were finally derived to minimize the threshold pump powers for the laser transitions around 1.5 and 2.3 μm.
Cingoz, Ilker Deniz; Kizmazoglu, Ceren; Guvenc, Gonul; Sayin, Murat; Imre, Abdulkadir; Yuceer, Nurullah
2018-06-01
The aim of this study was to evaluate the olfactory function of patients who had undergone endoscopic transsphenoidal pituitary surgery. In this prospective study, the "Sniffin' Sticks" test was performed between June 2016 and April 2017 at Izmir Katip Celebi University Ataturk Training and Research Hospital. Thirty patients scheduled to undergo endoscopic transsphenoidal pituitary surgery were evaluated preoperatively and 8 weeks postoperatively using the Sniffin' Sticks test battery for olfactory function, odor threshold, odor discrimination, and odor identification. The patients were evaluated preoperatively by an otolaryngologist. The patients' demographic data and olfactory functions were analyzed with a t test and the Wilcoxon signed-rank test. The study group comprised 14 women (46.7%) and 16 men (53.3%). The mean age of the patients was 37.50 ± 9.43 years (range: 16-53 years). We found a significant difference between the preoperative and postoperative values of the odor identification test (P = 0.017); however, there was no significant difference between the preoperative and postoperative odor threshold values (P = 0.172) or odor discrimination values (P = 0.624). The threshold-discrimination-identification test scores were not significantly different (P = 0.110). The olfactory function of patients who were normosmic preoperatively was not affected postoperatively. This study shows that the endoscopic transsphenoidal technique for pituitary surgery without a nasal flap has no negative effect on olfactory function.
Bilevel thresholding of sliced image of sludge floc.
Chu, C P; Lee, D J
2004-02-15
This work examined the feasibility of employing various thresholding algorithms to determine the optimal bilevel thresholding value for estimating the geometric parameters of sludge flocs from microtome-sliced images and from confocal laser scanning microscope images. Morphological information extracted from the images depends on the bilevel thresholding value. According to the evaluation on luminescence-inverted images and fractal curves (the quadric Koch curve and the Sierpinski carpet), Otsu's method yields more stable performance than other histogram-based algorithms and is chosen to obtain the porosity. The maximum convex perimeter method, however, can probe the shapes and spatial distribution of the pores among the biomass granules in real sludge flocs. A combined algorithm is recommended for probing sludge floc structure.
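For reference, a compact implementation of Otsu's criterion (choosing the cut that maximizes between-class variance), applied to a toy bimodal image:

```python
# Minimal Otsu bilevel thresholding sketch (maximize between-class variance);
# a toy bimodal grayscale sample stands in for the sliced floc images.
import numpy as np

def otsu_threshold(image, bins=256):
    hist, edges = np.histogram(image, bins=bins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                  # class-0 weight at each candidate cut
    m = np.cumsum(p * centers)         # cumulative mean
    mt = m[-1]                         # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        between = (mt * w0 - m) ** 2 / (w0 * (1 - w0))
    return centers[np.nanargmax(between)]

rng = np.random.default_rng(2)
img = np.concatenate([rng.normal(60, 10, 5000), rng.normal(180, 15, 5000)])
print(f"Otsu threshold ~ {otsu_threshold(img):.1f}")  # lands between the two modes
```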
Socała, Katarzyna; Nieoczym, Dorota; Kowalczuk-Vasilev, Edyta; Wyska, Elżbieta; Wlaź, Piotr
2017-07-01
Activation of Nrf2 with sulforaphane has recently gained attention as a new therapeutic approach in the treatment of many diseases, including epilepsy. As a plant-derived compound, sulforaphane is considered to be safe and well-tolerated. It is widely consumed, also by patients suffering from seizures and taking antiepileptic drugs, but no toxicity profile of sulforaphane exists. Since many natural remedies and dietary supplements may increase seizure risk and potentially interact with antiepileptic drugs, the aim of our study was to investigate the acute effects of sulforaphane on seizure thresholds and the activity of some first- and second-generation antiepileptic drugs in mice. In addition, a preliminary toxicity profile of sulforaphane in mice after intraperitoneal injection was evaluated. The LD50 value of sulforaphane in mice was estimated at 212.67 mg/kg, and the TD50 value at 191.58 mg/kg. In seizure tests, sulforaphane at the highest dose tested (200 mg/kg) significantly decreased the thresholds for the onset of the first myoclonic twitch and the generalized clonic seizure in the iv PTZ test, as well as the threshold for the 6 Hz-induced psychomotor seizure. At doses of 10-200 mg/kg, sulforaphane did not affect the threshold for the iv PTZ-induced forelimb tonus or the threshold for maximal electroshock-induced hindlimb tonus. Interestingly, sulforaphane (at 100 mg/kg) potentiated the anticonvulsant efficacy of carbamazepine in the maximal electroshock seizure test. This interaction could have been pharmacokinetic in nature, as sulforaphane increased concentrations of carbamazepine in both serum and brain tissue. The toxicity study showed that high doses of sulforaphane produced marked sedation (at 150-300 mg/kg), hypothermia (at 150-300 mg/kg), impairment of motor coordination (at 200-300 mg/kg), decrease in skeletal muscle strength (at 250-300 mg/kg), and deaths (at 200-300 mg/kg). Moreover, blood analysis showed leucopenia in mice injected with sulforaphane at 200 mg/kg. In conclusion, since sulforaphane was proconvulsant at a toxic dose, the safety profile and the risk-to-benefit ratio of sulforaphane use in epileptic patients should be further evaluated.
Singh, Lakhwinder Pal; Bhardwaj, Arvind; Kumar, Deepak Kishore
2012-01-01
Occupational noise exposure and noise-induced hearing loss (NIHL) have been recognized as a problem among workers in Indian industries. The major industries in India are based on manufacturing, and appreciable numbers of casting and forging units are spread across the country. The objective of this study was to determine the prevalence of permanent hearing threshold shift among workers engaged in Indian iron and steel small and medium enterprises (SMEs) and to compare it with control-group subjects. As part of a hearing protection intervention, audiometric tests were conducted at low (250-1000 Hz), medium (1500-3000 Hz), and high (4000-8000 Hz) frequencies. The occurrence of hearing loss was determined based on hearing threshold levels with a low fence of 25 dB. Comparisons of hearing thresholds at different frequencies between the exposed and control groups were made using Student's t test. ANOVA was used to compare hearing threshold (dB) at different frequencies across occupations and years of experience. A P value <0.05 was considered statistically significant. All data are presented as mean (SD). Over 90% of workers engaged in various processes of the casting and forging industry showed hearing loss at the noise-sensitive medium and higher frequencies. Occupation was significantly associated with NIHL, and hearing loss was particularly high among workers in the forging section. The analyses revealed a higher prevalence of significant hearing loss among forging workers compared to workers associated with other activities. The study shows alarming signals of NIHL, especially in forging workers. Occupational noise exposure could be minimized by efficient control measures through engineering controls, administrative controls, and the use of personal protective devices. Applications of engineering and/or administrative controls are frequently not feasible in developing countries for technical and financial reasons. A complete hearing conservation programme, including training, audiometry, job rotation, and the use of hearing protection devices, is the most feasible method for protecting industrial workers from prevailing noise in workplace environments in developing countries.
Decay rates of magnetic modes below the threshold of a turbulent dynamo.
Herault, J; Pétrélis, F; Fauve, S
2014-04-01
We measure the decay rates of magnetic field modes in a turbulent flow of liquid sodium below the dynamo threshold. We observe that turbulent fluctuations induce energy transfers between modes with different symmetries (dipolar and quadrupolar). Using symmetry properties, we show how to measure the decay rate of each mode without being restricted to the one with the smallest damping rate. We observe that the respective values of the decay rates of these modes depend on the shape of the propellers driving the flow. Dynamical regimes, including field reversals, are observed only when the modes are both nearly marginal. This is in line with a recently proposed model.
Low latency counter event indication
Gara, Alan G [Mount Kisco, NY; Salapura, Valentina [Chappaqua, NY
2008-09-16
A hybrid counter array device for counting events with interrupt indication includes a first counter portion comprising N counter devices, each for counting signals representing event occurrences and providing a first count value representing lower order bits. An overflow bit device associated with each respective counter device is additionally set in response to an overflow condition. The hybrid counter array includes a second counter portion comprising a memory array device having N addressable memory locations in correspondence with the N counter devices, each addressable memory location for storing a second count value representing higher order bits. An operatively coupled control device monitors each associated overflow bit device and initiates incrementing a second count value stored at a corresponding memory location in response to a respective overflow bit being set. The incremented second count value is compared to an interrupt threshold value stored in a threshold register, and, when the second counter value is equal to the interrupt threshold value, a corresponding "interrupt arm" bit is set to enable a fast interrupt indication. On a subsequent roll-over of the lower bits of that counter, the interrupt will be fired.
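A toy software model of the mechanism described in this abstract: lower-order bits in a fast counter, higher-order bits in a memory-backed value, an "interrupt arm" bit set when the high-order value reaches the threshold, and the interrupt firing on the subsequent roll-over of the lower bits. Bit widths and the threshold are invented.

```python
# Toy model of the hybrid counter with fast interrupt indication; not the
# patented hardware, just the control flow it describes. Widths are invented.
LOW_BITS = 8
LOW_MAX = 1 << LOW_BITS

class HybridCounter:
    def __init__(self, threshold_high):
        self.low = 0                     # fast counter, lower-order bits
        self.high = 0                    # memory-backed higher-order bits
        self.threshold_high = threshold_high
        self.armed = False               # "interrupt arm" bit

    def count_event(self):
        self.low += 1
        fired = False
        if self.low == LOW_MAX:          # overflow of the lower-order bits
            self.low = 0
            if self.armed:               # interrupt fires on roll-over after arming
                fired = True
            self.high += 1               # control logic increments memory value
            if self.high == self.threshold_high:
                self.armed = True
        return fired

c = HybridCounter(threshold_high=2)
fired = [c.count_event() for _ in range(3 * LOW_MAX)]
print(fired.index(True) + 1)             # event count at which the interrupt fires
```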
Kotsiou, Ourania S; Tzortzi, Panagiota; Beta, Rafailia A A; Kyritsis, Athanasios; Gourgoulianis, Konstantinos I
2018-06-01
A follow-up thoracentesis is proposed in suspected atypical tuberculosis cases. The study aimed to define the variability of pleural ADA values across repeated thoracenteses in different types of pleural effusions (PEs) and to evaluate whether ADA variance, with regard to the cutoff value of 40 U/L, affected final diagnosis. A total of 131 patients with PEs of various etiologies underwent three repeated thoracenteses, and ADA values were estimated at each. 82% and 55% of patients had greater than 10% and 20% deviation from the highest ADA value, respectively. Of the patients with a variance above 20%, 36% had only increasing ADA values, while 19% had only decreasing values. With respect to the cutoff value of 40 U/L, ADA decreased below this threshold in only two cases, a man with tuberculous pleurisy and a woman with lymphoma, both in the course of treatment. Likewise, in only two cases with rising values did ADA finally exceed the cutoff limit, a man with rheumatoid pleurisy and a man with tuberculous pleurisy. Surprisingly, malignant PEs (MPEs) showed a higher percentage of increasing values compared with the other exudates, although these did not exceed the threshold. The determination of pleural ADA levels is a reproducible method for rapid tuberculosis diagnosis. The detected measurement deviations do not appear to affect final diagnosis. In specific situations, repeated ADA measurements may be valuable in directing further diagnostic evaluation. More investigation is needed to elucidate the possible prognostic significance of the increasing trend in ADA values in MPEs. © 2017 Wiley Periodicals, Inc.
Dual processing model of medical decision-making.
Djulbegovic, Benjamin; Hozo, Iztok; Beckstead, Jason; Tsalatsanis, Athanasios; Pauker, Stephen G
2012-09-03
Dual processing theory of human cognition postulates that reasoning and decision-making can be described as a function of both an intuitive, experiential, affective system (system I) and an analytical, deliberative (system II) processing system. To date, no formal descriptive model of medical decision-making based on dual processing theory has been developed. Here we postulate such a model and apply it to a common clinical situation: whether treatment should be administered to a patient who may or may not have a disease. We developed a mathematical model in which we linked a recently proposed descriptive psychological model of cognition with the threshold model of medical decision-making and show how this approach can be used to better understand decision-making at the bedside and explain the widespread variation in treatments observed in clinical practice. We show that whether a physician treats at higher (or lower) probability levels than the prescriptive therapeutic threshold obtained via system II processing is moderated by system I and by the ratio of benefits and harms as evaluated by both systems. Under some conditions, the system I decision maker's threshold may drop dramatically below the expected utility threshold derived by system II; this can explain the overtreatment often seen in contemporary practice. The opposite can also occur, as in situations where empirical evidence is considered unreliable or when the cognitive processes of decision-makers are biased through recent experience: the threshold then increases relative to the normative value derived via system II using expected utility. This inclination toward higher diagnostic certainty may, in turn, explain the undertreatment that is also documented in current medical practice. We have developed the first dual processing model of medical decision-making, which has the potential to enrich the medical decision-making field, still dominated to a large extent by expected utility theory. The model also provides a platform for reconciling two groups of competing dual processing theories (parallel competitive and default-interventionist theories).
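The threshold model linked here has a compact classical form due to Pauker and Kassirer: treat when the probability of disease exceeds the ratio of harm to harm plus benefit. A minimal sketch of that prescriptive (system II) threshold; the function name and the example utilities are illustrative, not taken from the paper:

```python
def therapeutic_threshold(benefit: float, harm: float) -> float:
    """Classic treatment threshold: treat when P(disease) > H / (H + B).

    benefit: net utility gained by treating a diseased patient (B)
    harm:    net utility lost by treating a non-diseased patient (H)
    """
    return harm / (harm + benefit)

# Illustrative values: if harms are a quarter of benefits, treat above p = 0.2.
# System I influences would shift the acted-upon threshold away from this value.
print(therapeutic_threshold(benefit=4.0, harm=1.0))  # 0.2
```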
Lucente, Giuseppe; Lam, Steven; Schneider, Heike; Picht, Thomas
2018-02-01
Non-invasive pre-surgical mapping of eloquent brain areas with navigated transcranial magnetic stimulation (nTMS) is a useful technique linked to improved surgical planning and patient outcomes. The stimulator output intensity and subsequent resting motor threshold (rMT) determination are based on the motor-evoked potential (MEP) elicited in the target muscle with an amplitude above a predetermined threshold of 50 μV. However, a subset of patients is unable to achieve complete relaxation of the target muscles, resulting in false positives that jeopardize mapping validity with conventional MEP determination protocols. Our aim was to explore the feasibility and reproducibility of a novel mapping approach that investigates how increasing the MEP amplitude threshold to 300 and 500 μV affects the resulting motor maps. Seven healthy subjects underwent motor mapping with nTMS. The rMT was calculated with the conventional methodology as well as with experimental 300- and 500-μV MEP amplitude thresholds. Motor mapping was performed at 105% of rMT stimulator intensity using the first dorsal interosseous (FDI) as the target muscle. Motor mapping was possible in all subjects with both the conventional and experimental setups. Motor area maps with the conventional 50-μV threshold showed poor correlation with 300-μV maps (α = 0.446, p < 0.001), but excellent consistency with 500-μV motor area maps (α = 0.974, p < 0.001). MEP latencies were significantly less variable at the higher thresholds (23 ms for 50 μV vs. 23.7 ms for 300 μV vs. 23.7 ms for 500 μV, p < 0.001). A slight but significant increase of the electric field (EF) value was found (EF: 60.8 V/m vs. 64.8 V/m vs. 66 V/m, p < 0.001). Our study demonstrates the feasibility of increasing the MEP detection threshold to 500 μV in rMT determination and motor area mapping with nTMS without losing precision.
Vysotsky, Yu B; Belyaeva, E A; Fainerman, V B; Vollhardt, D; Aksenenko, E V; Miller, R
2009-04-02
In the framework of the semiempirical PM3 method, the thermodynamic parameters of cis isomers of unsaturated carboxylic acids at the air/water interface are studied. The model systems used are unsaturated cis fatty acids of the composition Δ = 12-15 and ω = 6-11, where Δ and ω refer to the number of carbon atoms between the functional group and the double bond, and between the double bond and the methyl group, respectively. For dimers, trimers, and tetramers of the four acid series, the thermodynamic parameters of clusterization are calculated. It is shown that the position of the double bond does not significantly affect the values of thermodynamic parameters of formation and clusterization of carboxylic acids for equal chain lengths (n = Δ + ω). The calculated results show that for cis unsaturated fatty acids with odd Δ values the spontaneous clusterization threshold corresponds to n = 17-18 carbon atoms in the alkyl chain, while for monounsaturated acids with even Δ values this threshold corresponds to n = 18-19 carbon atoms in the alkyl chain. These differences in the clusterization threshold between the acids with even and odd Δ values are attributed to the formation of additional intermolecular hydrogen bonds between the ketonic oxygen atom of one monomer and the hydrogen atom linked to the alpha-carbon atom of the second monomer for the acids with odd Δ values, or between the hydroxyl oxygen atom of one monomer and the hydrogen atom linked to the alpha-carbon atom of the second monomer for the acids with even Δ values. The results obtained in the study agree satisfactorily with our experimental data for cis unsaturated nervonic (Δ15, ω9) and erucic (Δ13, ω9) acids, and with published data for some fatty acids, namely cis-16-heptadecenoic (Δ16, ω1), cis-9-hexadecenoic (Δ7, ω9), cis-11-eicosenoic (Δ11, ω9), and cis-9-octadecenoic (Δ9, ω9) acids.
Sparing of normal urothelium in hexyl-aminolevulinate-mediated photodynamic therapy
NASA Astrophysics Data System (ADS)
Vaucher, Laurent; Jichlinski, Patrice; Lange, Norbert; Ritter-Schenk, Celine; van den Bergh, Hubert; Kucera, Pavel
2005-04-01
This work determines, on an in vitro porcine urothelium model, the threshold values of different parameters such as photosensitizer concentration, irradiation parameters, and production of reactive oxygen species, in order to control the damage to normal urothelium and spare about 50% of the normal mucosa. For a three-hour HAL incubation time, these threshold values were 0.75 J/cm2 at 75 mW/cm2 or 0.15 J/cm2 at 30 mW/cm2 with blue light, and 0.55 J/cm2 at 30 mW/cm2 with white light. This means that, for identical fluence rates, the threshold value for white light irradiation may be about 3 times higher than for blue light irradiation.
Dynamics of a network-based SIS epidemic model with nonmonotone incidence rate
NASA Astrophysics Data System (ADS)
Li, Chun-Hsien
2015-06-01
This paper studies the dynamics of a network-based SIS epidemic model with a nonmonotone incidence rate. This type of nonlinear incidence can be used to describe the psychological effect of certain diseases spreading in a contact network at high infective levels. We first find a threshold value for the transmission rate. This value completely determines the dynamics of the model and, interestingly, does not depend on the functional form of the nonlinear incidence rate. If the transmission rate is less than or equal to the threshold value, the disease dies out; otherwise, it is permanent. Numerical experiments are given to illustrate the theoretical results. We also consider the effect of the nonlinear incidence on the epidemic dynamics.
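For comparison, the classic degree-based mean-field result for SIS dynamics on annealed networks puts the spreading threshold at λc = ⟨k⟩/⟨k²⟩; the paper's threshold need not coincide with this, but a short sketch of the standard quantity for an empirical degree sequence helps fix ideas:

```python
import numpy as np

def sis_threshold(degrees) -> float:
    """Degree-based mean-field SIS epidemic threshold: lambda_c = <k> / <k^2>."""
    k = np.asarray(degrees, dtype=float)
    return k.mean() / (k ** 2).mean()

# Illustrative heavy-tailed degree sequence (Zipf exponent 3, truncated)
rng = np.random.default_rng(0)
degrees = rng.zipf(3, size=10_000)
degrees = degrees[degrees <= 1_000]
print(f"lambda_c ~ {sis_threshold(degrees):.4f}")
```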
Abejón, David; Rueda, Pablo; del Saz, Javier; Arango, Sara; Monzón, Eva; Gilsanz, Fernando
2015-04-01
Neurostimulation is the process and technology derived from the application of electricity with different parameters to activate or inhibit nerve pathways. Pulse width (Pw) is the duration of each electrical impulse and, along with amplitude (I), determines the total energy charge of the stimulation. The aim of the study was to test Pw values to find the most adequate pulse widths in rechargeable systems to obtain the largest coverage of the painful area, the most comfortable paresthesia, and the greatest patient satisfaction. A study of the parameters was performed, varying Pw while maintaining a fixed frequency of 50 Hz. Data on perception threshold (Tp), discomfort threshold (Td), and therapeutic threshold (Tt) were recorded, applying 14 increasing Pw values ranging from 50 µsec to 1000 µsec. Lastly, the behavior of the therapeutic range (TR), the coverage of the painful area, the subjective patient perception of paresthesia, and the degree of patient satisfaction were assessed. The findings after analyzing the different thresholds were as follows: when varying the Pw, the differences obtained at each threshold (Tp, Tt, and Td) were statistically significant (p < 0.05). The differences among the resulting Tp values and among the resulting Tt values were statistically significant when varying Pw from 50 up to 600 µsec (p < 0.05); for Pw levels of 600 µsec and up, no differences were observed in these thresholds. In the case of Td, significant differences existed as Pw increased from 50 to 700 µsec (p ≤ 0.05). Coverage increased in a statistically significant way (p < 0.05) from Pw values of 50 µsec to 300 µsec. Good or very good subjective perception was reported at about Pw 300 µsec. Patient paresthesia coverage was introduced as an extra variable in the chronaxie-rheobase curve, allowing the adjustment of Pw values for optimal programming. Plotting coverage against the classic chronaxie-rheobase relation adds a third axis (z), obtained by multiplying each combination of Pw value and amplitude by the percentage of coverage corresponding to those values. Using this comparison of the chronaxie-rheobase curve vs. coverage, the optimal Pw values obtained differ from those obtained by classic methods. © 2014 International Neuromodulation Society.
Application of automatic threshold in dynamic target recognition with low contrast
NASA Astrophysics Data System (ADS)
Miao, Hua; Guo, Xiaoming; Chen, Yu
2014-11-01
A hybrid photoelectric joint transform correlator can achieve automatic real-time recognition with high precision through the combination of optical and electronic devices. When recognizing targets with low contrast using a photoelectric joint transform correlator, because of differences in attitude, brightness, and grayscale between target and template, only four to five frames of a dynamic target can be recognized without any processing. A CCD camera is used to capture the dynamic target images at 25 frames per second. Automatic thresholding has many advantages, such as fast processing speed, effective suppression of noise interference, enhancement of the diffraction energy of useful information, and better preservation of the outlines of target and template, so this method plays a very important role in target recognition with the optical correlation method. However, the threshold obtained automatically by the program cannot achieve the best recognition results for dynamic targets, because outline information is broken to some extent; in most cases, the optimal threshold is obtained by manual intervention. Aiming at the characteristics of dynamic targets, an improved automatic thresholding procedure is implemented by multiplying the Otsu threshold of target and template by a scale coefficient of the processed image, combined with mathematical morphology. With this improved automatic thresholding, the optimal threshold can be obtained automatically for dynamic low-contrast target images. The recognition rate of dynamic targets is improved by reducing the effect of background noise and increasing the correlation information. A series of dynamic tank images moving at about 70 km/h was adopted as target images. Without any processing, the 1st frame of this series can correlate only with the 3rd frame. With the Otsu threshold, the 80th frame can be recognized; with the improved automatic threshold processing of the joint images, this number increases to 89 frames. Experimental results show that the improved automatic thresholding has special application value for the recognition of dynamic targets with low contrast.
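A minimal sketch of the scaled-Otsu idea described above, written with OpenCV; the scale coefficient and the morphology kernel size are illustrative placeholders rather than the authors' values:

```python
import cv2
import numpy as np

def scaled_otsu_binarize(img_gray: np.ndarray, scale: float = 0.9) -> np.ndarray:
    """Binarize an 8-bit grayscale image with an Otsu threshold multiplied by
    a scale coefficient, then clean the result with morphological opening."""
    otsu_level, _ = cv2.threshold(img_gray, 0, 255,
                                  cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    _, binary = cv2.threshold(img_gray, scale * otsu_level, 255,
                              cv2.THRESH_BINARY)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
    return cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
```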
NASA Astrophysics Data System (ADS)
van Wyk, F.; Highcock, E. G.; Field, A. R.; Roach, C. M.; Schekochihin, A. A.; Parra, F. I.; Dorland, W.
2017-11-01
We investigate the effect of varying the ion temperature gradient (ITG) and toroidal equilibrium-scale sheared flow on ion-scale turbulence in the outer core of MAST by means of local gyrokinetic simulations. We show that nonlinear simulations reproduce the experimental ion heat flux and that the experimentally measured values of the ITG and the flow shear lie close to the turbulence threshold. We demonstrate that the system is subcritical in the presence of flow shear, i.e., the system is formally stable to small perturbations but transitions to a turbulent state given a large enough initial perturbation. We propose that the transition to subcritical turbulence occurs via an intermediate state dominated by a small number of coherent, long-lived structures close to threshold, which increase in number as the system is taken away from the threshold into the more strongly turbulent regime, until they fill the domain and a more conventional turbulence emerges. We show that the properties of the turbulence are effectively functions of the distance to threshold, as quantified by the ion heat flux. We make quantitative comparisons of correlation lengths, times, and amplitudes between our simulations and experimental measurements using the MAST BES diagnostic. We find reasonable agreement of the correlation properties, most notably of the correlation time, for which significant discrepancies were found in previous numerical studies of MAST turbulence.
Iwasaki, Satoshi; Usami, Shin-Ichi; Takahashi, Haruo; Kanda, Yukihiko; Tono, Tetsuya; Doi, Katsumi; Kumakawa, Kozo; Gyo, Kiyofumi; Naito, Yasushi; Kanzaki, Sho; Yamanaka, Noboru; Kaga, Kimitaka
2017-07-01
To report on the safety and efficacy of an investigational active middle ear implant (AMEI) in Japan and to compare results with preoperative results obtained with a hearing aid. In this prospective study conducted in Japan, 23 Japanese-speaking adults suffering from conductive or mixed hearing loss received a VIBRANT SOUNDBRIDGE implanted at the round window. Postoperative thresholds, speech perception results (word recognition scores, speech reception thresholds, signal-to-noise ratio [SNR]), and quality of life questionnaires at 20 weeks were compared with preoperative results, with all patients receiving the same, best available hearing aid (HA). Statistically significant improvements were observed in postoperative AMEI-aided thresholds (1, 2, 4, and 8 kHz) and in speech reception thresholds and word recognition scores, compared with preoperative HA-aided results. On the SNR, the subjects' mean values showed statistically significant improvement, with -5.7 dB SNR for the AMEI-aided mean versus -2.1 dB SNR for the preoperative HA-assisted mean. The APHAB quality of life questionnaire also showed statistically significant improvement with the AMEI. Results with the AMEI applied to the round window exceeded those of the best available hearing aid in speech perception as well as in quality of life questionnaires. There were minimal adverse events or changes to patients' residual hearing.
Rodriguez-Martinez, Carlos E; Sossa-Briceño, Monica P; Castro-Rodriguez, Jose A
2018-05-01
Asthma educational interventions have been shown to improve several clinically and economically important outcomes. However, these interventions are costly in themselves and could lead to even higher disease costs. A cost-effectiveness threshold analysis is therefore helpful for determining the cost below which an educational intervention remains cost-effective. The aim of the present study was to perform a cost-effectiveness threshold analysis to determine the level at which the cost of a pediatric asthma educational intervention would be cost-effective and cost-saving. A Markov-type model was developed to estimate the costs and health outcomes of a simulated cohort of pediatric patients with persistent asthma treated over a 12-month period. Effectiveness parameters were obtained from a single uncontrolled before-and-after study performed with Colombian asthmatic children. Cost data were obtained from official databases provided by the Colombian Ministry of Health. The main outcome was quality-adjusted life-years (QALYs). A deterministic threshold sensitivity analysis showed that the asthma educational intervention will be cost-saving to the health system if its cost is under US$513.20, and cost-effective if its cost is below US$967.40. This study identified the level at which the cost of a pediatric asthma educational intervention will be cost-effective and cost-saving for the health system in Colombia. Our findings could be a useful aid for decision makers in efficiently allocating limited resources when planning asthma educational interventions for pediatric patients.
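The logic of such a threshold analysis reduces to two ceilings on the intervention's price: it is cost-saving while its cost stays below the downstream costs it averts, and cost-effective while its incremental cost-effectiveness ratio stays below the willingness to pay per QALY. A hedged sketch with entirely hypothetical numbers; the study's Markov cohort machinery is not reproduced here:

```python
def intervention_cost_ceilings(averted_costs: float, qaly_gain: float,
                               wtp_per_qaly: float):
    """Return (cost-saving ceiling, cost-effective ceiling) in US$.

    averted_costs: downstream treatment costs avoided by the intervention
    qaly_gain:     incremental QALYs gained per cohort member
    wtp_per_qaly:  willingness-to-pay threshold per QALY
    """
    cost_saving_ceiling = averted_costs                       # ICER <= 0
    cost_effective_ceiling = averted_costs + wtp_per_qaly * qaly_gain
    return cost_saving_ceiling, cost_effective_ceiling

# Hypothetical inputs chosen only to mirror the structure of the analysis
print(intervention_cost_ceilings(averted_costs=513.20,
                                 qaly_gain=0.09, wtp_per_qaly=5000.0))
```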
An avoidance behavior model for migrating whale populations
NASA Astrophysics Data System (ADS)
Buck, John R.; Tyack, Peter L.
2003-04-01
A new model is presented for the avoidance behavior of migrating marine mammals in the presence of a noise stimulus. This model assumes that each whale will adjust its movement pattern near a sound source to maintain its exposure below its own individually specific maximum received sound-pressure level, called its avoidance threshold. The probability distribution function (PDF) of this avoidance threshold across individuals characterizes the migrating population. The avoidance threshold PDF may be estimated by comparing the distribution of migrating whales during playback and control conditions at their closest point of approach to the sound source. The proposed model was applied to the January 1998 experiment which placed a single acoustic source from the U.S. Navy SURTASS-LFA system in the migration corridor of grey whales off the California coast. This analysis found that the median avoidance threshold for this migrating grey whale population was 135 dB, with 90% confidence that the median threshold was within +/-3 dB of this value. This value is less than the 141 dB value for 50% avoidance obtained when the 1984 "Probability of Avoidance" model of Malme et al. was applied to the same data. [Work supported by ONR.]
The (in)famous GWAS P-value threshold revisited and updated for low-frequency variants.
Fadista, João; Manning, Alisa K; Florez, Jose C; Groop, Leif
2016-08-01
Genome-wide association studies (GWAS) have long relied on proposed statistical significance thresholds to be able to differentiate true positives from false positives. Although the genome-wide significance P-value threshold of 5 × 10⁻⁸ has become a standard for common-variant GWAS, it has not been updated to cope with the lower allele frequency spectrum used in many recent array-based GWAS and sequencing studies. Using a whole-genome- and -exome-sequencing data set of 2875 individuals of European ancestry from the Genetics of Type 2 Diabetes (GoT2D) project and a whole-exome-sequencing data set of 13 000 individuals from five ancestries from the GoT2D and T2D-GENES (Type 2 Diabetes Genetic Exploration by Next-generation sequencing in multi-Ethnic Samples) projects, we describe guidelines for genome- and exome-wide association P-value thresholds needed to correct for multiple testing, explaining the impact of linkage disequilibrium thresholds for distinguishing independent variants, minor allele frequency, and ancestry characteristics. We emphasize the advantage of studying recent genetic isolate populations when performing rare and low-frequency genetic association analyses, as the multiple testing burden is diminished due to higher genetic homogeneity.
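The conventional 5 × 10⁻⁸ figure is a Bonferroni-style correction of α = 0.05 for roughly one million independent common variants; moving to lower-frequency variants mainly changes the effective number of independent tests. A minimal sketch (the effective test counts below are illustrative, not the paper's estimates):

```python
def genomewide_threshold(alpha: float, n_effective_tests: float) -> float:
    """Bonferroni-style genome-wide significance threshold."""
    return alpha / n_effective_tests

print(genomewide_threshold(0.05, 1e6))  # 5e-08, the common-variant standard
print(genomewide_threshold(0.05, 1e7))  # 5e-09, if denser panels imply 10x tests
```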
Do poison center triage guidelines affect healthcare facility referrals?
Benson, B E; Smith, C A; McKinney, P E; Litovitz, T L; Tandberg, W D
2001-01-01
The purpose of this study was to determine the extent to which poison center triage guidelines influence healthcare facility referral rates for acute, unintentional acetaminophen-only poisoning and acute, unintentional adult formulation iron poisoning. Managers of US poison centers were interviewed by telephone to determine their center's triage threshold value (mg/kg) for acute iron and acute acetaminophen poisoning in 1997. Triage threshold values and healthcare facility referral rates were fit to a univariate logistic regression model for acetaminophen and iron using maximum likelihood estimation. Triage threshold values ranged from 120-201 mg/kg (acetaminophen) and 16-61 mg/kg (iron). Referral rates ranged from 3.1% to 24% (acetaminophen) and 3.7% to 46.7% (iron). There was a statistically significant inverse relationship between the triage value and the referral rate for acetaminophen (p < 0.001) and iron (p = 0.0013). The model explained 31.7% of the referral variation for acetaminophen but only 4.1% of the variation for iron. There is great variability in poison center triage values and referral rates for iron and acetaminophen poisoning. Guidelines can account for a meaningful proportion of referral variation. Their influence appears to be substance dependent. These data suggest that efforts to determine and utilize the highest, safe, triage threshold value could substantially decrease healthcare costs for poisonings as long as patient medical outcomes are not compromised.
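A sketch of the kind of univariate model described, fit by maximum likelihood as a binomial GLM on center-level counts; the data below are fabricated solely to show the mechanics, and a negative slope corresponds to the reported inverse relationship between triage value and referral rate:

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical poison-center data: triage threshold (mg/kg) vs. referrals
threshold = np.array([120, 140, 150, 160, 180, 201], dtype=float)
referred = np.array([240, 180, 150, 120, 70, 40])    # cases referred
managed = np.array([760, 820, 850, 880, 930, 960])   # cases managed on site

X = sm.add_constant(threshold)
model = sm.GLM(np.column_stack([referred, managed]), X,
               family=sm.families.Binomial())
result = model.fit()            # maximum likelihood estimation
print(result.params)            # negative slope: higher threshold, fewer referrals
```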
Xu, Yueru; Ye, Zhirui; Wang, Yuan; Wang, Chao; Sun, Cuicui
2018-05-18
This paper focuses on the effect of road lighting on road safety at accesses and quantitatively analyzes the relationship between road lighting and road safety. An Artificial Neural Network (ANN), one of the most popular machine-learning methods in recent years, was applied in this study; it does not require any pre-defined assumptions. The method was applied using field data collected from ten road segments in Nanjing, Jiangsu Province, China. The results show that the impact of road lighting on road safety at accesses is significant, and that road lighting has greater influence when vehicle speeds are higher or the number of lanes is larger. A threshold illuminance was also found: the safety level at accesses becomes stable once this value is reached. Improving illuminance decreases the speed variation among vehicles and improves the safety level. In addition, high-grade roads need better illuminance at accesses. The threshold value can be obtained from the related variables and used to develop scientific guidelines for traffic management organizations.
NASA Technical Reports Server (NTRS)
Hastings, E. C., Jr.; Shanks, R. E.; Mueller, A. W.
1976-01-01
Noise measurements have been made with a twin-engine commercial jet aircraft making 3 deg approaches and level flyovers. The flight-test data showed that, in the standard 3 deg approach configuration with 40 deg flaps, the effective perceived noise level (EPNL) had a value of 109.5 effective perceived noise decibels (EPNdB). This result was in agreement with unpublished data obtained with the same type of aircraft during noise certification tests; the 3 deg approaches made with 30 deg flaps and slightly reduced thrust reduced the EPNL value by 1 EPNdB. Extended center-line noise determined during the 3 deg approaches with 40 deg flaps showed that the maximum reference A-weighted sound pressure level (LA,max)ref varied from 100.0 dB(A) at 2.01 km (1.08 n. mi.) from the threshold to 87.4 dB(A) at 6.12 km (3.30 n. mi.) from the threshold. These test values were about 3 dB(A) higher than the estimates used for comparison. The test data along the extended center line during approaches with 30 deg flaps were 1 dB(A) lower than those for approaches with 40 deg flaps. Flight-test data correlating (LA,max)ref with thrust at altitudes of 122 m (400 ft) and 610 m (2000 ft) were in agreement with the reference data used for comparison.
Phase transition of Boolean networks with partially nested canalizing functions
NASA Astrophysics Data System (ADS)
Jansen, Kayse; Matache, Mihaela Teodora
2013-07-01
We derive the critical condition for the phase transition of a Boolean network governed by partially nested canalizing functions, for which a fraction of the inputs are canalizing while the remaining non-canalizing inputs obey a complementary threshold Boolean function. Past studies have considered the stability of fully or partially nested canalizing functions paired with random choices of the complementary function; some of those studies reported conflicting results regarding the presence of chaotic behavior. Moreover, those studies focus mostly on ergodic networks in which initial states are assumed equally likely. We relax that assumption and find the critical condition for the sensitivity of the network under a non-ergodic scenario. We use the proposed mathematical model to determine parameter values for which phase transitions from order to chaos occur. We generate Derrida plots to show that the mathematical model matches the actual network dynamics. The phase transition diagrams indicate that both order and chaos can occur, and that certain parameters induce a larger range of values leading to order versus chaos. The edge-of-chaos curves are identified analytically and numerically. It is shown that the depth of canalization does not cause major dynamical changes once certain thresholds are reached; these thresholds are fairly small in comparison to the connectivity of the nodes.
Estimating the extreme low-temperature event using nonparametric methods
NASA Astrophysics Data System (ADS)
D'Silva, Anisha
This thesis presents a new method of estimating the one-in-N low temperature threshold using a non-parametric statistical method called kernel density estimation, applied to daily average wind-adjusted temperatures. We apply our One-in-N Algorithm to local gas distribution companies (LDCs), as they have to forecast the daily natural gas needs of their consumers. In winter, demand for natural gas is high. Extreme low temperature events are not directly related to an LDC's gas demand forecasting, but knowledge of extreme low temperatures is important to ensure that an LDC has enough capacity to meet customer demands when extreme low temperatures are experienced. We present a detailed explanation of our One-in-N Algorithm and compare it to methods using the generalized extreme value distribution, the normal distribution, and the variance-weighted composite distribution. We show that our One-in-N Algorithm estimates the one-in-N low temperature threshold more accurately than these methods according to the root mean square error (RMSE) measure at a 5% level of significance. The One-in-N Algorithm is tested by counting the number of times the daily average wind-adjusted temperature is less than or equal to the one-in-N low temperature threshold.
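A compact sketch of the kernel-density idea: fit a Gaussian KDE to daily average wind-adjusted temperatures and invert its CDF at the exceedance probability of one day in N winters. The 90-day winter length, the Gaussian kernel, and the synthetic data are assumptions for illustration, not the thesis's exact construction:

```python
import numpy as np
from scipy.stats import gaussian_kde
from scipy.optimize import brentq

def one_in_n_threshold(daily_temps, n_years: float, days_per_winter: int = 90):
    """Temperature with a one-in-N-years lower-tail exceedance, via a KDE."""
    kde = gaussian_kde(daily_temps)
    p = 1.0 / (n_years * days_per_winter)          # target tail probability
    f = lambda t: kde.integrate_box_1d(-np.inf, t) - p
    lo, hi = daily_temps.min() - 30.0, daily_temps.max()
    return brentq(f, lo, hi)                       # root of CDF(t) = p

rng = np.random.default_rng(1)
temps = rng.normal(-5.0, 8.0, size=30 * 90)        # 30 synthetic winters
print(one_in_n_threshold(temps, n_years=20))
```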
Li, Jing; Blakeley, Daniel; Smith?, Robert J.
2011-01-01
The basic reproductive ratio, R0, is one of the fundamental concepts in mathematical biology. It is a threshold parameter, intended to quantify the spread of disease by estimating the average number of secondary infections in a wholly susceptible population, giving an indication of the invasion strength of an epidemic: if R0 < 1, the disease dies out, whereas if R0 > 1, the disease persists. R0 has been widely used as a measure of disease strength to estimate the effectiveness of control measures and to form the backbone of disease-management policy. However, in almost every aspect that matters, R0 is flawed. Diseases can persist with R0 < 1, while diseases with R0 > 1 can die out. We show that the same model of malaria gives many different values of R0, depending on the method used, with the sole common property that they have a threshold at 1. We also survey estimated values of R0 for a variety of diseases, and examine some of the alternatives that have been proposed. If R0 is to be used, it must be accompanied by caveats about the method of calculation, underlying model assumptions and evidence that it is actually a threshold. Otherwise, the concept is meaningless. PMID:21860658
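One of the calculation methods alluded to is the next-generation matrix: R0 is the spectral radius of F V⁻¹, where F collects new-infection rates and V collects transfers between infected compartments. A sketch with a toy SEIR-type example; the parameters are illustrative, and the paper's point is precisely that other methods can give other values:

```python
import numpy as np

def r0_next_generation(F: np.ndarray, V: np.ndarray) -> float:
    """R0 as the spectral radius of the next-generation matrix F V^{-1}."""
    ngm = F @ np.linalg.inv(V)
    return max(abs(np.linalg.eigvals(ngm)))

# Toy SEIR example with infected compartments (E, I)
beta, sigma, gamma = 0.3, 0.2, 0.1
F = np.array([[0.0, beta], [0.0, 0.0]])        # new infections enter E
V = np.array([[sigma, 0.0], [-sigma, gamma]])  # E -> I -> removed
print(r0_next_generation(F, V))                # beta / gamma = 3.0 here
```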
Percolation Laws of a Fractal Fracture-Pore Double Medium
NASA Astrophysics Data System (ADS)
Zhao, Yangsheng; Feng, Zengchao; Lv, Zhaoxing; Zhao, Dong; Liang, Weiguo
2016-12-01
The fracture-pore double porosity medium is one of the most common media in nature, for example, rock mass in strata. Fractures have a more significant effect on fluid flow than pores in a fracture-pore double porosity medium; hence, the fracture effect should be considered when studying percolation phenomena in porous media. In this paper, based on the fractal distribution law, three-dimensional (3D) fracture surfaces, and two-dimensional (2D) fracture traces in rock mass, the locations of fracture surfaces or traces are determined using a random function with uniform distribution. Pores are superimposed to build a fractal fracture-pore double medium. Numerical experiments were performed to show percolation phenomena in the fracture-pore double medium. The percolation threshold can be determined from three independent variables (porosity n, fracture fractal dimension D, and initial value of fracture number N0): once any two are fixed, a critical point in the percolation probability appears as the remaining parameter varies. When the initial value of the fracture number is greater than zero, the percolation threshold in the fracture-pore medium is much smaller than that in a pore medium. When the fracture number equals zero, the fracture-pore medium degenerates to a pore medium, and the two percolation thresholds coincide.
Robbiano, Valentina; Paternò, Giuseppe M; La Mattina, Antonino A; Motti, Silvia G; Lanzani, Guglielmo; Scotognella, Francesco; Barillaro, Giuseppe
2018-05-22
Silicon photonics would strongly benefit from a monolithically integrated low-threshold silicon-based laser operating at room temperature, which represents today the main challenge toward low-cost and power-efficient electronic-photonic integrated circuits. Here we demonstrate low-threshold lasing from fully transparent nanostructured porous silicon (PSi) monolithic microcavities (MCs) infiltrated with a polyfluorene derivative, namely, poly(9,9-di-n-octylfluorenyl-2,7-diyl) (PFO). The PFO-infiltrated PSiMCs support single-mode blue lasing at the resonance wavelength of 466 nm, with a line width of ∼1.3 nm and a lasing threshold of 5 nJ (15 μJ/cm2), a value that is at the state of the art for PFO lasers. Furthermore, time-resolved photoluminescence shows a significant shortening (∼57%) of the PFO emission lifetime in the PSiMCs with respect to nonresonant PSi reference structures, confirming a dramatic variation of the radiative decay rate due to a Purcell effect. Our results, given also that blue lasing is a worst case for silicon photonics, are highly appealing for the development of low-cost, low-threshold silicon-based lasers with wavelengths tunable from the visible to the near-infrared region by simple infiltration of suitable emitting polymers into monolithically integrated nanostructured PSiMCs.
Effect of postprandial thermogenesis on the cutaneous vasodilatory response during exercise.
Hayashi, Keiji; Ito, Nozomi; Ichikawa, Yoko; Suzuki, Yuichi
2014-08-01
To examine the effect of postprandial thermogenesis on the cutaneous vasodilatory response, 10 healthy male subjects exercised for 30 min on a cycle ergometer at 50% of peak oxygen uptake, with and without food intake. Mean skin temperature, mean body temperature (Tb), heart rate, oxygen uptake, carbon dioxide elimination, and respiratory quotient were all significantly higher at baseline in the session with food intake than in the session without food intake. To evaluate the cutaneous vasodilatory response, relative laser Doppler flowmetry values were plotted against esophageal temperature (Tes) and Tb. Regression analysis revealed that the body temperature threshold for cutaneous vasodilation tended to be higher with food intake than without it, but there were no significant differences in sensitivity. To clarify the effect of postprandial thermogenesis on the threshold for cutaneous vasodilation, the between-session differences in the Tes threshold and the Tb threshold were plotted against the between-session differences in baseline Tes and baseline Tb, respectively. Linear regression analysis of the resultant plots showed significant positive linear relationships (Tes: r = 0.85, P < 0.01; Tb: r = 0.67, P < 0.05). These results suggest that postprandial thermogenesis increases baseline body temperature, which raises the body temperature threshold for cutaneous vasodilation during exercise.
Assenova, Valentina A
2018-01-01
Complex innovations, that is, ideas, practices, and technologies that hold uncertain benefits for potential adopters, often vary in their ability to diffuse in different communities over time. To explain why, I develop a model of innovation adoption in which agents engage in naïve (DeGroot) learning about the value of an innovation within their social networks. Using simulations on Bernoulli random graphs, I examine how adoption varies with network properties and with the distribution of initial opinions and adoption thresholds. The results show that: (i) low-density and high-asymmetry networks produce polarization in influence to adopt an innovation over time, (ii) increasing network density and asymmetry promote adoption under a variety of opinion and threshold distributions, and (iii) the optimal levels of density and asymmetry in networks depend on the distribution of thresholds: networks with high density (>0.25) and high asymmetry (>0.50) are optimal for maximizing diffusion when adoption thresholds are right-skewed (i.e., barriers to adoption are low), but networks with low density (<0.01) and low asymmetry (<0.25) are optimal when thresholds are left-skewed. I draw on data from a diffusion field experiment to predict adoption over time and compare the results to observed outcomes.
Running economy and body composition between competitive and recreational level distance runners.
Mooses, Martin; Jürimäe, J; Mäestu, J; Mooses, K; Purge, P; Jürimäe, T
2013-09-01
The aim of the present study was to compare running economy between competitive and recreational level athletes at their individual ventilatory thresholds on the track, and to compare the body composition parameters that are related to individual running economy measured on the track. We performed a cross-sectional analysis of a total of 45 male runners classified as competitive runners (CR; n = 28) or recreational runners (RR; n = 17). All runners performed an incremental test on a treadmill until voluntary exhaustion and, at least 48 h later, a 2 × 2000 m test on an indoor track at intensities corresponding to ventilatory threshold 1 and ventilatory threshold 2. During the running tests, athletes wore a portable oxygen analyzer. Body composition was measured with the dual-energy X-ray absorptiometry (DXA) method. Running economy at the first ventilatory threshold was not significantly related to any of the measured body composition values or leg mass ratios in either the competitive or the recreational runners group. This study showed that there was no difference in running economy between distance runners of different performance levels when running on the track, while there was a difference in the second ventilatory threshold speed between the groups. Differences in running economy between competitive and recreational athletes cannot be explained by body composition and/or different leg mass ratios.
NASA Astrophysics Data System (ADS)
Zohner, Justin J.; Schuster, Kurt J.; Chavey, Lucas J.; Stolarski, David J.; Kumru, Semih S.; Rockwell, Benjamin A.; Thomas, Robert J.; Cain, Clarence P.
2006-02-01
Skin damage thresholds were measured and compared with theoretical predictions using a skin thermal model for near-IR laser pulses at 1318 nm and 1540 nm. For the 1318-nm data, a Q-switched, 50-ns pulse with a spot size of 5 mm was applied to porcine skin and the damage thresholds were determined at 1 hour and 24 hours postexposure using Probit analysis. The same analysis was conducted for a Q-switched, 30-ns pulse at 1540 nm with a spot size of 5 mm. The Yucatan mini-pig was used as the model for human skin due to its similarity to pigmented human skin. The ED50 for these skin exposures at 24 hours postexposure was 10.5 J/cm2 for the 1318-nm exposures and 6.1 J/cm2 for the 1540-nm exposures. These results were compared to thermal model predictions, and we show that the thermal model fails to account for the ED50 values observed. A brief discussion of the possible causes of this discrepancy is presented. These thresholds are also compared with previously published skin minimum visible lesion (MVL) thresholds and with the ANSI Standard's MPE for 1318-nm lasers at 50 ns and 1540-nm lasers at 30 ns.
Leyden, Matthew R; Matsushima, Toshinori; Qin, Chuanjiang; Ruan, Shibin; Ye, Hao; Adachi, Chihaya
2018-06-06
Organo-metal-halide perovskites are a promising set of materials for optoelectronic applications such as solar cells, light emitting diodes, and lasers. Perovskite thin films have demonstrated amplified spontaneous emission (ASE) thresholds as low as 1.6 μJ cm-2 and lasing thresholds as low as 0.2 μJ cm-2. Recently, the performance of perovskite light emitting diodes has risen rapidly due to the formation of quasi-2D films using bulky ligands such as phenylethylammonium (PEA). Despite the high photoluminescent yield and external quantum efficiency of quasi-2D perovskites, few reports exist on amplified spontaneous emission. We show in this report that the threshold for amplified spontaneous emission of quasi-2D perovskite films increases with the concentration of phenylethylammonium. We attribute this increasing threshold to a charge transfer state at the PEA interface that competes with the ASE process for excitons. Additionally, the comparatively slow inter-grain charge transfer process cannot significantly contribute to the fast radiative recombination in amplified spontaneous emission. These results suggest that relatively low-order PEA-based perovskite films that are suitable for LED applications are not well suited for lasing applications. However, high-order films were able to maintain their low threshold values and may still benefit from improved stability.
Soil water storage, rainfall and runoff relationships in a tropical dry forest catchment
NASA Astrophysics Data System (ADS)
Farrick, Kegan K.; Branfireun, Brian A.
2014-12-01
In forested catchments, the exceedance of rainfall and antecedent water storage thresholds is often required for runoff generation, yet to our knowledge these threshold relationships remain undescribed in tropical dry forest catchments. We, therefore, identified the controls of streamflow activation and the timing and magnitude of runoff in a tropical dry forest catchment near the Pacific coast of central Mexico. During a 52 day transition phase from the dry to wet season, soil water movement was dominated by vertical flow which continued until a threshold soil moisture content of 26% was reached at 100 cm below the surface. This satisfied a 162 mm storage deficit and activated streamflow, likely through lateral subsurface flow pathways. High antecedent soil water conditions were maintained during the wet phase but had a weak influence on stormflow. We identified a threshold value of 289 mm of summed rainfall and antecedent soil water needed to generate >4 mm of stormflow per event. Above this threshold, stormflow response and magnitude was almost entirely governed by rainfall event characteristics and not antecedent soil moisture conditions. Our results show that over the course of the wet season in tropical dry forests the dominant controls on runoff generation changed from antecedent soil water and storage to the depth of rainfall.
Variable Threshold Method for Determining the Boundaries of Imaged Subvisible Particles.
Cavicchi, Richard E; Collett, Cayla; Telikepalli, Srivalli; Hu, Zhishang; Carrier, Michael; Ripple, Dean C
2017-06-01
An accurate assessment of particle characteristics and concentrations in pharmaceutical products by flow imaging requires accurate particle sizing and morphological analysis. Analysis of images begins with the definition of particle boundaries. Commonly, a single threshold defines the level at which a pixel in the image is included in the detection of particles; depending on the threshold level, this results either in missing translucent particles or in oversizing less transparent particles due to the halos and gradients in intensity near the particle boundaries. We have developed an image analysis algorithm that sets the threshold for a particle based on the maximum gray value of that particle. We show that this results in tighter boundaries for particles with high contrast, while conserving the number of highly translucent particles detected. The method is implemented as a plugin for FIJI, an open-source image analysis platform. The method was tested on calibration beads in water and glycerol/water solutions, a suspension of microfabricated rods, and stir-stressed aggregates made from IgG. The result is that appropriate thresholds are automatically set for solutions with a range of particle properties, and the improved boundaries will allow more accurate sizing results and potentially improved particle classification studies. Published by Elsevier Inc.
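The core of the variable-threshold idea can be sketched in a few lines: find candidate particles with a permissive global threshold, then re-threshold each candidate at a level tied to its own maximum gray value. The linear fraction used below stands in for the paper's actual mapping, which is not reproduced here:

```python
import numpy as np
from scipy import ndimage as ndi

def variable_threshold_mask(img: np.ndarray, loose_level: float,
                            frac_of_max: float = 0.5) -> np.ndarray:
    """Re-threshold each candidate particle at frac_of_max * its max gray value."""
    labels, n = ndi.label(img > loose_level)       # permissive first pass
    mask = np.zeros(img.shape, dtype=bool)
    for i in range(1, n + 1):
        region = labels == i
        level = frac_of_max * img[region].max()    # per-particle threshold
        mask |= region & (img >= level)
    return mask
```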
Optimum threshold selection method of centroid computation for Gaussian spot
NASA Astrophysics Data System (ADS)
Li, Xuxu; Li, Xinyang; Wang, Caixia
2015-10-01
Centroid computation of a Gaussian spot is often conducted to get the exact position of a target or to measure wave-front slopes in the fields of target tracking and wave-front sensing. Center of Gravity (CoG) is the most traditional method of centroid computation, known for its low algorithmic complexity. However, both electronic noise from the detector and photonic noise from the environment reduce its accuracy. In order to improve the accuracy, thresholding is unavoidable before centroid computation, and an optimum threshold needs to be selected. In this paper, a model of the Gaussian spot is established to analyze the performance of the optimum threshold under different Signal-to-Noise Ratio (SNR) conditions. Two optimum threshold selection methods are introduced: TmCoG, which uses m% of the maximum intensity of the spot as the threshold, and TkCoG, which uses μn + κσn as the threshold, where μn and σn are the mean value and standard deviation of the background noise. First, the impact of each method on the detection error under various SNR conditions is simulated to find how to decide the value of k or m; then a comparison between them is made. According to the simulation results, TmCoG is superior to TkCoG in the accuracy of the selected threshold, and its detection error is also lower.
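Both selection rules are simple enough to state in code. A sketch of thresholded center-of-gravity computation with the TmCoG and TkCoG rules; the threshold-subtracted weighting and the default m and κ values are common choices assumed here, not prescriptions from the paper:

```python
import numpy as np

def cog(img: np.ndarray, threshold: float):
    """Center of gravity of the spot, using pixels above the threshold."""
    w = np.clip(img - threshold, 0.0, None)   # threshold-subtracted weights
    ys, xs = np.indices(img.shape)
    return (w * xs).sum() / w.sum(), (w * ys).sum() / w.sum()

def tm_cog(img: np.ndarray, m: float = 0.3):
    """TmCoG: threshold at a fraction m of the spot's maximum intensity."""
    return cog(img, m * img.max())

def tk_cog(img: np.ndarray, noise: np.ndarray, kappa: float = 3.0):
    """TkCoG: threshold at mu_n + kappa * sigma_n of the background noise."""
    return cog(img, noise.mean() + kappa * noise.std())
```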
Weight monitoring system for newborn incubator application
NASA Astrophysics Data System (ADS)
Widianto, Arif; Nurfitri, Intan; Mahatidana, Pradipta; Abuzairi, Tomy; Poespawati, N. R.; Purnamaningsih., Retno W.
2018-02-01
We propose a weight monitoring system using a load cell sensor for newborn incubator application. The weight sensing system consists of a load cell, a signal conditioning circuit, and an Arduino Uno R3 microcontroller. The performance of the sensor was investigated using various weights from 0 up to 3000 g. Experimental results showed that the system has a small error of 4.313% and a threshold and resolution value of 12.5 g. Compared with a typical baby scale available in the local market, the proposed system has lower error and hysteresis values.
Fault Isolation Filter for Networked Control System with Event-Triggered Sampling Scheme
Li, Shanbin; Sauter, Dominique; Xu, Bugong
2011-01-01
In this paper, sensor data are transmitted only when the absolute value of the difference between the current sensor value and the previously transmitted one is greater than a given threshold value. Based on this send-on-delta scheme, one of the event-triggered sampling strategies, a modified fault isolation filter for a discrete-time networked control system with multiple faults is implemented by a particular form of the Kalman filter. The proposed fault isolation filter improves resource utilization with graceful degradation of fault estimation performance. An illustrative example is given to show the efficiency of the proposed method. PMID:22346590
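The send-on-delta trigger itself is a one-line test; a minimal sketch (the class name and threshold value are illustrative) of the sampling rule that gates what the fault isolation filter receives:

```python
class SendOnDeltaSampler:
    """Transmit only when the reading moves more than delta away from the
    last transmitted value (send-on-delta event-triggered sampling)."""

    def __init__(self, delta: float):
        self.delta = delta
        self.last_sent = None

    def update(self, value: float) -> bool:
        if self.last_sent is None or abs(value - self.last_sent) > self.delta:
            self.last_sent = value
            return True      # event: send sample over the network
        return False         # suppress: saves bandwidth; the filter must cope

sampler = SendOnDeltaSampler(delta=0.5)
print([sampler.update(v) for v in [0.0, 0.2, 0.7, 0.8, 1.5, 1.6]])
# [True, False, True, False, True, False]
```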
Löffler, Frank E.; Tiedje, James M.; Sanford, Robert A.
1999-01-01
Measurements of the hydrogen consumption threshold and the tracking of electrons transferred to the chlorinated electron acceptor (fe) reliably detected chlororespiratory physiology in both mixed cultures and pure cultures capable of using tetrachloroethene, cis-1,2-dichloroethene, vinyl chloride, 2-chlorophenol, 3-chlorobenzoate, 3-chloro-4-hydroxybenzoate, or 1,2-dichloropropane as an electron acceptor. Hydrogen was consumed to significantly lower threshold concentrations of less than 0.4 ppmv compared with the values obtained for the same cultures without a chlorinated compound as an electron acceptor. The fe values ranged from 0.63 to 0.7, values which are in good agreement with theoretical calculations based on the thermodynamics of reductive dechlorination as the terminal electron-accepting process. In contrast, a mixed methanogenic culture that cometabolized 3-chlorophenol exhibited a significantly lower fe value, 0.012. PMID:10473415
Wilson, Raymond C.
1997-01-01
Broad-scale variations in long-term precipitation climate may influence rainfall/debris-flow threshold values along the U.S. Pacific coast, where both the mean annual precipitation (MAP) and the number of rainfall days (#RDs) are controlled by topography, distance from the coastline, and geographic latitude. Previous authors have proposed that rainfall thresholds are directly proportional to MAP, but this appears to hold only within limited areas (< 1° latitude), where rainfall frequency (#RDs) is nearly constant. MAP-normalized thresholds underestimate the critical rainfall when applied to areas to the south, where the #RDs decrease, and overestimate threshold rainfall when applied to areas to the north, where the #RDs increase. For normalization between climates where both MAP and #RDs vary significantly, thresholds may best be described as multiples of the rainy-day normal, RDN = MAP/#RDs. Using data from several storms that triggered significant debris-flow activity in southern California, the San Francisco Bay region, and the Pacific Northwest, peak 24-hour rainfalls were plotted against RDN values, displaying a linear relationship with a lower bound at about 14 RDN. RDN ratios in this range may provide a threshold for broad-scale regional forecasting of debris-flow activity.
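The normalization is easy to make concrete. A sketch of the RDN computation and the ~14 RDN lower bound quoted above; the two climates below are invented to show that very different MAP values can share an RDN, and hence a threshold:

```python
def rdn(map_mm: float, rainfall_days: float) -> float:
    """Rainy-day normal: mean annual precipitation per rainfall day."""
    return map_mm / rainfall_days

def debris_flow_threshold_mm(map_mm: float, rainfall_days: float,
                             multiple: float = 14.0) -> float:
    """Approximate peak 24-h rainfall threshold as a multiple of RDN."""
    return multiple * rdn(map_mm, rainfall_days)

# Invented climates: wet/frequent vs. dry/infrequent rainfall
print(debris_flow_threshold_mm(2000.0, 150.0))  # ~187 mm
print(debris_flow_threshold_mm(400.0, 30.0))    # ~187 mm: same RDN, same threshold
```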
A Locust Phase Change Model with Multiple Switching States and Random Perturbation
NASA Astrophysics Data System (ADS)
Xiang, Changcheng; Tang, Sanyi; Cheke, Robert A.; Qin, Wenjie
2016-12-01
Insects such as locusts and some moths can transform between a solitarious phase, when they remain in loose populations, and a gregarious phase, when they may swarm. The key to effective management of outbreaks of species such as the desert locust Schistocerca gregaria is therefore early detection of when they are in the threshold state between the two phases, followed by timely control of their hopper stages before they fledge, because the control of flying adult swarms is costly and often ineffective. Definitions of gregarization thresholds should assist preventive control measures and avoid treatment of areas that might not lead to gregarization. In order to better understand the effects of the threshold density, which represents the gregarization threshold, on the outbreak of a locust population, we developed a model of a discrete switching system. The proposed model allows us to address: (1) how frequently switching occurs from the solitarious to the gregarious phase and vice versa; (2) when stable switching transients occur, the existence of which indicates that solutions with larger amplitudes can switch to a stable attractor with a value less than the switching threshold density; and (3) how random perturbation influences the switching pattern. Our results show that both subsystems have refuge equilibrium points, outbreak equilibrium points, and bistable equilibria. Further, the outbreak equilibrium points and bistable equilibria can coexist for a wide range of parameters and can switch from one to another. This type of switching is sensitive to the intrinsic growth rate and the initial values of the locust population, and may result in locust population outbreaks and phase switching once a small perturbation occurs. Moreover, the simulation results indicate that the switching transient patterns become identical after some generations, suggesting that the evolution of the perturbed system is not related to the initial value after some fixed number of generations for the same stochastic processes. However, the switching frequency and outbreak patterns can be significantly affected by the intensity of the noise and the intrinsic growth rate of the locust population.
Spreading dynamics of a SIQRS epidemic model on scale-free networks
NASA Astrophysics Data System (ADS)
Li, Tao; Wang, Yuanmei; Guan, Zhi-Hong
2014-03-01
In order to investigate the influence of the heterogeneity of the underlying networks and of the quarantine strategy on epidemic spreading, a SIQRS epidemic model on scale-free networks is presented. Using mean-field theory, the spreading dynamics of the virus is analyzed, and the spreading critical threshold and equilibria are derived. Theoretical results indicate that the critical threshold value depends significantly on the topology of the underlying networks and on the quarantine rate, and that the existence of equilibria is determined by the threshold value. The stability of the disease-free equilibrium and the permanence of the disease are proved. Numerical simulations confirm the analytical results.
Piezoresistive strain sensing of carbon black /silicone composites above percolation threshold
NASA Astrophysics Data System (ADS)
Shang, Shuying; Yue, Yujuan; Wang, Xiaoer
2016-12-01
A series of flexible composites with a carbon black (CB) filled silicone rubber matrix was made by an improved process in this work. A low percolation threshold at a CB mass ratio of 2.99% was achieved. The piezoresistive behavior of CB/silicone composites above this critical value, with mass ratios of carbon black to silicone rubber ranging from 0.01 to 0.2, was studied. The piezoresistive behavior differed among composites with different CB contents; nevertheless, the composites showed excellent repeatability of piezoresistivity under cyclic compression, whether with low or high filler content. Most interestingly, the plots of gauge factor versus strain for composites with different CB contents collapsed onto a master curve, and this curve could be well fitted by a function. This showed that the gauge factor of the composites is strain-controlled, indicating a promising prospect for application.
Pulse oximeter based mobile biotelemetry application.
Işik, Ali Hakan; Güler, Inan
2012-01-01
The quality and features of tele-homecare are improved by information and communication technologies. In this context, a pulse oximeter-based mobile biotelemetry application was developed. With this application, patients can measure their own oxygen saturation and heart rate through a Bluetooth pulse oximeter at home. The Bluetooth virtual serial port protocol is used to send the test results from the pulse oximeter to a smart phone. These data are converted into XML and transmitted to a remote web server database via the smart phone; GPRS, WLAN, or 3G can be used for transmission. A rule-based algorithm is used in the decision-making process. By default, the threshold value for oxygen saturation is 80, and the heart rate threshold values are 40 and 150, respectively. If the patient's heart rate is outside the threshold values or the oxygen saturation is below the threshold value, an emergency SMS is sent to the doctor, who can then direct an ambulance to the patient. These threshold values can be changed by the doctor for each patient. The conversion of the evaluated data into the SMS XML template is done on the web server. Another important component of the application is web-based monitoring of pulse oximeter data. The web page provides access to all patient data, so doctors can follow their patients and send e-mails related to the evaluation of the disease; in addition, patients can follow their own data on this page. Eight patients took part in the trial. It is believed that the developed application will facilitate pulse oximeter-based measurement from anywhere and at any time.
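The rule base described amounts to a band check on heart rate plus a floor on oxygen saturation. A sketch using the default thresholds given in the text; the SMS transport step is omitted, and the function and parameter names are illustrative:

```python
def vitals_alarm(spo2: float, heart_rate: float,
                 spo2_min: float = 80.0,
                 hr_min: float = 40.0, hr_max: float = 150.0) -> bool:
    """True when an emergency SMS should be sent: SpO2 below its floor,
    or heart rate outside the [hr_min, hr_max] band. Thresholds are
    parameters so the doctor can adjust them per patient."""
    return spo2 < spo2_min or not (hr_min <= heart_rate <= hr_max)

if vitals_alarm(spo2=76.0, heart_rate=120.0):
    print("send emergency SMS to doctor")
```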
Yang, Yongji; Moser, Michael A J; Zhang, Edwin; Zhang, Wenjun; Zhang, Bing
2018-01-01
The aim of this study was to develop a statistical model for cell death by irreversible electroporation (IRE) and to show that the statistical model is more accurate than the electric field threshold model in the literature, using cervical cancer cells in vitro. The HeLa cell line was cultured and treated with different IRE protocols in order to obtain data for modeling the statistical relationship between cell death and pulse-setting parameters. In total, 340 in vitro experiments were performed with a commercial IRE pulse system, including a pulse generator and an electric cuvette. The trypan blue staining technique was used to evaluate cell death after 4 hours of incubation following IRE treatment. The Peleg-Fermi model was used to build the statistical relationship using the cell viability data obtained from the in vitro experiments. A finite element model of IRE for the electric field distribution was also built. Comparison of ablation zones between the statistical model and the electric field threshold model (drawn from the finite element model) was used to show the accuracy of the proposed statistical model in describing the ablation zone and its applicability across different pulse-setting parameters. Statistical models describing the relationships between HeLa cell death and pulse length and the number of pulses, respectively, were built, and the values of the curve-fitting parameters were obtained using the Peleg-Fermi model for the treatment of cervical cancer with IRE. The difference in ablation zone between the statistical model and the electric field threshold model was also illustrated to show the accuracy of the proposed statistical model in representing the ablation zone in IRE. This study concluded that: (1) the proposed statistical model accurately described the ablation zone of IRE with cervical cancer cells and was more accurate than the electric field model; (2) the proposed statistical model was able to estimate the value of the electric field threshold for computer simulation of IRE in the treatment of cervical cancer; and (3) the proposed statistical model was able to express the change in ablation zone with changes in pulse-setting parameters.
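For readers unfamiliar with it, the Peleg-Fermi model expresses the surviving cell fraction as a sigmoid in the electric field, with a critical field and spread that shrink as the number of pulses grows. The sketch below uses the common exponential-decay parameterization of those two quantities; the parameter values are illustrative placeholders, not the fits reported in the study.

```python
import numpy as np

def peleg_fermi_viability(E, n, Ec0, k1, A0, k2):
    """Peleg-Fermi cell viability model used in IRE studies:
    S(E, n) = 1 / (1 + exp((E - Ec(n)) / A(n))),
    with critical field Ec and spread A decaying with pulse number n:
    Ec(n) = Ec0 * exp(-k1 * n), A(n) = A0 * exp(-k2 * n).
    Parameter values below are illustrative, not the paper's fits."""
    Ec = Ec0 * np.exp(-k1 * n)
    A = A0 * np.exp(-k2 * n)
    return 1.0 / (1.0 + np.exp((E - Ec) / A))

E = np.linspace(0, 2000, 5)  # electric field, V/cm
print(peleg_fermi_viability(E, n=90, Ec0=1200.0, k1=0.005, A0=300.0, k2=0.005))
```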
A generalized methodology for identification of threshold for HRU delineation in SWAT model
NASA Astrophysics Data System (ADS)
M, J.; Sudheer, K.; Chaubey, I.; Raj, C.
2016-12-01
The Soil and Water Assessment Tool (SWAT) is a comprehensive distributed hydrological model widely used to support a variety of decisions. The simulation accuracy of a distributed hydrological model depends on the mechanism used to subdivide the watershed. SWAT subdivides the watershed and its sub-basins into small computing units known as hydrologic response units (HRUs). HRUs are delineated based on unique combinations of land use, soil type, and slope within the sub-watersheds, and are not spatially contiguous. Computations in SWAT are carried out at the HRU level and then aggregated to the sub-basin outlet, which is routed through the stream system. In practice, HRUs are delineated using threshold percentages of land use, soil, and slope specified by the modeler to decrease the computation time of the model; the thresholds constrain the minimum area for constructing an HRU. In the current HRU delineation practice in SWAT, any land use, soil, or slope class occupying less than the predefined threshold within a sub-basin is subsumed into the dominant class, introducing some ambiguity into the process simulations through inappropriate representation of the area. The loss of information due to variation in the threshold values, however, depends strongly on the purpose of the study. This research therefore studies the effects of HRU delineation threshold values on SWAT sediment simulations and suggests guidelines for selecting appropriate threshold values with respect to sediment simulation accuracy. A preliminary study was carried out on the Illinois watershed by assigning different thresholds for land use and soil. A general methodology was proposed for identifying an appropriate threshold for HRU delineation in the SWAT model that considers both computational time and simulation accuracy; it can be adopted for any watershed with a single simulation of the model at a zero-zero threshold.
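A minimal sketch of the thresholding step that causes the representation error discussed above: classes whose share of the sub-basin falls below the threshold are dropped, and their area is reapportioned to the surviving classes. The function name and the proportional reapportioning rule are simplified assumptions about SWAT's behavior, for illustration only.

```python
def apply_hru_threshold(area_by_class, threshold_pct):
    """Mimics SWAT-style HRU thresholding within one sub-basin: classes
    below the threshold percentage are dropped and their area is
    reapportioned to the remaining (dominant) classes, which is the
    source of the representation error discussed above."""
    total = sum(area_by_class.values())
    kept = {c: a for c, a in area_by_class.items()
            if 100 * a / total >= threshold_pct}
    if not kept:  # degenerate case: fall back to the dominant class
        c = max(area_by_class, key=area_by_class.get)
        kept = {c: area_by_class[c]}
    scale = total / sum(kept.values())
    return {c: a * scale for c, a in kept.items()}

landuse_km2 = {"forest": 42.0, "pasture": 7.5, "urban": 1.2, "row_crop": 0.3}
print(apply_hru_threshold(landuse_km2, threshold_pct=5))
# forest and pasture survive a 5% threshold; urban and row_crop area is
# redistributed proportionally, overstating the dominant classes.
```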
Collisionless microtearing modes in hot tokamaks: Effect of trapped electrons
DOE Office of Scientific and Technical Information (OSTI.GOV)
Swamy, Aditya K.; Ganesh, R., E-mail: ganesh@ipr.res.in; Brunner, S.
2015-07-15
Collisionless microtearing modes have recently been found linearly unstable in sharp temperature gradient regions of large aspect ratio tokamaks. The magnetic drift resonance of passing electrons has been found to be sufficient to destabilise these modes above a threshold plasma β. A global gyrokinetic study, including both passing electrons as well as trapped electrons, shows that the non-adiabatic contribution of the trapped electrons provides a resonant destabilization, especially at large toroidal mode numbers, for a given aspect ratio. The global 2D mode structures show important changes to the destabilising electrostatic potential. The β threshold for the onset of the instability is found to be generally downshifted by the inclusion of trapped electrons. A scan in the aspect ratio of the tokamak configuration, from medium to large but finite values, clearly indicates a significant destabilizing contribution from trapped electrons at small aspect ratio, with a diminishing role at larger aspect ratios.
NASA Astrophysics Data System (ADS)
Hashemi, Adeleh; Bahari, Ali; Ghasemi, Shahram
2017-09-01
In this work, povidone/silica nanocomposite dielectric layers were deposited on n-type Si (100) substrates for application in n-type silicon field-effect transistors (FETs). Thermogravimetric analysis (TGA) indicated that strong chemical interactions between the polymer and silica nanoparticles were created. In order to examine the effect of annealing temperature on the chemical interactions and nanostructure properties, annealing was performed at 423-513 K. Atomic force microscopy (AFM) images show very smooth surfaces with very low surface roughness (0.038-0.088 nm). The Si 2p and C 1s core-level photoemission spectra were deconvoluted into the chemical environments of the Si and C atoms, respectively. The deconvoluted X-ray photoelectron spectroscopy (XPS) spectra revealed a high percentage of silanol hydrogen bonds in the unannealed sample. These bonds were converted into stronger covalent (siloxane) bonds at an annealing temperature of 423 K. With further increase in temperature, the siloxane bonds shifted to lower binding energy by about 1 eV, and their intensity was reduced at an annealing temperature of 513 K. The electrical characteristics were extracted from current-voltage (I-V) and capacitance-voltage (C-V) measurements in a metal-insulator-semiconductor (MIS) structure. All of the n-type Si transistors showed very low threshold voltages (-0.24 to 1 V). The formation of the strongest cross-linking in the nanostructure of the dielectric film annealed at 423 K resulted in a trap-free path for the transport of charge carriers, yielding the lowest threshold voltage (0.08 V) and the highest electron mobility (45.01 cm2/V s) for its FET. On increasing the annealing temperature of the nanocomposite dielectric films to 473 and 513 K, the average surface roughness, the capacitance, and the FET threshold voltage increased, while the FET electron field-effect mobility decreased.
Simulation of MAD Cow Disease Propagation
NASA Astrophysics Data System (ADS)
Magdoń-Maksymowicz, M. S.; Maksymowicz, A. Z.; Gołdasz, J.
A computer simulation of the dynamics of BSE is presented. Both vertical (to offspring) and horizontal (to neighbors) mechanisms of disease spread are considered. The simulation takes place on a two-dimensional square lattice of Nx×Ny = 1000×1000 sites, with the initial population randomly distributed on the lattice. The disease may be introduced either with the initial population or by spontaneous development of BSE in an individual at a small frequency. The main results show a critical probability of BSE transmission above which the disease persists in the population. This value is sensitive to possible spatial clustering of the population, and it also depends on the mechanisms responsible for disease onset, evolution, and propagation. A threshold birth rate below which the population becomes extinct is observed. Above this threshold the population is disease-free at equilibrium until a second birth-rate value is reached, above which the disease is present in the population. For the typical model parameters used in the simulation, which may correspond to mad cow disease, we are close to the BSE-free case.
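A stripped-down version of such a lattice simulation is easy to sketch. The code below runs a much smaller grid with horizontal transmission, spontaneous onset, and removal of sick individuals; births and vertical transmission are omitted for brevity, and all rates are illustrative rather than the paper's parameters.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200                          # much smaller than the paper's 1000x1000
EMPTY, HEALTHY, SICK = 0, 1, 2
grid = np.where(rng.random((N, N)) < 0.5, HEALTHY, EMPTY)

p_transmit = 0.3      # horizontal transmission per sick neighbour
p_spontaneous = 1e-5  # spontaneous BSE onset in a healthy individual
p_death_sick = 0.2    # removal of sick individuals per step

for step in range(200):
    sick = grid == SICK
    # Count sick neighbours on the von Neumann neighbourhood
    # (periodic boundaries via np.roll).
    nbrs = (np.roll(sick, 1, 0) + np.roll(sick, -1, 0)
            + np.roll(sick, 1, 1) + np.roll(sick, -1, 1))
    p_infect = 1 - (1 - p_transmit) ** nbrs
    r = rng.random((N, N))
    newly_sick = (grid == HEALTHY) & ((r < p_infect) | (r < p_spontaneous))
    grid[sick & (rng.random((N, N)) < p_death_sick)] = EMPTY
    grid[newly_sick] = SICK

print("sick fraction:", (grid == SICK).mean())
```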
Bodirsky, Benjamin Leon; Popp, Alexander; Lotze-Campen, Hermann; Dietrich, Jan Philipp; Rolinski, Susanne; Weindl, Isabelle; Schmitz, Christoph; Müller, Christoph; Bonsch, Markus; Humpenöder, Florian; Biewald, Anne; Stevanovic, Miodrag
2014-05-13
Reactive nitrogen (Nr) is an indispensable nutrient for agricultural production and human alimentation. Simultaneously, agriculture is the largest contributor to Nr pollution, causing severe damage to human health and ecosystem services. The trade-off between food availability and Nr pollution can be attenuated by several key mitigation options, including Nr efficiency improvements in crop and animal production systems, food waste reduction in households, and lower consumption of Nr-intensive animal products. However, their quantitative mitigation potential remains unclear, especially under the added pressure of population growth and changes in food consumption. Here we show by model simulations that, under baseline conditions, Nr pollution in 2050 can be expected to rise to 102-156% of the 2010 value. Only under ambitious mitigation does pollution possibly decrease, to 36-76% of the 2010 value. Without mitigation actions, air, water, and atmospheric Nr pollution go far beyond critical environmental thresholds. Even under ambitious mitigation, the risk remains that thresholds are exceeded.
Sanni, Steinar; Björkblom, Carina; Jonsson, Henrik; Godal, Brit F; Liewenborg, Birgitta; Lyng, Emily; Pampanin, Daniela M
2017-04-01
The aim of this study was to determine a suitable set of biomarker based methods for environmental monitoring in sub-arctic and temperate offshore areas using scientific knowledge on the sensitivity of fish species to dispersed crude oil. Threshold values for environmental monitoring and risk assessment were obtained based on a quantitative comparison of biomarker responses. Turbot, halibut, salmon and sprat were exposed for up to 8 weeks to five different sub-lethal concentrations of dispersed crude oil. Biomarkers assessing PAH metabolites, oxidative stress, detoxification system I activity, genotoxicity, immunotoxicity, endocrine disruption, general cellular stress and histological changes were measured. Results showed that PAH metabolites, CYP1A/EROD, DNA adducts and histopathology rendered the most robust results across the different fish species, both in terms of sensitivity and dose-responsiveness. The reported results contributed to forming links between biomonitoring and risk assessment procedures by using biomarker species sensitivity distributions. Copyright © 2016 Elsevier Ltd. All rights reserved.
Changing carbon isotope ratio of atmospheric carbon dioxide: implications for food authentication.
Peck, William H; Tubman, Stephanie C
2010-02-24
Carbon isotopes are often used to detect the addition of foreign sugars to foods. This technique takes advantage of the natural difference in carbon isotope ratio between C3 and C4 plants. Many foods are derived from C3 plants, but the low-cost sweeteners corn and sugar cane are C4 plants. Most adulteration studies do not take into account the secular shift in the carbon isotope ratio of atmospheric carbon dioxide caused by fossil fuel burning, a shift also seen in plant tissues. As a result, statistical tests and threshold values that evaluate the authenticity of foods based on carbon isotope ratios may need to be corrected for changing atmospheric isotope values. Literature and new data show that the atmospheric trend in carbon isotopes is seen in a 36-year data set of maple syrup analyses (n = 246), demonstrating that published thresholds for cane or corn sugar adulteration in maple syrup (and other foods) have become progressively more lenient over time.
Aoyagi, Miki; Nagata, Kenji
2012-06-01
The term algebraic statistics arises from the study of probabilistic models and techniques for statistical inference using methods from algebra and geometry (Sturmfels, 2009). The purpose of our study is to consider the generalization error and stochastic complexity in learning theory by using the log-canonical threshold from algebraic geometry. Such thresholds correspond to the main term of the generalization error in Bayesian estimation, which is called a learning coefficient (Watanabe, 2001a, 2001b). The learning coefficient serves to measure learning efficiency in hierarchical learning models. In this letter, we consider learning coefficients for Vandermonde matrix-type singularities using a new approach: focusing on the generators of the ideal that defines the singularities. We give new tight bounds on the learning coefficients for Vandermonde matrix-type singularities, and their explicit values under certain conditions. By applying our results, we can derive the learning coefficients of three-layered neural networks and normal mixture models.
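For orientation, the role the learning coefficient plays in these asymptotics can be stated compactly. The expressions below follow the standard results of Watanabe's singular learning theory cited above (notation assumed: F(n) is the stochastic complexity, S the entropy of the true distribution, λ the learning coefficient, m its multiplicity, and G(n) the Bayes generalization error):

```latex
F(n) = nS + \lambda \log n - (m - 1)\log\log n + O(1),
\qquad
\mathbb{E}\,[G(n)] = \frac{\lambda}{n} + o\!\left(\frac{1}{n}\right).
```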
Theoretical studies of the decomposition mechanisms of 1,2,4-butanetriol trinitrate.
Pei, Liguan; Dong, Kehai; Tang, Yanhui; Zhang, Bo; Yu, Chang; Li, Wenzuo
2017-12-06
Density functional theory (DFT) and canonical variational transition-state theory combined with a small-curvature tunneling correction (CVT/SCT) were used to explore the decomposition mechanisms of 1,2,4-butanetriol trinitrate (BTTN) in detail. The results showed that the γ-H abstraction reaction is the initial pathway for autocatalytic BTTN decomposition. The three possible hydrogen atom abstraction reactions are all exothermic. The rate constants for autocatalytic BTTN decomposition are 3 to 10^40 times greater than the rate constants for the two unimolecular decomposition reactions (O-NO2 cleavage and HONO elimination). The process of BTTN decomposition can be divided into two stages according to whether the NO2 concentration is above a threshold value. HONO elimination is the main reaction channel during the first stage, because autocatalytic decomposition requires NO2 and the concentration of NO2 is initially low. As the reaction proceeds, the concentration of NO2 gradually increases; when it exceeds the threshold value, the second stage begins, with autocatalytic decomposition becoming the main reaction channel.
The fragmentation threshold and implications for explosive eruptions
NASA Astrophysics Data System (ADS)
Kennedy, B.; Spieler, O.; Kueppers, U.; Scheu, B.; Mueller, S.; Taddeucci, J.; Dingwell, D.
2003-04-01
The fragmentation threshold is the minimum pressure differential required to cause a porous volcanic rock to form pyroclasts. This is a critical parameter when considering the shift from effusive to explosive eruptions. We fragmented a variety of natural volcanic rock samples at room temperature (20 °C) and high temperature (850 °C) using a shock tube modified after Alidibirov and Dingwell (1996). This apparatus creates a pressure differential which drives fragmentation: pressurized gas in the vesicles of the rock suddenly expands, blowing the sample apart. For this reason, porosity is the primary control on the fragmentation threshold. On a graph of porosity against fragmentation threshold, our results from a variety of natural samples at both low and high temperatures all plot on the same curve and show the threshold increasing steeply at low porosities. A sharp decrease in the fragmentation threshold occurs as porosity increases from 0-15%, while a more gradual decrease is seen from 15-85%. The high-temperature experiments form a curve with less variability than the low-temperature experiments, so we have chosen to model the high-temperature thresholds. The curve can be roughly predicted by the tensile strength of glass (140 MPa) divided by the porosity. Fractured phenocrysts in the majority of our samples reduce the overall strength of the sample, so the threshold values can be more accurately predicted by the matrix fraction multiplied by the tensile strength and divided by the porosity. At very high porosities the fragmentation threshold varies significantly due to the effect of bubble shape and size distributions on the permeability (Mueller et al., 2003). For example, high thresholds are seen for samples with very high permeabilities, where gas flow reduces the local pressure differential. These results allow us to predict the fragmentation threshold for any volcanic rock for which the porosity and crystal content are known. During explosive eruptions, the fragmentation threshold may be exceeded in two ways: (1) by building an overpressure within the vesicles above the fragmentation threshold, or (2) by unloading and exposing lithostatically pressurised magma to lower pressures. Using these data, we can in principle estimate the height of dome collapse or the amount of overpressure necessary to produce an explosive eruption.
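The predictive relation quoted above is simple enough to encode directly. In the sketch below, porosity is taken in percent (so that, e.g., 140 MPa / 30 ≈ 4.7 MPa, in the range of measured thresholds) and the matrix content as a fraction; this reading of the "% matrix x tensile strength / porosity" relation is our assumption, since the abstract does not spell out units.

```python
def fragmentation_threshold(porosity_pct, matrix_fraction=1.0,
                            tensile_strength_mpa=140.0):
    """Rough fragmentation threshold (MPa) following the relation quoted
    in the abstract: matrix fraction x tensile strength of glass / porosity.
    porosity_pct is in percent; the unit convention is assumed."""
    if not 0 < porosity_pct <= 100:
        raise ValueError("porosity must be in (0, 100] percent")
    return matrix_fraction * tensile_strength_mpa / porosity_pct

# A 30%-porosity sample whose matrix makes up 80% of the rock:
print(fragmentation_threshold(30.0, matrix_fraction=0.8))  # ~3.7 MPa
```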
41 CFR 102-73.40 - What happens if the dollar value of the project exceeds the prospectus threshold?
Code of Federal Regulations, 2013 CFR
2013-07-01
41 CFR 102-73.40 (2013 edition): What happens if the dollar value of the project exceeds the prospectus threshold? Public Contracts and Property Management; Federal Property Management Regulations System (Continued); Federal Management Regulation; Real Property; Part 102-73, Real Estate...
Code of Federal Regulations, 2013 CFR
2013-07-01
30 CFR 71.700 (2013 edition): Inhalation hazards; threshold limit values for gases, dust, fumes, mists, and vapors. Mineral Resources; Mine Safety and Health Administration, Department of Labor; Coal Mine Safety and Health; Mandatory Health Standards - Surface Coal Mines and Surface Work Areas of Underground...
Code of Federal Regulations, 2010 CFR
2010-07-01
30 CFR 71.700 (2010 edition): Inhalation hazards; threshold limit values for gases, dust, fumes, mists, and vapors. Mineral Resources; Mine Safety and Health Administration, Department of Labor; Coal Mine Safety and Health; Mandatory Health Standards - Surface Coal Mines and Surface Work Areas of Underground...
41 CFR 102-73.40 - What happens if the dollar value of the project exceeds the prospectus threshold?
Code of Federal Regulations, 2014 CFR
2014-01-01
41 CFR 102-73.40 (2014 edition): What happens if the dollar value of the project exceeds the prospectus threshold? Public Contracts and Property Management; Federal Property Management Regulations System (Continued); Federal Management Regulation; Real Property; Part 102-73, Real Estate...
Code of Federal Regulations, 2011 CFR
2011-07-01
30 CFR 71.700 (2011 edition): Inhalation hazards; threshold limit values for gases, dust, fumes, mists, and vapors. Mineral Resources; Mine Safety and Health Administration, Department of Labor; Coal Mine Safety and Health; Mandatory Health Standards - Surface Coal Mines and Surface Work Areas of Underground...
10 CFR Appendix E to Part 20 - Nationally Tracked Source Thresholds
Code of Federal Regulations, 2010 CFR
2010-01-01
10 CFR Part 20, Appendix E (2010 edition): Nationally Tracked Source Thresholds. Energy; Nuclear Regulatory Commission; Standards for Protection Against Radiation. The Terabecquerel (TBq) values are the...
10 CFR Appendix E to Part 20 - Nationally Tracked Source Thresholds
Code of Federal Regulations, 2013 CFR
2013-01-01
10 CFR Part 20, Appendix E (2013 edition): Nationally Tracked Source Thresholds. Energy; Nuclear Regulatory Commission; Standards for Protection Against Radiation. The Terabecquerel (TBq) values are the...
10 CFR Appendix E to Part 20 - Nationally Tracked Source Thresholds
Code of Federal Regulations, 2011 CFR
2011-01-01
10 CFR Part 20, Appendix E (2011 edition): Nationally Tracked Source Thresholds. Energy; Nuclear Regulatory Commission; Standards for Protection Against Radiation. The Terabecquerel (TBq) values are the...
10 CFR Appendix E to Part 20 - Nationally Tracked Source Thresholds
Code of Federal Regulations, 2014 CFR
2014-01-01
10 CFR Part 20, Appendix E (2014 edition): Nationally Tracked Source Thresholds. Energy; Nuclear Regulatory Commission; Standards for Protection Against Radiation. The Terabecquerel (TBq) values are the...
10 CFR Appendix E to Part 20 - Nationally Tracked Source Thresholds
Code of Federal Regulations, 2012 CFR
2012-01-01
10 CFR Part 20, Appendix E (2012 edition): Nationally Tracked Source Thresholds. Energy; Nuclear Regulatory Commission; Standards for Protection Against Radiation. The Terabecquerel (TBq) values are the...
Percolation of disordered jammed sphere packings
NASA Astrophysics Data System (ADS)
Ziff, Robert M.; Torquato, Salvatore
2017-02-01
We determine the site and bond percolation thresholds for a system of disordered jammed sphere packings in the maximally random jammed state, generated by the Torquato-Jiao algorithm. For the site threshold, which gives the fraction of conducting versus non-conducting spheres necessary for percolation, we find p_c = 0.3116(3), consistent with the 1979 value of Powell, 0.310(5), and identical within errors to the threshold for the simple-cubic lattice, 0.311608, which shares the same average coordination number of 6. In terms of the volume fraction φ, the threshold corresponds to a critical value φ_c = 0.199. For the bond threshold, which apparently was not measured before, we find p_c = 0.2424(3). To find these thresholds, we considered two shape-dependent universal ratios involving the size of the largest cluster, fluctuations in that size, and the second moment of the size distribution; we confirmed the ratios' universality by also studying the simple-cubic lattice with a similar cubic boundary. The results are applicable to many problems including conductivity in random mixtures, glass formation, and drug loading in pharmaceutical tablets.
Reliability of the method of levels for determining cutaneous temperature sensitivity
NASA Astrophysics Data System (ADS)
Jakovljević, Miroljub; Mekjavić, Igor B.
2012-09-01
Determination of thermal thresholds is used clinically for evaluation of peripheral nervous system function. The aim of this study was to evaluate the reliability of the method of levels, performed with a new, low-cost device, for determining cutaneous temperature sensitivity. Nineteen male subjects were included in the study. Thermal thresholds were tested on the right side at the volar surface of the mid-forearm, the lateral surface of the mid-upper arm, and the front of the mid-thigh. Thermal testing was carried out by the method of levels with an initial temperature step of 2°C. Variability of thermal thresholds was expressed by means of the ratio between the second and first testing, the coefficient of variation (CV), the coefficient of repeatability (CR), the intraclass correlation coefficient (ICC), the mean difference between sessions (S1-S2diff), the standard error of measurement (SEM), and the minimally detectable change (MDC). There were no statistically significant changes between sessions for warm or cold thresholds, or between warm and cold thresholds. Within-subject CVs were acceptable. The CR estimates ranged from 0.74°C to 1.06°C for warm thresholds and from 0.67°C to 1.07°C for cold thresholds. The ICC values for intra-rater reliability ranged from 0.41 to 0.72 for warm thresholds and from 0.67 to 0.84 for cold thresholds. S1-S2diff ranged from -0.15°C to 0.07°C for warm thresholds, and from -0.08°C to 0.07°C for cold thresholds. SEM ranged from 0.26°C to 0.38°C for warm thresholds, and from 0.23°C to 0.38°C for cold thresholds. Estimated MDC values were between 0.60°C and 0.88°C for warm thresholds, and between 0.53°C and 0.88°C for cold thresholds. The method of levels for determining cutaneous temperature sensitivity has acceptable reliability.
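The agreement statistics reported above are related by standard formulas, sketched below (SEM = SD * sqrt(1 - ICC), MDC95 = 1.96 * sqrt(2) * SEM, CR = 1.96 * SD of the between-session differences). The data and the ICC value in the example are hypothetical; the study's own ICCs come from a variance-components analysis not reproduced here.

```python
import numpy as np

def reliability_stats(session1, session2, icc):
    """Agreement statistics of the kind reported in the abstract.
    The ICC itself would come from a separate variance-components fit."""
    s1, s2 = np.asarray(session1, float), np.asarray(session2, float)
    diffs = s1 - s2
    sd_pooled = np.std(np.concatenate([s1, s2]), ddof=1)
    sem = sd_pooled * np.sqrt(1 - icc)          # standard error of measurement
    return {
        "mean_diff": diffs.mean(),              # S1-S2diff
        "CR": 1.96 * np.std(diffs, ddof=1),     # coefficient of repeatability
        "SEM": sem,
        "MDC95": 1.96 * np.sqrt(2) * sem,       # minimally detectable change
    }

warm1 = [34.1, 34.8, 35.2, 33.9, 34.5]  # hypothetical warm thresholds, degC
warm2 = [34.3, 34.6, 35.0, 34.2, 34.4]
print(reliability_stats(warm1, warm2, icc=0.70))
```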
Automatic detection of malaria parasite in blood images using two parameters.
Kim, Jong-Dae; Nam, Kyeong-Min; Park, Chan-Young; Kim, Yu-Seop; Song, Hye-Jeong
2015-01-01
Malaria must be diagnosed quickly and accurately at the initial infection stage and treated early to be cured properly. Diagnosis using a microscope requires much labor and time from a skilled expert, and the results vary greatly between individual diagnosticians. Therefore, to measure malaria parasite infection quickly and accurately, studies have been conducted on automated classification techniques using various parameters. In this study, classification performance was measured as a function of two parameters to determine the values that best distinguish normal from Plasmodium-infected red blood cells. To reduce the stain deviation of the acquired images, a principal component analysis (PCA) grayscale conversion method was used; the two parameters were the malaria-infected area and the threshold value used in binarization. The best classification performance was obtained by selecting the malaria threshold value of 72, corresponding to the lowest error rate, on the basis of a cell threshold value of 128.
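A toy version of the two-parameter rule helps make the roles of the two thresholds concrete. In the sketch below, pixels darker than the cell threshold (128) delimit the cell, pixels darker than the malaria threshold (72) count as stained parasite, and a minimum stained fraction decides the label; that final fraction is an assumed placeholder, since the abstract reports the two threshold values but not the exact area rule.

```python
import numpy as np

def classify_cell(gray, cell_threshold=128, malaria_threshold=72,
                  min_frac=0.02):
    """Two-parameter rule in the spirit of the study: the thresholds
    128 and 72 are the paper's; min_frac is an assumed placeholder."""
    cell = gray < cell_threshold         # cell body is darker than background
    parasite = gray < malaria_threshold  # parasite stains darkest
    if not cell.any():
        return "no cell"
    return "infected" if parasite.sum() / cell.sum() > min_frac else "normal"

img = np.full((40, 40), 250, np.uint8)  # bright background
img[5:35, 5:35] = 100                   # cell body, below the cell threshold
img[10:16, 10:16] = 50                  # dark parasite-like blob
print(classify_cell(img))               # infected
```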
The short time Fourier transform and local signals
NASA Astrophysics Data System (ADS)
Okumura, Shuhei
In this thesis, I examine the theoretical properties of the short-time discrete Fourier transform (STFT). The STFT is obtained by applying the Fourier transform over a fixed-size moving window of the input series. The window is moved by one time point at a time, so the windows overlap. I present several theoretical properties of the STFT, applied to various types of complex-valued, univariate time series inputs, and their outputs in closed form. In particular, just like the discrete Fourier transform, the STFT's modulus time series takes large positive values when the input is a periodic signal. One main point is that a white noise input results in the STFT output being a complex-valued stationary time series, whose time and time-frequency dependency structure, such as the cross-covariance functions, can be derived. Our primary focus is the detection of local periodic signals. I present a method to detect local signals by computing the probability that the squared-modulus STFT time series has a run of consecutive values exceeding some threshold, immediately after one exceeding observation that follows an observation below the threshold. We discuss a method to reduce the computation of such probabilities using the Box-Cox transformation and the delta method, and show that it compares well with the Monte Carlo simulation method.
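A minimal numerical version of this detection setup is easy to assemble: a hop-one moving-window DFT, the squared-modulus series in the bin nearest the signal frequency, and a count of consecutive threshold exceedances. The window size, quantile threshold, and signal parameters below are arbitrary illustrations, not the thesis's choices.

```python
import numpy as np

def stft_squared_modulus(x, window_size):
    """Moving-window discrete Fourier transform (hop of one sample, as in
    the thesis); returns the squared modulus, shape (n_windows, window_size)."""
    frames = np.lib.stride_tricks.sliding_window_view(x, window_size)
    return np.abs(np.fft.fft(frames, axis=1)) ** 2

# A local 50 Hz burst buried in white noise.
fs = 1000
t = np.arange(2 * fs) / fs
x = np.random.default_rng(2).normal(size=t.size)
x[800:1200] += 2.0 * np.sin(2 * np.pi * 50 * t[800:1200])

S = stft_squared_modulus(x, window_size=128)
bin50 = round(50 * 128 / fs)             # frequency bin nearest 50 Hz
series = S[:, bin50]
threshold = np.quantile(series, 0.95)    # illustrative threshold choice
exceed = series > threshold

# Longest run of consecutive exceedances -- the event type the thesis studies.
best = cur = 0
for e in exceed:
    cur = cur + 1 if e else 0
    best = max(best, cur)
print("longest run of exceedances:", best)
```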
Setting conservation management thresholds using a novel participatory modeling approach.
Addison, P F E; de Bie, K; Rumpff, L
2015-10-01
We devised a participatory modeling approach for setting management thresholds that show when management intervention is required to address undesirable ecosystem changes. This approach was designed to be used when management thresholds: must be set for environmental indicators in the face of multiple competing objectives; need to incorporate scientific understanding and value judgments; and will be set by participants with limited modeling experience. We applied our approach to a case study where management thresholds were set for a mat-forming brown alga, Hormosira banksii, in a protected area management context. Participants, including management staff and scientists, were involved in a workshop to test the approach, and set management thresholds to address the threat of trampling by visitors to an intertidal rocky reef. The approach involved trading off the environmental objective, to maintain the condition of intertidal reef communities, with social and economic objectives to ensure management intervention was cost-effective. Ecological scenarios, developed using scenario planning, were a key feature that provided the foundation for where to set management thresholds. The scenarios developed represented declines in percent cover of H. banksii that may occur under increased threatening processes. Participants defined 4 discrete management alternatives to address the threat of trampling and estimated the effect of these alternatives on the objectives under each ecological scenario. A weighted additive model was used to aggregate participants' consequence estimates. Model outputs (decision scores) clearly expressed uncertainty, which can be considered by decision makers and used to inform where to set management thresholds. This approach encourages a proactive form of conservation, where management thresholds and associated actions are defined a priori for ecological indicators, rather than reacting to unexpected ecosystem changes in the future. © 2015 The Authors Conservation Biology published by Wiley Periodicals, Inc. on behalf of Society for Conservation Biology.
Zemek, Allison; Garg, Rohit; Wong, Brian J. F.
2014-01-01
Objectives/Hypothesis Characterizing the mechanical properties of structural cartilage grafts used in rhinoplasty is valuable because softer engineered tissues are more time- and cost-efficient to manufacture. The aim of this study is to quantitatively identify the threshold mechanical stability (e.g., Young’s modulus) of columellar, L-strut, and alar cartilage replacement grafts. Study Design Descriptive, focus group survey. Methods Ten mechanical phantoms of identical size (5 × 20 × 2.3 mm) and varying stiffness (0.360 to 0.85 MPa in 0.05 MPa increments) were made from urethane. A focus group of experienced rhinoplasty surgeons (n = 25, 5 to 30 years in practice) were asked to arrange the phantoms in order of increasing stiffness. Then, they were asked to identify the minimum acceptable stiffness that would still result in favorable surgical outcomes for three clinical applications: columellar, L-strut, and lateral crural replacement grafts. Available surgeons were tested again after 1 week to evaluate intra-rater consistency. Results For each surgeon, the threshold stiffness for each clinical application differed from the threshold values derived by logistic regression by no more than 0.05 MPa (accuracy to within 10%). Specific thresholds were 0.56, 0.59, and 0.49 MPa for columellar, L-strut, and alar grafts, respectively. For comparison, human nasal septal cartilage is approximately 0.8 MPa. Conclusions There was little inter- and intra-rater variation of the identified threshold values for adequate graft stiffness. The identified threshold values will be useful for the design of tissue-engineered or semisynthetic cartilage grafts for use in structural nasal surgery. PMID:20513022
Experimental research on femto-second laser damaging array CCD cameras
NASA Astrophysics Data System (ADS)
Shao, Junfeng; Guo, Jin; Wang, Ting-feng; Wang, Ming
2013-05-01
Charge-coupled devices (CCDs) are widely used in military and security applications, such as airborne and ship-based surveillance, satellite reconnaissance, and so on. Homeland security requires effective means to negate these advanced imaging systems. Research shows that CCD-based electro-optical systems can be significantly dazzled or even damaged by high-repetition-rate pulsed lasers. Here, we report on femtosecond laser interaction with a CCD camera, which is likely to be of great importance in the future. Femtosecond lasers are comparatively new sources with unique characteristics, such as extremely short pulse width (1 fs = 10^-15 s), extremely high peak power (1 TW = 10^12 W), and, in particular, distinctive behavior when interacting with matter. Studies of femtosecond laser interaction with materials (metals, dielectrics) clearly indicate that non-thermal effects dominate the process, in marked contrast to the interaction of long pulses with matter. First, damage threshold tests were performed with the femtosecond laser acting on the CCD camera. An 800 nm, 500 μJ, 100 fs laser pulse was used to irradiate an interline CCD solid-state image sensor in the experiment. In order to focus the laser energy onto the tiny CCD active cells, an F/5.6 optical system was used. Sony production CCDs were chosen as typical targets. The damage threshold was evaluated from multiple test data. Point damage, line damage, and full-array damage were observed as the irradiating pulse energy was continuously increased during the experiment. The point damage threshold was found to be 151.2 mJ/cm^2, the line damage threshold 508.2 mJ/cm^2, and the full-array damage threshold 5.91 J/cm^2. Although the phenomena are almost the same as for nanosecond laser interaction with CCDs, these damage thresholds are substantially lower than the values obtained from nanosecond laser interaction with CCDs. The electrical characteristics after different degrees of damage were then tested with a multimeter, measuring the resistance values between clock signal lines. Comparing the resistance values of the CCD before and after damage, it was found that the resistance decreased significantly between the vertical transfer clock signal lines; the same result was found between a vertical transfer clock signal line and the earth electrode (ground). Finally, the damage position and damage mechanism were analyzed using the above results and SEM morphological experiments. Point damage results from the laser destroying material and shows no macroscopic electrical effect. Line damage is quite different from point damage, showing deeper erosion of the material; more importantly, short circuits are found between vertical clock lines. Under SEM, full-array damage is even more severe than line damage, although no electrical features clearly different from those of line damage were found. Further research on the mechanism of femtosecond-laser-induced CCD damage with more advanced tools is anticipated. This research is valuable for electro-optical countermeasure and/or laser shielding applications.
Tremblay, Louis A; Clark, Dana; Sinner, Jim; Ellis, Joanne I
2017-09-20
The sustainable management of estuarine and coastal ecosystems requires robust frameworks due to the presence of multiple physical and chemical stressors. In this study, we assessed whether ecological health decline, based on community structure composition changes along a pollution gradient, occurred at levels below guideline threshold values for copper, zinc and lead. Canonical analysis of principal coordinates (CAP) was used to characterise benthic communities along a metal contamination gradient. The analysis revealed changes in benthic community distribution at levels below the individual guideline values for the three metals. These results suggest that field-based measures of ecological health analysed with multivariate tools can provide additional information to single metal guideline threshold values to monitor large systems exposed to multiple stressors.
NASA Astrophysics Data System (ADS)
Yu, Xinting; Hörst, Sarah M.; He, Chao; Bridges, Nathan T.; Burr, Devon M.; Sebree, Joshua A.; Smith, James K.
2017-11-01
Saltation threshold, the minimum wind speed for sediment transport, is a fundamental parameter in aeolian processes. Measuring this threshold using boundary layer wind tunnels, in which particles are mobilized by flowing air, for a subset of different planetary conditions can inform our understanding of physical processes of sediment transport. The presence of liquid, such as water on Earth or methane on Titan, may affect the threshold values to a great extent. Sediment density is also crucial for determining threshold values. Here we provide quantitative data on density and water content of common wind tunnel materials (including chromite, basalt, quartz sand, beach sand, glass beads, gas chromatograph packing materials, walnut shells, iced tea powder, activated charcoal, instant coffee, and glass bubbles) that have been used to study conditions on Earth, Titan, Mars, and Venus. The measured density values for low density materials are higher compared to literature values (e.g., ∼30% for walnut shells), whereas for the high density materials, there is no such discrepancy. We also find that low density materials have much higher water content and longer atmospheric equilibration timescales compared to high density sediments. We used thermogravimetric analysis (TGA) to quantify surface and internal water and found that over 80% of the total water content is surface water for low density materials. In the Titan Wind Tunnel (TWT), where Reynolds number conditions similar to those on Titan can be achieved, we performed threshold experiments with the standard walnut shells (125-150 μm, 7.2% water by mass) and dried walnut shells, in which the water content was reduced to 1.7%. The threshold results for the two scenarios are almost the same, which indicates that humidity had a negligible effect on threshold for walnut shells in this experimental regime. When the water content is lower than 11.0%, the interparticle forces are dominated by adsorption forces, whereas at higher values the interparticle forces are dominated by much larger capillary forces. For materials with low equilibrium water content, like quartz sand, capillary forces dominate. When the interparticle forces are dominated by adsorption forces, the threshold does not increase with increasing relative humidity (RH) or water content. Only when the interparticle forces are dominated by capillary forces does the threshold start to increase with increasing RH/water content. Since tholins have a low methane content (0.3% at saturation, [Curtis, D. B., Hatch, C. D., Hasenkopf, C. A., et al., 2008. Laboratory studies of methane and ethane adsorption and nucleation onto organic particles: Application to Titan's clouds. Icarus, 195, 792. http://dx.doi.org/10.1016/j.icarus.2008.02.003]), we believe tholins would behave similarly to quartz sand when subjected to methane moisture.
NASA Astrophysics Data System (ADS)
Rieder, Harald E.; Staehelin, Johannes; Maeder, Jörg A.; Peter, Thomas; Ribatet, Mathieu; Davison, Anthony C.; Stübi, Rene; Weihs, Philipp; Holawe, Franz
2010-05-01
In this study, tools from extreme value theory (e.g. Coles, 2001; Ribatet, 2007) are applied for the first time in the field of stratospheric ozone research, as statistical analysis showed that previously used concepts assuming a Gaussian distribution of total ozone data (e.g. fixed deviations from mean values) do not adequately address the internal data structure concerning extremes. The study illustrates that tools based on extreme value theory are appropriate for identifying ozone extremes and describing the tails of the world's longest total ozone record (Arosa, Switzerland; for details see Staehelin et al., 1998a,b) (Rieder et al., 2010a). A daily moving threshold was implemented to account for the seasonal cycle in total ozone. The frequency of days with extreme low (termed ELOs) and extreme high (termed EHOs) total ozone, and their influence on mean values and trends, are analyzed for the Arosa total ozone time series. The results show (a) an increase in ELOs and (b) a decrease in EHOs during the last decades, and (c) that the overall trend in total ozone during the 1970s and 1980s is strongly dominated by changes in these extreme events. After removing the extremes, the time series shows a strongly reduced trend (a reduction by a factor of 2.5 for the trend in the annual mean). Furthermore, it is shown that the fitted model represents the tails of the total ozone data set with very high accuracy over the entire range (including absolute monthly minima and maxima), and the frequency distribution of ozone mini-holes (using constant thresholds) can also be calculated with high accuracy. Analyzing the tails, instead of a small fraction of days below constant thresholds, provides deeper insight into time series properties. Excursions in the frequency of extreme events reveal "fingerprints" of dynamical factors, such as ENSO or NAO, and of chemical factors, such as cold Arctic vortex ozone losses, as well as of major volcanic eruptions of the 20th century (e.g. Gunung Agung, El Chichón, Mt. Pinatubo). Furthermore, the atmospheric loading of ozone-depleting substances leads to a continuous modification of column ozone in the northern hemisphere, also with respect to extreme values (partly again in connection with polar vortex contributions). It is shown that application of extreme value theory allows the identification of many more such fingerprints than conventional time series analysis of annual and seasonal mean values. In particular, the analysis shows the strong influence of dynamics, revealing that even moderate ENSO and NAO events have a discernible effect on total ozone (Rieder et al., 2010b). Overall, the presented extremes concept provides new information on time series properties, variability, trends, and the influence of dynamics and chemistry, complementing earlier analyses focusing only on monthly (or annual) mean values. References: Coles, S.: An Introduction to Statistical Modeling of Extreme Values, Springer Series in Statistics, ISBN:1852334592, Springer, Berlin, 2001. Ribatet, M.: POT: Modelling peaks over a threshold, R News, 7, 34-36, 2007. Rieder, H.E., Staehelin, J., Maeder, J.A., Ribatet, M., Stübi, R., Weihs, P., Holawe, F., Peter, T., and Davison, A.D. (2010): Extreme events in total ozone over Arosa - Part I: Application of extreme value theory, to be submitted to ACPD.
Rieder, H.E., Staehelin, J., Maeder, J.A., Ribatet, M., Stübi, R., Weihs, P., Holawe, F., Peter, T., and Davison, A.D. (2010): Extreme events in total ozone over Arosa - Part II: Fingerprints of atmospheric dynamics and chemistry and effects on mean values and long-term changes, to be submitted to ACPD. Staehelin, J., Renaud, A., Bader, J., McPeters, R., Viatte, P., Hoegger, B., Bugnion, V., Giroud, M., and Schill, H.: Total ozone series at Arosa (Switzerland): Homogenization and data comparison, J. Geophys. Res., 103(D5), 5827-5842, doi:10.1029/97JD02402, 1998a. Staehelin, J., Kegel, R., and Harris, N. R.: Trend analysis of the homogenized total ozone series of Arosa (Switzerland), 1929-1996, J. Geophys. Res., 103(D7), 8389-8400, doi:10.1029/97JD03650, 1998b.
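The study's peaks-over-threshold machinery was built with R's POT package; a rough Python analogue on synthetic anomalies is sketched below, fitting a generalized Pareto distribution to excesses over a high quantile and converting the fit into a tail exceedance probability. The synthetic data and the 95% quantile threshold are placeholders, not the study's deseasonalized ozone series or threshold choice.

```python
import numpy as np
from scipy.stats import genpareto

# Peaks-over-threshold sketch in the spirit of the study (cf. Coles, 2001;
# Ribatet, 2007): synthetic stand-in for daily total-ozone anomalies, DU.
rng = np.random.default_rng(3)
anomalies = rng.normal(0, 15, 20_000)

u = np.quantile(anomalies, 0.95)   # threshold choice is the key modelling step
excesses = anomalies[anomalies > u] - u
shape, loc, scale = genpareto.fit(excesses, floc=0)

# Probability of exceeding a given level, via the fitted tail:
level = 60.0
p_exceed = (excesses.size / anomalies.size) * genpareto.sf(
    level - u, shape, loc=0, scale=scale)
print(f"u = {u:.1f} DU, GPD shape = {shape:.3f}, "
      f"P(X > {level:.0f} DU) = {p_exceed:.2e}")
```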
NASA Astrophysics Data System (ADS)
Abancó, Clàudia; Hürlimann, Marcel; Moya, José; Berenguer, Marc
2016-10-01
Torrential flows like debris flows or debris floods are fast movements formed by a mixture of water and varying amounts of unsorted solid material. They generally occur in steep torrents and pose high risk in mountainous areas. Rainfall is their most common triggering factor, and the analysis of the critical rainfall conditions is a fundamental research task. Because of their wide use in warning systems, rainfall thresholds for the triggering of torrential flows are an important outcome of such analysis and are derived empirically from data on past events. In 2009, a monitoring system was installed in the Rebaixader catchment, Central Pyrenees (Spain). Since then, rainfall data for 25 torrential flows ("TRIG rainfalls") have been recorded with a 5-min sampling frequency. Another 142 rainfalls that did not trigger torrential flows ("NonTRIG rainfalls") were also collected and analyzed. The goal of this work was threefold: (i) to characterize rainfall episodes in the Rebaixader catchment and compare rainfall data that triggered torrential flows with data that did not; (ii) to define and test intensity-duration (ID) thresholds using rainfall data measured inside the catchment with different techniques; and (iii) to analyze how the criterion used for defining the rainfall duration and the spatial variability of rainfall influence the threshold values obtained. The statistical analysis of the rainfall characteristics showed that the parameters that best discriminate between TRIG and NonTRIG rainfalls are the rainfall intensities, the mean rainfall, and the total rainfall amount. The antecedent rainfall was not significantly different between TRIG and NonTRIG rainfalls, as can be expected when the source material is very pervious (a sandy glacial soil at the study site). Thresholds were derived from data collected at one rain gauge located inside the catchment. Two different methods were applied to calculate the duration and intensity of rainfall: (i) using the total duration, Dtot, and mean intensity, Imean, of the rainfall event; and (ii) using floating durations, D, and intensities, Ifl, based on the maximum values over floating periods of different lengths. The resulting thresholds differ considerably (Imean = 6.20 Dtot^-0.36 and Ifl_90% = 5.49 D^-0.75, respectively), showing a strong dependence on the applied methodology. On the other hand, the definition of the thresholds is affected by several types of uncertainty. Data from both rain gauges and weather radar were used to analyze the uncertainty associated with the spatial variability of the triggering rainfalls. The analysis indicates that the precipitation recorded by nearby rain gauges can introduce major uncertainties, especially for convective summer storms; incorporating radar rainfall can therefore significantly improve the accuracy of the measured triggering rainfall. Finally, thresholds were also derived according to three different criteria for the duration of the triggering rainfall: (i) the duration until the peak intensity; (ii) the duration until the end of the rainfall; and (iii) the duration until the triggering of the torrential flow. An important contribution of this work is the assessment of the threshold relationships obtained using the third definition of duration. Moreover, important differences are observed between the resulting thresholds, showing that ID relationships depend significantly on the applied methodology.
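The reported total-duration threshold can be applied directly as a classification rule. The small sketch below encodes I = a * D^b with the coefficients quoted above (Imean = 6.20 Dtot^-0.36); the units (mm/h and hours) are assumed, since the abstract does not state them.

```python
def id_threshold_intensity(duration_h, a=6.20, b=-0.36):
    """Intensity-duration threshold I = a * D**b with the total-duration
    coefficients reported in the abstract; units assumed mm/h and hours."""
    return a * duration_h ** b

def exceeds_threshold(mean_intensity, duration_h, **kw):
    """True when a rainfall event plots above the ID threshold curve."""
    return mean_intensity > id_threshold_intensity(duration_h, **kw)

# A 3 h storm with a mean intensity of 5 mm/h:
print(id_threshold_intensity(3.0))   # ~4.2 mm/h
print(exceeds_threshold(5.0, 3.0))   # True -> potentially triggering
```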
Atmospheric Science Data Center
2017-10-11
Release notes (excerpt): added a new inland water class for the RCCM calculation and changed the threshold and surface classification datasets accordingly; modified land second ... 06/21/2000: First version of RCCM; pre-launch threshold values are used; new ancillary files: ...
Modeling spatially-varying landscape change points in species occurrence thresholds
Wagner, Tyler; Midway, Stephen R.
2014-01-01
Predicting species distributions at scales of regions to continents is often necessary, as large-scale phenomena influence the distributions of spatially structured populations. Land use and land cover are important large-scale drivers of species distributions, and landscapes are known to create species occurrence thresholds, where small changes in a landscape characteristic result in abrupt changes in occurrence. The value of the landscape characteristic at which this change occurs is referred to as a change point. We present a hierarchical Bayesian threshold model (HBTM) that allows for estimating spatially varying parameters, including change points. Our model also allows the estimated parameters themselves to be modeled, in an effort to understand large-scale drivers of variability in the effects of land use and land cover on species occurrence thresholds. We use range-wide detection/nondetection data for the eastern brook trout (Salvelinus fontinalis), a stream-dwelling salmonid, to illustrate our HBTM for estimating and modeling spatially varying threshold parameters in species occurrence. We parameterized the model for investigating thresholds in landscape predictor variables that are measured as proportions, and which are therefore restricted to values between 0 and 1. Our HBTM estimated spatially varying thresholds in brook trout occurrence for the proportions of both agricultural and urban land use. There was relatively little spatial variation in change point estimates, although there was spatial variability in the overall shape of the threshold response and the associated uncertainty. In addition, regional mean stream water temperature was correlated with the change point parameters for the proportion of urban land use, with the change point value increasing with increasing mean stream water temperature. We present a framework for quantifying macrosystem variability in spatially varying threshold model parameters in relation to important large-scale drivers such as land use and land cover. Although the model presented is a logistic HBTM, it can easily be extended to accommodate other statistical distributions for modeling species richness or abundance.
El B'charri, Oussama; Latif, Rachid; Elmansouri, Khalifa; Abenaou, Abdenbi; Jenkal, Wissam
2017-02-07
Since the electrocardiogram (ECG) signal has a low frequency and a weak amplitude, it is sensitive to miscellaneous mixed noises, which may reduce diagnostic accuracy and hinder the physician's correct decisions about patients. The dual-tree wavelet transform (DT-WT) is one of the most recent enhanced versions of the discrete wavelet transform. However, threshold tuning of this method for noise removal from ECG signals has not been investigated yet. In this work, we provide a comprehensive study of the impact of the choice of threshold algorithm, threshold value, and the appropriate wavelet decomposition level on ECG signal de-noising performance. A set of simulations was performed on both synthetic and real ECG signals. First, the synthetic ECG signal was used to observe the algorithm's response. The evaluation on synthetic ECG signals corrupted by various types of noise showed that the modified unified threshold with wavelet hyperbolic thresholding performs better for realistic and colored noises. The tuned threshold was then used on real ECG signals from the MIT-BIH database. The results showed that the proposed method achieves higher performance than the ordinary dual-tree wavelet transform in all kinds of noise removal from ECG signals. The simulation results indicate that the algorithm is robust to all kinds of noise at varying degrees of input noise, providing a high-quality clean signal. Moreover, the algorithm is quite simple and can be used in real-time ECG monitoring.
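To make the thresholding step concrete, here is a generic wavelet de-noising sketch in Python. Note the hedges: PyWavelets provides the ordinary DWT, used here as a stand-in for the paper's dual-tree transform, and the universal threshold sigma * sqrt(2 log N) is one common choice, not necessarily the tuned threshold the study arrives at.

```python
import numpy as np
import pywt

def wavelet_denoise(signal, wavelet="sym8", level=5, mode="soft"):
    """Threshold-based wavelet de-noising: decompose, threshold the detail
    coefficients with the universal threshold (noise sigma estimated
    robustly from the finest scale), reconstruct."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745   # robust noise estimate
    thr = sigma * np.sqrt(2 * np.log(len(signal)))
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode=mode)
                            for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)

fs = 360                                 # MIT-BIH sampling rate, Hz
t = np.arange(2 * fs) / fs
clean = np.sin(2 * np.pi * 1.7 * t)      # toy stand-in for an ECG trace
noisy = clean + 0.2 * np.random.default_rng(4).normal(size=t.size)
print(np.std(wavelet_denoise(noisy)[:t.size] - clean))  # residual error
```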
NASA Astrophysics Data System (ADS)
Amrani, Aumeur El; Es-saghiri, Abdeljabbar; Boufounas, El-Mahjoub; Lucas, Bruno
2018-06-01
The performance of a pentacene-based organic thin-film transistor (OTFT) with polymethylmethacrylate as the dielectric insulator and an indium tin oxide gate electrode is investigated. We show, on the one hand, that the threshold voltage increases with gate voltage and, on the other hand, that it decreases with drain voltage. We also observe that the onset voltage shifts toward positive values as the drain voltage increases. In addition, the threshold-onset differential voltage (TODV) is proposed as an original approach to estimate an averaged carrier density in pentacene. A value of about 4.5 × 10^16 cm^-3 is reached at a relatively high gate voltage of -50 V; this value is in good agreement with values reported in the literature using other measurement techniques. At low applied gate voltage, however, the averaged pentacene carrier density remains two orders of magnitude lower, about 2.8 × 10^14 cm^-3, similar to that obtained from the space-charge-limited current approach at a low applied bias voltage, about 2.2 × 10^14 cm^-3. Furthermore, high IOn/IOff and IOn/IOnset current ratios of 5 × 10^6 and 7.5 × 10^7, respectively, are reported at lower drain voltage. The investigated OTFTs also showed good electrical performance, including carrier mobility increasing with gate voltage; mobility values of 4.5 × 10^-2 cm^2 V^-1 s^-1 and 4.25 × 10^-2 cm^2 V^-1 s^-1 are reached for the linear and saturation regimes, respectively. These results are of particular interest since the current modulation ratio exceeds 10^7, which for some logic gate applications is a more important requirement than high mobility.
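A standard way to extract the threshold voltage and saturation mobility that the abstract reports is a linear fit of sqrt(|Id|) against Vg in saturation, using Id = (W/2L) * mu * Ci * (Vg - Vt)^2. The sketch below demonstrates this on synthetic data; the geometry and parameter values are illustrative, not the device's.

```python
import numpy as np

def saturation_vt_mobility(vg, id_sat, W, L, Ci):
    """Extract threshold voltage and saturation mobility from a p-type
    transfer curve: a linear fit of sqrt(|Id|) versus Vg gives Vt
    (x-intercept) and mu (from the slope)."""
    y = np.sqrt(np.abs(id_sat))
    slope, intercept = np.polyfit(vg, y, 1)
    vt = -intercept / slope
    mu = 2 * L * slope ** 2 / (W * Ci)
    return vt, mu

vg = np.linspace(-50, -20, 7)   # gate voltages, V
# Illustrative ground truth: mu (cm2/Vs), Vt (V), W (cm), L (cm), Ci (F/cm2).
mu_true, vt_true, W, L, Ci = 0.04, -8.0, 1e-3, 5e-5, 1e-8
id_sat = (W / (2 * L)) * mu_true * Ci * (vg - vt_true) ** 2
print(saturation_vt_mobility(vg, id_sat, W, L, Ci))  # ~(-8.0, 0.04)
```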
Analysis of Publically Available Skin Sensitization Data from REACH Registrations 2008–2014
Luechtefeld, Thomas; Maertens, Alexandra; Russo, Daniel P.; Rovida, Costanza; Zhu, Hao; Hartung, Thomas
2017-01-01
Summary The public data on skin sensitization from REACH registrations already included 19,111 studies on skin sensitization in December 2014, making it the largest repository of such data so far (1,470 substances with mouse LLNA, 2,787 with GPMT, 762 with both in vivo and in vitro data, and 139 with only in vitro data); 21% were classified as sensitizers. The extracted skin sensitization data were analyzed to identify relationships in skin sensitization guidelines, visualize structural relationships of sensitizers, and build models to predict sensitization. A chemical with molecular weight > 500 Da is generally considered non-sensitizing owing to low bioavailability, but 49 sensitizing chemicals with a molecular weight > 500 Da were found. A chemical similarity map was produced using PubChem's 2D Tanimoto similarity metric and Gephi force-layout visualization. Nine clusters of chemicals were identified by Blondel's module recognition algorithm, revealing wide module-dependent variation. Approximately 31% of mapped chemicals are Michael acceptors, but this alone does not imply skin sensitization. A simple sensitization model using molecular weight and five ToxTree structural alerts showed a balanced accuracy of 65.8% (specificity 80.4%, sensitivity 51.4%), demonstrating that structural alerts have information value. A simple variant of k-nearest neighbors outperformed the ToxTree approach even at a 75% similarity threshold (82% balanced accuracy at a 0.95 threshold). At higher thresholds, the balanced accuracy increased; lower similarity thresholds decrease sensitivity faster than specificity. This analysis scopes the landscape of chemical skin sensitization, demonstrating the value of large public datasets for health hazard prediction. PMID:26863411
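A similarity-thresholded k-nearest-neighbor classifier of the kind evaluated above fits in a few lines. The sketch below votes only among neighbors at or above a Tanimoto cutoff and abstains otherwise; the fingerprints, labels, and the abstention rule are hypothetical stand-ins for the paper's actual pipeline.

```python
import numpy as np

def tanimoto(a, b):
    """Tanimoto (Jaccard) similarity between binary fingerprint vectors."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 0.0

def knn_sensitizer_vote(query_fp, train_fps, train_labels,
                        threshold=0.75, k=3):
    """Similarity-thresholded k-NN: only neighbours at or above the
    Tanimoto threshold may vote; returns None (no prediction) if no
    neighbour qualifies -- an assumed abstention rule."""
    sims = np.array([tanimoto(query_fp, fp) for fp in train_fps])
    keep = np.flatnonzero(sims >= threshold)
    if keep.size == 0:
        return None
    top = keep[np.argsort(sims[keep])[::-1][:k]]
    return int(np.asarray(train_labels)[top].mean() >= 0.5)

rng = np.random.default_rng(5)
train_fps = rng.integers(0, 2, size=(50, 166))   # hypothetical 166-bit keys
train_labels = rng.integers(0, 2, size=50)       # 1 = sensitizer
print(knn_sensitizer_vote(train_fps[0], train_fps, train_labels))
```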
Minet, V; Baudar, J; Bailly, N; Douxfils, J; Laloy, J; Lessire, S; Gourdin, M; Devalet, B; Chatelain, B; Dogné, J M; Mullier, F
2014-06-01
Accurate diagnosis of heparin-induced thrombocytopenia (HIT) is essential but remains challenging. We have previously demonstrated, in a retrospective study, the usefulness of the combination of the 4Ts score, AcuStar HIT, and heparin-induced multiple electrode aggregometry (HIMEA) with optimized thresholds. Here we aimed to prospectively explore the performance of our optimized diagnostic algorithm in patients with suspected HIT. The secondary objective was to evaluate the performance of AcuStar HIT-Ab (PF4-H) against the clinical outcome. 116 inpatients with clinically suspected immune HIT were included, and our optimized diagnostic algorithm was applied to each patient. Sensitivity, specificity, negative predictive value (NPV), and positive predictive value (PPV) of the overall diagnostic strategy, as well as of AcuStar HIT-Ab (at the manufacturer's thresholds and at our thresholds), were calculated using the clinical diagnosis as the reference. Among the 116 patients, 2 had clinically diagnosed HIT; these 2 patients were positive on AcuStar HIT-Ab, AcuStar HIT-IgG, and HIMEA. Using our optimized algorithm, all patients were correctly diagnosed. AcuStar HIT-Ab at our cut-off (>9.41 U/mL) and at the manufacturer's cut-off (>1.00 U/mL) both showed a sensitivity of 100.0%, with specificities of 99.1% and 90.4%, respectively. The combination of the 4Ts score, the HemosIL® AcuStar HIT, and HIMEA with optimized thresholds may be useful for the rapid and accurate exclusion of the diagnosis of immune HIT. Copyright © 2014 Elsevier Ltd. All rights reserved.
Net reclassification index at event rate: properties and relationships.
Pencina, Michael J; Steyerberg, Ewout W; D'Agostino, Ralph B
2017-12-10
The net reclassification improvement (NRI) is an attractively simple summary measure quantifying the improvement in performance due to the addition of new risk marker(s) to a prediction model. Originally proposed for settings with well-established classification thresholds, it was quickly extended to applications with no thresholds in common use. Here we aim to explore the properties of the NRI at event rate. We express this NRI as a difference in performance measures for the new versus the old model and show that the quantity underlying this difference is related to several global as well as decision-analytic measures of model performance. It maximizes the relative utility (standardized net benefit) across all classification thresholds and can be viewed as the Kolmogorov-Smirnov distance between the distributions of risk among events and non-events. It can be expressed as a special case of the continuous NRI, measuring reclassification from the 'null' model with no predictors. It is also a criterion based on the value of information and quantifies the reduction in expected regret for a given regret function, casting the NRI at event rate as a measure of incremental reduction in expected regret. More generally, we find it informative to present plots of standardized net benefit/relative utility for the new versus the old model across the domain of classification thresholds. These plots can then be summarized with their maximum values, and the increment in model performance can be described by the NRI at event rate. We provide theoretical examples and a clinical application on the evaluation of prognostic biomarkers for atrial fibrillation. Copyright © 2016 John Wiley & Sons, Ltd.
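To make the relationships the abstract summarizes concrete, here is a minimal empirical sketch (not the paper's derivation): the NRI at event rate as the between-model difference in TPR − FPR evaluated at the threshold equal to the observed event rate, alongside the Kolmogorov-Smirnov distance as the maximum of TPR − FPR over all thresholds, which is the quantity the abstract identifies with the measure for a well-calibrated model.

```python
import numpy as np

def tpr_fpr(risk, y, t):
    """Sensitivity and false-positive rate when risk >= t is called positive."""
    pos = risk >= t
    return pos[y == 1].mean(), pos[y == 0].mean()

def nri_at_event_rate(risk_old, risk_new, y):
    """Difference in (TPR - FPR) between models at threshold = event rate."""
    p = y.mean()
    tpr_o, fpr_o = tpr_fpr(risk_old, y, p)
    tpr_n, fpr_n = tpr_fpr(risk_new, y, p)
    return (tpr_n - fpr_n) - (tpr_o - fpr_o)

def ks_distance(risk, y):
    """Kolmogorov-Smirnov distance between the risk distributions of events
    and non-events, i.e. the max over thresholds of TPR - FPR."""
    return max(t - f for t, f in (tpr_fpr(risk, y, u) for u in np.unique(risk)))
```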
Froud, Robert; Abel, Gary
2014-01-01
Background Receiver Operating Characteristic (ROC) curves are being used to identify Minimally Important Change (MIC) thresholds on scales that measure a change in health status. In quasi-continuous patient-reported outcome measures, such as those that measure changes in chronic diseases with variable clinical trajectories, sensitivity and specificity are often valued equally. Although methodologists agree that the two should be valued equally, different approaches have been taken to estimating MIC thresholds using ROC curves. Aims and objectives We aimed to compare the approaches in use with a new approach, exploring the extent to which the methods choose different thresholds and considering the effect of differences on conclusions in responder analyses. Methods Using graphical methods, hypothetical data, and data from a large randomised controlled trial of manual therapy for low back pain, we compared two existing approaches with a new approach based on the sum of squares of 1-sensitivity and 1-specificity. Results There can be divergence in the thresholds chosen by different estimators. The cut-point selected by a given estimator depends on the relationship between the cut-points in ROC space and the contours described by the estimator. In particular, asymmetry and the number of possible cut-points affect threshold selection. Conclusion The choice of MIC estimator is important. Different methods for choosing cut-points can lead to materially different MIC thresholds and thus affect the results of responder analyses and trial conclusions. An estimator based on the smallest sum of squares of 1-sensitivity and 1-specificity is preferable when sensitivity and specificity are valued equally. Unlike other methods currently in use, the sum-of-squares method always and efficiently chooses the cut-point closest to the top-left corner of ROC space, regardless of the shape of the ROC curve. PMID:25474472
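A minimal sketch of the proposed estimator: among candidate cut-points, choose the one minimizing (1 − sensitivity)² + (1 − specificity)², i.e. the ROC point closest to the top-left corner. The data layout (one change score per patient plus an external improvement anchor) is an assumption for illustration.

```python
import numpy as np

def mic_cutpoint(change, improved, candidates):
    """Sum-of-squares MIC estimator: return the candidate cut-point whose
    ROC point lies closest to the top-left corner of ROC space."""
    best, best_d2 = None, np.inf
    for c in candidates:
        called = change >= c                      # classified as improved
        sens = called[improved == 1].mean()
        spec = (~called)[improved == 0].mean()
        d2 = (1 - sens) ** 2 + (1 - spec) ** 2
        if d2 < best_d2:
            best, best_d2 = c, d2
    return best
```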
NASA Astrophysics Data System (ADS)
Rossi, Giuseppe; Garrote, Luis; Caporali, Enrica
2010-05-01
Identifying the occurrence, the extent and the magnitude of a drought can be delicate, requiring detection of depletions of supplies and increases in demand. Drought indices, particularly the meteorological ones, can describe the onset and the persistency of droughts, especially in natural systems. However, they have to be used cautiously when applied to water supply systems. They show little correlation with water shortage situations, since water storage, as well as demand fluctuation, plays an important role in water resources management. For that reason a more dynamic indicator relating supply and demand is required in order to identify situations when there is risk of water shortage. In water supply systems there is great variability in the natural water resources and also in the demands. These quantities can only be defined probabilistically. This great variability is faced by defining threshold values, expressed in probabilistic terms, that measure the hydrologic state of the system. They can identify specific actions in an operational context at different levels of severity, such as the normal, pre-alert, alert and emergency scenarios. They can simplify the decision-making required during stressful periods and can help mitigate the impacts of drought by clearly defining the conditions requiring actions. The threshold values are defined considering the probability of satisfying a given fraction of the demand in a certain time horizon, and are calibrated through discussion with water managers. A simplified model of the water resources system is built to evaluate the threshold values and the management rules. The threshold values are validated with a long-term simulation that takes into account the characteristics of the evaluated system. The levels and volumes in the different reservoirs are simulated using 20-30 year time series. The critical situations are assessed month by month in order to evaluate optimal management rules during the year and avoid conditions of total water shortage. The methodology is applied to the urban area Firenze-Prato-Pistoia in central Tuscany, Italy. The catchment of the investigated area has a surface of 1231 km2 and, according to the 2001 ISTAT census, 945,972 inhabitants.
Gil de Prado, Elena; Rivas, Eva-María; de Silóniz, María-Isabel; Diezma, Belén; Barreiro, Pilar; Peinado, José M
2014-11-01
The colony shape of four yeast species growing on agar medium was measured for 116 days by image analysis. Initially, all the colonies are circular, with regular edges. The loss of circularity can be quantitatively estimated by the eccentricity index, Ei, calculated as the ratio between their orthogonal vertical and horizontal diameters. Ei can increase from 1 (complete circularity) to a maximum of 1.17-1.30, depending on the species. One colony inhibits its neighbour only when it has reached a threshold area. Then, the Ei of the inhibited colony increases proportionally to the area of the inhibitory colony. The initial distance between colonies affects those threshold values but not the proportionality, Ei/area; this inhibition affects the shape but not the total surface of the colony. The appearance of irregularities in the edges is associated, in all the species, not with age but with nutrient exhaustion. The edge irregularity can be quantified by the Fourier index, Fi, calculated as the minimum number of Fourier coefficients that are needed to describe the colony contour with a 99% fit. An ad hoc function has been developed in Matlab v. 7.0 to automate the computation of the Fourier coefficients. In young colonies, Fi has a value between 2 (circumference) and 3 (ellipse). These values are maintained in mature colonies of Debaryomyces, but can reach values up to 14 in Saccharomyces. All the species studied showed the inhibition of growth in facing colony edges, but only three species showed edge irregularities associated with substrate exhaustion. Copyright © 2014 John Wiley & Sons, Ltd.
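As a concrete reading of the Fourier index, the sketch below (in Python rather than the Matlab function mentioned above) counts the smallest number of Fourier coefficients needed to reconstruct a closed contour, with "99% fit" interpreted as capturing all but 1% of the contour signal energy. That interpretation is an assumption, since the paper's exact fitness measure is not reproduced here; on this counting a circle needs 2 coefficients and an ellipse 3, matching the stated Fi range for young colonies.

```python
import numpy as np

def fourier_index(contour_xy, fit=0.99):
    """Smallest number of Fourier coefficients needed to reconstruct a closed
    contour to the requested fit, adding coefficients in order of magnitude.
    Fit is measured as the fraction of contour signal energy captured."""
    z = contour_xy[:, 0] + 1j * contour_xy[:, 1]   # contour as complex signal
    Z = np.fft.fft(z)
    order = np.argsort(-np.abs(Z))                 # largest coefficients first
    total = np.sum(np.abs(z) ** 2)
    keep = np.zeros_like(Z)
    for k, idx in enumerate(order, start=1):
        keep[idx] = Z[idx]
        rec = np.fft.ifft(keep)
        if np.sum(np.abs(z - rec) ** 2) <= (1 - fit) * total:
            return k
    return len(z)

# toy check: an ellipse sampled at 256 points should give an index of 3
t = np.linspace(0, 2 * np.pi, 256, endpoint=False)
ellipse = np.column_stack([3 + 2 * np.cos(t), 1 + np.sin(t)])
print(fourier_index(ellipse))
```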
Xu, Xiaowen; Wang, Peijun; Ma, Liang; Shao, Zhihong; Zhang, Min
2015-01-20
To explore the value of diffusion weighted imaging (DWI) and perfusion weighted imaging (PWI) in distinguishing benign from malignant renal masses and in differentiating the histological types of renal masses. Fifteen healthy volunteers and 46 patients with renal masses proven by pathology, including clear cell carcinomas (n = 18), papillary carcinomas (n = 8), chromophobe carcinomas (n = 7) and angiomyolipomas (n = 13), were examined with DWI and PWI at 3.0 T MRI. ANOVA was employed to compare the values of the transfer constant (K(trans)), the rate constant of backflux (Kep) and the extra-vascular extra-cellular space fractional volume (Ve) derived from PWI, and the ADC value derived from DWI, between normal kidney and the different histological types of renal masses. Receiver operating characteristic (ROC) curves were used to analyze and compare the diagnostic value of PWI and DWI in differentiating benign and malignant renal masses. The ADC value of normal renal parenchyma was (2.10 ± 0.24) × 10⁻³ mm²/s, statistically higher than that of benign and malignant renal masses (P < 0.05). The ADC value of benign masses was statistically higher than that of all histological types of malignant masses (P < 0.05). Among the three histological types of malignancies, clear cell carcinoma showed the statistically highest ADC value (P < 0.05), but the difference between papillary carcinoma and chromophobe carcinoma was not statistically significant (P > 0.05). Values of K(trans), Kep and Ve differed statistically between normal renal parenchyma and the different histological types of renal masses. Values of K(trans) and Ve in the three histological types of malignant renal masses were statistically higher than those of benign renal masses. The Kep value of clear cell carcinoma was significantly higher than that of benign renal masses (P < 0.05); the other histological types of malignant masses did not differ significantly from benign masses. For the three malignant types, K(trans) of clear cell carcinoma, papillary carcinoma and chromophobe carcinoma was (0.85 ± 0.27), (0.51 ± 0.04) and (0.39 ± 0.05)/min, respectively, decreasing progressively, and the differences were statistically significant (P < 0.05). The Ve value of renal clear cell carcinoma was statistically higher than that of papillary carcinoma (P < 0.05). On ROC analysis, K(trans) had the largest AUC (AUC = 0.937) for benign versus malignant renal masses at a threshold of 0.38/min, with a sensitivity of 87.9% and a specificity of 85.7%. The AUC of ADC was 0.823 (sensitivity 72.7%, specificity 92.9%) at a threshold of 1.40 × 10⁻³ mm²/s for differentiating benign from malignant masses; the AUC of Ve was 0.803 (sensitivity 78.8%, specificity 71.4%) at a threshold of 0.29/min; Kep showed lower diagnostic value. 3.0 T MRI DWI and PWI can effectively differentiate benign from the different histological types of malignant renal masses, and PWI is superior to DWI in differentiating benign and malignant renal masses. K(trans), with the largest AUC, showed the highest diagnostic value, while ADC remains irreplaceable in providing information on cellular structural features and the movement of water diffusion.
The evolution of pore connectivity in volcanic rocks
NASA Astrophysics Data System (ADS)
Colombier, Mathieu; Wadsworth, Fabian B.; Gurioli, Lucia; Scheu, Bettina; Kueppers, Ulrich; Di Muro, Andrea; Dingwell, Donald B.
2017-03-01
Pore connectivity is a measure of the fraction of pore space (vesicles, voids or cracks) in a material that is interconnected on the system length scale. Pore connectivity is fundamentally related to permeability, which has been shown to control magma outgassing and the explosive potential of magma during ascent in the shallowest part of the crust. Here, we compile a database of connectivity and porosity from published sources and supplement this with additional measurements, using natural volcanic rocks produced in a broad range of eruptive styles and with a range of bulk composition. The database comprises 2715 pairs of connectivity C and porosity ϕ values for rocks from 35 volcanoes as well as 116 products of experimental work. For 535 volcanic rock samples, the permeability k was also measured. Data from experimental studies constrain the general features of the relationship between C and ϕ associated with both vesiculation and densification processes, which can then be used to interpret natural data. To a first order, we show that a suite of rocks originating from effusive eruptive behaviour can be distinguished from rocks originating from explosive eruptive behaviour using C and ϕ. We observe that on this basis, a particularly clear distinction can be made between scoria formed in fire-fountains and that formed in Strombolian activity. With increasing ϕ, the onset of connectivity occurs at the percolation threshold ϕc which in turn can be hugely variable. We demonstrate that C is an excellent metric for constraining ϕc in suites of porous rocks formed in a common process and discuss the range of ϕc values recorded in volcanic rocks. The percolation threshold is key to understanding the onset of permeability, outgassing and compaction in shallow magmas. We show that this threshold is dramatically different in rocks formed during densification processes than in rocks formed in vesiculating processes and propose that this value is the biggest factor in controlling the evolution of permeability at porosities above ϕc.
Assis, Pedro I L S; Alonso, Rocío; Meirelles, Sérgio T; Moraes, Regina M
2015-07-01
Phytotoxic ozone (O3) levels have been recorded in the Metropolitan Region of São Paulo (MRSP). Flux-based critical levels for O3 through stomata have been adopted for some northern hemisphere species, showing better accuracy than the accumulated ozone exposure above a threshold of 40 ppb (AOT40). In Brazil, critical levels for vegetation protection against O3 adverse effects do not exist. The study aimed to investigate the applicability of the O3 deposition model DO3SE (Deposition of Ozone for Stomatal Exchange) to an O3-sensitive tropical tree species (Psidium guajava L. 'Paluma') under the MRSP environmental conditions, which are very unstable, and to assess the performance of O3 flux and AOT40 in relation to O3-induced leaf injuries. Stomatal conductance (gs) parameterization for 'Paluma' was carried out and used to calculate different rate thresholds (from 0 to 5 nmol O3 m⁻² projected leaf area (PLA) s⁻¹) for the phytotoxic ozone dose (POD). The model performance was assessed through the relationship between the measured and modeled gs. Leaf injuries were analyzed and associated with POD and AOT40. The model performance was satisfactory and significant (R² = 0.56; P < 0.0001; root-mean-square error (RMSE) = 116). As expected, high AOT40 values did not result in high POD values. Although high POD values do not always account for more injuries, POD0 showed better performance than AOT40 and the other rate thresholds for POD. Further investigation is necessary to improve the model and to check whether there is a critical level of ozone at which leaf injuries arise. The conclusion is that the DO3SE model for 'Paluma' is applicable in the MRSP as well as in temperate regions and may contribute to future directives.
Abraham, Michael H; Gola, Joelle M R; Cometto-Muñiz, J Enrique
2016-01-01
We present a method to assess the air quality of an environment based on the chemosensory irritation impact of mixtures of volatile organic compounds (VOCs) present in such environment. We begin by approximating the sigmoid function that characterizes psychometric plots of probability of irritation detection (Q) versus VOC vapor concentration to a linear function. First, we apply an established equation that correlates and predicts human sensory irritation thresholds (SIT) (i.e., nasal and eye irritation) based on the transfer of the VOC from the gas phase to biophases, e.g., nasal mucus and tear film. Second, we expand the equation to include other biological data (e.g., odor detection thresholds) and to include further VOCs that act mainly by "specific" effects rather than by transfer (i.e., "physical") effects as defined in the article. Then we show that, for 72 VOCs in common, Q values based on our calculated SITs are consistent with the Threshold Limit Values (TLVs) listed for those same VOCs on the basis of sensory irritation by the American Conference of Governmental Industrial Hygienists (ACGIH). Third, we set two equations to calculate the probability (Qmix) that a given air sample containing a number of VOCs could elicit chemosensory irritation: one equation based on response addition (Qmix scale: 0.00 to 1.00) and the other based on dose addition (1000*Qmix scale: 0 to 2000). We further validate the applicability of our air quality assessment method by showing that both Qmix scales provide values consistent with the expected sensory irritation burden from VOC mixtures present in a wide variety of indoor and outdoor environments as reported on field studies in the literature. These scales take into account both the concentration of VOCs at a particular site and the propensity of the VOCs to evoke sensory irritation. Copyright © 2015 Elsevier Ltd. All rights reserved.
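The response-addition equation is the standard independent-action form, Qmix = 1 − ∏(1 − Qi). The sketch below implements it with a clipped linear ramp standing in for the paper's linearized psychometric function; the ramp slope is an assumption not taken from the abstract, and the dose-addition variant (the 0 to 2000 scale) is not reproduced because its exact form is not given here.

```python
import numpy as np

def q_individual(conc, sit, slope=1.0):
    """Linearized probability of irritation detection for one VOC: a clipped
    ramp in concentration relative to its sensory irritation threshold (SIT).
    The ramp is an illustrative stand-in for the paper's approximation."""
    return float(np.clip(slope * conc / sit, 0.0, 1.0))

def qmix_response_addition(concs, sits):
    """Response addition: probability that at least one VOC in the mixture is
    detected as irritating, assuming independent detections (0.00-1.00 scale)."""
    q_none = 1.0
    for c, s in zip(concs, sits):
        q_none *= 1.0 - q_individual(c, s)
    return 1.0 - q_none

# e.g. three VOCs at one tenth, one half, and twice their SITs
print(qmix_response_addition([0.1, 0.5, 2.0], [1.0, 1.0, 1.0]))
```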
Sun, Wei; Song, Qipeng; Yu, Bing; Zhang, Cui; Mao, Dewei
2015-01-01
This study aimed to evaluate the test-retest reliability of a new device for assessing ankle joint kinesthesia. The device measures the passive motion threshold of four ankle joint movements, namely plantarflexion, dorsiflexion, inversion and eversion. A total of 21 healthy adults, including 13 males and 8 females, participated in the study. Each participant completed two sessions on two separate days with a 1-week interval. The sessions were administered by the same experimenter in the same laboratory. At least 12 trials (three successful trials in each of the four directions) were performed in each session. The mean values in each direction were calculated and analysed. The ICC values for test-retest reliability ranged from 0.737 (dorsiflexion) to 0.935 (eversion), whereas the SEM values ranged from 0.21° (plantarflexion) to 0.52° (inversion). The Bland-Altman plots showed that the reliability of plantarflexion-dorsiflexion was better than that of inversion-eversion. The results indicate that the reliability of the new device is fair to excellent, and that the device could be used to examine ankle joint kinesthesia.
De Cloedt, Lise; Emeriaud, Guillaume; Lefebvre, Émilie; Kleiber, Niina; Robitaille, Nancy; Jarlot, Christine; Lacroix, Jacques; Gauvin, France
2018-04-01
The incidence of transfusion-associated circulatory overload (TACO) is not well known in children, especially in pediatric intensive care unit (PICU) patients. All consecutive patients admitted over 1 year to the PICU of CHU Sainte-Justine were included after they received their first red blood cell transfusion. TACO was diagnosed using the criteria of the International Society of Blood Transfusion, with two different ways of defining abnormal values: 1) using normal pediatric values published in the Nelson Textbook of Pediatrics, and 2) using each patient as his or her own control, comparing pre- and posttransfusion values with either a 10% or a 20% difference threshold. We monitored for TACO up to 24 hours posttransfusion. A total of 136 patients were included. Using the "normal pediatric values" definition, we diagnosed 63, 88, and 104 patients with TACO at 6, 12, and 24 hours posttransfusion, respectively. Using the "10% threshold" definition, we detected 4, 15, and 27 TACO cases over the same periods, respectively; using the "20% threshold" definition, the number of TACO cases was 2, 6, and 17, respectively. Chest radiograph was the most frequent missing item, especially at 6 and 12 hours posttransfusion. Overall, the incidence of TACO varied from 1.5% to 76% depending on the definition. A more operational definition of TACO is needed in PICU patients. Using a threshold could be more appropriate, but more studies are needed to confirm the best threshold. © 2018 AABB.
Somatosensory temporal discrimination in essential tremor and isolated head and voice tremors.
Conte, Antonella; Ferrazzano, Gina; Manzo, Nicoletta; Leodori, Giorgio; Fabbrini, Giovanni; Fasano, Alfonso; Tinazzi, Michele; Berardelli, Alfredo
2015-05-01
The aim of this study was to investigate the somatosensory temporal discrimination threshold in patients with essential tremor (sporadic and familial) and to evaluate whether somatosensory temporal discrimination threshold values differ depending on the body parts involved by tremor. We also investigated somatosensory temporal discrimination in patients with isolated voice tremor. We enrolled 61 patients with tremor: 48 patients with essential tremor (31 with upper limb tremor alone, nine with head tremor alone, and eight with upper limb plus head tremor; 22 with familial vs. 26 with sporadic essential tremor), 13 patients with isolated voice tremor, and 45 healthy subjects. Somatosensory temporal discrimination threshold values were normal in patients with familial essential tremor, whereas they were higher in patients with sporadic essential tremor. When we classified patients according to tremor distribution, somatosensory temporal discrimination threshold values were normal in patients with upper limb tremor and abnormal only in patients with isolated head tremor. Temporal discrimination threshold values were also abnormal in patients with isolated voice tremor. Somatosensory temporal discrimination processing is thus normal in patients with familial as well as sporadic essential tremor involving the upper limbs. By contrast, somatosensory temporal discrimination is altered in patients with isolated head tremor and voice tremor. These findings suggest that isolated head and voice tremors might be considered separate clinical entities from essential tremor. © 2015 International Parkinson and Movement Disorder Society.
Watanabe, Ayumi; Inoue, Yusuke; Asano, Yuji; Kikuchi, Kei; Miyatake, Hiroki; Tokushige, Takanobu
2017-01-01
The specific binding ratio (SBR) was first reported by Tossici-Bolt et al. as a quantitative indicator for dopamine transporter (DAT) imaging. It is defined as the ratio of the specific binding concentration of the striatum to the non-specific binding concentration of the whole brain other than the striatum. The non-specific binding concentration is calculated from a region of interest (ROI) set 20 mm inside the outer contour, which is defined by a threshold technique. Tossici-Bolt et al. used a 50% threshold, but with that value we sometimes could not define the ROI of the non-specific binding concentration (the reference region) and calculate the SBR appropriately. We therefore sought a new method for determining the reference region when calculating the SBR. Using data from 20 patients who had undergone DAT imaging in our hospital, we calculated the non-specific binding concentration by two methods: fixing the threshold that defines the reference region at specific values (the fixing method), and having an examiner visually optimize the reference region at every examination (the visual optimization method). First, we assessed the reference region of each method visually; afterward, we quantitatively compared the SBR calculated by each method. In the visual assessment, the scores of the fixing method at 30% and of the visual optimization method were higher than the scores of the fixing method at other values, with or without scatter correction. In the quantitative assessment, the SBR obtained by visual optimization of the reference region, based on the consensus of three radiological technologists, was used as the baseline (the standard method). The SBR values showed good agreement between the standard method and both the fixing method at 30% and the visual optimization method, with or without scatter correction. Therefore, the fixing method at 30% and the visual optimization method were equally suitable for determining the reference region.
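A compact sketch of the computation being compared, under stated assumptions: isotropic voxels, a 2 mm voxel size so that the 20 mm inward offset becomes 10 erosion iterations, and the 30% fixing method for the outer contour. The striatal VOI definition and the count-to-concentration details of Tossici-Bolt et al. are omitted.

```python
import numpy as np
from scipy.ndimage import binary_erosion

def sbr(volume, striatum_mask, thresh_frac=0.30, erode_vox=10):
    """Specific binding ratio sketch: the reference (non-specific) region is
    the brain contour thresholded at a fraction of the maximum (the fixing
    method), eroded inward (erode_vox ~ 20 mm at 2 mm voxels), excluding
    the striatal VOI."""
    outer = volume >= thresh_frac * volume.max()
    inner = binary_erosion(outer, iterations=erode_vox)
    ref = inner & ~striatum_mask
    c_ref = volume[ref].mean()                 # non-specific concentration
    c_str = volume[striatum_mask].mean()       # striatal concentration
    return (c_str - c_ref) / c_ref             # specific / non-specific
```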
NASA Astrophysics Data System (ADS)
Juhlke, Florian; Lorber, Katja; Wagenstaller, Maria; Buettner, Andrea
2017-12-01
Chlorinated guaiacol derivatives are found in the waste water of pulp mills that use chlorine in the bleaching of wood pulp. They can also be detected in fish tissue, possibly causing off-odors. To date, there has been no systematic investigation of the odor properties of halogenated guaiacol derivatives. To close this gap, the odor thresholds in air and odor qualities of 14 compounds were determined by gas chromatography-olfactometry. Overall, the investigated compounds elicited smells characteristic of guaiacol, namely smoky, sweet, and vanilla-like, but also medicinal and plaster-like. Their odor thresholds in air were, however, very low, ranging from 0.00072 to 23 ng/L air. The lowest thresholds were found for 5-chloro- and 5-bromoguaiacol, followed by 4,5-dichloro- and 6-chloroguaiacol. Moreover, some inter-individual differences in odor threshold values could be observed, with the highest variation recorded for the individual values of 5-iodo- and 4-bromoguaiacol.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bachman, D., E-mail: bachman@ualberta.ca; Fedosejevs, R.; Tsui, Y. Y.
An optical damage threshold for crystalline silicon from single femtosecond laser pulses was determined by detecting a permanent change in the refractive index of the material. This index change could be detected with unprecedented sensitivity by measuring the resonant wavelength shift of silicon integrated optics microring resonators irradiated with femtosecond laser pulses at 400 nm and 800 nm wavelengths. The threshold for permanent index change at 400 nm wavelength was determined to be 0.053 ± 0.007 J/cm², which agrees with previously reported threshold values for femtosecond laser modification of crystalline silicon. However, the threshold for index change at 800 nm wavelength was found to be 0.044 ± 0.005 J/cm², which is five times lower than the previously reported threshold values for visual change on the silicon surface. The discrepancy is attributed to possible modification of the crystallinity of silicon below the melting temperature that has not been detected before.
Cury, Rubens G; Galhardoni, Ricardo; Teixeira, Manoel J; Dos Santos Ghilardi, Maria G; Silva, Valquiria; Myczkowski, Martin L; Marcolin, Marco A; Barbosa, Egberto R; Fonoff, Erich T; Ciampi de Andrade, Daniel
2016-12-01
Subthalamic deep brain stimulation (STN-DBS) is used to treat refractory motor complications in Parkinson disease (PD), but its effects on nonmotor symptoms remain uncertain. Up to 80% of patients with PD may have pain relief after STN-DBS, but it is unknown whether its analgesic properties are related to potential effects on sensory thresholds or are secondary to motor improvement. We have previously reported significant and long-lasting pain relief after DBS, which did not correlate with motor symptomatic control. Here we present secondary data exploring the effects of DBS on sensory thresholds in a controlled way, and we explore the relationship between these changes and clinical pain and motor improvement after surgery. Thirty-seven patients were prospectively evaluated before STN-DBS and 12 months after the procedure, and compared with healthy controls. Compared with baseline, patients with PD showed lower thermal and mechanical detection thresholds and higher cold pain thresholds after surgery. There were no changes in heat and mechanical pain thresholds. Compared with baseline values in healthy controls, patients with PD had higher thermal and mechanical detection thresholds, which decreased after surgery toward normalization. These sensory changes had no correlation with motor or clinical pain improvement after surgery. These data confirm the existence of sensory abnormalities in PD and suggest that STN-DBS mainly influenced detection thresholds rather than painful sensations. However, these changes may depend on the specific effects of DBS on somatosensory loops, with no correlation to motor or clinical pain improvement.
NASA Technical Reports Server (NTRS)
Reed, M. A.
1974-01-01
The need for an obstacle detection system on the Mars roving vehicle was assumed, and a practical scheme was investigated and simulated. The principal sensing device on this vehicle was taken to be a laser range finder. Both existing and original algorithms, ending with thresholding operations, were used to obtain the outlines of obstacles from the raw data of this laser scan. A theoretical analysis was carried out to show how a proper value of the threshold may be chosen. Computer simulations considered various mid-range boulders, for which the scheme was quite successful. The extension to other types of obstacles, such as craters, was considered. The special problems of bottom edge detection and scanning procedure are discussed.
Comparing the locking threshold for rings and chains of oscillators.
Ottino-Löffler, Bertrand; Strogatz, Steven H
2016-12-01
We present a case study of how topology can affect synchronization. Specifically, we consider arrays of phase oscillators coupled in a ring or a chain topology. Each ring is perfectly matched to a chain with the same initial conditions and the same random natural frequencies. The only difference is their boundary conditions: periodic for a ring and open for a chain. For both topologies, stable phase-locked states exist if and only if the spread or "width" of the natural frequencies is smaller than a critical value called the locking threshold (which depends on the boundary conditions and the particular realization of the frequencies). The central question is whether a ring synchronizes more readily than a chain. We show that it usually does, but not always. Rigorous bounds are derived for the ratio between the locking thresholds of a ring and its matched chain, for a variant of the Kuramoto model that also includes a wider family of models.
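A numerical sketch of the comparison, assuming nearest-neighbor Kuramoto coupling, a crude frequency-locking criterion (near-zero spread of instantaneous frequencies), and bisection on the spread of a shared draw of natural frequencies; none of these implementation choices are taken from the paper beyond what the abstract states.

```python
import numpy as np

def locked(omega, ring, K=1.0, dt=0.05, steps=4000):
    """Integrate nearest-neighbour Kuramoto dynamics and report whether the
    instantaneous frequencies have (nearly) collapsed to a common value."""
    n = len(omega)
    theta = np.zeros(n)
    for _ in range(steps):
        s = np.sin(np.roll(theta, 1) - theta) + np.sin(np.roll(theta, -1) - theta)
        if not ring:                    # open chain: endpoints have one neighbour
            s[0] = np.sin(theta[1] - theta[0])
            s[-1] = np.sin(theta[-2] - theta[-1])
        dtheta = omega + K * s
        theta += dt * dtheta
    return np.std(dtheta) < 1e-3

rng = np.random.default_rng(0)
base = rng.uniform(-1, 1, 50)
base -= base.mean()                     # one shared draw of natural frequencies

def locking_threshold(ring):
    lo, hi = 0.0, 5.0                   # bisect on the frequency spread
    for _ in range(20):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if locked(mid * base, ring) else (lo, mid)
    return lo

print("ring threshold :", locking_threshold(True))
print("chain threshold:", locking_threshold(False))
```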
A Review of Physical Fitness as It Pertains to the Military Services
1985-07-01
muscle metabolic capacity. The results of this research have led to a measure which is commonly referred to as "anaerobic threshold" (40). This is an...unfortunate term in that it is really a measure of aerobic metabolic capacity, not anaerobic. Anaerobic threshold is defined as the point of exercise...the individual can exercise without lactate accumulation, the higher the anaerobic threshold value. Untrained individuals have a threshold at a work
Regression Discontinuity Designs in Epidemiology
Moscoe, Ellen; Mutevedzi, Portia; Newell, Marie-Louise; Bärnighausen, Till
2014-01-01
When patients receive an intervention based on whether they score below or above some threshold value on a continuously measured random variable, the intervention will be randomly assigned for patients close to the threshold. The regression discontinuity design exploits this fact to estimate causal treatment effects. In spite of its recent proliferation in economics, the regression discontinuity design has not been widely adopted in epidemiology. We describe regression discontinuity, its implementation, and the assumptions required for causal inference. We show that regression discontinuity is generalizable to the survival and nonlinear models that are mainstays of epidemiologic analysis. We then present an application of regression discontinuity to the much-debated epidemiologic question of when to start HIV patients on antiretroviral therapy. Using data from a large South African cohort (2007–2011), we estimate the causal effect of early versus deferred treatment eligibility on mortality. Patients whose first CD4 count was just below the 200 cells/μL CD4 count threshold had a 35% lower hazard of death (hazard ratio = 0.65 [95% confidence interval = 0.45–0.94]) than patients presenting with CD4 counts just above the threshold. We close by discussing the strengths and limitations of regression discontinuity designs for epidemiology. PMID:25061922
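A minimal sharp-RDD sketch on simulated data echoing the CD4 example above: local linear fits on each side of the cutoff within a bandwidth, with the treatment-indicator coefficient giving the jump at the cutoff. The simulated data, uniform kernel, and bandwidth are illustrative assumptions, not the study's data or method.

```python
import numpy as np

def rdd_local_linear(x, y, cutoff, bandwidth):
    """Sharp-RDD estimate: local linear regression on each side of the cutoff
    (interacted slope), with the discontinuity as the treatment effect."""
    w = np.abs(x - cutoff) <= bandwidth        # simple uniform kernel window
    xc = x[w] - cutoff
    d = (x[w] < cutoff).astype(float)          # treated if below the cutoff
    X = np.column_stack([np.ones_like(xc), d, xc, d * xc])
    beta, *_ = np.linalg.lstsq(X, y[w], rcond=None)
    return beta[1]                             # jump at the cutoff

# illustrative simulated data: outcome drops for those treated below 200
rng = np.random.default_rng(0)
cd4 = rng.uniform(50, 350, 5000)
risk = 0.30 - 0.25 * (cd4 < 200) + 0.0004 * cd4 + rng.normal(0, 0.05, 5000)
print(rdd_local_linear(cd4, risk, cutoff=200, bandwidth=50))   # ~ -0.25
```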
Combining multiple thresholding binarization values to improve OCR output
NASA Astrophysics Data System (ADS)
Lund, William B.; Kennard, Douglas J.; Ringger, Eric K.
2013-01-01
For noisy, historical documents, a high optical character recognition (OCR) word error rate (WER) can render the OCR text unusable. Since image binarization is often the method used to identify foreground pixels, a body of research seeks to improve image-wide binarization directly. Instead of relying on any one imperfect binarization technique, our method incorporates information from multiple simple thresholding binarizations of the same image to improve text output. Using a new corpus of 19th century newspaper grayscale images for which the text transcription is known, we observe WERs of 13.8% and higher using current binarization techniques and a state-of-the-art OCR engine. Our novel approach combines the OCR outputs from multiple thresholded images by aligning the text output and producing a lattice of word alternatives from which a lattice word error rate (LWER) is calculated. Our results show an LWER of 7.6% when aligning two threshold images and an LWER of 6.8% when aligning five. From the word lattice we commit to one hypothesis by applying the methods of Lund et al. (2011), achieving an improvement over the original OCR output and an 8.41% WER result on this data set.
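A toy sketch of the pipeline's two ends, with the hard middle (running OCR and aligning variable-length outputs into a lattice) deliberately omitted: several global binarizations of the same grayscale page, and a per-slot majority vote over pre-aligned, equal-length token sequences standing in for the word lattice. The threshold values and OCR outputs are hypothetical.

```python
import numpy as np
from collections import Counter

def binarize_at(gray, t):
    """Simple global threshold: foreground where intensity is below t."""
    return (gray < t).astype(np.uint8)

def combine_hypotheses(token_lists):
    """Toy word-lattice vote: given OCR token sequences of equal length
    (the alignment step is omitted), keep the majority word per slot."""
    return [Counter(words).most_common(1)[0][0] for words in zip(*token_lists)]

# OCR the same page thresholded at several values, then vote per word slot
gray = np.random.default_rng(0).integers(0, 256, (32, 32))
images = [binarize_at(gray, t) for t in (96, 128, 160, 192, 224)]
ocr_outputs = [["tbe", "quick"], ["the", "quick"], ["the", "qu1ck"],
               ["the", "quick"], ["t_e", "quick"]]   # hypothetical OCR results
print(combine_hypotheses(ocr_outputs))               # ['the', 'quick']
```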
Evolution of complex density-dependent dispersal strategies.
Parvinen, Kalle; Seppänen, Anne; Nagy, John D
2012-11-01
The question of how dispersal behavior is adaptive and how it responds to changes in selection pressure is more relevant than ever, as anthropogenic habitat alteration and climate change accelerate around the world. In metapopulation models where local populations are large, and thus local population size is measured in densities, density-dependent dispersal is expected to evolve to a single-threshold strategy, in which individuals stay in patches with local population density smaller than a threshold value and move immediately away from patches with local population density larger than the threshold. Fragmentation tends to convert continuous populations into metapopulations and also to decrease local population sizes. Therefore we analyze a metapopulation model, where each patch can support only a relatively small local population and thus experience demographic stochasticity. We investigated the evolution of density-dependent dispersal, emigration and immigration, in two scenarios: adult and natal dispersal. We show that density-dependent emigration can also evolve to a nonmonotone, "triple-threshold" strategy. This interesting phenomenon results from an interplay between the direct and indirect benefits of dispersal and the costs of dispersal. We also found that, compared to juveniles, dispersing adults may benefit more from density-dependent vs. density-independent dispersal strategies.
NASA Astrophysics Data System (ADS)
Wang, Chi-Jen; Liu, Da-Jiang; Evans, James W.
2015-04-01
Threshold versions of Schloegl's model on a lattice, which involve autocatalytic creation and spontaneous annihilation of particles, can provide a simple prototype for discontinuous non-equilibrium phase transitions. These models are equivalent to so-called threshold contact processes. A discontinuous transition between populated and vacuum states can occur when selecting a threshold of N ≥ 2 for the minimum number, N, of neighboring particles enabling autocatalytic creation at an empty site. Fundamental open questions remain given the lack of a thermodynamic framework for analysis. For a square lattice with N = 2, we show that phase coexistence occurs not at a unique value but over a finite range of the particle annihilation rate (the natural control parameter). This generic two-phase coexistence also persists when the model is perturbed to allow spontaneous particle creation. Such behavior contrasts with both the Gibbs phase rule for thermodynamic systems and previous analysis of this model. We find metastability near the transition, corresponding to a non-zero effective line tension, also contrasting with previously suggested critical behavior. Mean-field type analysis, extended to treat spatially heterogeneous states, further elucidates the model behavior.
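A schematic random-sequential-update simulation of the N = 2 threshold contact process on a periodic square lattice, with the annihilation rate p as the control parameter; the discrete-time event probabilities are a standard rate discretization, and the lattice size, p, and run length are illustrative values, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def sweep(grid, p, N=2):
    """One random-sequential-update sweep: occupied sites annihilate at rate p;
    empty sites are filled (autocatalytic creation, rate 1) only when at least
    N of the 4 nearest neighbours are occupied."""
    L = grid.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(L, size=2)
        if grid[i, j]:
            if rng.random() < p / (1.0 + p):      # annihilation event
                grid[i, j] = 0
        else:
            nn = (grid[(i + 1) % L, j] + grid[(i - 1) % L, j] +
                  grid[i, (j + 1) % L] + grid[i, (j - 1) % L])
            if nn >= N and rng.random() < 1.0 / (1.0 + p):   # creation event
                grid[i, j] = 1
    return grid

L, p = 64, 0.8                                    # p is the control parameter
grid = np.ones((L, L), dtype=int)                 # start in the populated state
for _ in range(500):
    grid = sweep(grid, p)
print("stationary particle density:", grid.mean())
```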
NASA Astrophysics Data System (ADS)
Feng, Judy J.; Ip, Horace H.; Cheng, Shuk H.
2004-05-01
Many grey-level thresholding methods based on the histogram or other statistical information about the image of interest, such as maximum entropy, have been proposed in the past. However, most methods based on statistical analysis of the images take little account of the morphology of the objects of interest, which can sometimes provide very important indications for finding the optimum threshold, especially for organisms with special texture morphologies such as vasculature or neuro-networks in medical imaging. In this paper, we propose a novel method for thresholding the fluorescent vasculature image series recorded with a Confocal Scanning Laser Microscope. After extracting the basic orientation of the vessel slice within a sub-region partitioned from the images, we analyze the intensity profiles perpendicular to the vessel orientation to obtain a reasonable initial threshold for each region. The threshold values of the regions adjacent to the one of interest, both in the x-y and optical directions, are then referenced to obtain the final thresholds of the region, which makes the whole stack of images more continuous. The resulting images are characterized by the suppression of both noise and non-interest tissues conglutinated to vessels, while improving vessel connectivity and edge definition. The value of the method for idealized thresholding of the fluorescence images of biological objects is demonstrated by a comparison of the results of 3D vascular reconstruction.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peeters, A. G.; Rath, F.; Buchholz, R.
2016-08-15
It is shown that Ion Temperature Gradient turbulence close to the threshold exhibits a long-time behaviour, with smaller heat fluxes at later times. This reduction is connected with the slow growth of long-wavelength zonal flows, and consequently the numerical dissipation acting on these flows must be sufficiently small. Close to the nonlinear threshold for turbulence generation, a relatively small dissipation can maintain a turbulent state with a sizeable heat flux through the damping of the zonal flow. Lowering the dissipation causes the turbulence, for temperature gradients close to the threshold, to be subdued. The heat flux then does not go smoothly to zero when the threshold is approached from above. Rather, a finite minimum heat flux is obtained below which no fully developed turbulent state exists. The threshold value of the temperature gradient length at which this finite heat flux is obtained is up to 30% larger compared with the threshold value obtained by extrapolating the heat flux to zero, and the cyclone base case is found to be nonlinearly stable. Transport is subdued when a fully developed staircase structure in the E × B shearing rate forms. Just above the threshold, an incomplete staircase develops, and transport is mediated by avalanche structures which propagate through the marginally stable regions.
Stress Memory in Fluid-Filled Fractures - Insights From Induced Seismicity
NASA Astrophysics Data System (ADS)
Dura-Gomez, I.; Talwani, P.
2007-05-01
Detailed studies of reservoir- and injection-induced seismicity provide an opportunity to study the characteristics of the fractures associated with fluid-induced seismicity. In 1996, we noted that the first three series of earthquakes with M ≥ 5.0 in the vicinity of the Koyna reservoir occurred only when the reservoir levels had exceeded the previous maxima. In subsequent years, three more similar episodes were noted in the vicinity of the Koyna and the nearby Warna reservoir, without a single repetition in the epicentral location. This behavior is similar to the Kaiser effect observed in the laboratory. A similar behavior has been observed in many cases of injection-induced seismicity. At the Denver arsenal well in the 1960s and the Soultz, France, hot rock site in the 1990s, among others, seismicity occurred only when the differential pressure (the downhole borehole pressure in excess of the ambient natural pressure) reached a threshold value. These threshold values differed for different wells and depths. The seismicity stopped when the differential pressure was lowered below the threshold. These observations show that the stress memory (associated with the Kaiser effect) observed in the laboratory with small samples is also displayed in nature, where the volume of rock ranges from hundreds to thousands of cubic kilometers. The fluid-filled seismogenic fractures near reservoirs and boreholes associated with fluid-induced seismicity seem to behave like finely tuned, sensitive systems that "remember" the largest stress perturbation they have been subjected to. Here we present these observations of stress memory in fluid-filled fractures associated with induced seismicity and suggest possible causes.
Balasch, J; Vidal, E; Peñarrubia, J; Casamitjana, R; Carmona, F; Creus, M; Fábregues, F; Vanrell, J A
2001-08-01
It has recently been suggested that gonadotrophin-releasing hormone agonist down-regulation in some normogonadotrophic women may result in profound suppression of LH concentrations, impairing adequate oestradiol synthesis and IVF and pregnancy outcome. The aims of this study, in which receiver-operating characteristic (ROC) analysis was used, were: (i) to assess the usefulness of serum LH measurement on stimulation day 7 (S7) as a predictor of ovarian response, IVF outcome, implantation, and the outcome of pregnancy in patients treated with recombinant FSH under pituitary suppression; and (ii) to define the best threshold value, if any, to discriminate between women with 'low' or 'normal' LH concentrations. A total of 144 infertile women undergoing IVF/intracytoplasmic sperm injection (ICSI) treatment were included. Seventy-two consecutive patients having a positive pregnancy test (including 58 ongoing pregnancies and 14 early pregnancy losses) were initially selected. As a non-pregnant control group, the next non-conception IVF/ICSI cycle after each conception cycle in our assisted reproduction programme was used. The median and range of LH values in non-conception cycles, conception cycles, ongoing pregnancies, and early pregnancy losses clearly overlapped. ROC analysis showed that serum LH concentration on S7 was unable to discriminate between conception and non-conception cycles (AUC(ROC) = 0.52; 95% CI: 0.44 to 0.61) or between the ongoing pregnancy and early pregnancy loss groups (AUC(ROC) = 0.59; 95% CI: 0.46 to 0.70). To further assess the potential impact of suppressed concentrations of circulating LH during ovarian stimulation on the outcome of IVF/ICSI treatment, the three threshold values of mid-follicular serum LH proposed in the literature (<1, < or =0.7, <0.5 IU/l) to discriminate between women with 'low' or 'normal' LH were applied to our study population. No significant differences were found with respect to ovarian response, IVF/ICSI outcome, implantation, or the outcome of pregnancy between 'low' and 'normal' S7 LH women as defined by those threshold values. Our results do not support the need for additional exogenous LH supplementation in down-regulated women receiving a recombinant FSH-only preparation.
Park, Chul-Hyun; Kim, Don-Kyu; Lee, Yong-Taek; Yi, Youbin; Lee, Jung-Sang; Kim, Kunwoo; Park, Jung Ho; Yoon, Kyung Jae
2017-10-01
To compare swallowing function between healthy subjects and patients with pharyngeal dysphagia using high resolution manometry (HRM) and to evaluate the usefulness of HRM for detecting pharyngeal dysphagia. Seventy-five patients with dysphagia and 28 healthy subjects were included in this study. Diagnosis of dysphagia was confirmed by a videofluoroscopy. HRM was performed to measure pressure and timing information at the velopharynx (VP), tongue base (TB), and upper esophageal sphincter (UES). HRM parameters were compared between dysphagia and healthy groups. Optimal threshold values of significant HRM parameters for dysphagia were determined. VP maximal pressure, TB maximal pressure, UES relaxation duration, and UES resting pressure were lower in the dysphagia group than those in healthy group. UES minimal pressure was higher in dysphagia group than in the healthy group. Receiver operating characteristic (ROC) analyses were conducted to validate optimal threshold values for significant HRM parameters to identify patients with pharyngeal dysphagia. With maximal VP pressure at a threshold value of 144.0 mmHg, dysphagia was identified with 96.4% sensitivity and 74.7% specificity. With maximal TB pressure at a threshold value of 158.0 mmHg, dysphagia was identified with 96.4% sensitivity and 77.3% specificity. At a threshold value of 2.0 mmHg for UES minimal pressure, dysphagia was diagnosed at 74.7% sensitivity and 60.7% specificity. Lastly, UES relaxation duration of <0.58 seconds had 85.7% sensitivity and 65.3% specificity, and UES resting pressure of <75.0 mmHg had 89.3% sensitivity and 90.7% specificity for identifying dysphagia. We present evidence that HRM could be a useful evaluation tool for detecting pharyngeal dysphagia.
Qin, Xike; Liu, Bolin; Soulard, Jonathan; Morse, David; Cappadocia, Mario
2006-01-01
A method for the quantification of S-RNase levels in single styles of self-incompatible Solanum chacoense was developed and applied toward an experimental determination of the S-RNase threshold required for pollen rejection. It was found that, when single-style values are averaged, accumulated levels of the S(11)- and S(12)-RNases can differ up to 10-fold within a genotype, while accumulated levels of the S(12)-RNase can differ by over 3-fold when different genotypes are compared. Surprisingly, the amount of S(12)-RNase accumulated in different styles of the same plant can differ by over 20-fold. A low of 160 ng S-RNase was measured in individual styles of fully incompatible plants, and a high of 68 ng in a sporadic self-compatible (SSC) line during a bout of complete compatibility, suggesting that these values bracket the threshold level of S-RNase needed for pollen rejection. Remarkably, correlations of S-RNase values with average fruit set in different plant lines displaying SSC to different extents, as well as with fruit set in immature flowers, are all consistent with a threshold value of 80 ng S(12)-RNase. Taken together, these results suggest that S-RNase levels alone are the principal determinant of the incompatibility phenotype. Interestingly, while the S-RNase threshold required for rejection of S(12)-pollen from a given genetic background is the same in styles of different genetic backgrounds, it differs when pollen donors of different genetic backgrounds are used. These results reveal a previously unsuspected level of complexity in the incompatibility reaction.
Optical ranked-order filtering using threshold decomposition
Allebach, Jan P.; Ochoa, Ellen; Sweeney, Donald W.
1990-01-01
A hybrid optical/electronic system performs median filtering and related ranked-order operations using threshold decomposition to encode the image. Threshold decomposition transforms the nonlinear neighborhood ranking operation into a linear space-invariant filtering step followed by a point-to-point threshold comparison step. Spatial multiplexing allows parallel processing of all the threshold components as well as recombination by a second linear, space-invariant filtering step. An incoherent optical correlation system performs the linear filtering, using a magneto-optic spatial light modulator as the input device and a computer-generated hologram in the filter plane. Thresholding is done electronically. By adjusting the value of the threshold, the same architecture is used to perform median, minimum, and maximum filtering of images. A totally optical system is also disclosed.
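The architecture maps directly onto a few lines of array code. In the sketch below, each binary threshold slice is passed through a linear, space-invariant averaging filter (the digital analogue of the optical correlation step) and then re-thresholded point by point; summing the slices reconstructs the grey-level result, and changing the comparison value alone switches between median, minimum, and maximum filtering, as the abstract describes. This is, of course, only a digital simulation of the optical system.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def ranked_order(img, size=3, rank="median"):
    """Ranked-order filtering by threshold decomposition: binary slices are
    filtered linearly, re-thresholded, and summed back to grey levels."""
    n = size * size
    crit = {"min": n, "median": (n + 1) // 2, "max": 1}[rank]
    out = np.zeros(img.shape, dtype=np.int32)
    for t in range(1, int(img.max()) + 1):
        b = (img >= t).astype(float)                  # threshold decomposition
        counts = uniform_filter(b, size) * n          # linear, space-invariant step
        out += (counts >= crit - 0.5).astype(np.int32)  # point threshold comparison
    return out

# usage: the same routine gives median, minimum, or maximum filtering
img = np.random.default_rng(0).integers(0, 256, (64, 64))
med = ranked_order(img, 3, "median")
```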
A Universal Threshold for the Assessment of Load and Output Residuals of Strain-Gage Balance Data
NASA Technical Reports Server (NTRS)
Ulbrich, N.; Volden, T.
2017-01-01
A new universal residual threshold for the detection of load and gage output residual outliers of wind tunnel strain-gage balance data was developed. The threshold works with both the Iterative and Non-Iterative Methods that are used in the aerospace testing community to analyze and process balance data. It also supports all known load and gage output formats that are traditionally used to describe balance data. The threshold's definition is based on an empirical electrical constant. First, the constant is used to construct a threshold for the assessment of gage output residuals. Then, the related threshold for the assessment of load residuals is obtained by multiplying the empirical electrical constant with the sum of the absolute values of all first partial derivatives of a given load component. The empirical constant equals 2.5 microV/V for the assessment of balance calibration or check load data residuals. A value of 0.5 microV/V is recommended for the evaluation of repeat point residuals because, by design, the calculation of these residuals removes errors that are associated with the regression analysis of the data itself. Data from a calibration of a six-component force balance is used to illustrate the application of the new threshold definitions to real-world balance calibration data.
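In code, the two thresholds reduce to one line each. The sketch below assumes the load-to-output sensitivities dF/dR are available from the calibration fit in load units per microvolt-per-volt; that unit convention is an assumption for illustration, not something stated in the abstract.

```python
import numpy as np

def residual_thresholds(dFdR, const=2.5):
    """Universal thresholds as described above: gage-output residuals are
    judged against the electrical constant itself (2.5 microV/V for
    calibration or check load data, 0.5 microV/V for repeat points), and the
    load-residual threshold of each load component is that constant times the
    sum of |dF/dR| over all gage outputs. dFdR is assumed to be given in
    load units per microV/V (rows: load components, cols: gage outputs)."""
    return const, const * np.abs(dFdR).sum(axis=1)

# illustrative 6x6 sensitivity matrix for a six-component balance (made-up)
dFdR = np.full((6, 6), 0.02)                 # e.g. N per microV/V
out_thr, load_thr = residual_thresholds(dFdR)
print(out_thr, load_thr)
```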
Photoelectron spectroscopy of color centers in negatively charged cesium iodide nanocrystals
NASA Astrophysics Data System (ADS)
Sarkas, Harry W.; Kidder, Linda H.; Bowen, Kit H.
1995-01-01
We present the photoelectron spectra of negatively charged cesium iodide nanocrystals recorded using 2.540 eV photons. The species examined were produced using an inert gas condensation cluster ion source, and they ranged in size from (CsI)n− with n = 13 to nanocrystal anions comprised of 330 atoms. Nanocrystals showing two distinct types of photoemission behavior were observed. For (CsI)n− with n = 13 and n = 36-165, a plot of cluster anion photodetachment threshold energies versus n^(-1/3) gives a straight line extrapolating (at n^(-1/3) = 0, i.e., n = ∞) to 2.2 eV, the photoelectric threshold energy for F centers in bulk cesium iodide. The linear extrapolation of the cluster anion data to the corresponding bulk property implies that the electron localization in these gas-phase nanocrystals is qualitatively similar to that of F centers in extended alkali halide crystals. These negatively charged cesium iodide nanocrystals are thus shown to support embryonic forms of F centers, which mature with increasing cluster size toward condensed-phase impurity centers. Under an alternative set of source conditions, nanocrystals were produced which showed significantly lower photodetachment thresholds than the aforementioned F-center cluster anions. For these species, containing 83-131 atoms, a plot of their cluster anion photodetachment threshold energies versus n^(-1/3) gives a straight line which extrapolates to 1.4 eV. This value is in accord with the expected photoelectric threshold energy for F′ centers in bulk cesium iodide, i.e., color centers with two excess electrons in a single defect site. These nanocrystals are interpreted to be the embryonic F′-center-containing species, Cs(CsI)n− with n = 41-65.
Pinchi, Vilma; Pradella, Francesco; Vitale, Giulia; Rugo, Dario; Nieri, Michele; Norelli, Gian-Aristide
2016-01-01
The age threshold of 14 years is relevant in Italy as the minimum age for criminal responsibility. It is of utmost importance to evaluate the diagnostic accuracy of every odontological method for age evaluation, considering the sensitivity, or the ability to identify true positive cases, and the specificity, or the ability to identify true negative cases. The research aims to compare the specificity and sensitivity of four commonly adopted methods of dental age estimation - Demirjian, Haavikko, Willems and Cameriere - in a sample of Italian children aged between 11 and 16 years, with an age threshold of 14 years, using receiver operating characteristic curves and the area under the curve (AUC). In addition, new decision criteria are developed to increase the accuracy of the methods. Among the four odontological methods for age estimation adopted in the research, the Cameriere method showed the highest AUC in both the female and male cohorts, and a high degree of accuracy at the age threshold of 14 years. To estimate the 14-year age threshold more accurately with the Cameriere method, however, it is suggested - according to the Youden index - that the decision criterion be set at the lower values of 12.928 years for females and 13.258 years for males, giving a sensitivity of 85% and specificity of 88% in females, and a sensitivity of 77% and specificity of 92% in males. If a specificity level >90% is needed, the cut-off point should be set at 12.959 years (82% sensitivity) for females. © The Author(s) 2015.
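A minimal sketch of the Youden-index criterion used to shift the decision threshold: scan candidate dental-age cut-offs and keep the one maximizing J = sensitivity + specificity − 1 for the 14-year legal threshold. The data layout (an estimated dental age per subject plus a true over/under-14 label) is assumed for illustration.

```python
import numpy as np

def youden_cutoff(dental_age, over14, candidates):
    """Return the cut-off maximizing the Youden index J = sens + spec - 1,
    where subjects with dental_age >= cut-off are classified as >= 14 years."""
    best, best_j = None, -1.0
    for c in candidates:
        called = dental_age >= c
        sens = called[over14 == 1].mean()
        spec = (~called)[over14 == 0].mean()
        j = sens + spec - 1.0
        if j > best_j:
            best, best_j = c, j
    return best, best_j
```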
Research on energy stock market associated network structure based on financial indicators
NASA Astrophysics Data System (ADS)
Xi, Xian; An, Haizhong
2018-01-01
A financial market is a complex system consisting of many interacting units. In general, due to the various types of information exchange within an industry, there are relationships between stocks that reveal clear structural characteristics. Complex network methods are powerful tools for studying the internal structure and function of the stock market, allowing us to understand it better. Applying complex network methodology, we create a stock association network model based on financial indicators. We then set a threshold value and use modularity to detect communities in the network, and we analyze the network structure and community clustering characteristics under different thresholds. The study finds that a threshold value of 0.7 is the abrupt change point of the network. At the same time, as the threshold value increases, the independence of the communities strengthens. This study provides a method for researching the stock market based on financial indicators, exploring the structural similarity of stocks' financial indicators. It also provides guidance for investment and corporate financial management.
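A sketch of the general workflow (threshold a similarity matrix, then detect communities by modularity); the data are synthetic, and the similarity measure is assumed to be a simple correlation of indicators, which the abstract does not specify:

import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

rng = np.random.default_rng(1)
indicators = rng.normal(size=(30, 6))          # 30 stocks x 6 financial indicators
corr = np.corrcoef(indicators)                 # pairwise similarity between stocks

threshold = 0.7                                # the change point reported above
G = nx.Graph()
G.add_nodes_from(range(30))
for i in range(30):
    for j in range(i + 1, 30):
        if abs(corr[i, j]) >= threshold:
            G.add_edge(i, j, weight=abs(corr[i, j]))

communities = greedy_modularity_communities(G)
print(f"{G.number_of_edges()} edges, {len(communities)} communities")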
Sofonia, Jeremy J; Unsworth, Richard K F
2010-01-01
Given the potential for adverse effects of ocean dredging on marine organisms, particularly benthic primary producer communities, the management and monitoring of those activities which cause elevated turbidity and sediment loading is critical. In practice, however, this has proven challenging, as the water quality threshold values upon which management responses are based are subject to a large number of physical and biological parameters that are spatially and temporally specific. As a consequence, monitoring programs to date have taken a wide range of different approaches, most focusing on measures of turbidity reported as nephelometric turbidity units (NTU). This paper presents a potential approach to the determination of water quality thresholds which utilises data gathered through the long-term deployment of in situ water instruments, but suggests a focus on photosynthetically active radiation (PAR) rather than NTU, as it is more relevant biologically and inclusive of other site conditions. A simple mathematical approach to data interpretation is also presented which facilitates the development of thresholds not as individual concentration values over specific intervals, but as an equation which may be utilised in numerical modelling.
Rich Sliding Motion and Dynamics in a Filippov Plant-Disease System
NASA Astrophysics Data System (ADS)
Chen, Can; Chen, Xi
In order to reduce the spread of plant diseases and maintain the number of infected trees below an economic threshold, we choose the number of infected trees and the number of susceptible plants as the control indexes that determine whether to implement control strategies. A Filippov plant-disease model incorporating cutting off infected branches and replanting susceptible trees is then proposed. Based on the theory of Filippov systems, the sliding mode dynamics and conditions for the existence of all the possible equilibria and Lotka-Volterra cycles are presented. We find that model solutions ultimately approach the positive equilibrium that lies in the region above the infected threshold value TI, or the periodic trajectories that lie in the region below TI, or the pseudo-attractor ET = (TS, TI), as we vary the susceptible and infected threshold values. This indicates that the plant-disease transmission is tolerable if the trajectories approach ET = (TS, TI) or if the periodic trajectories lie in the region below TI. Hence an acceptable level of the number of infected trees can be achieved when the susceptible and infected threshold values are chosen appropriately.
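The abstract does not give the model equations, so the following is only a generic sketch of a Filippov-type switching system: control terms (cutting infected branches, replanting susceptibles) switch on when the number of infected trees exceeds a threshold T_I. All equations and parameter values are invented for illustration.

def rhs(S, I, beta=0.002, r=0.3, K=1000, T_I=80.0, cut=0.4, replant=20.0):
    # Filippov switching condition: control is active only above the threshold.
    control = I > T_I
    dS = r * S * (1 - S / K) - beta * S * I + (replant if control else 0.0)
    dI = beta * S * I - (cut if control else 0.05) * I
    return dS, dI

S, I, dt = 900.0, 10.0, 0.01
for _ in range(200_000):          # simple Euler integration of the switched system
    dS, dI = rhs(S, I)
    S, I = S + dt * dS, I + dt * dI
print(f"final state: S={S:.1f}, I={I:.1f}")

Trajectories of such systems may chatter along the switching surface I = T_I, which is the sliding-mode behaviour the paper analyses.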
Lucero, Jorge C.; Koenig, Laura L.; Lourenço, Kelem G.; Ruty, Nicolas; Pelorson, Xavier
2011-01-01
This paper examines an updated version of a lumped mucosal wave model of the vocal fold oscillation during phonation. Threshold values of the subglottal pressure and the mean (DC) glottal airflow for the oscillation onset are determined. Depending on the nonlinear characteristics of the model, an oscillation hysteresis phenomenon may occur, with different values for the oscillation onset and offset thresholds. The threshold values depend on the oscillation frequency, but the occurrence of the hysteresis is independent of it. The results are tested against pressure data collected from a mechanical replica of the vocal folds, and oral airflow data collected from speakers producing intervocalic /h/. In the human speech data, observed differences between voice onset and offset may be attributed to variations in voice pitch, with a very small or nonexistent hysteresis phenomenon. PMID:21428520
Krawczyk, Michał
2015-01-01
In this project I investigate the use and possible misuse of p values in papers published in five high-ranked journals in experimental psychology. I use a data set of over 135,000 p values from more than five thousand papers. I inspect (1) the way in which the p values are reported and (2) their distribution. The main findings are as follows: first, it appears that some authors choose the mode of reporting their results in an arbitrary way, and they often end up doing it in such a way that makes their findings seem more statistically significant than they really are (which is well known to improve the chances for publication). Specifically, they frequently report p values "just above" significance thresholds directly, whereas other values are reported by means of inequalities (e.g. "p<.1"); they round p values down more eagerly than up; and they appear to choose between the significance thresholds, and between one- and two-sided tests, only after seeing the data. Further, about 9.2% of reported p values are inconsistent with their underlying statistics (e.g. F or t), and there appear to be "too many" "just significant" values. One interpretation of this is that researchers tend to choose the model, or include/discard observations, to bring the p value to the right side of the threshold.
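The consistency check mentioned above can be illustrated by recomputing a p value from its reported test statistic. A small sketch; the reported numbers are made up, and the rounding tolerance is an assumption:

from scipy import stats

reported_t, df, reported_p = 2.10, 48, 0.03   # hypothetical reported values

# Two-sided p value implied by the reported t statistic and degrees of freedom.
recomputed_p = 2 * stats.t.sf(abs(reported_t), df)

# Allow for rounding of the reported statistic before flagging an inconsistency.
consistent = abs(recomputed_p - reported_p) < 0.005
print(f"recomputed p = {recomputed_p:.4f}, consistent with report: {consistent}")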
Dental ceramics: a CIEDE2000 acceptability thresholds for lightness, chroma and hue differences.
Perez, María Del Mar; Ghinea, Razvan; Herrera, Luis Javier; Ionescu, Ana Maria; Pomares, Héctor; Pulgar, Rosa; Paravina, Rade D
2011-12-01
To determine the visual 50:50% acceptability thresholds for lightness, chroma and hue for dental ceramics using the CIEDE2000(K(L):K(C):K(H)) formula, and to evaluate the formula's performance using different parametric factors. A 30-observer panel evaluated three subsets of ceramic samples: a lightness subset (|ΔL'/ΔE(00)| ≥ 0.9), a chroma subset (|ΔC'/ΔE(00)| ≥ 0.9) and a hue subset (|ΔH'/ΔE(00)| ≥ 0.9). A Takagi-Sugeno-Kang Fuzzy Approximation was used as the fitting procedure, and the 50:50% acceptability thresholds were calculated. A t-test was used in the statistical analysis of the threshold values. The performance of the CIEDE2000(1:1:1) and CIEDE2000(2:1:1) colour difference formulas against visual results was tested using the PF/3 performance factor. The 50:50% CIEDE2000 acceptability thresholds were ΔL' = 2.92 (95% CI 1.22-4.96; r(2) = 0.76), ΔC' = 2.52 (95% CI 1.31-4.19; r(2) = 0.71) and ΔH' = 1.90 (95% CI 1.63-2.15; r(2) = 0.88). The 50:50% acceptability threshold for colour difference (ΔE') for CIEDE2000(1:1:1) was 1.87, whilst the corresponding value for CIEDE2000(2:1:1) was 1.78. The PF/3 values were 139.86 for CIEDE2000(1:1:1) and 132.31 for CIEDE2000(2:1:1). There was a statistically significant difference amongst the CIEDE2000 50:50% acceptability thresholds for lightness, chroma and hue differences for dental ceramics. The CIEDE2000(2:1:1) formula performed better than CIEDE2000(1:1:1). Copyright © 2011 Elsevier Ltd. All rights reserved.
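For illustration, the two parametric weightings can be compared with the third-party colormath package, assuming its documented delta_e_cie2000(color1, color2, Kl, Kc, Kh) interface; the Lab coordinates below are invented shades, not the study's specimens:

from colormath.color_objects import LabColor
from colormath.color_diff import delta_e_cie2000

shade_a = LabColor(lab_l=70.0, lab_a=2.0, lab_b=18.0)
shade_b = LabColor(lab_l=72.5, lab_a=2.3, lab_b=19.1)

de_111 = delta_e_cie2000(shade_a, shade_b)           # K_L:K_C:K_H = 1:1:1
de_211 = delta_e_cie2000(shade_a, shade_b, Kl=2)     # K_L:K_C:K_H = 2:1:1

# Compare against the 50:50% acceptability threshold reported for each weighting.
print(f"1:1:1 dE00={de_111:.2f} (threshold 1.87); "
      f"2:1:1 dE00={de_211:.2f} (threshold 1.78)")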
Failure modes in electroactive polymer thin films with elastic electrodes
NASA Astrophysics Data System (ADS)
De Tommasi, D.; Puglisi, G.; Zurlo, G.
2014-02-01
Based on an energy minimization approach, we analyse the elastic deformations of a thin electroactive polymer (EAP) film sandwiched between two elastic electrodes with non-negligible stiffness. We analytically show the existence of a critical value of the electrode voltage at which non-homogeneous solutions bifurcate from the homogeneous equilibrium state, leading to the pull-in phenomenon. This threshold is strongly below the limit value proposed in the literature, which considers only homogeneous deformations. We explicitly discuss the influence of geometric and material parameters, together with boundary conditions, on the attainment of the different failure modes observed in EAP devices. In particular, we obtain the optimum values of these parameters leading to the maximum activation performance of the device.
Fulcher, Lewis P.; Scherer, Ronald C.
2011-01-01
In an important paper on the physics of small amplitude oscillations, Titze showed that the essence of the vertical phase difference, which allows energy to be transferred from the flowing air to the motion of the vocal folds, could be captured in a surface wave model, and he derived a formula for the phonation threshold pressure with an explicit dependence on the geometrical and biomechanical properties of the vocal folds. The formula inspired a series of experiments [e.g., R. Chan and I. Titze, J. Acoust. Soc. Am 119, 2351–2362 (2006)]. Although the experiments support many aspects of Titze’s formula, including a linear dependence on the glottal half-width, the behavior of the experiments at the smallest values of this parameter is not consistent with the formula. It is shown that a key element for removing this discrepancy lies in a careful examination of the properties of the entrance loss coefficient. In particular, measurements of the entrance loss coefficient at small widths done with a physical model of the glottis (M5) show that this coefficient varies inversely with the glottal width. A numerical solution of the time-dependent equations of the surface wave model shows that adding a supraglottal vocal tract lowers the phonation threshold pressure by an amount approximately consistent with Chan and Titze’s experiments. PMID:21895097
Yang, Seung-Won
2014-11-01
This study reports the average and SD of professional baseball players' cardiorespiratory endurance: maximum oxygen consumption, oxygen consumption at the anaerobic threshold, the anaerobic-threshold percentage of maximum oxygen consumption, maximum heart rate, and heart rate at the anaerobic threshold. We also report the comparison between pitchers and fielders. Considering the total number of results, percentiles were used and results were classified into 5 grades. Players from one professional baseball organization with more than 14 years of experience participated in this study. First, we observed that the pitchers' average V̇O2max was 53.64 ml·kg⁻¹·min⁻¹, whereas the fielders' average was 52.30 ml·kg⁻¹·min⁻¹. These values were lower than those of other sports players. Second, for V̇O2AT, pitchers showed 39.35 ml·kg⁻¹·min⁻¹ and fielders showed 39.96 ml·kg⁻¹·min⁻¹. %V̇O2AT differed significantly between pitchers and fielders: pitchers showed 71.13%, whereas fielders showed 73.89% (p < 0.01). Third, maximal heart rates were measured at 188.69 b·min⁻¹ (pitchers) and 187.79 b·min⁻¹ (fielders). These were lower than those of college baseball players and higher than those of other sports players. In conclusion, both professional baseball pitchers and fielders should be aware of the necessity of systematic cardiorespiratory endurance data analysis. Moreover, baseball teams, athletic trainers, and coaches should also be aware of the importance of cardiorespiratory endurance.
Two-threshold model for scaling laws of noninteracting snow avalanches
Faillettaz, J.; Louchet, F.; Grasso, J.-R.
2004-01-01
A two-threshold model was proposed for scaling laws of noninteracting snow avalanches. It was found that the sizes of the largest avalanches, just preceding complete failure of the lattice system, were power-law distributed. The proposed model reproduced the range of power-law exponents observed for land, rock or snow avalanches by tuning the maximum value of the ratio of the two failure thresholds. A two-threshold 2D cellular automaton was introduced to study the scaling of gravity-driven systems.
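A schematic Python automaton in the same spirit: cells fail at a primary threshold, and a passing avalanche recruits neighbours that are above a lower secondary threshold. The rules and parameters are illustrative assumptions, not the published model:

import numpy as np

rng = np.random.default_rng(2)
N = 64
t1, t2 = 1.0, 0.8                 # primary and (lower) secondary failure thresholds
load = rng.uniform(0, 0.5, (N, N))
sizes = []

def neighbors_of(mask):
    # 4-neighbourhood of all currently failed cells (periodic boundaries).
    return (np.roll(mask, 1, 0) | np.roll(mask, -1, 0) |
            np.roll(mask, 1, 1) | np.roll(mask, -1, 1))

for _ in range(20_000):
    i, j = rng.integers(N), rng.integers(N)
    load[i, j] += 0.05            # slow, random external loading
    if load[i, j] <= t1:
        continue                  # no primary failure, no avalanche
    failed = np.zeros((N, N), bool)
    failed[i, j] = True
    while True:                   # avalanche propagation at the lower threshold
        new = neighbors_of(failed) & ~failed & (load > t2)
        if not new.any():
            break
        failed |= new
    sizes.append(int(failed.sum()))
    load[failed] = rng.uniform(0, 0.5, failed.sum())   # reset failed cells

print(f"{len(sizes)} avalanches; largest spans {max(sizes) if sizes else 0} cells")

Tuning the ratio t2/t1 changes how far avalanches propagate, which is the knob the abstract describes for reproducing different power-law exponents.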
The Utility of Selection for Military and Civilian Jobs
1989-07-01
parsimonious use of information; the relative ease in making threshold (break-even) judgments compared to estimating actual SDy values higher than a... threshold value, even though judges are unlikely to agree on the exact point estimate for the SDy parameter; and greater understanding of how even small... ability, spatial ability, introversion, anxiety) considered to vary or differ across individuals. A construct (sometimes called a latent variable) is not...
An entropy decision approach in flash flood warning: rainfall thresholds definition
NASA Astrophysics Data System (ADS)
Montesarchio, V.; Napolitano, F.; Ridolfi, E.
2009-09-01
Flash flood events are floods characterised by a very rapid response of the basin to storms, and they often involve loss of life and damage to public and private property. Due to the specific space-time scale of this kind of flood, generally only a short lead time is available for triggering civil protection measures. Threshold values specify the precipitation amount for a given duration that generates a critical discharge in a given cross-section. Exceeding these values could produce a critical situation at river sites exposed to flood risk, so observed or forecast precipitation can be compared directly with critical reference values, without running real-time forecasting systems online. This study is focused on the Mignone River basin, located in Central Italy. The critical rainfall threshold values are evaluated by minimising a utility function based on the informative entropy concept. The study concludes with a system performance analysis, in terms of correctly issued warnings, false alarms and missed alarms.
Elizabeth A. Freeman; Gretchen G. Moisen
2008-01-01
Modelling techniques used in binary classification problems often result in a predicted probability surface, which is then translated into a presence-absence classification map. However, this translation requires a (possibly subjective) choice of threshold above which the variable of interest is predicted to be present. The selection of this threshold value can have...
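One common, objective way to make that choice is to scan candidate thresholds against an optimality criterion such as Cohen's kappa. A hedged sketch on synthetic presence-absence data:

import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(3)
presence = rng.binomial(1, 0.3, 1000)                       # true presence/absence
prob = np.clip(presence * 0.4 + rng.uniform(0, 0.6, 1000), 0, 1)  # predicted probability

candidates = np.linspace(0.05, 0.95, 19)
kappas = [cohen_kappa_score(presence, (prob >= t).astype(int)) for t in candidates]
best = candidates[int(np.argmax(kappas))]
default_kappa = cohen_kappa_score(presence, (prob >= 0.5).astype(int))
print(f"kappa-optimal threshold {best:.2f} (kappa {max(kappas):.2f}) "
      f"vs default 0.50 (kappa {default_kappa:.2f})")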
An acoustic emission study of plastic deformation in polycrystalline aluminium
NASA Technical Reports Server (NTRS)
Bill, R. C.; Frederick, J. R.; Felbeck, D. K.
1979-01-01
Acoustic emission experiments were performed on polycrystalline and single crystal 99.99% aluminum while undergoing tensile deformation. It was found that acoustic emission counts as a function of grain size showed a maximum value at a particular grain size. Furthermore, the slip area associated with this particular grain size corresponded to the threshold level of detectability of single dislocation slip events. The rate of decline in acoustic emission activity as grain size is increased beyond the peak value suggests that grain boundary associated dislocation sources are giving rise to the bulk of the detected acoustic emissions.
Threshold-adaptive Canny operator based on cross-zero points
NASA Astrophysics Data System (ADS)
Liu, Boqi; Zhang, Xiuhua; Hong, Hanyu
2018-03-01
Canny edge detection [1] is a technique to extract useful structural information from different vision objects while dramatically reducing the amount of data to be processed. It has been widely applied in various computer vision systems. Two thresholds have to be set before the edges are segregated from the background; usually, two static values are chosen as the thresholds based on the developers' experience [2]. In this paper, a novel automatic thresholding method is proposed. The relation between the thresholds and cross-zero points is analyzed, and an interpolation function is deduced to determine the thresholds. Comprehensive experimental results demonstrate the effectiveness of the proposed method and its advantages for stable edge detection under changing illumination.
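The paper's cross-zero interpolation is not specified in the abstract; for comparison, a widely used median-based auto-thresholding heuristic for Canny looks like this (OpenCV, Python):

import cv2
import numpy as np

def auto_canny(image, sigma=0.33):
    # Derive low/high thresholds from the median intensity of the blurred image.
    blurred = cv2.GaussianBlur(image, (5, 5), 0)
    v = np.median(blurred)
    low = int(max(0, (1.0 - sigma) * v))
    high = int(min(255, (1.0 + sigma) * v))
    return cv2.Canny(blurred, low, high)

# Usage: edges = auto_canny(cv2.imread("input.png", cv2.IMREAD_GRAYSCALE))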
NASA Astrophysics Data System (ADS)
Gariano, Stefano Luigi; Brunetti, Maria Teresa; Iovine, Giulio; Melillo, Massimo; Peruccacci, Silvia; Terranova, Oreste Giuseppe; Vennari, Carmela; Guzzetti, Fausto
2015-04-01
Prediction of rainfall-induced landslides can rely on empirical rainfall thresholds. These are obtained from the analysis of past rainfall events that have (or have not) resulted in slope failures. Accurate prediction requires reliable thresholds, which need to be validated before their use in operational landslide warning systems. Despite the clear relevance of validation, only a few studies have addressed the problem and have proposed and tested robust validation procedures. We propose a validation procedure that allows for the definition of optimal thresholds for early warning purposes. The validation is based on contingency tables, skill scores, and receiver operating characteristic (ROC) analysis. To establish the optimal threshold, which maximizes the correct landslide predictions and minimizes the incorrect predictions, we propose an index that results from the linear combination of three weighted skill scores. Selection of the optimal threshold depends on the scope and the operational characteristics of the early warning system. The choice is made by selecting the weights appropriately and by searching for the optimal (maximum) value of the index. We discuss weaknesses in the validation procedure caused by the inherent lack of information (epistemic uncertainty) on landslide occurrence typical of large study areas. When working at the regional scale, landslides may have occurred and may not have been reported. This results in biases and variations in the contingencies and the skill scores. We introduce two parameters to represent the unknown proportion of rainfall events (above and below the threshold) for which landslides occurred and went unreported. We show that even a very small underestimation in the number of landslides can result in a significant decrease in the performance of a threshold as measured by the skill scores. We show that the variations in the skill scores differ for different uncertainties of events above or below the threshold. This has consequences for the ROC analysis. We applied the proposed procedure to a catalogue of rainfall conditions that have resulted in landslides, and to a set of rainfall events that, presumably, have not resulted in landslides, in Sicily in the period 2002-2012. First, we determined regional event duration-cumulated event rainfall (ED) thresholds for shallow landslide occurrence using 200 rainfall conditions that resulted in 223 shallow landslides in Sicily in the period 2002-2011. Next, we validated the thresholds using 29 rainfall conditions that triggered 42 shallow landslides in Sicily in 2012, and 1250 rainfall events that presumably did not result in landslides in the same year. We performed a back analysis simulating the use of the thresholds in a hypothetical landslide warning system operating in 2012.
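A sketch of the scoring machinery: skill scores from a landslide/no-landslide contingency table and a weighted linear index over them. The particular scores and weights below are illustrative assumptions, not the paper's exact formulation:

def skill_scores(tp, fp, fn, tn):
    pod = tp / (tp + fn)                 # probability of detection
    far = fp / (fp + tp)                 # false alarm ratio
    hk = pod - fp / (fp + tn)            # Hanssen-Kuipers skill score
    return pod, far, hk

def weighted_index(tp, fp, fn, tn, w=(0.5, 0.3, 0.2)):
    pod, far, hk = skill_scores(tp, fp, fn, tn)
    return w[0] * pod + w[1] * (1 - far) + w[2] * hk

# Evaluate candidate thresholds by their contingency counts and pick the maximum.
candidates = {"T1": (35, 120, 7, 1130), "T2": (30, 60, 12, 1190)}
best = max(candidates, key=lambda k: weighted_index(*candidates[k]))
print(best, round(weighted_index(*candidates[best]), 3))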
Follow-up of hearing thresholds among forge hammering workers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kamal, A.A.; Mikael, R.A.; Faris, R.
Hearing threshold was reexamined in a group of forge hammering workers investigated 8 years ago, with consideration of the age effect and of auditory symptoms. Workers were exposed to impact noise that ranged from 112 to 139 dB(A), at an irregular rate of 20 to 50 drops/minute, and a continuous background noise that ranged from 90 to 94 dB(A). Similar to what was observed 8 years ago, the present permanent threshold shift (PTS) showed a maximum notch at the frequency of 6 kHz and considerable elevations at the frequencies of 0.25-1 kHz. The age-corrected PTS and the postexposure hearing threshold were significantly higher than the corresponding previous values at the frequencies 0.25, 0.5, 1, and 8 kHz only. The rise was more evident at the low than at the high frequencies. Temporary threshold shift (TTS) values were significantly less than those 8 years ago. Contrary to the previous TTS, the present TTS was higher at low than at high frequencies. Although progression of PTS at the frequencies 0.25 and 0.5 kHz was continuous throughout the observed durations of exposure, progression at higher frequencies occurred essentially in the first 10 to 15 years of exposure. Thereafter, it followed a much slower rate. Tinnitus was significantly associated with difficulty in hearing the human voice and with elevation of PTS at all the tested frequencies, while acoustic after-image was significantly associated with increment of PTS at the frequencies 0.25-2 kHz. No relation between PTS and smoking was found. PTS at low frequencies may provide an indication of progression of hearing damage when the sensitivity at 6 and 4 kHz diminishes after prolonged years of exposure. Tinnitus and acoustic after-image are related to the auditory effect of forge hammering noise.
Mwesigye, Abraham R; Young, Scott D; Bailey, Elizabeth H; Tumwebaze, Susan B
2016-12-15
The mining and processing of copper in Kilembe, Western Uganda, from 1956 to 1982 left over 15 Mt of tailings containing cupriferous and cobaltiferous pyrite dumped within a mountain river valley. This pilot study was conducted to assess the nature and extent of risk to local populations from metal contamination arising from those mining activities. We determined trace element concentrations in mine tailings, soils, locally cultivated foods, house dust, drinking water and human biomarkers (toenails) using ICP-MS analysis of acid-digested samples. The results showed that tailings, containing higher concentrations of Co, Cu, Ni and As compared with world average crust values, had eroded and contaminated local soils. Pollution load indices revealed that 51% of agricultural soils sampled were contaminated with trace elements. Local water supplies were contaminated, with Co concentrations that exceeded Wisconsin (US) thresholds in 25% of domestic water supplies and 40% of Nyamwamba river water samples. Zinc exceeded the WHO/FAO threshold of 99.4 mg kg⁻¹ in 36% of Amaranthus vegetable samples, Cu exceeded the EC threshold of 20 mg kg⁻¹ in 19% of Amaranthus, while Pb exceeded the WHO threshold of 0.3 mg kg⁻¹ in 47% of Amaranthus vegetables. In bananas, 20% of samples contained Pb concentrations that exceeded the WHO/FAO recommended threshold of 0.3 mg kg⁻¹. However, risk assessment of local foods and water, based on hazard quotients (HQ values), revealed no potential health effects. The high external contamination of volunteers' toenails with some elements (even after a washing process) calls into question their use as a biomarker for metal exposure in human populations where feet are frequently exposed to soil dust. Any mitigation of Kilembe mine impacts should be aimed at remediation of agricultural soils, regulating the discharge of underground contaminated water, but also containment of tailings erosion. Copyright © 2016 Elsevier B.V. All rights reserved.
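For reference, a hazard quotient of the kind used above is simply an estimated daily intake divided by a reference dose; all numbers in this sketch are invented, not the study's measurements:

def hazard_quotient(conc_mg_per_kg, intake_kg_per_day, body_weight_kg, rfd_mg_per_kg_day):
    # Estimated daily intake per kg body weight, divided by the reference dose.
    edi = conc_mg_per_kg * intake_kg_per_day / body_weight_kg
    return edi / rfd_mg_per_kg_day

hq = hazard_quotient(conc_mg_per_kg=0.35, intake_kg_per_day=0.3,
                     body_weight_kg=60, rfd_mg_per_kg_day=0.0035)
print(f"HQ = {hq:.2f} ({'potential concern' if hq > 1 else 'below the level of concern'})")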
Determining lower threshold concentrations for synergistic effects.
Bjergager, Maj-Britt Andersen; Dalhoff, Kristoffer; Kretschmann, Andreas; Nørgaard, Katrine Banke; Mayer, Philipp; Cedergreen, Nina
2017-01-01
Though they occur only rarely, synergistic interactions between chemicals in mixtures have long been a point of focus. Most studies analyzing synergistic interactions used unrealistically high chemical concentrations. The aim of the present study is to determine the threshold concentration below which proven synergists cease to act as synergists towards the aquatic crustacean Daphnia magna. To do this, we compared several approaches and test setups to evaluate which approach gives the most conservative estimate of the lower threshold for synergy for three known azole synergists. We focus on synergistic interactions between the pyrethroid insecticide alpha-cypermethrin and one of the three azole fungicides prochloraz, propiconazole or epoxiconazole, measured on Daphnia magna immobilization. Three different experimental setups were applied: a standard 48-h acute toxicity test, an adapted 48-h test using passive dosing for constant chemical exposure concentrations, and a 14-day test. Synergy was defined as occurring in mixtures where either EC50 values decreased more than two-fold below what was predicted by concentration addition (horizontal assessment) or the fraction of immobile organisms increased more than two-fold above what was predicted by independent action (vertical assessment). All three tests confirmed the hypothesis of the existence of a lower azole threshold concentration below which no synergistic interaction was observed. The lower threshold concentration, however, decreased with increasing test duration: from 0.026 ± 0.013 μM (9.794 ± 4.897 μg L⁻¹), 0.425 ± 0.089 μM (145.435 ± 30.46 μg L⁻¹) and 0.757 ± 0.253 μM (249.659 ± 83.44 μg L⁻¹) for prochloraz, propiconazole and epoxiconazole in the standard 48-h toxicity tests to 0.015 ± 0.004 μM (5.651 ± 1.507 μg L⁻¹), 0.145 ± 0.025 μM (49.619 ± 8.555 μg L⁻¹) and 0.122 ± 0.0417 μM (40.236 ± 13.75 μg L⁻¹), respectively, in the 14-day tests. Testing synergy in relation to concentration addition provided the most conservative values. The threshold values for the vertical assessments, in tests where the two could be compared, were in general 1.2- to 4.7-fold higher than for the horizontal assessments. Using passive dosing rather than dilution series or spiking did not lower the threshold significantly. Below the threshold for synergy, slight antagonism could often be observed. This is most likely due to induction of enzymes active in the metabolization of alpha-cypermethrin. The results emphasize the importance of test duration when assessing synergy, but also show that azole concentrations within the typically monitored range of up to 0.5 μg L⁻¹ are not likely to cause severe synergy with respect to Daphnia magna immobilization. Copyright © 2016 Elsevier B.V. All rights reserved.
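The two synergy criteria translate directly into code: a more-than-two-fold EC50 drop below the concentration-addition prediction (horizontal), or a more-than-two-fold effect excess over the independent-action prediction (vertical). A minimal sketch with hypothetical inputs:

def ca_synergy(ec50_mix_observed, ec50_mix_predicted):
    # Horizontal assessment: >2-fold drop below the CA-predicted EC50.
    return ec50_mix_predicted / ec50_mix_observed > 2

def ia_synergy(effect_observed, effect_a, effect_b):
    # Vertical assessment: >2-fold more immobilization than IA predicts,
    # where IA predicts 1 - (1 - E_a)(1 - E_b) for effect fractions.
    predicted = 1 - (1 - effect_a) * (1 - effect_b)
    return effect_observed / predicted > 2

print(ca_synergy(ec50_mix_observed=0.08, ec50_mix_predicted=0.30))     # True
print(ia_synergy(effect_observed=0.45, effect_a=0.10, effect_b=0.08))  # True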
Passive beam forming and spatial diversity in meteor scatter communication systems
NASA Astrophysics Data System (ADS)
Akram, Ammad; Cannon, Paul S.
1996-03-01
The method of passive beam formation using a four-element Butler matrix to improve the signal availability of meteor scatter communication systems is investigated. Signal availability, defined as the integrated time that the signal-to-noise ratio (snr) exceeds some snr threshold, serves as an important indicator of system performance. Butler matrix signal availability is compared with the performance of a single four-element Yagi reference system using ~6.5 hours of data from a 720 km north-south temperate-latitude link. The signal availability improvement factor of the Butler matrix is found to range between 1.6 and 1.8 over the snr threshold range of 20-30 dB in a 300-Hz bandwidth. Experimental values of the Butler matrix signal availability improvement factor are compared with analytical predictions. The experimental values show an expected snr threshold dependency, with a dramatic increase at high snr. A theoretical analysis is developed to describe this increase. The signal availability can be further improved by ~10-20% in a system employing two four-element Butler matrices with squinted beams so as to illuminate the sky with eight high-gain beams. Space diversity is found to increase the signal availability of a single antenna system by ~10-15%, but the technique has very little advantage in a system already employing passive beam formation.
Mooij, Anne H; Frauscher, Birgit; Amiri, Mina; Otte, Willem M; Gotman, Jean
2016-12-01
To assess whether there is a difference in the background activity in the ripple band (80-200 Hz) between epileptic and non-epileptic channels, and to assess whether this difference is sufficient for their reliable separation. We calculated the mean and standard deviation of wavelet entropy in 303 non-epileptic and 334 epileptic channels from 50 patients with intracerebral depth electrodes and used these measures as predictors in a multivariable logistic regression model. We assessed sensitivity, positive predictive value (PPV) and negative predictive value (NPV) based on a probability threshold corresponding to 90% specificity. The probability of a channel being epileptic increased with higher mean (p=0.004) and particularly with higher standard deviation (p<0.0001). The performance of the model was, however, not sufficient for fully classifying the channels. With a threshold corresponding to 90% specificity, sensitivity was 37%, PPV was 80%, and NPV was 56%. A channel with a high standard deviation of entropy is likely to be epileptic; with a threshold corresponding to 90% specificity our model can reliably select a subset of epileptic channels. Most studies have concentrated on brief ripple events. We showed that background activity in the ripple band also has some ability to discriminate epileptic channels. Copyright © 2016 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
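A sketch of the analysis pattern (logistic regression on entropy mean and SD, with the probability cut-off set to give 90% specificity); the data, class means and spreads are synthetic assumptions, not the study's recordings:

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
X0 = rng.normal([0.50, 0.10], 0.05, (303, 2))    # non-epileptic channels (mean, SD)
X1 = rng.normal([0.53, 0.16], 0.05, (334, 2))    # epileptic channels (mean, SD)
X, y = np.vstack([X0, X1]), np.r_[np.zeros(303), np.ones(334)]

model = LogisticRegression().fit(X, y)
p = model.predict_proba(X)[:, 1]

cut = np.quantile(p[y == 0], 0.90)               # cut-off at 90% specificity
flagged = p >= cut
sens = flagged[y == 1].mean()
ppv = y[flagged].mean()
print(f"cut-off {cut:.2f}: sensitivity {sens:.0%}, PPV {ppv:.0%}")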
NASA Astrophysics Data System (ADS)
Zin, Wan Zawiah Wan; Shinyie, Wendy Ling; Jemain, Abdul Aziz
2015-02-01
In this study, two series of data for extreme rainfall events are generated based on the Annual Maximum and Partial Duration methods, derived from 102 rain-gauge stations in Peninsular Malaysia from 1982-2012. To determine the optimal threshold for each station, several requirements must be satisfied, and the Adapted Hill estimator is employed for this purpose. A semi-parametric bootstrap is then used to estimate the mean square error (MSE) of the estimator at each threshold, and the optimal threshold is selected based on the smallest MSE. The mean annual frequency is also checked to ensure that it lies in the range of one to five, and the resulting data are de-clustered to ensure independence. The two data series are then fitted to the Generalized Extreme Value and Generalized Pareto distributions for the annual maximum and partial duration series, respectively. The parameter estimation methods used are the Maximum Likelihood and L-moment methods. Two goodness-of-fit tests are then used to evaluate the best-fitted distribution. The results showed that the Partial Duration series with the Generalized Pareto distribution and Maximum Likelihood parameter estimation provides the best representation of extreme rainfall events in Peninsular Malaysia for the majority of the stations studied. Based on these findings, several return values are also derived and spatial maps are constructed to identify the distribution characteristics of extreme rainfall in Peninsular Malaysia.
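The partial-duration step can be sketched as follows: keep exceedances over a chosen threshold, fit a Generalized Pareto distribution by maximum likelihood, and derive a return value. The data and the threshold choice below are illustrative, not the station records:

import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(5)
daily_rain = rng.gamma(shape=0.4, scale=12.0, size=30 * 365)   # mm/day, synthetic

threshold = np.quantile(daily_rain, 0.98)          # illustrative threshold choice
excesses = daily_rain[daily_rain > threshold] - threshold

shape, loc, scale = genpareto.fit(excesses, floc=0)  # MLE with location fixed at 0

# Approximate 100-year return level: threshold + GPD quantile at 1 - 1/(100*lambda),
# where lambda is the mean number of exceedances per year.
lam = excesses.size / 30
rp100 = threshold + genpareto.ppf(1 - 1 / (100 * lam), shape, 0, scale)
print(f"u={threshold:.1f} mm, xi={shape:.2f}, sigma={scale:.1f}, ~100-yr value {rp100:.0f} mm")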
Orellana, Luis H.; Rodriguez-R, Luis M.; Konstantinidis, Konstantinos T.
2016-10-07
Functional annotation of metagenomic and metatranscriptomic data sets relies on similarity searches based on e-value thresholds, resulting in an unknown number of false positive and negative matches. To overcome these limitations, we introduce ROCker, aimed at identifying position-specific, most-discriminant thresholds in sliding windows along the sequence of a target protein, accounting for non-discriminative domains shared by unrelated proteins. ROCker employs the receiver operating characteristic (ROC) curve to minimize the false discovery rate (FDR) and calculate the best thresholds based on how simulated shotgun metagenomic reads of known composition map onto well-curated reference protein sequences, and thus differs from HMM profiles and related methods. We showcase ROCker using ammonia monooxygenase (amoA) and nitrous oxide reductase (nosZ) genes, mediating the oxidation of ammonia and the reduction of the potent greenhouse gas N2O to inert N2, respectively. ROCker typically showed 60-fold lower FDR when compared to the common practice of using fixed e-values. Previously uncounted 'atypical' nosZ genes were found to be two times more abundant, on average, than their typical counterparts in most soil metagenomes, and the abundance of bacterial amoA was quantified against the highly related particulate methane monooxygenase (pmoA). Therefore, ROCker can reliably detect and quantify target genes in short-read metagenomes.
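A conceptual sketch of a position-specific threshold (this is not ROCker's implementation; the window width and the Youden-style selection criterion are assumptions): for each window along the reference protein, pick the bit-score cut-off that best separates simulated reads of target versus non-target origin.

import numpy as np

def window_threshold(positions, bitscores, is_target, start, width=50):
    # Reads whose alignment midpoint falls in this window of the reference.
    m = (positions >= start) & (positions < start + width)
    scores, labels = bitscores[m], is_target[m].astype(bool)
    best_thr, best_j = None, -1.0
    for thr in np.unique(scores):          # candidate bit-score cut-offs
        kept = scores >= thr
        tpr = (kept & labels).sum() / max(labels.sum(), 1)
        fpr = (kept & ~labels).sum() / max((~labels).sum(), 1)
        if tpr - fpr > best_j:             # Youden-style trade-off per window
            best_j, best_thr = tpr - fpr, thr
    return best_thr

# Usage with arrays from simulated reads: thr = window_threshold(pos, bits, truth, 0)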
Effect of Repetition Rate on Femtosecond Laser-Induced Homogenous Microstructures
Biswas, Sanchari; Karthikeyan, Adya; Kietzig, Anne-Marie
2016-01-01
We report on the effect of repetition rate on the formation and surface texture of laser-induced homogeneous microstructures. Different microstructures were micromachined on copper (Cu) and titanium (Ti) using femtosecond pulses at 1 and 10 kHz. We studied the effect of the repetition rate on structure formation by comparing the threshold accumulated pulse (FΣpulse) values, and the effect on the surface texture through lacunarity analysis. Machining both metals at low FΣpulse resulted in microstructures with higher lacunarity at 10 kHz compared to 1 kHz. On increasing FΣpulse, the microstructures showed higher lacunarity at 1 kHz. The effects of the repetition rate on the threshold FΣpulse values were, however, considerably different on the two metals. With an increase in repetition rate, we observed a decrease in the threshold FΣpulse on Cu, while on Ti we observed an increase. These differences were attributed to the respective material characteristics and the resulting melt dynamics. While machining Ti at 10 kHz, the melt layer induced by one laser pulse persists until the next pulse arrives, acting as a dielectric for the subsequent pulse, thereby increasing FΣpulse. On Cu, however, the melt layer quickly resolidifies and no such dielectric-like phase is observed. Our study contributes to the current knowledge of the effect of the repetition rate as an irradiation parameter. PMID:28774143
Mazzarino, Monica; Abate, Maria Gabriella; Alocci, Roberto; Rossi, Francesca; Stinchelli, Raffaella; Molaioni, Francesco; de la Torre, Xavier; Botrè, Francesco
2011-01-10
The presence of microorganisms in urine samples, under favourable conditions of storage and transportation, may alter the concentration of steroid hormones, thus compromising the correct evaluation of the urinary steroid profile in doping control analysis. According to the rules of the World Anti-Doping Agency (WADA technical document TD2004EAAS), a testosterone deconjugation higher than 5% and the presence of 5α-androstane-3,17-dione and 5β-androstane-3,17-dione in the deconjugated fraction are reliable indicators of urine degradation. The determination of these markers would require an additional quantitative analysis, since the steroid screening analysis in anti-doping laboratories is performed on the total (free + conjugated) fraction. The aim of this work is therefore to establish reliable threshold values for some representative compounds (namely 5α-androstane-3,17-dione and 5β-androstane-3,17-dione) in the total fraction, in order to predict directly at the screening stage the potential microbial degradation of urine samples. Preliminary evidence on the most suitable degradation indexes was obtained by measuring the urinary concentrations of testosterone, epitestosterone, 5α-androstane-3,17-dione and 5β-androstane-3,17-dione by gas chromatography-mass spectrometry every day for 15 days in the deconjugated, glucuronide and total fractions of 10 pools of urine from 60 healthy subjects, stored under different pH and temperature conditions, and isolating the samples with one or more markers of degradation according to the WADA technical document TD2004EAAS. The threshold values for 5α-androstane-3,17-dione and 5β-androstane-3,17-dione were then obtained by correlating the testosterone deconjugation rate with the urinary concentrations of 5α-androstane-3,17-dione and 5β-androstane-3,17-dione in the total fraction. The threshold values suggested as indexes of urine degradation in the total fraction were 10 ng mL⁻¹ for 5α-androstane-3,17-dione and 20 ng mL⁻¹ for 5β-androstane-3,17-dione. The validity of this approach was confirmed by the analysis of routine samples for more than five months (i.e. on a total of more than 4000 urine samples): samples with concentrations of total 5α-androstane-3,17-dione and 5β-androstane-3,17-dione higher than the threshold values showed a percentage of free testosterone higher than 5% of its total amount, whereas free testosterone at a percentage higher than 5% of its total amount was not detected in urines with concentrations of total 5α-androstane-3,17-dione and 5β-androstane-3,17-dione lower than the threshold values. Copyright © 2010 Elsevier B.V. All rights reserved.
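At the screening stage, the proposed thresholds reduce to a simple flag. A minimal sketch; whether the criterion requires one or both markers to exceed their thresholds is not fully specified in the abstract, so this version flags on either:

def degradation_suspected(total_5a_ng_ml: float, total_5b_ng_ml: float) -> bool:
    # Thresholds from the study: 10 ng/mL (5a-androstane-3,17-dione) and
    # 20 ng/mL (5b-androstane-3,17-dione) in the total fraction. Flagging on
    # either marker is an assumption made here for a conservative screen.
    return total_5a_ng_ml > 10.0 or total_5b_ng_ml > 20.0

print(degradation_suspected(12.4, 8.0))   # True -> send for confirmatory analysis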