Sample records for adaptive variable-ratio threshold

  1. Spike-Threshold Adaptation Predicted by Membrane Potential Dynamics In Vivo

    PubMed Central

    Fontaine, Bertrand; Peña, José Luis; Brette, Romain

    2014-01-01

    Neurons encode information in sequences of spikes, which are triggered when their membrane potential crosses a threshold. In vivo, the spiking threshold displays large variability, suggesting that threshold dynamics have a profound influence on how the combined input of a neuron is encoded in the spiking. Threshold variability could be explained by adaptation to the membrane potential. However, it could also be the case that most threshold variability reflects noise and processes other than threshold adaptation. Here, we investigated threshold variation in auditory neuron responses recorded in vivo in barn owls. We found that spike threshold is quantitatively predicted by a model in which the threshold adapts, tracking the membrane potential at a short timescale. As a result, in these neurons, slow voltage fluctuations do not contribute to spiking because they are filtered by threshold adaptation. More importantly, these neurons can only respond to input spikes arriving together on a millisecond timescale. These results demonstrate that fast adaptation to the membrane potential captures spike threshold variability in vivo. PMID:24722397
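
    A minimal sketch of the kind of threshold-adaptation model this abstract describes: the spike threshold relaxes toward a level set by the recent membrane potential with a short time constant, so only fast depolarizations reach it. The function name, parameter values, and the specific relaxation rule are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def simulate_adaptive_threshold(v, dt=0.1, tau_theta=5.0, theta_rest=-55.0, alpha=0.8):
    """Track a membrane-potential trace v (mV) with an adapting spike threshold.

    The threshold relaxes toward a level pulled up by the recent membrane
    potential with time constant tau_theta (ms), so slow voltage drifts are
    filtered out and only fast depolarizations cross it.  All parameter
    values are illustrative.
    """
    v = np.asarray(v, dtype=float)
    theta = np.empty_like(v)
    theta[0] = theta_rest
    spike_times = []
    for t in range(1, len(v)):
        # steady-state threshold tracks the (recent) membrane potential
        theta_inf = theta_rest + alpha * (v[t - 1] - theta_rest)
        theta[t] = theta[t - 1] + dt / tau_theta * (theta_inf - theta[t - 1])
        if v[t] >= theta[t] and v[t - 1] < theta[t - 1]:
            spike_times.append(t * dt)    # upward crossing -> spike time (ms)
    return theta, spike_times
```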

  2. Development of a Voice Activity Controlled Noise Canceller

    PubMed Central

    Abid Noor, Ali O.; Samad, Salina Abdul; Hussain, Aini

    2012-01-01

    In this paper, a variable threshold voice activity detector (VAD) is developed to control the operation of a two-sensor adaptive noise canceller (ANC). The VAD prohibits the reference input of the ANC from containing some strength of actual speech signal during adaptation periods. The novelty of this approach resides in using the residual output from the noise canceller to control the decisions made by the VAD. Thresholds of full-band energy and zero-crossing features are adjusted according to the residual output of the adaptive filter. Performance evaluation of the proposed approach is quoted in terms of signal to noise ratio improvements as well as mean square error (MSE) convergence of the ANC. The new approach showed an improved noise cancellation performance when tested under several types of environmental noise. Furthermore, the computational power of the adaptive process is reduced since the output of the adaptive filter is efficiently calculated only during non-speech periods. PMID:22778667
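
    A toy illustration of the variable-threshold idea in this record: the VAD's energy threshold is tied to the running power of the noise canceller's residual, with a zero-crossing bound as a second feature. All names and constants are hypothetical, and for brevity only the energy threshold is adapted here, whereas the paper adjusts both features from the residual.

```python
import numpy as np

def vad_decision(frame, resid_frame, k_energy=3.0, zcr_max=0.25):
    """Toy variable-threshold VAD decision for one signal frame.

    The energy threshold scales with the power of the ANC residual, so the
    detector tightens as the canceller converges; a zero-crossing-rate bound
    rejects noise-like frames.  Constants are illustrative, and only the
    energy threshold is made adaptive in this sketch.
    """
    frame = np.asarray(frame, dtype=float)
    resid_frame = np.asarray(resid_frame, dtype=float)
    energy = np.mean(frame ** 2)
    zcr = np.mean(np.abs(np.diff(np.sign(frame)))) / 2.0   # crossings per sample
    energy_thresh = k_energy * np.mean(resid_frame ** 2)   # tracks residual power
    return energy > energy_thresh and zcr < zcr_max
```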

  3. ADAPTIVE THRESHOLD LOGIC.

    DTIC Science & Technology

    The design and construction of a 16-variable threshold logic gate with adaptable weights is described. The operating characteristics of tape wound...and sizes as well as for the 16-input adaptive threshold logic gate. (Author)

  4. Multiratio fusion change detection with adaptive thresholding

    NASA Astrophysics Data System (ADS)

    Hytla, Patrick C.; Balster, Eric J.; Vasquez, Juan R.; Neuroth, Robert M.

    2017-04-01

    A ratio-based change detection method known as multiratio fusion (MRF) is proposed and tested. The MRF framework builds on other change detection components proposed in this work: dual ratio (DR) and multiratio (MR). The DR method involves two ratios coupled with adaptive thresholds to maximize detected changes and minimize false alarms. The use of two ratios is shown to outperform the single ratio case when the means of the image pairs are not equal. MR change detection builds on the DR method by including negative imagery to produce four total ratios with adaptive thresholds. Inclusion of negative imagery is shown to improve detection sensitivity and to boost detection performance in certain target and background cases. MRF further expands this concept by fusing together the ratio outputs using a routine in which detections must be verified by two or more ratios to be classified as a true changed pixel. The proposed method is tested with synthetically generated test imagery and real datasets with results compared to other methods found in the literature. DR is shown to significantly outperform the standard single ratio method. MRF produces excellent change detection results that exhibit up to a 22% performance improvement over other methods from the literature at low false-alarm rates.
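
    A rough sketch of the multiratio-fusion idea, under stated assumptions: forward/backward ratios of the image pair and of their negatives are each thresholded adaptively (here with a stand-in mean + k·std rule, since the authors' exact rule is not reproduced), and change is declared only where at least two ratios agree.

```python
import numpy as np

def mrf_change_mask(img1, img2, k=2.0, min_votes=2, eps=1e-6):
    """Sketch of multiratio-fusion change detection on a co-registered pair.

    Four ratio images are formed (forward, backward, and the same for the
    "negative" images), each is thresholded adaptively, and a pixel is
    declared changed only if at least `min_votes` ratios agree.  The
    mean + k*std threshold is a stand-in for the paper's adaptive rule.
    """
    a = img1.astype(float) + eps
    b = img2.astype(float) + eps
    na = a.max() - a + 2 * eps          # "negative" imagery, kept strictly positive
    nb = b.max() - b + 2 * eps
    ratios = [a / b, b / a, na / nb, nb / na]
    votes = np.zeros(a.shape, dtype=int)
    for r in ratios:
        thresh = r.mean() + k * r.std()  # adaptive per-ratio threshold
        votes += (r > thresh).astype(int)
    return votes >= min_votes
```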

  5. Predicting coral bleaching hotspots: the role of regional variability in thermal stress and potential adaptation rates

    NASA Astrophysics Data System (ADS)

    Teneva, Lida; Karnauskas, Mandy; Logan, Cheryl A.; Bianucci, Laura; Currie, Jock C.; Kleypas, Joan A.

    2012-03-01

    Sea surface temperature fields (1870-2100) forced by CO2-induced climate change under the IPCC SRES A1B CO2 scenario, from three World Climate Research Programme Coupled Model Intercomparison Project Phase 3 (WCRP CMIP3) models (CCSM3, CSIRO MK 3.5, and GFDL CM 2.1), were used to examine how coral sensitivity to thermal stress and rates of adaptation affect global projections of coral-reef bleaching. The focus of this study was twofold: to (1) assess how the impact of Degree-Heating-Month (DHM) thermal stress threshold choice affects potential bleaching predictions and (2) examine the effect of hypothetical adaptation rates of corals to rising temperature. DHM values were estimated using a conventional threshold of 1°C and a variability-based threshold of 2σ above the climatological maximum. Coral adaptation rates were simulated as a function of historical 100-year exposure to maximum annual SSTs with a dynamic rather than static climatological maximum based on the previous 100 years, for a given reef cell. Within CCSM3 simulations, the 1°C threshold predicted later onset of mild bleaching every 5 years for the fraction of reef grid cells where 1°C > 2σ of the climatology time series of annual SST maxima (1961-1990). Alternatively, DHM values using both thresholds, with CSIRO MK 3.5 and GFDL CM 2.1 SSTs, did not produce drastically different onset timing for bleaching every 5 years. Across models, DHMs based on the 1°C thermal stress threshold show the most threatened reefs by 2100 could be in the Central and Western Equatorial Pacific, whereas use of the variability-based threshold for DHMs yields the Coral Triangle and parts of Micronesia and Melanesia as bleaching hotspots. Simulations that allow corals to adapt to increases in maximum SST drastically reduce the rates of bleaching. These findings highlight the importance of considering the thermal stress threshold in DHM estimates as well as potential adaptation models in future coral bleaching projections.
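
    A hedged sketch of the Degree-Heating-Month accumulation that the abstract compares: monthly SST exceedance above either the conventional 1°C threshold or the variability-based 2σ threshold is summed over a sliding window. The 4-month window and the function names are assumptions of this sketch, not values from the paper.

```python
import numpy as np

def degree_heating_months(sst_monthly, clim_max, clim_std, rule="1C", window=4):
    """Accumulate Degree-Heating-Month (DHM) thermal stress for one reef cell.

    sst_monthly : 1-D array of monthly mean SST (degC)
    clim_max    : climatological maximum monthly SST for the cell
    clim_std    : std of annual SST maxima (used by the variability-based rule)
    rule        : "1C"     -> stress counted above clim_max + 1 degC
                  "2sigma" -> stress counted above clim_max + 2*clim_std
    The 4-month accumulation window is an assumption of this sketch.
    """
    sst_monthly = np.asarray(sst_monthly, dtype=float)
    thresh = clim_max + (1.0 if rule == "1C" else 2.0 * clim_std)
    hotspot = np.clip(sst_monthly - thresh, 0.0, None)       # monthly exceedance
    dhm = np.array([hotspot[max(0, i - window + 1): i + 1].sum()
                    for i in range(len(hotspot))])
    return dhm
```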

  6. Speckle Noise Reduction in Optical Coherence Tomography Using Two-dimensional Curvelet-based Dictionary Learning.

    PubMed

    Esmaeili, Mahdad; Dehnavi, Alireza Mehri; Rabbani, Hossein; Hajizadeh, Fedra

    2017-01-01

    The process of interpretation of high-speed optical coherence tomography (OCT) images is restricted due to the large speckle noise. To address this problem, this paper proposes a new method using a two-dimensional (2D) curvelet-based K-SVD algorithm for speckle noise reduction and contrast enhancement of intra-retinal layers of 2D spectral-domain OCT images. For this purpose, we take the curvelet transform of the noisy image. In the next step, noisy sub-bands of different scales and rotations are separately thresholded with an adaptive data-driven thresholding method; then each thresholded sub-band is denoised based on K-SVD dictionary learning with a variable-size initial dictionary dependent on the size of the curvelet coefficients' matrix in each sub-band. We also modify each coefficient matrix to enhance intra-retinal layers, with noise suppression at the same time. We demonstrate the ability of the proposed algorithm in speckle noise reduction of 100 publicly available OCT B-scans with and without non-neovascular age-related macular degeneration (AMD), and improvements of contrast-to-noise ratio from 1.27 to 5.12 and of mean-to-standard deviation ratio from 3.20 to 14.41 are obtained.

  7. Temporal resolution in children.

    PubMed

    Wightman, F; Allen, P; Dolan, T; Kistler, D; Jamieson, D

    1989-06-01

    The auditory temporal resolving power of young children was measured using an adaptive forced-choice psychophysical paradigm that was disguised as a video game. 20 children between 3 and 7 years of age and 5 adults were asked to detect the presence of a temporal gap in a burst of half-octave-band noise at band center frequencies of 400 and 2,000 Hz. The minimum detectable gap (gap threshold) was estimated adaptively in 20-trial runs. The mean gap thresholds in the 400-Hz condition were higher for the younger children than for the adults, with the 3-year-old children producing the highest thresholds. Gap thresholds in the 2,000-Hz condition were generally lower than in the 400-Hz condition and showed a similar age effect. All the individual adaptive runs were "adult-like," suggesting that the children were generally attentive to the task during each run. However, the variability of threshold estimates from run to run was substantial, especially in the 3-5-year-old children. Computer simulations suggested that this large within-subjects variability could have resulted from frequent, momentary lapses of attention, which would lead to "guessing" on a substantial portion of the trials.
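
    The abstract does not specify the adaptive rule beyond 20-trial forced-choice runs, so the sketch below uses a generic 2-down/1-up staircase (which converges near 70.7% correct) purely to illustrate how a gap threshold can be tracked adaptively; the rule, step size, and starting gap are illustrative assumptions.

```python
def two_down_one_up(respond, n_trials=20, start_gap_ms=20.0, step_ms=2.0, min_gap_ms=1.0):
    """Generic 2-down/1-up staircase for tracking a gap-detection threshold.

    `respond(gap_ms)` must return True when the listener answers correctly.
    Rule, step size, and starting gap are illustrative; the study's exact
    adaptive procedure is not given in the abstract.
    """
    gap = start_gap_ms
    correct_in_a_row = 0
    track = []
    for _ in range(n_trials):
        correct = respond(gap)
        track.append((gap, correct))
        if correct:
            correct_in_a_row += 1
            if correct_in_a_row == 2:                 # two correct -> make the task harder
                gap = max(min_gap_ms, gap - step_ms)
                correct_in_a_row = 0
        else:                                          # any error -> make it easier
            gap = gap + step_ms
            correct_in_a_row = 0
    # crude threshold estimate: mean gap over the second half of the run
    tail = [g for g, _ in track[len(track) // 2:]]
    return sum(tail) / len(tail), track
```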

  8. Unipolar Terminal-Attractor Based Neural Associative Memory with Adaptive Threshold

    NASA Technical Reports Server (NTRS)

    Liu, Hua-Kuang (Inventor); Barhen, Jacob (Inventor); Farhat, Nabil H. (Inventor); Wu, Chwan-Hwa (Inventor)

    1996-01-01

    A unipolar terminal-attractor based neural associative memory (TABAM) system with adaptive threshold for perfect convergence is presented. By adaptively setting the threshold values for the dynamic iteration for the unipolar binary neuron states with terminal-attractors for the purpose of reducing the spurious states in a Hopfield neural network for associative memory and using the inner-product approach, perfect convergence and correct retrieval is achieved. Simulation is completed with a small number of stored states (M) and a small number of neurons (N) but a large M/N ratio. An experiment with optical exclusive-OR logic operation using LCTV SLMs shows the feasibility of optoelectronic implementation of the models. A complete inner-product TABAM is implemented using a PC for calculation of adaptive threshold values to achieve a unipolar TABAM (UIT) in the case where there is no crosstalk, and a crosstalk model (CRIT) in the case where crosstalk corrupts the desired state.

  9. Unipolar terminal-attractor based neural associative memory with adaptive threshold

    NASA Technical Reports Server (NTRS)

    Liu, Hua-Kuang (Inventor); Barhen, Jacob (Inventor); Farhat, Nabil H. (Inventor); Wu, Chwan-Hwa (Inventor)

    1993-01-01

    A unipolar terminal-attractor based neural associative memory (TABAM) system with adaptive threshold for perfect convergence is presented. By adaptively setting the threshold values for the dynamic iteration for the unipolar binary neuron states with terminal-attractors for the purpose of reducing the spurious states in a Hopfield neural network for associative memory and using the inner product approach, perfect convergence and correct retrieval is achieved. Simulation is completed with a small number of stored states (M) and a small number of neurons (N) but a large M/N ratio. An experiment with optical exclusive-OR logic operation using LCTV SLMs shows the feasibility of optoelectronic implementation of the models. A complete inner-product TABAM is implemented using a PC for calculation of adaptive threshold values to achieve a unipolar TABAM (UIT) in the case where there is no crosstalk, and a crosstalk model (CRIT) in the case where crosstalk corrupts the desired state.

  10. Synergy of adaptive thresholds and multiple transmitters in free-space optical communication.

    PubMed

    Louthain, James A; Schmidt, Jason D

    2010-04-26

    Laser propagation through extended turbulence causes severe beam spread and scintillation. Airborne laser communication systems require special considerations in size, complexity, power, and weight. Rather than using bulky, costly, adaptive optics systems, we reduce the variability of the received signal by integrating a two-transmitter system with an adaptive threshold receiver to average out the deleterious effects of turbulence. In contrast to adaptive optics approaches, systems employing multiple transmitters and adaptive thresholds exhibit performance improvements that are unaffected by turbulence strength. Simulations of this system with on-off-keying (OOK) showed that reducing the scintillation variations with multiple transmitters improves the performance of low-frequency adaptive threshold estimators by 1-3 dB. The combination of multiple transmitters and adaptive thresholding provided at least a 10 dB gain over implementing only transmitter pointing and receiver tilt correction for all three high-Rytov number scenarios. The scenario with a spherical-wave Rytov number R=0.20 enjoyed a 13 dB reduction in the required SNR for BERs between 10^-5 and 10^-3, consistent with the code gain metric. All five scenarios between 0.06 and 0.20 Rytov number improved to within 3 dB of the SNR of the lowest Rytov number scenario.
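
    A simplified sketch of a low-frequency adaptive-threshold OOK receiver in the spirit of this record: the decision threshold is the slowly updated midpoint of running 'on'/'off' level estimates, so it follows scintillation-induced fading. The estimator structure and the smoothing constant are assumptions, not the estimators analyzed in the paper.

```python
import numpy as np

def ook_adaptive_threshold(samples, alpha=0.01):
    """Decide OOK bits with a slowly adapting threshold.

    The threshold is the midpoint between running estimates of the 'on' and
    'off' levels; each received sample updates the level it was assigned to,
    so slow fading is tracked while fast symbol transitions are preserved.
    The structure and alpha are illustrative.
    """
    samples = np.asarray(samples, dtype=float)
    hi = np.max(samples[:100])        # crude initial level estimates
    lo = np.min(samples[:100])
    bits = np.empty(len(samples), dtype=int)
    for i, s in enumerate(samples):
        thresh = 0.5 * (hi + lo)
        bits[i] = 1 if s > thresh else 0
        if bits[i]:                   # update the level this sample belongs to
            hi = (1 - alpha) * hi + alpha * s
        else:
            lo = (1 - alpha) * lo + alpha * s
    return bits
```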

  11. Cost-effectiveness of different strategies for selecting and treating individuals at increased risk of osteoporosis or osteopenia: a systematic review.

    PubMed

    Müller, Dirk; Pulm, Jannis; Gandjour, Afschin

    2012-01-01

    To compare cost-effectiveness modeling analyses of strategies to prevent osteoporotic and osteopenic fractures either based on fixed thresholds using bone mineral density or based on variable thresholds including bone mineral density and clinical risk factors. A systematic review was performed by using the MEDLINE database and reference lists from previous reviews. On the basis of predefined inclusion/exclusion criteria, we identified relevant studies published since January 2006. Articles included for the review were assessed for their methodological quality and results. The literature search resulted in 24 analyses, 14 of them using a fixed-threshold approach and 10 using a variable-threshold approach. On average, 70% of the criteria for methodological quality were fulfilled, but almost half of the analyses did not include medication adherence in the base case. The results of variable-threshold strategies were more homogeneous and showed more favorable incremental cost-effectiveness ratios compared with those based on a fixed threshold with bone mineral density. For analyses with fixed thresholds, incremental cost-effectiveness ratios varied from €80,000 per quality-adjusted life-year in women aged 55 years to cost saving in women aged 80 years. For analyses with variable thresholds, the range was €47,000 to cost savings. Risk assessment using variable thresholds appears to be more cost-effective than selecting high-risk individuals by fixed thresholds. Although the overall quality of the studies was fairly good, future economic analyses should further improve their methods, particularly in terms of including more fracture types, incorporating medication adherence, and including or discussing unrelated costs during added life-years. Copyright © 2012 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  12. Lower-upper-threshold correlation for underwater range-gated imaging self-adaptive enhancement.

    PubMed

    Sun, Liang; Wang, Xinwei; Liu, Xiaoquan; Ren, Pengdao; Lei, Pingshun; He, Jun; Fan, Songtao; Zhou, Yan; Liu, Yuliang

    2016-10-10

    In underwater range-gated imaging (URGI), enhancement of low-brightness and low-contrast images is critical for human observation. Traditional histogram equalization over-enhances images, with the result that details are lost. To compress over-enhancement, a lower-upper-threshold correlation method is proposed for underwater range-gated imaging self-adaptive enhancement based on double-plateau histogram equalization. The lower threshold determines image details and compresses over-enhancement. It is correlated with the upper threshold. First, the upper threshold is updated by searching for the local maximum in real time, and then the lower threshold is calculated from the upper threshold and the number of nonzero units selected from a filtered histogram. With this method, the backgrounds of underwater images are constrained while details are enhanced. Finally, proof-of-concept experiments are performed. Peak signal-to-noise ratio, variance, contrast, and human visual properties are used to evaluate the objective quality of the global images and of regions of interest. The evaluation results demonstrate that the proposed method adaptively selects the proper upper and lower thresholds under different conditions. The proposed method contributes to URGI with effective image enhancement for human eyes.
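
    A sketch of double-plateau histogram equalization with a lower threshold derived from the upper one, following the structure described above. The exact correlation formula and the real-time local-maximum tracking used by the authors are not reproduced; the stand-ins below are labeled in the comments.

```python
import numpy as np

def double_plateau_equalize(img, smooth=5):
    """Double-plateau histogram equalization of a uint8 grayscale image.

    The upper plateau is taken from the smoothed histogram's maximum (a
    stand-in for the tracked local maximum) and the lower plateau is derived
    from the upper plateau and the number of nonzero bins; the specific
    correlation formula below is illustrative, not the authors'.
    """
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    kernel = np.ones(smooth) / smooth
    hist_f = np.convolve(hist, kernel, mode="same")       # filtered histogram
    upper = hist_f.max()                                   # stand-in upper plateau
    n_nonzero = np.count_nonzero(hist_f)
    lower = upper * n_nonzero / 256.0 / 10.0               # illustrative correlation with upper
    clipped = np.clip(hist_f, lower, upper)
    clipped[hist_f == 0] = 0                               # keep empty bins empty
    cdf = np.cumsum(clipped)
    lut = np.round(255 * cdf / cdf[-1]).astype(np.uint8)
    return lut[img]
```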

  13. Compensation for red-green contrast loss in anomalous trichromats

    PubMed Central

    Boehm, A. E.; MacLeod, D. I. A.; Bosten, J. M.

    2014-01-01

    For anomalous trichromats, threshold contrasts for color differences captured by the L and M cones and their anomalous analogs are much higher than for normal trichromats. The greater spectral overlap of the cone sensitivities reduces chromatic contrast both at and above threshold. But above threshold, adaptively nonlinear processing might compensate for the chromatically impoverished photoreceptor inputs. Ratios of sensitivity for threshold variations and for color appearance along the two cardinal axes of MacLeod-Boynton chromaticity space were calculated for three groups: normals (N = 15), deuteranomals (N = 9), and protanomals (N = 5). Using a four-alternative forced choice (4AFC) task, threshold sensitivity was measured in four color-directions along the two cardinal axes. For the same participants, we reconstructed perceptual color spaces for the positions of 25 hues using multidimensional scaling (MDS). From the reconstructed color spaces we extracted “color difference ratios,” defined as ratios for the size of perceived color differences along the L/(L + M) axis relative to those along the S/(L + M) axis, analogous to “sensitivity ratios” extracted from the 4AFC task. In the 4AFC task, sensitivity ratios were 38% of normal for deuteranomals and 19% of normal for protanomals. Yet, in the MDS results, color difference ratios were 86% of normal for deuteranomals and 67% of normal for protanomals. Thus, the contraction along the L/(L + M) axis shown in the perceptual color spaces of anomalous trichromats is far smaller than predicted by their reduced sensitivity, suggesting that an adaptive adjustment of postreceptoral gain may magnify the cone signals of anomalous trichromats to exploit the range of available postreceptoral neural signals. PMID:25413625

  14. Image segmentation for uranium isotopic analysis by SIMS: Combined adaptive thresholding and marker controlled watershed approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Willingham, David G.; Naes, Benjamin E.; Heasler, Patrick G.

    A novel approach to particle identification and particle isotope ratio determination has been developed for nuclear safeguard applications. This particle search approach combines an adaptive thresholding algorithm and marker-controlled watershed segmentation (MCWS) transform, which improves the secondary ion mass spectrometry (SIMS) isotopic analysis of uranium containing particle populations for nuclear safeguards applications. The Niblack assisted MCWS approach (a.k.a. SEEKER) developed for this work has improved the identification of isotopically unique uranium particles under conditions that have historically presented significant challenges for SIMS image data processing techniques. Particles obtained from five NIST uranium certified reference materials (CRM U129A, U015, U150, U500 and U850) were successfully identified in regions of SIMS image data 1) where a high variability in image intensity existed, 2) where particles were touching or were in close proximity to one another and/or 3) where the magnitude of ion signal for a given region was count limited. Analysis of the isotopic distributions of uranium containing particles identified by SEEKER showed four distinct, accurately identified 235U enrichment distributions, corresponding to the NIST certified 235U/238U isotope ratios for CRM U129A/U015 (not statistically differentiated), U150, U500 and U850. Additionally, comparison of the minor uranium isotope (234U, 235U and 236U) atom percent values verified that, even in the absence of high precision isotope ratio measurements, SEEKER could be used to segment isotopically unique uranium particles from SIMS image data. Although demonstrated specifically for SIMS analysis of uranium containing particles for nuclear safeguards, SEEKER has application in addressing a broad set of image processing challenges.
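
    A rough, hedged sketch of the Niblack-assisted marker-controlled watershed idea (not the SEEKER implementation): Niblack's local threshold produces a particle mask, eroded connected components provide the markers, and a watershed on the inverted intensity separates touching particles. Window size and k are illustrative; SciPy and scikit-image are assumed to be available.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed

def niblack_watershed_segmentation(img, win=25, k=-0.2):
    """Segment bright particles with a Niblack threshold plus marker-controlled watershed.

    Niblack's rule thresholds each pixel at local_mean + k * local_std over a
    sliding window, so it tolerates large intensity variability; an eroded,
    labeled mask seeds the watershed, which splits touching particles.
    Parameters are illustrative, not taken from the paper.
    """
    img = img.astype(float)
    mean = ndi.uniform_filter(img, size=win)
    sq_mean = ndi.uniform_filter(img ** 2, size=win)
    std = np.sqrt(np.clip(sq_mean - mean ** 2, 0, None))
    mask = img > mean + k * std                        # Niblack adaptive threshold
    markers, _ = ndi.label(ndi.binary_erosion(mask, iterations=3))
    labels = watershed(-img, markers=markers, mask=mask)
    return labels
```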

  15. Vibratory Adaptation of Cutaneous Mechanoreceptive Afferents

    PubMed Central

    Bensmaïa, S. J.; Leung, Y. Y.; Hsiao, S. S.; Johnson, K. O.

    2007-01-01

    The objective of this study was to investigate the effects of extended suprathreshold vibratory stimulation on the sensitivity of slowly adapting type 1 (SA1), rapidly adapting (RA), and Pacinian (PC) afferents. To that end, an algorithm was developed to track afferent absolute (I0) and entrainment (I1) thresholds as they change over time. We recorded afferent responses to periliminal vibratory test stimuli, which were interleaved with intense vibratory conditioning stimuli during the adaptation period of each experimental run. From these measurements, the algorithm allowed us to infer changes in the afferents’ sensitivity. We investigated the stimulus parameters that affect adaptation by assessing the degree to which adaptation depends on the amplitude and frequency of the adapting stimulus. For all three afferent types, I0 and I1 increased with increasing adaptation frequency and amplitude. The degree of adaptation seems to be independent of the firing rate evoked in the afferent by the conditioning stimulus. In the analysis, we distinguished between additive adaptation (in which I0 and I1 shift equally) and multiplicative effects (in which the ratio I1/I0 remains constant). RA threshold shifts are almost perfectly additive. SA1 threshold shifts are close to additive and far from multiplicative (I1 threshold shifts are twice the I0 shifts). PC shifts are more difficult to classify. We used an integrate-and-fire model to study the possible neural mechanisms. A change in transducer gain predicts a multiplicative change in I0 and I1 and is thus ruled out as a mechanism underlying SA1 and RA adaptation. A change in the resting action potential threshold predicts equal, additive change in I0 and I1 and thus accounts well for RA adaptation. A change in the degree of refractoriness during the relative refractory period predicts an additional change in I1 such as that observed for SA1 fibers. We infer that adaptation is caused by an increase in spiking thresholds produced by ion flow through transducer channels in the receptor membrane. In a companion paper, we describe the time-course of vibratory adaptation and recovery for SA1, RA, and PC fibers. PMID:16014802

  16. Outlier detection for particle image velocimetry data using a locally estimated noise variance

    NASA Astrophysics Data System (ADS)

    Lee, Yong; Yang, Hua; Yin, ZhouPing

    2017-03-01

    This work describes an adaptive spatial variable threshold outlier detection algorithm for raw gridded particle image velocimetry data using a locally estimated noise variance. This method is an iterative procedure, and each iteration is composed of a reference vector field reconstruction step and an outlier detection step. We construct the reference vector field using a weighted adaptive smoothing method (Garcia 2010 Comput. Stat. Data Anal. 54 1167-78), and the weights are determined in the outlier detection step using a modified outlier detector (Ma et al 2014 IEEE Trans. Image Process. 23 1706-21). A hard decision on the final weights of the iteration can produce outlier labels of the field. The technical contribution is that the spatial variable threshold motivation is embedded in the modified outlier detector with a locally estimated noise variance in an iterative framework for the first time. It turns out that a spatial variable threshold is preferable to a single spatial constant threshold in complicated flows such as vortex flows or turbulent flows. Synthetic cellular vortical flows with simulated scattered or clustered outliers are adopted to evaluate the performance of our proposed method in comparison with popular validation approaches. This method also turns out to be beneficial in a real PIV measurement of turbulent flow. The experimental results demonstrated that the proposed method yields competitive performance in terms of outlier under-detection count and over-detection count. In addition, the outlier detection method is computationally efficient and adaptive, requires no user-defined parameters, and corresponding implementations are also provided in supplementary materials.
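
    A simplified sketch of the spatially variable-threshold structure described above: a smooth reference field is rebuilt each pass, the noise level is estimated locally from the residuals, and vectors whose residual exceeds k times that local level are flagged and repaired before the next pass. The median-filter smoother stands in for the weighted smoother of Garcia (2010), so only the skeleton of the method is shown; k and the window size are illustrative.

```python
import numpy as np
from scipy import ndimage as ndi

def detect_piv_outliers(u, n_iter=3, win=5, k=2.0, eps=1e-3):
    """Flag outliers in one gridded velocity component with a local threshold.

    Each pass: build a reference field, estimate a *local* noise scale from
    the residuals, flag vectors exceeding k times that local scale, and
    replace them with the reference before the next pass.
    """
    u = np.asarray(u, dtype=float)
    work = u.copy()
    outliers = np.zeros(u.shape, dtype=bool)
    for _ in range(n_iter):
        ref = ndi.median_filter(work, size=win)              # reference field
        resid = np.abs(work - ref)
        local_noise = ndi.median_filter(resid, size=win)     # locally estimated noise level
        outliers = resid > k * (local_noise + eps)            # spatially variable threshold
        work = np.where(outliers, ref, u)                     # repair for next iteration
    return outliers
```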

  17. Thresholds for conservation and management: structured decision making as a conceptual framework

    USGS Publications Warehouse

    Nichols, James D.; Eaton, Mitchell J.; Martin, Julien; Edited by Guntenspergen, Glenn R.

    2014-01-01

    Ecological thresholds are values of system state variables at which small changes bring about substantial changes in system dynamics. They are frequently incorporated into ecological models used to project system responses to management actions. Utility thresholds are components of management objectives and are values of state or performance variables at which small changes yield substantial changes in the value of the management outcome. Decision thresholds are values of system state variables at which small changes prompt changes in management actions in order to reach specified management objectives. Decision thresholds are derived from the other components of the decision process. We advocate a structured decision making (SDM) approach within which the following components are identified: objectives (possibly including utility thresholds), potential actions, models (possibly including ecological thresholds), monitoring program, and a solution algorithm (which produces decision thresholds). Adaptive resource management (ARM) is described as a special case of SDM developed for recurrent decision problems that are characterized by uncertainty. We believe that SDM, in general, and ARM, in particular, provide good approaches to conservation and management. Use of SDM and ARM also clarifies the distinct roles of ecological thresholds, utility thresholds, and decision thresholds in informed decision processes.

  18. Stochastic analysis of epidemics on adaptive time varying networks

    NASA Astrophysics Data System (ADS)

    Kotnis, Bhushan; Kuri, Joy

    2013-06-01

    Many studies investigating the effect of human social connectivity structures (networks) and human behavioral adaptations on the spread of infectious diseases have assumed either a static connectivity structure or a network which adapts itself in response to the epidemic (adaptive networks). However, human social connections are inherently dynamic or time varying. Furthermore, the spread of many infectious diseases occurs on a time scale comparable to the time scale of the evolving network structure. Here we aim to quantify the effect of human behavioral adaptations on the spread of asymptomatic infectious diseases on time varying networks. We perform a full stochastic analysis using a continuous time Markov chain approach for calculating the outbreak probability, mean epidemic duration, epidemic reemergence probability, etc. Additionally, we use mean-field theory for calculating epidemic thresholds. Theoretical predictions are verified using extensive simulations. Our studies have uncovered the existence of an “adaptive threshold,” i.e., when the ratio of susceptibility (or infectivity) rate to recovery rate is below the threshold value, adaptive behavior can prevent the epidemic. However, if it is above the threshold, no amount of behavioral adaptations can prevent the epidemic. Our analyses suggest that the interaction patterns of the infected population play a major role in sustaining the epidemic. Our results have implications for epidemic containment policies, as awareness campaigns and human behavioral responses can be effective only if the interaction levels of the infected populace are kept in check.

  19. Positive-negative corresponding normalized ghost imaging based on an adaptive threshold

    NASA Astrophysics Data System (ADS)

    Li, G. L.; Zhao, Y.; Yang, Z. H.; Liu, X.

    2016-11-01

    Ghost imaging (GI) technology has attracted increasing attention as a new imaging technique in recent years. However, the signal-to-noise ratio (SNR) of GI with pseudo-thermal light needs to be improved before it meets engineering application demands. We therefore propose a new scheme called positive-negative correspondence normalized GI based on an adaptive threshold (PCNGI-AT) to achieve good performance with a smaller amount of data. In this work, we exploit the advantages of both normalized GI (NGI) and positive-negative correspondence GI (P-NCGI). The correctness and feasibility of the scheme were proved in theory before we designed an adaptive threshold selection method, in which the parameter of the object signal selection conditions is replaced by the normalizing value. The simulation and experimental results reveal that the SNR of the proposed scheme is better than that of time-correspondence differential GI (TCDGI), while avoiding calculation of the correlation matrix and reducing the amount of data used. The proposed method will make GI far more practical in engineering applications.

  20. Detection of testosterone administration based on the carbon isotope ratio profiling of endogenous steroids: international reference populations of professional soccer players.

    PubMed

    Strahm, E; Emery, C; Saugy, M; Dvorak, J; Saudan, C

    2009-12-01

    The determination of the carbon isotope ratio in androgen metabolites has been previously shown to be a reliable, direct method to detect testosterone misuse in the context of antidoping testing. Here, the variability in the 13C/12C ratios in urinary steroids in a widely heterogeneous cohort of professional soccer players residing in different countries (Argentina, Italy, Japan, South Africa, Switzerland and Uganda) is examined. Carbon isotope ratios of selected androgens in urine specimens were determined using gas chromatography/combustion/isotope ratio mass spectrometry (GC-C-IRMS). Urinary steroids in Italian and Swiss populations were found to be enriched in 13C relative to other groups, reflecting higher consumption of C3 plants in these two countries. Importantly, detection criteria based on the difference in the carbon isotope ratio of androsterone and pregnanediol for each population were found to be well below the established threshold value for positive cases. The results obtained with the tested diet groups highlight the importance of adapting the criteria if one wishes to increase the sensitivity of exogenous testosterone detection. In addition, confirmatory tests might be rendered more efficient by combining isotope ratio mass spectrometry with refined interpretation criteria for positivity and subject-based profiling of steroids.

  1. Study of communications data compression methods

    NASA Technical Reports Server (NTRS)

    Jones, H. W.

    1978-01-01

    A simple monochrome conditional replenishment system was extended to higher compression and to higher motion levels, by incorporating spatially adaptive quantizers and field repeating. Conditional replenishment combines intraframe and interframe compression, and both areas are investigated. The gain of conditional replenishment depends on the fraction of the image changing, since only changed parts of the image need to be transmitted. If the transmission rate is set so that only one fourth of the image can be transmitted in each field, greater change fractions will overload the system. A computer simulation was prepared which incorporated (1) field repeat of changes, (2) a variable change threshold, (3) frame repeat for high change, and (4) two mode, variable rate Hadamard intraframe quantizers. The field repeat gives 2:1 compression in moving areas without noticeable degradation. Variable change threshold allows some flexibility in dealing with varying change rates, but the threshold variation must be limited for acceptable performance.
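
    A toy conditional-replenishment step illustrating the variable change threshold: pixels whose frame difference exceeds the threshold are marked for transmission, and the threshold is nudged so that the changed fraction stays near the one-quarter channel budget mentioned in the abstract. Step size and bounds are illustrative.

```python
import numpy as np

def conditional_replenishment(prev, curr, thresh, budget_frac=0.25,
                              t_step=2, t_min=4, t_max=40):
    """One conditional-replenishment step with a variable change threshold.

    Pixels whose absolute frame difference exceeds `thresh` are marked as
    changed (to be transmitted); the threshold is then raised or lowered so
    the changed fraction stays near the channel budget.
    """
    diff = np.abs(curr.astype(int) - prev.astype(int))
    changed = diff > thresh
    frac = changed.mean()
    if frac > budget_frac:            # too much change for the channel: raise the threshold
        thresh = min(t_max, thresh + t_step)
    elif frac < 0.5 * budget_frac:    # channel under-used: lower the threshold
        thresh = max(t_min, thresh - t_step)
    return changed, thresh
```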

  2. Use of Biotechnological Devices in the Quantification of Psychophysiological Workload of Professional Chess Players.

    PubMed

    Fuentes, Juan P; Villafaina, Santos; Collado-Mateo, Daniel; de la Vega, Ricardo; Gusi, Narcis; Clemente-Suárez, Vicente Javier

    2018-01-19

    Psychophysiological requirements of chess players are poorly understood, and periodization of training is often made without any empirical basis. For this reason, the aim of the present study was to investigate the psychophysiological response and quantify the player's internal load during and after playing a chess game. The participant was an elite 33-year-old male chess player ranked among the 300 best chess players in the world. Thus, cortical arousal by critical flicker fusion threshold, electroencephalogram by the theta Fz/alpha Pz ratio and autonomic modulation by heart rate variability were analyzed. Data revealed that cortical arousal by critical flicker fusion threshold and the theta Fz/alpha Pz ratio increased and heart rate variability decreased during the chess game. All these changes indicated that internal load increased during the chess game. In addition, pre-activation was detected in the pre-game measure, suggesting that the prefrontal cortex might be activated in preparation. For these reasons, electroencephalogram, critical flicker fusion threshold and heart rate variability analysis may be highly applicable tools to control and monitor workload in chess players.

  3. Auditory Sensitivity and Masking Profiles for the Sea Otter (Enhydra lutris).

    PubMed

    Ghoul, Asila; Reichmuth, Colleen

    2016-01-01

    Sea otters are threatened marine mammals that may be negatively impacted by human-generated coastal noise, yet information about sound reception in this species is surprisingly scarce. We investigated amphibious hearing in sea otters by obtaining the first measurements of absolute sensitivity and critical masking ratios. Auditory thresholds were measured in air and underwater from 0.125 to 40 kHz. Critical ratios derived from aerial masked thresholds from 0.25 to 22.6 kHz were also obtained. These data indicate that although sea otters can detect underwater sounds, their hearing appears to be primarily air adapted and not specialized for detecting signals in background noise.

  4. Structured decision making as a conceptual framework to identify thresholds for conservation and management

    USGS Publications Warehouse

    Martin, J.; Runge, M.C.; Nichols, J.D.; Lubow, B.C.; Kendall, W.L.

    2009-01-01

    Thresholds and their relevance to conservation have become a major topic of discussion in the ecological literature. Unfortunately, in many cases the lack of a clear conceptual framework for thinking about thresholds may have led to confusion in attempts to apply the concept of thresholds to conservation decisions. Here, we advocate a framework for thinking about thresholds in terms of a structured decision making process. The purpose of this framework is to promote a logical and transparent process for making informed decisions for conservation. Specification of such a framework leads naturally to consideration of definitions and roles of different kinds of thresholds in the process. We distinguish among three categories of thresholds. Ecological thresholds are values of system state variables at which small changes bring about substantial changes in system dynamics. Utility thresholds are components of management objectives (determined by human values) and are values of state or performance variables at which small changes yield substantial changes in the value of the management outcome. Decision thresholds are values of system state variables at which small changes prompt changes in management actions in order to reach specified management objectives. The approach that we present focuses directly on the objectives of management, with an aim to providing decisions that are optimal with respect to those objectives. This approach clearly distinguishes the components of the decision process that are inherently subjective (management objectives, potential management actions) from those that are more objective (system models, estimates of system state). Optimization based on these components then leads to decision matrices specifying optimal actions to be taken at various values of system state variables. Values of state variables separating different actions in such matrices are viewed as decision thresholds. Utility thresholds are included in the objectives component, and ecological thresholds may be embedded in models projecting consequences of management actions. Decision thresholds are determined by the above-listed components of a structured decision process. These components may themselves vary over time, inducing variation in the decision thresholds inherited from them. These dynamic decision thresholds can then be determined using adaptive management. We provide numerical examples (that are based on patch occupancy models) of structured decision processes that include all three kinds of thresholds. © 2009 by the Ecological Society of America.

  5. Landscape genomics of Sphaeralcea ambigua in the Mojave Desert: a multivariate, spatially-explicit approach to guide ecological restoration

    USGS Publications Warehouse

    Shryock, Daniel F.; Havrilla, Caroline A.; DeFalco, Lesley; Esque, Todd C.; Custer, Nathan; Wood, Troy E.

    2015-01-01

    Local adaptation influences plant species’ responses to climate change and their performance in ecological restoration. Fine-scale physiological or phenological adaptations that direct demographic processes may drive intraspecific variability when baseline environmental conditions change. Landscape genomics characterizes adaptive differentiation by identifying environmental drivers of adaptive genetic variability and mapping the associated landscape patterns. We applied such an approach to Sphaeralcea ambigua, an important restoration plant in the arid southwestern United States, by analyzing variation at 153 amplified fragment length polymorphism loci in the context of environmental gradients separating 47 Mojave Desert populations. We identified 37 potentially adaptive loci through a combination of genome scan approaches. We then used a generalized dissimilarity model (GDM) to relate variability in potentially adaptive loci with spatial gradients in temperature, precipitation, and topography. We identified non-linear thresholds in loci frequencies driven by summer maximum temperature and water stress, along with continuous variation corresponding to temperature seasonality. Two GDM-based approaches for mapping predicted patterns of local adaptation are compared. Additionally, we assess uncertainty in spatial interpolations through a novel spatial bootstrapping approach. Our study presents robust, accessible methods for deriving spatially-explicit models of adaptive genetic variability in non-model species that will inform climate change modelling and ecological restoration.

  6. Speech perception at positive signal-to-noise ratios using adaptive adjustment of time compression.

    PubMed

    Schlueter, Anne; Brand, Thomas; Lemke, Ulrike; Nitzschner, Stefan; Kollmeier, Birger; Holube, Inga

    2015-11-01

    Positive signal-to-noise ratios (SNRs) characterize listening situations most relevant for hearing-impaired listeners in daily life and should therefore be considered when evaluating hearing aid algorithms. For this, a speech-in-noise test was developed and evaluated, in which the background noise is presented at fixed positive SNRs and the speech rate (i.e., the time compression of the speech material) is adaptively adjusted. In total, 29 younger and 12 older normal-hearing, as well as 24 older hearing-impaired listeners took part in repeated measurements. Younger normal-hearing and older hearing-impaired listeners conducted one of two adaptive methods which differed in adaptive procedure and step size. Analysis of the measurements with regard to list length and estimation strategy for thresholds resulted in a practical method measuring the time compression for 50% recognition. This method uses time-compression adjustment and step sizes according to Versfeld and Dreschler [(2002). J. Acoust. Soc. Am. 111, 401-408], with sentence scoring, lists of 30 sentences, and a maximum likelihood method for threshold estimation. Evaluation of the procedure showed that older participants obtained higher test-retest reliability compared to younger participants. Depending on the group of listeners, one or two lists are required for training prior to data collection.

  7. When do Indians feel hot? Internet searches indicate seasonality suppresses adaptation to heat

    NASA Astrophysics Data System (ADS)

    Singh, Tanya; Siderius, Christian; Van der Velde, Ype

    2018-05-01

    In a warming world an increasing number of people are being exposed to heat, making a comfortable thermal environment an important need. This study explores the potential of using Regional Internet Search Frequencies (RISF) for air conditioning devices as an indicator for thermal discomfort (i.e. dissatisfaction with the thermal environment) with the aim to quantify the adaptation potential of individuals living across different climate zones and at the high end of the temperature range, in India, where access to health data is limited. We related RISF for the years 2011–2015 to daily daytime outdoor temperature in 17 states and determined at which temperature RISF for air conditioning starts to peak, i.e. crosses a ‘heat threshold’, in each state. Using the spatial variation in heat thresholds, we explored whether people continuously exposed to higher temperatures show a lower response to heat extremes through adaptation (e.g. physiological, behavioural or psychological). State-level heat thresholds ranged from 25.9 °C in Madhya Pradesh to 31.0 °C in Orissa. Local adaptation was found to occur at state level: the higher the average temperature in a state, the higher the heat threshold; and the higher the intra-annual temperature range (warmest minus coldest month) the lower the heat threshold. These results indicate there is potential within India to adapt to warmer temperatures, but that a large intra-annual temperature variability attenuates this potential to adapt to extreme heat. This winter ‘reset’ mechanism should be taken into account when assessing the impact of global warming, with changes in minimum temperatures being an important factor in addition to the change in maximum temperatures itself. Our findings contribute to a better understanding of local heat thresholds and people’s adaptive capacity, which can support the design of local thermal comfort standards and early heat warning systems.

  8. Variable threshold algorithm for division of labor analyzed as a dynamical system.

    PubMed

    Castillo-Cagigal, Manuel; Matallanas, Eduardo; Navarro, Iñaki; Caamaño-Martín, Estefanía; Monasterio-Huelin, Félix; Gutiérrez, Álvaro

    2014-12-01

    Division of labor is a widely studied aspect of colony behavior of social insects. Division of labor models indicate how individuals distribute themselves in order to perform different tasks simultaneously. However, models that study division of labor from a dynamical system point of view cannot be found in the literature. In this paper, we define a division of labor model as a discrete-time dynamical system, in order to study the equilibrium points and their properties related to convergence and stability. By making use of this analytical model, an adaptive algorithm based on division of labor can be designed to satisfy dynamic criteria. In this way, we have designed and tested an algorithm that varies the response thresholds in order to modify the dynamic behavior of the system. This behavior modification allows the system to adapt to specific environmental and collective situations, making the algorithm a good candidate for distributed control applications. The variable threshold algorithm is based on specialization mechanisms. It is able to achieve an asymptotically stable behavior of the system in different environments and independently of the number of individuals. The algorithm has been successfully tested under several initial conditions and number of individuals.
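
    The sketch below is the standard Bonabeau-style response-threshold update with threshold learning, shown only to make the "variable threshold" mechanism concrete; it is not the authors' exact discrete-time dynamical system, and all constants are illustrative.

```python
import random

def response_threshold_step(theta, stimulus, xi=0.1, phi=0.05, n=2,
                            theta_min=0.01, theta_max=1.0):
    """One update of a variable response-threshold agent.

    The agent engages the task with probability s^n / (s^n + theta^n); while
    it works its threshold falls (specialization), while it idles the
    threshold rises, so the colony's task allocation adapts over time.
    """
    p_engage = stimulus ** n / (stimulus ** n + theta ** n)
    engaged = random.random() < p_engage
    if engaged:
        theta = max(theta_min, theta - xi)    # learning: more likely to re-engage
    else:
        theta = min(theta_max, theta + phi)   # forgetting: less likely to engage
    return theta, engaged
```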

  9. Data Transmission Signal Design and Analysis

    NASA Technical Reports Server (NTRS)

    Moore, J. D.

    1972-01-01

    The error performances of several digital signaling methods are determined as a function of a specified signal-to-noise ratio. Results are obtained for Gaussian noise and impulse noise. Performance of a receiver for differentially encoded biphase signaling is obtained by extending the results of differential phase shift keying. The analysis presented obtains a closed-form answer through the use of some simplifying assumptions. The results give insight into the analysis problem; however, the actual error performance may show a degradation because of the assumptions made in the analysis. Bipolar signaling decision-threshold selection is investigated. The optimum threshold depends on the signal-to-noise ratio and requires the use of an adaptive receiver.

  10. Detection of testosterone administration based on the carbon isotope ratio profiling of endogenous steroids: international reference populations of professional soccer players

    PubMed Central

    Strahm, E; Emery, C; Saugy, M; Dvorak, J; Saudan, C

    2009-01-01

    Background and objectives: The determination of the carbon isotope ratio in androgen metabolites has been previously shown to be a reliable, direct method to detect testosterone misuse in the context of antidoping testing. Here, the variability in the 13C/12C ratios in urinary steroids in a widely heterogeneous cohort of professional soccer players residing in different countries (Argentina, Italy, Japan, South Africa, Switzerland and Uganda) is examined. Methods: Carbon isotope ratios of selected androgens in urine specimens were determined using gas chromatography/combustion/isotope ratio mass spectrometry (GC-C-IRMS). Results: Urinary steroids in Italian and Swiss populations were found to be enriched in 13C relative to other groups, reflecting higher consumption of C3 plants in these two countries. Importantly, detection criteria based on the difference in the carbon isotope ratio of androsterone and pregnanediol for each population were found to be well below the established threshold value for positive cases. Conclusions: The results obtained with the tested diet groups highlight the importance of adapting the criteria if one wishes to increase the sensitivity of exogenous testosterone detection. In addition, confirmatory tests might be rendered more efficient by combining isotope ratio mass spectrometry with refined interpretation criteria for positivity and subject-based profiling of steroids. PMID:19549614

  11. Policy Tree Optimization for Adaptive Management of Water Resources Systems

    NASA Astrophysics Data System (ADS)

    Herman, J. D.; Giuliani, M.

    2016-12-01

    Water resources systems must cope with irreducible uncertainty in supply and demand, requiring policy alternatives capable of adapting to a range of possible future scenarios. Recent studies have developed adaptive policies based on "signposts" or "tipping points", which are threshold values of indicator variables that signal a change in policy. However, there remains a need for a general method to optimize the choice of indicators and their threshold values in a way that is easily interpretable for decision makers. Here we propose a conceptual framework and computational algorithm to design adaptive policies as a tree structure (i.e., a hierarchical set of logical rules) using a simulation-optimization approach based on genetic programming. We demonstrate the approach using Folsom Reservoir, California as a case study, in which operating policies must balance the risk of both floods and droughts. Given a set of feature variables, such as reservoir level, inflow observations and forecasts, and time of year, the resulting policy defines the conditions under which flood control and water supply hedging operations should be triggered. Importantly, the tree-based rule sets are easy to interpret for decision making, and can be compared to historical operating policies to understand the adaptations needed under possible climate change scenarios. Several remaining challenges are discussed, including the empirical convergence properties of the method, and extensions to irreversible decisions such as infrastructure. Policy tree optimization, and corresponding open-source software, provide a generalizable, interpretable approach to designing adaptive policies under uncertainty for water resources systems.
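
    A minimal illustration of the tree-structured policy form described above: each internal node tests an indicator variable against a threshold and each leaf names an action. Indicator names, thresholds, and actions in the example are hypothetical, and the optimization step (the genetic-programming part) is not shown.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PolicyNode:
    """One node of a policy tree: either a leaf action or an indicator/threshold split."""
    action: Optional[str] = None             # set for leaves
    indicator: Optional[str] = None          # e.g. "storage" or "inflow_forecast" (hypothetical)
    threshold: Optional[float] = None
    low: Optional["PolicyNode"] = None       # branch taken when indicator <= threshold
    high: Optional["PolicyNode"] = None      # branch taken when indicator > threshold

    def decide(self, state: dict) -> str:
        if self.action is not None:
            return self.action
        branch = self.low if state[self.indicator] <= self.threshold else self.high
        return branch.decide(state)

# hypothetical policy: hedge when storage is low, pre-release ahead of large forecast inflows
policy = PolicyNode(indicator="storage", threshold=0.4,
                    low=PolicyNode(action="hedge_supply"),
                    high=PolicyNode(indicator="inflow_forecast", threshold=0.8,
                                    low=PolicyNode(action="normal_operation"),
                                    high=PolicyNode(action="flood_prerelease")))
print(policy.decide({"storage": 0.55, "inflow_forecast": 0.9}))   # -> flood_prerelease
```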

  12. Adaptive thresholding algorithm based on SAR images and wind data to segment oil spills along the northwest coast of the Iberian Peninsula.

    PubMed

    Mera, David; Cotos, José M; Varela-Pet, José; Garcia-Pineda, Oscar

    2012-10-01

    Satellite Synthetic Aperture Radar (SAR) has been established as a useful tool for detecting hydrocarbon spillage on the ocean's surface. Several surveillance applications have been developed based on this technology. Environmental variables such as wind speed should be taken into account for better SAR image segmentation. This paper presents an adaptive thresholding algorithm for detecting oil spills based on SAR data and a wind field estimation as well as its implementation as a part of a functional prototype. The algorithm was adapted to an important shipping route off the Galician coast (northwest Iberian Peninsula) and was developed on the basis of confirmed oil spills. Image testing revealed 99.93% pixel labelling accuracy. By taking advantage of multi-core processor architecture, the prototype was optimized to get a nearly 30% improvement in processing time. Copyright © 2012 Elsevier Ltd. All rights reserved.

  13. Self-adaptive demodulation for polarization extinction ratio in distributed polarization coupling.

    PubMed

    Zhang, Hongxia; Ren, Yaguang; Liu, Tiegen; Jia, Dagong; Zhang, Yimo

    2013-06-20

    A self-adaptive method for distributed polarization extinction ratio (PER) demodulation is demonstrated. It is characterized by dynamic PER threshold coupling intensity (TCI) and nonuniform PER iteration step length (ISL). Based on the preset PER calculation accuracy and original distribution coupling intensity, TCI and ISL can be made self-adaptive to determine contributing coupling points inside the polarizing devices. Distributed PER is calculated by accumulating those coupling points automatically and selectively. Two different kinds of polarization-maintaining fibers are tested, and PERs are obtained after merely 3-5 iterations using the proposed method. Comparison experiments with Thorlabs commercial instrument are also conducted, and results show high consistency. In addition, the optimum preset PER calculation accuracy of 0.05 dB is obtained through many repeated experiments.

  14. A general theoretical framework for interpreting patient-reported outcomes estimated from ordinally scaled item responses.

    PubMed

    Massof, Robert W

    2014-10-01

    A simple theoretical framework explains patient responses to items in rating scale questionnaires. Fixed latent variables position each patient and each item on the same linear scale. Item responses are governed by a set of fixed category thresholds, one for each ordinal response category. A patient's item responses are magnitude estimates of the difference between the patient variable and the patient's estimate of the item variable, relative to his/her personally defined response category thresholds. Differences between patients in their personal estimates of the item variable and in their personal choices of category thresholds are represented by random variables added to the corresponding fixed variables. Effects of intervention correspond to changes in the patient variable, the patient's response bias, and/or latent item variables for a subset of items. Intervention effects on patients' item responses were simulated by assuming the random variables are normally distributed with a constant scalar covariance matrix. Rasch analysis was used to estimate latent variables from the simulated responses. The simulations demonstrate that changes in the patient variable and changes in response bias produce indistinguishable effects on item responses and manifest as changes only in the estimated patient variable. Changes in a subset of item variables manifest as intervention-specific differential item functioning and as changes in the estimated person variable that equals the average of changes in the item variables. Simulations demonstrate that intervention-specific differential item functioning produces inefficiencies and inaccuracies in computer adaptive testing. © The Author(s) 2013 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.

  15. Impact of Fast Sodium Channel Inactivation on Spike Threshold Dynamics and Synaptic Integration

    PubMed Central

    Platkiewicz, Jonathan; Brette, Romain

    2011-01-01

    Neurons spike when their membrane potential exceeds a threshold value. In central neurons, the spike threshold is not constant but depends on the stimulation. Thus, input-output properties of neurons depend both on the effect of presynaptic spikes on the membrane potential and on the dynamics of the spike threshold. Among the possible mechanisms that may modulate the threshold, one strong candidate is Na channel inactivation, because it specifically impacts spike initiation without affecting the membrane potential. We collected voltage-clamp data from the literature and we found, based on a theoretical criterion, that the properties of Na inactivation could indeed cause substantial threshold variability by itself. By analyzing simple neuron models with fast Na inactivation (one channel subtype), we found that the spike threshold is correlated with the mean membrane potential and negatively correlated with the preceding depolarization slope, consistent with experiments. We then analyzed the impact of threshold dynamics on synaptic integration. The difference between the postsynaptic potential (PSP) and the dynamic threshold in response to a presynaptic spike defines an effective PSP. When the neuron is sufficiently depolarized, this effective PSP is briefer than the PSP. This mechanism regulates the temporal window of synaptic integration in an adaptive way. Finally, we discuss the role of other potential mechanisms. Distal spike initiation, channel noise and Na activation dynamics cannot account for the observed negative slope-threshold relationship, while adaptive conductances (e.g. K+) and Na inactivation can. We conclude that Na inactivation is a metabolically efficient mechanism to control the temporal resolution of synaptic integration. PMID:21573200

  16. A principled approach to setting optimal diagnostic thresholds: where ROC and indifference curves meet.

    PubMed

    Irwin, R John; Irwin, Timothy C

    2011-06-01

    Making clinical decisions on the basis of diagnostic tests is an essential feature of medical practice and the choice of the decision threshold is therefore crucial. A test's optimal diagnostic threshold is the threshold that maximizes expected utility. It is given by the product of the prior odds of a disease and a measure of the importance of the diagnostic test's sensitivity relative to its specificity. Choosing this threshold is the same as choosing the point on the Receiver Operating Characteristic (ROC) curve whose slope equals this product. We contend that a test's likelihood ratio is the canonical decision variable and contrast diagnostic thresholds based on likelihood ratio with two popular rules of thumb for choosing a threshold. The two rules are appealing because they have clear graphical interpretations, but they yield optimal thresholds only in special cases. The optimal rule can be given similar appeal by presenting indifference curves, each of which shows a set of equally good combinations of sensitivity and specificity. The indifference curve is tangent to the ROC curve at the optimal threshold. Whereas ROC curves show what is feasible, indifference curves show what is desirable. Together they show what should be chosen. Copyright © 2010 European Federation of Internal Medicine. Published by Elsevier B.V. All rights reserved.
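
    A hedged restatement, in symbols, of the rule the abstract describes, using the standard expected-utility formulation (notation assumed here: p is the prior probability of disease, the U terms are utilities of the four outcomes; conventions for writing the odds vary between sources):

```latex
% Optimal operating point: choose the cutoff at which the ROC slope equals
\[
  S^{\ast} \;=\; \frac{1-p}{p} \;\cdot\; \frac{U_{TN}-U_{FP}}{U_{TP}-U_{FN}},
\]
% i.e. (odds against disease) times the relative weight of specificity versus sensitivity.
% Equivalently, call a result "diseased" whenever its likelihood ratio exceeds this slope:
\[
  \mathrm{LR}(x) \;=\; \frac{f(x \mid D)}{f(x \mid \bar{D})} \;>\; S^{\ast}.
\]
```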

  17. A Continuous Threshold Expectile Model.

    PubMed

    Zhang, Feipeng; Li, Qunhua

    2017-12-01

    Expectile regression is a useful tool for exploring the relation between the response and the explanatory variables beyond the conditional mean. A continuous threshold expectile regression is developed for modeling data in which the effect of a covariate on the response variable is linear but varies below and above an unknown threshold in a continuous way. The estimators for the threshold and the regression coefficients are obtained using a grid search approach. The asymptotic properties for all the estimators are derived, and the estimator for the threshold is shown to achieve root-n consistency. A weighted CUSUM type test statistic is proposed for the existence of a threshold at a given expectile, and its asymptotic properties are derived under both the null and the local alternative models. This test only requires fitting the model under the null hypothesis in the absence of a threshold, and it is therefore computationally more efficient than likelihood-ratio type tests. Simulation studies show that the proposed estimators and test have desirable finite sample performance in both homoscedastic and heteroscedastic cases. Application of the proposed method to a Dutch growth dataset and a baseball pitcher salary dataset reveals interesting insights. The proposed method is implemented in the R package cthreshER.
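
    A minimal sketch of the estimation strategy described here (asymmetric least squares fitted by iteratively reweighted least squares, with the threshold found by a grid search over a hinge term) is given below; the data and settings are illustrative and are not taken from the cthreshER package.

```python
import numpy as np

def fit_expectile(X, y, tau, n_iter=50):
    """Expectile regression via iteratively reweighted least squares."""
    w = np.full(len(y), 0.5)
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        W = np.diag(w)
        beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
        resid = y - X @ beta
        w = np.where(resid >= 0, tau, 1 - tau)   # asymmetric squared-loss weights
    return beta, np.sum(w * (y - X @ beta) ** 2)

def continuous_threshold_expectile(x, y, tau, grid):
    """Grid search over the threshold t; the slope changes at t but the fit stays continuous."""
    best = None
    for t in grid:
        X = np.column_stack([np.ones_like(x), x, np.maximum(x - t, 0.0)])
        beta, loss = fit_expectile(X, y, tau)
        if best is None or loss < best[2]:
            best = (t, beta, loss)
    return best

# Toy data with a kink at x = 2 and heteroscedastic noise.
rng = np.random.default_rng(1)
x = rng.uniform(0, 5, 400)
y = 1.0 + 0.5 * x + 1.5 * np.maximum(x - 2.0, 0) + rng.normal(0, 0.3 + 0.1 * x)

t_hat, beta_hat, _ = continuous_threshold_expectile(x, y, tau=0.75, grid=np.linspace(0.5, 4.5, 81))
print(f"estimated threshold: {t_hat:.2f}, coefficients: {np.round(beta_hat, 2)}")
```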

  18. Stock price forecasting for companies listed on Tehran stock exchange using multivariate adaptive regression splines model and semi-parametric splines technique

    NASA Astrophysics Data System (ADS)

    Rounaghi, Mohammad Mahdi; Abbaszadeh, Mohammad Reza; Arashi, Mohammad

    2015-11-01

    One of the most important topics of interest to investors is stock price change. Investors with long-term goals are sensitive to stock prices and their changes and react to them. In this study, we used the multivariate adaptive regression splines (MARS) model and a semi-parametric splines technique (smoothing splines, a nonparametric regression method) to predict stock prices. The MARS model is an adaptive nonparametric regression method well suited to high-dimensional problems with many variables. We used 40 variables (30 accounting variables and 10 economic variables) to predict stock prices with both approaches. After investigating the models, 4 accounting variables (book value per share, predicted earnings per share, P/E ratio and risk) were selected as influential for predicting stock prices with the MARS model, while the fitted semi-parametric splines technique selected 4 accounting variables (dividends, net EPS, EPS forecast and P/E ratio) as effective in forecasting stock prices.

  19. Method for Assessing Contrast Performance under Lighting Conditions such as Entering a Tunnel on Sunny Day.

    PubMed

    Huang, Y; Menozzi, M

    2015-04-01

    Clinical assessment of dark adaptation is time consuming and requires specialised instrumentation such as a nyktometer. It is therefore not surprising that dark adaptation is rarely tested in practice. When testing the fitness of a driver, the demands on adaptation in daily driving tasks mostly depart from the settings of a nyktometer. In daily driving, adaptation is stressed by large and fast transitions of light levels, and the period of time relevant to safe driving starts right after a transition and ends several seconds later. In the nyktometer, dark adaptation is tested after completion of the adaptation process. Results of a nyktometer test may therefore deliver little information about adaptation shortly after light transitions. In an attempt to develop a clinical test that combines a short measurement time with test conditions comparable to those in driving, we conducted a preliminary study in which contrast sensitivity thresholds were recorded for light transitions as found in daily driving tasks and for various times after transition onsets. Contrast sensitivity performance was compared to dark adaptation performance as assessed by a nyktometer. Contrast sensitivity thresholds were recorded in 17 participants by means of a twin projection apparatus. The apparatus enabled the projection of an adapting field and of a Landolt ring, both with variable luminance. Five different stepwise transitions in levels of adapting luminance were tested. All transitions occurred from bright to dark. The Landolt ring was flashed 100 or 500 ms after the transition had occurred. Participants were instructed to report the orientation of the Landolt ring. A Rodenstock Nyktometer, Plate 501, was used to record the dark adaptation threshold. Experimental data from the proposed test revealed a noticeably higher contrast detection threshold, measured in dark adaptation, for the stronger transition from 14 000 to 8 cd/m2 than for the weaker transition from 2000 to 8 cd/m2. By raising the dark adaptation luminance level from 8 to 60 cd/m2 in the stronger transition case, the contrast detection threshold was improved by a factor of four. Another main finding was that, for the adaptation process from strong glare stimuli to dark adaptation, a peak deterioration in contrast sensitivity occurred at the light adaptation level of 6000 cd/m2. Comparing the contrast performance assessed by the proposed test with that of the nyktometer test, there was no clear correlation between the two methods. Our suggested method to assess dark adaptation performance proved practical in use and, since the patient does not have to spend a long time attaining complete dark adaptation, it requires only a short measurement time. Our negative experience with the nyktometer was in agreement with reported experience in the literature.

  20. Erosive Augmentation of Solid Propellant Burning Rate: Motor Size Scaling Effect

    NASA Technical Reports Server (NTRS)

    Strand, L. D.; Cohen, Norman S.

    1990-01-01

    Two different independent variable forms, a difference form and a ratio form, were investigated for correlating the normalized magnitude of the measured erosive burning rate augmentation above the threshold in terms of the amount that the driving parameter (mass flux or Reynolds number) exceeds the threshold value for erosive augmentation at the test condition. The latter was calculated from the previously determined threshold correlation. Either variable form provided a correlation for each of the two motor size data bases individually. However, the data showed a motor size effect, supporting the general observation that the magnitude of erosive burning rate augmentation is reduced for larger rocket motors. For both independent variable forms, the required motor size scaling was attained by including the motor port radius raised to a power in the independent parameter. A boundary layer theory analysis confirmed the experimental finding, but showed that the magnitude of the scale effect is itself dependent upon scale, tending to diminish with increasing motor size.

  1. Sex differences in the relationships between parasympathetic activity and pain modulation.

    PubMed

    Nahman-Averbuch, Hadas; Dayan, Lior; Sprecher, Elliot; Hochberg, Uri; Brill, Silviu; Yarnitsky, David; Jacob, Giris

    2016-02-01

    Higher parasympathetic activity is related to lower pain perception in healthy subjects and pain patients. We aimed to examine whether this relationship depends on sex, in healthy subjects. Parasympathetic activity was assessed using time- and frequency-domain heart rate variability indices and the deep breathing ratio. Pain perception parameters, consisting of heat pain thresholds and pain ratings of supra-threshold stimuli, as well as pain modulation parameters of mechanical temporal summation, pain adaptation, offset analgesia and the conditioned pain modulation (CPM) response were examined. Forty healthy subjects were examined (20 men). Women demonstrated higher parasympathetic activity compared to men (high frequency power of 0.55±0.2 and 0.40±0.2, respectively, p=0.02) and less pain reduction in the offset analgesia paradigm (-35.4±29.1 and -55.0±31.2, respectively, p=0.046). Separate slopes model analyses revealed sex differences such that a significant negative correlation was observed between higher rMSSD (the root mean square of successive differences) and higher pain adaptation in men (r=-0.649, p=0.003) but not in women (r=0.382, p=0.106). Similarly, a significant negative correlation was found between higher rMSSD and higher efficiency of the CPM response in men (r=-0.510, p=0.026) but not in women (r=0.406, p=0.085). Sex hormone levels, psychological factors or baseline autonomic activity are possible explanations for these sex differences. Future autonomic interventions intended to change pain modulation should consider sex as an important intervening factor.

  2. Genetic variation in threshold reaction norms for alternative reproductive tactics in male Atlantic salmon, Salmo salar.

    PubMed

    Piché, Jacinthe; Hutchings, Jeffrey A; Blanchard, Wade

    2008-07-07

    Alternative reproductive tactics may be a product of adaptive phenotypic plasticity, such that discontinuous variation in life history depends on both the genotype and the environment. Phenotypes that fall below a genetically determined threshold adopt one tactic, while those exceeding the threshold adopt the alternative tactic. We report evidence of genetic variability in maturation thresholds for male Atlantic salmon (Salmo salar) that mature either as large (more than 1 kg) anadromous males or as small (10-150 g) parr. Using a common-garden experimental protocol, we find that the growth rate at which the sneaker parr phenotype is expressed differs among pure- and mixed-population crosses. Maturation thresholds of hybrids were intermediate to those of pure crosses, consistent with the hypothesis that the life-history switch points are heritable. Our work provides evidence, for a vertebrate, that thresholds for alternative reproductive tactics differ genetically among populations and can be modelled as discontinuous reaction norms for age and size at maturity.

  3. Novel wavelet threshold denoising method in axle press-fit zone ultrasonic detection

    NASA Astrophysics Data System (ADS)

    Peng, Chaoyong; Gao, Xiaorong; Peng, Jianping; Wang, Ai

    2017-02-01

    Axles are important parts of railway locomotives and vehicles. Periodic ultrasonic inspection of axles can effectively detect and monitor axle fatigue cracks. However, in the axle press-fit zone, the complex interface contact condition reduces the signal-to-noise ratio (SNR). Therefore, the probability of false positives and false negatives increases. In this work, a novel wavelet threshold function is created to remove noise and suppress press-fit interface echoes in axle ultrasonic defect detection. The novel wavelet threshold function with two variables is designed to ensure the precision of the optimum searching process. Based on the positive correlation between the correlation coefficient and SNR, and on the experimental observation that the defect echo and the press-fit interface echo have different axle-circumferential correlation characteristics, a discrete optimum search for the two undetermined variables in the novel wavelet threshold function is conducted. The performance of the proposed method is assessed by comparing it with traditional threshold methods using real data. The statistical results for the amplitude and the peak SNR of defect echoes show that the proposed wavelet threshold denoising method not only maintains the amplitude of defect echoes but also achieves a higher peak SNR.
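
    The paper's two-variable threshold function is not reproduced here, but a minimal sketch of conventional wavelet threshold denoising (using PyWavelets with a standard soft threshold and an illustrative universal-threshold rule) shows the pipeline such a function plugs into:

```python
import numpy as np
import pywt

def wavelet_denoise(signal, wavelet="db4", level=4):
    """Decompose, soft-threshold the detail coefficients, and reconstruct."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Noise scale from the finest detail level (median absolute deviation estimator).
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2.0 * np.log(len(signal)))   # universal threshold (illustrative choice)
    denoised = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(denoised, wavelet)[: len(signal)]

# Toy ultrasonic-like trace: a defect echo buried in noise.
t = np.linspace(0, 1, 2048)
echo = np.exp(-((t - 0.6) ** 2) / 1e-4) * np.sin(2 * np.pi * 200 * t)
noisy = echo + 0.3 * np.random.default_rng(0).standard_normal(t.size)

clean = wavelet_denoise(noisy)
print("peak SNR gain (dB):",
      10 * np.log10(np.max(clean**2) / np.mean((clean - echo) ** 2))
      - 10 * np.log10(np.max(noisy**2) / np.mean((noisy - echo) ** 2)))
```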

  4. An evaluation of the effect of recent temperature variability on the prediction of coral bleaching events.

    PubMed

    Donner, Simon D

    2011-07-01

    Over the past 30 years, warm thermal disturbances have become commonplace on coral reefs worldwide. These periods of anomalous sea surface temperature (SST) can lead to coral bleaching, a breakdown of the symbiosis between the host coral and the symbiotic dinoflagellates that reside in coral tissue. The onset of bleaching is typically predicted to occur when the SST exceeds a local climatological maximum by 1 °C for a month or more. However, recent evidence suggests that the threshold at which bleaching occurs may depend on thermal history. This study uses global SST data sets (HadISST and NOAA AVHRR) and mass coral bleaching reports (from Reefbase) to examine the effect of historical SST variability on the accuracy of bleaching prediction. Two variability-based bleaching prediction methods are developed from global analysis of seasonal and interannual SST variability. The first method employs a local bleaching threshold derived from the historical variability in maximum annual SST to account for spatial variability in past thermal disturbance frequency. The second method uses a different formula to estimate the local climatological maximum to account for the low seasonality of SST in the tropics. The new prediction methods are tested against the common globally fixed threshold method using the observed bleaching reports. The results show that estimating the bleaching threshold from local historical SST variability delivers the highest predictive power, but also a higher rate of Type I errors. The second method has the lowest predictive power globally, though regional analysis suggests that it may be applicable in equatorial regions. The historical data analysis suggests that the bleaching threshold may have appeared to be constant globally because the magnitude of interannual variability in maximum SST is similar for many of the world's coral reef ecosystems. For example, the results show that an SST anomaly of 1 °C is equivalent to 1.73-2.94 standard deviations of the maximum monthly SST for two-thirds of the world's coral reefs. Coral reefs in the few regions that experience anomalously high interannual SST variability, like the equatorial Pacific, could prove critical to understanding how coral communities acclimate or adapt to frequent and/or severe thermal disturbances.
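
    A minimal sketch of the first variability-based rule described above (a local threshold set from the historical variability of annual maximum SST) is shown below; the scaling factor k and the synthetic SST record are illustrative assumptions, not the study's calibrated values.

```python
import numpy as np

def variability_based_threshold(monthly_sst, k=2.0):
    """Bleaching threshold = climatological maximum monthly SST plus k standard
    deviations of the interannual variability in annual maximum SST.

    monthly_sst: array of shape (n_years, 12); k is an illustrative scaling factor."""
    clim_max = monthly_sst.mean(axis=0).max()             # warmest climatological month
    annual_max_sd = monthly_sst.max(axis=1).std(ddof=1)   # interannual spread of yearly maxima
    return clim_max + k * annual_max_sd

def predict_bleaching(monthly_sst_year, threshold, months_required=1):
    """Flag a bleaching year if SST exceeds the threshold for at least `months_required` months."""
    return int(np.sum(monthly_sst_year > threshold) >= months_required)

# Synthetic 30-year record for one reef cell (seasonal cycle + interannual noise).
rng = np.random.default_rng(0)
clim = 27 + 1.5 * np.cos(2 * np.pi * (np.arange(12) - 2) / 12)
sst = clim + rng.normal(0, 0.4, size=(30, 12))

thr = variability_based_threshold(sst)
print("local threshold:", round(thr, 2), "flagged:", predict_bleaching(sst[-1] + 1.0, thr))
```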

  5. Effect of difference in occlusal contact area of mandibular free-end edentulous area implants on periodontal mechanosensitive threshold of adjacent premolars.

    PubMed

    Terauchi, Rie; Arai, Korenori; Tanaka, Masahiro; Kawazoe, Takayoshi; Baba, Shunsuke

    2015-01-01

    Implant treatment is believed to be minimally invasive to the remaining teeth. However, few studies have examined teeth adjacent to an implant region. Therefore, this study investigated the effect of the occlusal contact size of implants on the periodontal mechanosensitive threshold of adjacent premolars. A cross-sectional study design was adopted. The Department of Oral Implantology, Osaka Dental University, was the setting where patients underwent implant treatment in the mandibular free-end edentulous area. The study population comprised 87 patients (109 teeth) who underwent follow-up observation for at least 3 years following implant superstructure placement. As variables, age, sex, duration following superstructure placement, presence or absence of dental pulp, occlusal contact area, and periodontal mechanosensitive threshold were considered. The occlusal contact area was measured using Blue Silicone® and Bite Eye BE-I®. Periodontal mechanosensitive thresholds were measured using von Frey hairs. Based on the periodontal mechanosensitive threshold, we divided subjects into two groups: normal (≤5 g) and high (≥5.1 g). For statistical analysis, we compared the sensation thresholds of the two groups using the Chi-square test for categorical data and the Mann-Whitney U test for continuous data. For variables in which a significant difference was noted, we calculated the odds ratio (95 % confidence interval) and the effective dose. There were 93 teeth in the normal group and 16 teeth in the high group based on periodontal mechanosensitive threshold. Comparison of the two groups indicated no significant differences associated with age, sex, duration following superstructure placement, or presence or absence of dental pulp. A significant difference was noted with regard to occlusal contact area, with several high-group subjects belonging to the small contact area group (odds ratio: 4.75 [1.42-15.87]; effective dose: 0.29). The results of this study suggest an association between implant occlusal contact area and the periodontal mechanosensitive threshold of adjacent premolars. A smaller occlusal contact area resulted in an increased threshold. It appears that prosthodontic treatment should aim not only to improve occlusal function but also to maintain oromandibular function with regard to the preservation of remaining teeth.

  6. Effects of oxygen on responses to heating in two lizard species sampled along an elevational gradient.

    PubMed

    DuBois, P Mason; Shea, Tanner K; Claunch, Natalie M; Taylor, Emily N

    2017-08-01

    Thermal tolerance is an important variable in predictive models about the effects of global climate change on species distributions, yet the physiological mechanisms responsible for reduced performance at high temperatures in air-breathing vertebrates are not clear. We conducted an experiment to examine how oxygen affects three variables exhibited by ectotherms as they heat (gaping threshold, panting threshold, and loss of righting response, the latter indicating the critical thermal maximum) in two lizard species along an elevational (and therefore environmental oxygen partial pressure) gradient. Oxygen partial pressure did not impact these variables in either species. We also exposed lizards at each elevation to severely hypoxic gas to evaluate their responses to hypoxia. Severely low oxygen partial pressure treatments significantly reduced the gaping threshold, panting threshold, and critical thermal maximum. Further, under these extreme hypoxic conditions, these variables were strongly and positively related to the partial pressure of oxygen. At an elevation where both species overlapped, the thermal tolerance of the high elevation species was less affected by hypoxia than that of the low elevation species, suggesting the high elevation species may be adapted to lower oxygen partial pressures. In the high elevation species, female lizards had higher thermal tolerance than males. Our data suggest that oxygen impacts the thermal tolerance of lizards, but only under severely hypoxic conditions, possibly as a result of hypoxia-induced anapyrexia.

  7. A human visual based binarization technique for histological images

    NASA Astrophysics Data System (ADS)

    Shreyas, Kamath K. M.; Rajendran, Rahul; Panetta, Karen; Agaian, Sos

    2017-05-01

    In the field of vision-based systems for object detection and classification, thresholding is a key pre-processing step. Thresholding is a well-known technique for image segmentation. Segmentation of medical images, such as Computed Axial Tomography (CAT), Magnetic Resonance Imaging (MRI), X-Ray, Phase Contrast Microscopy, and Histological images, presents problems such as high variability in human anatomy and variation across modalities. Recent advances in computer-aided diagnosis of histological images help facilitate detection and classification of diseases. Since most pathology diagnosis depends on the expertise and ability of the pathologist, there is clearly a need for an automated assessment system. Histological images are stained to a specific color to differentiate each component in the tissue. Segmentation and analysis of such images are problematic, as they present high variability in terms of color and cell clusters. This paper presents an adaptive thresholding technique that aims at segmenting cell structures from Haematoxylin and Eosin stained images. The thresholded result can further be used by pathologists to perform effective diagnosis. The effectiveness of the proposed method is analyzed by visually comparing the results to state-of-the-art thresholding methods such as Otsu, Niblack, Sauvola, Bernsen, and Wolf. Computer simulations demonstrate the efficiency of the proposed method in segmenting critical information.
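
    The proposed technique itself is not given in this record, but a short sketch of the baseline comparison it mentions (a global Otsu threshold versus local Niblack and Sauvola thresholds, via scikit-image) illustrates how such binarizations are produced; the window size, k value and the use of a generic grayscale test image are illustrative assumptions.

```python
import numpy as np
from skimage import data, filters

# Any grayscale image stands in here for an H&E-stained slide converted to grayscale.
image = data.camera().astype(float)

# Global threshold (Otsu) versus local adaptive thresholds (Niblack, Sauvola).
t_otsu = filters.threshold_otsu(image)
t_niblack = filters.threshold_niblack(image, window_size=25, k=0.2)
t_sauvola = filters.threshold_sauvola(image, window_size=25)

binary = {
    "otsu": image > t_otsu,
    "niblack": image > t_niblack,
    "sauvola": image > t_sauvola,
}

for name, mask in binary.items():
    print(f"{name}: {mask.mean():.1%} of pixels above threshold")
```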

  8. Low Computational Signal Acquisition for GNSS Receivers Using a Resampling Strategy and Variable Circular Correlation Time

    PubMed Central

    Zhang, Yeqing; Wang, Meiling; Li, Yafeng

    2018-01-01

    For the objective of essentially decreasing computational complexity and time consumption of signal acquisition, this paper explores a resampling strategy and variable circular correlation time strategy specific to broadband multi-frequency GNSS receivers. In broadband GNSS receivers, the resampling strategy is established to work on conventional acquisition algorithms by resampling the main lobe of received broadband signals with a much lower frequency. Variable circular correlation time is designed to adapt to different signal strength conditions and thereby increase the operation flexibility of GNSS signal acquisition. The acquisition threshold is defined as the ratio of the highest and second highest correlation results in the search space of carrier frequency and code phase. Moreover, computational complexity of signal acquisition is formulated by amounts of multiplication and summation operations in the acquisition process. Comparative experiments and performance analysis are conducted on four sets of real GPS L2C signals with different sampling frequencies. The results indicate that the resampling strategy can effectively decrease computation and time cost by nearly 90–94% with just slight loss of acquisition sensitivity. With circular correlation time varying from 10 ms to 20 ms, the time cost of signal acquisition has increased by about 2.7–5.6% per millisecond, with most satellites acquired successfully. PMID:29495301

  9. Low Computational Signal Acquisition for GNSS Receivers Using a Resampling Strategy and Variable Circular Correlation Time.

    PubMed

    Zhang, Yeqing; Wang, Meiling; Li, Yafeng

    2018-02-24

    For the objective of essentially decreasing computational complexity and time consumption of signal acquisition, this paper explores a resampling strategy and variable circular correlation time strategy specific to broadband multi-frequency GNSS receivers. In broadband GNSS receivers, the resampling strategy is established to work on conventional acquisition algorithms by resampling the main lobe of received broadband signals with a much lower frequency. Variable circular correlation time is designed to adapt to different signal strength conditions and thereby increase the operation flexibility of GNSS signal acquisition. The acquisition threshold is defined as the ratio of the highest and second highest correlation results in the search space of carrier frequency and code phase. Moreover, computational complexity of signal acquisition is formulated by amounts of multiplication and summation operations in the acquisition process. Comparative experiments and performance analysis are conducted on four sets of real GPS L2C signals with different sampling frequencies. The results indicate that the resampling strategy can effectively decrease computation and time cost by nearly 90-94% with just slight loss of acquisition sensitivity. With circular correlation time varying from 10 ms to 20 ms, the time cost of signal acquisition has increased by about 2.7-5.6% per millisecond, with most satellites acquired successfully.
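
    A compact sketch of the acquisition metric described in records 8 and 9 (circular correlation via FFTs over all code phases, with the detection statistic taken as the ratio of the highest to the second-highest correlation peak) is shown below; the random code, integer delay and threshold value are toy assumptions, not the GPS L2C processing used in the paper.

```python
import numpy as np

def circular_correlate(signal, replica):
    """Circular correlation over all code phases using FFTs."""
    return np.abs(np.fft.ifft(np.fft.fft(signal) * np.conj(np.fft.fft(replica))))

def acquisition_metric(signal, replica):
    """Ratio of the highest to the second-highest correlation peak."""
    corr = circular_correlate(signal, replica)
    peak = int(np.argmax(corr))
    second = np.max(np.delete(corr, peak))
    return corr[peak] / second, peak

# Toy PRN-like code and a received signal = delayed code + noise.
rng = np.random.default_rng(0)
code = rng.choice([-1.0, 1.0], size=2046)
delay = 513
received = np.roll(code, delay) + 1.5 * rng.standard_normal(code.size)

ratio, est_delay = acquisition_metric(received, code)
threshold = 2.0   # illustrative acquisition threshold on the peak ratio
print(f"peak ratio={ratio:.2f}, estimated code phase={est_delay}, acquired={ratio > threshold}")
```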

  10. Relationships Between Vestibular Measures as Potential Predictors for Spaceflight Sensorimotor Adaptation

    NASA Technical Reports Server (NTRS)

    Clark, T. K.; Peters, B.; Gadd, N. E.; De Dios, Y. E.; Wood, S.; Bloomberg, J. J.; Mulavara, A. P.

    2016-01-01

    Introduction: During space exploration missions, astronauts are exposed to a series of novel sensorimotor environments, requiring sensorimotor adaptation. Until adaptation is complete, sensorimotor decrements occur, affecting critical tasks such as piloted landing or docking. Of particular interest are locomotion tasks such as emergency vehicle egress or extra-vehicular activity. While nearly all astronauts eventually adapt sufficiently, it appears there are substantial individual differences in how quickly and effectively this adaptation occurs. These individual differences in capacity for sensorimotor adaptation are poorly understood. Broadly, we aim to identify measures that may serve as pre-flight predictors of an individual's adaptation capacity for spaceflight-induced sensorimotor changes. As a first step, since spaceflight is thought to involve a reinterpretation of graviceptor cues (e.g. otolith cues from the vestibular system), we investigate the relationships between various measures of vestibular function in humans. Methods: In a set of 15 ground-based control subjects, we quantified individual differences in vestibular function using three measures: 1) ocular vestibular evoked myogenic potential (oVEMP), 2) computerized dynamic posturography and 3) vestibular perceptual thresholds. oVEMP responses were elicited using a mechanical stimulus approach. Computerized dynamic posturography was used to quantify Sensory Organization Tests (SOTs), including SOT5M, which involved performing pitching head movements while balancing on a sway-referenced support surface with eyes closed. We implemented a vestibular perceptual threshold task using the tilt capabilities of the Tilt-Translation Sled (TTS) at JSC. On each trial, the subject was passively roll-tilted left ear down or right ear down in the dark and verbally provided a forced-choice response regarding which direction they felt tilted. The motion profile was a single-cycle sinusoid of angular acceleration with a duration of 5 seconds (frequency of 0.2 Hz), which was selected as it requires sensory integration of otolith and semicircular canal cues. Stimulus direction was randomized and magnitude was determined using an adaptive sampling procedure. One hundred trials were provided and each subject's responses were fit with a psychometric curve to estimate the subject's threshold. Results: Roll tilt perceptual thresholds at 0.2 Hz ranged from 0.5 degrees to 1.82 degrees across the 15 subjects (geometric mean of 1.04 degrees), consistent with previous studies. The inter-individual variability in thresholds may help explain individual differences observed in sensorimotor adaptation to spaceflight. Analysis is ongoing for the oVEMP and computerized dynamic posturography data to identify relationships between the various vestibular measures. Discussion: Predicting individual differences in sensorimotor adaptation is critical both for the development of personalized countermeasures and for mission planning. Here we aim to develop a basis of vestibular tests and parameters that may serve as predictors of individual differences in sensorimotor adaptability by studying the relationships between these measures.
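
    A minimal sketch of the threshold-estimation step described above (fitting a cumulative-Gaussian psychometric function to binary left/right responses and reading off the tilt threshold as the curve's spread) is given below. The simulated uniform tilt magnitudes, the lapse-free model and the least-squares fit are simplifying assumptions; the study used an adaptive sampling procedure, and a maximum-likelihood fit would normally be preferred.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(tilt_deg, threshold, bias):
    """P(respond 'right ear down') as a cumulative Gaussian of signed tilt.
    `threshold` is the spread of the curve (roughly the 84%-correct point)."""
    return norm.cdf((tilt_deg - bias) / threshold)

# Simulated session: signed roll tilts (deg) and binary responses from a "true" observer.
rng = np.random.default_rng(0)
tilts = rng.uniform(-3, 3, 100)
true_threshold, true_bias = 1.0, 0.1
responses = rng.random(100) < psychometric(tilts, true_threshold, true_bias)

params, _ = curve_fit(psychometric, tilts, responses.astype(float),
                      p0=[1.0, 0.0], bounds=([0.05, -2.0], [5.0, 2.0]))
print(f"estimated threshold: {params[0]:.2f} deg, bias: {params[1]:.2f} deg")
```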

  11. Adaptive Spike Threshold Enables Robust and Temporally Precise Neuronal Encoding

    PubMed Central

    Resnik, Andrey; Celikel, Tansu; Englitz, Bernhard

    2016-01-01

    Neural processing rests on the intracellular transformation of information as synaptic inputs are translated into action potentials. This transformation is governed by the spike threshold, which depends on the history of the membrane potential on many temporal scales. While the adaptation of the threshold after spiking activity has been addressed before both theoretically and experimentally, it has only recently been demonstrated that the subthreshold membrane state also influences the effective spike threshold. The consequences for neural computation are not well understood yet. We address this question here using neural simulations and whole cell intracellular recordings in combination with information theoretic analysis. We show that an adaptive spike threshold leads to better stimulus discrimination for tight input correlations than would be achieved otherwise, independent from whether the stimulus is encoded in the rate or pattern of action potentials. The time scales of input selectivity are jointly governed by membrane and threshold dynamics. Encoding information using adaptive thresholds further ensures robust information transmission across cortical states i.e. decoding from different states is less state dependent in the adaptive threshold case, if the decoding is performed in reference to the timing of the population response. Results from in vitro neural recordings were consistent with simulations from adaptive threshold neurons. In summary, the adaptive spike threshold reduces information loss during intracellular information transfer, improves stimulus discriminability and ensures robust decoding across membrane states in a regime of highly correlated inputs, similar to those seen in sensory nuclei during the encoding of sensory information. PMID:27304526

  12. Adaptive Spike Threshold Enables Robust and Temporally Precise Neuronal Encoding.

    PubMed

    Huang, Chao; Resnik, Andrey; Celikel, Tansu; Englitz, Bernhard

    2016-06-01

    Neural processing rests on the intracellular transformation of information as synaptic inputs are translated into action potentials. This transformation is governed by the spike threshold, which depends on the history of the membrane potential on many temporal scales. While the adaptation of the threshold after spiking activity has been addressed before both theoretically and experimentally, it has only recently been demonstrated that the subthreshold membrane state also influences the effective spike threshold. The consequences for neural computation are not well understood yet. We address this question here using neural simulations and whole cell intracellular recordings in combination with information theoretic analysis. We show that an adaptive spike threshold leads to better stimulus discrimination for tight input correlations than would be achieved otherwise, independent from whether the stimulus is encoded in the rate or pattern of action potentials. The time scales of input selectivity are jointly governed by membrane and threshold dynamics. Encoding information using adaptive thresholds further ensures robust information transmission across cortical states i.e. decoding from different states is less state dependent in the adaptive threshold case, if the decoding is performed in reference to the timing of the population response. Results from in vitro neural recordings were consistent with simulations from adaptive threshold neurons. In summary, the adaptive spike threshold reduces information loss during intracellular information transfer, improves stimulus discriminability and ensures robust decoding across membrane states in a regime of highly correlated inputs, similar to those seen in sensory nuclei during the encoding of sensory information.
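
    For intuition about the mechanism studied in records 11 and 12, here is a minimal leaky integrate-and-fire sketch with a spike threshold that adapts toward the subthreshold membrane potential. All parameters and the input protocol are illustrative choices, not the authors' model: slow depolarization is largely cancelled by threshold adaptation, while near-coincident PSPs outrun it.

```python
import numpy as np

def simulate(psp_times, psp_size, drive, dt=0.1, T=500.0, tau_m=20.0, tau_th=15.0,
             v_rest=-70.0, theta0=-55.0, alpha=0.9):
    """Leaky membrane with a spike threshold relaxing toward theta0 + alpha*(v - v_rest)."""
    n = int(T / dt)
    kick = np.zeros(n)
    for t_spk in psp_times:                      # instantaneous PSP jumps (mV)
        kick[int(t_spk / dt)] += psp_size
    v, theta, spikes = v_rest, theta0, 0
    for i in range(n):
        v += dt * (-(v - v_rest) + drive) / tau_m + kick[i]
        theta += dt * (theta0 + alpha * (v - v_rest) - theta) / tau_th
        if v >= theta:
            spikes += 1
            v = v_rest                           # reset after a spike
    return spikes

coincident = [100.0 + k for k in range(5)]        # five 5 mV PSPs within 5 ms
dispersed = [100.0 + 40.0 * k for k in range(5)]  # the same PSPs spread over 200 ms

print("tonic drive only :", simulate([], 0.0, drive=20.0))
print("dispersed PSPs   :", simulate(dispersed, 5.0, drive=20.0))
print("coincident PSPs  :", simulate(coincident, 5.0, drive=20.0))
```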

  13. Female turtles from hot nests: is it duration of incubation or proportion of development at high temperatures that matters?

    PubMed

    Georges, Arthur

    1989-11-01

    Mean daily temperature in natural nests of freshwater turtles with temperature-dependent sex determination is known to be a poor predictor of hatchling sex ratios when nest temperatures fluctuate. To account for this, a model was developed on the assumption that females will emerge from eggs when more than half of embryonic development occurs above the threshold temperature for sex determination rather than from eggs that spend more than half their time above the threshold. The model is consistent with previously published data and in particular explains the phenomenon whereby the mean temperature that best distinguishes between male and female nests decreases with increasing variability in nest temperature. The model, if verified by controlled experiments, has important implications for our understanding of temperature-dependent sex determination in natural nests. Both mean nest temperature and "hours spent above the threshold" will be poor predictors of hatchling sex ratios. Studies designed to investigate latitudinal trends and inter-specific differences in the threshold temperature will need to consider latitudinal and inter-specific variation in the magnitude of diel fluctuations in nest temperature, and variation in factors influencing the magnitude of those fluctuations, such as nest depth. Furthermore, any factor that modifies the relationship between developmental rate and temperature can be expected to influence hatchling sex ratios in natural nests, especially when nest temperatures are close to the threshold.
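
    The model's central distinction (the proportion of embryonic development completed above the pivotal temperature, rather than the proportion of time spent above it) can be sketched as follows; the developmental-rate function, pivotal temperature and fluctuating nest temperatures are illustrative assumptions, not the paper's calibration.

```python
import numpy as np

def developmental_rate(temp_c, t0=15.0):
    """Toy developmental rate: zero below a developmental minimum, increasing with temperature."""
    return np.maximum(temp_c - t0, 0.0)

def nest_metrics(temps, pivotal=29.0):
    """Compare 'time above the threshold' with 'development completed above the threshold'."""
    above = temps > pivotal
    rate = developmental_rate(temps)
    return above.mean(), rate[above].sum() / rate.sum()

# Two nests with the same mean (28.5 C) but different diel fluctuation, 60 days of hourly data.
hours = np.arange(24 * 60)
shallow = 28.5 + 6.0 * np.sin(2 * np.pi * hours / 24)   # large diel swing
deep = 28.5 + 1.5 * np.sin(2 * np.pi * hours / 24)      # small diel swing

for name, temps in [("shallow nest", shallow), ("deep nest", deep)]:
    tf, df = nest_metrics(temps)
    print(f"{name}: time above pivotal = {tf:.2f}, development above pivotal = {df:.2f}")
```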

  14. Complexity reduction in the H.264/AVC using highly adaptive fast mode decision based on macroblock motion activity

    NASA Astrophysics Data System (ADS)

    Abdellah, Skoudarli; Mokhtar, Nibouche; Amina, Serir

    2015-11-01

    The H.264/AVC video coding standard is used in a wide range of applications, from video conferencing to high-definition television, owing to its high compression efficiency. This efficiency is mainly derived from newly introduced prediction schemes, including variable block modes. However, selecting the optimal mode among these schemes is computationally expensive. Consequently, complexity reduction in the H.264/AVC encoder has recently become a very challenging task in the video compression domain, especially when implementing the encoder in real-time applications. Fast mode decision algorithms play an important role in reducing the overall complexity of the encoder. In this paper, we propose an adaptive fast inter-mode decision algorithm based on motion activity, temporal stationarity, and spatial homogeneity. This algorithm predicts the motion activity of the current macroblock from its neighboring blocks and identifies temporally stationary regions and spatially homogeneous regions using adaptive threshold values based on video content features. Extensive experimental work has been done in the High profile, and the results show that the proposed algorithm effectively reduces computational complexity by 53.18% on average compared with the reference software encoder, while maintaining the high coding efficiency of H.264/AVC, losing only 0.097 dB in total peak signal-to-noise ratio and adding only 0.228% to the total bit rate.

  15. Contributions of adaptation currents to dynamic spike threshold on slow timescales: Biophysical insights from conductance-based models

    NASA Astrophysics Data System (ADS)

    Yi, Guosheng; Wang, Jiang; Wei, Xile; Deng, Bin; Li, Huiyan; Che, Yanqiu

    2017-06-01

    Spike-frequency adaptation (SFA) mediated by various adaptation currents, such as the voltage-gated K+ current (IM), the Ca2+-gated K+ current (IAHP), or the Na+-activated K+ current (IKNa), exists in many types of neurons and has been shown to effectively shape their information transmission properties on slow timescales. Here we use conductance-based models to investigate how the activation of three adaptation currents regulates the threshold voltage for action potential (AP) initiation during the course of SFA. We observe that the spike threshold becomes depolarized and the rate of membrane depolarization (dV/dt) preceding an AP is reduced as adaptation currents reduce the firing rate. This indicates that the presence of inhibitory adaptation currents enables the neuron to generate a dynamic threshold, inversely correlated with the preceding dV/dt, on timescales slower than the fast dynamics of AP generation. By analyzing the interactions of ionic currents at subthreshold potentials, we find that the activation of adaptation currents increases the outward component of the net membrane current prior to AP initiation, which antagonizes the inward Na+ current, resulting in a depolarized threshold and a lower dV/dt from one AP to the next. Our simulations demonstrate that the threshold dynamics on slow timescales are a secondary effect caused by the activation of adaptation currents. These findings provide a biophysical interpretation of the relationship between adaptation currents and spike threshold.

  16. The respiration pattern as an indicator of the anaerobic threshold.

    PubMed

    Mirmohamadsadeghi, Leila; Vesin, Jean-Marc; Lemay, Mathieu; Deriaz, Olivier

    2015-08-01

    The anaerobic threshold (AT) is a good index of personal endurance but requires a laboratory setting to be determined. It is important to develop easy field measurement techniques for the AT in order to rapidly adapt training programs. In the present study, it is postulated that the variability of the respiratory parameters decreases with exercise intensity (especially at the AT level). The aim of this work was to assess, in healthy trained subjects, the putative relationships between the variability of some respiration parameters and the AT. The heart rate and respiratory variables (volume, rate) were measured during an incremental exercise test performed on a treadmill by healthy, moderately trained subjects. Results show a decrease in the variance of 1/tidal volume with the intensity of exercise. Consequently, the cumulated variance (the sum of the variance measured at each level of the exercise) follows an exponential relationship with respect to intensity, eventually reaching a plateau. The amplitude of this plateau is closely related to the AT (r=-0.8). It is concluded that the AT is related to the variability of respiration.

  17. Threshold setting by the surround of cat retinal ganglion cells.

    PubMed

    Barlow, H B; Levick, W R

    1976-08-01

    1. The slope of curves relating the log increment threshold to log background luminance in cat retinal ganglion cells is affected by the area and duration of the test stimulus, as it is in human psychophysical experiments. 2. Using large area, long duration stimuli the slopes average 0.82 and approach close to 1 (Weber's Law) in the steepest cases. Small stimuli gave an average of 0.53 for on-centre units using brief stimuli, and 0.56 for off-centre units, using long stimuli. Slopes under 0.5 (square root law) were not found over an extended range of luminances. 3. On individual units the slope was generally greater for larger and longer test stimuli, but no unit showed the full extent of change from a slope of 0.5 to a slope of 1. 4. The above differences hold for objective measures of quantum/spike ratio, as well as for thresholds either judged by ear or assessed by calculation. 5. The steeper slope of the curves for large area, long duration test stimuli compared with small, long duration stimuli is associated with the increased effectiveness of antagonism from the surround at high backgrounds. This change may be less pronounced in off-centre units, one of which (probably transient Y-type) showed no difference of slope, and gave parallel area-threshold curves at widely separated background luminances, confirming the importance of differential surround effectiveness in changing the slope of the curves. 6. In on-centre units, the increased relative effectiveness of the surround is associated with the part of the raised background light that falls on the receptive field centre. 7. It is suggested that the variable surround functions as a zero-offset control that sets the threshold excitation required for generating impulses, and that this is separate from gain-setting adaptive mechanisms. This may be how ganglion cells maintain high incremental sensitivity in spite of a strong maintained excitatory drive that would otherwise cause compressive response non-linearities.

  18. Data compression using adaptive transform coding. Appendix 1: Item 1. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Rost, Martin Christopher

    1988-01-01

    Adaptive low-rate source coders are described in this dissertation. These coders adapt by adjusting the complexity of the coder to match the local coding difficulty of the image. This is accomplished by using a threshold driven maximum distortion criterion to select the specific coder used. The different coders are built using variable blocksized transform techniques, and the threshold criterion selects small transform blocks to code the more difficult regions and larger blocks to code the less complex regions. A theoretical framework is constructed from which the study of these coders can be explored. An algorithm for selecting the optimal bit allocation for the quantization of transform coefficients is developed. The bit allocation algorithm is more fully developed, and can be used to achieve more accurate bit assignments than the algorithms currently used in the literature. Some upper and lower bounds for the bit-allocation distortion-rate function are developed. An obtainable distortion-rate function is developed for a particular scalar quantizer mixing method that can be used to code transform coefficients at any rate.

  19. Searching for signposts: Adaptive planning thresholds in long-term water supply projections for the Western U.S.

    NASA Astrophysics Data System (ADS)

    Robinson, B.; Herman, J. D.

    2017-12-01

    Long-term water supply planning is challenged by highly uncertain streamflow projections across climate models and emissions scenarios. Recent studies have devised infrastructure and policy responses that can withstand or adapt to an ensemble of scenarios, particularly those outside the envelope of historical variability. An important aspect of this process is whether the proposed thresholds for adaptation (i.e., observations that trigger a response) truly represent a trend toward future change. Here we propose an approach to connect observations of annual mean streamflow with long-term projections by filtering GCM-based streamflow ensembles. Visualizations are developed to investigate whether observed changes in mean annual streamflow can be linked to projected changes in end-of-century mean and variance relative to the full ensemble. A key focus is identifying thresholds that point to significant long-term changes in the distribution of streamflow (+/- 20% or greater) as early as possible. The analysis is performed on 87 sites in the Western United States, using streamflow ensembles through 2100 from a recent study by the U.S. Bureau of Reclamation. Results focus on three primary questions: (1) how many years of observed data are needed to identify the most extreme scenarios, and by what year can they be identified? (2) are these features different between sites? and (3) using this analysis, do observed flows to date at each site point to significant long-term changes? This study addresses the challenge of severe uncertainty in long-term streamflow projections by identifying key thresholds that can be observed to support water supply planning.

  20. EMG biofeedback: the effects of CRF, FR, VR, FI, and VI schedules of reinforcement on the acquisition and extinction of increases in forearm muscle tension.

    PubMed

    Cohen, S L; Richardson, J; Klebez, J; Febbo, S; Tucker, D

    2001-09-01

    Biofeedback was used to increase forearm-muscle tension. Feedback was delivered under continuous reinforcement (CRF), variable interval (VI), fixed interval (FI), variable ratio (VR), and fixed ratio (FR) schedules of reinforcement when college students increased their muscle tension (electromyograph, EMG) above a high threshold. There were three daily sessions of feedback, and Session 3 was immediately followed by a session without feedback (extinction). The CRF schedule resulted in the highest EMG, closely followed by the FR and VR schedules, and the lowest EMG scores were produced by the FI and VI schedules. Similarly, the CRF schedule resulted in the greatest amount of time-above-threshold and the VI and FI schedules produced the lowest time-above-threshold. The highest response rates were generated by the FR schedule, followed by the VR schedule. The CRF schedule produced relatively low response rates, comparable to the rates under the VI and FI schedules. Some of the data are consistent with the partial-reinforcement-extinction effect. The present data suggest that different schedules of feedback should be considered in muscle-strengthening contexts such as during the rehabilitation of muscles following brain damage or peripheral nervous-system injury.

  1. Visual adaptation and the amplitude spectra of radiological images.

    PubMed

    Kompaniez-Dunigan, Elysse; Abbey, Craig K; Boone, John M; Webster, Michael A

    2018-01-01

    We examined how visual sensitivity and perception are affected by adaptation to the characteristic amplitude spectra of X-ray mammography images. Because of the transmissive nature of X-ray photons, these images have relatively more low-frequency variability than natural images, a difference that is captured by a steeper slope of the amplitude spectrum (~ -1.5) compared to the ~1/f (slope of -1) spectra common to natural scenes. Radiologists inspecting these images are therefore exposed to a different balance of spectral components, and we measured how this exposure might alter spatial vision. Observers (who were not radiologists) were adapted to images of normal mammograms or the same images sharpened by filtering the amplitude spectra to shallower slopes. Prior adaptation to the original mammograms significantly biased judgments of image focus relative to the sharpened images, demonstrating that the images are sufficient to induce substantial after-effects. The adaptation also induced strong losses in threshold contrast sensitivity that were selective for lower spatial frequencies, though these losses were very similar to the threshold changes induced by the sharpened images. Visual search for targets (Gaussian blobs) added to the images was also not differentially affected by adaptation to the original or sharper images. These results complement our previous studies examining how observers adapt to the textural properties or phase spectra of mammograms. Like the phase spectrum, adaptation to the amplitude spectrum of mammograms alters spatial sensitivity and visual judgments about the images. However, unlike the phase spectrum, adaptation to the amplitude spectra did not confer a selective performance advantage relative to more natural spectra.
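
    A small sketch of the image manipulation implied above (re-shaping an image's amplitude-spectrum slope from a mammogram-like ~ -1.5 toward a natural-scene-like -1 by FFT filtering, leaving the phase spectrum untouched) is shown below; the slope values and the use of synthetic noise in place of a mammogram are illustrative assumptions.

```python
import numpy as np

def set_spectrum_slope(image, target_slope, current_slope):
    """Reweight the amplitude spectrum by f^(target_slope - current_slope),
    keeping the phase spectrum unchanged."""
    h, w = image.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    f = np.sqrt(fx**2 + fy**2)
    f[0, 0] = 1.0                                   # leave the DC component untouched
    filt = f ** (target_slope - current_slope)
    filt[0, 0] = 1.0
    return np.real(np.fft.ifft2(np.fft.fft2(image) * filt))

# Toy "mammogram-like" image: 1/f^1.5 noise, then sharpened toward a 1/f (natural) spectrum.
rng = np.random.default_rng(0)
white = rng.standard_normal((256, 256))
steep = set_spectrum_slope(white, target_slope=-1.5, current_slope=0.0)
sharpened = set_spectrum_slope(steep, target_slope=-1.0, current_slope=-1.5)
print("variance before/after sharpening:", steep.var().round(3), sharpened.var().round(3))
```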

  2. Reducing variable frequency vibrations in a powertrain system with an adaptive tuned vibration absorber group

    NASA Astrophysics Data System (ADS)

    Gao, Pu; Xiang, Changle; Liu, Hui; Zhou, Han

    2018-07-01

    Based on a multiple degrees of freedom dynamic model of a vehicle powertrain system, natural vibration analyses and sensitivity analyses of the eigenvalues are performed to determine the key inertia for each natural vibration of a powertrain system. Then, the results are used to optimize the installation position of each adaptive tuned vibration absorber. According to the relationship between the variable frequency torque excitation and the natural vibration of a powertrain system, the entire vibration frequency band is divided into segments, and the auxiliary vibration absorber and dominant vibration absorber are determined for each sensitive frequency band. The optimum parameters of the auxiliary vibration absorber are calculated based on the optimal frequency ratio and the optimal damping ratio of the passive vibration absorber. The instantaneous change state of the natural vibrations of a powertrain system with adaptive tuned vibration absorbers is studied, and the optimized start and stop tuning frequencies of the adaptive tuned vibration absorber are obtained. These frequencies can be translated into the optimum parameters of the dominant vibration absorber. Finally, the optimal tuning scheme for the adaptive tuned vibration absorber group, which can be used to reduce the variable frequency vibrations of a powertrain system, is proposed, and corresponding numerical simulations are performed. The simulation time history signals are transformed into three-dimensional information related to time, frequency and vibration energy via the Hilbert-Huang transform (HHT). A comprehensive time-frequency analysis is then conducted to verify that the optimal tuning scheme for the adaptive tuned vibration absorber group can significantly reduce the variable frequency vibrations of a powertrain system.

  3. Specific conditions of distress in the dental situation.

    PubMed

    Hentschel, U; Allander, L; Winholt, A S

    1977-01-01

    The general feeling of distress in the dental situation has been studied in 60 female dental patients and correlated to the following variables: Experimentally evaluated sensitivity to pain, self-rating and the dentist's rating of sensitivity to pain, the pain-threshold value in the teeth, the need of local anesthesia, extraversion-introversion, neuroticism, and some percept-genetic psychological measures of adaptive behavior. The subjects have also answered a questionnaire for grading their distress in regard to different aspects of the treatment-situation, which were combined into eight groups using factor analysis and then correlated to the general distress. The variables having a significant relation to distress in the dental situation were: the dentist's rating of the patient's sensitivity, the need of anesthesia, four groups of treatment-components and two of the percept-genetic measures. There was also a certain relation to the pain threshold in the teeth.

  4. The Limits to Adaptation; A Systems Approach

    EPA Science Inventory

    The Limits to Adaptation: A Systems Approach. The ability to adapt to climate change is delineated by capacity thresholds, after which climate damages begin to overwhelm the adaptation response. Such thresholds depend upon physical properties (natural processes and engineering...

  5. An adaptive technique for multiscale approximate entropy (MAEbin) threshold (r) selection: application to heart rate variability (HRV) and systolic blood pressure variability (SBPV) under postural stress.

    PubMed

    Singh, Amritpal; Saini, Barjinder Singh; Singh, Dilbag

    2016-06-01

    Multiscale approximate entropy (MAE) is used to quantify the complexity of a time series as a function of time scale τ. Selection of the approximate entropy (ApEn) tolerance threshold 'r' is based on either (1) arbitrary selection in the recommended range (0.1-0.25 times the standard deviation of the time series), (2) finding the maximum ApEn (ApEnmax), i.e., the point where self-matches start to prevail over other matches, and choosing the corresponding 'r' (rmax) as the threshold, or (3) computing rchon by empirically finding the relation between rmax, the SD1/SD2 ratio and N using curve fitting, where SD1 and SD2 are the short-term and long-term variability of the time series, respectively. None of these methods is a gold standard for selection of 'r'. In our previous study [1], an adaptive procedure for selection of 'r' was proposed for approximate entropy (ApEn). In this paper, this is extended to multiple time scales using MAEbin and multiscale cross-MAEbin (XMAEbin). We applied this to simulations, i.e. 50 realizations (n = 50) of random number series, fractional Brownian motion (fBm) and MIX(P) [1] series of data length N = 300, and to short-term recordings of HRV and SBPV performed under postural stress from supine to standing. MAEbin and XMAEbin analysis was performed on laboratory recorded data of 50 healthy young subjects experiencing postural stress from supine to upright. The study showed that (i) ApEnbin of HRV is higher than that of SBPV in the supine position but lower in the upright position; (ii) ApEnbin of HRV decreases from supine (1.7324 ± 0.112, mean ± SD) to upright (1.4916 ± 0.108) due to vagal inhibition; (iii) ApEnbin of SBPV increases from supine (1.5535 ± 0.098) to upright (1.6241 ± 0.101) due to sympathetic activation; (iv) the individual and cross complexities of the RRi and systolic blood pressure (SBP) series depend on the time scale under consideration; (v) XMAEbin calculated using ApEnmax is correlated with cross-MAE calculated using ApEn (0.1-0.26, in steps of 0.02) at each time scale in the supine and upright positions, and ApEn0.26 has the highest correlation at most scales; and (vi) the choice of 'r' is critical in interpreting interactions between RRi and SBP and in ascertaining the true complexity of the individual RRi and SBP series.
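
    For reference, a compact implementation of approximate entropy with an explicit tolerance r (the quantity whose selection the record above addresses) is sketched here; the embedding dimension, the r value and the test signals are illustrative, not the adaptive MAEbin procedure itself.

```python
import numpy as np

def apen(x, m=2, r=0.2):
    """Approximate entropy of a 1-D series with embedding dimension m and tolerance r
    (r is given as a fraction of the series' standard deviation)."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std(ddof=0)

    def phi(m):
        n = len(x) - m + 1
        emb = np.array([x[i:i + m] for i in range(n)])           # embedded vectors
        # Chebyshev distance between all pairs of embedded vectors.
        dist = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
        c = (dist <= tol).mean(axis=1)                            # includes self-matches
        return np.mean(np.log(c))

    return phi(m) - phi(m + 1)

rng = np.random.default_rng(0)
regular = np.sin(np.linspace(0, 20 * np.pi, 300))
irregular = rng.standard_normal(300)
print("ApEn(sine) =", round(apen(regular), 3), " ApEn(noise) =", round(apen(irregular), 3))
```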

  6. Near-threshold fatigue behavior of copper alloys in air and aqueous environments: A high cyclic frequency study

    NASA Astrophysics Data System (ADS)

    Ahmed, Tawfik M.

    The near-threshold fatigue crack propagation behavior of alpha-phase copper alloys in desiccated air and several aqueous environments has been investigated. Three commercial alloys of nominal composition Cu-30Ni (Cu-Ni), Cu-30Zn (Cu-Zn) and 90Cu-7Al-3Fe (Cu-Al) were tested. Fatigue tests were conducted using standard prefatigued single-edge-notched (SEN) specimens loaded in tension at a high frequency of ˜100 Hz. Different R-ratios were employed, mostly R = 0.5. Low loading levels were used, corresponding to the threshold and near-threshold regions where ΔKth ≤ ΔK ≤ 11 MPa√m. Fatigue tests in the aqueous solutions showed that the effect of different corrosive environments during high-frequency testing (˜100 Hz) was not as pronounced as expected relative to air. Further testing revealed that environmental effects were present and that fatigue crack growth rates were influenced by fluid-induced closure effects, which are generally reported in the fatigue literature to be operative only in viscous liquids, not in aqueous solutions. It was concluded that high-frequency testing in aqueous environments consistently decreased crack growth rates in a manner similar to crack retardation effects in viscous fluids. Several theoretical models reported in the literature have underestimated, if not entirely failed to predict, fluid-induced closure in aqueous solutions. Results from the desiccated air tests confirmed that, under closure-free conditions (high R-ratios), both the threshold values and the stage II fatigue crack growth rate can be related to Young's modulus, in agreement with results from the literature. The roles of different mechanical and environmental variables in fatigue behavior become most visible in the low R-ratio regime, where they contribute to various closure processes.

  7. Automated detection system for pulmonary emphysema on 3D chest CT images

    NASA Astrophysics Data System (ADS)

    Hara, Takeshi; Yamamoto, Akira; Zhou, Xiangrong; Iwano, Shingo; Itoh, Shigeki; Fujita, Hiroshi; Ishigaki, Takeo

    2004-05-01

    An automatic extraction of the pulmonary emphysema area on 3-D chest CT images was performed using an adaptive thresholding technique. We proposed a method to estimate the ratio of the emphysema area to the whole lung volume. We employed 32 cases (15 normal and 17 abnormal) that had already been diagnosed by radiologists prior to the study. The ratio in all the normal cases was less than 0.02, and in abnormal cases it ranged from 0.01 to 0.26. The effectiveness of our approach was confirmed through the results of the present study.
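
    The record does not give the adaptive threshold itself, but the ratio it estimates can be sketched with a conventional fixed-attenuation rule (low-attenuation volume below roughly -950 HU within a lung mask); the threshold value, the all-ones mask and the synthetic volume are illustrative assumptions, not the paper's method.

```python
import numpy as np

def emphysema_ratio(ct_hu, lung_mask, threshold_hu=-950.0):
    """Fraction of lung voxels whose attenuation falls below the emphysema threshold."""
    lung_voxels = ct_hu[lung_mask]
    return float(np.mean(lung_voxels < threshold_hu))

# Synthetic 3-D volume: normal lung parenchyma around -850 HU with a low-attenuation pocket.
rng = np.random.default_rng(0)
ct = rng.normal(-850, 40, size=(64, 128, 128))
ct[20:30, 40:70, 40:70] = rng.normal(-970, 15, size=(10, 30, 30))   # emphysematous region
mask = np.ones_like(ct, dtype=bool)                                  # stand-in lung mask

print("emphysema ratio:", round(emphysema_ratio(ct, mask), 3))
```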

  8. The hip strength:ankle proprioceptive threshold ratio predicts falls and injury in diabetic neuropathy

    PubMed Central

    Richardson, James K.; DeMott, Trina; Allet, Lara; Kim; Ashton-Miller, James A.

    2014-01-01

    Introduction We determined lower limb neuromuscular capacities associated with falls and fall-related injuries in older people with declining peripheral nerve function. Methods Thirty-two subjects (67.4 ± 13.4 years; 19 with type 2 diabetes), representing a spectrum of peripheral neurologic function, were evaluated with frontal plane proprioceptive thresholds at the ankle, frontal plane motor function at the ankle and hip, and prospective follow-up for 1 year. Results Falls and fall-related injuries were reported by 20 (62.5%) and 14 (43.8%) subjects, respectively. The ratio of hip adductor rate of torque development to ankle proprioceptive threshold (HipSTR/AnkPRO) predicted falls (pseudo-R2 = .726) and injury (pseudo-R2 = .382). No other variable maintained significance in the presence of HipSTR/AnkPRO. Discussion Fall and injury risk in the population studied is related inversely to HipSTR/AnkPRO. Increasing rapidly available hip strength in patients with neuropathic ankle sensory impairment may decrease risk of falls and related injuries. PMID:24282041

  9. Predictions of the Contribution of HCN Half-Maximal Activation Potential Heterogeneity to Variability in Intrinsic Adaptation of Spiral Ganglion Neurons.

    PubMed

    Boulet, Jason; Bruce, Ian C

    2017-04-01

    Spiral ganglion neurons (SGNs) exhibit a wide range in the strength of their intrinsic adaptation, on a timescale of tens to hundreds of milliseconds, in response to electrical stimulation from a cochlear implant (CI). The purpose of this study was to determine how much of that variability could be caused by heterogeneity in the half-maximal activation potentials of hyperpolarization-activated cyclic nucleotide-gated cation (HCN) channels, which are known to produce intrinsic adaptation. A computational membrane model of the cat type I SGN was developed, based on the Hodgkin-Huxley model plus HCN and low-threshold potassium (KLT) conductances, in which the half-maximal activation potential of the HCN channel was varied and the response of the SGN to pulse train and paired-pulse stimulation was simulated. Physiologically plausible variation of HCN half-maximal activation potentials could indeed produce the range of adaptation on the timescale of tens to hundreds of milliseconds and the recovery from adaptation seen in the physiological data, while maintaining refractoriness within physiological bounds. This computational model demonstrates that HCN channels may play an important role in regulating the degree of adaptation in response to pulse train stimulation and may therefore contribute to variable constraints on acoustic information coding by CIs. This finding has broad implications for CI stimulation paradigms in that cell-to-cell variation of HCN channel properties is likely to significantly alter SGN excitability and therefore auditory perception.
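
    The abstract does not give the full SGN model, but the role of the HCN half-maximal activation potential can be illustrated with a standard Boltzmann steady-state activation curve; the slope factor and the spread of V_half values below are assumptions for illustration only, not fitted values from the study:

```python
import numpy as np

def hcn_steady_state(v_mV, v_half_mV, slope_mV=10.0):
    """Boltzmann steady-state activation of an HCN-like conductance:
    activation increases with hyperpolarization (hence the sign convention).
    The slope factor is an illustrative assumption."""
    return 1.0 / (1.0 + np.exp((v_mV - v_half_mV) / slope_mV))

v = np.linspace(-120.0, -40.0, 9)
for v_half in (-104.0, -96.0, -88.0):   # hypothetical spread of half-activation potentials
    act = hcn_steady_state(v, v_half)
    print(f"V_half = {v_half:6.1f} mV ->", np.round(act, 2))
```

    Shifting V_half in this way changes how much HCN conductance is available near resting potential, which is the handle on adaptation strength that the abstract describes.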

  10. The role of glacier changes and threshold definition in the characterisation of future streamflow droughts in glacierised catchments

    NASA Astrophysics Data System (ADS)

    Van Tiel, Marit; Teuling, Adriaan J.; Wanders, Niko; Vis, Marc J. P.; Stahl, Kerstin; Van Loon, Anne F.

    2018-01-01

    Glaciers are essential hydrological reservoirs, storing and releasing water at various timescales. Short-term variability in glacier melt is one of the causes of streamflow droughts, here defined as deficiencies from the flow regime. Streamflow droughts in glacierised catchments have a wide range of interlinked causing factors related to precipitation and temperature on short and long timescales. Climate change affects glacier storage capacity, with resulting consequences for discharge regimes and streamflow drought. Future projections of streamflow drought in glacierised basins can, however, strongly depend on the modelling strategies and analysis approaches applied. Here, we examine the effect of different approaches, concerning the glacier modelling and the drought threshold, on the characterisation of streamflow droughts in glacierised catchments. Streamflow is simulated with the Hydrologiska Byråns Vattenbalansavdelning (HBV-light) model for two case study catchments, the Nigardsbreen catchment in Norway and the Wolverine catchment in Alaska, and two future climate change scenarios (RCP4.5 and RCP8.5). Two types of glacier modelling are applied, a constant and dynamic glacier area conceptualisation. Streamflow droughts are identified with the variable threshold level method and their characteristics are compared between two periods, a historical (1975-2004) and future (2071-2100) period. Two existing threshold approaches to define future droughts are employed: (1) the threshold from the historical period; (2) a transient threshold approach, whereby the threshold adapts every year in the future to the changing regimes. Results show that drought characteristics differ among the combinations of glacier area modelling and thresholds. The historical threshold combined with a dynamic glacier area projects extreme increases in drought severity in the future, caused by the regime shift due to a reduction in glacier area. The historical threshold combined with a constant glacier area results in a drastic decrease of the number of droughts. The drought characteristics between future and historical periods are more similar when the transient threshold is used, for both glacier area conceptualisations. With the transient threshold, factors causing future droughts can be analysed. This study revealed the different effects of methodological choices on future streamflow drought projections and it highlights how the options can be used to analyse different aspects of future droughts: the transient threshold for analysing future drought processes, the historical threshold to assess changes between periods, the constant glacier area to analyse the effect of short-term climate variability on droughts and the dynamic glacier area to model more realistic future discharges under climate change.
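
    As a rough sketch of the threshold choices being compared, assuming the variable threshold is a day-of-year flow percentile (a common choice; the percentile, the synthetic data, and the simplified derivation of the "transient" threshold from the whole future period are illustrative, not the study's configuration):

```python
import numpy as np

def doy_percentile_threshold(flow, doy, pct=20.0):
    """Day-of-year varying threshold: the pct-th percentile of flows observed
    on each calendar day (smoothing window omitted for brevity)."""
    thr = np.empty(365)
    for d in range(1, 366):
        thr[d - 1] = np.percentile(flow[doy == d], pct)
    return thr

rng = np.random.default_rng(1)
years = 30
doy = np.tile(np.arange(1, 366), years)
flow = 10 + 5 * np.sin(2 * np.pi * doy / 365) + rng.gamma(2.0, 1.0, doy.size)
flow[doy.size // 2:] *= 0.7                    # crude "regime shift" in the second half

hist = slice(0, doy.size // 2)                 # historical period
fut = slice(doy.size // 2, doy.size)           # future period

thr_hist = doy_percentile_threshold(flow[hist], doy[hist])   # fixed historical threshold
thr_trans = doy_percentile_threshold(flow[fut], doy[fut])    # transient: derived from the changed regime

below_hist = flow[fut] < thr_hist[doy[fut] - 1]
below_trans = flow[fut] < thr_trans[doy[fut] - 1]
print("future drought days, historical threshold:", int(below_hist.sum()))
print("future drought days, transient threshold :", int(below_trans.sum()))
```

    The contrast in the two counts mirrors the paper's point: a fixed historical threshold turns a regime shift into (apparently) extreme drought, whereas a transient threshold isolates drought from the changed regime itself.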

  11. Large signal-to-noise ratio quantification in MLE for ARARMAX models

    NASA Astrophysics Data System (ADS)

    Zou, Yiqun; Tang, Xiafei

    2014-06-01

    It has been shown that closed-loop linear system identification by the indirect method can generally be transferred to open-loop ARARMAX (AutoRegressive AutoRegressive Moving Average with eXogenous input) estimation. For such models, gradient-related optimisation with a large enough signal-to-noise ratio (SNR) can avoid potential local convergence in maximum likelihood estimation. To ease the application of this condition, the threshold SNR needs to be quantified. In this paper, we construct an amplitude coefficient that is equivalent to the SNR and prove the finiteness of the threshold amplitude coefficient within the stability region. The quantification of the threshold is achieved by minimising an elaborately designed multi-variable cost function that unifies all the restrictions on the amplitude coefficient. The corresponding algorithm, based on two sets of physically realisable system input-output data, details the minimisation and also shows how to use the gradient-related method to estimate ARARMAX parameters when a local minimum is present because the SNR is small. The algorithm is then tested on a theoretical AutoRegressive Moving Average with eXogenous input model for derivation of the threshold and on a real gas turbine engine system for model identification. Finally, graphical validation of the threshold on a two-dimensional plot is discussed.

  12. Twofold processing for denoising ultrasound medical images.

    PubMed

    Kishore, P V V; Kumar, K V V; Kumar, D Anil; Prasad, M V D; Goutham, E N D; Rahul, R; Krishna, C B S Vamsi; Sandeep, Y

    2015-01-01

    Medical ultrasound (US) imaging non-invasively pictures the inside of the human body for disease diagnostics. Speckle noise corrupts ultrasound images, degrading their visual quality. A twofold processing algorithm is proposed in this work to reduce this multiplicative speckle noise. The first fold applies block-based thresholding, both hard (BHT) and soft (BST), to pixels in the wavelet domain with non-overlapping block sizes of 8, 16, 32 and 64. This first-fold process reduces speckle effectively but also blurs the object of interest. The second fold restores object boundaries and texture with adaptive wavelet fusion. Restoration of the degraded object in the block-thresholded US image is carried out by fusing the wavelet coefficients of the object in the original US image and the block-thresholded US image. Fusion rules and wavelet decomposition levels are made adaptive for each block using gradient histograms with a normalized differential mean (NDF), to maximize contrast between the denoised pixels and the object pixels in the resultant image. The proposed twofold methods are thus named adaptive NDF block fusion with hard and soft thresholding (ANBF-HT and ANBF-ST). The results indicate a clear visual quality improvement with the proposed twofold processing, in which the first fold removes noise and the second fold restores object properties. Peak signal-to-noise ratio (PSNR), normalized cross-correlation coefficient (NCC), edge strength (ES), image quality index (IQI) and structural similarity index (SSIM) measure the quantitative quality of the twofold processing technique. The proposed method is validated by comparison with anisotropic diffusion (AD), total variation filtering (TVF) and empirical mode decomposition (EMD) for enhancement of US images. The US images were provided by the AMMA hospital radiology labs at Vijayawada, India.
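
    A minimal sketch of the first fold only (block-based hard or soft thresholding of wavelet detail coefficients), assuming PyWavelets is available; the wavelet, block size and the per-block universal-threshold rule are illustrative choices, not the parameters used in the paper:

```python
import numpy as np
import pywt

def block_threshold(coeff, block=16, mode="soft"):
    """Apply a per-block threshold to one detail sub-band. The threshold is the
    'universal' rule sigma*sqrt(2*log(n)) estimated inside each block (an
    illustrative choice, not the paper's exact rule)."""
    out = coeff.copy()
    h, w = coeff.shape
    for i in range(0, h, block):
        for j in range(0, w, block):
            tile = out[i:i + block, j:j + block]
            sigma = np.median(np.abs(tile)) / 0.6745 + 1e-12
            thr = sigma * np.sqrt(2.0 * np.log(tile.size))
            out[i:i + block, j:j + block] = pywt.threshold(tile, thr, mode=mode)
    return out

def denoise(img, wavelet="db4", level=2, block=16, mode="soft"):
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    new_coeffs = [coeffs[0]] + [
        tuple(block_threshold(d, block, mode) for d in detail) for detail in coeffs[1:]
    ]
    return pywt.waverec2(new_coeffs, wavelet)

rng = np.random.default_rng(2)
clean = np.zeros((128, 128)); clean[32:96, 32:96] = 1.0
noisy = clean * np.exp(rng.normal(0, 0.3, clean.shape))   # rough multiplicative speckle
den = denoise(noisy, mode="hard")
print("noisy MSE   :", float(np.mean((noisy - clean) ** 2)))
print("denoised MSE:", float(np.mean((den[:128, :128] - clean) ** 2)))
```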

  13. Influence of background size, luminance and eccentricity on different adaptation mechanisms

    PubMed Central

    Gloriani, Alejandro H.; Matesanz, Beatriz M.; Barrionuevo, Pablo A.; Arranz, Isabel; Issolio, Luis; Mar, Santiago; Aparicio, Juan A.

    2016-01-01

    Mechanisms of light adaptation have been traditionally explained with reference to psychophysical experimentation. However, the neural substrata involved in those mechanisms remain to be elucidated. Our study analyzed links between psychophysical measurements and retinal physiological evidence with consideration for the phenomena of rod-cone interactions, photon noise, and spatial summation. Threshold test luminances were obtained with steady background fields at mesopic and photopic light levels (i.e., 0.06–110 cd/m2) for retinal eccentricities from 0° to 15° using three combinations of background/test field sizes (i.e., 10°/2°, 10°/0.45°, and 1°/0.45°). A two-channel Maxwellian view optical system was employed to eliminate pupil effects on the measured thresholds. A model based on visual mechanisms that were described in the literature was optimized to fit the measured luminance thresholds in all experimental conditions. Our results can be described by a combination of visual mechanisms. We determined how spatial summation changed with eccentricity and how subtractive adaptation changed with eccentricity and background field size. According to our model, photon noise plays a significant role to explain contrast detection thresholds measured with the 1/0.45° background/test size combination at mesopic luminances and at off-axis eccentricities. In these conditions, our data reflect the presence of rod-cone interaction for eccentricities between 6° and 9° and luminances between 0.6 and 5 cd/m2. In spite of the increasing noise effects with eccentricity, results also show that the visual system tends to maintain a constant signal-to-noise ratio in the off-axis detection task over the whole mesopic range. PMID:27210038

  14. Influence of background size, luminance and eccentricity on different adaptation mechanisms.

    PubMed

    Gloriani, Alejandro H; Matesanz, Beatriz M; Barrionuevo, Pablo A; Arranz, Isabel; Issolio, Luis; Mar, Santiago; Aparicio, Juan A

    2016-08-01

    Mechanisms of light adaptation have been traditionally explained with reference to psychophysical experimentation. However, the neural substrata involved in those mechanisms remain to be elucidated. Our study analyzed links between psychophysical measurements and retinal physiological evidence with consideration for the phenomena of rod-cone interactions, photon noise, and spatial summation. Threshold test luminances were obtained with steady background fields at mesopic and photopic light levels (i.e., 0.06-110 cd/m²) for retinal eccentricities from 0° to 15° using three combinations of background/test field sizes (i.e., 10°/2°, 10°/0.45°, and 1°/0.45°). A two-channel Maxwellian view optical system was employed to eliminate pupil effects on the measured thresholds. A model based on visual mechanisms that were described in the literature was optimized to fit the measured luminance thresholds in all experimental conditions. Our results can be described by a combination of visual mechanisms. We determined how spatial summation changed with eccentricity and how subtractive adaptation changed with eccentricity and background field size. According to our model, photon noise plays a significant role to explain contrast detection thresholds measured with the 1/0.45° background/test size combination at mesopic luminances and at off-axis eccentricities. In these conditions, our data reflect the presence of rod-cone interaction for eccentricities between 6° and 9° and luminances between 0.6 and 5 cd/m². In spite of the increasing noise effects with eccentricity, results also show that the visual system tends to maintain a constant signal-to-noise ratio in the off-axis detection task over the whole mesopic range. Copyright © 2016 Elsevier Ltd. All rights reserved.

  15. Accuracy of cochlear implant recipients in speech reception in the presence of background music.

    PubMed

    Gfeller, Kate; Turner, Christopher; Oleson, Jacob; Kliethermes, Stephanie; Driscoll, Virginia

    2012-12-01

    This study examined speech recognition abilities of cochlear implant (CI) recipients in the spectrally complex listening condition of 3 contrasting types of background music, and compared performance based upon listener groups: CI recipients using conventional long-electrode devices, Hybrid CI recipients (acoustic plus electric stimulation), and normal-hearing adults. We tested 154 long-electrode CI recipients using varied devices and strategies, 21 Hybrid CI recipients, and 49 normal-hearing adults on closed-set recognition of spondees presented in 3 contrasting forms of background music (piano solo, large symphony orchestra, vocal solo with small combo accompaniment) in an adaptive test. Signal-to-noise ratio thresholds for speech in music were examined in relation to measures of speech recognition in background noise and multitalker babble, pitch perception, and music experience. The signal-to-noise ratio thresholds for speech in music varied as a function of category of background music, group membership (long-electrode, Hybrid, normal-hearing), and age. The thresholds for speech in background music were significantly correlated with measures of pitch perception and thresholds for speech in background noise; auditory status was an important predictor. Evidence suggests that speech reception thresholds in background music change as a function of listener age (with more advanced age being detrimental), structural characteristics of different types of music, and hearing status (residual hearing). These findings have implications for everyday listening conditions such as communicating in social or commercial situations in which there is background music.

  16. Using a visual discrimination model for the detection of compression artifacts in virtual pathology images.

    PubMed

    Johnson, Jeffrey P; Krupinski, Elizabeth A; Yan, Michelle; Roehrig, Hans; Graham, Anna R; Weinstein, Ronald S

    2011-02-01

    A major issue in telepathology is the extremely large and growing size of digitized "virtual" slides, which can require several gigabytes of storage and cause significant delays in data transmission for remote image interpretation and interactive visualization by pathologists. Compression can reduce this massive amount of virtual slide data, but reversible (lossless) methods limit data reduction to less than 50%, while lossy compression can degrade image quality and diagnostic accuracy. "Visually lossless" compression offers the potential for using higher compression levels without noticeable artifacts, but requires a rate-control strategy that adapts to image content and loss visibility. We investigated the utility of a visual discrimination model (VDM) and other distortion metrics for predicting JPEG 2000 bit rates corresponding to visually lossless compression of virtual slides for breast biopsy specimens. Threshold bit rates were determined experimentally with human observers for a variety of tissue regions cropped from virtual slides. For test images compressed to their visually lossless thresholds, just-noticeable difference (JND) metrics computed by the VDM were nearly constant at the 95th percentile level or higher, and were significantly less variable than peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) metrics. Our results suggest that VDM metrics could be used to guide the compression of virtual slides to achieve visually lossless compression while providing 5-12 times the data reduction of reversible methods.

  17. Adaptive threshold shearlet transform for surface microseismic data denoising

    NASA Astrophysics Data System (ADS)

    Tang, Na; Zhao, Xian; Li, Yue; Zhu, Dan

    2018-06-01

    Random noise suppression plays an important role in microseismic data processing. Microseismic data are often corrupted by strong random noise, which directly influences the identification and location of microseismic events. The shearlet transform is a new multiscale transform that can effectively process low-magnitude microseismic data. In the shearlet domain, because valid signals and random noise have different distributions, shearlet coefficients can be shrunk by thresholding; the threshold is therefore vital for suppressing random noise. Conventional threshold denoising algorithms usually use the same threshold for all coefficients, which causes inefficient noise suppression or loss of valid signal. To solve these problems, we propose the adaptive threshold shearlet transform (ATST) for surface microseismic data denoising. In the new algorithm, we first calculate a fundamental threshold for each direction subband. In each direction subband, an adjustment factor is obtained from each subband coefficient and its neighboring coefficients, in order to adaptively regulate the fundamental threshold for different shearlet coefficients. Finally, we apply the adaptive threshold to the shearlet coefficients. Experimental denoising results on synthetic records and field data illustrate that the proposed method performs better at suppressing random noise and preserving valid signal than the conventional shearlet denoising method.
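
    The shearlet transform itself is not reproduced here, but the adaptive-threshold idea, a per-subband base threshold scaled by a factor derived from each coefficient's neighborhood, can be sketched on a generic 2-D coefficient array; the scaling rule and neighborhood size below are assumptions, not the paper's formulas:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def adaptive_threshold_subband(coeff, k=3.0, neighborhood=3):
    """Shrink coefficients of one direction sub-band with a spatially adaptive
    threshold: a base (universal-style) threshold is scaled down where the local
    neighborhood energy is high (likely signal) and left high where it is low
    (likely noise). The scaling rule is an illustrative assumption."""
    sigma = np.median(np.abs(coeff)) / 0.6745 + 1e-12
    base_thr = k * sigma
    local_energy = uniform_filter(coeff ** 2, size=neighborhood)
    adjust = np.sqrt(sigma ** 2 / (local_energy + sigma ** 2))   # in (0, 1]
    thr = base_thr * adjust
    return np.sign(coeff) * np.maximum(np.abs(coeff) - thr, 0.0)  # soft shrinkage

rng = np.random.default_rng(3)
signal = np.zeros((64, 64)); signal[30:34, :] = 5.0       # a band-like "event"
coeff = signal + rng.normal(0, 1.0, signal.shape)
den = adaptive_threshold_subband(coeff)
print("retained fraction:", float(np.mean(den != 0)))
```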

  18. Simplified flexible-PON upstream transmission using pulse position modulation at ONU and DSP-enabled soft-combining at OLT for adaptive link budgets.

    PubMed

    Liu, Xiang; Effenberger, Frank; Chand, Naresh

    2015-03-09

    We demonstrate a flexible modulation and detection scheme for upstream transmission in passive optical networks using pulse position modulation at optical network unit, facilitating burst-mode detection with automatic decision threshold tracking, and DSP-enabled soft-combining at optical line terminal. Adaptive receiver sensitivities of -33.1 dBm, -36.6 dBm and -38.3 dBm at a bit error ratio of 10(-4) are respectively achieved for 2.5 Gb/s, 1.25 Gb/s and 625 Mb/s after transmission over a 20-km standard single-mode fiber without any optical amplification.

  19. Sometimes processes don't matter: the general effect of short term climate variability on erosional systems.

    NASA Astrophysics Data System (ADS)

    Deal, Eric; Braun, Jean

    2017-04-01

    Climatic forcing undoubtedly plays an important role in shaping the Earth's surface. However, precisely how climate affects erosion rates, landscape morphology and the sedimentary record is highly debated. Recently there has been a focus on the influence of short-term variability in rainfall and river discharge on the relationship between climate and erosion rates. Here, we present a simple probabilistic argument, backed by modelling, that demonstrates that the way the Earth's surface responds to short-term climatic forcing variability is primarily determined by the existence and magnitude of erosional thresholds. We find that it is the ratio between the threshold magnitude and the mean magnitude of climatic forcing that determines whether variability matters or not, and in which way. This is a fundamental result that applies regardless of the nature of the erosional process. This means, for example, that we can understand the role that discharge variability plays in determining fluvial erosion efficiency despite doubts about the processes involved in fluvial erosion. We can use this finding to reproduce the main conclusions of previous studies on the role of discharge variability in determining long-term fluvial erosion efficiency. Many aspects of the landscape known to influence discharge variability are affected by human activity, such as land use and river damming. Another important control on discharge variability, rainfall intensity, is also expected to increase with warmer temperatures. Among many other implications, our findings help provide a general framework to understand and predict the response of the Earth's surface to changes in the mean and variability of rainfall and river discharge associated with anthropogenic activity. In addition, the process-independent nature of our findings suggests that previous work on river discharge variability and erosion thresholds can be applied to other erosional systems.
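
    The central ratio argument can be illustrated with a short Monte Carlo sketch: for a threshold-type erosion law E = max(Q - Qc, 0), how much discharge variability matters depends on where the threshold Qc sits relative to the mean forcing. The gamma forcing distribution and the linear excess law below are illustrative assumptions, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(4)
mean_q = 1.0
n = 200_000

def mean_erosion(threshold, cv):
    """Long-term mean of a simple threshold erosion law E = max(Q - Qc, 0)
    for gamma-distributed forcing Q with mean 1 and coefficient of variation cv."""
    shape = 1.0 / cv ** 2
    q = rng.gamma(shape, mean_q / shape, n)
    return np.mean(np.maximum(q - threshold, 0.0))

for thr in (0.2, 1.0, 3.0):            # threshold below, near, and above the mean forcing
    low_var = mean_erosion(thr, cv=0.3)
    high_var = mean_erosion(thr, cv=1.5)
    print(f"Qc/mean(Q) = {thr:3.1f}:  erosion(low variability) = {low_var:.3f}, "
          f"erosion(high variability) = {high_var:.3f}")
```

    With a low threshold the two variability levels give similar long-term erosion, while with a high threshold only the highly variable forcing produces appreciable erosion, which is the ratio-dependence described in the abstract.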

  20. Tourism development and economic growth a nonlinear approach

    NASA Astrophysics Data System (ADS)

    Po, Wan-Chen; Huang, Bwo-Nung

    2008-09-01

    We use cross-sectional data (1995-2005 yearly averages) for 88 countries to investigate the nonlinear relationship between tourism development and economic growth when a threshold variable is used. The degree of tourism specialization (qi, defined as receipts from international tourism as a percentage of GDP) is used as the threshold variable. The results of the tests for nonlinearity indicate that the 88 countries' data should be separated into three different groups or regimes to analyze the tourism-growth nexus. The results of the threshold regression show that when qi is below 4.0488% (regime 1, 57 countries) or above 4.7337% (regime 3, 23 countries), there exists a significantly positive relationship between tourism growth and economic growth. However, when qi is above 4.0488% and below 4.7337% (regime 2, 8 countries), we are unable to find evidence of such a significant relationship. Further in-depth analysis reveals that relatively low ratios of service-industry value added to GDP and of forested area to total country area explain why we are unable to find a significant relationship between these two variables in regime 2's countries.

  1. Heat-related deaths in hot cities: estimates of human tolerance to high temperature thresholds.

    PubMed

    Harlan, Sharon L; Chowell, Gerardo; Yang, Shuo; Petitti, Diana B; Morales Butler, Emmanuel J; Ruddell, Benjamin L; Ruddell, Darren M

    2014-03-20

    In this study we characterized the relationship between temperature and mortality in central Arizona desert cities that have an extremely hot climate. Relationships between daily maximum apparent temperature (ATmax) and mortality for eight condition-specific causes and all-cause deaths were modeled for all residents and separately for males and females ages <65 and ≥ 65 during the months May-October for years 2000-2008. The most robust relationship was between ATmax on day of death and mortality from direct exposure to high environmental heat. For this condition-specific cause of death, the heat thresholds in all gender and age groups (ATmax = 90-97 °F; 32.2-36.1 °C) were below local median seasonal temperatures in the study period (ATmax = 99.5 °F; 37.5 °C). Heat threshold was defined as ATmax at which the mortality ratio begins an exponential upward trend. Thresholds were identified in younger and older females for cardiac disease/stroke mortality (ATmax = 106 and 108 °F; 41.1 and 42.2 °C) with a one-day lag. Thresholds were also identified for mortality from respiratory diseases in older people (ATmax = 109 °F; 42.8 °C) and for all-cause mortality in females (ATmax = 107 °F; 41.7 °C) and males <65 years (ATmax = 102 °F; 38.9 °C). Heat-related mortality in a region that has already made some adaptations to predictable periods of extremely high temperatures suggests that more extensive and targeted heat-adaptation plans for climate change are needed in cities worldwide.

  2. Physical Screening Predictors for Success in Completing Air Force Phase II Air Liaison Officer Aptitude Assessment.

    PubMed

    McGee, John Christopher; Wilson, Eric; Barela, Haley; Blum, Sharon

    2017-03-01

    Air Liaison Officer Aptitude Assessment (AAA) attrition is often associated with a lack of candidate physical preparation. The Functional Movement Screen, Tactical Fitness Assessment, and fitness metrics were collected (n = 29 candidates) to determine what physical factors could predict a candidate's success in completing the AAA. Between-group comparisons were made between candidates completing the AAA and those who did not (p < 0.05). Upper 50% thresholds were established for all variables with R² < 0.8, and the data were converted to binary form (0 = did not attain threshold, 1 = attained threshold). Odds ratios, pre/post-test probabilities and positive likelihood ratios were computed, and logistic regression was applied to explain model variance. The following variables provided the most predictive value for AAA completion: Pull-ups (p = 0.01), Sit-ups (p = 0.002), Relative Powerball Toss (p = 0.017), and the Pull-ups × Sit-ups interaction (p = 0.016). Minimum recommended guidelines for AAA screening are Pull-ups (10 maximum), Sit-ups (76/2 minutes), and a Relative Powerball Toss of 0.6980 ft × lb/BW. Associated benefits could include higher graduation rates and cost savings from temporary duty and possible injury care for nonselected candidates. Recommended guidelines should be validated in future class cycles. Reprint & Copyright © 2017 Association of Military Surgeons of the U.S.

  3. The Limits to Adaptation: A Systems Approach

    EPA Science Inventory

    The ability to adapt to climate change is delineated by capacity thresholds, after which climate damages begin to overwhelm the adaptation response. Such thresholds depend upon physical properties (natural processes and engineering parameters), resource constraints (expressed th...

  4. Characterizing Decision-Analysis Performances of Risk Prediction Models Using ADAPT Curves.

    PubMed

    Lee, Wen-Chung; Wu, Yun-Chun

    2016-01-01

    The area under the receiver operating characteristic curve is a widely used index to characterize the performance of diagnostic tests and prediction models. However, the index does not explicitly acknowledge the utilities of risk predictions. Moreover, for most clinical settings, what counts is whether a prediction model can guide therapeutic decisions in a way that improves patient outcomes, rather than to simply update probabilities. Based on decision theory, the authors propose an alternative index, the "average deviation about the probability threshold" (ADAPT). An ADAPT curve (a plot of ADAPT value against the probability threshold) neatly characterizes the decision-analysis performances of a risk prediction model. Several prediction models can be compared for their ADAPT values at a chosen probability threshold, for a range of plausible threshold values, or for the whole ADAPT curves. This should greatly facilitate the selection of diagnostic tests and prediction models.

  5. Wavelet-based adaptive thresholding method for image segmentation

    NASA Astrophysics Data System (ADS)

    Chen, Zikuan; Tao, Yang; Chen, Xin; Griffis, Carl

    2001-05-01

    A nonuniform background distribution may cause a global thresholding method to fail to segment objects. One solution is using a local thresholding method that adapts to local surroundings. In this paper, we propose a novel local thresholding method for image segmentation, using multiscale threshold functions obtained by wavelet synthesis with weighted detail coefficients. In particular, the coarse-to-fine synthesis with attenuated detail coefficients produces a threshold function corresponding to a high-frequency-reduced signal. This wavelet-based local thresholding method adapts to both local size and local surroundings, and its implementation can take advantage of the fast wavelet algorithm. We applied this technique to physical contaminant detection for poultry meat inspection using x-ray imaging. Experiments showed that inclusion objects in deboned poultry could be extracted at multiple resolutions despite their irregular sizes and uneven backgrounds.
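
    A minimal sketch of the idea, assuming PyWavelets: reconstruct the image from its wavelet decomposition with attenuated detail coefficients to obtain a smooth, high-frequency-reduced surface, and use that surface (plus an offset) as a local threshold. The wavelet, attenuation factor and offset are illustrative, not the paper's settings:

```python
import numpy as np
import pywt

def wavelet_threshold_surface(img, wavelet="haar", level=3, attenuation=0.1, offset=10.0):
    """Build a local threshold function by reconstructing the image from its
    wavelet decomposition with attenuated detail coefficients (a smooth,
    high-frequency-reduced version of the image), plus a constant offset."""
    coeffs = pywt.wavedec2(img.astype(float), wavelet, level=level)
    damped = [coeffs[0]] + [tuple(attenuation * d for d in detail) for detail in coeffs[1:]]
    background = pywt.waverec2(damped, wavelet)[: img.shape[0], : img.shape[1]]
    return background + offset

rng = np.random.default_rng(5)
x = np.linspace(0, 1, 128)
background = 40 + 60 * x[None, :]                 # nonuniform background ramp
img = background + rng.normal(0, 3, (128, 128))
img[60:68, 60:68] += 30                           # a small bright inclusion

thr = wavelet_threshold_surface(img)
mask = img > thr
print("segmented pixels:", int(mask.sum()), "of", mask.size)
```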

  6. Gravity and Neuronal Adaptation. Neurophysiology of Reflexes from Hypo- to Hypergravity Conditions

    NASA Astrophysics Data System (ADS)

    Ritzmann, Ramona; Krause, Anne; Freyler, Kathrin; Gollhofer, Albert

    2017-02-01

    Introduction: For interplanetary and orbital missions in human space flight, knowledge about the gravity-sensitivity of the central nervous system (CNS) is required. The objective of this study was to assess neurophysiological correlates in variable hetero gravity conditions in regard to their timing and shaping. Methods: In ten subjects, peripheral nerve stimulation was used to elicit H-reflexes and M-waves in the M. soleus in Lunar, Martian, Earth and hypergravity. Gravity-dependencies were described by means of reflex latency, inter-peak-interval, duration, stimulation threshold and maximal amplitudes. Experiments were executed during the CNES/ESA/DLR JEPPFs. Results: H-reflex latency, inter-peak-interval and duration decreased with increasing gravitation (P<0.05); likewise, M-wave inter-peak-interval was diminished and latency prolonged with increasing gravity (P<0.05). Stimulation threshold of H-reflexes and M-waves decreased (P<0.05) while maximal amplitudes increased with an increase in gravitation (P<0.05). Conclusion: Adaptations in neurophysiological correlates in hetero gravity are associated with a shift in timing and shaping. For the first time, our results indicate that synaptic and axonal nerve conduction velocity as well as axonal and spinal excitability are diminished with reduced gravitational forces on the Moon and Mars and gradually increased when gravitation is progressively augmented up to hypergravity. Interrelated with the adaptation in threshold we conclude that neuronal circuitries are significantly affected by gravitation. As a consequence, movement control and countermeasures may be biased in extended space missions involving transitions between different force environments.

  7. Perceived pitch of vibrotactile stimuli: effects of vibration amplitude, and implications for vibration frequency coding.

    PubMed

    Morley, J W; Rowe, M J

    1990-12-01

    1. The effect of changes in amplitude on the perceived pitch of cutaneous vibratory stimuli was studied in psychophysical experiments designed to test whether the coding of information about the frequency of the vibration might be based on the ratio of recruitment of the PC (Pacinian corpuscle-associated) and RA (rapidly adapting) classes of tactile sensory fibres. The study was based on previous data which show that at certain vibration frequencies (e.g. 150 Hz) the ratio of recruitment of the PC and RA classes should vary as a function of vibration amplitude. 2. Sinusoidal vibration at either 30 Hz or 150 Hz, and at an amplitude 10 dB above subjective detection thresholds was delivered in a 1 s train to the distal phalangeal pad of the index finger in eight human subjects. This standard vibration was followed after 0.5 s by a 1 s comparison train of vibration which (unknown to the subject) was at the same frequency as the standard but at a range of amplitudes from 2 to 50 dB above the detection threshold. A two-alternative forced-choice procedure was used in which the subject had to indicate whether the comparison stimulus was higher or lower in pitch (frequency) than the standard. 3. Marked differences were seen from subject to subject in the effect of amplitude on perceived pitch at both 30 Hz and 150 Hz. At 150 Hz, five out of the eight subjects reported an increase in pitch as the amplitude of the comparison vibration increased, one experienced no change, and only two experienced the fall in perceived pitch that is predicted if the proposed ratio code contributes to vibrotactile pitch judgements. At 30 Hz similar intersubject variability was seen in the pitch-amplitude functions. 4. The results do not support the hypothesis that a ratio code contributes to vibrotactile pitch perception. We conclude that temporal patterning of impulse activity remains the major candidate code for pitch perception, at least over a substantial part of the vibrotactile frequency bandwidth.

  8. Neurofeedback in three patients in the state of unresponsive wakefulness.

    PubMed

    Keller, Ingo; Garbacenkaite, Ruta

    2015-12-01

    Some severely brain injured patients remain unresponsive, only showing reflex movements without any response to command. This syndrome has been named unresponsive wakefulness syndrome (UWS). The objective of the present study was to determine whether UWS patients are able to alter their brain activity using neurofeedback (NFB) technique. A small sample of three patients received a daily session of NFB for 3 weeks. We applied the ratio of theta and beta amplitudes as a feedback variable. Using an automatic threshold function, patients heard their favourite music whenever their theta/beta ratio dropped below the threshold. Changes in awareness were assessed weekly with the JFK Coma Recovery Scale-Revised for each treatment week, as well as 3 weeks before and after NFB. Two patients showed a decrease in their theta/beta ratio and theta-amplitudes during this period. The third patient showed no systematic changes in his EEG activity. The results of our study provide the first evidence that NFB can be used in patients in a state of unresponsive wakefulness.
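
    The abstract does not specify the automatic threshold function; one plausible, purely illustrative scheme is a running-percentile threshold on the theta/beta ratio, with the music switched on whenever the current ratio drops below it. The window length and percentile below are assumptions:

```python
import numpy as np

def neurofeedback_session(theta, beta, percentile=50.0, window=60):
    """Per-sample feedback decision: music plays when the theta/beta ratio drops
    below a threshold that is recomputed as a running percentile of recent
    ratios (the rule and window length are illustrative assumptions)."""
    ratio = theta / np.maximum(beta, 1e-9)
    music_on = np.zeros(ratio.size, dtype=bool)
    for t in range(ratio.size):
        history = ratio[max(0, t - window):t + 1]
        threshold = np.percentile(history, percentile)
        music_on[t] = ratio[t] < threshold
    return ratio, music_on

rng = np.random.default_rng(6)
n = 600                                                    # e.g. 10 minutes at one value per second
theta = 8 + rng.normal(0, 1.5, n) - 0.004 * np.arange(n)   # slowly decreasing theta
beta = 4 + rng.normal(0, 0.8, n)
ratio, music = neurofeedback_session(theta, beta)
print(f"music on {100 * music.mean():.1f}% of the session")
```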

  9. Role of extrinsic noise in the sensitivity of the rod pathway: rapid dark adaptation of nocturnal vision in humans.

    PubMed

    Reeves, Adam; Grayhem, Rebecca

    2016-03-01

    Rod-mediated 500 nm test spots were flashed in Maxwellian view at 5 deg eccentricity, both on steady 10.4 deg fields of intensities (I) from 0.00001 to 1.0 scotopic troland (sc td) and from 0.2 s to 1 s after extinguishing the field. On dim fields, thresholds of tiny (5') tests were proportional to √I (Rose-DeVries law), while thresholds after extinction fell within 0.6 s to the fully dark-adapted absolute threshold. Thresholds of large (1.3 deg) tests were proportional to I (Weber law) and extinction thresholds, to √I. rod thresholds are elevated by photon-driven noise from dim fields that disappears at field extinction; large spot thresholds are additionally elevated by neural light adaptation proportional to √I. At night, recovery from dimly lit fields is fast, not slow.

  10. A Data Centred Method to Estimate and Map Changes in the Full Distribution of Daily Precipitation and Its Exceedances

    NASA Astrophysics Data System (ADS)

    Chapman, S. C.; Stainforth, D. A.; Watkins, N. W.

    2014-12-01

    Estimates of how our climate is changing are needed locally in order to inform adaptation planning decisions. This requires quantifying the geographical patterns in changes at specific quantiles or thresholds in distributions of variables such as daily temperature or precipitation. We develop a method [1] for analysing local climatic time series to assess which quantiles of the local climatic distribution show the greatest and most robust changes, specifically addressing the challenges presented by 'heavy-tailed' distributed variables such as daily precipitation. We extract from the data quantities that characterize the changes in time of the likelihood of daily precipitation above a threshold and of the relative amount of precipitation in those extreme precipitation days. Our method is a simple mathematical deconstruction of how the difference between two observations from two different time periods can be assigned to the combination of natural statistical variability and/or the consequences of secular climate change. This deconstruction facilitates an assessment of how fast different quantiles of precipitation distributions are changing. This involves determining which quantiles and geographical locations show the greatest change, but also those at which any change is highly uncertain. We demonstrate this approach using E-OBS gridded data [2] time series of local daily precipitation from specific locations across Europe over the last 60 years. We treat geographical location and precipitation as independent variables and thus obtain as outputs the pattern of change at a given threshold of precipitation and with geographical location. This is model-independent, thus providing data of direct value in model calibration and assessment. Our results identify regionally consistent patterns which, depending on location, show a systematic increase in precipitation on the wettest days, shifts in precipitation patterns towards fewer moderate days and more heavy days, and drying across all days; this is of potential value in adaptation planning. [1] S C Chapman, D A Stainforth, N W Watkins, 2013 Phil. Trans. R. Soc. A, 371 20120287; D. A. Stainforth, S. C. Chapman, N. W. Watkins, 2013 Environ. Res. Lett. 8, 034031 [2] Haylock et al. 2008 J. Geophys. Res (Atmospheres), 113, D20119
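
    The basic building block, comparing empirical wet-day precipitation quantiles between an early and a late period at a location, can be sketched as follows; the wet-day threshold, quantile set and synthetic gamma data are illustrative and do not reproduce the paper's deconstruction into natural variability and secular change:

```python
import numpy as np

def quantile_change(p_early, p_late, quantiles, wet_threshold=1.0):
    """Difference in empirical precipitation quantiles (late minus early),
    computed over wet days only (daily totals above wet_threshold, in mm)."""
    wet_early = p_early[p_early > wet_threshold]
    wet_late = p_late[p_late > wet_threshold]
    return (np.percentile(wet_late, quantiles)
            - np.percentile(wet_early, quantiles))

rng = np.random.default_rng(7)
early = rng.gamma(0.7, 6.0, 30 * 365)          # ~30 years of synthetic daily totals
late = rng.gamma(0.7, 6.6, 30 * 365)           # heavier wet-day tail in the later period
q = np.array([50, 75, 90, 95, 99])
delta = quantile_change(early, late, q)
for qi, di in zip(q, delta):
    print(f"q{qi:02d}: change = {di:+.2f} mm/day")
```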

  11. A progressive data compression scheme based upon adaptive transform coding: Mixture block coding of natural images

    NASA Technical Reports Server (NTRS)

    Rost, Martin C.; Sayood, Khalid

    1991-01-01

    A method for efficiently coding natural images using a vector-quantized, variable-blocksize transform source coder is presented. The method, mixture block coding (MBC), incorporates variable-rate coding by using a mixture of discrete cosine transform (DCT) source coders. The selection of which coder codes a given image region is made through a threshold-driven distortion criterion. In this paper, MBC is used in two different applications. The base method is concerned with single-pass low-rate image data compression. The second is a natural extension of the base method which allows for low-rate progressive transmission (PT). Since the base method adapts easily to progressive coding, it offers the aesthetic advantage of progressive coding without incorporating extensive channel overhead. Image compression rates of approximately 0.5 bit/pel are demonstrated for both monochrome and color images.
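
    A toy sketch of the threshold-driven selection step (not the MBC coder itself): for each image block, try progressively higher-rate DCT coders and keep the cheapest one whose reconstruction distortion falls below a threshold. The candidate coder set, quantizer and MSE threshold are illustrative assumptions:

```python
import numpy as np
from scipy.fft import dctn, idctn

def code_block(block, keep, q=8.0):
    """Toy DCT coder: keep the `keep` largest-magnitude coefficients and quantize them."""
    c = dctn(block, norm="ortho")
    cutoff = np.sort(np.abs(c).ravel())[-keep]
    kept = np.where(np.abs(c) >= cutoff, np.round(c / q) * q, 0.0)
    return idctn(kept, norm="ortho")

def select_coder(block, candidates=(4, 8, 16, 32), mse_threshold=25.0):
    """Return the lowest-rate coder whose distortion is below the threshold,
    falling back to the highest-rate candidate."""
    for keep in candidates:
        rec = code_block(block, keep)
        if np.mean((rec - block) ** 2) <= mse_threshold:
            return keep, rec
    return candidates[-1], rec

rng = np.random.default_rng(8)
smooth = np.full((8, 8), 120.0) + rng.normal(0, 2, (8, 8))      # low-activity block
busy = rng.uniform(0, 255, (8, 8))                              # high-activity block
for name, blk in (("smooth", smooth), ("busy", busy)):
    keep, _ = select_coder(blk)
    print(f"{name:6s} block -> coder keeping {keep} coefficients")
```

    Low-activity regions end up with the cheap coder, while busy regions force the selection up to the highest-rate coder, which is the threshold-driven behaviour the abstract describes.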

  12. Multiparameter vision testing apparatus

    NASA Technical Reports Server (NTRS)

    Hunt, S. R., Jr.; Homkes, R. J.; Poteate, W. B.; Sturgis, A. C. (Inventor)

    1975-01-01

    Compact vision testing apparatus is described for testing a large number of physiological characteristics of the eyes and visual system of a human subject. The head of the subject is inserted into a viewing port at one end of a light-tight housing containing various optical assemblies. Visual acuity and other refractive characteristics and ocular muscle balance characteristics of the eyes of the subject are tested by means of a retractable phoroptor assembly carried near the viewing port and a film cassette unit carried in the rearward portion of the housing (the latter selectively providing a variety of different visual targets which are viewed through the optical system of the phoroptor assembly). The visual dark adaptation characteristics and absolute brightness threshold of the subject are tested by means of a projector assembly which selectively projects one or both of a variable intensity fixation target and a variable intensity adaptation test field onto a viewing screen located near the top of the housing.

  13. How to Assess the Value of Medicines?

    PubMed Central

    Simoens, Steven

    2010-01-01

    This study aims to discuss approaches to assessing the value of medicines. Economic evaluation assesses value by means of the incremental cost-effectiveness ratio (ICER). Health is maximized by selecting medicines with increasing ICERs until the budget is exhausted. The budget size determines the value of the threshold ICER and vice versa. Alternatively, the threshold value can be inferred from pricing/reimbursement decisions, although such values vary between countries. Threshold values derived from the value-of-life literature depend on the technique used. The World Health Organization has proposed a threshold value tied to the national GDP. As decision makers may wish to consider multiple criteria, variable threshold values and weighted ICERs have been suggested. Other approaches (i.e., replacement approach, program budgeting and marginal analysis) have focused on improving resource allocation, rather than maximizing health subject to a budget constraint. Alternatively, the generalized optimization framework and multi-criteria decision analysis make it possible to consider other criteria in addition to value. PMID:21607066
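
    The selection rule described, funding medicines in order of increasing ICER until the budget is exhausted, can be sketched with invented numbers (all names, costs, effects and the budget below are hypothetical):

```python
# Hypothetical medicines: (name, incremental cost, incremental effect in QALYs).
medicines = [
    ("A", 50_000.0, 10.0),
    ("B", 120_000.0, 8.0),
    ("C", 30_000.0, 5.0),
    ("D", 200_000.0, 4.0),
]

def icer(cost, effect):
    """Incremental cost-effectiveness ratio: extra cost per extra unit of effect."""
    return cost / effect

def allocate(meds, budget):
    """Fund medicines in order of increasing ICER until the budget runs out;
    the last funded ICER approximates the implied threshold."""
    funded, spent = [], 0.0
    for name, cost, effect in sorted(meds, key=lambda m: icer(m[1], m[2])):
        if spent + cost <= budget:
            funded.append((name, icer(cost, effect)))
            spent += cost
    return funded, spent

funded, spent = allocate(medicines, budget=250_000.0)
for name, r in funded:
    print(f"fund {name}: ICER = {r:,.0f} per QALY")
print(f"budget used: {spent:,.0f}; implied threshold ≈ {funded[-1][1]:,.0f} per QALY")
```

    This also illustrates the sentence on budget size and threshold: shrinking the budget drops the last-funded medicine and lowers the implied threshold ICER.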

  14. How to assess the value of medicines?

    PubMed

    Simoens, Steven

    2010-01-01

    This study aims to discuss approaches to assessing the value of medicines. Economic evaluation assesses value by means of the incremental cost-effectiveness ratio (ICER). Health is maximized by selecting medicines with increasing ICERs until the budget is exhausted. The budget size determines the value of the threshold ICER and vice versa. Alternatively, the threshold value can be inferred from pricing/reimbursement decisions, although such values vary between countries. Threshold values derived from the value-of-life literature depend on the technique used. The World Health Organization has proposed a threshold value tied to the national GDP. As decision makers may wish to consider multiple criteria, variable threshold values and weighted ICERs have been suggested. Other approaches (i.e., replacement approach, program budgeting and marginal analysis) have focused on improving resource allocation, rather than maximizing health subject to a budget constraint. Alternatively, the generalized optimization framework and multi-criteria decision analysis make it possible to consider other criteria in addition to value.

  15. Robust Adaptive Thresholder For Document Scanning Applications

    NASA Astrophysics Data System (ADS)

    Hsing, To R.

    1982-12-01

    In document scanning applications, thresholding is used to obtain binary data from a scanner. However, due to (1) a wide range of different color backgrounds, (2) density variations of printed text information, and (3) the shading effect caused by the optical systems, the use of adaptive thresholding to enhance the useful information is highly desirable. This paper describes a new robust adaptive thresholder for obtaining valid binary images. It is basically a memory-type algorithm that dynamically updates the black and white reference levels to optimize a local adaptive threshold function. The algorithm produces high image quality from different types of simulated test patterns. The software algorithm is described and experimental results are presented to illustrate the procedures. Results also show that the techniques described here can be used for real-time signal processing in a variety of applications.
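
    A rough sketch of the memory-type idea, not the paper's algorithm: keep running estimates of the white (background) and black (ink) reference levels, updated from recently classified pixels, and threshold midway between them. The update rate, initialization and synthetic page are assumptions:

```python
import numpy as np

def adaptive_binarize(image, alpha=0.1):
    """Binarize a scanned image row by row, keeping running estimates of the
    white (background) and black (ink) reference levels and thresholding
    midway between them (a memory-type scheme with an assumed update rate)."""
    img = np.asarray(image, dtype=float)
    white_ref, black_ref = img.max(), img.min()
    out = np.zeros(img.shape, dtype=np.uint8)
    for r, row in enumerate(img):
        thr = 0.5 * (white_ref + black_ref)
        is_white = row > thr
        out[r] = is_white.astype(np.uint8)
        # Update the reference levels from the pixels just classified (memory term).
        if is_white.any():
            white_ref = (1 - alpha) * white_ref + alpha * row[is_white].mean()
        if (~is_white).any():
            black_ref = (1 - alpha) * black_ref + alpha * row[~is_white].mean()
    return out

rng = np.random.default_rng(9)
shading = np.linspace(220, 150, 64)[:, None]      # background gets darker down the page
page = shading + rng.normal(0, 3, (64, 256))
page[20:44, 60:200] -= 80                         # a dark, text-like region
binary = adaptive_binarize(page)
print("ink pixel fraction:", round(float(1 - binary.mean()), 3))
```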

  16. An evaluation of inferential procedures for adaptive clinical trial designs with pre-specified rules for modifying the sample size.

    PubMed

    Levin, Gregory P; Emerson, Sarah C; Emerson, Scott S

    2014-09-01

    Many papers have introduced adaptive clinical trial methods that allow modifications to the sample size based on interim estimates of treatment effect. There has been extensive commentary on type I error control and efficiency considerations, but little research on estimation after an adaptive hypothesis test. We evaluate the reliability and precision of different inferential procedures in the presence of an adaptive design with pre-specified rules for modifying the sampling plan. We extend group sequential orderings of the outcome space based on the stage at stopping, likelihood ratio statistic, and sample mean to the adaptive setting in order to compute median-unbiased point estimates, exact confidence intervals, and P-values uniformly distributed under the null hypothesis. The likelihood ratio ordering is found to average shorter confidence intervals and produce higher probabilities of P-values below important thresholds than alternative approaches. The bias adjusted mean demonstrates the lowest mean squared error among candidate point estimates. A conditional error-based approach in the literature has the benefit of being the only method that accommodates unplanned adaptations. We compare the performance of this and other methods in order to quantify the cost of failing to plan ahead in settings where adaptations could realistically be pre-specified at the design stage. We find the cost to be meaningful for all designs and treatment effects considered, and to be substantial for designs frequently proposed in the literature. © 2014, The International Biometric Society.

  17. Heat-Related Deaths in Hot Cities: Estimates of Human Tolerance to High Temperature Thresholds

    PubMed Central

    Harlan, Sharon L.; Chowell, Gerardo; Yang, Shuo; Petitti, Diana B.; Morales Butler, Emmanuel J.; Ruddell, Benjamin L.; Ruddell, Darren M.

    2014-01-01

    In this study we characterized the relationship between temperature and mortality in central Arizona desert cities that have an extremely hot climate. Relationships between daily maximum apparent temperature (ATmax) and mortality for eight condition-specific causes and all-cause deaths were modeled for all residents and separately for males and females ages <65 and ≥65 during the months May–October for years 2000–2008. The most robust relationship was between ATmax on day of death and mortality from direct exposure to high environmental heat. For this condition-specific cause of death, the heat thresholds in all gender and age groups (ATmax = 90–97 °F; 32.2‒36.1 °C) were below local median seasonal temperatures in the study period (ATmax = 99.5 °F; 37.5 °C). Heat threshold was defined as ATmax at which the mortality ratio begins an exponential upward trend. Thresholds were identified in younger and older females for cardiac disease/stroke mortality (ATmax = 106 and 108 °F; 41.1 and 42.2 °C) with a one-day lag. Thresholds were also identified for mortality from respiratory diseases in older people (ATmax = 109 °F; 42.8 °C) and for all-cause mortality in females (ATmax = 107 °F; 41.7 °C) and males <65 years (ATmax = 102 °F; 38.9 °C). Heat-related mortality in a region that has already made some adaptations to predictable periods of extremely high temperatures suggests that more extensive and targeted heat-adaptation plans for climate change are needed in cities worldwide. PMID:24658410

  18. Dynamics of chromatic visual system processing differ in complexity between children and adults.

    PubMed

    Boon, Mei Ying; Suttle, Catherine M; Henry, Bruce I; Dain, Stephen J

    2009-06-30

    Measures of chromatic contrast sensitivity in children are lower than those of adults. This may be related to immaturities in signal processing at or near threshold. We have found that children's VEPs in response to low contrast supra-threshold chromatic stimuli are more intra-individually variable than those recorded from adults. Here, we report on linear and nonlinear analyses of chromatic VEPs recorded from children and adults. Two measures of signal-to-noise ratio are similar between the adults and children, suggesting that relatively high noise is unlikely to account for the poor clarity of negative and positive peak components in the children's VEPs. Nonlinear analysis indicates higher complexity of adults' than children's chromatic VEPs, at levels of chromatic contrast around and well above threshold.

  19. Defect Detection of Steel Surfaces with Global Adaptive Percentile Thresholding of Gradient Image

    NASA Astrophysics Data System (ADS)

    Neogi, Nirbhar; Mohanta, Dusmanta K.; Dutta, Pranab K.

    2017-12-01

    Steel strips are used extensively for white goods, auto bodies and other purposes where surface defects are not acceptable. On-line surface inspection systems can effectively detect and classify defects and help in taking corrective actions. For defect detection, gradients are widely used to highlight and subsequently segment areas of interest in a surface inspection system. Much of the time, segmentation by a fixed-value threshold leads to unsatisfactory results. As defects can be both very small and very large in size, segmentation of a gradient image based on percentile thresholding can lead to inadequate or excessive segmentation of defective regions. A global adaptive percentile thresholding of the gradient image has been formulated for blister defects and water deposits (a pseudo defect) in steel strips. The developed method adaptively changes the percentile value used for thresholding depending on the number of pixels above specific gray-level values of the gradient image. The method is able to segment defective regions selectively, preserving the characteristics of defects irrespective of defect size. The developed method performs better than the Otsu thresholding method and an adaptive thresholding method based on local properties.
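
    A minimal sketch of adapting the thresholding percentile to image content: the fraction of strong-gradient pixels steers the percentile between a very high value (nearly defect-free strip, few pixels kept) and a lower one (large defects, larger regions kept). The reference level, percentile range and interpolation rule are illustrative assumptions, not the paper's rule:

```python
import numpy as np
from scipy import ndimage

def adaptive_percentile_threshold(gradient, ref_level=None, p_low=99.8, p_high=95.0):
    """Choose the thresholding percentile from the fraction of strong-gradient
    pixels, then segment the gradient image at that percentile."""
    if ref_level is None:
        ref_level = gradient.mean() + 3 * gradient.std()
    strong_fraction = np.mean(gradient > ref_level)
    # Interpolate the percentile between p_low and p_high as defect coverage grows.
    weight = np.clip(strong_fraction / 0.01, 0.0, 1.0)
    pct = (1 - weight) * p_low + weight * p_high
    return gradient > np.percentile(gradient, pct)

rng = np.random.default_rng(10)
strip = rng.normal(128, 4, (256, 256))
strip[100:140, 80:180] += 40                      # a blister-like defect
grad = np.sqrt(ndimage.sobel(strip, axis=0) ** 2 + ndimage.sobel(strip, axis=1) ** 2)
mask = adaptive_percentile_threshold(grad)
print("segmented defect pixels:", int(mask.sum()))
```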

  20. THE EFFECTS OF VARIATIONS IN THE CONCENTRATION OF OXYGEN AND OF GLUCOSE ON DARK ADAPTATION

    PubMed Central

    McFarland, R. A.; Forbes, W. H.

    1940-01-01

    In this study we have analyzed the effects of variations in the concentrations of oxygen and of blood sugar on light sensitivity; i.e. dark adaptation. The experiments were carried out in an air-conditioned light-proof chamber where the concentrations of oxygen could be changed by dilution with nitrogen or by inhaling oxygen from a cylinder. The blood sugar was lowered by the injection of insulin and raised by the ingestion of glucose. The dark adaptation curves were plotted from data secured with an apparatus built according to specifications outlined by Hecht and Shlaer. During each experiment, observations were first made in normal air with the subject under basal conditions followed by one, and in most instances two, periods under the desired experimental conditions involving either anoxia or hyper- or hypoglycemia or variations in both the oxygen tension and blood sugar at the same time. 1. Dark adaptation curves were plotted (threshold against time) in normal air and compared with those obtained while inhaling lowered concentrations of oxygen. A decrease in sensitivity was observed with lowered oxygen tensions. Both the rod and cone portions of the curves were influenced in a similar way. These effects were counteracted by inhaling oxygen, the final rod thresholds returning to about the level of the normal base line in air or even below it within 2 to 3 minutes. The impairment was greatest for those with a poorer tolerance for low O2. Both the inter- and intra-individual variability in thresholds increased significantly at the highest altitude. 2. In a second series of tests control curves were obtained in normal air. Then while each subject remained dark adapted, the concentrations of oxygen were gradually decreased. The regeneration of visual purple was apparently complete during the 40 minutes of dark adaptation, yet in each case the thresholds continued to rise in direct proportion to the degree of anoxia. The inhalation of oxygen from a cylinder quickly counteracted the effects for the thresholds returned to the original control level within 2 to 3 minutes. 3. In experiments where the blood sugar was raised by the ingestion of glucose in normal air, no significant changes in the thresholds were observed except when the blood sugar was rapidly falling toward the end of the glucose tolerance tests. However, when glucose was ingested at the end of an experiment in low oxygen, while the subject remained dark adapted, the effects of the anoxia were largely counteracted within 6 to 8 minutes. 4. The influence of low blood sugar on light sensitivity was then studied by injecting insulin. The thresholds were raised as soon as the effects of the insulin produced a fall in the blood sugar. When the subjects inhaled oxygen the thresholds were lowered. Then when the oxygen was withdrawn so that the subject was breathing normal air, the thresholds rose again within 1 to 2 minutes. Finally, if the blood sugar was raised by ingesting glucose, the average threshold fell to the original control level or even below it. 5. The combined effects of low oxygen and low blood sugar on light sensitivity were studied in one subject (W. F.). These effects appeared to be greater than when a similar degree of anoxia or hypoglycemia was brought about separately. 6. In a series of experiments on ten subjects the dark adaptation curves were obtained both in the basal state and after a normal breakfast. In nine of the ten subjects, the food increased the sensitivity of the subjects to light. 7. 
The experiments reported above lend support to the hypothesis that both anoxia and hypoglycemia produce their effects on light sensitivity in essentially the same way; namely, by slowing the oxidative processes. Consequently the effects of anoxia may be ameliorated by giving glucose and the effects of hypoglycemia by inhaling oxygen. In our opinion, the changes may be attributed directly to the effects on the nervous tissue of the visual mechanism and the brain rather than on the photochemical processes of the retina. PMID:19873200

  1. Adaptive threshold control for auto-rate fallback algorithm in IEEE 802.11 multi-rate WLANs

    NASA Astrophysics Data System (ADS)

    Wu, Qilin; Lu, Yang; Zhu, Xiaolin; Ge, Fangzhen

    2012-03-01

    The IEEE 802.11 standard supports multiple rates for data transmission in the physical layer. To improve network performance, a rate adaptation scheme called auto-rate fallback (ARF) is widely adopted in practice. However, the ARF scheme suffers performance degradation in environments with multiple contending nodes. In this article, we propose a novel rate adaptation scheme called ARF with adaptive threshold control. In such environments, the proposed scheme can effectively mitigate the effect of frame collisions on rate adaptation decisions by adaptively adjusting the rate-up and rate-down thresholds according to the current collision level. Simulation results show that the proposed scheme achieves significantly higher throughput than other existing rate adaptation schemes. Furthermore, the simulation results also demonstrate that the proposed scheme can respond effectively to varying channel conditions.
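
    A compact sketch of the idea, with an illustrative adaptation rule (not the paper's): an ARF-style controller whose rate-up and rate-down counters must reach thresholds that grow with the estimated collision probability, so that collision losses are less likely to trigger spurious rate drops:

```python
import random

RATES_MBPS = [6, 12, 24, 36, 48, 54]

class AdaptiveARF:
    """ARF-style rate controller whose rate-up/rate-down thresholds grow with the
    estimated collision probability (an illustrative adaptation rule)."""

    def __init__(self, up_base=10, down_base=2):
        self.idx = 0
        self.successes = 0
        self.failures = 0
        self.up_base, self.down_base = up_base, down_base

    def thresholds(self, collision_prob):
        scale = 1.0 + 4.0 * collision_prob          # more tolerance when collisions are likely
        return max(1, round(self.up_base * scale)), max(1, round(self.down_base * scale))

    def on_result(self, success, collision_prob):
        up_thr, down_thr = self.thresholds(collision_prob)
        if success:
            self.successes += 1
            self.failures = 0
            if self.successes >= up_thr and self.idx < len(RATES_MBPS) - 1:
                self.idx += 1
                self.successes = 0
        else:
            self.failures += 1
            self.successes = 0
            if self.failures >= down_thr and self.idx > 0:
                self.idx -= 1
                self.failures = 0
        return RATES_MBPS[self.idx]

random.seed(0)
ctrl = AdaptiveARF()
for _ in range(200):                                # simulate frames with 20% collision losses
    ok = random.random() > 0.2
    rate = ctrl.on_result(ok, collision_prob=0.2)
print("final rate:", rate, "Mb/s")
```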

  2. Sex ratio variation in Iberian pigs.

    PubMed

    Toro, M A; Fernández, A; García-Cortés, L A; Rodrigáñez, J; Silió, L

    2006-06-01

    Within the area of sex allocation, one of the topics that has attracted much attention is the sex ratio problem. Fisher (1930) proposed that equal numbers of males and females are promoted by natural selection and that the 1:1 ratio has adaptive significance. But the empirical success of Fisher's theory remains doubtful because a sex ratio of 0.50 is also expected from the chromosomal mechanism of sex determination. Another way of approaching the subject is to note that Fisher's argument relies on the underlying assumption that offspring inherit their parents' tendency toward a biased sex ratio, and therefore that genetic variance for this trait exists. Here, we analyzed sex ratio data on 56,807 piglets from 550 boars and 1893 dams. In addition to a classical analysis of heterogeneity, we performed analyses fitting linear and threshold animal models in a Bayesian framework using Gibbs sampling techniques. The marginal posterior mean of heritability was 2.63 × 10⁻⁴ under the sire linear model and 9.17 × 10⁻⁴ under the sire threshold model. The probability of the hypothesis p(h² = 0) under the latter model was 0.996. We also did not detect any trend in sex ratio related to maternal age. From an evolutionary point of view, chromosomal sex determination acts as a constraint that precludes control of offspring sex ratio in vertebrates, and it should be included in the general theory of sex allocation. From a practical point of view, this means that the sex ratio in domestic species is hardly susceptible to modification by artificial selection.

  3. Closed-loop adaptation of neurofeedback based on mental effort facilitates reinforcement learning of brain self-regulation.

    PubMed

    Bauer, Robert; Fels, Meike; Royter, Vladislav; Raco, Valerio; Gharabaghi, Alireza

    2016-09-01

    Considering self-rated mental effort during neurofeedback may improve training of brain self-regulation. Twenty-one healthy, right-handed subjects performed kinesthetic motor imagery of opening their left hand, while threshold-based classification of beta-band desynchronization resulted in proprioceptive robotic feedback. The experiment consisted of two blocks in a cross-over design. The participants rated their perceived mental effort nine times per block. In the adaptive block, the threshold was adjusted on the basis of these ratings whereas adjustments were carried out at random in the other block. Electroencephalography was used to examine the cortical activation patterns during the training sessions. The perceived mental effort was correlated with the difficulty threshold of neurofeedback training. Adaptive threshold-setting reduced mental effort and increased the classification accuracy and positive predictive value. This was paralleled by an inter-hemispheric cortical activation pattern in low frequency bands connecting the right frontal and left parietal areas. Optimal balance of mental effort was achieved at thresholds significantly higher than maximum classification accuracy. Rating of mental effort is a feasible approach for effective threshold-adaptation during neurofeedback training. Closed-loop adaptation of the neurofeedback difficulty level facilitates reinforcement learning of brain self-regulation. Copyright © 2016 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.

  4. A modified JPEG-LS lossless compression method for remote sensing images

    NASA Astrophysics Data System (ADS)

    Deng, Lihua; Huang, Zhenghua

    2015-12-01

    Like many variable-length source coders, JPEG-LS is highly vulnerable to the channel errors that occur in the transmission of remote sensing images. Error propagation is one of the major factors affecting its robustness. The common way to improve the error resilience of JPEG-LS is to divide the image into many strips or blocks and code each of them independently, but this reduces coding efficiency. In this paper, a block-based JPEG-LS lossless compression method with an adaptive parameter is proposed. In the modified scheme, the threshold parameter RESET is adapted to each image, and the compression efficiency remains close to that of conventional JPEG-LS.

  5. The Patient Protection and Affordable Care Act's provisions regarding medical loss ratios and quality: evidence from Texas.

    PubMed

    Quast, Troy

    2013-01-01

    The Patient Protection and Affordable Care Act (PPACA) includes a provision that penalizes insurance companies if their Medical Loss Ratio (MLR) falls below a specified threshold. The MLR is roughly measured as the ratio of health care expenses to premiums paid by enrollees. I investigate whether there is a relationship between MLRs and the quality of care provided by insurance companies. I employ a ten-year sample of market-level financial data and quality variables for Texas insurers, as well as relevant control variables, in regression analyses that utilize insurer and market fixed effects. Of the 15 quality measures, only one has a statistically significant relationship with the MLR. For this measure, the relationship is negative. Although the MLR provision may provide incentives for insurance companies to lower premiums, this sample does not suggest that there is likely to be a beneficial effect on quality.

  6. Fission gas bubble identification using MATLAB's image processing toolbox

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Collette, R.

    Automated image processing routines have the potential to aid in the fuel performance evaluation process by eliminating bias in human judgment that may vary from person-to-person or sample-to-sample. This study presents several MATLAB based image analysis routines designed for fission gas void identification in post-irradiation examination of uranium molybdenum (U–Mo) monolithic-type plate fuels. Frequency domain filtration, enlisted as a pre-processing technique, can eliminate artifacts from the image without compromising the critical features of interest. This process is coupled with a bilateral filter, an edge-preserving noise removal technique aimed at preparing the image for optimal segmentation. Adaptive thresholding proved to be the most consistent gray-level feature segmentation technique for U–Mo fuel microstructures. The Sauvola adaptive threshold technique segments the image based on histogram weighting factors in stable contrast regions and local statistics in variable contrast regions. Once all processing is complete, the algorithm outputs the total fission gas void count, the mean void size, and the average porosity. The final results demonstrate an ability to extract fission gas void morphological data faster, more consistently, and at least as accurately as manual segmentation methods. Highlights: •Automated image processing can aid in the fuel qualification process. •Routines are developed to characterize fission gas bubbles in irradiated U–Mo fuel. •Frequency domain filtration effectively eliminates FIB curtaining artifacts. •Adaptive thresholding proved to be the most accurate segmentation method. •The techniques established are ready to be applied to large scale data extraction testing.
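
    As a rough illustration of the segmentation step, the sketch below applies the standard Sauvola local-threshold formula, T(x, y) = m(x, y) * (1 + k * (s(x, y)/R - 1)), to a synthetic image using NumPy/SciPy rather than MATLAB's Image Processing Toolbox; the window size, k, and R values are generic defaults, not the parameters used in the study.

    ```python
    # Minimal Sauvola adaptive-threshold sketch (local mean m, local std s).
    import numpy as np
    from scipy.ndimage import uniform_filter


    def sauvola_threshold(image, window=25, k=0.2, R=128.0):
        """Return a boolean mask of pixels darker than the local Sauvola threshold."""
        img = image.astype(np.float64)
        mean = uniform_filter(img, size=window)
        mean_sq = uniform_filter(img * img, size=window)
        std = np.sqrt(np.maximum(mean_sq - mean * mean, 0.0))
        threshold = mean * (1.0 + k * (std / R - 1.0))
        return img < threshold  # e.g., dark fission-gas voids on a brighter matrix


    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        synthetic = rng.normal(180, 10, size=(256, 256))
        synthetic[100:120, 100:120] = 60.0          # a synthetic dark "void"
        mask = sauvola_threshold(synthetic)
        print("segmented pixels:", int(mask.sum()))
    ```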

  7. Optimization of Adaptive Intraply Hybrid Fiber Composites with Reliability Considerations

    NASA Technical Reports Server (NTRS)

    Shiao, Michael C.; Chamis, Christos C.

    1994-01-01

    The reliability with bounded distribution parameters (mean, standard deviation) was maximized and the reliability-based cost was minimized for adaptive intra-ply hybrid fiber composites by using a probabilistic method. The probabilistic method accounts for all naturally occurring uncertainties including those in constituent material properties, fabrication variables, structure geometry, and control-related parameters. Probabilistic sensitivity factors were computed and used in the optimization procedures. For actuated change in the angle of attack of an airfoil-like composite shell structure with an adaptive torque plate, the reliability was maximized to 0.9999 probability, with constraints on the mean and standard deviation of the actuation material volume ratio (percentage of actuation composite material in a ply) and the actuation strain coefficient. The reliability-based cost was minimized for an airfoil-like composite shell structure with an adaptive skin and a mean actuation material volume ratio as the design parameter. At a 0.9 mean actuation material volume ratio, the minimum cost was obtained.

  8. Effects of urbanization on benthic macroinvertebrate communities in streams, Anchorage, Alaska

    USGS Publications Warehouse

    Ourso, Robert T.

    2001-01-01

    The effect of urbanization on stream macroinvertebrate communities was examined by using data gathered during a 1999 reconnaissance of 14 sites in the Municipality of Anchorage, Alaska. Data collected included macroinvertebrate abundance, water chemistry, and trace elements in bed sediments. Macroinvertebrate relative-abundance data were edited and used in metric and index calculations. Population density was used as a surrogate for urbanization. Cluster analysis (unweighted pair-group method using arithmetic means) of macroinvertebrate presence-absence data showed a well-defined separation between urbanized and nonurbanized sites as well as extracted sites that did not cleanly fall into either category. Water quality in Anchorage generally declined with increasing urbanization (population density). Of 59 variables examined, 31 correlated with urbanization. Local regression analysis extracted 11 variables that showed a significant impairment threshold response and 6 that showed a significant linear response. Significant biological variables for determining the impairment threshold in this study were the Margalef diversity index, Ephemeroptera-Plecoptera-Trichoptera taxa richness, and total taxa richness. Significant thresholds were observed in the water-chemistry variables conductivity, dissolved organic carbon, potassium, and total dissolved solids. Significant thresholds in trace elements in bed sediments included arsenic, iron, manganese, and lead. Results suggest that sites in Anchorage that have ratios of population density to road density greater than 70, storm-drain densities greater than 0.45 miles per square mile, road densities greater than 4 miles per square mile, or population densities greater than 125-150 persons per square mile may require further monitoring to determine if the stream has become impaired. This population density is far less than the 1,000 persons per square mile used by the U.S. Census Bureau to define an urban area.

  9. A generalized linear integrate-and-fire neural model produces diverse spiking behaviors.

    PubMed

    Mihalaş, Stefan; Niebur, Ernst

    2009-03-01

    For simulations of neural networks, there is a trade-off between the size of the network that can be simulated and the complexity of the model used for individual neurons. In this study, we describe a generalization of the leaky integrate-and-fire model that produces a wide variety of spiking behaviors while still being analytically solvable between firings. For different parameter values, the model produces spiking or bursting, tonic, phasic or adapting responses, depolarizing or hyperpolarizing after potentials and so forth. The model consists of a diagonalizable set of linear differential equations describing the time evolution of membrane potential, a variable threshold, and an arbitrary number of firing-induced currents. Each of these variables is modified by an update rule when the potential reaches threshold. The variables used are intuitive and have biological significance. The model's rich behavior does not come from the differential equations, which are linear, but rather from complex update rules. This single-neuron model can be implemented using algorithms similar to the standard integrate-and-fire model. It is a natural match with event-driven algorithms for which the firing times are obtained as a solution of a polynomial equation.
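
    A simplified Euler-integration sketch of such a model is given below: a leaky integrate-and-fire neuron with a variable threshold and two firing-induced currents, with update rules applied at each threshold crossing. All parameter values and the time step are illustrative choices, not the published parameter sets.

    ```python
    # Generalized leaky integrate-and-fire sketch: linear dynamics between spikes,
    # update rules (reset, induced-current kicks, threshold floor) at spike time.
    import numpy as np


    def simulate_glif(I_ext, dt=1e-4, C=1e-9, G=50e-9, E_L=-0.07,
                      V_reset=-0.07, theta_inf=-0.05, theta_reset=-0.06,
                      a=5.0, b=10.0, k=(200.0, 20.0), A=(5e-12, -0.3e-12), R=(0.0, 1.0)):
        V, theta = E_L, theta_inf
        I_ind = np.zeros(2)                 # firing-induced currents
        spikes = []
        for step, Ie in enumerate(I_ext):
            # linear dynamics between spikes
            I_ind += dt * (-np.array(k) * I_ind)
            V += dt / C * (Ie + I_ind.sum() - G * (V - E_L))
            theta += dt * (a * (V - E_L) - b * (theta - theta_inf))
            if V >= theta:                  # update rules at threshold crossing
                spikes.append(step * dt)
                I_ind = np.array(R) * I_ind + np.array(A)
                V = V_reset
                theta = max(theta_reset, theta)
        return np.array(spikes)


    if __name__ == "__main__":
        t = np.arange(0, 1.0, 1e-4)
        drive = np.where(t > 0.1, 1.5e-9, 0.0)      # step current of 1.5 nA
        print("spike count:", simulate_glif(drive).size)
    ```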

  10. A Generalized Linear Integrate-and-Fire Neural Model Produces Diverse Spiking Behaviors

    PubMed Central

    Mihalaş, Ştefan; Niebur, Ernst

    2010-01-01

    For simulations of neural networks, there is a trade-off between the size of the network that can be simulated and the complexity of the model used for individual neurons. In this study, we describe a generalization of the leaky integrate-and-fire model that produces a wide variety of spiking behaviors while still being analytically solvable between firings. For different parameter values, the model produces spiking or bursting, tonic, phasic or adapting responses, depolarizing or hyperpolarizing after potentials and so forth. The model consists of a diagonalizable set of linear differential equations describing the time evolution of membrane potential, a variable threshold, and an arbitrary number of firing-induced currents. Each of these variables is modified by an update rule when the potential reaches threshold. The variables used are intuitive and have biological significance. The model’s rich behavior does not come from the differential equations, which are linear, but rather from complex update rules. This single-neuron model can be implemented using algorithms similar to the standard integrate-and-fire model. It is a natural match with event-driven algorithms for which the firing times are obtained as a solution of a polynomial equation. PMID:18928368

  11. Variable-Threshold Threshold Elements,

    DTIC Science & Technology

    A threshold element is a mathematical model of certain types of logic gates and of a biological neuron. Much work has been done on the subject of threshold elements with fixed thresholds; this study concerns itself with elements in which the threshold may be varied, variable-threshold threshold elements. Physical realizations include resistor-transistor elements, in which the threshold is simply a voltage. Variation of the threshold causes the

  12. An assessment of postcranial indices, ratios, and body mass versus eco-geographical variables of prehistoric Jomon, Yayoi agriculturalists, and Kumejima Islanders of Japan.

    PubMed

    Seguchi, Noriko; Quintyn, Conrad B; Yonemoto, Shiori; Takamuku, Hirofumi

    2017-09-10

    We explore variations in body and limb proportions of the Jomon hunter-gatherers (14,000-2500 BP), the Yayoi agriculturalists (2500-1700 BP) of Japan, and the Kumejima Islanders of the Ryukyus (1600-1800 AD) with 11 geographically diverse skeletal postcranial samples from Africa, Europe, Asia, Australia, and North America using brachial-crural indices, femur head-breadth-to-femur length ratio, femur head-breadth-to-lower-limb-length ratio, and body mass as indicators of phenotypic climatic adaptation. Specifically, we test the hypothesis that variation in limb proportions seen in Jomon, Yayoi, and Kumejima is a complex interaction of genetic adaptation; development and allometric constraints; selection, gene flow and genetic drift with changing cultural factors (i.e., nutrition) and climate. The skeletal data (1127 individuals) were subjected to principal components analysis, Manly's permutation multiple regression tests, and Relethford-Blangero analysis. The results of Manly's tests indicate that body proportions and body mass are significantly correlated with latitude, and minimum and maximum temperatures while limb proportions were not significantly correlated with these climatic variables. Principal components plots separated "climatic zones": tropical, temperate, and arctic populations. The indigenous Jomon showed cold-adapted body proportions and warm-adapted limb proportions. Kumejima showed cold-adapted body proportions and limbs. The Yayoi adhered to the Allen-Bergmann expectation of cold-adapted body and limb proportions. Relethford-Blangero analysis showed that Kumejima experienced gene flow indicated by high observed variances while Jomon experienced genetic drift indicated by low observed variances. The complex interaction of evolutionary forces and development/nutritional constraints is implicated in the mismatch of limb and body proportions. © 2017 Wiley Periodicals, Inc.

  13. Adaptive threshold determination for efficient channel sensing in cognitive radio network using mobile sensors

    NASA Astrophysics Data System (ADS)

    Morshed, M. N.; Khatun, S.; Kamarudin, L. M.; Aljunid, S. A.; Ahmad, R. B.; Zakaria, A.; Fakir, M. M.

    2017-03-01

    The spectrum saturation problem is a major issue in wireless communication systems all over the world. A huge number of users joins the existing fixed frequency bands each day, but the bandwidth is not increasing. These requirements demand efficient and intelligent use of the spectrum. To address this issue, cognitive radio (CR) is the best choice. Spectrum sensing of a wireless heterogeneous network is a fundamental task for detecting the presence of primary users' signals in CR networks. In order to protect primary users (PUs) from harmful interference, the spectrum sensing scheme is required to perform well even in low signal-to-noise ratio (SNR) environments. Meanwhile, the sensing period is usually required to be short enough so that secondary (unlicensed) users (SUs) can fully utilize the available spectrum. CR networks can be designed to manage the radio spectrum more efficiently by utilizing the spectrum holes in primary users' licensed frequency bands. In this paper, we propose an adaptive threshold detection method to detect the presence of a PU signal using the free space path loss (FSPL) model in a 2.4 GHz WLAN network. The model is designed for mobile sensors embedded in smartphones. The mobile sensors act as SUs while the existing WLAN network (channels) acts as the PU. The theoretical results show that the desired detection threshold range of the mobile sensors mainly depends on the noise floor level at the location under consideration.
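
    A hedged sketch of the kind of calculation involved is shown below: the free-space path loss at 2.4 GHz is used to predict the primary-user power at the sensor, and an energy-detection threshold is placed relative to the local noise floor. The rule of putting the threshold midway (in dB) between the noise floor and the predicted PU level, and all numeric values, are illustrative assumptions rather than the paper's method.

    ```python
    # FSPL-based threshold selection for a 2.4 GHz energy detector (illustrative).
    import math


    def fspl_db(distance_m, freq_hz=2.4e9):
        """Free-space path loss in dB: 20log10(d) + 20log10(f) + 20log10(4*pi/c)."""
        c = 299_792_458.0
        return (20 * math.log10(distance_m) + 20 * math.log10(freq_hz)
                + 20 * math.log10(4 * math.pi / c))


    def detection_threshold_dbm(noise_floor_dbm, pu_tx_power_dbm, distance_m):
        expected_rx_dbm = pu_tx_power_dbm - fspl_db(distance_m)
        # place the threshold between the noise floor and the predicted PU level
        return 0.5 * (noise_floor_dbm + expected_rx_dbm)


    if __name__ == "__main__":
        thr = detection_threshold_dbm(noise_floor_dbm=-95.0,
                                      pu_tx_power_dbm=20.0, distance_m=50.0)
        print(f"threshold = {thr:.1f} dBm")
    ```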

  14. Phase coherence adaptive processor for automatic signal detection and identification

    NASA Astrophysics Data System (ADS)

    Wagstaff, Ronald A.

    2006-05-01

    A continuously adapting acoustic signal processor with an automatic detection/decision aid is presented. Its purpose is to preserve the signals of tactical interest, and filter out other signals and noise. It utilizes single sensor or beamformed spectral data and transforms the signal and noise phase angles into "aligned phase angles" (APA). The APA increase the phase temporal coherence of signals and leave the noise incoherent. Coherence thresholds are set, which are representative of the type of source "threat vehicle" and the geographic area or volume in which it is operating. These thresholds separate signals, based on the "quality" of their APA coherence. An example is presented in which signals from a submerged source in the ocean are preserved, while clutter signals from ships and noise are entirely eliminated. Furthermore, the "signals of interest" were identified by the processor's automatic detection aid. Similar performance is expected for air and ground vehicles. The processor's equations are formulated in such a manner that they can be tuned to eliminate noise and exploit signal, based on the "quality" of their APA temporal coherence. The mathematical formulation for this processor is presented, including the method by which the processor continuously self-adapts. Results show nearly complete elimination of noise, with only the selected category of signals remaining, and accompanying enhancements in spectral and spatial resolution. In most cases, the concept of signal-to-noise ratio loses significance, and an "adaptive automated detection/decision aid" is more relevant.

  15. A modified varying-stage adaptive phase II/III clinical trial design.

    PubMed

    Dong, Gaohong; Vandemeulebroecke, Marc

    2016-07-01

    Conventionally, adaptive phase II/III clinical trials are carried out with a strict two-stage design. Recently, a varying-stage adaptive phase II/III clinical trial design has been developed. In this design, following the first stage, an intermediate stage can be adaptively added to obtain more data, so that a more informative decision can be made. Therefore, the number of further investigational stages is determined based upon data accumulated to the interim analysis. This design considers two plausible study endpoints, with one of them initially designated as the primary endpoint. Based on interim results, the other endpoint can be switched to serve as the primary endpoint. However, in many therapeutic areas, the primary study endpoint is well established. Therefore, we modify this design to consider one study endpoint only so that it may be more readily applicable in real clinical trial designs. Our simulations show that, like the original design, this modified design controls the Type I error rate, and that design parameters such as the threshold probability for the two-stage setting and the alpha allocation ratio in the two-stage versus the three-stage setting have a great impact on the design characteristics. However, this modified design requires a larger sample size for the initial stage, and the probability of futility becomes much higher when the threshold probability for the two-stage setting gets smaller. Copyright © 2016 John Wiley & Sons, Ltd.

  16. Point of no return: experimental determination of the lethal hydraulic threshold during drought for loblolly pine (Pinus taeda)

    NASA Astrophysics Data System (ADS)

    Hammond, W.; Yu, K.; Wilson, L. A.; Will, R.; Anderegg, W.; Adams, H. D.

    2017-12-01

    The strength of the terrestrial carbon sink—dominated by forests—remains one of the greatest uncertainties in climate change modelling. How forests will respond to increased variability in temperature and precipitation is poorly understood, and experimental study to better inform global vegetation models in this area is needed. Necessary for achieving this goal is an understanding of how increased temperatures and drought will affect landscape level distributions of plant species. Quantifying physiological thresholds representing a point of no return from drought stress, including thresholds in hydraulic function, is critical to this end. Recent theoretical, observational, and modelling research has converged upon a threshold of 60 percent loss of hydraulic conductivity at mortality (PLClethal). However, direct experimental determination of lethal points in conductivity and cavitation during drought is lacking. We quantified thresholds in hydraulic function in Loblolly pine, Pinus taeda, a commercially important timber species. In a greenhouse experiment, we exposed saplings (n = 96 total) to drought and rewatered treatment groups at variable levels of increasing water stress determined by pre-selected targets in pre-dawn water potential. Treatments also included a watered control with no drought, and drought with no rewatering. We measured physiological responses to water stress, including hydraulic conductivity, native PLC, water potential, foliar color, canopy die-back, and dark-adapted chlorophyll fluorescence. Following the rewatering treatment, we observed saplings for at least two months to determine which survived and which died. Using these data we calculated lethal physiological thresholds in water potential, directly measured PLC, and PLC inferred from water potential using a hydraulic vulnerability curve. We found that PLClethal inferred from water potential agreed with the 60% threshold suggested by previous research. However, directly measured PLC supported a much higher threshold. Beyond PLClethal, some trees survived by basal and epicormic re-sprouting, despite complete top-kill of existing foliage. Additional empirical study of multiple species to represent functional groups is needed to provide lethal thresholds for models presently in development.

  17. DARK ADAPTATION IN DINEUTES

    PubMed Central

    Clark, Leonard B.

    1938-01-01

    The level of dark adaptation of the whirligig beetle can be measured in terms of the threshold intensity calling forth a response. The course of dark adaptation was determined at levels of light adaptation of 6.5, 91.6, and 6100 foot-candles. All data can be fitted by the same curve. This indicates that dark adaptation follows parts of the same course irrespective of the level of light adaptation. The intensity of the adapting light determines the level at which dark adaptation will begin. The relation between log aI0 (the instantaneous threshold) and the log of the adapting light intensity is linear over the range studied. PMID:19873056

  18. Nitrogen (N) Deposition Impacts Seedling Growth of Pinus massoniana via N:P Ratio Effects and the Modulation of Adaptive Responses to Low P (Phosphorus)

    PubMed Central

    Zhang, Yi; Zhou, Zhichun; Yang, Qing

    2013-01-01

    Background In forest ecosystems with phosphorus (P) deficiency, the impact of atmospheric nitrogen (N) deposition on nutritional traits related to P uptake and P use potentially determines plant growth and vegetation productivity. Methodology/Principal Findings Two N deposition simulations were combined with three soil P conditions (homogeneous P deficiency with evenly low P; heterogeneous P deficiency with low subsoil P and high topsoil P; high P) using four full-sib families of Masson pine (Pinus massoniana). Under homogeneous P deficiency, N had a low effect on growth due to higher N:P ratios, whereas N-sensitive genotypes had lower N:P ratios and greater N sensitivity. The N effect increased under higher P conditions due to increased P concentration and balanced N:P ratios. An N:P threshold of 12.0–15.0 was detected, and growth was increased by N with an N:P ratio ≤ 12.0 and increased by P with an N:P ratio ≥ 15.0. Under homogeneous P deficiency, increased P use efficiency by N deposition improved growth. Under heterogeneous P deficiency, a greater P deficiency under N deposition due to increased N:P ratios induced greater adaptive responses to low P (root acid phosphatase secretion and topsoil root proliferation) and improved P acquisition and growth. Conclusions/Significance N deposition diversely affected seedling growth across different P conditions and genotypes via N:P ratio effects and the modulation of adaptive responses to low P. The positive impact of N on growth was genotype-specific and increased by soil P addition due to balanced N:P ratios. These results indicate the significance of breeding N-sensitive tree genotypes and improving forest soil P status to compensate for increasing N deposition. PMID:24205376

  19. Evaluation of the stability indices for the thunderstorm forecasting in the region of Belgrade, Serbia

    NASA Astrophysics Data System (ADS)

    Vujović, D.; Paskota, M.; Todorović, N.; Vučković, V.

    2015-07-01

    The pre-convective atmosphere over Serbia during the ten-year period (2001-2010) was investigated using the radiosonde data from one meteorological station and the thunderstorm observations from thirteen SYNOP meteorological stations. In order to verify their ability to forecast a thunderstorm, several stability indices were examined. Rank sum scores (RSSs) were used to segregate indices and parameters which can differentiate between a thunderstorm and no-thunderstorm event. The following indices had the best RSS values: Lifted index (LI), K index (KI), Showalter index (SI), Boyden index (BI), Total totals (TT), dew-point temperature and mixing ratio. The threshold value test was used in order to determine the appropriate threshold values for these variables. The threshold with the best skill scores was chosen as the optimal. The thresholds were validated in two ways: through the control data set, and by comparing the calculated index thresholds with the index values for a randomly chosen day with an observed thunderstorm. The index with the highest skill for thunderstorm forecasting was LI, followed by SI, KI and TT. The BI had the poorest skill scores.

  20. An adaptive threshold detector and channel parameter estimator for deep space optical communications

    NASA Technical Reports Server (NTRS)

    Arabshahi, P.; Mukai, R.; Yan, T. -Y.

    2001-01-01

    This paper presents a method for optimal adaptive setting of pulse-position-modulation pulse detection thresholds, which minimizes the total probability of error for the dynamically fading optical free-space channel.
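
    The sketch below illustrates the underlying idea of choosing the slot-detection threshold that minimizes the total error probability under two hypotheses ("no pulse" vs. "pulse"); a Gaussian statistical model and a grid search are assumptions made for illustration, and the fading channel enters only through the time-varying mean passed to the function.

    ```python
    # Threshold that minimizes p0*P(false alarm) + p1*P(miss) for two Gaussian
    # hypotheses; re-evaluated as the (assumed) fading channel changes the pulse mean.
    import numpy as np
    from scipy.stats import norm


    def optimal_threshold(mu0, sigma0, mu1, sigma1, p1=0.5, grid=4096):
        """Grid-search the decision threshold minimizing the total error probability."""
        p0 = 1.0 - p1
        gammas = np.linspace(mu0 - 4 * sigma0, mu1 + 4 * sigma1, grid)
        p_err = p0 * norm.sf(gammas, mu0, sigma0) + p1 * norm.cdf(gammas, mu1, sigma1)
        best = np.argmin(p_err)
        return gammas[best], p_err[best]


    if __name__ == "__main__":
        # as the channel fades, the pulse mean drops and the threshold adapts downward
        for pulse_mean in (10.0, 6.0, 3.0):
            gamma, pe = optimal_threshold(mu0=0.0, sigma0=1.0, mu1=pulse_mean, sigma1=1.5)
            print(f"pulse mean {pulse_mean:4.1f}: threshold {gamma:5.2f}, error prob {pe:.3e}")
    ```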

  1. Adaptive spline autoregression threshold method in forecasting Mitsubishi car sales volume at PT Srikandi Diamond Motors

    NASA Astrophysics Data System (ADS)

    Susanti, D.; Hartini, E.; Permana, A.

    2017-01-01

    Growing sales competition among companies in Indonesia means that every company needs proper planning in order to compete. One way to support such planning is to forecast car sales for the next few periods, so that the inventory of cars on hand is proportional to the number of cars needed. One method that can be used to obtain an accurate forecast is Adaptive Spline Threshold Autoregression (ASTAR). This paper therefore focuses on the use of the ASTAR method for forecasting the volume of car sales at PT Srikandi Diamond Motors using time series data. In this study, forecasts produced with the ASTAR method are approximately correct.

  2. Subsurface characterization with localized ensemble Kalman filter employing adaptive thresholding

    NASA Astrophysics Data System (ADS)

    Delijani, Ebrahim Biniaz; Pishvaie, Mahmoud Reza; Boozarjomehry, Ramin Bozorgmehry

    2014-07-01

    The ensemble Kalman filter (EnKF), a Monte Carlo sequential data assimilation method, has emerged as a promising tool for subsurface media characterization during the past decade. Because of the high computational cost of large ensembles, EnKF is limited to small ensemble sets in practice. This leads to spurious correlations in the covariance structure, producing incorrect updates or even divergence of the updated realizations. In this paper, a universal/adaptive thresholding method is presented to remove or mitigate spurious correlations in the forecast covariance matrix. The method is then extended to regularize the Kalman gain directly. Four different thresholding functions are considered for thresholding the forecast covariance and gain matrices: hard, soft, lasso, and Smoothly Clipped Absolute Deviation (SCAD) functions. Three benchmarks are used to evaluate the performance of these methods: a small 1D linear model and two 2D water-flooding (petroleum reservoir) cases with different levels of heterogeneity/nonlinearity. Besides adaptive thresholding, standard distance-dependent localization and a bootstrap Kalman gain are also implemented for comparison. We assessed each setup with different ensemble sizes to investigate the sensitivity of each method to ensemble size. The results indicate that thresholding the forecast covariance yields more reliable performance than thresholding the Kalman gain. Among the thresholding functions, SCAD is the most robust for both covariance and gain estimation. Our analyses emphasize that not all assimilation cycles require thresholding and that it should be applied judiciously during the early assimilation cycles. The proposed adaptive thresholding scheme outperforms the other methods for subsurface characterization on these benchmarks.
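
    For reference, the sketch below shows the element-wise thresholding rules named above, applied to an ensemble-estimated covariance matrix. The SCAD rule follows the standard Fan-Li form with a = 3.7; treating "lasso" as soft thresholding, and thresholding only off-diagonal entries, are simplifying assumptions for illustration.

    ```python
    # Hard, soft and SCAD thresholding of a sample (forecast) covariance matrix.
    import numpy as np


    def hard(x, lam):
        return np.where(np.abs(x) > lam, x, 0.0)


    def soft(x, lam):
        return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)


    def scad(x, lam, a=3.7):
        absx = np.abs(x)
        out = np.where(absx <= 2 * lam, soft(x, lam), x)
        mid = (absx > 2 * lam) & (absx <= a * lam)
        out = np.where(mid, ((a - 1) * x - np.sign(x) * a * lam) / (a - 2), out)
        return out


    def threshold_covariance(P, lam, rule=scad):
        """Threshold off-diagonal entries only, keeping ensemble variances intact."""
        P_t = rule(P, lam)
        np.fill_diagonal(P_t, np.diag(P))
        return P_t


    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        ens = rng.normal(size=(30, 8))                 # 30 members, 8 state variables
        P = np.cov(ens, rowvar=False)
        print(np.round(threshold_covariance(P, lam=0.1), 2))
    ```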

  3. Lowering threshold energy for femtosecond laser pulse photodisruption through turbid media using adaptive optics

    NASA Astrophysics Data System (ADS)

    Hansen, A.; Ripken, Tammo; Krueger, Ronald R.; Lubatschowski, Holger

    2011-03-01

    Focussed femtosecond laser pulses are applied in ophthalmic tissues to create an optical breakdown and therefore a tissue dissection through photodisruption. The threshold irradiance for the optical breakdown depends on the photon density in the focal volume which can be influenced by the pulse energy, the size of the irradiated area (focus), and the irradiation time. For an application in the posterior eye segment the aberrations of the anterior eye elements cause a distortion of the wavefront and therefore an increased focal volume which reduces the photon density and thus raises the required energy for surpassing the threshold irradiance. The influence of adaptive optics on lowering the pulse energy required for photodisruption by refining a distorted focus was investigated. A reduction of the threshold energy can be shown when using adaptive optics. The spatial confinement with adaptive optics furthermore raises the irradiance at constant pulse energy. The lowered threshold energy allows for tissue dissection with reduced peripheral damage. This offers the possibility for moving femtosecond laser surgery from corneal or lental applications in the anterior eye to vitreal or retinal applications in the posterior eye.

  4. Self-Tuning Threshold Method for Real-Time Gait Phase Detection Based on Ground Contact Forces Using FSRs.

    PubMed

    Tang, Jing; Zheng, Jianbin; Wang, Yang; Yu, Lie; Zhan, Enqi; Song, Qiuzhi

    2018-02-06

    This paper presents a novel methodology for detecting the gait phase of human walking on level ground. The previous threshold method (TM) sets a threshold to divide the ground contact forces (GCFs) into on-ground and off-ground states. However, previous methods for gait phase detection demonstrate no adaptability to different people and different walking speeds. Therefore, this paper presents a self-tuning triple threshold algorithm (STTTA) that calculates adjustable thresholds to adapt to human walking. Two force sensitive resistors (FSRs) were placed on the ball and heel to measure GCFs. Three thresholds (i.e., high-threshold, middle-threshold and low-threshold) were used to search out the maximum and minimum GCFs for the self-adjustment of the thresholds. The high-threshold was the main threshold used to divide the GCFs into on-ground and off-ground statuses. Then, the gait phases were obtained through the gait phase detection algorithm (GPDA), which provides the rules that determine the calculations for the STTTA. Finally, the reliability of the STTTA is determined by comparing its results with those of the Mariani method (referenced as the timing analysis module, TAM) and the Lopez-Meyer method. Experimental results show that the proposed method can be used to detect gait phases in real time and obtains high reliability when compared with the previous methods in the literature. In addition, the proposed method exhibits strong adaptability to different wearers walking at different walking speeds.
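
    A hedged sketch of the self-tuning idea follows: on-ground/off-ground decisions from ball and heel FSR forces, with thresholds re-derived from the recent minimum and maximum GCFs. Setting the three thresholds as fixed fractions of the running GCF range, and the phase-labelling rules, are illustrative choices, not necessarily the STTTA/GPDA rules.

    ```python
    # Self-tuning on/off-ground detection from two FSR ground contact forces.
    from collections import deque


    class SelfTuningDetector:
        def __init__(self, window=200, high_frac=0.3, mid_frac=0.2, low_frac=0.1):
            self.buf = deque(maxlen=window)      # recent GCF samples
            self.fracs = (high_frac, mid_frac, low_frac)

        def update(self, gcf):
            self.buf.append(gcf)
            lo, hi = min(self.buf), max(self.buf)
            span = max(hi - lo, 1e-6)
            high, mid, low = (lo + f * span for f in self.fracs)
            on_ground = gcf > high               # main decision uses the high threshold
            return on_ground, (high, mid, low)


    def gait_phase(ball_on, heel_on):
        """Map the two contact states to a coarse gait phase label."""
        if heel_on and not ball_on:
            return "heel-strike / loading"
        if heel_on and ball_on:
            return "mid-stance"
        if ball_on and not heel_on:
            return "push-off"
        return "swing"


    if __name__ == "__main__":
        ball, heel = SelfTuningDetector(), SelfTuningDetector()
        # hypothetical GCF traces (N): swing -> heel strike -> stance -> push-off
        ball_trace = [0, 0, 5, 40, 120, 130, 10]
        heel_trace = [0, 0, 90, 110, 60, 5, 0]
        for b_gcf, h_gcf in zip(ball_trace, heel_trace):
            b_on, _ = ball.update(b_gcf)
            h_on, _ = heel.update(h_gcf)
            print(gait_phase(b_on, h_on))
    ```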

  5. The human as a detector of changes in variance and bandwidth

    NASA Technical Reports Server (NTRS)

    Curry, R. E.; Govindaraj, T.

    1977-01-01

    The detection of changes in random process variance and bandwidth was studied. Psychophysical thresholds for these two parameters were determined using an adaptive staircase technique for second order random processes at two nominal periods (1 and 3 seconds) and damping ratios (0.2 and 0.707). Thresholds for bandwidth changes were approximately 9% of nominal except for the (3sec,0.2) process which yielded thresholds of 12%. Variance thresholds averaged 17% of nominal except for the (3sec,0.2) process in which they were 32%. Detection times for suprathreshold changes in the parameters may be roughly described by the changes in RMS velocity of the process. A more complex model is presented which consists of a Kalman filter designed for the nominal process using velocity as the input, and a modified Wald sequential test for changes in the variance of the residual. The model predictions agree moderately well with the experimental data. Models using heuristics, e.g. level crossing counters, were also examined and are found to be descriptive but do not afford the unification of the Kalman filter/sequential test model used for changes in mean.
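
    A sketch of only the decision stage of this model is given below: a Wald sequential probability ratio test for a change in residual variance, as it might be applied to the innovations produced by a Kalman filter tuned to the nominal process. The Kalman filter stage itself is omitted, and the error rates, variances, and restart rule are illustrative assumptions.

    ```python
    # Wald SPRT on residuals for a variance change (sigma0 -> sigma1), Gaussian model.
    import math
    import random


    def sprt_variance_change(residuals, sigma0, sigma1, alpha=0.01, beta=0.01):
        """Return the 1-based sample index at which H1 (variance sigma1^2) is accepted, or None."""
        upper = math.log((1 - beta) / alpha)          # accept H1 when the LLR exceeds this
        lower = math.log(beta / (1 - alpha))          # accept H0 (and restart) below this
        llr = 0.0
        for i, x in enumerate(residuals, start=1):
            llr += math.log(sigma0 / sigma1) + 0.5 * x * x * (1 / sigma0**2 - 1 / sigma1**2)
            if llr >= upper:
                return i
            if llr <= lower:
                llr = 0.0                             # restart the test, as in repeated SPRT use
        return None


    if __name__ == "__main__":
        random.seed(7)
        quiet = [random.gauss(0, 1.0) for _ in range(300)]
        loud = [random.gauss(0, 1.3) for _ in range(300)]   # ~30% increase in residual RMS
        print("detected at sample:", sprt_variance_change(quiet + loud, sigma0=1.0, sigma1=1.3))
    ```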

  6. Cannabinoid-induced effects on the nociceptive system: a neurophysiological study in patients with secondary progressive multiple sclerosis.

    PubMed

    Conte, Antonella; Bettolo, Chiara Marini; Onesti, Emanuela; Frasca, Vittorio; Iacovelli, Elisa; Gilio, Francesca; Giacomelli, Elena; Gabriele, Maria; Aragona, Massimiliano; Tomassini, Valentina; Pantano, Patrizia; Pozzilli, Carlo; Inghilleri, Maurizio

    2009-05-01

    Although clinical studies show that cannabinoids improve central pain in patients with multiple sclerosis (MS), neurophysiological studies are lacking to investigate whether they also suppress these patients' electrophysiological responses to noxious stimulation. The flexion reflex (FR) in humans is a widely used technique for assessing the pain threshold and for studying spinal and supraspinal pain pathways and the neurotransmitter system involved in pain control. In a randomized, double-blind, placebo-controlled, cross-over study we investigated cannabinoid-induced changes in RIII reflex variables (threshold, latency and area) in a group of 18 patients with secondary progressive MS. To investigate whether cannabinoids act indirectly on the nociceptive reflex by modulating lower motoneuron excitability we also evaluated the H-reflex size after tibial nerve stimulation and calculated the H wave/M wave (H/M) ratio. Of the 18 patients recruited and randomized 17 completed the study. After patients used a commercial delta-9-tetrahydrocannabinol (THC) and cannabidiol mixture as an oromucosal spray the RIII reflex threshold increased and the RIII reflex area decreased. The visual analogue scale score for pain also decreased, though not significantly. Conversely, the H/M ratio measured before patients received cannabinoids remained unchanged after therapy. In conclusion, the cannabinoid-induced changes in the RIII reflex threshold and area in patients with MS provide objective neurophysiological evidence that cannabinoids modulate the nociceptive system in patients with MS.

  7. Coral bleaching pathways under the control of regional temperature variability

    NASA Astrophysics Data System (ADS)

    Langlais, C. E.; Lenton, A.; Heron, S. F.; Evenhuis, C.; Sen Gupta, A.; Brown, J. N.; Kuchinke, M.

    2017-11-01

    Increasing sea surface temperatures (SSTs) are predicted to adversely impact coral populations worldwide through increasing thermal bleaching events. Future bleaching is unlikely to be spatially uniform. Therefore, understanding what determines regional differences will be critical for adaptation management. Here, using a cumulative heat stress metric, we show that characteristics of regional SST determine the future bleaching risk patterns. Incorporating observed information on SST variability, in assessing future bleaching risk, provides novel options for management strategies. As a consequence, the known biases in climate model variability and the uncertainties in regional warming rate across climate models are less detrimental than previously thought. We also show that the thresholds used to indicate reef viability can strongly influence a decision on what constitutes a potential refugia. Observing and understanding the drivers of regional variability, and the viability limits of coral reefs, is therefore critical for making meaningful projections of coral bleaching risk.

  8. Threshold magnitudes for a multichannel correlation detector in background seismicity

    DOE PAGES

    Carmichael, Joshua D.; Hartse, Hans

    2016-04-01

    Colocated explosive sources often produce correlated seismic waveforms. Multichannel correlation detectors identify these signals by scanning template waveforms recorded from known reference events against "target" data to find similar waveforms. This screening problem is challenged at thresholds required to monitor smaller explosions, often because non-target signals falsely trigger such detectors. Therefore, it is generally unclear what thresholds will reliably identify a target explosion while screening non-target background seismicity. Here, we estimate threshold magnitudes for hypothetical explosions located at the North Korean nuclear test site over six months of 2010, by processing International Monitoring System (IMS) array data with a multichannel waveform correlation detector. Our method (1) accounts for low amplitude background seismicity that falsely triggers correlation detectors but is unidentifiable with conventional power beams, (2) adapts to diurnally variable noise levels and (3) uses source-receiver reciprocity concepts to estimate thresholds for explosions spatially separated from the template source. Furthermore, we find that underground explosions with body wave magnitudes mb = 1.66 are detectable at the IMS array USRK with probability 0.99, when using template waveforms consisting only of P-waves, without false alarms. We conservatively find that these thresholds also increase by up to a magnitude unit for sources located 4 km or more from the Feb. 12, 2013 announced nuclear test.

  9. Excitation-based and informational masking of a tonal signal in a four-tone masker.

    PubMed

    Leibold, Lori J; Hitchens, Jack J; Buss, Emily; Neff, Donna L

    2010-04-01

    This study examined contributions of peripheral excitation and informational masking to the variability in masking effectiveness observed across samples of multi-tonal maskers. Detection thresholds were measured for a 1000-Hz signal presented simultaneously with each of 25, four-tone masker samples. Using a two-interval, forced-choice adaptive task, thresholds were measured with each sample fixed throughout trial blocks for ten listeners. Average thresholds differed by as much as 26 dB across samples. An excitation-based model of partial loudness [Moore, B. C. J. et al. (1997). J. Audio Eng. Soc. 45, 224-237] was used to predict thresholds. These predictions accounted for a significant portion of variance in the data of several listeners, but no relation between the model and data was observed for many listeners. Moreover, substantial individual differences, on the order of 41 dB, were observed for some maskers. The largest individual differences were found for maskers predicted to produce minimal excitation-based masking. In subsequent conditions, one of five maskers was randomly presented in each interval. The difference in performance for samples with low versus high predicted thresholds was reduced in random compared to fixed conditions. These findings are consistent with a trading relation whereby informational masking is largest for conditions in which excitation-based masking is smallest.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carmichael, Joshua D.; Hartse, Hans

    Colocated explosive sources often produce correlated seismic waveforms. Multichannel correlation detectors identify these signals by scanning template waveforms recorded from known reference events against "target" data to find similar waveforms. This screening problem is challenged at thresholds required to monitor smaller explosions, often because non-target signals falsely trigger such detectors. Therefore, it is generally unclear what thresholds will reliably identify a target explosion while screening non-target background seismicity. Here, we estimate threshold magnitudes for hypothetical explosions located at the North Korean nuclear test site over six months of 2010, by processing International Monitoring System (IMS) array data with a multichannel waveform correlation detector. Our method (1) accounts for low amplitude background seismicity that falsely triggers correlation detectors but is unidentifiable with conventional power beams, (2) adapts to diurnally variable noise levels and (3) uses source-receiver reciprocity concepts to estimate thresholds for explosions spatially separated from the template source. Furthermore, we find that underground explosions with body wave magnitudes mb = 1.66 are detectable at the IMS array USRK with probability 0.99, when using template waveforms consisting only of P-waves, without false alarms. We conservatively find that these thresholds also increase by up to a magnitude unit for sources located 4 km or more from the Feb. 12, 2013 announced nuclear test.

  11. Summer temperature metrics for predicting brook trout (Salvelinus fontinalis) distribution in streams

    USGS Publications Warehouse

    Parrish, Donna; Butryn, Ryan S.; Rizzo, Donna M.

    2012-01-01

    We developed a methodology to predict brook trout (Salvelinus fontinalis) distribution using summer temperature metrics as predictor variables. Our analysis used long-term fish and hourly water temperature data from the Dog River, Vermont (USA). Commonly used metrics (e.g., mean, maximum, maximum 7-day maximum) tend to smooth the data so information on temperature variation is lost. Therefore, we developed a new set of metrics (called event metrics) to capture temperature variation by describing the frequency, area, duration, and magnitude of events that exceeded a user-defined temperature threshold. We used 16, 18, 20, and 22°C. We built linear discriminant models and tested and compared the event metrics against the commonly used metrics. Correct classification of the observations was 66% with event metrics and 87% with commonly used metrics. However, combined event and commonly used metrics correctly classified 92%. Of the four individual temperature thresholds, it was difficult to assess which threshold had the “best” accuracy. The 16°C threshold had slightly fewer misclassifications; however, the 20°C threshold had the fewest extreme misclassifications. Our method leveraged the volumes of existing long-term data and provided a simple, systematic, and adaptable framework for monitoring changes in fish distribution, specifically in the case of irregular, extreme temperature events.
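
    The sketch below illustrates the "event metric" idea: find runs of hourly water temperature above a threshold and summarize their frequency, duration, magnitude, and area (degree-hours). Function and field names are illustrative, not those of the study.

    ```python
    # Threshold-exceedance event metrics for an hourly temperature record.
    import numpy as np


    def exceedance_events(temps_c, threshold_c):
        """Return per-event (duration_h, peak_excess_c, area_degree_hours) tuples."""
        above = np.asarray(temps_c) > threshold_c
        events, start = [], None
        for i, flag in enumerate(np.append(above, False)):   # sentinel ends a trailing run
            if flag and start is None:
                start = i
            elif not flag and start is not None:
                run = np.asarray(temps_c[start:i]) - threshold_c
                events.append((i - start, float(run.max()), float(run.sum())))
                start = None
        return events


    def event_metrics(temps_c, threshold_c):
        ev = exceedance_events(temps_c, threshold_c)
        return {
            "frequency": len(ev),
            "mean_duration_h": float(np.mean([e[0] for e in ev])) if ev else 0.0,
            "max_magnitude_c": max((e[1] for e in ev), default=0.0),
            "total_area_degree_h": sum(e[2] for e in ev),
        }


    if __name__ == "__main__":
        hourly = [17, 18, 19, 21, 22, 20, 19, 18, 21, 23, 22, 19]   # synthetic record
        print(event_metrics(hourly, threshold_c=20))
    ```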

  12. Quantifying Livestock Heat Stress Impacts in the Sahel

    NASA Astrophysics Data System (ADS)

    Broman, D.; Rajagopalan, B.; Hopson, T. M.

    2014-12-01

    Livestock heat stress, especially in regions of the developing world with limited adaptive capacity, has a largely unquantified impact on food supply. Though dominated by ambient air temperature, heat stress is also affected by relative humidity, wind speed, and solar radiation, and it can decrease livestock growth, milk production, and reproduction rates and increase mortality. Indices like the thermal-humidity index (THI) are used to quantify the heat stress experienced from climate variables. Livestock experience differing impacts at different index critical thresholds that are empirically determined and specific to species and breed. Several studies have highlighted this lack of understanding, noting limited knowledge of the critical thresholds of heat stress in native livestock breeds as well as of the current and future impact of heat stress. As adaptation and mitigation strategies for climate change depend on a solid quantitative foundation, this knowledge gap has limited such efforts. To address the lack of study, we have investigated heat stress impacts in the pastoral system of Sub-Saharan West Africa. We used a stochastic weather generator to quantify both the historic and future variability of heat stress. This approach models temperature, relative humidity, and precipitation, the climate variables controlling heat stress. Incorporating large-scale climate as covariates into this framework provides a better historical fit and allows us to include future CMIP5 GCM projections to examine climate change impacts on heat stress. Health and production data allow us to examine the influence of this variability on livestock directly and are considered in conjunction with the confounding impacts of fodder and water access. This understanding provides useful information to decision makers looking to mitigate the impacts of climate change and can provide useful seasonal forecasts of heat stress risk. A comparison of the current and future heat stress conditions based on climate variables for West Africa will be presented. An assessment of current and future risk was obtained by linking climatic heat stress to cattle health and production. Seasonal forecasts of heat stress are also provided by modeling the heat stress climate variables using persistent large-scale climate features.
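
    The sketch below shows a THI screen against critical thresholds. The formula used is one commonly cited livestock formulation, THI = 0.8*T + (RH/100)*(T - 14.4) + 46.4, and the threshold values are placeholders rather than the breed-specific thresholds this work is concerned with.

    ```python
    # Thermal-humidity index (one common formulation) and a coarse stress classifier.
    def thi(temp_c, rel_humidity_pct):
        return 0.8 * temp_c + (rel_humidity_pct / 100.0) * (temp_c - 14.4) + 46.4


    def stress_category(thi_value, thresholds=(74, 79, 84)):
        """Map a THI value to a stress class using hypothetical critical thresholds."""
        mild, severe, emergency = thresholds
        if thi_value < mild:
            return "normal"
        if thi_value < severe:
            return "mild stress"
        if thi_value < emergency:
            return "severe stress"
        return "emergency"


    if __name__ == "__main__":
        for t, rh in [(28, 40), (35, 30), (40, 55)]:
            v = thi(t, rh)
            print(f"T={t} C, RH={rh}%: THI={v:.1f} ({stress_category(v)})")
    ```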

  13. Motor unit behaviour and contractile changes during fatigue in the human first dorsal interosseus

    PubMed Central

    Carpentier, Alain; Duchateau, Jacques; Hainaut, Karl

    2001-01-01

    In 67 single motor units, the mechanical properties, the recruitment and derecruitment thresholds, and the discharge rates were recorded concurrently in the first dorsal interosseus (FDI) of human subjects during intermittent fatiguing contractions. The task consisted of isometric ramp-and-hold contractions performed at 50% of the maximal voluntary contraction (MVC). The purpose of this study was to examine the influence of fatigue on the behaviour of motor units with a wide range of activation thresholds. For low-threshold (< 25% MVC) motor units, the mean twitch force increased with fatigue and the recruitment threshold either did not change or increased. In contrast, the twitch force and the activation threshold decreased for the high-threshold (> 25% MVC) units. The observation that in low-threshold motor units a quick stretch of the muscle at the end of the test reset the unit force and recruitment threshold to the prefatigue value suggests a significant role for fatigue-related changes in muscle stiffness but not twitch potentiation or motor unit synchronization. Although the central drive intensified during the fatigue test, as indicated by an increase in surface electromyogram (EMG), the discharge rate of the motor units during the hold phase of each contraction decreased progressively over the course of the task for motor units that were recruited at the beginning of the test, especially the low-threshold units. In contrast, the discharge rates of newly activated units first increased and then decreased. Such divergent behaviour of low- and high-threshold motor units could not be individually controlled by the central drive to the motoneurone pool. Rather, the different behaviours must be the consequence of variable contributions from motoneurone adaptation and afferent feedback from the muscle during the fatiguing contraction. PMID:11483719

  14. A Purkinje shift in the spectral sensitivity of grey squirrels

    PubMed Central

    Silver, Priscilla H.

    1966-01-01

    1. The light-adapted spectral sensitivity of the grey squirrel has been determined by an automated training method at a level about 6 log units above the squirrel's absolute threshold. 2. The maximum sensitivity is near 555 nm, under light-adapted conditions, compared with the dark-adapted maximum near 500 nm found by a similar method. 3. Neither the light-adapted nor the dark-adapted behavioural threshold agrees with electrophysiological findings using single flash techniques, but there is agreement with e.r.g. results obtained with sinusoidal stimuli. PMID:5972118

  15. Children with developmental coordination disorder (DCD) can adapt to perceptible and subliminal rhythm changes but are more variable.

    PubMed

    Roche, Renuka; Viswanathan, Priya; Clark, Jane E; Whitall, Jill

    2016-12-01

    Children with DCD demonstrate impairments in bimanual finger tapping during self-paced tapping and tapping in synchrony to different frequencies. In this study, we investigated the ability of children with DCD to adapt motorically to perceptible or subliminal changes of the auditory stimuli without a change in frequency, and compared their performance to typically developing controls (TDC). Nineteen children with DCD between ages 6-11 years (mean age ± SD = 114 ± 21 months) and 17 TDC (mean age ± SD = 113 ± 21 months) participated in this study. Auditory perceptual threshold was established. Children initially tapped bimanually to an antiphase beat and then to either a perceptible change in rhythm or to gradual subliminal changes in rhythm. Children with DCD were able to perceive changes in rhythm similar to TDC. They were also able to adapt to both perceptible and subliminal changes in rhythms similar to their age- and gender-matched TDC. However, these children were significantly more variable compared with TDC in all phasing conditions. The results suggest that the performance impairments in bilateral tapping are not a result of poor conscious or sub-conscious perception of the auditory cue. The increased motor variability may be associated with cerebellar dysfunction but further behavioral and neurophysiological studies are needed. Copyright © 2016 Elsevier B.V. All rights reserved.

  16. Genomic analysis of differentiation between soil types reveals candidate genes for local adaptation in Arabidopsis lyrata.

    PubMed

    Turner, Thomas L; von Wettberg, Eric J; Nuzhdin, Sergey V

    2008-09-11

    Serpentine soil, which is naturally high in heavy metal content and has low calcium to magnesium ratios, comprises a difficult environment for most plants. An impressive number of species are endemic to serpentine, and a wide range of non-endemic plant taxa have been shown to be locally adapted to these soils. Locating genomic polymorphisms which are differentiated between serpentine and non-serpentine populations would provide candidate loci for serpentine adaptation. We have used the Arabidopsis thaliana tiling array, which has 2.85 million probes throughout the genome, to measure genetic differentiation between populations of Arabidopsis lyrata growing on granitic soils and those growing on serpentinic soils. The significant overrepresentation of genes involved in ion transport and other functions provides a starting point for investigating the molecular basis of adaptation to soil ion content, water retention, and other ecologically and economically important variables. One gene in particular, calcium-exchanger 7, appears to be an excellent candidate gene for adaptation to a low Ca:Mg ratio in A. lyrata.

  17. Physiological differences between cycling and running: lessons from triathletes.

    PubMed

    Millet, Gregoire P; Vleck, V E; Bentley, D J

    2009-01-01

    The purpose of this review was to provide a synopsis of the literature concerning the physiological differences between cycling and running. By comparing physiological variables such as maximal oxygen consumption (VO2max), anaerobic threshold (AT), heart rate, economy or delta efficiency measured in cycling and running in triathletes, runners or cyclists, this review aims to identify the effects of exercise modality on the underlying mechanisms (ventilatory responses, blood flow, muscle oxidative capacity, peripheral innervation and neuromuscular fatigue) of adaptation. The majority of studies indicate that runners achieve a higher VO2max on treadmill whereas cyclists can achieve a VO2max value in cycle ergometry similar to that in treadmill running. Hence, VO2max is specific to the exercise modality. In addition, the muscles adapt specifically to a given exercise task over a period of time, resulting in an improvement in submaximal physiological variables such as the ventilatory threshold, in some cases without a change in VO2max. However, this effect is probably larger in cycling than in running. At the same time, skill influencing motor unit recruitment patterns is an important influence on the anaerobic threshold in cycling. Furthermore, it is likely that there is more physiological training transfer from running to cycling than vice versa. In triathletes, there is generally no difference in VO2max measured in cycle ergometry and treadmill running. The data concerning the anaerobic threshold in cycling and running in triathletes are conflicting. This is likely to be due to a combination of actual training load and prior training history in each discipline. The mechanisms surrounding the differences in the AT together with VO2max in cycling and running are not largely understood but are probably due to the relative adaptation of cardiac output influencing VO2max and also the recruitment of muscle mass in combination with the oxidative capacity of this mass influencing the AT. Several other physiological differences between cycling and running are addressed: heart rate is different between the two activities both for maximal and submaximal intensities. The delta efficiency is higher in running. Ventilation is more impaired in cycling than in running. It has also been shown that pedalling cadence affects the metabolic responses during cycling but also during a subsequent running bout. However, the optimal cadence is still debated. Central fatigue and decrease in maximal strength are more important after prolonged exercise in running than in cycling.

  18. Stride-to-stride variability and complexity between novice and experienced runners during a prolonged run at anaerobic threshold speed.

    PubMed

    Mo, Shiwei; Chow, Daniel H K

    2018-05-19

    Motor control, related to running performance and running related injuries, is affected by progression of fatigue during a prolonged run. Distance runners are usually recommended to train at or slightly above anaerobic threshold (AT) speed for improving performance. However, running at AT speed may result in accelerated fatigue. It is not clear how one adapts running gait pattern during a prolonged run at AT speed and if there are differences between runners with different training experience. To compare characteristics of stride-to-stride variability and complexity during a prolonged run at AT speed between novice runners (NR) and experienced runners (ER), both NR (n = 17) and ER (n = 17) performed a treadmill run for 31 min at his/her AT speed. Stride interval dynamics was obtained throughout the run with the middle 30 min equally divided into six time intervals (denoted as T1, T2, T3, T4, T5 and T6). Mean, coefficient of variation (CV) and scaling exponent alpha of stride intervals were calculated for each interval of each group. This study revealed that the mean stride interval significantly increased with running time in a non-linear trend (p < 0.001). The stride interval variability (CV) remained relatively constant for NR (p = 0.22) and changed nonlinearly for ER (p = 0.023) throughout the run. Alpha was significantly different between groups at T2, T5 and T6, and changed nonlinearly with running time for both groups with slight differences. These findings provided insights into how the motor control system adapts to the progression of fatigue and evidence that long-term training enhances motor control. Although both ER and NR could regulate gait complexity to maintain AT speed throughout the prolonged run, ER also regulated stride interval variability to achieve the goal. Copyright © 2018. Published by Elsevier B.V.
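
    The scaling exponent alpha of a stride-interval series is commonly estimated with detrended fluctuation analysis (DFA); the sketch below assumes that generic estimator and window choices, since the study's exact settings are not given here.

    ```python
    # Generic DFA estimate of the scaling exponent alpha for a stride-interval series.
    import numpy as np


    def dfa_alpha(series, scales=None):
        x = np.asarray(series, dtype=float)
        y = np.cumsum(x - x.mean())                      # integrated, mean-centered series
        if scales is None:
            scales = np.unique(np.floor(
                np.logspace(np.log10(4), np.log10(len(x) // 4), 12)).astype(int))
        flucts = []
        for n in scales:
            n_windows = len(y) // n
            segs = y[: n_windows * n].reshape(n_windows, n)
            t = np.arange(n)
            rms = []
            for seg in segs:                             # detrend each window linearly
                coef = np.polyfit(t, seg, 1)
                rms.append(np.sqrt(np.mean((seg - np.polyval(coef, t)) ** 2)))
            flucts.append(np.mean(rms))
        return np.polyfit(np.log(scales), np.log(flucts), 1)[0]


    if __name__ == "__main__":
        rng = np.random.default_rng(42)
        white = rng.normal(1.0, 0.02, size=512)          # uncorrelated stride intervals
        print(f"alpha (white noise) ~ {dfa_alpha(white):.2f}")   # expect roughly 0.5
    ```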

  19. Mouse epileptic seizure detection with multiple EEG features and simple thresholding technique

    NASA Astrophysics Data System (ADS)

    Tieng, Quang M.; Anbazhagan, Ashwin; Chen, Min; Reutens, David C.

    2017-12-01

    Objective. Epilepsy is a common neurological disorder characterized by recurrent, unprovoked seizures. The search for new treatments for seizures and epilepsy relies upon studies in animal models of epilepsy. To capture data on seizures, many applications require prolonged electroencephalography (EEG) with recordings that generate voluminous data. The desire for efficient evaluation of these recordings motivates the development of automated seizure detection algorithms. Approach. A new seizure detection method is proposed, based on multiple features and a simple thresholding technique. The features are derived from chaos theory, information theory and the power spectrum of EEG recordings and optimally exploit both linear and nonlinear characteristics of EEG data. Main result. The proposed method was tested with real EEG data from an experimental mouse model of epilepsy and distinguished seizures from other patterns with high sensitivity and specificity. Significance. The proposed approach introduces two new features: negative logarithm of adaptive correlation integral and power spectral coherence ratio. The combination of these new features with two previously described features, entropy and phase coherence, improved seizure detection accuracy significantly. Negative logarithm of adaptive correlation integral can also be used to compute the duration of automatically detected seizures.
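    The detector described here combines several per-epoch features with simple thresholds. The exact features of the paper (negative logarithm of adaptive correlation integral, power spectral coherence ratio) are beyond a short sketch, but the thresholding logic itself can be illustrated as below; the stand-in features, threshold values and the AND rule are assumptions for illustration only.

```python
import numpy as np

def spectral_entropy(epoch, fs):
    """Normalized spectral entropy of one EEG epoch (a stand-in feature)."""
    psd = np.abs(np.fft.rfft(epoch)) ** 2
    p = psd / psd.sum()
    return -np.sum(p * np.log(p + 1e-12)) / np.log(len(p))

def line_length(epoch, fs):
    """Line length, another simple stand-in feature."""
    return np.sum(np.abs(np.diff(epoch)))

def detect_seizure_epochs(eeg, fs, epoch_s=2.0, thresholds=None):
    """Flag epochs where every feature crosses its threshold.

    `thresholds` maps feature name -> (function, threshold, direction);
    all entries below are illustrative placeholders.
    """
    if thresholds is None:
        thresholds = {
            "entropy": (spectral_entropy, 0.80, "below"),
            "line_length": (line_length, 50.0, "above"),
        }
    n = int(epoch_s * fs)
    flags = []
    for start in range(0, len(eeg) - n + 1, n):
        epoch = eeg[start:start + n]
        votes = []
        for fn, thr, direction in thresholds.values():
            value = fn(epoch, fs)
            votes.append(value > thr if direction == "above" else value < thr)
        flags.append(all(votes))  # simple AND rule across features
    return np.array(flags)
```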

  20. Adaptive Quadrature Detection for Multicarrier Continuous-Variable Quantum Key Distribution

    NASA Astrophysics Data System (ADS)

    Gyongyosi, Laszlo; Imre, Sandor

    2015-03-01

    We propose the adaptive quadrature detection for multicarrier continuous-variable quantum key distribution (CVQKD). A multicarrier CVQKD scheme uses Gaussian subcarrier continuous variables to convey information and Gaussian sub-channels for transmission. The proposed multicarrier detection scheme dynamically adapts to the sub-channel conditions using statistics provided by our sub-channel estimation procedure. The sub-channel estimation phase determines the transmittance coefficients of the sub-channels; this information is then used in the adaptive quadrature decoding process. We define the technique called subcarrier spreading to estimate the transmittance conditions of the sub-channels with a theoretical error-minimum in the presence of a Gaussian noise. We introduce the terms of single and collective adaptive quadrature detection. We also extend the results for a multiuser multicarrier CVQKD scenario. We prove the achievable error probabilities, the signal-to-noise ratios, and quantify the attributes of the framework. The adaptive detection scheme makes it possible to utilize the extra resources of multicarrier CVQKD and to maximize the amount of transmittable information. This work was partially supported by the GOP-1.1.1-11-2012-0092 (Secure quantum key distribution between two units on optical fiber network) project sponsored by the EU and European Structural Fund, and by the COST Action MP1006.

  1. Performance evaluation of tile-based Fisher Ratio analysis using a benchmark yeast metabolome dataset.

    PubMed

    Watson, Nathanial E; Parsons, Brendon A; Synovec, Robert E

    2016-08-12

    Performance of tile-based Fisher Ratio (F-ratio) data analysis, recently developed for discovery-based studies using comprehensive two-dimensional gas chromatography coupled with time-of-flight mass spectrometry (GC×GC-TOFMS), is evaluated with a metabolomics dataset that had previously been analyzed in great detail using a brute-force approach. The previously analyzed data (referred to herein as the benchmark dataset) were intracellular extracts from Saccharomyces cerevisiae (yeast), either metabolizing glucose (repressed) or ethanol (derepressed), which define the two classes in the discovery-based analysis to find metabolites that are statistically different in concentration between the two classes. Usefully, this previously analyzed dataset provides a concrete means to validate the tile-based F-ratio software. Herein, we demonstrate and validate the significant benefits of applying tile-based F-ratio analysis. The yeast metabolomics data are analyzed far more rapidly, in about one week versus one year for the prior studies with this dataset. Furthermore, a null distribution analysis is implemented to statistically determine an adequate F-ratio threshold, whereby the variables with F-ratio values below the threshold can be ignored as not class distinguishing, which provides the analyst with confidence when analyzing the hit table. Forty-six of the fifty-four benchmarked changing metabolites were discovered by the new methodology while consistently excluding all but one of the nineteen benchmarked false-positive metabolites previously identified. Copyright © 2016 Elsevier B.V. All rights reserved.
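    As a rough illustration of the null-distribution idea, not the tile-based GC×GC implementation itself, one can compute a per-variable F-ratio between two classes and derive a threshold from F-ratios obtained after randomly permuting the class labels. The function names, the 95th-percentile criterion and the permutation count below are assumptions for the sketch.

```python
import numpy as np

def f_ratio(a, b):
    """Between-class over within-class variance for one variable, two classes."""
    grand = np.concatenate([a, b]).mean()
    between = len(a) * (a.mean() - grand) ** 2 + len(b) * (b.mean() - grand) ** 2
    within = ((a - a.mean()) ** 2).sum() + ((b - b.mean()) ** 2).sum()
    df_between, df_within = 1, len(a) + len(b) - 2
    return (between / df_between) / (within / df_within)

def null_threshold(X, labels, n_perm=200, quantile=0.95, seed=0):
    """F-ratio threshold from a permutation null distribution.

    X: samples x variables matrix; labels: boolean class membership.
    The 0.95 quantile is an illustrative choice, not the paper's criterion.
    """
    rng = np.random.default_rng(seed)
    null = []
    for _ in range(n_perm):
        perm = rng.permutation(labels)
        null.extend(f_ratio(X[perm, j], X[~perm, j]) for j in range(X.shape[1]))
    return np.quantile(null, quantile)

# Variables whose observed F-ratio exceeds the null threshold are retained as hits.
```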

  2. 18F-fluorocholine PET-guided target volume delineation techniques for partial prostate re-irradiation in local recurrent prostate cancer.

    PubMed

    Wang, Hui; Vees, Hansjörg; Miralbell, Raymond; Wissmeyer, Michael; Steiner, Charles; Ratib, Osman; Senthamizhchelvan, Srinivasan; Zaidi, Habib

    2009-11-01

    We evaluate the contribution of (18)F-choline PET/CT to the delineation of gross tumour volume (GTV) in locally recurrent prostate cancer after initial irradiation using various PET image segmentation techniques. Seventeen patients with local-only recurrent prostate cancer (median = 5.7 years) after initial irradiation were included in the study. Rebiopsies performed in 10 patients confirmed the local recurrence. Following injection of 300 MBq of (18)F-fluorocholine, dynamic PET frames (3 min each) were reconstructed from the list-mode acquisition. Five PET image segmentation techniques were used to delineate the (18)F-choline-based GTVs. These included manual delineation of contours (GTV(man)) by two teams, each consisting of a radiation oncologist and a nuclear medicine physician; fixed thresholds of 40% and 50% of the maximum signal intensity (GTV(40%) and GTV(50%)); signal-to-background ratio-based adaptive thresholding (GTV(SBR)); and a region-growing algorithm (GTV(RG)). Geographic mismatches between the GTVs were also assessed using overlap analysis. Inter-observer variability for manual delineation of GTVs was high but not statistically significant (p = 0.459). In addition, the volumes and shapes of GTVs delineated using semi-automated techniques differed from those defined manually, with the semi-automated GTVs being significantly larger. Semi-automated segmentation techniques for (18)F-choline PET-guided GTV delineation resulted in substantially higher GTVs compared to manual delineation and might replace the latter for determination of recurrent prostate cancer for partial prostate re-irradiation. The selection of the most appropriate segmentation algorithm still needs to be determined.
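    The fixed-threshold delineation in this record (GTV(40%), GTV(50%)) amounts to keeping voxels above a fraction of the maximum uptake. A minimal sketch on a NumPy image volume is shown below; the background correction and SBR-adaptive variants are not covered, and the function name and inputs are illustrative.

```python
import numpy as np

def fixed_threshold_gtv(pet_volume, fraction=0.40):
    """Binary GTV mask: voxels at or above `fraction` of the maximum uptake.

    `pet_volume` is a 3-D array of activity values; fraction=0.40 corresponds
    to the GTV(40%) contour described above.
    """
    mask = pet_volume >= fraction * pet_volume.max()
    volume_voxels = int(mask.sum())   # multiply by the voxel volume for mL
    return mask, volume_voxels
```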

  3. Fast parallel MR image reconstruction via B1-based, adaptive restart, iterative soft thresholding algorithms (BARISTA).

    PubMed

    Muckley, Matthew J; Noll, Douglas C; Fessler, Jeffrey A

    2015-02-01

    Sparsity-promoting regularization is useful for combining compressed sensing assumptions with parallel MRI for reducing scan time while preserving image quality. Variable splitting algorithms are the current state-of-the-art algorithms for SENSE-type MR image reconstruction with sparsity-promoting regularization. These methods are very general and have been observed to work with almost any regularizer; however, the tuning of associated convergence parameters is a commonly-cited hindrance in their adoption. Conversely, majorize-minimize algorithms based on a single Lipschitz constant have been observed to be slow in shift-variant applications such as SENSE-type MR image reconstruction since the associated Lipschitz constants are loose bounds for the shift-variant behavior. This paper bridges the gap between the Lipschitz constant and the shift-variant aspects of SENSE-type MR imaging by introducing majorizing matrices in the range of the regularizer matrix. The proposed majorize-minimize methods (called BARISTA) converge faster than state-of-the-art variable splitting algorithms when combined with momentum acceleration and adaptive momentum restarting. Furthermore, the tuning parameters associated with the proposed methods are unitless convergence tolerances that are easier to choose than the constraint penalty parameters required by variable splitting algorithms.
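    BARISTA itself builds voxel-dependent majorizers from the coil sensitivities; as a simpler illustration of the ingredients named in this record (iterative soft thresholding, momentum acceleration and adaptive restart), here is a generic FISTA-style solver for a synthesis-sparsity least-squares problem. The scalar Lipschitz constant and function names are assumptions of the sketch, not the paper's method.

```python
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def fista_restart(A, b, lam, L, n_iter=200):
    """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 with momentum and adaptive restart.

    L is a Lipschitz constant of A^T A (e.g., its largest eigenvalue); BARISTA
    replaces this single scalar with a B1-informed, voxel-dependent majorizer.
    """
    x = np.zeros(A.shape[1])
    z, t = x.copy(), 1.0
    for _ in range(n_iter):
        x_new = soft_threshold(z - A.T @ (A @ z - b) / L, lam / L)
        # Gradient-based adaptive restart: drop momentum when it points uphill.
        if np.dot(z - x_new, x_new - x) > 0:
            z, t = x_new.copy(), 1.0
        else:
            t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
            z = x_new + ((t - 1.0) / t_new) * (x_new - x)
            t = t_new
        x = x_new
    return x
```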

  4. Fast Parallel MR Image Reconstruction via B1-based, Adaptive Restart, Iterative Soft Thresholding Algorithms (BARISTA)

    PubMed Central

    Noll, Douglas C.; Fessler, Jeffrey A.

    2014-01-01

    Sparsity-promoting regularization is useful for combining compressed sensing assumptions with parallel MRI for reducing scan time while preserving image quality. Variable splitting algorithms are the current state-of-the-art algorithms for SENSE-type MR image reconstruction with sparsity-promoting regularization. These methods are very general and have been observed to work with almost any regularizer; however, the tuning of associated convergence parameters is a commonly-cited hindrance in their adoption. Conversely, majorize-minimize algorithms based on a single Lipschitz constant have been observed to be slow in shift-variant applications such as SENSE-type MR image reconstruction since the associated Lipschitz constants are loose bounds for the shift-variant behavior. This paper bridges the gap between the Lipschitz constant and the shift-variant aspects of SENSE-type MR imaging by introducing majorizing matrices in the range of the regularizer matrix. The proposed majorize-minimize methods (called BARISTA) converge faster than state-of-the-art variable splitting algorithms when combined with momentum acceleration and adaptive momentum restarting. Furthermore, the tuning parameters associated with the proposed methods are unitless convergence tolerances that are easier to choose than the constraint penalty parameters required by variable splitting algorithms. PMID:25330484

  5. Developing Bayesian adaptive methods for estimating sensitivity thresholds (d′) in Yes-No and forced-choice tasks

    PubMed Central

    Lesmes, Luis A.; Lu, Zhong-Lin; Baek, Jongsoo; Tran, Nina; Dosher, Barbara A.; Albright, Thomas D.

    2015-01-01

    Motivated by Signal Detection Theory (SDT), we developed a family of novel adaptive methods that estimate the sensitivity threshold—the signal intensity corresponding to a pre-defined sensitivity level (d′ = 1)—in Yes-No (YN) and Forced-Choice (FC) detection tasks. Rather than focus stimulus sampling to estimate a single level of %Yes or %Correct, the current methods sample psychometric functions more broadly, to concurrently estimate sensitivity and decision factors, and thereby estimate thresholds that are independent of decision confounds. Developed for four tasks—(1) simple YN detection, (2) cued YN detection, which cues the observer's response state before each trial, (3) rated YN detection, which incorporates a Not Sure response, and (4) FC detection—the qYN and qFC methods yield sensitivity thresholds that are independent of the task's decision structure (YN or FC) and/or the observer's subjective response state. Results from simulation and psychophysics suggest that 25 trials (and sometimes less) are sufficient to estimate YN thresholds with reasonable precision (s.d. = 0.10–0.15 decimal log units), but more trials are needed for FC thresholds. When the same subjects were tested across tasks of simple, cued, rated, and FC detection, adaptive threshold estimates exhibited excellent agreement with the method of constant stimuli (MCS), and with each other. These YN adaptive methods deliver criterion-free thresholds that have previously been exclusive to FC methods. PMID:26300798
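    The sensitivity threshold targeted by these methods is defined at d′ = 1. For context, d′ in a Yes-No task is computed from the hit and false-alarm rates as z(H) − z(F); the small-count correction and example numbers below are conventional illustrations, not taken from the study.

```python
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Yes-No sensitivity index d' = z(hit rate) - z(false-alarm rate).

    A small correction keeps the rates away from 0 and 1 (one common convention).
    """
    h = (hits + 0.5) / (hits + misses + 1.0)
    f = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return norm.ppf(h) - norm.ppf(f)

# Example: 40 hits / 10 misses and 8 false alarms / 42 correct rejections
print(d_prime(40, 10, 8, 42))  # approximately 1.8
```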

  6. Analysis of parenchymal patterns using conspicuous spatial frequency features in mammograms applied to the BI-RADS density rating scheme

    NASA Astrophysics Data System (ADS)

    Perconti, Philip; Loew, Murray

    2006-03-01

    Automatic classification of the density of breast parenchyma is shown using a measure that is correlated to the human observer performance, and compared against the BI-RADS density rating. Increasingly popular in the United States, the Breast Imaging Reporting and Data System (BI-RADS) is used to draw attention to the increased screening difficulty associated with greater breast density; however, the BI-RADS rating scheme is subjective and is not intended as an objective measure of breast density. So, while popular, BI-RADS does not define density classes using a standardized measure, which leads to increased variability among observers. The adaptive thresholding technique is a more quantitative approach for assessing the percentage breast density, but considerable reader interaction is required. We calculate an objective density rating that is derived using a measure of local feature salience. Previously, this measure was shown to correlate well with radiologists' localization and discrimination of true positive and true negative regions-of-interest. Using conspicuous spatial frequency features, an objective density rating is obtained and correlated with adaptive thresholding, and the subjectively ascertained BI-RADS density ratings. Using 100 cases, obtained from the University of South Florida's DDSM database, we show that an automated breast density measure can be derived that is correlated with the interactive thresholding method for continuous percentage breast density, but not with the BI-RADS density rating categories for the selected cases. Comparison between interactive thresholding and the new salience percentage density resulted in a Pearson correlation of 76.7%. Using a four-category scale equivalent to the BI-RADS density categories, a Spearman correlation coefficient of 79.8% was found.

  7. Fission gas bubble identification using MATLAB's image processing toolbox

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Collette, R.; King, J.; Keiser, Jr., D.

    Automated image processing routines have the potential to aid in the fuel performance evaluation process by eliminating bias in human judgment that may vary from person to person or sample to sample. This study presents several MATLAB-based image analysis routines designed for fission gas void identification in post-irradiation examination of uranium molybdenum (U–Mo) monolithic-type plate fuels. Frequency domain filtration, used as a pre-processing technique, can eliminate artifacts from the image without compromising the critical features of interest. This process is coupled with a bilateral filter, an edge-preserving noise removal technique aimed at preparing the image for optimal segmentation. Adaptive thresholding proved to be the most consistent gray-level feature segmentation technique for U–Mo fuel microstructures. The Sauvola adaptive threshold technique segments the image based on histogram weighting factors in stable contrast regions and local statistics in variable contrast regions. Once all processing is complete, the algorithm outputs the total fission gas void count, the mean void size, and the average porosity. The final results demonstrate an ability to extract fission gas void morphological data faster, more consistently, and at least as accurately as manual segmentation methods.

  8. Fission gas bubble identification using MATLAB's image processing toolbox

    DOE PAGES

    Collette, R.; King, J.; Keiser, Jr., D.; ...

    2016-06-08

    Automated image processing routines have the potential to aid in the fuel performance evaluation process by eliminating bias in human judgment that may vary from person to person or sample to sample. This study presents several MATLAB-based image analysis routines designed for fission gas void identification in post-irradiation examination of uranium molybdenum (U–Mo) monolithic-type plate fuels. Frequency domain filtration, used as a pre-processing technique, can eliminate artifacts from the image without compromising the critical features of interest. This process is coupled with a bilateral filter, an edge-preserving noise removal technique aimed at preparing the image for optimal segmentation. Adaptive thresholding proved to be the most consistent gray-level feature segmentation technique for U–Mo fuel microstructures. The Sauvola adaptive threshold technique segments the image based on histogram weighting factors in stable contrast regions and local statistics in variable contrast regions. Once all processing is complete, the algorithm outputs the total fission gas void count, the mean void size, and the average porosity. The final results demonstrate an ability to extract fission gas void morphological data faster, more consistently, and at least as accurately as manual segmentation methods.
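    The Sauvola adaptive threshold referenced in the two records above is also available in common image libraries. A minimal sketch using scikit-image follows; the window size, k value and the convention that voids are darker than the local threshold are illustrative assumptions, not the study's settings.

```python
from skimage.filters import threshold_sauvola

def segment_voids(image, window_size=25, k=0.2):
    """Sauvola adaptive threshold: local mean and std set the cutoff per pixel.

    `image` is a 2-D grayscale micrograph; here dark fission-gas voids are
    taken to fall below the local threshold. window_size and k are
    illustrative defaults.
    """
    local_thresh = threshold_sauvola(image, window_size=window_size, k=k)
    voids = image < local_thresh
    porosity = voids.mean()          # area fraction of void pixels
    return voids, porosity
```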

  9. A MATLAB toolbox for the efficient estimation of the psychometric function using the updated maximum-likelihood adaptive procedure.

    PubMed

    Shen, Yi; Dai, Wei; Richards, Virginia M

    2015-03-01

    A MATLAB toolbox for the efficient estimation of the threshold, slope, and lapse rate of the psychometric function is described. The toolbox enables the efficient implementation of the updated maximum-likelihood (UML) procedure. The toolbox uses an object-oriented architecture for organizing the experimental variables and computational algorithms, which provides experimenters with flexibility in experimental design and data management. Descriptions of the UML procedure and the UML Toolbox are provided, followed by toolbox use examples. Finally, guidelines and recommendations of parameter configurations are given.
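    The UML procedure updates a likelihood over psychometric-function parameters after every trial and places the next stimulus where it is most informative. The toolbox itself is MATLAB; the Python sketch below shows only the core likelihood update for a simplified two-parameter logistic function with a fixed lapse rate, which is an assumption of the sketch rather than the toolbox's implementation.

```python
import numpy as np

class SimpleMLTracker:
    """Grid-based maximum-likelihood tracking of a logistic psychometric function.

    Parameters: threshold alpha and slope beta; the lapse rate is fixed here,
    unlike the full UML procedure, which also estimates it.
    """

    def __init__(self, alphas, betas, lapse=0.02, guess=0.5):
        self.alphas, self.betas = np.meshgrid(alphas, betas, indexing="ij")
        self.loglik = np.zeros_like(self.alphas)
        self.lapse, self.guess = lapse, guess

    def p_correct(self, x):
        core = 1.0 / (1.0 + np.exp(-self.betas * (x - self.alphas)))
        return self.guess + (1.0 - self.guess - self.lapse) * core

    def update(self, x, correct):
        p = self.p_correct(x)
        self.loglik += np.log(p if correct else 1.0 - p)

    def estimate(self):
        i, j = np.unravel_index(np.argmax(self.loglik), self.loglik.shape)
        return self.alphas[i, j], self.betas[i, j]

# After each trial: tracker.update(stimulus_level, response); the next stimulus
# is then typically placed near the current maximum-likelihood threshold.
```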

  10. Policy tree optimization for adaptive management of water resources systems

    NASA Astrophysics Data System (ADS)

    Herman, Jonathan; Giuliani, Matteo

    2017-04-01

    Water resources systems must cope with irreducible uncertainty in supply and demand, requiring policy alternatives capable of adapting to a range of possible future scenarios. Recent studies have developed adaptive policies based on "signposts" or "tipping points" that suggest the need of updating the policy. However, there remains a need for a general method to optimize the choice of the signposts to be used and their threshold values. This work contributes a general framework and computational algorithm to design adaptive policies as a tree structure (i.e., a hierarchical set of logical rules) using a simulation-optimization approach based on genetic programming. Given a set of feature variables (e.g., reservoir level, inflow observations, inflow forecasts), the resulting policy defines both the optimal reservoir operations and the conditions under which such operations should be triggered. We demonstrate the approach using Folsom Reservoir (California) as a case study, in which operating policies must balance the risk of both floods and droughts. Numerical results show that the tree-based policies outperform the ones designed via Dynamic Programming. In addition, they display good adaptive capacity to the changing climate, successfully adapting the reservoir operations across a large set of uncertain climate scenarios.
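    The tree-structured policies described in this record are hierarchies of if-then rules on feature variables with threshold values. A minimal sketch of how such a policy could be represented and evaluated is given below; the feature names, thresholds and actions are invented for illustration and are not taken from the Folsom case study, and the genetic-programming search that evolves the tree is not shown.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    """Internal node: test `feature < threshold`; leaf node: prescribe an action."""
    feature: Optional[str] = None
    threshold: float = 0.0
    left: Optional["Node"] = None      # branch taken when the test is true
    right: Optional["Node"] = None
    action: Optional[str] = None       # set only on leaves

def evaluate(node: Node, state: dict) -> str:
    """Walk the tree with the current system state and return the operation."""
    while node.action is None:
        node = node.left if state[node.feature] < node.threshold else node.right
    return node.action

# Illustrative policy: hedge releases when storage is low unless a large
# inflow forecast is expected.
policy = Node(
    feature="storage_fraction", threshold=0.4,
    left=Node(feature="inflow_forecast", threshold=500.0,
              left=Node(action="hedge_release"),
              right=Node(action="normal_release")),
    right=Node(action="flood_control_release"),
)

print(evaluate(policy, {"storage_fraction": 0.35, "inflow_forecast": 300.0}))
```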

  11. Recognition of speech in noise after application of time-frequency masks: Dependence on frequency and threshold parameters

    PubMed Central

    Sinex, Donal G.

    2013-01-01

    Binary time-frequency (TF) masks can be applied to separate speech from noise. Previous studies have shown that with appropriate parameters, ideal TF masks can extract highly intelligible speech even at very low speech-to-noise ratios (SNRs). Two psychophysical experiments provided additional information about the dependence of intelligibility on the frequency resolution and threshold criteria that define the ideal TF mask. Listeners identified AzBio Sentences in noise, before and after application of TF masks. Masks generated with 8 or 16 frequency bands per octave supported nearly-perfect identification. Word recognition accuracy was slightly lower and more variable with 4 bands per octave. When TF masks were generated with a local threshold criterion of 0 dB SNR, the mean speech reception threshold was −9.5 dB SNR, compared to −5.7 dB for unprocessed sentences in noise. Speech reception thresholds decreased by about 1 dB per dB of additional decrease in the local threshold criterion. Information reported here about the dependence of speech intelligibility on frequency and level parameters has relevance for the development of non-ideal TF masks for clinical applications such as speech processing for hearing aids. PMID:23556604
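    An ideal binary TF mask of the kind described in this record keeps the time-frequency cells whose local SNR exceeds the threshold criterion. Given access to the separate speech and noise signals, a minimal sketch is shown below; the STFT parameters are generic assumptions, and frequency resolution is set by the FFT segment length rather than by bands per octave as in the study.

```python
import numpy as np
from scipy.signal import stft, istft

def ideal_binary_mask(speech, noise, fs, local_criterion_db=0.0, nperseg=512):
    """Apply an ideal binary mask to the speech + noise mixture.

    Cells where the local speech-to-noise ratio exceeds `local_criterion_db`
    are kept; the rest are zeroed.
    """
    _, _, S = stft(speech, fs, nperseg=nperseg)
    _, _, N = stft(noise, fs, nperseg=nperseg)
    _, _, M = stft(speech + noise, fs, nperseg=nperseg)

    local_snr_db = 10.0 * np.log10((np.abs(S) ** 2 + 1e-12) / (np.abs(N) ** 2 + 1e-12))
    mask = local_snr_db > local_criterion_db

    _, processed = istft(M * mask, fs, nperseg=nperseg)
    return processed
```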

  12. Characterization of Rod Function Phenotypes Across a Range of Age-Related Macular Degeneration Severities and Subretinal Drusenoid Deposits

    PubMed Central

    Flynn, Oliver J.; Cukras, Catherine A.; Jeffrey, Brett G.

    2018-01-01

    Purpose To examine spatial changes in rod-mediated function in relationship to local structural changes across the central retina in eyes with a spectrum of age-related macular degeneration (AMD) disease severity. Methods Participants were categorized into five AMD severity groups based on fundus features. Scotopic thresholds were measured at 14 loci spanning ±18° along the vertical meridian from one eye of each of 42 participants (mean = 71.7 ± 9.9 years). Following a 30% bleach, dark adaptation was measured at eight loci (±12°). Rod intercept time (RIT) was defined from the time to detect a −3.1 log cd/m2 stimulus. RITslope was defined from the linear fit of RIT with decreasing retinal eccentricity. The presence of subretinal drusenoid deposits (SDD), ellipsoid (EZ) band disruption, and drusen at the test loci was evaluated using optical coherence tomography. Results Scotopic thresholds indicated greater rod function loss in the macula, which correlated with increasing AMD group severity. RITslope, which captures the spatial change in the rate of dark adaptation, increased with AMD severity (P < 0.0001). Three rod function phenotypes emerged: RF1, normal rod function; RF2, normal scotopic thresholds but slowed dark adaptation; and RF3, elevated scotopic thresholds with slowed dark adaptation. Dark adaptation was slowed at all loci with SDD or EZ band disruption, and at 32% of loci with no local structural changes. Conclusions Three rod function phenotypes were defined from combined measurement of scotopic threshold and dark adaptation. Spatial changes in dark adaptation across the macula were captured with RITslope, which may be a useful outcome measure for functional studies of AMD. PMID:29847647

  13. Characterization of Rod Function Phenotypes Across a Range of Age-Related Macular Degeneration Severities and Subretinal Drusenoid Deposits.

    PubMed

    Flynn, Oliver J; Cukras, Catherine A; Jeffrey, Brett G

    2018-05-01

    To examine spatial changes in rod-mediated function in relationship to local structural changes across the central retina in eyes with a spectrum of age-related macular degeneration (AMD) disease severity. Participants were categorized into five AMD severity groups based on fundus features. Scotopic thresholds were measured at 14 loci spanning ±18° along the vertical meridian from one eye of each of 42 participants (mean = 71.7 ± 9.9 years). Following a 30% bleach, dark adaptation was measured at eight loci (±12°). Rod intercept time (RIT) was defined from the time to detect a -3.1 log cd/m2 stimulus. RITslope was defined from the linear fit of RIT with decreasing retinal eccentricity. The presence of subretinal drusenoid deposits (SDD), ellipsoid (EZ) band disruption, and drusen at the test loci was evaluated using optical coherence tomography. Scotopic thresholds indicated greater rod function loss in the macula, which correlated with increasing AMD group severity. RITslope, which captures the spatial change in the rate of dark adaptation, increased with AMD severity (P < 0.0001). Three rod function phenotypes emerged: RF1, normal rod function; RF2, normal scotopic thresholds but slowed dark adaptation; and RF3, elevated scotopic thresholds with slowed dark adaptation. Dark adaptation was slowed at all loci with SDD or EZ band disruption, and at 32% of loci with no local structural changes. Three rod function phenotypes were defined from combined measurement of scotopic threshold and dark adaptation. Spatial changes in dark adaptation across the macula were captured with RITslope, which may be a useful outcome measure for functional studies of AMD.

  14. Validation of the minimal citrate tube fill volume for routine coagulation tests on ACL TOP 500 CTS®.

    PubMed

    Ver Elst, K; Vermeiren, S; Schouwers, S; Callebaut, V; Thomson, W; Weekx, S

    2013-12-01

    CLSI recommends a minimal citrate tube fill volume of 90%. A validation protocol with clinical and analytical components was set up to determine the tube fill threshold for the international normalized ratio of prothrombin time (PT-INR), activated partial thromboplastin time (aPTT) and fibrinogen. Citrated coagulation samples from 16 healthy donors and eight patients receiving vitamin K antagonists (VKA) were evaluated. Eighty-nine tubes were filled to varying volumes of >50%. Coagulation tests were performed on the ACL TOP 500 CTS®. A receiver operating characteristic (ROC) plot, with total error (TE) and critical difference (CD) as possible acceptance criteria, was used to determine the fill threshold. The standard ROC approach was most accurate with CD for PT-INR and TE for aPTT, resulting in thresholds of 63% for PT and 80% for aPTT. With an adapted ROC approach, in which the threshold was set at the point of 100% sensitivity with maximum specificity, CD performed best for PT and TE for aPTT, resulting in thresholds of 73% for PT and 90% for aPTT. For fibrinogen, the method was only valid with the TE criterion at a 63% fill volume. In our study, we validated the minimal citrate tube fill volumes of 73%, 90% and 63% for PT-INR, aPTT and fibrinogen, respectively. © 2013 John Wiley & Sons Ltd.
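    The adapted ROC criterion in this record places the fill-volume cutoff at the point of 100% sensitivity with maximum specificity. One way to express that selection on paired fill-volume/acceptability data is sketched below; the function name, data layout and rejection rule are assumptions, not the study's procedure.

```python
import numpy as np

def fill_threshold_100_sensitivity(fill_volume, unacceptable):
    """Cutoff with 100% sensitivity and maximum specificity.

    fill_volume: tube fill in percent; unacceptable: True where the coagulation
    result exceeded the acceptance criterion (TE or CD). Tubes with
    fill_volume <= cutoff are rejected: putting the cutoff at the highest fill
    volume that still gave an unacceptable result rejects every unacceptable
    sample while rejecting as few acceptable ones as possible.
    """
    fill_volume = np.asarray(fill_volume, dtype=float)
    unacceptable = np.asarray(unacceptable, dtype=bool)
    cutoff = fill_volume[unacceptable].max()
    specificity = float(np.mean(fill_volume[~unacceptable] > cutoff))
    return cutoff, specificity
```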

  15. Numerical study of the effects of contact angle and viscosity ratio on the dynamics of snap-off through porous media

    NASA Astrophysics Data System (ADS)

    Starnoni, Michele; Pokrajac, Dubravka

    2018-01-01

    Snap-off is a pore-scale mechanism occurring in porous media in which a bubble of non-wetting phase displacing a wetting phase, and vice-versa, can break up into ganglia when passing through a constriction. This mechanism is very important in foam generation processes, enhanced oil recovery techniques and capillary trapping of CO2 during its geological storage. In the present study, the effects of contact angle and viscosity ratio on the dynamics of snap-off are examined by simulating drainage in a single pore-throat constriction of variable cross-section, and for different pore-throat geometries. To model the flow, we developed a CFD code based on the Finite Volume method. The Volume-of-fluid method is used to track the interfaces. Results show that the threshold contact angle for snap-off, i.e. snap-off occurs only for contact angles smaller than the threshold, increases from a value of 28° for a circular cross-section to 30-34° for a square cross-section and up to 40° for a triangular one. For a throat of square cross-section, increasing the viscosity of the injected phase results in a drop in the threshold contact angle from a value of 30° when the viscosity ratio μ̄ is equal to 1 to 26° when μ̄ = 20 and down to 24° when μ̄ = 20.

  16. Clinical Characteristics and Outcomes Are Similar in ARDS Diagnosed by Oxygen Saturation/Fio2 Ratio Compared With Pao2/Fio2 Ratio

    PubMed Central

    Janz, David R.; Shaver, Ciara M.; Bernard, Gordon R.; Bastarache, Julie A.; Ware, Lorraine B.

    2015-01-01

    BACKGROUND: Oxygen saturation as measured by pulse oximetry/Fio2 (SF) ratio is highly correlated with the Pao2/Fio2 (PF) ratio in patients with ARDS. However, it remains uncertain whether SF ratio can be substituted for PF ratio for diagnosis of ARDS and whether SF ratio might identify patients who are systemically different from patients diagnosed by PF ratio. METHODS: We conducted a secondary analysis of a large observational prospective cohort study. Patients were eligible if they were admitted to the medical ICU and fulfilled the Berlin definition of ARDS with hypoxemia criteria using either the standard PF threshold (PF ratio ≤ 300) or a previously published SF threshold (SF ratio ≤ 315). RESULTS: Of 362 patients with ARDS, 238 (66%) received a diagnosis by PF ratio and 124 (34%) by SF ratio. In a small group of patients who received diagnoses of ARDS by SF ratio who had arterial blood gas measurements on the same day (n = 10), the PF ratio did not meet ARDS criteria. There were no major differences in clinical characteristics or comorbidities between groups with the exception of APACHE (Acute Physiology and Chronic Health Evaluation) II scores, which were higher in the group diagnosed by PF ratio. However, this difference was no longer apparent when arterial blood gas-dependent variables (pH, Pao2) were removed from the APACHE II score. There were also no differences in clinical outcomes including duration of mechanical ventilation (mean, 7 days in both groups; P = .25), duration of ICU stay (mean, 10 days vs 9 days in PF ratio vs SF ratio; P = .26), or hospital mortality (36% in both groups, P = .9). CONCLUSIONS: Patients with ARDS diagnosed by SF ratio have very similar clinical characteristics and outcomes compared with patients diagnosed by PF ratio. These findings suggest that SF ratio could be considered as a diagnostic tool for early enrollment into clinical trials. PMID:26271028

  17. [Improving speech comprehension using a new cochlear implant speech processor].

    PubMed

    Müller-Deile, J; Kortmann, T; Hoppe, U; Hessel, H; Morsnowski, A

    2009-06-01

    The aim of this multicenter clinical field study was to assess the benefits of the new Freedom 24 sound processor for cochlear implant (CI) users implanted with the Nucleus 24 cochlear implant system. The study included 48 postlingually profoundly deaf experienced CI users who demonstrated speech comprehension performance of at least 80% correct with their current speech processor on the Oldenburg sentence test (OLSA) in quiet conditions and who were able to perform adaptive speech threshold testing using the OLSA in noisy conditions. Following baseline measures of speech comprehension performance with their current speech processor, subjects were upgraded to the Freedom 24 speech processor. After a take-home trial period of at least 2 weeks, subject performance was evaluated by measuring the speech reception threshold with the Freiburg multisyllabic word test and speech intelligibility with the Freiburg monosyllabic word test at 50 dB and 70 dB in the sound field. The results demonstrated highly significant benefits for speech comprehension with the new speech processor. Significant benefits for speech comprehension were also demonstrated with the new speech processor when tested in competing background noise. In contrast, use of the Abbreviated Profile of Hearing Aid Benefit (APHAB) did not prove to be a suitably sensitive assessment tool for comparative subjective self-assessment of hearing benefits with each processor. Use of the preprocessing algorithm known as adaptive dynamic range optimization (ADRO) in the Freedom 24 led to additional improvements over the standard upgrade map for speech comprehension in quiet and showed equivalent performance in noise. Through use of the preprocessing beam-forming algorithm BEAM, subjects demonstrated a highly significant improvement in the signal-to-noise ratio for speech comprehension thresholds (i.e., the signal-to-noise ratio for 50% speech comprehension scores) when tested with an adaptive procedure using the Oldenburg sentences in the clinical setting S0NCI, with the speech signal at 0° and noise lateral to the CI at 90°. With the convincing findings from our evaluations of this multicenter study cohort, a trial with the Freedom 24 sound processor for all suitable CI users is recommended. For evaluating the benefits of a new processor, the comparative assessment paradigm used in our study design would be considered ideal for use with individual patients.

  18. Edge enhancement and noise suppression for infrared image based on feature analysis

    NASA Astrophysics Data System (ADS)

    Jiang, Meng

    2018-06-01

    Infrared images often suffer from background noise, blurred edges, few details and low signal-to-noise ratios. To improve infrared image quality, it is essential to suppress noise and enhance edges simultaneously. To achieve this, we propose a novel algorithm based on feature analysis in the shearlet domain. Firstly, we introduce the theory and advantages of the shearlet transform, a form of multi-scale geometric analysis (MGA). Secondly, after analyzing the shortcomings of traditional thresholding for noise suppression, we propose a novel feature extraction that distinguishes image structures from noise well and use it to improve the traditional thresholding technique. Thirdly, by computing the correlations between neighboring shearlet coefficients, feature attribute maps identifying weak details and strong edges are built to improve generalized unsharp masking (GUM). Finally, experimental results with infrared images captured in different scenes demonstrate that the proposed algorithm suppresses noise efficiently and enhances image edges adaptively.

  19. Audiogram and auditory critical ratios of two Florida manatees (Trichechus manatus latirostris).

    PubMed

    Gaspard, Joseph C; Bauer, Gordon B; Reep, Roger L; Dziuk, Kimberly; Cardwell, Adrienne; Read, Latoshia; Mann, David A

    2012-05-01

    Manatees inhabit turbid, shallow-water environments and have been shown to have poor visual acuity. Previous studies on hearing have demonstrated that manatees possess good hearing and sound localization abilities. The goals of this research were to determine the hearing abilities of two captive subjects and measure critical ratios to understand the capacity of manatees to detect tonal signals, such as manatee vocalizations, in the presence of noise. This study was also undertaken to better understand individual variability, which has been encountered during behavioral research with manatees. Two Florida manatees (Trichechus manatus latirostris) were tested in a go/no-go paradigm using a modified staircase method, with incorporated 'catch' trials at a 1:1 ratio, to assess their ability to detect single-frequency tonal stimuli. The behavioral audiograms indicated that the manatees' auditory frequency detection for tonal stimuli ranged from 0.25 to 90.5 kHz, with peak sensitivity extending from 8 to 32 kHz. Critical ratios, thresholds for tone detection in the presence of background masking noise, were determined with one-octave wide noise bands, 7-12 dB (spectrum level) above the thresholds determined for the audiogram under quiet conditions. Manatees appear to have quite low critical ratios, especially at 8 kHz, where the ratio was 18.3 dB for one manatee. This suggests that manatee hearing is sensitive in the presence of background noise and that they may have relatively narrow filters in the tested frequency range.

  20. The limits of applicability of the sound exposure level (SEL) metric to temporal threshold shifts (TTS) in beluga whales, Delphinapterus leucas.

    PubMed

    Popov, Vladimir V; Supin, Alexander Ya; Rozhnov, Viatcheslav V; Nechaev, Dmitry I; Sysueva, Evgenia V

    2014-05-15

    The influence of fatiguing sound level and duration on post-exposure temporary threshold shift (TTS) was investigated in two beluga whales (Delphinapterus leucas). The fatiguing sound was half-octave noise with a center frequency of 22.5 kHz. TTS was measured at a test frequency of 32 kHz. Thresholds were measured by recording rhythmic evoked potentials (the envelope following response) to a test series of short (eight cycles) tone pips with a pip rate of 1000 s(-1). TTS increased approximately proportionally to the dB measure of both sound pressure (sound pressure level, SPL) and duration of the fatiguing noise, as a product of these two variables. In particular, when the noise parameters varied in a manner that maintained the product of squared sound pressure and time (sound exposure level, SEL, which is equivalent to the overall noise energy) at a constant level, TTS was not constant. Keeping SEL constant, the highest TTS appeared at an intermediate ratio of SPL to sound duration and decreased at both higher and lower ratios. Multiplication (SPL multiplied by log duration) better described the experimental data than an equal-energy (equal SEL) model. The use of SEL as a sole universal metric may result in an implausible assessment of the impact of a fatiguing sound on hearing thresholds in odontocetes, including under-evaluation of potential risks. © 2014. Published by The Company of Biologists Ltd.

  1. Territory Quality and Plumage Morph Predict Offspring Sex Ratio Variation in a Raptor

    PubMed Central

    Chakarov, Nayden; Pauli, Martina; Mueller, Anna-Katharina; Potiek, Astrid; Grünkorn, Thomas; Dijkstra, Cor; Krüger, Oliver

    2015-01-01

    Parents may adapt their offspring sex ratio in response to their own phenotype and environmental conditions. The most significant causes for adaptive sex-ratio variation might express themselves as different distributions of fitness components between sexes along a given variable. Several causes for differential sex allocation in raptors with reversed sexual size dimorphism have been suggested. We search for correlates of fledgling sex in an extensive dataset on common buzzards Buteo buteo, a long-lived bird of prey. Larger female offspring could be more resource-demanding and starvation-prone and thus the costly sex. Prominent factors such as brood size and laying date did not predict nestling sex. Nonetheless, lifetime sex ratio (LSR, potentially indicative of individual sex allocation constraints) and overall nestling sex were explained by territory quality, with more females being produced in better territories. Additionally, parental plumage morphs and the interaction of morph and prey abundance tended to explain LSR and nestling sex, indicating local adaptation of sex allocation. However, in a limited census of nestling mortality, not females but males tended to die more frequently in prey-rich years. Also, although females could have potentially longer reproductive careers, a subset of our data encompassing full individual life histories showed that longevity and lifetime reproductive success were similarly distributed between the sexes. Thus, a basis for adaptive sex allocation in this population remains elusive. Overall, in common buzzards most major determinants of reproductive success appeared to have no effect on sex ratio but sex allocation may be adapted to local conditions in morph-specific patterns. PMID:26445010

  2. Noise adaptive wavelet thresholding for speckle noise removal in optical coherence tomography.

    PubMed

    Zaki, Farzana; Wang, Yahui; Su, Hao; Yuan, Xin; Liu, Xuan

    2017-05-01

    Optical coherence tomography (OCT) is based on coherence detection of interferometric signals and hence inevitably suffers from speckle noise. To remove speckle noise in OCT images, wavelet domain thresholding has demonstrated significant advantages in suppressing noise magnitude while preserving image sharpness. However, speckle noise in OCT images has different characteristics in different spatial scales, which has not been considered in previous applications of wavelet domain thresholding. In this study, we demonstrate a noise adaptive wavelet thresholding (NAWT) algorithm that exploits the difference of noise characteristics in different wavelet sub-bands. The algorithm is simple, fast, effective and is closely related to the physical origin of speckle noise in OCT image. Our results demonstrate that NAWT outperforms conventional wavelet thresholding.
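    The sub-band-dependent behavior exploited by NAWT can be illustrated with a generic wavelet soft-thresholding pass in which each detail sub-band receives its own threshold estimated from that sub-band's coefficients. This is a simplified stand-in using a standard MAD noise estimate, not the authors' NAWT algorithm, and the wavelet and level choices are assumptions.

```python
import numpy as np
import pywt

def subband_adaptive_denoise(image, wavelet="db4", level=3):
    """Soft-threshold each detail sub-band with its own noise-scaled threshold.

    The per-sub-band noise level is estimated with the usual median absolute
    deviation rule; NAWT instead derives its thresholds from the speckle
    statistics of OCT.
    """
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    new_coeffs = [coeffs[0]]                      # keep the approximation band
    for detail_level in coeffs[1:]:
        thresholded = []
        for band in detail_level:                 # (horizontal, vertical, diagonal)
            sigma = np.median(np.abs(band)) / 0.6745
            t = sigma * np.sqrt(2.0 * np.log(band.size))
            thresholded.append(pywt.threshold(band, t, mode="soft"))
        new_coeffs.append(tuple(thresholded))
    return pywt.waverec2(new_coeffs, wavelet)
```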

  3. Experimental research of adaptive OFDM and OCT precoding with a high SE for VLLC system

    NASA Astrophysics Data System (ADS)

    Liu, Shuang-ao; He, Jing; Chen, Qinghui; Deng, Rui; Zhou, Zhihua; Chen, Shenghai; Chen, Lin

    2017-09-01

    In this paper, an adaptive orthogonal frequency division multiplexing (OFDM) modulation scheme with 128/64/32/16-quadrature amplitude modulation (QAM) and orthogonal circulant matrix transform (OCT) precoding is proposed and experimentally demonstrated for a visible laser light communication (VLLC) system with a cost-effective 450-nm blue-light laser diode (LD). The performance of OCT precoding is compared with that of the conventional adaptive discrete Fourier transform-spread (DFT-spread) OFDM scheme, the 32-QAM OCT precoding OFDM scheme, the 64-QAM OCT precoding OFDM scheme and the adaptive OCT precoding OFDM scheme. The experimental results show that OCT precoding can achieve a relatively flat signal-to-noise ratio (SNR) curve, and it can provide a performance improvement in bit error rate (BER). Furthermore, the BER of the proposed OFDM signal with a raw bit rate of 5.04 Gb/s after 5-m free-space transmission is below the 20% soft-decision forward error correction (SD-FEC) threshold of 2.4 × 10^-2, and a spectral efficiency (SE) of 4.2 bit/s/Hz is successfully achieved.
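    The adaptive modulation step in this record assigns a QAM order to each subcarrier according to its estimated SNR. A minimal sketch of such threshold-based bit loading follows; the SNR thresholds and function name are generic illustrative values, not those of the experiment.

```python
import numpy as np

def assign_qam_orders(snr_db, thresholds=((24.0, 128), (21.0, 64), (18.0, 32), (15.0, 16))):
    """Map per-subcarrier SNR (dB) to a QAM order using fixed thresholds.

    Subcarriers below the lowest threshold are left unloaded (order 0). In
    practice, the thresholds would be derived from the target BER of each
    constellation.
    """
    orders = np.zeros(len(snr_db), dtype=int)
    for i, snr in enumerate(snr_db):
        for limit, order in thresholds:
            if snr >= limit:
                orders[i] = order
                break
    return orders

print(assign_qam_orders(np.array([26.0, 19.5, 14.0, 22.3])))  # [128 32 0 64]
```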

  4. Error minimization algorithm for comparative quantitative PCR analysis: Q-Anal.

    PubMed

    OConnor, William; Runquist, Elizabeth A

    2008-07-01

    Current methods for comparative quantitative polymerase chain reaction (qPCR) analysis, the threshold and extrapolation methods, either make assumptions about PCR efficiency that require an arbitrary threshold selection process or extrapolate to estimate relative levels of messenger RNA (mRNA) transcripts. Here we describe an algorithm, Q-Anal, that blends elements from current methods to by-pass assumptions regarding PCR efficiency and improve the threshold selection process to minimize error in comparative qPCR analysis. This algorithm uses iterative linear regression to identify the exponential phase for both target and reference amplicons and then selects, by minimizing linear regression error, a fluorescence threshold where efficiencies for both amplicons have been defined. From this defined fluorescence threshold, cycle time (Ct) and the error for both amplicons are calculated and used to determine the expression ratio. Ratios in complementary DNA (cDNA) dilution assays from qPCR data were analyzed by the Q-Anal method and compared with the threshold method and an extrapolation method. Dilution ratios determined by the Q-Anal and threshold methods were 86 to 118% of the expected cDNA ratios, but relative errors for the Q-Anal method were 4 to 10% in comparison with 4 to 34% for the threshold method. In contrast, ratios determined by an extrapolation method were 32 to 242% of the expected cDNA ratios, with relative errors of 67 to 193%. Q-Anal will be a valuable and quick method for minimizing error in comparative qPCR analysis.
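    The core of the approach in this record, identifying the exponential phase by iterative linear regression on log-fluorescence and reading efficiency from the slope, can be sketched as follows. This is a simplified stand-in rather than the published Q-Anal algorithm: the baseline estimate, take-off rule and R² criterion are all assumptions of the sketch, and the expression ratio would then follow from the Ct values read at a shared fluorescence threshold.

```python
import numpy as np

def exponential_phase_fit(fluorescence, min_points=4, r2_target=0.99):
    """Fit log10(fluorescence) vs. cycle over a window that stays log-linear.

    Returns (slope, intercept, cycles used); PCR efficiency follows from the
    slope as 10**slope - 1.
    """
    f = np.asarray(fluorescence, dtype=float)
    background = f[:5].mean()                      # crude baseline estimate
    y = np.log10(np.clip(f - background, 1e-6, None))
    cycles = np.arange(1, len(f) + 1)

    start = int(np.argmax(f - background > 10 * f[:5].std()))  # take-off cycle
    best = None
    for end in range(start + min_points, len(f)):
        c, v = cycles[start:end], y[start:end]
        slope, intercept = np.polyfit(c, v, 1)
        r2 = np.corrcoef(c, v)[0, 1] ** 2
        if r2 < r2_target and best is not None:
            break                                   # left the log-linear region
        best = (slope, intercept, c)
    return best
```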

  5. Optical communication system performance with tracking error induced signal fading.

    NASA Technical Reports Server (NTRS)

    Tycz, M.; Fitzmaurice, M. W.; Premo, D. A.

    1973-01-01

    System performance is determined for an optical communication system using noncoherent detection in the presence of tracking error induced signal fading assuming (1) binary on-off modulation (OOK) with both fixed and adaptive threshold receivers, and (2) binary polarization modulation (BPM). BPM is shown to maintain its inherent 2- to 3-dB advantage over OOK when adaptive thresholding is used, and to have a substantially greater advantage when the OOK system is restricted to a fixed decision threshold.

  6. Dynamic Network Selection for Multicast Services in Wireless Cooperative Networks

    NASA Astrophysics Data System (ADS)

    Chen, Liang; Jin, Le; He, Feng; Cheng, Hanwen; Wu, Lenan

    In next generation mobile multimedia communications, different wireless access networks are expected to cooperate. However, it is a challenging task to choose an optimal transmission path in this scenario. This paper focuses on the problem of selecting the optimal access network for multicast services in cooperative mobile and broadcasting networks. An algorithm is proposed, which considers multiple decision factors and multiple optimization objectives. An analytic hierarchy process (AHP) method is applied to schedule the service queue and an artificial neural network (ANN) is used to improve the flexibility of the algorithm. Simulation results show that by applying the AHP method, a set of weight ratios can be obtained that improves performance across the multiple objectives, and that the ANN method is effective for adaptively adjusting the weight ratios when a user's new waiting threshold is generated.

  7. Novel Design of a Soft Lightweight Pneumatic Continuum Robot Arm with Decoupled Variable Stiffness and Positioning.

    PubMed

    Giannaccini, Maria Elena; Xiang, Chaoqun; Atyabi, Adham; Theodoridis, Theo; Nefti-Meziani, Samia; Davis, Steve

    2018-02-01

    Soft robot arms possess unique capabilities when it comes to adaptability, flexibility, and dexterity. In addition, soft systems that are pneumatically actuated can claim high power-to-weight ratio. One of the main drawbacks of pneumatically actuated soft arms is that their stiffness cannot be varied independently from their end-effector position in space. The novel robot arm physical design presented in this article successfully decouples its end-effector positioning from its stiffness. An experimental characterization of this ability is coupled with a mathematical analysis. The arm combines the light weight, high payload to weight ratio and robustness of pneumatic actuation with the adaptability and versatility of variable stiffness. Light weight is a vital component of the inherent safety approach to physical human-robot interaction. To characterize the arm, a neural network analysis of the curvature of the arm for different input pressures is performed. The curvature-pressure relationship is also characterized experimentally.

  8. Novel Design of a Soft Lightweight Pneumatic Continuum Robot Arm with Decoupled Variable Stiffness and Positioning

    PubMed Central

    Xiang, Chaoqun; Atyabi, Adham; Theodoridis, Theo; Nefti-Meziani, Samia; Davis, Steve

    2018-01-01

    Soft robot arms possess unique capabilities when it comes to adaptability, flexibility, and dexterity. In addition, soft systems that are pneumatically actuated can claim high power-to-weight ratio. One of the main drawbacks of pneumatically actuated soft arms is that their stiffness cannot be varied independently from their end-effector position in space. The novel robot arm physical design presented in this article successfully decouples its end-effector positioning from its stiffness. An experimental characterization of this ability is coupled with a mathematical analysis. The arm combines the light weight, high payload to weight ratio and robustness of pneumatic actuation with the adaptability and versatility of variable stiffness. Light weight is a vital component of the inherent safety approach to physical human-robot interaction. To characterize the arm, a neural network analysis of the curvature of the arm for different input pressures is performed. The curvature-pressure relationship is also characterized experimentally. PMID:29412080

  9. Thresholds for Coral Bleaching: Are Synergistic Factors and Shifting Thresholds Changing the Landscape for Management? (Invited)

    NASA Astrophysics Data System (ADS)

    Eakin, C.; Donner, S. D.; Logan, C. A.; Gledhill, D. K.; Liu, G.; Heron, S. F.; Christensen, T.; Rauenzahn, J.; Morgan, J.; Parker, B. A.; Hoegh-Guldberg, O.; Skirving, W. J.; Strong, A. E.

    2010-12-01

    As carbon dioxide rises in the atmosphere, climate change and ocean acidification are modifying important physical and chemical parameters in the oceans with resulting impacts on coral reef ecosystems. Rising CO2 is warming the world’s oceans and causing corals to bleach, with both alarming frequency and severity. The frequent return of stressful temperatures has already resulted in major damage to many of the world’s coral reefs and is expected to continue in the foreseeable future. Warmer oceans also have contributed to a rise in coral infectious diseases. Both bleaching and infectious disease can result in coral mortality and threaten one of the most diverse ecosystems on Earth and the important ecosystem services they provide. Additionally, ocean acidification from rising CO2 is reducing the availability of carbonate ions needed by corals to build their skeletons and perhaps depressing the threshold for bleaching. While thresholds vary among species and locations, it is clear that corals around the world are already experiencing anomalous temperatures that are too high, too often, and that warming is exceeding the rate at which corals can adapt. This is despite a complex adaptive capacity that involves both the coral host and the zooxanthellae, including changes in the relative abundance of the latter in their coral hosts. The safe upper limit for atmospheric CO2 is probably somewhere below 350ppm, a level we passed decades ago, and for temperature is a sustained global temperature increase of less than 1.5°C above pre-industrial levels. How much can corals acclimate and/or adapt to the unprecedented fast changing environmental conditions? Any change in the threshold for coral bleaching as the result of acclimation and/or adaption may help corals to survive in the future but adaptation to one stress may be maladaptive to another. There also is evidence that ocean acidification and nutrient enrichment modify this threshold. What do shifting thresholds mean for identifying limits and taking management actions to adapt to climate change?

  10. Genomic Analysis of Differentiation between Soil Types Reveals Candidate Genes for Local Adaptation in Arabidopsis lyrata

    PubMed Central

    Turner, Thomas L.; von Wettberg, Eric J.; Nuzhdin, Sergey V.

    2008-01-01

    Serpentine soil, which is naturally high in heavy metal content and has low calcium to magnesium ratios, comprises a difficult environment for most plants. An impressive number of species are endemic to serpentine, and a wide range of non-endemic plant taxa have been shown to be locally adapted to these soils. Locating genomic polymorphisms which are differentiated between serpentine and non-serpentine populations would provide candidate loci for serpentine adaptation. We have used the Arabidopsis thaliana tiling array, which has 2.85 million probes throughout the genome, to measure genetic differentiation between populations of Arabidopsis lyrata growing on granitic soils and those growing on serpentinic soils. The significant overrepresentation of genes involved in ion transport and other functions provides a starting point for investigating the molecular basis of adaptation to soil ion content, water retention, and other ecologically and economically important variables. One gene in particular, calcium-exchanger 7, appears to be an excellent candidate gene for adaptation to low Ca∶Mg ratio in A. lyrata. PMID:18784841

  11. Great Ears: Low-Frequency Sensitivity Correlates in Land and Marine Leviathans.

    PubMed

    Ketten, D R; Arruda, J; Cramer, S; Yamato, M

    2016-01-01

    Like elephants, baleen whales produce low-frequency (LF) and even infrasonic (IF) signals, suggesting they may be particularly susceptible to underwater anthropogenic sound impacts. Analyses of computerized tomography scans and histologies of the ears in five baleen whale and two elephant species revealed that LF thresholds correlate with basilar membrane thickness/width and cochlear radii ratios. These factors are consistent with high-mass, low-stiffness membranes and broad spiral curvatures, suggesting that Mysticeti and Proboscidea evolved common inner ear adaptations over similar time scales for processing IF/LF sounds despite operating in different media.

  12. Poor outcome prediction by burst suppression ratio in adults with post-anoxic coma without hypothermia.

    PubMed

    Yang, Qinglin; Su, Yingying; Hussain, Mohammed; Chen, Weibi; Ye, Hong; Gao, Daiquan; Tian, Fei

    2014-05-01

    Burst suppression ratio (BSR) is a quantitative electroencephalography (qEEG) parameter. The purpose of our study was to compare the accuracy of BSR when compared to other EEG parameters in predicting poor outcomes in adults who sustained post-anoxic coma while not being subjected to therapeutic hypothermia. EEG was registered and recorded at least once within 7 days of post-anoxic coma onset. Electrodes were placed according to the international 10-20 system, using a 16-channel layout. Each EEG expert scored raw EEG using a grading scale adapted from Young and scored amplitude-integrated electroencephalography tracings, in addition to obtaining qEEG parameters defined as BSR with a defined threshold. Glasgow outcome scales of 1 and 2 at 3 months, determined by two blinded neurologists, were defined as poor outcome. Sixty patients with Glasgow coma scale score of 8 or less after anoxic accident were included. The sensitivity (97.1%), specificity (73.3%), positive predictive value (82.5%), and negative prediction value (95.0%) of BSR in predicting poor outcome were higher than other EEG variables. BSR1 and BSR2 were reliable in predicting death (area under the curve > 0.8, P < 0.05), with the respective cutoff points being 39.8% and 61.6%. BSR1 was reliable in predicting poor outcome (area under the curve  =  0.820, P < 0.05) with a cutoff point of 23.9%. BSR1 was also an independent predictor of increased risk of death (odds ratio  =  1.042, 95% confidence intervals: 1.012-1.073, P  =  0.006). BSR may be a better predictor in prognosticating poor outcomes in patients with post-anoxic coma who do not undergo therapeutic hypothermia when compared to other qEEG parameters.
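    BSR as used in this record is the fraction of the recording spent in suppression, i.e. with EEG amplitude held within a suppression band. A minimal sketch of the computation is shown below; the amplitude threshold and minimum suppression duration are illustrative choices, not the study's settings.

```python
import numpy as np

def burst_suppression_ratio(eeg_uv, fs, amp_threshold_uv=5.0, min_suppress_s=0.5):
    """Percentage of the record spent in suppression.

    A sample is 'suppressed' when |EEG| stays below amp_threshold_uv, and only
    runs lasting at least min_suppress_s are counted.
    """
    suppressed = np.abs(np.asarray(eeg_uv, dtype=float)) < amp_threshold_uv
    min_len = int(min_suppress_s * fs)

    total, run = 0, 0
    for flag in suppressed:
        if flag:
            run += 1
        else:
            if run >= min_len:
                total += run
            run = 0
    if run >= min_len:
        total += run
    return 100.0 * total / len(suppressed)
```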

  13. What is the best way to contour lung tumors on PET scans? Multiobserver validation of a gradient-based method using a NSCLC digital PET phantom.

    PubMed

    Werner-Wasik, Maria; Nelson, Arden D; Choi, Walter; Arai, Yoshio; Faulhaber, Peter F; Kang, Patrick; Almeida, Fabio D; Xiao, Ying; Ohri, Nitin; Brockway, Kristin D; Piper, Jonathan W; Nelson, Aaron S

    2012-03-01

    To evaluate the accuracy and consistency of a gradient-based positron emission tomography (PET) segmentation method, GRADIENT, compared with manual (MANUAL) and constant threshold (THRESHOLD) methods. Contouring accuracy was evaluated with sphere phantoms and clinically realistic Monte Carlo PET phantoms of the thorax. The sphere phantoms were 10-37 mm in diameter and were acquired at five institutions emulating clinical conditions. One institution also acquired a sphere phantom with multiple source-to-background ratios of 2:1, 5:1, 10:1, 20:1, and 70:1. One observer segmented (contoured) each sphere with GRADIENT and THRESHOLD from 25% to 50% at 5% increments. Subsequently, seven physicians segmented 31 lesions (7-264 mL) from 25 digital thorax phantoms using GRADIENT, THRESHOLD, and MANUAL. For spheres <20 mm in diameter, GRADIENT was the most accurate with a mean absolute % error in diameter of 8.15% (10.2% SD) compared with 49.2% (51.1% SD) for 45% THRESHOLD (p < 0.005). For larger spheres, the methods were statistically equivalent. For varying source-to-background ratios, GRADIENT was the most accurate for spheres >20 mm (p < 0.065) and <20 mm (p < 0.015). For digital thorax phantoms, GRADIENT was the most accurate (p < 0.01), with a mean absolute % error in volume of 10.99% (11.9% SD), followed by 25% THRESHOLD at 17.5% (29.4% SD), and MANUAL at 19.5% (17.2% SD). GRADIENT had the least systematic bias, with a mean % error in volume of -0.05% (16.2% SD) compared with 25% THRESHOLD at -2.1% (34.2% SD) and MANUAL at -16.3% (20.2% SD; p value <0.01). Interobserver variability was reduced using GRADIENT compared with both 25% THRESHOLD and MANUAL (p value <0.01, Levene's test). GRADIENT was the most accurate and consistent technique for target volume contouring. GRADIENT was also the most robust for varying imaging conditions. GRADIENT has the potential to play an important role for tumor delineation in radiation therapy planning and response assessment. Copyright © 2012. Published by Elsevier Inc.

  14. An Active Contour Model Based on Adaptive Threshold for Extraction of Cerebral Vascular Structures.

    PubMed

    Wang, Jiaxin; Zhao, Shifeng; Liu, Zifeng; Tian, Yun; Duan, Fuqing; Pan, Yutong

    2016-01-01

    Cerebral vessel segmentation is essential for clinical diagnosis and related research. However, automatic segmentation of brain vessels remains challenging because of variable vessel shapes and highly complex vessel geometry. This study proposes a new active contour model (ACM) implemented by the level-set method for segmenting vessels from TOF-MRA data. The energy function of the new model, combining both region intensity and boundary information, is composed of two region terms, one boundary term, and one penalty term. The global threshold representing the lower gray boundary of the target object, obtained by maximum intensity projection (MIP), is defined in the first region term and is used to guide the segmentation of the thick vessels. In the second term, a dynamic intensity threshold is employed to extract the tiny vessels. The boundary term is used to drive the contours to evolve towards boundaries with high gradients. The penalty term is used to avoid reinitialization of the level-set function. Experimental results on 10 clinical brain data sets demonstrate that our method not only achieves a better Dice similarity coefficient than the global-threshold-based method and the localized hybrid level-set method but also extracts whole cerebral vessel trees, including the thin vessels.

  15. An adaptive embedded mesh procedure for leading-edge vortex flows

    NASA Technical Reports Server (NTRS)

    Powell, Kenneth G.; Beer, Michael A.; Law, Glenn W.

    1989-01-01

    A procedure for solving the conical Euler equations on an adaptively refined mesh is presented, along with a method for determining which cells to refine. The solution procedure is a central-difference cell-vertex scheme. The adaptation procedure is made up of a parameter on which the refinement decision is based, and a method for choosing a threshold value of the parameter. The refinement parameter is a measure of mesh-convergence, constructed by comparison of locally coarse- and fine-grid solutions. The threshold for the refinement parameter is based on the curvature of the curve relating the number of cells flagged for refinement to the value of the refinement threshold. Results for three test cases are presented. The test problem is that of a delta wing at angle of attack in a supersonic free-stream. The resulting vortices and shocks are captured efficiently by the adaptive code.
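
    The threshold-selection idea described above can be illustrated with a short sketch: sweep candidate thresholds, count the cells that would be flagged at each value, and pick the threshold where that curve bends most sharply. This is a minimal Python sketch assuming a generic scalar refinement parameter per cell; the synthetic data and the finite-difference curvature estimate are illustrative, not the paper's implementation.

      import numpy as np

      def flagged_counts(refine_param, thresholds):
          """Number of cells whose refinement parameter exceeds each candidate threshold."""
          return np.array([(refine_param > t).sum() for t in thresholds])

      def threshold_by_curvature(refine_param, n_candidates=50):
          """Pick the threshold where the (cells flagged vs threshold) curve bends most sharply."""
          thresholds = np.linspace(refine_param.min(), refine_param.max(), n_candidates)
          counts = flagged_counts(refine_param, thresholds).astype(float)
          # Discrete curvature of the curve counts(thresholds)
          d1 = np.gradient(counts, thresholds)
          d2 = np.gradient(d1, thresholds)
          curvature = np.abs(d2) / (1.0 + d1**2) ** 1.5
          return thresholds[np.argmax(curvature)]

      # Hypothetical refinement parameter: coarse/fine solution differences per cell
      rng = np.random.default_rng(0)
      param = rng.lognormal(mean=-2.0, sigma=1.0, size=5000)
      print(threshold_by_curvature(param))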

  16. Reliability of the method of levels for determining cutaneous temperature sensitivity

    NASA Astrophysics Data System (ADS)

    Jakovljević, Miroljub; Mekjavić, Igor B.

    2012-09-01

    Determination of thermal thresholds is used clinically for evaluation of peripheral nervous system function. The aim of this study was to evaluate the reliability of the method of levels performed with a new, low-cost device for determining cutaneous temperature sensitivity. Nineteen male subjects were included in the study. Thermal thresholds were tested on the right side at the volar surface of the mid-forearm, the lateral surface of the mid-upper arm, and the front of the mid-thigh. Thermal testing was carried out by the method of levels with an initial temperature step of 2°C. Variability of thermal thresholds was expressed as the ratio between the second and the first testing, the coefficient of variation (CV), the coefficient of repeatability (CR), the intraclass correlation coefficient (ICC), the mean difference between sessions (S1-S2diff), the standard error of measurement (SEM), and the minimally detectable change (MDC). There were no statistically significant changes between sessions for warm or cold thresholds, or between warm and cold thresholds. Within-subject CVs were acceptable. The CR estimates ranged from 0.74°C to 1.06°C for warm thresholds and from 0.67°C to 1.07°C for cold thresholds. The ICC values for intra-rater reliability ranged from 0.41 to 0.72 for warm thresholds and from 0.67 to 0.84 for cold thresholds. S1-S2diff ranged from -0.15°C to 0.07°C for warm thresholds, and from -0.08°C to 0.07°C for cold thresholds. SEM ranged from 0.26°C to 0.38°C for warm thresholds, and from 0.23°C to 0.38°C for cold thresholds. Estimated MDC values were between 0.60°C and 0.88°C for warm thresholds, and between 0.53°C and 0.88°C for cold thresholds. The method of levels for determining cutaneous temperature sensitivity has acceptable reliability.
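
    The SEM and MDC reported above follow standard reliability formulas, SEM = SD * sqrt(1 - ICC) and MDC95 = 1.96 * sqrt(2) * SEM. The sketch below shows the arithmetic on invented two-session data; the study's exact computation may differ in detail.

      import numpy as np

      def sem_mdc(session1, session2, icc):
          """Standard error of measurement and 95% minimal detectable change.

          SEM = SD * sqrt(1 - ICC); MDC95 = 1.96 * sqrt(2) * SEM,
          with SD the pooled standard deviation of the two sessions.
          """
          pooled_sd = np.std(np.concatenate([session1, session2]), ddof=1)
          sem = pooled_sd * np.sqrt(1.0 - icc)
          mdc95 = 1.96 * np.sqrt(2.0) * sem
          return sem, mdc95

      # Hypothetical warm-threshold data (degrees C above baseline) from two sessions
      s1 = np.array([1.2, 0.9, 1.5, 1.1, 0.8])
      s2 = np.array([1.3, 1.0, 1.4, 1.0, 0.9])
      print(sem_mdc(s1, s2, icc=0.70))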

  17. Technical Note: An operational landslide early warning system at regional scale based on space-time variable rainfall thresholds

    NASA Astrophysics Data System (ADS)

    Segoni, S.; Battistini, A.; Rossi, G.; Rosi, A.; Lagomarsino, D.; Catani, F.; Moretti, S.; Casagli, N.

    2014-10-01

    We set up an early warning system for rainfall-induced landslides in Tuscany (23 000 km2). The system is based on a set of state-of-the-art intensity-duration rainfall thresholds (Segoni et al., 2014b) and makes use of LAMI rainfall forecasts and real-time rainfall data provided by an automated network of more than 300 rain gauges. The system was implemented in a WebGIS to ease operational use in civil protection procedures: it is simple and intuitive to consult and provides different outputs. Switching among different views, the system can focus both on monitoring of real-time data and on forecasting at different lead times up to 48 h. Moreover, the system can switch between a straightforward view, in which a synoptic hazard scenario is shown over the whole region, and a more in-depth view, where the rainfall path of each rain gauge can be displayed and constantly compared with the rainfall thresholds. To better account for the high spatial variability of the physical features, which affects the relationship between rainfall and landslides, the region is subdivided into 25 alert zones, each provided with a specific threshold. The warning system reflects this subdivision: using a network of 332 rain gauges, it monitors each alert zone separately, and warnings can be issued independently from one alert zone to another. An important feature of the warning system is the use of thresholds that may vary in time, adapting to the rainfall path recorded by the rain gauges. Depending on when the starting time of the rainfall event is set, the comparison with the threshold may produce different outcomes. Therefore, a recursive algorithm was developed to check all possible starting times against the thresholds, highlighting the worst scenario and showing in the WebGIS interface at what time and by how much the rainfall path has exceeded or will exceed the most critical threshold. Besides forecasting and monitoring the hazard scenario over the whole region, with hazard levels differentiated for 25 distinct alert zones, the system can be used to gather, analyze, visualize, explore, interpret and store rainfall data, thus representing a potential support to both decision makers and scientists.
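
    The check over possible event starting times can be sketched as follows: for each candidate start, compute the event duration and mean intensity of the recorded rainfall and compare them with an intensity-duration threshold, keeping the most critical combination. The power-law form I = a * D**b, its coefficients, and the hourly data below are assumptions for illustration, not the operational thresholds of the Tuscan system.

      import numpy as np

      def worst_case_exceedance(rain_mm_per_h, a=10.0, b=-0.5):
          """Scan all possible event start hours and return the most critical
          intensity-duration combination relative to the threshold I = a * D**b.

          rain_mm_per_h : hourly rainfall ending at 'now' (most recent value last).
          a, b          : hypothetical threshold coefficients for one alert zone.
          """
          n = len(rain_mm_per_h)
          worst_ratio, worst_start = 0.0, None
          for start in range(n):
              duration_h = n - start                      # hours from candidate start to now
              mean_intensity = rain_mm_per_h[start:].mean()
              threshold = a * duration_h ** b
              ratio = mean_intensity / threshold          # >1 means the threshold is exceeded
              if ratio > worst_ratio:
                  worst_ratio, worst_start = ratio, start
          return worst_start, worst_ratio

      # Hypothetical 24 h of hourly rain at one rain gauge (mm/h), most recent value last
      rain = np.array([0] * 16 + [0, 0, 1, 4, 8, 12, 6, 3], dtype=float)
      print(worst_case_exceedance(rain))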

  18. Olfactory Detection Thresholds and Adaptation in Adults with Autism Spectrum Condition

    ERIC Educational Resources Information Center

    Tavassoli, T.; Baron-Cohen, S.

    2012-01-01

    Sensory issues have been widely reported in Autism Spectrum Conditions (ASC). Since olfaction is one of the least investigated senses in ASC, the current studies explore olfactory detection thresholds and adaptation to olfactory stimuli in adults with ASC. 80 participants took part, 38 (18 females, 20 males) with ASC and 42 control participants…

  19. Microscopy mineral image enhancement based on improved adaptive threshold in nonsubsampled shearlet transform domain

    NASA Astrophysics Data System (ADS)

    Li, Liangliang; Si, Yujuan; Jia, Zhenhong

    2018-03-01

    In this paper, a novel microscopy mineral image enhancement method based on an adaptive threshold in the non-subsampled shearlet transform (NSST) domain is proposed. First, the image is decomposed into one low-frequency sub-band and several high-frequency sub-bands. Second, gamma correction is applied to the low-frequency sub-band coefficients, and the improved adaptive threshold is adopted to suppress noise in the high-frequency sub-band coefficients. Third, the processed coefficients are reconstructed with the inverse NSST. Finally, an unsharp filter is used to enhance the details of the reconstructed image. Experimental results on various microscopy mineral images demonstrate that the proposed approach achieves better enhancement in terms of both objective and subjective metrics.
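
    A full NSST decomposition requires a dedicated toolbox, so the sketch below covers only the two generic steps named in the abstract, gamma correction and a final unsharp filter, applied to a grayscale image with NumPy/SciPy. The gamma value, blur width and test image are assumptions for illustration.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def gamma_correct(img, gamma=0.6):
          """Brighten/compress the dynamic range of a [0, 1] grayscale image."""
          return np.clip(img, 0.0, 1.0) ** gamma

      def unsharp_mask(img, sigma=2.0, amount=1.0):
          """Classic unsharp filter: add back the high-pass residual."""
          blurred = gaussian_filter(img, sigma=sigma)
          return np.clip(img + amount * (img - blurred), 0.0, 1.0)

      # Hypothetical mineral micrograph normalised to [0, 1]
      rng = np.random.default_rng(1)
      image = rng.random((256, 256))
      enhanced = unsharp_mask(gamma_correct(image))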

  20. Biodiversity response to natural gradients of multiple stressors on continental margins

    PubMed Central

    Sperling, Erik A.; Frieder, Christina A.; Levin, Lisa A.

    2016-01-01

    Sharp increases in atmospheric CO2 are resulting in ocean warming, acidification and deoxygenation that threaten marine organisms on continental margins and their ecological functions and resulting ecosystem services. The relative influence of these stressors on biodiversity remains unclear, as well as the threshold levels for change and when secondary stressors become important. One strategy to interpret adaptation potential and predict future faunal change is to examine ecological shifts along natural gradients in the modern ocean. Here, we assess the explanatory power of temperature, oxygen and the carbonate system for macrofaunal diversity and evenness along continental upwelling margins using variance partitioning techniques. Oxygen levels have the strongest explanatory capacity for variation in species diversity. Sharp drops in diversity are seen as O2 levels decline through the 0.5–0.15 ml l−1 (approx. 22–6 µM; approx. 21–5 matm) range, and as temperature increases through the 7–10°C range. pCO2 is the best explanatory variable in the Arabian Sea, but explains little of the variance in diversity in the eastern Pacific Ocean. By contrast, very little variation in evenness is explained by these three global change variables. The identification of sharp thresholds in ecological response are used here to predict areas of the seafloor where diversity is most at risk to future marine global change, noting that the existence of clear regional differences cautions against applying global thresholds. PMID:27122565

  1. VLSI implementation of a new LMS-based algorithm for noise removal in ECG signal

    NASA Astrophysics Data System (ADS)

    Satheeskumaran, S.; Sabrigiriraj, M.

    2016-06-01

    Least mean square (LMS)-based adaptive filters are widely deployed for removing artefacts in the electrocardiogram (ECG) because of their low computational cost, but they exhibit high mean square error (MSE) in noisy environments. The transform-domain variable step-size LMS algorithm reduces the MSE at the cost of computational complexity. In this paper, a variable step-size delayed LMS adaptive filter is used to remove artefacts from the ECG signal for improved feature extraction. Dedicated digital signal processors provide fast processing, but they are not flexible. With field programmable gate arrays, pipelined architectures can be used to enhance system performance. The pipelined architecture improves the operating efficiency of the adaptive filter and reduces power consumption. This technique provides a high signal-to-noise ratio and low MSE with reduced computational complexity; hence, it is a useful method for monitoring patients with heart-related problems.
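
    For readers unfamiliar with adaptive noise cancellation, the sketch below implements a normalized LMS canceller, whose power-scaled step size is a common software analogue of the variable step-size filters discussed above; it is not the authors' delayed-LMS VLSI architecture. Filter order, step size and the synthetic signals are assumptions.

      import numpy as np

      def nlms_cancel(primary, reference, order=16, mu=0.5, eps=1e-6):
          """Normalized LMS noise canceller.

          primary   : ECG plus noise picked up by the primary sensor
          reference : noise-correlated reference signal
          Returns the error signal e, i.e. the cleaned ECG estimate.
          """
          w = np.zeros(order)
          e = np.zeros(len(primary))
          for n in range(order, len(primary)):
              x = reference[n - order:n][::-1]          # most recent sample first
              y = w @ x                                 # noise estimate
              e[n] = primary[n] - y                     # cleaned sample
              w += (mu / (eps + x @ x)) * e[n] * x      # step size scaled by input power
          return e

      # Hypothetical test: synthetic "ECG" plus filtered noise
      rng = np.random.default_rng(2)
      noise = rng.standard_normal(2000)
      ecg = np.sin(2 * np.pi * np.arange(2000) / 200.0)
      primary = ecg + np.convolve(noise, [0.6, 0.3, 0.1], mode="same")
      clean = nlms_cancel(primary, noise)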

  2. Wavelet tree structure based speckle noise removal for optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Yuan, Xin; Liu, Xuan; Liu, Yang

    2018-02-01

    We report a new speckle noise removal algorithm in optical coherence tomography (OCT). Though wavelet domain thresholding algorithms have demonstrated superior advantages in suppressing noise magnitude and preserving image sharpness in OCT, the wavelet tree structure has not been investigated in previous applications. In this work, we propose an adaptive wavelet thresholding algorithm via exploiting the tree structure in wavelet coefficients to remove the speckle noise in OCT images. The threshold for each wavelet band is adaptively selected following a special rule to retain the structure of the image across different wavelet layers. Our results demonstrate that the proposed algorithm outperforms conventional wavelet thresholding, with significant advantages in preserving image features.
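
    As a point of comparison for the tree-structured rule described above, the sketch below applies conventional per-subband soft thresholding with PyWavelets, using a universal threshold scaled by a noise estimate from the finest diagonal band. It is a baseline sketch, not the authors' algorithm; wavelet choice, decomposition level and test image are assumptions.

      import numpy as np
      import pywt

      def wavelet_denoise(img, wavelet="db4", level=3):
          """Soft-threshold every detail subband with a universal threshold
          (sigma estimated from the finest diagonal band); not the tree-structured rule."""
          coeffs = pywt.wavedec2(img, wavelet, level=level)
          # Robust noise estimate from the finest diagonal detail band
          sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
          thr = sigma * np.sqrt(2.0 * np.log(img.size))
          out = [coeffs[0]]
          for (ch, cv, cd) in coeffs[1:]:
              out.append(tuple(pywt.threshold(c, thr, mode="soft") for c in (ch, cv, cd)))
          return pywt.waverec2(out, wavelet)

      # Hypothetical noisy OCT B-scan
      rng = np.random.default_rng(3)
      noisy = rng.random((256, 256)) + 0.2 * rng.standard_normal((256, 256))
      denoised = wavelet_denoise(noisy)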

  3. A MATLAB toolbox for the efficient estimation of the psychometric function using the updated maximum-likelihood adaptive procedure

    PubMed Central

    Richards, V. M.; Dai, W.

    2014-01-01

    A MATLAB toolbox for the efficient estimation of the threshold, slope, and lapse rate of the psychometric function is described. The toolbox enables the efficient implementation of the updated maximum-likelihood (UML) procedure. The toolbox uses an object-oriented architecture for organizing the experimental variables and computational algorithms, which provides experimenters with flexibility in experimental design and data management. Descriptions of the UML procedure and the UML Toolbox are provided, followed by toolbox use examples. Finally, guidelines and recommendations of parameter configurations are given. PMID:24671826

  4. Adaptive Blending of Model and Observations for Automated Short-Range Forecasting: Examples from the Vancouver 2010 Olympic and Paralympic Winter Games

    NASA Astrophysics Data System (ADS)

    Bailey, Monika E.; Isaac, George A.; Gultepe, Ismail; Heckman, Ivan; Reid, Janti

    2014-01-01

    An automated short-range forecasting system, adaptive blending of observations and model (ABOM), was tested in real time during the 2010 Vancouver Olympic and Paralympic Winter Games in British Columbia. Data at 1-min time resolution were available from a newly established, dense network of surface observation stations. Climatological data were not available at these new stations. This, combined with output from new high-resolution numerical models, provided a unique and exciting setting to test nowcasting systems in mountainous terrain during winter weather conditions. The ABOM method blends extrapolations in time of recent local observations with numerical weather prediction (NWP) model forecasts to generate short-range point forecasts of surface variables out to 6 h. The relative weights of the model forecast and the observation extrapolation are based on performance over recent history. The average performance of ABOM nowcasts during February and March 2010 was evaluated using standard scores and thresholds important for Olympic events. Significant improvements over the model forecasts alone were obtained for continuous variables such as temperature, relative humidity and wind speed. The small improvements to forecasts of variables such as visibility and ceiling, subject to discontinuous changes, are attributed to the persistence component of ABOM.
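
    The performance-weighted blending idea can be illustrated by weighting the model forecast and the observation extrapolation by the inverse of their recent errors, as in the sketch below. The weighting rule, error measures and numbers are assumptions for illustration and are simpler than the actual ABOM formulation.

      import numpy as np

      def blend_forecast(model_fc, persistence_fc, recent_model_err, recent_persist_err, eps=1e-6):
          """Weight two forecasts by the inverse of their recent mean absolute errors."""
          w_model = 1.0 / (recent_model_err + eps)
          w_persist = 1.0 / (recent_persist_err + eps)
          return (w_model * model_fc + w_persist * persistence_fc) / (w_model + w_persist)

      # Hypothetical 2 m temperature nowcast for one station
      model_fc = -3.5                      # NWP forecast (deg C)
      persistence_fc = -2.1                # extrapolation of recent observations (deg C)
      mae_model, mae_persist = 1.8, 0.9    # recent mean absolute errors (deg C)
      print(blend_forecast(model_fc, persistence_fc, mae_model, mae_persist))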

  5. Fully Automated Detection of Cloud and Aerosol Layers in the CALIPSO Lidar Measurements

    NASA Technical Reports Server (NTRS)

    Vaughan, Mark A.; Powell, Kathleen A.; Kuehn, Ralph E.; Young, Stuart A.; Winker, David M.; Hostetler, Chris A.; Hunt, William H.; Liu, Zhaoyan; McGill, Matthew J.; Getzewich, Brian J.

    2009-01-01

    Accurate knowledge of the vertical and horizontal extent of clouds and aerosols in the earth's atmosphere is critical in assessing the planet's radiation budget and for advancing human understanding of climate change issues. To retrieve this fundamental information from the elastic backscatter lidar data acquired during the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO) mission, a selective, iterated boundary location (SIBYL) algorithm has been developed and deployed. SIBYL accomplishes its goals by integrating an adaptive context-sensitive profile scanner into an iterated multiresolution spatial averaging scheme. This paper provides an in-depth overview of the architecture and performance of the SIBYL algorithm. It begins with a brief review of the theory of target detection in noise-contaminated signals, and an enumeration of the practical constraints levied on the retrieval scheme by the design of the lidar hardware, the geometry of a space-based remote sensing platform, and the spatial variability of the measurement targets. Detailed descriptions are then provided for both the adaptive threshold algorithm used to detect features of interest within individual lidar profiles and the fully automated multiresolution averaging engine within which this profile scanner functions. The resulting fusion of profile scanner and averaging engine is specifically designed to optimize the trade-offs between the widely varying signal-to-noise ratio of the measurements and the disparate spatial resolutions of the detection targets. Throughout the paper, specific algorithm performance details are illustrated using examples drawn from the existing CALIPSO dataset. Overall performance is established by comparisons to existing layer height distributions obtained by other airborne and space-based lidars.

  6. Carbon and nutrient use efficiencies optimally balance stoichiometric imbalances

    NASA Astrophysics Data System (ADS)

    Manzoni, Stefano; Čapek, Petr; Lindahl, Björn; Mooshammer, Maria; Richter, Andreas; Šantrůčková, Hana

    2016-04-01

    Decomposer organisms face large stoichiometric imbalances because their food is generally poor in nutrients compared to the decomposer cellular composition. The presence of excess carbon (C) requires adaptations to utilize nutrients effectively while disposing of or investing excess C. As food composition changes, these adaptations lead to variable C- and nutrient-use efficiencies (defined as the ratios of C and nutrients used for growth over the amounts consumed). For organisms to be ecologically competitive, these changes in efficiencies with resource stoichiometry have to balance advantages and disadvantages in an optimal way. We hypothesize that efficiencies are varied so that community growth rate is optimized along stoichiometric gradients of their resources. Building from previous theories, we predict that maximum growth is achieved when C and nutrients are co-limiting, so that the maximum C-use efficiency is reached, and nutrient release is minimized. This optimality principle is expected to be applicable across terrestrial-aquatic borders, to various elements, and at different trophic levels. While the growth rate maximization hypothesis has been evaluated for consumers and predators, in this contribution we test it for terrestrial and aquatic decomposers degrading resources across wide stoichiometry gradients. The optimality hypothesis predicts constant efficiencies at low substrate C:N and C:P, whereas above a stoichiometric threshold, C-use efficiency declines and nitrogen- and phosphorus-use efficiencies increase up to one. Thus, high resource C:N and C:P lead to low C-use efficiency, but effective retention of nitrogen and phosphorus. Predictions are broadly consistent with efficiency trends in decomposer communities across terrestrial and aquatic ecosystems.

  7. THE VISUAL DISCRIMINATION OF INTENSITY AND THE WEBER-FECHNER LAW

    PubMed Central

    Hecht, Selig

    1924-01-01

    1. A study of the historical development of the Weber-Fechner law shows that it fails to describe intensity perception; first, because it is based on observations which do not record intensity discrimination accurately, and second, because it omits the essentially discontinuous nature of the recognition of intensity differences. 2. There is presented a series of data, assembled from various sources, which proves that in the visual discrimination of intensity the threshold difference ΔI bears no constant relation to the intensity I. The evidence shows unequivocally that as the intensity rises, the ratio ΔI/I first decreases and then increases. 3. The data are then subjected to analysis in terms of a photochemical system already proposed for the visual activity of the rods and cones. It is found that for the retinal elements to discriminate between one intensity and the next perceptible one, the transition from one to the other must involve the decomposition of a constant amount of photosensitive material. 4. The magnitude of this unitary increment in the quantity of photochemical action is greater for the rods than for the cones. Therefore, below a certain critical illumination—the cone threshold—intensity discrimination is controlled by the rods alone, but above this point it is determined by the cones alone. 5. The unitary increments in retinal photochemical action may be interpreted as being recorded by each rod and cone; or as conditioning the variability of the retinal cells so that each increment involves a constant increase in the number of active elements; or as a combination of the two interpretations. 6. Comparison with critical data of such diverse nature as dark adaptation, absolute thresholds, and visual acuity shows that the analysis is consistent with well established facts of vision. PMID:19872133

  8. Future streamflow droughts in glacierized catchments: the impact of dynamic glacier modelling and changing thresholds

    NASA Astrophysics Data System (ADS)

    Van Tiel, Marit; Van Loon, Anne; Wanders, Niko; Vis, Marc; Teuling, Ryan; Stahl, Kerstin

    2017-04-01

    In glacierized catchments, snowpack and glaciers function as an important storage of water and hydrographs of highly glacierized catchments in mid- and high latitudes thus show a clear seasonality with low flows in winter and high flows in summer. Due to the ongoing climate change we expect this type of storage capacity to decrease with resultant consequences for the discharge regime. In this study we focus on streamflow droughts, here defined as below average water availability specifically in the high flow season, and which methods are most suitable to characterize future streamflow droughts as regimes change. Two glacierized catchments, Nigardsbreen (Norway) and Wolverine (Alaska), are used as case study and streamflow droughts are compared between two periods, 1975-2004 and 2071-2100. Streamflow is simulated with the HBV light model, calibrated on observed discharge and seasonal glacier mass balances, for two climate change scenarios (RCP 4.5 & RCP 8.5). In studies on future streamflow drought often the same variable threshold of the past has been applied to the future, but in regions where a regime shift is expected this method gives severe "droughts" in the historic high-flow period. We applied the new alternative transient variable threshold, a threshold that adapts to the changing hydrological regime and is thus better able to cope with this issue, but has never been thoroughly tested in glacierized catchments. As the glacier area representation in the hydrological modelling can also influence the modelled discharge and the derived streamflow droughts, we evaluated in this study both the difference between the historical variable threshold (HVT) and transient variable threshold (TVT) and two different glacier area conceptualisations (constant area (C) and dynamical area (D)), resulting in four scenarios: HVT-C, HVT-D, TVT-C and TVT-D. Results show a drastic decrease in the number of droughts in the HVT-C scenario due to increased glacier melt. The deficit volume is expected to be up to almost eight times larger in the future compared to the historical period (Wolverine, +674%) in the HVT-D scenario, caused by the regime shift. Using the TVT the drought characteristics between the C and D scenarios and between future and historic droughts are more similar. However, when using the TVT, causing factors of future droughts, anomalies in temperature and/or precipitation, can be analysed. This study highlights the different conclusions that may be drawn on future streamflow droughts in glacierized catchments depending on methodological choices. They could be used to answer different questions: the TVT for analysing drought processes in the future, the HVT to assess changes between historical and future periods, the constant area conceptualisation to analyse the effect of short term climate variability and the dynamical glacier area to model realistic future discharges in glacierized catchments.
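
    A simplified version of a day-of-year variable threshold is sketched below: the pct-th percentile of daily flows for each calendar day, computed either from a fixed historical period (HVT-like) or from a window of years around the period of interest (TVT-like). This is a minimal sketch under those assumptions; the study's transient variable threshold is more elaborate.

      import numpy as np

      def daily_variable_threshold(flows, years, doy, pct=20, ref_years=None):
          """Day-of-year flow threshold (pct-th percentile across a set of years).

          flows, years, doy : 1-D arrays of daily discharge, calendar year and day of year.
          ref_years         : years used to build the threshold (a historical period for an
                              HVT-like threshold, a moving window for a TVT-like one).
          """
          if ref_years is not None:
              mask = np.isin(years, ref_years)
              flows, doy = flows[mask], doy[mask]
          thr = np.full(366, np.nan)
          for d in range(1, 367):
              vals = flows[doy == d]
              if vals.size:
                  thr[d - 1] = np.percentile(vals, pct)
          return thr   # index 0 corresponds to day of year 1

      # Hypothetical use: drought days are those where flow < thr[doy - 1]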

  9. Olive flowering phenology variation between different cultivars in Spain and Italy: modeling analysis

    NASA Astrophysics Data System (ADS)

    Garcia-Mozo, H.; Orlandi, F.; Galan, C.; Fornaciari, M.; Romano, B.; Ruiz, L.; Diaz de La Guardia, C.; Trigo, M. M.; Chuine, I.

    2009-03-01

    Phenological data are sensitive indicators of how plants are adapted to local climate and how they respond to climatic changes. Modeling flowering phenology allows us to identify the meteorological variables determining the reproductive cycle. The phenology of temperate woody plants is assumed to be locally adapted to climate. Nevertheless, recent research shows that local adaptation may not be an important constraint in predicting phenological responses. We analyzed variations in flowering dates of Olea europaea L. at different sites in Spain and Italy, testing for genetic differentiation of flowering phenology among olive varieties to estimate whether local modeling is necessary for olive. We built models for the onset and peak flowering dates at different sites in Andalusia and Puglia. Process-based phenological models using temperature as the input variable and photoperiod as the threshold date to start temperature accumulation were developed to predict both dates. Our results confirm and update previous results indicating an advance in olive onset dates. The results indicate that both internal and external validity were higher in the models that used photoperiod as the cue to start accumulating temperature. The use of the unified model for modeling the start and peak dates in the different localities provides standardized results for comparative study. The use of regional models, grouping localities by variety and climatic similarity, indicates that local adaptation would not be an important factor in predicting olive phenological responses to the global temperature increase.

  10. Species-specific responses to ocean acidification should account for local adaptation and adaptive plasticity.

    PubMed

    Vargas, Cristian A; Lagos, Nelson A; Lardies, Marco A; Duarte, Cristian; Manríquez, Patricio H; Aguilera, Victor M; Broitman, Bernardo; Widdicombe, Steve; Dupont, Sam

    2017-03-13

    Global stressors, such as ocean acidification, constitute a rapidly emerging and significant problem for marine organisms, ecosystem functioning and services. The coastal ecosystems of the Humboldt Current System (HCS) off Chile harbour a broad physical-chemical latitudinal and temporal gradient with considerable patchiness in local oceanographic conditions. This heterogeneity may, in turn, modulate the specific tolerances of organisms to climate stress in species with populations distributed along this environmental gradient. Negative response ratios are observed in species models (mussels, gastropods and planktonic copepods) exposed to changes in the partial pressure of CO2 (pCO2) far from the average and extreme pCO2 levels experienced in their native habitats. This variability in response between populations reveals the potential role of local adaptation and/or adaptive phenotypic plasticity in increasing resilience of species to environmental change. The growing use of standard ocean acidification scenarios and treatment levels in experimental protocols brings with it a danger that inter-population differences are confounded by the varying environmental conditions naturally experienced by different populations. Here, we propose the use of a simple index taking into account the natural pCO2 variability, for a better interpretation of the potential consequences of ocean acidification on species inhabiting variable coastal ecosystems. Using scenarios that take into account the natural variability will allow understanding of the limits to plasticity across organismal traits, populations and species.

  11. Psychophysical Measurement of Rod and Cone Thresholds in Stargardt Disease with Full-Field Stimuli

    PubMed Central

    Collison, Frederick T.; Fishman, Gerald A.; McAnany, J. Jason; Zernant, Jana; Allikmets, Rando

    2014-01-01

    Purpose To investigate psychophysical thresholds in Stargardt disease with the full-field stimulus test (FST). Methods Visual acuity (VA), spectral-domain optical coherence tomography (SD-OCT), full-field electroretinogram (ERG), and FST measurements were made in one eye of 24 patients with Stargardt disease. Dark-adapted rod FST thresholds were measured with short-wavelength stimuli, and cone FST thresholds were obtained from the cone plateau phase of dark adaptation using long-wavelength stimuli. Correlation coefficients were calculated for FST thresholds versus macular thickness, VA and ERG amplitudes. Results Stargardt patient FST cone thresholds correlated significantly with VA, macular thickness, and ERG cone-response amplitudes (all P<0.01). The patients’ FST rod thresholds correlated with ERG rod-response amplitudes (P<0.01), but not macular thickness (P=0.05). All Stargardt disease patients with flecks confined to the macula and most of the patients with flecks extending outside of the macula had normal FST thresholds. All patients with extramacular atrophic changes had elevated FST cone thresholds and most had elevated FST rod thresholds. Conclusion FST rod and cone threshold elevation in Stargardt disease patients correlated well with measures of structure and function, as well as ophthalmoscopic retinal appearance. FST appears to be a useful tool for assessing rod and cone function in Stargardt disease. PMID:24695063

  12. Properties of perimetric threshold estimates from Full Threshold, SITA Standard, and SITA Fast strategies.

    PubMed

    Artes, Paul H; Iwase, Aiko; Ohno, Yuko; Kitazawa, Yoshiaki; Chauhan, Balwantray C

    2002-08-01

    To investigate the distributions of threshold estimates with the Swedish Interactive Threshold Algorithms (SITA) Standard, SITA Fast, and the Full Threshold algorithm (Humphrey Field Analyzer; Zeiss-Humphrey Instruments, Dublin, CA) and to compare the pointwise test-retest variability of these strategies. One eye of 49 patients (mean age, 61.6 years; range, 22-81) with glaucoma (Mean Deviation mean, -7.13 dB; range, +1.8 to -23.9 dB) was examined four times with each of the three strategies. The mean and median SITA Standard and SITA Fast threshold estimates were compared with a "best available" estimate of sensitivity (mean results of three Full Threshold tests). Pointwise 90% retest limits (5th and 95th percentiles of retest thresholds) were derived to assess the reproducibility of individual threshold estimates. The differences between the threshold estimates of the SITA and Full Threshold strategies were largest ( approximately 3 dB) for midrange sensitivities ( approximately 15 dB). The threshold distributions of SITA were considerably different from those of the Full Threshold strategy. The differences remained of similar magnitude when the analysis was repeated on a subset of 20 locations that are examined early during the course of a Full Threshold examination. With sensitivities above 25 dB, both SITA strategies exhibited lower test-retest variability than the Full Threshold strategy. Below 25 dB, the retest intervals of SITA Standard were slightly smaller than those of the Full Threshold strategy, whereas those of SITA Fast were larger. SITA Standard may be superior to the Full Threshold strategy for monitoring patients with visual field loss. The greater test-retest variability of SITA Fast in areas of low sensitivity is likely to offset the benefit of even shorter test durations with this strategy. The sensitivity differences between the SITA and Full Threshold strategies may relate to factors other than reduced fatigue. They are, however, small in comparison to the test-retest variability.

  13. Detection of immunocytological markers in photomicroscopic images

    NASA Astrophysics Data System (ADS)

    Friedrich, David; zur Jacobsmühlen, Joschka; Braunschweig, Till; Bell, André; Chaisaowong, Kraisorn; Knüchel-Clarke, Ruth; Aach, Til

    2012-03-01

    Early detection of cervical cancer can be achieved through visual analysis of cell anomalies. The established PAP smear achieves a sensitivity of 50-90%, most false negative results are caused by mistakes in the preparation of the specimen or reader variability in the subjective, visual investigation. Since cervical cancer is caused by human papillomavirus (HPV), the detection of HPV-infected cells opens new perspectives for screening of precancerous abnormalities. Immunocytochemical preparation marks HPV-positive cells in brush smears of the cervix with high sensitivity and specificity. The goal of this work is the automated detection of all marker-positive cells in microscopic images of a sample slide stained with an immunocytochemical marker. A color separation technique is used to estimate the concentrations of the immunocytochemical marker stain as well as of the counterstain used to color the nuclei. Segmentation methods based on Otsu's threshold selection method and Mean Shift are adapted to the task of segmenting marker-positive cells and their nuclei. The best detection performance of single marker-positive cells was achieved with the adapted thresholding method with a sensitivity of 95.9%. The contours differed by a modified Hausdorff Distance (MHD) of 2.8 μm. Nuclei of single marker positive cells were detected with a sensitivity of 95.9% and MHD = 1.02 μm.
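
    A generic software analogue of the color separation and adapted thresholding described above is sketched below using scikit-image's H&E-DAB stain deconvolution and Otsu's method. The assumption that the immunocytochemical marker maps to the DAB-like channel and the counterstain to the hematoxylin-like channel is illustrative; the study's stains and its adapted Otsu variant may differ.

      import numpy as np
      from skimage.color import rgb2hed
      from skimage.filters import threshold_otsu

      def segment_marker_positive(rgb_image):
          """Rough sketch: separate stains, then Otsu-threshold each channel.

          Assumes the marker behaves like the DAB channel of the H&E-DAB
          deconvolution and the counterstained nuclei roughly follow the
          hematoxylin-like channel; channel choice is an assumption.
          """
          hed = rgb2hed(rgb_image)
          marker = hed[..., 2]          # DAB-like channel
          nuclei = hed[..., 0]          # hematoxylin-like channel
          marker_mask = marker > threshold_otsu(marker)
          nuclei_mask = nuclei > threshold_otsu(nuclei)
          return marker_mask, nuclei_mask

      # Hypothetical usage on an RGB micrograph (float array in [0, 1])
      rng = np.random.default_rng(8)
      fake_rgb = rng.random((128, 128, 3))
      marker_mask, nuclei_mask = segment_marker_positive(fake_rgb)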

  14. Lowered threshold energy for femtosecond laser induced optical breakdown in a water based eye model by aberration correction with adaptive optics.

    PubMed

    Hansen, Anja; Géneaux, Romain; Günther, Axel; Krüger, Alexander; Ripken, Tammo

    2013-06-01

    In femtosecond laser ophthalmic surgery tissue dissection is achieved by photodisruption based on laser induced optical breakdown. In order to minimize collateral damage to the eye laser surgery systems should be optimized towards the lowest possible energy threshold for photodisruption. However, optical aberrations of the eye and the laser system distort the irradiance distribution from an ideal profile which causes a rise in breakdown threshold energy even if great care is taken to minimize the aberrations of the system during design and alignment. In this study we used a water chamber with an achromatic focusing lens and a scattering sample as eye model and determined breakdown threshold in single pulse plasma transmission loss measurements. Due to aberrations, the precise lower limit for breakdown threshold irradiance in water is still unknown. Here we show that the threshold energy can be substantially reduced when using adaptive optics to improve the irradiance distribution by spatial beam shaping. We found that for initial aberrations with a root-mean-square wave front error of only one third of the wavelength the threshold energy can still be reduced by a factor of three if the aberrations are corrected to the diffraction limit by adaptive optics. The transmitted pulse energy is reduced by 17% at twice the threshold. Furthermore, the gas bubble motions after breakdown for pulse trains at 5 kilohertz repetition rate show a more transverse direction in the corrected case compared to the more spherical distribution without correction. Our results demonstrate how both applied and transmitted pulse energy could be reduced during ophthalmic surgery when correcting for aberrations. As a consequence, the risk of retinal damage by transmitted energy and the extent of collateral damage to the focal volume could be minimized accordingly when using adaptive optics in fs-laser surgery.

  15. Lowered threshold energy for femtosecond laser induced optical breakdown in a water based eye model by aberration correction with adaptive optics

    PubMed Central

    Hansen, Anja; Géneaux, Romain; Günther, Axel; Krüger, Alexander; Ripken, Tammo

    2013-01-01

    In femtosecond laser ophthalmic surgery tissue dissection is achieved by photodisruption based on laser induced optical breakdown. In order to minimize collateral damage to the eye laser surgery systems should be optimized towards the lowest possible energy threshold for photodisruption. However, optical aberrations of the eye and the laser system distort the irradiance distribution from an ideal profile which causes a rise in breakdown threshold energy even if great care is taken to minimize the aberrations of the system during design and alignment. In this study we used a water chamber with an achromatic focusing lens and a scattering sample as eye model and determined breakdown threshold in single pulse plasma transmission loss measurements. Due to aberrations, the precise lower limit for breakdown threshold irradiance in water is still unknown. Here we show that the threshold energy can be substantially reduced when using adaptive optics to improve the irradiance distribution by spatial beam shaping. We found that for initial aberrations with a root-mean-square wave front error of only one third of the wavelength the threshold energy can still be reduced by a factor of three if the aberrations are corrected to the diffraction limit by adaptive optics. The transmitted pulse energy is reduced by 17% at twice the threshold. Furthermore, the gas bubble motions after breakdown for pulse trains at 5 kilohertz repetition rate show a more transverse direction in the corrected case compared to the more spherical distribution without correction. Our results demonstrate how both applied and transmitted pulse energy could be reduced during ophthalmic surgery when correcting for aberrations. As a consequence, the risk of retinal damage by transmitted energy and the extent of collateral damage to the focal volume could be minimized accordingly when using adaptive optics in fs-laser surgery. PMID:23761849

  16. Decision-support tools for Extreme Weather and Climate Events in the Northeast United States

    NASA Astrophysics Data System (ADS)

    Kumar, S.; Lowery, M.; Whelchel, A.

    2013-12-01

    Decision-support tools were assessed for the 2013 National Climate Assessment technical input document, "Climate Change in the Northeast, A Sourcebook". The assessment included tools designed to generate and deliver actionable information to assist states and highly populated urban and other communities in assessment of climate change vulnerability and risk, quantification of effects, and identification of adaptive strategies in the context of adaptation planning across inter-annual, seasonal and multi-decadal time scales. State-level adaptation planning in the Northeast has generally relied on qualitative vulnerability assessments by expert panels and stakeholders, although some states have undertaken initiatives to develop statewide databases to support vulnerability assessments by urban and local governments, and state agencies. The devastation caused by Superstorm Sandy in October 2012 has raised awareness of the potential for extreme weather events to unprecedented levels and created urgency for action, especially in coastal urban and suburban communities that experienced pronounced impacts - especially in New Jersey, New York and Connecticut. Planning approaches vary, but any adaptation and resiliency planning process must include the following:
    - Knowledge of the probable change in a climate variable (e.g., precipitation, temperature, sea-level rise) over time or that the climate variable will attain a certain threshold deemed to be significant;
    - Knowledge of intensity and frequency of climate hazards (past, current or future events or conditions with potential to cause harm) and their relationship with climate variables;
    - Assessment of climate vulnerabilities (sensitive resources, infrastructure or populations exposed to climate-related hazards);
    - Assessment of relative risks to vulnerable resources;
    - Identification and prioritization of adaptive strategies to address risks.
    Many organizations are developing decision-support tools to assist in the urban planning process by addressing some of these needs. In this paper we highlight the decision tools available today, discuss their application in selected case studies, and present a gap analysis with opportunities for innovation and future work.

  17. Automatic video shot boundary detection using k-means clustering and improved adaptive dual threshold comparison

    NASA Astrophysics Data System (ADS)

    Sa, Qila; Wang, Zhihui

    2018-03-01

    At present, content-based video retrieval (CBVR) is the most mainstream video retrieval method, using features of the video itself to perform automatic identification and retrieval. This method involves a key technology, i.e. shot segmentation. In this paper, a method for automatic video shot boundary detection with K-means clustering and improved adaptive dual threshold comparison is proposed. First, the visual features of every frame are extracted and divided into two categories using the K-means clustering algorithm, namely frames with significant change and frames with no significant change. Then, based on the classification results, the improved adaptive dual threshold comparison method is used to determine the abrupt as well as gradual shot boundaries. Finally, automatic video shot boundary detection is achieved.
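
    The two-stage scheme described above can be sketched as follows: cluster per-frame difference values into "significant change" and "no significant change" groups, then apply a dual threshold derived from the no-change baseline to separate abrupt cuts from gradual-transition candidates. The feature, threshold factors and synthetic signal are assumptions, not the paper's exact formulation.

      import numpy as np
      from sklearn.cluster import KMeans

      def shot_boundaries(frame_diffs, high_factor=3.0, low_factor=1.5):
          """Cluster frame-difference values, then apply a dual threshold.

          frame_diffs : 1-D array of per-frame feature distances (e.g. histogram differences).
          Returns indices of abrupt cuts and candidate gradual transitions.
          """
          labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
              frame_diffs.reshape(-1, 1))
          # Cluster with the larger mean difference = "significant change" frames
          big = labels == np.argmax([frame_diffs[labels == k].mean() for k in (0, 1)])
          mu = frame_diffs[~big].mean()                 # baseline from "no change" frames
          t_high, t_low = high_factor * mu, low_factor * mu
          abrupt = np.where(frame_diffs >= t_high)[0]
          gradual = np.where((frame_diffs >= t_low) & (frame_diffs < t_high))[0]
          return abrupt, gradual

      # Hypothetical difference signal with two abrupt cuts
      rng = np.random.default_rng(4)
      diffs = np.abs(0.05 + 0.01 * rng.standard_normal(300))
      diffs[[80, 210]] = 1.0
      print(shot_boundaries(diffs))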

  18. Bacterial pattern and role of laboratory parameters as marker for neonatal sepsis

    NASA Astrophysics Data System (ADS)

    Ruslie, R. H.; Tjipta, D. G.; Samosir, C. T.; Hasibuan, B. S.

    2018-03-01

    The World Health Organization (WHO) records 5 million neonatal deaths each year due to sepsis, 98% of which occur in developing countries. Diagnosis of neonatal sepsis needs to be confirmed with a positive culture from normally sterile sites; on the other hand, postponing treatment worsens the disease and increases mortality. This study was conducted to evaluate the bacterial pattern of neonatal sepsis and to compare laboratory parameter differences between suspected and confirmed sepsis. It was a retrospective analytic study of 94 neonates in the Perinatology Division, Adam Malik General Hospital, Medan, from November 2016 until January 2017. Blood cultures were taken to confirm the diagnosis. Laboratory parameters were collected from medical records. Variables with significant results were analyzed for their accuracy. P < 0.05 was considered statistically significant, with 95% confidence intervals. Of 94 neonates, positive cultures were found in 55.3%, and the most common etiology was Klebsiella pneumoniae (22.6%). There were significant differences in neutrophil/lymphocyte ratio and procalcitonin between suspected and confirmed sepsis (p = 0.025 and 0.008, respectively). With a diagnostic threshold of 9.4, the sensitivity and specificity of the neutrophil/lymphocyte ratio were 61.5% and 66.7%, respectively. Procalcitonin sensitivity and specificity were 84.6% and 71.4%, respectively, with a 3.6 mg/L diagnostic threshold. Neutrophil/lymphocyte ratio and procalcitonin were significantly higher in confirmed sepsis.

  19. Physical Performance Measures Associated With Locomotive Syndrome in Middle-Aged and Older Japanese Women.

    PubMed

    Nakamura, Misa; Hashizume, Hiroshi; Oka, Hiroyuki; Okada, Morihiro; Takakura, Rie; Hisari, Ayako; Yoshida, Munehito; Utsunomiya, Hirotoshi

    2015-01-01

    The Japanese Orthopaedic Association proposed a concept called locomotive syndrome (LS) to identify middle-aged and older adults at high risk of requiring health care services because of problems with locomotion. It is important to identify factors associated with the development of LS. Physical performance measures such as walking speed and standing balance are highly predictive of subsequent disability and mortality in older adults. However, there is little evidence about the relationship between physical performance measures and LS. To determine the physical performance measures associated with LS, the threshold values for discriminating individuals with and without LS, and the odds ratio of LS according to performance greater than or less than these thresholds in middle-aged and older Japanese women. Participants were 126 Japanese women (mean age = 61.8 years). Locomotive syndrome was defined as a score of 16 or more on the 25-question Geriatric Locomotive Function Scale. Physical performance was evaluated using grip strength, unipedal stance time with eyes open, seated toe-touch, and normal and fast 6-m walk time (6 MWT). Variables were compared between LS and non-LS groups. Fourteen participants (11.1%) were classed as having LS. Unipedal stance time, normal 6 MWT, and fast 6 MWT were significantly different between the 2 groups. The LS group had a shorter unipedal stance time and a longer normal and fast 6 MWT than the non-LS group. For these 3 variables, the area under the receiver operating characteristic curve was greater than 0.7, and the threshold for discriminating the non-LS and LS groups was 15 s for unipedal stance time, 4.8 s for normal 6 MWT and 3.6 s for fast 6 MWT. These variables were entered into a multiple logistic regression analysis, which indicated that unipedal stance time less than 15 s was significantly related to LS (odds ratio = 8.46; P < .01). Unipedal stance time was the physical performance measure that was most strongly associated with LS. This measure may be useful for early detection of LS.

  20. Immobilization thresholds of electrofishing relative to fish size

    USGS Publications Warehouse

    Dolan, C.R.; Miranda, L.E.

    2003-01-01

    Fish size and electrical waveforms have frequently been associated with variation in electrofishing effectiveness. Under controlled laboratory conditions, we measured the electrical power required by five electrical waveforms to immobilize eight fish species of diverse sizes and shapes. Fish size was indexed by total body length, surface area, volume, and weight; shape was indexed by the ratio of body length to body depth. Our objectives were to identify immobilization thresholds, elucidate the descriptors of fish size that were best associated with those immobilization thresholds, and determine whether the vulnerability of a species relative to other species remained constant across electrical treatments. The results confirmed that fish size is a key variable controlling the immobilization threshold and further suggested that the size descriptor best related to immobilization is fish volume. The peak power needed to immobilize fish decreased rapidly with increasing fish volume in small fish but decreased slowly for fish larger than 75-100 cm³. Furthermore, when we controlled for size and shape, different waveforms did not favor particular species, possibly because of the overwhelming effect of body size. Many of the immobilization inconsistencies previously attributed to species might simply represent the effect of disparities in body size.

  1. Regression Discontinuity Designs in Epidemiology

    PubMed Central

    Moscoe, Ellen; Mutevedzi, Portia; Newell, Marie-Louise; Bärnighausen, Till

    2014-01-01

    When patients receive an intervention based on whether they score below or above some threshold value on a continuously measured random variable, the intervention will be randomly assigned for patients close to the threshold. The regression discontinuity design exploits this fact to estimate causal treatment effects. In spite of its recent proliferation in economics, the regression discontinuity design has not been widely adopted in epidemiology. We describe regression discontinuity, its implementation, and the assumptions required for causal inference. We show that regression discontinuity is generalizable to the survival and nonlinear models that are mainstays of epidemiologic analysis. We then present an application of regression discontinuity to the much-debated epidemiologic question of when to start HIV patients on antiretroviral therapy. Using data from a large South African cohort (2007–2011), we estimate the causal effect of early versus deferred treatment eligibility on mortality. Patients whose first CD4 count was just below the 200 cells/μL CD4 count threshold had a 35% lower hazard of death (hazard ratio = 0.65 [95% confidence interval = 0.45–0.94]) than patients presenting with CD4 counts just above the threshold. We close by discussing the strengths and limitations of regression discontinuity designs for epidemiology. PMID:25061922
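
    A sharp regression-discontinuity estimate can be sketched as the difference between two local linear fits evaluated at the cutoff, as below. The bandwidth, the linear outcome model and the synthetic data are assumptions for illustration; the cited HIV analysis used survival models rather than this simple setup.

      import numpy as np

      def rdd_estimate(running, outcome, cutoff, bandwidth):
          """Sharp regression-discontinuity estimate: difference of the two local
          linear fits evaluated at the cutoff."""
          below = (running >= cutoff - bandwidth) & (running < cutoff)
          above = (running >= cutoff) & (running <= cutoff + bandwidth)
          fit_below = np.polyfit(running[below], outcome[below], 1)
          fit_above = np.polyfit(running[above], outcome[above], 1)
          return np.polyval(fit_above, cutoff) - np.polyval(fit_below, cutoff)

      # Hypothetical data: treatment assigned when the running variable is below 200
      rng = np.random.default_rng(5)
      x = rng.uniform(50, 350, 4000)                       # e.g. first CD4 count
      y = 0.01 * x + (-0.5) * (x < 200) + rng.standard_normal(4000) * 0.3
      print(rdd_estimate(x, y, cutoff=200, bandwidth=50))  # roughly +0.5 jump at the cutoff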

  2. Identification of novel uncertainty factors and thresholds of toxicological concern for health hazard and risk assessment: Application to cleaning product ingredients.

    PubMed

    Wang, Zhen; Scott, W Casan; Williams, E Spencer; Ciarlo, Michael; DeLeo, Paul C; Brooks, Bryan W

    2018-04-01

    Uncertainty factors (UFs) are commonly used during hazard and risk assessments to address uncertainties, including extrapolations among mammals and experimental durations. In risk assessment, default values are routinely used for interspecies extrapolation and interindividual variability. Whether default UFs are sufficient for various chemical uses or specific chemical classes remains understudied, particularly for ingredients in cleaning products. Therefore, we examined publicly available acute median lethal dose (LD50), and reproductive and developmental no-observed-adverse-effect level (NOAEL) and lowest-observed-adverse-effect level (LOAEL) values for the rat model (oral). We employed probabilistic chemical toxicity distributions to identify likelihoods of encountering acute, subacute, subchronic and chronic toxicity thresholds for specific chemical categories and ingredients in cleaning products. We subsequently identified thresholds of toxicological concern (TTC) and then various UFs for: 1) acute (LD50s)-to-chronic (reproductive/developmental NOAELs) ratios (ACRs), 2) exposure duration extrapolations (e.g., subchronic-to-chronic; reproductive/developmental), and 3) LOAEL-to-NOAEL ratios considering subacute/acute developmental responses. These ratios (95% CIs) were calculated from pairwise threshold levels using Monte Carlo simulations to identify UFs for all ingredients in cleaning products. Based on data availability, chemical category-specific UFs were also identified for aliphatic acids and salts, aliphatic alcohols, inorganic acids and salts, and alkyl sulfates. In a number of cases, derived UFs were smaller than default values (e.g., 10) employed by regulatory agencies; however, larger UFs were occasionally identified. Such UFs could be used by assessors instead of relying on default values. These approaches for identifying mammalian TTCs and diverse UFs represent robust alternatives to application of default values for ingredients in cleaning products and other chemical classes. Findings can also support chemical substitutions during alternatives assessment, and data dossier development (e.g., read across), identification of TTCs, and screening-level hazard and risk assessment when toxicity data is unavailable for specific chemicals. Copyright © 2018 Elsevier Ltd. All rights reserved.

  3. Enhanced Sensitivity to Rapid Input Fluctuations by Nonlinear Threshold Dynamics in Neocortical Pyramidal Neurons.

    PubMed

    Mensi, Skander; Hagens, Olivier; Gerstner, Wulfram; Pozzorini, Christian

    2016-02-01

    The way in which single neurons transform input into output spike trains has fundamental consequences for network coding. Theories and modeling studies based on standard Integrate-and-Fire models implicitly assume that, in response to increasingly strong inputs, neurons modify their coding strategy by progressively reducing their selective sensitivity to rapid input fluctuations. Combining mathematical modeling with in vitro experiments, we demonstrate that, in L5 pyramidal neurons, the firing threshold dynamics adaptively adjust the effective timescale of somatic integration in order to preserve sensitivity to rapid signals over a broad range of input statistics. For that, a new Generalized Integrate-and-Fire model featuring nonlinear firing threshold dynamics and conductance-based adaptation is introduced that outperforms state-of-the-art neuron models in predicting the spiking activity of neurons responding to a variety of in vivo-like fluctuating currents. Our model allows for efficient parameter extraction and can be analytically mapped to a Generalized Linear Model in which both the input filter--describing somatic integration--and the spike-history filter--accounting for spike-frequency adaptation--dynamically adapt to the input statistics, as experimentally observed. Overall, our results provide new insights on the computational role of different biophysical processes known to underlie adaptive coding in single neurons and support previous theoretical findings indicating that the nonlinear dynamics of the firing threshold due to Na+-channel inactivation regulate the sensitivity to rapid input fluctuations.
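
    The qualitative behaviour of a moving spike threshold can be illustrated with a minimal leaky integrate-and-fire neuron whose threshold relaxes toward a baseline plus a fraction of the membrane potential and jumps after each spike. This sketch is far simpler than the Generalized Integrate-and-Fire model above; all parameter values are assumptions.

      import numpy as np

      def adaptive_threshold_lif(I, dt=0.1, tau_m=20.0, tau_th=50.0,
                                 v_rest=-70.0, v_th0=-50.0, th_jump=8.0, coupling=0.3):
          """Leaky integrate-and-fire with a threshold that relaxes toward a baseline
          plus a fraction of the membrane potential, and jumps after each spike."""
          v = np.full(len(I), v_rest)
          th = np.full(len(I), v_th0)
          spikes = []
          for t in range(1, len(I)):
              v[t] = v[t-1] + dt * (-(v[t-1] - v_rest) + I[t]) / tau_m
              target = v_th0 + coupling * (v[t-1] - v_rest)        # threshold tracks V
              th[t] = th[t-1] + dt * (target - th[t-1]) / tau_th
              if v[t] >= th[t]:
                  spikes.append(t)
                  v[t] = v_rest
                  th[t] += th_jump                                  # spike-triggered jump
          return v, th, spikes

      # Hypothetical fluctuating input current (arbitrary units)
      rng = np.random.default_rng(6)
      I = 35.0 + 15.0 * rng.standard_normal(5000)
      _, _, spike_times = adaptive_threshold_lif(I)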

  4. Three-level sampler having automated thresholds

    NASA Technical Reports Server (NTRS)

    Jurgens, R. F.

    1976-01-01

    A three-level sampler is described that has its thresholds controlled automatically so as to track changes in the statistics of the random process being sampled. In particular, the mean value is removed and the ratio of the standard deviation of the random process to the threshold is maintained constant. The system is configured in such a manner that slow drifts in the level comparators and digital-to-analog converters are also removed. The ratio of the standard deviation to threshold level may be chosen within the constraints of the ratios of two integers N and M. These may be chosen to minimize the quantizing noise of the sampled process.
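
    The threshold-servo idea can be sketched in software: remove a running mean and nudge the threshold so that the observed fraction of samples falling in the outer levels matches the fraction expected for a Gaussian input at the chosen threshold-to-sigma ratio. This is a behavioural sketch, not the described hardware; the loop gains and test signal are assumptions.

      import numpy as np
      from scipy.stats import norm

      def three_level_sample(x, k=1.0, lr=0.01):
          """Quantize x to {-1, 0, +1} with a threshold servoed so that the
          threshold-to-sigma ratio stays near k, tracking drifts in mean and scale."""
          target_outer = 2.0 * (1.0 - norm.cdf(k))   # expected fraction of |x - mean| > k*sigma
          mean_est, thr = 0.0, 1.0
          out = np.zeros(len(x), dtype=int)
          for n, xn in enumerate(x):
              mean_est += lr * (xn - mean_est)                       # running mean removal
              centered = xn - mean_est
              out[n] = 1 if centered > thr else (-1 if centered < -thr else 0)
              outer = 1.0 if out[n] != 0 else 0.0
              thr += lr * thr * (outer - target_outer)               # servo the threshold
          return out, thr

      # Hypothetical drifting Gaussian input; the final threshold should settle near k*sigma
      rng = np.random.default_rng(7)
      x = 2.0 + 3.0 * rng.standard_normal(20000) + np.linspace(0, 1, 20000)
      samples, final_thr = three_level_sample(x)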

  5. MO-DE-207A-12: Toward Patient-Specific 4DCT Reconstruction Using Adaptive Velocity Binning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morris, E.D.; Glide-Hurst, C.; Wayne State University, Detroit, MI

    2016-06-15

    Purpose: While 4DCT provides organ/tumor motion information, it often samples data over 10–20 breathing cycles. For patients presenting with compromised pulmonary function, breathing patterns can change over the acquisition time, potentially leading to tumor delineation discrepancies. This work introduces a novel adaptive velocity-modulated binning (AVB) 4DCT algorithm that modulates the reconstruction based on the respiratory waveform, yielding a patient-specific 4DCT solution. Methods: AVB was implemented in a research reconstruction configuration. After filtering the respiratory waveform, the algorithm examines neighboring data to a phase reconstruction point and the temporal gate is widened until the difference between the reconstruction point and waveform exceeds a threshold value—defined as percent difference between maximum/minimum waveform amplitude. The algorithm only impacts reconstruction if the gate width exceeds a set minimum temporal width required for accurate reconstruction. A sensitivity experiment of threshold values (0.5, 1, 5, 10, and 12%) was conducted to examine the interplay between threshold, signal to noise ratio (SNR), and image sharpness for phantom and several patient 4DCT cases using ten-phase reconstructions. Individual phase reconstructions were examined. Subtraction images and regions of interest were compared to quantify changes in SNR. Results: AVB increased signal in reconstructed 4DCT slices for respiratory waveforms that met the prescribed criteria. For the end-exhale phases, where the respiratory velocity is low, patient data revealed a threshold of 0.5% demonstrated increased SNR in the AVB reconstructions. For intermediate breathing phases, threshold values were required to be >10% to notice appreciable changes in CT intensity with AVB. AVB reconstructions exhibited appreciably higher SNR and reduced noise in regions of interest that were photon deprived such as the liver. Conclusion: We demonstrated that patient-specific velocity-based 4DCT reconstruction is feasible. Image noise was reduced with AVB, suggesting potential applications for low-dose acquisitions and to improve 4DCT reconstruction for irregular breathing patients. The submitting institution holds research agreements with Philips Healthcare.

  6. Ensemble reconstruction of spatio-temporal extreme low-flow events in France since 1871

    NASA Astrophysics Data System (ADS)

    Caillouet, Laurie; Vidal, Jean-Philippe; Sauquet, Eric; Devers, Alexandre; Graff, Benjamin

    2017-06-01

    The length of streamflow observations is generally limited to the last 50 years even in data-rich countries like France. It therefore offers too small a sample of extreme low-flow events to properly explore the long-term evolution of their characteristics and associated impacts. To overcome this limit, this work first presents a daily 140-year ensemble reconstructed streamflow dataset for a reference network of near-natural catchments in France. This dataset, called SCOPE Hydro (Spatially COherent Probabilistic Extended Hydrological dataset), is based on (1) a probabilistic precipitation, temperature, and reference evapotranspiration downscaling of the Twentieth Century Reanalysis over France, called SCOPE Climate, and (2) continuous hydrological modelling using SCOPE Climate as forcings over the whole period. This work then introduces tools for defining spatio-temporal extreme low-flow events. Extreme low-flow events are first locally defined through the sequent peak algorithm using a novel combination of a fixed threshold and a daily variable threshold. A dedicated spatial matching procedure is then established to identify spatio-temporal events across France. This procedure is furthermore adapted to the SCOPE Hydro 25-member ensemble to characterize in a probabilistic way unrecorded historical events at the national scale. Extreme low-flow events are described and compared in a spatially and temporally homogeneous way over 140 years on a large set of catchments. Results highlight well-known recent events like 1976 or 1989-1990, but also older and relatively forgotten ones like the 1878 and 1893 events. These results contribute to improving our knowledge of historical events and provide a selection of benchmark events for climate change adaptation purposes. Moreover, this study allows for further detailed analyses of the effect of climate variability and anthropogenic climate change on low-flow hydrology at the scale of France.
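
    To make the local event definition concrete, the hedged sketch below flags days on which streamflow falls below both a fixed and a daily variable threshold and groups consecutive flagged days into events; the sequent peak algorithm and the spatial matching procedure described above are deliberately omitted, and the thresholds in the example are placeholders.

      import numpy as np

      def low_flow_events(q, fixed_thr, variable_thr):
          """Group consecutive days where flow q is below BOTH thresholds into events."""
          below = (q < fixed_thr) & (q < variable_thr)
          events, start = [], None
          for i, b in enumerate(below):
              if b and start is None:
                  start = i
              elif not b and start is not None:
                  events.append((start, i - 1))
                  start = None
          if start is not None:
              events.append((start, len(q) - 1))
          return events

      # example with a synthetic seasonal hydrograph and placeholder thresholds
      rng = np.random.default_rng(0)
      q = 5 + 3 * np.sin(np.linspace(0, 4 * np.pi, 365)) + rng.normal(0, 0.3, 365)
      fixed = np.full(365, 3.5)                       # fixed threshold
      variable = np.full(365, np.quantile(q, 0.10))   # stand-in for a day-of-year quantile
      print(low_flow_events(q, fixed, variable))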

  7. Influence of intrinsic noise generated by a thermotesting device on thermal sensory detection and thermal pain detection thresholds.

    PubMed

    Pavlaković, G; Züchner, K; Zapf, A; Bachmann, C G; Graf, B M; Crozier, T A; Pavlaković, H

    2009-08-01

    Various factors can influence thermal perception threshold measurements and contribute significantly to unwanted variability of the tests. To minimize this variability, testing should be performed under strictly controlled conditions. Identifying the factors that increase the variability and eliminating their influence should increase reliability and reproducibility. Currently available thermotesting devices use a water-cooling system that generates a continuous noise of approximately 60 dB. In order to analyze whether this noise could influence the thermal threshold measurements we compared the thresholds obtained with a silent thermotesting device to those obtained with a commercially available device. The subjects were tested with one randomly chosen device on 1 day and with the other device 7 days later. At each session, heat, heat pain, cold, and cold pain thresholds were determined with three measurements. Bland-Altman analysis was used to assess agreement in measurements obtained with different devices and it was shown that the intersubject variability of the thresholds obtained with the two devices was comparable for all four thresholds tested. In contrast, the intrasubject variability of the thresholds for heat, heat pain, and cold pain detection was significantly lower with the silent device. Our results show that thermal sensory thresholds measured with the two devices are comparable. However, our data suggest that, for studies with repeated measurements on the same subjects, a silent thermotesting device may allow detection of smaller differences in the treatment effects and/or may permit the use of a smaller number of tested subjects. Muscle Nerve 40: 257-263, 2009.

  8. Adaptive time-sequential binary sensing for high dynamic range imaging

    NASA Astrophysics Data System (ADS)

    Hu, Chenhui; Lu, Yue M.

    2012-06-01

    We present a novel image sensor for high dynamic range imaging. The sensor performs an adaptive one-bit quantization at each pixel, with the pixel output switched from 0 to 1 only if the number of photons reaching that pixel is greater than or equal to a quantization threshold. With an oracle knowledge of the incident light intensity, one can pick an optimal threshold (for that light intensity) and the corresponding Fisher information contained in the output sequence follows closely that of an ideal unquantized sensor over a wide range of intensity values. This observation suggests the potential gains one may achieve by adaptively updating the quantization thresholds. As the main contribution of this work, we propose a time-sequential threshold-updating rule that asymptotically approaches the performance of the oracle scheme. With every threshold mapped to a number of ordered states, the dynamics of the proposed scheme can be modeled as a parametric Markov chain. We show that the frequencies of different thresholds converge to a steady-state distribution that is concentrated around the optimal choice. Moreover, numerical experiments show that the theoretical performance measures (Fisher information and Cramér-Rao bounds) can be achieved by a maximum likelihood estimator, which is guaranteed to find the globally optimal solution due to the concavity of the log-likelihood functions. Compared with conventional image sensors and the strategy that utilizes a constant single-photon threshold considered in previous work, the proposed scheme attains orders of magnitude improvement in terms of sensor dynamic ranges.
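
    The idea of a time-sequential threshold update can be illustrated with the simple up/down rule below, which nudges the one-bit quantization threshold toward the regime where 0s and 1s are equally likely; this is an illustrative stand-in for the paper's Markov-chain update, not its exact scheme, and all parameter names are assumptions.

      import numpy as np

      def binary_sensor(intensity, q0=1, q_min=1, q_max=64, frames=2000, seed=0):
          """One-bit sensing with a threshold updated after every exposure (illustrative rule)."""
          rng = np.random.default_rng(seed)
          q, bits, thresholds = q0, [], []
          for _ in range(frames):
              photons = rng.poisson(intensity)          # photon count in one exposure
              bit = int(photons >= q)                   # one-bit pixel output
              bits.append(bit)
              thresholds.append(q)
              # nudge the threshold toward the regime where 0s and 1s are balanced
              q = min(q + 1, q_max) if bit else max(q - 1, q_min)
          return np.array(bits), np.array(thresholds)

      bits, thr = binary_sensor(intensity=20.0)
      print(bits.mean(), thr[-200:].mean())             # the threshold settles near the intensity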

  9. Facial arthralgia and myalgia: can they be differentiated by trigeminal sensory assessment?

    PubMed

    Eliav, Eli; Teich, Sorin; Nitzan, Dorit; El Raziq, Daood Abid; Nahlieli, Oded; Tal, Michael; Gracely, Richard H; Benoliel, Rafael

    2003-08-01

    Heat and electrical detection thresholds were assessed in 72 patients suffering from painful temporomandibular disorder. Employing widely accepted criteria, 44 patients were classified as suffering from temporomandibular joint (TMJ) arthralgia (i.e. pain originating from the TMJ) and 28 from myalgia (i.e. pain originating from the muscles of mastication). Electrical stimulation was employed to assess thresholds in large myelinated nerve fibers (Abeta) and heat application to assess thresholds in unmyelinated nerve fibers (C). The sensory tests were performed bilaterally in three trigeminal nerve sites: the auriculotemporal nerve territory (AUT), buccal nerve territory (BUC) and the mental nerve territory (MNT). In addition, 22 healthy asymptomatic controls were examined. A subset of ten arthralgia patients underwent arthrocentesis and electrical detection thresholds were additionally assessed following the procedure. Electrical detection threshold ratios were calculated by dividing the affected side by the control side, thus reduced ratios indicate hypersensitivity of the affected side. In control patients, ratios obtained at all sites did not vary significantly from the expected value of 'one' (mean with 95% confidence intervals; AUT, 1:0.95-1.06; BUC, 1.01:0.93-1.11; MNT, 0.97:0.88-1.05, all areas one sample analysis P>0.05). In arthralgia patients mean ratios (+/-SEM) obtained for the AUT territory (0.63+/-0.03) were significantly lower compared to ratios for the MNT (1.02+/-0.03) and BUC (0.96+/-0.04) territories (repeated measures analysis of variance (RANOVA), P<0.0001) and compared to the AUT ratios in myalgia (1.27+/-0.09) and control subjects (1+/-0.06, ANOVA, P<0.0001). In the myalgia group the electrical detection threshold ratios in the AUT territory were significantly elevated compared to the AUT ratios in control subjects (Dunnett test, P<0.05), but only approached statistical significance compared to the MNT (1.07+/-0.04) and BUC (1.11+/-0.06) territories (RANOVA, F(2,27)=3.12, P=0.052). There were no significant differences between and within the groups for electrical detection threshold ratios in the BUC and MNT nerve territories, and for the heat detection thresholds in all tested sites. Following arthrocentesis, mean electrical detection threshold ratios in the AUT territory were significantly elevated from 0.64+/-0.06 to 0.99+/-0.04 indicating resolution of the hypersensitivity (paired t-test, P=0.001). In conclusion, large myelinated fiber hypersensitivity is found in the skin overlying TMJs with clinical pain and pathology but is not found in controls. In patients with muscle-related facial pain there was significant elevation of the electrical detection threshold in the AUT region.

  10. Optimizing Satellite Communications With Adaptive and Phased Array Antennas

    NASA Technical Reports Server (NTRS)

    Ingram, Mary Ann; Romanofsky, Robert; Lee, Richard Q.; Miranda, Felix; Popovic, Zoya; Langley, John; Barott, William C.; Ahmed, M. Usman; Mandl, Dan

    2004-01-01

    A new adaptive antenna array architecture for low-earth-orbiting satellite ground stations is being investigated. These ground stations are intended to have no moving parts and could potentially be operated in populated areas, where terrestrial interference is likely. The architecture includes multiple, moderately directive phased arrays. The phased arrays, each steered in the approximate direction of the satellite, are adaptively combined to enhance the Signal-to-Noise-and-Interference Ratio (SNIR) of the desired satellite. The size of each phased array is to be traded off against the number of phased arrays to optimize cost while meeting a bit-error-rate threshold. Also, two phased array architectures are being prototyped: a space-fed lens array and a reflect-array. If two co-channel satellites are in the field of view of the phased arrays, then multi-user detection techniques may enable simultaneous demodulation of the satellite signals, also known as Space Division Multiple Access (SDMA). We report on Phase I of the project, in which fixed directional elements are adaptively combined in a prototype to demodulate the S-band downlink of the EO-1 satellite, which is part of the New Millennium Program at NASA.

  11. The energy ratio mapping algorithm: a tool to improve the energy-based detection of odontocete echolocation clicks.

    PubMed

    Klinck, Holger; Mellinger, David K

    2011-04-01

    The energy ratio mapping algorithm (ERMA) was developed to improve the performance of energy-based detection of odontocete echolocation clicks, especially for application in environments with limited computational power and energy such as acoustic gliders. ERMA systematically evaluates many frequency bands for energy ratio-based detection of echolocation clicks produced by a target species in the presence of the species mix in a given geographic area. To evaluate the performance of ERMA, a Teager-Kaiser energy operator was applied to the series of energy ratios as derived by ERMA. A noise-adaptive threshold was then applied to the Teager-Kaiser function to identify clicks in data sets. The method was tested for detecting clicks of Blainville's beaked whales while rejecting echolocation clicks of Risso's dolphins and pilot whales. Results showed that the ERMA-based detector correctly identified 81.6% of the beaked whale clicks in an extended evaluation data set. Average false-positive detection rate was 6.3% (3.4% for Risso's dolphins and 2.9% for pilot whales).
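
    The building blocks named above (a band-energy ratio, the Teager-Kaiser energy operator, and a noise-adaptive threshold) can be combined roughly as in the sketch below; the frequency bands, the robust median-based threshold, and the factor k are illustrative assumptions, not the bands or constants that ERMA actually selects.

      import numpy as np
      from scipy.signal import butter, sosfiltfilt

      def band_energy(x, fs, band):
          sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
          return sosfiltfilt(sos, x) ** 2

      def teager_kaiser(x):
          """Discrete Teager-Kaiser energy operator: psi[n] = x[n]^2 - x[n-1]*x[n+1]."""
          return x[1:-1] ** 2 - x[:-2] * x[2:]

      def detect_clicks(x, fs, sig_band, noise_band, k=5.0):
          """Energy-ratio detection with a robust, noise-adaptive threshold (illustrative bands)."""
          ratio = band_energy(x, fs, sig_band) / (band_energy(x, fs, noise_band) + 1e-12)
          tk = teager_kaiser(ratio)
          mad = np.median(np.abs(tk - np.median(tk)))
          return np.flatnonzero(tk > np.median(tk) + k * mad) + 1

      fs = 192_000
      t = np.arange(int(0.05 * fs)) / fs
      x = 0.05 * np.random.default_rng(1).standard_normal(t.size)
      x[4_000:4_050] += np.sin(2 * np.pi * 35_000 * t[:50])     # synthetic 35 kHz click
      print(detect_clicks(x, fs, sig_band=(25_000, 45_000), noise_band=(5_000, 15_000))[:5])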

  12. Hearing in the sea otter (Enhydra lutris): auditory profiles for an amphibious marine carnivore.

    PubMed

    Ghoul, Asila; Reichmuth, Colleen

    2014-11-01

    In this study we examine the auditory capabilities of the sea otter (Enhydra lutris), an amphibious marine mammal that remains virtually unstudied with respect to its sensory biology. We trained an adult male sea otter to perform a psychophysical task in an acoustic chamber and at an underwater apparatus. Aerial and underwater audiograms were constructed from detection thresholds for narrowband signals measured in quiet conditions at frequencies from 0.125-40 kHz. Aerial hearing thresholds were also measured in the presence of octave-band masking noise centered at eight signal frequencies (0.25-22.6 kHz) so that critical ratios could be determined. The aerial audiogram of the sea otter resembled that of sea lions and showed a reduction in low-frequency sensitivity relative to terrestrial mustelids. Best sensitivity was -1 dB re 20 µPa at 8 kHz. Under water, hearing sensitivity was significantly reduced when compared to sea lions and other pinniped species, demonstrating that sea otter hearing is primarily adapted to receive airborne sounds. Critical ratios were more than 10 dB higher than those measured for pinnipeds, suggesting that sea otters are less efficient than other marine carnivores at extracting acoustic signals from background noise, especially at frequencies below 2 kHz.

  13. Repeatability of Quantitative Whole-Body 18F-FDG PET/CT Uptake Measures as Function of Uptake Interval and Lesion Selection in Non-Small Cell Lung Cancer Patients.

    PubMed

    Kramer, Gerbrand Maria; Frings, Virginie; Hoetjes, Nikie; Hoekstra, Otto S; Smit, Egbert F; de Langen, Adrianus Johannes; Boellaard, Ronald

    2016-09-01

    Change in (18)F-FDG uptake may predict response to anticancer treatment. PERCIST suggests a threshold of 30% change in SUV to define partial response and progressive disease. Evidence underlying these thresholds consists of mixed stand-alone PET and PET/CT data with variable uptake intervals and no consensus on the number of lesions to be assessed. Additionally, there is increasing interest in alternative (18)F-FDG uptake measures such as metabolically active tumor volume and total lesion glycolysis (TLG). The aim of this study was to comprehensively investigate the repeatability of various quantitative whole-body (18)F-FDG metrics in non-small cell lung cancer (NSCLC) patients as a function of tracer uptake interval and lesion selection strategies. Eleven NSCLC patients, with at least 1 intrathoracic lesion 3 cm or greater, underwent double baseline whole-body (18)F-FDG PET/CT scans at 60 and 90 min after injection within 3 d. All (18)F-FDG-avid tumors were delineated with a 50% threshold of SUVpeak adapted for local background. SUVmax, SUVmean, SUVpeak, TLG, metabolically active tumor volume, and tumor-to-blood and -liver ratios were evaluated, as well as the influence of lesion selection and 2 methods for correction of uptake time differences. The best repeatability was found using the SUV metrics of the averaged PERCIST target lesions (repeatability coefficients < 10%). The correlation between test and retest scans was strong for all uptake measures at either uptake interval (intraclass correlation coefficient > 0.97 and R(2) > 0.98). There were no significant differences in repeatability between data obtained 60 and 90 min after injection. When only PERCIST-defined target lesions were included (n = 34), repeatability improved for all uptake values. Normalization to liver or blood uptake or glucose correction did not improve repeatability. However, after correction for uptake time the correlation of SUV measures and TLG between the 60- and 90-min data significantly improved without affecting test-retest performance. This study suggests that a 15% change of SUVmean/SUVpeak at 60 min after injection can be used to assess response in advanced NSCLC patients if up to 5 PERCIST target lesions are assessed. Lower thresholds could be used in averaged PERCIST target lesions (<10%). © 2016 by the Society of Nuclear Medicine and Molecular Imaging, Inc.

  14. Crop responses to climatic variation

    PubMed Central

    Porter, John R; Semenov, Mikhail A

    2005-01-01

    The yield and quality of food crops is central to the well-being of humans and is directly affected by climate and weather. Initial studies of climate change on crops focussed on effects of increased carbon dioxide (CO2) level and/or global mean temperature and/or rainfall and nutrition on crop production. However, crops can respond nonlinearly to changes in their growing conditions, exhibit threshold responses and are subject to combinations of stress factors that affect their growth, development and yield. Thus, climate variability and changes in the frequency of extreme events are important for yield, its stability and quality. In this context, threshold temperatures for crop processes are found not to differ greatly for different crops and are important to define for the major food crops, to assist climate modellers in predicting the occurrence of crop critical temperatures and their temporal resolution. This paper demonstrates the impacts of climate variability for crop production in a number of crops. Increasing temperature and precipitation variability increases the risks to yield, as shown via computer simulation and experimental studies. The issue of food quality has not been given sufficient importance when assessing the impact of climate change for food and this is addressed. Using simulation models of wheat, the concentration of grain protein is shown to respond to changes in the mean and variability of temperature and precipitation events. The paper concludes with discussion of adaptation possibilities for crops in response to drought and argues that characters that enable better exploration of the soil and slower leaf canopy expansion could lead to higher crop transpiration efficiency. PMID:16433091

  15. Image denoising in mixed Poisson-Gaussian noise.

    PubMed

    Luisier, Florian; Blu, Thierry; Unser, Michael

    2011-03-01

    We propose a general methodology (PURE-LET) to design and optimize a wide class of transform-domain thresholding algorithms for denoising images corrupted by mixed Poisson-Gaussian noise. We express the denoising process as a linear expansion of thresholds (LET) that we optimize by relying on a purely data-adaptive unbiased estimate of the mean-squared error (MSE), derived in a non-Bayesian framework (PURE: Poisson-Gaussian unbiased risk estimate). We provide a practical approximation of this theoretical MSE estimate for the tractable optimization of arbitrary transform-domain thresholding. We then propose a pointwise estimator for undecimated filterbank transforms, which consists of subband-adaptive thresholding functions with signal-dependent thresholds that are globally optimized in the image domain. We finally demonstrate the potential of the proposed approach through extensive comparisons with state-of-the-art techniques that are specifically tailored to the estimation of Poisson intensities. We also present denoising results obtained on real images of low-count fluorescence microscopy.
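
    The core LET construction (express the denoised coefficients as a linear combination of a few elementary thresholding functions and solve a small linear system for the weights) can be sketched as below; an oracle MSE computed from the clean signal stands in for the PURE risk estimate that the method actually minimizes, so this shows only the structure, not the estimator itself.

      import numpy as np

      def let_denoise(y, clean_for_oracle, thresholds=(1.0, 2.0, 4.0)):
          """Linear expansion of thresholds: weights of several soft-thresholded copies of y
          are found by least squares, here against an oracle MSE instead of a PURE estimate."""
          basis = [np.sign(y) * np.maximum(np.abs(y) - t, 0.0) for t in thresholds]
          A = np.stack(basis, axis=1)                   # columns = elementary estimates
          a, *_ = np.linalg.lstsq(A, clean_for_oracle, rcond=None)
          return A @ a

      rng = np.random.default_rng(0)
      clean = np.zeros(10_000)
      clean[::100] = 10.0                               # sparse "transform-domain" coefficients
      noisy = clean + rng.normal(0.0, 1.0, clean.size)
      denoised = let_denoise(noisy, clean)
      print(np.mean((noisy - clean) ** 2), np.mean((denoised - clean) ** 2))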

  16. Optimized breast MRI functional tumor volume as a biomarker of recurrence-free survival following neoadjuvant chemotherapy.

    PubMed

    Jafri, Nazia F; Newitt, David C; Kornak, John; Esserman, Laura J; Joe, Bonnie N; Hylton, Nola M

    2014-08-01

    To evaluate optimal contrast kinetics thresholds for measuring functional tumor volume (FTV) by breast magnetic resonance imaging (MRI) for assessment of recurrence-free survival (RFS). In this Institutional Review Board (IRB)-approved retrospective study of 64 patients (ages 29-72, median age of 48.6) undergoing neoadjuvant chemotherapy (NACT) for breast cancer, all patients underwent pre-MRI1 and postchemotherapy MRI4 of the breast. Tumor was defined as voxels meeting thresholds for early percent enhancement (PEthresh) and early-to-late signal enhancement ratio (SERthresh); FTV (PEthresh, SERthresh) was computed by summing all voxels meeting the threshold criteria and minimum connectivity requirements. Ranges of PEthresh from 50% to 220% and SERthresh from 0.0 to 2.0 were evaluated. A Cox proportional hazard model determined associations between change in FTV over treatment and RFS at different PE and SER thresholds. The plot of hazard ratios for change in FTV from MRI1 to MRI4 showed a broad peak, with the maximum hazard ratio and highest significance occurring at a PE threshold of 70% and an SER threshold of 1.0 (hazard ratio = 8.71, 95% confidence interval 2.86-25.5, P < 0.00015), indicating optimal model fit. Enhancement thresholds affect the ability of MRI tumor volume to predict RFS. The value is robust over a wide range of thresholds, supporting the use of FTV as a biomarker. © 2013 Wiley Periodicals, Inc.

  17. Multi-rate, real time image compression for images dominated by point sources

    NASA Technical Reports Server (NTRS)

    Huber, A. Kris; Budge, Scott E.; Harris, Richard W.

    1993-01-01

    An image compression system recently developed for compression of digital images dominated by point sources is presented. Encoding consists of minimum-mean removal, vector quantization, adaptive threshold truncation, and modified Huffman encoding. Simulations are presented showing that the peaks corresponding to point sources can be transmitted losslessly for low signal-to-noise ratios (SNR) and high point source densities while maintaining a reduced output bit rate. Encoding and decoding hardware has been built and tested which processes 552,960 12-bit pixels per second at compression rates of 10:1 and 4:1. Simulation results are presented for the 10:1 case only.

  18. Compressed sensing system considerations for ECG and EMG wireless biosensors.

    PubMed

    Dixon, Anna M R; Allstot, Emily G; Gangopadhyay, Daibashish; Allstot, David J

    2012-04-01

    Compressed sensing (CS) is an emerging signal processing paradigm that enables sub-Nyquist processing of sparse signals such as electrocardiogram (ECG) and electromyogram (EMG) biosignals. Consequently, it can be applied to biosignal acquisition systems to reduce the data rate to realize ultra-low-power performance. CS is compared to conventional and adaptive sampling techniques and several system-level design considerations are presented for CS acquisition systems including sparsity and compression limits, thresholding techniques, encoder bit-precision requirements, and signal recovery algorithms. Simulation studies show that compression factors greater than 16X are achievable for ECG and EMG signals with signal-to-quantization noise ratios greater than 60 dB.

  19. Evidence Accumulator or Decision Threshold – Which Cortical Mechanism are We Observing?

    PubMed Central

    Simen, Patrick

    2012-01-01

    Most psychological models of perceptual decision making are of the accumulation-to-threshold variety. The neural basis of accumulation in parietal and prefrontal cortex is therefore a topic of great interest in neuroscience. In contrast, threshold mechanisms have received less attention, and their neural basis has usually been sought in subcortical structures. Here I analyze a model of a decision threshold that can be implemented in the same cortical areas as evidence accumulators, and whose behavior bears on two open questions in decision neuroscience: (1) When ramping activity is observed in a brain region during decision making, does it reflect evidence accumulation? (2) Are changes in speed-accuracy tradeoffs and response biases more likely to be achieved by changes in thresholds, or in accumulation rates and starting points? The analysis suggests that task-modulated ramping activity, by itself, is weak evidence that a brain area mediates evidence accumulation as opposed to threshold readout; and that signs of modulated accumulation are as likely to indicate threshold adaptation as adaptation of starting points and accumulation rates. These conclusions imply that how thresholds are modeled can dramatically impact accumulator-based interpretations of this data. PMID:22737136

  20. [The analysis of threshold effect using Empower Stats software].

    PubMed

    Lin, Lin; Chen, Chang-zhong; Yu, Xiao-dan

    2013-11-01

    In many biomedical studies, a factor may have no influence, or a positive effect, on the outcome variable only within a certain range; beyond a certain threshold value, the size and/or direction of the effect changes, which is called a threshold effect. Whether a threshold effect exists in the relationship between a factor (x) and the outcome variable (y) can be explored by fitting a smooth curve and checking for a piecewise linear relationship, and then analyzed using a segmented regression model, a likelihood ratio test (LRT) and bootstrap resampling. The Empower Stats software developed by X & Y Solutions Inc. (USA) includes a threshold effect analysis module. The user can either specify a threshold value and fit the model on the resulting segments, or let the software determine the optimal threshold automatically and compute its confidence interval.
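
    A minimal stand-in for such a threshold-effect analysis is a grid search over a single breakpoint in a two-segment linear model, compared against a plain linear fit; the likelihood ratio test and bootstrap confidence interval provided by the software module are omitted here, and all names are illustrative.

      import numpy as np

      def fit_segmented(x, y, n_grid=100):
          """Grid-search a single breakpoint for a two-segment linear model."""
          def rss(design):
              beta, *_ = np.linalg.lstsq(design, y, rcond=None)
              return np.sum((y - design @ beta) ** 2), beta

          ones = np.ones_like(x)
          rss_linear, _ = rss(np.column_stack([ones, x]))
          best = (np.inf, None, None)
          for k in np.quantile(x, np.linspace(0.05, 0.95, n_grid)):
              r, beta = rss(np.column_stack([ones, x, np.maximum(x - k, 0.0)]))  # hinge at k
              if r < best[0]:
                  best = (r, k, beta)
          return {"rss_linear": rss_linear, "rss_segmented": best[0],
                  "threshold": best[1], "coefficients": best[2]}

      rng = np.random.default_rng(1)
      x = rng.uniform(0, 10, 500)
      y = np.where(x < 6, 0.2 * x, 1.2 + 1.5 * (x - 6)) + rng.normal(0, 0.3, x.size)
      print(fit_segmented(x, y)["threshold"])            # close to the true change point at 6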

  1. Matching Social and Biophysical Scales in Extensive Livestock Production as a Basis for Adaptation to Global Change

    NASA Astrophysics Data System (ADS)

    Sayre, N. F.; Bestelmeyer, B.

    2015-12-01

    Global livestock production is heterogeneous, and its benefits and costs vary widely across global contexts. Extensive grazing lands (or rangelands) constitute the vast majority of the land dedicated to livestock production globally, but they are relatively minor contributors to livestock-related environmental impacts. Indeed, the greatest potential for environmental damage in these lands lies in their potential for conversion to other uses, including agriculture, mining, energy production and urban development. Managing such conversion requires improving the sustainability of livestock production in the face of fragmentation, ecological and economic marginality and climate change. We present research from Mongolia and the United States demonstrating methods of improving outcomes on rangelands by improving the fit between the scales of social and biophysical processes. Especially in arid and semi-arid settings, rangelands exhibit highly variable productivity over space and time and non-linear or threshold dynamics in vegetation; climate change is projected to exacerbate these challenges and, in some cases, diminish overall productivity. Policy and governance frameworks that enable landscape-scale management and administration allow range livestock producers to adapt to these conditions. Similarly, livestock breeds that have evolved to withstand climate and vegetation change improve producers' prospects in the face of increasing variability and declining productivity. A focus on the relationships among primary production, animal production, spatial connectivity, and scale must underpin adaptation strategies in rangelands.

  2. Shape anomaly detection under strong measurement noise: An analytical approach to adaptive thresholding

    NASA Astrophysics Data System (ADS)

    Krasichkov, Alexander S.; Grigoriev, Eugene B.; Bogachev, Mikhail I.; Nifontov, Eugene M.

    2015-10-01

    We suggest an analytical approach to the adaptive thresholding in a shape anomaly detection problem. We find an analytical expression for the distribution of the cosine similarity score between a reference shape and an observational shape hindered by strong measurement noise that depends solely on the noise level and is independent of the particular shape analyzed. The analytical treatment is also confirmed by computer simulations and shows nearly perfect agreement. Using this analytical solution, we suggest an improved shape anomaly detection approach based on adaptive thresholding. We validate the noise robustness of our approach using typical shapes of normal and pathological electrocardiogram cycles hindered by additive white noise. We show explicitly that under high noise levels our approach considerably outperforms the conventional tactic that does not take into account variations in the noise level.
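
    The decision rule can be paraphrased as: flag a shape as anomalous when its cosine similarity with the reference falls below a low quantile of the score distribution expected at the measured noise level. The sketch below simulates that null distribution instead of using the analytical expression derived in the paper, so it is only a numerical stand-in.

      import numpy as np

      def cosine_score(reference, observed):
          return float(np.dot(reference, observed) /
                       (np.linalg.norm(reference) * np.linalg.norm(observed)))

      def anomaly_decision(reference, observed, noise_sigma, n_sim=20_000, alpha=0.01, seed=0):
          """Adaptive threshold = alpha-quantile of scores of the noisy reference itself."""
          rng = np.random.default_rng(seed)
          null = [cosine_score(reference, reference + rng.normal(0, noise_sigma, reference.size))
                  for _ in range(n_sim)]
          threshold = np.quantile(null, alpha)           # depends only on the noise level
          return cosine_score(reference, observed) < threshold, threshold

      rng = np.random.default_rng(1)
      ref = np.sin(np.linspace(0, 2 * np.pi, 200))             # reference cycle shape
      obs = np.roll(ref, 40) + rng.normal(0, 0.5, ref.size)    # distorted, noisy observation
      print(anomaly_decision(ref, obs, noise_sigma=0.5))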

  3. Reduction in Dynamic Visual Acuity Reveals Gaze Control Changes Following Spaceflight

    NASA Technical Reports Server (NTRS)

    Peters, Brian T.; Brady, Rachel A.; Miller, Chris; Lawrence, Emily L.; Mulavara, Ajitkumar P.; Bloomberg, Jacob J.

    2010-01-01

    INTRODUCTION: Exposure to microgravity causes adaptive changes in eye-head coordination that can lead to altered gaze control. This could affect postflight visual acuity during head and body motion. The goal of this study was to characterize changes in dynamic visual acuity after long-duration spaceflight. METHODS: Dynamic Visual Acuity (DVA) data from 14 astro/cosmonauts were collected after long-duration (6 months) spaceflight. The difference in acuity between seated and walking conditions provided a metric of change in the subjects' ability to maintain gaze fixation during self-motion. In each condition, a psychophysical threshold detection algorithm was used to display Landolt ring optotypes at a size that was near each subject's acuity threshold. Verbal responses regarding the orientation of the gap were recorded as the optotypes appeared sequentially on a computer display 4 meters away. During the walking trials, subjects walked at 6.4 km/h on a motorized treadmill. RESULTS: A decrement in mean postflight DVA was found, with mean values returning to baseline within 1 week. The population mean showed a consistent improvement in DVA performance, but it was accompanied by high variability. A closer examination of the individual subjects' recovery curves revealed that many did not follow a pattern of continuous improvement with each passing day. When adjusted on the basis of previous long-duration flight experience, the population mean shows a "bounce" in the re-adaptation curve. CONCLUSION: Gaze control during self-motion is altered following long-duration spaceflight and changes in postflight DVA performance indicate that vestibular re-adaptation may be more complex than a gradual return to normal.

  4. A reduced-order adaptive neuro-fuzzy inference system model as a software sensor for rapid estimation of five-day biochemical oxygen demand

    NASA Astrophysics Data System (ADS)

    Noori, Roohollah; Safavi, Salman; Nateghi Shahrokni, Seyyed Afshin

    2013-07-01

    The five-day biochemical oxygen demand (BOD5) is one of the key parameters in water quality management. In this study, a novel approach, a reduced-order adaptive neuro-fuzzy inference system (ROANFIS) model, was developed for rapid estimation of BOD5. In addition, an uncertainty analysis of adaptive neuro-fuzzy inference system (ANFIS) and ROANFIS models was carried out based on Monte-Carlo simulation. Accuracy analysis of the ANFIS and ROANFIS models, based on both the developed discrepancy ratio and threshold statistics, revealed that the selected ROANFIS model was superior. The Pearson correlation coefficient (R) and root mean square error for the best fitted ROANFIS model were 0.96 and 7.12, respectively. Furthermore, uncertainty analysis of the developed models indicated that the selected ROANFIS had less uncertainty than the ANFIS model and accurately forecasted BOD5 in the Sefidrood River Basin. The uncertainty analysis also showed that the fraction of predictions bracketed by the 95% confidence bound and the d-factor in the testing steps for the selected ROANFIS model were 94% and 0.83, respectively.

  5. Biocrust spatial distribution at landscape scale is strongly controlled by terrain attributes: Topographic thresholds for colonization

    NASA Astrophysics Data System (ADS)

    Raúl Román Fernández, José; Rodríguez-Caballero, Emilio; Chamizo de la Piedra, Sonia; Roncero Ramos, Bea; Cantón Castilla, Yolanda

    2017-04-01

    Biological soil crusts (biocrusts) are spatially variable components of soil. Whereas biogeographic, climatic or soil properties drive biocrust distribution from regional to global scales, biocrust spatial distribution within the landscape is controlled by topographic forces that create specific microhabitats that promote or hinder biocrust growth. By knowing which variables control biocrust distribution, and their individual effects, we can establish the abiotic thresholds that limit natural biocrust colonization in different environments, which may be very useful for designing soil restoration programmes. The objective of this study was to analyse the influence of topographic-related variables on the distribution of different types of biocrust within a semiarid catchment where cyanobacteria- and lichen-dominated biocrusts represent the most important surface components, the El Cautivo experimental area (SE Spain). To do this, natural coverage of i) bare soil, ii) vegetation, iii) cyanobacteria-dominated soil crust and iv) lichen-dominated soil crust was measured on 70 experimental plots distributed across 23 transects (three 4.5 x 4.5 m plots per transect). Following that, we used a 1 m x 1 m DEM (Digital Elevation Model) of the study site, obtained from a LiDAR point cloud, to calculate different topographic variables such as slope gradient, length slope (LS) factor (potential sediment transport index), potential incoming solar radiation, topographic wetness index (WI) and maximum flow accumulation. Canonical Correspondence Analysis was performed to infer the influence of each variable on the coverage of each class, and thresholds of biocrust colonization were identified mathematically by means of linear regression analysis describing the relationship between each factor and biocrust cover. Our results show that the spatial distribution of cyanobacteria-dominated biocrust, which shows physiological and morphological adaptation to cope with drought and UVA radiation, was mostly controlled by incoming solar radiation, being mostly located in areas with high incoming solar radiation and low slope, with a threshold at 48 degrees above which it is not found. Lichen-dominated biocrust, on the other hand, colonizes the uppermost and steepest parts of north-aspect hillslopes where incoming solar radiation and ETP are low, as a consequence of its lower capacity to survive under extreme temperatures and drought conditions. Where the soil has a higher capacity to retain run-on (higher WI), the surface is mostly covered by plants instead of lichens. Bare soil distribution is controlled by the combination of two factors, slope and solar radiation, covering the south-aspect hillslopes, where the slope gradient is high and incoming solar radiation and ETP are too high for lichen colonization.

  6. An Ultra-Low Power Turning Angle Based Biomedical Signal Compression Engine with Adaptive Threshold Tuning

    PubMed Central

    Zhou, Jun; Wang, Chao

    2017-01-01

    Intelligent sensing is drastically changing our everyday life including healthcare by biomedical signal monitoring, collection, and analytics. However, long-term healthcare monitoring generates tremendous data volume and demands significant wireless transmission power, which imposes a big challenge for wearable healthcare sensors usually powered by batteries. Efficient compression engine design to reduce wireless transmission data rate with ultra-low power consumption is essential for wearable miniaturized healthcare sensor systems. This paper presents an ultra-low power biomedical signal compression engine for healthcare data sensing and analytics in the era of big data and sensor intelligence. It extracts the feature points of the biomedical signal by window-based turning angle detection. The proposed approach has low complexity and thus low power consumption while achieving a large compression ratio (CR) and good quality of reconstructed signal. Near-threshold design technique is adopted to further reduce the power consumption on the circuit level. Besides, the angle threshold for compression can be adaptively tuned according to the error between the original signal and reconstructed signal to address the variation of signal characteristics from person to person or from channel to channel to meet the required signal quality with optimal CR. For demonstration, the proposed biomedical compression engine has been used and evaluated for ECG compression. It achieves an average CR of 71.08% and percentage root-mean-square difference (PRD) of 5.87% while consuming only 39 nW. Compared to several state-of-the-art ECG compression engines, the proposed design has significantly lower power consumption while achieving similar CR and PRD, making it suitable for long-term wearable miniaturized sensor systems to sense and collect healthcare data for remote data analytics. PMID:28783079

  7. An Ultra-Low Power Turning Angle Based Biomedical Signal Compression Engine with Adaptive Threshold Tuning.

    PubMed

    Zhou, Jun; Wang, Chao

    2017-08-06

    Intelligent sensing is drastically changing our everyday life including healthcare by biomedical signal monitoring, collection, and analytics. However, long-term healthcare monitoring generates tremendous data volume and demands significant wireless transmission power, which imposes a big challenge for wearable healthcare sensors usually powered by batteries. Efficient compression engine design to reduce wireless transmission data rate with ultra-low power consumption is essential for wearable miniaturized healthcare sensor systems. This paper presents an ultra-low power biomedical signal compression engine for healthcare data sensing and analytics in the era of big data and sensor intelligence. It extracts the feature points of the biomedical signal by window-based turning angle detection. The proposed approach has low complexity and thus low power consumption while achieving a large compression ratio (CR) and good quality of reconstructed signal. Near-threshold design technique is adopted to further reduce the power consumption on the circuit level. Besides, the angle threshold for compression can be adaptively tuned according to the error between the original signal and reconstructed signal to address the variation of signal characteristics from person to person or from channel to channel to meet the required signal quality with optimal CR. For demonstration, the proposed biomedical compression engine has been used and evaluated for ECG compression. It achieves an average CR of 71.08% and percentage root-mean-square difference (PRD) of 5.87% while consuming only 39 nW. Compared to several state-of-the-art ECG compression engines, the proposed design has significantly lower power consumption while achieving similar CR and PRD, making it suitable for long-term wearable miniaturized sensor systems to sense and collect healthcare data for remote data analytics.
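
    A software sketch of the two ingredients described in the records above (turning-angle feature extraction and feedback tuning of the angle threshold against the reconstruction error) is given below; it operates per sample rather than per window, uses linear interpolation for reconstruction, and its simple feedback loop is an assumption, not the hardware engine's tuning rule.

      import numpy as np

      def turning_points(x, angle_thr_deg, dt=1.0):
          """Keep samples where the piecewise-linear trajectory turns by more than a threshold angle."""
          keep, thr = [0], np.deg2rad(angle_thr_deg)
          for i in range(1, len(x) - 1):
              v1 = np.array([dt, x[i] - x[i - 1]])
              v2 = np.array([dt, x[i + 1] - x[i]])
              c = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
              if np.arccos(np.clip(c, -1.0, 1.0)) > thr:
                  keep.append(i)
          keep.append(len(x) - 1)
          return np.array(keep)

      def compress_adaptive(x, target_prd=5.0, thr=10.0, step=1.0, iters=20):
          """Tune the angle threshold so the reconstruction error (PRD) approaches a target."""
          for _ in range(iters):
              idx = turning_points(x, thr)
              rec = np.interp(np.arange(len(x)), idx, x[idx])
              prd = 100 * np.linalg.norm(x - rec) / np.linalg.norm(x)
              thr = thr + step if prd < target_prd else max(thr - step, 0.5)
          return idx, prd, thr

      sig = 3 * np.sin(np.linspace(0, 4 * np.pi, 80))    # coarse test signal
      idx, prd, thr = compress_adaptive(sig)
      print(f"kept {len(idx)}/{len(sig)} samples, PRD={prd:.1f}%, final threshold={thr:.1f} deg")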

  8. A Novel Zero Velocity Interval Detection Algorithm for Self-Contained Pedestrian Navigation System with Inertial Sensors

    PubMed Central

    Tian, Xiaochun; Chen, Jiabin; Han, Yongqiang; Shang, Jianyu; Li, Nan

    2016-01-01

    Zero velocity update (ZUPT) plays an important role in pedestrian navigation algorithms with the premise that the zero velocity interval (ZVI) should be detected accurately and effectively. A novel adaptive ZVI detection algorithm based on a smoothed pseudo Wigner–Ville distribution to remove multiple frequencies intelligently (SPWVD-RMFI) is proposed in this paper. The novel algorithm adopts the SPWVD-RMFI method to extract the pedestrian gait frequency and to calculate the optimal ZVI detection threshold in real time by establishing the function relationships between the thresholds and the gait frequency; then, the adaptive adjustment of thresholds with gait frequency is realized and improves the ZVI detection precision. To put it into practice, a ZVI detection experiment is carried out; the result shows that compared with the traditional fixed threshold ZVI detection method, the adaptive ZVI detection algorithm can effectively reduce the false and missed detection rate of ZVI; this indicates that the novel algorithm has high detection precision and good robustness. Furthermore, pedestrian trajectory positioning experiments at different walking speeds are carried out to evaluate the influence of the novel algorithm on positioning precision. The results show that the ZVI detected by the adaptive ZVI detection algorithm for pedestrian trajectory calculation can achieve better performance. PMID:27669266

  9. Laying the Groundwork for NCLEX Success: An Exploration of Adaptive Quizzing as an Examination Preparation Method.

    PubMed

    Cox-Davenport, Rebecca A; Phelan, Julia C

    2015-05-01

    First-time NCLEX-RN pass rates are an important indicator of nursing school success and quality. Nursing schools use different methods to anticipate NCLEX outcomes and help prevent student failure and possible threat to accreditation. This study evaluated the impact of a shift in NCLEX preparation policy at a BSN program in the southeast United States. The policy shifted from the use of predictor score thresholds to determine graduation eligibility to a more proactive remediation strategy involving adaptive quizzing. A descriptive correlational design evaluated the impact of an adaptive quizzing system designed to give students ongoing active practice and feedback and explored the relationship between predictor examinations and NCLEX success. Data from student usage of the system as well as scores on predictor tests were collected for three student cohorts. Results revealed a positive correlation between adaptive quizzing system usage and content mastery. Two of the 69 students in the sample did not pass the NCLEX. With so few students failing the NCLEX, predictability of any course variables could not be determined. The power of predictor examinations to predict NCLEX failure could also not be supported. The most consistent factor among students, however, was their content mastery level within the adaptive quizzing system. Implications of these findings are discussed.

  10. Packet-Based Protocol Efficiency for Aeronautical and Satellite Communications

    NASA Technical Reports Server (NTRS)

    Carek, David A.

    2005-01-01

    This paper examines the relation between bit error ratios and the effective link efficiency when transporting data with a packet-based protocol. Relations are developed to quantify the impact of a protocol's packet size and header size relative to the bit error ratio of the underlying link. These relations are examined in the context of radio transmissions that exhibit variable error conditions, such as those used in satellite, aeronautical, and other wireless networks. A comparison of two packet sizing methodologies is presented. From these relations, the true ability of a link to deliver user data, or information, is determined. Relations are developed to calculate the optimal protocol packet size for given link error characteristics. These relations could be useful in future research for developing an adaptive protocol layer. They can also be used for sizing protocols in the design of static links, where bit error ratios have small variability.
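
    The trade-off described above can be made concrete with a standard link-efficiency model: a packet of p bits, h of them header, is delivered intact with probability (1 - BER)^p, so the useful throughput fraction is ((p - h) / p) * (1 - BER)^p and an optimal packet size exists for each bit error ratio. This is a common textbook formulation used only for illustration; the paper's exact relations and retransmission assumptions may differ.

      import numpy as np

      def efficiency(packet_bits, header_bits, ber):
          """Fraction of transmitted bits delivered as user data (no retransmissions)."""
          return (packet_bits - header_bits) / packet_bits * (1.0 - ber) ** packet_bits

      def optimal_packet_size(header_bits, ber):
          sizes = np.arange(header_bits + 8, 65_536, 8)
          eff = efficiency(sizes, header_bits, ber)
          i = int(np.argmax(eff))
          return int(sizes[i]), float(eff[i])

      for ber in (1e-7, 1e-6, 1e-5):
          print(ber, optimal_packet_size(header_bits=160, ber=ber))   # 160-bit header is a placeholder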

  11. Deep pain sensitivity is correlated with oral-health-related quality of life but not with prosthetic factors in complete denture wearers

    PubMed Central

    COSTA, Yuri Martins; PORPORATTI, André Luís; HILGENBERG-SYDNEY, Priscila Brenner; BONJARDIM, Leonardo Rigoldi; CONTI, Paulo César Rodrigues

    2015-01-01

    ABSTRACT A low Pressure Pain Threshold (PPT) is considered a risk factor for Temporomandibular Disorders (TMD) and is influenced by psychological variables. Objectives To correlate deep pain sensitivity of masticatory muscles with prosthetic factors and Oral-Health-Related Quality of Life (OHRQoL) in completely edentulous subjects. Material and Methods A total of 29 complete denture wearers were recruited. The variables were: a) Pressure Pain Threshold (PPT) of the masseter and temporalis; b) retention, stability, and tooth wear of dentures; c) Vertical Dimension of Occlusion (VDO); d) Oral Health Impact Profile (OHIP) adapted to orofacial pain. The Kolmogorov-Smirnov test, the Pearson Product-Moment correlation coefficient, the Spearman Rank correlation coefficient, the Point-Biserial correlation coefficient, and the Bonferroni correction (α=1%) were applied to the data. Results The mean age (standard deviation) of the participants was 70.1 years (9.5) and 82% of them were females. There were no significant correlations with prosthetic factors, but significant negative correlations were found between the OHIP and the PPT of the anterior temporalis (r=-0.50, 95% CI -0.73 to 0.17, p=0.005). Discussion The deep pain sensitivity of masticatory muscles in complete denture wearers is associated with OHRQoL, but not with prosthetic factors. PMID:26814457

  12. A Unified Nonlinear Adaptive Approach for Detection and Isolation of Engine Faults

    NASA Technical Reports Server (NTRS)

    Tang, Liang; DeCastro, Jonathan A.; Zhang, Xiaodong; Farfan-Ramos, Luis; Simon, Donald L.

    2010-01-01

    A challenging problem in aircraft engine health management (EHM) system development is to detect and isolate faults in system components (i.e., compressor, turbine), actuators, and sensors. Existing nonlinear EHM methods often deal with component faults, actuator faults, and sensor faults separately, which may potentially lead to incorrect diagnostic decisions and unnecessary maintenance. Therefore, it would be ideal to address sensor faults, actuator faults, and component faults under one unified framework. This paper presents a systematic and unified nonlinear adaptive framework for detecting and isolating sensor faults, actuator faults, and component faults for aircraft engines. The fault detection and isolation (FDI) architecture consists of a parallel bank of nonlinear adaptive estimators. Adaptive thresholds are appropriately designed such that, in the presence of a particular fault, all components of the residual generated by the adaptive estimator corresponding to the actual fault type remain below their thresholds. If the faults are sufficiently different, then at least one component of the residual generated by each remaining adaptive estimator should exceed its threshold. Therefore, based on the specific response of the residuals, sensor faults, actuator faults, and component faults can be isolated. The effectiveness of the approach was evaluated using the NASA C-MAPSS turbofan engine model, and simulation results are presented.
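
    The isolation logic itself reduces to a consistency test, sketched below with illustrative names: a fault hypothesis survives only if every component of the residual produced by its matched adaptive estimator stays within its adaptive threshold. The estimators and the threshold design laws of the paper are not reproduced here.

      import numpy as np

      def isolate_fault(residuals, thresholds):
          """Return the fault hypotheses whose residual components all stay within their thresholds."""
          return [name for name, r in residuals.items()
                  if np.all(np.abs(r) <= thresholds[name])]

      # toy numbers only: each estimator would normally produce these residuals online
      residuals = {"sensor": np.array([0.2, 0.1]),
                   "actuator": np.array([1.8, 0.4]),
                   "component": np.array([2.5, 3.1])}
      thresholds = {name: np.array([0.5, 0.5]) for name in residuals}
      print(isolate_fault(residuals, thresholds))        # -> ['sensor']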

  13. Modern Adaptive Analytics Approach to Lowering Seismic Network Detection Thresholds

    NASA Astrophysics Data System (ADS)

    Johnson, C. E.

    2017-12-01

    Modern seismic networks present a number of challenges, but perhaps most notably are those related to 1) extreme variation in station density, 2) temporal variation in station availability, and 3) the need to achieve detectability for much smaller events of strategic importance. The first of these has been reasonably addressed in the development of modern seismic associators, such as GLASS 3.0 by the USGS/NEIC, though some work still remains to be done in this area. However, the latter two challenges demand special attention. Station availability is impacted by weather, equipment failure or the adding or removing of stations, and while thresholds have been pushed to increasingly smaller magnitudes, new algorithms are needed to achieve even lower thresholds. Station availability can be addressed by a modern, adaptive architecture that maintains specified performance envelopes using adaptive analytics coupled with complexity theory. Finally, detection thresholds can be lowered using a novel approach that tightly couples waveform analytics with the event detection and association processes based on a principled repicking algorithm that uses particle realignment for enhanced phase discrimination.

  14. Vehicle tracking using fuzzy-based vehicle detection window with adaptive parameters

    NASA Astrophysics Data System (ADS)

    Chitsobhuk, Orachat; Kasemsiri, Watjanapong; Glomglome, Sorayut; Lapamonpinyo, Pipatphon

    2018-04-01

    In this paper, a fuzzy-based vehicle tracking system is proposed. The proposed system consists of two main processes: vehicle detection and vehicle tracking. In the first process, the Gradient-based Adaptive Threshold Estimation (GATE) algorithm is adopted to provide a suitable threshold value for Sobel edge detection. The estimated threshold adapts to the changes in illumination conditions throughout the day, which leads to greater vehicle detection performance compared to a fixed, user-defined threshold. In the second process, this paper proposes a novel vehicle tracking algorithm, namely Fuzzy-based Vehicle Analysis (FBA), in order to reduce the false estimation of vehicle tracking caused by uneven edges of large vehicles and by vehicles changing lanes. The proposed FBA algorithm employs the average edge density and the Horizontal Moving Edge Detection (HMED) algorithm to alleviate those problems by adopting fuzzy rule-based algorithms to rectify the vehicle tracking. The experimental results demonstrate that the proposed system provides a high vehicle detection accuracy of about 98.22% and a low false detection rate of about 3.92%.

  15. Gravity receptor function in mice with graded otoconial deficiencies.

    PubMed

    Jones, Sherri M; Erway, Lawrence C; Johnson, Kenneth R; Yu, Heping; Jones, Timothy A

    2004-05-01

    The purpose of the present study was to examine gravity receptor function in mutant mouse strains with variable deficits in otoconia: lethal milk (lm), pallid (pa), tilted (tlt), mocha (mh), and muted (mu). Control animals were either age-matched heterozygotes or C57BL/6J (abbr. B6) mice. Gravity receptor function was measured using linear vestibular evoked potentials (VsEPs). Cage and swimming behaviors were also documented. Temporal bones were cleared to assess the overall otoconial deficit and to correlate structure and function for lm mice. Results confirmed the absence of VsEPs for mice that lacked otoconia completely. VsEP thresholds and amplitudes varied in mouse strains with variable loss of otoconia. Some heterozygotes also showed elevated VsEP thresholds in comparison to B6 mice. In lm mice, which have absent otoconia in the utricle and a variable loss of otoconia in the saccule, VsEPs were present and average P1/N1 amplitudes were highly correlated with the average loss of saccular otoconia (R = 0.77, p < 0.001). Cage and swimming behavior were not adversely affected in those animals with recordable VsEPs. Most, but not all, mice with absent VsEPs were unable to swim. Some animals were able to swim despite having no measurable gravity receptor response. The latter finding underscores the remarkable adaptive potential exhibited by neurobehavioral systems following profound sensory loss. It also shows that behavior alone may be an unreliable indicator of the extent of gravity receptor deficits.

  16. Experimental Psychological Stress on Quantitative Sensory Testing Response in Patients with Temporomandibular Disorders.

    PubMed

    Araújo Oliveira Ferreira, Dyna Mara; Costa, Yuri Martins; de Quevedo, Henrique Müller; Bonjardim, Leonardo Rigoldi; Rodrigues Conti, Paulo César

    2018-05-15

    To assess the modulatory effects of experimental psychological stress on the somatosensory evaluation of myofascial temporomandibular disorder (TMD) patients. A total of 20 women with myofascial TMD and 20 age-matched healthy women were assessed by means of a standardized battery of quantitative sensory testing. Cold detection threshold (CDT), warm detection threshold (WDT), cold pain threshold (CPT), heat pain threshold (HPT), mechanical pain threshold (MPT), wind-up ratio (WUR), and pressure pain threshold (PPT) were performed on the facial skin overlying the masseter muscle. The variables were measured in three sessions: before (baseline) and immediately after the Paced Auditory Serial Addition Task (PASAT) (stress) and then after a washout period of 20 to 30 minutes (poststress). Mixed analysis of variance (ANOVA) was applied to the data, and the significance level was set at P = .050. A significant main effect of the experimental session on all thermal tests was found (ANOVA: F > 4.10, P < .017), where detection tests presented an increase in thresholds in the poststress session compared to baseline (CDT, P = .012; WDT, P = .040) and pain thresholds were reduced in the stress (CPT, P < .001; HPT, P = .001) and poststress sessions (CPT, P = .005; HPT, P = .006) compared to baseline. In addition, a significant main effect of the study group on all mechanical tests (MPT, WUR, and PPT) was found (ANOVA: F > 4.65, P < .037), where TMD patients were more sensitive than healthy volunteers. Acute mental stress conditioning can modulate thermal sensitivity of the skin overlying the masseter in myofascial TMD patients and healthy volunteers. Therefore, psychological stress should be considered in order to perform an unbiased somatosensory assessment of TMD patients.

  17. Detection and discrimination of colour, a comparison of physiological and psychophysical data

    NASA Astrophysics Data System (ADS)

    Valberg, A.; Lee, B. B.

    1989-01-01

    Whereas the physiological basis of colorimetry (colour matches) is well understood in terms of the trireceptor theory of colour vision, colour discrimination and scaling still lack a comparable foundation. We present here experimental data that demonstrate how the sensitivity and responsiveness of different types of cone-opponent and non-opponent cells of the macaque monkey correlate with human threshold sensitivity on the one hand, and how they in combination can be used to construct a suprathreshold equidistant colour space. Psychophysical thresholds correlate well with the threshold envelope of the most sensitive cells when stimuli are projected upon a steady white background. Detection thresholds for stimuli of differing wavelength and purity (saturation) generally indicate a transition from a phasic non-opponent system to a tonic opponent system of on-centre cells as purity increases. Detection and chromatic discrimination thresholds coincide only for long and short wavelengths of high purity, whereas they differ for mid-spectral lights. Different cell types may thus support detection and discrimination with different stimuli. With chromatic scaling of surface colours, on the other hand, when stimuli are darker than an adaptation field, still other cell types are needed. We demonstrate that it is possible, from a combination of on- and off-opponent cells, to reconstruct a uniform colour space, using summed outputs of cells with the same cone combination and vector addition for cells with different combinations. Different hues are represented by opponent cells with inputs from different cone types, the hue percept being related to the ratio of the activities of these cell systems.

  18. Process- and controller-adaptations determine the physiological effects of cold acclimation.

    PubMed

    Werner, Jürgen

    2008-09-01

    Experimental results on physiological effects of cold adaptation seem confusing and apparently incompatible with one another. This paper will explain that a substantial part of such a variety of results may be deduced from a common functional concept. A core/shell treatment ("model") of the thermoregulatory system is used with mean body temperature as the controlled variable. Adaptation, as a higher control level, is introduced into the system. Due to persistent stressors, either the (heat transfer) process or the controller properties (parameters) are adjusted (or both). It is convenient to call the one "process adaptation" and the other "controller adaptation". The most commonly demonstrated effect of autonomic cold acclimation is a change in the controller threshold. The analysis shows that this necessarily means a lowering of body temperature because of a lowered metabolic rate. This explains experimental results on both Europeans in the climatic chamber and Australian Aborigines in a natural environment. Exclusive autonomic process adaptation occurs in the form of a better insulation. The analysis explains why the post-adaptive steady state can only be achieved if the controller system reduces metabolism, and why in spite of this the new state is inevitably characterized by a rise in body temperature. If both process and controller adaptations are simultaneously present, there may be no change of body temperature at all, e.g., as demonstrated in animal experiments. Whether this kind of adaptation delivers a decrease, an increase or no change of mean body temperature depends on the proportion of process and controller adaptation.

  19. Identification of a self-paced hitting task in freely moving rats based on adaptive spike detection from multi-unit M1 cortical signals

    PubMed Central

    Hammad, Sofyan H. H.; Farina, Dario; Kamavuako, Ernest N.; Jensen, Winnie

    2013-01-01

    Invasive brain–computer interfaces (BCIs) may prove to be a useful rehabilitation tool for severely disabled patients. Although some systems have shown to work well in restricted laboratory settings, their usefulness must be tested in less controlled environments. Our objective was to investigate if a specific motor task could reliably be detected from multi-unit intra-cortical signals from freely moving animals. Four rats were trained to hit a retractable paddle (defined as a “hit”). Intra-cortical signals were obtained from electrodes placed in the primary motor cortex. First, the signal-to-noise ratio was increased by wavelet denoising. Action potentials were then detected using an adaptive threshold, counted in three consecutive time intervals and were used as features to classify either a “hit” or a “no-hit” (defined as an interval between two “hits”). We found that a “hit” could be detected with an accuracy of 75 ± 6% when wavelet denoising was applied whereas the accuracy dropped to 62 ± 5% without prior denoising. We compared our approach with the common daily practice in BCI that consists of using a fixed, manually selected threshold for spike detection without denoising. The results showed the feasibility of detecting a motor task in a less restricted environment than commonly applied within invasive BCI research. PMID:24298254

  20. Identification of ecological thresholds from variations in phytoplankton communities among lakes: contribution to the definition of environmental standards.

    PubMed

    Roubeix, Vincent; Danis, Pierre-Alain; Feret, Thibaut; Baudoin, Jean-Marc

    2016-04-01

    In aquatic ecosystems, the identification of ecological thresholds may be useful for managers as it can help to diagnose ecosystem health and to identify key levers to enable the success of preservation and restoration measures. A recent statistical method, gradient forest, based on random forests, was used to detect thresholds of phytoplankton community change in lakes along different environmental gradients. It performs exploratory analyses of multivariate biological and environmental data to estimate the location and importance of community thresholds along gradients. The method was applied to a data set of 224 French lakes which were characterized by 29 environmental variables and the mean abundances of 196 phytoplankton species. Results showed the high importance of geographic variables for the prediction of species abundances at the scale of the study. A second analysis was performed on a subset of lakes defined by geographic thresholds and presenting a higher biological homogeneity. Community thresholds were identified for the most important physico-chemical variables including water transparency, total phosphorus, ammonia, nitrates, and dissolved organic carbon. Gradient forest appeared as a powerful method at a first exploratory step, to detect ecological thresholds at large spatial scale. The thresholds that were identified here must be reinforced by the separate analysis of other aquatic communities and may be used then to set protective environmental standards after consideration of natural variability among lakes.

  1. 3D SAPIV particle field reconstruction method based on adaptive threshold.

    PubMed

    Qu, Xiangju; Song, Yang; Jin, Ying; Li, Zhenhua; Wang, Xuezhen; Guo, ZhenYan; Ji, Yunjing; He, Anzhi

    2018-03-01

    Particle image velocimetry (PIV) is a necessary flow field diagnostic technique that provides instantaneous velocimetry information non-intrusively. Three-dimensional (3D) PIV methods can supply the full understanding of a 3D structure, the complete stress tensor, and the vorticity vector in the complex flows. In synthetic aperture particle image velocimetry (SAPIV), the flow field can be measured with large particle intensities from the same direction by different cameras. During SAPIV particle reconstruction, particles are commonly reconstructed by manually setting a threshold to filter out unfocused particles in the refocused images. In this paper, the particle intensity distribution in refocused images is analyzed, and a SAPIV particle field reconstruction method based on an adaptive threshold is presented. By using the adaptive threshold to filter the 3D measurement volume integrally, the three-dimensional location information of the focused particles can be reconstructed. The cross correlations between images captured from cameras and images projected by the reconstructed particle field are calculated for different threshold values. The optimal threshold is determined by cubic curve fitting and is defined as the threshold value that causes the correlation coefficient to reach its maximum. The numerical simulation of a 16-camera array and a particle field at two adjacent time events quantitatively evaluates the performance of the proposed method. An experimental system consisting of a camera array of 16 cameras was used to reconstruct the four adjacent frames in a vortex flow field. The results show that the proposed reconstruction method can effectively reconstruct the 3D particle fields.
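
    The threshold-selection step described above (score each candidate threshold by the correlation between the captured camera images and reprojections of the thresholded volume, then locate the peak with a cubic fit) can be sketched as follows. This is a minimal illustration, not the authors' code; the reproject callable, which should map a thresholded volume to one synthetic image per camera, is an assumed placeholder for the projection model.

```python
import numpy as np

def image_correlation(a, b):
    """Normalized cross-correlation coefficient between two images."""
    a = np.asarray(a, dtype=float).ravel()
    b = np.asarray(b, dtype=float).ravel()
    a, b = a - a.mean(), b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def optimal_threshold(refocused_volume, captured_images, reproject, candidates):
    """Score each candidate threshold by the mean correlation between captured
    images and reprojections of the thresholded volume, then fit a cubic to the
    (threshold, score) curve and return the threshold at its maximum."""
    scores = []
    for t in candidates:
        filtered = np.where(refocused_volume >= t, refocused_volume, 0.0)
        projections = reproject(filtered)            # one image per camera view
        scores.append(np.mean([image_correlation(p, c)
                               for p, c in zip(projections, captured_images)]))
    coeffs = np.polyfit(candidates, scores, 3)       # cubic curve fit
    grid = np.linspace(min(candidates), max(candidates), 1000)
    return grid[np.argmax(np.polyval(coeffs, grid))]
```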

  2. Zone-size nonuniformity of 18F-FDG PET regional textural features predicts survival in patients with oropharyngeal cancer.

    PubMed

    Cheng, Nai-Ming; Fang, Yu-Hua Dean; Lee, Li-yu; Chang, Joseph Tung-Chieh; Tsan, Din-Li; Ng, Shu-Hang; Wang, Hung-Ming; Liao, Chun-Ta; Yang, Lan-Yan; Hsu, Ching-Han; Yen, Tzu-Chen

    2015-03-01

    The question as to whether the regional textural features extracted from PET images predict prognosis in oropharyngeal squamous cell carcinoma (OPSCC) remains open. In this study, we investigated the prognostic impact of regional heterogeneity in patients with T3/T4 OPSCC. We retrospectively reviewed the records of 88 patients with T3 or T4 OPSCC who had completed primary therapy. Progression-free survival (PFS) and disease-specific survival (DSS) were the main outcome measures. In an exploratory analysis, a standardized uptake value of 2.5 (SUV 2.5) was taken as the cut-off value for the detection of tumour boundaries. A fixed threshold at 42 % of the maximum SUV (SUVmax 42 %) and an adaptive threshold method were then used for validation. Regional textural features were extracted from pretreatment (18)F-FDG PET/CT images using the grey-level run length encoding method and grey-level size zone matrix. The prognostic significance of PET textural features was examined using receiver operating characteristic (ROC) curves and Cox regression analysis. Zone-size nonuniformity (ZSNU) was identified as an independent predictor of PFS and DSS. Its prognostic impact was confirmed using both the SUVmax 42 % and the adaptive threshold segmentation methods. Based on (1) total lesion glycolysis, (2) uniformity (a local scale texture parameter), and (3) ZSNU, we devised a prognostic stratification system that allowed the identification of four distinct risk groups. The model combining the three prognostic parameters showed a higher predictive value than each variable alone. ZSNU is an independent predictor of outcome in patients with advanced T-stage OPSCC, and may improve their prognostic stratification.
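
    For reference, zone-size nonuniformity (ZSNU) is a scalar feature of the grey-level size zone matrix (GLSZM). The sketch below uses a common formulation (squared per-size zone counts normalized by the total number of zones); the exact definition and the GLSZM construction used in the study may differ.

```python
import numpy as np

def zone_size_nonuniformity(glszm):
    """glszm[g, s]: number of zones with grey level g and zone size s.
    ZSNU sums the squared per-size zone counts and normalizes by the total
    number of zones, so images dominated by a few zone sizes score higher."""
    glszm = np.asarray(glszm, dtype=float)
    size_counts = glszm.sum(axis=0)      # zones of each size, over all grey levels
    return float((size_counts ** 2).sum() / glszm.sum())
```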

  3. Epidemic spreading on preferred degree adaptive networks.

    PubMed

    Jolad, Shivakumar; Liu, Wenjia; Schmittmann, B; Zia, R K P

    2012-01-01

    We study the standard SIS model of epidemic spreading on networks where individuals have a fluctuating number of connections around a preferred degree κ. Using very simple rules for forming such preferred degree networks, we find some unusual statistical properties not found in familiar Erdös-Rényi or scale-free networks. By letting κ depend on the fraction of infected individuals, we model the behavioral changes in response to how the extent of the epidemic is perceived. In our models, the behavioral adaptations can be either 'blind' or 'selective', depending on whether a node adapts by cutting or adding links to randomly chosen partners or selectively, based on the state of the partner. For a frozen preferred network, we find that the infection threshold follows the heterogeneous mean field result λ_c/μ = <κ>/<κ²> and the phase diagram matches the predictions of the annealed adjacency matrix (AAM) approach. With 'blind' adaptations, although the epidemic threshold remains unchanged, the infection level is substantially affected, depending on the details of the adaptation. The 'selective' adaptive SIS models are most interesting. Both the threshold and the level of infection change, controlled not only by how the adaptations are implemented but also by how often the nodes cut/add links (compared to the time scales of the epidemic spreading). A simple mean field theory is presented for the selective adaptations which captures the qualitative and some of the quantitative features of the infection phase diagram.
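
    The frozen-network threshold quoted above is the heterogeneous mean-field result λ_c/μ = <κ>/<κ²>, which can be evaluated directly for any degree sequence. The sketch below simply reads off that formula for an illustrative Poisson-distributed preferred-degree sequence; it does not simulate the adaptive models.

```python
import numpy as np

def hmf_sis_threshold(degrees):
    """Heterogeneous mean-field SIS threshold lambda_c / mu = <k> / <k^2>."""
    k = np.asarray(degrees, dtype=float)
    return k.mean() / (k ** 2).mean()

# Illustrative example: degrees narrowly distributed around a preferred degree of 20.
rng = np.random.default_rng(0)
print(hmf_sis_threshold(rng.poisson(20, size=10_000)))  # close to 20/420 = 1/21
```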

  4. Functional traits and plasticity in response to light in seedlings of four Iberian forest tree species.

    PubMed

    Sánchez-Gómez, David; Valladares, Fernando; Zavala, Miguel A

    2006-11-01

    We investigated the differential roles of physiological and morphological features on seedling survivorship along an experimental irradiance gradient in four dominant species of cool temperate-Mediterranean forests (Quercus robur L., Quercus pyrenaica Willd., Pinus sylvestris L. and Pinus pinaster Ait.). The lowest photochemical efficiency (F(v)/F(m) in dark-adapted leaves) was reached in deep shade (1% of full sunlight) in all species except Q. robur, which had the lowest photochemical efficiency in both deep shade and 100% of full sunlight. Species differed significantly in their survival in 1% of full sunlight but exhibited similar survivorship in 6, 20 and 100% of full sunlight. Shade-tolerant oaks had lower leaf area ratios, shoot to root ratios, foliage allocation ratios and higher rates of allocation to structural biomass (stem plus thick roots) than shade-intolerant pines. Overall phenotypic plasticity for each species, estimated as the difference between the minimum and the maximum mean values of the ecophysiological variables studied at the various irradiances divided by the maximum mean value of those variables, was inversely correlated with shade tolerance. Observed morphology, allocation and plasticity conformed to a conservative resource-use strategy, although observed differences in specific leaf area, which was higher in shade-tolerant species, supported a carbon gain maximization strategy. Lack of a congruent suite of traits underlying shade tolerance in the studied species provides evidence of adaptation to multiple selective forces. Although the study was based on only four species, the importance of ecophysiological variables as determinants of interspecific differences in survival in limiting light was demonstrated.

  5. High-frequency (8 to 16 kHz) reference thresholds and intrasubject threshold variability relative to ototoxicity criteria using a Sennheiser HDA 200 earphone.

    PubMed

    Frank, T

    2001-04-01

    The first purpose of this study was to determine high-frequency (8 to 16 kHz) thresholds for standardizing reference equivalent threshold sound pressure levels (RETSPLs) for a Sennheiser HDA 200 earphone. The second and perhaps more important purpose of this study was to determine whether repeated high-frequency thresholds using a Sennheiser HDA 200 earphone had a lower intrasubject threshold variability than the ASHA 1994 significant threshold shift criteria for ototoxicity. High-frequency thresholds (8 to 16 kHz) were obtained for 100 (50 male, 50 female) normally hearing (0.25 to 8 kHz) young adults (mean age of 21.2 yr) in four separate test sessions using a Sennheiser HDA 200 earphone. The mean and median high-frequency thresholds were similar for each test session and increased as frequency increased. At each frequency, the high-frequency thresholds were not significantly (p > 0.05) different for gender, test ear, or test session. The median thresholds at each frequency were similar to the 1998 interim ISO RETSPLs; however, large standard deviations and wide threshold distributions indicated very high intersubject threshold variability, especially at 14 and 16 kHz. Threshold repeatability was determined by finding the threshold differences between each possible test session comparison (N = 6). About 98% of all of the threshold differences were within a clinically acceptable range of ±10 dB from 8 to 14 kHz. The threshold differences between each subject's second, third, and fourth test sessions and their first test session were also examined to determine whether intrasubject threshold variability was less than the ASHA 1994 criteria for determining a significant threshold shift due to ototoxicity. The results indicated a false-positive rate of 0% for a threshold shift ≥20 dB at any frequency and a false-positive rate of 2% for a threshold shift >10 dB at two consecutive frequencies. This study verified that the output of high-frequency audiometers at 0 dB HL using Sennheiser HDA 200 earphones should equal the 1998 interim ISO RETSPLs from 8 to 16 kHz. Further, because the differences between repeated thresholds were well within ±10 dB and had an extremely low false-positive rate in reference to the ASHA 1994 criteria for a significant threshold shift due to ototoxicity, a Sennheiser HDA 200 earphone can be used for serial monitoring to determine whether significant high-frequency threshold shifts have occurred for patients receiving potentially ototoxic drug therapy.
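
    The shift criteria referenced above are straightforward to express in code. A minimal sketch, assuming baseline and follow-up thresholds are given in dB HL in the same frequency order, and covering only the two criteria quoted in the abstract:

```python
def significant_threshold_shift(baseline_db, followup_db):
    """Flag a significant ototoxic threshold shift using the two criteria quoted
    in the abstract: a shift of 20 dB or more at any single frequency, or a
    shift greater than 10 dB at two consecutive test frequencies."""
    shifts = [f - b for b, f in zip(baseline_db, followup_db)]  # positive = worse
    if any(s >= 20 for s in shifts):
        return True
    return any(s1 > 10 and s2 > 10 for s1, s2 in zip(shifts, shifts[1:]))
```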

  6. Transformation of two and three-dimensional regions by elliptic systems

    NASA Technical Reports Server (NTRS)

    Mastin, C. Wayne

    1991-01-01

    A reliable linear system is presented for grid generation in 2-D and 3-D. The method is robust in the sense that convergence is guaranteed but is not as reliable as other nonlinear elliptic methods in generating nonfolding grids. The construction of nonfolding grids depends on having reasonable approximations of cell aspect ratios and an appropriate distribution of grid points on the boundary of the region. Some guidelines are included on approximating the aspect ratios, but little help is offered on setting up the boundary grid other than to say that in 2-D the boundary correspondence should be close to that generated by a conformal mapping. It is assumed that the functions which control the grid distribution depend only on the computational variables and not on the physical variables. Whether this is actually the case depends on how the grid is constructed. In a dynamic adaptive procedure where the grid is constructed in the process of solving a fluid flow problem, the grid is usually updated at fixed iteration counts using the current value of the control function. Since the control function is not being updated during the iteration of the grid equations, the grid construction is a linear procedure. However, in the case of a static adaptive procedure where a trial solution is computed and used to construct an adaptive grid, the control functions may be recomputed at every step of the grid iteration.

  7. Psychophysical measurements in children: challenges, pitfalls, and considerations.

    PubMed

    Witton, Caroline; Talcott, Joel B; Henning, G Bruce

    2017-01-01

    Measuring sensory sensitivity is important in studying development and developmental disorders. However, with children, there is a need to balance reliable but lengthy sensory tasks with the child's ability to maintain motivation and vigilance. We used simulations to explore the problems associated with shortening adaptive psychophysical procedures, and suggest how these problems might be addressed. We quantify how adaptive procedures with too few reversals can over-estimate thresholds, introduce substantial measurement error, and make estimates of individual thresholds less reliable. The associated measurement error also obscures group differences. Adaptive procedures with children should therefore use as many reversals as possible, to reduce the effects of both Type 1 and Type 2 errors. Differences in response consistency, resulting from lapses in attention, further increase the over-estimation of threshold. Comparisons between data from individuals who may differ in lapse rate are therefore problematic, but measures to estimate and account for lapse rates in analyses may mitigate this problem.
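
    To illustrate the point about shortened adaptive procedures, the sketch below simulates a generic two-down/one-up staircase run against a logistic observer with an optional lapse rate. It is a toy model rather than the procedures simulated in the study, but comparing few versus many reversals reproduces the qualitative effect of increased estimation error.

```python
import numpy as np

def run_staircase(true_threshold, n_reversals, start=20.0, step=2.0,
                  slope=1.0, lapse=0.0, seed=None):
    """Two-down/one-up staircase; returns the mean of the reversal levels."""
    rng = np.random.default_rng(seed)
    level, n_correct, last_dir, reversals = start, 0, 0, []
    while len(reversals) < n_reversals:
        # Probability of a correct response from a logistic observer with lapses.
        p = lapse * 0.5 + (1 - lapse) / (1 + np.exp(-(level - true_threshold) / slope))
        if rng.random() < p:                         # correct response
            n_correct += 1
            direction = -1 if n_correct == 2 else 0  # step down after 2 correct
            if n_correct == 2:
                n_correct = 0
        else:                                        # incorrect response
            n_correct, direction = 0, 1              # step up after any error
        if direction and last_dir and direction != last_dir:
            reversals.append(level)                  # a reversal occurred here
        if direction:
            last_dir, level = direction, level + direction * step
    return float(np.mean(reversals))

# Compare the spread of estimates for 4 vs. 16 reversals over many simulated runs.
for n in (4, 16):
    estimates = [run_staircase(10.0, n, seed=s) for s in range(200)]
    print(n, round(float(np.mean(estimates)), 2), round(float(np.std(estimates)), 2))
```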

  8. From Theory to Rural Farms: Testing the Efficacy of the Dryland Development Paradigm of Desertification

    NASA Astrophysics Data System (ADS)

    Reynolds, J. F.; Herrick, J.; Huber-Sannwald, E.; Ayarza, M.

    2011-12-01

    The social and economic systems of humans (H) are inextricably linked with environmental (E) systems. This tight coupling is especially relevant in drylands, where ecosystem goods and services vital to sustaining the livelihoods of human populations are constantly changing over time. The Dryland Development Paradigm (DDP; Reynolds et al. 2007, Science 316, 847-851) was proposed as an integrated framework for dealing with the enormous complexity associated with coupled H-E systems. The DDP consists of five principles: (1) the structure, function and interrelationships that characterize H-E systems are always changing, so both H and E factors must always be considered simultaneously; (2) a limited suite of "slow" variables are critical determinants of H-E dynamics; (3) thresholds in both H and E systems are vital: if a key slow variable crosses a threshold this can lead to a different state or condition (a switch in cultural resistance to the introduction of new technology such as tractors to plow fields); (4) H-E systems are hierarchical in nature and because of the many cross-scale linkages and feedbacks, adaptation, surprises and self-organization are the norm; and (5) lastly, "solving" land degradation problems cannot be accomplished without drawing upon the firsthand experience and insights (local knowledge) of local stakeholders. For the past 7 years, ARIDnet-AMERICAS, an NSF-supported coordination research network, has applied these five principles via 11 case studies at diverse locations in Argentina, Bolivia, Chile, Colombia, Honduras, Mexico and the United States with the goal to compare and contrast the causes and processes of land degradation and their effects on the balance between the demand for, and supply of, ecosystem services. We present a summary of our initial synthesis. The causal human-environmental processes driving land degradation (e.g., overgrazing, government policies, international markets) are often similar but with differing levels of influence in different locations. Fundamental research knowledge is often limited, especially at multiple scales, and hence local stakeholder knowledge is essential for understanding the complexities of biophysical, social and economic processes and their interactions and feedbacks. Thresholds of H-E variables, while prevalent and essential for projecting vulnerabilities and critical risks to livelihoods, are difficult to quantify. We found the DDP to be a robust framework for developing conceptual models of potentially effective, adaptive and sustainable management policies, although the extraordinary variability of H-E subsystems poses enormous research, management and policy challenges. We also present our initial attempt to quantify these complex phenomena within the framework of an integrated assessment model (ARIDnet-IAM) that focuses on bridging development science to ecosystem services and sustainability of human livelihoods in global drylands.

  9. Algorithm for improving psychophysical threshold estimates by detecting sustained inattention in experiments using PEST.

    PubMed

    Rinderknecht, Mike D; Ranzani, Raffaele; Popp, Werner L; Lambercy, Olivier; Gassert, Roger

    2018-05-10

    Psychophysical procedures are applied in various fields to assess sensory thresholds. During experiments, sampled psychometric functions are usually assumed to be stationary. However, perception can be altered, for example by loss of attention to the presentation of stimuli, leading to biased data, which results in poor threshold estimates. The few existing approaches attempting to identify non-stationarities either detect only whether there was a change in perception, or are not suitable for experiments with a relatively small number of trials (e.g., [Formula: see text] 300). We present a method to detect inattention periods on a trial-by-trial basis with the aim of improving threshold estimates in psychophysical experiments using the adaptive sampling procedure Parameter Estimation by Sequential Testing (PEST). The performance of the algorithm was evaluated in computer simulations modeling inattention, and tested in a behavioral experiment on proprioceptive difference threshold assessment in 20 stroke patients, a population where attention deficits are likely to be present. Simulations showed that estimation errors could be reduced by up to 77% for inattentive subjects, even in sequences with less than 100 trials. In the behavioral data, inattention was detected in 14% of assessments, and applying the proposed algorithm resulted in reduced test-retest variability in 73% of these corrected assessments pairs. The novel algorithm complements existing approaches and, besides being applicable post hoc, could also be used online to prevent collection of biased data. This could have important implications in assessment practice by shortening experiments and improving estimates, especially for clinical settings.

  10. Unilateral Hearing Loss: Understanding Speech Recognition and Localization Variability-Implications for Cochlear Implant Candidacy.

    PubMed

    Firszt, Jill B; Reeder, Ruth M; Holden, Laura K

    At a minimum, unilateral hearing loss (UHL) impairs sound localization ability and understanding speech in noisy environments, particularly if the loss is severe to profound. Accompanying the numerous negative consequences of UHL is considerable unexplained individual variability in the magnitude of its effects. Identification of covariables that affect outcome and contribute to variability in UHLs could augment counseling, treatment options, and rehabilitation. Cochlear implantation as a treatment for UHL is on the rise yet little is known about factors that could impact performance or whether there is a group at risk for poor cochlear implant outcomes when hearing is near-normal in one ear. The overall goal of our research is to investigate the range and source of variability in speech recognition in noise and localization among individuals with severe to profound UHL and thereby help determine factors relevant to decisions regarding cochlear implantation in this population. The present study evaluated adults with severe to profound UHL and adults with bilateral normal hearing. Measures included adaptive sentence understanding in diffuse restaurant noise, localization, roving-source speech recognition (words from 1 of 15 speakers in a 140° arc), and an adaptive speech-reception threshold psychoacoustic task with varied noise types and noise-source locations. There were three age-sex-matched groups: UHL (severe to profound hearing loss in one ear and normal hearing in the contralateral ear), normal hearing listening bilaterally, and normal hearing listening unilaterally. Although the normal-hearing-bilateral group scored significantly better and had less performance variability than UHLs on all measures, some UHL participants scored within the range of the normal-hearing-bilateral group on all measures. The normal-hearing participants listening unilaterally had better monosyllabic word understanding than UHLs for words presented on the blocked/deaf side but not the open/hearing side. In contrast, UHLs localized better than the normal-hearing unilateral listeners for stimuli on the open/hearing side but not the blocked/deaf side. This suggests that UHLs had learned strategies for improved localization on the side of the intact ear. The UHL and unilateral normal-hearing participant groups were not significantly different for speech in noise measures. UHL participants with childhood rather than recent hearing loss onset localized significantly better; however, these two groups did not differ for speech recognition in noise. Age at onset in UHL adults appears to affect localization ability differently than understanding speech in noise. Hearing thresholds were significantly correlated with speech recognition for UHL participants but not the other two groups. Auditory abilities of UHLs varied widely and could be explained only in part by hearing threshold levels. Age at onset and length of hearing loss influenced performance on some, but not all measures. Results support the need for a revised and diverse set of clinical measures, including sound localization, understanding speech in varied environments, and careful consideration of functional abilities as individuals with severe to profound UHL are being considered potential cochlear implant candidates.

  11. Design of an adaptive CubeSat transmitter for achieving optimum signal-to-noise ratio (SNR)

    NASA Astrophysics Data System (ADS)

    Jaswar, F. D.; Rahman, T. A.; Hindia, M. N.; Ahmad, Y. A.

    2017-12-01

    CubeSat technology has opened the opportunity to conduct space-related research at a relatively low cost. A typical approach to maintaining an affordable CubeSat mission is to use a simple communication system based on a UHF link with fixed transmit power and data rate. However, a CubeSat in low Earth orbit (LEO) is not stationary with respect to the Earth's rotation, resulting in a variable propagation path length that affects the transmission signal. A transmitter with the adaptive capability to select among multiple sets of data rate and radio frequency (RF) transmit power is proposed to improve and optimise the link. This paper presents the adaptive UHF transmitter design as a solution to overcome the variability of the propagation path. The transmitter output power is adjustable from 0.5 W to 2 W according to the mode of operation and satellite power limitations. The transmitter is designed to have four selectable modes to achieve the optimum signal-to-noise ratio (SNR) and efficient power consumption based on the link budget analysis and satellite requirements. Three prototypes were developed and tested under space-environment conditions, including a radiation test. Total Ionizing Dose measurements were conducted in the radiation test performed at the Malaysia Nuclear Agency Laboratory. The results from this test have shown that the adaptive transmitter can operate for an estimated more than seven months in orbit. This radiation test, using a gamma source with a 1.5 krad exposure, is the first conducted for a satellite program in Malaysia.
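
    The mode selection described above rests on a link-budget estimate of the achievable SNR for each candidate transmit power. The sketch below is a generic free-space UHF link-budget calculation; the gains, losses, noise floor, and slant range used are illustrative assumptions, not values from the paper.

```python
import math

def free_space_path_loss_db(distance_km, freq_mhz):
    """Free-space path loss (dB) for a slant range in km and frequency in MHz."""
    return 32.44 + 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz)

def downlink_snr_db(tx_power_w, tx_gain_dbi, rx_gain_dbi, distance_km, freq_mhz,
                    noise_floor_dbm, misc_losses_db=3.0):
    """Simplified link budget: received power (dBm) minus the noise floor (dBm)."""
    eirp_dbm = 10 * math.log10(tx_power_w * 1000.0) + tx_gain_dbi
    received_dbm = (eirp_dbm + rx_gain_dbi
                    - free_space_path_loss_db(distance_km, freq_mhz) - misc_losses_db)
    return received_dbm - noise_floor_dbm

# Illustrative sweep over the 0.5-2 W power modes (assumed 1500 km slant range,
# 437 MHz, 12 dBi ground antenna, noise floor for a narrowband receiver).
for p_w in (0.5, 1.0, 2.0):
    snr = downlink_snr_db(p_w, 0.0, 12.0, 1500.0, 437.0, noise_floor_dbm=-127.0)
    print(f"{p_w:.1f} W -> SNR {snr:.1f} dB")
```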

  12. Effect of eccentricity and light level on the timing of light adaptation mechanisms.

    PubMed

    Barrionuevo, Pablo A; Matesanz, Beatriz M; Gloriani, Alejandro H; Arranz, Isabel; Issolio, Luis; Mar, Santiago; Aparicio, Juan A

    2018-04-01

    We explored the complexity of the light adaptation process, assessing adaptation recovery (Ar) at different eccentricities and light levels. Luminance thresholds were obtained with transient background fields at mesopic and photopic light levels for temporal retinal eccentricities (0°-15°) with a test/background stimulus size of 0.5°/1°, using a staircase procedure in a two-channel Maxwellian view optical system. Ar was obtained in comparison with steady data [Vis. Res. 125, 12 (2016), doi:10.1016/j.visres.2016.04.008]. Light level proportionally affects Ar only at the fovea. Photopic extrafoveal thresholds were one log unit higher for transient conditions. Adaptation was equally fast at low light levels for different retinal locations, with variations mainly affected by noise. These results provide evidence of different timing in the adaptation mechanisms involved.

  13. Resilience thinking: integrating resilience, adaptability and transformability

    Treesearch

    Carl Folke; Stephen R. Carpenter; Brian Walker; Marten Scheffer; Terry Chapin; Johan Rockstrom

    2010-01-01

    Resilience thinking addresses the dynamics and development of complex social-ecological systems (SES). Three aspects are central: resilience, adaptability and transformability. These aspects interrelate across multiple scales. Resilience in this context is the capacity of a SES to continually change and adapt yet remain within critical thresholds. Adaptability is part...

  14. Linking Remotely Sensed Aerosol Types to Their Chemical Composition

    NASA Technical Reports Server (NTRS)

    Dawson, Kyle William; Kacenelenbogen, Meloe S.; Johnson, Matthew S.; Burton, Sharon P.; Hostetler, Chris A.; Meskhidze, Nicholas

    2016-01-01

    Aerosol types measured during the Ship-Aircraft Bio-Optical Research (SABOR) experiment are related to GEOS-Chem model chemical composition. The application of this procedure to link model chemical components to aerosol type is desirable for understanding aerosol evolution over time. The Mahalanobis distance (DM) statistic is used to cluster model groupings of five chemical components (organic carbon, black carbon, sea salt, dust and sulfate) in a way analogous to the methods used by Burton et al. [2012] and Russell et al. [2014]. First, model-to-measurement evaluation is performed by collocating vertically resolved aerosol extinction from SABOR High Spectral Resolution LiDAR (HSRL) to the GEOS-Chem nested high-resolution data. Comparisons of modeled-to-measured aerosol extinction are shown to be within 35% ± 14%. Second, the model chemical components are combined into five variables used to calculate the DM and the cluster means and covariances for each HSRL-retrieved aerosol type. The layer variables from the model are aerosol optical depth (AOD) ratios of (i) sea salt and (ii) dust to total AOD, mass ratios of (iii) total carbon (i.e. sum of organic and black carbon) to the sum of total carbon and sulfate and (iv) organic carbon to black carbon, and (v) the natural log of the aerosol-to-molecular extinction ratio. Third, the layer variables and at most five out of twenty SABOR flights are used to form the pre-specified clusters for calculating DM and to assign an aerosol type. After determining the pre-specified clusters, model aerosol types are produced for the entire vertically resolved GEOS-Chem nested domain over the United States and the model chemical component distributions relating to each type are recorded. Resulting aerosol types are Dust/Dusty Mix, Maritime, Smoke, Urban and Fresh Smoke (separated into 'dark' and 'light' by a threshold of the organic to black carbon ratio). Model-calculated DM not belonging to a specific type (i.e. not meeting a threshold probability) is termed an outlier, and those DM values that can belong to multiple types (i.e. showing weak probability of belonging to a specific cluster) are termed Overlap. MODIS active fires are overlaid on the model domain to qualitatively evaluate the model-predicted Smoke aerosol types.

  15. Linking remotely sensed aerosol types to their chemical composition

    NASA Astrophysics Data System (ADS)

    Dawson, K. W.; Kacenelenbogen, M. S.; Johnson, M. S.; Burton, S. P.; Hostetler, C. A.; Meskhidze, N.

    2016-12-01

    Aerosol types measured during the Ship-Aircraft Bio-Optical Research (SABOR) experiment are related to GEOS-Chem model chemical composition. The application of this procedure to link model chemical components to aerosol type is desirable for understanding aerosol evolution over time. The Mahalanobis distance (DM) statistic is used to cluster model groupings of five chemical components (organic carbon, black carbon, sea salt, dust and sulfate) in a way analogous to the methods used by Burton et al. [2012] and Russell et al. [2014]. First, model-to-measurement evaluation is performed by collocating vertically resolved aerosol extinction from SABOR High Spectral Resolution LiDAR (HSRL) to the GEOS-Chem nested high-resolution data. Comparisons of modeled-to-measured aerosol extinction are shown to be within 35% ± 14%. Second, the model chemical components are combined into five variables used to calculate the DM and the cluster means and covariances for each HSRL-retrieved aerosol type. The layer variables from the model are aerosol optical depth (AOD) ratios of (i) sea salt and (ii) dust to total AOD, mass ratios of (iii) total carbon (i.e. sum of organic and black carbon) to the sum of total carbon and sulfate and (iv) organic carbon to black carbon, and (v) the natural log of the aerosol-to-molecular extinction ratio. Third, the layer variables and at most five out of twenty SABOR flights are used to form the pre-specified clusters for calculating DM and to assign an aerosol type. After determining the pre-specified clusters, model aerosol types are produced for the entire vertically resolved GEOS-Chem nested domain over the United States and the model chemical component distributions relating to each type are recorded. Resulting aerosol types are Dust/Dusty Mix, Maritime, Smoke, Urban and Fresh Smoke (separated into `dark' and `light' by a threshold of the organic to black carbon ratio). Model-calculated DM not belonging to a specific type (i.e. not meeting a threshold probability) is termed an outlier, and those DM values that can belong to multiple types (i.e. showing weak probability of belonging to a specific cluster) are termed Overlap. MODIS active fires are overlaid on the model domain to qualitatively evaluate the model-predicted Smoke aerosol types.
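
    The two records above assign each model layer to the aerosol type whose pre-specified cluster yields the smallest Mahalanobis distance, with poorly matching layers labelled outliers. A minimal sketch of that assignment step, assuming the cluster means and covariances have already been built from the HSRL-typed training layers:

```python
import numpy as np

def mahalanobis_distance(x, mean, cov):
    """Mahalanobis distance of vector x from a cluster (mean, covariance)."""
    d = np.asarray(x, dtype=float) - np.asarray(mean, dtype=float)
    return float(np.sqrt(d @ np.linalg.inv(cov) @ d))

def classify_layer(x, clusters, outlier_cutoff=np.inf):
    """clusters: {type_name: (mean_vector, covariance_matrix)}.  Returns the type
    with the smallest distance, or 'Outlier' if no cluster is within the cutoff."""
    distances = {name: mahalanobis_distance(x, mu, cov)
                 for name, (mu, cov) in clusters.items()}
    best = min(distances, key=distances.get)
    return best if distances[best] <= outlier_cutoff else "Outlier"
```

    The Overlap label used in the study, for layers that match more than one cluster comparably well, could be added by also comparing the two smallest distances against a probability criterion.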

  16. Phase Synchronization of Hemodynamic Variables at Rest and after Deep Breathing Measured during the Course of Pregnancy

    PubMed Central

    Papousek, Ilona; Roessler, Andreas; Hinghofer-Szalkay, Helmut; Lang, Uwe; Kolovetsiou-Kreiner, Vassiliki

    2013-01-01

    Background The autonomic nervous system plays a central role in the functioning of systems critical for homeostasis maintenance. However, its role in the cardiovascular adaptation to pregnancy-related demands is poorly understood. We explored the maternal cardiovascular systems throughout pregnancy to quantify pregnancy-related autonomic nervous system adaptations. Methodology Continuous monitoring of heart rate (R-R interval; derived from 3-lead electrocardiography), blood pressure, and thoracic impedance was carried out in thirty-six women at six time-points throughout pregnancy. In order to quantify, in addition to the longitudinal effects on baseline levels throughout gestation, the immediate adaptive heart rate and blood pressure changes at each time point, a simple reflex test (deep breathing) was applied. Consequently, heart rate variability and blood pressure variability in the low (LF) and high (HF) frequency range, respiration and baroreceptor sensitivity were analyzed in resting conditions and after deep breathing. The adjustment of the rhythms of the R-R interval, blood pressure and respiration, partitioned for the sympathetic and the parasympathetic branch of the autonomic nervous system, was quantified by the phase synchronization index γ, which has been adopted from the analysis of weakly coupled chaotic oscillators. Results Heart rate and LF/HF ratio increased throughout pregnancy and these effects were accompanied by a continuous loss of baroreceptor sensitivity. The increases in heart rate and LF/HF ratio levels were associated with an increasing decline in the ability to flexibly respond to additional demands (i.e., diminished adaptive responses to deep breathing). The phase synchronization index γ showed that the observed effects could be explained by a decreased coupling of respiration and the cardiovascular system (HF components of heart rate and blood pressure). Conclusions/Significance The findings suggest that during the course of pregnancy the individual systems become increasingly independent to meet the increasing demands placed on the maternal cardiovascular and respiratory system. PMID:23577144
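
    The phase synchronization index γ mentioned above is commonly computed from the instantaneous Hilbert phases of two band-passed signals as the mean resultant length of their phase difference. The sketch below uses that common formulation, which may differ in detail from the one used in the study.

```python
import numpy as np
from scipy.signal import hilbert

def phase_synchronization_index(x, y, n=1, m=1):
    """gamma = |<exp(i*(n*phi_x - m*phi_y))>| in [0, 1]: 1 indicates perfect n:m
    phase locking between the two signals, 0 indicates no phase relationship."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    phi_x = np.angle(hilbert(x - x.mean()))   # instantaneous phase of signal x
    phi_y = np.angle(hilbert(y - y.mean()))   # instantaneous phase of signal y
    return float(np.abs(np.mean(np.exp(1j * (n * phi_x - m * phi_y)))))
```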

  17. A threshold method for immunological correlates of protection

    PubMed Central

    2013-01-01

    Background Immunological correlates of protection are biological markers such as disease-specific antibodies which correlate with protection against disease and which are measurable with immunological assays. It is common in vaccine research and in setting immunization policy to rely on threshold values for the correlate where the accepted threshold differentiates between individuals who are considered to be protected against disease and those who are susceptible. Examples where thresholds are used include development of a new generation 13-valent pneumococcal conjugate vaccine which was required in clinical trials to meet accepted thresholds for the older 7-valent vaccine, and public health decision making on vaccination policy based on long-term maintenance of protective thresholds for Hepatitis A, rubella, measles, Japanese encephalitis and others. Despite widespread use of such thresholds in vaccine policy and research, few statistical approaches have been formally developed which specifically incorporate a threshold parameter in order to estimate the value of the protective threshold from data. Methods We propose a 3-parameter statistical model called the a:b model, which incorporates parameters for a threshold and for constant but different infection probabilities below and above the threshold, estimated using profile likelihood or least squares methods. Evaluation of the estimated threshold can be performed by a significance test for the existence of a threshold using a modified likelihood ratio test which follows a chi-squared distribution with 3 degrees of freedom, and confidence intervals for the threshold can be obtained by bootstrapping. The model also permits assessment of relative risk of infection in patients achieving the threshold or not. Goodness-of-fit of the a:b model may be assessed using the Hosmer-Lemeshow approach. The model is applied to 15 datasets from published clinical trials on pertussis, respiratory syncytial virus and varicella. Results Highly significant thresholds with p-values less than 0.01 were found for 13 of the 15 datasets. Considerable variability was seen in the widths of confidence intervals. Relative risks indicated around 70% or better protection in 11 datasets, supporting the relevance of the estimated thresholds as indicators of strong protection. Goodness-of-fit was generally acceptable. Conclusions The a:b model offers a formal statistical method for estimating thresholds that differentiate susceptible from protected individuals, an estimation that has previously depended on putative statements based on visual inspection of data. PMID:23448322
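
    A profile-likelihood fit of the a:b model can be sketched directly: each candidate threshold splits the cohort into below/above groups with constant infection probabilities a and b, and the threshold maximizing the binomial log-likelihood is retained. This is a simplified illustration under those assumptions, not the authors' implementation (which also covers least squares estimation, the modified likelihood-ratio test, and bootstrap confidence intervals).

```python
import numpy as np

def fit_ab_model(titres, infected):
    """Profile-likelihood estimate of the a:b threshold model.  'titres' are the
    immunological assay values, 'infected' the 0/1 disease outcomes."""
    titres = np.asarray(titres, dtype=float)
    infected = np.asarray(infected, dtype=int)

    def loglik(y):
        p = np.clip(y.mean(), 1e-9, 1 - 1e-9)        # constant infection probability
        return y.sum() * np.log(p) + (len(y) - y.sum()) * np.log(1 - p)

    best = {"loglik": -np.inf}
    for t in np.unique(titres)[1:]:                   # candidate thresholds
        below, above = infected[titres < t], infected[titres >= t]
        if len(below) == 0 or len(above) == 0:
            continue
        ll = loglik(below) + loglik(above)
        if ll > best["loglik"]:
            best = {"threshold": float(t), "a": float(below.mean()),
                    "b": float(above.mean()), "loglik": float(ll)}
    return best                                       # relative risk ~ b / a
```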

  18. Threshold and channel interaction in cochlear implant users: evaluation of the tripolar electrode configuration.

    PubMed

    Bierer, Julie Arenberg

    2007-03-01

    The efficacy of cochlear implants is limited by spatial and temporal interactions among channels. This study explores the spatially restricted tripolar electrode configuration and compares it to bipolar and monopolar stimulation. Measures of threshold and channel interaction were obtained from nine subjects implanted with the Clarion HiFocus-I electrode array. Stimuli were biphasic pulses delivered at 1020 pulses/s. Threshold increased from monopolar to bipolar to tripolar stimulation and was most variable across channels with the tripolar configuration. Channel interaction, quantified by the shift in threshold between single- and two-channel stimulation, occurred for all three configurations but was largest for the monopolar and simultaneous conditions. The threshold shifts with simultaneous tripolar stimulation were slightly smaller than with bipolar and were not as strongly affected by the timing of the two channel stimulation as was monopolar. The subjects' performances on clinical speech tests were correlated with channel-to-channel variability in tripolar threshold, such that greater variability was related to poorer performance. The data suggest that tripolar channels with high thresholds may reveal cochlear regions of low neuron survival or poor electrode placement.

  19. Productivity responses of desert vegetation to precipitation patterns across a rainfall gradient.

    PubMed

    Li, Fang; Zhao, Wenzhi; Liu, Hu

    2015-03-01

    The influences of previous-year precipitation and episodic rainfall events on dryland plants and communities are poorly quantified in the temperate desert region of Northwest China. To evaluate the thresholds and lags in the response of aboveground net primary productivity (ANPP) to variability in rainfall pulses and seasonal precipitation along the precipitation-productivity gradient in three desert ecosystems with different precipitation regimes, we collected precipitation data from 2000 to 2012 in Shandan (SD), Linze (LZ) and Jiuquan (JQ) in northwestern China. Further, we extracted the corresponding MODIS Normalized Difference Vegetation Index (NDVI, a proxy for ANPP) datasets at 250 m spatial resolution. We then evaluated different desert ecosystems responses using statistical analysis, and a threshold-delay model (TDM). TDM is an integrative framework for analysis of plant growth, precipitation thresholds, and plant functional type strategies that capture the nonlinear nature of plant responses to rainfall pulses. Our results showed that: (1) the growing season NDVIINT (INT stands for time-integrated) was largely correlated with the warm season (spring/summer) at our mildly-arid desert ecosystem (SD). The arid ecosystem (LZ) exhibited a different response, and the growing season NDVIINT depended highly on the previous year's fall/winter precipitation and ANPP. At the extremely arid site (JQ), the variability of growing season NDVIINT was equally correlated with the cool- and warm-season precipitation; (2) some parameters of threshold-delay differed among the three sites: while the response of NDVI to rainfall pulses began at about 5 mm for all the sites, the maximum thresholds in SD, LZ, and JQ were about 55, 35 and 30 mm respectively, increasing with an increase in mean annual precipitation. By and large, more previous year's fall/winter precipitation, and large rainfall events, significantly enhanced the growth of desert vegetation, and desert ecosystems should be much more adaptive under likely future scenarios of increasing fall/winter precipitation and large rainfall events. These results highlight the inherent complexity in predicting how desert ecosystems will respond to future fluctuations in precipitation.

  20. Scientific uses and technical implementation of a variable gravity centrifuge on Space Station Freedom

    NASA Technical Reports Server (NTRS)

    Johnson, C. C.; Hargens, A. R.

    1990-01-01

    The potential need and science requirements for a centrifuge to be designed and flown on Space Station Freedom are discussed, with a focus on a design concept for a centrifuge developed at NASA Ames. Applications identified for the centrifuge include fundamental studies in which gravity is a variable under experimental control, the need to provide a 1-g control, attempts to discover the threshold value of gravitation force for psychological response, and an effort to determine the effects of intermittent hypergravity. Science requirements specify the largest possible diameter at approximately 2.5 m, gravity levels ranging from 0.01 to 2 g, a nominal ramp-up rate of 0.01 g/sec, and life support for plants and animals. Ground-based studies using rats and squirrel monkeys on small-diameter centrifuges have demonstrated that animals can adapt to centrifugation at gravity gradients higher than those normally used in ground-based hypergravity studies.

  1. Multiplicities and thermal runaway of current leads for superconducting magnets

    NASA Astrophysics Data System (ADS)

    Krikkis, Rizos N.

    2017-04-01

    The multiple solutions of conduction- and vapor-cooled copper leads modeling current delivery to a superconducting magnet have been numerically calculated. Both ideal convection and convection with a finite heat transfer coefficient for an imposed coolant mass flow rate have been considered. Because of the nonlinearities introduced by the temperature-dependent material properties, two solutions exist, one stable and one unstable, regardless of the cooling method. The limit points separating the stable from the unstable steady states form the blow-up threshold beyond which any further increase in the operating current results in a thermal runaway. An interesting finding is that the multiplicity persists even when the cold end temperature is raised above the liquid nitrogen temperature. The effect of various parameters such as the residual resistivity ratio, the overcurrent and the variable conductor cross section on the bifurcation structure, and their stabilization effect on the blow-up threshold, is also evaluated.

  2. Speech Perception in Older Hearing Impaired Listeners: Benefits of Perceptual Training

    PubMed Central

    Woods, David L.; Doss, Zoe; Herron, Timothy J.; Arbogast, Tanya; Younus, Masood; Ettlinger, Marc; Yund, E. William

    2015-01-01

    Hearing aids (HAs) only partially restore the ability of older hearing impaired (OHI) listeners to understand speech in noise, due in large part to persistent deficits in consonant identification. Here, we investigated whether adaptive perceptual training would improve consonant-identification in noise in sixteen aided OHI listeners who underwent 40 hours of computer-based training in their homes. Listeners identified 20 onset and 20 coda consonants in 9,600 consonant-vowel-consonant (CVC) syllables containing different vowels (/ɑ/, /i/, or /u/) and spoken by four different talkers. Consonants were presented at three consonant-specific signal-to-noise ratios (SNRs) spanning a 12 dB range. Noise levels were adjusted over training sessions based on d’ measures. Listeners were tested before and after training to measure (1) changes in consonant-identification thresholds using syllables spoken by familiar and unfamiliar talkers, and (2) sentence reception thresholds (SeRTs) using two different sentence tests. Consonant-identification thresholds improved gradually during training. Laboratory tests of d’ thresholds showed an average improvement of 9.1 dB, with 94% of listeners showing statistically significant training benefit. Training normalized consonant confusions and improved the thresholds of some consonants into the normal range. Benefits were equivalent for onset and coda consonants, syllables containing different vowels, and syllables presented at different SNRs. Greater training benefits were found for hard-to-identify consonants and for consonants spoken by familiar than unfamiliar talkers. SeRTs, tested with simple sentences, showed less elevation than consonant-identification thresholds prior to training and failed to show significant training benefit, although SeRT improvements did correlate with improvements in consonant thresholds. We argue that the lack of SeRT improvement reflects the dominant role of top-down semantic processing in processing simple sentences and that greater transfer of benefit would be evident in the comprehension of more unpredictable speech material. PMID:25730330
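
    The training above tracked consonant identification via d′ measures. As background only, the sketch below computes the standard yes/no signal-detection d′ from hit and false-alarm counts, with a small correction against extreme rates; the consonant-identification d′ used in the study is a multi-alternative variant and differs in detail.

```python
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """d' = z(hit rate) - z(false-alarm rate), with a log-linear correction so
    that rates of exactly 0 or 1 do not produce infinite z-scores."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return float(norm.ppf(hit_rate) - norm.ppf(fa_rate))
```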

  3. Understanding the relationship between the variability in agrometeorological indices and adaptation practices across the Canadian Prairies

    NASA Astrophysics Data System (ADS)

    Chipanshi, A.; Qi, D.; Zhang, Y.; Cherneski, P.

    2017-12-01

    In an attempt to understand how agriculture will adapt to the changing and variable climate, crop-based agrometeorological indices including the Effective Growing Degree Days (EGDDs), Growing Season Length (GSL), Heat waves, Water Demand (Precipitation - Evapotranspiration) and the Standardized Precipitation Evapotranspiration Index (SPEI) were analyzed in terms of frequency, duration and trend over a 63-year timeframe (1950 to 2012) for the Canadian Prairies and related to crop production. The heat-based indices (EGDD, GSL and Heat waves) increased over the analysis period due to an upward trend in the observed mean temperature. The change was most noticeable in the northern portion of the study area, where agriculture is limited by insufficient heat units under the present climate. Heat waves became more frequent in the southern parts of the study area (there were more days above the 30 °C threshold). Water availability, as assessed from water demand (P-PE) and the SPEI, trended downward, especially in Alberta and Saskatchewan. In spite of the increased severity and frequency of water deficits, there was a noticeable reduction in the variability of crop yield over time. This was attributed to the increased adaptive capacity that has been gained through the use of improved seed hybrids, fertilizer and fungicides, and the adoption of best management practices such as zero-till and direct seeding. After crop yields were de-trended to remove effects of technology, the cumulative precipitation during the growing season explained the majority of the variance in crop yield. This initial analysis has set the stage for analyzing the characteristics of agrometeorological indices under climate change scenarios and how accumulated precipitation during the growing season will affect crop yield and production.
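
    The heat-based indices above are accumulations over daily temperatures. The sketch below uses a common capped growing-degree-day formulation as a stand-in for the EGDD index; the base and upper temperatures shown (5 °C and 30 °C) are illustrative assumptions rather than the study's definitions.

```python
def growing_degree_days(tmin_c, tmax_c, base_c=5.0, upper_c=30.0):
    """Seasonal sum of daily degree days: the daily mean temperature (with the
    maximum capped at upper_c and the minimum floored at base_c) minus base_c."""
    total = 0.0
    for lo, hi in zip(tmin_c, tmax_c):
        tmean = (max(lo, base_c) + min(hi, upper_c)) / 2.0   # capped daily mean
        total += max(0.0, tmean - base_c)
    return total
```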

  4. Noninvasive Determination of Anaerobic Threshold Based on the Heart Rate Deflection Point in Water Cycling.

    PubMed

    Pinto, Stephanie S; Brasil, Roxana M; Alberton, Cristine L; Ferreira, Hector K; Bagatini, Natália C; Calatayud, Joaquin; Colado, Juan C

    2016-02-01

    This study compared heart rate (HR), oxygen uptake (VO2), percentage of maximal HR (%HRmax), percentage of maximal VO2 (%VO2max), and cadence (Cad) related to the anaerobic threshold (AT) during a water cycling maximal test between heart rate deflection point (HRDP) and ventilatory (VT) methods. In addition, the correlations between both methods were assessed for all variables. The test was performed by 27 men on a cycle ergometer in an aquatic environment. The protocol started at a Cad of 100 b · min⁻¹ for 3 minutes, with subsequent increments of 15 b · min⁻¹ every 2 minutes until exhaustion. A paired two-tailed Student's t-test was used to compare the variables between the HRDP and VT methods. The Pearson product-moment correlation test was used to correlate the same variables determined by the two methods. There was no difference in HR (166 ± 13 vs. 166 ± 13 b · min⁻¹), VO2 (38.56 ± 6.26 vs. 39.18 ± 6.13 ml · kg⁻¹ · min⁻¹), %HRmax (89.24 ± 3.84 vs. 89.52 ± 4.29%), %VO2max (70.44 ± 7.99 vs. 71.64 ± 8.32%), and Cad (174 ± 14 b · min⁻¹ vs. 171 ± 8 b · min⁻¹) related to AT between the HRDP and VT methods. Moreover, significant relationships were found between the methods to determine the AT for all variables analyzed (r = 0.57-0.97). The estimation of the HRDP may be a noninvasive and easy method to determine the AT, which could be used to adapt individualized training intensities to practitioners during water cycling classes.
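
    The statistical comparison above (a paired t-test plus a Pearson correlation between the two methods) is straightforward to reproduce. The sketch below runs it on synthetic per-subject values, since the study's data are not available here.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
hr_vt = rng.normal(166.0, 13.0, size=27)          # synthetic AT heart rates, VT method
hr_hrdp = hr_vt + rng.normal(0.0, 3.0, size=27)   # HRDP method, correlated with VT

t_stat, p_paired = stats.ttest_rel(hr_hrdp, hr_vt)   # paired two-tailed t-test
r, p_corr = stats.pearsonr(hr_hrdp, hr_vt)           # Pearson product-moment r
print(f"paired t: t={t_stat:.2f}, p={p_paired:.3f}; Pearson r={r:.2f}")
```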

  5. Unilateral Hearing Loss: Understanding Speech Recognition and Localization Variability - Implications for Cochlear Implant Candidacy

    PubMed Central

    Firszt, Jill B.; Reeder, Ruth M.; Holden, Laura K.

    2016-01-01

    Objectives At a minimum, unilateral hearing loss (UHL) impairs sound localization ability and understanding speech in noisy environments, particularly if the loss is severe to profound. Accompanying the numerous negative consequences of UHL is considerable unexplained individual variability in the magnitude of its effects. Identification of co-variables that affect outcome and contribute to variability in UHLs could augment counseling, treatment options, and rehabilitation. Cochlear implantation as a treatment for UHL is on the rise yet little is known about factors that could impact performance or whether there is a group at risk for poor cochlear implant outcomes when hearing is near-normal in one ear. The overall goal of our research is to investigate the range and source of variability in speech recognition in noise and localization among individuals with severe to profound UHL and thereby help determine factors relevant to decisions regarding cochlear implantation in this population. Design The present study evaluated adults with severe to profound UHL and adults with bilateral normal hearing. Measures included adaptive sentence understanding in diffuse restaurant noise, localization, roving-source speech recognition (words from 1 of 15 speakers in a 140° arc) and an adaptive speech-reception threshold psychoacoustic task with varied noise types and noise-source locations. There were three age-gender-matched groups: UHL (severe to profound hearing loss in one ear and normal hearing in the contralateral ear), normal hearing listening bilaterally, and normal hearing listening unilaterally. Results Although the normal-hearing-bilateral group scored significantly better and had less performance variability than UHLs on all measures, some UHL participants scored within the range of the normal-hearing-bilateral group on all measures. The normal-hearing participants listening unilaterally had better monosyllabic word understanding than UHLs for words presented on the blocked/deaf side but not the open/hearing side. In contrast, UHLs localized better than the normal hearing unilateral listeners for stimuli on the open/hearing side but not the blocked/deaf side. This suggests that UHLs had learned strategies for improved localization on the side of the intact ear. The UHL and unilateral normal hearing participant groups were not significantly different for speech-in-noise measures. UHL participants with childhood rather than recent hearing loss onset localized significantly better; however, these two groups did not differ for speech recognition in noise. Age at onset in UHL adults appears to affect localization ability differently than understanding speech in noise. Hearing thresholds were significantly correlated with speech recognition for UHL participants but not the other two groups. Conclusions Auditory abilities of UHLs varied widely and could be explained only in part by hearing threshold levels. Age at onset and length of hearing loss influenced performance on some, but not all measures. Results support the need for a revised and diverse set of clinical measures, including sound localization, understanding speech in varied environments and careful consideration of functional abilities as individuals with severe to profound UHL are being considered potential cochlear implant candidates. PMID:28067750

  6. Fuel pin cladding

    DOEpatents

    Vaidyanathan, Swaminathan; Adamson, Martyn G.

    1986-01-01

    An improved fuel pin cladding, particularly adapted for use in breeder reactors, consisting of composite tubing with austenitic steel on the outer portion of the thickness of the tube wall and with nickel and/or ferritic material on the inner portion of the thickness of the tube wall. The nickel forms a sacrificial barrier as it reacts with certain fission products thereby reducing fission product activity at the austenitic steel interface. The ferritic material forms a preventive barrier for the austenitic steel as it is immune to liquid metal embrittlement. The improved cladding permits the use of high density fuel which in turn leads to a better breeding ratio in breeder reactors, and will increase the threshold at which failure occurs during temperature transients.

  7. LDPC Codes with Minimum Distance Proportional to Block Size

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Jones, Christopher; Dolinar, Samuel; Thorpe, Jeremy

    2009-01-01

    Low-density parity-check (LDPC) codes characterized by minimum Hamming distances proportional to block sizes have been demonstrated. Like the codes mentioned in the immediately preceding article, the present codes are error-correcting codes suitable for use in a variety of wireless data-communication systems that include noisy channels. The previously mentioned codes have low decoding thresholds and reasonably low error floors. However, the minimum Hamming distances of those codes do not grow linearly with code-block sizes. Codes that have this minimum-distance property exhibit very low error floors. Examples of such codes include regular LDPC codes with variable degrees of at least 3. Unfortunately, the decoding thresholds of regular LDPC codes are high. Hence, there is a need for LDPC codes characterized by both low decoding thresholds and, in order to obtain acceptably low error floors, minimum Hamming distances that are proportional to code-block sizes. The present codes were developed to satisfy this need. The minimum Hamming distances of the present codes have been shown, through consideration of ensemble-average weight enumerators, to be proportional to code block sizes. As in the cases of irregular ensembles, the properties of these codes are sensitive to the proportion of degree-2 variable nodes. A code having too few such nodes tends to have an iterative decoding threshold that is far from the capacity threshold. A code having too many such nodes tends not to exhibit a minimum distance that is proportional to block size. Results of computational simulations have shown that the decoding thresholds of codes of the present type are lower than those of regular LDPC codes. Included in the simulations were a few examples from a family of codes characterized by rates ranging from low to high and by thresholds that adhere closely to their respective channel capacity thresholds; the simulation results from these examples showed that the codes in question have low error floors as well as low decoding thresholds. As an example, the illustration shows the protograph (which represents the blueprint for overall construction) of one proposed code family for code rates greater than or equal to 1/2. Any size LDPC code can be obtained by copying the protograph structure N times, then permuting the edges. The illustration also provides Field Programmable Gate Array (FPGA) hardware performance simulations for this code family. In addition, the illustration provides minimum signal-to-noise ratios (Eb/No) in decibels (decoding thresholds) to achieve zero error rates as the code block size goes to infinity for various code rates. In comparison with the codes mentioned in the preceding article, these codes have slightly higher decoding thresholds.

  8. The Random-Threshold Generalized Unfolding Model and Its Application of Computerized Adaptive Testing

    ERIC Educational Resources Information Center

    Wang, Wen-Chung; Liu, Chen-Wei; Wu, Shiu-Lien

    2013-01-01

    The random-threshold generalized unfolding model (RTGUM) was developed by treating the thresholds in the generalized unfolding model as random effects rather than fixed effects to account for the subjective nature of the selection of categories in Likert items. The parameters of the new model can be estimated with the JAGS (Just Another Gibbs…

  9. Steroid profiles of professional soccer players: an international comparative study.

    PubMed

    Strahm, E; Sottas, P-E; Schweizer, C; Saugy, M; Dvorak, J; Saudan, C

    2009-12-01

    Urinary steroid profiling is used in doping controls to detect testosterone abuse. A testosterone over epitestosterone (T/E) ratio exceeding 4.0 is considered suspicious of testosterone administration, irrespective of individual heterogeneous factors such as the athlete's ethnicity. A deletion polymorphism in the UGT2B17 gene was demonstrated to account for a significant part of the interindividual variability in the T/E between Caucasians and Asians. Here, the variability of urinary steroid profiles was examined in a widely heterogeneous cohort of professional soccer players. The steroid profile of 57 Africans, 32 Asians, 50 Caucasians and 32 Hispanics was determined by gas chromatography-mass spectrometry. Significant differences were observed between all ethnic groups. After estimation of the prevalence of the UGT2B17 deletion/deletion genotype (African: 22%; Asian: 81%; Caucasian: 10%; Hispanic: 7%), ethnic-specific thresholds were developed for a specificity of 99% for the T/E (African: 5.6; Asian: 3.8; Caucasian: 5.7; Hispanic: 5.8). Finally, another polymorphism could be hypothesised in Asians based on the specific concentration ratio of 5alpha-/5beta-androstane-3alpha,17beta-diol in urine. These results demonstrate that a single, non-specific threshold for evidencing testosterone misuse is not fit for purpose. An athlete's endocrinological passport consisting of a longitudinal follow-up together with the ethnicity and/or the genotype would strongly enhance the detection of testosterone abuse. Finally, additional genotyping studies should be undertaken to determine whether the remaining unexplained disparities have an environmental or a genetic origin.
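
    As a toy illustration of how the ethnic-specific decision limits quoted above would change a screening call relative to the traditional 4.0 cut-off, the sketch below hard-codes the reported thresholds; it is not a doping-control tool, and the function name and sample values are hypothetical.

```python
# Ethnic-specific T/E decision limits quoted in the abstract (99% specificity),
# compared against the traditional universal 4.0 cut-off. Illustrative only;
# real anti-doping evaluation is longitudinal (athlete passport), not a single cut-off.
TE_THRESHOLDS = {"African": 5.6, "Asian": 3.8, "Caucasian": 5.7, "Hispanic": 5.8}
UNIVERSAL_THRESHOLD = 4.0

def flag_sample(te_ratio: float, ethnicity: str) -> dict:
    """Return suspicion flags under the universal and ethnic-specific rules."""
    specific = TE_THRESHOLDS.get(ethnicity)
    return {
        "universal_suspicious": te_ratio > UNIVERSAL_THRESHOLD,
        "specific_suspicious": specific is not None and te_ratio > specific,
    }

print(flag_sample(4.5, "Asian"))      # suspicious under both rules
print(flag_sample(4.5, "Caucasian"))  # only the universal rule flags it
```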

  10. A Frequency-Domain Adaptive Matched Filter for Active Sonar Detection.

    PubMed

    Zhao, Zhishan; Zhao, Anbang; Hui, Juan; Hou, Baochun; Sotudeh, Reza; Niu, Fang

    2017-07-04

    The matched filter (MF) is the classical detector in active sonar and radar, and it is the optimal processor under ideal conditions. Aiming at the problem of active sonar detection, we propose a frequency-domain adaptive matched filter (FDAMF) with the use of a frequency-domain adaptive line enhancer (ALE). The FDAMF is an improved MF. In the simulations in this paper, the signal to noise ratio (SNR) gain of the FDAMF is about 18.6 dB higher than that of the classical MF when the input SNR is -10 dB. In order to improve the performance of the FDAMF with a low input SNR, we propose a pre-processing method, which is called frequency-domain time reversal convolution and interference suppression (TRC-IS). Compared with the classical MF, the FDAMF combined with the TRC-IS method obtains higher SNR gain, a lower detection threshold, and a better receiver operating characteristic (ROC) in the simulations in this paper. The simulation results show that the FDAMF has higher processing gain and better detection performance than the classical MF under ideal conditions. The experimental results indicate that the FDAMF does improve the performance of the MF and can adapt to actual interference to some extent. In addition, the TRC-IS preprocessing method works well in an actual noisy ocean environment.
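
    For context on the baseline the FDAMF improves upon, the sketch below implements a plain time-domain matched-filter detector with a fixed threshold. The frequency-domain adaptive line enhancer and the TRC-IS preprocessing described in the paper are not reproduced, and the replica, noise level, and threshold are hypothetical.

```python
import numpy as np

def matched_filter_detect(received, replica, threshold):
    """Classical matched-filter detector for an active ping.

    Correlates the received series with the known transmitted replica,
    normalises by the replica energy and the background level, and compares
    the peak statistic to a fixed threshold.
    """
    corr = np.correlate(received, replica, mode="valid")
    stat = np.abs(corr) / (np.linalg.norm(replica) * np.std(received))
    peak_lag = int(np.argmax(stat))
    return stat[peak_lag] >= threshold, peak_lag, float(stat[peak_lag])

fs = 8000
t = np.arange(0, 0.05, 1 / fs)
ping = np.sin(2 * np.pi * 1500 * t)                 # known replica (400 samples)
rng = np.random.default_rng(7)
rx = rng.standard_normal(4000)                      # unit-variance background noise
rx[1000:1000 + ping.size] += 0.5 * ping             # weak echo starting at sample 1000
print(matched_filter_detect(rx, ping, threshold=4.0))
# expected: detection flag True, peak lag near 1000, statistic well above the noise floor
```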

  11. Derivation of groundwater threshold values for analysis of impacts predicted at potential carbon sequestration sites

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Last, G. V.; Murray, C. J.; Bott, Y.

    2016-06-01

    The U.S. Department of Energy’s (DOE’s) National Risk Assessment Partnership (NRAP) Project is developing reduced-order models to evaluate potential impacts to groundwater quality due to carbon dioxide (CO2) or brine leakage, should it occur from deep CO2 storage reservoirs. These efforts targeted two classes of aquifer – an unconfined fractured carbonate aquifer based on the Edwards Aquifer in Texas, and a confined alluvium aquifer based on the High Plains Aquifer in Kansas. Hypothetical leakage scenarios focus on wellbores as the most likely conduits from the storage reservoir to an underground source of drinking water (USDW). To facilitate evaluation of potential degradation of the USDWs, threshold values, below which there would be no predicted impacts, were determined for each of these two aquifer systems. These threshold values were calculated using an interwell approach for determining background groundwater concentrations that is an adaptation of methods described in the U.S. Environmental Protection Agency’s Unified Guidance for Statistical Analysis of Groundwater Monitoring Data at RCRA Facilities. Results demonstrate the importance of establishing baseline groundwater quality conditions that capture the spatial and temporal variability of the USDWs prior to CO2 injection and storage.
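
    The abstract does not reproduce the statistical recipe, so the following should be read only as a generic sketch of an interwell background threshold: a normal-theory upper tolerance limit computed from pooled background-well concentrations, in the spirit of, but not identical to, the cited EPA Unified Guidance procedures. The data and parameter choices are hypothetical.

```python
import numpy as np
from scipy import stats

def interwell_background_threshold(background_conc, coverage=0.95, confidence=0.95):
    """Upper tolerance limit from pooled background (interwell) data.

    A generic normal-theory one-sided tolerance limit: with the stated
    confidence it bounds the chosen coverage fraction of the background
    population. It illustrates the idea of a 'no predicted impact' threshold,
    not the exact EPA Unified Guidance procedure.
    """
    x = np.asarray(background_conc, dtype=float)
    n = x.size
    # Exact one-sided tolerance factor via the noncentral t distribution.
    k = stats.nct.ppf(confidence, df=n - 1,
                      nc=stats.norm.ppf(coverage) * np.sqrt(n)) / np.sqrt(n)
    return x.mean() + k * x.std(ddof=1)

background = [12.1, 10.8, 13.4, 11.9, 12.6, 10.2, 13.0, 12.3]   # hypothetical mg/L
print(round(interwell_background_threshold(background), 2))
```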

  12. Human Vulnerability to Climate Variability in the Sahel: Farmers' Adaptation Strategies in Northern Burkina Faso

    NASA Astrophysics Data System (ADS)

    Barbier, Bruno; Yacouba, Hamma; Karambiri, Harouna; Zoromé, Malick; Somé, Blaise

    2009-05-01

    In this study, the authors investigate farmers’ vulnerability to climate variability and evaluate local adoption of technology and farmers’ perceptions of adaptation strategies to rainfall variability and policies. A survey was conducted in a community in northern Burkina Faso following the crop failure of 2004. In 2006, following a better harvest, another survey was conducted to compare farmers’ actions and reactions during two contrasted rainy seasons. The results confirm that farmers from this community have substantially changed their practices during the last few decades. They have adopted a wide range of techniques that are intended to simultaneously increase crop yield and reduce yield variability. Micro water harvesting (Zaï) techniques have been widely adopted (41%), and a majority of fields have been improved with stone lines (60%). Hay (48%) and sorghum residues are increasingly stored to feed animals during the dry season, making bull and sheep fattening now a common practice. Dry season vegetable production also involves a majority of the population (60%). According to farmers, most of the new techniques have been adopted because of growing land scarcity and new market opportunities, rather than because of climate variability. Population pressure has reached a critical threshold, while land scarcity, declining soil fertility and reduced animal mobility have pushed farmers to intensify agricultural production. These techniques reduce farmers’ dependency on rainfall but are still insufficient to reduce poverty and vulnerability. Thirty-nine percent of the population remains vulnerable after a good rainy season. Despite farmers’ desire to remain in their own communities, migrations are likely to remain a major source of regular income and form of recourse in the event of droughts.

  13. Computer-aided detection of clustered microcalcifications in multiscale bilateral filtering regularized reconstructed digital breast tomosynthesis volume

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Samala, Ravi K., E-mail: rsamala@umich.edu; Chan, Heang-Ping; Lu, Yao

    Purpose: Develop a computer-aided detection (CADe) system for clustered microcalcifications in digital breast tomosynthesis (DBT) volume enhanced with multiscale bilateral filtering (MSBF) regularization. Methods: With Institutional Review Board approval and written informed consent, two-view DBT of 154 breasts, of which 116 had biopsy-proven microcalcification (MC) clusters and 38 were free of MCs, was imaged with a General Electric GEN2 prototype DBT system. The DBT volumes were reconstructed with MSBF-regularized simultaneous algebraic reconstruction technique (SART) that was designed to enhance MCs and reduce background noise while preserving the quality of other tissue structures. The contrast-to-noise ratio (CNR) of MCs was further improved with enhancement-modulated calcification response (EMCR) preprocessing, which combined multiscale Hessian response to enhance MCs by shape and bandpass filtering to remove the low-frequency structured background. MC candidates were then located in the EMCR volume using iterative thresholding and segmented by adaptive region growing. Two sets of potential MC objects, cluster centroid objects and MC seed objects, were generated and the CNR of each object was calculated. The number of candidates in each set was controlled based on the breast volume. Dynamic clustering around the centroid objects grouped the MC candidates to form clusters. Adaptive criteria were designed to reduce false positive (FP) clusters based on the size, CNR values and the number of MCs in the cluster, cluster shape, and cluster-based maximum intensity projection. Free-response receiver operating characteristic (FROC) and jackknife alternative FROC (JAFROC) analyses were used to assess the performance and compare with that of a previous study. Results: Unpaired two-tailed t-test showed a significant increase (p < 0.0001) in the ratio of CNRs for MCs with and without MSBF regularization compared to similar ratios for FPs. For view-based detection, a sensitivity of 85% was achieved at an FP rate of 2.16 per DBT volume. For case-based detection, a sensitivity of 85% was achieved at an FP rate of 0.85 per DBT volume. JAFROC analysis showed a significant improvement in the performance of the current CADe system compared to that of our previous system (p = 0.003). Conclusions: MSBF-regularized SART reconstruction enhances MCs. The enhancement in the signals, in combination with properly designed adaptive threshold criteria, effective MC feature analysis, and false positive reduction techniques, leads to a significant improvement in the detection of clustered MCs in DBT.

  14. Impairment of retinal increment thresholds in Huntington's disease.

    PubMed

    Paulus, W; Schwarz, G; Werner, A; Lange, H; Bayer, A; Hofschuster, M; Müller, N; Zrenner, E

    1993-10-01

    We have investigated detection thresholds for a foveal blue test light using a Maxwellian view system in 61 normal subjects, 19 patients with Huntington's chorea, 14 patients with Tourette's syndrome, and 20 patients with schizophrenia. Ten measurements were made: The blue test light (1 degree diameter, 500 msec duration) was presented either superimposed on a yellow adaptation field (5 degree diameter) or 500 msec after switching off this field (transient tritanopia effect). In both cases five different background intensities were presented. The only abnormality found was in patients with Huntington's chorea. During adaptation these patients' thresholds are significantly higher than normal (p < 0.005). No change was found in the transient tritanopia effect. Huntington's disease causes degeneration of several different transmitter systems in the brain. Increment threshold testing allows for noninvasive investigation of patients and confirms the involvement of the retina in the degenerative process in Huntington's chorea.

  15. Adaptive gain and filtering circuit for a sound reproduction system

    NASA Technical Reports Server (NTRS)

    Engebretson, A. Maynard (Inventor); O'Connell, Michael P. (Inventor)

    1998-01-01

    Adaptive compressive gain and level dependent spectral shaping circuitry for a hearing aid include a microphone to produce an input signal and a plurality of channels connected to a common circuit output. Each channel has a preset frequency response. Each channel includes a filter with a preset frequency response to receive the input signal and to produce a filtered signal, a channel amplifier to amplify the filtered signal to produce a channel output signal, a threshold register to establish a channel threshold level, and a gain circuit. The gain circuit increases the gain of the channel amplifier when the channel output signal falls below the channel threshold level and decreases the gain of the channel amplifier when the channel output signal rises above the channel threshold level. A transducer produces sound in response to the signal passed by the common circuit output.
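
    A software caricature of the per-channel gain rule in the patent abstract is given below: the filter bank, threshold register, and transducer are omitted, the level comparison is done per sample rather than on a smoothed envelope, and all constants are invented for the example.

```python
import numpy as np

def adaptive_channel_gain(x, threshold, step=0.01):
    """Toy per-channel compressive gain loop.

    Mirrors the behaviour described in the abstract: the gain is nudged up
    while the channel output is below its threshold and nudged down while it
    is above. Filtering, registers, and hardware details are omitted.
    """
    gain = 1.0
    y = np.zeros_like(x)
    for n, sample in enumerate(x):
        out = gain * sample
        if abs(out) < threshold:
            gain *= (1.0 + step)     # output too quiet: raise gain
        else:
            gain *= (1.0 - step)     # output too loud: lower gain
        y[n] = out
    return y

fs = 16000
t = np.arange(0, 0.5, 1 / fs)
x = 0.05 * np.sin(2 * np.pi * 1000 * t)                  # quiet 1 kHz tone
y = adaptive_channel_gain(x, threshold=0.2)
print(round(float(np.sqrt(np.mean(y[-400:] ** 2))), 3))  # RMS level settles near the threshold
```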

  16. Using scan statistics for congenital anomalies surveillance: the EUROCAT methodology.

    PubMed

    Teljeur, Conor; Kelly, Alan; Loane, Maria; Densem, James; Dolk, Helen

    2015-11-01

    Scan statistics have been used extensively to identify temporal clusters of health events. We describe the temporal cluster detection methodology adopted by the EUROCAT (European Surveillance of Congenital Anomalies) monitoring system. Since 2001, EUROCAT has implemented a variable window width scan statistic for detecting unusual temporal aggregations of congenital anomaly cases. The scan windows are based on numbers of cases rather than being defined by time. The methodology is embedded in the EUROCAT Central Database for annual application to centrally held registry data. The methodology was incrementally adapted to improve its utility and to address statistical issues. Simulation exercises were used to determine the power of the methodology to identify periods of raised risk (of 1-18 months). In order to operationalize the scan methodology, a number of adaptations were needed, including: estimating date of conception as the unit of time; deciding the maximum length (in time) and recency of clusters of interest; reporting of multiple and overlapping significant clusters; replacing the Monte Carlo simulation with a lookup table to reduce computation time; and placing a threshold on underlying population change and estimating the false positive rate by simulation. Exploration of power found that raised risk periods lasting 1 month are unlikely to be detected except when the relative risk and case counts are high. The variable window width scan statistic is a useful tool for the surveillance of congenital anomalies. Numerous adaptations have improved the utility of the original methodology in the context of temporal cluster detection in congenital anomalies.

  17. Clinical evaluation of an inspiratory impedance threshold device during standard cardiopulmonary resuscitation in patients with out-of-hospital cardiac arrest.

    PubMed

    Aufderheide, Tom P; Pirrallo, Ronald G; Provo, Terry A; Lurie, Keith G

    2005-04-01

    To determine whether an impedance threshold device, designed to enhance circulation, would increase acute resuscitation rates for patients in cardiac arrest receiving conventional manual cardiopulmonary resuscitation. Prospective, randomized, double-blind, intention-to-treat. Out-of-hospital trial conducted in the Milwaukee, WI, emergency medical services system. Adults in cardiac arrest of presumed cardiac etiology. On arrival of advanced life support, patients were treated with standard cardiopulmonary resuscitation combined with either an active or a sham impedance threshold device. We measured safety and efficacy of the impedance threshold device; the primary end point was intensive care unit admission. Statistical analyses performed included the chi-square test and multivariate regression analysis. One hundred sixteen patients were treated with a sham impedance threshold device, and 114 patients were treated with an active impedance threshold device. Overall intensive care unit admission rates were 17% with the sham device vs. 25% in the active impedance threshold device (p = .13; odds ratio, 1.64; 95% confidence interval, 0.87, 3.10). Patients in the subgroup presenting with pulseless electrical activity had intensive care unit admission and 24-hr survival rates of 20% and 12% in sham (n = 25) vs. 52% and 30% in active impedance threshold device groups (n = 27) (p = .018, odds ratio, 4.31; 95% confidence interval, 1.28, 14.5, and p = .12, odds ratio, 3.09; 95% confidence interval, 0.74, 13.0, respectively). A post hoc analysis of patients with pulseless electrical activity at any time during the cardiac arrest revealed that intensive care unit and 24-hr survival rates were 20% and 11% in the sham (n = 56) vs. 41% and 27% in the active impedance threshold device groups (n = 49) (p = .018, odds ratio, 2.82; 95% confidence interval, 1.19, 6.67, and p = .037, odds ratio, 3.01; 95% confidence interval, 1.07, 8.96, respectively). There were no statistically significant differences in outcomes for patients presenting in ventricular fibrillation and asystole. Adverse event and complication rates were also similar. During this first clinical trial of the impedance threshold device during standard cardiopulmonary resuscitation, use of the new device more than doubled short-term survival rates in patients presenting with pulseless electrical activity. A larger clinical trial is underway to determine the potential longer term benefits of the impedance threshold device in cardiac arrest.

  18. Disturbances of rod threshold forced by briefly exposed luminous lines, edges, disks and annuli

    PubMed Central

    Hallett, P. E.

    1971-01-01

    1. When the dark-adapted eye is exposed to a brief duration (2 msec) luminous line the resulting threshold disturbance is much sharper (decay constant of ca. 10 min arc) than would be expected in a system which is known to integrate the effects of light quanta over a distance of 1 deg or so. 2. When the forcing input is a pair of brief duration parallel luminous lines the threshold disturbance falls off sharply at the outsides of the pattern but on the inside a considerable spread of threshold-raising effects may occur unless the lines are sufficiently far apart. 3. The threshold disturbance due to a briefly exposed edge shows an overshoot reminiscent of 'lateral inhibition'. 4. If the threshold is measured at the centre of a black disk presented in a briefly lit surround then (a) the dependence of threshold on time interval between test and surround suggests that the threshold elevation is due to a non-optical effect which is not 'metacontrast'; (b) the dependence of threshold on black disk diameter is consistent with the notion that the spatial threshold disturbance is progressively sharpened as the separation of luminous edges increases. 5. If the threshold is measured at the centre of briefly exposed luminous disks of various diameters one obtains the same evidence for an 'antagonistic centre-surround' system as that produced by other workers (e.g. Westheimer, 1965) for the steadily light-adapted eye. 6. The previous paper (Hallett, 1971) showed that brief illumination of the otherwise dark-adapted eye can rapidly and substantially change the extent of spatial integration. The present paper shows that brief illumination leads to substantial 'inhibitory' effects. 7. Earlier approaches are reviewed: (a) the linear system signal/noise theory of the time course of threshold disturbances (Hallett, 1969b) is illustrated by the case of a small subtense flash superimposed on a large oscillatory background; (b) the spatial weighting functions of some other authors are given. 8. A possible non-linear model is briefly described: the line weighting function for the receptive field centre is taken to be a single Gaussian, as is customary, but the line weighting function for the inhibitory surround is bimodal. PMID:5145728

  19. Threshold detection in an on-off binary communications channel with atmospheric scintillation

    NASA Technical Reports Server (NTRS)

    Webb, W. E.; Marino, J. T., Jr.

    1974-01-01

    The optimum detection threshold in an on-off binary optical communications system operating in the presence of atmospheric turbulence was investigated assuming a Poisson detection process and log normal scintillation. The dependence of the probability of bit error on log amplitude variance and received signal strength was analyzed and semi-empirical relationships to predict the optimum detection threshold derived. On the basis of this analysis a piecewise linear model for an adaptive threshold detection system is presented. Bit error probabilities for non-optimum threshold detection systems were also investigated.
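
    The core calculation behind such a system can be sketched numerically: for an on-off keyed photon-counting channel with Poisson statistics, sweep the count threshold and keep the value that minimizes the bit-error probability. The mean counts below are hypothetical, and the log-normal scintillation analyzed in the report is only noted in a comment rather than modeled.

```python
import numpy as np
from scipy.stats import poisson

def bit_error_probability(k, n_signal, n_noise):
    """P(error) for on-off keyed photon counting with count threshold k.

    'Off' slots carry a mean of n_noise counts, 'on' slots carry
    n_signal + n_noise; ones and zeros are assumed equally likely.
    """
    p_false_alarm = poisson.sf(k - 1, n_noise)          # count >= k given 'off'
    p_miss = poisson.cdf(k - 1, n_signal + n_noise)     # count <  k given 'on'
    return 0.5 * (p_false_alarm + p_miss)

def optimum_threshold(n_signal, n_noise, k_max=200):
    ks = np.arange(1, k_max)
    errors = [bit_error_probability(k, n_signal, n_noise) for k in ks]
    best = int(np.argmin(errors))
    return int(ks[best]), float(errors[best])

# Hypothetical mean counts; log-normal scintillation would make n_signal
# fluctuate from slot to slot, pulling the optimum threshold around and
# motivating the adaptive (piecewise linear) threshold model in the report.
print(optimum_threshold(n_signal=50, n_noise=10))
```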

  20. Threshold detection in an on-off binary communications channel with atmospheric scintillation

    NASA Technical Reports Server (NTRS)

    Webb, W. E.

    1975-01-01

    The optimum detection threshold in an on-off binary optical communications system operating in the presence of atmospheric turbulence was investigated assuming a Poisson detection process and log normal scintillation. The dependence of the probability of bit error on log amplitude variance and received signal strength was analyzed and semi-empirical relationships to predict the optimum detection threshold derived. On the basis of this analysis a piecewise linear model for an adaptive threshold detection system is presented. The bit error probabilities for nonoptimum threshold detection systems were also investigated.

  1. Spike-Threshold Variability Originated from Separatrix-Crossing in Neuronal Dynamics

    PubMed Central

    Wang, Longfei; Wang, Hengtong; Yu, Lianchun; Chen, Yong

    2016-01-01

    The threshold voltage for action potential generation is a key regulator of neuronal signal processing, yet the mechanism of its dynamic variation is still not well described. In this paper, we propose that threshold phenomena can be classified as parameter thresholds and state thresholds. Voltage thresholds, which belong to the state thresholds, are determined by the ‘general separatrix’ in state space. We demonstrate that the separatrix generally exists in the state space of neuron models. The general form of the separatrix is assumed to be a function of both states and stimuli, and the previously assumed equation for the evolution of the threshold over time is naturally deduced from the separatrix. In terms of neuronal dynamics, the threshold voltage variation, which is affected by different stimuli, is determined by crossing the separatrix at different points in state space. We suggest that the separatrix-crossing mechanism in state space is the intrinsic dynamic mechanism for threshold voltages and post-stimulus threshold phenomena. These proposals are also systematically verified in example models, three of which have analytic separatrices and one of which is the classic Hodgkin-Huxley model. The separatrix-crossing framework provides an overview of the neuronal threshold and will facilitate understanding of the nature of threshold variability. PMID:27546614

  2. Spike-Threshold Variability Originated from Separatrix-Crossing in Neuronal Dynamics.

    PubMed

    Wang, Longfei; Wang, Hengtong; Yu, Lianchun; Chen, Yong

    2016-08-22

    The threshold voltage for action potential generation is a key regulator of neuronal signal processing, yet the mechanism of its dynamic variation is still not well described. In this paper, we propose that threshold phenomena can be classified as parameter thresholds and state thresholds. Voltage thresholds, which belong to the state thresholds, are determined by the 'general separatrix' in state space. We demonstrate that the separatrix generally exists in the state space of neuron models. The general form of the separatrix is assumed to be a function of both states and stimuli, and the previously assumed equation for the evolution of the threshold over time is naturally deduced from the separatrix. In terms of neuronal dynamics, the threshold voltage variation, which is affected by different stimuli, is determined by crossing the separatrix at different points in state space. We suggest that the separatrix-crossing mechanism in state space is the intrinsic dynamic mechanism for threshold voltages and post-stimulus threshold phenomena. These proposals are also systematically verified in example models, three of which have analytic separatrices and one of which is the classic Hodgkin-Huxley model. The separatrix-crossing framework provides an overview of the neuronal threshold and will facilitate understanding of the nature of threshold variability.

  3. Variability of argon laser-induced sensory and pain thresholds on human oral mucosa and skin.

    PubMed Central

    Svensson, P.; Bjerring, P.; Arendt-Nielsen, L.; Kaaber, S.

    1991-01-01

    The variability of laser-induced pain perception on human oral mucosa and hairy skin was investigated in order to establish a new method for evaluation of pain in the orofacial region. A high-energy argon laser was used for experimental pain stimulation, and sensory and pain thresholds were determined. The intra-individual coefficients of variation for oral thresholds were comparable to cutaneous thresholds. However, inter-individual variation was smaller for oral thresholds, which could be due to larger variation in cutaneous optical properties. The short-term and 24-hr changes in thresholds on both surfaces were less than 9%. The results indicate that habituation to laser thresholds may account for part of the intra-individual variation observed. However, the subjective ratings of the intensity of the laser stimuli were constant. Thus, oral thresholds may, like cutaneous thresholds, be used for assessment and quantification of analgesic efficacies and to investigate various pain conditions. PMID:1814248

  4. Age-associated loss of selectivity in human olfactory sensory neurons

    PubMed Central

    Rawson, Nancy E.; Gomez, George; Cowart, Beverly J.; Kriete, Andres; Pribitkin, Edmund; Restrepo, Diego

    2011-01-01

    We report a cross-sectional study of olfactory impairment with age based on both odorant-stimulated responses of human olfactory sensory neurons (OSNs) and tests of olfactory threshold sensitivity. A total of 621 OSNs from 440 subjects in two age groups of younger (≤45 years) and older (≥60 years) subjects were investigated using fluorescence intensity ratio fura-2 imaging. OSNs were tested for responses to two odorant mixtures, as well as to subsets of and individual odors in those mixtures. Whereas cells from younger donors were highly selective in the odorants to which they responded, cells from older donors were more likely to respond to multiple odor stimuli, despite a loss in these subjects’ absolute olfactory sensitivity, suggesting a loss of specificity. This degradation in peripheral cellular specificity may impact odor discrimination and olfactory adaptation in the elderly. It is also possible that chronic adaptation as a result of reduced specificity contributes to observed declines in absolute sensitivity. PMID:22074806

  5. Addressing the limits to adaptation across four damage--response systems

    EPA Science Inventory

    Our ability to adapt to climate change is not boundless, and previous modeling shows that capacity limited adaptation will play a policy-significant role in future decisions about climate change. These limits are delineated by capacity thresholds, after which climate damages beg...

  6. Investigation of Adaptive-threshold Approaches for Determining Area-Time Integrals from Satellite Infrared Data to Estimate Convective Rain Volumes

    NASA Technical Reports Server (NTRS)

    Smith, Paul L.; VonderHaar, Thomas H.

    1996-01-01

    The principal goal of this project is to establish relationships that would allow application of area-time integral (ATI) calculations based upon satellite data to estimate rainfall volumes. The research is being carried out as a collaborative effort between the two participating organizations, with the satellite data analysis to determine values for the ATIs being done primarily by the STC-METSAT scientists and the associated radar data analysis to determine the 'ground-truth' rainfall estimates being done primarily at the South Dakota School of Mines and Technology (SDSM&T). Synthesis of the two separate kinds of data and investigation of the resulting rainfall-versus-ATI relationships is then carried out jointly. The research has been pursued using two different approaches, which for convenience can be designated as the 'fixed-threshold approach' and the 'adaptive-threshold approach'. In the former, an attempt is made to determine a single temperature threshold in the satellite infrared data that would yield ATI values for identifiable cloud clusters which are closely related to the corresponding rainfall amounts as determined by radar. Work on the second, or 'adaptive-threshold', approach for determining the satellite ATI values has explored two avenues: (1) the first attempt involved choosing IR thresholds to match the satellite ATI values with ones separately calculated from the radar data on a case-by-case basis; and (2) the second involved a straightforward screening analysis to determine the (fixed) offset that would lead to the strongest correlation and lowest standard error of estimate in the relationship between the satellite ATI values and the corresponding rainfall volumes.
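
    The fixed-threshold version of the ATI calculation is simple enough to sketch directly; the adaptive-threshold variants discussed above amount to re-choosing the brightness-temperature threshold case by case. The scenes, threshold, pixel size, and time step below are all hypothetical.

```python
import numpy as np

def area_time_integral(ir_scenes, threshold_k, pixel_area_km2, dt_hours):
    """Area-time integral (ATI) of a cloud cluster from IR imagery.

    For each scene the cluster area is the number of pixels colder than the
    brightness-temperature threshold times the pixel area; the ATI is that
    area summed over the scene interval (km^2 h).
    """
    areas = [(scene < threshold_k).sum() * pixel_area_km2 for scene in ir_scenes]
    return float(np.sum(areas) * dt_hours)

# Three hypothetical half-hourly IR scenes (brightness temperature, kelvin).
rng = np.random.default_rng(1)
scenes = [235.0 + 15.0 * rng.standard_normal((50, 50)) for _ in range(3)]
print(round(area_time_integral(scenes, threshold_k=235.0,
                               pixel_area_km2=16.0, dt_hours=0.5), 1))
```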

  7. Effect of 8 Weeks Soccer Training on Health and Physical Performance in Untrained Women

    PubMed Central

    Ortiz, Jaelson G.; da Silva, Juliano F.; Carminatti, Lorival J.; Guglielmo, Luiz G.A.; Diefenthaeler, Fernando

    2018-01-01

    This study aims to analyze the physiological, neuromuscular, and biochemical responses in untrained women after eight weeks of regular participation in small-sided soccer games compared to aerobic training. Twenty-seven healthy untrained women were divided into two groups [soccer group (SG = 17) and running group (RG = 10)]. Both groups trained three times per week for eight weeks. The variables measured in this study were maximal oxygen uptake (VO2max), relative velocity at VO2max (vVO2max), peak velocity, relative intensity at lactate threshold (vLT), relative intensity at onset of blood lactate accumulation (vOBLA), peak force, total cholesterol, HDL, LDL, triglycerides, and cholesterol ratio (LDL/HDL). VO2max, vLT, and vOBLA increased significantly in both groups (12.8 and 16.7%, 11.1 and 15.3%, 11.6 and 19.8%, in SG and RG respectively). However, knee extensor peak isometric strength and triglyceride, total cholesterol, LDL, and HDL levels did not differ after eight weeks of training in either group. On the other hand, the LDL/HDL ratio was significantly reduced in both groups. In conclusion, eight weeks of regular participation in small-sided soccer games was sufficient to increase aerobic performance and promote health benefits similar to those of aerobic training in untrained adult women. Key points: Regular participation in small-sided soccer games increases aerobic performance and promotes health benefits similar to those of aerobic training in untrained women. Eight weeks of soccer training is enough to promote positive physiological and biochemical adaptations in untrained women. Small-sided soccer games have the potential to be more pleasurable than, and as effective among women as, other modalities such as running and cycling. PMID:29535574

  8. What Deters Students from Studying Abroad? Evidence from Four European Countries and Its Implications for Higher Education Policy

    ERIC Educational Resources Information Center

    Netz, Nicolai

    2015-01-01

    This study examines factors that deter students in Austria, Germany, Switzerland and the Netherlands from studying abroad. Using an adaptation of the Rubicon model of action phases, the path to gaining study abroad experience is conceptualised as a process involving two thresholds: the decision threshold and the realisation threshold. Theoretical…

  9. Cost-effectiveness thresholds: pros and cons.

    PubMed

    Bertram, Melanie Y; Lauer, Jeremy A; De Joncheere, Kees; Edejer, Tessa; Hutubessy, Raymond; Kieny, Marie-Paule; Hill, Suzanne R

    2016-12-01

    Cost-effectiveness analysis is used to compare the costs and outcomes of alternative policy options. Each resulting cost-effectiveness ratio represents the magnitude of additional health gained per additional unit of resources spent. Cost-effectiveness thresholds allow cost-effectiveness ratios that represent good or very good value for money to be identified. In 2001, the World Health Organization's Commission on Macroeconomics in Health suggested cost-effectiveness thresholds based on multiples of a country's per-capita gross domestic product (GDP). In some contexts, in choosing which health interventions to fund and which not to fund, these thresholds have been used as decision rules. However, experience with the use of such GDP-based thresholds in decision-making processes at country level shows them to lack country specificity and this - in addition to uncertainty in the modelled cost-effectiveness ratios - can lead to the wrong decision on how to spend health-care resources. Cost-effectiveness information should be used alongside other considerations - e.g. budget impact and feasibility considerations - in a transparent decision-making process, rather than in isolation based on a single threshold value. Although cost-effectiveness ratios are undoubtedly informative in assessing value for money, countries should be encouraged to develop a context-specific process for decision-making that is supported by legislation, has stakeholder buy-in, for example the involvement of civil society organizations and patient groups, and is transparent, consistent and fair.

  10. Testing for a Debt-Threshold Effect on Output Growth.

    PubMed

    Lee, Sokbae; Park, Hyunmin; Seo, Myung Hwan; Shin, Youngki

    2017-12-01

    Using the Reinhart-Rogoff dataset, we find a debt threshold not around 90 per cent but around 30 per cent, above which the median real gross domestic product (GDP) growth falls abruptly. Our work is the first to formally test for threshold effects in the relationship between public debt and median real GDP growth. The null hypothesis of no threshold effect is rejected at the 5 per cent significance level for most cases. While we find no evidence of a threshold around 90 per cent, our findings from the post-war sample suggest that the debt threshold for economic growth may exist around a relatively small debt-to-GDP ratio of 30 per cent. Furthermore, countries with debt-to-GDP ratios above 30 per cent have GDP growth that is 1 percentage point lower at the median.

  11. Testing for a Debt‐Threshold Effect on Output Growth†

    PubMed Central

    Lee, Sokbae; Park, Hyunmin; Seo, Myung Hwan; Shin, Youngki

    2017-01-01

    Using the Reinhart–Rogoff dataset, we find a debt threshold not around 90 per cent but around 30 per cent, above which the median real gross domestic product (GDP) growth falls abruptly. Our work is the first to formally test for threshold effects in the relationship between public debt and median real GDP growth. The null hypothesis of no threshold effect is rejected at the 5 per cent significance level for most cases. While we find no evidence of a threshold around 90 per cent, our findings from the post‐war sample suggest that the debt threshold for economic growth may exist around a relatively small debt‐to‐GDP ratio of 30 per cent. Furthermore, countries with debt‐to‐GDP ratios above 30 per cent have GDP growth that is 1 percentage point lower at the median. PMID:29263562

  12. Step Detection Robust against the Dynamics of Smartphones

    PubMed Central

    Lee, Hwan-hee; Choi, Suji; Lee, Myeong-jin

    2015-01-01

    A novel algorithm is proposed for robust step detection irrespective of step mode and device pose in smartphone usage environments. The dynamics of smartphones are decoupled into a peak-valley relationship with adaptive magnitude and temporal thresholds. For extracted peaks and valleys in the magnitude of acceleration, a step is defined as consisting of a peak and its adjacent valley. Adaptive magnitude thresholds consisting of step average and step deviation are applied to suppress pseudo peaks or valleys that mostly occur during the transition among step modes or device poses. Adaptive temporal thresholds are applied to time intervals between peaks or valleys to consider the time-varying pace of human walking or running for the correct selection of peaks or valleys. From the experimental results, it can be seen that the proposed step detection algorithm shows more than 98.6% average accuracy for any combination of step mode and device pose and outperforms state-of-the-art algorithms. PMID:26516857
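
    A simplified reading of the peak-valley logic described above is sketched below: candidate peaks of the acceleration magnitude are accepted only if they clear an adaptive magnitude threshold derived from recent step statistics and respect a minimum inter-step interval. The constants and the omission of the valley bookkeeping are simplifications, not the published algorithm.

```python
import numpy as np

def detect_steps(acc_mag, fs, min_interval_s=0.25, k=1.0):
    """Step detection from acceleration magnitude with adaptive thresholds.

    A local maximum is accepted as a step if it exceeds an adaptive magnitude
    threshold (running mean of accepted peaks minus k * their deviation) and
    occurs at least min_interval_s after the previous accepted step.
    """
    steps = []
    recent_peaks = []                       # magnitudes of recently accepted peaks
    last_step_i = -np.inf
    for i in range(1, len(acc_mag) - 1):
        is_peak = acc_mag[i] > acc_mag[i - 1] and acc_mag[i] > acc_mag[i + 1]
        if not is_peak:
            continue
        if recent_peaks:
            mag_thresh = np.mean(recent_peaks) - k * np.std(recent_peaks)
        else:
            mag_thresh = 10.5               # bootstrap threshold (m/s^2), just above gravity
        if acc_mag[i] >= mag_thresh and (i - last_step_i) / fs >= min_interval_s:
            steps.append(i)
            last_step_i = i
            recent_peaks = recent_peaks[-9:] + [acc_mag[i]]
    return steps

fs = 50
t = np.arange(0, 10, 1 / fs)
acc = 9.81 + 2.0 * np.sin(2 * np.pi * 2 * t) \
      + 0.2 * np.random.default_rng(2).standard_normal(t.size)
print(len(detect_steps(acc, fs)))   # roughly 20 steps for a 2 Hz cadence over 10 s
```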

  13. Why do shape aftereffects increase with eccentricity?

    PubMed

    Gheorghiu, Elena; Kingdom, Frederick A A; Bell, Jason; Gurnsey, Rick

    2011-12-20

    Studies have shown that spatial aftereffects increase with eccentricity. Here, we demonstrate that the shape-frequency and shape-amplitude aftereffects, which describe the perceived shifts in the shape of a sinusoidal-shaped contour following adaptation to a slightly different sinusoidal-shaped contour, also increase with eccentricity. Why does this happen? We first demonstrate that the perceptual shift increases with eccentricity for stimuli of fixed sizes. These shifts are not attenuated by variations in stimulus size; in fact, at each eccentricity the degree of perceptual shift is scale-independent. This scale independence is specific to the aftereffect because basic discrimination thresholds (in the absence of adaptation) decrease as size increases. Structural aspects of the displays were found to have a modest effect on the degree of perceptual shift; the degree of adaptation depends modestly on distance between stimuli during adaptation and post-adaptation testing. There were similar temporal rates of decline of adaptation across the visual field and higher post-adaptation discrimination thresholds in the periphery than in the center. The observed results are consistent with greater sensitivity reduction in adapted mechanisms following adaptation in the periphery or an eccentricity-dependent increase in the bandwidth of the shape-frequency- and shape-amplitude-selective mechanisms.

  14. A robotic test of proprioception within the hemiparetic arm post-stroke.

    PubMed

    Simo, Lucia; Botzer, Lior; Ghez, Claude; Scheidt, Robert A

    2014-04-30

    Proprioception plays important roles in planning and control of limb posture and movement. The impact of proprioceptive deficits on motor function post-stroke has been difficult to elucidate due to limitations in current tests of arm proprioception. Common clinical tests only provide ordinal assessment of proprioceptive integrity (e.g., intact, impaired, or absent). We introduce a standardized, quantitative method for evaluating proprioception within the arm on a continuous, ratio scale. We demonstrate the approach, which is based on signal detection theory of sensory psychophysics, in two tasks used to characterize motor function after stroke. Hemiparetic stroke survivors and neurologically intact participants attempted to detect displacement- or force-perturbations robotically applied to their arm in a two-interval, two-alternative forced-choice test. A logistic psychometric function parameterized detection of limb perturbations. The shape of this function is determined by two parameters: one corresponds to a signal detection threshold and the other to variability of responses about that threshold. These two parameters define a space in which proprioceptive sensation post-stroke can be compared to that of neurologically-intact people. We used an auditory tone discrimination task to control for potential comprehension, attention and memory deficits. All but one stroke survivor demonstrated competence in performing two-alternative discrimination in the auditory training test. For the remaining stroke survivors, those with clinically identified proprioceptive deficits in the hemiparetic arm or hand had higher detection thresholds and exhibited greater response variability than individuals without proprioceptive deficits. We then identified a normative parameter space determined by the threshold and response variability data collected from neurologically intact participants. By plotting displacement detection performance within this normative space, stroke survivors with and without intact proprioception could be discriminated on a continuous scale that was sensitive to small performance variations, e.g. practice effects across days. The proposed method uses robotic perturbations similar to those used in ongoing studies of motor function post-stroke. The approach is sensitive to small changes in the proprioceptive detection of hand motions. We expect this new robotic assessment will empower future studies to characterize how proprioceptive deficits compromise limb posture and movement control in stroke survivors.
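
    The two parameters extracted by the authors, a detection threshold and the response variability about it, come from fitting a logistic psychometric function to two-alternative forced-choice data. A hedged sketch of such a fit follows; the data points, lapse rate, and exact parameterization are illustrative rather than those of the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def psychometric(x, threshold, spread, lapse=0.02):
    """2AFC logistic psychometric function: ~0.5 at zero signal, ~1 for large x."""
    p = 1.0 / (1.0 + np.exp(-(x - threshold) / spread))
    return 0.5 + (0.5 - lapse) * p

# Hypothetical proportions of correct detections vs. perturbation size (mm).
displacement = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])
p_correct = np.array([0.52, 0.55, 0.65, 0.80, 0.93, 0.97])

# Fit only the threshold and spread; the lapse rate keeps its default value.
params, _ = curve_fit(psychometric, displacement, p_correct, p0=[4.0, 2.0])
threshold_mm, spread_mm = params
print(f"detection threshold ~ {threshold_mm:.1f} mm, "
      f"response variability ~ {spread_mm:.1f} mm")
```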

  15. Intra-Shell boron isotope ratios in benthic foraminifera: Implications for paleo-pH reconstructions

    NASA Astrophysics Data System (ADS)

    Rollion-Bard, C.; Erez, J.

    2009-12-01

    The boron isotope composition of marine carbonates is considered to be a seawater pH proxy. Nevertheless, the use of δ11B has some limitations: 1) the knowledge of the fractionation factor (α4-3) between the two dissolved boron species (boric acid and borate ion), 2) the δ11B of seawater may have varied with time, and 3) the amplitude of the "vital effects" of this proxy. Using secondary ion mass spectrometry (SIMS), we looked at the internal variability in the boron isotope ratio of the shallow-water, symbiont-bearing foraminifer Amphistegina lobifera. Specimens were cultured at constant temperature (24±0.1 °C) in seawater with pH ranging between 7.90 and 8.45. We performed 6 to 8 measurements of δ11B in each foraminifer. Intra-shell boron isotopes show large variability with an upper threshold value of pH ~ 9. The ranges of the skeletal calculated pH values in different cultured foraminifera show strong correlation with the culture pH values and may thus serve as a proxy for pH in the past ocean.

  16. Normal Lung Quantification in Usual Interstitial Pneumonia Pattern: The Impact of Threshold-based Volumetric CT Analysis for the Staging of Idiopathic Pulmonary Fibrosis.

    PubMed

    Ohkubo, Hirotsugu; Kanemitsu, Yoshihiro; Uemura, Takehiro; Takakuwa, Osamu; Takemura, Masaya; Maeno, Ken; Ito, Yutaka; Oguri, Tetsuya; Kazawa, Nobutaka; Mikami, Ryuji; Niimi, Akio

    2016-01-01

    Although several computer-aided computed tomography (CT) analysis methods have been reported to objectively assess the disease severity and progression of idiopathic pulmonary fibrosis (IPF), it is unclear which method is most practical. A universal severity classification system has not yet been adopted for IPF. The purpose of this study was to test the correlation between quantitative-CT indices and lung physiology variables and to determine the ability of such indices to predict disease severity in IPF. A total of 27 IPF patients showing radiological UIP pattern on high-resolution (HR) CT were retrospectively enrolled. Staging of IPF was performed according to two classification systems: the Japanese and GAP (gender, age, and physiology) staging systems. CT images were assessed using a commercially available CT imaging analysis workstation, and the whole-lung mean CT value (MCT), the normally attenuated lung volume, defined as lung with attenuation from -950 to -701 Hounsfield units (NL), the volume of the whole lung (WL), and the percentage of NL to WL (NL%) were calculated. CT indices (MCT, WL, and NL) closely correlated with lung physiology variables. Among them, NL strongly correlated with forced vital capacity (FVC) (r = 0.92, P <0.0001). NL% showed a large area under the receiver operating characteristic curve for detecting patients in the moderate or advanced stages of IPF. Multivariable logistic regression analyses showed that NL% is significantly more useful than the percentages of predicted FVC and predicted diffusing capacity of the lungs for carbon monoxide (Japanese stage II/III/IV [odds ratio, 0.73; 95% confidence intervals (CI), 0.48 to 0.92; P < 0.01]; III/IV [odds ratio, 0.80; 95% CI 0.59 to 0.96; P < 0.01]; GAP stage II/III [odds ratio, 0.79; 95% CI, 0.56 to 0.97; P < 0.05]). The measurement of NL% by threshold-based volumetric CT analysis may help improve IPF staging.
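
    The NL% index itself reduces to a voxel count; the sketch below assumes a CT volume already expressed in Hounsfield units and a precomputed lung mask (the hard part in practice), and the toy data are purely illustrative.

```python
import numpy as np

def normal_lung_fraction(ct_hu, lung_mask, lo=-950, hi=-701):
    """Percentage of normally attenuated lung (NL%) from a CT volume in HU.

    NL is the lung volume with attenuation in [lo, hi] HU; WL is the whole
    segmented lung. Voxel size cancels in the ratio, so voxel counts suffice.
    """
    lung = ct_hu[lung_mask]
    nl_voxels = np.count_nonzero((lung >= lo) & (lung <= hi))
    return 100.0 * nl_voxels / lung.size

# Hypothetical toy volume: mostly aerated lung plus a denser "fibrotic" region.
rng = np.random.default_rng(3)
ct = rng.normal(-820, 60, size=(40, 40, 40))
ct[:10] = rng.normal(-500, 80, size=(10, 40, 40))
mask = np.ones_like(ct, dtype=bool)
print(f"NL% = {normal_lung_fraction(ct, mask):.1f}")
```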

  17. Limited reliability of computed tomographic perfusion acute infarct volume measurements compared with diffusion-weighted imaging in anterior circulation stroke.

    PubMed

    Schaefer, Pamela W; Souza, Leticia; Kamalian, Shervin; Hirsch, Joshua A; Yoo, Albert J; Kamalian, Shahmir; Gonzalez, R Gilberto; Lev, Michael H

    2015-02-01

    Diffusion-weighted imaging (DWI) can reliably identify critically ischemic tissue shortly after stroke onset. We tested whether thresholded computed tomographic cerebral blood flow (CT-CBF) and CT-cerebral blood volume (CT-CBV) maps are sufficiently accurate to substitute for DWI for estimating the critically ischemic tissue volume. Ischemic volumes of 55 patients with acute anterior circulation stroke were assessed on DWI by visual segmentation and on CT-CBF and CT-CBV with segmentation using 15% and 30% thresholds, respectively. The contrast:noise ratios of ischemic regions on the DWI and CT perfusion (CTP) images were measured. Correlation and Bland-Altman analyses were used to assess the reliability of CTP. Mean contrast:noise ratios for DWI, CT-CBF, and CT-CBV were 4.3, 0.9, and 0.4, respectively. CTP and DWI lesion volumes were highly correlated (R²=0.87 for CT-CBF; R²=0.83 for CT-CBV; P<0.001). Bland-Altman analyses revealed little systematic bias (-2.6 mL) but high measurement variability (95% confidence interval, ±56.7 mL) between mean CT-CBF and DWI lesion volumes, and systematic bias (-26 mL) and high measurement variability (95% confidence interval, ±64.0 mL) between mean CT-CBV and DWI lesion volumes. A simulated treatment study demonstrated that using CTP-CBF instead of DWI for detecting a statistically significant effect would require at least twice as many patients. The poor contrast:noise ratios of CT-CBV and CT-CBF compared with those of DWI result in large measurement error, making it problematic to substitute CTP for DWI in selecting individual acute stroke patients for treatment. CTP could be used for treatment studies of patient groups, but the number of patients needed to identify a significant effect is much higher than the number needed if DWI is used. © 2014 American Heart Association, Inc.

  18. Adaptive sequential Bayesian classification using Page's test

    NASA Astrophysics Data System (ADS)

    Lynch, Robert S., Jr.; Willett, Peter K.

    2002-03-01

    In this paper, the previously introduced Mean-Field Bayesian Data Reduction Algorithm is extended for adaptive sequential hypothesis testing utilizing Page's test. In general, Page's test is well understood as a method of detecting a permanent change in distribution associated with a sequence of observations. However, the relationship between detecting a change in distribution utilizing Page's test with that of classification and feature fusion is not well understood. Thus, the contribution of this work is based on developing a method of classifying an unlabeled vector of fused features (i.e., detect a change to an active statistical state) as quickly as possible given an acceptable mean time between false alerts. In this case, the developed classification test can be thought of as equivalent to performing a sequential probability ratio test repeatedly until a class is decided, with the lower log-threshold of each test being set to zero and the upper log-threshold being determined by the expected distance between false alerts. It is of interest to estimate the delay (or, related stopping time) to a classification decision (the number of time samples it takes to classify the target), and the mean time between false alerts, as a function of feature selection and fusion by the Mean-Field Bayesian Data Reduction Algorithm. Results are demonstrated by plotting the delay to declaring the target class versus the mean time between false alerts, and are shown using both different numbers of simulated training data and different numbers of relevant features for each class.
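
    The sequential rule described above, repeated likelihood-ratio testing with the lower log-threshold pinned at zero, is a CUSUM, so a minimal sketch of Page's test is given below. The Gaussian mean-shift example and the threshold value are hypothetical stand-ins for the fused-feature statistics of the paper.

```python
import numpy as np

def page_test(samples, log_lr, upper_threshold):
    """Page's (CUSUM) test: declare a change when the cumulative
    log-likelihood ratio, clipped below at zero, crosses the upper threshold.

    Returns the 1-based stopping time, or None if no alarm is raised. The
    upper threshold trades detection delay against the mean time between
    false alerts, as discussed in the abstract.
    """
    s = 0.0
    for n, x in enumerate(samples, start=1):
        s = max(0.0, s + log_lr(x))     # lower log-threshold fixed at zero
        if s >= upper_threshold:
            return n
    return None

# Toy example: unit-variance Gaussian mean shift from 0 to 1 at sample 50.
rng = np.random.default_rng(4)
data = np.concatenate([rng.normal(0, 1, 50), rng.normal(1, 1, 50)])
llr = lambda x: x - 0.5                 # log-likelihood ratio for this mean shift
print(page_test(data, llr, upper_threshold=5.0))   # typically alarms shortly after 50
```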

  19. ImmunoRatio: a publicly available web application for quantitative image analysis of estrogen receptor (ER), progesterone receptor (PR), and Ki-67

    PubMed Central

    2010-01-01

    Introduction Accurate assessment of estrogen receptor (ER), progesterone receptor (PR), and Ki-67 is essential in the histopathologic diagnostics of breast cancer. Commercially available image analysis systems are usually bundled with dedicated analysis hardware and, to our knowledge, no easily installable, free software for immunostained slide scoring has been described. In this study, we describe a free, Internet-based web application for quantitative image analysis of ER, PR, and Ki-67 immunohistochemistry in breast cancer tissue sections. Methods The application, named ImmunoRatio, calculates the percentage of positively stained nuclear area (labeling index) by using a color deconvolution algorithm for separating the staining components (diaminobenzidine and hematoxylin) and adaptive thresholding for nuclear area segmentation. ImmunoRatio was calibrated using cell counts defined visually as the gold standard (training set, n = 50). Validation was done using a separate set of 50 ER, PR, and Ki-67 stained slides (test set, n = 50). In addition, Ki-67 labeling indexes determined by ImmunoRatio were studied for their prognostic value in a retrospective cohort of 123 breast cancer patients. Results The labeling indexes by calibrated ImmunoRatio analyses correlated well with those defined visually in the test set (correlation coefficient r = 0.98). Using the median Ki-67 labeling index (20%) as a cutoff, a hazard ratio of 2.2 was obtained in the survival analysis (n = 123, P = 0.01). ImmunoRatio was shown to adapt to various staining protocols, microscope setups, digital camera models, and image acquisition settings. The application can be used directly with web browsers running on modern operating systems (e.g., Microsoft Windows, Linux distributions, and Mac OS). No software downloads or installations are required. ImmunoRatio is open source software, and the web application is publicly accessible on our website. Conclusions We anticipate that free web applications, such as ImmunoRatio, will make the quantitative image analysis of ER, PR, and Ki-67 easy and straightforward in the diagnostic assessment of breast cancer specimens. PMID:20663194

  20. ImmunoRatio: a publicly available web application for quantitative image analysis of estrogen receptor (ER), progesterone receptor (PR), and Ki-67.

    PubMed

    Tuominen, Vilppu J; Ruotoistenmäki, Sanna; Viitanen, Arttu; Jumppanen, Mervi; Isola, Jorma

    2010-01-01

    Accurate assessment of estrogen receptor (ER), progesterone receptor (PR), and Ki-67 is essential in the histopathologic diagnostics of breast cancer. Commercially available image analysis systems are usually bundled with dedicated analysis hardware and, to our knowledge, no easily installable, free software for immunostained slide scoring has been described. In this study, we describe a free, Internet-based web application for quantitative image analysis of ER, PR, and Ki-67 immunohistochemistry in breast cancer tissue sections. The application, named ImmunoRatio, calculates the percentage of positively stained nuclear area (labeling index) by using a color deconvolution algorithm for separating the staining components (diaminobenzidine and hematoxylin) and adaptive thresholding for nuclear area segmentation. ImmunoRatio was calibrated using cell counts defined visually as the gold standard (training set, n = 50). Validation was done using a separate set of 50 ER, PR, and Ki-67 stained slides (test set, n = 50). In addition, Ki-67 labeling indexes determined by ImmunoRatio were studied for their prognostic value in a retrospective cohort of 123 breast cancer patients. The labeling indexes by calibrated ImmunoRatio analyses correlated well with those defined visually in the test set (correlation coefficient r = 0.98). Using the median Ki-67 labeling index (20%) as a cutoff, a hazard ratio of 2.2 was obtained in the survival analysis (n = 123, P = 0.01). ImmunoRatio was shown to adapt to various staining protocols, microscope setups, digital camera models, and image acquisition settings. The application can be used directly with web browsers running on modern operating systems (e.g., Microsoft Windows, Linux distributions, and Mac OS). No software downloads or installations are required. ImmunoRatio is open source software, and the web application is publicly accessible on our website. We anticipate that free web applications, such as ImmunoRatio, will make the quantitative image analysis of ER, PR, and Ki-67 easy and straightforward in the diagnostic assessment of breast cancer specimens.
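
    The two image-analysis ideas named in the abstract, color deconvolution of the DAB and hematoxylin stains followed by adaptive thresholding of nuclear area, can be sketched with scikit-image as below; this is a schematic illustration, not the calibrated ImmunoRatio pipeline, and the file name in the usage comment is hypothetical.

```python
import numpy as np
from skimage.color import rgb2hed
from skimage.filters import threshold_otsu

def labeling_index(rgb_image):
    """Rough ImmunoRatio-style labeling index (percent DAB-positive nuclear area).

    Separates DAB and hematoxylin with color deconvolution (rgb2hed) and
    segments nuclear area in each stain channel with Otsu's adaptive
    threshold; calibration against visual counts is omitted.
    """
    hed = rgb2hed(rgb_image)
    hema, dab = hed[..., 0], hed[..., 2]
    dab_mask = dab > threshold_otsu(dab)
    hema_mask = hema > threshold_otsu(hema)
    nuclear_area = np.count_nonzero(dab_mask | hema_mask)
    if nuclear_area == 0:
        return 0.0
    return 100.0 * np.count_nonzero(dab_mask) / nuclear_area

# Usage (hypothetical file name):
# from skimage.io import imread
# print(f"Ki-67 labeling index: {labeling_index(imread('core_biopsy.png')):.1f}%")
```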

  1. Enhanced Sensitivity to Rapid Input Fluctuations by Nonlinear Threshold Dynamics in Neocortical Pyramidal Neurons

    PubMed Central

    Mensi, Skander; Hagens, Olivier; Gerstner, Wulfram; Pozzorini, Christian

    2016-01-01

    The way in which single neurons transform input into output spike trains has fundamental consequences for network coding. Theories and modeling studies based on standard Integrate-and-Fire models implicitly assume that, in response to increasingly strong inputs, neurons modify their coding strategy by progressively reducing their selective sensitivity to rapid input fluctuations. Combining mathematical modeling with in vitro experiments, we demonstrate that, in L5 pyramidal neurons, the firing threshold dynamics adaptively adjust the effective timescale of somatic integration in order to preserve sensitivity to rapid signals over a broad range of input statistics. For that, a new Generalized Integrate-and-Fire model featuring nonlinear firing threshold dynamics and conductance-based adaptation is introduced that outperforms state-of-the-art neuron models in predicting the spiking activity of neurons responding to a variety of in vivo-like fluctuating currents. Our model allows for efficient parameter extraction and can be analytically mapped to a Generalized Linear Model in which both the input filter—describing somatic integration—and the spike-history filter—accounting for spike-frequency adaptation—dynamically adapt to the input statistics, as experimentally observed. Overall, our results provide new insights on the computational role of different biophysical processes known to underlie adaptive coding in single neurons and support previous theoretical findings indicating that the nonlinear dynamics of the firing threshold due to Na+-channel inactivation regulate the sensitivity to rapid input fluctuations. PMID:26907675
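
    To make the idea of a dynamic firing threshold concrete, the sketch below simulates a toy integrate-and-fire neuron whose threshold relaxes toward a baseline plus a voltage-dependent term and jumps after each spike. The parameters and the linear coupling are invented for illustration and are far simpler than the nonlinear, conductance-based model fitted in the paper.

```python
import numpy as np

def lif_with_moving_threshold(i_ext, dt=0.1, tau_m=20.0, r_m=100.0,
                              v_rest=-70.0, theta0=-50.0, tau_theta=30.0,
                              alpha=0.3, jump=5.0):
    """Toy leaky integrate-and-fire neuron with a dynamic firing threshold.

    The threshold relaxes toward a baseline plus a term proportional to the
    membrane depolarisation and jumps after each spike; all constants are
    illustrative (mV, ms, MOhm, nA).
    """
    n = len(i_ext)
    v = np.full(n, v_rest)
    theta = np.full(n, theta0)
    spikes = []
    for t in range(1, n):
        dv = (-(v[t - 1] - v_rest) + r_m * i_ext[t - 1]) / tau_m
        dtheta = (-(theta[t - 1] - theta0) + alpha * (v[t - 1] - v_rest)) / tau_theta
        v[t] = v[t - 1] + dt * dv
        theta[t] = theta[t - 1] + dt * dtheta
        if v[t] >= theta[t]:
            spikes.append(t * dt)
            v[t] = v_rest                 # reset the membrane potential
            theta[t] += jump              # spike-triggered threshold increase
    return np.array(spikes), v, theta

rng = np.random.default_rng(5)
current = 0.35 + 0.1 * rng.standard_normal(5000)      # fluctuating drive (nA)
spike_times, _, _ = lif_with_moving_threshold(current)
print(f"{len(spike_times)} spikes in {len(current) * 0.1:.0f} ms")
```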

  2. Optimal Threshold for a Positive Hybrid Capture 2 Test for Detection of Human Papillomavirus: Data from the ARTISTIC Trial▿

    PubMed Central

    Sargent, A.; Bailey, A.; Turner, A.; Almonte, M.; Gilham, C.; Baysson, H.; Peto, J.; Roberts, C.; Thomson, C.; Desai, M.; Mather, J.; Kitchener, H.

    2010-01-01

    We present data on the use of the Hybrid Capture 2 (HC2) test for the detection of high-risk human papillomavirus (HR HPV) with different thresholds for positivity within a primary screening setting and as a method of triage for low-grade cytology. In the ARTISTIC population-based trial, 18,386 women were screened by cytology and for HPV. Cervical intraepithelial neoplasia lesions of grade two and higher (CIN2+ lesions) were identified for 453 women within 30 months of an abnormal baseline sample. When a relative light unit/cutoff (RLU/Co) ratio of ≥1 was used as the threshold for considering an HC2 result positive, 15.6% of results were positive, and the proportion of CIN2+ lesions in this group was 14.7%. The relative sensitivity for CIN2+ lesion detection was 93.4%. When an RLU/Co ratio of ≥2 was used as the threshold, there was a 2.5% reduction in positivity, with an increase in the proportion of CIN2+ lesions detected. The relative sensitivity decreased slightly, to 90.3%. Among women with low-grade cytology, HPV prevalences were 43.7% and 40.3% at RLU/Co ratios of ≥1 and ≥2, respectively. The proportions of CIN2+ lesions detected were 17.3% and 18.0%, with relative sensitivities of 87.7% at an RLU/Co ratio of ≥1 and 84.2% at an RLU/Co ratio of ≥2. At an RLU/Co ratio of ≥1, 68.3% of HC2-positive results were confirmed by the Roche line blot assay, compared to 77.2% of those at an RLU/Co ratio of ≥2. Fewer HC2-positive results were confirmed for 35- to 64-year-olds (50.3% at an RLU/Co ratio of ≥1 and 63.2% at an RLU/Co ratio of >2) than for 20- to 34-year-olds (78.7% at an RLU/Co ratio of ≥1 and 83.7% at an RLU/Co ratio of >2). If the HC2 test is used for routine screening as an initial test or as a method of triage for low-grade cytology, we would suggest increasing the threshold for positivity from the RLU/Co ratio of ≥1, recommended by the manufacturer, to an RLU/Co ratio of ≥2, since this study has shown that a beneficial balance between relative sensitivity and the proportion of CIN2+ lesions detected is achieved at this threshold. PMID:20007387

  3. Robust Ordering of Anaphase Events by Adaptive Thresholds and Competing Degradation Pathways.

    PubMed

    Kamenz, Julia; Mihaljev, Tamara; Kubis, Armin; Legewie, Stefan; Hauf, Silke

    2015-11-05

    The splitting of chromosomes in anaphase and their delivery into the daughter cells needs to be accurately executed to maintain genome stability. Chromosome splitting requires the degradation of securin, whereas the distribution of the chromosomes into the daughter cells requires the degradation of cyclin B. We show that cells encounter and tolerate variations in the abundance of securin or cyclin B. This makes the concurrent onset of securin and cyclin B degradation insufficient to guarantee that early anaphase events occur in the correct order. We uncover that the timing of chromosome splitting is not determined by reaching a fixed securin level, but that this level adapts to the securin degradation kinetics. In conjunction with securin and cyclin B competing for degradation during anaphase, this provides robustness to the temporal order of anaphase events. Our work reveals how parallel cell-cycle pathways can be temporally coordinated despite variability in protein concentrations. Copyright © 2015 Elsevier Inc. All rights reserved.

  4. Initial Weight Loss Response as an Indicator for Providing Early Rescue Efforts to Improve Long-term Treatment Outcomes.

    PubMed

    Unick, Jessica L; Pellegrini, Christine A; Demos, Kathryn E; Dorfman, Leah

    2017-09-01

    There is a large variability in response to behavioral weight loss (WL) programs. Reducing rates of obesity and diabetes may require more individuals to achieve clinically significant WL post-treatment. Given that WL within the first 1-2 months of a WL program is associated with long-term WL, it may be possible to improve treatment outcomes by identifying and providing additional intervention to those with poor initial success (i.e., "early non-responders"). We review the current literature regarding early non-response to WL programs and discuss how adaptive interventions can be leveraged as a strategy to "rescue" early non-responders. Preliminary findings suggest that adaptive interventions, specifically stepped care approaches, offer promise for improving outcomes among early non-responders. Future studies need to determine the optimal time point and threshold for intervening and the type of early intervention to employ. Clinicians and researchers should consider the discussed factors when making treatment decisions.

  5. Can adaptive threshold-based metabolic tumor volume (MTV) and lean body mass corrected standard uptake value (SUL) predict prognosis in head and neck cancer patients treated with definitive radiotherapy/chemoradiotherapy?

    PubMed

    Akagunduz, Ozlem Ozkaya; Savas, Recep; Yalman, Deniz; Kocacelebi, Kenan; Esassolak, Mustafa

    2015-11-01

    To evaluate the predictive value of adaptive threshold-based metabolic tumor volume (MTV), maximum standardized uptake value (SUVmax) and maximum lean body mass corrected SUV (SULmax) measured on pretreatment positron emission tomography and computed tomography (PET/CT) imaging in head and neck cancer patients treated with definitive radiotherapy/chemoradiotherapy. Pretreatment PET/CT scans of the 62 patients with locally advanced head and neck cancer who were treated consecutively between May 2010 and February 2013 were reviewed retrospectively. The maximum FDG uptake of the primary tumor was defined according to SUVmax and SULmax. Multiple threshold levels between 60% and 10% of the SUVmax and SULmax were tested with intervals of 5% to 10% in order to define the most suitable threshold value for the metabolic activity of each patient's tumor (adaptive threshold). MTV was calculated according to this value. We evaluated the relationship of mean values of MTV, SUVmax and SULmax with treatment response, local recurrence, distant metastasis and disease-related death. Receiver-operating characteristic (ROC) curve analysis was done to obtain optimal predictive cut-off values for MTV and SULmax, which were found to have a predictive value. Local recurrence-free (LRFS), disease-free (DFS) and overall survival (OS) were examined according to these cut-offs. Forty-six patients had complete response, 15 had partial response, and 1 had stable disease 6 weeks after the completion of treatment. Median follow-up of the entire cohort was 18 months. Of 46 complete responders, 10 had local recurrence, and of 16 partial or non-responders, 10 had local progression. Eighteen patients died. Adaptive threshold-based MTV had significant predictive value for treatment response (p=0.011), local recurrence/progression (p=0.050), and disease-related death (p=0.024). SULmax had a predictive value for local recurrence/progression (p=0.030). ROC curve analysis revealed a cut-off value of 14.00 mL for MTV and 10.15 for SULmax. Three-year LRFS and DFS rates were significantly lower in patients with MTV ≥ 14.00 mL (p=0.026, p=0.018 respectively), and SULmax ≥ 10.15 (p=0.017, p=0.022 respectively). SULmax did not have a significant predictive value for OS, whereas MTV did (p=0.025). Adaptive threshold-based MTV and SULmax could have a role in predicting local control and survival in head and neck cancer patients. Copyright © 2015 Elsevier Inc. All rights reserved.
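
    As a rough illustration of the adaptive-threshold MTV computation described above (not the clinical software workflow), the sketch below reports the volume of voxels at or above a given fraction of SUVmax for fractions from 60% down to 10%; the VOI array, the voxel size and the SUV values are all hypothetical.

        import numpy as np

        def mtv_at_threshold(suv, voxel_volume_ml, fraction):
            """Metabolic tumor volume (mL): voxels at or above fraction * SUVmax.
            suv is a 3-D array of SUV (or SUL) values inside the tumor VOI."""
            cutoff = fraction * suv.max()
            return np.count_nonzero(suv >= cutoff) * voxel_volume_ml

        rng = np.random.default_rng(1)
        voi = rng.gamma(shape=2.0, scale=2.5, size=(30, 30, 30))  # fake SUV values
        voxel_ml = 0.4 * 0.4 * 0.4                                # 4 x 4 x 4 mm voxels
        for frac in np.arange(0.60, 0.05, -0.05):                 # 60% down to 10%
            print(f"threshold {frac:.2f} * SUVmax -> MTV = "
                  f"{mtv_at_threshold(voi, voxel_ml, frac):.1f} mL")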

  6. Surface ablation of aluminum and silicon by ultrashort laser pulses of variable width

    NASA Astrophysics Data System (ADS)

    Zayarny, D. A.; Ionin, A. A.; Kudryashov, S. I.; Makarov, S. V.; Kuchmizhak, A. A.; Vitrik, O. B.; Kulchin, Yu. N.

    2016-06-01

    Single-shot thresholds of surface ablation of aluminum and silicon via spallative ablation by infrared (IR) and visible ultrashort laser pulses of variable width τlas (0.2-12 ps) have been measured by optical microscopy. For increasing laser pulse width τlas < 3 ps, a drastic (threefold) drop of the ablation threshold of aluminum has been observed for visible pulses compared to an almost negligible threshold variation for IR pulses. In contrast, the ablation threshold in silicon increases threefold with increasing τlas for IR pulses, while the corresponding thresholds for visible pulses remained almost constant. In aluminum, such a width-dependent decrease in ablation thresholds has been related to strongly diminished temperature gradients for pulse widths exceeding the characteristic electron-phonon thermalization time. In silicon, the observed increase in ablation thresholds has been ascribed to two-photon IR excitation, while in the visible range linear absorption of the material results in almost constant thresholds.

  7. Identifying a Probabilistic Boolean Threshold Network From Samples.

    PubMed

    Melkman, Avraham A; Cheng, Xiaoqing; Ching, Wai-Ki; Akutsu, Tatsuya

    2018-04-01

    This paper studies the problem of exactly identifying the structure of a probabilistic Boolean network (PBN) from a given set of samples, where PBNs are probabilistic extensions of Boolean networks. Cheng et al. studied the problem while focusing on PBNs consisting of pairs of AND/OR functions. This paper considers PBNs consisting of Boolean threshold functions while focusing on those threshold functions that have unit coefficients. The treatment of Boolean threshold functions, and of triplets and larger tuples of such functions, necessitates deeper theoretical analysis. It is shown that wide classes of PBNs with such threshold functions can be exactly identified from samples under reasonable constraints, which include: 1) PBNs in which any number of threshold functions can be assigned provided that all have the same number of input variables and 2) PBNs consisting of pairs of threshold functions with different numbers of input variables. It is also shown that the problem of deciding the equivalence of two Boolean threshold functions is solvable in pseudopolynomial time but remains co-NP-complete.

  8. Validity and reliability of in-situ air conduction thresholds measured through hearing aids coupled to closed and open instant-fit tips.

    PubMed

    O'Brien, Anna; Keidser, Gitte; Yeend, Ingrid; Hartley, Lisa; Dillon, Harvey

    2010-12-01

    Audiometric measurements through a hearing aid ('in-situ') may facilitate provision of hearing services where these are limited. This study investigated the validity and reliability of in-situ air conduction hearing thresholds measured with closed and open domes relative to thresholds measured with insert earphones, and explored sources of variability in the measures. Twenty-four adults with sensorineural hearing impairment attended two sessions in which thresholds and real-ear-to-dial-difference (REDD) values were measured. Without correction, significantly higher low-frequency thresholds in dB HL were measured in-situ than with insert earphones. Differences were due predominantly to differences in ear canal SPL, as measured with the REDD, which were attributed to leaking low-frequency energy. Test-retest data yielded higher variability with the closed dome coupling due to inconsistent seals achieved with this tip. For all three conditions, inter-participant variability in the REDD values was greater than intra-participant variability. Overall, in-situ audiometry is as valid and reliable as conventional audiometry provided appropriate REDD corrections are made and ambient sound in the test environment is controlled.

  9. Identifying community thresholds for lotic benthic diatoms in response to human disturbance.

    PubMed

    Tang, Tao; Tang, Ting; Tan, Lu; Gu, Yuan; Jiang, Wanxiang; Cai, Qinghua

    2017-06-23

    Although human disturbance indirectly influences lotic assemblages through modifying physical and chemical conditions, identifying thresholds of human disturbance would provide direct evidence for preventing anthropogenic degradation of biological conditions. In the present study, we used data obtained from tributaries of the Three Gorges Reservoir in China to detect effects of human disturbance on streams and to identify disturbance thresholds for benthic diatoms. Diatom species composition was significantly affected by three in-stream stressors including TP, TN and pH. Diatoms were also influenced by watershed % farmland and natural environmental variables. Considering three in-stream stressors, TP was positively influenced by % farmland and % impervious surface area (ISA). In contrast, TN and pH were principally affected by natural environmental variables. Among measured natural environmental variables, average annual air temperature, average annual precipitation, and topsoil % CaCO3, % gravel, and total exchangeable bases had significant effects on study streams. When effects of natural variables were accounted for, substantial compositional changes in diatoms occurred when farmland or ISA land use exceeded 25% or 0.3%, respectively. Our study demonstrated the rationale for identifying thresholds of human disturbance for lotic assemblages and addressed the importance of accounting for effects of natural factors for accurate disturbance thresholds.

  10. Audiometric Predictions Using SFOAE and Middle-Ear Measurements

    PubMed Central

    Ellison, John C.; Keefe, Douglas H.

    2006-01-01

    Objective The goals of the study are to determine how well stimulus-frequency otoacoustic emissions (SFOAEs) identify hearing loss, classify hearing loss as mild or moderate-severe, and correlate with pure-tone thresholds in a population of adults with normal middle-ear function. Other goals are to determine if middle-ear function as assessed by wideband acoustic transfer function (ATF) measurements in the ear canal account for the variability in normal thresholds, and if the inclusion of ATFs improves the ability of SFOAEs to identify hearing loss and predict pure-tone thresholds. Design The total suppressed SFOAE signal and its corresponding noise were recorded in 85 ears (22 normal ears and 63 ears with sensorineural hearing loss) at octave frequencies from 0.5 – 8 kHz using a nonlinear residual method. SFOAEs were recorded a second time in three impaired ears to assess repeatability. Ambient-pressure ATFs were obtained in all but one of these 85 ears, and were also obtained from an additional 31 normal-hearing subjects in whom SFOAE data were not obtained. Pure-tone air-and bone-conduction thresholds and 226-Hz tympanograms were obtained on all subjects. Normal tympanometry and the absence of air-bone gaps were used to screen subjects for normal middle-ear function. Clinical decision theory was used to assess the performance of SFOAE and ATF predictors in classifying ears as normal or impaired, and linear regression analysis was used to test the ability of SFOAE and ATF variables to predict the air-conduction audiogram. Results The ability of SFOAEs to classify ears as normal or hearing impaired was significant at all test frequencies. The ability of SFOAEs to classify impaired ears as either mild or moderate-severe was significant at test frequencies from 0.5 to 4 kHz. SFOAEs were present in cases of severe hearing loss. SFOAEs were also significantly correlated with air-conduction thresholds from 0.5 to 8 kHz. The best performance occurred using the SFOAE signal-to-noise ratio (S/N) as the predictor, and the overall best performance was at 2 kHz. The SFOAE S/N measures were repeatable to within 3.5 dB in impaired ears. The ATF measures explained up to 25% of the variance in the normal audiogram; however, ATF measures did not improve SFOAEs predictors of hearing loss except at 4 kHz. Conclusions In common with other OAE types, SFOAEs are capable of identifying the presence of hearing loss. In particular, SFOAEs performed better than distortion-product and click-evoked OAEs in predicting auditory status at 0.5 kHz; SFOAE performance was similar to that of other OAE types at higher frequencies except for a slight performance reduction at 4 kHz. Because SFOAEs were detected in ears with mild to severe cases of hearing loss they may also provide an estimate of the classification of hearing loss. Although SFOAEs were significantly correlated with hearing threshold, they do not appear to have clinical utility in predicting a specific behavioral threshold. Information on middle-ear status as assessed by ATF measures offered minimal improvement in SFOAE predictions of auditory status in a population of normal and impaired ears with normal middle-ear function. However, ATF variables did explain a significant fraction of the variability in the audiograms of normal ears, suggesting that audiometric thresholds in normal ears are partially constrained by middle-ear function as assessed by ATF tests. PMID:16230898

  11. Contrasting patterns of RUNX2 repeat variations are associated with palate shape in phyllostomid bats and New World primates.

    PubMed

    Ferraz, Tiago; Rossoni, Daniela M; Althoff, Sérgio L; Pissinatti, Alcides; Paixão-Cortês, Vanessa R; Bortolini, Maria Cátira; González-José, Rolando; Marroig, Gabriel; Salzano, Francisco M; Gonçalves, Gislene L; Hünemeier, Tábita

    2018-05-18

    Establishing the genetic basis that underlies craniofacial variability in natural populations is one of the main topics of evolutionary and developmental studies. One of the genes associated with mammal craniofacial variability is RUNX2, and in the present study we investigated the association between craniofacial length and width and RUNX2 across New World bats (Phyllostomidae) and primates (Catarrhini and Platyrrhini). Our results showed contrasting patterns of association between the glutamate/alanine ratios (Q/A ratio) and palate shape in these highly diverse groups. In phyllostomid bats, we found an association between shorter/broader faces and increase of the Q/A ratio. In New World monkeys (NWM) there was a positive correlation of increasing Q/A ratios to more elongated faces. Our findings reinforced the role of the Q/A ratio as a flexible genetic mechanism that would rapidly change the time of skull ossification throughout development. However, we propose a scenario in which the influence of this genetic adjustment system is indirect. The Q/A ratio would not lead to a specific phenotype, but throughout the history of a lineage, would act along with evolutionary constraints, as well as other genes, as a facilitator for adaptive morphological changes.

  12. Edge detection based on adaptive threshold b-spline wavelet for optical sub-aperture measuring

    NASA Astrophysics Data System (ADS)

    Zhang, Shiqi; Hui, Mei; Liu, Ming; Zhao, Zhu; Dong, Liquan; Liu, Xiaohua; Zhao, Yuejin

    2015-08-01

    In research on optical synthetic aperture imaging systems, phase congruency is the main problem, and it is necessary to detect the sub-aperture phase. The edges in a sub-aperture system are more complex than those in a traditional optical imaging system. Because large-aperture optical components can have steep slopes, the interference fringes may be quite dense in interference imaging, and a steep phase gradient may cause a loss of phase information. An efficient edge detection method is therefore needed. Wavelet analysis is a powerful tool widely used in image processing. Owing to its multi-scale transform properties, edge regions are detected with high precision at small scales, while noise is reduced as the scale increases, giving a certain noise-suppression effect. In addition, an adaptive threshold method, which sets different thresholds in different regions, can separate edge points from noise. First, the fringe pattern is obtained and a cubic b-spline wavelet is adopted as the smoothing function. After multi-scale wavelet decomposition of the whole image, the local modulus maxima along the gradient directions are computed. Because these maxima still contain noise, an adaptive threshold method is used to select them: a point whose modulus exceeds the threshold value is taken as a boundary point. Finally, erosion and dilation are applied to the resulting image to obtain a continuous image boundary.
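
    The full method uses the modulus maxima of a multi-scale cubic b-spline wavelet transform; as an illustration under that caveat, the sketch below applies only the region-wise adaptive-threshold step to a plain finite-difference gradient. The block size and the factor k are arbitrary assumptions.

        import numpy as np

        def adaptive_edge_map(img, block=32, k=1.5):
            """Keep gradient-magnitude pixels that exceed a per-block threshold
            (block mean + k * block std), a simplified stand-in for selecting
            wavelet modulus maxima with region-dependent thresholds."""
            gy, gx = np.gradient(img.astype(float))
            mag = np.hypot(gx, gy)
            edges = np.zeros_like(mag, dtype=bool)
            for i in range(0, mag.shape[0], block):
                for j in range(0, mag.shape[1], block):
                    tile = mag[i:i + block, j:j + block]
                    edges[i:i + block, j:j + block] = tile > tile.mean() + k * tile.std()
            return edges

        # synthetic fringe-like pattern with noise, standing in for an interferogram
        y, x = np.mgrid[0:256, 0:256]
        img = np.sin(0.15 * x + 0.002 * (y - 128) ** 2)
        img += 0.2 * np.random.default_rng(2).standard_normal(img.shape)
        print("edge pixels:", int(adaptive_edge_map(img).sum()))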

  13. Extraction of Extended Small-Scale Objects in Digital Images

    NASA Astrophysics Data System (ADS)

    Volkov, V. Y.

    2015-05-01

    The problem of detecting and localizing extended small-scale objects of different shapes arises in observation systems that use SAR, infrared, lidar and television cameras. An intense, non-stationary background is the main processing difficulty. Another challenge is the low quality of the images, with blobs and blurred boundaries; in addition, SAR images suffer from serious intrinsic speckle noise. The background statistics are not normal, showing evident skewness and heavy tails in the probability density, which makes them hard to characterize. The extraction of small-scale objects is solved here on the basis of directional filtering, adaptive thresholding and morphological analysis. A new kind of mask that is open-ended at one side is used, so that the ends of line segments of unknown length can be extracted. An advanced method of dynamic adaptive threshold setting is investigated that is based on extracting isolated fragments after thresholding. A hierarchy of isolated fragments in the binary image is proposed for analyzing the segmentation results; it includes small-scale objects of different shape, size and orientation. The method extracts isolated fragments in the binary image and counts the points in these fragments. The number of points in the extracted fragments, normalized by the total number of points for a given threshold, is used as the effectiveness of extraction for these fragments. A new method for adaptive threshold setting and control maximizes this effectiveness of extraction. It has optimality properties for object extraction in a normal noise field and shows effective results for real SAR images.
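
    A minimal sketch of rating a threshold by the share of foreground points that fall into isolated fragments, assuming that "isolated fragments" can be approximated by small connected components; the max_size cut-off and the test image are invented, and the paper's directional filtering and open-ended masks are omitted.

        import numpy as np
        from scipy import ndimage

        def extraction_effectiveness(image, threshold, max_size=200):
            """Fraction of above-threshold pixels that belong to small isolated
            fragments (connected components of at most max_size pixels)."""
            binary = image > threshold
            total = binary.sum()
            if total == 0:
                return 0.0
            labels, n = ndimage.label(binary)
            sizes = ndimage.sum(binary, labels, index=np.arange(1, n + 1))
            return float(sizes[sizes <= max_size].sum()) / float(total)

        def best_threshold(image, candidates, max_size=200):
            """Adaptive threshold choice: maximize the effectiveness measure."""
            scores = [extraction_effectiveness(image, t, max_size) for t in candidates]
            return candidates[int(np.argmax(scores))], max(scores)

        rng = np.random.default_rng(3)
        img = rng.rayleigh(scale=1.0, size=(256, 256))   # speckle-like background
        img[100:102, 50:120] += 4.0                      # a thin extended object
        t, score = best_threshold(img, np.linspace(1.0, 6.0, 26))
        print(f"selected threshold {t:.2f}, effectiveness {score:.2f}")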

  14. Informing Adaptation Decisions: What Do We Need to Know and What Do We Need to Do?

    NASA Astrophysics Data System (ADS)

    Pulwarty, R. S.; Webb, R. S.

    2014-12-01

    The demand for improved climate knowledge and information is well documented. As noted in the IPCC Reports (SREX, AR5) and other assessments, this demand has increased pressure for better information to support planning under changing rates of extremes event occurrence. This demand has focused on mechanisms used to respond to past variability and change, including, integrated resource management (watersheds, coasts), infrastructure design, information systems, technological optimization, financial risk management, and behavioral and institutional change. Climate inputs range from static site design statistics (return periods) to dynamic, emergent thresholds and transitions preceded by steep response curves and punctuated equilibria. Tradeoffs are evident in the use of risk-based anticipatory strategies vs. resilience measures. In such settings, annual decision calendars for operational requirements can confound adaptation expectations. Key knowledge assessment questions include: (1) How predictable are potential impacts of events in the context of other stressors, (2) how is action to anticipate such impacts informed, and (3) How often should criteria for "robustness" be reconsidered? To illustrate, we will discuss the climate information needs and uses for two areas of concern for both short and long-term risks (i) climate and disaster risk financing, and (ii) watershed management. The presentation will focus on the climate information needed for (1) improved monitoring, modeling and methods for understanding and analyzing exposure risks, (2) generating risk profiles, (3) developing information systems and scenarios for critical thresholds across climate time and space scales, (4) embedding annual decision calendars in the context of longer-term risk management, (5) gaming experiments to show the net benefits of new information. We will conclude with a discussion of the essential climate variables needed to implement services-delivery and development efforts such as the Global Framework for Climate Services and the Pilot Program on Climate Resilience.

  15. Biomimetic micromechanical adaptive flow-sensor arrays

    NASA Astrophysics Data System (ADS)

    Krijnen, Gijs; Floris, Arjan; Dijkstra, Marcel; Lammerink, Theo; Wiegerink, Remco

    2007-05-01

    We report current developments in biomimetic flow-sensors based on flow sensitive mechano-sensors of crickets. Crickets have one form of acoustic sensing evolved in the form of mechanoreceptive sensory hairs. These filiform hairs are highly perceptive to low-frequency sound with energy sensitivities close to thermal threshold. In this work we describe hair-sensors fabricated by a combination of sacrificial poly-silicon technology, to form silicon-nitride suspended membranes, and SU8 polymer processing for fabrication of hairs with diameters of about 50 μm and up to 1 mm length. The membranes have thin chromium electrodes on top forming variable capacitors with the substrate that allow for capacitive read-out. Previously these sensors have been shown to exhibit acoustic sensitivity. Like for the crickets, the MEMS hair-sensors are positioned on elongated structures, resembling the cercus of crickets. In this work we present optical measurements on acoustically and electrostatically excited hair-sensors. We present adaptive control of flow-sensitivity and resonance frequency by electrostatic spring stiffness softening. Experimental data and simple analytical models derived from transduction theory are shown to exhibit good correspondence, both confirming theory and the applicability of the presented approach towards adaptation.

  16. Detection of short-term changes in vegetation cover by use of LANDSAT imagery. [Arizona

    NASA Technical Reports Server (NTRS)

    Turner, R. M. (Principal Investigator); Wiseman, F. M.

    1975-01-01

    The author has identified the following significant results. By using a constant band 6 to band 5 radiance ratio of 1.25, the changing pattern of areas of relatively dense vegetation cover was detected for the semiarid region in the vicinity of Tucson, Arizona. Electronically produced binary thematic masks were used to map areas with dense vegetation. The foliar cover threshold represented by the ratio was not accurately determined, but field measurements show that the threshold lies in the range of 10 to 25 percent foliage cover. Montane evergreen forests with constant dense cover were correctly shown to exceed the threshold on all dates. The summer-active grassland exceeded the threshold in the summer unless rainfall was insufficient. Desert areas exceeded the threshold during the spring of 1973 following heavy rains; the same areas during the rainless spring of 1974 did not exceed the threshold. Irrigated fields, parks, golf courses, and riparian communities were among the habitats most frequently surpassing the threshold.
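
    A minimal sketch of the band-ratio mask described above; the only element taken from the record is the ratio threshold of 1.25, and the toy radiance arrays are invented.

        import numpy as np

        def dense_vegetation_mask(band6, band5, ratio_threshold=1.25):
            """Binary thematic mask: pixels whose band-6 / band-5 radiance ratio
            meets or exceeds the threshold are flagged as relatively dense cover."""
            band5 = np.where(band5 == 0, np.nan, band5.astype(float))  # avoid divide-by-zero
            ratio = band6.astype(float) / band5
            return ratio >= ratio_threshold

        b6 = np.array([[40, 55, 30], [60, 20, 48], [33, 70, 25]], dtype=float)
        b5 = np.array([[40, 40, 30], [40, 25, 30], [30, 50, 25]], dtype=float)
        print(dense_vegetation_mask(b6, b5).astype(int))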

  17. Establishing the Response of Low Frequency Auditory Filters

    NASA Technical Reports Server (NTRS)

    Rafaelof, Menachem; Christian, Andrew; Shepherd, Kevin; Rizzi, Stephen; Stephenson, James

    2017-01-01

    The response of auditory filters is central to frequency selectivity of sound by the human auditory system. This is true especially for realistic complex sounds that are often encountered in many applications such as modeling the audibility of sound, voice recognition, noise cancelation, and the development of advanced hearing aid devices. The purpose of this study was to establish the response of low frequency (below 100Hz) auditory filters. Two experiments were designed and executed; the first was to measure subject's hearing threshold for pure tones (at 25, 31.5, 40, 50, 63 and 80 Hz), and the second was to measure the Psychophysical Tuning Curves (PTCs) at two signal frequencies (Fs= 40 and 63Hz). Experiment 1 involved 36 subjects while experiment 2 used 20 subjects selected from experiment 1. Both experiments were based on a 3-down 1-up 3AFC adaptive staircase test procedure using either a variable level narrow-band noise masker or a tone. A summary of the results includes masked threshold data in form of PTCs, the response of auditory filters, their distribution, and comparison with similar recently published data.
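
    A generic sketch of a 3-down 1-up adaptive staircase of the kind named above, not the study's exact procedure; the simulated listener, its true threshold, slope and the step size are invented for illustration.

        import numpy as np

        def three_down_one_up(trial_fn, start_level, step_db=2.0, reversals_stop=8):
            """3-down 1-up staircase: lower the level after three consecutive
            correct responses, raise it after any incorrect one; converges near
            the 79.4%-correct point of the psychometric function.
            trial_fn(level) must return True for a correct response."""
            level, run, direction = start_level, 0, 0
            reversal_levels = []
            while len(reversal_levels) < reversals_stop:
                if trial_fn(level):
                    run += 1
                    if run == 3:
                        run = 0
                        if direction == +1:
                            reversal_levels.append(level)
                        direction = -1
                        level -= step_db
                else:
                    run = 0
                    if direction == -1:
                        reversal_levels.append(level)
                    direction = +1
                    level += step_db
            return np.mean(reversal_levels[2:])      # discard early reversals

        rng = np.random.default_rng(5)

        def listener(level, true_threshold=40.0, slope=1.0):
            """Simulated 3AFC listener: guesses at chance (1/3) when it cannot detect."""
            p_detect = 1.0 / (1.0 + np.exp(-(level - true_threshold) * slope))
            return rng.random() < 1.0 / 3.0 + (2.0 / 3.0) * p_detect

        print(f"estimated threshold: {three_down_one_up(listener, 60.0):.1f} dB")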

  18. Robust crop and weed segmentation under uncontrolled outdoor illumination

    USDA-ARS?s Scientific Manuscript database

    A new machine vision for weed detection was developed from RGB color model images. Processes included in the algorithm for the detection were excessive green conversion, threshold value computation by statistical analysis, adaptive image segmentation by adjusting the threshold value, median filter, ...
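
    The record above is only a snippet, so the sketch below is a loose reconstruction of the named steps (excess-green conversion, a statistically derived image-adaptive threshold, median filtering) rather than the published algorithm; the factor k, the filter size and the test frame are assumptions.

        import numpy as np
        from scipy import ndimage

        def weed_mask(rgb, k=0.5):
            """Excess-green vegetation segmentation with an image-adaptive
            statistical threshold and a median filter for cleanup."""
            r, g, b = [rgb[..., i].astype(float) for i in range(3)]
            total = r + g + b
            total[total == 0] = 1.0
            exg = (2.0 * g - r - b) / total            # normalized excess green index
            threshold = exg.mean() + k * exg.std()     # adaptive threshold
            mask = (exg > threshold).astype(np.uint8)
            return ndimage.median_filter(mask, size=5).astype(bool)

        rng = np.random.default_rng(4)
        frame = rng.integers(0, 255, size=(120, 160, 3), dtype=np.uint8)
        frame[40:80, 60:100, 1] = 230                   # a bright green patch
        print("vegetation pixels:", int(weed_mask(frame).sum()))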

  19. Discharge variability and bedrock river incision on the Hawaiian island of Kaua'i

    NASA Astrophysics Data System (ADS)

    Huppert, K.; Deal, E.; Perron, J. T.; Ferrier, K.; Braun, J.

    2017-12-01

    Bedrock river incision occurs during floods that generate sufficient shear stress to strip riverbeds of sediment cover and erode underlying bedrock. Thresholds for incision can prevent erosion at low flows and slow down erosion at higher flows that do generate excess shear stress. Because discharge distributions typically display power-law tails, with non-negligible frequencies of floods much greater than the mean, models incorporating stochastic discharge and incision thresholds predict that discharge variability can sometimes have greater effects on long-term incision rates than mean discharge. This occurs when the commonly observed inverse scalings between mean discharge and discharge variability are weak or when incision thresholds are high. Because the effects of thresholds and discharge variability have only been documented in a few locations, their influence on long-term river incision rates remains uncertain. The Hawaiian island of Kaua'i provides an ideal natural laboratory to evaluate the effects of discharge variability and thresholds on bedrock river incision because it has one of Earth's steepest spatial gradients in mean annual rainfall and it also experiences dramatic spatial variations in rainfall and discharge variability, spanning a wide range of the conditions reported on Earth. Kaua'i otherwise has minimal variations in lithology, vertical motion, and other factors that can influence erosion. River incision rates averaged over 1.5 - 4.5 Myr timescales can be estimated along the lengths of Kauaian channels from the depths of river canyons and lava flow ages. We characterize rainfall and discharge variability on Kaua'i using records from an extensive network of rain and stream gauges spanning the past century. We use these characterizations to model long-term bedrock river incision along Kauaian channels with a threshold-dependent incision law, modulated by site-specific discharge-channel width scalings. Our comparisons between modeled and observed erosion rates suggest that variations in river incision rates on Kaua'i are dominated by variations in mean rainfall and discharge, rather than by differences in storminess across the island. We explore the implications of this result for the threshold dependence of river incision across Earth's varied climates.

  20. Active adaptive management for reintroduction of an animal population

    USGS Publications Warehouse

    Runge, Michael C.

    2013-01-01

    Captive animals are frequently reintroduced to the wild in the face of uncertainty, but that uncertainty can often be reduced over the course of the reintroduction effort, providing the opportunity for adaptive management. One common uncertainty in reintroductions is the short-term survival rate of released adults (a release cost), an important factor because it can affect whether releasing adults or juveniles is better. Information about this rate can improve the success of the reintroduction program, but does the expected gain offset the costs of obtaining the information? I explored this question for reintroduction of the griffon vulture (Gyps fulvus) by framing the management question as a belief Markov decision process, characterizing uncertainty about release cost with 2 information state variables, and finding the solution using stochastic dynamic programming. For a reintroduction program of fixed length (e.g., 5 years of releases), the optimal policy in the final release year resembles the deterministic solution: release either all adults or all juveniles depending on whether the point estimate for the survival rate in question is above or below a specific threshold. But the optimal policy in the earlier release years 1) includes release of a mixture of juveniles and adults under some circumstances, and 2) recommends release of adults even when the point estimate of survival is much less than the deterministic threshold. These results show that in an iterated decision setting, the optimal decision in early years can be quite different from that in later years because of the value of learning. 

  1. An adaptive detector and channel estimator for deep space optical communications

    NASA Technical Reports Server (NTRS)

    Mukai, R.; Arabshahi, P.; Yan, T. Y.

    2001-01-01

    This paper will discuss the design and testing of both the channel parameter identification system, and the adaptive threshold system, and illustrate their advantages and performance under simulated channel degradation conditions.

  2. Generation of Comprehensive Surrogate Kinetic Models and Validation Databases for Simulating Large Molecular Weight Hydrocarbon Fuels

    DTIC Science & Technology

    2012-10-25

    Matching the "real fuel combustion property targets" of hydrogen/carbon molar ratio (H/C), derived cetane number (DCN), threshold sooting index (TSI), and average mean molecular weight (MWave) of ... diffusive soot extinction configurations.

  3. Modelling malaria incidence with environmental dependency in a locality of Sudanese savannah area, Mali

    PubMed Central

    Gaudart, Jean; Touré, Ousmane; Dessay, Nadine; Dicko, Alassane; Ranque, Stéphane; Forest, Loic; Demongeot, Jacques; Doumbo, Ogobara K

    2009-01-01

    Background The risk of Plasmodium falciparum infection is variable over space and time and this variability is related to environmental variability. Environmental factors affect the biological cycle of both vector and parasite. Despite this strong relationship, environmental effects have rarely been included in malaria transmission models. Remote sensing data on environment were incorporated into a temporal model of the transmission, to forecast the evolution of malaria epidemiology, in a locality of Sudanese savannah area. Methods A dynamic cohort was constituted in June 1996 and followed up until June 2001 in the locality of Bancoumana, Mali. The 15-day composite vegetation index (NDVI), issued from satellite imagery series (NOAA) from July 1981 to December 2006, was used as remote sensing data. The statistical relationship between NDVI and incidence of P. falciparum infection was assessed by ARIMA analysis. ROC analysis provided an NDVI value for the prediction of an increase in incidence of parasitaemia. Malaria transmission was modelled using an SIRS-type model, adapted to Bancoumana's data. Environmental factors influenced vector mortality and aggressiveness, as well as length of the gonotrophic cycle. NDVI observations from 1981 to 2001 were used for the simulation of the extrinsic variable of a hidden Markov chain model. Observations from 2002 to 2006 served as external validation. Results The seasonal pattern of P. falciparum incidence was significantly explained by NDVI, with a delay of 15 days (p = 0.001). An NDVI threshold of 0.361 (p = 0.007) provided a Diagnostic Odd Ratio (DOR) of 2.64 (CI95% [1.26;5.52]). The deterministic transmission model, with stochastic environmental factor, predicted an endemo-epidemic pattern of malaria infection. The incidences of parasitaemia were adequately modelled, using the observed NDVI as well as the NDVI simulations. Transmission pattern have been modelled and observed values were adequately predicted. The error parameters have shown the smallest values for a monthly model of environmental changes. Conclusion Remote-sensed data were coupled with field study data in order to drive a malaria transmission model. Several studies have shown that the NDVI presents significant correlations with climate variables, such as precipitations particularly in Sudanese savannah environments. Non-linear model combining environmental variables, predisposition factors and transmission pattern can be used for community level risk evaluation. PMID:19361335
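
    For reference, the diagnostic odds ratio quoted above is the cross-product ratio of a 2 x 2 table of threshold exceedance versus outcome; the counts in the sketch below are invented, not the study data.

        def diagnostic_odds_ratio(tp, fp, fn, tn):
            """Diagnostic odds ratio for a binary predictor, e.g. NDVI above a
            chosen threshold predicting an increase in parasitaemia incidence."""
            return (tp * tn) / (fp * fn)

        # hypothetical counts only
        print(round(diagnostic_odds_ratio(tp=28, fp=22, fn=15, tn=35), 2))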

  4. Modelling malaria incidence with environmental dependency in a locality of Sudanese savannah area, Mali.

    PubMed

    Gaudart, Jean; Touré, Ousmane; Dessay, Nadine; Dicko, A Lassane; Ranque, Stéphane; Forest, Loic; Demongeot, Jacques; Doumbo, Ogobara K

    2009-04-10

    The risk of Plasmodium falciparum infection is variable over space and time and this variability is related to environmental variability. Environmental factors affect the biological cycle of both vector and parasite. Despite this strong relationship, environmental effects have rarely been included in malaria transmission models.Remote sensing data on environment were incorporated into a temporal model of the transmission, to forecast the evolution of malaria epidemiology, in a locality of Sudanese savannah area. A dynamic cohort was constituted in June 1996 and followed up until June 2001 in the locality of Bancoumana, Mali. The 15-day composite vegetation index (NDVI), issued from satellite imagery series (NOAA) from July 1981 to December 2006, was used as remote sensing data.The statistical relationship between NDVI and incidence of P. falciparum infection was assessed by ARIMA analysis. ROC analysis provided an NDVI value for the prediction of an increase in incidence of parasitaemia.Malaria transmission was modelled using an SIRS-type model, adapted to Bancoumana's data. Environmental factors influenced vector mortality and aggressiveness, as well as length of the gonotrophic cycle. NDVI observations from 1981 to 2001 were used for the simulation of the extrinsic variable of a hidden Markov chain model. Observations from 2002 to 2006 served as external validation. The seasonal pattern of P. falciparum incidence was significantly explained by NDVI, with a delay of 15 days (p = 0.001). An NDVI threshold of 0.361 (p = 0.007) provided a Diagnostic Odd Ratio (DOR) of 2.64 (CI95% [1.26;5.52]).The deterministic transmission model, with stochastic environmental factor, predicted an endemo-epidemic pattern of malaria infection. The incidences of parasitaemia were adequately modelled, using the observed NDVI as well as the NDVI simulations. Transmission pattern have been modelled and observed values were adequately predicted. The error parameters have shown the smallest values for a monthly model of environmental changes. Remote-sensed data were coupled with field study data in order to drive a malaria transmission model. Several studies have shown that the NDVI presents significant correlations with climate variables, such as precipitations particularly in Sudanese savannah environments. Non-linear model combining environmental variables, predisposition factors and transmission pattern can be used for community level risk evaluation.

  5. Fuel pin cladding

    DOEpatents

    Vaidyanathan, S.; Adamson, M.G.

    1986-01-28

    Disclosed is an improved fuel pin cladding, particularly adapted for use in breeder reactors, consisting of composite tubing with austenitic steel on the outer portion of the thickness of the tube wall and with nickel and/or ferritic material on the inner portion of the thickness of the tube wall. The nickel forms a sacrificial barrier as it reacts with certain fission products thereby reducing fission product activity at the austenitic steel interface. The ferritic material forms a preventive barrier for the austenitic steel as it is immune to liquid metal embrittlement. The improved cladding permits the use of high density fuel which in turn leads to a better breeding ratio in breeder reactors, and will increase the threshold at which failure occurs during temperature transients. 2 figs.

  6. Fuel pin cladding

    DOEpatents

    Vaidyanathan, S.; Adamson, M.G.

    1983-12-16

    An improved fuel pin cladding, particularly adapted for use in breeder reactors, is described which consists of composite tubing with austenitic steel on the outer portion of the thickness of the tube wall and with nickel and/or ferritic material on the inner portion of the thickness of the tube wall. The nickel forms a sacrificial barrier as it reacts with certain fission products thereby reducing fission product activity at the austenitic steel interface. The ferritic material forms a preventive barrier for the austenitic steel as it is immune to liquid metal embrittlement. The improved cladding permits the use of high density fuel which in turn leads to a better breeding ratio in breeder reactors, and will increase the threshold at which failure occurs during temperature transients.

  7. Target matching based on multi-view tracking

    NASA Astrophysics Data System (ADS)

    Liu, Yahui; Zhou, Changsheng

    2011-01-01

    A feature matching method based on Maximally Stable Extremal Regions (MSER) and the Scale Invariant Feature Transform (SIFT) is proposed to solve the problem of matching the same target across multiple cameras. The target foreground is extracted by applying frame differencing twice, and a bounding box regarded as the target region is calculated. Extremal regions are obtained by MSER; after being fitted with ellipses, these regions are normalized to unit circles and represented with SIFT descriptors. Initial matches are obtained by requiring the ratio of the closest to the second-closest descriptor distance to be below a threshold, and outlier points are eliminated with RANSAC. Experimental results indicate that the method effectively reduces computational complexity and is also robust to affine transformation, rotation, scale and illumination changes.
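
    A sketch of the ratio-test and RANSAC stages using OpenCV (cv2.SIFT_create assumes OpenCV 4.4 or later); the MSER ellipse-normalization stage described above is omitted, and the image file names in the usage comment are placeholders.

        import cv2
        import numpy as np

        def match_target(img1, img2, ratio=0.75):
            """SIFT descriptors + distance-ratio test + RANSAC outlier rejection."""
            sift = cv2.SIFT_create()
            kp1, des1 = sift.detectAndCompute(img1, None)
            kp2, des2 = sift.detectAndCompute(img2, None)
            matcher = cv2.BFMatcher(cv2.NORM_L2)
            good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
                    if m.distance < ratio * n.distance]      # ratio test
            if len(good) < 4:
                return good, None
            src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
            dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
            H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
            if H is None:
                return good, None
            inliers = [g for g, ok in zip(good, inlier_mask.ravel()) if ok]
            return inliers, H

        # usage (file names are placeholders):
        # view_a = cv2.imread("camera_a.png", cv2.IMREAD_GRAYSCALE)
        # view_b = cv2.imread("camera_b.png", cv2.IMREAD_GRAYSCALE)
        # matches, H = match_target(view_a, view_b)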

  8. Model-free adaptive control of supercritical circulating fluidized-bed boilers

    DOEpatents

    Cheng, George Shu-Xing; Mulkey, Steven L

    2014-12-16

    A novel 3-Input-3-Output (3.times.3) Fuel-Air Ratio Model-Free Adaptive (MFA) controller is introduced, which can effectively control key process variables including Bed Temperature, Excess O2, and Furnace Negative Pressure of combustion processes of advanced boilers. A novel 7-input-7-output (7.times.7) MFA control system is also described for controlling a combined 3-Input-3-Output (3.times.3) process of Boiler-Turbine-Generator (BTG) units and a 5.times.5 CFB combustion process of advanced boilers. Those boilers include Circulating Fluidized-Bed (CFB) Boilers and Once-Through Supercritical Circulating Fluidized-Bed (OTSC CFB) Boilers.

  9. Propagation of high-energy laser beams through the earth's atmosphere II; Proceedings of the Meeting, Los Angeles, CA, Jan. 21-23, 1991

    NASA Technical Reports Server (NTRS)

    Ulrich, Peter B. (Editor); Wilson, Leroy E. (Editor)

    1991-01-01

    Consideration is given to turbulence at the inner scale, modeling turbulent transport in laser beam propagation, variable wind direction effects on thermal blooming correction, realistic wind effects on turbulence and thermal blooming compensation, wide bandwidth spectral measurements of atmospheric tilt turbulence, remote alignment of adaptive optical systems with far-field optimization, focusing infrared laser beams on targets in space without using adaptive optics, and a simplex optimization method for adaptive optics system alignment. Consideration is also given to ground-to-space multiline propagation at 1.3 micron, a path integral approach to thermal blooming, functional reconstruction predictions of uplink whole beam Strehl ratios in the presence of thermal blooming, and stability analysis of semidiscrete schemes for thermal blooming computation.

  10. Normal Threshold Size of Stimuli in Children Using a Game-Based Visual Field Test.

    PubMed

    Wang, Yanfang; Ali, Zaria; Subramani, Siddharth; Biswas, Susmito; Fenerty, Cecilia; Henson, David B; Aslam, Tariq

    2017-06-01

    The aim of this study was to demonstrate and explore the ability of novel game-based perimetry to establish normal visual field thresholds in children. One hundred and eighteen children (aged 8.0 ± 2.8 years old) with no history of visual field loss or significant medical history were recruited. Each child had one eye tested using a game-based visual field test 'Caspar's Castle' at four retinal locations 12.7° (N = 118) from fixation. Thresholds were established repeatedly using up/down staircase algorithms with stimuli of varying diameter (luminance 20 cd/m 2 , duration 200 ms, background luminance 10 cd/m 2 ). Relationships between threshold and age were determined along with measures of intra- and intersubject variability. The Game-based visual field test was able to establish threshold estimates in the full range of children tested. Threshold size reduced with increasing age in children. Intrasubject variability and intersubject variability were inversely related to age in children. Normal visual field thresholds were established for specific locations in children using a novel game-based visual field test. These could be used as a foundation for developing a game-based perimetry screening test for children.

  11. The threshold signal:noise ratio in the perception of fragmented figures.

    PubMed

    Merkul'ev, A V; Pronin, S V; Semenov, L A; Foreman, N; Chikhman, V N; Shelepin, Yu E

    2006-01-01

    Perception thresholds were measured for fragmented outline figures (the Gollin test). A new approach to the question of the perception of incomplete images was developed. In this approach, figure fragmentation consisted of masking with multiplicative texture-like noise; this interference was termed "invisible" masking. The first series of studies established that the "similarity" between the amplitude-frequency spectra of test figures and "invisible" masks, expressed as a linear correlation coefficient, had significant effects on the recognition thresholds of these figures. The second series of experiments showed that progressive formation of the figures was accompanied by increases in the correlation between their spatial-frequency characteristics and the corresponding characteristics of the incomplete figure, while the correlation with the "invisible" mask decreased. It is suggested that the ratio of the correlation coefficients, characterizing the "similarity" of the fragmented figure with the intact figure and the "invisible" mask, corresponds to the signal:noise ratio. The psychophysical recognition threshold for figures, for naive subjects not familiar with the test image alphabet, was reached at the level of fragmentation at which this ratio was unity.

  12. Cost–effectiveness thresholds: pros and cons

    PubMed Central

    Lauer, Jeremy A; De Joncheere, Kees; Edejer, Tessa; Hutubessy, Raymond; Kieny, Marie-Paule; Hill, Suzanne R

    2016-01-01

    Abstract Cost–effectiveness analysis is used to compare the costs and outcomes of alternative policy options. Each resulting cost–effectiveness ratio represents the magnitude of additional health gained per additional unit of resources spent. Cost–effectiveness thresholds allow cost–effectiveness ratios that represent good or very good value for money to be identified. In 2001, the World Health Organization’s Commission on Macroeconomics in Health suggested cost–effectiveness thresholds based on multiples of a country’s per-capita gross domestic product (GDP). In some contexts, in choosing which health interventions to fund and which not to fund, these thresholds have been used as decision rules. However, experience with the use of such GDP-based thresholds in decision-making processes at country level shows them to lack country specificity and this – in addition to uncertainty in the modelled cost–effectiveness ratios – can lead to the wrong decision on how to spend health-care resources. Cost–effectiveness information should be used alongside other considerations – e.g. budget impact and feasibility considerations – in a transparent decision-making process, rather than in isolation based on a single threshold value. Although cost–effectiveness ratios are undoubtedly informative in assessing value for money, countries should be encouraged to develop a context-specific process for decision-making that is supported by legislation, has stakeholder buy-in, for example the involvement of civil society organizations and patient groups, and is transparent, consistent and fair. PMID:27994285

  13. Calculating the dim light melatonin onset: the impact of threshold and sampling rate.

    PubMed

    Molina, Thomas A; Burgess, Helen J

    2011-10-01

    The dim light melatonin onset (DLMO) is the most reliable circadian phase marker in humans, but the cost of assaying samples is relatively high. Therefore, the authors examined differences between DLMOs calculated from hourly versus half-hourly sampling and differences between DLMOs calculated with two recommended thresholds (a fixed threshold of 3 pg/mL and a variable "3k" threshold equal to the mean plus two standard deviations of the first three low daytime points). The authors calculated these DLMOs from salivary dim light melatonin profiles collected from 122 individuals (64 women) at baseline. DLMOs derived from hourly sampling occurred on average only 6-8 min earlier than the DLMOs derived from half-hourly saliva sampling, and they were highly correlated with each other (r ≥ 0.89, p < .001). However, in up to 19% of cases the DLMO derived from hourly sampling was >30 min from the DLMO derived from half-hourly sampling. The 3 pg/mL threshold produced significantly less variable DLMOs than the 3k threshold. However, the 3k threshold was significantly lower than the 3 pg/mL threshold (p < .001). The DLMOs calculated with the 3k method were significantly earlier (by 22-24 min) than the DLMOs calculated with the 3 pg/mL threshold, regardless of sampling rate. These results suggest that in large research studies and clinical settings, the more affordable and practical option of hourly sampling is adequate for a reasonable estimate of circadian phase. Although the 3 pg/mL fixed threshold is less variable than the 3k threshold, it produces estimates of the DLMO that are further from the initial rise of melatonin.
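
    A minimal sketch of the two DLMO criteria compared above (the fixed 3 pg/mL threshold versus a "3k" threshold equal to the mean plus two standard deviations of the first three daytime samples), with the onset located by linear interpolation between samples; the melatonin profile is invented, not subject data.

        import numpy as np

        def dlmo(times_h, melatonin_pg_ml, threshold=None):
            """Dim light melatonin onset: clock time at which the profile first
            crosses the threshold (linear interpolation between samples).
            threshold=None uses the fixed 3 pg/mL criterion."""
            thr = 3.0 if threshold is None else threshold
            for i in range(1, len(times_h)):
                lo, hi = melatonin_pg_ml[i - 1], melatonin_pg_ml[i]
                if lo < thr <= hi:
                    frac = (thr - lo) / (hi - lo)
                    return times_h[i - 1] + frac * (times_h[i] - times_h[i - 1])
            return None

        t = np.arange(18.0, 24.0, 0.5)                 # half-hourly samples, hours
        mel = np.array([0.8, 1.1, 0.9, 1.3, 1.9, 2.6, 3.8, 6.0, 9.5, 14.0, 18.5, 21.0])
        three_k = mel[:3].mean() + 2 * mel[:3].std(ddof=1)
        print("DLMO (3 pg/mL):", round(dlmo(t, mel), 2), "h")
        print(f"DLMO (3k = {three_k:.2f} pg/mL):", round(dlmo(t, mel, three_k), 2), "h")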

  14. [Distribution of waist circumference and waist-to-height ratio by categories of body mass index in patients attended in endocrinology and nutrition units].

    PubMed

    López De La Torre, Martín; Bellido Guerrero, Diego; Vidal Cortada, Josep; Soto González, Alfonso; García Malpartida, Katherinne; Hernandez-Mijares, Antonio

    2010-12-01

    Waist circumference (WC) and the waist-to-height ratio (WHtR) are anthropometric measures widely used in clinical practice to evaluate visceral fat and the consequent cardiovascular risk. However, risk thresholds should be standardized according to body mass index (BMI). To determine the distribution of WC and WHtR according to the BMI cut-points currently used to describe overweight and obesity. WC, WHtR and BMI were measured in 3521 adult patients (>18 years) attended in Endocrinology and Nutrition units. A total of 20.8% (734 patients) were diabetic. Obesity was found in 82.1% of diabetic patients and in 75% of non-diabetic patients. The WC thresholds proposed by the National Institutes of Health (102 cm in men, 88 cm in women), Bray (100 cm in men, 90 cm in women) and the International Diabetes Federation (94 cm in men, 80 cm in women) were exceeded by 92.9%, 94.8% and 98.4% of obese men, 96.8%, 95.5% and 99.7% of obese women, 79.1%, 83.1% and 90% of diabetic men and 95.5%, 81.5% and 97.4% of diabetic women, respectively. Thresholds adapted to the degree of obesity (90, 100, 110 and 125 cm in men and 80, 90, 105 and 115 cm in women for normal BMI, overweight, obesity I and obesity greater than I) were exceeded by 58.4% of obese men, 54.2% of obese women, 57.5% of diabetic men and 60.7% of diabetic women. WC was higher in men, and BMI and the WHtR were higher in women. The WC of diabetic women equalled that of men, and WC, WHtR and BMI were higher in diabetic than in non-diabetic women (p<0.001). WC (p<0.005), WHtR (p<0.001) and BMI (p<0.5) were also higher in diabetic than in non-diabetic men. WC and WHtR thresholds by BMI discriminated diabetic and obese patients better than single thresholds, and can be represented graphically by the distribution of percentile ranks of WC and WHtR by BMI. Copyright © 2009 SEEN. Published by Elsevier España. All rights reserved.
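
    A small lookup sketch of the BMI-graded waist-circumference cut-offs quoted above; the BMI category boundaries of 25, 30 and 35 kg/m2 are assumed (the abstract does not restate them), so treat the mapping as illustrative.

        def wc_threshold_cm(bmi, sex):
            """Waist-circumference cut-off adapted to BMI category, following the
            graded thresholds in the abstract (90/100/110/125 cm for men and
            80/90/105/115 cm for women across normal weight, overweight,
            obesity I and obesity greater than I)."""
            cuts = (90, 100, 110, 125) if sex == "M" else (80, 90, 105, 115)
            if bmi < 25:
                return cuts[0]      # normal BMI (assumed boundary)
            if bmi < 30:
                return cuts[1]      # overweight
            if bmi < 35:
                return cuts[2]      # obesity class I
            return cuts[3]          # obesity greater than class I

        print(wc_threshold_cm(bmi=32.4, sex="F"))   # -> 105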

  15. SU-E-J-266: Cone Beam Computed Tomography (CBCT) Inter-Scan and Inter-Observer Tumor Volume Variability Assessment in Patients Treated with Stereotactic Body Radiation Therapy (SBRT) for Early Stage Non-Small Cell Lung Cancer (NSCLC)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hou, Y; Aileen, C; Kozono, D

    Purpose: Quantification of volume changes on CBCT during SBRT for NSCLC may provide a useful radiological marker for radiation response and adaptive treatment planning, but the reproducibility of CBCT volume delineation is a concern. This study aims to quantify inter-scan/inter-observer variability in tumor volume delineation on CBCT. Methods: Twenty early-stage (stage I and II) NSCLC patients were included in this analysis. All patients were treated with SBRT with a median dose of 54 Gy in 3 to 5 fractions. Two physicians independently manually contoured the primary gross tumor volume on CBCTs taken immediately before SBRT treatment (Pre) and after the same SBRT treatment (Post). Absolute volume differences (AVD) were calculated between the Pre and Post CBCTs for a given treatment to quantify inter-scan variability, and then between the two observers for a given CBCT to quantify inter-observer variability. AVD was also normalized with respect to average volume to obtain relative volume differences (RVD). A Bland-Altman approach was used to evaluate variability. All statistics were calculated with SAS version 9.4. Results: The 95% limits of agreement (mean ± 2SD) on AVD and RVD measurements between Pre and Post scans were −0.32 cc to 0.32 cc and −0.5% to 0.5%, versus −1.9 cc to 1.8 cc and −15.9% to 15.3% for the two observers, respectively. The 95% limits of agreement of AVD and RVD between the two observers were −3.3 cc to 2.3 cc and −42.4% to 28.2%, respectively. The greatest variability in inter-scan RVD was observed with very small tumors (< 5 cc). Conclusion: Inter-scan variability in RVD is greatest with small tumors. Inter-observer variability was larger than inter-scan variability. The 95% limits of agreement for inter-observer and inter-scan variability (∼15–30%) help define a threshold for clinically meaningful change in tumor volume to assess SBRT response, with larger thresholds needed for very small tumors. Part of the work was funded by a Kaye award. Disclosure/Conflict of interest: Raymond H. Mak: Stock ownership: Celgene, Inc. Consulting: Boehringer-Ingelheim, Inc.
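
    For reference, the Bland-Altman 95% limits of agreement used above are simply the mean paired difference plus or minus two standard deviations; the sketch below uses invented volumes, not the study data.

        import numpy as np

        def limits_of_agreement(volumes_a, volumes_b):
            """Bland-Altman 95% limits of agreement (mean difference +/- 2 SD)
            for paired volume measurements, e.g. two observers or Pre/Post scans."""
            diff = np.asarray(volumes_a, float) - np.asarray(volumes_b, float)
            mean_diff, sd = diff.mean(), diff.std(ddof=1)
            return mean_diff - 2 * sd, mean_diff + 2 * sd

        obs1 = [4.2, 7.9, 12.5, 3.1, 22.4, 6.6]   # hypothetical GTV volumes, cc
        obs2 = [4.6, 7.1, 13.0, 3.9, 21.0, 6.1]
        low, high = limits_of_agreement(obs1, obs2)
        print(f"95% limits of agreement: {low:.2f} cc to {high:.2f} cc")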

  16. Automatic segmentation of lung parenchyma based on curvature of ribs using HRCT images in scleroderma studies

    NASA Astrophysics Data System (ADS)

    Prasad, M. N.; Brown, M. S.; Ahmad, S.; Abtin, F.; Allen, J.; da Costa, I.; Kim, H. J.; McNitt-Gray, M. F.; Goldin, J. G.

    2008-03-01

    Segmentation of lungs in the setting of scleroderma is a major challenge in medical image analysis. Threshold based techniques tend to leave out lung regions that have increased attenuation, for example in the presence of interstitial lung disease or in noisy low dose CT scans. The purpose of this work is to perform segmentation of the lungs using a technique that selects an optimal threshold for a given scleroderma patient by comparing the curvature of the lung boundary to that of the ribs. Our approach is based on adaptive thresholding and it tries to exploit the fact that the curvature of the ribs and the curvature of the lung boundary are closely matched. At first, the ribs are segmented and a polynomial is used to represent the ribs' curvature. A threshold value to segment the lungs is selected iteratively such that the deviation of the lung boundary from the polynomial is minimized. A Naive Bayes classifier is used to build the model for selection of the best fitting lung boundary. The performance of the new technique was compared against a standard approach using a simple fixed threshold of -400HU followed by regiongrowing. The two techniques were evaluated against manual reference segmentations using a volumetric overlap fraction (VOF) and the adaptive threshold technique was found to be significantly better than the fixed threshold technique.

  17. Monitoring Start of Season in Alaska

    NASA Astrophysics Data System (ADS)

    Robin, J.; Dubayah, R.; Sparrow, E.; Levine, E.

    2006-12-01

    In biomes that have distinct winter seasons, start of spring phenological events, specifically timing of budburst and green-up of leaves, coincides with transpiration. Seasons leave annual signatures that reflect the dynamic nature of the hydrologic cycle and link the different spheres of the Earth system. This paper evaluates whether continuity between AVHRR and MODIS normalized difference vegetation index (NDVI) is achievable for monitoring land surface phenology, specifically start of season (SOS), in Alaska. Additionally, two thresholds, one based on NDVI and the other on accumulated growing degree-days (GDD), are compared to determine which most accurately predicts SOS for Fairbanks. Ratio of maximum greenness at SOS was computed from biweekly AVHRR and MODIS composites for 2001 through 2004 for Anchorage and Fairbanks regions. SOS dates were determined from annual green-up observations made by GLOBE students. Results showed that different processing as well as spectral characteristics of each sensor restrict continuity between the two datasets. MODIS values were consistently higher and had less inter-annual variability during the height of the growing season than corresponding AVHRR values. Furthermore, a threshold of 131-175 accumulated GDD was a better predictor of SOS for Fairbanks than a NDVI threshold applied to AVHRR and MODIS datasets. The NDVI threshold was developed from biweekly AVHRR composites from 1982 through 2004 and corresponding annual green-up observations at University of Alaska-Fairbanks (UAF). The GDD threshold was developed from 20+ years of historic daily mean air temperature data and the same green-up observations. SOS dates computed with the GDD threshold most closely resembled actual green-up dates observed by GLOBE students and UAF researchers. Overall, biweekly composites and effects of clouds, snow, and conifers limit the ability of NDVI to monitor phenological changes in Alaska.
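
    A minimal sketch of the accumulated growing-degree-day criterion; the 131 GDD value comes from the range quoted above, while the base temperature of 0 deg C and the synthetic temperature series are assumptions.

        import numpy as np

        def start_of_season_doy(daily_mean_temp_c, gdd_threshold=131.0, base_c=0.0):
            """Day of year on which accumulated growing degree-days (relative to
            an assumed base temperature) first reach the threshold."""
            gdd = np.cumsum(np.maximum(daily_mean_temp_c - base_c, 0.0))
            reached = np.nonzero(gdd >= gdd_threshold)[0]
            return int(reached[0]) + 1 if reached.size else None

        doy = np.arange(1, 366)
        temps = -15.0 + 30.0 * np.sin((doy - 20) * np.pi / 365) ** 2   # synthetic temperatures
        print("modelled start of season: day", start_of_season_doy(temps))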

  18. Genes under weaker stabilizing selection increase network evolvability and rapid regulatory adaptation to an environmental shift.

    PubMed

    Laarits, T; Bordalo, P; Lemos, B

    2016-08-01

    Regulatory networks play a central role in the modulation of gene expression, the control of cellular differentiation, and the emergence of complex phenotypes. Regulatory networks could constrain or facilitate evolutionary adaptation in gene expression levels. Here, we model the adaptation of regulatory networks and gene expression levels to a shift in the environment that alters the optimal expression level of a single gene. Our analyses show signatures of natural selection on regulatory networks that both constrain and facilitate rapid evolution of gene expression level towards new optima. The analyses are interpreted from the standpoint of neutral expectations and illustrate the challenge to making inferences about network adaptation. Furthermore, we examine the consequence of variable stabilizing selection across genes on the strength and direction of interactions in regulatory networks and in their subsequent adaptation. We observe that directional selection on a highly constrained gene previously under strong stabilizing selection was more efficient when the gene was embedded within a network of partners under relaxed stabilizing selection pressure. The observation leads to the expectation that evolutionarily resilient regulatory networks will contain optimal ratios of genes whose expression is under weak and strong stabilizing selection. Altogether, our results suggest that the variable strengths of stabilizing selection across genes within regulatory networks might itself contribute to the long-term adaptation of complex phenotypes. © 2016 European Society For Evolutionary Biology. Journal of Evolutionary Biology © 2016 European Society For Evolutionary Biology.

  19. Monopolar Detection Thresholds Predict Spatial Selectivity of Neural Excitation in Cochlear Implants: Implications for Speech Recognition

    PubMed Central

    2016-01-01

    The objectives of the study were to (1) investigate the potential of using monopolar psychophysical detection thresholds for estimating spatial selectivity of neural excitation with cochlear implants and to (2) examine the effect of site removal on speech recognition based on the threshold measure. Detection thresholds were measured in Cochlear Nucleus® device users using monopolar stimulation for pulse trains that were of (a) low rate and long duration, (b) high rate and short duration, and (c) high rate and long duration. Spatial selectivity of neural excitation was estimated by a forward-masking paradigm, where the probe threshold elevation in the presence of a forward masker was measured as a function of masker-probe separation. The strength of the correlation between the monopolar thresholds and the slopes of the masking patterns systematically reduced as neural response of the threshold stimulus involved interpulse interactions (refractoriness and sub-threshold adaptation), and spike-rate adaptation. Detection threshold for the low-rate stimulus most strongly correlated with the spread of forward masking patterns and the correlation reduced for long and high rate pulse trains. The low-rate thresholds were then measured for all electrodes across the array for each subject. Subsequently, speech recognition was tested with experimental maps that deactivated five stimulation sites with the highest thresholds and five randomly chosen ones. Performance with deactivating the high-threshold sites was better than performance with the subjects’ clinical map used every day with all electrodes active, in both quiet and background noise. Performance with random deactivation was on average poorer than that with the clinical map but the difference was not significant. These results suggested that the monopolar low-rate thresholds are related to the spatial neural excitation patterns in cochlear implant users and can be used to select sites for more optimal speech recognition performance. PMID:27798658

  20. The role of internal climate variability for interpreting climate change scenarios

    NASA Astrophysics Data System (ADS)

    Maraun, Douglas

    2013-04-01

    When communicating information on climate change, the use of multi-model ensembles has been advocated to sample uncertainties over a range as wide as possible. To meet the demand for easily accessible results, the ensemble is often summarised by its multi-model mean signal. In rare cases, additional uncertainty measures are given to avoid losing all information on the ensemble spread, e.g., the highest and lowest projected values. Such approaches, however, disregard the fundamentally different nature of the different types of uncertainties and might cause wrong interpretations and subsequently wrong decisions for adaptation. Whereas scenario and climate model uncertainties are of epistemic nature, i.e., caused by an in principle reducible lack of knowledge, uncertainties due to internal climate variability are aleatory, i.e., inherently stochastic and irreducible. As wisely stated in the proverb "climate is what you expect, weather is what you get", a specific region will experience one stochastic realisation of the climate system, but never exactly the expected climate change signal as given by a multi-model mean. Depending on the meteorological variable, region and lead time, the signal might be strong or weak compared to the stochastic component. In cases of a low signal-to-noise ratio, even if the climate change signal is a well-defined trend, no trends or even opposite trends might be experienced. Here I propose to use the time of emergence (TOE) to quantify and communicate when climate change trends will exceed the internal variability. The TOE provides a useful measure for end users to assess the time horizon for implementing adaptation measures. Furthermore, internal variability is scale-dependent: the more local the scale, the stronger the influence of internal climate variability. Thus, investigating the TOE as a function of spatial scale could help to assess the required spatial scale for implementing adaptation measures. I exemplify this proposal with a recently published study on the TOE for mean and heavy precipitation trends in Europe. In some regions trends emerge only late in the 21st century or even later, suggesting that in these regions adaptation to internal variability rather than to climate change is required. Yet in other regions the climate change signal is strong, urging for timely adaptation. Douglas Maraun, When at what scale will trends in European mean and heavy precipitation emerge? Env. Res. Lett., in press, 2013.
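
    A minimal sketch of one common time-of-emergence definition, assuming the forced signal and an estimate of internal variability are already available; the permanent-exceedance rule and the factor k = 2 are assumptions, not necessarily the criterion used in the cited study.

        import numpy as np

        def time_of_emergence(years, signal, sigma_internal, k=2.0):
            """First year after which |signal| stays above k * sigma of internal variability.

            years          : 1D array of calendar years.
            signal         : forced climate-change signal (e.g., a smoothed multi-model
                             mean trend), same length as years.
            sigma_internal : standard deviation of internal variability (scalar),
                             e.g., estimated from a pre-industrial control run.
            """
            exceeds = np.abs(signal) > k * sigma_internal
            for i in range(len(years)):
                if exceeds[i:].all():          # permanent emergence from year i onward
                    return years[i]
            return None                        # no emergence within the analysed period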

  1. QUEST+: A general multidimensional Bayesian adaptive psychometric method.

    PubMed

    Watson, Andrew B

    2017-03-01

    QUEST+ is a Bayesian adaptive psychometric testing method that allows an arbitrary number of stimulus dimensions, psychometric function parameters, and trial outcomes. It is a generalization and extension of the original QUEST procedure and incorporates many subsequent developments in the area of parametric adaptive testing. With a single procedure, it is possible to implement a wide variety of experimental designs, including conventional threshold measurement; measurement of psychometric function parameters, such as slope and lapse; estimation of the contrast sensitivity function; measurement of increment threshold functions; measurement of noise-masking functions; Thurstone scale estimation using pair comparisons; and categorical ratings on linear and circular stimulus dimensions. QUEST+ provides a general method to accelerate data collection in many areas of cognitive and perceptual science.
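
    The sketch below is not the QUEST+ implementation; it is a minimal one-parameter illustration of the underlying idea, a grid posterior over a threshold parameter updated after each trial, with the next stimulus chosen to minimise expected posterior entropy. The psychometric function, the grids, and the observer_responds call are illustrative assumptions.

        import numpy as np

        thresholds = np.linspace(-2.0, 2.0, 81)    # hypothesis grid for the threshold
        stimuli = np.linspace(-2.0, 2.0, 41)       # available stimulus intensities
        posterior = np.ones_like(thresholds) / thresholds.size

        def p_correct(stim, thr, slope=3.0, guess=0.5, lapse=0.02):
            """Probability of a correct response given stimulus and threshold hypothesis."""
            return guess + (1.0 - guess - lapse) / (1.0 + np.exp(-slope * (stim - thr)))

        def entropy(p):
            p = p / p.sum()
            return -np.sum(p * np.log(p + 1e-12))

        def next_stimulus(posterior):
            """Choose the stimulus that minimises the expected posterior entropy."""
            best_s, best_h = None, np.inf
            for s in stimuli:
                p1 = p_correct(s, thresholds)              # P(correct | each threshold)
                h_exp = 0.0
                for like in (p1, 1.0 - p1):                # the two possible trial outcomes
                    joint = posterior * like
                    h_exp += joint.sum() * entropy(joint)  # P(outcome) * H(posterior | outcome)
                if h_exp < best_h:
                    best_s, best_h = s, h_exp
            return best_s

        def update(posterior, stim, correct):
            like = p_correct(stim, thresholds) if correct else 1.0 - p_correct(stim, thresholds)
            post = posterior * like
            return post / post.sum()

        # one trial (observer_responds is a hypothetical stand-in for the experiment):
        # s = next_stimulus(posterior); posterior = update(posterior, s, observer_responds(s))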

  2. Method and apparatus for detection of catalyst failure on-board a motor vehicle using a dual oxygen sensor and an algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clemmens, W.B.; Koupal, J.W.; Sabourin, M.A.

    1993-07-20

    Apparatus is described for detecting motor vehicle exhaust gas catalytic converter deterioration comprising a first exhaust gas oxygen sensor adapted for communication with an exhaust stream before passage of the exhaust stream through a catalytic converter and a second exhaust gas oxygen sensor adapted for communication with the exhaust stream after passage of the exhaust stream through the catalytic converter, an on-board vehicle computational means, said computational means adapted to accept oxygen content signals from the before and after catalytic converter oxygen sensors and adapted to generate signal threshold values, said computational means adapted to compare over repeated time intervals the oxygen content signals to the signal threshold values and to store the output of the compared oxygen content signals, and in response after a specified number of time intervals for a specified mode of motor vehicle operation to determine and indicate a level of catalyst deterioration.

  3. Threshold-adaptive canny operator based on cross-zero points

    NASA Astrophysics Data System (ADS)

    Liu, Boqi; Zhang, Xiuhua; Hong, Hanyu

    2018-03-01

    Canny edge detection[1] is a technique to extract useful structural information from different vision objects while dramatically reducing the amount of data to be processed. It has been widely applied in various computer vision systems. Two thresholds have to be set before edges can be separated from the background. Usually, two static values are chosen as the thresholds based on the experience of developers[2]. In this paper, a novel automatic thresholding method is proposed. The relation between the thresholds and cross-zero points is analyzed, and an interpolation function is derived to determine the thresholds. Comprehensive experimental results demonstrate the effectiveness of the proposed method and its advantage for stable edge detection under changing illumination.
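
    The interpolation from cross-zero points is not reproduced here; as a point of comparison, the sketch below shows a widely used alternative heuristic that also adapts the two Canny thresholds to each image, placing them around the median grey level (OpenCV assumed available).

        import cv2
        import numpy as np

        def auto_canny(image_gray, sigma=0.33):
            """Canny edge detection with image-adaptive thresholds.

            image_gray : 8-bit grayscale image. Thresholds are placed symmetrically
            around the median grey level (the common 'median heuristic'), which is a
            different rule from the cross-zero-point method of the paper.
            """
            med = np.median(image_gray)
            lower = int(max(0, (1.0 - sigma) * med))
            upper = int(min(255, (1.0 + sigma) * med))
            return cv2.Canny(image_gray, lower, upper)

        # usage:
        # gray  = cv2.cvtColor(cv2.imread("frame.png"), cv2.COLOR_BGR2GRAY)
        # edges = auto_canny(gray)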

  4. Characteristics of the gait adaptation process due to split-belt treadmill walking under a wide range of right-left speed ratios in humans.

    PubMed

    Yokoyama, Hikaru; Sato, Koji; Ogawa, Tetsuya; Yamamoto, Shin-Ichiro; Nakazawa, Kimitaka; Kawashima, Noritaka

    2018-01-01

    The adaptability of human bipedal locomotion has been studied using split-belt treadmill walking. Most previous studies used experimental protocols with markedly different split ratios (e.g., 1:2, 1:3, or 1:4), whereas there is limited research on the adaptive process under small speed ratios. Understanding the nature of the adaptive process at ratios smaller than 1:2 is important, because systematic evaluation of gait adaptation under small to moderate split ratios would enable us to examine the relative contribution of two forms of adaptation (reactive feedback and predictive feedforward control) to gait adaptation. We therefore examined gait behavior during split-belt treadmill adaptation under five belt-speed-difference conditions (from 1:1.2 to 1:2). Gait parameters related to reactive control (stance time) showed quick adjustments immediately after imposing split-belt walking at all five speed ratios. Meanwhile, parameters related to predictive control (step length and anterior force) showed a clear pattern of adaptation and subsequent aftereffects, except for the 1:1.2 adaptation. Additionally, the 1:1.2 ratio was distinguished from the other ratios by a cluster analysis based on the relationship between the size of the adaptation and the aftereffect. Our findings indicate that reactive feedback control was involved at all the speed ratios tested and that the extent of the reaction was proportionally dependent on the speed ratio of the split belt. In contrast, predictive feedforward control was required only when the split ratio was larger. These results allow us to consider how a given split-belt training condition would affect the relative contribution of the two strategies to gait adaptation, which must be considered when developing rehabilitation interventions for stroke patients.

  5. Towards developing drought impact functions to advance drought monitoring and early warning

    NASA Astrophysics Data System (ADS)

    Bachmair, Sophie; Stahl, Kerstin; Hannaford, Jamie; Svoboda, Mark

    2015-04-01

    In natural hazard analysis, damage functions (also referred to as vulnerability or susceptibility functions) relate hazard intensity to the negative effects of the hazard event, often expressed as damage ratio or monetary loss. While damage functions for floods and seismic hazards have gained considerable attention, there is little knowledge on how drought intensity translates into ecological and socioeconomic impacts. One reason for this is the multifaceted nature of drought affecting different domains of the hydrological cycle and different sectors of human activity (for example, recognizing meteorological - agricultural - hydrological - socioeconomic drought) leading to a wide range of drought impacts. Moreover, drought impacts are often non-structural and hard to quantify or monetarize (e.g. impaired navigability of streams, bans on domestic water use, increased mortality of aquatic species). Knowledge on the relationship between drought intensity and drought impacts, i.e. negative environmental, economic or social effects experienced under drought conditions, however, is vital to identify critical thresholds for drought impact occurrence. Such information may help to improve drought monitoring and early warning (M&EW), one goal of the international DrIVER project (Drought Impacts: Vulnerability thresholds in monitoring and Early-warning Research). The aim of this study is to test the feasibility of designing "drought impact functions" for case study areas in Europe (Germany and UK) and the United States to derive thresholds meaningful for drought impact occurrence; to account for the multidimensionality of drought impacts, we use the broader term "drought impact function" over "damage function". First steps towards developing empirical drought impact functions are (1) to identify meaningful indicators characterizing the hazard intensity (e.g. indicators expressing a precipitation or streamflow deficit), (2) to identify suitable variables representing impacts, damage, or loss due to drought, and (3) to test different statistical models to link drought intensity with drought impact information to derive meaningful thresholds. While the focus regarding drought impact variables lies on text-based impact reports from the European Drought Impact report Inventory (EDII) and the US Drought Impact Reporter (DIR), the information gain through exploiting other variables such as agricultural yield statistics and remotely sensed vegetation indices is explored. First results reveal interesting insights into the complex relationship between drought indicators and impacts and highlight differences among drought impact variables and geographies. Although a simple intensity threshold evoking specific drought impacts cannot be identified, developing drought impact functions helps to elucidate how drought conditions relate to ecological or socioeconomic impacts. Such knowledge may provide guidance for inferring meaningful triggers for drought M&EW and could have potential for a wide range of drought management applications (for example, building drought scenarios for testing the resilience of drought plans or water supply systems).

  6. Quantum Adiabatic Optimization and Combinatorial Landscapes

    NASA Technical Reports Server (NTRS)

    Smelyanskiy, V. N.; Knysh, S.; Morris, R. D.

    2003-01-01

    In this paper we analyze the performance of the Quantum Adiabatic Evolution (QAE) algorithm on a variant of the Satisfiability problem for an ensemble of random graphs parametrized by the ratio of clauses to variables, gamma = M / N. We introduce a set of macroscopic parameters (landscapes) and put forward an ansatz of universality for random bit flips. We then formulate the problem of finding the smallest eigenvalue and the excitation gap as a statistical mechanics problem. We use the so-called annealing approximation with a refinement that a finite set of macroscopic variables (versus only energy) is used, and are able to show the existence of a dynamic threshold gamma = gamma_d, beyond which QAE should take an exponentially long time to find a solution. We compare the results for extended and simplified sets of landscapes and provide numerical evidence in support of our universality ansatz.

  7. Do Optimal Prognostic Thresholds in Continuous Physiological Variables Really Exist? Analysis of Origin of Apparent Thresholds, with Systematic Review for Peak Oxygen Consumption, Ejection Fraction and BNP

    PubMed Central

    Leong, Tora; Rehman, Michaela B.; Pastormerlo, Luigi Emilio; Harrell, Frank E.; Coats, Andrew J. S.; Francis, Darrel P.

    2014-01-01

    Background: Clinicians are sometimes advised to make decisions using thresholds in measured variables, derived from prognostic studies. Objectives: We studied why there are conflicting apparently-optimal prognostic thresholds, for example in exercise peak oxygen uptake (pVO2), ejection fraction (EF), and Brain Natriuretic Peptide (BNP) in heart failure (HF). Data Sources and Eligibility Criteria: Studies testing pVO2, EF or BNP prognostic thresholds in heart failure, published between 1990 and 2010, listed on Pubmed. Methods: First, we examined studies testing pVO2, EF or BNP prognostic thresholds. Second, we created repeated simulations of 1500 patients to identify whether an apparently-optimal prognostic threshold indicates a step change in risk. Results: 33 studies (8946 patients) tested a pVO2 threshold. 18 found it prognostically significant: the actual reported threshold ranged widely (10–18 ml/kg/min) but was overwhelmingly controlled by the individual study population's mean pVO2 (r = 0.86, p<0.00001). In contrast, the 15 negative publications were testing thresholds 199% further from their means (p = 0.0001). Likewise, of 35 EF studies (10220 patients), the thresholds in the 22 positive reports were strongly determined by study means (r = 0.90, p<0.0001). Similarly, in the 19 positives of 20 BNP studies (9725 patients): r = 0.86 (p<0.0001). Second, survival simulations always discovered a "most significant" threshold, even when there was definitely no step change in mortality. With linear increase in risk, the apparently-optimal threshold was always near the sample mean (r = 0.99, p<0.001). Limitations: This study cannot report the best threshold for any of these variables; instead it explains how common clinical research procedures routinely produce false thresholds. Key Findings: First, shifting (and/or disappearance) of an apparently-optimal prognostic threshold is strongly determined by studies' average pVO2, EF or BNP. Second, apparently-optimal thresholds always appear, even with no step in prognosis. Conclusions: Emphatic therapeutic guidance based on thresholds from observational studies may be ill-founded. We should not assume that optimal thresholds, or any thresholds, exist. PMID:24475020
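
    A minimal re-creation of the simulation argument, under stated assumptions (a pVO2-like variable, linearly increasing mortality risk with no step, chi-square tests over candidate thresholds); it illustrates how a "most significant" threshold emerges near the sample mean even though no true threshold exists.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)

        def most_significant_threshold(n=1500):
            """Simulate a cohort with linearly increasing risk (no step) and return the
            threshold that minimises the chi-square p-value for mortality above vs below it."""
            x = rng.normal(15.0, 4.0, n)                           # a pVO2-like variable (ml/kg/min)
            risk = np.clip(0.3 - 0.015 * (x - 15.0), 0.05, 0.95)   # risk linear in x, no step anywhere
            died = rng.random(n) < risk
            best_thr, best_p = None, 1.0
            for thr in np.quantile(x, np.linspace(0.1, 0.9, 81)):
                lo, hi = x < thr, x >= thr
                table = [[died[lo].sum(), (~died[lo]).sum()],
                         [died[hi].sum(), (~died[hi]).sum()]]
                chi2, p, dof, expected = stats.chi2_contingency(table)
                if p < best_p:
                    best_thr, best_p = thr, p
            return best_thr, best_p, x.mean()

        thr, p, mean = most_significant_threshold()
        print(f"apparently optimal threshold {thr:.1f} (p = {p:.3g}) vs sample mean {mean:.1f}")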

  8. Dealing with Unknown Variables in Policy/Program Evaluation.

    ERIC Educational Resources Information Center

    Nagel, Stuart S.

    1983-01-01

    Threshold analysis (TA) is introduced as an evaluation model. TA converts unknown variables into questions as to whether a given benefit, cost, or success probability is more or less than a threshold, above which the proposed project would be profitable, and below which it would be unprofitable. (Author/PN)

  9. Contrast adaptation induced by defocus - a possible error signal for emmetropization?

    PubMed

    Ohlendorf, Arne; Schaeffel, Frank

    2009-01-01

    To describe some features of contrast adaptation as induced by imposed positive or negative defocus, and to study its time course and selectivity for the sign of the imposed defocus. Contrast adaptation, CA (here referred to as any change in supra-threshold contrast sensitivity), was induced by presenting a movie to the subjects on a computer screen at 1 m distance for 10 min, while the right eye was defocused by a trial lens (+4 D, n=25; -4 D, n=10; -2 D, n=11 subjects). The PowerRefractor was used to track accommodation binocularly. Contrast sensitivity at threshold was measured by a method of adjustment with a Gabor patch of 1 deg angular subtense, filled with a 3.22 cyc/deg sine wave grating presented on a computer screen at 1 m distance on a gray background (33 cd/m^2). Supra-threshold contrast sensitivity was quantified by an interocular contrast matching task, in which the subject had to match the contrast of the sine wave grating seen with the right eye with the contrast of a grating with a fixed contrast of 0.1. (1) Contrast sensitivity thresholds were not lowered by previous viewing of defocused movies. (2) By wearing positive lenses, the supra-threshold contrast sensitivity in the right eye was raised by about 30% and remained elevated for at least 2 min until baseline was reached after about 5 min. (3) CA was induced only by positive, but not by negative, lenses, even after the distance of the computer screen was taken into account (1 m, equivalent to +1 D). In five subjects, binocular accommodation was tracked over the full adaptation period. Accommodation appeared to focus the eye not wearing a lens, but short transient switches in focus to the lens-wearing eye could not be entirely excluded. Transient contrast adaptation was found at 3.22 cyc/deg when positive lenses were worn but not with negative lenses. This asymmetry is intriguing. While it may represent an epiphenomenon of physiological optics, further experiments are necessary to determine whether it could also trace back to differences in CA with defocus of different sign.

  10. Cortical Action Potential Backpropagation Explains Spike Threshold Variability and Rapid-Onset Kinetics

    PubMed Central

    Yu, Yuguo; Shu, Yousheng; McCormick, David A.

    2008-01-01

    Neocortical action potential responses in vivo are characterized by considerable threshold variability, and thus timing and rate variability, even under seemingly identical conditions. This finding suggests that cortical ensembles are required for accurate sensorimotor integration and processing. Intracellularly, trial-to-trial variability results not only from variation in synaptic activities, but also in the transformation of these into patterns of action potentials. Through simultaneous axonal and somatic recordings and computational simulations, we demonstrate that the initiation of action potentials in the axon initial segment followed by backpropagation of these spikes throughout the neuron results in a distortion of the relationship between the timing of synaptic and action potential events. In addition, this backpropagation also results in an unusually high rate of rise of membrane potential at the foot of the action potential. The distortion of the relationship between the amplitude time course of synaptic inputs and action potential output caused by spike backpropagation results in the appearance of high spike threshold variability at the level of the soma. At the point of spike initiation, the axon initial segment, threshold variability is considerably less. Our results indicate that spike generation in cortical neurons is largely as expected by Hodgkin-Huxley theory and is more precise than previously thought. PMID:18632930

  11. Lymph node ratio predicts disease-specific survival in melanoma patients.

    PubMed

    Xing, Yan; Badgwell, Brian D; Ross, Merrick I; Gershenwald, Jeffrey E; Lee, Jeffrey E; Mansfield, Paul F; Lucci, Anthony; Cormier, Janice N

    2009-06-01

    The objectives of this analysis were to compare various measures associated with lymph node (LN) dissection and to identify threshold values associated with disease-specific survival (DSS) outcomes in patients with melanoma. Patients with lymph node-positive melanoma who underwent therapeutic LN dissection of the neck, axilla, and inguinal region were identified from the SEER database (1988-2005). We performed Cox multivariate analyses to determine the impact of the total number of LNs removed, number of negative LNs removed, and LN ratio on DSS. Multivariate cut-point analyses were conducted for each anatomic region to identify the threshold values associated with the largest improvement in DSS. The LN ratio was significantly associated with DSS for all LN regions. The LN ratio thresholds resulting in the greatest difference in 5-year DSS were .07, .13, and .18 for neck, axillary, and inguinal regions, respectively, corresponding to 15, 8, and 6 LNs removed per positive lymph node. After adjustment for other clinicopathologic factors, the hazard ratios (HRs) were .53 (95% confidence interval [CI], .40 to .71) in the neck, .52 (95% CI, .42 to .65) in the axillary, and .47 (95% CI, .36 to .61) in the inguinal regions for patients who met the LN ratio threshold. Among the prognostic factors examined, LN ratio was the best indicator of the extent of LN dissection, regardless of anatomic nodal region. These data provide evidence-based guidelines for defining adequate LN dissections in melanoma patients. (c) 2009 American Cancer Society.

  12. Evaluation of factors that affect analytic variability of urine protein-to-creatinine ratio determination in dogs.

    PubMed

    Rossi, Gabriele; Giori, Luca; Campagnola, Simona; Zatelli, Andrea; Zini, Eric; Paltrinieri, Saverio

    2012-06-01

    To determine whether preanalytic and analytic factors affect evaluation of the urinary protein-to-creatinine (UPC) ratio in dogs. 50 canine urine samples. The UPC ratio was measured to assess the intra-assay imprecision (20 measurements within a single session), the influence of predilution (1:10, 1:20, and 1:100) for urine creatinine concentration measurement, and the effect of storage at room temperature (approx 20°C), 4°C, and -20°C. The coefficient of variation at room temperature determined with the 1:20 predilution was < 10.0%, with the highest coefficients of variation found in samples with a low protein concentration or low urine specific gravity. This variability could result in misclassification of samples with UPC ratios close to the thresholds defined by the International Renal Interest Society to classify dogs as nonproteinuric (0.2), borderline proteinuric (0.21 to 0.50), or proteinuric (> 0.51). A proportional bias was found in samples prediluted 1:10, compared with samples prediluted 1:20 or 1:100. At room temperature, the UPC ratio did not significantly increase after 2 and 4 hours. After 12 hours at room temperature and at 4°C, the UPC ratio significantly increased. The UPC ratio did not significantly change during 3 months of storage at -20°C. The intra-assay precision of the UPC ratio was sufficiently low to avoid misclassification of samples, except for values close to 0.2 or 0.5. The optimal predilution ratio for urine creatinine concentration measurement was 1:20. A 1:100 predilution is recommended in samples with a urine specific gravity > 1.030. The UPC ratio must be measured as soon as samples are collected. Alternatively, samples should be immediately frozen to increase their stability and minimize the risk of misclassification of proteinuria.

  13. Study on a low complexity adaptive modulation algorithm in OFDM-ROF system with sub-carrier grouping technology

    NASA Astrophysics Data System (ADS)

    Liu, Chong-xin; Liu, Bo; Zhang, Li-jia; Xin, Xiang-jun; Tian, Qing-hua; Tian, Feng; Wang, Yong-jun; Rao, Lan; Mao, Yaya; Li, Deng-ao

    2018-01-01

    Over the last decade, the orthogonal frequency division multiplexing radio-over-fiber (OFDM-ROF) system with adaptive modulation technology has attracted great interest due to its capability of raising spectral efficiency dramatically, reducing the effects of the fiber link or wireless channel, and improving communication quality. In this study, based on a theoretical analysis of nonlinear distortion and frequency-selective fading of the transmitted signal, a low-complexity adaptive modulation algorithm is proposed in combination with sub-carrier grouping technology. The algorithm optimizes system performance by calculating the average combined signal-to-noise ratio of each group and dynamically adjusting the modulation format of that group according to preset thresholds and the user's requirements. At the same time, the algorithm takes the sub-carrier group as the smallest unit in both the initial bit allocation and the subsequent bit adjustment, so its complexity is only 1/M (where M is the number of sub-carriers in each group) of that of the Fischer algorithm and much smaller than that of many classic adaptive modulation algorithms, such as the Hughes-Hartogs and Chow algorithms, in line with the trend towards green, high-speed communication. Simulation results show that the OFDM-ROF system with the improved algorithm performs much better than one without adaptive modulation: the BER of the former is one to two orders of magnitude lower than that of the latter at higher SNR values. This low-complexity adaptive modulation algorithm is therefore highly useful for OFDM-ROF systems.
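
    A sketch of the per-group decision step, with hypothetical SNR thresholds and modulation formats; the real switching thresholds would come from the target-BER analysis of the OFDM-ROF link, and the bit-adjustment stage is omitted.

        import numpy as np

        # Hypothetical SNR thresholds (dB) for switching modulation formats.
        THRESHOLDS_DB = [(22.0, "64QAM", 6), (16.0, "16QAM", 4), (9.0, "QPSK", 2), (3.0, "BPSK", 1)]

        def allocate_modulation(subcarrier_snr_db, group_size):
            """Assign one modulation format per sub-carrier group from its average SNR."""
            n_groups = len(subcarrier_snr_db) // group_size
            plan = []
            for g in range(n_groups):
                avg = np.mean(subcarrier_snr_db[g * group_size:(g + 1) * group_size])
                for thr, name, bits in THRESHOLDS_DB:
                    if avg >= thr:
                        plan.append((g, name, bits))
                        break
                else:
                    plan.append((g, "off", 0))   # group disabled below the lowest threshold
            return plan

        # usage: plan = allocate_modulation(measured_snr_db, group_size=8)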

  14. Physiology-Based Modeling May Predict Surgical Treatment Outcome for Obstructive Sleep Apnea

    PubMed Central

    Li, Yanru; Ye, Jingying; Han, Demin; Cao, Xin; Ding, Xiu; Zhang, Yuhuan; Xu, Wen; Orr, Jeremy; Jen, Rachel; Sands, Scott; Malhotra, Atul; Owens, Robert

    2017-01-01

    Study Objectives: To test whether the integration of both anatomical and nonanatomical parameters (ventilatory control, arousal threshold, muscle responsiveness) in a physiology-based model will improve the ability to predict outcomes after upper airway surgery for obstructive sleep apnea (OSA). Methods: In 31 patients who underwent upper airway surgery for OSA, loop gain and arousal threshold were calculated from preoperative polysomnography (PSG). Three models were compared: (1) a multiple regression based on an extensive list of PSG parameters alone; (2) a multivariate regression using PSG parameters plus PSG-derived estimates of loop gain, arousal threshold, and other trait surrogates; (3) a physiological model incorporating selected variables as surrogates of anatomical and nonanatomical traits important for OSA pathogenesis. Results: Although preoperative loop gain was positively correlated with postoperative apnea-hypopnea index (AHI) (P = .008) and arousal threshold was negatively correlated (P = .011), in both model 1 and 2, the only significant variable was preoperative AHI, which explained 42% of the variance in postoperative AHI. In contrast, the physiological model (model 3), which included AHIREM (anatomy term), fraction of events that were hypopnea (arousal term), the ratio of AHIREM and AHINREM (muscle responsiveness term), loop gain, and central/mixed apnea index (control of breathing terms), was able to explain 61% of the variance in postoperative AHI. Conclusions: Although loop gain and arousal threshold are associated with residual AHI after surgery, only preoperative AHI was predictive using multivariate regression modeling. Instead, incorporating selected surrogates of physiological traits on the basis of OSA pathophysiology created a model that has more association with actual residual AHI. Commentary: A commentary on this article appears in this issue on page 1023. Clinical Trial Registration: ClinicalTrials.Gov; Title: The Impact of Sleep Apnea Treatment on Physiology Traits in Chinese Patients With Obstructive Sleep Apnea; Identifier: NCT02696629; URL: https://clinicaltrials.gov/show/NCT02696629 Citation: Li Y, Ye J, Han D, Cao X, Ding X, Zhang Y, Xu W, Orr J, Jen R, Sands S, Malhotra A, Owens R. Physiology-based modeling may predict surgical treatment outcome for obstructive sleep apnea. J Clin Sleep Med. 2017;13(9):1029–1037. PMID:28818154

  15. Modeling spatially-varying landscape change points in species occurrence thresholds

    USGS Publications Warehouse

    Wagner, Tyler; Midway, Stephen R.

    2014-01-01

    Predicting species distributions at scales of regions to continents is often necessary, as large-scale phenomena influence the distributions of spatially structured populations. Land use and land cover are important large-scale drivers of species distributions, and landscapes are known to create species occurrence thresholds, where small changes in a landscape characteristic result in abrupt changes in occurrence. The value of the landscape characteristic at which this change occurs is referred to as a change point. We present a hierarchical Bayesian threshold model (HBTM) that allows for estimating spatially varying parameters, including change points. Our model also allows for modeling the estimated parameters in an effort to understand large-scale drivers of variability in the effects of land use and land cover on species occurrence thresholds. We use range-wide detection/nondetection data for the eastern brook trout (Salvelinus fontinalis), a stream-dwelling salmonid, to illustrate our HBTM for estimating and modeling spatially varying threshold parameters in species occurrence. We parameterized the model for investigating thresholds in landscape predictor variables that are measured as proportions, and which are therefore restricted to values between 0 and 1. Our HBTM estimated spatially varying thresholds in brook trout occurrence for the proportions of both agricultural and urban land use. There was relatively little spatial variation in change point estimates, although there was spatial variability in the overall shape of the threshold response and associated uncertainty. In addition, regional mean stream water temperature was correlated with the change point parameters for the proportion of urban land use, with the change point value increasing with increasing mean stream water temperature. We present a framework for quantifying macrosystem variability in spatially varying threshold model parameters in relation to important large-scale drivers such as land use and land cover. Although the model presented is a logistic HBTM, it can easily be extended to accommodate other statistical distributions for modeling species richness or abundance.
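
    The hierarchical Bayesian machinery is not reproduced here; the sketch below is a simpler, non-hierarchical profile-likelihood search for a single change point in a proportion-valued land-use covariate of a logistic occurrence model, using statsmodels, to illustrate the kind of threshold being estimated. Variable names in the usage line are hypothetical.

        import numpy as np
        import statsmodels.api as sm

        def fit_change_point(occurrence, prop_landuse, candidates=np.linspace(0.05, 0.95, 19)):
            """Profile-likelihood search for a change point in a logistic occurrence model.

            occurrence   : 0/1 detections across sites.
            prop_landuse : proportion of (e.g.) agricultural land per site, in [0, 1].
            The logit is piecewise linear in the covariate with a break at the candidate point.
            """
            best_c, best_fit = None, None
            best_llf = -np.inf
            for c in candidates:
                X = sm.add_constant(np.column_stack([prop_landuse,
                                                     np.maximum(prop_landuse - c, 0.0)]))
                try:
                    fit = sm.Logit(occurrence, X).fit(disp=0)
                except Exception:          # skip candidates that cause separation problems
                    continue
                if fit.llf > best_llf:
                    best_c, best_llf, best_fit = c, fit.llf, fit
            return best_c, best_fit

        # usage: c_hat, model = fit_change_point(brook_trout_detect, prop_agriculture)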

  16. Passive activity observation (PAO) method to estimate outdoor thermal adaptation in public space: case studies in Australian cities.

    PubMed

    Sharifi, Ehsan; Boland, John

    2018-06-18

    Outdoor thermal comfort is influenced by people's climate expectations, perceptions and adaptation capacity. Varied individual response to comfortable or stressful thermal environments results in a deviation between actual outdoor thermal activity choices and those predicted by thermal comfort indices. This paper presents a passive activity observation (PAO) method for estimating contextual limits of outdoor thermal adaptation. The PAO method determines in which thermal environments statistically meaningful changes in outdoor activity patterns occur, and it estimates thresholds of outdoor thermal neutrality and limits of thermal adaptation in public space based on activity observation and microclimate field measurement. Applications of the PAO method have been demonstrated in Adelaide, Melbourne and Sydney, where outdoor activities were analysed against outdoor thermal comfort indices between 2013 and 2014. Adjusted apparent temperature (aAT), adaptive predicted mean vote (aPMV), outdoor standard effective temperature (OUT_SET), physiological equivalent temperature (PET) and universal thermal comfort index (UTCI) are calculated from the PAO data. Using the PAO method, the high threshold of outdoor thermal neutrality was observed between 24 °C for optional activities and 34 °C for necessary activities (UTCI scale). Meanwhile, the ultimate limit of thermal adaptation in uncontrolled public spaces is estimated to be between 28 °C for social activities and 48 °C for necessary activities. Normalised results indicate that city-wide high thresholds for outdoor thermal neutrality vary from 25 °C in Melbourne to 26 °C in Sydney and 30 °C in Adelaide. The PAO method is a relatively fast and localised method for measuring limits of outdoor thermal adaptation and effectively informs urban design and policy making in the context of climate change.

  17. Impact of rainfall spatial variability on Flash Flood Forecasting

    NASA Astrophysics Data System (ADS)

    Douinot, Audrey; Roux, Hélène; Garambois, Pierre-André; Larnier, Kevin

    2014-05-01

    According to the United States National Hazard Statistics database, flooding and flash flooding have caused the largest number of deaths of any weather-related phenomenon over the last 30 years (Flash Flood Guidance Improvement Team, 2003). Like the storms that cause them, flash floods are very variable and non-linear phenomena in time and space, with the result that understanding and anticipating flash flood genesis is far from straightforward. In the U.S., the Flash Flood Guidance (FFG) estimates the average number of inches of rainfall for given durations required to produce flash flooding in the indicated county. In Europe, flash floods often occur on small catchments (approximately 100 km2), and it has been shown that the spatial variability of rainfall has a great impact on the catchment response (Le Lay and Saulnier, 2007). Therefore, in this study, based on the Flash Flood Guidance method, rainfall spatial variability information is introduced into the threshold estimation. As for FFG, the threshold is the number of millimeters of rainfall required to produce a discharge higher than the discharge corresponding to the first-level (yellow) warning of the French flood warning service (SCHAPI: Service Central d'Hydrométéorologie et d'Appui à la Prévision des Inondations). The indexes δ1 and δ2 of Zoccatelli et al. (2010), based on the spatial moments of catchment rainfall, are used to characterize the rainfall spatial distribution. The impacts of rainfall spatial variability on the warning threshold and on hydrological processes are then studied. The spatially distributed hydrological model MARINE (Roux et al., 2011), dedicated to flash flood prediction, is forced with synthetic rainfall patterns of different spatial distributions. This allows the determination of a warning threshold diagram: knowing the spatial distribution of the rainfall forecast and therefore the two indexes δ1 and δ2, the threshold value is read on the diagram. A warning threshold diagram is built for each studied catchment. The proposed methodology is applied to three Mediterranean catchments often subject to flash floods. The new forecasting method as well as the Flash Flood Guidance method (uniform rainfall threshold) are tested on 25 flash flood events that occurred on those catchments. Results show a significant impact of rainfall spatial variability. Indeed, it appears that the uniform rainfall threshold (FFG threshold) always overestimates the observed rainfall threshold. The difference between the FFG threshold and the proposed threshold ranges from 8% to 30%. The proposed methodology allows the calculation of a threshold more representative of the observed one. However, results strongly depend on the related event duration and on the catchment properties. For instance, the impact of the rainfall spatial variability seems to be correlated with the catchment size. According to these results, it seems worthwhile to introduce information on the catchment properties into the threshold calculation. Flash Flood Guidance Improvement Team, 2003. River Forecast Center (RFC) Development Management Team. Final Report. Office of Hydrologic Development (OHD), Silver Spring, Maryland. Le Lay, M. and Saulnier, G.-M., 2007. Exploring the signature of climate and landscape spatial variabilities in flash flood events: Case of the 8-9 September 2002 Cévennes-Vivarais catastrophic event. Geophysical Research Letters, 34(L13401), doi:10.1029/2007GL029746. Roux, H., Labat, D., Garambois, P.-A., Maubourguet, M.-M., Chorda, J. and Dartus, D., 2011. A physically-based parsimonious hydrological model for flash floods in Mediterranean catchments. Nat. Hazards Earth Syst. Sci., 11(9), 2567-2582. Zoccatelli, D., Borga, M., Zanon, F., Antonescu, B. and Stancalie, G., 2010. Which rainfall spatial information for flash flood response modelling? A numerical investigation based on data from the Carpathian range, Romania. Journal of Hydrology, 394(1-2), 148-161.

  18. A STATISTICAL MODELING METHODOLOGY FOR THE DETECTION, QUANTIFICATION, AND PREDICTION OF ECOLOGICAL THRESHOLDS

    EPA Science Inventory

    This study will provide a general methodology for integrating threshold information from multiple species ecological metrics, allow for prediction of changes of alternative stable states, and provide a risk assessment tool that can be applied to adaptive management. The integr...

  19. Incorporating adaptive responses into future projections of coral bleaching.

    PubMed

    Logan, Cheryl A; Dunne, John P; Eakin, C Mark; Donner, Simon D

    2014-01-01

    Climate warming threatens to increase mass coral bleaching events, and several studies have projected the demise of tropical coral reefs this century. However, recent evidence indicates corals may be able to respond to thermal stress through adaptive processes (e.g., genetic adaptation, acclimatization, and symbiont shuffling). How these mechanisms might influence warming-induced bleaching remains largely unknown. This study compared how different adaptive processes could affect coral bleaching projections. We used the latest bias-corrected global sea surface temperature (SST) output from the NOAA/GFDL Earth System Model 2 (ESM2M) for the preindustrial period through 2100 to project coral bleaching trajectories. Initial results showed that, in the absence of adaptive processes, application of a preindustrial climatology to the NOAA Coral Reef Watch bleaching prediction method overpredicts the present-day bleaching frequency. This suggests that corals may have already responded adaptively to some warming over the industrial period. We then modified the prediction method so that the bleaching threshold either permanently increased in response to thermal history (e.g., simulating directional genetic selection) or temporarily increased for 2-10 years in response to a bleaching event (e.g., simulating symbiont shuffling). A bleaching threshold that changes relative to the preceding 60 years of thermal history reduced the frequency of mass bleaching events by 20-80% compared with the 'no adaptive response' prediction model by 2100, depending on the emissions scenario. When both types of adaptive responses were applied, up to 14% more reef cells avoided high-frequency bleaching by 2100. However, temporary increases in bleaching thresholds alone only delayed the occurrence of high-frequency bleaching by ca. 10 years in all but the lowest emissions scenario. Future research should test the rate and limit of different adaptive responses for coral species across latitudes and ocean basins to determine if and how much corals can respond to increasing thermal stress.

  20. Spectrum of Lyapunov exponents of non-smooth dynamical systems of integrate-and-fire type.

    PubMed

    Zhou, Douglas; Sun, Yi; Rangan, Aaditya V; Cai, David

    2010-04-01

    We discuss how to characterize the long-time dynamics of non-smooth dynamical systems, such as integrate-and-fire (I&F)-like neuronal networks, using Lyapunov exponents, and present a stable numerical method for the accurate evaluation of the spectrum of Lyapunov exponents for this large class of dynamics. These dynamics contain (i) jump conditions as in the firing-reset dynamics and (ii) degeneracy such as in the refractory period, in which voltage-like variables of the network collapse to a single constant value. Using networks of linear I&F neurons, exponential I&F neurons, and I&F neurons with an adaptive threshold, we illustrate our method and discuss the rich dynamics of these networks.
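
    A minimal sketch of one of the model classes mentioned above, a leaky integrate-and-fire neuron whose firing threshold jumps at each spike and relaxes back between spikes; all parameter values are illustrative.

        import numpy as np

        def lif_adaptive_threshold(i_ext, dt=0.1, tau_v=20.0, tau_theta=80.0,
                                   v_rest=0.0, v_reset=0.0, theta0=1.0, d_theta=0.5):
            """Leaky integrate-and-fire neuron whose firing threshold adapts after each spike.

            i_ext : array of external input current, one value per time step.
            Returns the membrane potential trace, threshold trace, and spike times (ms).
            """
            n = len(i_ext)
            v = np.full(n, v_rest)
            theta = np.full(n, theta0)
            spikes = []
            for t in range(1, n):
                v[t] = v[t-1] + dt * (-(v[t-1] - v_rest) + i_ext[t]) / tau_v
                theta[t] = theta[t-1] + dt * (theta0 - theta[t-1]) / tau_theta
                if v[t] >= theta[t]:               # spike: reset voltage, raise threshold
                    spikes.append(t * dt)
                    v[t] = v_reset
                    theta[t] += d_theta
            return v, theta, spikes

        # usage: v, theta, spk = lif_adaptive_threshold(1.5 + 0.3 * np.random.randn(20000))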

  1. The survey and criterion of the compass rose in Chinese A-share market

    NASA Astrophysics Data System (ADS)

    Tian, Wenzhao; Wang, Yanxiang; Huo, Zhao; Li, Yilin

    2018-02-01

    The compass rose is one of the few "recurring patterns" found in financial markets. In this paper, the compass rose in the Chinese A-share market is comprehensively investigated. It is newly discovered that among the 1331 A-shares that had been listed for more than 15 years by the end of 2015, only about 20 show the compass rose. The analysis shows that there exists a threshold for the ratio of the data points on main rays to all data points: the compass rose appears only when this ratio is above the threshold. The reasons why such a threshold exists, and its interrelationship with the data frequency and the tick/volatility ratio, are analyzed.

  2. Method of Improved Fuzzy Contrast Combined Adaptive Threshold in NSCT for Medical Image Enhancement

    PubMed Central

    Yang, Jie; Kasabov, Nikola

    2017-01-01

    Noises and artifacts are introduced to medical images due to acquisition techniques and systems. This interference leads to low contrast and distortion in images, which not only impacts the effectiveness of the medical image but also seriously affects the clinical diagnoses. This paper proposes an algorithm for medical image enhancement based on the nonsubsampled contourlet transform (NSCT), which combines adaptive threshold and an improved fuzzy set. First, the original image is decomposed into the NSCT domain with a low-frequency subband and several high-frequency subbands. Then, a linear transformation is adopted for the coefficients of the low-frequency component. An adaptive threshold method is used for the removal of high-frequency image noise. Finally, the improved fuzzy set is used to enhance the global contrast and the Laplace operator is used to enhance the details of the medical images. Experiments and simulation results show that the proposed method is superior to existing methods of image noise removal, improves the contrast of the image significantly, and obtains a better visual effect. PMID:28744464
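
    The NSCT decomposition itself requires a dedicated library and is not reproduced here; the sketch below shows only a generic adaptive-threshold step applied to a high-frequency coefficient array, using the median-absolute-deviation noise estimate and soft thresholding as assumptions rather than the paper's exact rule.

        import numpy as np

        def adaptive_soft_threshold(coeffs):
            """Soft-threshold a high-frequency subband with a noise-adaptive threshold.

            The noise level is estimated from the subband itself via the median absolute
            deviation (sigma ~ MAD / 0.6745); the threshold is the 'universal' rule
            sigma * sqrt(2 * log N). This is a generic denoising step, not the exact
            rule of the cited NSCT method.
            """
            c = np.asarray(coeffs, dtype=float)
            sigma = np.median(np.abs(c)) / 0.6745
            thr = sigma * np.sqrt(2.0 * np.log(c.size))
            return np.sign(c) * np.maximum(np.abs(c) - thr, 0.0)

        # usage: each high-frequency subband of the decomposition would be passed
        # through adaptive_soft_threshold before reconstruction.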

  3. Free testosterone as marker of adaptation to medium-intensive exercise.

    PubMed

    Shkurnikov, M U; Donnikov, A E; Akimov, E B; Sakharov, D A; Tonevitsky, A G

    2008-09-01

    A 4-week study of the body's adaptation reserves was carried out during medium-intensive exercise (training at 60-80% of the anaerobic threshold). Two groups of athletes were distinguished by pulsometry analysis: those spending less than 20% of work duration above 80% of the anaerobic threshold and those spending more than 20% of work duration above that level. No appreciable differences in the concentrations of total testosterone, growth hormone, and cortisol before and after exercise were detected between the groups with different percentages of anaerobic work duration. In group 1, the concentration of free testosterone did not change throughout the observation period in comparison with pre-training levels. In group 2, the level of free testosterone increased relative to the basal level: from 0.61+/-0.12 nmol/liter at the end of week 1 to 0.98+/-0.11 nmol/liter at the end of week 4 (p<0.01). The results indicate that the level of free testosterone can be used to evaluate the degree of an athlete's adaptation to medium-intensive exercise.

  4. Detection in fixed and random noise in foveal and parafoveal vision explained by template learning

    NASA Technical Reports Server (NTRS)

    Beard, B. L.; Ahumada, A. J. Jr; Watson, A. B. (Principal Investigator)

    1999-01-01

    Foveal and parafoveal contrast detection thresholds for Gabor and checkerboard targets were measured in white noise by means of a two-interval forced-choice paradigm. Two white-noise conditions were used: fixed and twin. In the fixed noise condition a single noise sample was presented in both intervals of all the trials. In the twin noise condition the same noise sample was used in the two intervals of a trial, but a new sample was generated for each trial. Fixed noise conditions usually resulted in lower thresholds than twin noise. Template learning models are presented that attribute this advantage of fixed over twin noise either to fixed memory templates' reducing uncertainty by incorporation of the noise or to the introduction, by the learning process itself, of more variability in the twin noise condition. Quantitative predictions of the template learning process show that it contributes to the accelerating nonlinear increase in performance with signal amplitude at low signal-to-noise ratios.

  5. Crack Growth Behavior in the Threshold Region for High Cycle Fatigue Loading

    NASA Technical Reports Server (NTRS)

    Forman, R. G.; Zanganeh, M.

    2014-01-01

    This paper describes the results of a research program conducted to improve the understanding of fatigue crack growth rate behavior in the threshold growth rate region and to answer a question on the validity of threshold region test data. The validity question relates to the view held by some experimentalists that using the ASTM load shedding test method does not produce valid threshold test results and material properties. The question involves the fanning behavior observed in threshold region of da/dN plots for some materials in which the low R-ratio data fans out from the high R-ratio data. This fanning behavior or elevation of threshold values in the low R-ratio tests is generally assumed to be caused by an increase in crack closure in the low R-ratio tests. Also, the increase in crack closure is assumed by some experimentalists to result from using the ASTM load shedding test procedure. The belief is that this procedure induces load history effects which cause remote closure from plasticity and/or roughness changes in the surface morphology. However, experimental studies performed by the authors have shown that the increase in crack closure is a result of extensive crack tip bifurcations that can occur in some materials, particularly in aluminum alloys, when the crack tip cyclic yield zone size becomes less than the grain size of the alloy. This behavior is related to the high stacking fault energy (SFE) property of aluminum alloys which results in easier slip characteristics. Therefore, the fanning behavior which occurs in aluminum alloys is a function of intrinsic dislocation property of the alloy, and therefore, the fanned data does represent the true threshold properties of the material. However, for the corrosion sensitive steel alloys tested in laboratory air, the occurrence of fanning results from fretting corrosion at the crack tips, and these results should not be considered to be representative of valid threshold properties because the fanning is eliminated when testing is performed in dry air.

  6. Regression Discontinuity for Causal Effect Estimation in Epidemiology.

    PubMed

    Oldenburg, Catherine E; Moscoe, Ellen; Bärnighausen, Till

    Regression discontinuity analyses can generate estimates of the causal effects of an exposure when a continuously measured variable is used to assign the exposure to individuals based on a threshold rule. Individuals just above the threshold are expected to be similar in their distribution of measured and unmeasured baseline covariates to individuals just below the threshold, resulting in exchangeability. At the threshold exchangeability is guaranteed if there is random variation in the continuous assignment variable, e.g., due to random measurement error. Under exchangeability, causal effects can be identified at the threshold. The regression discontinuity intention-to-treat (RD-ITT) effect on an outcome can be estimated as the difference in the outcome between individuals just above (or below) versus just below (or above) the threshold. This effect is analogous to the ITT effect in a randomized controlled trial. Instrumental variable methods can be used to estimate the effect of exposure itself utilizing the threshold as the instrument. We review the recent epidemiologic literature reporting regression discontinuity studies and find that while regression discontinuity designs are beginning to be utilized in a variety of applications in epidemiology, they are still relatively rare, and analytic and reporting practices vary. Regression discontinuity has the potential to greatly contribute to the evidence base in epidemiology, in particular on the real-life and long-term effects and side-effects of medical treatments that are provided based on threshold rules - such as treatments for low birth weight, hypertension or diabetes.
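
    A minimal sketch of the sharp RD-ITT estimate via separate local linear fits on each side of the cutoff; the bandwidth is assumed given (in practice it is chosen by a data-driven selector), and the variable names in the usage line are hypothetical.

        import numpy as np

        def rd_itt(assignment, outcome, cutoff, bandwidth):
            """Sharp regression-discontinuity ITT estimate at the cutoff.

            Fits a separate linear regression of outcome on (assignment - cutoff) on each
            side of the cutoff, within +/- bandwidth, and returns the difference of the
            two fitted values at the cutoff.
            """
            x = np.asarray(assignment, float) - cutoff
            y = np.asarray(outcome, float)
            below = (x >= -bandwidth) & (x < 0)
            above = (x >= 0) & (x <= bandwidth)
            # np.polyfit returns [slope, intercept]; the intercept is the fit at the cutoff
            fit_below = np.polyfit(x[below], y[below], 1)[1]
            fit_above = np.polyfit(x[above], y[above], 1)[1]
            return fit_above - fit_below

        # usage: effect = rd_itt(birth_weight_g, mortality, cutoff=1500, bandwidth=150)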

  7. Nonlinear time series modeling and forecasting the seismic data of the Hindu Kush region

    NASA Astrophysics Data System (ADS)

    Khan, Muhammad Yousaf; Mittnik, Stefan

    2018-01-01

    In this study, we extended the application of linear and nonlinear time series models in the field of earthquake seismology and examined the out-of-sample forecast accuracy of linear Autoregressive (AR), Autoregressive Conditional Duration (ACD), Self-Exciting Threshold Autoregressive (SETAR), Threshold Autoregressive (TAR), Logistic Smooth Transition Autoregressive (LSTAR), Additive Autoregressive (AAR), and Artificial Neural Network (ANN) models for seismic data of the Hindu Kush region. We also extended the previous studies by using Vector Autoregressive (VAR) and Threshold Vector Autoregressive (TVAR) models and compared their forecasting accuracy with the linear AR model. Unlike previous studies that typically consider threshold model specifications using an internal threshold variable, we specified these models with external transition variables and compared their out-of-sample forecasting performance with the linear benchmark AR model. The modeling results show that the time series models used in the present study are capable of capturing the dynamic structure present in the seismic data. The point forecast results indicate that the AR model generally outperforms the nonlinear models. However, in some cases, threshold models specified with external threshold variables produce more accurate forecasts, indicating that the specification of threshold time series models is of crucial importance. For raw seismic data, the ACD model does not show an improved out-of-sample forecasting performance over the linear AR model. The results indicate that the AR model is the best forecasting device to model and forecast the raw seismic data of the Hindu Kush region.
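
    A minimal sketch of fitting a two-regime threshold autoregression of order one by grid search over the threshold, with either the series itself (self-exciting case) or an external series as the transition variable; this is an illustration of the model class, not the authors' full specification.

        import numpy as np

        def fit_setar1(y, transition, candidates=None):
            """Two-regime threshold AR(1) fit: y_t = a_r + b_r * y_{t-1}, regime r chosen by
            whether the transition variable at t-1 is below or above a threshold.

            y          : time series (1D array).
            transition : transition variable, same length as y (use y itself for a
                         classical self-exciting SETAR; an external series otherwise).
            Returns the threshold minimising the pooled residual sum of squares.
            """
            y_lag, y_now, z = y[:-1], y[1:], transition[:-1]
            if candidates is None:
                candidates = np.quantile(z, np.linspace(0.15, 0.85, 29))
            best_thr, best_sse = None, np.inf
            for thr in candidates:
                sse = 0.0
                for regime in (z <= thr, z > thr):
                    if regime.sum() < 5:            # require a minimum number of points per regime
                        break
                    X = np.column_stack([np.ones(regime.sum()), y_lag[regime]])
                    beta, *_ = np.linalg.lstsq(X, y_now[regime], rcond=None)
                    sse += np.sum((y_now[regime] - X @ beta) ** 2)
                else:
                    if sse < best_sse:
                        best_thr, best_sse = thr, sse
            return best_thr, best_sse

        # usage: thr, sse = fit_setar1(magnitudes, transition=magnitudes)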

  8. Is the sky the limit? On the expansion threshold of a species' range.

    PubMed

    Polechová, Jitka

    2018-06-15

    More than 100 years after Grigg's influential analysis of species' borders, the causes of limits to species' ranges still represent a puzzle that has never been understood with clarity. The topic has become especially important recently as many scientists have become interested in the potential for species' ranges to shift in response to climate change, and yet nearly all of those studies fail to recognise or incorporate evolutionary genetics in a way that relates to theoretical developments. I show that range margins can be understood based on just two measurable parameters: (i) the fitness cost of dispersal (a measure of environmental heterogeneity) and (ii) the strength of genetic drift, which reduces genetic diversity. Together, these two parameters define an 'expansion threshold': adaptation fails when genetic drift reduces genetic diversity below that required for adaptation to a heterogeneous environment. When the key parameters drop below this expansion threshold locally, a sharp range margin forms. When they drop below this threshold throughout the species' range, adaptation collapses everywhere, resulting in either extinction or formation of a fragmented metapopulation. Because the effects of dispersal differ fundamentally with dimension, the second parameter, the strength of genetic drift, is qualitatively different compared to a linear habitat. In two-dimensional habitats, genetic drift becomes effectively independent of selection. It decreases with 'neighbourhood size', the number of individuals accessible by dispersal within one generation. Moreover, in contrast to earlier predictions, which neglected evolution of genetic variance and/or stochasticity in two dimensions, dispersal into small marginal populations aids adaptation. This is because the reduction of both genetic and demographic stochasticity has a stronger effect than the cost of dispersal through increased maladaptation. The expansion threshold thus provides a novel, theoretically justified, and testable prediction for formation of the range margin and collapse of the species' range.

  9. Six weeks of a polarized training-intensity distribution leads to greater physiological and performance adaptations than a threshold model in trained cyclists.

    PubMed

    Neal, Craig M; Hunter, Angus M; Brennan, Lorraine; O'Sullivan, Aifric; Hamilton, D Lee; De Vito, Giuseppe; Galloway, Stuart D R

    2013-02-15

    This study was undertaken to investigate physiological adaptation with two endurance-training periods differing in intensity distribution. In a randomized crossover fashion, separated by 4 wk of detraining, 12 male cyclists completed two 6-wk training periods: 1) a polarized model [6.4 (±1.4 SD) h/wk; 80%, 0%, and 20% of training time in low-, moderate-, and high-intensity zones, respectively]; and 2) a threshold model [7.5 (±2.0 SD) h/wk; 57%, 43%, and 0% training-intensity distribution]. Before and after each training period, following 2 days of diet and exercise control, fasted skeletal muscle biopsies were obtained for mitochondrial enzyme activity and monocarboxylate transporter (MCT) 1 and 4 expression, and morning first-void urine samples were collected for NMR spectroscopy-based metabolomics analysis. Endurance performance (40-km time trial), incremental exercise, peak power output (PPO), and high-intensity exercise capacity (95% maximal work rate to exhaustion) were also assessed. Endurance performance, PPOs, lactate threshold (LT), MCT4, and high-intensity exercise capacity all increased over both training periods. Improvements were greater following polarized rather than threshold for PPO [mean (±SE) change of 8 (±2)% vs. 3 (±1)%, P < 0.05], LT [9 (±3)% vs. 2 (±4)%, P < 0.05], and high-intensity exercise capacity [85 (±14)% vs. 37 (±14)%, P < 0.05]. No changes in mitochondrial enzyme activities or MCT1 were observed following training. A significant multilevel, partial least squares-discriminant analysis model was obtained for the threshold model but not the polarized model in the metabolomics analysis. A polarized training distribution results in greater systemic adaptation over 6 wk in already well-trained cyclists. Markers of muscle metabolic adaptation are largely unchanged, but metabolomics markers suggest different cellular metabolic stress that requires further investigation.

  10. Measurement of retinal wall-to-lumen ratio by adaptive optics retinal camera: a clinical research.

    PubMed

    Meixner, Eva; Michelson, Georg

    2015-11-01

    To measure the wall-to-lumen ratio (WLR) and the cross-sectional area of the vascular wall (WCSA) of retinal arterioles by an Adaptive Optics (AO) retinal camera. Forty-seven human subjects were examined and their medical history was explored. WLR and WCSA were measured on the basis of retinal arteriolar wall thickness (VW), lumen diameter (LD) and vessel diameter (VD) assessed by rtx1 Adaptive Optics retinal camera. WLR was calculated by the formula [Formula: see text]. Arterio-venous ratio (AVR) and microvascular abnormalities were attained by quantitative and qualitative assessment of fundus photographs. Influence of age, arterial hypertension, body mass index (BMI) and retinal microvascular abnormalities on the WLR was examined. An age-adjusted WLR was created to test influences on WLR independently of age. Considering WLR and WCSA, a distinction between eutrophic and hypertrophic retinal remodeling processes was possible. The intra-observer variability (IOV) was 6 % ± 0.9 for arteriolar wall thickness and 2 % ± 0.2 for arteriolar wall thickness plus vessel lumen. WLR depended significantly on the wall thickness (r = 0.715; p < 0.01) of retinal arterioles, but was independent of the total vessel diameter (r = 0.052; p = 0.728). WLR correlated significantly with age (r = 0.769; p < 0.01). Arterial hypertension and a higher BMI were significantly associated with an increased age-adjusted WLR. WLR correlated significantly with the stage of microvascular abnormalities. 55 % of the hypertensive subjects and 11 % of the normotensive subjects showed eutrophic remodeling, while hypertrophic remodeling was not detectable. WLR correlated inversely with AVR. AVR was independent of the arteriolar wall thickness, age and arterial hypertension. The technique of AO retinal imaging allows a direct measurement of the retinal vessel wall and lumen diameter with good intra-observer variability. Age, arterial hypertension and an elevated BMI level are significantly associated with an increased WLR. The wall-to-lumen ratio measured by AO can be used to detect structural retinal microvascular alterations in an early stage of remodeling processes.
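
    The record above omits the paper's WLR formula, so the following sketch uses a commonly quoted definition, WLR = (VD - LD)/LD, together with an annulus-area WCSA, purely as an assumption for illustration; the rtx1-based measurement pipeline itself is not reproduced.

```python
# Illustrative only: a commonly used wall-to-lumen ratio definition,
# assumed here because the record omits the paper's actual formula.
from math import pi

def wall_to_lumen_ratio(vd_um: float, ld_um: float) -> float:
    """WLR = (VD - LD) / LD, i.e. twice the wall thickness over the lumen diameter."""
    if ld_um <= 0 or vd_um <= ld_um:
        raise ValueError("expect vessel diameter > lumen diameter > 0")
    return (vd_um - ld_um) / ld_um

def wall_cross_sectional_area(vd_um: float, ld_um: float) -> float:
    """WCSA as the annulus area between the outer and lumen circles (assumed definition)."""
    return pi / 4.0 * (vd_um ** 2 - ld_um ** 2)

# Hypothetical arteriole: 105 um outer diameter, 80 um lumen diameter
print(wall_to_lumen_ratio(105.0, 80.0), wall_cross_sectional_area(105.0, 80.0))
```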

  11. Wavelet methodology to improve single unit isolation in primary motor cortex cells

    PubMed Central

    Ortiz-Rosario, Alexis; Adeli, Hojjat; Buford, John A.

    2016-01-01

    The proper isolation of action potentials recorded extracellularly from neural tissue is an active area of research in the fields of neuroscience and biomedical signal processing. This paper presents an isolation methodology for neural recordings using the wavelet transform (WT), a statistical thresholding scheme, and the principal component analysis (PCA) algorithm. The effectiveness of five different mother wavelets was investigated: biorthogonal, Daubechies, discrete Meyer, symmetric, and Coifman; along with three different wavelet coefficient thresholding schemes: fixed form threshold, Stein’s unbiased estimate of risk, and minimax; and two different thresholding rules: soft and hard thresholding. The signal quality was evaluated using three different statistical measures: mean-squared error, root mean square, and signal-to-noise ratio. The clustering quality was evaluated using two different statistical measures: isolation distance and L-ratio. This research shows that the selection of the mother wavelet has a strong influence on the clustering and isolation of single unit neural activity, with the Daubechies 4 wavelet and minimax thresholding scheme performing the best. PMID:25794461
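
    A minimal sketch of the wavelet denoising ingredients named above (Daubechies 4 decomposition, a minimax-style threshold, soft/hard thresholding) is given below using PyWavelets. The synthetic trace, the robust noise estimate, and the textbook minimax threshold constant are assumptions; the paper's PCA-based clustering stage is not shown.

```python
# Sketch: Daubechies-4 wavelet denoising of a synthetic trace with soft
# thresholding (PyWavelets); the minimax-style threshold formula is a
# textbook approximation, not necessarily the paper's exact scheme.
import numpy as np
import pywt

rng = np.random.default_rng(1)
n = 4096
signal = np.sin(2 * np.pi * 5 * np.arange(n) / n)           # stand-in for neural data
noisy = signal + 0.3 * rng.normal(size=n)

coeffs = pywt.wavedec(noisy, "db4", level=5)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745               # robust noise estimate from finest level
lam = sigma * (0.3936 + 0.1829 * np.log2(n))                 # minimax-style threshold (approximation)

den_coeffs = [coeffs[0]] + [pywt.threshold(c, lam, mode="soft") for c in coeffs[1:]]
denoised = pywt.waverec(den_coeffs, "db4")[:n]

snr = 10 * np.log10(np.sum(signal ** 2) / np.sum((denoised - signal) ** 2))
print(f"output SNR: {snr:.1f} dB")
```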

  12. Adaptive measurements of urban runoff quality

    NASA Astrophysics Data System (ADS)

    Wong, Brandon P.; Kerkez, Branko

    2016-11-01

    An approach to adaptively measure runoff water quality dynamics is introduced, focusing specifically on characterizing the timing and magnitude of urban pollutographs. Rather than relying on a static schedule or flow-weighted sampling, which can miss important water quality dynamics if parameterized inadequately, novel Internet-enabled sensor nodes are used to autonomously adapt their measurement frequency to real-time weather forecasts and hydrologic conditions. This dynamic approach has the potential to significantly improve the use of constrained experimental resources, such as automated grab samplers, which continue to provide a strong alternative to sampling water quality dynamics when in situ sensors are not available. Compared to conventional flow-weighted or time-weighted sampling schemes, which rely on preset thresholds, a major benefit of the approach is the ability to dynamically adapt to features of an underlying hydrologic signal. A 28 km2 urban watershed was studied to characterize concentrations of total suspended solids (TSS) and total phosphorus. Water quality samples were autonomously triggered in response to features in the underlying hydrograph and real-time weather forecasts. The study watershed did not exhibit a strong first flush, and intraevent concentration variability was driven by flow acceleration, wherein the largest loadings of TSS and total phosphorus corresponded with the steepest rising limbs of the storm hydrograph. The scalability of the proposed method is discussed in the context of larger sensor network deployments, as well as the potential to improve control of urban water quality.

  13. Optimization of artificial neural network models through genetic algorithms for surface ozone concentration forecasting.

    PubMed

    Pires, J C M; Gonçalves, B; Azevedo, F G; Carneiro, A P; Rego, N; Assembleia, A J B; Lima, J F B; Silva, P A; Alves, C; Martins, F G

    2012-09-01

    This study proposes three methodologies to define artificial neural network models through genetic algorithms (GAs) to predict the next-day hourly average surface ozone (O3) concentrations. GAs were applied to define the activation function in the hidden layer and the number of hidden neurons. Two of the methodologies define threshold models, which assume that the behaviour of the dependent variable (O3 concentrations) changes when it enters a different regime (two and four regimes were considered in this study). The change from one regime to another depends on a specific value (threshold value) of an explanatory variable (threshold variable), which is also defined by GAs. The predictor variables were the hourly average concentrations of carbon monoxide (CO), nitrogen oxide, nitrogen dioxide (NO2), and O3 (recorded on the previous day at an urban site with traffic influence) and also meteorological data (hourly averages of temperature, solar radiation, relative humidity and wind speed). The study was performed for the period from May to August 2004. Several models were achieved and only the best model of each methodology was analysed. In the threshold models, the variables selected by GAs to define the O3 regimes were temperature, CO and NO2 concentrations, due to their importance in O3 chemistry in an urban atmosphere. In the prediction of O3 concentrations, the threshold model that considers two regimes was the one that fitted the data most efficiently.

  14. Uncovering state-dependent relationships in shallow lakes using Bayesian latent variable regression.

    PubMed

    Vitense, Kelsey; Hanson, Mark A; Herwig, Brian R; Zimmer, Kyle D; Fieberg, John

    2018-03-01

    Ecosystems sometimes undergo dramatic shifts between contrasting regimes. Shallow lakes, for instance, can transition between two alternative stable states: a clear state dominated by submerged aquatic vegetation and a turbid state dominated by phytoplankton. Theoretical models suggest that critical nutrient thresholds differentiate three lake types: highly resilient clear lakes, lakes that may switch between clear and turbid states following perturbations, and highly resilient turbid lakes. For effective and efficient management of shallow lakes and other systems, managers need tools to identify critical thresholds and state-dependent relationships between driving variables and key system features. Using shallow lakes as a model system for which alternative stable states have been demonstrated, we developed an integrated framework using Bayesian latent variable regression (BLR) to classify lake states, identify critical total phosphorus (TP) thresholds, and estimate steady state relationships between TP and chlorophyll a (chl a) using cross-sectional data. We evaluated the method using data simulated from a stochastic differential equation model and compared its performance to k-means clustering with regression (KMR). We also applied the framework to data comprising 130 shallow lakes. For simulated data sets, BLR had high state classification rates (median/mean accuracy >97%) and accurately estimated TP thresholds and state-dependent TP-chl a relationships. Classification and estimation improved with increasing sample size and decreasing noise levels. Compared to KMR, BLR had higher classification rates and better approximated the TP-chl a steady state relationships and TP thresholds. We fit the BLR model to three different years of empirical shallow lake data, and managers can use the estimated bifurcation diagrams to prioritize lakes for management according to their proximity to thresholds and chance of successful rehabilitation. Our model improves upon previous methods for shallow lakes because it allows classification and regression to occur simultaneously and inform one another, directly estimates TP thresholds and the uncertainty associated with thresholds and state classifications, and enables meaningful constraints to be built into models. The BLR framework is broadly applicable to other ecosystems known to exhibit alternative stable states in which regression can be used to establish relationships between driving variables and state variables. © 2017 by the Ecological Society of America.

  15. Near-threshold fatigue crack behaviour in EUROFER 97 at different temperatures

    NASA Astrophysics Data System (ADS)

    Aktaa, J.; Lerch, M.

    2006-07-01

    The fatigue crack behaviour in EUROFER 97 was investigated at room temperature (RT), 300, 500 and 550 °C for the assessment of cracks in first-wall structures built from EUROFER 97 for future fusion reactors. For this purpose, fatigue crack growth tests were performed using CT specimens at two R-ratios, R = 0.1 and R = 0.5 (R is the load ratio, R = Fmin/Fmax, where Fmin and Fmax are the minimum and maximum applied loads within a cycle, respectively). Hence, the fatigue crack threshold, the fatigue crack growth behaviour in the near-threshold range and their dependences on temperature and R-ratio were determined and described using an analytical formula. The fatigue crack threshold showed a monotonic dependence on temperature, which for R = 0.5 is insignificantly small. For R = 0.1, the fatigue crack growth behaviour exhibited a non-monotonic dependence on temperature, which is explained by the decrease of yield stress and the increase of creep damage with increasing temperature.

  16. Masked hearing thresholds of a beluga whale ( Delphinapterus leucas) in icebreaker noise

    NASA Astrophysics Data System (ADS)

    Erbe, C.; Farmer, D. M.

    An experiment is presented that measured masked hearing thresholds of a beluga whale at the Vancouver Aquarium. The masked signal was a typical beluga vocalization; the masking noise included two types of icebreaker noise and naturally occurring icecracking noise. Thresholds were measured behaviorally in a go/no-go paradigm. Bubbler system noise exhibited the strongest masking effect, with a critical noise-to-signal ratio of 15.4 dB. Propeller cavitation noise completely masked the vocalization for noise-to-signal ratios greater than 18.0 dB. Natural icecracking noise showed the least interference, with a threshold at 29.0 dB. A psychophysical analysis indicated that the whale did not have a consistent decision bias.

  17. Adaptive Spot Detection With Optimal Scale Selection in Fluorescence Microscopy Images.

    PubMed

    Basset, Antoine; Boulanger, Jérôme; Salamero, Jean; Bouthemy, Patrick; Kervrann, Charles

    2015-11-01

    Accurately detecting subcellular particles in fluorescence microscopy is of primary interest for further quantitative analysis such as counting, tracking, or classification. Our primary goal is to segment vesicles likely to share nearly the same size in fluorescence microscopy images. Our method termed adaptive thresholding of Laplacian of Gaussian (LoG) images with autoselected scale (ATLAS) automatically selects the optimal scale corresponding to the most frequent spot size in the image. Four criteria are proposed and compared to determine the optimal scale in a scale-space framework. Then, the segmentation stage amounts to thresholding the LoG of the intensity image. In contrast to other methods, the threshold is locally adapted given a probability of false alarm (PFA) specified by the user for the whole set of images to be processed. The local threshold is automatically derived from the PFA value and local image statistics estimated in a window whose size is not a critical parameter. We also propose a new data set for benchmarking, consisting of six collections of one hundred images each, which exploits backgrounds extracted from real microscopy images. We have carried out an extensive comparative evaluation on several data sets with ground-truth, which demonstrates that ATLAS outperforms existing methods. ATLAS does not need any fine parameter tuning and requires very low computation time. Convincing results are also reported on real total internal reflection fluorescence microscopy images.
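
    A rough sketch of the two ingredients described above, LoG filtering at a fixed scale and a threshold adapted to local window statistics under a user-chosen PFA, is given below. The fixed scale, the Gaussian PFA-to-threshold mapping, and the window size are simplifying assumptions and do not reproduce ATLAS's scale-selection criteria.

```python
# Rough sketch of LoG spot detection with a locally adapted threshold
# (simplified relative to ATLAS: fixed scale, Gaussian PFA assumption).
import numpy as np
from scipy import ndimage
from scipy.stats import norm

rng = np.random.default_rng(2)
image = rng.normal(loc=100.0, scale=5.0, size=(256, 256))
for r, c in rng.integers(20, 236, size=(30, 2)):             # synthetic bright spots
    image[r - 2:r + 3, c - 2:c + 3] += 40.0

scale = 2.0                                                   # assumed spot scale (pixels)
log_img = -ndimage.gaussian_laplace(image, sigma=scale)       # bright spots -> positive response

# Local mean/std in a window; threshold = mean + k*std with k set by the PFA
win = 31
local_mean = ndimage.uniform_filter(log_img, size=win)
local_sq = ndimage.uniform_filter(log_img ** 2, size=win)
local_std = np.sqrt(np.maximum(local_sq - local_mean ** 2, 1e-12))

pfa = 1e-3
k = norm.isf(pfa)                                             # Gaussian quantile for the chosen PFA
mask = log_img > local_mean + k * local_std
print("detected pixels:", int(mask.sum()))
```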

  18. Device for adapting continuously variable transmissions to infinitely variable transmissions with forward-neutral-reverse capabilities

    DOEpatents

    Wilkes, Donald F.; Purvis, James W.; Miller, A. Keith

    1997-01-01

    An infinitely variable transmission is capable of operating between a maximum speed in one direction and a minimum speed in an opposite direction, including a zero output angular velocity, while being supplied with energy at a constant angular velocity. Input energy is divided between a first power path carrying an orbital set of elements and a second path that includes a variable speed adjustment mechanism. The second power path also connects with the orbital set of elements in such a way as to vary the rate of angular rotation thereof. The combined effects of power from the first and second power paths are combined and delivered to an output element by the orbital element set. The transmission can be designed to operate over a preselected ratio of forward to reverse output speeds.

  19. Characteristics of the gait adaptation process due to split-belt treadmill walking under a wide range of right-left speed ratios in humans

    PubMed Central

    Ogawa, Tetsuya; Yamamoto, Shin-Ichiro; Nakazawa, Kimitaka

    2018-01-01

    The adaptability of human bipedal locomotion has been studied using split-belt treadmill walking. Most previous studies utilized experimental protocols with markedly different split ratios (e.g. 1:2, 1:3, or 1:4), while there is limited research on the adaptive process under small speed ratios. It is important to understand the nature of the adaptive process under ratios smaller than 1:2, because systematic evaluation of gait adaptation under small to moderate split ratios would enable us to examine the relative contribution of two forms of adaptation (reactive feedback and predictive feedforward control) to gait adaptation. We therefore examined gait behavior during split-belt treadmill adaptation under five belt speed difference conditions (from 1:1.2 to 1:2). Gait parameters related to reactive control (stance time) showed quick adjustments immediately after imposing split-belt walking in all five speed ratios. Meanwhile, parameters related to predictive control (step length and anterior force) showed a clear pattern of adaptation and subsequent aftereffects, except for the 1:1.2 adaptation. Additionally, the 1:1.2 ratio was distinguished from the other ratios by cluster analysis based on the relationship between the size of adaptation and the aftereffect. Our findings indicate that reactive feedback control was involved in all the speed ratios tested and that the extent of the reaction was proportionally dependent on the speed ratio of the split-belt. In contrast, predictive feedforward control was necessary when the ratio of the split-belt was greater. These results enable us to consider how a given split-belt training condition would affect the relative contribution of the two strategies to gait adaptation, which must be considered when developing rehabilitation interventions for stroke patients. PMID:29694404

  20. Expected effects of residual chlorine and nitrogen in sewage effluent on an estuarine pelagic ecosystem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hattis, D.; Lemerise, A.; Ratick, S.

    1995-12-31

    The authors used physical, toxicological, and system dynamic modeling tools to estimate the probable ecological effects caused by residual chlorine and nitrogen in sewage effluent discharged into Greenwich Cove, RI, USA. An energy systems model of the pelagic ecosystem in Narragansett Bay was developed and adapted for use in Greenwich Cove. This model allowed them to assess the indirect effects on organisms in the food web that result from a direct toxic effect on a given organism. Indirect, food-web-mediated effects were the primary mode of loss for bluefish, but not for menhaden. The authors chose gross primary production, the flux of carbon to the benthos, fish out-migration, and fish harvest as outcome variables indicative of different valuable ecosystem functions. Organism responses were modeled using an assumption that lethal toxic responses occur as individual organism thresholds are exceeded, and that in general thresholds are lognormally distributed in a population of mixed individuals. They performed sensitivity analyses to assess the implications of different plausible values for the probit slopes used in the model. The putative toxic damage repair rate, combined with estimates of the exposure variability for each species, determined the averaging time that was likely to be most important in producing toxicity. Temperature was an important external factor in the physical, toxicological, and ecological models. These three models can be integrated into a single model applicable to other locations and stressors given the availability of appropriate data.
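
    The central statistical assumption above, lognormally distributed individual thresholds so that the affected fraction at a given exposure follows a lognormal CDF whose spread is set by the probit slope, can be written in a few lines; all numerical values below are invented for illustration.

```python
# Sketch of the lognormal-threshold assumption: the fraction of a mixed
# population affected at a given exposure is a lognormal CDF; the probit
# slope sets the spread. All numbers are illustrative.
import numpy as np
from scipy.stats import norm

def affected_fraction(exposure, median_threshold, probit_slope):
    """Fraction of individuals whose lethal threshold is exceeded at `exposure`.

    probit_slope is probits per log10 unit of dose, so sigma(log10) = 1/slope.
    """
    z = probit_slope * (np.log10(exposure) - np.log10(median_threshold))
    return norm.cdf(z)

# Hypothetical example: exposure of 0.05 mg/L, species median threshold 0.1 mg/L
for slope in (1.0, 2.0, 4.0):                     # sensitivity to the assumed probit slope
    frac = affected_fraction(0.05, 0.1, slope)
    print(f"probit slope {slope}: affected fraction {frac:.3f}")
```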

  1. Cost-effectiveness of sialendoscopy versus medical management for radioiodine-induced sialadenitis.

    PubMed

    Kowalczyk, David M; Jordan, J Randall; Stringer, Scott P

    2018-03-30

    The medical management and radiographic identification of radioiodine-induced sialadenitis (RAIS) is challenging. This study utilizes a cost-effectiveness analysis to compare upfront sialendoscopy as both a diagnostic and therapeutic option versus multiple modalities of diagnostic radiography along with medical management. Literature review and cost-effectiveness analysis. A literature review was performed to identify the outcomes of medical management, sialendoscopy, diagnostic radiography, and surgical complications. All charges were obtained from the University of Mississippi Budget Office in 2017 US dollars and converted to costs using the 2017 Medicare Cost-to-Charge Ratio for urban medical centers. A cost-effectiveness analysis was used to evaluate the four treatment arms: sialendoscopy, medical management with ultrasound, medical management with computed tomography (CT) sialography, and medical management with magnetic resonance (MR) sialography. Sensitivity analyses were used to evaluate the confidence levels of the economic evaluation. The incremental cost-effectiveness ratio for upfront sialendoscopy versus medical management with ultrasound was $30,402.30, which demonstrates that sialendoscopy is the more cost-effective option given a willingness-to-pay threshold of $50,000. The probability that this decision is correct at a willingness-to-pay of $50,000 is 64.5%. Sialendoscopic improvement was the most sensitive variable, requiring a threshold of 0.70. Of the three imaging modalities, ultrasound dominated MR and CT sialography, both of which required a willingness-to-pay of greater than $90,000 to realize a difference. Upfront sialendoscopy is more cost-effective compared to medical management utilizing diagnostic ultrasound, assuming a willingness-to-pay threshold of $50,000. There is a clear cost-effectiveness advantage to using ultrasound with medical management over CT and MR sialography in the diagnosis and management of RAIS. NA. Laryngoscope, 2018. © 2018 The American Laryngological, Rhinological and Otological Society, Inc.
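
    The decision rule behind the reported incremental cost-effectiveness ratio is simple to state in code; the cost and effectiveness inputs below are placeholders, not the study's actual estimates.

```python
# Incremental cost-effectiveness ratio (ICER) and willingness-to-pay decision
# rule, with placeholder inputs (not the study's actual values).
def icer(cost_new, cost_ref, eff_new, eff_ref):
    return (cost_new - cost_ref) / (eff_new - eff_ref)

cost_sialendoscopy, eff_sialendoscopy = 12_000.0, 0.80   # hypothetical cost ($) and effectiveness
cost_medical_us, eff_medical_us = 9_000.0, 0.70          # hypothetical comparator arm

ratio = icer(cost_sialendoscopy, cost_medical_us, eff_sialendoscopy, eff_medical_us)
wtp = 50_000.0                                           # willingness-to-pay threshold per unit effect
print(f"ICER = ${ratio:,.0f}; prefer sialendoscopy: {ratio <= wtp}")
```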

  2. Fast ℓ1-regularized space-time adaptive processing using alternating direction method of multipliers

    NASA Astrophysics Data System (ADS)

    Qin, Lilong; Wu, Manqing; Wang, Xuan; Dong, Zhen

    2017-04-01

    Motivated by the sparsity of filter coefficients in full-dimension space-time adaptive processing (STAP) algorithms, this paper proposes a fast ℓ1-regularized STAP algorithm based on the alternating direction method of multipliers to accelerate the convergence and reduce the calculations. The proposed algorithm uses a splitting variable to obtain an equivalent optimization formulation, which is addressed with an augmented Lagrangian method. Using the alternating recursive algorithm, the method can rapidly result in a low minimum mean-square error without a large number of calculations. Through theoretical analysis and experimental verification, we demonstrate that the proposed algorithm provides a better output signal-to-clutter-noise ratio performance than other algorithms.
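
    The optimization template the abstract builds on, an ℓ1-regularized least-squares problem solved by ADMM with a splitting variable and a soft-thresholding update, is sketched below in its generic lasso form; the STAP-specific clutter covariance structure and complex-valued filter weights are omitted.

```python
# Generic ADMM for l1-regularized least squares (lasso form); the STAP-specific
# structure is omitted, this only illustrates the splitting/update pattern.
import numpy as np

def admm_lasso(A, b, lam, rho=1.0, n_iter=200):
    m, n = A.shape
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    # Cache the Cholesky factor used by every x-update
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))
    Atb = A.T @ b
    for _ in range(n_iter):
        x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0.0)   # soft threshold
        u = u + x - z
    return z

rng = np.random.default_rng(3)
A = rng.normal(size=(100, 50))
x_true = np.zeros(50); x_true[[3, 17, 42]] = [1.5, -2.0, 0.8]             # sparse truth
b = A @ x_true + 0.01 * rng.normal(size=100)
x_hat = admm_lasso(A, b, lam=0.5)
print("recovered support:", np.flatnonzero(np.abs(x_hat) > 1e-3))
```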

  3. Chaotic Signal Denoising Based on Hierarchical Threshold Synchrosqueezed Wavelet Transform

    NASA Astrophysics Data System (ADS)

    Wang, Wen-Bo; Jing, Yun-yu; Zhao, Yan-chao; Zhang, Lian-Hua; Wang, Xiang-Li

    2017-12-01

    In order to overcoming the shortcoming of single threshold synchrosqueezed wavelet transform(SWT) denoising method, an adaptive hierarchical threshold SWT chaotic signal denoising method is proposed. Firstly, a new SWT threshold function is constructed based on Stein unbiased risk estimation, which is two order continuous derivable. Then, by using of the new threshold function, a threshold process based on the minimum mean square error was implemented, and the optimal estimation value of each layer threshold in SWT chaotic denoising is obtained. The experimental results of the simulating chaotic signal and measured sunspot signals show that, the proposed method can filter the noise of chaotic signal well, and the intrinsic chaotic characteristic of the original signal can be recovered very well. Compared with the EEMD denoising method and the single threshold SWT denoising method, the proposed method can obtain better denoising result for the chaotic signal.

  4. Incrementing data quality of multi-frequency echograms using the Adaptive Wiener Filter (AWF) denoising algorithm

    NASA Astrophysics Data System (ADS)

    Peña, M.

    2016-10-01

    Achieving an acceptable signal-to-noise ratio (SNR) can be difficult when working in sparsely populated waters and/or when species have low scattering, such as fluid-filled animals. The increasing use of higher frequencies and the study of greater depths in fisheries acoustics, as well as the use of commercial vessels, are raising the need to employ good denoising algorithms. The use of a lower Sv threshold to remove noise or unwanted targets is not suitable in many cases and increases the relative background noise component in the echogram, demanding more effectiveness from denoising algorithms. The Adaptive Wiener Filter (AWF) denoising algorithm is presented in this study. The technique is based on the AWF commonly used in digital photography and video enhancement. The algorithm first improves the quality of the data with variance-dependent smoothing, before estimating the noise level as the envelope of the Sv minima. The AWF denoising algorithm outperforms existing algorithms in the presence of Gaussian, speckle and salt-and-pepper noise, although impulse noise needs to be removed beforehand. Cleaned echograms present homogeneous echotraces with clearly outlined edges.
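
    scipy ships a local-statistics Wiener filter from the same family as the AWF described above; the sketch below applies it to a synthetic echogram and does not reproduce the published variance-dependent smoothing or Sv-minima noise-envelope steps.

```python
# Local-statistics Wiener filtering of a synthetic "echogram" (dB values);
# same filter family as the AWF described, not the published algorithm itself.
import numpy as np
from scipy.signal import wiener

rng = np.random.default_rng(4)
sv = np.full((200, 300), -90.0)                 # background Sv in dB (synthetic)
sv[80:95, :] = -60.0                            # a horizontal scattering layer
noisy = sv + rng.normal(scale=4.0, size=sv.shape)

# mysize sets the local window used to estimate mean/variance; the noise power
# can be given explicitly or estimated by the filter from the local variances.
cleaned = wiener(noisy, mysize=(5, 5))
print("residual std inside layer:", float(np.std(cleaned[80:95, :] - sv[80:95, :])))
```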

  5. Microbial communities acclimate to recurring changes in soil redox potential status

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeAngelis, Kristen M.; Silver, Whendee; Thompson, Andrew

    Rapidly fluctuating environmental conditions can significantly stress organisms, particularly when fluctuations cross thresholds of normal physiological tolerance. Redox potential fluctuations are common in humid tropical soils, and microbial community acclimation or avoidance strategies for survival will in turn shape microbial community diversity and biogeochemistry. To assess the extent to which indigenous bacterial and archaeal communities are adapted to changing in redox potential, soils were incubated under static anoxic, static oxic or fluctuating redox potential conditions, and the standing (DNA-based) and active (RNA-based) communities and biogeochemistry were determined. Fluctuating redox potential conditions permitted simultaneous CO{sub 2} respiration, methanogenesis, N{sub 2}O productionmore » and iron reduction. Exposure to static anaerobic conditions significantly changed community composition, while 4-day redox potential fluctuations did not. Using RNA: DNA ratios as a measure of activity, 285 taxa were more active under fluctuating than static conditions, compared with three taxa that were more active under static compared with fluctuating conditions. These data suggest an indigenous microbialcommunity adapted to fluctuating redox potential.« less

  6. Parametric adaptive filtering and data validation in the bar GW detector AURIGA

    NASA Astrophysics Data System (ADS)

    Ortolan, A.; Baggio, L.; Cerdonio, M.; Prodi, G. A.; Vedovato, G.; Vitale, S.

    2002-04-01

    We report on our experience gained in the signal processing of the resonant GW detector AURIGA. Signal amplitude and arrival time are estimated by means of a matched-adaptive Wiener filter. The detector noise, entering the filter set-up, is modelled as a parametric ARMA process; to account for slow non-stationarity of the noise, the ARMA parameters are estimated on an hourly basis. A requirement of the set-up of an unbiased Wiener filter is the separation of time spans with 'almost Gaussian' noise from non-Gaussian and/or strongly non-stationary time spans. The separation algorithm consists basically of a variance estimate with the Chauvenet convergence method and a threshold on the kurtosis index. The subsequent validation of data is strictly connected with the separation procedure: in fact, by injecting a large number of artificial GW signals into the 'almost Gaussian' part of the AURIGA data stream, we have demonstrated that the effective probability distributions of the signal-to-noise ratio, χ2 and the time of arrival are those that are expected.
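
    The 'almost Gaussian' data-selection step can be approximated by a per-segment variance and kurtosis screen, as sketched below; the segment length, the kurtosis threshold, and the synthetic data stream are assumptions, and the Chauvenet convergence step is omitted.

```python
# Crude stand-in for the data-selection step: flag time spans whose excess
# kurtosis exceeds a threshold as non-Gaussian (Chauvenet step omitted).
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(5)
stream = rng.normal(size=100_000)
stream[40_000:41_000] += rng.standard_t(df=2, size=1_000)   # a non-Gaussian, bursty span

seg_len, kurt_max = 5_000, 1.0                               # assumed segment length / threshold
for i in range(0, stream.size, seg_len):
    seg = stream[i:i + seg_len]
    k = kurtosis(seg, fisher=True)                           # 0 for a Gaussian segment
    flag = "reject" if abs(k) > kurt_max else "keep"
    print(f"segment {i // seg_len:2d}: excess kurtosis {k:+.2f} -> {flag}")
```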

  7. Determination of cellular strains by combined atomic force microscopy and finite element modeling.

    PubMed Central

    Charras, Guillaume T; Horton, Mike A

    2002-01-01

    Many organs adapt to their mechanical environment as a result of physiological change or disease. Cells are both the detectors and effectors of this process. Though many studies have been performed in vitro to investigate the mechanisms of detection and adaptation to mechanical strains, the cellular strains remain unknown and results from different stimulation techniques cannot be compared. By combining experimental determination of cell profiles and elasticities by atomic force microscopy with finite element modeling and computational fluid dynamics, we report the cellular strain distributions exerted by common whole-cell straining techniques and from micromanipulation techniques, hence enabling their comparison. Using data from our own analyses and experiments performed by others, we examine the threshold of activation for different signal transduction processes and the strain components that they may detect. We show that modulating cell elasticity, by increasing the F-actin content of the cytoskeleton, or cellular Poisson ratio are good strategies to resist fluid shear or hydrostatic pressure. We report that stray fluid flow in some substrate-stretch systems elicits significant cellular strains. In conclusion, this technique shows promise in furthering our understanding of the interplay among mechanical forces, strain detection, gene expression, and cellular adaptation in physiology and disease. PMID:12124270

  8. Socio-hydrological modelling of floods: investigating community resilience, adaptation capacity and risk

    NASA Astrophysics Data System (ADS)

    Ciullo, Alessio; Viglione, Alberto; Castellarin, Attilio

    2016-04-01

    Changes in flood risk occur because of changes in climate and hydrology and in societal exposure and vulnerability. Research on change in flood risk has demonstrated that the mutual interactions and continuous feedbacks between floods and societies have to be taken into account in flood risk management. The present work builds on an existing conceptual model of a hypothetical city located in the proximity of a river, along whose floodplains the community evolves over time. The model reproduces the dynamic co-evolution of four variables: flooding, population density of the floodplain, amount of structural protection measures, and memory of floods. These variables are then combined to mimic the temporal change of community resilience, defined as the (inverse of the) amount of time for the community to recover from a shock, and adaptation capacity, defined as the ratio between damages due to subsequent events. Temporally changing exposure, vulnerability and probability of flooding are also modelled, which results in a dynamically varying flood risk. Examples are provided that show how factors such as collective memory and risk-taking attitude influence the dynamics of community resilience, adaptation capacity and risk.

  9. An adaptive ARX model to estimate the RUL of aluminum plates based on its crack growth

    NASA Astrophysics Data System (ADS)

    Barraza-Barraza, Diana; Tercero-Gómez, Víctor G.; Beruvides, Mario G.; Limón-Robles, Jorge

    2017-01-01

    A wide variety of Condition-Based Maintenance (CBM) techniques deal with the problem of predicting the time to an asset fault. Most statistical approaches rely on historical failure data that might not be available in several practical situations. To address this issue, practitioners might require the use of self-starting approaches that consider only the available knowledge about the current degradation process and the asset operating context to update the prognostic model. Some authors use Autoregressive (AR) models for this purpose; these are adequate when the asset operating context is constant, but if it is variable, the accuracy of the models can be affected. In this paper, three autoregressive models with exogenous variables (ARX) were constructed, and their capability to estimate the remaining useful life (RUL) of a process was evaluated for the case of the aluminum crack growth problem. An existing stochastic model of aluminum crack growth was implemented and used to assess the RUL estimation performance of the proposed ARX models through extensive Monte Carlo simulations. Point and interval estimations were made based only on individual history, behavior, operating conditions and failure thresholds. Both analytic and bootstrapping techniques were used in the estimation process. Finally, by including recursive parameter estimation and a forgetting factor, the ARX methodology adapts to changing operating conditions and maintains the focus on the current degradation level of an asset.
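
    The adaptation mechanism mentioned above, recursive parameter estimation with a forgetting factor, is sketched below for a toy ARX(1,1) model; the crack-growth physics, failure thresholds, and RUL interval construction are outside this sketch.

```python
# Minimal recursive least squares with a forgetting factor for an ARX(1,1)
# model y[t] = a*y[t-1] + b*u[t-1] + e[t]; illustrates the adaptation step only.
import numpy as np

rng = np.random.default_rng(6)
n = 400
u = rng.normal(size=n)                          # exogenous input (e.g. operating condition)
y = np.zeros(n)
for t in range(1, n):
    a_true = 0.5 if t < 200 else 0.8            # parameters drift mid-way
    y[t] = a_true * y[t - 1] + 0.3 * u[t - 1] + 0.05 * rng.normal()

theta = np.zeros(2)                              # estimated [a, b]
P = 1e3 * np.eye(2)
lam = 0.98                                       # forgetting factor < 1 tracks changes
for t in range(1, n):
    phi = np.array([y[t - 1], u[t - 1]])
    k = P @ phi / (lam + phi @ P @ phi)          # gain
    theta = theta + k * (y[t] - phi @ theta)     # parameter update
    P = (P - np.outer(k, phi) @ P) / lam         # covariance update

print("estimated [a, b] at end:", np.round(theta, 3))
```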

  10. Graphene barristor, a triode device with a gate-controlled Schottky barrier.

    PubMed

    Yang, Heejun; Heo, Jinseong; Park, Seongjun; Song, Hyun Jae; Seo, David H; Byun, Kyung-Eun; Kim, Philip; Yoo, InKyeong; Chung, Hyun-Jong; Kim, Kinam

    2012-06-01

    Despite several years of research into graphene electronics, a sufficient on/off current ratio Ion/Ioff in graphene transistors with conventional device structures has been impossible to obtain. We report on a three-terminal active device, a graphene variable-barrier "barristor" (GB), in which the key is an atomically sharp interface between graphene and hydrogenated silicon. Large modulation of the device current (on/off ratio of 10^5) is achieved by adjusting the gate voltage to control the graphene-silicon Schottky barrier. The absence of Fermi-level pinning at the interface allows the barrier's height to be tuned to 0.2 electron volt by adjusting graphene's work function, which results in large shifts of diode threshold voltages. Fabricating GBs on respective 150-mm wafers and combining complementary p- and n-type GBs, we demonstrate inverter and half-adder logic circuits.

  11. Neuronal detection thresholds during vestibular compensation: contributions of response variability and sensory substitution.

    PubMed

    Jamali, Mohsen; Mitchell, Diana E; Dale, Alexis; Carriot, Jerome; Sadeghi, Soroush G; Cullen, Kathleen E

    2014-04-01

    The vestibular system is responsible for processing self-motion, allowing normal subjects to discriminate the direction of rotational movements as slow as 1-2 deg s(-1). After unilateral vestibular injury patients' direction-discrimination thresholds worsen to ∼20 deg s(-1), and despite some improvement thresholds remain substantially elevated following compensation. To date, however, the underlying neural mechanisms of this recovery have not been addressed. Here, we recorded from first-order central neurons in the macaque monkey that provide vestibular information to higher brain areas for self-motion perception. Immediately following unilateral labyrinthectomy, neuronal detection thresholds increased by more than two-fold (from 14 to 30 deg s(-1)). While thresholds showed slight improvement by week 3 (25 deg s(-1)), they never recovered to control values - a trend mirroring the time course of perceptual thresholds in patients. We further discovered that changes in neuronal response variability paralleled changes in sensitivity for vestibular stimulation during compensation, thereby causing detection thresholds to remain elevated over time. However, we found that in a subset of neurons, the emergence of neck proprioceptive responses combined with residual vestibular modulation during head-on-body motion led to better neuronal detection thresholds. Taken together, our results emphasize that increases in response variability to vestibular inputs ultimately constrain neural thresholds and provide evidence that sensory substitution with extravestibular (i.e. proprioceptive) inputs at the first central stage of vestibular processing is a neural substrate for improvements in self-motion perception following vestibular loss. Thus, our results provide a neural correlate for the patient benefits provided by rehabilitative strategies that take advantage of the convergence of these multisensory cues.

  13. Long-term follow-up after near-infrared spectroscopy coronary imaging: Insights from the lipid cORe plaque association with CLinical events (ORACLE-NIRS) registry.

    PubMed

    Danek, Barbara Anna; Karatasakis, Aris; Karacsonyi, Judit; Alame, Aya; Resendes, Erica; Kalsaria, Pratik; Nguyen-Trong, Phuong-Khanh J; Rangan, Bavana V; Roesle, Michele; Abdullah, Shuaib; Banerjee, Subhash; Brilakis, Emmanouil S

    Coronary lipid core plaque may be associated with the incidence of subsequent cardiovascular events. We analyzed outcomes of 239 patients who underwent near-infrared spectroscopy (NIRS) coronary imaging between 2009 and 2011. Multivariable Cox regression was used to identify variables independently associated with the incidence of major adverse cardiovascular events (MACE; cardiac mortality, acute coronary syndromes (ACS), stroke, and unplanned revascularization) during follow-up. Mean patient age was 64 ± 9 years, 99% were men, and 50% were diabetic, presenting with stable coronary artery disease (61%) or an acute coronary syndrome (ACS, 39%). Target vessel pre-stenting median lipid core burden index (LCBI) was 88 [interquartile range, IQR 50-130]. Median LCBI in non-target vessels was 57 [IQR 26-94]. Median follow-up was 5.3 years. The 5-year MACE rate was 37.5% (cardiac mortality was 15.0%). On multivariable analysis the following variables were associated with MACE: diabetes mellitus, prior percutaneous coronary intervention performed at index angiography, and non-target vessel LCBI. Non-target vessel LCBI of 77 was determined using receiver-operating characteristic curve analysis to be a threshold for prediction of MACE in our cohort. The adjusted hazard ratio (HR) for non-target vessel LCBI ≥77 was 14.05 (95% confidence interval (CI) 2.47-133.51, p=0.002). The 5-year cumulative incidence of events in the above-threshold group was 58.0% vs. 13.1% in the below-threshold group. During long-term follow-up of patients who underwent NIRS imaging, high LCBI in a non-PCI target vessel was associated with increased incidence of MACE. Published by Elsevier Inc.

  14. Chronic pain impairs cognitive flexibility and engages novel learning strategies in rats.

    PubMed

    Cowen, Stephen L; Phelps, Caroline E; Navratilova, Edita; McKinzie, David L; Okun, Alec; Husain, Omar; Gleason, Scott D; Witkin, Jeffrey M; Porreca, Frank

    2018-03-22

    Cognitive flexibility, the ability to adapt behavior to changing outcomes, is critical for survival. The prefrontal cortex is a key site of cognitive control and chronic pain is known to lead to significant morphological changes to this brain region. Nevertheless, the effects of chronic pain on cognitive flexibility and learning remain uncertain. We used an instrumental paradigm to assess adaptive learning in an experimental model of chronic pain induced by tight ligation of the spinal nerves L5/6 (SNL model). Naïve, sham-operated, and SNL rats were trained to perform fixed-ratio, variable-ratio, and contingency-shift behaviors for food reward. Although all groups learned an initial lever-reward contingency, learning was slower in SNL animals in a subsequent choice task that reversed reinforcement contingencies. Temporal analysis of lever-press responses across sessions indicated no apparent deficits in memory consolidation or retrieval. However, analysis of learning within sessions revealed that the lever presses of SNL animals occurred in bursts followed by delays. Unexpectedly, the degree of bursting correlated positively with learning. Under a variable-ratio probabilistic task, SNL rats chose a less profitable behavioral strategy compared to naïve and sham-operated animals. Following extinction of behavior for learned preferences, SNL animals reverted to their initially preferred (i.e., less profitable) behavioral choice. Our data suggest, that in the face of uncertainty, chronic pain drives a preference for familiar associations, consistent with reduced cognitive flexibility. The observed burst-like responding may represent a novel learning strategy in animals with chronic pain.

  15. Deep sub-threshold ϕ production in Au+Au collisions

    NASA Astrophysics Data System (ADS)

    Adamczewski-Musch, J.; Arnold, O.; Behnke, C.; Belounnas, A.; Belyaev, A.; Berger-Chen, J. C.; Biernat, J.; Blanco, A.; Blume, C.; Böhmer, M.; Bordalo, P.; Chernenko, S.; Chlad, L.; Deveaux, C.; Dreyer, J.; Dybczak, A.; Epple, E.; Fabbietti, L.; Fateev, O.; Filip, P.; Fonte, P.; Franco, C.; Friese, J.; Fröhlich, I.; Galatyuk, T.; Garzón, J. A.; Gernhäuser, R.; Golubeva, M.; Greifenhagen, R.; Guber, F.; Gumberidze, M.; Harabasz, S.; Heinz, T.; Hennino, T.; Hlavac, S.; Höhne, C.; Holzmann, R.; Ierusalimov, A.; Ivashkin, A.; Kämpfer, B.; Karavicheva, T.; Kardan, B.; Koenig, I.; Koenig, W.; Kolb, B. W.; Korcyl, G.; Kornakov, G.; Kotte, R.; Kühn, W.; Kugler, A.; Kunz, T.; Kurepin, A.; Kurilkin, A.; Kurilkin, P.; Ladygin, V.; Lalik, R.; Lapidus, K.; Lebedev, A.; Lopes, L.; Lorenz, M.; Mahmoud, T.; Maier, L.; Mangiarotti, A.; Markert, J.; Maurus, S.; Metag, V.; Michel, J.; Mihaylov, D. M.; Morozov, S.; Müntz, C.; Münzer, R.; Naumann, L.; Nowakowski, K. N.; Palka, M.; Parpottas, Y.; Pechenov, V.; Pechenova, O.; Petukhov, O.; Pietraszko, J.; Przygoda, W.; Ramos, S.; Ramstein, B.; Reshetin, A.; Rodriguez-Ramos, P.; Rosier, P.; Rost, A.; Sadovsky, A.; Salabura, P.; Scheib, T.; Schuldes, H.; Schwab, E.; Scozzi, F.; Seck, F.; Sellheim, P.; Siebenson, J.; Silva, L.; Sobolev, Yu. G.; Spataro, S.; Ströbele, H.; Stroth, J.; Strzempek, P.; Sturm, C.; Svoboda, O.; Szala, M.; Tlusty, P.; Traxler, M.; Tsertos, H.; Usenko, E.; Wagner, V.; Wendisch, C.; Wiebusch, M. G.; Wirth, J.; Zanevsky, Y.; Zumbruch, P.; Hades Collaboration

    2018-03-01

    We present data on charged kaons (K±) and ϕ mesons in Au(1.23A GeV)+Au collisions. It is the first simultaneous measurement of K- and ϕ mesons in central heavy-ion collisions below a kinetic beam energy of 10A GeV. The ϕ/K- multiplicity ratio is found to be surprisingly high, with a value of 0.52 ± 0.16, and shows no dependence on the centrality of the collision. Consequently, the different slopes of the K+ and K- transverse-mass spectra can be explained solely by feed-down, which substantially softens the spectra of K- mesons. Hence, in contrast to the commonly adopted argumentation in the literature, the different slopes do not necessarily imply diverging freeze-out temperatures of K+ and K- mesons caused by different couplings to baryons.

  16. Carbon deposition thresholds on nickel-based solid oxide fuel cell anodes II. Steam:carbon ratio and current density

    NASA Astrophysics Data System (ADS)

    Kuhn, J.; Kesler, O.

    2015-03-01

    In this second part of a two-part publication, coking thresholds with respect to the molar steam:carbon ratio (SC) and current density in nickel-based solid oxide fuel cells were determined. Anode-supported button cell samples were exposed to 2-component and 5-component gas mixtures with 1 ≤ SC ≤ 2 and zero fuel utilization for 10 h, followed by measurement of the resulting carbon mass. The effect of current density was explored by measuring carbon mass under conditions known to be prone to coking while increasing the current density until the cell was carbon-free. The SC coking thresholds were measured to be ∼1.04 and ∼1.18 at 600 and 700 °C, respectively. Current density experiments validated the thresholds measured with respect to fuel utilization and steam:carbon ratio. Coking thresholds at 600 °C could be predicted with thermodynamic equilibrium calculations when the Gibbs free energy of carbon was appropriately modified. Here, the Gibbs free energy of carbon on nickel-based anode support cermets was measured to be -6.91 ± 0.08 kJ mol-1. The results of this two-part publication show that thermodynamic equilibrium calculations with appropriate modification to the Gibbs free energy of solid-phase carbon can be used to predict coking thresholds on nickel-based anodes at 600-700 °C.

  17. A visual detection model for DCT coefficient quantization

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert J., Jr.; Watson, Andrew B.

    1994-01-01

    The discrete cosine transform (DCT) is widely used in image compression and is part of the JPEG and MPEG compression standards. The degree of compression and the amount of distortion in the decompressed image are controlled by the quantization of the transform coefficients. The standards do not specify how the DCT coefficients should be quantized. One approach is to set the quantization level for each coefficient so that the quantization error is near the threshold of visibility. Results from previous work are combined to form the current best detection model for DCT coefficient quantization noise. This model predicts sensitivity as a function of display parameters, enabling quantization matrices to be designed for display situations varying in luminance, veiling light, and spatial-frequency-related conditions (pixel size, viewing distance, and aspect ratio). It also allows arbitrary color space directions for the representation of color. A model-based method of optimizing the quantization matrix for an individual image was developed. The model described above provides visual thresholds for each DCT frequency. These thresholds are adjusted within each block for visual light adaptation and contrast masking. For a given quantization matrix, the DCT quantization errors are scaled by the adjusted thresholds to yield perceptual errors. These errors are pooled nonlinearly over the image to yield total perceptual error. With this model one may estimate the quantization matrix for a particular image that yields minimum bit rate for a given total perceptual error, or minimum perceptual error for a given bit rate. Custom matrices for a number of images show clear improvement over image-independent matrices. Custom matrices are compatible with the JPEG standard, which requires transmission of the quantization matrix.
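
    The mechanics described above (block DCT, quantization, errors scaled by per-frequency thresholds and pooled nonlinearly) can be sketched as follows; the flat quantization matrix, the threshold matrix, and the Minkowski pooling exponent are placeholders rather than the paper's fitted model.

```python
# Sketch of DCT quantization and perceptually weighted error pooling; the
# threshold matrix and pooling exponent are placeholders, not the fitted model.
import numpy as np
from scipy.fftpack import dct, idct

def dct2(block):
    return dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

rng = np.random.default_rng(7)
block = rng.integers(0, 256, size=(8, 8)).astype(float)      # one 8x8 image block

Q = np.full((8, 8), 16.0)                                    # placeholder quantization matrix
T = 1.0 + 0.5 * np.add.outer(np.arange(8), np.arange(8))     # placeholder per-frequency thresholds

coeffs = dct2(block - 128.0)
quantized = np.round(coeffs / Q) * Q
errors = np.abs(coeffs - quantized) / T                      # threshold-scaled (perceptual) errors

beta = 4.0                                                   # Minkowski pooling exponent (assumed)
total_perceptual_error = np.sum(errors ** beta) ** (1.0 / beta)
print(f"pooled perceptual error: {total_perceptual_error:.3f}")
```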

  18. A data centred method to estimate and map how the local distribution of daily precipitation is changing

    NASA Astrophysics Data System (ADS)

    Chapman, Sandra; Stainforth, David; Watkins, Nick

    2014-05-01

    Estimates of how our climate is changing are needed locally in order to inform adaptation planning decisions. This requires quantifying the geographical patterns in changes at specific quantiles in distributions of variables such as daily temperature or precipitation. Here we focus on these local changes and on a method to transform daily observations of precipitation into patterns of local climate change. We develop a method[1] for analysing local climatic timeseries to assess which quantiles of the local climatic distribution show the greatest and most robust changes, to specifically address the challenges presented by daily precipitation data. We extract from the data quantities that characterize the changes in time of the likelihood of daily precipitation above a threshold and of the relative amount of precipitation in those days. Our method is a simple mathematical deconstruction of how the difference between two observations from two different time periods can be assigned to the combination of natural statistical variability and/or the consequences of secular climate change. This deconstruction facilitates an assessment of how fast different quantiles of precipitation distributions are changing. This involves determining both which quantiles and geographical locations show the greatest change and those at which any change is highly uncertain. We demonstrate this approach using E-OBS gridded data[2] timeseries of local daily precipitation from specific locations across Europe over the last 60 years. We treat geographical location and precipitation as independent variables and thus obtain as outputs the pattern of change at a given threshold of precipitation and with geographical location. This is model-independent, thus providing data of direct value in model calibration and assessment. Our results show regionally consistent patterns of systematic increase in precipitation on the wettest days, and of drying across all days, which is of potential value in adaptation planning. [1] S C Chapman, D A Stainforth, N W Watkins, 2013, On Estimating Local Long Term Climate Trends, Phil. Trans. R. Soc. A, 371 20120287; D. A. Stainforth, 2013, S. C. Chapman, N. W. Watkins, Mapping climate change in European temperature distributions, Environ. Res. Lett. 8, 034031 [2] Haylock, M.R., N. Hofstra, A.M.G. Klein Tank, E.J. Klok, P.D. Jones and M. New. 2008: A European daily high-resolution gridded dataset of surface temperature and precipitation. J. Geophys. Res (Atmospheres), 113, D20119

  19. Assessment and Management of North American horseshoe crab populations, with emphasis on a multispecies framework for Delaware Bay, U.S.A. populations: Chapter 24

    USGS Publications Warehouse

    Millard, Michael J.; Sweka, John A.; McGowan, Conor P.; Smith, David R.

    2015-01-01

    The horseshoe crab fishery on the US Atlantic coast represents a compelling fishery management story for many reasons, including ecological complexity, health and human safety ramifications, and socio-economic conflicts. Knowledge of stock status and assessment and monitoring capabilities for the species have increased greatly in the last 15 years and permitted managers to make more informed harvest recommendations. Incorporating the bioenergetics needs of migratory shorebirds, which feed on horseshoe crab eggs, into the management framework for horseshoe crabs was identified as a goal, particularly in the Delaware Bay region where the birds and horseshoe crabs exhibit an important ecological interaction. In response, significant effort was invested in studying the population dynamics, migration ecology, and the ecologic relationship of a key migratory shorebird, the Red Knot, to horseshoe crabs. A suite of models was developed that linked Red Knot populations to horseshoe crab populations through a mass gain function where female spawning crab abundance determined what proportion of the migrating Red Knot population reached a critical body mass threshold. These models were incorporated in an adaptive management framework wherein optimal harvest decisions for horseshoe crab are recommended based on several resource-based and value-based variables and thresholds. The current adaptive framework represents a true multispecies management effort where additional data over time are employed to improve the predictive models and reduce parametric uncertainty. The possibility of increasing phenologic asynchrony between the two taxa in response to climate change presents a potential challenge to their ecologic interaction in Delaware Bay.

  20. Experimental and Automated Analysis Techniques for High-resolution Electrical Mapping of Small Intestine Slow Wave Activity

    PubMed Central

    Angeli, Timothy R; O'Grady, Gregory; Paskaranandavadivel, Niranchan; Erickson, Jonathan C; Du, Peng; Pullan, Andrew J; Bissett, Ian P

    2013-01-01

    Background/Aims Small intestine motility is governed by an electrical slow wave activity, and abnormal slow wave events have been associated with intestinal dysmotility. High-resolution (HR) techniques are necessary to analyze slow wave propagation, but progress has been limited by few available electrode options and laborious manual analysis. This study presents novel methods for in vivo HR mapping of small intestine slow wave activity. Methods Recordings were obtained from along the porcine small intestine using flexible printed circuit board arrays (256 electrodes; 4 mm spacing). Filtering options were compared, and analysis was automated through adaptations of the falling-edge variable-threshold (FEVT) algorithm and graphical visualization tools. Results A Savitzky-Golay filter was chosen with polynomial-order 9 and window size 1.7 seconds, which maintained 94% of slow wave amplitude, 57% of gradient and achieved a noise correction ratio of 0.083. Optimized FEVT parameters achieved 87% sensitivity and 90% positive-predictive value. Automated activation mapping and animation successfully revealed slow wave propagation patterns, and frequency, velocity, and amplitude were calculated and compared at 5 locations along the intestine (16.4 ± 0.3 cpm, 13.4 ± 1.7 mm/sec, and 43 ± 6 µV, respectively, in the proximal jejunum). Conclusions The methods developed and validated here will greatly assist small intestine HR mapping, and will enable experimental and translational work to evaluate small intestine motility in health and disease. PMID:23667749
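
    The reported filter settings (polynomial order 9, 1.7 s window) map directly onto scipy's Savitzky-Golay filter once a sampling rate is assumed; the 512 Hz rate and the synthetic slow-wave trace below are placeholders, not the porcine recordings.

```python
# Savitzky-Golay smoothing of a synthetic slow-wave trace using the reported
# settings (order 9, 1.7 s window); the 512 Hz sampling rate is an assumption.
import numpy as np
from scipy.signal import savgol_filter

fs = 512                                         # assumed sampling rate (Hz)
t = np.arange(0, 60, 1 / fs)                     # 60 s of data
slow_wave = 40e-6 * np.sin(2 * np.pi * (16.4 / 60) * t)   # ~16.4 cpm, ~40 uV amplitude
rng = np.random.default_rng(8)
raw = slow_wave + 10e-6 * rng.normal(size=t.size)

window = int(round(1.7 * fs)) | 1                # 1.7 s window, forced to an odd sample count
smoothed = savgol_filter(raw, window_length=window, polyorder=9)

retained = np.ptp(smoothed) / np.ptp(slow_wave)  # crude amplitude-retention check
print(f"window = {window} samples, amplitude retained ~ {retained:.2f}")
```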

  1. Mn/Ca intra- and inter-test variability in the benthic foraminifer Ammonia tepida

    NASA Astrophysics Data System (ADS)

    Petersen, Jassin; Barras, Christine; Bézos, Antoine; La, Carole; de Nooijer, Lennart J.; Meysman, Filip J. R.; Mouret, Aurélia; Slomp, Caroline P.; Jorissen, Frans J.

    2018-01-01

    The adaptation of some benthic foraminiferal species to low-oxygen conditions provides the prospect of using the chemical composition of their tests as proxies for bottom water oxygenation. Manganese may be particularly suitable as such a geochemical proxy because this redox element is soluble in reduced form (Mn2+) and hence can be incorporated into benthic foraminiferal tests under low-oxygen conditions. Therefore, intra- and inter-test differences in foraminiferal Mn/Ca ratios may hold important information about short-term variability in pore water Mn2+ concentrations and sediment redox conditions. Here, we studied Mn/Ca intra- and inter-test variability in living individuals of the shallow infaunal foraminifer Ammonia tepida sampled in Lake Grevelingen (the Netherlands) in three different months of 2012. The deeper parts of this lake are characterized by seasonal hypoxia/anoxia with associated shifts in microbial activity and sediment geochemistry, leading to seasonal Mn2+ accumulation in the pore water. Earlier laboratory experiments with similar seawater Mn2+ concentrations as encountered in the pore waters of Lake Grevelingen suggest that intra-test variability due to ontogenetic trends (i.e. size-related effects) and/or other vital effects occurring during calcification in A. tepida (11-25 % relative SD, RSD) is responsible for part of the observed variability in Mn/Ca. Our present results show that the seasonally highly dynamic environmental conditions in the study area lead to a strongly increased Mn/Ca intra- and inter-test variability (average of 45 % RSD). Within single specimens, both increasing and decreasing trends in Mn/Ca ratios with size are observed. Our results suggest that the variability in successive single-chamber Mn/Ca ratios reflects the temporal variability in pore water Mn2+. Additionally, active or passive migration of the foraminifera in the surface sediment may explain part of the observed Mn/Ca variability.

  2. Protograph based LDPC codes with minimum distance linearly growing with block size

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Jones, Christopher; Dolinar, Sam; Thorpe, Jeremy

    2005-01-01

    We propose several LDPC code constructions that simultaneously achieve good threshold and error floor performance. Minimum distance is shown to grow linearly with block size (similar to regular codes of variable degree at least 3) by considering ensemble average weight enumerators. Our constructions are based on projected graph, or protograph, structures that support high-speed decoder implementations. As with irregular ensembles, our constructions are sensitive to the proportion of degree-2 variable nodes. A code with too few such nodes tends to have an iterative decoding threshold that is far from the capacity threshold. A code with too many such nodes tends not to exhibit a minimum distance that grows linearly in block length. In this paper we also show that precoding can be used to lower the threshold of regular LDPC codes. The decoding thresholds of the proposed codes, which have linearly increasing minimum distance in block size, outperform those of regular LDPC codes. Furthermore, a family of low to high rate codes, with thresholds that adhere closely to their respective channel capacity thresholds, is presented. Simulation results for a few example codes show that the proposed codes have low error floors as well as good threshold SNR performance.
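
    As an illustration of the protograph idea (not the authors' specific constructions), the sketch below lifts a small base matrix into a binary parity-check matrix by replacing each base entry with a sum of distinct circulant permutation matrices. The base matrix, the lifting size Z, and the random shifts are placeholder choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def circulant_permutation(Z, shift):
    """Z x Z identity matrix cyclically shifted by 'shift' columns."""
    return np.roll(np.eye(Z, dtype=int), shift, axis=1)

def lift(base, Z):
    """Expand a protograph base matrix into a binary parity-check matrix."""
    rows, cols = base.shape
    H = np.zeros((rows * Z, cols * Z), dtype=int)
    for i in range(rows):
        for j in range(cols):
            # Parallel edges in the protograph become sums of distinct circulants
            shifts = rng.choice(Z, size=base[i, j], replace=False)
            block = sum((circulant_permutation(Z, int(s)) for s in shifts),
                        start=np.zeros((Z, Z), dtype=int))
            H[i*Z:(i+1)*Z, j*Z:(j+1)*Z] = block
    return H

# Toy protograph; a real design would control the share of degree-2 variable nodes
base = np.array([[1, 2, 1, 0],
                 [1, 1, 1, 1]])
H = lift(base, Z=8)
print(H.shape, H.sum())   # (16, 32) parity-check matrix and its number of ones
```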

  3. A factorization approach to next-to-leading-power threshold logarithms

    NASA Astrophysics Data System (ADS)

    Bonocore, D.; Laenen, E.; Magnea, L.; Melville, S.; Vernazza, L.; White, C. D.

    2015-06-01

    Threshold logarithms become dominant in partonic cross sections when the selected final state forces gluon radiation to be soft or collinear. Such radiation factorizes at the level of scattering amplitudes, and this leads to the resummation of threshold logarithms which appear at leading power in the threshold variable. In this paper, we consider the extension of this factorization to include effects suppressed by a single power of the threshold variable. Building upon the Low-Burnett-Kroll-Del Duca (LBKD) theorem, we propose a decomposition of radiative amplitudes into universal building blocks, which contain all effects ultimately responsible for next-to-leading-power (NLP) threshold logarithms in hadronic cross sections for electroweak annihilation processes. In particular, we provide a NLO evaluation of the radiative jet function, responsible for the interference of next-to-soft and collinear effects in these cross sections. As a test, using our expression for the amplitude, we reproduce all abelian-like NLP threshold logarithms in the NNLO Drell-Yan cross section, including the interplay of real and virtual emissions. Our results are a significant step towards developing a generally applicable resummation formalism for NLP threshold effects, and illustrate the breakdown of next-to-soft theorems for gauge theory amplitudes at loop level.

  4. Introducing hydrological information in rainfall intensity-duration thresholds

    NASA Astrophysics Data System (ADS)

    Greco, Roberto; Bogaard, Thom

    2016-04-01

    Regional landslide hazard assessment is mainly based on empirically derived precipitation-intensity-duration (PID) thresholds. Generally, two features of rainfall events are plotted to discriminate between observed occurrence and absence of mass movements, and a separation line is then drawn in logarithmic space. Although successfully applied in many case studies, such PID thresholds suffer from many false positives as well as limited physical process insight. One of the main limitations is that they do not include any information about the hydrological processes occurring along the slopes, so that the triggering is related only to rainfall characteristics. In order to introduce such hydrological information into the definition of rainfall thresholds for shallow landslide triggering assessment, this study proposes the introduction of non-dimensional rainfall characteristics. In particular, rainstorm depth, intensity and duration are divided by a characteristic infiltration depth, a characteristic infiltration rate and a characteristic duration, respectively. These latter variables depend on the hydraulic properties and on the moisture state of the soil cover at the beginning of the precipitation. The proposed variables are applied to the case of a slope covered with shallow pyroclastic deposits in Cervinara (southern Italy), for which experimental data of hourly rainfall and soil suction were available. Rainfall thresholds defined with the proposed non-dimensional variables perform significantly better than those defined with dimensional variables, either in the intensity-duration plane or in the depth-duration plane.
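
    A minimal sketch of the proposed non-dimensionalisation is given below. The characteristic infiltration depth and rate would come from the hydraulic properties and antecedent moisture of the soil cover; the numbers used here are purely hypothetical.

```python
def nondimensional_rainfall(depth_mm, duration_h, char_depth_mm, char_rate_mm_h):
    """Return (non-dimensional depth, intensity, duration) for one rain storm."""
    intensity = depth_mm / duration_h
    char_duration_h = char_depth_mm / char_rate_mm_h   # characteristic duration
    return (depth_mm / char_depth_mm,
            intensity / char_rate_mm_h,
            duration_h / char_duration_h)

# Example: 80 mm of rain in 12 h, with assumed characteristic scales of 120 mm and 5 mm/h
print(nondimensional_rainfall(80.0, 12.0, 120.0, 5.0))
```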

  5. Percolation in suspensions of hard nanoparticles: From spheres to needles

    NASA Astrophysics Data System (ADS)

    Schilling, Tanja; Miller, Mark A.; van der Schoot, Paul

    2015-09-01

    We investigate geometric percolation and scaling relations in suspensions of nanorods, covering the entire range of aspect ratios from spheres to extremely slender needles. A new version of connectedness percolation theory is introduced and tested against specialised Monte Carlo simulations. The theory accurately predicts percolation thresholds for aspect ratios of rod length to width as low as 10. The percolation threshold for rod-like particles of aspect ratios below 1000 deviates significantly from the inverse aspect ratio scaling prediction, thought to be valid in the limit of infinitely slender rods and often used as a rule of thumb for nanofibres in composite materials. Hence, most fibres that are currently used as fillers in composite materials cannot be regarded as practically infinitely slender for the purposes of percolation theory. Comparing percolation thresholds of hard rods and new benchmark results for ideal rods, we find that i) for large aspect ratios, they differ by a factor that is inversely proportional to the connectivity distance between the hard cores, and ii) they approach the slender rod limit differently.

  6. Effects of V2O5/Au bi-layer electrodes on the top contact Pentacene-based organic thin film transistors

    NASA Astrophysics Data System (ADS)

    Borthakur, Tribeni; Sarma, Ranjit

    2017-05-01

    Top-contact Pentacene-based organic thin film transistors (OTFTs) with a thin layer of vanadium pentoxide (V2O5) between the Pentacene and Au layers are fabricated. The devices with a V2O5/Au bi-layer source-drain electrode exhibit higher field-effect mobility, a higher on-off ratio, a lower threshold voltage and a lower sub-threshold slope than the devices with Au only. The field-effect mobility, current on-off ratio, threshold voltage and sub-threshold slope of the V2O5/Au bi-layer OTFT with a 15 nm thick V2O5 layer are 0.77 cm^2 V^-1 s^-1, 7.5×10^5, -2.9 V and 0.36 V/decade, respectively.

  7. A new edge detection algorithm based on Canny idea

    NASA Astrophysics Data System (ADS)

    Feng, Yingke; Zhang, Jinmin; Wang, Siming

    2017-10-01

    The traditional Canny algorithm has poor threshold self-adaptability and is sensitive to noise. To overcome these drawbacks, this paper proposes a new edge detection method based on the Canny algorithm. First, median filtering and a Euclidean-distance-based filter are applied to the image; second, the Frei-Chen operator is used to calculate the gradient amplitude; finally, the Otsu algorithm is applied to local regions of the gradient amplitude to obtain threshold values, the average of these thresholds is computed, half of the average is taken as the high threshold, and half of the high threshold is taken as the low threshold. Experimental results show that the new method effectively suppresses noise, preserves edge information, and improves edge detection accuracy.
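
    The threshold-selection logic described above can be sketched roughly as follows. This is an approximation for illustration only: it uses OpenCV's median blur and Sobel gradients in place of the Euclidean-distance filter and the Frei-Chen operator, and the patch size and file name are arbitrary placeholders.

```python
import cv2
import numpy as np

def adaptive_canny(gray, patch=64):
    den = cv2.medianBlur(gray, 5)                         # denoising step
    gx = cv2.Sobel(den, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(den, cv2.CV_32F, 0, 1)
    mag = cv2.convertScaleAbs(cv2.magnitude(gx, gy))      # gradient amplitude (uint8)

    # Otsu threshold computed on local patches of the gradient amplitude
    otsu_vals = []
    for y in range(0, mag.shape[0], patch):
        for x in range(0, mag.shape[1], patch):
            block = mag[y:y + patch, x:x + patch]
            if block.std() > 0:                           # Otsu needs some contrast
                t, _ = cv2.threshold(block, 0, 255,
                                     cv2.THRESH_BINARY + cv2.THRESH_OTSU)
                otsu_vals.append(t)

    high = 0.5 * float(np.mean(otsu_vals))                # half of the average Otsu value
    low = 0.5 * high                                      # half of the high threshold
    return cv2.Canny(den, low, high)

# "input.png" is a placeholder file name
edges = adaptive_canny(cv2.imread("input.png", cv2.IMREAD_GRAYSCALE))
```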

  8. Determination of the anaerobic threshold in the pre-operative assessment clinic: inter-observer measurement error.

    PubMed

    Sinclair, R C F; Danjoux, G R; Goodridge, V; Batterham, A M

    2009-11-01

    The variability between observers in the interpretation of cardiopulmonary exercise tests may impact upon clinical decision making and affect the risk stratification and peri-operative management of a patient. The purpose of this study was to quantify the inter-reader variability in the determination of the anaerobic threshold (V-slope method). A series of 21 cardiopulmonary exercise tests from patients attending a surgical pre-operative assessment clinic were read independently by nine experienced clinicians regularly involved in clinical decision making. The grand mean for the anaerobic threshold was 10.5 ml O(2).kg body mass(-1).min(-1). The technical error of measurement was 8.1% (circa 0.9 ml.kg(-1).min(-1); 90% confidence interval, 7.4-8.9%). The mean absolute difference between readers was 4.5% with a typical random error of 6.5% (6.0-7.2%). We conclude that the inter-observer variability for experienced clinicians determining the anaerobic threshold from cardiopulmonary exercise tests is acceptable.
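
    For reference, the inter-observer technical error of measurement (TEM) can be computed as sketched below. This uses the standard multi-observer TEM formula rather than anything reproduced from the paper, and the anaerobic threshold readings are invented for illustration.

```python
import numpy as np

def technical_error_of_measurement(x):
    """x: array of shape (N tests, K readers); returns absolute TEM and %TEM."""
    x = np.asarray(x, dtype=float)
    n, k = x.shape
    within = (x**2).sum(axis=1) - x.sum(axis=1)**2 / k    # within-test sum of squares
    tem = np.sqrt(within.sum() / (n * (k - 1)))
    return tem, 100.0 * tem / x.mean()

# Hypothetical anaerobic thresholds (ml O2/kg/min) for 3 tests read by 3 clinicians
readings = np.array([[10.2, 10.8, 11.0],
                     [ 9.6,  9.9, 10.4],
                     [12.1, 11.5, 12.0]])
tem, tem_pct = technical_error_of_measurement(readings)
print(f"TEM = {tem:.2f} ml/kg/min ({tem_pct:.1f}%)")
```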

  9. D-optimal experimental designs to test for departure from additivity in a fixed-ratio mixture ray.

    PubMed

    Coffey, Todd; Gennings, Chris; Simmons, Jane Ellen; Herr, David W

    2005-12-01

    Traditional factorial designs for evaluating interactions among chemicals in a mixture may be prohibitive when the number of chemicals is large. Using a mixture of chemicals with a fixed ratio (mixture ray) results in an economical design that allows estimation of additivity or nonadditive interaction for a mixture of interest. This methodology is extended easily to a mixture with a large number of chemicals. Optimal experimental conditions can be chosen that result in increased power to detect departures from additivity. Although these designs are used widely for linear models, optimal designs for nonlinear threshold models are less well known. In the present work, the use of D-optimal designs is demonstrated for nonlinear threshold models applied to a fixed-ratio mixture ray. For a fixed sample size, this design criterion selects the experimental doses and number of subjects per dose level that result in minimum variance of the model parameters and thus increased power to detect departures from additivity. An optimal design is illustrated for a 2:1 ratio (chlorpyrifos:carbaryl) mixture experiment. For this example, and in general, the optimal designs for the nonlinear threshold model depend on prior specification of the slope and dose threshold parameters. Use of a D-optimal criterion produces experimental designs with increased power, whereas standard nonoptimal designs with equally spaced dose groups may result in low power if the active range or threshold is missed.
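
    The dependence of the optimal design on prior guesses of the slope and threshold can be illustrated with a locally D-optimal search for a simple segmented (threshold) dose-response model along the mixture ray. The model form, parameter guesses, candidate doses, and the four-point restriction below are assumptions for the sketch, not the design used in the chlorpyrifos:carbaryl experiment.

```python
import numpy as np
from itertools import combinations

# Hypothetical threshold model along a fixed-ratio mixture ray:
#   E[y | d] = b0                  for total dose d <= t
#   E[y | d] = b0 + b1 * (d - t)   for d > t
def jacobian(d, b1, t):
    """Rows = doses, columns = partial derivatives w.r.t. (b0, b1, t)."""
    d = np.asarray(d, dtype=float)
    active = (d > t).astype(float)
    return np.column_stack([
        np.ones_like(d),          # d mu / d b0
        np.clip(d - t, 0, None),  # d mu / d b1
        -b1 * active,             # d mu / d t
    ])

def log_det_information(d, b1, t):
    J = jacobian(d, b1, t)
    sign, logdet = np.linalg.slogdet(J.T @ J)   # Fisher information (homoscedastic errors)
    return logdet if sign > 0 else -np.inf

b1_guess, t_guess = 1.5, 2.0                    # assumed prior guesses for slope and threshold
candidates = np.linspace(0.0, 8.0, 17)          # candidate total doses on the ray

# Exhaustive search over 4-point designs (equal allocation assumed)
best = max(combinations(candidates, 4),
           key=lambda des: log_det_information(des, b1_guess, t_guess))
print("Locally D-optimal 4-point design (total dose):", best)
```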

  10. Autonomic function responses to training: Correlation with body composition changes.

    PubMed

    Tian, Ye; Huang, Chuanye; He, Zihong; Hong, Ping; Zhao, Jiexiu

    2015-11-01

    The causal relation between autonomic function and adiposity is an unresolved issue. Thus, we studied whether resting heart rate variability (HRV) changes could be used to predict changes in body composition after 16 weeks of individualized exercise training. A total of 117 sedentary overweight/obese adults volunteered to join an intervention group (IN, n=82) or a control group (CON, n=35). The intervention group trained for 30-40 min three times a week with an intensity of 85-100% of individual ventilatory threshold (Thvent). At baseline and after a 16-week training period, resting HRV variables, body composition and peak oxygen uptake (VO2peak) were assessed. Compared with CON, exercise training significantly improved HRV and body composition and increased VO2peak (P<0.05). Significant correlations were observed between changes in HRV variables and body composition indices and VO2peak (P<0.05). Greater individual changes in HRV in response to exercise training were observed for those with greater total and central fat loss. Individualized aerobic exercise training was effective for improving autonomic function, and the resting HRV response to aerobic training is a potential indicator of adaptation to exercise training. Copyright © 2015. Published by Elsevier Inc.

  11. OB3D, a new set of 3D objects available for research: a web-based study

    PubMed Central

    Buffat, Stéphane; Chastres, Véronique; Bichot, Alain; Rider, Delphine; Benmussa, Frédéric; Lorenceau, Jean

    2014-01-01

    Studying object recognition is central to fundamental and clinical research on cognitive functions but suffers from the limitations of the available sets that cannot always be modified and adapted to meet the specific goals of each study. We here present a new set of 3D scans of real objects available on-line as ASCII files, OB3D. These files are lists of dots, each defined by a triplet of spatial coordinates and their normal that allow simple and highly versatile transformations and adaptations. We performed a web-based experiment to evaluate the minimal number of dots required for the denomination and categorization of these objects, thus providing a reference threshold. We further analyze several other variables derived from this data set, such as the correlations with object complexity. This new stimulus set, which was found to activate the Lower Occipital Complex (LOC) in another study, may be of interest for studies of cognitive functions in healthy participants and patients with cognitive impairments, including visual perception, language, memory, etc. PMID:25339920

  12. Using Multiple Metrics to Analyze Trends and Sensitivity of Climate Variability in New York City

    NASA Astrophysics Data System (ADS)

    Huang, J.; Towey, K.; Booth, J. F.; Baez, S. D.

    2017-12-01

    As the overall temperature of Earth continues to warm, changes in the Earth's climate are being observed through extreme weather events, such as heavy precipitation events and heat waves. This study examines the daily precipitation and temperature record of the greater New York City region during the 1979-2014 period. Daily station observations from three greater New York City airports: John F. Kennedy (JFK), LaGuardia (LGA) and Newark (EWR), are used in this study. Multiple statistical metrics are used in this study to analyze trends and variability in temperature and precipitation in the greater New York City region. The temperature climatology reveals a distinct seasonal cycle, while the precipitation climatology exhibits greater annual variability. Two types of thresholds are used to examine the variability of extreme events: extreme threshold and daily anomaly threshold. The extreme threshold indicates how the strength of the overall maximum is changing whereas the daily anomaly threshold indicates if the strength of the daily maximum is changing over time. We observed an increase in the frequency of anomalous daily precipitation events over the last 36 years, with the greatest frequency occurring in 2011. The most extreme precipitation events occur during the months of late summer through early fall, with approximately four expected extreme events occurring per year during the summer and fall. For temperature, the greatest frequency and variation in temperature anomalies occur during winter and spring. In addition, temperature variance is also analyzed to determine if there is greater day-to-day temperature variability today than in the past.

  13. Motor Unit Interpulse Intervals During High Force Contractions.

    PubMed

    Stock, Matt S; Thompson, Brennan J

    2016-01-01

    We examined the means, medians, and variability for motor-unit interpulse intervals (IPIs) during voluntary, high force contractions. Eight men (mean age = 22 years) attempted to perform isometric contractions at 90% of their maximal voluntary contraction force while bipolar surface electromyographic (EMG) signals were detected from the vastus lateralis and vastus medialis muscles. Surface EMG signal decomposition was used to determine the recruitment thresholds and IPIs of motor units that demonstrated accuracy levels ≥ 96.0%. Motor units with high recruitment thresholds demonstrated longer mean IPIs, but the coefficients of variation were similar across all recruitment thresholds. Polynomial regression analyses indicated that for both muscles, the relationship between the means and standard deviations of the IPIs was linear. The majority of IPI histograms were positively skewed. Although low-threshold motor units were associated with shorter IPIs, the variability among motor units with differing recruitment thresholds was comparable.

  14. Percolation, phase separation, and gelation in fluids and mixtures of spheres and rods

    NASA Astrophysics Data System (ADS)

    Jadrich, Ryan; Schweizer, Kenneth S.

    2011-12-01

    The relationship between kinetic arrest, connectivity percolation, structure and phase separation in protein, nanoparticle, and colloidal suspensions is a rich and complex problem. Using a combination of integral equation theory, connectivity percolation methods, naïve mode coupling theory, and the activated dynamics nonlinear Langevin equation approach, we study this problem for isotropic one-component fluids of spheres and variable aspect ratio rigid rods, and also percolation in rod-sphere mixtures. The key control parameters are interparticle attraction strength and its (short) spatial range, total packing fraction, and mixture composition. For spherical particles, formation of a homogeneous one-phase kinetically stable and percolated physical gel is predicted to be possible, but depends on non-universal factors. On the other hand, the dynamic crossover to activated dynamics and physical bond formation, which signals discrete cluster formation below the percolation threshold, almost always occurs in the one phase region. Rods more easily gel in the homogeneous isotropic regime, but whether a percolation or kinetic arrest boundary is reached first upon increasing interparticle attraction depends sensitively on packing fraction, rod aspect ratio and attraction range. Overall, the connectivity percolation threshold is much more sensitive to attraction range than either the kinetic arrest or phase separation boundaries. Our results appear to be qualitatively consistent with recent experiments on polymer-colloid depletion systems and brush mediated attractive nanoparticle suspensions.

  15. The absolute threshold of cone vision

    PubMed Central

    Koenig, Darran; Hofer, Heidi

    2013-01-01

    We report measurements of the absolute threshold of cone vision, which has been previously underestimated due to sub-optimal conditions or overly strict subjective response criteria. We avoided these limitations by using optimized stimuli and experimental conditions while having subjects respond within a rating scale framework. Small (1′ fwhm), brief (34 msec), monochromatic (550 nm) stimuli were foveally presented at multiple intensities in dark-adapted retina for 5 subjects. For comparison, 4 subjects underwent similar testing with rod-optimized stimuli. Cone absolute threshold, that is, the minimum light energy for which subjects were just able to detect a visual stimulus with any response criterion, was 203 ± 38 photons at the cornea, ∼0.47 log units lower than previously reported. Two-alternative forced-choice measurements in a subset of subjects yielded consistent results. Cone thresholds were less responsive to criterion changes than rod thresholds, suggesting a limit to the stimulus information recoverable from the cone mosaic in addition to the limit imposed by Poisson noise. Results were consistent with expectations for detection in the face of stimulus uncertainty. We discuss implications of these findings for modeling the first stages of human cone vision and interpreting psychophysical data acquired with adaptive optics at the spatial scale of the receptor mosaic. PMID:21270115

  16. Is the bitter rejection response always adaptive?

    PubMed

    Glendinning, J I

    1994-12-01

    The bitter rejection response consists of a suite of withdrawal reflexes and negative affective responses. It is generally assumed to have evolved as a way to facilitate avoidance of foods that are poisonous because they usually taste bitter to humans. Using previously published studies, the present paper examines the relationship between bitterness and toxicity in mammals, and then assesses the ecological costs and benefits of the bitter rejection response in carnivorous, omnivorous, and herbivorous (grazing and browsing) mammals. If the bitter rejection response accurately predicts the potential toxicity of foods, then one would expect the threshold for the response to be lower for highly toxic compounds than for nontoxic compounds. The data revealed no such relationship. Bitter taste thresholds varied independently of toxicity thresholds, indicating that the bitter rejection response is just as likely to be elicited by a harmless bitter food as it is by a harmful one. Thus, it is not necessarily in an animal's best interest to have an extremely high or low bitter threshold. Based on this observation, it was hypothesized that the adaptiveness of the bitter rejection response depends upon the relative occurrence of bitter and potentially toxic compounds in an animal's diet. Animals with a relatively high occurrence of bitter and potentially toxic compounds in their diet (e.g., browsing herbivores) were predicted to have evolved a high bitter taste threshold and tolerance to dietary poisons. Such an adaptation would be necessary because a browser cannot "afford" to reject all foods that are bitter and potentially toxic without unduly restricting its dietary options. At the other extreme, animals that rarely encounter bitter and potentially toxic compounds in their diet (e.g., carnivores) were predicted to have evolved a low bitter threshold. Carnivores could "afford" to utilize such a stringent rejection mechanism because foods containing bitter and potentially toxic compounds constitute a small portion of their diet. Since the low bitter threshold would reduce substantially the risk of ingesting anything poisonous, carnivores were also expected to have a relatively low tolerance to dietary poisons. This hypothesis was supported by a comparison involving 30 mammal species, in which a suggestive relationship was found between quinine hydrochloride sensitivity and trophic group, with carnivores > omnivores > grazers > browsers. Further support for the hypothesis was provided by a comparison across browsers and grazers in terms of the production of tannin-binding salivary proteins, which probably represent an adaptation for reducing the bitterness and astringency of tannins.(ABSTRACT TRUNCATED AT 400 WORDS)

  17. Flood disturbance and regrowth of vegetation in ephemeral channels: conditions and interactions

    NASA Astrophysics Data System (ADS)

    Hooke, J.

    2012-04-01

    Flood flows disturb vegetation growing in ephemeral channels but more information is needed on the thresholds for damage and removal and on the regrowth processes and timescales after floods. Once vegetation is re-established then it has feedback effects on processes and may raise thresholds. Several sites in SE Spain have been monitored for the effects of flows and for the growth and responses of plants over a period of >15 years. Two major floods and many minor flows have occurred. Measurements on quadrats and in different zones of the valley floor have allowed quantification of the thresholds for damage of different species of plant. Position of the plants in the channel also has a marked influence on effect of flows; velocities and flow forces for different parts have been calculated. The threshold for removal or mortality of certain plants in these Mediterranean valleys is very high. Types and species of plants regrowing in different zones have been identified and rates of growth measured. The relationship to climatic and weather conditions between channel flows is analysed. Growth rates between floods are closely related to moisture availability, mainly influenced by inter-annual variability of rainfall but also varying with location in the channel. One site in which hydrological regime was altered by human actions has shown marked change in vegetation cover and in channel response. Feedback effects reduce erosion and increase sedimentation and these effects have been measured directly and by calculation of roughness and resistance effects. The results demonstrate the different degrees of adaptation of plants to disturbance, natural vegetation such as phreatophytes showing high resilience but crop trees such as olives and almonds on floodplains being vulnerable to high flows.

  18. Comparison of an adaptive local thresholding method on CBCT and µCT endodontic images

    NASA Astrophysics Data System (ADS)

    Michetti, Jérôme; Basarab, Adrian; Diemer, Franck; Kouame, Denis

    2018-01-01

    Root canal segmentation on cone beam computed tomography (CBCT) images is difficult because of the noise level, resolution limitations, beam hardening and dental morphological variations. An image processing framework, based on an adaptive local threshold method, was evaluated on CBCT images acquired on extracted teeth. A comparison with high quality segmented endodontic images on micro computed tomography (µCT) images acquired from the same teeth was carried out using a dedicated registration process. Each segmented tooth was evaluated according to volume and root canal sections through the area and the Feret’s diameter. The proposed method is shown to overcome the limitations of CBCT and to provide an automated and adaptive complete endodontic segmentation. Despite a slight underestimation (-4.08%), the local threshold segmentation method based on edge detection was shown to be fast and accurate. Strong correlations between CBCT and µCT segmentations were found both for the root canal area and diameter (respectively 0.98 and 0.88). Our findings suggest that combining CBCT imaging with this image processing framework may benefit experimental endodontology, teaching and could represent a first development step towards the clinical use of endodontic CBCT segmentation during pulp cavity treatment.
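
    For readers who want a starting point, a generic adaptive local threshold is available in scikit-image, as sketched below. This is not the edge-detection-based method evaluated in the paper; the file name, block size, and offset are placeholders.

```python
import numpy as np
from skimage import io
from skimage.filters import threshold_local

# "cbct_slice.png" is a placeholder for one axial CBCT slice
slice_img = io.imread("cbct_slice.png", as_gray=True)

# Local (adaptive) threshold computed in a sliding neighbourhood
local_thresh = threshold_local(slice_img, block_size=35, method="gaussian", offset=0.0)

# Root canal lumen is darker than the surrounding dentine
canal_mask = slice_img < local_thresh
print("segmented pixels in slice:", int(np.count_nonzero(canal_mask)))
```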

  19. Face recognition: database acquisition, hybrid algorithms, and human studies

    NASA Astrophysics Data System (ADS)

    Gutta, Srinivas; Huang, Jeffrey R.; Singh, Dig; Wechsler, Harry

    1997-02-01

    One of the most important technologies absent in traditional and emerging frontiers of computing is the management of visual information. Faces are accessible 'windows' into the mechanisms that govern our emotional and social lives. The corresponding face recognition tasks considered herein include: (1) Surveillance, (2) CBIR, and (3) CBIR subject to correct ID ('match') displaying specific facial landmarks such as wearing glasses. We developed robust matching ('classification') and retrieval schemes based on hybrid classifiers and showed their feasibility using the FERET database. The hybrid classifier architecture consists of an ensemble of connectionist networks (radial basis functions) and decision trees. The specific characteristics of our hybrid architecture include (a) query by consensus as provided by ensembles of networks for coping with the inherent variability of the image formation and data acquisition process, and (b) flexible and adaptive thresholds as opposed to ad hoc and hard thresholds. Experimental results, proving the feasibility of our approach, yield (i) 96% accuracy, using cross validation (CV), for surveillance on a database consisting of 904 images, (ii) 97% accuracy for CBIR tasks, on a database of 1084 images, and (iii) 93% accuracy, using CV, for CBIR subject to correct ID match tasks on a database of 200 images.

  20. English vowel identification and vowel formant discrimination by native Mandarin Chinese- and native English-speaking listeners: The effect of vowel duration dependence.

    PubMed

    Mi, Lin; Tao, Sha; Wang, Wenjing; Dong, Qi; Guan, Jingjing; Liu, Chang

    2016-03-01

    The purpose of this study was to examine the relationship between English vowel identification and English vowel formant discrimination for native Mandarin Chinese- and native English-speaking listeners. The identification of 12 English vowels was measured with the duration cue preserved or removed. The thresholds of vowel formant discrimination on the F2 of two English vowels, /ʌ/ and /i/, were also estimated using an adaptive-tracking procedure. Native Mandarin Chinese-speaking listeners showed significantly higher thresholds of vowel formant discrimination and lower identification scores than native English-speaking listeners. The duration effect on English vowel identification was similar between native Mandarin Chinese- and native English-speaking listeners. Moreover, regardless of listeners' language background, vowel identification was significantly correlated with vowel formant discrimination for the listeners who were less dependent on duration cues, whereas the correlation between vowel identification and vowel formant discrimination was not significant for the listeners who were highly dependent on duration cues. This study revealed individual variability in using multiple acoustic cues to identify English vowels for both native and non-native listeners. Copyright © 2016 Elsevier B.V. All rights reserved.

  1. Test-Retest Variability in Lesion SUV and Lesion SUR in 18F-FDG PET: An Analysis of Data from Two Prospective Multicenter Trials.

    PubMed

    Hofheinz, Frank; Apostolova, Ivayla; Oehme, Liane; Kotzerke, Jörg; van den Hoff, Jörg

    2017-11-01

    Quantitative assessment of radio- and chemotherapy response with 18F-FDG whole-body PET has attracted increasing interest in recent years. In most published work, SUV has been used for this purpose. In the context of therapy response assessment, the reliability of lesion SUVs, notably their test-retest stability, thus becomes crucial. However, a recent study demonstrated substantial test-retest variability (TRV) in SUVs. The purpose of the present study was to investigate whether the tumor-to-blood SUV ratio (SUR) can improve TRV in tracer uptake. Methods: 73 patients with advanced non-small cell lung cancer from the prospective multicenter trials ACRIN 6678 (n = 34) and MK-0646-008 (n = 39) were included in this study. All patients underwent two 18F-FDG PET/CT investigations on two different days (time difference, 3.6 ± 2.1 d; range, 1-7 d) before therapy. For each patient, up to 7 tumor lesions were evaluated. For each lesion, SUV_max and SUV_peak were determined. Blood SUV was determined as the mean value of a 3-dimensional aortic region of interest that was delineated on the attenuation CT image and transferred to the PET image. SURs were computed as the ratio of tumor SUV to blood SUV and were uptake time-corrected to 75 min after injection. TRV was quantified as 1.96 multiplied by the root-mean-square deviation of the fractional paired differences in SUV and SUR. The combined effect of blood normalization and uptake time correction was inspected by considering R_TRV (TRV_SUR/TRV_SUV), a ratio reflecting the reduction in the TRV in SUR relative to SUV. R_TRV was correlated with the group-averaged value (δCF_mean) of the quantity δCF = |CF - 1|, where CF is the numeric factor that converts individual ratios of paired SUVs into corresponding SURs. This correlation analysis was performed by successively increasing a threshold value δCF_min and computing δCF_mean and R_TRV for the remaining subgroup of patients/lesions with δCF ≥ δCF_min. Results: The group-averaged TRV_SUV and TRV_SUR were 32.1 and 29.0, respectively, which correspond to a reduction of variability in SUR by an R_TRV factor of 0.9 in comparison to SUV. This rather marginal improvement can be understood to be a consequence of the atypically low intrasubject variability in blood SUV and uptake time and the accordingly small δCF values in the investigated prospective study groups. In fact, subgroup analysis with increasing δCF_min thresholds revealed a pronounced negative correlation (Spearman ρ = -0.99, P < 0.001) between R_TRV and δCF_mean, where R_TRV ≈ 0.4 in the δCF_min = 20% subgroup, corresponding to a more than 2-fold reduction of TRV_SUR compared with TRV_SUV. Conclusion: Variability in blood SUV and uptake time has been identified as a causal factor in the TRV in lesion SUV. Therefore, TRV in lesion uptake measurements can be reduced by replacing SUV with SUR as the uptake measure. The improvement becomes substantial for the level of variability in blood SUV and uptake time typically observed in the clinical context. © 2017 by the Society of Nuclear Medicine and Molecular Imaging.
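
    The variability metrics used above reduce to a few lines of NumPy. The sketch below uses invented paired lesion and blood values and omits the uptake-time correction; TRV is taken as 1.96 times the root-mean-square of the fractional paired differences, and R_TRV as the ratio TRV_SUR/TRV_SUV.

```python
import numpy as np

def trv(test, retest):
    """Test-retest variability in percent for paired uptake measurements."""
    test, retest = np.asarray(test, float), np.asarray(retest, float)
    frac_diff = 2.0 * (retest - test) / (retest + test)   # fractional paired difference
    return 1.96 * np.sqrt(np.mean(frac_diff**2)) * 100

# Made-up paired lesion SUVs and blood SUVs for two scan days
suv_t1, suv_t2 = np.array([6.1, 8.4, 4.9]), np.array([7.0, 7.6, 5.4])
blood_t1, blood_t2 = np.array([1.6, 1.9, 1.5]), np.array([1.8, 1.7, 1.6])

trv_suv = trv(suv_t1, suv_t2)
trv_sur = trv(suv_t1 / blood_t1, suv_t2 / blood_t2)       # SUR = lesion SUV / blood SUV
r_trv = trv_sur / trv_suv                                 # < 1 means SUR is more stable
print(trv_suv, trv_sur, r_trv)
```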

  2. A Mission-Adaptive Variable Camber Flap Control System to Optimize High Lift and Cruise Lift-to-Drag Ratios of Future N+3 Transport Aircraft

    NASA Technical Reports Server (NTRS)

    Urnes, James, Sr.; Nguyen, Nhan; Ippolito, Corey; Totah, Joseph; Trinh, Khanh; Ting, Eric

    2013-01-01

    Boeing and NASA are conducting a joint study program to design a wing flap system that will provide mission-adaptive lift and drag performance for future transport aircraft having light-weight, flexible wings. This Variable Camber Continuous Trailing Edge Flap (VCCTEF) system offers a lighter-weight lift control system having two performance objectives: (1) an efficient high lift capability for take-off and landing, and (2) reduction in cruise drag through control of the twist shape of the flexible wing. This control system during cruise will command varying flap settings along the span of the wing in order to establish an optimum wing twist for the current gross weight and cruise flight condition, and continue to change the wing twist as the aircraft changes gross weight and cruise conditions for each mission segment. Design weight of the flap control system is being minimized through use of light-weight shape memory alloy (SMA) actuation augmented with electric actuators. The VCCTEF program is developing better lift and drag performance of flexible wing transports with the further benefits of lighter-weight actuation and less drag using the variable camber shape of the flap.

  3. Psychophysical chromatic mechanisms in macaque monkey.

    PubMed

    Stoughton, Cleo M; Lafer-Sousa, Rosa; Gagin, Galina; Conway, Bevil R

    2012-10-24

    Chromatic mechanisms have been studied extensively with psychophysical techniques in humans, but the number and nature of the mechanisms are still controversial. Appeals to monkey neurophysiology are often used to sort out the competing claims and to test hypotheses arising from the experiments in humans, but psychophysical chromatic mechanisms have never been assessed in monkeys. Here we address this issue by measuring color-detection thresholds in monkeys before and after chromatic adaptation, employing a standard approach used to determine chromatic mechanisms in humans. We conducted separate experiments using adaptation configured as either flickering full-field colors or heterochromatic gratings. Full-field colors would favor activity within the visual system at or before the arrival of retinal signals to V1, before the spatial transformation of color signals by the cortex. Conversely, gratings would favor activity within the cortex where neurons are often sensitive to spatial chromatic structure. Detection thresholds were selectively elevated for the colors of full-field adaptation when it modulated along either of the two cardinal chromatic axes that define cone-opponent color space [L vs M or S vs (L + M)], providing evidence for two privileged cardinal chromatic mechanisms implemented early in the visual-processing hierarchy. Adaptation with gratings produced elevated thresholds for colors of the adaptation regardless of its chromatic makeup, suggesting a cortical representation comprised of multiple higher-order mechanisms each selective for a different direction in color space. The results suggest that color is represented by two cardinal channels early in the processing hierarchy and many chromatic channels in brain regions closer to perceptual readout.

  4. Dark adaptation and the retinoid cycle of vision.

    PubMed

    Lamb, T D; Pugh, E N

    2004-05-01

    Following exposure of our eye to very intense illumination, we experience a greatly elevated visual threshold, that takes tens of minutes to return completely to normal. The slowness of this phenomenon of "dark adaptation" has been studied for many decades, yet is still not fully understood. Here we review the biochemical and physical processes involved in eliminating the products of light absorption from the photoreceptor outer segment, in recycling the released retinoid to its original isomeric form as 11-cis retinal, and in regenerating the visual pigment rhodopsin. Then we analyse the time-course of three aspects of human dark adaptation: the recovery of psychophysical threshold, the recovery of rod photoreceptor circulating current, and the regeneration of rhodopsin. We begin with normal human subjects, and then analyse the recovery in several retinal disorders, including Oguchi disease, vitamin A deficiency, fundus albipunctatus, Bothnia dystrophy and Stargardt disease. We review a large body of evidence showing that the time-course of human dark adaptation and pigment regeneration is determined by the local concentration of 11-cis retinal, and that after a large bleach the recovery is limited by the rate at which 11-cis retinal is delivered to opsin in the bleached rod outer segments. We present a mathematical model that successfully describes a wide range of results in human and other mammals. The theoretical analysis provides a simple means of estimating the relative concentration of free 11-cis retinal in the retina/RPE, in disorders exhibiting slowed dark adaptation, from analysis of psychophysical measurements of threshold recovery or from analysis of pigment regeneration kinetics.

  5. Wavelet methodology to improve single unit isolation in primary motor cortex cells.

    PubMed

    Ortiz-Rosario, Alexis; Adeli, Hojjat; Buford, John A

    2015-05-15

    The proper isolation of action potentials recorded extracellularly from neural tissue is an active area of research in the fields of neuroscience and biomedical signal processing. This paper presents an isolation methodology for neural recordings using the wavelet transform (WT), a statistical thresholding scheme, and the principal component analysis (PCA) algorithm. The effectiveness of five different mother wavelets was investigated: biorthogonal, Daubechies, discrete Meyer, symmetric, and Coifman; along with three different wavelet coefficient thresholding schemes: fixed form threshold, Stein's unbiased estimate of risk, and minimax; and two different thresholding rules: soft and hard thresholding. The signal quality was evaluated using three different statistical measures: mean-squared error, root-mean-square, and signal-to-noise ratio. The clustering quality was evaluated using two different statistical measures: isolation distance and L-ratio. This research shows that the selection of the mother wavelet has a strong influence on the clustering and isolation of single unit neural activity, with the Daubechies 4 wavelet and minimax thresholding scheme performing the best. Copyright © 2015. Published by Elsevier B.V.
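
    One of the configurations reported as best above (Daubechies-4 with minimax thresholding) can be sketched with PyWavelets as follows. The random test signal, sampling rate, decomposition level, and the use of a MAD-based noise estimate with the classical minimax rule are assumptions for illustration, not the paper's exact pipeline.

```python
import numpy as np
import pywt

fs = 24000
x = np.random.randn(fs)                                   # stand-in for an extracellular trace

coeffs = pywt.wavedec(x, "db4", level=4)                  # Daubechies-4 decomposition
sigma = np.median(np.abs(coeffs[-1])) / 0.6745            # robust noise estimate (MAD)
thr = sigma * (0.3936 + 0.1829 * np.log2(x.size))         # minimax threshold rule

# Soft-threshold detail coefficients, keep the approximation untouched
den = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
clean = pywt.waverec(den, "db4")[: x.size]
```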

  6. Critical ratios of beluga whales (Delphinapterus leucas) and masked signal duration.

    PubMed

    Erbe, Christine

    2008-10-01

    This article examines the masking of a complex beluga vocalization by natural and anthropogenic noise. The call consisted of six 150 ms pulses exhibiting spectral peaks between 800 Hz and 8 kHz. Comparing the spectra and spectrograms of the call and noises at detection threshold showed that the animal did not hear the entire call at threshold. It only heard parts of the call in frequency and time. From the masked hearing thresholds in broadband continuous noises, critical ratios were computed. Fletcher critical bands were narrower than either 1/5 or 1/11 of an octave at the low frequencies of the call (<2 kHz), depending on which frequency the animal cued on. From the masked hearing thresholds in intermittent noises, the audible signal duration at detection threshold was computed. The intermittent noises differed in gap length, gap number, and masking, but the total audible signal duration at threshold was the same: 660 ms. This observation supports a multiple-looks model. The two amplitude modulated noises exhibited weaker masking than the unmodulated noises, hinting at a comodulation masking release.

  7. Locally adaptive decision in detection of clustered microcalcifications in mammograms.

    PubMed

    Sainz de Cea, María V; Nishikawa, Robert M; Yang, Yongyi

    2018-02-15

    In computer-aided detection or diagnosis of clustered microcalcifications (MCs) in mammograms, the performance often suffers from not only the presence of false positives (FPs) among the detected individual MCs but also large variability in detection accuracy among different cases. To address this issue, we investigate a locally adaptive decision scheme in MC detection by exploiting the noise characteristics in a lesion area. Instead of developing a new MC detector, we propose a decision scheme on how to best decide whether a detected object is an MC or not in the detector output. We formulate the individual MCs as statistical outliers compared to the many noisy detections in a lesion area so as to account for the local image characteristics. To identify the MCs, we first consider a parametric method for outlier detection, the Mahalanobis distance detector, which is based on a multi-dimensional Gaussian distribution on the noisy detections. We also consider a non-parametric method which is based on a stochastic neighbor graph model of the detected objects. We demonstrated the proposed decision approach with two existing MC detectors on a set of 188 full-field digital mammograms (95 cases). The results, evaluated using free response operating characteristic (FROC) analysis, showed a significant improvement in detection accuracy by the proposed outlier decision approach over traditional thresholding (the partial area under the FROC curve increased from 3.95 to 4.25, p-value < 10^-4). There was also a reduction in case-to-case variability in detected FPs at a given sensitivity level. The proposed adaptive decision approach could not only reduce the number of FPs in detected MCs but also improve case-to-case consistency in detection.
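
    The parametric (Mahalanobis-distance) outlier test described above can be sketched as follows. The three-feature representation, the simulated values, and the distance cut-off are hypothetical; the sketch only illustrates flagging detections that lie far from the bulk of noisy detections in a lesion area.

```python
import numpy as np

def mahalanobis_outliers(features, cutoff=3.0):
    """features: (n_detections, n_features) array for one lesion area."""
    mu = features.mean(axis=0)
    cov_inv = np.linalg.pinv(np.cov(features, rowvar=False))
    diff = features - mu
    d2 = np.einsum("ij,jk,ik->i", diff, cov_inv, diff)    # squared Mahalanobis distance
    return np.sqrt(d2) > cutoff                           # True -> treated as an MC

rng = np.random.default_rng(1)
noise = rng.normal(0, 1, size=(200, 3))                   # noisy detections
mcs = rng.normal(6, 1, size=(5, 3))                       # a few genuine MCs, far from the bulk
flags = mahalanobis_outliers(np.vstack([noise, mcs]))
print("flagged as MC:", int(flags.sum()))
```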

  8. Locally adaptive decision in detection of clustered microcalcifications in mammograms

    NASA Astrophysics Data System (ADS)

    Sainz de Cea, María V.; Nishikawa, Robert M.; Yang, Yongyi

    2018-02-01

    In computer-aided detection or diagnosis of clustered microcalcifications (MCs) in mammograms, the performance often suffers from not only the presence of false positives (FPs) among the detected individual MCs but also large variability in detection accuracy among different cases. To address this issue, we investigate a locally adaptive decision scheme in MC detection by exploiting the noise characteristics in a lesion area. Instead of developing a new MC detector, we propose a decision scheme on how to best decide whether a detected object is an MC or not in the detector output. We formulate the individual MCs as statistical outliers compared to the many noisy detections in a lesion area so as to account for the local image characteristics. To identify the MCs, we first consider a parametric method for outlier detection, the Mahalanobis distance detector, which is based on a multi-dimensional Gaussian distribution on the noisy detections. We also consider a non-parametric method which is based on a stochastic neighbor graph model of the detected objects. We demonstrated the proposed decision approach with two existing MC detectors on a set of 188 full-field digital mammograms (95 cases). The results, evaluated using free response operating characteristic (FROC) analysis, showed a significant improvement in detection accuracy by the proposed outlier decision approach over traditional thresholding (the partial area under the FROC curve increased from 3.95 to 4.25, p-value < 10^-4). There was also a reduction in case-to-case variability in detected FPs at a given sensitivity level. The proposed adaptive decision approach could not only reduce the number of FPs in detected MCs but also improve case-to-case consistency in detection.

  9. Temporal Variability and Statistics of the Strehl Ratio in Adaptive-Optics Images

    DTIC Science & Technology

    2010-01-01

    ... with the appropriate models and the residuals were extracted. This was done using ARIMA modelling (Box & Jenkins 1970). ARIMA stands for ... It was used here for the opposite goal: to obtain the values of the i.i.d. “noise” and test its distribution. Mixed ARIMA models of order 2 were ... often sufficient to ensure non-significant autocorrelation of the residuals. Table 2 lists the stationary sequences with their respective ARIMA models.

  10. Soil texture and climatic conditions for biocrust growth limitation: a meta analysis

    NASA Astrophysics Data System (ADS)

    Fischer, Thomas; Subbotina, Mariia

    2015-04-01

    Along with afforestation, attempts have been made to combat desertification by managing soil crusts, and it has been reported that recovery rates of biocrusts are dependent on many factors, including the type, severity, and extent of disturbance; structure of the vascular plant community; conditions of adjoining substrates; availability of inoculation material; and climate during and after disturbance (Belnap & Eldridge 2001). Because biological soil crusts are known to be more stable on and to prefer fine substrates (Belnap 2001), the question arises as to how successful crust management practices can be applied to coarser soil. In previous studies we observed similar crust biomasses on finer soils under arid and on coarser soils under temperate conditions. We hypothesized that the higher water holding capacity of finer substrates would favor crust development, and that the amount of silt and clay in the substrate that is required for enhanced crust development would vary with changes in climatic conditions. In a global meta study, climatic and soil texture threshold values promoting BSC growth were derived. While examining literature sources, it became evident that the number of studies that could be incorporated into this meta analysis was inversely related to the number of common environmental parameters they share. We selected annual mean precipitation, mean temperature and the amount of silt and clay as driving variables for crust growth. The response variable was the "relative crust biomass", which was computed per literature source as the ratio of each individual crust biomass value of the given study to the study maximum value reported. We distinguished lichen, green algal, cyanobacterial and moss crusts. To quantify threshold conditions at which crust biomass responded to differences in texture and climate, we (I) determined correlations between bioclimatic variables, (II) calculated linear models to determine the effect of typical climatic variables and soil clay content, with study site as a random effect, and (III) identified threshold values of texture and climatic effects using a regression tree. Three mean annual temperature classes for texture-dependent BSC growth limitation were identified: (1) <9 °C, with a threshold value of 25% silt and clay (limited growth on coarser soils); (2) 9-19 °C, where texture had no influence on relative crust biomass; and (3) >19 °C, on soils with <4 or >17% silt and clay. Because biocrust development is limited under certain climatic and soil texture conditions, it is suggested to consider soil texture for biocrust rehabilitation purposes and in biogeochemical modeling of cryptogamic ground covers. References: Belnap, J. & Eldridge, D. 2001. Disturbance and Recovery of Biological Soil Crusts. In: Belnap, J. & Lange, O. (eds.) Biological Soil Crusts: Structure, Function, and Management, Springer, Berlin. Belnap, J. 2001. Biological Soil Crusts and Wind Erosion. In: Belnap, J. & Lange, O. (eds.) Biological Soil Crusts: Structure, Function, and Management, Springer, Berlin. Fischer, T. & Subbotina, M. 2014. Climatic and soil texture threshold values for cryptogamic cover development: a meta analysis. Biologia 69/11:1520-1530.
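
    The regression-tree step used to extract threshold values can be sketched with scikit-learn as below. The synthetic data, the two predictors, and the tree depth are placeholders and do not reproduce the meta-analysis; the point is only that the learned split points act as the temperature and texture thresholds.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor, export_text

rng = np.random.default_rng(0)
temp = rng.uniform(0, 30, 300)                 # mean annual temperature, deg C
fines = rng.uniform(0, 60, 300)                # silt + clay content, %

# Synthetic "relative crust biomass": texture-limited only in the cold class
biomass = np.where(temp < 9, np.clip(fines / 25, 0, 1), 0.8) + rng.normal(0, 0.05, 300)

tree = DecisionTreeRegressor(max_depth=2, min_samples_leaf=20)
tree.fit(np.column_stack([temp, fines]), biomass)

# The printed split values play the role of the temperature/texture thresholds
print(export_text(tree, feature_names=["temperature", "silt_clay"]))
```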

  11. Reliable motion detection of small targets in video with low signal-to-clutter ratios

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nichols, S.A.; Naylor, R.B.

    1995-07-01

    Studies show that vigilance decreases rapidly after several minutes when human operators are required to search live video for infrequent intrusion detections. Therefore, there is a need for systems which can automatically detect targets in live video and reserve the operator's attention for assessment only. Thus far, automated systems have not simultaneously provided adequate detection sensitivity, false alarm suppression, and ease of setup when used in external, unconstrained environments. This unsatisfactory performance can be exacerbated by poor video imagery with low contrast, high noise, dynamic clutter, image misregistration, and/or the presence of small, slow, or erratically moving targets. This paper describes a highly adaptive video motion detection and tracking algorithm which has been developed as part of Sandia's Advanced Exterior Sensor (AES) program. The AES is a wide-area detection and assessment system for use in unconstrained exterior security applications. The AES detection and tracking algorithm provides good performance under stressing data and environmental conditions. Features of the algorithm include: reliable detection with negligible false alarm rate of variable velocity targets having low signal-to-clutter ratios; reliable tracking of targets that exhibit motion that is non-inertial, i.e., varies in direction and velocity; automatic adaptation to both infrared and visible imagery with variable quality; and suppression of false alarms caused by sensor flaws and/or cutouts.

  12. Evaluation and Application of Enhancements to the Performance of the ASDE-3 Radar in Heavy Rain

    DOT National Transportation Integrated Search

    1982-03-01

    This report presents the results of a study performed by the Transportation Systems Center (TSC) to evaluate two proposed enhancements to the performance of the ASDE-3 Radar in heavy rain: Adaptive gain and adaptive clutter thresholding, (operating w...

  13. Measures for assessing architectural speech security (privacy) of closed offices and meeting rooms.

    PubMed

    Gover, Bradford N; Bradley, John S

    2004-12-01

    Objective measures were investigated as predictors of the speech security of closed offices and rooms. A new signal-to-noise type measure is shown to be a superior indicator for security than existing measures such as the Articulation Index, the Speech Intelligibility Index, the ratio of the loudness of speech to that of noise, and the A-weighted level difference of speech and noise. This new measure is a weighted sum of clipped one-third-octave-band signal-to-noise ratios; various weightings and clipping levels are explored. Listening tests had 19 subjects rate the audibility and intelligibility of 500 English sentences, filtered to simulate transmission through various wall constructions, and presented along with background noise. The results of the tests indicate that the new measure is highly correlated with sentence intelligibility scores and also with three security thresholds: the threshold of intelligibility (below which speech is unintelligible), the threshold of cadence (below which the cadence of speech is inaudible), and the threshold of audibility (below which speech is inaudible). The ratio of the loudness of speech to that of noise, and simple A-weighted level differences are both shown to be well correlated with these latter two thresholds (cadence and audibility), but not well correlated with intelligibility.
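
    A signal-to-noise type measure of the kind described above can be sketched as a weighted sum of clipped one-third-octave-band level differences. The number of bands, the uniform weights, the clipping level, and the example levels below are stand-ins, not the weights or clipping level derived in the paper.

```python
import numpy as np

def clipped_weighted_snr(speech_levels_db, noise_levels_db, weights, clip_db=-32.0):
    """Weighted sum of per-band speech-minus-noise levels, clipped at a floor."""
    snr = np.asarray(speech_levels_db) - np.asarray(noise_levels_db)
    return float(np.dot(weights, np.clip(snr, clip_db, None)))

bands = 16                                   # e.g. one-third-octave bands in the speech range
weights = np.full(bands, 1.0 / bands)        # uniform weighting as a stand-in
speech = np.linspace(30, 15, bands)          # transmitted speech levels, dB
noise = np.full(bands, 28.0)                 # background noise levels, dB
print(clipped_weighted_snr(speech, noise, weights))
```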

  14. Adaptive 4D PSI-Based Change Detection

    NASA Astrophysics Data System (ADS)

    Yang, Chia-Hsiang; Soergel, Uwe

    2018-04-01

    In a previous work, we proposed a PSI-based 4D change detection to detect disappearing and emerging PS points (3D) along with their occurrence dates (1D). Such change points are usually caused by anthropic events, e.g., building construction in cities. This method first divides an entire SAR image stack into several subsets by a set of break dates. The PS points, which are selected based on their temporal coherences before or after a break date, are regarded as change candidates. Change points are then extracted from these candidates according to their change indices, which are modelled from their temporal coherences in the divided image subsets. Finally, we check the evolution of the change indices for each change point to detect the break date at which the change occurred. The experiment validated both the feasibility and the applicability of our method. However, two questions still remain. First, the selection of the temporal coherence threshold involves a trade-off between the quality and quantity of PS points. This selection also affects the number of change points in a more complex way. Second, heuristic selection of change index thresholds is fragile and causes loss of change points. In this study, we adapt our approach to identify change points based on statistical characteristics of the change indices rather than thresholding. The experiment validates this adaptive approach and shows an increase in detected change points compared with the previous version. In addition, we also explore and discuss optimal selection of the temporal coherence threshold.
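
    As a minimal illustration of replacing a fixed change-index threshold with a statistical rule, the sketch below flags candidates whose change index is an outlier of the empirical distribution using a robust z-score. The simulated indices and the cut-off of 3 are assumptions; the paper's actual statistical characterization may differ.

```python
import numpy as np

def flag_change_points(change_index, z_cut=3.0):
    """Flag candidates whose change index is a robust outlier of the distribution."""
    ci = np.asarray(change_index, float)
    med = np.median(ci)
    mad = np.median(np.abs(ci - med)) * 1.4826      # robust estimate of the spread
    return (ci - med) / mad > z_cut

rng = np.random.default_rng(2)
indices = np.concatenate([rng.normal(0, 1, 5000),   # stable PS candidates
                          rng.normal(8, 1, 40)])    # simulated genuine changes
candidates = np.flatnonzero(flag_change_points(indices))
print("detected change points:", candidates.size)
```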

  15. Perception of Self-Motion and Regulation of Walking Speed in Young-Old Adults.

    PubMed

    Lalonde-Parsi, Marie-Jasmine; Lamontagne, Anouk

    2015-07-01

    Whether a reduced perception of self-motion contributes to poor walking speed adaptations in older adults is unknown. In this study, speed discrimination thresholds (perceptual task) and walking speed adaptations (walking task) were compared between young (19-27 years) and young-old individuals (63-74 years), and the relationship between the performance on the two tasks was examined. Participants were evaluated while viewing a virtual corridor in a helmet-mounted display. Speed discrimination thresholds were determined using a staircase procedure. Walking speed modulation was assessed on a self-paced treadmill while exposed to different self-motion speeds ranging from 0.25 to 2 times the participants' comfortable speed. For each speed, participants were instructed to match the self-motion speed described by the moving corridor. On the walking task, participants displayed smaller walking speed errors at comfortable walking speeds compared with slower or faster speeds. The young-old adults presented larger speed discrimination thresholds (perceptual experiment) and larger walking speed errors (walking experiment) compared with young adults. Larger walking speed errors were associated with higher discrimination thresholds. The enhanced performance on the walking task at comfortable speed suggests that intersensory calibration processes are influenced by experience, hence optimized for frequently encountered conditions. The altered performance of the young-old adults on the perceptual and walking tasks, as well as the relationship observed between the two tasks, suggests that a poor perception of visual motion information may contribute to the poor walking speed adaptations that arise with aging.
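
    A staircase procedure of the general kind mentioned above can be sketched as follows; the 2-down/1-up rule, step size, and stopping criterion are common choices and are assumptions here, not the exact parameters of the study.

```python
import numpy as np

def staircase_threshold(respond, start, step, n_reversals=8, max_trials=200):
    """Minimal 2-down/1-up adaptive staircase (converges near the 70.7%-correct
    point). respond(level) -> bool is a caller-supplied trial function; the
    rule, step size, and stopping criterion are illustrative assumptions."""
    level, streak, direction = float(start), 0, 0
    reversals = []
    for _ in range(max_trials):
        if len(reversals) >= n_reversals:
            break
        if respond(level):
            streak += 1
            if streak == 2:                  # two correct in a row -> harder
                streak = 0
                if direction == +1:
                    reversals.append(level)  # direction change = reversal
                direction = -1
                level -= step
        else:
            streak = 0                       # any error -> easier
            if direction == -1:
                reversals.append(level)
            direction = +1
            level += step
    return float(np.mean(reversals)) if reversals else level
```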

  16. A longitudinal study on the ammonia threshold in junior cyclists

    PubMed Central

    Yuan, Y; Chan, K

    2004-01-01

    Objectives: To identify the effect of a one year non-specific training programme on the ammonia threshold of a group of junior cyclists and to correlate ammonia threshold with other common physiological variables. Methods: The cyclists performed tests at three time points (T1, T2, T3) during the year. Follow-up tests were conducted every six months after the original test. Ammonia threshold was obtained from a graded exercise test with four-minute steps. Results: The relatively non-specific one year training programme was effective in inducing an increase in peak VO2 (60.6 (5.9), 65.9 (7.4), and 64.6 (6.5) ml/min/kg at T1, T2, and T3 respectively) and endurance time (18.3 (4.5), 20.1 (5.2), and 27.0 (6.1) minutes at T1, T2, and T3 respectively), but was not effective for the sprint related variables. Ammonia threshold, together with lactate threshold and ventilatory threshold, was not significantly different at the three test times. Only endurance time correlated significantly with ammonia threshold (r = 0.915, p = 0.001). Conclusions: The findings suggest that a relatively non-specific one year training programme does not modify the ammonia threshold of junior cyclists. The significant correlation between ammonia threshold and endurance time further confirms that ammonia threshold is a measure of the ability to sustain exercise at submaximal intensities. PMID:15039242

  17. Plasticity in reproduction and growth among 52 range-wide populations of a Mediterranean conifer: adaptive responses to environmental stress.

    PubMed

    Santos-Del-Blanco, L; Bonser, S P; Valladares, F; Chambel, M R; Climent, J

    2013-09-01

    A plastic response towards enhanced reproduction is expected in stressful environments, but it is assumed to trade off against vegetative growth and efficiency in the use of available resources deployed in reproduction [reproductive efficiency (RE)]. Evidence supporting this expectation is scarce for plants, particularly for long-lived species. Forest trees such as Mediterranean pines provide ideal models to study the adaptive value of allocation to reproduction vs. vegetative growth given their among-population differentiation for adaptive traits and their remarkable capacity to cope with dry and low-fertility environments. We studied 52 range-wide Pinus halepensis populations planted into two environmentally contrasting sites during their initial reproductive stage. We investigated the effect of site, population and their interaction on vegetative growth, threshold size for female reproduction, reproductive-vegetative size relationships and RE. We quantified correlations among traits and environmental variables to identify allocation trade-offs and ecotypic trends. Genetic variation for plasticity was high for vegetative growth, whereas it was nonsignificant for reproduction. Size-corrected reproduction was enhanced in the more stressful site supporting the expectation for adverse conditions to elicit plastic responses in reproductive allometry. However, RE was unrelated with early reproductive investment. Our results followed theoretical predictions and support that phenotypic plasticity for reproduction is adaptive under stressful environments. Considering expectations of increased drought in the Mediterranean, we hypothesize that phenotypic plasticity together with natural selection on reproductive traits will play a relevant role in the future adaptation of forest tree species. © 2013 The Authors. Journal of Evolutionary Biology © 2013 European Society For Evolutionary Biology.

  18. The Effect of Hydration on Voice Quality in Adults: A Systematic Review.

    PubMed

    Alves, Maxine; Krüger, Esedra; Pillay, Bhavani; van Lierde, Kristiane; van der Linde, Jeannie

    2017-11-06

    We aimed to critically appraise scientific, peer-reviewed articles published in the past 10 years on the effects of hydration on voice quality in adults. This is a systematic review. Five databases were searched using the key words "vocal fold hydration", "voice quality", "vocal fold dehydration", and "hygienic voice therapy". The Preferred Reporting Items for Systematic Review and Meta-Analyses (PRISMA) guidelines were followed. The included studies were scored based on the American Speech-Language-Hearing Association's levels of evidence and quality indicators, as well as the Cochrane Collaboration's risk of bias tool. Systemic dehydration as a result of fasting and not ingesting fluids significantly negatively affected the parameters of noise-to-harmonics ratio (NHR), shimmer, jitter, frequency, and the s/z ratio. Water ingestion led to significant improvements in shimmer, jitter, frequency, and maximum phonation time values. Caffeine intake does not appear to negatively affect voice production. Laryngeal desiccation challenges by oral breathing led to surface dehydration, which negatively affected jitter, shimmer, NHR, phonation threshold pressure, and perceived phonatory effort. Steam inhalation significantly improved NHR, shimmer, and jitter. Only nebulization of an isotonic solution decreased phonation threshold pressure and showed some indication of a potential positive effect of nebulization substances. Treatments in high-humidity environments prove to be effective, and adaptations of low-humidity environments should be encouraged. Recent literature regarding vocal hydration provides high-quality evidence. Systemic hydration is the easiest and most cost-effective solution to improve voice quality. Recent evidence therefore supports the inclusion of hydration in a vocal hygiene program. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  19. Linking neocortical, cognitive, and genetic variability in autism with alterations of brain plasticity: the Trigger-Threshold-Target model.

    PubMed

    Mottron, Laurent; Belleville, Sylvie; Rouleau, Guy A; Collignon, Olivier

    2014-11-01

    The phenotype of autism involves heterogeneous adaptive traits (strengths vs. disabilities), different domains of alterations (social vs. non-social), and various associated genetic conditions (syndromic vs. nonsyndromic autism). Three observations suggest that alterations in experience-dependent plasticity are an etiological factor in autism: (1) the main cognitive domains enhanced in autism are controlled by the most plastic cortical brain regions, the multimodal association cortices; (2) autism and sensory deprivation share several features of cortical and functional reorganization; and (3) genetic mutations and/or environmental insults involved in autism all appear to affect developmental synaptic plasticity, and mostly lead to its upregulation. We present the Trigger-Threshold-Target (TTT) model of autism to organize these findings. In this model, genetic mutations trigger brain reorganization in individuals with a low plasticity threshold, mostly within regions sensitive to cortical reallocations. These changes account for the cognitive enhancements and reduced social expertise associated with autism. Enhanced but normal plasticity may underlie non-syndromic autism, whereas syndromic autism may occur when a triggering mutation or event produces an altered plastic reaction, also resulting in intellectual disability and dysmorphism in addition to autism. Differences in the target of brain reorganization (perceptual vs. language regions) account for the main autistic subgroups. In light of this model, future research should investigate how individual and sex-related differences in synaptic/regional brain plasticity influence the occurrence of autism. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.

  20. Using natural range of variation to set decision thresholds: a case study for great plains grasslands

    USGS Publications Warehouse

    Symstad, Amy J.; Jonas, Jayne L.; Edited by Guntenspergen, Glenn R.

    2014-01-01

    Natural range of variation (NRV) may be used to establish decision thresholds or action assessment points when ecological thresholds are either unknown or do not exist for attributes of interest in a managed ecosystem. The process for estimating NRV involves identifying spatial and temporal scales that adequately capture the heterogeneity of the ecosystem; compiling data for the attributes of interest via study of historic records, analysis and interpretation of proxy records, modeling, space-for-time substitutions, or analysis of long-term monitoring data; and quantifying the NRV from those data. At least 19 National Park Service (NPS) units in North America’s Great Plains are monitoring plant species richness and evenness as indicators of vegetation integrity in native grasslands, but little information on natural, temporal variability of these indicators is available. In this case study, we use six long-term vegetation monitoring datasets to quantify the temporal variability of these attributes in reference conditions for a variety of Great Plains grassland types, and then illustrate the implications of using different NRVs based on these quantities for setting management decision thresholds. Temporal variability of richness (as measured by the coefficient of variation, CV) is fairly consistent across the wide variety of conditions occurring in Colorado shortgrass prairie to Minnesota tallgrass sand savanna (CV 0.20–0.45) and generally less than that of production at the same sites. Temporal variability of evenness spans a greater range of CV than richness, and it is greater than that of production in some sites but less in other sites. This natural temporal variability may mask undesirable changes in Great Plains grasslands vegetation. Consequently, we suggest that managers consider using a relatively narrow NRV (interquartile range of all richness or evenness values observed in reference conditions) for designating a surveillance threshold, at which greater attention to the situation would be paid, and a broader NRV for designating management thresholds, at which action would be instigated.
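
    The quantities involved in setting such decision thresholds are straightforward percentile computations. The sketch below derives a narrow surveillance band (interquartile range) and a broader management band from reference-condition monitoring values; the 5th-95th percentile choice for the broader band is illustrative, not prescribed by the authors.

```python
import numpy as np

def nrv_thresholds(reference_values):
    """Surveillance band (interquartile range) and a broader management band
    (here the 5th-95th percentile, an illustrative choice) from reference-
    condition values of richness or evenness."""
    q25, q75 = np.percentile(reference_values, [25, 75])
    p5, p95 = np.percentile(reference_values, [5, 95])
    return {"surveillance": (q25, q75), "management": (p5, p95)}
```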

  1. Audibility threshold spectrum for prominent discrete tone analysis

    NASA Astrophysics Data System (ADS)

    Kimizuka, Ikuo

    2005-09-01

    To evaluate the annoyance of tonal components in noise emissions, ANSI S1.13 (for general purposes) and/or ISO 7779/ECMA-74 (dedicated to IT equipment) specify two similar metrics: the tone-to-noise ratio (TNR) and the prominence ratio (PR). Using both or either of these two parameters, a noise in question with a sharp spectral peak is analyzed by high-resolution FFT and classified as prominent when it exceeds some criterion curve. According to the present procedures, however, this designation depends only on the spectral shape. To resolve this problem, the author proposes a threshold spectrum of human ear audibility. The spectrum is based on the reference threshold of hearing defined in ISO 389-7 and/or ISO 226. With this spectrum, one can objectively decide whether the noise peak in question is audible by simply comparing the peak amplitude of the noise emission with the corresponding threshold value. Applying the threshold, one can avoid overkill or unnecessary action against noise, since a peak with very low absolute amplitude is not audible.
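
    The proposed comparison amounts to interpolating a hearing-threshold curve at the frequency of the spectral peak and checking the peak level against it. The sketch below uses a few approximate free-field threshold values purely for illustration; an actual implementation would take the tabulated data from ISO 389-7 or ISO 226.

```python
import numpy as np

# Approximate free-field hearing-threshold levels (dB SPL) vs frequency (Hz);
# illustrative values only, not the official ISO 389-7 / ISO 226 tables.
THRESHOLD_FREQ_HZ = np.array([125, 250, 500, 1000, 2000, 4000, 8000])
THRESHOLD_DB_SPL = np.array([22.0, 11.0, 4.5, 2.5, -1.5, -5.5, 12.5])

def tone_is_audible(peak_freq_hz, peak_level_db_spl):
    """Compare a discrete-tone peak against the interpolated hearing threshold."""
    threshold = np.interp(peak_freq_hz, THRESHOLD_FREQ_HZ, THRESHOLD_DB_SPL)
    return peak_level_db_spl > threshold
```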

  2. Experimental Determination of DT Yield in High Current DD Dense Plasma Focii

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lowe, D. R.; Hagen, E. C.; Meehan, B. T.

    2013-06-18

    Dense Plasma Focii (DPF), which utilize deuterium gas to produce 2.45 MeV neutrons, may in fact also produce DT fusion neutrons at 14.1 MeV due to the triton production in the DD reaction. If beam-target fusion is the primary producer of fusion neutrons in DPFs, it is possible that ejected tritons from the first pinch will interact with the second pinch, and so forth. The 2 MJ DPF at National Security Technologies’ Losee Road Facility is able to produce, and has produced, over 1E12 DD neutrons per pulse, allowing an accurate measurement of the DT/DD ratio. The DT/DD ratio was experimentally verified by using the (n,2n) reaction in a large piece of praseodymium metal, which has a threshold reaction of 8 MeV and is widely used as a DT yield measurement system [1]. The DT/DD ratio was experimentally determined for over 100 shots, and then compared to independent variables such as tube pressure, number of pinches per shot, total current, pinch current and charge voltage.

  3. An adaptive design for updating the threshold value of a continuous biomarker

    PubMed Central

    Spencer, Amy V.; Harbron, Chris; Mander, Adrian; Wason, James; Peers, Ian

    2017-01-01

    Potential predictive biomarkers are often measured on a continuous scale, but in practice, a threshold value to divide the patient population into biomarker ‘positive’ and ‘negative’ is desirable. Early phase clinical trials are increasingly using biomarkers for patient selection, but at this stage, it is likely that little will be known about the relationship between the biomarker and the treatment outcome. We describe a single-arm trial design with adaptive enrichment, which can increase power to demonstrate efficacy within a patient subpopulation, the parameters of which are also estimated. Our design enables us to learn about the biomarker and optimally adjust the threshold during the study, using a combination of generalised linear modelling and Bayesian prediction. At the final analysis, a binomial exact test is carried out, allowing the hypothesis that ‘no population subset exists in which the novel treatment has a desirable response rate’ to be tested. Through extensive simulations, we are able to show increased power over fixed threshold methods in many situations without increasing the type-I error rate. We also show that estimates of the threshold, which defines the population subset, are unbiased and often more precise than those from fixed threshold studies. We provide an example of the method applied (retrospectively) to publicly available data from a study of the use of tamoxifen after mastectomy by the German Breast Study Group, where progesterone receptor is the biomarker of interest. PMID:27417407
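
    The final analysis described above is an exact binomial test of the observed response count in the selected subset. A minimal sketch with SciPy (version 1.7 or later for binomtest) follows; the counts and null response rate are made-up numbers.

```python
from scipy.stats import binomtest

# Hypothetical final analysis: 14 responders out of 30 biomarker-positive
# patients, tested against a null response rate p0 = 0.2 (all values invented).
result = binomtest(k=14, n=30, p=0.2, alternative="greater")
print(result.pvalue)  # small p-value -> evidence of a responsive subpopulation
```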

  4. Implementation and performance evaluation of acoustic denoising algorithms for UAV

    NASA Astrophysics Data System (ADS)

    Chowdhury, Ahmed Sony Kamal

    Unmanned Aerial Vehicles (UAVs) have become a popular alternative for wildlife monitoring and border surveillance applications. Eliminating the UAV's background noise and effectively classifying the target audio signal are still major challenges. The main goal of this thesis is to remove the UAV's background noise by means of acoustic denoising techniques. Existing denoising algorithms, such as Adaptive Least Mean Square (LMS), Wavelet Denoising, Time-Frequency Block Thresholding, and Wiener Filter, were implemented and their performance evaluated. The denoising algorithms were evaluated for average Signal to Noise Ratio (SNR), Segmental SNR (SSNR), Log Likelihood Ratio (LLR), and Log Spectral Distance (LSD) metrics. To evaluate the effectiveness of the denoising algorithms on classification of target audio, we implemented Support Vector Machine (SVM) and Naive Bayes classification algorithms. Simulation results demonstrate that the LMS and Discrete Wavelet Transform (DWT) denoising algorithms offered superior performance to the other algorithms. Finally, we implemented the LMS and DWT algorithms on a DSP board for hardware evaluation. Experimental results showed that the LMS algorithm's performance is robust compared to DWT for classifying target audio signals under various noise types.
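
    Two of the evaluation metrics named above, average SNR and segmental SNR, can be computed directly from a clean reference and a denoised signal. The sketch below assumes a simple frame length and is not tied to the thesis' exact settings.

```python
import numpy as np

def snr_db(clean, denoised):
    """Global SNR (dB) of a denoised signal against a clean reference."""
    clean, denoised = np.asarray(clean, float), np.asarray(denoised, float)
    noise = clean - denoised
    return 10 * np.log10(np.sum(clean**2) / np.sum(noise**2))

def segmental_snr_db(clean, denoised, frame=256):
    """Mean of per-frame SNRs (segmental SNR); the frame length is an assumption."""
    clean, denoised = np.asarray(clean, float), np.asarray(denoised, float)
    snrs = []
    for i in range(0, len(clean) - frame, frame):
        c = clean[i:i + frame]
        n = c - denoised[i:i + frame]
        if np.sum(n**2) > 0 and np.sum(c**2) > 0:
            snrs.append(10 * np.log10(np.sum(c**2) / np.sum(n**2)))
    return float(np.mean(snrs))
```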

  5. Modeled summer background concentration nutrients and ...

    EPA Pesticide Factsheets

    We used regression models to predict background concentration of four water quality indicators: total nitrogen (N), total phosphorus (P), chloride, and total suspended solids (TSS), in the mid-continent (USA) great rivers, the Upper Mississippi, the Lower Missouri, and the Ohio. From best-model linear regressions of water quality indicators with land use and other stressor variables, we determined the concentration of the indicators when the land use and stressor variables were all set to zero (the y-intercept). Except for total P on the Upper Mississippi River and chloride on the Ohio River, we were able to predict background concentration from significant regression models. In every model with more than one predictor variable, the model included at least one variable representing agricultural land use and one variable representing development. Predicted background concentration of total N was the same on the Upper Mississippi and Lower Missouri rivers (350 ug l-1), which was much lower than a published eutrophication threshold and percentile-based thresholds (25th percentile of concentration at all sites in the population) but was similar to a threshold derived from the response of sestonic chlorophyll a to great river total N concentration. Background concentration of total P on the Lower Missouri (53 ug l-1) was also lower than published and percentile-based thresholds. Background TSS concentration was higher on the Lower Missouri (30 mg l-1) than the other ri
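
    The background-estimation step reduces to reading off the intercept of a fitted regression. A minimal sketch follows; the model-selection ("best model") step and any transformations used in the study are omitted.

```python
from sklearn.linear_model import LinearRegression

def background_concentration(stressor_vars, concentration):
    """Background concentration as the fitted intercept, i.e. the predicted
    concentration when all land-use/stressor predictors are set to zero."""
    model = LinearRegression().fit(stressor_vars, concentration)
    return model.intercept_
```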

  6. Determining the Threshold for HbA1c as a Predictor for Adverse Outcomes After Total Joint Arthroplasty: A Multicenter, Retrospective Study.

    PubMed

    Tarabichi, Majd; Shohat, Noam; Kheir, Michael M; Adelani, Muyibat; Brigati, David; Kearns, Sean M; Patel, Pankajkumar; Clohisy, John C; Higuera, Carlos A; Levine, Brett R; Schwarzkopf, Ran; Parvizi, Javad; Jiranek, William A

    2017-09-01

    Although HbA1c is commonly used for assessing glycemic control before surgery, there is no consensus regarding its role and the appropriate threshold in predicting adverse outcomes. This study was designed to evaluate the potential link between HbA1c and subsequent periprosthetic joint infection (PJI), with the intention of determining the optimal threshold for HbA1c. This is a multicenter retrospective study, which identified 1645 diabetic patients who underwent primary total joint arthroplasty (1004 knees and 641 hips) between 2001 and 2015. All patients had an HbA1c measured within 3 months of surgery. The primary outcome of interest was a PJI at 1 year based on the Musculoskeletal Infection Society criteria. Secondary outcomes included orthopedic (wound and mechanical complications) and nonorthopedic complications (sepsis, thromboembolism, genitourinary, and cardiovascular complications). A regression analysis was performed to determine the independent influence of HbA1c for predicting PJI. Overall 22 cases of PJI occurred at 1 year (1.3%). HbA1c at a threshold of 7.7 was distinct for predicting PJI (area under the curve, 0.65; 95% confidence interval, 0.51-0.78). Using this threshold, PJI rates increased from 0.8% (11 of 1441) to 5.4% (11 of 204). In the stepwise logistic regression analysis, PJI remained the only variable associated with higher HbA1c (odds ratio, 1.5; confidence interval, 1.2-2.0; P = .0001). There was no association between high HbA1c levels and other complications assessed. High HbA1c levels are associated with an increased risk for PJI. A threshold of 7.7% seems to be more indicative of infection than the commonly used 7% and should perhaps be the goal in preoperative patient optimization. Copyright © 2017 Elsevier Inc. All rights reserved.
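
    One common way to locate such a cutoff from an ROC analysis is to maximize Youden's J over candidate thresholds; the sketch below shows that mechanics with scikit-learn and is not necessarily the exact procedure the authors used.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

def optimal_cutoff(hba1c, pji):
    """Cutoff maximizing Youden's J (sensitivity + specificity - 1).

    hba1c: preoperative HbA1c values; pji: 1 if PJI within 1 year, else 0.
    This illustrates the mechanics only, not the study's statistical model.
    """
    fpr, tpr, thresholds = roc_curve(pji, hba1c)
    youden = tpr - fpr
    return thresholds[np.argmax(youden)], roc_auc_score(pji, hba1c)
```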

  7. The asymmetry of U.S. monetary policy: Evidence from a threshold Taylor rule with time-varying threshold values

    NASA Astrophysics Data System (ADS)

    Zhu, Yanli; Chen, Haiqiang

    2017-05-01

    In this paper, we revisit the issue of whether U.S. monetary policy is asymmetric by estimating a forward-looking threshold Taylor rule with quarterly data from 1955 to 2015. In order to capture the potential heterogeneity of the regime shift mechanism under different economic conditions, we modify the threshold model by treating the threshold value as a latent variable following an autoregressive (AR) dynamic process. We use the unemployment rate as the threshold variable and separate the sample into two periods: expansion periods and recession periods. Our findings support the view that U.S. monetary policy operations are asymmetric in these two regimes. More precisely, the monetary authority tends to implement an active Taylor rule with a weaker response to the inflation gap (the deviation of inflation from its target) and a stronger response to the output gap (the deviation of output from its potential level) in recession periods. The threshold value, interpreted as the targeted unemployment rate of the monetary authorities, exhibits significant time-varying properties, confirming the conjecture that policy makers may adjust their reference point for the unemployment rate to reflect their assessment of the health of the general economy.
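
    The regime-switching structure of a threshold Taylor rule can be illustrated with a toy rule whose response coefficients depend on whether unemployment is above or below the threshold. All coefficient values below are assumptions, and the paper additionally lets the threshold itself evolve as a latent AR process, which this sketch does not model.

```python
def threshold_taylor_rate(inflation, inflation_target, output_gap,
                          unemployment, u_threshold, r_star=2.0):
    """Illustrative threshold Taylor rule: coefficients switch with the
    unemployment regime. Coefficient values are assumptions only."""
    if unemployment > u_threshold:      # recession regime
        a, b = 0.3, 1.0                 # weaker on inflation, stronger on output
    else:                               # expansion regime
        a, b = 0.8, 0.5
    return r_star + inflation + a * (inflation - inflation_target) + b * output_gap
```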

  8. Adaptive Treatment Strategies in Youth Mental Health: A Commentary on Advantages, Challenges, and Potential Directions.

    PubMed

    Sherrill, Joel T

    2016-01-01

    This commentary underscores the importance and potential of the research approaches and intervention strategies described in the JCCAP special issue on the Science of Adaptive Treatment Strategies in Child and Adolescent Mental Health for addressing the widely observed heterogeneity in response to even our most promising research-informed interventions. First, the commentary briefly summarizes the advantages of these approaches and highlights how these programs of research are responsive to widely agreed-upon calls for more personalized, prescriptive interventions. Next, the commentary briefly discusses key common challenges and gaps in our knowledge that might be addressed to advance the development, testing, and implementation of adaptive intervention strategies. For example, research to identify robust moderators that might serve as potential tailoring variables for initial assignment and sequencing of interventions, efforts to operationalize surrogate endpoints for early identification of individuals who are unlikely to respond to first-line interventions, and research that helps define what constitutes an adequate exposure (i.e., dose) or response threshold (e.g., response that suggests the need to intensify, switch, or augment interventions) would inform decision rules for adaptive algorithms. The commentary concludes with a discussion of potential strategies and current initiatives that might ultimately help facilitate research on more targeted, prescriptive approaches to intervening, including efforts to encourage investigators to use common data elements, to share and integrate data across trials, and to employ a more mechanism-based approach to intervention development and testing.

  9. Suppression of threshold voltage variability in MOSFETs by adjustment of ion implantation parameters

    NASA Astrophysics Data System (ADS)

    Park, Jae Hyun; Chang, Tae-sig; Kim, Minsuk; Woo, Sola; Kim, Sangsig

    2018-01-01

    In this study, we investigate the threshold voltage (VTH) variability of metal-oxide-semiconductor field-effect transistors induced by random dopant fluctuation (RDF). Our simulation work demonstrates not only the influence of implantation parameters such as dose, tilt angle, energy, and rotation angle on the RDF-induced VTH variability, but also a way to reduce this variability. By adjusting the ion implantation parameters, the 3σ (VTH) is reduced from 43.8 mV to 28.9 mV. This 34% reduction is significant, considering that our technique is very cost effective and facilitates easy fabrication, increasing availability.

  10. Rebuilding DEMATEL threshold value: an example of a food and beverage information system.

    PubMed

    Hsieh, Yi-Fang; Lee, Yu-Cheng; Lin, Shao-Bin

    2016-01-01

    This study demonstrates how a decision-making trial and evaluation laboratory (DEMATEL) threshold value can be quickly and reasonably determined in the process of combining DEMATEL and decomposed theory of planned behavior (DTPB) models. Models are combined to identify the key factors of a complex problem. This paper presents a case study of a food and beverage information system as an example. The analysis of the example indicates that, given direct and indirect relationships among variables, if a traditional DTPB model only simulates the effects of the variables without considering that the variables will affect the original cause-and-effect relationships among the variables, then the original DTPB model variables cannot represent a complete relationship. For the food and beverage example, a DEMATEL method was employed to reconstruct a DTPB model and, more importantly, to calculate a reasonable DEMATEL threshold value for determining additional relationships among variables in the original DTPB model. This study is method-oriented, and the depth of investigation into any individual case is limited. Therefore, the methods proposed in various fields of study should ideally be used to identify deeper and more practical implications.
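
    For context, the standard DEMATEL mechanics behind such a threshold are sketched below: normalize the direct-influence matrix, compute the total-relation matrix, and take its mean as a baseline cutoff. The paper's contribution is a refinement of how that cutoff is chosen, which this sketch does not reproduce.

```python
import numpy as np

def dematel_total_relation(direct):
    """Standard DEMATEL mechanics: normalized direct matrix N, total-relation
    matrix T = N (I - N)^-1, and the mean of T as a common baseline threshold."""
    D = np.asarray(direct, dtype=float)
    N = D / max(D.sum(axis=1).max(), D.sum(axis=0).max())
    T = N @ np.linalg.inv(np.eye(len(D)) - N)
    return T, T.mean()
```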

  11. Central and rear-edge populations can be equally vulnerable to warming

    NASA Astrophysics Data System (ADS)

    Bennett, Scott; Wernberg, Thomas; Arackal Joy, Bijo; de Bettignies, Thibaut; Campbell, Alexandra H.

    2015-12-01

    Rear (warm) edge populations are often considered more susceptible to warming than central (cool) populations because of the warmer ambient temperatures they experience, but this overlooks the potential for local variation in thermal tolerances. Here we provide conceptual models illustrating how sensitivity to warming is affected throughout a species' geographical range for locally adapted and non-adapted populations. We test these models for a range-contracting seaweed using observations from a marine heatwave and a 12-month experiment, translocating seaweeds among central, present and historic range edge locations. Growth, reproductive development and survivorship display different temperature thresholds among central and rear-edge populations, but share a 2.5 °C anomaly threshold. Range contraction, therefore, reflects variation in local anomalies rather than differences in absolute temperatures. This demonstrates that warming sensitivity can be similar throughout a species' geographical range and highlights the importance of incorporating local adaptation and acclimatization into climate change vulnerability assessments.

  12. Meteoalarm severe wind gust thresholds from uniform periods in ECA&D

    NASA Astrophysics Data System (ADS)

    Wijnant, I. L.

    2010-09-01

    The main aim of our work is to propose new thresholds for Meteoalarm severe weather warnings which are based on the local climate, specifically for the severe wind gust warnings because the variability of these thresholds is currently rather extreme and unrealistic. In order to achieve this we added validated wind data to the database of the European Climate Assessment and Database project (ECA&D) and analysed them. We also developed wind related indices for ECA&D in order to facilitate further research. Since 2007 most of the severe weather warnings issued by the National Weather Services in Europe can be found on one website: www.meteoalarm.eu. For the 30 participating countries colour codes (yellow, orange, red) are presented on a map of Europe to reflect the severity of the weather event and its possible impact. The thresholds used for these colour codes obviously depend on the type of severe weather, but should also reflect local climate (for example: identical heat waves will have a more significant impact in Sweden than in Spain). The current Meteoalarm guideline is to issue second level warnings (orange) 1-30 times a year and third level warnings (red) less than once a year (being the total number of warnings from a specific country for all of the different sorts of severe weather events in that year). There is no similar guideline for specific sorts of severe weather events and participating countries choose their own thresholds. As a result we see unrealistic differences in the frequency and thresholds of the warnings for neighbouring countries. New thresholds based on return values would reflect the local climate of each country and give a more uniform indication of the social impact. Additionally, without uniform definitions of severe weather it remains difficult to determine if severe weather in Europe is changing. ECA&D receives long series of daily data from 62 countries throughout Europe and the Mediterranean. So far we have 7 countries that provide us with wind data. Quality control and homogeneity tests are conducted on all data before analysis is carried out. For wind data the standard ECA&D homogeneity tests (SNHT, Pettitt, Buishand and Von Neuman Ratio) are performed on the wind gust factor (the ratio of the maximum daily gust to the daily average wind speed) and a relatively new test (Petrovic's ReDistribution Method) on wind direction data. For the Dutch data we compared the results of the homogeneity tests with the available meta-data. Inhomogeneous series are not corrected but the older part (before the most recent break) is excluded from further analysis.

  13. Coupled soil respiration and transpiration dynamics from tree-scale to catchment scale in dry Rocky Mountain pine forests and the role of snowpack

    NASA Astrophysics Data System (ADS)

    Berryman, E.; Barnard, H. R.; Brooks, P. D.; Adams, H.; Burns, M. A.; Wilson, W.; Stielstra, C. M.

    2013-12-01

    A current ecohydrological challenge is quantifying the exact nature of carbon (C) and water couplings across landscapes. An emerging framework of understanding places plant physiological processes as a central control over soil respiration, the largest source of CO2 to the atmosphere. In dry montane forests, spatial and temporal variability in forest physiological processes are governed by hydrological patterns. Critical feedbacks involving respiration, moisture supply and tree physiology are poorly understood and must be quantified at the landscape level to better predict carbon cycle implications of regional drought under future climate change. We present data from an experiment designed to capture landscape variability in key coupled hydrological and C processes in forests of Colorado's Front Range. Sites encompass three catchments within the Boulder Creek watershed, range from 1480 m to 3021 m above sea level and are co-located with the DOE Niwot Ridge Ameriflux site and the Boulder Creek Critical Zone Observatory. Key hydrological measurements (soil moisture, transpiration) are coupled with soil respiration measurements within each catchment at different landscape positions. This three-dimensional study design also allows for the examination of the role of water subsidies from uplands to lowlands in controlling respiration. Initial findings from 2012 reveal a moisture threshold response of the sensitivity of soil respiration to temperature. This threshold may derive from tree physiological responses to variation in moisture availability, which in turn is controlled by the persistence of snowpack. Using data collected in 2013, first, we determine whether respiration moisture thresholds represent triggers for transpiration at the individual tree level. Next, using stable isotope ratios of soil respiration and xylem and soil water, we compare the depths of respiration to depths of water uptake to assign tree vs. understory sources of respiration. This will help determine whether tree root-zone respiration exhibits a similar moisture threshold. Lastly, we examine whether moisture thresholds to temperature sensitivity are consistent across a range of snowpack persistence. Findings are compared to data collected from sites in Arizona and New Mexico to better establish the role of winter precipitation in governing growing season respiration rates. The outcome of this study will contribute to a better understanding of linkages among water, tree physiology, and soil respiration with the ultimate goal of scaling plot-level respiration fluxes to entire catchments.

  14. Context-dependent sex allocation: constraints on the expression and evolution of maternal effects.

    PubMed

    Pryke, Sarah R; Rollins, Lee A; Griffith, Simon C

    2011-10-01

    Despite decades of research, whether vertebrates can and do adaptively adjust the sex ratio of their offspring is still highly debated. However, this may have resulted from the failure of empirical tests to identify large and predictable fitness returns to females from strategic adjustment. Here, we test the effect of diet quality and maternal condition on facultative sex ratio adjustment in the color polymorphic Gouldian finch (Erythrura gouldiae), a species that exhibits extreme maternal allocation in response to severe and predictable (genetically-determined) fitness costs. On high-quality diets, females produced a relatively equal sex ratio, but over-produced sons in poor dietary conditions. Despite the lack of sexual size dimorphism, nutritionally stressed foster sons were healthier, grew faster, and were more likely to survive than daughters. Although these findings are in line with predictions from sex allocation theory, the extent of adjustment is considerably lower than previously reported for this species. Females therefore have strong facultative control over sex allocation, but the extent of adjustment is likely determined by the relative magnitude of fitness gains and the ability to reliably predict sex-specific benefits from environmental (vs. genetic) variables. These findings may help explain the often inconsistent, weak, or inconclusive empirical evidence for adaptive sex ratio adjustment in vertebrates. © 2011 The Author(s). Evolution© 2011 The Society for the Study of Evolution.

  15. Reducing the risk of rear-end collisions with infrastructure-to-vehicle (I2V) integration of variable speed limit control and adaptive cruise control system.

    PubMed

    Li, Ye; Wang, Hao; Wang, Wei; Liu, Shanwen; Xiang, Yun

    2016-08-17

    Adaptive cruise control (ACC) has been investigated recently to explore ways to increase traffic capacity, stabilize traffic flow, and improve traffic safety. However, researchers seldom have studied the integration of ACC and roadside control methods such as the variable speed limit (VSL) to improve safety. The primary objective of this study was to develop an infrastructure-to-vehicle (I2V) integrated system that incorporated both ACC and VSL to reduce rear-end collision risks on freeways. The intelligent driver model was firstly modified to simulate ACC behavior and then the VSL strategy used in this article was introduced. Next, the I2V system was proposed to integrate the 2 advanced techniques, ACC and VSL. Four scenarios of no control, VSL only, ACC only, and the I2V system were tested in simulation experiments. Time exposed time to collision (TET) and time integrated time to collision (TIT), 2 surrogate safety measures derived from time to collision (TTC), were used to evaluate safety issues associated with rear-end collisions. The total travel times of each scenario were also compared. The simulation results indicated that both the VSL-only and ACC-only methods had a positive impact on reducing the TET and TIT values (reduced by 53.0 and 58.6% and 59.0 and 65.3%, respectively). The I2V system combined the advantages of both ACC and VSL to achieve the most safety benefits (reduced by 71.5 and 77.3%, respectively). Sensitivity analysis of the TTC threshold also showed that the I2V system obtained the largest safety benefits with all of the TTC threshold values. The impact of different market penetration rates of ACC vehicles in I2V system indicated that safety benefits increase with an increase in ACC proportions. Compared to VSL-only and ACC-only scenarios, this integrated I2V system is more effective in reducing rear-end collision risks. The findings of this study provide useful information for traffic agencies to implement novel techniques to improve safety on freeways.
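
    TET and TIT follow directly from a time series of TTC values. The sketch below uses a typical TTC threshold and simulation time step as assumed inputs.

```python
import numpy as np

def tet_tit(ttc, ttc_star=3.0, dt=0.1):
    """Time Exposed TTC (TET) and Time Integrated TTC (TIT) from a TTC series.

    ttc_star (s) and dt (s) are typical assumed values. TET accumulates the
    time spent below the threshold; TIT integrates how far TTC falls below it.
    """
    ttc = np.asarray(ttc, dtype=float)
    below = (ttc > 0) & (ttc <= ttc_star)
    tet = np.sum(below) * dt
    tit = np.sum(ttc_star - ttc[below]) * dt
    return tet, tit
```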

  16. Control of growth of juvenile leaves of Eucalyptus globulus: effects of leaf age.

    PubMed

    Metcalfe, J C; Davies, W J; Pereira, J S

    1991-12-01

    Biophysical variables influencing the expansion of plant cells (yield threshold, cell wall extensibility and turgor) were measured in individual Eucalyptus globulus leaves from the time of emergence until cessation of growth. Leaf water relations variables and growth rates were determined as relative humidity was changed on an hourly basis. Yield threshold and cell wall extensibility were estimated from plots of leaf growth rate versus turgor. Cell wall extensibility was also measured by the Instron technique, and yield threshold was determined experimentally both by stress relaxation in a psychrometer chamber and by incubation in a range of polyethylene glycol solutions. Once emerging leaves reached approximately 5 cm(2) in size, increases in leaf area were rapid throughout the expansive phase and varied little between light and dark periods. Both leaf growth rate and turgor were sensitive to changes in humidity, and in the longer term, both yield threshold and cell wall extensibility changed as the leaf aged. Rapidly expanding leaves had a very low yield threshold and high cell wall extensibility, whereas mature leaves had low cell wall extensibility. Yield threshold increased with leaf age.
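
    Estimating yield threshold and cell wall extensibility from plots of growth rate versus turgor amounts to a Lockhart-style linear fit, r = m(P - Y). A minimal sketch, assuming a simple least-squares fit, follows.

```python
import numpy as np

def wall_parameters(turgor_mpa, growth_rate):
    """Fit the Lockhart relation r = m * (P - Y) to growth-rate-vs-turgor data.

    Returns the extensibility m (slope) and the yield threshold Y, the turgor
    at which the fitted growth rate extrapolates to zero.
    """
    slope, intercept = np.polyfit(turgor_mpa, growth_rate, 1)
    return slope, -intercept / slope
```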

  17. Mitochondrial threshold effects.

    PubMed Central

    Rossignol, Rodrigue; Faustin, Benjamin; Rocher, Christophe; Malgat, Monique; Mazat, Jean-Pierre; Letellier, Thierry

    2003-01-01

    The study of mitochondrial diseases has revealed dramatic variability in the phenotypic presentation of mitochondrial genetic defects. To attempt to understand this variability, different authors have studied energy metabolism in transmitochondrial cell lines carrying different proportions of various pathogenic mutations in their mitochondrial DNA. The same kinds of experiments have been performed on isolated mitochondria and on tissue biopsies taken from patients with mitochondrial diseases. The results have shown that, in most cases, phenotypic manifestation of the genetic defect occurs only when a threshold level is exceeded, and this phenomenon has been named the 'phenotypic threshold effect'. Subsequently, several authors showed that it was possible to inhibit considerably the activity of a respiratory chain complex, up to a critical value, without affecting the rate of mitochondrial respiration or ATP synthesis. This phenomenon was called the 'biochemical threshold effect'. More recently, quantitative analysis of the effects of various mutations in mitochondrial DNA on the rate of mitochondrial protein synthesis has revealed the existence of a 'translational threshold effect'. In this review these different mitochondrial threshold effects are discussed, along with their molecular bases and the roles that they play in the presentation of mitochondrial diseases. PMID:12467494

  18. Energy Switching Threshold for Climatic Benefits

    NASA Astrophysics Data System (ADS)

    Zhang, X.; Cao, L.; Caldeira, K.

    2013-12-01

    Climate change is one of the great challenges facing humanity currently and in the future. Its most severe impacts may still be avoided if efforts are made to transform current energy systems (1). A transition from the global system of high Greenhouse Gas (GHG) emission electricity generation to low GHG emission energy technologies is required to mitigate climate change (2). Natural gas is increasingly seen as a choice for transitions to renewable sources. However, recent research in energy and climate has raised questions about the climate implications of relying more heavily on natural gas. On one hand, a shift to natural gas is promoted as climate mitigation because it has lower carbon per unit energy than coal (3). On the other hand, the effect of switching to natural gas on the development of nuclear power and other renewable energies may offset benefits from fuel-switching (4). Cheap natural gas is causing both coal plants and nuclear plants to close in the US. The objective of this study is to measure and evaluate the threshold of energy switching for climatic benefits. We hypothesized that the threshold ratio of energy switching for climatic benefits is related to the GHG emission factors of energy technologies, but the relation is not linear. A model was developed to study the fuel switching threshold for greenhouse gas emission reduction, and the transition from coal and nuclear electricity generation to natural gas electricity generation was analyzed as a case study. The results showed that: (i) the threshold ratio of multi-energy switching for climatic benefits changes with the GHG emission factors of energy technologies. (ii) The mathematical relation between the threshold ratio of energy switching and the GHG emission factors of energies is a curved surface function. (iii) The analysis of the energy switching threshold for climatic benefits can be used for energy and climate policy decision support.

  19. Adaptive semantic tag mining from heterogeneous clinical research texts.

    PubMed

    Hao, T; Weng, C

    2015-01-01

    To develop an adaptive approach to mine frequent semantic tags (FSTs) from heterogeneous clinical research texts. We develop a "plug-n-play" framework that integrates replaceable unsupervised kernel algorithms with formatting, functional, and utility wrappers for FST mining. Temporal information identification and semantic equivalence detection were two example functional wrappers. We first compared this approach's recall and efficiency for mining FSTs from ClinicalTrials.gov to that of a recently published tag-mining algorithm. Then we assessed this approach's adaptability to two other types of clinical research texts: clinical data requests and clinical trial protocols, by comparing the prevalence trends of FSTs across three texts. Our approach increased the average recall and speed by 12.8% and 47.02% respectively upon the baseline when mining FSTs from ClinicalTrials.gov, and maintained an overlap in relevant FSTs with the baseline ranging between 76.9% and 100% for varying FST frequency thresholds. The FSTs saturated when the data size reached 200 documents. Consistent trends in the prevalence of FST were observed across the three texts as the data size or frequency threshold changed. This paper contributes an adaptive tag-mining framework that is scalable and adaptable without sacrificing its recall. This component-based architectural design can be potentially generalizable to improve the adaptability of other clinical text mining methods.

  20. Quantifying how the full local distribution of daily precipitation is changing and its uncertainties

    NASA Astrophysics Data System (ADS)

    Stainforth, David; Chapman, Sandra; Watkins, Nicholas

    2016-04-01

    The study of the consequences of global warming would benefit from quantification of geographical patterns of change at specific thresholds or quantiles, and better understandings of the intrinsic uncertainties in such quantities. For precipitation a range of indices have been developed which focus on high percentiles (e.g. rainfall falling on days above the 99th percentile) and on absolute extremes (e.g. maximum annual one day precipitation) but scientific assessments are best undertaken in the context of changes in the whole climatic distribution. Furthermore, the relevant thresholds for climate-vulnerable policy decisions, adaptation planning and impact assessments, vary according to the specific sector and location of interest. We present a methodology which maintains the flexibility to provide information at different thresholds for different downstream users, both scientists and decision makers. We develop a method [1,2] for analysing local climatic timeseries to assess which quantiles of the local climatic distribution show the greatest and most robust changes in daily precipitation data. We extract from the data quantities that characterize the changes in time of the likelihood of daily precipitation above a threshold and of the amount of precipitation on those days. Our method is a simple mathematical deconstruction of how the difference between two observations from two different time periods can be assigned to the combination of natural statistical variability and/or the consequences of secular climate change. This deconstruction facilitates an assessment of how fast different quantiles of precipitation distributions are changing. This involves not only determining which quantiles and geographical locations show the greatest and smallest changes, but also those at which uncertainty undermines the ability to make confident statements about any change there may be. We demonstrate this approach using E-OBS gridded data [3] which are timeseries of local daily precipitation across Europe over the last 60+ years. We treat geographical location and precipitation as independent variables and thus obtain as outputs the geographical pattern of change at given thresholds of precipitation. This information is model-independent, thus providing data of direct value in model calibration and assessment. [1] S. C. Chapman, D. A. Stainforth, N. W. Watkins, 2013, On Estimating Local Long Term Climate Trends, Phil. Trans. R. Soc. A, 371, 20120287. [2] S. C. Chapman, D. A. Stainforth, N. W. Watkins, 2015, Limits to the quantification of local climate change, ERL, 10, 094018. [3] M. R. Haylock et al., 2008, A European daily high-resolution gridded dataset of surface temperature and precipitation, J. Geophys. Res. (Atmospheres), 113, D20119.

  1. Modeling of the Wegener Bergeron Findeisen process—implications for aerosol indirect effects

    NASA Astrophysics Data System (ADS)

    Storelvmo, T.; Kristjánsson, J. E.; Lohmann, U.; Iversen, T.; Kirkevåg, A.; Seland, Ø.

    2008-10-01

    A new parameterization of the Wegener-Bergeron-Findeisen (WBF) process has been developed, and implemented in the general circulation model CAM-Oslo. The new parameterization scheme has important implications for the process of phase transition in mixed-phase clouds. The new treatment of the WBF process replaces a previous formulation, in which the onset of the WBF effect depended on a threshold value of the mixing ratio of cloud ice. As no observational guidance for such a threshold value exists, the previous treatment added uncertainty to estimates of aerosol effects on mixed-phase clouds. The new scheme takes subgrid variability into account when simulating the WBF process, allowing for smoother phase transitions in mixed-phase clouds compared to the previous approach. The new parameterization yields a model state which gives reasonable agreement with observed quantities, allowing for calculations of aerosol effects on mixed-phase clouds involving a reduced number of tunable parameters. Furthermore, we find a significant sensitivity to perturbations in ice nuclei concentrations with the new parameterization, which leads to a reversal of the traditional cloud lifetime effect.

  2. The threshold hypothesis: solving the equation of nurture vs nature in type 1 diabetes.

    PubMed

    Wasserfall, C; Nead, K; Mathews, C; Atkinson, M A

    2011-09-01

    For more than 40 years, the contributions of nurture (i.e. the environment) and nature (i.e. genetics) have been touted for their aetiological importance in type 1 diabetes. Disappointingly, knowledge gains in these areas, while individually successful, have to a large extent occurred in isolation from each other. One reason underlying this divide is the lack of a testable model that simultaneously considers the contributions of genetic and environmental determinants in the formation of this and potentially other disorders that are subject to these variables. To address this void, we have designed a model based on the hypothesis that the aetiological influences of genetics and environment, when evaluated as intersecting and reciprocal trend lines based on odds ratios, result in a method of concurrently evaluating both facets and defining the attributable risk of clinical onset of type 1 diabetes. The model, which we have elected to term the 'threshold hypothesis', also provides a novel means of conceptualising the complex interactions of nurture with nature in type 1 diabetes across various geographical populations.

  3. Interocular transfer of spatial adaptation is weak at low spatial frequencies.

    PubMed

    Baker, Daniel H; Meese, Tim S

    2012-06-15

    Adapting one eye to a high contrast grating reduces sensitivity to similar target gratings shown to the same eye, and also to those shown to the opposite eye. According to the textbook account, interocular transfer (IOT) of adaptation is around 60% of the within-eye effect. However, most previous studies on this were limited to using high spatial frequencies, sustained presentation, and criterion-dependent methods for assessing threshold. Here, we measure IOT across a wide range of spatiotemporal frequencies, using a criterion-free 2AFC method. We find little or no IOT at low spatial frequencies, consistent with other recent observations. At higher spatial frequencies, IOT was present, but weaker than previously reported (around 35%, on average, at 8c/deg). Across all conditions, monocular adaptation raised thresholds by around a factor of 2, and observers showed normal binocular summation, demonstrating that they were not binocularly compromised. These findings prompt a reassessment of our understanding of the binocular architecture implied by interocular adaptation. In particular, the output of monocular channels may be available to perceptual decision making at low spatial frequencies. Copyright © 2012 Elsevier Ltd. All rights reserved.

  4. Speckle reduction during all-fiber common-path optical coherence tomography of the cavernous nerves

    NASA Astrophysics Data System (ADS)

    Chitchian, Shahab; Fiddy, Michael; Fried, Nathaniel M.

    2009-02-01

    Improvements in identification, imaging, and visualization of the cavernous nerves during prostate cancer surgery, which are responsible for erectile function, may improve nerve preservation and postoperative sexual potency. In this study, we use a rat prostate, ex vivo, to evaluate the feasibility of optical coherence tomography (OCT) as a diagnostic tool for real-time imaging and identification of the cavernous nerves. A novel OCT system based on an all single-mode fiber common-path interferometer-based scanning system is used for this purpose. A wavelet shrinkage denoising technique using Stein's unbiased risk estimator (SURE) algorithm to calculate a data-adaptive threshold is implemented for speckle noise reduction in the OCT image. The signal-to-noise ratio (SNR) was improved by 9 dB and the image quality metrics of the cavernous nerves also improved significantly.
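
    A generic wavelet-shrinkage denoiser with a data-adaptive soft threshold can be sketched with PyWavelets as below; note that the study uses a SURE-based threshold, for which the universal threshold here is only a simpler stand-in.

```python
import numpy as np
import pywt

def wavelet_denoise(signal, wavelet="db4", level=4):
    """Wavelet shrinkage with a data-adaptive soft threshold.

    The paper computes a SURE-based threshold; here a universal threshold
    scaled by a robust noise estimate stands in for it (an assumption).
    """
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745   # noise from finest scale
    thr = sigma * np.sqrt(2 * np.log(len(signal)))   # universal threshold
    denoised = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(denoised, wavelet)[: len(signal)]
```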

  5. Day-to-Day Heart-Rate Variability Recordings in World-Champion Rowers: Appreciating Unique Athlete Characteristics.

    PubMed

    Plews, Daniel J; Laursen, Paul B; Buchheit, Martin

    2017-05-01

    Heart-rate variability (HRV) is a popular tool for monitoring autonomic nervous system status and training adaptation in athletes. It is believed that increases in HRV indicate effective training adaptation, but these are not always apparent in elite athletes. Resting HRV was recorded in 4 elite rowers (rowers A, B, C, and D) over the 7-wk period before their success at the 2015 World Rowing Championships. The natural logarithm of the square root of the mean sum of the squared differences (Ln rMSSD) between R-R intervals, Ln rMSSD:R-R ratio trends, and the Ln-rMSSD-to-R-R-interval relationship were assessed for each championship-winning rower. The time course of change in Ln rMSSD was athlete-dependent, with stagnation and decreases apparent. However, there were consistent substantial reductions in the Ln rMSSD:R-R ratio: rower A, baseline toward wk 5 (-2.35 ± 1.94); rower B, baseline to wk 4 and 5 (-0.41 ± 0.48 and -0.64 ± 0.65, respectively); rower C, baseline to wk 4 (-0.58 ± 0.66); and rower D, baseline to wk 4, 5, and 6 (-1.15 ± 0.91, -0.81 ± 0.74, and -1.43 ± 0.69, respectively). Reductions in Ln rMSSD concurrent with reductions in the Ln rMSSD:R-R ratio are indicative of parasympathetic saturation. Consequently, 3 of 4 rowers displayed substantial increases in parasympathetic activity despite having decreases in Ln rMSSD. These results confirm that a combination of indices should be used to monitor cardiac autonomic activity.
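
    Ln rMSSD and the Ln rMSSD:R-R ratio are simple functions of the recorded R-R intervals, as sketched below.

```python
import numpy as np

def ln_rmssd_and_ratio(rr_ms):
    """Ln rMSSD and the Ln rMSSD:R-R ratio from R-R intervals in milliseconds."""
    rr = np.asarray(rr_ms, dtype=float)
    rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))   # root mean square of successive differences
    ln_rmssd = np.log(rmssd)
    return ln_rmssd, ln_rmssd / np.mean(rr)
```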

  6. The use of visual cues in gravity judgements on parabolic motion.

    PubMed

    Jörges, Björn; Hagenfeld, Lena; López-Moliner, Joan

    2018-06-21

    Evidence suggests that humans rely on an earth gravity prior for sensory-motor tasks like catching or reaching. Even under earth-discrepant conditions, this prior biases perception and action towards assuming a gravitational downwards acceleration of 9.81 m/s². This can be particularly detrimental in interactions with virtual environments employing earth-discrepant gravity conditions for their visual presentation. The present study thus investigates how well humans discriminate visually presented gravities and which cues they use to extract gravity from the visual scene. To this end, we employed a Two-Interval Forced-Choice Design. In Experiment 1, participants had to judge which of two presented parabolas had the higher underlying gravity. We used two initial vertical velocities, two horizontal velocities and a constant target size. Experiment 2 added a manipulation of the reliability of the target size. Experiment 1 shows that participants have generally high discrimination thresholds for visually presented gravities, with Weber fractions of 13% to beyond 30%. We identified the rate of change of the elevation angle (ẏ) and the visual angle (θ) as major cues. Experiment 2 suggests furthermore that size variability has a small influence on discrimination thresholds, while at the same time larger size variability increases reliance on ẏ and decreases reliance on θ. All in all, even though we use all available information, humans display low precision when extracting the governing gravity from a visual scene, which might further impact our capabilities of adapting to earth-discrepant gravity conditions with visual information alone. Copyright © 2018. Published by Elsevier Ltd.

  7. Measurand transient signal suppressor

    NASA Technical Reports Server (NTRS)

    Bozeman, Richard J., Jr. (Inventor)

    1994-01-01

    A transient signal suppressor for use in a control system which is adapted to respond to a change in a physical parameter whenever it crosses a predetermined threshold value in a selected direction of increasing or decreasing values with respect to the threshold value and is sustained for a selected discrete time interval is presented. The suppressor includes a sensor transducer for sensing the physical parameter and generating an electrical input signal whenever the sensed physical parameter crosses the threshold level in the selected direction. A manually operated switch is provided for adapting the suppressor to produce an output drive signal whenever the physical parameter crosses the threshold value in the selected direction of increasing or decreasing values. A time delay circuit is selectively adjustable for suppressing the transducer input signal for a preselected one of a plurality of available discrete suppression times and producing an output signal only if the input signal is sustained for a time greater than the selected suppression time. An electronic gate is coupled to receive the transducer input signal and the timer output signal and produce an output drive signal for energizing a control relay whenever the transducer input is a non-transient signal which is sustained beyond the selected time interval.
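
    The suppression logic amounts to requiring a threshold crossing to persist for a selectable hold time before a drive signal is produced. The software sketch below mimics that behaviour on sampled data; sample-based timing and the parameter names are illustrative, not taken from the patent.

```python
def suppress_transients(samples, threshold, hold_samples, rising=True):
    """Emit a drive signal only when a threshold crossing persists.

    samples: sensed parameter values, one per tick.
    hold_samples: consecutive ticks the crossing must be sustained (the
    analogue of the selectable suppression time); values are illustrative.
    """
    count = 0
    drive = []
    for x in samples:
        crossed = x > threshold if rising else x < threshold
        count = count + 1 if crossed else 0   # reset on any non-crossing sample
        drive.append(count >= hold_samples)
    return drive
```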

  8. Uncertainty Estimates of Psychoacoustic Thresholds Obtained from Group Tests

    NASA Technical Reports Server (NTRS)

    Rathsam, Jonathan; Christian, Andrew

    2016-01-01

    Adaptive psychoacoustic test methods, in which the next signal level depends on the response to the previous signal, are the most efficient for determining psychoacoustic thresholds of individual subjects. In many tests conducted in the NASA psychoacoustic labs, the goal is to determine thresholds representative of the general population. To do this economically, non-adaptive testing methods are used in which three or four subjects are tested at the same time with predetermined signal levels. This approach requires us to identify techniques for assessing the uncertainty in the resulting group-average psychoacoustic thresholds. In this presentation we examine the Delta Method of frequentist statistics, the Generalized Linear Model (GLM), the Nonparametric Bootstrap, a frequentist method, and Markov Chain Monte Carlo Posterior Estimation, a Bayesian approach. Each technique is exercised on a manufactured, theoretical dataset and then on datasets from two psychoacoustics facilities at NASA. The Delta Method is the simplest to implement and accurate for the cases studied. The GLM is found to be the least robust, and the Bootstrap takes the longest to calculate. The Bayesian Posterior Estimate is the most versatile technique examined because it allows the inclusion of prior information.
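
    As one concrete (and simplified) illustration of the bootstrap option mentioned above, the Python/NumPy sketch below resamples subjects to get an uncertainty estimate for a group-average threshold taken as the 50% point of the pooled detection curve; the data are synthetic, and the interpolation-based threshold stands in for the psychometric-function fits actually used at the NASA facilities.

        import numpy as np

        rng = np.random.default_rng(0)
        levels = np.array([20.0, 25.0, 30.0, 35.0, 40.0])            # predetermined signal levels (dB)
        # responses[i, j] = 1 if subject i detected the signal at level j (synthetic data)
        responses = rng.binomial(1, [0.05, 0.2, 0.5, 0.8, 0.95], size=(24, levels.size))

        def group_threshold(resp):
            """Level at which the pooled detection rate crosses 50%, by linear interpolation."""
            p = np.maximum.accumulate(resp.mean(axis=0))             # enforce a monotone detection curve
            return np.interp(0.5, p, levels)

        boot = np.array([group_threshold(responses[rng.integers(0, len(responses), len(responses))])
                         for _ in range(2000)])                      # resample subjects with replacement
        print(group_threshold(responses), boot.std())                # point estimate and bootstrap SE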

  9. Congenital Hypothyroidism with Neurological and Respiratory Alterations: A Case Detected Using a Variable Diagnostic Threshold for TSH

    PubMed Central

    Barreiro, Jesús; Castro-Feijoo, Lidia; Colón, Cristóbal; Cabanas, Paloma; Heredia, Claudia; Castaño, Luis Antonio; Gómez-Lado, Carmen; Couce, M.Luz; Pombo, Manuel

    2011-01-01

    We report a case of congenital hypothyroidism (CH) with neurological and respiratory alterations due to a heterozygotic c.374-1G>A mutation of TITF1/NKX2-1. The hypothyroidism was detected using a neonatal screening protocol in which the thyroid stimulating hormone (TSH) threshold is re-set each day on the basis of within-day variability and between-day variation. In this case, the threshold on the day of the initial analysis was 8.2 mIU/L, and the measured TSH level in heel-prick blood was 8.3 mIU/L. Conflict of interest: None declared. PMID:22155464

  10. A globally convergent MC algorithm with an adaptive learning rate.

    PubMed

    Peng, Dezhong; Yi, Zhang; Xiang, Yong; Zhang, Haixian

    2012-02-01

    This brief deals with the problem of minor component analysis (MCA). Artificial neural networks can be exploited to achieve the task of MCA. Recent research works show that convergence of neural-network-based MCA algorithms can be guaranteed if the learning rates are less than certain thresholds. However, the computation of these thresholds needs information about the eigenvalues of the autocorrelation matrix of the data set, which is unavailable in online extraction of the minor component from an input data stream. In this correspondence, we introduce an adaptive learning rate into the OJAn MCA algorithm, such that its convergence condition does not depend on any unobtainable information and can be easily satisfied in practical applications.
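
    For orientation, the sketch below implements a generic normalized anti-Hebbian minor-component update in Python/NumPy with a simple decaying step size. It is not the OJAn rule or the adaptive learning rate proposed in the brief; it only illustrates what online extraction of a minor component from a data stream looks like.

        import numpy as np

        rng = np.random.default_rng(1)
        C_true = np.diag([5.0, 3.0, 0.2])                    # covariance with a clear minor component
        X = rng.multivariate_normal(np.zeros(3), C_true, size=20000)   # simulated data stream

        w = rng.standard_normal(3)
        w /= np.linalg.norm(w)
        for k, x in enumerate(X):
            eta = 0.05 / (1.0 + k / 2000.0)                  # simple decaying learning rate (an assumption)
            y = w @ x
            w -= eta * (y * x - (y ** 2) * w)                # anti-Hebbian (minor-component) update
            w /= np.linalg.norm(w)                           # explicit renormalization for stability

        print(w)   # should align (up to sign) with the minor eigenvector [0, 0, 1]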

  11. Forecasting the probability of future groundwater levels declining below specified low thresholds in the conterminous U.S.

    USGS Publications Warehouse

    Dudley, Robert W.; Hodgkins, Glenn A.; Dickinson, Jesse

    2017-01-01

    We present a logistic regression approach for forecasting the probability of future groundwater levels declining below, or remaining below, specified low groundwater-level thresholds. We tested our approach on 102 groundwater wells in different climatic regions and aquifers of the United States that are part of the U.S. Geological Survey Groundwater Climate Response Network. We evaluated the importance of current groundwater levels, precipitation, streamflow, seasonal variability, Palmer Drought Severity Index, and atmosphere/ocean indices for developing the logistic regression equations. Several diagnostics of model fit were used to evaluate the regression equations, including testing of autocorrelation of residuals, goodness-of-fit metrics, and bootstrap validation testing. The probabilistic predictions were most successful at wells with high persistence (low month-to-month variability) in their groundwater records and at wells where the groundwater level remained below the defined low threshold for sustained periods (generally three months or longer). The model fit was weakest at wells with strong seasonal variability in levels and with shorter-duration low-threshold events. We identified challenges in deriving probabilistic-forecasting models and possible approaches for addressing those challenges.
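
    A stripped-down version of such a logistic regression forecast, written in Python/NumPy with plain gradient ascent rather than the authors' fitting and validation machinery; the predictors, coefficients and data are synthetic placeholders.

        import numpy as np

        rng = np.random.default_rng(2)

        # Hypothetical training data: current groundwater-level percentile and a recent
        # precipitation index, plus an indicator of whether the level was below the
        # low threshold in the following month.
        n = 500
        level = rng.uniform(0, 1, n)
        precip = rng.uniform(0, 1, n)
        logit_true = 4.0 - 6.0 * level - 2.0 * precip
        below = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_true)))

        X = np.column_stack([np.ones(n), level, precip])
        beta = np.zeros(3)
        for _ in range(5000):                                # plain gradient ascent on the log-likelihood
            p = 1.0 / (1.0 + np.exp(-X @ beta))
            beta += 0.05 * X.T @ (below - p) / n

        new_site = np.array([1.0, 0.15, 0.3])                # low current level, modest precipitation
        print(1.0 / (1.0 + np.exp(-new_site @ beta)))        # forecast probability of staying below threshold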

  12. Comparison of in-air evoked potential and underwater behavioral hearing thresholds in four bottlenose dolphins (Tursiops truncatus).

    PubMed

    Finneran, James J; Houser, Dorian S

    2006-05-01

    Traditional behavioral techniques for hearing assessment in marine mammals are limited by the time and access required to train subjects. Electrophysiological methods, where passive electrodes are used to measure auditory evoked potentials (AEPs), are attractive alternatives to behavioral techniques; however, there have been few attempts to compare AEP and behavioral results for the same subject. In this study, behavioral and AEP hearing thresholds were compared in four bottlenose dolphins. AEP thresholds were measured in-air using a piezoelectric sound projector embedded in a suction cup to deliver amplitude modulated tones to the dolphin through the lower jaw. Evoked potentials were recorded noninvasively using surface electrodes. Adaptive procedures allowed AEP hearing thresholds to be estimated from 10 to 150 kHz in a single ear in about 45 min. Behavioral thresholds were measured in a quiet pool and in San Diego Bay. AEP and behavioral threshold estimates agreed closely as to the upper cutoff frequency beyond which thresholds increased sharply. AEP thresholds were strongly correlated with pool behavioral thresholds across the range of hearing; differences between AEP and pool behavioral thresholds increased with threshold magnitude and ranged from 0 to + 18 dB.

  13. Application of a Threshold Method to Airborne-Spaceborne Attenuating-Wavelength Radars for the Estimation of Space-Time Rain-Rate Statistics.

    NASA Astrophysics Data System (ADS)

    Meneghini, Robert

    1998-09-01

    A method is proposed for estimating the area-average rain-rate distribution from attenuating-wavelength spaceborne or airborne radar data. Because highly attenuated radar returns yield unreliable estimates of the rain rate, these are eliminated by means of a proxy variable, Q, derived from the apparent radar reflectivity factors and a power law relating the attenuation coefficient and the reflectivity factor. In determining the probability distribution function of areawide rain rates, the elimination of attenuated measurements at high rain rates and the loss of data at light rain rates, because of low signal-to-noise ratios, lead to truncation of the distribution at the low and high ends. To estimate the distribution over all rain rates, a lognormal distribution is assumed, the parameters of which are obtained from a nonlinear least squares fit to the truncated distribution. Implementation of this type of threshold method depends on the method used to obtain the high-resolution rain-rate estimates (e.g., either the standard Z-R or the Hitschfeld-Bordan estimate) and on the type of rain-rate estimate (either point or path averaged). To test the method, measured drop size distributions are used to characterize the rain along the radar beam. Comparisons with the standard single-threshold method or with the sample mean, taken over the high-resolution estimates, show that the present method usually provides more accurate determinations of the area-averaged rain rate if the values of the threshold parameter, QT, are chosen in the range from 0.2 to 0.4.

  14. Abnormal Liver Biochemistry Is Common in Pediatric Inflammatory Bowel Disease: Prevalence and Associations.

    PubMed

    Valentino, Pamela L; Feldman, Brian M; Walters, Thomas D; Griffiths, Anne M; Ling, Simon C; Pullenayegum, Eleanor M; Kamath, Binita M

    2015-12-01

    Liver enzyme (LE) abnormalities associated with pediatric inflammatory bowel disease (IBD) are understudied. We undertook to describe the development and associations of abnormal LEs in pediatric IBD. We ascertained a cohort of 300 children with IBD and collected retrospective data. A Kaplan-Meier analysis determined the time to development of different thresholds of abnormal LEs. Associations between clinical variables and the development of abnormal LEs were determined. The probability of developing the first episode of abnormal LEs above the upper limit of normal (ULN) within 150 months was 58.1% (16.3% by 1 mo post-IBD diagnosis). There was a 6% prevalence of primary sclerosing cholangitis (PSC) or autoimmune sclerosing cholangitis (ASC) in this cohort. Of those diagnosed with PSC/ASC, 93% had persistent LE elevations at a threshold of >2× ULN, while those without PSC/ASC had a 4% probability of this abnormality. Elevated gamma glutamyltranspeptidase levels of 252 U/L had a 99% sensitivity and 71% specificity for PSC/ASC in IBD. After exclusion of patients with PSC/ASC, corticosteroids, antibiotics, and exclusive enteral nutrition demonstrated strongly positive associations with the first development of abnormal LEs >ULN (hazard ratio 2.1 [95% confidence interval, 1.3-3.3], hazard ratio 5.6 [95% confidence interval, 3.6-8.9], and hazard ratio 4.2 [95% confidence interval, 1.6-11.3], respectively). Abnormal LEs are common in pediatric IBD and occur early. PSC/ASC is associated with persistently high LEs and gamma glutamyltranspeptidase levels >252 U/L. Children with IBD are at risk of elevated LEs if they require medications other than 5-ASA to induce IBD remission.

  15. A Bayesian goodness of fit test and semiparametric generalization of logistic regression with measurement data.

    PubMed

    Schörgendorfer, Angela; Branscum, Adam J; Hanson, Timothy E

    2013-06-01

    Logistic regression is a popular tool for risk analysis in medical and population health science. With continuous response data, it is common to create a dichotomous outcome for logistic regression analysis by specifying a threshold for positivity. Fitting a linear regression to the nondichotomized response variable assuming a logistic sampling model for the data has been empirically shown to yield more efficient estimates of odds ratios than ordinary logistic regression of the dichotomized endpoint. We illustrate that risk inference is not robust to departures from the parametric logistic distribution. Moreover, the model assumption of proportional odds is generally not satisfied when the condition of a logistic distribution for the data is violated, leading to biased inference from a parametric logistic analysis. We develop novel Bayesian semiparametric methodology for testing goodness of fit of parametric logistic regression with continuous measurement data. The testing procedures hold for any cutoff threshold and our approach simultaneously provides the ability to perform semiparametric risk estimation. Bayes factors are calculated using the Savage-Dickey ratio for testing the null hypothesis of logistic regression versus a semiparametric generalization. We propose a fully Bayesian and a computationally efficient empirical Bayesian approach to testing, and we present methods for semiparametric estimation of risks, relative risks, and odds ratios when parametric logistic regression fails. Theoretical results establish the consistency of the empirical Bayes test. Results from simulated data show that the proposed approach provides accurate inference irrespective of whether parametric assumptions hold or not. Evaluation of risk factors for obesity shows that different inferences are derived from an analysis of a real data set when deviations from a logistic distribution are permissible in a flexible semiparametric framework. © 2013, The International Biometric Society.

  16. Screening for chronic kidney disease in Canadian indigenous peoples is cost-effective.

    PubMed

    Ferguson, Thomas W; Tangri, Navdeep; Tan, Zhi; James, Matthew T; Lavallee, Barry D A; Chartrand, Caroline D; McLeod, Lorraine L; Dart, Allison B; Rigatto, Claudio; Komenda, Paul V J

    2017-07-01

    Canadian indigenous peoples (First Nations) have rates of kidney failure that are 2- to 4-fold higher than those of the non-indigenous general Canadian population. As such, a strategy of targeted screening and treatment for CKD may be cost-effective in this population. Our objective was to assess the cost utility of screening and subsequent treatment for CKD in rural Canadian indigenous adults by both estimated glomerular filtration rate and the urine albumin-to-creatinine ratio. A decision analytic Markov model was constructed comparing the screening and treatment strategy to usual care. Primary outcomes were incremental cost-effectiveness ratios (ICERs), presented as cost per quality-adjusted life-year (QALY). Screening for CKD was associated with an ICER of $23,700/QALY in comparison to usual care. Restricting the model to screening in communities accessed only by air travel (CKD prevalence 34.4%), this ratio fell to $7,790/QALY. In road-accessible communities (CKD prevalence 17.6%) the ICER was $52,480/QALY. The model was robust to changes in influential variables when tested in univariate sensitivity analyses. Probabilistic sensitivity analysis found 72% of simulations to be cost-effective at a $50,000/QALY threshold and 93% of simulations to be cost-effective at a $100,000/QALY threshold. Thus, targeted screening and treatment for CKD using point-of-care testing equipment in rural Canadian indigenous populations is cost-effective, particularly in remote air-access-only communities with the highest risk of CKD and kidney failure. Evaluation of targeted screening initiatives with cluster randomized controlled trials and integration of screening into routine clinical visits in communities with the highest risk is recommended. Copyright © 2017 International Society of Nephrology. Published by Elsevier Inc. All rights reserved.
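
    The decision rule behind the cost-utility figures above reduces to comparing an incremental cost-effectiveness ratio with a willingness-to-pay threshold, as in this small Python sketch; the cost and QALY numbers are placeholders, not values from the study's Markov model.

        def icer(cost_new, qaly_new, cost_usual, qaly_usual):
            return (cost_new - cost_usual) / (qaly_new - qaly_usual)   # cost per QALY gained

        ratio = icer(cost_new=12_500.0, qaly_new=9.10, cost_usual=9_000.0, qaly_usual=8.95)
        for wtp in (50_000.0, 100_000.0):                              # willingness-to-pay thresholds ($/QALY)
            print(wtp, ratio <= wtp)                                   # cost-effective at this threshold?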

  17. Generation of remote adaptive torsional shear waves with an octagonal phased array to enhance displacements and reduce variability of shear wave speeds: comparison with quasi-plane shear wavefronts.

    PubMed

    Ouared, Abderrahmane; Montagnon, Emmanuel; Cloutier, Guy

    2015-10-21

    A method based on adaptive torsional shear waves (ATSW) is proposed to overcome the strong attenuation of shear waves generated by a radiation force in dynamic elastography. During the inward propagation of ATSW, the magnitude of displacements is enhanced due to the convergence of shear waves and constructive interferences. The proposed method consists in generating ATSW fields from the combination of quasi-plane shear wavefronts by considering a linear superposition of displacement maps. Adaptive torsional shear waves were experimentally generated in homogeneous and heterogeneous tissue mimicking phantoms, and compared to quasi-plane shear wave propagations. Results demonstrated that displacement magnitudes by ATSW could be up to 3 times higher than those obtained with quasi-plane shear waves, that the variability of shear wave speeds was reduced, and that the signal-to-noise ratio of displacements was improved. It was also observed that ATSW could cause mechanical inclusions to resonate in heterogeneous phantoms, which further increased the displacement contrast between the inclusion and the surrounding medium. This method opens a way for the development of new noninvasive tissue characterization strategies based on ATSW in the framework of our previously reported shear wave induced resonance elastography (SWIRE) method proposed for breast cancer diagnosis.

  18. Statistical Quality Control of Moisture Data in GEOS DAS

    NASA Technical Reports Server (NTRS)

    Dee, D. P.; Rukhovets, L.; Todling, R.

    1999-01-01

    A new statistical quality control algorithm was recently implemented in the Goddard Earth Observing System Data Assimilation System (GEOS DAS). The final step in the algorithm consists of an adaptive buddy check that either accepts or rejects outlier observations based on a local statistical analysis of nearby data. A basic assumption in any such test is that the observed field is spatially coherent, in the sense that nearby data can be expected to confirm each other. However, the buddy check resulted in excessive rejection of moisture data, especially during the Northern Hemisphere summer. The analysis moisture variable in GEOS DAS is water vapor mixing ratio. Observational evidence shows that the distribution of mixing ratio errors is far from normal. Furthermore, spatial correlations among mixing ratio errors are highly anisotropic and difficult to identify. Both factors contribute to the poor performance of the statistical quality control algorithm. To alleviate the problem, we applied the buddy check to relative humidity data instead. This variable explicitly depends on temperature and therefore exhibits a much greater spatial coherence. As a result, reject rates of moisture data are much more reasonable and homogeneous in time and space.

  19. Allowing variance may enlarge the safe operating space for exploited ecosystems.

    PubMed

    Carpenter, Stephen R; Brock, William A; Folke, Carl; van Nes, Egbert H; Scheffer, Marten

    2015-11-17

    Variable flows of food, water, or other ecosystem services complicate planning. Management strategies that decrease variability and increase predictability may therefore be preferred. However, actions to decrease variance over short timescales (2-4 y), when applied continuously, may lead to long-term ecosystem changes with adverse consequences. We investigated the effects of managing short-term variance in three well-understood models of ecosystem services: lake eutrophication, harvest of a wild population, and yield of domestic herbivores on a rangeland. In all cases, actions to decrease variance can increase the risk of crossing critical ecosystem thresholds, resulting in less desirable ecosystem states. Managing to decrease short-term variance creates ecosystem fragility by changing the boundaries of safe operating spaces, suppressing information needed for adaptive management, cancelling signals of declining resilience, and removing pressures that may build tolerance of stress. Thus, the management of variance interacts strongly and inseparably with the management of resilience. By allowing for variation, learning, and flexibility while observing change, managers can detect opportunities and problems as they develop while sustaining the capacity to deal with them.

  20. An Intelligent Monitoring Network for Detection of Cracks in Anvils of High-Press Apparatus.

    PubMed

    Tian, Hao; Yan, Zhaoli; Yang, Jun

    2018-04-09

    Because they endure alternating high pressure and temperature, the carbide anvils of high-pressure apparatus, which are widely used in the synthetic diamond industry, are prone to cracking. In this paper, an acoustic method is used to monitor crack events, and an intelligent monitoring network is proposed to classify the sound samples. The pulsed sound signals produced by such cracking are first extracted based on a short-time energy threshold. Then, the signals are processed with the proposed intelligent monitoring network to identify the operating condition of the anvil of the high-pressure apparatus. The monitoring network is an improved convolutional neural network that addresses problems arising in practice. Because the length of the pulse sound excited by crack growth is variable, a spatial pyramid pooling layer is adopted to handle the variable-length input. An adaptive weighting algorithm for the loss function is proposed to handle the class imbalance problem. Finally, the good performance of the proposed intelligent monitoring network in terms of accuracy and class balance is validated through experiments.
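
    The class-imbalance issue mentioned above is often handled with a class-weighted cross-entropy of the kind sketched below in Python/NumPy; this uses a plain inverse-frequency weighting and is not the adaptive weighting algorithm proposed in the paper.

        import numpy as np

        def weighted_cross_entropy(probs, labels, class_weights):
            """Cross-entropy in which each class contributes in inverse proportion to its frequency."""
            w = class_weights[labels]                                  # per-sample weight from its class
            return -np.mean(w * np.log(probs[np.arange(len(labels)), labels] + 1e-12))

        # Hypothetical imbalanced batch: 1 "crack" sample among 7 "normal" samples.
        labels = np.array([0, 0, 0, 0, 0, 0, 0, 1])
        probs = np.tile([0.9, 0.1], (8, 1))                            # softmax outputs of the network
        counts = np.bincount(labels, minlength=2)
        class_weights = counts.sum() / (2.0 * counts)                  # inverse-frequency weighting
        print(weighted_cross_entropy(probs, labels, class_weights))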

  1. Allowing variance may enlarge the safe operating space for exploited ecosystems

    PubMed Central

    Carpenter, Stephen R.; Brock, William A.; Folke, Carl; van Nes, Egbert H.; Scheffer, Marten

    2015-01-01

    Variable flows of food, water, or other ecosystem services complicate planning. Management strategies that decrease variability and increase predictability may therefore be preferred. However, actions to decrease variance over short timescales (2–4 y), when applied continuously, may lead to long-term ecosystem changes with adverse consequences. We investigated the effects of managing short-term variance in three well-understood models of ecosystem services: lake eutrophication, harvest of a wild population, and yield of domestic herbivores on a rangeland. In all cases, actions to decrease variance can increase the risk of crossing critical ecosystem thresholds, resulting in less desirable ecosystem states. Managing to decrease short-term variance creates ecosystem fragility by changing the boundaries of safe operating spaces, suppressing information needed for adaptive management, cancelling signals of declining resilience, and removing pressures that may build tolerance of stress. Thus, the management of variance interacts strongly and inseparably with the management of resilience. By allowing for variation, learning, and flexibility while observing change, managers can detect opportunities and problems as they develop while sustaining the capacity to deal with them. PMID:26438857

  2. Higher-than-predicted saltation threshold wind speeds on Titan.

    PubMed

    Burr, Devon M; Bridges, Nathan T; Marshall, John R; Smith, James K; White, Bruce R; Emery, Joshua P

    2015-01-01

    Titan, the largest satellite of Saturn, exhibits extensive aeolian, that is, wind-formed, dunes, features previously identified exclusively on Earth, Mars and Venus. Wind tunnel data collected under ambient and planetary-analogue conditions inform our models of aeolian processes on the terrestrial planets. However, the accuracy of these widely used formulations in predicting the threshold wind speeds required to move sand by saltation, or by short bounces, has not been tested under conditions relevant for non-terrestrial planets. Here we derive saltation threshold wind speeds under the thick-atmosphere, low-gravity and low-sediment-density conditions on Titan, using a high-pressure wind tunnel refurbished to simulate the appropriate kinematic viscosity for the near-surface atmosphere of Titan. The experimentally derived saltation threshold wind speeds are higher than those predicted by models based on terrestrial-analogue experiments, indicating the limitations of these models for such extreme conditions. The models can be reconciled with the experimental results by inclusion of the extremely low ratio of particle density to fluid density on Titan. Whereas the density ratio term enables accurate modelling of aeolian entrainment in thick atmospheres, such as those inferred for some extrasolar planets, our results also indicate that for environments with high density ratios, such as in jets on icy satellites or in tenuous atmospheres or exospheres, the correction for low-density-ratio conditions is not required.

  3. The Role of Parametric Assumptions in Adaptive Bayesian Estimation

    ERIC Educational Resources Information Center

    Alcala-Quintana, Rocio; Garcia-Perez, Miguel A.

    2004-01-01

    Variants of adaptive Bayesian procedures for estimating the 5% point on a psychometric function were studied by simulation. Bias and standard error were the criteria to evaluate performance. The results indicated a superiority of (a) uniform priors, (b) model likelihood functions that are odd symmetric about threshold and that have parameter…

  4. An electrophysiological investigation of the receptor apparatus of the duck's bill

    PubMed Central

    Gregory, J. E.

    1973-01-01

    1. The properties of receptors in the duck's bill have been studied by recording from units isolated by dissecting fine filaments from the maxillary and ophthalmic nerves. 2. The units studied were divisible into three groups, phasic mechanoreceptors responsive to vibration, thermoreceptive units, and high threshold mechanoreceptors. 3. Vibration-sensitive mechanoreceptors (113 units) had small receptive fields, showed a rapidly adapting discharge to mechanical stimulation of the bill, were sensitive to vibratory but not to thermal stimuli and showed no background discharge. 4. Temperature receptors (twenty-one units) were insensitive to mechanical stimulation and showed a temperature-dependent background discharge. Sudden cooling produced a transient increase in discharge frequency. 5. High threshold mechanosensitive units (eight units) gave a slowly adapting discharge to strong mechanical stimulation and were insensitive to vibratory and thermal stimulation. 6. It is concluded that the low-threshold, vibration-sensitive responses come from Herbst corpuscles. No specific function can yet be assigned to the Grandry corpuscles. PMID:4689962

  5. The NTID speech recognition test: NSRT(®).

    PubMed

    Bochner, Joseph H; Garrison, Wayne M; Doherty, Karen A

    2015-07-01

    The purpose of this study was to collect and analyse data necessary for expansion of the NSRT item pool and to evaluate the NSRT adaptive testing software. Participants were administered pure-tone and speech recognition tests including W-22 and QuickSIN, as well as a set of 323 new NSRT items and NSRT adaptive tests in quiet and background noise. Performance on the adaptive tests was compared to pure-tone thresholds and performance on other speech recognition measures. The 323 new items were subjected to Rasch scaling analysis. Seventy adults with mild to moderately severe hearing loss participated in this study. Their mean age was 62.4 years (sd = 20.8). The 323 new NSRT items fit very well with the original item bank, enabling the item pool to be more than doubled in size. Data indicate high reliability coefficients for the NSRT and moderate correlations with pure-tone thresholds (PTA and HFPTA) and other speech recognition measures (W-22, QuickSIN, and SRT). The adaptive NSRT is an efficient and effective measure of speech recognition, providing valid and reliable information concerning respondents' speech perception abilities.

  6. Differentially Private Histogram Publication For Dynamic Datasets: An Adaptive Sampling Approach

    PubMed Central

    Li, Haoran; Jiang, Xiaoqian; Xiong, Li; Liu, Jinfei

    2016-01-01

    Differential privacy has recently become a de facto standard for private statistical data release. Many algorithms have been proposed to generate differentially private histograms or synthetic data. However, most of them focus on “one-time” release of a static dataset and do not adequately address the increasing need of releasing series of dynamic datasets in real time. A straightforward application of existing histogram methods on each snapshot of such dynamic datasets will incur high accumulated error due to the composability of differential privacy and correlations or overlapping users between the snapshots. In this paper, we address the problem of releasing series of dynamic datasets in real time with differential privacy, using a novel adaptive distance-based sampling approach. Our first method, DSFT, uses a fixed distance threshold and releases a differentially private histogram only when the current snapshot is sufficiently different from the previous one, i.e., with a distance greater than a predefined threshold. Our second method, DSAT, further improves DSFT and uses a dynamic threshold adaptively adjusted by a feedback control mechanism to capture the data dynamics. Extensive experiments on real and synthetic datasets demonstrate that our approach achieves better utility than baseline methods and existing state-of-the-art methods. PMID:26973795
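
    A toy version of the fixed-threshold (DSFT-style) release strategy described above, in Python/NumPy: a Laplace-noised histogram is published only when the current snapshot is sufficiently far from the last release, otherwise the previous release is republished. Budget allocation and the noisy distance comparison needed for a rigorous privacy guarantee are deliberately omitted, and the data are invented.

        import numpy as np

        rng = np.random.default_rng(3)

        def release_series(snapshots, epsilon, distance_threshold):
            """Release a noisy histogram only when the snapshot moves far from the last release."""
            last = None
            out = []
            for hist in snapshots:
                hist = np.asarray(hist, dtype=float)
                dist = np.abs(hist - last).sum() if last is not None else np.inf
                if dist > distance_threshold:                      # sufficiently different: spend budget
                    last = hist + rng.laplace(scale=1.0 / epsilon, size=hist.shape)
                out.append(last.copy())                            # otherwise re-publish the previous release
            return out

        snaps = [[10, 5, 2], [10, 5, 3], [20, 9, 1]]               # hypothetical dynamic histograms
        print(release_series(snaps, epsilon=0.5, distance_threshold=4))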

  7. Threshold and variability properties of matrix frequency-doubling technology and standard automated perimetry in glaucoma.

    PubMed

    Artes, Paul H; Hutchison, Donna M; Nicolela, Marcelo T; LeBlanc, Raymond P; Chauhan, Balwantray C

    2005-07-01

    To compare test results from second-generation Frequency-Doubling Technology perimetry (FDT2, Humphrey Matrix; Carl-Zeiss Meditec, Dublin, CA) and standard automated perimetry (SAP) in patients with glaucoma. Specifically, to examine the relationship between visual field sensitivity and test-retest variability and to compare total and pattern deviation probability maps between both techniques. Fifteen patients with glaucoma who had early to moderately advanced visual field loss with SAP (mean MD, -4.0 dB; range, +0.2 to -16.1) were enrolled in the study. Patients attended three sessions. During each session, one eye was examined twice with FDT2 (24-2 threshold test) and twice with SAP (Swedish Interactive Threshold Algorithm [SITA] Standard 24-2 test), in random order. We compared threshold values between FDT2 and SAP at test locations with similar visual field coordinates. Test-retest variability, established in terms of test-retest intervals and standard deviations (SDs), was investigated as a function of visual field sensitivity (estimated by baseline threshold and mean threshold, respectively). The magnitude of visual field defects apparent in total and pattern deviation probability maps were compared between both techniques by ordinal scoring. The global visual field indices mean deviation (MD) and pattern standard deviation (PSD) of FDT2 and SAP correlated highly (r > 0.8; P < 0.001). At test locations with high sensitivity (>25 dB with SAP), threshold estimates from FDT2 and SAP exhibited a close, linear relationship, with a slope of approximately 2.0. However, at test locations with lower sensitivity, the relationship was much weaker and ceased to be linear. In comparison with FDT2, SAP showed a slightly larger proportion of test locations with absolute defects (3.0% vs. 2.2% with SAP and FDT2, respectively, P < 0.001). Whereas SAP showed a significant increase in test-retest variability at test locations with lower sensitivity (P < 0.001), there was no relationship between variability and sensitivity with FDT2 (P = 0.46). In comparison with SAP, FDT2 exhibited narrower test-retest intervals at test locations with lower sensitivity (SAP thresholds <25 dB). A comparison of the total and pattern deviation maps between both techniques showed that the total deviation analyses of FDT2 may slightly underestimate the visual field loss apparent with SAP. However, the pattern-deviation maps of both instruments agreed well with each other. The test-retest variability of FDT2 is uniform over the measurement range of the instrument. These properties may provide advantages for the monitoring of patients with glaucoma that should be investigated in longitudinal studies.

  8. Predicting potentially toxigenic Pseudo-nitzschia blooms in the Chesapeake Bay

    USGS Publications Warehouse

    Anderson, C.R.; Sapiano, M.R.P.; Prasad, M.B.K.; Long, W.; Tango, P.J.; Brown, C.W.; Murtugudde, R.

    2010-01-01

    Harmful algal blooms are now recognized as a significant threat to the Chesapeake Bay as they can severely compromise the economic viability of important recreational and commercial fisheries in the largest estuary of the United States. This study describes the development of empirical models for the potentially domoic acid-producing Pseudo-nitzschia species complex present in the Bay, developed from a 22-year time series of cell abundance and concurrent measurements of hydrographic and chemical properties. Using a logistic Generalized Linear Model (GLM) approach, model parameters and performance were compared over a range of Pseudo-nitzschia bloom thresholds relevant to toxin production by different species. Small-threshold blooms (≥10 cells mL-1) are explained by time of year, location, and variability in surface values of phosphate, temperature, nitrate plus nitrite, and freshwater discharge. Medium- (100 cells mL-1) to large-threshold (1000 cells mL-1) blooms are further explained by salinity, silicic acid, dissolved organic carbon, and light attenuation (Secchi) depth. These predictors are similar to other models for Pseudo-nitzschia blooms on the west coast, suggesting commonalities across ecosystems. Hindcasts of bloom probabilities at a 19% bloom prediction point yield a Heidke Skill Score of ~53%, a Probability of Detection ~75%, a False Alarm Ratio of ~52%, and a Probability of False Detection ~9%. The implication of possible future changes in Baywide nutrient stoichiometry on Pseudo-nitzschia blooms is discussed. © 2010 Elsevier B.V.
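
    For reference, the categorical skill scores quoted above can be computed from a 2×2 contingency table of predicted versus observed bloom events, as in this short Python sketch; the counts are illustrative and were chosen only to land near the scores quoted above, not taken from the study's hindcasts.

        hits, misses, false_alarms, correct_negatives = 30, 10, 33, 327

        pod = hits / (hits + misses)                               # Probability of Detection
        far = false_alarms / (hits + false_alarms)                 # False Alarm Ratio
        pofd = false_alarms / (false_alarms + correct_negatives)   # Probability of False Detection

        n = hits + misses + false_alarms + correct_negatives
        expected = ((hits + misses) * (hits + false_alarms)
                    + (correct_negatives + misses) * (correct_negatives + false_alarms)) / n
        hss = (hits + correct_negatives - expected) / (n - expected)   # Heidke Skill Score
        print(pod, far, pofd, hss)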

  9. [Application of artificial neural networks on the prediction of surface ozone concentrations].

    PubMed

    Shen, Lu-Lu; Wang, Yu-Xuan; Duan, Lei

    2011-08-01

    Ozone is an important secondary air pollutant in the lower atmosphere. In order to predict the hourly maximum ozone one day in advance based on the meteorological variables for the Wanqingsha site in Guangzhou, Guangdong province, a neural network model (Multi-Layer Perceptron) and a multiple linear regression model were used and compared. Model inputs are meteorological parameters (wind speed, wind direction, air temperature, relative humidity, barometric pressure and solar radiation) of the next day and the hourly maximum ozone concentration of the previous day. The OBS (optimal brain surgeon) approach was adopted to prune the neural network, to reduce its complexity and to improve its generalization ability. We find that the pruned neural network has the capacity to predict the peak ozone, with an agreement index of 92.3%, a root mean square error of 0.0428 mg/m3, an R-square of 0.737 and a success index of threshold exceedance of 77.0% (for a threshold O3 mixing ratio of 0.20 mg/m3). When the neural classifier was added to the neural network model, the success index of threshold exceedance increased to 83.6%. Comparing the performance indices of the multiple linear regression model and the neural network model, we conclude that the neural network is a better choice for predicting peak ozone from meteorological forecasts and may be applied to practical prediction of ozone concentrations.

  10. A de-noising algorithm based on wavelet threshold-exponential adaptive window width-fitting for ground electrical source airborne transient electromagnetic signal

    NASA Astrophysics Data System (ADS)

    Ji, Yanju; Li, Dongsheng; Yu, Mingmei; Wang, Yuan; Wu, Qiong; Lin, Jun

    2016-05-01

    The ground electrical source airborne transient electromagnetic system (GREATEM) mounted on an unmanned aircraft offers considerable prospecting depth, lateral resolution and detection efficiency. In recent years it has become an important technical means of rapid resource exploration. However, GREATEM data are extremely vulnerable to stationary white noise and non-stationary electromagnetic noise (sferics noise, aircraft engine noise and other man-made electromagnetic noise). These noises degrade the imaging quality and hinder data interpretation. Based on the characteristics of the GREATEM data and the major noise sources, we propose a de-noising algorithm utilizing the wavelet threshold method and exponential adaptive window width-fitting. Firstly, the white noise is filtered from the measured data using the wavelet threshold method. Then, the data are segmented using windows whose step lengths are evenly spaced on a logarithmic scale. The data polluted by electromagnetic noise are identified within each window based on the discriminating principle of energy detection, and the attenuation characteristics of the data slope are extracted. Eventually, an exponential fitting algorithm is adopted to fit the attenuation curve of each window, and the data polluted by non-stationary electromagnetic noise are replaced with their fitting results. Thus the non-stationary electromagnetic noise can be effectively removed. The proposed algorithm is verified on synthetic and real GREATEM signals. The results show that in GREATEM signals, stationary white noise and non-stationary electromagnetic noise can be effectively filtered using the wavelet threshold-exponential adaptive window width-fitting algorithm, which enhances the imaging quality.
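
    A sketch of the first (white-noise) stage only, using soft wavelet thresholding in Python; it assumes the PyWavelets package and a universal threshold, and does not implement the exponential window-fitting stage for non-stationary noise described above. The synthetic decaying transient is a stand-in for a GREATEM sounding.

        import numpy as np
        import pywt   # PyWavelets is assumed to be available

        rng = np.random.default_rng(4)
        t = np.linspace(0.01, 1.0, 1024)
        clean = np.exp(-5 * t)                                    # idealised decaying transient response
        noisy = clean + 0.02 * rng.standard_normal(t.size)        # add stationary white noise

        coeffs = pywt.wavedec(noisy, "db4", level=5)
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745            # noise estimate from finest-scale coefficients
        lam = sigma * np.sqrt(2 * np.log(noisy.size))             # universal threshold
        coeffs = [coeffs[0]] + [pywt.threshold(c, lam, mode="soft") for c in coeffs[1:]]
        denoised = pywt.waverec(coeffs, "db4")[: noisy.size]
        print(np.abs(denoised - clean).mean())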

  11. Graded-threshold parametric response maps: towards a strategy for adaptive dose painting

    NASA Astrophysics Data System (ADS)

    Lausch, A.; Jensen, N.; Chen, J.; Lee, T. Y.; Lock, M.; Wong, E.

    2014-03-01

    Purpose: To modify the single-threshold parametric response map (ST-PRM) method for predicting treatment outcomes in order to facilitate its use for guidance of adaptive dose painting in intensity-modulated radiotherapy. Methods: Multiple graded thresholds were used to extend the ST-PRM method (Nat. Med. 2009;15(5):572-576) such that the full functional change distribution within tumours could be represented with respect to multiple confidence interval estimates for functional changes in similar healthy tissue. The ST-PRM and graded-threshold PRM (GT-PRM) methods were applied to functional imaging scans of 5 patients treated for hepatocellular carcinoma. Pre and post-radiotherapy arterial blood flow maps (ABF) were generated from CT-perfusion scans of each patient. ABF maps were rigidly registered based on aligning tumour centres of mass. ST-PRM and GT-PRM analyses were then performed on overlapping tumour regions within the registered ABF maps. Main findings: The ST-PRMs contained many disconnected clusters of voxels classified as having a significant change in function. While this may be useful to predict treatment response, it may pose challenges for identifying boost volumes or for informing dose-painting by numbers strategies. The GT-PRMs included all of the same information as ST-PRMs but also visualized the full tumour functional change distribution. Heterogeneous clusters in the ST-PRMs often became more connected in the GT-PRMs by voxels with similar functional changes. Conclusions: GT-PRMs provided additional information which helped to visualize relationships between significant functional changes identified by ST-PRMs. This may enhance ST-PRM utility for guiding adaptive dose painting.

  12. Fundus-controlled two-color dark adaptometry with the Microperimeter MP1.

    PubMed

    Bowl, Wadim; Stieger, Knut; Lorenz, Birgit

    2015-06-01

    The aim of this study was to provide fundus-controlled two-color adaptometry with an existing device. A quick and easy approach extends the application possibilities of a commercial fundus-controlled perimeter. An external filter holder was placed in front of the objective lens of the MP1 (Nidek, Italy) and fitted with filters to modify background, stimulus intensity, and color. Prior to dark adaptometry, the subject's visual sensitivity profile was measured for red and blue stimuli to determine whether rods or cones or both mediated the absolute threshold. After light adaptation, 20 healthy subjects were investigated with a pattern covering six spots at the posterior pole of the retina up to 45 min of dark adaptation. Thresholds were determined using a 200 ms red Goldmann IV and a blue Goldmann II stimulus. The pre-test sensitivity showed a typical distribution of values along the meridian, with high peripheral light increment sensitivity (LIS) and low central LIS for rods and the reverse for cones. After bleach, threshold recovery had a classic biphasic shape. The absolute threshold was reached after approximately 10 min for the red and 15 min for the blue stimulus. Two-color fundus-controlled adaptometry with a commercial MP1 without internal changes to the device provides a quick and easy examination of rod and cone function during dark adaptation at defined retinal loci of the posterior pole. This innovative method will be helpful to measure rod vs. cone function at known loci of the posterior pole in early stages of retinal degenerations.

  13. Comparison of different threshold values r for approximate entropy: application to investigate the heart rate variability between heart failure and healthy control groups.

    PubMed

    Liu, Chengyu; Liu, Changchun; Shao, Peng; Li, Liping; Sun, Xin; Wang, Xinpei; Liu, Feng

    2011-02-01

    Approximate entropy (ApEn) is widely accepted as a complexity measure of the heart rate variability (HRV) signal, but selecting the criteria for the threshold value r is controversial. This paper aims to verify whether Chon's method of forecasting the r(max) is an appropriate one for the HRV signal. The standard limb lead ECG signals of 120 subjects were recorded for 10 min in a supine position. The subjects were divided into two groups: the heart failure group (22 females and 38 males, median age 62.4 ± 12.6) and the healthy control group (33 females and 27 males, median age 51.5 ± 16.9). Three types of ApEn were calculated: the ApEn(0.2) using the recommended constant r = 0.2, the ApEn(chon) using Chon's method and the ApEn(max) using the true r(max). A Wilcoxon rank sum test indicated that the ApEn(0.2) (p = 0.267) and the ApEn(max) (p = 0.813) showed no statistically significant differences between the two groups, while the ApEn(chon) (p = 0.040) did. We generated a synthetic database to study the effect of two influential factors (the signal length N and the ratio of short- and long-term variability sd(1)/sd(2)) on the empirical formula in Chon's method (Chon et al 2009 IEEE Eng. Med. Biol. Mag. 28 18-23). The results showed that the empirical formula proposed by Chon et al is a good method for analyzing random signals, but not an appropriate tool for analyzing nonlinear signals, such as the logistic or HRV signals.
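
    For readers unfamiliar with the index, the sketch below gives a direct Python/NumPy implementation of ApEn with the tolerance set to r = 0.2 times the series standard deviation (the fixed choice referred to above); the R-R series is synthetic, and the quadratic-memory pairwise computation is only suitable for short records.

        import numpy as np

        def apen(x, m=2, r_factor=0.2):
            """Approximate entropy with tolerance r = r_factor * SD(x)."""
            x = np.asarray(x, dtype=float)
            r = r_factor * x.std()
            def phi(mm):
                templates = np.array([x[i:i + mm] for i in range(len(x) - mm + 1)])
                # Chebyshev distance between every pair of templates (self-matches included)
                d = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
                return np.mean(np.log((d <= r).mean(axis=1)))
            return phi(m) - phi(m + 1)

        rr = np.random.default_rng(5).normal(800.0, 40.0, 300)   # synthetic R-R intervals (ms)
        print(apen(rr, m=2, r_factor=0.2))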

  14. Mechanosensing of stem bending and its interspecific variability in five neotropical rainforest species.

    PubMed

    Coutand, Catherine; Chevolot, Malia; Lacointe, André; Rowe, Nick; Scotti, Ivan

    2010-02-01

    In rain forests, sapling survival is highly dependent on the regulation of trunk slenderness (height/diameter ratio): shade-intolerant species have to grow in height as fast as possible to reach the canopy but also have to withstand mechanical loadings (wind and their own weight) to avoid buckling. Recent studies suggest that mechanosensing is essential to control tree dimensions and stability-related morphogenesis. Differences in species slenderness have been observed among rainforest trees; the present study thus investigates whether species with different slenderness and growth habits exhibit differences in mechanosensitivity. Recent studies have led to a model of mechanosensing (sum-of-strains model) that predicts a quantitative relationship between the applied sum of longitudinal strains and the plant's responses in the case of a single bending. Saplings of five different neotropical species (Eperua falcata, E. grandiflora, Tachigali melinonii, Symphonia globulifera and Bauhinia guianensis) were subjected to a regimen of controlled mechanical loading phases (bending) alternating with still phases over a period of 2 months. Mechanical loading was controlled in terms of strains and the five species were subjected to the same range of sum of strains. The application of the sum-of-strain model led to a dose-response curve for each species. Dose-response curves were then compared between tested species. The model of mechanosensing (sum-of-strain model) applied in the case of multiple bending as long as the bending frequency was low. A comparison of dose-response curves for each species demonstrated differences in the stimulus threshold, suggesting two groups of responses among the species. Interestingly, the liana species B. guianensis exhibited a higher threshold than other Leguminosae species tested. This study provides a conceptual framework to study variability in plant mechanosensing and demonstrated interspecific variability in mechanosensing.

  15. Two-threshold model for scaling laws of noninteracting snow avalanches

    USGS Publications Warehouse

    Faillettaz, J.; Louchet, F.; Grasso, J.-R.

    2004-01-01

    A two-threshold model was proposed for scaling laws of noninteracting snow avalanches. It was found that the sizes of the largest avalanches just preceding failure of the lattice system were power-law distributed. The proposed model reproduced the range of power-law exponents observed for land, rock or snow avalanches by tuning the maximum value of the ratio of the two failure thresholds. A two-threshold 2D cellular automaton was introduced to study the scaling for gravity-driven systems.

  16. Incremental cost-effectiveness evaluation of vaccinating girls against cervical cancer pre- and post-sexual debut in Belgium.

    PubMed

    Demarteau, Nadia; Van Kriekinge, Georges; Simon, Philippe

    2013-08-20

    Vaccination against human papillomavirus (HPV) to prevent cervical cancer (CC) primarily targets young girls before sexual debut and is cost-effective. We assessed whether vaccination with the HPV-16/18 AS04-adjuvanted vaccine added to screening remains cost-effective in females after sexual debut compared to screening alone in Belgium. The role of protection against non-HPV-16/18 was also investigated. A published Markov cohort model was adapted to Belgium. The model replicated the natural history of HPV infection, the effects of screening, and vaccination. Vaccine efficacy (VE) included non-HPV-16/18 protection based on the PATRICIA clinical trial data. Pre- and post-HPV exposure VE were differentiated. Lifetime vaccine protection was assumed. Input data were obtained from literature review, national databases and a Delphi panel. Costing was from a healthcare payer perspective. Costs were discounted at 3% and effects at 1.5%. The incremental cost-effectiveness ratio (ICER) per quality-adjusted life-year (QALY) gained and the number of lesions prevented with vaccination from age 12 to 40 was evaluated. The specific effect of non-HPV-16/18 protection was investigated. Univariate sensitivity analysis was performed on key variables. The model estimated that vaccinating a cohort of 100,000 girls at age 12 would prevent 646 CC cases over a lifetime (102 non-HPV-16/18) with an ICER of €9171/QALY. Vaccinating at age 26 would prevent 340 CC cases (40 non-HPV-16/18) with an ICER of €17,348/QALY and vaccinating at age 40 would prevent 146 CC cases (17 non-HPV-16/18) with an ICER of €42,847/QALY. The ICER remained under the highly cost-effective threshold (1×GDP/capita) until age 33 years and under the cost-effective threshold (3×GDP/capita) beyond age 40. Extending HPV vaccination to females post-sexual debut could lead to a substantial reduction in CC-related burden and would be cost-effective in Belgium. Copyright © 2013 The Authors. Published by Elsevier Ltd.. All rights reserved.

  17. Consequences of Global Warming of 1.5 °C and 2 °C for Regional Temperature and Precipitation Changes in the Contiguous United States

    PubMed Central

    Bradley, Raymond S.

    2017-01-01

    The differential warming of land and ocean leads to many continental regions in the Northern Hemisphere warming at rates higher than the global mean temperature. Adaptation and conservation efforts will, therefore, benefit from understanding regional consequences of limiting the global mean temperature increase to well below 2°C above pre-industrial levels, a limit agreed upon at the United Nations Climate Summit in Paris in December 2015. Here, we analyze climate model simulations from the Coupled Model Intercomparison Project Phase 5 (CMIP5) to determine the timing and magnitude of regional temperature and precipitation changes across the contiguous United States (US) for global warming of 1.5 and 2°C and highlight consensus and uncertainties in model projections and their implications for making decisions. The regional warming rates differ considerably across the contiguous US, but all regions are projected to reach 2°C about 10-20 years before the global mean temperature. Although there is uncertainty in the timing of exactly when the 1.5 and 2°C thresholds will be crossed regionally, over 80% of the models project at least 2°C warming by 2050 for all regions for the high emissions scenario. This threshold-based approach also highlights regional variations in the rate of warming across the US. The fastest warming region in the contiguous US is the Northeast, which is projected to warm by 3°C when global warming reaches 2°C. The signal-to-noise ratio calculations indicate that the regional warming estimates remain outside the envelope of uncertainty throughout the twenty-first century, making them potentially useful to planners. The regional precipitation projections for global warming of 1.5°C and 2°C are uncertain, but the eastern US is projected to experience wetter winters and the Great Plains and the Northwest US are projected to experience drier summers in the future. The impact of different scenarios on regional precipitation projections is negligible throughout the twenty-first century compared to uncertainties associated with internal variability and model diversity. PMID:28076360

  18. Consequences of Global Warming of 1.5 °C and 2 °C for Regional Temperature and Precipitation Changes in the Contiguous United States.

    PubMed

    Karmalkar, Ambarish V; Bradley, Raymond S

    2017-01-01

    The differential warming of land and ocean leads to many continental regions in the Northern Hemisphere warming at rates higher than the global mean temperature. Adaptation and conservation efforts will, therefore, benefit from understanding regional consequences of limiting the global mean temperature increase to well below 2°C above pre-industrial levels, a limit agreed upon at the United Nations Climate Summit in Paris in December 2015. Here, we analyze climate model simulations from the Coupled Model Intercomparison Project Phase 5 (CMIP5) to determine the timing and magnitude of regional temperature and precipitation changes across the contiguous United States (US) for global warming of 1.5 and 2°C and highlight consensus and uncertainties in model projections and their implications for making decisions. The regional warming rates differ considerably across the contiguous US, but all regions are projected to reach 2°C about 10-20 years before the global mean temperature. Although there is uncertainty in the timing of exactly when the 1.5 and 2°C thresholds will be crossed regionally, over 80% of the models project at least 2°C warming by 2050 for all regions for the high emissions scenario. This threshold-based approach also highlights regional variations in the rate of warming across the US. The fastest warming region in the contiguous US is the Northeast, which is projected to warm by 3°C when global warming reaches 2°C. The signal-to-noise ratio calculations indicate that the regional warming estimates remain outside the envelope of uncertainty throughout the twenty-first century, making them potentially useful to planners. The regional precipitation projections for global warming of 1.5°C and 2°C are uncertain, but the eastern US is projected to experience wetter winters and the Great Plains and the Northwest US are projected to experience drier summers in the future. The impact of different scenarios on regional precipitation projections is negligible throughout the twenty-first century compared to uncertainties associated with internal variability and model diversity.

  19. Evaluation of a Postoperative Pain-Like State on Motivated Behavior in Rats: Effects of Plantar Incision on Progressive-Ratio Food-Maintained Responding.

    PubMed

    Warner, Emily; Krivitsky, Rebecca; Cone, Katherine; Atherton, Phillip; Pitre, Travis; Lanpher, Janell; Giuvelis, Denise; Bergquist, Ivy; King, Tamara; Bilsky, Edward J; Stevenson, Glenn W

    2015-12-01

    There has been recent interest in characterizing the effects of pain-like states on motivated behaviors in order to quantify how pain modulates goal-directed behavior and the persistence of that behavior. The current set of experiments assessed the effects of an incisional postoperative pain manipulation on food-maintained responding under a progressive-ratio (PR) operant schedule. Independent variables included injury state (plantar incision or anesthesia control) and reinforcer type (grain pellet or sugar pellet); dependent variables were tactile sensory thresholds and response breakpoint. Once responding stabilized on the PR schedule, separate groups of rats received a single ventral hind paw incision or anesthesia (control condition). Incision significantly reduced breakpoints in rats responding for grain, but not sugar. In rats responding for sugar, tactile hypersensitivity recovered within 24 hr, indicating a faster recovery of incision-induced tactile hypersensitivity compared to rats responding for grain, which demonstrated recovery at PD2. The NSAID analgesic, diclofenac (5.6 mg/kg) completely restored incision-depressed PR operant responding and tactile sensitivity at 3 hr following incision. The PR schedule differentiated between sucrose and grain, suggesting that relative reinforcing efficacy may be an important determinant in detecting pain-induced changes in motivated behavior. © 2015 Wiley Periodicals, Inc.

  20. Evaluation of a post-operative pain-like state on motivated behavior in rats: Effects of plantar incision on progressive-ratio food-maintained responding

    PubMed Central

    Warner, Emily; Krivitsky, Rebecca; Cone, Katherine; Atherton, Phillip; Pitre, Travis; Lanpher, Janell; Giuvelis, Denise; Bergquist, Ivy; King, Tamara; Bilsky, Edward J.; Stevenson, Glenn W.

    2015-01-01

    There has been recent interest in characterizing the effects of pain-like states on motivated behaviors in order to quantify how pain modulates goal-directed behavior and the persistence of that behavior. The current set of experiments assessed the effects of an incisional post-operative pain manipulation on food-maintained responding under a progressive-ratio (PR) operant schedule. Independent variables included injury state (plantar incision or anesthesia control) and reinforcer type (grain pellet or sugar pellet); dependent variables were tactile sensory thresholds and response breakpoint. Once responding stabilized on the PR schedule, separate groups of rats received a single ventral hind paw incision or anesthesia (control condition). Incision significantly reduced breakpoints in rats responding for grain, but not sugar. In rats responding for sugar, tactile hypersensitivity recovered within 24 hrs, indicating a faster recovery of incision-induced tactile hypersensitivity compared to rats responding for grain, which demonstrated recovery at PD2. The NSAID analgesic, diclofenac (5.6 mg/kg) completely restored incision-depressed PR operant responding and tactile sensitivity at 3 hr following incision. The PR schedule differentiated between sucrose and grain, suggesting that relative reinforcing efficacy may be an important determinant in detecting pain-induced changes in motivated behavior. PMID:26494422
