Sample records for adaptive threshold methods

  1. Wavelet-based adaptive thresholding method for image segmentation

    NASA Astrophysics Data System (ADS)

    Chen, Zikuan; Tao, Yang; Chen, Xin; Griffis, Carl

    2001-05-01

    A nonuniform background distribution may cause a global thresholding method to fail to segment objects. One solution is using a local thresholding method that adapts to local surroundings. In this paper, we propose a novel local thresholding method for image segmentation, using multiscale threshold functions obtained by wavelet synthesis with weighted detail coefficients. In particular, the coarse-to-fine synthesis with attenuated detail coefficients produces a threshold function corresponding to a high-frequency-reduced signal. This wavelet-based local thresholding method adapts to both local size and local surroundings, and its implementation can take advantage of the fast wavelet algorithm. We applied this technique to physical contaminant detection for poultry meat inspection using x-ray imaging. Experiments showed that inclusion objects in deboned poultry could be extracted at multiple resolutions despite their irregular sizes and uneven backgrounds.
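
    The construction can be sketched in a few lines: decompose the image, attenuate the detail coefficients, and synthesize a smooth surface that serves as a per-pixel threshold. This is a minimal illustration of the idea with PyWavelets, not the authors' implementation; the wavelet choice, attenuation factor and additive offset are assumptions.

```python
import numpy as np
import pywt

def wavelet_threshold_surface(image, wavelet="db2", levels=3, atten=0.2, offset=10.0):
    """Synthesize a smooth background surface by attenuating detail
    coefficients, then use it as a spatially varying threshold."""
    coeffs = pywt.wavedec2(image.astype(float), wavelet, level=levels)
    # keep the approximation, shrink every detail subband
    coeffs = [coeffs[0]] + [tuple(atten * d for d in detail) for detail in coeffs[1:]]
    surface = pywt.waverec2(coeffs, wavelet)[:image.shape[0], :image.shape[1]]
    return surface + offset  # offset keeps background noise below threshold (heuristic)

def segment(image, **kw):
    # objects are pixels rising above the locally adapted threshold surface
    return image > wavelet_threshold_surface(image, **kw)
```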

  2. Defect Detection of Steel Surfaces with Global Adaptive Percentile Thresholding of Gradient Image

    NASA Astrophysics Data System (ADS)

    Neogi, Nirbhar; Mohanta, Dusmanta K.; Dutta, Pranab K.

    2017-12-01

    Steel strips are used extensively for white goods, auto bodies and other purposes where surface defects are not acceptable. On-line surface inspection systems can effectively detect and classify defects and help in taking corrective actions. For defect detection, the use of gradients is very popular for highlighting and subsequently segmenting areas of interest in a surface inspection system. Most of the time, segmentation by a fixed-value threshold leads to unsatisfactory results. As defects can be both very small and very large in size, segmentation of a gradient image based on percentile thresholding can lead to inadequate or excessive segmentation of defective regions. A global adaptive percentile thresholding of the gradient image has been formulated for blister defects and water deposits (a pseudo defect) in steel strips. The developed method adaptively changes the percentile value used for thresholding depending on the number of pixels above specific gray-level values of the gradient image. The method is able to segment defective regions selectively, preserving the characteristics of defects irrespective of their size. The developed method performs better than the Otsu method of thresholding and an adaptive thresholding method based on local properties.
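
    The adaptation rule lends itself to a short sketch: count the fraction of strong-gradient pixels, then pick the thresholding percentile from that count. The breakpoints, reference gray level and percentile schedule below are illustrative stand-ins, not the calibrated values from the paper.

```python
import numpy as np
from scipy import ndimage

def adaptive_percentile_mask(image, ref_level=64.0,
                             breaks=(0.001, 0.01, 0.05),
                             percentiles=(99.9, 99.5, 99.0, 98.0)):
    gx = ndimage.sobel(image.astype(float), axis=1)
    gy = ndimage.sobel(image.astype(float), axis=0)
    grad = np.hypot(gx, gy)
    frac_strong = np.mean(grad > ref_level)   # share of strong-gradient pixels
    # larger defects produce more strong-gradient pixels -> lower percentile,
    # so big defects are not under-segmented and small ones not over-segmented
    idx = int(np.searchsorted(breaks, frac_strong))
    return grad > np.percentile(grad, percentiles[idx])
```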

  3. Adaptive spline autoregression threshold method in forecasting Mitsubishi car sales volume at PT Srikandi Diamond Motors

    NASA Astrophysics Data System (ADS)

    Susanti, D.; Hartini, E.; Permana, A.

    2017-01-01

    Growing sales competition among companies in Indonesia means that every company needs proper planning in order to win the competition with other companies. One way to support such planning is to forecast car sales for the next few periods, since the inventory of cars to be sold should be proportional to the number of cars needed. To obtain an accurate forecast, one of the methods that can be used is Adaptive Spline Threshold Autoregression (ASTAR). This paper therefore focuses on the use of the ASTAR method in forecasting the volume of car sales at PT Srikandi Diamond Motors using time series data. In this study, the forecasts produced by the ASTAR method proved approximately correct.

  4. Self-Tuning Threshold Method for Real-Time Gait Phase Detection Based on Ground Contact Forces Using FSRs.

    PubMed

    Tang, Jing; Zheng, Jianbin; Wang, Yang; Yu, Lie; Zhan, Enqi; Song, Qiuzhi

    2018-02-06

    This paper presents a novel methodology for detecting the gait phase of human walking on level ground. The previous threshold method (TM) sets a threshold to divide the ground contact forces (GCFs) into on-ground and off-ground states. However, the previous methods for gait phase detection demonstrate no adaptability to different people and different walking speeds. Therefore, this paper presents a self-tuning triple threshold algorithm (STTTA) that calculates adjustable thresholds to adapt to human walking. Two force sensitive resistors (FSRs) were placed on the ball and heel to measure GCFs. Three thresholds (i.e., high-threshold, middle-threshold and low-threshold) were used to search out the maximum and minimum GCFs for the self-adjustments of the thresholds. The high-threshold was the main threshold used to divide the GCFs into on-ground and off-ground states. Then, the gait phases were obtained through the gait phase detection algorithm (GPDA), which provides the rules that determine the calculations for the STTTA. Finally, STTTA reliability is determined by comparing its results with those of the Mariani method (referenced as the timing analysis module, TAM) and the Lopez-Meyer method. Experimental results show that the proposed method can be used to detect gait phases in real time and obtains high reliability compared with the previous methods in the literature. In addition, the proposed method exhibits strong adaptability to different wearers walking at different walking speeds.
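
    The self-tuning idea can be sketched as follows: track the recent extremes of the GCF signal and place the three thresholds at fixed fractions of that range, with the high threshold doing the on-ground/off-ground division. The window length and fractions are assumptions for illustration; the paper derives the adjustments through the GPDA rules.

```python
import numpy as np

def on_ground_states(gcf_stream, frac=(0.7, 0.5, 0.3), window=200):
    """Classify each GCF sample as on-ground (True) or off-ground (False)."""
    buf, states = [], []
    for f in gcf_stream:
        buf.append(f)
        buf = buf[-window:]                 # recent samples define the extremes
        lo, hi = min(buf), max(buf)
        t_hi, t_mid, t_lo = (lo + c * (hi - lo) for c in frac)
        # t_mid / t_lo would support the max/min search in the full algorithm;
        # the high threshold is the main on/off divider
        states.append(f > t_hi)
    return np.array(states)
```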

  5. Subsurface characterization with localized ensemble Kalman filter employing adaptive thresholding

    NASA Astrophysics Data System (ADS)

    Delijani, Ebrahim Biniaz; Pishvaie, Mahmoud Reza; Boozarjomehry, Ramin Bozorgmehry

    2014-07-01

    The ensemble Kalman filter (EnKF), a Monte Carlo sequential data assimilation method, has emerged as a promising tool for subsurface media characterization during the past decade. Due to the high computational cost of a large ensemble size, EnKF is limited to small ensemble sets in practice. This results in spurious correlations in the covariance structure, leading to incorrect updates or probable divergence of the updated realizations. In this paper, a universal/adaptive thresholding method is presented to remove and/or mitigate the spurious correlation problem in the forecast covariance matrix. This method is then extended to regularize the Kalman gain directly. Four different thresholding functions have been considered to threshold the forecast covariance and gain matrices: hard, soft, lasso and Smoothly Clipped Absolute Deviation (SCAD) functions. Three benchmarks are used to evaluate the performances of these methods: a small 1D linear model and two 2D water flooding cases (in petroleum reservoirs) whose levels of heterogeneity/nonlinearity differ. It should be noted that besides the adaptive thresholding, the standard distance-dependent localization and bootstrap Kalman gain are also implemented for comparison purposes. We assessed each setup with different ensemble sets to investigate the sensitivity of each method to ensemble size. The results indicate that thresholding of the forecast covariance yields more reliable performance than thresholding of the Kalman gain. Among the thresholding functions, SCAD is more robust for both covariance and gain estimation. Our analyses emphasize that not all assimilation cycles require thresholding, and that it should be performed wisely during the early assimilation cycles. The proposed scheme of adaptive thresholding outperforms the other methods for subsurface characterization of the underlying benchmarks.
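
    The four threshold functions named above are standard and easy to state; a sketch applied elementwise to a covariance (or gain) matrix follows, with lasso shrinkage coinciding with the soft rule in this elementwise form. The per-cycle choice of the threshold level lambda is the adaptive part of the paper and is not reproduced here.

```python
import numpy as np

def hard(x, lam):
    return np.where(np.abs(x) > lam, x, 0.0)

def soft(x, lam):  # also the elementwise lasso shrinkage
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def scad(x, lam, a=3.7):
    """Smoothly Clipped Absolute Deviation: soft near zero, identity far out."""
    ax = np.abs(x)
    mid = ((a - 1.0) * x - np.sign(x) * a * lam) / (a - 2.0)
    return np.where(ax <= 2.0 * lam, soft(x, lam),
                    np.where(ax <= a * lam, mid, x))
```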

  6. Developing Bayesian adaptive methods for estimating sensitivity thresholds (d′) in Yes-No and forced-choice tasks

    PubMed Central

    Lesmes, Luis A.; Lu, Zhong-Lin; Baek, Jongsoo; Tran, Nina; Dosher, Barbara A.; Albright, Thomas D.

    2015-01-01

    Motivated by Signal Detection Theory (SDT), we developed a family of novel adaptive methods that estimate the sensitivity threshold—the signal intensity corresponding to a pre-defined sensitivity level (d′ = 1)—in Yes-No (YN) and Forced-Choice (FC) detection tasks. Rather than focus stimulus sampling to estimate a single level of %Yes or %Correct, the current methods sample psychometric functions more broadly, to concurrently estimate sensitivity and decision factors, and thereby estimate thresholds that are independent of decision confounds. Developed for four tasks—(1) simple YN detection, (2) cued YN detection, which cues the observer's response state before each trial, (3) rated YN detection, which incorporates a Not Sure response, and (4) FC detection—the qYN and qFC methods yield sensitivity thresholds that are independent of the task's decision structure (YN or FC) and/or the observer's subjective response state. Results from simulation and psychophysics suggest that 25 trials (and sometimes fewer) are sufficient to estimate YN thresholds with reasonable precision (s.d. = 0.10–0.15 decimal log units), but more trials are needed for FC thresholds. When the same subjects were tested across tasks of simple, cued, rated, and FC detection, adaptive threshold estimates exhibited excellent agreement with the method of constant stimuli (MCS), and with each other. These YN adaptive methods deliver criterion-free thresholds that have previously been exclusive to FC methods. PMID:26300798

  7. Lower-upper-threshold correlation for underwater range-gated imaging self-adaptive enhancement.

    PubMed

    Sun, Liang; Wang, Xinwei; Liu, Xiaoquan; Ren, Pengdao; Lei, Pingshun; He, Jun; Fan, Songtao; Zhou, Yan; Liu, Yuliang

    2016-10-10

    In underwater range-gated imaging (URGI), enhancement of low-brightness and low-contrast images is critical for human observation. Traditional histogram equalizations over-enhance images, with the result that details are lost. To suppress over-enhancement, a lower-upper-threshold correlation method is proposed for URGI self-adaptive enhancement based on double-plateau histogram equalization. The lower threshold determines image details and suppresses over-enhancement, and it is correlated with the upper threshold. First, the upper threshold is updated by searching for the local maximum in real time; then the lower threshold is calculated from the upper threshold and the number of nonzero units selected from a filtered histogram. With this method, the backgrounds of underwater images are constrained while details are enhanced. Finally, proof-of-concept experiments are performed. Peak signal-to-noise ratio, variance, contrast, and human visual properties are used to evaluate the objective quality of the global images and regions of interest. The evaluation results demonstrate that the proposed method adaptively selects the proper upper and lower thresholds under different conditions, and that it contributes to URGI with effective image enhancement for human eyes.
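
    The backbone of the method, double-plateau histogram equalization, fits in a few lines: bins above the upper plateau are clipped (restraining the background) and small nonzero bins are raised to the lower plateau (protecting details) before the usual cumulative mapping. Here both plateaus are passed in directly; the paper's contribution, deriving the lower threshold from the upper one in real time, is only hinted at in the comments.

```python
import numpy as np

def double_plateau_equalize(img, t_up, t_down, bins=256):
    """img: uint8 image; t_up / t_down: upper and lower plateau values."""
    hist, _ = np.histogram(img, bins=bins, range=(0, bins))
    clipped = np.where(hist >= t_up, t_up,                     # restrain background
                       np.where((hist > 0) & (hist <= t_down),
                                t_down, hist))                 # protect details
    cdf = np.cumsum(clipped)
    lut = np.round((bins - 1) * cdf / cdf[-1]).astype(np.uint8)
    return lut[img]
```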

  8. Adaptive threshold shearlet transform for surface microseismic data denoising

    NASA Astrophysics Data System (ADS)

    Tang, Na; Zhao, Xian; Li, Yue; Zhu, Dan

    2018-06-01

    Random noise suppression plays an important role in microseismic data processing. Microseismic data are often corrupted by strong random noise, which directly influences the identification and location of microseismic events. The shearlet transform is a new multiscale transform that can effectively process low-magnitude microseismic data. In the shearlet domain, owing to the different distributions of valid signals and random noise, shearlet coefficients can be shrunk by thresholding, so the threshold is vital in suppressing random noise. Conventional threshold denoising algorithms usually apply the same threshold to all coefficients, which causes inefficient noise suppression or loss of valid signals. To solve these problems, we propose the adaptive threshold shearlet transform (ATST) for surface microseismic data denoising. In the new algorithm, we first calculate a fundamental threshold for each direction subband. In each direction subband, an adjustment factor is obtained from each subband coefficient and its neighboring coefficients, in order to adaptively regulate the fundamental threshold for different shearlet coefficients. Finally, we apply the adaptive threshold to the different shearlet coefficients. Experimental denoising results on synthetic records and field data illustrate that the proposed method performs better in suppressing random noise and preserving valid signals than the conventional shearlet denoising method.
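
    Absent a shearlet library, the neighborhood adjustment can be illustrated on a generic subband array: where neighboring coefficients carry high energy (likely signal), the adjustment factor lowers the fundamental threshold; where they are weak (likely noise), it raises it. The specific adjustment formula and clipping range below are assumptions, and soft shrinkage stands in for the paper's shrinkage rule.

```python
import numpy as np
from scipy import ndimage

def adaptive_shrink(subband, t0, eps=1e-12):
    """Shrink one direction subband with a per-coefficient threshold."""
    energy = ndimage.uniform_filter(subband ** 2, size=3)  # 3x3 neighborhood energy
    factor = np.sqrt(energy.mean() / (energy + eps))       # strong neighbors -> small factor
    thr = t0 * np.clip(factor, 0.5, 2.0)                   # regulated fundamental threshold
    return np.sign(subband) * np.maximum(np.abs(subband) - thr, 0.0)
```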

  9. A Purkinje shift in the spectral sensitivity of grey squirrels

    PubMed Central

    Silver, Priscilla H.

    1966-01-01

    1. The light-adapted spectral sensitivity of the grey squirrel has been determined by an automated training method at a level about 6 log units above the squirrel's absolute threshold. 2. The maximum sensitivity is near 555 nm, under light-adapted conditions, compared with the dark-adapted maximum near 500 nm found by a similar method. 3. Neither the light-adapted nor the dark-adapted behavioural threshold agrees with electrophysiological findings using single flash techniques, but there is agreement with e.r.g. results obtained with sinusoidal stimuli. PMID:5972118

  10. An adaptive threshold detector and channel parameter estimator for deep space optical communications

    NASA Technical Reports Server (NTRS)

    Arabshahi, P.; Mukai, R.; Yan, T. -Y.

    2001-01-01

    This paper presents a method for optimal adaptive setting of pulse-position-modulation pulse detection thresholds, which minimizes the total probability of error for the dynamically fading optical free-space channel.
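
    For a single slot statistic with known (or estimated) on/off distributions, the minimum-error threshold can be found by a direct sweep; a Gaussian-noise sketch follows. In an adaptive receiver the channel parameters would be re-estimated as the channel fades and the sweep repeated; the Gaussian model and the prior are assumptions here, not the paper's channel model.

```python
import numpy as np
from scipy.stats import norm

def optimal_threshold(mu_off, sig_off, mu_on, sig_on, p_on=0.5):
    """Threshold minimizing weighted total error = P(false alarm) + P(miss)."""
    ts = np.linspace(mu_off, mu_on, 2001)
    p_fa = norm.sf(ts, mu_off, sig_off)     # noise-only slot exceeds threshold
    p_miss = norm.cdf(ts, mu_on, sig_on)    # pulse slot falls below threshold
    p_err = (1.0 - p_on) * p_fa + p_on * p_miss
    return ts[int(np.argmin(p_err))]
```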

  11. Automatic video shot boundary detection using k-means clustering and improved adaptive dual threshold comparison

    NASA Astrophysics Data System (ADS)

    Sa, Qila; Wang, Zhihui

    2018-03-01

    At present, content-based video retrieval (CBVR) is the most mainstream video retrieval method, using a video's own features to perform automatic identification and retrieval. This method involves a key technology, shot segmentation. In this paper, a method for automatic video shot boundary detection with k-means clustering and improved adaptive dual threshold comparison is proposed. First, the visual features of every frame are extracted and divided into two categories using the k-means clustering algorithm, namely frames with significant change and frames with no significant change. Then, based on the classification results, the improved adaptive dual threshold comparison method is used to determine both abrupt and gradual shot boundaries. Finally, an automatic video shot boundary detection system is achieved.

  12. 3D SAPIV particle field reconstruction method based on adaptive threshold.

    PubMed

    Qu, Xiangju; Song, Yang; Jin, Ying; Li, Zhenhua; Wang, Xuezhen; Guo, ZhenYan; Ji, Yunjing; He, Anzhi

    2018-03-01

    Particle image velocimetry (PIV) is a necessary flow field diagnostic technique that provides instantaneous velocimetry information non-intrusively. Three-dimensional (3D) PIV methods can provide a full understanding of a 3D structure, the complete stress tensor, and the vorticity vector in complex flows. In synthetic aperture particle image velocimetry (SAPIV), flow fields with large particle intensities can be measured from the same direction by different cameras. During SAPIV particle reconstruction, particles are commonly reconstructed by manually setting a threshold to filter out unfocused particles in the refocused images. In this paper, the particle intensity distribution in refocused images is analyzed, and a SAPIV particle field reconstruction method based on an adaptive threshold is presented. By using the adaptive threshold to filter the 3D measurement volume integrally, the three-dimensional location information of the focused particles can be reconstructed. The cross correlations between images captured by the cameras and images projected from the reconstructed particle field are calculated for different threshold values. The optimal threshold is determined by cubic curve fitting and is defined as the threshold value at which the correlation coefficient reaches its maximum. A numerical simulation of a 16-camera array and a particle field at two adjacent time events quantitatively evaluates the performance of the proposed method. An experimental system consisting of an array of 16 cameras was used to reconstruct four adjacent frames in a vortex flow field. The results show that the proposed reconstruction method can effectively reconstruct 3D particle fields.
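
    The threshold search itself reduces to scoring candidates and taking the maximum of a cubic fit. In the sketch below, reconstruct and reproject are hypothetical stand-ins for the SAPIV reconstruction and camera-projection steps, which are not shown.

```python
import numpy as np

def best_threshold(refocused, camera_imgs, reconstruct, reproject, cands):
    scores = []
    for t in cands:
        field = reconstruct(refocused, threshold=t)       # assumed pipeline hook
        projections = reproject(field)                    # assumed pipeline hook
        scores.append(np.mean([np.corrcoef(p.ravel(), c.ravel())[0, 1]
                               for p, c in zip(projections, camera_imgs)]))
    poly = np.poly1d(np.polyfit(cands, scores, 3))        # cubic curve fit
    crit = np.roots(poly.deriv().coeffs)                  # stationary points of the fit
    crit = [r.real for r in crit
            if np.isreal(r) and min(cands) <= r.real <= max(cands)]
    # threshold at the fitted maximum; fall back to the best raw score
    return max(crit, key=poly) if crit else cands[int(np.argmax(scores))]
```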

  13. An adaptive embedded mesh procedure for leading-edge vortex flows

    NASA Technical Reports Server (NTRS)

    Powell, Kenneth G.; Beer, Michael A.; Law, Glenn W.

    1989-01-01

    A procedure for solving the conical Euler equations on an adaptively refined mesh is presented, along with a method for determining which cells to refine. The solution procedure is a central-difference cell-vertex scheme. The adaptation procedure is made up of a parameter on which the refinement decision is based, and a method for choosing a threshold value of the parameter. The refinement parameter is a measure of mesh-convergence, constructed by comparison of locally coarse- and fine-grid solutions. The threshold for the refinement parameter is based on the curvature of the curve relating the number of cells flagged for refinement to the value of the refinement threshold. Results for three test cases are presented. The test problem is that of a delta wing at angle of attack in a supersonic free-stream. The resulting vortices and shocks are captured efficiently by the adaptive code.
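
    The threshold-selection rule described above can be sketched directly: sweep candidate thresholds, count the cells whose refinement parameter would be flagged, and place the working threshold where the count-versus-threshold curve has maximum curvature. This is an illustration of the stated rule, not the original code.

```python
import numpy as np

def refinement_threshold(param, cands):
    """param: per-cell refinement parameter; cands: increasing threshold sweep."""
    counts = np.array([(param > t).sum() for t in cands], dtype=float)
    d1 = np.gradient(counts, cands)
    d2 = np.gradient(d1, cands)
    curvature = np.abs(d2) / (1.0 + d1 ** 2) ** 1.5
    return cands[int(np.argmax(curvature))]  # knee of the flagged-cell curve
```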

  14. Chaotic Signal Denoising Based on Hierarchical Threshold Synchrosqueezed Wavelet Transform

    NASA Astrophysics Data System (ADS)

    Wang, Wen-Bo; Jing, Yun-yu; Zhao, Yan-chao; Zhang, Lian-Hua; Wang, Xiang-Li

    2017-12-01

    To overcome the shortcomings of the single-threshold synchrosqueezed wavelet transform (SWT) denoising method, an adaptive hierarchical threshold SWT chaotic signal denoising method is proposed. First, a new SWT threshold function is constructed based on Stein's unbiased risk estimate; it is twice continuously differentiable. Then, using the new threshold function, a thresholding process based on the minimum mean square error is implemented, and the optimal estimate of each layer's threshold in SWT chaotic denoising is obtained. Experimental results on a simulated chaotic signal and measured sunspot signals show that the proposed method filters the noise of a chaotic signal well and that the intrinsic chaotic characteristics of the original signal are recovered very well. Compared with the EEMD denoising method and the single-threshold SWT denoising method, the proposed method obtains better denoising results for chaotic signals.

  15. Passive activity observation (PAO) method to estimate outdoor thermal adaptation in public space: case studies in Australian cities.

    PubMed

    Sharifi, Ehsan; Boland, John

    2018-06-18

    Outdoor thermal comfort is influenced by people's climate expectations, perceptions and adaptation capacity. Varied individual responses to comfortable or stressful thermal environments result in a deviation between actual outdoor thermal activity choices and those predicted by thermal comfort indices. This paper presents a passive activity observation (PAO) method for estimating contextual limits of outdoor thermal adaptation. The PAO method determines the thermal environments in which statistically meaningful changes occur in outdoor activity patterns, and it estimates thresholds of outdoor thermal neutrality and limits of thermal adaptation in public space based on activity observation and microclimate field measurement. Applications of the PAO method have been demonstrated in Adelaide, Melbourne and Sydney, where outdoor activities were analysed against outdoor thermal comfort indices between 2013 and 2014. Adjusted apparent temperature (aAT), adaptive predicted mean vote (aPMV), outdoor standard effective temperature (OUT_SET), physiological equivalent temperature (PET) and the universal thermal comfort index (UTCI) are calculated from the PAO data. Using the PAO method, the high threshold of outdoor thermal neutrality was observed between 24 °C for optional activities and 34 °C for necessary activities (UTCI scale). Meanwhile, the ultimate limit of thermal adaptation in uncontrolled public spaces is estimated to be between 28 °C for social activities and 48 °C for necessary activities. Normalised results indicate that city-wide high thresholds for outdoor thermal neutrality vary from 25 °C in Melbourne to 26 °C in Sydney and 30 °C in Adelaide. The PAO method is a relatively fast and localised method for measuring limits of outdoor thermal adaptation and effectively informs urban design and policy making in the context of climate change.

  16. Multiratio fusion change detection with adaptive thresholding

    NASA Astrophysics Data System (ADS)

    Hytla, Patrick C.; Balster, Eric J.; Vasquez, Juan R.; Neuroth, Robert M.

    2017-04-01

    A ratio-based change detection method known as multiratio fusion (MRF) is proposed and tested. The MRF framework builds on other change detection components proposed in this work: dual ratio (DR) and multiratio (MR). The DR method involves two ratios coupled with adaptive thresholds to maximize detected changes and minimize false alarms. The use of two ratios is shown to outperform the single ratio case when the means of the image pairs are not equal. MR change detection builds on the DR method by including negative imagery to produce four total ratios with adaptive thresholds. Inclusion of negative imagery is shown to improve detection sensitivity and to boost detection performance in certain target and background cases. MRF further expands this concept by fusing together the ratio outputs using a routine in which detections must be verified by two or more ratios to be classified as a true changed pixel. The proposed method is tested with synthetically generated test imagery and real datasets with results compared to other methods found in the literature. DR is shown to significantly outperform the standard single ratio method. MRF produces excellent change detection results that exhibit up to a 22% performance improvement over other methods from the literature at low false-alarm rates.
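
    The fusion logic can be sketched compactly: build the two direct ratios and the two negative-image ratios, threshold each, and declare change only where at least two ratios agree. A fixed percentile stands in below for the paper's adaptive thresholds, so this is an illustration of the voting structure only.

```python
import numpy as np

def mrf_change_mask(a, b, q=99.0, votes_needed=2, eps=1e-6):
    a = a.astype(float); b = b.astype(float)
    na, nb = a.max() - a, b.max() - b                  # negative images
    ratios = [a / (b + eps), b / (a + eps),            # dual ratio (DR)
              na / (nb + eps), nb / (na + eps)]        # negatives -> multiratio (MR)
    votes = sum((r > np.percentile(r, q)).astype(int) for r in ratios)
    return votes >= votes_needed                       # fusion: 2+ ratios must agree
```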

  17. QUEST+: A general multidimensional Bayesian adaptive psychometric method.

    PubMed

    Watson, Andrew B

    2017-03-01

    QUEST+ is a Bayesian adaptive psychometric testing method that allows an arbitrary number of stimulus dimensions, psychometric function parameters, and trial outcomes. It is a generalization and extension of the original QUEST procedure and incorporates many subsequent developments in the area of parametric adaptive testing. With a single procedure, it is possible to implement a wide variety of experimental designs, including conventional threshold measurement; measurement of psychometric function parameters, such as slope and lapse; estimation of the contrast sensitivity function; measurement of increment threshold functions; measurement of noise-masking functions; Thurstone scale estimation using pair comparisons; and categorical ratings on linear and circular stimulus dimensions. QUEST+ provides a general method to accelerate data collection in many areas of cognitive and perceptual science.

  18. Method for Assessing Contrast Performance under Lighting Conditions such as Entering a Tunnel on Sunny Day.

    PubMed

    Huang, Y; Menozzi, M

    2015-04-01

    Clinical assessment of dark adaptation is time consuming and requires specialised instrumentation such as a nyktometer. It is therefore not surprising that dark adaptation is rarely tested in practice. As for the case of testing the fitness of a driver, the demands on adaptation in daily driving tasks mostly depart from the settings in a nyktometer. In daily driving, adaptation is stressed by large and fast transitions of light levels, and the period of time relevant to safe driving starts right after a transition and ends several seconds later. In the nyktometer, dark adaptation is tested after completion of the adaptation process; results of a nyktometer test may therefore deliver little information about adaptation shortly after light transitions. In an attempt to develop a clinical test that both keeps measurement time short and offers test conditions comparable to driving, we conducted a preliminary study in which contrast sensitivity thresholds were recorded for light transitions as found in daily driving tasks and for various times after transition onsets. Contrast sensitivity performance is compared to dark adaptation performance as assessed by a nyktometer. Contrast sensitivity thresholds were recorded in 17 participants by means of a twin projection apparatus. The apparatus enabled the projection of an adapting field and of a Landolt ring, both with variable luminance. Five different stepwise transitions in levels of adapting luminance were tested, all occurring from bright to dark. The Landolt ring was flashed 100 or 500 ms after the transition had occurred, and participants were instructed to report its orientation. A Rodenstock Nyktometer, Plate 501, was used to record the dark adaptation threshold. Experimental data from the proposed test revealed a noticeably higher contrast detection threshold in the stronger transition from 14 000 to 8 cd/m2 than in the weaker transition from 2000 to 8 cd/m2. By raising the dark adaptation luminance level from 8 to 60 cd/m2 in the stronger transition case, the contrast detection threshold improved by a factor of four. Another main finding was that, for the adaptation process from strong glare stimuli to dark adaptation, a peak deterioration in contrast sensitivity occurred at the light adaptation level of 6000 cd/m2. Comparing the contrast performance assessed by the proposed test with that of the nyktometer test, there was no clear correlation between the two methods. Our suggested method of assessing dark adaptation performance proved practical in use and, since the patient does not have to spend a long time attaining complete dark adaptation, required a short measurement time. Our negative experience with the nyktometer agrees with reported experience in the literature.

  19. Method of Improved Fuzzy Contrast Combined Adaptive Threshold in NSCT for Medical Image Enhancement

    PubMed Central

    Yang, Jie; Kasabov, Nikola

    2017-01-01

    Noises and artifacts are introduced to medical images due to acquisition techniques and systems. This interference leads to low contrast and distortion in images, which not only impacts the effectiveness of the medical image but also seriously affects the clinical diagnoses. This paper proposes an algorithm for medical image enhancement based on the nonsubsampled contourlet transform (NSCT), which combines adaptive threshold and an improved fuzzy set. First, the original image is decomposed into the NSCT domain with a low-frequency subband and several high-frequency subbands. Then, a linear transformation is adopted for the coefficients of the low-frequency component. An adaptive threshold method is used for the removal of high-frequency image noise. Finally, the improved fuzzy set is used to enhance the global contrast and the Laplace operator is used to enhance the details of the medical images. Experiments and simulation results show that the proposed method is superior to existing methods of image noise removal, improves the contrast of the image significantly, and obtains a better visual effect. PMID:28744464

  20. The method of constant stimuli is inefficient

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Fitzhugh, Andrew

    1990-01-01

    Simpson (1988) has argued that the method of constant stimuli is as efficient as adaptive methods of threshold estimation and has supported this claim with simulations. It is shown that Simpson's simulations are not a reasonable model of the experimental process and that more plausible simulations confirm that adaptive methods are much more efficient than the method of constant stimuli.

  1. A Novel Zero Velocity Interval Detection Algorithm for Self-Contained Pedestrian Navigation System with Inertial Sensors

    PubMed Central

    Tian, Xiaochun; Chen, Jiabin; Han, Yongqiang; Shang, Jianyu; Li, Nan

    2016-01-01

    Zero velocity update (ZUPT) plays an important role in pedestrian navigation algorithms, with the premise that the zero velocity interval (ZVI) must be detected accurately and effectively. A novel adaptive ZVI detection algorithm based on a smoothed pseudo Wigner–Ville distribution to remove multiple frequencies intelligently (SPWVD-RMFI) is proposed in this paper. The novel algorithm adopts the SPWVD-RMFI method to extract the pedestrian gait frequency and to calculate the optimal ZVI detection threshold in real time by establishing functional relationships between the thresholds and the gait frequency; the thresholds are then adjusted adaptively with gait frequency, which improves the ZVI detection precision. To put the algorithm into practice, a ZVI detection experiment was carried out. The results show that, compared with the traditional fixed-threshold ZVI detection method, the adaptive ZVI detection algorithm effectively reduces the false and missed detection rates of ZVI, indicating that the novel algorithm has high detection precision and good robustness. Furthermore, pedestrian trajectory positioning experiments at different walking speeds were carried out to evaluate the influence of the novel algorithm on positioning precision. The results show that the ZVI detected by the adaptive ZVI detection algorithm achieves better performance in pedestrian trajectory calculation. PMID:27669266

  2. Microscopy mineral image enhancement based on improved adaptive threshold in nonsubsampled shearlet transform domain

    NASA Astrophysics Data System (ADS)

    Li, Liangliang; Si, Yujuan; Jia, Zhenhong

    2018-03-01

    In this paper, a novel microscopy mineral image enhancement method based on an adaptive threshold in the non-subsampled shearlet transform (NSST) domain is proposed. First, the image is decomposed into one low-frequency sub-band and several high-frequency sub-bands. Second, gamma correction is applied to the low-frequency sub-band coefficients, and the improved adaptive threshold is adopted to suppress the noise in the high-frequency sub-band coefficients. Third, the processed coefficients are reconstructed with the inverse NSST. Finally, an unsharp filter is used to enhance the details of the reconstructed image. Experimental results on various microscopy mineral images demonstrate that the proposed approach achieves a better enhancement effect in terms of both objective and subjective metrics.

  3. Spike-Threshold Adaptation Predicted by Membrane Potential Dynamics In Vivo

    PubMed Central

    Fontaine, Bertrand; Peña, José Luis; Brette, Romain

    2014-01-01

    Neurons encode information in sequences of spikes, which are triggered when their membrane potential crosses a threshold. In vivo, the spiking threshold displays large variability, suggesting that threshold dynamics have a profound influence on how the combined input of a neuron is encoded in its spiking. Threshold variability could be explained by adaptation to the membrane potential. However, it could also be the case that most threshold variability reflects noise and processes other than threshold adaptation. Here, we investigated threshold variation in auditory neuron responses recorded in vivo in barn owls. We found that the spike threshold is quantitatively predicted by a model in which the threshold adapts, tracking the membrane potential at a short timescale. As a result, in these neurons, slow voltage fluctuations do not contribute to spiking because they are filtered out by threshold adaptation. More importantly, these neurons can only respond to input spikes arriving together on a millisecond timescale. These results demonstrate that fast adaptation to the membrane potential captures spike threshold variability in vivo. PMID:24722397
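
    The model class is simple enough to sketch: a threshold with first-order dynamics that relaxes toward a target set by the instantaneous membrane potential, so slow depolarizations drag the threshold up with them. All parameter values below are illustrative, not fits from the paper.

```python
import numpy as np

def track_threshold(v, dt=0.1, tau_th=5.0, theta0=-50.0, alpha=0.3, v_ref=-65.0):
    """v: membrane potential trace (mV); returns threshold trace and spike indices."""
    theta = np.empty_like(v)
    th, spikes = theta0, []
    for i, vi in enumerate(v):
        target = theta0 + alpha * (vi - v_ref)   # target rises with depolarization
        th += dt / tau_th * (target - th)        # short-timescale tracking
        theta[i] = th
        if vi >= th:                             # slow fluctuations are filtered out;
            spikes.append(i)                     # coincident fast inputs still spike
    return theta, spikes
```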

  4. Threshold-adaptive canny operator based on cross-zero points

    NASA Astrophysics Data System (ADS)

    Liu, Boqi; Zhang, Xiuhua; Hong, Hanyu

    2018-03-01

    Canny edge detection[1] is a technique to extract useful structural information from different vision objects and dramatically reduce the amount of data to be processed. It has been widely applied in various computer vision systems. Two thresholds have to be set before edges are segregated from the background. Usually, two static values are chosen as the thresholds based on developer experience[2]. In this paper, a novel automatic thresholding method is proposed. The relation between the thresholds and cross-zero points is analyzed, and an interpolation function is deduced to determine the thresholds. Comprehensive experimental results demonstrate the effectiveness of the proposed method and its advantage for stable edge detection under changing illumination.

  5. A new edge detection algorithm based on Canny idea

    NASA Astrophysics Data System (ADS)

    Feng, Yingke; Zhang, Jinmin; Wang, Siming

    2017-10-01

    The traditional Canny algorithm has poor threshold self-adaptability and is sensitive to noise. To overcome these drawbacks, this paper proposes a new edge detection method based on the Canny algorithm. First, median filtering and filtering based on Euclidean distance are adopted to preprocess the image; second, the Frei-Chen algorithm is used to calculate the gradient amplitude; finally, the Otsu algorithm is applied to partitions of the gradient amplitude image to obtain local threshold values, the average of all computed thresholds is taken, half of that average is used as the high threshold, and half of the high threshold is used as the low threshold. Experimental results show that this new method can effectively suppress noise disturbance, keep edge information, and improve edge detection accuracy.
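
    The stated threshold rule is easy to mirror in code: Otsu thresholds are computed on partitions of the gradient-amplitude image and averaged, then halved for the high threshold and halved again for the low one. The block size used for partitioning is an assumption.

```python
import numpy as np
from skimage.filters import threshold_otsu

def dual_thresholds(grad, block=64):
    """grad: gradient-amplitude image; returns (high, low) Canny thresholds."""
    ts = []
    h, w = grad.shape
    for i in range(0, h, block):
        for j in range(0, w, block):
            patch = grad[i:i + block, j:j + block]
            if patch.size and np.ptp(patch) > 0:   # Otsu needs >1 gray level
                ts.append(threshold_otsu(patch))
    t_high = 0.5 * float(np.mean(ts))              # half the average of local thresholds
    return t_high, 0.5 * t_high                    # low = half the high threshold
```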

  6. Comparison of an adaptive local thresholding method on CBCT and µCT endodontic images

    NASA Astrophysics Data System (ADS)

    Michetti, Jérôme; Basarab, Adrian; Diemer, Franck; Kouame, Denis

    2018-01-01

    Root canal segmentation on cone beam computed tomography (CBCT) images is difficult because of the noise level, resolution limitations, beam hardening and dental morphological variations. An image processing framework, based on an adaptive local threshold method, was evaluated on CBCT images acquired from extracted teeth. A comparison with high-quality segmented endodontic images from micro computed tomography (µCT) images acquired from the same teeth was carried out using a dedicated registration process. Each segmented tooth was evaluated according to volume and to root canal sections through their area and Feret's diameter. The proposed method is shown to overcome the limitations of CBCT and to provide an automated and adaptive complete endodontic segmentation. Despite a slight underestimation (-4.08%), the local threshold segmentation method based on edge detection was shown to be fast and accurate. Strong correlations between CBCT and µCT segmentations were found for both the root canal area and diameter (0.98 and 0.88, respectively). Our findings suggest that combining CBCT imaging with this image processing framework may benefit experimental endodontology and teaching, and could represent a first development step towards the clinical use of endodontic CBCT segmentation during pulp cavity treatment.

  7. Edge detection based on adaptive threshold b-spline wavelet for optical sub-aperture measuring

    NASA Astrophysics Data System (ADS)

    Zhang, Shiqi; Hui, Mei; Liu, Ming; Zhao, Zhu; Dong, Liquan; Liu, Xiaohua; Zhao, Yuejin

    2015-08-01

    In research on optical synthetic aperture imaging systems, phase congruency is the main problem, and it is necessary to detect the sub-aperture phase. The edge of a sub-aperture system is more complex than that in a traditional optical imaging system, and given the steep slopes of large-aperture optical components, interference fringes may be quite dense in interferometric imaging. A steep phase gradient may cause a loss of phase information, so an efficient edge detection method is urgently needed. Wavelet analysis is a powerful tool widely used in image processing. Owing to its multi-scale properties, edge regions are detected with high precision at small scales, while noise is progressively reduced as the scale increases, giving the transform a certain noise-suppression effect. In addition, an adaptive threshold method, which sets different thresholds in different regions, can separate edge points from noise. First, the fringe pattern is obtained and a cubic b-spline wavelet is adopted as the smoothing function. After multi-scale wavelet decomposition of the whole image, we compute the local modulus maxima along gradient directions. These maxima, however, still contain noise, so the adaptive threshold method is used to select among them: points greater than the threshold value are taken as boundary points. Finally, erosion and dilation are applied to the resulting image to obtain continuous image boundaries.

  8. Extraction of Extended Small-Scale Objects in Digital Images

    NASA Astrophysics Data System (ADS)

    Volkov, V. Y.

    2015-05-01

    The problem of detecting and localizing extended small-scale objects of different shapes arises in observation systems that use SAR, infrared, lidar and television cameras. An intensive non-stationary background is the main difficulty for processing. Another challenge is low image quality: blobs and blurred boundaries; in addition, SAR images suffer from serious intrinsic speckle noise. The background statistics are not normal, with evident skewness and heavy tails in the probability density, so the background is hard to identify. The problem of extracting small-scale objects is solved here on the basis of directional filtering, adaptive thresholding and morphological analysis. A new kind of mask is used, open-ended on one side, so that the ends of line segments of unknown length can be extracted. An advanced method of dynamic adaptive threshold setting is investigated, based on the extraction of isolated fragments after thresholding. A hierarchy of isolated fragments in the binary image is proposed for analyzing the segmentation results; it includes small-scale objects of different shape, size and orientation. The method extracts isolated fragments in the binary image and counts the points in these fragments. The number of points in the extracted fragments, normalized to the total number of points for a given threshold, is used as the extraction effectiveness for those fragments. The new method for adaptive threshold setting and control maximizes this extraction effectiveness. It has optimality properties for object extraction in a normal noise field and shows effective results on real SAR images.
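
    The effectiveness measure suggests a compact sketch: for each candidate threshold, label the isolated fragments of the binary image, keep those of plausible object size, and score the threshold by the fraction of above-threshold points falling in kept fragments. The size bounds are assumptions for illustration.

```python
import numpy as np
from scipy import ndimage

def effective_threshold(img, cands, min_size=3, max_size=200):
    best_t, best_eff = cands[0], -1.0
    for t in cands:
        binary = img > t
        total = binary.sum()
        if total == 0:
            continue
        labels, _ = ndimage.label(binary)            # isolated fragments
        sizes = np.bincount(labels.ravel())[1:]      # points per fragment
        keep = (sizes >= min_size) & (sizes <= max_size)
        eff = sizes[keep].sum() / total              # normalized effectiveness
        if eff > best_eff:
            best_t, best_eff = t, eff
    return best_t
```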

  9. Uncertainty Estimates of Psychoacoustic Thresholds Obtained from Group Tests

    NASA Technical Reports Server (NTRS)

    Rathsam, Jonathan; Christian, Andrew

    2016-01-01

    Adaptive psychoacoustic test methods, in which the next signal level depends on the response to the previous signal, are the most efficient for determining psychoacoustic thresholds of individual subjects. In many tests conducted in the NASA psychoacoustic labs, the goal is to determine thresholds representative of the general population. To do this economically, non-adaptive testing methods are used in which three or four subjects are tested at the same time with predetermined signal levels. This approach requires us to identify techniques for assessing the uncertainty in the resulting group-average psychoacoustic thresholds. In this presentation we examine the Delta Method of frequentist statistics, the Generalized Linear Model (GLM), the Nonparametric Bootstrap (another frequentist method), and Markov Chain Monte Carlo Posterior Estimation (a Bayesian approach). Each technique is exercised on a manufactured, theoretical dataset and then on datasets from two psychoacoustics facilities at NASA. The Delta Method is the simplest to implement and accurate for the cases studied. The GLM is found to be the least robust, and the Bootstrap takes the longest to calculate. The Bayesian Posterior Estimate is the most versatile technique examined because it allows the inclusion of prior information.

  10. Positive-negative corresponding normalized ghost imaging based on an adaptive threshold

    NASA Astrophysics Data System (ADS)

    Li, G. L.; Zhao, Y.; Yang, Z. H.; Liu, X.

    2016-11-01

    Ghost imaging (GI) technology has attracted increasing attention as a new imaging technique in recent years. However, the signal-to-noise ratio (SNR) of GI with pseudo-thermal light needs to be improved before it meets engineering application demands. We therefore propose a new scheme called positive-negative correspondence normalized GI based on an adaptive threshold (PCNGI-AT) to achieve good performance with a smaller amount of data. In this work, we exploit the advantages of both normalized GI (NGI) and positive-negative correspondence GI (P-NCGI). The correctness and feasibility of the scheme were proved in theory before we designed an adaptive threshold selection method, in which the parameter of the object signal selection condition is replaced by the normalizing value. The simulation and experimental results reveal that the SNR of the proposed scheme is better than that of time-correspondence differential GI (TCDGI), while avoiding calculation of the correlation matrix and reducing the amount of data used. The proposed method will make GI far more practical in engineering applications.

  11. Outlier detection for particle image velocimetry data using a locally estimated noise variance

    NASA Astrophysics Data System (ADS)

    Lee, Yong; Yang, Hua; Yin, ZhouPing

    2017-03-01

    This work describes an adaptive, spatially variable threshold outlier detection algorithm for raw gridded particle image velocimetry data using a locally estimated noise variance. The method is an iterative procedure in which each iteration is composed of a reference vector field reconstruction step and an outlier detection step. We construct the reference vector field using a weighted adaptive smoothing method (Garcia 2010 Comput. Stat. Data Anal. 54 1167-78), and the weights are determined in the outlier detection step using a modified outlier detector (Ma et al 2014 IEEE Trans. Image Process. 23 1706-21). A hard decision on the final weights of the iteration produces the outlier labels of the field. The technical contribution is that the spatially variable threshold motivation is embedded in the modified outlier detector with a locally estimated noise variance in an iterative framework for the first time. It turns out that a spatially variable threshold is preferable to a single spatially constant threshold in complicated flows such as vortex flows or turbulent flows. Synthetic cellular vortical flows with simulated scattered or clustered outliers are adopted to evaluate the performance of the proposed method in comparison with popular validation approaches. The method also proves beneficial in a real PIV measurement of turbulent flow. The experimental results demonstrate that the proposed method yields competitive performance in terms of outlier under-detection and over-detection counts. In addition, the outlier detection method is computationally efficient and adaptive, requires no user-defined parameters, and corresponding implementations are provided in the supplementary materials.

  12. Differentially Private Histogram Publication For Dynamic Datasets: An Adaptive Sampling Approach

    PubMed Central

    Li, Haoran; Jiang, Xiaoqian; Xiong, Li; Liu, Jinfei

    2016-01-01

    Differential privacy has recently become a de facto standard for private statistical data release. Many algorithms have been proposed to generate differentially private histograms or synthetic data. However, most of them focus on "one-time" release of a static dataset and do not adequately address the increasing need of releasing series of dynamic datasets in real time. A straightforward application of existing histogram methods on each snapshot of such dynamic datasets will incur high accumulated error due to the composability of differential privacy and correlations or overlapping users between the snapshots. In this paper, we address the problem of releasing series of dynamic datasets in real time with differential privacy, using a novel adaptive distance-based sampling approach. Our first method, DSFT, uses a fixed distance threshold and releases a differentially private histogram only when the current snapshot is sufficiently different from the previous one, i.e., with a distance greater than a predefined threshold. Our second method, DSAT, further improves on DSFT and uses a dynamic threshold adaptively adjusted by a feedback control mechanism to capture the data dynamics. Extensive experiments on real and synthetic datasets demonstrate that our approach achieves better utility than baseline methods and existing state-of-the-art methods. PMID:26973795
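
    A stripped-down sketch of the DSAT loop is shown below: release a Laplace-perturbed histogram only when the mean distance from the last release exceeds the threshold, and nudge the threshold toward a target release rate. Real DSFT/DSAT also spends privacy budget on the distance test itself; that accounting, the unit sensitivity, and the feedback gain used here are simplifications and assumptions.

```python
import numpy as np

def dsat_stream(snapshots, eps=0.1, t0=0.05, gain=0.2, target_rate=0.3, seed=0):
    """snapshots: iterable of (assumed normalized) histograms, one per time step."""
    rng = np.random.default_rng(seed)
    thr, last_pub, out = t0, None, []
    for hist in snapshots:
        hist = np.asarray(hist, dtype=float)
        release = last_pub is None or np.abs(hist - last_pub).mean() > thr
        if release:
            # Laplace mechanism, sensitivity assumed 1 per release
            last_pub = hist + rng.laplace(0.0, 1.0 / eps, hist.shape)
        out.append(last_pub)
        # feedback control: adapt the threshold toward the desired release rate
        thr *= 1.0 + gain * ((1.0 if release else 0.0) - target_rate)
    return out
```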

  13. Adaptive Spot Detection With Optimal Scale Selection in Fluorescence Microscopy Images.

    PubMed

    Basset, Antoine; Boulanger, Jérôme; Salamero, Jean; Bouthemy, Patrick; Kervrann, Charles

    2015-11-01

    Accurately detecting subcellular particles in fluorescence microscopy is of primary interest for further quantitative analysis such as counting, tracking, or classification. Our primary goal is to segment vesicles likely to share nearly the same size in fluorescence microscopy images. Our method termed adaptive thresholding of Laplacian of Gaussian (LoG) images with autoselected scale (ATLAS) automatically selects the optimal scale corresponding to the most frequent spot size in the image. Four criteria are proposed and compared to determine the optimal scale in a scale-space framework. Then, the segmentation stage amounts to thresholding the LoG of the intensity image. In contrast to other methods, the threshold is locally adapted given a probability of false alarm (PFA) specified by the user for the whole set of images to be processed. The local threshold is automatically derived from the PFA value and local image statistics estimated in a window whose size is not a critical parameter. We also propose a new data set for benchmarking, consisting of six collections of one hundred images each, which exploits backgrounds extracted from real microscopy images. We have carried out an extensive comparative evaluation on several data sets with ground-truth, which demonstrates that ATLAS outperforms existing methods. ATLAS does not need any fine parameter tuning and requires very low computation time. Convincing results are also reported on real total internal reflection fluorescence microscopy images.
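
    The detection rule can be sketched under a Gaussian-background assumption on the LoG response (the assumption is ours, not stated in the record): the local threshold is the windowed mean plus a PFA-derived multiple of the windowed standard deviation, so the PFA is the only user-facing parameter. Scale autoselection is omitted.

```python
import numpy as np
from scipy import ndimage
from scipy.stats import norm

def atlas_like_detect(img, sigma, pfa=1e-3, win=31):
    resp = -ndimage.gaussian_laplace(img.astype(float), sigma=sigma)  # bright spots > 0
    mu = ndimage.uniform_filter(resp, win)                 # local mean
    m2 = ndimage.uniform_filter(resp ** 2, win)
    sd = np.sqrt(np.maximum(m2 - mu ** 2, 1e-12))          # local std
    z = norm.isf(pfa)                                      # quantile for the given PFA
    return resp > mu + z * sd                              # locally adapted threshold
```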

  14. Characterization of Rod Function Phenotypes Across a Range of Age-Related Macular Degeneration Severities and Subretinal Drusenoid Deposits

    PubMed Central

    Flynn, Oliver J.; Cukras, Catherine A.; Jeffrey, Brett G.

    2018-01-01

    Purpose: To examine spatial changes in rod-mediated function in relationship to local structural changes across the central retina in eyes with a spectrum of age-related macular degeneration (AMD) disease severity. Methods: Participants were categorized into five AMD severity groups based on fundus features. Scotopic thresholds were measured at 14 loci spanning ±18° along the vertical meridian from one eye of each of 42 participants (mean = 71.7 ± 9.9 years). Following a 30% bleach, dark adaptation was measured at eight loci (±12°). Rod intercept time (RIT) was defined from the time to detect a −3.1 log cd/m2 stimulus. RITslope was defined from the linear fit of RIT with decreasing retinal eccentricity. The presence of subretinal drusenoid deposits (SDD), ellipsoid (EZ) band disruption, and drusen at the test loci was evaluated using optical coherence tomography. Results: Scotopic thresholds indicated greater rod function loss in the macula, which correlated with increasing AMD group severity. RITslope, which captures the spatial change in the rate of dark adaptation, increased with AMD severity (P < 0.0001). Three rod function phenotypes emerged: RF1, normal rod function; RF2, normal scotopic thresholds but slowed dark adaptation; and RF3, elevated scotopic thresholds with slowed dark adaptation. Dark adaptation was slowed at all loci with SDD or EZ band disruption, and at 32% of loci with no local structural changes. Conclusions: Three rod function phenotypes were defined from combined measurement of scotopic threshold and dark adaptation. Spatial changes in dark adaptation across the macula were captured with RITslope, which may be a useful outcome measure for functional studies of AMD. PMID:29847647

  15. Psychophysical Measurement of Rod and Cone Thresholds in Stargardt Disease with Full-Field Stimuli

    PubMed Central

    Collison, Frederick T.; Fishman, Gerald A.; McAnany, J. Jason; Zernant, Jana; Allikmets, Rando

    2014-01-01

    Purpose: To investigate psychophysical thresholds in Stargardt disease with the full-field stimulus test (FST). Methods: Visual acuity (VA), spectral-domain optical coherence tomography (SD-OCT), full-field electroretinogram (ERG), and FST measurements were made in one eye of 24 patients with Stargardt disease. Dark-adapted rod FST thresholds were measured with short-wavelength stimuli, and cone FST thresholds were obtained from the cone plateau phase of dark adaptation using long-wavelength stimuli. Correlation coefficients were calculated for FST thresholds versus macular thickness, VA and ERG amplitudes. Results: Stargardt patient FST cone thresholds correlated significantly with VA, macular thickness, and ERG cone-response amplitudes (all P<0.01). The patients' FST rod thresholds correlated with ERG rod-response amplitudes (P<0.01), but not macular thickness (P=0.05). All Stargardt disease patients with flecks confined to the macula and most of the patients with flecks extending outside of the macula had normal FST thresholds. All patients with extramacular atrophic changes had elevated FST cone thresholds and most had elevated FST rod thresholds. Conclusion: FST rod and cone threshold elevation in Stargardt disease patients correlated well with measures of structure and function, as well as ophthalmoscopic retinal appearance. FST appears to be a useful tool for assessing rod and cone function in Stargardt disease. PMID:24695063

  16. ADAPTIVE THRESHOLD LOGIC.

    DTIC Science & Technology

    The design and construction of a 16 variable threshold logic gate with adaptable weights is described. The operating characteristics of tape wound...and sizes as well as for the 16 input adaptive threshold logic gate. (Author)

  17. Graded-threshold parametric response maps: towards a strategy for adaptive dose painting

    NASA Astrophysics Data System (ADS)

    Lausch, A.; Jensen, N.; Chen, J.; Lee, T. Y.; Lock, M.; Wong, E.

    2014-03-01

    Purpose: To modify the single-threshold parametric response map (ST-PRM) method for predicting treatment outcomes in order to facilitate its use for guidance of adaptive dose painting in intensity-modulated radiotherapy. Methods: Multiple graded thresholds were used to extend the ST-PRM method (Nat. Med. 2009;15(5):572-576) such that the full functional change distribution within tumours could be represented with respect to multiple confidence interval estimates for functional changes in similar healthy tissue. The ST-PRM and graded-threshold PRM (GT-PRM) methods were applied to functional imaging scans of 5 patients treated for hepatocellular carcinoma. Pre and post-radiotherapy arterial blood flow maps (ABF) were generated from CT-perfusion scans of each patient. ABF maps were rigidly registered based on aligning tumour centres of mass. ST-PRM and GT-PRM analyses were then performed on overlapping tumour regions within the registered ABF maps. Main findings: The ST-PRMs contained many disconnected clusters of voxels classified as having a significant change in function. While this may be useful to predict treatment response, it may pose challenges for identifying boost volumes or for informing dose-painting by numbers strategies. The GT-PRMs included all of the same information as ST-PRMs but also visualized the full tumour functional change distribution. Heterogeneous clusters in the ST-PRMs often became more connected in the GT-PRMs by voxels with similar functional changes. Conclusions: GT-PRMs provided additional information which helped to visualize relationships between significant functional changes identified by ST-PRMs. This may enhance ST-PRM utility for guiding adaptive dose painting.

  18. Predicting missing values in a home care database using an adaptive uncertainty rule method.

    PubMed

    Konias, S; Gogou, G; Bamidis, P D; Vlahavas, I; Maglaveras, N

    2005-01-01

    Contemporary literature illustrates an abundance of adaptive algorithms for mining association rules. However, most of this literature is unable to deal with the peculiarities, such as missing values and dynamic data creation, that are frequently encountered in fields like medicine. This paper proposes an uncertainty rule method that uses an adaptive threshold for filling missing values in newly added records. A new approach for mining uncertainty rules and filling missing values is proposed, which is particularly suitable for dynamic databases, like the ones used in home care systems. In this study, a new data mining method named FiMV (Filling Missing Values) is illustrated, based on the mined uncertainty rules. Uncertainty rules have quite a similar structure to association rules and are extracted by an algorithm proposed in previous work, namely AURG (Adaptive Uncertainty Rule Generation). The main target was to implement an appropriate method for recovering missing values in a dynamic database, where new records are continuously added, without needing to specify any kind of threshold beforehand. The method was applied to a home care monitoring system database. Multiple missing values were randomly introduced for each record's attributes (at rates of 5-20% in 5% increments) in the initial dataset. FiMV demonstrated 100% completion rates with over 90% success in each case, while usual approaches, in which all records with missing values are ignored or thresholds are required, experienced significantly reduced completion and success rates. It is concluded that the proposed method is appropriate for the data-cleaning step of the knowledge discovery process in databases. This step bears much significance for the output efficiency of any data mining technique and can improve the quality of the mined information.

  19. An Adaptive Deghosting Method in Neural Network-Based Infrared Detectors Nonuniformity Correction

    PubMed Central

    Li, Yiyang; Jin, Weiqi; Zhu, Jin; Zhang, Xu; Li, Shuo

    2018-01-01

    The problems of the neural network-based nonuniformity correction algorithm for infrared focal plane arrays mainly concern slow convergence speed and ghosting artifacts. In general, the more stringent the inhibition of ghosting, the slower the convergence speed. The factors that affect these two problems are the estimated desired image and the learning rate. In this paper, we propose a learning rate rule that combines adaptive threshold edge detection and a temporal gate. Through the noise estimation algorithm, the adaptive spatial threshold is related to the residual nonuniformity noise in the corrected image. The proposed learning rate is used to effectively and stably suppress ghosting artifacts without slowing down the convergence speed. The performance of the proposed technique was thoroughly studied with infrared image sequences with both simulated nonuniformity and real nonuniformity. The results show that the deghosting performance of the proposed method is superior to that of other neural network-based nonuniformity correction algorithms and that the convergence speed is equivalent to the tested deghosting methods. PMID:29342857

  20. An Adaptive Deghosting Method in Neural Network-Based Infrared Detectors Nonuniformity Correction.

    PubMed

    Li, Yiyang; Jin, Weiqi; Zhu, Jin; Zhang, Xu; Li, Shuo

    2018-01-13

    The problems of the neural network-based nonuniformity correction algorithm for infrared focal plane arrays mainly concern slow convergence speed and ghosting artifacts. In general, the more stringent the inhibition of ghosting, the slower the convergence speed. The factors that affect these two problems are the estimated desired image and the learning rate. In this paper, we propose a learning rate rule that combines adaptive threshold edge detection and a temporal gate. Through the noise estimation algorithm, the adaptive spatial threshold is related to the residual nonuniformity noise in the corrected image. The proposed learning rate is used to effectively and stably suppress ghosting artifacts without slowing down the convergence speed. The performance of the proposed technique was thoroughly studied with infrared image sequences with both simulated nonuniformity and real nonuniformity. The results show that the deghosting performance of the proposed method is superior to that of other neural network-based nonuniformity correction algorithms and that the convergence speed is equivalent to the tested deghosting methods.

  1. Adaptive Spike Threshold Enables Robust and Temporally Precise Neuronal Encoding

    PubMed Central

    Resnik, Andrey; Celikel, Tansu; Englitz, Bernhard

    2016-01-01

    Neural processing rests on the intracellular transformation of information as synaptic inputs are translated into action potentials. This transformation is governed by the spike threshold, which depends on the history of the membrane potential on many temporal scales. While the adaptation of the threshold after spiking activity has been addressed before both theoretically and experimentally, it has only recently been demonstrated that the subthreshold membrane state also influences the effective spike threshold. The consequences for neural computation are not yet well understood. We address this question here using neural simulations and whole-cell intracellular recordings in combination with information-theoretic analysis. We show that an adaptive spike threshold leads to better stimulus discrimination for tight input correlations than would be achieved otherwise, independent of whether the stimulus is encoded in the rate or pattern of action potentials. The time scales of input selectivity are jointly governed by membrane and threshold dynamics. Encoding information using adaptive thresholds further ensures robust information transmission across cortical states, i.e., decoding from different states is less state-dependent in the adaptive-threshold case, if the decoding is performed in reference to the timing of the population response. Results from in vitro neural recordings were consistent with simulations from adaptive threshold neurons. In summary, the adaptive spike threshold reduces information loss during intracellular information transfer, improves stimulus discriminability and ensures robust decoding across membrane states in a regime of highly correlated inputs, similar to those seen in sensory nuclei during the encoding of sensory information. PMID:27304526

  2. Adaptive Spike Threshold Enables Robust and Temporally Precise Neuronal Encoding.

    PubMed

    Huang, Chao; Resnik, Andrey; Celikel, Tansu; Englitz, Bernhard

    2016-06-01

    Neural processing rests on the intracellular transformation of information as synaptic inputs are translated into action potentials. This transformation is governed by the spike threshold, which depends on the history of the membrane potential on many temporal scales. While the adaptation of the threshold after spiking activity has been addressed before both theoretically and experimentally, it has only recently been demonstrated that the subthreshold membrane state also influences the effective spike threshold. The consequences for neural computation are not yet well understood. We address this question here using neural simulations and whole-cell intracellular recordings in combination with information-theoretic analysis. We show that an adaptive spike threshold leads to better stimulus discrimination for tight input correlations than would be achieved otherwise, independent of whether the stimulus is encoded in the rate or pattern of action potentials. The time scales of input selectivity are jointly governed by membrane and threshold dynamics. Encoding information using adaptive thresholds further ensures robust information transmission across cortical states, i.e., decoding from different states is less state-dependent in the adaptive-threshold case, if the decoding is performed in reference to the timing of the population response. Results from in vitro neural recordings were consistent with simulations from adaptive threshold neurons. In summary, the adaptive spike threshold reduces information loss during intracellular information transfer, improves stimulus discriminability and ensures robust decoding across membrane states in a regime of highly correlated inputs, similar to those seen in sensory nuclei during the encoding of sensory information.

  3. A Unified Nonlinear Adaptive Approach for Detection and Isolation of Engine Faults

    NASA Technical Reports Server (NTRS)

    Tang, Liang; DeCastro, Jonathan A.; Zhang, Xiaodong; Farfan-Ramos, Luis; Simon, Donald L.

    2010-01-01

    A challenging problem in aircraft engine health management (EHM) system development is to detect and isolate faults in system components (i.e., compressor, turbine), actuators, and sensors. Existing nonlinear EHM methods often deal with component faults, actuator faults, and sensor faults separately, which may potentially lead to incorrect diagnostic decisions and unnecessary maintenance. Therefore, it would be ideal to address sensor faults, actuator faults, and component faults under one unified framework. This paper presents a systematic and unified nonlinear adaptive framework for detecting and isolating sensor faults, actuator faults, and component faults for aircraft engines. The fault detection and isolation (FDI) architecture consists of a parallel bank of nonlinear adaptive estimators. Adaptive thresholds are appropriately designed such that, in the presence of a particular fault, all components of the residual generated by the adaptive estimator corresponding to the actual fault type remain below their thresholds. If the faults are sufficiently different, then at least one component of the residual generated by each remaining adaptive estimator should exceed its threshold. Therefore, based on the specific response of the residuals, sensor faults, actuator faults, and component faults can be isolated. The effectiveness of the approach was evaluated using the NASA C-MAPSS turbofan engine model, and simulation results are presented.

  4. Contributions of adaptation currents to dynamic spike threshold on slow timescales: Biophysical insights from conductance-based models

    NASA Astrophysics Data System (ADS)

    Yi, Guosheng; Wang, Jiang; Wei, Xile; Deng, Bin; Li, Huiyan; Che, Yanqiu

    2017-06-01

    Spike-frequency adaptation (SFA) mediated by various adaptation currents, such as voltage-gated K+ current (IM), Ca2+-gated K+ current (IAHP), or Na+-activated K+ current (IKNa), exists in many types of neurons and has been shown to effectively shape their information transmission properties on slow timescales. Here we use conductance-based models to investigate how the activation of these three adaptation currents regulates the threshold voltage for action potential (AP) initiation during the course of SFA. It is observed that the spike threshold becomes depolarized and the rate of membrane depolarization (dV/dt) preceding an AP is reduced as the adaptation currents reduce the firing rate. This indicates that the presence of inhibitory adaptation currents enables the neuron to generate a dynamic threshold inversely correlated with the preceding dV/dt on slower timescales than the fast dynamics of AP generation. By analyzing the interactions of ionic currents at subthreshold potentials, we find that the activation of adaptation currents increases the outward level of the net membrane current prior to AP initiation, which antagonizes the inward Na+ current and results in a depolarized threshold and lower dV/dt from one AP to the next. Our simulations demonstrate that the threshold dynamics on slow timescales are a secondary effect caused by the activation of adaptation currents. These findings provide a biophysical interpretation of the relationship between adaptation currents and spike threshold.

  5. SU-C-9A-01: Parameter Optimization in Adaptive Region-Growing for Tumor Segmentation in PET

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tan, S; Huazhong University of Science and Technology, Wuhan, Hubei; Xue, M

    Purpose: To design a reliable method to determine the optimal parameter in the adaptive region-growing (ARG) algorithm for tumor segmentation in PET. Methods: The ARG uses an adaptive similarity criterion m - f*sigma <= I_PET <= m + f*sigma, so that a neighboring voxel is appended to the region based on its similarity to the current region. When increasing the relaxing factor f (f >= 0), the resulting volumes monotonically increase, with a sharp increase when the region just grows into the background. The optimal f that separates the tumor from the background is defined as the first point with the local maximum curvature on an error function fitted to the f-volume curve. The ARG was tested on a tumor segmentation benchmark that includes ten lung cancer patients with 3D pathologic tumor volume as ground truth. For comparison, the widely used 42% and 50% SUVmax thresholding, Otsu optimal thresholding, Active Contours (AC), Geodesic Active Contours (GAC), and Graph Cuts (GC) methods were tested. The Dice similarity index (DSI), volume error (VE), and maximum axis length error (MALE) were calculated to evaluate the segmentation accuracy. Results: The ARG provided the highest accuracy among all tested methods. Specifically, the ARG has an average DSI, VE, and MALE of 0.71, 0.29, and 0.16, respectively, better than the absolute 42% thresholding (DSI=0.67, VE=0.57, and MALE=0.23), the relative 42% thresholding (DSI=0.62, VE=0.41, and MALE=0.23), the absolute 50% thresholding (DSI=0.62, VE=0.48, and MALE=0.21), the relative 50% thresholding (DSI=0.48, VE=0.54, and MALE=0.26), Otsu (DSI=0.44, VE=0.63, and MALE=0.30), AC (DSI=0.46, VE=0.85, and MALE=0.47), GAC (DSI=0.40, VE=0.85, and MALE=0.46) and GC (DSI=0.66, VE=0.54, and MALE=0.21) methods. Conclusions: The results suggest that the proposed method reliably identified the optimal relaxing factor in ARG for tumor segmentation in PET. This work was supported in part by National Cancer Institute Grant R01 CA172638. The dataset was provided by AAPM TG211.
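
    A sketch of the parameter-selection step, under the assumption that region volumes have already been measured on a grid of relaxing factors f (the region growing itself is omitted): an error-function curve is fitted to the f-volume data and the first local maximum of curvature is returned as the optimal f. All names are illustrative.

        import numpy as np
        from scipy.optimize import curve_fit
        from scipy.special import erf

        def optimal_relaxing_factor(f_values, volumes):
            # Fit an error-function model to the f-volume curve, then return
            # the first local maximum of curvature as the optimal f.
            f_values = np.asarray(f_values, float)
            volumes = np.asarray(volumes, float)

            def model(f, a, b, c, d):
                return a * (1.0 + erf((f - b) / c)) + d

            p0 = (np.ptp(volumes) / 2.0, np.median(f_values), 0.5, volumes.min())
            popt, _ = curve_fit(model, f_values, volumes, p0=p0, maxfev=10000)

            f_dense = np.linspace(f_values.min(), f_values.max(), 2000)
            y = model(f_dense, *popt)
            dy = np.gradient(y, f_dense)
            d2y = np.gradient(dy, f_dense)
            kappa = np.abs(d2y) / (1.0 + dy ** 2) ** 1.5   # curvature

            peaks = np.where((kappa[1:-1] > kappa[:-2]) &
                             (kappa[1:-1] > kappa[2:]))[0] + 1
            return f_dense[peaks[0]] if peaks.size else f_dense[np.argmax(kappa)]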

  6. Blurred Star Image Processing for Star Sensors under Dynamic Conditions

    PubMed Central

    Zhang, Weina; Quan, Wei; Guo, Lei

    2012-01-01

    The precision of star point location is significant for identifying the star map and acquiring the aircraft attitude with star sensors. Under dynamic conditions, star images are not only corrupted by various noises but also blurred due to the angular rate of the star sensor. To handle different angular rates under dynamic conditions, a novel method is proposed in this article, which includes a denoising method based on an adaptive wavelet threshold and a restoration method for large angular rates. The adaptive threshold is adopted for denoising the star image when the angular rate is in the dynamic range. Then, the mathematical model of motion blur is deduced so as to restore the star map blurred by a large angular rate. Simulation results validate the effectiveness of the proposed method, which is suitable for blurred star image processing and practical for attitude determination of satellites under dynamic conditions. PMID:22778666

  7. Evaluation and comparison of 50 Hz current threshold of electrocutaneous sensations using different methods

    PubMed Central

    Lindenblatt, G.; Silny, J.

    2006-01-01

    Leakage currents, tiny currents flowing from an everyday-life appliance through the body to the ground, can cause a non-adequate perception (called an electrocutaneous sensation, ECS) or even pain, and should be avoided. Safety standards for the low-frequency range are based on experimental results for the current thresholds of electrocutaneous sensations, which however show a wide range between about 50 μA (rms) and 1000 μA (rms). In order to explain these differences, the perception threshold was measured repeatedly in experiments with test persons under an identical experimental setup, but by means of different methods (measuring strategies), namely: direct adjustment, the classical threshold as the amperage of 50% perception probability, and the confidence rating procedure of signal detection theory. The current was injected using a 1 cm² electrode at the highly touch-sensitive part of the index fingertip. These investigations show for the first time that the threshold of electrocutaneous sensations is influenced both by adaptation to the non-adequate stimulus and by individual, emotional factors. Therefore, classical methods, on which the majority of the safety investigations are based, cannot be used to determine a leakage current threshold. The confidence rating procedure of modern signal detection theory yields a value of 179.5 μA (rms) at 50 Hz mains frequency as the lower end of the 95% confidence range, considering the variance in the investigated group. This value is expected to be free of adaptation influences; it is distinctly lower than the European limits and supports the stricter regulations of Canada and the USA. PMID:17111461

  8. Incorporating adaptive responses into future projections of coral bleaching.

    PubMed

    Logan, Cheryl A; Dunne, John P; Eakin, C Mark; Donner, Simon D

    2014-01-01

    Climate warming threatens to increase mass coral bleaching events, and several studies have projected the demise of tropical coral reefs this century. However, recent evidence indicates corals may be able to respond to thermal stress through adaptive processes (e.g., genetic adaptation, acclimatization, and symbiont shuffling). How these mechanisms might influence warming-induced bleaching remains largely unknown. This study compared how different adaptive processes could affect coral bleaching projections. We used the latest bias-corrected global sea surface temperature (SST) output from the NOAA/GFDL Earth System Model 2 (ESM2M) for the preindustrial period through 2100 to project coral bleaching trajectories. Initial results showed that, in the absence of adaptive processes, application of a preindustrial climatology to the NOAA Coral Reef Watch bleaching prediction method overpredicts the present-day bleaching frequency. This suggests that corals may have already responded adaptively to some warming over the industrial period. We then modified the prediction method so that the bleaching threshold either permanently increased in response to thermal history (e.g., simulating directional genetic selection) or temporarily increased for 2-10 years in response to a bleaching event (e.g., simulating symbiont shuffling). A bleaching threshold that changes relative to the preceding 60 years of thermal history reduced the frequency of mass bleaching events by 20-80% compared with the 'no adaptive response' prediction model by 2100, depending on the emissions scenario. When both types of adaptive responses were applied, up to 14% more reef cells avoided high-frequency bleaching by 2100. However, temporary increases in bleaching thresholds alone only delayed the occurrence of high-frequency bleaching by ca. 10 years in all but the lowest emissions scenario. Future research should test the rate and limit of different adaptive responses for coral species across latitudes and ocean basins to determine if and how much corals can respond to increasing thermal stress.

  9. Adaptive thresholding and dynamic windowing method for automatic centroid detection of digital Shack-Hartmann wavefront sensor.

    PubMed

    Yin, Xiaoming; Li, Xiang; Zhao, Liping; Fang, Zhongping

    2009-11-10

    A Shack-Hartmann wavefront sensor (SHWS) splits the incident wavefront into many subsections and transfers the distorted wavefront detection into a centroid measurement. The accuracy of the centroid measurement determines the accuracy of the SHWS. Many methods have been presented to improve the accuracy of the wavefront centroid measurement. However, most of these methods are discussed from the point of view of optics, based on the assumption that the spot intensity of the SHWS has a Gaussian distribution, which is not applicable to the digital SHWS. In this paper, we present a centroid measurement algorithm based on adaptive thresholding and a dynamic windowing method, utilizing image processing techniques for practical application of the digital SHWS in surface profile measurement. The method can detect the centroid of each focal spot precisely and robustly by eliminating the influence of various noises, such as diffraction of the digital SHWS, unevenness and instability of the light source, as well as deviation between the centroid of the focal spot and the center of the detection area. The experimental results demonstrate that the algorithm has better precision, repeatability, and stability compared with other commonly used centroid methods, such as the statistical averaging, thresholding, and windowing algorithms.
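
    One plausible reading of the adaptive thresholding and dynamic windowing idea is the iterative sketch below: each spot window is thresholded from its own local statistics, the weighted centroid is computed, and the window re-centres on the running estimate. The threshold rule (mean + k*std) and all parameter names are our assumptions, not the published algorithm.

        import numpy as np

        def spot_centroid(img, center, half=8, k=2.0, n_iter=3):
            cy, cx = center
            for _ in range(n_iter):
                y0 = max(int(round(cy)) - half, 0)
                x0 = max(int(round(cx)) - half, 0)
                win = img[y0:y0 + 2 * half + 1, x0:x0 + 2 * half + 1].astype(float)
                t = win.mean() + k * win.std()      # adaptive threshold per window
                w = np.clip(win - t, 0.0, None)     # suppress background and noise
                if w.sum() == 0.0:                  # spot too dim: keep last estimate
                    break
                yy, xx = np.indices(w.shape)
                cy = y0 + (w * yy).sum() / w.sum()  # weighted centroid, image coords
                cx = x0 + (w * xx).sum() / w.sum()
            return cy, cx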

  10. Speech perception at positive signal-to-noise ratios using adaptive adjustment of time compression.

    PubMed

    Schlueter, Anne; Brand, Thomas; Lemke, Ulrike; Nitzschner, Stefan; Kollmeier, Birger; Holube, Inga

    2015-11-01

    Positive signal-to-noise ratios (SNRs) characterize listening situations most relevant for hearing-impaired listeners in daily life and should therefore be considered when evaluating hearing aid algorithms. For this, a speech-in-noise test was developed and evaluated, in which the background noise is presented at fixed positive SNRs and the speech rate (i.e., the time compression of the speech material) is adaptively adjusted. In total, 29 younger and 12 older normal-hearing, as well as 24 older hearing-impaired listeners took part in repeated measurements. Younger normal-hearing and older hearing-impaired listeners conducted one of two adaptive methods which differed in adaptive procedure and step size. Analysis of the measurements with regard to list length and estimation strategy for thresholds resulted in a practical method measuring the time compression for 50% recognition. This method uses time-compression adjustment and step sizes according to Versfeld and Dreschler [(2002). J. Acoust. Soc. Am. 111, 401-408], with sentence scoring, lists of 30 sentences, and a maximum likelihood method for threshold estimation. Evaluation of the procedure showed that older participants obtained higher test-retest reliability compared to younger participants. Depending on the group of listeners, one or two lists are required for training prior to data collection.

  11. Interocular transfer of spatial adaptation is weak at low spatial frequencies.

    PubMed

    Baker, Daniel H; Meese, Tim S

    2012-06-15

    Adapting one eye to a high contrast grating reduces sensitivity to similar target gratings shown to the same eye, and also to those shown to the opposite eye. According to the textbook account, interocular transfer (IOT) of adaptation is around 60% of the within-eye effect. However, most previous studies on this were limited to using high spatial frequencies, sustained presentation, and criterion-dependent methods for assessing threshold. Here, we measure IOT across a wide range of spatiotemporal frequencies, using a criterion-free 2AFC method. We find little or no IOT at low spatial frequencies, consistent with other recent observations. At higher spatial frequencies, IOT was present, but weaker than previously reported (around 35%, on average, at 8c/deg). Across all conditions, monocular adaptation raised thresholds by around a factor of 2, and observers showed normal binocular summation, demonstrating that they were not binocularly compromised. These findings prompt a reassessment of our understanding of the binocular architecture implied by interocular adaptation. In particular, the output of monocular channels may be available to perceptual decision making at low spatial frequencies. Copyright © 2012 Elsevier Ltd. All rights reserved.

  12. Adaptive 4D PSI-Based Change Detection

    NASA Astrophysics Data System (ADS)

    Yang, Chia-Hsiang; Soergel, Uwe

    2018-04-01

    In a previous work, we proposed a PSI-based 4D change detection to detect disappearing and emerging PS points (3D) along with their occurrence dates (1D). Such change points are usually caused by anthropic events, e.g., building construction in cities. This method first divides an entire SAR image stack into several subsets by a set of break dates. The PS points, which are selected based on their temporal coherences before or after a break date, are regarded as change candidates. Change points are then extracted from these candidates according to their change indices, which are modelled from the temporal coherences of the divided image subsets. Finally, we check the evolution of the change indices for each change point to detect the break date at which the change occurred. The experiment validated both the feasibility and the applicability of our method. However, two questions remain. First, the selection of the temporal coherence threshold involves a trade-off between the quality and quantity of PS points, and it is also crucial for the number of change points in a more complex way. Second, the heuristic selection of change index thresholds makes the method fragile and causes loss of change points. In this study, we adapt our approach to identify change points based on the statistical characteristics of the change indices rather than on thresholding. The experiment validates this adaptive approach and shows an increase in change points compared with the old version. In addition, we also explore and discuss the optimal selection of the temporal coherence threshold.

  13. A method of camera calibration with adaptive thresholding

    NASA Astrophysics Data System (ADS)

    Gao, Lei; Yan, Shu-hua; Wang, Guo-chao; Zhou, Chun-lei

    2009-07-01

    In order to calculate the parameters of the camera correctly, we must determine accurate coordinates of certain points in the image plane. Corners are important features in 2D images. Generally speaking, they are points that have high curvature and lie at the junctions of regions of different brightness in an image. Corner detection is therefore already widely used in many fields. In this paper we use the pinhole camera model and the SUSAN corner detection algorithm to calibrate the camera. When using the SUSAN corner detection algorithm, we propose an approach to retrieve the gray-difference threshold adaptively. That makes it possible to pick up the right chessboard inner corners under all kinds of gray contrast. Experiments based on this method proved it to be feasible.

  14. The Limits to Adaptation: A Systems Approach

    EPA Science Inventory

    The Limits to Adaptation: A Systems Approach. The ability to adapt to climate change is delineated by capacity thresholds, after which climate damages begin to overwhelm the adaptation response. Such thresholds depend upon physical properties (natural processes and engineering...

  15. An adaptive design for updating the threshold value of a continuous biomarker

    PubMed Central

    Spencer, Amy V.; Harbron, Chris; Mander, Adrian; Wason, James; Peers, Ian

    2017-01-01

    Potential predictive biomarkers are often measured on a continuous scale, but in practice, a threshold value to divide the patient population into biomarker ‘positive’ and ‘negative’ is desirable. Early phase clinical trials are increasingly using biomarkers for patient selection, but at this stage, it is likely that little will be known about the relationship between the biomarker and the treatment outcome. We describe a single-arm trial design with adaptive enrichment, which can increase power to demonstrate efficacy within a patient subpopulation, the parameters of which are also estimated. Our design enables us to learn about the biomarker and optimally adjust the threshold during the study, using a combination of generalised linear modelling and Bayesian prediction. At the final analysis, a binomial exact test is carried out, allowing the hypothesis that ‘no population subset exists in which the novel treatment has a desirable response rate’ to be tested. Through extensive simulations, we are able to show increased power over fixed threshold methods in many situations without increasing the type-I error rate. We also show that estimates of the threshold, which defines the population subset, are unbiased and often more precise than those from fixed threshold studies. We provide an example of the method applied (retrospectively) to publicly available data from a study of the use of tamoxifen after mastectomy by the German Breast Study Group, where progesterone receptor is the biomarker of interest. PMID:27417407

  16. Estimation of pulse rate from ambulatory PPG using ensemble empirical mode decomposition and adaptive thresholding.

    PubMed

    Pittara, Melpo; Theocharides, Theocharis; Orphanidou, Christina

    2017-07-01

    A new method for deriving pulse rate from PPG obtained from ambulatory patients is presented. The method employs Ensemble Empirical Mode Decomposition to identify the pulsatile component from noise-corrupted PPG, and then uses a set of physiologically-relevant rules followed by adaptive thresholding, in order to estimate the pulse rate in the presence of noise. The method was optimized and validated using 63 hours of data obtained from ambulatory hospital patients. The F1 score obtained with respect to expertly annotated data was 0.857 and the mean absolute errors of estimated pulse rates with respect to heart rates obtained from ECG collected in parallel were 1.72 bpm for "good" quality PPG and 4.49 bpm for "bad" quality PPG. Both errors are within the clinically acceptable margin-of-error for pulse rate/heart rate measurements, showing the promise of the proposed approach for inclusion in next generation wearable sensors.

  17. Method and apparatus for detection of catalyst failure on-board a motor vehicle using a dual oxygen sensor and an algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clemmens, W.B.; Koupal, J.W.; Sabourin, M.A.

    1993-07-20

    Apparatus is described for detecting motor vehicle exhaust gas catalytic converter deterioration, comprising a first exhaust gas oxygen sensor adapted for communication with an exhaust stream before passage of the exhaust stream through a catalytic converter and a second exhaust gas oxygen sensor adapted for communication with the exhaust stream after passage of the exhaust stream through the catalytic converter; an on-board vehicle computational means, said computational means adapted to accept oxygen content signals from the before and after catalytic converter oxygen sensors and adapted to generate signal threshold values, said computational means adapted to compare over repeated time intervals the oxygen content signals to the signal threshold values and to store the output of the compared oxygen content signals, and in response after a specified number of time intervals for a specified mode of motor vehicle operation to determine and indicate a level of catalyst deterioration.

  18. Twofold processing for denoising ultrasound medical images.

    PubMed

    Kishore, P V V; Kumar, K V V; Kumar, D Anil; Prasad, M V D; Goutham, E N D; Rahul, R; Krishna, C B S Vamsi; Sandeep, Y

    2015-01-01

    Ultrasound medical (US) imaging non-invasively pictures the inside of a human body for disease diagnostics. Speckle noise attacks ultrasound images, degrading their visual quality. A twofold processing algorithm is proposed in this work to reduce this multiplicative speckle noise. The first fold applies block-based thresholding, both hard (BHT) and soft (BST), to pixels in the wavelet domain with non-overlapping block sizes of 8, 16, 32 and 64. This first-fold process reduces speckle effectively but also blurs the object of interest. The second fold then restores object boundaries and texture with adaptive wavelet fusion. Restoration of the degraded object in the block-thresholded US image is carried out through wavelet-coefficient fusion of the object in the original US image and the block-thresholded US image. Fusion rules and wavelet decomposition levels are made adaptive for each block using gradient histograms with normalized differential mean (NDF) to introduce the highest level of contrast between the denoised pixels and the object pixels in the resultant image. Thus the proposed twofold methods are named adaptive NDF block fusion with hard and soft thresholding (ANBF-HT and ANBF-ST). The results indicate a clear visual quality improvement with the proposed twofold processing, where the first fold removes noise and the second fold restores object properties. Peak signal to noise ratio (PSNR), normalized cross correlation coefficient (NCC), edge strength (ES), image quality index (IQI) and structural similarity index (SSIM) measure the quantitative quality of the twofold processing technique. Validation of the proposed method is done by comparing with anisotropic diffusion (AD), total variational filtering (TVF) and empirical mode decomposition (EMD) for enhancement of US images. The US images are provided by AMMA hospital radiology labs at Vijayawada, India.
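
    A minimal sketch of the first fold (block-based wavelet thresholding), assuming the PyWavelets package; the per-block universal threshold with a MAD noise estimate is our simplification, and the second, fusion-based fold is omitted.

        import numpy as np
        import pywt

        def block_wavelet_threshold(img, wavelet='db4', level=2,
                                    block=32, mode='soft'):
            coeffs = pywt.wavedec2(np.asarray(img, float), wavelet, level=level)
            out = [coeffs[0]]                        # keep approximation band
            for details in coeffs[1:]:
                bands = []
                for band in details:
                    band = band.copy()
                    for i in range(0, band.shape[0], block):
                        for j in range(0, band.shape[1], block):
                            blk = band[i:i + block, j:j + block]
                            sigma = np.median(np.abs(blk)) / 0.6745  # MAD noise est.
                            t = sigma * np.sqrt(2.0 * np.log(blk.size + 1))
                            # mode: 'hard' (BHT) or 'soft' (BST)
                            band[i:i + block, j:j + block] = pywt.threshold(
                                blk, t, mode=mode)
                    bands.append(band)
                out.append(tuple(bands))
            return pywt.waverec2(out, wavelet)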

  19. Thresholding histogram equalization.

    PubMed

    Chuang, K S; Chen, S; Hwang, I M

    2001-12-01

    The drawbacks of adaptive histogram equalization techniques are the loss of definition on the edges of the object and overenhancement of noise in the images. These drawbacks can be avoided if the noise is excluded in the equalization transformation function computation. A method has been developed to separate the histogram into zones, each with its own equalization transformation. This method can be used to suppress the nonanatomic noise and enhance only certain parts of the object. This method can be combined with other adaptive histogram equalization techniques. Preliminary results indicate that this method can produce images with superior contrast.
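
    A minimal two-zone sketch of the idea for an 8-bit image: the histogram is split at a user-chosen gray level, and each zone is equalized onto its own output range so that noise in one zone cannot distort the transformation of the other. The split value and all names are illustrative, not the paper's exact procedure.

        import numpy as np

        def zoned_hist_equalization(img, split):
            # Equalize [0, split) and [split, 256) separately.
            img = np.asarray(img, dtype=np.uint8)
            out = np.zeros_like(img)
            for lo, hi in ((0, int(split)), (int(split), 256)):
                mask = (img >= lo) & (img < hi)
                vals = img[mask]
                if vals.size == 0:
                    continue
                hist, _ = np.histogram(vals, bins=hi - lo, range=(lo, hi))
                cdf = hist.cumsum() / vals.size
                lut = (lo + cdf * (hi - lo - 1)).astype(np.uint8)
                out[mask] = lut[vals - lo]           # map zone onto its own range
            return out

        # Usage: suppress noise below gray level 60 from the object's equalization.
        # enhanced = zoned_hist_equalization(image, split=60)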

  20. QUEST - A Bayesian adaptive psychometric method

    NASA Technical Reports Server (NTRS)

    Watson, A. B.; Pelli, D. G.

    1983-01-01

    An adaptive psychometric procedure that places each trial at the current most probable Bayesian estimate of threshold is described. The procedure takes advantage of the common finding that the human psychometric function is invariant in form when expressed as a function of log intensity. The procedure is simple, fast, and efficient, and may be easily implemented on any computer.
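
    The procedure fits in a few lines of Python; the sketch below is a simplified QUEST-style loop (Weibull psychometric function on a dB axis, Gaussian prior, each trial placed at the posterior mode), not Watson and Pelli's exact implementation.

        import numpy as np

        def quest(trial_fn, n_trials=40, beta=3.5, gamma=0.5, delta=0.01):
            grid = np.linspace(-40.0, 0.0, 400)      # candidate thresholds (dB)
            posterior = np.exp(-0.5 * ((grid - grid.mean()) / 10.0) ** 2)
            posterior /= posterior.sum()             # Gaussian prior over threshold

            def p_correct(x, thresh):
                # Weibull psychometric function on a log-intensity (dB) axis.
                p = 1.0 - (1.0 - gamma) * np.exp(-10.0 ** (beta * (x - thresh) / 20.0))
                return (1.0 - delta) * p + delta * gamma

            for _ in range(n_trials):
                x = grid[np.argmax(posterior)]       # test at the posterior mode
                like = p_correct(x, grid)
                posterior *= like if trial_fn(x) else (1.0 - like)
                posterior /= posterior.sum()
            return grid[np.argmax(posterior)]

        # Simulated observer with a true threshold of -20 dB:
        rng = np.random.default_rng(1)
        sim = lambda x: rng.random() < 1.0 - 0.5 * np.exp(-10.0 ** (3.5 * (x + 20.0) / 20.0))
        print(round(quest(sim), 1))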

  1. Adaptive thresholding image series from fluorescence confocal scanning laser microscope using orientation intensity profiles

    NASA Astrophysics Data System (ADS)

    Feng, Judy J.; Ip, Horace H.; Cheng, Shuk H.

    2004-05-01

    Many grey-level thresholding methods based on histograms or other image statistics, such as maximum entropy, have been proposed in the past. However, most methods based on statistical analysis of the images take little account of the morphology of the objects of interest, which can provide important cues for finding the optimum threshold, especially for organisms with special texture morphologies such as vasculature or neural networks in medical imaging. In this paper, we propose a novel method for thresholding the fluorescent vasculature image series recorded with a confocal scanning laser microscope. After extracting the basic orientation of the vessel slice inside a sub-region partitioned from the images, we analyze the intensity profiles perpendicular to the vessel orientation to obtain a reasonable initial threshold for each region. Then the threshold values of the regions near the one of interest, both in the x-y and optical directions, are referenced to obtain the final thresholds of the region, which makes the whole stack of images look more continuous. The resulting images are characterized by suppression of both noise and non-interest tissues conglutinated to vessels, while improving vessel connectivity and edge definition. The value of the method for thresholding fluorescence images of biological objects is demonstrated by a comparison of the results of 3D vascular reconstruction.

  2. Fitting psychometric functions using a fixed-slope parameter: an advanced alternative for estimating odor thresholds with data generated by ASTM E679.

    PubMed

    Peng, Mei; Jaeger, Sara R; Hautus, Michael J

    2014-03-01

    Psychometric functions are predominantly used for estimating detection thresholds in vision and audition. However, the requirement of large data quantities for fitting psychometric functions (>30 replications) reduces their suitability in olfactory studies, because olfactory response data are often limited (<4 replications) due to the susceptibility of human olfactory receptors to fatigue and adaptation. This article introduces a new method for fitting individual-judge psychometric functions to olfactory data obtained using the current standard protocol, American Society for Testing and Materials (ASTM) E679. The slope parameter of the individual-judge psychometric function is fixed to be the same as that of the group function; the same-shaped symmetrical sigmoid function is fitted using only the intercept. This study evaluated the proposed method by comparing it with 2 available methods. Comparison with conventional psychometric functions (fitted slope and intercept) indicated that the assumption of a fixed slope did not compromise the precision of the threshold estimates. No systematic difference was obtained between the proposed method and the ASTM method in terms of group threshold estimates or threshold distributions, but there were changes in the rank, by threshold, of judges in the group. Overall, the fixed-slope psychometric function is recommended for obtaining relatively reliable individual threshold estimates when the quantity of data is limited.
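
    A hedged sketch of the fixed-slope fit: with the slope pinned to the group value, each judge's threshold is the lone free parameter of a maximum-likelihood fit. The logistic link and the parameter names are our choices for illustration; the paper's exact sigmoid may differ.

        import numpy as np
        from scipy.optimize import minimize_scalar

        def fit_fixed_slope(log_conc, correct, group_slope):
            # Maximum-likelihood logistic fit with only the threshold free.
            log_conc = np.asarray(log_conc, float)
            correct = np.asarray(correct, float)     # 1 = detected, 0 = missed

            def nll(thresh):
                p = 1.0 / (1.0 + np.exp(-group_slope * (log_conc - thresh)))
                p = np.clip(p, 1e-9, 1.0 - 1e-9)
                return -np.sum(correct * np.log(p) +
                               (1.0 - correct) * np.log(1.0 - p))

            res = minimize_scalar(nll, bounds=(log_conc.min() - 5.0,
                                               log_conc.max() + 5.0),
                                  method='bounded')
            return res.x                             # individual threshold estimate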

  3. Joint Dictionary Learning for Multispectral Change Detection.

    PubMed

    Lu, Xiaoqiang; Yuan, Yuan; Zheng, Xiangtao

    2017-04-01

    Change detection is one of the most important applications of remote sensing technology. It is a challenging task due to the obvious variations in the radiometric value of the spectral signature and the limited capability of utilizing spectral information. In this paper, an improved sparse coding method for change detection is proposed. The intuition of the proposed method is that unchanged pixels in different images can be well reconstructed by the joint dictionary, which encodes knowledge of unchanged pixels, while changed pixels cannot. First, a query image pair is projected onto the joint dictionary to constitute the knowledge of unchanged pixels. Then the reconstruction error is used to discriminate between the changed and unchanged pixels in the different images. To select proper thresholds for determining changed regions, an automatic threshold selection strategy is presented by minimizing the reconstruction errors of the changed pixels. Extensive experiments on multispectral data were conducted, and the experimental results, compared with state-of-the-art methods, prove the superiority of the proposed method. The contributions of the proposed method can be summarized as follows: 1) joint dictionary learning is proposed to explore the intrinsic information of different images for change detection; in this case, change detection can be transformed into a sparse representation problem, and to the authors' knowledge, few publications utilize joint dictionary learning in change detection; 2) an automatic threshold selection strategy is presented, which minimizes the reconstruction errors of the changed pixels without a prior assumption about the spectral signature; as a result, the threshold value provided by the proposed method can adapt to different data owing to the characteristics of joint dictionary learning; and 3) the proposed method makes no prior assumption about the modeling and handling of the spectral signature, and so can be adapted to different data.
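
    The core scoring step can be sketched as follows, assuming the joint dictionary D has already been learned on unchanged pixel pairs (dictionary learning and the automatic threshold selection are omitted); a tiny orthogonal matching pursuit stands in for whatever sparse coder the authors used.

        import numpy as np

        def omp(D, y, k):
            # Orthogonal matching pursuit: code y over columns of D, <= k atoms.
            residual, idx, x = y.astype(float), [], np.zeros(0)
            for _ in range(k):
                idx.append(int(np.argmax(np.abs(D.T @ residual))))
                x, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
                residual = y - D[:, idx] @ x
            code = np.zeros(D.shape[1])
            code[idx] = x
            return code

        def change_scores(D, pairs, k=5):
            # Reconstruction error of each stacked pre/post pixel pair over the
            # joint dictionary; a large error suggests a changed pixel.
            return np.array([np.linalg.norm(p - D @ omp(D, p, k)) for p in pairs])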

  4. A de-noising algorithm based on wavelet threshold-exponential adaptive window width-fitting for ground electrical source airborne transient electromagnetic signal

    NASA Astrophysics Data System (ADS)

    Ji, Yanju; Li, Dongsheng; Yu, Mingmei; Wang, Yuan; Wu, Qiong; Lin, Jun

    2016-05-01

    The ground electrical source airborne transient electromagnetic system (GREATEM) on an unmanned aircraft offers considerable prospecting depth, lateral resolution and detection efficiency, and in recent years it has become an important technique for rapid resource exploration. However, GREATEM data are extremely vulnerable to stationary white noise and non-stationary electromagnetic noise (sferics noise, aircraft engine noise and other man-made electromagnetic noise). These noises degrade the imaging quality for data interpretation. Based on the characteristics of the GREATEM data and the major noises, we propose a de-noising algorithm utilizing the wavelet threshold method and exponential adaptive window width-fitting. Firstly, the white noise in the measured data is filtered using the wavelet threshold method. Then, the data are segmented using a data window whose step length follows even logarithmic intervals. Within each window, data polluted by electromagnetic noise are identified based on the discriminating principle of energy detection, and the attenuation characteristics of the data slope are extracted. Finally, an exponential fitting algorithm is adopted to fit the attenuation curve of each window, and the data polluted by non-stationary electromagnetic noise are replaced with their fitting results; thus the non-stationary electromagnetic noise is effectively removed. The proposed algorithm is verified on synthetic and real GREATEM signals. The results show that both stationary white noise and non-stationary electromagnetic noise in the GREATEM signal can be effectively filtered using the wavelet threshold-exponential adaptive window width-fitting algorithm, which enhances the imaging quality.

  5. The Limits to Adaptation: A Systems Approach

    EPA Science Inventory

    The ability to adapt to climate change is delineated by capacity thresholds, after which climate damages begin to overwhelm the adaptation response. Such thresholds depend upon physical properties (natural processes and engineering parameters), resource constraints (expressed th...

  6. Characterizing Decision-Analysis Performances of Risk Prediction Models Using ADAPT Curves.

    PubMed

    Lee, Wen-Chung; Wu, Yun-Chun

    2016-01-01

    The area under the receiver operating characteristic curve is a widely used index to characterize the performance of diagnostic tests and prediction models. However, the index does not explicitly acknowledge the utilities of risk predictions. Moreover, for most clinical settings, what counts is whether a prediction model can guide therapeutic decisions in a way that improves patient outcomes, rather than to simply update probabilities. Based on decision theory, the authors propose an alternative index, the "average deviation about the probability threshold" (ADAPT). An ADAPT curve (a plot of ADAPT value against the probability threshold) neatly characterizes the decision-analysis performances of a risk prediction model. Several prediction models can be compared for their ADAPT values at a chosen probability threshold, for a range of plausible threshold values, or for the whole ADAPT curves. This should greatly facilitate the selection of diagnostic tests and prediction models.

  7. An adaptive design for updating the threshold value of a continuous biomarker.

    PubMed

    Spencer, Amy V; Harbron, Chris; Mander, Adrian; Wason, James; Peers, Ian

    2016-11-30

    Potential predictive biomarkers are often measured on a continuous scale, but in practice, a threshold value to divide the patient population into biomarker 'positive' and 'negative' is desirable. Early phase clinical trials are increasingly using biomarkers for patient selection, but at this stage, it is likely that little will be known about the relationship between the biomarker and the treatment outcome. We describe a single-arm trial design with adaptive enrichment, which can increase power to demonstrate efficacy within a patient subpopulation, the parameters of which are also estimated. Our design enables us to learn about the biomarker and optimally adjust the threshold during the study, using a combination of generalised linear modelling and Bayesian prediction. At the final analysis, a binomial exact test is carried out, allowing the hypothesis that 'no population subset exists in which the novel treatment has a desirable response rate' to be tested. Through extensive simulations, we are able to show increased power over fixed threshold methods in many situations without increasing the type-I error rate. We also show that estimates of the threshold, which defines the population subset, are unbiased and often more precise than those from fixed threshold studies. We provide an example of the method applied (retrospectively) to publicly available data from a study of the use of tamoxifen after mastectomy by the German Breast Study Group, where progesterone receptor is the biomarker of interest. © 2016 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.

  8. Reduced rank regression via adaptive nuclear norm penalization

    PubMed Central

    Chen, Kun; Dong, Hongbo; Chan, Kung-Sik

    2014-01-01

    We propose an adaptive nuclear norm penalization approach for low-rank matrix approximation, and use it to develop a new reduced-rank estimation method for high-dimensional multivariate regression. The adaptive nuclear norm is defined as the weighted sum of the singular values of the matrix, and it is generally non-convex under the natural restriction that the weight decreases as the singular value increases. However, we show that the proposed non-convex penalized regression method has a global optimal solution obtained from an adaptively soft-thresholded singular value decomposition. The method is computationally efficient, and the resulting solution path is continuous. Rank consistency and prediction/estimation performance bounds for the estimator are established in a high-dimensional asymptotic regime. Simulation studies and an application in genetics demonstrate its efficacy. PMID:25045172
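
    The closed form lends itself to a short sketch: soft-threshold the singular values with weights that decrease in the singular value. Here w_i = d_i^(-gamma), one common adaptive-weight choice, is our assumption, not necessarily the paper's.

        import numpy as np

        def adaptive_svt(Y, lam, gamma=2.0):
            # Soft-threshold singular values with weights w_i = d_i**(-gamma),
            # so larger singular values are shrunk less.
            U, d, Vt = np.linalg.svd(np.asarray(Y, float), full_matrices=False)
            w = (d + 1e-12) ** (-gamma)              # weights decrease with d_i
            d_shrunk = np.maximum(d - lam * w, 0.0)
            return (U * d_shrunk) @ Vt               # low-rank approximation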

  9. Role of extrinsic noise in the sensitivity of the rod pathway: rapid dark adaptation of nocturnal vision in humans.

    PubMed

    Reeves, Adam; Grayhem, Rebecca

    2016-03-01

    Rod-mediated 500 nm test spots were flashed in Maxwellian view at 5 deg eccentricity, both on steady 10.4 deg fields of intensities (I) from 0.00001 to 1.0 scotopic troland (sc td) and from 0.2 s to 1 s after extinguishing the field. On dim fields, thresholds of tiny (5') tests were proportional to √I (Rose-de Vries law), while thresholds after extinction fell within 0.6 s to the fully dark-adapted absolute threshold. Thresholds of large (1.3 deg) tests were proportional to I (Weber law) and extinction thresholds, to √I. Rod thresholds are thus elevated by photon-driven noise from dim fields that disappears at field extinction; large-spot thresholds are additionally elevated by neural light adaptation proportional to √I. At night, recovery from dimly lit fields is fast, not slow.

  10. Robust Adaptive Thresholder For Document Scanning Applications

    NASA Astrophysics Data System (ADS)

    Hsing, To R.

    1982-12-01

    In document scanning applications, thresholding is used to obtain binary data from a scanner. However, due to: (1) a wide range of different color backgrounds; (2) density variations of printed text information; and (3) the shading effect caused by the optical systems, the use of adaptive thresholding to enhance the useful information is highly desired. This paper describes a new robust adaptive thresholder for obtaining valid binary images. It is basically a memory-type algorithm which can dynamically update the black and white reference levels to optimize a local adaptive threshold function. The algorithm produces high image quality from different types of simulated test patterns. The software algorithm is described, and experimental results are presented to illustrate the procedure. Results also show that the techniques described here can be used for real-time signal processing in varied applications.
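
    One way to read the memory-type idea is the scanline sketch below: running black and white reference levels are updated with exponential forgetting, and the local threshold sits between them. The update rule and constants are our reconstruction, not the paper's algorithm.

        import numpy as np

        def scanline_threshold(line, alpha=0.05, bias=0.5):
            # Memory-type binarization: reference levels follow the signal
            # with exponential forgetting; threshold sits between them.
            white, black = float(np.max(line)), float(np.min(line))
            bits = np.empty(len(line), dtype=np.uint8)
            for i, v in enumerate(np.asarray(line, float)):
                t = black + bias * (white - black)   # local adaptive threshold
                bits[i] = v > t                      # 1 = background, 0 = ink
                if bits[i]:
                    white += alpha * (v - white)     # track paper/background level
                else:
                    black += alpha * (v - black)     # track ink/text level
            return bits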

  11. Impact of view reduction in CT on radiation dose for patients

    NASA Astrophysics Data System (ADS)

    Parcero, E.; Flores, L.; Sánchez, M. G.; Vidal, V.; Verdú, G.

    2017-08-01

    Iterative methods have become a hot topic of research in computed tomography (CT) imaging because of their capacity to resolve the reconstruction problem from a limited number of projections. This allows a reduction of the radiation exposure of patients during the data acquisition. The reconstruction time and the high radiation dose imposed on patients are the two major drawbacks in CT. To address them effectively, we adapted the method for sparse linear equations and sparse least squares (LSQR) with soft threshold filtering (STF) and the fast iterative shrinkage-thresholding algorithm (FISTA) to computed tomography reconstruction. The feasibility of the proposed methods is demonstrated numerically.
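
    For reference, a compact dense-matrix FISTA for the l1-regularized least-squares problem, with the soft-threshold filter as the shrinkage step; a real CT solver would use a sparse system matrix and typically regularize a transform of the image rather than the image itself.

        import numpy as np

        def fista(A, b, lam, n_iter=200):
            # Minimize 0.5*||Ax - b||^2 + lam*||x||_1.
            L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of gradient
            x = np.zeros(A.shape[1])
            y, t = x.copy(), 1.0
            soft = lambda v, s: np.sign(v) * np.maximum(np.abs(v) - s, 0.0)
            for _ in range(n_iter):
                x_new = soft(y - A.T @ (A @ y - b) / L, lam / L)  # gradient + STF
                t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
                y = x_new + ((t - 1.0) / t_new) * (x_new - x)     # momentum step
                x, t = x_new, t_new
            return x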

  12. Image segmentation and 3D visualization for MRI mammography

    NASA Astrophysics Data System (ADS)

    Li, Lihua; Chu, Yong; Salem, Angela F.; Clark, Robert A.

    2002-05-01

    MRI mammography has a number of advantages, including the tomographic, and therefore three-dimensional (3-D), nature of the images. It allows the application of MRI mammography to breasts with dense tissue, post-operative scarring, and silicone implants. However, due to the vast quantity of images and the subtle differences between MR sequences, there is a need for reliable computer-aided diagnosis to reduce the radiologist's workload. The purpose of this work was to develop automatic breast/tissue segmentation and visualization algorithms to aid physicians in detecting and observing abnormalities in the breast. Two segmentation algorithms were developed: one for breast segmentation, the other for glandular tissue segmentation. In breast segmentation, the MRI image is first segmented using an adaptive growing clustering method. Two tracing algorithms were then developed to refine the breast-air and chest-wall boundaries of the breast. The glandular tissue segmentation was performed using an adaptive thresholding method, in which the threshold value was spatially adaptive using a sliding window. The 3D visualization of the segmented 2D slices of MRI mammography was implemented under the IDL environment. The breast and glandular tissue rendering, slicing and animation were displayed.
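
    The glandular-tissue step can be illustrated with a generic sliding-window rule: each pixel is compared against the mean of its neighbourhood plus an offset. This is a stand-in for the paper's specific window-based rule; the window size and offset are illustrative.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def local_mean_threshold(img, window=51, offset=0.0):
            # Spatially adaptive threshold: compare each pixel with the mean
            # of its window-sized neighbourhood plus an offset.
            local_mean = uniform_filter(np.asarray(img, float), size=window)
            return img > local_mean + offset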

  13. Adaptive local thresholding for robust nucleus segmentation utilizing shape priors

    NASA Astrophysics Data System (ADS)

    Wang, Xiuzhong; Srinivas, Chukka

    2016-03-01

    This paper describes a novel local thresholding method for foreground detection. First, a Canny edge detection method is used for initial edge detection. Then, tensor voting is applied to the initial edge pixels, using a nonsymmetric tensor field tailored to encode prior information about nucleus size, shape, and intensity spatial distribution. Tensor analysis is then performed to generate the saliency image and, based on that, the refined edge map. Next, the image domain is divided into blocks. In each block, at least one foreground and one background pixel are sampled for each refined edge pixel. The saliency-weighted foreground and background histograms are then created. These two histograms are used to calculate a threshold by minimizing the background and foreground pixel classification error. The block-wise thresholds are then used to generate the threshold for each pixel via interpolation. Finally, the foreground is obtained by comparing the original image with the threshold image. The effective use of prior information, combined with robust techniques, results in far more reliable foreground detection, which leads to robust nucleus segmentation.
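
    A simplified sketch of the block-then-interpolate stage: a threshold is computed per block (plain Otsu here stands in for the saliency-weighted classification-error minimization) and the block grid is bilinearly interpolated to a full-resolution threshold image. Requires SciPy and scikit-image; all names are illustrative.

        import numpy as np
        from scipy.ndimage import zoom
        from skimage.filters import threshold_otsu

        def interpolated_block_threshold(img, block=64):
            img = np.asarray(img, float)
            h, w = img.shape
            ny, nx = -(-h // block), -(-w // block)  # ceil division
            t = np.empty((ny, nx))
            for i in range(ny):
                for j in range(nx):
                    blk = img[i * block:(i + 1) * block, j * block:(j + 1) * block]
                    # Otsu stands in for the saliency-weighted error minimization.
                    t[i, j] = threshold_otsu(blk) if np.ptp(blk) > 0 else blk.mean()
            # Bilinear interpolation of block thresholds to per-pixel thresholds.
            t_full = zoom(t, (h / ny, w / nx), order=1)[:h, :w]
            return img > t_full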

  14. Visual perception system and method for a humanoid robot

    NASA Technical Reports Server (NTRS)

    Chelian, Suhas E. (Inventor); Linn, Douglas Martin (Inventor); Wampler, II, Charles W. (Inventor); Bridgwater, Lyndon (Inventor); Wells, James W. (Inventor); Mc Kay, Neil David (Inventor)

    2012-01-01

    A robotic system includes a humanoid robot with robotic joints each moveable using an actuator(s), and a distributed controller for controlling the movement of each of the robotic joints. The controller includes a visual perception module (VPM) for visually identifying and tracking an object in the field of view of the robot under threshold lighting conditions. The VPM includes optical devices for collecting an image of the object, a positional extraction device, and a host machine having an algorithm for processing the image and positional information. The algorithm visually identifies and tracks the object, and automatically adapts an exposure time of the optical devices to prevent feature data loss of the image under the threshold lighting conditions. A method of identifying and tracking the object includes collecting the image, extracting positional information of the object, and automatically adapting the exposure time to thereby prevent feature data loss of the image.

  15. Adaptive compressed sensing of remote-sensing imaging based on the sparsity prediction

    NASA Astrophysics Data System (ADS)

    Yang, Senlin; Li, Xilong; Chong, Xin

    2017-10-01

    Conventional compressive sensing works with non-adaptive linear projections, and the number of measurements is usually set empirically. As a result, the quality of image reconstruction is often affected. Firstly, block-based compressed sensing (BCS) with the conventional selection of compressive measurements is described. Then an estimation method for the sparsity of an image is proposed based on the two-dimensional discrete cosine transform (2D DCT). With an energy threshold given beforehand, the DCT coefficients are processed with both energy normalization and sorting in descending order, and the sparsity of the image is estimated from the proportion of dominant coefficients. Finally, the simulation results show that the method can estimate the sparsity of an image effectively and provides a sound basis for the selection of the number of compressive observations. The results also show that, since the number of observations is selected based on the sparsity estimated with the given energy threshold, the proposed method can ensure the quality of image reconstruction.
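
    The estimator reduces to a few lines, sketched below with SciPy's 2D DCT: square the coefficients, normalize to unit energy, sort in descending order, and count how many coefficients are needed to reach the preset energy threshold. All names are illustrative.

        import numpy as np
        from scipy.fft import dctn

        def estimate_sparsity(img, energy_threshold=0.99):
            # Fraction of 2D-DCT coefficients needed to retain the preset
            # share of the total energy.
            c = dctn(np.asarray(img, float), norm='ortho')
            energy = np.sort((c ** 2).ravel())[::-1]
            energy /= energy.sum()                   # energy normalization
            k = int(np.searchsorted(np.cumsum(energy), energy_threshold)) + 1
            return k / energy.size                   # proportion of dominant coeffs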

  16. Adaptive compressed sensing of multi-view videos based on the sparsity estimation

    NASA Astrophysics Data System (ADS)

    Yang, Senlin; Li, Xilong; Chong, Xin

    2017-11-01

    Conventional compressive sensing for videos works with non-adaptive linear projections, and the number of measurements is usually set empirically. As a result, the quality of video reconstruction is often affected. Firstly, block-based compressed sensing (BCS) with the conventional selection of compressive measurements is described. Then an estimation method for the sparsity of multi-view videos is proposed based on the two-dimensional discrete wavelet transform (2D DWT). With an energy threshold given beforehand, the DWT coefficients are processed with both energy normalization and sorting in descending order, and the sparsity of the multi-view video is estimated from the proportion of dominant coefficients. Finally, the simulation results show that the method can estimate the sparsity of a video frame effectively and provides a sound basis for the selection of the number of compressive observations. The results also show that, since the number of observations is selected based on the sparsity estimated with the given energy threshold, the proposed method can ensure the reconstruction quality of multi-view videos.

  17. Definition of temperature thresholds: the example of the French heat wave warning system.

    PubMed

    Pascal, Mathilde; Wagner, Vérène; Le Tertre, Alain; Laaidi, Karine; Honoré, Cyrille; Bénichou, Françoise; Beaudeau, Pascal

    2013-01-01

    Heat-related deaths should be somewhat preventable. In France, some prevention measures are activated when minimum and maximum temperatures averaged over three days reach city-specific thresholds. The current thresholds were computed based on a descriptive analysis of past heat waves and on local expert judgement. We tested whether a different method would confirm these thresholds. The study was set in the six cities of Paris, Lyon, Marseille, Nantes, Strasbourg and Limoges between 1973 and 2003. For each city, we estimated the excess mortality associated with different temperature thresholds, using a generalised additive model controlling for long-term trends, seasons and days of the week. These models were used to compute the mortality predicted by different percentiles of temperature. The thresholds were chosen as the percentiles associated with a significant excess mortality. In all cities, there was a good correlation between the current thresholds and the thresholds derived from the models, with 0°C to 3°C differences for averaged maximum temperatures. Both sets of thresholds were able to anticipate the main periods of excess mortality during the summers of 1973 to 2003. A simple method relying on descriptive analysis and expert judgement is therefore sufficient to define protective temperature thresholds and to prevent heat-wave mortality. As temperatures increase with climate change and adaptation is ongoing, more research is required to understand if and when thresholds should be modified.

  18. Synergy of adaptive thresholds and multiple transmitters in free-space optical communication.

    PubMed

    Louthain, James A; Schmidt, Jason D

    2010-04-26

    Laser propagation through extended turbulence causes severe beam spread and scintillation. Airborne laser communication systems require special consideration of size, complexity, power, and weight. Rather than using bulky, costly adaptive optics systems, we reduce the variability of the received signal by integrating a two-transmitter system with an adaptive threshold receiver to average out the deleterious effects of turbulence. In contrast to adaptive optics approaches, systems employing multiple transmitters and adaptive thresholds exhibit performance improvements that are unaffected by turbulence strength. Simulations of this system with on-off keying (OOK) showed that reducing the scintillation variations with multiple transmitters improves the performance of low-frequency adaptive threshold estimators by 1-3 dB. The combination of multiple transmitters and adaptive thresholding provided at least a 10 dB gain over implementing only transmitter pointing and receiver tilt correction for all three high-Rytov-number scenarios. The scenario with a spherical-wave Rytov number R=0.20 enjoyed a 13 dB reduction in the required SNR for BERs between 10^-5 and 10^-3, consistent with the code gain metric. All five scenarios between 0.06 and 0.20 Rytov number improved to within 3 dB of the SNR of the lowest-Rytov-number scenario.

  19. Vibratory Adaptation of Cutaneous Mechanoreceptive Afferents

    PubMed Central

    Bensmaïa, S. J.; Leung, Y. Y.; Hsiao, S. S.; Johnson, K. O.

    2007-01-01

    The objective of this study was to investigate the effects of extended suprathreshold vibratory stimulation on the sensitivity of slowly adapting type 1 (SA1), rapidly adapting (RA), and Pacinian (PC) afferents. To that end, an algorithm was developed to track afferent absolute (I0) and entrainment (I1) thresholds as they change over time. We recorded afferent responses to periliminal vibratory test stimuli, which were interleaved with intense vibratory conditioning stimuli during the adaptation period of each experimental run. From these measurements, the algorithm allowed us to infer changes in the afferents' sensitivity. We investigated the stimulus parameters that affect adaptation by assessing the degree to which adaptation depends on the amplitude and frequency of the adapting stimulus. For all three afferent types, I0 and I1 increased with increasing adaptation frequency and amplitude. The degree of adaptation seems to be independent of the firing rate evoked in the afferent by the conditioning stimulus. In the analysis, we distinguished between additive adaptation (in which I0 and I1 shift equally) and multiplicative effects (in which the ratio I1/I0 remains constant). RA threshold shifts are almost perfectly additive. SA1 threshold shifts are close to additive and far from multiplicative (I1 shifts are twice the I0 shifts). PC shifts are more difficult to classify. We used an integrate-and-fire model to study the possible neural mechanisms. A change in transducer gain predicts a multiplicative change in I0 and I1 and is thus ruled out as a mechanism underlying SA1 and RA adaptation. A change in the resting action potential threshold predicts an equal, additive change in I0 and I1 and thus accounts well for RA adaptation. A change in the degree of refractoriness during the relative refractory period predicts an additional change in I1 such as that observed for SA1 fibers. We infer that adaptation is caused by an increase in spiking thresholds produced by ion flow through transducer channels in the receptor membrane. In a companion paper, we describe the time-course of vibratory adaptation and recovery for SA1, RA, and PC fibers. PMID:16014802

  20. Relationships Between Vestibular Measures as Potential Predictors for Spaceflight Sensorimotor Adaptation

    NASA Technical Reports Server (NTRS)

    Clark, T. K.; Peters, B.; Gadd, N. E.; De Dios, Y. E.; Wood, S.; Bloomberg, J. J.; Mulavara, A. P.

    2016-01-01

    Introduction: During space exploration missions astronauts are exposed to a series of novel sensorimotor environments, requiring sensorimotor adaptation. Until adaptation is complete, sensorimotor decrements occur, affecting critical tasks such as piloted landing or docking. Of particular interest are locomotion tasks such as emergency vehicle egress or extra-vehicular activity. While nearly all astronauts eventually adapt sufficiently, it appears there are substantial individual differences in how quickly and effectively this adaptation occurs. These individual differences in capacity for sensorimotor adaptation are poorly understood. Broadly, we aim to identify measures that may serve as pre-flight predictors of an individual's capacity to adapt to spaceflight-induced sensorimotor changes. As a first step, since spaceflight is thought to involve a reinterpretation of graviceptor cues (e.g., otolith cues from the vestibular system), we investigate the relationships between various measures of vestibular function in humans. Methods: In a set of 15 ground-based control subjects, we quantified individual differences in vestibular function using three measures: 1) ocular vestibular evoked myogenic potential (oVEMP), 2) computerized dynamic posturography, and 3) vestibular perceptual thresholds. oVEMP responses were elicited using mechanical stimuli. Computerized dynamic posturography was used to administer Sensory Organization Tests (SOTs), including SOT5M, which involved performing pitching head movements while balancing on a sway-referenced support surface with eyes closed. We implemented a vestibular perceptual threshold task using the tilt capabilities of the Tilt-Translation Sled (TTS) at JSC. On each trial, the subject was passively roll-tilted left ear down or right ear down in the dark and verbally provided a forced-choice response regarding which direction they felt tilted. The motion profile was a single-cycle sinusoid of angular acceleration with a duration of 5 seconds (frequency of 0.2 Hz), which was selected because it requires sensory integration of otolith and semicircular canal cues. Stimulus direction was randomized and magnitude was determined using an adaptive sampling procedure. One hundred trials were provided and each subject's responses were fit with a psychometric curve to estimate the subject's threshold. Results: Roll-tilt perceptual thresholds at 0.2 Hz ranged from 0.5 degrees to 1.82 degrees across the 15 subjects (geometric mean of 1.04 degrees), consistent with previous studies. The inter-individual variability in thresholds may help explain individual differences observed in sensorimotor adaptation to spaceflight. Analysis is ongoing for the oVEMPs and computerized dynamic posturography to identify relationships between the various vestibular measures. Discussion: Predicting individual differences in sensorimotor adaptation is critical both for the development of personalized countermeasures and for mission planning. Here we aim to develop a basis of vestibular tests and parameters which may serve as predictors of individual differences in sensorimotor adaptability by studying the relationships between these measures.
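    The threshold-from-psychometric-curve step can be sketched as follows, fitting a cumulative Gaussian to the signed roll-tilt stimuli and binary direction responses; the least-squares fit and the one-sigma threshold readout are common simplifications, not necessarily the exact procedure used in the study.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(x, mu, sigma):
    # Probability of a "right ear down" response for signed tilt x (deg)
    return norm.cdf(x, loc=mu, scale=sigma)

def fit_threshold(tilts, responded_right):
    """tilts: signed magnitudes (deg); responded_right: 0/1 responses."""
    (mu, sigma), _ = curve_fit(psychometric, tilts, responded_right,
                               p0=[0.0, 1.0],
                               bounds=([-10.0, 1e-3], [10.0, 10.0]))
    return sigma  # one-sigma threshold, as commonly reported
```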

  1. Hypersensitivity to Cold Stimuli in Symptomatic Contact Lens Wearers

    PubMed Central

    Situ, Ping; Simpson, Trefford; Begley, Carolyn

    2016-01-01

    Purpose To examine the cooling thresholds and the estimated sensation magnitude at stimulus detection in controls and symptomatic and asymptomatic contact lens (CL) wearers, in order to determine whether detection thresholds depend on the presence of symptoms of dryness and discomfort. Methods 49 adapted CL wearers and 15 non-lens wearing controls had room temperature pneumatic thresholds measured using a custom Belmonte esthesiometer, during Visits 1 and 2 (Baseline CL), Visit 3 (2 weeks no CL wear) and Visit 4 (2 weeks after resuming CL wear). CL wearers were subdivided into symptomatic and asymptomatic groups based on comfortable wearing time (CWT) and CLDEQ-8 score (<8 hours CWT and ≥14 CLDEQ-8 stratified the symptom groups). Detection thresholds were estimated using an ascending method of limits and each threshold was the average of the three first-reported flow rates. The magnitude of intensity, coolness, irritation and pain at detection of the stimulus were estimated using a 1-100 scale (1 very mild, 100 very strong). Results In all measurement conditions, the symptomatic CL wearers were the most sensitive, the asymptomatic CL wearers were the least sensitive and the control group was between the two CL wearing groups (group factor p < 0.001, post hoc asymptomatic vs. symptomatic group, all p’s < 0.015). Similar patterns were found for the estimated magnitude of intensity and irritation (group effect p=0.027 and 0.006 for intensity and irritation, respectively) but not for cooling (p>0.05) at detection threshold. Conclusions Symptomatic CL wearers have higher cold detection sensitivity and report greater intensity and irritation sensation at stimulus detection than the asymptomatic wearers. Room temperature pneumatic esthesiometry may help to better understand the process of sensory adaptation to CL wear. PMID:27046090

  2. Policy Tree Optimization for Adaptive Management of Water Resources Systems

    NASA Astrophysics Data System (ADS)

    Herman, J. D.; Giuliani, M.

    2016-12-01

    Water resources systems must cope with irreducible uncertainty in supply and demand, requiring policy alternatives capable of adapting to a range of possible future scenarios. Recent studies have developed adaptive policies based on "signposts" or "tipping points", which are threshold values of indicator variables that signal a change in policy. However, there remains a need for a general method to optimize the choice of indicators and their threshold values in a way that is easily interpretable for decision makers. Here we propose a conceptual framework and computational algorithm to design adaptive policies as a tree structure (i.e., a hierarchical set of logical rules) using a simulation-optimization approach based on genetic programming. We demonstrate the approach using Folsom Reservoir, California as a case study, in which operating policies must balance the risk of both floods and droughts. Given a set of feature variables, such as reservoir level, inflow observations and forecasts, and time of year, the resulting policy defines the conditions under which flood control and water supply hedging operations should be triggered. Importantly, the tree-based rule sets are easy to interpret for decision making, and can be compared to historical operating policies to understand the adaptations needed under possible climate change scenarios. Several remaining challenges are discussed, including the empirical convergence properties of the method, and extensions to irreversible decisions such as infrastructure. Policy tree optimization, and corresponding open-source software, provide a generalizable, interpretable approach to designing adaptive policies under uncertainty for water resources systems.
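    The interpretability claim is easiest to see in code form: a policy tree is just a hierarchy of readable threshold rules. The feature names and threshold values below are hypothetical, not taken from the Folsom case study.

```python
def reservoir_policy(storage_af, inflow_forecast_af, day_of_year):
    """A hypothetical tree-structured operating policy."""
    if inflow_forecast_af > 40_000:          # hypothetical flood signpost
        return "flood_control_release"
    if storage_af < 300_000 and 90 <= day_of_year <= 270:
        return "hedge_water_supply"          # dry-season hedging rule
    return "standard_operation"
```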

  3. Adaptive threshold control for auto-rate fallback algorithm in IEEE 802.11 multi-rate WLANs

    NASA Astrophysics Data System (ADS)

    Wu, Qilin; Lu, Yang; Zhu, Xiaolin; Ge, Fangzhen

    2012-03-01

    The IEEE 802.11 standard supports multiple rates for data transmission in the physical layer. Nowadays, to improve network performance, a rate adaptation scheme called auto-rate fallback (ARF) is widely adopted in practice. However, the ARF scheme suffers performance degradation in environments with multiple contending nodes. In this article, we propose a novel rate adaptation scheme called ARF with adaptive threshold control. With multiple contending nodes, the proposed scheme can effectively mitigate the effect of frame collisions on rate adaptation decisions by adaptively adjusting the rate-up and rate-down thresholds according to the current collision level. Simulation results show that the proposed scheme can achieve significantly higher throughput than the other existing rate adaptation schemes. Furthermore, the simulation results also demonstrate that the proposed scheme can effectively respond to varying channel conditions.
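    A minimal sketch of the idea, assuming success/failure counters as in classic ARF and a placeholder mapping from the observed collision level to the rate-up and rate-down thresholds; the constants are illustrative.

```python
class AdaptiveARF:
    """ARF rate control with collision-aware threshold adjustment (sketch)."""

    def __init__(self, rates):
        self.rates, self.idx = rates, 0
        self.succ = self.fail = 0
        self.up_thr, self.down_thr = 10, 2  # classic ARF-style defaults

    def adapt(self, collision_level):
        # Higher collision level -> demand more successes before rating up
        self.up_thr = 10 + int(20 * collision_level)
        self.down_thr = 2 + int(4 * collision_level)

    def on_transmission(self, ok):
        if ok:
            self.succ, self.fail = self.succ + 1, 0
            if self.succ >= self.up_thr and self.idx < len(self.rates) - 1:
                self.idx, self.succ = self.idx + 1, 0
        else:
            self.fail, self.succ = self.fail + 1, 0
            if self.fail >= self.down_thr and self.idx > 0:
                self.idx, self.fail = self.idx - 1, 0
        return self.rates[self.idx]
```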

  4. Closed-loop adaptation of neurofeedback based on mental effort facilitates reinforcement learning of brain self-regulation.

    PubMed

    Bauer, Robert; Fels, Meike; Royter, Vladislav; Raco, Valerio; Gharabaghi, Alireza

    2016-09-01

    Considering self-rated mental effort during neurofeedback may improve training of brain self-regulation. Twenty-one healthy, right-handed subjects performed kinesthetic motor imagery of opening their left hand, while threshold-based classification of beta-band desynchronization resulted in proprioceptive robotic feedback. The experiment consisted of two blocks in a cross-over design. The participants rated their perceived mental effort nine times per block. In the adaptive block, the threshold was adjusted on the basis of these ratings whereas adjustments were carried out at random in the other block. Electroencephalography was used to examine the cortical activation patterns during the training sessions. The perceived mental effort was correlated with the difficulty threshold of neurofeedback training. Adaptive threshold-setting reduced mental effort and increased the classification accuracy and positive predictive value. This was paralleled by an inter-hemispheric cortical activation pattern in low frequency bands connecting the right frontal and left parietal areas. Optimal balance of mental effort was achieved at thresholds significantly higher than maximum classification accuracy. Rating of mental effort is a feasible approach for effective threshold-adaptation during neurofeedback training. Closed-loop adaptation of the neurofeedback difficulty level facilitates reinforcement learning of brain self-regulation.

  5. A simple plug-in bagging ensemble based on threshold-moving for classifying binary and multiclass imbalanced data.

    PubMed

    Collell, Guillem; Prelec, Drazen; Patil, Kaustubh R

    2018-01-31

    Class imbalance presents a major hurdle in the application of classification methods. A commonly taken approach is to learn ensembles of classifiers using rebalanced data. Examples include bootstrap averaging (bagging) combined with either undersampling or oversampling of the minority class examples. However, rebalancing methods entail asymmetric changes to the examples of different classes, which in turn can introduce their own biases. Furthermore, these methods often require specifying the performance measure of interest a priori, i.e., before learning. An alternative is to employ the threshold-moving technique, which applies a threshold to the continuous output of a model, offering the possibility to adapt to a performance measure a posteriori, i.e., a plug-in method. Surprisingly, little attention has been paid to this combination of a bagging ensemble and threshold-moving. In this paper, we study this combination and demonstrate its competitiveness. Contrary to other resampling methods, we preserve the natural class distribution of the data, resulting in well-calibrated posterior probabilities. Additionally, we extend the proposed method to handle multiclass data. We validated our method on binary and multiclass benchmark data sets using both decision trees and neural networks as base classifiers. We perform analyses that provide insights into the proposed method.
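    The plug-in idea translates directly into a few lines of scikit-learn: bag classifiers on the unmodified class distribution, then move the decision threshold a posteriori on held-out data to optimize the measure of interest (F1 here, as an example). This is a sketch of the general technique, not the authors' exact implementation.

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.metrics import f1_score
from sklearn.tree import DecisionTreeClassifier

def fit_and_move_threshold(X_train, y_train, X_val, y_val):
    # Bagging on the natural class distribution (no resampling tricks)
    clf = BaggingClassifier(DecisionTreeClassifier(), n_estimators=100,
                            random_state=0).fit(X_train, y_train)
    probs = clf.predict_proba(X_val)[:, 1]
    # Plug-in step: choose the threshold maximizing the target measure
    grid = np.linspace(0.01, 0.99, 99)
    best = max(grid, key=lambda t: f1_score(y_val, probs >= t))
    return clf, best  # later: predict with clf.predict_proba(X)[:, 1] >= best
```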

  6. Thresholds for the perception of whole-body linear sinusoidal motion in the horizontal plane

    NASA Technical Reports Server (NTRS)

    Mah, Robert W.; Young, Laurence R.; Steele, Charles R.; Schubert, Earl D.

    1989-01-01

    An improved linear sled has been developed to provide precise motion stimuli without generating perceptible extraneous motion cues (a noiseless environment). A modified adaptive forced-choice method was employed to determine perceptual thresholds to whole-body linear sinusoidal motion in 25 subjects. Thresholds for the detection of movement in the horizontal plane were found to be lower than those reported previously. At frequencies of 0.2 to 0.5 Hz, thresholds were shown to be independent of frequency, while at frequencies of 1.0 to 3.0 Hz, sensitivity decreased with increasing frequency, indicating that the perceptual process is not sensitive to the rate of change of acceleration of the motion stimulus. The results suggest that the perception of motion behaves as an integrating accelerometer with a bandwidth of at least 3 Hz.

  7. Threshold Values for Identification of Contamination Predicted by Reduced-Order Models

    DOE PAGES

    Last, George V.; Murray, Christopher J.; Bott, Yi-Ju; ...

    2014-12-31

    The U.S. Department of Energy’s (DOE’s) National Risk Assessment Partnership (NRAP) Project is developing reduced-order models to evaluate potential impacts on underground sources of drinking water (USDWs) if CO2 or brine leaks from deep CO2 storage reservoirs. Threshold values, below which there would be no predicted impacts, were determined for portions of two aquifer systems. These threshold values were calculated using an interwell approach for determining background groundwater concentrations that is an adaptation of methods described in the U.S. Environmental Protection Agency’s Unified Guidance for Statistical Analysis of Groundwater Monitoring Data at RCRA Facilities.

  8. Contrast adaptation induced by defocus - a possible error signal for emmetropization?

    PubMed

    Ohlendorf, Arne; Schaeffel, Frank

    2009-01-01

    To describe some features of contrast adaptation as induced by imposed positive or negative defocus, and to study its time course and its selectivity for the sign of the imposed defocus. Contrast adaptation, CA (here referred to as any change in supra-threshold contrast sensitivity), was induced by presenting a movie to the subjects on a computer screen at 1 m distance for 10 min, while the right eye was defocused by a trial lens (+4 D, n=25; -4 D, n=10; -2 D, n=11 subjects). The PowerRefractor was used to track accommodation binocularly. Contrast sensitivity at threshold was measured by a method of adjustment with a Gabor patch of 1 deg angular subtense, filled with a 3.22 cyc/deg sine wave grating presented on a computer screen at 1 m distance on a gray background (33 cd/m^2). Supra-threshold contrast sensitivity was quantified by an interocular contrast matching task, in which the subject had to match the contrast of the sine wave grating seen with the right eye with the contrast of a grating with a fixed contrast of 0.1. (1) Contrast sensitivity thresholds were not lowered by previous viewing of defocused movies. (2) By wearing positive lenses, the supra-threshold contrast sensitivity in the right eye was raised by about 30% and remained elevated for at least 2 min until baseline was reached after about 5 min. (3) CA was induced only by positive, but not by negative, lenses, even after the distance of the computer screen was taken into account (1 m, equivalent to +1 D). In five subjects, binocular accommodation was tracked over the full adaptation period. Accommodation appeared to focus the eye not wearing a lens, but short transient switches in focus to the lens-wearing eye could not be entirely excluded. Transient contrast adaptation was found at 3.22 cyc/deg when positive lenses were worn but not with negative lenses. This asymmetry is intriguing. While it may represent an epiphenomenon of physiological optics, further experiments are necessary to determine whether it could also trace back to differences in CA with defocus of different sign.

  9. The Hilbert-Huang Transform-Based Denoising Method for the TEM Response of a PRBS Source Signal

    NASA Astrophysics Data System (ADS)

    Hai, Li; Guo-qiang, Xue; Pan, Zhao; Hua-sen, Zhong; Khan, Muhammad Younis

    2016-08-01

    The denoising process is critical in processing transient electromagnetic (TEM) sounding data. For the full-waveform pseudo-random binary sequence (PRBS) response, an inadequate noise estimate may result in an erroneous interpretation. We consider the Hilbert-Huang transform (HHT) and its application to suppress the noise in the PRBS response. The focus is on the thresholding scheme to suppress the noise and on the analysis of the signal based on its Hilbert time-frequency representation. The method first decomposes the signal into intrinsic mode functions and then, inspired by the thresholding scheme in wavelet analysis, an adaptive interval thresholding is conducted to set to zero all the components in the intrinsic mode functions which are lower than a threshold related to the noise level. The algorithm is based on the characteristics of the PRBS response. The HHT-based denoising scheme is tested on synthetic and field data with different noise levels. The results show that the proposed method has a good capability for denoising and detail preservation.
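    A minimal sketch of the decompose-then-threshold idea, assuming the PyEMD package for the empirical mode decomposition; the universal threshold derived from the first IMF's noise estimate is a common choice borrowed from wavelet denoising, not necessarily the paper's adaptive interval rule.

```python
import numpy as np
from PyEMD import EMD

def hht_style_denoise(signal, n_noisy_imfs=3):
    sig = np.asarray(signal, dtype=float)
    imfs = EMD()(sig)
    # Robust noise estimate from the highest-frequency IMF
    sigma = np.median(np.abs(imfs[0])) / 0.6745
    thr = sigma * np.sqrt(2.0 * np.log(sig.size))  # universal threshold
    k = min(n_noisy_imfs, len(imfs))
    # Hard-threshold only the noisiest IMFs, preserving the slow trend
    imfs[:k] = np.where(np.abs(imfs[:k]) < thr, 0.0, imfs[:k])
    return imfs.sum(axis=0)
```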

  10. DARK ADAPTATION IN DINEUTES

    PubMed Central

    Clark, Leonard B.

    1938-01-01

    The level of dark adaptation of the whirligig beetle can be measured in terms of the threshold intensity calling forth a response. The course of dark adaptation was determined at levels of light adaptation of 6.5, 91.6, and 6100 foot-candles. All data can be fitted by the same curve. This indicates that dark adaptation follows parts of the same course irrespective of the level of light adaptation. The intensity of the adapting light determines the level at which dark adaptation will begin. The relation between log aI0 (instantaneous threshold) and the log of the adapting light intensity is linear over the range studied. PMID:19873056

  11. Comparison of in-air evoked potential and underwater behavioral hearing thresholds in four bottlenose dolphins (Tursiops truncatus).

    PubMed

    Finneran, James J; Houser, Dorian S

    2006-05-01

    Traditional behavioral techniques for hearing assessment in marine mammals are limited by the time and access required to train subjects. Electrophysiological methods, where passive electrodes are used to measure auditory evoked potentials (AEPs), are attractive alternatives to behavioral techniques; however, there have been few attempts to compare AEP and behavioral results for the same subject. In this study, behavioral and AEP hearing thresholds were compared in four bottlenose dolphins. AEP thresholds were measured in-air using a piezoelectric sound projector embedded in a suction cup to deliver amplitude modulated tones to the dolphin through the lower jaw. Evoked potentials were recorded noninvasively using surface electrodes. Adaptive procedures allowed AEP hearing thresholds to be estimated from 10 to 150 kHz in a single ear in about 45 min. Behavioral thresholds were measured in a quiet pool and in San Diego Bay. AEP and behavioral threshold estimates agreed closely as to the upper cutoff frequency beyond which thresholds increased sharply. AEP thresholds were strongly correlated with pool behavioral thresholds across the range of hearing; differences between AEP and pool behavioral thresholds increased with threshold magnitude and ranged from 0 to +18 dB.

  12. A robust bi-orthogonal/dynamically-orthogonal method using the covariance pseudo-inverse with application to stochastic flow problems

    NASA Astrophysics Data System (ADS)

    Babaee, Hessam; Choi, Minseok; Sapsis, Themistoklis P.; Karniadakis, George Em

    2017-09-01

    We develop a new robust methodology for the stochastic Navier-Stokes equations based on the dynamically-orthogonal (DO) and bi-orthogonal (BO) methods [1-3]. Both approaches are variants of a generalized Karhunen-Loève (KL) expansion in which both the stochastic coefficients and the spatial basis evolve according to system dynamics, hence capturing the low-dimensional structure of the solution. The DO and BO formulations are mathematically equivalent [3], but they exhibit computationally complementary properties. Specifically, the BO formulation may fail due to crossing of the eigenvalues of the covariance matrix, while both BO and DO become unstable when the covariance matrix has a high condition number or zero eigenvalues. To this end, we combine the two methods into a robust hybrid framework, and in addition we employ a pseudo-inverse technique to invert the covariance matrix. The robustness of the proposed method stems from addressing the following issues in the DO/BO formulation: (i) eigenvalue crossing: we resolve the issue of eigenvalue crossing in the BO formulation by switching to the DO near eigenvalue crossing using the equivalence theorem and switching back to BO when the distance between eigenvalues is larger than a threshold value; (ii) ill-conditioned covariance matrix: we utilize a pseudo-inverse strategy to invert the covariance matrix; (iii) adaptivity: we utilize an adaptive strategy to add/remove modes to resolve the covariance matrix up to a threshold value. In particular, we introduce a soft-threshold criterion to allow the system to adapt to the newly added/removed mode and therefore avoid repetitive and unnecessary mode addition/removal. When the total variance approaches zero, we show that the DO/BO formulation becomes equivalent to the evolution equation of the Optimally Time-Dependent modes [4]. We demonstrate the capability of the proposed methodology with several numerical examples, namely (i) stochastic Burgers equation: we analyze the performance of the method in the presence of eigenvalue crossing and zero eigenvalues; (ii) stochastic Kovasznay flow: we examine the method in the presence of a singular covariance matrix; and (iii) we examine the adaptivity of the method for an incompressible flow over a cylinder where, for large stochastic forcing, thirteen DO/BO modes are active.
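    Two of the stabilizing ingredients, the covariance pseudo-inverse and the threshold-based mode addition/removal, can be sketched briefly; the threshold values and the soft band between them are illustrative, not those of the paper.

```python
import numpy as np

def stable_covariance_inverse(C, rcond=1e-10):
    # Moore-Penrose pseudo-inverse copes with zero or tiny eigenvalues
    return np.linalg.pinv(C, rcond=rcond)

def mode_count_action(C, add_thr=1e-6, remove_thr=1e-9):
    """Decide whether to add or remove a KL mode (soft-threshold sketch)."""
    smallest = np.linalg.eigvalsh(C).min()
    if smallest > add_thr:      # least-energetic mode still significant
        return "add_mode"
    if smallest < remove_thr:   # mode carries negligible variance
        return "remove_mode"
    return "keep"               # soft band avoids repeated add/remove
```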

  13. Adaptive semantic tag mining from heterogeneous clinical research texts.

    PubMed

    Hao, T; Weng, C

    2015-01-01

    To develop an adaptive approach to mine frequent semantic tags (FSTs) from heterogeneous clinical research texts. We develop a "plug-n-play" framework that integrates replaceable unsupervised kernel algorithms with formatting, functional, and utility wrappers for FST mining. Temporal information identification and semantic equivalence detection were two example functional wrappers. We first compared this approach's recall and efficiency for mining FSTs from ClinicalTrials.gov to those of a recently published tag-mining algorithm. Then we assessed this approach's adaptability to two other types of clinical research texts, clinical data requests and clinical trial protocols, by comparing the prevalence trends of FSTs across the three texts. Our approach increased the average recall and speed by 12.8% and 47.02%, respectively, over the baseline when mining FSTs from ClinicalTrials.gov, and maintained an overlap in relevant FSTs with the baseline ranging between 76.9% and 100% for varying FST frequency thresholds. The FSTs saturated when the data size reached 200 documents. Consistent trends in the prevalence of FSTs were observed across the three texts as the data size or frequency threshold changed. This paper contributes an adaptive tag-mining framework that is scalable and adaptable without sacrificing recall. This component-based architectural design can potentially be generalized to improve the adaptability of other clinical text mining methods.

  14. A lane line segmentation algorithm based on adaptive threshold and connected domain theory

    NASA Astrophysics Data System (ADS)

    Feng, Hui; Xu, Guo-sheng; Han, Yi; Liu, Yang

    2018-04-01

    Before detecting cracks and repairs on road lanes, it is necessary to eliminate the influence of lane lines on the recognition result in road lane images. Aiming at the problems caused by lane lines, an image segmentation algorithm based on adaptive thresholding and connected domains is proposed. First, by analyzing features such as the grey-level distribution and the illumination of the images, the algorithm uses the Hough transform to divide the images into different sections and converts them into binary images separately. It then uses connected domain theory to amend the outcome of the segmentation, remove noise and fill the interior of the lane lines. Experiments have proved that this method can eliminate the influence of illumination and lane line abrasion, removing noise thoroughly while maintaining high segmentation precision.
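    The two stages map naturally onto OpenCV primitives: a local-mean adaptive threshold, followed by connected-component filtering to discard noise. Block size, offset, and minimum area below are illustrative values, not the paper's parameters.

```python
import cv2
import numpy as np

def segment_lane_lines(gray, min_area=150):
    # Local-mean adaptive threshold; negative C keeps pixels brighter
    # than their neighborhood mean, which suits bright lane markings
    binary = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                   cv2.THRESH_BINARY, blockSize=31, C=-5)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
    out = np.zeros_like(binary)
    for i in range(1, n):  # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] >= min_area:
            out[labels == i] = 255  # keep large connected components only
    return out
```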

  15. Unipolar Terminal-Attractor Based Neural Associative Memory with Adaptive Threshold

    NASA Technical Reports Server (NTRS)

    Liu, Hua-Kuang (Inventor); Barhen, Jacob (Inventor); Farhat, Nabil H. (Inventor); Wu, Chwan-Hwa (Inventor)

    1996-01-01

    A unipolar terminal-attractor based neural associative memory (TABAM) system with adaptive threshold for perfect convergence is presented. By adaptively setting the threshold values for the dynamic iteration for the unipolar binary neuron states with terminal-attractors for the purpose of reducing the spurious states in a Hopfield neural network for associative memory and using the inner-product approach, perfect convergence and correct retrieval is achieved. Simulation is completed with a small number of stored states (M) and a small number of neurons (N) but a large M/N ratio. An experiment with optical exclusive-OR logic operation using LCTV SLMs shows the feasibility of optoelectronic implementation of the models. A complete inner-product TABAM is implemented using a PC for calculation of adaptive threshold values to achieve a unipolar TABAM (UIT) in the case where there is no crosstalk, and a crosstalk model (CRIT) in the case where crosstalk corrupts the desired state.

  16. Unipolar terminal-attractor based neural associative memory with adaptive threshold

    NASA Technical Reports Server (NTRS)

    Liu, Hua-Kuang (Inventor); Barhen, Jacob (Inventor); Farhat, Nabil H. (Inventor); Wu, Chwan-Hwa (Inventor)

    1993-01-01

    A unipolar terminal-attractor based neural associative memory (TABAM) system with adaptive threshold for perfect convergence is presented. By adaptively setting the threshold values for the dynamic iteration for the unipolar binary neuron states with terminal-attractors for the purpose of reducing the spurious states in a Hopfield neural network for associative memory and using the inner product approach, perfect convergence and correct retrieval is achieved. Simulation is completed with a small number of stored states (M) and a small number of neurons (N) but a large M/N ratio. An experiment with optical exclusive-OR logic operation using LCTV SLMs shows the feasibility of optoelectronic implementation of the models. A complete inner-product TABAM is implemented using a PC for calculation of adaptive threshold values to achieve a unipolar TABAM (UIT) in the case where there is no crosstalk, and a crosstalk model (CRIT) in the case where crosstalk corrupts the desired state.

  17. Mass Detection in Mammographic Images Using Wavelet Processing and Adaptive Threshold Technique.

    PubMed

    Vikhe, P S; Thool, V R

    2016-04-01

    Detection of masses in mammograms for early diagnosis of breast cancer is a significant task in reducing the mortality rate. However, in some cases, screening for masses is a difficult task for the radiologist, due to variation in contrast, fuzzy edges and noisy mammograms. Masses and micro-calcifications are the distinctive signs for diagnosis of breast cancer. This paper presents a method for mass enhancement using a piecewise linear operator in combination with wavelet processing of mammographic images. The method includes artifact suppression and pectoral muscle removal based on morphological operations. Finally, mass segmentation for detection using an adaptive threshold technique is carried out to separate the mass from the background. The proposed method has been tested on 130 (45 + 85) images with 90.9 and 91% True Positive Fraction (TPF) at 2.35 and 2.1 average False Positives Per Image (FP/I) from two different databases, namely the Mammographic Image Analysis Society (MIAS) and the Digital Database for Screening Mammography (DDSM). The obtained results show that the proposed technique supports improved diagnosis in early breast cancer detection.

  18. A modified JPEG-LS lossless compression method for remote sensing images

    NASA Astrophysics Data System (ADS)

    Deng, Lihua; Huang, Zhenghua

    2015-12-01

    Like many variable-length source coders, JPEG-LS is highly vulnerable to the channel errors that occur in the transmission of remote sensing images. Error diffusion is one of the important factors that affect its robustness. A common method of improving the error resilience of JPEG-LS is dividing the image into many strips or blocks and then coding each of them independently, but this method reduces the coding efficiency. In this paper, a block-based JPEG-LS lossless compression method with an adaptive parameter is proposed. In the modified scheme, the threshold parameter RESET is adapted to the image, and the compression efficiency is close to that of conventional JPEG-LS.

  19. Longitudinal changes in femur bone mineral density after spinal cord injury: effects of slice placement and peel method

    PubMed Central

    Dudley-Javoroski, S.

    2010-01-01

    Summary Surveillance of femur metaphysis bone mineral density (BMD) decline after spinal cord injury (SCI) may be subject to slice placement error of 2.5%. Adaptations to anti-osteoporosis measures should exceed this potential source of error. Image analysis parameters likewise affect BMD output and should be selected strategically in longitudinal studies. Introduction Understanding the longitudinal changes in bone mineral density (BMD) after spinal cord injury (SCI) is important when assessing new interventions. We determined the longitudinal effect of SCI on BMD of the femur metaphysis. To facilitate interpretation of longitudinal outcomes, we (1) determined the BMD difference associated with erroneous peripheral quantitative computed tomography (pQCT) slice placement, and (2) determined the effect of operator-selected pQCT peel algorithms on BMD. Methods pQCT images were obtained from the femur metaphysis (12% of length from distal end) of adult subjects with and without SCI. Slice placement errors were simulated at 3 mm intervals and were processed in two ways (threshold-based vs. concentric peel). Results BMD demonstrated a rapid decline over 2 years post-injury. BMD differences attributable to operator-selected peel methods were large (17.3% for subjects with SCI). Conclusions Femur metaphysis BMD declines after SCI in a manner similar to other anatomic sites. Concentric (percentage-based) peel methods may be most appropriate when special sensitivity is required to detect BMD adaptations. Threshold-based methods may be more appropriate when asymmetric adaptations are observed. PMID:19707702

  20. Lowering threshold energy for femtosecond laser pulse photodisruption through turbid media using adaptive optics

    NASA Astrophysics Data System (ADS)

    Hansen, A.; Ripken, Tammo; Krueger, Ronald R.; Lubatschowski, Holger

    2011-03-01

    Focussed femtosecond laser pulses are applied in ophthalmic tissues to create an optical breakdown and therefore a tissue dissection through photodisruption. The threshold irradiance for the optical breakdown depends on the photon density in the focal volume, which can be influenced by the pulse energy, the size of the irradiated area (focus), and the irradiation time. For an application in the posterior eye segment, the aberrations of the anterior eye elements cause a distortion of the wavefront and therefore an increased focal volume, which reduces the photon density and thus raises the energy required for surpassing the threshold irradiance. The influence of adaptive optics on lowering the pulse energy required for photodisruption by refining a distorted focus was investigated. A reduction of the threshold energy can be shown when using adaptive optics. The spatial confinement with adaptive optics furthermore raises the irradiance at constant pulse energy. The lowered threshold energy allows for tissue dissection with reduced peripheral damage. This offers the possibility of moving femtosecond laser surgery from corneal or lenticular applications in the anterior eye to vitreal or retinal applications in the posterior eye.

  1. Shear wave speed estimation by adaptive random sample consensus method.

    PubMed

    Lin, Haoming; Wang, Tianfu; Chen, Siping

    2014-01-01

    This paper describes a new method for shear wave velocity estimation that is capable of excluding outliers automatically without a preset threshold. The proposed method is an adaptive random sample consensus (ARANDSAC), and the criterion used here is finding a certain percentage of inliers according to the closest-distance criterion. To evaluate the method, the simulation and phantom experiment results were compared with linear regression with all points (LRWAP) and the Radon sum transform (RS) method. The assessment reveals that the relative biases of the mean estimates are 20.00%, 4.67% and 5.33% for LRWAP, ARANDSAC and RS, respectively, for the simulation, and 23.53%, 4.08% and 1.08% for the phantom experiment. The results suggest that the proposed ARANDSAC algorithm is accurate in shear wave speed estimation.
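    The robust-fit idea can be sketched with scikit-learn: fit arrival time against lateral position with RANSAC so that outlier time estimates are rejected, then read the speed off the slope. Plain fixed-threshold RANSAC stands in here for the paper's adaptive variant, and the units are assumptions.

```python
import numpy as np
from sklearn.linear_model import RANSACRegressor

def shear_wave_speed(lateral_mm, arrival_ms):
    """Speed from a robust line fit of arrival time vs. lateral position."""
    X = np.asarray(lateral_mm, dtype=float).reshape(-1, 1)
    t = np.asarray(arrival_ms, dtype=float)
    model = RANSACRegressor(random_state=0).fit(X, t)
    slope_ms_per_mm = model.estimator_.coef_[0]
    return 1.0 / slope_ms_per_mm  # mm/ms, numerically equal to m/s
```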

  2. A human visual based binarization technique for histological images

    NASA Astrophysics Data System (ADS)

    Shreyas, Kamath K. M.; Rajendran, Rahul; Panetta, Karen; Agaian, Sos

    2017-05-01

    In the field of vision-based systems for object detection and classification, thresholding is a key pre-processing step. Thresholding is a well-known technique for image segmentation. Segmentation of medical images, such as Computed Axial Tomography (CAT), Magnetic Resonance Imaging (MRI), X-ray, phase contrast microscopy, and histological images, presents problems such as high variability in human anatomy and variation across modalities. Recent advances in computer-aided diagnosis of histological images help facilitate detection and classification of diseases. Since most pathology diagnosis depends on the expertise and ability of the pathologist, there is clearly a need for an automated assessment system. Histological images are stained to a specific color to differentiate each component in the tissue. Segmentation and analysis of such images is problematic, as they present high variability in terms of color and cell clusters. This paper presents an adaptive thresholding technique that aims at segmenting cell structures from Haematoxylin and Eosin stained images. The thresholded result can further be used by pathologists to perform effective diagnosis. The effectiveness of the proposed method is analyzed by visually comparing the results to state-of-the-art thresholding methods such as Otsu, Niblack, Sauvola, Bernsen, and Wolf. Computer simulations demonstrate the efficiency of the proposed method in segmenting critical information.
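    For reference, the comparison baseline is readily reproduced with scikit-image, which ships Otsu, Niblack, and Sauvola thresholds; the window size is an illustrative choice and the input is assumed to be a single-channel (e.g. deconvolved stain) image.

```python
from skimage.filters import (threshold_niblack, threshold_otsu,
                             threshold_sauvola)

def binarize_all(gray, window_size=25):
    """Binary masks from three standard thresholding baselines."""
    return {
        "otsu": gray > threshold_otsu(gray),
        "niblack": gray > threshold_niblack(gray, window_size=window_size),
        "sauvola": gray > threshold_sauvola(gray, window_size=window_size),
    }
```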

  3. Adaptation to Climate Change: A Comparative Analysis of Modeling Methods for Heat-Related Mortality.

    PubMed

    Gosling, Simon N; Hondula, David M; Bunker, Aditi; Ibarreta, Dolores; Liu, Junguo; Zhang, Xinxin; Sauerborn, Rainer

    2017-08-16

    Multiple methods are employed for modeling adaptation when projecting the impact of climate change on heat-related mortality. The sensitivity of impacts to each is unknown because they have never been systematically compared. In addition, little is known about the relative sensitivity of impacts to "adaptation uncertainty" (i.e., the inclusion/exclusion of adaptation modeling) relative to using multiple climate models and emissions scenarios. This study had three aims: a) compare the range in projected impacts that arises from using different adaptation modeling methods; b) compare the range in impacts that arises from adaptation uncertainty with ranges from using multiple climate models and emissions scenarios; c) recommend modeling method(s) to use in future impact assessments. We estimated impacts for 2070-2099 for 14 European cities, applying six different methods for modeling adaptation; we also estimated impacts with five climate models run under two emissions scenarios to explore the relative effects of climate modeling and emissions uncertainty. The range of the difference (percent) in impacts between including and excluding adaptation, irrespective of climate modeling and emissions uncertainty, can be as low as 28% with one method and up to 103% with another (mean across 14 cities). In 13 of 14 cities, the ranges in projected impacts due to adaptation uncertainty are larger than those associated with climate modeling and emissions uncertainty. Researchers should carefully consider how to model adaptation because it is a source of uncertainty that can be greater than the uncertainty in emissions and climate modeling. We recommend absolute threshold shifts and reductions in slope. https://doi.org/10.1289/EHP634.

  4. Adaptive Event-Triggered Control Based on Heuristic Dynamic Programming for Nonlinear Discrete-Time Systems.

    PubMed

    Dong, Lu; Zhong, Xiangnan; Sun, Changyin; He, Haibo

    2017-07-01

    This paper presents the design of a novel adaptive event-triggered control method based on the heuristic dynamic programming (HDP) technique for nonlinear discrete-time systems with unknown system dynamics. In the proposed method, the control law is only updated when the event-triggered condition is violated. Compared with the periodic updates in the traditional adaptive dynamic programming (ADP) control, the proposed method can reduce the computation and transmission cost. An actor-critic framework is used to learn the optimal event-triggered control law and the value function. Furthermore, a model network is designed to estimate the system state vector. The main contribution of this paper is to design a new trigger threshold for discrete-time systems. A detailed Lyapunov stability analysis shows that our proposed event-triggered controller can asymptotically stabilize the discrete-time systems. Finally, we test our method on two different discrete-time systems, and the simulation results are included.

  5. Fission gas bubble identification using MATLAB's image processing toolbox

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Collette, R.

    Automated image processing routines have the potential to aid in the fuel performance evaluation process by eliminating bias in human judgment that may vary from person to person or sample to sample. This study presents several MATLAB-based image analysis routines designed for fission gas void identification in post-irradiation examination of uranium molybdenum (U–Mo) monolithic-type plate fuels. Frequency domain filtration, enlisted as a pre-processing technique, can eliminate artifacts from the image without compromising the critical features of interest. This process is coupled with a bilateral filter, an edge-preserving noise removal technique aimed at preparing the image for optimal segmentation. Adaptive thresholding proved to be the most consistent gray-level feature segmentation technique for U–Mo fuel microstructures. The Sauvola adaptive threshold technique segments the image based on histogram weighting factors in stable contrast regions and local statistics in variable contrast regions. Once all processing is complete, the algorithm outputs the total fission gas void count, the mean void size, and the average porosity. The final results demonstrate an ability to extract fission gas void morphological data faster, more consistently, and at least as accurately as manual segmentation methods. - Highlights: •Automated image processing can aid in the fuel qualification process. •Routines are developed to characterize fission gas bubbles in irradiated U–Mo fuel. •Frequency domain filtration effectively eliminates FIB curtaining artifacts. •Adaptive thresholding proved to be the most accurate segmentation method. •The techniques established are ready to be applied to large-scale data extraction testing.

  6. EEG Artifact Removal Using a Wavelet Neural Network

    NASA Technical Reports Server (NTRS)

    Nguyen, Hoang-Anh T.; Musson, John; Li, Jiang; McKenzie, Frederick; Zhang, Guangfan; Xu, Roger; Richey, Carl; Schnell, Tom

    2011-01-01

    In this paper we developed a wavelet neural network (WNN) algorithm for electroencephalogram (EEG) artifact removal without electrooculographic (EOG) recordings. The algorithm combines the universal approximation characteristics of neural networks and the time/frequency properties of wavelets. We compared the WNN algorithm with the ICA technique and a wavelet thresholding method, which was realized by using Stein's unbiased risk estimate (SURE) with an adaptive gradient-based optimal threshold. Experimental results on a driving test data set show that WNN can remove EEG artifacts effectively without diminishing useful EEG information, even for very noisy data.

  7. Quantification of pulmonary vessel diameter in low-dose CT images

    NASA Astrophysics Data System (ADS)

    Rudyanto, Rina D.; Ortiz de Solórzano, Carlos; Muñoz-Barrutia, Arrate

    2015-03-01

    Accurate quantification of vessel diameter in low-dose Computed Tomography (CT) images is important for studying pulmonary diseases, in particular for the diagnosis of vascular diseases and the characterization of morphological vascular remodeling in Chronic Obstructive Pulmonary Disease (COPD). In this study, we objectively compare several vessel diameter estimation methods using a physical phantom. Five solid tubes of differing diameters (from 0.898 to 3.980 mm) were embedded in foam, simulating vessels in the lungs. To measure the diameters, we first extracted the vessels using either of two approaches: vessel enhancement using multi-scale Hessian matrix computation, or explicit segmentation using an intensity threshold. We implemented six methods to quantify the diameter: three estimating diameter as a function of the scale used to calculate the Hessian matrix; two calculating an equivalent diameter from the cross-section area obtained by thresholding the intensity and vesselness response, respectively; and finally, estimating the diameter of the object using the Full Width at Half Maximum (FWHM). We find that the accuracy of the frequently used methods estimating vessel diameter from the multi-scale vesselness filter depends on the range and the number of scales used. Moreover, these methods still yield a significant error margin on the challenging estimation of the smallest diameter (on the order of or below the size of the CT point spread function). Obviously, the performance of the thresholding-based methods depends on the value of the threshold. Finally, we observe that a simple adaptive thresholding approach can achieve a robust and accurate estimation of the smallest vessel diameters.
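    Of the six estimators, the FWHM method is the simplest to make concrete: find the half-maximum level of a 1-D intensity profile drawn across the vessel and interpolate the two crossings. The sketch below assumes a single bright vessel on a darker background.

```python
import numpy as np

def fwhm_diameter(profile, pixel_size_mm):
    """Full Width at Half Maximum of a 1-D cross-vessel intensity profile."""
    p = np.asarray(profile, dtype=float)
    base, peak = p.min(), p.max()
    half = base + 0.5 * (peak - base)
    above = np.where(p >= half)[0]
    l, r = above[0], above[-1]
    # Sub-pixel interpolation of the crossing on each side
    left = l - 1 + (half - p[l - 1]) / (p[l] - p[l - 1]) if l > 0 else float(l)
    right = r + (p[r] - half) / (p[r] - p[r + 1]) if r < p.size - 1 else float(r)
    return (right - left) * pixel_size_mm
```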

  8. Protecting the entanglement of twisted photons by adaptive optics

    NASA Astrophysics Data System (ADS)

    Leonhard, Nina; Sorelli, Giacomo; Shatokhin, Vyacheslav N.; Reinlein, Claudia; Buchleitner, Andreas

    2018-01-01

    We study the efficiency of adaptive optics (AO) correction for the free-space propagation of entangled photonic orbital-angular-momentum (OAM) qubit states to reverse moderate atmospheric turbulence distortions. We show that AO can significantly reduce crosstalk to modes within and outside the encoding subspace and thereby stabilize entanglement against turbulence. This method establishes a reliable quantum channel for OAM photons in turbulence, and it enhances the threshold turbulence strength for secure quantum communication by at least a factor of 2.

  9. Predicting coral bleaching hotspots: the role of regional variability in thermal stress and potential adaptation rates

    NASA Astrophysics Data System (ADS)

    Teneva, Lida; Karnauskas, Mandy; Logan, Cheryl A.; Bianucci, Laura; Currie, Jock C.; Kleypas, Joan A.

    2012-03-01

    Sea surface temperature fields (1870-2100) forced by CO2-induced climate change under the IPCC SRES A1B CO2 scenario, from three World Climate Research Programme Coupled Model Intercomparison Project Phase 3 (WCRP CMIP3) models (CCSM3, CSIRO MK 3.5, and GFDL CM 2.1), were used to examine how coral sensitivity to thermal stress and rates of adaptation affect global projections of coral-reef bleaching. The focus of this study was two-fold: (1) to assess how the choice of Degree-Heating-Month (DHM) thermal stress threshold affects potential bleaching predictions, and (2) to examine the effect of hypothetical adaptation rates of corals to rising temperature. DHM values were estimated using a conventional threshold of 1°C and a variability-based threshold of 2σ above the climatological maximum. Coral adaptation rates were simulated as a function of historical 100-year exposure to maximum annual SSTs, with a dynamic rather than static climatological maximum based on the previous 100 years for a given reef cell. Within the CCSM3 simulations, the 1°C threshold predicted a later onset of mild bleaching every 5 years for the fraction of reef grid cells where 1°C > 2σ of the climatology time series of annual SST maxima (1961-1990). Alternatively, DHM values using both thresholds, with CSIRO MK 3.5 and GFDL CM 2.1 SSTs, did not produce drastically different onset timing for bleaching every 5 years. Across models, DHMs based on the 1°C thermal stress threshold show that the most threatened reefs by 2100 could be in the Central and Western Equatorial Pacific, whereas use of the variability-based threshold for DHMs yields the Coral Triangle and parts of Micronesia and Melanesia as bleaching hotspots. Simulations that allow corals to adapt to increases in maximum SST drastically reduce the rates of bleaching. These findings highlight the importance of considering the thermal stress threshold in DHM estimates, as well as potential adaptation models, in future coral bleaching projections.
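    The two DHM threshold choices reduce to one line each; the sketch below accumulates monthly exceedances over a rolling window, with the window length and the 1961-1990 climatology inputs as assumptions about the setup rather than the study's exact procedure.

```python
import numpy as np

def degree_heating_months(sst_monthly, clim_max, clim_std,
                          rule="fixed", window=4):
    """DHM with either the fixed 1°C or the variability-based 2σ threshold."""
    sst = np.asarray(sst_monthly, dtype=float)
    thr = clim_max + (1.0 if rule == "fixed" else 2.0 * clim_std)
    exceedance = np.clip(sst - thr, 0.0, None)
    # Rolling sum of exceedances ending at each month
    return np.convolve(exceedance, np.ones(window))[:exceedance.size]
```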

  10. Adaptation to Climate Change: A Comparative Analysis of Modeling Methods for Heat-Related Mortality

    PubMed Central

    Hondula, David M.; Bunker, Aditi; Ibarreta, Dolores; Liu, Junguo; Zhang, Xinxin; Sauerborn, Rainer

    2017-01-01

    Background: Multiple methods are employed for modeling adaptation when projecting the impact of climate change on heat-related mortality. The sensitivity of impacts to each is unknown because they have never been systematically compared. In addition, little is known about the relative sensitivity of impacts to “adaptation uncertainty” (i.e., the inclusion/exclusion of adaptation modeling) relative to using multiple climate models and emissions scenarios. Objectives: This study had three aims: a) Compare the range in projected impacts that arises from using different adaptation modeling methods; b) compare the range in impacts that arises from adaptation uncertainty with ranges from using multiple climate models and emissions scenarios; c) recommend modeling method(s) to use in future impact assessments. Methods: We estimated impacts for 2070–2099 for 14 European cities, applying six different methods for modeling adaptation; we also estimated impacts with five climate models run under two emissions scenarios to explore the relative effects of climate modeling and emissions uncertainty. Results: The range of the difference (percent) in impacts between including and excluding adaptation, irrespective of climate modeling and emissions uncertainty, can be as low as 28% with one method and up to 103% with another (mean across 14 cities). In 13 of 14 cities, the ranges in projected impacts due to adaptation uncertainty are larger than those associated with climate modeling and emissions uncertainty. Conclusions: Researchers should carefully consider how to model adaptation because it is a source of uncertainty that can be greater than the uncertainty in emissions and climate modeling. We recommend absolute threshold shifts and reductions in slope. https://doi.org/10.1289/EHP634 PMID:28885979

  11. Fundus-controlled two-color dark adaptometry with the Microperimeter MP1.

    PubMed

    Bowl, Wadim; Stieger, Knut; Lorenz, Birgit

    2015-06-01

    The aim of this study was to provide fundus-controlled two-color adaptometry with an existing device. A quick and easy approach extends the application possibilities of a commercial fundus-controlled perimeter. An external filter holder was placed in front of the objective lens of the MP1 (Nidek, Italy) and fitted with filters to modify background and stimulus intensity and color. Prior to dark adaptometry, the subject's visual sensitivity profile was measured for red and blue stimuli to determine whether rods or cones or both mediated the absolute threshold. After light adaptation, 20 healthy subjects were investigated with a pattern covering six spots at the posterior pole of the retina for up to 45 min of dark adaptation. Thresholds were determined using a 200 ms red Goldmann IV and a blue Goldmann II stimulus. The pre-test sensitivity showed a typical distribution of values along the meridian, with high peripheral light increment sensitivity (LIS) and low central LIS for rods, and the reverse for cones. After bleach, threshold recovery had a classic biphasic shape. The absolute threshold was reached after approximately 10 min for the red and 15 min for the blue stimulus. Two-color fundus-controlled adaptometry with a commercial MP1, without internal changes to the device, provides a quick and easy examination of rod and cone function during dark adaptation at defined retinal loci of the posterior pole. This innovative method will be helpful for measuring rod vs. cone function at known loci of the posterior pole in early stages of retinal degeneration.

  12. Validation of various adaptive threshold methods of segmentation applied to follicular lymphoma digital images stained with 3,3’-Diaminobenzidine&Haematoxylin

    PubMed Central

    2013-01-01

    The comparative study of the results of various segmentation methods for digital images of follicular lymphoma cancer tissue sections is described in this paper. The sensitivity, specificity and some other parameters of the following adaptive threshold methods of segmentation are calculated: the Niblack method, the Sauvola method, the White method, the Bernsen method, the Yasuda method and the Palumbo method. The methods are applied to three types of images constructed by extraction of the brown colour information from artificial images synthesized based on counterpart experimentally captured images. This paper demonstrates the usefulness of the microscopic image synthesis method in the evaluation and comparison of image processing results. A thorough analysis of a broad range of adaptive threshold methods applied to (1) the blue channel of RGB, (2) the brown colour extracted by deconvolution and (3) the 'brown component' extracted from RGB makes it possible to select pairs of method and image type for which a given method is most efficient, considering various criteria, e.g. accuracy and precision in area detection or accuracy in the number of objects detected. The comparison shows that the results of the White, the Bernsen and the Sauvola methods are better than those of the rest of the methods for all types of monochromatic images. All three methods segment the immunopositive nuclei with mean accuracies of 0.9952, 0.9942 and 0.9944, respectively, when treated overall. However, the best results are achieved for the monochromatic image in which intensity shows the brown colour map constructed by the colour deconvolution algorithm. The specificity in the cases of the Bernsen and the White methods is 1, and the sensitivities are 0.74 for the White and 0.91 for the Bernsen method, while the Sauvola method achieves a sensitivity of 0.74 and a specificity of 0.99. According to the Bland-Altman plot, the objects selected by the Sauvola method are segmented without undercutting the area of true positive objects but with extra false positive objects. The Sauvola and the Bernsen methods give complementary results, which will be exploited when a new method of virtual tissue slide segmentation is developed. Virtual Slides The virtual slides for this article can be found here: slide 1: http://diagnosticpathology.slidepath.com/dih/webViewer.php?snapshotId=13617947952577 and slide 2: http://diagnosticpathology.slidepath.com/dih/webViewer.php?snapshotId=13617948230017. PMID:23531405

  13. Validation of various adaptive threshold methods of segmentation applied to follicular lymphoma digital images stained with 3,3'-Diaminobenzidine&Haematoxylin.

    PubMed

    Korzynska, Anna; Roszkowiak, Lukasz; Lopez, Carlos; Bosch, Ramon; Witkowski, Lukasz; Lejeune, Marylene

    2013-03-25

    The comparative study of the results of various segmentation methods for digital images of follicular lymphoma cancer tissue sections is described in this paper. The sensitivity, specificity and some other parameters of the following adaptive threshold methods of segmentation are calculated: the Niblack method, the Sauvola method, the White method, the Bernsen method, the Yasuda method and the Palumbo method. The methods are applied to three types of images constructed by extraction of the brown colour information from artificial images synthesized based on counterpart experimentally captured images. This paper demonstrates the usefulness of the microscopic image synthesis method in the evaluation and comparison of image processing results. A thorough analysis of a broad range of adaptive threshold methods applied to (1) the blue channel of RGB, (2) the brown colour extracted by deconvolution and (3) the 'brown component' extracted from RGB makes it possible to select pairs of method and image type for which a given method is most efficient, considering various criteria, e.g. accuracy and precision in area detection or accuracy in the number of objects detected. The comparison shows that the results of the White, the Bernsen and the Sauvola methods are better than those of the rest of the methods for all types of monochromatic images. All three methods segment the immunopositive nuclei with mean accuracies of 0.9952, 0.9942 and 0.9944, respectively, when treated overall. However, the best results are achieved for the monochromatic image in which intensity shows the brown colour map constructed by the colour deconvolution algorithm. The specificity in the cases of the Bernsen and the White methods is 1, and the sensitivities are 0.74 for the White and 0.91 for the Bernsen method, while the Sauvola method achieves a sensitivity of 0.74 and a specificity of 0.99. According to the Bland-Altman plot, the objects selected by the Sauvola method are segmented without undercutting the area of true positive objects but with extra false positive objects. The Sauvola and the Bernsen methods give complementary results, which will be exploited when a new method of virtual tissue slide segmentation is developed. The virtual slides for this article can be found here: slide 1: http://diagnosticpathology.slidepath.com/dih/webViewer.php?snapshotId=13617947952577 and slide 2: http://diagnosticpathology.slidepath.com/dih/webViewer.php?snapshotId=13617948230017.

  14. An Active Contour Model Based on Adaptive Threshold for Extraction of Cerebral Vascular Structures.

    PubMed

    Wang, Jiaxin; Zhao, Shifeng; Liu, Zifeng; Tian, Yun; Duan, Fuqing; Pan, Yutong

    2016-01-01

    Cerebral vessel segmentation is essential for clinical diagnosis and related research. However, automatic segmentation of brain vessels remains challenging because of variable vessel shapes and the high complexity of vessel geometry. This study proposes a new active contour model (ACM), implemented with the level-set method, for segmenting vessels from TOF-MRA data. The energy function of the new model, combining both region intensity and boundary information, is composed of two region terms, one boundary term and one penalty term. The first region term defines a global threshold, representing the lower gray-level boundary of the target object and derived by maximum intensity projection (MIP), which guides the segmentation of the thick vessels. In the second region term, a dynamic intensity threshold is employed to extract the tiny vessels. The boundary term drives the contours to evolve towards boundaries with high gradients, and the penalty term avoids reinitialization of the level-set function. Experimental results on 10 clinical brain data sets demonstrate that our method not only achieves a better Dice Similarity Coefficient than a global-threshold-based method and a localized hybrid level-set method, but also extracts whole cerebral vessel trees, including the thin vessels.
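
    A heavily simplified sketch of a threshold-driven level-set evolution in the spirit of the model described above (not the authors' exact energy): a region force derived from a global threshold T expands the contour into pixels brighter than T, and a curvature term regularizes the front. All parameter values are illustrative.

        import numpy as np

        def evolve(phi, img, T, dt=0.1, mu=0.2, n_iter=200):
            """Evolve a level-set function phi on image img; phi > 0 is 'inside'."""
            img = img.astype(float)
            for _ in range(n_iter):
                gy, gx = np.gradient(phi)
                norm = np.sqrt(gx ** 2 + gy ** 2) + 1e-8
                # curvature = divergence of the normalized gradient field
                curvature = (np.gradient(gx / norm, axis=1)
                             + np.gradient(gy / norm, axis=0))
                region_force = img - T   # positive where intensity exceeds T
                phi = phi + dt * (region_force * norm + mu * curvature)
            return phi > 0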

  15. Stochastic analysis of epidemics on adaptive time varying networks

    NASA Astrophysics Data System (ADS)

    Kotnis, Bhushan; Kuri, Joy

    2013-06-01

    Many studies investigating the effect of human social connectivity structures (networks) and human behavioral adaptations on the spread of infectious diseases have assumed either a static connectivity structure or a network which adapts itself in response to the epidemic (adaptive networks). However, human social connections are inherently dynamic or time varying. Furthermore, the spread of many infectious diseases occurs on a time scale comparable to the time scale of the evolving network structure. Here we aim to quantify the effect of human behavioral adaptations on the spread of asymptomatic infectious diseases on time-varying networks. We perform a full stochastic analysis using a continuous-time Markov chain approach for calculating the outbreak probability, mean epidemic duration, epidemic reemergence probability, etc. Additionally, we use mean-field theory for calculating epidemic thresholds. Theoretical predictions are verified using extensive simulations. Our studies have uncovered the existence of an “adaptive threshold”: when the ratio of the susceptibility (or infectivity) rate to the recovery rate is below the threshold value, adaptive behavior can prevent the epidemic; if it is above the threshold, no amount of behavioral adaptation can prevent it. Our analyses suggest that the interaction patterns of the infected population play a major role in sustaining the epidemic. Our results have implications for epidemic containment policies, as awareness campaigns and human behavioral responses can be effective only if the interaction levels of the infected populace are kept in check.
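
    The flavor of such a threshold statement can be illustrated with the classical mean-field result for an SIS epidemic on a static contact network, where the epidemic dies out when the infection-to-recovery rate ratio is below 1/lambda_max(A). This toy check is only an analogy for a static network, not the paper's time-varying model.

        import numpy as np

        def sis_epidemic_threshold(adjacency):
            """Mean-field SIS threshold: beta/gamma below this value -> extinction."""
            lam_max = np.max(np.linalg.eigvalsh(adjacency.astype(float)))
            return 1.0 / lam_max

        A = np.array([[0, 1, 1],
                      [1, 0, 1],
                      [1, 1, 0]])            # toy 3-node contact network
        print(sis_epidemic_threshold(A))     # 0.5 here: beta/gamma must stay below it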

  16. Radiation-hardened fast acquisition/weak signal tracking system and method

    NASA Technical Reports Server (NTRS)

    Winternitz, Luke (Inventor); Boegner, Gregory J. (Inventor); Sirotzky, Steve (Inventor)

    2009-01-01

    A global positioning system (GPS) receiver and method of acquiring and tracking GPS signals comprises an antenna adapted to receive GPS signals; an analog radio frequency device operatively connected to the antenna and adapted to convert the GPS signals from an analog format to a digital format; a plurality of GPS signal tracking correlators operatively connected to the analog RF device; a GPS signal acquisition component operatively connected to the analog RF device and the plurality of GPS signal tracking correlators, wherein the GPS signal acquisition component is adapted to calculate a maximum vector on a databit correlation grid; and a microprocessor operatively connected to the plurality of GPS signal tracking correlators and the GPS signal acquisition component, wherein the microprocessor is adapted to compare the maximum vector with a predetermined correlation threshold to allow the GPS signal to be fully acquired and tracked.
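
    The acquisition decision described in this claim reduces to a small sketch: scan a grid of correlator magnitudes over Doppler and code-phase bins for its maximum, and declare acquisition only if the peak clears a preset threshold. The random grid and the threshold below are stand-in illustrative values, not anything specified by the patent.

        import numpy as np

        rng = np.random.default_rng(0)
        corr_grid = rng.rayleigh(scale=1.0, size=(40, 100))  # Doppler x code phase
        THRESHOLD = 6.0                                      # illustrative value

        peak = corr_grid.max()
        dopp_bin, code_phase = np.unravel_index(corr_grid.argmax(), corr_grid.shape)
        if peak > THRESHOLD:
            print(f"acquired: Doppler bin {dopp_bin}, code phase {code_phase}")
        else:
            print("not acquired: continue search or extend the dwell time")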

  17. Tactile Acuity Charts: A Reliable Measure of Spatial Acuity

    PubMed Central

    Bruns, Patrick; Camargo, Carlos J.; Campanella, Humberto; Esteve, Jaume; Dinse, Hubert R.; Röder, Brigitte

    2014-01-01

    For assessing tactile spatial resolution it has recently been recommended to use tactile acuity charts which follow the design principles of the Snellen letter charts for visual acuity and involve active touch. However, it is currently unknown whether acuity thresholds obtained with this newly developed psychophysical procedure are in accordance with established measures of tactile acuity that involve passive contact with fixed duration and control of contact force. Here we directly compared tactile acuity thresholds obtained with the acuity charts to traditional two-point and grating orientation thresholds in a group of young healthy adults. For this purpose, two types of charts, using either Braille-like dot patterns or embossed Landolt rings with different orientations, were adapted from previous studies. Measurements with the two types of charts were equivalent, but generally more reliable with the dot pattern chart. A comparison with the two-point and grating orientation task data showed that the test-retest reliability of the acuity chart measurements after one week was superior to that of the passive methods. Individual thresholds obtained with the acuity charts agreed reasonably with the grating orientation threshold, but less so with the two-point threshold that yielded relatively distinct acuity estimates compared to the other methods. This potentially considerable amount of mismatch between different measures of tactile acuity suggests that tactile spatial resolution is a complex entity that should ideally be measured with different methods in parallel. The simple test procedure and high reliability of the acuity charts makes them a promising complement and alternative to the traditional two-point and grating orientation thresholds. PMID:24504346

  18. A critique of the use of indicator-species scores for identifying thresholds in species responses

    USGS Publications Warehouse

    Cuffney, Thomas F.; Qian, Song S.

    2013-01-01

    Identification of ecological thresholds is important for both theoretical and applied ecology. Recently, Baker and King (2010; King and Baker 2010) proposed a method, threshold indicator taxa analysis (TITAN), to calculate species and community thresholds based on indicator species scores adapted from Dufrêne and Legendre (1997). We tested the ability of TITAN to detect thresholds using models with (broken-stick, disjointed broken-stick, dose-response, step-function, Gaussian) and without (linear) definitive thresholds. TITAN accurately and consistently detected thresholds in step-function models, but not in models characterized by abrupt changes in response slopes or response direction. Threshold detection in TITAN was very sensitive to the distribution of zero values: TITAN identified thresholds associated with relatively small differences in the distribution of zeros while ignoring thresholds associated with large changes in abundance. Threshold identification and tests of statistical significance were based on the same data permutations, resulting in inflated estimates of statistical significance. Application of bootstrapping to the split-point problem that underlies TITAN led to underestimates of the confidence intervals of thresholds. Bias in the derivation of the z-scores used to identify TITAN thresholds, together with skewness in the distribution of data along the gradient, produced TITAN thresholds that were much more similar to one another than the actual thresholds were. This tendency may account for the synchronicity of thresholds reported in TITAN analyses. The thresholds identified by TITAN represented disparate characteristics of species responses, which, coupled with TITAN's inability to identify thresholds accurately and consistently, does not support the aggregation of individual species thresholds into a community threshold.

  19. Characterization of Rod Function Phenotypes Across a Range of Age-Related Macular Degeneration Severities and Subretinal Drusenoid Deposits.

    PubMed

    Flynn, Oliver J; Cukras, Catherine A; Jeffrey, Brett G

    2018-05-01

    To examine spatial changes in rod-mediated function in relation to local structural changes across the central retina in eyes with a spectrum of age-related macular degeneration (AMD) disease severity. Participants were categorized into five AMD severity groups based on fundus features. Scotopic thresholds were measured at 14 loci spanning ±18° along the vertical meridian in one eye of each of 42 participants (mean age = 71.7 ± 9.9 years). Following a 30% bleach, dark adaptation was measured at eight loci (±12°). Rod intercept time (RIT) was defined as the time to detect a -3.1 log cd/m2 stimulus. RITslope was defined from the linear fit of RIT against decreasing retinal eccentricity. The presence of subretinal drusenoid deposits (SDD), ellipsoid zone (EZ) band disruption, and drusen at the test loci was evaluated using optical coherence tomography. Scotopic thresholds indicated greater rod function loss in the macula, which correlated with increasing AMD group severity. RITslope, which captures the spatial change in the rate of dark adaptation, increased with AMD severity (P < 0.0001). Three rod function phenotypes emerged: RF1, normal rod function; RF2, normal scotopic thresholds but slowed dark adaptation; and RF3, elevated scotopic thresholds with slowed dark adaptation. Dark adaptation was slowed at all loci with SDD or EZ band disruption, and at 32% of loci with no local structural changes. Three rod function phenotypes were thus defined from the combined measurement of scotopic thresholds and dark adaptation. Spatial change in dark adaptation across the macula was captured by RITslope, which may be a useful outcome measure for functional studies of AMD.

  20. Self-adaptive demodulation for polarization extinction ratio in distributed polarization coupling.

    PubMed

    Zhang, Hongxia; Ren, Yaguang; Liu, Tiegen; Jia, Dagong; Zhang, Yimo

    2013-06-20

    A self-adaptive method for distributed polarization extinction ratio (PER) demodulation is demonstrated. It is characterized by a dynamic PER threshold coupling intensity (TCI) and a nonuniform PER iteration step length (ISL). Based on the preset PER calculation accuracy and the original distribution of coupling intensity, the TCI and ISL adapt automatically to determine the contributing coupling points inside the polarizing devices. The distributed PER is calculated by accumulating those coupling points automatically and selectively. Two different kinds of polarization-maintaining fibers were tested, and PERs were obtained after merely 3-5 iterations of the proposed method. Comparison experiments with a Thorlabs commercial instrument were also conducted, and the results show high consistency. In addition, an optimum preset PER calculation accuracy of 0.05 dB was determined through repeated experiments.

  1. An adaptive tensor voting algorithm combined with texture spectrum

    NASA Astrophysics Data System (ADS)

    Wang, Gang; Su, Qing-tang; Lü, Gao-huan; Zhang, Xiao-feng; Liu, Yu-huan; He, An-zhi

    2015-01-01

    An adaptive tensor voting algorithm combined with the texture spectrum is proposed. The image texture spectrum is used to obtain an adaptive scale parameter for the voting field. The texture information then modifies both the attenuation coefficient and the attenuation field, so that the algorithm creates more significant and correct structures in the original image, consistent with human visual perception. At the same time, the proposed method improves edge extraction quality, efficiently reducing flocculent regions and making the image clearer. In an experiment on extracting pavement cracks, the original pavement image is processed by the proposed method combined with a significant-curve-feature threshold procedure, and the resulting image clearly reveals the faint crack signals submerged in the complicated background.

  2. An Application of Reassigned Time-Frequency Representations for Seismic Noise/Signal Decomposition

    NASA Astrophysics Data System (ADS)

    Mousavi, S. M.; Langston, C. A.

    2016-12-01

    Seismic data recorded by surface arrays are often strongly contaminated by unwanted noise. This background noise makes the detection of small-magnitude events difficult. An automatic method for seismic noise/signal decomposition is presented, based upon an enhanced time-frequency representation. Synchrosqueezing is a time-frequency reassignment method aimed at sharpening the time-frequency picture; noise can be distinguished from the signal and suppressed more easily in this reassigned domain. The threshold level is estimated using a generalized cross-validation approach that does not rely on any prior knowledge about the noise level. The efficiency of thresholding is improved by adding a pre-processing step based on higher-order statistics and a post-processing step based on adaptive hard thresholding. In doing so, both the accuracy and the speed of the denoising are improved compared to our previous algorithms (Mousavi and Langston, 2016a, 2016b; Mousavi et al., 2016). The proposed algorithm can either suppress the noise (white or colored) and keep the signal, or suppress the signal and keep the noise; hence, it can be used both in normal denoising applications and in ambient noise studies. Application of the proposed method to synthetic and real seismic data shows its effectiveness for denoising and signal removal in local microseismic and ocean-bottom seismic data. References: Mousavi, S.M., C. A. Langston, and S. P. Horton (2016), Automatic Microseismic Denoising and Onset Detection Using the Synchrosqueezed Continuous Wavelet Transform, Geophysics, 81, V341-V355, doi:10.1190/GEO2015-0598.1. Mousavi, S.M., and C. A. Langston (2016a), Hybrid Seismic Denoising Using Higher-Order Statistics and Improved Wavelet Block Thresholding, Bull. Seismol. Soc. Am., 106, doi:10.1785/0120150345. Mousavi, S.M., and C. A. Langston (2016b), Adaptive noise estimation and suppression for improving microseismic event detection, Journal of Applied Geophysics, doi:10.1016/j.jappgeo.2016.06.008.

  3. A self-adaptive algorithm for traffic sign detection in motion image based on color and shape features

    NASA Astrophysics Data System (ADS)

    Zhang, Ka; Sheng, Yehua; Gong, Zhijun; Ye, Chun; Li, Yongqiang; Liang, Cheng

    2007-06-01

    As an important subsystem of intelligent transportation systems (ITS), the detection and recognition of traffic signs from mobile images has become one of the hot topics in international ITS research. Addressing the problem of automatic traffic sign detection in motion images, a new self-adaptive algorithm for traffic sign detection based on color and shape features is proposed in this paper. Firstly, global statistical color features of different images are computed. Secondly, self-adaptive thresholds and special segmentation rules are designed according to these global color features. Then, for red, yellow and blue traffic signs, the color image is segmented into three binary images using these thresholds and rules. Thirdly, if the number of white pixels in a segmented binary image exceeds the filtering threshold, the binary image is further filtered. Fourthly, gray-value projection is used to determine the top, bottom, left and right boundaries of candidate traffic sign regions in the segmented binary image. Lastly, if the shape features of a candidate region satisfy those of a real traffic sign, the candidate region is confirmed as a detected traffic sign region. The new algorithm was applied to motion images of natural scenes taken by the CCD camera of a mobile photogrammetry system in Nanjing at different times. The experimental results show that the algorithm is not only simple, robust and well adapted to natural scene images, but also reliable and fast in real traffic sign detection.
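
    A minimal sketch of the color-segmentation step, assuming an RGB image and a mean-plus-k-standard-deviations rule derived from global color statistics; the red-dominance feature and the coefficient are illustrative assumptions, not the paper's exact rules.

        import numpy as np

        def red_sign_mask(rgb, a=1.5):
            """Binarize the red-sign channel with an image-adaptive threshold."""
            r, g, b = (rgb[..., i].astype(float) for i in range(3))
            redness = r - (g + b) / 2.0             # simple red-dominance feature
            t = redness.mean() + a * redness.std()  # global, image-adaptive threshold
            return redness > t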

  4. An augmented parametric response map with consideration of image registration error: towards guidance of locally adaptive radiotherapy

    NASA Astrophysics Data System (ADS)

    Lausch, Anthony; Chen, Jeff; Ward, Aaron D.; Gaede, Stewart; Lee, Ting-Yim; Wong, Eugene

    2014-11-01

    Parametric response map (PRM) analysis is a voxel-wise technique for predicting overall treatment outcome, which shows promise as a tool for guiding personalized locally adaptive radiotherapy (RT). However, image registration error (IRE) introduces uncertainty into this analysis which may limit its use for guiding RT. Here we extend the PRM method to include an IRE-related PRM analysis confidence interval and also incorporate multiple graded classification thresholds to facilitate visualization. A Gaussian IRE model was used to compute an expected value and confidence interval for PRM analysis. The augmented PRM (A-PRM) was evaluated using CT-perfusion functional image data from patients treated with RT for glioma and hepatocellular carcinoma. Known rigid IREs were simulated by applying one thousand different rigid transformations to each image set. PRM and A-PRM analyses of the transformed images were then compared to analyses of the original images (ground truth) in order to investigate the two methods in the presence of controlled IRE. The A-PRM was shown to help visualize and quantify IRE-related analysis uncertainty. The use of multiple graded classification thresholds also provided additional contextual information which could be useful for visually identifying adaptive RT targets (e.g. sub-volume boosts). The A-PRM should facilitate reliable PRM guided adaptive RT by allowing the user to identify if a patient’s unique IRE-related PRM analysis uncertainty has the potential to influence target delineation.

  5. Noise adaptive wavelet thresholding for speckle noise removal in optical coherence tomography.

    PubMed

    Zaki, Farzana; Wang, Yahui; Su, Hao; Yuan, Xin; Liu, Xuan

    2017-05-01

    Optical coherence tomography (OCT) is based on coherence detection of interferometric signals and hence inevitably suffers from speckle noise. To remove speckle noise in OCT images, wavelet-domain thresholding has demonstrated significant advantages in suppressing noise magnitude while preserving image sharpness. However, speckle noise in OCT images has different characteristics at different spatial scales, which has not been considered in previous applications of wavelet-domain thresholding. In this study, we demonstrate a noise adaptive wavelet thresholding (NAWT) algorithm that exploits the difference in noise characteristics between wavelet sub-bands. The algorithm is simple, fast and effective, and is closely related to the physical origin of speckle noise in OCT images. Our results demonstrate that NAWT outperforms conventional wavelet thresholding.
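
    A minimal sketch of sub-band-adaptive wavelet thresholding in the spirit of NAWT, assuming PyWavelets is available: each detail band gets its own threshold from a per-band noise estimate (median absolute deviation), followed by soft thresholding. The exact per-band rule of the published algorithm may differ.

        import numpy as np
        import pywt

        def subband_adaptive_denoise(img, wavelet="db4", levels=3):
            coeffs = pywt.wavedec2(img.astype(float), wavelet, level=levels)
            out = [coeffs[0]]                       # keep the approximation band
            for detail in coeffs[1:]:               # (cH, cV, cD) at each scale
                out.append(tuple(
                    pywt.threshold(
                        c,
                        # per-band universal threshold from a MAD noise estimate
                        value=(np.median(np.abs(c)) / 0.6745)
                              * np.sqrt(2.0 * np.log(c.size)),
                        mode="soft")
                    for c in detail))
            return pywt.waverec2(out, wavelet)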

  6. Adaptive thresholding with inverted triangular area for real-time detection of the heart rate from photoplethysmogram traces on a smartphone.

    PubMed

    Jiang, Wen Jun; Wittek, Peter; Zhao, Li; Gao, Shi Chao

    2014-01-01

    Photoplethysmogram (PPG) signals acquired by smartphone cameras are weaker than those acquired by dedicated pulse oximeters. Furthermore, the signals have lower sampling rates, have notches in the waveform and are more severely affected by baseline drift, leading to specific morphological characteristics. This paper introduces a new feature, the inverted triangular area, to address these specific characteristics. The new feature enables real-time adaptive waveform detection using an algorithm of linear time complexity. It can also recognize notches in the waveform and it is inherently robust to baseline drift. An implementation of the algorithm on Android is available for free download. We collected data from 24 volunteers and compared our algorithm in peak detection with two competing algorithms designed for PPG signals, Incremental-Merge Segmentation (IMS) and Adaptive Thresholding (ADT). A sensitivity of 98.0% and a positive predictive value of 98.8% were obtained, which were 7.7% higher than the IMS algorithm in sensitivity, and 8.3% higher than the ADT algorithm in positive predictive value. The experimental results confirmed the applicability of the proposed method.
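
    A hedged sketch of the inverted-triangular-area idea: for each sample, take the triangle spanned by the samples w steps before and after it and use the signed (shoelace) area as a peak indicator; with this sign convention the area is strongly negative at a sharp maximum. The half-width w and the threshold rule are illustrative assumptions, not the published parameters.

        import numpy as np

        def triangular_area(sig, w=5):
            """Signed triangle area (twice the shoelace value) at each sample."""
            s = np.asarray(sig, dtype=float)
            area = np.zeros(len(s))
            for i in range(w, len(s) - w):
                x1, y1 = i - w, s[i - w]
                x2, y2 = i, s[i]
                x3, y3 = i + w, s[i + w]
                area[i] = (x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)
            return area

        def detect_peaks(sig, w=5, k=2.0):
            a = triangular_area(sig, w)
            threshold = -k * a.std()       # adaptive threshold on the area trace
            return np.where(a < threshold)[0]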

  7. Data compression using adaptive transform coding. Appendix 1: Item 1. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Rost, Martin Christopher

    1988-01-01

    Adaptive low-rate source coders are described in this dissertation. These coders adapt by adjusting the complexity of the coder to match the local coding difficulty of the image. This is accomplished by using a threshold-driven maximum-distortion criterion to select the specific coder used. The different coders are built using variable block-size transform techniques, and the threshold criterion selects small transform blocks to code the more difficult regions and larger blocks to code the less complex regions. A theoretical framework is constructed within which these coders can be studied. An algorithm for selecting the optimal bit allocation for the quantization of transform coefficients is developed; it achieves more accurate bit assignments than the algorithms currently used in the literature. Some upper and lower bounds for the bit-allocation distortion-rate function are developed, and an attainable distortion-rate function is derived for a particular scalar quantizer mixing method that can be used to code transform coefficients at any rate.

  8. Optical communication system performance with tracking error induced signal fading.

    NASA Technical Reports Server (NTRS)

    Tycz, M.; Fitzmaurice, M. W.; Premo, D. A.

    1973-01-01

    System performance is determined for an optical communication system using noncoherent detection in the presence of tracking-error-induced signal fading, assuming (1) binary on-off keying (OOK) with both fixed- and adaptive-threshold receivers, and (2) binary polarization modulation (BPM). BPM is shown to maintain its inherent 2- to 3-dB advantage over OOK when adaptive thresholding is used, and to have a substantially greater advantage when the OOK system is restricted to a fixed decision threshold.

  9. Image registration method for medical image sequences

    DOEpatents

    Gee, Timothy F.; Goddard, James S.

    2013-03-26

    Image registration of low contrast image sequences is provided. In one aspect, a desired region of an image is automatically segmented and only the desired region is registered. Active contours and adaptive thresholding of intensity or edge information may be used to segment the desired regions. A transform function is defined to register the segmented region, and sub-pixel information may be determined using one or more interpolation methods.

  10. Stroke-model-based character extraction from gray-level document images.

    PubMed

    Ye, X; Cheriet, M; Suen, C Y

    2001-01-01

    Global gray-level thresholding techniques such as Otsu's method, and local gray-level thresholding techniques such as edge-based segmentation or the adaptive thresholding method are powerful in extracting character objects from simple or slowly varying backgrounds. However, they are found to be insufficient when the backgrounds include sharply varying contours or fonts in different sizes. A stroke-model is proposed to depict the local features of character objects as double-edges in a predefined size. This model enables us to detect thin connected components selectively, while ignoring relatively large backgrounds that appear complex. Meanwhile, since the stroke width restriction is fully factored in, the proposed technique can be used to extract characters in predefined font sizes. To process large volumes of documents efficiently, a hybrid method is proposed for character extraction from various backgrounds. Using the measurement of class separability to differentiate images with simple backgrounds from those with complex backgrounds, the hybrid method can process documents with different backgrounds by applying the appropriate methods. Experiments on extracting handwriting from a check image, as well as machine-printed characters from scene images demonstrate the effectiveness of the proposed model.

  11. Degraded Chinese rubbing images thresholding based on local first-order statistics

    NASA Astrophysics Data System (ADS)

    Wang, Fang; Hou, Ling-Ying; Huang, Han

    2017-06-01

    Segmenting Chinese characters from degraded document images is a necessary step for optical character recognition (OCR); however, it is challenging due to the various kinds of noise in such images. In this paper, we present three local first-order-statistics methods for adaptive thresholding that segment the text and non-text regions of Chinese rubbing images. The segmentation results were assessed both by visual inspection and numerically. In experiments, the methods obtained better results than classical techniques for the binarization of real Chinese rubbing images and the PHIBD 2012 datasets.

  12. Novel image processing method study for a label-free optical biosensor

    NASA Astrophysics Data System (ADS)

    Yang, Chenhao; Wei, Li'an; Yang, Rusong; Feng, Ying

    2015-10-01

    Optical biosensors are generally divided into labeled and label-free types; the former mainly comprises fluorescence-labeled and radioactively labeled methods, of which the fluorescence-labeled method is the more mature in application. The main image processing methods for fluorescence-labeled biosensors include smoothing filters, manual gridding and constant thresholding. Since some fluorescent molecules may influence the biological reaction, label-free methods have become the main development direction of optical biosensors. The wider field of view and larger angle of incidence of the light path, which can effectively improve the sensitivity of a label-free biosensor, also bring more difficulties to image processing compared with fluorescence-labeled biosensors. Otsu's method, widely applied in machine vision, chooses the threshold that minimizes the intraclass variance of the thresholded black and white pixels; as a global threshold segmentation, however, it is limited when the image intensity distribution is asymmetrical. To address the irregularity of light intensity on the transducer, we improved the algorithm. In this paper, we present a new image processing algorithm for a reflectance modulation biosensor platform, which mainly comprises a sliding normalization algorithm for image rectification and the improved Otsu's method for image segmentation, in order to implement automatic recognition of target areas. Finally, we use an adaptive gridding method to extract the target parameters for analysis. These methods improve the efficiency of image processing, reduce human intervention, enhance the reliability of experiments and lay the foundation for high-throughput label-free optical biosensors.
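
    For reference, a compact sketch of Otsu's global threshold selection on an 8-bit histogram: choose the gray level that maximizes the between-class variance, which is equivalent to minimizing the intraclass variance mentioned above. The paper's sliding-normalization rectification step is not reproduced here.

        import numpy as np

        def otsu_threshold(img):
            """Return the gray level maximizing between-class variance (uint8 input)."""
            hist = np.bincount(img.ravel(), minlength=256).astype(float)
            p = hist / hist.sum()
            omega = np.cumsum(p)                    # class-0 probability up to t
            mu = np.cumsum(p * np.arange(256))      # first moment up to t
            mu_total = mu[-1]
            with np.errstate(divide="ignore", invalid="ignore"):
                sigma_b2 = (mu_total * omega - mu) ** 2 / (omega * (1.0 - omega))
            return int(np.nanargmax(sigma_b2))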

  13. An efficient cloud detection method for high resolution remote sensing panchromatic imagery

    NASA Astrophysics Data System (ADS)

    Li, Chaowei; Lin, Zaiping; Deng, Xinpu

    2018-04-01

    In order to increase the accuracy of cloud detection for remote sensing satellite imagery, we propose an efficient cloud detection method for remote sensing satellite panchromatic images. The method comprises three main steps. First, an adaptive intensity threshold combined with a median filter is adopted to extract coarse cloud regions. Second, a guided filtering process is conducted to strengthen textural feature differences, and texture detection is then performed via a gray-level co-occurrence matrix computed on the acquired texture detail image. Finally, candidate cloud regions are extracted as the intersection of the two coarse cloud maps above, and an adaptive morphological dilation is further applied to refine them for thin clouds at the boundaries. The experimental results demonstrate the effectiveness of the proposed method.
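
    A hedged sketch of the first stage only, assuming the adaptive threshold is a mean-plus-k-standard-deviations rule over the median-filtered image; the guided-filtering and co-occurrence-matrix stages are omitted.

        import numpy as np
        from scipy.ndimage import median_filter

        def coarse_cloud_mask(pan, k=1.0, size=5):
            """Coarse cloud regions from a panchromatic band."""
            smoothed = median_filter(pan.astype(float), size=size)
            t = smoothed.mean() + k * smoothed.std()   # image-adaptive threshold
            return smoothed > t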

  14. Thresholds for Coral Bleaching: Are Synergistic Factors and Shifting Thresholds Changing the Landscape for Management? (Invited)

    NASA Astrophysics Data System (ADS)

    Eakin, C.; Donner, S. D.; Logan, C. A.; Gledhill, D. K.; Liu, G.; Heron, S. F.; Christensen, T.; Rauenzahn, J.; Morgan, J.; Parker, B. A.; Hoegh-Guldberg, O.; Skirving, W. J.; Strong, A. E.

    2010-12-01

    As carbon dioxide rises in the atmosphere, climate change and ocean acidification are modifying important physical and chemical parameters in the oceans, with resulting impacts on coral reef ecosystems. Rising CO2 is warming the world’s oceans and causing corals to bleach with alarming frequency and severity. The frequent return of stressful temperatures has already caused major damage to many of the world’s coral reefs and is expected to continue in the foreseeable future. Warmer oceans have also contributed to a rise in coral infectious diseases. Both bleaching and infectious disease can result in coral mortality and threaten one of the most diverse ecosystems on Earth, along with the important ecosystem services it provides. Additionally, ocean acidification from rising CO2 is reducing the availability of the carbonate ions corals need to build their skeletons, and it may be depressing the threshold for bleaching. While thresholds vary among species and locations, it is clear that corals around the world are already experiencing anomalous temperatures that are too high, too often, and that warming is exceeding the rate at which corals can adapt. This is despite a complex adaptive capacity that involves both the coral host and the zooxanthellae, including changes in the relative abundance of the latter in their coral hosts. The safe upper limit for atmospheric CO2 is probably somewhere below 350 ppm, a level we passed decades ago, and for temperature it is a sustained global temperature increase of less than 1.5°C above pre-industrial levels. How much can corals acclimate and/or adapt to these unprecedentedly fast-changing environmental conditions? Any change in the threshold for coral bleaching as a result of acclimation and/or adaptation may help corals survive in the future, but adaptation to one stress may be maladaptive to another. There is also evidence that ocean acidification and nutrient enrichment modify this threshold. What do shifting thresholds mean for identifying limits and taking management actions to adapt to climate change?

  15. Adaptive segmentation of nuclei in H&E stained tendon microscopy

    NASA Astrophysics Data System (ADS)

    Chuang, Bo-I.; Wu, Po-Ting; Hsu, Jian-Han; Jou, I.-Ming; Su, Fong-Chin; Sun, Yung-Nien

    2015-12-01

    Tendinopathy has become a common clinical issue in recent years. In most cases, such as trigger finger or tennis elbow, the pathological changes can be observed under H&E-stained tendon microscopy. However, qualitative analysis is subjective, and thus the results depend heavily on the observer. We have developed an automatic segmentation procedure that segments and counts the nuclei in H&E-stained tendon microscopy quickly and precisely. The procedure first determines the complexity of an image and then segments the nuclei. For complex images, the proposed method adopts sampling-based thresholding to segment the nuclei, while for simple images Laplacian-based thresholding is employed to re-segment the nuclei more accurately. In the experiments, the proposed method is compared with results outlined by experts. The nucleus counts of the proposed method are close to the experts' counts, and its processing time is much shorter.

  16. An Adaptive S-Method to Analyze Micro-Doppler Signals for Human Activity Classification

    PubMed Central

    Yang, Chao; Xia, Yuqing; Ma, Xiaolin; Zhang, Tao; Zhou, Zhou

    2017-01-01

    In this paper, we propose a multiwindow adaptive S-method (AS-method) distribution for the time-frequency analysis of radar signals. Building on orthogonal Hermite functions, which have good time-frequency resolution, we vary the window length to suppress the oscillating components caused by cross-terms. This method achieves a better compromise between auto-term concentration and cross-term suppression, which aids the separation of multi-component signals. Finally, the effective micro-Doppler signal is extracted by threshold segmentation and envelope extraction. To verify the proposed method, six states of motion are separated by a support vector machine (SVM) classifier trained on the extracted features. The trained SVM can detect a human subject with an accuracy of 95.4% for two cases without interference. PMID:29186075

  17. An Adaptive S-Method to Analyze Micro-Doppler Signals for Human Activity Classification.

    PubMed

    Li, Fangmin; Yang, Chao; Xia, Yuqing; Ma, Xiaolin; Zhang, Tao; Zhou, Zhou

    2017-11-29

    In this paper, we propose a multiwindow adaptive S-method (AS-method) distribution for the time-frequency analysis of radar signals. Building on orthogonal Hermite functions, which have good time-frequency resolution, we vary the window length to suppress the oscillating components caused by cross-terms. This method achieves a better compromise between auto-term concentration and cross-term suppression, which aids the separation of multi-component signals. Finally, the effective micro-Doppler signal is extracted by threshold segmentation and envelope extraction. To verify the proposed method, six states of motion are separated by a support vector machine (SVM) classifier trained on the extracted features. The trained SVM can detect a human subject with an accuracy of 95.4% for two cases without interference.

  18. A dual-adaptive support-based stereo matching algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Yin; Zhang, Yun

    2017-07-01

    Many stereo matching algorithms use fixed color thresholds and a rigid cross skeleton to segment supports (viz., the Cross method), which, however, does not work well for different images. To address this issue, this paper proposes a novel dual-adaptive support (DAS)-based stereo matching method, which uses both the appearance and shape information of a local region to segment supports automatically, and then integrates the DAS-based cost aggregation with the absolute-difference-plus-census-transform cost, scanline optimization and disparity refinement to develop a stereo matching system. The performance of the DAS method is evaluated on the Middlebury benchmark and compared with the Cross method. The results show that the average error for the DAS method is 25.06% lower than that for the Cross method, indicating that the proposed method is more accurate, has fewer parameters and is suitable for parallel computing.

  19. Development of Rod Function in Term Born and Former Preterm Subjects

    PubMed Central

    Fulton, Anne B.; Hansen, Ronald M.; Moskowitz, Anne

    2009-01-01

    Purpose Provide an overview of some of our electroretinographic and psychophysical studies of normal development of rod function and their application to retinopathy of prematurity (ROP). Methods Electroretinographic (ERG) responses to full-field stimuli were recorded from dark adapted subjects. Rod photoreceptor sensitivity, SROD, was calculated by fit of a biochemical model of the activation of phototransduction to the ERG a-wave. Dark adapted psychophysical thresholds for detecting 2° spots in parafoveal (10° eccentric) and peripheral (30° eccentric) retina were measured and the difference between the thresholds, Δ10-30, was examined as a function of age. SROD and Δ10-30 in term born and former preterm subjects were compared. Results In term born infants, (1) the normal developmental increase in SROD changes proportionately with the amount of rod visual pigment, rhodopsin, and (2) rod mediated function in central retina is immature compared to that in peripheral retina. In subjects born prematurely, deficits in rod photoreceptor sensitivity persist long after active ROP has resolved. Maturation of rod mediated thresholds in the central retina is prolonged by mild ROP. Conclusions Characterization of the development of normal rod and rod mediated function provides a foundation for understanding ROP. PMID:19483509

  20. Olfactory Detection Thresholds and Adaptation in Adults with Autism Spectrum Condition

    ERIC Educational Resources Information Center

    Tavassoli, T.; Baron-Cohen, S.

    2012-01-01

    Sensory issues have been widely reported in Autism Spectrum Conditions (ASC). Since olfaction is one of the least investigated senses in ASC, the current studies explore olfactory detection thresholds and adaptation to olfactory stimuli in adults with ASC. 80 participants took part, 38 (18 females, 20 males) with ASC and 42 control participants…

  1. Hair segmentation using adaptive threshold from edge and branch length measures.

    PubMed

    Lee, Ian; Du, Xian; Anthony, Brian

    2017-10-01

    Non-invasive imaging techniques allow the monitoring of skin structure and the diagnosis of skin diseases in clinical applications. However, hair in skin images hampers the imaging and classification of the skin structures of interest. Although many hair segmentation methods have been proposed for digital hair removal, a major challenge remains in detecting hairs that are thin, overlapping, of similar contrast or color to the underlying skin, or overlaid on highly textured skin structure. To solve this problem, we present an automatic hair segmentation method that uses edge density (ED) and mean branch length (MBL) to measure hair. First, hair is detected by the integration of a top-hat transform and a modified second-order Gaussian filter. Second, we employ a robust adaptive threshold on ED and MBL to generate a hair mask. Third, the hair mask is refined by k-NN classification of hair and skin pixels. The proposed algorithm was tested on two datasets, of healthy skin images and lesion images respectively, taken from different imaging platforms under various illumination levels and with varying skin colors. We compared the hair detection and segmentation results of our algorithm and six other state-of-the-art hair segmentation methods. Our method exhibits a sensitivity of 75% and a specificity of 95%, indicating significantly higher accuracy and a better balance between true positive and false positive detection than the other methods. Published by Elsevier Ltd.

  2. Network analysis of a financial market based on genuine correlation and threshold method

    NASA Astrophysics Data System (ADS)

    Namaki, A.; Shirazi, A. H.; Raei, R.; Jafari, G. R.

    2011-10-01

    A financial market is an example of an adaptive complex network consisting of many interacting units; this network reflects the market’s behavior. In this paper, we use the Random Matrix Theory (RMT) notion that the largest eigenvector of the correlation matrix specifies the market mode of the stock network. For better risk management, we clean the correlation matrix by removing the market mode from the data and then reconstruct the matrix from the residuals. We show that this technique has an important effect on the correlation coefficient distribution by applying it to the Dow Jones Industrial Average (DJIA). To study the topological structure of a network, we apply the market-mode removal technique and the threshold method to the Tehran Stock Exchange (TSE) as an example. We show that this network follows a power-law model in certain intervals. We also show the behavior of the clustering coefficients and the number of components of this network for different thresholds. These outputs are useful for both theoretical and practical purposes such as asset allocation and risk management.
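
    A minimal numpy sketch of the pipeline described above: estimate the correlation matrix of stock returns, subtract the market mode (the component along the largest eigenvector), and threshold the residual correlations into a network adjacency matrix. The threshold value is illustrative, not the paper's.

        import numpy as np

        def market_mode_network(returns, threshold=0.3):
            """returns: array of shape (n_stocks, n_days) of return series."""
            c = np.corrcoef(returns)                  # stocks x stocks correlations
            vals, vecs = np.linalg.eigh(c)            # eigenvalues in ascending order
            u = vecs[:, -1]                           # market-mode eigenvector
            c_residual = c - vals[-1] * np.outer(u, u)
            adjacency = (np.abs(c_residual) > threshold).astype(int)
            np.fill_diagonal(adjacency, 0)
            return adjacency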

  3. A wavelet-based adaptive fusion algorithm of infrared polarization imaging

    NASA Astrophysics Data System (ADS)

    Yang, Wei; Gu, Guohua; Chen, Qian; Zeng, Haifang

    2011-08-01

    The purpose of infrared polarization imaging is to highlight man-made targets against a complex natural background. Because infrared polarization images can clearly distinguish targets from backgrounds with different features, this paper presents a wavelet-based infrared polarization image fusion algorithm. The method mainly processes the high-frequency signal portion; for the low-frequency signal, the usual weighted-average method is applied. The high-frequency part is processed as follows: first, the high-frequency information of the source images is extracted by wavelet transform; then the signal strength over a 3x3 window is calculated, and the ratio of regional signal intensities between the source images is used as a matching measure. The extraction method and decision mode for the details are determined by the decision-making module, and the fusion quality is closely related to the threshold set in this module. Instead of the commonly used trial-and-error approach, a quadratic interpolation optimization algorithm is proposed to obtain the threshold: the endpoints and midpoint of the threshold search interval serve as initial interpolation nodes, the minimum of the quadratic interpolant is computed, and the best threshold is obtained by comparing these minima. A series of image quality evaluations shows that this method improves the fusion result; moreover, it is effective not only for individual images but also for large numbers of images.

  4. Fixed or adapted conditioning intensity for repeated conditioned pain modulation.

    PubMed

    Hoegh, M; Petersen, K K; Graven-Nielsen, T

    2017-12-29

    Aims Conditioned pain modulation (CPM) is used to assess descending pain modulation through a test stimulation (TS) and a conditioning stimulation (CS). Due to potential carry-over effects, sequential CPM paradigms might alter the intensity of the CS, which can in turn alter the CPM-effect. This study aimed to investigate the difference between a fixed and an adaptive CS intensity on the CPM-effect. Methods On the dominant leg of 20 healthy subjects the cuff pressure detection threshold (PDT) was recorded as the TS, and the pain tolerance threshold (PTT) was assessed on the non-dominant leg for estimating the CS. The difference in PDT before and during CS defined the CPM-effect. The CPM-effect was assessed four times using a CS with an intensity of 70% of the baseline PTT (fixed) or 70% of the PTT measured throughout the session (adaptive). The pain intensity of the conditioning stimulus was assessed on a numeric rating scale (NRS). Data were analyzed with repeated-measures ANOVA. Results No difference was found between the four PDTs assessed before the CSs for the fixed and the adaptive paradigms. The CS pressure intensity for the adaptive paradigm increased over the four repeated assessments (P < 0.01). The pain intensity was similar during the fixed (NRS: 5.8±0.5) and the adaptive paradigm (NRS: 6.0±0.4). The CPM-effect was higher using the fixed condition compared with the adaptive condition (P < 0.05). Conclusions The current study found that sequential CPM paradigms using a fixed conditioning stimulus produced an increased CPM-effect compared with adaptive, increasing conditioning intensities.

  5. Speech Perception with Music Maskers by Cochlear Implant Users and Normal-Hearing Listeners

    ERIC Educational Resources Information Center

    Eskridge, Elizabeth N.; Galvin, John J., III; Aronoff, Justin M.; Li, Tianhao; Fu, Qian-Jie

    2012-01-01

    Purpose: The goal of this study was to investigate how the spectral and temporal properties in background music may interfere with cochlear implant (CI) and normal-hearing listeners' (NH) speech understanding. Method: Speech-recognition thresholds (SRTs) were adaptively measured in 11 CI and 9 NH subjects. CI subjects were tested while using their…

  6. Wavelet tree structure based speckle noise removal for optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Yuan, Xin; Liu, Xuan; Liu, Yang

    2018-02-01

    We report a new speckle noise removal algorithm in optical coherence tomography (OCT). Though wavelet domain thresholding algorithms have demonstrated superior advantages in suppressing noise magnitude and preserving image sharpness in OCT, the wavelet tree structure has not been investigated in previous applications. In this work, we propose an adaptive wavelet thresholding algorithm via exploiting the tree structure in wavelet coefficients to remove the speckle noise in OCT images. The threshold for each wavelet band is adaptively selected following a special rule to retain the structure of the image across different wavelet layers. Our results demonstrate that the proposed algorithm outperforms conventional wavelet thresholding, with significant advantages in preserving image features.

  7. 3D GGO candidate extraction in lung CT images using multilevel thresholding on supervoxels

    NASA Astrophysics Data System (ADS)

    Huang, Shan; Liu, Xiabi; Han, Guanghui; Zhao, Xinming; Zhao, Yanfeng; Zhou, Chunwu

    2018-02-01

    Early detection of ground glass opacity (GGO) is of great importance since GGOs are more likely to be malignant than solid nodules; however, GGO detection remains a difficult task in lung cancer screening. This paper proposes a novel GGO candidate extraction method that performs multilevel thresholding on supervoxels in 3D lung CT images. Firstly, we segment the lung parenchyma based on the Otsu algorithm. Secondly, voxels that are adjacent in 3D discrete space and share similar gray levels are clustered into supervoxels; this step enhances GGOs and reduces computational complexity. Thirdly, the Hessian matrix is used to emphasize focal GGO candidates. Lastly, an improved adaptive multilevel thresholding method is applied to the segmented clusters to extract GGO candidates. The proposed method was evaluated on a set of 19 lung CT scans containing 166 GGO lesions from the Lung CT Imaging Signs (LISS) database. The experimental results show that the proposed GGO candidate extraction method is effective, with a sensitivity of 100% and 26.3 false positives per scan (665 GGO candidates: 499 non-GGO regions and 166 GGO regions). It can handle both focal and diffuse GGOs.
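
    A hedged sketch of the final stage, assuming each supervoxel is summarized by its mean intensity and that scikit-image's multi-Otsu stands in for the authors' improved adaptive variant; the brightest class is kept as GGO candidates.

        import numpy as np
        from skimage.filters import threshold_multiotsu

        def ggo_candidate_mask(supervoxel_means, classes=3):
            """Boolean mask over supervoxels whose mean intensity falls in
            the brightest of the multi-Otsu classes."""
            vals = np.asarray(supervoxel_means, dtype=float)
            cuts = threshold_multiotsu(vals, classes=classes)
            return vals > cuts[-1]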

  8. An adaptive approach to the physical annealing strategy for simulated annealing

    NASA Astrophysics Data System (ADS)

    Hasegawa, M.

    2013-02-01

    A new and reasonable method for the adaptive implementation of simulated annealing (SA) is studied on two types of random traveling salesman problems. The idea is based on a previous finding on the search characteristics of threshold algorithms, namely the primary role of the relaxation dynamics in their finite-time optimization process. It is shown that the effective temperature for optimization can be predicted from the system's behavior, analogous to the stabilization phenomenon occurring in a heating process that starts from a quenched solution. Subsequent slow cooling near the predicted point draws out the inherent optimizing ability of finite-time SA in a more straightforward manner than the conventional adaptive approach.
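
    A generic, hedged sketch of the cooling stage: given an effective temperature t_eff predicted from the heating experiment, start slightly above it and cool slowly under a Metropolis acceptance rule. The cost and neighbor functions (e.g. tour length and a 2-opt move for the traveling salesman problem) are supplied by the caller; the schedule constants are illustrative, not taken from the paper.

        import math
        import random

        def anneal(cost, neighbor, x0, t_eff, n_steps=20000, alpha=0.9995):
            """Slow cooling from just above the predicted effective temperature."""
            x, c = x0, cost(x0)
            best, best_c = x, c
            t = 1.2 * t_eff                     # start slightly above t_eff
            for _ in range(n_steps):
                y = neighbor(x)
                dc = cost(y) - c
                if dc <= 0 or random.random() < math.exp(-dc / t):
                    x, c = y, c + dc
                    if c < best_c:
                        best, best_c = x, c
                t *= alpha                      # geometric cooling near t_eff
            return best, best_c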

  9. When do Indians feel hot? Internet searches indicate seasonality suppresses adaptation to heat

    NASA Astrophysics Data System (ADS)

    Singh, Tanya; Siderius, Christian; Van der Velde, Ype

    2018-05-01

    In a warming world an increasing number of people are being exposed to heat, making a comfortable thermal environment an important need. This study explores the potential of using Regional Internet Search Frequencies (RISF) for air conditioning devices as an indicator of thermal discomfort (i.e. dissatisfaction with the thermal environment), with the aim of quantifying the adaptation potential of individuals living across different climate zones and at the high end of the temperature range, in India, where access to health data is limited. We related RISF for the years 2011–2015 to daily daytime outdoor temperature in 17 states and determined at which temperature RISF for air conditioning starts to peak, i.e. crosses a ‘heat threshold’, in each state. Using the spatial variation in heat thresholds, we explored whether people continuously exposed to higher temperatures show a lower response to heat extremes through adaptation (e.g. physiological, behavioural or psychological). State-level heat thresholds ranged from 25.9 °C in Madhya Pradesh to 31.0 °C in Orissa. Local adaptation was found to occur at state level: the higher the average temperature in a state, the higher the heat threshold; and the higher the intra-annual temperature range (warmest minus coldest month), the lower the heat threshold. These results indicate that there is potential within India to adapt to warmer temperatures, but that a large intra-annual temperature variability attenuates this potential to adapt to extreme heat. This winter ‘reset’ mechanism should be taken into account when assessing the impact of global warming, with changes in minimum temperatures being an important factor in addition to the change in maximum temperatures itself. Our findings contribute to a better understanding of local heat thresholds and people’s adaptive capacity, which can support the design of local thermal comfort standards and early heat warning systems.

  10. Gravity and Neuronal Adaptation. Neurophysiology of Reflexes from Hypo- to Hypergravity Conditions

    NASA Astrophysics Data System (ADS)

    Ritzmann, Ramona; Krause, Anne; Freyler, Kathrin; Gollhofer, Albert

    2017-02-01

    Introduction: For interplanetary and orbital missions in human space flight, knowledge about the gravity sensitivity of the central nervous system (CNS) is required. The objective of this study was to assess neurophysiological correlates in variable hetero-gravity conditions with regard to their timing and shaping. Methods: In ten subjects, peripheral nerve stimulation was used to elicit H-reflexes and M-waves in the M. soleus under Lunar, Martian, Earth and hypergravity conditions. Gravity dependencies were described by means of reflex latency, inter-peak interval, duration, stimulation threshold and maximal amplitudes. Experiments were executed during the CNES/ESA/DLR JEPPFs. Results: H-reflex latency, inter-peak interval and duration decreased with increasing gravitation (P<0.05); likewise, the M-wave inter-peak interval was diminished and the latency prolonged with increasing gravity (P<0.05). Stimulation thresholds of H-reflexes and M-waves decreased (P<0.05) while maximal amplitudes increased with increasing gravitation (P<0.05). Conclusion: Adaptations of neurophysiological correlates in hetero-gravity are associated with a shift in timing and shaping. For the first time, our results indicate that synaptic and axonal nerve conduction velocity as well as axonal and spinal excitability are diminished under the reduced gravitational forces of the Moon and Mars and gradually increase as gravitation is progressively augmented up to hypergravity. Together with the adaptation in threshold, we conclude that neuronal circuitries are significantly affected by gravitation. As a consequence, movement control and countermeasures may be biased in extended space missions involving transitions between different force environments.

  11. Automatic luminous reflections detector using global threshold with increased luminosity contrast in images

    NASA Astrophysics Data System (ADS)

    Silva, Ricardo Petri; Naozuka, Gustavo Taiji; Mastelini, Saulo Martiello; Felinto, Alan Salvany

    2018-01-01

    The incidence of luminous reflections (LR) in captured images can interfere with the color of the affected regions. These regions tend to oversaturate, becoming whitish and, consequently, losing the original color information of the scene. Decision processes that employ images acquired from digital cameras can be impaired by LR incidence; such applications include real-time video surgery and facial and ocular recognition. This work proposes an algorithm called contrast enhancement of potential LR regions, a preprocessing step that increases the contrast of potential LR regions in order to improve the performance of automatic LR detectors. In addition, three automatic detectors were compared with and without our preprocessing method. The first is a technique already consolidated in the literature, the Chang-Tseng threshold. We propose two further automatic detectors, called adapted histogram peak and global threshold. We employed four performance metrics to evaluate the detectors: accuracy, precision, exactitude, and root mean square error. The exactitude metric is introduced in this work; to compute it, a manually defined reference model was created. The global threshold detector combined with our preprocessing method presented the best results, with an average exactitude rate of 82.47%.

  12. Lowered threshold energy for femtosecond laser induced optical breakdown in a water based eye model by aberration correction with adaptive optics.

    PubMed

    Hansen, Anja; Géneaux, Romain; Günther, Axel; Krüger, Alexander; Ripken, Tammo

    2013-06-01

    In femtosecond laser ophthalmic surgery tissue dissection is achieved by photodisruption based on laser induced optical breakdown. In order to minimize collateral damage to the eye laser surgery systems should be optimized towards the lowest possible energy threshold for photodisruption. However, optical aberrations of the eye and the laser system distort the irradiance distribution from an ideal profile which causes a rise in breakdown threshold energy even if great care is taken to minimize the aberrations of the system during design and alignment. In this study we used a water chamber with an achromatic focusing lens and a scattering sample as eye model and determined breakdown threshold in single pulse plasma transmission loss measurements. Due to aberrations, the precise lower limit for breakdown threshold irradiance in water is still unknown. Here we show that the threshold energy can be substantially reduced when using adaptive optics to improve the irradiance distribution by spatial beam shaping. We found that for initial aberrations with a root-mean-square wave front error of only one third of the wavelength the threshold energy can still be reduced by a factor of three if the aberrations are corrected to the diffraction limit by adaptive optics. The transmitted pulse energy is reduced by 17% at twice the threshold. Furthermore, the gas bubble motions after breakdown for pulse trains at 5 kilohertz repetition rate show a more transverse direction in the corrected case compared to the more spherical distribution without correction. Our results demonstrate how both applied and transmitted pulse energy could be reduced during ophthalmic surgery when correcting for aberrations. As a consequence, the risk of retinal damage by transmitted energy and the extent of collateral damage to the focal volume could be minimized accordingly when using adaptive optics in fs-laser surgery.

  13. Lowered threshold energy for femtosecond laser induced optical breakdown in a water based eye model by aberration correction with adaptive optics

    PubMed Central

    Hansen, Anja; Géneaux, Romain; Günther, Axel; Krüger, Alexander; Ripken, Tammo

    2013-01-01

    In femtosecond laser ophthalmic surgery tissue dissection is achieved by photodisruption based on laser induced optical breakdown. In order to minimize collateral damage to the eye laser surgery systems should be optimized towards the lowest possible energy threshold for photodisruption. However, optical aberrations of the eye and the laser system distort the irradiance distribution from an ideal profile which causes a rise in breakdown threshold energy even if great care is taken to minimize the aberrations of the system during design and alignment. In this study we used a water chamber with an achromatic focusing lens and a scattering sample as eye model and determined breakdown threshold in single pulse plasma transmission loss measurements. Due to aberrations, the precise lower limit for breakdown threshold irradiance in water is still unknown. Here we show that the threshold energy can be substantially reduced when using adaptive optics to improve the irradiance distribution by spatial beam shaping. We found that for initial aberrations with a root-mean-square wave front error of only one third of the wavelength the threshold energy can still be reduced by a factor of three if the aberrations are corrected to the diffraction limit by adaptive optics. The transmitted pulse energy is reduced by 17% at twice the threshold. Furthermore, the gas bubble motions after breakdown for pulse trains at 5 kilohertz repetition rate show a more transverse direction in the corrected case compared to the more spherical distribution without correction. Our results demonstrate how both applied and transmitted pulse energy could be reduced during ophthalmic surgery when correcting for aberrations. As a consequence, the risk of retinal damage by transmitted energy and the extent of collateral damage to the focal volume could be minimized accordingly when using adaptive optics in fs-laser surgery. PMID:23761849

  14. Detection of immunocytological markers in photomicroscopic images

    NASA Astrophysics Data System (ADS)

    Friedrich, David; zur Jacobsmühlen, Joschka; Braunschweig, Till; Bell, André; Chaisaowong, Kraisorn; Knüchel-Clarke, Ruth; Aach, Til

    2012-03-01

    Early detection of cervical cancer can be achieved through visual analysis of cell anomalies. The established PAP smear achieves a sensitivity of 50-90%; most false-negative results are caused by mistakes in the preparation of the specimen or by reader variability in the subjective visual investigation. Since cervical cancer is caused by the human papillomavirus (HPV), the detection of HPV-infected cells opens new perspectives for screening of precancerous abnormalities. Immunocytochemical preparation marks HPV-positive cells in brush smears of the cervix with high sensitivity and specificity. The goal of this work is the automated detection of all marker-positive cells in microscopic images of a sample slide stained with an immunocytochemical marker. A color separation technique is used to estimate the concentrations of the immunocytochemical marker stain as well as of the counterstain used to color the nuclei. Segmentation methods based on Otsu's threshold selection method and on Mean Shift are adapted to the task of segmenting marker-positive cells and their nuclei. The best detection performance for single marker-positive cells was achieved with the adapted thresholding method, with a sensitivity of 95.9%. The contours differed by a modified Hausdorff distance (MHD) of 2.8 μm. Nuclei of single marker-positive cells were detected with a sensitivity of 95.9% and an MHD of 1.02 μm.
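
    A minimal sketch of the Otsu-based thresholding step this record describes, assuming the colour separation has already produced a stain-concentration image; the function name, the scikit-image calls, and the `min_area` cleanup are illustrative assumptions rather than the authors' implementation:

    ```python
    from skimage.filters import threshold_otsu
    from skimage.morphology import remove_small_objects

    def segment_marker_positive(stain, min_area=50):
        """Threshold an estimated stain-concentration image with Otsu's
        method and drop tiny connected components (assumed cleanup step)."""
        t = threshold_otsu(stain)          # maximises between-class variance
        mask = stain > t                   # marker-positive pixels
        return remove_small_objects(mask, min_size=min_area)
    ```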

  15. Image change detection using paradoxical theory for patient follow-up quantitation and therapy assessment.

    PubMed

    David, Simon; Visvikis, Dimitris; Quellec, Gwénolé; Le Rest, Catherine Cheze; Fernandez, Philippe; Allard, Michèle; Roux, Christian; Hatt, Mathieu

    2012-09-01

    In clinical oncology, positron emission tomography (PET) imaging can be used to assess therapeutic response by quantifying the evolution of semi-quantitative values, such as the standardized uptake value, early during treatment or after treatment. Current guidelines do not include metabolically active tumor volume (MATV) measurements and derived parameters, such as total lesion glycolysis (TLG), to characterize the response to treatment. To achieve automatic estimation of MATV variation during treatment, we propose an approach based on the change detection principle using the recent paradoxical theory, which models imprecision, uncertainty, and conflict between sources. It was applied here simultaneously to pre- and post-treatment PET scans. The proposed method was applied to both simulated and clinical datasets, and its performance was compared to adaptive thresholding applied separately to pre- and post-treatment PET scans. On simulated datasets, the adaptive threshold was associated with significantly higher classification errors than the developed approach. On clinical datasets, the proposed method led to results more consistent with the known partial-responder status of these patients. The method requires accurate rigid registration of both scans, which can be obtained only in specific body regions, and does not explicitly model uptake heterogeneity. In further investigations, change detection of intra-MATV tracer uptake heterogeneity will be developed by incorporating textural features into the proposed approach.

  16. Trend-Residual Dual Modeling for Detection of Outliers in Low-Cost GPS Trajectories.

    PubMed

    Chen, Xiaojian; Cui, Tingting; Fu, Jianhong; Peng, Jianwei; Shan, Jie

    2016-12-01

    Low-cost GPS receivers have become a ubiquitous and integral part of our daily life. Despite noticeable advantages such as being cheap, small, light, and easy to use, their limited positioning accuracy devalues and hampers their wide application for reliable mapping and analysis. Two conventional techniques for removing outliers from a GPS trajectory are thresholding and Kalman-based methods, which make it difficult to select appropriate thresholds and to model the trajectories. Moreover, they are insensitive to medium and small outliers, especially for low-sample-rate trajectories. This paper proposes a model-based GPS trajectory cleaner. Rather than examining speed and acceleration or assuming a pre-determined trajectory model, we first use a cubic smoothing spline to adaptively model the trend of the trajectory. The residuals, i.e., the differences between the trend and the GPS measurements, are then further modeled by a time-series method. Outliers are detected by scoring the residuals at every GPS trajectory point. Compared to the conventional procedures, the trend-residual dual modeling approach has the following features: (a) it is able to model trajectories and detect outliers adaptively; (b) only one critical value for outlier scores needs to be set; (c) it is able to robustly detect unapparent outliers; and (d) it is effective in cleaning outliers from GPS trajectories with low sample rates. Tests are carried out on three real-world GPS trajectory datasets. The evaluation demonstrates on average 9.27 times better performance in outlier detection for GPS trajectories than thresholding and Kalman-based techniques.
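
    A minimal sketch of the trend-residual idea for one coordinate channel, assuming timestamps are strictly increasing; the paper's time-series modeling of the residuals is simplified here to a robust MAD z-score, and the single critical value `crit` stands in for the one threshold the method requires:

    ```python
    import numpy as np
    from scipy.interpolate import UnivariateSpline

    def score_outliers(t, x, crit=3.0):
        """Cubic smoothing spline models the trend; residuals are scored
        robustly and one critical value flags suspected outliers."""
        trend = UnivariateSpline(t, x, k=3)    # t must be strictly increasing
        resid = x - trend(t)
        med = np.median(resid)
        mad = np.median(np.abs(resid - med)) + 1e-12
        scores = 0.6745 * np.abs(resid - med) / mad
        return scores > crit                   # True marks suspected outliers
    ```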

  17. Application of automatic threshold in dynamic target recognition with low contrast

    NASA Astrophysics Data System (ADS)

    Miao, Hua; Guo, Xiaoming; Chen, Yu

    2014-11-01

    Hybrid photoelectric joint transform correlators can realize automatic real-time recognition with high precision through the combination of optical and electronic devices. When recognizing targets with low contrast using a photoelectric joint transform correlator, because of differences in attitude, brightness, and grayscale between target and template, only four to five frames of a dynamic target can be recognized without any processing. A CCD camera is used to capture the dynamic target images at a rate of 25 frames per second. Automatic thresholding has many advantages, such as fast processing, effective shielding of noise interference, enhancement of the diffraction energy of useful information, and better preservation of the outlines of target and template, so it plays a very important role in target recognition by optical correlation. However, the threshold obtained automatically by the program cannot achieve the best recognition results for dynamic targets, because outline information is broken to some extent; in most cases the optimal threshold is obtained by manual intervention. Aiming at the characteristics of dynamic targets, an improved automatic thresholding procedure multiplies the Otsu threshold of target and template by a scale coefficient of the processed image and combines the result with mathematical morphology. With this improved automatic threshold processing, the optimal threshold can be obtained automatically for dynamic low-contrast target images, and the recognition rate of dynamic targets is improved through a reduced background noise effect and increased correlation information. A series of dynamic tank images moving at about 70 km/h is adopted as target images. The first frame of this series can correlate only with the third frame without any processing. With the Otsu threshold, frames up to the 80th can be recognized; with the improved automatic threshold processing of the joint images, this number increases to 89 frames. Experimental results show that the improved automatic threshold processing has special application value for the recognition of dynamic targets with low contrast.
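
    A sketch of the scaled-Otsu idea described above; the fixed `scale` value and the opening kernel are illustrative assumptions (the record does not give the exact scale-coefficient derivation), and the input is assumed to be an 8-bit grayscale image:

    ```python
    import cv2

    def improved_auto_threshold(gray, scale=0.9):
        """Scale the Otsu threshold by a coefficient, binarise, then clean
        the result with morphological opening. `gray` is 8-bit grayscale."""
        otsu_t, _ = cv2.threshold(gray, 0, 255,
                                  cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        _, binary = cv2.threshold(gray, int(otsu_t * scale), 255,
                                  cv2.THRESH_BINARY)
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
        return cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
    ```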

  18. A progressive data compression scheme based upon adaptive transform coding: Mixture block coding of natural images

    NASA Technical Reports Server (NTRS)

    Rost, Martin C.; Sayood, Khalid

    1991-01-01

    A method for efficiently coding natural images using a vector-quantized, variable-blocksize transform source coder is presented. The method, mixture block coding (MBC), incorporates variable-rate coding by using a mixture of discrete cosine transform (DCT) source coders. The selection of the coder used for any given image region is made through a threshold-driven distortion criterion. In this paper, MBC is used in two different applications. The base method is concerned with single-pass low-rate image data compression. The second is a natural extension of the base method which allows for low-rate progressive transmission (PT). Since the base method adapts easily to progressive coding, it offers the aesthetic advantage of progressive coding without incorporating extensive channel overhead. Image compression rates of approximately 0.5 bit/pel are demonstrated for both monochrome and color images.
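
    A sketch of the threshold-driven selection rule, with the coder "rate" parametrised, purely for illustration, by the number of retained DCT coefficients; the actual MBC coders are vector-quantized and differ from this stand-in:

    ```python
    import numpy as np
    from scipy.fft import dctn, idctn

    def code_block(block, rates=(4, 8, 16, 32), dist_t=25.0):
        """Try DCT coders of increasing rate and keep the cheapest one
        whose mean squared error falls below the threshold `dist_t`."""
        c = dctn(block, norm='ortho')
        order = np.argsort(np.abs(c).ravel())[::-1]   # biggest coefficients first
        for n_keep in rates:
            kept = np.zeros(c.size)
            kept[order[:n_keep]] = c.ravel()[order[:n_keep]]
            rec = idctn(kept.reshape(c.shape), norm='ortho')
            if np.mean((rec - block) ** 2) <= dist_t:
                break                                  # threshold met: stop here
        return n_keep, rec
    ```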

  19. Automated Solar Flare Detection and Feature Extraction in High-Resolution and Full-Disk Hα Images

    NASA Astrophysics Data System (ADS)

    Yang, Meng; Tian, Yu; Liu, Yangyi; Rao, Changhui

    2018-05-01

    In this article, an automated solar flare detection method applicable to both full-disk and local high-resolution Hα images is proposed. An adaptive gray threshold and an area threshold are used to segment the flare region. Features of each detected flare event are extracted, e.g. the start, peak, and end time, the importance class, and the brightness class. Experimental results have verified that the proposed method obtains more stable and accurate segmentation results than previous works on full-disk images from Big Bear Solar Observatory (BBSO) and Kanzelhöhe Observatory for Solar and Environmental Research (KSO), and satisfactory segmentation results on high-resolution images from the Goode Solar Telescope (GST). Moreover, the extracted flare features correlate well with the data given by KSO. The method may enable a more comprehensive statistical analysis of Hα solar flares.
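
    A minimal sketch of the two-threshold idea; the mean-plus-k-standard-deviations form of the adaptive gray threshold is an assumption (the record does not state the exact statistic), and `min_area` plays the role of the area threshold:

    ```python
    import numpy as np
    from scipy import ndimage

    def segment_flares(img, k=3.0, min_area=25):
        """Adaptive gray threshold selects bright pixels; an area threshold
        discards small non-flare regions."""
        mask = img > img.mean() + k * img.std()        # assumed threshold form
        labels, n = ndimage.label(mask)
        sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
        keep_ids = 1 + np.flatnonzero(np.asarray(sizes) >= min_area)
        return np.isin(labels, keep_ids)
    ```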

  20. Enhanced Sensitivity to Rapid Input Fluctuations by Nonlinear Threshold Dynamics in Neocortical Pyramidal Neurons.

    PubMed

    Mensi, Skander; Hagens, Olivier; Gerstner, Wulfram; Pozzorini, Christian

    2016-02-01

    The way in which single neurons transform input into output spike trains has fundamental consequences for network coding. Theories and modeling studies based on standard Integrate-and-Fire models implicitly assume that, in response to increasingly strong inputs, neurons modify their coding strategy by progressively reducing their selective sensitivity to rapid input fluctuations. Combining mathematical modeling with in vitro experiments, we demonstrate that, in L5 pyramidal neurons, the firing threshold dynamics adaptively adjust the effective timescale of somatic integration in order to preserve sensitivity to rapid signals over a broad range of input statistics. To this end, a new Generalized Integrate-and-Fire model featuring nonlinear firing threshold dynamics and conductance-based adaptation is introduced that outperforms state-of-the-art neuron models in predicting the spiking activity of neurons responding to a variety of in vivo-like fluctuating currents. Our model allows for efficient parameter extraction and can be analytically mapped to a Generalized Linear Model in which both the input filter--describing somatic integration--and the spike-history filter--accounting for spike-frequency adaptation--dynamically adapt to the input statistics, as experimentally observed. Overall, our results provide new insights into the computational role of different biophysical processes known to underlie adaptive coding in single neurons and support previous theoretical findings indicating that the nonlinear dynamics of the firing threshold due to Na+-channel inactivation regulate the sensitivity to rapid input fluctuations.

  1. Temporal resolution in children.

    PubMed

    Wightman, F; Allen, P; Dolan, T; Kistler, D; Jamieson, D

    1989-06-01

    The auditory temporal resolving power of young children was measured using an adaptive forced-choice psychophysical paradigm that was disguised as a video game. 20 children between 3 and 7 years of age and 5 adults were asked to detect the presence of a temporal gap in a burst of half-octave-band noise at band center frequencies of 400 and 2,000 Hz. The minimum detectable gap (gap threshold) was estimated adaptively in 20-trial runs. The mean gap thresholds in the 400-Hz condition were higher for the younger children than for the adults, with the 3-year-old children producing the highest thresholds. Gap thresholds in the 2,000-Hz condition were generally lower than in the 400-Hz condition and showed a similar age effect. All the individual adaptive runs were "adult-like," suggesting that the children were generally attentive to the task during each run. However, the variability of threshold estimates from run to run was substantial, especially in the 3-5-year-old children. Computer simulations suggested that this large within-subjects variability could have resulted from frequent, momentary lapses of attention, which would lead to "guessing" on a substantial portion of the trials.
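
    The record specifies only that the gap threshold was "estimated adaptively in 20-trial runs"; as a hedged illustration of how such adaptive tracking typically works, here is a common 2-down/1-up transformed staircase (the `respond` callback and step factor are assumptions, not the authors' exact rule):

    ```python
    def staircase_run(respond, start_gap=60.0, factor=0.8, n_trials=20):
        """Illustrative 2-down/1-up staircase: `respond(gap_ms)` returns True
        on a correct detection; the threshold estimate is the mean reversal gap."""
        gap, streak, last_dir, reversals = start_gap, 0, None, []
        for _ in range(n_trials):
            if respond(gap):
                streak += 1
                if streak == 2:                    # two correct: shrink the gap
                    if last_dir == 'up':
                        reversals.append(gap)      # direction change = reversal
                    gap, streak, last_dir = gap * factor, 0, 'down'
            else:                                  # one wrong: enlarge the gap
                if last_dir == 'down':
                    reversals.append(gap)
                gap, streak, last_dir = gap / factor, 0, 'up'
        return sum(reversals) / len(reversals) if reversals else gap
    ```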

  2. A segmentation method for lung nodule image sequences based on superpixels and density-based spatial clustering of applications with noise

    PubMed Central

    Zhang, Wei; Zhang, Xiaolong; Qiang, Yan; Tian, Qi; Tang, Xiaoxian

    2017-01-01

    The fast and accurate segmentation of lung nodule image sequences is the basis of subsequent processing and diagnostic analyses. However, previously investigated nodule segmentation algorithms cannot entirely segment cavitary nodules, and their segmentation of juxta-vascular nodules is inaccurate and inefficient. To solve these problems, we propose a new method for the segmentation of lung nodule image sequences based on superpixels and density-based spatial clustering of applications with noise (DBSCAN). First, our method uses three-dimensional computed tomography image features of the average intensity projection combined with multi-scale dot enhancement for preprocessing. Hexagonal clustering and morphology-optimized sequential linear iterative clustering (HMSLIC) for sequence image oversegmentation is then proposed to obtain superpixel blocks. An adaptive weight coefficient is then constructed to calculate the distance required between superpixels, to achieve precise lung nodule positioning and to obtain the subsequent clustering starting block. Moreover, by fitting the distance and detecting the change in slope, an accurate clustering threshold is obtained. Thereafter, a fast DBSCAN superpixel sequence clustering algorithm, optimized by clustering only the lung nodules and by the adaptive threshold, is used to obtain lung nodule mask sequences. Finally, the lung nodule image sequences are obtained. The experimental results show that our method rapidly, completely and accurately segments various types of lung nodule image sequences. PMID:28880916
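
    A sketch of the slope-change idea for choosing the DBSCAN threshold, assuming `centers` holds superpixel feature vectors; the largest-jump knee detection is a simplified stand-in for the paper's distance-fitting step:

    ```python
    import numpy as np
    from sklearn.cluster import DBSCAN
    from sklearn.neighbors import NearestNeighbors

    def cluster_superpixels(centers, min_samples=5):
        """Sort k-nearest-neighbour distances, take the point of maximum
        slope change as eps, then run DBSCAN with that adaptive threshold."""
        d, _ = (NearestNeighbors(n_neighbors=min_samples)
                .fit(centers).kneighbors(centers))
        kdist = np.sort(d[:, -1])
        eps = kdist[np.argmax(np.diff(kdist))]     # largest jump = slope change
        return DBSCAN(eps=eps, min_samples=min_samples).fit_predict(centers)
    ```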

  3. A NDVI assisted remote sensing image adaptive scale segmentation method

    NASA Astrophysics Data System (ADS)

    Zhang, Hong; Shen, Jinxiang; Ma, Yanmei

    2018-03-01

    Multiscale segmentation of images can effectively form the boundaries of objects at different scales. However, for remote sensing images with wide coverage and complicated ground objects, the number of suitable segmentation scales, and the size of each scale, are still difficult to determine accurately, which severely restricts rapid information extraction from remote sensing images. A great deal of experimentation has shown that the normalized difference vegetation index (NDVI) can effectively express the spectral characteristics of a variety of ground objects in remote sensing images. This paper presents an NDVI-assisted adaptive scale segmentation method for remote sensing images, which segments local areas by using an NDVI similarity threshold to iteratively select segmentation scales. For different regions consisting of different targets, different segmentation scale boundaries can be created. The experimental results showed that the adaptive segmentation method based on NDVI can effectively create object boundaries for the different ground objects in remote sensing images.

  4. Mouse epileptic seizure detection with multiple EEG features and simple thresholding technique

    NASA Astrophysics Data System (ADS)

    Tieng, Quang M.; Anbazhagan, Ashwin; Chen, Min; Reutens, David C.

    2017-12-01

    Objective. Epilepsy is a common neurological disorder characterized by recurrent, unprovoked seizures. The search for new treatments for seizures and epilepsy relies upon studies in animal models of epilepsy. To capture data on seizures, many applications require prolonged electroencephalography (EEG) with recordings that generate voluminous data. The desire for efficient evaluation of these recordings motivates the development of automated seizure detection algorithms. Approach. A new seizure detection method is proposed, based on multiple features and a simple thresholding technique. The features are derived from chaos theory, information theory and the power spectrum of EEG recordings and optimally exploit both linear and nonlinear characteristics of EEG data. Main result. The proposed method was tested with real EEG data from an experimental mouse model of epilepsy and distinguished seizures from other patterns with high sensitivity and specificity. Significance. The proposed approach introduces two new features: negative logarithm of adaptive correlation integral and power spectral coherence ratio. The combination of these new features with two previously described features, entropy and phase coherence, improved seizure detection accuracy significantly. Negative logarithm of adaptive correlation integral can also be used to compute the duration of automatically detected seizures.

  5. Adaptive time-sequential binary sensing for high dynamic range imaging

    NASA Astrophysics Data System (ADS)

    Hu, Chenhui; Lu, Yue M.

    2012-06-01

    We present a novel image sensor for high dynamic range imaging. The sensor performs an adaptive one-bit quantization at each pixel, with the pixel output switched from 0 to 1 only if the number of photons reaching that pixel is greater than or equal to a quantization threshold. With oracle knowledge of the incident light intensity, one can pick an optimal threshold (for that light intensity), and the corresponding Fisher information contained in the output sequence closely follows that of an ideal unquantized sensor over a wide range of intensity values. This observation suggests the potential gains one may achieve by adaptively updating the quantization thresholds. As the main contribution of this work, we propose a time-sequential threshold-updating rule that asymptotically approaches the performance of the oracle scheme. With every threshold mapped to a number of ordered states, the dynamics of the proposed scheme can be modeled as a parametric Markov chain. We show that the frequencies of different thresholds converge to a steady-state distribution that is concentrated around the optimal choice. Moreover, numerical experiments show that the theoretical performance measures (Fisher information and Cramér-Rao bounds) can be achieved by a maximum likelihood estimator, which is guaranteed to find the globally optimal solution due to the concavity of the log-likelihood functions. Compared with conventional image sensors and with the strategy utilizing a constant single-photon threshold considered in previous work, the proposed scheme attains orders-of-magnitude improvement in sensor dynamic range.
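
    A toy simulation of a time-sequential one-bit pixel; the simple up-after-1/down-after-0 walk over an ordered threshold ladder is an assumed stand-in for the paper's updating rule, chosen only to show how the state sequence forms a Markov chain:

    ```python
    import numpy as np

    def adaptive_binary_sensor(intensity, thresholds, n_frames=2000, seed=0):
        """Per frame the pixel emits 1 iff the Poisson photon count reaches
        the current threshold; the state steps up after a 1, down after a 0."""
        rng = np.random.default_rng(seed)
        state, ones = len(thresholds) // 2, 0
        for _ in range(n_frames):
            bit = rng.poisson(intensity) >= thresholds[state]
            ones += bit
            state = (min(state + 1, len(thresholds) - 1) if bit
                     else max(state - 1, 0))
        return ones / n_frames                 # firing rate encodes intensity
    ```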

  6. A generalized adaptive mathematical morphological filter for LIDAR data

    NASA Astrophysics Data System (ADS)

    Cui, Zheng

    Airborne Light Detection and Ranging (LIDAR) technology has become the primary method for deriving high-resolution Digital Terrain Models (DTMs), which are essential for studying Earth's surface processes, such as flooding and landslides. The critical step in generating a DTM is to separate ground and non-ground measurements in a voluminous LIDAR point dataset using a filter, because the DTM is created by interpolating ground points. As one of the widely used filtering methods, the progressive morphological (PM) filter has the advantages of classifying the LIDAR data at the point level, a linear computational complexity, and preserving the geometric shapes of terrain features. The filter works well in an urban setting with a gentle slope and a mixture of vegetation and buildings. However, the PM filter often incorrectly removes ground measurements in topographically high areas, along with non-ground objects of large size, because it uses a constant threshold slope, resulting in "cut-off" errors. A novel cluster analysis method was developed in this study and incorporated into the PM filter to prevent the removal of ground measurements at topographic highs. Furthermore, to obtain optimal filtering results for an area with undulating terrain, a trend analysis method was developed to adaptively estimate the slope-related thresholds of the PM filter based on changes of topographic slopes and the characteristics of non-terrain objects. The comparison of the PM and generalized adaptive PM (GAPM) filters for selected study areas indicates that the GAPM filter preserves most of the "cut-off" points incorrectly removed by the PM filter. The application of the GAPM filter to seven ISPRS benchmark datasets shows that the GAPM filter reduces the filtering error by 20% on average, compared with the method used by the popular commercial software TerraScan. The combination of the cluster method, adaptive trend analysis, and the PM filter allows users without much experience in processing LIDAR data to effectively and efficiently identify ground measurements for complex terrains in a large LIDAR dataset. The GAPM filter is highly automatic and requires little human input. Therefore, it can significantly reduce the effort of manually processing voluminous LIDAR measurements.

  7. Image denoising in mixed Poisson-Gaussian noise.

    PubMed

    Luisier, Florian; Blu, Thierry; Unser, Michael

    2011-03-01

    We propose a general methodology (PURE-LET) to design and optimize a wide class of transform-domain thresholding algorithms for denoising images corrupted by mixed Poisson-Gaussian noise. We express the denoising process as a linear expansion of thresholds (LET) that we optimize by relying on a purely data-adaptive unbiased estimate of the mean-squared error (MSE), derived in a non-Bayesian framework (PURE: Poisson-Gaussian unbiased risk estimate). We provide a practical approximation of this theoretical MSE estimate for the tractable optimization of arbitrary transform-domain thresholding. We then propose a pointwise estimator for undecimated filterbank transforms, which consists of subband-adaptive thresholding functions with signal-dependent thresholds that are globally optimized in the image domain. We finally demonstrate the potential of the proposed approach through extensive comparisons with state-of-the-art techniques that are specifically tailored to the estimation of Poisson intensities. We also present denoising results obtained on real images of low-count fluorescence microscopy.

  8. Zone-size nonuniformity of 18F-FDG PET regional textural features predicts survival in patients with oropharyngeal cancer.

    PubMed

    Cheng, Nai-Ming; Fang, Yu-Hua Dean; Lee, Li-yu; Chang, Joseph Tung-Chieh; Tsan, Din-Li; Ng, Shu-Hang; Wang, Hung-Ming; Liao, Chun-Ta; Yang, Lan-Yan; Hsu, Ching-Han; Yen, Tzu-Chen

    2015-03-01

    The question as to whether the regional textural features extracted from PET images predict prognosis in oropharyngeal squamous cell carcinoma (OPSCC) remains open. In this study, we investigated the prognostic impact of regional heterogeneity in patients with T3/T4 OPSCC. We retrospectively reviewed the records of 88 patients with T3 or T4 OPSCC who had completed primary therapy. Progression-free survival (PFS) and disease-specific survival (DSS) were the main outcome measures. In an exploratory analysis, a standardized uptake value of 2.5 (SUV 2.5) was taken as the cut-off value for the detection of tumour boundaries. A fixed threshold at 42 % of the maximum SUV (SUVmax 42 %) and an adaptive threshold method were then used for validation. Regional textural features were extracted from pretreatment (18)F-FDG PET/CT images using the grey-level run length encoding method and grey-level size zone matrix. The prognostic significance of PET textural features was examined using receiver operating characteristic (ROC) curves and Cox regression analysis. Zone-size nonuniformity (ZSNU) was identified as an independent predictor of PFS and DSS. Its prognostic impact was confirmed using both the SUVmax 42 % and the adaptive threshold segmentation methods. Based on (1) total lesion glycolysis, (2) uniformity (a local scale texture parameter), and (3) ZSNU, we devised a prognostic stratification system that allowed the identification of four distinct risk groups. The model combining the three prognostic parameters showed a higher predictive value than each variable alone. ZSNU is an independent predictor of outcome in patients with advanced T-stage OPSCC, and may improve their prognostic stratification.
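
    For reference, zone-size nonuniformity can be computed directly from a grey-level size-zone matrix using the standard texture definition; whether the authors apply exactly this normalisation is an assumption:

    ```python
    import numpy as np

    def zone_size_nonuniformity(glszm):
        """ZSNU from a grey-level size-zone matrix P(i, j), with rows as
        grey levels and columns as zone sizes: sum_j (sum_i P(i,j))^2 / N_z."""
        p = np.asarray(glszm, dtype=float)
        n_z = p.sum()                          # total number of zones
        return float((p.sum(axis=0) ** 2).sum() / n_z)
    ```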

  9. Comparative performance of two quantitative safety signalling methods: implications for use in a pharmacovigilance department.

    PubMed

    Almenoff, June S; LaCroix, Karol K; Yuen, Nancy A; Fram, David; DuMouchel, William

    2006-01-01

    There is increasing interest in using disproportionality-based signal detection methods to support postmarketing safety surveillance activities. Two commonly used methods, the empirical Bayes multi-item gamma Poisson shrinker (MGPS) and the proportional reporting ratio (PRR), perform differently with respect to the number and types of signals detected. The goal of this study was to compare and analyse the performance characteristics of these two methods, to understand why they differ, and to consider the practical implications of these differences for a large, industry-based pharmacovigilance department. We compared the numbers and types of signals of disproportionate reporting (SDRs) obtained with MGPS and PRR using two postmarketing safety databases and a simulated database. We recorded signal counts and performed a qualitative comparison of the drug-event combinations signalled by the two methods, as well as a sensitivity analysis to better understand how the thresholds commonly used for these methods impact their performance. PRR detected more SDRs than MGPS. We observed that MGPS is less subject to confounding by demographic factors, because it employs stratification, and is more stable than PRR when report counts are low. Simulation experiments performed using published empirical thresholds demonstrated that PRR detected false-positive signals at a rate of 1.1%, while MGPS did not detect any statistical false positives. In an attempt to separate the effect of the choice of signal threshold from more fundamental methodological differences, we performed a series of experiments in which we modified the conventional threshold values for each method so that each method detected the same number of SDRs for the example drugs studied. This analysis, which provided quantitative examples of the relationship between the published thresholds for the two methods, demonstrates that the signalling criterion published for PRR has a higher signalling frequency than that published for MGPS. The performance differences between the PRR and MGPS methods are related to (i) greater confounding by demographic factors with PRR; (ii) a higher tendency of PRR to detect false-positive signals when the number of reports is small; and (iii) the conventional thresholds that have been adopted for each method. PRR tends to be more 'sensitive' and less 'specific' than MGPS. A high-specificity disproportionality method, when used in conjunction with medical triage and investigation of critical medical events, may provide an efficient and robust approach to applying quantitative methods in routine postmarketing pharmacovigilance.

  10. Evidence Accumulator or Decision Threshold – Which Cortical Mechanism are We Observing?

    PubMed Central

    Simen, Patrick

    2012-01-01

    Most psychological models of perceptual decision making are of the accumulation-to-threshold variety. The neural basis of accumulation in parietal and prefrontal cortex is therefore a topic of great interest in neuroscience. In contrast, threshold mechanisms have received less attention, and their neural basis has usually been sought in subcortical structures. Here I analyze a model of a decision threshold that can be implemented in the same cortical areas as evidence accumulators, and whose behavior bears on two open questions in decision neuroscience: (1) When ramping activity is observed in a brain region during decision making, does it reflect evidence accumulation? (2) Are changes in speed-accuracy tradeoffs and response biases more likely to be achieved by changes in thresholds, or in accumulation rates and starting points? The analysis suggests that task-modulated ramping activity, by itself, is weak evidence that a brain area mediates evidence accumulation as opposed to threshold readout; and that signs of modulated accumulation are as likely to indicate threshold adaptation as adaptation of starting points and accumulation rates. These conclusions imply that how thresholds are modeled can dramatically impact accumulator-based interpretations of this data. PMID:22737136

  11. A multigrid method for steady Euler equations on unstructured adaptive grids

    NASA Technical Reports Server (NTRS)

    Riemslagh, Kris; Dick, Erik

    1993-01-01

    A flux-difference splitting type algorithm is formulated for the steady Euler equations on unstructured grids. The polynomial flux-difference splitting technique is used. A vertex-centered finite volume method is employed on a triangular mesh. The multigrid method is in defect-correction form. A relaxation procedure with a first-order accurate inner iteration and a second-order correction performed only on the finest grid is used. A multi-stage Jacobi relaxation method is employed as a smoother; since the grid is unstructured, a Jacobi type is chosen, and the multi-staging is necessary to provide sufficient smoothing properties. The domain is discretized using a Delaunay triangular mesh generator. Three grids with a more or less uniform distribution of nodes but with different resolutions are generated by successive refinement of the coarsest grid. Nodes of coarser grids appear in the finer grids. The multigrid method is started on these grids. As soon as the residual drops below a threshold value, an adaptive refinement is started. The solution on the adaptively refined grid is accelerated by a multigrid procedure. The coarser multigrid levels are generated by successive coarsening through point removal. The adaptation cycle is repeated a few times. Results are given for the transonic flow over a NACA-0012 airfoil.

  12. Sign language spotting with a threshold model based on conditional random fields.

    PubMed

    Yang, Hee-Deok; Sclaroff, Stan; Lee, Seong-Whan

    2009-07-01

    Sign language spotting is the task of detecting and recognizing signs in a signed utterance, in a set vocabulary. The difficulty of sign language spotting is that instances of signs vary in both motion and appearance. Moreover, signs appear within a continuous gesture stream, interspersed with transitional movements between signs in a vocabulary and nonsign patterns (which include out-of-vocabulary signs, epentheses, and other movements that do not correspond to signs). In this paper, a novel method for designing threshold models in a conditional random field (CRF) model is proposed which performs an adaptive threshold for distinguishing between signs in a vocabulary and nonsign patterns. A short-sign detector, a hand appearance-based sign verification method, and a subsign reasoning method are included to further improve sign language spotting accuracy. Experiments demonstrate that our system can spot signs from continuous data with an 87.0 percent spotting rate and can recognize signs from isolated data with a 93.5 percent recognition rate versus 73.5 percent and 85.4 percent, respectively, for CRFs without a threshold model, short-sign detection, subsign reasoning, and hand appearance-based sign verification. Our system can also achieve a 15.0 percent sign error rate (SER) from continuous data and a 6.4 percent SER from isolated data versus 76.2 percent and 14.5 percent, respectively, for conventional CRFs.

  13. Shape anomaly detection under strong measurement noise: An analytical approach to adaptive thresholding

    NASA Astrophysics Data System (ADS)

    Krasichkov, Alexander S.; Grigoriev, Eugene B.; Bogachev, Mikhail I.; Nifontov, Eugene M.

    2015-10-01

    We suggest an analytical approach to the adaptive thresholding in a shape anomaly detection problem. We find an analytical expression for the distribution of the cosine similarity score between a reference shape and an observational shape hindered by strong measurement noise that depends solely on the noise level and is independent of the particular shape analyzed. The analytical treatment is also confirmed by computer simulations and shows nearly perfect agreement. Using this analytical solution, we suggest an improved shape anomaly detection approach based on adaptive thresholding. We validate the noise robustness of our approach using typical shapes of normal and pathological electrocardiogram cycles hindered by additive white noise. We show explicitly that under high noise levels our approach considerably outperforms the conventional tactic that does not take into account variations in the noise level.
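
    A minimal sketch of the score at the heart of this record, plus an empirical stand-in for the analytical null distribution: the simulation-based quantile below is an assumption used only to illustrate how a threshold that "depends solely on the noise level" can be obtained, not the paper's closed-form result:

    ```python
    import numpy as np

    def cosine_score(reference, observed):
        """Cosine similarity between mean-removed shape vectors."""
        r = reference - reference.mean()
        o = observed - observed.mean()
        return float(r @ o / (np.linalg.norm(r) * np.linalg.norm(o) + 1e-12))

    def adaptive_threshold(reference, noise_sigma, alpha=0.01,
                           n_sim=10000, seed=0):
        """Simulate scores of the reference against noisy copies of itself
        at the estimated noise level; scores below the alpha-quantile of
        this null are flagged as shape anomalies."""
        rng = np.random.default_rng(seed)
        sims = [cosine_score(reference,
                             reference + rng.normal(0, noise_sigma,
                                                    reference.shape))
                for _ in range(n_sim)]
        return float(np.quantile(sims, alpha))
    ```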

  14. Boundary fitting based segmentation of fluorescence microscopy images

    NASA Astrophysics Data System (ADS)

    Lee, Soonam; Salama, Paul; Dunn, Kenneth W.; Delp, Edward J.

    2015-03-01

    Segmentation is a fundamental step in quantifying characteristics, such as volume, shape, and orientation of cells and/or tissue. However, quantification of these characteristics still poses a challenge due to the unique properties of microscopy volumes. This paper proposes a 2D segmentation method that utilizes a combination of adaptive and global thresholding, potentials, z direction refinement, branch pruning, end point matching, and boundary fitting methods to delineate tubular objects in microscopy volumes. Experimental results demonstrate that the proposed method achieves better performance than an active contours based scheme.

  15. Thermal sensation and climate: a comparison of UTCI and PET thresholds in different climates

    NASA Astrophysics Data System (ADS)

    Pantavou, Katerina; Lykoudis, Spyridon; Nikolopoulou, Marialena; Tsiros, Ioannis X.

    2018-06-01

    The influence of physiological acclimatization and psychological adaptation on thermal perception is well documented and has revealed the importance of thermal experience and expectation in the evaluation of environmental stimuli. Seasonal patterns of thermal perception have been studied, and calibrated thermal indices' scales have been proposed to obtain meaningful interpretations of thermal sensation indices in different climate regions. The current work attempts to quantify the contribution of climate to the long-term thermal adaptation by examining the relationship between climate normal annual air temperature (1971-2000) and such climate-calibrated thermal indices' assessment scales. The thermal sensation ranges of two thermal indices, the Universal Thermal Climate Index (UTCI) and the Physiological Equivalent Temperature Index (PET), were calibrated for three warm temperate climate contexts (Cfa, Cfb, Csa), against the subjective evaluation of the thermal environment indicated by interviewees during field surveys conducted at seven European cities: Athens (GR), Thessaloniki (GR), Milan (IT), Fribourg (CH), Kassel (DE), Cambridge (UK), and Sheffield (UK), under the same research protocol. Then, calibrated scales for other climate contexts were added from the literature, and the relationship between the respective scales' thresholds and climate normal annual air temperature was examined. To maintain the maximum possible comparability, three methods were applied for the calibration, namely linear, ordinal, and probit regression. The results indicated that the calibrated UTCI and PET thresholds increase with the climate normal annual air temperature of the survey city. To investigate further climates, we also included in the analysis results of previous studies presenting only thresholds for neutral thermal sensation. The average increase of the respective thresholds in the case of neutral thermal sensation was about 0.6 °C for each 1 °C increase of the normal annual air temperature for both indices, although the relationship was statistically significant only for PET.

  16. Time-of-Travel Methods for Measuring Optical Flow on Board a Micro Flying Robot

    PubMed Central

    Vanhoutte, Erik; Mafrica, Stefano; Ruffier, Franck; Bootsma, Reinoud J.; Serres, Julien

    2017-01-01

    For use in autonomous micro air vehicles, visual sensors must not only be small, lightweight and insensitive to light variations; on-board autopilots also require fast and accurate optical flow measurements over a wide range of speeds. Using an auto-adaptive bio-inspired Michaelis–Menten Auto-adaptive Pixel (M2APix) analog silicon retina, in this article, we present comparative tests of two optical flow calculation algorithms operating under lighting conditions from 6×10⁻⁷ to 1.6×10⁻² W·cm⁻² (i.e., from 0.2 to 12,000 lux for human vision). Contrast “time of travel” between two adjacent light-sensitive pixels was determined by thresholding and by cross-correlating the two pixels’ signals, with measurement frequency up to 5 kHz for the 10 local motion sensors of the M2APix sensor. While both algorithms adequately measured optical flow between 25°/s and 1000°/s, thresholding gave rise to a lower precision, especially due to a larger number of outliers at higher speeds. Compared to thresholding, cross-correlation also allowed for a higher rate of optical flow output (99 Hz and 1195 Hz, respectively) but required substantially more computational resources. PMID:28287484
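
    A minimal sketch of the cross-correlation variant of "time of travel"; `sig_a`/`sig_b` are the two adjacent pixels' sampled signals, `fs` the sampling rate, and `pitch` the inter-pixel angle, all assumed inputs rather than the authors' exact interface:

    ```python
    import numpy as np

    def time_of_travel_flow(sig_a, sig_b, fs, pitch):
        """The lag maximising the cross-correlation of the two pixels'
        signals gives the contrast delay; optical flow = pitch / delay."""
        a = sig_a - sig_a.mean()
        b = sig_b - sig_b.mean()
        xc = np.correlate(b, a, mode='full')
        lag = int(np.argmax(xc)) - (len(a) - 1)   # samples by which b trails a
        if lag <= 0:
            return np.inf                         # no forward motion measured
        return pitch * fs / lag                   # e.g. degrees per second
    ```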

  17. An adaptive surface filter for airborne laser scanning point clouds by means of regularization and bending energy

    NASA Astrophysics Data System (ADS)

    Hu, Han; Ding, Yulin; Zhu, Qing; Wu, Bo; Lin, Hui; Du, Zhiqiang; Zhang, Yeting; Zhang, Yunsheng

    2014-06-01

    The filtering of point clouds is a ubiquitous task in the processing of airborne laser scanning (ALS) data; however, such filtering processes are difficult because of the complex configuration of the terrain features. The classical filtering algorithms rely on the cautious tuning of parameters to handle various landforms. To address the challenge posed by the bundling of different terrain features into a single dataset and to surmount the sensitivity of the parameters, in this study, we propose an adaptive surface filter (ASF) for the classification of ALS point clouds. Based on the principle that the threshold should vary in accordance to the terrain smoothness, the ASF embeds bending energy, which quantitatively depicts the local terrain structure to self-adapt the filter threshold automatically. The ASF employs a step factor to control the data pyramid scheme in which the processing window sizes are reduced progressively, and the ASF gradually interpolates thin plate spline surfaces toward the ground with regularization to handle noise. Using the progressive densification strategy, regularization and self-adaption, both performance improvement and resilience to parameter tuning are achieved. When tested against the benchmark datasets provided by ISPRS, the ASF performs the best in comparison with all other filtering methods, yielding an average total error of 2.85% when optimized and 3.67% when using the same parameter set.

  18. A Well-Tempered Hybrid Method for Solving Challenging Time-Dependent Density Functional Theory (TDDFT) Systems.

    PubMed

    Kasper, Joseph M; Williams-Young, David B; Vecharynski, Eugene; Yang, Chao; Li, Xiaosong

    2018-04-10

    The time-dependent Hartree-Fock (TDHF) and time-dependent density functional theory (TDDFT) equations allow one to probe electronic resonances of a system quickly and inexpensively. However, the iterative solution of the eigenvalue problem can be challenging or impossible to converge, using standard methods such as the Davidson algorithm for spectrally dense regions in the interior of the spectrum, as are common in X-ray absorption spectroscopy (XAS). More robust solvers, such as the generalized preconditioned locally harmonic residual (GPLHR) method, can alleviate this problem, but at the expense of higher average computational cost. A hybrid method is proposed which adapts to the problem in order to maximize computational performance while providing the superior convergence of GPLHR. In addition, a modification to the GPLHR algorithm is proposed to adaptively choose the shift parameter to enforce a convergence of states above a predefined energy threshold.

  19. Modern Adaptive Analytics Approach to Lowering Seismic Network Detection Thresholds

    NASA Astrophysics Data System (ADS)

    Johnson, C. E.

    2017-12-01

    Modern seismic networks present a number of challenges, perhaps the most notable of which relate to 1) extreme variation in station density, 2) temporal variation in station availability, and 3) the need to achieve detectability for much smaller events of strategic importance. The first of these has been reasonably addressed in the development of modern seismic associators, such as GLASS 3.0 by the USGS/NEIC, though some work remains to be done in this area. However, the latter two challenges demand special attention. Station availability is impacted by weather, equipment failure, and the addition or removal of stations; and while thresholds have been pushed to increasingly smaller magnitudes, new algorithms are needed to achieve even lower thresholds. Station availability can be addressed by a modern, adaptive architecture that maintains specified performance envelopes using adaptive analytics coupled with complexity theory. Finally, detection thresholds can be lowered using a novel approach that tightly couples waveform analytics with the event detection and association processes, based on a principled repicking algorithm that uses particle realignment for enhanced phase discrimination.

  20. Vehicle tracking using fuzzy-based vehicle detection window with adaptive parameters

    NASA Astrophysics Data System (ADS)

    Chitsobhuk, Orachat; Kasemsiri, Watjanapong; Glomglome, Sorayut; Lapamonpinyo, Pipatphon

    2018-04-01

    In this paper, a fuzzy-based vehicle tracking system is proposed. The proposed system consists of two main processes: vehicle detection and vehicle tracking. In the first process, the Gradient-based Adaptive Threshold Estimation (GATE) algorithm is adopted to provide a suitable threshold value for Sobel edge detection. The estimated threshold adapts to the changes of diverse illumination conditions throughout the day, which leads to greater vehicle detection performance compared to a fixed, user-defined threshold. In the second process, this paper proposes a novel vehicle tracking algorithm, namely Fuzzy-based Vehicle Analysis (FBA), to reduce false estimates in vehicle tracking caused by the uneven edges of large vehicles and by vehicles changing lanes. The proposed FBA algorithm employs the average edge density and the Horizontal Moving Edge Detection (HMED) algorithm to alleviate these problems, adopting fuzzy rule-based algorithms to rectify the vehicle tracking. The experimental results demonstrate that the proposed system provides high vehicle detection accuracy of about 98.22% with a low false detection rate of about 3.92%.
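
    A sketch in the spirit of gradient-based adaptive threshold estimation; the keep-a-fixed-fraction-of-strongest-gradients (quantile) form is an assumption, not the published GATE estimator, and serves only to show how a per-frame threshold can track illumination changes:

    ```python
    import cv2
    import numpy as np

    def gate_style_edges(gray, keep=0.10):
        """Pick the Sobel-magnitude threshold that keeps a fixed fraction
        of the strongest gradients in each frame. `gray` is single-channel."""
        gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
        gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
        mag = cv2.magnitude(gx, gy)
        t = np.quantile(mag, 1.0 - keep)       # adapts frame by frame
        return (mag >= t).astype(np.uint8) * 255
    ```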

  1. Matrix-normalised quantification of species by threshold-calibrated competitive real-time PCR: allergenic peanut in food as one example.

    PubMed

    Holzhauser, Thomas; Kleiner, Kornelia; Janise, Annabella; Röder, Martin

    2014-11-15

    A novel method to quantify species or DNA on the basis of a competitive quantitative real-time polymerase chain reaction (cqPCR) was developed; potentially allergenic peanut in food served as one example. Based on an internal competitive DNA sequence for normalisation of DNA extraction and amplification, the cqPCR was threshold-calibrated against 100 mg/kg incurred peanut in milk chocolate. No external standards were necessary. The competitive molecule successfully served as calibrator for quantification, matrix normalisation, and inhibition control. Although designed for verification of a virtual threshold of 100 mg/kg, the method allowed quantification of 10-1,000 mg/kg peanut incurred in various food matrices without further matrix adaptation: on the basis of four PCR replicates per sample, the mean recovery of 10-1,000 mg/kg peanut in chocolate, vanilla ice cream, cookie dough, cookie, and muesli was 87% (range: 39-147%), in comparison to 199% (range: 114-237%) for three commercial ELISA kits.

  2. Study of communications data compression methods

    NASA Technical Reports Server (NTRS)

    Jones, H. W.

    1978-01-01

    A simple monochrome conditional replenishment system was extended to higher compression and higher motion levels by incorporating spatially adaptive quantizers and field repeating. Conditional replenishment combines intraframe and interframe compression, and both areas are investigated. The gain of conditional replenishment depends on the fraction of the image changing, since only the changed parts of the image need to be transmitted. If the transmission rate is set so that only one fourth of the image can be transmitted in each field, greater change fractions will overload the system. A computer simulation was prepared which incorporated (1) field repeat of changes, (2) a variable change threshold, (3) frame repeat for high change, and (4) two-mode, variable-rate Hadamard intraframe quantizers. The field repeat gives 2:1 compression in moving areas without noticeable degradation. The variable change threshold allows some flexibility in dealing with varying change rates, but the threshold variation must be limited for acceptable performance.
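
    A minimal sketch of the conditional-replenishment decision; the per-block mean-absolute-difference criterion and the 8×8 block size are illustrative assumptions, but the returned change fraction is exactly the quantity the variable-threshold logic must keep below channel capacity:

    ```python
    import numpy as np

    def changed_blocks(prev, curr, block=8, change_t=12.0):
        """A block is retransmitted only if its mean absolute difference
        from the previous field exceeds the change threshold."""
        h, w = prev.shape
        diff = np.abs(curr.astype(float) - prev.astype(float))
        tiles = diff[:h - h % block, :w - w % block].reshape(
            h // block, block, w // block, block).mean(axis=(1, 3))
        mask = tiles > change_t
        return mask, mask.mean()               # which blocks, change fraction
    ```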

  3. Data Transmission Signal Design and Analysis

    NASA Technical Reports Server (NTRS)

    Moore, J. D.

    1972-01-01

    The error performances of several digital signaling methods are determined as a function of a specified signal-to-noise ratio. Results are obtained for Gaussian noise and impulse noise. The performance of a receiver for differentially encoded biphase signaling is obtained by extending the results for differential phase-shift keying. The analysis presented obtains a closed-form answer through the use of some simplifying assumptions. The results give insight into the analysis problem; however, the actual error performance may show degradation because of the assumptions made in the analysis. Bipolar signaling decision-threshold selection is investigated. The optimum threshold depends on the signal-to-noise ratio and requires the use of an adaptive receiver.

  4. Epidemic spreading on preferred degree adaptive networks.

    PubMed

    Jolad, Shivakumar; Liu, Wenjia; Schmittmann, B; Zia, R K P

    2012-01-01

    We study the standard SIS model of epidemic spreading on networks where individuals have a fluctuating number of connections around a preferred degree κ. Using very simple rules for forming such preferred degree networks, we find some unusual statistical properties not found in familiar Erdős-Rényi or scale-free networks. By letting κ depend on the fraction of infected individuals, we model the behavioral changes in response to how the extent of the epidemic is perceived. In our models, the behavioral adaptations can be either 'blind' or 'selective'--depending on whether a node adapts by cutting or adding links to randomly chosen partners or selectively, based on the state of the partner. For a frozen preferred network, we find that the infection threshold follows the heterogeneous mean-field result λc/μ = ⟨κ⟩/⟨κ²⟩ and the phase diagram matches the predictions of the annealed adjacency matrix (AAM) approach. With 'blind' adaptations, although the epidemic threshold remains unchanged, the infection level is substantially affected, depending on the details of the adaptation. The 'selective' adaptive SIS models are most interesting. Both the threshold and the level of infection change, controlled not only by how the adaptations are implemented but also by how often the nodes cut/add links (compared to the time scales of the epidemic spreading). A simple mean-field theory is presented for the selective adaptations which captures the qualitative and some of the quantitative features of the infection phase diagram.
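
    The quoted mean-field threshold is directly computable from a degree sequence; the Poisson-distributed degrees in the usage example are only a rough stand-in for a preferred-degree network concentrated near κ = 10:

    ```python
    import numpy as np

    def sis_threshold(degrees):
        """Heterogeneous mean-field SIS threshold: lambda_c / mu = <k> / <k^2>."""
        k = np.asarray(degrees, dtype=float)
        return k.mean() / (k ** 2).mean()

    # Usage example with an illustrative degree sequence near kappa = 10.
    print(sis_threshold(np.random.default_rng(1).poisson(10, size=10000)))
    ```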

  5. Automated and Adaptable Quantification of Cellular Alignment from Microscopic Images for Tissue Engineering Applications

    PubMed Central

    Xu, Feng; Beyazoglu, Turker; Hefner, Evan; Gurkan, Umut Atakan

    2011-01-01

    Cellular alignment plays a critical role in functional, physical, and biological characteristics of many tissue types, such as muscle, tendon, nerve, and cornea. Current efforts toward regeneration of these tissues include replicating the cellular microenvironment by developing biomaterials that facilitate cellular alignment. To assess the functional effectiveness of the engineered microenvironments, one essential criterion is quantification of cellular alignment. Therefore, there is a need for rapid, accurate, and adaptable methodologies to quantify cellular alignment for tissue engineering applications. To address this need, we developed an automated method, binarization-based extraction of alignment score (BEAS), to determine cell orientation distribution in a wide variety of microscopic images. This method combines a sequenced application of median and band-pass filters, locally adaptive thresholding approaches and image processing techniques. Cellular alignment score is obtained by applying a robust scoring algorithm to the orientation distribution. We validated the BEAS method by comparing the results with the existing approaches reported in literature (i.e., manual, radial fast Fourier transform-radial sum, and gradient based approaches). Validation results indicated that the BEAS method resulted in statistically comparable alignment scores with the manual method (coefficient of determination R² = 0.92). Therefore, the BEAS method introduced in this study could enable accurate, convenient, and adaptable evaluation of engineered tissue constructs and biomaterials in terms of cellular alignment and organization. PMID:21370940

  6. Psychophysical measurements in children: challenges, pitfalls, and considerations.

    PubMed

    Witton, Caroline; Talcott, Joel B; Henning, G Bruce

    2017-01-01

    Measuring sensory sensitivity is important in studying development and developmental disorders. However, with children, there is a need to balance reliable but lengthy sensory tasks with the child's ability to maintain motivation and vigilance. We used simulations to explore the problems associated with shortening adaptive psychophysical procedures, and suggest how these problems might be addressed. We quantify how adaptive procedures with too few reversals can over-estimate thresholds, introduce substantial measurement error, and make estimates of individual thresholds less reliable. The associated measurement error also obscures group differences. Adaptive procedures with children should therefore use as many reversals as possible, to reduce the effects of both Type 1 and Type 2 errors. Differences in response consistency, resulting from lapses in attention, further increase the over-estimation of threshold. Comparisons between data from individuals who may differ in lapse rate are therefore problematic, but measures to estimate and account for lapse rates in analyses may mitigate this problem.

  7. Diving at altitude: from definition to practice.

    PubMed

    Egi, S Murat; Pieri, Massimo; Marroni, Alessandro

    2014-01-01

    Diving above sea level has different motivations for recreational, military, commercial and scientific activities. Despite the apparently wide practice of inland diving, there are three major open questions about diving at altitude: the threshold elevation that requires changes to sea-level procedures; the upper altitude limit of the applicability of these modifications; and the independent validation of altitude-adaptation methods for decompression algorithms. The first problem is solved by converting the normal fluctuation in barometric pressure to an altitude equivalent. Based on the barometric variations recorded from a meteorological center, it is possible to suggest 600 meters as a threshold for classifying a dive as an "altitude" dive. The second problem is solved by proposing the threshold altitude of aviation (2,400 meters) to classify "high" altitude dives. The DAN (Divers Alert Network) Europe diving database (DB) is analyzed to solve the third problem. The database consists of 65,050 dives collected from different dive computers. A total of 1,467 dives were initially classified as altitude dives. However, by checking the elevation according to the logged geographical coordinates, 1,284 dives were disqualified because the altitude setting had been used as a conservative setting by the dive computer even though the dive was made at sea level. Furthermore, according to the definition put forward in this manuscript, 72 dives were disqualified because the surface-level elevation was lower than 600 meters. The amount of field data (111 dives) is still too low to use for the validation of any particular method of altitude adaptation for decompression algorithms.

  8. Trend-Residual Dual Modeling for Detection of Outliers in Low-Cost GPS Trajectories

    PubMed Central

    Chen, Xiaojian; Cui, Tingting; Fu, Jianhong; Peng, Jianwei; Shan, Jie

    2016-01-01

    Low-cost GPS receivers have become a ubiquitous and integral part of our daily life. Despite noticeable advantages such as being cheap, small, light, and easy to use, their limited positioning accuracy devalues and hampers their wide application for reliable mapping and analysis. Two conventional techniques for removing outliers from a GPS trajectory are thresholding and Kalman-based methods, which make it difficult to select appropriate thresholds and to model the trajectories. Moreover, they are insensitive to medium and small outliers, especially for low-sample-rate trajectories. This paper proposes a model-based GPS trajectory cleaner. Rather than examining speed and acceleration or assuming a pre-determined trajectory model, we first use a cubic smoothing spline to adaptively model the trend of the trajectory. The residuals, i.e., the differences between the trend and the GPS measurements, are then further modeled by a time-series method. Outliers are detected by scoring the residuals at every GPS trajectory point. Compared to the conventional procedures, the trend-residual dual modeling approach has the following features: (a) it is able to model trajectories and detect outliers adaptively; (b) only one critical value for outlier scores needs to be set; (c) it is able to robustly detect unapparent outliers; and (d) it is effective in cleaning outliers from GPS trajectories with low sample rates. Tests are carried out on three real-world GPS trajectory datasets. The evaluation demonstrates on average 9.27 times better performance in outlier detection for GPS trajectories than thresholding and Kalman-based techniques. PMID:27916944

  9. A SVM-based quantitative fMRI method for resting-state functional network detection.

    PubMed

    Song, Xiaomu; Chen, Nan-kuei

    2014-09-01

    Resting-state functional magnetic resonance imaging (fMRI) aims to measure baseline neuronal connectivity independent of specific functional tasks and to capture changes in the connectivity due to neurological diseases. Most existing network detection methods rely on a fixed threshold to identify functionally connected voxels under the resting state. Due to fMRI non-stationarity, the threshold cannot adapt to variation of data characteristics across sessions and subjects, and generates unreliable mapping results. In this study, a new method is presented for resting-state fMRI data analysis. Specifically, the resting-state network mapping is formulated as an outlier detection process that is implemented using one-class support vector machine (SVM). The results are refined by using a spatial-feature domain prototype selection method and two-class SVM reclassification. The final decision on each voxel is made by comparing its probabilities of functionally connected and unconnected instead of a threshold. Multiple features for resting-state analysis were extracted and examined using an SVM-based feature selection method, and the most representative features were identified. The proposed method was evaluated using synthetic and experimental fMRI data. A comparison study was also performed with independent component analysis (ICA) and correlation analysis. The experimental results show that the proposed method can provide comparable or better network detection performance than ICA and correlation analysis. The method is potentially applicable to various resting-state quantitative fMRI studies.
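
    A minimal sketch of the one-class-SVM formulation, assuming `features` holds one row of extracted features per voxel; the kernel choice and `nu` value are assumptions, and the paper's prototype-selection and two-class refinement stages are omitted:

    ```python
    from sklearn.svm import OneClassSVM

    def detect_network_voxels(features, nu=0.05):
        """Voxels flagged as outliers by the one-class SVM are taken as
        candidate functionally connected voxels, replacing a fixed threshold."""
        ocsvm = OneClassSVM(kernel='rbf', nu=nu, gamma='scale').fit(features)
        return ocsvm.predict(features) == -1   # -1 marks outliers
    ```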

  10. Evaluation of Sensibility Threshold for Interocclusal Thickness of Patients Wearing Complete Dentures

    PubMed Central

    Shala, Kujtim Sh.; Ahmedi, Enis F.; Tmava-Dragusha, Arlinda

    2017-01-01

    Objective The aim of this study was to evaluate the sensibility threshold for interocclusal thickness in experienced and nonexperienced denture wearers after the insertion of new complete dentures. Materials and Methods A total of 88 patients with complete dentures participated in this study. Participants were divided into two experimental groups according to their previous experience with prosthetic dental treatment. The sensibility threshold for interocclusal thickness was measured with 8 μm thick, 8 mm wide metal foil placed between the upper and lower incisors. Statistical analysis was performed using the standard software package BMDP (biomedical statistical package). Results The results suggest that time of measurement affects the average values of the sensibility threshold for interocclusal thickness (F = 242.68, p = 0.0000). Gender appeared to be a significant factor when it interacted with time of measurement, resulting in differences in the sensibility threshold for interocclusal thickness (gender: F = 9.84, p = 0.018; F = 4.83, p = 0.0003). Conclusion The sensibility threshold for interocclusal thickness was the most important functional adaptation in patients with complete dentures. A unique trait of this indicator is the progressive reduction of initial values and a tendency to reestablish the stationary state in the fifteenth week after the dentures are taken off. PMID:28702055

  11. Correlations among within-channel and between-channel auditory gap-detection thresholds in normal listeners.

    PubMed

    Phillips, Dennis P; Smith, Jennifer C

    2004-01-01

    We obtained data on within-channel and between-channel auditory temporal gap-detection acuity in the normal population. Ninety-five normal listeners were tested for gap-detection thresholds, for conditions in which the gap was bounded by spectrally identical, and by spectrally different, acoustic markers. Separate thresholds were obtained with the use of an adaptive tracking method, for gaps delimited by narrowband noise bursts centred on 1.0 kHz, noise bursts centred on 4.0 kHz, and for gaps bounded by a leading marker of 4.0 kHz noise and a trailing marker of 1.0 kHz noise. Gap thresholds were lowest for silent periods bounded by identical markers ('within-channel' stimuli). Gap thresholds were significantly longer for the between-channel stimulus, i.e. silent periods bounded by non-identical markers (p < 0.0001). Thresholds for the two within-channel tasks were highly correlated (R = 0.76). Thresholds for the between-channel stimulus were weakly correlated with thresholds for the within-channel stimuli (1.0 kHz, R = 0.39; and 4.0 kHz, R = 0.46). The relatively poor predictability of between-channel thresholds from the within-channel thresholds provides new evidence for the separability of the mechanisms that mediate performance of the two tasks. The data confirm that the acuity difference between the tasks, which had previously been demonstrated only in small numbers of highly trained listeners, extends to a population of untrained listeners. The acuity of the between-channel mechanism may be relevant to the formation of voice-onset-time category boundaries in speech perception.
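
    The abstract specifies only that an adaptive tracking method was used. As a rough sketch of how such a procedure works, the following simulates a transformed 2-down/1-up staircase (one common choice, converging near 70.7% correct) against a hypothetical listener with a 6 ms gap threshold; the response model and all numbers are illustrative.

        import random

        random.seed(2)

        def simulated_listener(gap_ms, true_threshold_ms=6.0):
            # Probability of a correct response grows with gap duration.
            p = 1.0 / (1.0 + 2.0 ** (true_threshold_ms - gap_ms))
            return random.random() < p

        gap, step = 20.0, 2.0
        correct_run, reversals, last_direction = 0, [], 0
        while len(reversals) < 8:
            if simulated_listener(gap):
                correct_run += 1
                if correct_run < 2:
                    continue                    # need 2 correct in a row
                correct_run, direction = 0, -1  # harder: shorten the gap
                gap = max(gap - step, 0.5)
            else:
                correct_run, direction = 0, 1   # easier: lengthen the gap
                gap += step
            if last_direction and direction != last_direction:
                reversals.append(gap)           # track staircase reversals
            last_direction = direction

        print("threshold estimate (ms):", sum(reversals[-6:]) / 6.0)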

  12. An Adaptive Image Enhancement Technique by Combining Cuckoo Search and Particle Swarm Optimization Algorithm

    PubMed Central

    Ye, Zhiwei; Wang, Mingwei; Hu, Zhengbing; Liu, Wei

    2015-01-01

    Image enhancement is an important procedure of image processing and analysis. This paper presents a new technique using a modified measure and a blend of cuckoo search and particle swarm optimization (CS-PSO) to enhance low-contrast images adaptively. In this approach, contrast enhancement is obtained by global transformation of the input intensities; it employs the incomplete Beta function as the transformation function and a novel criterion for measuring image quality that considers three factors: threshold, entropy value, and gray-level probability density of the image. The enhancement process is a nonlinear optimization problem with several constraints. CS-PSO is utilized to maximize the objective fitness criterion in order to enhance the contrast and detail in an image by adapting the parameters of a novel extension to a local enhancement technique. The performance of the proposed method has been compared with other existing techniques such as linear contrast stretching, histogram equalization, and evolutionary computing based image enhancement methods like the backtracking search algorithm, differential search algorithm, genetic algorithm, and particle swarm optimization in terms of processing time and image quality. Experimental results demonstrate that the proposed method is robust and adaptive and exhibits better performance than the other methods considered in the paper. PMID:25784928
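
    The transformation at the core of this method is easy to demonstrate. The sketch below applies the regularized incomplete Beta function to normalized intensities with hand-picked alpha and beta; in the paper these are the parameters CS-PSO searches to maximize the quality criterion, a step omitted here.

        import numpy as np
        from scipy.special import betainc

        def beta_enhance(img, alpha=2.0, beta=3.0):
            # Global non-linear remap of normalized intensities via I_x(a, b).
            lo, hi = float(img.min()), float(img.max())
            u = (img.astype(float) - lo) / (hi - lo + 1e-12)
            return betainc(alpha, beta, u)

        low_contrast = np.clip(
            np.random.default_rng(2).normal(0.5, 0.08, (64, 64)), 0, 1)
        enhanced = beta_enhance(low_contrast)
        print("std before/after:",
              round(float(low_contrast.std()), 4), round(float(enhanced.std()), 4))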

  13. An adaptive image enhancement technique by combining cuckoo search and particle swarm optimization algorithm.

    PubMed

    Ye, Zhiwei; Wang, Mingwei; Hu, Zhengbing; Liu, Wei

    2015-01-01

    Image enhancement is an important procedure of image processing and analysis. This paper presents a new technique using a modified measure and a blend of cuckoo search and particle swarm optimization (CS-PSO) to enhance low-contrast images adaptively. In this approach, contrast enhancement is obtained by global transformation of the input intensities; it employs the incomplete Beta function as the transformation function and a novel criterion for measuring image quality that considers three factors: threshold, entropy value, and gray-level probability density of the image. The enhancement process is a nonlinear optimization problem with several constraints. CS-PSO is utilized to maximize the objective fitness criterion in order to enhance the contrast and detail in an image by adapting the parameters of a novel extension to a local enhancement technique. The performance of the proposed method has been compared with other existing techniques such as linear contrast stretching, histogram equalization, and evolutionary computing based image enhancement methods like the backtracking search algorithm, differential search algorithm, genetic algorithm, and particle swarm optimization in terms of processing time and image quality. Experimental results demonstrate that the proposed method is robust and adaptive and exhibits better performance than the other methods considered in the paper.

  14. An evaluation of inferential procedures for adaptive clinical trial designs with pre-specified rules for modifying the sample size.

    PubMed

    Levin, Gregory P; Emerson, Sarah C; Emerson, Scott S

    2014-09-01

    Many papers have introduced adaptive clinical trial methods that allow modifications to the sample size based on interim estimates of treatment effect. There has been extensive commentary on type I error control and efficiency considerations, but little research on estimation after an adaptive hypothesis test. We evaluate the reliability and precision of different inferential procedures in the presence of an adaptive design with pre-specified rules for modifying the sampling plan. We extend group sequential orderings of the outcome space based on the stage at stopping, likelihood ratio statistic, and sample mean to the adaptive setting in order to compute median-unbiased point estimates, exact confidence intervals, and P-values uniformly distributed under the null hypothesis. The likelihood ratio ordering is found to yield shorter confidence intervals on average and higher probabilities of P-values below important thresholds than alternative approaches. The bias-adjusted mean demonstrates the lowest mean squared error among candidate point estimates. A conditional error-based approach in the literature has the benefit of being the only method that accommodates unplanned adaptations. We compare the performance of this and other methods in order to quantify the cost of failing to plan ahead in settings where adaptations could realistically be pre-specified at the design stage. We find the cost to be meaningful for all designs and treatment effects considered, and to be substantial for designs frequently proposed in the literature. © 2014, The International Biometric Society.

  15. Automated Segmentation of High-Resolution Photospheric Images of Active Regions

    NASA Astrophysics Data System (ADS)

    Yang, Meng; Tian, Yu; Rao, Changhui

    2018-02-01

    The development of ground-based, large-aperture solar telescopes with adaptive optics (AO) has steadily increased resolving ability, requiring more accurate sunspot identification and characterization. In this article, we have developed a set of automated segmentation methods for high-resolution solar photospheric images. Firstly, a local-intensity-clustering level-set method is applied to roughly separate solar granulation and sunspots. Then reinitialization-free level-set evolution is adopted to adjust the boundaries of the photospheric patch; an adaptive intensity threshold is used to discriminate between umbra and penumbra; light bridges are selected according to their regional properties from candidates produced by morphological operations. The proposed method is applied to high-resolution solar TiO 705.7-nm images taken by the 151-element AO system and the Ground-Layer Adaptive Optics prototype system at the 1-m New Vacuum Solar Telescope of the Yunnan Observatory. Experimental results show that the method achieves satisfactory robustness and efficiency with low computational cost on high-resolution images. The method can also be applied to full-disk images, and the calculated sunspot areas correlate well with the data given by the National Oceanic and Atmospheric Administration (NOAA).
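
    The abstract does not specify the form of the adaptive intensity threshold, so the toy example below uses Otsu's method computed within a detected sunspot patch as one plausible stand-in for the umbra/penumbra discrimination step; the pixel intensities are simulated.

        import numpy as np
        from skimage.filters import threshold_otsu

        rng = np.random.default_rng(3)
        spot = np.concatenate([rng.normal(0.20, 0.03, 400),   # dark umbra pixels
                               rng.normal(0.55, 0.05, 900)])  # brighter penumbra
        thresh = threshold_otsu(spot)         # adapts to this particular patch
        umbra_fraction = float(np.mean(spot < thresh))
        print(f"threshold={thresh:.3f}, umbra fraction={umbra_fraction:.2f}")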

  16. Real-Time Adaptive Control Allocation Applied to a High Performance Aircraft

    NASA Technical Reports Server (NTRS)

    Davidson, John B.; Lallman, Frederick J.; Bundick, W. Thomas

    2001-01-01

    This paper presents the development and application of one approach to the control of aircraft with large numbers of control effectors. This approach, referred to as real-time adaptive control allocation, combines a nonlinear method for control allocation with actuator failure detection and isolation. The control allocator maps moment (or angular acceleration) commands into physical control effector commands as functions of individual control effectiveness and availability. The actuator failure detection and isolation algorithm is a model-based approach that uses models of the actuators to predict actuator behavior and an adaptive decision threshold to achieve acceptable false alarm/missed detection rates. This integrated approach provides control reconfiguration when an aircraft is subjected to actuator failure, thereby improving maneuverability and survivability of the degraded aircraft. This method is demonstrated on a next-generation military aircraft (Lockheed-Martin Innovative Control Effector) simulation that has been modified to include a novel nonlinear fluid flow control effector based on passive porosity. Desktop and real-time piloted simulation results demonstrate the performance of this integrated adaptive control allocation approach.

  17. Planning Beyond the Next Trial in Adaptive Experiments: A Dynamic Programming Approach.

    PubMed

    Kim, Woojae; Pitt, Mark A; Lu, Zhong-Lin; Myung, Jay I

    2017-11-01

    Experimentation is at the heart of scientific inquiry. In the behavioral and neural sciences, where only a limited number of observations can often be made, it is ideal to design an experiment that leads to the rapid accumulation of information about the phenomenon under study. Adaptive experimentation has the potential to accelerate scientific progress by maximizing inferential gain in such research settings. To date, most adaptive experiments have relied on myopic, one-step-ahead strategies in which the stimulus on each trial is selected to maximize inference on the next trial only. A lingering question in the field has been how much additional benefit would be gained by optimizing beyond the next trial. A range of technical challenges has prevented this important question from being addressed adequately. This study applies dynamic programming (DP), a technique applicable for such full-horizon, "global" optimization, to model-based perceptual threshold estimation, a domain that has been a major beneficiary of adaptive methods. The results provide insight into conditions that will benefit from optimizing beyond the next trial. Implications for the use of adaptive methods in cognitive science are discussed. Copyright © 2016 Cognitive Science Society, Inc.

  18. Effect of eccentricity and light level on the timing of light adaptation mechanisms.

    PubMed

    Barrionuevo, Pablo A; Matesanz, Beatriz M; Gloriani, Alejandro H; Arranz, Isabel; Issolio, Luis; Mar, Santiago; Aparicio, Juan A

    2018-04-01

    We explored the complexity of the light adaptation process, assessing adaptation recovery (Ar) at different eccentricities and light levels. Luminance thresholds were obtained with transient background fields at mesopic and photopic light levels for temporal retinal eccentricities (0°-15°) with a test/background stimulus size of 0.5°/1°, using a staircase procedure in a two-channel Maxwellian view optical system. Ar was obtained in comparison with steady data [Vis. Res. 125, 12 (2016), doi:10.1016/j.visres.2016.04.008]. Light level proportionally affects Ar only at the fovea. Photopic extrafoveal thresholds were one log unit higher for transient conditions. Adaptation was equally fast at low light levels for different retinal locations, with variations mainly affected by noise. These results evidence different timing in the mechanisms of adaptation involved.

  19. Resilience thinking: integrating resilience, adaptability and transformability

    Treesearch

    Carl Folke; Stephen R. Carpenter; Brian Walker; Marten Scheffer; Terry Chapin; Johan Rockstrom

    2010-01-01

    Resilience thinking addresses the dynamics and development of complex social-ecological systems (SES). Three aspects are central: resilience, adaptability and transformability. These aspects interrelate across multiple scales. Resilience in this context is the capacity of a SES to continually change and adapt yet remain within critical thresholds. Adaptability is part...

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yan, H; Chen, Z; Nath, R

    Purpose: kV fluoroscopic imaging combined with MV treatment beam imaging has been investigated for intrafractional motion monitoring and correction. It is, however, subject to additional kV imaging dose to normal tissue. To balance tracking accuracy and imaging dose, we previously proposed an adaptive imaging strategy to dynamically decide future imaging type and moments based on motion tracking uncertainty. kV imaging may be used continuously for maximal accuracy or only when the position uncertainty (probability of being out of threshold) is high if a preset imaging dose limit is considered. In this work, we propose more accurate methods to estimate tracking uncertainty through analyzing acquired data in real time. Methods: We simulated the motion tracking process based on a previously developed imaging framework (MV + initial seconds of kV imaging) using real-time breathing data from 42 patients. Motion tracking errors for each time point were collected together with the time point's corresponding features, such as tumor motion speed and 2D tracking error of previous time points, etc. We tested three methods for error uncertainty estimation based on the features: conditional probability distribution, logistic regression modeling, and support vector machine (SVM) classification to detect errors exceeding a threshold. Results: For conditional probability distribution, polynomial regressions on three features (previous tracking error, prediction quality, and cosine of the angle between the trajectory and the treatment beam) showed strong correlation with the variation (uncertainty) of the mean 3D tracking error and its standard deviation: R-square = 0.94 and 0.90, respectively. The logistic regression and SVM classification successfully identified about 95% of tracking errors exceeding the 2.5 mm threshold. Conclusion: The proposed methods can reliably estimate the motion tracking uncertainty in real time, which can be used to guide adaptive additional imaging to confirm the tumor is within the margin or initialize motion compensation if it is out of the margin.
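
    The logistic-regression variant described above can be sketched in a few lines: per-time-point features predict whether the 3D tracking error exceeds the 2.5 mm threshold. The features, the synthetic ground-truth rule and the data below are purely illustrative stand-ins for the study's measured quantities.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(4)
        n = 2000
        prev_error = rng.gamma(2.0, 0.6, n)      # previous tracking error (mm)
        speed = rng.uniform(0, 20, n)            # tumor motion speed (mm/s)
        cos_angle = rng.uniform(-1, 1, n)        # trajectory/beam angle cosine
        X = np.column_stack([prev_error, speed, cos_angle])

        # Synthetic ground truth: 3D error grows with previous error and speed.
        error3d = 0.8 * prev_error + 0.05 * speed + rng.normal(0, 0.4, n)
        y = (error3d > 2.5).astype(int)          # 1 = exceeds the threshold

        model = LogisticRegression().fit(X[:1500], y[:1500])
        print("held-out accuracy:", round(model.score(X[1500:], y[1500:]), 3))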

  1. The effect of spatial attention on invisible stimuli.

    PubMed

    Shin, Kilho; Stolte, Moritz; Chong, Sang Chul

    2009-10-01

    The influence of selective attention on visual processing is widespread. Recent studies have demonstrated that spatial attention can affect processing of invisible stimuli. However, it has been suggested that this effect is limited to low-level features, such as line orientations. The present experiments investigated whether spatial attention can influence both low-level (contrast threshold) and high-level (gender discrimination) adaptation, using the same method of attentional modulation for both types of stimuli. We found that spatial attention was able to increase the amount of adaptation to low- as well as to high-level invisible stimuli. These results suggest that attention can influence perceptual processes independent of visual awareness.

  2. Artifact suppression and analysis of brain activities with electroencephalography signals.

    PubMed

    Rashed-Al-Mahfuz, Md; Islam, Md Rabiul; Hirose, Keikichi; Molla, Md Khademul Islam

    2013-06-05

    Brain-computer interface is a communication system that connects the brain with a computer (or other devices) but is not dependent on the normal output of the brain (i.e., peripheral nerve and muscle). Electro-oculogram is a dominant artifact which has a significant negative influence on further analysis of real electroencephalography data. This paper presented a data adaptive technique for artifact suppression and brain wave extraction from electroencephalography signals to detect regional brain activities. An empirical mode decomposition based adaptive thresholding approach was employed here to suppress the electro-oculogram artifact. Fractional Gaussian noise was used to determine the threshold level, derived from the analysis data without any training. The purified electroencephalography signal was composed of the brain waves, also called rhythmic components, which represent the brain activities. The rhythmic components were extracted from each electroencephalography channel using an adaptive Wiener filter with the original scale. The regional brain activities were mapped on the basis of the spatial distribution of rhythmic components, and the results showed that different regions of the brain are activated in response to different stimuli. This research analyzed the activities of a single rhythmic component, alpha, with respect to different motor imaginations. The experimental results showed that the proposed method is very efficient in artifact suppression and in identifying individual motor imagery based on the activities of the alpha component.
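
    A rough sketch of the decomposition-and-threshold idea follows, using the PyEMD package (pip name EMD-signal). The paper derives its threshold from fractional Gaussian noise statistics; here a much cruder amplitude rule (discard IMFs dominated by the high-amplitude blink) stands in for that derivation, so only the overall shape of the procedure is representative.

        import numpy as np
        from PyEMD import EMD   # pip install EMD-signal

        rng = np.random.default_rng(5)
        t = np.linspace(0, 4, 1024)
        eeg = 0.5 * np.sin(2 * np.pi * 10 * t)          # alpha-band activity
        eog = 3.0 * np.exp(-((t - 2.0) ** 2) / 0.05)    # blink-like artifact
        signal = eeg + eog + rng.normal(0, 0.1, t.size)

        imfs = EMD().emd(signal)
        keep = [imf for imf in imfs if np.abs(imf).max() < 2.0]  # stand-in rule
        cleaned = np.sum(keep, axis=0) if keep else np.zeros_like(signal)
        print(f"{len(imfs)} IMFs, kept {len(keep)}; peak after cleaning "
              f"{np.abs(cleaned).max():.2f} vs raw {np.abs(signal).max():.2f}")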

  3. Reactive power and voltage control strategy based on dynamic and adaptive segment for DG inverter

    NASA Astrophysics Data System (ADS)

    Zhai, Jianwei; Lin, Xiaoming; Zhang, Yongjun

    2018-03-01

    The inverter of distributed generation (DG) can supply reactive power to help solve the problem of out-of-limit voltage in active distribution networks (ADNs). Therefore, a reactive power and voltage control strategy based on dynamic and adaptive segments for the DG inverter is put forward in this paper to actively control voltage. The proposed strategy adjusts the segmented voltage thresholds of the Q(U) droop curve dynamically and adaptively according to the voltage of the grid-connected point and the power direction of the adjacent downstream line. The reactive power reference of the DG inverter is then obtained through the modified Q(U) control strategy, and the reactive power of the inverter is controlled to track this reference value. The proposed control strategy not only controls the local voltage of the grid-connected point but also helps maintain voltage within the qualified range, considering the terminal voltage of the distribution feeder and the reactive support for adjacent downstream DG. The scheme using the proposed strategy is compared with a scheme without the reactive support of the DG inverter and a scheme using the Q(U) control strategy with constant segmented voltage thresholds. The simulation results suggest that the proposed method has a significant effect on solving the out-of-limit voltage problem, restraining voltage variation and improving voltage quality.
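
    The shape of such a control law can be illustrated with a small piecewise-linear Q(U) function whose segment thresholds can be moved at run time, which is the adaptive element the strategy adds; all values below are illustrative per-unit numbers, not the paper's settings.

        def q_reference(u, u1=0.97, u2=1.03, q_max=0.4):
            # Piecewise-linear reactive power reference for a DG inverter.
            if u < u1:                   # undervoltage: inject reactive power
                return min(q_max, q_max * (u1 - u) / 0.02)
            if u > u2:                   # overvoltage: absorb reactive power
                return max(-q_max, -q_max * (u - u2) / 0.02)
            return 0.0                   # dead band between the thresholds

        # Adaptive step: e.g. tighten the dead band when the downstream
        # power direction reverses.
        for u in (0.95, 1.00, 1.05):
            print(u, q_reference(u), q_reference(u, u1=0.99, u2=1.01))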

  4. A Method of Generating DEM from DSM Based on Airborne InSAR Data

    NASA Astrophysics Data System (ADS)

    Lu, W.; Zhang, J.; Xue, G.; Wang, C.

    2018-04-01

    Traditional terrestrial survey methods for acquiring DEMs cannot meet the requirement of acquiring large quantities of data in real time, whereas a DSM can be obtained quickly using dual-antenna synthetic aperture radar interferometry, and generating the DEM from the DSM is faster and more accurate. It is therefore important to derive DEMs from DSMs based on airborne InSAR data. This paper presents a method to generate an accurate DEM from a DSM. Two steps are applied to acquire the accurate DEM. First, when the DSM is generated by interferometry, unavoidable factors such as layover and shadow produce gross errors that affect the data accuracy, so an adaptive threshold segmentation method is adopted to remove the gross errors, with the threshold selected according to the coherence of the interferometry. Second, the DEM is generated by a progressive triangulated irregular network densification filtering algorithm. Finally, experimental results are compared with existing high-precision DEM results. The results show that this method can effectively filter out buildings, vegetation and other objects to obtain a high-precision DEM.

  5. AutoCellSeg: robust automatic colony forming unit (CFU)/cell analysis using adaptive image segmentation and easy-to-use post-editing techniques.

    PubMed

    Khan, Arif Ul Maula; Torelli, Angelo; Wolf, Ivo; Gretz, Norbert

    2018-05-08

    In biological assays, automated cell/colony segmentation and counting is imperative owing to huge image sets. Problems occurring due to drifting image acquisition conditions, background noise and high variation in colony features across experiments demand a user-friendly, adaptive and robust image processing/analysis method. We present AutoCellSeg (based on MATLAB), which implements a supervised, automatic and robust image segmentation method. AutoCellSeg utilizes multi-thresholding aided by a feedback-based watershed algorithm, taking segmentation plausibility criteria into account. It is usable in different operation modes and intuitively enables the user to select object features interactively for supervised image segmentation. It allows the user to correct results with a graphical interface. This publicly available tool outperforms tools like OpenCFU and CellProfiler in terms of accuracy and provides many additional useful features for end-users.
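
    The threshold-plus-watershed core of such pipelines can be sketched with scikit-image (AutoCellSeg itself is MATLAB-based and adds feedback and plausibility checks not reproduced here); three synthetic touching colonies are split below.

        import numpy as np
        from scipy import ndimage as ndi
        from skimage.feature import peak_local_max
        from skimage.filters import threshold_otsu
        from skimage.segmentation import watershed

        rng = np.random.default_rng(6)
        yy, xx = np.mgrid[:80, :80]
        img = np.zeros((80, 80))
        for cy, cx in [(30, 30), (30, 44), (55, 60)]:   # three colonies
            img += np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / 60.0)
        img += rng.normal(0, 0.02, img.shape)

        mask = img > threshold_otsu(img)                # adaptive threshold
        distance = ndi.distance_transform_edt(mask)
        peaks = peak_local_max(distance, labels=mask, min_distance=5)
        markers = np.zeros_like(img, dtype=int)
        markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
        labels = watershed(-distance, markers, mask=mask)
        print("colonies found:", labels.max())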

  6. Onboard Nonlinear Engine Sensor and Component Fault Diagnosis and Isolation Scheme

    NASA Technical Reports Server (NTRS)

    Tang, Liang; DeCastro, Jonathan A.; Zhang, Xiaodong

    2011-01-01

    A method detects and isolates in-flight sensor, actuator, and component faults for advanced propulsion systems. In sharp contrast to many conventional methods, which deal with either sensor faults or component faults, but not both, this method considers sensor faults, actuator faults, and component faults under one systematic and unified framework. The proposed solution consists of two main components: a bank of real-time, nonlinear adaptive fault diagnostic estimators for residual generation, and a residual evaluation module that includes adaptive thresholds and a Transferable Belief Model (TBM)-based residual evaluation scheme. By employing a nonlinear adaptive learning architecture, the developed approach is capable of directly dealing with nonlinear engine models and nonlinear faults without the need for linearization. Software modules have been developed and evaluated with the NASA C-MAPSS engine model. Several typical engine-fault modes, including a subset of sensor/actuator/component faults, were tested with a mild transient operation scenario. The simulation results demonstrated that the algorithm was able to successfully detect and isolate all simulated faults as long as the fault magnitudes were larger than the minimum detectable/isolable sizes, and no misdiagnosis occurred.

  7. Utilising psychophysical techniques to investigate the effects of age, typeface design, size and display polarity on glance legibility

    PubMed Central

    Dobres, Jonathan; Chahine, Nadine; Reimer, Bryan; Gould, David; Mehler, Bruce; Coughlin, Joseph F.

    2016-01-01

    Psychophysical research on text legibility has historically investigated factors such as size, colour and contrast, but there has been relatively little direct empirical evaluation of typographic design itself, particularly in the emerging context of glance reading. In the present study, participants performed a lexical decision task controlled by an adaptive staircase method. Two typefaces, a 'humanist' and 'square grotesque' style, were tested. Study I examined positive and negative polarities, while Study II examined two text sizes. Stimulus duration thresholds were sensitive to differences between typefaces, polarities and sizes. Typeface also interacted significantly with age, particularly for conditions with higher legibility thresholds. These results are consistent with previous research assessing the impact of the same typefaces on interface demand in a simulated driving environment. This simplified methodology of assessing legibility differences can be adapted to investigate a wide array of questions relevant to typographic and interface designs. Practitioner Summary: A method is described for rapidly investigating relative legibility of different typographical features. Results indicate that during glance-like reading induced by the psychophysical technique and under the lighting conditions considered, humanist-style type is significantly more legible than a square grotesque style, and that black-on-white text is significantly more legible than white-on-black. PMID:26727912

  8. Utilising psychophysical techniques to investigate the effects of age, typeface design, size and display polarity on glance legibility.

    PubMed

    Dobres, Jonathan; Chahine, Nadine; Reimer, Bryan; Gould, David; Mehler, Bruce; Coughlin, Joseph F

    2016-10-01

    Psychophysical research on text legibility has historically investigated factors such as size, colour and contrast, but there has been relatively little direct empirical evaluation of typographic design itself, particularly in the emerging context of glance reading. In the present study, participants performed a lexical decision task controlled by an adaptive staircase method. Two typefaces, a 'humanist' and 'square grotesque' style, were tested. Study I examined positive and negative polarities, while Study II examined two text sizes. Stimulus duration thresholds were sensitive to differences between typefaces, polarities and sizes. Typeface also interacted significantly with age, particularly for conditions with higher legibility thresholds. These results are consistent with previous research assessing the impact of the same typefaces on interface demand in a simulated driving environment. This simplified methodology of assessing legibility differences can be adapted to investigate a wide array of questions relevant to typographic and interface designs. Practitioner Summary: A method is described for rapidly investigating relative legibility of different typographical features. Results indicate that during glance-like reading induced by the psychophysical technique and under the lighting conditions considered, humanist-style type is significantly more legible than a square grotesque style, and that black-on-white text is significantly more legible than white-on-black.

  9. The Sensory Difference Threshold of Menthol Odor in Flavored Tobacco Determined by Combining Sensory and Chemical Analysis.

    PubMed

    Krüsemann, Erna J Z; Cremers, Johannes W J M; Visser, Wouter F; Punter, Pieter H; Talhout, Reinskje

    2017-03-01

    Cigarettes are an often-used consumer product, and flavor is an important determinant of their product appeal. Cigarettes with strong nontobacco flavors are popular among young people, and may facilitate smoking initiation. Discriminating flavors in tobacco is important for regulation purposes, for instance to set upper limits to the levels of important flavor additives. We provide a simple and fast method to determine the human odor difference threshold for flavor additives in a tobacco matrix, using a combination of chemical and sensory analysis. As an example, the human difference threshold for menthol odor, one of the most frequently used tobacco flavors, was determined. A consumer panel consisting of 20 women compared different concentrations of menthol-flavored tobacco to unflavored cigarette tobacco using the 2-alternative forced choice method. Components contributing to menthol odor were quantified using headspace GC-MS. The sensory difference threshold of menthol odor corresponded to a mixture of 43 (37-50)% menthol-flavored tobacco, containing 1.8 (1.6-2.1) mg menthol, 2.7 (2.3-3.1) µg menthone, and 1.0 (0.9-1.2) µg neomenthyl acetate per gram of tobacco. Such a method is important in the context of the European Tobacco Product Directive, and the US Food and Drug Administration Tobacco Control Act, which both prohibit cigarettes and roll-your-own tobacco with a characterizing flavor other than tobacco. Our method can also be adapted for matrices other than tobacco, such as food. © The Author 2016. Published by Oxford University Press.

  10. The Random-Threshold Generalized Unfolding Model and Its Application of Computerized Adaptive Testing

    ERIC Educational Resources Information Center

    Wang, Wen-Chung; Liu, Chen-Wei; Wu, Shiu-Lien

    2013-01-01

    The random-threshold generalized unfolding model (RTGUM) was developed by treating the thresholds in the generalized unfolding model as random effects rather than fixed effects to account for the subjective nature of the selection of categories in Likert items. The parameters of the new model can be estimated with the JAGS (Just Another Gibbs…

  11. Multiple targets detection method in detection of UWB through-wall radar

    NASA Astrophysics Data System (ADS)

    Yang, Xiuwei; Yang, Chuanfa; Zhao, Xingwen; Tian, Xianzhong

    2017-11-01

    In this paper, the problems and difficulties encountered in the detection of multiple moving targets by UWB radar are analyzed, and an experimental environment and a through-wall radar system are established. An adaptive threshold method based on the local area is proposed to effectively filter out clutter interference. The moving targets are then analyzed, and false targets are further filtered out by extracting target features. Based on the correlation between the targets, a target matching algorithm is proposed to improve the detection accuracy. Finally, the effectiveness of the above methods is verified by practical experiment.
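
    A local-area adaptive threshold of this kind is often realized as a CFAR-style rule: each cell is compared against a multiple of the average of its surrounding training cells, with guard cells excluded. The sketch below is a generic cell-averaging version with illustrative parameters, not the paper's exact method.

        import numpy as np

        def local_adaptive_detect(x, train=8, guard=2, scale=6.0):
            hits = []
            for i in range(train + guard, len(x) - train - guard):
                left = x[i - train - guard : i - guard]
                right = x[i + guard + 1 : i + guard + train + 1]
                noise = np.mean(np.concatenate([left, right]))
                if x[i] > scale * noise:   # threshold follows local clutter
                    hits.append(i)
            return hits

        rng = np.random.default_rng(7)
        profile = rng.exponential(1.0, 300)    # clutter background
        profile[[90, 210]] += 25.0             # two moving-target returns
        print("detections at:", local_adaptive_detect(profile))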

  12. An adaptive band selection method for dimension reduction of hyper-spectral remote sensing image

    NASA Astrophysics Data System (ADS)

    Yu, Zhijie; Yu, Hui; Wang, Chen-sheng

    2014-11-01

    Hyper-spectral remote sensing data can be acquired by imaging the same area at multiple wavelengths, and it normally consists of hundreds of band-images. Hyper-spectral images can not only provide spatial information but also high resolution spectral information, and they have been widely used in environment monitoring, mineral investigation and military reconnaissance. However, because of the correspondingly large data volume, it is very difficult to transmit and store hyper-spectral images. A hyper-spectral image dimension reduction technique is desired to resolve this problem. Because of the high correlation and high redundancy among hyper-spectral bands, dimension reduction methods are well suited to compressing the data volume. This paper proposes a novel band-selection-based dimension reduction method which can adaptively select the bands that contain more information and details. The proposed method is based on principal component analysis (PCA) and computes an index for every band. The indexes obtained are then ranked in order of magnitude from large to small. Based on a threshold, the system can adaptively and reasonably select the bands. The proposed method overcomes the shortcomings of transform-based dimension reduction methods and prevents the original spectral information from being lost. The performance of the proposed method has been validated by several experiments. The experimental results show that the proposed algorithm can reduce the dimensions of a hyper-spectral image with little information loss by adaptively selecting the band images.
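
    The abstract does not give the exact index formula, but the ranking idea can be sketched as follows: score each band by the magnitude of its loadings on the leading principal components, weighted by explained variance, and keep the top-scoring bands. Everything below is synthetic and illustrative.

        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(8)
        n_pixels, n_bands = 5000, 60
        cube = rng.normal(0, 1, (n_pixels, n_bands))
        cube[:, 10] += 3 * rng.normal(0, 1, n_pixels)  # information-rich band
        cube[:, 42] += 2 * rng.normal(0, 1, n_pixels)  # another one

        pca = PCA(n_components=5).fit(cube)
        score = np.abs(pca.components_).T @ pca.explained_variance_ratio_
        selected = np.argsort(score)[::-1][:8]         # threshold: keep 8 bands
        print("selected bands:", sorted(selected.tolist()))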

  13. Phase coherence adaptive processor for automatic signal detection and identification

    NASA Astrophysics Data System (ADS)

    Wagstaff, Ronald A.

    2006-05-01

    A continuously adapting acoustic signal processor with an automatic detection/decision aid is presented. Its purpose is to preserve the signals of tactical interest, and filter out other signals and noise. It utilizes single sensor or beamformed spectral data and transforms the signal and noise phase angles into "aligned phase angles" (APA). The APA increase the phase temporal coherence of signals and leave the noise incoherent. Coherence thresholds are set, which are representative of the type of source "threat vehicle" and the geographic area or volume in which it is operating. These thresholds separate signals, based on the "quality" of their APA coherence. An example is presented in which signals from a submerged source in the ocean are preserved, while clutter signals from ships and noise are entirely eliminated. Furthermore, the "signals of interest" were identified by the processor's automatic detection aid. Similar performance is expected for air and ground vehicles. The processor's equations are formulated in such a manner that they can be tuned to eliminate noise and exploit signal, based on the "quality" of their APA temporal coherence. The mathematical formulation for this processor is presented, including the method by which the processor continuously self-adapts. Results show nearly complete elimination of noise, with only the selected category of signals remaining, and accompanying enhancements in spectral and spatial resolution. In most cases, the concept of signal-to-noise ratio loses significance, and the "automatic detection/decision aid" becomes more relevant.

  14. A novel fusion method of improved adaptive LTP and two-directional two-dimensional PCA for face feature extraction

    NASA Astrophysics Data System (ADS)

    Luo, Yuan; Wang, Bo-yu; Zhang, Yi; Zhao, Li-ming

    2018-03-01

    In this paper, addressing the fact that local texture features of a face image cannot be completely described under different illuminations and random noises because the threshold of the local ternary pattern (LTP) cannot be calculated adaptively, a local three-value model called the improved adaptive local ternary pattern (IALTP) is proposed. Firstly, a difference function between the center pixel and the neighborhood pixel weights is established to obtain the statistical characteristics of the central pixel and its neighborhood. Secondly, an adaptive gradient descent iterative function is established to calculate the difference coefficient, which is defined to be the threshold of the IALTP operator. Finally, the mean and standard deviation of the pixel weights of the local region are used as the coding mode of IALTP. In order to reflect the overall properties of the face and reduce the feature dimension, two-directional two-dimensional PCA ((2D)2PCA) is adopted. The IALTP is used to extract local texture features of the eye and mouth areas. After combining the global features and local features, the fusion features (IALTP+) are obtained. The experimental results on the Extended Yale B and AR standard face databases indicate that under different illuminations and random noises, the algorithm proposed in this paper is more robust than others, and its feature dimension is smaller. The shortest running time reaches 0.3296 s, and the highest recognition rate reaches 97.39%.
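
    For readers unfamiliar with LTP coding, the snippet below computes the ternary code of one 3x3 neighborhood with a fixed threshold t, which makes the quantity the IALTP adapts concrete; the paper's adaptive threshold and weight-statistics coding are not reproduced.

        import numpy as np

        def ltp_codes(patch, t):
            # Ternary code of the 8 neighbors of the center pixel:
            # +1 if neighbor > c + t, -1 if neighbor < c - t, else 0.
            c = patch[1, 1]
            neighbors = np.delete(patch.flatten(), 4)
            return np.where(neighbors > c + t, 1,
                            np.where(neighbors < c - t, -1, 0))

        patch = np.array([[52, 55, 61],
                          [59, 58, 40],
                          [60, 57, 56]])
        print(ltp_codes(patch, t=3))   # prints the 8 ternary neighbor codes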

  15. Impairment of retinal increment thresholds in Huntington's disease.

    PubMed

    Paulus, W; Schwarz, G; Werner, A; Lange, H; Bayer, A; Hofschuster, M; Müller, N; Zrenner, E

    1993-10-01

    We have investigated detection thresholds for a foveal blue test light using a Maxwellian view system in 61 normal subjects, 19 patients with Huntington's chorea, 14 patients with Tourette's syndrome, and 20 patients with schizophrenia. Ten measurements were made: The blue test light (1 degree diameter, 500 msec duration) was presented either superimposed on a yellow adaptation field (5 degree diameter) or 500 msec after switching off this field (transient tritanopia effect). In both cases five different background intensities were presented. The only abnormality found was in patients with Huntington's chorea. During adaptation these patients' thresholds are significantly higher than normal (p < 0.005). No change was found in the transient tritanopia effect. Huntington's disease causes degeneration of several different transmitter systems in the brain. Increment threshold testing allows for noninvasive investigation of patients and confirms the involvement of the retina in the degenerative process in Huntington's chorea.

  16. Adaptive gain and filtering circuit for a sound reproduction system

    NASA Technical Reports Server (NTRS)

    Engebretson, A. Maynard (Inventor); O'Connell, Michael P. (Inventor)

    1998-01-01

    Adaptive compressive gain and level dependent spectral shaping circuitry for a hearing aid include a microphone to produce an input signal and a plurality of channels connected to a common circuit output. Each channel has a preset frequency response. Each channel includes a filter with a preset frequency response to receive the input signal and to produce a filtered signal, a channel amplifier to amplify the filtered signal to produce a channel output signal, a threshold register to establish a channel threshold level, and a gain circuit. The gain circuit increases the gain of the channel amplifier when the channel output signal falls below the channel threshold level and decreases the gain of the channel amplifier when the channel output signal rises above the channel threshold level. A transducer produces sound in response to the signal passed by the common circuit output.

  17. Development of a Voice Activity Controlled Noise Canceller

    PubMed Central

    Abid Noor, Ali O.; Samad, Salina Abdul; Hussain, Aini

    2012-01-01

    In this paper, a variable threshold voice activity detector (VAD) is developed to control the operation of a two-sensor adaptive noise canceller (ANC). The VAD prevents the reference input of the ANC from containing any appreciable amount of actual speech signal during adaptation periods. The novelty of this approach resides in using the residual output from the noise canceller to control the decisions made by the VAD. Thresholds of full-band energy and zero-crossing features are adjusted according to the residual output of the adaptive filter. Performance of the proposed approach is evaluated in terms of signal-to-noise ratio improvements as well as mean square error (MSE) convergence of the ANC. The new approach showed improved noise cancellation performance when tested under several types of environmental noise. Furthermore, the computational power of the adaptive process is reduced since the output of the adaptive filter is efficiently calculated only during non-speech periods. PMID:22778667
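
    The threshold logic can be reduced to a toy version: frame-level energy and zero-crossing rate are compared against thresholds which, in the paper, would be re-tuned from the canceller's residual output (here that feedback is simply stubbed with fixed values).

        import numpy as np

        def vad(frames, energy_thr, zcr_thr):
            decisions = []
            for f in frames:
                energy = float(np.mean(f ** 2))
                zcr = float(np.mean(np.abs(np.diff(np.sign(f)))) / 2)
                decisions.append(energy > energy_thr and zcr < zcr_thr)
            return decisions           # True = speech: freeze ANC adaptation

        rng = np.random.default_rng(9)
        frames = rng.normal(0, 0.05, (5, 400))
        frames[2] += 0.5 * np.sin(np.linspace(0, 40 * np.pi, 400))  # voiced frame
        print(vad(frames, energy_thr=0.01, zcr_thr=0.3))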

  18. Adaptive threshold determination for efficient channel sensing in cognitive radio network using mobile sensors

    NASA Astrophysics Data System (ADS)

    Morshed, M. N.; Khatun, S.; Kamarudin, L. M.; Aljunid, S. A.; Ahmad, R. B.; Zakaria, A.; Fakir, M. M.

    2017-03-01

    Spectrum saturation is a major issue in wireless communication systems all over the world. Huge numbers of users join the existing fixed frequency bands each day, but the bandwidth is not increasing. These demands call for efficient and intelligent use of the spectrum. To solve this issue, Cognitive Radio (CR) is the best choice. Spectrum sensing of a wireless heterogeneous network is fundamental to detecting the presence of primary users' signals in CR networks. In order to protect primary users (PUs) from harmful interference, the spectrum sensing scheme is required to perform well even in low signal-to-noise ratio (SNR) environments. Meanwhile, the sensing period is usually required to be short enough that secondary (unlicensed) users (SUs) can fully utilize the available spectrum. CR networks can be designed to manage the radio spectrum more efficiently by utilizing the spectrum holes in primary users' licensed frequency bands. In this paper, we propose an adaptive threshold detection method to detect the presence of a PU signal using a free space path loss (FSPL) model in a 2.4 GHz WLAN network. The model is designed for mobile sensors embedded in smartphones. The mobile sensors act as SUs while the existing WLAN network (channels) works as the PU. The theoretical results show that the desired threshold detection range of the mobile sensors mainly depends on the noise floor level of the location under consideration.
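
    The role of the FSPL model is to predict the received primary-user level to which the detection threshold is anchored. A minimal numeric sketch follows, using the standard free-space formula and an assumed access-point transmit power:

        import math

        def fspl_db(distance_m, freq_hz):
            # FSPL(dB) = 20 log10(d) + 20 log10(f) + 20 log10(4*pi/c)
            c = 299_792_458.0
            return (20 * math.log10(distance_m) + 20 * math.log10(freq_hz)
                    + 20 * math.log10(4 * math.pi / c))

        tx_power_dbm = 20.0                          # assumed WLAN AP output
        rx_dbm = tx_power_dbm - fspl_db(30, 2.4e9)   # expected PU level at 30 m
        print(f"expected received level: {rx_dbm:.1f} dBm")  # threshold anchor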

  19. Improved CEEMDAN-wavelet transform de-noising method and its application in well logging noise reduction

    NASA Astrophysics Data System (ADS)

    Zhang, Jingxia; Guo, Yinghai; Shen, Yulin; Zhao, Difei; Li, Mi

    2018-06-01

    The use of geophysical logging data to identify lithology is important groundwork in logging interpretation. Inevitably, noise is mixed in during data collection due to the equipment and other external factors, and this affects further lithological identification and other logging interpretation. Therefore, to achieve more accurate lithological identification it is necessary to adopt de-noising methods. In this study, a new de-noising method, namely improved complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN)-wavelet transform, is proposed, which integrates the strengths of improved CEEMDAN and the wavelet transform. Improved CEEMDAN, an effective self-adaptive multi-scale analysis method, is used to decompose non-stationary signals such as logging data to obtain intrinsic mode functions (IMFs) of N different scales and one residual. Moreover, a self-adaptive scale selection method is used to determine the reconstruction scale k. Simultaneously, given the possible frequency aliasing problem between adjacent IMFs, a wavelet transform threshold de-noising method is used to reduce the noise of the (k-1)th IMF. Subsequently, the de-noised logging data are reconstructed from the de-noised (k-1)th IMF, the remaining low-frequency IMFs and the residual. Finally, empirical mode decomposition, improved CEEMDAN, the wavelet transform and the proposed method are applied to analyze simulated and actual data. Results show the diverse performance of these de-noising methods with regard to accuracy of lithological identification. Compared with the other methods, the proposed method has the best self-adaptability and accuracy in lithological identification.
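
    The wavelet-threshold half of the pipeline can be sketched with PyWavelets; the improved-CEEMDAN decomposition and scale selection that precede it are not reproduced, and universal (VisuShrink-style) soft thresholding is assumed in place of the paper's exact rule.

        import numpy as np
        import pywt

        rng = np.random.default_rng(10)
        t = np.linspace(0, 1, 1024)
        curve = np.cumsum(rng.normal(0, 0.02, t.size)) + np.sin(6 * t)
        noisy = curve + rng.normal(0, 0.15, t.size)

        coeffs = pywt.wavedec(noisy, "db4", level=4)
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745     # noise scale estimate
        thr = sigma * np.sqrt(2 * np.log(noisy.size))      # universal threshold
        coeffs = [coeffs[0]] + [pywt.threshold(c, thr, "soft") for c in coeffs[1:]]
        denoised = pywt.waverec(coeffs, "db4")[: noisy.size]
        rms = lambda e: float(np.sqrt(np.mean(e ** 2)))
        print("rms error before/after:",
              round(rms(noisy - curve), 3), round(rms(denoised - curve), 3))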

  20. Precise measurement of volume of eccrine sweat gland in mental sweating by optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Sugawa, Yoshihiko; Fukuda, Akihiro; Ohmi, Masato

    2015-04-01

    We have demonstrated dynamic analysis of the physiological function of eccrine sweat glands underneath skin surface by optical coherence tomography (OCT). In this paper, we propose a method for extraction of the specific eccrine sweat gland by means of the connected component extraction process and the adaptive threshold method, where the en face OCT images are constructed by the swept-source OCT. In the experiment, we demonstrate precise measurement of the volume of the sweat gland in response to the external stimulus.

  1. Disturbances of rod threshold forced by briefly exposed luminous lines, edges, disks and annuli

    PubMed Central

    Hallett, P. E.

    1971-01-01

    1. When the dark-adapted eye is exposed to a brief duration (2 msec) luminous line the resulting threshold disturbance is much sharper (decay constant of ca. 10 min arc) than would be expected in a system which is known to integrate the effects of light quanta over a distance of 1 deg or so. 2. When the forcing input is a pair of brief duration parallel luminous lines the threshold disturbance falls off sharply at the outsides of the pattern but on the inside a considerable spread of threshold-raising effects may occur unless the lines are sufficiently far apart. 3. The threshold disturbance due to a briefly exposed edge shows an overshoot reminiscent of 'lateral inhibition'. 4. If the threshold is measured at the centre of a black disk presented in a briefly lit surround then (a) the dependence of threshold on time interval between test and surround suggests that the threshold elevation is due to a non-optical effect which is not 'metacontrast'; (b) the dependence of threshold on black disk diameter is consistent with the notion that the spatial threshold disturbance is progressively sharpened as the separation of luminous edges increases. 5. If the threshold is measured at the centre of briefly exposed luminous disks of various diameters one obtains the same evidence for an 'antagonistic centre-surround' system as that produced by other workers (e.g. Westheimer, 1965) for the steadily light-adapted eye. 6. The previous paper (Hallett, 1971) showed that brief illumination of the otherwise dark-adapted eye can rapidly and substantially change the extent of spatial integration. The present paper shows that brief illumination leads to substantial 'inhibitory' effects. 7. Earlier approaches are reviewed: (a) the linear system signal/noise theory of the time course of threshold disturbances (Hallett, 1969b) is illustrated by the case of a small subtense flash superimposed on a large oscillatory background; (b) the spatial weighting functions of some other authors are given. 8. A possible non-linear model is briefly described: the line weighting function for the receptive field centre is taken to be a single Gaussian, as is customary, but the line weighting function for the inhibitory surround is bimodal. PMID:5145728

  2. Threshold detection in an on-off binary communications channel with atmospheric scintillation

    NASA Technical Reports Server (NTRS)

    Webb, W. E.; Marino, J. T., Jr.

    1974-01-01

    The optimum detection threshold in an on-off binary optical communications system operating in the presence of atmospheric turbulence was investigated, assuming a Poisson detection process and log-normal scintillation. The dependence of the probability of bit error on log-amplitude variance and received signal strength was analyzed, and semi-empirical relationships to predict the optimum detection threshold were derived. On the basis of this analysis a piecewise linear model for an adaptive threshold detection system is presented. Bit error probabilities for non-optimum threshold detection systems were also investigated.
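
    Under the Poisson assumption the optimum count threshold can be found numerically; the sketch below sweeps candidate thresholds for assumed mean background and signal photocounts (scintillation is ignored for brevity, so this reproduces only the turbulence-free core of the analysis).

        from scipy.stats import poisson

        n_b, n_s = 2.0, 12.0      # assumed mean background/signal photocounts

        def bit_error(k):
            # Decide "1" when the count is >= k; equiprobable bits.
            false_alarm = 1.0 - poisson.cdf(k - 1, n_b)   # "0" sent
            miss = poisson.cdf(k - 1, n_b + n_s)          # "1" sent
            return 0.5 * false_alarm + 0.5 * miss

        best = min(range(1, 30), key=bit_error)
        print(f"optimum threshold: {best} counts, "
              f"bit error probability {bit_error(best):.2e}")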

  3. Threshold detection in an on-off binary communications channel with atmospheric scintillation

    NASA Technical Reports Server (NTRS)

    Webb, W. E.

    1975-01-01

    The optimum detection threshold in an on-off binary optical communications system operating in the presence of atmospheric turbulence was investigated, assuming a Poisson detection process and log-normal scintillation. The dependence of the probability of bit error on log-amplitude variance and received signal strength was analyzed, and semi-empirical relationships to predict the optimum detection threshold were derived. On the basis of this analysis a piecewise linear model for an adaptive threshold detection system is presented. The bit error probabilities for nonoptimum threshold detection systems were also investigated.

  4. An artifacts removal post-processing for epiphyseal region-of-interest (EROI) localization in automated bone age assessment (BAA)

    PubMed Central

    2011-01-01

    Background Segmentation is the most crucial part of computer-aided bone age assessment. A well-known type of segmentation performed in such systems is adaptive segmentation. While providing better results than global thresholding methods, adaptive segmentation produces a lot of unwanted noise that can affect the later process of epiphysis extraction. Methods A method using anisotropic diffusion as pre-processing and a novel Bounded Area Elimination (BAE) post-processing algorithm is designed to improve the ossification site localization technique, with the intent of improving the adaptive segmentation result and the region-of-interest (ROI) localization accuracy. Results The results are evaluated by quantitative analysis and qualitative analysis using texture feature evaluation. The results indicate that image homogeneity after anisotropic diffusion improved by an average of 17.59% for each age group. Experiments showed that smoothness improved by an average of 35% after the BAE algorithm and that ROI localization improved by an average of 8.19%. The MSSIM improved by an average of 10.49% after performing the BAE algorithm on the adaptively segmented hand radiographs. Conclusions The results indicate that hand radiographs which have undergone anisotropic diffusion have greatly reduced noise in the segmented image, and that the proposed BAE algorithm is capable of removing the artifacts generated by adaptive segmentation. PMID:21952080

  5. Addressing the limits to adaptation across four damage--response systems

    EPA Science Inventory

    Our ability to adapt to climate change is not boundless, and previous modeling shows that capacity limited adaptation will play a policy-significant role in future decisions about climate change. These limits are delineated by capacity thresholds, after which climate damages beg...

  6. Investigation of Adaptive-threshold Approaches for Determining Area-Time Integrals from Satellite Infrared Data to Estimate Convective Rain Volumes

    NASA Technical Reports Server (NTRS)

    Smith, Paul L.; VonderHaar, Thomas H.

    1996-01-01

    The principal goal of this project is to establish relationships that would allow application of area-time integral (ATI) calculations based upon satellite data to estimate rainfall volumes. The research is being carried out as a collaborative effort between the two participating organizations, with the satellite data analysis to determine values for the ATIs being done primarily by the STC-METSAT scientists and the associated radar data analysis to determine the 'ground-truth' rainfall estimates being done primarily at the South Dakota School of Mines and Technology (SDSM&T). Synthesis of the two separate kinds of data and investigation of the resulting rainfall-versus-ATI relationships is then carried out jointly. The research has been pursued using two different approaches, which for convenience can be designated as the 'fixed-threshold approach' and the 'adaptive-threshold approach'. In the former, an attempt is made to determine a single temperature threshold in the satellite infrared data that would yield ATI values for identifiable cloud clusters which are closely related to the corresponding rainfall amounts as determined by radar. Work on the second, or 'adaptive-threshold', approach for determining the satellite ATI values has explored two avenues: (1) choosing IR thresholds to match the satellite ATI values with ones separately calculated from the radar data on a case-by-case basis; and (2) a straightforward screening analysis to determine the (fixed) offset that would lead to the strongest correlation and lowest standard error of estimate in the relationship between the satellite ATI values and the corresponding rainfall volumes.

  7. What Deters Students from Studying Abroad? Evidence from Four European Countries and Its Implications for Higher Education Policy

    ERIC Educational Resources Information Center

    Netz, Nicolai

    2015-01-01

    This study examines factors that deter students in Austria, Germany, Switzerland and the Netherlands from studying abroad. Using an adaptation of the Rubicon model of action phases, the path to gaining study abroad experience is conceptualised as a process involving two thresholds: the decision threshold and the realisation threshold. Theoretical…

  8. Step Detection Robust against the Dynamics of Smartphones

    PubMed Central

    Lee, Hwan-hee; Choi, Suji; Lee, Myeong-jin

    2015-01-01

    A novel algorithm is proposed for robust step detection irrespective of step mode and device pose in smartphone usage environments. The dynamics of smartphones are decoupled into a peak-valley relationship with adaptive magnitude and temporal thresholds. For extracted peaks and valleys in the magnitude of acceleration, a step is defined as consisting of a peak and its adjacent valley. Adaptive magnitude thresholds consisting of step average and step deviation are applied to suppress pseudo peaks or valleys that mostly occur during the transition among step modes or device poses. Adaptive temporal thresholds are applied to time intervals between peaks or valleys to consider the time-varying pace of human walking or running for the correct selection of peaks or valleys. From the experimental results, it can be seen that the proposed step detection algorithm shows more than 98.6% average accuracy for any combination of step mode and device pose and outperforms state-of-the-art algorithms. PMID:26516857
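
    Condensed to its core, the detection step can be sketched with scipy's peak finder, where the height bound plays the part of the adaptive magnitude threshold (step average plus a multiple of step deviation) and the distance bound the adaptive temporal threshold; the walking signal and the running statistics are synthetic.

        import numpy as np
        from scipy.signal import find_peaks

        rng = np.random.default_rng(11)
        fs = 50                                      # sample rate (Hz)
        t = np.arange(0, 10, 1 / fs)
        acc = (9.8 + 2.0 * np.maximum(np.sin(2 * np.pi * 1.8 * t), 0)
               + rng.normal(0, 0.15, t.size))        # ~1.8 steps/s walking

        step_avg, step_dev = 10.5, 0.5               # running step statistics
        peaks, _ = find_peaks(acc,
                              height=step_avg + 0.5 * step_dev,  # magnitude
                              distance=int(0.3 * fs))            # temporal
        print("steps detected:", len(peaks), "in 10 s")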

  9. Evaluation of width and width uniformity of near-field electrospinning printed micro and sub-micrometer lines based on optical image processing

    NASA Astrophysics Data System (ADS)

    Zhao, Libo; Xia, Yong; Hebibul, Rahman; Wang, Jiuhong; Zhou, Xiangyang; Hu, Yingjie; Li, Zhikang; Luo, Guoxi; Zhao, Yulong; Jiang, Zhuangde

    2018-03-01

    This paper presents an experimental study using image processing to investigate the width and width uniformity of sub-micrometer polyethylene oxide (PEO) lines fabricated by the near-field electrospinning (NFES) technique. An adaptive thresholding method was developed to determine the optimal gray values for accurately extracting the profiles of printed lines from the original optical images, and its feasibility was demonstrated. The proposed thresholding method takes advantage of statistical properties of the image and removes halo-induced errors. The triangular method and the relative standard deviation (RSD) were introduced to calculate line width and width uniformity, respectively. Based on these image processing methods, the effects of process parameters including substrate speed (v), applied voltage (U), nozzle-to-collector distance (H), and syringe pump flow rate (Q) on the width and width uniformity of printed lines were discussed. The research results help promote the NFES technique for fabricating high-resolution micro and sub-micrometer lines and are also helpful for optical image processing at the sub-micrometer level.

  10. Why do shape aftereffects increase with eccentricity?

    PubMed

    Gheorghiu, Elena; Kingdom, Frederick A A; Bell, Jason; Gurnsey, Rick

    2011-12-20

    Studies have shown that spatial aftereffects increase with eccentricity. Here, we demonstrate that the shape-frequency and shape-amplitude aftereffects, which describe the perceived shifts in the shape of a sinusoidal-shaped contour following adaptation to a slightly different sinusoidal-shaped contour, also increase with eccentricity. Why does this happen? We first demonstrate that the perceptual shift increases with eccentricity for stimuli of fixed sizes. These shifts are not attenuated by variations in stimulus size; in fact, at each eccentricity the degree of perceptual shift is scale-independent. This scale independence is specific to the aftereffect because basic discrimination thresholds (in the absence of adaptation) decrease as size increases. Structural aspects of the displays were found to have a modest effect on the degree of perceptual shift; the degree of adaptation depends modestly on distance between stimuli during adaptation and post-adaptation testing. There were similar temporal rates of decline of adaptation across the visual field and higher post-adaptation discrimination thresholds in the periphery than in the center. The observed results are consistent with greater sensitivity reduction in adapted mechanisms following adaptation in the periphery or an eccentricity-dependent increase in the bandwidth of the shape-frequency- and shape-amplitude-selective mechanisms.

  11. A Frequency-Domain Adaptive Matched Filter for Active Sonar Detection.

    PubMed

    Zhao, Zhishan; Zhao, Anbang; Hui, Juan; Hou, Baochun; Sotudeh, Reza; Niu, Fang

    2017-07-04

    The classical detector for active sonar and radar is the matched filter (MF), which is the optimal processor under ideal conditions. For the problem of active sonar detection, we propose a frequency-domain adaptive matched filter (FDAMF) that uses a frequency-domain adaptive line enhancer (ALE). The FDAMF is an improved MF. In the simulations in this paper, the signal-to-noise ratio (SNR) gain of the FDAMF is about 18.6 dB higher than that of the classical MF when the input SNR is -10 dB. In order to improve the performance of the FDAMF at a low input SNR, we propose a pre-processing method called frequency-domain time reversal convolution and interference suppression (TRC-IS). Compared with the classical MF, the FDAMF combined with the TRC-IS method obtains a higher SNR gain, a lower detection threshold, and a better receiver operating characteristic (ROC) in the simulations in this paper. The simulation results show that the FDAMF has higher processing gain and better detection performance than the classical MF under ideal conditions. The experimental results indicate that the FDAMF does improve the performance of the MF and can adapt to actual interference to some extent. In addition, the TRC-IS preprocessing method works well in an actual noisy ocean environment.
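
    The baseline the record improves on is the classical matched filter evaluated in the frequency domain; the sketch below shows only that baseline (no ALE stage or TRC-IS preprocessing), with a toy chirp example whose variable names are ours.

```python
import numpy as np

def matched_filter_fd(x, template):
    """Matched filtering in the frequency domain: multiply the data spectrum by
    the conjugate template spectrum and invert (linear cross-correlation)."""
    n = len(x) + len(template) - 1  # zero-pad to avoid circular wrap-around
    X = np.fft.rfft(x, n)
    H = np.conj(np.fft.rfft(template, n))  # matched-filter frequency response
    y = np.fft.irfft(X * H, n)
    return y / np.max(np.abs(y))  # normalized correlation output

# Toy usage: a linear chirp buried in noise; the output peaks near the onset.
fs = 8000.0
t = np.arange(0, 0.1, 1 / fs)
template = np.sin(2 * np.pi * (1000 + 4000 * t) * t)
x = np.concatenate([np.zeros(2000), template, np.zeros(2000)])
x += 0.5 * np.random.randn(len(x))
onset = np.argmax(matched_filter_fd(x, template))  # expected near sample 2000
```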

  12. Threshold magnitudes for a multichannel correlation detector in background seismicity

    DOE PAGES

    Carmichael, Joshua D.; Hartse, Hans

    2016-04-01

    Colocated explosive sources often produce correlated seismic waveforms. Multichannel correlation detectors identify these signals by scanning template waveforms recorded from known reference events against "target" data to find similar waveforms. This screening problem is challenged at thresholds required to monitor smaller explosions, often because non-target signals falsely trigger such detectors. Therefore, it is generally unclear what thresholds will reliably identify a target explosion while screening non-target background seismicity. Here, we estimate threshold magnitudes for hypothetical explosions located at the North Korean nuclear test site over six months of 2010, by processing International Monitoring System (IMS) array data with a multichannel waveform correlation detector. Our method (1) accounts for low amplitude background seismicity that falsely triggers correlation detectors but is unidentifiable with conventional power beams, (2) adapts to diurnally variable noise levels and (3) uses source-receiver reciprocity concepts to estimate thresholds for explosions spatially separated from the template source. Furthermore, we find that underground explosions with body wave magnitudes m_b = 1.66 are detectable at the IMS array USRK with probability 0.99, when using template waveforms consisting only of P-waves, without false alarms. We conservatively find that these thresholds also increase by up to a magnitude unit for sources located 4 km or more from the Feb. 12, 2013 announced nuclear test.

  13. Speckle Noise Reduction in Optical Coherence Tomography Using Two-dimensional Curvelet-based Dictionary Learning.

    PubMed

    Esmaeili, Mahdad; Dehnavi, Alireza Mehri; Rabbani, Hossein; Hajizadeh, Fedra

    2017-01-01

    The process of interpretation of high-speed optical coherence tomography (OCT) images is restricted due to the large speckle noise. To address this problem, this paper proposes a new method using a two-dimensional (2D) curvelet-based K-SVD algorithm for speckle noise reduction and contrast enhancement of intra-retinal layers of 2D spectral-domain OCT images. For this purpose, we take the curvelet transform of the noisy image. In the next step, noisy sub-bands of different scales and rotations are separately thresholded with an adaptive data-driven thresholding method; each thresholded sub-band is then denoised based on K-SVD dictionary learning with a variable-size initial dictionary dependent on the size of the curvelet coefficients' matrix in each sub-band. We also modify each coefficient matrix to enhance intra-retinal layers, with noise suppression at the same time. We demonstrate the ability of the proposed algorithm in speckle noise reduction of 100 publicly available OCT B-scans with and without non-neovascular age-related macular degeneration (AMD); improvements of contrast-to-noise ratio from 1.27 to 5.12 and of mean-to-standard-deviation ratio from 3.20 to 14.41 are obtained.
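
    The curvelet and K-SVD stages are too heavy for a short sketch, but the adaptive, data-driven sub-band thresholding step can be illustrated with an ordinary wavelet decomposition and a BayesShrink-style threshold; this is a deliberate simplification of the paper's pipeline, using the PyWavelets package.

```python
import numpy as np
import pywt  # PyWavelets

def denoise_subband_adaptive(img, wavelet="db4", levels=3):
    """Per-sub-band, data-driven soft thresholding: estimate the noise level
    from the finest diagonal band, then threshold each detail band according
    to its own signal energy (BayesShrink-style)."""
    coeffs = pywt.wavedec2(img, wavelet, level=levels)
    # robust noise estimate from the finest diagonal detail band
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    out = [coeffs[0]]  # keep the approximation band untouched
    for detail in coeffs[1:]:
        bands = []
        for band in detail:
            var_x = max(np.var(band) - sigma**2, 1e-12)  # signal variance
            thr = sigma**2 / np.sqrt(var_x)  # adapts to each band's energy
            bands.append(pywt.threshold(band, thr, mode="soft"))
        out.append(tuple(bands))
    return pywt.waverec2(out, wavelet)
```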

  14. Impact of Fast Sodium Channel Inactivation on Spike Threshold Dynamics and Synaptic Integration

    PubMed Central

    Platkiewicz, Jonathan; Brette, Romain

    2011-01-01

    Neurons spike when their membrane potential exceeds a threshold value. In central neurons, the spike threshold is not constant but depends on the stimulation. Thus, input-output properties of neurons depend both on the effect of presynaptic spikes on the membrane potential and on the dynamics of the spike threshold. Among the possible mechanisms that may modulate the threshold, one strong candidate is Na channel inactivation, because it specifically impacts spike initiation without affecting the membrane potential. We collected voltage-clamp data from the literature and we found, based on a theoretical criterion, that the properties of Na inactivation could indeed cause substantial threshold variability by itself. By analyzing simple neuron models with fast Na inactivation (one channel subtype), we found that the spike threshold is correlated with the mean membrane potential and negatively correlated with the preceding depolarization slope, consistent with experiments. We then analyzed the impact of threshold dynamics on synaptic integration. The difference between the postsynaptic potential (PSP) and the dynamic threshold in response to a presynaptic spike defines an effective PSP. When the neuron is sufficiently depolarized, this effective PSP is briefer than the PSP. This mechanism regulates the temporal window of synaptic integration in an adaptive way. Finally, we discuss the role of other potential mechanisms. Distal spike initiation, channel noise and Na activation dynamics cannot account for the observed negative slope-threshold relationship, while adaptive conductances (e.g. K+) and Na inactivation can. We conclude that Na inactivation is a metabolically efficient mechanism to control the temporal resolution of synaptic integration. PMID:21573200
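
    A toy leaky integrate-and-fire model with a membrane-potential-dependent threshold, loosely following the smooth threshold equations used in this line of work, illustrates how sustained depolarization raises the spike threshold while fast fluctuations can still outrun it; all constants here are illustrative.

```python
import numpy as np

def lif_dynamic_threshold(I, dt=0.1, tau_m=10.0, tau_th=5.0,
                          v_rest=-70.0, th0=-50.0, ka=5.0, vi=-60.0):
    """Leaky integrate-and-fire neuron whose spike threshold relaxes toward a
    steady state that rises with depolarization, mimicking fast Na-channel
    inactivation. I is the input drive per time step (arbitrary units)."""
    v, th = v_rest, th0
    spike_times = []
    for i in range(len(I)):
        # steady-state threshold grows smoothly once v exceeds vi
        th_inf = th0 + ka * np.log1p(np.exp((v - vi) / ka))
        th += dt / tau_th * (th_inf - th)
        v += dt / tau_m * (v_rest - v) + dt * I[i]
        if v >= th:
            spike_times.append(i * dt)
            v = v_rest  # reset; the elevated threshold decays on its own
    return np.array(spike_times)

# A slow ramp rarely spikes (the threshold tracks it), while a fluctuating
# drive of the same mean does, e.g.:
#   lif_dynamic_threshold(2.0 + 2.0 * np.random.randn(20000))
```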

  15. Enhanced Sensitivity to Rapid Input Fluctuations by Nonlinear Threshold Dynamics in Neocortical Pyramidal Neurons

    PubMed Central

    Mensi, Skander; Hagens, Olivier; Gerstner, Wulfram; Pozzorini, Christian

    2016-01-01

    The way in which single neurons transform input into output spike trains has fundamental consequences for network coding. Theories and modeling studies based on standard Integrate-and-Fire models implicitly assume that, in response to increasingly strong inputs, neurons modify their coding strategy by progressively reducing their selective sensitivity to rapid input fluctuations. Combining mathematical modeling with in vitro experiments, we demonstrate that, in L5 pyramidal neurons, the firing threshold dynamics adaptively adjust the effective timescale of somatic integration in order to preserve sensitivity to rapid signals over a broad range of input statistics. For that, a new Generalized Integrate-and-Fire model featuring nonlinear firing threshold dynamics and conductance-based adaptation is introduced that outperforms state-of-the-art neuron models in predicting the spiking activity of neurons responding to a variety of in vivo-like fluctuating currents. Our model allows for efficient parameter extraction and can be analytically mapped to a Generalized Linear Model in which both the input filter—describing somatic integration—and the spike-history filter—accounting for spike-frequency adaptation—dynamically adapt to the input statistics, as experimentally observed. Overall, our results provide new insights on the computational role of different biophysical processes known to underlie adaptive coding in single neurons and support previous theoretical findings indicating that the nonlinear dynamics of the firing threshold due to Na+-channel inactivation regulate the sensitivity to rapid input fluctuations. PMID:26907675

  17. Summer temperature metrics for predicting brook trout (Salvelinus fontinalis) distribution in streams

    USGS Publications Warehouse

    Parrish, Donna; Butryn, Ryan S.; Rizzo, Donna M.

    2012-01-01

    We developed a methodology to predict brook trout (Salvelinus fontinalis) distribution using summer temperature metrics as predictor variables. Our analysis used long-term fish and hourly water temperature data from the Dog River, Vermont (USA). Commonly used metrics (e.g., mean, maximum, maximum 7-day maximum) tend to smooth the data so information on temperature variation is lost. Therefore, we developed a new set of metrics (called event metrics) to capture temperature variation by describing the frequency, area, duration, and magnitude of events that exceeded a user-defined temperature threshold. We used thresholds of 16, 18, 20, and 22°C. We built linear discriminant models and tested and compared the event metrics against the commonly used metrics. Correct classification of the observations was 66% with event metrics and 87% with commonly used metrics. However, combined event and commonly used metrics correctly classified 92%. Of the four individual temperature thresholds, it was difficult to assess which threshold had the “best” accuracy. The 16°C threshold had slightly fewer misclassifications; however, the 20°C threshold had the fewest extreme misclassifications. Our method leveraged the volumes of existing long-term data and provided a simple, systematic, and adaptable framework for monitoring changes in fish distribution, specifically in the case of irregular, extreme temperature events.
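
    The event metrics are straightforward to compute from an hourly series; the sketch below returns the frequency, duration, magnitude, and area of exceedances for one chosen threshold (function and field names are ours, not the authors').

```python
import numpy as np

def event_metrics(temp, threshold_c, dt_hours=1.0):
    """Frequency, duration, magnitude, and 'area' of excursions above a
    temperature threshold in an hourly water-temperature series."""
    above = temp > threshold_c
    # locate starts/ends of contiguous runs above the threshold
    edges = np.diff(above.astype(int))
    starts = np.where(edges == 1)[0] + 1
    ends = np.where(edges == -1)[0] + 1
    if above[0]:
        starts = np.r_[0, starts]
    if above[-1]:
        ends = np.r_[ends, len(above)]
    events = list(zip(starts, ends))
    excess = temp - threshold_c
    return {
        "frequency": len(events),
        "duration_h": [(e - s) * dt_hours for s, e in events],
        "magnitude_c": [excess[s:e].max() for s, e in events],
        "area_degh": [excess[s:e].sum() * dt_hours for s, e in events],
    }

# e.g. event_metrics(hourly_temps, threshold_c=20.0) for the 20°C threshold
```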

  18. Can adaptive threshold-based metabolic tumor volume (MTV) and lean body mass corrected standard uptake value (SUL) predict prognosis in head and neck cancer patients treated with definitive radiotherapy/chemoradiotherapy?

    PubMed

    Akagunduz, Ozlem Ozkaya; Savas, Recep; Yalman, Deniz; Kocacelebi, Kenan; Esassolak, Mustafa

    2015-11-01

    To evaluate the predictive value of adaptive threshold-based metabolic tumor volume (MTV), maximum standardized uptake value (SUVmax) and maximum lean body mass corrected SUV (SULmax) measured on pretreatment positron emission tomography and computed tomography (PET/CT) imaging in head and neck cancer patients treated with definitive radiotherapy/chemoradiotherapy. Pretreatment PET/CT scans of the 62 patients with locally advanced head and neck cancer who were treated consecutively between May 2010 and February 2013 were reviewed retrospectively. The maximum FDG uptake of the primary tumor was defined according to SUVmax and SULmax. Multiple threshold levels between 60% and 10% of the SUVmax and SULmax were tested with intervals of 5% to 10% in order to define the most suitable threshold value for the metabolic activity of each patient's tumor (adaptive threshold). MTV was calculated according to this value. We evaluated the relationship of mean values of MTV, SUVmax and SULmax with treatment response, local recurrence, distant metastasis and disease-related death. Receiver-operating characteristic (ROC) curve analysis was done to obtain optimal predictive cut-off values for MTV and SULmax, which were found to have a predictive value. Local recurrence-free (LRFS), disease-free (DFS) and overall survival (OS) were examined according to these cut-offs. Forty-six patients had complete response, 15 had partial response, and 1 had stable disease 6 weeks after the completion of treatment. Median follow-up of the entire cohort was 18 months. Of the 46 complete responders, 10 had local recurrence, and of the 16 partial or non-responders, 10 had local progression. Eighteen patients died. Adaptive threshold-based MTV had significant predictive value for treatment response (p=0.011), local recurrence/progression (p=0.050), and disease-related death (p=0.024). SULmax had a predictive value for local recurrence/progression (p=0.030). ROC curve analysis revealed a cut-off value of 14.00 mL for MTV and 10.15 for SULmax. Three-year LRFS and DFS rates were significantly lower in patients with MTV ≥ 14.00 mL (p=0.026, p=0.018 respectively), and SULmax ≥ 10.15 (p=0.017, p=0.022 respectively). SULmax did not have a significant predictive value for OS whereas MTV did (p=0.025). Adaptive threshold-based MTV and SULmax could have a role in predicting local control and survival in head and neck cancer patients.

  19. Laying the Groundwork for NCLEX Success: An Exploration of Adaptive Quizzing as an Examination Preparation Method.

    PubMed

    Cox-Davenport, Rebecca A; Phelan, Julia C

    2015-05-01

    First-time NCLEX-RN pass rates are an important indicator of nursing school success and quality. Nursing schools use different methods to anticipate NCLEX outcomes and help prevent student failure and possible threat to accreditation. This study evaluated the impact of a shift in NCLEX preparation policy at a BSN program in the southeast United States. The policy shifted from the use of predictor score thresholds to determine graduation eligibility to a more proactive remediation strategy involving adaptive quizzing. A descriptive correlational design evaluated the impact of an adaptive quizzing system designed to give students ongoing active practice and feedback and explored the relationship between predictor examinations and NCLEX success. Data from student usage of the system as well as scores on predictor tests were collected for three student cohorts. Results revealed a positive correlation between adaptive quizzing system usage and content mastery. Two of the 69 students in the sample did not pass the NCLEX. With so few students failing the NCLEX, predictability of any course variables could not be determined. The power of predictor examinations to predict NCLEX failure could also not be supported. The most consistent factor among students, however, was their content mastery level within the adaptive quizzing system. Implications of these findings are discussed.

  20. Thermal sensation and climate: a comparison of UTCI and PET thresholds in different climates.

    PubMed

    Pantavou, Katerina; Lykoudis, Spyridon; Nikolopoulou, Marialena; Tsiros, Ioannis X

    2018-06-07

    The influence of physiological acclimatization and psychological adaptation on thermal perception is well documented and has revealed the importance of thermal experience and expectation in the evaluation of environmental stimuli. Seasonal patterns of thermal perception have been studied, and calibrated thermal indices' scales have been proposed to obtain meaningful interpretations of thermal sensation indices in different climate regions. The current work attempts to quantify the contribution of climate to the long-term thermal adaptation by examining the relationship between climate normal annual air temperature (1971-2000) and such climate-calibrated thermal indices' assessment scales. The thermal sensation ranges of two thermal indices, the Universal Thermal Climate Index (UTCI) and the Physiological Equivalent Temperature Index (PET), were calibrated for three warm temperate climate contexts (Cfa, Cfb, Csa), against the subjective evaluation of the thermal environment indicated by interviewees during field surveys conducted at seven European cities: Athens (GR), Thessaloniki (GR), Milan (IT), Fribourg (CH), Kassel (DE), Cambridge (UK), and Sheffield (UK), under the same research protocol. Then, calibrated scales for other climate contexts were added from the literature, and the relationship between the respective scales' thresholds and climate normal annual air temperature was examined. To maintain the maximum possible comparability, three methods were applied for the calibration, namely linear, ordinal, and probit regression. The results indicated that the calibrated UTCI and PET thresholds increase with the climate normal annual air temperature of the survey city. To investigate further climates, we also included in the analysis results of previous studies presenting only thresholds for neutral thermal sensation. The average increase of the respective thresholds in the case of neutral thermal sensation was about 0.6 °C for each 1 °C increase of the normal annual air temperature for both indices, statistically significant only for PET though.

  1. Adaptive Optics Microperimetry and OCT Images Show Preserved Function and Recovery of Cone Visibility in Macular Telangiectasia Type 2 Retinal Lesions

    PubMed Central

    Wang, Qinyun; Tuten, William S.; Lujan, Brandon J.; Holland, Jennifer; Bernstein, Paul S.; Schwartz, Steven D.; Duncan, Jacque L.; Roorda, Austin

    2015-01-01

    Purpose. To evaluate visual function and disease progression in the retinal structural abnormalities of three patients from two unrelated families with macular telangiectasia (MacTel) type 2. Methods. Adaptive optics scanning laser ophthalmoscopy (AOSLO) and AOSLO microperimetry (AOMP) were used to evaluate the structure and function of macular cones in three eyes with MacTel type 2. Cone spacing was estimated using histogram analysis of intercone distances, and registered spectral-domain optical coherence tomography (SD-OCT) scans were used to evaluate retinal anatomy. AOMP was used to assess visual sensitivity in and around areas of apparent cone loss. Results. Although overall lesion surface area increased, some initially affected regions subsequently showed clear, contiguous, and normally spaced cone mosaics with recovered photoreceptor inner/outer segment (IS/OS) reflectivity (two of two eyes). The AOMP test sites fell within three categories: normal-appearing cones (N), dimly reflecting cones (D), and RPE cell mosaics (R). At N sites, AOMP threshold values (arbitrary units [au]) increased with increasing eccentricity (slope = 0.054 au/degree, r2 = 0.77). The N thresholds ranged from 0.04 to 0.27 au, D thresholds from 0.04 to 0.33 au, and R thresholds from 0.14 to 1.00 au. There was measurable visual sensitivity everywhere except areas without intact external limiting membrane (ELM) and with diffuse scattering in the IS/OS and posterior tips of the outer segments (PTOS) regions on OCT. Conclusions. Visual sensitivity and recovery of cone visibility in areas of apparent focal cone loss suggests that MacTel type 2 lesions with a preserved ELM may contain functioning cones with abnormal scattering and/or waveguiding characteristics. (ClinicalTrials.gov number, NCT00254605.) PMID:25587056

  2. Analysis of parenchymal patterns using conspicuous spatial frequency features in mammograms applied to the BI-RADS density rating scheme

    NASA Astrophysics Data System (ADS)

    Perconti, Philip; Loew, Murray

    2006-03-01

    Automatic classification of the density of breast parenchyma is shown using a measure that is correlated to the human observer performance, and compared against the BI-RADS density rating. Increasingly popular in the United States, the Breast Imaging Reporting and Data System (BI-RADS) is used to draw attention to the increased screening difficulty associated with greater breast density; however, the BI-RADS rating scheme is subjective and is not intended as an objective measure of breast density. So, while popular, BI-RADS does not define density classes using a standardized measure, which leads to increased variability among observers. The adaptive thresholding technique is a more quantitative approach for assessing the percentage breast density, but considerable reader interaction is required. We calculate an objective density rating that is derived using a measure of local feature salience. Previously, this measure was shown to correlate well with radiologists' localization and discrimination of true positive and true negative regions-of-interest. Using conspicuous spatial frequency features, an objective density rating is obtained and correlated with adaptive thresholding, and the subjectively ascertained BI-RADS density ratings. Using 100 cases, obtained from the University of South Florida's DDSM database, we show that an automated breast density measure can be derived that is correlated with the interactive thresholding method for continuous percentage breast density, but not with the BI-RADS density rating categories for the selected cases. Comparison between interactive thresholding and the new salience percentage density resulted in a Pearson correlation of 76.7%. Using a four-category scale equivalent to the BI-RADS density categories, a Spearman correlation coefficient of 79.8% was found.

  3. Robust crop and weed segmentation under uncontrolled outdoor illumination

    USDA-ARS?s Scientific Manuscript database

    A new machine vision algorithm for weed detection was developed from RGB color model images. Processes included in the detection algorithm were excessive green conversion, threshold value computation by statistical analysis, adaptive image segmentation by adjusting the threshold value, median filter, ...
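
    A plausible reading of the truncated pipeline above, with Otsu's method standing in for the unspecified statistical threshold computation; all parameter choices are assumptions.

```python
import numpy as np
from scipy.ndimage import median_filter
from skimage.filters import threshold_otsu

def segment_vegetation(rgb):
    """Excess-green conversion, statistical threshold, median filtering."""
    rgb = rgb.astype(float)
    s = rgb.sum(axis=2) + 1e-9  # chromatic normalization
    r, g, b = rgb[..., 0] / s, rgb[..., 1] / s, rgb[..., 2] / s
    exg = 2 * g - r - b  # excess green index: vegetation stands out
    mask = exg > threshold_otsu(exg)  # Otsu as the statistical step
    return median_filter(mask.astype(np.uint8), size=3).astype(bool)
```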

  4. An evaluation of the effect of recent temperature variability on the prediction of coral bleaching events.

    PubMed

    Donner, Simon D

    2011-07-01

    Over the past 30 years, warm thermal disturbances have become commonplace on coral reefs worldwide. These periods of anomalous sea surface temperature (SST) can lead to coral bleaching, a breakdown of the symbiosis between the host coral and symbiotic dinoflagellates which reside in coral tissue. The onset of bleaching is typically predicted to occur when the SST exceeds a local climatological maximum by 1 °C for a month or more. However, recent evidence suggests that the threshold at which bleaching occurs may depend on thermal history. This study uses global SST data sets (HadISST and NOAA AVHRR) and mass coral bleaching reports (from Reefbase) to examine the effect of historical SST variability on the accuracy of bleaching prediction. Two variability-based bleaching prediction methods are developed from global analysis of seasonal and interannual SST variability. The first method employs a local bleaching threshold derived from the historical variability in maximum annual SST to account for spatial variability in past thermal disturbance frequency. The second method uses a different formula to estimate the local climatological maximum to account for the low seasonality of SST in the tropics. The new prediction methods are tested against the common globally fixed threshold method using the observed bleaching reports. The results find that estimating the bleaching threshold from local historical SST variability delivers the highest predictive power, but also a higher rate of Type I errors. The second method has the lowest predictive power globally, though regional analysis suggests that it may be applicable in equatorial regions. The historical data analysis suggests that the bleaching threshold may have appeared to be constant globally because the magnitude of interannual variability in maximum SST is similar for many of the world's coral reef ecosystems. For example, the results show that an SST anomaly of 1 °C is equivalent to 1.73-2.94 standard deviations of the maximum monthly SST for two-thirds of the world's coral reefs. Coral reefs in the few regions that experience anomalously high interannual SST variability like the equatorial Pacific could prove critical to understanding how coral communities acclimate or adapt to frequent and/or severe thermal disturbances.
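
    The first variability-based method can be paraphrased as a climatological maximum plus a multiple of the interannual standard deviation of annual maximum SST; the sketch below contrasts it with the fixed +1 °C rule. The multiplier k and the function names are assumptions, not the paper's exact formulas.

```python
import numpy as np

def variability_threshold(monthly_sst, k=2.0):
    """Variability-based bleaching threshold: local climatological maximum plus
    k standard deviations of the interannual annual-maximum SST.

    monthly_sst : array of shape (n_years, 12) for one reef location
    """
    annual_max = monthly_sst.max(axis=1)
    clim_max = annual_max.mean()  # local climatological maximum
    return clim_max + k * annual_max.std(ddof=1)

def fixed_threshold(monthly_sst):
    """Common method: warmest climatological month plus 1 °C."""
    return monthly_sst.mean(axis=0).max() + 1.0
```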

  5. An adaptive detector and channel estimator for deep space optical communications

    NASA Technical Reports Server (NTRS)

    Mukai, R.; Arabshahi, P.; Yan, T. Y.

    2001-01-01

    This paper will discuss the design and testing of both the channel parameter identification system, and the adaptive threshold system, and illustrate their advantages and performance under simulated channel degradation conditions.

  6. Automatic segmentation of lung parenchyma based on curvature of ribs using HRCT images in scleroderma studies

    NASA Astrophysics Data System (ADS)

    Prasad, M. N.; Brown, M. S.; Ahmad, S.; Abtin, F.; Allen, J.; da Costa, I.; Kim, H. J.; McNitt-Gray, M. F.; Goldin, J. G.

    2008-03-01

    Segmentation of lungs in the setting of scleroderma is a major challenge in medical image analysis. Threshold-based techniques tend to leave out lung regions that have increased attenuation, for example in the presence of interstitial lung disease or in noisy low-dose CT scans. The purpose of this work is to perform segmentation of the lungs using a technique that selects an optimal threshold for a given scleroderma patient by comparing the curvature of the lung boundary to that of the ribs. Our approach is based on adaptive thresholding and it tries to exploit the fact that the curvature of the ribs and the curvature of the lung boundary are closely matched. At first, the ribs are segmented and a polynomial is used to represent the ribs' curvature. A threshold value to segment the lungs is selected iteratively such that the deviation of the lung boundary from the polynomial is minimized. A Naive Bayes classifier is used to build the model for selection of the best-fitting lung boundary. The performance of the new technique was compared against a standard approach using a simple fixed threshold of -400 HU followed by region growing. The two techniques were evaluated against manual reference segmentations using a volumetric overlap fraction (VOF) and the adaptive threshold technique was found to be significantly better than the fixed threshold technique.

  7. THE EFFECTS OF VARIATIONS IN THE CONCENTRATION OF OXYGEN AND OF GLUCOSE ON DARK ADAPTATION

    PubMed Central

    McFarland, R. A.; Forbes, W. H.

    1940-01-01

    In this study we have analyzed the effects of variations in the concentrations of oxygen and of blood sugar on light sensitivity; i.e. dark adaptation. The experiments were carried out in an air-conditioned light-proof chamber where the concentrations of oxygen could be changed by dilution with nitrogen or by inhaling oxygen from a cylinder. The blood sugar was lowered by the injection of insulin and raised by the ingestion of glucose. The dark adaptation curves were plotted from data secured with an apparatus built according to specifications outlined by Hecht and Shlaer. During each experiment, observations were first made in normal air with the subject under basal conditions followed by one, and in most instances two, periods under the desired experimental conditions involving either anoxia or hyper- or hypoglycemia or variations in both the oxygen tension and blood sugar at the same time. 1. Dark adaptation curves were plotted (threshold against time) in normal air and compared with those obtained while inhaling lowered concentrations of oxygen. A decrease in sensitivity was observed with lowered oxygen tensions. Both the rod and cone portions of the curves were influenced in a similar way. These effects were counteracted by inhaling oxygen, the final rod thresholds returning to about the level of the normal base line in air or even below it within 2 to 3 minutes. The impairment was greatest for those with a poorer tolerance for low O2. Both the inter- and intra-individual variability in thresholds increased significantly at the highest altitude. 2. In a second series of tests control curves were obtained in normal air. Then while each subject remained dark adapted, the concentrations of oxygen were gradually decreased. The regeneration of visual purple was apparently complete during the 40 minutes of dark adaptation, yet in each case the thresholds continued to rise in direct proportion to the degree of anoxia. The inhalation of oxygen from a cylinder quickly counteracted the effects for the thresholds returned to the original control level within 2 to 3 minutes. 3. In experiments where the blood sugar was raised by the ingestion of glucose in normal air, no significant changes in the thresholds were observed except when the blood sugar was rapidly falling toward the end of the glucose tolerance tests. However, when glucose was ingested at the end of an experiment in low oxygen, while the subject remained dark adapted, the effects of the anoxia were largely counteracted within 6 to 8 minutes. 4. The influence of low blood sugar on light sensitivity was then studied by injecting insulin. The thresholds were raised as soon as the effects of the insulin produced a fall in the blood sugar. When the subjects inhaled oxygen the thresholds were lowered. Then when the oxygen was withdrawn so that the subject was breathing normal air, the thresholds rose again within 1 to 2 minutes. Finally, if the blood sugar was raised by ingesting glucose, the average threshold fell to the original control level or even below it. 5. The combined effects of low oxygen and low blood sugar on light sensitivity were studied in one subject (W. F.). These effects appeared to be greater than when a similar degree of anoxia or hypoglycemia was brought about separately. 6. In a series of experiments on ten subjects the dark adaptation curves were obtained both in the basal state and after a normal breakfast. In nine of the ten subjects, the food increased the sensitivity of the subjects to light. 7. The experiments reported above lend support to the hypothesis that both anoxia and hypoglycemia produce their effects on light sensitivity in essentially the same way; namely, by slowing the oxidative processes. Consequently the effects of anoxia may be ameliorated by giving glucose and the effects of hypoglycemia by inhaling oxygen. In our opinion, the changes may be attributed directly to the effects on the nervous tissue of the visual mechanism and the brain rather than on the photochemical processes of the retina. PMID:19873200

  8. Monopolar Detection Thresholds Predict Spatial Selectivity of Neural Excitation in Cochlear Implants: Implications for Speech Recognition

    PubMed Central

    2016-01-01

    The objectives of the study were to (1) investigate the potential of using monopolar psychophysical detection thresholds for estimating spatial selectivity of neural excitation with cochlear implants and to (2) examine the effect of site removal on speech recognition based on the threshold measure. Detection thresholds were measured in Cochlear Nucleus® device users using monopolar stimulation for pulse trains that were of (a) low rate and long duration, (b) high rate and short duration, and (c) high rate and long duration. Spatial selectivity of neural excitation was estimated by a forward-masking paradigm, where the probe threshold elevation in the presence of a forward masker was measured as a function of masker-probe separation. The strength of the correlation between the monopolar thresholds and the slopes of the masking patterns systematically reduced as neural response of the threshold stimulus involved interpulse interactions (refractoriness and sub-threshold adaptation), and spike-rate adaptation. Detection threshold for the low-rate stimulus most strongly correlated with the spread of forward masking patterns and the correlation reduced for long and high rate pulse trains. The low-rate thresholds were then measured for all electrodes across the array for each subject. Subsequently, speech recognition was tested with experimental maps that deactivated five stimulation sites with the highest thresholds and five randomly chosen ones. Performance with deactivating the high-threshold sites was better than performance with the subjects’ clinical map used every day with all electrodes active, in both quiet and background noise. Performance with random deactivation was on average poorer than that with the clinical map but the difference was not significant. These results suggested that the monopolar low-rate thresholds are related to the spatial neural excitation patterns in cochlear implant users and can be used to select sites for more optimal speech recognition performance. PMID:27798658

  9. The uncertain response in humans and animals

    NASA Technical Reports Server (NTRS)

    Smith, J. D.; Shields, W. E.; Schull, J.; Washburn, D. A.; Rumbaugh, D. M. (Principal Investigator)

    1997-01-01

    There has been no comparative psychological study of uncertainty processes. Accordingly, the present experiments asked whether animals, like humans, escape adaptively when they are uncertain. Human and animal observers were given two primary responses in a visual discrimination task, and the opportunity to escape from some trials into easier ones. In one psychophysical task (using a threshold paradigm), humans escaped selectively the difficult trials that left them uncertain of the stimulus. Two rhesus monkeys (Macaca mulatta) also showed this pattern. In a second psychophysical task (using the method of constant stimuli), some humans showed this pattern but one escaped infrequently and nonoptimally. Monkeys showed equivalent individual differences. The data suggest that escapes by humans and monkeys are interesting cognitive analogs and may reflect controlled decisional processes prompted by the perceptual ambiguity at threshold.

  10. Pooled Genome-Wide Analysis to Identify Novel Risk Loci for Pediatric Allergic Asthma

    PubMed Central

    Ricci, Giampaolo; Astolfi, Annalisa; Remondini, Daniel; Cipriani, Francesca; Formica, Serena; Dondi, Arianna; Pession, Andrea

    2011-01-01

    Background Genome-wide association studies of pooled DNA samples were shown to be a valuable tool to identify candidate SNPs associated with a phenotype. No such study had previously been applied to childhood allergic asthma, even though the very high complexity of asthma genetics makes it an appropriate field in which to explore the potential of the pooled GWAS approach. Methodology/Principal Findings We performed a pooled GWAS and individual genotyping in 269 children with allergic respiratory diseases, comparing allergic children with and without asthma. We used a modular approach to identify the most significant loci associated with asthma by combining silhouette statistics and a physical distance method with cluster-adapted thresholding. We found 97% concordance between pooled GWAS and individual genotyping, with 36 out of 37 top-scoring SNPs significant at the individual genotyping level. The most significant SNP is located inside the coding sequence of C5, an already identified asthma susceptibility gene, while the other loci regulate functions that are relevant to bronchial physiopathology, such as immune- or inflammation-mediated mechanisms and airway smooth muscle contraction. Integration with gene expression data showed that almost half of the putative susceptibility genes are differentially expressed in experimental asthma mouse models. Conclusion/Significance Combined silhouette statistics and cluster-adapted physical distance threshold analysis of pooled GWAS data is an efficient method to identify candidate SNPs associated with asthma development in an allergic pediatric population. PMID:21359210

  11. Large Covariance Estimation by Thresholding Principal Orthogonal Complements

    PubMed Central

    Fan, Jianqing; Liao, Yuan; Mincheva, Martina

    2012-01-01

    This paper deals with the estimation of a high-dimensional covariance with a conditional sparsity structure and fast-diverging eigenvalues. By assuming sparse error covariance matrix in an approximate factor model, we allow for the presence of some cross-sectional correlation even after taking out common but unobservable factors. We introduce the Principal Orthogonal complEment Thresholding (POET) method to explore such an approximate factor structure with sparsity. The POET estimator includes the sample covariance matrix, the factor-based covariance matrix (Fan, Fan, and Lv, 2008), the thresholding estimator (Bickel and Levina, 2008) and the adaptive thresholding estimator (Cai and Liu, 2011) as specific examples. We provide mathematical insights when the factor analysis is approximately the same as the principal component analysis for high-dimensional data. The rates of convergence of the sparse residual covariance matrix and the conditional sparse covariance matrix are studied under various norms. It is shown that the impact of estimating the unknown factors vanishes as the dimensionality increases. The uniform rates of convergence for the unobserved factors and their factor loadings are derived. The asymptotic results are also verified by extensive simulation studies. Finally, a real data application on portfolio allocation is presented. PMID:24348088
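
    A compact sketch of the POET construction: a low-rank part from the K leading principal components of the sample covariance, plus thresholding of the residual ("principal orthogonal complement") covariance. A constant soft threshold tau is used here in place of the paper's adaptive, entry-dependent threshold.

```python
import numpy as np

def poet(X, K, tau):
    """POET covariance estimator sketch.

    X : (n, p) data matrix; K : number of factors; tau : threshold level
    """
    S = np.cov(X, rowvar=False)
    vals, vecs = np.linalg.eigh(S)
    idx = np.argsort(vals)[::-1][:K]  # K leading eigenpairs
    lam, V = vals[idx], vecs[:, idx]
    low_rank = (V * lam) @ V.T  # sum_k lam_k v_k v_k'
    R = S - low_rank  # principal orthogonal complement
    # soft-threshold the off-diagonal residual entries, keep the diagonal
    R_t = np.sign(R) * np.maximum(np.abs(R) - tau, 0.0)
    np.fill_diagonal(R_t, np.diag(R))
    return low_rank + R_t
```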

  13. A new time-adaptive discrete bionic wavelet transform for enhancing speech from adverse noise environment

    NASA Astrophysics Data System (ADS)

    Palaniswamy, Sumithra; Duraisamy, Prakash; Alam, Mohammad Showkat; Yuan, Xiaohui

    2012-04-01

    Automatic speech processing systems are widely used in everyday life such as mobile communication, speech and speaker recognition, and for assisting the hearing impaired. In speech communication systems, the quality and intelligibility of speech is of utmost importance for ease and accuracy of information exchange. To obtain an intelligible speech signal and one that is more pleasant to listen to, noise reduction is essential. In this paper a new Time Adaptive Discrete Bionic Wavelet Thresholding (TADBWT) scheme is proposed. The proposed technique uses the Daubechies mother wavelet to achieve better enhancement of speech from additive non-stationary noises which occur in real life, such as street noise and factory noise. Due to the integration of a human auditory system model into the wavelet transform, the bionic wavelet transform (BWT) has great potential for speech enhancement, which may lead to a new path in speech processing. In the proposed technique, the discrete BWT is first applied to noisy speech to derive TADBWT coefficients. Then the adaptive nature of the BWT is captured by introducing a time-varying linear factor which updates the coefficients at each scale over time. This approach has shown better performance than the existing algorithms at lower input SNR due to modified soft, level-dependent thresholding on time-adaptive coefficients. The objective and subjective test results confirmed the competency of the TADBWT technique. The effectiveness of the proposed technique is also evaluated for a speaker recognition task under a noisy environment. The recognition results show that the TADBWT technique yields better performance when compared to alternative methods, specifically at lower input SNR.

  14. Image segmentation algorithm based on improved PCNN

    NASA Astrophysics Data System (ADS)

    Chen, Hong; Wu, Chengdong; Yu, Xiaosheng; Wu, Jiahui

    2017-11-01

    A modified simplified Pulse Coupled Neural Network (PCNN) model is proposed in this article based on the simplified PCNN. Some work has been done to enrich this model, such as imposing restrictions on the inputs and improving the linking inputs and internal activity of the PCNN. A self-adaptive method for setting the linking coefficient and the threshold decay time constant is also proposed. Finally, we applied an image segmentation algorithm based on this simplified PCNN model and PSO to five pictures. Experimental results demonstrate that this image segmentation algorithm performs much better than the SPCNN and Otsu methods.

  15. Methods of alleviation of ionospheric scintillation effects on digital communications

    NASA Technical Reports Server (NTRS)

    Massey, J. L.

    1974-01-01

    The degradation of the performance of digital communication systems because of ionospheric scintillation effects can be reduced either by diversity techniques or by coding. The effectiveness of traditional space-diversity, frequency-diversity and time-diversity techniques is reviewed and design considerations isolated. Time-diversity signaling is then treated as an extremely simple form of coding. More advanced coding methods, such as diffuse threshold decoding and burst-trapping decoding, which appear attractive in combatting scintillation effects are discussed and design considerations noted. Finally, adaptive coding techniques appropriate when the general state of the channel is known are discussed.

  16. Pulmonary airways tree segmentation from CT examinations using adaptive volume of interest

    NASA Astrophysics Data System (ADS)

    Park, Sang Cheol; Kim, Won Pil; Zheng, Bin; Leader, Joseph K.; Pu, Jiantao; Tan, Jun; Gur, David

    2009-02-01

    Airways tree segmentation is an important step in quantitatively assessing the severity of and changes in several lung diseases such as chronic obstructive pulmonary disease (COPD), asthma, and cystic fibrosis. It can also be used in guiding bronchoscopy. The purpose of this study is to develop an automated scheme for segmenting the airways tree structure depicted on chest CT examinations. After lung volume segmentation, the scheme defines the first cylinder-like volume of interest (VOI) using a series of images depicting the trachea. The scheme then iteratively defines and adds subsequent VOIs using a region growing algorithm combined with adaptively determined thresholds in order to trace possible sections of airways located inside the combined VOI in question. The airway tree segmentation process is automatically terminated after the scheme assesses all defined VOIs in the iteratively assembled VOI list. In this preliminary study, ten CT examinations with 1.25 mm section thickness and two different CT image reconstruction kernels ("bone" and "standard") were selected and used to test the proposed airways tree segmentation scheme. The experimental results showed that (1) adopting this approach effectively prevented the scheme from infiltrating into the parenchyma, (2) the proposed method reasonably accurately segmented the airways trees with a lower false-positive identification rate as compared with other previously reported schemes that are based on 2-D image segmentation and data analyses, and (3) the proposed adaptive, iterative threshold selection method for the region growing step in each identified VOI enables the scheme to segment the airways trees reliably to the 4th generation in this limited dataset with successful segmentation up to the 5th generation in a fraction of the airways tree branches.
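
    The record's scheme works VOI by VOI; the sketch below compresses the core idea into a whole-volume form: raise a region-growing HU threshold adaptively until the segmented volume suddenly "explodes" (leakage into parenchyma), then back off. The seed, HU values, and leak criterion are illustrative assumptions.

```python
import numpy as np
from collections import deque

def grow_airway(vol, seed, start_hu=-950.0, step_hu=10.0, max_hu=-800.0,
                leak_ratio=2.0):
    """Region growing from a tracheal seed with an adaptively raised threshold:
    increase the HU cutoff until the segmented volume explodes, then keep the
    last safe threshold."""
    prev_mask, prev_size = None, 1
    hu = start_hu
    while hu <= max_hu:
        mask = _grow(vol, seed, hu)
        if prev_mask is not None and mask.sum() > leak_ratio * prev_size:
            return prev_mask, hu - step_hu  # leakage detected: back off
        prev_mask, prev_size = mask, mask.sum()
        hu += step_hu
    return prev_mask, hu - step_hu

def _grow(vol, seed, thr):
    """6-connected flood fill of voxels at or below the HU threshold."""
    mask = np.zeros(vol.shape, dtype=bool)
    q = deque([seed])
    mask[seed] = True
    while q:
        z, y, x = q.popleft()
        for dz, dy, dx in ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)):
            n = (z + dz, y + dy, x + dx)
            if all(0 <= n[i] < vol.shape[i] for i in range(3)) \
               and not mask[n] and vol[n] <= thr:
                mask[n] = True
                q.append(n)
    return mask
```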

  17. An Unsupervised Approach for Extraction of Blood Vessels from Fundus Images.

    PubMed

    Dash, Jyotiprava; Bhoi, Nilamani

    2018-04-26

    Pathological disorders may arise from small changes in retinal blood vessels and may later lead to blindness. Hence, the accurate segmentation of blood vessels is becoming a challenging task for pathological analysis. This paper offers an unsupervised recursive method for the extraction of blood vessels from ophthalmoscope images. First, a vessel-enhanced image is generated with the help of gamma correction and contrast-limited adaptive histogram equalization (CLAHE). Next, the vessels are extracted iteratively by applying an adaptive thresholding technique. At last, the final vessel-segmented image is produced by applying a morphological cleaning operation. Evaluations are conducted on the publicly available digital retinal images for vessel extraction (DRIVE) and Child Heart And Health Study in England (CHASE_DB1) databases using nine different measurements. The proposed method achieves average accuracies of 0.957 and 0.952 on the DRIVE and CHASE_DB1 databases respectively.
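
    A single-pass sketch of the enhance-threshold-clean pipeline using scikit-image; the paper applies its adaptive threshold recursively, and the gamma, block size, offset, and minimum object size below are placeholder values.

```python
import numpy as np
from skimage import exposure, filters, morphology

def extract_vessels(green_channel):
    """green_channel : 2-D float image scaled to [0, 1] (fundus green channel).
    Returns a boolean vessel mask."""
    img = exposure.adjust_gamma(green_channel, gamma=1.2)  # gamma correction
    img = exposure.equalize_adapthist(img)                 # CLAHE enhancement
    # local (adaptive) threshold; vessels are darker than the background
    t = filters.threshold_local(img, block_size=31, offset=0.01)
    vessels = img < t
    # morphological cleaning: drop small spurious components
    return morphology.remove_small_objects(vessels, min_size=64)
```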

  18. Feasibility and performance of an adaptive contrast-oriented FDG PET/CT quantification technique for global disease assessment of malignant pleural mesothelioma and a brief review of the literature.

    PubMed

    Marin-Oyaga, Victor A; Salavati, Ali; Houshmand, Sina; Pasha, Ahmed Khurshid; Gharavi, Mohammad; Saboury, Babak; Basu, Sandip; Torigian, Drew A; Alavi, Abass

    2015-01-01

    Treatment of malignant pleural mesothelioma (MPM) remains very challenging. Assessment of response to treatment is necessary for modifying treatment and using new drugs. Global disease assessment (GDA), which applies image processing methods to extract more information from positron emission tomography (PET) images, may provide reliable information. In this study we show the feasibility of this method of semi-quantification in patients with mesothelioma, and compare it with the conventional methods. We also present a review of the literature about this topic. Nineteen subjects with histologically proven MPM who had undergone fluorine-18-fluorodeoxyglucose PET/computed tomography ((18)F-FDG PET/CT) before and after treatment were included in this study. An adaptive contrast-oriented thresholding algorithm was used for the image analysis and semi-quantification. Metabolic tumor volume (MTV), maximum and mean standardized uptake values (SUVmax, SUVmean) and total lesion glycolysis (TLG) were calculated for each region of interest. The global tumor glycolysis (GTG) was obtained by summing up all TLG. Treatment response was assessed by the European Organisation for Research and Treatment of Cancer (EORTC) criteria and the changes in GTG. Agreement between the GDA and conventional methods was also determined. In patients with progressive disease based on EORTC criteria, GTG showed an increase of 150.7, but in patients with stable or partial response, GTG showed a decrease of 433.1. The SUVmax of patients before treatment was 5.95 (SD: 2.93) and after the treatment it increased to 6.38 (SD: 3.19). Overall concordance of the conventional method with the GDA method was 57%. Concordance was 44% for progression of disease based on the conventional method, 85% for stable disease and 33% for partial response; discordance was 55%, 14% and 66%, respectively. The adaptive contrast-oriented thresholding algorithm is a promising method to quantify whole-tumor glycolysis in patients with mesothelioma. We are able to assess the total metabolic lesion volume, lesion glycolysis, SUVmax, tumor SUVmean and GTG for this particular tumor. We were also able to demonstrate the potential use of this technique in the monitoring of treatment response. More studies comparing this technique with conventional and other global disease assessment methods are needed in order to clarify its role in the assessment of treatment response and prognosis of these patients.

  19. Modelling of extreme rainfall events in Peninsular Malaysia based on annual maximum and partial duration series

    NASA Astrophysics Data System (ADS)

    Zin, Wan Zawiah Wan; Shinyie, Wendy Ling; Jemain, Abdul Aziz

    2015-02-01

    In this study, two series of data for extreme rainfall events are generated based on the Annual Maximum and Partial Duration methods, derived from 102 rain-gauge stations in Peninsular Malaysia from 1982-2012. To determine the optimal threshold for each station, several requirements must be satisfied, and the Adapted Hill estimator is employed for this purpose. A semi-parametric bootstrap is then used to estimate the mean square error (MSE) of the estimator at each threshold, and the optimal threshold is selected based on the smallest MSE. The mean annual frequency is also checked to ensure that it lies in the range of one to five, and the resulting data are de-clustered to ensure independence. The two data series are then fitted to the Generalized Extreme Value and Generalized Pareto distributions for the annual maximum and partial duration series, respectively. The parameter estimation methods used are the Maximum Likelihood and the L-moment methods. Two goodness-of-fit tests are then used to evaluate the best-fitted distribution. The results showed that the Partial Duration series with the Generalized Pareto distribution and Maximum Likelihood parameter estimation provides the best representation of extreme rainfall events in Peninsular Malaysia for the majority of the stations studied. Based on these findings, several return values are also derived and spatial maps are constructed to identify the distribution characteristics of extreme rainfall in Peninsular Malaysia.
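
    Once a station's optimal threshold is chosen, the partial-duration fit reduces to a peaks-over-threshold fit of the Generalized Pareto distribution; below is a minimal SciPy sketch (maximum likelihood only, omitting the Hill-estimator threshold search, the bootstrap, and the L-moment alternative).

```python
import numpy as np
from scipy import stats

def fit_pot(series, threshold):
    """Fit a Generalized Pareto distribution to the exceedances over the
    chosen threshold (peaks-over-threshold), with the location fixed at zero."""
    excess = series[series > threshold] - threshold
    shape, _, scale = stats.genpareto.fit(excess, floc=0.0)
    return shape, scale

def return_level(shape, scale, threshold, rate, T_years):
    """T-year return level from the fitted GPD (valid for shape != 0);
    rate is the mean number of exceedances per year after de-clustering."""
    return threshold + scale / shape * ((rate * T_years) ** shape - 1.0)
```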

  20. Fast online generalized multiscale finite element method using constraint energy minimization

    NASA Astrophysics Data System (ADS)

    Chung, Eric T.; Efendiev, Yalchin; Leung, Wing Tat

    2018-02-01

    Local multiscale methods often construct multiscale basis functions in the offline stage without taking into account input parameters, such as source terms, boundary conditions, and so on. These basis functions are then used in the online stage with a specific input parameter to solve the global problem at a reduced computational cost. Recently, online approaches have been introduced, where multiscale basis functions are adaptively constructed in some regions to reduce the error significantly. In multiscale methods, it is desired to have only 1-2 iterations to reduce the error to a desired threshold. Using Generalized Multiscale Finite Element Framework [10], it was shown that by choosing sufficient number of offline basis functions, the error reduction can be made independent of physical parameters, such as scales and contrast. In this paper, our goal is to improve this. Using our recently proposed approach [4] and special online basis construction in oversampled regions, we show that the error reduction can be made sufficiently large by appropriately selecting oversampling regions. Our numerical results show that one can achieve a three order of magnitude error reduction, which is better than our previous methods. We also develop an adaptive algorithm and enrich in selected regions with large residuals. In our adaptive method, we show that the convergence rate can be determined by a user-defined parameter and we confirm this by numerical simulations. The analysis of the method is presented.

  1. Efficient segmentation of 3D fluoroscopic datasets from mobile C-arm

    NASA Astrophysics Data System (ADS)

    Styner, Martin A.; Talib, Haydar; Singh, Digvijay; Nolte, Lutz-Peter

    2004-05-01

    The emerging mobile fluoroscopic 3D technology linked with a navigation system combines the advantages of CT-based and C-arm-based navigation. The intra-operative, automatic segmentation of 3D fluoroscopy datasets enables the combined visualization of surgical instruments and anatomical structures for enhanced planning, surgical eye-navigation and landmark digitization. We performed a thorough evaluation of several segmentation algorithms using a large set of data from different anatomical regions and man-made phantom objects. The analyzed segmentation methods include automatic thresholding, morphological operations, an adapted region growing method and an implicit 3D geodesic snake method. In regard to computational efficiency, all methods performed within acceptable limits on a standard desktop PC (30 s to 5 min). In general, the best results were obtained with datasets from long bones, followed by extremities. The segmentations of spine, pelvis and shoulder datasets were generally of poorer quality. As expected, the threshold-based methods produced the worst results. The combined thresholding and morphological operations methods were considered appropriate for a smaller set of clean images. The region growing method performed generally much better in regard to computational efficiency and segmentation correctness, especially for datasets of joints, and lumbar and cervical spine regions. The less efficient implicit snake method was able to additionally remove wrongly segmented skin tissue regions. This study presents a step towards efficient intra-operative segmentation of 3D fluoroscopy datasets, but there is room for improvement. Next, we plan to study model-based approaches for datasets from the knee and hip joint region, which would then be applied to all anatomical regions in our continuing development of an ideal segmentation procedure for 3D fluoroscopic images.

  2. Population control methods in stochastic extinction and outbreak scenarios.

    PubMed

    Segura, Juan; Hilker, Frank M; Franco, Daniel

    2017-01-01

    Adaptive limiter control (ALC) and adaptive threshold harvesting (ATH) are two related control methods that have been shown to stabilize fluctuating populations. Large variations in population abundance can threaten the constancy and the persistence stability of ecological populations, which may impede the success and efficiency of managing natural resources. Here, we consider population models that include biological mechanisms characteristic for causing extinctions on the one hand and pest outbreaks on the other hand. These models include Allee effects and the impact of natural enemies (as is typical of forest defoliating insects). We study the impacts of noise and different levels of biological parameters in three extinction and two outbreak scenarios. Our results show that ALC and ATH have an effect on extinction and outbreak risks only for sufficiently large control intensities. Moreover, there is a clear disparity between the two control methods: in the extinction scenarios, ALC can be effective and ATH can be counterproductive, whereas in the outbreak scenarios the situation is reversed, with ATH being effective and ALC being potentially counterproductive.
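
    The control rules themselves are simple to state. In one common formulation, ALC restocks the population up to a floor set as a fraction of the previous abundance, while ATH harvests it down to a ceiling set as a multiple of the previous abundance. The sketch below applies both to a noisy Ricker map; all parameter values are illustrative assumptions, not those of the paper.

    ```python
    # Illustrative sketch of ALC and ATH on a noisy Ricker map. ALC restocks
    # up to an adaptive floor (fraction c of the previous abundance); ATH
    # harvests down to an adaptive ceiling (multiple h of it).
    import numpy as np

    def ricker(x, r=2.5, k=100.0):
        """Ricker map: overcompensatory density dependence."""
        return x * np.exp(r * (1.0 - x / k))

    def simulate(control=None, steps=200, x0=50.0, c=0.6, h=1.3, seed=0):
        rng = np.random.default_rng(seed)
        x, series = x0, []
        for _ in range(steps):
            x_new = ricker(x) * rng.lognormal(0.0, 0.1)   # noisy dynamics
            if control == "ALC" and x_new < c * x:
                x_new = c * x          # restock up to the adaptive floor
            elif control == "ATH" and x_new > h * x:
                x_new = h * x          # harvest down to the adaptive ceiling
            series.append(x_new)
            x = x_new
        return np.asarray(series)

    for mode in (None, "ALC", "ATH"):
        s = simulate(mode)
        print(mode, "fluctuation index:", round(s.std() / s.mean(), 2))
    ```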

  3. A STATISTICAL MODELING METHODOLOGY FOR THE DETECTION, QUANTIFICATION, AND PREDICTION OF ECOLOGICAL THRESHOLDS

    EPA Science Inventory

    This study will provide a general methodology for integrating threshold information from multiple species ecological metrics, allow for prediction of changes of alternative stable states, and provide a risk assessment tool that can be applied to adaptive management. The integr...

  4. Free testosterone as marker of adaptation to medium-intensive exercise.

    PubMed

    Shkurnikov, M U; Donnikov, A E; Akimov, E B; Sakharov, D A; Tonevitsky, A G

    2008-09-01

    A 4-week study of the adaptation reserves of the body was carried out during medium-intensity exercise (medium-intensity training: 60-80% of the anaerobic metabolism threshold). Two groups of athletes were singled out by the results of pulsometry analysis: one with less than 20% of work duration above 80% of the anaerobic metabolism threshold, and one with more than 20% of work duration above that level. No appreciable differences between the concentrations of total testosterone, growth hormone, and cortisol before and after exercise were detected in the groups with different percentages of anaerobic work duration. In group 1, the concentration of free testosterone did not change throughout the period of observation in comparison with the levels before training. In group 2, the level of free testosterone increased in comparison with the basal level: from 0.61+/-0.12 nmol/liter at the end of week 1 to 0.98+/-0.11 nmol/liter at the end of week 4 (p<0.01). The results indicate that the level of free testosterone can be used for evaluating the degree of an athlete's adaptation to medium-intensity exercise.

  5. Landscape genomics of Sphaeralcea ambigua in the Mojave Desert: a multivariate, spatially-explicit approach to guide ecological restoration

    USGS Publications Warehouse

    Shryock, Daniel F.; Havrilla, Caroline A.; DeFalco, Lesley; Esque, Todd C.; Custer, Nathan; Wood, Troy E.

    2015-01-01

    Local adaptation influences plant species' responses to climate change and their performance in ecological restoration. Fine-scale physiological or phenological adaptations that direct demographic processes may drive intraspecific variability when baseline environmental conditions change. Landscape genomics characterizes adaptive differentiation by identifying environmental drivers of adaptive genetic variability and mapping the associated landscape patterns. We applied such an approach to Sphaeralcea ambigua, an important restoration plant in the arid southwestern United States, by analyzing variation at 153 amplified fragment length polymorphism loci in the context of environmental gradients separating 47 Mojave Desert populations. We identified 37 potentially adaptive loci through a combination of genome scan approaches. We then used a generalized dissimilarity model (GDM) to relate variability in potentially adaptive loci to spatial gradients in temperature, precipitation, and topography. We identified non-linear thresholds in loci frequencies driven by summer maximum temperature and water stress, along with continuous variation corresponding to temperature seasonality. Two GDM-based approaches for mapping predicted patterns of local adaptation are compared. Additionally, we assess uncertainty in spatial interpolations through a novel spatial bootstrapping approach. Our study presents robust, accessible methods for deriving spatially-explicit models of adaptive genetic variability in non-model species that will inform climate change modelling and ecological restoration.

  6. Normal Perceptual Sensitivity Arising From Weakly Reflective Cone Photoreceptors

    PubMed Central

    Bruce, Kady S.; Harmening, Wolf M.; Langston, Bradley R.; Tuten, William S.; Roorda, Austin; Sincich, Lawrence C.

    2015-01-01

    Purpose To determine the light sensitivity of poorly reflective cones observed in retinas of normal subjects, and to establish a relationship between cone reflectivity and perceptual threshold. Methods Five subjects (four male, one female) with normal vision were imaged longitudinally (7–26 imaging sessions, representing 82–896 days) using adaptive optics scanning laser ophthalmoscopy (AOSLO) to monitor cone reflectance. Ten cones with unusually low reflectivity, as well as 10 normally reflective cones serving as controls, were targeted for perceptual testing. Cone-sized stimuli were delivered to the targeted cones and luminance increment thresholds were quantified. Thresholds were measured three to five times per session for each cone in the 10 pairs, all located 2.2 to 3.3° from the center of gaze. Results Compared with other cones in the same retinal area, three of 10 monitored dark cones were persistently poorly reflective, while seven occasionally manifested normal reflectance. Tested psychophysically, all 10 dark cones had thresholds comparable with those from normally reflecting cones measured concurrently (P = 0.49). The variation observed in dark cone thresholds also matched the wide variation seen in a large population (n = 56 cone pairs, six subjects) of normal cones; in the latter, no correlation was found between cone reflectivity and threshold (P = 0.0502). Conclusions Low cone reflectance cannot be used as a reliable indicator of cone sensitivity to light in normal retinas. To improve assessment of early retinal pathology, other diagnostic criteria should be employed along with imaging and cone-based microperimetry. PMID:26193919

  7. Fission gas bubble identification using MATLAB's image processing toolbox

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Collette, R.; King, J.; Keiser, Jr., D.

    Automated image processing routines have the potential to aid in the fuel performance evaluation process by eliminating bias in human judgment that may vary from person-to-person or sample-to-sample. This study presents several MATLAB-based image analysis routines designed for fission gas void identification in post-irradiation examination of uranium molybdenum (U–Mo) monolithic-type plate fuels. Frequency domain filtration, enlisted as a pre-processing technique, can eliminate artifacts from the image without compromising the critical features of interest. This process is coupled with a bilateral filter, an edge-preserving noise removal technique aimed at preparing the image for optimal segmentation. Adaptive thresholding proved to be the most consistent gray-level feature segmentation technique for U–Mo fuel microstructures. The Sauvola adaptive threshold technique segments the image based on histogram weighting factors in stable contrast regions and local statistics in variable contrast regions. Once all processing is complete, the algorithm outputs the total fission gas void count, the mean void size, and the average porosity. The final results demonstrate an ability to extract fission gas void morphological data faster, more consistently, and at least as accurately as manual segmentation methods.
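
    The Sauvola rule computes a local threshold from the mean m and standard deviation s in a window, T = m(1 + k(s/R - 1)). Below is a minimal sketch of Sauvola-based void extraction with scikit-image, reporting the same three outputs named above (void count, mean void size, porosity); the window size and k are illustrative, not the study's tuned values.

    ```python
    # Sauvola local thresholding for void segmentation, with the three outputs
    # named in the abstract: void count, mean void size, porosity.
    import numpy as np
    from skimage.filters import threshold_sauvola
    from skimage.measure import label, regionprops

    def fission_void_stats(image, window_size=25, k=0.2):
        t = threshold_sauvola(image, window_size=window_size, k=k)
        voids = image < t                  # voids are darker than the local threshold
        regions = regionprops(label(voids))
        areas = [r.area for r in regions]
        mean_size = float(np.mean(areas)) if areas else 0.0
        porosity = voids.mean()            # fraction of void pixels
        return len(areas), mean_size, porosity
    ```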

  8. Fission gas bubble identification using MATLAB's image processing toolbox

    DOE PAGES

    Collette, R.; King, J.; Keiser, Jr., D.; ...

    2016-06-08

    Automated image processing routines have the potential to aid in the fuel performance evaluation process by eliminating bias in human judgment that may vary from person-to-person or sample-to-sample. This study presents several MATLAB-based image analysis routines designed for fission gas void identification in post-irradiation examination of uranium molybdenum (U–Mo) monolithic-type plate fuels. Frequency domain filtration, enlisted as a pre-processing technique, can eliminate artifacts from the image without compromising the critical features of interest. This process is coupled with a bilateral filter, an edge-preserving noise removal technique aimed at preparing the image for optimal segmentation. Adaptive thresholding proved to be the most consistent gray-level feature segmentation technique for U–Mo fuel microstructures. The Sauvola adaptive threshold technique segments the image based on histogram weighting factors in stable contrast regions and local statistics in variable contrast regions. Once all processing is complete, the algorithm outputs the total fission gas void count, the mean void size, and the average porosity. The final results demonstrate an ability to extract fission gas void morphological data faster, more consistently, and at least as accurately as manual segmentation methods.

  9. Is the sky the limit? On the expansion threshold of a species' range.

    PubMed

    Polechová, Jitka

    2018-06-15

    More than 100 years after Grigg's influential analysis of species' borders, the causes of limits to species' ranges still represent a puzzle that has never been understood with clarity. The topic has become especially important recently as many scientists have become interested in the potential for species' ranges to shift in response to climate change, and yet nearly all of those studies fail to recognise or incorporate evolutionary genetics in a way that relates to theoretical developments. I show that range margins can be understood based on just two measurable parameters: (i) the fitness cost of dispersal, a measure of environmental heterogeneity, and (ii) the strength of genetic drift, which reduces genetic diversity. Together, these two parameters define an 'expansion threshold': adaptation fails when genetic drift reduces genetic diversity below that required for adaptation to a heterogeneous environment. When the key parameters drop below this expansion threshold locally, a sharp range margin forms. When they drop below this threshold throughout the species' range, adaptation collapses everywhere, resulting in either extinction or formation of a fragmented metapopulation. Because the effects of dispersal differ fundamentally with dimension, the second parameter, the strength of genetic drift, is qualitatively different in two-dimensional habitats compared to a linear habitat. In two-dimensional habitats, genetic drift becomes effectively independent of selection. It decreases with 'neighbourhood size', the number of individuals accessible by dispersal within one generation. Moreover, in contrast to earlier predictions, which neglected evolution of genetic variance and/or stochasticity in two dimensions, dispersal into small marginal populations aids adaptation. This is because the reduction of both genetic and demographic stochasticity has a stronger effect than the cost of dispersal through increased maladaptation. The expansion threshold thus provides a novel, theoretically justified, and testable prediction for formation of the range margin and collapse of the species' range.

  10. Six weeks of a polarized training-intensity distribution leads to greater physiological and performance adaptations than a threshold model in trained cyclists.

    PubMed

    Neal, Craig M; Hunter, Angus M; Brennan, Lorraine; O'Sullivan, Aifric; Hamilton, D Lee; De Vito, Giuseppe; Galloway, Stuart D R

    2013-02-15

    This study was undertaken to investigate physiological adaptation with two endurance-training periods differing in intensity distribution. In a randomized crossover fashion, separated by 4 wk of detraining, 12 male cyclists completed two 6-wk training periods: 1) a polarized model [6.4 (±1.4 SD) h/wk; 80%, 0%, and 20% of training time in low-, moderate-, and high-intensity zones, respectively]; and 2) a threshold model [7.5 (±2.0 SD) h/wk; 57%, 43%, and 0% training-intensity distribution]. Before and after each training period, following 2 days of diet and exercise control, fasted skeletal muscle biopsies were obtained for mitochondrial enzyme activity and monocarboxylate transporter (MCT) 1 and 4 expression, and morning first-void urine samples were collected for NMR spectroscopy-based metabolomics analysis. Endurance performance (40-km time trial), incremental exercise, peak power output (PPO), and high-intensity exercise capacity (95% maximal work rate to exhaustion) were also assessed. Endurance performance, PPO, lactate threshold (LT), MCT4, and high-intensity exercise capacity all increased over both training periods. Improvements were greater following the polarized model than the threshold model for PPO [mean (±SE) change of 8 (±2)% vs. 3 (±1)%, P < 0.05], LT [9 (±3)% vs. 2 (±4)%, P < 0.05], and high-intensity exercise capacity [85 (±14)% vs. 37 (±14)%, P < 0.05]. No changes in mitochondrial enzyme activities or MCT1 were observed following training. A significant multilevel partial least squares-discriminant analysis model was obtained for the threshold model but not the polarized model in the metabolomics analysis. A polarized training distribution results in greater systemic adaptation over 6 wk in already well-trained cyclists. Markers of muscle metabolic adaptation are largely unchanged, but metabolomics markers suggest different cellular metabolic stress that requires further investigation.

  11. Adaptive 84.44-190 Mbit/s phosphor-LED wireless communication utilizing no blue filter at practical transmission distance.

    PubMed

    Yeh, C H; Chow, C W; Chen, H Y; Chen, J; Liu, Y L

    2014-04-21

    We propose and experimentally demonstrate a white-light phosphor-LED visible light communication (VLC) system with an adaptive 84.44 to 190 Mbit/s 16-quadrature-amplitude-modulation (QAM) orthogonal-frequency-division-multiplexing (OFDM) signal utilizing a bit-loading method. Here, the optimal analog pre-equalization design is applied at the LED transmitter (Tx) side and no blue filter is used at the receiver (Rx) side. Hence, the ~1 MHz modulation bandwidth of the phosphor-LED can be extended to 30 MHz. In addition, measured bit error rates (BERs) below 3.8 × 10^-3 [the forward error correction (FEC) threshold] at the different measured data rates can be achieved at practical transmission distances of 0.75 to 2 m.

  12. Policy tree optimization for adaptive management of water resources systems

    NASA Astrophysics Data System (ADS)

    Herman, Jonathan; Giuliani, Matteo

    2017-04-01

    Water resources systems must cope with irreducible uncertainty in supply and demand, requiring policy alternatives capable of adapting to a range of possible future scenarios. Recent studies have developed adaptive policies based on "signposts" or "tipping points" that suggest the need to update the policy. However, there remains a need for a general method to optimize the choice of the signposts to be used and their threshold values. This work contributes a general framework and computational algorithm to design adaptive policies as a tree structure (i.e., a hierarchical set of logical rules) using a simulation-optimization approach based on genetic programming. Given a set of feature variables (e.g., reservoir level, inflow observations, inflow forecasts), the resulting policy defines both the optimal reservoir operations and the conditions under which such operations should be triggered. We demonstrate the approach using Folsom Reservoir (California) as a case study, in which operating policies must balance the risk of both floods and droughts. Numerical results show that the tree-based policies outperform those designed via dynamic programming. In addition, they display good adaptive capacity to a changing climate, successfully adapting the reservoir operations across a large set of uncertain climate scenarios.
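
    A tree policy of this kind is easy to represent and evaluate directly. The sketch below shows the data structure: internal nodes test a feature against a threshold and leaves prescribe a release. The feature names, thresholds, and releases are invented for illustration; in the paper both the structure and the values are found by genetic programming against simulation objectives.

    ```python
    # Sketch of a tree-structured operating policy: internal nodes test a
    # feature against a threshold, leaves prescribe a release decision.
    from dataclasses import dataclass
    from typing import Union

    @dataclass
    class Node:
        feature: str                  # e.g. "storage", "inflow_forecast" (hypothetical)
        threshold: float
        low: Union["Node", float]     # subtree or leaf release (float)
        high: Union["Node", float]

    def evaluate(node, state):
        branch = node.low if state[node.feature] <= node.threshold else node.high
        return branch if isinstance(branch, float) else evaluate(branch, state)

    # Hypothetical policy: spill when storage is high, hedge in droughts.
    policy = Node("storage", 400.0,
                  low=Node("inflow_forecast", 50.0, low=20.0, high=35.0),
                  high=120.0)
    print(evaluate(policy, {"storage": 310.0, "inflow_forecast": 42.0}))  # -> 20.0
    ```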

  13. On-line pulse control for structural and mechanical systems

    NASA Technical Reports Server (NTRS)

    Udwadia, F. E.; Garba, J. A.; Tabaie, S.

    1981-01-01

    This paper studies the feasibility of using open-loop adaptive on-line pulse control for limiting the response of large linear multidegree of freedom systems subjected to general dynamic loading environments. Pulses of short durations are used to control the system when the system response exceeds a given threshold level. The pulse magnitudes are obtained in closed form, leading to large computational efficiencies when compared with optimal control theoretic methods. The technique is illustrated for a structural system subjected to earthquake-like base excitations.
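
    The control law reduces to a simple rule: monitor the response, and when it crosses a threshold, apply a short corrective pulse sized from the current state. The sketch below applies this idea to a noise-driven damped oscillator; the pulse gain, threshold, and system parameters are illustrative assumptions rather than the paper's closed-form pulse magnitudes.

    ```python
    # Threshold-triggered pulse control on a damped oscillator: when the
    # displacement exceeds the threshold, apply a corrective pulse opposing
    # the current velocity. All parameter values are illustrative.
    import numpy as np

    def simulate_pulse_control(steps=5000, dt=0.01, wn=2.0, zeta=0.02,
                               threshold=0.5, gain=2.0):
        rng = np.random.default_rng(1)
        x, v, peak = 0.0, 0.0, 0.0
        for _ in range(steps):
            f = rng.normal(0.0, 1.0)                          # broadband excitation
            pulse = -gain * v if abs(x) > threshold else 0.0  # corrective pulse
            a = f + pulse - 2.0 * zeta * wn * v - wn**2 * x
            v += a * dt
            x += v * dt
            peak = max(peak, abs(x))
        return peak

    print("peak |x| with control:", round(simulate_pulse_control(), 3))
    ```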

  14. Determine Optimal Stimulus Amplitude for Using Vestibular Stochastic Stimulation to Improve Balance Function

    NASA Technical Reports Server (NTRS)

    Goel, R.; Kofman, I.; DeDios, Y. E.; Jeevarajan, J.; Stepanyan, V.; Nair, M.; Congdon, S.; Fregia, M.; Cohen, H.; Bloomberg, J.J.; hide

    2015-01-01

    Sensorimotor changes such as postural and gait instabilities can affect the functional performance of astronauts when they transition across different gravity environments. We are developing a method, based on stochastic resonance (SR), to enhance information transfer by applying non-zero levels of external noise on the vestibular system (vestibular stochastic resonance, VSR). Our previous work has shown the advantageous effects of VSR in a balance task of standing on an unstable surface [1]. This technique to improve detection of vestibular signals uses a stimulus delivery system that provides imperceptibly low levels of white-noise-based binaural bipolar electrical stimulation of the vestibular system. The goal of this project is to determine optimal levels of stimulation for SR applications by using a defined vestibular threshold of motion detection. A series of experiments were carried out to determine a robust paradigm for identifying a vestibular threshold that can then be used to recommend optimal stimulation levels for sensorimotor adaptability (SA) training applications customized to each crewmember. The amplitude of stimulation used in VSR applications has varied across studies in the literature, for example 60% of nociceptive stimulus thresholds [2]. We compared subjects' perceptual threshold with that obtained from two measures of body sway. Each test session was 463 s long and consisted of several 15 s long sinusoidal stimuli, at different current amplitudes (0-2 mA), interspersed with 20-20.5 s periods of no stimulation. Subjects sat on a chair with their eyes closed and had to report their perception of motion through a joystick. A force plate underneath the chair recorded medio-lateral shear forces and roll moments. Comparison of the thresholds of motion detection obtained from joystick data versus body sway suggests that perceptual thresholds were significantly lower. In the balance task, subjects stood on an unstable surface and had to maintain balance, and the stimulation was administered at 20-400% of each subject's vestibular threshold. The optimal stimulation amplitude was determined as that at which balance performance was best compared to control (no stimulation). Preliminary results show that, in general, using stimulation amplitudes at 40-60% of the perceptual motion threshold significantly improved balance performance. We hypothesize that VSR stimulation will act synergistically with SA training to improve adaptability by increasing utilization of vestibular information, and therefore will help us to optimize and personalize an SA countermeasure prescription. This combination may help to significantly reduce the number of days required to recover functional performance to preflight levels after long-duration spaceflight.

  15. Validation of the minimal citrate tube fill volume for routine coagulation tests on ACL TOP 500 CTS®.

    PubMed

    Ver Elst, K; Vermeiren, S; Schouwers, S; Callebaut, V; Thomson, W; Weekx, S

    2013-12-01

    CLSI recommends a minimal citrate tube fill volume of 90%. A validation protocol with clinical and analytical components was set up to determine the tube fill threshold for the international normalized ratio of prothrombin time (PT-INR), activated partial thromboplastin time (aPTT) and fibrinogen. Citrated coagulation samples from 16 healthy donors and eight patients receiving vitamin K antagonists (VKA) were evaluated. Eighty-nine tubes were filled to varying volumes of >50%. Coagulation tests were performed on the ACL TOP 500 CTS®. A Receiver Operating Characteristic (ROC) plot, with total error (TE) and critical difference (CD) as possible acceptance criteria, was used to determine the fill threshold. ROC was most accurate with CD for PT-INR and TE for aPTT, resulting in thresholds of 63% for PT and 80% for aPTT. By an adapted ROC approach, based on setting the threshold at the point of 100% sensitivity with maximum specificity, CD was best for PT and TE for aPTT, resulting in thresholds of 73% for PT and 90% for aPTT. For fibrinogen, the method was only valid with the TE criterion, at a 63% fill volume. In our study, we validated minimal citrate tube fill volumes of 73%, 90% and 63% for PT-INR, aPTT and fibrinogen, respectively. © 2013 John Wiley & Sons Ltd.

  16. Retained energy-based coding for EEG signals.

    PubMed

    Bazán-Prieto, Carlos; Blanco-Velasco, Manuel; Cárdenas-Barrera, Julián; Cruz-Roldán, Fernando

    2012-09-01

    The recent use of long-term records in electroencephalography is becoming more frequent due to its diagnostic potential and the growth of novel signal processing methods that deal with these types of recordings. In these cases, the considerable volume of data to be managed makes compression necessary to reduce the bit rate for transmission and storage applications. In this paper, a new compression algorithm specifically designed to encode electroencephalographic (EEG) signals is proposed. Cosine-modulated filter banks are used to decompose the EEG signal into a set of subbands well adapted to the frequency bands characteristic of the EEG. Given that no regular pattern may be easily extracted from the signal in the time domain, a thresholding-based method is applied to quantize samples. The method of retained energy is designed for efficiently computing the threshold in the decomposition domain, which, at the same time, allows the quality of the reconstructed EEG to be controlled. The experiments are conducted over a large set of signals taken from two public databases available at Physionet, and the results show that the compression scheme yields better compression than other reported methods. Copyright © 2011 IPEM. Published by Elsevier Ltd. All rights reserved.
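
    The retained-energy idea can be stated compactly: sort the transform coefficients by magnitude and keep the largest ones until they carry a target fraction of the total energy; the magnitude of the last coefficient kept is the threshold. The sketch below is an assumption-level illustration on surrogate coefficients, not the paper's exact algorithm.

    ```python
    # Retained-energy thresholding sketch: pick the magnitude threshold so
    # the kept coefficients hold a target fraction of the total energy. The
    # subband decomposition itself is omitted; any coefficients can be passed.
    import numpy as np

    def retained_energy_threshold(coeffs, target=0.99):
        """Smallest magnitude kept so retained coefficients hold `target` energy."""
        mags = np.sort(np.abs(np.ravel(coeffs)))[::-1]   # largest first
        energy = np.cumsum(mags ** 2)
        i = int(np.searchsorted(energy, target * energy[-1]))
        return mags[min(i, mags.size - 1)]

    rng = np.random.default_rng(0)
    coeffs = rng.laplace(scale=1.0, size=4096)           # surrogate subband coefficients
    t = retained_energy_threshold(coeffs, target=0.95)
    kept = np.abs(coeffs) >= t
    print(f"threshold={t:.3f}, kept {kept.mean():.1%} of coefficients")
    ```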

  17. Investigation of the accuracy of breast tissue segmentation methods for the purpose of developing breast deformation models for use in adaptive radiotherapy

    NASA Astrophysics Data System (ADS)

    Juneja, P.; Harris, E. J.; Evans, P. M.

    2014-03-01

    Realistic modelling of breast deformation requires the breast tissue to be segmented into fibroglandular and fatty tissue and assigned suitable material properties. There are a number of breast tissue segmentation methods proposed and used in the literature. The purpose of this study was to validate and compare the accuracy of various segmentation methods and to investigate the effect of the tissue distribution on the segmentation accuracy. Computed tomography (CT) data for 24 patients, in both supine and prone positions, were segmented into fibroglandular and fatty tissue. The segmentation methods explored were: physical density thresholding; interactive thresholding; fuzzy c-means clustering (FCM) with three classes (FCM3) and four classes (FCM4); and k-means clustering. Validation was done in two stages: firstly, a new approach, supine-prone validation, based on the assumption that the breast composition should appear the same in the supine and prone scans, was used. Secondly, outlines from three experts were used for validation. This study found that FCM3 gave the most accurate segmentation of breast tissue from CT data and that the segmentation accuracy is adversely affected by the sparseness of the fibroglandular tissue distribution.
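
    For readers unfamiliar with FCM, the sketch below implements the standard fuzzy c-means iteration directly on intensity values with three classes, mirroring the FCM3 variant found most accurate; the fuzzifier m = 2 is a conventional default, and the toy intensities are invented for illustration.

    ```python
    # Minimal fuzzy c-means (FCM) on 1D intensity values, three classes as in
    # the FCM3 variant. No fuzzy-clustering package is assumed.
    import numpy as np

    def fcm(values, c=3, m=2.0, iters=100, tol=1e-5, seed=0):
        x = np.asarray(values, dtype=float).reshape(-1, 1)
        rng = np.random.default_rng(seed)
        centers = rng.choice(x.ravel(), size=c).reshape(-1, 1)
        for _ in range(iters):
            d = np.abs(x - centers.T) + 1e-12              # (N, c) distances
            u = d ** (-2.0 / (m - 1.0))
            u /= u.sum(axis=1, keepdims=True)              # fuzzy memberships
            new = (u**m).T @ x / (u**m).sum(axis=0)[:, None]
            shift = np.abs(new - centers).max()
            centers = new
            if shift < tol:
                break
        return centers.ravel(), u

    # Toy CT-like intensities: air, fat and fibroglandular tissue (HU).
    rng = np.random.default_rng(1)
    vals = np.concatenate([rng.normal(mu, 15, 500) for mu in (-800, -100, 40)])
    centers, u = fcm(vals)
    print("class centers (HU):", np.sort(centers).round(1))
    ```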

  18. Orion MPCV Touchdown Detection Threshold Development and Testing

    NASA Technical Reports Server (NTRS)

    Daum, Jared; Gay, Robert

    2013-01-01

    A robust method of detecting Orion Multi-Purpose Crew Vehicle (MPCV) splashdown is necessary to ensure crew and hardware safety during descent and after touchdown. The proposed method uses a triple redundant system to inhibit Reaction Control System (RCS) thruster firings, detach parachute risers from the vehicle, and transition to the post-landing segment of the Flight Software (FSW). An in-depth trade study was completed to determine optimal characteristics of the touchdown detection method resulting in an algorithm monitoring filtered, lever-arm corrected, 200 Hz Inertial Measurement Unit (IMU) vehicle acceleration magnitude data against a tunable threshold using persistence counter logic. Following the design of the algorithm, high fidelity environment and vehicle simulations, coupled with the actual vehicle FSW, were used to tune the acceleration threshold and persistence counter value to result in adequate performance in detecting touchdown and sufficient safety margin against early detection while descending under parachutes. An analytical approach including Kriging and adaptive sampling allowed for a sufficient number of finite element analysis (FEA) impact simulations to be completed using minimal computation time. The combination of a persistence counter of 10 and an acceleration threshold of approximately 57.3 ft/s² resulted in an impact performance factor of safety (FOS) of 1.0 and a safety FOS of approximately 2.6 for touchdown declaration. An RCS termination acceleration threshold of approximately 53.1 ft/s² with a persistence counter of 10 resulted in an increased impact performance FOS of 1.2 at the expense of a lowered under-parachutes safety factor of 2.2. The resulting tuned algorithm was then tested on data from eight Capsule Parachute Assembly System (CPAS) flight tests, showing an experimental minimum safety FOS of 6.1. The formulated touchdown detection algorithm will be flown on the Orion MPCV FSW during the Exploration Flight Test 1 (EFT-1) mission in the second half of 2014.
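
    The core detection logic, an acceleration-magnitude threshold guarded by a persistence counter, is compact enough to sketch. The threshold and counter below mirror the values quoted above; the moving-average filter is a placeholder assumption standing in for the flight software's actual filtering.

    ```python
    # Persistence-counter threshold detection: declare touchdown only after
    # the filtered acceleration magnitude exceeds the threshold for N
    # consecutive 200 Hz samples.
    import numpy as np

    def detect_touchdown(accel_mag, threshold=57.3, persistence=10, filt_len=5):
        kernel = np.ones(filt_len) / filt_len
        filtered = np.convolve(accel_mag, kernel, mode="same")  # placeholder smoothing
        count = 0
        for i, a in enumerate(filtered):
            count = count + 1 if a > threshold else 0
            if count >= persistence:
                return i              # sample index of touchdown declaration
        return None

    t = np.arange(2000) / 200.0                                 # 200 Hz timeline
    accel = np.full_like(t, 32.2) + np.random.default_rng(0).normal(0, 2, t.size)
    accel[1600:] += 60.0                                        # splashdown spike
    print("declared at sample:", detect_touchdown(accel))
    ```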

  19. An improved finger-vein recognition algorithm based on template matching

    NASA Astrophysics Data System (ADS)

    Liu, Yueyue; Di, Si; Jin, Jian; Huang, Daoping

    2016-10-01

    Finger-vein recognition has become one of the most popular biometric identification methods. Research on recognition algorithms remains the key issue in this field. So far, many applicable algorithms have been developed. However, some problems remain in practice, such as variation in finger position, which may lead to image distortion and shifting; in addition, matching parameters determined from experience during the identification process may reduce the adaptability of an algorithm. Focusing on the above-mentioned problems, this paper proposes an improved finger-vein recognition algorithm based on template matching. In order to enhance the robustness of the algorithm to image distortion, the least-squares error method is adopted to correct the oblique finger. During feature extraction, a local adaptive threshold method is adopted. As regards the matching scores, we optimized the translation parameters as well as the matching distance between the input images and registered images on the basis of the Naoto Miura algorithm. Experimental results indicate that the proposed method can effectively improve robustness under finger shifting and rotation conditions.

  20. Precise measurement of instantaneous volume of eccrine sweat gland in mental sweating by optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Sugawa, Yoshihiko; Fukuda, Akihiro; Ohmi, Masato

    2015-03-01

    We have demonstrated dynamic analysis of the physiological function of eccrine sweat glands underneath the skin surface by optical coherence tomography (OCT). We propose a method for extracting the target eccrine sweat gland by use of a connected-component extraction process and an adaptive threshold method, where the en-face OCT images are constructed by swept-source OCT (SS-OCT). Furthermore, we demonstrate precise measurement of the instantaneous volume of the sweat gland in response to an external stimulus. The dynamic change of the instantaneous volume of an eccrine sweat gland in mental sweating is measured by this method over a period of 300 s at frame intervals of 3.23 s.

  1. An AST-ELM Method for Eliminating the Influence of Charging Phenomenon on ECT.

    PubMed

    Wang, Xiaoxin; Hu, Hongli; Jia, Huiqin; Tang, Kaihao

    2017-12-09

    Electrical capacitance tomography (ECT) is a promising imaging technology for permittivity distributions in multiphase flow. To reduce the effect of the charging phenomenon on ECT measurement, an improved extreme learning machine method combined with adaptive soft-thresholding (AST-ELM) is presented and studied for image reconstruction. This method provides a nonlinear mapping model between the capacitance values and medium distributions by machine learning rather than by an electromagnetic-sensitivity mechanism. Both simulation and experimental tests are carried out to validate the performance of the presented method, and the reconstructed images are evaluated by relative error and correlation coefficient. The results illustrate that the image reconstruction accuracy of the proposed AST-ELM method is greatly improved over that of conventional methods in the presence of a charging object.
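
    Soft-thresholding itself is the shrinkage operator sign(x)·max(|x| - t, 0). The sketch below pairs it with one common data-driven choice of t (a universal-threshold-style estimate from a robust noise level); this adaptive rule is an assumption for illustration and may differ from the paper's.

    ```python
    # Soft-thresholding shrinks each value toward zero by t and zeroes the
    # rest; "adaptive" here means t is chosen from the data.
    import numpy as np

    def soft_threshold(x, t):
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    def adaptive_t(x):
        sigma = np.median(np.abs(x)) / 0.6745          # robust noise scale (zero-mean data)
        return sigma * np.sqrt(2.0 * np.log(x.size))   # universal-threshold style

    x = np.random.default_rng(0).normal(0, 1, 1024)
    x[-24:] += 8.0                                     # a few strong "signal" entries
    y = soft_threshold(x, adaptive_t(x))
    print(int(np.count_nonzero(y)), "coefficients survive")
    ```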

  2. An AST-ELM Method for Eliminating the Influence of Charging Phenomenon on ECT

    PubMed Central

    Wang, Xiaoxin; Hu, Hongli; Jia, Huiqin; Tang, Kaihao

    2017-01-01

    Electrical capacitance tomography (ECT) is a promising imaging technology for permittivity distributions in multiphase flow. To reduce the effect of the charging phenomenon on ECT measurement, an improved extreme learning machine method combined with adaptive soft-thresholding (AST-ELM) is presented and studied for image reconstruction. This method provides a nonlinear mapping model between the capacitance values and medium distributions by machine learning rather than by an electromagnetic-sensitivity mechanism. Both simulation and experimental tests are carried out to validate the performance of the presented method, and the reconstructed images are evaluated by relative error and correlation coefficient. The results illustrate that the image reconstruction accuracy of the proposed AST-ELM method is greatly improved over that of conventional methods in the presence of a charging object. PMID:29232850

  3. Expanded envelope concepts for aircraft control-element failure detection and identification

    NASA Technical Reports Server (NTRS)

    Weiss, Jerold L.; Hsu, John Y.

    1988-01-01

    The purpose of this effort was to develop and demonstrate concepts for expanding the envelope of failure detection and isolation (FDI) algorithms for aircraft-path failures. An algorithm which uses analytic-redundancy in the form of aerodynamic force and moment balance equations was used. Because aircraft-path FDI uses analytical models, there is a tradeoff between accuracy and the ability to detect and isolate failures. For single flight condition operation, design and analysis methods are developed to deal with this robustness problem. When the departure from the single flight condition is significant, algorithm adaptation is necessary. Adaptation requirements for the residual generation portion of the FDI algorithm are interpreted as the need for accurate, large-motion aero-models, over a broad range of velocity and altitude conditions. For the decision-making part of the algorithm, adaptation may require modifications to filtering operations, thresholds, and projection vectors that define the various hypothesis tests performed in the decision mechanism. Methods of obtaining and evaluating adequate residual generation and decision-making designs have been developed. The application of the residual generation ideas to a high-performance fighter is demonstrated by developing adaptive residuals for the AFTI-F-16 and simulating their behavior under a variety of maneuvers using the results of a NASA F-16 simulation.

  4. Adaptive sequential Bayesian classification using Page's test

    NASA Astrophysics Data System (ADS)

    Lynch, Robert S., Jr.; Willett, Peter K.

    2002-03-01

    In this paper, the previously introduced Mean-Field Bayesian Data Reduction Algorithm is extended for adaptive sequential hypothesis testing utilizing Page's test. In general, Page's test is well understood as a method of detecting a permanent change in distribution associated with a sequence of observations. However, the relationship between detecting a change in distribution utilizing Page's test and classification and feature fusion is not well understood. Thus, the contribution of this work is a method of classifying an unlabeled vector of fused features (i.e., detecting a change to an active statistical state) as quickly as possible given an acceptable mean time between false alerts. In this case, the developed classification test can be thought of as equivalent to performing a sequential probability ratio test repeatedly until a class is decided, with the lower log-threshold of each test being set to zero and the upper log-threshold being determined by the expected distance between false alerts. It is of interest to estimate the delay (or related stopping time) to a classification decision (the number of time samples it takes to classify the target), and the mean time between false alerts, as a function of feature selection and fusion by the Mean-Field Bayesian Data Reduction Algorithm. Results are demonstrated by plotting the delay to declaring the target class versus the mean time between false alerts, and are shown using both different numbers of simulated training data and different numbers of relevant features for each class.
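
    For reference, a minimal Page (CUSUM) recursion is sketched below: accumulate the log-likelihood ratio, clip at zero from below (the lower log-threshold mentioned above), and declare when an upper threshold h is crossed. Gaussian pre- and post-change densities are an illustrative assumption; in the paper the statistic comes from class likelihoods after feature reduction.

    ```python
    # Page's (CUSUM) test sketch for a Gaussian mean shift: g_k is clipped at
    # zero from below and an alarm fires when g_k crosses the upper threshold h.
    import numpy as np

    def page_test(samples, mu0=0.0, mu1=1.0, sigma=1.0, h=5.0):
        g = 0.0
        for i, z in enumerate(samples):
            llr = ((z - mu0)**2 - (z - mu1)**2) / (2.0 * sigma**2)  # log-likelihood ratio
            g = max(0.0, g + llr)       # lower log-threshold fixed at zero
            if g >= h:
                return i                # stopping time: change (class) declared
        return None

    rng = np.random.default_rng(2)
    data = np.concatenate([rng.normal(0, 1, 200), rng.normal(1, 1, 200)])
    print("alarm at sample:", page_test(data))
    ```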

  5. Inconsistent Effect of Arousal on Early Auditory Perception

    PubMed Central

    Bolders, Anna C.; Band, Guido P. H.; Stallen, Pieter Jan M.

    2017-01-01

    Mood has been shown to influence cognitive performance. However, little is known about the influence of mood on sensory processing, specifically in the auditory domain. With the current study, we sought to investigate how auditory processing of neutral sounds is affected by the mood state of the listener. This was tested in two experiments by measuring masked auditory detection thresholds before and after a standard mood-induction procedure. In the first experiment (N = 76), mood was induced by imagining a mood-appropriate event combined with listening to mood-inducing music. In the second experiment (N = 80), imagining was combined with affective picture viewing to exclude any possibility of confounding the results by acoustic properties of the music. In both experiments, the thresholds were determined by means of an adaptive staircase tracking method in a two-interval forced-choice task. Masked detection thresholds were compared between participants in four different moods (calm, happy, sad, and anxious), which enabled differentiation of mood effects along the dimensions of arousal and pleasure. Results of the two experiments were analyzed both separately and in a combined analysis. The first experiment showed that, while there was no impact of pleasure level on the masked threshold, lower arousal was associated with a lower threshold (higher masked sensitivity). However, as indicated by an interaction effect between experiment and arousal, arousal had a different effect on the threshold in Experiment 2, which showed a trend of arousal in the opposite direction. These results show that the effect of arousal on auditory masked sensitivity may depend on the modality of the mood-inducing stimuli. As clear conclusions regarding the genuineness of the arousal effect on the masked threshold cannot be drawn, suggestions for further research that could clarify this issue are provided. PMID:28424639

  6. The absolute threshold of cone vision

    PubMed Central

    Koenig, Darran; Hofer, Heidi

    2013-01-01

    We report measurements of the absolute threshold of cone vision, which has been previously underestimated due to sub-optimal conditions or overly strict subjective response criteria. We avoided these limitations by using optimized stimuli and experimental conditions while having subjects respond within a rating scale framework. Small (1′ fwhm), brief (34 msec), monochromatic (550 nm) stimuli were foveally presented at multiple intensities in dark-adapted retina for 5 subjects. For comparison, 4 subjects underwent similar testing with rod-optimized stimuli. Cone absolute threshold, that is, the minimum light energy for which subjects were just able to detect a visual stimulus with any response criterion, was 203 ± 38 photons at the cornea, ∼0.47 log units lower than previously reported. Two-alternative forced-choice measurements in a subset of subjects yielded consistent results. Cone thresholds were less responsive to criterion changes than rod thresholds, suggesting a limit to the stimulus information recoverable from the cone mosaic in addition to the limit imposed by Poisson noise. Results were consistent with expectations for detection in the face of stimulus uncertainty. We discuss implications of these findings for modeling the first stages of human cone vision and interpreting psychophysical data acquired with adaptive optics at the spatial scale of the receptor mosaic. PMID:21270115

  7. Is the bitter rejection response always adaptive?

    PubMed

    Glendinning, J I

    1994-12-01

    The bitter rejection response consists of a suite of withdrawal reflexes and negative affective responses. It is generally assumed to have evolved as a way to facilitate avoidance of foods that are poisonous because they usually taste bitter to humans. Using previously published studies, the present paper examines the relationship between bitterness and toxicity in mammals, and then assesses the ecological costs and benefits of the bitter rejection response in carnivorous, omnivorous, and herbivorous (grazing and browsing) mammals. If the bitter rejection response accurately predicts the potential toxicity of foods, then one would expect the threshold for the response to be lower for highly toxic compounds than for nontoxic compounds. The data revealed no such relationship. Bitter taste thresholds varied independently of toxicity thresholds, indicating that the bitter rejection response is just as likely to be elicited by a harmless bitter food as it is by a harmful one. Thus, it is not necessarily in an animal's best interest to have an extremely high or low bitter threshold. Based on this observation, it was hypothesized that the adaptiveness of the bitter rejection response depends upon the relative occurrence of bitter and potentially toxic compounds in an animal's diet. Animals with a relatively high occurrence of bitter and potentially toxic compounds in their diet (e.g., browsing herbivores) were predicted to have evolved a high bitter taste threshold and tolerance to dietary poisons. Such an adaptation would be necessary because a browser cannot "afford" to reject all foods that are bitter and potentially toxic without unduly restricting its dietary options. At the other extreme, animals that rarely encounter bitter and potentially toxic compounds in their diet (e.g., carnivores) were predicted to have evolved a low bitter threshold. Carnivores could "afford" to utilize such a stringent rejection mechanism because foods containing bitter and potentially toxic compounds constitute a small portion of their diet. Since the low bitter threshold would reduce substantially the risk of ingesting anything poisonous, carnivores were also expected to have a relatively low tolerance to dietary poisons. This hypothesis was supported by a comparison involving 30 mammal species, in which a suggestive relationship was found between quinine hydrochloride sensitivity and trophic group, with carnivores > omnivores > grazers > browsers. Further support for the hypothesis was provided by a comparison across browsers and grazers in terms of the production of tannin-binding salivary proteins, which probably represent an adaptation for reducing the bitterness and astringency of tannins.(ABSTRACT TRUNCATED AT 400 WORDS)

  8. MO-DE-207A-12: Toward Patient-Specific 4DCT Reconstruction Using Adaptive Velocity Binning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morris, E.D.; Glide-Hurst, C.; Wayne State University, Detroit, MI

    2016-06-15

    Purpose: While 4DCT provides organ/tumor motion information, it often samples data over 10–20 breathing cycles. For patients presenting with compromised pulmonary function, breathing patterns can change over the acquisition time, potentially leading to tumor delineation discrepancies. This work introduces a novel adaptive velocity-modulated binning (AVB) 4DCT algorithm that modulates the reconstruction based on the respiratory waveform, yielding a patient-specific 4DCT solution. Methods: AVB was implemented in a research reconstruction configuration. After filtering the respiratory waveform, the algorithm examines data neighboring a phase reconstruction point, and the temporal gate is widened until the difference between the reconstruction point and the waveform exceeds a threshold value, defined as a percentage of the difference between maximum and minimum waveform amplitude. The algorithm only impacts reconstruction if the gate width exceeds a set minimum temporal width required for accurate reconstruction. A sensitivity experiment of threshold values (0.5, 1, 5, 10, and 12%) was conducted to examine the interplay between threshold, signal-to-noise ratio (SNR), and image sharpness for phantom and several patient 4DCT cases using ten-phase reconstructions. Individual phase reconstructions were examined. Subtraction images and regions of interest were compared to quantify changes in SNR. Results: AVB increased signal in reconstructed 4DCT slices for respiratory waveforms that met the prescribed criteria. For the end-exhale phases, where the respiratory velocity is low, patient data revealed that a threshold of 0.5% demonstrated increased SNR in the AVB reconstructions. For intermediate breathing phases, threshold values had to be >10% to produce appreciable changes in CT intensity with AVB. AVB reconstructions exhibited appreciably higher SNR and reduced noise in regions of interest that were photon deprived, such as the liver. Conclusion: We demonstrated that patient-specific velocity-based 4DCT reconstruction is feasible. Image noise was reduced with AVB, suggesting potential applications for low-dose acquisitions and for improving 4DCT reconstruction for patients with irregular breathing. The submitting institution holds research agreements with Philips Healthcare.
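
    The gate-widening rule described in the Methods is easy to prototype: starting from a phase reconstruction point, extend the gate while the waveform stays within a threshold expressed as a percentage of the waveform's amplitude range. The sketch below uses a synthetic breathing waveform and illustrative parameters.

    ```python
    # Adaptive gate widening sketch: grow the temporal gate around a phase
    # reconstruction point while the waveform stays within a percent-of-
    # amplitude-range threshold of the reference value.
    import numpy as np

    def widen_gate(wave, idx, threshold_pct=0.5):
        span = wave.max() - wave.min()                 # max-min amplitude range
        limit = threshold_pct / 100.0 * span
        lo = hi = idx
        while lo > 0 and abs(wave[lo - 1] - wave[idx]) <= limit:
            lo -= 1
        while hi < len(wave) - 1 and abs(wave[hi + 1] - wave[idx]) <= limit:
            hi += 1
        return lo, hi                                  # widened gate bounds

    t = np.linspace(0, 10, 2500)                       # synthetic timeline (s)
    wave = np.sin(2 * np.pi * t / 4.0)                 # ~4 s breathing period
    print(widen_gate(wave, idx=1250, threshold_pct=0.5))
    ```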

  9. Psychophysical chromatic mechanisms in macaque monkey.

    PubMed

    Stoughton, Cleo M; Lafer-Sousa, Rosa; Gagin, Galina; Conway, Bevil R

    2012-10-24

    Chromatic mechanisms have been studied extensively with psychophysical techniques in humans, but the number and nature of the mechanisms are still controversial. Appeals to monkey neurophysiology are often used to sort out the competing claims and to test hypotheses arising from the experiments in humans, but psychophysical chromatic mechanisms have never been assessed in monkeys. Here we address this issue by measuring color-detection thresholds in monkeys before and after chromatic adaptation, employing a standard approach used to determine chromatic mechanisms in humans. We conducted separate experiments using adaptation configured as either flickering full-field colors or heterochromatic gratings. Full-field colors would favor activity within the visual system at or before the arrival of retinal signals to V1, before the spatial transformation of color signals by the cortex. Conversely, gratings would favor activity within the cortex where neurons are often sensitive to spatial chromatic structure. Detection thresholds were selectively elevated for the colors of full-field adaptation when it modulated along either of the two cardinal chromatic axes that define cone-opponent color space [L vs M or S vs (L + M)], providing evidence for two privileged cardinal chromatic mechanisms implemented early in the visual-processing hierarchy. Adaptation with gratings produced elevated thresholds for colors of the adaptation regardless of its chromatic makeup, suggesting a cortical representation comprised of multiple higher-order mechanisms each selective for a different direction in color space. The results suggest that color is represented by two cardinal channels early in the processing hierarchy and many chromatic channels in brain regions closer to perceptual readout.

  10. Dark adaptation and the retinoid cycle of vision.

    PubMed

    Lamb, T D; Pugh, E N

    2004-05-01

    Following exposure of our eye to very intense illumination, we experience a greatly elevated visual threshold, that takes tens of minutes to return completely to normal. The slowness of this phenomenon of "dark adaptation" has been studied for many decades, yet is still not fully understood. Here we review the biochemical and physical processes involved in eliminating the products of light absorption from the photoreceptor outer segment, in recycling the released retinoid to its original isomeric form as 11-cis retinal, and in regenerating the visual pigment rhodopsin. Then we analyse the time-course of three aspects of human dark adaptation: the recovery of psychophysical threshold, the recovery of rod photoreceptor circulating current, and the regeneration of rhodopsin. We begin with normal human subjects, and then analyse the recovery in several retinal disorders, including Oguchi disease, vitamin A deficiency, fundus albipunctatus, Bothnia dystrophy and Stargardt disease. We review a large body of evidence showing that the time-course of human dark adaptation and pigment regeneration is determined by the local concentration of 11-cis retinal, and that after a large bleach the recovery is limited by the rate at which 11-cis retinal is delivered to opsin in the bleached rod outer segments. We present a mathematical model that successfully describes a wide range of results in human and other mammals. The theoretical analysis provides a simple means of estimating the relative concentration of free 11-cis retinal in the retina/RPE, in disorders exhibiting slowed dark adaptation, from analysis of psychophysical measurements of threshold recovery or from analysis of pigment regeneration kinetics.

  11. Adaptive Detection and ISI Mitigation for Mobile Molecular Communication.

    PubMed

    Chang, Ge; Lin, Lin; Yan, Hao

    2018-03-01

    Current studies on modulation and detection schemes in molecular communication mainly focus on the scenarios with static transmitters and receivers. However, mobile molecular communication is needed in many envisioned applications, such as target tracking and drug delivery. Until now, investigations about mobile molecular communication have been limited. In this paper, a static transmitter and a mobile bacterium-based receiver performing random walk are considered. In this mobile scenario, the channel impulse response changes due to the dynamic change of the distance between the transmitter and the receiver. Detection schemes based on fixed distance fail in signal detection in such a scenario. Furthermore, the intersymbol interference (ISI) effect becomes more complex due to the dynamic character of the signal which makes the estimation and mitigation of the ISI even more difficult. In this paper, an adaptive ISI mitigation method and two adaptive detection schemes are proposed for this mobile scenario. In the proposed scheme, adaptive ISI mitigation, estimation of dynamic distance, and the corresponding impulse response reconstruction are performed in each symbol interval. Based on the dynamic channel impulse response in each interval, two adaptive detection schemes, concentration-based adaptive threshold detection and peak-time-based adaptive detection, are proposed for signal detection. Simulations demonstrate that the ISI effect is significantly reduced and the adaptive detection schemes are reliable and robust for mobile molecular communication.
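
    To make the concentration-based variant concrete, the sketch below rebuilds the expected impulse response of a freely diffusing point release for the current distance estimate and sets the decision threshold as a fraction of the expected peak, so the threshold tracks the receiver's motion. All physical parameters are illustrative assumptions.

    ```python
    # Concentration-based adaptive threshold sketch: per symbol interval,
    # reconstruct the expected impulse response for the estimated distance
    # and set the threshold as a fraction of its peak.
    import numpy as np

    def impulse_response(t, d, D=1e-10, N=1e4):
        """Expected concentration at distance d for a 3D point release of N molecules."""
        return N / (4.0 * np.pi * D * t) ** 1.5 * np.exp(-d**2 / (4.0 * D * t))

    def adaptive_threshold(d_est, T=1.0, frac=0.5):
        t = np.linspace(1e-3, T, 1000)
        peak = impulse_response(t, d_est).max()
        return frac * peak               # threshold tracks the moving receiver

    for d in (5e-6, 10e-6, 20e-6):       # receiver drifting away (m)
        print(f"d={d:.0e} m -> threshold {adaptive_threshold(d):.3g}")
    ```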

  12. A fast image matching algorithm based on key points

    NASA Astrophysics Data System (ADS)

    Wang, Huilin; Wang, Ying; An, Ru; Yan, Peng

    2014-05-01

    Image matching is a very important technique in image processing. It has been widely used for object recognition and tracking, image retrieval, three-dimensional vision, change detection, aircraft position estimation, and multi-image registration. Based on the requirements of matching algorithms for craft navigation, such as speed, accuracy and adaptability, a fast key-point image matching method is investigated and developed. The main research tasks include: (1) developing an improved fast key-point detection approach using a self-adapting threshold for Features from Accelerated Segment Test (FAST); a method of calculating the self-adapting threshold was introduced for images with different contrast, and the Hessian matrix was adopted to eliminate unstable edge points in order to obtain key points with higher stability. This key-point detection approach requires little computation and offers high positioning accuracy and strong noise robustness; (2) PCA-SIFT is utilized to describe key points: 128-dimensional vectors are formed by the SIFT method for the extracted key points, a low-dimensional feature space is established from the eigenvectors of all the key points, and each descriptor is projected onto this space to form a low-dimensional vector, so that after PCA the descriptor is reduced from the original 128 dimensions to 20; this reduces the dimensionality of the approximate nearest-neighbour search, thereby increasing overall speed; (3) the distance ratio between the nearest neighbour and the second-nearest neighbour is used as the criterion for initial matching, from which the originally matched point pairs are obtained; based on an analysis of the common methods used for eliminating false matches (e.g., RANSAC (random sample consensus) and Hough-transform clustering), a heuristic local geometric restriction strategy is adopted to further discard falsely matched point pairs; and (4) an affine transformation model is introduced to correct the coordinate difference between the real-time image and the reference image, completing the matching of the two images. SPOT5 remote-sensing images captured on different dates and airborne images captured with different flight attitudes were used to test the method's matching accuracy, operation time, and ability to overcome rotation. Results show the effectiveness of the approach.
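
    Two pieces of this pipeline are easy to illustrate with OpenCV: a FAST threshold scaled to image contrast (one plausible reading of the self-adapting threshold) and the nearest/second-nearest distance-ratio test for initial matching. SIFT-style L2 descriptors stand in for PCA-SIFT, and the scaling constant is an assumption.

    ```python
    # Contrast-scaled FAST detection and Lowe-style ratio-test matching.
    import cv2

    def adaptive_fast_keypoints(gray, scale=0.15):
        """Detect FAST key points with a threshold scaled to image contrast."""
        thresh = max(5, int(scale * gray.std()))        # contrast-driven threshold
        fast = cv2.FastFeatureDetector_create(threshold=thresh)
        return fast.detect(gray, None)

    def ratio_test_matches(desc1, desc2, ratio=0.75):
        """Keep matches whose nearest neighbour clearly beats the second nearest."""
        matcher = cv2.BFMatcher(cv2.NORM_L2)
        pairs = matcher.knnMatch(desc1, desc2, k=2)
        return [m for m, n in (p for p in pairs if len(p) == 2)
                if m.distance < ratio * n.distance]
    ```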

  13. Maximum-likelihood spectral estimation and adaptive filtering techniques with application to airborne Doppler weather radar. Thesis Technical Report No. 20

    NASA Technical Reports Server (NTRS)

    Lai, Jonathan Y.

    1994-01-01

    This dissertation focuses on the signal processing problems associated with the detection of hazardous windshears using airborne Doppler radar when weak weather returns are in the presence of strong clutter returns. In light of the frequent inadequacy of spectral-processing-oriented clutter suppression methods, we model a clutter signal as multiple sinusoids plus Gaussian noise, and propose adaptive filtering approaches that better capture the temporal characteristics of the signal process. This idea leads to two research topics in signal processing: (1) signal modeling and parameter estimation, and (2) adaptive filtering in this particular signal environment. A high-resolution, low-SNR-threshold maximum likelihood (ML) frequency estimation and signal modeling algorithm is devised and proves capable of delineating both the spectral and temporal nature of the clutter return. Furthermore, the performance of the Least Mean Square (LMS)-based adaptive filter for the proposed signal model is investigated, and promising simulation results have testified to its potential for clutter rejection, leading to more accurate estimation of windspeed and thus a better assessment of the windshear hazard.

  14. Validation of a clinical assessment of spectral-ripple resolution for cochlear implant users.

    PubMed

    Drennan, Ward R; Anderson, Elizabeth S; Won, Jong Ho; Rubinstein, Jay T

    2014-01-01

    Nonspeech psychophysical tests of spectral resolution, such as the spectral-ripple discrimination task, have been shown to correlate with speech-recognition performance in cochlear implant (CI) users. However, these tests are best suited for use in the research laboratory setting and are impractical for clinical use. A test of spectral resolution that is quicker and could more easily be implemented in the clinical setting has been developed. The objectives of this study were (1) To determine whether this new clinical ripple test would yield individual results equivalent to the longer, adaptive version of the ripple-discrimination test; (2) To evaluate test-retest reliability for the clinical ripple measure; and (3) To examine the relationship between clinical ripple performance and monosyllabic word recognition in quiet for a group of CI listeners. Twenty-eight CI recipients participated in the study. Each subject was tested on both the adaptive and the clinical versions of spectral ripple discrimination, as well as consonant-nucleus-consonant word recognition in quiet. The adaptive version of spectral ripple used a two-up, one-down procedure for determining spectral ripple discrimination threshold. The clinical ripple test used a method of constant stimuli, with trials for each of 12 fixed ripple densities occurring six times in random order. Results from the clinical ripple test (proportion correct) were then compared with ripple-discrimination thresholds (in ripples per octave) from the adaptive test. The clinical ripple test showed strong concurrent validity, evidenced by a good correlation between clinical ripple and adaptive ripple results (r = 0.79), as well as a correlation with word recognition (r = 0.7). Excellent test-retest reliability was also demonstrated with a high test-retest correlation (r = 0.9). The clinical ripple test is a reliable nonlinguistic measure of spectral resolution, optimized for use with CI users in a clinical setting. The test might be useful as a diagnostic tool or as a possible surrogate outcome measure for evaluating treatment effects in hearing.
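
    The two-up, one-down rule used in the adaptive version targets the ~70.7%-correct point of the psychometric function. The sketch below runs such a staircase against a simulated listener and reads the threshold from the last reversals; the listener model, step size, and trial count are illustrative assumptions.

    ```python
    # Two-up, one-down adaptive staircase sketch: two consecutive correct
    # responses make the task harder (higher ripple density), one error makes
    # it easier. Threshold is estimated from the last reversal points.
    import numpy as np

    def simulated_listener(density, true_threshold=2.0, rng=None):
        """P(correct) falls from ~1 toward chance as density passes the true threshold."""
        rng = rng or np.random.default_rng(0)
        p = 0.5 + 0.5 / (1.0 + np.exp(4.0 * (density - true_threshold)))
        return rng.random() < p

    def staircase(start=0.5, step=1.25, trials=80):
        rng = np.random.default_rng(1)
        density, correct_run, reversals, last_dir = start, 0, [], 0
        for _ in range(trials):
            if simulated_listener(density, rng=rng):
                correct_run += 1
                if correct_run < 2:
                    continue
                correct_run, direction = 0, +1      # two-up: make it harder
                new = density * step
            else:
                correct_run, direction = 0, -1      # one-down: make it easier
                new = density / step
            if last_dir and direction != last_dir:
                reversals.append(density)           # record a reversal point
            last_dir, density = direction, new
        return float(np.mean(reversals[-6:])) if reversals else density

    print("estimated threshold (ripples/octave):", round(staircase(), 2))
    ```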

  15. Evaluation and Application of Enhancements to the Performance of the ASDE-3 Radar in Heavy Rain

    DOT National Transportation Integrated Search

    1982-03-01

    This report presents the results of a study performed by the Transportation Systems Center (TSC) to evaluate two proposed enhancements to the performance of the ASDE-3 Radar in heavy rain: Adaptive gain and adaptive clutter thresholding, (operating w...

  16. A Data Centred Method to Estimate and Map Changes in the Full Distribution of Daily Precipitation and Its Exceedances

    NASA Astrophysics Data System (ADS)

    Chapman, S. C.; Stainforth, D. A.; Watkins, N. W.

    2014-12-01

    Estimates of how our climate is changing are needed locally in order to inform adaptation planning decisions. This requires quantifying the geographical patterns in changes at specific quantiles or thresholds of distributions of variables such as daily temperature or precipitation. We develop a method[1] for analysing local climatic timeseries to assess which quantiles of the local climatic distribution show the greatest and most robust changes, specifically addressing the challenges presented by 'heavy-tailed' distributed variables such as daily precipitation. We extract from the data quantities that characterize the changes in time of the likelihood of daily precipitation above a threshold and of the relative amount of precipitation in those extreme precipitation days. Our method is a simple mathematical deconstruction of how the difference between two observations from two different time periods can be assigned to the combination of natural statistical variability and/or the consequences of secular climate change. This deconstruction facilitates an assessment of how fast different quantiles of precipitation distributions are changing. This involves determining not only which quantiles and geographical locations show the greatest change, but also those at which any change is highly uncertain. We demonstrate this approach using E-OBS gridded data[2] timeseries of local daily precipitation from specific locations across Europe over the last 60 years. We treat geographical location and precipitation as independent variables and thus obtain as outputs the pattern of change at a given threshold of precipitation and with geographical location. This is model-independent, thus providing data of direct value in model calibration and assessment. Our results identify regionally consistent patterns which, depending on location, show systematic increases in precipitation on the wettest days, shifts in precipitation patterns toward fewer moderate days and more heavy days, and drying across all days; this is of potential value in adaptation planning. [1] S C Chapman, D A Stainforth, N W Watkins, 2013 Phil. Trans. R. Soc. A, 371 20120287; D. A. Stainforth, S. C. Chapman, N. W. Watkins, 2013 Environ. Res. Lett. 8, 034031 [2] Haylock et al. 2008 J. Geophys. Res (Atmospheres), 113, D20119
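
    A minimal sketch of the kind of quantile comparison involved (not the authors' full deconstruction into natural variability and secular change[1]): empirical wet-day quantiles from two observation periods are differenced, with an assumed 1 mm wet-day cutoff and synthetic heavy-tailed data standing in for the two periods.

    ```python
    import numpy as np

    def wet_day_quantile_change(early, late, qs=np.linspace(0.50, 0.99, 50)):
        """Difference in empirical daily-precipitation quantiles (wet days
        only) between two observation periods; 1 mm is an arbitrary cutoff."""
        q_early = np.quantile(early[early > 1.0], qs)
        q_late = np.quantile(late[late > 1.0], qs)
        return qs, q_late - q_early

    rng = np.random.default_rng(0)
    early = rng.gamma(shape=0.7, scale=6.0, size=30 * 365)   # ~30 years of days
    late = rng.gamma(shape=0.7, scale=7.0, size=30 * 365)
    qs, dq = wet_day_quantile_change(early, late)            # change per quantile
    ```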

  17. Perception of Self-Motion and Regulation of Walking Speed in Young-Old Adults.

    PubMed

    Lalonde-Parsi, Marie-Jasmine; Lamontagne, Anouk

    2015-07-01

    Whether a reduced perception of self-motion contributes to poor walking speed adaptations in older adults is unknown. In this study, speed discrimination thresholds (perceptual task) and walking speed adaptations (walking task) were compared between young (19-27 years) and young-old individuals (63-74 years), and the relationship between the performance on the two tasks was examined. Participants were evaluated while viewing a virtual corridor in a helmet-mounted display. Speed discrimination thresholds were determined using a staircase procedure. Walking speed modulation was assessed on a self-paced treadmill while exposed to different self-motion speeds ranging from 0.25 to 2 times the participants' comfortable speed. For each speed, participants were instructed to match the self-motion speed described by the moving corridor. On the walking task, participants displayed smaller walking speed errors at comfortable walking speeds compared with slower or faster speeds. The young-old adults presented larger speed discrimination thresholds (perceptual experiment) and larger walking speed errors (walking experiment) compared with young adults. Larger walking speed errors were associated with higher discrimination thresholds. The enhanced performance on the walking task at comfortable speed suggests that intersensory calibration processes are influenced by experience, hence optimized for frequently encountered conditions. The altered performance of the young-old adults on the perceptual and walking tasks, as well as the relationship observed between the two tasks, suggests that a poor perception of visual motion information may contribute to the poor walking speed adaptations that arise with aging.

  18. Lane identification and path planning for autonomous mobile robots

    NASA Astrophysics Data System (ADS)

    McKeon, Robert T.; Paulik, Mark; Krishnan, Mohan

    2006-10-01

    This work has been performed in conjunction with the University of Detroit Mercy's (UDM) ECE Department autonomous vehicle entry in the 2006 Intelligent Ground Vehicle Competition (www.igvc.org). The IGVC challenges engineering students to design autonomous vehicles and compete in a variety of unmanned mobility competitions. The course to be traversed in the competition consists of a lane demarcated by painted lines on grass, with the possibility of one of the two lines being deliberately left out over segments of the course. The course also contains other challenging artifacts such as sandpits, ramps, potholes, and colored tarps that alter the color composition of scenes, as well as obstacles set up using orange and white construction barrels. This paper describes a composite lane edge detection approach that uses three algorithms to filter noise before image thresholding is applied. The first algorithm uses a row-adaptive statistical filter to establish an intensity floor, followed by a global threshold based on a reverse cumulative intensity histogram and a priori knowledge about lane thickness and separation. The second method first improves the contrast of the image by implementing an arithmetic combination of the blue plane (RGB format) and a modified saturation plane (HSI format); a global threshold is then applied based on the mean of the intensity image and a user-defined offset. The third method applies the horizontal component of the Sobel mask to a modified gray scale of the image, followed by a thresholding method similar to the one used in the second method. The Hough transform is applied to each of the resulting binary images to select the most probable line candidates. Finally, a heuristics-based confidence interval is determined, and the results are sent on to a separate fuzzy polar-based navigation algorithm, which fuses the image data with that produced by a laser scanner (for obstacle detection).
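
    The second step of the first algorithm, choosing a global threshold from a reverse cumulative intensity histogram, can be sketched as follows; the expected fraction of lane pixels is an illustrative stand-in for the paper's a priori knowledge of lane thickness and separation.

    ```python
    import numpy as np

    def reverse_cumulative_threshold(gray, expected_fraction=0.02):
        """Return the lowest gray level at which the fraction of pixels at or
        above that level drops to the expected fraction of lane pixels."""
        hist, _ = np.histogram(gray, bins=256, range=(0, 256))
        frac_at_or_above = np.cumsum(hist[::-1])[::-1] / gray.size
        levels = np.nonzero(frac_at_or_above <= expected_fraction)[0]
        return int(levels[0]) if levels.size else 255

    # Usage on an 8-bit grayscale frame:
    # binary = (gray >= reverse_cumulative_threshold(gray)).astype(np.uint8) * 255
    ```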

  19. A spectral element method with adaptive segmentation for accurately simulating extracellular electrical stimulation of neurons.

    PubMed

    Eiber, Calvin D; Dokos, Socrates; Lovell, Nigel H; Suaning, Gregg J

    2017-05-01

    The capacity to quickly and accurately simulate extracellular stimulation of neurons is essential to the design of next-generation neural prostheses. Existing platforms for simulating neurons are largely based on finite-difference techniques; due to the complex geometries involved, the more powerful spectral or differential quadrature techniques cannot be applied directly. This paper presents a mathematical basis for the application of a spectral element method to the problem of simulating the extracellular stimulation of retinal neurons, which is readily extensible to neural fibers of any kind. The activating function formalism is extended to arbitrary neuron geometries, and a segmentation method to guarantee an appropriate choice of collocation points is presented. Differential quadrature may then be applied to efficiently solve the resulting cable equations. The capacity for this model to simulate action potentials propagating through branching structures and to predict minimum extracellular stimulation thresholds for individual neurons is demonstrated. The presented model is validated against published values for extracellular stimulation threshold and conduction velocity for realistic physiological parameter values. This model suggests that convoluted axon geometries are more readily activated by extracellular stimulation than linear axon geometries, which may have ramifications for the design of neural prostheses.
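
    The classical activating-function formalism that the paper extends (Rattay's second spatial difference of the extracellular potential along the fiber) can be computed as below; the point-source electrode model and the tissue parameters are illustrative assumptions, not the paper's spectral element machinery.

    ```python
    import numpy as np

    def activating_function(v_e, dx):
        """Second spatial difference of the extracellular potential along the
        fiber; strongly positive regions are candidate depolarization sites."""
        return (v_e[2:] - 2.0 * v_e[1:-1] + v_e[:-2]) / dx ** 2

    # Point-source electrode in a homogeneous medium above a straight fiber.
    rho_ohm_m, current_a, height_m = 3.0, -1e-4, 1e-3      # illustrative values
    xs = np.linspace(-5e-3, 5e-3, 201)                     # positions along fiber (m)
    v_e = rho_ohm_m * current_a / (4 * np.pi * np.hypot(xs, height_m))
    f = activating_function(v_e, xs[1] - xs[0])
    ```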

  20. Thresholds for conservation and management: structured decision making as a conceptual framework

    USGS Publications Warehouse

    Nichols, James D.; Eaton, Mitchell J.; Martin, Julien; Edited by Guntenspergen, Glenn R.

    2014-01-01

    Ecological thresholds are values of system state variables at which small changes produce substantial changes in system dynamics. They are frequently incorporated into ecological models used to project system responses to management actions. Utility thresholds are components of management objectives and are values of state or performance variables at which small changes yield substantial changes in the value of the management outcome. Decision thresholds are values of system state variables at which small changes prompt changes in management actions in order to reach specified management objectives. Decision thresholds are derived from the other components of the decision process. We advocate a structured decision making (SDM) approach within which the following components are identified: objectives (possibly including utility thresholds), potential actions, models (possibly including ecological thresholds), monitoring program, and a solution algorithm (which produces decision thresholds). Adaptive resource management (ARM) is described as a special case of SDM developed for recurrent decision problems that are characterized by uncertainty. We believe that SDM, in general, and ARM, in particular, provide good approaches to conservation and management. Use of SDM and ARM also clarifies the distinct roles of ecological thresholds, utility thresholds, and decision thresholds in informed decision processes.

  1. Partial photoionization cross sections of NH4 and H3O Rydberg radicals

    NASA Astrophysics Data System (ADS)

    Velasco, A. M.; Lavín, C.; Martín, I.; Melin, J.; Ortiz, J. V.

    2009-07-01

    Photoionization cross sections for various Rydberg series that correspond to ionization channels of ammonium and oxonium Rydberg radicals from the outermost, occupied orbitals of their respective ground states are reported. These properties are known to be relevant in photoelectron dynamics studies. For the present calculations, the molecular-adapted quantum defect orbital method has been employed. A Cooper minimum has been found in the 3sa1-kpt2 Rydberg channel of NH4 beyond the ionization threshold, which provides the main contribution to the photoionization of this radical. However, no net minimum is found in the partial cross section of H3O despite the presence of minima in the 3sa1-kpe and 3sa1-kpa1 Rydberg channels. The complete oscillator strength distributions spanning the discrete and continuous regions of both radicals exhibit the expected continuity across the ionization threshold.

  2. Accuracy of Cochlear Implant Recipients on Speech Reception in Background Music

    PubMed Central

    Gfeller, Kate; Turner, Christopher; Oleson, Jacob; Kliethermes, Stephanie; Driscoll, Virginia

    2012-01-01

    Objectives This study (a) examined speech recognition abilities of cochlear implant (CI) recipients in the spectrally complex listening condition of three contrasting types of background music, and (b) compared performance based upon listener groups: CI recipients using conventional long-electrode (LE) devices, Hybrid CI recipients (acoustic plus electric stimulation), and normal-hearing (NH) adults. Methods We tested 154 LE CI recipients using varied devices and strategies, 21 Hybrid CI recipients, and 49 NH adults on closed-set recognition of spondees presented in three contrasting forms of background music (piano solo, large symphony orchestra, vocal solo with small combo accompaniment) in an adaptive test. Outcomes Signal-to-noise thresholds for speech in music (SRTM) were examined in relation to measures of speech recognition in background noise and multi-talker babble, pitch perception, and music experience. Results SRTM thresholds varied as a function of category of background music, group membership (LE, Hybrid, NH), and age. Thresholds for speech in background music were significantly correlated with measures of pitch perception and speech in background noise thresholds; auditory status was an important predictor. Conclusions Evidence suggests that speech reception thresholds in background music change as a function of listener age (with more advanced age being detrimental), structural characteristics of different types of music, and hearing status (residual hearing). These findings have implications for everyday listening conditions such as communicating in social or commercial situations in which there is background music. PMID:23342550

  3. Robust and efficient method for matching features in omnidirectional images

    NASA Astrophysics Data System (ADS)

    Zhu, Qinyi; Zhang, Zhijiang; Zeng, Dan

    2018-04-01

    Binary descriptors have been widely used in many real-time applications due to their efficiency. These descriptors are commonly designed for perspective images but perform poorly on omnidirectional images, which are severely distorted. To address this issue, this paper proposes tangent plane BRIEF (TPBRIEF) and adapted log polar grid-based motion statistics (ALPGMS). TPBRIEF projects keypoints to a unit sphere and applies the fixed test set in the BRIEF descriptor on the tangent plane of the unit sphere. The fixed test set is then backprojected onto the original distorted images to construct the distortion-invariant descriptor. TPBRIEF directly enables keypoint detection and feature description on original distorted images, whereas other approaches correct the distortion through image resampling, which introduces artifacts and adds time cost. With ALPGMS, omnidirectional images are divided into circular arches named adapted log polar grids. Whether a match is true or false is then determined by simply thresholding the match numbers in the grid pair where the two matched points are located. Experiments show that TPBRIEF greatly improves the feature matching accuracy and ALPGMS robustly removes wrong matches. Our proposed method outperforms the state-of-the-art methods.

  4. Detection of Orbital Debris Collision Risks for the Automated Transfer Vehicle

    NASA Technical Reports Server (NTRS)

    Peret, L.; Legendre, P.; Delavault, S.; Martin, T.

    2007-01-01

    In this paper, we present a general collision risk assessment method, which has been applied through numerical simulations to the Automated Transfer Vehicle (ATV) case. During the ATV's ascent towards the International Space Station, close approaches between the ATV and objects of the USSTRATCOM catalog will be monitored through collision risk assessment. Usually, collision risk assessment relies on an exclusion volume or a probability threshold method. Probability methods are more effective than exclusion volumes but require accurate covariance data. In this work, we propose to use a criterion defined by an adaptive exclusion area. This criterion does not require any probability calculation but is more effective than exclusion volume methods, as demonstrated by our numerical experiments. The results of these studies, when confirmed and finalized, will be used for ATV operations.

  5. Adaptive thresholding algorithm based on SAR images and wind data to segment oil spills along the northwest coast of the Iberian Peninsula.

    PubMed

    Mera, David; Cotos, José M; Varela-Pet, José; Garcia-Pineda, Oscar

    2012-10-01

    Satellite Synthetic Aperture Radar (SAR) has been established as a useful tool for detecting hydrocarbon spillage on the ocean's surface. Several surveillance applications have been developed based on this technology. Environmental variables such as wind speed should be taken into account for better SAR image segmentation. This paper presents an adaptive thresholding algorithm for detecting oil spills based on SAR data and a wind field estimation, as well as its implementation as part of a functional prototype. The algorithm was adapted to an important shipping route off the Galician coast (northwest Iberian Peninsula) and was developed on the basis of confirmed oil spills. Image testing revealed 99.93% pixel labelling accuracy. By taking advantage of multi-core processor architecture, the prototype was optimized to achieve a nearly 30% improvement in processing time.
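
    A heavily simplified sketch of a wind-adaptive dark-spot threshold is given below. The offsets and wind cutoff are illustrative assumptions, not the paper's calibrated values; the point is only that the threshold below the local mean backscatter is made a function of the estimated wind field.

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def dark_spot_mask(sigma0_db, wind_ms, win=51):
        """Flag pixels darker than the local mean backscatter by a
        wind-dependent offset (the offset values are illustrative only)."""
        local_mean = uniform_filter(sigma0_db, size=win)
        offset_db = np.where(wind_ms < 5.0, 2.5, 4.5)   # smaller offset in low wind
        return sigma0_db < (local_mean - offset_db)
    ```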

  6. Central and rear-edge populations can be equally vulnerable to warming

    NASA Astrophysics Data System (ADS)

    Bennett, Scott; Wernberg, Thomas; Arackal Joy, Bijo; de Bettignies, Thibaut; Campbell, Alexandra H.

    2015-12-01

    Rear (warm) edge populations are often considered more susceptible to warming than central (cool) populations because of the warmer ambient temperatures they experience, but this overlooks the potential for local variation in thermal tolerances. Here we provide conceptual models illustrating how sensitivity to warming is affected throughout a species' geographical range for locally adapted and non-adapted populations. We test these models for a range-contracting seaweed using observations from a marine heatwave and a 12-month experiment, translocating seaweeds among central, present and historic range edge locations. Growth, reproductive development and survivorship display different temperature thresholds among central and rear-edge populations, but share a 2.5 °C anomaly threshold. Range contraction, therefore, reflects variation in local anomalies rather than differences in absolute temperatures. This demonstrates that warming sensitivity can be similar throughout a species' geographical range and highlights the importance of incorporating local adaptation and acclimatization into climate change vulnerability assessments.

  7. Molecular Signaling Network Motifs Provide a Mechanistic Basis for Cellular Threshold Responses

    PubMed Central

    Bhattacharya, Sudin; Conolly, Rory B.; Clewell, Harvey J.; Kaminski, Norbert E.; Andersen, Melvin E.

    2014-01-01

    Background: Increasingly, there is a move toward using in vitro toxicity testing to assess human health risk due to chemical exposure. As with in vivo toxicity testing, an important question for in vitro results is whether there are thresholds for adverse cellular responses. Empirical evaluations may show consistency with thresholds, but the main evidence has to come from mechanistic considerations. Objectives: Cellular response behaviors depend on the molecular pathway and circuitry in the cell and the manner in which chemicals perturb these circuits. Understanding circuit structures that are inherently capable of resisting small perturbations and producing threshold responses is an important step towards mechanistically interpreting in vitro testing data. Methods: Here we have examined dose–response characteristics for several biochemical network motifs. These network motifs are basic building blocks of molecular circuits underpinning a variety of cellular functions, including adaptation, homeostasis, proliferation, differentiation, and apoptosis. For each motif, we present biological examples and models to illustrate how thresholds arise from specific network structures. Discussion and Conclusion: Integral feedback, feedforward, and transcritical bifurcation motifs can generate thresholds. Other motifs (e.g., proportional feedback and ultrasensitivity) produce responses where the slope in the low-dose region is small and stays close to the baseline. Feedforward control may lead to nonmonotonic or hormetic responses. We conclude that network motifs provide a basis for understanding thresholds for cellular responses. Computational pathway modeling of these motifs and their combinations occurring in molecular signaling networks will be a key element in new risk assessment approaches based on in vitro cellular assays. Citation: Zhang Q, Bhattacharya S, Conolly RB, Clewell HJ III, Kaminski NE, Andersen ME. 2014. Molecular signaling network motifs provide a mechanistic basis for cellular threshold responses. Environ Health Perspect 122:1261–1270; http://dx.doi.org/10.1289/ehp.1408244 PMID:25117432

  8. Measurand transient signal suppressor

    NASA Technical Reports Server (NTRS)

    Bozeman, Richard J., Jr. (Inventor)

    1994-01-01

    A transient signal suppressor for use in a control system which is adapted to respond to a change in a physical parameter whenever it crosses a predetermined threshold value in a selected direction of increasing or decreasing values with respect to the threshold value and is sustained for a selected discrete time interval is presented. The suppressor includes a sensor transducer for sensing the physical parameter and generating an electrical input signal whenever the sensed physical parameter crosses the threshold level in the selected direction. A manually operated switch is provided for adapting the suppressor to produce an output drive signal whenever the physical parameter crosses the threshold value in the selected direction of increasing or decreasing values. A time delay circuit is selectively adjustable for suppressing the transducer input signal for a preselected one of a plurality of available discrete suppression times, producing an output signal only if the input signal is sustained for a time greater than the selected suppression time. An electronic gate is coupled to receive the transducer input signal and the timer output signal and produce an output drive signal for energizing a control relay whenever the transducer input is a non-transient signal which is sustained beyond the selected time interval.
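
    In software, the same suppression logic (assert the drive signal only when the input stays beyond the threshold for longer than the selected suppression time) reduces to a counter. The sketch below is a hypothetical software analogue of the patented circuit, not its actual embodiment.

    ```python
    def suppress_transients(samples, threshold, suppression_time, dt, rising=True):
        """Assert the drive signal only when the input has crossed the
        threshold in the selected direction and stayed there longer than
        the selected suppression time."""
        hold_samples = int(suppression_time / dt)
        run, out = 0, []
        for s in samples:
            crossed = s > threshold if rising else s < threshold
            run = run + 1 if crossed else 0
            out.append(run > hold_samples)
        return out

    # A 30 ms spike is suppressed when the suppression time is 50 ms (dt = 10 ms).
    signal = [0, 0, 5, 5, 5, 0, 0, 5, 5, 5, 5, 5, 5, 5, 5]
    print(suppress_transients(signal, threshold=1.0, suppression_time=0.05, dt=0.01))
    ```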

  9. A globally convergent MC algorithm with an adaptive learning rate.

    PubMed

    Peng, Dezhong; Yi, Zhang; Xiang, Yong; Zhang, Haixian

    2012-02-01

    This brief deals with the problem of minor component analysis (MCA). Artificial neural networks can be exploited to achieve the task of MCA. Recent research shows that convergence of neural-network-based MCA algorithms can be guaranteed if the learning rates are less than certain thresholds. However, computing these thresholds requires information about the eigenvalues of the autocorrelation matrix of the data set, which is unavailable in online extraction of the minor component from an input data stream. In this correspondence, we introduce an adaptive learning rate into the OJAn MCA algorithm, such that its convergence condition does not depend on any unobtainable information and can be easily satisfied in practical applications.
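
    The sketch below illustrates the flavor of such an algorithm with a normalized anti-Hebbian minor-component update and a simple decaying learning rate; it is not the OJAn algorithm or the paper's adaptive rate, and the data model is synthetic.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Synthetic 2-D stream; the minor component is the direction of least variance.
    C = np.array([[3.0, 1.0],
                  [1.0, 1.0]])
    L = np.linalg.cholesky(C)

    w = rng.standard_normal(2)
    w /= np.linalg.norm(w)
    for k in range(20000):
        x = L @ rng.standard_normal(2)
        y = w @ x
        eta = 1.0 / (100.0 + k)                 # simple decaying learning rate
        w -= eta * (y * x - y ** 2 * w)         # anti-Hebbian minor-component step
        w /= np.linalg.norm(w)                  # renormalize for stability

    evals, evecs = np.linalg.eigh(C)            # eigh sorts eigenvalues ascending
    print("learned:", w, "  true (up to sign):", evecs[:, 0])
    ```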

  10. Reduction in Dynamic Visual Acuity Reveals Gaze Control Changes Following Spaceflight

    NASA Technical Reports Server (NTRS)

    Peters, Brian T.; Brady, Rachel A.; Miller, Chris; Lawrence, Emily L.; Mulavara, Ajitkumar P.; Bloomberg, Jacob J.

    2010-01-01

    INTRODUCTION: Exposure to microgravity causes adaptive changes in eye-head coordination that can lead to altered gaze control. This could affect postflight visual acuity during head and body motion. The goal of this study was to characterize changes in dynamic visual acuity after long-duration spaceflight. METHODS: Dynamic Visual Acuity (DVA) data from 14 astro/cosmonauts were collected after long-duration (6 months) spaceflight. The difference in acuity between seated and walking conditions provided a metric of change in the subjects' ability to maintain gaze fixation during self-motion. In each condition, a psychophysical threshold detection algorithm was used to display Landolt ring optotypes at a size that was near each subject's acuity threshold. Verbal responses regarding the orientation of the gap were recorded as the optotypes appeared sequentially on a computer display 4 meters away. During the walking trials, subjects walked at 6.4 km/h on a motorized treadmill. RESULTS: A decrement in mean postflight DVA was found, with mean values returning to baseline within 1 week. The population mean showed a consistent improvement in DVA performance, but it was accompanied by high variability. A closer examination of the individual subjects' recovery curves revealed that many did not follow a pattern of continuous improvement with each passing day. When adjusted on the basis of previous long-duration flight experience, the population mean shows a "bounce" in the re-adaptation curve. CONCLUSION: Gaze control during self-motion is altered following long-duration spaceflight, and changes in postflight DVA performance indicate that vestibular re-adaptation may be more complex than a gradual return to normal.

  11. Visual Function Metrics in Early and Intermediate Dry Age-related Macular Degeneration for Use as Clinical Trial Endpoints.

    PubMed

    Cocce, Kimberly J; Stinnett, Sandra S; Luhmann, Ulrich F O; Vajzovic, Lejla; Horne, Anupama; Schuman, Stefanie G; Toth, Cynthia A; Cousins, Scott W; Lad, Eleonora M

    2018-05-01

    To evaluate and quantify visual function metrics to be used as endpoints of age-related macular degeneration (AMD) stages and visual acuity (VA) loss in patients with early and intermediate AMD. Cross-sectional analysis of baseline data from a prospective study. One hundred and one patients were enrolled at Duke Eye Center: 80 patients with early AMD (Age-Related Eye Disease Study [AREDS] stage 2 [n = 33] and intermediate stage 3 [n = 47]) and 21 age-matched, normal controls. A dilated retinal examination, macular pigment optical density measurements, and several functional assessments (best-corrected visual acuity, macular integrity assessment mesopic microperimetry, dark adaptometry, low-luminance visual acuity [LLVA] [standard using a log 2.0 neutral density filter and computerized method], and cone contrast test [CCT]) were performed. Low-luminance deficit (LLD) was defined as the difference in numbers of letters read at standard vs low luminance. Group comparisons were performed to evaluate differences between the control and the early and intermediate AMD groups using 2-sided significance tests. Functional measures that significantly distinguished between normal and intermediate AMD were standard and computerized (0.5 cd/m(2)) LLVA, percent reduced threshold and average threshold on microperimetry, CCTs, and rod intercept on dark adaptation (P < .05). The intermediate group demonstrated deficits in microperimetry reduced threshold, computerized LLD2, and dark adaptation (P < .05) relative to early AMD. Our study suggests that LLVA, microperimetry, CCT, and dark adaptation may serve as functional measures differentiating early-to-intermediate stages of dry AMD.

  12. Impact of tumor size and tracer uptake heterogeneity in (18)F-FDG PET and CT non-small cell lung cancer tumor delineation.

    PubMed

    Hatt, Mathieu; Cheze-le Rest, Catherine; van Baardwijk, Angela; Lambin, Philippe; Pradier, Olivier; Visvikis, Dimitris

    2011-11-01

    The objectives of this study were to investigate the relationship between CT- and (18)F-FDG PET-based tumor volumes in non-small cell lung cancer (NSCLC) and the impact of tumor size and uptake heterogeneity on various approaches to delineating uptake on PET images. Twenty-five NSCLC patients with (18)F-FDG PET/CT were considered. Seventeen underwent surgical resection of their tumor, and the maximum diameter was measured. Two observers manually delineated the tumors on the CT images and the tumor uptake on the corresponding PET images, using a fixed threshold at 50% of the maximum (T(50)), an adaptive threshold methodology, and the fuzzy locally adaptive Bayesian (FLAB) algorithm. Maximum diameters of the delineated volumes were compared with the histopathology reference when available. The volumes of the tumors were compared, and correlations between the anatomic volume and PET uptake heterogeneity and the differences between delineations were investigated. All maximum diameters measured on PET and CT images significantly correlated with the histopathology reference (r > 0.89, P < 0.0001). Significant differences were observed among the approaches: CT delineation resulted in large overestimation (+32% ± 37%), whereas all delineations on PET images resulted in underestimation (from -15% ± 17% for T(50) to -4% ± 8% for FLAB) except manual delineation (+8% ± 17%). Overall, CT volumes were significantly larger than PET volumes (55 ± 74 cm(3) for CT vs. from 18 ± 25 to 47 ± 76 cm(3) for PET). A significant correlation was found between anatomic tumor size and heterogeneity (larger lesions were more heterogeneous). Finally, the more heterogeneous the tumor uptake, the larger was the underestimation of PET volumes by threshold-based techniques. Volumes based on CT images were larger than those based on PET images. Tumor size and tracer uptake heterogeneity have an impact on threshold-based methods, which should not be used for the delineation of cases of large heterogeneous NSCLC, as these methods tend to largely underestimate the spatial extent of the functional tumor in such cases. For an accurate delineation of PET volumes in NSCLC, advanced image segmentation algorithms able to deal with tracer uptake heterogeneity should be preferred.
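
    The fixed-threshold delineation used as one of the approaches here (T(50)) is straightforward to express in code; the sketch below assumes an SUV array and an illustrative voxel volume.

    ```python
    import numpy as np

    def fixed_threshold_volume(suv, frac=0.5, voxel_ml=0.016):
        """T(50)-style delineation: keep voxels at or above frac * max uptake
        and report the resulting volume (the voxel size is illustrative)."""
        mask = suv >= frac * suv.max()
        return mask, float(mask.sum()) * voxel_ml

    # mask, volume_ml = fixed_threshold_volume(suv_array, frac=0.5)
    ```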

  13. Integral equation methods for computing likelihoods and their derivatives in the stochastic integrate-and-fire model.

    PubMed

    Paninski, Liam; Haith, Adrian; Szirtes, Gabor

    2008-02-01

    We recently introduced likelihood-based methods for fitting stochastic integrate-and-fire models to spike train data. The key component of this method involves the likelihood that the model will emit a spike at a given time t. Computing this likelihood is equivalent to computing a Markov first passage time density (the probability that the model voltage crosses threshold for the first time at time t). Here we detail an improved method for computing this likelihood, based on solving a certain integral equation. This integral equation method has several advantages over the techniques discussed in our previous work: in particular, the new method has fewer free parameters and is easily differentiable (for gradient computations). The new method is also easily adaptable for the case in which the model conductance, not just the input current, is time-varying. Finally, we describe how to incorporate large deviations approximations to very small likelihoods.
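
    For intuition, the first-passage density itself can be estimated by brute-force Monte Carlo, as in the sketch below, which simulates a drifting Brownian "voltage" to threshold. This is a crude stand-in for, not an implementation of, the integral-equation method, and all parameters are illustrative.

    ```python
    import numpy as np

    def first_passage_density(mu=1.0, sigma=0.5, v_th=1.0, dt=1e-3, t_max=3.0, n=20000):
        """Monte Carlo estimate of the density of the first time a drifting
        Brownian voltage crosses threshold; parameters are illustrative."""
        rng = np.random.default_rng(0)
        steps = int(t_max / dt)
        v = np.zeros(n)
        alive = np.ones(n, dtype=bool)
        counts = np.zeros(steps)
        for i in range(steps):
            n_alive = int(alive.sum())
            if n_alive == 0:
                break
            v[alive] += mu * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_alive)
            crossed = alive & (v >= v_th)
            counts[i] = crossed.sum()
            alive &= ~crossed
        return counts / (n * dt)      # density estimate on the grid t = (i + 1) * dt
    ```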

  14. The Role of Parametric Assumptions in Adaptive Bayesian Estimation

    ERIC Educational Resources Information Center

    Alcala-Quintana, Rocio; Garcia-Perez, Miguel A.

    2004-01-01

    Variants of adaptive Bayesian procedures for estimating the 5% point on a psychometric function were studied by simulation. Bias and standard error were the criteria to evaluate performance. The results indicated a superiority of (a) uniform priors, (b) model likelihood functions that are odd symmetric about threshold and that have parameter…

  15. An electrophysiological investigation of the receptor apparatus of the duck's bill

    PubMed Central

    Gregory, J. E.

    1973-01-01

    1. The properties of receptors in the duck's bill have been studied by recording from units isolated by dissecting fine filaments from the maxillary and ophthalmic nerves. 2. The units studied were divisible into three groups, phasic mechanoreceptors responsive to vibration, thermoreceptive units, and high threshold mechanoreceptors. 3. Vibration-sensitive mechanoreceptors (113 units) had small receptive fields, showed a rapidly adapting discharge to mechanical stimulation of the bill, were sensitive to vibratory but not to thermal stimuli and showed no background discharge. 4. Temperature receptors (twenty-one units) were insensitive to mechanical stimulation and showed a temperature-dependent background discharge. Sudden cooling produced a transient increase in discharge frequency. 5. High threshold mechanosensitive units (eight units) gave a slowly adapting discharge to strong mechanical stimulation and were insensitive to vibratory and thermal stimulation. 6. It is concluded that the low-threshold, vibration-sensitive responses come from Herbst corpuscles. No specific function can yet be assigned to the Grandry corpuscles. PMID:4689962

  16. The NTID speech recognition test: NSRT(®).

    PubMed

    Bochner, Joseph H; Garrison, Wayne M; Doherty, Karen A

    2015-07-01

    The purpose of this study was to collect and analyse data necessary for expansion of the NSRT item pool and to evaluate the NSRT adaptive testing software. Participants were administered pure-tone and speech recognition tests including W-22 and QuickSIN, as well as a set of 323 new NSRT items and NSRT adaptive tests in quiet and background noise. Performance on the adaptive tests was compared to pure-tone thresholds and performance on other speech recognition measures. The 323 new items were subjected to Rasch scaling analysis. Seventy adults with mild to moderately severe hearing loss participated in this study. Their mean age was 62.4 years (sd = 20.8). The 323 new NSRT items fit very well with the original item bank, enabling the item pool to be more than doubled in size. Data indicate high reliability coefficients for the NSRT and moderate correlations with pure-tone thresholds (PTA and HFPTA) and other speech recognition measures (W-22, QuickSIN, and SRT). The adaptive NSRT is an efficient and effective measure of speech recognition, providing valid and reliable information concerning respondents' speech perception abilities.

  17. Searching for signposts: Adaptive planning thresholds in long-term water supply projections for the Western U.S.

    NASA Astrophysics Data System (ADS)

    Robinson, B.; Herman, J. D.

    2017-12-01

    Long-term water supply planning is challenged by highly uncertain streamflow projections across climate models and emissions scenarios. Recent studies have devised infrastructure and policy responses that can withstand or adapt to an ensemble of scenarios, particularly those outside the envelope of historical variability. An important aspect of this process is whether the proposed thresholds for adaptation (i.e., observations that trigger a response) truly represent a trend toward future change. Here we propose an approach to connect observations of annual mean streamflow with long-term projections by filtering GCM-based streamflow ensembles. Visualizations are developed to investigate whether observed changes in mean annual streamflow can be linked to projected changes in end-of-century mean and variance relative to the full ensemble. A key focus is identifying thresholds that point to significant long-term changes in the distribution of streamflow (+/- 20% or greater) as early as possible. The analysis is performed on 87 sites in the Western United States, using streamflow ensembles through 2100 from a recent study by the U.S. Bureau of Reclamation. Results focus on three primary questions: (1) how many years of observed data are needed to identify the most extreme scenarios, and by what year can they be identified? (2) are these features different between sites? and (3) using this analysis, do observed flows to date at each site point to significant long-term changes? This study addresses the challenge of severe uncertainty in long-term streamflow projections by identifying key thresholds that can be observed to support water supply planning.

  18. Lactate threshold responses to a season of professional British youth soccer

    PubMed Central

    McMillan, K; Helgerud, J; Grant, S; Newell, J; Wilson, J; Macdonald, R; Hoff, J

    2005-01-01

    Objective: To examine the changes in aerobic endurance performance of professional youth soccer players throughout the soccer season. Methods: Nine youth soccer players were tested at six different time points throughout the soccer season by sub-maximal blood lactate assessment, using an incremental treadmill protocol. Whole blood lactate concentration and heart frequency (Hf) were determined at each exercise stage. Running velocities at the first lactate inflection point (v-Tlac) and at a blood lactate concentration of 4 mmol l(-1) (v-4mM) were determined. Results: Running velocity at the two lactate thresholds increased from the start of pre-season training to the early weeks of the competitive season, from 11.67 (0.29) to 12.96 (0.28) km h(-1) for v-Tlac, and from 13.62 (0.25) to 14.67 (0.24) km h(-1) for v-4mM (p<0.001). However, v-Tlac and v-4mM when expressed relative to maximum heart frequency (Hfmax) remained unchanged. The Hf to blood lactate concentration relationship was unchanged after the pre-season training period. The two expressions of lactate threshold did not reveal differences between each other. Conclusion: Running velocity at v-Tlac and v-4mM increased significantly over the pre-season period, but v-Tlac and v-4mM were unchanged when expressed relative to Hfmax. This finding may indicate that increased endurance performance may be mainly attributable to alterations in Vo2max. Although lactate assessment of soccer players is useful for determining endurance training adaptations in soccer players, additional assessment of the other two determinants of endurance performance (Vo2max and running economy) may provide more useful information for determining physiological adaptations resulting from soccer training and training interventions. PMID:15976165

  19. Comparison of probabilistic and deterministic fiber tracking of cranial nerves.

    PubMed

    Zolal, Amir; Sobottka, Stephan B; Podlesek, Dino; Linn, Jennifer; Rieger, Bernhard; Juratli, Tareq A; Schackert, Gabriele; Kitzler, Hagen H

    2017-09-01

    OBJECTIVE The depiction of cranial nerves (CNs) using diffusion tensor imaging (DTI) is of great interest in skull base tumor surgery and DTI used with deterministic tracking methods has been reported previously. However, there are still no good methods usable for the elimination of noise from the resulting depictions. The authors have hypothesized that probabilistic tracking could lead to more accurate results, because it more efficiently extracts information from the underlying data. Moreover, the authors have adapted a previously described technique for noise elimination using gradual threshold increases to probabilistic tracking. To evaluate the utility of this new approach, a comparison is provided with this work between the gradual threshold increase method in probabilistic and deterministic tracking of CNs. METHODS Both tracking methods were used to depict CNs II, III, V, and the VII+VIII bundle. Depiction of 240 CNs was attempted with each of the above methods in 30 healthy subjects, which were obtained from 2 public databases: the Kirby repository (KR) and Human Connectome Project (HCP). Elimination of erroneous fibers was attempted by gradually increasing the respective thresholds (fractional anisotropy [FA] and probabilistic index of connectivity [PICo]). The results were compared with predefined ground truth images based on corresponding anatomical scans. Two label overlap measures (false-positive error and Dice similarity coefficient) were used to evaluate the success of both methods in depicting the CN. Moreover, the differences between these parameters obtained from the KR and HCP (with higher angular resolution) databases were evaluated. Additionally, visualization of 10 CNs in 5 clinical cases was attempted with both methods and evaluated by comparing the depictions with intraoperative findings. RESULTS Maximum Dice similarity coefficients were significantly higher with probabilistic tracking (p < 0.001; Wilcoxon signed-rank test). The false-positive error of the last obtained depiction was also significantly lower in probabilistic than in deterministic tracking (p < 0.001). The HCP data yielded significantly better results in terms of the Dice coefficient in probabilistic tracking (p < 0.001, Mann-Whitney U-test) and in deterministic tracking (p = 0.02). The false-positive errors were smaller in HCP data in deterministic tracking (p < 0.001) and showed a strong trend toward significance in probabilistic tracking (p = 0.06). In the clinical cases, the probabilistic method visualized 7 of 10 attempted CNs accurately, compared with 3 correct depictions with deterministic tracking. CONCLUSIONS High angular resolution DTI scans are preferable for the DTI-based depiction of the cranial nerves. Probabilistic tracking with a gradual PICo threshold increase is more effective for this task than the previously described deterministic tracking with a gradual FA threshold increase and might represent a method that is useful for depicting cranial nerves with DTI since it eliminates the erroneous fibers without manual intervention.

  20. Increasing the Accuracy of Volume and ADC Delineation for Heterogeneous Tumor on Diffusion-Weighted MRI: Correlation with PET/CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gong, Nan-Jie; Wong, Chun-Sing, E-mail: drcswong@gmail.com; Chu, Yiu-Ching

    2013-10-01

    Purpose: To improve the accuracy of volume and apparent diffusion coefficient (ADC) measurements in diffusion-weighted magnetic resonance imaging (MRI), we proposed a method based on thresholding both the b0 images and the ADC maps. Methods and Materials: In 21 heterogeneous lesions from patients with metastatic gastrointestinal stromal tumors (GIST), gross lesions were manually contoured, and the corresponding volumes and ADCs were denoted as gross tumor volume (GTV) and gross ADC (ADC(g)), respectively. Using a k-means clustering algorithm, the probable high-cellularity tumor tissues were selected based on b0 images and ADC maps. The ADC and volume of the tissues selected using the proposed method were denoted as thresholded ADC (ADC(thr)) and high-cellularity tumor volume (HCTV), respectively. The metabolic tumor volume (MTV) in positron emission tomography (PET)/computed tomography (CT) was measured using 40% of the maximum standard uptake value (SUV(max)) as the lower threshold, and the corresponding mean SUV (SUV(mean)) was also measured. Results: HCTV had excellent concordance with MTV according to Pearson's correlation (r=0.984, P<.001) and linear regression (slope = 1.085, intercept = −4.731). In contrast, GTV overestimated the volume and differed significantly from MTV (P=.005). ADC(thr) correlated significantly and strongly with SUV(mean) (r=−0.807, P<.001) and SUV(max) (r=−0.843, P<.001); both correlations were stronger than those of ADC(g). Conclusions: The proposed lesion-adaptive semiautomatic method can help segment high-cellularity tissues that match hypermetabolic tissues in PET/CT and enables more accurate volume and ADC delineation on diffusion-weighted MR images of GIST.
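
    The clustering step can be sketched as follows: k-means on (b0, ADC) voxel features, keeping the cluster with the lowest mean ADC as the putative high-cellularity tissue. This illustrates the idea, not the paper's exact pipeline; the feature standardization and cluster count are assumptions.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    def high_cellularity_mask(b0, adc, n_clusters=3):
        """Cluster voxels on standardized (b0, ADC) features and keep the
        cluster with the lowest mean ADC as putative high-cellularity tissue."""
        feats = np.column_stack([b0.ravel(), adc.ravel()]).astype(float)
        feats = (feats - feats.mean(axis=0)) / feats.std(axis=0)
        labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(feats)
        mean_adc = [adc.ravel()[labels == c].mean() for c in range(n_clusters)]
        return (labels == int(np.argmin(mean_adc))).reshape(adc.shape)
    ```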

  1. Adaptation of exercise ventilation during an actively-induced hyperthermia following passive heat acclimation.

    PubMed

    Beaudin, Andrew E; Clegg, Miriam E; Walsh, Michael L; White, Matthew D

    2009-09-01

    Hyperthermia-induced hyperventilation has been proposed to be a human thermolytic thermoregulatory response and to contribute to the disproportionate increase in exercise ventilation (VE) relative to metabolic needs during high-intensity exercise. In this study it was hypothesized that VE would adapt similarly to human eccrine sweating (E(SW)) following a passive heat acclimation (HA). All participants performed an incremental exercise test on a cycle ergometer from rest to exhaustion before and after a 10-day passive exposure for 2 h/day to either 50 °C and 20% relative humidity (RH) (n = 8, Acclimation group) or 24 °C and 32% RH (n = 4, Control group). Attainment of HA was confirmed by a significant decrease (P = 0.025) in the esophageal temperature (T(es)) threshold for the onset of E(SW) and a significantly elevated E(SW) (P ≤ 0.040) during the post-HA exercise tests. HA also gave a significant decrease in resting T(es) (P = 0.006) and a significant increase in plasma volume (P = 0.005). Ventilatory adaptations during exercise tests following HA included significantly decreased T(es) thresholds (P ≤ 0.005) for the onset of increases in the ventilatory equivalents for O(2) (VE/VO(2)) and CO(2) (VE/VCO(2)) and a significantly increased VE (P ≤ 0.017) at all levels of T(es). The elevated VE was a function of a significantly greater tidal volume (P = 0.003) at lower T(es) and of breathing frequency (P ≤ 0.005) at higher T(es). Following HA, the ventilatory threshold was uninfluenced, and the relationships between VO(2) and either VE/VO(2) or VE/VCO(2) did not explain the resulting hyperventilation. In conclusion, the results support that exercise VE following passive HA responds similarly to E(SW), and the mechanism accounting for this adaptation is independent of changes of the ventilatory threshold or of the relationships between VO(2) and each of VE/VO(2) and VE/VCO(2).

  2. Influence of Host Quality and Temperature on the Biology of Diaeretiella rapae (Hymenoptera: Braconidae, Aphidiinae).

    PubMed

    Souza, M F; Veloso, L F A; Sampaio, M V; Davis, J A

    2017-08-01

    Biological features of Diaeretiella rapae (McIntosh), an aphid parasitoid, are conditioned by temperature and host. However, studies of host quality changes due to temperature adaptability have not been performed previously. Therefore, this study evaluated the adaptability of Lipaphis pseudobrassicae (Davis) and Myzus persicae (Sulzer) to high temperature, the effect of high temperature on their quality as hosts for D. rapae, and the parasitoid's thermal threshold. Aphid development, survivorship, fecundity, and longevity were compared at 19 °C and 28 °C. Host quality at different temperatures was determined through evaluation of parasitoid biology. The thermal threshold of D. rapae was determined using development time data. At 28 °C, development time, rate of immature survival, and total fecundity were greater in L. pseudobrassicae than in M. persicae. Development time of D. rapae in L. pseudobrassicae was shorter than that in M. persicae at 28 °C and 31 °C for females and at 31 °C for males. The thermal threshold of D. rapae was 6.38 °C and 3.33 °C for females and 4.45 °C and 3.63 °C for males developed on L. pseudobrassicae and M. persicae, respectively. Diaeretiella rapae size gain was greater in L. pseudobrassicae than in M. persicae at 25 °C and 28 °C. Lipaphis pseudobrassicae showed better adaptation than M. persicae to elevated temperatures, which resulted in a better-quality host for D. rapae at 28 °C and 31 °C and a higher lower thermal threshold when the parasitoid developed within L. pseudobrassicae. The host's adaptation to high temperatures is a determinant of host quality for the parasitoid under the same climatic conditions.
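
    The lower thermal threshold is conventionally obtained from the x-intercept of a linear fit of development rate against temperature; the sketch below uses hypothetical rearing data, not the study's measurements.

    ```python
    import numpy as np

    # Hypothetical rearing data: temperature (°C) vs. development time (days).
    temps = np.array([16.0, 19.0, 22.0, 25.0, 28.0])
    dev_days = np.array([28.0, 19.5, 14.8, 12.0, 10.2])

    rate = 1.0 / dev_days                            # development rate (1/day)
    slope, intercept = np.polyfit(temps, rate, 1)    # rate = intercept + slope * T
    t_lower = -intercept / slope                     # lower thermal threshold (x-intercept)
    thermal_constant = 1.0 / slope                   # degree-days to complete development
    print(f"lower threshold = {t_lower:.1f} °C, K = {thermal_constant:.0f} degree-days")
    ```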

  3. Visual adaptation and the amplitude spectra of radiological images.

    PubMed

    Kompaniez-Dunigan, Elysse; Abbey, Craig K; Boone, John M; Webster, Michael A

    2018-01-01

    We examined how visual sensitivity and perception are affected by adaptation to the characteristic amplitude spectra of X-ray mammography images. Because of the transmissive nature of X-ray photons, these images have relatively more low-frequency variability than natural images, a difference that is captured by a steeper slope of the amplitude spectrum (~−1.5) compared to the ~1/f (slope of −1) spectra common to natural scenes. Radiologists inspecting these images are therefore exposed to a different balance of spectral components, and we measured how this exposure might alter spatial vision. Observers (who were not radiologists) were adapted to images of normal mammograms or to the same images sharpened by filtering the amplitude spectra to shallower slopes. Prior adaptation to the original mammograms significantly biased judgments of image focus relative to the sharpened images, demonstrating that the images are sufficient to induce substantial after-effects. The adaptation also induced strong losses in threshold contrast sensitivity that were selective for lower spatial frequencies, though these losses were very similar to the threshold changes induced by the sharpened images. Visual search for targets (Gaussian blobs) added to the images was also not differentially affected by adaptation to the original or sharper images. These results complement our previous studies examining how observers adapt to the textural properties or phase spectra of mammograms. Like the phase spectrum, adaptation to the amplitude spectrum of mammograms alters spatial sensitivity and visual judgments about the images. However, unlike the phase spectrum, adaptation to the amplitude spectra did not confer a selective performance advantage relative to more natural spectra.
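
    Re-sloping an amplitude spectrum, as done to produce the sharpened adaptors, amounts to multiplying the Fourier amplitudes by a power of radial frequency while leaving the phase spectrum untouched. A minimal sketch of this kind of filter (not the study's exact stimulus-generation code):

    ```python
    import numpy as np

    def reslope(image, d_alpha=0.5):
        """Multiply the amplitude spectrum by f**d_alpha, keeping the phase;
        d_alpha = +0.5 turns a slope of about -1.5 into the ~-1.0 of natural
        scenes, i.e. a sharpening filter."""
        F = np.fft.fft2(image)
        fy = np.fft.fftfreq(image.shape[0])[:, None]
        fx = np.fft.fftfreq(image.shape[1])[None, :]
        f = np.hypot(fy, fx)
        f[0, 0] = 1.0                        # leave the DC component unscaled
        return np.fft.ifft2(F * f ** d_alpha).real
    ```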

  4. Magnetic Flux Leakage Sensing and Artificial Neural Network Pattern Recognition-Based Automated Damage Detection and Quantification for Wire Rope Non-Destructive Evaluation.

    PubMed

    Kim, Ju-Won; Park, Seunghee

    2018-01-02

    In this study, a magnetic flux leakage (MFL) method, known to be a suitable non-destructive evaluation (NDE) method for continuum ferromagnetic structures, was used to detect local damage when inspecting steel wire ropes. To demonstrate the proposed damage detection method through experiments, a multi-channel MFL sensor head was fabricated using a Hall sensor array and magnetic yokes adapted to the wire rope. To prepare the damaged wire-rope specimens, artificial damage of several different extents was inflicted on wire ropes. The MFL sensor head was used to scan the damaged specimens to measure the magnetic flux signals. After obtaining the signals, a series of signal processing steps, including an enveloping process based on the Hilbert transform (HT), was performed to better recognize the MFL signals by reducing the unexpected noise. The enveloped signals were then analyzed for objective damage detection by comparing them with a threshold established from the generalized extreme value (GEV) distribution. The detected MFL signals that exceeded the threshold were analyzed quantitatively by extracting magnetic features from the MFL signals. To improve the quantitative analysis, damage indexes based on the relationship between the enveloped MFL signal and the threshold value were utilized, along with a general damage index for the MFL method. The detected MFL signals for each damage type were quantified using the proposed damage indexes and the general damage indexes for the MFL method. Finally, an artificial neural network (ANN)-based multi-stage pattern recognition method using the extracted multi-scale damage indexes was implemented to automatically estimate the severity of the damage. The reliability of the MFL-based automated wire rope NDE method was evaluated by comparing the repeatedly estimated damage sizes with the actual damage sizes.
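
    The envelope-plus-GEV-threshold stage can be sketched as below. The block size and false-alarm probability are illustrative assumptions, and the GEV is fitted here to block maxima of the scanned trace itself, which presumes the trace is mostly damage-free.

    ```python
    import numpy as np
    from scipy.signal import hilbert
    from scipy.stats import genextreme

    def mfl_damage_candidates(trace, block=256, p_false=1e-4):
        """Envelope the MFL trace via the Hilbert transform, fit a GEV to
        block maxima, and flag samples above the (1 - p_false) quantile."""
        env = np.abs(hilbert(trace))
        n_blocks = env.size // block
        maxima = env[: n_blocks * block].reshape(n_blocks, block).max(axis=1)
        c, loc, scale = genextreme.fit(maxima)
        thr = genextreme.ppf(1.0 - p_false, c, loc=loc, scale=scale)
        return env > thr, float(thr)
    ```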

  5. Efficient and robust pupil size and blink estimation from near-field video sequences for human-machine interaction.

    PubMed

    Chen, Siyuan; Epps, Julien

    2014-12-01

    Monitoring pupil and blink dynamics has applications in cognitive load measurement during human-machine interaction. However, accurate, efficient, and robust pupil size and blink estimation pose significant challenges to the efficacy of real-time applications due to the variability of eye images; hence, to date, such methods have required manual intervention for fine-tuning of parameters. In this paper, a novel self-tuning threshold method, which is applicable to any infrared-illuminated eye images without a tuning parameter, is proposed for segmenting the pupil from the background images recorded by a low-cost webcam placed near the eye. A convex hull and a dual-ellipse fitting method are also proposed to select pupil boundary points and to detect the eyelid occlusion state. Experimental results on a realistic video dataset show that the measurement accuracy of the proposed methods is higher than that of widely used manually tuned parameter methods or fixed-parameter methods. Importantly, the method demonstrates convenience and robustness for an accurate and fast estimate of eye activity in the presence of variations due to different users, task types, load, and environments. Cognitive load measurement in human-machine interaction can benefit from this computationally efficient implementation without requiring a threshold calibration beforehand. Thus, one can envisage a mini IR camera embedded in a lightweight glasses frame, like Google Glass, for convenient applications of real-time adaptive aiding and task management in the future.
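
    A hypothetical self-tuning rule in this spirit (not the paper's method): since the pupil is the darkest mode in an IR eye image, place the threshold at the first valley after the dark peak of the smoothed intensity histogram. The window sizes below are assumptions.

    ```python
    import numpy as np

    def pupil_threshold(gray_eye):
        """Threshold at the first valley after the dark (pupil) peak of the
        smoothed intensity histogram of an 8-bit IR eye image."""
        hist, _ = np.histogram(gray_eye, bins=256, range=(0, 256))
        smooth = np.convolve(hist, np.ones(9) / 9.0, mode="same")
        peak = int(np.argmax(smooth[:128]))              # dark mode of the image
        valley = peak + int(np.argmin(smooth[peak:peak + 80]))
        return valley

    # mask = gray_eye < pupil_threshold(gray_eye)
    ```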

  6. Ecological genomics meets community-level modelling of biodiversity: mapping the genomic landscape of current and future environmental adaptation.

    PubMed

    Fitzpatrick, Matthew C; Keller, Stephen R

    2015-01-01

    Local adaptation is a central feature of most species occupying spatially heterogeneous environments, and may factor critically in responses to environmental change. However, most efforts to model the response of species to climate change ignore intraspecific variation due to local adaptation. Here, we present a new perspective on spatial modelling of organism-environment relationships that combines genomic data and community-level modelling to develop scenarios regarding the geographic distribution of genomic variation in response to environmental change. Rather than modelling species within communities, we use these techniques to model large numbers of loci across genomes. Using balsam poplar (Populus balsamifera) as a case study, we demonstrate how our framework can accommodate nonlinear responses of loci to environmental gradients. We identify a threshold response to temperature in the circadian clock gene GIGANTEA-5 (GI5), suggesting that this gene has experienced strong local adaptation to temperature. We also demonstrate how these methods can map ecological adaptation from genomic data, including the identification of predicted differences in the genetic composition of populations under current and future climates. Community-level modelling of genomic variation represents an important advance in landscape genomics and spatial modelling of biodiversity that moves beyond species-level assessments of climate change vulnerability.

  7. Dynamic Network Selection for Multicast Services in Wireless Cooperative Networks

    NASA Astrophysics Data System (ADS)

    Chen, Liang; Jin, Le; He, Feng; Cheng, Hanwen; Wu, Lenan

    In next-generation mobile multimedia communications, different wireless access networks are expected to cooperate. However, it is a challenging task to choose an optimal transmission path in this scenario. This paper focuses on the problem of selecting the optimal access network for multicast services in cooperative mobile and broadcasting networks. An algorithm is proposed which considers multiple decision factors and multiple optimization objectives. An analytic hierarchy process (AHP) method is applied to schedule the service queue, and an artificial neural network (ANN) is used to improve the flexibility of the algorithm. Simulation results show that by applying the AHP method, a group of weight ratios can be obtained that improves the performance on multiple objectives, and that the ANN method is effective in adaptively adjusting the weight ratios when users' new waiting thresholds are generated.
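
    The AHP step, deriving weight ratios from a pairwise-comparison matrix via its principal eigenvector, can be sketched as follows; the decision factors and comparison values are invented for illustration.

    ```python
    import numpy as np

    def ahp_weights(pairwise, random_index=0.58):
        """Weights from the principal eigenvector of a reciprocal comparison
        matrix, plus Saaty's consistency ratio (random_index 0.58 is for n=3)."""
        n = pairwise.shape[0]
        evals, evecs = np.linalg.eig(pairwise)
        k = int(np.argmax(evals.real))
        w = np.abs(evecs[:, k].real)
        w /= w.sum()
        ci = (evals.real[k] - n) / (n - 1)       # consistency index
        return w, ci / random_index              # consistency ratio

    # Invented decision factors: bandwidth vs. delay vs. cost.
    A = np.array([[1.0, 3.0, 5.0],
                  [1/3, 1.0, 2.0],
                  [1/5, 1/2, 1.0]])
    weights, cr = ahp_weights(A)
    print("weights:", weights.round(3), " consistency ratio:", round(float(cr), 3))
    ```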

  8. An evidence- and risk-based approach to a harmonized laboratory alert list in Australia and New Zealand.

    PubMed

    Campbell, Craig A; Lam, Que; Horvath, Andrea R

    2018-04-19

    Individual laboratories are required to compose an alert list for identifying critical and significant risk results. The high-risk result working party of the Royal College of Pathologists of Australasia (RCPA) and the Australasian Association of Clinical Biochemists (AACB) has developed a risk-based approach for a harmonized alert list for laboratories throughout Australia and New Zealand. The six-step process for alert threshold identification and assessment involves reviewing the literature, rating the available evidence, performing a risk analysis, assessing method transferability, considering workload implications and seeking endorsement from stakeholders. To demonstrate this approach, a worked example for deciding the upper alert threshold for potassium is described. The findings of the worked example are for infants aged 0-6 months, a recommended upper potassium alert threshold of >7.0 mmol/L in serum and >6.5 mmol/L in plasma, and for individuals older than 6 months, a threshold of >6.2 mmol/L in both serum and plasma. Limitations in defining alert thresholds include the lack of well-designed studies that measure the relationship between high-risk results and patient outcomes or the benefits of treatment to prevent harm, and the existence of a wide range of clinical practice guidelines with conflicting decision points at which treatment is required. The risk-based approach described presents a transparent, evidence- and consensus-based methodology that can be used by any laboratory when designing an alert list for local use. The RCPA-AACB harmonized alert list serves as a starter set for further local adaptation or adoption after consultation with clinical users.

  9. Choice of Grating Orientation for Evaluation of Peripheral Vision

    PubMed Central

    Venkataraman, Abinaya Priya; Winter, Simon; Rosén, Robert; Lundström, Linda

    2016-01-01

    Purpose: Peripheral resolution acuity depends on the orientation of the stimuli. However, it is uncertain whether such a meridional effect also exists for peripheral detection tasks, because they are affected by optical errors. Knowledge of the quantitative differences in acuity for different grating orientations is crucial for choosing the appropriate stimuli for evaluations of peripheral resolution and detection tasks. We assessed resolution and detection thresholds for different grating orientations in the peripheral visual field. Methods: Resolution and detection thresholds were evaluated for gratings of four different orientations in eight different visual field meridians in the 20-deg visual field in white light. Detection measurements in monochromatic light (543 nm; bandwidth, 10 nm) were also performed to evaluate the effects of chromatic aberration on the meridional effect. A combination of trial lenses and an adaptive optics system was used to correct the monochromatic lower- and higher-order aberrations. Results: For both resolution and detection tasks, gratings parallel to the visual field meridian had better thresholds compared with the perpendicular gratings, whereas the two oblique gratings had similar thresholds. The parallel and perpendicular grating acuity differences for resolution and detection tasks were 0.16 logMAR and 0.11 logMAD, respectively. Elimination of chromatic errors did not affect the meridional preference in detection acuity. Conclusions: Similar to peripheral resolution, detection also shows a meridional effect that appears to have a neural origin. The threshold difference seen for parallel and perpendicular gratings suggests the use of two oblique gratings as stimuli in alternative forced-choice procedures for peripheral vision evaluation to reduce measurement variation. PMID:26889822

  10. Algorithm for improving psychophysical threshold estimates by detecting sustained inattention in experiments using PEST.

    PubMed

    Rinderknecht, Mike D; Ranzani, Raffaele; Popp, Werner L; Lambercy, Olivier; Gassert, Roger

    2018-05-10

    Psychophysical procedures are applied in various fields to assess sensory thresholds. During experiments, sampled psychometric functions are usually assumed to be stationary. However, perception can be altered, for example by loss of attention to the presentation of stimuli, leading to biased data, which results in poor threshold estimates. The few existing approaches attempting to identify non-stationarities either detect only whether there was a change in perception, or are not suitable for experiments with a relatively small number of trials (e.g., [Formula: see text] 300). We present a method to detect inattention periods on a trial-by-trial basis with the aim of improving threshold estimates in psychophysical experiments using the adaptive sampling procedure Parameter Estimation by Sequential Testing (PEST). The performance of the algorithm was evaluated in computer simulations modeling inattention, and tested in a behavioral experiment on proprioceptive difference threshold assessment in 20 stroke patients, a population where attention deficits are likely to be present. Simulations showed that estimation errors could be reduced by up to 77% for inattentive subjects, even in sequences with fewer than 100 trials. In the behavioral data, inattention was detected in 14% of assessments, and applying the proposed algorithm resulted in reduced test-retest variability in 73% of these corrected assessment pairs. The novel algorithm complements existing approaches and, besides being applicable post hoc, could also be used online to prevent collection of biased data. This could have important implications in assessment practice by shortening experiments and improving estimates, especially for clinical settings.

  11. Thresholding Based on Maximum Weighted Object Correlation for Rail Defect Detection

    NASA Astrophysics Data System (ADS)

    Li, Qingyong; Huang, Yaping; Liang, Zhengping; Luo, Siwei

    Automatic thresholding is an important technique for rail defect detection, but traditional methods are not well suited to the characteristics of this application. This paper proposes the Maximum Weighted Object Correlation (MWOC) thresholding method, which fits the characteristics that rail images are unimodal and that the defect proportion is small. MWOC selects a threshold by optimizing the product of the object correlation and a weight term that expresses the proportion of thresholded defects. Our experimental results demonstrate that MWOC achieves a misclassification error of 0.85% and outperforms other well-established thresholding methods, including the Otsu, maximum correlation, maximum entropy and valley-emphasis methods, for the application of rail defect detection.
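
    The MWOC criterion itself is defined in the paper; the search structure, though, is a plain exhaustive scan over candidate thresholds. The sketch below (Python/numpy) shows that scan with Otsu's between-class variance standing in as a placeholder criterion; swapping in the weighted object-correlation term would yield MWOC:

        import numpy as np

        def best_threshold(img: np.ndarray, criterion) -> int:
            """Exhaustively scan 8-bit thresholds, keeping the maximizer."""
            gray = img.astype(float).ravel()
            return max(range(1, 255), key=lambda t: criterion(gray, t))

        def between_class_variance(gray: np.ndarray, t: float) -> float:
            # Placeholder criterion (Otsu); MWOC instead uses object
            # correlation times a defect-proportion weight.
            fg, bg = gray[gray > t], gray[gray <= t]
            if fg.size == 0 or bg.size == 0:
                return -np.inf
            w1, w2 = fg.size / gray.size, bg.size / gray.size
            return w1 * w2 * (fg.mean() - bg.mean()) ** 2

        rng = np.random.default_rng(0)
        img = rng.normal(100, 15, (64, 64))
        img[20:30, 20:30] += 80               # small bright "defect"
        print(best_threshold(img.clip(0, 255), between_class_variance))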

  12. Moving human full body and body parts detection, tracking, and applications on human activity estimation, walking pattern and face recognition

    NASA Astrophysics Data System (ADS)

    Chen, Hai-Wen; McGurr, Mike

    2016-05-01

    We have developed a new way of detecting and tracking human full bodies and body parts with color (intensity) patch morphological segmentation and adaptive thresholding for security surveillance cameras. An adaptive threshold scheme has been developed for dealing with body size changes, illumination condition changes, and cross-camera parameter changes. Tests with the PETS 2009 and 2014 datasets show that we can obtain a high probability of detection and a low probability of false alarm for full-body detection. Test results indicate that our human full-body detection method can considerably outperform the current state-of-the-art methods in both detection performance and computational complexity. Furthermore, in this paper, we have developed several methods using color features for detection and tracking of human body parts (arms, legs, torso, head, etc.). For example, we have developed a human skin color sub-patch segmentation algorithm by first conducting an RGB to YIQ transformation and then applying a subtractive I/Q image fusion with morphological operations. With this method, we can reliably detect and track human skin color related body parts such as the face, neck, arms, and legs. Reliable body-part (e.g. head) detection allows us to continuously track the individual person even in the case that multiple closely spaced persons are merged. Accordingly, we have developed a new algorithm to split a merged detection blob back into individual detections based on the detected head positions. Detected body parts also allow us to extract important local constellation features of the body-part positions and angles relative to the full body. These features are useful for human walking gait pattern recognition and human pose (e.g. standing or falling down) estimation for potential abnormal behavior and accidental event detection, as evidenced by our experimental tests. Furthermore, based on the reliable head (face) tracking, we have applied a super-resolution algorithm to enhance the face resolution for improved human face recognition performance.
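
    As a concrete illustration of the RGB-to-YIQ step followed by a subtractive I/Q fusion, here is a hedged Python sketch (the fusion as a plain I - Q difference, the threshold value, and the morphological cleanup are assumptions for illustration, not the authors' tuned pipeline):

        import numpy as np
        from scipy import ndimage

        def rgb_to_yiq(rgb: np.ndarray) -> np.ndarray:
            """Standard NTSC RGB -> YIQ transform (last axis = channels)."""
            M = np.array([[0.299,  0.587,  0.114],
                          [0.596, -0.274, -0.322],
                          [0.211, -0.523,  0.312]])
            return rgb @ M.T

        def skin_mask(rgb: np.ndarray, thresh: float = 20.0) -> np.ndarray:
            yiq = rgb_to_yiq(rgb.astype(float))
            fused = yiq[..., 1] - yiq[..., 2]     # subtractive I/Q fusion
            mask = fused > thresh                 # skin tones: high I, low Q
            return ndimage.binary_opening(mask, iterations=2)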

  13. Image-adaptive and robust digital wavelet-domain watermarking for images

    NASA Astrophysics Data System (ADS)

    Zhao, Yi; Zhang, Liping

    2018-03-01

    We propose a new frequency-domain wavelet-based watermarking technique. The key idea of our scheme is twofold: a multi-tier solution representation of the image and odd-even quantization for embedding/extracting the watermark. Because many complementary watermarks need to be hidden, the watermark image designed is image-adaptive. The meaningful and complementary watermark images were embedded into the original (host) image by odd-even quantization of coefficients selected from the detail wavelet coefficients of the original image whose magnitudes are larger than their corresponding Just Noticeable Difference (JND) thresholds. The tests show good robustness against well-known attacks such as noise addition, image compression, median filtering and clipping, as well as geometric transforms. Further research may improve the performance by refining the JND thresholds.
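
    Odd-even quantization is a standard quantization-index-modulation scheme; the sketch below shows the generic embed/extract pair (Python), with a fixed step `delta` standing in for the per-coefficient JND-derived steps used in the paper:

        import numpy as np

        def embed_bit(c: float, bit: int, delta: float) -> float:
            """Quantize c so the quantizer index parity encodes the bit."""
            q = int(np.round(c / delta))
            if q % 2 != bit:
                q += 1 if c / delta > q else -1   # nearest valid index
            return q * delta

        def extract_bit(c: float, delta: float) -> int:
            return int(np.round(c / delta)) % 2

        coeffs = [12.7, -5.1, 33.4]
        marked = [embed_bit(c, b, 4.0) for c, b in zip(coeffs, [1, 0, 1])]
        print([extract_bit(c, 4.0) for c in marked])   # -> [1, 0, 1]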

  14. Localization Transition Induced by Learning in Random Searches

    NASA Astrophysics Data System (ADS)

    Falcón-Cortés, Andrea; Boyer, Denis; Giuggioli, Luca; Majumdar, Satya N.

    2017-10-01

    We solve an adaptive search model where a random walker or Lévy flight stochastically resets to previously visited sites on a d -dimensional lattice containing one trapping site. Because of reinforcement, a phase transition occurs when the resetting rate crosses a threshold above which nondiffusive stationary states emerge, localized around the inhomogeneity. The threshold depends on the trapping strength and on the walker's return probability in the memoryless case. The transition belongs to the same class as the self-consistent theory of Anderson localization. These results show that similarly to many living organisms and unlike the well-studied Markovian walks, non-Markov movement processes can allow agents to learn about their environment and promise to bring adaptive solutions in search tasks.
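
    A toy one-dimensional version of such a reinforced-resetting walk is easy to write down (Python; the parameter values and the omission of the trapping site are simplifications of the paper's model): with rate r the walker relocates to a previously visited site chosen with probability proportional to its visit count.

        import numpy as np

        def memory_walk(steps=10_000, r=0.05, seed=0):
            rng = np.random.default_rng(seed)
            pos, visits = 0, {0: 1}
            for _ in range(steps):
                if rng.random() < r:              # preferential reset
                    sites = list(visits)
                    counts = np.array([visits[s] for s in sites], float)
                    pos = sites[rng.choice(len(sites), p=counts / counts.sum())]
                else:                             # ordinary diffusion step
                    pos += rng.choice((-1, 1))
                visits[pos] = visits.get(pos, 0) + 1
            return visits

        v = memory_walk()
        print(max(v, key=v.get))                  # most-reinforced site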

  15. Adaptive geodesic transform for segmentation of vertebrae on CT images

    NASA Astrophysics Data System (ADS)

    Gaonkar, Bilwaj; Shu, Liao; Hermosillo, Gerardo; Zhan, Yiqiang

    2014-03-01

    Vertebral segmentation is a critical first step in any quantitative evaluation of vertebral pathology using CT images. This is especially challenging because bone marrow tissue has the same intensity profile as the muscle surrounding the bone. Thus simple methods such as thresholding or adaptive k-means fail to accurately segment vertebrae. While several other algorithms such as level sets may be used for segmentation, any algorithm that is clinically deployable has to work in under a few seconds. To address these dual challenges, we present here a new algorithm based on the geodesic distance transform that is capable of segmenting the spinal vertebrae in under one second. To achieve this we extend the theory of the geodesic distance transform proposed in [1] to incorporate high-level anatomical knowledge through adaptive weighting of image gradients. Such knowledge may be provided by the user directly or may be automatically generated by another algorithm. We incorporate information 'learnt' using a previously published machine learning algorithm [2] to segment the L1 to L5 vertebrae. While we present a particular application here, the adaptive geodesic transform is a generic concept which can be applied to the segmentation of other organs as well.
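
    The core of a geodesic distance transform is Dijkstra's algorithm on the pixel grid with gradient-dependent edge costs. A minimal sketch follows (Python; the single scalar weight `alpha` stands in for the paper's adaptive, anatomy-driven gradient weighting):

        import heapq
        import numpy as np

        def geodesic_distance(img: np.ndarray, seed: tuple, alpha: float = 10.0):
            """Dijkstra on the 4-connected grid; cost grows with |intensity step|."""
            h, w = img.shape
            dist = np.full((h, w), np.inf)
            dist[seed] = 0.0
            heap = [(0.0, seed)]
            while heap:
                d, (y, x) = heapq.heappop(heap)
                if d > dist[y, x]:
                    continue                      # stale heap entry
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        step = 1.0 + alpha * abs(float(img[ny, nx]) - float(img[y, x]))
                        if d + step < dist[ny, nx]:
                            dist[ny, nx] = d + step
                            heapq.heappush(heap, (d + step, (ny, nx)))
            return dist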

  16. The role of glacier changes and threshold definition in the characterisation of future streamflow droughts in glacierised catchments

    NASA Astrophysics Data System (ADS)

    Van Tiel, Marit; Teuling, Adriaan J.; Wanders, Niko; Vis, Marc J. P.; Stahl, Kerstin; Van Loon, Anne F.

    2018-01-01

    Glaciers are essential hydrological reservoirs, storing and releasing water at various timescales. Short-term variability in glacier melt is one of the causes of streamflow droughts, here defined as deficiencies from the flow regime. Streamflow droughts in glacierised catchments have a wide range of interlinked causal factors related to precipitation and temperature on short and long timescales. Climate change affects glacier storage capacity, with resulting consequences for discharge regimes and streamflow drought. Future projections of streamflow drought in glacierised basins can, however, strongly depend on the modelling strategies and analysis approaches applied. Here, we examine the effect of different approaches, concerning the glacier modelling and the drought threshold, on the characterisation of streamflow droughts in glacierised catchments. Streamflow is simulated with the Hydrologiska Byråns Vattenbalansavdelning (HBV-light) model for two case study catchments, the Nigardsbreen catchment in Norway and the Wolverine catchment in Alaska, and two future climate change scenarios (RCP4.5 and RCP8.5). Two types of glacier modelling are applied, a constant and a dynamic glacier area conceptualisation. Streamflow droughts are identified with the variable threshold level method and their characteristics are compared between two periods, a historical (1975-2004) and a future (2071-2100) period. Two existing threshold approaches to define future droughts are employed: (1) the threshold from the historical period; (2) a transient threshold approach, whereby the threshold adapts every year in the future to the changing regimes. Results show that drought characteristics differ among the combinations of glacier area modelling and thresholds. The historical threshold combined with a dynamic glacier area projects extreme increases in drought severity in the future, caused by the regime shift due to a reduction in glacier area. The historical threshold combined with a constant glacier area results in a drastic decrease in the number of droughts. The drought characteristics between future and historical periods are more similar when the transient threshold is used, for both glacier area conceptualisations. With the transient threshold, factors causing future droughts can be analysed. This study revealed the different effects of methodological choices on future streamflow drought projections and it highlights how the options can be used to analyse different aspects of future droughts: the transient threshold for analysing future drought processes, the historical threshold to assess changes between periods, the constant glacier area to analyse the effect of short-term climate variability on droughts and the dynamic glacier area to model more realistic future discharges under climate change.
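
    For readers unfamiliar with the variable threshold level method, the sketch below (Python/numpy; the 20th percentile and the 31-day smoothing window are common choices, not necessarily those of the paper) computes a day-of-year flow threshold and flags days below it; the transient variant would simply recompute this climatology over a moving future window.

        import numpy as np

        def variable_threshold(flow, doy, pct=20.0):
            """Day-of-year percentile threshold with a 31-day window."""
            thr = np.empty(366)
            for d in range(366):
                window = np.arange(d - 15, d + 16) % 366
                thr[d] = np.percentile(flow[np.isin(doy, window)], pct)
            return thr

        rng = np.random.default_rng(1)
        doy = np.tile(np.arange(366), 30)          # 30 synthetic years
        flow = 50 + 30 * np.sin(2 * np.pi * doy / 366) + rng.gamma(2, 5, doy.size)
        thr = variable_threshold(flow, doy)
        print((flow < thr[doy]).mean())            # ~0.20 by construction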

  17. Random access with adaptive packet aggregation in LTE/LTE-A.

    PubMed

    Zhou, Kaijie; Nikaein, Navid

    While random access presents a promising solution for efficient uplink channel access, the preamble collision rate can significantly increase when a massive number of devices simultaneously access the channel. To address this issue and improve the reliability of random access, an adaptive packet aggregation method is proposed. With the proposed method, a device does not trigger a random access for every single packet. Instead, it starts a random access when the number of aggregated packets reaches a given threshold. This method reduces the packet collision rate at the expense of extra latency, which is used to accumulate multiple packets into a single transmission unit. Therefore, the tradeoff between packet loss rate and channel access latency has to be carefully selected. We use a semi-Markov model to derive the packet loss rate and channel access latency as functions of the packet aggregation number. Hence, the optimal amount of aggregated packets can be found, which keeps the loss rate below the desired value while minimizing the access latency. We also apply the idea of packet aggregation to power saving, where a device aggregates as many packets as possible until the latency constraint is reached. Simulations are carried out to evaluate our methods. We find that the packet loss rate and/or power consumption are significantly reduced with the proposed method.
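
    The collision-vs-latency tradeoff can be illustrated with a toy slotted-access simulation (Python/numpy; Poisson arrivals, uniform preamble choice, and all parameter values are assumptions, and the paper's semi-Markov analysis is not reproduced): raising the aggregation threshold N thins out access attempts, and hence collisions, at the cost of buffering delay.

        import numpy as np

        def collision_rate(n_dev=200, n_pre=54, lam=0.05, N=1, slots=20_000, seed=0):
            rng = np.random.default_rng(seed)
            buf = np.zeros(n_dev, dtype=int)
            attempts = collisions = 0
            for _ in range(slots):
                buf += rng.poisson(lam, n_dev)     # new packets per device
                ready = np.flatnonzero(buf >= N)   # devices triggering access
                pre = rng.integers(0, n_pre, ready.size)
                uniq, cnt = np.unique(pre, return_counts=True)
                hit = np.isin(pre, uniq[cnt > 1])  # shared preamble = collision
                attempts += ready.size
                collisions += hit.sum()
                buf[ready[~hit]] = 0               # successful devices flush
            return collisions / max(attempts, 1)

        for N in (1, 2, 4, 8):
            print(N, round(collision_rate(N=N), 3))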

  18. Adaptive truncation of matrix decompositions and efficient estimation of NMR relaxation distributions

    NASA Astrophysics Data System (ADS)

    Teal, Paul D.; Eccles, Craig

    2015-04-01

    The two most successful methods of estimating the distribution of nuclear magnetic resonance relaxation times from two-dimensional data are data compression followed by application of the Butler-Reeds-Dawson algorithm, and a primal-dual interior point method using preconditioned conjugate gradients. Both of these methods have previously been presented using a truncated singular value decomposition of matrices representing the exponential kernel. In this paper it is shown that other matrix factorizations are applicable to each of these algorithms, and that these illustrate the different fundamental principles behind the operation of the algorithms. These are the rank-revealing QR (RRQR) factorization and the LDL factorization with diagonal pivoting, also known as the Bunch-Kaufman-Parlett factorization. It is shown that both algorithms can be improved by adapting the truncation as the optimization process progresses, improving the accuracy as the optimal value is approached. A variation on the interior point method, viz. the use of a barrier function instead of the primal-dual approach, is found to offer considerable improvement in terms of speed and reliability. A third type of algorithm, related to the fast iterative shrinkage-thresholding algorithm (FISTA), is applied to the problem. This method can be efficiently formulated without the use of a matrix decomposition.
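
    A minimal sketch of the kernel-compression idea for this problem (Python/scipy; a truncated SVD with a fixed relative cutoff, and non-negative least squares standing in for the BRD and interior-point solvers discussed above):

        import numpy as np
        from scipy.optimize import nnls

        t = np.linspace(1e-3, 1.0, 200)            # echo times (s)
        T2 = np.logspace(-3, 0, 100)               # relaxation-time grid (s)
        K = np.exp(-t[:, None] / T2[None, :])      # exponential kernel

        U, s, Vt = np.linalg.svd(K, full_matrices=False)
        k = int(np.searchsorted(-s, -s[0] * 1e-4)) # keep sv's above 1e-4 rel.
        Kc, Pc = s[:k, None] * Vt[:k], U[:, :k]    # compressed kernel/projector

        f_true = np.exp(-0.5 * ((np.log10(T2) + 1.5) / 0.2) ** 2)
        y = K @ f_true + 1e-3 * np.random.default_rng(0).normal(size=t.size)
        f_est, _ = nnls(Kc, Pc.T @ y)              # solve in compressed space
        print(k, Kc.shape)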

  19. Opportunistic Access in Frequency Hopping Cognitive Radio Networks

    DTIC Science & Technology

    2014-03-27

    [Abstract not indexed; the available excerpt contains only fragments of the report's acronym list (multiple access, M-ary frequency shift keying, MIMO, OFDM) and of its list of figures, e.g. non-adaptive BER performance as a function of ISR, and of EB/N0, with orthogonal frequency-division multiplexing (OFDM) interference present.]

  20. SU-F-J-27: Segmentation of Prostate CBCT Images with Implanted Calypso Transponders Using Double Haar Wavelet Transform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Y; Saleh, Z; Tang, X

    Purpose: Segmentation of prostate CBCT images is an essential step towards real-time adaptive radiotherapy. It is challenging for Calypso patients, as more artifacts are generated by the beacon transponders. We herein propose a novel wavelet-based segmentation algorithm for the rectum, bladder, and prostate of CBCT images with implanted Calypso transponders. Methods: Five hypofractionated prostate patients with daily CBCT were studied. Each patient had 3 Calypso beacon transponders implanted, and the patients were set up and treated with the Calypso tracking system. Two sets of CBCT images from each patient were studied. The structures (i.e. rectum, bladder, and prostate) were contoured by a trained expert, and these served as ground truth. For a given CBCT, the moving window-based Double Haar transformation is applied first to obtain the wavelet coefficients. Based on a user-defined point in the object of interest, a cluster-algorithm-based adaptive thresholding is applied to the low-frequency components of the wavelet coefficients, and a Lee filter theory based adaptive thresholding is applied to the high-frequency components. In the next step, wavelet reconstruction is applied to the thresholded wavelet coefficients. A binary/segmented image of the object of interest is thereby obtained. DICE, sensitivity, inclusiveness and ΔV were used to evaluate the segmentation result. Results: Considering all patients, the bladder has DICE, sensitivity, inclusiveness, and ΔV ranges of [0.81–0.95], [0.76–0.99], [0.83–0.94], [0.02–0.21]. For the prostate, the ranges are [0.77–0.93], [0.84–0.97], [0.68–0.92], [0.1–0.46]. For the rectum, the ranges are [0.72–0.93], [0.57–0.99], [0.73–0.98], [0.03–0.42]. Conclusion: The proposed algorithm appeared effective in segmenting prostate CBCT images in the presence of the Calypso artifacts. However, it is not robust in two scenarios: 1) rectum with a significant amount of gas; 2) prostate with very low contrast. A model-based algorithm might improve the segmentation in these two scenarios.

  1. Comparison of image segmentation of lungs using methods: connected threshold, neighborhood connected, and threshold level set segmentation

    NASA Astrophysics Data System (ADS)

    Amanda, A. R.; Widita, R.

    2016-03-01

    The aim of this research is to compare some image segmentation methods for the lungs based on performance evaluation parameters (Mean Square Error (MSE) and Peak Signal-to-Noise Ratio (PSNR)). In this study, the methods compared were connected threshold, neighborhood connected, and threshold level set segmentation on images of the lungs. These three methods require one important parameter, i.e. the threshold. The threshold interval was obtained from the histogram of the original image. The software used to segment the images was InsightToolkit-4.7.0 (ITK). This research used 5 lung images to be analyzed. Then, the results were compared using the performance evaluation parameters determined by using MATLAB. A segmentation method is said to have good quality if it has the smallest MSE value and the highest PSNR. The results show that four sample images match the criteria of the connected threshold method, while one sample matches the threshold level set segmentation. Therefore, it can be concluded that the connected threshold method is better than the other two methods for these cases.
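
    The two evaluation metrics, as commonly defined for 8-bit images (so MAX = 255), are easy to state in code:

        import numpy as np

        def mse(a: np.ndarray, b: np.ndarray) -> float:
            return float(np.mean((a.astype(float) - b.astype(float)) ** 2))

        def psnr(a: np.ndarray, b: np.ndarray, max_val: float = 255.0) -> float:
            m = mse(a, b)
            return float("inf") if m == 0 else 10.0 * np.log10(max_val**2 / m)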

  2. Separate channels for the analysis of the shape and the movement of moving visual stimulus.

    PubMed

    Tolhurst, D J

    1973-06-01

    1. The effects of temporal modulation on the properties of spatial frequency channels have been investigated using adaptation. 2. Adapting to drifting sinusoidal gratings caused threshold elevation that was both spatial-frequency and direction specific. Little systematic difference was found between the bandwidths of the elevation curves for drifting and stationary gratings. 3. It was confirmed that adaptation fails to reveal channels at low spatial frequencies when stationary gratings are used. However, channels were revealed at frequencies at least as low as 0.66 c/deg when the test gratings were made to move. These channels are adapted only a little by stationary gratings, confirming their dependence on movement. 4. The existence of movement-sensitive channels at low spatial frequencies explains the well-known observation that temporal modulation greatly increases the sensitivity of the visual system to low spatial frequencies. 5. Temporal modulation was effective at revealing these channels only when the flicker or movement of the test patterns was apparent to the observer; only at low spatial frequencies did patterns modulated at low rates actually appear to be temporally modulated at threshold. At higher spatial frequencies, they were indistinguishable from stationary patterns until the contrast was some way above the detection threshold. 6. It is suggested, therefore, that the movement-sensitive channels are responsible for signalling the occurrence of movement; the channels at higher spatial frequencies give no information about temporal changes. These two systems of channels are compared to the Y- and X-cells, respectively, of the cat.

  3. Meta‐analysis of test accuracy studies using imputation for partial reporting of multiple thresholds

    PubMed Central

    Deeks, J.J.; Martin, E.C.; Riley, R.D.

    2017-01-01

    Introduction: For tests reporting continuous results, primary studies usually provide test performance at multiple but often different thresholds. This creates missing data when performing a meta‐analysis at each threshold. A standard meta‐analysis (no imputation [NI]) ignores such missing data. A single imputation (SI) approach was recently proposed to recover missing threshold results. Here, we propose a new method that performs multiple imputation of the missing threshold results using discrete combinations (MIDC). Methods: The new MIDC method imputes missing threshold results by randomly selecting from the set of all possible discrete combinations which lie between the results for 2 known bounding thresholds. Imputed and observed results are then synthesised at each threshold. This is repeated multiple times, and the multiple pooled results at each threshold are combined using Rubin's rules to give final estimates. We compared the NI, SI, and MIDC approaches via simulation. Results: Both imputation methods outperform the NI method in simulations. There was generally little difference between the SI and MIDC methods, but the latter was noticeably better in terms of estimating the between‐study variances and generally gave better coverage, due to slightly larger standard errors of pooled estimates. Given selective reporting of thresholds, the imputation methods also reduced bias in the summary receiver operating characteristic curve. Simulations demonstrate that the imputation methods rely on an equal threshold spacing assumption. A real example is presented. Conclusions: The SI and, in particular, MIDC methods can be used to examine the impact of missing threshold results in meta‐analysis of test accuracy studies. PMID:29052347

  4. The Sustained Influence of an Error on Future Decision-Making.

    PubMed

    Schiffler, Björn C; Bengtsson, Sara L; Lundqvist, Daniel

    2017-01-01

    Post-error slowing (PES) is consistently observed in decision-making tasks after negative feedback. Yet, findings are inconclusive as to whether PES supports performance accuracy. We addressed the role of PES by employing drift diffusion modeling which enabled us to investigate latent processes of reaction times and accuracy on a large-scale dataset (>5,800 participants) of a visual search experiment with emotional face stimuli. In our experiment, post-error trials were characterized by both adaptive and non-adaptive decision processes. An adaptive increase in participants' response threshold was sustained over several trials post-error. Contrarily, an initial decrease in evidence accumulation rate, followed by an increase on the subsequent trials, indicates a momentary distraction of task-relevant attention and resulted in an initial accuracy drop. Higher values of decision threshold and evidence accumulation on the post-error trial were associated with higher accuracy on subsequent trials which further gives credence to these parameters' role in post-error adaptation. Finally, the evidence accumulation rate post-error decreased when the error trial presented angry faces, a finding suggesting that the post-error decision can be influenced by the error context. In conclusion, we demonstrate that error-related response adaptations are multi-component processes that change dynamically over several trials post-error.

  5. Automated detection system for pulmonary emphysema on 3D chest CT images

    NASA Astrophysics Data System (ADS)

    Hara, Takeshi; Yamamoto, Akira; Zhou, Xiangrong; Iwano, Shingo; Itoh, Shigeki; Fujita, Hiroshi; Ishigaki, Takeo

    2004-05-01

    An automatic extraction of pulmonary emphysema areas on 3-D chest CT images was performed using an adaptive thresholding technique. We proposed a method to estimate the ratio of the emphysema area to the whole lung volume. We employed 32 cases (15 normal and 17 abnormal) which had already been diagnosed by radiologists prior to the study. The ratio in all the normal cases was less than 0.02, and in the abnormal cases it ranged from 0.01 to 0.26. The effectiveness of our approach was confirmed through the results of the present study.

  6. Optimal Stimulus Amplitude for Vestibular Stochastic Stimulation to Improve Sensorimotor Function

    NASA Technical Reports Server (NTRS)

    Goel, R.; Kofman, I.; DeDios, Y. E.; Jeevarajan, J.; Stepanyan, V.; Nair, M.; Congdon, S.; Fregia, M.; Cohen, H.; Bloomberg, J. J.; hide

    2014-01-01

    Sensorimotor changes such as postural and gait instabilities can affect the functional performance of astronauts when they transition across different gravity environments. We are developing a method, based on stochastic resonance (SR), to enhance information transfer by applying non-zero levels of external noise on the vestibular system (vestibular stochastic resonance, VSR). Our previous work has shown the advantageous effects of VSR in a balance task of standing on an unstable surface. This technique to improve detection of vestibular signals uses a stimulus delivery system that is wearable or portable and provides imperceptibly low levels of white-noise-based binaural bipolar electrical stimulation of the vestibular system. The goal of this project is to determine optimal levels of stimulation for SR applications by using a defined vestibular threshold of motion detection. A series of experiments was carried out to determine a robust paradigm for identifying a vestibular threshold that can then be used to recommend optimal stimulation levels for SR training applications customized to each crewmember. Customizing stimulus intensity can maximize treatment effects. The amplitude of stimulation used in VSR applications has varied across studies in the literature, for example 60% of nociceptive stimulus thresholds. We compared subjects' perceptual threshold with thresholds obtained from two measures of body sway. Each test session was 463 s long and consisted of several 15 s sinusoidal stimuli, at different current amplitudes (0-2 mA), interspersed with 20-20.5 s periods of no stimulation. Subjects sat on a chair with their eyes closed and reported their perception of motion through a joystick. A force plate underneath the chair recorded medio-lateral shear forces and roll moments. First, we determined the percentage of time during stimulation periods for which perception of motion (activity above a pre-defined threshold) was reported using the joystick, and body sway (two standard deviations of the noise level in the baseline measurement) was detected by the sensors. The percentage of time at each stimulation level for motion detection was normalized with respect to the largest value, and a logistic regression curve was fitted to these data. The threshold was defined at the 50% probability of motion detection. Comparison of thresholds of motion detection obtained from joystick data versus body sway suggests that perceptual thresholds were significantly lower and were not impacted by system noise. Further, in order to determine the optimal stimulation amplitude to improve balance, two sets of experiments were carried out. In the first set, all subjects received the same levels of stimuli, and the intensity of optimal performance was projected back onto each subject's vestibular threshold curve. In the second set, on different subjects, stimulation was administered at 20-400% of the subject's vestibular threshold obtained from joystick data. Preliminary results of our study show that, in general, using stimulation amplitudes at 40-60% of the perceptual motion threshold improved balance performance significantly compared to control (no stimulation). The amplitude of vestibular stimulation that improved balance function was predominantly in the range of +/- 100 to +/- 400 micro A.
We hypothesize that VSR stimulation will act synergistically with sensorimotor adaptability (SA) training to improve adaptability by increasing utilization of vestibular information and therefore will help us to optimize and personalize a SA countermeasure prescription. This combination will help to significantly reduce the number of days required to recover functional performance to preflight levels after long-duration spaceflight.
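
    The threshold definition used above, a logistic fit to normalized detection rates with the threshold read off at the 50%-probability point, looks like this in outline (Python/scipy; the amplitude/detection data are synthetic stand-ins):

        import numpy as np
        from scipy.optimize import curve_fit

        def logistic(x, x50, k):
            return 1.0 / (1.0 + np.exp(-k * (x - x50)))

        amps = np.array([0.0, 0.25, 0.5, 0.75, 1.0, 1.5, 2.0])        # mA
        p_detect = np.array([0.02, 0.05, 0.20, 0.55, 0.80, 0.97, 1.0])
        (x50, k), _ = curve_fit(logistic, amps, p_detect, p0=[0.7, 5.0])
        print(f"50% motion-detection threshold ~ {x50:.2f} mA")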

  7. Real-time anomaly detection for very short-term load forecasting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luo, Jian; Hong, Tao; Yue, Meng

    Although the most recent load information is critical to very short-term load forecasting (VSTLF), power companies often have difficulties in collecting the most recent load values accurately and in a timely manner for VSTLF applications. This paper tackles the problem of real-time anomaly detection in the most recent load information used by VSTLF. This paper proposes a model-based anomaly detection method that consists of two components, a dynamic regression model and an adaptive anomaly threshold. The case study is developed using data from ISO New England. This paper demonstrates that the proposed method significantly outperforms three other anomaly detection methods, including two methods commonly used in the field and one state-of-the-art method used by a winning team of the Global Energy Forecasting Competition 2014. Lastly, a general anomaly detection framework is proposed for future research.

  8. Real-time anomaly detection for very short-term load forecasting

    DOE PAGES

    Luo, Jian; Hong, Tao; Yue, Meng

    2018-01-06

    Although the most recent load information is critical to very short-term load forecasting (VSTLF), power companies often have difficulties in collecting the most recent load values accurately and in a timely manner for VSTLF applications. This paper tackles the problem of real-time anomaly detection in the most recent load information used by VSTLF. This paper proposes a model-based anomaly detection method that consists of two components, a dynamic regression model and an adaptive anomaly threshold. The case study is developed using data from ISO New England. This paper demonstrates that the proposed method significantly outperforms three other anomaly detection methods, including two methods commonly used in the field and one state-of-the-art method used by a winning team of the Global Energy Forecasting Competition 2014. Lastly, a general anomaly detection framework is proposed for future research.
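
    A hedged sketch of the two-component structure (Python/numpy): a simple temperature-polynomial regression forecasts the load, and a threshold that adapts to the rolling residual spread flags anomalies. The paper's actual regression specification and threshold rule are richer than this.

        import numpy as np

        def detect_anomalies(load, temp, win=168, z=4.0):
            """Flag points whose residual exceeds z x rolling residual std."""
            X = np.column_stack([np.ones_like(temp), temp, temp**2, temp**3])
            beta, *_ = np.linalg.lstsq(X, load, rcond=None)
            resid = load - X @ beta
            flags = np.zeros(load.size, dtype=bool)
            for i in range(win, load.size):
                sigma = resid[i - win:i].std()     # adapts to recent volatility
                flags[i] = abs(resid[i]) > z * sigma
            return flags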

  9. Simulation of Healing Threshold in Strain-Induced Inflammation Through a Discrete Informatics Model.

    PubMed

    Ibrahim, Israr Bin M; Sarma O V, Sanjay; Pidaparti, Ramana M

    2018-05-01

    Respiratory diseases such as asthma and acute respiratory distress syndrome, as well as acute lung injury, involve inflammation at the cellular level. The inflammation process is very complex and is characterized by the emergence of cytokines along with other changes in cellular processes. Due to the complexity of the various constituents that make up the inflammation dynamics, it is necessary to develop models that can complement experiments to fully understand inflammatory diseases. In this study, we developed a discrete informatics model based on a cellular automata (CA) approach to investigate the influence of an elastic field (stretch/strain) on the dynamics of inflammation and to account for probabilistic adaptation based on statistical interpretation of existing experimental data. Our simulation model investigated the effects of low, medium, and high strain conditions on inflammation dynamics. Results suggest that the model is able to indicate the threshold of innate healing of tissue as a response to the strain experienced by the tissue. When the strain is under the threshold, the tissue is still capable of adapting its structure to heal the damaged part. However, there exists a strain threshold where the healing capability breaks down. The results obtained demonstrate that the developed discrete informatics CA model is capable of modeling and giving insights into inflammation dynamics parameters under various mechanical strain/stretch environments.
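
    A toy cellular-automaton reading of the healing-threshold behavior (Python/numpy; every rule and rate here is illustrative, not the paper's calibrated model): damaged cells heal when the strain is below the threshold, and damage spreads to neighbors otherwise.

        import numpy as np

        def damaged_fraction(strain, threshold=0.5, steps=100, p=0.3, seed=0):
            rng = np.random.default_rng(seed)
            grid = np.zeros((50, 50), dtype=bool)
            grid[20:30, 20:30] = True              # initial damaged patch
            for _ in range(steps):
                if strain < threshold:             # healing regime
                    grid &= rng.random(grid.shape) > p
                else:                              # damage propagates
                    nb = (np.roll(grid, 1, 0) | np.roll(grid, -1, 0)
                          | np.roll(grid, 1, 1) | np.roll(grid, -1, 1))
                    grid |= nb & (rng.random(grid.shape) < p)
            return grid.mean()

        print(damaged_fraction(0.3), damaged_fraction(0.8))  # heals vs. breaks down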

  10. Fault-tolerant quantum computation with cluster states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nielsen, Michael A.; Dawson, Christopher M.

    The one-way quantum computing model introduced by Raussendorf and Briegel [Phys. Rev. Lett. 86, 5188 (2001)] shows that it is possible to quantum compute using only a fixed entangled resource known as a cluster state, together with adaptive single-qubit measurements. This model is the basis for several practical proposals for quantum computation, including a promising proposal for optical quantum computation based on cluster states [M. A. Nielsen, Phys. Rev. Lett. (to be published), quant-ph/0402005]. A significant open question is whether such proposals are scalable in the presence of physically realistic noise. In this paper we prove two threshold theorems which show that scalable fault-tolerant quantum computation may be achieved in implementations based on cluster states, provided the noise in the implementations is below some constant threshold value. Our first threshold theorem applies to a class of implementations in which entangling gates are applied deterministically, but with a small amount of noise. We expect this threshold to be applicable in a wide variety of physical systems. Our second threshold theorem is specifically adapted to proposals such as the optical cluster-state proposal, in which nondeterministic entangling gates are used. A critical technical component of our proofs is a pair of powerful theorems which relate the properties of noisy unitary operations restricted to act on a subspace of state space to extensions of those operations acting on the entire state space. We expect these theorems to have a variety of applications in other areas of quantum-information science.

  11. Multi-observation PET image analysis for patient follow-up quantitation and therapy assessment

    NASA Astrophysics Data System (ADS)

    David, S.; Visvikis, D.; Roux, C.; Hatt, M.

    2011-09-01

    In positron emission tomography (PET) imaging, an early therapeutic response is usually characterized by variations of semi-quantitative parameters restricted to the maximum SUV measured in PET scans during treatment. Such measurements do not reflect overall tumor volume and radiotracer uptake variations. The proposed approach is based on multi-observation image analysis, merging several PET acquisitions to assess tumor metabolic volume and uptake variations. The fusion algorithm is based on iterative estimation using a stochastic expectation maximization (SEM) algorithm. The proposed method was applied to simulated and clinical follow-up PET images. We compared the multi-observation fusion performance to threshold-based methods proposed for the assessment of the therapeutic response based on functional volumes. On simulated datasets, the adaptive threshold applied independently on both images led to higher errors than the ASEM fusion, and on clinical datasets it failed to provide coherent measurements for four patients out of seven due to aberrant delineations. The ASEM method demonstrated improved and more robust estimation, leading to more pertinent measurements. Future work will consist in extending the methodology and applying it to clinical multi-tracer datasets in order to evaluate its potential impact on biological tumor volume definition for radiotherapy applications.

  12. Transdiagnostic culturally adapted CBT with Farsi-speaking refugees: a pilot study.

    PubMed

    Kananian, Schahryar; Ayoughi, Sarah; Farugie, Arieja; Hinton, Devon; Stangier, Ulrich

    2017-01-01

    Background: Approximately half of all asylum seekers suffer from trauma-related disorders requiring treatment, among them Posttraumatic Stress Disorder (PTSD), depression, anxiety, and somatic symptoms. There is a lack of easily accessible, low-threshold treatments that take the cultural background into account. Culturally Adapted CBT (CA CBT) is a well-evaluated, transdiagnostic group intervention for refugees, using psychoeducation, meditation, and Yoga-like exercises. Objective: An uncontrolled pilot study with male Farsi-speaking refugees from Afghanistan and Iran was conducted to investigate feasibility with this ethnic group, for which no previous CBT trials have been reported. Method: The participants were nine Farsi-speaking, male refugees with M.I.N.I./DSM-IV diagnoses comprising PTSD, major depressive disorder, and anxiety disorders. Treatment components were adapted to the specific cultural framework of perception of symptoms, causes, ideas of healing, and local therapeutic processes. Before and after 12 weeks of treatment, the primary outcome was assessed using the General Health Questionnaire (GHQ-28). Secondary outcome measures were the Posttraumatic Checklist, Patient Health Questionnaire, Somatic Symptom Scale, World Health Organization Quality of Life Questionnaire (WHOQOL-BREF), Affective Style Questionnaire (ASQ), and Emotion Regulation Scale (ERS). Results: Seven participants completed treatment. In the completer analysis, improvements were found on almost all questionnaires. Large effect sizes were seen for the GHQ-28 (d = 2.0), WHOQOL-BREF scales (d = 1.0-2.3), ASQ tolerating subscale (d = 2.2), and ERS (d = 1.7). With respect to feasibility, cultural adaptation seemed to be a crucial means to promote effectiveness. Conclusion: CA CBT may reduce general psychopathological distress and improve quality of life. Improvement in emotion regulation strategies may mediate treatment effects. More support should be provided to enhance coping with the uncertainty of asylum status and stressful housing conditions. CA CBT appears to be a promising transdiagnostic treatment, serving as an initial low-threshold therapy in a stepped care approach.

  13. Transdiagnostic culturally adapted CBT with Farsi-speaking refugees: a pilot study

    PubMed Central

    Kananian, Schahryar; Ayoughi, Sarah; Farugie, Arieja; Hinton, Devon; Stangier, Ulrich

    2017-01-01

    Background: Approximately half of all asylum seekers suffer from trauma-related disorders requiring treatment, among them Posttraumatic Stress Disorder (PTSD), depression, anxiety, and somatic symptoms. There is a lack of easily accessible, low-threshold treatments that take the cultural background into account. Culturally Adapted CBT (CA CBT) is a well-evaluated, transdiagnostic group intervention for refugees, using psychoeducation, meditation, and Yoga-like exercises. Objective: An uncontrolled pilot study with male Farsi-speaking refugees from Afghanistan and Iran was conducted to investigate feasibility with this ethnic group, for which no previous CBT trials have been reported. Method: The participants were nine Farsi-speaking, male refugees with M.I.N.I./DSM-IV diagnoses comprising PTSD, major depressive disorder, and anxiety disorders. Treatment components were adapted to the specific cultural framework of perception of symptoms, causes, ideas of healing, and local therapeutic processes. Before and after 12 weeks of treatment, the primary outcome was assessed using the General Health Questionnaire (GHQ-28). Secondary outcome measures were the Posttraumatic Checklist, Patient Health Questionnaire, Somatic Symptom Scale, World Health Organization Quality of Life Questionnaire (WHOQOL-BREF), Affective Style Questionnaire (ASQ), and Emotion Regulation Scale (ERS). Results: Seven participants completed treatment. In the completer analysis, improvements were found on almost all questionnaires. Large effect sizes were seen for the GHQ-28 (d = 2.0), WHOQOL-BREF scales (d = 1.0–2.3), ASQ tolerating subscale (d = 2.2), and ERS (d = 1.7). With respect to feasibility, cultural adaptation seemed to be a crucial means to promote effectiveness. Conclusion: CA CBT may reduce general psychopathological distress and improve quality of life. Improvement in emotion regulation strategies may mediate treatment effects. More support should be provided to enhance coping with the uncertainty of asylum status and stressful housing conditions. CA CBT appears to be a promising transdiagnostic treatment, serving as an initial low-threshold therapy in a stepped care approach. PMID:29163870

  14. Anti-dynamic-crosstalk method for single photon LIDAR detection

    NASA Astrophysics Data System (ADS)

    Zhang, Fan; Liu, Qiang; Gong, Mali; Fu, Xing

    2017-11-01

    With the increasing number of vehicles equipped with light detection and ranging (LIDAR), crosstalk has been identified as a critical and urgent issue in range detection for active collision avoidance. Chaotic pulse position modulation (CPPM) applied to the transmitted pulse train has been shown to prevent crosstalk as well as range ambiguity. However, a static, unified strategy for the discrimination threshold and the number of accumulated pulses is not valid against crosstalk with a varying number of sources and varying intensity of each source. This paper presents an adaptive algorithm to distinguish the target echo from crosstalk of dynamic and unknown intensity in the context of intelligent vehicles. A new strategy is given based on receiver operating characteristic (ROC) curves that considers the detection requirements on the probability of detection and of false alarm for scenarios with varying crosstalk. In the adaptive algorithm, the detected results are compared under the new strategy with both the number of accumulated pulses and the threshold being raised step by step, so that the target echo can be exactly identified from crosstalk of dynamic and unknown intensity. The validity of the algorithm has been verified through experiments with a single photon detector and the time-correlated single photon counting (TCSPC) technique, demonstrating a marked drop in the shots required to identify the target compared with the static, unified strategy.

  15. In-Air Evoked Potential Audiometry of Grey Seals (Halichoerus grypus) from the North and Baltic Seas

    PubMed Central

    Ruser, Andreas; Dähne, Michael; Sundermeyer, Janne; Lucke, Klaus; Houser, Dorian S.; Finneran, James J.; Driver, Jörg; Pawliczka, Iwona; Rosenberger, Tanja; Siebert, Ursula

    2014-01-01

    In-air anthropogenic sound has the potential to affect grey seal (Halichoerus grypus) behaviour and interfere with acoustic communication. In this study, a new method was used to deliver acoustic signals to grey seals as part of an in-air hearing assessment. Using in-ear headphones with adapted ear inserts allowed for the measurement of auditory brainstem responses (ABR) of sedated grey seals exposed to 5-cycle (2-1-2) tone pips. Thresholds were measured at 10 frequencies between 1 and 20 kHz. Measurements were made using subcutaneous electrodes on wild seals from the Baltic and North Seas. Thresholds were determined by both visual and statistical approaches (single-point F-test), and good agreement was obtained between the results of both methods. The mean auditory thresholds were ≤40 dB re 20 µPa peak-equivalent sound pressure level (peSPL) between 4 and 20 kHz and showed similar patterns to in-air behavioural hearing tests of other phocid seals between 3 and 20 kHz. Below 3 kHz, a steep reduction in hearing sensitivity was observed, which differed from the rate of decline in sensitivity obtained in behavioural studies on other phocids. Differences in the rate of decline may reflect the influence of the ear inserts on the ability to reliably transmit lower frequencies, or interference from the structure of the distal end of the ear canal. PMID:24632891

  16. Salt taste adaptation: the psychophysical effects of adapting solutions and residual stimuli from prior tastings on the taste of sodium chloride.

    PubMed

    O'Mahony, M

    1979-01-01

    The paper reviews how adaptation to sodium chloride, changing in concentration as a result of various experimental procedures, affects measurements of the sensitivity, intensity, and quality of the salt taste. The development of and evidence for the current model that the salt taste depends on an adaptation level (taste zero) determined by the sodium cation concentration is examined and found to be generally supported, despite great methodological complications. It would seem that lower adaptation levels elicit lower thresholds, higher intensity estimates, and altered quality descriptions with predictable effects on psychophysical measures.

  17. Using Critical Thresholds to Customize Climate Projections of Extreme Events to User Needs and Support Decisions

    NASA Astrophysics Data System (ADS)

    Garfin, G. M.; Petersen, A.; Shafer, M.; MacClune, K.; Hayhoe, K.; Riley, R.; Nasser, E.; Kos, L.; Allan, C.; Stults, M.; LeRoy, S. R.

    2016-12-01

    Many communities in the United States are already vulnerable to extreme events; many of these vulnerabilities are likely to increase with climate change. In order to promote the development of effective community responses to climate change, we tested a participatory process for developing usable climate science, in which our project team worked with decision-makers to identify extreme event parameters and critical thresholds associated with policy development and adaptation actions. Our hypothesis is that conveying climate science and data through user-defined parameters and thresholds will help develop capacity to streamline the use of climate projections in developing strategies and actions, and motivate participation by a variety of preparedness planners. Our team collaborated with urban decision-makers, in departments that included resilience, planning, public works, public health, emergency management, and others, in four cities in the semi-arid south-central plains and intermountain areas of Colorado, New Mexico, Oklahoma, and Texas. Through an iterative process, we homed in on both simple and hybrid indicators for which we could develop credible city-specific projections, to stimulate discussion about adaptation actions; throughout the process, we communicated information about confidence and uncertainty, in order to develop a blend of historic and projected climate data, as appropriate, depending on levels of uncertainty. Our collaborations have resulted in (a) the identification of more than 50 unique indicators and thresholds across the four communities, (b) the development of adaptation action strategies in each community, and (c) the implementation of actions, ranging from a climate leadership training program for city staff members, to a rainwater capture project to improve responses to expected increases in both stormwater runoff and water capture for drought episodes.

  18. Influence of background size, luminance and eccentricity on different adaptation mechanisms

    PubMed Central

    Gloriani, Alejandro H.; Matesanz, Beatriz M.; Barrionuevo, Pablo A.; Arranz, Isabel; Issolio, Luis; Mar, Santiago; Aparicio, Juan A.

    2016-01-01

    Mechanisms of light adaptation have been traditionally explained with reference to psychophysical experimentation. However, the neural substrata involved in those mechanisms remain to be elucidated. Our study analyzed links between psychophysical measurements and retinal physiological evidence with consideration for the phenomena of rod-cone interactions, photon noise, and spatial summation. Threshold test luminances were obtained with steady background fields at mesopic and photopic light levels (i.e., 0.06–110 cd/m2) for retinal eccentricities from 0° to 15° using three combinations of background/test field sizes (i.e., 10°/2°, 10°/0.45°, and 1°/0.45°). A two-channel Maxwellian view optical system was employed to eliminate pupil effects on the measured thresholds. A model based on visual mechanisms that were described in the literature was optimized to fit the measured luminance thresholds in all experimental conditions. Our results can be described by a combination of visual mechanisms. We determined how spatial summation changed with eccentricity and how subtractive adaptation changed with eccentricity and background field size. According to our model, photon noise plays a significant role to explain contrast detection thresholds measured with the 1/0.45° background/test size combination at mesopic luminances and at off-axis eccentricities. In these conditions, our data reflect the presence of rod-cone interaction for eccentricities between 6° and 9° and luminances between 0.6 and 5 cd/m2. In spite of the increasing noise effects with eccentricity, results also show that the visual system tends to maintain a constant signal-to-noise ratio in the off-axis detection task over the whole mesopic range. PMID:27210038

  19. Influence of background size, luminance and eccentricity on different adaptation mechanisms.

    PubMed

    Gloriani, Alejandro H; Matesanz, Beatriz M; Barrionuevo, Pablo A; Arranz, Isabel; Issolio, Luis; Mar, Santiago; Aparicio, Juan A

    2016-08-01

    Mechanisms of light adaptation have been traditionally explained with reference to psychophysical experimentation. However, the neural substrata involved in those mechanisms remain to be elucidated. Our study analyzed links between psychophysical measurements and retinal physiological evidence with consideration for the phenomena of rod-cone interactions, photon noise, and spatial summation. Threshold test luminances were obtained with steady background fields at mesopic and photopic light levels (i.e., 0.06-110 cd/m2) for retinal eccentricities from 0° to 15° using three combinations of background/test field sizes (i.e., 10°/2°, 10°/0.45°, and 1°/0.45°). A two-channel Maxwellian view optical system was employed to eliminate pupil effects on the measured thresholds. A model based on visual mechanisms that were described in the literature was optimized to fit the measured luminance thresholds in all experimental conditions. Our results can be described by a combination of visual mechanisms. We determined how spatial summation changed with eccentricity and how subtractive adaptation changed with eccentricity and background field size. According to our model, photon noise plays a significant role to explain contrast detection thresholds measured with the 1/0.45° background/test size combination at mesopic luminances and at off-axis eccentricities. In these conditions, our data reflect the presence of rod-cone interaction for eccentricities between 6° and 9° and luminances between 0.6 and 5 cd/m2. In spite of the increasing noise effects with eccentricity, results also show that the visual system tends to maintain a constant signal-to-noise ratio in the off-axis detection task over the whole mesopic range. Copyright © 2016 Elsevier Ltd. All rights reserved.

  20. Automatic segmentation and classification of mycobacterium tuberculosis with conventional light microscopy

    NASA Astrophysics Data System (ADS)

    Xu, Chao; Zhou, Dongxiang; Zhai, Yongping; Liu, Yunhui

    2015-12-01

    This paper realizes the automatic segmentation and classification of Mycobacterium tuberculosis with conventional light microscopy. First, the candidate bacillus objects are segmented by the marker-based watershed transform. The markers are obtained by an adaptive threshold segmentation based on an adaptive-scale Gaussian filter. The scale of the Gaussian filter is determined according to the color model of the bacillus objects. The candidate objects are then extracted integrally after region merging and contamination elimination. Second, the shapes of the bacillus objects are characterized by the Hu moments, compactness, eccentricity, and roughness, which are used to classify single, touching and non-bacillus objects. We evaluated logistic regression, random forest, and intersection kernel support vector machine classifiers for classifying the bacillus objects. Experimental results demonstrate that the proposed method yields high robustness and accuracy. The logistic regression classifier performs best, with an accuracy of 91.68%.

  1. Fast parallel MR image reconstruction via B1-based, adaptive restart, iterative soft thresholding algorithms (BARISTA).

    PubMed

    Muckley, Matthew J; Noll, Douglas C; Fessler, Jeffrey A

    2015-02-01

    Sparsity-promoting regularization is useful for combining compressed sensing assumptions with parallel MRI for reducing scan time while preserving image quality. Variable splitting algorithms are the current state-of-the-art algorithms for SENSE-type MR image reconstruction with sparsity-promoting regularization. These methods are very general and have been observed to work with almost any regularizer; however, the tuning of associated convergence parameters is a commonly-cited hindrance in their adoption. Conversely, majorize-minimize algorithms based on a single Lipschitz constant have been observed to be slow in shift-variant applications such as SENSE-type MR image reconstruction since the associated Lipschitz constants are loose bounds for the shift-variant behavior. This paper bridges the gap between the Lipschitz constant and the shift-variant aspects of SENSE-type MR imaging by introducing majorizing matrices in the range of the regularizer matrix. The proposed majorize-minimize methods (called BARISTA) converge faster than state-of-the-art variable splitting algorithms when combined with momentum acceleration and adaptive momentum restarting. Furthermore, the tuning parameters associated with the proposed methods are unitless convergence tolerances that are easier to choose than the constraint penalty parameters required by variable splitting algorithms.

  2. Fast Parallel MR Image Reconstruction via B1-based, Adaptive Restart, Iterative Soft Thresholding Algorithms (BARISTA)

    PubMed Central

    Noll, Douglas C.; Fessler, Jeffrey A.

    2014-01-01

    Sparsity-promoting regularization is useful for combining compressed sensing assumptions with parallel MRI for reducing scan time while preserving image quality. Variable splitting algorithms are the current state-of-the-art algorithms for SENSE-type MR image reconstruction with sparsity-promoting regularization. These methods are very general and have been observed to work with almost any regularizer; however, the tuning of associated convergence parameters is a commonly-cited hindrance in their adoption. Conversely, majorize-minimize algorithms based on a single Lipschitz constant have been observed to be slow in shift-variant applications such as SENSE-type MR image reconstruction since the associated Lipschitz constants are loose bounds for the shift-variant behavior. This paper bridges the gap between the Lipschitz constant and the shift-variant aspects of SENSE-type MR imaging by introducing majorizing matrices in the range of the regularizer matrix. The proposed majorize-minimize methods (called BARISTA) converge faster than state-of-the-art variable splitting algorithms when combined with momentum acceleration and adaptive momentum restarting. Furthermore, the tuning parameters associated with the proposed methods are unitless convergence tolerances that are easier to choose than the constraint penalty parameters required by variable splitting algorithms. PMID:25330484

  3. Adaptive quantification and longitudinal analysis of pulmonary emphysema with a hidden Markov measure field model.

    PubMed

    Hame, Yrjo; Angelini, Elsa D; Hoffman, Eric A; Barr, R Graham; Laine, Andrew F

    2014-07-01

    The extent of pulmonary emphysema is commonly estimated from CT scans by computing the proportional area of voxels below a predefined attenuation threshold. However, the reliability of this approach is limited by several factors that affect the CT intensity distributions in the lung. This work presents a novel method for emphysema quantification, based on parametric modeling of intensity distributions and a hidden Markov measure field model to segment emphysematous regions. The framework adapts to the characteristics of an image to ensure a robust quantification of emphysema under varying CT imaging protocols, and differences in parenchymal intensity distributions due to factors such as inspiration level. Compared to standard approaches, the presented model involves a larger number of parameters, most of which can be estimated from data, to handle the variability encountered in lung CT scans. The method was applied to a longitudinal data set with 87 subjects and a total of 365 scans acquired with varying imaging protocols. The resulting emphysema estimates had very high intra-subject correlation values. By reducing sensitivity to changes in imaging protocol, the method provides a more robust estimate than standard approaches. The generated emphysema delineations promise advantages for regional analysis of emphysema extent and progression.
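
    For contrast, the standard index that this work improves upon, the proportional area below a fixed attenuation threshold, fits in a few lines; the -950 HU cutoff used here is a common choice for emphysema scoring, taken as an assumption.

    ```python
    # Percentage of lung voxels below a fixed attenuation threshold (%LAA).
    import numpy as np

    def percent_low_attenuation(hu, lung_mask, threshold=-950.0):
        """hu: array of CT values in HU; lung_mask: boolean lung segmentation."""
        lung = hu[lung_mask]
        return 100.0 * np.mean(lung < threshold)
    ```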

  4. Spectrum of Lyapunov exponents of non-smooth dynamical systems of integrate-and-fire type.

    PubMed

    Zhou, Douglas; Sun, Yi; Rangan, Aaditya V; Cai, David

    2010-04-01

    We discuss how to characterize the long-time dynamics of non-smooth dynamical systems, such as integrate-and-fire (I&F)-like neuronal networks, using Lyapunov exponents, and we present a stable numerical method for the accurate evaluation of the spectrum of Lyapunov exponents for this large class of dynamics. These dynamics contain (i) jump conditions, as in the firing-reset dynamics, and (ii) degeneracy, such as the refractory period, in which voltage-like variables of the network collapse to a single constant value. Using networks of linear I&F neurons, exponential I&F neurons, and I&F neurons with adaptive thresholds, we illustrate our method and discuss the rich dynamics of these networks.
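
    A toy version of the last model class mentioned, an integrate-and-fire neuron with an adaptive threshold, illustrates the jump conditions that make such dynamics non-smooth; all constants are illustrative, and only a single neuron with Euler integration is shown.

    ```python
    # Leaky integrate-and-fire neuron whose threshold jumps at each spike
    # and decays back to baseline between spikes.
    import numpy as np

    def lif_adaptive(I, dt=0.1, tau_v=10.0, tau_th=50.0,
                     v_reset=0.0, th0=1.0, dth=0.5):
        v, th, spikes = 0.0, th0, []
        for k, i_ext in enumerate(I):
            v += dt * (-v + i_ext) / tau_v       # membrane integration
            th += dt * (th0 - th) / tau_th       # threshold decays to baseline
            if v >= th:                          # firing-reset jump condition
                spikes.append(k * dt)
                v, th = v_reset, th + dth        # reset and raise threshold
        return spikes

    spike_times = lif_adaptive(np.full(5000, 1.5))   # constant-drive example
    ```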

  5. Kmeans-ICA based automatic method for ocular artifact removal in motor imagery classification.

    PubMed

    Bou Assi, Elie; Rihana, Sandy; Sawan, Mohamad

    2014-01-01

    Electroencephalogram (EEG) recordings are used as inputs to a motor imagery-based BCI system. Eye blinks contaminate the spectral content of the EEG signals. Independent Component Analysis (ICA) has already been proven capable of removing these artifacts, whose frequency band overlaps with the EEG of interest. However, previously developed ICA methods use a reference lead, such as the ElectroOculoGram (EOG), to identify the ocular artifact components. In this study, artifactual components were identified using adaptive thresholding by means of K-means clustering. The denoised EEG signals were fed into a feature extraction stage computing the band power, coherence, and phase locking value, and then into a linear discriminant analysis classifier for motor imagery classification.
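
    A sketch of the component-selection idea under stated assumptions: kurtosis serves as the component feature (the study's exact feature set is not specified above), K-means splits the components into two clusters, and the cluster with the more extreme feature values is zeroed before reconstruction.

    ```python
    # ICA decomposition, K-means labeling of components, artifact removal.
    import numpy as np
    from scipy.stats import kurtosis
    from sklearn.decomposition import FastICA
    from sklearn.cluster import KMeans

    def remove_ocular(eeg):                  # eeg: (n_samples, n_channels)
        ica = FastICA(n_components=eeg.shape[1], random_state=0)
        sources = ica.fit_transform(eeg)
        feat = kurtosis(sources, axis=0).reshape(-1, 1)
        labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(feat)
        # Treat the cluster with the higher mean kurtosis as ocular artifact.
        bad = labels == np.argmax([feat[labels == c].mean() for c in (0, 1)])
        sources[:, bad] = 0.0                # drop artifactual components
        return ica.inverse_transform(sources)
    ```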

  6. An infrared small target detection method based on multiscale local homogeneity measure

    NASA Astrophysics Data System (ADS)

    Nie, Jinyan; Qu, Shaocheng; Wei, Yantao; Zhang, Liming; Deng, Lizhen

    2018-05-01

    Infrared (IR) small target detection plays an important role in the field of image detection owing to its intrinsic characteristics. This paper presents a multiscale local homogeneity measure (MLHM) for infrared small target detection, which can enhance the performance of an IR small target detection system. Firstly, the intra-patch homogeneity of the target itself and the inter-patch heterogeneity between the target and the local background regions are integrated to enhance the saliency of small targets. Secondly, a multiscale measure based on local regions is proposed to obtain the most appropriate response. Finally, an adaptive threshold method is applied for small target segmentation. Experimental results on three different scenarios indicate that MLHM achieves good performance under the interference of strong noise.
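
    The last step can be sketched with the adaptive threshold rule commonly used in this literature, T = mu + k * sigma computed over the measure map; the paper's exact rule and constant may differ.

    ```python
    # Adaptive threshold segmentation of a small-target measure map.
    import numpy as np

    def segment_saliency(measure_map, k=4.0):
        t = measure_map.mean() + k * measure_map.std()
        return measure_map > t               # boolean target mask
    ```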

  7. Target matching based on multi-view tracking

    NASA Astrophysics Data System (ADS)

    Liu, Yahui; Zhou, Changsheng

    2011-01-01

    A feature matching method based on Maximally Stable Extremal Regions (MSER) and the Scale Invariant Feature Transform (SIFT) is proposed to solve the problem of matching the same target across multiple cameras. The target foreground is extracted by applying frame differencing twice, and a bounding box regarded as the target region is computed. Extremal regions are obtained by MSER; after being fitted to ellipses, these regions are normalized to unit circles and represented with SIFT descriptors. Initial matches are accepted when the ratio of the closest to the second-closest descriptor distance falls below a threshold, and outliers are eliminated by RANSAC. Experimental results indicate that the method reduces computational complexity effectively and is also robust to affine transformation, rotation, scale, and illumination changes.
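
    A sketch of the descriptor-matching stages (SIFT, ratio test, RANSAC) with OpenCV; the frame differencing, MSER ellipse fitting, and circle normalization are omitted, and the ratio value is an assumption.

    ```python
    # SIFT matching with the distance-ratio test and RANSAC outlier removal.
    import cv2
    import numpy as np

    def match_targets(img1, img2, ratio=0.75):   # grayscale uint8 images
        sift = cv2.SIFT_create()
        k1, d1 = sift.detectAndCompute(img1, None)
        k2, d2 = sift.detectAndCompute(img2, None)
        knn = cv2.BFMatcher().knnMatch(d1, d2, k=2)
        good = [m for m, n in knn if m.distance < ratio * n.distance]
        src = np.float32([k1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
        dst = np.float32([k2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
        # Homography estimation needs at least 4 surviving matches.
        H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        return [g for g, ok in zip(good, inliers.ravel()) if ok]
    ```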

  8. VALIDATION OF A CLINICAL ASSESSMENT OF SPECTRAL RIPPLE RESOLUTION FOR COCHLEAR-IMPLANT USERS

    PubMed Central

    Drennan, Ward R.; Anderson, Elizabeth S.; Won, Jong Ho; Rubinstein, Jay T.

    2013-01-01

    Objectives Non-speech psychophysical tests of spectral resolution, such as the spectral-ripple discrimination task, have been shown to correlate with speech recognition performance in cochlear implant (CI) users (Henry et al., 2005; Won et al. 2007, 2011; Drennan et al. 2008; Anderson et al. 2011). However, these tests are best suited for use in the research laboratory setting and are impractical for clinical use. A test of spectral resolution that is quicker and could more easily be implemented in the clinical setting has been developed. The objectives of this study were 1) To determine if this new clinical ripple test would yield individual results equivalent to the longer, adaptive version of the ripple discrimination test; 2) To evaluate test-retest reliability for the clinical ripple measure; and 3) To examine the relationship between clinical ripple performance and monosyllabic word recognition in quiet for a group of CI listeners. Design Twenty-eight CI recipients participated in the study. Each subject was tested on both the adaptive and the clinical versions of spectral ripple discrimination, as well as CNC word recognition in quiet. The adaptive version of spectral ripple employed a 2-up, 1-down procedure for determining spectral ripple discrimination threshold. The clinical ripple test used a method of constant stimuli, with trials for each of 12 fixed ripple densities occurring six times in random order. Results from the clinical ripple test (proportion correct) were then compared to ripple discrimination thresholds (in ripples per octave) from the adaptive test. Results The clinical ripple test showed strong concurrent validity, evidenced by a good correlation between clinical ripple and adaptive ripple results (r=0.79), as well as a correlation with word recognition (r = 0.7). Excellent test-retest reliability was also demonstrated with a high test-retest correlation (r = 0.9). Conclusions The clinical ripple test is a reliable non-linguistic measure of spectral resolution, optimized for use with cochlear implant users in a clinical setting. The test might be useful as a diagnostic tool or as a possible surrogate outcome measure for evaluating treatment effects in hearing. PMID:24552679

  9. Correspondence between evoked vocal responses and auditory thresholds in Pleurodema thaul (Amphibia; Leptodactylidae).

    PubMed

    Penna, Mario; Velásquez, Nelson; Solís, Rigoberto

    2008-04-01

    Thresholds for evoked vocal responses and thresholds of multiunit midbrain auditory responses to pure tones and synthetic calls were investigated in males of Pleurodema thaul, as behavioral thresholds well above auditory sensitivity have been reported for other anurans. Thresholds for evoked vocal responses to synthetic advertisement calls played back at increasing intensity averaged 43 dB RMS SPL (range 31-52 dB RMS SPL), measured at the subjects' position. Number of pulses increased with stimulus intensities, reaching a plateau at about 18-39 dB above threshold and decreased at higher intensities. Latency to call followed inverse trends relative to number of pulses. Neural audiograms yielded an average best threshold in the high frequency range of 46.6 dB RMS SPL (range 41-51 dB RMS SPL) and a center frequency of 1.9 kHz (range 1.7-2.6 kHz). Auditory thresholds for a synthetic call having a carrier frequency of 2.1 kHz averaged 44 dB RMS SPL (range 39-47 dB RMS SPL). The similarity between thresholds for advertisement calling and auditory thresholds for the advertisement call indicates that male P. thaul use the full extent of their auditory sensitivity in acoustic interactions, likely an evolutionary adaptation allowing chorusing activity in low-density aggregations.

  10. Single and Multiple Visual Systems in Arthropods

    PubMed Central

    Wald, George

    1968-01-01

    Extraction of two visual pigments from crayfish eyes prompted an electrophysiological examination of the role of visual pigments in the compound eyes of six arthropods. The intact animals were used; in crayfishes isolated eyestalks also. Thresholds were measured in terms of the absolute or relative numbers of photons per flash at various wavelengths needed to evoke a constant amplitude of electroretinogram, usually 50 µv. Two species of crayfish, as well as the green crab, possess blue- and red-sensitive receptors apparently arranged for color discrimination. In the northern crayfish, Orconectes virilis, the spectral sensitivity of the dark-adapted eye is maximal at about 550 mµ, and on adaptation to bright red or blue lights breaks into two functions with λmax respectively at about 435 and 565 mµ, apparently emanating from different receptors. The swamp crayfish, Procambarus clarkii, displays a maximum sensitivity when dark-adapted at about 570 mµ, that breaks on color adaptation into blue- and red-sensitive functions with λmax about 450 and 575 mµ, again involving different receptors. Similarly the green crab, Carcinides maenas, presents a dark-adapted sensitivity maximal at about 510 mµ that divides on color adaptation into sensitivity curves maximal near 425 and 565 mµ. Each of these organisms thus possesses an apparatus adequate for at least two-color vision, resembling that of human green-blinds (deuteranopes). The visual pigments of the red-sensitive systems have been extracted from the crayfish eyes. The horse-shoe crab, Limulus, and the lobster each possesses a single visual system, with λmax respectively at 520 and 525 mµ. Each of these is invariant with color adaptation. In each case the visual pigment had already been identified in extracts. The spider crab, Libinia emarginata, presents another variation. It possesses two visual systems apparently differentiated, not for color discrimination but for use in dim and bright light, like vertebrate rods and cones. The spectral sensitivity of the dark-adapted eye is maximal at about 490 mµ and on light adaptation, whether to blue, red, or white light, is displaced toward shorter wavelengths in what is essentially a reverse Purkinje shift. In all these animals dark adaptation appears to involve two phases: a rapid, hyperbolic fall of log threshold associated probably with visual pigment regeneration, followed by a slow, almost linear fall of log threshold that may be associated with pigment migration. PMID:5641632

  11. Genetic variation in threshold reaction norms for alternative reproductive tactics in male Atlantic salmon, Salmo salar.

    PubMed

    Piché, Jacinthe; Hutchings, Jeffrey A; Blanchard, Wade

    2008-07-07

    Alternative reproductive tactics may be a product of adaptive phenotypic plasticity, such that discontinuous variation in life history depends on both the genotype and the environment. Phenotypes that fall below a genetically determined threshold adopt one tactic, while those exceeding the threshold adopt the alternative tactic. We report evidence of genetic variability in maturation thresholds for male Atlantic salmon (Salmo salar) that mature either as large (more than 1 kg) anadromous males or as small (10-150 g) parr. Using a common-garden experimental protocol, we find that the growth rate at which the sneaker parr phenotype is expressed differs among pure- and mixed-population crosses. Maturation thresholds of hybrids were intermediate to those of pure crosses, consistent with the hypothesis that the life-history switch points are heritable. Our work provides evidence, for a vertebrate, that thresholds for alternative reproductive tactics differ genetically among populations and can be modelled as discontinuous reaction norms for age and size at maturity.

  12. Thresholds for Shifting Visually Perceived Eye Level Due to Incremental Pitches

    NASA Technical Reports Server (NTRS)

    Scott, Donald M.; Welch, Robert; Cohen, M. M.; Hill, Cyndi

    2001-01-01

    Visually perceived eye level (VPEL) was judged by subjects as they viewed a luminous grid pattern that was pitched in 2 or 5 deg increments between -20 deg and +20 deg. Subjects were dark adapted for 20 min and indicated VPEL by directing the beam of a laser pointer to the rear wall of a 1.25 m cubic pitch box that rotated about a horizontal axis at the midpoint of the rear wall. Data were analyzed by ANOVA and the Tukey HSD procedure. Results showed a 10.0 deg threshold for pitches P(sub i) above the reference pitch P(sub 0), and a -10.3 deg threshold for pitches P(sub i) below the reference pitch P(sub 0). Threshold data for pitches P(sub i) < P(sub 0) suggest an asymmetric threshold for VPEL below and above physical eye level.

  13. AREA RADIATION MONITOR

    DOEpatents

    Manning, F.W.; Groothuis, S.E.; Lykins, J.H.; Papke, D.M.

    1962-06-12

    An improved area radiation dose monitor is designed which is adapted to compensate continuously for background radiation below a threshold dose rate and to give warning when the dose integral of an above-threshold radiation excursion exceeds a selected value. This is accomplished by providing means for continuously charging an ionization chamber. The chamber provides a first current proportional to the incident radiation dose rate. Means are provided for generating a second current, including means for nulling out the first current with the second current at all values of the first current corresponding to dose rates below a selected threshold dose rate. The second current has a maximum value corresponding to that of the first current at the threshold dose rate. The excess of the first current over the second current, which occurs above the threshold, is integrated, and an alarm is given at a selected integrated value of the excess corresponding to a selected radiation dose. (AEC)
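
    A numerical analogue of the patented circuit, with the nulling current modeled as clamping the response at the threshold rate and an alarm on the integrated excess; all quantities are illustrative.

    ```python
    # Integrate only the dose rate in excess of the threshold; alarm when
    # that integral reaches the selected dose.
    def monitor(dose_rates, dt, threshold_rate, alarm_dose):
        integral = 0.0
        for r in dose_rates:                     # sampled dose rate
            excess = max(r - threshold_rate, 0.0)    # background nulled out
            integral += excess * dt
            if integral >= alarm_dose:
                return True                      # alarm condition reached
        return False
    ```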

  14. Ripple FPN reduced algorithm based on temporal high-pass filter and hardware implementation

    NASA Astrophysics Data System (ADS)

    Li, Yiyang; Li, Shuo; Zhang, Zhipeng; Jin, Weiqi; Wu, Lei; Jin, Minglei

    2016-11-01

    Cooled infrared detector arrays often suffer from undesired ripple fixed-pattern noise (FPN) when observing sky scenes. Ripple FPN seriously degrades the imaging quality of a thermal imager, especially for small target detection and tracking, and it is hard to eliminate with calibration-based techniques or current scene-based nonuniformity correction algorithms. In this paper, we present a modified spatial low-pass and temporal high-pass nonuniformity correction algorithm using an adaptive time-domain threshold (THP&GM). The threshold is designed to significantly reduce ghosting artifacts. We test the algorithm on real infrared sequences in comparison with several previously published methods. The algorithm not only corrects common FPN such as stripes effectively, but also has a clear advantage over current methods in terms of detail preservation and convergence speed, especially for ripple FPN correction. Furthermore, we demonstrate our architecture with a prototype built on a Xilinx Virtex-5 XC5VLX50T field-programmable gate array (FPGA). The hardware implementation of the algorithm has two advantages: (1) low resource consumption, and (2) small hardware delay (less than 20 lines). The hardware has been successfully applied in an actual system.
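
    The general temporal high-pass scheme with an adaptive update threshold for ghost suppression can be sketched as follows; the update rule and constants are assumptions for illustration, not the THP&GM algorithm itself.

    ```python
    # Per-pixel temporal low-pass estimate of the fixed pattern, updated only
    # where the frame-to-frame change is small (likely FPN, not scene motion);
    # the corrected output is the temporal high-pass residual.
    import numpy as np

    class TemporalHighPassNUC:
        def __init__(self, shape, alpha=0.01, k=2.0):
            self.lp = np.zeros(shape)        # per-pixel low-pass estimate
            self.alpha, self.k = alpha, k

        def correct(self, frame):
            diff = frame - self.lp
            thr = self.k * np.median(np.abs(diff))   # adaptive temporal threshold
            update = np.abs(diff) < thr              # freeze where scene moves
            self.lp[update] += self.alpha * diff[update]
            return frame - self.lp           # high-pass (corrected) output
    ```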

  15. Segmentation of fluorescence microscopy cell images using unsupervised mining.

    PubMed

    Du, Xian; Dua, Sumeet

    2010-05-28

    The accurate measurement of cell and nuclei contours is critical for the sensitive and specific detection of changes in normal cells in several medical informatics disciplines. Within microscopy, this task is facilitated using fluorescence cell stains, and segmentation is often the first step in such approaches. Due to the complex nature of cell tissues and problems inherent to microscopy, unsupervised clustering approaches can be incorporated in the segmentation of cells. In this study, we have developed and evaluated the performance of multiple unsupervised data mining techniques in cell image segmentation. We adapt four distinctive, yet complementary, methods for unsupervised learning: k-means clustering, EM, Otsu's threshold, and GMAC. Validation measures are defined, and the performance of the techniques is evaluated both quantitatively and qualitatively using synthetic and recently published real data. Experimental results demonstrate that k-means, Otsu's threshold, and GMAC perform similarly and give more precise segmentation results than EM. EM yields higher recall but lower precision, a consequence of under-segmentation due to its Gaussian model assumption. We also demonstrate that these methods need spatial information to segment complex real cell images with a high degree of efficacy, as expected in many medical informatics applications.
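
    Of the four methods compared, Otsu's threshold is the most compact to state; a sketch with scikit-image, assuming a single-channel fluorescence image:

    ```python
    # Global Otsu threshold: maximizes between-class variance of the histogram.
    from skimage.filters import threshold_otsu

    def otsu_segment(image):
        t = threshold_otsu(image)
        return image > t                     # foreground (cell) mask
    ```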

  16. Differential effect of visual motion adaptation upon visual cortical excitability.

    PubMed

    Lubeck, Astrid J A; Van Ombergen, Angelique; Ahmad, Hena; Bos, Jelte E; Wuyts, Floris L; Bronstein, Adolfo M; Arshad, Qadeer

    2017-03-01

    The objectives of this study were 1) to probe the effects of visual motion adaptation on early visual and V5/MT cortical excitability and 2) to investigate whether changes in cortical excitability following visual motion adaptation are related to the degree of visual dependency, i.e., an overreliance on visual cues compared with vestibular or proprioceptive cues. Participants were exposed to a roll motion visual stimulus before, during, and after visual motion adaptation. At these stages, 20 transcranial magnetic stimulation (TMS) pulses at phosphene threshold values were applied over early visual and V5/MT cortical areas, from which the probability of eliciting a phosphene was calculated. Before and after adaptation, participants aligned the subjective visual vertical in front of the roll motion stimulus as a marker of visual dependency. During adaptation, early visual cortex excitability decreased whereas V5/MT excitability increased. After adaptation, both early visual and V5/MT excitability were increased. The roll motion-induced tilt of the subjective visual vertical (visual dependence) was not influenced by visual motion adaptation and did not correlate with phosphene threshold or visual cortex excitability. We conclude that early visual and V5/MT cortical excitability is differentially affected by visual motion adaptation. Furthermore, excitability in the early or late visual cortex is not associated with an increase in visual reliance during spatial orientation. Our findings complement earlier studies that have probed visual cortical excitability following motion adaptation and highlight the differential role of the early visual cortex and V5/MT in visual motion processing. NEW & NOTEWORTHY We examined the influence of visual motion adaptation on visual cortex excitability and found a differential effect in V1/V2 compared with V5/MT. Changes in visual excitability following motion adaptation were not related to the degree of an individual's visual dependency. Copyright © 2017 the American Physiological Society.

  17. The feasibility of a fluidic respiratory flow meter

    NASA Technical Reports Server (NTRS)

    Neradka, V. F.; Bray, H. C., Jr.

    1974-01-01

    A study was undertaken to determine the feasibility of adapting a fluidic airspeed sensor for use as a respiratory flowmeter. A Pulmonary Function Testing Flowmeter was developed which should prove useful for mass screening applications. The fluidic sensor threshold level was not reduced sufficiently to permit its adaptation to measuring the low respiratory flow rates encountered in many respiratory disorders.

  18. VirSSPA- a virtual reality tool for surgical planning workflow.

    PubMed

    Suárez, C; Acha, B; Serrano, C; Parra, C; Gómez, T

    2009-03-01

    A virtual reality tool, called VirSSPA, was developed to optimize the planning of surgical processes. Segmentation algorithms were implemented for Computed Tomography (CT) images: a region growing procedure for soft tissues and a thresholding algorithm for bones. The algorithms operate semiautomatically, since they only require the user to select a seed with the mouse on each tissue to be segmented. The novelty of the paper is the adaptation of an enhancement method based on histogram thresholding applied to CT images for surgical planning, which simplifies subsequent segmentation. A substantial improvement of the virtual reality tool VirSSPA was obtained with these algorithms. VirSSPA was used to optimize surgical planning, to decrease the time spent on surgical planning, and to improve operative results. The success rate increases because surgeons are able to see the exact extent of the patient's ailment. This tool can decrease operating room time, thus resulting in reduced costs. Virtual simulation was effective for optimizing surgical planning, which could, consequently, result in improved outcomes with reduced costs.

  19. Acoustical source reconstruction from non-synchronous sequential measurements by Fast Iterative Shrinkage Thresholding Algorithm

    NASA Astrophysics Data System (ADS)

    Yu, Liang; Antoni, Jerome; Leclere, Quentin; Jiang, Weikang

    2017-11-01

    Acoustical source reconstruction is a typical inverse problem whose minimum frequency of reconstruction hinges on the size of the array, while the maximum frequency depends on the spacing between the microphones. For the sake of enlarging the frequency range of reconstruction and reducing the cost of the acquisition system, Cyclic Projection (CP), a method of sequential measurements without reference, was recently investigated (JSV, 2016, 372:31-49). In this paper, the Propagation-based Fast Iterative Shrinkage Thresholding Algorithm (Propagation-FISTA) is introduced, which improves CP in two aspects: (1) the number of acoustic sources is no longer needed, the only assumption being that of a "weakly sparse" eigenvalue spectrum; (2) the construction of the spatial basis is much easier and adaptive to practical acoustical measurement scenarios, benefiting from the introduction of a propagation-based spatial basis. The proposed Propagation-FISTA is first investigated with different simulations and experimental setups and is then illustrated with an industrial case.

  20. Human sensitivity to vertical self-motion.

    PubMed

    Nesti, Alessandro; Barnett-Cowan, Michael; Macneilage, Paul R; Bülthoff, Heinrich H

    2014-01-01

    Perceiving vertical self-motion is crucial for maintaining balance as well as for controlling an aircraft. Whereas heave absolute thresholds have been exhaustively studied, little work has been done in investigating how vertical sensitivity depends on motion intensity (i.e., differential thresholds). Here we measure human sensitivity for 1-Hz sinusoidal accelerations for 10 participants in darkness. Absolute and differential thresholds are measured for upward and downward translations independently at 5 different peak amplitudes ranging from 0 to 2 m/s². Overall vertical differential thresholds are higher than horizontal differential thresholds found in the literature. Psychometric functions are fit in linear and logarithmic space, with goodness of fit being similar in both cases. Differential thresholds are higher for upward as compared to downward motion and increase with stimulus intensity following a trend best described by two power laws. The power laws' exponents of 0.60 and 0.42 for upward and downward motion, respectively, deviate from Weber's Law in that thresholds increase less than expected at high stimulus intensity. We speculate that increased sensitivity at high accelerations and greater sensitivity to downward than upward self-motion may reflect adaptations to avoid falling.

  1. Cross counter-based adaptive assembly scheme in optical burst switching networks

    NASA Astrophysics Data System (ADS)

    Zhu, Zhi-jun; Dong, Wen; Le, Zi-chun; Chen, Wan-jun; Sun, Xingshu

    2009-11-01

    A novel adaptive assembly algorithm called Cross-counter Balance Adaptive Assembly Period (CBAAP) is proposed in this paper. The major difference between CBAAP and other adaptive assembly algorithms is that the threshold of CBAAP can be dynamically adjusted according to the cross counter and a step-length value. In the simulations, we compare the performance of CBAAP with that of three typical algorithms, FAP (Fixed Assembly Period), FBL (Fixed Burst Length) and MBMAP (Min-Burst-Length-Max-Assembly-Period), in terms of assembly period and burst loss probability. The simulation results demonstrate the effectiveness of our algorithm.

  2. Metal Artifact Reduction in X-ray Computed Tomography Using Computer-Aided Design Data of Implants as Prior Information.

    PubMed

    Ruth, Veikko; Kolditz, Daniel; Steiding, Christian; Kalender, Willi A

    2017-06-01

    The performance of metal artifact reduction (MAR) methods in x-ray computed tomography (CT) suffers from incorrect identification of metallic implants in the artifact-affected volumetric images. The aim of this study was to investigate potential improvements of state-of-the-art MAR methods by using prior information on geometry and material of the implant. The influence of a novel prior knowledge-based segmentation (PS) compared with threshold-based segmentation (TS) on 2 MAR methods (linear interpolation [LI] and normalized-MAR [NORMAR]) was investigated. The segmentation is the initial step of both MAR methods. Prior knowledge-based segmentation uses 3-dimensional registered computer-aided design (CAD) data as prior knowledge to estimate the correct position and orientation of the metallic objects. Threshold-based segmentation uses an adaptive threshold to identify metal. Subsequently, for LI and NORMAR, the selected voxels are projected into the raw data domain to mark metal areas. Attenuation values in these areas are replaced by different interpolation schemes followed by a second reconstruction. Finally, the previously selected metal voxels are replaced by the metal voxels determined by PS or TS in the initial reconstruction. First, we investigated in an elaborate phantom study if the knowledge of the exact implant shape extracted from the CAD data provided by the manufacturer of the implant can improve the MAR result. Second, the leg of a human cadaver was scanned using a clinical CT system before and after the implantation of an artificial knee joint. The results were compared regarding segmentation accuracy, CT number accuracy, and the restoration of distorted structures. The use of PS improved the efficacy of LI and NORMAR compared with TS. Artifacts caused by insufficient segmentation were reduced, and additional information was made available within the projection data. The estimation of the implant shape was more exact and not dependent on a threshold value. Consequently, the visibility of structures was improved when comparing the new approach to the standard method. This was further confirmed by improved CT value accuracy and reduced image noise. The PS approach based on prior implant information provides image quality which is superior to TS-based MAR, especially when the shape of the metallic implant is complex. The new approach can be useful for improving MAR methods and dose calculations within radiation therapy based on the MAR corrected CT images.

  3. Comparing adaptive procedures for estimating the psychometric function for an auditory gap detection task.

    PubMed

    Shen, Yi

    2013-05-01

    A subject's sensitivity to a stimulus variation can be studied by estimating the psychometric function. Generally speaking, three parameters of the psychometric function are of interest: the performance threshold, the slope of the function, and the rate at which attention lapses occur. In the present study, three psychophysical procedures were used to estimate the three-parameter psychometric function for an auditory gap detection task. These were an up-down staircase (up-down) procedure, an entropy-based Bayesian (entropy) procedure, and an updated maximum-likelihood (UML) procedure. Data collected from four young, normal-hearing listeners showed that while all three procedures provided similar estimates of the threshold parameter, the up-down procedure performed slightly better in estimating the slope and lapse rate for 200 trials of data collection. When the lapse rate was increased by mixing in random responses for the three adaptive procedures, the larger lapse rate was especially detrimental to the efficiency of the up-down procedure, and the UML procedure provided better estimates of the threshold and slope than did the other two procedures.
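
    For concreteness, a generic transformed up-down staircase of the kind compared above; a 1-up/2-down rule (converging near 70.7% correct) is shown, and the study's exact rules, step sizes, and stopping criteria are not reproduced.

    ```python
    # Transformed up-down staircase: 2 correct -> harder, 1 wrong -> easier.
    def staircase(respond, level=20.0, step=2.0, n_trials=200):
        levels, correct_run = [], 0
        for _ in range(n_trials):
            levels.append(level)
            if respond(level):               # True = correct response
                correct_run += 1
                if correct_run == 2:
                    level -= step            # make the task harder
                    correct_run = 0
            else:
                level += step                # make the task easier
                correct_run = 0
        return levels                        # reversal averages estimate threshold
    ```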

  4. The perception of verticality in lunar and Martian gravity conditions.

    PubMed

    de Winkel, Ksander N; Clément, Gilles; Groen, Eric L; Werkhoven, Peter J

    2012-10-31

    Although the mechanisms of neural adaptation to weightlessness and re-adaptation to Earth-gravity have received a lot of attention since the first human space flight, there is as yet little knowledge about how spatial orientation is affected by partial gravity, such as lunar gravity of 0.16 g or Martian gravity of 0.38 g. Up to now twelve astronauts have spent a cumulated time of approximately 80 h on the lunar surface, but no psychophysical experiments were conducted to investigate their perception of verticality. We investigated how the subjective vertical (SV) was affected by reduced gravity levels during the first European Parabolic Flight Campaign of Partial Gravity. In normal and hypergravity, subjects accurately aligned their SV with the gravitational vertical. However, when gravity was below a certain threshold, subjects aligned their SV with their body longitudinal axis. The value of the threshold varied considerably between subjects, ranging from 0.03 to 0.57 g. Despite the small number of subjects, there was a significant positive correlation of the threshold with subject age, which calls for further investigation. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  5. Assessment of the 2016 National Institute for Health and Care Excellence high-sensitivity troponin rule-out strategy

    PubMed Central

    Greenslade, Jaimi; Cullen, Louise; Than, Martin; Kendall, Jason; Body, Richard; Parsonage, William A; Khattab, Ahmed

    2018-01-01

    Objective We aimed to evaluate the limit of detection of high-sensitivity troponin (hs-cTn) and Thrombolysis In Myocardial Infarction (TIMI) score combination rule-out strategy suggested within the 2016 National Institute for Health and Care Excellence (NICE) Chest Pain of Recent Onset guidelines and establish the optimal TIMI score threshold for clinical use. Methods A pooled analysis of adult patients presenting to the emergency department with chest pain and a non-ischaemic ECG, recruited into six prospective studies, from Australia, New Zealand and the UK. We evaluated the sensitivity of TIMI score thresholds from 0 to 2 alongside hs-cTnT or hs-cTnI for the primary outcome of major adverse cardiac events within 30 days. Results Data were available for 3159 patients for hs-cTnT and 4532 for hs-cTnI; of these, 376 (11.9%) and 445 (9.8%) had major adverse cardiac events, respectively. Using a TIMI score of 0, the sensitivity for the primary outcome was 99.5% (95% CI 98.1% to 99.9%) alongside hs-cTnT and 98.9% (97.4% to 99.6%) alongside hs-cTnI, identifying 17.9% and 21.0% of patients as low risk, respectively. For a TIMI score ≤1, sensitivity was 98.9% (97.3% to 99.7%) alongside hs-cTnT and 98.4% (96.8% to 99.4%) alongside hs-cTnI, identifying 28.1% and 35.7% as low risk, respectively. For TIMI ≤2, meta-sensitivity was <98% with either assay. Conclusions Our findings support the rule-out strategy suggested by NICE. The TIMI score threshold suggested for clinical use is 0. The proportion of patients identified as low risk (18%–21%) and suitable for early discharge using this threshold may be sufficient to encourage change of practice. Trial registration numbers ADAPT observational study/IMPACT intervention trial ACTRN12611001069943. ADAPT-ADP randomised controlled trial ACTRN12610000766011. EDACS-ADP randomised controlled trial ACTRN12613000745741. TRUST observational study ISRCTN no. 21109279. PMID:28864718

  6. Perimetry update.

    PubMed

    Leydhecker, W

    1983-06-01

    The possible applications of computer-assisted static perimetry are examined after five years of personal controlled studies of different types of computerized perimeters. The common advantage of all computer-assisted perimeters is the elimination of the influence of the perimetrist on the results. However, some perimeters are fully computer-assisted and some are only partially so. Even after complete elimination of the perimetrist's influence, some physiological and psychological influences remain due to the patient and will cause fluctuations of the results. In perimeters, the density of the grid and the adaptive strategy of exact threshold measurement are important in obtaining reproducible results. A compromise between duration of the test and exactness has to be found. The most acceptable compromise seems to be an uneven distribution of stimuli, which form a denser grid in areas of special interest and a wider grid in areas less likely to be involved, combined with exact threshold measurements only in suspicious areas. Multiple stimulus presentation is not adequate. High sensitivity of screening is not a great advantage, unless combined with a high specificity. We have shown that using the same stimulus luminosity for center and periphery (a nonadaptive strategy) produces nonspecific results. Only an adaptive strategy can result in high sensitivity and specificity. Adaptive strategy means that the luminosity of the stimulus is adapted to the individual threshold curve of the visual field. In addition, the exact individual thresholds are bracketed by small up and down steps of variation in luminosity. In some cases, scanning programs with two levels of adaptation can be sufficient. The user of modern perimeters must understand such terms as: asb, dB, presentation time, and diameter of stimuli. Projection of stimuli is preferred to light emitting diodes or glass fiber optics. The programs (software) of the modern instruments are of the greatest importance, because the clinical experience that the perimetrist had to acquire in previous manual perimetry is incorporated in these programs. In the Octopus perimeter a delta program is available that differentiates patient fluctuations that may be insignificant from directed significant alterations of the field which might require alteration of therapy. The programs are listed for different computer-assisted perimeters, and their choice is described. The costs of the perimeters are also given. Many controlled clinical studies are quoted briefly where they are useful for understanding the discussion. A brief chapter deals with the reliability of the perimetric test.(ABSTRACT TRUNCATED AT 400 WORDS)

  7. Speeding Up Chemical Searches Using the Inverted Index: the Convergence of Chemoinformatics and Text Search Methods

    PubMed Central

    Nasr, Ramzi; Vernica, Rares; Li, Chen; Baldi, Pierre

    2012-01-01

    In ligand-based screening, retrosynthesis, and other chemoinformatics applications, one often seeks to search large databases of molecules in order to retrieve molecules that are similar to a given query. With the expanding size of molecular databases, the efficiency and scalability of data structures and algorithms for chemical searches are becoming increasingly important. Remarkably, both the chemoinformatics and information retrieval communities have converged on similar solutions whereby molecules or documents are represented by binary vectors, or fingerprints, indexing their substructures such as labeled paths for molecules and n-grams for text, with the same Jaccard-Tanimoto similarity measure. As a result, similarity search methods from one field can be adapted to the other. Here we adapt recent, state-of-the-art, inverted index methods from information retrieval to speed up similarity searches in chemoinformatics. Our results show a several-fold speed-up improvement over previous methods for both threshold searches and top-K searches. We also provide a mathematical analysis that allows one to predict the level of pruning achieved by the inverted index approach, and validate the quality of these predictions through simulation experiments. All results can be replicated using data freely downloadable from http://cdb.ics.uci.edu/. PMID:22462644
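
    The core inverted-index idea can be sketched as follows: only molecules sharing at least one fingerprint feature with the query are scored with the Jaccard-Tanimoto measure; real systems add tighter count-based pruning bounds than shown here.

    ```python
    # Threshold search over set-valued fingerprints with an inverted index.
    from collections import defaultdict

    def build_index(fingerprints):           # fingerprints: list of sets
        index = defaultdict(list)
        for i, fp in enumerate(fingerprints):
            for feature in fp:
                index[feature].append(i)
        return index

    def threshold_search(query, fingerprints, index, t=0.7):
        candidates = {i for f in query for i in index.get(f, ())}
        hits = []
        for i in candidates:
            fp = fingerprints[i]
            inter = len(query & fp)
            tanimoto = inter / (len(query) + len(fp) - inter)
            if tanimoto >= t:
                hits.append((i, tanimoto))
        return sorted(hits, key=lambda h: -h[1])
    ```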

  8. Methods of Muscle Activation Onset Timing Recorded During Spinal Manipulation.

    PubMed

    Currie, Stuart J; Myers, Casey A; Krishnamurthy, Ashok; Enebo, Brian A; Davidson, Bradley S

    2016-05-01

    The purpose of this study was to determine electromyographic threshold parameters that most reliably characterize the muscular response to spinal manipulation and compare 2 methods that detect muscle activity onset delay: the double-threshold method and cross-correlation method. Surface and indwelling electromyography were recorded during lumbar side-lying manipulations in 17 asymptomatic participants. Muscle activity onset delays in relation to the thrusting force were compared across methods and muscles using a generalized linear model. The threshold combinations that resulted in the lowest Detection Failures were the "8 SD-0 milliseconds" threshold (Detection Failures = 8) and the "8 SD-10 milliseconds" threshold (Detection Failures = 9). The average muscle activity onset delay for the double-threshold method across all participants was 149 ± 152 milliseconds for the multifidus and 252 ± 204 milliseconds for the erector spinae. The average onset delay for the cross-correlation method was 26 ± 101 for the multifidus and 67 ± 116 for the erector spinae. There were no statistical interactions, and a main effect of method demonstrated that the delays were higher when using the double-threshold method compared with cross-correlation. The threshold parameters that best characterized activity onset delays were an 8-SD amplitude and a 10-millisecond duration threshold. The double-threshold method correlated well with visual supervision of muscle activity. The cross-correlation method provides several advantages in signal processing; however, supervision was required for some results, negating this advantage. These results help standardize methods when recording neuromuscular responses of spinal manipulation and improve comparisons within and across investigations. Copyright © 2016 National University of Health Sciences. Published by Elsevier Inc. All rights reserved.
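
    A sketch of the double-threshold rule described above: onset is declared at the first sample that exceeds a baseline-derived amplitude threshold (8 SD here) and stays above it for the duration threshold (10 ms); the baseline window length is an assumption.

    ```python
    # Double-threshold EMG onset detection on a rectified signal.
    import numpy as np

    def onset_double_threshold(emg, fs, baseline_n, n_sd=8, min_dur_ms=10):
        base = np.abs(emg[:baseline_n])          # pre-thrust baseline window
        thr = base.mean() + n_sd * base.std()    # amplitude threshold
        need = max(1, int(fs * min_dur_ms / 1000))   # duration threshold
        run = 0
        for i, above in enumerate(np.abs(emg) > thr):
            run = run + 1 if above else 0
            if run >= need:
                return (i - need + 1) / fs       # onset time in seconds
        return None                              # detection failure
    ```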

  9. Bayesian methods for estimating GEBVs of threshold traits

    PubMed Central

    Wang, C-L; Ding, X-D; Wang, J-Y; Liu, J-F; Fu, W-X; Zhang, Z; Yin, Z-J; Zhang, Q

    2013-01-01

    Estimation of genomic breeding values is the key step in genomic selection (GS). Many methods have been proposed for continuous traits, but methods for threshold traits are still scarce. Here we introduce the threshold model into the framework of GS; specifically, we extend the three Bayesian methods BayesA, BayesB and BayesCπ on the basis of the threshold model for estimating genomic breeding values of threshold traits, and the extended methods are correspondingly termed BayesTA, BayesTB and BayesTCπ. Computing procedures of the three BayesT methods using a Markov Chain Monte Carlo algorithm are derived. A simulation study was performed to investigate the accuracy benefit of the presented methods for the genomic estimated breeding values (GEBVs) of threshold traits. Factors affecting the performance of the three BayesT methods were addressed. As expected, the three BayesT methods generally performed better than the corresponding normal Bayesian methods, in particular when the number of phenotypic categories was small. In the standard scenario (number of categories = 2, incidence = 30%, number of quantitative trait loci = 50, h² = 0.3), the accuracies were improved by 30.4, 2.4, and 5.7 percentage points, respectively. In most scenarios, BayesTB and BayesTCπ generated similar accuracies and both performed better than BayesTA. In conclusion, our work shows that the threshold model fits well for predicting GEBVs of threshold traits, and BayesTCπ is the method of choice for GS of threshold traits. PMID:23149458

  10. Seeing visual word forms: spatial summation, eccentricity and spatial configuration.

    PubMed

    Kao, Chien-Hui; Chen, Chien-Chung

    2012-06-01

    We investigated observers' performance in detecting and discriminating visual word forms as a function of target size and retinal eccentricity. The contrast threshold of visual words was measured with a spatial two-alternative forced-choice paradigm and a PSI adaptive method. The observers were to indicate which of two sides contained a stimulus in the detection task, and which contained a real character (as opposed to a pseudo- or non-character) in the discrimination task. When the target size was sufficiently small, the detection threshold of a character decreased as its size increased, with a slope of -1/2 on log-log coordinates, up to a critical size at all eccentricities and for all stimulus types. The discrimination threshold decreased with target size with a slope of -1 up to a critical size that was dependent on stimulus type and eccentricity. Beyond that size, the threshold decreased with a slope of -1/2 on log-log coordinates before leveling out. The data was well fit by a spatial summation model that contains local receptive fields (RFs) and a summation across these filters within an attention window. Our result implies that detection is mediated by local RFs smaller than any tested stimuli and thus detection performance is dominated by summation across receptive fields. On the other hand, discrimination is dominated by a summation within a local RF in the fovea but a cross RF summation in the periphery. Copyright © 2012 Elsevier Ltd. All rights reserved.

  11. Comparison of five segmentation tools for 18F-fluoro-deoxy-glucose-positron emission tomography-based target volume definition in head and neck cancer.

    PubMed

    Schinagl, Dominic A X; Vogel, Wouter V; Hoffmann, Aswin L; van Dalen, Jorn A; Oyen, Wim J; Kaanders, Johannes H A M

    2007-11-15

    Target-volume delineation for radiation treatment to the head and neck area traditionally is based on physical examination, computed tomography (CT), and magnetic resonance imaging. Additional molecular imaging with ¹⁸F-fluoro-deoxy-glucose (FDG)-positron emission tomography (PET) may improve definition of the gross tumor volume (GTV). In this study, five methods for tumor delineation on FDG-PET are compared with CT-based delineation. Seventy-eight patients with Stages II-IV squamous cell carcinoma of the head and neck area underwent coregistered CT and FDG-PET. The primary tumor was delineated on CT, and five PET-based GTVs were obtained: visual interpretation, applying an isocontour of a standardized uptake value of 2.5, using a fixed threshold of 40% and 50% of the maximum signal intensity, and applying an adaptive threshold based on the signal-to-background ratio. Absolute GTV volumes were compared, and overlap analyses were performed. The method of applying an isocontour of a standardized uptake value of 2.5 failed to provide successful delineation in 45% of cases. For the other PET delineation methods, the volume and shape of the GTV were heavily influenced by the choice of segmentation tool. On average, all threshold-based PET-GTVs were smaller than the CT-based GTV. Nevertheless, PET frequently detected significant tumor extension outside the GTV delineated on CT (15-34% of the PET volume). The choice of segmentation tool for target-volume definition of head and neck cancer based on FDG-PET images is not trivial, because it influences both the volume and shape of the resulting GTV. With adequate delineation, PET may add significantly to CT- and physical examination-based GTV definition.
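
    Two of the five delineation tools reduce to one-liners on a standardized-uptake-value array; a sketch (the visual, and adaptive signal-to-background methods need more machinery):

    ```python
    # Fixed-percentage and fixed-SUV threshold GTVs on a PET volume.
    import numpy as np

    def gtv_percent_max(suv, fraction=0.40):
        return suv >= fraction * suv.max()   # e.g. 40% of maximum intensity

    def gtv_suv_cutoff(suv, cutoff=2.5):
        return suv >= cutoff                 # SUV 2.5 isocontour
    ```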

  12. Parallel Density-Based Clustering for Discovery of Ionospheric Phenomena

    NASA Astrophysics Data System (ADS)

    Pankratius, V.; Gowanlock, M.; Blair, D. M.

    2015-12-01

    Ionospheric total electron content maps derived from global networks of dual-frequency GPS receivers can reveal a plethora of ionospheric features in real-time and are key to space weather studies and natural hazard monitoring. However, growing data volumes from expanding sensor networks are making manual exploratory studies challenging. As the community is heading towards Big Data ionospheric science, automation and Computer-Aided Discovery become indispensable tools for scientists. One problem of machine learning methods is that they require domain-specific adaptations in order to be effective and useful for scientists. Addressing this problem, our Computer-Aided Discovery approach allows scientists to express various physical models as well as perturbation ranges for parameters. The search space is explored through an automated system and parallel processing of batched workloads, which finds corresponding matches and similarities in empirical data. We discuss density-based clustering as a particular method we employ in this process. Specifically, we adapt Density-Based Spatial Clustering of Applications with Noise (DBSCAN). This algorithm groups geospatial data points based on density. Clusters of points can be of arbitrary shape, and the number of clusters is not predetermined by the algorithm; only two input parameters need to be specified: (1) a distance threshold, (2) a minimum number of points within that threshold. We discuss an implementation of DBSCAN for batched workloads that is amenable to parallelization on manycore architectures such as Intel's Xeon Phi accelerator with 60+ general-purpose cores. This manycore parallelization can cluster large volumes of ionospheric total electronic content data quickly. Potential applications for cluster detection include the visualization, tracing, and examination of traveling ionospheric disturbances or other propagating phenomena. Acknowledgments. We acknowledge support from NSF ACI-1442997 (PI V. Pankratius).
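
    The two DBSCAN inputs named above map directly onto scikit-learn's eps and min_samples parameters; a minimal sketch with placeholder coordinates:

    ```python
    # Density-based clustering of geospatial feature locations.
    import numpy as np
    from sklearn.cluster import DBSCAN

    points = np.random.rand(1000, 2)         # placeholder TEC feature positions
    labels = DBSCAN(eps=0.05, min_samples=10).fit_predict(points)
    # labels == -1 marks noise; other labels index arbitrarily shaped clusters.
    ```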

  13. Infrared small target detection based on directional zero-crossing measure

    NASA Astrophysics Data System (ADS)

    Zhang, Xiangyue; Ding, Qinghai; Luo, Haibo; Hui, Bin; Chang, Zheng; Zhang, Junchao

    2017-12-01

    Infrared small target detection under complex backgrounds and low signal-to-clutter ratio (SCR) conditions is of great significance to the development of precision guidance and infrared surveillance. In order to detect targets precisely and extract them from intricate clutter effectively, a detection method based on a zero-crossing saliency (ZCS) map is proposed. The original image is first decomposed into different first-order directional derivative (FODD) maps by using FODD filters. The ZCS map is then obtained by fusing all directional zero-crossing points. At last, an adaptive threshold is adopted to segment targets from the ZCS map. Experimental results on a series of images show that our method is effective and robust for detection under complex backgrounds. Moreover, compared with five other state-of-the-art methods, our method achieves better performance in terms of detection rate, SCR gain and background suppression factor.

  14. Assessment of body fat based on potential function clustering segmentation of computed tomography images

    NASA Astrophysics Data System (ADS)

    Zhang, Lixin; Lin, Min; Wan, Baikun; Zhou, Yu; Wang, Yizhong

    2005-01-01

    In this paper, a new method for assessing body fat and its distribution is proposed based on CT image processing. Because CT is more sensitive to slight differences in attenuation than standard radiography, it depicts soft tissues with better clarity, and body fat occupies a distinct gray-level range compared with its neighboring tissues in a CT image. An effective multi-threshold image segmentation method based on potential function clustering is used to deal with multiple peaks in the gray-level histogram of a CT image. The abdominal CT images of 14 volunteers of differing fatness were processed with the proposed method. Not only is the total fat area obtained, but subcutaneous fat is also differentiated from intra-abdominal fat. The results show the adaptability and stability of the proposed method, which makes it a useful tool for diagnosing obesity.

  15. HIGH DIMENSIONAL COVARIANCE MATRIX ESTIMATION IN APPROXIMATE FACTOR MODELS.

    PubMed

    Fan, Jianqing; Liao, Yuan; Mincheva, Martina

    2011-01-01

    The variance-covariance matrix plays a central role in the inferential theories of high-dimensional factor models in finance and economics. Popular regularization methods that directly exploit sparsity are not applicable to many financial problems. Classical methods of estimating the covariance matrix are based on strict factor models, which assume independent idiosyncratic components. This assumption, however, is restrictive in practical applications. By assuming a sparse error covariance matrix, we allow the presence of cross-sectional correlation even after taking out common factors, which enables us to combine the merits of both approaches. We estimate the sparse covariance using the adaptive thresholding technique as in Cai and Liu (2011), taking into account the fact that direct observations of the idiosyncratic components are unavailable. The impact of high dimensionality on covariance matrix estimation based on the factor structure is then studied.
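
    A minimal sketch of entry-wise adaptive soft thresholding in the spirit of Cai and Liu (2011): each entry of the sample covariance receives its own threshold, scaled by an estimate of that entry's sampling variance; the constant delta is a tuning assumption.

    ```python
    # Adaptive soft thresholding of a sample covariance matrix.
    import numpy as np

    def adaptive_threshold_cov(X, delta=2.0):    # X: (n, p) data matrix
        n, p = X.shape
        Xc = X - X.mean(axis=0)
        S = Xc.T @ Xc / n                        # sample covariance
        # theta[i, j]: variance estimate of entry S[i, j] (O(n p^2) memory).
        theta = ((Xc[:, :, None] * Xc[:, None, :] - S) ** 2).mean(axis=0)
        lam = delta * np.sqrt(theta * np.log(p) / n)     # entry-wise thresholds
        R = np.sign(S) * np.maximum(np.abs(S) - lam, 0.0)
        np.fill_diagonal(R, np.diag(S))          # keep variances untouched
        return R
    ```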

  16. A long-term target detection approach in infrared image sequence

    NASA Astrophysics Data System (ADS)

    Li, Hang; Zhang, Qi; Li, Yuanyuan; Wang, Liqiang

    2015-12-01

    An automatic target detection method for long-term infrared (IR) image sequences from a moving platform is proposed. Firstly, based on non-linear histogram equalization, target candidates are segmented coarse-to-fine by using two self-adaptive thresholds generated in the intensity space. The real target is then captured via two different selection approaches. At the beginning of the image sequence, the genuine target, which has little texture, is discriminated from other candidates by using a contrast-based confidence measure. When the target becomes larger, we instead apply an online EM method to iteratively estimate and update the distributions of the target's size and position based on prior detection results, and then recognize the genuine target as the one that satisfies both the size and position constraints. Experimental results demonstrate that the presented method is accurate, robust and efficient.

  17. A novel segmentation method for uneven lighting image with noise injection based on non-local spatial information and intuitionistic fuzzy entropy

    NASA Astrophysics Data System (ADS)

    Yu, Haiyan; Fan, Jiulun

    2017-12-01

    Local thresholding methods for uneven lighting image segmentation always have the limitations that they are very sensitive to noise injection and that the performance relies largely upon the choice of the initial window size. This paper proposes a novel algorithm for segmenting uneven lighting images with strong noise injection based on non-local spatial information and intuitionistic fuzzy theory. We regard an image as a gray wave in three-dimensional space, which is composed of many peaks and troughs, and these peaks and troughs can divide the image into many local sub-regions in different directions. Our algorithm computes the relative characteristic of each pixel located in the corresponding sub-region based on fuzzy membership function and uses it to replace its absolute characteristic (its gray level) to reduce the influence of uneven light on image segmentation. At the same time, the non-local adaptive spatial constraints of pixels are introduced to avoid noise interference with the search of local sub-regions and the computation of local characteristics. Moreover, edge information is also taken into account to avoid false peak and trough labeling. Finally, a global method based on intuitionistic fuzzy entropy is employed on the wave transformation image to obtain the segmented result. Experiments on several test images show that the proposed method has excellent capability of decreasing the influence of uneven illumination on images and noise injection and behaves more robustly than several classical global and local thresholding methods.

  18. An Unorthodox Sensory Adaptation Site in the Escherichia coli Serine Chemoreceptor

    PubMed Central

    Han, Xue-Sheng

    2014-01-01

    The serine chemoreceptor of Escherichia coli contains four canonical methylation sites for sensory adaptation that lie near intersubunit helix interfaces of the Tsr homodimer. An unexplored fifth methylation site, E502, lies at an intrasubunit helix interface closest to the HAMP domain that controls input-output signaling in methyl-accepting chemotaxis proteins. We analyzed, with in vivo Förster resonance energy transfer (FRET) kinase assays, the serine thresholds and response cooperativities of Tsr receptors with different mutationally imposed modifications at sites 1 to 4 and/or at site 5. Tsr variants carrying E or Q at residue 502, in combination with unmodifiable D and N replacements at adaptation sites 1 to 4, underwent both methylation and demethylation/deamidation, although detection of the latter modifications required elevated intracellular levels of CheB. These Tsr variants could not mediate a chemotactic response to serine spatial gradients, demonstrating that adaptational modifications at E502 alone are not sufficient for Tsr function. Moreover, E502 is not critical for Tsr function, because only two amino acid replacements at this residue abrogated serine chemotaxis: Tsr-E502P had extreme kinase-off output and Tsr-E502I had extreme kinase-on output. These large threshold shifts are probably due to the unique HAMP-proximal location of methylation site 5. However, a methylation-mimicking glutamine at any Tsr modification site raised the serine response threshold, suggesting that all sites influence signaling by the same general mechanism, presumably through changes in packing stability of the methylation helix bundle. These findings are consistent with control of input-output signaling in Tsr through dynamic interplay of the structural stabilities of the HAMP and methylation bundles. PMID:24272777

  19. Maximal lipidic power in high competitive level triathletes and cyclists

    PubMed Central

    González‐Haro, C; Galilea, P A; González‐de‐Suso, J M; Drobnic, F; Escanero, J F

    2007-01-01

    Objective To describe the fat‐oxidation rate in triathlon and different modalities of endurance cycling. Methods 34 endurance athletes (15 male triathletes, 4 female triathletes, 11 road cyclists and 4 male mountain bikers) underwent a progressive cycloergometer test until exhaustion. Relative work intensity (VO2max), minimal lactate concentration (La−min), lactic threshold, individual lactic threshold (ILT), maximal fat‐oxidation rate (Fatmax, Fatmax zone) and minimal fat‐oxidation rate (Fatmin) were determined in each of the groups and were compared by means of one‐way analysis of variance. Results No significant differences were found for Fatmax, Fatmin or for the Fatmax zone expressed as fat oxidation rate (g/min). Intensities −20%, −10% and −5% Fatmax were significantly lower for mountain bikers with respect to road cyclists and female triathletes, expressed as % VO2max. Intensities 20%, 10% and 5% Fatmax were significantly lower for mountain bikers with respect to male triathletes and female triathletes, and for male triathletes in comparison with female triathletes, expressed as % VO2max. Lactic threshold and La−min did not show significant differences with respect to Fatmax. Lactic threshold was found at the same VO2max with respect to the higher part of the Fatmax zone, and La−min at the same VO2max with respect to the lower part of the Fatmax zone. Conclusions The VO2max of Fatmax and the Fatmax zone may explain the different endurance adaptations of the athletes according to their sporting discipline. Lactic threshold and La−min were found at different relative work intensities with respect to those of Fatmax even though they belonged to the Fatmax zone. PMID:17062656

  20. Prediction of pKa Values for Neutral and Basic Drugs based on Hybrid Artificial Intelligence Methods.

    PubMed

    Li, Mengshan; Zhang, Huaijing; Chen, Bingsheng; Wu, Yan; Guan, Lixin

    2018-03-05

    The pKa value of drugs is an important parameter in drug design and pharmacology. In this paper, an improved particle swarm optimization (PSO) algorithm based on population entropy diversity was proposed. In the improved algorithm, when the population entropy was higher than the set maximum threshold, the convergence strategy was adopted; when the population entropy was lower than the set minimum threshold, the divergence strategy was adopted; when the population entropy was between the maximum and minimum thresholds, the self-adaptive adjustment strategy was maintained. The improved PSO algorithm was applied to the training of a radial basis function artificial neural network (RBF ANN) model and the selection of molecular descriptors. A quantitative structure-activity relationship model based on an RBF ANN trained by the improved PSO algorithm was proposed to predict the pKa values of 74 neutral and basic drugs and then validated on another dataset containing 20 molecules. The validation results showed that the model had good prediction performance: the absolute average relative error, root mean square error, and squared correlation coefficient were 0.3105, 0.0411, and 0.9685, respectively. The model can be used as a reference for exploring other quantitative structure-activity relationships.
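
    The entropy-gated strategy switch is easy to make concrete. In this minimal sketch the thresholds h_min and h_max and the histogram-based entropy estimate are assumptions; the abstract does not specify the exact estimator.

    ```python
    import numpy as np

    def population_entropy(positions, bins=10):
        """Shannon entropy of particle positions, pooled across dimensions."""
        hist, _ = np.histogram(positions, bins=bins)
        p = hist / hist.sum()
        p = p[p > 0]
        return -np.sum(p * np.log(p))

    def choose_strategy(entropy, h_min, h_max):
        """Map the diversity measure to one of the three update strategies."""
        if entropy > h_max:
            return "converge"      # population too scattered: pull toward the best
        elif entropy < h_min:
            return "diverge"       # population collapsed: re-inject diversity
        return "self-adaptive"     # otherwise keep the standard PSO update
    ```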

  1. Derivation of groundwater threshold values for analysis of impacts predicted at potential carbon sequestration sites

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Last, G. V.; Murray, C. J.; Bott, Y.

    2016-06-01

    The U.S. Department of Energy's (DOE's) National Risk Assessment Partnership (NRAP) Project is developing reduced-order models to evaluate potential impacts to groundwater quality due to carbon dioxide (CO2) or brine leakage, should it occur from deep CO2 storage reservoirs. These efforts targeted two classes of aquifer: an unconfined fractured carbonate aquifer based on the Edwards Aquifer in Texas, and a confined alluvium aquifer based on the High Plains Aquifer in Kansas. Hypothetical leakage scenarios focus on wellbores as the most likely conduits from the storage reservoir to an underground source of drinking water (USDW). To facilitate evaluation of potential degradation of the USDWs, threshold values, below which there would be no predicted impacts, were determined for each of these two aquifer systems. These threshold values were calculated using an interwell approach for determining background groundwater concentrations that is an adaptation of methods described in the U.S. Environmental Protection Agency's Unified Guidance for Statistical Analysis of Groundwater Monitoring Data at RCRA Facilities. Results demonstrate the importance of establishing baseline groundwater quality conditions that capture the spatial and temporal variability of the USDWs prior to CO2 injection and storage.
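
    The Unified Guidance offers several constructions for background thresholds; a common parametric one is an upper prediction limit on pooled background data. The sketch below shows that construction in outline only — the guidance's interwell pooling rules, normality checks, and retesting provisions are omitted, and the sample concentrations are hypothetical.

    ```python
    import numpy as np
    from scipy import stats

    def upper_prediction_limit(background, conf=0.95):
        """95% upper prediction limit for one future observation,
        assuming approximately normal background data."""
        x = np.asarray(background, dtype=float)
        n = x.size
        t = stats.t.ppf(conf, df=n - 1)
        return x.mean() + t * x.std(ddof=1) * np.sqrt(1.0 + 1.0 / n)

    # Hypothetical pre-injection background concentrations (mg/L):
    tds = [410.0, 395.0, 428.0, 402.0, 417.0, 391.0, 436.0]
    print(f"threshold = {upper_prediction_limit(tds):.1f} mg/L")
    ```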

  2. The Harvard Beat Assessment Test (H-BAT): a battery for assessing beat perception and production and their dissociation.

    PubMed

    Fujii, Shinya; Schlaug, Gottfried

    2013-01-01

    Humans have the abilities to perceive, produce, and synchronize with a musical beat, yet there are widespread individual differences. To investigate these abilities and to determine if a dissociation between beat perception and production exists, we developed the Harvard Beat Assessment Test (H-BAT), a new battery that assesses beat perception and production abilities. H-BAT consists of four subtests: (1) music tapping test (MTT), (2) beat saliency test (BST), (3) beat interval test (BIT), and (4) beat finding and interval test (BFIT). MTT measures the degree of tapping synchronization with the beat of music, whereas BST, BIT, and BFIT measure perception and production thresholds via psychophysical adaptive staircase methods. We administered the H-BAT on thirty individuals and investigated the performance distribution across these individuals in each subtest. There was a wide distribution in individual abilities to tap in synchrony with the beat of music during the MTT. The degree of synchronization consistency was negatively correlated with thresholds in the BST, BIT, and BFIT: a lower degree of synchronization was associated with higher perception and production thresholds. H-BAT can be a useful tool in determining an individual's ability to perceive and produce a beat within a single session.
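
    The abstract does not spell out which staircase rule H-BAT uses, so the sketch below implements a generic 2-down-1-up staircase (which converges near the 70.7%-correct point), taking the threshold as the mean of the reversal levels; the starting level, step size, and simulated observer are all illustrative.

    ```python
    import random

    def two_down_one_up(respond, start=20.0, step=2.0, reversals=8):
        """Generic 2-down-1-up staircase: the stimulus difference decreases
        after two consecutive correct responses and increases after one error;
        the threshold estimate is the mean level at the reversal points."""
        level, correct_run, direction = start, 0, -1
        reversal_levels = []
        while len(reversal_levels) < reversals:
            if respond(level):                    # True = correct response
                correct_run += 1
                if correct_run == 2:
                    correct_run = 0
                    if direction == +1:           # descending turn = reversal
                        reversal_levels.append(level)
                    direction = -1
                    level = max(level - step, 0.1)
            else:
                correct_run = 0
                if direction == -1:               # ascending turn = reversal
                    reversal_levels.append(level)
                direction = +1
                level += step
        return sum(reversal_levels) / len(reversal_levels)

    # Simulated observer whose true threshold is near 8 units:
    estimate = two_down_one_up(lambda d: d + random.gauss(0.0, 2.0) > 8.0)
    ```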

  3. The Harvard Beat Assessment Test (H-BAT): a battery for assessing beat perception and production and their dissociation

    PubMed Central

    Fujii, Shinya; Schlaug, Gottfried

    2013-01-01

    Humans have the abilities to perceive, produce, and synchronize with a musical beat, yet there are widespread individual differences. To investigate these abilities and to determine if a dissociation between beat perception and production exists, we developed the Harvard Beat Assessment Test (H-BAT), a new battery that assesses beat perception and production abilities. H-BAT consists of four subtests: (1) music tapping test (MTT), (2) beat saliency test (BST), (3) beat interval test (BIT), and (4) beat finding and interval test (BFIT). MTT measures the degree of tapping synchronization with the beat of music, whereas BST, BIT, and BFIT measure perception and production thresholds via psychophysical adaptive staircase methods. We administered the H-BAT on thirty individuals and investigated the performance distribution across these individuals in each subtest. There was a wide distribution in individual abilities to tap in synchrony with the beat of music during the MTT. The degree of synchronization consistency was negatively correlated with thresholds in the BST, BIT, and BFIT: a lower degree of synchronization was associated with higher perception and production thresholds. H-BAT can be a useful tool in determining an individual's ability to perceive and produce a beat within a single session. PMID:24324421

  4. Dual-threshold segmentation using Arimoto entropy based on chaotic bee colony optimization

    NASA Astrophysics Data System (ADS)

    Li, Li

    2018-03-01

    In order to extract targets from complex backgrounds more quickly and accurately, and to further improve defect detection, a dual-threshold segmentation method using Arimoto entropy based on chaotic bee colony optimization was proposed. Firstly, single-threshold selection based on Arimoto entropy was extended to dual-threshold selection in order to separate the target from the background more accurately. Then the intermediate variables in the Arimoto entropy dual-threshold selection formulae were calculated recursively, effectively eliminating redundant computation and reducing the amount of calculation. Finally, the local search phase of the artificial bee colony algorithm was improved with a chaotic sequence based on the tent map. The fast search for two optimal thresholds was achieved using the improved bee colony optimization algorithm, noticeably accelerating the search. A large number of experimental results show that, compared with existing segmentation methods such as multi-threshold segmentation using maximum Shannon entropy, two-dimensional Shannon entropy segmentation, two-dimensional Tsallis gray entropy segmentation and multi-threshold segmentation using reciprocal gray entropy, the proposed method segments the target more quickly and accurately with superior segmentation effect, proving to be a fast and effective method for image segmentation.
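
    The tent map that drives the improved local search is one line of arithmetic; how the chaotic values perturb candidate thresholds is our assumption, since the abstract leaves the coupling unspecified.

    ```python
    import numpy as np

    def tent_map(n, x0=0.37):
        """Chaotic sequence x_{k+1} = 2*x_k if x_k < 0.5 else 2*(1 - x_k).
        Note: in double precision the exact tent map eventually collapses to 0,
        so practical codes keep n small or add a tiny perturbation."""
        xs, x = np.empty(n), x0
        for i in range(n):
            x = 2.0 * x if x < 0.5 else 2.0 * (1.0 - x)
            xs[i] = x
        return xs

    def chaotic_neighbor(thresholds, lo, hi, chaos):
        """One plausible chaotic local-search move: perturb a candidate
        dual-threshold pair with a tent-map-driven step instead of a uniform
        random one. The +/-10%-of-range step size is an assumption."""
        step = (2.0 * chaos - 1.0) * 0.1 * (hi - lo)
        return np.clip(np.asarray(thresholds, float) + step, lo, hi)
    ```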

  5. Heat-related deaths in hot cities: estimates of human tolerance to high temperature thresholds.

    PubMed

    Harlan, Sharon L; Chowell, Gerardo; Yang, Shuo; Petitti, Diana B; Morales Butler, Emmanuel J; Ruddell, Benjamin L; Ruddell, Darren M

    2014-03-20

    In this study we characterized the relationship between temperature and mortality in central Arizona desert cities that have an extremely hot climate. Relationships between daily maximum apparent temperature (ATmax) and mortality for eight condition-specific causes and all-cause deaths were modeled for all residents and separately for males and females ages <65 and ≥ 65 during the months May-October for years 2000-2008. The most robust relationship was between ATmax on day of death and mortality from direct exposure to high environmental heat. For this condition-specific cause of death, the heat thresholds in all gender and age groups (ATmax = 90-97 °F; 32.2-36.1 °C) were below local median seasonal temperatures in the study period (ATmax = 99.5 °F; 37.5 °C). Heat threshold was defined as ATmax at which the mortality ratio begins an exponential upward trend. Thresholds were identified in younger and older females for cardiac disease/stroke mortality (ATmax = 106 and 108 °F; 41.1 and 42.2 °C) with a one-day lag. Thresholds were also identified for mortality from respiratory diseases in older people (ATmax = 109 °F; 42.8 °C) and for all-cause mortality in females (ATmax = 107 °F; 41.7 °C) and males <65 years (ATmax = 102 °F; 38.9 °C). Heat-related mortality in a region that has already made some adaptations to predictable periods of extremely high temperatures suggests that more extensive and targeted heat-adaptation plans for climate change are needed in cities worldwide.

  6. Proposal on Calculation of Ventilation Threshold Using Non-contact Respiration Measurement with Pattern Light Projection

    NASA Astrophysics Data System (ADS)

    Aoki, Hirooki; Ichimura, Shiro; Fujiwara, Toyoki; Kiyooka, Satoru; Koshiji, Kohji; Tsuzuki, Keishi; Nakamura, Hidetoshi; Fujimoto, Hideo

    We proposed a calculation method for the ventilation threshold using non-contact respiration measurement with dot-matrix pattern light projection under pedaling exercise. The validity and effectiveness of our proposed method are examined by simultaneous measurement with an expiration gas analyzer. The experimental results showed a correlation between the quasi ventilation thresholds calculated by our proposed method and the ventilation thresholds calculated by the expiration gas analyzer. This result indicates the possibility of non-contact measurement of the ventilation threshold by the proposed method.

  7. Optimal thresholds for the estimation of area rain-rate moments by the threshold method

    NASA Technical Reports Server (NTRS)

    Short, David A.; Shimizu, Kunio; Kedem, Benjamin

    1993-01-01

    Optimization of the threshold method, achieved by determination of the threshold that maximizes the correlation between an area-average rain-rate moment and the area coverage of rain rates exceeding the threshold, is demonstrated empirically and theoretically. Empirical results for a sequence of GATE radar snapshots show optimal thresholds of 5 and 27 mm/h for the first and second moments, respectively. Theoretical optimization of the threshold method by the maximum-likelihood approach of Kedem and Pavlopoulos (1991) predicts optimal thresholds near 5 and 26 mm/h for lognormally distributed rain rates with GATE-like parameters. The agreement between theory and observations suggests that the optimal threshold can be understood as arising due to sampling variations, from snapshot to snapshot, of a parent rain-rate distribution. Optimal thresholds for gamma and inverse Gaussian distributions are also derived and compared.
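
    The empirical optimization described here reduces to a one-dimensional scan, sketched below for a list of radar snapshots; the candidate threshold grid is arbitrary.

    ```python
    import numpy as np

    def optimal_threshold(snapshots, moment=1, taus=np.arange(1.0, 50.0)):
        """Pick the threshold whose fractional coverage best correlates with
        the area-average rain-rate moment across snapshots (2-D arrays, mm/h)."""
        m = np.array([np.mean(s.astype(float) ** moment) for s in snapshots])
        best = (None, -np.inf)
        for tau in taus:
            cov = np.array([np.mean(s > tau) for s in snapshots])
            if cov.std() == 0:        # threshold never/always exceeded
                continue
            r = np.corrcoef(m, cov)[0, 1]
            if r > best[1]:
                best = (tau, r)
        return best                   # (optimal threshold, its correlation)
    ```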

  8. Thermal detection thresholds in 5-year-old preterm born children; IQ does matter.

    PubMed

    de Graaf, Joke; Valkenburg, Abraham J; Tibboel, Dick; van Dijk, Monique

    2012-07-01

    Experiencing pain as a newborn may have consequences for one's somatosensory perception later in life. Children's perception of cold and warm stimuli may be determined with the Thermal Sensory Analyzer (TSA) device by two different methods. This pilot study in 5-year-old children born preterm aimed at establishing whether the TSA method of limits, which is dependent on reaction time, and the method of levels, which is independent of reaction time, would yield different cold and warm detection thresholds. The second aim was to establish possible associations between intellectual ability and the detection thresholds obtained with either method. A convenience sample was drawn from the participants in an ongoing 5-year follow-up study of a randomized controlled trial on effects of morphine during mechanical ventilation. Thresholds were assessed using both methods and statistically compared, and possible associations between the child's intelligence quotient (IQ) and threshold levels were analyzed. The method of levels yielded more sensitive thresholds than did the method of limits, i.e. mean (SD) cold detection thresholds of 30.3 (1.4) versus 28.4 (1.7) (Cohen's d = 1.2, P = 0.001) and warm detection thresholds of 33.9 (1.9) versus 35.6 (2.1) (Cohen's d = 0.8, P = 0.04). IQ was statistically significantly associated only with the detection thresholds obtained with the method of limits (cold: r = 0.64, warm: r = -0.52). The TSA method of levels is to be preferred over the method of limits in 5-year-old preterm born children, as it establishes more sensitive detection thresholds and is independent of IQ.

  9. S-cone discrimination in the presence of two adapting fields: data and model

    PubMed Central

    Cao, Dingcai

    2014-01-01

    This study investigated S-cone discrimination using a test annulus surrounded by an inner and outer adapting field with systematic manipulation of the adapting l = L/(L + M) or s = S/(L + M) chromaticities. The results showed that different adapting l chromaticities altered S-cone discrimination for a high adapting s chromaticity due to parvocellular input to the koniocellular pathway. In addition, S-cone discrimination was determined by the combined spectral signals arising from both adapting fields. The “white” adapting field or an adapting field with a different l chromaticity from the other fields was more likely to have a stronger influence on discrimination thresholds. These results indicated that the two cardinal axes are not independent in S-cone discrimination, and the two adapting fields jointly contribute to S-cone discrimination through a cortical summation mechanism. PMID:24695204

  10. The Regularity of Sustained Firing Reveals Two Populations of Slowly Adapting Touch Receptors in Mouse Hairy Skin

    PubMed Central

    Wellnitz, Scott A.; Lesniak, Daine R.; Gerling, Gregory J.

    2010-01-01

    Touch is initiated by diverse somatosensory afferents that innervate the skin. The ability to manipulate and classify receptor subtypes is prerequisite for elucidating sensory mechanisms. Merkel cell–neurite complexes, which distinguish shapes and textures, are experimentally tractable mammalian touch receptors that mediate slowly adapting type I (SAI) responses. The assessment of SAI function in mutant mice has been hindered because previous studies did not distinguish SAI responses from slowly adapting type II (SAII) responses, which are thought to arise from different end organs, such as Ruffini endings. Thus we sought methods to discriminate these afferent types. We developed an epidermis-up ex vivo skin–nerve chamber to record action potentials from afferents while imaging Merkel cells in intact receptive fields. Using model-based cluster analysis, we found that two types of slowly adapting receptors were readily distinguished based on the regularity of touch-evoked firing patterns. We identified these clusters as SAI (coefficient of variation = 0.78 ± 0.09) and SAII responses (0.21 ± 0.09). The identity of SAI afferents was confirmed by recording from transgenic mice with green fluorescent protein–expressing Merkel cells. SAI receptive fields always contained fluorescent Merkel cells (n = 10), whereas SAII receptive fields lacked these cells (n = 5). Consistent with reports from other vertebrates, mouse SAI and SAII responses arise from afferents exhibiting similar conduction velocities, receptive field sizes, mechanical thresholds, and firing rates. These results demonstrate that mice, like other vertebrates, have two classes of slowly adapting light-touch receptors, identify a simple method to distinguish these populations, and extend the utility of skin–nerve recordings for genetic dissection of touch receptor mechanisms. PMID:20393068

  11. Sensitivity Differences in Fish Offer Near-Infrared Vision as an Adaptable Evolutionary Trait

    PubMed Central

    Shcherbakov, Denis; Knörzer, Alexandra; Espenhahn, Svenja; Hilbig, Reinhard; Haas, Ulrich; Blum, Martin

    2013-01-01

    Near-infrared (NIR) light constitutes an integrated part of solar radiation. The principal ability to sense NIR under laboratory conditions has previously been demonstrated in fish. The availability of NIR in aquatic habitats, and thus its potential use as a cue for distinct behaviors such as orientation and detection of prey, however, depends on physical and environmental parameters. In clear water, blue and green light represents the dominating part of the illumination. In turbid waters, in contrast, the relative content of red and NIR radiation is enhanced, due to increased scattering and absorption of short and middle range wavelengths by suspended particles and dissolved colored materials. We have studied NIR detection thresholds using a phototactic swimming assay in five fish species, which are exposed to different NIR conditions in their natural habitats. Nile and Mozambique tilapia, which inhabit waters with increased turbidity, displayed the highest spectral sensitivity, with thresholds at wavelengths above 930 nm. Zebrafish, guppy and green swordtail, which prefer clearer waters, revealed significantly lower thresholds of spectral sensitivity with 825–845 nm for green swordtail and 845–910 nm for zebrafish and guppy. The present study revealed a clear correlation between NIR sensation thresholds and availability of NIR in the natural habitats, suggesting that NIR vision, as an integral part of the whole spectrum of visual abilities, can serve as an evolutionarily adaptable trait in fish. PMID:23691215

  12. An automatic segmentation method of a parameter-adaptive PCNN for medical images.

    PubMed

    Lian, Jing; Shi, Bin; Li, Mingcong; Nan, Ziwei; Ma, Yide

    2017-09-01

    Since the pre-processing and initial segmentation steps in medical images directly affect the final segmentation of the regions of interest, an automatic segmentation method based on a parameter-adaptive pulse-coupled neural network is proposed to integrate these two steps into one. The method has low computational complexity for different kinds of medical images and high segmentation precision. It comprises four steps. Firstly, an optimal histogram threshold is used to determine the parameter [Formula: see text] for different kinds of images. Secondly, we acquire the parameter [Formula: see text] according to a simplified pulse-coupled neural network (SPCNN). Thirdly, we redefine the parameter V of the SPCNN model by the sub-intensity distribution range of firing pixels. Fourthly, we add an offset [Formula: see text] to improve initial segmentation precision. Compared with state-of-the-art algorithms, the new method achieves comparable performance in experiments on ultrasound images of the gallbladder and gallstones, magnetic resonance images of the left ventricle, and mammogram images of the left and right breast, with overall metrics UM of 0.9845, CM of 0.8142, and TM of 0.0726. The algorithm has great potential for the pre-processing and initial segmentation steps in various medical images, a premise for assisting physicians to detect and diagnose clinical cases.

  13. A comparison of viscous-plastic sea ice solvers with and without replacement pressure

    NASA Astrophysics Data System (ADS)

    Kimmritz, Madlen; Losch, Martin; Danilov, Sergey

    2017-07-01

    Recent developments of the explicit elastic-viscous-plastic (EVP) solvers call for a new comparison with implicit solvers for the equations of viscous-plastic sea ice dynamics. In Arctic sea ice simulations, the modified and the adaptive EVP solvers, and the implicit Jacobian-free Newton-Krylov (JFNK) solver are compared against each other. The adaptive EVP method shows convergence rates that are generally similar or even better than those of the modified EVP method, but the convergence of the EVP methods is found to depend dramatically on the use of the replacement pressure (RP). Apparently, using the RP can affect the pseudo-elastic waves in the EVP methods by introducing extra non-physical oscillations so that, in the extreme case, convergence to the VP solution can be lost altogether. The JFNK solver also suffers from higher failure rates with RP implying that with RP the momentum equations are stiffer and more difficult to solve. For practical purposes, both EVP methods can be used efficiently with an unexpectedly low number of sub-cycling steps without compromising the solutions. The differences between the RP solutions and the NoRP solutions (when the RP is not being used) can be reduced with lower thresholds of viscous regularization at the cost of increasing stiffness of the equations, and hence the computational costs of solving them.

  14. Models of energy homeostasis in response to maintenance of reduced body weight

    PubMed Central

    Rosenbaum, Michael; Leibel, Rudolph L.

    2016-01-01

    Objective To test 3 proposed models for adaptive thermogenesis in compartments of energy expenditure following different degrees of weight loss. Specifically: (1) there is no adaptive thermogenesis (constant relationship of energy expenditure (EE) to metabolic mass); (2) there is a fixed degree of adaptive thermogenesis once fat stores are below a “threshold”; (3) the degree of adaptive thermogenesis is proportional to weight loss. Methods The relationship between weight loss and EE was examined in seventeen weight-stable in-patient subjects with obesity studied at usual weight and again following a 10% and a 20% weight loss. Results Following initial weight loss (10%), resting (REE) and non-resting (NREE) energy expenditure were significantly below those predicted on the basis of the amount and composition of weight lost. Further reductions below predicted values occurred in NREE but not REE following an additional 10% weight loss. Changes in body weight, composition, and/or energy stores were significantly correlated with changes in EE. Conclusion All models are applicable to the decline in EE following weight loss. The disproportionate decline in REE is consistent with a threshold model (no change with further weight loss), while the disproportionate decline in NREE largely reflects the degree of weight loss. PMID:27460711

  15. Focusing Resource Allocation-Wellbeing as a Tool for Prioritizing Interventions for Communities at Risk

    PubMed Central

    Hogan, Anthony; Tanton, Robert; Lockie, Stewart; May, Sarah

    2013-01-01

    Objective: This study examined whether a wellbeing approach to resilience and adaptation would provide practical insights for prioritizing support to communities experiencing environmental and socio-economic stressors. Methods: A cross-sectional survey, based on a purposive sample of 2,196 stakeholders (landholders, hobby farmers, town residents and change agents) from three irrigation-dependent communities in Australia’s Murray-Darling Basin. Respondents’ adaptive capacity and wellbeing (individual and collective adaptive capacity, subjective wellbeing, social support, community connectivity, community leadership, in the context of known life stressors) were examined using chi-square, comparison of mean scores, hierarchical regression and factor-cluster analysis. Results: Statistically significant correlations (p < 0.05) were observed between individual (0.331) and collective (0.318) adaptive capacity and wellbeing. Taking into account respondents’ self-assessed health and socio-economic circumstances, perceptions of individual (15%) and collective adaptive capacity (10%) as well as community connectivity (13%) were associated with wellbeing (R2 = 0.36; F (9, 2099) = 132.9; p < 0.001). Cluster analysis found that 11% of respondents were particularly vulnerable, reporting below-average scores on all indicators, with 56% of these reporting below-threshold scores on subjective wellbeing. Conclusions: Building the capacity of individuals to work with others and to adapt to change serves as an important strategy for maintaining wellbeing in communities under stress. The human impacts of exogenous stressors appear to manifest themselves in poorer health outcomes; addressing primary stressors may in turn aid wellbeing. Longitudinal studies are indicated to verify these findings. Wellbeing may serve as a useful and parsimonious proxy measure for resilience and adaptive capacity. PMID:23924885

  16. Rejection of the maternal electrocardiogram in the electrohysterogram signal.

    PubMed

    Leman, H; Marque, C

    2000-08-01

    The electrohysterogram (EHG) signal is mainly corrupted by the mother's electrocardiogram (ECG), which remains present despite analog filtering during acquisition. Wavelets are a powerful denoising tool and have already proved their efficiency on the EHG. In this paper, we propose a new method that employs the redundant wavelet packet transform. We first study wavelet packet coefficient histograms and propose an algorithm to automatically detect the histogram mode number. Using a new criterion, we compute a best basis adapted to the denoising. After EHG wavelet packet coefficient thresholding in the selected basis, the inverse transform is applied. The ECG seems to be very efficiently removed.
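
    A minimal stand-in for this pipeline, using PyWavelets: a decimated wavelet packet transform and a universal threshold replace the paper's redundant transform, histogram-mode analysis, and custom best-basis criterion, which are not reproducible from the abstract.

    ```python
    import numpy as np
    import pywt

    def denoise_ehg(signal, wavelet="db4", level=4):
        """Illustrative wavelet-packet soft-thresholding: estimate the noise
        scale from the level-1 detail node, threshold all leaf nodes, and
        reconstruct. A simplified sketch, not the paper's exact algorithm."""
        wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
        sigma = np.median(np.abs(wp["d"].data)) / 0.6745   # robust noise scale
        thr = sigma * np.sqrt(2.0 * np.log(len(signal)))   # universal threshold
        for node in wp.get_level(level, order="natural"):
            node.data = pywt.threshold(node.data, thr, mode="soft")
        return wp.reconstruct(update=True)[: len(signal)]
    ```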

  17. [Investigation of fast filter of ECG signals with lifting wavelet and smooth filter].

    PubMed

    Li, Xuefei; Mao, Yuxing; He, Wei; Yang, Fan; Zhou, Liang

    2008-02-01

    The lifting wavelet is used to decompose the original ECG signals into low-frequency approximation signals and high-frequency detail signals. Part of the detail signals is discarded according to the frequency characteristics. To avoid distortion of the QRS complexes, the approximation signals are filtered by an adaptive smoothing filter with a proper threshold value. Through the inverse lifting wavelet transform, the retained approximation signals are reconstructed, and the three primary kinds of noise are limited effectively. In addition, the method is fast, with no time delay between input and output.

  18. Cold adaptation, aging, and Korean women divers haenyeo.

    PubMed

    Lee, Joo-Young; Park, Joonhee; Kim, Siyeon

    2017-08-08

    We have been studying the thermoregulatory responses of Korean breath-hold women divers, called haenyeo, in terms of aging and cold adaptation. During the 1960s to the 1980s, haenyeos received attention from environmental physiologists due to their unique ability to endure cold water while wearing only a thin cotton bathing suit. However, their overall cold-adaptive traits have disappeared since they began to wear wetsuits, and research has waned since the 1980s. For social and economic reasons, the number of haenyeos decreased rapidly from 14,143 in 1970 to 4005 in 2015, and the average age of haenyeos is currently about 75 years. For the past several years, we revisited and studied older haenyeos in terms of environmental physiology, beginning with questionnaire and field studies and later advancing to thermal tolerance tests in conjunction with cutaneous thermal threshold tests in a climate chamber. As control counterparts, older non-diving females and young non-diving females were compared with older haenyeos in the controlled experiments. We found that older haenyeos still retain local cold tolerance in the extremities despite their aging. Finger cold tests indicated superior local cold tolerance in older haenyeos compared with older non-diving females. However, thermal perception in cold reflected aging effects rather than local cold acclimatization. An interesting finding was the possibility of positive cross-adaptation, supported by the greater heat tolerance and cutaneous warm perception thresholds of older haenyeos who adapted to cold water. The cold-adaptive traits of haenyeos were thought to have disappeared, but we confirmed that they are still retained in the face and hands, which can be interpreted as a switch from overall cold adaptation to local adaptation. Further studies on cross-adaptation between chronic cold stress and heat tolerance are needed.

  19. Numerical simulation of inductive method for determining spatial distribution of critical current density

    NASA Astrophysics Data System (ADS)

    Kamitani, A.; Takayama, T.; Tanaka, A.; Ikuno, S.

    2010-11-01

    The inductive method for measuring the critical current density jC in a high-temperature superconducting (HTS) thin film has been investigated numerically. In order to simulate the method, a non-axisymmetric numerical code has been developed for analyzing the time evolution of the shielding current density. In the code, the governing equation of the shielding current density is spatially discretized with the finite element method and the resulting first-order ordinary differential system is solved by using the 5th-order Runge-Kutta method with an adaptive step-size control algorithm. By using the code, the threshold current IT is evaluated for various positions of a coil. The results of computations show that, near a film edge, the accuracy of the estimating formula for jC is remarkably degraded. Moreover, even the proportional relationship between jC and IT will be lost there. Hence, the critical current density near a film edge cannot be estimated by using the inductive method.
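
    The time-integration ingredient — a fifth-order Runge-Kutta pair with adaptive step-size control — is available off the shelf in SciPy; the sketch below applies it to a small stand-in linear system, since the actual FEM-discretized shielding-current equations cannot be reproduced from the abstract.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Stand-in ODE system dy/dt = A y + forcing; the real right-hand side
    # would come from the finite element discretization.
    A = np.array([[-1.0, 0.5],
                  [0.3, -2.0]])

    def rhs(t, y):
        return A @ y + np.array([np.sin(t), 0.0])

    sol = solve_ivp(rhs, (0.0, 10.0), y0=[0.0, 0.0],
                    method="RK45", rtol=1e-6, atol=1e-9)  # adaptive step control
    print(sol.t.size, "accepted steps")
    ```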

  20. Meet EPA Scientist Jordan West, Ph.D.

    EPA Pesticide Factsheets

    Jordan West, Ph.D. is an aquatic ecologist at EPA. Her areas of expertise include freshwater & marine ecology, climate change impacts and adaptation, resilience and threshold theory, environmental risk assessment, and expert elicitation & stakeholder processes.

  1. Error diffusion concept for multi-level quantization

    NASA Astrophysics Data System (ADS)

    Broja, Manfred; Michalowski, Kristina; Bryngdahl, Olof

    1990-11-01

    The error diffusion binarization procedure is adapted to multi-level quantization. The threshold parameters then available have a noticeable influence on the process. Characteristic features of the technique are shown together with experimental results.
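
    A minimal multi-level error-diffusion sketch: the Floyd-Steinberg weights are an assumption (the abstract names no kernel), and the quantizer's decision thresholds — here the midpoints implicit in round() — are exactly the parameters the abstract says shape the output.

    ```python
    import numpy as np

    def error_diffusion(img, levels=4):
        """Multi-level error diffusion on a grayscale image in [0, 1]:
        quantize each pixel to the nearest of `levels` gray values and
        push the residual onto unvisited neighbors (Floyd-Steinberg)."""
        out = img.astype(float).copy()
        h, w = out.shape
        q = levels - 1
        for y in range(h):
            for x in range(w):
                old = out[y, x]
                new = np.round(old * q) / q       # nearest allowed gray value
                out[y, x] = new
                err = old - new
                if x + 1 < w:
                    out[y, x + 1] += err * 7 / 16
                if y + 1 < h:
                    if x > 0:
                        out[y + 1, x - 1] += err * 3 / 16
                    out[y + 1, x] += err * 5 / 16
                    if x + 1 < w:
                        out[y + 1, x + 1] += err * 1 / 16
        return out
    ```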

  2. A Fiber Bragg Grating Interrogation System with Self-Adaption Threshold Peak Detection Algorithm.

    PubMed

    Zhang, Weifang; Li, Yingwu; Jin, Bo; Ren, Feifei; Wang, Hongxun; Dai, Wei

    2018-04-08

    A Fiber Bragg Grating (FBG) interrogation system with a self-adaption threshold peak detection algorithm is proposed and experimentally demonstrated in this study. The system is composed of a field programmable gate array (FPGA) and advanced RISC machine (ARM) platform, a tunable Fabry-Perot (F-P) filter and an optical switch. The F-P filter was employed to improve system resolution; because the filter is non-linear, the central wavelengths shift, and this deviation is compensated by parts of the circuit. Time-division multiplexing (TDM) of FBG sensors is achieved by an optical switch, allowing the system to combine up to 256 FBG sensors. A wavelength scanning speed of 800 Hz is achieved by the FPGA+ARM platform. In addition, a peak detection algorithm based on a self-adaption threshold is designed, achieving a peak recognition rate of 100%. Experiments at different temperatures were conducted to demonstrate the effectiveness of the system. Four FBG sensors were examined in a thermal chamber without stress. When the temperature changed from 0 °C to 100 °C, the degree of linearity between central wavelengths and temperature was about 0.999, with a temperature sensitivity of 10 pm/°C. The static interrogation precision reached 0.5 pm. Through comparison of different peak detection algorithms and interrogation approaches, the system was verified to have the best overall performance in terms of precision, capacity and speed.
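
    One plausible reading of a self-adaption threshold for peak detection: derive the cut-off from the statistics of the scan itself. The factor k and the simple local-maximum test below are illustrative, not the paper's exact rule.

    ```python
    import numpy as np

    def self_adaptive_peaks(spectrum, k=3.0):
        """Flag FBG reflection peaks whose amplitude exceeds a threshold
        computed from the scan (mean + k * std); returns peak indices
        and the threshold actually used."""
        s = np.asarray(spectrum, dtype=float)
        thr = s.mean() + k * s.std()
        idx = [i for i in range(1, len(s) - 1)
               if s[i] > thr and s[i] >= s[i - 1] and s[i] > s[i + 1]]
        return idx, thr
    ```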

  3. A Fiber Bragg Grating Interrogation System with Self-Adaption Threshold Peak Detection Algorithm

    PubMed Central

    Zhang, Weifang; Li, Yingwu; Jin, Bo; Ren, Feifei

    2018-01-01

    A Fiber Bragg Grating (FBG) interrogation system with a self-adaption threshold peak detection algorithm is proposed and experimentally demonstrated in this study. The system is composed of a field programmable gate array (FPGA) and advanced RISC machine (ARM) platform, a tunable Fabry–Perot (F–P) filter and an optical switch. The F–P filter was employed to improve system resolution; because the filter is non-linear, the central wavelengths shift, and this deviation is compensated by parts of the circuit. Time-division multiplexing (TDM) of FBG sensors is achieved by an optical switch, allowing the system to combine up to 256 FBG sensors. A wavelength scanning speed of 800 Hz is achieved by the FPGA+ARM platform. In addition, a peak detection algorithm based on a self-adaption threshold is designed, achieving a peak recognition rate of 100%. Experiments at different temperatures were conducted to demonstrate the effectiveness of the system. Four FBG sensors were examined in a thermal chamber without stress. When the temperature changed from 0 °C to 100 °C, the degree of linearity between central wavelengths and temperature was about 0.999, with a temperature sensitivity of 10 pm/°C. The static interrogation precision reached 0.5 pm. Through comparison of different peak detection algorithms and interrogation approaches, the system was verified to have the best overall performance in terms of precision, capacity and speed. PMID:29642507

  4. Is the goal of mastication reached in young dentates, aged dentates and aged denture wearers?

    PubMed

    Mishellany-Dutour, Anne; Renaud, Johanne; Peyron, Marie-Agnès; Rimek, Frank; Woda, Alain

    2008-01-01

    The objective of the present study was to assess the impact of age and dentition status on masticatory function. A three-arm case-control study was performed. Group 1 (n = 14) was composed of young fully dentate subjects (age 35.6 ± 10.6 years), group 2 (n = 14) of aged fully dentate subjects (age 68.8 ± 7.0 years) and group 3 (n = 14) of aged full denture wearers (age 68.1 ± 7.2 years). Mastication adaptation was assessed in the course of chewing groundnuts and carrots to swallowing threshold. Particle size distribution of the chewed food, electromyographic (EMG) activity of the masseter and temporalis muscles during chewing, and resting and stimulated whole saliva rates were measured. Aged dentate subjects used significantly more chewing strokes to reach swallowing threshold than younger dentate subjects (P < 0.05), with increased particle size reduction, longer chewing sequence duration (P < 0.05) and greater total EMG activity (P < 0.05) for both groundnuts and carrots. In addition, aged denture wearers made significantly more chewing strokes than aged dentate subjects (P < 0.001) to reach swallowing threshold for groundnuts. Particle size reduction at time of swallowing was significantly poorer for denture wearers than for their aged dentate counterparts, despite an increase in chewing strokes, sequence duration and EMG activity per sequence. Masticatory function was thus adapted to ageing, but was impaired in denture wearers, who failed to adapt fully to their deficient masticatory apparatus.

  5. Error minimization algorithm for comparative quantitative PCR analysis: Q-Anal.

    PubMed

O'Connor, William; Runquist, Elizabeth A

    2008-07-01

    Current methods for comparative quantitative polymerase chain reaction (qPCR) analysis, the threshold and extrapolation methods, either make assumptions about PCR efficiency that require an arbitrary threshold selection process or extrapolate to estimate relative levels of messenger RNA (mRNA) transcripts. Here we describe an algorithm, Q-Anal, that blends elements from current methods to bypass assumptions regarding PCR efficiency and improve the threshold selection process, minimizing error in comparative qPCR analysis. The algorithm uses iterative linear regression to identify the exponential phase for both target and reference amplicons and then selects, by minimizing linear regression error, a fluorescence threshold at which efficiencies for both amplicons have been defined. From this defined fluorescence threshold, the cycle time (Ct) and the error for both amplicons are calculated and used to determine the expression ratio. Ratios in complementary DNA (cDNA) dilution assays from qPCR data were analyzed by the Q-Anal method and compared with the threshold method and an extrapolation method. Dilution ratios determined by the Q-Anal and threshold methods were 86 to 118% of the expected cDNA ratios, but relative errors for the Q-Anal method were 4 to 10%, compared with 4 to 34% for the threshold method. In contrast, ratios determined by the extrapolation method were 32 to 242% of the expected cDNA ratios, with relative errors of 67 to 193%. Q-Anal will be a valuable and quick method for minimizing error in comparative qPCR analysis.
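
    Two of Q-Anal's ingredients can be sketched directly: locating the most log-linear (exponential-phase) stretch of the amplification curve by sliding-window regression, and interpolating a fractional Ct at a chosen fluorescence threshold. The fixed window width and R^2 criterion are simplifications of the iterative regression described above.

    ```python
    import numpy as np

    def exponential_window(fluor, width=5):
        """Slide a window over log-fluorescence and keep the stretch that is
        most nearly log-linear (highest R^2) -- a simplified stand-in for
        Q-Anal's iterative regression step. Returns (start, stop) indices."""
        logf = np.log(np.clip(np.asarray(fluor, float), 1e-12, None))
        cycles = np.arange(len(logf))
        best, best_r2 = None, -np.inf
        for s in range(len(logf) - width):
            x, y = cycles[s:s + width], logf[s:s + width]
            r = np.corrcoef(x, y)[0, 1]
            if r * r > best_r2:
                best, best_r2 = (s, s + width), r * r
        return best

    def ct_at_threshold(fluor, threshold):
        """Fractional cycle at which fluorescence first crosses `threshold`
        (assumes the crossing happens after cycle 0)."""
        f = np.asarray(fluor, dtype=float)
        i = int(np.argmax(f > threshold))
        return (i - 1) + (threshold - f[i - 1]) / (f[i] - f[i - 1])
    ```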

  6. Welding studs detection based on line structured light

    NASA Astrophysics Data System (ADS)

    Geng, Lei; Wang, Jia; Wang, Wen; Xiao, Zhitao

    2018-01-01

    The quality of welding studs is significant for the installation and localization of car components in the process of automobile general assembly. A welding stud detection method based on line structured light is proposed. Firstly, an adaptive threshold is designed to binarize the image. Then, the light stripes of the image are extracted by skeleton-line extraction and morphological filtering, and the direction vector of the main light stripe is calculated from the length of the light stripe. Finally, the gray projections along the orientation of the main light stripe and the perpendicular orientation are computed to obtain gray-projection curves, which are used to detect the studs. Experimental results demonstrate that the error rate of the proposed method is lower than 0.1%, making it suitable for automobile manufacturing.
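
    The projection step is the heart of the detector. In the sketch below, rotating the binarized image by the main stripe's angle reduces the directional gray projections to simple row and column sums; the rotation angle is assumed to come from the direction-vector step above.

    ```python
    import numpy as np
    from scipy import ndimage

    def stripe_projections(binary, stripe_angle_deg):
        """Rotate the binarized stripe image so the main light stripe lies
        horizontal, then take gray projections: row sums profile the image
        across the stripe, column sums profile it along the stripe (where
        studs show up as characteristic bumps)."""
        rot = ndimage.rotate(binary.astype(float), -stripe_angle_deg,
                             reshape=False, order=0)
        across = rot.sum(axis=1)   # one value per row: across the stripe
        along = rot.sum(axis=0)    # one value per column: along the stripe
        return along, across
    ```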

  7. Segmentation of Retinal Blood Vessels Based on Cake Filter

    PubMed Central

    Bao, Xi-Rong; Ge, Xin; She, Li-Huang; Zhang, Shi

    2015-01-01

    Segmentation of retinal blood vessels is significant for the diagnosis and evaluation of ocular diseases like glaucoma and systemic diseases such as diabetes and hypertension. Segmenting small, low-contrast vessels remains a challenging problem. To solve this problem, a new method based on the cake filter is proposed. Firstly, a quadrature filter bank, called the cake filter bank, is constructed in the Fourier domain. Then real-component fusion is used to separate the blood vessels from the background. Finally, the blood vessel network is obtained by a self-adaptive threshold. Experiments on the STARE database indicate that the new method performs better than traditional ones on small-vessel extraction, average accuracy, and true and false positive rates. PMID:26636095

  8. Evaluation of prognostic models developed using standardised image features from different PET automated segmentation methods.

    PubMed

    Parkinson, Craig; Foley, Kieran; Whybra, Philip; Hills, Robert; Roberts, Ashley; Marshall, Chris; Staffurth, John; Spezi, Emiliano

    2018-04-11

    Prognosis in oesophageal cancer (OC) is poor. The 5-year overall survival (OS) rate is approximately 15%. Personalised medicine is hoped to increase the 5- and 10-year OS rates. Quantitative analysis of PET is gaining substantial interest in prognostic research but requires the accurate definition of the metabolic tumour volume. This study compares prognostic models developed in the same patient cohort using individual PET segmentation algorithms and assesses the impact on patient risk stratification. Consecutive patients (n = 427) with biopsy-proven OC were included in final analysis. All patients were staged with PET/CT between September 2010 and July 2016. Nine automatic PET segmentation methods were studied. All tumour contours were subjectively analysed for accuracy, and segmentation methods with < 90% accuracy were excluded. Standardised image features were calculated, and a series of prognostic models were developed using identical clinical data. The proportion of patients changing risk classification group were calculated. Out of nine PET segmentation methods studied, clustering means (KM2), general clustering means (GCM3), adaptive thresholding (AT) and watershed thresholding (WT) methods were included for analysis. Known clinical prognostic factors (age, treatment and staging) were significant in all of the developed prognostic models. AT and KM2 segmentation methods developed identical prognostic models. Patient risk stratification was dependent on the segmentation method used to develop the prognostic model with up to 73 patients (17.1%) changing risk stratification group. Prognostic models incorporating quantitative image features are dependent on the method used to delineate the primary tumour. This has a subsequent effect on risk stratification, with patients changing groups depending on the image segmentation method used.

  9. Evaluation of different methods for determining growing degree-day thresholds in apricot cultivars

    NASA Astrophysics Data System (ADS)

    Ruml, Mirjana; Vuković, Ana; Milatović, Dragan

    2010-07-01

    The aim of this study was to examine different methods for determining growing degree-day (GDD) threshold temperatures for two phenological stages (full bloom and harvest) and to select the optimal thresholds for a number of apricot ( Prunus armeniaca L.) cultivars grown in the Belgrade region. A 10-year data series was used to conduct the study. Several commonly used methods for determining threshold temperatures from field observations were evaluated: (1) the least standard deviation in GDD; (2) the least standard deviation in days; (3) the least coefficient of variation in GDD; (4) the regression coefficient; (5) the least standard deviation in days with a mean temperature above the threshold; (6) the least coefficient of variation in days with a mean temperature above the threshold; and (7) the smallest root mean square error between the observed and predicted number of days. In addition, two methods for calculating daily GDD and two methods for calculating daily mean air temperature were tested to highlight the differences that can arise from different interpretations of the basic GDD equation. The best agreement with observations was attained by method (7). The lower threshold temperature obtained by this method differed among cultivars from -5.6 to -1.7°C for full bloom, and from -0.5 to 6.6°C for harvest. However, the “Null” method (lower threshold set to 0°C) and the “Fixed Value” method (lower threshold set to -2°C for full bloom and to 3°C for harvest) also gave very good results. The limitations of the widely used method (1) and of methods (5) and (6), which generally performed worst, are discussed in the paper.
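
    Method (7) can be sketched in a few lines: for a candidate base temperature, accumulate GDD up to the observed event day in each year, average those sums into a GDD requirement, predict each year's event day from that requirement, and score the base by the RMSE in days. The pooled (rather than leave-one-out) averaging below is a simplification.

    ```python
    import numpy as np

    def gdd(tmean, base):
        """Daily growing degree-days with a single lower threshold `base`."""
        return np.maximum(np.asarray(tmean, float) - base, 0.0)

    def rmse_for_base(daily_tmean_by_year, observed_day, base):
        """Score one candidate base temperature by the RMSE (in days) between
        observed and GDD-predicted event days across years. Assumes the GDD
        target is reached within each year's series."""
        sums = [gdd(t[:d], base).sum()
                for t, d in zip(daily_tmean_by_year, observed_day)]
        target = np.mean(sums)
        pred = [int(np.argmax(np.cumsum(gdd(t, base)) >= target))
                for t in daily_tmean_by_year]
        return np.sqrt(np.mean((np.array(pred) - np.array(observed_day)) ** 2))

    # Choose the base temperature with the smallest RMSE, e.g.:
    # best = min(np.arange(-6, 8, 0.5), key=lambda b: rmse_for_base(temps, days, b))
    ```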

  10. Anaerobic Threshold and Salivary α-amylase during Incremental Exercise.

    PubMed

    Akizuki, Kazunori; Yazaki, Syouichirou; Echizenya, Yuki; Ohashi, Yukari

    2014-07-01

    [Purpose] The purpose of this study was to clarify the validity of salivary α-amylase as a method of quickly estimating anaerobic threshold and to establish the relationship between salivary α-amylase and double-product breakpoint in order to create a way to adjust exercise intensity to a safe and effective range. [Subjects and Methods] Eleven healthy young adults performed an incremental exercise test using a cycle ergometer. During the incremental exercise test, oxygen consumption, carbon dioxide production, and ventilatory equivalent were measured using a breath-by-breath gas analyzer. Systolic blood pressure and heart rate were measured to calculate the double product, from which double-product breakpoint was determined. Salivary α-amylase was measured to calculate the salivary threshold. [Results] One-way ANOVA revealed no significant differences among workloads at the anaerobic threshold, double-product breakpoint, and salivary threshold. Significant correlations were found between anaerobic threshold and salivary threshold and between anaerobic threshold and double-product breakpoint. [Conclusion] As a method for estimating anaerobic threshold, salivary threshold was as good as or better than determination of double-product breakpoint because the correlation between anaerobic threshold and salivary threshold was higher than the correlation between anaerobic threshold and double-product breakpoint. Therefore, salivary threshold is a useful index of anaerobic threshold during an incremental workload.

  11. Structural interpretation in composite systems using powder X-ray diffraction: applications of error propagation to the pair distribution function.

    PubMed

    Moore, Michael D; Shi, Zhenqi; Wildfong, Peter L D

    2010-12-01

    To develop a method for drawing statistical inferences from differences between multiple experimental pair distribution function (PDF) transforms of powder X-ray diffraction (PXRD) data. The appropriate treatment of initial PXRD error estimates using traditional error propagation algorithms was tested using Monte Carlo simulations on amorphous ketoconazole. An amorphous felodipine:polyvinyl pyrrolidone:vinyl acetate (PVPva) physical mixture was prepared to define an error threshold. Co-solidified products of felodipine:PVPva and terfenadine:PVPva were prepared using a melt-quench method and subsequently analyzed using PXRD and PDF. Differential scanning calorimetry (DSC) was used as an additional characterization method. The appropriate manipulation of initial PXRD error estimates through the PDF transform were confirmed using the Monte Carlo simulations for amorphous ketoconazole. The felodipine:PVPva physical mixture PDF analysis determined ±3σ to be an appropriate error threshold. Using the PDF and error propagation principles, the felodipine:PVPva co-solidified product was determined to be completely miscible, and the terfenadine:PVPva co-solidified product, although having appearances of an amorphous molecular solid dispersion by DSC, was determined to be phase-separated. Statistically based inferences were successfully drawn from PDF transforms of PXRD patterns obtained from composite systems. The principles applied herein may be universally adapted to many different systems and provide a fundamentally sound basis for drawing structural conclusions from PDF studies.
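
    The Monte Carlo check used for ketoconazole generalizes to any transform: resample the pattern within its per-point uncertainties and look at the spread of the transformed draws. The sketch below is generic; `transform` would be the PDF computation, which is not reproduced here.

    ```python
    import numpy as np

    def propagate(intensities, sigma, transform, n=500, seed=0):
        """Monte Carlo error propagation: perturb the PXRD intensities within
        their per-point uncertainties, push each draw through `transform`
        (e.g., the PDF), and report the pointwise mean and std of the result."""
        rng = np.random.default_rng(seed)
        x = np.asarray(intensities, dtype=float)
        draws = np.array([transform(x + rng.normal(0.0, sigma, x.size))
                          for _ in range(n)])
        return draws.mean(axis=0), draws.std(axis=0)
    ```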

  12. Preceding Vehicle Detection and Tracking Adaptive to Illumination Variation in Night Traffic Scenes Based on Relevance Analysis

    PubMed Central

    Guo, Junbin; Wang, Jianqiang; Guo, Xiaosong; Yu, Chuanqiang; Sun, Xiaoyan

    2014-01-01

    Preceding vehicle detection and tracking at nighttime are challenging problems due to the disturbance of other extraneous illuminant sources coexisting with the vehicle lights. To improve the detection accuracy and robustness of vehicle detection, a novel method for vehicle detection and tracking at nighttime is proposed in this paper. The characteristics of taillights in the gray level are applied to determine the lower boundary of the threshold for taillights segmentation, and the optimal threshold for taillight segmentation is calculated using the OTSU algorithm between the lower boundary and the highest grayscale of the region of interest. The candidate taillight pairs are extracted based on the similarity between left and right taillights, and the non-vehicle taillight pairs are removed based on the relevance analysis of vehicle location between frames. To reduce the false negative rate of vehicle detection, a vehicle tracking method based on taillights estimation is applied. The taillight spot candidate is sought in the region predicted by Kalman filtering, and the disturbed taillight is estimated based on the symmetry and location of the other taillight of the same vehicle. Vehicle tracking is completed after estimating its location according to the two taillight spots. The results of experiments on a vehicle platform indicate that the proposed method could detect vehicles quickly, correctly and robustly in the actual traffic environments with illumination variation. PMID:25195855
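
    The constrained-Otsu step translates directly into code: the between-class variance is maximized only over candidate thresholds at or above the lower boundary derived from the taillight gray-level characteristics. The sketch assumes an 8-bit grayscale region of interest; the lower boundary itself is taken as given.

    ```python
    import numpy as np

    def otsu_in_range(gray, lower):
        """Otsu's method restricted to thresholds in [lower, 255]: maximize
        the between-class variance of the gray-level histogram of `gray`
        (a uint8 array) over the allowed candidate range."""
        hist = np.bincount(gray.ravel(), minlength=256).astype(float)
        p = hist / hist.sum()
        levels = np.arange(256)
        best_t, best_var = lower, -1.0
        for t in range(lower, 256):
            w0, w1 = p[:t + 1].sum(), p[t + 1:].sum()
            if w0 == 0 or w1 == 0:
                continue
            mu0 = (levels[:t + 1] * p[:t + 1]).sum() / w0
            mu1 = (levels[t + 1:] * p[t + 1:]).sum() / w1
            var = w0 * w1 * (mu0 - mu1) ** 2
            if var > best_var:
                best_t, best_var = t, var
        return best_t
    ```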

  13. Preceding vehicle detection and tracking adaptive to illumination variation in night traffic scenes based on relevance analysis.

    PubMed

    Guo, Junbin; Wang, Jianqiang; Guo, Xiaosong; Yu, Chuanqiang; Sun, Xiaoyan

    2014-08-19

    Preceding vehicle detection and tracking at nighttime are challenging problems due to the disturbance of other extraneous illuminant sources coexisting with the vehicle lights. To improve the detection accuracy and robustness of vehicle detection, a novel method for vehicle detection and tracking at nighttime is proposed in this paper. The characteristics of taillights in the gray level are applied to determine the lower boundary of the threshold for taillights segmentation, and the optimal threshold for taillight segmentation is calculated using the OTSU algorithm between the lower boundary and the highest grayscale of the region of interest. The candidate taillight pairs are extracted based on the similarity between left and right taillights, and the non-vehicle taillight pairs are removed based on the relevance analysis of vehicle location between frames. To reduce the false negative rate of vehicle detection, a vehicle tracking method based on taillights estimation is applied. The taillight spot candidate is sought in the region predicted by Kalman filtering, and the disturbed taillight is estimated based on the symmetry and location of the other taillight of the same vehicle. Vehicle tracking is completed after estimating its location according to the two taillight spots. The results of experiments on a vehicle platform indicate that the proposed method could detect vehicles quickly, correctly and robustly in the actual traffic environments with illumination variation.

  14. Method and Apparatus for Evaluating the Visual Quality of Processed Digital Video Sequences

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B. (Inventor)

    2002-01-01

    A Digital Video Quality (DVQ) apparatus and method that incorporate a model of human visual sensitivity to predict the visibility of artifacts. The DVQ method and apparatus are used for the evaluation of the visual quality of processed digital video sequences and for adaptively controlling the bit rate of the processed digital video sequences without compromising the visual quality. The DVQ apparatus minimizes the required amount of memory and computation. The input to the DVQ apparatus is a pair of color image sequences: an original (R) non-compressed sequence, and a processed (T) sequence. Both sequences (R) and (T) are sampled, cropped, and subjected to color transformations. The sequences are then subjected to blocking and discrete cosine transformation, and the results are transformed to local contrast. The next step is a time filtering operation which implements the human sensitivity to different time frequencies. The results are converted to threshold units by dividing each discrete cosine transform coefficient by its respective visual threshold. At the next stage the two sequences are subtracted to produce an error sequence. The error sequence is subjected to a contrast masking operation, which also depends upon the reference sequence (R). The masked errors can be pooled in various ways to illustrate the perceptual error over various dimensions, and the pooled error can be converted to a visual quality measure.

  15. Downscaling Land Surface Temperature in Complex Regions by Using Multiple Scale Factors with Adaptive Thresholds

    PubMed Central

    Yang, Yingbao; Li, Xiaolong; Pan, Xin; Zhang, Yong; Cao, Chen

    2017-01-01

    Many downscaling algorithms have been proposed to address the issue of coarse-resolution land surface temperature (LST) derived from available satellite-borne sensors. However, few studies have focused on improving LST downscaling in urban areas with several mixed surface types. In this study, LST was downscaled by a multiple linear regression model between LST and multiple scale factors in mixed areas with three or four surface types. The correlation coefficients (CCs) between LST and the scale factors were used to assess the importance of the scale factors within a moving window, and CC thresholds determined which factors participated in the fitting of the regression equation. The proposed downscaling approach, which involves an adaptive selection of the scale factors, was evaluated using the LST derived from four Landsat 8 thermal imageries of Nanjing City in different seasons. Results of the visual and quantitative analyses show that the proposed approach achieves relatively satisfactory downscaling results on 11 August, with coefficient of determination and root-mean-square error of 0.87 and 1.13 °C, respectively. Relative to other approaches, ours shows similar accuracy and is usable in all seasons, with the best (worst) performance occurring in vegetated (water) regions. Thus, the approach is an efficient and reliable LST downscaling method. Future tasks include reliable LST downscaling in challenging regions and application of the model at middle and low spatial resolutions. PMID:28368301
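
    Inside one moving window, the adaptive selection amounts to a correlation filter followed by an ordinary least-squares fit. The CC threshold value below is illustrative; the paper derives its thresholds from the importance assessment.

    ```python
    import numpy as np

    def fit_window(lst, factors, cc_thresh=0.3):
        """For one moving-window sample set: keep only the scale factors whose
        correlation with LST clears the threshold, then fit LST = X b + e.
        `lst` is shape (n,), `factors` is shape (n, k)."""
        keep = [j for j in range(factors.shape[1])
                if abs(np.corrcoef(lst, factors[:, j])[0, 1]) >= cc_thresh]
        X = np.column_stack([np.ones(len(lst))] + [factors[:, j] for j in keep])
        coef, *_ = np.linalg.lstsq(X, lst, rcond=None)
        return keep, coef   # selected factor indices and regression coefficients
    ```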

  16. Application of Multi-Threshold NULL Convention Logic to Adaptive Beamforming Circuits for Ultra-Low Power

    DTIC Science & Technology

    2016-03-31

    With the decrease of transistor feature sizes into the ultra-deep submicron range, leakage power becomes an important design challenge […] The MTNCL design showed substantial improvements in terms of active energy and leakage power compared to the equivalent synchronous design […] switching can consume a large portion of power, and leakage power has come to dominate power consumption as process sizes shrink.

  17. Preliminary Investigation of Variation in Some Dark Adaptation Aspects of Possible Relevance to Military Helicopter Aircrew.

    DTIC Science & Technology

    1983-06-01

    Niven, J.I., McFarland, R.A., and Roughton, F.J. Variations in Visual Thresholds During Carbon Monoxide and Hypoxic Anoxia (abstract). Fed. Proc. ... and Niven, J.I. Visual Thresholds as an Index of the Modification of the Effects of Anoxia by Glucose. Am. J. Physiol. 144:378-88. 1945. 71. ... Diphosphoglycerate and Night Vision. Aviat. Space Environ. Med. 52(1):41-44. 1981. 100. Sexton, M., Malone, F. and Farnsworth, D. The Effect of Ultraviolet

  18. High precision automated face localization in thermal images: oral cancer dataset as test case

    NASA Astrophysics Data System (ADS)

    Chakraborty, M.; Raman, S. K.; Mukhopadhyay, S.; Patsa, S.; Anjum, N.; Ray, J. G.

    2017-02-01

    Automated face detection is the pivotal step in computer vision aided facial medical diagnosis and biometrics. This paper presents an automatic, subject-adaptive framework for accurate face detection in the long infrared spectrum on our database for oral cancer detection, consisting of malignant, precancerous and normal subjects of varied age groups. Previous work on oral cancer detection using Digital Infrared Thermal Imaging (DITI) reveals that patients and normal subjects differ significantly in their facial thermal distribution. Therefore, it is a challenging task to formulate a completely adaptive framework to veraciously localize the face from such a subject-specific modality. Our model consists of first extracting the most probable facial regions by minimum error thresholding, followed by ingenious adaptive methods that leverage the horizontal and vertical projections of the segmented thermal image. Additionally, the model incorporates domain knowledge by exploiting the temperature difference between strategic locations of the face. To the best of our knowledge, this is the pioneering work on detecting faces in thermal facial images comprising both patients and normal subjects. Previous work on face detection has not specifically targeted automated medical diagnosis; the face bounding boxes returned by those algorithms are thus loose and not apt for further medical automation. Our algorithm significantly outperforms contemporary face detection algorithms in terms of commonly used metrics for evaluating face detection accuracy. Since our method has been tested on a challenging dataset consisting of both patients and normal subjects of diverse age groups, it can be seamlessly adapted to any DITI-guided facial healthcare or biometric application.
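
    The projection step lends itself to a short sketch. The function below crops a bounding box from an already-thresholded thermal image using horizontal and vertical projections; it is a simplified stand-in for the paper's adaptive projection analysis, and the frac cutoff is an illustrative assumption.

        import numpy as np

        def localize_face(binary, frac=0.2):
            """Bounding box from row/column projections of a binary thermal mask."""
            if not binary.any():
                return None
            rows = binary.sum(axis=1).astype(float)   # horizontal projection
            cols = binary.sum(axis=0).astype(float)   # vertical projection
            r_keep = np.where(rows >= frac * rows.max())[0]
            c_keep = np.where(cols >= frac * cols.max())[0]
            top, bottom = r_keep[0], r_keep[-1]
            left, right = c_keep[0], c_keep[-1]
            return top, bottom, left, right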

  19. Is there a minimum intensity threshold for resistance training-induced hypertrophic adaptations?

    PubMed

    Schoenfeld, Brad J

    2013-12-01

    In humans, regimented resistance training has been shown to promote substantial increases in skeletal muscle mass. With respect to traditional resistance training methods, the prevailing opinion is that an intensity of greater than ~60 % of 1 repetition maximum (RM) is necessary to elicit significant increases in muscular size. It has been surmised that this is the minimum threshold required to activate the complete spectrum of fiber types, particularly those associated with the largest motor units. There is emerging evidence, however, that low-intensity resistance training performed with blood flow restriction (BFR) can promote marked increases in muscle hypertrophy, in many cases equal to that of traditional high-intensity exercise. The anabolic effects of such occlusion-based training have been attributed to increased levels of metabolic stress that mediate hypertrophy at least in part by enhancing recruitment of high-threshold motor units. Recently, several researchers have put forth the theory that low-intensity exercise (≤50 % 1RM) performed without BFR can promote increases in muscle size equal, or perhaps even superior, to that at higher intensities, provided training is carried out to volitional muscular failure. Proponents of the theory postulate that fatiguing contractions at light loads are simply a milder form of BFR and thus ultimately result in maximal muscle fiber recruitment. Current research indicates that low-load exercise can indeed promote increases in muscle growth in untrained subjects, and that these gains may be functionally, metabolically, and/or aesthetically meaningful. However, whether hypertrophic adaptations can equal those achieved with higher-intensity resistance exercise (>60 % 1RM) remains to be determined. Furthermore, it is not clear what, if any, hypertrophic effects are seen with low-intensity exercise in well-trained subjects, as experimental studies on this topic in this population are lacking. Practical implications of these findings are discussed.

  20. Proprioceptive loss and the perception, control and learning of arm movements in humans: evidence from sensory neuronopathy.

    PubMed

    Miall, R Chris; Kitchen, Nick M; Nam, Se-Ho; Lefumat, Hannah; Renault, Alix G; Ørstavik, Kristin; Cole, Jonathan D; Sarlegna, Fabrice R

    2018-05-19

    It is uncertain how vision and proprioception contribute to adaptation of voluntary arm movements. In normal participants, adaptation to imposed forces is possible with or without vision, suggesting that proprioception is sufficient; in participants with proprioceptive loss (PL), adaptation is possible with visual feedback, suggesting that proprioception is unnecessary. In experiment 1 adaptation to, and retention of, perturbing forces were evaluated in three chronically deafferented participants. They made rapid reaching movements to move a cursor toward a visual target, and a planar robot arm applied orthogonal velocity-dependent forces. Trial-by-trial error correction was observed in all participants. Such adaptation has been characterized with a dual-rate model: a fast process that learns quickly, but retains poorly and a slow process that learns slowly and retains well. Experiment 2 showed that the PL participants had large individual differences in learning and retention rates compared to normal controls. Experiment 3 tested participants' perception of applied forces. With visual feedback, the PL participants could report the perturbation's direction as well as controls; without visual feedback, thresholds were elevated. Experiment 4 showed, in healthy participants, that force direction could be estimated from head motion, at levels close to the no-vision threshold for the PL participants. Our results show that proprioceptive loss influences perception, motor control and adaptation but that proprioception from the moving limb is not essential for adaptation to, or detection of, force fields. The differences in learning and retention seen between the three deafferented participants suggest that they achieve these tasks in idiosyncratic ways after proprioceptive loss, possibly integrating visual and vestibular information with individual cognitive strategies.
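
    The dual-rate model referenced in the abstract is commonly written as two interacting states driven by the same trial error. The sketch below simulates it in Python; the parameter values are illustrative textbook-style choices, not values fitted to this study's data.

        import numpy as np

        # Dual-rate (two-state) adaptation: a fast process (high learning rate,
        # weak retention) plus a slow process (low learning rate, strong retention).
        A_f, B_f = 0.59, 0.21   # fast: retains poorly, learns quickly (illustrative)
        A_s, B_s = 0.992, 0.02  # slow: retains well, learns slowly (illustrative)

        def simulate(perturbation, n_trials=200):
            x_f = x_s = 0.0
            net = []
            for _ in range(n_trials):
                x = x_f + x_s                 # net adaptation on this trial
                e = perturbation - x          # trial error drives both processes
                x_f = A_f * x_f + B_f * e
                x_s = A_s * x_s + B_s * e
                net.append(x)
            return np.array(net)

        adaptation = simulate(perturbation=1.0)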

  1. An integrative perspective of the anaerobic threshold.

    PubMed

    Sales, Marcelo Magalhães; Sousa, Caio Victor; da Silva Aguiar, Samuel; Knechtle, Beat; Nikolaidis, Pantelis Theodoros; Alves, Polissandro Mortoza; Simões, Herbert Gustavo

    2017-12-14

    The concept of the anaerobic threshold (AT) was introduced during the 1960s. Since then, several methods to identify the AT have been studied and suggested as novel 'thresholds' based upon the variable used for their detection (i.e. lactate threshold, ventilatory threshold, glucose threshold). These different techniques have brought some confusion about how we should name this parameter: for instance, after the anaerobic threshold itself or after the physiological measure used (i.e. lactate, ventilation). On the other hand, the modernization of scientific methods and apparatus to detect the AT, as well as the body of literature formed in the past decades, can provide a more cohesive understanding of the AT and the multiple physiological systems involved. Thus, the purpose of this review was to provide an integrative perspective of the methods to determine the AT. Copyright © 2017 Elsevier Inc. All rights reserved.

  2. Sparse Adaptive Iteratively-Weighted Thresholding Algorithm (SAITA) for Lp-Regularization Using the Multiple Sub-Dictionary Representation

    PubMed Central

    Zhang, Jie; Fan, Shangang; Xiong, Jian; Cheng, Xiefeng; Sari, Hikmet; Adachi, Fumiyuki

    2017-01-01

    L1/2 and L2/3 are two typical non-convex regularizations of Lp (0 < p < 1).

  3. Development of rod function in term born and former preterm subjects.

    PubMed

    Fulton, Anne B; Hansen, Ronald M; Moskowitz, Anne

    2009-06-01

    To provide an overview of some of our electroretinographic (ERG) and psychophysical studies of normal development of rod function and their application to retinopathy of prematurity (ROP). ERG responses to full-field stimuli were recorded from dark adapted subjects. Rod photoreceptor sensitivity (SROD) was calculated by fit of a biochemical model of the activation of phototransduction to the ERG a-wave. Dark adapted psychophysical thresholds for detecting 2 degrees spots in parafoveal (10 degrees eccentric) and peripheral (30 degrees eccentric) retina were measured and the difference between the thresholds, Delta10-30, was examined as a function of age. SROD and Delta10-30 in term born and former preterm subjects were compared. In term born infants, (1) the normal developmental increase in SROD changes proportionately with the amount of rod visual pigment, rhodopsin, and (2) rod-mediated function in central retina is immature compared with that in peripheral retina. In subjects born prematurely, deficits in SROD persist long after active ROP has resolved. Maturation of rod-mediated thresholds in the central retina is prolonged by mild ROP. Characterization of the development of normal rod and rod-mediated function provides a foundation for understanding ROP.

  4. Hybrid threshold adaptable quantum secret sharing scheme with reverse Huffman-Fibonacci-tree coding.

    PubMed

    Lai, Hong; Zhang, Jun; Luo, Ming-Xing; Pan, Lei; Pieprzyk, Josef; Xiao, Fuyuan; Orgun, Mehmet A

    2016-08-12

    With prevalent attacks in communication, sharing a secret between communicating parties is an ongoing challenge. Moreover, it is important to integrate quantum solutions with classical secret sharing schemes with low computational cost for real-world use. This paper proposes a novel hybrid threshold adaptable quantum secret sharing scheme, using an m-bonacci orbital angular momentum (OAM) pump, Lagrange interpolation polynomials, and reverse Huffman-Fibonacci-tree coding. To be exact, we employ entangled states prepared by m-bonacci sequences to detect eavesdropping. Meanwhile, we encode m-bonacci sequences in Lagrange interpolation polynomials to generate the shares of a secret with reverse Huffman-Fibonacci-tree coding. The advantages of the proposed scheme are that it can detect eavesdropping without joint quantum operations, and that it permits secret sharing for an arbitrary number of classical participants no smaller than the threshold value, with much lower bandwidth. Also, in comparison with existing quantum secret sharing schemes, it still works when there are dynamic changes, such as the unavailability of some quantum channel, the arrival of new participants and the departure of participants. Finally, we provide a security analysis of the new hybrid quantum secret sharing scheme and discuss its useful features for modern applications.

  5. Hybrid threshold adaptable quantum secret sharing scheme with reverse Huffman-Fibonacci-tree coding

    PubMed Central

    Lai, Hong; Zhang, Jun; Luo, Ming-Xing; Pan, Lei; Pieprzyk, Josef; Xiao, Fuyuan; Orgun, Mehmet A.

    2016-01-01

    With prevalent attacks in communication, sharing a secret between communicating parties is an ongoing challenge. Moreover, it is important to integrate quantum solutions with classical secret sharing schemes with low computational cost for real-world use. This paper proposes a novel hybrid threshold adaptable quantum secret sharing scheme, using an m-bonacci orbital angular momentum (OAM) pump, Lagrange interpolation polynomials, and reverse Huffman-Fibonacci-tree coding. To be exact, we employ entangled states prepared by m-bonacci sequences to detect eavesdropping. Meanwhile, we encode m-bonacci sequences in Lagrange interpolation polynomials to generate the shares of a secret with reverse Huffman-Fibonacci-tree coding. The advantages of the proposed scheme are that it can detect eavesdropping without joint quantum operations, and that it permits secret sharing for an arbitrary number of classical participants no smaller than the threshold value, with much lower bandwidth. Also, in comparison with existing quantum secret sharing schemes, it still works when there are dynamic changes, such as the unavailability of some quantum channel, the arrival of new participants and the departure of participants. Finally, we provide a security analysis of the new hybrid quantum secret sharing scheme and discuss its useful features for modern applications. PMID:27515908

  6. Variable threshold algorithm for division of labor analyzed as a dynamical system.

    PubMed

    Castillo-Cagigal, Manuel; Matallanas, Eduardo; Navarro, Iñaki; Caamaño-Martín, Estefanía; Monasterio-Huelin, Félix; Gutiérrez, Álvaro

    2014-12-01

    Division of labor is a widely studied aspect of colony behavior of social insects. Division of labor models indicate how individuals distribute themselves in order to perform different tasks simultaneously. However, models that study division of labor from a dynamical system point of view cannot be found in the literature. In this paper, we define a division of labor model as a discrete-time dynamical system, in order to study the equilibrium points and their properties related to convergence and stability. By making use of this analytical model, an adaptive algorithm based on division of labor can be designed to satisfy dynamic criteria. In this way, we have designed and tested an algorithm that varies the response thresholds in order to modify the dynamic behavior of the system. This behavior modification allows the system to adapt to specific environmental and collective situations, making the algorithm a good candidate for distributed control applications. The variable threshold algorithm is based on specialization mechanisms. It is able to achieve an asymptotically stable behavior of the system in different environments and independently of the number of individuals. The algorithm has been successfully tested under several initial conditions and numbers of individuals.
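
    A variable response-threshold rule of the kind the abstract describes can be sketched in a few lines. In the classic formulation, an individual engages a task with probability s^n / (s^n + theta^n); performing the task lowers its threshold and idling raises it, which produces specialization. The update constants xi, phi and the exponent n below are illustrative assumptions, not the paper's values.

        import random

        def step(stimulus, thetas, xi=0.1, phi=0.05, n=2):
            """One update of a variable response-threshold model (illustrative)."""
            active = []
            for i, theta in enumerate(thetas):
                p = stimulus**n / (stimulus**n + theta**n)
                working = random.random() < p
                active.append(working)
                # specialization mechanism: workers lower their threshold, idlers raise it
                thetas[i] = max(0.01, theta - xi) if working else theta + phi
            return active, thetas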

  7. Sparse Adaptive Iteratively-Weighted Thresholding Algorithm (SAITA) for Lp-Regularization Using the Multiple Sub-Dictionary Representation.

    PubMed

    Li, Yunyi; Zhang, Jie; Fan, Shangang; Yang, Jie; Xiong, Jian; Cheng, Xiefeng; Sari, Hikmet; Adachi, Fumiyuki; Gui, Guan

    2017-12-15

    Both L 1/2 and L 2/3 are two typical non-convex regularizations of L p (0

  8. Micro-Droplet Detection Method for Measuring the Concentration of Alkaline Phosphatase-Labeled Nanoparticles in Fluorescence Microscopy

    PubMed Central

    Li, Rufeng; Wang, Yibei; Xu, Hong; Fei, Baowei; Qin, Binjie

    2017-01-01

    This paper developed and evaluated a quantitative image analysis method to measure the concentration of nanoparticles on which alkaline phosphatase (AP) was immobilized. These AP-labeled nanoparticles are widely used as signal markers for tagging biomolecules at nanometer and sub-nanometer scales. The AP-labeled nanoparticle concentration measurement can then be directly used to quantitatively analyze the biomolecular concentration. Micro-droplets are mono-dispersed micro-reactors that can be used to encapsulate and detect AP-labeled nanoparticles. Micro-droplets include both empty micro-droplets and fluorescent micro-droplets, where fluorescent micro-droplets are generated by the fluorescence reaction between the APs adhering to a single nanoparticle and the corresponding fluorogenic substrates within droplets. By detecting micro-droplets and calculating the proportion of fluorescent micro-droplets to the overall micro-droplets, we can calculate the AP-labeled nanoparticle concentration. The proposed micro-droplet detection method includes the following steps: (1) Gaussian filtering to remove the noise of the overall fluorescent targets; (2) contrast-limited adaptive histogram equalization to enhance the contrast of weakly luminescent micro-droplets; (3) a red-channel maximizing inter-class variance (Otsu) thresholding method to segment the enhanced image into the binary map of the overall micro-droplets; (4) a circular Hough transform (CHT) method to detect the overall micro-droplets; and (5) an intensity-mean-based thresholding segmentation method to extract the fluorescent micro-droplets. The experimental results on fluorescent micro-droplet images show that the average accuracy of our micro-droplet detection method is 0.9586, the average true positive rate is 0.9502, and the average false positive rate is 0.0073. The detection method can be successfully applied to measure AP-labeled nanoparticle concentration in fluorescence microscopy. PMID:29160812
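
    The five-step pipeline maps naturally onto OpenCV primitives. The sketch below follows the listed steps under stated assumptions: it takes an 8-bit BGR image, all kernel sizes and Hough/fluorescence parameters are illustrative, and step (5) uses a simple disk-brightness criterion as a stand-in for the paper's intensity-mean rule.

        import cv2
        import numpy as np

        def detect_droplets(image_bgr):
            """Sketch of the five-step micro-droplet pipeline (parameters illustrative)."""
            # (1) Gaussian filtering to suppress noise
            smooth = cv2.GaussianBlur(image_bgr, (5, 5), 0)
            # (2) CLAHE to enhance weakly luminescent micro-droplets
            gray = cv2.cvtColor(smooth, cv2.COLOR_BGR2GRAY)
            clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
            enhanced = clahe.apply(gray)
            # (3) Otsu thresholding on the red channel: binary map of all droplets
            red = smooth[:, :, 2]
            _, mask = cv2.threshold(red, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
            # (4) circular Hough transform to detect the overall micro-droplets
            circles = cv2.HoughCircles(enhanced, cv2.HOUGH_GRADIENT, dp=1.2,
                                       minDist=10, param1=100, param2=20,
                                       minRadius=3, maxRadius=30)
            circles = [] if circles is None else circles[0]
            # (5) a droplet counts as fluorescent if most of its disk is bright
            fluorescent = []
            for (x, y, r) in circles:
                disk = np.zeros(gray.shape, np.uint8)
                cv2.circle(disk, (int(x), int(y)), int(r), 255, -1)
                if cv2.mean(mask, mask=disk)[0] > 127:   # illustrative cutoff
                    fluorescent.append((x, y, r))
            return len(fluorescent), len(circles)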

  9. Micro-Droplet Detection Method for Measuring the Concentration of Alkaline Phosphatase-Labeled Nanoparticles in Fluorescence Microscopy.

    PubMed

    Li, Rufeng; Wang, Yibei; Xu, Hong; Fei, Baowei; Qin, Binjie

    2017-11-21

    This paper developed and evaluated a quantitative image analysis method to measure the concentration of nanoparticles on which alkaline phosphatase (AP) was immobilized. These AP-labeled nanoparticles are widely used as signal markers for tagging biomolecules at nanometer and sub-nanometer scales. The AP-labeled nanoparticle concentration measurement can then be directly used to quantitatively analyze the biomolecular concentration. Micro-droplets are mono-dispersed micro-reactors that can be used to encapsulate and detect AP-labeled nanoparticles. Micro-droplets include both empty micro-droplets and fluorescent micro-droplets, where fluorescent micro-droplets are generated by the fluorescence reaction between the APs adhering to a single nanoparticle and the corresponding fluorogenic substrates within droplets. By detecting micro-droplets and calculating the proportion of fluorescent micro-droplets to the overall micro-droplets, we can calculate the AP-labeled nanoparticle concentration. The proposed micro-droplet detection method includes the following steps: (1) Gaussian filtering to remove the noise of the overall fluorescent targets; (2) contrast-limited adaptive histogram equalization to enhance the contrast of weakly luminescent micro-droplets; (3) a red-channel maximizing inter-class variance (Otsu) thresholding method to segment the enhanced image into the binary map of the overall micro-droplets; (4) a circular Hough transform (CHT) method to detect the overall micro-droplets; and (5) an intensity-mean-based thresholding segmentation method to extract the fluorescent micro-droplets. The experimental results on fluorescent micro-droplet images show that the average accuracy of our micro-droplet detection method is 0.9586, the average true positive rate is 0.9502, and the average false positive rate is 0.0073. The detection method can be successfully applied to measure AP-labeled nanoparticle concentration in fluorescence microscopy.

  10. Ecosystem and Food Security in a Changing Climate

    NASA Astrophysics Data System (ADS)

    Field, C. B.

    2011-12-01

    Observed and projected impacts of climate change for ecosystem and food security tend to appear as changes in the risk of both desirable and undesirable outcomes. As a consequence, it is useful to frame the challenge of adaptation to a changing climate as a problem in risk management. For some kinds of impacts, the risks are relatively well characterized. For others, they are poorly known. Especially for the cases where the risks are poorly known, effective adaptation will need to consider approaches that build dynamic portfolios of options, based on learning from experience. Effective adaptation approaches also need to consider the risks of threshold-type responses, where opportunities for gradual adaptation based on learning may be limited. Finally, effective adaptation should build on the understanding that negative impacts on ecosystems and food security often result from extreme events, where a link to climate change may be unclear now and far into the future. Ecosystem and food security impacts that potentially require adaptation to a changing climate vary from region to region and interact strongly with actions not related to climate. In many ecosystems, climate change shifts the risk profile to increase risks of wildfire and biological invasions. Higher order risks from factors like pests and pathogens remain difficult to quantify. For food security, observational evidence highlights threshold-like behavior to high temperature in yields of a number of crops. But the risks to food security may be much broader, encompassing risks to availability of irrigation, degradation of topsoil, and challenges of storage and distribution. A risk management approach facilitates consideration of all these challenges with a unified framework.

  11. HIGH DIMENSIONAL COVARIANCE MATRIX ESTIMATION IN APPROXIMATE FACTOR MODELS

    PubMed Central

    Fan, Jianqing; Liao, Yuan; Mincheva, Martina

    2012-01-01

    The variance covariance matrix plays a central role in the inferential theories of high dimensional factor models in finance and economics. Popular regularization methods that directly exploit sparsity are not applicable to many financial problems. Classical methods of estimating covariance matrices are based on strict factor models, assuming independent idiosyncratic components. This assumption, however, is restrictive in practical applications. By assuming a sparse error covariance matrix, we allow the presence of cross-sectional correlation even after taking out common factors, enabling us to combine the merits of both methods. We estimate the sparse covariance using the adaptive thresholding technique as in Cai and Liu (2011), taking into account the fact that direct observations of the idiosyncratic components are unavailable. The impact of high dimensionality on the covariance matrix estimation based on the factor structure is then studied. PMID:22661790
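
    A POET-style sketch of the estimator in the spirit of this abstract is given below: extract common factors by principal components, then apply entry-wise adaptive thresholding to the residual (idiosyncratic) covariance. The PCA factor proxy, the constant c, and the sqrt(log p / n) threshold form are assumptions of this sketch rather than the paper's exact specification.

        import numpy as np

        def factor_threshold_cov(returns, n_factors=3, c=0.5):
            """Factor-based covariance with adaptively thresholded residual part."""
            n, p = returns.shape
            X = returns - returns.mean(axis=0)
            U, s, Vt = np.linalg.svd(X, full_matrices=False)
            F = U[:, :n_factors] * s[:n_factors]          # estimated factors (n x K)
            B = Vt[:n_factors].T                          # loadings (p x K)
            common = B @ np.diag((s[:n_factors] ** 2) / n) @ B.T
            resid = X - F @ Vt[:n_factors]
            Su = resid.T @ resid / n                      # idiosyncratic covariance
            # Entry-wise adaptive soft-thresholding, keeping the diagonal intact
            tau = c * np.sqrt(np.outer(np.diag(Su), np.diag(Su)) * np.log(p) / n)
            off = np.sign(Su) * np.maximum(np.abs(Su) - tau, 0.0)
            np.fill_diagonal(off, np.diag(Su))
            return common + off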

  12. Characterization of the Distance Relationship Between Localized Serotonin Receptors and Glia Cells on Fluorescence Microscopy Images of Brain Tissue.

    PubMed

    Jacak, Jaroslaw; Schaller, Susanne; Borgmann, Daniela; Winkler, Stephan M

    2015-08-01

    We here present two new methods for the characterization of fluorescent localization microscopy images obtained from immunostained brain tissue sections. Direct stochastic optical reconstruction microscopy images of 5-HT1A serotonin receptors and glial fibrillary acidic proteins in healthy cryopreserved brain tissues are analyzed. In detail, we here present two image processing methods for characterizing differences in receptor distribution on glial cells and their distribution on neural cells: One variant relies on skeleton extraction and adaptive thresholding, the other on k-means based discrete layer segmentation. Experimental results show that both methods can be applied for distinguishing classes of images with respect to serotonin receptor distribution. Quantification of nanoscopic changes in relative protein expression on particular cell types can be used to analyze degeneration in tissues caused by diseases or medical treatment.

  13. Reliability of TMS phosphene threshold estimation: Toward a standardized protocol.

    PubMed

    Mazzi, Chiara; Savazzi, Silvia; Abrahamyan, Arman; Ruzzoli, Manuela

    Phosphenes induced by transcranial magnetic stimulation (TMS) are a subjectively described visual phenomenon employed in basic and clinical research as an index of the excitability of retinotopically organized areas in the brain. Phosphene threshold estimation is a preliminary step in many TMS experiments in visual cognition for setting the appropriate level of TMS doses; however, the lack of a direct comparison of the available methods for phosphene threshold estimation leaves the reliability of those methods for setting TMS doses unresolved. The present work aims to fill this gap. We compared the most common methods for phosphene threshold calculation, namely the Method of Constant Stimuli (MOCS), the Modified Binary Search (MOBS) and the Rapid Estimation of Phosphene Threshold (REPT). In two experiments we tested the reliability of PT estimation under each of the three methods, considering the day of administration, participants' expertise in phosphene perception and the sensitivity of each method to the initial values used for the threshold calculation. We found that MOCS and REPT have comparable reliability when estimating phosphene thresholds, while MOBS estimations appear less stable. Based on our results, researchers and clinicians can estimate phosphene thresholds according to MOCS or REPT equally reliably, depending on their specific investigation goals. We suggest several important factors for consideration when calculating phosphene thresholds and describe strategies to adopt in experimental procedures. Copyright © 2017 Elsevier Inc. All rights reserved.

  14. A threshold selection method based on edge preserving

    NASA Astrophysics Data System (ADS)

    Lou, Liantang; Dan, Wei; Chen, Jiaqi

    2015-12-01

    A method of automatic threshold selection for image segmentation is presented. An optimal threshold is selected so that image edges are preserved in the segmentation. The shortcoming of Otsu's method based on gray-level histograms is analyzed. The edge energy function of a bivariate continuous function is expressed as a line integral, and the edge energy function of an image is approximated by discretizing that integral. An optimal threshold is then obtained by maximizing the edge energy function. Several experimental results are presented and compared with Otsu's method.
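
    One plausible discrete reading of this criterion is sketched below: for each candidate threshold, accumulate the gradient magnitude along the boundary of the binarized image and keep the threshold that maximizes this edge energy. The 4-neighbor boundary definition and the exhaustive scan are assumptions of this sketch, not the paper's exact discretization.

        import numpy as np

        def edge_preserving_threshold(img):
            """Pick the gray level maximizing gradient energy on the binarization boundary."""
            img = img.astype(float)
            gy, gx = np.gradient(img)
            grad = np.hypot(gx, gy)
            best_t, best_e = None, -np.inf
            for t in range(int(img.min()) + 1, int(img.max())):
                b = img >= t
                # boundary pixels: binarization differs from a 4-neighbor
                edge = np.zeros_like(b)
                edge[:-1, :] |= b[:-1, :] != b[1:, :]
                edge[:, :-1] |= b[:, :-1] != b[:, 1:]
                e = grad[edge].sum()
                if e > best_e:
                    best_t, best_e = t, e
            return best_t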

  15. ASPIRE In-Home: rationale, design, and methods of a study to evaluate the safety and efficacy of automatic insulin suspension for nocturnal hypoglycemia.

    PubMed

    Klonoff, David C; Bergenstal, Richard M; Garg, Satish K; Bode, Bruce W; Meredith, Melissa; Slover, Robert H; Ahmann, Andrew; Welsh, John B; Lee, Scott W

    2013-07-01

    Nocturnal hypoglycemia is a barrier to therapy intensification efforts in diabetes. The Paradigm® Veo™ system may mitigate nocturnal hypoglycemia by automatically suspending insulin when a prespecified sensor glucose threshold is reached. ASPIRE (Automation to Simulate Pancreatic Insulin REsponse) In-Home (NCT01497938) was a multicenter, randomized, parallel, adaptive study of subjects with type 1 diabetes. The control arm used sensor-augmented pump therapy. The treatment arm used sensor-augmented pump therapy with threshold suspend, which automatically suspends the insulin pump in response to a sensor glucose value at or below a prespecified threshold. To be randomized, subjects had to have demonstrated ≥2 episodes of nocturnal hypoglycemia, defined as >20 consecutive minutes of sensor glucose values ≤65 mg/dl starting between 10:00 PM and 8:00 AM in the 2-week run-in phase. The 3-month study phase evaluated safety by comparing changes in glycated hemoglobin (A1C) values and evaluated efficacy by comparing the mean area under the glucose concentration time curves for nocturnal hypoglycemia events in the two groups. Other outcomes included the rate of nocturnal hypoglycemia events and the distribution of sensor glucose values. Data from the ASPIRE In-Home study should provide evidence on the safety of the threshold suspend feature with respect to A1C and its efficacy with respect to severity and duration of nocturnal hypoglycemia when used at home over a 3-month period. © 2013 Diabetes Technology Society.

  16. Category-Specific Comparison of Univariate Alerting Methods for Biosurveillance Decision Support

    PubMed Central

    Elbert, Yevgeniy; Hung, Vivian; Burkom, Howard

    2013-01-01

    Objective: For a multi-source decision support application, we sought to match univariate alerting algorithms to surveillance data types to optimize detection performance. Introduction: Temporal alerting algorithms commonly used in syndromic surveillance systems are often adjusted for data features such as cyclic behavior, but are subject to overfitting or misspecification errors when applied indiscriminately. In a project for the Armed Forces Health Surveillance Center to enable multivariate decision support, we obtained 4.5 years of outpatient, prescription and laboratory test records from all US military treatment facilities. A proof-of-concept project phase produced 16 events with multiple-evidence corroboration for comparison of alerting algorithms for detection performance. We used representative streams from each data source to compare the sensitivity of six algorithms to injected spikes, and we used all data streams from the 16 known events to compare them for detection timeliness. Methods: The six methods compared were: (1) the Holt-Winters generalized exponential smoothing method; (2) automated choice between daily methods, regression and an exponentially weighted moving average (EWMA); (3) an adaptive daily Shewhart-type chart; (4) an adaptive one-sided daily CUSUM; (5) EWMA applied to 7-day means with a trend correction; and (6) a 7-day temporal scan statistic. Sensitivity testing: We conducted comparative sensitivity testing for categories of time series with similar scales and seasonal behavior. We added multiples of the standard deviation of each time series as single-day injects in separate algorithm runs. For each candidate method, we then used as a sensitivity measure the proportion of these runs for which the output of each algorithm was below alerting thresholds estimated empirically for each algorithm using simulated data streams. We identified the algorithm(s) whose sensitivity was most consistently high for each data category. For each syndromic query applied to each data source (outpatient, lab test orders, and prescriptions), 502 authentic time series were derived, one for each reporting treatment facility. Data categories were selected in order to group time series with similar expected algorithm performance: median > 10; 0 < median ≤ 10; median = 0; lag-7 autocorrelation coefficient ≥ 0.2; lag-7 autocorrelation coefficient < 0.2. Timeliness testing: For the timeliness testing, we avoided the artificiality of simulated signals by measuring alerting detection delays in the 16 corroborated outbreaks. The multiple time series from these events gave a total of 141 time series with outbreak intervals for timeliness testing. Two measures were computed to quantify timeliness of detection: the Median Detection Delay (median number of days to detect the outbreak) and the Penalized Mean Detection Delay (mean number of days to detect the outbreak, with missed outbreaks penalized as 1 day plus the maximum detection time). Results: Based on the injection results, the Holt-Winters algorithm was most sensitive among time series with positive medians. The adaptive CUSUM and the Shewhart methods were most sensitive for data streams with median zero. Table 1 gives timeliness results using the 141 outbreak-associated streams for sparse (median = 0) and non-sparse data categories.

    Table 1. Detection delay in days, by data category and alerting method.

    Data category   Measure          Holt-Winters  Regr./EWMA  Adapt. Shewhart  Adapt. CUSUM  7-day trend-adj. EWMA  7-day temporal scan
    Median = 0      Median delay     3             2           4                2             4.5                    2
    Median = 0      Penalized mean   7.2           7           6.6              6.2           7.3                    7.6
    Median > 0      Median delay     2             2           2.5              2             6                      4
    Median > 0      Penalized mean   6.1           7           7.2              7.1           7.7                    6.6

    For non-sparse data, the Holt-Winters method again had the shortest detection delays. For data with median = 0, the adaptive CUSUM was superior at a daily false-alarm probability of 0.01, but the Shewhart method was timelier at more liberal thresholds. Conclusions: Both kinds of detection performance analysis showed the method based on Holt-Winters exponential smoothing superior on non-sparse time series with day-of-week effects. The adaptive CUSUM and Shewhart methods proved optimal on sparse data and data without weekly patterns.
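
    Of the six methods above, the adaptive one-sided CUSUM is compact enough to sketch. In the version below, the baseline mean and variance are updated by exponential smoothing so the chart adapts to the recent level of the series; the reference value k, decision limit h, and smoothing constant alpha are illustrative settings, not those used in the study.

        import numpy as np

        def adaptive_cusum(counts, k=0.5, h=4.0, alpha=0.1):
            """Adaptive one-sided daily CUSUM with exponentially smoothed baseline."""
            mu, var, s = counts[0], max(counts[0], 1.0), 0.0
            alarms = []
            for y in counts[1:]:
                z = (y - mu) / np.sqrt(var)          # standardized residual
                s = max(0.0, s + z - k)              # one-sided CUSUM recursion
                alarms.append(s > h)
                if s > h:
                    s = 0.0                          # reset after an alarm
                # adapt baseline estimates toward the current observation
                resid = y - mu
                mu = mu + alpha * resid
                var = max((1 - alpha) * var + alpha * resid ** 2, 1e-6)
            return alarms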

  17. Heat-Related Deaths in Hot Cities: Estimates of Human Tolerance to High Temperature Thresholds

    PubMed Central

    Harlan, Sharon L.; Chowell, Gerardo; Yang, Shuo; Petitti, Diana B.; Morales Butler, Emmanuel J.; Ruddell, Benjamin L.; Ruddell, Darren M.

    2014-01-01

    In this study we characterized the relationship between temperature and mortality in central Arizona desert cities that have an extremely hot climate. Relationships between daily maximum apparent temperature (ATmax) and mortality for eight condition-specific causes and all-cause deaths were modeled for all residents and separately for males and females ages <65 and ≥65 during the months May–October for years 2000–2008. The most robust relationship was between ATmax on day of death and mortality from direct exposure to high environmental heat. For this condition-specific cause of death, the heat thresholds in all gender and age groups (ATmax = 90–97 °F; 32.2‒36.1 °C) were below local median seasonal temperatures in the study period (ATmax = 99.5 °F; 37.5 °C). Heat threshold was defined as ATmax at which the mortality ratio begins an exponential upward trend. Thresholds were identified in younger and older females for cardiac disease/stroke mortality (ATmax = 106 and 108 °F; 41.1 and 42.2 °C) with a one-day lag. Thresholds were also identified for mortality from respiratory diseases in older people (ATmax = 109 °F; 42.8 °C) and for all-cause mortality in females (ATmax = 107 °F; 41.7 °C) and males <65 years (ATmax = 102 °F; 38.9 °C). Heat-related mortality in a region that has already made some adaptations to predictable periods of extremely high temperatures suggests that more extensive and targeted heat-adaptation plans for climate change are needed in cities worldwide. PMID:24658410

  18. Compensation for red-green contrast loss in anomalous trichromats

    PubMed Central

    Boehm, A. E.; MacLeod, D. I. A.; Bosten, J. M.

    2014-01-01

    For anomalous trichromats, threshold contrasts for color differences captured by the L and M cones and their anomalous analogs are much higher than for normal trichromats. The greater spectral overlap of the cone sensitivities reduces chromatic contrast both at and above threshold. But above threshold, adaptively nonlinear processing might compensate for the chromatically impoverished photoreceptor inputs. Ratios of sensitivity for threshold variations and for color appearance along the two cardinal axes of MacLeod-Boynton chromaticity space were calculated for three groups: normals (N = 15), deuteranomals (N = 9), and protanomals (N = 5). Using a four-alternative forced choice (4AFC) task, threshold sensitivity was measured in four color-directions along the two cardinal axes. For the same participants, we reconstructed perceptual color spaces for the positions of 25 hues using multidimensional scaling (MDS). From the reconstructed color spaces we extracted “color difference ratios,” defined as ratios for the size of perceived color differences along the L/(L + M) axis relative to those along the S/(L + M) axis, analogous to “sensitivity ratios” extracted from the 4AFC task. In the 4AFC task, sensitivity ratios were 38% of normal for deuteranomals and 19% of normal for protanomals. Yet, in the MDS results, color difference ratios were 86% of normal for deuteranomals and 67% of normal for protanomals. Thus, the contraction along the L/(L + M) axis shown in the perceptual color spaces of anomalous trichromats is far smaller than predicted by their reduced sensitivity, suggesting that an adaptive adjustment of postreceptoral gain may magnify the cone signals of anomalous trichromats to exploit the range of available postreceptoral neural signals. PMID:25413625

  19. Detecting wood surface defects with fusion algorithm of visual saliency and local threshold segmentation

    NASA Astrophysics Data System (ADS)

    Wang, Xuejuan; Wu, Shuhang; Liu, Yunpeng

    2018-04-01

    This paper presents a new method for wood defect detection that solves the over-segmentation problem of local threshold segmentation methods. The method effectively combines visual saliency and local threshold segmentation. Firstly, defect areas are coarsely located by using the spectral residual method to compute their global visual saliency. Then, threshold segmentation with the maximum inter-class variance (Otsu) method is adopted to position and segment the wood surface defects precisely around the coarsely located areas. Lastly, we use mathematical morphology to process the binary images after segmentation, which reduces noise and removes small false objects. Experiments on test images of insect holes, dead knots and sound knots show that the proposed method obtains ideal segmentation results and is superior to existing segmentation methods based on edge detection, Otsu thresholding and local threshold segmentation.
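
    The spectral residual saliency step can be sketched directly from its definition: subtract a locally averaged log-amplitude spectrum from the original, keep the phase, and invert. The 3x3 averaging kernel is an illustrative choice, and common refinements (downscaling the input, Gaussian-smoothing the saliency map) are omitted here.

        import numpy as np

        def spectral_residual_saliency(gray):
            """Spectral residual saliency map for coarse defect localization (sketch)."""
            f = np.fft.fft2(gray.astype(float))
            log_amp = np.log1p(np.abs(f))
            phase = np.angle(f)
            # spectral residual = log amplitude minus its local (3x3) average
            k = 3
            pad = np.pad(log_amp, k // 2, mode='edge')
            avg = sum(pad[i:i + log_amp.shape[0], j:j + log_amp.shape[1]]
                      for i in range(k) for j in range(k)) / (k * k)
            residual = log_amp - avg
            sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
            return sal / sal.max()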

  20. Effects of silicone hydrogel contact lens wear on ocular surface sensitivity to tactile, pneumatic mechanical, and chemical stimulation.

    PubMed

    Situ, Ping; Simpson, Trefford L; Jones, Lyndon W; Fonn, Desmond

    2010-12-01

    To determine the effects of silicone hydrogel lens wear and lens-solution interactions on ocular surface sensitivity. Forty-eight adapted lens wearers completed the study, which comprised two phases. Phase 1 included habitual lens wear, no lens wear (7 ± 3 days), and balafilcon A lenses (PV; PureVision; Bausch & Lomb, Rochester, NY) with a hydrogen peroxide-based regimen for 2 weeks; phase 2 included wear of PV with the use of a multipurpose solution containing either polyhexamethylene-biguanide (PHMB) or Polyquad/Aldox (Alcon Laboratories, Fort Worth, TX) preservative, each for 1 week, with a 2-week washout period between solutions. Tactile and pneumatic (mechanical and chemical) stimuli were delivered, and thresholds were determined by Cochet-Bonnet (Luneau Ophthalmologie, Chartres, France) and Belmonte (Cooperative Research Centre for Eye Research and Technology, Sydney, NSW, Australia) pneumatic esthesiometers, respectively. Corneal and conjunctival thresholds and staining scores were assessed at baseline, after 2 and 8 hours of lens wear on day 1 and at the end of each wearing cycle (2 hours). In phase 1, compared to the no-lens baseline, corneal tactile thresholds increased at the 1-day, 8-hour and the 2-week visits (P < 0.05), whereas conjunctival mechanical thresholds decreased at the 1-day, 2-hour and the 2-week visits (P < 0.05). In phase 2, the chemical thresholds were lower with PHMB-preserved solution compared with the Polyquad/Aldox system at the 1-day, 2-hour and the 1-week visits (P < 0.05). Staining scores correlated inversely with conjunctival chemical thresholds (all P < 0.05). Ocular surface sensitivity changed in adapted lens wearers, when lenses were refit after a no-lens interval and during lens wear with different care regimens. The corneal staining that was observed with certain lens-solution combinations was accompanied by sensory alteration of the ocular surface-that is, higher levels of staining correlated with increased conjunctival chemical sensitivity. (ClinicalTrials.gov number, NCT00455455.).

  1. Performance analysis of cross-layer design with average PER constraint over MIMO fading channels

    NASA Astrophysics Data System (ADS)

    Dang, Xiaoyu; Liu, Yan; Yu, Xiangbin

    2015-12-01

    In this article, a cross-layer design (CLD) scheme for multiple-input and multiple-output system with the dual constraints of imperfect feedback and average packet error rate (PER) is presented, which is based on the combination of the adaptive modulation and the automatic repeat request protocols. The design performance is also evaluated over wireless Rayleigh fading channel. With the constraint of target PER and average PER, the optimum switching thresholds (STs) for attaining maximum spectral efficiency (SE) are developed. An effective iterative algorithm for finding the optimal STs is proposed via Lagrange multiplier optimisation. With different thresholds available, the analytical expressions of the average SE and PER are provided for the performance evaluation. To avoid the performance loss caused by the conventional single estimate, the multiple outdated estimates (MOE) method, which utilises multiple previous channel estimation information, is presented for CLD to improve the system performance. It is shown that numerical simulations for average PER and SE are consistent with the theoretical analysis and that the developed CLD with average PER constraint can meet the target PER requirement and show better performance in comparison with the conventional CLD with instantaneous PER constraint. Especially, the CLD based on the MOE method can markedly increase the system SE and greatly reduce the impact of feedback delay.

  2. QDMR: a quantitative method for identification of differentially methylated regions by entropy

    PubMed Central

    Zhang, Yan; Liu, Hongbo; Lv, Jie; Xiao, Xue; Zhu, Jiang; Liu, Xiaojuan; Su, Jianzhong; Li, Xia; Wu, Qiong; Wang, Fang; Cui, Ying

    2011-01-01

    DNA methylation plays critical roles in transcriptional regulation and chromatin remodeling. Differentially methylated regions (DMRs) have important implications for development, aging and diseases. Therefore, genome-wide mapping of DMRs across various temporal and spatial methylomes is important in revealing the impact of epigenetic modifications on heritable phenotypic variation. We present a quantitative approach, quantitative differentially methylated regions (QDMRs), to quantify methylation difference and identify DMRs from genome-wide methylation profiles by adapting Shannon entropy. QDMR was applied to synthetic methylation patterns and methylation profiles detected by methylated DNA immunoprecipitation microarray (MeDIP-chip) in human tissues/cells. This approach can give a reasonable quantitative measure of methylation difference across multiple samples. Then DMR threshold was determined from methylation probability model. Using this threshold, QDMR identified 10 651 tissue DMRs which are related to the genes enriched for cell differentiation, including 4740 DMRs not identified by the method developed by Rakyan et al. QDMR can also measure the sample specificity of each DMR. Finally, the application to methylation profiles detected by reduced representation bisulphite sequencing (RRBS) in mouse showed the platform-free and species-free nature of QDMR. This approach provides an effective tool for the high-throughput identification of potential functional regions involved in epigenetic regulation. PMID:21306990
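
    The core of the approach is a Shannon-entropy score over samples: when methylation of a region is concentrated in a few samples, entropy is low and the region is a candidate DMR. The sketch below computes such a score in the spirit of QDMR; the normalization and pseudocount are assumptions of this sketch, and QDMR's own weighting and probability-model threshold are not reproduced.

        import numpy as np

        def methylation_entropy(m):
            """Entropy of one region's methylation levels across samples (sketch)."""
            m = np.asarray(m, dtype=float)
            p = (m + 1e-9) / (m + 1e-9).sum()       # normalize to a distribution
            return float(-(p * np.log2(p)).sum())   # low value = sample-specific

        uniform  = methylation_entropy([0.8, 0.8, 0.8, 0.8])    # high entropy
        specific = methylation_entropy([0.8, 0.01, 0.01, 0.01]) # lower entropy, DMR-like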

  3. Application of blocking diagnosis methods to General Circulation Models. Part I: a novel detection scheme

    NASA Astrophysics Data System (ADS)

    Barriopedro, D.; García-Herrera, R.; Trigo, R. M.

    2010-12-01

    This paper aims to provide a new blocking definition with applicability to observations and model simulations. An updated review of previous blocking detection indices is provided and some of their implications and caveats discussed. A novel blocking index is proposed by reconciling two traditional approaches based on anomaly and absolute flows. Blocks are considered from a complementary perspective as a signature in the anomalous height field capable of reversing the meridional jet-based height gradient in the total flow. The method succeeds in identifying 2-D persistent anomalies associated with a weather regime in the total flow with blockage of the westerlies. The new index accounts for the duration, intensity, extension, propagation, and spatial structure of a blocking event. In spite of its increased complexity, the detection efficiency of the method is improved without hampering the computational time. Furthermore, some misleading identification problems and artificial assumptions resulting from previous single blocking indices are avoided with the new approach. The characteristics of blocking for 40 years of reanalysis (1950-1989) over the Northern Hemisphere are described from the perspective of the new definition and compared to those resulting from two standard blocking indices and different critical thresholds. As compared to single approaches, the novel index shows a better agreement with reported proxies of blocking activity, namely climatological regions of simultaneous wave amplification and maximum band-pass filtered height standard deviation. An additional asset of the method is its adaptability to different data sets. As critical thresholds are specific to the data set employed, the method is useful for observations and model simulations of different resolutions, temporal lengths and time-variant basic states, optimizing its value as a tool for model validation. Special attention has been paid to the design of an objective scheme easily applicable to General Circulation Models, where observational thresholds may be unsuitable due to the presence of model bias. Part II of this study deals with a specific implementation of this novel method to simulations of the ECHO-G global climate model.

  4. Designs and adaptive analysis plans for pivotal clinical trials of therapeutics and companion diagnostics.

    PubMed

    Simon, Richard

    2008-06-01

    Developments in genomics and biotechnology provide unprecedented opportunities for the development of effective therapeutics and companion diagnostics for matching the right drug to the right patient. Effective co-development involves many new challenges with increased opportunity for success as well as delay and failure. Clinical trial designs and adaptive analysis plans for the prospective design of pivotal trials of new therapeutics and companion diagnostics are reviewed. Effective co-development requires careful prospective planning of the design and analysis strategy for pivotal clinical trials. Randomized clinical trials continue to be important for evaluating the effectiveness of new treatments, but the target populations for analysis should be prospectively specified based on the companion diagnostic. Post hoc analyses of traditionally designed randomized clinical trials are often deeply problematic. Clear separation is generally required of the data used for developing the diagnostic test, including their threshold of positivity, from the data used for evaluating treatment effectiveness in subsets determined by the test. Adaptive analysis can be used to provide flexibility to the analysis but the use of such methods requires careful planning and prospective definition in order to assure that the pivotal trial adequately limits the chance of erroneous conclusions.

  5. California sea lion (Zalophus californianus) aerial hearing sensitivity measured using auditory steady-state response and psychophysical methods.

    PubMed

    Mulsow, Jason; Finneran, James J; Houser, Dorian S

    2011-04-01

    Although electrophysiological methods of measuring the hearing sensitivity of pinnipeds are not yet as refined as those for dolphins and porpoises, they appear to be a promising supplement to traditional psychophysical procedures. In order to further standardize electrophysiological methods with pinnipeds, a within-subject comparison of psychophysical and auditory steady-state response (ASSR) measures of aerial hearing sensitivity was conducted with a 1.5-yr-old California sea lion. The psychophysical audiogram was similar to those previously reported for otariids, with a U-shape, and thresholds near 10 dB re 20 μPa at 8 and 16 kHz. ASSR thresholds measured using both single and multiple simultaneous amplitude-modulated tones closely reproduced the psychophysical audiogram, although the mean ASSR thresholds were elevated relative to psychophysical thresholds. Differences between psychophysical and ASSR thresholds were greatest at the low- and high-frequency ends of the audiogram. Thresholds measured using the multiple ASSR method were not different from those measured using the single ASSR method. The multiple ASSR method was more rapid than the single ASSR method, and allowed for threshold measurements at seven frequencies in less than 20 min. The multiple ASSR method may be especially advantageous for hearing sensitivity measurements with otariid subjects that are untrained for psychophysical procedures.

  6. Embedded pitch adapters: A high-yield interconnection solution for strip sensors

    NASA Astrophysics Data System (ADS)

    Ullán, M.; Allport, P. P.; Baca, M.; Broughton, J.; Chisholm, A.; Nikolopoulos, K.; Pyatt, S.; Thomas, J. P.; Wilson, J. A.; Kierstead, J.; Kuczewski, P.; Lynn, D.; Hommels, L. B. A.; Fleta, C.; Fernandez-Tejero, J.; Quirion, D.; Bloch, I.; Díez, S.; Gregor, I. M.; Lohwasser, K.; Poley, L.; Tackmann, K.; Hauser, M.; Jakobs, K.; Kuehn, S.; Mahboubi, K.; Mori, R.; Parzefall, U.; Clark, A.; Ferrere, D.; Gonzalez Sevilla, S.; Ashby, J.; Blue, A.; Bates, R.; Buttar, C.; Doherty, F.; McMullen, T.; McEwan, F.; O'Shea, V.; Kamada, S.; Yamamura, K.; Ikegami, Y.; Nakamura, K.; Takubo, Y.; Unno, Y.; Takashima, R.; Chilingarov, A.; Fox, H.; Affolder, A. A.; Casse, G.; Dervan, P.; Forshaw, D.; Greenall, A.; Wonsak, S.; Wormald, M.; Cindro, V.; Kramberger, G.; Mandić, I.; Mikuž, M.; Gorelov, I.; Hoeferkamp, M.; Palni, P.; Seidel, S.; Taylor, A.; Toms, K.; Wang, R.; Hessey, N. P.; Valencic, N.; Hanagaki, K.; Dolezal, Z.; Kodys, P.; Bohm, J.; Mikestikova, M.; Bevan, A.; Beck, G.; Milke, C.; Domingo, M.; Fadeyev, V.; Galloway, Z.; Hibbard-Lubow, D.; Liang, Z.; Sadrozinski, H. F.-W.; Seiden, A.; To, K.; French, R.; Hodgson, P.; Marin-Reyes, H.; Parker, K.; Jinnouchi, O.; Hara, K.; Bernabeu, J.; Civera, J. V.; Garcia, C.; Lacasta, C.; Marti i Garcia, S.; Rodriguez, D.; Santoyo, D.; Solaz, C.; Soldevila, U.

    2016-09-01

    A proposal to fabricate large area strip sensors with integrated, or embedded, pitch adapters is presented for the End-cap part of the Inner Tracker in the ATLAS experiment. To implement the embedded pitch adapters, a second metal layer is used in the sensor fabrication, for signal routing to the ASICs. Sensors with different embedded pitch adapters have been fabricated in order to optimize the design and technology. Inter-strip capacitance, noise, pick-up, cross-talk, signal efficiency, and fabrication yield have been taken into account in their design and fabrication. Inter-strip capacitance tests taking into account all channel neighbors reveal the important differences between the various designs considered. These tests have been correlated with noise figures obtained in full assembled modules, showing that the tests performed on the bare sensors are a valid tool to estimate the final noise in the full module. The full modules have been subjected to test beam experiments in order to evaluate the incidence of cross-talk, pick-up, and signal loss. The detailed analysis shows no indication of cross-talk or pick-up as no additional hits can be observed in any channel not being hit by the beam above 170 mV threshold, and the signal in those channels is always below 1% of the signal recorded in the channel being hit, above 100 mV threshold. First results on irradiated mini-sensors with embedded pitch adapters do not show any change in the interstrip capacitance measurements with only the first neighbors connected.

  7. Improved Discovery of Molecular Interactions in Genome-Scale Data with Adaptive Model-Based Normalization

    PubMed Central

    Brown, Patrick O.

    2013-01-01

    Background High throughput molecular-interaction studies using immunoprecipitations (IP) or affinity purifications are powerful and widely used in biology research. One of many important applications of this method is to identify the set of RNAs that interact with a particular RNA-binding protein (RBP). Here, the unique statistical challenge presented is to delineate a specific set of RNAs that are enriched in one sample relative to another, typically a specific IP compared to a non-specific control to model background. The choice of normalization procedure critically impacts the number of RNAs that will be identified as interacting with an RBP at a given significance threshold – yet existing normalization methods make assumptions that are often fundamentally inaccurate when applied to IP enrichment data. Methods In this paper, we present a new normalization methodology that is specifically designed for identifying enriched RNA or DNA sequences in an IP. The normalization (called adaptive or AD normalization) uses a basic model of the IP experiment and is not a variant of mean, quantile, or other methodology previously proposed. The approach is evaluated statistically and tested with simulated and empirical data. Results and Conclusions The adaptive (AD) normalization method results in a greatly increased range in the number of enriched RNAs identified, fewer false positives, and overall better concordance with independent biological evidence, for the RBPs we analyzed, compared to median normalization. The approach is also applicable to the study of pairwise RNA, DNA and protein interactions such as the analysis of transcription factors via chromatin immunoprecipitation (ChIP) or any other experiments where samples from two conditions, one of which contains an enriched subset of the other, are studied. PMID:23349766

  8. Temporal integration property of stereopsis after higher-order aberration correction

    PubMed Central

    Kang, Jian; Dai, Yun; Zhang, Yudong

    2015-01-01

    Based on a binocular adaptive optics visual simulator, we investigated the effect of higher-order aberration correction on the temporal integration property of stereopsis. Stereo threshold for line stimuli, viewed in 550 nm monochromatic light, was measured as a function of exposure duration, with higher-order aberrations uncorrected, binocularly corrected or monocularly corrected. Under all optical conditions, stereo threshold decreased with increasing exposure duration until a steady-state threshold was reached. The critical duration was determined by a quadratic summation model and the high goodness of fit suggested this model was reasonable. For normal subjects, the slope for stereo threshold versus exposure duration was about −0.5 on logarithmic coordinates, and the critical duration was about 200 ms. Both the slope and the critical duration were independent of the optical condition of the eye, showing no significant effect of higher-order aberration correction on the temporal integration property of stereopsis. PMID:26601010

  9. How to determine an optimal threshold to classify real-time crash-prone traffic conditions?

    PubMed

    Yang, Kui; Yu, Rongjie; Wang, Xuesong; Quddus, Mohammed; Xue, Lifang

    2018-08-01

    One of the proactive approaches to reducing traffic crashes is to identify hazardous traffic conditions that may lead to a crash, known as real-time crash prediction. Threshold selection is one of the essential steps of real-time crash prediction: it provides the cut-off point on the posterior probability of a crash occurring given a specific traffic condition, output by crash risk evaluation models, that separates potential crash warnings from normal traffic conditions. There is, however, a dearth of research that focuses on how to effectively determine an optimal threshold; the few studies that touch on it do so only when discussing the predictive performance of the models, and rely on subjective methods to choose the threshold. Subjective methods cannot automatically identify the optimal thresholds under different traffic and weather conditions in real applications. Thus, a theoretical method for selecting the threshold value is necessary to avoid subjective judgments. The purpose of this study is to provide a theoretical method for automatically identifying the optimal threshold. Considering the random effects of variable factors across all roadway segments, a mixed logit model was utilized to develop the crash risk evaluation model and evaluate the crash risk. Cross-entropy, between-class variance and other theories were employed and investigated to empirically identify the optimal threshold, and K-fold cross-validation was used to validate the performance of the proposed threshold selection methods against several evaluation criteria. The results indicate that (i) the mixed logit model obtains good performance and (ii) the classification performance of the threshold selected by the minimum cross-entropy method outperforms the other methods according to the criteria. This method is well suited to automatically identifying thresholds in crash prediction: it minimizes the cross-entropy between the original dataset, with its continuous probability of a crash occurring, and the binarized dataset obtained after using the threshold to separate potential crash warnings from normal traffic conditions. Copyright © 2018 Elsevier Ltd. All rights reserved.
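
    A minimum cross-entropy threshold on predicted crash probabilities can be sketched as follows. Each side of a candidate cut-off is summarized by its mean, and the threshold minimizing the cross-entropy between the continuous probabilities and this binarized representation is kept. This Li-Lee style formulation is one plausible reading of the criterion, not necessarily the paper's exact formulation.

        import numpy as np

        def min_cross_entropy_threshold(probs, grid=None):
            """Cut-off minimizing cross-entropy between probabilities and their binarization."""
            probs = np.clip(np.asarray(probs, dtype=float), 1e-9, 1 - 1e-9)
            if grid is None:
                grid = np.unique(probs)[1:-1]        # interior candidate cut-offs
            best_t, best_ce = None, np.inf
            for t in grid:
                lo, hi = probs[probs < t], probs[probs >= t]
                if len(lo) == 0 or len(hi) == 0:
                    continue
                ce = (lo * np.log(lo / lo.mean())).sum() + \
                     (hi * np.log(hi / hi.mean())).sum()
                if ce < best_ce:
                    best_t, best_ce = t, ce
            return best_t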

  10. Nociception, pain, negative moods and behavior selection

    PubMed Central

    Baliki, Marwan N.; Apkarian, A. Vania

    2015-01-01

    Recent neuroimaging studies suggest that the brain adapts with pain, as well as imparts risk for developing chronic pain. Within this context we revisit the concepts of nociception, acute and chronic pain, and negative moods relative to behavior selection. We redefine nociception as the mechanism protecting the organism from injury; acute pain as the failure of avoidant behavior; and a mesolimbic threshold process as the gate that transforms nociceptive activity into conscious pain. Adaptations in this threshold process are envisioned to be critical for the development of chronic pain. We deconstruct chronic pain into four distinct phases, each with specific mechanisms, and outline the current state of knowledge regarding these mechanisms: the limbic brain imparts risk, while mesolimbic learning processes reorganize the neocortex into a chronic pain state. Moreover, pain and negative moods are envisioned as a continuum of aversive behavioral learning, which enhances survival by protecting against threats. PMID:26247858

  11. Linear servomotor probe drive system with real-time self-adaptive position control for the Alcator C-Mod tokamak

    NASA Astrophysics Data System (ADS)

    Brunner, D.; Kuang, A. Q.; LaBombard, B.; Burke, W.

    2017-07-01

    A new servomotor drive system has been developed for the horizontal reciprocating probe on the Alcator C-Mod tokamak. Real-time measurements of plasma temperature and density—through use of a mirror Langmuir probe bias system—combined with a commercial linear servomotor and controller enable self-adaptive position control. Probe surface temperature and its rate of change are computed in real time and used to control probe insertion depth. It is found that a universal trigger threshold can be defined in terms of these two parameters; if the probe is triggered to retract when crossing the trigger threshold, it will reach the same ultimate surface temperature, independent of velocity, acceleration, or scrape-off layer heat flux scale length. In addition to controlling the probe motion, the controller is used to monitor and control all aspects of the integrated probe drive system.
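
    As a rough illustration of a retract trigger defined jointly on surface temperature and its rate of change, here is a hedged Python sketch; the trigger-curve form and all constants are invented for illustration and are not the paper's empirical threshold.

```python
import numpy as np

def should_retract(T_surf, dTdt, a=0.5, T_max=1900.0, rate_ref=2.0e4):
    """Retract when the (temperature, heating-rate) state crosses a fixed
    trigger curve. The curve shape and constants are purely illustrative;
    the paper's universal threshold is an empirical function of these same
    two quantities."""
    return T_surf + a * T_max * (dTdt / rate_ref) > T_max

# Simulated insertion: temperature rising faster as the probe goes deeper
t = np.linspace(0.0, 0.5, 500)
T = 300.0 + 8.0e3 * t ** 2
dT = np.gradient(T, t)
trigger_idx = np.argmax(should_retract(T, dT))
print(f"retract at t = {t[trigger_idx]:.3f} s, T = {T[trigger_idx]:.0f} K")
```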

  12. Low-resolution expression recognition based on central oblique average CS-LBP with adaptive threshold

    NASA Astrophysics Data System (ADS)

    Han, Sheng; Xi, Shi-qiong; Geng, Wei-dong

    2017-11-01

    In order to solve the problem of the low recognition rate of traditional feature extraction operators on low-resolution images, a novel expression recognition algorithm is proposed, named central oblique average center-symmetric local binary pattern (CS-LBP) with adaptive threshold (ATCS-LBP). Firstly, features of face images are extracted by the proposed operator after preprocessing. Secondly, the obtained feature image is divided into blocks. Thirdly, the histogram of each block is computed independently and all histograms are concatenated into a final feature vector. Finally, expression classification is achieved with a support vector machine (SVM) classifier. Experimental results on the Japanese female facial expression (JAFFE) database show that the proposed algorithm can achieve a recognition rate of 81.9% at a resolution as low as 16×16, which is much better than that of traditional feature extraction operators.
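
    The CS-LBP operator itself is easy to sketch. The following Python/numpy fragment computes a 4-bit center-symmetric LBP code per pixel with an adaptive comparison threshold; tying the threshold to the local dynamic range is an illustrative choice, since the abstract does not spell out the paper's adaptive rule (image borders wrap for brevity).

```python
import numpy as np

def cs_lbp(image, tau=0.05):
    """Center-symmetric LBP on the 8-neighborhood: each pixel gets a 4-bit
    code from comparing the four center-symmetric neighbor pairs. The
    comparison threshold is adaptive: a fraction tau of the local dynamic
    range (an illustrative rule, not necessarily the paper's)."""
    img = np.asarray(image, dtype=float)
    # 8 neighbors, ordered so that pair i is (offsets[i], offsets[i+4])
    offsets = [(-1, 0), (-1, 1), (0, 1), (1, 1),
               (1, 0), (1, -1), (0, -1), (-1, -1)]
    shifted = [np.roll(np.roll(img, dy, axis=0), dx, axis=1)
               for dy, dx in offsets]
    local_range = np.max(shifted, axis=0) - np.min(shifted, axis=0)
    thr = tau * local_range                      # adaptive threshold map
    code = np.zeros(img.shape, dtype=np.uint8)
    for i in range(4):                           # 4 center-symmetric pairs
        code |= ((shifted[i] - shifted[i + 4]) > thr).astype(np.uint8) << i
    return code                                  # values in 0..15

codes = cs_lbp(np.random.default_rng(1).random((16, 16)))
hist = np.bincount(codes.ravel(), minlength=16)  # per-block histograms -> SVM
```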

  13. Comparison of the Performances of Five Primer Sets for the Detection and Quantification of Plasmodium in Anopheline Vectors by Real-Time PCR.

    PubMed

    Chaumeau, V; Andolina, C; Fustec, B; Tuikue Ndam, N; Brengues, C; Herder, S; Cerqueira, D; Chareonviriyaphap, T; Nosten, F; Corbel, V

    2016-01-01

    Quantitative real-time polymerase chain reaction (qrtPCR) has brought significant improvement to the detection of Plasmodium in anopheline vectors. A wide variety of primers has been used in different assays, mostly adapted from the molecular diagnosis of malaria in humans. However, such an adaptation can impact the sensitivity of the PCR. We therefore compared the sensitivity of five primer sets with different molecular targets on blood stage, sporozoite and oocyst standards of Plasmodium falciparum (Pf) and P. vivax (Pv). Dilution series of standard DNA were used to discriminate between methods at low parasite concentrations and to generate standard curves suitable for the absolute quantification of Plasmodium sporozoites. Our results showed that the best primers to detect blood stages were not necessarily the best ones to detect sporozoites. The absolute detection threshold of our qrtPCR assay varied between 3.6 and 360 Pv sporozoites and between 6 and 600 Pf sporozoites per mosquito, according to the primer set used in the reaction mix. In this paper, we discuss the general performance of each primer set and highlight the need to use efficient detection methods for transmission studies.

  14. Comparison of the Performances of Five Primer Sets for the Detection and Quantification of Plasmodium in Anopheline Vectors by Real-Time PCR

    PubMed Central

    Chaumeau, V.; Andolina, C.; Fustec, B.; Tuikue Ndam, N.; Brengues, C.; Herder, S.; Cerqueira, D.; Chareonviriyaphap, T.; Nosten, F.; Corbel, V.

    2016-01-01

    Quantitative real-time polymerase chain reaction (qrtPCR) has brought significant improvement to the detection of Plasmodium in anopheline vectors. A wide variety of primers has been used in different assays, mostly adapted from the molecular diagnosis of malaria in humans. However, such an adaptation can impact the sensitivity of the PCR. We therefore compared the sensitivity of five primer sets with different molecular targets on blood stage, sporozoite and oocyst standards of Plasmodium falciparum (Pf) and P. vivax (Pv). Dilution series of standard DNA were used to discriminate between methods at low parasite concentrations and to generate standard curves suitable for the absolute quantification of Plasmodium sporozoites. Our results showed that the best primers to detect blood stages were not necessarily the best ones to detect sporozoites. The absolute detection threshold of our qrtPCR assay varied between 3.6 and 360 Pv sporozoites and between 6 and 600 Pf sporozoites per mosquito, according to the primer set used in the reaction mix. In this paper, we discuss the general performance of each primer set and highlight the need to use efficient detection methods for transmission studies. PMID:27441839

  15. Recruitment dynamics in adaptive social networks

    NASA Astrophysics Data System (ADS)

    Shkarayev, Maxim; Shaw, Leah; Schwartz, Ira

    2011-03-01

    We model recruitment in social networks in the presence of birth and death processes. Recruitment is characterized by nodes changing their status to that of the recruiting class as a result of contact with recruiting nodes. The recruiting nodes may adapt their connections in order to improve recruitment capabilities, thus changing the network structure. We develop a mean-field theory describing the system dynamics and use it to characterize the dependence of the growth threshold of the recruiting class on the adaptation parameter. Furthermore, we investigate the effect of adaptation on the recruitment dynamics, as well as on network topology. The theoretical predictions are confirmed by direct simulations of the full system.

  16. A voxel-based investigation for MRI-only radiotherapy of the brain using ultra short echo times

    NASA Astrophysics Data System (ADS)

    Edmund, Jens M.; Kjer, Hans M.; Van Leemput, Koen; Hansen, Rasmus H.; Andersen, Jon AL; Andreasen, Daniel

    2014-12-01

    Radiotherapy (RT) based on magnetic resonance imaging (MRI) as the only modality, so-called MRI-only RT, would remove the systematic registration error between MR and computed tomography (CT), and provide co-registered MRI for assessment of treatment response and adaptive RT. Electron densities, however, need to be assigned to the MRI images for dose calculation and for patient setup based on digitally reconstructed radiographs (DRRs). Here, we investigate the geometric and dosimetric performance of a number of popular voxel-based methods for generating a so-called pseudo CT (pCT). Five patients receiving cranial irradiation, each with a co-registered MRI and CT scan, were included. An ultra short echo time MRI sequence for bone visualization was used. Six methods, two per approach, were investigated for three popular types of voxel-based approach: (1) threshold-based segmentation, (2) Bayesian segmentation and (3) statistical regression. Approach 1 used bulk density assignment of MRI voxels into air, soft tissue and bone based on logical masks and the transverse relaxation time T2 of bone. Approach 2 used similar bulk density assignments with Bayesian statistics, including or excluding additional spatial information. Approach 3 used a statistical regression correlating MRI voxels with their corresponding CT voxels. Similar photon and proton treatment plans were generated for a target positioned between the nasal cavity and the brainstem for all patients. The agreement of each method's pCT with the CT was quantified and compared with the other methods, geometrically and dosimetrically, using a number of reported metrics as well as some novel ones. The best geometrical agreement with CT was obtained with the statistical regression methods, which performed significantly better than the threshold-based and Bayesian segmentation methods (excluding spatial information). All methods agreed significantly better with CT than a reference water MRI comparison. The mean dosimetric deviation from CT for photons and protons was about 2% and highest in the gradient dose region of the brainstem. Both the threshold-based method and the statistical regression methods showed the highest dosimetric agreement. Generation of pCTs using statistical regression thus seems the most promising candidate for MRI-only RT of the brain. Further, the total amount of different tissues needs to be taken into account for dosimetric considerations regardless of their correct geometrical position.
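
    Approach (1), bulk density assignment, reduces to a few masked assignments. A minimal Python sketch follows; the T2 cut-off and Hounsfield values are generic stand-ins, not the study's calibrated numbers.

```python
import numpy as np

def bulk_pseudo_ct(t2_map, body_mask, bone_t2_max=1.0):
    """Bulk density assignment in the spirit of approach (1): air outside
    the body mask, bone where the UTE-derived T2 is short, soft tissue
    elsewhere. The T2 cut-off and HU values are generic literature-style
    numbers, not the study's calibrated ones."""
    pct = np.full(t2_map.shape, -1000.0)              # air (HU)
    pct[body_mask] = 40.0                             # soft tissue (HU)
    pct[body_mask & (t2_map < bone_t2_max)] = 700.0   # bone (HU)
    return pct

rng = np.random.default_rng(2)
t2 = rng.random((8, 8, 8)) * 3.0                      # fake T2 map
mask = np.zeros((8, 8, 8), bool); mask[1:-1, 1:-1, 1:-1] = True
print(np.unique(bulk_pseudo_ct(t2, mask)))            # -> [-1000. 40. 700.]
```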

  17. Numerical simulation on the adaptation of forms in trabecular bone to mechanical disuse and basic multi-cellular unit activation threshold at menopause

    NASA Astrophysics Data System (ADS)

    Gong, He; Fan, Yubo; Zhang, Ming

    2008-04-01

    The objective of this paper is to identify the effects of mechanical disuse and the basic multi-cellular unit (BMU) activation threshold on the form of trabecular bone during menopause. A bone adaptation model with mechanical-biological factors at the BMU level was integrated with finite element analysis to simulate the changes of trabecular bone structure during menopause. Mechanical disuse and changes in the BMU activation threshold were applied to the model for the period from 4 years before to 4 years after menopause. The changes in bone volume fraction, trabecular thickness and fractal dimension of the trabecular structures were used to quantify the changes of trabecular bone in three different cases associated with mechanical disuse and the BMU activation threshold. The changes in simulated bone volume fraction were highly correlated and consistent with clinical data; the trabecular thickness decreased significantly during menopause and was highly linearly correlated with the bone volume fraction; and the trend in fractal dimension of the simulated trabecular structure corresponded with clinical observations. The numerical simulation in this paper may help to better understand the relationship between bone morphology and the mechanical and biological environment, and provides a quantitative computational model and methodology for simulating the bone structural morphological changes caused by the mechanical and/or biological environment.

  18. Experimental and environmental factors affect spurious detection of ecological thresholds

    USGS Publications Warehouse

    Daily, Jonathan P.; Hitt, Nathaniel P.; Smith, David; Snyder, Craig D.

    2012-01-01

    Threshold detection methods are increasingly popular for assessing nonlinear responses to environmental change, but their statistical performance remains poorly understood. We simulated linear change in stream benthic macroinvertebrate communities and evaluated the performance of commonly used threshold detection methods based on model fitting (piecewise quantile regression [PQR]), data partitioning (nonparametric change point analysis [NCPA]), and a hybrid approach (significant zero crossings [SiZer]). We demonstrated that false detection of ecological thresholds (type I errors) and inferences on threshold locations are influenced by sample size, rate of linear change, and frequency of observations across the environmental gradient (i.e., sample-environment distribution, SED). However, the relative importance of these factors varied among statistical methods and between inference types. False detection rates were influenced primarily by user-selected parameters for PQR (τ) and SiZer (bandwidth) and secondarily by sample size (for PQR) and SED (for SiZer). In contrast, the location of reported thresholds was influenced primarily by SED. Bootstrapped confidence intervals for NCPA threshold locations revealed strong correspondence to SED. We conclude that the choice of statistical methods for threshold detection should be matched to experimental and environmental constraints to minimize false detection rates and avoid spurious inferences regarding threshold location.
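
    The core pitfall — that some "best" breakpoint always emerges even from truly linear data — can be demonstrated in a few lines. The Python sketch below grid-searches a two-segment linear fit; it illustrates the general idea rather than PQR, NCPA, or SiZer specifically, and judging significance would still require a null distribution (e.g., by permutation).

```python
import numpy as np

def best_breakpoint(x, y):
    """Grid-search a two-segment linear fit; returns the break location and
    the SSE reduction relative to a single line. With purely linear data,
    some 'best' break always emerges -- testing whether the reduction
    exceeds chance is what controls false detections."""
    order = np.argsort(x); x, y = x[order], y[order]
    sse1 = np.sum((y - np.poly1d(np.polyfit(x, y, 1))(x)) ** 2)
    best = (None, np.inf)
    for i in range(5, len(x) - 5):                  # keep segments non-trivial
        left = np.poly1d(np.polyfit(x[:i], y[:i], 1))
        right = np.poly1d(np.polyfit(x[i:], y[i:], 1))
        sse2 = (np.sum((y[:i] - left(x[:i])) ** 2)
                + np.sum((y[i:] - right(x[i:])) ** 2))
        if sse2 < best[1]:
            best = (x[i], sse2)
    return best[0], 1 - best[1] / sse1

rng = np.random.default_rng(3)
x = rng.uniform(0, 10, 80)                          # the SED: uniform here
y = 2.0 * x + rng.normal(0, 1, x.size)              # truly linear response
print(best_breakpoint(x, y))                        # a spurious 'threshold'
```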

  19. Noninvasive Determination of Anaerobic Threshold Based on the Heart Rate Deflection Point in Water Cycling.

    PubMed

    Pinto, Stephanie S; Brasil, Roxana M; Alberton, Cristine L; Ferreira, Hector K; Bagatini, Natália C; Calatayud, Joaquin; Colado, Juan C

    2016-02-01

    This study compared heart rate (HR), oxygen uptake (VO2), percentage of maximal HR (%HRmax), percentage of maximal VO2 (%VO2max), and cadence (Cad) related to the anaerobic threshold (AT) during a water cycling maximal test between the heart rate deflection point (HRDP) and ventilatory (VT) methods. In addition, the correlations between both methods were assessed for all variables. The test was performed by 27 men on a cycle ergometer in an aquatic environment. The protocol started at a Cad of 100 b·min⁻¹ for 3 minutes with subsequent increments of 15 b·min⁻¹ every 2 minutes until exhaustion. A paired two-tailed Student's t-test was used to compare the variables between the HRDP and VT methods. The Pearson product-moment correlation test was used to correlate the same variables determined by the 2 methods. There was no difference in HR (166 ± 13 vs. 166 ± 13 b·min⁻¹), VO2 (38.56 ± 6.26 vs. 39.18 ± 6.13 ml·kg⁻¹·min⁻¹), %HRmax (89.24 ± 3.84 vs. 89.52 ± 4.29%), %VO2max (70.44 ± 7.99 vs. 71.64 ± 8.32%), and Cad (174 ± 14 vs. 171 ± 8 b·min⁻¹) related to the AT between the HRDP and VT methods. Moreover, significant relationships were found between the methods for all variables analyzed (r = 0.57-0.97). The estimation of the HRDP may be a noninvasive and easy method to determine the AT, which could be used to adapt individualized training intensities for practitioners during water cycling classes.

  20. Inference for High-dimensional Differential Correlation Matrices.

    PubMed

    Cai, T Tony; Zhang, Anru

    2016-01-01

    Motivated by differential co-expression analysis in genomics, we consider in this paper estimation and testing of high-dimensional differential correlation matrices. An adaptive thresholding procedure is introduced and theoretical guarantees are given. The minimax rate of convergence is established and the proposed estimator is shown to be adaptively rate-optimal over collections of paired correlation matrices with approximately sparse differences. Simulation results show that the procedure significantly outperforms two other natural methods that are based on separate estimation of the individual correlation matrices. The procedure is also illustrated through an analysis of a breast cancer dataset, which provides evidence at the gene co-expression level that several genes, a subset of which has been previously verified, are associated with breast cancer. Hypothesis testing on the differential correlation matrices is also considered. A test, which is particularly well suited for testing against sparse alternatives, is introduced. In addition, other related problems, including estimation of a single sparse correlation matrix, estimation of the differential covariance matrices, and estimation of the differential cross-correlation matrices, are also discussed.
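
    Entrywise adaptive thresholding of a differential correlation matrix can be sketched compactly. The Python version below soft-thresholds R1 − R2 at a single level scaling like sqrt(log p · (1/n1 + 1/n2)); the paper's estimator adapts the level entry by entry using estimated variances, so treat this as a simplified stand-in.

```python
import numpy as np

def adaptive_diff_threshold(X1, X2, tau=2.0):
    """Soft-threshold each entry of the differential correlation matrix
    R1 - R2 at a level scaling like sqrt(log p / n) -- a simplified
    stand-in for the entrywise variance-adaptive levels of the paper."""
    n1, p = X1.shape
    n2, _ = X2.shape
    d = np.corrcoef(X1, rowvar=False) - np.corrcoef(X2, rowvar=False)
    lam = tau * np.sqrt(np.log(p) * (1.0 / n1 + 1.0 / n2))
    return np.sign(d) * np.maximum(np.abs(d) - lam, 0.0)

rng = np.random.default_rng(4)
D = adaptive_diff_threshold(rng.normal(size=(200, 30)),
                            rng.normal(size=(200, 30)))
print(np.count_nonzero(D))   # ideally 0: no true differential correlation
```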

  1. Unsupervised texture image segmentation by improved neural network ART2

    NASA Technical Reports Server (NTRS)

    Wang, Zhiling; Labini, G. Sylos; Mugnuolo, R.; Desario, Marco

    1994-01-01

    We propose a texture image segmentation algorithm for a computer vision system on a space robot. An improved adaptive resonance theory network (ART2) for analog input patterns is adapted to classify the image based on a set of texture features extracted by a fast spatial gray level dependence method (SGLDM). The nonlinear thresholding functions in the input layer of the neural network are constructed in two parts: first, to reduce the effects of image noise on the features, a set of sigmoid functions is chosen depending on the type of feature; second, to enhance the contrast of the features, fuzzy mapping functions are adopted. The number of clusters in the output layer can grow through an autogrowing mechanism whenever a new pattern appears. Experimental results and original and segmented pictures are shown, including a comparison between this approach and the K-means algorithm. The system, written in C, runs on a SUN-4/330 SPARCstation with an IT-150 image board and a CCD camera.

  2. Finite State Machine with Adaptive Electromyogram (EMG) Feature Extraction to Drive Meal Assistance Robot

    NASA Astrophysics Data System (ADS)

    Zhang, Xiu; Wang, Xingyu; Wang, Bei; Sugi, Takenao; Nakamura, Masatoshi

    Surface electromyogram (EMG) signals from the elbow, wrist and hand have been widely used as inputs to multifunction prostheses for many years. However, for patients with high-level limb deficiencies, muscle activity in the upper limbs is not strong enough to serve as a control signal. In this paper, EMG from the lower limbs is acquired and applied to drive a meal assistance robot. An onset detection method with an adaptive threshold based on EMG power is proposed to recognize different muscle contractions. Predefined control commands are output by a finite state machine (FSM) and applied to operate the robot. The performance of EMG control is compared with joystick control by both objective and subjective indices. The results show that the FSM provides the user with an easy-to-perform control strategy that successfully operates robots with complicated control commands using limited muscle motions. The high accuracy and comfort of the EMG-controlled meal assistance robot make it feasible for users with upper-limb motor disabilities.
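
    A common way to realize such power-based onset detection with an adaptive threshold is baseline mean plus a multiple of the baseline standard deviation. The Python sketch below uses that rule; the window length, multiplier k, and baseline period are illustrative choices, not the paper's tuned values.

```python
import numpy as np

def detect_onsets(emg, fs, win=0.05, k=3.0, baseline_s=1.0):
    """Flag muscle activity where smoothed EMG power exceeds an adaptive
    threshold: baseline mean + k * baseline std. Window length, k, and
    the baseline period are illustrative, not the paper's values."""
    n = max(1, int(win * fs))
    power = np.convolve(emg ** 2, np.ones(n) / n, mode="same")  # moving avg
    base = power[: int(baseline_s * fs)]          # rest-period estimate
    thr = base.mean() + k * base.std()
    return power > thr

fs = 1000
t = np.arange(0, 3, 1 / fs)
rng = np.random.default_rng(5)
emg = 0.05 * rng.normal(size=t.size)              # resting noise
emg[int(1.5 * fs):int(2.0 * fs)] += 0.5 * rng.normal(size=int(0.5 * fs))
print(detect_onsets(emg, fs).mean())              # fraction flagged active
```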

  3. Evolving autonomous learning in cognitive networks.

    PubMed

    Sheneman, Leigh; Hintze, Arend

    2017-12-01

    There are two common approaches for optimizing the performance of a machine: genetic algorithms and machine learning. A genetic algorithm is applied over many generations, whereas machine learning works by applying feedback until the system meets a performance threshold. These methods have been previously combined, particularly in artificial neural networks using an external objective feedback mechanism. We adapt this approach to Markov Brains, which are evolvable networks of probabilistic and deterministic logic gates. Prior to this work, Markov Brains could only adapt from one generation to the next, so we introduce feedback gates which augment their ability to learn during their lifetime. We show that Markov Brains can incorporate these feedback gates in such a way that they do not rely on an external objective feedback signal, but instead generate internal feedback that is then used to learn. This results in a more biologically accurate model of the evolution of learning, which will enable us to study the interplay between evolution and learning and could be another step towards autonomously learning machines.

  4. TH-A-BRF-02: BEST IN PHYSICS (JOINT IMAGING-THERAPY) - Modeling Tumor Evolution for Adaptive Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Y; Lee, CG; Chan, TCY

    2014-06-15

    Purpose: To develop mathematical models of tumor geometry changes under radiotherapy that may support future adaptive paradigms. Methods: A total of 29 cervical patients were scanned using MRI, once for planning and weekly thereafter for treatment monitoring. Using the tumor volumes contoured by a radiologist, three mathematical models were investigated based on the assumption of a stochastic process of tumor evolution. The “weekly MRI” model predicts tumor geometry for the following week from the last two consecutive MRI scans, based on the voxel transition probability. The other two models use only the first pair of consecutive MRI scans, and the transition probabilities were estimated via tumor type classified from the entire data set. The classification is based on either measuring the tumor volume (the “weekly volume” model), or implementing an auxiliary “Markov chain” model. These models were compared to a constant volume approach that represents the current clinical practice, using various model parameters; e.g., the threshold probability β converts the probability map into a tumor shape (larger threshold implies smaller tumor). Model performance was measured using volume conformity index (VCI), i.e., the union of the actual target and modeled target volume squared divided by product of these two volumes. Results: The “weekly MRI” model outperforms the constant volume model by 26% on average, and by 103% for the worst 10% of cases in terms of VCI under a wide range of β. The “weekly volume” and “Markov chain” models outperform the constant volume model by 20% and 16% on average, respectively. They also perform better than the “weekly MRI” model when β is large. Conclusion: It has been demonstrated that mathematical models can be developed to predict tumor geometry changes for cervical cancer undergoing radiotherapy. The models can potentially support adaptive radiotherapy paradigm by reducing normal tissue dose. This research was supported in part by the Ontario Consortium for Adaptive Interventions in Radiation Oncology (OCAIRO) funded by the Ontario Research Fund (ORF) and the MITACS Accelerate Internship Program.
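
    Two pieces of the pipeline are simple enough to sketch in Python: binarizing a voxelwise probability map at the threshold β, and scoring agreement with the volume conformity index as worded in the abstract (|union|² divided by the product of the two volumes). The data and names below are illustrative.

```python
import numpy as np

def model_target(prob_map, beta):
    """Convert a voxelwise tumor-probability map into a binary target;
    a larger threshold beta yields a smaller predicted tumor."""
    return prob_map >= beta

def vci(actual, modeled):
    """Volume conformity index as worded in the abstract:
    |union|^2 / (|actual| * |modeled|); 1.0 indicates perfect agreement."""
    union = np.logical_or(actual, modeled).sum()
    return union ** 2 / (actual.sum() * modeled.sum())

rng = np.random.default_rng(6)
prob = rng.random((20, 20, 20))
actual = prob >= 0.5                          # stand-in for the contoured tumor
print(vci(actual, model_target(prob, 0.5)))   # -> 1.0 when shapes coincide
```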

  5. Peaks Over Threshold (POT): A methodology for automatic threshold estimation using goodness of fit p-value

    NASA Astrophysics Data System (ADS)

    Solari, Sebastián.; Egüen, Marta; Polo, María. José; Losada, Miguel A.

    2017-04-01

    Threshold estimation in the Peaks Over Threshold (POT) method and the impact of the estimation method on the calculation of high return period quantiles and their uncertainty (or confidence intervals) are issues that are still unresolved. In the past, methods based on goodness of fit tests and EDF-statistics have yielded satisfactory results, but their use has not yet been systematized. This paper proposes a methodology for automatic threshold estimation, based on the Anderson-Darling EDF-statistic and goodness of fit test. When combined with bootstrapping techniques, this methodology can be used to quantify both the uncertainty of threshold estimation and its impact on the uncertainty of high return period quantiles. This methodology was applied to several simulated series and to four precipitation/river flow data series. The results obtained confirmed its robustness. For the measured series, the estimated thresholds corresponded to those obtained by nonautomatic methods. Moreover, even though the uncertainty of the threshold estimation was high, this did not have a significant effect on the width of the confidence intervals of high return period quantiles.
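
    One common way to automate this is to scan candidate thresholds, fit a generalized Pareto distribution to the exceedances, and accept the lowest threshold whose Anderson-Darling goodness-of-fit p-value clears a chosen level. The Python sketch below follows that decision rule using SciPy's goodness_of_fit (available in SciPy ≥ 1.10); it illustrates the general idea rather than reproducing the paper's exact procedure.

```python
import numpy as np
from scipy import stats

def auto_pot_threshold(series, quantiles=np.arange(0.80, 0.99, 0.02),
                       alpha=0.05, n_mc=199):
    """Scan candidate thresholds (as sample quantiles), fit a generalized
    Pareto distribution to the exceedances, and keep the lowest threshold
    whose Anderson-Darling goodness-of-fit p-value clears alpha."""
    for q in quantiles:
        u = np.quantile(series, q)
        exc = series[series > u] - u
        if exc.size < 30:                 # too few peaks to test reliably
            break
        res = stats.goodness_of_fit(stats.genpareto, exc,
                                     known_params={"loc": 0.0},
                                     statistic="ad", n_mc_samples=n_mc,
                                     random_state=0)
        if res.pvalue > alpha:
            return u, res.pvalue
    return None, None

rng = np.random.default_rng(7)
data = rng.gumbel(10.0, 2.0, 5000)        # synthetic daily maxima
print(auto_pot_threshold(data))
```

    Combined with bootstrapping of the input series, the same loop yields the uncertainty of the selected threshold and of the resulting return-level quantiles, as the abstract describes.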

  6. A Region Tracking-Based Vehicle Detection Algorithm in Nighttime Traffic Scenes

    PubMed Central

    Wang, Jianqiang; Sun, Xiaoyan; Guo, Junbin

    2013-01-01

    The preceding-vehicle detection technique in nighttime traffic scenes is an important part of the advanced driver assistance system (ADAS). This paper proposes a region tracking-based vehicle detection algorithm using image processing techniques. First, the brightness of the taillights at night is used as the typical feature, and an existing global detection algorithm is used to detect and pair the taillights. When a vehicle is detected, a time series analysis model is introduced to predict vehicle positions and the possible region (PR) of the vehicle in the next frame. The vehicle is then detected only within the PR. This reduces the detection time and avoids false pairing between bright spots inside and outside the PR. Additionally, we present a threshold-updating method that makes the thresholds adaptive. Finally, experimental studies demonstrate the application and substantiate the superiority of the proposed algorithm. The results show that the proposed algorithm can simultaneously reduce both the false negative detection rate and the false positive detection rate.

  7. Verification of the tumor volume delineation method using a fixed threshold of peak standardized uptake value.

    PubMed

    Koyama, Kazuya; Mitsumoto, Takuya; Shiraishi, Takahiro; Tsuda, Keisuke; Nishiyama, Atsushi; Inoue, Kazumasa; Yoshikawa, Kyosan; Hatano, Kazuo; Kubota, Kazuo; Fukushi, Masahiro

    2017-09-01

    We aimed to determine the difference in tumor volume associated with the reconstruction model in positron-emission tomography (PET). To reduce the influence of the reconstruction model, we suggest a method to measure the tumor volume using the relative threshold method with a fixed threshold based on the peak standardized uptake value (SUVpeak). The efficacy of our method was verified using 18F-2-fluoro-2-deoxy-D-glucose PET/computed tomography images of 20 patients with lung cancer. The tumor volume was determined using the relative threshold method with a fixed threshold based on the SUVpeak. The PET data were reconstructed using the ordered-subset expectation maximization (OSEM) model, the OSEM + time-of-flight (TOF) model, and the OSEM + TOF + point-spread function (PSF) model. The volume differences associated with the reconstruction algorithm (%VD) were compared. For comparison, the tumor volume was also measured using the relative threshold method based on the maximum SUV (SUVmax). For the OSEM and TOF models, the mean %VD values were -0.06 ± 8.07 and -2.04 ± 4.23% for the fixed 40% threshold according to the SUVmax and the SUVpeak, respectively. The effect of our method in this case seemed to be minor. For the OSEM and PSF models, the mean %VD values were -20.41 ± 14.47 and -13.87 ± 6.59% for the fixed 40% threshold according to the SUVmax and SUVpeak, respectively. Our new method enabled the measurement of tumor volume with a fixed threshold and reduced the influence of changes in tumor volume associated with the reconstruction model.
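
    The delineation rule itself is compact: segment all voxels above 40% of SUVpeak. In the Python sketch below, SUVpeak is approximated as the maximum of a local-mean-filtered image (a cubic kernel standing in for the usual ~1 cm³ sphere) and isotropic voxels are assumed; both are simplifications.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def tumor_volume_suvpeak(suv, voxel_mm, rel_thr=0.40, kernel_mm=12.0):
    """Delineate the tumor as voxels above rel_thr * SUVpeak, where SUVpeak
    is approximated as the maximum of a local-mean-filtered SUV image
    (a cubic kernel standing in for the usual ~1 cm^3 sphere)."""
    size = max(1, int(round(kernel_mm / voxel_mm)))
    suv_peak = uniform_filter(suv, size=size).max()
    mask = suv >= rel_thr * suv_peak
    return mask.sum() * voxel_mm ** 3 / 1000.0, suv_peak   # volume in mL

rng = np.random.default_rng(8)
img = rng.random((40, 40, 40)) * 0.5
img[18:24, 18:24, 18:24] += 8.0                 # hot synthetic lesion
print(tumor_volume_suvpeak(img, voxel_mm=4.0))
```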

  8. Comparison of software and human observers in reading images of the CDMAM test object to assess digital mammography systems

    NASA Astrophysics Data System (ADS)

    Young, Kenneth C.; Cook, James J. H.; Oduko, Jennifer M.; Bosmans, Hilde

    2006-03-01

    European Guidelines for quality control in digital mammography specify minimum and achievable standards of image quality in terms of threshold contrast, based on readings of images of the CDMAM test object by human observers. However, this is time-consuming and has large inter-observer error. To overcome these problems a software program (CDCOM) is available to automatically read CDMAM images, but the optimal method of interpreting the output is not defined. This study evaluates methods of determining threshold contrast from the program, and compares these to human readings for a variety of mammography systems. The methods considered are (A) simple thresholding, (B) psychometric curve fitting, (C) smoothing and interpolation, and (D) smoothing and psychometric curve fitting. Each method leads to similar threshold contrasts but with different reproducibility. Method (A) had relatively poor reproducibility, with a standard error in threshold contrast of 18.1 ± 0.7%. This was reduced to 8.4% by using a contrast-detail curve fitting procedure. Method (D) had the best reproducibility, with an error of 6.7%, reducing to 5.1% with curve fitting. A panel of 3 human observers had an error of 4.4%, reduced to 2.9% by curve fitting. All automatic methods led to threshold contrasts that were lower than for humans. The ratio of human to program threshold contrasts varied with detail diameter and was 1.50 ± 0.04 (SEM) at 0.1 mm and 1.82 ± 0.06 at 0.25 mm for method (D). There were good correlations between the threshold contrasts determined by humans and the automated methods.
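
    Psychometric curve fitting, as in methods (B) and (D), can be illustrated with a standard 4-AFC model (CDMAM readings have a 25% guessing floor). The functional form, data, and starting values in the Python sketch below are illustrative assumptions, not the CDCOM pipeline.

```python
import numpy as np
from scipy.optimize import curve_fit

def psychometric(c, c_t, slope):
    """4-AFC psychometric curve: 25% guessing floor rising to 100%;
    c_t is the contrast at the 62.5% midpoint."""
    return 0.25 + 0.75 / (1.0 + np.exp(-(np.log(c) - np.log(c_t)) / slope))

# Hypothetical fraction-correct data for one detail diameter
contrast = np.array([0.03, 0.06, 0.10, 0.16, 0.23, 0.40])
frac_ok = np.array([0.28, 0.41, 0.60, 0.78, 0.93, 0.99])
(c_t, slope), _ = curve_fit(psychometric, contrast, frac_ok, p0=[0.1, 0.3])
print(f"threshold contrast ~ {c_t:.3f}")
```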

  9. Development of color vision discrimination during childhood: differences between Blue-Yellow, Red-Green, and achromatic thresholds.

    PubMed

    Ling, Barbara Y; Dain, Stephen J

    2018-04-01

    Nonvisual demands of tests affect vision test results in children. 150 children (79 females and 71 males, 5.3-12.7 years of age) were examined. Isoluminant Blue, Yellow, Red, Green, and Black and White thresholds were established with a four-alternative forced-choice, pseudo-10-bit system with adaptive staircase and gaming elements. With thresholds modeled as Threshold = b0 + b1 × age⁻¹, the slopes b1 for RG = 6.26 ± 1.90 (95% confidence limits), Achr = 3.96 ± 1.07 and BY = 12.48 ± 2.76 were significantly different. The noncolor demands of the test are the same for RG, BY, and Achr, so the later development of BY discrimination is not an artifact of the test.

  10. THE NATURE OF FOVEAL DARK ADAPTATION

    PubMed Central

    Hecht, Selig

    1921-01-01

    1. After a discussion of the sources of error involved in the study of dark adaptation, an apparatus and a procedure are described which avoid these errors. The method includes a control of the initial light adaptation, a record of the exact beginning of dark adaptation, and an accurate means of measuring the threshold of the fovea after different intervals in the dark. 2. The results show that dark adaptation of the eye as measured by foveal vision proceeds at a very precipitous rate during the first few seconds, that most of the adaptation takes place during the first 30 seconds, and that the process practically ceases after 10 minutes. These findings explain much of the irregularity of the older data. 3. The changes which correspond to those in the fovea alone are secured by correcting the above results in terms of the movements of the pupil during dark adaptation. 4. On the assumption that the photochemical effect of the light is a linear function of the intensity, it is shown that the dark adaptation of the fovea itself follows the course of a bimolecular reaction. This is interpreted to mean that there are two photolytic products in the fovea; that they are disappearing because they are recombining to form anew the photosensitive substance of the fovea; and that the concentration of these products of photolysis in the sense cell must be increased by a definite fraction in order to produce a visual effect. 5. It is then suggested that the basis of the initial event in foveal light perception is some mechanism that involves a reversible photochemical reaction of which the "dark" reaction is bimolecular. Dark adaptation follows the "dark" reaction; sensory equilibrium is represented by the stationary state; and light adaptation by the shifting of the stationary state to a fresh point of equilibrium toward the "dark" side of the reaction. PMID:19871919

  11. Long-term effects of retinopathy of prematurity (ROP) on rod and rod-driven function.

    PubMed

    Harris, Maureen E; Moskowitz, Anne; Fulton, Anne B; Hansen, Ronald M

    2011-02-01

    The purpose of this study was to determine whether recovery of scotopic sensitivity occurs in human ROP, as it does in rat models of ROP. Following a cross-sectional design, scotopic electroretinographic (ERG) responses to full-field stimuli were recorded from 85 subjects with a history of preterm birth. In 39 of these subjects, dark adapted visual threshold was also measured. Subjects were tested post-term as infants (median age 2.5 months) or at older ages (median age 10.5 years) and stratified by severity of ROP: severe, mild, or none. Rod photoreceptor sensitivity, S(ROD), was derived from the a-wave, and post-receptor sensitivity, log σ, was calculated from the b-wave stimulus-response function. Dark adapted visual threshold was measured using a forced-choice preferential procedure. For S(ROD), the deficit from normal for age varied significantly with ROP severity but not with age group. For log σ, in mild ROP the deficit was smaller in older subjects than in infants, while in severe ROP the deficit was quite large in both age groups. In subjects who never had ROP, S(ROD) and log σ in both age groups were similar to those in term-born controls. Deficits in dark adapted threshold and log σ were correlated in mild but not in severe ROP. The data are evidence that sensitivity of the post-receptor retina improves in those with a history of mild ROP. We speculate that beneficial reorganization of the post-receptor neural circuitry occurs in mild but not in severe ROP.

  12. Estimation of urban surface water at subpixel level from neighborhood pixels using multispectral remote sensing image (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Xie, Huan; Luo, Xin; Xu, Xiong; Wang, Chen; Pan, Haiyan; Tong, Xiaohua; Liu, Shijie

    2016-10-01

    Water bodies are a fundamental element of urban ecosystems, and water mapping is critical for urban and landscape planning and management. While remote sensing has increasingly been used for water mapping in rural areas, applying this spatially explicit approach in urban areas remains challenging, because urban water bodies are mainly small and spectral confusion is widespread between water and the complex features of the urban environment. The water index (WI) is the most common method for water extraction at the pixel level, and spectral mixture analysis (SMA) has recently been widely employed for analyzing urban environments at the subpixel level. In this paper, we introduce an automatic subpixel water mapping method for urban areas using multispectral remote sensing data. The objectives of this research are: (1) developing an automatic technique for extracting land-water mixed pixels by water index; (2) deriving the most representative endmembers of water and land by utilizing neighboring water pixels and an adaptive iterative search for the optimal neighboring land pixel, respectively; (3) applying a linear unmixing model for subpixel water fraction estimation. Specifically, to automatically extract land-water pixels, locally weighted scatter plot smoothing is first applied to the original histogram curve of the WI image. The Otsu threshold is then used as the starting point for selecting land-water pixels, with the land and water thresholds determined from the slopes of the histogram curve. Based on this pixel-level processing, the image is divided into three parts: water pixels, land pixels, and mixed land-water pixels. Spectral mixture analysis (SMA) is then applied to the mixed land-water pixels for water fraction estimation at the subpixel level. Under the assumption that the endmember signature of a target pixel should be more similar to adjacent pixels due to spatial dependence, the water and land endmembers are determined from neighboring pure land or pure water pixels within a given distance. To obtain the most representative endmembers in SMA, we designed an adaptive iterative endmember selection method based on the spatial similarity of adjacent pixels. According to the spectral similarity in a spatially adjacent region, the land endmember spectrum is determined by selecting the most representative land pixel in a local window, and the water endmember spectrum is determined by averaging the water pixels in the local window. The proposed hierarchical processing method based on WI and SMA (WISMA) was applied to urban areas for reliability evaluation using Landsat-8 Operational Land Imager (OLI) images. For comparison, four methods at the pixel and subpixel levels were chosen. The results indicate that the water maps generated by the proposed method correspond closely with the reference water maps at subpixel precision, and a comprehensive analysis of different accuracy evaluation indexes (RMSE and SE) showed that WISMA achieved the best performance in water mapping.

  13. How sensitivity to ongoing interaural temporal disparities is affected by manipulations of temporal features of the envelopes of high-frequency stimuli

    PubMed Central

    Bernstein, Leslie R.; Trahiotis, Constantine

    2009-01-01

    This study addressed how manipulating certain aspects of the envelopes of high-frequency stimuli affects sensitivity to envelope-based interaural temporal disparities (ITDs). Listeners' threshold ITDs were measured using an adaptive two-alternative paradigm employing “raised-sine” stimuli [John, M. S., et al. (2002). Ear Hear. 23, 106–117] which permit independent variation of their modulation frequency, modulation depth, and modulation exponent. Threshold ITDs were measured while manipulating the modulation exponent for stimuli having modulation frequencies between 32 and 256 Hz. The results indicated that graded increases in the exponent led to graded decreases in envelope-based threshold ITDs. Threshold ITDs were also measured while parametrically varying modulation exponent and modulation depth. Overall, threshold ITDs decreased with increases in the modulation depth. Unexpectedly, increases in the exponent of the raised-sine led to especially large decreases in threshold ITD when the modulation depth was low. An interaural correlation-based model was generally able to capture changes in threshold ITD stemming from changes in the exponent, depth of modulation, and frequency of modulation of the raised-sine stimuli. The model (and several variations of it), however, could not account for the unexpected interaction between the value of the raised-sine exponent and its modulation depth. PMID:19425666
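
    A raised-sine stimulus is straightforward to synthesize. Following the construction attributed to John et al. (2002), the Python sketch below modulates a sine carrier by ((1 + sin 2πfm·t)/2)^n; the depth-scaling convention used here is one reasonable choice, and all parameters are illustrative.

```python
import numpy as np

def raised_sine(fc=4000.0, fm=128.0, depth=1.0, n=4.0, fs=48000, dur=0.3):
    """'Raised-sine' stimulus: a sine carrier multiplied by
    ((1 + sin(2*pi*fm*t)) / 2) ** n, with the envelope rescaled so its
    modulation depth is `depth`. Raising n sharpens the envelope peaks
    without changing the modulation frequency fm."""
    t = np.arange(int(fs * dur)) / fs
    env = ((1.0 + np.sin(2 * np.pi * fm * t)) / 2.0) ** n
    env = (1.0 - depth) + depth * env          # set modulation depth
    return env * np.sin(2 * np.pi * fc * t)

left = raised_sine()
right = np.roll(left, int(48000 * 250e-6))     # impose a 250-us delay
```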

  14. Low-threshold support services for people with dementia within the scope of respite care in Germany - A qualitative study on different stakeholders' perspective.

    PubMed

    Hochgraeber, Iris; von Kutzleben, Milena; Bartholomeyczik, Sabine; Holle, Bernhard

    2017-07-01

    Low-threshold support services are provided within the basket of services of the German long-term care insurance as part of respite care, to support family carers and people with dementia. This study investigates various stakeholders' (providers, coordinators, volunteers, family carers and people with dementia) perspectives on low-threshold support services regarding their organisation and conceptualisation, as well as how stakeholders and users value these services, using a qualitative approach. Twelve guided interviews and group discussions were conducted with 31 participants. Organisation and conceptualisation are characterised by the lowness of the service thresholds: the services are perceived as quick and simple forms of support with no requirements placed on users. Multiple barriers, such as the challenging behaviour of people with dementia, their initial refusal, and the low esteem in which low-threshold support services are sometimes held, can hinder the utilisation of these services. Low-threshold support services within the scope of the long-term care insurance law fall into two types: low-cost (non-professional) services and high-cost services with comprehensive training for 'employed' volunteers (professional). Both types are constantly developing within the landscape of the German long-term care system, and low-threshold support services appear to be adapting to diverse needs. It is therefore important to avoid replacing non-professional services with professional ones.

  15. Threshold selection for classification of MR brain images by clustering method

    NASA Astrophysics Data System (ADS)

    Moldovanu, Simona; Obreja, Cristian; Moraru, Luminita

    2015-12-01

    Given a grey-intensity image, our method detects the optimal threshold for a suitable binarization of MR brain images. In MR brain image processing, the grey levels of pixels belonging to the object are not substantially different from those belonging to the background. Threshold optimization is an effective tool for separating objects from the background and, further, for classification applications. This paper gives a detailed investigation of the selection of thresholds. Our method does not rely on the well-known binarization methods; instead, we perform a simple threshold optimization which, in turn, allows the best classification of the analyzed images into healthy and multiple sclerosis groups. The dissimilarity (or distance between classes) was established using a clustering method based on dendrograms. We tested our method on two classes of images: 20 T2-weighted and 20 proton density (PD)-weighted scans from two healthy subjects and two patients with multiple sclerosis. For each image and each threshold, the number of white pixels (the area of white objects in the binary image) was determined; these pixel counts serve as the objects in the clustering operation. The optimum threshold values obtained are T = 80 for PD images and T = 30 for T2w images. Each threshold clearly separates the clusters belonging to the studied groups, healthy subjects and patients with multiple sclerosis.
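
    The pipeline — count white pixels per threshold, then cluster the counts hierarchically — fits in a few lines of Python. The synthetic images below are stand-ins; the linkage method and threshold grid are illustrative choices.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def white_area_features(images, thresholds=range(0, 256, 10)):
    """For each image, count white pixels after binarizing at each candidate
    threshold; the counts become the feature vector used for clustering."""
    return np.array([[(img >= t).sum() for t in thresholds] for img in images])

rng = np.random.default_rng(9)
group_a = [rng.integers(0, 120, (64, 64)) for _ in range(4)]   # darker stand-ins
group_b = [rng.integers(60, 256, (64, 64)) for _ in range(4)]  # brighter stand-ins
feats = white_area_features(group_a + group_b)
labels = fcluster(linkage(feats, method="ward"), t=2, criterion="maxclust")
print(labels)            # ideally separates the two synthetic groups
```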

  16. Locomotive track detection for underground

    NASA Astrophysics Data System (ADS)

    Ma, Zhonglei; Lang, Wenhui; Li, Xiaoming; Wei, Xing

    2017-08-01

    In order to improve on PC-based track detection systems, this paper proposes a method to detect linear tracks for underground locomotives based on DSP + FPGA. Firstly, the analog signal output by the camera is sampled by an A/D chip. The collected digital signal is then preprocessed by the FPGA. Secondly, the output signal of the FPGA is transmitted to the DSP via the EMIF port. Subsequently, adaptive threshold edge detection and a Hough transform constrained by polar angle and radius are implemented on the DSP. Lastly, the detected track information is transmitted to the host computer through an Ethernet interface. The experimental results show that the system not only meets the requirements of real-time detection, but also has good robustness.
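
    A common recipe for this stage is median-based adaptive Canny thresholds followed by an angle-constrained Hough transform. The Python/OpenCV sketch below implements that recipe on a synthetic frame; it is a generic stand-in, not the paper's DSP code, and the angle tolerance and accumulator threshold are illustrative.

```python
import numpy as np
import cv2

def detect_track_lines(gray, sigma=0.33, theta_tol=np.deg2rad(25)):
    """Median-based adaptive Canny thresholds followed by a Hough
    transform constrained to near-vertical (rail-like) line angles."""
    med = np.median(gray)
    edges = cv2.Canny(gray, int(max(0, (1 - sigma) * med)),
                      int(min(255, (1 + sigma) * med)))
    lines = cv2.HoughLines(edges, 1, np.pi / 180, 80)
    if lines is None:
        return []
    # theta is the normal angle: near-vertical lines have theta near 0 or pi
    return [l for l in lines[:, 0] if min(l[1], np.pi - l[1]) < theta_tol]

img = np.zeros((240, 320), np.uint8)
cv2.line(img, (100, 239), (140, 0), 255, 3)      # synthetic left rail
cv2.line(img, (220, 239), (180, 0), 255, 3)      # synthetic right rail
print(len(detect_track_lines(img)))
```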

  17. PSYCHOACOUSTICS: a comprehensive MATLAB toolbox for auditory testing.

    PubMed

    Soranzo, Alessandro; Grassi, Massimo

    2014-01-01

    PSYCHOACOUSTICS is a new MATLAB toolbox which implements three classic adaptive procedures for auditory threshold estimation. The first includes those of the Staircase family (method of limits, simple up-down and transformed up-down); the second is the Parameter Estimation by Sequential Testing (PEST); and the third is the Maximum Likelihood Procedure (MLP). The toolbox comes with more than twenty built-in experiments each provided with the recommended (default) parameters. However, if desired, these parameters can be modified through an intuitive and user friendly graphical interface and stored for future use (no programming skills are required). Finally, PSYCHOACOUSTICS is very flexible as it comes with several signal generators and can be easily extended for any experiment.
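
    Of the three families, the transformed up-down staircase is the easiest to sketch. The Python fragment below implements a 2-down-1-up rule (converging near 70.7% correct) against a simulated listener; the toolbox itself is MATLAB, and parameters such as step size and reversal counts are illustrative defaults, not the toolbox's.

```python
import random

def two_down_one_up(present, start=40.0, step=4.0, reversals_needed=8):
    """Transformed up-down staircase converging on ~70.7% correct: lower
    the level after two consecutive hits, raise it after any miss, and
    average the levels at the last reversals. `present(level)` runs one
    trial and returns True on a correct response."""
    level, hits, direction = start, 0, 0
    reversal_levels = []
    while len(reversal_levels) < reversals_needed:
        if present(level):
            hits += 1
            move = -1 if hits == 2 else 0
        else:
            hits, move = 0, +1
        if move:
            if direction and move != direction:
                reversal_levels.append(level)   # direction change = reversal
            direction = move
            level += move * step
            if move == -1:
                hits = 0
    return sum(reversal_levels[-6:]) / 6

# Simulated listener: detects reliably above level 20, guesses below
sim = lambda lvl: random.random() < (0.95 if lvl > 20 else 0.5)
random.seed(0)
print(two_down_one_up(sim))
```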

  18. PSYCHOACOUSTICS: a comprehensive MATLAB toolbox for auditory testing

    PubMed Central

    Soranzo, Alessandro; Grassi, Massimo

    2014-01-01

    PSYCHOACOUSTICS is a new MATLAB toolbox which implements three classic adaptive procedures for auditory threshold estimation. The first includes those of the Staircase family (method of limits, simple up-down and transformed up-down); the second is the Parameter Estimation by Sequential Testing (PEST); and the third is the Maximum Likelihood Procedure (MLP). The toolbox comes with more than twenty built-in experiments each provided with the recommended (default) parameters. However, if desired, these parameters can be modified through an intuitive and user friendly graphical interface and stored for future use (no programming skills are required). Finally, PSYCHOACOUSTICS is very flexible as it comes with several signal generators and can be easily extended for any experiment. PMID:25101013

  19. Evaluation of thresholding techniques for segmenting scaffold images in tissue engineering

    NASA Astrophysics Data System (ADS)

    Rajagopalan, Srinivasan; Yaszemski, Michael J.; Robb, Richard A.

    2004-05-01

    Tissue engineering attempts to address the ever widening gap between the demand and supply of organ and tissue transplants using natural and biomimetic scaffolds. The regeneration of specific tissues aided by synthetic materials is dependent on the structural and morphometric properties of the scaffold. These properties can be derived non-destructively using quantitative analysis of high resolution microCT scans of scaffolds. Thresholding of the scanned images into polymeric and porous phase is central to the outcome of the subsequent structural and morphometric analysis. Visual thresholding of scaffolds produced using stochastic processes is inaccurate. Depending on the algorithmic assumptions made, automatic thresholding might also be inaccurate. Hence there is a need to analyze the performance of different techniques and propose alternate ones, if needed. This paper provides a quantitative comparison of different thresholding techniques for segmenting scaffold images. The thresholding algorithms examined include those that exploit spatial information, locally adaptive characteristics, histogram entropy information, histogram shape information, and clustering of gray-level information. The performance of different techniques was evaluated using established criteria, including misclassification error, edge mismatch, relative foreground error, and region non-uniformity. Algorithms that exploit local image characteristics seem to perform much better than those using global information.
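
    Two of the evaluation criteria are essentially one-liners. The Python sketch below computes misclassification error (fraction of pixels assigned to the wrong phase) and relative foreground error on synthetic masks; the other criteria (edge mismatch, region non-uniformity) need edge maps and intensity statistics and are omitted for brevity.

```python
import numpy as np

def misclassification_error(truth, segmented):
    """Fraction of pixels assigned to the wrong phase;
    0 means perfect agreement with the ground truth."""
    truth, segmented = np.asarray(truth, bool), np.asarray(segmented, bool)
    return np.mean(truth != segmented)

def relative_foreground_error(truth, segmented):
    """|foreground area difference| relative to the true foreground area."""
    return abs(int(segmented.sum()) - int(truth.sum())) / truth.sum()

rng = np.random.default_rng(10)
gt = rng.random((128, 128)) > 0.6
noisy = gt ^ (rng.random((128, 128)) > 0.98)      # flip ~2% of pixels
print(misclassification_error(gt, noisy),
      relative_foreground_error(gt, noisy))
```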

  20. A robust threshold-based cloud mask for the HRV channel of MSG SEVIRI

    NASA Astrophysics Data System (ADS)

    Bley, S.; Deneke, H.

    2013-03-01

    A robust threshold-based cloud mask for the high-resolution visible (HRV) channel (1 × 1 km2) of the METEOSAT SEVIRI instrument is introduced and evaluated. It is based on the operational EUMETSAT cloud mask for the low-resolution channels of SEVIRI (3 × 3 km2), which is used for the selection of suitable thresholds to ensure consistency with its results. The aim of using the HRV channel is to resolve small-scale cloud structures which cannot be detected by the low-resolution channels. We find that it is advantageous to apply thresholds relative to clear-sky reflectance composites and to adapt the thresholds regionally. Furthermore, the accuracy of the different spectral channels for thresholding and the suitability of the HRV channel for cloud detection are investigated. The case studies cover different situations to demonstrate the behaviour under various surface and cloud conditions. Overall, between 4 and 24% of cloudy low-resolution SEVIRI pixels in our test dataset are found to contain broken clouds, depending on the considered region. Most of these broken pixels are classified as cloudy by EUMETSAT's cloud mask, which will likely result in an overestimate if the mask is used as an estimate of cloud fraction.
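
    Thresholding relative to a clear-sky reflectance composite can be sketched directly in Python. Below, the composite is a low per-pixel percentile of recent HRV reflectances, and clouds are pixels exceeding it by a fixed offset; the percentile and offset are illustrative, whereas the paper tunes thresholds regionally against the EUMETSAT low-resolution mask.

```python
import numpy as np

def hrv_cloud_mask(reflectance_stack, offset=0.08):
    """Cloud mask relative to a clear-sky composite: build the composite
    as a low per-pixel percentile of recent HRV reflectances, then flag
    pixels in the latest scene that exceed composite + offset."""
    clear_sky = np.percentile(reflectance_stack, 10, axis=0)  # per pixel
    latest = reflectance_stack[-1]
    return latest > clear_sky + offset

rng = np.random.default_rng(11)
stack = 0.15 + 0.02 * rng.random((20, 50, 50))     # mostly clear scenes
stack[-1, 10:20, 10:20] += 0.4                     # bright cloud in last slot
print(hrv_cloud_mask(stack).sum())                 # ~100 cloudy pixels
```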
