Sample records for local thresholding method

  1. Wavelet-based adaptive thresholding method for image segmentation

    NASA Astrophysics Data System (ADS)

    Chen, Zikuan; Tao, Yang; Chen, Xin; Griffis, Carl

    2001-05-01

    A nonuniform background distribution may cause a global thresholding method to fail to segment objects. One solution is using a local thresholding method that adapts to local surroundings. In this paper, we propose a novel local thresholding method for image segmentation, using multiscale threshold functions obtained by wavelet synthesis with weighted detail coefficients. In particular, the coarse-to-fine synthesis with attenuated detail coefficients produces a threshold function corresponding to a high-frequency-reduced signal. This wavelet-based local thresholding method adapts to both local size and local surroundings, and its implementation can take advantage of the fast wavelet algorithm. We applied this technique to physical contaminant detection for poultry meat inspection using x-ray imaging. Experiments showed that inclusion objects in deboned poultry could be extracted at multiple resolutions despite their irregular sizes and uneven backgrounds.
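
    A rough illustration of the idea (not the authors' exact algorithm): synthesize a smooth threshold surface by attenuating the wavelet detail coefficients, then binarize by comparing the image against that surface. The wavelet, decomposition depth and attenuation factor below are illustrative assumptions; PyWavelets supplies the fast wavelet transform.

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_threshold_surface(image, wavelet="db2", levels=3, alpha=0.1):
    """Reconstruct a high-frequency-reduced copy of `image` to use as a
    spatially varying threshold (alpha=0 keeps only the coarse trend)."""
    coeffs = pywt.wavedec2(image, wavelet, level=levels)
    damped = [coeffs[0]] + [tuple(alpha * b for b in bands) for bands in coeffs[1:]]
    rec = pywt.waverec2(damped, wavelet)
    return rec[: image.shape[0], : image.shape[1]]  # trim wavelet padding

def segment(image, offset=0.0):
    return image > wavelet_threshold_surface(image) + offset  # foreground mask

rng = np.random.default_rng(0)
img = rng.random((128, 128)) * 0.2 + np.linspace(0, 1, 128)  # uneven background
img[60:68, 60:68] += 0.5                                     # bright inclusion
mask = segment(img, offset=0.1)
```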

  2. Detecting wood surface defects with fusion algorithm of visual saliency and local threshold segmentation

    NASA Astrophysics Data System (ADS)

    Wang, Xuejuan; Wu, Shuhang; Liu, Yunpeng

    2018-04-01

    This paper presents a new method for wood defect detection that addresses the over-segmentation problem of local threshold segmentation methods by effectively combining visual saliency with local threshold segmentation. Firstly, defect areas are coarsely located by using the spectral residual method to compute their global visual saliency. Then, threshold segmentation with the maximum inter-class variance (Otsu) method is applied around the coarsely located areas to position and segment the wood surface defects precisely. Lastly, mathematical morphology is used to process the binary images after segmentation, which reduces noise and small false objects. Experiments on test images of insect holes, dead knots and sound knots show that the proposed method obtains ideal segmentation results and is superior to existing segmentation methods based on edge detection, Otsu thresholding and threshold segmentation.
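
    As a sketch of the first, coarse-localization step, the spectral residual saliency of Hou and Zhang can be computed with a few FFTs. The box-filter width and the final candidate rule below are illustrative choices, and the Otsu refinement and morphological cleaning stages are omitted.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def spectral_residual_saliency(image, blur=3):
    """Spectral residual: subtract a smoothed log-amplitude spectrum from
    the original one, then invert the FFT to get a saliency map."""
    f = np.fft.fft2(image)
    log_amp = np.log(np.abs(f) + 1e-8)
    phase = np.angle(f)
    residual = log_amp - uniform_filter(log_amp, size=blur)
    saliency = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    return saliency / saliency.max()

def coarse_defect_mask(image, k=3.0):
    s = spectral_residual_saliency(image)
    return s > s.mean() + k * s.std()  # coarse defect candidates
```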

  3. Planning Target Margin Calculations for Prostate Radiotherapy Based on Intrafraction and Interfraction Motion Using Four Localization Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beltran, Chris; Herman, Michael G.; Davis, Brian J.

    2008-01-01

    Purpose: To determine planning target volume (PTV) margins for prostate radiotherapy based on the internal margin (IM) (intrafractional motion) and the setup margin (SM) (interfractional motion) for four daily localization methods: skin marks (tattoo), pelvic bony anatomy (bone), intraprostatic gold seeds using a 5-mm action threshold, and using no threshold. Methods and Materials: Forty prostate cancer patients were treated with external radiotherapy according to an online localization protocol using four intraprostatic gold seeds and electronic portal images (EPIs). Daily localization and treatment EPIs were obtained. These data allowed inter- and intrafractional analysis of prostate motion. The SM for the four daily localization methods and the IM were determined. Results: A total of 1532 fractions were analyzed. Tattoo localization requires a SM of 6.8 mm left-right (LR), 7.2 mm inferior-superior (IS), and 9.8 mm anterior-posterior (AP). Bone localization requires 3.1, 8.9, and 10.7 mm, respectively. The 5-mm threshold localization requires 4.0, 3.9, and 3.7 mm. No threshold localization requires 3.4, 3.2, and 3.2 mm. The intrafractional prostate motion requires an IM of 2.4 mm LR, 3.4 mm IS and AP. The PTV margin using the 5-mm threshold, including interobserver uncertainty, IM, and SM, is 4.8 mm LR, 5.4 mm IS, and 5.2 mm AP. Conclusions: Localization based on EPI with implanted gold seeds allows a large PTV margin reduction when compared with tattoo localization. Except for the LR direction, bony anatomy localization does not decrease the margins compared with tattoo localization. Intrafractional prostate motion is a limiting factor on margin reduction.

  4. Subsurface characterization with localized ensemble Kalman filter employing adaptive thresholding

    NASA Astrophysics Data System (ADS)

    Delijani, Ebrahim Biniaz; Pishvaie, Mahmoud Reza; Boozarjomehry, Ramin Bozorgmehry

    2014-07-01

    The ensemble Kalman filter (EnKF), a Monte Carlo sequential data assimilation method, has emerged as a promising tool for subsurface media characterization during the past decade. Due to the high computational cost of large ensembles, EnKF is limited to small ensemble sets in practice. This results in spurious correlations in the covariance structure, leading to incorrect updates or divergence of the updated realizations. In this paper, a universal/adaptive thresholding method is presented to remove and/or mitigate the spurious correlation problem in the forecast covariance matrix. This method is then extended to regularize the Kalman gain directly. Four different thresholding functions have been considered for thresholding the forecast covariance and gain matrices: the hard, soft, lasso and Smoothly Clipped Absolute Deviation (SCAD) functions. Three benchmarks are used to evaluate the performance of these methods: a small 1D linear model and two 2D water flooding cases (in petroleum reservoirs) with different levels of heterogeneity/nonlinearity. Beside the adaptive thresholding, standard distance-dependent localization and bootstrap Kalman gain are also implemented for comparison purposes. We assessed each setup with different ensemble sets to investigate the sensitivity of each method to ensemble size. The results indicate that thresholding the forecast covariance yields more reliable performance than thresholding the Kalman gain. Among the thresholding functions, SCAD is the most robust for both covariance and gain estimation. Our analyses emphasize that not all assimilation cycles require thresholding, and that it should be applied judiciously during the early assimilation cycles. The proposed adaptive thresholding scheme outperforms the other methods for subsurface characterization of the underlying benchmarks.
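
    The thresholding rules named above have simple closed forms; a sketch applying them entrywise, e.g. to a forecast covariance estimate, follows (the lasso variant is omitted). The SCAD constant a = 3.7 is the customary choice from Fan and Li; how the threshold t itself is adapted per assimilation cycle is the paper's contribution and is not reproduced here.

```python
import numpy as np

def hard(c, t):
    return np.where(np.abs(c) > t, c, 0.0)

def soft(c, t):
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def scad(c, t, a=3.7):
    """Smoothly Clipped Absolute Deviation rule: soft near zero, a linear
    transition in the middle, identity for large entries."""
    out = soft(c, t)
    mid = (np.abs(c) > 2 * t) & (np.abs(c) <= a * t)
    out = np.where(mid, ((a - 1) * c - np.sign(c) * a * t) / (a - 2), out)
    return np.where(np.abs(c) > a * t, c, out)

def threshold_covariance(P, t, rule=scad):
    """Entrywise regularization of a sample covariance, keeping the diagonal."""
    R = rule(P, t)
    np.fill_diagonal(R, np.diag(P))
    return R
```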

  5. Threshold Determination for Local Instantaneous Sea Surface Height Derivation with IceBridge Data in the Beaufort Sea

    NASA Astrophysics Data System (ADS)

    Zhu, C.; Zhang, S.; Xiao, F.; Li, J.; Yuan, L.; Zhang, Y.; Zhu, T.

    2018-05-01

    The NASA Operation IceBridge (OIB) mission, initiated in 2009, is currently the largest airborne program in polar remote sensing science; it collects airborne remote sensing measurements to bridge the gap between NASA's ICESat and the upcoming ICESat-2 mission. This paper develops an improved method that optimizes the selection of Digital Mapping System (DMS) images and uses an optimal threshold obtained by experiments in the Beaufort Sea to calculate the local instantaneous sea surface height in this area. The optimal threshold was determined by comparing manual selections with the lowest Airborne Topographic Mapper (ATM) L1B elevations at thresholds of 2%, 1%, 0.5%, 0.2%, 0.1% and 0.05% in sections A, B and C; the means of the mean differences are 0.166 m, 0.124 m, 0.083 m, 0.018 m, 0.002 m and -0.034 m, respectively. Our study shows that the lowest 0.1% of the L1B data is the optimal threshold. The optimal threshold and manual selections are also used to calculate the instantaneous sea surface height over images with leads, and we find that the improved method agrees more closely with the L1B manual selections. For images without leads, the local instantaneous sea surface height is estimated using the linear relations between distance and the sea surface heights calculated over images with leads.
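
    The reported rule reduces to taking the mean of the lowest fraction of ATM L1B elevations within a section; a minimal sketch, with the 0.1% optimum found above as the default, could look like this.

```python
import numpy as np

def local_sea_surface_height(atm_elevations, fraction=0.001):
    """Mean of the lowest `fraction` (0.1% by default) of ATM L1B
    elevations, taken as the local instantaneous sea surface height."""
    elev = np.sort(np.asarray(atm_elevations, dtype=float).ravel())
    n = max(1, int(round(len(elev) * fraction)))
    return elev[:n].mean()
```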

  6. Efficient method for calculations of ro-vibrational states in triatomic molecules near dissociation threshold: Application to ozone

    NASA Astrophysics Data System (ADS)

    Teplukhin, Alexander; Babikov, Dmitri

    2016-09-01

    A method for calculating rotational-vibrational states of triatomic molecules up to the dissociation threshold (and scattering resonances above it) is devised that combines hyperspherical coordinates, a sequential diagonalization-truncation procedure, an optimized-grid DVR, and a complex absorbing potential. The efficiency and accuracy of the method and the new code are tested by computing the spectrum of ozone up to the dissociation threshold, using two different potential energy surfaces. In both cases good agreement with results of previous studies is obtained for the lower energy states localized in the deep (˜10 000 cm-1) covalent well. The upper part of the bound state spectrum, within 600 cm-1 below the dissociation threshold, is also computed and analyzed in detail. It is found that long progressions of symmetric-stretching and bending states (up to 8 and 11 quanta, respectively) survive up to the dissociation threshold and even above it, whereas excitations of the asymmetric-stretching overtones couple to the local vibration modes, making assignments difficult. Within 140 cm-1 below the dissociation threshold, large-amplitude vibrational states of a floppy complex O⋯O2 are formed over the shallow van der Waals plateau. These are assigned using two local modes: the rocking-motion and the dissociative-motion progressions, up to 6 quanta in each, both with frequency ˜20 cm-1. Many of these plateau states are mixed with states of the covalent well. Interestingly, excitation of the rocking motion helps keep these states localized within the plateau region by raising the effective barrier.

  7. Effect of density of localized states on the ovonic threshold switching characteristics of the amorphous GeSe films

    NASA Astrophysics Data System (ADS)

    Ahn, Hyung-Woo; Seok Jeong, Doo; Cheong, Byung-ki; Lee, Hosuk; Lee, Hosun; Kim, Su-dong; Shin, Sang-Yeol; Kim, Donghwan; Lee, Suyoun

    2013-07-01

    We investigated the effect of nitrogen (N) doping on the threshold voltage of an ovonic threshold switching (OTS) device using amorphous GeSe. Using spectroscopic ellipsometry, we found that the addition of N brought about significant changes in the electronic structure of GeSe, such as the density of localized states and the band gap energy. In addition, the characteristics of the OTS devices were observed to depend strongly on N doping, which could be attributed to those changes in electronic structure, suggesting a method to modulate the threshold voltage of the device.

  8. Exploring three faint source detection methods for aperture synthesis radio images

    NASA Astrophysics Data System (ADS)

    Peracaula, M.; Torrent, A.; Masias, M.; Lladó, X.; Freixenet, J.; Martí, J.; Sánchez-Sutil, J. R.; Muñoz-Arjonilla, A. J.; Paredes, J. M.

    2015-04-01

    Wide-field radio interferometric images often contain a large population of faint compact sources. Due to their low intensity-to-noise ratio, these objects can easily be missed by automated detection methods, which have classically been based on thresholding techniques after local noise estimation. The aim of this paper is to present and analyse the performance of several alternative or complementary techniques to thresholding. We compare three different algorithms to increase the detection rate of faint objects. The first technique combines wavelet decomposition with local thresholding. The second technique is based on the structural behaviour of the neighbourhood of each pixel. Finally, the third algorithm uses local features extracted from a bank of filters and a boosting classifier to perform the detections. The methods' performances are evaluated using simulations and radio mosaics from the Giant Metrewave Radio Telescope and the Australia Telescope Compact Array. We show that the new methods perform better than well-known state-of-the-art methods such as SEXTRACTOR, SAD and DUCHAMP at detecting faint sources in radio interferometric images.
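
    For reference, the classical baseline the paper compares against, thresholding after local noise estimation, can be sketched as follows. The window size, the 5-sigma factor and the MAD noise estimator are common but illustrative choices; scipy's generic_filter keeps the sketch short at the cost of speed.

```python
import numpy as np
from scipy.ndimage import generic_filter, median_filter

def mad_sigma(values):
    med = np.median(values)
    return 1.4826 * np.median(np.abs(values - med))  # robust noise scale

def detect_sources(image, k=5.0, box=33):
    """Flag pixels exceeding k times the locally estimated noise above the
    local background (slow but explicit)."""
    background = median_filter(image, size=box)
    noise = generic_filter(image, mad_sigma, size=box)
    return (image - background) > k * noise
```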

  9. Directional Histogram Ratio at Random Probes: A Local Thresholding Criterion for Capillary Images

    PubMed Central

    Lu, Na; Silva, Jharon; Gu, Yu; Gerber, Scott; Wu, Hulin; Gelbard, Harris; Dewhurst, Stephen; Miao, Hongyu

    2013-01-01

    With the development of micron-scale imaging techniques, capillaries can be conveniently visualized using methods such as two-photon and whole mount microscopy. However, the presence of background staining, leaky vessels and the diffusion of small fluorescent molecules can lead to significant complexity in image analysis and loss of information necessary to accurately quantify vascular metrics. One solution to this problem is the development of accurate thresholding algorithms that reliably distinguish blood vessels from surrounding tissue. Although various thresholding algorithms have been proposed, our results suggest that without appropriate pre- or post-processing, the existing approaches may fail to obtain satisfactory results for capillary images that include areas of contamination. In this study, we propose a novel local thresholding algorithm, called directional histogram ratio at random probes (DHR-RP). This method explicitly considers the geometric features of tube-like objects in conducting image binarization, and performs reliably in distinguishing small vessels from either clean or contaminated background. Experimental and simulation studies suggest that our DHR-RP algorithm is superior to existing thresholding methods. PMID:23525856

  10. Adaptive local thresholding for robust nucleus segmentation utilizing shape priors

    NASA Astrophysics Data System (ADS)

    Wang, Xiuzhong; Srinivas, Chukka

    2016-03-01

    This paper describes a novel local thresholding method for foreground detection. First, a Canny edge detection method is used for initial edge detection. Then, tensor voting is applied on the initial edge pixels, using a nonsymmetric tensor field tailored to encode prior information about nucleus size, shape, and intensity spatial distribution. Tensor analysis is then performed to generate the saliency image and, based on that, the refined edge. Next, the image domain is divided into blocks. In each block, at least one foreground and one background pixel are sampled for each refined edge pixel. The saliency weighted foreground histogram and background histogram are then created. These two histograms are used to calculate a threshold by minimizing the background and foreground pixel classification error. The block-wise thresholds are then used to generate the threshold for each pixel via interpolation. Finally, the foreground is obtained by comparing the original image with the threshold image. The effective use of prior information, combined with robust techniques, results in far more reliable foreground detection, which leads to robust nucleus segmentation.
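
    A stripped-down sketch of the block-to-pixel threshold interpolation step follows. The per-block rule here is a single isodata-style split rather than the paper's saliency-weighted histogram error minimization, and the tensor voting stage is omitted entirely.

```python
import numpy as np
from scipy.ndimage import zoom

def blockwise_threshold_map(image, block=64):
    h, w = image.shape
    by, bx = max(h // block, 1), max(w // block, 1)
    grid = np.empty((by, bx))
    for i in range(by):
        for j in range(bx):
            tile = image[i * block:(i + 1) * block, j * block:(j + 1) * block]
            m = np.median(tile)
            lo, hi = tile[tile <= m], tile[tile > m]
            # midpoint between class means (one isodata step per block)
            grid[i, j] = 0.5 * (lo.mean() + hi.mean()) if hi.size else m
    # bilinear interpolation of block thresholds to per-pixel thresholds
    return zoom(grid, (h / by, w / bx), order=1)

def foreground(image):
    return image > blockwise_threshold_map(image)
```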

  11. An improved TV caption image binarization method

    NASA Astrophysics Data System (ADS)

    Jiang, Mengdi; Cheng, Jianghua; Chen, Minghui; Ku, Xishu

    2018-04-01

    TV caption image binarization has an important influence on semantic video retrieval. An improved binarization method for caption images is proposed in this paper. To overcome the ghosting and broken-stroke problems of the traditional Niblack method, the method considers both the global and the local information of the image. First, traditional Otsu and Niblack thresholds are used for initial binarization. Second, we introduce the difference between the maximum and minimum values in a local window as a third threshold to generate two images. Finally, a logical AND operation of the two images yields the result. Experimental results show that the proposed method is reliable and effective.
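
    A compact sketch of the three-threshold combination follows, assuming bright captions on a darker background. The Niblack k, window size and contrast threshold are illustrative, and a library routine such as skimage.filters.threshold_otsu can supply the global threshold.

```python
import numpy as np
from scipy.ndimage import uniform_filter, maximum_filter, minimum_filter

def niblack_mask(image, w=25, k=-0.2):
    """Local Niblack decision: pixel above local mean + k * local std."""
    mean = uniform_filter(image, w)
    var = uniform_filter(image ** 2, w) - mean ** 2
    return image > mean + k * np.sqrt(np.maximum(var, 0.0))

def binarize_caption(image, otsu_t, w=25, contrast_t=0.15):
    """AND of a global (Otsu), a local (Niblack) and a local-contrast
    decision, echoing the three thresholds described above."""
    contrast = maximum_filter(image, w) - minimum_filter(image, w)
    return (image > otsu_t) & niblack_mask(image, w) & (contrast > contrast_t)
```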

  12. Using pyramids to define local thresholds for blob detection.

    PubMed

    Shneier, M

    1983-03-01

    A method of detecting blobs in images is described. The method involves building a succession of lower resolution images and looking for spots in these images. A spot in a low resolution image corresponds to a distinguished compact region in a known position in the original image. Further, it is possible to calculate thresholds in the low resolution image, using very simple methods, and to apply those thresholds to the region of the original image corresponding to the spot. Examples are shown in which variations of the technique are applied to several images.
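
    A sketch of the two ingredients, an averaging pyramid and a "very simple" threshold read off the low-resolution level, follows; the midpoint rule is one plausible instance of the simple methods the abstract alludes to, not the paper's exact formula.

```python
import numpy as np

def build_pyramid(image, levels=4):
    """Each level halves the resolution by 2x2 averaging, so a single
    bright pixel at level k summarizes a 2^k x 2^k region."""
    pyr = [np.asarray(image, dtype=float)]
    for _ in range(levels - 1):
        a = pyr[-1]
        h, w = (a.shape[0] // 2) * 2, (a.shape[1] // 2) * 2
        a = a[:h, :w]
        pyr.append(0.25 * (a[0::2, 0::2] + a[1::2, 0::2]
                           + a[0::2, 1::2] + a[1::2, 1::2]))
    return pyr

def spot_threshold(level_img, y, x):
    """Midpoint between a spot's value and its 8-neighbour mean; the result
    is applied to the corresponding window of the original image."""
    nb = level_img[max(y - 1, 0):y + 2, max(x - 1, 0):x + 2]
    surround = (nb.sum() - level_img[y, x]) / (nb.size - 1)
    return 0.5 * (level_img[y, x] + surround)
```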

  13. Dual-threshold segmentation using Arimoto entropy based on chaotic bee colony optimization

    NASA Astrophysics Data System (ADS)

    Li, Li

    2018-03-01

    In order to extract targets from complex backgrounds more quickly and accurately, and to further improve defect detection, a dual-threshold segmentation method using Arimoto entropy based on chaotic bee colony optimization is proposed. Firstly, single-threshold selection based on Arimoto entropy is extended to dual-threshold selection in order to separate the target from the background more accurately. Then the intermediate variables in the formulae for Arimoto entropy dual-threshold selection are calculated recursively, effectively eliminating redundant computation and reducing the amount of calculation. Finally, the local search phase of the artificial bee colony algorithm is improved with a chaotic sequence based on the tent map, and the fast search for the two optimal thresholds is carried out with the improved bee colony optimization algorithm, which accelerates the search substantially. A large number of experimental results show that, compared with existing segmentation methods such as multi-threshold segmentation using maximum Shannon entropy, two-dimensional Shannon entropy segmentation, two-dimensional Tsallis gray entropy segmentation and multi-threshold segmentation using reciprocal gray entropy, the proposed method segments targets more quickly and accurately, with superior segmentation quality. It proves to be a fast and effective method for image segmentation.

  14. Cold perception and cutaneous microvascular response to local cooling at different cooling temperatures.

    PubMed

    Music, Mark; Finderle, Zarko; Cankar, Ksenija

    2011-05-01

    The aim of the present study was to investigate the effect of quantitatively measured cold perception (CP) thresholds on the microcirculatory response to local cooling, as measured by the direct and indirect laser-Doppler (LD) flux responses during local cooling at different temperatures. The CP thresholds were measured in 18 healthy males using the Marstock method (thermode placed on the thenar). The direct (at the cooling site) and indirect (on the contralateral hand) LD flux responses were recorded during immersion of the hand in a water bath at 20°C, 15°C, and 10°C. The cold perception threshold correlated (linear regression analysis, Pearson correlation) with the indirect LD flux response at cooling temperatures of 20°C (r=0.782, p<0.01) and 15°C (r=0.605, p<0.01). In contrast, there was no correlation between the CP threshold and the indirect LD flux response during cooling in water at 10°C. The results demonstrate that during local cooling, depending on the cooling temperature used, the cold perception threshold influences the indirect LD flux response.

  15. Comparison of an adaptive local thresholding method on CBCT and µCT endodontic images

    NASA Astrophysics Data System (ADS)

    Michetti, Jérôme; Basarab, Adrian; Diemer, Franck; Kouame, Denis

    2018-01-01

    Root canal segmentation on cone beam computed tomography (CBCT) images is difficult because of the noise level, resolution limitations, beam hardening and dental morphological variations. An image processing framework, based on an adaptive local threshold method, was evaluated on CBCT images acquired from extracted teeth. A comparison with high-quality segmented endodontic images from micro computed tomography (µCT) images acquired from the same teeth was carried out using a dedicated registration process. Each segmented tooth was evaluated according to volume and root canal sections through the area and the Feret’s diameter. The proposed method is shown to overcome the limitations of CBCT and to provide an automated and adaptive complete endodontic segmentation. Despite a slight underestimation (-4.08%), the local threshold segmentation method based on edge detection was shown to be fast and accurate. Strong correlations between CBCT and µCT segmentations were found for both the root canal area and diameter (0.98 and 0.88, respectively). Our findings suggest that combining CBCT imaging with this image processing framework may benefit experimental endodontology and teaching, and could represent a first development step towards the clinical use of endodontic CBCT segmentation during pulp cavity treatment.

  16. Outlier detection for particle image velocimetry data using a locally estimated noise variance

    NASA Astrophysics Data System (ADS)

    Lee, Yong; Yang, Hua; Yin, ZhouPing

    2017-03-01

    This work describes an adaptive, spatially variable threshold outlier detection algorithm for raw gridded particle image velocimetry data using a locally estimated noise variance. This method is an iterative procedure, and each iteration is composed of a reference vector field reconstruction step and an outlier detection step. We construct the reference vector field using a weighted adaptive smoothing method (Garcia 2010 Comput. Stat. Data Anal. 54 1167-78), and the weights are determined in the outlier detection step using a modified outlier detector (Ma et al 2014 IEEE Trans. Image Process. 23 1706-21). A hard decision on the final weights of the iteration produces the outlier labels of the field. The technical contribution is that the spatially variable threshold is embedded, for the first time, in the modified outlier detector with a locally estimated noise variance in an iterative framework. It turns out that a spatially variable threshold is preferable to a single spatially constant threshold in complicated flows such as vortex flows or turbulent flows. Synthetic cellular vortical flows with simulated scattered or clustered outliers are adopted to evaluate the performance of our proposed method in comparison with popular validation approaches. This method also turns out to be beneficial in a real PIV measurement of turbulent flow. The experimental results demonstrate that the proposed method yields competitive performance in terms of outlier under-detection and over-detection counts. In addition, the outlier detection method is computationally efficient and adaptive, requires no user-defined parameters, and corresponding implementations are provided in the supplementary materials.
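
    A much-simplified sketch of the core idea, a spatially variable threshold driven by a locally estimated noise scale, follows; the paper's iterative reference-field reconstruction and adaptive weight updates are not reproduced.

```python
import numpy as np
from scipy.ndimage import median_filter

def piv_outliers(u, v, k=3.0, win=3):
    """Flag vectors whose residual to a local median reference exceeds
    k times a locally estimated noise scale (per velocity component)."""
    flags = np.zeros(u.shape, dtype=bool)
    for comp in (u, v):
        ref = median_filter(comp, size=win)
        resid = np.abs(comp - ref)
        noise = median_filter(resid, size=win) + 1e-6  # local noise scale
        flags |= resid / noise > k
    return flags
```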

  17. An evaluation of the effect of recent temperature variability on the prediction of coral bleaching events.

    PubMed

    Donner, Simon D

    2011-07-01

    Over the past 30 years, warm thermal disturbances have become commonplace on coral reefs worldwide. These periods of anomalous sea surface temperature (SST) can lead to coral bleaching, a breakdown of the symbiosis between the host coral and the symbiotic dinoflagellates that reside in coral tissue. The onset of bleaching is typically predicted to occur when the SST exceeds a local climatological maximum by 1 °C for a month or more. However, recent evidence suggests that the threshold at which bleaching occurs may depend on thermal history. This study uses global SST data sets (HadISST and NOAA AVHRR) and mass coral bleaching reports (from Reefbase) to examine the effect of historical SST variability on the accuracy of bleaching prediction. Two variability-based bleaching prediction methods are developed from global analysis of seasonal and interannual SST variability. The first method employs a local bleaching threshold derived from the historical variability in maximum annual SST to account for spatial variability in past thermal disturbance frequency. The second method uses a different formula to estimate the local climatological maximum to account for the low seasonality of SST in the tropics. The new prediction methods are tested against the common globally fixed threshold method using the observed bleaching reports. The results show that estimating the bleaching threshold from local historical SST variability delivers the highest predictive power, but also a higher rate of Type I errors. The second method has the lowest predictive power globally, though regional analysis suggests that it may be applicable in equatorial regions. The historical data analysis suggests that the bleaching threshold may have appeared to be constant globally because the magnitude of interannual variability in maximum SST is similar for many of the world's coral reef ecosystems. For example, the results show that an SST anomaly of 1 °C is equivalent to 1.73-2.94 standard deviations of the maximum monthly SST for two-thirds of the world's coral reefs. Coral reefs in the few regions that experience anomalously high interannual SST variability, like the equatorial Pacific, could prove critical to understanding how coral communities acclimate or adapt to frequent and/or severe thermal disturbances.
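
    The first, variability-based rule reduces to replacing the fixed +1 °C offset with a multiple of the historical spread of annual maximum SST; a sketch, with an illustrative k, follows.

```python
import numpy as np

def bleaching_threshold(annual_max_sst, k=2.0):
    """Variability-based threshold: climatological maximum plus k standard
    deviations of the interannual maximum SST. k is illustrative; the fixed
    rule would instead return the climatological maximum plus 1.0 degree C."""
    sst = np.asarray(annual_max_sst, dtype=float)
    return sst.mean() + k * sst.std(ddof=1)
```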

  18. Lower-upper-threshold correlation for underwater range-gated imaging self-adaptive enhancement.

    PubMed

    Sun, Liang; Wang, Xinwei; Liu, Xiaoquan; Ren, Pengdao; Lei, Pingshun; He, Jun; Fan, Songtao; Zhou, Yan; Liu, Yuliang

    2016-10-10

    In underwater range-gated imaging (URGI), enhancement of low-brightness, low-contrast images is critical for human observation. Traditional histogram equalizations over-enhance images, with the result that details are lost. To suppress over-enhancement, a lower-upper-threshold correlation method is proposed for URGI self-adaptive enhancement based on double-plateau histogram equalization. The lower threshold determines image details and suppresses over-enhancement, and it is correlated with the upper threshold. First, the upper threshold is updated by searching for the local maximum in real time; then the lower threshold is calculated from the upper threshold and the number of nonzero units selected from a filtered histogram. With this method, the backgrounds of underwater images are constrained while details are enhanced. Finally, proof-of-concept experiments are performed. Peak signal-to-noise ratio, variance, contrast, and human visual properties are used to evaluate the objective quality of the global and region-of-interest images. The evaluation results demonstrate that the proposed method adaptively selects the proper upper and lower thresholds under different conditions, and that it contributes to URGI with effective image enhancement for human eyes.
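
    A sketch of the final equalization step follows, with the two plateau thresholds taken as given. The paper's actual contribution, deriving the lower threshold from the upper one via a real-time local-maximum search on the filtered histogram, is not reproduced here, and clipping only nonzero bins is one common variant of double-plateau equalization.

```python
import numpy as np

def double_plateau_equalize(image, t_low, t_up):
    """Clip nonzero histogram bins into [t_low, t_up] before building the
    CDF: the upper plateau curbs over-enhancement by large uniform regions,
    the lower plateau keeps sparse detail bins from vanishing."""
    img = np.asarray(image, dtype=np.uint8)
    hist = np.bincount(img.ravel(), minlength=256)
    clipped = np.where(hist > 0, np.clip(hist, t_low, t_up), 0)
    cdf = np.cumsum(clipped).astype(float)
    lut = np.round(255 * (cdf - cdf[0]) / (cdf[-1] - cdf[0])).astype(np.uint8)
    return lut[img]
```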

  19. Defect Detection of Steel Surfaces with Global Adaptive Percentile Thresholding of Gradient Image

    NASA Astrophysics Data System (ADS)

    Neogi, Nirbhar; Mohanta, Dusmanta K.; Dutta, Pranab K.

    2017-12-01

    Steel strips are used extensively for white goods, auto bodies and other purposes where surface defects are not acceptable. On-line surface inspection systems can effectively detect and classify defects and help in taking corrective actions. For defect detection, the use of gradients is very popular for highlighting and subsequently segmenting areas of interest in a surface inspection system. Most of the time, segmentation by a fixed-value threshold leads to unsatisfactory results. As defects can range from very small to large in size, segmentation of a gradient image based on a fixed percentile threshold can lead to inadequate or excessive segmentation of defective regions. A global adaptive percentile thresholding of the gradient image has been formulated for blister defects and water deposits (a pseudo defect) in steel strips. The developed method adaptively changes the percentile value used for thresholding depending on the number of pixels above specific gray levels of the gradient image. The method is able to segment defective regions selectively, preserving the characteristics of defects irrespective of their size. The developed method performs better than the Otsu method of thresholding and an adaptive thresholding method based on local properties.
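
    One plausible reading of the adaptation rule, lowering the percentile when many gradient pixels exceed a reference gray level so that large defects are not truncated, is sketched below; all parameter values are illustrative guesses, not the paper's.

```python
import numpy as np
from scipy.ndimage import sobel

def adaptive_percentile_segment(image, base_pct=99.5, ref_level=40.0, gain=50.0):
    """Threshold the gradient magnitude at a percentile that adapts to how
    much of the image carries strong gradients."""
    grad = np.hypot(sobel(image, axis=0), sobel(image, axis=1))
    frac_strong = float((grad > ref_level).mean())
    pct = np.clip(base_pct - gain * frac_strong, 90.0, base_pct)
    return grad > np.percentile(grad, pct)  # defect candidate mask
```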

  20. Adaptive Spot Detection With Optimal Scale Selection in Fluorescence Microscopy Images.

    PubMed

    Basset, Antoine; Boulanger, Jérôme; Salamero, Jean; Bouthemy, Patrick; Kervrann, Charles

    2015-11-01

    Accurately detecting subcellular particles in fluorescence microscopy is of primary interest for further quantitative analysis such as counting, tracking, or classification. Our primary goal is to segment vesicles likely to share nearly the same size in fluorescence microscopy images. Our method termed adaptive thresholding of Laplacian of Gaussian (LoG) images with autoselected scale (ATLAS) automatically selects the optimal scale corresponding to the most frequent spot size in the image. Four criteria are proposed and compared to determine the optimal scale in a scale-space framework. Then, the segmentation stage amounts to thresholding the LoG of the intensity image. In contrast to other methods, the threshold is locally adapted given a probability of false alarm (PFA) specified by the user for the whole set of images to be processed. The local threshold is automatically derived from the PFA value and local image statistics estimated in a window whose size is not a critical parameter. We also propose a new data set for benchmarking, consisting of six collections of one hundred images each, which exploits backgrounds extracted from real microscopy images. We have carried out an extensive comparative evaluation on several data sets with ground-truth, which demonstrates that ATLAS outperforms existing methods. ATLAS does not need any fine parameter tuning and requires very low computation time. Convincing results are also reported on real total internal reflection fluorescence microscopy images.
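
    The thresholding stage can be sketched as below: LoG filtering at the selected scale, then a per-pixel threshold derived from local statistics and the user's PFA under a Gaussian null. The scale autoselection criteria, the heart of ATLAS, are omitted, and the window size and PFA value are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace, uniform_filter
from scipy.stats import norm

def detect_spots(image, sigma, pfa=1e-3, win=31):
    log_img = -gaussian_laplace(np.asarray(image, float), sigma)  # bright spots > 0
    mean = uniform_filter(log_img, win)
    var = uniform_filter(log_img ** 2, win) - mean ** 2
    k = norm.isf(pfa)  # threshold factor from the false-alarm probability
    return log_img > mean + k * np.sqrt(np.maximum(var, 0.0))
```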

  1. Quantifying fracture geometry with X-ray tomography: Technique of Iterative Local Thresholding (TILT) for 3D image segmentation

    DOE PAGES

    Deng, Hang; Fitts, Jeffrey P.; Peters, Catherine A.

    2016-02-01

    This paper presents a new method—the Technique of Iterative Local Thresholding (TILT)—for processing 3D X-ray computed tomography (xCT) images for visualization and quantification of rock fractures. The TILT method includes the following advancements. First, custom masks are generated by a fracture-dilation procedure, which significantly amplifies the fracture signal on the intensity histogram used for local thresholding. Second, TILT is particularly well suited for fracture characterization in granular rocks because the multi-scale Hessian fracture (MHF) filter has been incorporated to distinguish fractures from pores in the rock matrix. Third, TILT wraps the thresholding and fracture isolation steps in an optimized iterative routine for binary segmentation, minimizing human intervention and enabling automated processing of large 3D datasets. As an illustrative example, we applied TILT to 3D xCT images of reacted and unreacted fractured limestone cores. Other segmentation methods were also applied to provide insights regarding variability in image processing. The results show that TILT significantly enhanced separability of grayscale intensities, outperformed the other methods in automation, and was successful in isolating fractures from the porous rock matrix. Because the other methods are more likely to misclassify fracture edges as void and/or have limited capacity in distinguishing fractures from pores, those methods estimated larger fracture volumes (up to 80%), surface areas (up to 60%), and roughness (up to a factor of 2). In conclusion, these differences in fracture geometry would lead to significant disparities in hydraulic permeability predictions, as determined by 2D flow simulations.

  2. Particle swarm optimization-based local entropy weighted histogram equalization for infrared image enhancement

    NASA Astrophysics Data System (ADS)

    Wan, Minjie; Gu, Guohua; Qian, Weixian; Ren, Kan; Chen, Qian; Maldague, Xavier

    2018-06-01

    Infrared image enhancement plays a significant role in intelligent urban surveillance systems for smart city applications. Unlike existing methods that only exaggerate the global contrast, we propose a particle swarm optimization-based local entropy weighted histogram equalization which enhances both local details and fore- and background contrast. First of all, a novel local entropy weighted histogram depicting the distribution of detail information is calculated based on a modified hyperbolic tangent function. Then, the histogram is divided into two parts via a threshold maximizing the inter-class variance in order to improve the contrast of foreground and background, respectively. To avoid over-enhancement and noise amplification, double plateau thresholds of the presented histogram are formulated by means of a particle swarm optimization algorithm. Lastly, each sub-image is equalized independently according to the constrained sub-local entropy weighted histogram. Comparative experiments implemented on real infrared images prove that our algorithm outperforms other state-of-the-art methods in terms of both visual and quantized evaluations.

  3. A novel segmentation method for uneven lighting image with noise injection based on non-local spatial information and intuitionistic fuzzy entropy

    NASA Astrophysics Data System (ADS)

    Yu, Haiyan; Fan, Jiulun

    2017-12-01

    Local thresholding methods for uneven lighting image segmentation have the limitations that they are very sensitive to noise injection and that their performance relies largely upon the choice of the initial window size. This paper proposes a novel algorithm for segmenting uneven lighting images with strong noise injection based on non-local spatial information and intuitionistic fuzzy theory. We regard an image as a gray wave in three-dimensional space, composed of many peaks and troughs that divide the image into many local sub-regions in different directions. Our algorithm computes the relative characteristic of each pixel located in the corresponding sub-region based on a fuzzy membership function and uses it to replace the pixel's absolute characteristic (its gray level) to reduce the influence of uneven light on image segmentation. At the same time, non-local adaptive spatial constraints on pixels are introduced to avoid noise interference with the search for local sub-regions and the computation of local characteristics. Moreover, edge information is taken into account to avoid false peak and trough labeling. Finally, a global method based on intuitionistic fuzzy entropy is employed on the wave transformation image to obtain the segmented result. Experiments on several test images show that the proposed method has an excellent capability to decrease the influence of uneven illumination and noise injection, and behaves more robustly than several classical global and local thresholding methods.

  4. Metabolic Tumor Volume and Total Lesion Glycolysis in Oropharyngeal Cancer Treated With Definitive Radiotherapy: Which Threshold Is the Best Predictor of Local Control?

    PubMed

    Castelli, Joël; Depeursinge, Adrien; de Bari, Berardino; Devillers, Anne; de Crevoisier, Renaud; Bourhis, Jean; Prior, John O

    2017-06-01

    In the context of oropharyngeal cancer treated with definitive radiotherapy, the aim of this retrospective study was to identify the best threshold value for computing metabolic tumor volume (MTV) and/or total lesion glycolysis to predict local-regional control (LRC) and disease-free survival. One hundred twenty patients with locally advanced oropharyngeal cancer from 2 different institutions treated with definitive radiotherapy underwent FDG PET/CT before treatment. Various MTV and total lesion glycolysis values were defined based on 2 segmentation methods: (i) an absolute SUV threshold (0-20 g/mL) or (ii) a relative threshold of SUVmax (0%-100%). The parameters' predictive capabilities for disease-free survival and LRC were assessed using the Harrell C-index and a Cox regression model. Relative thresholds between 40% and 68% and absolute thresholds between 5.5 and 7 had similar predictive value for LRC (C-index = 0.65 and 0.64, respectively). Metabolic tumor volume had a higher predictive value than gross tumor volume (C-index = 0.61) and SUVmax (C-index = 0.54). Metabolic tumor volume computed with a relative threshold of 51% of SUVmax was the best predictor of disease-free survival (hazard ratio, 1.23 [per 10 mL], P = 0.009) and LRC (hazard ratio, 1.22 [per 10 mL], P = 0.02). The use of different thresholds within a reasonable range (between 5.5 and 7 for an absolute threshold and between 40% and 68% for a relative threshold) seems to have no major impact on the predictive value of MTV. This parameter may be used to identify patients with a high risk of recurrence who may benefit from treatment intensification.
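
    The two segmentation rules compared above amount to a voxel cutoff that is either absolute (in g/mL) or relative to SUVmax; a sketch, using the 51% relative threshold reported as the best predictor, follows.

```python
import numpy as np

def mtv_tlg(suv, voxel_ml, mode="relative", thresh=0.51):
    """Metabolic tumor volume (mL) and total lesion glycolysis from an SUV
    volume; `thresh` is a fraction of SUVmax in relative mode, g/mL otherwise."""
    suv = np.asarray(suv, dtype=float)
    cutoff = thresh * suv.max() if mode == "relative" else thresh
    mask = suv > cutoff
    mtv = mask.sum() * voxel_ml
    tlg = suv[mask].sum() * voxel_ml  # MTV times the lesion's mean SUV
    return mtv, tlg
```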

  5. Stroke-model-based character extraction from gray-level document images.

    PubMed

    Ye, X; Cheriet, M; Suen, C Y

    2001-01-01

    Global gray-level thresholding techniques such as Otsu's method, and local gray-level thresholding techniques such as edge-based segmentation or adaptive thresholding, are powerful in extracting character objects from simple or slowly varying backgrounds. However, they are found to be insufficient when the backgrounds include sharply varying contours or fonts of different sizes. A stroke model is proposed to depict the local features of character objects as double edges of a predefined width. This model enables us to detect thin connected components selectively, while ignoring relatively large backgrounds that appear complex. Meanwhile, since the stroke width restriction is fully factored in, the proposed technique can be used to extract characters in predefined font sizes. To process large volumes of documents efficiently, a hybrid method is proposed for character extraction from various backgrounds. Using a measure of class separability to differentiate images with simple backgrounds from those with complex backgrounds, the hybrid method can process documents with different backgrounds by applying the appropriate methods. Experiments on extracting handwriting from check images, as well as machine-printed characters from scene images, demonstrate the effectiveness of the proposed model.

  6. Large signal-to-noise ratio quantification in MLE for ARARMAX models

    NASA Astrophysics Data System (ADS)

    Zou, Yiqun; Tang, Xiafei

    2014-06-01

    It has been shown that closed-loop linear system identification by the indirect method can generally be transferred to open-loop ARARMAX (AutoRegressive AutoRegressive Moving Average with eXogenous input) estimation. For such models, gradient-related optimisation with a large enough signal-to-noise ratio (SNR) can avoid potential local convergence in maximum likelihood estimation. To ease the application of this condition, the threshold SNR needs to be quantified. In this paper, we build the amplitude coefficient, which is an equivalent of the SNR, and prove the finiteness of the threshold amplitude coefficient within the stability region. The quantification of the threshold is achieved by minimising an elaborately designed multi-variable cost function which unifies all the restrictions on the amplitude coefficient. The corresponding algorithm, based on two sets of physically realisable system input-output data, details the minimisation and also points out how to use the gradient-related method to estimate ARARMAX parameters when a local minimum is present because the SNR is small. The algorithm is then tested on a theoretical AutoRegressive Moving Average with eXogenous input model for the derivation of the threshold, and on a real gas turbine engine system for model identification. Finally, the graphical validation of the threshold on a two-dimensional plot is discussed.

  7. Swarm: robust and fast clustering method for amplicon-based studies.

    PubMed

    Mahé, Frédéric; Rognes, Torbjørn; Quince, Christopher; de Vargas, Colomban; Dunthorn, Micah

    2014-01-01

    Popular de novo amplicon clustering methods suffer from two fundamental flaws: arbitrary global clustering thresholds, and input-order dependency induced by centroid selection. Swarm was developed to address these issues by first clustering nearly identical amplicons iteratively using a local threshold, and then by using clusters' internal structure and amplicon abundances to refine its results. This fast, scalable, and input-order independent approach reduces the influence of clustering parameters and produces robust operational taxonomic units.
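
    The first phase, growing clusters iteratively from a seed with a local per-step threshold instead of one global radius, can be sketched as a breadth-first search. The abundance-based refinement phase is omitted, and `neighbors` is an assumed helper returning the amplicons within d differences of a given one.

```python
from collections import deque

def swarm_like_clusters(amplicons, neighbors):
    """Grow each cluster from a seed by repeatedly pulling in unvisited
    amplicons within the local threshold of some cluster member."""
    seen, clusters = set(), []
    for seed in amplicons:
        if seed in seen:
            continue
        seen.add(seed)
        cluster, queue = [], deque([seed])
        while queue:
            a = queue.popleft()
            cluster.append(a)
            for b in neighbors(a):  # amplicons within d differences of a
                if b not in seen:
                    seen.add(b)
                    queue.append(b)
        clusters.append(cluster)
    return clusters
```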

  8. Swarm: robust and fast clustering method for amplicon-based studies

    PubMed Central

    Rognes, Torbjørn; Quince, Christopher; de Vargas, Colomban; Dunthorn, Micah

    2014-01-01

    Popular de novo amplicon clustering methods suffer from two fundamental flaws: arbitrary global clustering thresholds, and input-order dependency induced by centroid selection. Swarm was developed to address these issues by first clustering nearly identical amplicons iteratively using a local threshold, and then by using clusters’ internal structure and amplicon abundances to refine its results. This fast, scalable, and input-order independent approach reduces the influence of clustering parameters and produces robust operational taxonomic units. PMID:25276506

  9. Double Threshold Energy Detection Based Cooperative Spectrum Sensing for Cognitive Radio Networks with QoS Guarantee

    NASA Astrophysics Data System (ADS)

    Hu, Hang; Yu, Hong; Zhang, Yongzhi

    2013-03-01

    Cooperative spectrum sensing, which can greatly improve the ability to discover spectrum opportunities, is regarded as an enabling mechanism for cognitive radio (CR) networks. In this paper, we employ a double-threshold detection method in the energy detector to perform spectrum sensing: only the CR users with reliable sensing information are allowed to transmit a one-bit local decision to the fusion center. Simulation results show that our proposed double-threshold detection method not only improves the sensing performance but also saves reporting-channel bandwidth compared with the conventional single-threshold detection method. By weighting the sensing performance and the consumption of system resources in a utility function that is maximized with respect to the number of CR users, it is shown that the optimal number of CR users is related to the price of these Quality-of-Service (QoS) requirements.
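
    The local decision rule reduces to an energy detector that stays silent inside the ambiguous band between the two thresholds; a sketch follows (the threshold values themselves are left open, since the paper ties them to the target QoS).

```python
import numpy as np

def local_decision(samples, lam_low, lam_high):
    """One-bit decision only when the energy statistic lies outside the
    ambiguous band [lam_low, lam_high]; otherwise stay silent and save
    reporting-channel bandwidth."""
    energy = float(np.mean(np.abs(samples) ** 2))
    if energy >= lam_high:
        return 1      # confident: primary user present
    if energy <= lam_low:
        return 0      # confident: band vacant
    return None       # unreliable: do not report to the fusion center
```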

  10. Survey of abdominal obesities in an adult urban population of Kinshasa, Democratic Republic of Congo

    PubMed Central

    Kasiam Lasi On’kin, JB; Longo-Mbenza, B; Okwe, A Nge; Kabangu, N Kangola

    2007-01-01

    Summary Background The prevalence of overweight/obesity, which is an important cardiovascular risk factor, is rapidly increasing worldwide. Abdominal obesity, a fundamental component of the metabolic syndrome, is not defined by appropriate cut-off points for sub-Saharan Africa. Objective To provide baseline and reference data on the anthropometry/body composition and the prevalence rates of obesity types and levels in the adult urban population of Kinshasa, DRC, Central Africa. Methods During this cross-sectional study carried out within a random sample of adults in Kinshasa, body mass index, waist circumference and fat mass were measured using standard methods. Their reference and local thresholds (cut-off points) were compared with those of the WHO, NCEP and IDF to define the types and levels of obesity in the population. Results From this sample of 11 511 subjects (5 676 men and 5 835 women), the men presented with body mass index and fat mass values similar to those of the women, but higher waist measurements. The international thresholds overestimated the prevalence of denutrition but underestimated that of general and abdominal obesity. The two types of obesity were more prevalent among women than men when using both international and local thresholds. Body mass index was negatively associated with age, but abdominal obesity was more frequent before 20 years of age and between 40 and 60 years of age. Local thresholds of body mass index (≥ 23, ≥ 27 and ≥ 30 kg/m2) and waist measurement (≥ 80, ≥ 90 and ≥ 94 cm) defined epidemic rates of overweight/general obesity (52%) and abdominal obesity (40.9%). The threshold of waist circumference ≥ 94 cm (90th percentile), corresponding to the threshold of body mass index ≥ 30 kg/m2 (90th percentile), was proposed as the specific threshold for defining the metabolic syndrome, without reference to gender, for the cities of sub-Saharan Africa. Conclusion Further studies are required to define the optimal threshold of waist circumference in rural settings. The present local cut-off points of body mass index and waist circumference could be appropriate for the identification of Africans at risk of obesity-related disorders, and indicate the need to implement interventions to reverse increasing levels of obesity. PMID:17985031

  11. First-principles simulation of the optical response of bulk and thin-film α-quartz irradiated with an ultrashort intense laser pulse

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Kyung-Min; Kim, Chul Min; Jeong, Tae Moon, E-mail: jeongtm@gist.ac.kr

    A computational method based on a first-principles multiscale simulation has been used for calculating the optical response and the ablation threshold of an optical material irradiated with an ultrashort intense laser pulse. The method employs Maxwell's equations to describe laser pulse propagation and time-dependent density functional theory to describe the generation of conduction band electrons in an optical medium. Optical properties, such as reflectance and absorption, were investigated for laser intensities in the range 10^10 W/cm^2 to 2 × 10^15 W/cm^2 based on the theory of generation and spatial distribution of the conduction band electrons. The method was applied to investigate the changes in the optical reflectance of α-quartz bulk, half-wavelength thin-film, and quarter-wavelength thin-film and to estimate their ablation thresholds. Despite the adiabatic local density approximation used in calculating the exchange–correlation potential, the reflectance and the ablation threshold obtained from our method agree well with the previous theoretical and experimental results. The method can be applied to estimate the ablation thresholds for optical materials, in general. The ablation threshold data can be used to design ultra-broadband high-damage-threshold coating structures.

  12. Hypnosis and Local Anesthesia for Dental Pain Relief-Alternative or Adjunct Therapy?-A Randomized, Clinical-Experimental Crossover Study.

    PubMed

    Wolf, Thomas Gerhard; Wolf, Dominik; Callaway, Angelika; Below, Dagna; d'Hoedt, Bernd; Willershausen, Brita; Daubländer, Monika

    2016-01-01

    This prospective randomized clinical crossover trial was designed to compare hypnosis and local anesthesia for experimental dental pain relief. Pain thresholds of the dental pulp were determined. A targeted standardized pain stimulus was applied and rated on the Visual Analogue Scale (0-10). The pain threshold was lower under hypnosis (58.3 ± 17.3, p < .001) and maximal (80.0) under local anesthesia. The pain stimulus was scored higher under hypnosis (3.9 ± 3.8) than with local anesthesia (0.0, p < .001). Local anesthesia was superior to hypnosis and is a safe and effective method for pain relief in dentistry. Hypnosis seems to produce effects similar to those observed under sedation. It can be used in addition to local anesthesia, and in individual cases as an alternative, for pain control in dentistry.

  13. Comparison of different methods for estimating snowcover in forested, mountainous basins using LANDSAT (ERTS) images. [Washington and Santiam River, Oregon]

    NASA Technical Reports Server (NTRS)

    Meier, M. J.; Evans, W. E.

    1975-01-01

    Snow-covered areas on LANDSAT (ERTS) images of the Santiam River basin, Oregon, and other basins in Washington were measured using several operators and methods. Seven methods were used: (1) Snowline tracing followed by measurement with planimeter, (2) mean snowline altitudes determined from many locations, (3) estimates in 2.5 x 2.5 km boxes of snow-covered area with reference to snow-free images, (4) single radiance-threshold level for entire basin, (5) radiance-threshold setting locally edited by reference to altitude contours and other images, (6) two-band color-sensitive extraction locally edited as in (5), and (7) digital (spectral) pattern recognition techniques. The seven methods are compared in regard to speed of measurement, precision, the ability to recognize snow in deep shadow or in trees, relative cost, and whether useful supplemental data are produced.

  14. Level set method with automatic selective local statistics for brain tumor segmentation in MR images.

    PubMed

    Thapaliya, Kiran; Pyun, Jae-Young; Park, Chun-Su; Kwon, Goo-Rak

    2013-01-01

    The level set approach is a powerful tool for segmenting images. This paper proposes a method for segmenting brain tumors in MR images. A new signed pressure function (SPF) that can efficiently stop the contours at weak or blurred edges is introduced. The local statistics of the different objects present in the MR images were calculated, and using these local statistics, the tumor objects were identified among the different objects. In level set methods, the calculation of the parameters is a challenging task; here, the different parameters were calculated automatically for different types of images. The basic thresholding value was updated and adjusted automatically for different MR images, and this thresholding value was used to calculate the different parameters in the proposed algorithm. The proposed algorithm was tested on magnetic resonance images of the brain for tumor segmentation, and its performance was evaluated visually and quantitatively. Numerical experiments on brain tumor images highlighted the efficiency and robustness of this method.

  15. Degraded Chinese rubbing images thresholding based on local first-order statistics

    NASA Astrophysics Data System (ADS)

    Wang, Fang; Hou, Ling-Ying; Huang, Han

    2017-06-01

    Segmenting Chinese characters from degraded document images is a necessary step for optical character recognition (OCR); however, it is challenging due to the various kinds of noise in such images. In this paper, we present three local first-order statistics methods for adaptive thresholding that segment text from non-text in Chinese rubbing images. Both visual inspection and numerical evaluation of the segmentation results were carried out. In experiments, the methods obtained better results than classical techniques on the binarization of real Chinese rubbing images and the PHIBD 2012 dataset.

  16. Prediction of Antibacterial Activity from Physicochemical Properties of Antimicrobial Peptides

    PubMed Central

    Melo, Manuel N.; Ferre, Rafael; Feliu, Lídia; Bardají, Eduard; Planas, Marta; Castanho, Miguel A. R. B.

    2011-01-01

    Consensus is gathering that antimicrobial peptides that exert their antibacterial action at the membrane level must reach a local concentration threshold to become active. Studies of peptide interaction with model membranes do identify such disruptive thresholds, but demonstrations of the possible correlation of these with the in vivo onset of activity have only recently been proposed. In addition, such thresholds observed in model membranes occur at local peptide concentrations close to full membrane coverage. In this work we fully develop an interaction model of antimicrobial peptides with biological membranes; by exploring the consequences of the underlying partition formalism we arrive at a relationship that provides antibacterial activity prediction from two biophysical parameters: the affinity of the peptide to the membrane and the critical bound peptide-to-lipid ratio. A straightforward and robust method to implement this relationship, with potential application to high-throughput screening approaches, is presented and tested. In addition, disruptive thresholds in model membranes and the onset of antibacterial peptide activity are shown to occur over the same range of locally bound peptide concentrations (10 to 100 mM), which reconciles the two types of observations. PMID:22194847

  17. High-resolution modeling of thermal thresholds and environmental influences on coral bleaching for local and regional reef management.

    PubMed

    Kumagai, Naoki H; Yamano, Hiroya

    2018-01-01

    Coral reefs are one of the world's most threatened ecosystems, with global and local stressors contributing to their decline. Excessive sea-surface temperatures (SSTs) can cause coral bleaching, resulting in coral death and decreases in coral cover. A SST threshold of 1 °C over the climatological maximum is widely used to predict coral bleaching. In this study, we refined thermal indices predicting coral bleaching at high-spatial resolution (1 km) by statistically optimizing thermal thresholds, as well as considering other environmental influences on bleaching such as ultraviolet (UV) radiation, water turbidity, and cooling effects. We used a coral bleaching dataset derived from the web-based monitoring system Sango Map Project, at scales appropriate for the local and regional conservation of Japanese coral reefs. We recorded coral bleaching events in the years 2004-2016 in Japan. We revealed the influence of multiple factors on the ability to predict coral bleaching, including selection of thermal indices, statistical optimization of thermal thresholds, quantification of multiple environmental influences, and use of multiple modeling methods (generalized linear models and random forests). After optimization, differences in predictive ability among thermal indices were negligible. Thermal index, UV radiation, water turbidity, and cooling effects were important predictors of the occurrence of coral bleaching. Predictions based on the best model revealed that coral reefs in Japan have experienced recent and widespread bleaching. A practical method to reduce bleaching frequency by screening UV radiation was also demonstrated in this paper.

  18. High-resolution modeling of thermal thresholds and environmental influences on coral bleaching for local and regional reef management

    PubMed Central

    Yamano, Hiroya

    2018-01-01

    Coral reefs are one of the world’s most threatened ecosystems, with global and local stressors contributing to their decline. Excessive sea-surface temperatures (SSTs) can cause coral bleaching, resulting in coral death and decreases in coral cover. A SST threshold of 1 °C over the climatological maximum is widely used to predict coral bleaching. In this study, we refined thermal indices predicting coral bleaching at high-spatial resolution (1 km) by statistically optimizing thermal thresholds, as well as considering other environmental influences on bleaching such as ultraviolet (UV) radiation, water turbidity, and cooling effects. We used a coral bleaching dataset derived from the web-based monitoring system Sango Map Project, at scales appropriate for the local and regional conservation of Japanese coral reefs. We recorded coral bleaching events in the years 2004–2016 in Japan. We revealed the influence of multiple factors on the ability to predict coral bleaching, including selection of thermal indices, statistical optimization of thermal thresholds, quantification of multiple environmental influences, and use of multiple modeling methods (generalized linear models and random forests). After optimization, differences in predictive ability among thermal indices were negligible. Thermal index, UV radiation, water turbidity, and cooling effects were important predictors of the occurrence of coral bleaching. Predictions based on the best model revealed that coral reefs in Japan have experienced recent and widespread bleaching. A practical method to reduce bleaching frequency by screening UV radiation was also demonstrated in this paper. PMID:29473007

  19. Cognitive and Neural Bases of Skilled Performance.

    DTIC Science & Technology

    1987-10-04

    advantage is that this method is not computationally demanding, and model-specific analyses such as high-precision source localization with realistic...and a two-high-threshold model satisfy theoretical and pragmatic independence. Discrimination and bias measures from these two models comparing...recognition memory of patients with dementing diseases, amnesics, and normal controls. We found the two-high-threshold model to be more sensitive...

  20. Finger Vein Segmentation from Infrared Images Based on a Modified Separable Mumford Shah Model and Local Entropy Thresholding

    PubMed Central

    Dermatas, Evangelos

    2015-01-01

    A novel method for finger vein pattern extraction from infrared images is presented. This method involves four steps: preprocessing, which performs local normalization of the image intensity; image enhancement; image segmentation; and finally postprocessing for image cleaning. In the image enhancement step, an image that is both smooth and similar to the original is sought. The enhanced image is obtained by minimizing the objective function of a modified separable Mumford-Shah model. Since this minimization procedure is computationally intensive for large images, a local application of the Mumford-Shah model in small window neighborhoods is proposed. The finger veins are located in concave nonsmooth regions, so, in order to distinguish them from the other tissue parts, all the differences between the smooth neighborhoods obtained by the local application of the model and the corresponding windows of the original image are added. After this step, the veins in the enhanced image are sufficiently emphasized, and an accurate segmentation can be obtained readily by a local entropy thresholding method. Finally, the resulting binary image may suffer from some misclassifications, so a postprocessing step is performed in order to extract a robust finger vein pattern. PMID:26120357
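
    The final step lends itself to a compact illustration. Below is a minimal sketch of entropy-based binarization using scikit-image's rank entropy filter followed by an Otsu cut on the entropy map; the window radius and the Otsu step are illustrative choices, not necessarily the authors' exact local entropy thresholding procedure.

    ```python
    # Hedged sketch: binarize an enhanced vein image via its local-entropy map.
    from skimage import img_as_ubyte
    from skimage.filters import threshold_otsu
    from skimage.filters.rank import entropy
    from skimage.morphology import disk

    def entropy_threshold(image, radius=9):
        """Binarize a grayscale image by thresholding its local-entropy map."""
        ent = entropy(img_as_ubyte(image), disk(radius))  # bits per pixel per window
        return ent > threshold_otsu(ent)                  # global cut on entropy map

    # Usage: vein_mask = entropy_threshold(enhanced_image)
    ```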

  1. Using local multiplicity to improve effect estimation from a hypothesis-generating pharmacogenetics study.

    PubMed

    Zou, W; Ouyang, H

    2016-02-01

    We propose a multiple estimation adjustment (MEA) method to correct effect overestimation due to selection bias from a hypothesis-generating study (HGS) in pharmacogenetics. MEA uses a hierarchical Bayesian approach to jointly model individual effect estimates from maximum likelihood estimation (MLE) in a region and shrinks them toward the regional effect. Unlike many methods that model a fixed selection scheme, MEA capitalizes on local multiplicity independent of selection. We compared mean square errors (MSEs) in simulated HGSs from naive MLE, MEA and a conditional likelihood adjustment (CLA) method that models threshold selection bias. We observed that MEA effectively reduced the MSE of MLE on null effects with or without selection, and had a clear advantage over CLA on extreme MLE estimates from null effects under lenient threshold selection in small samples, which are common among 'top' associations from a pharmacogenetics HGS.

  2. Definition of temperature thresholds: the example of the French heat wave warning system.

    PubMed

    Pascal, Mathilde; Wagner, Vérène; Le Tertre, Alain; Laaidi, Karine; Honoré, Cyrille; Bénichou, Françoise; Beaudeau, Pascal

    2013-01-01

    Heat-related deaths should be at least partly preventable. In France, prevention measures are activated when minimum and maximum temperatures averaged over three days reach city-specific thresholds. The current thresholds were computed based on a descriptive analysis of past heat waves and on local expert judgement. We tested whether a different method would confirm these thresholds. The study was set in the six cities of Paris, Lyon, Marseille, Nantes, Strasbourg and Limoges between 1973 and 2003. For each city, we estimated the excess mortality associated with different temperature thresholds, using a generalised additive model controlling for long-term trends, seasons and days of the week. These models were used to compute the mortality predicted by different percentiles of temperature. The thresholds were chosen as the percentiles associated with a significant excess mortality. In all cities, there was a good correlation between the current thresholds and the thresholds derived from the models, with 0°C to 3°C differences for averaged maximum temperatures. Both sets of thresholds were able to anticipate the main periods of excess mortality during the summers of 1973 to 2003. A simple method relying on descriptive analysis and expert judgement is sufficient to define protective temperature thresholds and to prevent heat wave mortality. As temperatures increase with climate change and adaptation is ongoing, more research is required to understand if and when thresholds should be modified.

  3. Locally Weighted Score Estimation for Quantile Classification in Binary Regression Models

    PubMed Central

    Rice, John D.; Taylor, Jeremy M. G.

    2016-01-01

    One common use of binary response regression methods is classification based on an arbitrary probability threshold dictated by the particular application. Since this threshold is given a priori, it is sensible to incorporate it into our estimation procedure. Specifically, for the linear logistic model, we solve a set of locally weighted score equations, using a kernel-like weight function centered at the threshold. The bandwidth for the weight function is selected by cross-validation of a novel hybrid loss function that combines classification error and a continuous measure of divergence between observed and fitted values; other possible cross-validation functions based on more common binary classification metrics are also examined. This work has much in common with robust estimation, but differs from previous approaches in this area in its focus on prediction, specifically classification into high- and low-risk groups. Simulation results are given showing the reduction in error rates that can be obtained with this method when compared with maximum likelihood estimation, especially under certain forms of model misspecification. Analysis of a melanoma data set is presented to illustrate the use of the method in practice. PMID:28018492
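
    The weighting idea can be sketched as an iteratively reweighted logistic fit: observations whose fitted probability lies near the classification threshold receive the most weight. The Gaussian kernel, bandwidth h, and fixed-point iteration below are illustrative; the paper solves locally weighted score equations directly.

    ```python
    # Hedged sketch: kernel-weighted logistic regression centered at a threshold.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def locally_weighted_logistic(X, y, threshold=0.5, h=0.2, n_iter=10):
        model = LogisticRegression(max_iter=1000)
        w = np.ones(len(y))                       # start from ordinary MLE
        for _ in range(n_iter):
            model.fit(X, y, sample_weight=w)
            p = model.predict_proba(X)[:, 1]
            w = np.exp(-0.5 * ((p - threshold) / h) ** 2)  # Gaussian kernel weights
        return model

    # Usage: clf = locally_weighted_logistic(X_train, y_train, threshold=0.2)
    ```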

  4. A Continuous Threshold Expectile Model.

    PubMed

    Zhang, Feipeng; Li, Qunhua

    2017-12-01

    Expectile regression is a useful tool for exploring the relation between the response and the explanatory variables beyond the conditional mean. A continuous threshold expectile regression is developed for modeling data in which the effect of a covariate on the response variable is linear but varies below and above an unknown threshold in a continuous way. The estimators for the threshold and the regression coefficients are obtained using a grid search approach. The asymptotic properties of all the estimators are derived, and the estimator for the threshold is shown to achieve root-n consistency. A weighted CUSUM-type test statistic is proposed for the existence of a threshold at a given expectile, and its asymptotic properties are derived under both the null and the local alternative models. This test only requires fitting the model under the null hypothesis of no threshold, so it is computationally more efficient than likelihood-ratio-type tests. Simulation studies show that the proposed estimators and test have desirable finite-sample performance in both homoscedastic and heteroscedastic cases. Application of the proposed method to a Dutch growth dataset and a baseball pitcher salary dataset reveals interesting insights. The proposed method is implemented in the R package cthreshER.
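
    A minimal sketch of the bent-line model y = b0 + b1*x + b2*(x - t)_+ fitted by asymmetric least squares, with a grid search over the threshold t; the grid and the inner IRLS loop are illustrative and not the cthreshER implementation.

    ```python
    # Hedged sketch: grid-search estimation of a continuous threshold expectile model.
    import numpy as np

    def expectile_bentline(x, y, tau=0.5, n_grid=100, n_irls=30):
        """Return (threshold, coefficients) for y = b0 + b1*x + b2*max(x - t, 0)."""
        grid = np.linspace(np.quantile(x, 0.05), np.quantile(x, 0.95), n_grid)
        best_loss, best_t, best_beta = np.inf, None, None
        for t in grid:
            X = np.column_stack([np.ones_like(x), x, np.maximum(x - t, 0.0)])
            beta = np.linalg.lstsq(X, y, rcond=None)[0]
            for _ in range(n_irls):               # asymmetric least squares via IRLS
                w = np.where(y - X @ beta > 0, tau, 1.0 - tau)
                sw = np.sqrt(w)
                beta = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
            r = y - X @ beta
            loss = np.sum(np.where(r > 0, tau, 1.0 - tau) * r ** 2)
            if loss < best_loss:
                best_loss, best_t, best_beta = loss, t, beta
        return best_t, best_beta
    ```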

  5. Schwinger-variational-principle theory of collisions in the presence of multiple potentials

    NASA Astrophysics Data System (ADS)

    Robicheaux, F.; Giannakeas, P.; Greene, Chris H.

    2015-08-01

    A theoretical method for treating collisions in the presence of multiple potentials is developed by employing the Schwinger variational principle. The current treatment agrees with the local (regularized) frame transformation theory and extends its capabilities. Specifically, the Schwinger variational approach gives results without the divergences that need to be regularized in other methods. Furthermore, it provides a framework to identify the origin of these singularities and possibly improve the local frame transformation. We have used the method to obtain the scattering parameters for different confining potentials symmetric in x, y. The method is also used to treat photodetachment processes in the presence of various confining potentials, thereby highlighting effects of the infinitely many closed channels. Two general features predicted are the vanishing of the total photoabsorption probability at every channel threshold and the occurrence of resonances below the channel thresholds for negative scattering lengths. In addition, the case of negative-ion photodetachment in the presence of uniform magnetic fields is also considered, where unique features emerge at large scattering lengths.

  6. Accurate Construction of Photoactivated Localization Microscopy (PALM) Images for Quantitative Measurements

    PubMed Central

    Coltharp, Carla; Kessler, Rene P.; Xiao, Jie

    2012-01-01

    Localization-based superresolution microscopy techniques such as Photoactivated Localization Microscopy (PALM) and Stochastic Optical Reconstruction Microscopy (STORM) have allowed investigations of cellular structures with unprecedented optical resolutions. One major obstacle to interpreting superresolution images, however, is the overcounting of molecule numbers caused by fluorophore photoblinking. Using both experimental and simulated images, we determined the effects of photoblinking on the accurate reconstruction of superresolution images and on quantitative measurements of structural dimension and molecule density made from those images. We found that structural dimension and relative density measurements can be made reliably from images that contain photoblinking-related overcounting, but accurate absolute density measurements, and consequently faithful representations of molecule counts and positions in cellular structures, require the application of a clustering algorithm to group localizations that originate from the same molecule. We analyzed how applying a simple algorithm with different clustering thresholds (tThresh and dThresh) affects the accuracy of reconstructed images, and developed an easy method to select optimal thresholds. We also identified an empirical criterion to evaluate whether an imaging condition is appropriate for accurate superresolution image reconstruction with the clustering algorithm. Both the threshold selection method and imaging condition criterion are easy to implement within existing PALM clustering algorithms and experimental conditions. The main advantage of our method is that it generates a superresolution image and molecule position list that faithfully represents molecule counts and positions within a cellular structure, rather than only summarizing structural properties into ensemble parameters. This feature makes it particularly useful for cellular structures of heterogeneous densities and irregular geometries, and allows a variety of quantitative measurements tailored to specific needs of different biological systems. PMID:23251611
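
    The grouping step can be illustrated with a simple greedy pass: a localization is merged into an existing molecule if it appears within dThresh of the molecule's running position and within tThresh frames of its last appearance. The single-pass scheme and parameter values below are illustrative, not the published algorithm.

    ```python
    # Hedged sketch: group photoblinking localizations into molecules.
    import numpy as np

    def group_localizations(frames, xy, d_thresh=50.0, t_thresh=3):
        """frames: (N,) sorted frame indices; xy: (N, 2) positions (e.g., nm).
        Returns an (N,) array of molecule labels."""
        labels = -np.ones(len(frames), dtype=int)
        centers, last_seen, counts = [], [], []
        for i, (f, p) in enumerate(zip(frames, xy)):
            for m, c in enumerate(centers):
                if f - last_seen[m] <= t_thresh and np.linalg.norm(p - c) <= d_thresh:
                    labels[i] = m
                    counts[m] += 1
                    centers[m] = c + (p - c) / counts[m]  # update running mean position
                    last_seen[m] = f
                    break
            if labels[i] < 0:                             # no match: new molecule
                labels[i] = len(centers)
                centers.append(np.asarray(p, dtype=float))
                last_seen.append(f)
                counts.append(1)
        return labels
    ```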

  7. Prediction of spatially explicit rainfall intensity–duration thresholds for post-fire debris-flow generation in the western United States

    USGS Publications Warehouse

    Staley, Dennis M.; Negri, Jacquelyn; Kean, Jason W.; Laber, Jayme L.; Tillery, Anne C.; Youberg, Ann M.

    2017-01-01

    Early warning of post-fire debris-flow occurrence during intense rainfall has traditionally relied upon a library of regionally specific empirical rainfall intensity–duration thresholds. Development of this library and the calculation of rainfall intensity-duration thresholds often require several years of monitoring local rainfall and hydrologic response to rainstorms, a time-consuming approach where results are often only applicable to the specific region where data were collected. Here, we present a new, fully predictive approach that utilizes rainfall, hydrologic response, and readily available geospatial data to predict rainfall intensity–duration thresholds for debris-flow generation in recently burned locations in the western United States. Unlike the traditional approach to defining regional thresholds from historical data, the proposed methodology permits the direct calculation of rainfall intensity–duration thresholds for areas where no such data exist. The thresholds calculated by this method are demonstrated to provide predictions that are of similar accuracy, and in some cases outperform, previously published regional intensity–duration thresholds. The method also provides improved predictions of debris-flow likelihood, which can be incorporated into existing approaches for post-fire debris-flow hazard assessment. Our results also provide guidance for the operational expansion of post-fire debris-flow early warning systems in areas where empirically defined regional rainfall intensity–duration thresholds do not currently exist.
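
    The abstract does not give the functional form of the thresholds; a common convention in the debris-flow literature writes a threshold curve as the power law I = α·D^β. A minimal exceedance check under that assumed form, with placeholder coefficients:

    ```python
    # Hedged sketch: power-law intensity-duration threshold check.
    # alpha and beta are placeholders, not values from the paper.
    def exceeds_id_threshold(intensity_mm_h, duration_h, alpha=12.0, beta=-0.6):
        """True if a rainfall burst plots above the threshold curve I = alpha*D**beta."""
        return intensity_mm_h > alpha * duration_h ** beta

    # Usage: exceeds_id_threshold(30.0, 0.25)  # 30 mm/h sustained for 15 minutes
    ```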

  8. Prediction of spatially explicit rainfall intensity-duration thresholds for post-fire debris-flow generation in the western United States

    NASA Astrophysics Data System (ADS)

    Staley, Dennis M.; Negri, Jacquelyn A.; Kean, Jason W.; Laber, Jayme L.; Tillery, Anne C.; Youberg, Ann M.

    2017-02-01

    Early warning of post-fire debris-flow occurrence during intense rainfall has traditionally relied upon a library of regionally specific empirical rainfall intensity-duration thresholds. Development of this library and the calculation of rainfall intensity-duration thresholds often require several years of monitoring local rainfall and hydrologic response to rainstorms, a time-consuming approach where results are often only applicable to the specific region where data were collected. Here, we present a new, fully predictive approach that utilizes rainfall, hydrologic response, and readily available geospatial data to predict rainfall intensity-duration thresholds for debris-flow generation in recently burned locations in the western United States. Unlike the traditional approach to defining regional thresholds from historical data, the proposed methodology permits the direct calculation of rainfall intensity-duration thresholds for areas where no such data exist. The thresholds calculated by this method are demonstrated to provide predictions that are of similar accuracy, and in some cases outperform, previously published regional intensity-duration thresholds. The method also provides improved predictions of debris-flow likelihood, which can be incorporated into existing approaches for post-fire debris-flow hazard assessment. Our results also provide guidance for the operational expansion of post-fire debris-flow early warning systems in areas where empirically defined regional rainfall intensity-duration thresholds do not currently exist.

  9. Quantum secret sharing using orthogonal multiqudit entangled states

    NASA Astrophysics Data System (ADS)

    Bai, Chen-Ming; Li, Zhi-Hui; Liu, Cheng-Ji; Li, Yong-Ming

    2017-12-01

    In this work, we investigate the distinguishability of orthogonal multiqudit entangled states under restricted local operations and classical communication. According to these properties, we propose a quantum secret sharing scheme to realize three types of access structures, i.e., the (n, n)-threshold, the restricted (3, n)-threshold and the restricted (4, n)-threshold schemes (called LOCC-QSS schemes). All cooperating players in the restricted threshold schemes are from two disjoint groups. In the proposed protocol, the participants use computational basis measurements and classical communication to distinguish between those orthogonal states and reconstruct the original secret. Furthermore, we also analyze the security of our scheme against four primary quantum attacks and give a simple encoding method in order to better prevent the participant conspiracy attack.

  10. Study on the efficacy of ELA-Max (4% liposomal lidocaine) compared with EMLA cream (eutectic mixture of local anesthetics) using thermosensory threshold analysis in adult volunteers.

    PubMed

    Tang, M B Y; Goon, A T J; Goh, C L

    2004-04-01

    ELA-Max and EMLA cream are topical anesthetics that have been shown to have similar anesthetic efficacy in previous studies. Our aim was to evaluate the analgesic efficacy of ELA-Max in comparison with EMLA cream using a novel method of thermosensory threshold analysis. A thermosensory analyzer was used to assess warmth- and heat-induced pain thresholds. No statistically significant difference was found in pain thresholds using either formulation. However, EMLA cream increased the heat-induced pain threshold to a greater extent than ELA-Max. Thermosensory measurement and analysis were well tolerated and no adverse events were encountered. EMLA cream may be superior to ELA-Max for heat-induced pain. This study suggests that thermosensory measurement may be another suitable tool for future topical anesthetic efficacy studies.

  11. The short time Fourier transform and local signals

    NASA Astrophysics Data System (ADS)

    Okumura, Shuhei

    In this thesis, I examine the theoretical properties of the short time discrete Fourier transform (STFT). The STFT is obtained by applying the Fourier transform over a fixed-size moving window to the input series. We move the window by one time point at a time, so we have overlapping windows. I present several theoretical properties of the STFT, applied to various types of complex-valued, univariate time series inputs, and their outputs in closed form. In particular, just like the discrete Fourier transform, the STFT's modulus time series takes large positive values when the input is a periodic signal. One main point is that a white noise input results in the STFT output being a complex-valued stationary time series, and we can derive its time and time-frequency dependency structure, such as the cross-covariance functions. Our primary focus is the detection of local periodic signals. I present a method to detect local signals by computing the probability that the squared-modulus STFT time series exceeds a threshold for several consecutive observations, where the run begins with one exceeding observation immediately preceded by one observation below the threshold. We discuss a method to reduce the computation of such probabilities using the Box-Cox transformation and the delta method, and show that it works well in comparison to the Monte Carlo simulation method.
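
    A minimal sketch of the detection idea using SciPy: compute the squared-modulus STFT with a one-sample hop and flag frequency bins that stay above a threshold for several consecutive windows. The window length, quantile-based threshold, and run length are illustrative parameters.

    ```python
    # Hedged sketch: detect local periodic signals as runs of large |STFT|^2 values.
    import numpy as np
    from scipy.signal import stft

    def detect_local_signal(x, fs, nperseg=128, thresh=None, run=5):
        # One-sample hop reproduces the overlapping-window STFT; memory-heavy for long x.
        f, t, Z = stft(x, fs=fs, nperseg=nperseg, noverlap=nperseg - 1)
        power = np.abs(Z) ** 2
        if thresh is None:
            thresh = np.quantile(power, 0.999)    # crude stand-in for a noise model
        detections = []
        for i, row in enumerate(power > thresh):  # one row per frequency bin
            count = 0
            for j, hit in enumerate(row):
                count = count + 1 if hit else 0
                if count == run:                  # `run` consecutive exceedances
                    detections.append((f[i], t[j - run + 1]))
                    count = 0
        return detections                         # list of (frequency, onset time)
    ```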

  12. Quantifying how the full local distribution of daily precipitation is changing and its uncertainties

    NASA Astrophysics Data System (ADS)

    Stainforth, David; Chapman, Sandra; Watkins, Nicholas

    2016-04-01

    The study of the consequences of global warming would benefit from quantification of geographical patterns of change at specific thresholds or quantiles, and from a better understanding of the intrinsic uncertainties in such quantities. For precipitation, a range of indices has been developed which focus on high percentiles (e.g. rainfall falling on days above the 99th percentile) and on absolute extremes (e.g. maximum annual one-day precipitation), but scientific assessments are best undertaken in the context of changes in the whole climatic distribution. Furthermore, the relevant thresholds for climate-vulnerable policy decisions, adaptation planning and impact assessments vary according to the specific sector and location of interest. We present a methodology which maintains the flexibility to provide information at different thresholds for different downstream users, both scientists and decision makers. We develop a method [1,2] for analysing local climatic timeseries to assess which quantiles of the local climatic distribution show the greatest and most robust changes in daily precipitation data. We extract from the data quantities that characterize the changes in time of the likelihood of daily precipitation above a threshold and of the amount of precipitation on those days. Our method is a simple mathematical deconstruction of how the difference between two observations from two different time periods can be assigned to the combination of natural statistical variability and/or the consequences of secular climate change. This deconstruction facilitates an assessment of how fast different quantiles of precipitation distributions are changing. This involves not only determining which quantiles and geographical locations show the greatest and smallest changes, but also those at which uncertainty undermines the ability to make confident statements about any change there may be. We demonstrate this approach using E-OBS gridded data [3], which are timeseries of local daily precipitation across Europe over the last 60+ years. We treat geographical location and precipitation as independent variables and thus obtain as outputs the geographical pattern of change at given thresholds of precipitation. This information is model-independent, thus providing data of direct value in model calibration and assessment. [1] S. C. Chapman, D. A. Stainforth, N. W. Watkins, 2013, On estimating local long-term climate trends, Phil. Trans. R. Soc. A, 371, 20120287. [2] S. C. Chapman, D. A. Stainforth, N. W. Watkins, 2015, Limits to the quantification of local climate change, ERL, 10, 094018. [3] M. R. Haylock et al., 2008, A European daily high-resolution gridded dataset of surface temperature and precipitation, J. Geophys. Res. (Atmospheres), 113, D20119.

  13. How to select a proper early warning threshold to detect infectious disease outbreaks based on the China infectious disease automated alert and response system (CIDARS).

    PubMed

    Wang, Ruiping; Jiang, Yonggen; Michael, Engelgau; Zhao, Genming

    2017-06-12

    The Chinese Center for Disease Control and Prevention (China CDC) developed the China Infectious Disease Automated Alert and Response System (CIDARS) in 2005. The CIDARS is used to strengthen infectious disease surveillance and aid in the early warning of outbreaks, and it has been integrated into the routine outbreak monitoring efforts of CDCs at all levels in China. The early warning threshold is crucial for outbreak detection in the CIDARS, but CDCs at all levels currently use thresholds recommended by the China CDC, and these recommended thresholds have recognized limitations. Our study therefore seeks to explore an operational method for selecting proper early warning thresholds according to the epidemic features of local infectious diseases. The data used in this study were extracted from the web-based Nationwide Notifiable Infectious Diseases Reporting Information System (NIDRIS), and data for infectious disease cases were organized by calendar week (1-52) and year (2009-2015) in Excel format. Px was calculated using a percentile-based moving window (moving window [5 weeks x 5 years], x), where x represents one of 12 centiles (0.40, 0.45, 0.50, ..., 0.95). Outbreak signals for the 12 Px were calculated using the moving percentile method (MPM) based on data from the CIDARS. When the outbreak signals generated by the 'mean + 2SD' gold standard were in line with the Px-generated outbreak signals for each week of 2014, that Px was defined as the proper threshold for the infectious disease. Finally, the performance of the newly selected threshold for each infectious disease was evaluated against simulated outbreak signals based on 2015 data. Six infectious diseases were selected in this study (chickenpox, mumps, hand, foot and mouth disease (HFMD), scarlet fever, influenza and rubella). Proper thresholds for chickenpox (P75), mumps (P80), influenza (P75), rubella (P45), HFMD (P75), and scarlet fever (P80) were identified. The selected thresholds for these 6 infectious diseases could detect almost all simulated outbreaks within a shorter time period than the thresholds recommended by the China CDC. It is beneficial to select proper early warning thresholds in the CIDARS based on the characteristics and epidemic features of local diseases.
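
    A minimal sketch of the moving percentile method as described: the warning threshold for week w is a percentile of the case counts in a 5-week by 5-year historical window. The indexing conventions are illustrative.

    ```python
    # Hedged sketch: moving-percentile early-warning threshold.
    import numpy as np

    def mpm_threshold(history, week, pct=75):
        """history: (n_years, 52) weekly counts from previous years; week: 0-51.
        Returns the Px threshold for the current week."""
        cols = [(week + d) % 52 for d in range(-2, 3)]  # 5-week window, wraps year end
        window = history[-5:, cols]                     # last 5 years x 5 weeks
        return np.percentile(window, pct)

    # Alert if this week's count exceeds the threshold:
    # alert = current_count > mpm_threshold(history, week, pct=75)
    ```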

  14. Cloud Detection of Optical Satellite Images Using Support Vector Machine

    NASA Astrophysics Data System (ADS)

    Lee, Kuan-Yi; Lin, Chao-Hung

    2016-06-01

    Cloud cover is generally present in optical remote-sensing images, which limits the usage of acquired images and increases the difficulty of data analysis tasks such as image compositing, correction of atmospheric effects, calculation of vegetation indices, land cover classification, and land cover change detection. In previous studies, thresholding has been a common and useful method for cloud detection. However, a selected threshold is usually suitable only for certain cases or local study areas, and it may fail in other cases. In other words, thresholding-based methods are data-sensitive. Besides, there are many exceptions to handle, and the environment changes dynamically, so using the same threshold value on various data is not effective. In this study, a threshold-free method based on the Support Vector Machine (SVM) is proposed, which can avoid the abovementioned problems. The main idea of this study is to adopt a statistical model to detect clouds instead of a subjective thresholding-based method. The features used in a classifier are the key to successful classification. The feature extraction is therefore based on the Automatic Cloud Cover Assessment (ACCA) algorithm, which relies on the physical characteristics of clouds to distinguish clouds from other objects, and on the Fmask algorithm (Zhu et al., 2012), which uses many thresholds and criteria to screen clouds, cloud shadows, and snow. Spatial and temporal information is also important for satellite images; consequently, the co-occurrence matrix and the temporal variance with uniformity of the major principal axis are used in the proposed method. We aim to classify images into three groups: cloud, non-cloud and others. In the experiments, images acquired by the Landsat 7 Enhanced Thematic Mapper Plus (ETM+), containing agricultural landscapes, snow areas, and islands, are tested. Experimental results demonstrate that the detection accuracy of the proposed method is better than that of related methods.
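
    A minimal sketch of the classification stage, assuming the per-pixel features (ACCA/Fmask-style band tests, co-occurrence texture, temporal variance) have already been extracted; the RBF kernel and hyperparameters are illustrative.

    ```python
    # Hedged sketch: three-class (cloud / non-cloud / other) SVM on pixel features.
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    def train_cloud_classifier(features, labels):
        """features: (n_pixels, n_features); labels: 0=cloud, 1=non-cloud, 2=other."""
        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
        clf.fit(features, labels)
        return clf

    # Usage: class_map = train_cloud_classifier(F_train, y_train).predict(F_scene)
    ```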

  15. Uncertainty in determining extreme precipitation thresholds

    NASA Astrophysics Data System (ADS)

    Liu, Bingjun; Chen, Junfan; Chen, Xiaohong; Lian, Yanqing; Wu, Lili

    2013-10-01

    Extreme precipitation events are rare and occur mostly on a relatively small and local scale, which makes it difficult to set thresholds for extreme precipitation in a large basin. Based on long-term daily precipitation data from 62 observation stations in the Pearl River Basin, this study has assessed the applicability of non-parametric, parametric, and detrended fluctuation analysis (DFA) methods in determining extreme precipitation thresholds (EPTs) and the certainty of the EPTs from each method. Analyses from this study show that the non-parametric absolute critical value method is easy to use but unable to reflect differences in the spatial distribution of rainfall. The non-parametric percentile method can account for the spatial distribution of precipitation, but its threshold value is sensitive to the size of the rainfall data series and to the selection of a percentile, which makes it difficult to determine reasonable threshold values for a large basin. The parametric method can provide the most apt description of extreme precipitation by fitting extreme precipitation distributions with probability distribution functions; however, the selection of probability distribution functions, the goodness-of-fit tests, and the size of the rainfall data series can greatly affect the fitting accuracy. In contrast to the non-parametric and parametric methods, which are unable to provide EPTs with certainty, the DFA method, although computationally involved, has proven to be the most appropriate method, able to provide a unique set of EPTs for a large basin with uneven spatio-temporal precipitation distribution. The consistency of the spatial distribution of the DFA-based thresholds with the annual average precipitation, the coefficient of variation (CV), and the coefficient of skewness (CS) of the daily precipitation further confirms that EPTs determined by the DFA method are more reasonable and applicable for the Pearl River Basin.

  16. Landslide susceptibility and early warning model for shallow landslide in Taiwan

    NASA Astrophysics Data System (ADS)

    Huang, Chun-Ming; Wei, Lun-Wei; Chi, Chun-Chi; Chang, Kan-Tsun; Lee, Chyi-Tyi

    2017-04-01

    This study aims to develop a regional susceptibility model and warning thresholds, as well as to establish an early warning system, in order to prevent and reduce the losses caused by rainfall-induced shallow landslides in Taiwan. For the purpose of practical application, Taiwan is divided into nearly 185,000 slope units, and the susceptibility and warning threshold of each slope unit were analyzed as basic information for disaster prevention. The geological characteristics, failure mechanisms and occurrence times of landslides were recorded for more than 900 cases through field investigation and interviews with residents in order to examine the relationship between landslides and rainfall. Logistic regression analysis was performed to evaluate the landslide susceptibility, and an I3-R24 rainfall threshold model was proposed for the early warning of landslides. Validation against recent landslide cases shows that the model is suitable for regional shallow landslide warning, and most of the cases could be warned 3 to 6 hours in advance. We also propose a slope-unit area weighted method to establish local rainfall thresholds for vulnerable villages in order to improve practical application. Validation of the local rainfall thresholds also shows good agreement with the occurrence times reported by newspapers. Finally, a web-based "Rainfall-induced Landslide Early Warning System" is built and connected to real-time radar rainfall data so that real-time landslide warning can be achieved. Keywords: landslide, susceptibility analysis, rainfall threshold

  17. Segment and fit thresholding: a new method for image analysis applied to microarray and immunofluorescence data.

    PubMed

    Ensink, Elliot; Sinha, Jessica; Sinha, Arkadeep; Tang, Huiyuan; Calderone, Heather M; Hostetter, Galen; Winter, Jordan; Cherba, David; Brand, Randall E; Allen, Peter J; Sempere, Lorenzo F; Haab, Brian B

    2015-10-06

    Experiments involving the high-throughput quantification of image data require algorithms for automation. A challenge in the development of such algorithms is to properly interpret signals over a broad range of image characteristics, without the need for manual adjustment of parameters. Here we present a new approach for locating signals in image data, called Segment and Fit Thresholding (SFT). The method assesses statistical characteristics of small segments of the image and determines the best-fit trends between the statistics. Based on the relationships, SFT identifies segments belonging to background regions; analyzes the background to determine optimal thresholds; and analyzes all segments to identify signal pixels. We optimized the initial settings for locating background and signal in antibody microarray and immunofluorescence data and found that SFT performed well over multiple, diverse image characteristics without readjustment of settings. When used for the automated analysis of multicolor, tissue-microarray images, SFT correctly found the overlap of markers with known subcellular localization, and it performed better than a fixed threshold and Otsu's method for selected images. SFT promises to advance the goal of full automation in image analysis.
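
    A heavily simplified sketch in the spirit of SFT: compute statistics on small segments, treat the dim segments as background, and derive a signal threshold from the background statistics. The background selection and k-sigma rule are illustrative simplifications of the published trend-fitting algorithm.

    ```python
    # Hedged sketch: segment statistics -> background model -> signal threshold.
    import numpy as np

    def sft_like_threshold(image, seg=16, bg_fraction=0.5, k=4.0):
        h, w = image.shape
        tiles = [image[i:i + seg, j:j + seg]
                 for i in range(0, h - seg + 1, seg)
                 for j in range(0, w - seg + 1, seg)]
        means = np.array([t.mean() for t in tiles])
        stds = np.array([t.std() for t in tiles])
        bg = np.argsort(means)[: int(bg_fraction * len(means))]  # dimmest segments
        return means[bg].mean() + k * np.median(stds[bg])        # background + k*noise

    # Signal mask: image > sft_like_threshold(image)
    ```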

  18. Determination of vessel cross-sectional area by thresholding in Radon space

    PubMed Central

    Gao, Yu-Rong; Drew, Patrick J

    2014-01-01

    The cross-sectional area of a blood vessel determines its resistance, and thus is a regulator of local blood flow. However, the cross-sections of penetrating vessels in the cortex can be non-circular, and dilation and constriction can change the shape of the vessels. We show that observed vessel shape changes can introduce large errors in flux calculations when using a single diameter measurement. Because of these shape changes, typical diameter measurement approaches, such as the full-width at half-maximum (FWHM), that depend on a single diameter axis will generate erroneous results, especially when calculating flux. Here, we present an automated method—thresholding in Radon space (TiRS)—for determining the cross-sectional area of a convex object, such as a penetrating vessel observed with two-photon laser scanning microscopy (2PLSM). The image is transformed into Radon space and thresholded there; the thresholded image is then transformed back to image space and contiguous pixels are segmented. The TiRS method is analogous to taking the FWHM across multiple axes and is more robust to noise and shape changes than FWHM and thresholding methods. We demonstrate the superior precision of the TiRS method with in vivo 2PLSM measurements of vessel diameter. PMID:24736890
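
    A minimal sketch of the TiRS idea using scikit-image: project the image over many angles, apply a half-maximum cut to each projection, transform back, and keep the central connected region. The angle count and the final reconstruction cut are illustrative.

    ```python
    # Hedged sketch: cross-sectional area by thresholding in Radon space.
    import numpy as np
    from skimage.measure import label
    from skimage.transform import iradon, radon

    def tirs_area(image, n_angles=180, final_cut=0.5):
        theta = np.linspace(0.0, 180.0, n_angles, endpoint=False)
        sino = radon(image, theta=theta, circle=False)
        # Half-maximum cut applied independently to each projection (angle).
        binary_sino = (sino >= 0.5 * sino.max(axis=0, keepdims=True)).astype(float)
        recon = iradon(binary_sino, theta=theta, circle=False)
        mask = recon >= final_cut * recon.max()
        lab = label(mask)
        center_label = lab[lab.shape[0] // 2, lab.shape[1] // 2]  # vessel at center
        return int(np.sum(lab == center_label)) if center_label else 0  # area, px

    # Usage: area_px = tirs_area(vessel_crop)
    ```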

  19. Segment and Fit Thresholding: A New Method for Image Analysis Applied to Microarray and Immunofluorescence Data

    PubMed Central

    Ensink, Elliot; Sinha, Jessica; Sinha, Arkadeep; Tang, Huiyuan; Calderone, Heather M.; Hostetter, Galen; Winter, Jordan; Cherba, David; Brand, Randall E.; Allen, Peter J.; Sempere, Lorenzo F.; Haab, Brian B.

    2016-01-01

    Certain experiments involve the high-throughput quantification of image data, thus requiring algorithms for automation. A challenge in the development of such algorithms is to properly interpret signals over a broad range of image characteristics, without the need for manual adjustment of parameters. Here we present a new approach for locating signals in image data, called Segment and Fit Thresholding (SFT). The method assesses statistical characteristics of small segments of the image and determines the best-fit trends between the statistics. Based on the relationships, SFT identifies segments belonging to background regions; analyzes the background to determine optimal thresholds; and analyzes all segments to identify signal pixels. We optimized the initial settings for locating background and signal in antibody microarray and immunofluorescence data and found that SFT performed well over multiple, diverse image characteristics without readjustment of settings. When used for the automated analysis of multi-color, tissue-microarray images, SFT correctly found the overlap of markers with known subcellular localization, and it performed better than a fixed threshold and Otsu’s method for selected images. SFT promises to advance the goal of full automation in image analysis. PMID:26339978

  20. Interplay between the local information based behavioral responses and the epidemic spreading in complex networks.

    PubMed

    Liu, Can; Xie, Jia-Rong; Chen, Han-Shuang; Zhang, Hai-Feng; Tang, Ming

    2015-10-01

    The spreading of an infectious disease can trigger human behavioral responses to the disease, which in turn play a crucial role in the spreading of the epidemic. In this study, to illustrate the impacts of human behavioral responses, a new class of individuals, S(F), is introduced into the classical susceptible-infected-recovered model. The S(F) state represents susceptible individuals who take self-initiated protective measures to lower their probability of being infected; a susceptible individual may move to the S(F) state with a given response rate when contacting an infectious neighbor. Via the percolation method, theoretical formulas for the epidemic threshold as well as the prevalence of the epidemic are derived. Our finding indicates that, as the response rate increases, the epidemic threshold is enhanced and the prevalence of the epidemic is reduced. The analytical results are verified by numerical simulations. In addition, we demonstrate that, because the mean-field method neglects dynamic correlations, it yields a wrong result: the epidemic threshold appears unrelated to the response rate, i.e., the additional S(F) state has no impact on the epidemic threshold.

  1. On the need for a time- and location-dependent estimation of the NDSI threshold value for reducing existing uncertainties in snow cover maps at different scales

    NASA Astrophysics Data System (ADS)

    Härer, Stefan; Bernhardt, Matthias; Siebers, Matthias; Schulz, Karsten

    2018-05-01

    Knowledge of current snow cover extent is essential for characterizing energy and moisture fluxes at the Earth's surface. The snow-covered area (SCA) is often estimated by using optical satellite information in combination with the normalized-difference snow index (NDSI). The NDSI approach uses a threshold to decide whether a satellite pixel is classified as snow covered or snow free. The spatiotemporal representativeness of the standard threshold of 0.4 is, however, questionable at the local scale. Here, we use local snow cover maps derived from ground-based photography to continuously calibrate the NDSI threshold values (NDSIthr) of Landsat satellite images at two European mountain sites over the period from 2010 to 2015. The Research Catchment Zugspitzplatt (RCZ, Germany) and the Vernagtferner area (VF, Austria) are both located within a single Landsat scene. Nevertheless, the long-term analysis demonstrated that the NDSIthr values at the two sites are not correlated (r = 0.17) and differ from the standard threshold of 0.4. For further comparison, a dynamic and locally optimized NDSI threshold was used, as well as another locally optimized literature threshold value (0.7). It was shown that large uncertainties in the prediction of the SCA, of up to 24.1%, exist in satellite snow cover maps in cases where the standard threshold of 0.4 is used, but a newly developed calibrated quadratic polynomial model which accounts for seasonal threshold dynamics can reduce this error. The model reduces the SCA uncertainties at the calibration site VF by 50% in the evaluation period and was also able to significantly improve the results at RCZ. Additionally, a scaling experiment shows that the positive effect of a locally adapted threshold diminishes at pixel sizes of 500 m or larger, underlining the general applicability of the standard threshold at larger scales.
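
    For reference, the index and the pixel-wise snow test are straightforward; the sketch below uses green and shortwave-infrared reflectance with the standard 0.4 threshold discussed in the record, and a calibrated, seasonally varying value can be substituted for it.

    ```python
    # Hedged sketch: NDSI snow mask; band choice depends on the sensor.
    import numpy as np

    def snow_mask(green, swir, ndsi_thr=0.4):
        ndsi = (green - swir) / np.clip(green + swir, 1e-6, None)  # avoid divide-by-zero
        return ndsi > ndsi_thr
    ```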

  2. An adaptive embedded mesh procedure for leading-edge vortex flows

    NASA Technical Reports Server (NTRS)

    Powell, Kenneth G.; Beer, Michael A.; Law, Glenn W.

    1989-01-01

    A procedure for solving the conical Euler equations on an adaptively refined mesh is presented, along with a method for determining which cells to refine. The solution procedure is a central-difference cell-vertex scheme. The adaptation procedure is made up of a parameter on which the refinement decision is based, and a method for choosing a threshold value of the parameter. The refinement parameter is a measure of mesh-convergence, constructed by comparison of locally coarse- and fine-grid solutions. The threshold for the refinement parameter is based on the curvature of the curve relating the number of cells flagged for refinement to the value of the refinement threshold. Results for three test cases are presented. The test problem is that of a delta wing at angle of attack in a supersonic free-stream. The resulting vortices and shocks are captured efficiently by the adaptive code.

  3. An Active Contour Model Based on Adaptive Threshold for Extraction of Cerebral Vascular Structures.

    PubMed

    Wang, Jiaxin; Zhao, Shifeng; Liu, Zifeng; Tian, Yun; Duan, Fuqing; Pan, Yutong

    2016-01-01

    Cerebral vessel segmentation is essential for clinical diagnosis and related research. However, automatic segmentation of brain vessels remains challenging because of variable vessel shapes and highly complex vessel geometry. This study proposes a new active contour model (ACM), implemented by the level-set method, for segmenting vessels from TOF-MRA data. The energy function of the new model, combining both region intensity and boundary information, is composed of two region terms, one boundary term and one penalty term. A global threshold, representing the lower gray boundary of the target object obtained by maximum intensity projection (MIP), is defined in the first region term and is used to guide the segmentation of the thick vessels. In the second term, a dynamic intensity threshold is employed to extract the tiny vessels. The boundary term is used to drive the contours to evolve towards boundaries with high gradients. The penalty term is used to avoid reinitialization of the level-set function. Experimental results on 10 clinical brain data sets demonstrate that our method not only achieves a better Dice similarity coefficient than the global-threshold-based method and a localized hybrid level-set method, but is also able to extract whole cerebral vessel trees, including the thin vessels.

  4. Evaluation Of Water Quality At River Bian In Merauke Papua

    NASA Astrophysics Data System (ADS)

    Djaja, Irba; Purwanto, P.; Sunoko, H. R.

    2018-02-01

    The River Bian in Merauke Regency has been used by the local people of Papua (the Marind) who live along the river to fulfill their daily needs, such as bathing, washing clothes and dishes, and even defecation and waste disposal, including domestic waste, as well as for ceremonial activities related to the local traditional culture. Changes in land use for other necessities and the domestic activities of the local people have mounted pressure on the River Bian, thus decreasing the quality of the river. This study had the objectives of determining and analyzing the water quality and water quality status of the River Bian, and its compliance with water quality standards for its intended uses. The study determined sampling points by a purposive sampling method and took water samples with a grab method. The analysis of the water quality was performed by standard and pollution index methods. The study revealed that the water quality of the River Bian, with respect to BOD, had exceeded the quality threshold at station 3. The COD parameter at all stations had exceeded the quality threshold for class III. Water quality decreased with increasing pollution index (PI) at stations 1, 2, and 3. In other words, the River Bian had been lightly contaminated.

  5. An infrared small target detection method based on multiscale local homogeneity measure

    NASA Astrophysics Data System (ADS)

    Nie, Jinyan; Qu, Shaocheng; Wei, Yantao; Zhang, Liming; Deng, Lizhen

    2018-05-01

    Infrared (IR) small target detection plays an important role in the field of image detection owing to its intrinsic characteristics. This paper presents a multiscale local homogeneity measure (MLHM) for infrared small target detection, which can enhance the performance of IR small target detection systems. Firstly, the intra-patch homogeneity of the target itself and the inter-patch heterogeneity between the target and the local background regions are integrated to enhance the saliency of small targets. Secondly, a multiscale measure based on local regions is proposed to obtain the most appropriate response. Finally, an adaptive threshold method is applied to small target segmentation. Experimental results on three different scenarios indicate that MLHM performs well under the interference of strong noise.
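
    A generic sketch of the idea: a center-versus-surround local contrast measure evaluated at several scales, followed by a k-sigma adaptive threshold. This illustrates the principle rather than reproducing the authors' exact MLHM.

    ```python
    # Hedged sketch: multiscale center-surround response for small-target detection.
    import numpy as np
    from scipy.ndimage import uniform_filter

    def small_target_map(img, scales=(3, 5, 7), k=4.0):
        img = img.astype(float)
        response = np.zeros_like(img)
        for s in scales:
            center = uniform_filter(img, size=s)        # candidate target patch mean
            surround = uniform_filter(img, size=3 * s)  # local background mean
            response = np.maximum(response, center - surround)
        thr = response.mean() + k * response.std()      # adaptive segmentation threshold
        return response, response > thr

    # Usage: saliency, mask = small_target_map(ir_frame)
    ```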

  6. On the optimal z-score threshold for SISCOM analysis to localize the ictal onset zone.

    PubMed

    De Coster, Liesbeth; Van Laere, Koen; Cleeren, Evy; Baete, Kristof; Dupont, Patrick; Van Paesschen, Wim; Goffin, Karolien E

    2018-04-17

    In epilepsy patients, SISCOM, or subtraction ictal single photon emission computed tomography co-registered to magnetic resonance imaging, has become a routinely used, non-invasive technique to localize the ictal onset zone (IOZ). Thresholding of clusters at a predefined number of standard deviations from normality (z-score) is generally accepted to localize the IOZ. In this study, we aimed to assess the robustness of this parameter in a group of patients with well-characterized drug-resistant epilepsy in whom the exact location of the IOZ was known after successful epilepsy surgery. Eighty patients underwent preoperative SISCOM and were seizure free for a postoperative period of at least 1 year. SISCOMs with z-score thresholds of 2 and 1.5 were analyzed by two experienced readers separately, blinded to the clinical ground truth data. Their reported location of the IOZ was compared with the operative resection zone. Furthermore, confidence scores for the SISCOM IOZ were compared for the two thresholds. Visual reporting with z-score thresholds of 1.5 and 2 showed no statistically significant difference in localizing correspondence with the ground truth (70 vs. 72%, respectively, p = 0.17). Interrater agreement was moderate (κ = 0.65) at the threshold of 1.5, but high (κ = 0.84) at a threshold of 2, where reviewers were also significantly more confident (p < 0.01). SISCOM is a clinically useful, routinely used modality in the preoperative work-up in many epilepsy surgery centers. We found no significant differences in the localizing value of the IOZ using a threshold of 1.5 or 2, but interrater agreement and reader confidence were higher using a z-score threshold of 2.
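
    A minimal sketch of the SISCOM computation itself, assuming the ictal and interictal SPECT volumes are already coregistered (to each other and to the MRI): normalize each scan, subtract, convert to z-scores within the brain, and keep voxels above the chosen threshold.

    ```python
    # Hedged sketch: subtraction SPECT z-map thresholded at z > 1.5 or z > 2.
    import numpy as np

    def siscom_zmap(ictal, interictal, brain_mask, z_thr=2.0):
        i = ictal / ictal[brain_mask].mean()            # normalize to mean brain counts
        b = interictal / interictal[brain_mask].mean()
        diff = i - b
        z = (diff - diff[brain_mask].mean()) / diff[brain_mask].std()
        return np.where(brain_mask & (z > z_thr), z, 0.0)  # suprathreshold clusters
    ```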

  7. Automated segmentation of linear time-frequency representations of marine-mammal sounds.

    PubMed

    Dadouchi, Florian; Gervaise, Cedric; Ioana, Cornel; Huillery, Julien; Mars, Jérôme I

    2013-09-01

    Many marine mammals produce highly nonlinear frequency modulations. Determining the time-frequency support of these sounds has various applications, including recognition, localization, and density estimation. This study introduces an automated spectrogram segmentation method with few parameters that is based on a theoretical probabilistic framework. In the first step, the background noise in the spectrogram is fitted with a chi-squared distribution and thresholded using a Neyman-Pearson approach. In the second step, the number of false detections in time-frequency regions is modeled as a binomial distribution, and then, through a Neyman-Pearson strategy, the time-frequency bins are gathered into regions of interest. The proposed method is validated on real data consisting of large sequences of whistles from common dolphins, collected in the Bay of Biscay (France). The proposed method is also compared with two alternative approaches: the first is smoothing and thresholding of the spectrogram; the second is thresholding of the spectrogram followed by the use of morphological operators to gather the time-frequency bins and to remove false positives. The proposed method is shown to increase the probability of detection for the same probability of false alarm.
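
    The first step has a compact closed form: under Gaussian background noise, a spectrogram power bin follows a scaled chi-squared law with 2 degrees of freedom, so the Neyman-Pearson threshold for a chosen false-alarm probability is a chi-squared quantile scaled by the estimated noise power. A sketch with a robust, median-based noise estimate:

    ```python
    # Hedged sketch: Neyman-Pearson threshold for spectrogram power bins.
    import numpy as np
    from scipy.stats import chi2

    def np_threshold(power, pfa=1e-3):
        """power: array of |STFT|^2 values assumed to be mostly background noise."""
        sigma2 = np.median(power) / chi2.median(2)  # robust per-bin noise power
        return sigma2 * chi2.ppf(1.0 - pfa, 2)      # P(noise bin > threshold) = pfa

    # Binary time-frequency map: tf_map = power > np_threshold(power)
    ```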

  8. Localization and interaural time difference (ITD) thresholds for cochlear implant recipients with preserved acoustic hearing in the implanted ear

    PubMed Central

    Gifford, René H.; Grantham, D. Wesley; Sheffield, Sterling W.; Davis, Timothy J.; Dwyer, Robert; Dorman, Michael F.

    2014-01-01

    The purpose of this study was to investigate horizontal plane localization and interaural time difference (ITD) thresholds for 14 adult cochlear implant recipients with hearing preservation in the implanted ear. Localization to broadband noise was assessed in an anechoic chamber with a 33-loudspeaker array extending from −90 to +90°. Three listening conditions were tested including bilateral hearing aids, bimodal (implant + contralateral hearing aid) and best aided (implant + bilateral hearing aids). ITD thresholds were assessed, under headphones, for low-frequency stimuli including a 250-Hz tone and bandpass noise (100–900 Hz). Localization, in overall rms error, was significantly poorer in the bimodal condition (mean: 60.2°) as compared to both bilateral hearing aids (mean: 46.1°) and the best-aided condition (mean: 43.4°). ITD thresholds were assessed for the same 14 adult implant recipients as well as 5 normal-hearing adults. ITD thresholds were highly variable across the implant recipients ranging from the range of normal to ITDs not present in real-world listening environments (range: 43 to over 1600 μs). ITD thresholds were significantly correlated with localization, the degree of interaural asymmetry in low-frequency hearing, and the degree of hearing preservation related benefit in the speech reception threshold (SRT). These data suggest that implant recipients with hearing preservation in the implanted ear have access to binaural cues and that the sensitivity to ITDs is significantly correlated with localization and degree of preserved hearing in the implanted ear. PMID:24607490

  9. Localization and interaural time difference (ITD) thresholds for cochlear implant recipients with preserved acoustic hearing in the implanted ear.

    PubMed

    Gifford, René H; Grantham, D Wesley; Sheffield, Sterling W; Davis, Timothy J; Dwyer, Robert; Dorman, Michael F

    2014-06-01

    The purpose of this study was to investigate horizontal plane localization and interaural time difference (ITD) thresholds for 14 adult cochlear implant recipients with hearing preservation in the implanted ear. Localization to broadband noise was assessed in an anechoic chamber with a 33-loudspeaker array extending from -90 to +90°. Three listening conditions were tested including bilateral hearing aids, bimodal (implant + contralateral hearing aid) and best aided (implant + bilateral hearing aids). ITD thresholds were assessed, under headphones, for low-frequency stimuli including a 250-Hz tone and bandpass noise (100-900 Hz). Localization, in overall rms error, was significantly poorer in the bimodal condition (mean: 60.2°) as compared to both bilateral hearing aids (mean: 46.1°) and the best-aided condition (mean: 43.4°). ITD thresholds were assessed for the same 14 adult implant recipients as well as 5 normal-hearing adults. ITD thresholds were highly variable across the implant recipients ranging from the range of normal to ITDs not present in real-world listening environments (range: 43 to over 1600 μs). ITD thresholds were significantly correlated with localization, the degree of interaural asymmetry in low-frequency hearing, and the degree of hearing preservation related benefit in the speech reception threshold (SRT). These data suggest that implant recipients with hearing preservation in the implanted ear have access to binaural cues and that the sensitivity to ITDs is significantly correlated with localization and degree of preserved hearing in the implanted ear. Copyright © 2014. Published by Elsevier B.V.

  10. Machine vision application in animal trajectory tracking.

    PubMed

    Koniar, Dušan; Hargaš, Libor; Loncová, Zuzana; Duchoň, František; Beňo, Peter

    2016-04-01

    This article was motivated by physicians' demand for technical support in research on pathologies of the gastrointestinal tract [10], based on machine vision tools. The proposed solution should be a less expensive alternative to existing radio-frequency (RF) methods. The objective of the whole experiment was to evaluate the amount of animal motion as a function of the degree of pathology (gastric ulcer). In the theoretical part of the article, several methods of animal trajectory tracking are presented: two differential methods based on background subtraction, thresholding methods based on global and local thresholds, and color matching with a chosen template containing a searched spectrum of colors. The methods were tested offline on five video samples. Each sample showed a moving guinea pig confined in a cage under various lighting conditions. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
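
    A minimal sketch of the background-subtraction approach with OpenCV, tracking the centroid of the largest moving blob in each frame; the MOG2 subtractor and the parameter values are illustrative choices, not the authors' exact pipeline.

    ```python
    # Hedged sketch: animal trajectory from background subtraction.
    import cv2

    def track_centroids(video_path):
        cap = cv2.VideoCapture(video_path)
        back = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=16)
        path = []
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            mask = back.apply(frame)               # foreground (moving) pixels
            mask = cv2.medianBlur(mask, 5)         # suppress speckle noise
            contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                           cv2.CHAIN_APPROX_SIMPLE)
            if contours:
                c = max(contours, key=cv2.contourArea)   # assume largest blob = animal
                m = cv2.moments(c)
                if m["m00"] > 0:
                    path.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
        cap.release()
        return path                                # list of (x, y) centroids
    ```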

  11. Seeing visual word forms: spatial summation, eccentricity and spatial configuration.

    PubMed

    Kao, Chien-Hui; Chen, Chien-Chung

    2012-06-01

    We investigated observers' performance in detecting and discriminating visual word forms as a function of target size and retinal eccentricity. The contrast threshold of visual words was measured with a spatial two-alternative forced-choice paradigm and a PSI adaptive method. The observers were to indicate which of two sides contained a stimulus in the detection task, and which contained a real character (as opposed to a pseudo- or non-character) in the discrimination task. When the target size was sufficiently small, the detection threshold of a character decreased as its size increased, with a slope of -1/2 on log-log coordinates, up to a critical size at all eccentricities and for all stimulus types. The discrimination threshold decreased with target size with a slope of -1 up to a critical size that was dependent on stimulus type and eccentricity. Beyond that size, the threshold decreased with a slope of -1/2 on log-log coordinates before leveling out. The data was well fit by a spatial summation model that contains local receptive fields (RFs) and a summation across these filters within an attention window. Our result implies that detection is mediated by local RFs smaller than any tested stimuli and thus detection performance is dominated by summation across receptive fields. On the other hand, discrimination is dominated by a summation within a local RF in the fovea but a cross RF summation in the periphery. Copyright © 2012 Elsevier Ltd. All rights reserved.

  12. A robustness test of the braided device foreshortening algorithm

    NASA Astrophysics Data System (ADS)

    Moyano, Raquel Kale; Fernandez, Hector; Macho, Juan M.; Blasco, Jordi; San Roman, Luis; Narata, Ana Paula; Larrabide, Ignacio

    2017-11-01

    Different computational methods have recently been proposed to simulate the virtual deployment of a braided stent inside a patient's vasculature. Those methods are primarily based on the segmentation of the region of interest to obtain local vessel morphology descriptors. The goal of this work is to evaluate the influence of the segmentation quality on the method named "Braided Device Foreshortening" (BDF). METHODS: We used the 3DRA images of 10 aneurysmatic patients (cases). Each case was segmented by applying a marching cubes algorithm with a broad range of thresholds in order to generate 10 surface models per case. We selected a braided device and applied the BDF algorithm to each surface model. The range of the computed flow-diverter lengths for each case was obtained to calculate the variability of the method against the threshold segmentation values. RESULTS: An evaluation study over the 10 clinical cases indicates that the final length of the deployed flow diverter in each vessel model is stable, yielding a maximum difference of 11.19% in vessel diameter and a maximum of 9.14% in the simulated stent length across the threshold values. The average coefficient of variation was found to be 4.08%. CONCLUSION: A study evaluating how the segmentation threshold affects the simulated length of the deployed FD was presented. The segmentation algorithm used to segment intracranial aneurysm 3D angiography images shows small variation in the resulting stent simulation.

  13. Quantum-secret-sharing scheme based on local distinguishability of orthogonal multiqudit entangled states

    NASA Astrophysics Data System (ADS)

    Wang, Jingtao; Li, Lixiang; Peng, Haipeng; Yang, Yixian

    2017-02-01

    In this study, we propose the concept of judgment space to investigate the quantum-secret-sharing scheme based on local distinguishability (called LOCC-QSS). With this concept, the properties of orthogonal multiqudit entangled states under restricted local operations and classical communication (LOCC) can be described more clearly. According to these properties, we reveal that, in the previous (k, n)-threshold LOCC-QSS scheme, there are two required conditions for the selected quantum states to resist the unambiguous attack: (i) their k-level judgment spaces are orthogonal, and (ii) their (k-1)-level judgment spaces are equal. Practically, if k

  14. Combining local scaling and global methods to detect soil pore space

    NASA Astrophysics Data System (ADS)

    Martin-Sotoca, Juan Jose; Saa-Requejo, Antonio; Grau, Juan B.; Tarquis, Ana M.

    2017-04-01

    The characterization of the spatial distribution of soil pore structures is essential for obtaining parameters that influence several models of water flow and/or microbial growth processes. The first step in pore structure characterization is obtaining soil images that best approximate reality. Over the last decade, major technological advances in X-ray computed tomography (CT) have allowed the investigation and reconstruction of natural porous media architectures at very fine scales. The subsequent step is delimiting the pore structure (pore space) in the CT soil images by applying a threshold. CT-scan images often show low contrast at the solid-void interface, which makes this step difficult. Different delimitation methods can result in different spatial distributions of pores, influencing the parameters used in the models. Recently, a new local segmentation method using local greyscale value (GV) concentration variabilities, based on fractal concepts, was presented. This method creates singularity maps that measure the GV concentration at each point. The C-A method was combined with the singularity map approach (the Singularity-CA method) to define local thresholds that can be applied to binarize CT images. Comparing this method with classical methods such as Otsu and Maximum Entropy, we observed that more pores can be detected, mainly due to its ability to amplify anomalous concentrations. However, it also delineated many small pores incorrectly. In this work, we present an improved version of the Singularity-CA method that avoids this problem, essentially by combining it with the classical global methods. References: Martín-Sotoca, J.J., A. Saa-Requejo, J.B. Grau, A.M. Tarquis. New segmentation method based on fractal properties using singularity maps. Geoderma, 287, 40-53, 2017. Martín-Sotoca, J.J., A. Saa-Requejo, J.B. Grau, A.M. Tarquis. Local 3D segmentation of soil pore space based on fractal properties using singularity maps. Geoderma, http://dx.doi.org/10.1016/j.geoderma.2016.11.029. Torre, Iván G., Juan C. Losada and A.M. Tarquis. Multiscaling properties of soil images. Biosystems Engineering, http://dx.doi.org/10.1016/j.biosystemseng.2016.11.006.
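
    The combination of a local and a global criterion can be sketched as follows. This is an illustration only: scikit-image's threshold_local stands in for the singularity-based local thresholds of the actual method, and the pores-are-dark convention is our assumption:

      import numpy as np
      from skimage.filters import threshold_local, threshold_otsu

      def combined_pore_mask(img, block_size=51):
          # pores are darker than the soil matrix, so they fall below the thresholds
          local_mask = img < threshold_local(img, block_size)
          global_mask = img < threshold_otsu(img)
          # a pore detected locally is kept only if the global method confirms it,
          # suppressing the spurious small pores of a purely local approach
          return local_mask & global_mask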

  15. A comparison of performance of automatic cloud coverage assessment algorithm for Formosat-2 image using clustering-based and spatial thresholding methods

    NASA Astrophysics Data System (ADS)

    Hsu, Kuo-Hsien

    2012-11-01

    Formosat-2 imagery is high-spatial-resolution (2 m GSD) remote sensing satellite data comprising one panchromatic band and four multispectral bands (blue, green, red, near-infrared). An essential step in the daily processing of received Formosat-2 images is to estimate the cloud statistic of each image using the Automatic Cloud Coverage Assessment (ACCA) algorithm. The cloud statistic is subsequently recorded as an important metadata item for the image product catalog. In this paper, we propose an ACCA method with two consecutive stages: pre-processing and post-processing analysis. In the pre-processing analysis, unsupervised K-means classification, Sobel's method, a thresholding method, non-cloudy pixel reexamination, and a cross-band filter method are applied in sequence to determine the cloud statistic. In the post-processing analysis, the box-counting fractal method is applied. In other words, the cloud statistic is first determined via pre-processing analysis, and its correctness across the spectral bands is then cross-examined qualitatively and quantitatively via post-processing analysis. The selection of an appropriate thresholding method is critical to the result of the ACCA method. Therefore, in this work, we first conduct a series of experiments on clustering-based and spatial thresholding methods, including Otsu's, Local Entropy (LE), Joint Entropy (JE), Global Entropy (GE), and Global Relative Entropy (GRE) methods, for performance comparison. The results show that Otsu's and GE methods both perform better than the others for Formosat-2 images. Additionally, our proposed ACCA method, with Otsu's method selected as the thresholding method, successfully extracts the cloudy pixels of Formosat-2 images for accurate cloud statistic estimation.
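
    As a minimal illustration of the thresholding stage only, an Otsu-based cloud mask for a single band might look like the sketch below; the cloud-brightness assumption and the input name are ours, not from the paper:

      from skimage.filters import threshold_otsu

      def cloud_statistic(band):
          # clustering-based global threshold; clouds are bright in visible/NIR bands
          t = threshold_otsu(band)
          cloudy = band > t
          return 100.0 * cloudy.mean()  # cloud coverage percentage for the metadata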

  16. Development of Image Segmentation Methods for Intracranial Aneurysms

    PubMed Central

    Qian, Yi; Morgan, Michael

    2013-01-01

    Though providing vital means for visualization, diagnosis, and quantification in decision-making for the treatment of vascular pathologies, vascular segmentation remains a process marred by numerous challenges. In this study, we validate segmentations of eight aneurysms using two existing methods: the Region Growing Threshold and the Chan-Vese model. These methods were evaluated by comparing their results with a manual segmentation. Based upon this validation study, we propose a new Threshold-Based Level Set (TLS) method to overcome the existing problems. Across the divergent segmentation methods, we found that the volumes of the aneurysm models differed by up to 24%. The local anatomical shapes of the arteries were likewise found to significantly influence the simulation results, revealing inherent limitations in the application of cerebrovascular segmentation. In contrast, the volume differences obtained with the TLS method remained relatively low, at only around 5%. The proposed TLS method holds potential for automatic aneurysm segmentation without the setting of a seed point or intensity threshold. This technique will further enable the segmentation of anatomically complex cerebrovascular shapes, thereby allowing for more accurate and efficient simulations from medical imagery. PMID:23606905
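
    The TLS method itself is not specified in enough detail here to reproduce, but the Chan-Vese baseline used in the comparison is available in scikit-image; a minimal 2D sketch under that assumption:

      from skimage import img_as_float
      from skimage.segmentation import chan_vese

      def chan_vese_mask(slice2d):
          # level-set segmentation of one grayscale angiographic slice;
          # the paper's comparison also includes a Region Growing Threshold method
          return chan_vese(img_as_float(slice2d), mu=0.25)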

  17. Threshold Switchable Particles (TSPs) To Control Internal Hemorrhage

    DTIC Science & Technology

    2016-09-01

    hemorrhage at local sites. Four collaborating laboratories worked together under this contract to define threshold levels of activators of blood clotting ... such that the candidate clotting activators will circulate in the blood at a concentration below the threshold necessary to trigger clotting, but ... accumulation of the activators at sites of internal injury/bleeding will cause the local concentration of clotting activators to exceed the clotting ...

  18. Separable Ernst-Shakin-Thaler expansions of local potentials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bund, G.W.

    The boundary-condition Ernst-Shakin-Thaler method, introduced previously to generate separable expansions of local potentials of finite range, is applied to the study of the triplet s-wave Malfliet-Tjon potential. The effect on the T matrix of varying the radius at which the boundary condition is applied is analyzed. Further, we compare the convergence of the n-d scattering cross sections in the quartet state below the breakup threshold for expansions corresponding to two different boundaries.

  19. Discrete diffraction managed solitons: Threshold phenomena and rapid decay for general nonlinearities

    NASA Astrophysics Data System (ADS)

    Choi, Mi-Ran; Hundertmark, Dirk; Lee, Young-Ran

    2017-10-01

    We prove a threshold phenomenon for the existence/non-existence of energy-minimizing solitary solutions of the diffraction management equation for strictly positive and zero average diffraction. Our methods allow for a large class of nonlinearities: they are, for example, allowed to change sign, and we impose only the weakest possible condition on the local diffraction profile, namely that it be locally integrable. The solutions are found as minimizers of a nonlinear and nonlocal variational problem which is translation invariant. There exists a critical threshold λcr such that minimizers for this variational problem exist if their power is greater than λcr, and no minimizers exist with power less than the critical threshold. We also give simple criteria for the finiteness and strict positivity of the critical threshold. Our proof of existence of minimizers is rather direct and avoids the use of Lions' concentration compactness argument. Furthermore, we give precise quantitative lower bounds on the exponential decay rate of the diffraction management solitons, which confirm the physical heuristic prediction for the asymptotic decay rate. Moreover, for ground state solutions, these bounds give a quantitative lower bound for the divergence of the exponential decay rate in the limit of vanishing average diffraction. For zero average diffraction, we prove quantitative bounds which show that the solitons decay much faster than exponentially. Our results considerably extend and strengthen the results of Hundertmark and Lee [J. Nonlinear Sci. 22, 1-38 (2012) and Commun. Math. Phys. 309(1), 1-21 (2012)].

  20. Effects of global financial crisis on network structure in a local stock market

    NASA Astrophysics Data System (ADS)

    Nobi, Ashadun; Maeng, Seong Eun; Ha, Gyeong Gyun; Lee, Jae Woo

    2014-08-01

    This study considers the effects of the 2008 global financial crisis on threshold networks of a local Korean financial market around the time of the crisis. Prices of individual stocks belonging to KOSPI 200 (Korea Composite Stock Price Index 200) are considered for three time periods: before, during, and after the crisis. Threshold networks are constructed from fully connected cross-correlation networks by assigning a threshold to the cross-correlation coefficients. At a high threshold, only one large cluster, consisting of firms in the financial sector, heavy industry, and construction, is observed during the crisis. Before and after the crisis, however, there are several fragmented clusters belonging to various sectors. The power law of the degree distribution in threshold networks is observed within a limited range of thresholds, and the degree distributions are fatter-tailed during the crisis than before or after it. The clustering coefficient of the threshold network follows a power law within the scaling range.
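
    Constructing a threshold network from stock prices reduces to a few lines; a sketch with assumed input shapes (T daily returns for N stocks) and assumed function names:

      import numpy as np
      import networkx as nx

      def threshold_network(returns, theta):
          # fully connected cross-correlation network of the N stocks
          C = np.corrcoef(returns.T)
          # keep only links whose correlation exceeds the threshold theta
          A = (C >= theta) & ~np.eye(C.shape[0], dtype=bool)
          G = nx.from_numpy_array(A.astype(int))
          return G, nx.average_clustering(G)

    Scanning theta from low to high values reproduces the qualitative picture above: the network fragments into sector clusters at low thresholds and, during the crisis, retains one large mixed cluster at high thresholds.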

  1. Edge detection based on adaptive threshold b-spline wavelet for optical sub-aperture measuring

    NASA Astrophysics Data System (ADS)

    Zhang, Shiqi; Hui, Mei; Liu, Ming; Zhao, Zhu; Dong, Liquan; Liu, Xiaohua; Zhao, Yuejin

    2015-08-01

    In research on optical synthetic-aperture imaging systems, phase congruency is a central problem, and it is necessary to detect the sub-aperture phase. The edges in a sub-aperture system are more complex than those in a traditional optical imaging system. Moreover, owing to the steep slopes of large-aperture optical components, interference fringes may be quite dense in interference imaging, and deep phase gradients may cause a loss of phase information. An efficient edge detection method is therefore urgently needed. Wavelet analysis is a powerful tool widely used in image processing. Owing to its multi-scale transform properties, edge regions are detected with high precision at small scales, while noise is reduced as the scale increases, so the transform has a certain noise-suppression effect. In addition, an adaptive threshold method, which sets different thresholds in different regions, can distinguish edge points from noise. First, the fringe pattern is obtained and a cubic b-spline wavelet is adopted as the smoothing function. After multi-scale wavelet decomposition of the whole image, we find the local modulus maxima along the gradient directions. Because the maxima still contain noise, the adaptive threshold method is used to select among them: points whose modulus maxima exceed the threshold are taken as boundary points. Finally, we apply erosion and dilation to the resulting image to obtain consecutive image boundaries.
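
    A rough sketch of the modulus-maxima / adaptive-threshold / morphology pipeline, under stated assumptions: a Sobel gradient stands in for the cubic b-spline wavelet transform, and the local mean-plus-k-sigma form of the adaptive threshold is our choice, not the paper's:

      import numpy as np
      from scipy import ndimage as ndi

      def edge_map(fringe, block=32, k=1.0):
          gx, gy = ndi.sobel(fringe, axis=0), ndi.sobel(fringe, axis=1)
          modulus = np.hypot(gx, gy)              # gradient modulus (wavelet stand-in)
          # region-wise adaptive threshold: local mean + k * local std
          mean = ndi.uniform_filter(modulus, block)
          sq = ndi.uniform_filter(modulus ** 2, block)
          std = np.sqrt(np.maximum(sq - mean ** 2, 0.0))
          edges = modulus > mean + k * std
          # closing (dilation then erosion) joins edge points into consecutive boundaries
          return ndi.binary_closing(edges)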

  2. Extraction of Extended Small-Scale Objects in Digital Images

    NASA Astrophysics Data System (ADS)

    Volkov, V. Y.

    2015-05-01

    The problem of detecting and localizing extended small-scale objects of different shapes arises in radio observation systems that use SAR, infrared, lidar, and television cameras. An intense non-stationary background is the main difficulty for processing. Another challenge is the low quality of the images, with blobs and blurred boundaries; in addition, SAR images suffer from serious intrinsic speckle noise. The background statistics are not normal, with evident skewness and heavy tails in the probability density, so the background is hard to identify. The problem of extracting small-scale objects is solved here on the basis of directional filtering, adaptive thresholding, and morphological analysis. A new kind of mask is used, open-ended at one side, so that it is possible to extract the ends of line segments of unknown length. An advanced method of dynamic adaptive threshold setting is investigated, based on the extraction of isolated fragments after thresholding. A hierarchy of isolated fragments in the binary image is proposed for the analysis of segmentation results; it includes small-scale objects of different shape, size, and orientation. The method extracts isolated fragments in the binary image and counts the points in these fragments. The number of points in the extracted fragments, normalized to the total number of points for a given threshold, is used as the effectiveness of extraction for these fragments. The new method for adaptive threshold setting and control maximizes this effectiveness of extraction. It has optimality properties for object extraction in a normal noise field and shows effective results for real SAR images.
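
    The effectiveness criterion can be sketched as a scan over candidate thresholds; the fragment-size cutoff that defines "isolated fragments" is an assumed parameter:

      import numpy as np
      from scipy import ndimage as ndi

      def best_threshold(img, thresholds, max_fragment=50):
          best = (None, -1.0)
          for t in thresholds:
              binary = img > t
              labels, n = ndi.label(binary)
              if n == 0:
                  continue
              sizes = np.asarray(ndi.sum(binary, labels, index=range(1, n + 1)))
              # points lying in small isolated fragments, i.e. candidate objects
              small_ids = np.flatnonzero(sizes <= max_fragment) + 1
              eff = np.isin(labels, small_ids).sum() / binary.sum()
              if eff > best[1]:
                  best = (t, eff)
          return best  # threshold maximizing the effectiveness of extraction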

  3. Intelligent multi-spectral IR image segmentation

    NASA Astrophysics Data System (ADS)

    Lu, Thomas; Luong, Andrew; Heim, Stephen; Patel, Maharshi; Chen, Kang; Chao, Tien-Hsin; Chow, Edward; Torres, Gilbert

    2017-05-01

    This article presents a neural network based multi-spectral image segmentation method. A neural network is trained on selected features of both the objects and the background in longwave (LW) infrared (IR) images. Multiple iterations of training are performed until the segmentation accuracy reaches a satisfactory level. The segmentation boundary of the LW image is then used to segment the midwave (MW) and shortwave (SW) IR images. A second neural network detects local discontinuities and refines the accuracy of the local boundaries. This article compares the neural network based segmentation method to the wavelet-threshold and Grab-Cut methods. Test results show increased accuracy and robustness of this segmentation scheme for multi-spectral IR images.

  4. Local yield stress statistics in model amorphous solids

    NASA Astrophysics Data System (ADS)

    Barbot, Armand; Lerbinger, Matthias; Hernandez-Garcia, Anier; García-García, Reinaldo; Falk, Michael L.; Vandembroucq, Damien; Patinet, Sylvain

    2018-03-01

    We develop and extend a method presented by Patinet, Vandembroucq, and Falk [Phys. Rev. Lett. 117, 045501 (2016), 10.1103/PhysRevLett.117.045501] to compute the local yield stresses at the atomic scale in model two-dimensional Lennard-Jones glasses produced via differing quench protocols. This technique allows us to sample the plastic rearrangements in a nonperturbative manner for different loading directions on a well-controlled length scale. Plastic activity upon shearing correlates strongly with the locations of low yield stresses in the quenched states. This correlation is higher in more structurally relaxed systems. The distribution of local yield stresses is also shown to strongly depend on the quench protocol: the more relaxed the glass, the higher the local plastic thresholds. Analysis of the magnitude of local plastic relaxations reveals that stress drops follow exponential distributions, justifying the hypothesis of an average characteristic amplitude often conjectured in mesoscopic or continuum models. The amplitude of the local plastic rearrangements increases on average with the yield stress, regardless of the system preparation. The local yield stress varies with the shear orientation tested and strongly correlates with the plastic rearrangement locations when the system is sheared correspondingly. It is thus argued that plastic rearrangements are the consequence of shear transformation zones encoded in the glass structure that possess weak slip planes along different orientations. Finally, we justify the length scale employed in this work and extract the yield threshold statistics as a function of the size of the probing zones. This method makes it possible to derive physically grounded models of plasticity for amorphous materials by directly revealing the relevant details of the shear transformation zones that mediate this process.

  5. Prediction of spatially explicit rainfall intensity-duration thresholds for post-fire debris-flow generation in the western United States

    NASA Astrophysics Data System (ADS)

    Staley, Dennis; Negri, Jacquelyn; Kean, Jason

    2016-04-01

    Population expansion into fire-prone steeplands has resulted in an increase in post-fire debris-flow risk in the western United States. Logistic regression methods for determining debris-flow likelihood and the calculation of empirical rainfall intensity-duration thresholds for debris-flow initiation represent two common approaches for characterizing hazard and reducing risk. Logistic regression models are currently being used to rapidly assess debris-flow hazard in response to design storms of known intensities (e.g. a 10-year recurrence interval rainstorm). Empirical rainfall intensity-duration thresholds comprise a major component of the United States Geological Survey (USGS) and the National Weather Service (NWS) debris-flow early warning system at a regional scale in southern California. However, these two modeling approaches remain independent, with each approach having limitations that do not allow for synergistic local-scale (e.g. drainage-basin scale) characterization of debris-flow hazard during intense rainfall. The current logistic regression equations consider rainfall a unique independent variable, which prevents the direct calculation of the relation between rainfall intensity and debris-flow likelihood. Regional (e.g. mountain range or physiographic province scale) rainfall intensity-duration thresholds fail to provide insight into the basin-scale variability of post-fire debris-flow hazard and require an extensive database of historical debris-flow occurrence and rainfall characteristics. Here, we present a new approach that combines traditional logistic regression and intensity-duration threshold methodologies. This method allows for local characterization of both the likelihood that a debris-flow will occur at a given rainfall intensity, the direct calculation of the rainfall rates that will result in a given likelihood, and the ability to calculate spatially explicit rainfall intensity-duration thresholds for debris-flow generation in recently burned areas. Our approach synthesizes the two methods by incorporating measured rainfall intensity into each model variable (based on measures of topographic steepness, burn severity and surface properties) within the logistic regression equation. This approach provides a more realistic representation of the relation between rainfall intensity and debris-flow likelihood, as likelihood values asymptotically approach zero when rainfall intensity approaches 0 mm/h, and increase with more intense rainfall. Model performance was evaluated by comparing predictions to several existing regional thresholds. The model, based upon training data collected in southern California, USA, has proven to accurately predict rainfall intensity-duration thresholds for other areas in the western United States not included in the original training dataset. In addition, the improved logistic regression model shows promise for emergency planning purposes and real-time, site-specific early warning. With further validation, this model may permit the prediction of spatially-explicit intensity-duration thresholds for debris-flow generation in areas where empirically derived regional thresholds do not exist. This improvement would permit the expansion of the early-warning system into other regions susceptible to post-fire debris flow.
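
    To make the synthesis concrete: if the rainfall intensity I multiplies each basin descriptor inside the logit, the likelihood stays near zero at zero intensity (for a sufficiently negative intercept) and the relation can be inverted for the intensity at any target likelihood. All coefficients and descriptors below are hypothetical, for illustration only:

      import numpy as np

      def likelihood(I, b0, b, x):
          # b: fitted coefficients; x: basin descriptors (steepness, burn severity, ...)
          z = b0 + np.dot(b, x) * I        # intensity enters every model term
          return 1.0 / (1.0 + np.exp(-z))

      def intensity_threshold(p, b0, b, x):
          # invert p = 1 / (1 + exp(-(b0 + (b.x) I))) for the rainfall intensity I
          return (np.log(p / (1.0 - p)) - b0) / np.dot(b, x)

    Evaluating intensity_threshold at a fixed likelihood for every basin yields a spatially explicit map of rainfall intensity-duration thresholds, which is the stated goal of the combined approach.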

  6. Evolution of complex density-dependent dispersal strategies.

    PubMed

    Parvinen, Kalle; Seppänen, Anne; Nagy, John D

    2012-11-01

    The question of how dispersal behavior is adaptive and how it responds to changes in selection pressure is more relevant than ever, as anthropogenic habitat alteration and climate change accelerate around the world. In metapopulation models where local populations are large, and thus local population size is measured in densities, density-dependent dispersal is expected to evolve to a single-threshold strategy, in which individuals stay in patches with local population density smaller than a threshold value and move immediately away from patches with local population density larger than the threshold. Fragmentation tends to convert continuous populations into metapopulations and also to decrease local population sizes. Therefore we analyze a metapopulation model, where each patch can support only a relatively small local population and thus experience demographic stochasticity. We investigated the evolution of density-dependent dispersal, emigration and immigration, in two scenarios: adult and natal dispersal. We show that density-dependent emigration can also evolve to a nonmonotone, "triple-threshold" strategy. This interesting phenomenon results from an interplay between the direct and indirect benefits of dispersal and the costs of dispersal. We also found that, compared to juveniles, dispersing adults may benefit more from density-dependent vs. density-independent dispersal strategies.

  7. A detection method for X-ray images based on wavelet transforms: the case of the ROSAT PSPC.

    NASA Astrophysics Data System (ADS)

    Damiani, F.; Maggio, A.; Micela, G.; Sciortino, S.

    1996-02-01

    The authors have developed a method based on wavelet transforms (WT) to efficiently detect sources in PSPC X-ray images. The multiscale approach typical of WT can be used to detect sources with a large range of sizes, and to estimate their size and count rate. Significance thresholds for candidate detections (found as local WT maxima) have been derived from a detailed study of the probability distribution of the WT of a locally uniform background. The use of the exposure map allows good detection efficiency to be retained even near PSPC ribs and edges. The algorithm may also be used to obtain upper limits on the count rate of undetected objects. Simulations of realistic PSPC images containing either pure background or background plus sources were used to test the overall algorithm performance, to assess the frequency of spurious detections (vs. detection threshold), and to establish the algorithm's sensitivity. Actual PSPC images of galaxies and star clusters show the algorithm to perform well even for extended sources and crowded fields.
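
    A toy version of the detection step, under stated assumptions: a difference-of-Gaussians kernel stands in for the paper's wavelet, and a simple k-sigma cut stands in for its rigorously derived significance thresholds:

      import numpy as np
      from scipy import ndimage as ndi

      def wt_detect(img, scale, k_sigma=4.0):
          # band-pass (Mexican-hat-like) transform at one spatial scale
          wt = ndi.gaussian_filter(img, scale) - ndi.gaussian_filter(img, 2.0 * scale)
          local_max = (wt == ndi.maximum_filter(wt, size=3)) & (wt > 0)
          # crude global estimate of the background WT fluctuation
          thresh = k_sigma * wt.std()
          return np.argwhere(local_max & (wt > thresh))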

  8. A Data Centred Method to Estimate and Map Changes in the Full Distribution of Daily Precipitation and Its Exceedances

    NASA Astrophysics Data System (ADS)

    Chapman, S. C.; Stainforth, D. A.; Watkins, N. W.

    2014-12-01

    Estimates of how our climate is changing are needed locally in order to inform adaptation planning decisions. This requires quantifying the geographical patterns in changes at specific quantiles or thresholds of distributions of variables such as daily temperature or precipitation. We develop a method[1] for analysing local climatic timeseries to assess which quantiles of the local climatic distribution show the greatest and most robust changes, specifically addressing the challenges presented by 'heavy-tailed' variables such as daily precipitation. We extract from the data quantities that characterize the changes in time of the likelihood of daily precipitation above a threshold and of the relative amount of precipitation on those extreme precipitation days. Our method is a simple mathematical deconstruction of how the difference between two observations from two different time periods can be assigned to the combination of natural statistical variability and/or the consequences of secular climate change. This deconstruction facilitates an assessment of how fast different quantiles of precipitation distributions are changing. This involves determining which quantiles and geographical locations show the greatest change, but also those at which any change is highly uncertain. We demonstrate this approach using E-OBS gridded data[2] timeseries of local daily precipitation from specific locations across Europe over the last 60 years. We treat geographical location and precipitation as independent variables and thus obtain as outputs the pattern of change at a given threshold of precipitation and with geographical location. This is model-independent, thus providing data of direct value in model calibration and assessment. Our results identify regionally consistent patterns which, depending on location, show systematic increases in precipitation on the wettest days, shifts in precipitation patterns toward fewer moderate days and more heavy days, and drying across all days, which is of potential value in adaptation planning. [1] S. C. Chapman, D. A. Stainforth, N. W. Watkins, 2013, Phil. Trans. R. Soc. A, 371, 20120287; D. A. Stainforth, S. C. Chapman, N. W. Watkins, 2013, Environ. Res. Lett., 8, 034031. [2] Haylock et al., 2008, J. Geophys. Res. (Atmospheres), 113, D20119.
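
    The core of the deconstruction, comparing the same quantiles in two observation periods, reduces to a few lines; variable names are ours:

      import numpy as np

      def quantile_change(period1, period2, quantiles=None):
          # period1, period2: arrays of daily precipitation from two time periods
          if quantiles is None:
              quantiles = np.linspace(0.50, 0.99, 50)
          q1 = np.quantile(period1, quantiles)
          q2 = np.quantile(period2, quantiles)
          return quantiles, q2 - q1   # change at each quantile between the periods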

  9. Estimating the extreme low-temperature event using nonparametric methods

    NASA Astrophysics Data System (ADS)

    D'Silva, Anisha

    This thesis presents a new method of estimating the one-in-N low temperature threshold using a non-parametric statistical method called kernel density estimation, applied to daily average wind-adjusted temperatures. We apply our One-in-N Algorithm to local gas distribution companies (LDCs), as they have to forecast the daily natural gas needs of their consumers. In winter, demand for natural gas is high. Extreme low temperature events are not directly related to an LDC's gas demand forecasting, but knowledge of extreme low temperatures is important to ensure that an LDC has enough capacity to meet customer demand when extreme low temperatures occur. We present a detailed explanation of our One-in-N Algorithm and compare it to methods using the generalized extreme value distribution, the normal distribution, and the variance-weighted composite distribution. We show that our One-in-N Algorithm estimates the one-in-N low temperature threshold more accurately than the methods using the generalized extreme value distribution, the normal distribution, and the variance-weighted composite distribution according to the root mean square error (RMSE) measure at a 5% level of significance. The One-in-N Algorithm is tested by counting the number of times the daily average wind-adjusted temperature is less than or equal to the one-in-N low temperature threshold.
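
    A minimal sketch of a KDE-based one-in-N threshold, assuming the winter-season length and the grid construction; the thesis's exact algorithm may differ in these details:

      import numpy as np
      from scipy.stats import gaussian_kde

      def one_in_n_threshold(temps, n_years, days_per_season=90):
          # kernel density estimate of daily average wind-adjusted temperature
          kde = gaussian_kde(temps)
          grid = np.linspace(temps.min() - 10.0, temps.max(), 2000)
          cdf = np.cumsum(kde(grid))
          cdf /= cdf[-1]
          # a one-in-N-years event has daily probability 1 / (N * days per season)
          target = 1.0 / (n_years * days_per_season)
          return grid[np.searchsorted(cdf, target)]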

  10. Development of quantitative analysis method for stereotactic brain image: assessment of reduced accumulation in extent and severity using anatomical segmentation.

    PubMed

    Mizumura, Sunao; Kumita, Shin-ichiro; Cho, Keiichi; Ishihara, Makiko; Nakajo, Hidenobu; Toba, Masahiro; Kumazaki, Tatsuo

    2003-06-01

    With visual assessment based on three-dimensional (3D) brain image analysis methods that use a stereotactic brain coordinate system, such as three-dimensional stereotactic surface projections (3D-SSP) and statistical parametric mapping, it is difficult to quantitatively assess anatomical information and the extent of an abnormal region. In this study, we devised a method to quantitatively assess local abnormal findings by segmenting the brain map according to anatomical structure. Using this quantitative local abnormality assessment, we studied the characteristics of the distribution of reduced blood flow in cases with dementia of the Alzheimer type (DAT). For twenty-five cases with DAT (mean age, 68.9 years), all diagnosed as probable Alzheimer's disease based on the NINCDS-ADRDA criteria, we collected I-123 iodoamphetamine SPECT data. A 3D brain map produced with the 3D-SSP program was compared with data from 20 age-matched controls. To study local abnormalities in the 3D images, we divided the whole brain into 24 segments based on anatomical classification. For each segment, we assessed the extent of the abnormal region (the proportion of coordinates with a Z-value exceeding the threshold, among all coordinates within the segment) and its severity (the average Z-value of the coordinates whose Z-value exceeds the threshold). By classifying stereotactic brain coordinates according to anatomical structure, this method clarified the orientation and expansion of reduced accumulation. The method was considered useful for quantitatively grasping distribution abnormalities in the brain and changes in abnormality distribution.
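
    The two quantities defined above translate directly into code; inputs are the Z-value map and the anatomical segment labels (names are ours):

      import numpy as np

      def extent_and_severity(z_map, segments, z_thresh=2.0):
          results = {}
          for s in np.unique(segments):
              z = z_map[segments == s]
              above = z > z_thresh
              extent = above.mean()                           # fraction over threshold
              severity = z[above].mean() if above.any() else 0.0
              results[s] = (extent, severity)
          return results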

  11. Quantum dynamics of the intramolecular vibrational energy redistribution in OCS: From localization to quasi-thermalization

    NASA Astrophysics Data System (ADS)

    Pérez, J. B.; Arce, J. C.

    2018-06-01

    We report a fully quantum-dynamical study of the intramolecular vibrational energy redistribution (IVR) in the electronic ground state of carbonyl sulfide, which is a prototype of an isolated many-body quantum system with strong internal couplings and non-Rice-Ramsperger-Kassel-Marcus (RRKM) behavior. We pay particular attention to the role of many-body localization and the approach to thermalization, which currently are topics of considerable interest, as they pertain to the very foundations of statistical mechanics and thermodynamics. We employ local-mode (valence) coordinates and consider initial excitations localized in one local mode, with energies ranging from low to near the dissociation threshold, where the classical dynamics have been shown to be chaotic. We propagate the nuclear wavepacket on the potential energy surface by means of the numerically exact multiconfiguration time-dependent Hartree method and employ mean local energies, time-dependent and time-averaged populations in quantum number space, energy distributions, entanglement entropies, local population distributions, microcanonical averages, and dissociation probabilities, as diagnostic tools. This allows us to identify a continuous localization → delocalization transition in the energy flow, associated with the onset of quantum chaos, as the excitation energy increases up to near the dissociation threshold. Moreover, we find that at this energy and ˜1 ps the molecule nearly thermalizes. Furthermore, we observe that IVR is so slow that the molecule begins to dissociate well before such quasi-thermalization is complete, in accordance with earlier classical-mechanical predictions of non-RRKM behavior.

  13. An evidence- and risk-based approach to a harmonized laboratory alert list in Australia and New Zealand.

    PubMed

    Campbell, Craig A; Lam, Que; Horvath, Andrea R

    2018-04-19

    Individual laboratories are required to compose an alert list for identifying critical and significant risk results. The high-risk results working party of the Royal College of Pathologists of Australasia (RCPA) and the Australasian Association of Clinical Biochemists (AACB) has developed a risk-based approach to a harmonized alert list for laboratories throughout Australia and New Zealand. The six-step process for alert threshold identification and assessment involves reviewing the literature, rating the available evidence, performing a risk analysis, assessing method transferability, considering workload implications, and seeking endorsement from stakeholders. To demonstrate this approach, a worked example for deciding the upper alert threshold for potassium is described. The findings of the worked example are, for infants aged 0-6 months, a recommended upper potassium alert threshold of >7.0 mmol/L in serum and >6.5 mmol/L in plasma, and for individuals older than 6 months, a threshold of >6.2 mmol/L in both serum and plasma. Limitations in defining alert thresholds include the lack of well-designed studies measuring the relationship between high-risk results and patient outcomes, or the benefits of treatment to prevent harm, and the existence of a wide range of clinical practice guidelines with conflicting decision points at which treatment is required. The risk-based approach described presents a transparent, evidence- and consensus-based methodology that can be used by any laboratory when designing an alert list for local use. The RCPA-AACB harmonized alert list serves as a starter set for further local adaptation or adoption after consultation with clinical users.

  14. Identifying Turbulent Structures through Topological Segmentation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bremer, Peer-Timo; Gruber, Andrea; Bennett, Janine C.

    2016-01-01

    A new method of extracting vortical structures from a turbulent flow is proposed, whereby topological segmentation of an indicator-function scalar field is used to identify the regions of influence of the individual vortices. This addresses a long-standing challenge in vector field topological analysis: commonly used indicator functions produce a scalar field based on the local velocity vector field, and reconstructing the region of influence of a particular structure requires selecting a threshold to define vortex extent. In practice, the same threshold is rarely meaningful throughout a given flow. By also considering the topology of the indicator-function field, the characteristics of vortex strength and extent can be separated and the ambiguity in the choice of the threshold reduced. The proposed approach is able to simultaneously identify several types of vortices observed in a jet-in-cross-flow configuration, where no single threshold value for a selection of common indicator functions appears able to identify all of these vortex types.

  15. A data centred method to estimate and map changes in the full distribution of daily surface temperature

    NASA Astrophysics Data System (ADS)

    Chapman, Sandra; Stainforth, David; Watkins, Nicholas

    2016-04-01

    Characterizing how our climate is changing includes local information that can inform adaptation planning decisions. This requires quantifying the geographical patterns in changes at specific quantiles or thresholds of distributions of variables such as daily surface temperature. Here we focus on these local changes and on a model-independent method to transform daily observations into patterns of local climate change. Our method [1] is a simple mathematical deconstruction of how the difference between two observations from two different time periods can be assigned to the combination of natural statistical variability and/or the consequences of secular climate change. This deconstruction facilitates an assessment of how fast different quantiles of the distributions are changing. This involves determining which quantiles and geographical locations show the greatest change, but also those at which any change is highly uncertain. For temperature, changes in the distribution itself can yield robust results [2]. We demonstrate how the fundamental timescales of anthropogenic climate change limit the identification of societally relevant aspects of changes. We show that it is nevertheless possible to extract, solely from observations, some confident quantified assessments of change at certain thresholds and locations [3]. We demonstrate this approach using E-OBS gridded data [4] timeseries of local daily surface temperature from specific locations across Europe over the last 60 years. [1] Chapman, S. C., D. A. Stainforth, N. W. Watkins, On estimating long term local climate trends, Phil. Trans. Royal Soc. A, 371, 20120287 (2013). [2] Stainforth, D. A., S. C. Chapman, N. W. Watkins, Mapping climate change in European temperature distributions, ERL, 8, 034031 (2013). [3] Chapman, S. C., Stainforth, D. A., Watkins, N. W., Limits to the quantification of local climate change, ERL, 10, 094018 (2015). [4] Haylock, M. R., et al., A European daily high-resolution gridded dataset of surface temperature and precipitation, J. Geophys. Res. (Atmospheres), 113, D20119 (2008).

  16. Automated Solar Flare Detection and Feature Extraction in High-Resolution and Full-Disk Hα Images

    NASA Astrophysics Data System (ADS)

    Yang, Meng; Tian, Yu; Liu, Yangyi; Rao, Changhui

    2018-05-01

    In this article, an automated solar flare detection method applicable to both full-disk and local high-resolution Hα images is proposed. An adaptive gray threshold and an area threshold are used to segment the flare region. Features of each detected flare event are extracted, e.g. the start, peak, and end times, the importance class, and the brightness class. Experimental results have verified that the proposed method obtains more stable and accurate segmentation results than previous works on full-disk images from Big Bear Solar Observatory (BBSO) and Kanzelhöhe Observatory for Solar and Environmental Research (KSO), and satisfactory segmentation results on high-resolution images from the Goode Solar Telescope (GST). Moreover, the extracted flare features correlate well with the data given by KSO. The method may enable more sophisticated statistical analyses of Hα solar flares.
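
    One plausible reading of the two-threshold segmentation, as a sketch; the mean-plus-k-sigma form of the adaptive gray threshold is our assumption, since the paper's exact form is not given here:

      import numpy as np
      from skimage.morphology import remove_small_objects

      def flare_mask(halpha, k=3.0, min_area=50):
          # halpha: 2D H-alpha image restricted to on-disk pixels
          gray_t = halpha.mean() + k * halpha.std()   # adaptive gray threshold
          mask = halpha > gray_t
          # area threshold: discard bright specks smaller than min_area pixels
          return remove_small_objects(mask, min_size=min_area)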

  17. 48 CFR 529.401-70 - Purchases at or under the simplified acquisition threshold.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... simplified acquisition threshold. 529.401-70 Section 529.401-70 Federal Acquisition Regulations System... Purchases at or under the simplified acquisition threshold. Insert 552.229-70, Federal, State, and Local Taxes, in purchases and contracts estimated to exceed the micropurchase threshold, but not the...

  18. Spatially restricted electrical activation of retinal ganglion cells in the rabbit retina by hexapolar electrode return configuration

    NASA Astrophysics Data System (ADS)

    Habib, Amgad G.; Cameron, Morven A.; Suaning, Gregg J.; Lovell, Nigel H.; Morley, John W.

    2013-06-01

    Objective. Visual prostheses currently in development aim to restore some form of vision to patients suffering from diseases such as age-related macular degeneration and retinitis pigmentosa. Most rely on electrically stimulating inner retinal cells via electrodes implanted on or near the retina, resulting in percepts of light termed ‘phosphenes’. Activation of spatially distinct populations of cells in the retina is key for pattern vision to be produced. To achieve this, the electrical stimulation must be localized, activating cells only in the direct vicinity of the stimulating electrode(s). With this goal in mind, a hexagonal return (hexapolar) configuration has been proposed as an alternative to the traditional monopolar or bipolar return configurations for electrically stimulating the retina. This study investigated the efficacy of the hexapolar configuration in localizing the activation of retinal ganglion cells (RGCs), compared to a monopolar configuration. Approach. Patch-clamp electrophysiology was used to measure the activation thresholds of RGCs in whole-mount rabbit retina to monopolar and hexapolar electrical stimulation, applied subretinally. Main results. Hexapolar activation thresholds for RGCs located outside the hex guard were found to be significantly (>2 fold) higher than those located inside the area of tissue bounded by the hex guard. The hexapolar configuration localized the activation of RGCs more effectively than its monopolar counterpart. Furthermore, no difference in hexapolar thresholds or localization was observed when using cathodic-first versus anodic-first stimulation. Significance. The hexapolar configuration may provide an improved method for electrically stimulating spatially distinct populations of cells in retinal tissue.

  19. Characterization of local thermodynamic equilibrium in a laser-induced aluminum alloy plasma.

    PubMed

    Zhang, Yong; Zhao, Zhenyang; Xu, Tao; Niu, GuangHui; Liu, Ying; Duan, Yixiang

    2016-04-01

    The electron temperature was evaluated using the line-to-continuum ratio method, and whether the plasma was close to the local thermodynamic equilibrium (LTE) state was investigated in detail. The results showed that approximately 5 μs after the plasma formed, the electron temperature and the excitation temperature determined from a Boltzmann plot overlapped within a 15% error range, indicating that the LTE state was reached. The recombination of electrons and ions and the free-electron expansion process led to deviations from the LTE state. The plasma's expansion rate slowed over time, and when the expansion time approached the ionization equilibrium time, the LTE state was nearly reached. The McWhirter criterion was adopted to calculate the threshold electron density for different species, and the results showed that the experimental electron density was greater than the threshold electron density, which meant that the LTE state may have existed. However, for the nonmetal element N, the threshold electron density was greater than the experimental value approximately 0.8 μs after the plasma formed, which meant that the LTE state did not exist for N.
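
    For reference, the McWhirter criterion used for this check gives a necessary (not sufficient) condition for LTE: with the temperature T in kelvin and the largest energy gap between adjacent levels ΔE in eV, the electron density must satisfy

      N_e \;\geq\; 1.6 \times 10^{12} \, T^{1/2} \, (\Delta E)^3 \quad [\mathrm{cm}^{-3}]

    The cubic dependence on ΔE explains why the threshold density is species-dependent and why a light, nonmetal species such as N with large level spacings can fail the criterion while metal species in the same plasma satisfy it.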

  20. Relationship between pastoralists' evaluation of rangeland state and vegetation threshold changes in Mongolian rangelands.

    PubMed

    Kakinuma, Kaoru; Sasaki, Takehiro; Jamsran, Undarmaa; Okuro, Toshiya; Takeuchi, Kazuhiko

    2014-10-01

    Applying the threshold concept to rangeland management is an important challenge in semi-arid and arid regions. Threshold recognition and prediction are necessary to enable local pastoralists to prevent the occurrence of an undesirable state resulting from unsustainable grazing pressure, but this requires a better understanding of pastoralists' perception of vegetation threshold changes. We estimated plant species cover in survey plots along grazing gradients in steppe and desert-steppe areas of Mongolia. We also conducted interviews with local pastoralists and asked them to evaluate whether the plots were suitable for grazing. Floristic composition changed nonlinearly along the grazing gradient in both the desert-steppe and steppe areas. Pastoralists observed the changes in floristic composition along the grazing gradients, but their evaluations of grazing suitability did not always decrease along the gradients, both of which included areas in a post-threshold state. These results indicate that local pastoralists and scientists may have different perceptions of vegetation states, even though both groups used plant species and coverage as indicators in their evaluations. Therefore, in future work on rangeland management, researchers and pastoralists should exchange their knowledge and perceptions to successfully apply the threshold concept.

  1. Dynamics analysis of SIR epidemic model with correlation coefficients and clustering coefficient in networks.

    PubMed

    Zhang, Juping; Yang, Chan; Jin, Zhen; Li, Jia

    2018-07-14

    In this paper, the correlation coefficients between nodes in states are used as dynamic variables, and we construct SIR epidemic dynamic models with correlation coefficients by using the pair approximation method in static networks and dynamic networks, respectively. Considering the clustering coefficient of the network, we analytically investigate the existence and the local asymptotic stability of each equilibrium of these models and derive threshold values for the prevalence of diseases. Additionally, we obtain two equivalent epidemic thresholds in dynamic networks, which are compared with the results of the mean field equations. Copyright © 2018 Elsevier Ltd. All rights reserved.
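
    For orientation only: the simplest network epidemic threshold (degree-based mean field on an uncorrelated static network, without the pair correlations and clustering the paper analyzes) is computable from the degree distribution alone:

      import numpy as np

      def sir_threshold(degrees):
          # critical transmissibility for SIR on an uncorrelated static network
          k = np.asarray(degrees, dtype=float)
          return k.mean() / (np.mean(k ** 2) - k.mean())

    The paper's pair-approximation thresholds refine this kind of estimate by tracking correlations between neighboring nodes' states and the network's clustering coefficient.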

  2. Fuzzy Behavior Modulation with Threshold Activation for Autonomous Vehicle Navigation

    NASA Technical Reports Server (NTRS)

    Tunstel, Edward

    2000-01-01

    This paper describes fuzzy logic techniques used in a hierarchical behavior-based architecture for robot navigation. An architectural feature for threshold activation of fuzzy-behaviors is emphasized, which is potentially useful for tuning navigation performance in real world applications. The target application is autonomous local navigation of a small planetary rover. Threshold activation of low-level navigation behaviors is the primary focus. A preliminary assessment of its impact on local navigation performance is provided based on computer simulations.
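
    A minimal sketch of threshold activation in weighted behavior fusion; the (applicability, command, threshold) triple structure is our illustration, not the paper's architecture:

      def fuse_behaviors(behaviors, state):
          # behaviors: iterable of (applicability_fn, command_fn, threshold) triples
          total_w, total_cmd = 0.0, 0.0
          for applicability, command, threshold in behaviors:
              w = applicability(state)
              if w >= threshold:           # below-threshold behaviors are suppressed
                  total_w += w
                  total_cmd += w * command(state)
          return total_cmd / total_w if total_w else 0.0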

  3. Damage threshold dependence of optical coatings on substrate materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhouling, W.; Zhenxiu, F.

    1996-04-01

    The damage threshold dependence on substrate materials was investigated for TiO2, ZrO2, SiO2, MgF2, and ZnS single layers and TiO2/SiO2 multilayers. The results show that the damage threshold increases with increasing substrate thermal conductivity for single layers and AR coatings, and remains the same for HR coatings. With the help of localized absorption measurements and in-situ damage process analysis, these phenomena were well correlated with a local absorption-initiated thermal damage mechanism.

  4. Segmentation of singularity maps in the context of soil porosity

    NASA Astrophysics Data System (ADS)

    Martin-Sotoca, Juan J.; Saa-Requejo, Antonio; Grau, Juan; Tarquis, Ana M.

    2016-04-01

    Geochemical exploration has found increasing interest in, and benefit from, fractal (power-law) models for characterizing geochemical distributions, including the concentration-area (C-A) model (Cheng et al., 1994; Cheng, 2012) and the concentration-volume (C-V) model (Afzal et al., 2011), to name a few examples. These methods are based on singularity maps of a measure, which at each point define areas with self-similar properties that appear as power-law relationships in concentration-area plots (the C-A method). The C-A method together with the singularity map (the "Singularity-CA" method) defines thresholds that can be applied to segment the map. Recently, the "Singularity-CA" method has been applied to binarize 2D grayscale computed tomography (CT) soil images (Martín-Sotoca et al., 2015). Unlike image segmentation based on global thresholding methods, the "Singularity-CA" method quantifies the local scaling property of the grayscale value map in the space domain and determines the intensity of local singularities. It can be used as a high-pass-filter technique to enhance high-frequency patterns, usually regarded as anomalies, when applied to maps. In this work we pay special attention to how the singularity thresholds in the C-A plot are selected to segment the image. We compare two methods: 1) the cross point of linear regressions, and 2) Wavelet Transform Modulus Maxima (WTMM) singularity function detection. REFERENCES: Cheng, Q., Agterberg, F. P. and Ballantyne, S. B. (1994). The separation of geochemical anomalies from background by fractal methods. Journal of Geochemical Exploration, 51, 109-130. Cheng, Q. (2012). Singularity theory and methods for mapping geochemical anomalies caused by buried sources and for predicting undiscovered mineral deposits in covered areas. Journal of Geochemical Exploration, 122, 55-70. Afzal, P., Fadakar Alghalandis, Y., Khakzad, A., Moarefvand, P. and Rashidnejad Omran, N. (2011). Delineation of mineralization zones in porphyry Cu deposits by fractal concentration-volume modeling. Journal of Geochemical Exploration, 108, 220-232. Martín-Sotoca, J. J., Tarquis, A. M., Saa-Requejo, A. and Grau, J. B. (2015). Pore detection in Computed Tomography (CT) soil images through singularity map analysis. Oral presentation at the PedoFract VIII Congress (June, La Coruña, Spain).
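
    Method 1 (the cross point of two linear regressions in the C-A plot) can be sketched as follows; in practice the trial split point separating the two power-law regimes would be scanned or chosen by inspection:

      import numpy as np

      def ca_threshold(singularity_map, split):
          # C-A plot: log(area with singularity >= s) versus s
          s = np.sort(singularity_map.ravel())[::-1]
          log_area = np.log(np.arange(1, s.size + 1))
          lo, hi = s > split, s <= split            # the two trial regimes
          p1 = np.polyfit(s[lo], log_area[lo], 1)
          p2 = np.polyfit(s[hi], log_area[hi], 1)
          # crossing point of the two fitted lines = segmentation threshold
          return (p2[1] - p1[1]) / (p1[0] - p2[0])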

  5. Thresholds for activation of rabbit retinal ganglion cells with an ultrafine, extracellular microelectrode.

    PubMed

    Jensen, Ralph J; Rizzo, Joseph F; Ziv, Ofer R; Grumet, Andrew; Wyatt, John

    2003-08-01

    To determine electrical thresholds required for extracellular activation of retinal ganglion cells as part of a project to develop an epiretinal prosthesis. Retinal ganglion cells were recorded extracellularly in retinas isolated from adult New Zealand White rabbits. Electrical current pulses of 100-μs duration were delivered to the inner surface of the retina from a 5-μm long electrode. In about half of the cells, the point of lowest threshold was found by searching with anodal current pulses; in the other cells, cathodal current pulses were used. Threshold measurements were obtained near the cell bodies of 20 ganglion cells and near the axons of 19 ganglion cells. Both cathodal and anodal stimuli evoked a neural response in the ganglion cells that consisted of a single action potential of near-constant latency that persisted when retinal synaptic transmission was blocked with cadmium chloride. For cell bodies, but not axons, thresholds for both cathodal and anodal stimulation were dependent on the search method used to find the point of lowest threshold. With search and stimulation of matching polarity, cathodal stimuli evoked a ganglion cell response at lower currents (approximately one seventh to one tenth of the axonal threshold) than did anodal stimuli for both cell bodies and axons. With cathodal search and stimulation, cell body median thresholds were somewhat lower (approximately one half) than the axonal median thresholds. With anodal search and stimulation, cell body median thresholds were approximately the same as axonal median thresholds. The results suggest that cathodal stimulation should produce lower thresholds, more localized stimulation, and somewhat better selectivity for cell bodies over axons than would anodal stimulation.

  6. An Observationally-Centred Method to Quantify the Changing Shape of Local Temperature Distributions

    NASA Astrophysics Data System (ADS)

    Chapman, S. C.; Stainforth, D. A.; Watkins, N. W.

    2014-12-01

    For climate-sensitive decisions and adaptation planning, guidance on how local climate is changing is needed at the specific thresholds relevant to particular impacts or policy endeavours. This requires quantifying how the distributions of variables such as daily temperature are changing at specific quantiles. These temperature distributions are non-normal and vary both geographically and in time. We present a method[1,2] for analysing local climatic time series data to assess which quantiles of the local climatic distribution show the greatest and most robust changes. We have demonstrated this approach using the E-OBS gridded dataset[3], which consists of time series of local daily temperature across Europe over the last 60 years. Our method extracts the changing cumulative distribution function over time and uses a simple mathematical deconstruction of how the difference between two observations from two different time periods can be assigned to the combination of natural statistical variability and/or the consequences of secular climate change. The change in temperature can be tracked at a temperature threshold, at a likelihood, or at a given return time, independently for each geographical location. Geographical correlations are thus an output of our method and reflect both climatic properties (local and synoptic) and spatial correlations inherent in the observation methodology. We find as an output many regionally consistent patterns of response of potential value in adaptation planning. For instance, in a band from northern France to Denmark, the hottest days in the summer temperature distribution have seen changes of at least 2°C over a 43-year period, over four times the global mean change over the same period. We discuss methods to quantify the robustness of these observed sensitivities and their statistical likelihood. This approach also quantifies the level of detail at which one might wish to see agreement between climate models and observations if such models are to be used directly as tools to assess climate change impacts at local scales. [1] S. C. Chapman, D. A. Stainforth, N. W. Watkins, 2013, Phil. Trans. R. Soc. A, 371, 20120287. [2] D. A. Stainforth, S. C. Chapman, N. W. Watkins, 2013, Environ. Res. Lett., 8, 034031. [3] Haylock, M. R., et al., 2008, J. Geophys. Res. (Atmospheres), 113, D20119.

  7. A novel fusion method of improved adaptive LTP and two-directional two-dimensional PCA for face feature extraction

    NASA Astrophysics Data System (ADS)

    Luo, Yuan; Wang, Bo-yu; Zhang, Yi; Zhao, Li-ming

    2018-03-01

    In this paper, we address the problem that, under different illuminations and random noise, the local texture features of a face image cannot be completely described because the threshold of the local ternary pattern (LTP) cannot be calculated adaptively, and we propose an improved adaptive local ternary pattern (IALTP). First, a difference function between the center pixel and the neighborhood pixel weights is established to obtain the statistical characteristics of the central pixel and its neighborhood. Second, an adaptive gradient-descent iterative function is established to calculate the difference coefficient, which is defined as the threshold of the IALTP operator. Finally, the mean and standard deviation of the pixel weights of the local region are used as the coding mode of IALTP. To reflect the overall properties of the face and reduce the feature dimensionality, two-directional two-dimensional PCA ((2D)2PCA) is adopted. IALTP is used to extract local texture features of the eye and mouth areas. After combining the global features and local features, the fused features (IALTP+) are obtained. Experimental results on the Extended Yale B and AR standard face databases indicate that, under different illuminations and random noise, the proposed algorithm is more robust than others, and the feature dimension is smaller. The shortest running time reaches 0.3296 s, and the highest recognition rate reaches 97.39%.
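
    For context, the basic fixed-threshold LTP coding that IALTP generalizes; IALTP's contribution is computing the threshold t adaptively (by gradient descent), which is not reproduced here:

      import numpy as np

      def ltp_code(patch3x3, t):
          center = patch3x3[1, 1]
          nbrs = np.delete(patch3x3.ravel(), 4)      # the 8 neighbors
          upper = (nbrs >= center + t).astype(int)   # +1 pattern
          lower = (nbrs <= center - t).astype(int)   # -1 pattern
          weights = 2 ** np.arange(8)
          # the ternary code is conventionally split into two LBP-like binary codes
          return int(upper @ weights), int(lower @ weights)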

  8. Optimal Design for the Precise Estimation of an Interaction Threshold: The Impact of Exposure to a Mixture of 18 Polyhalogenated Aromatic Hydrocarbons

    PubMed Central

    Yeatts, Sharon D.; Gennings, Chris; Crofton, Kevin M.

    2014-01-01

    Traditional additivity models provide little flexibility in modeling the dose–response relationships of the single agents in a mixture. While the flexible single chemical required (FSCR) methods allow greater flexibility, their implicit nature is an obstacle in the formation of the parameter covariance matrix, which forms the basis for many statistical optimality design criteria. The goal of this effort is to develop a method for constructing the parameter covariance matrix for the FSCR models, so that (local) alphabetic optimality criteria can be applied. Data from Crofton et al. are provided as motivation; in an experiment designed to determine the effect of 18 polyhalogenated aromatic hydrocarbons on serum total thyroxine (T4), the interaction among the chemicals was statistically significant. Gennings et al. fit the FSCR interaction threshold model to the data. The resulting estimate of the interaction threshold was positive and within the observed dose region, providing evidence of a dose-dependent interaction. However, the corresponding likelihood-ratio-based confidence interval was wide and included zero. In order to estimate the location of the interaction threshold more precisely, supplemental data are required. Using the available data as the first stage, the Ds-optimal second-stage design criterion was applied to minimize the variance of the hypothesized interaction threshold. Practical concerns associated with the resulting design are discussed and addressed using the penalized optimality criterion. Results demonstrate that the penalized Ds-optimal second-stage design can be used to more precisely define the interaction threshold while maintaining the characteristics deemed important in practice. PMID:22640366

  9. A data centred method to estimate and map how the local distribution of daily precipitation is changing

    NASA Astrophysics Data System (ADS)

    Chapman, Sandra; Stainforth, David; Watkins, Nick

    2014-05-01

    Estimates of how our climate is changing are needed locally in order to inform adaptation planning decisions. This requires quantifying the geographical patterns in changes at specific quantiles in distributions of variables such as daily temperature or precipitation. Here we focus on these local changes and on a method to transform daily observations of precipitation into patterns of local climate change. We develop a method[1] for analysing local climatic time series to assess which quantiles of the local climatic distribution show the greatest and most robust changes, specifically to address the challenges presented by daily precipitation data. We extract from the data quantities that characterize the changes in time of the likelihood of daily precipitation above a threshold and of the relative amount of precipitation on those days. Our method is a simple mathematical deconstruction of how the difference between two observations from two different time periods can be assigned to the combination of natural statistical variability and/or the consequences of secular climate change. This deconstruction facilitates an assessment of how fast different quantiles of precipitation distributions are changing. This involves determining both which quantiles and geographical locations show the greatest change and those at which any change is highly uncertain. We demonstrate this approach using E-OBS gridded data[2] time series of local daily precipitation from specific locations across Europe over the last 60 years. We treat geographical location and precipitation as independent variables and thus obtain as outputs the pattern of change at a given precipitation threshold as a function of geographical location. This is model-independent, thus providing data of direct value in model calibration and assessment. Our results show regionally consistent patterns of systematic increase in precipitation on the wettest days, and of drying across all days, which is of potential value in adaptation planning. [1] S C Chapman, D A Stainforth, N W Watkins, 2013, On Estimating Local Long Term Climate Trends, Phil. Trans. R. Soc. A, 371, 20120287; D A Stainforth, S C Chapman, N W Watkins, 2013, Mapping climate change in European temperature distributions, Environ. Res. Lett., 8, 034031. [2] Haylock, M.R., N. Hofstra, A.M.G. Klein Tank, E.J. Klok, P.D. Jones and M. New, 2008, A European daily high-resolution gridded dataset of surface temperature and precipitation, J. Geophys. Res. (Atmospheres), 113, D20119.
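
    The two tracked quantities, the likelihood of a day exceeding a precipitation threshold and the mean amount on those days, reduce to a few lines of numpy. The 1 mm wet-day threshold and the gamma-distributed synthetic series below are illustrative assumptions, not the paper's data.

```python
import numpy as np

def wet_day_stats(precip_mm, threshold_mm=1.0):
    """Likelihood of exceeding a daily threshold and mean wet-day amount,
    the two quantities whose change in time the method tracks."""
    wet = precip_mm[precip_mm > threshold_mm]
    p_wet = wet.size / precip_mm.size
    return p_wet, (wet.mean() if wet.size else 0.0)

# Compare an early and a late period (synthetic daily series).
rng = np.random.default_rng(1)
early = rng.gamma(shape=0.4, scale=6.0, size=30 * 365)
late = rng.gamma(shape=0.4, scale=7.0, size=30 * 365)
for name, series in [("early", early), ("late", late)]:
    p, m = wet_day_stats(series)
    print(f"{name}: P(wet day) = {p:.3f}, mean wet-day amount = {m:.2f} mm")
```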

  10. Modeling of digital mammograms using bicubic spline functions and additive noise

    NASA Astrophysics Data System (ADS)

    Graffigne, Christine; Maintournam, Aboubakar; Strauss, Anne

    1998-09-01

    The purpose of our work is the detection of microcalcifications on digital mammograms. In order to do so, we model the grey levels of digital mammograms as the sum of a surface trend (a bicubic spline function) and an additive noise or texture. We also introduce a robust estimation method in order to overcome the bias introduced by the microcalcifications. After the estimation we consider the subtraction image values as noise. If the noise is not correlated, we fit its probability distribution using Pearson's system of densities. This allows us to threshold the subtraction images accurately and therefore to detect the microcalcifications. If the noise is correlated, a unilateral autoregressive process is used and its coefficients are again estimated by the least squares method. We then consider non-overlapping windows on the residue image. In each window the texture residue is computed and compared with an a priori threshold. This provides correct localization of the microcalcification clusters. However, this technique is considerably more time-consuming than the automatic threshold assuming uncorrelated noise and does not lead to significantly better results. As a conclusion, even if the assumption of uncorrelated noise is not correct, the automatic thresholding based on Pearson's system performs quite well on most of our images.
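
    The trend-plus-noise decomposition can be sketched in a few lines. Below, a large median filter stands in for the paper's robust bicubic-spline surface fit, and a k-sigma cutoff stands in for the threshold derived from the fitted Pearson density; both substitutions, and the synthetic image, are assumptions for illustration.

```python
import numpy as np
from scipy.ndimage import median_filter

def detect_bright_spots(image, trend_size=31, k=4.0):
    """Subtract a smooth background trend, then threshold the residue.

    The median filter is a robust stand-in for the bicubic-spline trend:
    like the paper's robust fit, it is barely biased by small bright
    microcalcifications. The k-sigma cutoff replaces the Pearson-density
    threshold of the original method.
    """
    trend = median_filter(image.astype(float), size=trend_size)
    residue = image - trend
    return residue > residue.mean() + k * residue.std()

# Synthetic mammogram-like image: smooth ramp + noise + one bright spot.
rng = np.random.default_rng(2)
img = np.add.outer(np.linspace(0, 50, 128), np.linspace(0, 30, 128))
img += rng.normal(0.0, 2.0, img.shape)
img[60:63, 60:63] += 25.0
print(detect_bright_spots(img).sum(), "pixels flagged")
```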

  11. Normal and abnormal tissue identification system and method for medical images such as digital mammograms

    NASA Technical Reports Server (NTRS)

    Heine, John J. (Inventor); Clarke, Laurence P. (Inventor); Deans, Stanley R. (Inventor); Stauduhar, Richard Paul (Inventor); Cullers, David Kent (Inventor)

    2001-01-01

    A system and method for analyzing a medical image to determine whether an abnormality is present, for example, in digital mammograms, includes the application of a wavelet expansion to a raw image to obtain subspace images of varying resolution. At least one subspace image is selected that has a resolution commensurate with a desired predetermined detection resolution range. A functional form of a probability distribution function is determined for each selected subspace image, and an optimal statistical normal image region test is determined for each selected subspace image. A threshold level for the probability distribution function is established from the optimal statistical normal image region test for each selected subspace image. A region size comprising at least one sector is defined, and an output image is created that includes a combination of all regions for each selected subspace image. Each region has a first value when the region intensity level is above the threshold and a second value when the region intensity level is below the threshold. This permits the localization of a potential abnormality within the image.
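
    The pipeline the patent describes (wavelet expansion, subspace selection, distribution-based threshold, region test) can be caricatured with pywt. The db2 wavelet, the single detail subband, the Gaussian k-sigma cutoff and the fixed region size below are all illustrative assumptions, not the patented procedure.

```python
import numpy as np
import pywt

def flag_regions(image, level=2, region=8, k=5.0):
    """Flag regions whose wavelet-detail response exceeds a threshold.

    wavedec2 provides the 'subspace images'; one detail subband at the
    chosen level is tested region by region against a k-sigma cutoff,
    a crude stand-in for the patent's fitted probability distribution
    and optimal normal-region test.
    """
    coeffs = pywt.wavedec2(image.astype(float), "db2", level=level)
    detail = np.abs(coeffs[1][0])             # horizontal detail, coarsest level
    thresh = detail.mean() + k * detail.std()
    nr, nc = detail.shape[0] // region, detail.shape[1] // region
    out = np.zeros((nr, nc), dtype=bool)
    for i in range(nr):
        for j in range(nc):
            block = detail[i*region:(i+1)*region, j*region:(j+1)*region]
            out[i, j] = block.max() > thresh   # region value above cutoff
    return out

rng = np.random.default_rng(3)
img = rng.normal(size=(128, 128))
img[40:48, 80:88] += 6.0                       # localized "abnormality"
print(flag_regions(img).sum(), "regions flagged")
```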

  12. Development of Thresholds and Exceedance Probabilities for Influent Water Quality to Meet Drinking Water Regulations

    NASA Astrophysics Data System (ADS)

    Reeves, K. L.; Samson, C.; Summers, R. S.; Balaji, R.

    2017-12-01

    Drinking water treatment utilities (DWTU) are tasked with the challenge of meeting disinfection and disinfection byproduct (DBP) regulations to provide safe, reliable drinking water under changing climate and land surface characteristics. DBPs form in drinking water when disinfectants, commonly chlorine, react with organic matter as measured by total organic carbon (TOC), and physical removal of pathogenic microorganisms is achieved by filtration and monitored by turbidity removal. Turbidity and TOC in influent waters to DWTUs are expected to increase due to variable climate and more frequent fires and droughts. Traditional methods for forecasting turbidity and TOC require catchment-specific data (i.e., streamflow) and have difficulties predicting them under a non-stationary climate. A modelling framework was developed to assist DWTUs with assessing their risk for future compliance with disinfection and DBP regulations under changing climate. A local polynomial method was developed to predict surface water TOC using climate data collected from NOAA, Normalized Difference Vegetation Index (NDVI) data from the IRI Data Library, and historical TOC data from three DWTUs in diverse geographic locations. Characteristics from the DWTUs were used in the EPA Water Treatment Plant model to determine thresholds for influent TOC that resulted in DBP concentrations within compliance. Lastly, extreme value theory was used to predict probabilities of threshold exceedances under the current climate. Results from the utilities were used to produce a generalized TOC threshold approach that requires only water temperature and bromide concentration. The threshold exceedance model will be used to estimate probabilities of exceedances under projected climate scenarios. Initial results show that TOC can be forecasted using widely available data via statistical methods, where temperature, precipitation, the Palmer Drought Severity Index, and NDVI with various lags were shown to be important predictors of TOC, and TOC thresholds can be determined using water temperature and bromide concentration. Results include a model to predict influent turbidity and turbidity thresholds, similar to the TOC models, as well as probabilities of threshold exceedances for TOC and turbidity under changing climate.
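
    The extreme-value step, estimating the probability that influent quality exceeds a compliance threshold, is commonly done peaks-over-threshold style with a generalized Pareto fit; the sketch below follows that standard recipe with scipy. The 95th-percentile base level and the synthetic lognormal "TOC" series are assumptions, not the study's model.

```python
import numpy as np
from scipy.stats import genpareto

def exceedance_probability(history, threshold):
    """P(daily value > threshold) for thresholds above a high base quantile.

    Peaks-over-threshold: fit a generalized Pareto distribution to
    excesses over the base level, then scale its survival function by
    the empirical rate of reaching the base level.
    """
    base = np.quantile(history, 0.95)
    excesses = history[history > base] - base
    c, _, scale = genpareto.fit(excesses, floc=0.0)
    p_base = excesses.size / history.size
    return p_base * genpareto.sf(threshold - base, c, loc=0.0, scale=scale)

rng = np.random.default_rng(4)
toc = rng.lognormal(mean=1.0, sigma=0.4, size=10 * 365)   # synthetic TOC, mg/L
print(f"P(TOC > 8 mg/L) ~ {exceedance_probability(toc, 8.0):.5f}")
```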

  13. A comparative study of automatic image segmentation algorithms for target tracking in MR-IGRT.

    PubMed

    Feng, Yuan; Kawrakow, Iwan; Olsen, Jeff; Parikh, Parag J; Noel, Camille; Wooten, Omar; Du, Dongsu; Mutic, Sasa; Hu, Yanle

    2016-03-08

    On-board magnetic resonance (MR) image guidance during radiation therapy offers the potential for more accurate treatment delivery. To utilize the real-time image information, a crucial prerequisite is the ability to successfully segment and track regions of interest (ROI). The purpose of this work is to evaluate the performance of different segmentation algorithms using motion images (4 frames per second) acquired using an MR image-guided radiotherapy (MR-IGRT) system. Manual contours of the kidney, bladder, duodenum, and a liver tumor by an experienced radiation oncologist were used as the ground truth for performance evaluation. Besides the manual segmentation, images were automatically segmented using thresholding, fuzzy k-means (FKM), k-harmonic means (KHM), and reaction-diffusion level set evolution (RD-LSE) algorithms, as well as the tissue tracking algorithm provided by the ViewRay treatment planning and delivery system (VR-TPDS). The performance of the five algorithms was evaluated quantitatively by comparing with the manual segmentation using the Dice coefficient and target registration error (TRE), measured as the distance between the centroid of the manual ROI and the centroid of the automatically segmented ROI. All methods were able to successfully segment the bladder and the kidney, but only FKM, KHM, and VR-TPDS were able to segment the liver tumor and the duodenum. The performance of the thresholding, FKM, KHM, and RD-LSE algorithms degraded as the local image contrast decreased, whereas the performance of the VR-TPDS method was nearly independent of local image contrast due to the reference registration algorithm. For segmenting high-contrast images (i.e., kidney), the thresholding method provided the best speed (< 1 ms) with satisfying accuracy (Dice = 0.95). When the image contrast was low, the VR-TPDS method had the best automatic contour. Results suggest an image quality determination procedure before segmentation and a combination of different methods for optimal segmentation with the on-board MR-IGRT system.
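
    Both evaluation metrics are simple to compute; a minimal numpy sketch follows, with toy masks standing in for the manual and automatic ROIs, and pixel spacing assumed isotropic.

```python
import numpy as np

def dice(a, b):
    """Dice overlap between two binary masks."""
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def centroid_tre(a, b, spacing=1.0):
    """Target registration error: distance between the mask centroids."""
    ca = np.array(np.nonzero(a)).mean(axis=1)
    cb = np.array(np.nonzero(b)).mean(axis=1)
    return spacing * np.linalg.norm(ca - cb)

manual = np.zeros((64, 64), dtype=bool); manual[20:40, 20:40] = True
auto = np.zeros((64, 64), dtype=bool); auto[22:42, 21:41] = True
print(f"Dice = {dice(manual, auto):.3f}, TRE = {centroid_tre(manual, auto):.2f} px")
```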

  14. Influence of infectious disease seasonality on the performance of the outbreak detection algorithm in the China Infectious Disease Automated-alert and Response System

    PubMed Central

    Wang, Ruiping; Jiang, Yonggen; Guo, Xiaoqin; Wu, Yiling; Zhao, Genming

    2017-01-01

    Objective The Chinese Center for Disease Control and Prevention developed the China Infectious Disease Automated-alert and Response System (CIDARS) in 2008. The CIDARS can detect outbreak signals in a timely manner but generates many false-positive signals, especially for diseases with seasonality. We assessed the influence of seasonality on infectious disease outbreak detection performance. Methods Chickenpox surveillance data in Songjiang District, Shanghai were used. The optimized early alert thresholds for chickenpox were selected according to three algorithm evaluation indexes: sensitivity (Se), false alarm rate (FAR), and time to detection (TTD). Performance of the selected thresholds was assessed using data external to the study period. Results The optimized early alert threshold for chickenpox during the epidemic season was the percentile P65, which demonstrated an Se of 93.33%, a FAR of 0%, and a TTD of 0 days. The optimized early alert threshold in the nonepidemic season was P50, demonstrating an Se of 100%, a FAR of 18.94%, and a TTD of 2.5 days. The performance evaluation demonstrated that the use of an optimized threshold adjusted for seasonality could reduce the FAR and shorten the TTD. Conclusions Selection of optimized early alert thresholds based on local infectious disease seasonality could improve the performance of the CIDARS. PMID:28728470
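
    The percentile-threshold idea is compact: compare today's case count with a percentile of historical counts for the same period, using a higher percentile in the epidemic season. A sketch follows; the toy counts and the simple year-over-year history window are illustrative assumptions, not the CIDARS moving-percentile implementation.

```python
import numpy as np

def alert_threshold(historical_counts, in_season):
    """Seasonally adjusted early-alert threshold.

    P65 in the epidemic season and P50 outside it, following the
    percentiles selected in the paper for chickenpox.
    """
    return np.percentile(historical_counts, 65 if in_season else 50)

history = np.array([3, 5, 2, 8, 6, 4, 7, 9, 5, 6])  # same-period counts, past years
today = 8
if today > alert_threshold(history, in_season=True):
    print("signal: investigate a possible chickenpox outbreak")
```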

  15. Influence of infectious disease seasonality on the performance of the outbreak detection algorithm in the China Infectious Disease Automated-alert and Response System.

    PubMed

    Wang, Ruiping; Jiang, Yonggen; Guo, Xiaoqin; Wu, Yiling; Zhao, Genming

    2018-01-01

    Objective The Chinese Center for Disease Control and Prevention developed the China Infectious Disease Automated-alert and Response System (CIDARS) in 2008. The CIDARS can detect outbreak signals in a timely manner but generates many false-positive signals, especially for diseases with seasonality. We assessed the influence of seasonality on infectious disease outbreak detection performance. Methods Chickenpox surveillance data in Songjiang District, Shanghai were used. The optimized early alert thresholds for chickenpox were selected according to three algorithm evaluation indexes: sensitivity (Se), false alarm rate (FAR), and time to detection (TTD). Performance of the selected thresholds was assessed using data external to the study period. Results The optimized early alert threshold for chickenpox during the epidemic season was the percentile P65, which demonstrated an Se of 93.33%, a FAR of 0%, and a TTD of 0 days. The optimized early alert threshold in the nonepidemic season was P50, demonstrating an Se of 100%, a FAR of 18.94%, and a TTD of 2.5 days. The performance evaluation demonstrated that the use of an optimized threshold adjusted for seasonality could reduce the FAR and shorten the TTD. Conclusions Selection of optimized early alert thresholds based on local infectious disease seasonality could improve the performance of the CIDARS.

  16. Performance standard-based validation study for local lymph node assay: 5-bromo-2-deoxyuridine-flow cytometry method.

    PubMed

    Ahn, Ilyoung; Kim, Tae-Sung; Jung, Eun-Sun; Yi, Jung-Sun; Jang, Won-Hee; Jung, Kyoung-Mi; Park, Miyoung; Jung, Mi-Sook; Jeon, Eun-Young; Yeo, Kyeong-Uk; Jo, Ji-Hoon; Park, Jung-Eun; Kim, Chang-Yul; Park, Yeong-Chul; Seong, Won-Keun; Lee, Ai-Young; Chun, Young Jin; Jeong, Tae Cheon; Jeung, Eui Bae; Lim, Kyung-Min; Bae, SeungJin; Sohn, Soojung; Heo, Yong

    2016-10-01

    The local lymph node assay with the 5-bromo-2-deoxyuridine flow cytometry method (LLNA: BrdU-FCM) is a modified non-radioisotopic technique with the additional advantages of accommodating multiple endpoints through the introduction of FCM, and of refining and reducing animal use by means of a sophisticated prescreening scheme. Reliability and accuracy of the LLNA: BrdU-FCM were determined according to OECD Test Guideline (TG) No. 429 (Skin Sensitization: Local Lymph Node Assay) performance standards (PS), with the participation of four laboratories. Transferability was demonstrated by all participating laboratories successfully producing stimulation index (SI) values for 25% hexyl cinnamic aldehyde (HCA) consistently greater than 3, a predetermined threshold. Within- and between-laboratory reproducibility was shown using HCA and 2,4-dinitrochlorobenzene, in which EC2.7 values (the estimated concentrations eliciting an SI of 2.7, the threshold for LLNA: BrdU-FCM) fell consistently within the acceptance ranges, 5-20% and 0.025-0.1%, respectively. Predictive capacity was tested using the final protocol version 1.3 for the 18 reference chemicals listed in OECD TG 429; the results showed 84.6% sensitivity, 100% specificity, and 88.9% accuracy compared with the original LLNA. The data presented are considered to meet the performance criteria for the PS, and the method's predictive capacity was also sufficiently validated. Copyright © 2016 Elsevier Inc. All rights reserved.

  17. Calculation of femtosecond pulse laser induced damage threshold for broadband antireflective microstructure arrays.

    PubMed

    Jing, Xufeng; Shao, Jianda; Zhang, Junchao; Jin, Yunxia; He, Hongbo; Fan, Zhengxiu

    2009-12-21

    In order to predict the femtosecond-pulse laser-induced damage threshold more exactly, an accurate theoretical model taking into account photoionization, avalanche ionization and the decay of electrons is proposed by comparing several combined ionization models with published experimental measurements. In addition, the transmittance property and the near-field distribution of the 'moth eye' broadband antireflective microstructure directly patterned into the substrate material are computed, as functions of the surface-structure period and groove depth, by a rigorous Fourier modal method. It is found that the near-field distribution is strongly dependent on the periodicity of the surface structure for TE polarization, but for the TM wave it is insensitive to the period. Moreover, the dependence of the femtosecond-pulse laser damage threshold of the surface microstructure on the pulse duration was calculated with the proposed ionization model, taking into account the local maximum electric-field enhancement. For the longer incident wavelength of 1064 nm, the damage threshold shows a weak linear dependence on pulse duration, but for the shorter incident wavelength of 532 nm there is a surprising oscillation peak in the breakdown threshold as a function of pulse duration.

  18. Estimator banks: a new tool for direction-of-arrival estimation

    NASA Astrophysics Data System (ADS)

    Gershman, Alex B.; Boehme, Johann F.

    1997-10-01

    A new powerful tool for improving the threshold performance of direction-of-arrival (DOA) estimation is considered. The essence of our approach is to reduce the number of outliers in the threshold domain using a so-called estimator bank containing multiple 'parallel' underlying DOA estimators which are based on pseudorandom resampling of the MUSIC spatial spectrum for a given data batch or sample covariance matrix. To improve the threshold performance relative to conventional MUSIC, evolutionary principles are used, i.e., only 'successful' underlying estimators (having no failure in the preliminarily estimated source localization sectors) are exploited in the final estimate. An efficient beamspace root implementation of the estimator bank approach is developed, combined with the array interpolation technique, which enables the application to arbitrary arrays. A higher-order extension of our approach is also presented, where the cumulant-based MUSIC estimator is exploited as a basic technique for spatial spectrum resampling. Simulations and experimental data processing show that our algorithm performs well below the MUSIC threshold, namely, it has threshold performance similar to that of the stochastic maximum likelihood (ML) method. At the same time, the computational cost of our algorithm is much lower than that of stochastic ML because no multidimensional optimization is involved.

  19. Effect of a preventive vaccine on the dynamics of HIV transmission

    NASA Astrophysics Data System (ADS)

    Gumel, A. B.; Moghadas, S. M.; Mickens, R. E.

    2004-12-01

    A deterministic mathematical model for the transmission dynamics of HIV infection in the presence of a preventive vaccine is considered. Although the equilibria of the model could not be expressed in closed form, their existence and threshold conditions for their stability are theoretically investigated. It is shown that the disease-free equilibrium is locally asymptotically stable if the basic reproductive number R<1 (thus, HIV disease can be eradicated from the community) and unstable if R>1 (leading to the persistence of HIV within the community). A robust, positivity-preserving, non-standard finite-difference method is constructed and used to solve the model equations. In addition to showing that the anti-HIV vaccine coverage level and the vaccine-induced protection are critically important in reducing the threshold quantity R, our study predicts the minimum threshold values of vaccine coverage and efficacy levels needed to eradicate HIV from the community.
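
    How coverage and efficacy push the threshold quantity below one is often illustrated with the textbook closed form Rv = R0(1 - p·ε). The paper's HIV model is richer and derives its threshold differently, so the sketch below is a generic illustration only, with assumed numbers.

```python
def vaccinated_reproduction_number(r0, coverage, efficacy):
    """Textbook illustration: Rv = R0 * (1 - p * eps).

    Eradication requires Rv < 1, i.e. coverage p > (1 - 1/R0) / eps;
    this closed form is not the paper's model, only the standard
    intuition behind its threshold analysis.
    """
    return r0 * (1.0 - coverage * efficacy)

r0, eps = 2.5, 0.75                      # assumed values for illustration
for p in (0.4, 0.7, 0.9):
    rv = vaccinated_reproduction_number(r0, p, eps)
    verdict = "eradication" if rv < 1.0 else "persistence"
    print(f"coverage {p:.0%}: Rv = {rv:.2f} -> {verdict}")
```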

  20. Local Bifurcations and Optimal Theory in a Delayed Predator-Prey Model with Threshold Prey Harvesting

    NASA Astrophysics Data System (ADS)

    Tankam, Israel; Tchinda Mouofo, Plaire; Mendy, Abdoulaye; Lam, Mountaga; Tewa, Jean Jules; Bowong, Samuel

    2015-06-01

    We investigate the effects of time delay and piecewise-linear threshold policy harvesting in a delayed predator-prey model. This is the first time that a Holling type III response function and the present threshold policy harvesting have been combined with time delay. The trajectories of our delayed system are bounded; the stability of each equilibrium is analyzed with and without delay; local bifurcations such as saddle-node and Hopf bifurcations occur; optimal harvesting is also investigated. Numerical simulations are provided in order to illustrate each result.

  1. Characterization of Rod Function Phenotypes Across a Range of Age-Related Macular Degeneration Severities and Subretinal Drusenoid Deposits

    PubMed Central

    Flynn, Oliver J.; Cukras, Catherine A.; Jeffrey, Brett G.

    2018-01-01

    Purpose To examine spatial changes in rod-mediated function in relationship to local structural changes across the central retina in eyes with a spectrum of age-related macular degeneration (AMD) disease severity. Methods Participants were categorized into five AMD severity groups based on fundus features. Scotopic thresholds were measured at 14 loci spanning ±18° along the vertical meridian from one eye of each of 42 participants (mean = 71.7 ± 9.9 years). Following a 30% bleach, dark adaptation was measured at eight loci (±12°). Rod intercept time (RIT) was defined from the time to detect a −3.1 log cd/m2 stimulus. RITslope was defined from the linear fit of RIT with decreasing retinal eccentricity. The presence of subretinal drusenoid deposits (SDD), ellipsoid (EZ) band disruption, and drusen at the test loci was evaluated using optical coherence tomography. Results Scotopic thresholds indicated greater rod function loss in the macula, which correlated with increasing AMD group severity. RITslope, which captures the spatial change in the rate of dark adaptation, increased with AMD severity (P < 0.0001). Three rod function phenotypes emerged: RF1, normal rod function; RF2, normal scotopic thresholds but slowed dark adaptation; and RF3, elevated scotopic thresholds with slowed dark adaptation. Dark adaptation was slowed at all loci with SDD or EZ band disruption, and at 32% of loci with no local structural changes. Conclusions Three rod function phenotypes were defined from combined measurement of scotopic threshold and dark adaptation. Spatial changes in dark adaptation across the macula were captured with RITslope, which may be a useful outcome measure for functional studies of AMD. PMID:29847647

  2. Prefrontal rTMS for treating depression: location and intensity results from the OPT-TMS multi-site clinical trial.

    PubMed

    Johnson, Kevin A; Baig, Mirza; Ramsey, Dave; Lisanby, Sarah H; Avery, David; McDonald, William M; Li, Xingbao; Bernhardt, Elisabeth R; Haynor, David R; Holtzheimer, Paul E; Sackeim, Harold A; George, Mark S; Nahas, Ziad

    2013-03-01

    Motor cortex localization and motor threshold determination often guide Transcranial Magnetic Stimulation (TMS) placement and intensity settings for non-motor brain stimulation. However, anatomic variability results in variability of placement and effective intensity. Post-study analysis of the OPT-TMS Study reviewed both the final positioning and the effective intensity of stimulation (accounting for relative prefrontal scalp-cortex distances). We acquired MRI scans of 185 patients in a multi-site trial of left prefrontal TMS for depression. Scans had marked motor sites (localized with TMS) and marked prefrontal sites (5 cm anterior of motor cortex by the "5 cm rule"). Based on a visual determination made before the first treatment, TMS therapy occurred either at the 5 cm location or was adjusted 1 cm forward. Stimulation intensity was 120% of resting motor threshold. The "5 cm rule" would have placed stimulation in premotor cortex for 9% of patients, which was reduced to 4% with adjustments. We did not find a statistically significant effect of positioning on remission, but no patients with premotor stimulation achieved remission (0/7). Effective stimulation ranged from 93 to 156% of motor threshold, and no seizures were induced across this range. Patients experienced remission with effective stimulation intensity ranging from 93 to 146% of motor threshold, and we did not find a significant effect of effective intensity on remission. Our data indicate that individualized positioning methods are useful for reducing variability in placement. Stimulation at 120% of motor threshold, unadjusted for scalp-cortex distances, appears safe for a broad range of patients. Copyright © 2013 Elsevier Inc. All rights reserved.

  3. Can adaptive threshold-based metabolic tumor volume (MTV) and lean body mass corrected standard uptake value (SUL) predict prognosis in head and neck cancer patients treated with definitive radiotherapy/chemoradiotherapy?

    PubMed

    Akagunduz, Ozlem Ozkaya; Savas, Recep; Yalman, Deniz; Kocacelebi, Kenan; Esassolak, Mustafa

    2015-11-01

    To evaluate the predictive value of adaptive threshold-based metabolic tumor volume (MTV), maximum standardized uptake value (SUVmax) and maximum lean body mass corrected SUV (SULmax) measured on pretreatment positron emission tomography and computed tomography (PET/CT) imaging in head and neck cancer patients treated with definitive radiotherapy/chemoradiotherapy. Pretreatment PET/CT scans of the 62 patients with locally advanced head and neck cancer who were treated consecutively between May 2010 and February 2013 were reviewed retrospectively. The maximum FDG uptake of the primary tumor was defined according to SUVmax and SULmax. Multiple threshold levels between 60% and 10% of the SUVmax and SULmax were tested with intervals of 5% to 10% in order to define the most suitable threshold value for the metabolic activity of each patient's tumor (adaptive threshold). MTV was calculated according to this value. We evaluated the relationship of the mean values of MTV, SUVmax and SULmax with treatment response, local recurrence, distant metastasis and disease-related death. Receiver-operating characteristic (ROC) curve analysis was done to obtain optimal predictive cut-off values for MTV and SULmax, which were found to have predictive value. Local recurrence-free (LRFS), disease-free (DFS) and overall survival (OS) were examined according to these cut-offs. Forty-six patients had complete response, 15 had partial response, and 1 had stable disease 6 weeks after the completion of treatment. Median follow-up of the entire cohort was 18 months. Of 46 complete responders, 10 had local recurrence, and of 16 partial or no responders, 10 had local progression. Eighteen patients died. Adaptive threshold-based MTV had significant predictive value for treatment response (p=0.011), local recurrence/progression (p=0.050), and disease-related death (p=0.024). SULmax had predictive value for local recurrence/progression (p=0.030). ROC curve analysis revealed a cut-off value of 14.00 mL for MTV and 10.15 for SULmax. Three-year LRFS and DFS rates were significantly lower in patients with MTV ≥ 14.00 mL (p=0.026 and p=0.018, respectively) and SULmax ≥ 10.15 (p=0.017 and p=0.022, respectively). SULmax did not have a significant predictive value for OS, whereas MTV did (p=0.025). Adaptive threshold-based MTV and SULmax could have a role in predicting local control and survival in head and neck cancer patients. Copyright © 2015 Elsevier Inc. All rights reserved.

  4. An Improved Otsu Threshold Segmentation Method for Underwater Simultaneous Localization and Mapping-Based Navigation

    PubMed Central

    Yuan, Xin; Martínez, José-Fernán; Eckert, Martina; López-Santidrián, Lourdes

    2016-01-01

    The main focus of this paper is on extracting features with SOund Navigation And Ranging (SONAR) sensing for further underwater landmark-based Simultaneous Localization and Mapping (SLAM). According to the characteristics of sonar images, in this paper, an improved Otsu threshold segmentation method (TSM) has been developed for feature detection. In combination with a contour detection algorithm, the foreground objects, although presenting different feature shapes, are separated much faster and more precisely than by other segmentation methods. Tests have been made with side-scan sonar (SSS) and forward-looking sonar (FLS) images in comparison with four other TSMs, namely the traditional Otsu method, the local TSM, the iterative TSM and the maximum entropy TSM. For all the sonar images presented in this work, the computational time of the improved Otsu TSM is much lower than that of the maximum entropy TSM, which achieves the highest segmentation precision among the four above-mentioned TSMs. As a result of the segmentations, the centroids of the main extracted regions have been computed to represent point landmarks which can be used for navigation, e.g., with the help of an Augmented Extended Kalman Filter (AEKF)-based SLAM algorithm. The AEKF-SLAM approach is a recursive and iterative estimation-update process, which besides a prediction and an update stage (as in the classical Extended Kalman Filter (EKF)), includes an augmentation stage. During navigation, the robot localizes the centroids of different segments of features in sonar images, which are detected by our improved Otsu TSM, as point landmarks. Using them with the AEKF achieves more accurate and robust estimations of the robot pose and the landmark positions than with those detected by the maximum entropy TSM. Together with the landmarks identified by the proposed segmentation algorithm, the AEKF-SLAM has achieved reliable detection of cycles in the map and consistent map update on loop closure, as shown in simulated experiments. PMID:27455279
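
    As a baseline for the improvement the paper describes, classical Otsu thresholding, choosing the cut that maximizes between-class variance, fits in a dozen lines of numpy; the bimodal synthetic "sonar" intensities below are an illustrative assumption.

```python
import numpy as np

def otsu_threshold(values, bins=256):
    """Classical Otsu: the threshold maximizing between-class variance."""
    hist, edges = np.histogram(values, bins=bins)
    p = hist / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                 # probability of class 0 up to each cut
    mu = np.cumsum(p * centers)       # cumulative mean up to each cut
    w1 = 1.0 - w0
    valid = (w0 > 0) & (w1 > 0)
    sigma_b2 = np.zeros_like(w0)
    sigma_b2[valid] = (mu[-1] * w0[valid] - mu[valid]) ** 2 / (w0[valid] * w1[valid])
    return centers[np.argmax(sigma_b2)]

rng = np.random.default_rng(5)
sonar = np.concatenate([rng.normal(40, 8, 5000),    # background
                        rng.normal(140, 15, 800)])  # foreground objects
t = otsu_threshold(sonar)
print(f"Otsu threshold ~ {t:.1f}; foreground fraction = {(sonar > t).mean():.3f}")
```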

  5. An Improved Otsu Threshold Segmentation Method for Underwater Simultaneous Localization and Mapping-Based Navigation.

    PubMed

    Yuan, Xin; Martínez, José-Fernán; Eckert, Martina; López-Santidrián, Lourdes

    2016-07-22

    The main focus of this paper is on extracting features with SOund Navigation And Ranging (SONAR) sensing for further underwater landmark-based Simultaneous Localization and Mapping (SLAM). According to the characteristics of sonar images, in this paper, an improved Otsu threshold segmentation method (TSM) has been developed for feature detection. In combination with a contour detection algorithm, the foreground objects, although presenting different feature shapes, are separated much faster and more precisely than by other segmentation methods. Tests have been made with side-scan sonar (SSS) and forward-looking sonar (FLS) images in comparison with four other TSMs, namely the traditional Otsu method, the local TSM, the iterative TSM and the maximum entropy TSM. For all the sonar images presented in this work, the computational time of the improved Otsu TSM is much lower than that of the maximum entropy TSM, which achieves the highest segmentation precision among the four above-mentioned TSMs. As a result of the segmentations, the centroids of the main extracted regions have been computed to represent point landmarks which can be used for navigation, e.g., with the help of an Augmented Extended Kalman Filter (AEKF)-based SLAM algorithm. The AEKF-SLAM approach is a recursive and iterative estimation-update process, which besides a prediction and an update stage (as in the classical Extended Kalman Filter (EKF)), includes an augmentation stage. During navigation, the robot localizes the centroids of different segments of features in sonar images, which are detected by our improved Otsu TSM, as point landmarks. Using them with the AEKF achieves more accurate and robust estimations of the robot pose and the landmark positions than with those detected by the maximum entropy TSM. Together with the landmarks identified by the proposed segmentation algorithm, the AEKF-SLAM has achieved reliable detection of cycles in the map and consistent map update on loop closure, as shown in simulated experiments.

  6. Margin to tumor thickness ratio - A predictor of local recurrence and survival in oral squamous cell carcinoma.

    PubMed

    Heiduschka, Gregor; Virk, Sohaib A; Palme, Carsten E; Ch'ng, Sydney; Elliot, Michael; Gupta, Ruta; Clark, Jonathan

    2016-04-01

    To assess whether small oral squamous cell carcinomas (OSCC) require the same margin clearance as large tumors, we evaluated the associations of the ratios of the closest margin to tumor size (MSR) and to tumor thickness (MTR) with local control and survival. The clinicopathologic and follow-up data were obtained for 501 OSCC patients who had surgical resection with curative intent at our institution. MTR and MSR were computed and their associations with local control and survival were assessed using a multivariable Cox regression model. Survival curves were generated using the Kaplan-Meier method. MTR was a better predictor of disease control than MSR. MTR was a predictor of local failure (p=0.033) and disease-specific death (p=0.038) after adjusting for perineural invasion, lymphovascular involvement, nodal status, and radiotherapy. A threshold MTR value of 0.3 was identified, above which the risk of local recurrence was low. The ratio of margin to tumor thickness was an independent predictor of local recurrence and disease-specific death in this cohort. An MTR > 0.3 can serve as a useful tool for adjuvant therapy planning as it combines tumor thickness and margin clearance, two well-established prognostic factors. The minimum safe margin can be calculated by multiplying the tumor thickness by 0.3. Further prospective studies in other institutions are warranted to confirm the prognostic utility of MTR and assess the generalizability of our threshold values. Copyright © 2016 Elsevier Ltd. All rights reserved.
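
    The decision rule is plain arithmetic; as a worked example with assumed measurements, a 10 mm thick tumor requires a margin of at least 0.3 × 10 = 3 mm, so a 2 mm margin (MTR = 0.2) falls below the threshold:

```python
def mtr_check(margin_mm, thickness_mm, threshold=0.3):
    """Margin-to-tumor-thickness ratio and the study's 0.3 rule."""
    mtr = margin_mm / thickness_mm
    return mtr, threshold * thickness_mm   # ratio, minimum safe margin in mm

mtr, min_margin = mtr_check(margin_mm=2.0, thickness_mm=10.0)
print(f"MTR = {mtr:.2f} (threshold 0.30); minimum safe margin = {min_margin:.1f} mm")
```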

  7. Implementation and testing of the first prompt search for gravitational wave transients with electromagnetic counterparts

    NASA Astrophysics Data System (ADS)

    LIGO Scientific Collaboration; Virgo Collaboration; Abadie, J.; Abbott, B. P.; Abbott, R.; Abbott, T. D.; Abernathy, M.; Accadia, T.; Acernese, F.; Adams, C.; Adhikari, R.; Affeldt, C.; Ajith, P.; Allen, B.; Allen, G. S.; Amador Ceron, E.; Amariutei, D.; Amin, R. S.; Anderson, S. B.; Anderson, W. G.; Arai, K.; Arain, M. A.; Araya, M. C.; Aston, S. M.; Astone, P.; Atkinson, D.; Aufmuth, P.; Aulbert, C.; Aylott, B. E.; Babak, S.; Baker, P.; Ballardin, G.; Ballmer, S.; Barker, D.; Barone, F.; Barr, B.; Barriga, P.; Barsotti, L.; Barsuglia, M.; Barton, M. A.; Bartos, I.; Bassiri, R.; Bastarrika, M.; Basti, A.; Batch, J.; Bauchrowitz, J.; Bauer, Th. S.; Bebronne, M.; Behnke, B.; Beker, M. G.; Bell, A. S.; Belletoile, A.; Belopolski, I.; Benacquista, M.; Berliner, J. M.; Bertolini, A.; Betzwieser, J.; Beveridge, N.; Beyersdorf, P. T.; Bilenko, I. A.; Billingsley, G.; Birch, J.; Biswas, R.; Bitossi, M.; Bizouard, M. A.; Black, E.; Blackburn, J. K.; Blackburn, L.; Blair, D.; Bland, B.; Blom, M.; Bock, O.; Bodiya, T. P.; Bogan, C.; Bondarescu, R.; Bondu, F.; Bonelli, L.; Bonnand, R.; Bork, R.; Born, M.; Boschi, V.; Bose, S.; Bosi, L.; Bouhou, B.; Braccini, S.; Bradaschia, C.; Brady, P. R.; Braginsky, V. B.; Branchesi, M.; Brau, J. E.; Breyer, J.; Briant, T.; Bridges, D. O.; Brillet, A.; Brinkmann, M.; Brisson, V.; Britzger, M.; Brooks, A. F.; Brown, D. A.; Brummit, A.; Bulik, T.; Bulten, H. J.; Buonanno, A.; Burguet-Castell, J.; Burmeister, O.; Buskulic, D.; Buy, C.; Byer, R. L.; Cadonati, L.; Cagnoli, G.; Calloni, E.; Camp, J. B.; Campsie, P.; Cannizzo, J.; Cannon, K.; Canuel, B.; Cao, J.; Capano, C. D.; Carbognani, F.; Caride, S.; Caudill, S.; Cavaglià, M.; Cavalier, F.; Cavalieri, R.; Cella, G.; Cepeda, C.; Cesarin, E.; Chaibi, O.; Chalermsongsak, T.; Chalkley, E.; Charlton, P.; Chassande-Mottin, E.; Chelkowski, S.; Chen, Y.; Chincarini, A.; Chiummo, A.; Cho, H.; Christensen, N.; Chua, S. S. Y.; Chung, C. T. Y.; Chung, S.; Ciani, G.; Clara, F.; Clark, D. E.; Clark, J.; Clayton, J. H.; Cleva, F.; Coccia, E.; Cohadon, P.-F.; Colacino, C. N.; Colas, J.; Colla, A.; Colombini, M.; Conte, A.; Conte, R.; Cook, D.; Corbitt, T. R.; Cordier, M.; Cornish, N.; Corsi, A.; Costa, C. A.; Coughlin, M.; Coulon, J.-P.; Couvares, P.; Coward, D. M.; Coyne, D. C.; Creighton, J. D. E.; Creighton, T. D.; Cruise, A. M.; Cumming, A.; Cunningham, L.; Cuoco, E.; Cutler, R. M.; Dahl, K.; Danilishin, S. L.; Dannenberg, R.; D'Antonio, S.; Danzmann, K.; Dattilo, V.; Daudert, B.; Daveloza, H.; Davier, M.; Davies, G.; Daw, E. J.; Day, R.; Dayanga, T.; DeRosa, R.; Debra, D.; Debreczeni, G.; Degallaix, J.; Del Pozzo, W.; Del Prete, M.; Dent, T.; Dergachev, V.; Derosa, R.; Desalvo, R.; Dhillon, V.; Dhurandhar, S.; Di Fiore, L.; Di Lieto, A.; Di Palma, I.; De Paolo Emilio, M.; Di Virgilio, A.; Díaz, M.; Dietz, A.; Diguglielmo, J.; Donovan, F.; Dooley, K. L.; Dorsher, S.; Drago, M.; Drever, R. W. P.; Driggers, J. C.; Du, Z.; Dumas, J.-C.; Dwyer, S.; Eberle, T.; Edgar, M.; Edwards, M.; Effler, A.; Ehrens, P.; Endröczi, G.; Engel, R.; Etzel, T.; Evans, K.; Evans, M.; Evans, T.; Factourovich, M.; Fafone, V.; Fairhurst, S.; Fan, Y.; Farr, B. F.; Farr, W.; Fazi, D.; Fehrmann, H.; Feldbaum, D.; Ferrante, I.; Fidecaro, F.; Finn, L. S.; Fiori, I.; Fisher, R. P.; Flaminio, R.; Flanigan, M.; Foley, S.; Forsi, E.; Forte, L. A.; Fotopoulos, N.; Fournier, J.-D.; Franc, J.; Frasca, S.; Frasconi, F.; Frede, M.; Frei, M.; Frei, Z.; Freise, A.; Frey, R.; Fricke, T. T.; Fridriksson, J. K.; Friedrich, D.; Fritschel, P.; Frolov, V. V.; Fulda, P. 
J.; Fyffe, M.; Galimberti, M.; Gammaitoni, L.; Ganija, M. R.; Garcia, J.; Garofoli, J. A.; Garufi, F.; Gáspár, M. E.; Gemme, G.; Geng, R.; Genin, E.; Gennai, A.; Gergely, L. Á.; Ghosh, S.; Giaime, J. A.; Giampanis, S.; Giardina, K. D.; Giazotto, A.; Gill, C.; Goetz, E.; Goggin, L. M.; González, G.; Gorodetsky, M. L.; Goßler, S.; Gouaty, R.; Graef, C.; Granata, M.; Grant, A.; Gras, S.; Gray, C.; Gray, N.; Greenhalgh, R. J. S.; Gretarsson, A. M.; Greverie, C.; Grosso, R.; Grote, H.; Grunewald, S.; Guidi, G. M.; Guido, C.; Gupta, R.; Gustafson, E. K.; Gustafson, R.; Ha, T.; Hage, B.; Hallam, J. M.; Hammer, D.; Hammond, G.; Hanks, J.; Hanna, C.; Hanson, J.; Harms, J.; Harry, G. M.; Harry, I. W.; Harstad, E. D.; Hartman, M. T.; Haughian, K.; Hayama, K.; Hayau, J.-F.; Hayler, T.; Heefner, J.; Heidmann, A.; Heintze, M. C.; Heitmann, H.; Hello, P.; Hendry, M. A.; Heng, I. S.; Heptonstall, A. W.; Herrera, V.; Hewitson, M.; Hild, S.; Hoak, D.; Hodge, K. A.; Holt, K.; Homan, J.; Hong, T.; Hooper, S.; Hosken, D. J.; Hough, J.; Howell, E. J.; Hughey, B.; Husa, S.; Huttner, S. H.; Huynh-Dinh, T.; Ingram, D. R.; Inta, R.; Isogai, T.; Ivanov, A.; Izumi, K.; Jacobson, M.; Jang, H.; Jaranowski, P.; Johnson, W. W.; Jones, D. I.; Jones, G.; Jones, R.; Ju, L.; Kalmus, P.; Kalogera, V.; Kamaretsos, I.; Kandhasamy, S.; Kang, G.; Kanner, J. B.; Katsavounidis, E.; Katzman, W.; Kaufer, H.; Kawabe, K.; Kawamura, S.; Kawazoe, F.; Kells, W.; Keppel, D. G.; Keresztes, Z.; Khalaidovski, A.; Khalili, F. Y.; Khazanov, E. A.; Kim, B.; Kim, C.; Kim, D.; Kim, H.; Kim, K.; Kim, N.; Kim, Y.-M.; King, P. J.; Kinsey, M.; Kinzel, D. L.; Kissel, J. S.; Klimenko, S.; Kokeyama, K.; Kondrashov, V.; Kopparapu, R.; Koranda, S.; Korth, W. Z.; Kowalska, I.; Kozak, D.; Kringel, V.; Krishnamurthy, S.; Krishnan, B.; Krâ´Olak, A.; Kuehn, G.; Kumar, R.; Kwee, P.; Laas-Bourez, M.; Lam, P. K.; Landry, M.; Lang, M.; Lantz, B.; Lastzka, N.; Lawrie, C.; Lazzarini, A.; Leaci, P.; Lee, C. H.; Lee, H. M.; Leindecker, N.; Leong, J. R.; Leonor, I.; Leroy, N.; Letendre, N.; Li, J.; Li, T. G. F.; Liguori, N.; Lindquist, P. E.; Lockerbie, N. A.; Lodhia, D.; Lorenzini, M.; Loriette, V.; Lormand, M.; Losurdo, G.; Luan, J.; Lubinski, M.; Lück, H.; Lundgren, A. P.; MacDonald, E.; Machenschalk, B.; Macinnis, M.; MacLeod, D. M.; Mageswaran, M.; Mailand, K.; Majorana, E.; Maksimovic, I.; Man, N.; Mandel, I.; Mandic, V.; Mantovani, M.; Marandi, A.; Marchesoni, F.; Marion, F.; Márka, S.; Márka, Z.; Markosyan, A.; Maros, E.; Marque, J.; Martelli, F.; Martin, I. W.; Martin, R. M.; Marx, J. N.; Mason, K.; Masserot, A.; Matichard, F.; Matone, L.; Matzner, R. A.; Mavalvala, N.; Mazzolo, G.; McCarthy, R.; McClelland, D. E.; McDaniel, P.; McGuire, S. C.; McIntyre, G.; McIver, J.; McKechan, D. J. A.; Meadors, G. D.; Mehmet, M.; Meier, T.; Melatos, A.; Melissinos, A. C.; Mendell, G.; Menendez, D.; Mercer, R. A.; Meshkov, S.; Messenger, C.; Meyer, M. S.; Miao, H.; Michel, C.; Milano, L.; Miller, J.; Minenkov, Y.; Mitrofanov, V. P.; Mitselmakher, G.; Mittleman, R.; Miyakawa, O.; Moe, B.; Moesta, P.; Mohan, M.; Mohanty, S. D.; Mohapatra, S. R. P.; Moraru, D.; Moreno, G.; Morgado, N.; Morgia, A.; Mori, T.; Mosca, S.; Mossavi, K.; Mours, B.; Mow-Lowry, C. M.; Mueller, C. L.; Mueller, G.; Mukherjee, S.; Mullavey, A.; Müller-Ebhardt, H.; Munch, J.; Murphy, D.; Murray, P. G.; Mytidis, A.; Nash, T.; Naticchioni, L.; Nawrodt, R.; Necula, V.; Nelson, J.; Newton, G.; Nishizawa, A.; Nocera, F.; Nolting, D.; Nuttall, L.; Ochsner, E.; O'Dell, J.; Oelker, E.; Ogin, G. H.; Oh, J. 
J.; Oh, S. H.; Oldenburg, R. G.; O'Reilly, B.; O'Shaughnessy, R.; Osthelder, C.; Ott, C. D.; Ottaway, D. J.; Ottens, R. S.; Overmier, H.; Owen, B. J.; Page, A.; Pagliaroli, G.; Palladino, L.; Palomba, C.; Pan, Y.; Pankow, C.; Paoletti, F.; Papa, M. A.; Parisi, M.; Pasqualetti, A.; Passaquieti, R.; Passuello, D.; Patel, P.; Pedraza, M.; Peiris, P.; Pekowsky, L.; Penn, S.; Peralta, C.; Perreca, A.; Persichetti, G.; Phelps, M.; Pickenpack, M.; Piergiovanni, F.; Pietka, M.; Pinard, L.; Pinto, I. M.; Pitkin, M.; Pletsch, H. J.; Plissi, M. V.; Poggiani, R.; Pöld, J.; Postiglione, F.; Prato, M.; Predoi, V.; Price, L. R.; Prijatelj, M.; Principe, M.; Privitera, S.; Prix, R.; Prodi, G. A.; Prokhorov, L.; Puncken, O.; Punturo, M.; Puppo, P.; Quetschke, V.; Raab, F. J.; Rabeling, D. S.; Rácz, I.; Radkins, H.; Raffai, P.; Rakhmanov, M.; Ramet, C. R.; Rankins, B.; Rapagnani, P.; Rapoport, S.; Raymond, V.; Re, V.; Redwine, K.; Reed, C. M.; Reed, T.; Regimbau, T.; Reid, S.; Reitze, D. H.; Ricci, F.; Riesen, R.; Riles, K.; Robertson, N. A.; Robinet, F.; Robinson, C.; Robinson, E. L.; Rocchi, A.; Roddy, S.; Rodriguez, C.; Rodruck, M.; Rolland, L.; Rollins, J.; Romano, J. D.; Romano, R.; Romie, J. H.; Rosińska, D.; Röver, C.; Rowan, S.; Rüdiger, A.; Ruggi, P.; Ryan, K.; Ryll, H.; Sainathan, P.; Sakosky, M.; Salemi, F.; Samblowski, A.; Sammut, L.; Sancho de La Jordana, L.; Sandberg, V.; Sankar, S.; Sannibale, V.; Santamaría, L.; Santiago-Prieto, I.; Santostasi, G.; Sassolas, B.; Sathyaprakash, B. S.; Sato, S.; Saulson, P. R.; Savage, R. L.; Schilling, R.; Schlamminger, S.; Schnabel, R.; Schofield, R. M. S.; Schulz, B.; Schutz, B. F.; Schwinberg, P.; Scott, J.; Scott, S. M.; Searle, A. C.; Seifert, F.; Sellers, D.; Sengupta, A. S.; Sentenac, D.; Sergeev, A.; Shaddock, D. A.; Shaltev, M.; Shapiro, B.; Shawhan, P.; Shoemaker, D. H.; Sibley, A.; Siemens, X.; Sigg, D.; Singer, A.; Singer, L.; Sintes, A. M.; Skelton, G.; Slagmolen, B. J. J.; Slutsky, J.; Smith, J. R.; Smith, M. R.; Smith, N. D.; Smith, R. J. E.; Somiya, K.; Sorazu, B.; Soto, J.; Speirits, F. C.; Sperandio, L.; Stefszky, M.; Stein, A. J.; Steinert, E.; Steinlechner, J.; Steinlechner, S.; Steplewski, S.; Stochino, A.; Stone, R.; Strain, K. A.; Strigin, S.; Stroeer, A. S.; Sturani, R.; Stuver, A. L.; Summerscales, T. Z.; Sung, M.; Susmithan, S.; Sutton, P. J.; Swinkels, B.; Tacca, M.; Taffarello, L.; Talukder, D.; Tanner, D. B.; Tarabrin, S. P.; Taylor, J. R.; Taylor, R.; Thomas, P.; Thorne, K. A.; Thorne, K. S.; Thrane, E.; Thüring, A.; Titsler, C.; Tokmakov, K. V.; Toncelli, A.; Tonelli, M.; Torre, O.; Torres, C.; Torrie, C. I.; Tournefier, E.; Travasso, F.; Traylor, G.; Trias, M.; Tseng, K.; Ugolini, D.; Urbanek, K.; Vahlbruch, H.; Vajente, G.; Vallisneri, M.; van den Brand, J. F. J.; van den Broeck, C.; van der Putten, S.; van Veggel, A. A.; Vass, S.; Vasuth, M.; Vaulin, R.; Vavoulidis, M.; Vecchio, A.; Vedovato, G.; Veitch, J.; Veitch, P. J.; Veltkamp, C.; Verkindt, D.; Vetrano, F.; Viceré, A.; Villar, A. E.; Vinet, J.-Y.; Vitale, S.; Vitale, S.; Vocca, H.; Vorvick, C.; Vyatchanin, S. P.; Wade, A.; Waldman, S. J.; Wallace, L.; Wan, Y.; Wang, X.; Wang, Z.; Wanner, A.; Ward, R. L.; Was, M.; Wei, P.; Weinert, M.; Weinstein, A. J.; Weiss, R.; Wen, L.; Wen, S.; Wessels, P.; West, M.; Westphal, T.; Wette, K.; Whelan, J. T.; Whitcomb, S. E.; White, D.; Whiting, B. F.; Wilkinson, C.; Willems, P. A.; Williams, H. R.; Williams, L.; Willke, B.; Winkelmann, L.; Winkler, W.; Wipf, C. C.; Wiseman, A. 
G.; Wittel, H.; Woan, G.; Wooley, R.; Worden, J.; Yablon, J.; Yakushin, I.; Yamamoto, H.; Yamamoto, K.; Yang, H.; Yeaton-Massey, D.; Yoshida, S.; Yu, P.; Yvert, M.; Zadroźny, A.; Zanolin, M.; Zendri, J.-P.; Zhang, F.; Zhang, L.; Zhang, W.; Zhang, Z.; Zhao, C.; Zotov, N.; Zucker, M. E.; Zweizig, J.; Akerlof, C.; Boer, M.; Fender, R.; Gehrels, N.; Klotz, A.; Ofek, E. O.; Smith, M.; Sokolowski, M.; Stappers, B. W.; Steele, I.; Swinbank, J.; Wijers, R. A. M. J.

    2012-04-01

    Aims: A transient astrophysical event observed in both gravitational wave (GW) and electromagnetic (EM) channels would yield rich scientific rewards. A first program initiating EM follow-ups to possible transient GW events has been developed and exercised by the LIGO and Virgo community in association with several partners. In this paper, we describe and evaluate the methods used to promptly identify and localize GW event candidates and to request images of targeted sky locations. Methods: During two observing periods (Dec. 17, 2009 to Jan. 8, 2010 and Sep. 2 to Oct. 20, 2010), a low-latency analysis pipeline was used to identify GW event candidates and to reconstruct maps of possible sky locations. A catalog of nearby galaxies and Milky Way globular clusters was used to select the most promising sky positions to be imaged, and this directional information was delivered to EM observatories with time lags of about thirty minutes. A Monte Carlo simulation has been used to evaluate the low-latency GW pipeline's ability to reconstruct source positions correctly. Results: For signals near the detection threshold, our low-latency algorithms often localized simulated GW burst signals to tens of square degrees, while neutron star/neutron star inspirals and neutron star/black hole inspirals were localized to a few hundred square degrees. Localization precision improves for moderately stronger signals. The correct sky location of signals well above threshold and originating from nearby galaxies may be observed with ~50% or better probability with a few pointings of wide-field telescopes.

  8. Implementation and Testing of the First Prompt Search for Electromagnetic Counterparts to Gravitational Wave Transients

    NASA Technical Reports Server (NTRS)

    Abadie, J.; Abbott, B. P.; Abbott, R.; Abbott, T. D.; Abernathy, M.; Accadia, T.; Acernese, F.; Adams, C.; Adhikari, R.; Affeldt, C.; et al.

    2011-01-01

    A transient astrophysical event observed in both gravitational wave (GW) and electromagnetic (EM) channels would yield rich scientific rewards. A first program initiating EM follow-ups to possible transient GW events has been developed and exercised by the LIGO and Virgo community in association with several partners. In this paper, we describe and evaluate the methods used to promptly identify and localize GW event candidates and to request images of targeted sky locations. Methods. During two observing periods (Dec 17 2009 to Jan 8 2010 and Sep 2 to Oct 20 2010), a low-latency analysis pipeline was used to identify GW event candidates and to reconstruct maps of possible sky locations. A catalog of nearby galaxies and Milky Way globular clusters was used to select the most promising sky positions to be imaged, and this directional information was delivered to EM observatories with time lags of about thirty minutes. A Monte Carlo simulation has been used to evaluate the low-latency GW pipeline's ability to reconstruct source positions correctly. Results. For signals near the detection threshold, our low-latency algorithms often localized simulated GW burst signals to tens of square degrees, while neutron star/neutron star inspirals and neutron star/black hole inspirals were localized to a few hundred square degrees. Localization precision improves for moderately stronger signals. The correct sky location of signals well above threshold and originating from nearby galaxies may be observed with 50% or better probability with a few pointings of wide-field telescopes.

  9. Implementation and Testing of the First Prompt Search for Gravitational Wave Transients with Electromagnetic Counterparts

    NASA Technical Reports Server (NTRS)

    Abadie, J.; Abbott, B. P.; Abbott, R.; Abbott, T. D.; Abernathy, M.; Accadia, T.; Acernese, F.; Adams, C.; Adhikari, R.; Affeldt, C.; et al.

    2012-01-01

    Aims. A transient astrophysical event observed in both gravitational wave (GW) and electromagnetic (EM) channels would yield rich scientific rewards. A first program initiating EM follow-ups to possible transient GW events has been developed and exercised by the LIGO and Virgo community in association with several partners. In this paper, we describe and evaluate the methods used to promptly identify and localize GW event candidates and to request images of targeted sky locations. Methods. During two observing periods (Dec. 17, 2009 to Jan. 8, 2010 and Sep. 2 to Oct. 20, 2010), a low-latency analysis pipeline was used to identify GW event candidates and to reconstruct maps of possible sky locations. A catalog of nearby galaxies and Milky Way globular clusters was used to select the most promising sky positions to be imaged, and this directional information was delivered to EM observatories with time lags of about thirty minutes. A Monte Carlo simulation has been used to evaluate the low-latency GW pipeline's ability to reconstruct source positions correctly. Results. For signals near the detection threshold, our low-latency algorithms often localized simulated GW burst signals to tens of square degrees, while neutron star/neutron star inspirals and neutron star/black hole inspirals were localized to a few hundred square degrees. Localization precision improves for moderately stronger signals. The correct sky location of signals well above threshold and originating from nearby galaxies may be observed with 50% or better probability with a few pointings of wide-field telescopes.

  10. Fast evaluation of scaled opposite spin second-order Møller-Plesset correlation energies using auxiliary basis expansions and exploiting sparsity.

    PubMed

    Jung, Yousung; Shao, Yihan; Head-Gordon, Martin

    2007-09-01

    The scaled opposite spin Møller-Plesset method (SOS-MP2) is an economical way of obtaining correlation energies that are computationally cheaper, and yet, in a statistical sense, of higher quality than standard MP2 theory, by introducing one empirical parameter. But SOS-MP2 still has a fourth-order scaling step that makes the method inapplicable to very large molecular systems. We reduce the scaling of SOS-MP2 by exploiting the sparsity of expansion coefficients and local integral matrices, by performing local auxiliary basis expansions for the occupied-virtual product distributions. To exploit sparsity of 3-index local quantities, we use a blocking scheme in which entire zero-rows and columns, for a given third global index, are deleted by comparison against a numerical threshold. This approach minimizes sparse matrix book-keeping overhead, and also provides sufficiently large submatrices after blocking, to allow efficient matrix-matrix multiplies. The resulting algorithm is formally cubic scaling, and requires only moderate computational resources (quadratic memory and disk space) and, in favorable cases, is shown to yield effective quadratic scaling behavior in the size regime we can apply it to. Errors associated with local fitting using the attenuated Coulomb metric and numerical thresholds in the blocking procedure are found to be insignificant in terms of the predicted relative energies. A diverse set of test calculations shows that the size of system where significant computational savings can be achieved depends strongly on the dimensionality of the system, and the extent of localizability of the molecular orbitals. Copyright 2007 Wiley Periodicals, Inc.
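
    The zero-row/zero-column blocking the abstract describes can be illustrated with plain numpy: for each global third index, rows and columns whose entries all fall below a numerical threshold are deleted, leaving a dense submatrix for efficient multiplies. The tolerance and the toy matrix below are illustrative assumptions.

```python
import numpy as np

def block_nonzero(matrix, tol=1e-10):
    """Delete numerically zero rows and columns from one 2-index slice.

    For a 3-index quantity B[P][i, a] this is applied per global index P;
    the surviving rows/columns form a dense submatrix suited to efficient
    matrix-matrix multiplies, minimizing sparse bookkeeping overhead.
    """
    rows = np.flatnonzero(np.abs(matrix).max(axis=1) > tol)
    cols = np.flatnonzero(np.abs(matrix).max(axis=0) > tol)
    return matrix[np.ix_(rows, cols)], rows, cols

rng = np.random.default_rng(6)
B = np.zeros((12, 12))
B[2:5, 6:10] = rng.normal(size=(3, 4))   # a local, spatially compact block
sub, rows, cols = block_nonzero(B)
print(B.shape, "->", sub.shape)          # (12, 12) -> (3, 4)
```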

  11. MLESAC Based Localization of Needle Insertion Using 2D Ultrasound Images

    NASA Astrophysics Data System (ADS)

    Xu, Fei; Gao, Dedong; Wang, Shan; Zhanwen, A.

    2018-04-01

    In 2D ultrasound images of ultrasound-guided percutaneous needle insertion, it is difficult to determine the positions of the needle axis and tip because of the existence of artifacts and other noise. In this work the speckle is regarded as the noise of an ultrasound image, and a novel algorithm is presented to detect the needle in a 2D ultrasound image. Firstly, the wavelet soft-thresholding technique based on the BayesShrink rule is used to denoise the speckle of the ultrasound image. Secondly, we add Otsu's thresholding method and morphologic operations to pre-process the ultrasound image. Finally, the needle is localized in the 2D ultrasound image based on the maximum likelihood estimation sample consensus (MLESAC) algorithm. The experimental results show that the proposed algorithm is valid for estimating the position of the needle axis and tip in ultrasound images. This work is expected to be useful in path planning and robot-assisted needle insertion procedures.
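
    The first stage, BayesShrink soft thresholding, is standard enough to sketch with pywt: the noise level is estimated from the finest diagonal subband via the median absolute deviation, and each detail subband is soft-thresholded at σn²/σx. The db4 wavelet, three levels, and the additive-noise stand-in for speckle are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np
import pywt

def bayes_shrink_denoise(image, wavelet="db4", level=3):
    """Wavelet soft-threshold denoising with BayesShrink-style thresholds."""
    coeffs = pywt.wavedec2(image.astype(float), wavelet, level=level)
    # Noise sigma from the finest diagonal subband (median absolute deviation).
    sigma_n = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    out = [coeffs[0]]
    for details in coeffs[1:]:
        shrunk = []
        for band in details:
            sigma_x = np.sqrt(max(band.var() - sigma_n**2, 1e-12))
            shrunk.append(pywt.threshold(band, sigma_n**2 / sigma_x, mode="soft"))
        out.append(tuple(shrunk))
    rec = pywt.waverec2(out, wavelet)
    return rec[:image.shape[0], :image.shape[1]]   # crop any padding

rng = np.random.default_rng(7)
clean = np.zeros((128, 128)); clean[:, 60:64] = 80.0      # needle-like stripe
noisy = clean + rng.normal(0.0, 20.0, clean.shape)
err_before = np.abs(noisy - clean).mean()
err_after = np.abs(bayes_shrink_denoise(noisy) - clean).mean()
print(f"mean abs error: {err_before:.1f} -> {err_after:.1f}")
```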

  12. Large-Scale Propagation of Ultrasound in a 3-D Breast Model Based on High-Resolution MRI Data

    PubMed Central

    Tillett, Jason C.; Metlay, Leon A.; Waag, Robert C.

    2010-01-01

    A 40 × 35 × 25-mm3 specimen of human breast consisting mostly of fat and connective tissue was imaged using a 3-T magnetic resonance scanner. The resolutions in the image plane and in the orthogonal direction were 130 μm and 150 μm, respectively. Initial processing to prepare the data for segmentation consisted of contrast inversion, interpolation, and noise reduction. Noise reduction used a multilevel bidirectional median filter to preserve edges. The volume of data was segmented into regions of fat and connective tissue by using a combination of local and global thresholding. Local thresholding was performed to preserve fine detail, while global thresholding was performed to minimize the interclass variance between voxels classified as background and voxels classified as object. After smoothing the data to avoid aliasing artifacts, the segmented data volume was visualized using iso-surfaces. The isosurfaces were enhanced using transparency, lighting, shading, reflectance, and animation. Computations of pulse propagation through the model illustrate its utility for the study of ultrasound aberration. The results show the feasibility of using the described combination of methods to demonstrate tissue morphology in a form that provides insight about the way ultrasound beams are aberrated in three dimensions by tissue. PMID:20172794
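
    One straightforward way to combine the two criteria the abstract names is to require agreement between a global Otsu cut and a local adaptive threshold; skimage provides both. The AND combination, block size and synthetic slice below are assumptions for illustration, not the paper's exact scheme.

```python
import numpy as np
from skimage.filters import threshold_local, threshold_otsu

def segment_local_global(image, block_size=51):
    """Label a pixel as object only if it passes both threshold tests.

    The global Otsu threshold handles the overall fat/connective split;
    the local (adaptive) threshold preserves fine detail such as thin
    septa, mirroring the combination described in the abstract.
    """
    passes_global = image > threshold_otsu(image)
    passes_local = image > threshold_local(image, block_size=block_size)
    return passes_global & passes_local

rng = np.random.default_rng(8)
img = rng.normal(100.0, 10.0, (96, 96))
img[30:60, 30:60] += 60.0                  # brighter tissue region
print(segment_local_global(img).sum(), "pixels labeled as connective tissue")
```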

  13. Large-scale propagation of ultrasound in a 3-D breast model based on high-resolution MRI data.

    PubMed

    Salahura, Gheorghe; Tillett, Jason C; Metlay, Leon A; Waag, Robert C

    2010-06-01

    A 40 × 35 × 25-mm³ specimen of human breast consisting mostly of fat and connective tissue was imaged using a 3-T magnetic resonance scanner. The resolutions in the image plane and in the orthogonal direction were 130 μm and 150 μm, respectively. Initial processing to prepare the data for segmentation consisted of contrast inversion, interpolation, and noise reduction. Noise reduction used a multilevel bidirectional median filter to preserve edges. The volume of data was segmented into regions of fat and connective tissue by using a combination of local and global thresholding. Local thresholding was performed to preserve fine detail, while global thresholding was performed to minimize the interclass variance between voxels classified as background and voxels classified as object. After smoothing the data to avoid aliasing artifacts, the segmented data volume was visualized using isosurfaces. The isosurfaces were enhanced using transparency, lighting, shading, reflectance, and animation. Computations of pulse propagation through the model illustrate its utility for the study of ultrasound aberration. The results show the feasibility of using the described combination of methods to demonstrate tissue morphology in a form that provides insight about the way ultrasound beams are aberrated in three dimensions by tissue.

  14. CEM43°C thermal dose thresholds: a potential guide for magnetic resonance radiofrequency exposure levels?

    PubMed

    van Rhoon, Gerard C; Samaras, Theodoros; Yarmolenko, Pavel S; Dewhirst, Mark W; Neufeld, Esra; Kuster, Niels

    2013-08-01

    To define thresholds of safe local temperature increases for MR equipment that exposes patients to radiofrequency fields of high intensities for long duration. These MR systems induce heterogeneous energy absorption patterns inside the body and can create localised hotspots with a risk of overheating. The MRI + EUREKA research consortium organised a "Thermal Workshop on RF Hotspots". The available literature on thresholds for thermal damage and the validity of the thermal dose (TD) model were discussed. The following global TD threshold guidelines for safe use of MR are proposed:

    1. All persons: maximum local temperature of any tissue limited to 39 °C.
    2. Persons with compromised thermoregulation AND (a) uncontrolled conditions: maximum local temperature limited to 39 °C; (b) controlled conditions: TD < 2 CEM43°C.
    3. Persons with uncompromised thermoregulation AND (a) uncontrolled conditions: TD < 2 CEM43°C; (b) controlled conditions: TD < 9 CEM43°C.

    The following definitions are applied. Controlled conditions: a medical doctor or a dedicated trained person can respond instantly to heat-induced physiological stress. Compromised thermoregulation: all persons with impaired systemic or reduced local thermoregulation. Key points:

    • Standard MRI can cause local heating by radiofrequency absorption.
    • Monitoring thermal dose (in units of CEM43°C) can control risk during MRI.
    • 9 CEM43°C seems an acceptable thermal dose threshold for most patients.
    • For skin, muscle, fat, and bone, 16 CEM43°C is likely acceptable.
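
    The TD limits above are expressed in CEM43°C (cumulative equivalent minutes at 43 °C). A minimal sketch of the standard dose computation, assuming the common convention R = 0.5 at or above 43 °C and R = 0.25 below (conventions for very low temperatures vary in the literature):

    ```python
    import numpy as np

    def cem43(temps_c, dt_min):
        """Cumulative equivalent minutes at 43 °C for a sampled temperature
        history: CEM43 = sum_i dt * R**(43 - T_i)."""
        temps_c = np.asarray(temps_c, dtype=float)
        r = np.where(temps_c >= 43.0, 0.5, 0.25)
        return float(np.sum(dt_min * r ** (43.0 - temps_c)))

    # Example: 30 min at a constant 42 °C hotspot gives 7.5 CEM43,
    # exceeding the 2 CEM43 threshold proposed for uncontrolled conditions.
    print(cem43([42.0] * 30, dt_min=1.0))
    ```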

  15. Threshold pressure for mechanoluminescence of macrocrystals, microcrystals and nanocrystals of doped zinc sulphide

    NASA Astrophysics Data System (ADS)

    Chandra, B. P.; Chandra, V. K.; Jha, Piyush; Sonwane, V. D.

    2016-06-01

    The threshold pressure for elastico-mechanoluminescence (EML) of ZnS:Mn macrocrystals is 20 MPa, and ZnS:Cu,Al macrocrystals do not show ML during elastic deformation. However, the threshold pressure for EML of ZnS:Mn and ZnS:Cu,Cl microcrystals and nanocrystals is nearly 1 MPa. It therefore appears that the high concentration of defects in microcrystalline and nanocrystalline ZnS:Mn and ZnS:Cu,Cl produces disorder and distortion in the lattice and changes the local crystal structure near the impurities; consequently, the enhanced piezoelectric constant of the local region produces EML at a low value of applied pressure. The threshold pressure for ML of ZnS:Mn and ZnS:Cu,Al single macrocrystals is higher because such crystals possess comparatively few defects near the impurities, where the phase transition is not possible, and their ML requires a high value of stress because the bulk piezoelectric constant is smaller. Thus, the size-dependent threshold pressure for ML supports a piezoelectric origin of EML in local regions of the crystals. The findings of the present investigation may be useful in tailoring phosphors emitting intense EML of different colours.

  16. Stable and unstable roots of ion temperature gradient driven mode using curvature modified plasma dispersion functions

    NASA Astrophysics Data System (ADS)

    Gültekin, Ö.; Gürcan, Ö. D.

    2018-02-01

    Basic, local kinetic theory of the ion temperature gradient driven (ITG) mode with adiabatic electrons is reconsidered. Standard unstable, purely oscillating, as well as damped solutions of the local dispersion relation are obtained using a bracketing technique that uses the argument principle. This method requires computing the plasma dielectric function and its derivatives, which are implemented here using modified plasma dispersion functions with curvature and their derivatives, and allows bracketing/following the zeros of the plasma dielectric function, which correspond to different roots of the ITG dispersion relation. We provide an open source implementation of the derivatives of modified plasma dispersion functions with curvature, which are used in this formulation. Studying the local ITG dispersion, we find that near the threshold of instability the unstable branch is rather asymmetric, with oscillating solutions towards lower wave numbers (i.e. drift waves) and damped solutions towards higher wave numbers. This suggests that a process akin to an inverse cascade, coupling to the oscillating branch at lower wave numbers, may play a role in the nonlinear evolution of the ITG near the instability threshold. Also, using the algorithm, the linear wave diffusion is estimated for the marginally stable ITG mode.

  17. Threshold matrix for digital halftoning by genetic algorithm optimization

    NASA Astrophysics Data System (ADS)

    Alander, Jarmo T.; Mantere, Timo J.; Pyylampi, Tero

    1998-10-01

    Digital halftoning is used both in low and high resolution high quality printing technologies. Our method is designed to be mainly used for low resolution ink jet marking machines to produce both gray tone and color images. The main problem with digital halftoning is pink noise caused by the human eye's visual transfer function. To compensate for this the random dot patterns used are optimized to contain more blue than pink noise. Several such dot pattern generator threshold matrices have been created automatically by using genetic algorithm optimization, a non-deterministic global optimization method imitating natural evolution and genetics. A hybrid of genetic algorithm with a search method based on local backtracking was developed together with several fitness functions evaluating dot patterns for rectangular grids. By modifying the fitness function, a family of dot generators results, each with its particular statistical features. Several versions of genetic algorithms, backtracking and fitness functions were tested to find a reasonable combination. The generated threshold matrices have been tested by simulating a set of test images using the Khoros image processing system. Even though the work was focused on developing low resolution marking technology, the resulting family of dot generators can be applied also in other halftoning application areas including high resolution printing technology.
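
    For context, applying a threshold matrix for ordered-dither halftoning reduces to tiling the matrix over the image and comparing; a GA-optimized, blue-noise-biased matrix such as those described above would simply replace the random stand-in used in this sketch.

    ```python
    import numpy as np

    def halftone(gray, threshold_matrix):
        """Ordered dithering: tile the threshold matrix over the image and
        turn a dot on wherever the gray level exceeds the local threshold."""
        h, w = gray.shape
        th, tw = threshold_matrix.shape
        tiled = np.tile(threshold_matrix, (h // th + 1, w // tw + 1))[:h, :w]
        return (gray > tiled).astype(np.uint8)

    rng = np.random.default_rng(1)
    matrix = rng.random((16, 16))  # stand-in for a GA-optimized matrix
    ramp = np.linspace(0, 1, 256)[None, :].repeat(64, axis=0)  # gray ramp
    dots = halftone(ramp, matrix)
    ```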

  18. Position-dependent hearing in three species of bushcrickets (Tettigoniidae, Orthoptera)

    PubMed Central

    Lakes-Harlan, Reinhard; Scherberich, Jan

    2015-01-01

    A primary task of auditory systems is the localization of sound sources in space. Sound source localization in azimuth is usually based on temporal or intensity differences of sounds between the bilaterally arranged ears. In mammals, localization in elevation is possible by transfer functions at the ear, especially the pinnae. Although insects are able to locate sound sources, little attention is given to the mechanisms of acoustic orientation to elevated positions. Here we comparatively analyse the peripheral hearing thresholds of three species of bushcrickets with respect to sound source positions in space. The hearing thresholds across frequencies depend on the location of a sound source in the three-dimensional hearing space in front of the animal. Thresholds differ for different azimuthal positions and for different positions in elevation. This position-dependent frequency tuning is species specific. The largest differences in thresholds between positions are found in Ancylecha fenestrata. Correspondingly, A. fenestrata has a rather complex ear morphology including cuticular folds covering the anterior tympanal membrane. The position-dependent tuning might contribute to sound source localization in their habitats. Acoustic orientation might be a selective factor for the evolution of morphological structures at the bushcricket ear and, speculatively, even for frequency fractioning in the ear. PMID:26543574

  19. Position-dependent hearing in three species of bushcrickets (Tettigoniidae, Orthoptera).

    PubMed

    Lakes-Harlan, Reinhard; Scherberich, Jan

    2015-06-01

    A primary task of auditory systems is the localization of sound sources in space. Sound source localization in azimuth is usually based on temporal or intensity differences of sounds between the bilaterally arranged ears. In mammals, localization in elevation is possible by transfer functions at the ear, especially the pinnae. Although insects are able to locate sound sources, little attention is given to the mechanisms of acoustic orientation to elevated positions. Here we comparatively analyse the peripheral hearing thresholds of three species of bushcrickets with respect to sound source positions in space. The hearing thresholds across frequencies depend on the location of a sound source in the three-dimensional hearing space in front of the animal. Thresholds differ for different azimuthal positions and for different positions in elevation. This position-dependent frequency tuning is species specific. The largest differences in thresholds between positions are found in Ancylecha fenestrata. Correspondingly, A. fenestrata has a rather complex ear morphology including cuticular folds covering the anterior tympanal membrane. The position-dependent tuning might contribute to sound source localization in their habitats. Acoustic orientation might be a selective factor for the evolution of morphological structures at the bushcricket ear and, speculatively, even for frequency fractioning in the ear.

  20. SU-C-9A-01: Parameter Optimization in Adaptive Region-Growing for Tumor Segmentation in PET

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tan, S; Huazhong University of Science and Technology, Wuhan, Hubei; Xue, M

    Purpose: To design a reliable method to determine the optimal parameter in the adaptive region-growing (ARG) algorithm for tumor segmentation in PET. Methods: The ARG uses an adaptive similarity criterion m - fσ ≤ I-PET ≤ m + fσ, so that a neighboring voxel is appended to the region based on its similarity to the current region. When increasing the relaxing factor f (f ≥ 0), the resulting volumes monotonically increased, with a sharp increase when the region just grew into the background. The optimal f that separates the tumor from the background is defined as the first point with the local maximum curvature on an error function fitted to the f-volume curve. The ARG was tested on a tumor segmentation benchmark that includes ten lung cancer patients with 3D pathologic tumor volume as ground truth. For comparison, the widely used 42% and 50% SUVmax thresholding, Otsu optimal thresholding, Active Contours (AC), Geodesic Active Contours (GAC), and Graph Cuts (GC) methods were tested. The Dice similarity index (DSI), volume error (VE), and maximum axis length error (MALE) were calculated to evaluate the segmentation accuracy. Results: The ARG provided the highest accuracy among all tested methods. Specifically, the ARG has an average DSI, VE, and MALE of 0.71, 0.29, and 0.16, respectively, better than the absolute 42% thresholding (DSI=0.67, VE=0.57, and MALE=0.23), the relative 42% thresholding (DSI=0.62, VE=0.41, and MALE=0.23), the absolute 50% thresholding (DSI=0.62, VE=0.48, and MALE=0.21), the relative 50% thresholding (DSI=0.48, VE=0.54, and MALE=0.26), Otsu (DSI=0.44, VE=0.63, and MALE=0.30), AC (DSI=0.46, VE=0.85, and MALE=0.47), GAC (DSI=0.40, VE=0.85, and MALE=0.46) and GC (DSI=0.66, VE=0.54, and MALE=0.21) methods. Conclusions: The results suggest that the proposed method reliably identified the optimal relaxing factor in ARG for tumor segmentation in PET. This work was supported in part by National Cancer Institute Grant R01 CA172638; The dataset is provided by AAPM TG211.
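
    A sketch of the parameter-selection criterion described above: sweep the relaxing factor f, fit an error-function model to the resulting f-volume curve, and take the first local maximum of curvature. The four-parameter sigmoid model and the grid resolution are assumptions.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.special import erf

    def optimal_relaxing_factor(f_vals, volumes):
        """Return the first point of locally maximal curvature on an
        error-function fit to the f-volume curve."""
        f_vals = np.asarray(f_vals, float)
        volumes = np.asarray(volumes, float)

        def model(f, a, b, c, d):
            return a * erf((f - b) / c) + d

        p0 = [np.ptp(volumes) / 2, f_vals.mean(), 1.0, volumes.mean()]
        p, _ = curve_fit(model, f_vals, volumes, p0=p0, maxfev=10000)

        fine = np.linspace(f_vals.min(), f_vals.max(), 2000)
        v = model(fine, *p)
        dv = np.gradient(v, fine)
        d2v = np.gradient(dv, fine)
        curvature = np.abs(d2v) / (1.0 + dv**2) ** 1.5

        # First interior curvature peak; fall back to the global maximum.
        peaks = np.where((curvature[1:-1] > curvature[:-2]) &
                         (curvature[1:-1] > curvature[2:]))[0] + 1
        return fine[peaks[0]] if peaks.size else fine[np.argmax(curvature)]
    ```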

  1. Quantum secret sharing via local operations and classical communication.

    PubMed

    Yang, Ying-Hui; Gao, Fei; Wu, Xia; Qin, Su-Juan; Zuo, Hui-Juan; Wen, Qiao-Yan

    2015-11-20

    We investigate the distinguishability of orthogonal multipartite entangled states in d-qudit system by restricted local operations and classical communication. According to these properties, we propose a standard (2, n)-threshold quantum secret sharing scheme (called LOCC-QSS scheme), which solves the open question in [Rahaman et al., Phys. Rev. A, 91, 022330 (2015)]. On the other hand, we find that all the existing (k, n)-threshold LOCC-QSS schemes are imperfect (or "ramp"), i.e., unauthorized groups can obtain some information about the shared secret. Furthermore, we present a (3, 4)-threshold LOCC-QSS scheme which is close to perfect.

  2. A comparison of earthquake backprojection imaging methods for dense local arrays

    NASA Astrophysics Data System (ADS)

    Beskardes, G. D.; Hole, J. A.; Wang, K.; Michaelides, M.; Wu, Q.; Chapman, M. C.; Davenport, K. K.; Brown, L. D.; Quiros, D. A.

    2018-03-01

    Backprojection imaging has recently become a practical method for local earthquake detection and location due to the deployment of densely sampled, continuously recorded, local seismograph arrays. While backprojection sometimes utilizes the full seismic waveform, the waveforms are often pre-processed and simplified to overcome imaging challenges. Real data issues include aliased station spacing, inadequate array aperture, inaccurate velocity model, low signal-to-noise ratio, large noise bursts and varying waveform polarity. We compare the performance of backprojection with four previously used data pre-processing methods: raw waveform, envelope, short-term averaging/long-term averaging and kurtosis. Our primary goal is to detect and locate events smaller than noise by stacking prior to detection to improve the signal-to-noise ratio. The objective is to identify an optimized strategy for automated imaging that is robust in the presence of real-data issues, has the lowest signal-to-noise thresholds for detection and for location, has the best spatial resolution of the source images, preserves magnitude, and considers computational cost. Imaging method performance is assessed using a real aftershock data set recorded by the dense AIDA array following the 2011 Virginia earthquake. Our comparisons show that raw-waveform backprojection provides the best spatial resolution, preserves magnitude and boosts signal to detect events smaller than noise, but is most sensitive to velocity error, polarity error and noise bursts. On the other hand, the other methods avoid polarity error and reduce sensitivity to velocity error, but sacrifice spatial resolution and cannot effectively reduce noise by stacking. Of these, only kurtosis is insensitive to large noise bursts while being as efficient as the raw-waveform method to lower the detection threshold; however, it does not preserve the magnitude information. For automatic detection and location of events in a large data set, we therefore recommend backprojecting kurtosis waveforms, followed by a second pass on the detected events using noise-filtered raw waveforms to achieve the best of all criteria.
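
    A minimal sketch of the recommended kurtosis pre-processing and the stacking step; the window length and the simple delay-and-stack formulation for a single trial source location are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.stats import kurtosis

    def kurtosis_cf(trace, win=100):
        """Sliding-window kurtosis characteristic function: impulsive
        arrivals raise kurtosis sharply, while polarity flips and many
        noise bursts affect it far less than the raw waveform."""
        cf = np.zeros(len(trace))
        for i in range(win, len(trace)):
            cf[i] = kurtosis(trace[i - win:i])
        return cf

    def backproject(traces, delays_samples, win=100):
        """Shift each station's kurtosis CF by its predicted travel time to
        a trial source location and stack; a large stack value flags an
        event. Looping over a grid of trial locations yields the image."""
        cfs = [np.roll(kurtosis_cf(tr, win), -d)
               for tr, d in zip(traces, delays_samples)]
        return np.sum(cfs, axis=0)
    ```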

  3. Secondary iris recognition method based on local energy-orientation feature

    NASA Astrophysics Data System (ADS)

    Huo, Guang; Liu, Yuanning; Zhu, Xiaodong; Dong, Hongxing

    2015-01-01

    This paper proposes a secondary iris recognition method based on local features. An energy-orientation feature (EOF) is first extracted from the iris with a two-dimensional Gabor filter and used in a first recognition pass based on a similarity threshold, which divides the whole iris database into two categories: a correctly recognized class and a class still to be recognized. The former are accepted, while the latter are transformed by histogram into an energy-orientation histogram feature (EOHF) and subjected to a second recognition pass using the chi-square distance. Experiments show that, owing to its higher correct recognition rate, the proposed method is among the most efficient and effective of comparable iris recognition algorithms.

  4. SU-G-206-01: A Fully Automated CT Tool to Facilitate Phantom Image QA for Quantitative Imaging in Clinical Trials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wahi-Anwar, M; Lo, P; Kim, H

    Purpose: The use of Quantitative Imaging (QI) methods in Clinical Trials requires both verification of adherence to a specified protocol and an assessment of scanner performance under that protocol, which are currently accomplished manually. This work introduces automated phantom identification and image QA measure extraction towards a fully automated CT phantom QA system to perform these functions and facilitate the use of Quantitative Imaging methods in clinical trials. Methods: This study used a retrospective cohort of CT phantom scans from existing clinical trial protocols - totaling 84 phantoms, across 3 phantom types using various scanners and protocols. The QA system identifies the input phantom scan through an ensemble of threshold-based classifiers. Each classifier - corresponding to a phantom type - contains a template slice, which is compared to the input scan on a slice-by-slice basis, resulting in slice-wise similarity metric values for each slice compared. Pre-trained thresholds (established from a training set of phantom images matching the template type) are used to filter the similarity distribution, and the slice with the most optimal local mean similarity, with local neighboring slices meeting the threshold requirement, is chosen as the classifier's matched slice (if it existed). The classifier with the matched slice possessing the most optimal local mean similarity is then chosen as the ensemble's best matching slice. If the best matching slice exists, the image QA algorithm and ROIs corresponding to the matching classifier extracted the image QA measures. Results: Automated phantom identification performed with 84.5% accuracy and 88.8% sensitivity on 84 phantoms. Automated image quality measurements (following standard protocol) on identified water phantoms (n=35) matched user QA decisions with 100% accuracy. Conclusion: We provide a fully automated CT phantom QA system consistent with manual QA performance. Further work will include a parallel component to automatically verify image acquisition parameters and automated adherence to specifications. Institutional research agreement, Siemens Healthcare; Past recipient, research grant support, Siemens Healthcare; Consultant, Toshiba America Medical Systems; Consultant, Samsung Electronics; NIH Grant support from: U01 CA181156.
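
    A simplified sketch of the slice-wise template matching underlying one classifier of the ensemble; the similarity metric (normalized cross-correlation) and the neighborhood logic are assumptions based on the description above.

    ```python
    import numpy as np

    def ncc(a, b):
        """Normalized cross-correlation between two equally sized slices."""
        a = (a - a.mean()) / (a.std() + 1e-12)
        b = (b - b.mean()) / (b.std() + 1e-12)
        return float((a * b).mean())

    def match_phantom(scan, template_slice, threshold, neighborhood=2):
        """Slice-by-slice similarity against one classifier's template:
        return the index whose local mean similarity is highest, provided
        its neighbors also clear the pre-trained threshold (else None)."""
        sims = np.array([ncc(s, template_slice) for s in scan])
        best, best_idx = -np.inf, None
        for i in range(neighborhood, len(sims) - neighborhood):
            local = sims[i - neighborhood:i + neighborhood + 1]
            if local.min() >= threshold and local.mean() > best:
                best, best_idx = local.mean(), i
        return best_idx
    ```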

  5. Investigation of the first-order phase transition kinetics using the method of pulsed photothermal surface deformation: radial measurements

    NASA Astrophysics Data System (ADS)

    Vintzentz, S. V.; Sandomirsky, V. B.

    1992-09-01

    An extension of the photothermal surface deformation (PTSD) method to study the macroscopic kinetics of the first-order phase transition (PTr) is given. The movement of the phase interface (PI) over a surface with a PTr locally induced in the subsurface volume by a focused laser pulse is investigated for the first time using radial measurements of the PTSD kinetics. For the known metal-to-semiconductor PTr in VO2 (a good model system), a procedure is suggested for measuring the maximum size rsm of the "hot" (metal) phase on the surface (a parameter most difficult to determine) as well as for estimating the velocity of the PI movement over the surface, vs, and in the bulk, vb. In addition, it is shown that the PTSD method may be used to determine the "local" threshold energy E0 needed for the laser-induced PTr and the "local" latent heat L of the PTr. This demonstrates the feasibility of scanning surface E0- and L-microscopy.

  6. Thermal quantitative sensory testing to assess the sensory effects of three local anesthetic solutions in a randomized trial of interscalene blockade for shoulder surgery.

    PubMed

    Sermeus, Luc A; Hans, Guy H; Schepens, Tom; Bosserez, Nathalie M-L; Breebaart, Margaretha B; Smitz, Carine J; Vercauteren, Marcel P

    2016-01-01

    This study investigated whether quantitative sensory testing (QST) with thermal stimulations can quantitatively measure the characteristics of an ultrasound-guided interscalene brachial plexus block (US-ISB). This was a prospective randomized trial in patients scheduled for arthroscopic shoulder surgery under general anesthesia and US-ISB. Participants and observers were blinded for the study. We assigned the study participants to one of three groups: 0.5% levobupivacaine 15 mL, 0.5% levobupivacaine 15 mL with 1:200,000 epinephrine, and 0.75% ropivacaine 15 mL. We performed thermal QST within dermatomes C4, C5, C6, and C7 before infiltration and 30 min, six hours, ten hours, and 24 hr after performing the US-ISB. In addition, we used QST, a semi-objective quantitative testing method, to measure the onset, intensity, duration, extent, and functional recovery of the sensory block. We also measured detection thresholds for cold/warm sensations and cold/heat pain. Detection thresholds for all thermal sensations within the ipsilateral C4, C5, C6, and C7 dermatomes increased rapidly (indicating the development of a hypoesthetic state) and reached a steady state after 30 min. This lasted for approximately ten hours and returned to normal detection thresholds by 24 hr. There were no differences detected between the three groups at 24 hr when we compared warm sensation thresholds on one dermatome. Visual inspection of the pooled results per dermatome suggests the ability of QST to detect clinically relevant differences in block intensity per dermatome. Quantitative sensory testing can be useful as a method for detecting the presence and characteristics of regional anesthesia-induced sensory block and may be used for the evaluation of clinical protocols. The three local anesthetic solutions exhibited a similar anesthetic effect. The results support the use of QST to assess block characteristics quantitatively under clinical research conditions. This trial was registered at ClinicalTrials.gov, NCT02271867.

  7. Localization suppression and fusion measure of the precedence effect in young children

    NASA Astrophysics Data System (ADS)

    Litovsky, Ruth; Godar, Shelly; Yu, Gongqiang

    2004-05-01

    This study investigated aspects of the precedence effect (PE) known as fusion and localization dominance in children 4-5 years of age. Stimuli were three 25-ms noise bursts (2-ms rise/fall times) with 250-ms ISI. In PE conditions, the lead stimulus was presented from one of six locations in azimuth, and the lag was at 0 deg. Lead-lag delays varied from 5 to 100 ms. Localization was measured using an identification paradigm. Fusion was measured separately, whereby subjects reported whether a single auditory event or two auditory events were perceived. Children reported two sounds on 75% of trials (fusion threshold) at delays ranging from 15 to 35 ms. Below fusion thresholds, the localization of the lead was similar to that of single-source stimuli. Above fusion thresholds, lead localization was significantly degraded, an effect persisting out to 100 ms. Localization of the lag was poor at all delays at which it was reported as being heard. According to these results, localization dominance (difficulty localizing the lag) in children persists at greater delays than fusion, which is consistent with findings obtained in adult subjects. The range of delays over which these effects are robust in children is longer than the range observed in adults.

  8. Experimental and theoretical study of the microsolvation of sodium atoms in methanol clusters: differences and similarities to sodium-water and sodium-ammonia.

    PubMed

    Dauster, Ingo; Suhm, Martin A; Buck, Udo; Zeuch, Thomas

    2008-01-07

    Methanol clusters are generated in a continuous He-seeded supersonic expansion and doped with sodium atoms in a pick-up cell. By this method, clusters of the type Na(CH₃OH)ₙ are formed and subsequently photoionized by applying a tunable dye-laser system. The microsolvation process of the Na 3s electron is studied by determining the ionization potentials (IPs) of these clusters size-selectively for n = 2-40. A decrease is found from n = 2 to 6 and a constant value of 3.19 ± 0.07 eV for n = 6-40. The experimentally determined ionization potentials are compared with ionization potentials derived from quantum-chemical calculations, assuming limiting vertical and adiabatic processes. In the first case, energy differences are calculated between the neutral and the ionized cationic clusters of the same geometry. In the second case, the ionized clusters are used in their optimized relaxed geometry. These energy differences and relative stabilities of isomeric clusters vary significantly with the applied quantum-chemical method (B3LYP or MP2). The comparison with the experiment for n = 2-7 reveals strong variations of the ionization potential with the cluster structure, indicating that structural diversity and non-vertical pathways give significant signal contributions at the threshold. Based on these findings, a possible explanation for the remarkable difference in IP evolutions of methanol or water and ammonia is presented: for methanol and water a rather localized surface or semi-internal Na 3s electron is excited to either high Rydberg or more localized states below the vertical ionization threshold. This excitation is followed by a local structural relaxation that couples to an autoionization process. For small clusters with n < 6 for methanol and n < 4 for water, the addition of solvent molecules leads to larger solvent-metal-ion interaction energies, which consequently lead to lower ionization thresholds. For n = 6 (methanol) and n = 4 (water) this effect comes to a halt, which may be connected with the completion of the first cationic solvation shell limiting the release of local relaxation energy. For Na(NH₃)ₙ, a largely delocalized and internal electron is excited to autoionizing electronic states, a process that is no longer local and consequently may depend on cluster size up to very large n.

  9. Automated extraction of pleural effusion in three-dimensional thoracic CT images

    NASA Astrophysics Data System (ADS)

    Kido, Shoji; Tsunomori, Akinori

    2009-02-01

    It is important for the diagnosis of pulmonary diseases to quantitatively measure the volume of accumulating pleural effusion in three-dimensional thoracic CT images. However, correct automated extraction of pleural effusion is difficult. A conventional extraction algorithm using a gray-level-based threshold cannot correctly separate pleural effusion from the thoracic wall or mediastinum, because the density of pleural effusion in CT images is similar to those of the thoracic wall and mediastinum. Therefore, we have developed an automated extraction method for pleural effusion based on extracting the lung area together with the pleural effusion. Our method uses a lung template obtained from a normal lung for segmentation of lungs with pleural effusions. The registration process consisted of two steps. The first step was a global matching between normal and abnormal lungs of organs such as bronchi, bones (ribs, sternum, and vertebrae), and upper surfaces of livers, which were extracted using a region-growing algorithm. The second step was a local matching between normal and abnormal lungs which were deformed by the parameter obtained from the global matching. Finally, we segmented a lung with pleural effusion by use of the template deformed by the two parameters obtained from the global and local matching. We compared our method with a conventional extraction method using a gray-level-based threshold and with two published methods. The extraction rates of pleural effusions obtained from our method were much higher than those obtained from the other methods. This automated extraction method based on extracting the lung area together with the pleural effusion is promising for the diagnosis of pulmonary diseases by providing a quantitative volume of accumulating pleural effusion.

  10. Lane Marking Detection and Reconstruction with Line-Scan Imaging Data.

    PubMed

    Li, Lin; Luo, Wenting; Wang, Kelvin C P

    2018-05-20

    Lane marking detection and localization are crucial for autonomous driving and lane-based pavement surveys. Numerous studies have been done to detect and locate lane markings with the purpose of advanced driver assistance systems, in which image data are usually captured by vision-based cameras. However, a limited number of studies have been done to identify lane markings using high-resolution laser images for road condition evaluation. In this study, the laser images are acquired with a digital highway data vehicle (DHDV). Subsequently, a novel methodology is presented for the automated lane marking identification and reconstruction, and is implemented in four phases: (1) binarization of the laser images with a new threshold method (multi-box segmentation based threshold method); (2) determination of candidate lane markings with closing operations and a marching square algorithm; (3) identification of true lane marking by eliminating false positives (FPs) using a linear support vector machine method; and (4) reconstruction of the damaged and dash lane marking segments to form a continuous lane marking based on the geometry features such as adjacent lane marking location and lane width. Finally, a case study is given to validate effects of the novel methodology. The findings indicate the new strategy is robust in image binarization and lane marking localization. This study would be beneficial in road lane-based pavement condition evaluation such as lane-based rutting measurement and crack classification.

  11. Optic disk localization by a robust fusion method

    NASA Astrophysics Data System (ADS)

    Zhang, Jielin; Yin, Fengshou; Wong, Damon W. K.; Liu, Jiang; Baskaran, Mani; Cheng, Ching-Yu; Wong, Tien Yin

    2013-02-01

    The optic disk localization plays an important role in developing computer-aided diagnosis (CAD) systems for ocular diseases such as glaucoma, diabetic retinopathy and age-related macula degeneration. In this paper, we propose an intelligent fusion of methods for the localization of the optic disk in retinal fundus images. Three different approaches are developed to detect the location of the optic disk separately. The first method is the maximum vessel crossing method, which finds the region with the most blood vessel crossing points. The second one is the multichannel thresholding method, targeting the area with the highest intensity. The final method searches the vertical and horizontal regions-of-interest separately on the basis of blood vessel structure and neighborhood entropy profile. Finally, these three methods are combined using an intelligent fusion method to improve the overall accuracy. The proposed algorithm was tested on the STARE database and the ORIGAlight database, each consisting of images with various pathologies. Preliminary results achieve an accuracy of 81.5% on the STARE database and 99% on the ORIGAlight database. The proposed method outperforms each individual approach as well as a state-of-the-art method that utilizes an intensity-based approach. The result demonstrates a high potential for this method to be used in retinal CAD systems.

  12. A Three-Threshold Learning Rule Approaches the Maximal Capacity of Recurrent Neural Networks

    PubMed Central

    Alemi, Alireza; Baldassi, Carlo; Brunel, Nicolas; Zecchina, Riccardo

    2015-01-01

    Understanding the theoretical foundations of how memories are encoded and retrieved in neural populations is a central challenge in neuroscience. A popular theoretical scenario for modeling memory function is the attractor neural network scenario, whose prototype is the Hopfield model. The model simplicity and the locality of the synaptic update rules come at the cost of a poor storage capacity, compared with the capacity achieved with perceptron learning algorithms. Here, by transforming the perceptron learning rule, we present an online learning rule for a recurrent neural network that achieves near-maximal storage capacity without an explicit supervisory error signal, relying only upon locally accessible information. The fully-connected network consists of excitatory binary neurons with plastic recurrent connections and non-plastic inhibitory feedback stabilizing the network dynamics; the memory patterns to be memorized are presented online as strong afferent currents, producing a bimodal distribution for the neuron synaptic inputs. Synapses corresponding to active inputs are modified as a function of the value of the local fields with respect to three thresholds. Above the highest threshold, and below the lowest threshold, no plasticity occurs. In between these two thresholds, potentiation/depression occurs when the local field is above/below an intermediate threshold. We simulated and analyzed a network of binary neurons implementing this rule and measured its storage capacity for different sizes of the basins of attraction. The storage capacity obtained through numerical simulations is shown to be close to the value predicted by analytical calculations. We also measured the dependence of capacity on the strength of external inputs. Finally, we quantified the statistics of the resulting synaptic connectivity matrix, and found that both the fraction of zero weight synapses and the degree of symmetry of the weight matrix increase with the number of stored patterns. PMID:26291608
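
    A minimal sketch of the three-threshold plasticity rule described above for a single pattern presentation; the learning rate, non-negativity clipping, and parameter values are illustrative assumptions.

    ```python
    import numpy as np

    def three_threshold_update(w, x, h, th_low, th_mid, th_high, lr=0.01):
        """One plasticity step: with h the neuron's local field, synapses
        with active inputs x are potentiated when th_mid < h < th_high,
        depressed when th_low < h < th_mid, and left unchanged when h is
        above th_high or below th_low."""
        if th_mid < h < th_high:
            w = w + lr * x          # potentiation on active inputs only
        elif th_low < h < th_mid:
            w = w - lr * x          # depression on active inputs only
        return np.clip(w, 0.0, None)  # keep excitatory weights non-negative

    # Toy usage: binary input pattern and a random weight vector.
    rng = np.random.default_rng(0)
    x = rng.integers(0, 2, size=100).astype(float)
    w = rng.random(100)
    h = float(w @ x)                 # local field for this pattern
    w = three_threshold_update(w, x, h, th_low=10.0, th_mid=25.0, th_high=40.0)
    ```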

  13. A Three-Threshold Learning Rule Approaches the Maximal Capacity of Recurrent Neural Networks.

    PubMed

    Alemi, Alireza; Baldassi, Carlo; Brunel, Nicolas; Zecchina, Riccardo

    2015-08-01

    Understanding the theoretical foundations of how memories are encoded and retrieved in neural populations is a central challenge in neuroscience. A popular theoretical scenario for modeling memory function is the attractor neural network scenario, whose prototype is the Hopfield model. The model simplicity and the locality of the synaptic update rules come at the cost of a poor storage capacity, compared with the capacity achieved with perceptron learning algorithms. Here, by transforming the perceptron learning rule, we present an online learning rule for a recurrent neural network that achieves near-maximal storage capacity without an explicit supervisory error signal, relying only upon locally accessible information. The fully-connected network consists of excitatory binary neurons with plastic recurrent connections and non-plastic inhibitory feedback stabilizing the network dynamics; the memory patterns to be memorized are presented online as strong afferent currents, producing a bimodal distribution for the neuron synaptic inputs. Synapses corresponding to active inputs are modified as a function of the value of the local fields with respect to three thresholds. Above the highest threshold, and below the lowest threshold, no plasticity occurs. In between these two thresholds, potentiation/depression occurs when the local field is above/below an intermediate threshold. We simulated and analyzed a network of binary neurons implementing this rule and measured its storage capacity for different sizes of the basins of attraction. The storage capacity obtained through numerical simulations is shown to be close to the value predicted by analytical calculations. We also measured the dependence of capacity on the strength of external inputs. Finally, we quantified the statistics of the resulting synaptic connectivity matrix, and found that both the fraction of zero weight synapses and the degree of symmetry of the weight matrix increase with the number of stored patterns.

  14. Contextual Interactions in Grating Plaid Configurations Are Explained by Natural Image Statistics and Neural Modeling

    PubMed Central

    Ernst, Udo A.; Schiffer, Alina; Persike, Malte; Meinhardt, Günter

    2016-01-01

    Processing natural scenes requires the visual system to integrate local features into global object descriptions. To achieve coherent representations, the human brain uses statistical dependencies to guide weighting of local feature conjunctions. Pairwise interactions among feature detectors in early visual areas may form the early substrate of these local feature bindings. To investigate local interaction structures in visual cortex, we combined psychophysical experiments with computational modeling and natural scene analysis. We first measured contrast thresholds for 2 × 2 grating patch arrangements (plaids), which differed in spatial frequency composition (low, high, or mixed), number of grating patch co-alignments (0, 1, or 2), and inter-patch distances (1° and 2° of visual angle). Contrast thresholds for the different configurations were compared to the prediction of probability summation (PS) among detector families tuned to the four retinal positions. For 1° distance the thresholds for all configurations were larger than predicted by PS, indicating inhibitory interactions. For 2° distance, thresholds were significantly lower compared to PS when the plaids were homogeneous in spatial frequency and orientation, but not when spatial frequencies were mixed or there was at least one misalignment. Next, we constructed a neural population model with horizontal laminar structure, which reproduced the detection thresholds after adaptation of connection weights. Consistent with prior work, contextual interactions were medium-range inhibition and long-range, orientation-specific excitation. However, inclusion of orientation-specific, inhibitory interactions between populations with different spatial frequency preferences were crucial for explaining detection thresholds. Finally, for all plaid configurations we computed their likelihood of occurrence in natural images. The likelihoods turned out to be inversely related to the detection thresholds obtained at larger inter-patch distances. However, likelihoods were almost independent of inter-patch distance, implying that natural image statistics could not explain the crowding-like results at short distances. This failure of natural image statistics to resolve the patch distance modulation of plaid visibility remains a challenge to the approach. PMID:27757076

  15. Retinal vessel segmentation on SLO image

    PubMed Central

    Xu, Juan; Ishikawa, Hiroshi; Wollstein, Gadi; Schuman, Joel S.

    2010-01-01

    A scanning laser ophthalmoscopy (SLO) image, taken from optical coherence tomography (OCT), usually has lower global/local contrast and more noise compared to the traditional retinal photograph, which makes vessel segmentation a challenging task. A hybrid algorithm is proposed to efficiently solve these problems by fusing several designed methods, taking the advantages of each method and reducing the error measurements. The algorithm consists of several steps: image preprocessing, a thresholding probe, and weighted fusion. Four different methods are first designed to transform the SLO image into feature response images by taking different combinations of matched filter, contrast enhancement and mathematical morphology operators. A thresholding probe algorithm is then applied on those response images to obtain four vessel maps. Weighted majority opinion is used to fuse these vessel maps and generate a final vessel map. The experimental results showed that the proposed hybrid algorithm could successfully segment the blood vessels on SLO images, detecting both major and small vessels while suppressing noise. The algorithm showed substantial potential in various clinical applications. The use of this method can also be extended to medical image registration based on blood vessel location. PMID:19163149

  16. Fault diagnosis of sensor networked structures with multiple faults using a virtual beam based approach

    NASA Astrophysics Data System (ADS)

    Wang, H.; Jing, X. J.

    2017-07-01

    This paper presents a virtual-beam-based approach suitable for diagnosing multiple faults in complex structures with limited prior knowledge of the faults involved. The "virtual beam", a recently proposed concept for fault detection in complex structures, is applied; it consists of a chain of sensors representing a vibration energy transmission path embedded in the complex structure. Statistical tests and an adaptive threshold are adopted for fault detection because of the limited prior knowledge of normal operational conditions and fault conditions. To isolate the multiple faults within a specific structure or substructure of a more complex one, a 'biased running' strategy is developed and embedded within the bacterial-based optimization method to construct effective virtual beams and thus improve the accuracy of localization. The proposed method is easy and efficient to implement for multiple fault localization with limited prior knowledge of normal conditions and faults. Extensive experimental results validate that the proposed method can localize both single and multiple faults more effectively than the classical trust index subtract on negative add on positive (TI-SNAP) method.

  17. Multiple targets detection method in detection of UWB through-wall radar

    NASA Astrophysics Data System (ADS)

    Yang, Xiuwei; Yang, Chuanfa; Zhao, Xingwen; Tian, Xianzhong

    2017-11-01

    In this paper, the problems and difficulties encountered in the detection of multiple moving targets by UWB radar are analyzed, and the experimental environment and the penetrating radar system are established. An adaptive threshold method based on local areas is proposed to effectively filter out clutter interference. The moving targets are then analyzed, and false targets are further filtered out by extracting target features. Based on the correlation between targets, a target matching algorithm is proposed to improve the detection accuracy. Finally, the effectiveness of the above methods is verified by practical experiment.
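
    A minimal sketch of a local-area adaptive threshold of the kind described above, flagging cells that exceed the local mean by a multiple of the local standard deviation; the window size and factor k are assumptions.

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def local_adaptive_detect(frame, size=15, k=3.0):
        """Adaptive threshold based on a local area: a cell is declared a
        detection when it exceeds the local mean by k local standard
        deviations, suppressing spatially varying clutter."""
        frame = np.asarray(frame, dtype=float)
        mean = uniform_filter(frame, size)
        sq_mean = uniform_filter(frame**2, size)
        std = np.sqrt(np.maximum(sq_mean - mean**2, 0.0))
        return frame > mean + k * std
    ```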

  18. Automatic segmentation of coronary arteries from computed tomography angiography data cloud using optimal thresholding

    NASA Astrophysics Data System (ADS)

    Ansari, Muhammad Ahsan; Zai, Sammer; Moon, Young Shik

    2017-01-01

    Manual analysis of the bulk data generated by computed tomography angiography (CTA) is time consuming, and interpretation of such data requires previous knowledge and expertise of the radiologist. Therefore, an automatic method that can isolate the coronary arteries from a given CTA dataset is required. We present an automatic yet effective segmentation method to delineate the coronary arteries from a three-dimensional CTA data cloud. Instead of a region-growing process, which is usually time consuming and prone to leakages, the method is based on optimal thresholding applied to the Hessian-based vesselness measure in a localized way (globally within each slice, slice by slice) to track the coronaries carefully to their distal ends. Moreover, to make the process automatic, we detect the aorta using the Hough transform technique. The proposed segmentation method is independent of the starting point used to initiate its process and is fast in the sense that the coronary arteries are obtained without any preprocessing or postprocessing steps. We used 12 real clinical datasets to show the efficiency and accuracy of the presented method. Experimental results reveal that the proposed method achieves 95% average accuracy.
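
    A compact sketch of the slice-by-slice vesselness thresholding described above, using scikit-image's Frangi filter and Otsu thresholding; aorta detection via the Hough transform and distal-end tracking are omitted.

    ```python
    import numpy as np
    from skimage.filters import frangi, threshold_otsu

    def segment_coronaries(cta_volume):
        """Hessian-based vesselness followed by per-slice Otsu
        thresholding over a 3-D CTA volume (axis 0 = slices)."""
        mask = np.zeros(cta_volume.shape, dtype=bool)
        for z in range(cta_volume.shape[0]):
            v = frangi(cta_volume[z], black_ridges=False)  # bright vessels
            if np.ptp(v) > 0:                              # skip flat slices
                mask[z] = v > threshold_otsu(v)
        return mask
    ```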

  19. Quantum secret sharing via local operations and classical communication

    PubMed Central

    Yang, Ying-Hui; Gao, Fei; Wu, Xia; Qin, Su-Juan; Zuo, Hui-Juan; Wen, Qiao-Yan

    2015-01-01

    We investigate the distinguishability of orthogonal multipartite entangled states in d-qudit system by restricted local operations and classical communication. According to these properties, we propose a standard (2, n)-threshold quantum secret sharing scheme (called LOCC-QSS scheme), which solves the open question in [Rahaman et al., Phys. Rev. A, 91, 022330 (2015)]. On the other hand, we find that all the existing (k, n)-threshold LOCC-QSS schemes are imperfect (or “ramp”), i.e., unauthorized groups can obtain some information about the shared secret. Furthermore, we present a (3, 4)-threshold LOCC-QSS scheme which is close to perfect. PMID:26586412

  20. Multidrug Resistance among New Tuberculosis Cases: Detecting Local Variation through Lot Quality-Assurance Sampling

    PubMed Central

    Lynn Hedt, Bethany; van Leth, Frank; Zignol, Matteo; Cobelens, Frank; van Gemert, Wayne; Viet Nhung, Nguyen; Lyepshina, Svitlana; Egwaga, Saidi; Cohen, Ted

    2012-01-01

    Background: Current methodology for multidrug-resistant TB (MDR TB) surveys endorsed by the World Health Organization provides estimates of MDR TB prevalence among new cases at the national level. On the aggregate, local variation in the burden of MDR TB may be masked. This paper investigates the utility of applying lot quality-assurance sampling to identify geographic heterogeneity in the proportion of new cases with multidrug resistance. Methods: We simulated the performance of lot quality-assurance sampling by applying these classification-based approaches to data collected in the most recent TB drug-resistance surveys in Ukraine, Vietnam, and Tanzania. We explored three classification systems—two-way static, three-way static, and three-way truncated sequential sampling—at two sets of thresholds: low MDR TB = 2%, high MDR TB = 10%, and low MDR TB = 5%, high MDR TB = 20%. Results: The lot quality-assurance sampling systems identified local variability in the prevalence of multidrug resistance in both high-resistance (Ukraine) and low-resistance settings (Vietnam). In Tanzania, prevalence was uniformly low, and the lot quality-assurance sampling approach did not reveal variability. The three-way classification systems provide additional information, but sample sizes may not be obtainable in some settings. New rapid drug-sensitivity testing methods may allow truncated sequential sampling designs and early stopping within static designs, producing even greater efficiency gains. Conclusions: Lot quality-assurance sampling study designs may offer an efficient approach for collecting critical information on local variability in the burden of multidrug-resistant TB. Before this methodology is adopted, programs must determine appropriate classification thresholds, the most useful classification system, and appropriate weighting if unbiased national estimates are also desired. PMID:22249242
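
    A minimal sketch of the static LQAS classification rules discussed above; the decision thresholds d_low and d_high are placeholders that must be pre-computed from the chosen prevalence cut-offs (e.g., 2%/10%) and the tolerated classification error rates.

    ```python
    def lqas_classify(n_resistant, d_low, d_high=None):
        """Static LQAS decision rule for a lot of sampled new TB cases.
        Two-way (d_high omitted): a single threshold splits lots into
        low/high MDR burden. Three-way (d_high given): counts between the
        thresholds are classified as intermediate."""
        if d_high is None:                       # two-way classification
            return "high" if n_resistant >= d_low else "low"
        if n_resistant >= d_high:                # three-way classification
            return "high"
        return "low" if n_resistant < d_low else "intermediate"

    # Example: 50 sampled cases, 4 resistant, hypothetical thresholds.
    print(lqas_classify(4, d_low=2, d_high=5))   # -> "intermediate"
    ```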

  1. The potential advantages of (18)FDG PET/CT-based target volume delineation in radiotherapy planning of head and neck cancer.

    PubMed

    Moule, Russell N; Kayani, Irfan; Moinuddin, Syed A; Meer, Khalda; Lemon, Catherine; Goodchild, Kathleen; Saunders, Michele I

    2010-11-01

    This study investigated two fixed threshold methods to delineate the target volume using (18)FDG PET/CT before and during a course of radical radiotherapy in locally advanced squamous cell carcinoma of the head and neck. Patients were enrolled into the study between March 2006 and May 2008. (18)FDG PET/CT scans were carried out 72 h prior to the start of radiotherapy and then at 10, 44, and 66 Gy. Functional volumes were delineated according to the SUV Cut Off (SUVCO) (2.5, 3.0, 3.5, and 4.0 bwg/ml) and percentage of the SUVmax (30%, 35%, 40%, 45%, and 50%) thresholds. The background (18)FDG uptake and the SUVmax within the volumes were also assessed. Primary and lymph node volumes for the eight patients significantly reduced with each increase in the delineation threshold (for example, 2.5-3.0 bwg/ml SUVCO) compared to the baseline threshold at each imaging point. There was a significant reduction in the volume (p⩽0.0001-0.01) after 36 Gy compared to 0 Gy by the SUVCO method. There was a negative correlation between the SUVmax within the primary and lymph node volumes and delivered radiation dose (p⩽0.0001-0.011) but no difference in the SUV within the background reference region. The volumes delineated by the PTSUVmax method increased with the increase in the delivered radiation dose after 36 Gy because the SUVmax within the region of interest used to define the edge of the volume was equal to or less than the background (18)FDG uptake, and the software was unable to effectively differentiate between tumour and background uptake. The changes in the target volumes delineated by the SUVCO method were less susceptible to background (18)FDG uptake compared to those delineated by the PTSUVmax and may be more helpful in radiotherapy planning. The best method and threshold have still to be determined within institutions, both nationally and internationally. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  2. Modulational estimate for the maximal Lyapunov exponent in Fermi-Pasta-Ulam chains

    NASA Astrophysics Data System (ADS)

    Dauxois, Thierry; Ruffo, Stefano; Torcini, Alessandro

    1997-12-01

    In the framework of the Fermi-Pasta-Ulam (FPU) model, we show a simple method to give an accurate analytical estimation of the maximal Lyapunov exponent at high energy density. The method is based on the computation of the mean value of the modulational instability growth rates associated to unstable modes. Moreover, we show that the strong stochasticity threshold found in the β-FPU system is closely related to a transition in tangent space, the Lyapunov eigenvector being more localized in space at high energy.

  3. Supratransmission in a metastable modular metastructure for tunable non-reciprocal wave transmission

    NASA Astrophysics Data System (ADS)

    Wu, Zhen; Wang, K. W.

    2018-03-01

    In this research, we numerically and analytically investigate the nonlinear energy transmission phenomenon in a metastable modular metastructure. Numerical studies on a 1D metastable chain provide clear evidence that when the driving frequency is within the stopband of the periodic structure, there exists a threshold for the driving amplitude, above which a sudden increase in the energy transmission can be observed. This onset of transmission is due to nonlinear instability and is known as supratransmission. We discover that due to spatial asymmetry of strategically configured constituents, such transmission thresholds are considerably different when the structure is excited from different ends, and this discrepancy creates a region of non-reciprocal energy transmission. We demonstrate that when the loss of stability is due to a saddle-node bifurcation, the transmission threshold can be predicted analytically using a localized nonlinear-linear system model, and analyzed by combining harmonic balancing and transfer matrix methods. These investigations elucidate the rich and complex dynamics achievable by nonlinearity and metastabilities, and provide synthesis tools for tunable bandgaps and non-reciprocal wave transmission.

  4. Optimal estimation of recurrence structures from time series

    NASA Astrophysics Data System (ADS)

    beim Graben, Peter; Sellers, Kristin K.; Fröhlich, Flavio; Hutt, Axel

    2016-05-01

    Recurrent temporal dynamics is a phenomenon observed frequently in high-dimensional complex systems, and its detection is a challenging task. Recurrence quantification analysis utilizing recurrence plots may extract such dynamics; however, it still encounters an unsolved pertinent problem: the optimal selection of distance thresholds for estimating the recurrence structure of dynamical systems. The present work proposes a stochastic Markov model for the recurrent dynamics that allows for the analytical derivation of a criterion for the optimal distance threshold. The goodness of fit is assessed by a utility function which assumes a local maximum for that threshold reflecting the optimal estimate of the system's recurrence structure. We validate our approach by means of the nonlinear Lorenz system and its linearized stochastic surrogates. The final application to neurophysiological time series obtained from anesthetized animals illustrates the method and reveals novel dynamic features of the underlying system. We propose the number of optimal recurrence domains as a statistic for classifying an animal's state of consciousness.
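
    A sketch of the underlying objects: a thresholded recurrence matrix and a scan over candidate distance thresholds. The analytically derived, Markov-model utility function from the paper is not reproduced here; `utility` is a placeholder to be supplied by the user.

    ```python
    import numpy as np
    from scipy.spatial.distance import pdist, squareform

    def recurrence_plot(series, eps):
        """Binary recurrence matrix R_ij = 1 when |x_i - x_j| <= eps."""
        x = np.asarray(series, float).reshape(-1, 1)
        d = squareform(pdist(x))
        return (d <= eps).astype(int)

    def best_threshold(series, eps_grid, utility):
        """Scan candidate distance thresholds and return the one that
        maximizes a user-supplied utility of the recurrence matrix."""
        scores = [utility(recurrence_plot(series, e)) for e in eps_grid]
        return eps_grid[int(np.argmax(scores))]
    ```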

  5. Local region power spectrum-based unfocused ship detection method in synthetic aperture radar images

    NASA Astrophysics Data System (ADS)

    Wei, Xiangfei; Wang, Xiaoqing; Chong, Jinsong

    2018-01-01

    Ships in synthetic aperture radar (SAR) images will be severely defocused under long SAR integration times, and their energy will disperse into numerous resolution cells. Therefore, the image intensity of ships is weak and sometimes even overwhelmed by sea clutter in the SAR image. Consequently, it is hard to detect the ships from SAR intensity images. A ship detection method based on the local region power spectrum of the SAR complex image is proposed. Although the energies of the ships are dispersed in SAR intensity images, their spectral energies are rather concentrated and cause the power spectra of local areas of SAR images to deviate from that of the sea-surface background. Therefore, the key idea of the proposed method is to detect ships via the power spectrum distortion of local areas of SAR images. The local region power spectrum of a moving target in a SAR image is analyzed, and the way to obtain the detection threshold through the probability density function (pdf) of the power spectrum is illustrated. Numerical P- and L-band airborne SAR ocean data are utilized and the detection results are also illustrated. Results show that the proposed method can effectively detect unfocused ships, with a detection rate of 93.6% and a false-alarm rate of 8.6%. Moreover, comparison with some other algorithms indicates that the proposed method performs better under long SAR integration times. Finally, the applicability of the proposed method and the parameter selection procedure are also discussed.
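
    A simplified sketch of the local-region power-spectrum idea: compute normalized spectra of sliding windows of the complex image and score their deviation from the average background spectrum. The L1 deviation score is an illustrative stand-in for the pdf-based threshold derived in the paper.

    ```python
    import numpy as np

    def spectral_anomaly_map(slc_image, win=64, step=32):
        """Slide a window over a complex (single-look) SAR image, compute
        each window's normalized 2-D power spectrum, and score how far it
        deviates from the mean background spectrum; high scores indicate
        spectra distorted by a (possibly defocused) moving ship."""
        h, w = slc_image.shape
        spectra, coords = [], []
        for i in range(0, h - win, step):
            for j in range(0, w - win, step):
                p = np.abs(np.fft.fft2(slc_image[i:i + win, j:j + win]))**2
                spectra.append(p / p.sum())      # normalized local spectrum
                coords.append((i, j))
        background = np.mean(spectra, axis=0)
        scores = [np.abs(s - background).sum() for s in spectra]
        return coords, scores
    ```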

  6. A comparative study of automatic image segmentation algorithms for target tracking in MR-IGRT.

    PubMed

    Feng, Yuan; Kawrakow, Iwan; Olsen, Jeff; Parikh, Parag J; Noel, Camille; Wooten, Omar; Du, Dongsu; Mutic, Sasa; Hu, Yanle

    2016-03-01

    On-board magnetic resonance (MR) image guidance during radiation therapy offers the potential for more accurate treatment delivery. To utilize the real-time image information, a crucial prerequisite is the ability to successfully segment and track regions of interest (ROI). The purpose of this work is to evaluate the performance of different segmentation algorithms using motion images (4 frames per second) acquired using a MR image-guided radiotherapy (MR-IGRT) system. Manual contours of the kidney, bladder, duodenum, and a liver tumor by an experienced radiation oncologist were used as the ground truth for performance evaluation. Besides the manual segmentation, images were automatically segmented using thresholding, fuzzy k-means (FKM), k-harmonic means (KHM), and reaction-diffusion level set evolution (RD-LSE) algorithms, as well as the tissue tracking algorithm provided by the ViewRay treatment planning and delivery system (VR-TPDS). The performance of the five algorithms was evaluated quantitatively by comparing with the manual segmentation using the Dice coefficient and target registration error (TRE) measured as the distance between the centroid of the manual ROI and the centroid of the automatically segmented ROI. All methods were able to successfully segment the bladder and the kidney, but only FKM, KHM, and VR-TPDS were able to segment the liver tumor and the duodenum. The performance of the thresholding, FKM, KHM, and RD-LSE algorithms degraded as the local image contrast decreased, whereas the performance of the VR-TPDS method was nearly independent of local image contrast due to the reference registration algorithm. For segmenting high-contrast images (i.e., kidney), the thresholding method provided the best speed (<1 ms) with a satisfying accuracy (Dice=0.95). When the image contrast was low, the VR-TPDS method had the best automatic contour. Results suggest an image quality determination procedure before segmentation and a combination of different methods for optimal segmentation with the on-board MR-IGRT system. PACS number(s): 87.57.nm, 87.57.N-, 87.61.Tg. © 2016 The Authors.

  7. A comparative study of automatic image segmentation algorithms for target tracking in MR‐IGRT

    PubMed Central

    Feng, Yuan; Kawrakow, Iwan; Olsen, Jeff; Parikh, Parag J.; Noel, Camille; Wooten, Omar; Du, Dongsu; Mutic, Sasa

    2016-01-01

    On-board magnetic resonance (MR) image guidance during radiation therapy offers the potential for more accurate treatment delivery. To utilize the real-time image information, a crucial prerequisite is the ability to successfully segment and track regions of interest (ROI). The purpose of this work is to evaluate the performance of different segmentation algorithms using motion images (4 frames per second) acquired with an MR image-guided radiotherapy (MR-IGRT) system. Manual contours of the kidney, bladder, duodenum, and a liver tumor by an experienced radiation oncologist were used as the ground truth for performance evaluation. Besides the manual segmentation, images were automatically segmented using thresholding, fuzzy k-means (FKM), k-harmonic means (KHM), and reaction-diffusion level set evolution (RD-LSE) algorithms, as well as the tissue tracking algorithm provided by the ViewRay treatment planning and delivery system (VR-TPDS). The performance of the five algorithms was evaluated quantitatively by comparison with the manual segmentation using the Dice coefficient and the target registration error (TRE), measured as the distance between the centroid of the manual ROI and the centroid of the automatically segmented ROI. All methods were able to successfully segment the bladder and the kidney, but only FKM, KHM, and VR-TPDS were able to segment the liver tumor and the duodenum. The performance of the thresholding, FKM, KHM, and RD-LSE algorithms degraded as the local image contrast decreased, whereas the performance of the VR-TPDS method was nearly independent of local image contrast due to the reference registration algorithm. For segmenting high-contrast images (e.g., the kidney), the thresholding method provided the best speed (<1 ms) with satisfactory accuracy (Dice = 0.95). When the image contrast was low, the VR-TPDS method produced the best automatic contour. The results suggest applying an image-quality determination procedure before segmentation and combining different methods to achieve optimal segmentation with the on-board MR-IGRT system. PACS number(s): 87.57.nm, 87.57.N-, 87.61.Tg

  8. Detection, localization and classification of multiple dipole-like magnetic sources using magnetic gradient tensor data

    NASA Astrophysics Data System (ADS)

    Gang, Yin; Yingtang, Zhang; Hongbo, Fan; Zhining, Li; Guoquan, Ren

    2016-05-01

    We have developed a method for the automatic detection, localization and classification (DLC) of multiple dipole sources using magnetic gradient tensor data. First, we define modified tilt angles to estimate the approximate horizontal locations of the multiple dipole-like magnetic sources simultaneously and detect the number of magnetic sources using a fixed threshold. Second, based on the isotropy of the normalized source strength (NSS) response of a dipole, we obtain accurate horizontal locations of the dipoles. Then the vertical locations are calculated using magnitude magnetic transforms of the magnetic gradient tensor data. Finally, we invert for the magnetic moments of the sources using the measured magnetic gradient tensor data and a forward model. Synthetic and field data sets demonstrate the effectiveness and practicality of the proposed method.
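
    A small sketch of the NSS step, assuming the commonly used dipole form NSS = sqrt(-l2^2 - l1*l3) for the ordered eigenvalues l1 >= l2 >= l3 of the measured gradient tensor (the paper's exact variant may differ):

      import numpy as np

      def nss(tensor):
          """tensor: 3x3 symmetric magnetic gradient tensor at one station."""
          l3, l2, l1 = np.linalg.eigvalsh(tensor)   # eigvalsh returns ascending order
          return np.sqrt(max(-l2**2 - l1*l3, 0.0))  # clamp small negative noise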

  9. Thresholding Based on Maximum Weighted Object Correlation for Rail Defect Detection

    NASA Astrophysics Data System (ADS)

    Li, Qingyong; Huang, Yaping; Liang, Zhengping; Luo, Siwei

    Automatic thresholding is an important technique for rail defect detection, but traditional methods do not fit the characteristics of this application well. This paper proposes the Maximum Weighted Object Correlation (MWOC) thresholding method, which fits the characteristics of rail images: a unimodal gray-level distribution and a small defect proportion. MWOC selects a threshold by maximizing the product of the object correlation and a weight term that expresses the proportion of thresholded defects. Our experimental results demonstrate that MWOC achieves a misclassification error of 0.85% and outperforms other well-established thresholding methods, including the Otsu, maximum correlation thresholding, maximum entropy thresholding and valley-emphasis methods, for the application of rail defect detection.
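
    The selection loop implied above can be sketched as an exhaustive search over gray levels that keeps the maximizer of the weighted criterion; `correlation` and `weight` below are hypothetical stand-ins, since the paper's exact expressions are not reproduced here.

      import numpy as np

      def mwoc_threshold(img, correlation, weight):
          # score every candidate gray level and keep the maximizer
          levels = np.arange(1, 255)
          scores = [correlation(img, t) * weight(img, t) for t in levels]
          return levels[int(np.argmax(scores))]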

  10. Epidemic spreading with activity-driven awareness diffusion on multiplex network.

    PubMed

    Guo, Quantong; Lei, Yanjun; Jiang, Xin; Ma, Yifang; Huo, Guanying; Zheng, Zhiming

    2016-04-01

    There has been growing interest in exploring the interplay between epidemic spreading and human response, since it is natural for people to take various measures when they become aware of epidemics. As a proper way to describe the multiple connections among people in reality, the multiplex network, a set of nodes interacting through multiple sets of edges, has attracted much attention. In this paper, to explore the coupled dynamical processes, a multiplex network with two layers is built. Specifically, the information spreading layer is a time-varying network generated by the activity-driven model, while the contagion layer is a static network. We extend the microscopic Markov chain approach to derive the epidemic threshold of the model. Compared with extensive Monte Carlo simulations, the method shows high accuracy in predicting the epidemic threshold. Besides, taking different spreading models of awareness into consideration, we explore the interplay between epidemic spreading and awareness spreading. The results show that awareness spreading can not only raise the epidemic threshold but also reduce the prevalence of epidemics. When the spreading of awareness is defined as a susceptible-infected-susceptible model, there exists a critical value where the dynamical process on the awareness layer can control the onset of epidemics; if it is instead a threshold model, the epidemic threshold undergoes an abrupt transition as the local awareness ratio α approaches 0.5. Moreover, we also find that temporal changes in the topology hinder the spread of awareness, which directly affects the epidemic threshold, especially when the awareness layer is a threshold model. Given that the threshold model is a widely used model for social contagion, this is an important and meaningful result. Our results could also lead to interesting future research about the different time scales of structural changes in multiplex networks.
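
    A compact Monte Carlo sketch of the coupled two-layer dynamics described above (not the paper's Markov chain derivation): SIS contagion on a static contact layer, unaware-aware-unaware spreading on an information layer, and a reduced infection rate for aware nodes. All rates and the reduction factor gamma are assumptions.

      import numpy as np
      rng = np.random.default_rng(0)

      def step(A_contact, A_info, infected, aware,
               beta=0.2, mu=0.1, lam=0.15, delta=0.1, gamma=0.5):
          """One synchronous update; adjacency matrices and boolean state arrays."""
          new_inf, new_aw = infected.copy(), aware.copy()
          for i in range(len(infected)):
              if infected[i]:
                  if rng.random() < mu:
                      new_inf[i] = False          # recovery
              else:
                  rate = beta * (gamma if aware[i] else 1.0)
                  for j in np.nonzero(A_contact[i])[0]:
                      if infected[j] and rng.random() < rate:
                          new_inf[i] = True       # infection through contact layer
                          break
              if aware[i]:
                  if rng.random() < delta:
                      new_aw[i] = False           # forgetting
              else:
                  for j in np.nonzero(A_info[i])[0]:
                      if aware[j] and rng.random() < lam:
                          new_aw[i] = True        # awareness through information layer
                          break
          new_aw |= new_inf                       # the infected become aware
          return new_inf, new_aw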

  11. Epidemic spreading with activity-driven awareness diffusion on multiplex network

    NASA Astrophysics Data System (ADS)

    Guo, Quantong; Lei, Yanjun; Jiang, Xin; Ma, Yifang; Huo, Guanying; Zheng, Zhiming

    2016-04-01

    There has been growing interest in exploring the interplay between epidemic spreading and human response, since it is natural for people to take various measures when they become aware of epidemics. As a proper way to describe the multiple connections among people in reality, the multiplex network, a set of nodes interacting through multiple sets of edges, has attracted much attention. In this paper, to explore the coupled dynamical processes, a multiplex network with two layers is built. Specifically, the information spreading layer is a time-varying network generated by the activity-driven model, while the contagion layer is a static network. We extend the microscopic Markov chain approach to derive the epidemic threshold of the model. Compared with extensive Monte Carlo simulations, the method shows high accuracy in predicting the epidemic threshold. Besides, taking different spreading models of awareness into consideration, we explore the interplay between epidemic spreading and awareness spreading. The results show that awareness spreading can not only raise the epidemic threshold but also reduce the prevalence of epidemics. When the spreading of awareness is defined as a susceptible-infected-susceptible model, there exists a critical value where the dynamical process on the awareness layer can control the onset of epidemics; if it is instead a threshold model, the epidemic threshold undergoes an abrupt transition as the local awareness ratio α approaches 0.5. Moreover, we also find that temporal changes in the topology hinder the spread of awareness, which directly affects the epidemic threshold, especially when the awareness layer is a threshold model. Given that the threshold model is a widely used model for social contagion, this is an important and meaningful result. Our results could also lead to interesting future research about the different time scales of structural changes in multiplex networks.

  12. Thresholds for decision-making: informing the cost-effectiveness and affordability of rotavirus vaccines in Malaysia.

    PubMed

    Loganathan, Tharani; Ng, Chiu-Wan; Lee, Way-Seah; Hutubessy, Raymond C W; Verguet, Stéphane; Jit, Mark

    2018-03-01

    Cost-effectiveness thresholds (CETs) based on the Commission on Macroeconomics and Health (CMH) are extensively used in low- and middle-income countries (LMICs) lacking locally defined CETs. These thresholds were originally intended for global and regional prioritization, and do not reflect local context or affordability at the national level, so their value for informing resource allocation decisions has been questioned. Using these thresholds, rotavirus vaccines are widely regarded as cost-effective interventions in LMICs. However, high vaccine prices remain a barrier to vaccine introduction. This study aims to evaluate the cost-effectiveness, affordability and threshold price of universal rotavirus vaccination at various CETs in Malaysia. The cost-effectiveness of Rotarix and RotaTeq was evaluated using a multi-cohort model. The Pan American Health Organization Revolving Fund's vaccine prices were used as the tender price, while the recommended retail price for Malaysia was used as the market price. We estimate threshold prices, defined as the prices at which vaccination becomes cost-effective, at various CETs reflecting economic theories of human capital, societal willingness-to-pay and marginal productivity. A budget impact analysis compared programmatic costs with the healthcare budget. At tender prices, both vaccines were cost-saving. At market prices, cost-effectiveness differed with the thresholds used: under CMH thresholds, Rotarix programmes were cost-effective and RotaTeq programmes were not from the healthcare provider's perspective, while both vaccines were cost-effective from the societal perspective; under the other CETs, neither vaccine was cost-effective at market price from either perspective. At tender and cost-effective prices, rotavirus vaccination cost approximately 1% and 3% of the public health budget, respectively. Using locally defined thresholds, rotavirus vaccination is cost-effective at vaccine prices in line with international tenders, but not at market prices. Thresholds representing marginal productivity are likely to be lower than those reflecting human capital and individual preference measures, and may be useful in determining affordable vaccine prices. © The Author 2017. Published by Oxford University Press in association with The London School of Hygiene and Tropical Medicine. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
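
    The threshold-price logic reduces to simple algebra: if programme cost rises linearly with the price per dose, the threshold price solves ICER(price) = CET. A toy sketch with placeholder numbers, not study values:

      def threshold_price(fixed_cost, doses, offset_savings, daly_averted, cet):
          # ICER = (fixed_cost + price*doses - offset_savings) / daly_averted = cet
          return (cet * daly_averted - fixed_cost + offset_savings) / doses

      # hypothetical inputs: programme overheads, doses delivered, treatment
      # costs averted, DALYs averted, and a cost-effectiveness threshold
      print(threshold_price(2e6, 1e6, 5e6, 3e4, 1e4))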

  13. Predictive performance of rainfall thresholds for shallow landslide triggering in Switzerland from daily gridded precipitation data

    NASA Astrophysics Data System (ADS)

    Leonarduzzi, E.; Molnar, P.; McArdell, B. W.

    2017-12-01

    In Switzerland, floods are responsible for most of the damage caused by rainfall-triggered natural hazards (89%), followed by landslides (6%, almost 600 M USD), as reported in Hilker et al. (2009) for the period 1972-2007. A high-resolution gridded daily precipitation dataset is combined with a landslide inventory containing over 2000 events in the period 1972-2012 to analyze the rainfall thresholds that lead to landsliding in Switzerland. First, triggering rainfall and landslides are co-located, yielding the distributions of triggering and non-triggering rainfall event properties at the scale of the precipitation data (2 x 2 km2), with 1 day taken as the interarrival time separating events. Rainfall thresholds are then obtained by maximizing true positives (hits) while minimizing false positives (false alarms), using the True Skill Statistic. The best predictive performance is obtained by the intensity-duration (ID) threshold curve, followed by peak daily intensity (Imax) and mean event intensity (Imean). Event duration by itself has very low predictive power. In addition to country-wide thresholds, local ones are also defined by regionalization based on surface erodibility and local long-term climate (mean daily precipitation), with different Imax thresholds determined for each region separately. It is found that a wetter local climate and lower erodibility lead to significantly higher rainfall thresholds required to trigger landslides. However, the improvement in model performance due to regionalization is marginal and much lower than what can be achieved by having a high-quality landslide database. To validate the performance of the Imax rainfall threshold model, reference cases will be presented in which the landslide locations and timing are randomized and the landslide sample size is reduced. Jack-knife and cross-validation experiments demonstrate that the model is robust. The results highlight the potential of using rainfall ID threshold curves and Imax threshold values for predicting the occurrence of landslides on a country or regional scale even with daily precipitation data, with possible applications in landslide warning systems.
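
    The threshold selection step can be sketched as a scan over candidate intensities that maximizes the True Skill Statistic (hit rate minus false alarm rate); inputs are rainfall event intensities with binary landslide labels. A minimal sketch, with no handling of the degenerate all-positive or all-negative cases:

      import numpy as np

      def best_tss_threshold(intensity, triggered):
          intensity = np.asarray(intensity, float)
          triggered = np.asarray(triggered, bool)
          best_tss, best_t = -np.inf, None
          for t in np.unique(intensity):
              pred = intensity >= t
              hit_rate = np.sum(pred & triggered) / triggered.sum()
              fa_rate = np.sum(pred & ~triggered) / (~triggered).sum()
              if hit_rate - fa_rate > best_tss:
                  best_tss, best_t = hit_rate - fa_rate, t
          return best_t, best_tss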

  14. Chaotic Signal Denoising Based on Hierarchical Threshold Synchrosqueezed Wavelet Transform

    NASA Astrophysics Data System (ADS)

    Wang, Wen-Bo; Jing, Yun-yu; Zhao, Yan-chao; Zhang, Lian-Hua; Wang, Xiang-Li

    2017-12-01

    To overcome the shortcomings of the single-threshold synchrosqueezed wavelet transform (SWT) denoising method, an adaptive hierarchical-threshold SWT denoising method for chaotic signals is proposed. First, a new SWT threshold function is constructed based on Stein's unbiased risk estimate; it is twice continuously differentiable. Then, using the new threshold function, a thresholding process based on the minimum mean square error is implemented, and the optimal estimate of each layer's threshold in SWT chaotic denoising is obtained. Experimental results on a simulated chaotic signal and measured sunspot signals show that the proposed method filters the noise of chaotic signals well and recovers the intrinsic chaotic characteristics of the original signal. Compared with the EEMD denoising method and the single-threshold SWT denoising method, the proposed method obtains better denoising results for chaotic signals.
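
    As a hedged stand-in for the paper's twice continuously differentiable threshold function (whose exact form is not reproduced here), the sketch below applies a smooth logistic-weighted shrinkage with a per-level threshold, illustrating the hierarchical idea of one threshold per decomposition layer:

      import numpy as np

      def smooth_shrink(coeffs, t, a=10.0):
          # weight ~0 below the threshold t, ~1 above it; a sets the sharpness
          w = 1.0 / (1.0 + np.exp(-a * (np.abs(coeffs) - t)))
          return coeffs * w

      def denoise_levels(coeff_levels, sigma_est):
          # one threshold per layer, here the universal rule as an assumption
          return [smooth_shrink(c, sigma_est * np.sqrt(2.0 * np.log(c.size)))
                  for c in coeff_levels]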

  15. Characterization of the Distance Relationship Between Localized Serotonin Receptors and Glia Cells on Fluorescence Microscopy Images of Brain Tissue.

    PubMed

    Jacak, Jaroslaw; Schaller, Susanne; Borgmann, Daniela; Winkler, Stephan M

    2015-08-01

    We here present two new methods for the characterization of fluorescence localization microscopy images obtained from immunostained brain tissue sections. Direct stochastic optical reconstruction microscopy images of 5-HT1A serotonin receptors and glial fibrillary acidic proteins in healthy cryopreserved brain tissues are analyzed. In detail, the two image processing methods characterize differences between the receptor distribution on glial cells and that on neural cells: one variant relies on skeleton extraction and adaptive thresholding, the other on k-means-based discrete layer segmentation. Experimental results show that both methods can distinguish classes of images with respect to serotonin receptor distribution. Quantification of nanoscopic changes in relative protein expression on particular cell types can be used to analyze degeneration in tissues caused by diseases or medical treatment.

  16. Fast microcalcification detection in ultrasound images using image enhancement and threshold adjacency statistics

    NASA Astrophysics Data System (ADS)

    Cho, Baek Hwan; Chang, Chuho; Lee, Jong-Ha; Ko, Eun Young; Seong, Yeong Kyeong; Woo, Kyoung-Gu

    2013-02-01

    The existence of microcalcifications (MCs) is an important marker of malignancy in breast cancer. Despite its benefits for mass detection in dense breasts, ultrasonography is believed not to detect MCs reliably. For computer-aided diagnosis systems, however, accurate detection of MCs has the potential to improve performance in both Breast Imaging-Reporting and Data System (BI-RADS) lexicon description for calcifications and malignancy classification. We propose a new efficient and effective method for MC detection using image enhancement and threshold adjacency statistics (TAS). The main idea of TAS is to threshold an image and to count the number of white pixels with a given number of adjacent white pixels. Our contribution is to adopt TAS features and apply image enhancement to facilitate MC detection in ultrasound images. We employed fuzzy logic, a top-hat filter, and a texture filter to enhance the images for MCs. Using a total of 591 images, the classification accuracy of the proposed method in MC detection was 82.75%, which is comparable to that of Haralick texture features (81.38%). When the two were combined, the performance rose to 85.11%. In addition, our method also proved useful for mass classification when combined with existing features. In conclusion, the proposed method exploiting image enhancement and TAS features has the potential to handle MC detection in ultrasound images efficiently and to extend to the real-time localization and visualization of MCs.
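
    The TAS idea sketched in code: binarize the image, then histogram how many of the 8 neighbors of each white pixel are themselves white, giving a 9-bin feature vector. The single binarization rule below is an assumption; the original TAS formulation uses several threshold offsets.

      import numpy as np
      from scipy.ndimage import convolve

      def tas_features(img, thresh):
          white = (img > thresh).astype(np.uint8)
          kernel = np.array([[1, 1, 1], [1, 0, 1], [1, 1, 1]])
          neighbors = convolve(white, kernel, mode='constant')  # white-neighbor counts
          counts = np.bincount(neighbors[white == 1], minlength=9)[:9]
          return counts / max(counts.sum(), 1)  # normalized 9-bin histogram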

  17. Self-Tuning Threshold Method for Real-Time Gait Phase Detection Based on Ground Contact Forces Using FSRs.

    PubMed

    Tang, Jing; Zheng, Jianbin; Wang, Yang; Yu, Lie; Zhan, Enqi; Song, Qiuzhi

    2018-02-06

    This paper presents a novel methodology for detecting the gait phase of human walking on level ground. The previous threshold method (TM) sets a threshold to divide the ground contact forces (GCFs) into on-ground and off-ground states. However, previous methods for gait phase detection show no adaptability to different people and different walking speeds. Therefore, this paper presents a self-tuning triple threshold algorithm (STTTA) that calculates adjustable thresholds to adapt to human walking. Two force-sensitive resistors (FSRs) were placed on the ball and heel to measure GCFs. Three thresholds (high, middle, and low) were used to search out the maximum and minimum GCFs for the self-adjustment of the thresholds. The high threshold was the main threshold used to divide the GCFs into on-ground and off-ground states. The gait phases were then obtained through the gait phase detection algorithm (GPDA), which provides the rules governing the STTTA calculations. Finally, the reliability of the STTTA was determined by comparing its results with those of the Mariani method (referenced as the timing analysis module, TAM) and the Lopez-Meyer method. Experimental results show that the proposed method can detect gait phases in real time and achieves high reliability compared with previous methods in the literature. In addition, the proposed method exhibits strong adaptability to different wearers walking at different speeds.
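
    The self-tuning idea can be sketched as follows: track recent extrema of the ground contact force and place the three thresholds at fixed fractions between them, with the high threshold dividing on-ground from off-ground. The fractions and window length are assumptions, not the paper's values.

      import numpy as np

      def update_thresholds(gcf_window, fracs=(0.7, 0.5, 0.3)):
          lo, hi = float(np.min(gcf_window)), float(np.max(gcf_window))
          high_t, mid_t, low_t = (lo + f * (hi - lo) for f in fracs)
          return high_t, mid_t, low_t

      def on_ground(gcf_sample, high_t):
          # the high threshold is the main on-ground / off-ground divider
          return gcf_sample > high_t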

  18. Predictive performance of rainfall thresholds for shallow landslides in Switzerland from gridded daily data

    NASA Astrophysics Data System (ADS)

    Leonarduzzi, Elena; Molnar, Peter; McArdell, Brian W.

    2017-08-01

    A high-resolution gridded daily precipitation data set was combined with a landslide inventory containing over 2000 events in the period 1972-2012 to analyze rainfall thresholds which lead to landsliding in Switzerland. We colocated triggering rainfall to landslides, developed distributions of triggering and nontriggering rainfall event properties, and determined rainfall thresholds and intensity-duration ID curves and validated their performance. The best predictive performance was obtained by the intensity-duration ID threshold curve, followed by peak daily intensity Imax and mean event intensity Imean. Event duration by itself had very low predictive power. A single country-wide threshold of Imax = 28 mm/d was extended into space by regionalization based on surface erodibility and local climate (mean daily precipitation). It was found that wetter local climate and lower erodibility led to significantly higher rainfall thresholds required to trigger landslides. However, we showed that the improvement in model performance due to regionalization was marginal and much lower than what can be achieved by having a high-quality landslide database. Reference cases in which the landslide locations and timing were randomized and the landslide sample size was reduced showed the sensitivity of the Imax rainfall threshold model. Jack-knife and cross-validation experiments demonstrated that the model was robust. The results reported here highlight the potential of using rainfall ID threshold curves and rainfall threshold values for predicting the occurrence of landslides on a country or regional scale with possible applications in landslide warning systems, even with daily data.

  19. Comparison of image segmentation of lungs using methods: connected threshold, neighborhood connected, and threshold level set segmentation

    NASA Astrophysics Data System (ADS)

    Amanda, A. R.; Widita, R.

    2016-03-01

    The aim of this research is to compare several lung image segmentation methods based on performance evaluation parameters (mean square error (MSE) and peak signal-to-noise ratio (PSNR)). The methods compared were connected threshold, neighborhood connected, and threshold level set segmentation applied to lung images. All three methods require one important parameter, i.e. the threshold. The threshold interval was obtained from the histogram of the original image. The software used to segment the images was InsightToolkit-4.7.0 (ITK). Five lung images were analyzed, and the results were compared using the performance evaluation parameters, computed in MATLAB. A segmentation method is considered good if it yields the smallest MSE and the highest PSNR. The results show that connected threshold gave the best results for four of the five sample images, while threshold level set segmentation was best for the remaining one. It can therefore be concluded that the connected threshold method is better than the other two methods for these cases.
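
    The two evaluation parameters in minimal form, assuming 8-bit images so the PSNR peak value is 255:

      import numpy as np

      def mse(a, b):
          return np.mean((a.astype(float) - b.astype(float)) ** 2)

      def psnr(a, b, peak=255.0):
          m = mse(a, b)
          return np.inf if m == 0 else 10.0 * np.log10(peak**2 / m)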

  20. Humans and seasonal climate variability threaten large-bodied coral reef fish with small ranges.

    PubMed

    Mellin, C; Mouillot, D; Kulbicki, M; McClanahan, T R; Vigliola, L; Bradshaw, C J A; Brainard, R E; Chabanet, P; Edgar, G J; Fordham, D A; Friedlander, A M; Parravicini, V; Sequeira, A M M; Stuart-Smith, R D; Wantiez, L; Caley, M J

    2016-02-03

    Coral reefs are among the most species-rich and threatened ecosystems on Earth, yet the extent to which human stressors determine species occurrences, compared with biogeography or environmental conditions, remains largely unknown. With ever-increasing human-mediated disturbances to these ecosystems, an important question is not only how many species can inhabit local communities, but also which biological traits determine the species that can (or cannot) persist above particular disturbance thresholds. Here we show that human pressure and seasonal climate variability are disproportionately and negatively associated with the occurrence of large-bodied and geographically small-ranging fishes within local coral reef communities. These species are 67% less likely to occur where human impact and temperature seasonality exceed critical thresholds, such as in the marine biodiversity hotspot, the Coral Triangle. Our results identify the most sensitive species and the critical thresholds of human and climatic stressors, providing opportunities for targeted conservation intervention to prevent local extinctions.

  1. Meta‐analysis of test accuracy studies using imputation for partial reporting of multiple thresholds

    PubMed Central

    Deeks, J.J.; Martin, E.C.; Riley, R.D.

    2017-01-01

    Introduction For tests reporting continuous results, primary studies usually provide test performance at multiple, but often different, thresholds. This creates missing data when performing a meta-analysis at each threshold. A standard meta-analysis (no imputation [NI]) ignores such missing data. A single imputation (SI) approach was recently proposed to recover missing threshold results. Here, we propose a new method that performs multiple imputation of the missing threshold results using discrete combinations (MIDC). Methods The new MIDC method imputes missing threshold results by randomly selecting from the set of all possible discrete combinations which lie between the results for 2 known bounding thresholds. Imputed and observed results are then synthesised at each threshold. This is repeated multiple times, and the multiple pooled results at each threshold are combined using Rubin's rules to give final estimates. We compared the NI, SI, and MIDC approaches via simulation. Results Both imputation methods outperform the NI method in simulations. There was generally little difference between the SI and MIDC methods, but the latter was noticeably better at estimating the between-study variances and generally gave better coverage, owing to the slightly larger standard errors of its pooled estimates. Given selective reporting of thresholds, the imputation methods also reduced bias in the summary receiver operating characteristic curve. Simulations demonstrate that the imputation methods rely on an equal threshold-spacing assumption. A real example is presented. Conclusions The SI and, in particular, MIDC methods can be used to examine the impact of missing threshold results in meta-analysis of test accuracy studies. PMID:29052347
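
    The Rubin's-rules pooling step mentioned above, in minimal form: the pooled estimate is the mean of the per-imputation estimates, and the total variance adds the within- and between-imputation components.

      import numpy as np

      def rubins_rules(estimates, variances):
          estimates = np.asarray(estimates, float)
          variances = np.asarray(variances, float)
          m = len(estimates)
          qbar = estimates.mean()            # pooled estimate
          w = variances.mean()               # within-imputation variance
          b = estimates.var(ddof=1)          # between-imputation variance
          total_var = w + (1.0 + 1.0 / m) * b
          return qbar, np.sqrt(total_var)    # estimate and its standard error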

  2. Anticoagulant effects of inhaled unfractionated heparin in the dog as determined by partial thromboplastin time and factor Xa activity.

    PubMed

    Manion, Jill S; Thomason, John M; Langston, Vernon C; Claude, Andrew K; Brooks, Marjory B; Mackin, Andrew J; Lunsford, Kari V

    2016-01-01

    To evaluate the anticoagulant effects of inhaled heparin in dogs. This study was conducted in 3 phases. In phase 1, bronchoalveolar lavage fluid (BALf) was collected to generate an in vitro calibration curve to relate heparin concentration to the activated partial thromboplastin time (aPTT). In phase 2, heparin was administered via nebulization to determine the threshold dose needed to prolong systemic aPTT. In phase 3, the local anticoagulant activity of inhaled heparin was determined by measurement of BALf anti-Xa activity and aPTT. University teaching hospital. Six healthy intact female Walker Hounds were used in this study. Two dogs were used for each phase. Inhaled unfractionated sodium heparin was administered in doses ranging from 50,000 to 200,000 IU. In vitro addition of heparin to BALf caused a prolongation in aPTT. Inhaled heparin at doses as high as 200,000 IU failed to prolong systemic aPTT, and a threshold dose could not be determined. No significant local anticoagulant effects were detected. Even at doses higher than those known to be effective in people, inhaled heparin appears to have no detectable local or systemic anticoagulant effects in dogs with the current delivery method. © Veterinary Emergency and Critical Care Society 2015.

  3. Event-triggered output feedback control for distributed networked systems.

    PubMed

    Mahmoud, Magdi S; Sabih, Muhammad; Elshafei, Moustafa

    2016-01-01

    This paper addresses the problem of output-feedback communication and control in an event-triggered framework in the context of distributed networked control systems. The design of the event-triggered output-feedback control is formulated as a linear matrix inequality (LMI) feasibility problem. The scheme is developed for distributed systems where only partial states are available. In this scheme, a subsystem uses local observers and shares its information with its neighbors only when the subsystem's local error exceeds a specified threshold. The developed method is illustrated using a coupled-cart example from the literature. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
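
    The triggering rule itself is simple; a minimal sketch (the threshold value is a placeholder, and the LMI-based controller design is not reproduced):

      import numpy as np

      def maybe_transmit(x_hat, last_sent, threshold=0.1):
          # broadcast the local observer state only when the error is large
          if np.linalg.norm(x_hat - last_sent) > threshold:
              return x_hat, True    # transmit and reset the error
          return last_sent, False   # stay silent; neighbors keep the old value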

  4. Dynamic resource allocation scheme for distributed heterogeneous computer systems

    NASA Technical Reports Server (NTRS)

    Liu, Howard T. (Inventor); Silvester, John A. (Inventor)

    1991-01-01

    This invention relates to resource allocation in computer systems and, more particularly, to a method and associated apparatus for shortening response time and improving the efficiency of a heterogeneous distributed networked computer system by reallocating the jobs queued up for busy nodes to idle or less busy nodes. In accordance with the algorithm (SIDA for short), load-sharing is initiated by the server device in a manner such that extra overhead is not imposed on the system during heavily loaded conditions. The algorithm employed in the present invention uses a dual-mode, server-initiated approach. Jobs are transferred from heavily burdened nodes (i.e., those over a high threshold limit) to lightly burdened nodes at the initiation of the receiving node when: (1) a job finishes at a node which is burdened below a pre-established threshold level, or (2) a node has been idle for a period of time as established by a wakeup timer at the node. The invention uses a combination of the local queue length and the local service rate ratio at each node as the workload indicator.

  5. Meteoalarm severe wind gust thresholds from uniform periods in ECA&D

    NASA Astrophysics Data System (ADS)

    Wijnant, I. L.

    2010-09-01

    The main aim of our work is to propose new thresholds for Meteoalarm severe weather warnings which are based on the local climate, specifically for the severe wind gust warnings because the variability of these thresholds is currently rather extreme and unrealistic. In order to achieve this we added validated wind data to the database of the European Climate Assessment and Database project (ECA&D) and analysed them. We also developed wind related indices for ECA&D in order to facilitate further research. Since 2007 most of the severe weather warnings issued by the National Weather Services in Europe can be found on one website: www.meteoalarm.eu. For the 30 participating countries colour codes (yellow, orange, red) are presented on a map of Europe to reflect the severity of the weather event and its possible impact. The thresholds used for these colour codes obviously depend on the type of severe weather, but should also reflect local climate (for example: identical heat waves will have a more significant impact in Sweden than in Spain). The current Meteoalarm guideline is to issue second level warnings (orange) 1-30 times a year and third level warnings (red) less than once a year (being the total number of warnings from a specific country for all of the different sorts of severe weather events in that year). There is no similar guideline for specific sorts of severe weather events and participating countries choose their own thresholds. As a result we see unrealistic differences in the frequency and thresholds of the warnings for neighbouring countries. New thresholds based on return values would reflect the local climate of each country and give a more uniform indication of the social impact. Additionally, without uniform definitions of severe weather it remains difficult to determine if severe weather in Europe is changing. ECA&D receives long series of daily data from 62 countries throughout Europe and the Mediterranean. So far we have 7 countries that provide us with wind data. Quality control and homogeneity tests are conducted on all data before analysis is carried out. For wind data the standard ECA&D homogeneity tests (SNHT, Pettitt, Buishand and Von Neuman Ratio) are performed on the wind gust factor (the ratio of the maximum daily gust to the daily average wind speed) and a relatively new test (Petrovic's ReDistribution Method) on wind direction data. For the Dutch data we compared the results of the homogeneity tests with the available meta-data. Inhomogeneous series are not corrected but the older part (before the most recent break) is excluded from further analysis.

  6. User-relevant, threshold-specific observations of climate change

    NASA Astrophysics Data System (ADS)

    Stainforth, Dave; Chapman, Sandra; Watkins, Nicholas

    2014-05-01

    Users of climate information look for details of changing climate at local scales (to inform specific activities) and on the geographical patterns of such changes (to prioritise adaptation investments). They often have user-specific thresholds of vulnerability so the changes of interest must refer to such thresholds or to the related quantile of the climatic distribution. A method for providing such information from timeseries of temperature data has recently been published [1] along with maps of changes at thresholds and quantiles [2] derived from the European Observational dataset E-Obs [3]. In this presentation we will do two things. First we will discuss the opportunities to tailor such methods to provide user-specific information through climate services, using illustrations from the existing methodology applied to daily maximum and minimum temperatures [1,2]. Second we will present new results on threshold specific observed changes in precipitation. The methodology for precipitation is related to that which has been applied to temperature but has been developed to handle the characteristics of precipitation distributions. The results identify some regions with systematic increases in precipitation on the seasonally wettest days and others which show drying across all days, on a seasonal basis. We will present the geographic locations and precipitation thresholds where strong signals of changes are seen across Europe. The coherency of such results and the methodology used to process the observational data will be discussed. We will also highlight the justifications for having confidence in the results in some regions and at some thresholds while having a lack of confidence in others. Such information should be an important element of any climate services. It is worth noting that here "wettest days" refers to events which are uncommon within a season (e.g. one in ~20 wet days). This is in contrast and complementary to, for instance, the one in a hundred year extreme event. Users can be vulnerable to one or the other or both of these event types and climate services are required which are sufficiently flexible to provide tailored information in either situation. It is common to focus on the latter while the former is relatively understudied. [1] Chapman, S C, Stainforth, D A, Watkins, N W. 2013 On Estimating Local Long Term Climate Trends, Phil. Trans. R. Soc. A, 371 20120287; [2] Stainforth, D A, Chapman, S. C. & Watkins, N. W. 2013. Mapping climate change in European temperature distributions Environ. Res. Lett. 8 034031 [3] Haylock, M.R., N. Hofstra, A.M.G. Klein Tank, E.J. Klok, P.D. Jones and M. New. 2008: A European daily high-resolution gridded dataset of surface temperature and precipitation. J. Geophys. Res (Atmospheres), 113, D20119

  7. Evaluation of thresholding techniques for segmenting scaffold images in tissue engineering

    NASA Astrophysics Data System (ADS)

    Rajagopalan, Srinivasan; Yaszemski, Michael J.; Robb, Richard A.

    2004-05-01

    Tissue engineering attempts to address the ever-widening gap between the demand for and supply of organ and tissue transplants using natural and biomimetic scaffolds. The regeneration of specific tissues aided by synthetic materials depends on the structural and morphometric properties of the scaffold. These properties can be derived non-destructively using quantitative analysis of high-resolution microCT scans of scaffolds. Thresholding of the scanned images into polymeric and porous phases is central to the outcome of the subsequent structural and morphometric analysis. Visual thresholding of scaffolds produced using stochastic processes is inaccurate. Depending on the algorithmic assumptions made, automatic thresholding might also be inaccurate. Hence there is a need to analyze the performance of different techniques and propose alternate ones if needed. This paper provides a quantitative comparison of different thresholding techniques for segmenting scaffold images. The thresholding algorithms examined include those that exploit spatial information, locally adaptive characteristics, histogram entropy information, histogram shape information, and clustering of gray-level information. The performance of the different techniques was evaluated using established criteria, including misclassification error, edge mismatch, relative foreground error, and region non-uniformity. Algorithms that exploit local image characteristics seem to perform much better than those using global information.
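
    Two of the evaluation criteria named above in minimal form, assuming binary ground-truth and segmentation masks: misclassification error, and region non-uniformity taken as the foreground-size fraction times the ratio of foreground variance to whole-image variance.

      import numpy as np

      def misclassification_error(gt, seg):
          gt, seg = gt.astype(bool), seg.astype(bool)
          correct = np.sum(gt & seg) + np.sum(~gt & ~seg)
          return 1.0 - correct / gt.size

      def region_nonuniformity(img, seg):
          fg = img[seg.astype(bool)]
          return (fg.size / img.size) * (fg.var() / img.var())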

  8. Microstimulation of the lumbar DRG recruits primary afferent neurons in localized regions of lower limb.

    PubMed

    Ayers, Christopher A; Fisher, Lee E; Gaunt, Robert A; Weber, Douglas J

    2016-07-01

    Patterned microstimulation of the dorsal root ganglion (DRG) has been proposed as a method for delivering tactile and proprioceptive feedback to amputees. Previous studies demonstrated that large- and medium-diameter afferent neurons could be recruited separately, even several months after implantation. However, those studies did not examine the anatomical localization of sensory fibers recruited by microstimulation in the DRG. Achieving precise recruitment with respect to both modality and receptive field locations will likely be crucial to create a viable sensory neuroprosthesis. In this study, penetrating microelectrode arrays were implanted in the L5, L6, and L7 DRG of four isoflurane-anesthetized cats instrumented with nerve cuff electrodes around the proximal and distal branches of the sciatic and femoral nerves. A binary search was used to find the recruitment threshold for evoking a response in each nerve cuff. The selectivity of DRG stimulation was characterized by the ability to recruit individual distal branches to the exclusion of all others at threshold; 84.7% (n = 201) of the stimulation electrodes recruited a single nerve branch, with 9 of the 15 instrumented nerves recruited selectively. The median stimulation threshold was 0.68 nC/phase, and the median dynamic range (increase in charge while stimulation remained selective) was 0.36 nC/phase. These results demonstrate the ability of DRG microstimulation to achieve selective recruitment of the major nerve branches of the hindlimb, suggesting that this approach could be used to drive sensory input from localized regions of the limb. This sensory input might be useful for restoring tactile and proprioceptive feedback to a lower-limb amputee. Copyright © 2016 the American Physiological Society.

  9. Manipulating Traveling Brain Waves with Electric Fields: From Theory to Experiment.

    NASA Astrophysics Data System (ADS)

    Gluckman, Bruce J.

    2004-03-01

    Activity waves in disinhibited neocortical slices have been used as a biological model for epileptic seizure propagation [1]. Such waves have been mathematically modeled with integro-differential equations [2] representing non-local reaction-diffusion dynamics of an excitable medium with an excitability threshold. Stability and propagation speed of traveling pulse solutions depend strongly on the threshold in the following manner: propagation speed should decrease with increased threshold over a finite range, beyond which the waves become unstable. Because populations of neurons can be polarized with an applied electric field that effectively shifts their threshold for action potential initiation [3], we predicted, and have experimentally verified, that electric fields could be used globally or locally to speed up, slow down and even block wave propagation. [1] Telfeian and Conners, Epilepsia, 40, 1499-1506, 1999. [2] Pinto and Ermentrout, SIAM J. Appl. Math., 62, 206-225, 2001. [3] Gluckman et al., J. Neurophysiol., 76, 4202-5, 1996.

  10. Is transcatheter aortic valve implantation (TAVI) a cost-effective treatment in patients who are ineligible for surgical aortic valve replacement? A systematic review of economic evaluations.

    PubMed

    Eaton, James; Mealing, Stuart; Thompson, Juliette; Moat, Neil; Kappetein, Pieter; Piazza, Nicolo; Busca, Rachele; Osnabrugge, Ruben

    2014-05-01

    Health Technology Assessment (HTA) agencies often undertake a review of economic evaluations of an intervention during an appraisal in order to identify published estimates of cost-effectiveness, to elicit comparisons with the results of their own model, and to support local reimbursement decision-making. The aim of this research is to determine whether transcatheter aortic valve implantation (TAVI) compared with medical management (MM) is cost-effective in patients ineligible for surgical aortic valve replacement (SAVR), across different jurisdictions and country-specific evaluations. A systematic review of the literature from 2007-2012 was performed in the MEDLINE, MEDLINE In-Process, EMBASE, and UK NHS EED databases according to standard methods, supplemented by a search of published HTA models. All identified publications were reviewed independently by two health economists. The British Medical Journal (BMJ) 35-point checklist for economic evaluations was used to assess study reporting. To compare results, incremental cost-effectiveness ratios (ICERs) were converted to 2012 dollars using purchasing power parity (PPP) techniques. Six studies were identified, representing five reimbursement jurisdictions (England/Wales, Scotland, the US, Canada, and Belgium) and different modeling techniques. The identified economic evaluations represent different willingness-to-pay thresholds, discount rates, medical costs, and healthcare systems. In addition, the model structures, time horizons, and cycle lengths varied. When adjusting for differences in currencies, the ICERs ranged from $27K to $65K per QALY gained. Despite notable differences in modeling approach, under either the local threshold value or the threshold value recommended by the World Health Organization (WHO), each study showed that TAVI was likely to be a cost-effective intervention for patients ineligible for SAVR.

  11. Analysis of parenchymal patterns using conspicuous spatial frequency features in mammograms applied to the BI-RADS density rating scheme

    NASA Astrophysics Data System (ADS)

    Perconti, Philip; Loew, Murray

    2006-03-01

    Automatic classification of the density of breast parenchyma is demonstrated using a measure that is correlated with human observer performance, and compared against the BI-RADS density rating. Increasingly popular in the United States, the Breast Imaging Reporting and Data System (BI-RADS) is used to draw attention to the increased screening difficulty associated with greater breast density; however, the BI-RADS rating scheme is subjective and is not intended as an objective measure of breast density. So, while popular, BI-RADS does not define density classes using a standardized measure, which leads to increased variability among observers. The adaptive thresholding technique is a more quantitative approach for assessing percentage breast density, but considerable reader interaction is required. We calculate an objective density rating derived from a measure of local feature salience. Previously, this measure was shown to correlate well with radiologists' localization and discrimination of true-positive and true-negative regions of interest. Using conspicuous spatial frequency features, an objective density rating is obtained and correlated with adaptive thresholding and with the subjectively ascertained BI-RADS density ratings. Using 100 cases obtained from the University of South Florida's DDSM database, we show that an automated breast density measure can be derived that is correlated with the interactive thresholding method for continuous percentage breast density, but not with the BI-RADS density rating categories for the selected cases. Comparison between interactive thresholding and the new salience percentage density resulted in a Pearson correlation of 76.7%. Using a four-category scale equivalent to the BI-RADS density categories, a Spearman correlation coefficient of 79.8% was found.

  12. GOES Cloud Detection at the Global Hydrology and Climate Center

    NASA Technical Reports Server (NTRS)

    Laws, Kevin; Jedlovec, Gary J.; Arnold, James E. (Technical Monitor)

    2002-01-01

    The bi-spectral threshold (BTH) approach for cloud detection and height assignment is now operational at NASA's Global Hydrology and Climate Center (GHCC). This new approach is similar in principle to the bi-spectral spatial coherence (BSC) method, with improvements made to produce a more robust cloud-filtering algorithm for nighttime cloud detection and subsequent 24-hour operational cloud-top pressure assignment. The method capitalizes on cloud and surface emissivity differences between the GOES 3.9- and 10.7-micrometer channels to distinguish cloudy from clear pixels. Separate threshold values are determined for day and nighttime detection and applied to a 20-day minimum composite difference image to better filter background effects and enhance differences in cloud properties. A cloud-top pressure is assigned to each cloudy pixel by referencing the 10.7-micrometer channel temperature to a thermodynamic profile from a locally run regional forecast model. This paper and supplemental poster will present an objective validation of nighttime cloud detection by the BTH approach in comparison with previous methods. The cloud-top pressure will be evaluated by comparison with the NESDIS operational CO2 slicing approach.
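
    A rough numpy sketch of the compositing idea (the threshold value and array layout are placeholders): form the 3.9-minus-10.7-micrometer difference for each scene, take the 20-day minimum composite as background, and flag pixels departing from it by more than the threshold.

      import numpy as np

      def cloud_mask(t39_stack, t107_stack, thresh=4.0):
          diff = t39_stack - t107_stack            # (20, H, W) difference images
          background = diff.min(axis=0)            # 20-day minimum composite
          return (diff[-1] - background) > thresh  # cloudy pixels, latest scene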

  13. Power ramp induced iodine and cesium redistribution in LWR fuel rods

    NASA Astrophysics Data System (ADS)

    Sontheimer, F.; Vogl, W.; Ruyter, I.; Markgraf, J.

    1980-01-01

    Volatile fission product migration in LWR fuel rods that are power-ramped above a certain threshold, beyond the envelope of their previous power history, plays an important role in the stress corrosion cracking of Zircaloy. This may cause fuel rods to fail at stresses below the yield strength. In the HFR, Petten, many power ramp experiments have been performed with subsequent examination of the ramped rods for fission product distribution. This study describes the measurement of iodine and cesium distributions using γ-spectroscopy of I-131 and Cs-137. An evaluation method is presented which makes the determination of absolute amounts of I/Cs feasible. It is shown that a threshold for I/Cs redistribution exists, beyond which the redistribution depends strongly on local fuel rod power and fuel type.

  14. Observation of electrostatically released DNA from gold electrodes with controlled threshold voltages.

    PubMed

    Takeishi, Shunsaku; Rant, Ulrich; Fujiwara, Tsuyoshi; Buchholz, Karin; Usuki, Tatsuya; Arinaga, Kenji; Takemoto, Kazuya; Yamaguchi, Yoshitaka; Tornow, Marc; Fujita, Shozo; Abstreiter, Gerhard; Yokoyama, Naoki

    2004-03-22

    DNA oligonucleotides localized at Au metal electrodes in aqueous solution are found to be released when a negative bias voltage is applied to the electrode. The release was confirmed by monitoring the intensity of the fluorescence of cyanine dyes (Cy3) linked to the 5' end of the DNA. The threshold voltage of the release changes depending on the kind of linker added to the DNA 3'-terminal. The amount of released DNA depends on the duration of the voltage pulse. Using this technique, we can retain DNA at Au electrodes or Au needles and release the desired amount of DNA at a precise location in a target. The results suggest that DNA injection into living cells is possible with this method. (c) 2004 American Institute of Physics

  15. Resonance Extraction from the Finite Volume

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Doring, Michael; Molina Peralta, Raquel

    2016-06-01

    The spectrum of excited hadrons becomes accessible in simulations of Quantum Chromodynamics on the lattice. Extensions of Lüscher's method allow multi-channel scattering problems to be addressed using moving frames or modified boundary conditions to obtain more eigenvalues in finite volume. As these lie at different energies, interpolations are needed to relate different eigenvalues and to help determine the amplitude. Expanding the T- or the K-matrix locally provides a controlled scheme by removing the known non-analyticities of thresholds. This can be stabilized by using Chiral Perturbation Theory. Different examples of determining resonance pole parameters and of disentangling resonances from thresholds are discussed, such as the scalar meson f0(980) and the excited baryons N(1535)1/2^- and Lambda(1405)1/2^-.

  16. The extraction of spot signal in Shack-Hartmann wavefront sensor based on sparse representation

    NASA Astrophysics Data System (ADS)

    Zhang, Yanyan; Xu, Wentao; Chen, Suting; Ge, Junxiang; Wan, Fayu

    2016-07-01

    Several techniques have been used with Shack-Hartmann wavefront sensors to determine the local wavefront gradient across each lenslet. However, the centroid error of a Shack-Hartmann wavefront sensor is relatively large, owing to the skylight background and detector noise. In this paper, we introduce a new method based on sparse representation to extract the target signal from the background and the noise. First, an overcomplete dictionary of the spot signal is constructed based on a two-dimensional Gaussian model. The Shack-Hartmann image is then divided into sub-blocks, and the coefficients of each block are computed in the overcomplete dictionary. Since the coefficients of the noise and the target differ greatly, the target is extracted by applying a threshold to the coefficients. Experimental results show that the target is well extracted and that the deviation, RMS and PV of the centroid are all smaller than those obtained with the threshold-subtraction method.
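
    A hedged sketch of the extraction step: build an overcomplete dictionary of 2-D Gaussian spot atoms, sparse-code an image block against it, and zero small coefficients before reconstructing. The atom grid, sigma, sparsity level and coefficient threshold are all assumptions, not the paper's choices.

      import numpy as np
      from sklearn.linear_model import OrthogonalMatchingPursuit

      def gaussian_atom(shape, cy, cx, sigma=1.5):
          y, x = np.mgrid[:shape[0], :shape[1]]
          g = np.exp(-((y - cy)**2 + (x - cx)**2) / (2.0 * sigma**2))
          return (g / np.linalg.norm(g)).ravel()

      def extract_spot(block, n_atoms=5, coeff_frac=0.2):
          h, w = block.shape
          D = np.stack([gaussian_atom((h, w), cy, cx)
                        for cy in range(h) for cx in range(w)], axis=1)
          omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_atoms)
          omp.fit(D, block.ravel().astype(float))
          coef = omp.coef_.copy()
          coef[np.abs(coef) < coeff_frac * np.abs(coef).max()] = 0.0  # drop noise terms
          return (D @ coef).reshape(h, w)  # reconstructed spot signal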

  17. Gas ultrasonic flow rate measurement through genetic-ant colony optimization based on the ultrasonic pulse received signal model

    NASA Astrophysics Data System (ADS)

    Hou, Huirang; Zheng, Dandan; Nie, Laixiao

    2015-04-01

    For gas ultrasonic flowmeters, the signals received by the ultrasonic sensors are susceptible to noise interference. If the signals are mingled with noise, a large flow-measurement error can be caused by mistaken triggering in the traditional double-threshold method. To solve this problem, genetic-ant colony optimization (GACO) based on the ultrasonic pulse received-signal model is proposed. Furthermore, in consideration of the real-time performance of the flow measurement system, processing only the first three cycles of the received signal rather than the whole signal is proposed. Simulation results show that the GACO algorithm has the best estimation accuracy and anti-noise ability compared with the genetic algorithm, ant colony optimization, the double-threshold method and enveloped zero-crossing. Local convergence does not appear with the GACO algorithm until -10 dB. For the GACO algorithm, the convergence accuracy, convergence speed and amount of computation are further improved when using only the first three cycles (GACO-3cycles). Experimental results involving actual received signals show that the accuracy of gas ultrasonic flow rate measurement can reach 0.5% with GACO-3cycles, which is better than with the double-threshold method.

  18. Voxel classification based airway tree segmentation

    NASA Astrophysics Data System (ADS)

    Lo, Pechin; de Bruijne, Marleen

    2008-03-01

    This paper presents a voxel classification based method for segmenting the human airway tree in volumetric computed tomography (CT) images. In contrast to standard methods that use only voxel intensities, our method uses a more complex appearance model based on a set of local image appearance features and K-nearest-neighbor (KNN) classification. The optimal set of features for classification is selected automatically from a large set of features describing the local image structure at several scales. The use of multiple features enables the appearance model to differentiate between airway tree voxels and other voxels of similar intensity in the lung, thus making the segmentation robust to pathologies such as emphysema. The classifier is trained on imperfect segmentations that can easily be obtained using region growing with manual threshold selection. Experiments show that the proposed method results in a more robust segmentation that can grow into the smaller airway branches without leaking into emphysematous areas, and is able to segment many branches that are not present in the training set.
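
    The classification step might look as follows, with multi-scale Gaussian features as an assumed stand-in for the paper's automatically selected feature set:

      import numpy as np
      from scipy.ndimage import gaussian_filter, gaussian_gradient_magnitude
      from sklearn.neighbors import KNeighborsClassifier

      def voxel_features(vol, scales=(1, 2, 4)):
          feats = [vol.ravel().astype(float)]
          for s in scales:
              feats.append(gaussian_filter(vol.astype(float), s).ravel())
              feats.append(gaussian_gradient_magnitude(vol.astype(float), s).ravel())
          return np.stack(feats, axis=1)   # one feature row per voxel

      def train_knn(vol, rough_labels, k=15):
          # rough_labels: imperfect airway/background segmentation used for training
          clf = KNeighborsClassifier(n_neighbors=k)
          clf.fit(voxel_features(vol), rough_labels.ravel())
          return clf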

  19. Hot-spot selection and evaluation methods for whole slice images of meningiomas and oligodendrogliomas.

    PubMed

    Swiderska, Zaneta; Markiewicz, Tomasz; Grala, Bartlomiej; Slodkowska, Janina

    2015-01-01

    The paper presents a combined method for automatic hot-spot area selection in whole slide images, based on a penalty factor, to support the pathomorphological diagnostic procedure. The studied slides represent meningioma and oligodendroglioma tumors stained with the Ki-67/MIB-1 immunohistochemical reaction. This reaction allows the tumor proliferation index to be determined and gives an indication for medical treatment and prognosis. The combined method, based on mathematical morphology, thresholding, texture analysis and classification, is proposed and verified. The presented algorithm includes building a specimen map, eliminating hemorrhages from it, and two methods for the detection of hot-spot fields with respect to the introduced penalty factor. Furthermore, we propose a localization concordance measure to evaluate the hot-spot localization produced by the algorithms with respect to the expert's results. The influence of the penalty factor is presented and discussed; it was found that the best results are obtained for a value of 0.2. These results confirm the effectiveness of the applied approach.

  20. A revision of the gamma-evaluation concept for the comparison of dose distributions.

    PubMed

    Bakai, Annemarie; Alber, Markus; Nüsslin, Fridtjof

    2003-11-07

    A method for the quantitative four-dimensional (4D) evaluation of discrete dose data based on gradient-dependent local acceptance thresholds is presented. The method takes into account the local dose gradients of a reference distribution for a critical appraisal of misalignment and collimation errors. These contribute to the maximum tolerable dose error at each evaluation point, against which the local dose differences between comparison and reference data are compared. As shown, the presented concept is analogous to the gamma-concept of Low et al (1998a Med. Phys. 25 656-61) when extended to (3+1) dimensions. The pointwise dose comparisons of the reformulated concept are easier to perform and speed up the evaluation process considerably, especially for fine-grid evaluations of 3D dose distributions. The occurrence of false negative indications due to the discrete nature of the data is reduced with the method. The presented method was applied to film-measured clinical data and compared with gamma-evaluations; 4D and 3D evaluations were performed. The comparisons show that 4D evaluations should be given priority, especially when complex treatment situations, e.g., non-coplanar beam configurations, are verified.
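
    The core of the gradient-dependent test can be sketched pointwise: the tolerable dose error at each point combines the global dose tolerance with the local reference gradient times the distance tolerance, so steep regions forgive spatial misalignment. A 1-D sketch for brevity:

      import numpy as np

      def local_tolerance_pass(ref, comp, dx, dose_tol, dist_tol):
          grad = np.gradient(ref, dx)                        # local reference dose gradient
          tol = np.sqrt(dose_tol**2 + (dist_tol * grad)**2)  # per-point acceptance threshold
          return np.abs(comp - ref) <= tol                   # pointwise accept/reject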

  1. Rcorrector: efficient and accurate error correction for Illumina RNA-seq reads.

    PubMed

    Song, Li; Florea, Liliana

    2015-01-01

    Next-generation sequencing of cellular RNA (RNA-seq) is rapidly becoming the cornerstone of transcriptomic analysis. However, sequencing errors in the already short RNA-seq reads complicate bioinformatics analyses, in particular alignment and assembly. Error correction methods have been highly effective for whole-genome sequencing (WGS) reads, but are unsuitable for RNA-seq reads, owing to the variation in gene expression levels and alternative splicing. We developed a k-mer based method, Rcorrector, to correct random sequencing errors in Illumina RNA-seq reads. Rcorrector uses a De Bruijn graph to compactly represent all trusted k-mers in the input reads. Unlike WGS read correctors, which use a global threshold to determine trusted k-mers, Rcorrector computes a local threshold at every position in a read. Rcorrector has an accuracy higher than or comparable to existing methods, including the only other method (SEECER) designed for RNA-seq reads, and is more time and memory efficient. With a 5 GB memory footprint for 100 million reads, it can be run on virtually any desktop or server. The software is available free of charge under the GNU General Public License from https://github.com/mourisl/Rcorrector/.

  2. A Comparative Study of the Applied Methods for Estimating Deflection of the Vertical in Terrestrial Geodetic Measurements

    PubMed Central

    Vittuari, Luca; Tini, Maria Alessandra; Sarti, Pierguido; Serantoni, Eugenio; Borghi, Alessandra; Negusini, Monia; Guillaume, Sébastien

    2016-01-01

    This paper compares three different methods capable of estimating the deflection of the vertical (DoV): the first is based on the joint use of high-precision spirit leveling and Global Navigation Satellite Systems (GNSS), the second uses astro-geodetic measurements, and the third uses gravimetric geoid models. The working data sets refer to the geodetic International Terrestrial Reference Frame (ITRF) co-location sites of Medicina (Northern Italy) and Noto (Sicily), these sites being excellent test beds for our investigations. The measurements were planned and realized to estimate the DoV with a level of precision comparable to the angular accuracy achievable in high-precision networks measured by modern high-end total stations. The three methods are in excellent agreement, with an operational supremacy of the astro-geodetic method, being faster and more precise than the others. The method that combines leveling and GNSS has slightly larger standard deviations, although well within the 1 arcsec level that was assumed as the threshold. Finally, the geoid-model-based method, whose 2.5 arcsec standard deviations exceed this threshold, is nonetheless statistically consistent with the others and should be used to determine the DoV components where local ad hoc measurements are lacking. PMID:27104544

  3. Information Transmission and Anderson Localization in two-dimensional networks of firing-rate neurons

    NASA Astrophysics Data System (ADS)

    Natale, Joseph; Hentschel, George

    Firing-rate networks offer a coarse model of signal propagation in the brain. Here we analyze sparse, 2D planar firing-rate networks with no synapses beyond a certain cutoff distance. Additionally, we impose Dale's Principle to ensure that each neuron makes only excitatory or only inhibitory outgoing connections. Using spectral methods, we find that the number of neurons participating in excitations of the network becomes insignificant whenever the connectivity cutoff is tuned to a value near or below the average interneuron separation. Further, neural activations exceeding a certain threshold stay confined to a small region of space. This behavior is an instance of Anderson localization, a disorder-induced phase transition by which an information channel is rendered unable to transmit signals. We discuss several potential implications of localization for both local and long-range computation in the brain. This work was supported in part by Grants JSMF/220020321 and NSF/IOS/1208126.
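
    The spectral probe described above can be sketched as follows: build a distance-cutoff network obeying Dale's Principle and measure how many neurons participate in each eigenmode via the participation ratio. Network size, cutoff and the excitatory fraction are hypothetical choices for illustration only.

      import numpy as np

      rng = np.random.default_rng(2)
      n = 300
      pos = rng.uniform(0.0, 1.0, size=(n, 2))     # neurons scattered on a plane
      cutoff = 1.5 / np.sqrt(n)                    # near the mean separation

      dist = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)
      W = (dist < cutoff).astype(float)
      np.fill_diagonal(W, 0.0)
      # Dale's Principle: all outgoing weights of a neuron share one sign.
      sign = np.where(rng.random(n) < 0.8, 1.0, -1.0)   # 80% excitatory (assumed)
      W = W * sign[None, :]                        # column j holds neuron j's outputs

      # Participation ratio of each eigenvector: ~n for extended modes,
      # O(1) for localized ones.
      _, vecs = np.linalg.eig(W)
      prob = np.abs(vecs)**2
      prob = prob / prob.sum(axis=0, keepdims=True)
      pr = 1.0 / np.sum(prob**2, axis=0)
      print(f"median participation ratio: {np.median(pr):.1f} of {n} neurons")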

  4. Population exposure to trace elements in the Kilembe copper mine area, Western Uganda: A pilot study.

    PubMed

    Mwesigye, Abraham R; Young, Scott D; Bailey, Elizabeth H; Tumwebaze, Susan B

    2016-12-15

    The mining and processing of copper in Kilembe, Western Uganda, from 1956 to 1982 left over 15 Mt of tailings containing cupriferous and cobaltiferous pyrite dumped within a mountain river valley. This pilot study was conducted to assess the nature and extent of risk to local populations from metal contamination arising from those mining activities. We determined trace element concentrations in mine tailings, soils, locally cultivated foods, house dust, drinking water and human biomarkers (toenails) using ICP-MS analysis of acid digested samples. The results showed that tailings, containing higher concentrations of Co, Cu, Ni and As compared with world average crust values had eroded and contaminated local soils. Pollution load indices revealed that 51% of agricultural soils sampled were contaminated with trace elements. Local water supplies were contaminated, with Co concentrations that exceeded Wisconsin (US) thresholds in 25% of domestic water supplies and 40% of Nyamwamba river water samples. Zinc exceeded WHO/FAO thresholds of 99.4 mg kg⁻¹ in 36% of Amaranthus vegetable samples, Cu exceeded EC thresholds of 20 mg kg⁻¹ in 19% of Amaranthus while Pb exceeded WHO thresholds of 0.3 mg kg⁻¹ in 47% of Amaranthus vegetables. In bananas, 20% of samples contained Pb concentrations that exceeded the WHO/FAO recommended threshold of 0.3 mg kg⁻¹. However, risk assessment of local foods and water, based on hazard quotients (HQ values) revealed no potential health effects. The high external contamination of volunteers' toenails with some elements (even after a washing process) calls into question their use as a biomarker for metal exposure in human populations where feet are frequently exposed to soil dust. Any mitigation of Kilembe mine impacts should be aimed at remediation of agricultural soils, regulating the discharge of underground contaminated water but also containment of tailing erosion. Copyright © 2016 Elsevier B.V. All rights reserved.

  5. Methods of Muscle Activation Onset Timing Recorded During Spinal Manipulation.

    PubMed

    Currie, Stuart J; Myers, Casey A; Krishnamurthy, Ashok; Enebo, Brian A; Davidson, Bradley S

    2016-05-01

    The purpose of this study was to determine electromyographic threshold parameters that most reliably characterize the muscular response to spinal manipulation and to compare 2 methods that detect muscle activity onset delay: the double-threshold method and the cross-correlation method. Surface and indwelling electromyography were recorded during lumbar side-lying manipulations in 17 asymptomatic participants. Muscle activity onset delays in relation to the thrusting force were compared across methods and muscles using a generalized linear model. The threshold combinations that resulted in the lowest Detection Failures were the "8 SD-0 milliseconds" threshold (Detection Failures = 8) and the "8 SD-10 milliseconds" threshold (Detection Failures = 9). The average muscle activity onset delay for the double-threshold method across all participants was 149 ± 152 milliseconds for the multifidus and 252 ± 204 milliseconds for the erector spinae. The average onset delay for the cross-correlation method was 26 ± 101 milliseconds for the multifidus and 67 ± 116 milliseconds for the erector spinae. There were no statistical interactions, and a main effect of method demonstrated that the delays were higher when using the double-threshold method compared with cross-correlation. The threshold parameters that best characterized activity onset delays were an 8-SD amplitude and a 10-millisecond duration threshold. The double-threshold method correlated well with visual supervision of muscle activity. The cross-correlation method provides several advantages in signal processing; however, supervision was required for some results, negating this advantage. These results help standardize methods when recording neuromuscular responses to spinal manipulation and improve comparisons within and across investigations. Copyright © 2016 National University of Health Sciences. Published by Elsevier Inc. All rights reserved.
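
    A minimal sketch of the double-threshold detector with the parameters the study found best (8 SD amplitude, 10 ms duration); the baseline window length and the synthetic burst are illustrative assumptions.

      import numpy as np

      def onset_double_threshold(emg, fs, baseline_s=0.5, k_sd=8.0, min_dur_s=0.010):
          # Onset = first sample where the rectified signal stays above
          # (baseline mean + k_sd * SD) for at least min_dur_s seconds.
          rect = np.abs(emg)
          base = rect[: int(baseline_s * fs)]
          thresh = base.mean() + k_sd * base.std()
          need = max(1, int(min_dur_s * fs))
          run = 0
          for i, above in enumerate(rect > thresh):
              run = run + 1 if above else 0
              if run == need:
                  return (i - need + 1) / fs     # onset time in seconds
          return None

      fs = 2000.0
      t = np.arange(0.0, 2.0, 1.0 / fs)
      rng = np.random.default_rng(3)
      emg = 0.01 * rng.standard_normal(t.size)
      emg[t >= 1.2] += 0.5 * rng.standard_normal(np.sum(t >= 1.2))  # burst at 1.2 s
      print(f"detected onset: {onset_double_threshold(emg, fs):.3f} s")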

  6. Bayesian methods for estimating GEBVs of threshold traits

    PubMed Central

    Wang, C-L; Ding, X-D; Wang, J-Y; Liu, J-F; Fu, W-X; Zhang, Z; Yin, Z-J; Zhang, Q

    2013-01-01

    Estimation of genomic breeding values is the key step in genomic selection (GS). Many methods have been proposed for continuous traits, but methods for threshold traits are still scarce. Here we introduced the threshold model into the framework of GS; specifically, we extended the three Bayesian methods BayesA, BayesB and BayesCπ on the basis of the threshold model for estimating genomic breeding values of threshold traits, and the extended methods are correspondingly termed BayesTA, BayesTB and BayesTCπ. Computing procedures for the three BayesT methods using the Markov chain Monte Carlo algorithm were derived. A simulation study was performed to investigate the benefit of the presented methods in the accuracy of the genomic estimated breeding values (GEBVs) for threshold traits. Factors affecting the performance of the three BayesT methods were addressed. As expected, the three BayesT methods generally performed better than the corresponding normal Bayesian methods, in particular when the number of phenotypic categories was small. In the standard scenario (number of categories = 2, incidence = 30%, number of quantitative trait loci = 50, h2 = 0.3), the accuracies were improved by 30.4, 2.4, and 5.7 percentage points, respectively. In most scenarios, BayesTB and BayesTCπ generated similar accuracies and both performed better than BayesTA. In conclusion, our work proved that the threshold model fits well for predicting GEBVs of threshold traits, and BayesTCπ is supposed to be the method of choice for GS of threshold traits. PMID:23149458

  7. Usage of fMRI for pre-surgical planning in brain tumor and vascular lesion patients: Task and statistical threshold effects on language lateralization

    PubMed Central

    Nadkarni, Tanvi N.; Andreoli, Matthew J.; Nair, Veena A.; Yin, Peng; Young, Brittany M.; Kundu, Bornali; Pankratz, Joshua; Radtke, Andrew; Holdsworth, Ryan; Kuo, John S.; Field, Aaron S.; Baskaya, Mustafa K.; Moritz, Chad H.; Meyerand, M. Elizabeth; Prabhakaran, Vivek

    2014-01-01

    Background and purpose Functional magnetic resonance imaging (fMRI) is a non-invasive pre-surgical tool used to assess localization and lateralization of language function in brain tumor and vascular lesion patients in order to guide neurosurgeons as they devise a surgical approach to treat these lesions. We investigated the effect of varying the statistical thresholds as well as the type of language task on functional activation patterns and language lateralization. We hypothesized that language lateralization indices (LIs) would be threshold- and task-dependent. Materials and methods Imaging data were collected from brain tumor patients (n = 67, average age 48 years) and vascular lesion patients (n = 25, average age 43 years) who received pre-operative fMRI scanning. Both patient groups performed expressive (antonym and/or letter-word generation) and receptive (tumor patients performed text-reading; vascular lesion patients performed text-listening) language tasks. A control group (n = 25, average age 45 years) performed the letter-word generation task. Results Brain tumor patients showed left-lateralization during the antonym-word generation and text-reading tasks at high threshold values and bilateral activation during the letter-word generation task, irrespective of the threshold values. Vascular lesion patients showed left-lateralization during the antonym and letter-word generation, and text-listening tasks at high threshold values. Conclusion Our results suggest that the type of task and the applied statistical threshold influence LI and that the threshold effects on LI may be task-specific. Thus the identification of critical functional regions and the computation of LIs should be conducted on an individual-subject basis, using a continuum of threshold values with different tasks, to provide the most accurate information for surgical planning and minimize post-operative language deficits. PMID:25685705
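
    The lateralization index itself is simple to compute; the sketch below evaluates LI = (L - R)/(L + R) over a continuum of statistical thresholds, as the conclusion recommends. The toy activation map and threshold range are assumptions for illustration.

      import numpy as np

      def lateralization_index(tmap, left_mask, right_mask, thresholds):
          # LI = (L - R) / (L + R) over suprathreshold voxel counts.
          out = []
          for th in thresholds:
              L = np.count_nonzero(tmap[left_mask] > th)
              R = np.count_nonzero(tmap[right_mask] > th)
              out.append((L - R) / (L + R) if (L + R) > 0 else np.nan)
          return np.array(out)

      rng = np.random.default_rng(4)
      tmap = rng.standard_normal((64, 64, 32))   # toy statistical map
      tmap[:32] += 0.5                           # left-hemisphere bias (assumed)
      left = np.zeros(tmap.shape, dtype=bool)
      left[:32] = True
      right = ~left

      ths = np.linspace(1.0, 4.0, 4)
      for th, li in zip(ths, lateralization_index(tmap, left, right, ths)):
          print(f"t > {th:.1f}: LI = {li:+.2f}")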

  8. SU-F-J-32: Do We Need KV Imaging During CBCT Based Patient Set-Up for Lung Radiation Therapy?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gopal, A; Zhou, J; Prado, K

    Purpose: To evaluate the role of 2D kilovoltage (kV) imaging to complement cone beam CT (CBCT) imaging in a shift-threshold-based image guided radiation therapy (IGRT) strategy for conventional lung radiotherapy. Methods: A retrospective study was conducted by analyzing IGRT couch shift trends for 15 patients that received lung radiation therapy, to evaluate the benefit of performing orthogonal kV imaging prior to CBCT imaging. Herein, a shift-threshold-based IGRT protocol was applied, which would mandate additional CBCT verification if the applied patient shifts exceeded 3 mm, to avoid intraobserver variability in CBCT registration and to confirm table shifts. For each patient, two IGRT strategies, kV + CBCT and CBCT alone, were compared, and the recorded patient shifts were categorized according to whether additional CBCT acquisition would have been mandated or not. The effectiveness of either strategy was gauged by the likelihood of needing additional CBCT imaging for accurate patient set-up. Results: The use of CBCT alone was 6 times more likely to require an additional CBCT than kV + CBCT, for a 3 mm shift threshold (88% vs 14%). The likelihood of additional CBCT verification generally increased with lower shift thresholds, and was significantly lower when kV + CBCT was used (7% with a 5 mm shift threshold, 36% with a 2 mm threshold) than with CBCT alone (61% with a 5 mm shift threshold, 97% with a 2 mm threshold). With CBCT alone, treatment time increased by 2.2 min and dose increased by 1.9 cGy per fraction on average, due to the additional CBCT with a 3 mm shift threshold. Conclusion: The benefit of kV imaging in screening for gross misalignments led to more accurate CBCT-based patient localization compared with using CBCT alone. The subsequently reduced need for additional CBCT verification will minimize treatment time and result in less overall patient imaging dose.

  9. Ultrasound-triggered drug delivery using acoustic droplet vaporization

    NASA Astrophysics Data System (ADS)

    Fabiilli, Mario Leonardo

    The goal of targeted drug delivery is the spatial and temporal localization of a therapeutic agent and its associated bioeffects. One method of drug localization is acoustic droplet vaporization (ADV), whereby drug-laden perfluorocarbon (PFC) emulsions are vaporized into gas bubbles using ultrasound, thereby releasing the drug locally. Transpulmonary droplets are converted into bubbles that occlude capillaries, sequestering the released drug within an organ or tumor. This research investigates the relationship between the ADV and inertial cavitation (IC) thresholds, which are relevant for drug delivery because of the bioeffects generated by IC, and explores the delivery of lipophilic and hydrophilic compounds using PFC double emulsions. IC can positively and negatively affect ultrasound-mediated drug delivery. The ADV and IC thresholds were determined for various bulk fluid, droplet, and acoustic parameters. At 3.5 MHz, the ADV threshold occurred at a lower rarefactional pressure than the IC threshold. The results suggest that ADV is a phenomenon distinct from IC, that the ADV nucleus is internal to the droplet, and that the IC nucleus is the bubble generated by ADV. The ADV-triggered release of a lipophilic chemotherapeutic agent, chlorambucil (CHL), from a PFC-in-oil-in-water emulsion was explored using plated cells. Cells exposed to a CHL-loaded emulsion without ADV displayed 44% less growth inhibition than cells exposed to an equal concentration of CHL in solution. Upon ADV of the CHL-loaded emulsion, the growth inhibition increased to the same level as in cells exposed to CHL in solution. A triblock copolymer was synthesized which enabled the formulation of stable water-in-PFC-in-water (W1/PFC/W2) emulsions. The encapsulation of fluorescein in the W1 phase significantly decreased the mass flux of fluorescein; ADV was shown to completely release the fluorescein from the emulsions. ADV was also shown to release thrombin dissolved in the W1 phase, which could be used in vivo to synergistically extend the duration of ADV-generated, microbubble-based embolizations. Overall, the results suggest that PFC double emulsions can be used as an ultrasound-triggered drug delivery system. Compared to traditional drug delivery systems, ADV could be used to increase the therapeutic efficacy and decrease the systemic toxicity of drug therapy.

  10. An Application of Reassigned Time-Frequency Representations for Seismic Noise/Signal Decomposition

    NASA Astrophysics Data System (ADS)

    Mousavi, S. M.; Langston, C. A.

    2016-12-01

    Seismic data recorded by surface arrays are often strongly contaminated by unwanted noise. This background noise makes the detection of small-magnitude events difficult. An automatic method for seismic noise/signal decomposition is presented, based upon an enhanced time-frequency representation. Synchrosqueezing is a time-frequency reassignment method aimed at sharpening the time-frequency picture. Noise can be distinguished from the signal and suppressed more easily in this reassigned domain. The threshold level is estimated using a general cross-validation approach that does not rely on any prior knowledge about the noise level. The efficiency of thresholding has been improved by adding a pre-processing step based on higher-order statistics and a post-processing step based on adaptive hard-thresholding. In doing so, both the accuracy and the speed of the denoising have been improved compared to our previous algorithms (Mousavi and Langston, 2016a, 2016b; Mousavi et al., 2016). The proposed algorithm can either suppress the noise (white or colored) and keep the signal, or suppress the signal and keep the noise. Hence, it can be used in normal denoising applications as well as in ambient-noise studies. Application of the proposed method to synthetic and real seismic data shows its effectiveness for denoising/designaling of local microseismic and ocean-bottom seismic data. References: Mousavi, S.M., C. A. Langston, and S. P. Horton (2016), Automatic Microseismic Denoising and Onset Detection Using the Synchrosqueezed-Continuous Wavelet Transform, Geophysics, 81, V341-V355, doi: 10.1190/GEO2015-0598.1. Mousavi, S.M., and C. A. Langston (2016a), Hybrid Seismic Denoising Using Higher-Order Statistics and Improved Wavelet Block Thresholding, Bull. Seismol. Soc. Am., 106, doi: 10.1785/0120150345. Mousavi, S.M., and C.A. Langston (2016b), Adaptive noise estimation and suppression for improving microseismic event detection, Journal of Applied Geophysics, doi: 10.1016/j.jappgeo.2016.06.008.

  11. Recognition of speech in noise after application of time-frequency masks: Dependence on frequency and threshold parameters

    PubMed Central

    Sinex, Donal G.

    2013-01-01

    Binary time-frequency (TF) masks can be applied to separate speech from noise. Previous studies have shown that with appropriate parameters, ideal TF masks can extract highly intelligible speech even at very low speech-to-noise ratios (SNRs). Two psychophysical experiments provided additional information about the dependence of intelligibility on the frequency resolution and threshold criteria that define the ideal TF mask. Listeners identified AzBio Sentences in noise, before and after application of TF masks. Masks generated with 8 or 16 frequency bands per octave supported nearly-perfect identification. Word recognition accuracy was slightly lower and more variable with 4 bands per octave. When TF masks were generated with a local threshold criterion of 0 dB SNR, the mean speech reception threshold was −9.5 dB SNR, compared to −5.7 dB for unprocessed sentences in noise. Speech reception thresholds decreased by about 1 dB per dB of additional decrease in the local threshold criterion. Information reported here about the dependence of speech intelligibility on frequency and level parameters has relevance for the development of non-ideal TF masks for clinical applications such as speech processing for hearing aids. PMID:23556604
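
    A minimal sketch of an ideal binary TF mask: keep the time-frequency cells whose local SNR exceeds a criterion (0 dB here, one of the threshold values studied above). The STFT stands in generically for the study's filterbank, and the toy signals are assumptions.

      import numpy as np
      from scipy.signal import stft, istft

      fs = 16000
      t = np.arange(0, 1.0, 1.0 / fs)
      speech = np.sin(2 * np.pi * 440 * t) * (t % 0.25 < 0.15)   # toy "speech"
      rng = np.random.default_rng(5)
      noise = 0.5 * rng.standard_normal(t.size)

      f, frames, S = stft(speech, fs, nperseg=512)
      _, _, N = stft(noise, fs, nperseg=512)
      _, _, X = stft(speech + noise, fs, nperseg=512)

      # Ideal mask: keep a TF cell if its local SNR exceeds 0 dB.
      local_snr_db = 20.0 * (np.log10(np.abs(S) + 1e-12)
                             - np.log10(np.abs(N) + 1e-12))
      mask = local_snr_db > 0.0
      _, cleaned = istft(X * mask, fs, nperseg=512)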

  12. Proposal on Calculation of Ventilation Threshold Using Non-contact Respiration Measurement with Pattern Light Projection

    NASA Astrophysics Data System (ADS)

    Aoki, Hirooki; Ichimura, Shiro; Fujiwara, Toyoki; Kiyooka, Satoru; Koshiji, Kohji; Tsuzuki, Keishi; Nakamura, Hidetoshi; Fujimoto, Hideo

    We propose a method for calculating the ventilation threshold using non-contact respiration measurement with dot-matrix pattern light projection during pedaling exercise. The validity and effectiveness of the proposed method are examined by simultaneous measurement with an expiration gas analyzer. The experimental results showed a correlation between the quasi-ventilation thresholds calculated by the proposed method and the ventilation thresholds obtained from the expiration gas analyzer. This result indicates that non-contact measurement of the ventilation threshold is possible with the proposed method.

  13. Optimal thresholds for the estimation of area rain-rate moments by the threshold method

    NASA Technical Reports Server (NTRS)

    Short, David A.; Shimizu, Kunio; Kedem, Benjamin

    1993-01-01

    Optimization of the threshold method, achieved by determination of the threshold that maximizes the correlation between an area-average rain-rate moment and the area coverage of rain rates exceeding the threshold, is demonstrated empirically and theoretically. Empirical results for a sequence of GATE radar snapshots show optimal thresholds of 5 and 27 mm/h for the first and second moments, respectively. Theoretical optimization of the threshold method by the maximum-likelihood approach of Kedem and Pavlopoulos (1991) predicts optimal thresholds near 5 and 26 mm/h for lognormally distributed rain rates with GATE-like parameters. The agreement between theory and observations suggests that the optimal threshold can be understood as arising due to sampling variations, from snapshot to snapshot, of a parent rain-rate distribution. Optimal thresholds for gamma and inverse Gaussian distributions are also derived and compared.
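
    The empirical optimization is straightforward to reproduce in outline: scan candidate thresholds and keep the one that maximizes the correlation between the area-average rain-rate moment and the fractional coverage above the threshold. The lognormal synthetic snapshots below are stand-ins for the GATE radar data.

      import numpy as np

      rng = np.random.default_rng(6)
      # 200 synthetic "radar snapshots" of lognormal rain rates (mm/h).
      snaps = np.exp(rng.normal(0.0, 1.0, size=(200, 50, 50)))

      def optimal_threshold(snaps, moment, thresholds=np.linspace(0.5, 30.0, 60)):
          m = (snaps**moment).mean(axis=(1, 2))   # area-average moment per snapshot
          cover = lambda th: (snaps > th).mean(axis=(1, 2))  # fractional coverage
          return max(thresholds, key=lambda th: np.corrcoef(m, cover(th))[0, 1])

      print("optimal threshold, 1st moment:", optimal_threshold(snaps, 1))
      print("optimal threshold, 2nd moment:", optimal_threshold(snaps, 2))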

  14. Thermal detection thresholds in 5-year-old preterm born children; IQ does matter.

    PubMed

    de Graaf, Joke; Valkenburg, Abraham J; Tibboel, Dick; van Dijk, Monique

    2012-07-01

    Experiencing pain as a newborn may have consequences for one's somatosensory perception later in life. Children's perception of cold and warm stimuli may be determined with the Thermal Sensory Analyzer (TSA) device by two different methods. This pilot study in 5-year-old children born preterm aimed at establishing whether the TSA method of limits, which is dependent on reaction time, and the method of levels, which is independent of reaction time, would yield different cold and warm detection thresholds. The second aim was to establish possible associations between intellectual ability and the detection thresholds obtained with either method. A convenience sample was drawn from the participants in an ongoing 5-year follow-up study of a randomized controlled trial on the effects of morphine during mechanical ventilation. Thresholds were assessed using both methods and statistically compared. Possible associations between the child's intelligence quotient (IQ) and threshold levels were analyzed. The method of levels yielded more sensitive thresholds than did the method of limits, i.e. mean (SD) cold detection thresholds of 30.3 (1.4) versus 28.4 (1.7) (Cohen's d = 1.2, P = 0.001) and warm detection thresholds of 33.9 (1.9) versus 35.6 (2.1) (Cohen's d = 0.8, P = 0.04). IQ was statistically significantly associated only with the detection thresholds obtained with the method of limits (cold: r = 0.64, warm: r = -0.52). The TSA method of levels is to be preferred over the method of limits in 5-year-old preterm born children, as it establishes more sensitive detection thresholds and is independent of IQ. Copyright © 2011 Elsevier Ltd. All rights reserved.

  15. Humans and seasonal climate variability threaten large-bodied coral reef fish with small ranges

    PubMed Central

    Mellin, C.; Mouillot, D.; Kulbicki, M.; McClanahan, T. R.; Vigliola, L.; Bradshaw, C. J. A.; Brainard, R. E.; Chabanet, P.; Edgar, G. J.; Fordham, D. A.; Friedlander, A. M.; Parravicini, V.; Sequeira, A. M. M.; Stuart-Smith, R. D.; Wantiez, L.; Caley, M. J.

    2016-01-01

    Coral reefs are among the most species-rich and threatened ecosystems on Earth, yet the extent to which human stressors determine species occurrences, compared with biogeography or environmental conditions, remains largely unknown. With ever-increasing human-mediated disturbances on these ecosystems, an important question is not only how many species can inhabit local communities, but also which biological traits determine species that can persist (or not) above particular disturbance thresholds. Here we show that human pressure and seasonal climate variability are disproportionately and negatively associated with the occurrence of large-bodied and geographically small-ranging fishes within local coral reef communities. These species are 67% less likely to occur where human impact and temperature seasonality exceed critical thresholds, such as in the marine biodiversity hotspot: the Coral Triangle. Our results identify the most sensitive species and critical thresholds of human and climatic stressors, providing opportunity for targeted conservation intervention to prevent local extinctions. PMID:26839155

  16. Chronic Widespread Back Pain is Distinct From Chronic Local Back Pain: Evidence From Quantitative Sensory Testing, Pain Drawings, and Psychometrics.

    PubMed

    Gerhardt, Andreas; Eich, Wolfgang; Janke, Susanne; Leisner, Sabine; Treede, Rolf-Detlef; Tesarz, Jonas

    2016-07-01

    Whether chronic localized pain (CLP) and chronic widespread pain (CWP) have different mechanisms, or to what extent they overlap in their pathophysiology, is controversial. The study compared quantitative sensory testing profiles of nonspecific chronic back pain patients with CLP (n=48) and CWP (n=29) with those of fibromyalgia syndrome (FMS) patients (n=90) and pain-free controls (n=40). The quantitative sensory testing protocol of the German Research Network on Neuropathic Pain was used to measure evoked pain on the painful area in the lower back and on the pain-free hand (thermal and mechanical detection and pain thresholds, vibration threshold, and pain sensitivity to sharp and blunt mechanical stimuli). Ongoing pain and psychometrics were captured with pain drawings and questionnaires. CLP patients did not differ from pain-free controls, except for a lower pressure pain threshold (PPT) on the back. CWP and FMS patients showed a lower heat pain threshold and a higher wind-up ratio on the back, and lower heat and cold pain thresholds on the hand. FMS patients showed lower PPT on the back and hand, higher comorbidity of anxiety and depression, and more functional impairment than all other groups. Even after long duration, CLP presents with a local hypersensitivity for PPT, suggesting a somatotopically specific sensitization of nociceptive processing. However, CWP patients show widespread ongoing pain and hyperalgesia to different stimuli that is generalized in space, suggesting the involvement of descending control systems, as also suggested for FMS patients. Because the mechanisms in nonspecific chronic back pain with CLP and CWP differ, these patients should be distinguished in future research and allocated to different treatments.

  17. An efficient and near linear scaling pair natural orbital based local coupled cluster method.

    PubMed

    Riplinger, Christoph; Neese, Frank

    2013-01-21

    In previous publications, it was shown that an efficient local coupled cluster method with single- and double excitations can be based on the concept of pair natural orbitals (PNOs) [F. Neese, A. Hansen, and D. G. Liakos, J. Chem. Phys. 131, 064103 (2009)]. The resulting local pair natural orbital-coupled-cluster single double (LPNO-CCSD) method has since been proven to be highly reliable and efficient. For large molecules, the number of amplitudes to be determined is reduced by a factor of 10^5-10^6 relative to a canonical CCSD calculation on the same system with the same basis set. In the original method, the PNOs were expanded in the set of canonical virtual orbitals and single excitations were not truncated. This led to a number of fifth order scaling steps that eventually rendered the method computationally expensive for large molecules (e.g., >100 atoms). In the present work, these limitations are overcome by a complete redesign of the LPNO-CCSD method. The new method is based on the combination of the concepts of PNOs and projected atomic orbitals (PAOs). Thus, each PNO is expanded in a set of PAOs that in turn belong to a given electron pair specific domain. In this way, it is possible to fully exploit locality while maintaining the extremely high compactness of the original LPNO-CCSD wavefunction. No terms are dropped from the CCSD equations and domains are chosen conservatively. The correlation energy loss due to the domains remains below 0.05%, which implies typically 15-20 but occasionally up to 30 atoms per domain on average. The new method has been given the acronym DLPNO-CCSD ("domain based LPNO-CCSD"). The method is nearly linear scaling with respect to system size. The original LPNO-CCSD method had three adjustable truncation thresholds that were chosen conservatively and do not need to be changed for actual applications. In the present treatment, no additional truncation parameters have been introduced. Any additional truncation is performed on the basis of the three original thresholds. There are no real-space cutoffs. Single excitations are truncated using singles-specific natural orbitals. Pairs are prescreened according to a multipole expansion of a pair correlation energy estimate based on local orbital specific virtual orbitals (LOSVs). Like its LPNO-CCSD predecessor, the method is completely of black box character and does not require any user adjustments. It is shown here that DLPNO-CCSD is as accurate as LPNO-CCSD while leading to computational savings exceeding one order of magnitude for larger systems. The largest calculations reported here featured >8800 basis functions and >450 atoms. In all larger test calculations done so far, the LPNO-CCSD step took less time than the preceding Hartree-Fock calculation, provided no approximations have been introduced in the latter. Thus, based on the present development reliable CCSD calculations on large molecules with unprecedented efficiency and accuracy are realized.

  18. Study on Diagnosing Three Dimensional Cloud Region

    NASA Astrophysics Data System (ADS)

    Cai, M., Jr.; Zhou, Y., Sr.

    2017-12-01

    Cloud mask and relative humidity (RH) data from CloudSat products for 2007 to 2008 are statistically analyzed to obtain the RH threshold between cloud and clear sky and its variation with height. A diagnosis method based on reanalysis data is proposed and applied to the three-dimensional cloud-field diagnosis of a real case, and the diagnosed cloud field is compared with satellite, radar and other cloud and precipitation observations. The main results are as follows. 1. Cloud regions where the cloud mask exceeds 20 correspond well in space and time to regions of high relative humidity provided by the ECMWF-AUX product. Statistical analysis of the RH frequency distribution inside and outside cloud indicates that the in-cloud RH distribution at different height ranges is single-peaked, with the peak near an RH value of 100%. The local atmospheric environment affects the RH distribution outside cloud, so that this distribution varies with region and height. 2. The RH threshold and its vertical distribution used for cloud diagnosis were determined with the Threat Score method. The method was applied to a three-dimensional cloud diagnosis case study based on NCEP reanalysis data, and the diagnosed cloud field was compared with satellite, radar and ground-based cloud and precipitation observations. It was found that the RH gradient is very large around cloud regions and that the cloud area diagnosed by the RH threshold method is relatively stable. The diagnosed cloud area corresponds well to updraft regions, and the diagnosed cloud and clear-sky distribution corresponds overall to the satellite TBB observations. The diagnosed cloud depth, or total number of cloud layers, is more consistent with optical thickness and ground precipitation. The cloud vertical profile clearly reveals the relation between cloud vertical structure and the weather system, and the diagnosed cloud distribution corresponds very well to ground-based cloud observations. 3. The method is improved by changing the vertical coordinate from altitude to temperature. With this change, all five scores, including the TS score for clear sky, the empty-forecast and missed-forecast rates, and especially the TS score for cloud regions and the accuracy rate, increase markedly. Thus the RH threshold and its vertical distribution are better expressed with temperature than with altitude. More tests and comparisons should be done to assess the diagnosis method.
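
    The threshold selection step can be sketched as a scan over candidate RH values scored against a cloud-mask "truth" with the Threat Score, TS = hits / (hits + misses + false alarms). The synthetic data below are illustrative; the study derives thresholds per height (or temperature) bin.

      import numpy as np

      rng = np.random.default_rng(7)
      rh = rng.uniform(0.0, 120.0, size=10000)          # RH samples (%)
      # Noisy "observed" cloud flags, standing in for the CloudSat cloud mask.
      cloud = (rh + rng.normal(0.0, 10.0, rh.size)) > 95.0

      def threat_score(pred, obs):
          hits = np.sum(pred & obs)
          misses = np.sum(~pred & obs)
          false_alarms = np.sum(pred & ~obs)
          return hits / (hits + misses + false_alarms)

      candidates = np.arange(70.0, 105.0, 1.0)
      best = max(candidates, key=lambda th: threat_score(rh > th, cloud))
      print(f"best RH threshold by Threat Score: {best:.0f}%")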

  19. A wavelet and least square filter based spatial-spectral denoising approach of hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Li, Ting; Chen, Xiao-Mei; Chen, Gang; Xue, Bo; Ni, Guo-Qiang

    2009-11-01

    Noise reduction is a crucial step in hyperspectral imagery pre-processing. Owing to sensor characteristics, the noise of hyperspectral imagery appears in both the spatial and the spectral domain. However, most prevailing denoising techniques process the imagery in only one specific domain and so do not exploit the multi-domain nature of hyperspectral imagery. In this paper, a new spatial-spectral noise reduction algorithm is proposed, based on wavelet analysis and least-squares filtering techniques. First, in the spatial domain, a new stationary-wavelet shrinking algorithm with an improved threshold function is used to reduce the noise level band by band. This algorithm uses BayesShrink for threshold estimation and amends the traditional soft-threshold function by adding shape-tuning parameters. Compared with the soft or hard threshold function, the improved one, which is first-order differentiable and has a smooth transitional region between noise and signal, preserves more image edge detail and weakens pseudo-Gibbs artifacts. Then, in the spectral domain, a cubic Savitzky-Golay filter based on the least-squares method is used to remove spectral noise and any artificial noise that may have been introduced during the spatial denoising. With the filter window width selected appropriately from prior knowledge, this algorithm smooths the spectral curve effectively. The performance of the new algorithm is evaluated on a set of Hyperion imageries acquired in 2007. The results show that the new spatial-spectral denoising algorithm provides a more significant signal-to-noise-ratio improvement than traditional spatial or spectral methods, while better preserving local spectral absorption features.
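
    The two spatial-domain ingredients can be sketched as follows: a BayesShrink threshold estimate and a smooth shrinkage rule with a shape parameter. The particular smooth function used here is an illustrative assumption, not the paper's exact amended threshold function.

      import numpy as np

      def bayes_shrink_threshold(detail):
          # T = sigma_noise^2 / sigma_signal, with the noise SD from the
          # robust median estimator of the detail coefficients.
          sigma_n = np.median(np.abs(detail)) / 0.6745
          sigma_x = np.sqrt(max(detail.var() - sigma_n**2, 1e-12))
          return sigma_n**2 / sigma_x

      def smooth_shrink(w, T, alpha=2.0):
          # Soft-threshold-like rule with a smooth, differentiable transition
          # between the "noise" and "signal" regions; alpha tunes its shape.
          mag = np.abs(w)
          gain = 1.0 - np.exp(-alpha * np.maximum(mag - T, 0.0) / (T + 1e-12))
          return np.sign(w) * mag * gain

      rng = np.random.default_rng(8)
      coeffs = rng.laplace(scale=0.5, size=4096) + 0.2 * rng.standard_normal(4096)
      T = bayes_shrink_threshold(coeffs)
      denoised = smooth_shrink(coeffs, T)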

  20. Secure Distributed Detection under Energy Constraint in IoT-Oriented Sensor Networks.

    PubMed

    Zhang, Guomei; Sun, Hao

    2016-12-16

    We study the secure distributed detection problems under energy constraint for IoT-oriented sensor networks. The conventional channel-aware encryption (CAE) is an efficient physical-layer secure distributed detection scheme in light of its energy efficiency, good scalability and robustness over diverse eavesdropping scenarios. However, in the CAE scheme, it remains an open problem of how to optimize the key thresholds for the estimated channel gain, which are used to determine the sensor's reporting action. Moreover, the CAE scheme does not jointly consider the accuracy of local detection results in determining whether to stay dormant for a sensor. To solve these problems, we first analyze the error probability and derive the optimal thresholds in the CAE scheme under a specified energy constraint. These results build a convenient mathematic framework for our further innovative design. Under this framework, we propose a hybrid secure distributed detection scheme. Our proposal can satisfy the energy constraint by keeping some sensors inactive according to the local detection confidence level, which is characterized by likelihood ratio. In the meanwhile, the security is guaranteed through randomly flipping the local decisions forwarded to the fusion center based on the channel amplitude. We further optimize the key parameters of our hybrid scheme, including two local decision thresholds and one channel comparison threshold. Performance evaluation results demonstrate that our hybrid scheme outperforms the CAE under stringent energy constraints, especially in the high signal-to-noise ratio scenario, while the security is still assured.

  2. Exceedance probability map: a tool helping the definition of arsenic Natural Background Level (NBL) within the Drainage Basin to the Venice Lagoon (NE Italy)

    NASA Astrophysics Data System (ADS)

    Dalla Libera, Nico; Fabbri, Paolo; Mason, Leonardo; Piccinini, Leonardo; Pola, Marco

    2017-04-01

    Arsenic contamination affects shallow groundwater bodies worldwide. Current knowledge of the origin of arsenic in groundwater indicates that most dissolved arsenic occurs naturally through the dissolution of As-bearing minerals and ores. Several studies on the shallow aquifers of both the regional Venetian Plain (NE Italy) and the local Drainage Basin to the Venice Lagoon (DBVL) show locally high arsenic concentrations related to peculiar geochemical conditions that drive arsenic mobilization. The uncertainty of the arsenic spatial distribution complicates both the evaluation of the processes involved in arsenic mobilization and stakeholders' decisions about environmental management. Regarding the latter aspect, the present study treats the problem of defining the Natural Background Level (NBL) as the threshold discriminating natural contamination from anthropogenic pollution. The EU Directive 2006/118/EC suggests procedures and criteria to set up water quality standards that guarantee a healthy status and reverse any contamination trends. In addition, the EU BRIDGE project proposes criteria, based on the 90th percentile of the contaminant concentration dataset, to estimate the NBL. Nevertheless, these methods provide just one statistical NBL for the whole area, without considering the spatial variation of the contaminant concentration. We therefore reinforce the NBL concept with a geostatistical approach, which provides detailed information about the distribution of arsenic concentrations and unveils zones with concentrations above the Italian drinking water standard (IDWS = 10 µg/liter). Once the spatial information about the arsenic distribution is obtained, the 90th percentile method can be applied to estimate local NBLs for every zone with arsenic above the IDWS. The indicator kriging method was chosen because it estimates the spatial distribution of the probability of exceeding predefined thresholds; this approach is widely used in the literature for similar environmental problems. To test the validity of the procedure, we used the dataset from the "A.Li.Na" project (funded by the Regional Environmental Agency), which defined regional NBLs of As, Fe, Mn and NH4+ in the DBVL's groundwater. First, we defined two thresholds corresponding to the IDWS and to the median of the data above the IDWS, chosen on the basis of the dataset's statistical structure and the quality criteria of GWD 2006/118/EC. Subsequently, we evaluated the spatial distribution of the probability of exceeding the defined thresholds using indicator kriging. The results highlight several zones with high exceedance probabilities, ranging from 75% to 95%, with respect to both the IDWS and the median value. Considering the geological setting of the DBVL, these probability values coincide with the occurrence of both organic matter and reducing conditions. In conclusion, the spatial prediction of the exceedance probability can help define the areas in which to estimate local NBLs, enhancing the NBL definition procedure. The NBL estimation thereby becomes more realistic because it considers the spatial distribution of the studied contaminant, distinguishing areas with high natural concentrations from polluted ones.
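
    A minimal indicator-kriging sketch of the workflow: transform concentrations to 0/1 against the 10 µg/liter IDWS, then solve an ordinary kriging system on the indicators to map exceedance probabilities. Locations, concentrations and the exponential variogram parameters are synthetic assumptions, not the A.Li.Na dataset.

      import numpy as np

      rng = np.random.default_rng(9)
      xy = rng.uniform(0.0, 10.0, size=(60, 2))        # sampled wells (km)
      conc = np.exp(rng.normal(1.5, 1.0, size=60))     # As concentrations (ug/L)
      indicator = (conc > 10.0).astype(float)          # exceeds the IDWS?

      def cov(h, sill=0.25, range_km=3.0):
          return sill * np.exp(-h / range_km)          # exponential covariance model

      def ok_exceedance(xy, ind, grid):
          n = len(ind)
          d = np.linalg.norm(xy[:, None] - xy[None, :], axis=-1)
          A = np.ones((n + 1, n + 1))
          A[:n, :n] = cov(d)
          A[n, n] = 0.0                                # ordinary-kriging constraint
          out = np.empty(len(grid))
          for k, g in enumerate(grid):
              b = np.ones(n + 1)
              b[:n] = cov(np.linalg.norm(xy - g, axis=1))
              w = np.linalg.solve(A, b)[:n]
              out[k] = w @ ind
          return np.clip(out, 0.0, 1.0)                # interpret as probabilities

      gx, gy = np.meshgrid(np.linspace(0, 10, 25), np.linspace(0, 10, 25))
      grid = np.column_stack([gx.ravel(), gy.ravel()])
      prob = ok_exceedance(xy, indicator, grid).reshape(gx.shape)
      print(f"area with P(As > IDWS) > 0.75: {(prob > 0.75).mean():.1%}")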

  3. Altered cortical and subcortical connectivity due to infrasound administered near the hearing threshold - Evidence from fMRI.

    PubMed

    Weichenberger, Markus; Bauer, Martin; Kühler, Robert; Hensel, Johannes; Forlim, Caroline Garcia; Ihlenfeld, Albrecht; Ittermann, Bernd; Gallinat, Jürgen; Koch, Christian; Kühn, Simone

    2017-01-01

    In the present study, the brain's response towards near- and supra-threshold infrasound (IS) stimulation (sound frequency < 20 Hz) was investigated under resting-state fMRI conditions. The study involved two consecutive sessions. In the first session, 14 healthy participants underwent a hearing threshold measurement as well as a categorical loudness scaling measurement in which the individual loudness perception for IS was assessed across different sound pressure levels (SPL). In the second session, these participants underwent three resting-state acquisitions: one without auditory stimulation (no-tone), one with a monaurally presented 12-Hz IS tone (near-threshold) and one with a similar tone above the individual hearing threshold corresponding to a 'medium loud' hearing sensation (supra-threshold). Data analysis mainly focused on local connectivity measures by means of regional homogeneity (ReHo), but also involved independent component analysis (ICA) to investigate inter-regional connectivity. ReHo analysis revealed significantly higher local connectivity in the right superior temporal gyrus (STG) adjacent to primary auditory cortex, in the anterior cingulate cortex (ACC) and, when allowing smaller cluster sizes, also in the right amygdala (rAmyg) during the near-threshold condition, compared to both the supra-threshold and the no-tone condition. Additional ICA revealed large-scale changes of functional connectivity, reflected in stronger activation of the rAmyg in the opposite contrast (no-tone > near-threshold), as well as of the right superior frontal gyrus (rSFG) during the near-threshold condition. In summary, this study is the first to demonstrate that infrasound near the hearing threshold may induce changes of neural activity across several brain regions, some of which are known to be involved in auditory processing, while others are regarded as key players in emotional and autonomic control. These findings allow us to speculate on how continuous exposure to (sub-)liminal IS could exert a pathogenic influence on the organism, yet further (especially longitudinal) studies are required in order to substantiate these findings.

  4. Intensity-duration threshold of rainfall-triggered debris flows in the Wenchuan Earthquake affected area, China

    NASA Astrophysics Data System (ADS)

    Guo, Xiaojun; Cui, Peng; Li, Yong; Ma, Li; Ge, Yonggang; Mahoney, William B.

    2016-01-01

    The Ms 8.0 Wenchuan Earthquake greatly altered the rainfall threshold for debris flows in the affected areas. This study explores the local intensity-duration (I-D) relationship based on 252 post-earthquake debris flows. It was found that I = 5.25 D^(-0.76) accounts for more than 98% of the debris flow occurrences with rainfall durations between 1 and 135 h; this curve therefore defines the threshold for debris flows in the study area. It gives much lower thresholds than those proposed by previous studies, suggesting that the earthquake greatly decreased the thresholds. Moreover, the rainfall thresholds appear to increase annually over the period 2008-2013, presenting a logarithmically increasing tendency and indicating that the thresholds will recover in the coming decades.
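
    Fitting such an I-D threshold can be outlined as a log-log regression lowered to envelop a chosen fraction of triggering events. The synthetic events below are generated around the published curve, and the 98% envelope criterion follows the record above; the quantile-shift construction is an illustrative assumption.

      import numpy as np

      rng = np.random.default_rng(10)
      D = np.exp(rng.uniform(0.0, np.log(135.0), 252))          # durations (h)
      I = 5.25 * D**-0.76 * np.exp(rng.normal(0.3, 0.4, 252))   # intensities (mm/h)

      slope, intercept = np.polyfit(np.log(D), np.log(I), 1)    # central log-log fit
      resid = np.log(I) - (intercept + slope * np.log(D))
      shift = np.quantile(resid, 0.02)        # lower the line below ~98% of events
      a = np.exp(intercept + shift)
      print(f"threshold curve: I = {a:.2f} * D^({slope:.2f})")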

  5. Studying precipitation recycling over the Tibetan Plateau using evaporation-tagging and back-trajectory analysis

    NASA Astrophysics Data System (ADS)

    Gao, Y.

    2017-12-01

    Regional precipitation recycling (i.e., the contribution of local evaporation to local precipitation) is an important component of the water cycle over the Tibetan Plateau (TP). Two methods were used to investigate regional precipitation recycling: 1) tracking of tagged atmospheric water parcels originating from evaporation in a source region (i.e., E-tagging), and 2) a back-trajectory approach that tracks the evaporative sources contributing to precipitation in a specific region. These two methods were applied to Weather Research and Forecasting (WRF) regional climate simulations to quantify the precipitation recycling ratio in the TP for three selected years: a climatologically normal, a dry and a wet year. The simulation region is characterized by a high average elevation, above 4000 m, and complex terrain. The back-trajectory approach was also applied to three sub-regions of the TP, namely the western, northeastern and southeastern TP, while the E-tagging approach provides recycling-ratio distributions over the whole TP. Three aspects are investigated to characterize the precipitation recycling: the annual mean, seasonal variations and spatial distributions. Averaged over the TP, the precipitation recycling ratio estimated by the E-tagging approach is higher than that from the back-trajectory method. The back-trajectory approach uses a precipitation threshold, defined as the total precipitation in five days divided by a random number, and this number was set to 500 as a trade-off between equilibrium and computational efficiency. The lower recycling ratio derived from the back-trajectory approach is related to the precipitation threshold used. The E-tagging approach, however, tracks every air parcel of evaporation regardless of the precipitation amount. There is no obvious seasonal variation in the recycling ratio for either method. The E-tagging approach shows high recycling ratios in the central TP, indicating stronger land-atmosphere interactions there than elsewhere.

  6. Analytical model of threshold voltage degradation due to localized charges in gate material engineered Schottky barrier cylindrical GAA MOSFETs

    NASA Astrophysics Data System (ADS)

    Kumar, Manoj; Haldar, Subhasis; Gupta, Mridula; Gupta, R. S.

    2016-10-01

    The threshold voltage degradation due to hot-carrier-induced localized charges (LCs) is a major reliability concern for nanoscale Schottky barrier (SB) cylindrical gate-all-around (GAA) metal-oxide-semiconductor field-effect transistors (MOSFETs). The degradation physics of gate material engineered (GME) SB-GAA MOSFETs due to LCs is still unexplored. An explicit threshold voltage degradation model for GME-SB-GAA MOSFETs incorporating localized charges (N_it) is developed. To model the threshold voltage accurately, the minimum channel carrier density has been taken into account. The model shows how positive and negative LCs affect the device subthreshold performance. One-dimensional (1D) Poisson's and 2D Laplace equations have been solved for two different regions (fresh and damaged) with two different gate metal work-functions. LCs are considered at the drain side, with the lower gate metal work-function there, since N_it is more vulnerable towards the drain. To reduce carrier mobility degradation, a lightly doped channel has been considered. The proposed model also includes the effect of barrier height lowering at the metal-semiconductor interface. The model results have been verified against numerical simulation data obtained with the ATLAS-3D device simulator, and excellent agreement is observed between analytical and simulation results.

  7. Error minimization algorithm for comparative quantitative PCR analysis: Q-Anal.

    PubMed

    OConnor, William; Runquist, Elizabeth A

    2008-07-01

    Current methods for comparative quantitative polymerase chain reaction (qPCR) analysis, the threshold and extrapolation methods, either make assumptions about PCR efficiency that require an arbitrary threshold selection process or extrapolate to estimate relative levels of messenger RNA (mRNA) transcripts. Here we describe an algorithm, Q-Anal, that blends elements from current methods to bypass assumptions regarding PCR efficiency and to improve the threshold selection process so as to minimize error in comparative qPCR analysis. This algorithm uses iterative linear regression to identify the exponential phase for both target and reference amplicons and then selects, by minimizing the linear regression error, a fluorescence threshold at which efficiencies for both amplicons have been defined. From this defined fluorescence threshold, the cycle time (Ct) and the error for both amplicons are calculated and used to determine the expression ratio. Ratios in complementary DNA (cDNA) dilution assays from qPCR data were analyzed by the Q-Anal method and compared with the threshold method and an extrapolation method. Dilution ratios determined by the Q-Anal and threshold methods were 86 to 118% of the expected cDNA ratios, but relative errors for the Q-Anal method were 4 to 10%, compared with 4 to 34% for the threshold method. In contrast, ratios determined by an extrapolation method were 32 to 242% of the expected cDNA ratios, with relative errors of 67 to 193%. Q-Anal will be a valuable and quick method for minimizing error in comparative qPCR analysis.
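
    A simplified sketch of the algorithm's flow: slide a linear regression over log fluorescence to locate each amplicon's exponential phase, place a threshold where both fits are defined, read off Ct values, and form the ratio. The window search, the fixed threshold and the toy amplification curves are assumptions; the actual error-minimizing threshold selection in Q-Anal is more elaborate.

      import numpy as np

      def exp_phase_fit(fluor, cycles, win=5):
          # Fit log(fluorescence) over sliding windows; the exponential phase
          # is taken as a near-maximal-slope window with the best linear fit.
          logf = np.log(fluor)
          fits = []
          for s in range(len(cycles) - win + 1):
              sl = slice(s, s + win)
              coef, res, *_ = np.polyfit(cycles[sl], logf[sl], 1, full=True)
              fits.append((coef, res[0] if res.size else 0.0))
          max_slope = max(c[0] for c, _ in fits)
          good = [(err, coef) for coef, err in fits if coef[0] > 0.9 * max_slope]
          return min(good, key=lambda t: t[0])[1]   # [slope, intercept]

      def ct_at(threshold, coef):
          slope, intercept = coef
          return (np.log(threshold) - intercept) / slope

      cycles = np.arange(40, dtype=float)
      target = 0.01 * 1.9**cycles / (1 + 0.01 * 1.9**cycles / 100)  # toy curves
      refamp = 0.04 * 1.9**cycles / (1 + 0.04 * 1.9**cycles / 100)

      coef_t = exp_phase_fit(target, cycles)
      coef_r = exp_phase_fit(refamp, cycles)
      eff = np.exp((coef_t[0] + coef_r[0]) / 2)    # shared efficiency estimate
      thr = 1.0                                    # threshold in both exp phases
      d_ct = ct_at(thr, coef_r) - ct_at(thr, coef_t)
      print(f"expression ratio (target/reference) = {eff**d_ct:.2f}")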

  8. PT-symmetry breaking with divergent potentials: Lattice and continuum cases

    NASA Astrophysics Data System (ADS)

    Joglekar, Yogesh N.; Scott, Derek D.; Saxena, Avadh

    2014-09-01

    We investigate the parity- and time-reversal (PT-) symmetry breaking in lattice models in the presence of long-ranged, non-Hermitian, PT-symmetric potentials that remain finite or become divergent in the continuum limit. By scaling analysis of the fragile PT threshold for an open finite lattice, we show that continuum loss-gain potentials V_α(x) ∝ i |x|^α sgn(x) have a positive PT-breaking threshold for α > -2, and a zero threshold for α ≤ -2. When α < 0, localized states with complex (conjugate) energies in the continuum energy band occur at higher loss-gain strengths. We investigate the signatures of PT-symmetry breaking in coupled waveguides, and show that the emergence of localized states dramatically shortens the relevant time scale in the PT-symmetry broken region.

  9. Extraction of the number of peroxisomes in yeast cells by automated image analysis.

    PubMed

    Niemistö, Antti; Selinummi, Jyrki; Saleem, Ramsey; Shmulevich, Ilya; Aitchison, John; Yli-Harja, Olli

    2006-01-01

    An automated image analysis method for extracting the number of peroxisomes in yeast cells is presented. Two images of the cell population are required for the method: a bright field microscope image from which the yeast cells are detected and the respective fluorescent image from which the number of peroxisomes in each cell is found. The segmentation of the cells is based on clustering the local mean-variance space. The watershed transformation is thereafter employed to separate cells that are clustered together. The peroxisomes are detected by thresholding the fluorescent image. The method is tested with several images of a budding yeast Saccharomyces cerevisiae population, and the results are compared with manually obtained results.
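
    The counting step reduces to thresholding the fluorescence channel and labeling connected components per cell. The sketch below mocks the cell segmentation (the clustering and watershed stages) with two fixed regions, and all intensities are synthetic.

      import numpy as np
      from scipy import ndimage

      rng = np.random.default_rng(11)
      fluor = rng.normal(0.1, 0.02, size=(128, 128))    # fluorescence background
      for y, x in rng.integers(10, 118, size=(30, 2)):  # 30 bright puncta
          fluor[y - 1:y + 2, x - 1:x + 2] += 1.0

      cell_labels = np.zeros((128, 128), dtype=int)     # mock segmentation: 2 cells
      cell_labels[:, :64] = 1
      cell_labels[:, 64:] = 2

      spots = fluor > 0.5                               # fluorescence threshold
      spot_labels, _ = ndimage.label(spots)
      for cell in (1, 2):
          ids = np.unique(spot_labels[(cell_labels == cell) & spots])
          print(f"cell {cell}: {ids.size} peroxisomes")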

  10. Noise thresholds for optical quantum computers.

    PubMed

    Dawson, Christopher M; Haselgrove, Henry L; Nielsen, Michael A

    2006-01-20

    In this Letter we numerically investigate the fault-tolerant threshold for optical cluster-state quantum computing. We allow both photon loss noise and depolarizing noise (as a general proxy for all local noise), and obtain a threshold region of allowed pairs of values for the two types of noise. Roughly speaking, our results show that scalable optical quantum computing is possible for photon loss probabilities below 3 × 10^-3, and for depolarization probabilities below 10^-4.

  11. Evaluation of different methods for determining growing degree-day thresholds in apricot cultivars

    NASA Astrophysics Data System (ADS)

    Ruml, Mirjana; Vuković, Ana; Milatović, Dragan

    2010-07-01

    The aim of this study was to examine different methods for determining growing degree-day (GDD) threshold temperatures for two phenological stages (full bloom and harvest) and to select the optimal thresholds for a greater number of apricot ( Prunus armeniaca L.) cultivars grown in the Belgrade region. A 10-year data series was used to conduct the study. Several commonly used methods for determining the threshold temperatures from field observations were evaluated: (1) the least standard deviation in GDD; (2) the least standard deviation in days; (3) the least coefficient of variation in GDD; (4) the regression coefficient; (5) the least standard deviation in days with a mean temperature above the threshold; (6) the least coefficient of variation in days with a mean temperature above the threshold; and (7) the smallest root mean square error between the observed and predicted number of days. In addition, two methods for calculating daily GDD, and two methods for calculating daily mean air temperature, were tested to emphasize the differences that can arise from different interpretations of the basic GDD equation. The best agreement with observations was attained by method (7). The lower threshold temperature obtained by this method differed among cultivars from -5.6 to -1.7°C for full bloom, and from -0.5 to 6.6°C for harvest. However, the "Null" method (lower threshold set to 0°C) and the "Fixed Value" method (lower threshold set to -2°C for full bloom and to 3°C for harvest) also gave very good results. The limitations of the widely used method (1) and of methods (5) and (6), which generally performed worst, are discussed in the paper.
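
    Method (7) can be outlined as a scan over candidate base temperatures: for each base, accumulate GDD, predict the event date from the mean GDD requirement, and keep the base with the smallest day-count RMSE. The synthetic temperature series and the single-threshold GDD formula below are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(12)
      years, days = 10, 200
      tmean = (10 + 12 * np.sin(np.pi * np.arange(days) / days)
               + rng.normal(0, 2, (years, days)))        # daily means (deg C)
      obs_day = rng.integers(110, 125, years)            # observed event day

      def rmse_for_base(base):
          # Single-threshold GDD: max(T - base, 0), accumulated over the season.
          gdd = np.cumsum(np.maximum(tmean - base, 0.0), axis=1)
          need = np.mean([gdd[y, obs_day[y]] for y in range(years)])
          pred = np.array([np.searchsorted(gdd[y], need) for y in range(years)])
          return np.sqrt(np.mean((pred - obs_day) ** 2))

      bases = np.arange(-6.0, 10.0, 0.5)
      best = min(bases, key=rmse_for_base)
      print(f"optimal base temperature: {best:.1f} deg C")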

  12. Optimising health care within given budgets: primary prevention of cardiovascular disease in different regions of Sweden.

    PubMed

    Löfroth, Emil; Lindholm, Lars; Wilhelmsen, Lars; Rosén, Måns

    2006-01-01

    This study investigated the consequences of applying strict health maximisation to the choice between three different interventions within a defined budget. We analysed three interventions for preventing cardiovascular disease: doctor's advice on smoking cessation, blood-pressure-lowering drugs, and lipid-lowering drugs. A state transition model was used to estimate the cost-utility ratios for the entire population in three different county councils in Sweden, where the populations were stratified into mutually exclusive risk groups. The incremental cost-utility ratios are presented in a league table and combined with the local resources and the local epidemiological data as a proxy for need for treatment. All interventions with an incremental cost-utility ratio exceeding the threshold ratio are excluded from funding. The threshold varied between 1687 Euro and 6192 Euro. The general reallocation of resources between the three interventions was a 60% reduction of blood-pressure-lowering drugs, with redistribution of resources to advice on smoking cessation and to lipid-lowering drugs. One advantage of this method is that the results are very concrete. Recommendations can thereby be more precise, which hopefully will create a public debate between decision-makers, practising physicians and patient groups.

  13. Anaerobic Threshold and Salivary α-amylase during Incremental Exercise.

    PubMed

    Akizuki, Kazunori; Yazaki, Syouichirou; Echizenya, Yuki; Ohashi, Yukari

    2014-07-01

    [Purpose] The purpose of this study was to clarify the validity of salivary α-amylase as a method of quickly estimating anaerobic threshold and to establish the relationship between salivary α-amylase and double-product breakpoint in order to create a way to adjust exercise intensity to a safe and effective range. [Subjects and Methods] Eleven healthy young adults performed an incremental exercise test using a cycle ergometer. During the incremental exercise test, oxygen consumption, carbon dioxide production, and ventilatory equivalent were measured using a breath-by-breath gas analyzer. Systolic blood pressure and heart rate were measured to calculate the double product, from which double-product breakpoint was determined. Salivary α-amylase was measured to calculate the salivary threshold. [Results] One-way ANOVA revealed no significant differences among workloads at the anaerobic threshold, double-product breakpoint, and salivary threshold. Significant correlations were found between anaerobic threshold and salivary threshold and between anaerobic threshold and double-product breakpoint. [Conclusion] As a method for estimating anaerobic threshold, salivary threshold was as good as or better than determination of double-product breakpoint because the correlation between anaerobic threshold and salivary threshold was higher than the correlation between anaerobic threshold and double-product breakpoint. Therefore, salivary threshold is a useful index of anaerobic threshold during an incremental workload.

  14. In search of functional association from time-series microarray data based on the change trend and level of gene expression

    PubMed Central

    He, Feng; Zeng, An-Ping

    2006-01-01

    Background The increasing availability of time-series expression data opens up new possibilities to study functional linkages of genes. Present methods used to infer functional linkages between genes from expression data are mainly based on a point-to-point comparison. Change trends between consecutive time points in time-series data have so far not been well explored. Results In this work we present a new method based on extracting the main features of the change trend and level of gene expression between consecutive time points. The method, termed trend correlation (TC), includes two major steps: (1) calculating a maximal local alignment of the change trend score by dynamic programming, and a change trend correlation coefficient between the maximal matched change levels of each gene pair; (2) inferring relationships of gene pairs based on two statistical extraction procedures. The new method considers time shifts and inverted relationships in a similar way as the local clustering (LC) method, but the latter is merely based on a point-to-point comparison. The TC method is demonstrated with data from the yeast cell cycle and compared with the LC method and the widely used Pearson correlation coefficient (PCC) based clustering method. The biological significance of the gene pairs is examined with several large-scale yeast databases. Although the TC method predicts an overall lower number of gene pairs than the other two methods at the same p-value threshold, the additional number of gene pairs inferred by the TC method is considerable: e.g. 20.5% compared with the LC method and 49.6% compared with the PCC method for a p-value threshold of 2.7E-3. Moreover, the percentage of the inferred gene pairs consistent with databases is generally higher for our method than for the LC method, and similar to the PCC method. A significant number of the gene pairs inferred only by the TC method are process-identity or function-similarity pairs or have well-documented biological interactions, including 443 known protein interactions and some known cell-cycle-related regulatory interactions. It should be emphasized that the overlap of gene pairs detected by the three methods is normally not very high, indicating the necessity of combining the different methods in the search for functional associations of genes in time-series data. For a p-value threshold of 1E-5 the percentage of process-identity and function-similarity gene pairs among the shared part of the three methods reaches 60.2% and 55.6% respectively, building a good basis for further experimental and functional study. Furthermore, the combined use of methods is important for inferring more complete regulatory circuits and networks, as exemplified in this study. Conclusion The TC method can significantly augment the current major methods for inferring functional linkages and biological networks and is well suited to exploring temporal relationships of gene expression in time-series data. PMID:16478547
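
    The sketch below illustrates step (1) in Python under simplifying assumptions: change trends are encoded as the sign of consecutive differences, and a gap-free Smith-Waterman-style dynamic program with ±1 scoring returns the maximal local alignment score of two trend sequences. The actual TC scoring scheme and the change-level correlation coefficient are not specified in the abstract and are omitted here.

        import numpy as np

        def trend(series):
            # Encode the change trend between consecutive time points as -1/0/+1.
            return np.sign(np.diff(np.asarray(series, dtype=float)))

        def max_local_trend_alignment(a, b, match=1.0, mismatch=-1.0):
            ta, tb = trend(a), trend(b)
            H = np.zeros((len(ta) + 1, len(tb) + 1))
            for i in range(1, len(ta) + 1):
                for j in range(1, len(tb) + 1):
                    s = match if ta[i - 1] == tb[j - 1] else mismatch
                    # Local alignment: the score is never allowed to drop below zero.
                    H[i, j] = max(0.0, H[i - 1, j - 1] + s)
            return H.max()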

  15. Evaluation of auditory functions for Royal Canadian Mounted Police officers.

    PubMed

    Vaillancourt, Véronique; Laroche, Chantal; Giguère, Christian; Beaulieu, Marc-André; Legault, Jean-Pierre

    2011-06-01

    Auditory fitness for duty (AFFD) testing is an important element in an assessment of workers' ability to perform job tasks safely and effectively. Functional hearing is particularly critical to job performance in law enforcement. Most often, assessment is based on pure-tone detection thresholds; however, its validity can be questioned and challenged in court. In an attempt to move beyond the pure-tone audiogram, some organizations like the Royal Canadian Mounted Police (RCMP) are incorporating additional testing to supplement audiometric data in their AFFD protocols, such as measurements of speech recognition in quiet and/or in noise, and sound localization. This article reports on the assessment of RCMP officers wearing hearing aids in speech recognition and sound localization tasks. The purpose was to quantify individual performance in different domains of hearing identified as necessary components of fitness for duty, and to document the type of hearing aids prescribed in the field and their benefit for functional hearing. The data are intended to help the RCMP make more informed decisions regarding AFFD in officers wearing hearing aids. The proposed new AFFD protocol included unaided and aided measures of speech recognition in quiet and in noise using the Hearing in Noise Test (HINT) and sound localization in the left/right (L/R) and front/back (F/B) horizontal planes. Sixty-four officers were identified and selected by the RCMP to take part in this study on the basis of hearing thresholds exceeding current audiometrically based criteria. This article reports the results of 57 officers wearing hearing aids. Based on individual results, 49% of officers were reclassified from nonoperational status to operational with limitations on fine hearing duties, given their unaided and/or aided performance. Group data revealed that hearing aids (1) improved speech recognition thresholds on the HINT, the effects being most prominent in Quiet and in conditions of spatial separation between target and noise (Noise Right and Noise Left) and least considerable in Noise Front; (2) neither significantly improved nor impeded L/R localization; and (3) substantially increased F/B errors in localization in a number of cases. Additional analyses also pointed to the poor ability of threshold data to predict functional abilities for speech in noise (r² = 0.26 to 0.33) and sound localization (r² = 0.03 to 0.28). Only speech in quiet (r² = 0.68 to 0.85) is predicted adequately from threshold data. Combined with previous findings, results indicate that the use of hearing aids can considerably affect F/B localization abilities in a number of individuals. Moreover, speech understanding in noise and sound localization abilities were poorly predicted from pure-tone thresholds, demonstrating the need to specifically test these abilities, both unaided and aided, when assessing AFFD. Finally, further work is needed to develop empirically based hearing criteria for the RCMP and identify best practices in hearing aid fittings for optimal functional hearing abilities. American Academy of Audiology.

  16. WegoLoc: accurate prediction of protein subcellular localization using weighted Gene Ontology terms.

    PubMed

    Chi, Sang-Mun; Nam, Dougu

    2012-04-01

    We present an accurate and fast web server, WegoLoc, for predicting the subcellular localization of proteins based on sequence similarity and weighted Gene Ontology (GO) information. A term weighting method from text categorization is applied to GO terms for a support vector machine classifier. As a result, WegoLoc surpasses the state-of-the-art methods on previously used test datasets. WegoLoc supports three eukaryotic kingdoms (animals, fungi and plants), provides human-specific analysis, and covers several sets of cellular locations. In addition, WegoLoc provides (i) multiple possible localizations of the input protein(s) as well as their corresponding probability scores, (ii) weights of GO terms representing the contribution of each GO term to the prediction, and (iii) a BLAST E-value for the best hit with GO terms. If the similarity score does not meet a given threshold, an amino acid composition-based prediction is applied as a backup method. WegoLoc and the user's guide are freely available at http://www.btool.org/WegoLoc (contact: smchiks@ks.ac.kr; dougnam@unist.ac.kr). Supplementary data are available at http://www.btool.org/WegoLoc.

  17. A coarse-to-fine approach for pericardial effusion localization and segmentation in chest CT scans

    NASA Astrophysics Data System (ADS)

    Liu, Jiamin; Chellamuthu, Karthik; Lu, Le; Bagheri, Mohammadhadi; Summers, Ronald M.

    2018-02-01

    Pericardial effusion on CT scans demonstrates very high shape and volume variability and very low contrast to adjacent structures. This inhibits traditional automated segmentation methods from achieving high accuracy. Deep neural networks have been widely used for image segmentation in CT scans. In this work, we present a two-stage method for pericardial effusion localization and segmentation. In the first stage, we localize the pericardial area within the entire CT volume, providing a reliable bounding box for the more refined segmentation stage. A coarse-scaled holistically-nested convolutional network (HNN) model is trained on the entire CT volume. The resulting HNN per-pixel probability maps are then thresholded to produce a bounding box covering the pericardial area. In the second stage, a fine-scaled HNN model is trained only on the bounding box region for effusion segmentation, to reduce background distraction. Quantitative evaluation is performed on a dataset of 25 CT scans (1206 images) of patients with pericardial effusion. The segmentation accuracy of our two-stage method, measured by the Dice Similarity Coefficient (DSC), is 75.59 ± 12.04%, which is significantly better than the segmentation accuracy (62.74 ± 15.20%) of using the coarse-scaled HNN model alone.
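
    The first stage reduces to a simple operation once the coarse probability map is available: keep the voxels above a cut-off and take their extent, padded by a margin, as the bounding box. A minimal Python sketch, with the threshold and margin values as assumptions:

        import numpy as np

        def prob_map_to_bbox(prob, thresh=0.5, margin=5):
            coords = np.argwhere(prob > thresh)        # (z, y, x) of confident voxels
            if coords.size == 0:
                return None
            lo = np.maximum(coords.min(axis=0) - margin, 0)
            hi = np.minimum(coords.max(axis=0) + margin + 1, prob.shape)
            return tuple(slice(a, b) for a, b in zip(lo, hi))  # use as volume[bbox]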

  18. An integrative perspective of the anaerobic threshold.

    PubMed

    Sales, Marcelo Magalhães; Sousa, Caio Victor; da Silva Aguiar, Samuel; Knechtle, Beat; Nikolaidis, Pantelis Theodoros; Alves, Polissandro Mortoza; Simões, Herbert Gustavo

    2017-12-14

    The concept of the anaerobic threshold (AT) was introduced in the 1960s. Since then, several methods to identify the AT have been studied and suggested as novel 'thresholds' based upon the variable used for detection (i.e. lactate threshold, ventilatory threshold, glucose threshold). These different techniques have created some confusion about how this parameter should be named, for instance after the general concept (anaerobic threshold) or after the physiological measure used (i.e. lactate, ventilation). On the other hand, the modernization of scientific methods and apparatus to detect the AT, as well as the body of literature formed in the past decades, could provide a more cohesive understanding of the AT and the multiple physiological systems involved. Thus, the purpose of this review was to provide an integrative perspective of the methods to determine the AT. Copyright © 2017 Elsevier Inc. All rights reserved.

  19. Valley and channel networks extraction based on local topographic curvature and k-means clustering of contours

    NASA Astrophysics Data System (ADS)

    Hooshyar, Milad; Wang, Dingbao; Kim, Seoyoung; Medeiros, Stephen C.; Hagen, Scott C.

    2016-10-01

    A method for automatic extraction of valley and channel networks from high-resolution digital elevation models (DEMs) is presented. This method utilizes both positive (i.e., convergent topography) and negative (i.e., divergent topography) curvature to delineate the valley network. The valley and ridge skeletons are extracted using the pixels' curvature and the local terrain conditions. The valley network is generated by checking the terrain for the existence of at least one ridge between two intersecting valleys. The transition from unchannelized to channelized sections (i.e., channel head) in each first-order valley tributary is identified independently by categorizing the corresponding contours using an unsupervised approach based on k-means clustering. The method does not require a spatially constant channel initiation threshold (e.g., curvature or contributing area). Moreover, instead of a point attribute (e.g., curvature), the proposed clustering method utilizes the shape of contours, which reflects the entire cross-sectional profile including possible banks. The method was applied to three catchments: Indian Creek and Mid Bailey Run in Ohio and Feather River in California. The accuracy of channel head extraction from the proposed method is comparable to state-of-the-art channel extraction methods.
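
    A hedged Python sketch of the contour-clustering step: cross-sectional elevation profiles along each contour of a first-order valley, resampled to equal length, are normalized so that k-means with k = 2 responds to contour shape rather than scale, separating channelized from unchannelized sections. The profile-based feature representation is an illustrative choice; the paper's exact contour descriptors are not given in the abstract.

        import numpy as np
        from sklearn.cluster import KMeans

        def split_channelized(contour_profiles):
            # contour_profiles: (n_contours, n_samples) elevation profiles.
            X = contour_profiles - contour_profiles.mean(axis=1, keepdims=True)
            X /= X.std(axis=1, keepdims=True) + 1e-12   # keep shape, discard scale
            labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
            return labels   # the transition between groups marks the channel head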

  20. Local signaling from a retinal prosthetic in a rodent retinitis pigmentosa model in vivo

    NASA Astrophysics Data System (ADS)

    Fransen, James W.; Pangeni, Gobinda; Pardue, Machelle T.; McCall, Maureen A.

    2014-08-01

    Objective. In clinical trials, retinitis pigmentosa patients implanted with a retinal prosthetic device show enhanced spatial vision, including the ability to read large text and navigate. New prosthetics aim to increase spatial resolution by decreasing pixel/electrode size and limiting current spread. To examine the spatial resolution of a new prosthetic design, we characterized and compared two photovoltaic array (PVA) designs and their interaction with the retina after subretinal implantation in transgenic S334ter line 3 rats (Tg S334ter-3). Approach. PVAs were implanted subretinally at two stages of degeneration and assessed in vivo using extracellular recordings in the superior colliculus (SC). Several aspects of this interaction were evaluated by varying the duration, irradiance and position of a near-infrared laser focused on the PVA. These characteristics included: activation threshold, response linearity, SC signal topography and spatial localization. The major design difference between the two PVA designs is the inclusion of local current returns in the newer design. Main results. When tested in vivo, PVA-evoked response thresholds were independent of pixel/electrode size, but differed between the new and old PVA designs. Response thresholds were independent of implantation age and duration (≤7.5 months). For both prosthesis designs, threshold intensities were within established safety limits. PVA-evoked responses require inner retina synaptic transmission and do not directly activate retinal ganglion cells. The new PVA design evokes local retinal activation, which is not found with the older PVA design that lacks local current returns. Significance. Our study provides in vivo evidence that prosthetics make functional contacts with the inner nuclear layer at several stages of degeneration. The new PVA design enhances local activation within the retina and SC. Together these results predict that the new design can potentially harness the inherent processing within the retina and is likely to produce higher spatial resolution in patients.

  1. Reliability of TMS phosphene threshold estimation: Toward a standardized protocol.

    PubMed

    Mazzi, Chiara; Savazzi, Silvia; Abrahamyan, Arman; Ruzzoli, Manuela

    Phosphenes induced by transcranial magnetic stimulation (TMS) are a subjectively described visual phenomenon employed in basic and clinical research as an index of the excitability of retinotopically organized areas in the brain. Phosphene threshold estimation is a preliminary step in many TMS experiments in visual cognition for setting the appropriate level of TMS doses; however, the lack of a direct comparison of the available methods for phosphene threshold estimation leaves unresolved how reliable those methods are for setting TMS doses. The present work aims to fill this gap. We compared the most common methods for phosphene threshold calculation, namely the Method of Constant Stimuli (MOCS), the Modified Binary Search (MOBS) and the Rapid Estimation of Phosphene Threshold (REPT). In two experiments we tested the reliability of phosphene threshold estimation under each of the three methods, considering the day of administration, participants' expertise in phosphene perception and the sensitivity of each method to the initial values used for the threshold calculation. We found that MOCS and REPT have comparable reliability when estimating phosphene thresholds, while MOBS estimations appear less stable. Based on our results, researchers and clinicians can estimate phosphene thresholds according to MOCS or REPT equally reliably, depending on their specific investigation goals. We suggest several important factors for consideration when calculating phosphene thresholds and describe strategies to adopt in experimental procedures. Copyright © 2017 Elsevier Inc. All rights reserved.

  2. A threshold selection method based on edge preserving

    NASA Astrophysics Data System (ADS)

    Lou, Liantang; Dan, Wei; Chen, Jiaqi

    2015-12-01

    A method of automatic threshold selection for image segmentation is presented. An optimal threshold is selected so that image edges are preserved in the segmentation. The shortcoming of Otsu's method based on gray-level histograms is analyzed. The edge energy function of a bivariate continuous function is expressed as a line integral, and the edge energy function of an image is obtained by discretizing that integral. A method that selects the optimal threshold by maximizing the edge energy function is given. Several experimental results are also presented and compared with Otsu's method.
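
    The idea can be sketched in Python as an exhaustive scan over gray levels that keeps the threshold maximizing an edge-energy score. The score used below, the sum of gradient magnitudes along the boundary of the thresholded mask, is an assumed surrogate for the paper's line-integral energy, which the abstract does not reproduce; an 8-bit grayscale image is assumed.

        import numpy as np

        def edge_energy_threshold(img):
            gy, gx = np.gradient(img.astype(float))
            grad = np.hypot(gx, gy)
            best_t, best_e = 0, -np.inf
            for t in range(1, 255):
                mask = img > t
                # Boundary pixels of the binary result: where the mask changes value.
                edges = np.zeros_like(mask)
                edges[:-1, :] |= mask[:-1, :] != mask[1:, :]
                edges[:, :-1] |= mask[:, :-1] != mask[:, 1:]
                e = grad[edges].sum()
                if e > best_e:
                    best_t, best_e = t, e
            return best_t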

  3. Influence of sound source location on the behavior and physiology of the precedence effect in cats.

    PubMed

    Dent, Micheal L; Tollin, Daniel J; Yin, Tom C T

    2009-08-01

    Psychophysical experiments on the precedence effect (PE) in cats have shown that they localize pairs of auditory stimuli presented from different locations in space based on the spatial position of the stimuli and the interstimulus delay (ISD) between the stimuli in a manner similar to humans. Cats exhibit localization dominance for pairs of transient stimuli with |ISDs| from approximately 0.4 to 10 ms, summing localization for |ISDs| < 0.4 ms and breakdown of fusion for |ISDs| > 10 ms, which is the approximate echo threshold. The neural correlates to the PE have been described in both anesthetized and unanesthetized animals at many levels from auditory nerve to cortex. Single-unit recordings from the inferior colliculus (IC) and auditory cortex of cats demonstrate that neurons respond to both lead and lag sounds at ISDs above behavioral echo thresholds, but the response to the lag is reduced at shorter ISDs, consistent with localization dominance. Here the influence of the relative locations of the leading and lagging sources on the PE was measured behaviorally in a psychophysical task and physiologically in the IC of awake behaving cats. At all configurations of lead-lag stimulus locations, the cats behaviorally exhibited summing localization, localization dominance, and breakdown of fusion. Recordings from the IC of awake behaving cats show neural responses paralleling behavioral measurements. Both behavioral and physiological results suggest systematically shorter echo thresholds when stimuli are further apart in space.

  4. Influence of Sound Source Location on the Behavior and Physiology of the Precedence Effect in Cats

    PubMed Central

    Dent, Micheal L.; Tollin, Daniel J.; Yin, Tom C. T.

    2009-01-01

    Psychophysical experiments on the precedence effect (PE) in cats have shown that they localize pairs of auditory stimuli presented from different locations in space based on the spatial position of the stimuli and the interstimulus delay (ISD) between the stimuli in a manner similar to humans. Cats exhibit localization dominance for pairs of transient stimuli with |ISDs| from ∼0.4 to 10 ms, summing localization for |ISDs| < 0.4 ms and breakdown of fusion for |ISDs| > 10 ms, which is the approximate echo threshold. The neural correlates to the PE have been described in both anesthetized and unanesthetized animals at many levels from auditory nerve to cortex. Single-unit recordings from the inferior colliculus (IC) and auditory cortex of cats demonstrate that neurons respond to both lead and lag sounds at ISDs above behavioral echo thresholds, but the response to the lag is reduced at shorter ISDs, consistent with localization dominance. Here the influence of the relative locations of the leading and lagging sources on the PE was measured behaviorally in a psychophysical task and physiologically in the IC of awake behaving cats. At all configurations of lead-lag stimulus locations, the cats behaviorally exhibited summing localization, localization dominance, and breakdown of fusion. Recordings from the IC of awake behaving cats show neural responses paralleling behavioral measurements. Both behavioral and physiological results suggest systematically shorter echo thresholds when stimuli are further apart in space. PMID:19439668

  5. Expression quantitative trait loci: replication, tissue- and sex-specificity in mice.

    PubMed

    van Nas, Atila; Ingram-Drake, Leslie; Sinsheimer, Janet S; Wang, Susanna S; Schadt, Eric E; Drake, Thomas; Lusis, Aldons J

    2010-07-01

    By treating the transcript abundance as a quantitative trait, gene expression can be mapped to local or distant genomic regions relative to the gene encoding the transcript. Local expression quantitative trait loci (eQTL) generally act in cis (that is, control the expression of only the contiguous structural gene), whereas distal eQTL act in trans. Distal eQTL are more difficult to identify with certainty because significance thresholds are very high, since all regions of the genome must be tested, and because confounding factors such as batch effects can produce false positives. Here, we compare findings from two large genetic crosses between mouse strains C3H/HeJ and C57BL/6J to evaluate the reliability of distal eQTL detection, including "hotspots" influencing the expression of multiple genes in trans. We found that >63% of local eQTL and >18% of distal eQTL were replicable at a threshold of LOD > 4.3 between crosses and 76% of local and >24% of distal eQTL at a threshold of LOD > 6. Additionally, at LOD > 4.3 four tissues studied (adipose, brain, liver, and muscle) exhibited >50% preservation of local eQTL and >17% preservation of distal eQTL. We observed replicated distal eQTL hotspots between the crosses on chromosomes 9 and 17. Finally, >69% of local eQTL and >10% of distal eQTL were preserved in most tissues between sexes. We conclude that most local eQTL are highly replicable between mouse crosses, tissues, and sexes as compared to distal eQTL, which exhibited modest replicability.

  6. A Combined Atmospheric Rivers and Geopotential Height Analysis for the Detection of High Streamflow Event Probability Occurrence in UK and Germany

    NASA Astrophysics Data System (ADS)

    Rosario Conticello, Federico; Cioffi, Francesco; Lall, Upmanu; Merz, Bruno

    2017-04-01

    The role of atmospheric rivers (ARs) in inducing High Streamflow Events (HSEs) in Europe has been confirmed by numerous studies. Here, we define HSEs as streamflows exceeding the 99th percentile of the daily flowrate time series measured at streamflow gauges. Among the indicators of ARs are the Integrated Water Vapor (IWV) and the Integrated Water Vapor Transport (IVT). For both indicators the literature suggests thresholds for identifying ARs. Furthermore, local thresholds of these indices are used to assess the occurrence of HSEs in a given region. Recent research on ARs still leaves several open issues: (1) the literature is not unanimous on which of the two indicators is better; (2) the selection of the thresholds is based on subjective assessments; (3) the predictability of HSEs at the local scale associated with these indices seems to be weak and to exist only in the winter months. In order to address these issues, we propose an original methodology: (i) to choose which of the two indicators is most suitable for HSE prediction; (ii) to select IVT/IWV local thresholds in a more objective way; (iii) to implement an algorithm able to determine whether an IVT/IWV configuration induces HSEs, regardless of the season. In pursuing this goal, besides the IWV and IVT fields, we introduce the geopotential height at 850 hPa (GPH850) field as a further predictor, since it implicitly contains information about the patterns of temperature and the direction and intensity of the winds. The introduction of the GPH850 should help to improve the assessment of the occurrence of HSEs throughout the year. It is also plausible to hypothesize that IVT/IWV local thresholds vary in dependence on the GPH850 configuration. In this study, we propose a model to statistically relate these predictors, IVT/IWV and GPH850, to the simultaneous occurrence of HSEs at one or more streamflow gauges in the UK and Germany. Historical data from 57 streamflow gauges in the UK and 61 streamflow gauges in Germany, as well as reanalysis data of the 850 hPa geopotential fields bounded from 90°W to 70°E and from 20°N to 80°N, are used. The common period is 1960 to 2012. The link between GPH850 and HSEs, and more precisely the identification of the GPH850 states potentially able to generate HSEs, is established by a combined Kohonen network (Self-Organizing Map, SOM) and Event Synchronization approach. Complex network and modularity methods are used to cluster streamflow gauges that share common GPH850 configurations. Then a model based on a conditional Poisson distribution is developed, in which the parameter of the Poisson distribution is assumed to be a nonlinear function of the GPH850 state and of IVT/IWV. This model allows for the identification of the IVT/IWV threshold beyond which the probability of an HSE is highest.
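
    The final modeling step can be sketched with statsmodels as a Poisson regression in which the mean HSE count depends on the discrete GPH850 state and on IVT, with state-IVT interaction terms giving each circulation pattern its own effective IVT sensitivity. The feature construction below is an illustrative assumption, not the authors' specification.

        import numpy as np
        import statsmodels.api as sm

        def fit_hse_model(hse_counts, gph_state, ivt):
            gph_state, ivt = np.asarray(gph_state), np.asarray(ivt)
            # One-hot encode the discrete GPH850 state (first state is the reference)
            # and interact every state with IVT.
            states = np.unique(gph_state)
            onehot = (gph_state[:, None] == states[None, :]).astype(float)
            X = sm.add_constant(np.column_stack([onehot[:, 1:], onehot * ivt[:, None]]))
            return sm.GLM(hse_counts, X, family=sm.families.Poisson()).fit()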

  7. Remote pedestrians detection at night time in FIR Image using contrast filtering and locally projected region based CNN

    NASA Astrophysics Data System (ADS)

    Kim, Taehwan; Kim, Sungho

    2017-02-01

    This paper presents a novel method to detect remote pedestrians. After producing a human-temperature-based brightness enhancement image from the temperature data input, we generate regions of interest (ROIs) with a multiscale contrast-filtering approach that combines a biased hysteresis threshold and clustering with the remote pedestrian's height, pixel area and central position. Afterwards, we conduct ROI refinement based on local vertical and horizontal projections, and ROI limitation based on a weak aspect-ratio constraint, to solve the problem of region expansion in the contrast-filtering stage. Finally, we detect the remote pedestrians by validating the final ROIs using transfer learning with convolutional neural network (CNN) features, followed by non-maximal suppression (NMS) with a strong aspect-ratio constraint to improve detection performance. In the experimental results, we confirmed that the proposed contrast filtering and locally projected region based CNN (CFLP-CNN) outperforms the baseline method by 8% in terms of log-averaged miss rate. The proposed method also provides regions that are better adjusted to the shape and appearance of remote pedestrians, allowing it to detect pedestrians missed by the baseline approach and to split groups of people into individual detections.

  8. LCS-TA to identify similar fragments in RNA 3D structures.

    PubMed

    Wiedemann, Jakub; Zok, Tomasz; Milostan, Maciej; Szachniuk, Marta

    2017-10-23

    In modern structural bioinformatics, comparison of molecular structures, aimed at identifying and assessing their similarities and differences, is one of the most commonly performed procedures. It provides the basis for evaluation of in silico predicted models. It constitutes the preliminary step in searching for structural motifs. In particular, it supports tracing molecular evolution. Faced with an ever-increasing amount of available structural data, researchers need a range of methods enabling comparative analysis of structures from either a global or a local perspective. Herein, we present a new, superposition-independent method which processes pairs of RNA 3D structures to identify their local similarities. The similarity is considered in the context of structure bending and bond rotation, which are described by torsion angles. In the analyzed RNA structures, the method finds the longest continuous segments that show similar torsion within a user-defined threshold. The length of the segment is provided as a local similarity measure. The method has been implemented as the LCS-TA algorithm (Longest Continuous Segments in Torsion Angle space) and is incorporated into our MCQ4Structures application, freely available for download from http://www.cs.put.poznan.pl/tzok/mcq/ . The presented approach ties the torsion-angle-based method of structure analysis to the idea of local similarity identification by handling continuous 3D structure segments. The first method, implemented in MCQ4Structures, has been successfully utilized in the RNA-Puzzles initiative. The second one, originally applied in Euclidean space, is a component of the LGA (Local-Global Alignment) algorithm commonly used in assessing protein models submitted to CASP. This unique combination of concepts implemented in LCS-TA provides a new, local and quantitative perspective on structure quality assessment. A series of computational experiments show the first results of applying our method to the comparison of RNA 3D models. LCS-TA can be used for identifying strengths and weaknesses in the prediction of RNA tertiary structures.
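
    The core search is a single pass over per-residue torsion-angle differences. The Python sketch below finds the longest run whose circular angle difference stays within the threshold; for simplicity it assumes one representative torsion angle per residue, whereas LCS-TA works in the full torsion-angle space.

        import numpy as np

        def longest_similar_segment(torsions_a, torsions_b, threshold_deg=15.0):
            diff = np.abs(np.asarray(torsions_a) - np.asarray(torsions_b)) % 360.0
            diff = np.minimum(diff, 360.0 - diff)      # circular distance in degrees
            ok = diff <= threshold_deg
            best = run = start = best_start = 0
            for i, flag in enumerate(ok):
                run = run + 1 if flag else 0
                if run == 1:
                    start = i
                if run > best:
                    best, best_start = run, start
            return best_start, best                    # segment offset and length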

  9. Stability of radiomic features in CT perfusion maps

    NASA Astrophysics Data System (ADS)

    Bogowicz, M.; Riesterer, O.; Bundschuh, R. A.; Veit-Haibach, P.; Hüllner, M.; Studer, G.; Stieb, S.; Glatz, S.; Pruschy, M.; Guckenberger, M.; Tanadini-Lang, S.

    2016-12-01

    This study aimed to identify a set of stable radiomic parameters in CT perfusion (CTP) maps with respect to CTP calculation factors and image discretization, as an input for future prognostic models of local tumor response to chemo-radiotherapy. Pre-treatment CTP images of eleven patients with oropharyngeal carcinoma and eleven patients with non-small cell lung cancer (NSCLC) were analyzed. 315 radiomic parameters were studied per perfusion map (blood volume, blood flow and mean transit time). Radiomic robustness was investigated with respect to the potentially standardizable (image discretization method, Hounsfield unit (HU) threshold, voxel size and temporal resolution) and non-standardizable (artery contouring and noise threshold) perfusion calculation factors using the intraclass correlation coefficient (ICC). To add value to the future model, radiomic parameters correlated with tumor volume, a well-known predictive factor for local tumor response to chemo-radiotherapy, were excluded from the analysis. The remaining stable radiomic parameters were grouped according to inter-parameter Spearman correlations, and for each group the parameter with the highest ICC was included in the final set. The acceptance level was 0.9 for the ICC and 0.7 for the correlation. The image discretization methods using a fixed number of bins or fixed intervals gave similar numbers of stable radiomic parameters (around 40%). The potentially standardizable factors introduced more variability into radiomic parameters than the non-standardizable ones, with 56-98% and 43-58% instability rates, respectively. The highest variability was observed for voxel size (instability rate >97% for both patient cohorts). Without standardization of CTP calculation factors none of the studied radiomic parameters were stable. After standardization with respect to the non-standardizable factors, ten radiomic parameters were stable for both patient cohorts after correction for inter-parameter correlations. Voxel size, image discretization, HU threshold and temporal resolution have to be standardized to build a reliable predictive model based on CTP radiomics analysis.

  10. Rainfall control of debris-flow triggering in the Réal Torrent, Southern French Prealps

    NASA Astrophysics Data System (ADS)

    Bel, Coraline; Liébault, Frédéric; Navratil, Oldrich; Eckert, Nicolas; Bellot, Hervé; Fontaine, Firmin; Laigle, Dominique

    2017-08-01

    This paper investigates the occurrence of debris flow due to rainfall forcing in the Réal Torrent, a very active debris flow-prone catchment in the Southern French Prealps. The study is supported by a 4-year record of flow responses and rainfall events, from three high-frequency monitoring stations equipped with geophones, flow stage sensors, digital cameras, and rain gauges measuring rainfall at 5-min intervals. The classic method of rainfall intensity-duration (ID) threshold was used, and a specific emphasis was placed on the objective identification of rainfall events, as well as on the discrimination of flow responses observed above the ID threshold. The results show that parameters used to identify rainfall events significantly affect the ID threshold and are likely to explain part of the threshold variability reported in the literature. This is especially the case regarding the minimum duration of rain interruption (MDRI) between two distinct rainfall events. In the Réal Torrent, a 3-h MDRI appears to be representative of the local rainfall regime. A systematic increase in the ID threshold with drainage area was also observed from the comparison of the three stations, as well as from the compilation of data from experimental debris-flow catchments. A logistic regression used to separate flow responses above the ID threshold, revealed that the best predictors are the 5-min maximum rainfall intensity, the 48-h antecedent rainfall, the rainfall amount and the number of days elapsed since the end of winter (used as a proxy of sediment supply). This emphasizes the critical role played by short intense rainfall sequences that are only detectable using high time-resolution rainfall records. It also highlights the significant influence of antecedent conditions and the seasonal fluctuations of sediment supply.
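
    The event-identification step can be sketched directly in Python: scan the 5-min record, close an event whenever a dry spell reaches the MDRI (36 steps for 3 h), and report each event's duration and mean intensity for the ID analysis. Variable names and the handling of event boundaries are illustrative.

        def rainfall_events(rain_mm, mdri_steps=36, step_h=5 / 60):
            events, start, dry = [], None, 0
            for i, r in enumerate(rain_mm):
                if r > 0:
                    if start is None:
                        start = i
                    end, dry = i, 0
                elif start is not None:
                    dry += 1
                    if dry >= mdri_steps:              # gap long enough: close the event
                        events.append((start, end))
                        start, dry = None, 0
            if start is not None:
                events.append((start, end))
            # (duration in hours, mean intensity in mm/h) for each event.
            return [((e - s + 1) * step_h,
                     sum(rain_mm[s:e + 1]) / ((e - s + 1) * step_h))
                    for s, e in events]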

  11. California sea lion (Zalophus californianus) aerial hearing sensitivity measured using auditory steady-state response and psychophysical methods.

    PubMed

    Mulsow, Jason; Finneran, James J; Houser, Dorian S

    2011-04-01

    Although electrophysiological methods of measuring the hearing sensitivity of pinnipeds are not yet as refined as those for dolphins and porpoises, they appear to be a promising supplement to traditional psychophysical procedures. In order to further standardize electrophysiological methods with pinnipeds, a within-subject comparison of psychophysical and auditory steady-state response (ASSR) measures of aerial hearing sensitivity was conducted with a 1.5-yr-old California sea lion. The psychophysical audiogram was similar to those previously reported for otariids, with a U-shape, and thresholds near 10 dB re 20 μPa at 8 and 16 kHz. ASSR thresholds measured using both single and multiple simultaneous amplitude-modulated tones closely reproduced the psychophysical audiogram, although the mean ASSR thresholds were elevated relative to psychophysical thresholds. Differences between psychophysical and ASSR thresholds were greatest at the low- and high-frequency ends of the audiogram. Thresholds measured using the multiple ASSR method were not different from those measured using the single ASSR method. The multiple ASSR method was more rapid than the single ASSR method, and allowed for threshold measurements at seven frequencies in less than 20 min. The multiple ASSR method may be especially advantageous for hearing sensitivity measurements with otariid subjects that are untrained for psychophysical procedures.

  12. Magnetic Flux Leakage Sensing and Artificial Neural Network Pattern Recognition-Based Automated Damage Detection and Quantification for Wire Rope Non-Destructive Evaluation.

    PubMed

    Kim, Ju-Won; Park, Seunghee

    2018-01-02

    In this study, a magnetic flux leakage (MFL) method, known to be a suitable non-destructive evaluation (NDE) method for continuum ferromagnetic structures, was used to detect local damage when inspecting steel wire ropes. To demonstrate the proposed damage detection method through experiments, a multi-channel MFL sensor head was fabricated using a Hall sensor array and magnetic yokes adapted to the wire rope. To prepare the damaged wire-rope specimens, artificial damage of several different sizes was inflicted on the wire ropes. The MFL sensor head was used to scan the damaged specimens and measure the magnetic flux signals. After obtaining the signals, a series of signal processing steps, including an enveloping process based on the Hilbert transform (HT), was performed to make the MFL signals easier to recognize by reducing unexpected noise. The enveloped signals were then analyzed for objective damage detection by comparing them with a threshold established from the generalized extreme value (GEV) distribution. The detected MFL signals that exceeded the threshold were analyzed quantitatively by extracting magnetic features from them. To improve the quantitative analysis, damage indexes based on the relationship between the enveloped MFL signal and the threshold value were also utilized, along with a general damage index for the MFL method. The detected MFL signals for each damage type were quantified using the proposed damage indexes and the general damage indexes for the MFL method. Finally, an artificial neural network (ANN) based multi-stage pattern recognition method using the extracted multi-scale damage indexes was implemented to automatically estimate the severity of the damage. The accuracy and reliability of the MFL-based automated wire-rope NDE method were then evaluated by comparing the repeatedly estimated damage sizes with the actual damage sizes.
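
    The envelope-and-threshold chain can be sketched with SciPy: the envelope is the magnitude of the analytic signal from the Hilbert transform, and the alarm threshold is a high quantile of a GEV distribution fitted to damage-free baseline data. The baseline-fitting strategy and the 99.9th-percentile choice are assumptions; the paper's exact GEV procedure is not detailed in the abstract.

        import numpy as np
        from scipy.signal import hilbert
        from scipy.stats import genextreme

        def detect_damage(signal, baseline, p=0.999):
            envelope = np.abs(hilbert(signal))         # Hilbert-transform envelope
            shape, loc, scale = genextreme.fit(np.abs(hilbert(baseline)))
            threshold = genextreme.ppf(p, shape, loc=loc, scale=scale)
            return envelope > threshold, threshold     # exceedance mask and threshold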

  13. Ship Detection from Ocean SAR Image Based on Local Contrast Variance Weighted Information Entropy

    PubMed Central

    Huang, Yulin; Pei, Jifang; Zhang, Qian; Gu, Qin; Yang, Jianyu

    2018-01-01

    Ship detection from synthetic aperture radar (SAR) images is one of the crucial issues in maritime surveillance. However, due to varying ocean waves and the strong echo of the sea surface, it is very difficult to detect ships against heterogeneous and strong clutter backgrounds. In this paper, an innovative ship detection method is proposed to effectively distinguish vessels from complex backgrounds in a SAR image. First, the input SAR image is pre-screened by the maximally-stable extremal region (MSER) method, which obtains the ship candidate regions with low computational complexity. Then, the proposed local contrast variance weighted information entropy (LCVWIE) is adopted to evaluate the complexity of those candidate regions and their dissimilarity from their neighborhoods. Finally, the LCVWIE values of the candidate regions are compared with an adaptive threshold to obtain the final detection result. Experimental results based on measured ocean SAR images have shown that the proposed method achieves stable detection performance both in strong clutter and in heterogeneous backgrounds. Meanwhile, it has a low computational complexity compared with some existing detection methods. PMID:29652863

  14. Model-free estimation of the psychometric function

    PubMed Central

    Żychaluk, Kamila; Foster, David H.

    2009-01-01

    A subject's response to the strength of a stimulus is described by the psychometric function, from which summary measures, such as a threshold or slope, may be derived. Traditionally, this function is estimated by fitting a parametric model to the experimental data, usually the proportion of successful trials at each stimulus level. Common models include the Gaussian and Weibull cumulative distribution functions. This approach works well if the model is correct, but it can mislead if not. In practice, the correct model is rarely known. Here, a nonparametric approach based on local linear fitting is advocated. No assumption is made about the true model underlying the data, except that the function is smooth. The critical role of the bandwidth is identified, and its optimum value estimated by a cross-validation procedure. As a demonstration, seven vision and hearing data sets were fitted by the local linear method and by several parametric models. The local linear method frequently performed better and never worse than the parametric ones. Supplemental materials for this article can be downloaded from app.psychonomic-journals.org/content/supplemental. PMID:19633355
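
    A minimal Python sketch of the approach: a local linear fit with a Gaussian kernel, evaluated at any stimulus level, with the bandwidth chosen by leave-one-out cross-validation. Binomial weighting, link functions and other refinements of the published method are omitted.

        import numpy as np

        def local_linear(x, y, x0, h):
            # Weighted least squares with Gaussian kernel weights centered at x0.
            sw = np.sqrt(np.exp(-0.5 * ((x - x0) / h) ** 2))
            X = np.column_stack([np.ones_like(x), x - x0])
            beta = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
            return beta[0]                             # intercept = fitted value at x0

        def loo_bandwidth(x, y, candidates):
            # Pick the bandwidth minimizing leave-one-out squared prediction error.
            def loo_err(h):
                return np.mean([(y[i] - local_linear(np.delete(x, i),
                                                     np.delete(y, i), x[i], h)) ** 2
                                for i in range(len(x))])
            return min(candidates, key=loo_err)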

  15. Mesh refinement strategy for optimal control problems

    NASA Astrophysics Data System (ADS)

    Paiva, L. T.; Fontes, F. A. C. C.

    2013-10-01

    Direct methods are becoming the most widely used technique to solve nonlinear optimal control problems. Regular time meshes having equidistant spacing are frequently used. However, in some cases these meshes cannot cope accurately with nonlinear behavior. One way to improve the solution is to select a new mesh with a greater number of nodes. Another way involves adaptive mesh refinement. In this case, the mesh nodes have non-equidistant spacing, which allows non-uniform node collocation. In the method presented in this paper, a time-mesh refinement strategy based on the local error is developed. After computing a solution on a coarse mesh, the local error is evaluated, which gives information about the subintervals of the time domain where refinement is needed. This procedure is repeated until the local error falls below a user-specified threshold. The technique is applied to solve the car-like vehicle problem aiming at minimum consumption. The approach developed in this paper leads to results with greater accuracy and yet lower overall computational time compared to time meshes having equidistant spacing.
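
    The strategy can be sketched in Python as a loop that solves on the current mesh, estimates a per-subinterval local error, and bisects every subinterval whose error exceeds the tolerance. The functions solve and local_error are placeholders for the NLP solver and the error estimator, which the abstract does not specify.

        import numpy as np

        def refine_mesh(mesh, solve, local_error, tol=1e-4, max_iter=10):
            for _ in range(max_iter):
                sol = solve(mesh)
                err = local_error(sol, mesh)           # one estimate per subinterval
                bad = err > tol
                if not np.any(bad):
                    return mesh, sol                   # all local errors below tolerance
                # Insert a midpoint node into every subinterval failing the tolerance.
                mids = (mesh[:-1] + mesh[1:])[bad] / 2
                mesh = np.sort(np.concatenate([mesh, mids]))
            return mesh, solve(mesh)                   # final solve on the last mesh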

  16. Receptive fields selection for binary feature description.

    PubMed

    Fan, Bin; Kong, Qingqun; Trzcinski, Tomasz; Wang, Zhiheng; Pan, Chunhong; Fua, Pascal

    2014-06-01

    Feature description of local image patches is widely used in computer vision. While the conventional way to design a local descriptor is based on expert experience and knowledge, learning-based methods for designing local descriptors are becoming more and more popular because of their good performance and data-driven nature. This paper proposes a novel data-driven method for designing a binary feature descriptor, which we call the receptive fields descriptor (RFD). Technically, RFD is constructed by thresholding the responses of a set of receptive fields, which are selected from a large number of candidates according to their distinctiveness and correlations in a greedy way. Using two different kinds of receptive fields (namely rectangular pooling areas and Gaussian pooling areas) for selection, we obtain two binary descriptors, RFDR and RFDG, accordingly. Image matching experiments on the well-known patch data set and the Oxford data set demonstrate that RFD significantly outperforms the state-of-the-art binary descriptors, and is comparable with the best float-valued descriptors at a fraction of the processing time. Finally, experiments on object recognition tasks confirm that both RFDR and RFDG successfully bridge the performance gap between binary descriptors and their floating-point competitors.

  17. Direct Injection of Blood Products Versus Gelatin Sponge as a Technique for Local Hemostasis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haaga, John; Rahim, Shiraz, E-mail: Shiraz.rahim@uhhospitals.org

    Purpose: To provide a method of reducing the risk of minimally invasive procedures in patients with abnormal hemostasis, and to evaluate the efficacy of direct fresh frozen plasma (FFP) injection through the procedure needle tract compared to Gelfoam (gelatin sponge) administration. Materials and Methods: Eighty patients with an elevated international normalized ratio (INR) undergoing minimally invasive procedures under imaging guidance were selected retrospectively. Forty patients had received Gelfoam as a means of tract embolization during the procedure. The other 40 received local FFP through the needle tract. The number of complications and clinically significant bleeding events was recorded. A threshold of 30 cc of blood loss after a procedure was used to identify excess bleeding. Results: No patients experienced clinically significant bleeding after administration of FFP. Five patients experienced postoperative drops in hemoglobin or hematomas after administration of Gelfoam. Conclusion: Local injection of blood products can reduce postprocedure bleeding in patients undergoing minimally invasive procedures and provides a safe alternative to the use of synthetic fibrin plugs.

  18. A liver cirrhosis classification on B-mode ultrasound images by the use of higher order local autocorrelation features

    NASA Astrophysics Data System (ADS)

    Sasaki, Kenya; Mitani, Yoshihiro; Fujita, Yusuke; Hamamoto, Yoshihiko; Sakaida, Isao

    2017-02-01

    In this paper, we propose the use of higher-order local autocorrelation (HLAC) features to classify liver cirrhosis in regions of interest (ROIs) extracted from B-mode ultrasound images. In a previous study, we tried to classify liver cirrhosis using a Gabor filter based approach; however, our preliminary experiments showed that the classification performance of the Gabor feature was poor. We therefore examined HLAC features for accurate liver cirrhosis classification. The experimental results show the effectiveness of HLAC features compared with the Gabor feature. Furthermore, the classification performance of HLAC features improves further when a binary image produced by an adaptive thresholding method is used.

  19. Many-body localization in a long range XXZ model with random-field

    NASA Astrophysics Data System (ADS)

    Li, Bo

    2016-12-01

    Many-body localization (MBL) in a long-range-interaction XXZ model with a random field is investigated. Using the exact diagonalization method, the MBL phase diagram is obtained for different tuning parameters and interaction ranges. The finite-size phase diagram supplies strong evidence that the threshold interaction exponent is α = 2. The tuning parameter Δ efficiently shifts the MBL edge in high-energy-density states, so the system can be driven from the thermal phase to the MBL phase by changing Δ. The energy-level statistics are consistent with the MBL phase diagram, although they cannot detect the thermal phase correctly in the extremely long range case.

  20. Modeling epidemic spread with awareness and heterogeneous transmission rates in networks.

    PubMed

    Shang, Yilun

    2013-06-01

    During an epidemic outbreak in a human population, susceptibility to infection can be reduced by raising awareness of the disease. In this paper, we investigate the effects of three forms of awareness (i.e., contact, local, and global) on the spread of a disease in a random network. Connectivity-correlated transmission rates are assumed. By using mean-field theory and numerical simulation, we show that both local and contact awareness can raise the epidemic thresholds while global awareness cannot, which mirrors the recent results of Wu et al. The obtained results point out that individual behavior in the presence of an infectious disease has a great influence on the epidemic dynamics. Our method enriches mean-field analysis in epidemic models.

  1. How to determine an optimal threshold to classify real-time crash-prone traffic conditions?

    PubMed

    Yang, Kui; Yu, Rongjie; Wang, Xuesong; Quddus, Mohammed; Xue, Lifang

    2018-08-01

    One of the proactive approaches to reducing traffic crashes is to identify hazardous traffic conditions that may lead to a crash, known as real-time crash prediction. Threshold selection is one of the essential steps of real-time crash prediction: it provides the cut-off point for the posterior probability that separates potential crash warnings from normal traffic conditions, once a crash risk evaluation model has produced the probability of a crash occurring given a specific traffic condition. There is, however, a dearth of research on how to determine an optimal threshold effectively, and the few studies that discuss the predictive performance of such models have chosen thresholds subjectively. Subjective methods cannot automatically identify the optimal thresholds under different traffic and weather conditions in real applications. Thus, a theoretical method for selecting the threshold value is necessary to avoid subjective judgments. The purpose of this study is to provide a theoretical method for automatically identifying the optimal threshold. Considering the random effects of variable factors across roadway segments, a mixed logit model was used to develop the crash risk evaluation model and evaluate the crash risk. Cross-entropy, between-class variance and other theories were investigated to empirically identify the optimal threshold, and K-fold cross-validation was used to validate the performance of the proposed threshold selection methods with the help of several evaluation criteria. The results indicate that (i) the mixed logit model achieves good performance, and (ii) the threshold selected by the minimum cross-entropy method outperforms those of the other methods according to the criteria. This method can automatically identify thresholds in crash prediction by minimizing the cross-entropy between the original dataset, with its continuous probability of a crash occurring, and the binarized dataset obtained after using the threshold to separate potential crash warnings from normal traffic conditions. Copyright © 2018 Elsevier Ltd. All rights reserved.
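
    A minimal Python sketch of the minimum cross-entropy selection: given the model's continuous crash probabilities, scan candidate cut-offs and keep the one whose binarized warnings have the lowest cross-entropy against those probabilities. The candidate grid is an illustrative assumption.

        import numpy as np

        def min_cross_entropy_threshold(prob, candidates=np.linspace(0.01, 0.99, 99)):
            eps = 1e-12
            best_t, best_ce = None, np.inf
            for t in candidates:
                warn = (prob >= t).astype(float)       # binarized warning labels
                ce = -np.mean(warn * np.log(prob + eps)
                              + (1 - warn) * np.log(1 - prob + eps))
                if ce < best_ce:
                    best_t, best_ce = t, ce
            return best_t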

  2. Accurate Image Analysis of the Retina Using Hessian Matrix and Binarisation of Thresholded Entropy with Application of Texture Mapping

    PubMed Central

    Yin, Xiaoxia; Ng, Brian W-H; He, Jing; Zhang, Yanchun; Abbott, Derek

    2014-01-01

    In this paper, we demonstrate a comprehensive method for segmenting the retinal vasculature in camera images of the fundus. This is of interest for the diagnostics of eye diseases that affect the blood vessels in the eye. In a departure from other state-of-the-art methods, vessels are first pre-grouped together with graph partitioning, using a spectral clustering technique based on morphological features. Local curvature is estimated over the whole image using the eigenvalues of the Hessian matrix in order to enhance the vessels, which appear as ridges in images of the retina. The result is combined with a binarized image, obtained using a threshold that maximizes entropy, to extract the retinal vessels from the background. Speckle-type noise is reduced by applying a connectivity constraint on the extracted curvature-based enhanced image. This constraint is varied over the image according to each region's predominant blood-vessel size. The resultant image exhibits the central light reflex of retinal arteries and veins, which prevents the segmentation of whole vessels. To address this, the earlier entropy-based binarization technique is repeated on the original image, but crucially with a different threshold, to incorporate the central-reflex vessels. The final segmentation is achieved by combining the segmented vessels with and without the central light reflex. We carry out our approach on DRIVE and REVIEW, two publicly available collections of retinal images for research purposes. The obtained results are compared with state-of-the-art methods in the literature using metrics such as sensitivity (true positive rate), selectivity (false positive rate) and accuracy for the DRIVE images, and measured vessel widths for the REVIEW images. Our approach outperforms the methods in the literature. PMID:24781033
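
    Two ingredients of the pipeline can be sketched in Python: Hessian-eigenvalue ridge enhancement (scikit-image's Frangi filter is used here as a stand-in for the paper's own Hessian computation) followed by a maximum-entropy (Kapur) threshold computed from the gray-level histogram. A float image scaled to [0, 1] is assumed.

        import numpy as np
        from skimage.filters import frangi

        def kapur_threshold(img):
            # Maximum-entropy threshold on a 256-bin histogram of values in [0, 1].
            hist, _ = np.histogram(img.ravel(), bins=256, range=(0.0, 1.0))
            p = hist / hist.sum()
            best_t, best_h = 0, -np.inf
            for t in range(1, 255):
                w0, w1 = p[:t].sum(), p[t:].sum()
                if w0 <= 0 or w1 <= 0:
                    continue
                p0, p1 = p[:t] / w0, p[t:] / w1
                h = -(p0[p0 > 0] * np.log(p0[p0 > 0])).sum() \
                    - (p1[p1 > 0] * np.log(p1[p1 > 0])).sum()
                if h > best_h:
                    best_t, best_h = t, h
            return best_t / 255.0

        def segment_vessels(gray):
            ridges = frangi(gray)                      # Hessian-eigenvalue enhancement
            ridges = ridges / (ridges.max() + 1e-12)
            return ridges > kapur_threshold(ridges)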

  3. Localization Transition Induced by Learning in Random Searches

    NASA Astrophysics Data System (ADS)

    Falcón-Cortés, Andrea; Boyer, Denis; Giuggioli, Luca; Majumdar, Satya N.

    2017-10-01

    We solve an adaptive search model where a random walker or Lévy flight stochastically resets to previously visited sites on a d-dimensional lattice containing one trapping site. Because of reinforcement, a phase transition occurs when the resetting rate crosses a threshold above which nondiffusive stationary states emerge, localized around the inhomogeneity. The threshold depends on the trapping strength and on the walker's return probability in the memoryless case. The transition belongs to the same class as the self-consistent theory of Anderson localization. These results show that similarly to many living organisms and unlike the well-studied Markovian walks, non-Markov movement processes can allow agents to learn about their environment and promise to bring adaptive solutions in search tasks.

  4. An augmented parametric response map with consideration of image registration error: towards guidance of locally adaptive radiotherapy

    NASA Astrophysics Data System (ADS)

    Lausch, Anthony; Chen, Jeff; Ward, Aaron D.; Gaede, Stewart; Lee, Ting-Yim; Wong, Eugene

    2014-11-01

    Parametric response map (PRM) analysis is a voxel-wise technique for predicting overall treatment outcome, which shows promise as a tool for guiding personalized locally adaptive radiotherapy (RT). However, image registration error (IRE) introduces uncertainty into this analysis which may limit its use for guiding RT. Here we extend the PRM method to include an IRE-related PRM analysis confidence interval and also incorporate multiple graded classification thresholds to facilitate visualization. A Gaussian IRE model was used to compute an expected value and confidence interval for PRM analysis. The augmented PRM (A-PRM) was evaluated using CT-perfusion functional image data from patients treated with RT for glioma and hepatocellular carcinoma. Known rigid IREs were simulated by applying one thousand different rigid transformations to each image set. PRM and A-PRM analyses of the transformed images were then compared to analyses of the original images (ground truth) in order to investigate the two methods in the presence of controlled IRE. The A-PRM was shown to help visualize and quantify IRE-related analysis uncertainty. The use of multiple graded classification thresholds also provided additional contextual information which could be useful for visually identifying adaptive RT targets (e.g. sub-volume boosts). The A-PRM should facilitate reliable PRM guided adaptive RT by allowing the user to identify if a patient’s unique IRE-related PRM analysis uncertainty has the potential to influence target delineation.

  5. Experimental and environmental factors affect spurious detection of ecological thresholds

    USGS Publications Warehouse

    Daily, Jonathan P.; Hitt, Nathaniel P.; Smith, David; Snyder, Craig D.

    2012-01-01

    Threshold detection methods are increasingly popular for assessing nonlinear responses to environmental change, but their statistical performance remains poorly understood. We simulated linear change in stream benthic macroinvertebrate communities and evaluated the performance of commonly used threshold detection methods based on model fitting (piecewise quantile regression [PQR]), data partitioning (nonparametric change point analysis [NCPA]), and a hybrid approach (significant zero crossings [SiZer]). We demonstrated that false detection of ecological thresholds (type I errors) and inferences on threshold locations are influenced by sample size, rate of linear change, and frequency of observations across the environmental gradient (i.e., sample-environment distribution, SED). However, the relative importance of these factors varied among statistical methods and between inference types. False detection rates were influenced primarily by user-selected parameters for PQR (τ) and SiZer (bandwidth) and secondarily by sample size (for PQR) and SED (for SiZer). In contrast, the location of reported thresholds was influenced primarily by SED. Bootstrapped confidence intervals for NCPA threshold locations revealed strong correspondence to SED. We conclude that the choice of statistical methods for threshold detection should be matched to experimental and environmental constraints to minimize false detection rates and avoid spurious inferences regarding threshold location.

  6. Species extinction thresholds in the face of spatially correlated periodic disturbance.

    PubMed

    Liao, Jinbao; Ying, Zhixia; Hiebeler, David E; Wang, Yeqiao; Takada, Takenori; Nijs, Ivan

    2015-10-20

    The spatial correlation of disturbance is gaining attention in landscape ecology, but knowledge is still lacking on how species traits determine extinction thresholds under spatially correlated disturbance regimes. Here we develop a pair approximation model to explore species extinction risk in a lattice-structured landscape subject to aggregated periodic disturbance. Increasing disturbance extent and frequency accelerated population extinction irrespective of whether dispersal was local or global. Spatial correlation of disturbance likewise increased species extinction risk, but only for local dispersers. This indicates that models based on randomly simulated disturbances (e.g., mean-field or non-spatial models) may underestimate real extinction rates. Compared to local dispersal, species with global dispersal tolerated more severe disturbance, suggesting that the spatial correlation of disturbance favors long-range dispersal from an evolutionary perspective. Following disturbance, intraspecific competition greatly enhanced the extinction risk of distance-limited dispersers, while it surprisingly did not influence the extinction thresholds of global dispersers, apart from decreasing population density to some degree. As species respond differently to disturbance regimes with different spatiotemporal properties, different regimes may accommodate different species.

  7. Development of automated extraction method of biliary tract from abdominal CT volumes based on local intensity structure analysis

    NASA Astrophysics Data System (ADS)

    Koga, Kusuto; Hayashi, Yuichiro; Hirose, Tomoaki; Oda, Masahiro; Kitasaka, Takayuki; Igami, Tsuyoshi; Nagino, Masato; Mori, Kensaku

    2014-03-01

    In this paper, we propose an automated biliary tract extraction method from abdominal CT volumes. The biliary tract is the path by which bile is transported from the liver to the duodenum. No method has been reported for the automated extraction of the biliary tract from common contrast-enhanced CT volumes. Our method consists of three steps: (1) extraction of extrahepatic bile duct (EHBD) candidate regions, (2) extraction of intrahepatic bile duct (IHBD) candidate regions, and (3) combination of these candidate regions. The IHBD has linear structures, and its intensities are low in CT volumes. We use a dark linear structure enhancement (DLSE) filter, based on a local intensity structure analysis using the eigenvalues of the Hessian matrix, for the IHBD candidate region extraction. The EHBD region is extracted using a thresholding process and a connected component analysis. In the combination process, we connect the IHBD candidate regions to each EHBD candidate region and select a bile duct region from the connected candidate regions. We applied the proposed method to 22 CT volumes. The average Dice coefficient of the extraction results was 66.7%.
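
    The DLSE idea can be sketched in 2-D: smooth the image, build the Hessian from finite differences, and respond where the larger eigenvalue is strongly positive (a dark ridge) while the smaller one stays near zero (ruling out blobs). The combination rule lam1 - |lam2| and the single-scale Gaussian are our simplifications; the paper works in 3-D on CT volumes.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dark_line_filter(image, sigma=1.5):
    """Respond to dark linear structures via Hessian eigenvalues (2-D sketch)."""
    f = gaussian_filter(image.astype(float), sigma)
    fy, fx = np.gradient(f)               # first derivatives (axis 0 = rows)
    fyy, fyx = np.gradient(fy)
    fxy, fxx = np.gradient(fx)
    tr = fxx + fyy                        # eigenvalues of [[fxx, fxy], [fyx, fyy]]
    det = fxx * fyy - fxy * fyx
    disc = np.sqrt(np.maximum(tr ** 2 / 4.0 - det, 0.0))
    lam1, lam2 = tr / 2.0 + disc, tr / 2.0 - disc   # lam1 >= lam2
    # a dark line has large positive curvature across it and ~0 along it
    return np.where(lam1 > 0, lam1 - np.abs(lam2), 0.0)
```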

  8. Peaks Over Threshold (POT): A methodology for automatic threshold estimation using goodness of fit p-value

    NASA Astrophysics Data System (ADS)

    Solari, Sebastián; Egüen, Marta; Polo, María José; Losada, Miguel A.

    2017-04-01

    Threshold estimation in the Peaks Over Threshold (POT) method and the impact of the estimation method on the calculation of high return period quantiles and their uncertainty (or confidence intervals) are issues that are still unresolved. In the past, methods based on goodness of fit tests and EDF-statistics have yielded satisfactory results, but their use has not yet been systematized. This paper proposes a methodology for automatic threshold estimation, based on the Anderson-Darling EDF-statistic and goodness of fit test. When combined with bootstrapping techniques, this methodology can be used to quantify both the uncertainty of threshold estimation and its impact on the uncertainty of high return period quantiles. This methodology was applied to several simulated series and to four precipitation/river flow data series. The results obtained confirmed its robustness. For the measured series, the estimated thresholds corresponded to those obtained by nonautomatic methods. Moreover, even though the uncertainty of the threshold estimation was high, this did not have a significant effect on the width of the confidence intervals of high return period quantiles.
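
    A minimal version of the automatic threshold scan might look as follows: walk through candidate thresholds from low to high, fit a generalized Pareto distribution (GPD) to the excesses, and keep the lowest threshold whose goodness-of-fit p-value is acceptable. The Kolmogorov-Smirnov test here is a stand-in for the Anderson-Darling statistic used in the paper, and the 0.10 acceptance level, the candidate grid, and the 30-excess minimum are illustrative assumptions.

```python
import numpy as np
from scipy import stats

def auto_pot_threshold(x, p_accept=0.10):
    """Lowest threshold whose GPD fit to the excesses passes a GOF test."""
    x = np.asarray(x, dtype=float)
    for u in np.quantile(x, np.linspace(0.50, 0.98, 25)):
        exc = x[x > u] - u
        if exc.size < 30:                 # too few excesses to fit reliably
            break
        c, _, scale = stats.genpareto.fit(exc, floc=0.0)
        p = stats.kstest(exc, "genpareto", args=(c, 0.0, scale)).pvalue
        if p > p_accept:
            return float(u)
    return float(np.quantile(x, 0.98))    # fallback: a high fixed quantile
```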

  9. SU-E-T-647: Quality Assurance of VMAT by Gamma Analysis Dependence On Low-Dose Threshold

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Song, J; Kim, M; Lee, S

    2015-06-15

    Purpose: The AAPM TG-119 instructed institutions to use a low-dose threshold (LDT) of 10% or an ROI determined by the jaw when collecting gamma analysis QA data of planar dose distributions. Also, based on a survey by Nelms and Simon, more than 70% of institutions use an LDT between 0% and 10% for gamma analysis. However, there are no clinical data that quantitatively demonstrate the impact of the LDT on the gamma index. Therefore, we performed a gamma analysis with LDTs of 0% to 15% according to both global and local normalization and different acceptance criteria: 3%/3 mm, 2%/2 mm, and 1%/1 mm. Methods: A total of 30 treatment plans—10 head and neck, 10 brain, and 10 prostate cancer cases—were randomly and retrospectively selected from the Varian Eclipse TPS. For the gamma analysis, a predicted portal image was acquired through a portal dose calculation algorithm in the Eclipse TPS, and a measured portal image was obtained using a Varian Clinac iX and an EPID. The gamma analysis was then performed using the Portal Dosimetry software. Results: For global normalization, the gamma passing rate (%GP) decreased as the LDT increased, and all LDTs exhibited a %GP above 95% for both the 3%/3 mm and 2%/2 mm criteria. However, for local normalization, the %GP increased as the LDT increased. The gamma passing rate with an LDT of 10% increased by 6.86%, 9.22% and 6.14% compared with an LDT of 0% for the head and neck, brain and prostate cases, respectively, under the 3%/3 mm criteria. Conclusion: Applying an LDT under global normalization does not have a critical impact on judging patient-specific QA results. However, the LDT for local normalization should be selected carefully, because applying it can cause the average %GP to increase rapidly.
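
    For readers unfamiliar with the role of the LDT, a brute-force 2-D gamma computation makes it concrete: reference pixels below the LDT are simply excluded from the passing-rate statistics, and the only difference between global and local normalization is the denominator of the dose-difference term. This is a simplified sketch (square search window, no interpolation), not the Portal Dosimetry implementation.

```python
import numpy as np

def gamma_pass_rate(ref, meas, dd=0.03, dta=3.0, pixel_mm=1.0,
                    ldt=0.10, local=False):
    """Percent of evaluated pixels with gamma <= 1 (Low's gamma index)."""
    r = int(np.ceil(dta / pixel_mm))
    dmax, (ny, nx) = ref.max(), ref.shape
    passed = total = 0
    for i in range(ny):
        for j in range(nx):
            if ref[i, j] < ldt * dmax:
                continue                  # excluded by the low-dose threshold
            norm = ref[i, j] if local else dmax
            best = np.inf
            for di in range(-r, r + 1):
                for dj in range(-r, r + 1):
                    ii, jj = i + di, j + dj
                    if not (0 <= ii < ny and 0 <= jj < nx):
                        continue
                    dose = (meas[ii, jj] - ref[i, j]) / (dd * norm)
                    dist = pixel_mm * np.hypot(di, dj) / dta
                    best = min(best, dose * dose + dist * dist)
            passed += best <= 1.0
            total += 1
    return 100.0 * passed / total
```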

  10. Underwater hearing and sound localization with and without an air interface.

    PubMed

    Shupak, Avi; Sharoni, Zohara; Yanir, Yoav; Keynan, Yoav; Alfie, Yechezkel; Halpern, Pinchas

    2005-01-01

    It has been proposed that underwater hearing acuity and sound localization are improved by the presence of an air interface around the pinnae and inside the external ear canals. Hearing sensitivity and the ability to localize sound sources are reduced underwater. The resonance frequency of the external ear is lowered when the external ear canal is filled with water, and the impedance-matching ability of the middle ear is significantly reduced due to elevation of the ambient pressure, the water-mass load on the tympanic membrane, and the addition of a fluid-air interface during submersion. Sound lateralization on land is largely explained by the mechanisms of interaural intensity differences and interaural temporal or phase differences. During submersion, these differences are largely lost due to the increase in underwater sound velocity and cancellation of the head's acoustic shadow effect because of the similarity between the impedance of the skull and the surrounding water. Ten scuba divers wearing either a regular opaque face mask or an opaque ProEar 2000 (Safe Dive, Ltd., Hofit, Israel) mask, which allows air at ambient pressure in and around the ear, made a dive to a depth of 3 m in the open sea. Four underwater speakers arranged on the horizontal plane at 90-degree intervals and at a distance of 5 m from the diver were used for testing pure-tone hearing thresholds (PTHT), the reception threshold for the recorded sound of a rubber-boat engine, and sound localization. For sound localization, the sound of the rubber boat's engine was randomly delivered by one speaker at a time at 40 dB HL above the reception threshold, and the diver was asked to point to the sound source. The azimuth was measured by the diver's companion using a navigation board. Underwater PTHT with both masks were significantly higher for frequencies of 250 to 6000 Hz when compared with the thresholds on land (p < 0.0001). No differences were found in the PTHT or the reception threshold for the recorded sound of a rubber-boat engine between dry and wet ear conditions. There was no difference in the sound localization error between the regular mask and the ProEar 2000 mask. The presence of air around the pinna and inside the external ear canal did not improve underwater hearing sensitivity or sound localization. These results support the argument that bone conduction plays the main role in underwater hearing.

  11. Condition monitoring of 3G cellular networks through competitive neural models.

    PubMed

    Barreto, Guilherme A; Mota, João C M; Souza, Luis G M; Frota, Rewbenio A; Aguayo, Leonardo

    2005-09-01

    We develop an unsupervised approach to condition monitoring of cellular networks using competitive neural algorithms. Training is carried out with state vectors representing the normal functioning of a simulated CDMA2000 network. Once training is completed, global and local normality profiles (NPs) are built from the distribution of quantization errors of the training state vectors and their components, respectively. The global NP is used to evaluate the overall condition of the cellular system. If abnormal behavior is detected, local NPs are used in a component-wise fashion to find abnormal state variables. Anomaly detection tests are performed via percentile-based confidence intervals computed over the global and local NPs. We compared the performance of four competitive algorithms [winner-take-all (WTA), frequency-sensitive competitive learning (FSCL), self-organizing map (SOM), and neural-gas algorithm (NGA)] and the results suggest that the joint use of global and local NPs is more efficient and more robust than current single-threshold methods.
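
    The percentile-based profiles are straightforward to reproduce once a codebook of prototypes has been trained (by SOM, WTA, FSCL or NGA, all omitted here). The sketch below builds the global and local thresholds from the 99th percentile of training quantization errors; the percentile level and all names are our assumptions.

```python
import numpy as np

def fit_profiles(train, codebook, pct=99):
    """Global/local normality profiles from quantization errors."""
    d = np.linalg.norm(train[:, None, :] - codebook[None, :, :], axis=2)
    bmu = d.argmin(axis=1)                       # best-matching prototype
    global_err = d[np.arange(len(train)), bmu]
    local_err = np.abs(train - codebook[bmu])    # per-variable errors
    return np.percentile(global_err, pct), np.percentile(local_err, pct, axis=0)

def diagnose(x, codebook, g_thr, l_thr):
    """Flag abnormal state vectors, then the offending state variables."""
    d = np.linalg.norm(codebook - x, axis=1)
    bmu = d.argmin()
    if d[bmu] <= g_thr:
        return "normal", []
    return "abnormal", list(np.where(np.abs(x - codebook[bmu]) > l_thr)[0])
```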

  12. Laser-Induced Breakdown Spectroscopy (LIBS) for the Measurement of Spatial Structures and Fuel Distribution in Flames.

    PubMed

    Kotzagianni, Maria; Kakkava, Eirini; Couris, Stelios

    2016-04-01

    Laser-induced breakdown spectroscopy (LIBS) is used for the mapping of local structures (i.e., reactant and product zones) and for the determination of fuel distribution, by means of the local equivalence ratio ϕ, in laminar, premixed air-hydrocarbon flames. The laser threshold energy required to induce breakdown in the different zones of a flame is used to identify and demarcate the local structures of a premixed laminar flame, while complementary results on fuel concentration were obtained from measurements of the cyanogen (CN) band B²Σ⁺-X²Σ⁺ (Δv = 0) at 388.3 nm and the ratio of the atomic lines of hydrogen (Hα) and oxygen (O I), Hα/O. The combination of these LIBS-based methods provides a relatively simple, rapid, and accurate tool for online and in situ combustion diagnostics, yielding valuable information about the fuel distribution and the spatial variations of the local structures of a flame.

  13. Low-threshold parametric excitation of the upper hybrid wave in experiments on electron-cyclotron resonance heating by an ordinary wave

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sysoeva, E. V., E-mail: tinlit@yandex.ru; Gusakov, E. Z.; Simonchik, L. V.

    2016-07-15

    The possibility of the low-threshold decay of an ordinary wave into an upper hybrid wave localized in a plasma column (or in an axisymmetric plasma filament) and a low-frequency wave is analyzed. It is shown that the threshold for such a decay, accompanied by the excitation of an ion-acoustic wave, can easily be overcome for plasma parameters typical of model experiments on the Granit linear plasma facility.

  14. Willingness to pay per quality-adjusted life year: is one threshold enough for decision-making?: results from a study in patients with chronic prostatitis.

    PubMed

    Zhao, Fei-Li; Yue, Ming; Yang, Hua; Wang, Tian; Wu, Jiu-Hong; Li, Shu-Chuen

    2011-03-01

    To estimate the willingness to pay (WTP) per quality-adjusted life year (QALY) ratio with stated preference data and compare the results between chronic prostatitis (CP) patients and the general population (GP). WTP per QALY was calculated from the subjects' own health-related utility and their WTP value. Two widely used preference-based health-related quality of life instruments, EuroQol (EQ-5D) and Short Form 6D (SF-6D), were used to elicit utility for participants' own health. The monthly WTP values for moving from participants' current health to perfect health were elicited using a closed-ended iterative bidding contingent valuation method. A total of 268 CP patients and 364 participants from the GP completed the questionnaire. We obtained four WTP/QALY ratios ranging from $4700 to $7400, close to the lower bound of local gross domestic product (GDP) per capita, a threshold proposed by the World Health Organization. Nevertheless, these values were lower than other proposed thresholds and than published empirical research on diseases with mortality risk. Furthermore, the WTP/QALY ratios from the GP were significantly lower than those from the CP patients, and different determinants were associated with the within-group variation identified by multiple linear regression. Preference elicitation methods are acceptable and feasible in the socio-cultural context of an Asian environment, and the calculation of the WTP/QALY ratio produced meaningful answers. The necessity of considering the QALY type or disease-specific QALY in estimating the WTP/QALY ratio was highlighted, and 1 to 3 times GDP per capita, as recommended by the World Health Organization, could potentially serve as a benchmark threshold in this Asian context.

  15. Estimation of urban surface water at subpixel level from neighborhood pixels using multispectral remote sensing image (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Xie, Huan; Luo, Xin; Xu, Xiong; Wang, Chen; Pan, Haiyan; Tong, Xiaohua; Liu, Shijie

    2016-10-01

    Water bodies are a fundamental element of urban ecosystems, and water mapping is critical for urban and landscape planning and management. While remote sensing has increasingly been used for water mapping in rural areas, applying this spatially explicit approach in urban areas remains challenging, because urban water bodies are mainly small and spectral confusion between water and complex urban features is widespread. The water index is the most common method for water extraction at the pixel level, and spectral mixture analysis (SMA) has recently been widely employed for analyzing urban environments at the subpixel level. In this paper, we introduce an automatic subpixel water mapping method for urban areas using multispectral remote sensing data. The objectives of this research are: (1) developing an automatic technique for extracting land-water mixed pixels by water index; (2) deriving the most representative endmembers of water and land by utilizing neighboring water pixels and an adaptive iterative search for the optimal neighboring land pixel, respectively; and (3) applying a linear unmixing model for subpixel water fraction estimation. Specifically, to automatically extract land-water pixels, locally weighted scatterplot smoothing is first applied to the original histogram curve of the water index (WI) image. The Otsu threshold is then used as the starting point for selecting land-water pixels, with the land and water thresholds determined from the slopes of the histogram curve. Based on this pixel-level processing, the image is divided into three parts: water pixels, land pixels, and mixed land-water pixels. SMA is then applied to the mixed land-water pixels for water fraction estimation at the subpixel level. Under the assumption that the endmember signature of a target pixel should be more similar to adjacent pixels due to spatial dependence, the endmembers of water and land are determined from neighboring pure land or pure water pixels within a given distance. To obtain the most representative endmembers in SMA, we designed an adaptive iterative endmember selection method based on the spatial similarity of adjacent pixels. According to the spectral similarity within a spatially adjacent region, the land endmember spectrum is determined by selecting the most representative land pixel in a local window, and the water endmember spectrum is determined by averaging the water pixels in the local window. The proposed hierarchical processing method based on WI and SMA (WISMA) is applied to urban areas for reliability evaluation using Landsat-8 Operational Land Imager (OLI) images. For comparison, four methods at the pixel and subpixel levels were chosen. Results indicate that the water maps generated by the proposed method correspond closely with the reference water maps at subpixel precision, and that WISMA achieved the best performance in water mapping under a comprehensive analysis of different accuracy evaluation indexes (RMSE and SE).
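
    The last stage, estimating a water fraction for each mixed pixel by two-endmember linear unmixing, reduces to projecting the pixel spectrum onto the land-water axis. The sketch below shows this together with an Otsu starting threshold on a water-index image; the ±0.05 margins standing in for the slope-derived land and water thresholds are illustrative assumptions.

```python
import numpy as np
from skimage.filters import threshold_otsu

def split_pixels(wi, margin=0.05):
    """Classify a water-index image into pure water, pure land and mixed
    pixels, starting from the Otsu threshold."""
    t = threshold_otsu(wi)
    water, land = wi > t + margin, wi < t - margin
    return water, land, ~(water | land)

def water_fraction(pixel, land_em, water_em):
    """Two-endmember linear unmixing: least-squares water fraction in [0, 1]."""
    v = water_em - land_em
    f = np.dot(pixel - land_em, v) / np.dot(v, v)
    return float(np.clip(f, 0.0, 1.0))
```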

  16. Verification of the tumor volume delineation method using a fixed threshold of peak standardized uptake value.

    PubMed

    Koyama, Kazuya; Mitsumoto, Takuya; Shiraishi, Takahiro; Tsuda, Keisuke; Nishiyama, Atsushi; Inoue, Kazumasa; Yoshikawa, Kyosan; Hatano, Kazuo; Kubota, Kazuo; Fukushi, Masahiro

    2017-09-01

    We aimed to determine the difference in tumor volume associated with the reconstruction model in positron-emission tomography (PET). To reduce the influence of the reconstruction model, we propose a method to measure the tumor volume using the relative threshold method with a fixed threshold based on the peak standardized uptake value (SUVpeak). The efficacy of our method was verified using 18F-2-fluoro-2-deoxy-D-glucose PET/computed tomography images of 20 patients with lung cancer. The tumor volume was determined using the relative threshold method with a fixed threshold based on the SUVpeak. The PET data were reconstructed using the ordered-subset expectation maximization (OSEM) model, the OSEM + time-of-flight (TOF) model, and the OSEM + TOF + point-spread function (PSF) model. The volume differences associated with the reconstruction algorithm (%VD) were compared. For comparison, the tumor volume was also measured using the relative threshold method based on the maximum SUV (SUVmax). For the OSEM and TOF models, the mean %VD values were -0.06 ± 8.07% and -2.04 ± 4.23% for the fixed 40% threshold according to the SUVmax and the SUVpeak, respectively; the effect of our method in this case was minor. For the OSEM and PSF models, the mean %VD values were -20.41 ± 14.47% and -13.87 ± 6.59% for the fixed 40% threshold according to the SUVmax and SUVpeak, respectively. Our method enables the measurement of tumor volume with a fixed threshold and reduces the influence of the reconstruction model on the measured tumor volume.
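
    A sketch of the fixed-threshold volume measurement, under our own simplifications: SUVpeak is approximated as the highest local mean over a small cubic neighbourhood (clinically it is usually a ~1 cm³ sphere), and %VD is computed relative to a reference reconstruction.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def tumor_volume_ml(suv, voxel_ml, frac=0.40, kernel=3):
    """Volume of voxels above a fixed fraction of SUVpeak (cubic-kernel
    approximation of the peak-SUV neighbourhood average)."""
    suv_peak = uniform_filter(suv.astype(float), size=kernel).max()
    return (suv >= frac * suv_peak).sum() * voxel_ml

def percent_vd(v_model, v_ref):
    """%VD: relative volume difference between reconstruction models."""
    return 100.0 * (v_model - v_ref) / v_ref
```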

  17. Comparison of software and human observers in reading images of the CDMAM test object to assess digital mammography systems

    NASA Astrophysics Data System (ADS)

    Young, Kenneth C.; Cook, James J. H.; Oduko, Jennifer M.; Bosmans, Hilde

    2006-03-01

    European Guidelines for quality control in digital mammography specify minimum and achievable standards of image quality in terms of threshold contrast, based on readings of images of the CDMAM test object by human observers. However, this is time-consuming and has large inter-observer error. To overcome these problems, a software program (CDCOM) is available to automatically read CDMAM images, but the optimal method of interpreting its output is not defined. This study evaluates methods of determining threshold contrast from the program output and compares them to human readings for a variety of mammography systems. The methods considered are (A) simple thresholding, (B) psychometric curve fitting, (C) smoothing and interpolation, and (D) smoothing and psychometric curve fitting. Each method leads to similar threshold contrasts but with different reproducibility. Method (A) had relatively poor reproducibility, with a standard error in threshold contrast of 18.1 ± 0.7%. This was reduced to 8.4% by using a contrast-detail curve fitting procedure. Method (D) had the best reproducibility, with an error of 6.7%, reducing to 5.1% with curve fitting. A panel of three human observers had an error of 4.4%, reduced to 2.9% by curve fitting. All automatic methods led to threshold contrasts that were lower than those of humans. The ratio of human to program threshold contrasts varied with detail diameter and was 1.50 ± 0.04 (SEM) at 0.1 mm and 1.82 ± 0.06 at 0.25 mm for method (D). There were good correlations between the threshold contrasts determined by the human and automated methods.
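
    Psychometric curve fitting of the kind used in methods (B) and (D) can be sketched as a guess-rate-corrected logistic fit of detection fraction against log contrast; the threshold is read off at the curve's inflection. The 25% guess rate (reflecting CDMAM's four-alternative forced choice) and the initial parameter values are our assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def psychometric(log_c, log_ct, slope, guess=0.25):
    """Detection probability vs log contrast, corrected for guessing."""
    p = 1.0 / (1.0 + np.exp(-slope * (log_c - log_ct)))
    return guess + (1.0 - guess) * p

def threshold_contrast(contrasts, frac_detected):
    """Fit the psychometric curve and return the threshold contrast."""
    popt, _ = curve_fit(psychometric, np.log(contrasts), frac_detected,
                        p0=[np.log(np.median(contrasts)), 2.0])
    return float(np.exp(popt[0]))
```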

  18. Reference Guide to Odor Thresholds for Hazardous Air Pollutants Listed in the Clean Air Act Amendments of 1990.

    EPA Science Inventory

    In response to numerous requests for information related to odor thresholds, this document was prepared by the Air Risk Information Support Center in its role in providing technical assistance to State and Local government agencies on risk assessment of air pollutants. Discussion...

  19. Effects of cement alkalinity, exposure conditions and steel-concrete interface on the time-to-corrosion and chloride threshold for reinforcing steel in concrete

    NASA Astrophysics Data System (ADS)

    Nam, Jingak

    Effects of (1) cement alkalinity (low, normal and high), (2) exposure conditions (RH and temperature), (3) rebar surface condition (as-received versus cleaned) and (4) density and distribution of air voids at the steel-concrete interface on the chloride threshold and time-to-corrosion for reinforcing steel in concrete have been studied. Experiments were also performed to evaluate the effects of RH and temperature on the diffusion of chloride in concrete and to develop a method for ex-situ pH measurement of concrete pore water. Once specimens were fabricated and exposed to a corrosive chloride solution, various experimental techniques were employed to determine time-to-corrosion, chloride threshold, diffusion coefficient and void density along the rebar trace, as well as pore water pH. Based upon the resultant data, several findings related to the above parameters were obtained, as summarized below. First, the time to corrosion initiation was longest for G109 concrete specimens with high-alkalinity (HA) cement. Also, the chloride threshold increased with increasing time-to-corrosion and cement alkalinity; consequently, the HA specimens exhibited the highest chloride threshold compared with the low- and normal-alkalinity ones. Second, high temperature and temperature variations reduced the time-to-corrosion of reinforcing steel in concrete, since chloride diffusion was accelerated at higher temperature and possibly by temperature variations. The lowest chloride threshold values were found for outdoor-exposed specimens, suggesting that variation of RH or temperature (or both) facilitated rapid chloride diffusion. Third, elevated time-to-corrosion and chloride threshold values were found for the wire-brushed steel specimens compared with as-received ones. The higher ratio of [OH-]/[Fe(n+)] on the wire-brushed steel surface compared with the as-received case is a possible cause, because a higher ratio enables the formation of a more protective passive film on the rebar. Fourth, voids at the steel-concrete interface facilitated passive film breakdown and the onset of localized corrosion. This tendency for corrosion initiation increased in proportion to void size irrespective of specimen type, and the chloride threshold [Cl-]th decreased with increasing void diameter. In addition, a new ex-situ leaching method for determining concrete pore water alkalinity was developed.

  20. SparseMaps—A systematic infrastructure for reduced-scaling electronic structure methods. III. Linear-scaling multireference domain-based pair natural orbital N-electron valence perturbation theory

    NASA Astrophysics Data System (ADS)

    Guo, Yang; Sivalingam, Kantharuban; Valeev, Edward F.; Neese, Frank

    2016-03-01

    Multi-reference (MR) electronic structure methods, such as MR configuration interaction or MR perturbation theory, can provide reliable energies and properties for many molecular phenomena like bond breaking, excited states, transition states or magnetic properties of transition metal complexes and clusters. However, owing to their inherent complexity, most MR methods are still too computationally expensive for large systems. Therefore the development of more computationally attractive MR approaches is necessary to enable routine application for large-scale chemical systems. Among the state-of-the-art MR methods, second-order N-electron valence state perturbation theory (NEVPT2) is an efficient, size-consistent, and intruder-state-free method. However, there are still two important bottlenecks in practical applications of NEVPT2 to large systems: (a) the high computational cost of NEVPT2 for large molecules, even with moderate active spaces and (b) the prohibitive cost for treating large active spaces. In this work, we address problem (a) by developing a linear scaling "partially contracted" NEVPT2 method. This development uses the idea of domain-based local pair natural orbitals (DLPNOs) to form a highly efficient algorithm. As shown previously in the framework of single-reference methods, the DLPNO concept leads to an enormous reduction in computational effort while at the same time providing high accuracy (approaching 99.9% of the correlation energy), robustness, and black-box character. In the DLPNO approach, the virtual space is spanned by pair natural orbitals that are expanded in terms of projected atomic orbitals in large orbital domains, while the inactive space is spanned by localized orbitals. The active orbitals are left untouched. Our implementation features a highly efficient "electron pair prescreening" that skips the negligible inactive pairs. The surviving pairs are treated using the partially contracted NEVPT2 formalism. A detailed comparison between the partial and strong contraction schemes is made, with conclusions that discourage the strong contraction scheme as a basis for local correlation methods due to its non-invariance with respect to rotations in the inactive and external subspaces. A minimal set of conservatively chosen truncation thresholds controls the accuracy of the method. With the default thresholds, about 99.9% of the canonical partially contracted NEVPT2 correlation energy is recovered while the crossover of the computational cost with the already very efficient canonical method occurs reasonably early; in linear chain type compounds at a chain length of around 80 atoms. Calculations are reported for systems with more than 300 atoms and 5400 basis functions.

  1. New approach to estimating variability in visual field data using an image processing technique.

    PubMed Central

    Crabb, D P; Edgar, D F; Fitzke, F W; McNaught, A I; Wynn, H P

    1995-01-01

    AIMS: A new framework for evaluating pointwise sensitivity variation in computerised visual field data is demonstrated. METHODS: A measure of local spatial variability (LSV) is generated using an image processing technique. Fifty-five eyes from a sample of normal and glaucomatous subjects, examined on the Humphrey field analyser (HFA), were used to illustrate the method. RESULTS: Significant correlations were found between LSV and conventional estimates, namely HFA pattern standard deviation and short-term fluctuation. CONCLUSION: LSV does not depend on normative reference data or repeated threshold determinations, thus potentially reducing test time. Also, the illustrated pointwise maps of LSV could provide a method for identifying areas of fluctuation commonly found in early glaucomatous field loss. PMID:7703196
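
    While the paper's exact image-processing operator is not reproduced here, a simple stand-in for a pointwise local-variability map is the standard deviation of each test location's 3x3 neighbourhood on the visual-field grid, computed with box filters:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_spatial_variability(field, size=3):
    """Neighbourhood standard deviation of a sensitivity grid (toy LSV)."""
    f = np.asarray(field, dtype=float)
    m = uniform_filter(f, size=size)          # local mean
    m2 = uniform_filter(f * f, size=size)     # local mean of squares
    return np.sqrt(np.maximum(m2 - m * m, 0.0))
```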

  2. Central and rear-edge populations can be equally vulnerable to warming

    NASA Astrophysics Data System (ADS)

    Bennett, Scott; Wernberg, Thomas; Arackal Joy, Bijo; de Bettignies, Thibaut; Campbell, Alexandra H.

    2015-12-01

    Rear (warm) edge populations are often considered more susceptible to warming than central (cool) populations because of the warmer ambient temperatures they experience, but this overlooks the potential for local variation in thermal tolerances. Here we provide conceptual models illustrating how sensitivity to warming is affected throughout a species' geographical range for locally adapted and non-adapted populations. We test these models for a range-contracting seaweed using observations from a marine heatwave and a 12-month experiment, translocating seaweeds among central, present and historic range edge locations. Growth, reproductive development and survivorship display different temperature thresholds among central and rear-edge populations, but share a 2.5 °C anomaly threshold. Range contraction, therefore, reflects variation in local anomalies rather than differences in absolute temperatures. This demonstrates that warming sensitivity can be similar throughout a species geographical range and highlights the importance of incorporating local adaptation and acclimatization into climate change vulnerability assessments.

  3. Differential expression of type X collagen in a mechanically active 3-D chondrocyte culture system: a quantitative study

    PubMed Central

    Yang, Xu; Vezeridis, Peter S; Nicholas, Brian; Crisco, Joseph J; Moore, Douglas C; Chen, Qian

    2006-01-01

    Objective: Mechanical loading of cartilage influences chondrocyte metabolism and gene expression. The gene encoding type X collagen is expressed specifically by hypertrophic chondrocytes and upregulated during osteoarthritis. In this study we tested the hypothesis that the mechanical microenvironment resulting from higher levels of local strain in a three-dimensional cell culture construct would lead to an increase in the expression of type X collagen mRNA by chondrocytes in those areas. Methods: Hypertrophic chondrocytes were isolated from embryonic chick sterna and seeded onto rectangular Gelfoam sponges. Seeded sponges were subjected to various levels of cyclic uniaxial tensile strain at 1 Hz with the computer-controlled Bio-Stretch system. Strain distribution across the sponge was quantified by digital image analysis. After mechanical loading, sponges were cut, and the end and center regions were separated according to the construct strain distribution. Total RNA was extracted from the cells harvested from these regions, and real-time quantitative RT-PCR was performed to quantify mRNA levels for type X collagen and the housekeeping gene 18S RNA. Results: Chondrocytes in high (9%) local strain areas produced more than twice as much type X collagen mRNA as those under no-load conditions, while chondrocytes in low (2.5%) local strain areas showed no appreciable difference in type X collagen mRNA production compared with non-loaded samples. Increasing local strains above 2.5%, either in the center or end regions of the sponge, resulted in increased expression of Col X mRNA by chondrocytes in that region. Conclusion: These findings suggest that the threshold of chondrocyte sensitivity for inducing type X collagen mRNA production lies above 2.5% local strain, and that increasing local strain above this threshold increases Col X mRNA expression. Such quantitative analysis has important implications for our understanding of the mechanosensitivity of cartilage and the mechanical regulation of chondrocyte gene expression. PMID:17150098

  4. An artifacts removal post-processing for epiphyseal region-of-interest (EROI) localization in automated bone age assessment (BAA)

    PubMed Central

    2011-01-01

    Background: Segmentation is the most crucial part of computer-aided bone age assessment. A well-known type of segmentation performed in such systems is adaptive segmentation. While providing better results than global thresholding, adaptive segmentation produces a lot of unwanted noise that can affect the later process of epiphysis extraction. Methods: We propose a method with anisotropic diffusion as pre-processing and a novel Bounded Area Elimination (BAE) post-processing algorithm, designed to improve the ossification site localization technique, the adaptive segmentation result, and the region-of-interest (ROI) localization accuracy. Results: The results were evaluated by quantitative and qualitative analysis using texture feature evaluation. Image homogeneity after anisotropic diffusion improved by an average of 17.59% across the age groups. Experiments showed that smoothness improved by an average of 35% after the BAE algorithm, ROI localization accuracy improved by an average of 8.19%, and the MSSIM improved by an average of 10.49% after performing the BAE algorithm on the adaptively segmented hand radiographs. Conclusions: The results indicate that hand radiographs which have undergone anisotropic diffusion have greatly reduced noise in the segmented image, and that the proposed BAE algorithm is capable of removing the artifacts generated by adaptive segmentation. PMID:21952080
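
    The core of a BAE-style clean-up is elimination of connected components whose area falls below a bound. A minimal sketch with scipy follows; the min_area bound is an illustrative parameter, not the paper's value.

```python
import numpy as np
from scipy.ndimage import label

def bounded_area_elimination(binary, min_area=50):
    """Drop connected components smaller than min_area pixels."""
    labels, n = label(binary)
    if n == 0:
        return binary.copy()
    areas = np.bincount(labels.ravel())
    keep = areas >= min_area
    keep[0] = False                   # label 0 is background; never keep it
    return keep[labels]
```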

  5. Threshold selection for classification of MR brain images by clustering method

    NASA Astrophysics Data System (ADS)

    Moldovanu, Simona; Obreja, Cristian; Moraru, Luminita

    2015-12-01

    Given a grey-intensity image, our method detects the optimal threshold for a suitable binarization of MR brain images. In MR brain image processing, the grey levels of pixels belonging to the object are not substantially different from the grey levels belonging to the background. Threshold optimization is an effective tool for separating objects from the background and, further, for classification applications. This paper gives a detailed investigation of the selection of thresholds. Our method does not use the well-known binarization methods; instead, we perform a simple threshold optimization which, in turn, allows the best classification of the analyzed images into healthy and multiple sclerosis cases. The dissimilarity (or the distance between classes) has been established using a clustering method based on dendrograms. We tested our method on two classes of images: 20 T2-weighted (T2w) and 20 proton-density-weighted (PD) scans from two healthy subjects and two patients with multiple sclerosis. For each image and each threshold, the number of white pixels (i.e., the area of white objects in the binary image) was determined; these pixel counts are the objects in the clustering operation. The following optimal threshold values were obtained: T = 80 for PD images and T = 30 for T2w images. Each threshold clearly separates the clusters belonging to the two studied groups, healthy subjects and patients with multiple sclerosis.

  6. Evaluating simplified methods for liquefaction assessment for loss estimation

    NASA Astrophysics Data System (ADS)

    Kongar, Indranil; Rossetto, Tiziana; Giovinazzi, Sonia

    2017-06-01

    Currently, some catastrophe models used by the insurance industry account for liquefaction by applying a simple factor to shaking-induced losses. The factor is based only on local liquefaction susceptibility and this highlights the need for a more sophisticated approach to incorporating the effects of liquefaction in loss models. This study compares 11 unique models, each based on one of three principal simplified liquefaction assessment methods: liquefaction potential index (LPI) calculated from shear-wave velocity, the HAZUS software method and a method created specifically to make use of USGS remote sensing data. Data from the September 2010 Darfield and February 2011 Christchurch earthquakes in New Zealand are used to compare observed liquefaction occurrences to forecasts from these models using binary classification performance measures. The analysis shows that the best-performing model is the LPI calculated using known shear-wave velocity profiles, which correctly forecasts 78 % of sites where liquefaction occurred and 80 % of sites where liquefaction did not occur, when the threshold is set at 7. However, these data may not always be available to insurers. The next best model is also based on LPI but uses shear-wave velocity profiles simulated from the combination of USGS VS30 data and empirical functions that relate VS30 to average shear-wave velocities at shallower depths. This model correctly forecasts 58 % of sites where liquefaction occurred and 84 % of sites where liquefaction did not occur, when the threshold is set at 4. These scores increase to 78 and 86 %, respectively, when forecasts are based on liquefaction probabilities that are empirically related to the same values of LPI. This model is potentially more useful for insurance since the input data are publicly available. HAZUS models, which are commonly used in studies where no local model is available, perform poorly and incorrectly forecast 87 % of sites where liquefaction occurred, even at optimal thresholds. This paper also considers two models (HAZUS and EPOLLS) for estimation of the scale of liquefaction in terms of permanent ground deformation but finds that both models perform poorly, with correlations between observations and forecasts lower than 0.4 in all cases. Therefore these models potentially provide negligible additional value to loss estimation analysis outside of the regions for which they have been developed.
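
    For reference, the LPI used by the best-performing models is the Iwasaki-style depth-weighted integral of the factor-of-safety deficit over the top 20 m of the profile; a sketch (assuming depths are sorted ascending):

```python
import numpy as np

def liquefaction_potential_index(depth_m, fs):
    """LPI = integral of F(z) * w(z) over 0-20 m, with w(z) = 10 - 0.5 z
    and F = 1 - FS wherever the factor of safety FS < 1 (else 0)."""
    z, fs = np.asarray(depth_m, float), np.asarray(fs, float)
    keep = z <= 20.0
    z, F = z[keep], np.where(fs[keep] < 1.0, 1.0 - fs[keep], 0.0)
    y = F * (10.0 - 0.5 * z)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(z)))  # trapezoid rule

# A site would then be flagged when LPI exceeds the study's threshold,
# e.g. 7 for measured-Vs profiles or 4 for the simulated-Vs variant.
```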

  7. Influenza surveillance in Europe: establishing epidemic thresholds by the Moving Epidemic Method

    PubMed Central

    Vega, Tomás; Lozano, Jose Eugenio; Meerhoff, Tamara; Snacken, René; Mott, Joshua; Ortiz de Lejarazu, Raul; Nunes, Baltazar

    2012-01-01

    Background: Timely influenza surveillance is important to monitor influenza epidemics. Objectives: (i) To calculate the epidemic threshold for influenza-like illness (ILI) and acute respiratory infections (ARI) in 19 countries, as well as the thresholds for different levels of intensity. (ii) To evaluate the performance of these thresholds. Methods: The moving epidemic method (MEM) has been developed to determine the baseline influenza activity and an epidemic threshold. False alerts, detection lags and timeliness of the detection of epidemics were calculated. The performance was evaluated using a cross-validation procedure. Results: The overall sensitivity of the MEM threshold was 71.8% and the specificity was 95.5%. The median timeliness was 1 week (range: 0-4.5). Conclusions: The method produced a robust and specific signal to detect influenza epidemics. The good balance between the sensitivity and specificity of the epidemic threshold to detect seasonal epidemics and avoid false alerts has advantages for public health purposes. This method may serve as a standard to define the start of the annual influenza epidemic in countries in Europe. PMID:22897919

  8. Comparison of alternatives to amplitude thresholding for onset detection of acoustic emission signals

    NASA Astrophysics Data System (ADS)

    Bai, F.; Gagar, D.; Foote, P.; Zhao, Y.

    2017-02-01

    Acoustic Emission (AE) monitoring can be used to detect the presence of damage as well as determine its location in Structural Health Monitoring (SHM) applications. Information on the time difference of the signal generated by the damage event arriving at different sensors in an array is essential in performing localisation. Currently, this is determined using a fixed threshold, which is particularly prone to errors when not set to optimal values. This paper presents three new methods for determining the onset of AE signals without the need for a predetermined threshold. The performance of the techniques is evaluated using AE signals generated during fatigue crack growth and compared to the established Akaike Information Criterion (AIC) and fixed-threshold methods. The 1D location accuracy of the new methods was within the range of <1-7.1% of the monitored region, compared with 2.7% for the AIC method and 1.8-9.4% for the conventional fixed-threshold method at different threshold levels.
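
    Of the established alternatives, the AIC picker is compact enough to state exactly: the onset is the sample index that best splits the trace into two stationary segments. A minimal sketch (the variance floor guards against the log of zero):

```python
import numpy as np

def aic_onset(x):
    """Onset index k minimizing
    AIC(k) = k*log(var(x[:k])) + (N-k-1)*log(var(x[k:]))."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    ks = np.arange(2, N - 2)
    aic = np.array([k * np.log(x[:k].var() + 1e-12)
                    + (N - k - 1) * np.log(x[k:].var() + 1e-12) for k in ks])
    return int(ks[aic.argmin()])
```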

  9. Evaluation of Maryland abutment scour equation through selected threshold velocity methods

    USGS Publications Warehouse

    Benedict, S.T.

    2010-01-01

    The U.S. Geological Survey, in cooperation with the Maryland State Highway Administration, used field measurements of scour to evaluate the sensitivity of the Maryland abutment scour equation to the critical (or threshold) velocity variable. Four selected methods for estimating threshold velocity were applied to the Maryland abutment scour equation, and the predicted scour was compared with the field measurements. Results indicated that the performance of the Maryland abutment scour equation was sensitive to the threshold velocity, with some threshold velocity methods producing better estimates of predicted scour than others. In addition, results indicated that regional stream characteristics can affect the performance of the Maryland abutment scour equation, with moderate-gradient streams performing differently from low-gradient streams. On the basis of the findings of the investigation, guidance for selecting threshold velocity methods for application to the Maryland abutment scour equation is provided, and limitations are noted.

  10. Fault-tolerance in Two-dimensional Topological Systems

    NASA Astrophysics Data System (ADS)

    Anderson, Jonas T.

    This thesis is a collection of ideas with the general goal of building, at least in the abstract, a local fault-tolerant quantum computer. The connection between quantum information and topology has proven to be an active area of research in several fields. The introduction of the toric code by Alexei Kitaev demonstrated the usefulness of topology for quantum memory and quantum computation. Many quantum codes used for quantum memory are modeled by spin systems on a lattice, with operators that extract syndrome information placed on vertices or faces of the lattice. It is natural to wonder whether the useful codes in such systems can be classified. This thesis presents work that leverages ideas from topology and graph theory to explore the space of such codes. Homological stabilizer codes are introduced and it is shown that, under a set of reasonable assumptions, any qubit homological stabilizer code is equivalent to either a toric code or a color code. Additionally, the toric code and the color code correspond to distinct classes of graphs. Many systems have been proposed as candidate quantum computers. It is very desirable to design quantum computing architectures with two-dimensional layouts and low complexity in parity-checking circuitry. Kitaev's surface codes provided the first example of codes satisfying this property. They provided a new route to fault tolerance with more modest overheads and thresholds approaching 1%. The recently discovered color codes share many properties with the surface codes, such as the ability to perform syndrome extraction locally in two dimensions. Some families of color codes admit a transversal implementation of the entire Clifford group. This work investigates color codes on the 4.8.8 lattice known as triangular codes. I develop a fault-tolerant error-correction strategy for these codes in which repeated syndrome measurements on this lattice generate a three-dimensional space-time combinatorial structure. I then develop an integer program that analyzes this structure and determines the most likely set of errors consistent with the observed syndrome values. I implement this integer program to find the threshold for depolarizing noise on small versions of these triangular codes. Because the threshold for magic-state distillation is likely to be higher than this value and because logical CNOT gates can be performed by code deformation in a single block instead of between pairs of blocks, the threshold for fault-tolerant quantum memory for these codes is also the threshold for fault-tolerant quantum computation with them. Since the advent of a threshold theorem for quantum computers much has been improved upon. Thresholds have increased, architectures have become more local, and gate sets have been simplified. The overhead for magic-state distillation has been studied, but not nearly to the extent of the aforementioned topics. A method for greatly reducing this overhead, known as reusable magic states, is studied here. While examples of reusable magic states exist for Clifford gates, I give strong reasons to believe they do not exist for non-Clifford gates.

  11. Exploring new topography-based subgrid spatial structures for improving land surface modeling

    DOE PAGES

    Tesfa, Teklu K.; Leung, Lai-Yung Ruby

    2017-02-22

    Topography plays an important role in land surface processes through its influence on atmospheric forcing, soil and vegetation properties, and river network topology and drainage area. Land surface models with a spatial structure that captures spatial heterogeneity, which is directly affected by topography, may improve the representation of land surface processes. Previous studies found that land surface modeling using subbasins instead of structured grids as computational units improves the scalability of simulated runoff and streamflow processes. In this study, new land surface spatial structures are explored by further dividing subbasins into subgrid structures based on topographic properties, including surface elevation, slope and aspect. Two methods (local and global) of watershed discretization are applied to derive two types of subgrid structures (geo-located and non-geo-located) over the topographically diverse Columbia River basin in the northwestern United States. In the global method, a fixed elevation classification scheme is used to discretize subbasins. The local method utilizes concepts of hypsometric analysis to discretize each subbasin, using different elevation ranges that also naturally account for slope variations. The relative merits of the two methods and subgrid structures are investigated for their ability to capture topographic heterogeneity and the implications of this for representations of atmospheric forcing and land cover spatial patterns. Results showed that the local method reduces the standard deviation (SD) of subgrid surface elevation in the study domain by 17 to 19% compared to the global method, highlighting the relative advantages of the local method for capturing subgrid topographic variations. The comparison between the two types of subgrid structures showed that the non-geo-located subgrid structures are more consistent across different area threshold values than the geo-located subgrid structures. Altogether, the local method and non-geo-located subgrid structures effectively and robustly capture topographic, climatic and vegetation variability, which is important for land surface modeling.
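
    The difference between the two discretization methods can be shown in a few lines: the global method applies one fixed set of elevation class edges everywhere, while the local method derives each subbasin's edges from its own hypsometry, read here as equal-area elevation quantiles (our interpretation of the hypsometric analysis):

```python
import numpy as np

def global_bands(elev, edges):
    """Global method: one fixed elevation classification for all subbasins."""
    return np.digitize(elev, edges)

def local_bands(elev, n_bands=4):
    """Local method: per-subbasin class edges from equal-area quantiles,
    so each subbasin's own relief sets its elevation bands."""
    edges = np.quantile(elev, np.linspace(0, 1, n_bands + 1)[1:-1])
    return np.digitize(elev, edges)
```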

  13. Using pixel intensity as a self-regulating threshold for deterministic image sampling in Milano Retinex: the T-Rex algorithm

    NASA Astrophysics Data System (ADS)

    Lecca, Michela; Modena, Carla Maria; Rizzi, Alessandro

    2018-01-01

    Milano Retinexes are spatial color algorithms, part of the Retinex family, usually employed for image enhancement. They modify the color of each pixel taking into account the surrounding colors and their positions, in this way capturing the local spatial color distribution relevant to image enhancement. We present T-Rex (from the words threshold and Retinex), an implementation of Milano Retinex whose main novelty is the use of the pixel intensity as a self-regulating threshold to deterministically sample local color information. Experiments carried out on real-world pictures show that T-Rex's image enhancement performance is in line with that of the Milano Retinex family: T-Rex increases the brightness, the contrast, and the flatness of the channel distributions of the input image, making the content of pictures acquired under difficult light conditions more intelligible.

  14. Multisampling suprathreshold perimetry: a comparison with conventional suprathreshold and full-threshold strategies by computer simulation.

    PubMed

    Artes, Paul H; Henson, David B; Harper, Robert; McLeod, David

    2003-06-01

    To compare a multisampling suprathreshold strategy with conventional suprathreshold and full-threshold strategies in detecting localized visual field defects and in quantifying the area of loss. Probability theory was applied to examine various suprathreshold pass criteria (i.e., the number of stimuli that have to be seen for a test location to be classified as normal). A suprathreshold strategy that requires three seen or three missed stimuli per test location (multisampling suprathreshold) was selected for further investigation. Simulation was used to determine how the multisampling suprathreshold, conventional suprathreshold, and full-threshold strategies detect localized field loss. To determine the systematic error and variability in estimates of loss area, artificial fields were generated with clustered defects (0-25 field locations with 8- and 16-dB loss); for each condition, the number of test locations classified as defective (suprathreshold strategies) or with pattern deviation probability less than 5% (full-threshold strategy) was derived from 1000 simulated test results. The full-threshold and multisampling suprathreshold strategies had similar sensitivity to field loss, and both detected defects earlier than the conventional suprathreshold strategy. The pattern deviation probability analyses of full-threshold results underestimated the area of field loss, and conventional suprathreshold perimetry also underestimated the defect area. With multisampling suprathreshold perimetry, the estimates of defect area were less variable and exhibited lower systematic error. Multisampling suprathreshold paradigms may be a powerful alternative to other strategies of visual field testing; clinical trials are needed to verify these findings.
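
    The probability theory behind the "three seen or three missed" criterion is a race to three: a location whose per-presentation probability of seeing is p is classified normal exactly when three seen responses occur before three missed ones. A small sketch of that calculation (the criterion count is a parameter here):

```python
from math import comb

def p_classified_normal(p_seen, n_crit=3):
    """P(n_crit stimuli seen before n_crit missed), summing over the
    number of misses k < n_crit that precede the final 'seen'."""
    return sum(comb(n_crit - 1 + k, k) * p_seen ** n_crit * (1 - p_seen) ** k
               for k in range(n_crit))

# A healthy location seen 95% of the time is passed with probability ~0.999,
# while a damaged location seen 30% of the time passes only ~16% of the time:
# p_classified_normal(0.95), p_classified_normal(0.30)
```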

  15. Parafoveal Target Detectability Reversal Predicted by Local Luminance and Contrast Gain Control

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert J., Jr.; Beard, Bettina L.; Null, Cynthia H. (Technical Monitor)

    1996-01-01

    This project is part of a program to develop image discrimination models for the prediction of the detectability of objects in a range of backgrounds. We wanted to see if the models could predict parafoveal object detection as well as they predict detection in foveal vision. We also wanted to make our simplified models more general by local computation of luminance and contrast gain control. A signal image (0.78 x 0.17 deg) was made by subtracting a simulated airport runway scene background image (2.7 deg square) from the same scene containing an obstructing aircraft. Signal visibility contrast thresholds were measured in a fully crossed factorial design with three factors: eccentricity (0 deg or 4 deg), background (uniform or runway scene background), and fixed-pattern white noise contrast (0%, 5%, or 10%). Three experienced observers responded to three repetitions of 60 2IFC trials in each condition and thresholds were estimated by maximum likelihood probit analysis. In the fovea the average detection contrast threshold was 4 dB lower for the runway background than for the uniform background, but in the parafovea, the average threshold was 6 dB higher for the runway background than for the uniform background. This interaction was similar across the different noise levels and for all three observers. A likely reason for the runway background giving a lower threshold in the fovea is the low luminance near the signal in that scene. In our model, the local luminance computation is controlled by a spatial spread parameter. When this parameter and a corresponding parameter for the spatial spread of contrast gain were increased for the parafoveal predictions, the model predicts the interaction of background with eccentricity.

  16. Motor units in the human medial gastrocnemius muscle are not spatially localized or functionally grouped.

    PubMed

    Héroux, Martin E; Brown, Harrison J; Inglis, J Timothy; Siegmund, Gunter P; Blouin, Jean-Sébastien

    2015-08-15

    Human medial gastrocnemius (MG) motor units (MUs) are thought to occupy small muscle territories or regions, with low-threshold units preferentially located distally. We used intramuscular recordings to measure the territory of muscle fibres from MG MUs and determine whether these MUs are grouped by recruitment threshold or joint action (ankle plantar flexion and knee flexion). The territory of MUs from the MG muscle varied from somewhat localized to highly distributed, with approximately half the MUs spanning at least half the length and width of the muscle. There was also no evidence of regional muscle activity based on MU recruitment thresholds or joint action. The CNS does not have the means to selectively activate regions of the MG muscle based on task requirements. Human medial gastrocnemius (MG) motor units (MUs) are thought to occupy small muscle territories, with low-threshold units preferentially located distally. In this study, subjects (n = 8) performed ramped and sustained isometric contractions (ankle plantar flexion and knee flexion; range: ∼1-40% maximal voluntary contraction) and we measured MU territory size with spike-triggered averages from fine-wire electrodes inserted along the length (seven electrodes) or across the width (five electrodes) of the MG muscle. Of 69 MUs identified along the length of the muscle, 32 spanned at least half the muscle length (≥ 6.9 cm), 11 of which spanned all recording sites (13.6-17.9 cm). Distal fibres had smaller pennation angles (P < 0.05), which were accompanied by larger territories in MUs with fibres located distally (P < 0.05). There was no distal-to-proximal pattern of muscle activation in ramp contraction (P = 0.93). Of 36 MUs identified across the width of the muscle, 24 spanned at least half the muscle width (≥ 4.0 cm), 13 of which spanned all recording sites (8.0-10.8 cm). MUs were not localized (length or width) based on recruitment threshold or contraction type, nor was there a relationship between MU territory size and recruitment threshold (Spearman's rho = -0.20 and 0.13, P > 0.18). MUs in the human MG have larger territories than previously reported and are not localized based on recruitment threshold or joint action. This indicates that the CNS does not have the means to selectively activate regions of the MG muscle based on task requirements.

  17. A dual-adaptive support-based stereo matching algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Yin; Zhang, Yun

    2017-07-01

    Many stereo matching algorithms use fixed color thresholds and a rigid cross skeleton to segment supports (viz., Cross method), which, however, does not work well for different images. To address this issue, this paper proposes a novel dual adaptive support (viz., DAS)-based stereo matching method, which uses both appearance and shape information of a local region to segment supports automatically, and then integrates the DAS-based cost aggregation with the absolute difference plus census transform cost, scanline optimization and disparity refinement to develop a stereo matching system. The performance of the DAS method is also evaluated on the Middlebury benchmark and by comparison with the Cross method. The results show that the average error for the DAS method is 25.06% lower than that for the Cross method, indicating that the proposed method is more accurate, requires fewer parameters, and is suitable for parallel computing.

  18. An Improved Compressive Sensing and Received Signal Strength-Based Target Localization Algorithm with Unknown Target Population for Wireless Local Area Networks.

    PubMed

    Yan, Jun; Yu, Kegen; Chen, Ruizhi; Chen, Liang

    2017-05-30

    In this paper a two-phase compressive sensing (CS) and received signal strength (RSS)-based target localization approach is proposed to improve position accuracy by dealing with the unknown target population and the effect of grid dimensions on position error. In the coarse localization phase, by formulating target localization as a sparse signal recovery problem, grids with recovery vector components greater than a threshold are chosen as the candidate target grids. In the fine localization phase, by partitioning each candidate grid, the target position in a grid is iteratively refined by using the minimum residual error rule and the least-squares technique. When all the candidate target grids are iteratively partitioned and the measurement matrix is updated, the recovery vector is re-estimated. Threshold-based detection is employed again to determine the target grids and hence the target population. As a consequence, both the target population and the position estimation accuracy can be significantly improved. Simulation results demonstrate that the proposed approach achieves the best accuracy among all the algorithms compared.
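
    To make the coarse localization phase concrete, here is a minimal sketch (our reading of the abstract; the recovery vector and the relative threshold value are illustrative assumptions): candidate target grids are the indices whose recovered components exceed the threshold.

      import numpy as np

      def candidate_grids(recovery_vec, rel_threshold=0.2):
          """Coarse phase: keep grids whose recovered component exceeds a
          fraction of the largest component (rel_threshold is assumed)."""
          x = np.abs(np.asarray(recovery_vec, dtype=float))
          return np.flatnonzero(x > rel_threshold * x.max())

      x_hat = np.array([0.02, 0.90, 0.05, 0.60, 0.01])  # toy CS recovery output
      print(candidate_grids(x_hat))  # -> [1 3], two candidate target grids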

  19. Accuracy of cancellous bone volume fraction measured by micro-CT scanning.

    PubMed

    Ding, M; Odgaard, A; Hvid, I

    1999-03-01

    Volume fraction, the single most important parameter in describing trabecular microstructure, can easily be calculated from three-dimensional reconstructions of micro-CT images. This study sought to quantify the accuracy of this measurement. One hundred and sixty human cancellous bone specimens, covering a large range of volume fraction (9.8-39.8%), were produced. The specimens were micro-CT scanned, and the volume fraction based on Archimedes' principle was determined as a reference. After scanning, all micro-CT data were segmented using individual thresholds determined by the scanner-supplied algorithm (method I). A significant deviation of volume fraction from method I was found: both the y-intercept and the slope of the regression line were significantly different from those of the Archimedes-based volume fraction (p < 0.001). New individual thresholds were determined based on a calibration of volume fraction to the Archimedes-based volume fractions (method II). The mean thresholds of the two methods were applied to segment 20 randomly selected specimens. The results showed that volume fraction using the mean threshold of method I was underestimated by 4% (p = 0.001), whereas the mean threshold of method II yielded accurate values. The precision of the measurement was excellent. Our data show that care must be taken when applying thresholds in generating 3-D data, and that a fixed threshold may be used to obtain reliable volume fraction data. This fixed threshold may be determined from the Archimedes-based volume fraction of a subgroup of specimens. The threshold may vary between different materials, and so it should be determined whenever a study series is performed.

  20. Threshold-Voltage-Shift Compensation and Suppression Method Using Hydrogenated Amorphous Silicon Thin-Film Transistors for Large Active Matrix Organic Light-Emitting Diode Displays

    NASA Astrophysics Data System (ADS)

    Oh, Kyonghwan; Kwon, Oh-Kyong

    2012-03-01

    A threshold-voltage-shift compensation and suppression method for active matrix organic light-emitting diode (AMOLED) displays fabricated using a hydrogenated amorphous silicon thin-film transistor (TFT) backplane is proposed. The proposed method compensates for the threshold voltage variation of TFTs due to different threshold voltage shifts during emission time and extends the lifetime of the AMOLED panel. Measurement results show that the error range of emission current is from -1.1 to +1.7% when the threshold voltage of TFTs varies from 1.2 to 3.0 V.

  1. Saturation of low-threshold two-plasmon parametric decay leading to excitation of one localized upper hybrid wave

    NASA Astrophysics Data System (ADS)

    Gusakov, E. Z.; Popov, A. Yu.; Saveliev, A. N.

    2018-06-01

    We analyze the saturation of the low-threshold absolute parametric decay instability of an extraordinary pump wave leading to the excitation of two upper hybrid (UH) waves, only one of which is trapped in the vicinity of a local maximum of the plasma density profile. The pump depletion and the secondary decay of the localized daughter UH wave are treated as the most likely moderators of a primary two-plasmon decay instability. The reduced equations describing the nonlinear saturation phenomena are derived. The general analytical consideration is accompanied by the numerical analysis performed under the experimental conditions typical of the off-axis X2-mode ECRH experiments at TEXTOR. The possibility of substantial (up to 20%) anomalous absorption of the pump wave is predicted.

  2. Reliability of the method of levels for determining cutaneous temperature sensitivity

    NASA Astrophysics Data System (ADS)

    Jakovljević, Miroljub; Mekjavić, Igor B.

    2012-09-01

    Determination of thermal thresholds is used clinically for evaluation of peripheral nervous system function. The aim of this study was to evaluate the reliability of the method of levels performed with a new, low-cost device for determining cutaneous temperature sensitivity. Nineteen male subjects were included in the study. Thermal thresholds were tested on the right side at the volar surface of mid-forearm, lateral surface of mid-upper arm and front area of mid-thigh. Thermal testing was carried out by the method of levels with an initial temperature step of 2°C. Variability of thermal thresholds was expressed by means of the ratio between the second and the first testing, coefficient of variation (CV), coefficient of repeatability (CR), intraclass correlation coefficient (ICC), mean difference between sessions (S1-S2diff), standard error of measurement (SEM) and minimally detectable change (MDC). There were no statistically significant changes between sessions for warm or cold thresholds, or between warm and cold thresholds. Within-subject CVs were acceptable. The CR estimates for warm thresholds ranged from 0.74°C to 1.06°C and from 0.67°C to 1.07°C for cold thresholds. The ICC values for intra-rater reliability ranged from 0.41 to 0.72 for warm thresholds and from 0.67 to 0.84 for cold thresholds. S1-S2diff ranged from -0.15°C to 0.07°C for warm thresholds, and from -0.08°C to 0.07°C for cold thresholds. SEM ranged from 0.26°C to 0.38°C for warm thresholds, and from 0.23°C to 0.38°C for cold thresholds. Estimated MDC values were between 0.60°C and 0.88°C for warm thresholds, and 0.53°C and 0.88°C for cold thresholds. The method of levels for determining cutaneous temperature sensitivity has acceptable reliability.
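
    The SEM and MDC reported above follow the standard reliability definitions, sketched here with illustrative inputs (not the study's data):

      import math

      def sem(sd_between_subjects, icc):
          """Standard error of measurement: SEM = SD * sqrt(1 - ICC)."""
          return sd_between_subjects * math.sqrt(1.0 - icc)

      def mdc95(sem_value):
          """Minimally detectable change at 95% confidence: 1.96 * sqrt(2) * SEM."""
          return 1.96 * math.sqrt(2.0) * sem_value

      s = sem(sd_between_subjects=0.7, icc=0.80)  # hypothetical values
      print(f"SEM = {s:.2f} degC, MDC = {mdc95(s):.2f} degC")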

  3. Reference guide to odor thresholds for hazardous air pollutants listed in the Clean Air Act amendments of 1990

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cain, W.S.; Shoaf, C.R.; Velasquez, S.F.

    1992-03-01

    In response to numerous requests for information related to odor thresholds, this document was prepared by the Air Risk Information Support Center in its role in providing technical assistance to State and Local government agencies on risk assessment of air pollutants. A discussion of basic concepts related to olfactory function and the measurement of odor thresholds is presented. A detailed discussion of criteria which are used to evaluate the quality of published odor threshold values is provided. The use of odor threshold information in risk assessment is discussed. The results of a literature search and review of odor threshold information for the chemicals listed as hazardous air pollutants in the Clean Air Act amendments of 1990 are presented. The published odor threshold values are critically evaluated based on the criteria discussed, and the values of acceptable quality are used to determine a geometric mean or best estimate.

  4. 76 FR 60789 - Local Number Portability Porting Interval and Validation Requirements; Telephone Number Portability

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-30

    ... (NANC) recommending a set of standard thresholds and intervals for non-simple ports and ``projects... comment on whether the thresholds and processing timelines for non-simple ports and projects are...: Interested parties may submit comments, identified by WC Docket No. 07-244 and CC Docket No. 95-116, by any...

  5. A new edge detection algorithm based on Canny idea

    NASA Astrophysics Data System (ADS)

    Feng, Yingke; Zhang, Jinmin; Wang, Siming

    2017-10-01

    The traditional Canny algorithm has poor threshold self-adaptability and is sensitive to noise. To overcome these drawbacks, this paper proposes a new edge detection method based on the Canny algorithm. First, median filtering and Euclidean-distance-based filtering are applied to the image; second, the Frei-Chen algorithm is used to calculate the gradient amplitude; finally, the Otsu algorithm is applied to local blocks of the gradient amplitude to obtain threshold values, the calculated thresholds are averaged, half of that average is taken as the high threshold, and half of the high threshold is taken as the low threshold. Experimental results show that the new method effectively suppresses noise, preserves edge information, and improves edge detection accuracy.
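
    A small sketch of the threshold-derivation step as we read it from this abstract (block-wise Otsu thresholds on the gradient amplitude, averaged; the block size and test data are assumptions, not the paper's settings):

      import numpy as np

      def otsu_threshold(values, nbins=256):
          """Classic Otsu: maximize the between-class variance of a 1-D sample."""
          hist, edges = np.histogram(values, bins=nbins)
          p = hist.astype(float) / hist.sum()
          centers = 0.5 * (edges[:-1] + edges[1:])
          w0 = np.cumsum(p)                 # weight of the lower class
          mu = np.cumsum(p * centers)       # cumulative mean
          with np.errstate(divide="ignore", invalid="ignore"):
              sigma_b = (mu[-1] * w0 - mu) ** 2 / (w0 * (1.0 - w0))
          return centers[np.nanargmax(sigma_b)]

      def hysteresis_thresholds(grad_mag, block=64):
          """Average block-wise Otsu thresholds; high = half the average,
          low = half the high, as described in the abstract."""
          h, w = grad_mag.shape
          ts = [otsu_threshold(grad_mag[i:i + block, j:j + block].ravel())
                for i in range(0, h, block) for j in range(0, w, block)]
          high = 0.5 * float(np.mean(ts))
          return high, 0.5 * high

      grad = np.random.default_rng(0).rayleigh(10.0, (128, 128))  # stand-in gradient
      print(hysteresis_thresholds(grad))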

  6. Multidrug resistance among new tuberculosis cases: detecting local variation through lot quality-assurance sampling.

    PubMed

    Hedt, Bethany Lynn; van Leth, Frank; Zignol, Matteo; Cobelens, Frank; van Gemert, Wayne; Nhung, Nguyen Viet; Lyepshina, Svitlana; Egwaga, Saidi; Cohen, Ted

    2012-03-01

    Current methodology for multidrug-resistant tuberculosis (MDR TB) surveys endorsed by the World Health Organization provides estimates of MDR TB prevalence among new cases at the national level. On the aggregate, local variation in the burden of MDR TB may be masked. This paper investigates the utility of applying lot quality-assurance sampling to identify geographic heterogeneity in the proportion of new cases with multidrug resistance. We simulated the performance of lot quality-assurance sampling by applying these classification-based approaches to data collected in the most recent TB drug-resistance surveys in Ukraine, Vietnam, and Tanzania. We explored 3 classification systems (two-way static, three-way static, and three-way truncated sequential sampling) at 2 sets of thresholds: low MDR TB = 2%, high MDR TB = 10%, and low MDR TB = 5%, high MDR TB = 20%. The lot quality-assurance sampling systems identified local variability in the prevalence of multidrug resistance in both high-resistance (Ukraine) and low-resistance settings (Vietnam). In Tanzania, prevalence was uniformly low, and the lot quality-assurance sampling approach did not reveal variability. The three-way classification systems provide additional information, but sample sizes may not be obtainable in some settings. New rapid drug-sensitivity testing methods may allow truncated sequential sampling designs and early stopping within static designs, producing even greater efficiency gains. Lot quality-assurance sampling study designs may offer an efficient approach for collecting critical information on local variability in the burden of multidrug-resistant TB. Before this methodology is adopted, programs must determine appropriate classification thresholds, the most useful classification system, and appropriate weighting if unbiased national estimates are also desired.
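
    As an illustration of the classification idea (a generic two-way LQAS rule with assumed sample size and decision value, not the paper's parameters), the binomial distribution gives the risk of misclassifying an area:

      from math import comb

      def p_classified_high(prevalence, n, d):
          """P(more than d resistant cases among n sampled new cases)."""
          return sum(comb(n, k) * prevalence**k * (1 - prevalence)**(n - k)
                     for k in range(d + 1, n + 1))

      n, d = 50, 3  # hypothetical lot size and decision threshold
      print(p_classified_high(0.02, n, d))  # low-MDR area wrongly flagged
      print(p_classified_high(0.10, n, d))  # high-MDR area correctly flagged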

  7. Received Signal Strength Recovery in Green WLAN Indoor Positioning System Using Singular Value Thresholding

    PubMed Central

    Ma, Lin; Xu, Yubin

    2015-01-01

    Green WLAN is a promising technique for accessing future indoor Internet services. It is designed not only for high-speed data communication purposes but also for energy efficiency. The basic strategy of green WLAN is that all the access points are not always powered on, but rather work on-demand. Though powering off idle access points does not affect data communication, a serious asymmetric matching problem will arise in a WLAN indoor positioning system because the received signal strength (RSS) readings from the available access points are different in the offline and online phases. This asymmetry problem will no doubt invalidate the fingerprint algorithm used to estimate the mobile device location. Therefore, in this paper we propose a green WLAN indoor positioning system, which can recover RSS readings and achieve good localization performance based on singular value thresholding (SVT) theory. By solving the nuclear norm minimization problem, SVT recovers not only the radio map, but also online RSS readings from a sparse matrix by sensing only a fraction of the RSS readings. We have implemented the method in our lab and evaluated its performance. The experimental results indicate that the proposed system could recover the RSS readings and achieve good localization performance. PMID:25587977
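
    A compact sketch of the SVT iteration in the spirit of the abstract (soft-thresholding of singular values with a gradient step on the observed entries; the matrix sizes, sampling rate, and parameters below are illustrative assumptions):

      import numpy as np

      def svt_complete(M, mask, tau=None, delta=1.2, iters=300):
          """Recover a low-rank matrix from the entries where mask == 1."""
          if tau is None:
              tau = 5.0 * np.sqrt(M.size)  # common heuristic for the threshold
          Y = np.zeros_like(M)
          for _ in range(iters):
              U, s, Vt = np.linalg.svd(Y, full_matrices=False)
              X = (U * np.maximum(s - tau, 0.0)) @ Vt  # shrink singular values
              Y += delta * mask * (M - X)              # correct observed entries only
          return X

      rng = np.random.default_rng(1)
      truth = rng.standard_normal((20, 3)) @ rng.standard_normal((3, 15))  # rank-3 "radio map"
      mask = (rng.random(truth.shape) < 0.5).astype(float)  # sense ~50% of the RSS readings
      est = svt_complete(truth * mask, mask)
      print("relative error:", np.linalg.norm(est - truth) / np.linalg.norm(truth))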

  8. Threshold selection for classification of MR brain images by clustering method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moldovanu, Simona; Dumitru Moţoc High School, 15 Milcov St., 800509, Galaţi; Obreja, Cristian

    Given a grey-intensity image, our method detects the optimal threshold for a suitable binarization of MR brain images. In MR brain image processing, the grey levels of pixels belonging to the object are not substantially different from those of the background. Threshold optimization is an effective tool for separating objects from the background and, further, for classification applications. This paper gives a detailed investigation of threshold selection. Our method does not use the well-known binarization methods; instead, we perform a simple threshold optimization which, in turn, allows the best classification of the analyzed images into healthy subjects and multiple sclerosis patients. The dissimilarity (the distance between classes) was established using a clustering method based on dendrograms. We tested our method on two classes of images: 20 T2-weighted and 20 proton density (PD)-weighted scans from two healthy subjects and two patients with multiple sclerosis. For each image and each threshold, the number of white pixels (the area of white objects in the binary image) was determined; these pixel counts serve as the objects in the clustering operation. The following optimum threshold values were obtained: T = 80 for PD images and T = 30 for T2w images. Each threshold clearly separates the clusters belonging to the two studied groups, healthy subjects and patients with multiple sclerosis.

  9. Local load-sharing fiber bundle model in higher dimensions.

    PubMed

    Sinha, Santanu; Kjellstadli, Jonas T; Hansen, Alex

    2015-08-01

    We consider the local load-sharing fiber bundle model in one to five dimensions. Depending on the breaking threshold distribution of the fibers, there is a transition where the fracture process becomes localized. In the localized phase, the model behaves as the invasion percolation model. The difference between the local load-sharing fiber bundle model and the equal load-sharing fiber bundle model vanishes with increasing dimensionality with the characteristics of a power law.

  10. Altered cortical and subcortical connectivity due to infrasound administered near the hearing threshold – Evidence from fMRI

    PubMed Central

    Weichenberger, Markus; Bauer, Martin; Kühler, Robert; Hensel, Johannes; Forlim, Caroline Garcia; Ihlenfeld, Albrecht; Ittermann, Bernd; Gallinat, Jürgen; Koch, Christian; Kühn, Simone

    2017-01-01

    In the present study, the brain’s response towards near- and supra-threshold infrasound (IS) stimulation (sound frequency < 20 Hz) was investigated under resting-state fMRI conditions. The study involved two consecutive sessions. In the first session, 14 healthy participants underwent a hearing threshold measurement as well as a categorical loudness scaling measurement in which the individual loudness perception for IS was assessed across different sound pressure levels (SPL). In the second session, these participants underwent three resting-state acquisitions, one without auditory stimulation (no-tone), one with a monaurally presented 12-Hz IS tone (near-threshold) and one with a similar tone above the individual hearing threshold corresponding to a ‘medium loud’ hearing sensation (supra-threshold). Data analysis mainly focused on local connectivity measures by means of regional homogeneity (ReHo), but also involved independent component analysis (ICA) to investigate inter-regional connectivity. ReHo analysis revealed significantly higher local connectivity in right superior temporal gyrus (STG) adjacent to primary auditory cortex, in anterior cingulate cortex (ACC) and, when allowing smaller cluster sizes, also in the right amygdala (rAmyg) during the near-threshold condition, compared to both the supra-threshold and the no-tone conditions. Additional ICA revealed large-scale changes of functional connectivity, reflected in stronger activation of the rAmyg in the opposite contrast (no-tone > near-threshold) as well as of the right superior frontal gyrus (rSFG) during the near-threshold condition. In summary, this study is the first to demonstrate that infrasound near the hearing threshold may induce changes of neural activity across several brain regions, some of which are known to be involved in auditory processing, while others are regarded as key players in emotional and autonomic control. These findings allow us to speculate on how continuous exposure to (sub-)liminal IS could exert a pathogenic influence on the organism, yet further (especially longitudinal) studies are required to substantiate them. PMID:28403175

  11. Spatiotemporal Quantification of Local Drug Delivery Using MRI

    PubMed Central

    Giers, Morgan B.; McLaren, Alex C.; Plasencia, Jonathan D.; McLemore, Ryan; Caplan, Michael R.

    2013-01-01

    Controlled release formulations for local, in vivo drug delivery are of growing interest to device manufacturers, research scientists, and clinicians; however, most research characterizing controlled release formulations occurs in vitro because the spatial and temporal distribution of drug delivery is difficult to measure in vivo. In this work, in vivo magnetic resonance imaging (MRI) of local drug delivery was performed to visualize and quantify the time resolved distribution of MRI contrast agents. Three-dimensional T1 maps (generated from T1-weighted images with varied TR) were processed using noise-reducing filtering. A segmented region of contrast, from a thresholded image, was converted to concentration maps using the equation 1/T1 = 1/T1,0 + R1·C, where T1,0 and T1 are the precontrast and postcontrast T1 map values, respectively. In this technique, a uniform estimated value for T1,0 was used. Error estimations were performed for each step. The practical usefulness of this method was assessed using comparisons between devices located in different locations both with and without contrast. The method using a uniform T1,0, requiring no registration of pre- and postcontrast image volumes, was compared to a method using either affine or deformation registrations. PMID:23710248
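
    The voxel-wise conversion given in the abstract can be stated directly; a minimal sketch (the relaxivity and precontrast T1 values below are illustrative assumptions):

      import numpy as np

      def concentration_map(t1_post_s, t1_pre_s=1.2, r1=4.5):
          """C = (1/T1 - 1/T1,0) / R1; T1 in seconds, r1 in 1/(mM*s), C in mM."""
          return (1.0 / t1_post_s - 1.0 / t1_pre_s) / r1

      t1_post = np.array([[1.2, 0.8], [0.5, 1.1]])  # toy post-contrast T1 map (s)
      print(concentration_map(t1_post))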

  12. Robust skin color-based moving object detection for video surveillance

    NASA Astrophysics Data System (ADS)

    Kaliraj, Kalirajan; Manimaran, Sudha

    2016-07-01

    Robust skin color-based moving object detection for video surveillance is proposed. The objective of the proposed algorithm is to detect and track the target under complex situations. The proposed framework comprises four stages: preprocessing, skin color-based feature detection, feature classification, and target localization and tracking. In the preprocessing stage, the input image frame is smoothed using an averaging filter and transformed into YCrCb color space. In skin color detection, skin color regions are detected using Otsu's method of global thresholding. In feature classification, histograms of both skin and nonskin regions are constructed and the features are classified into foregrounds and backgrounds based on a Bayesian skin color classifier. The foreground skin regions are localized by a connected component labeling process. The localized foreground skin regions are then confirmed as a target by verifying the region properties, and nontarget regions are rejected using the Euler method. Lastly, the target is tracked by enclosing a bounding box around the target region in all video frames. The experiment was conducted on various publicly available data sets and the performance was evaluated against baseline methods. The results show that the proposed algorithm works well under slowly varying illumination, target rotation, scaling, and fast, abrupt motion changes.

  13. Setting local rank constraints by orthogonal projections for image resolution analysis: application to the determination of a low dose pharmaceutical compound.

    PubMed

    Boiret, Mathieu; de Juan, Anna; Gorretta, Nathalie; Ginot, Yves-Michel; Roger, Jean-Michel

    2015-09-10

    Raman chemical imaging provides chemical and spatial information about a pharmaceutical drug product. By using resolution methods on acquired spectra, the objective is to calculate pure spectra and distribution maps of image compounds. With multivariate curve resolution-alternating least squares, constraints are used to improve the performance of the resolution and to decrease the ambiguity linked to the final solution. Non-negativity and spatial local rank constraints have been identified as the most powerful constraints to be used. In this work, an alternative method to set local rank constraints is proposed, based on an orthogonal projection pretreatment. For each drug product compound, raw Raman spectra are orthogonally projected to a basis including all the variability from the formulation compounds other than the product of interest. Presence or absence of the compound of interest is obtained by observing the correlations between the orthogonally projected spectra and a pure spectrum orthogonally projected to the same basis. By selecting an appropriate threshold, maps of presence/absence can be set up for all the product compounds. This method appears to be a powerful approach for identifying a low dose compound within a pharmaceutical drug product. The maps of presence/absence of compounds can be used as local rank constraints in resolution methods, such as the multivariate curve resolution-alternating least squares process, in order to improve the resolution of the system. The method proposed is particularly suited for pharmaceutical systems, where the identity of all compounds in the formulations is known and, therefore, the space of interferences can be well defined.
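
    A schematic sketch of the orthogonal-projection screening as we read it (toy spectra; not the published chemometrics code): each pixel spectrum is projected orthogonally to the interference subspace and correlated with the equally projected pure spectrum.

      import numpy as np

      def presence_map(spectra, pure, interferents, corr_threshold=0.5):
          """spectra: (n_pixels, n_wavelengths); pure: (n_wavelengths,);
          interferents: rows are the other formulation compounds' spectra.
          corr_threshold is an assumed cut-off."""
          B = interferents.T                              # interference basis
          P = np.eye(B.shape[0]) - B @ np.linalg.pinv(B)  # orthogonal projector
          s_proj = spectra @ P                            # P is symmetric
          p_proj = P @ pure
          p_proj /= np.linalg.norm(p_proj)
          norms = np.linalg.norm(s_proj, axis=1)
          corr = (s_proj @ p_proj) / np.where(norms > 0, norms, 1.0)
          return corr > corr_threshold                    # True where compound detected

      pure = np.array([1.0, 0.2, 0.0])        # 3-wavelength toy example
      interf = np.array([[0.0, 1.0, 1.0]])
      spectra = np.array([pure + 0.5 * interf[0],  # contains the compound
                          0.8 * interf[0]])        # interferent only
      print(presence_map(spectra, pure, interf))   # -> [ True False]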

  14. Psychophysics with children: Investigating the effects of attentional lapses on threshold estimates.

    PubMed

    Manning, Catherine; Jones, Pete R; Dekker, Tessa M; Pellicano, Elizabeth

    2018-03-26

    When assessing the perceptual abilities of children, researchers tend to use psychophysical techniques designed for use with adults. However, children's poorer attentiveness might bias the threshold estimates obtained by these methods. Here, we obtained speed discrimination threshold estimates in 6- to 7-year-old children in UK Key Stage 1 (KS1), 7- to 9-year-old children in Key Stage 2 (KS2), and adults using three psychophysical procedures: QUEST, a 1-up 2-down Levitt staircase, and Method of Constant Stimuli (MCS). We estimated inattentiveness using responses to "easy" catch trials. As expected, children had higher threshold estimates and made more errors on catch trials than adults. Lower threshold estimates were obtained from psychometric functions fit to the data in the QUEST condition than the MCS and Levitt staircases, and the threshold estimates obtained when fitting a psychometric function to the QUEST data were also lower than when using the QUEST mode. This suggests that threshold estimates cannot be compared directly across methods. Differences between the procedures did not vary significantly with age group. Simulations indicated that inattentiveness biased threshold estimates particularly when threshold estimates were computed as the QUEST mode or the average of staircase reversals. In contrast, thresholds estimated by post-hoc psychometric function fitting were less biased by attentional lapses. Our results suggest that some psychophysical methods are more robust to attentiveness, which has important implications for assessing the perception of children and clinical groups.

  15. Renormalized coupled cluster approaches in the cluster-in-molecule framework: predicting vertical electron binding energies of the anionic water clusters (H2O)(n)(-).

    PubMed

    Xu, Peng; Gordon, Mark S

    2014-09-04

    Anionic water clusters are generally considered to be extremely challenging to model using fragmentation approaches due to the diffuse nature of the excess electron distribution. The local correlation coupled cluster (CC) framework cluster-in-molecule (CIM) approach combined with the completely renormalized CR-CC(2,3) method [abbreviated CIM/CR-CC(2,3)] is shown to be a viable alternative for computing the vertical electron binding energies (VEBE). CIM/CR-CC(2,3) with the threshold parameter ζ set to 0.001, as a trade-off between accuracy and computational cost, demonstrates the reliability of predicting the VEBE, with an average percentage error of ∼15% compared to the full ab initio calculation at the same level of theory. The errors are predominantly from the electron correlation energy. The CIM/CR-CC(2,3) approach provides the ease of a black-box type calculation with few threshold parameters to manipulate. The cluster sizes that can be studied by high-level ab initio methods are significantly increased in comparison with full CC calculations. Therefore, the VEBE computed by the CIM/CR-CC(2,3) method can be used as benchmarks for testing model potential approaches in small-to-intermediate-sized water clusters.

  16. Localization-delocalization transition of electrons at the percolation threshold of semiconductor GaAs1-xNx alloys: The appearance of a mobility edge

    NASA Astrophysics Data System (ADS)

    Alberi, K.; Fluegel, B.; Beaton, D. A.; Ptak, A. J.; Mascarenhas, A.

    2012-07-01

    Electrons in semiconductor alloys have generally been described in terms of Bloch states that evolve from constructive interference of electron waves scattering from perfectly periodic potentials, despite the loss of structural periodicity that occurs on alloying. Using the semiconductor alloy GaAs1-xNx as a prototype, we demonstrate a localized to delocalized transition of the electronic states at a percolation threshold, the emergence of a mobility edge, and the onset of an abrupt perturbation to the host GaAs electronic structure, shedding light on the evolution of electronic structure in these abnormal alloys.

  17. Cost-effectiveness thresholds: methods for setting and examples from around the world.

    PubMed

    Santos, André Soares; Guerra-Junior, Augusto Afonso; Godman, Brian; Morton, Alec; Ruas, Cristina Mariano

    2018-06-01

    Cost-effectiveness thresholds (CETs) are used to judge if an intervention represents sufficient value for money to merit adoption in healthcare systems. The study was motivated by the Brazilian context of HTA, where meetings are being conducted to decide on the definition of a threshold. Areas covered: An electronic search was conducted on Medline (via PubMed), Lilacs (via BVS) and ScienceDirect followed by a complementary search of references of included studies, Google Scholar and conference abstracts. Cost-effectiveness thresholds are usually calculated through three different approaches: the willingness-to-pay, representative of welfare economics; the precedent method, based on the value of an already funded technology; and the opportunity cost method, which links the threshold to the volume of health displaced. An explicit threshold has never been formally adopted in most places. Some countries have defined thresholds, with some flexibility to consider other factors. An implicit threshold could be determined by research of funded cases. Expert commentary: CETs have had an important role as a 'bridging concept' between the world of academic research and the 'real world' of healthcare prioritization. The definition of a cost-effectiveness threshold is paramount for the construction of a transparent and efficient Health Technology Assessment system.

  18. A study of the threshold method utilizing raingage data

    NASA Technical Reports Server (NTRS)

    Short, David A.; Wolff, David B.; Rosenfeld, Daniel; Atlas, David

    1993-01-01

    The threshold method for estimation of area-average rain rate relies on determination of the fractional area where rain rate exceeds a preset level of intensity. Previous studies have shown that the optimal threshold level depends on the climatological rain-rate distribution (RRD). It has also been noted, however, that the climatological RRD may be composed of an aggregate of distributions, one for each of several distinctly different synoptic conditions, each having its own optimal threshold. In this study, the impact of RRD variations on the threshold method is shown in an analysis of 1-min rain-rate data from a network of tipping-bucket gauges in Darwin, Australia. Data are analyzed for two distinct regimes: the premonsoon environment, having isolated intense thunderstorms, and the active monsoon rains, having organized convective cell clusters that generate large areas of stratiform rain. It is found that a threshold of 10 mm/h results in the same threshold coefficient for both regimes, suggesting an alternative definition of optimal threshold as that which is least sensitive to distribution variations. The observed behavior of the threshold coefficient is well simulated by assuming lognormal distributions with different scale parameters and the same shape parameter.
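
    Writing the method in its usual form (our notation, not the abstract's), the area-average rain rate is estimated as <R> = S(tau) * F(tau), where F(tau) is the fractional area exceeding the threshold tau and S(tau) is a climatological coefficient. A toy calibration on synthetic lognormal rain rates:

      import numpy as np

      def threshold_coefficient(rain_rates, tau):
          """Calibrate S(tau) = mean rain rate / fraction of readings above tau."""
          frac = np.mean(rain_rates > tau)
          return np.mean(rain_rates) / frac if frac > 0 else float("nan")

      rng = np.random.default_rng(2)
      # two regimes: same lognormal shape parameter, different scale parameters
      premonsoon = rng.lognormal(mean=2.0, sigma=1.0, size=100_000)
      monsoon = rng.lognormal(mean=1.2, sigma=1.0, size=100_000)
      for tau in (1.0, 5.0, 10.0):
          print(tau, threshold_coefficient(premonsoon, tau),
                threshold_coefficient(monsoon, tau))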

  19. Improvement in the measurement error of the specific binding ratio in dopamine transporter SPECT imaging due to exclusion of the cerebrospinal fluid fraction using the threshold of voxel RI count.

    PubMed

    Mizumura, Sunao; Nishikawa, Kazuhiro; Murata, Akihiro; Yoshimura, Kosei; Ishii, Nobutomo; Kokubo, Tadashi; Morooka, Miyako; Kajiyama, Akiko; Terahara, Atsuro

    2018-05-01

    In Japan, the Southampton method for dopamine transporter (DAT) SPECT is widely used to quantitatively evaluate striatal radioactivity. The specific binding ratio (SBR) is the ratio of specific to non-specific binding observed after placing pentagonal striatal voxels of interest (VOIs) as references. Although the method can reduce the partial volume effect, the SBR may fluctuate due to the presence of low-count areas of cerebrospinal fluid (CSF), caused by brain atrophy, in the striatal VOIs. We examined the effect of the exclusion of low-count voxels on SBR measurement. We retrospectively reviewed DAT imaging of 36 patients with parkinsonian syndromes performed after injection of 123I-FP-CIT. SPECT data were reconstructed using three conditions. We defined the CSF area in each SPECT image after segmenting the brain tissues. A merged image of gray and white matter images was constructed from each patient's magnetic resonance imaging (MRI) to create an idealized brain image that excluded the CSF fraction (MRI-mask method). We calculated the SBR and asymmetric index (AI) in the MRI-mask method for each reconstruction condition. We then calculated the mean and standard deviation (SD) of voxel RI counts in the reference VOI without the striatal VOIs in each image, and determined the SBR by excluding the low-count pixels (threshold method) using five thresholds: mean-0.0SD, mean-0.5SD, mean-1.0SD, mean-1.5SD, and mean-2.0SD. We also calculated the AIs from the SBRs measured using the threshold method. We examined the correlation among the SBRs of the threshold method, between the uncorrected SBRs and the SBRs of the MRI-mask method, and between the uncorrected AIs and the AIs of the MRI-mask method. The intraclass correlation coefficient indicated an extremely high correlation among the SBRs and among the AIs of the MRI-mask and threshold methods at thresholds between mean-2.0SD and mean-1.0SD, regardless of the reconstruction correction. The differences among the SBRs and the AIs of the two methods were smallest at thresholds between mean-2.0SD and mean-1.0SD. The SBR calculated using the threshold method was highly correlated with the MRI-SBR. These results suggest that the CSF correction of the threshold method is effective for the calculation of idealized SBR and AI values.
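
    A minimal sketch of the voxel-exclusion rule as we read it (a simplified SBR formula and an assumed k; not the clinical software):

      import numpy as np

      def sbr_with_csf_exclusion(striatal_voxels, reference_voxels, k=1.5):
          """Drop reference voxels below mean - k*SD (likely CSF), then use a
          simplified SBR = (striatal mean - reference mean) / reference mean."""
          ref = np.asarray(reference_voxels, dtype=float)
          ref_mean = ref[ref > ref.mean() - k * ref.std()].mean()
          return (np.mean(striatal_voxels) - ref_mean) / ref_mean

      striatum = np.array([220.0, 240.0, 230.0])
      reference = np.array([100.0, 95.0, 20.0, 98.0, 102.0])  # 20.0 mimics CSF
      print(round(sbr_with_csf_exclusion(striatum, reference), 2))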

  20. Stable Extraction of Threshold Voltage Using Transconductance Change Method for CMOS Modeling, Simulation and Characterization

    NASA Astrophysics Data System (ADS)

    Choi, Woo Young; Woo, Dong-Soo; Choi, Byung Yong; Lee, Jong Duk; Park, Byung-Gook

    2004-04-01

    We propose a stable extraction algorithm for threshold voltage using the transconductance change method by optimizing the node interval. With the algorithm, noise-free gm2 (=dgm/dVGS) profiles can be extracted within one-percent error, which leads to a more physically meaningful threshold voltage calculation by the transconductance change method. The extracted threshold voltage predicts the gate-to-source voltage at which the surface potential is within kT/q of φs = 2φf + VSB. Our algorithm makes the transconductance change method more practical by overcoming its noise problem. This threshold voltage extraction algorithm yields the threshold roll-off behavior of nanoscale metal oxide semiconductor field effect transistors (MOSFETs) accurately and makes it possible to calculate the surface potential φs at any other point on the drain-to-source current (IDS) versus gate-to-source voltage (VGS) curve. It will provide us with a useful analysis tool in the field of device modeling, simulation and characterization.
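
    A generic numerical version of the transconductance change extraction (a sketch with a toy square-law device; it omits the paper's node-interval optimization): the threshold voltage is taken where d(gm)/dVGS peaks.

      import numpy as np

      def vt_transconductance_change(vgs, ids, h=3):
          """h = differentiation interval in samples (wider -> smoother gm2)."""
          gm = (ids[2 * h:] - ids[:-2 * h]) / (vgs[2 * h:] - vgs[:-2 * h])
          v_gm = vgs[h:-h]
          gm2 = (gm[2 * h:] - gm[:-2 * h]) / (v_gm[2 * h:] - v_gm[:-2 * h])
          return v_gm[h:-h][np.argmax(gm2)]

      vgs = np.linspace(0.0, 1.5, 301)
      ids = np.where(vgs > 0.6, 1e-4 * (vgs - 0.6) ** 2, 0.0)  # toy device, VT = 0.6 V
      print(vt_transconductance_change(vgs, ids))  # close to 0.6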

  1. Threshold Assessment of Gear Diagnostic Tools on Flight and Test Rig Data

    NASA Technical Reports Server (NTRS)

    Dempsey, Paula J.; Mosher, Marianne; Huff, Edward M.

    2003-01-01

    A method for defining thresholds for vibration-based algorithms that provides the minimum number of false alarms while maintaining sensitivity to gear damage was developed. This analysis focused on two vibration-based gear damage detection algorithms, FM4 and MSA. The method was developed using vibration data collected during surface fatigue tests performed in a spur gearbox rig. The thresholds were defined based on damage progression during tests with damage. The thresholds' false alarm rates were then evaluated on spur gear tests without damage. Next, the same thresholds were applied to flight data from an OH-58 helicopter transmission. Results showed that thresholds defined in test rigs can be used to define thresholds in flight to correctly classify the transmission operation as normal.

  2. Reducing noise component on medical images

    NASA Astrophysics Data System (ADS)

    Semenishchev, Evgeny; Voronin, Viacheslav; Dub, Vladimir; Balabaeva, Oksana

    2018-04-01

    Medical visualization and the analysis of medical data are active research areas. Medical images are used in microbiology, genetics, roentgenology, oncology, surgery, ophthalmology, etc. Initial data processing is a major step towards obtaining a good diagnostic result. This paper considers an approach that filters images while preserving object borders. The proposed algorithm is based on sequential data processing. In the first stage, local areas are determined; for this purpose, threshold processing and the classical ICI algorithm are applied. The second stage uses a method based on two criteria, namely the L2 norm and the first-order square difference. To preserve object boundaries, the transition boundary and its local neighborhood are processed with a fixed-coefficient filtering algorithm. Reconstructed images from CT, x-ray, and microbiological studies are shown as examples. The test images demonstrate the effectiveness of the proposed algorithm and its applicability to many medical imaging applications.

  3. Acoustic localization at large scales: a promising method for grey wolf monitoring.

    PubMed

    Papin, Morgane; Pichenot, Julian; Guérold, François; Germain, Estelle

    2018-01-01

    The grey wolf (Canis lupus) is naturally recolonizing its former habitats in Europe where it was extirpated during the previous two centuries. The management of this protected species is often controversial and its monitoring is a challenge for conservation purposes. However, this elusive carnivore can disperse over long distances in various natural contexts, making its monitoring difficult. Moreover, methods used for collecting signs of presence are usually time-consuming and/or costly. Currently, new acoustic recording tools are contributing to the development of passive acoustic methods as alternative approaches for detecting, monitoring, or identifying species that produce sounds in nature, such as the grey wolf. In the present study, we conducted field experiments to investigate the possibility of using a low-density microphone array to localize wolves at a large scale in two contrasting natural environments in north-eastern France. For scientific and social reasons, the experiments were based on a synthetic sound with similar acoustic properties to howls. This sound was broadcast at several sites. Then, localization estimates and the accuracy were calculated. Finally, linear mixed-effects models were used to identify the factors that influenced the localization accuracy. Among 354 nocturnal broadcasts in total, 269 were recorded by at least one autonomous recorder, thereby demonstrating the potential of this tool. Furthermore, 59 broadcasts were recorded by at least four microphones and used for acoustic localization. The broadcast sites were localized with an overall mean accuracy of 315 ± 617 (standard deviation) m. After setting a threshold for the temporal error value associated with the estimated coordinates, some unreliable values were excluded and the mean accuracy decreased to 167 ± 308 m. The number of broadcasts recorded was higher in the lowland environment, but the localization accuracy was similar in both environments, although it varied significantly among different nights in each study area. Our results confirm the potential of using acoustic methods to localize wolves with high accuracy, in different natural environments and at large spatial scales. Passive acoustic methods are suitable for monitoring the dynamics of grey wolf recolonization and so will contribute to enhancing conservation and management plans.

  4. Linking removal targets to the ecological effects of invaders: a predictive model and field test.

    PubMed

    Green, Stephanie J; Dulvy, Nicholas K; Brooks, Annabelle M L; Akins, John L; Cooper, Andrew B; Miller, Skylar; Côté, Isabelle M

    Species invasions have a range of negative effects on recipient ecosystems, and many occur at a scale and magnitude that preclude complete eradication. When complete extirpation is unlikely with available management resources, an effective strategy may be to suppress invasive populations below levels predicted to cause undesirable ecological change. We illustrated this approach by developing and testing targets for the control of invasive Indo-Pacific lionfish (Pterois volitans and P. miles) on Western Atlantic coral reefs. We first developed a size-structured simulation model of predation by lionfish on native fish communities, which we used to predict threshold densities of lionfish beyond which native fish biomass should decline. We then tested our predictions by experimentally manipulating lionfish densities above or below reef-specific thresholds, and monitoring the consequences for native fish populations on 24 Bahamian patch reefs over 18 months. We found that reducing lionfish below predicted threshold densities effectively protected native fish community biomass from predation-induced declines. Reductions in density of 25–92%, depending on the reef, were required to suppress lionfish below levels predicted to overconsume prey. On reefs where lionfish were kept below threshold densities, native prey fish biomass increased by 50–70%. Gains in small (<6 cm) size classes of native fishes translated into lagged increases in larger size classes over time. The biomass of larger individuals (>15 cm total length), including ecologically important grazers and economically important fisheries species, had increased by 10–65% by the end of the experiment. Crucially, similar gains in prey fish biomass were realized on reefs subjected to partial and full removal of lionfish, but partial removals took 30% less time to implement. By contrast, the biomass of small native fishes declined by >50% on all reefs with lionfish densities exceeding reef-specific thresholds. Large inter-reef variation in the biomass of prey fishes at the outset of the study, which influences the threshold density of lionfish, means that we could not identify a single rule of thumb for guiding control efforts. However, our model provides a method for setting reef-specific targets for population control using local monitoring data. Our work is the first to demonstrate that for ongoing invasions, suppressing invaders below densities that cause environmental harm can have a similar effect, in terms of protecting the native ecosystem on a local scale, to achieving complete eradication.

  5. Discriminating the precipitation phase based on different temperature thresholds in the Songhua River Basin, China

    NASA Astrophysics Data System (ADS)

    Zhong, Keyuan; Zheng, Fenli; Xu, Ximeng; Qin, Chao

    2018-06-01

    Different precipitation phases (rain, snow or sleet) differ greatly in their hydrological and erosional processes. Therefore, accurate discrimination of the precipitation phase is highly important when researching hydrologic processes and climate change at high latitudes and in mountainous regions. The objective of this study was to identify suitable temperature thresholds for discriminating the precipitation phase in the Songhua River Basin (SRB) based on 20 years of daily precipitation data collected from 60 meteorological stations located in and around the basin. Two methods, the air temperature method (AT method) and the wet bulb temperature method (WBT method), were used to discriminate the precipitation phase. Thirteen temperature thresholds were used to discriminate snowfall in the SRB. These thresholds included air temperatures from 0 to 5.5 °C at intervals of 0.5 °C and the wet bulb temperature (WBT). Three evaluation indices, the error percentage of discriminated snowfall days (Ep), the relative error of discriminated snowfall (Re) and the determination coefficient (R2), were applied to assess the discrimination accuracy. The results showed that 2.5 °C was the optimum threshold temperature for discriminating snowfall at the scale of the entire basin. Due to differences in the landscape conditions at the different stations, the optimum threshold varied by station. The optimal threshold ranged from 1.5 to 4.0 °C; 19, 17, and 18 stations had optimal thresholds of 2.5 °C, 3.0 °C, and 3.5 °C respectively, together accounting for 90% of all stations. Compared with using a single suitable temperature threshold to discriminate snowfall throughout the basin, it was more accurate to use the optimum threshold at each station to estimate snowfall in the basin. In addition, snowfall was underestimated when the temperature threshold was the WBT and when the temperature threshold was below 2.5 °C, whereas snowfall was overestimated when the temperature threshold exceeded 4.0 °C at most stations. The results of this study provide information for climate change research and hydrological process simulations in the SRB, as well as reference information for discriminating precipitation phase in other regions.
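
    A minimal sketch of the two discrimination rules (assumed forms: the AT method compares daily air temperature with a fixed threshold; the WBT is approximated here with Stull's 2011 empirical formula, which is our choice, not necessarily the paper's):

      import numpy as np

      def phase_at(t_air_c, threshold_c=2.5):
          """AT method: snow at or below the threshold, rain above."""
          return np.where(t_air_c <= threshold_c, "snow", "rain")

      def wet_bulb_stull(t_air_c, rh_percent):
          """Stull (2011) wet-bulb approximation from T (deg C) and RH (%)."""
          rh = rh_percent
          return (t_air_c * np.arctan(0.151977 * np.sqrt(rh + 8.313659))
                  + np.arctan(t_air_c + rh) - np.arctan(rh - 1.676331)
                  + 0.00391838 * rh ** 1.5 * np.arctan(0.023101 * rh)
                  - 4.686035)

      print(phase_at(np.array([-3.0, 1.0, 4.0])))  # -> ['snow' 'snow' 'rain']
      print(wet_bulb_stull(2.0, 80.0))             # wet-bulb temperature, deg C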

  6. Threshold Fatigue Crack Growth in Ti-6Al-2Sn-4Zr-6Mo.

    DTIC Science & Technology

    1987-12-01

    Report fragments (recovered from the scanned table of contents and text excerpts): I. Introduction; Overview; Background ... threshold region. 7. All experiments were conducted under fully automated computer control using a laser interferometric displacement gage (IDG) to ... reduction in the local driving force. This non-linear crack appears to grow slower than a linear crack and therefore results in lower than actual computed ...

  7. DYNAMIC PATTERN RECOGNITION BY MEANS OF THRESHOLD NETS,

    DTIC Science & Technology

    A method is expounded for the recognition of visual patterns. A circuit diagram of a device is described which is based on a multilayer threshold structure synthesized in accordance with the proposed method. Coded signals received each time an image is displayed are transmitted to the threshold circuit which distinguishes the signs, and from there to the layers of threshold resolving elements. The image at each layer is made to correspond ...

  8. Rainfall Threshold for Flash Flood Early Warning Based on Rational Equation: A Case Study of Zuojiao Watershed in Yunnan Province

    NASA Astrophysics Data System (ADS)

    Li, Q.; Wang, Y. L.; Li, H. C.; Zhang, M.; Li, C. Z.; Chen, X.

    2017-12-01

    The rainfall threshold plays an important role in flash flood warning. A simple method that uses the Rational Equation to calculate the rainfall threshold is proposed in this study. The critical rainfall equation was deduced from the Rational Equation. On the basis of the Manning equation and the results of the Chinese Flash Flood Survey and Evaluation (CFFSE) Project, the critical flow was obtained and the net rainfall was calculated. Three components of the rainfall losses, i.e. depression storage, vegetation interception, and soil infiltration, were considered. The critical rainfall is the sum of the net rainfall and the rainfall losses; the rainfall threshold was then estimated from the critical rainfall after accounting for watershed soil moisture. To demonstrate the method, the Zuojiao watershed in Yunnan Province was chosen as the study area. The results showed that the rainfall thresholds calculated by the Rational Equation method were close to those obtained from the CFFSE and were in accordance with the observed rainfall during flash flood events, so the calculated results are reasonable and the method is effective. This study provides a quick and convenient way for grassroots staff to calculate rainfall thresholds for flash flood warning and offers technical support for estimating rainfall thresholds.
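
    A hedged sketch of the calculation chain (the 0.278 factor is the usual SI-unit form of the Rational Equation with i in mm/h and A in km2; the loss terms below are placeholders, not the paper's values):

      def critical_rainfall(q_crit_m3s, runoff_coeff, area_km2, duration_h,
                            depression_mm=2.0, interception_mm=1.5,
                            infiltration_mm=8.0):
          """Rational Equation Q = 0.278*C*i*A inverted for the critical
          intensity, then losses added to get the critical rainfall depth."""
          i_crit = q_crit_m3s / (0.278 * runoff_coeff * area_km2)  # mm/h
          net_rain_mm = i_crit * duration_h
          return net_rain_mm + depression_mm + interception_mm + infiltration_mm

      print(f"{critical_rainfall(35.0, 0.6, 12.0, 1.0):.1f} mm")  # toy inputs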

  9. Robust segmentation of trabecular bone for in vivo CT imaging using anisotropic diffusion and multi-scale morphological reconstruction

    NASA Astrophysics Data System (ADS)

    Chen, Cheng; Jin, Dakai; Zhang, Xiaoliu; Levy, Steven M.; Saha, Punam K.

    2017-03-01

    Osteoporosis is associated with an increased risk of low-trauma fractures. Segmentation of trabecular bone (TB) is essential to assess TB microstructure, which is a key determinant of bone strength and fracture risk. Here, we present a new method for TB segmentation for in vivo CT imaging. The method uses Hessian matrix-guided anisotropic diffusion to improve local separability of trabecular structures, followed by a new multi-scale morphological reconstruction algorithm for TB segmentation. High sensitivity (0.93), specificity (0.93), and accuracy (0.92) were observed for the new method based on regional manual thresholding on in vivo CT images. Mechanical tests have shown that TB segmentation using the new method improved the ability of derived TB spacing measure for predicting actual bone strength (R2=0.83).

  10. A threshold method for immunological correlates of protection

    PubMed Central

    2013-01-01

    Background Immunological correlates of protection are biological markers such as disease-specific antibodies which correlate with protection against disease and which are measurable with immunological assays. It is common in vaccine research and in setting immunization policy to rely on threshold values for the correlate where the accepted threshold differentiates between individuals who are considered to be protected against disease and those who are susceptible. Examples where thresholds are used include development of a new generation 13-valent pneumococcal conjugate vaccine which was required in clinical trials to meet accepted thresholds for the older 7-valent vaccine, and public health decision making on vaccination policy based on long-term maintenance of protective thresholds for Hepatitis A, rubella, measles, Japanese encephalitis and others. Despite widespread use of such thresholds in vaccine policy and research, few statistical approaches have been formally developed which specifically incorporate a threshold parameter in order to estimate the value of the protective threshold from data. Methods We propose a 3-parameter statistical model called the a:b model which incorporates parameters for a threshold and constant but different infection probabilities below and above the threshold estimated using profile likelihood or least squares methods. Evaluation of the estimated threshold can be performed by a significance test for the existence of a threshold using a modified likelihood ratio test which follows a chi-squared distribution with 3 degrees of freedom, and confidence intervals for the threshold can be obtained by bootstrapping. The model also permits assessment of relative risk of infection in patients achieving the threshold or not. Goodness-of-fit of the a:b model may be assessed using the Hosmer-Lemeshow approach. The model is applied to 15 datasets from published clinical trials on pertussis, respiratory syncytial virus and varicella. Results Highly significant thresholds with p-values less than 0.01 were found for 13 of the 15 datasets. Considerable variability was seen in the widths of confidence intervals. Relative risks indicated around 70% or better protection in 11 datasets and relevance of the estimated threshold to imply strong protection. Goodness-of-fit was generally acceptable. Conclusions The a:b model offers a formal statistical method of estimation of thresholds differentiating susceptible from protected individuals which has previously depended on putative statements based on visual inspection of data. PMID:23448322
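
    A sketch of the a:b fit by profile likelihood as we understand it from the abstract (constant infection probability a below a candidate threshold and b above it; synthetic data, not the trial datasets):

      import numpy as np

      def fit_ab_threshold(titers, infected):
          """Return the threshold maximizing the profiled binomial log-likelihood."""
          t = np.asarray(titers, dtype=float)
          y = np.asarray(infected, dtype=int)
          best_ll, best_thr = -np.inf, None
          for thr in np.unique(t)[1:]:         # candidates between observed titers
              ll = 0.0
              for grp in (y[t < thr], y[t >= thr]):
                  k, n = grp.sum(), grp.size
                  p = k / n
                  if 0 < p < 1:                # p = 0 or 1 contributes 0
                      ll += k * np.log(p) + (n - k) * np.log1p(-p)
              if ll > best_ll:
                  best_ll, best_thr = ll, thr
          return best_thr

      rng = np.random.default_rng(3)
      titers = rng.uniform(0.0, 10.0, 300)
      infected = rng.random(300) < np.where(titers < 4.0, 0.5, 0.1)  # true threshold 4
      print(fit_ab_threshold(titers, infected))  # close to 4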

  11. Zone-size nonuniformity of 18F-FDG PET regional textural features predicts survival in patients with oropharyngeal cancer.

    PubMed

    Cheng, Nai-Ming; Fang, Yu-Hua Dean; Lee, Li-yu; Chang, Joseph Tung-Chieh; Tsan, Din-Li; Ng, Shu-Hang; Wang, Hung-Ming; Liao, Chun-Ta; Yang, Lan-Yan; Hsu, Ching-Han; Yen, Tzu-Chen

    2015-03-01

    The question as to whether the regional textural features extracted from PET images predict prognosis in oropharyngeal squamous cell carcinoma (OPSCC) remains open. In this study, we investigated the prognostic impact of regional heterogeneity in patients with T3/T4 OPSCC. We retrospectively reviewed the records of 88 patients with T3 or T4 OPSCC who had completed primary therapy. Progression-free survival (PFS) and disease-specific survival (DSS) were the main outcome measures. In an exploratory analysis, a standardized uptake value of 2.5 (SUV 2.5) was taken as the cut-off value for the detection of tumour boundaries. A fixed threshold at 42 % of the maximum SUV (SUVmax 42 %) and an adaptive threshold method were then used for validation. Regional textural features were extracted from pretreatment (18)F-FDG PET/CT images using the grey-level run length encoding method and grey-level size zone matrix. The prognostic significance of PET textural features was examined using receiver operating characteristic (ROC) curves and Cox regression analysis. Zone-size nonuniformity (ZSNU) was identified as an independent predictor of PFS and DSS. Its prognostic impact was confirmed using both the SUVmax 42 % and the adaptive threshold segmentation methods. Based on (1) total lesion glycolysis, (2) uniformity (a local scale texture parameter), and (3) ZSNU, we devised a prognostic stratification system that allowed the identification of four distinct risk groups. The model combining the three prognostic parameters showed a higher predictive value than each variable alone. ZSNU is an independent predictor of outcome in patients with advanced T-stage OPSCC, and may improve their prognostic stratification.

  12. Image guidance doses delivered during radiotherapy: Quantification, management, and reduction: Report of the AAPM Therapy Physics Committee Task Group 180.

    PubMed

    Ding, George X; Alaei, Parham; Curran, Bruce; Flynn, Ryan; Gossman, Michael; Mackie, T Rock; Miften, Moyed; Morin, Richard; Xu, X George; Zhu, Timothy C

    2018-05-01

    With radiotherapy having entered the era of image guidance, or image-guided radiation therapy (IGRT), imaging procedures are routinely performed for patient positioning and target localization. The imaging dose delivered may result in excessive dose to sensitive organs and potentially increase the chance of secondary cancers and, therefore, needs to be managed. This task group was charged with: a) providing an overview on imaging dose, including megavoltage electronic portal imaging (MV EPI), kilovoltage digital radiography (kV DR), Tomotherapy MV-CT, megavoltage cone-beam CT (MV-CBCT) and kilovoltage cone-beam CT (kV-CBCT), and b) providing general guidelines for commissioning dose calculation methods and managing imaging dose to patients. We briefly review the dose to radiotherapy (RT) patients resulting from different image guidance procedures and list typical organ doses resulting from MV and kV image acquisition procedures. We provide recommendations for managing the imaging dose, including different methods for its calculation, and techniques for reducing it. The recommended threshold beyond which imaging dose should be considered in the treatment planning process is 5% of the therapeutic target dose. Although the imaging dose resulting from current kV acquisition procedures is generally below this threshold, the ALARA principle should always be applied in practice. Medical physicists should make radiation oncologists aware of the imaging doses delivered to patients under their care. Balancing ALARA with the requirement for effective target localization requires that imaging dose be managed based on the consideration of weighing risks and benefits to the patient. © 2018 American Association of Physicists in Medicine.

  13. Developing Bayesian adaptive methods for estimating sensitivity thresholds (d′) in Yes-No and forced-choice tasks

    PubMed Central

    Lesmes, Luis A.; Lu, Zhong-Lin; Baek, Jongsoo; Tran, Nina; Dosher, Barbara A.; Albright, Thomas D.

    2015-01-01

    Motivated by Signal Detection Theory (SDT), we developed a family of novel adaptive methods that estimate the sensitivity threshold—the signal intensity corresponding to a pre-defined sensitivity level (d′ = 1)—in Yes-No (YN) and Forced-Choice (FC) detection tasks. Rather than focusing stimulus sampling to estimate a single level of %Yes or %Correct, the current methods sample psychometric functions more broadly, to concurrently estimate sensitivity and decision factors, and thereby estimate thresholds that are independent of decision confounds. Developed for four tasks—(1) simple YN detection, (2) cued YN detection, which cues the observer's response state before each trial, (3) rated YN detection, which incorporates a Not Sure response, and (4) FC detection—the qYN and qFC methods yield sensitivity thresholds that are independent of the task's decision structure (YN or FC) and/or the observer's subjective response state. Results from simulation and psychophysics suggest that 25 trials (and sometimes fewer) are sufficient to estimate YN thresholds with reasonable precision (s.d. = 0.10–0.15 decimal log units), but more trials are needed for FC thresholds. When the same subjects were tested across tasks of simple, cued, rated, and FC detection, adaptive threshold estimates exhibited excellent agreement with the method of constant stimuli (MCS) and with each other. These YN adaptive methods deliver criterion-free thresholds that have previously been exclusive to FC methods. PMID:26300798
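    The underlying SDT quantity is simple to compute; a minimal sketch (the log-linear correction for extreme rates is an assumption, not necessarily the paper's choice) is:

```python
import numpy as np
from scipy.stats import norm

def dprime(hits, misses, fas, crs):
    # Equal-variance SDT sensitivity from Yes-No counts, with a
    # log-linear correction keeping rates away from 0 and 1
    h = (hits + 0.5) / (hits + misses + 1.0)
    f = (fas + 0.5) / (fas + crs + 1.0)
    return norm.ppf(h) - norm.ppf(f)

def sensitivity_threshold(intensities, dprimes, target=1.0):
    # Intensity at which d' reaches the target level; assumes d'
    # grows monotonically with intensity
    return float(np.interp(target, dprimes, intensities))
```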

  14. Full melting of a two-dimensional complex plasma crystal triggered by localized pulsed laser heating

    NASA Astrophysics Data System (ADS)

    Couëdel, L.; Nosenko, V.; Rubin-Zuzic, M.; Zhdanov, S.; Elskens, Y.; Hall, T.; Ivlev, A. V.

    2018-04-01

    The full melting of a two-dimensional plasma crystal was induced in a principally stable monolayer by localized laser stimulation. Two distinct behaviors of the crystal after laser stimulation were observed depending on the amount of injected energy: (i) below a well-defined threshold, the laser melted area recrystallized; (ii) above the threshold, it expanded outwards in a similar fashion to mode-coupling instability-induced melting, rapidly destroying the crystalline order of the whole complex plasma monolayer. The reported experimental observations are due to the fluid mode-coupling instability, which can pump energy into the particle monolayer at a rate surpassing the heat transport and damping rates in the energetic localized melted spot, resulting in its further growth. This behavior exhibits remarkable similarities with impulsive spot heating in ordinary reactive matter.

  15. Mapping Shallow Landslide Slope Instability at Large Scales Using Remote Sensing and GIS

    NASA Astrophysics Data System (ADS)

    Avalon Cullen, C.; Kashuk, S.; Temimi, M.; Suhili, R.; Khanbilvardi, R.

    2015-12-01

    Rainfall-induced landslides are among the most frequent hazards on slanted terrains. They lead to great economic losses and fatalities worldwide. Most factors inducing shallow landslides are local and can only be mapped with high levels of uncertainty at larger scales. This work presents an attempt to determine slope instability at large scales. Buffer and threshold techniques are used to downscale areas and minimize uncertainties. Four static parameters (slope angle, soil type, land cover and elevation) for 261 shallow rainfall-induced landslides in the continental United States are examined. ASTER GDEM is used as the basis for topographical characterization of slope and for buffer analysis. Slope angle thresholds at the 50th, 75th, 95th, 98th, and 99th percentiles are tested locally. Each threshold is then analyzed in relation to the other parameters in a logistic regression framework for the continental U.S. Thresholds below the 95th percentile are found to underestimate slope angles. The best regression fit is achieved with the 99th-percentile slope angle; this model classifies the highest number of cases correctly, at 87.0% accuracy, and a one-unit rise in the 99th-percentile slope angle increases landslide likelihood by 11.8% (a minimal sketch of such a regression is given below). The logistic regression model is carried over to ArcGIS, where all variables are processed based on their corresponding coefficients. A regional slope instability map for the continental United States is created and analyzed against the available landslide records and their spatial distributions. It is expected that future inclusion of dynamic parameters such as precipitation, and other proxies such as soil moisture, will further improve accuracy.
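    A minimal sketch of such a percentile-threshold logistic regression on synthetic stand-in data (all variable names and coefficients here are hypothetical):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
# Synthetic stand-ins for the four static parameters (hypothetical)
slope_p99 = rng.uniform(0, 45, n)        # 99th-percentile slope angle (deg)
elevation = rng.uniform(0, 3000, n)
soil = rng.integers(0, 5, n)
cover = rng.integers(0, 4, n)
logit = 0.11 * slope_p99 - 4.0           # steeper slopes -> more likely
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

X = np.column_stack([slope_p99, elevation, soil, cover])
model = LogisticRegression(max_iter=1000).fit(X, y)

# Odds-ratio reading: a one-degree rise in the 99th-percentile slope
# multiplies the landslide odds by exp(coefficient)
print("odds ratio per degree:", np.exp(model.coef_[0][0]))
```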

  16. Methods for SBS Threshold Reduction

    DTIC Science & Technology

    1994-01-30

    We have investigated methods for reducing the threshold for stimulated Brillouin scattering (SBS) using a frequency-narrowed Cr,Tm,Ho:YAG laser...operating at 2.12 micrometers. The SBS medium was carbon disulfide. Single-focus SBS and threshold reduction by using two foci, a loop, and a ring have

  17. Defining ADHD symptom persistence in adulthood: optimizing sensitivity and specificity.

    PubMed

    Sibley, Margaret H; Swanson, James M; Arnold, L Eugene; Hechtman, Lily T; Owens, Elizabeth B; Stehli, Annamarie; Abikoff, Howard; Hinshaw, Stephen P; Molina, Brooke S G; Mitchell, John T; Jensen, Peter S; Howard, Andrea L; Lakes, Kimberley D; Pelham, William E

    2017-06-01

    Longitudinal studies of children diagnosed with ADHD report widely ranging ADHD persistence rates in adulthood (5-75%). This study documents how information source (parent vs. self-report), method (rating scale vs. interview), and symptom threshold (DSM vs. norm-based) influence reported ADHD persistence rates in adulthood. Five hundred seventy-nine children were diagnosed with DSM-IV ADHD-Combined Type at baseline (ages 7.0-9.9 years); 289 classmates served as a local normative comparison group (LNCG). Of these, 476 and 241, respectively, were evaluated in adulthood (mean age = 24.7 years). Parent and self-reports of symptoms and impairment on rating scales and structured interviews were used to investigate ADHD persistence in adulthood. Persistence rates were higher when using parent rather than self-reports, structured interviews rather than rating scales (for self-report but not parent report), and a norm-based (NB) threshold of 4 symptoms rather than DSM criteria. Receiver Operating Characteristic (ROC) analyses revealed that sensitivity and specificity were optimized by combining parent and self-reports on a rating scale and applying a NB threshold. The interview format optimizes young adult self-reporting when parent reports are not available. However, the combination of parent and self-reports from rating scales, using an 'or' rule and a NB threshold, optimized the balance between sensitivity and specificity. With this definition, 60% of the ADHD group demonstrated symptom persistence and 41% met both symptom and impairment criteria in adulthood. © 2016 Association for Child and Adolescent Mental Health.

  18. Single-sensor system for spatially resolved, continuous, and multiparametric optical mapping of cardiac tissue

    PubMed Central

    Lee, Peter; Bollensdorff, Christian; Quinn, T. Alexander; Wuskell, Joseph P.; Loew, Leslie M.; Kohl, Peter

    2011-01-01

    Background Simultaneous optical mapping of multiple electrophysiologically relevant parameters in living myocardium is desirable for integrative exploration of mechanisms underlying heart rhythm generation under normal and pathophysiologic conditions. Current multiparametric methods are technically challenging, usually involving multiple sensors and moving parts, which contributes to high logistic and economic thresholds that prevent easy application of the technique. Objective The purpose of this study was to develop a simple, affordable, and effective method for spatially resolved, continuous, simultaneous, and multiparametric optical mapping of the heart, using a single camera. Methods We present a new method to simultaneously monitor multiple parameters using inexpensive off-the-shelf electronic components and no moving parts. The system comprises a single camera, commercially available optical filters, and light-emitting diodes (LEDs), integrated via microcontroller-based electronics for frame-accurate illumination of the tissue. For proof of principle, we illustrate measurement of four parameters, suitable for ratiometric mapping of membrane potential (di-4-ANBDQPQ) and intracellular free calcium (fura-2), in an isolated Langendorff-perfused rat heart during sinus rhythm and ectopy, induced by local electrical or mechanical stimulation. Results The pilot application demonstrates the suitability of this imaging approach for heart rhythm research in the isolated heart. In addition, locally induced excitation, whether stimulated electrically or mechanically, gives rise to similar ventricular propagation patterns. Conclusion Combining an affordable camera with suitable optical filters and microprocessor-controlled LEDs, single-sensor multiparametric optical mapping can be practically implemented in a simple yet powerful configuration and applied to heart rhythm research. The moderate system complexity and component cost are expected to lower the threshold to broader application of functional imaging and to ease implementation of more complex optical mapping approaches, such as multiparametric panoramic imaging. A proof-of-principle application confirmed that although electrically and mechanically induced excitation occur by different mechanisms, their electrophysiologic consequences downstream from the point of activation are not dissimilar. PMID:21459161

  19. A new iterative triclass thresholding technique in image segmentation.

    PubMed

    Cai, Hongmin; Yang, Zhong; Cao, Xinhua; Xia, Weiming; Xu, Xiaoyin

    2014-03-01

    We present a new method for image segmentation that is based on Otsu's method but iteratively searches subregions of the image for segmentation, instead of treating the full image as a single region. The iterative method starts with Otsu's threshold and computes the mean values of the two classes separated by the threshold. Based on Otsu's threshold and the two mean values, the method separates the image into three classes instead of two, as the standard Otsu's method does. The first two classes are assigned as foreground and background and are not processed further. The third class is denoted as a to-be-determined (TBD) region that is processed at the next iteration. At the succeeding iteration, Otsu's method is applied to the TBD region to calculate a new threshold and two class means, and the TBD region is again separated into three classes: foreground, background, and a new TBD region, which by definition is smaller than the previous TBD region. The new TBD region is then processed in the same manner. The process stops when the difference between Otsu's thresholds calculated in two successive iterations is less than a preset value. Finally, all the intermediate foreground and background regions are respectively combined to create the final segmentation result. Tests on synthetic and real images showed that the new iterative method can achieve better performance than the standard Otsu's method in many challenging cases, such as identifying weak objects and revealing fine structures of complex objects, while the added computational cost is minimal.
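    A compact sketch of the iterative triclass procedure described above, assuming a greyscale image array (`threshold_otsu` is from scikit-image; the stopping tolerance and iteration cap are assumptions):

```python
import numpy as np
from skimage.filters import threshold_otsu

def triclass_otsu(image, eps=1e-3, max_iter=50):
    """Iterative triclass thresholding (sketch of the method above)."""
    fg = np.zeros(image.shape, bool)
    bg = np.zeros(image.shape, bool)
    tbd = np.ones(image.shape, bool)          # to-be-determined region
    t_prev = np.inf
    for _ in range(max_iter):
        vals = image[tbd]
        if vals.size < 2:
            break
        t = threshold_otsu(vals)
        if abs(t - t_prev) < eps:             # thresholds have converged
            break
        t_prev = t
        m1 = vals[vals > t].mean()            # mean of the upper class
        m0 = vals[vals <= t].mean()           # mean of the lower class
        fg |= tbd & (image > m1)              # confident foreground
        bg |= tbd & (image < m0)              # confident background
        tbd &= (image >= m0) & (image <= m1)  # shrink the TBD region
    fg |= tbd & (image > t_prev)              # resolve leftover TBD pixels
    return fg
```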

  20. Visual Tracking via Sparse and Local Linear Coding.

    PubMed

    Wang, Guofeng; Qin, Xueying; Zhong, Fan; Liu, Yue; Li, Hongbo; Peng, Qunsheng; Yang, Ming-Hsuan

    2015-11-01

    The state search is an important component of any object tracking algorithm. Numerous algorithms have been proposed, but stochastic sampling methods (e.g., particle filters) are arguably one of the most effective approaches. However, the discretization of the state space complicates the search for the precise object location. In this paper, we propose a novel tracking algorithm that extends the state space of particle observations from discrete to continuous. The solution is determined accurately via iterative linear coding between two convex hulls. The algorithm is modeled by an optimal function, which can be efficiently solved by either convex sparse coding or locality constrained linear coding. The algorithm is also very flexible and can be combined with many generic object representations. Thus, we first use sparse representation to achieve an efficient searching mechanism of the algorithm and demonstrate its accuracy. Next, two other object representation models, i.e., least soft-threshold squares and adaptive structural local sparse appearance, are implemented with improved accuracy to demonstrate the flexibility of our algorithm. Qualitative and quantitative experimental results demonstrate that the proposed tracking algorithm performs favorably against the state-of-the-art methods in dynamic scenes.

  1. A new method to detect transitory signatures and local time/space variability structures in the climate system: the scale-dependent correlation analysis

    NASA Astrophysics Data System (ADS)

    Rodó, Xavier; Rodríguez-Arias, Miquel-Àngel

    2006-10-01

    The study of transitory signals and local variability structures in time and/or space, and their role as sources of climatic memory, is an important but often neglected topic in climate research, despite its obvious importance and extensive coverage in the literature. Transitory signals arise from non-linearities in the climate system, transitory atmosphere-ocean couplings, and other processes in the climate system evolving after a critical threshold is crossed. These temporary interactions, which may be intense but short-lived, can be responsible for a large amount of unexplained variability, yet are normally considered of limited relevance and often discarded. With most current techniques, this typology of signatures is difficult to isolate because of the low signal-to-noise ratio in midlatitudes and the limited recurrence of transitory signals within the customary interval of data considered. There is also often a serious smoothing of local or transitory processes when statistical techniques are applied over the full length of available data, rather than at the scale of the specific variability structure under investigation. Scale-dependent correlation (SDC) analysis is a new statistical method capable of highlighting the presence of transitory processes, understood here as temporary significant lag-dependent autocovariance in a single series, or covariance structures between two series. This approach therefore complements approaches such as wavelet analysis, singular-spectrum analysis and recurrence plots. The main features of SDC are its high performance for short time series and its ability to characterize phase relationships and thresholds in the bivariate domain. Ultimately, SDC helps track short-lagged relationships among processes that locally or temporarily couple and uncouple. The use of SDC is illustrated in the present paper by means of synthetic time-series examples of increasing complexity, and it is compared with wavelet analysis in order to provide a well-known reference for its capabilities. A comparison between SDC and companion techniques is also addressed, and results are exemplified for the specific case of some relevant El Niño-Southern Oscillation teleconnections.
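    A toy version of the windowed, lagged correlation idea behind SDC might look as follows; note that a proper SDC implementation uses significance tests suited to autocorrelated data, which this sketch replaces with a plain Pearson p-value:

```python
import numpy as np
from scipy.stats import pearsonr

def sdc(a, b, window=21, max_lag=12, alpha=0.05):
    # Correlate short windows of series a with lag-shifted windows of b;
    # keep only locally significant correlations (others set to NaN)
    a, b = np.asarray(a, float), np.asarray(b, float)
    lags = np.arange(-max_lag, max_lag + 1)
    n_win = len(a) - window + 1
    out = np.full((n_win, lags.size), np.nan)
    for i in range(n_win):
        for j, lag in enumerate(lags):
            k = i + lag
            if 0 <= k and k + window <= len(b):
                r, p = pearsonr(a[i:i + window], b[k:k + window])
                if p < alpha:
                    out[i, j] = r
    return lags, out
```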

  2. Enforcing positivity in intrusive PC-UQ methods for reactive ODE systems

    DOE PAGES

    Najm, Habib N.; Valorani, Mauro

    2014-04-12

    We explore the relation between the development of a non-negligible probability of negative states and the instability of numerical integration of the intrusive Galerkin ordinary differential equation system describing uncertain chemical ignition. To prevent this instability without resorting to either multi-element local polynomial chaos (PC) methods or increasing the order of the PC representation in time, we propose a procedure aimed at modifying the amplitude of the PC modes to bring the probability of negative state values below a user-defined threshold. This modification can be effectively described as a filtering procedure of the spectral PC coefficients, which is applied on-the-fly during the numerical integration when the current value of the probability of negative states exceeds the prescribed threshold. We demonstrate the filtering procedure using a simple model of an ignition process in a batch reactor. This is carried out by comparing different observables and error measures as obtained by non-intrusive Monte Carlo and Gauss-quadrature integration and the filtered intrusive procedure. Lastly, the filtering procedure has been shown to effectively stabilize divergent intrusive solutions, and also to improve the accuracy of stable intrusive solutions which are close to the stability limits.
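    The filtering idea can be illustrated on a one-dimensional Hermite (Gaussian-germ) expansion: damp all non-mean modes by a common factor, found by bisection, until the sampled probability of negative states falls below the threshold. This is an illustrative stand-in, not the authors' exact filter:

```python
import numpy as np

def filter_pc_modes(coeffs, p_max=1e-3, n_samples=20000, tol=1e-3):
    # Damp all non-mean PC modes by a common factor until the sampled
    # probability of a negative state is below p_max. Assumes a 1-D
    # Gaussian germ (Hermite basis) and a positive mean mode.
    rng = np.random.default_rng(1)
    xi = rng.standard_normal(n_samples)
    basis = np.polynomial.hermite_e.hermevander(xi, len(coeffs) - 1)

    def p_neg(scale):
        c = np.asarray(coeffs, float).copy()
        c[1:] *= scale
        return float(np.mean(basis @ c < 0.0))

    if p_neg(1.0) <= p_max:
        return np.asarray(coeffs, float)      # already acceptable
    lo, hi = 0.0, 1.0                         # bisect on the damping factor
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if p_neg(mid) <= p_max else (lo, mid)
    out = np.asarray(coeffs, float).copy()
    out[1:] *= lo
    return out
```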

  3. Recognition and localization of speech by adult cochlear implant recipients wearing a digital hearing aid in the nonimplanted ear (bimodal hearing).

    PubMed

    Potts, Lisa G; Skinner, Margaret W; Litovsky, Ruth A; Strube, Michael J; Kuk, Francis

    2009-06-01

    The use of bilateral amplification is now common clinical practice for hearing aid users but not for cochlear implant recipients. In the past, most cochlear implant recipients were implanted in one ear and wore only a monaural cochlear implant processor. There has been recent interest in benefits arising from bilateral stimulation that may be present for cochlear implant recipients. One option for bilateral stimulation is the use of a cochlear implant in one ear and a hearing aid in the opposite, nonimplanted ear (bimodal hearing). This study evaluated the effect of wearing a cochlear implant in one ear and a digital hearing aid in the opposite ear on speech recognition and localization. A repeated-measures correlational study was completed. Nineteen adult Cochlear Nucleus 24 implant recipients participated in the study. The participants were fit with a Widex Senso Vita 38 hearing aid to achieve maximum audibility and comfort within their dynamic range. Soundfield thresholds, loudness growth, speech recognition, localization, and subjective questionnaires were obtained six to eight weeks after the hearing aid fitting. Testing was completed in three conditions: hearing aid only, cochlear implant only, and cochlear implant and hearing aid (bimodal). All tests were repeated four weeks after the first test session. Repeated-measures analysis of variance was used to analyze the data. Significant effects were further examined using pairwise comparison of means or, in the case of continuous moderators, regression analyses. The speech-recognition and localization tasks were unique in that a speech stimulus presented from a variety of roaming azimuths (140 degree loudspeaker array) was used. Performance in the bimodal condition was significantly better for speech recognition and localization compared to the cochlear implant-only and hearing aid-only conditions. Performance also differed between these conditions when the location (i.e., the side of the loudspeaker array that presented the word) was analyzed. In the bimodal condition, speech-recognition and localization performance was equal regardless of which side of the loudspeaker array presented the word, while performance was significantly poorer in the monaural conditions (hearing aid only and cochlear implant only) when the words were presented on the side with no stimulation. Binaural loudness summation of 1-3 dB was seen in soundfield thresholds and loudness growth in the bimodal condition. Measures of the audibility of sound with the hearing aid, including unaided thresholds, soundfield thresholds, and the Speech Intelligibility Index, were significant moderators of speech recognition and localization. Based on the questionnaire responses, participants showed a strong preference for bimodal stimulation. These findings suggest that a well-fit digital hearing aid worn in conjunction with a cochlear implant is beneficial to speech recognition and localization. The dynamic test procedures used in this study illustrate the importance of bilateral hearing for locating, identifying, and switching attention between multiple speakers. It is recommended that unilateral cochlear implant recipients with measurable unaided hearing thresholds be fit with a hearing aid.

  4. A new disaster victim identification management strategy targeting "near identification-threshold" cases: Experiences from the Boxing Day tsunami.

    PubMed

    Wright, Kirsty; Mundorff, Amy; Chaseling, Janet; Forrest, Alexander; Maguire, Christopher; Crane, Denis I

    2015-05-01

    The international disaster victim identification (DVI) response to the Boxing Day tsunami, led by the Royal Thai Police in Phuket, Thailand, was one of the largest and most complex in DVI history. Referred to as the Thai Tsunami Victim Identification operation, the group comprised a multi-national, multi-agency, and multi-disciplinary team. The traditional DVI approach proved successful in identifying a large number of victims quickly. However, the team struggled to identify certain victims due to incomplete or poor quality ante-mortem and post-mortem data. In response to these challenges, a new 'near-threshold' DVI management strategy was implemented to target presumptive identifications and improve operational efficiency. The strategy was implemented by the DNA Team; DNA kinship matches that just failed to reach the reporting threshold of 99.9% were therefore prioritized, although the same approach could be taken by targeting, for example, cases with partial fingerprint matches. The presumptive DNA identifications were progressively filtered through the Investigation, Dental and Fingerprint Teams to add additional information necessary to either strengthen or conclusively exclude the identification. Over a five-month period, 111 victims from ten countries were identified using this targeted approach. The new identifications comprised 87 adults and 24 children and included 97 Thai locals. New data from the Fingerprint Team established nearly 60% of the total near-threshold identifications, and the combined DNA/physical method was responsible for over 30%. Implementing the new strategy, targeting near-threshold cases, had positive management implications. The process initiated additional ante-mortem information collections and established a much-needed, distinct "end-point" for unresolved cases. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  5. Quantifying ecological thresholds from response surfaces

    Treesearch

    Heather E. Lintz; Bruce McCune; Andrew N. Gray; Katherine A. McCulloh

    2011-01-01

    Ecological thresholds are abrupt changes of ecological state. While an ecological threshold is a widely accepted concept, most empirical methods detect them in time or across geographic space. Although useful, these approaches do not quantify the direct drivers of threshold response. Causal understanding of thresholds detected empirically requires their investigation...

  6. Threshold-adaptive canny operator based on cross-zero points

    NASA Astrophysics Data System (ADS)

    Liu, Boqi; Zhang, Xiuhua; Hong, Hanyu

    2018-03-01

    Canny edge detection[1] is a technique to extract useful structural information from different vision objects while dramatically reducing the amount of data to be processed. It has been widely applied in various computer vision systems. Two thresholds have to be set before edges are segregated from the background; usually, two static values are chosen based on the developers' experience[2]. In this paper, a novel automatic thresholding method is proposed. The relation between the thresholds and cross-zero points is analyzed, and an interpolation function is deduced to determine the thresholds. Comprehensive experimental results demonstrate the effectiveness of the proposed method and its advantages for stable edge detection under changing illumination.
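    The paper's cross-zero-point interpolation is not fully specified in the abstract; a widely used automatic alternative sets both Canny thresholds from the image median (OpenCV assumed):

```python
import cv2
import numpy as np

def auto_canny(gray, sigma=0.33):
    # gray: 8-bit greyscale image; both thresholds derived from the median
    m = float(np.median(gray))
    lo = int(max(0, (1.0 - sigma) * m))
    hi = int(min(255, (1.0 + sigma) * m))
    return cv2.Canny(gray, lo, hi)
```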

  7. Methods for automatic trigger threshold adjustment

    DOEpatents

    Welch, Benjamin J; Partridge, Michael E

    2014-03-18

    Methods are presented for adjusting trigger threshold values to compensate for drift in the quiescent level of a signal monitored for initiating a data recording event, thereby avoiding false triggering conditions. Initial threshold values are periodically adjusted by re-measuring the quiescent signal level, and adjusting the threshold values by an offset computation based upon the measured quiescent signal level drift. Re-computation of the trigger threshold values can be implemented on time based or counter based criteria. Additionally, a qualification width counter can be utilized to implement a requirement that a trigger threshold criterion be met a given number of times prior to initiating a data recording event, further reducing the possibility of a false triggering situation.
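    A minimal sketch of the scheme described above (class and parameter names are hypothetical):

```python
class DriftCompensatedTrigger:
    """Sketch: periodically re-measure the quiescent level, offset both
    thresholds by its drift, and require the criterion to hold for
    `qualify` consecutive samples before declaring a trigger."""

    def __init__(self, baseline, hi_offset, lo_offset, qualify=3):
        self.baseline = baseline              # initial quiescent level
        self.hi = baseline + hi_offset
        self.lo = baseline - lo_offset
        self.qualify = qualify
        self._count = 0

    def recalibrate(self, quiescent):
        # Shift both thresholds by the measured drift of the quiescent level
        drift = quiescent - self.baseline
        self.hi += drift
        self.lo += drift
        self.baseline = quiescent

    def sample(self, x):
        # Return True once the criterion has been met `qualify` times in a row
        if x > self.hi or x < self.lo:
            self._count += 1                  # qualification width counter
        else:
            self._count = 0
        return self._count >= self.qualify
```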

  8. Thermal perception thresholds among workers in a cold climate.

    PubMed

    Burström, Lage; Björ, Bodil; Nilsson, Tohr; Pettersson, Hans; Rödin, Ingemar; Wahlström, Jens

    2017-10-01

    To investigate whether exposure to cold could influence the thermal perception thresholds in a working population. This cross-sectional study was comprised of 251 males and females and was carried out at two mines in the northern part of Norway and Sweden. The testing included a baseline questionnaire, a clinical examination and measurements of thermal perception thresholds, on both hands, the index (Digit 2) and little (Digit 5) fingers, for heat and cold. The thermal perception thresholds were affected by age, gender and test site. The thresholds were impaired by experiences of frostbite in the fingers and the use of medication that potentially could affect neurosensory functions. No differences were found between the calculated normative values for these workers and those in other comparative investigations conducted in warmer climates. The study provided no support for the hypothesis that living and working in cold climate will lead to impaired thermal perception thresholds. Exposure to cold that had caused localized damage in the form of frostbite was shown to lead to impaired thermal perception.

  9. Threshold response using modulated continuous wave illumination for multilayer 3D optical data storage

    NASA Astrophysics Data System (ADS)

    Saini, A.; Christenson, C. W.; Khattab, T. A.; Wang, R.; Twieg, R. J.; Singer, K. D.

    2017-01-01

    In order to achieve a high capacity 3D optical data storage medium, a nonlinear or threshold writing process is necessary to localize data in the axial dimension. To this end, commercial multilayer discs use thermal ablation of metal films or phase change materials to realize such a threshold process. This paper addresses a threshold writing mechanism relevant to recently reported fluorescence-based data storage in dye-doped co-extruded multilayer films. To gain understanding of the essential physics, single layer spun coat films were used so that the data is easily accessible by analytical techniques. Data were written by attenuating the fluorescence using nanosecond-range exposure times from a 488 nm continuous wave laser overlapping with the single photon absorption spectrum. The threshold writing process was studied over a range of exposure times and intensities, and with different fluorescent dyes. It was found that all of the dyes have a common temperature threshold where fluorescence begins to attenuate, and the physical nature of the thermal process was investigated.

  10. Impact of intrapatient variability (IPV) in tacrolimus trough levels on long-term renal transplant function: multicentre collaborative retrospective cohort study protocol

    PubMed Central

    Goldsmith, Petra M; Bottomley, Matthew J; Okechukwu, Okidi; Ross, Victoria C; Ghita, Ryan; Wandless, David; Falconer, Stuart J; Papachristos, Stavros; Nash, Philip; Androshchuk, Vitaliy; Clancy, Marc

    2017-01-01

    Introduction High intrapatient variability (IPV) in tacrolimus trough levels has been shown to be associated with higher rates of renal transplant failure. There is no consensus on what level of IPV constitutes a risk of graft loss. The establishment of such a threshold could help to guide clinicians in identifying at-risk patients to receive targeted interventions to improve IPV and thus outcomes. Methods and analysis A multicentre Transplant Audit Collaborative has been established to conduct a retrospective study examining tacrolimus IPV and renal transplant outcomes. Patients in receipt of a renal transplant at participating centres between 2009 and 2014 and fulfilling the inclusion criteria will be included in the study. The aim is to recruit a minimum of 1600 patients with follow-up spanning at least 2 years in order to determine a threshold IPV above which a renal transplant recipient would be considered at increased risk of graft loss. The study also aims to determine any national or regional trends in IPV and any demographic associations. Ethics and dissemination Consent will not be sought from patients whose data are used in this study as no additional procedures or information will be required from participants beyond that which would normally take place as part of clinical care. The study will be registered locally in each participating centre in line with local research and development protocols. It is anticipated that the results of this audit will be disseminated locally, in participating NHS Trusts, through national and international meetings and publications in peer-reviewed journals. PMID:28756385

  11. Improving of local ozone forecasting by integrated models.

    PubMed

    Gradišar, Dejan; Grašič, Boštjan; Božnar, Marija Zlata; Mlakar, Primož; Kocijan, Juš

    2016-09-01

    This paper discusses the problem of forecasting maximum ozone concentrations in urban microlocations, where reliable alerting of the local population when thresholds have been surpassed is necessary. To improve the forecast, a methodology of integrated models is proposed. The model is based on multilayer perceptron neural networks that use as inputs all available information from the QualeAria air-quality model, the WRF numerical weather prediction model, and onsite measurements of meteorology and air pollution. While air-quality and meteorological models cover a large geographical 3-dimensional space, their local resolution is often not satisfactory. On the other hand, empirical methods have the advantage of good local forecasts. In this paper, integrated models are used for improved 1-day-ahead forecasting of the maximum hourly ozone value within each day for representative locations in Slovenia. The WRF meteorological model is used for forecasting meteorological variables and the QualeAria air-quality model for gas concentrations. Their predictions, together with measurements from ground stations, are used as inputs to a neural network. The model validation results show that integrated models noticeably improve ozone forecasts and provide better alert systems.
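    A sketch of the integrated-model idea on synthetic stand-in data (all feature choices and numbers are hypothetical; the paper's actual inputs are QualeAria and WRF outputs plus station measurements):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n = 600
qa_o3 = rng.uniform(40, 160, n)          # QualeAria ozone forecast (stand-in)
wrf_t = rng.uniform(10, 35, n)           # WRF temperature forecast
wrf_wind = rng.uniform(0, 10, n)         # WRF wind forecast
obs_prev = qa_o3 + rng.normal(0, 15, n)  # yesterday's measured max ozone
# Synthetic "true" next-day maximum ozone for the sketch
y = 0.6 * qa_o3 + 2.0 * wrf_t - 3.0 * wrf_wind + 0.2 * obs_prev \
    + rng.normal(0, 8, n)

X = np.column_stack([qa_o3, wrf_t, wrf_wind, obs_prev])
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
).fit(X, y)
print("1-day-ahead max O3 forecast:", model.predict(X[:1]))
```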

  12. Efficient Actor-Critic Algorithm with Hierarchical Model Learning and Planning

    PubMed Central

    Fu, QiMing

    2016-01-01

    To improve the convergence rate and the sample efficiency, two efficient learning methods, AC-HMLP and RAC-HMLP (AC-HMLP with ℓ2-regularization), are proposed by combining the actor-critic algorithm with hierarchical model learning and planning. The hierarchical models, consisting of local and global models, which are learned at the same time as the value function and the policy, are approximated by local linear regression (LLR) and linear function approximation (LFA), respectively. Both the local model and the global model are applied to generate samples for planning; the former is used only if the state-prediction error does not surpass the threshold at each time step, while the latter is utilized at the end of each episode. The purpose of taking both models is to improve the sample efficiency and accelerate the convergence rate of the whole algorithm by fully utilizing local and global information. Experimentally, AC-HMLP and RAC-HMLP are compared with three representative algorithms on two Reinforcement Learning (RL) benchmark problems. The results demonstrate that they perform best in terms of convergence rate and sample efficiency. PMID:27795704

  13. Efficient Actor-Critic Algorithm with Hierarchical Model Learning and Planning.

    PubMed

    Zhong, Shan; Liu, Quan; Fu, QiMing

    2016-01-01

    To improve the convergence rate and the sample efficiency, two efficient learning methods, AC-HMLP and RAC-HMLP (AC-HMLP with ℓ2-regularization), are proposed by combining the actor-critic algorithm with hierarchical model learning and planning. The hierarchical models, consisting of local and global models, which are learned at the same time as the value function and the policy, are approximated by local linear regression (LLR) and linear function approximation (LFA), respectively. Both the local model and the global model are applied to generate samples for planning; the former is used only if the state-prediction error does not surpass the threshold at each time step, while the latter is utilized at the end of each episode. The purpose of taking both models is to improve the sample efficiency and accelerate the convergence rate of the whole algorithm by fully utilizing local and global information. Experimentally, AC-HMLP and RAC-HMLP are compared with three representative algorithms on two Reinforcement Learning (RL) benchmark problems. The results demonstrate that they perform best in terms of convergence rate and sample efficiency.

  14. Thresholds for the cost-effectiveness of interventions: alternative approaches.

    PubMed

    Marseille, Elliot; Larson, Bruce; Kazi, Dhruv S; Kahn, James G; Rosen, Sydney

    2015-02-01

    Many countries use the cost-effectiveness thresholds recommended by the World Health Organization's Choosing Interventions that are Cost-Effective project (WHO-CHOICE) when evaluating health interventions. This project sets the threshold for cost-effectiveness as the cost of the intervention per disability-adjusted life-year (DALY) averted less than three times the country's annual gross domestic product (GDP) per capita. Highly cost-effective interventions are defined as meeting a threshold per DALY averted of once the annual GDP per capita. We argue that reliance on these thresholds reduces the value of cost-effectiveness analyses and makes such analyses too blunt to be useful for most decision-making in the field of public health. Use of these thresholds has little theoretical justification, skirts the difficult but necessary ranking of the relative values of locally-applicable interventions and omits any consideration of what is truly affordable. The WHO-CHOICE thresholds set such a low bar for cost-effectiveness that very few interventions with evidence of efficacy can be ruled out. The thresholds have little value in assessing the trade-offs that decision-makers must confront. We present alternative approaches for applying cost-effectiveness criteria to choices in the allocation of health-care resources.

  15. Toward a generalized theory of epidemic awareness in social networks

    NASA Astrophysics Data System (ADS)

    Wu, Qingchu; Zhu, Wenfang

    We discuss the dynamics of a susceptible-infected-susceptible (SIS) model with local awareness in networks. Individual awareness of the infectious disease is characterized by a general function of the epidemic information in the node's neighborhood. We build a high-accuracy approximate equation governing the spreading dynamics and derive an approximate epidemic threshold above which the epidemic spreads over the whole network. Our results extend previous work and show that the epidemic threshold is dependent on the awareness function in terms of one infectious neighbor. Interestingly, when a power-law awareness function is chosen, the epidemic threshold can emerge in infinite networks.

  16. The asymmetric reactions of mean and volatility of stock returns to domestic and international information based on a four-regime double-threshold GARCH model

    NASA Astrophysics Data System (ADS)

    Chen, Cathy W. S.; Yang, Ming Jing; Gerlach, Richard; Jim Lo, H.

    2006-07-01

    In this paper, we investigate the asymmetric reactions of mean and volatility of stock returns in five major markets to their own local news and the US information via linear and nonlinear models. We introduce a four-regime Double-Threshold GARCH (DTGARCH) model, which allows asymmetry in both the conditional mean and variance equations simultaneously by employing two threshold variables, to analyze the stock markets’ reactions to different types of information (good/bad news) generated from the domestic markets and the US stock market. By applying the four-regime DTGARCH model, this study finds that the interaction between the information of domestic and US stock markets leads to the asymmetric reactions of stock returns and their variability. In addition, this research also finds that the positive autocorrelation reported in the previous studies of financial markets may in fact be mis-specified, and actually due to the local market's positive response to the US stock market.

  17. Comparison of automatic and visual methods used for image segmentation in Endodontics: a microCT study.

    PubMed

    Queiroz, Polyane Mazucatto; Rovaris, Karla; Santaella, Gustavo Machado; Haiter-Neto, Francisco; Freitas, Deborah Queiroz

    2017-01-01

    To calculate root canal volume and surface area in microCT images, image segmentation by selecting threshold values is required; thresholds can be determined by visual or automatic methods. Visual determination is influenced by the operator's visual acuity, while the automatic method is performed entirely by computer algorithms. The aims were to compare visual and automatic segmentation and to determine the influence of the operator's visual acuity on the reproducibility of root canal volume and area measurements. Images from 31 extracted human anterior teeth were scanned with a μCT scanner. Three experienced examiners performed visual image segmentation, and threshold values were recorded. Automatic segmentation was done using the "Automatic Threshold Tool" available in the dedicated software provided by the scanner's manufacturer. Volume and area measurements were performed using the threshold values determined both visually and automatically. The paired Student's t-test showed no significant difference between the visual and automatic segmentation methods for either root canal volume (p=0.93) or root canal surface area (p=0.79). Although both visual and automatic segmentation methods can be used to determine the threshold and calculate root canal volume and surface area, the automatic method may be the most suitable for ensuring the reproducibility of threshold determination.
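    For the automatic route, a sketch along these lines (Otsu's method as a stand-in for the manufacturer's "Automatic Threshold Tool", scikit-image assumed, voxel size hypothetical) could compute both quantities:

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import marching_cubes, mesh_surface_area

def canal_volume_and_area(stack, voxel_mm=0.02):
    # stack: 3-D grey-level microCT array; the canal is assumed to be
    # the low-intensity (air-filled) phase
    t = threshold_otsu(stack)                 # automatic global threshold
    canal = stack < t
    volume = canal.sum() * voxel_mm ** 3      # mm^3
    verts, faces, _, _ = marching_cubes(canal.astype(float), level=0.5,
                                        spacing=(voxel_mm,) * 3)
    return volume, mesh_surface_area(verts, faces)  # mm^3, mm^2
```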

  18. Adaptive thresholding image series from fluorescence confocal scanning laser microscope using orientation intensity profiles

    NASA Astrophysics Data System (ADS)

    Feng, Judy J.; Ip, Horace H.; Cheng, Shuk H.

    2004-05-01

    Many grey-level thresholding methods based on the histogram or other statistical information about the image of interest, such as maximum entropy, have been proposed in the past. However, most methods based on statistical analysis of the images pay little attention to the morphology of the objects of interest, which can provide important indications to help find the optimum threshold, especially for organisms with special texture morphologies, such as vasculature or neural networks in medical imaging. In this paper, we propose a novel method for thresholding the fluorescent vasculature image series recorded from a confocal scanning laser microscope. After extracting the basic orientation of the vessel slice inside a sub-region partitioned from the images, we analyze the intensity profiles perpendicular to the vessel orientation to obtain a reasonable initial threshold for each region. The threshold values of neighbouring regions, both in the x-y plane and in the optical direction, are then referenced to obtain the final thresholds for the region of interest, which makes the whole stack of images look more continuous. The resulting images are characterized by suppression of both noise and non-interest tissues conglutinated to vessels, while vessel connectivity and edge definition are improved. The value of the method for idealized thresholding of fluorescence images of biological objects is demonstrated by a comparison of the results of 3D vascular reconstruction.

  19. Analyses of Fatigue Crack Growth and Closure Near Threshold Conditions for Large-Crack Behavior

    NASA Technical Reports Server (NTRS)

    Newman, J. C., Jr.

    1999-01-01

    A plasticity-induced crack-closure model was used to study fatigue crack growth and closure in thin 2024-T3 aluminum alloy under constant-R and constant-K(sub max) threshold testing procedures. Two methods of calculating crack-opening stresses were compared: one based on contact-K analyses and the other on crack-opening-displacement (COD) analyses. These methods gave nearly identical results under constant-amplitude loading, but under threshold simulations the contact-K analyses gave lower opening stresses than the contact-COD method. Crack-growth predictions tend to support the use of contact-K analyses. Crack-growth simulations showed that remote closure can cause a rapid rise in opening stresses in the near-threshold regime for low constraint and high applied stress levels. Under low applied stress levels and high constraint, a rise in opening stresses was not observed near threshold conditions, but crack-tip-opening displacements (CTODs) were of the order of measured oxide thicknesses in the 2024 alloy under constant-R simulations. In contrast, under constant-K(sub max) testing, the CTODs near threshold conditions were an order of magnitude larger than measured oxide thicknesses. Residual-plastic deformations under both constant-R and constant-K(sub max) threshold simulations were several times larger than the expected oxide thicknesses. Thus, residual-plastic deformations, in addition to oxide and roughness, play an integral part in threshold development.

  20. Robust Adaptive Thresholder For Document Scanning Applications

    NASA Astrophysics Data System (ADS)

    Hsing, To R.

    1982-12-01

    In document scanning applications, thresholding is used to obtain binary data from a scanner. However, due to (1) a wide range of different color backgrounds, (2) density variations of printed text information, and (3) the shading effect caused by the optical system, the use of adaptive thresholding to enhance the useful information is highly desirable. This paper describes a new robust adaptive thresholder for obtaining valid binary images. It is basically a memory-type algorithm which dynamically updates the black and white reference levels to optimize a local adaptive threshold function. High image quality was obtained with this algorithm for different types of simulated test patterns. The software algorithm is described, and experimental results are presented to illustrate the procedure. The results also show that the techniques described here can be used for real-time signal processing in varied applications.
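    A toy version of such a memory-type thresholder (the update rule and rate are assumptions; the original operates at scanner video rates):

```python
import numpy as np

def adaptive_binarize(gray, alpha=0.05):
    # Running black and white reference levels are updated along each
    # scanline; the local threshold is their midpoint, so slow shading
    # and background-color changes are tracked. Illustrative pixel loop,
    # not optimized.
    out = np.zeros_like(gray, dtype=np.uint8)
    for r, line in enumerate(gray.astype(float)):
        black, white = line.min(), line.max()   # per-line initial references
        for c, v in enumerate(line):
            t = 0.5 * (black + white)            # local adaptive threshold
            out[r, c] = 255 if v > t else 0
            if v > t:                            # update the matching reference
                white += alpha * (v - white)
            else:
                black += alpha * (v - black)
    return out
```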

  1. SparseMaps—A systematic infrastructure for reduced-scaling electronic structure methods. III. Linear-scaling multireference domain-based pair natural orbital N-electron valence perturbation theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guo, Yang; Sivalingam, Kantharuban; Neese, Frank, E-mail: Frank.Neese@cec.mpg.de

    2016-03-07

    Multi-reference (MR) electronic structure methods, such as MR configuration interaction or MR perturbation theory, can provide reliable energies and properties for many molecular phenomena like bond breaking, excited states, transition states or magnetic properties of transition metal complexes and clusters. However, owing to their inherent complexity, most MR methods are still too computationally expensive for large systems. Therefore the development of more computationally attractive MR approaches is necessary to enable routine application to large-scale chemical systems. Among the state-of-the-art MR methods, second-order N-electron valence state perturbation theory (NEVPT2) is an efficient, size-consistent, and intruder-state-free method. However, there are still two important bottlenecks in practical applications of NEVPT2 to large systems: (a) the high computational cost of NEVPT2 for large molecules, even with moderate active spaces, and (b) the prohibitive cost of treating large active spaces. In this work, we address problem (a) by developing a linear-scaling "partially contracted" NEVPT2 method. This development uses the idea of domain-based local pair natural orbitals (DLPNOs) to form a highly efficient algorithm. As shown previously in the framework of single-reference methods, the DLPNO concept leads to an enormous reduction in computational effort while at the same time providing high accuracy (approaching 99.9% of the correlation energy), robustness, and black-box character. In the DLPNO approach, the virtual space is spanned by pair natural orbitals that are expanded in terms of projected atomic orbitals in large orbital domains, while the inactive space is spanned by localized orbitals. The active orbitals are left untouched. Our implementation features a highly efficient "electron pair prescreening" that skips the negligible inactive pairs. The surviving pairs are treated using the partially contracted NEVPT2 formalism. A detailed comparison between the partial and strong contraction schemes is made, with conclusions that discourage the strong contraction scheme as a basis for local correlation methods due to its non-invariance with respect to rotations in the inactive and external subspaces. A minimal set of conservatively chosen truncation thresholds controls the accuracy of the method. With the default thresholds, about 99.9% of the canonical partially contracted NEVPT2 correlation energy is recovered, while the crossover of the computational cost with the already very efficient canonical method occurs reasonably early; in linear-chain-type compounds, at a chain length of around 80 atoms. Calculations are reported for systems with more than 300 atoms and 5400 basis functions.

  2. Built-Up Area Detection from High-Resolution Satellite Images Using Multi-Scale Wavelet Transform and Local Spatial Statistics

    NASA Astrophysics Data System (ADS)

    Chen, Y.; Zhang, Y.; Gao, J.; Yuan, Y.; Lv, Z.

    2018-04-01

    Recently, built-up area detection from high-resolution satellite images (HRSI) has attracted increasing attention because HRSI can provide more detailed object information. In this paper, multi-resolution wavelet transform and local spatial autocorrelation statistic are introduced to model the spatial patterns of built-up areas. First, the input image is decomposed into high- and low-frequency subbands by wavelet transform at three levels. Then the high-frequency detail information in three directions (horizontal, vertical and diagonal) are extracted followed by a maximization operation to integrate the information in all directions. Afterward, a cross-scale operation is implemented to fuse different levels of information. Finally, local spatial autocorrelation statistic is introduced to enhance the saliency of built-up features and an adaptive threshold algorithm is used to achieve the detection of built-up areas. Experiments are conducted on ZY-3 and Quickbird panchromatic satellite images, and the results show that the proposed method is very effective for built-up area detection.
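    A rough sketch of this pipeline (PyWavelets assumed; a simple local mean stands in for the paper's local spatial autocorrelation statistic):

```python
import numpy as np
import pywt
from scipy.ndimage import uniform_filter, zoom
from skimage.filters import threshold_otsu

def builtup_mask(img, wavelet="db2", levels=3, win=15):
    # Multiscale detail extraction: wavelet decomposition, per-level
    # maximization over the three detail directions, cross-scale fusion
    coeffs = pywt.wavedec2(img.astype(float), wavelet, level=levels)
    saliency = np.zeros(img.shape, dtype=float)
    for cH, cV, cD in coeffs[1:]:             # detail subbands per level
        detail = np.maximum(np.abs(cH), np.maximum(np.abs(cV), np.abs(cD)))
        factors = (img.shape[0] / detail.shape[0],
                   img.shape[1] / detail.shape[1])
        saliency += zoom(detail, factors, order=1)   # upsample and fuse
    local = uniform_filter(saliency, size=win)       # crude local statistic
    return local > threshold_otsu(local)             # adaptive threshold
```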

  3. Statistical evaluation of the Local Lymph Node Assay.

    PubMed

    Hothorn, Ludwig A; Vohr, Hans-Werner

    2010-04-01

    In the Local Lymph Node Assay measured endpoints for each animal, such as cell proliferation, cell counts and/or lymph node weight should be evaluated separately. The primary criterion for a positive response is when the estimated stimulation index is larger than a specified relative threshold that is endpoint- and strain-specific. When the lower confidence limit for ratio-to-control comparisons is larger than a relevance threshold, a biologically relevant increase can be concluded according to the proof of hazard. Alternatively, when the upper confidence limit for ratio-to-control comparisons is smaller than a tolerable margin, harmlessness can be concluded according to a proof of safety. Copyright 2009 Elsevier Inc. All rights reserved.
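    A minimal sketch of the ratio-to-control confidence-limit logic (a log-scale t interval is used here as a simple stand-in; Fieller-type intervals are also common in practice):

```python
import numpy as np
from scipy import stats

def ratio_to_control_ci(treated, control, alpha=0.05):
    # CI for the stimulation index (ratio of means) via a log-scale t interval
    lt, lc = np.log(np.asarray(treated)), np.log(np.asarray(control))
    diff = lt.mean() - lc.mean()
    se = np.sqrt(lt.var(ddof=1) / lt.size + lc.var(ddof=1) / lc.size)
    t = stats.t.ppf(1 - alpha / 2, lt.size + lc.size - 2)
    return np.exp(diff - t * se), np.exp(diff + t * se)

# Proof of hazard: lower limit above the relevance threshold.
# Proof of safety: upper limit below the tolerable margin.
```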

  4. Self-localized structures in vertical-cavity surface-emitting lasers with external feedback.

    PubMed

    Paulau, P V; Gomila, D; Ackemann, T; Loiko, N A; Firth, W J

    2008-07-01

    In this paper, we analyze a model of broad area vertical-cavity surface-emitting lasers subjected to frequency-selective optical feedback. In particular, we analyze the spatio-temporal regimes arising above threshold and the existence and dynamical properties of cavity solitons. We build the bifurcation diagram of stationary self-localized states, finding that branches of cavity solitons emerge from the degenerate Hopf bifurcations marking the homogeneous solutions with maximal and minimal gain. These branches collide in a saddle-node bifurcation, defining a maximum pump current for soliton existence that lies below the threshold of the laser without feedback. The properties of these cavity solitons are in good agreement with those observed in recent experiments.

  5. Localization-delocalization transition of electrons at the percolation threshold of semiconductor GaAs1−xNx alloys: The appearance of a mobility edge

    DOE PAGES

    Alberi, K.; Fluegel, B.; Beaton, D. A.; ...

    2012-07-09

    Electrons in semiconductor alloys have generally been described in terms of Bloch states that evolve from constructive interference of electron waves scattering from perfectly periodic potentials, despite the loss of structural periodicity that occurs on alloying. Using the semiconductor alloy GaAs1−xNx as a prototype, we demonstrate a localized-to-delocalized transition of the electronic states at a percolation threshold, the emergence of a mobility edge, and the onset of an abrupt perturbation to the host GaAs electronic structure, shedding light on the evolution of electronic structure in these abnormal alloys.

  6. Detection of exudates in fundus imagery using a constant false-alarm rate (CFAR) detector

    NASA Astrophysics Data System (ADS)

    Khanna, Manish; Kapoor, Elina

    2014-05-01

    Diabetic retinopathy is the leading cause of blindness in adults in the United States. The presence of exudates in fundus imagery is an early sign of diabetic retinopathy, so detection of these lesions is essential in preventing further ocular damage. In this paper we present a novel technique to automatically detect exudates in fundus imagery that is robust against spatial and temporal variations of background noise. The detection threshold is adjusted dynamically, based on the local noise statistics around the pixel under test, in order to maintain a pre-determined constant false alarm rate (CFAR). The CFAR detector is often used to detect bright targets in radar imagery, where the background clutter can vary considerably from scene to scene and with angle to the scene. Similarly, the CFAR detector addresses the challenge of detecting exudate lesions in RGB and multispectral fundus imagery, where the background clutter often exhibits variations in brightness and texture. These variations present a challenge to common global-thresholding detection algorithms and other methods. Performance of the CFAR algorithm is tested against a publicly available, annotated diabetic retinopathy database, and preliminary testing suggests that the performance of the CFAR detector is superior to techniques such as Otsu thresholding.
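    A cell-averaging CFAR sketch for images, with training-ring statistics computed via box filters (Gaussian clutter and all window sizes are assumptions):

```python
import numpy as np
from scipy.ndimage import uniform_filter
from scipy.stats import norm

def cfar_detect(img, guard=5, train=15, pfa=1e-4):
    # Local background mean/std from a training ring around each pixel
    # (big window minus guard window), then a threshold chosen for a
    # constant false-alarm rate under a Gaussian clutter assumption
    img = img.astype(float)
    big, small = 2 * train + 1, 2 * guard + 1
    n_big, n_small = big ** 2, small ** 2
    sum_big = uniform_filter(img, big) * n_big
    sum_small = uniform_filter(img, small) * n_small
    sumsq_big = uniform_filter(img ** 2, big) * n_big
    sumsq_small = uniform_filter(img ** 2, small) * n_small
    n_ring = n_big - n_small
    mu = (sum_big - sum_small) / n_ring
    var = (sumsq_big - sumsq_small) / n_ring - mu ** 2
    k = norm.ppf(1.0 - pfa)                   # multiplier for the target Pfa
    return img > mu + k * np.sqrt(np.maximum(var, 0.0))
```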

  7. Time-of-Travel Methods for Measuring Optical Flow on Board a Micro Flying Robot

    PubMed Central

    Vanhoutte, Erik; Mafrica, Stefano; Ruffier, Franck; Bootsma, Reinoud J.; Serres, Julien

    2017-01-01

    For use in autonomous micro air vehicles, visual sensors must not only be small, lightweight and insensitive to light variations; on-board autopilots also require fast and accurate optical flow measurements over a wide range of speeds. Using an auto-adaptive bio-inspired Michaelis–Menten Auto-adaptive Pixel (M2APix) analog silicon retina, in this article, we present comparative tests of two optical flow calculation algorithms operating under lighting conditions from 6×10⁻⁷ to 1.6×10⁻² W·cm⁻² (i.e., from 0.2 to 12,000 lux for human vision). Contrast “time of travel” between two adjacent light-sensitive pixels was determined by thresholding and by cross-correlating the two pixels’ signals, with a measurement frequency of up to 5 kHz for the 10 local motion sensors of the M2APix sensor. While both algorithms adequately measured optical flow between 25°/s and 1000°/s, thresholding gave rise to a lower precision, especially due to a larger number of outliers at higher speeds. Compared to thresholding, cross-correlation also allowed for a higher rate of optical flow output (99 Hz and 1195 Hz, respectively) but required substantially more computational resources. PMID:28287484
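    The two time-of-travel estimators can be sketched in a few lines (the sampling rate, threshold and signal layout are assumptions):

```python
import numpy as np

def tot_threshold(s1, s2, fs, thr):
    # Time of travel from the first upward threshold crossing of each
    # pixel signal; assumes both signals actually cross the threshold
    return (np.argmax(s2 > thr) - np.argmax(s1 > thr)) / fs

def tot_xcorr(s1, s2, fs):
    # Time of travel from the peak of the full cross-correlation
    a, b = s1 - s1.mean(), s2 - s2.mean()
    lag = np.argmax(np.correlate(b, a, "full")) - (len(a) - 1)
    return lag / fs

# Optical flow then follows as inter-pixel viewing angle / time of travel.
```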

  8. Coral bleaching at Little Cayman, Cayman Islands 2009

    NASA Astrophysics Data System (ADS)

    van Hooidonk, Ruben J.; Manzello, Derek P.; Moye, Jessica; Brandt, Marilyn E.; Hendee, James C.; McCoy, Croy; Manfrino, Carrie

    2012-06-01

    The global rise in sea temperature through anthropogenic climate change is affecting coral reef ecosystems through a phenomenon known as coral bleaching; that is, the whitening of corals due to the loss of the symbiotic zooxanthellae which impart corals with their characteristic vivid coloration. We describe aspects of the most prevalent episode of coral bleaching ever recorded at Little Cayman, Cayman Islands, during the fall of 2009. The most susceptible corals were found to be, in order, Siderastrea siderea, Montastraea annularis, and Montastraea faveolata, while Diploria strigosa and Agaricia spp. were less so, yet still showed considerable bleaching prevalence and severity. Those found to be least susceptible were Porites porites, Porites astreoides, and Montastraea cavernosa. These observations and other reported observations of coral bleaching, together with 29 years (1982-2010) of satellite-derived sea surface temperatures, were used to optimize bleaching predictions at this location. To do this, a Degree Heating Weeks (DHW) and Peirce Skill Score (PSS) analysis was employed to calculate a local bleaching threshold above which bleaching was expected to occur. A threshold of 4.2 DHW had the highest skill, with a PSS of 0.70. The method outlined here could be applied to other regions to find the optimal bleaching threshold and improve bleaching predictions.
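    A sketch of the threshold-selection logic on weekly data (the NOAA product uses twice-weekly composites; the 12-week window and 1 °C hotspot rule follow the usual DHW convention, and `bleached` is assumed to be a boolean record of observed bleaching events):

```python
import numpy as np

def degree_heating_weeks(sst, mmm):
    # Trailing 12-week sum of hotspots (SST - MMM) of at least 1 deg C
    hotspot = np.maximum(np.asarray(sst, float) - mmm, 0.0)
    hotspot[hotspot < 1.0] = 0.0
    return np.convolve(hotspot, np.ones(12), mode="full")[:len(hotspot)]

def best_dhw_threshold(dhw, bleached):
    # Pick the DHW threshold that maximizes the Peirce Skill Score
    best = (-np.inf, None)
    for t in np.unique(dhw):
        pred = dhw >= t
        h = np.sum(pred & bleached)           # hits
        m = np.sum(~pred & bleached)          # misses
        f = np.sum(pred & ~bleached)          # false alarms
        c = np.sum(~pred & ~bleached)         # correct negatives
        with np.errstate(invalid="ignore"):
            pss = h / (h + m) - f / (f + c)
        if np.isfinite(pss) and pss > best[0]:
            best = (pss, t)
    return best                               # (max PSS, threshold, C-weeks)
```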

  9. Fitting psychometric functions using a fixed-slope parameter: an advanced alternative for estimating odor thresholds with data generated by ASTM E679.

    PubMed

    Peng, Mei; Jaeger, Sara R; Hautus, Michael J

    2014-03-01

    Psychometric functions are predominantly used for estimating detection thresholds in vision and audition. However, the requirement of large data quantities for fitting psychometric functions (>30 replications) reduces their suitability in olfactory studies, because olfactory response data are often limited (<4 replications) due to the susceptibility of human olfactory receptors to fatigue and adaptation. This article introduces a new method for fitting individual-judge psychometric functions to olfactory data obtained using the current standard protocol, American Society for Testing and Materials (ASTM) E679. The slope parameter of the individual-judge psychometric function is fixed to be the same as that of the group function; the same-shaped symmetrical sigmoid function is fitted using only the intercept. This study evaluated the proposed method by comparing it with 2 available methods. Comparison to conventional psychometric functions (fitted slope and intercept) indicated that the assumption of a fixed slope did not compromise the precision of the threshold estimates. No systematic difference was obtained between the proposed method and the ASTM method in terms of group threshold estimates or threshold distributions, but there were changes in the rank, by threshold, of judges in the group. Overall, the fixed-slope psychometric function is recommended for obtaining relatively reliable individual threshold estimates when the quantity of data is limited.
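
    A minimal sketch of the fixed-slope idea, assuming a symmetrical logistic function of log concentration and ignoring guessing and lapse rates: the group slope enters as a constant, and only the threshold (location) is estimated per judge, so sparse olfactory data can still constrain the fit.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def fit_fixed_slope_threshold(log_conc, p_detect, group_slope):
        """Fit a logistic psychometric function of log concentration with the
        slope pinned to the group value; only the threshold is free."""
        def logistic(x, threshold):
            return 1.0 / (1.0 + np.exp(-group_slope * (x - threshold)))
        (threshold,), _ = curve_fit(logistic,
                                    np.asarray(log_conc, float),
                                    np.asarray(p_detect, float),
                                    p0=[float(np.median(log_conc))])
        return threshold
    ```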

  10. A Novel Binarization Algorithm for Ballistics Firearm Identification

    NASA Astrophysics Data System (ADS)

    Li, Dongguang

    The identification of ballistics specimens from imaging systems is of paramount importance in criminal investigation. Binarization plays a key role in the preprocessing stage of recognizing cartridges in ballistic imaging systems. Unfortunately, it is very difficult to obtain a satisfactory binary image using existing binarization algorithms. In this paper, we utilize global and local thresholds to enhance image binarization. Importantly, we present a novel criterion for effectively detecting edges in the images. Comprehensive experiments have been conducted over sample ballistic images. The empirical results demonstrate that the proposed method provides a better solution than existing binarization algorithms.

  11. Application of Neural Network Optimized by Mind Evolutionary Computation in Building Energy Prediction

    NASA Astrophysics Data System (ADS)

    Song, Chen; Zhong-Cheng, Wu; Hong, Lv

    2018-03-01

    Building energy forecasting plays an important role in energy management and planning. Using the mind evolutionary algorithm to find optimal network weights and thresholds when training a BP neural network can overcome the BP network's tendency to converge to a local minimum. The optimized network is used both for time-series prediction and for same-month forecasting, yielding two predictive values. These two predictive values are then fed into a further neural network to obtain the final forecast. The effectiveness of the method was verified experimentally using energy data from three buildings in Hefei.

  12. Quantum walks with an anisotropic coin I: spectral theory

    NASA Astrophysics Data System (ADS)

    Richard, S.; Suzuki, A.; Tiedra de Aldecoa, R.

    2018-02-01

    We perform the spectral analysis of the evolution operator U of quantum walks with an anisotropic coin, which include one-defect models, two-phase quantum walks, and topological phase quantum walks as special cases. In particular, we determine the essential spectrum of U, we show the existence of locally U-smooth operators, we prove the discreteness of the eigenvalues of U outside the thresholds, and we prove the absence of singular continuous spectrum for U. Our analysis is based on new commutator methods for unitary operators in a two-Hilbert spaces setting, which are of independent interest.

  13. Rapid prototyping of three-dimensional microstructures from multiwalled carbon nanotubes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hung, W.H.; Kumar, Rajay; Bushmaker, Adam

    The authors report a method for creating three-dimensional carbon nanotube structures, whereby a focused laser beam is used to selectively burn local regions of a dense forest of multiwalled carbon nanotubes. Raman spectroscopy and scanning electron microscopy are used to quantify the threshold for laser burnout and the depth of burnout. The minimum power density for burning carbon nanotubes in air is found to be 244 μW/μm². We create various three-dimensional patterns using this method, illustrating its potential use for the rapid prototyping of carbon nanotube microstructures. Undercut profiles, changes in nanotube density, and nanoparticle formation are observed after laser surface treatment and provide insight into the dynamic process of the burnout mechanism.

  14. Minimal-Drift Heading Measurement using a MEMS Gyro for Indoor Mobile Robots.

    PubMed

    Hong, Sung Kyung; Park, Sungsu

    2008-11-17

    To meet the challenges of making low-cost MEMS yaw rate gyros for the precise self-localization of indoor mobile robots, this paper examines a practical and effective method of minimizing drift in the heading angle that relies solely on integration of rate signals from a gyro. The main idea of the proposed approach consists of two parts: 1) self-identification of the calibration coefficients that affect long-term performance, and 2) a threshold filter to reject the broadband noise component that affects short-term performance. Experimental results with the proposed phased method applied to an Epson XV3500 gyro demonstrate that it yields heading angle measurements with minimal drift, overcoming the major error sources in the MEMS gyro output.
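
    The two-part idea above reduces to a few lines: calibrate, gate out sub-threshold noise so it is not integrated while the robot is stationary, then integrate. The sketch below assumes the calibration coefficients and the noise threshold are already known; in the paper they are self-identified.

    ```python
    import numpy as np

    def integrate_heading(rate_dps, dt, bias, scale, noise_threshold):
        """Heading sketch: correct the raw rate with calibration coefficients
        (bias and scale factor), zero out rates whose magnitude falls below
        a noise threshold so broadband noise is not integrated, then
        integrate to a heading angle in degrees."""
        corrected = (np.asarray(rate_dps, float) - bias) * scale
        corrected[np.abs(corrected) < noise_threshold] = 0.0
        return np.cumsum(corrected) * dt
    ```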

  15. Statistical approaches for the definition of landslide rainfall thresholds and their uncertainty using rain gauge and satellite data

    NASA Astrophysics Data System (ADS)

    Rossi, M.; Luciani, S.; Valigi, D.; Kirschbaum, D.; Brunetti, M. T.; Peruccacci, S.; Guzzetti, F.

    2017-05-01

    Models for forecasting rainfall-induced landslides are mostly based on the identification of empirical rainfall thresholds obtained exploiting rain gauge data. Despite their increased availability, satellite rainfall estimates are scarcely used for this purpose. Satellite data should be useful in ungauged and remote areas, or should provide a significant spatial and temporal reference in gauged areas. In this paper, the reliability of rainfall thresholds based on remotely sensed rainfall and rain gauge data for the prediction of landslide occurrence is analyzed. To date, the estimation of the uncertainty associated with the empirical rainfall thresholds is mostly based on a bootstrap resampling of the rainfall duration and the cumulated event rainfall pairs (D,E) characterizing rainfall events responsible for past failures. This estimation does not consider the measurement uncertainty associated with D and E. In the paper, we propose (i) a new automated procedure to reconstruct the ED conditions responsible for landslide triggering and their uncertainties, and (ii) three new methods to identify rainfall thresholds for possible landslide occurrence, exploiting rain gauge and satellite data. In particular, the proposed methods are based on Least Square (LS), Quantile Regression (QR) and Nonlinear Least Square (NLS) statistical approaches. We applied the new procedure and methods to define empirical rainfall thresholds and their associated uncertainties in the Umbria region (central Italy) using both rain-gauge measurements and satellite estimates. We finally validated the thresholds and tested the effectiveness of the different threshold definition methods with independent landslide information. Among the three, the NLS method performed best in calculating thresholds over the full range of rainfall durations. We found that the thresholds obtained from satellite data are lower than those obtained from rain gauge measurements. This is in agreement with the literature, where satellite rainfall data underestimate the "ground" rainfall registered by rain gauges.
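
    A schematic version of the NLS approach is sketched below: fit a power law E = aD^b to the triggering events, then shift the curve down to a chosen exceedance level. The 5% level and the residual-ratio shift are illustrative simplifications of the paper's uncertainty treatment, not its actual procedure.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def fit_ed_threshold(duration_h, event_rain_mm, exceed_frac=0.05):
        """Fit a power law E = a * D^b through (duration, cumulated event
        rainfall) pairs of landslide-triggering events with nonlinear least
        squares, then scale the curve down so that only exceed_frac of the
        events fall below it, as a crude stand-in for an exceedance
        threshold."""
        D = np.asarray(duration_h, float)
        E = np.asarray(event_rain_mm, float)
        power = lambda d, a, b: a * d ** b
        (a, b), _ = curve_fit(power, D, E, p0=[10.0, 0.5], maxfev=10000)
        a_thr = a * np.quantile(E / power(D, a, b), exceed_frac)
        return a_thr, b  # threshold curve: E_T(D) = a_thr * D**b
    ```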

  16. Statistical Approaches for the Definition of Landslide Rainfall Thresholds and their Uncertainty Using Rain Gauge and Satellite Data

    NASA Technical Reports Server (NTRS)

    Rossi, M.; Luciani, S.; Valigi, D.; Kirschbaum, D.; Brunetti, M. T.; Peruccacci, S.; Guzzetti, F.

    2017-01-01

    Models for forecasting rainfall-induced landslides are mostly based on the identification of empirical rainfall thresholds obtained exploiting rain gauge data. Despite their increased availability, satellite rainfall estimates are scarcely used for this purpose. Satellite data should be useful in ungauged and remote areas, or should provide a significant spatial and temporal reference in gauged areas. In this paper, the reliability of rainfall thresholds based on remotely sensed rainfall and rain gauge data for the prediction of landslide occurrence is analyzed. To date, the estimation of the uncertainty associated with the empirical rainfall thresholds is mostly based on a bootstrap resampling of the rainfall duration and the cumulated event rainfall pairs (D,E) characterizing rainfall events responsible for past failures. This estimation does not consider the measurement uncertainty associated with D and E. In the paper, we propose (i) a new automated procedure to reconstruct the ED conditions responsible for landslide triggering and their uncertainties, and (ii) three new methods to identify rainfall thresholds for possible landslide occurrence, exploiting rain gauge and satellite data. In particular, the proposed methods are based on Least Square (LS), Quantile Regression (QR) and Nonlinear Least Square (NLS) statistical approaches. We applied the new procedure and methods to define empirical rainfall thresholds and their associated uncertainties in the Umbria region (central Italy) using both rain-gauge measurements and satellite estimates. We finally validated the thresholds and tested the effectiveness of the different threshold definition methods with independent landslide information. Among the three, the NLS method performed best in calculating thresholds over the full range of rainfall durations. We found that the thresholds obtained from satellite data are lower than those obtained from rain gauge measurements. This is in agreement with the literature, where satellite rainfall data underestimate the 'ground' rainfall registered by rain gauges.

  17. Optimal Threshold Determination for Interpreting Semantic Similarity and Particularity: Application to the Comparison of Gene Sets and Metabolic Pathways Using GO and ChEBI

    PubMed Central

    Bettembourg, Charles; Diot, Christian; Dameron, Olivier

    2015-01-01

    Background The analysis of gene annotations referencing back to Gene Ontology plays an important role in the interpretation of high-throughput experiments results. This analysis typically involves semantic similarity and particularity measures that quantify the importance of the Gene Ontology annotations. However, there is currently no sound method supporting the interpretation of the similarity and particularity values in order to determine whether two genes are similar or whether one gene has some significant particular function. Interpretation is frequently based either on an implicit threshold, or an arbitrary one (typically 0.5). Here we investigate a method for determining thresholds supporting the interpretation of the results of a semantic comparison. Results We propose a method for determining the optimal similarity threshold by minimizing the proportions of false-positive and false-negative similarity matches. We compared the distributions of the similarity values of pairs of similar genes and pairs of non-similar genes. These comparisons were performed separately for all three branches of the Gene Ontology. In all situations, we found overlap between the similar and the non-similar distributions, indicating that some similar genes had a similarity value lower than the similarity value of some non-similar genes. We then extend this method to the semantic particularity measure and to a similarity measure applied to the ChEBI ontology. Thresholds were evaluated over the whole HomoloGene database. For each group of homologous genes, we computed all the similarity and particularity values between pairs of genes. Finally, we focused on the PPAR multigene family to show that the similarity and particularity patterns obtained with our thresholds were better at discriminating orthologs and paralogs than those obtained using default thresholds. Conclusion We developed a method for determining optimal semantic similarity and particularity thresholds. We applied this method on the GO and ChEBI ontologies. Qualitative analysis using the thresholds on the PPAR multigene family yielded biologically-relevant patterns. PMID:26230274
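
    The core of the threshold optimization described above can be sketched as follows, assuming empirical similarity values are available for a set of known-similar and known-non-similar gene pairs; with overlapping distributions, the cutoff minimizing false positives plus false negatives sits where the two densities cross.

    ```python
    import numpy as np

    def optimal_threshold(sim_similar, sim_nonsimilar):
        """Scan the pooled observed similarity values and return the cutoff
        minimizing false positives (non-similar pairs at or above the
        cutoff) plus false negatives (similar pairs below it)."""
        sim_similar = np.asarray(sim_similar, float)
        sim_nonsimilar = np.asarray(sim_nonsimilar, float)
        candidates = np.unique(np.concatenate([sim_similar, sim_nonsimilar]))
        cost = [np.sum(sim_nonsimilar >= t) + np.sum(sim_similar < t)
                for t in candidates]
        return candidates[int(np.argmin(cost))]
    ```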

  18. A technique for setting analytical thresholds in massively parallel sequencing-based forensic DNA analysis

    PubMed Central

    2017-01-01

    Amplicon (targeted) sequencing by massively parallel sequencing (PCR-MPS) is a potential method for use in forensic DNA analyses. In this application, PCR-MPS may supplement or replace other instrumental analysis methods such as capillary electrophoresis and Sanger sequencing for STR and mitochondrial DNA typing, respectively. PCR-MPS also may enable the expansion of forensic DNA analysis methods to include new marker systems such as single nucleotide polymorphisms (SNPs) and insertion/deletions (indels) that currently are assayable using various instrumental analysis methods including microarray and quantitative PCR. Acceptance of PCR-MPS as a forensic method will depend in part upon developing protocols and criteria that define the limitations of a method, including a defensible analytical threshold or method detection limit. This paper describes an approach to establish objective analytical thresholds suitable for multiplexed PCR-MPS methods. A definition is proposed for PCR-MPS method background noise, and an analytical threshold based on background noise is described. PMID:28542338

  19. A technique for setting analytical thresholds in massively parallel sequencing-based forensic DNA analysis.

    PubMed

    Young, Brian; King, Jonathan L; Budowle, Bruce; Armogida, Luigi

    2017-01-01

    Amplicon (targeted) sequencing by massively parallel sequencing (PCR-MPS) is a potential method for use in forensic DNA analyses. In this application, PCR-MPS may supplement or replace other instrumental analysis methods such as capillary electrophoresis and Sanger sequencing for STR and mitochondrial DNA typing, respectively. PCR-MPS also may enable the expansion of forensic DNA analysis methods to include new marker systems such as single nucleotide polymorphisms (SNPs) and insertion/deletions (indels) that currently are assayable using various instrumental analysis methods including microarray and quantitative PCR. Acceptance of PCR-MPS as a forensic method will depend in part upon developing protocols and criteria that define the limitations of a method, including a defensible analytical threshold or method detection limit. This paper describes an approach to establish objective analytical thresholds suitable for multiplexed PCR-MPS methods. A definition is proposed for PCR-MPS method background noise, and an analytical threshold based on background noise is described.
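
    One hedged reading of a noise-based analytical threshold, assuming background read counts have been collected at known-negative positions: set the limit a fixed number of standard deviations above the mean noise. The k = 3 convention below is a generic detection-limit choice, not the paper's calibrated procedure, and log-normally distributed noise would call for the same rule applied to log counts.

    ```python
    import numpy as np

    def analytical_threshold(noise_counts, k=3.0):
        """Noise-based analytical threshold sketch: characterize read counts
        observed at known-negative positions and set the limit at
        mean + k standard deviations of the noise."""
        noise = np.asarray(noise_counts, float)
        return noise.mean() + k * noise.std(ddof=1)
    ```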

  20. Magnetic resonance imaging for the exploitation of bubble-enhanced heating by high-intensity focused ultrasound: a feasibility study in ex vivo liver.

    PubMed

    Elbes, Delphine; Denost, Quentin; Robert, Benjamin; Köhler, Max O; Tanter, Mickaël; Quesson, Bruno

    2014-05-01

    Bubble-enhanced heating (BEH) may be exploited to improve the heating efficiency of high-intensity focused ultrasound in liver and to protect tissues located beyond the focal point. The objectives of this study, performed in ex vivo pig liver, were (i) to develop a method to determine the acoustic power threshold for induction of BEH from displacement images measured by magnetic resonance acoustic radiation force imaging (MR-ARFI), and (ii) to compare temperature distribution with MR thermometry for HIFU protocols with and without BEH. The acoustic threshold for generation of BEH was determined in ex vivo pig liver from MR-ARFI calibration curves of local tissue displacement resulting from sonication at different powers. Temperature distributions (MR thermometry) resulting from "conventional" sonications (20 W, 30 s) were compared with those from "composite" sonications performed at identical parameters, but after a HIFU burst pulse (0.5 s, acoustic power over the threshold for induction of BEH). Displacement images (MR-ARFI) were acquired between sonications to measure potential modifications of local tissue displacement associated with modifications of tissue acoustic characteristics induced by the burst HIFU pulse. The acoustic threshold for induction of BEH corresponded to a displacement amplitude of approximately 50 μm in ex vivo liver. The displacement and temperature images of the composite group exhibited a nearly spherical pattern, shifted approximately 4 mm toward the transducer, in contrast to elliptical shapes centered on the natural focal position for the conventional group. The gains in maximum temperature and displacement values were 1.5 and 2, and the full widths at half-maximum of the displacement data were 1.7 and 2.2 times larger than in the conventional group in directions perpendicular to ultrasound propagation axes. Combination of MR-ARFI and MR thermometry for calibration and exploitation of BEH appears to increase the efficiency and safety of HIFU treatment. Copyright © 2014 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.

  1. When do Indians feel hot? Internet searches indicate seasonality suppresses adaptation to heat

    NASA Astrophysics Data System (ADS)

    Singh, Tanya; Siderius, Christian; Van der Velde, Ype

    2018-05-01

    In a warming world an increasing number of people are being exposed to heat, making a comfortable thermal environment an important need. This study explores the potential of using Regional Internet Search Frequencies (RISF) for air conditioning devices as an indicator for thermal discomfort (i.e. dissatisfaction with the thermal environment) with the aim to quantify the adaptation potential of individuals living across different climate zones and at the high end of the temperature range, in India, where access to health data is limited. We related RISF for the years 2011–2015 to daily daytime outdoor temperature in 17 states and determined at which temperature RISF for air conditioning starts to peak, i.e. crosses a ‘heat threshold’, in each state. Using the spatial variation in heat thresholds, we explored whether people continuously exposed to higher temperatures show a lower response to heat extremes through adaptation (e.g. physiological, behavioural or psychological). State-level heat thresholds ranged from 25.9 °C in Madhya Pradesh to 31.0 °C in Orissa. Local adaptation was found to occur at state level: the higher the average temperature in a state, the higher the heat threshold; and the higher the intra-annual temperature range (warmest minus coldest month) the lower the heat threshold. These results indicate there is potential within India to adapt to warmer temperatures, but that a large intra-annual temperature variability attenuates this potential to adapt to extreme heat. This winter ‘reset’ mechanism should be taken into account when assessing the impact of global warming, with changes in minimum temperatures being an important factor in addition to the change in maximum temperatures itself. Our findings contribute to a better understanding of local heat thresholds and people’s adaptive capacity, which can support the design of local thermal comfort standards and early heat warning systems.

  2. Fission gas bubble identification using MATLAB's image processing toolbox

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Collette, R.

    Automated image processing routines have the potential to aid in the fuel performance evaluation process by eliminating bias in human judgment that may vary from person-to-person or sample-to-sample. This study presents several MATLAB based image analysis routines designed for fission gas void identification in post-irradiation examination of uranium molybdenum (U–Mo) monolithic-type plate fuels. Frequency domain filtration, enlisted as a pre-processing technique, can eliminate artifacts from the image without compromising the critical features of interest. This process is coupled with a bilateral filter, an edge-preserving noise removal technique aimed at preparing the image for optimal segmentation. Adaptive thresholding proved to be the most consistent gray-level feature segmentation technique for U–Mo fuel microstructures. The Sauvola adaptive threshold technique segments the image based on histogram weighting factors in stable contrast regions and local statistics in variable contrast regions. Once all processing is complete, the algorithm outputs the total fission gas void count, the mean void size, and the average porosity. The final results demonstrate an ability to extract fission gas void morphological data faster, more consistently, and at least as accurately as manual segmentation methods. - Highlights: •Automated image processing can aid in the fuel qualification process. •Routines are developed to characterize fission gas bubbles in irradiated U–Mo fuel. •Frequency domain filtration effectively eliminates FIB curtaining artifacts. •Adaptive thresholding proved to be the most accurate segmentation method. •The techniques established are ready to be applied to large scale data extraction testing.
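
    The Sauvola step described above is available off the shelf; a minimal sketch using scikit-image follows. The filename is hypothetical, and the window size and k parameter are the usual defaults rather than the study's tuned values.

    ```python
    from skimage import io
    from skimage.filters import threshold_sauvola

    # Load a grayscale micrograph (hypothetical filename) and segment dark
    # fission-gas voids with a Sauvola local threshold.
    image = io.imread("micrograph.png", as_gray=True)
    local_thresh = threshold_sauvola(image, window_size=25, k=0.2)
    voids = image < local_thresh   # voids are darker than the local background
    porosity = voids.mean()        # area fraction as a porosity estimate
    print(f"void pixels: {voids.sum()}, porosity: {porosity:.3%}")
    ```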

  3. Automated reconstruction of rainfall events responsible for shallow landslides

    NASA Astrophysics Data System (ADS)

    Vessia, G.; Parise, M.; Brunetti, M. T.; Peruccacci, S.; Rossi, M.; Vennari, C.; Guzzetti, F.

    2014-04-01

    Over the last 40 years, many contributions have been devoted to identifying the empirical rainfall thresholds (e.g. intensity vs. duration ID, cumulated rainfall vs. duration ED, cumulated rainfall vs. intensity EI) for the initiation of shallow landslides, based on local as well as worldwide inventories. Although different methods to trace the threshold curves have been proposed and discussed in literature, a systematic study to develop an automated procedure to select the rainfall event responsible for the landslide occurrence has rarely been addressed. Nonetheless, objective criteria for estimating the rainfall responsible for the landslide occurrence (effective rainfall) play a prominent role on the threshold values. In this paper, two criteria for the identification of the effective rainfall events are presented: (1) the first is based on the analysis of the time series of rainfall mean intensity values over one month preceding the landslide occurrence, and (2) the second on the analysis of the trend in the time function of the cumulated mean intensity series calculated from the rainfall records measured through rain gauges. The two criteria have been implemented in an automated procedure written in R language. A sample of 100 shallow landslides collected in Italy by the CNR-IRPI research group from 2002 to 2012 has been used to calibrate the proposed procedure. The cumulated rainfall E and duration D of rainfall events that triggered the documented landslides are calculated through the new procedure and are fitted with power law in the (D,E) diagram. The results are discussed by comparing the (D,E) pairs calculated by the automated procedure and the ones by the expert method.

  4. Damage threshold in adult rabbit eyes after scleral cross-linking by riboflavin/blue light application.

    PubMed

    Iseli, Hans Peter; Körber, Nicole; Karl, Anett; Koch, Christian; Schuldt, Carsten; Penk, Anja; Liu, Qing; Huster, Daniel; Käs, Josef; Reichenbach, Andreas; Wiedemann, Peter; Francke, Mike

    2015-10-01

    Several scleral cross-linking (SXL) methods were suggested to increase the biomechanical stiffness of scleral tissue and therefore, to inhibit axial eye elongation in progressive myopia. In addition to scleral cross-linking and biomechanical effects caused by riboflavin and light irradiation such a treatment might induce tissue damage, dependent on the light intensity used. Therefore, we characterized the damage threshold and mechanical stiffening effect in rabbit eyes after application of riboflavin combined with various blue light intensities. Adult pigmented and albino rabbits were treated with riboflavin (0.5 %) and varying blue light (450 ± 50 nm) dosages from 18 to 780 J/cm(2) (15 to 650 mW/cm(2) for 20 min). Scleral, choroidal and retinal tissue alterations were detected by means of light microscopy, electron microscopy and immunohistochemistry. Biomechanical changes were measured by shear rheology. Blue light dosages of 480 J/cm(2) (400 mW/cm(2)) and beyond induced pathological changes in ocular tissues; the damage threshold was defined by the light intensities which induced cellular degeneration and/or massive collagen structure changes. At such high dosages, we observed alterations of the collagen structure in scleral tissue, as well as pigment aggregation, internal hemorrhages, and collapsed blood vessels. Additionally, photoreceptor degenerations associated with microglia activation and macroglia cell reactivity in the retina were detected. These pathological alterations were locally restricted to the treated areas. Pigmentation of rabbit eyes did not change the damage threshold after a treatment with riboflavin and blue light but seems to influence the vulnerability for blue light irradiations. Increased biomechanical stiffness of scleral tissue could be achieved with blue light intensities below the characterized damage threshold. We conclude that riboflavin and blue light application increased the biomechanical stiffness of scleral tissue at blue light energy levels below the damage threshold. Therefore, applied blue light intensities below the characterized damage threshold might define a therapeutic blue light tolerance range. Copyright © 2015 Elsevier Ltd. All rights reserved.

  5. New constraints on mechanisms of remotely triggered seismicity at Long Valley Caldera

    USGS Publications Warehouse

    Brodsky, E.E.; Prejean, S.G.

    2005-01-01

    Regional-scale triggering of local earthquakes in the crust by seismic waves from distant main shocks has now been robustly documented for over a decade. Some of the most thoroughly recorded examples of repeated triggering of a single site from multiple, large earthquakes are measured in geothermal fields of the western United States like Long Valley Caldera. As one of the few natural cases where the causality of an earthquake sequence is apparent, triggering provides fundamental constraints on the failure processes in earthquakes. We show here that the observed triggering by seismic waves is inconsistent with any mechanism that depends on cumulative shaking as measured by integrated energy density. We also present evidence for a frequency-dependent triggering threshold. On the basis of the seismic records of 12 regional and teleseismic events recorded at Long Valley Caldera, long-period waves (>30 s) are more effective at generating local seismicity than short-period waves of comparable amplitude. If the properties of the system are stationary over time, the failure threshold for long-period waves is ~0.05 cm/s vertical shaking. Assuming a phase velocity of 3.5 km/s and an elastic modulus of 3.5 × 10¹⁰ Pa, the threshold in terms of stress is 5 kPa. The frequency dependence is due in part to the attenuation of the surface waves with depth. Fluid flow through a porous medium can produce the rest of the observed frequency dependence of the threshold. If the threshold is not stationary with time, pore pressures that are >99.5% of lithostatic and vary over time by a factor of 4 could explain the observations with no frequency dependence of the triggering threshold. Copyright 2005 by the American Geophysical Union.

  6. Metallicity inhomogeneities in local star-forming galaxies as a sign of recent metal-poor gas accretion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sánchez Almeida, J.; Morales-Luis, A. B.; Muñoz-Tuñón, C.

    2014-03-01

    We measure the oxygen metallicity of the ionized gas along the major axis of seven dwarf star-forming galaxies. Two of them, SDSSJ1647+21 and SDSSJ2238+14, show ≅0.5 dex metallicity decrements in inner regions with enhanced star formation activity. This behavior is similar to the metallicity drop observed in a number of local tadpole galaxies by Sánchez Almeida et al., and was interpreted as showing early stages of assembling in disk galaxies, with the star formation sustained by external metal-poor gas accretion. The agreement with tadpoles has several implications. (1) It proves that galaxies other than the local tadpoles present the same unusual metallicity pattern. (2) Our metallicity inhomogeneities were inferred using the direct method, thus discarding systematic errors usually attributed to other methods. (3) Taken together with the tadpole data, our findings suggest a threshold around one-tenth the solar value for the metallicity drops to show up. Although galaxies with clear metallicity drops are rare, the physical mechanism responsible for them may sustain a significant part of the star formation activity in the local universe. We argue that the star formation dependence of the mass-metallicity relationship, as well as other general properties followed by most local disk galaxies, is naturally interpreted as side effects of pristine gas infall. Alternatives to the metal-poor gas accretion are examined as well.

  7. A Well-Tempered Hybrid Method for Solving Challenging Time-Dependent Density Functional Theory (TDDFT) Systems.

    PubMed

    Kasper, Joseph M; Williams-Young, David B; Vecharynski, Eugene; Yang, Chao; Li, Xiaosong

    2018-04-10

    The time-dependent Hartree-Fock (TDHF) and time-dependent density functional theory (TDDFT) equations allow one to probe electronic resonances of a system quickly and inexpensively. However, the iterative solution of the eigenvalue problem can be challenging or impossible to converge using standard methods such as the Davidson algorithm for the spectrally dense regions in the interior of the spectrum that are common in X-ray absorption spectroscopy (XAS). More robust solvers, such as the generalized preconditioned locally harmonic residual (GPLHR) method, can alleviate this problem, but at the expense of higher average computational cost. A hybrid method is proposed which adapts to the problem in order to maximize computational performance while providing the superior convergence of GPLHR. In addition, a modification to the GPLHR algorithm is proposed to adaptively choose the shift parameter to enforce convergence of states above a predefined energy threshold.

  8. Critical Mutation Rate Has an Exponential Dependence on Population Size in Haploid and Diploid Populations

    PubMed Central

    Aston, Elizabeth; Channon, Alastair; Day, Charles; Knight, Christopher G.

    2013-01-01

    Understanding the effect of population size on the key parameters of evolution is particularly important for populations nearing extinction. There are evolutionary pressures to evolve sequences that are both fit and robust. At high mutation rates, individuals with greater mutational robustness can outcompete those with higher fitness. This is survival-of-the-flattest, and has been observed in digital organisms, theoretically, in simulated RNA evolution, and in RNA viruses. We introduce an algorithmic method capable of determining the relationship between population size, the critical mutation rate at which individuals with greater robustness to mutation are favoured over individuals with greater fitness, and the error threshold. Verification for this method is provided against analytical models for the error threshold. We show that the critical mutation rate for increasing haploid population sizes can be approximated by an exponential function, with much lower mutation rates tolerated by small populations. This is in contrast to previous studies which identified that critical mutation rate was independent of population size. The algorithm is extended to diploid populations in a system modelled on the biological process of meiosis. The results confirm that the relationship remains exponential, but show that both the critical mutation rate and error threshold are lower for diploids, rather than higher as might have been expected. Analyzing the transition from critical mutation rate to error threshold provides an improved definition of critical mutation rate. Natural populations with their numbers in decline can be expected to lose genetic material in line with the exponential model, accelerating and potentially irreversibly advancing their decline, and this could potentially affect extinction, recovery and population management strategy. The effect of population size is particularly strong in small populations with 100 individuals or less; the exponential model has significant potential in aiding population management to prevent local (and global) extinction events. PMID:24386200

  9. Prevention of Ca(2+)-mediated action potentials in GABAergic local circuit neurones of rat thalamus by a transient K+ current.

    PubMed Central

    Pape, H C; Budde, T; Mager, R; Kisvárday, Z F

    1994-01-01

    1. Neurones enzymatically dissociated from the rat dorsal lateral geniculate nucleus (LGN) were identified as GABAergic local circuit interneurones and geniculocortical relay cells, based upon quantitative analysis of soma profiles, immunohistochemical detection of GABA or glutamic acid decarboxylase, and basic electrogenic behaviour. 2. During whole-cell current-clamp recording, isolated LGN neurones generated firing patterns resembling those in intact tissue, with the most striking difference relating to the presence in relay cells of a Ca2+ action potential with a low threshold of activation, capable of triggering fast spikes, and the absence of a regenerative Ca2+ response with a low threshold of activation in local circuit cells. 3. Whole-cell voltage-clamp experiments demonstrated that both classes of LGN neurones possess at least two voltage-dependent membrane currents which operate in a range of membrane potentials negative to the threshold for generation of Na(+)-K(+)-mediated spikes: the T-type Ca2+ current (IT) and an A-type K+ current (IA). Taking into account the differences in membrane surface area, the average size of IT was similar in the two types of neurones, and interneurones possessed a slightly larger A-conductance. 4. In local circuit neurones, the ranges of steady-state inactivation and activation of IT and IA were largely overlapping (Vh = -81.1 vs. -82.8 mV), both currents activated at around -70 mV, and they rapidly increased in amplitude with further depolarization. In relay cells, the inactivation curve of IT was negatively shifted along the voltage axis by about 20 mV compared with that of IA (Vh = -86.1 vs. -69.2 mV), and the activation threshold for IT (at -80 mV) was 20 mV more negative than that for IA. In interneurones, the activation range of IT was shifted to values more positive than that in relay cells (Vh = -54.9 vs. -64.5 mV), whereas the activation range of IA was more negative (Vh = -25.2 vs. -14.5 mV). 5. Under whole-cell voltage-clamp conditions that allowed the combined activation of Ca2+ and K+ currents, depolarizing voltage steps from -110 mV evoked inward currents resembling IT in relay cells and small outward currents indicative of IA in local circuit neurones. After blockade of IA with 4-aminopyridine (4-AP), the same pulse protocol produced IT in both types of neurones. Under current clamp, 4-AP unmasked a regenerative membrane depolarization with a low threshold of activation capable of triggering fast spikes in local circuit neurones. (ABSTRACT TRUNCATED AT 400 WORDS) PMID:7965855

  10. Aerial Insecticide Treatments for Management of Dectes Stem Borer, Dectes texanus, in Soybean

    PubMed Central

    Sloderbeck, P. E.; Buschman, L.L.

    2011-01-01

    The Dectes stem borer, Dectes texanus LeConte (Coleoptera: Cerambycidae), is an increasingly important pest of soybean and sunflower in central North America. Nine large-scale field trials were conducted over a 3-year period to determine if Dectes stem borer could be managed with insecticide treatments. Aerial applications of lambda on July 6, 12 and 15 were successful in significantly reducing adults, but applications on July 1, 20 and 24 were less successful. These data suggest that for central Kansas two aerial applications may be required to control Dectes stem borers in soybean. Based on our experience, the first application should be made at the peak of adult flight, about July 5th, and the second application 10 days later. The local treatment schedule should be developed to follow the local Dectes stem borer adult emergence pattern. Treated aerial strips 59 m (195 ft) wide were not large enough to prevent reinfestation, but treated half-circles (24 ha or 60 acres) were successful in reducing Dectes stem borer infestation of soybean. Sweep net samples of adults were not successful in identifying a treatment threshold, so treatment decisions will need to be based on field history of infestation. Further studies are needed to identify better sampling methods that can be used to establish treatment thresholds and to refine the best timing for treatments. PMID:21861653

  11. Processing Motion Signals in Complex Environments

    NASA Technical Reports Server (NTRS)

    Verghese, Preeti

    2000-01-01

    Motion information is critical for human locomotion and scene segmentation. Currently we have excellent neurophysiological models that are able to predict human detection and discrimination of local signals. Local motion signals are insufficient by themselves to guide human locomotion and to provide information about depth, object boundaries and surface structure. My research is aimed at understanding the mechanisms underlying the combination of motion signals across space and time. A target moving on an extended trajectory amidst noise dots in Brownian motion is much more detectable than the sum of signals generated by independent motion energy units responding to the trajectory segments. This result suggests that facilitation occurs between motion units tuned to similar directions, lying along the trajectory path. We investigated whether the interaction between local motion units along the motion direction is mediated by contrast. One possibility is that contrast-driven signals from motion units early in the trajectory sequence are added to signals in subsequent units. If this were the case, then units later in the sequence would have a larger signal than those earlier in the sequence. To test this possibility, we compared contrast discrimination thresholds for the first and third patches of a triplet of sequentially presented Gabor patches, aligned along the motion direction. According to this simple additive model, contrast increment thresholds for the third patch should be higher than thresholds for the first patch. The lack of a measurable effect on contrast thresholds for these various manipulations suggests that the pooling of signals along a trajectory is not mediated by contrast-driven signals. Instead, these results are consistent with models that propose that the facilitation of trajectory signals is achieved by a second-level network that chooses the strongest local motion signals and combines them if they occur in a spatio-temporal sequence consistent with a trajectory. These results parallel the lack of increased apparent contrast along a static contour made up of similarly oriented elements.

  12. SU-E-CAMPUS-I-06: Y90 PET/CT for the Instantaneous Determination of Both Target and Non-Target Absorbed Doses Following Hepatic Radioembolization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pasciak, A; Kao, J

    2014-06-15

    Purpose The process of converting Yttrium-90 (Y90) PET/CT images into 3D absorbed dose maps will be explained. The simple methods presented will allow the medical physicist to analyze Y90 PET images following radioembolization and determine the absorbed dose to tumor, normal liver parenchyma and other areas of interest, without application of Monte-Carlo radiation transport or dose-point-kernel (DPK) convolution. Methods Absorbed dose can be computed from Y90 PET/CT images based on the premise that radioembolization is a permanent implant with a constant relative activity distribution after infusion. Many Y90 PET/CT publications have used DPK convolution to obtain 3D absorbed dose maps. However, this method requires specialized software, limiting clinical utility. The Local Deposition method, an alternative to DPK convolution, can be used to obtain absorbed dose and requires no additional computer processing. Pixel values from regions of interest drawn on Y90 PET/CT images can be converted to absorbed dose (Gy) by multiplication with a scalar constant. Results There is evidence that suggests the Local Deposition method may actually be more accurate than DPK convolution, and it has been successfully used in a recent Y90 PET/CT publication. We have analytically compared dose-volume-histograms (DVH) for phantom hot-spheres to determine the difference between the DPK and Local Deposition methods, as a function of PET scanner point-spread-function for Y90. We have found that for PET/CT systems with a FWHM greater than 3.0 mm when imaging Y90, the Local Deposition method provides a more accurate representation of the DVH, regardless of target size, than DPK convolution. Conclusion Using the Local Deposition method, post-radioembolization Y90 PET/CT images can be transformed into 3D absorbed dose maps of the liver. An interventional radiologist or a medical physicist can perform this transformation in a clinical setting, allowing for rapid prediction of treatment efficacy by comparison to published tumoricidal thresholds.
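
    Under the local deposition premise, the voxel dose is the activity concentration times a scalar. A sketch of that constant from first principles follows, assuming complete decay in place, the Y-90 mean beta energy, and a nominal liver density; this lands near the commonly quoted factor of roughly 50 Gy per GBq/kg.

    ```python
    import numpy as np

    SECONDS_PER_HALF_LIFE = 64.05 * 3600   # Y-90 half-life in seconds
    MEV_TO_J = 1.602e-13
    MEAN_BETA_MEV = 0.9337                 # mean energy per Y-90 decay
    LIVER_DENSITY_G_PER_ML = 1.03          # nominal, assumption

    def local_deposition_dose(activity_conc_bq_per_ml):
        """Local deposition sketch: every decay deposits its mean beta energy
        in the source voxel, so absorbed dose (Gy) is a scalar multiple of
        the PET-measured activity concentration, assuming complete decay."""
        decays_per_bq = SECONDS_PER_HALF_LIFE / np.log(2)  # time-integrated decays
        j_per_bq = decays_per_bq * MEAN_BETA_MEV * MEV_TO_J
        voxel_mass_kg = LIVER_DENSITY_G_PER_ML * 1e-3      # mass of 1 mL
        return activity_conc_bq_per_ml * j_per_bq / voxel_mass_kg
    ```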

  13. Transition to subcritical turbulence in a tokamak plasma

    NASA Astrophysics Data System (ADS)

    van Wyk, F.; Highcock, E. G.; Schekochihin, A. A.; Roach, C. M.; Field, A. R.; Dorland, W.

    2016-12-01

    Tokamak turbulence, driven by the ion-temperature gradient and occurring in the presence of flow shear, is investigated by means of local, ion-scale, electrostatic gyrokinetic simulations (with both kinetic ions and electrons) of the conditions in the outer core of the Mega-Ampere Spherical Tokamak (MAST). A parameter scan in the local values of the ion-temperature gradient and flow shear is performed. It is demonstrated that the experimentally observed state is near the stability threshold and that this stability threshold is nonlinear: sheared turbulence is subcritical, i.e. the system is formally stable to small perturbations, but, given a large enough initial perturbation, it transitions to a turbulent state. A scenario for such a transition is proposed and supported by numerical results: close to threshold, the nonlinear saturated state and the associated anomalous heat transport are dominated by long-lived coherent structures, which drift across the domain, have finite amplitudes, but are not volume filling; as the system is taken away from the threshold into the more unstable regime, the number of these structures increases until they overlap and a more conventional chaotic state emerges. Whereas this appears to represent a new scenario for transition to turbulence in tokamak plasmas, it is reminiscent of the behaviour of other subcritically turbulent systems, e.g. pipe flows and Keplerian magnetorotational accretion flows.

  14. Artifacts Quantification of Metal Implants in MRI

    NASA Astrophysics Data System (ADS)

    Vrachnis, I. N.; Vlachopoulos, G. F.; Maris, T. G.; Costaridou, L. I.

    2017-11-01

    The presence of materials with different magnetic properties, such as metal implants, causes local distortion of the magnetic field, resulting in signal voids and pile-ups, i.e. susceptibility artifacts, in MRI. Quantitative and unbiased measurement of the artifact is a prerequisite for optimization of acquisition parameters. In this study an image gradient based segmentation method is proposed for susceptibility artifact quantification. The method captures abrupt signal alterations by calculation of the image gradient. Then the artifact is quantified in terms of its extent by an automated cross entropy thresholding method as an image area percentage. The proposed method for artifact quantification was tested in phantoms containing two orthopedic implants with significantly different magnetic permeabilities. The method was compared against a method proposed in the literature, considered as a reference, demonstrating moderate to good correlation (Spearman’s rho = 0.62 and 0.802 in the case of titanium and stainless steel implants, respectively). The automated character of the proposed quantification method seems promising towards MRI acquisition parameter optimization.
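
    A plausible reading of the pipeline above in a few lines of scikit-image, with a Sobel gradient standing in for the image gradient and Li's minimum cross entropy threshold for the automated cross entropy step; the filename is hypothetical and the exact operators in the study may differ.

    ```python
    from skimage import io
    from skimage.filters import sobel, threshold_li

    # Gradient-based artifact quantification sketch: compute an image
    # gradient, binarize it with a minimum cross entropy threshold, and
    # report the artifact extent as an image area percentage.
    image = io.imread("implant_slice.png", as_gray=True)
    grad = sobel(image)               # captures abrupt signal alterations
    mask = grad > threshold_li(grad)  # automated cross entropy thresholding
    extent_pct = 100.0 * mask.mean()
    print(f"artifact extent: {extent_pct:.1f}% of image area")
    ```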

  15. Biomagnetism using SQUIDs: status and perspectives

    NASA Astrophysics Data System (ADS)

    Sternickel, Karsten; Braginski, Alex I.

    2006-03-01

    Biomagnetism involves the measurement and analysis of very weak local magnetic fields of living organisms and various organs in humans. Such fields can be of physiological origin or due to magnetic impurities or markers. This paper reviews existing and prospective applications of biomagnetism in clinical research and medical diagnostics. Currently, such applications require sensitive magnetic SQUID sensors and amplifiers. The practicality of biomagnetic methods depends especially on techniques for suppressing the dominant environmental electromagnetic noise, and on suitable nearly real-time data processing and interpretation methods. Of the many biomagnetic methods and applications, only the functional studies of the human brain (magnetoencephalography) and liver susceptometry are in clinical use, while functional diagnostics of the human heart (magnetocardiography) approaches the threshold of clinical acceptance. Particularly promising for the future is the ongoing research into low-field magnetic resonance anatomical imaging using SQUIDs.

  16. A complementary graphical method for reducing and analyzing large data sets. Case studies demonstrating thresholds setting and selection.

    PubMed

    Jing, X; Cimino, J J

    2014-01-01

    Graphical displays can make data more understandable; however, large graphs can challenge human comprehension. We have previously described a filtering method to provide high-level summary views of large data sets. In this paper we demonstrate our method for setting and selecting thresholds to limit graph size while retaining important information, by applying it to large single and paired data sets taken from patient and bibliographic databases. Four case studies are used to illustrate our method. The data are either patient discharge diagnoses (coded using the International Classification of Diseases, Clinical Modifications [ICD9-CM]) or Medline citations (coded using the Medical Subject Headings [MeSH]). We use combinations of different thresholds to obtain filtered graphs for detailed analysis. Threshold setting and selection, such as thresholds for node counts, class counts, ratio values, p values (for diff data sets), and percentiles of selected class count thresholds, are demonstrated in detail in the case studies. The main steps include: data preparation, data manipulation, computation, and threshold selection and visualization. We also describe the data models for the different types of thresholds and the considerations for threshold selection. The filtered graphs are 1%-3% of the size of the original graphs. For our case studies, the graphs provide 1) the most heavily used ICD9-CM codes, 2) the codes with most patients in a research hospital in 2011, 3) a profile of publications on "heavily represented topics" in MEDLINE in 2011, and 4) validated knowledge about adverse effects of the medication rosiglitazone and new interesting areas in the ICD9-CM hierarchy associated with patients taking the medication pioglitazone. Our filtering method reduces large graphs to a manageable size by removing relatively unimportant nodes. The graphical method provides summary views based on computation of usage frequency and semantic context of hierarchical terminology. The method is applicable to large data sets (such as a hundred thousand records or more) and can be used to generate new hypotheses from data sets coded with hierarchical terminologies.

  17. Flood Extent Delineation by Thresholding Sentinel-1 SAR Imagery Based on Ancillary Land Cover Information

    NASA Astrophysics Data System (ADS)

    Liang, J.; Liu, D.

    2017-12-01

    Emergency responses to floods require timely information on water extents that can be produced by satellite-based remote sensing. As SAR images can be acquired in adverse illumination and weather conditions, they are particularly suitable for delineating water extent during a flood event. Thresholding SAR imagery is one of the most widely used approaches to delineate water extent. However, most studies apply only one threshold to separate water and dry land, without considering the complexity and variability of the different dry land surface types in an image. This paper proposes a new thresholding method for SAR imagery to delineate water from other land cover types. A probability distribution of SAR backscatter intensity is fitted for each land cover type, including water, before a flood event, and the intersection between two distributions is regarded as the threshold to classify the two. To extract water, a set of thresholds is applied to several pairs of land cover types (e.g., water and urban, or water and forest). The subsets are merged to form the water distribution for the SAR image during or after the flooding. Experiments show that this land cover based thresholding approach outperformed traditional single thresholding by about 5% to 15%. This method has great application potential given the broad acceptance of thresholding-based methods and the availability of land cover data, especially for heterogeneous regions.
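
    The per-pair threshold step can be sketched directly: fit a distribution to the pre-flood backscatter of water and of one dry land class, then solve for the intersection of the two densities between the class means. Gaussian distributions in dB are an assumption here; the distributions fitted in the paper may differ.

    ```python
    import numpy as np
    from scipy.stats import norm
    from scipy.optimize import brentq

    def class_pair_threshold(db_water, db_land):
        """Fit a Gaussian to pre-flood backscatter (in dB) for water and for
        one dry land cover class, and return the point between the class
        means where the two fitted densities intersect, used as the
        water/land decision threshold for that pair."""
        mu_w, sd_w = np.mean(db_water), np.std(db_water, ddof=1)
        mu_l, sd_l = np.mean(db_land), np.std(db_land, ddof=1)
        diff = lambda x: norm.pdf(x, mu_w, sd_w) - norm.pdf(x, mu_l, sd_l)
        return brentq(diff, min(mu_w, mu_l), max(mu_w, mu_l))
    ```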

  18. Methods for threshold determination in multiplexed assays

    DOEpatents

    Tammero, Lance F. Bentley; Dzenitis, John M; Hindson, Benjamin J

    2014-06-24

    Methods for determining the threshold values of the signatures included in an assay are described. Each signature enables detection of a target. The methods determine a probability density function of negative samples and a corresponding false positive rate curve. A false positive criterion is established, and the threshold for a signature is determined as the point at which the false positive rate curve intersects the false positive criterion. A method for quantitative analysis and interpretation of assay results, together with a method for determining a desired limit of detection of a signature in an assay, are also described.
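
    Read empirically, the scheme reduces to a quantile of the negative-sample distribution; the sketch below assumes signal values from known-negative samples and a 1% false positive criterion, both illustrative.

    ```python
    import numpy as np

    def threshold_from_fp_criterion(negative_signals, fp_criterion=0.01):
        """Empirical version of the described scheme: the false positive rate
        curve is the fraction of negative samples exceeding each candidate
        cutoff, and the threshold is the point where that curve falls to the
        chosen criterion, i.e. the (1 - criterion) quantile of the
        negatives."""
        return float(np.quantile(np.asarray(negative_signals, float),
                                 1.0 - fp_criterion))
    ```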

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Guang, E-mail: lig2@mskcc.org; Schmidtlein, C. Ross; Humm, John L.

    Purpose: To assess and account for the impact of respiratory motion on the variability of activity and volume determination of liver tumor in positron emission tomography (PET) through a comparison between free-breathing (FB) and respiration-suspended (RS) PET images. Methods: As part of a PET/computed tomography (CT) guided percutaneous liver ablation procedure performed on a PET/CT scanner, a patient's breathing is suspended on a ventilator, allowing the acquisition of near-motionless PET and CT reference images of the liver. In this study, baseline RS and FB PET/CT images of 20 patients undergoing thermal ablation were acquired. The RS PET provides a near-motionless reference in a human study, and thereby allows a quantitative evaluation of the effect of respiratory motion on PET images obtained under FB conditions. Two methods were applied to calculate tumor activity and volume: (1) threshold-based segmentation (TBS), estimating the total lesion glycolysis (TLG) and the segmented volume, and (2) histogram-based estimation (HBE), yielding the background-subtracted lesion (BSL) activity and associated volume. The TBS method employs 50% of the maximum standardized uptake value (SUVmax) as the threshold for tumors with SUVmax ≥ 2× SUVliver-bkg, and tumor activity above this threshold yields TLG50%. The HBE method determines the local PET background based on a Gaussian fit of the low-SUV peak in a SUV-volume histogram, which is generated within a user-defined and optimized volume of interest containing both local background and lesion uptakes. Voxels with PET intensity above the fitted background were considered to have originated from the tumor and used to calculate the BSL activity and its associated lesion volume. Results: Respiratory motion caused SUVmax to decrease from RS to FB by −15% ± 11% (p = 0.01). Using the TBS method, there was also a decrease in SUVmean (−18% ± 9%, p = 0.01), but an increase in TLG50% (18% ± 36%) and in the segmented volume (47% ± 52%, p = 0.01) from RS to FB PET images. The background uptake in normal liver was stable, 1% ± 9%. In contrast, using the HBE method, the differences in both BSL activity and BSL volume from RS to FB were −8% ± 10% (p = 0.005) and 0% ± 16% (p = 0.94), respectively. Conclusions: This is the first time that almost motion-free PET images of the human liver were acquired and compared to free-breathing PET. The BSL method's results are more consistent, for the calculation of both tumor activity and volume in RS and FB PET images, than those using conventional TBS. This suggests that the BSL method might be less sensitive to motion blurring and provides an improved estimation of tumor activity and volume in the presence of respiratory motion.

  20. Risk management in air protection in the Republic of Croatia.

    PubMed

    Peternel, Renata; Toth, Ivan; Hercog, Predrag

    2014-03-01

    In the Republic of Croatia, according to the Air Protection Act, air pollution assessment is obligatory on the whole State territory. For individual regions and populated areas in the State, a network has been established for permanent air quality monitoring. The State network consists of stations for measuring background pollution, regional and cross-border remote transfer and measurements as part of international government liabilities, then stations for measuring air quality in areas of cultural and natural heritage, and stations for measuring air pollution in towns and industrial zones. Exceedances of alert and information threshold levels of air pollutants are related to emissions from industrial plants and to accidents. Each excess represents a threat to human health in the case of short-term exposure. Monitoring of alert and information threshold levels is carried out at stations from the State and local networks for permanent air quality monitoring, according to the Air Quality Measurement Program in the State network for permanent monitoring of air quality and the air quality measurement programs in local networks for permanent air quality monitoring. The State network for permanent air quality monitoring has a developed automatic system for reporting on alert and information threshold levels, whereas many local networks under the competence of regional and local self-governments still lack any fully installed systems of this type. In case of accidents, prompt action at all responsibility levels is necessary in order to prevent a crisis, and this requires developed and coordinated competent units of the State Administration as well as self-government units. It is also necessary to be continuously active in improving the implementation of legislative regulations in the field of crises related to critical and alert levels of air pollutants, especially at local levels.

  1. SNR enhancement for downhole microseismic data based on scale classification shearlet transform

    NASA Astrophysics Data System (ADS)

    Li, Juan; Ji, Shuo; Li, Yue; Qian, Zhihong; Lu, Weili

    2018-06-01

    Shearlet transform (ST) can be effective in 2D signal processing due to its parabolic scaling, high directional sensitivity, and optimal sparsity. ST combined with thresholding has been successfully applied to suppress random noise. However, because of the low magnitude and high frequency of a downhole microseismic signal, the coefficient values of valid signals and noise are similar in the shearlet domain, which makes thresholding difficult to use for denoising. In this paper, we present a scale classification ST to solve this problem. The ST is used to decompose noisy microseismic data into several scales. By analyzing the spectrum and energy distribution of the shearlet coefficients of microseismic data, we divide the scales into two types: low-frequency scales, which contain less useful signal, and high-frequency scales, which contain more useful signal. After classification, we use two different methods to deal with the coefficients on the different scales. For the low-frequency scales, the noise is attenuated using a thresholding method. As for the high-frequency scales, we propose a non-local means filter based on a generalized Gaussian distribution model, which takes advantage of the temporal and spatial similarity of microseismic data. The experimental results on both synthetic records and field data illustrate that our proposed method preserves the useful components and attenuates the noise well.
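
    A minimal NumPy sketch of the scale-classification idea: scales are labelled by their share of total coefficient energy, and only the noise-dominated ones are hard-thresholded. The energy criterion and the universal threshold are stand-ins for the paper's spectrum analysis and its GGD-based non-local means step.

    ```python
    import numpy as np

    def classify_scales(coeff_scales, energy_ratio=0.1):
        """Label each scale as noise-dominated (True) or signal-rich (False)
        by its share of the total coefficient energy (assumed criterion)."""
        energies = np.array([np.sum(c ** 2) for c in coeff_scales])
        frac = energies / energies.sum()
        return [f < energy_ratio for f in frac]

    def denoise_scales(coeff_scales, sigma):
        """Hard-threshold only the noise-dominated scales; the signal-rich
        scales would receive the paper's GGD-based non-local means instead."""
        out = []
        for c, noisy in zip(coeff_scales, classify_scales(coeff_scales)):
            if noisy:
                t = sigma * np.sqrt(2 * np.log(c.size))  # universal threshold
                out.append(np.where(np.abs(c) > t, c, 0.0))
            else:
                out.append(c)  # placeholder for the non-local means step
        return out
    ```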

  2. Method and Apparatus for Evaluating the Visual Quality of Processed Digital Video Sequences

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B. (Inventor)

    2002-01-01

    A Digital Video Quality (DVQ) apparatus and method that incorporate a model of human visual sensitivity to predict the visibility of artifacts. The DVQ method and apparatus are used for the evaluation of the visual quality of processed digital video sequences and for adaptively controlling the bit rate of the processed digital video sequences without compromising the visual quality. The DVQ apparatus minimizes the required amount of memory and computation. The input to the DVQ apparatus is a pair of color image sequences: an original (R) non-compressed sequence, and a processed (T) sequence. Both sequences (R) and (T) are sampled, cropped, and subjected to color transformations. The sequences are then subjected to blocking and discrete cosine transformation, and the results are transformed to local contrast. The next step is a time filtering operation which implements the human sensitivity to different time frequencies. The results are converted to threshold units by dividing each discrete cosine transform coefficient by its respective visual threshold. At the next stage the two sequences are subtracted to produce an error sequence. The error sequence is subjected to a contrast masking operation, which also depends upon the reference sequence (R). The masked errors can be pooled in various ways to illustrate the perceptual error over various dimensions, and the pooled error can be converted to a visual quality measure.

  3. Optimum threshold selection method of centroid computation for Gaussian spot

    NASA Astrophysics Data System (ADS)

    Li, Xuxu; Li, Xinyang; Wang, Caixia

    2015-10-01

    Centroid computation of a Gaussian spot is often conducted to get the exact position of a target or to measure wave-front slopes in the fields of target tracking and wave-front sensing. Center of Gravity (CoG) is the most traditional method of centroid computation, known for its low algorithmic complexity. However, both electronic noise from the detector and photonic noise from the environment reduce its accuracy. In order to improve the accuracy, thresholding is unavoidable before centroid computation, and an optimum threshold needs to be selected. In this paper, a model of the Gaussian spot is established to analyze the performance of the optimum threshold under different Signal-to-Noise Ratio (SNR) conditions. Besides, two optimum threshold selection methods are introduced: TmCoG (using m% of the maximum intensity of the spot as the threshold), and TkCoG (using μn + κσn as the threshold, where μn and σn are the mean value and standard deviation of the background noise). Firstly, their impact on the detection error under various SNR conditions is simulated to find how to decide the value of κ or m. Then, a comparison between them is made. According to the simulation results, TmCoG is superior to TkCoG in terms of the accuracy of the selected threshold, and its detection error is also lower.
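
    Both threshold rules are simple enough to state directly in code. A short sketch with assumed default values for m and k; choosing these values well is precisely the question the paper studies.

    ```python
    import numpy as np

    def thresholded_cog(img, threshold):
        """Center of gravity of the above-threshold signal; subtracting the
        threshold before weighting is one common convention (assumed here)."""
        w = np.clip(img - threshold, 0.0, None)
        if w.sum() == 0:
            raise ValueError("threshold removed all signal")
        ys, xs = np.indices(img.shape)
        return (w * xs).sum() / w.sum(), (w * ys).sum() / w.sum()

    def tm_threshold(img, m=0.3):
        """TmCoG: m (as a fraction) of the spot's maximum intensity."""
        return m * img.max()

    def tk_threshold(bkg_mean, bkg_std, k=3.0):
        """TkCoG: mu_n + k * sigma_n of the background noise."""
        return bkg_mean + k * bkg_std
    ```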

  4. A New Integrated Threshold Selection Methodology for Spatial Forecast Verification of Extreme Events

    NASA Astrophysics Data System (ADS)

    Kholodovsky, V.

    2017-12-01

    Extreme weather and climate events such as heavy precipitation, heat waves, and strong winds can cause extensive damage to society in terms of human lives and financial losses. As the climate changes, it is important to understand how extreme weather events may change as a result. Climate and statistical models are often independently used to model these phenomena. To better assess the performance of climate models, a variety of spatial forecast verification methods have been developed. However, spatial verification metrics that are widely used in comparing mean states, in most cases, do not have an adequate theoretical justification to benchmark extreme weather events. We propose a new integrated threshold selection methodology for spatial forecast verification of extreme events that couples existing pattern recognition indices with high threshold choices. This integrated approach has three main steps: 1) dimension reduction; 2) geometric domain mapping; and 3) threshold clustering. We apply this approach to an observed precipitation dataset over CONUS. The results are evaluated by displaying the threshold distribution seasonally, monthly, and annually. The method offers the user the flexibility of selecting a high threshold that is linked to desired geometrical properties. The proposed high threshold methodology could either complement existing spatial verification methods, where threshold selection is arbitrary, or be directly applicable in extreme value theory.

  5. Efficient method for events detection in phonocardiographic signals

    NASA Astrophysics Data System (ADS)

    Martinez-Alajarin, Juan; Ruiz-Merino, Ramon

    2005-06-01

    The auscultation of the heart is still the first basic analysis tool used to evaluate the functional state of the heart, as well as the first indicator used to refer the patient to a cardiologist. In order to improve the diagnostic capabilities of auscultation, signal processing algorithms are currently being developed to assist the physician at primary care centers for adult and pediatric populations. A basic task for diagnosis from the phonocardiogram is to detect the events (main and additional sounds, murmurs, and clicks) present in the cardiac cycle. This is usually done by applying a threshold and detecting the events that exceed it. However, this approach usually does not allow the detection of the main sounds when additional sounds and murmurs exist, or it may join several events into a single one. In this paper we present a reliable method to detect the events present in the phonocardiogram, even in the presence of heart murmurs or additional sounds. The method detects relative maxima in the amplitude envelope of the phonocardiogram and computes a set of parameters associated with each event. Finally, a set of characteristics is extracted from each event to aid in its identification. Besides, the morphology of the murmurs is also detected, which aids in differentiating between diseases that can occur at the same temporal localization. The algorithms have been applied to real normal heart sounds and murmurs, achieving satisfactory results.
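
    A sketch of the core detection step under stated assumptions: the amplitude envelope is taken from the Hilbert transform, lightly smoothed, and relative maxima are picked with minimum-separation and prominence constraints. The smoothing window and peak parameters are illustrative, not the authors' values.

    ```python
    import numpy as np
    from scipy.signal import hilbert, find_peaks

    def detect_events(pcg, fs, min_separation_s=0.05, rel_height=0.2):
        """Detect phonocardiogram events as relative maxima of the
        amplitude envelope (simplified relative to the paper's parameter set)."""
        envelope = np.abs(hilbert(pcg))                 # amplitude envelope
        win = max(int(0.02 * fs), 1)                    # assumed 20 ms smoothing
        envelope = np.convolve(envelope, np.ones(win) / win, mode="same")
        peaks, _ = find_peaks(
            envelope,
            distance=int(min_separation_s * fs),        # avoid merging events
            prominence=rel_height * envelope.max(),     # reject small ripples
        )
        return peaks / fs, envelope[peaks]              # event times, strengths
    ```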

  6. Recognition and Localization of Speech by Adult Cochlear Implant Recipients Wearing a Digital Hearing Aid in the Nonimplanted Ear (Bimodal Hearing)

    PubMed Central

    Potts, Lisa G.; Skinner, Margaret W.; Litovsky, Ruth A.; Strube, Michael J.; Kuk, Francis

    2010-01-01

    Background The use of bilateral amplification is now common clinical practice for hearing aid users but not for cochlear implant recipients. In the past, most cochlear implant recipients were implanted in one ear and wore only a monaural cochlear implant processor. There has been recent interest in benefits arising from bilateral stimulation that may be present for cochlear implant recipients. One option for bilateral stimulation is the use of a cochlear implant in one ear and a hearing aid in the opposite nonimplanted ear (bimodal hearing). Purpose This study evaluated the effect of wearing a cochlear implant in one ear and a digital hearing aid in the opposite ear on speech recognition and localization. Research Design A repeated-measures correlational study was completed. Study Sample Nineteen adult Cochlear Nucleus 24 implant recipients participated in the study. Intervention The participants were fit with a Widex Senso Vita 38 hearing aid to achieve maximum audibility and comfort within their dynamic range. Data Collection and Analysis Soundfield thresholds, loudness growth, speech recognition, localization, and subjective questionnaires were obtained six to eight weeks after the hearing aid fitting. Testing was completed in three conditions: hearing aid only, cochlear implant only, and cochlear implant and hearing aid (bimodal). All tests were repeated four weeks after the first test session. Repeated-measures analysis of variance was used to analyze the data. Significant effects were further examined using pairwise comparison of means or, in the case of continuous moderators, regression analyses. The speech-recognition and localization tasks were unique in that a speech stimulus presented from a variety of roaming azimuths (140 degree loudspeaker array) was used. Results Performance in the bimodal condition was significantly better for speech recognition and localization compared to the cochlear implant–only and hearing aid–only conditions. Performance also differed between these conditions when the location (i.e., the side of the loudspeaker array that presented the word) was analyzed. In the bimodal condition, performance on the speech-recognition and localization tasks was equal regardless of which side of the loudspeaker array presented the word, while performance was significantly poorer in the monaural conditions (hearing aid only and cochlear implant only) when the words were presented on the side with no stimulation. Binaural loudness summation of 1–3 dB was seen in soundfield thresholds and loudness growth in the bimodal condition. Measures of the audibility of sound with the hearing aid, including unaided thresholds, soundfield thresholds, and the Speech Intelligibility Index, were significant moderators of speech recognition and localization. Based on the questionnaire responses, participants showed a strong preference for bimodal stimulation. Conclusions These findings suggest that a well-fit digital hearing aid worn in conjunction with a cochlear implant is beneficial to speech recognition and localization. The dynamic test procedures used in this study illustrate the importance of bilateral hearing for locating, identifying, and switching attention between multiple speakers. It is recommended that unilateral cochlear implant recipients, with measurable unaided hearing thresholds, be fit with a hearing aid. PMID:19594084

  7. Needle segmentation using 3D Hough transform in 3D TRUS guided prostate transperineal therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qiu Wu; Imaging Research Laboratories, Robarts Research Institute, Western University, London, Ontario N6A 5K8; Yuchi Ming

    Purpose: Prostate adenocarcinoma is the most common noncutaneous malignancy in American men, with over 200 000 new cases diagnosed each year. Prostate interventional therapy, such as cryotherapy and brachytherapy, is an effective treatment for prostate cancer. Its success relies on the correct needle implant position. This paper proposes a robust and efficient needle segmentation method, which acts as an aid to localize the needle in three-dimensional (3D) transrectal ultrasound (TRUS) guided prostate therapy. Methods: The procedure of locating the needle in a 3D TRUS image is a three-step process. First, the original 3D ultrasound image containing a needle is cropped; the cropped image is then converted to a binary format based on its histogram. Second, a 3D Hough transform based needle segmentation method is applied to the 3D binary image in order to locate the needle axis. The position of the needle endpoint is finally determined by an optimal threshold based analysis of the intensity probability distribution. The overall efficiency is improved through implementing a coarse-fine searching strategy. The proposed method was validated in tissue-mimicking agar phantoms, chicken breast phantoms, and 3D TRUS patient images from prostate brachytherapy and cryotherapy procedures by comparison to manual segmentation. The robustness of the proposed approach was tested by means of varying parameters such as needle insertion angle, needle insertion length, binarization threshold level, and cropping size. Results: The validation results indicate that the proposed Hough transform based method is accurate and robust, with an achieved endpoint localization accuracy of 0.5 mm for agar phantom images, 0.7 mm for chicken breast phantom images, and 1 mm for in vivo patient cryotherapy and brachytherapy images. The mean execution time of the needle segmentation algorithm was 2 s for a 3D TRUS image with a size of 264 × 376 × 630 voxels. Conclusions: The proposed needle segmentation algorithm is accurate, robust, and suitable for 3D TRUS guided prostate transperineal therapy.

  8. Novel wavelet threshold denoising method in axle press-fit zone ultrasonic detection

    NASA Astrophysics Data System (ADS)

    Peng, Chaoyong; Gao, Xiaorong; Peng, Jianping; Wang, Ai

    2017-02-01

    Axles are an important part of railway locomotives and vehicles. Periodic ultrasonic inspection of axles can effectively detect and monitor axle fatigue cracks. However, in the axle press-fit zone, the complex interface contact condition reduces the signal-to-noise ratio (SNR), and therefore the probability of false positives and false negatives increases. In this work, a novel wavelet threshold function is created to remove noise and suppress press-fit interface echoes in axle ultrasonic defect detection. The novel wavelet threshold function with two variables is designed to ensure the precision of the optimum searching process. Based on the positive correlation between the correlation coefficient and SNR, and on the experimental observation that the defect echo and the press-fit interface echo have different axle-circumferential correlation characteristics, a discrete optimum search for the two undetermined variables in the novel wavelet threshold function is conducted. The performance of the proposed method is assessed by comparing it with traditional threshold methods using real data. The statistical results for the amplitude and the peak SNR of defect echoes show that the proposed wavelet threshold denoising method not only maintains the amplitude of defect echoes but also achieves a higher peak SNR.
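
    For orientation, a generic wavelet threshold denoiser using PyWavelets; it applies the standard soft threshold, whereas the paper replaces this with its custom two-variable threshold function whose parameters are found by the discrete optimum search.

    ```python
    import numpy as np
    import pywt

    def wavelet_denoise(signal, wavelet="db4", level=4):
        """Generic wavelet threshold denoising (soft threshold shown here;
        the paper designs a custom two-variable threshold function instead)."""
        coeffs = pywt.wavedec(signal, wavelet, level=level)
        # noise scale estimated from the finest detail coefficients (common choice)
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745
        t = sigma * np.sqrt(2 * np.log(len(signal)))      # universal threshold
        denoised = [coeffs[0]] + [pywt.threshold(c, t, mode="soft")
                                  for c in coeffs[1:]]
        return pywt.waverec(denoised, wavelet)[:len(signal)]
    ```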

  9. DNA barcoding for effective biodiversity assessment of a hyperdiverse arthropod group: the ants of Madagascar

    PubMed Central

    Smith, M. Alex; Fisher, Brian L.; Hebert, Paul D. N.

    2005-01-01

    The role of DNA barcoding as a tool to accelerate the inventory and analysis of diversity for hyperdiverse arthropods is tested using ants in Madagascar. We demonstrate how DNA barcoding helps address the failure of current inventory methods to rapidly respond to pressing biodiversity needs, specifically in the assessment of richness and turnover across landscapes with hyperdiverse taxa. In a comparison of inventories at four localities in northern Madagascar, patterns of richness were not significantly different when richness was determined using morphological taxonomy (morphospecies) or sequence divergence thresholds (Molecular Operational Taxonomic Unit(s); MOTU). However, sequence-based methods tended to yield greater richness and significantly lower indices of similarity than morphological taxonomy. MOTU determined using our molecular technique were a remarkably local phenomenon—indicative of highly restricted dispersal and/or long-term isolation. In cases where molecular and morphological methods differed in their assignment of individuals to categories, the morphological estimate was always more conservative than the molecular estimate. In those cases where morphospecies descriptions collapsed distinct molecular groups, sequence divergences of 16% (on average) were contained within the same morphospecies. Such high divergences highlight taxa for further detailed genetic, morphological, life history, and behavioral studies. PMID:16214741

  10. On the importance of FIB-SEM specific segmentation algorithms for porous media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Salzer, Martin, E-mail: martin.salzer@uni-ulm.de; Thiele, Simon, E-mail: simon.thiele@imtek.uni-freiburg.de; Zengerle, Roland, E-mail: zengerle@imtek.uni-freiburg.de

    2014-09-15

    A new algorithmic approach to the segmentation of highly porous three dimensional image data gained by focused ion beam tomography is described, which extends the key principle of local threshold backpropagation described in Salzer et al. (2012). The technique of focused ion beam tomography has shown to be capable of imaging the microstructure of functional materials. In order to perform a quantitative analysis on the corresponding microstructure, a segmentation task needs to be performed. However, algorithmic segmentation of images obtained with focused ion beam tomography is a challenging problem for highly porous materials if filling the pore phase, e.g. with epoxy resin, is difficult. The gray intensities of individual voxels are not sufficient to determine the phase represented by them, and usual thresholding methods are not applicable. We thus propose a new approach to segmentation that pays respect to the specifics of the imaging process of focused ion beam tomography. As an application of our approach, the segmentation of three dimensional images for a cathode material used in polymer electrolyte membrane fuel cells is discussed. We show that our approach preserves significantly more of the original nanostructure than a thresholding approach. - Highlights: • We describe a new approach to the segmentation of FIB-SEM images of porous media. • The first and last occurrences of structures are detected by analysing the z-profiles. • The algorithm is validated by comparing it to a manual segmentation. • The new approach shows significantly fewer artifacts than a thresholding approach. • A structural analysis also shows improved results for the obtained microstructure.

  11. On a new scenario for the saturation of the low-threshold two-plasmon parametric decay instability of an extraordinary wave in the inhomogeneous plasma of magnetic traps

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gusakov, E. Z., E-mail: Evgeniy.Gusakov@mail.ioffe.ru; Popov, A. Yu., E-mail: a.popov@mail.ioffe.ru; Irzak, M. A., E-mail: irzak@mail.ioffe.ru

    The most probable scenario for the saturation of the low-threshold two-plasmon parametric decay instability of an electron cyclotron extraordinary wave has been analyzed. Within this scenario, two upper-hybrid plasmons at frequencies close to half the pump wave frequency, radially trapped in the vicinity of the local maximum of the plasma density profile, are excited by the primary instability. The primary instability saturation results from the decays of the daughter upper-hybrid waves into secondary upper-hybrid waves, which are also radially trapped in the vicinity of the local maximum of the plasma density profile, and ion Bernstein waves.

  12. Regional rainfall thresholds for landslide occurrence using a centenary database

    NASA Astrophysics Data System (ADS)

    Vaz, Teresa; Luís Zêzere, José; Pereira, Susana; Cruz Oliveira, Sérgio; Garcia, Ricardo A. C.; Quaresma, Ivânia

    2018-04-01

    This work proposes a comprehensive method to assess rainfall thresholds for landslide initiation using a centenary landslide database associated with a single centenary daily rainfall data set. The method is applied to the Lisbon region and includes the rainfall return period analysis that was used to identify the critical rainfall combination (cumulated rainfall duration) related to each landslide event. The spatial representativeness of the reference rain gauge is evaluated and the rainfall thresholds are assessed and calibrated using the receiver operating characteristic (ROC) metrics. Results show that landslide events located up to 10 km from the rain gauge can be used to calculate the rainfall thresholds in the study area; however, these thresholds may be used with acceptable confidence up to 50 km from the rain gauge. The rainfall thresholds obtained using linear and potential regression perform well in ROC metrics. However, the intermediate thresholds based on the probability of landslide events established in the zone between the lower-limit threshold and the upper-limit threshold are much more informative as they indicate the probability of landslide event occurrence given rainfall exceeding the threshold. This information can be easily included in landslide early warning systems, especially when combined with the probability of rainfall above each threshold.

  13. Effect of wave localization on plasma instabilities

    NASA Astrophysics Data System (ADS)

    Levedahl, William Kirk

    1987-10-01

    The Anderson model of wave localization in random media is invoked to study the effect of solar wind density turbulence on plasma processes associated with the solar type III radio burst. ISEE-3 satellite data indicate that a possible model for the type III process is the parametric decay of Langmuir waves excited by solar flare electron streams into daughter electromagnetic and ion acoustic waves. The threshold for this instability, however, is much higher than observed Langmuir wave levels, because of rapid convection of the transverse electromagnetic daughter wave in the case where the solar wind is assumed homogeneous. Langmuir and transverse waves near the critical density satisfy the Ioffe-Regel criterion for wave localization in the solar wind with observed density fluctuations of about 1 percent. Numerical simulations of wave propagation in random media confirm the localization length predictions of Escande and Souillard for stationary density fluctuations. For mobile density fluctuations, localized wave packets spread at the propagation velocity of the density fluctuations rather than the group velocity of the waves. Computer simulations using a linearized hybrid code show that an electron beam will excite localized Langmuir waves in a plasma with density turbulence. An action principle approach is used to develop a theory of non-linear wave processes when waves are localized. A theory of resonant particle diffusion by localized waves is developed to explain the saturation of the beam-plasma instability. It is argued that localization of electromagnetic waves will allow the instability threshold to be exceeded for the parametric decay discussed above.

  14. Genetic variance of tolerance and the toxicant threshold model.

    PubMed

    Tanaka, Yoshinari; Mano, Hiroyuki; Tatsuta, Haruki

    2012-04-01

    A statistical genetics method is presented for estimating the genetic variance (heritability) of tolerance to pollutants on the basis of a standard acute toxicity test conducted on several isofemale lines of cladoceran species. To analyze the genetic variance of tolerance in the case when the response is measured as a few discrete states (quantal endpoints), the authors attempted to apply the threshold character model in quantitative genetics to the threshold model separately developed in ecotoxicology. The integrated threshold model (toxicant threshold model) assumes that the response of a particular individual occurs at a threshold toxicant concentration and that the individual tolerance characterized by the individual's threshold value is determined by genetic and environmental factors. As a case study, the heritability of tolerance to p-nonylphenol in the cladoceran species Daphnia galeata was estimated by using the maximum likelihood method and nested analysis of variance (ANOVA). Broad-sense heritability was estimated to be 0.199 ± 0.112 by the maximum likelihood method and 0.184 ± 0.089 by ANOVA; both results implied that the species examined had the potential to acquire tolerance to this substance by evolutionary change. Copyright © 2012 SETAC.

  15. Localization in Multiple Source Environments: Localizing the Missing Source

    DTIC Science & Technology

    2007-02-01

    volunteer listeners (3 males and 3 females, 19-24 years of age), participated in the experiment. All had normal hearing (audiometric thresholds < 15... were routed from a control computer to a Mark of the Unicorn digital-to-analog converter (MOTU 24 I/O), then through a bank of amplifiers (Crown Model

  16. Localization and Spreading of Diseases in Complex Networks

    NASA Astrophysics Data System (ADS)

    Goltsev, A. V.; Dorogovtsev, S. N.; Oliveira, J. G.; Mendes, J. F. F.

    2012-09-01

    Using the susceptible-infected-susceptible model on unweighted and weighted networks, we consider the disease localization phenomenon. In contrast to the well-recognized point of view that diseases infect a finite fraction of vertices right above the epidemic threshold, we show that diseases can be localized on a finite number of vertices, where hubs and edges with large weights are centers of localization. Our results follow from the analysis of standard models of networks and empirical data for real-world networks.
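
    These quantities are straightforward to compute for a given network. A sketch under standard quenched mean-field assumptions (not necessarily the paper's exact formulation): the SIS epidemic threshold is approximated by the inverse of the leading adjacency eigenvalue, and localization is diagnosed via the inverse participation ratio (IPR) of the principal eigenvector, which stays finite as the network grows when activity is localized.

    ```python
    import numpy as np

    def sis_threshold_and_ipr(adj):
        """Epidemic threshold ~ 1 / lambda_max and IPR of the principal
        eigenvector, for a symmetric (undirected) adjacency matrix."""
        vals, vecs = np.linalg.eigh(adj)   # eigenvalues in ascending order
        v = vecs[:, -1]                    # principal eigenvector
        ipr = np.sum(v ** 4) / np.sum(v ** 2) ** 2
        return 1.0 / vals[-1], ipr
    ```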

  17. Thermal bistability-based method for real-time optimization of ultralow-threshold whispering gallery mode microlasers.

    PubMed

    Lin, Guoping; Candela, Y; Tillement, O; Cai, Zhiping; Lefèvre-Seguin, V; Hare, J

    2012-12-15

    A method based on thermal bistability for ultralow-threshold microlaser optimization is demonstrated. When sweeping the pump laser frequency across a pump resonance, the dynamic thermal bistability slows down the power variation. The resulting line shape modification enables a real-time monitoring of the laser characteristic. We demonstrate this method for a functionalized microsphere exhibiting a submicrowatt laser threshold. This approach is confirmed by comparing the results with a step-by-step recording in quasi-static thermal conditions.

  18. Local immunization program for susceptible-infected-recovered network epidemic model

    NASA Astrophysics Data System (ADS)

    Wu, Qingchu; Lou, Yijun

    2016-02-01

    Immunization strategies through contact tracing on the susceptible-infected-recovered framework in social networks are modelled to evaluate the cost-effectiveness of information-based vaccination programs, with particular focus on the scenario where only individuals belonging to a specific set can get vaccinated due to vaccine shortages and other economic or humanitarian constraints. By using the block heterogeneous mean-field approach, a series of discrete-time dynamical models is formulated, and the condition for epidemic outbreaks is established, which is shown to depend not only on the network structure but also on the immunization control parameters. Results show that increasing the immunization strength can effectively raise the epidemic threshold, which is different from the predictions obtained through the susceptible-infected-susceptible network framework, where the epidemic threshold is independent of the vaccination strength. Furthermore, a significant decrease in vaccine use to control the infectious disease is observed for the local vaccination strategy, which shows the promising applications of local immunization programs to disease control, while calling for accurate local information during the process of a disease outbreak.

  19. Adaptive spline autoregression threshold method in forecasting Mitsubishi car sales volume at PT Srikandi Diamond Motors

    NASA Astrophysics Data System (ADS)

    Susanti, D.; Hartini, E.; Permana, A.

    2017-01-01

    Sales competition between companies in Indonesia is growing, so every company should have proper planning in order to win the competition with other companies. One of the things that can be done to design such a plan is to forecast car sales for the next few periods, so that the inventory of cars to be sold is proportional to the number of cars needed. To obtain a correct forecast, one of the methods that can be used is Adaptive Spline Threshold Autoregression (ASTAR). Therefore, this discussion focuses on the use of the ASTAR method in forecasting the volume of car sales at PT Srikandi Diamond Motors using time series data. In this research, forecasting with the ASTAR method produced approximately correct results.

  20. Local curvature entropy-based 3D terrain representation using a comprehensive Quadtree

    NASA Astrophysics Data System (ADS)

    Chen, Qiyu; Liu, Gang; Ma, Xiaogang; Mariethoz, Gregoire; He, Zhenwen; Tian, Yiping; Weng, Zhengping

    2018-05-01

    Large scale 3D digital terrain modeling is a crucial part of many real-time applications in geoinformatics. In recent years, the improved speed and precision of spatial data collection have made the original terrain data more complex and bigger, which poses challenges for data management, visualization and analysis. In this work, we present an effective and comprehensive 3D terrain representation based on local curvature entropy and a dynamic Quadtree. Level-of-detail (LOD) models of significant terrain features were employed to generate hierarchical terrain surfaces. In order to reduce radical changes of grid density between adjacent LODs, the local entropy of terrain curvature was used as the criterion for subdividing terrain grid cells. Then, an efficient approach was presented to eliminate the cracks among different LODs by directly updating the Quadtree, owing to an edge-based structure proposed in this work. Furthermore, we utilized a threshold of local entropy stored in each parent node of this Quadtree to flexibly control the depth of the Quadtree and dynamically schedule large-scale LOD terrain. Several experiments were implemented to test the performance of the proposed method. The results demonstrate that our method can be applied to construct LOD 3D terrain models with good performance in terms of computational cost and the maintenance of terrain features. Our method has already been deployed in a geographic information system (GIS) for practical use, and it is able to support the real-time dynamic scheduling of large scale terrain models easily and efficiently.
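
    A toy sketch of the subdivision criterion: the entropy of the curvature histogram inside a cell decides whether the cell splits. The bin count and entropy threshold are assumed values, and the paper's edge-based crack handling is omitted.

    ```python
    import numpy as np

    def curvature_entropy(curv, bins=16):
        """Shannon entropy of the curvature values inside one grid cell."""
        hist, _ = np.histogram(curv, bins=bins)
        p = hist / hist.sum()
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    def subdivide(curv, depth=0, max_depth=8, h_max=2.0):
        """Recursively split a (square) cell while its curvature entropy
        exceeds h_max, the threshold stored per node in the Quadtree."""
        n = curv.shape[0]
        if depth >= max_depth or n < 4 or curvature_entropy(curv) <= h_max:
            return {"leaf": True, "depth": depth}
        h = n // 2
        kids = [curv[:h, :h], curv[:h, h:], curv[h:, :h], curv[h:, h:]]
        return {"leaf": False,
                "children": [subdivide(k, depth + 1, max_depth, h_max)
                             for k in kids]}
    ```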

  1. Edge Evaluation Using Local Edge Coherence

    DTIC Science & Technology

    1980-12-01

    response within each region. (The operators discussed below also compute an estimate of the direction of brightness change.) In the next step, the edges... worth remarking on is that Abdou and Pratt vary the relative strength of signal to noise by holding the contrast constant and changing the standard... threshold level on the basis of the busyness of the resulting thresholded image.) In applications where edge extraction is an important part of the processing

  2. A Room Temperature Low-Threshold Ultraviolet Plasmonic Nanolaser

    DTIC Science & Technology

    2014-09-23

    Here we demonstrate the first strong room temperature ultraviolet (~370 nm) SP polariton laser with an extremely low threshold (~3.5 MW cm−2). We find... localized surface plasmon and propagating surface plasmon polariton (SPP), has been demonstrated in metal nanosphere cavities6, metal-cladding... Quantum plasmonics. Nat. Phys. 9, 329–340 (2013). 4. Berini, P. & De Leon, I. Surface plasmon-polariton amplifiers and lasers. Nat. Photon. 6, 16–24 (2012

  3. Exotic Effects at the Charm Threshold and Other Novel Physics Topics at JLab-12 GeV

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brodsky, Stanley J.; /SLAC

    I briefly survey a number of novel hadron physics topics which can be investigated with the 12 GeV upgrade at JLab. The topics include the formation of new exotic heavy-quark resonances accessible above the charm threshold, intrinsic charm and strangeness phenomena, the exclusive Sivers effect, hidden-color Fock states of nuclei, local two-photon interactions in deeply virtual Compton scattering, and non-universal antishadowing.

  4. In-air hearing of a diving duck: A comparison of psychoacoustic and auditory brainstem response thresholds

    USGS Publications Warehouse

    Crowell, Sara E.; Wells-Berlin, Alicia M.; Therrien, Ronald E.; Yannuzzi, Sally E.; Carr, Catherine E.

    2016-01-01

    Auditory sensitivity was measured in a species of diving duck that is not often kept in captivity, the lesser scaup. Behavioral (psychoacoustics) and electrophysiological [the auditory brainstem response (ABR)] methods were used to measure in-air auditory sensitivity, and the resulting audiograms were compared. Both approaches yielded audiograms with similar U-shapes and regions of greatest sensitivity (2000−3000 Hz). However, ABR thresholds were higher than psychoacoustic thresholds at all frequencies. This difference was least at the highest frequency tested using both methods (5700 Hz) and greatest at 1000 Hz, where the ABR threshold was 26.8 dB higher than the behavioral measure of threshold. This difference is commonly reported in studies involving many different species. These results highlight the usefulness of each method, depending on the testing conditions and availability of the animals.

  5. In-air hearing of a diving duck: A comparison of psychoacoustic and auditory brainstem response thresholds.

    PubMed

    Crowell, Sara E; Wells-Berlin, Alicia M; Therrien, Ronald E; Yannuzzi, Sally E; Carr, Catherine E

    2016-05-01

    Auditory sensitivity was measured in a species of diving duck that is not often kept in captivity, the lesser scaup. Behavioral (psychoacoustics) and electrophysiological [the auditory brainstem response (ABR)] methods were used to measure in-air auditory sensitivity, and the resulting audiograms were compared. Both approaches yielded audiograms with similar U-shapes and regions of greatest sensitivity (2000-3000 Hz). However, ABR thresholds were higher than psychoacoustic thresholds at all frequencies. This difference was least at the highest frequency tested using both methods (5700 Hz) and greatest at 1000 Hz, where the ABR threshold was 26.8 dB higher than the behavioral measure of threshold. This difference is commonly reported in studies involving many different species. These results highlight the usefulness of each method, depending on the testing conditions and availability of the animals.

  6. A low dose of three local anesthetic solutions for interscalene blockade tested by thermal quantitative sensory testing: a randomized controlled trial.

    PubMed

    Sermeus, Luc A; Schepens, Tom; Hans, Guy H; Morrison, Stuart G; Wouters, Kristien; Breebaart, Margaretha B; Smitz, Carine J; Vercauteren, Marcel P

    2018-05-03

    This randomized double-blind controlled trial compared the block characteristics of three low-dose local anesthetics at different roots in an ultrasound-guided interscalene block, using thermal quantitative sensory testing to assess the functioning of cutaneous small nerve fibres. A total of 37 adults scheduled to undergo shoulder arthroscopy were randomized to receive 5 mL of either 0.5% levobupivacaine with or without epinephrine 1/200,000, or 0.75% ropivacaine, in a single-shot interscalene block. Thermal quantitative sensory testing was performed in the C4, C5, C6 and C7 dermatomes. Detection thresholds for cold/warm sensation and cold/heat pain were measured before and at 30 min, 6, 10 and 24 h after infiltration around C5. The need for rescue medication was recorded. No significant differences between groups were found for any sensation (lowest P = 0.28). At 6 h, the largest differences in sensory thresholds were observed for the C5 dermatome. The increases in thresholds were smaller in C4 and C6 and minimal in C7 for all sensations. The analgesic effect lasted longest in C5 (time × location mixed model P < 0.001 for all sensory tests). The time to rescue analgesia was significantly shorter with 0.75% ropivacaine (P = 0.02). The quantitative sensory findings showed no difference in intensity between the local anesthetics tested. A decrease in block intensity, with minimal changes in pain detection thresholds, was observed in the roots adjacent to C5, with the lowest block intensity in C7. A clinically relevant shorter duration was found with 0.75% ropivacaine compared to the other groups. Trial registration NCT 02691442.

  7. THRESHOLD LOGIC.

    DTIC Science & Technology

    synthesis procedures; a 'best' method is definitely established. (2) 'Symmetry Types for Threshold Logic' is a tutorial exposition including a careful development of the Goto-Takahasi self-dual type ideas. (3) 'Best Threshold Gate Decisions' reports a comparison, on the 2470 7-argument threshold ... interpretation is shown best. (4) 'Threshold Gate Networks' reviews the previously discussed 2-algorithm in geometric terms, describes our FORTRAN

  8. Geospatial Association between Low Birth Weight and Arsenic in Groundwater in New Hampshire, USA

    PubMed Central

    Shi, Xun; Ayotte, Joseph D.; Onda, Akikazu; Miller, Stephanie; Rees, Judy; Gilbert-Diamond, Diane; Onega, Tracy; Gui, Jiang; Karagas, Margaret; Moeschler, John

    2015-01-01

    Background There is increasing evidence of the role of arsenic in the etiology of adverse human reproductive outcomes. Since drinking water can be a major source of arsenic to pregnant women, the effect of arsenic exposure through drinking water on human birth may be revealed by a geospatial association between arsenic concentration in groundwater and birth problems, particularly in a region where private wells substantially account for water supply, like New Hampshire, US. Methods We calculated town-level rates of preterm birth and term low birth weight (term LBW) for New Hampshire, using data for 1997-2009 and stratified by maternal age. We smoothed the rates using a locally-weighted averaging method to increase the statistical stability. The town-level groundwater arsenic values are from three GIS data layers generated by the US Geological Survey: probability of local groundwater arsenic concentration > 1 μg/L, probability > 5 μg/L, and probability > 10 μg/L. We calculated Pearson's correlation coefficients (r) between the reproductive outcomes (preterm birth and term LBW) and the arsenic values, at both state and county levels. Results For preterm birth, younger mothers (maternal age < 20) have a statewide r = 0.70 between the rates smoothed with a threshold = 2,000 births and the town mean arsenic level based on the data of probability > 10 μg/L. For older mothers, r = 0.19 when the smoothing threshold = 3,500. A majority of county-level r values are positive based on the arsenic data of probability > 10 μg/L. For term LBW, younger mothers (maternal age < 25) have a statewide r = 0.44 between the rates smoothed with a threshold = 3,500 and the town minimum arsenic level based on the data of probability > 1 μg/L. For older mothers, r = 0.14 when the rates are smoothed with a threshold = 1,000 births and also adjusted by town median household income in 1999, and the arsenic values are the town minimum based on probability > 10 μg/L. At the county level, positive r values prevail for younger mothers, but for older mothers it is a mix. For both birth problems, the several most populous counties (with 60-80% of the state's population, clustering at the southwest corner of the state) are largely consistent in having a positive r across different smoothing thresholds. Conclusion We found evident spatial associations between the two adverse human reproductive outcomes and groundwater arsenic in New Hampshire, US. However, the degree of the associations and their sensitivity to different representations of arsenic level are variable. Generally, preterm birth has a stronger spatial association with groundwater arsenic than term LBW, suggesting an inconsistency in the impact of arsenic on the two reproductive outcomes. For both outcomes, younger maternal age shows stronger spatial associations with groundwater arsenic. PMID:25326895

  9. Line Segmentation of 2d Laser Scanner Point Clouds for Indoor Slam Based on a Range of Residuals

    NASA Astrophysics Data System (ADS)

    Peter, M.; Jafri, S. R. U. N.; Vosselman, G.

    2017-09-01

    Indoor mobile laser scanning (IMLS) based on the Simultaneous Localization and Mapping (SLAM) principle proves to be the preferred method to acquire data of indoor environments at a large scale. In previous work, we proposed a backpack IMLS system containing three 2D laser scanners and a corresponding SLAM approach. The feature-based SLAM approach solves all six degrees of freedom simultaneously and builds on the association of lines to planes. Because of the iterative character of the SLAM process, the quality and reliability of the segmentation of linear segments in the scanlines play a crucial role in the quality of the derived poses and consequently the point clouds. The orientations of the lines resulting from the segmentation can be influenced negatively by narrow objects which are nearly coplanar with walls (e.g. doors), which will cause the line to be tilted if those objects are not detected as separate segments. State-of-the-art methods from the robotics domain like Iterative End Point Fit and Line Tracking were found not to handle such situations well. Thus, we describe a novel segmentation method based on the comparison of a range of residuals to a range of thresholds. For the definition of the thresholds we employ the fact that the expected value for the average of the residuals of n points with respect to the line is σ/√n. Our method, as shown by the experiments and the comparison to other methods, is able to deliver more accurate results than the two approaches it was tested against.
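
    The σ/√n argument translates directly into a split test. The following sketch, with an assumed noise level and assumed residual-window sizes (not the authors' exact algorithm), grows a line point by point and splits wherever an averaged residual range exceeds its scaled threshold.

    ```python
    import numpy as np

    def split_by_residual_ranges(points, sigma=0.01, n_sigma=3.0, min_pts=10):
        """Segment a 2D scanline (array of shape (N, 2)) by comparing a range
        of averaged residuals to sigma/sqrt(n)-scaled thresholds."""
        breaks, start, end = [], 0, min_pts
        while end < len(points):
            seg = points[start:end]
            c = seg.mean(axis=0)
            _, _, vt = np.linalg.svd(seg - c)    # total least squares line fit
            normal = vt[-1]                      # unit normal of best-fit line
            res = (points[start:end + 1] - c) @ normal  # include candidate point
            split = False
            for n in (1, 3, 5):                  # a range of residual averages
                if len(res) >= n and abs(res[-n:].mean()) > n_sigma * sigma / np.sqrt(n):
                    split = True
                    break
            if split:
                breaks.append(end)
                start, end = end, end + min_pts
            else:
                end += 1
        return breaks
    ```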

  10. Semi-automated segmentation of solid and GGO nodules in lung CT images using vessel-likelihood derived from local foreground structure

    NASA Astrophysics Data System (ADS)

    Yaguchi, Atsushi; Okazaki, Tomoya; Takeguchi, Tomoyuki; Matsumoto, Sumiaki; Ohno, Yoshiharu; Aoyagi, Kota; Yamagata, Hitoshi

    2015-03-01

    Reflecting global interest in lung cancer screening, considerable attention has been paid to automatic segmentation and volumetric measurement of lung nodules on CT. Ground glass opacity (GGO) nodules deserve special consideration in this context, since it has been reported that they are more likely to be malignant than solid nodules. However, due to the relatively low contrast and indistinct boundaries of GGO nodules, segmentation is more difficult for GGO nodules than for solid nodules. To overcome this difficulty, we propose a method for accurately segmenting not only solid nodules but also GGO nodules without prior information about the nodule type. First, the histogram of CT values in pre-extracted lung regions is modeled by a Gaussian mixture model and a threshold value for including high-attenuation regions is computed. Second, after setting up a region of interest around the nodule seed point, foreground regions are extracted by using the threshold and quick-shift-based mode seeking. Finally, for separating vessels from the nodule, a vessel-likelihood map derived from the elongatedness of foreground regions is computed, and a region growing scheme starting from the seed point is applied to the map with the aid of the fast marching method. Experimental results using an anthropomorphic chest phantom showed that our method yielded generally lower volumetric measurement errors for both solid and GGO nodules compared with other methods reported in preceding studies conducted using similar technical settings. Also, our method allowed reasonable segmentation of GGO nodules in low-dose images and could be applied to clinical CT images including part-solid nodules.
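
    The first step can be sketched with a two-component Gaussian mixture over the lung CT values; placing the threshold at the midpoint between the two component means is an assumed rule for illustration, not necessarily the authors' exact choice.

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    def high_attenuation_threshold(lung_hu):
        """Fit a 2-component GMM to CT values (HU) in the pre-extracted lung
        region and place the threshold between the two component means."""
        gmm = GaussianMixture(n_components=2, random_state=0)
        gmm.fit(np.asarray(lung_hu, dtype=float).reshape(-1, 1))
        means = np.sort(gmm.means_.ravel())
        return 0.5 * (means[0] + means[1])
    ```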

  11. Mapping gullies, dunes, lava fields, and landslides via surface roughness

    NASA Astrophysics Data System (ADS)

    Korzeniowska, Karolina; Pfeifer, Norbert; Landtwing, Stephan

    2018-01-01

    Gully erosion is a widespread and significant process involved in soil and land degradation. Mapping gullies helps to quantify past, and anticipate future, soil losses. Digital terrain models offer promising data for automatically detecting and mapping gullies, especially in vegetated areas, although methods vary widely; measures of local terrain roughness are the most varied and debated among these methods. Studies rarely test the performance of roughness metrics for mapping gullies, which limits their applicability to small training areas. To this end, we systematically explored how local terrain roughness derived from high-resolution Light Detection And Ranging (LiDAR) data can aid in the unsupervised detection of gullies over a large area. We also tested expanding this method to other landforms diagnostic of similarly abrupt land-surface changes, including lava fields, dunes, and landslides, as well as investigating the influence of different roughness thresholds, kernel resolutions, and input data resolutions, and comparing our method with previously published roughness algorithms. Our results show that total curvature is a suitable metric for recognising the analysed gullies and lava fields from LiDAR data, with success comparable to that of more sophisticated roughness metrics. The tested dunes and landslides remain difficult to distinguish from the surrounding landscape, partly because they are not easily defined in terms of their topographic signature.

  12. Fast vessel segmentation in retinal images using multi-scale enhancement and second-order local entropy

    NASA Astrophysics Data System (ADS)

    Yu, H.; Barriga, S.; Agurto, C.; Zamora, G.; Bauman, W.; Soliz, P.

    2012-03-01

    Retinal vasculature is one of the most important anatomical structures in digital retinal photographs. Accurate segmentation of retinal blood vessels is an essential task in automated analysis of retinopathy. This paper presents a new and effective vessel segmentation algorithm that features computational simplicity and fast implementation. The method uses morphological pre-processing to decrease the disturbance of bright structures and lesions before vessel extraction. Next, a vessel probability map is generated by computing the eigenvalues of the second derivatives of the Gaussian-filtered image at multiple scales. Then, second-order local entropy thresholding is applied to segment the vessel map. Lastly, a rule-based decision step, which measures the geometric shape difference between vessels and lesions, is applied to reduce false positives. The algorithm is evaluated on the low-resolution DRIVE and STARE databases and on the publicly available high-resolution image database from Friedrich-Alexander University Erlangen-Nuremberg, Germany. The proposed method achieved performance comparable to state-of-the-art unsupervised vessel segmentation methods, with a competitively faster speed, on the DRIVE and STARE databases. For the high-resolution fundus image database, the proposed algorithm outperforms an existing approach in both performance and speed. The efficiency and robustness make the blood vessel segmentation method described here suitable for broad application in automated analysis of retinal images.
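
    A compressed sketch of the middle steps: scikit-image's Frangi filter (a standard Hessian-eigenvalue vesselness measure) stands in for the multi-scale enhancement, and Otsu's threshold is substituted for the paper's second-order local entropy thresholding for brevity.

    ```python
    from skimage.filters import frangi, threshold_otsu

    def vessel_mask(green_channel):
        """Vessel probability map from multi-scale Hessian eigenvalues,
        followed by a global threshold (entropy thresholding in the paper)."""
        v = frangi(green_channel, sigmas=range(1, 6))
        return v > threshold_otsu(v)
    ```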

  13. Multistage Electrotherapy Delivered Through Chronically-Implanted Leads Terminates Atrial Fibrillation With Lower Energy Than a Single Biphasic Shock

    PubMed Central

    Janardhan, Ajit H.; Gutbrod, Sarah R.; Li, Wenwen; Lang, Di; Schuessler, Richard B.; Efimov, Igor R.

    2014-01-01

    Objectives The goal of this study was to develop a low-energy, implantable device–based multistage electrotherapy (MSE) to terminate atrial fibrillation (AF). Background Previous attempts to perform cardioversion of AF by using an implantable device were limited by the pain caused by use of a high-energy single biphasic shock (BPS). Methods Transvenous leads were implanted into the right atrium (RA), coronary sinus, and left pulmonary artery of 14 dogs. Self-sustaining AF was induced by 6 ± 2 weeks of high-rate RA pacing. Atrial defibrillation thresholds of standard versus experimental electrotherapies were measured in vivo and studied by using optical imaging in vitro. Results The mean AF cycle length (CL) in vivo was 112 ± 21 ms (534 beats/min). The impedances of the RA–left pulmonary artery and RA–coronary sinus shock vectors were similar (121 ± 11 Ω vs. 126 ± 9 Ω; p = 0.27). BPS required 1.48 ± 0.91 J (165 ± 34 V) to terminate AF. In contrast, MSE terminated AF with significantly less energy (0.16 ± 0.16 J; p < 0.001) and significantly lower peak voltage (31.1 ± 19.3 V; p < 0.001). In vitro optical imaging studies found that AF was maintained by localized foci originating from pulmonary vein–left atrium interfaces. MSE Stage 1 shocks temporarily disrupted localized foci; MSE Stage 2 entrainment shocks continued to silence the localized foci driving AF; and MSE Stage 3 pacing stimuli enabled consistent RA–left atrium activation until sinus rhythm was restored. Conclusions Low-energy MSE significantly reduced the atrial defibrillation thresholds compared with BPS in a canine model of AF. MSE may enable painless, device-based AF therapy. PMID:24076284

  14. Changes In The Heating Degree-days In Norway Due Toglobal Warming

    NASA Astrophysics Data System (ADS)

    Skaugen, T. E.; Tveito, O. E.; Hanssen-Bauer, I.

    A continuous spatial representation of temperature improves the possibility to produce maps of temperature-dependent variables. A temperature scenario for the period 2021-2050 is obtained for Norway from the Max-Planck-Institute AOGCM, the GSDIO integration of ECHAM4/OPYC3. This is done by an "empirical downscaling method", which involves the use of empirical links between large-scale fields and local variables to deduce estimates of the local variables. The analysis is carried out at forty-six sites in Norway. A spatial representation of the temperature anomalies in the scenario period compared to the normal period (1961-1990) is obtained with the use of spatial interpolation in a GIS. The temperature scenario indicates that we will have a warmer climate in Norway in the future, especially during the winter season. The heating degree-days (HDD) are defined as the accumulated Celsius degrees between the daily mean temperature and a threshold temperature. For Scandinavian countries, this threshold temperature is 17 degrees Celsius. The HDD is found to be a good estimate of accumulated cold. It is therefore a useful index for heating energy consumption within the heating season, and thus for power production planning. As a consequence of the increasing temperatures, the length of the heating season and the HDD within this season will decrease in Norway in the future. The heating season and the HDD are calculated at grid level with the use of a GIS. The spatial representation of the heating season and the HDD can then easily be plotted. Local information on the variables being analysed can be extracted from the spatial grid in a GIS, and the variables are prepared for further spatial analysis. It may also be used as an input to decision-making systems.
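
    The HDD computation itself is a one-liner; a sketch using the 17 degrees Celsius Scandinavian base temperature quoted above.

    ```python
    import numpy as np

    def heating_degree_days(daily_mean_temp_c, base=17.0):
        """Accumulate degrees below the base temperature; days warmer than
        the base contribute nothing."""
        t = np.asarray(daily_mean_temp_c, dtype=float)
        return np.sum(np.clip(base - t, 0.0, None))

    # e.g. a week averaging 5 degrees contributes 7 * (17 - 5) = 84 degree-days
    ```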

  15. Stabilizing effect of helical current drive on tearing modes

    NASA Astrophysics Data System (ADS)

    Yuan, Y.; Lu, X. Q.; Dong, J. Q.; Gong, X. Y.; Zhang, R. B.

    2018-01-01

    The effect of helical driven current on the m = 2/n = 1 tearing mode is studied numerically in a cylindrical geometry using the method of reduced magneto-hydro-dynamic simulation. The results show that a persistent local helical current drive applied from the beginning can be used to control the tearing modes, but it will cause a rebound effect called flip instability when the driven current reaches a certain value. The current intensity threshold for the occurrence of the flip instability is about 0.00087 I0. A comparatively economical method of controlling the development of the tearing mode is given: if the local helical driven current is applied discontinuously, the magnetic island can be controlled within a certain range and the tearing modes stop growing; thus, the flip instability can be avoided. We also find that the flip instability develops more readily when injection of the driven current is delayed, because high-order harmonics have already developed at the original O-point. The tearing mode instability can thus be controlled by using the electron cyclotron current drive to reduce the gradient of the current intensity on the rational surfaces.

  16. A synergetic combination of small and large neighborhood schemes in developing an effective procedure for solving the job shop scheduling problem.

    PubMed

    Amirghasemi, Mehrdad; Zamani, Reza

    2014-01-01

    This paper presents an effective procedure for solving the job shop problem. Synergistically combining small and large neighborhood schemes, the procedure consists of four components, namely (i) a construction method for generating semi-active schedules by a forward-backward mechanism, (ii) a local search for manipulating a small neighborhood structure guided by a tabu list, (iii) a feedback-based mechanism for perturbing the solutions generated, and (iv) a very large-neighborhood local search guided by a forward-backward shifting bottleneck method. The combination of the shifting bottleneck mechanism and the tabu list is used as a means of manipulating neighborhood structures, and the perturbation mechanism employed diversifies the search. A feedback mechanism, called repeat-check, detects consecutive repeats and ignites a perturbation when the total number of consecutive repeats for two identical makespan values reaches a given threshold, as sketched below. The results of extensive computational experiments on the benchmark instances indicate that the combination of these four components is synergetic, in the sense that they collectively make the procedure fast and robust.
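
    A sketch of the repeat-check trigger; the threshold value and the exact bookkeeping of "two identical makespan values" are assumptions, since the abstract states the mechanism only in outline.

    ```python
    def repeat_check(makespans, threshold=20):
        """Return True once the same pair of makespan values has repeated
        'threshold' times in a row; the caller then perturbs the solution."""
        count, last_pair = 0, None
        for i in range(1, len(makespans)):
            pair = tuple(sorted((makespans[i - 1], makespans[i])))
            if pair == last_pair:
                count += 1
                if count >= threshold:
                    return True
            else:
                last_pair, count = pair, 1
        return False
    ```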

  17. Change Detection via Selective Guided Contrasting Filters

    NASA Astrophysics Data System (ADS)

    Vizilter, Y. V.; Rubis, A. Y.; Zheltov, S. Y.

    2017-05-01

    A change detection scheme based on guided contrasting was previously proposed. A guided contrasting filter takes two images (test and sample) as input and forms the output as a filtered version of the test image. Such a filter preserves the similar details and smooths the non-similar details of the test image with respect to the sample image. Due to this, the difference between the test image and its filtered version (the difference map) can serve as a basis for robust change detection. Guided contrasting is performed in two steps: at the first step, some smoothing operator (SO) is applied for the elimination of test image details; at the second step, all matched details are restored with a local contrast proportional to the value of some local similarity coefficient (LSC). The original guided contrasting filter was based on local average smoothing as the SO and local linear correlation as the LSC. In this paper we propose and implement a new set of selective guided contrasting filters based on different combinations of various SOs and thresholded LSCs. Linear average and Gaussian smoothing, nonlinear median filtering, and morphological opening and closing are considered as SOs. The local linear correlation coefficient, the morphological correlation coefficient (MCC), mutual information, the mean square MCC and geometrical correlation coefficients are applied as LSCs. Thresholding of the LSC allows operating with non-normalized LSCs and enhances the selective properties of guided contrasting filters: details are either totally recovered or not recovered at all after the smoothing. These different guided contrasting filters are tested as part of the previously proposed change detection pipeline, which contains the following stages: guided contrasting filtering on an image pyramid, calculation of the difference map, binarization, extraction of change proposals, and testing change proposals using the local MCC. Experiments on real and simulated image bases demonstrate the applicability of all the proposed selective guided contrasting filters. All implemented filters are robust to weak geometrical discrepancies between the compared images. Selective guided contrasting based on morphological opening/closing and thresholded morphological correlation demonstrates the best change detection results.
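
    One concrete member of the filter family, as a sketch: Gaussian smoothing as the SO and a windowed linear correlation as the thresholded LSC. The window sizes and the threshold are assumed values for illustration.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter, uniform_filter

    def guided_contrasting(test, sample, sigma=3.0, win=9, thresh=0.5):
        """Selective guided contrasting: smooth the test image, then restore
        local details only where the thresholded local correlation with the
        sample image is high (details recovered all-or-nothing)."""
        test = np.asarray(test, dtype=float)
        sample = np.asarray(sample, dtype=float)
        smooth = gaussian_filter(test, sigma)       # smoothing operator (SO)
        detail = test - smooth
        # windowed means and (co)variances for the local correlation (LSC)
        mt, ms = uniform_filter(test, win), uniform_filter(sample, win)
        cov = uniform_filter(test * sample, win) - mt * ms
        vt = uniform_filter(test ** 2, win) - mt ** 2
        vs = uniform_filter(sample ** 2, win) - ms ** 2
        lsc = cov / np.sqrt(np.clip(vt * vs, 1e-12, None))
        keep = (lsc > thresh).astype(float)         # thresholded LSC
        return smooth + keep * detail               # matched details restored
    ```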

  18. Hot spot detection, segmentation, and identification in PET images

    NASA Astrophysics Data System (ADS)

    Blaffert, Thomas; Meetz, Kirsten

    2006-03-01

    Positron Emission Tomography (PET) images provide functional or metabolic information from areas of high concentration of [18F]fluorodeoxyglucose (FDG) tracer, the "hot spots". These hot spots can easily be detected by eye, but the delineation and size determination required, e.g., for diagnosis and staging of cancer is a tedious task that demands automation. The approach to automated hot spot segmentation described in this paper comprises three steps: region of interest detection by the watershed transform, heart identification by an evaluation of scan lines, and the final segmentation of hot spot areas by a local threshold. The region of interest detection is the essential step, since it localizes the hot spot identification and the final segmentation. The heart identification is an example of how to differentiate between hot spots. Finally, we demonstrate the combination of PET and CT data. Our method is also applicable to other modalities such as SPECT.

  19. Role of weakest links and system-size scaling in multiscale modeling of stochastic plasticity

    NASA Astrophysics Data System (ADS)

    Ispánovity, Péter Dusán; Tüzes, Dániel; Szabó, Péter; Zaiser, Michael; Groma, István

    2017-02-01

    Plastic deformation of crystalline and amorphous matter often involves intermittent local strain burst events. To understand the physical background of the phenomenon, a minimal stochastic mesoscopic model was introduced, in which details of the microstructure evolution are statistically represented in terms of a fluctuating local yield threshold. In the present paper we propose a method for determining the corresponding yield stress distribution for the case of crystal plasticity from lower-scale discrete dislocation dynamics simulations, which we combine with weakest-link arguments. The success of the scale linking is demonstrated by comparing stress-strain curves obtained from the resulting mesoscopic model and the underlying discrete dislocation model in the microplastic regime. As shown by various scaling relations, the two models are statistically equivalent and behave identically in the thermodynamic limit. The proposed technique is expected to be applicable to different microstructures and also to amorphous materials.

  20. SNW 2000 Proceedings. Oxide Thickness Variation Induced Threshold Voltage Fluctuations in Decanano MOSFETs: a 3D Density Gradient Simulation Study

    NASA Technical Reports Server (NTRS)

    Asenov, Asen; Kaya, S.; Davies, J. H.; Saini, S.

    2000-01-01

    We use the density gradient (DG) simulation approach to study, in 3D, the effect of local oxide thickness fluctuations on the threshold voltage of decanano MOSFETs in a statistical manner. A description of the reconstruction procedure for the random 2D surfaces representing the 'atomistic' Si-SiO2 interface variations is presented. The procedure is based on power spectrum synthesis in the Fourier domain and can include either Gaussian or exponential spectra. The simulations show that threshold voltage variations induced by oxide thickness fluctuations become significant when the gate length of the devices becomes comparable to the correlation length of the fluctuations. The extent of quantum corrections in the simulations with respect to the classical case and the dependence of the threshold variations on the oxide thickness are examined.
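
    Power spectrum synthesis of this kind is straightforward to sketch: shape white Gaussian noise in the Fourier domain by the square root of the target power spectral density and transform back. The snippet below is a minimal illustration under assumed parameter names and PSD forms, not the authors' reconstruction code.

        import numpy as np

        def rough_interface(n, dx, rms, corr_len, spectrum="gaussian", seed=0):
            """Synthesize an n-by-n random surface by Fourier filtering of
            white noise; 'gaussian' or 'exponential' selects the assumed
            autocorrelation model (all names/defaults are illustrative)."""
            rng = np.random.default_rng(seed)
            k = 2 * np.pi * np.fft.fftfreq(n, dx)
            kx, ky = np.meshgrid(k, k, indexing="ij")
            k2 = kx ** 2 + ky ** 2
            if spectrum == "gaussian":
                psd = np.exp(-k2 * corr_len ** 2 / 4)        # Gaussian ACF
            else:
                psd = (1 + k2 * corr_len ** 2) ** (-1.5)     # exponential ACF
            noise = np.fft.fft2(rng.standard_normal((n, n)))
            surf = np.real(np.fft.ifft2(noise * np.sqrt(psd)))
            return surf * rms / surf.std()                   # enforce target rms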

  1. A threshold-based fixed predictor for JPEG-LS image compression

    NASA Astrophysics Data System (ADS)

    Deng, Lihua; Huang, Zhenghua; Yao, Shoukui

    2018-03-01

    In JPEG-LS, the fixed predictor based on the median edge detector (MED) detects only horizontal and vertical edges, and thus produces large prediction errors in the vicinity of diagonal edges. In this paper, we propose a threshold-based edge detection scheme for the fixed predictor. The proposed scheme can detect not only horizontal and vertical edges, but also diagonal edges. For certain thresholds, the proposed scheme reduces to other existing schemes, so it can also be regarded as an integration of these schemes. For a suitable threshold, the accuracy of horizontal and vertical edge detection is higher than that of the existing median edge detector in JPEG-LS. Thus, the proposed fixed predictor outperforms the existing JPEG-LS predictors for all images tested, while the complexity of the overall algorithm is maintained at a similar level.
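
    For reference, the standard MED predictor is a three-way rule over the causal neighbors. The sketch below shows MED together with one plausible threshold-based variant; the variant's rule and its default threshold are assumptions for illustration, not the scheme proposed in the paper.

        def med_predict(a, b, c):
            """Standard JPEG-LS median edge detector: a = left, b = above,
            c = upper-left neighbor of the current pixel."""
            if c >= max(a, b):
                return min(a, b)   # likely edge above or to the left
            if c <= min(a, b):
                return max(a, b)
            return a + b - c       # smooth (planar) region

        def thresholded_predict(a, b, c, t=4):
            """Hedged sketch of a threshold-based variant in the spirit of
            the paper (rule and default t are assumptions): gradients below
            t count as flat, letting diagonal structure reach the planar
            predictor instead of the min/max branches."""
            if abs(a - c) <= t and abs(b - c) <= t:
                return a + b - c   # near-flat region or diagonal edge
            return med_predict(a, b, c)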

  2. Comparison of Image Processing Techniques for Nonviable Tissue Quantification in Late Gadolinium Enhancement Cardiac Magnetic Resonance Images.

    PubMed

    Carminati, M Chiara; Boniotti, Cinzia; Fusini, Laura; Andreini, Daniele; Pontone, Gianluca; Pepi, Mauro; Caiani, Enrico G

    2016-05-01

    The aim of this study was to compare the performance of quantitative methods, either semiautomated or automated, for left ventricular (LV) nonviable tissue analysis from cardiac magnetic resonance late gadolinium enhancement (CMR-LGE) images. The investigated segmentation techniques were: (i) n-standard deviations thresholding; (ii) full width at half maximum thresholding; (iii) Gaussian mixture model classification; and (iv) fuzzy c-means clustering. These algorithms were applied either in each short axis slice (single-slice approach) or globally considering the entire short-axis stack covering the LV (global approach). CMR-LGE images from 20 patients with ischemic cardiomyopathy were retrospectively selected, and results from each technique were assessed against manual tracing. All methods provided comparable performance in terms of accuracy in scar detection, computation of local transmurality, and high correlation in scar mass compared with the manual technique. In general, no significant difference between single-slice and global approach was noted. The reproducibility of manual and investigated techniques was confirmed in all cases with slightly lower results for the nSD approach. Automated techniques resulted in accurate and reproducible evaluation of LV scars from CMR-LGE in ischemic patients with performance similar to the manual technique. Their application could minimize user interaction and computational time, even when compared with semiautomated approaches.
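
    The first two techniques compared in the study are simple enough to state in a few lines. The sketch below shows common formulations of n-SD and FWHM thresholding under assumed mask names; the Gaussian mixture and fuzzy c-means methods require full estimation machinery and are omitted.

        import numpy as np

        def nsd_threshold(img, remote_mask, n=5):
            """n-standard-deviations rule: voxels brighter than the mean of
            remote (healthy) myocardium plus n of its SDs count as scar."""
            mu, sd = img[remote_mask].mean(), img[remote_mask].std()
            return img > mu + n * sd

        def fwhm_threshold(img, myo_mask):
            """Full width at half maximum rule: scar is everything above
            half of the maximum intensity found within the myocardium."""
            return img > 0.5 * img[myo_mask].max()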

  3. Interface morphology of Mo/Si multilayer systems with varying Mo layer thickness studied by EUV diffuse scattering.

    PubMed

    Haase, Anton; Soltwisch, Victor; Braun, Stefan; Laubis, Christian; Scholze, Frank

    2017-06-26

    We investigate the influence of the Mo-layer thickness on the EUV reflectance of Mo/Si mirrors with a set of unpolished and interface-polished Mo/Si/C multilayer mirrors. The Mo-layer thickness is varied in the range from 1.7 nm to 3.05 nm. We use a novel combination of specular and diffuse intensity measurements to determine the interface roughness throughout the multilayer stack and do not rely on scanning probe measurements at the surface only. The combination of EUV and X-ray reflectivity measurements and near-normal-incidence EUV diffuse scattering allows us to reconstruct the Mo layer thicknesses and to determine the interface roughness power spectral density. The data analysis is conducted by applying a matrix method for the specular reflection and the distorted-wave Born approximation for diffuse scattering. We introduce the Markov-chain Monte Carlo method into the field in order to determine the respective confidence intervals for all reconstructed parameters. We unambiguously detect a threshold thickness for Mo in both sample sets at which the specular reflectance goes through a local minimum correlated with a distinct increase in diffuse scatter. We attribute this to the known amorphous-to-crystalline transition at a certain thickness threshold, which is altered in our sample system by the polishing.

  4. Statistical corruption in Beijing's air quality data has likely ended in 2012

    NASA Astrophysics Data System (ADS)

    Stoerk, Thomas

    2016-02-01

    This research documents changes in likely misreporting in official air quality data from Beijing for the years 2008-2013. It is shown that, consistent with prior research, the official Chinese data report suspiciously few observations that exceed the politically important Blue Sky Day threshold, a particular air pollution level used to evaluate local officials, and an excess of observations just below that threshold. Similar data, measured by the US Embassy in Beijing, do not show this irregularity. To document likely misreporting, this analysis proposes a new way of comparing air quality data via Benford's Law, a statistical regularity known to fit air pollution data. Using this method to compare the official data to the US Embassy data for the first time, I find that the Chinese data fit Benford's Law poorly until a change in air quality measurements at the end of 2012. From 2013 onwards, the Chinese data fit Benford's Law closely. The US Embassy data, by contrast, exhibit no variation over time in the fit with Benford's Law, implying that the underlying pollution processes remain unchanged. These findings suggest that misreporting of air quality data for Beijing has likely ended in 2012. Additionally, I use aerosol optical density data to show the general applicability of this method of detecting likely misreporting in air pollution data.
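
    Benford's Law predicts that a leading digit d occurs with probability log10(1 + 1/d). A goodness-of-fit comparison of the kind used here can be sketched in a few lines; the snippet below is an illustrative chi-square version, not the paper's exact statistical procedure.

        import numpy as np
        from scipy.stats import chisquare

        def benford_test(values):
            """Compare the leading-digit distribution of positive readings
            with Benford's Law via a chi-square goodness-of-fit statistic
            (an illustrative check, not the paper's procedure)."""
            v = np.asarray([x for x in values if x > 0], dtype=float)
            # leading digit of v is floor(10 ** frac(log10 v)), clipped 1..9
            digits = np.clip((10 ** (np.log10(v) % 1)).astype(int), 1, 9)
            observed = np.bincount(digits, minlength=10)[1:10]
            expected = np.log10(1 + 1 / np.arange(1, 10)) * len(v)
            return chisquare(observed, expected)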

  5. Triggering Interventions for Influenza: The ALERT Algorithm

    PubMed Central

    Reich, Nicholas G.; Cummings, Derek A. T.; Lauer, Stephen A.; Zorn, Martha; Robinson, Christine; Nyquist, Ann-Christine; Price, Connie S.; Simberkoff, Michael; Radonovich, Lewis J.; Perl, Trish M.

    2015-01-01

    Background. Early, accurate predictions of the onset of influenza season enable targeted implementation of control efforts. Our objective was to develop a tool to assist public health practitioners, researchers, and clinicians in defining the community-level onset of seasonal influenza epidemics. Methods. Using recent surveillance data on virologically confirmed infections of influenza, we developed the Above Local Elevated Respiratory Illness Threshold (ALERT) algorithm, a method to identify the period of highest seasonal influenza activity. We used data from 2 large hospitals that serve Baltimore, Maryland and Denver, Colorado, and the surrounding geographic areas. The data used by ALERT are routinely collected surveillance data: weekly case counts of laboratory-confirmed influenza A virus. The main outcome is the percentage of prospective seasonal influenza cases identified by the ALERT algorithm. Results. When ALERT thresholds designed to capture 90% of all cases were applied prospectively to the 2011–2012 and 2012–2013 influenza seasons in both hospitals, 71%–91% of all reported cases fell within the ALERT period. Conclusions. The ALERT algorithm provides a simple, robust, and accurate metric for determining the onset of elevated influenza activity at the community level. This new algorithm provides valuable information that can impact infection prevention recommendations, public health practice, and healthcare delivery. PMID:25414260
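
    The core of such a rule is easy to sketch. The toy version below starts the ALERT period in the first week that confirmed cases reach a threshold and ends it after a run of quiet weeks; the actual algorithm chooses its threshold by optimizing the percentage of cases captured over historical seasons, which is omitted here, and the parameter names are assumptions.

        def alert_period(weekly_cases, threshold, end_weeks=2):
            """Return (start_week, end_week) of the elevated-activity
            period, or None if the threshold is never reached (a hedged
            sketch of an ALERT-style rule, not the published algorithm)."""
            start = next((i for i, c in enumerate(weekly_cases) if c >= threshold), None)
            if start is None:
                return None
            below = 0
            for i in range(start + 1, len(weekly_cases)):
                below = below + 1 if weekly_cases[i] < threshold else 0
                if below >= end_weeks:
                    return (start, i)
            return (start, len(weekly_cases) - 1)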

  6. From GCM Output to Local Hydrologic and Ecological Impacts: Integrating Climate Change Projections into Conservation Lands

    NASA Astrophysics Data System (ADS)

    Weiss, S. B.; Micheli, L.; Flint, L. E.; Flint, A. L.; Thorne, J. H.

    2014-12-01

    Assessment of climate change resilience, vulnerability, and adaptation options requires downscaling of GCM outputs to local scales and conversion of temperature and precipitation forcings into hydrologic and ecological responses. Recent work in the San Francisco Bay Area and California demonstrates a practical approach to this process. First, climate futures (GCM x emissions scenario) are screened using cluster analysis of seasonal precipitation and temperature to select a tractable subset of projections that still represents the range of climate projections. Second, monthly climate projections are downscaled to 270 m and the Basin Characterization Model (BCM) is applied to generate fine-scale recharge, runoff, actual evapotranspiration (AET), and climatic water deficit (CWD), accounting for soils, bedrock geology, topography, and local climate. Third, annual time series are used to derive 30-year climatologies and recurrence intervals of extreme events (including multi-year droughts) at the scale of small watersheds and conservation parcels/networks. We take a "scenario-neutral" approach in which thresholds are defined for system "failure," such as water supply shortfalls or drought mortality/vegetation transitions, and the time window for hitting those thresholds is evaluated across all selected climate projections. San Francisco Bay Area examples include drought thresholds (CWD) for specific vegetation types that identify leading/trailing edges and local refugia, evaluation of hydrologic resources (recharge and runoff) provided by conservation lands, and productivity of rangelands (AET). BCM outputs for multiple futures are becoming available to resource managers through on-line data extraction tools. This approach has wide applicability to numerous resource management issues.

  7. Thermoelectricity near Anderson localization transitions

    NASA Astrophysics Data System (ADS)

    Yamamoto, Kaoru; Aharony, Amnon; Entin-Wohlman, Ora; Hatano, Naomichi

    2017-10-01

    The electronic thermoelectric coefficients are analyzed in the vicinity of one and two Anderson localization thresholds in three dimensions. For a single mobility edge, we correct and extend previous studies and find universal approximants which allow us to deduce the critical exponent for the zero-temperature conductivity from thermoelectric measurements. In particular, we find that at nonzero low temperatures the Seebeck coefficient and the thermoelectric efficiency can be very large on the "insulating" side, for chemical potentials below the (zero-temperature) localization threshold. Corrections to the leading power-law singularity in the zero-temperature conductivity are shown to introduce nonuniversal temperature-dependent corrections to the otherwise universal functions which describe the Seebeck coefficient, the figure of merit, and the Wiedemann-Franz ratio. Next, the thermoelectric coefficients are shown to have interesting dependences on the system size. While the Seebeck coefficient decreases with decreasing size, the figure of merit first decreases but then increases, while the Wiedemann-Franz ratio first increases but then decreases as the size decreases. Small (but finite) samples may thus have larger thermoelectric efficiencies. In the last part we study thermoelectricity in systems with a pair of localization edges, the ubiquitous situation in random systems near the centers of electronic energy bands. As the disorder increases, the two thresholds approach each other, and then the Seebeck coefficient and the figure of merit increase significantly, as expected from the general arguments of Mahan and Sofo [G. D. Mahan and J. O. Sofo, Proc. Natl. Acad. Sci. USA 93, 7436 (1996), 10.1073/pnas.93.15.7436] for a narrow energy range of the zero-temperature metallic behavior.

  8. A Comparison of Earthquake Back-Projection Imaging Methods for Dense Local Arrays, and Application to the 2011 Virginia Aftershock Sequence

    NASA Astrophysics Data System (ADS)

    Beskardes, G. D.; Hole, J. A.; Wang, K.; Wu, Q.; Chapman, M. C.; Davenport, K. K.; Michaelides, M.; Brown, L. D.; Quiros, D. A.

    2016-12-01

    Back-projection imaging has recently become a practical method for local earthquake detection and location due to the deployment of densely sampled, continuously recorded, local seismograph arrays. Back-projection is scalable to earthquakes with a wide range of magnitudes, from very tiny to very large. Local dense arrays provide the opportunity to capture very tiny events for a range of applications, such as tectonic microseismicity, source scaling studies, wastewater injection-induced seismicity, hydraulic fracturing, CO2 injection monitoring, volcano studies, and mining safety. While back-projection sometimes utilizes the full seismic waveform, the waveforms are often pre-processed to overcome imaging issues. We compare the performance of back-projection using four previously used data pre-processing methods: full waveform, envelope, short-term averaging / long-term averaging (STA/LTA), and kurtosis. The goal is to identify an optimized strategy for an entirely automated imaging process that is robust in the presence of real-data issues, has the lowest signal-to-noise thresholds for detection and for location, has the best spatial resolution of the energy imaged at the source, preserves magnitude information, and considers computational cost. Real-data issues include aliased station spacing, low signal-to-noise ratio (to <1), large noise bursts, and spatially varying waveform polarity. For evaluation, the four imaging methods were applied to the aftershock sequence of the 2011 Virginia earthquake as recorded by the AIDA array with 200-400 m station spacing. These data include earthquake magnitudes from -2 to 3 with highly variable signal-to-noise ratios, spatially aliased noise, and large noise bursts: realistic issues in many environments. Each of the four back-projection methods has advantages and disadvantages, and a combined multi-pass method achieves the best of all criteria. Preliminary imaging results from the 2011 Virginia dataset will be presented.
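
    Of the four pre-processing options, STA/LTA is the most widely used and the simplest to sketch. The snippet below computes a standard short-term-average over long-term-average characteristic function on a single trace; the window lengths are illustrative assumptions.

        import numpy as np

        def sta_lta(trace, fs, sta_win=0.5, lta_win=10.0):
            """Classic STA/LTA characteristic function (window lengths in
            seconds are illustrative defaults). Values well above 1 flag
            impulsive energy such as P arrivals."""
            e = trace.astype(float) ** 2                   # signal energy
            ns, nl = int(sta_win * fs), int(lta_win * fs)
            sta = np.convolve(e, np.ones(ns) / ns, mode="same")
            lta = np.convolve(e, np.ones(nl) / nl, mode="same")
            return sta / np.maximum(lta, 1e-12)            # avoid divide-by-zero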

  9. 3D SAPIV particle field reconstruction method based on adaptive threshold.

    PubMed

    Qu, Xiangju; Song, Yang; Jin, Ying; Li, Zhenhua; Wang, Xuezhen; Guo, ZhenYan; Ji, Yunjing; He, Anzhi

    2018-03-01

    Particle image velocimetry (PIV) is an essential flow field diagnostic technique that provides instantaneous velocimetry information non-intrusively. Three-dimensional (3D) PIV methods can provide a full understanding of the 3D structure, the complete stress tensor, and the vorticity vector in complex flows. In synthetic aperture particle image velocimetry (SAPIV), flow fields with high particle densities can be measured from the same direction by different cameras. During SAPIV particle reconstruction, particles are commonly reconstructed by manually setting a threshold to filter out unfocused particles in the refocused images. In this paper, the particle intensity distribution in refocused images is analyzed, and a SAPIV particle field reconstruction method based on an adaptive threshold is presented. By using the adaptive threshold to filter the 3D measurement volume integrally, the three-dimensional location information of the focused particles can be reconstructed. The cross correlations between the images captured by the cameras and the images projected by the reconstructed particle field are calculated for different threshold values. The optimal threshold is determined by cubic curve fitting and is defined as the threshold value at which the correlation coefficient reaches its maximum. A numerical simulation of a 16-camera array and a particle field at two adjacent time instants quantitatively evaluates the performance of the proposed method. An experimental system consisting of an array of 16 cameras was used to reconstruct four adjacent frames in a vortex flow field. The results show that the proposed reconstruction method can effectively reconstruct 3D particle fields.
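
    The threshold selection step reduces to a one-dimensional optimization. The sketch below fits a cubic to sampled (threshold, correlation) pairs and returns the threshold at the fitted maximum; it is a minimal reading of the procedure, not the authors' code.

        import numpy as np

        def optimal_threshold(thresholds, correlations):
            """Fit a cubic to the (threshold, cross-correlation) samples and
            return the threshold where the fitted curve is maximal."""
            c = np.polyfit(thresholds, correlations, 3)
            grid = np.linspace(min(thresholds), max(thresholds), 1000)
            return grid[np.argmax(np.polyval(c, grid))]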

  10. Welding studs detection based on line structured light

    NASA Astrophysics Data System (ADS)

    Geng, Lei; Wang, Jia; Wang, Wen; Xiao, Zhitao

    2018-01-01

    The quality of welding studs is significant for the installation and localization of car components in the process of automobile general assembly. A welding stud detection method based on line structured light is proposed. Firstly, an adaptive threshold is designed to compute the binary images. Then, the light stripes of the image are extracted after skeleton line extraction and morphological filtering. The direction vector of the main light stripe is calculated using the length of the light stripe. Finally, the gray projections along the orientation of the main light stripe and the orientation perpendicular to it are computed to obtain gray projection curves, which are used to detect the studs. Experimental results demonstrate that the error rate of the proposed method is lower than 0.1%, making it suitable for automobile manufacturing.

  11. Automated tumour boundary delineation on 18F-FDG PET images using active contour coupled with shifted-optimal thresholding method

    NASA Astrophysics Data System (ADS)

    Khamwan, Kitiwat; Krisanachinda, Anchali; Pluempitiwiriyawej, Charnchai

    2012-10-01

    This study presents an automatic method for tracing the boundary of a tumour in positron emission tomography (PET) images. It has been observed that Otsu's threshold value is biased when the within-class variances of the object and the background are significantly different. To solve this problem, a double-stage threshold search that minimizes the energy between the first Otsu threshold and the maximum intensity value is introduced. This shifted-optimal thresholding is embedded into a region-based active contour so that both algorithms are performed consecutively. The efficiency of the method is validated using six sphere inserts (0.52-26.53 cc volume) of the IEC/2001 torso phantom. Both the spheres and the phantom were filled with 18F solution, and PET images were acquired at four source-to-background ratios (SBRs). The results show that the tumour volumes segmented by the combined algorithm are more accurate than those obtained with the traditional active contour. The method was then applied clinically to ten oesophageal cancer patients, and the results were evaluated against manual tracing by an experienced radiation oncologist. The advantage of the algorithm is its reduced erroneous delineation, which improves the precision and accuracy of PET tumour contouring. Moreover, the combined method is robust, independent of the SBR threshold-volume curves, and does not require prior lesion size measurement.
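
    One plausible reading of the double-stage search is a second Otsu pass restricted to the intensity range above the first threshold. The sketch below implements that interpretation with a histogram-based Otsu; the restrict-and-rerun strategy is an assumption about the paper's exact energy criterion.

        import numpy as np

        def otsu(img, lo=None, hi=None, bins=256):
            """Otsu's threshold restricted to intensities in [lo, hi]:
            maximize the between-class variance over histogram bins."""
            lo = img.min() if lo is None else lo
            hi = img.max() if hi is None else hi
            vals = img[(img >= lo) & (img <= hi)].ravel()
            hist, edges = np.histogram(vals, bins=bins)
            centers = 0.5 * (edges[:-1] + edges[1:])
            w0 = np.cumsum(hist)                 # class-0 weight per cut
            w1 = w0[-1] - w0                     # class-1 weight per cut
            m0 = np.cumsum(hist * centers)
            mu0 = m0 / np.maximum(w0, 1)
            mu1 = (m0[-1] - m0) / np.maximum(w1, 1)
            between = w0 * w1 * (mu0 - mu1) ** 2
            return centers[np.argmax(between)]

        def shifted_otsu(img):
            """Double-stage search sketch: re-run Otsu between the first
            Otsu threshold and the maximum intensity, shifting the final
            threshold toward the hot lesion."""
            t1 = otsu(img)
            return otsu(img, lo=t1, hi=img.max())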

  12. Identification of ecological thresholds from variations in phytoplankton communities among lakes: contribution to the definition of environmental standards.

    PubMed

    Roubeix, Vincent; Danis, Pierre-Alain; Feret, Thibaut; Baudoin, Jean-Marc

    2016-04-01

    In aquatic ecosystems, the identification of ecological thresholds may be useful for managers, as it can help diagnose ecosystem health and identify key levers for the success of preservation and restoration measures. A recent statistical method, gradient forest, based on random forests, was used to detect thresholds of phytoplankton community change in lakes along different environmental gradients. It performs exploratory analyses of multivariate biological and environmental data to estimate the location and importance of community thresholds along gradients. The method was applied to a data set of 224 French lakes characterized by 29 environmental variables and the mean abundances of 196 phytoplankton species. Results showed the high importance of geographic variables for the prediction of species abundances at the scale of the study. A second analysis was performed on a subset of lakes defined by geographic thresholds and presenting higher biological homogeneity. Community thresholds were identified for the most important physico-chemical variables, including water transparency, total phosphorus, ammonia, nitrates, and dissolved organic carbon. Gradient forest appears to be a powerful method, at a first exploratory step, for detecting ecological thresholds at large spatial scales. The thresholds identified here should be reinforced by separate analyses of other aquatic communities and may then be used to set protective environmental standards after consideration of the natural variability among lakes.

  13. Adaptive threshold shearlet transform for surface microseismic data denoising

    NASA Astrophysics Data System (ADS)

    Tang, Na; Zhao, Xian; Li, Yue; Zhu, Dan

    2018-06-01

    Random noise suppression plays an important role in microseismic data processing. Microseismic data are often corrupted by strong random noise, which directly hampers the identification and location of microseismic events. The shearlet transform is a new multiscale transform that can effectively process low-magnitude microseismic data. In the shearlet domain, valid signals and random noise have different distributions, so shearlet coefficients can be shrunk by a threshold; the threshold is therefore vital in suppressing random noise. Conventional threshold denoising algorithms usually apply the same threshold to all coefficients, which causes inefficient noise suppression or loss of valid signals. To solve these problems, we propose the adaptive threshold shearlet transform (ATST) for surface microseismic data denoising. In the new algorithm, we first calculate a fundamental threshold for each directional subband. In each subband, an adjustment factor is obtained from each coefficient and its neighboring coefficients, in order to adaptively regulate the fundamental threshold for different shearlet coefficients. Finally, we apply the adaptive threshold to the individual shearlet coefficients. Denoising experiments on synthetic records and field data illustrate that the proposed method performs better in suppressing random noise and preserving valid signals than the conventional shearlet denoising method.

  14. Efficient and accurate local single reference correlation methods for high-spin open-shell molecules using pair natural orbitals

    NASA Astrophysics Data System (ADS)

    Hansen, Andreas; Liakos, Dimitrios G.; Neese, Frank

    2011-12-01

    A production level implementation of the high-spin open-shell (spin unrestricted) single reference coupled pair, quadratic configuration interaction and coupled cluster methods with up to doubly excited determinants in the framework of the local pair natural orbital (LPNO) concept is reported. This work is an extension of the closed-shell LPNO methods developed earlier [F. Neese, F. Wennmohs, and A. Hansen, J. Chem. Phys. 130, 114108 (2009), 10.1063/1.3086717; F. Neese, A. Hansen, and D. G. Liakos, J. Chem. Phys. 131, 064103 (2009), 10.1063/1.3173827]. The internal space is spanned by localized orbitals, while the external space for each electron pair is represented by a truncated PNO expansion. The laborious integral transformation associated with the large number of PNOs becomes feasible through the extensive use of density fitting (resolution of the identity (RI)) techniques. Technical complications arising for the open-shell case and the use of quasi-restricted orbitals for the construction of the reference determinant are discussed in detail. As in the closed-shell case, only three cutoff parameters control the average number of PNOs per electron pair, the size of the significant pair list, and the number of contributing auxiliary basis functions per PNO. The chosen threshold default values ensure robustness and the results of the parent canonical methods are reproduced to high accuracy. Comprehensive numerical tests on absolute and relative energies as well as timings consistently show that the outstanding performance of the LPNO methods carries over to the open-shell case with minor modifications. Finally, hyperfine couplings calculated with the variational LPNO-CEPA/1 method, for which a well-defined expectation value type density exists, indicate the great potential of the LPNO approach for the efficient calculation of molecular properties.

  15. Twelve automated thresholding methods for segmentation of PET images: a phantom study.

    PubMed

    Prieto, Elena; Lecumberri, Pablo; Pagola, Miguel; Gómez, Marisol; Bilbao, Izaskun; Ecay, Margarita; Peñuelas, Iván; Martí-Climent, Josep M

    2012-06-21

    Tumor volume delineation over positron emission tomography (PET) images is of great interest for proper diagnosis and therapy planning. However, standard segmentation techniques (manual or semi-automated) are operator dependent and time consuming, while fully automated procedures are cumbersome or require complex mathematical development. The aim of this study was to segment PET images in a fully automated way by implementing a set of 12 automated thresholding algorithms, classical in the fields of optical character recognition, tissue engineering, and non-destructive testing of high-tech structures. Automated thresholding algorithms select a specific threshold for each image without any a priori spatial information about the segmented object or any special calibration of the tomograph, as opposed to the usual thresholding methods for PET. Spherical (18)F-filled objects of different volumes were acquired on a clinical PET/CT and on a small-animal PET scanner, with three different signal-to-background ratios. Images were segmented with the 12 automatic thresholding algorithms and the results were compared with the standard segmentation reference, a threshold at 42% of the maximum uptake. The Ridler and Ramesh thresholding algorithms, based on clustering and histogram-shape information, respectively, provided better results than the classical 42%-based threshold (p < 0.05). We have herein demonstrated that fully automated thresholding algorithms can provide better results than classical PET segmentation tools.
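
    The Ridler (isodata) algorithm singled out above is a short iterative rule. The sketch below is a standard formulation: alternate between splitting the image at the current threshold and moving the threshold to the midpoint of the two class means; the stopping tolerance is an illustrative choice.

        import numpy as np

        def ridler_threshold(img, tol=0.5):
            """Ridler-Calvard (isodata) clustering threshold: iterate the
            threshold to the midpoint of the foreground and background
            means until it stabilizes (tol suits integer-valued images)."""
            t = img.mean()
            while True:
                fg, bg = img[img > t], img[img <= t]
                if fg.size == 0 or bg.size == 0:
                    return t                    # degenerate split, stop
                t_new = 0.5 * (fg.mean() + bg.mean())
                if abs(t_new - t) < tol:
                    return t_new
                t = t_new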

  16. Twelve automated thresholding methods for segmentation of PET images: a phantom study

    NASA Astrophysics Data System (ADS)

    Prieto, Elena; Lecumberri, Pablo; Pagola, Miguel; Gómez, Marisol; Bilbao, Izaskun; Ecay, Margarita; Peñuelas, Iván; Martí-Climent, Josep M.

    2012-06-01

    Tumor volume delineation over positron emission tomography (PET) images is of great interest for proper diagnosis and therapy planning. However, standard segmentation techniques (manual or semi-automated) are operator dependent and time consuming, while fully automated procedures are cumbersome or require complex mathematical development. The aim of this study was to segment PET images in a fully automated way by implementing a set of 12 automated thresholding algorithms, classical in the fields of optical character recognition, tissue engineering, and non-destructive testing of high-tech structures. Automated thresholding algorithms select a specific threshold for each image without any a priori spatial information about the segmented object or any special calibration of the tomograph, as opposed to the usual thresholding methods for PET. Spherical 18F-filled objects of different volumes were acquired on a clinical PET/CT and on a small-animal PET scanner, with three different signal-to-background ratios. Images were segmented with the 12 automatic thresholding algorithms and the results were compared with the standard segmentation reference, a threshold at 42% of the maximum uptake. The Ridler and Ramesh thresholding algorithms, based on clustering and histogram-shape information, respectively, provided better results than the classical 42%-based threshold (p < 0.05). We have herein demonstrated that fully automated thresholding algorithms can provide better results than classical PET segmentation tools.

  17. Soviet Developments in Material Science No. 1, January - June 1975

    DTIC Science & Technology

    1975-11-30

    Zotova, T. Makhanbetaliyev, B. Ya. Mel’tser, and D. N. Nasledov. Effect of fluctuations in local composition of solid solutions on...289-297. Gurin, N. T., D. G. Semak, and V. V. Fedak. Threshold switching and local states in chalcogenide glasses. FTP, no. 4, 1975...L. N. Seregina. Crystal-glass transition in Ge0 .-Te- p. and its effect on local environment of germanium atoms. FTT, no. 2, 1975, 633

  18. Assessing regional and interspecific variation in threshold responses of forest breeding birds through broad scale analyses.

    PubMed

    van der Hoek, Yntze; Renfrew, Rosalind; Manne, Lisa L

    2013-01-01

    Identifying persistence and extinction thresholds in species-habitat relationships is a major focal point of ecological research and conservation. However, one major concern regarding the incorporation of threshold analyses in conservation is the lack of knowledge on the generality and transferability of results across species and regions. We present a multi-region, multi-species approach of modeling threshold responses, which we use to investigate whether threshold effects are similar across species and regions. We modeled local persistence and extinction dynamics of 25 forest-associated breeding birds based on detection/non-detection data, which were derived from repeated breeding bird atlases for the state of Vermont. We did not find threshold responses to be particularly well-supported, with 9 species supporting extinction thresholds and 5 supporting persistence thresholds. This contrasts with a previous study based on breeding bird atlas data from adjacent New York State, which showed that most species support persistence and extinction threshold models (15 and 22 of 25 study species respectively). In addition, species that supported a threshold model in both states had associated average threshold estimates of 61.41% (SE = 6.11, persistence) and 66.45% (SE = 9.15, extinction) in New York, compared to 51.08% (SE = 10.60, persistence) and 73.67% (SE = 5.70, extinction) in Vermont. Across species, thresholds were found at 19.45-87.96% forest cover for persistence and 50.82-91.02% for extinction dynamics. Through an approach that allows for broad-scale comparisons of threshold responses, we show that species vary in their threshold responses with regard to habitat amount, and that differences between even nearby regions can be pronounced. We present both ecological and methodological factors that may contribute to the different model results, but propose that regardless of the reasons behind these differences, our results merit a warning that threshold values cannot simply be transferred across regions or interpreted as clear-cut targets for ecosystem management and conservation.

  19. Differences in two-point discrimination and sensory threshold in the blind between braille and text reading: a pilot study.

    PubMed

    Noh, Ji-Woong; Park, Byoung-Sun; Kim, Mee-Young; Lee, Lim-Kyu; Yang, Seung-Min; Lee, Won-Deok; Shin, Yong-Sub; Kang, Ji-Hye; Kim, Ju-Hyun; Lee, Jeong-Uk; Kwak, Taek-Yong; Lee, Tae-Hyun; Kim, Ju-Young; Kim, Junghwan

    2015-06-01

    [Purpose] This study investigated two-point discrimination (TPD) and the electrical sensory threshold of the blind to define the effect of using Braille on the tactile and electrical senses. [Subjects and Methods] Twenty-eight blind participants were divided equally into a text-reading and a Braille-reading group. We measured tactile sensory and electrical thresholds using the TPD method and a transcutaneous electrical nerve stimulator. [Results] The left palm TPD values were significantly different between the groups. The values of the electrical sensory threshold in the left hand, the electrical pain threshold in the left hand, and the electrical pain threshold in the right hand were significantly lower in the Braille group than in the text group. [Conclusion] These findings make it difficult to explain the difference in tactility between groups, excluding both palms. However, our data show that using Braille can enhance development of the sensory median nerve in the blind, particularly in terms of the electrical sensory and pain thresholds.

  20. Differences in two-point discrimination and sensory threshold in the blind between braille and text reading: a pilot study

    PubMed Central

    Noh, Ji-Woong; Park, Byoung-Sun; Kim, Mee-Young; Lee, Lim-Kyu; Yang, Seung-Min; Lee, Won-Deok; Shin, Yong-Sub; Kang, Ji-Hye; Kim, Ju-Hyun; Lee, Jeong-Uk; Kwak, Taek-Yong; Lee, Tae-Hyun; Kim, Ju-Young; Kim, Junghwan

    2015-01-01

    [Purpose] This study investigated two-point discrimination (TPD) and the electrical sensory threshold of the blind to define the effect of using Braille on the tactile and electrical senses. [Subjects and Methods] Twenty-eight blind participants were divided equally into a text-reading and a Braille-reading group. We measured tactile sensory and electrical thresholds using the TPD method and a transcutaneous electrical nerve stimulator. [Results] The left palm TPD values were significantly different between the groups. The values of the electrical sensory threshold in the left hand, the electrical pain threshold in the left hand, and the electrical pain threshold in the right hand were significantly lower in the Braille group than in the text group. [Conclusion] These findings make it difficult to explain the difference in tactility between groups, excluding both palms. However, our data show that using Braille can enhance development of the sensory median nerve in the blind, particularly in terms of the electrical sensory and pain thresholds. PMID:26180348

  1. Local health care expenditure plans and their opportunity costs.

    PubMed

    Karlsberg Schaffer, Sarah; Sussex, Jon; Devlin, Nancy; Walker, Andrew

    2015-09-01

    In the UK, approval decisions by Health Technology Assessment bodies are made using a cost per quality-adjusted life year (QALY) threshold, the value of which is based on little empirical evidence. We test the feasibility of estimating the "true" value of the threshold in NHS Scotland using information on marginal services (those planned to receive significant (dis)investment). We also explore how the NHS makes spending decisions and the role of cost per QALY evidence in this process. We identify marginal services using NHS Board-level responses to the 2012/13 Budget Scrutiny issued by the Scottish Government, supplemented with information on prioritisation processes derived from interviews with Finance Directors. We search the literature for cost-effectiveness evidence relating to marginal services. The cost-effectiveness estimates of marginal services vary hugely and thus it was not possible to obtain a reliable estimate of the threshold. This is unsurprising given the finding that cost-effectiveness evidence is rarely used to justify expenditure plans, which are driven by a range of other factors. Our results highlight the differences in objectives between HTA bodies and local health service decision makers. We also demonstrate that, even if it were desirable, the use of cost-effectiveness evidence at local level would be highly challenging without extensive investment in health economics resources. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  2. Full-Waveform Envelope Templates for Low Magnitude Discrimination and Yield Estimation at Local and Regional Distances with Application to the North Korean Nuclear Tests

    NASA Astrophysics Data System (ADS)

    Yoo, S. H.

    2017-12-01

    Monitoring seismologists have successfully used seismic coda for event discrimination and yield estimation for over a decade. In practice seismologists typically analyze long-duration, S-coda signals with high signal-to-noise ratios (SNR) at regional and teleseismic distances, since the single back-scattering model reasonably predicts decay of the late coda. However, seismic monitoring requirements are shifting towards smaller, locally recorded events that exhibit low SNR and short signal lengths. To be successful at characterizing events recorded at local distances, we must utilize the direct-phase arrivals, as well as the earlier part of the coda, which is dominated by multiple forward scattering. To remedy this problem, we have developed a new hybrid method known as full-waveform envelope template matching to improve predicted envelope fits over the entire waveform and account for direct-wave and early coda complexity. We accomplish this by including a multiple forward-scattering approximation in the envelope modeling of the early coda. The new hybrid envelope templates are designed to fit local and regional full waveforms and produce low-variance amplitude estimates, which will improve yield estimation and discrimination between earthquakes and explosions. To demonstrate the new technique, we applied our full-waveform envelope template-matching method to the six known North Korean (DPRK) underground nuclear tests and four aftershock events following the September 2017 test. We successfully discriminated the event types and estimated the yield for all six nuclear tests. We also applied the same technique to the 2015 Tianjin explosions in China, and another suspected low-yield explosion at the DPRK test site on May 12, 2010. Our results show that the new full-waveform envelope template-matching method significantly improves upon longstanding single-scattering coda prediction techniques. More importantly, the new method allows monitoring seismologists to extend coda-based techniques to lower magnitude thresholds and low-yield local explosions.

  3. Lung segmentation from HRCT using united geometric active contours

    NASA Astrophysics Data System (ADS)

    Liu, Junwei; Li, Chuanfu; Xiong, Jin; Feng, Huanqing

    2007-12-01

    Accurate lung segmentation from high-resolution CT images is a challenging task due to fine tracheal structures, missing boundary segments, and complex lung anatomy. One popular method is based on gray-level thresholding, but its results are usually rough. A united geometric active contour model based on level sets is proposed for lung segmentation in this paper. In particular, this method combines local boundary information and a region-based statistical model synchronously: (1) the boundary term ensures the integrity of the lung tissue; (2) the region term makes the level set function evolve according to global characteristics, independent of the initial settings. A penalizing energy term is introduced into the model, which lets the level set function evolve without re-initialization. The method is found to be much more effective for lung segmentation than methods based on boundary or region information alone. Results are shown by 3D lung surface reconstruction, which indicates that the method can play an important role in the design of computer-aided diagnosis (CAD) systems.

  4. Excitonic lasing in solution-processed subwavelength nanosphere assemblies

    DOE PAGES

    Appavoo, Kannatassen; Liu, Xiaoze; Menon, Vinod; ...

    2016-02-03

    Lasing in solution-processed nanomaterials has gained significant interest because of the potential for low-cost integrated photonic devices. Still, a key challenge is to utilize a comprehensive knowledge of the system's spectral and temporal dynamics to design low-threshold lasing devices. Here, we demonstrate intrinsic lasing (without external cavity) at low threshold in an ultrathin film of coupled, highly crystalline nanospheres with overall thickness on the order of ~λ/4. The cavity-free geometry consists of ~35 nm zinc oxide nanospheres that collectively localize the in-plane emissive light fields while minimizing scattering losses, resulting in excitonic lasing with fluence thresholds at least an order of magnitude lower than previous UV-blue random and quantum-dot lasers (<75 μJ/cm²). Fluence-dependent effects, as quantified by subpicosecond transient spectroscopy, highlight the role of phonon-mediated processes in excitonic lasing. Subpicosecond evolution of distinct lasing modes, together with three-dimensional electromagnetic simulations, indicates a random lasing process, in violation of the commonly cited criteria of strong scattering from individual nanostructures and an optically thick sample. Subsequently, an electron-hole plasma mechanism is observed with increased fluence. Furthermore, these results suggest that coupled nanostructures with high crystallinity, fabricated by low-cost solution-processing methods, can function as viable building blocks for high-performance optoelectronic devices.

  5. Calculation of photoionization cross section near auto-ionizing lines and magnesium photoionization cross section near threshold

    NASA Technical Reports Server (NTRS)

    Moore, E. N.; Altick, P. L.

    1972-01-01

    The research performed is briefly reviewed. A simple method was developed for the calculation of continuum states of atoms when autoionization is present. The method was employed to give the first theoretical cross section for beryllium and magnesium; the results indicate that the values used previously at threshold were sometimes seriously in error. These threshold values have potential applications in astrophysical abundance estimates.

  6. A de-noising method using the improved wavelet threshold function based on noise variance estimation

    NASA Astrophysics Data System (ADS)

    Liu, Hui; Wang, Weida; Xiang, Changle; Han, Lijin; Nie, Haizhao

    2018-01-01

    Precise and efficient noise variance estimation is very important for the processing of all kinds of signals when using the wavelet transform to analyze signals and extract signal features. In view of the problem that the accuracy of traditional noise variance estimation is greatly affected by fluctuations in the noise values, this study puts forward a strategy of using a two-state Gaussian mixture model to classify the high-frequency wavelet coefficients at the finest scale, which takes both efficiency and accuracy into account. Based on the noise variance estimate, a novel improved wavelet threshold function is proposed by combining the advantages of the hard and soft threshold functions, and on the basis of the noise variance estimation algorithm and the improved threshold function, a novel wavelet threshold de-noising method is put forward. The method is tested and validated using random signals and bench test data from an electro-mechanical transmission system. The test results indicate that the wavelet threshold de-noising method based on noise variance estimation performs well in processing the test signals of the electro-mechanical transmission system: it can effectively eliminate the interference of transient signals, including voltage, current, and oil pressure, and favorably maintain the dynamic characteristics of the signals.
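
    The building blocks referred to above can be sketched compactly. The snippet below shows the standard hard and soft threshold functions, a simple hard/soft compromise of the kind the paper improves upon (the authors' exact functional form is not reproduced here), and Donoho's universal threshold driven by a robust noise estimate.

        import numpy as np

        def hard_threshold(w, t):
            # keep coefficients whose magnitude exceeds t, zero the rest
            return np.where(np.abs(w) > t, w, 0.0)

        def soft_threshold(w, t):
            # shrink surviving coefficients toward zero by t (biased for large w)
            return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

        def compromise_threshold(w, t, alpha=0.5):
            # alpha=1 recovers soft, alpha=0 recovers hard thresholding;
            # intermediate alpha reduces the constant bias of soft thresholding
            return np.where(np.abs(w) > t, np.sign(w) * (np.abs(w) - alpha * t), 0.0)

        def universal_threshold(finest_coeffs):
            # sigma from the median absolute deviation of finest-scale details
            sigma = np.median(np.abs(finest_coeffs)) / 0.6745
            return sigma * np.sqrt(2 * np.log(finest_coeffs.size))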

  7. Method of up-front load balancing for local memory parallel processors

    NASA Technical Reports Server (NTRS)

    Baffes, Paul Thomas (Inventor)

    1990-01-01

    In a parallel processing computer system with multiple processing units and shared memory, a method is disclosed for uniformly balancing the aggregate computational load in, and utilizing minimal memory by, a network having identical computations to be executed at each connection therein. Read-only and read-write memory are subdivided into a plurality of process sets, which function like artificial processing units. Said plurality of process sets is iteratively merged and reduced to the number of processing units without exceeding the balance load. Said merger is based upon the value of a partition threshold, which is a measure of the memory utilization. The turnaround time and memory savings of the instant method are functions of the number of processing units available and the number of partitions into which the memory is subdivided. Typical results of the preferred embodiment yielded memory savings of from sixty to seventy five percent.

  8. The impact of vaccine success and awareness on epidemic dynamics

    NASA Astrophysics Data System (ADS)

    Juang, Jonq; Liang, Yu-Hao

    2016-11-01

    The role of vaccine success is introduced into an epidemic spreading model consisting of three states: susceptible, infectious, and vaccinated. Moreover, the effect of three types, namely, contact, local, and global, of infection awareness and immunization awareness is also taken into consideration. The model generalizes those considered in Pastor-Satorras and Vespignani [Phys. Rev. E 63, 066117 (2001)], Pastor-Satorras and Vespignani [Phys. Rev. E 65, 036104 (2002)], Moreno et al. [Eur. Phys. J. B 26, 521-529 (2002)], Wu et al. [Chaos 22, 013101 (2012)], and Wu et al. [Chaos 24, 023108 (2014)]. Our main results contain the following. First, the epidemic threshold is explicitly obtained. In particular, we show that, for any initial conditions, the epidemic eventually dies out regardless of what other factors are whenever some type of immunization awareness is considered, and vaccination has a perfect success. Moreover, the threshold is independent of the global type of awareness. Second, we compare the effect of contact and local types of awareness on the epidemic thresholds between heterogeneous networks and homogeneous networks. Specifically, we find that the epidemic threshold for the homogeneous network can be lower than that of the heterogeneous network in an intermediate regime for intensity of contact infection awareness while it is higher otherwise. In summary, our results highlight the important and crucial roles of both vaccine success and contact infection awareness on epidemic dynamics.

  9. Dynamical decoupling of local transverse random telegraph noise in a two-qubit gate

    NASA Astrophysics Data System (ADS)

    D'Arrigo, A.; Falci, G.; Paladino, E.

    2015-10-01

    Achieving high-fidelity universal two-qubit gates is a central requisite of any implementation of quantum information processing. The presence of spurious fluctuators of various physical origin represents a limiting factor for superconducting nanodevices. Operating qubits at optimal points, where the qubit-fluctuator interaction is transverse with respect to the single qubit Hamiltonian, considerably improved single qubit gates. Further enhancement has been achieved by dynamical decoupling (DD). In this article we investigate DD of transverse random telegraph noise acting locally on each of the qubits forming an entangling gate. Our analysis is based on the exact numerical solution of the stochastic Schrödinger equation. We evaluate the gate error under local periodic, Carr-Purcell and Uhrig DD sequences. We find that a threshold value of the number, n, of pulses exists above which the gate error decreases with a sequence-specific power-law dependence on n. Below threshold, DD may even increase the error with respect to the unconditioned evolution, a behaviour reminiscent of the anti-Zeno effect.

  10. New model for high-power electromagnetic field instability in transparent media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gruzdev, V.E.; Libenson, M.N.

    A model of high-power field instability is developed to describe a local abrupt increase of electromagnetic field intensity in a transparent dielectric. A small local enhancement of the field amplitude is initiated by a low-absorbing spherical inclusion whose size is less than the radiation wavelength. Exceeding the optical bistability threshold results in an abrupt increase of the field amplitude in the defect, which in turn leads to a local increase of the field amplitude in the host material in the vicinity of the inclusion. Bearing in mind the nonlinear dependence of the refractive index of the host material on light intensity, we develop a model to describe the spreading of the initial defect up to a size appropriate for the first resonant field mode to be formed. The increase of the refractive index due to nonlinear light-matter interaction and the existence of high-Q eigenmodes of a dielectric sphere can both create positive feedback and result in field instability in the medium. Estimates of the threshold value of the incident-field amplitude are obtained.

  11. Effect of arginine vasopressin in the nucleus raphe magnus on antinociception in the rat.

    PubMed

    Yang, Jun; Chen, Jian-Min; Liu, Wen-Yan; Song, Cao-You; Wang, Cheng-Hai; Lin, Bao-Cheng

    2006-09-01

    Previous work has shown that arginine vasopressin (AVP) regulates antinociception through brain nuclei rather than the spinal cord and peripheral organs. The present study investigated the nociceptive effect of AVP in the nucleus raphe magnus (NRM) of the rat. Microinjection of AVP into the NRM increased the pain threshold in a dose-dependent manner, while local administration of the AVP-receptor antagonist d(CH2)5Tyr(Et)DAVP decreased the pain threshold. Pain stimulation elevated the AVP concentration in the NRM perfusate. NRM pretreatment with the AVP-receptor antagonist completely reversed the effect of AVP on the pain threshold in the NRM. The data suggest that AVP in the NRM is involved in antinociception.

  12. Crack Growth Behavior in the Threshold Region for High Cyclic Loading

    NASA Technical Reports Server (NTRS)

    Forman, R.; Figert, J.; Beek, J.; Ventura, J.; Martinez, J.; Samonski, F.

    2011-01-01

    The present studies show that fanning in the threshold regime is likely caused by factors other than a plastic wake developed during load shedding. The cause of fanning at low R-values is localized roughness, mainly the formation of a faceted crack surface morphology, plus crack bifurcations, which alter the crack closure at low R-values. The crack growth behavior in the threshold regime involves both crack closure theory and the dislocation theory of metals. Research will continue on numerous other metal alloys, with more extensive analysis, such as of the variation in dislocation properties (e.g., stacking fault energy) and its effects in different materials.

  13. Multi-mode ultrasonic welding control and optimization

    DOEpatents

    Tang, Jason C.H.; Cai, Wayne W

    2013-05-28

    A system and method for providing multi-mode control of an ultrasonic welding system. In one embodiment, the control modes include the energy of the weld, the time of the welding process and the compression displacement of the parts being welded during the welding process. The method includes providing thresholds for each of the modes, and terminating the welding process after the threshold for each mode has been reached, the threshold for more than one mode has been reached or the threshold for one of the modes has been reached. The welding control can be either open-loop or closed-loop, where the open-loop process provides the mode thresholds and once one or more of those thresholds is reached the welding process is terminated. The closed-loop control provides feedback of the weld energy and/or the compression displacement so that the weld power and/or weld pressure can be increased or decreased accordingly.
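
    The open-loop logic amounts to monitoring each mode against its threshold and stopping on a chosen combination. The sketch below is a hypothetical rendering of that rule (the sensor names and the policy flag are assumptions), not the patented controller.

        def weld_until_threshold(readings, thresholds, policy="any"):
            """Stop the weld when the monitored modes (e.g. 'energy',
            'time', 'displacement') cross their thresholds, under an
            'any' or 'all' termination policy; names are illustrative."""
            for sample in readings:             # e.g. one dict per control tick
                hit = [sample[m] >= thresholds[m] for m in thresholds]
                if (policy == "any" and any(hit)) or (policy == "all" and all(hit)):
                    return sample               # terminate the weld here
            return None                         # thresholds never reached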

  14. Novel density-based and hierarchical density-based clustering algorithms for uncertain data.

    PubMed

    Zhang, Xianchao; Liu, Han; Zhang, Xiaotong

    2017-09-01

    Uncertain data have posed a great challenge to traditional clustering algorithms. Recently, several algorithms have been proposed for clustering uncertain data, and among them density-based techniques seem promising for handling data uncertainty. However, issues such as the loss of uncertainty information, high time complexity, and nonadaptive thresholds have not been addressed well in the previous density-based algorithm FDBSCAN and the hierarchical density-based algorithm FOPTICS. In this paper, we first propose a novel density-based algorithm, PDBSCAN, which improves on FDBSCAN in the following respects: (1) it employs a more accurate method to compute the probability that the distance between two uncertain objects is less than or equal to a boundary value, instead of the sampling-based method in FDBSCAN; (2) it introduces new definitions of probability neighborhood, support degree, core object probability, and direct reachability probability, thus reducing the complexity and solving the issue of the nonadaptive threshold (for core object judgement) in FDBSCAN. We then modify PDBSCAN into an improved version (PDBSCANi) by using a better cluster assignment strategy to ensure that every object is assigned to the most appropriate cluster, thus solving the issue of the nonadaptive threshold (for direct density reachability judgement) in FDBSCAN. Furthermore, as PDBSCAN and PDBSCANi have difficulty clustering uncertain data with non-uniform cluster density, we propose a novel hierarchical density-based algorithm, POPTICS, by extending the definitions of PDBSCAN, adding new definitions of fuzzy core distance and fuzzy reachability distance, and employing a new clustering framework. POPTICS can reveal the cluster structures of datasets with different local densities in different regions better than PDBSCAN and PDBSCANi, and it addresses the issues in FOPTICS. Experimental results demonstrate the superiority of our proposed algorithms over existing algorithms in accuracy and efficiency. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. Wrapping conformations of a polymer on a curved surface

    NASA Astrophysics Data System (ADS)

    Lin, Cheng-Hsiao; Tsai, Yan-Chr; Hu, Chin-Kun

    2007-03-01

    The conformation of a polymer on a curved surface is a central problem in polymer science. We assume that the free energy of the system is the sum of the bending energy of the polymer and the electrostatic attraction between the polymer and the surface. We further assume that the polymer is very stiff, with an invariant length for each segment, so that its tensile energy can be neglected and its total length treated as constant. Based on the principle of free-energy minimization, we apply a variational method with a locally undetermined Lagrange multiplier to obtain a set of equations for the polymer conformation in terms of local geometrical quantities. We have obtained numerical solutions for the conformations of polymer chains on cylindrical and ellipsoidal surfaces. Under given boundary conditions, the free energy profiles of the polymer chains behave differently and depend on the geometry of the surface. On the cylinder, the free energy of each segment is distributed within a narrow range, and its value per unit length oscillates almost periodically in the azimuthal angle. On the ellipsoid, however, the free energy is distributed over a wider range, with larger values at the ends of the chain and smaller values in the middle; a polymer wrapping an ellipsoidal surface therefore tends to unwrap from its endpoints. The dependence of the threshold wrapping length on the initially anchored position is also investigated. For given initial conditions, the threshold wrapping length is found to increase with the electrostatic attraction strength in the ellipsoidal case. When a polymer wraps around a spherical surface, the threshold length increases monotonically with the radius, without self-intersecting configurations of the polymer. We also discuss potential applications of the present theory to DNA/protein complexes and further research on DNA on curved surfaces.

  16. Imposed Power of Breathing Associated With Use of an Impedance Threshold Device

    DTIC Science & Technology

    2007-02-01

    threshold device and a sham impedance threshold device. DESIGN: Prospective randomized blinded protocol. SETTING: University medical center. PATIENTS...for males). METHODS: The volunteers completed 2 trials of breathing through a face mask fitted with an active impedance threshold device set to open...at -7 cmH2O pressure, or with a sham impedance threshold device, which was identical to the active device except that it did not contain an

  17. Threshold Evaluation of Emergency Risk Communication for Health Risks Related to Hazardous Ambient Temperature.

    PubMed

    Liu, Yang; Hoppe, Brenda O; Convertino, Matteo

    2018-04-10

    Emergency risk communication (ERC) programs that activate when the ambient temperature is expected to cross certain extreme thresholds are widely used to manage relevant public health risks. In practice, however, the effectiveness of these thresholds has rarely been examined. The goal of this study is to test if the activation criteria based on extreme temperature thresholds, both cold and heat, capture elevated health risks for all-cause and cause-specific mortality and morbidity in the Minneapolis-St. Paul Metropolitan Area. A distributed lag nonlinear model (DLNM) combined with a quasi-Poisson generalized linear model is used to derive the exposure-response functions between daily maximum heat index and mortality (1998-2014) and morbidity (emergency department visits; 2007-2014). Specific causes considered include cardiovascular, respiratory, renal diseases, and diabetes. Six extreme temperature thresholds, corresponding to 1st-3rd and 97th-99th percentiles of local exposure history, are examined. All six extreme temperature thresholds capture significantly increased relative risks for all-cause mortality and morbidity. However, the cause-specific analyses reveal heterogeneity. Extreme cold thresholds capture increased mortality and morbidity risks for cardiovascular and respiratory diseases and extreme heat thresholds for renal disease. Percentile-based extreme temperature thresholds are appropriate for initiating ERC targeting the general population. Tailoring ERC by specific causes may protect some but not all individuals with health conditions exacerbated by hazardous ambient temperature exposure. © 2018 Society for Risk Analysis.
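
    Deriving such percentile-based activation thresholds from a local exposure history is straightforward; a minimal sketch, assuming a synthetic daily maximum heat index series:

        import numpy as np

        def percentile_thresholds(daily_max_heat_index):
            # Cold thresholds at the 1st-3rd percentiles and heat thresholds
            # at the 97th-99th percentiles of the local exposure history.
            x = np.asarray(daily_max_heat_index, dtype=float)
            return np.percentile(x, [1, 2, 3]), np.percentile(x, [97, 98, 99])

        history = np.random.default_rng(1).normal(70.0, 20.0, 6000)  # synthetic series
        cold, heat = percentile_thresholds(history)
        print(cold, heat)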

  18. LCAMP: Location Constrained Approximate Message Passing for Compressed Sensing MRI

    PubMed Central

    Sung, Kyunghyun; Daniel, Bruce L; Hargreaves, Brian A

    2016-01-01

    Iterative thresholding methods have been extensively studied as faster alternatives to convex optimization methods for solving large-sized problems in compressed sensing. A novel iterative thresholding method called LCAMP (Location Constrained Approximate Message Passing) is presented for reducing computational complexity and improving reconstruction accuracy when a nonzero location (or sparse support) constraint can be obtained from view-shared images. LCAMP modifies the existing approximate message passing algorithm by replacing the thresholding stage with a location constraint, which avoids adjusting regularization parameters or thresholding levels. The method is first compared with conventional reconstruction methods using random 1D signals and then applied to dynamic contrast-enhanced breast MRI to demonstrate its excellent reconstruction accuracy (less than 2% absolute difference) and low computation time (5-10 seconds in Matlab) with highly undersampled 3D data (244 × 128 × 48; overall reduction factor = 10). PMID:23042658
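
    The key idea, replacing the thresholding stage with a known-support constraint, can be illustrated with a simpler projected-gradient iteration. This omits the Onsager correction term that distinguishes approximate message passing proper, and the Gaussian sensing model and problem sizes are assumptions:

        import numpy as np

        def iterate_with_support(A, y, support, n_iter=200):
            # Gradient step on ||y - Ax||^2, then keep only coefficients on the
            # known support: no regularization parameter or threshold level to tune.
            m, n = A.shape
            x = np.zeros(n)
            step = 1.0 / np.linalg.norm(A, 2) ** 2
            mask = np.zeros(n)
            mask[support] = 1.0
            for _ in range(n_iter):
                x = mask * (x + step * (A.T @ (y - A @ x)))
            return x

        rng = np.random.default_rng(2)
        n, m, k = 200, 80, 10
        support = rng.choice(n, size=k, replace=False)
        x_true = np.zeros(n); x_true[support] = rng.normal(size=k)
        A = rng.normal(size=(m, n)) / np.sqrt(m)
        x_hat = iterate_with_support(A, A @ x_true, support)
        print(np.max(np.abs(x_hat - x_true)))  # small: an exact support makes recovery easy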

  19. Effects of threshold on the topology of gene co-expression networks.

    PubMed

    Couto, Cynthia Martins Villar; Comin, César Henrique; Costa, Luciano da Fontoura

    2017-09-26

    Several developments regarding the analysis of gene co-expression profiles using complex network theory have been reported recently. Such approaches usually start with the construction of an unweighted gene co-expression network, therefore requiring the selection of a suitable threshold defining which pairs of vertices will be connected. We address this important problem by suggesting and comparing five different approaches for threshold selection, each of which considers a respective biologically motivated criterion for selecting a potentially suitable threshold. A set of 21 microarray experiments from different biological groups was used to investigate the effect of applying the five proposed criteria in several biological situations. For each experiment, we used the Pearson correlation coefficient to measure the relationship between each gene pair, and the resulting weight matrices were thresholded at several values, generating respective adjacency matrices (co-expression networks). Each of the five proposed criteria was then applied to select the respective threshold value. The effects of these thresholding approaches on the topology of the resulting networks were compared using several measurements, and we verified that, depending on the database, the impact on the topological properties can be large. However, a group of databases was found to be similarly affected by most of the considered criteria. Based on these results, we suggest that when the generated networks present similar measurements, the thresholding method can be chosen with greater freedom; if the generated networks are markedly different, the thresholding method that better suits the interests of the specific research study is a reasonable choice.
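
    The construction step shared by all five criteria, turning a correlation matrix into an unweighted network at a given threshold, is easy to sketch with synthetic expression data (the five selection criteria themselves are not reproduced):

        import numpy as np

        def coexpression_network(expr, threshold):
            # expr: genes x samples. Connect gene pairs whose absolute
            # Pearson correlation meets the chosen threshold.
            w = np.abs(np.corrcoef(expr))
            adj = (w >= threshold).astype(int)
            np.fill_diagonal(adj, 0)
            return adj

        expr = np.random.default_rng(3).normal(size=(50, 20))
        for t in (0.3, 0.5, 0.7):
            print(t, coexpression_network(expr, t).sum() // 2)  # edge count falls as t rises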

  20. Carbon dioxide laser polishing of fused silica surfaces for increased laser-damage resistance at 1064 nm.

    PubMed

    Temple, P A; Lowdermilk, W H; Milam, D

    1982-09-15

    Mechanically polished fused silica surfaces were heated with continuous-wave CO2 laser radiation. Laser-damage thresholds of the surfaces were measured with 1064-nm 9-nsec pulses focused to small spots and with large-spot, 1064-nm, 1-nsec irradiation. A sharp transition from laser-damage-prone to highly laser-damage-resistant took place over a small range in CO2 laser power. The transition to high damage resistance occurred at a silica surface temperature where material softening began to take place, as evidenced by the onset of residual strain in the CO2 laser-processed part. The small-spot damage measurements show that some CO2 laser-treated surfaces have a local damage threshold as high as the bulk damage threshold of SiO2. On some CO2 laser-treated surfaces, large-spot damage thresholds were increased by a factor of 3-4 over thresholds of the original mechanically polished surface. These treated parts show no obvious change in surface appearance as seen in bright-field, Nomarski, or total internal reflection microscopy. They also show little change in transmissive figure. Further, antireflection films deposited on CO2 laser-treated surfaces have thresholds greater than those of antireflection films on mechanically polished surfaces.

  1. Tuning Piezo ion channels to detect molecular-scale movements relevant for fine touch

    PubMed Central

    Poole, Kate; Herget, Regina; Lapatsina, Liudmila; Ngo, Ha-Duong; Lewin, Gary R.

    2014-01-01

    In sensory neurons, mechanotransduction is sensitive, fast and requires mechanosensitive ion channels. Here we develop a new method to directly monitor mechanotransduction at defined regions of the cell-substrate interface. We show that molecular-scale (~13 nm) displacements are sufficient to gate mechanosensitive currents in mouse touch receptors. Using neurons from knockout mice, we show that displacement thresholds increase by one order of magnitude in the absence of stomatin-like protein 3 (STOML3). Piezo1 is the founding member of a class of mammalian stretch-activated ion channels, and we show that STOML3, but not other stomatin-domain proteins, brings the activation threshold for Piezo1 and Piezo2 currents down to ~10 nm. Structure–function experiments localize the Piezo modulatory activity of STOML3 to the stomatin domain, and higher-order scaffolds are a prerequisite for function. STOML3 is the first potent modulator of Piezo channels that tunes the sensitivity of mechanically gated channels to detect molecular-scale stimuli relevant for fine touch. PMID:24662763

  2. Data compression using adaptive transform coding. Appendix 1: Item 1. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Rost, Martin Christopher

    1988-01-01

    Adaptive low-rate source coders are described in this dissertation. These coders adapt by adjusting the complexity of the coder to match the local coding difficulty of the image: a threshold-driven maximum-distortion criterion selects the specific coder used. The different coders are built using variable-blocksize transform techniques, and the threshold criterion selects small transform blocks to code the more difficult regions and larger blocks to code the less complex regions. A theoretical framework is constructed within which these coders can be studied. An algorithm for selecting the optimal bit allocation for the quantization of transform coefficients is developed; it is developed more fully than the algorithms currently used in the literature and achieves more accurate bit assignments. Some upper and lower bounds for the bit-allocation distortion-rate function are derived, and an obtainable distortion-rate function is developed for a particular scalar quantizer mixing method that can be used to code transform coefficients at any rate.
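
    A minimal sketch of threshold-driven variable-blocksize selection, using block variance as a stand-in distortion measure (the dissertation's coders use their own distortion criterion, and the threshold value here is arbitrary):

        import numpy as np

        def select_blocks(img, x, y, size, dmax, min_size=4, out=None):
            # Code a block whole if its distortion proxy is under the maximum-
            # distortion threshold; otherwise split into four (quadtree recursion).
            out = [] if out is None else out
            block = img[y:y + size, x:x + size]
            if block.var() <= dmax or size <= min_size:
                out.append((x, y, size))
            else:
                h = size // 2
                for dx, dy in ((0, 0), (h, 0), (0, h), (h, h)):
                    select_blocks(img, x + dx, y + dy, h, dmax, min_size, out)
            return out

        noisy = np.random.default_rng(7).integers(0, 256, (64, 64)).astype(float)
        smooth = np.add.outer(np.arange(64.0), np.arange(64.0))   # low-detail image
        print(len(select_blocks(noisy, 0, 0, 64, dmax=4000.0)))   # many small blocks
        print(len(select_blocks(smooth, 0, 0, 64, dmax=4000.0)))  # one large block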

  3. Effect of wave localization on plasma instabilities. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Levedahl, William Kirk

    1987-01-01

    The Anderson model of wave localization in random media is invoked to study the effect of solar wind density turbulence on plasma processes associated with the solar type III radio burst. ISEE-3 satellite data indicate that a possible model for the type III process is the parametric decay of Langmuir waves, excited by solar flare electron streams, into daughter electromagnetic and ion acoustic waves. The threshold for this instability, however, is much higher than observed Langmuir wave levels because of rapid convection of the transverse electromagnetic daughter wave when the solar wind is assumed homogeneous. Langmuir and transverse waves near critical density satisfy the Ioffe-Regel criterion for wave localization in the solar wind with the observed density fluctuations of ~1 percent. Numerical simulations of wave propagation in random media confirm the localization length predictions of Escande and Souillard for stationary density fluctuations. For mobile density fluctuations, localized wave packets spread at the propagation velocity of the density fluctuations rather than the group velocity of the waves. Computer simulations using a linearized hybrid code show that an electron beam will excite localized Langmuir waves in a plasma with density turbulence. An action principle approach is used to develop a theory of nonlinear wave processes when waves are localized, and a theory of resonant-particle diffusion by localized waves is developed to explain the saturation of the beam-plasma instability. It is argued that localization of electromagnetic waves will allow the instability threshold to be exceeded for the parametric decay discussed above.

  4. Spatial connections in regional climate model rainfall outputs at different temporal scales: Application of network theory

    NASA Astrophysics Data System (ADS)

    Naufan, Ihsan; Sivakumar, Bellie; Woldemeskel, Fitsum M.; Raghavan, Srivatsan V.; Vu, Minh Tue; Liong, Shie-Yui

    2018-01-01

    Understanding the spatial and temporal variability of rainfall has always been a great challenge, and the impacts of climate change further complicate this issue. The present study employs the concepts of complex networks to study the spatial connections in rainfall, with emphasis on climate change and rainfall scaling. Rainfall outputs (during 1961-1990) from a regional climate model (i.e. Weather Research and Forecasting (WRF) model that downscaled the European Centre for Medium-range Weather Forecasts, ECMWF ERA-40 reanalyses) over Southeast Asia are studied, and data corresponding to eight different temporal scales (6-hr, 12-hr, daily, 2-day, 4-day, weekly, biweekly, and monthly) are analyzed. Two network-based methods are applied to examine the connections in rainfall: clustering coefficient (a measure of the network's local density) and degree distribution (a measure of the network's spread). The influence of rainfall correlation threshold (T) on spatial connections is also investigated by considering seven different threshold levels (ranging from 0.5 to 0.8). The results indicate that: (1) rainfall networks corresponding to much coarser temporal scales exhibit properties similar to that of small-world networks, regardless of the threshold; (2) rainfall networks corresponding to much finer temporal scales may be classified as either small-world networks or scale-free networks, depending upon the threshold; and (3) rainfall spatial connections exhibit a transition phase at intermediate temporal scales, especially at high thresholds. These results suggest that the most appropriate model for studying spatial connections may often be different at different temporal scales, and that a combination of small-world and scale-free network models might be more appropriate for rainfall upscaling/downscaling across all scales, in the strict sense of scale-invariance. The results also suggest that spatial connections in the studied rainfall networks in Southeast Asia are weak, especially when more stringent conditions are imposed (i.e. when T is very high), except at the monthly scale.
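
    The two measures can be computed across thresholds with networkx; a sketch with synthetic site-by-time rainfall standing in for the WRF outputs:

        import numpy as np
        import networkx as nx

        def network_measures(series, thresholds):
            # series: sites x time. For each threshold T, build the unweighted
            # network and report local density (clustering) and spread (degrees).
            corr = np.corrcoef(series)
            np.fill_diagonal(corr, 0.0)  # no self-loops
            for t in thresholds:
                g = nx.from_numpy_array((corr >= t).astype(int))
                degrees = [d for _, d in g.degree()]
                print(t, nx.average_clustering(g), np.bincount(degrees))

        rain = np.random.default_rng(4).gamma(2.0, 3.0, size=(30, 365))
        network_measures(rain, [0.5, 0.6, 0.7, 0.8])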

  5. Can we set a global threshold age to define mature forests?

    PubMed

    Martin, Philip; Jung, Martin; Brearley, Francis Q; Ribbons, Relena R; Lines, Emily R; Jacob, Aerin L

    2016-01-01

    Globally, mature forests appear to be increasing in biomass density (BD). There is disagreement whether these increases are the result of increases in atmospheric CO2 concentrations or a legacy effect of previous land-use. Recently, it was suggested that a threshold of 450 years should be used to define mature forests and that many forests increasing in BD may be younger than this. However, the study making these suggestions failed to account for the interactions between forest age and climate. Here we revisit the issue to identify: (1) how climate and forest age control global forest BD and (2) whether we can set a threshold age for mature forests. Using data from previously published studies we modelled the impacts of forest age and climate on BD using linear mixed effects models. We examined the potential biases in the dataset by comparing how representative it was of global mature forests in terms of its distribution, the climate space it occupied, and the ages of the forests used. BD increased with forest age, mean annual temperature and annual precipitation. Importantly, the effect of forest age increased with increasing temperature, but the effect of precipitation decreased with increasing temperatures. The dataset was biased towards northern hemisphere forests in relatively dry, cold climates. The dataset was also clearly biased towards forests <250 years of age. Our analysis suggests that there is not a single threshold age for forest maturity. Since climate interacts with forest age to determine BD, a threshold age at which they reach equilibrium can only be determined locally. We caution against using BD as the only determinant of forest maturity since this ignores forest biodiversity and tree size structure which may take longer to recover. Future research should address the utility and cost-effectiveness of different methods for determining whether forests should be classified as mature.

  6. Influence of aging on thermal and vibratory thresholds of quantitative sensory testing.

    PubMed

    Lin, Yea-Huey; Hsieh, Song-Chou; Chao, Chi-Chao; Chang, Yang-Chyuan; Hsieh, Sung-Tsang

    2005-09-01

    Quantitative sensory testing has become a common approach to evaluating thermal and vibratory thresholds in various types of neuropathies. To understand the effect of aging on sensory perception, we measured warm, cold, and vibratory thresholds by performing quantitative sensory testing on a population of 484 normal subjects (175 males and 309 females), aged 48.61 +/- 14.10 (range 20-86) years. Sensory thresholds of the hand and foot were measured with two algorithms: the method of limits (Limits) and the method of level (Level). Thresholds measured by Limits are reaction-time-dependent, while those measured by Level are independent of reaction time. In addition, we explored (1) the correlations of thresholds between the two algorithms, (2) the effect of age on differences in thresholds between algorithms, and (3) differences in sensory thresholds between the two test sites. On multivariate regression analysis, age was consistently and significantly correlated with sensory thresholds of all tested modalities measured by both algorithms, compared with other factors including gender, body height, body weight, and body mass index. When thresholds were plotted against age, the slopes for the foot were steeper than those for the hand for each sensory modality. Sensory thresholds at both test sites measured by Level were highly correlated with those measured by Limits, thresholds measured by Limits were higher than those measured by Level, and the differences between the two algorithms were also correlated with age. Thresholds of the foot were higher than those of the hand for each sensory modality, and this difference between hand and foot (measured with both Level and Limits) was likewise correlated with age. These findings suggest that age is the most significant factor determining sensory thresholds, compared with gender and anthropometric parameters, and they provide a foundation for investigating the neurobiologic significance of aging in the processing of sensory stimuli.

  7. Global gray-level thresholding based on object size.

    PubMed

    Ranefall, Petter; Wählby, Carolina

    2016-04-01

    In this article, we propose a fast and robust global gray-level thresholding method based on object size, where the selection of threshold level is based on recall and maximum precision with regard to objects within a given size interval. The method relies on the component tree representation, which can be computed in quasi-linear time. Feature-based segmentation is especially suitable for biomedical microscopy applications where objects often vary in number, but have limited variation in size. We show that for real images of cell nuclei and synthetic data sets mimicking fluorescent spots the proposed method is more robust than all standard global thresholding methods available for microscopy applications in ImageJ and CellProfiler. The proposed method, provided as ImageJ and CellProfiler plugins, is simple to use and the only required input is an interval of the expected object sizes. © 2016 International Society for Advancement of Cytometry.
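
    A brute-force sketch of the idea: scan candidate gray levels and score each by the connected components whose sizes fall in the expected interval. The paper instead evaluates recall and precision efficiently on a component tree, which this deliberately does not reproduce; the scoring mix below is a simplification:

        import numpy as np
        from scipy import ndimage

        def size_based_threshold(img, size_lo, size_hi):
            # Pick the gray level yielding the most in-range objects,
            # weighted by the fraction of objects that are in range.
            best_t, best_score = None, -1.0
            for t in np.unique(img):
                labels, n = ndimage.label(img >= t)
                if n == 0:
                    continue
                sizes = np.bincount(labels.ravel())[1:]
                in_range = np.sum((sizes >= size_lo) & (sizes <= size_hi))
                score = in_range + in_range / n
                if score > best_score:
                    best_t, best_score = t, score
            return best_t

        img = np.random.default_rng(5).integers(0, 255, (128, 128))
        print(size_based_threshold(img, size_lo=20, size_hi=200))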

  8. Characteristics of the local cutaneous sensory thermoneutral zone

    PubMed Central

    Zhang, Hui; Arens, Edward A.

    2017-01-01

    Skin temperature detection thresholds have been used to measure human cold and warm sensitivity across the temperature continuum. They exhibit a sensory zone within which neither warm nor cold sensations prevail. This zone has been widely assumed to coincide with steady-state local skin temperatures between 32 and 34°C, but its underlying neurophysiology has rarely been investigated. In this study we employ two approaches to characterize the properties of sensory thermoneutrality, testing for each whether neutrality shifts along the temperature continuum depending on adaptation to a preceding thermal state. The focus is on local spots of skin on the palm. Ten participants (age: 30.3 ± 4.8 yr) underwent two experiments. Experiment 1 established the cold-to-warm inter-detection threshold range for the palm’s glabrous skin and its shift as a function of 3 starting skin temperatures (26, 31, or 36°C). For the same conditions, experiment 2 determined a thermally neutral zone centered around a thermally neutral point at which thermoreceptor activity is balanced. The zone was found to be narrow (~0.98 to ~1.33°C), moving with the starting skin temperature over the temperature span 27.5–34.9°C (Pearson r = 0.94; P < 0.001). It falls within the cold-to-warm inter-threshold range (~2.25 to ~2.47°C) but is only half as wide. These findings provide the first quantitative analysis of the local sensory thermoneutral zone in humans, indicating that it does not occur only within a specific range of steady-state skin temperatures (i.e., it shifts across the temperature continuum) and that it differs from the inter-detection threshold range both quantitatively and qualitatively. These findings provide insight into the neurophysiology of thermoreception. NEW & NOTEWORTHY Contrary to a widespread concept in human thermoreception, we show that local sensory thermoneutrality is achievable outside the 32–34°C skin temperature range. We propose that sensory adaptation underlies a new mechanism of temperature integration. Also, drawing on vision research, we have developed a new quantitative test addressing the balance in activity of cutaneous cold and warm thermoreceptors. This could have important clinical (assessment of somatosensory abnormalities in neurological disease) and applied (design of personal comfort systems) implications. PMID:28148644

  9. Study on the threshold of a stochastic SIR epidemic model and its extensions

    NASA Astrophysics Data System (ADS)

    Zhao, Dianli

    2016-09-01

    This paper provides a simple but effective method for estimating the threshold of a class of stochastic epidemic models by use of the nonnegative semimartingale convergence theorem. First, the threshold R0^SIR is obtained for the stochastic SIR model with a saturated incidence rate; whether its value lies below or above 1 completely determines whether the disease goes extinct or prevails, for any intensity of the white noise. Moreover, when R0^SIR > 1, the system is proved to be convergent in time mean. The thresholds of the stochastic SIVS models with and without a saturated incidence rate are then established by the same method. Compared with the previously known literature, the related results are improved, and the method is simpler than before.

  10. Threshold Voltage Instability in A-Si:H TFTS and the Implications for Flexible Displays and Circuits

    DTIC Science & Technology

    2008-12-01

    and negative gate voltages with and without elevated drain voltages for FDC TFTs. Extending techniques used to localize hot electron degradation...in MOSFETs, experiments in our lab have localized the degradation of a-Si:H to the gate dielectric/a-Si:H channel interface [Shringarpure, et al...saturation, increased drain-source current measured with the source and drain reversed indicates localization of ΔVth to the gate dielectric/amorphous

  11. Exploiting Synoptic-Scale Climate Processes to Develop Nonstationary, Probabilistic Flood Hazard Projections

    NASA Astrophysics Data System (ADS)

    Spence, C. M.; Brown, C.; Doss-Gollin, J.

    2016-12-01

    Climate model projections are commonly used for water resources management and planning under nonstationarity, but they do not reliably reproduce intense short-term precipitation and are instead more skilled at broader spatial scales. To provide a credible estimate of flood trend that reflects climate uncertainty, we present a framework that exploits the connections between synoptic-scale oceanic and atmospheric patterns and local-scale flood-producing meteorological events to develop long-term flood hazard projections. We demonstrate the method for the Iowa River, where high flow episodes have been found to correlate with tropical moisture exports that are associated with a pressure dipole across the eastern continental United States. We characterize the relationship between flooding on the Iowa River and this pressure dipole through a nonstationary Pareto-Poisson peaks-over-threshold probability distribution estimated from the historic record. We then combine the results of a trend analysis of the dipole index in the historic record with the results of a trend analysis of the dipole index as simulated by General Circulation Models (GCMs) under climate change conditions through a Bayesian framework. The resulting nonstationary posterior distribution of the dipole index, combined with the dipole-conditioned peaks-over-threshold flood frequency model, connects local flood hazard to changes in large-scale atmospheric pressure and circulation patterns that are related to flooding in a process-driven framework. The Iowa River example demonstrates that the resulting nonstationary, probabilistic flood hazard projection may be used to inform risk-based flood adaptation decisions.
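
    The stationary core of such a peaks-over-threshold model is easy to sketch with scipy; the study's dipole-conditioned, nonstationary extension and Bayesian trend analysis are not reproduced, and the synthetic flows below are placeholders:

        import numpy as np
        from scipy import stats

        def fit_pot(daily_flows, threshold):
            # Exceedance sizes follow a generalized Pareto distribution; the
            # exceedance count is summarized as a mean annual (Poisson) rate.
            exceedances = daily_flows[daily_flows > threshold] - threshold
            shape, _, scale = stats.genpareto.fit(exceedances, floc=0)
            rate_per_year = len(exceedances) / (len(daily_flows) / 365.25)
            return shape, scale, rate_per_year

        flows = np.random.default_rng(6).gumbel(200.0, 60.0, size=365 * 30)
        print(fit_pot(flows, np.percentile(flows, 99)))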

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Yuyu; Smith, Steven J.; Elvidge, Christopher

    Accurate information on urban areas at regional and global scales is important for both the science and policy-making communities. The Defense Meteorological Satellite Program/Operational Linescan System (DMSP/OLS) nighttime stable light (NTL) data provide a potential way to map urban areas and their dynamics economically and in a timely manner. In this study, we developed a cluster-based method to estimate optimal thresholds and map urban extents from the DMSP/OLS NTL data in five major steps: data preprocessing, urban cluster segmentation, logistic model development, threshold estimation, and urban extent delineation. Unlike previous fixed-threshold methods, which suffer from over- and under-estimation, our method estimates the optimal threshold for each cluster from its size and overall nightlight magnitude, so thresholds vary across clusters. Two large countries, the United States and China, with different urbanization patterns were selected for mapping urban extents using the proposed method. The results indicate that urbanized area occupies about 2% of total land area in the US, ranging from below 0.5% to above 10% at the state level, and less than 1% in China, ranging from below 0.1% to about 5% at the province level, with some municipalities as high as 10%. The derived thresholds and urban extents were evaluated against high-resolution land cover data at the cluster and regional levels, and the method was found to map urban area in both countries efficiently and accurately. Compared with previous threshold techniques, our method reduces over- and under-estimation when mapping urban extent over a large area. More importantly, it shows potential for mapping global urban extents and their temporal dynamics from the DMSP/OLS NTL data in a timely, cost-effective way.
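
    A heavily simplified sketch of the cluster-based idea: segment candidate clusters, then give each its own threshold as a function of cluster size and brightness. The linear form and coefficients below are placeholders; the study estimates this relationship with a logistic model calibrated against land cover data:

        import numpy as np
        from scipy import ndimage

        def cluster_based_urban(ntl, base=12.0, a=1e-4, b=0.4):
            # Segment clusters above a base level, then apply a per-cluster
            # threshold that grows with cluster size and mean brightness
            # (hypothetical linear form standing in for the logistic model).
            labels, n = ndimage.label(ntl > base)
            urban = np.zeros(ntl.shape, dtype=bool)
            for i in range(1, n + 1):
                cluster = labels == i
                t = base + a * cluster.sum() + b * ntl[cluster].mean()
                urban |= cluster & (ntl >= t)
            return urban

        ntl = np.random.default_rng(8).integers(0, 64, size=(200, 200)).astype(float)
        print(cluster_based_urban(ntl).mean())  # fraction of pixels mapped urban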

  13. The segmentation of Thangka damaged regions based on the local distinction

    NASA Astrophysics Data System (ADS)

    Xuehui, Bi; Huaming, Liu; Xiuyou, Wang; Weilan, Wang; Yashuai, Yang

    2017-01-01

    Damaged regions must be segmented before digitally repairing Thangka cultural relics. A new segmentation algorithm based on local distinction is proposed for segmenting damaged regions. It accounts for the transition zones that border some damaged areas, as well as the difference between damaged regions and their surroundings, by combining local gray value, local complexity, and local definition-complexity (LDC). First, the local complexity is calculated and normalized; second, the local definition-complexity is calculated and normalized; third, the local distinction is calculated; finally, a threshold is set to segment the local-distinction image, over-segmentation is removed, and the final segmentation result is obtained. The experimental results show that the algorithm is effective and can segment damaged frescoes, natural images, and similar content.
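
    A rough sketch of the pipeline, using local variance as a stand-in for the paper's complexity and definition-complexity terms (the exact definitions are not reproduced; the window size, threshold, and minimum fragment size are placeholders):

        import numpy as np
        from scipy import ndimage

        def local_distinction_mask(gray, size=7, thresh=0.5, min_pixels=25):
            # Local variance as a proxy for local complexity, normalized to
            # [0, 1]; threshold the distinction map, then drop tiny fragments
            # (the over-segmentation removal step).
            g = gray.astype(float)
            mean = ndimage.uniform_filter(g, size)
            var = ndimage.uniform_filter(g * g, size) - mean * mean
            distinction = (var - var.min()) / (var.max() - var.min() + 1e-9)
            labels, _ = ndimage.label(distinction > thresh)
            sizes = np.bincount(labels.ravel())
            keep = np.flatnonzero(sizes >= min_pixels)
            return np.isin(labels, keep[keep != 0])

        img = np.random.default_rng(9).integers(0, 256, (100, 100))
        print(local_distinction_mask(img).sum())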

  14. Intraoperative detection of 18F-FDG-avid tissue sites using the increased probe counting efficiency of the K-alpha probe design and variance-based statistical analysis with the three-sigma criteria

    PubMed Central

    2013-01-01

    Background Intraoperative detection of 18F-FDG-avid tissue sites during 18F-FDG-directed surgery can be very challenging when utilizing gamma detection probes that rely on a fixed target-to-background (T/B) ratio (ratiometric threshold) for determination of probe positivity. The purpose of our study was to evaluate the counting efficiency and the success rate of in situ intraoperative detection of 18F-FDG-avid tissue sites (using the three-sigma statistical threshold criteria method and the ratiometric threshold criteria method) for three different gamma detection probe systems. Methods Of 58 patients undergoing 18F-FDG-directed surgery for known or suspected malignancy using gamma detection probes, we identified nine 18F-FDG-avid tissue sites (from among seven patients) that were seen on same-day preoperative diagnostic PET/CT imaging, for which each 18F-FDG-avid tissue site underwent attempted in situ intraoperative detection concurrently using three gamma detection probe systems (the K-alpha probe and two commercially available PET-probe systems) and was subsequently surgically excised. Results The mean relative probe counting efficiency ratio was 6.9 (± 4.4, range 2.2–15.4) for the K-alpha probe, as compared to 1.5 (± 0.3, range 1.0–2.1) and 1.0 (± 0, range 1.0–1.0), respectively, for the two commercially available PET-probe systems (P < 0.001). Successful in situ intraoperative detection of 18F-FDG-avid tissue sites was more frequently accomplished with each of the three gamma detection probes tested by using the three-sigma statistical threshold criteria method than by using the ratiometric threshold criteria method, with the three-sigma method being significantly better than the ratiometric method for determining probe positivity for the K-alpha probe (P = 0.05). Conclusions Our results suggest that the improved probe counting efficiency of the K-alpha probe design, used in conjunction with the three-sigma statistical threshold criteria method, can allow for improved detection of 18F-FDG-avid tissue sites when a low in situ T/B ratio is encountered. PMID:23496877
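
    The three-sigma criterion is simple to state in code: under Poisson counting statistics the standard deviation of the background is roughly the square root of its mean, so a site is called positive when target counts exceed the background mean by three such deviations. A minimal sketch; the count values and the 1.5 ratiometric cutoff are hypothetical:

        import numpy as np

        def probe_positive(target_counts, bg_counts, method="three_sigma", ratio=1.5):
            # Three-sigma rule: mean target counts > background mean + 3*sqrt(mean),
            # with sigma ~ sqrt(mean) from Poisson counting statistics. The
            # ratiometric rule instead requires a fixed target-to-background ratio.
            bg_mean = np.mean(bg_counts)
            if method == "three_sigma":
                return np.mean(target_counts) > bg_mean + 3.0 * np.sqrt(bg_mean)
            return np.mean(target_counts) / bg_mean > ratio

        bg = [102, 95, 110, 98]  # synthetic background counts per interval
        print(probe_positive([142, 150, 138], bg))                  # True: exceeds mean + 3 sigma
        print(probe_positive([142, 150, 138], bg, method="ratio"))  # False: T/B only ~1.4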

  15. The effect of condoms on penile vibrotactile sensitivity thresholds in young, heterosexual men

    PubMed Central

    Hill, Brandon J.; Janssen, Erick; Kvam, Peter; Amick, Erick E.; Sanders, Stephanie A.

    2013-01-01

    Introduction Investigating the ways in which barrier methods such as condoms may affect penile sensory thresholds has potential relevance to the development of interventions in men who experience negative effects of condoms on sexual response and sensation. A quantitative, psychophysiological investigation examining the degree to which sensations are altered by condoms has, to date, not been conducted. Aim The objective of this study was to examine penile vibrotactile sensitivity thresholds in both flaccid and erect penises with and without a condom, while comparing men who do and those who do not report condom-associated erection problems (CAEP). Methods Penile vibrotactile sensitivity thresholds were assessed among a total of 141 young, heterosexual men using biothesiometry. An incremental two-step staircase method was used and repeated three times for each of four conditions. Intra-class correlation coefficients (ICC) were calculated for all vibratory assessments. Penile vibratory thresholds were compared using a mixed-model Analysis of Variance (ANOVA). Main Outcome Measures Penile vibrotactile sensitivity thresholds with and without a condom, erectile function measured by International Index of Erectile Function Questionnaire (IIEF), and self-reported degree of erection. Results Significant main effects of condoms (yes/no) and erection (yes/no) were found. No main or interaction effects of CAEP were found. Condoms were associated with higher penile vibrotactile sensitivity thresholds (F(1, 124)=17.11, p<.001). Penile vibrotactile thresholds were higher with an erect than with a flaccid penis (F(1, 124)=4.21, p=.042). Conclusion The current study demonstrates the feasibility of measuring penile vibratory thresholds with and without a condom in both erect and flaccid experimental conditions. As might be expected, condoms increased penile vibrotactile sensitivity thresholds. Interestingly, erections were associated with the highest thresholds. Thus, this study was the first to document that erect penises are less sensitive to vibrotactile stimulation than flaccid penises. PMID:24168347

  16. CHOW PARAMETERS IN THRESHOLD LOGIC,

    DTIC Science & Technology

    respect to threshold functions, they provide the optimal test-synthesis method for completely specified 7-argument (or less) functions, reflect the...signs and relative magnitudes of realizing weights and threshold, and can be used themselves as approximating weights. Results are reproved in a

  17. Effects of urbanization on benthic macroinvertebrate communities in streams, Anchorage, Alaska

    USGS Publications Warehouse

    Ourso, Robert T.

    2001-01-01

    The effect of urbanization on stream macroinvertebrate communities was examined using data gathered during a 1999 reconnaissance of 14 sites in the Municipality of Anchorage, Alaska. Data collected included macroinvertebrate abundance, water chemistry, and trace elements in bed sediments. Macroinvertebrate relative-abundance data were edited and used in metric and index calculations. Population density was used as a surrogate for urbanization. Cluster analysis of macroinvertebrate presence-absence data (unweighted pair-group method with arithmetic means) showed a well-defined separation between urbanized and nonurbanized sites, and also identified sites that did not fall cleanly into either category. Water quality in Anchorage generally declined with increasing urbanization (population density). Of 59 variables examined, 31 correlated with urbanization. Local regression analysis extracted 11 variables that showed a significant impairment-threshold response and 6 that showed a significant linear response. Significant biological variables for determining the impairment threshold in this study were the Margalef diversity index, Ephemeroptera-Plecoptera-Trichoptera taxa richness, and total taxa richness. Significant thresholds were observed in the water-chemistry variables conductivity, dissolved organic carbon, potassium, and total dissolved solids. Significant thresholds in trace elements in bed sediments included arsenic, iron, manganese, and lead. Results suggest that sites in Anchorage that have ratios of population density to road density greater than 70, storm-drain densities greater than 0.45 miles per square mile, road densities greater than 4 miles per square mile, or population densities greater than 125-150 persons per square mile may require further monitoring to determine whether the stream has become impaired. This population density is far less than the 1,000 persons per square mile used by the U.S. Census Bureau to define an urban area.

  18. Modifying Ventricular Fibrillation by Targeted Rotor Substrate Ablation: Proof-of-Concept from Experimental Studies to Clinical VF

    PubMed Central

    KRUMMEN, DAVID E.; HAYASE, JUSTIN; VAMPOLA, STEPHEN P.; HO, GORDON; SCHRICKER, AMIR A.; LALANI, GAUTAM G.; BAYKANER, TINA; COE, TAYLOR M.; CLOPTON, PAUL; RAPPEL, WOUTER-JAN; OMENS, JEFFREY H.; NARAYAN, SANJIV M.

    2016-01-01

    Introduction Recent work has suggested a role for organized sources in sustaining ventricular fibrillation (VF). We assessed whether ablation of rotor substrate could modulate VF inducibility in canines, and used this proof-of-concept as a foundation to suppress antiarrhythmic drug-refractory clinical VF in a patient with structural heart disease. Methods and Results In 9 dogs, we introduced 64-electrode basket catheters into one or both ventricles, used rapid pacing at a recorded induction threshold to initiate VF, and then defibrillated after 18±8 seconds. Endocardial rotor sites were identified from basket recordings using phase mapping, and ablation was performed at nonrotor (sham) locations (7 ± 2 minutes) and then at rotor sites (8 ± 2 minutes, P = 0.10 vs. sham); the induction threshold was remeasured after each. Sham ablation did not alter canine VF induction threshold (preablation 150 ± 16 milliseconds, postablation 144 ± 16 milliseconds, P = 0.54). However, rotor site ablation rendered VF noninducible in 6/9 animals (P = 0.041), and increased VF induction threshold in the remaining 3. Clinical proof-of-concept was performed in a patient with repetitive ICD shocks due to VF refractory to antiarrhythmic drugs. Following biventricular basket insertion, VF was induced and then defibrillated. Mapping identified 4 rotors localized at borderzone tissue, and rotor site ablation (6.3 ± 1.5 minutes/site) rendered VF noninducible. The VF burden fell from 7 ICD shocks in 8 months preablation to zero ICD therapies at 1 year, without antiarrhythmic medications. Conclusions Targeted rotor substrate ablation suppressed VF in an experimental model and a patient with refractory VF. Further studies are warranted on the efficacy of VF source modulation. PMID:26179310

  19. Super-Resolution Community Detection for Layer-Aggregated Multilayer Networks

    PubMed Central

    Taylor, Dane; Caceres, Rajmonda S.; Mucha, Peter J.

    2017-01-01

    Applied network science often involves preprocessing network data before applying a network-analysis method, and there is typically a theoretical disconnect between these steps. For example, it is common to aggregate time-varying network data into windows prior to analysis, and the trade-offs of this preprocessing are not well understood. Focusing on the problem of detecting small communities in multilayer networks, we study the effects of layer aggregation by developing random-matrix theory for modularity matrices associated with layer-aggregated networks with N nodes and L layers, which are drawn from an ensemble of Erdős–Rényi networks with communities planted in subsets of layers. We study phase transitions in which eigenvectors localize onto communities (allowing their detection), which occur for a given community provided its size surpasses a detectability limit K*. When layers are aggregated via summation, we obtain K* ∝ O(√(NL)/T), where T is the number of layers across which the community persists. Interestingly, if T is allowed to vary with L, then summation-based layer aggregation enhances small-community detection even if the community persists across a vanishing fraction of layers, provided that T/L decays more slowly than O(L^(-1/2)). Moreover, we find that thresholding the summation can, in some cases, cause K* to decay exponentially, decreasing by orders of magnitude in a phenomenon we call super-resolution community detection. In other words, layer aggregation with thresholding is a nonlinear data filter enabling detection of communities that are otherwise too small to detect. Importantly, different thresholds generally enhance the detectability of communities having different properties, illustrating that community detection can be obscured if one analyzes network data using a single threshold. PMID:29445565
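
    A toy sketch of summation-based aggregation with the thresholding step, on synthetic Erdős–Rényi layers with one planted community; the paper's random-matrix analysis of modularity matrices is not reproduced, and the sizes and probabilities are arbitrary:

        import numpy as np

        def aggregate(layers, tau=None):
            # Sum the L adjacency matrices; optionally keep an edge only if it
            # appears in at least tau layers (the nonlinear thresholding step).
            s = layers.sum(axis=0)
            return s if tau is None else (s >= tau).astype(int)

        rng = np.random.default_rng(10)
        N, L, T, K = 120, 40, 8, 12  # community of size K persists in T of L layers
        layers = (rng.random((L, N, N)) < 0.05).astype(int)
        layers[:T, :K, :K] |= (rng.random((T, K, K)) < 0.5).astype(int)  # planted block
        layers = np.triu(layers, 1)
        layers = layers + layers.transpose(0, 2, 1)  # symmetrize with zero diagonal
        agg = aggregate(layers, tau=5)
        print(agg[:K, :K].mean(), agg[K:, K:].mean())  # planted block stands out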

  20. Super-Resolution Community Detection for Layer-Aggregated Multilayer Networks.

    PubMed

    Taylor, Dane; Caceres, Rajmonda S; Mucha, Peter J

    2017-01-01

    Applied network science often involves preprocessing network data before applying a network-analysis method, and there is typically a theoretical disconnect between these steps. For example, it is common to aggregate time-varying network data into windows prior to analysis, and the trade-offs of this preprocessing are not well understood. Focusing on the problem of detecting small communities in multilayer networks, we study the effects of layer aggregation by developing random-matrix theory for modularity matrices associated with layer-aggregated networks with N nodes and L layers, which are drawn from an ensemble of Erdős-Rényi networks with communities planted in subsets of layers. We study phase transitions in which eigenvectors localize onto communities (allowing their detection), which occur for a given community provided its size surpasses a detectability limit K*. When layers are aggregated via summation, we obtain K* ∝ O(√(NL)/T), where T is the number of layers across which the community persists. Interestingly, if T is allowed to vary with L, then summation-based layer aggregation enhances small-community detection even if the community persists across a vanishing fraction of layers, provided that T/L decays more slowly than O(L^(-1/2)). Moreover, we find that thresholding the summation can, in some cases, cause K* to decay exponentially, decreasing by orders of magnitude in a phenomenon we call super-resolution community detection. In other words, layer aggregation with thresholding is a nonlinear data filter enabling detection of communities that are otherwise too small to detect. Importantly, different thresholds generally enhance the detectability of communities having different properties, illustrating that community detection can be obscured if one analyzes network data using a single threshold.
