A threshold selection method based on edge preserving
NASA Astrophysics Data System (ADS)
Lou, Liantang; Dan, Wei; Chen, Jiaqi
2015-12-01
A method of automatic threshold selection for image segmentation is presented. An optimal threshold is selected so that image edges are well preserved during segmentation. The shortcoming of Otsu's method, which is based on gray-level histograms, is analyzed. The edge energy function of a bivariate continuous function is expressed as a line integral, and the edge energy function of an image is approximated by discretizing that integral. An optimal threshold method that maximizes the edge energy function is given. Several experimental results are also presented for comparison with Otsu's method.
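As an illustration of the contrast drawn above, the sketch below implements Otsu's histogram-based criterion alongside a simple edge-energy criterion that scores each candidate threshold by the gradient magnitude accumulated on the boundary of the thresholded object. This is a minimal reading of the idea, not the authors' exact discretized line integral; the function names and the boundary approximation are ours.

```python
import numpy as np

def otsu_threshold(img):
    """Otsu's method: maximize between-class variance of the gray-level histogram."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    omega = np.cumsum(p)                   # background class probability
    mu = np.cumsum(p * np.arange(256))     # cumulative mean
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    return int(np.nanargmax(sigma_b))

def edge_energy_threshold(img):
    """Hypothetical edge-energy criterion: pick the threshold whose binary
    boundary accumulates the most gradient magnitude (a crude discretized
    line integral of edge energy)."""
    gy, gx = np.gradient(img.astype(float))
    energy = np.hypot(gx, gy)
    best_t, best_e = 0, -1.0
    for t in range(1, 255):
        b = img >= t
        # boundary: foreground pixels with at least one background neighbor
        # (np.roll wraps at the borders; acceptable for a sketch)
        boundary = b & ~(np.roll(b, 1, 0) & np.roll(b, -1, 0) &
                         np.roll(b, 1, 1) & np.roll(b, -1, 1))
        e = energy[boundary].sum()
        if e > best_e:
            best_t, best_e = t, e
    return best_t
```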
Dual-threshold segmentation using Arimoto entropy based on chaotic bee colony optimization
NASA Astrophysics Data System (ADS)
Li, Li
2018-03-01
In order to extract targets from a complex background more quickly and accurately, and to further improve defect detection, a method of dual-threshold segmentation using Arimoto entropy based on chaotic bee colony optimization was proposed. Firstly, single-threshold selection based on Arimoto entropy was extended to dual-threshold selection so that the target could be separated from the background more accurately. Then, the intermediate variables in the Arimoto entropy dual-threshold formulae were computed recursively, eliminating redundant computation and reducing the amount of calculation. Finally, the local search phase of the artificial bee colony algorithm was improved with a chaotic sequence based on the tent map, and the fast search for the two optimal thresholds was carried out with the improved algorithm, accelerating the search considerably. A large number of experimental results show that, compared with existing segmentation methods such as multi-threshold segmentation using maximum Shannon entropy, two-dimensional Shannon entropy segmentation, two-dimensional Tsallis gray entropy segmentation and multi-threshold segmentation using reciprocal gray entropy, the proposed method segments targets more quickly and accurately, with superior segmentation quality. It proves to be a fast and effective method for image segmentation.
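A minimal sketch of two ingredients named above: a tent-map chaotic sequence of the kind used to diversify the bee colony's local search, and an exhaustive dual-threshold maximum-entropy selection. Shannon entropy stands in here for the Arimoto entropy, and both the recursive speed-ups and the bee-colony search itself are omitted; all names and parameters are illustrative.

```python
import numpy as np

def tent_map(x0, n, mu=1.99):
    """Chaotic tent-map sequence in (0, 1); a stand-in for the paper's
    chaotic perturbation of the bee colony's local search."""
    xs, x = np.empty(n), x0
    for i in range(n):
        x = mu * min(x, 1.0 - x)
        xs[i] = x
    return xs

def dual_threshold_entropy(img):
    """Exhaustive dual-threshold maximum-entropy selection over a 256-bin
    histogram. Shannon entropy replaces Arimoto entropy for simplicity."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()

    def H(lo, hi):  # entropy of the normalized histogram slice [lo, hi)
        q = p[lo:hi]
        w = q.sum()
        if w <= 0:
            return 0.0
        q = q[q > 0] / w
        return -(q * np.log(q)).sum()

    best = (0, 0, -np.inf)
    for t1 in range(1, 254):
        for t2 in range(t1 + 1, 255):
            h = H(0, t1) + H(t1, t2) + H(t2, 256)
            if h > best[2]:
                best = (t1, t2, h)
    return best[:2]   # the two selected thresholds
```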
Evaluation of Maryland abutment scour equation through selected threshold velocity methods
Benedict, S.T.
2010-01-01
The U.S. Geological Survey, in cooperation with the Maryland State Highway Administration, used field measurements of scour to evaluate the sensitivity of the Maryland abutment scour equation to the critical (or threshold) velocity variable. Four selected methods for estimating threshold velocity were applied to the Maryland abutment scour equation, and the predicted scour was compared with the field measurements. Results indicated that the performance of the Maryland abutment scour equation was sensitive to the threshold velocity, with some threshold velocity methods producing better estimates of predicted scour than others. In addition, results indicated that regional stream characteristics can affect the performance of the Maryland abutment scour equation, with moderate-gradient streams performing differently from low-gradient streams. On the basis of the findings of the investigation, guidance for selecting threshold velocity methods for application to the Maryland abutment scour equation is provided, and limitations are noted.
Error minimization algorithm for comparative quantitative PCR analysis: Q-Anal.
O'Connor, William; Runquist, Elizabeth A
2008-07-01
Current methods for comparative quantitative polymerase chain reaction (qPCR) analysis, the threshold and extrapolation methods, either make assumptions about PCR efficiency that require an arbitrary threshold selection process or extrapolate to estimate relative levels of messenger RNA (mRNA) transcripts. Here we describe an algorithm, Q-Anal, that blends elements of current methods to bypass assumptions regarding PCR efficiency and to improve the threshold selection process, minimizing error in comparative qPCR analysis. The algorithm uses iterative linear regression to identify the exponential phase for both target and reference amplicons and then selects, by minimizing linear regression error, a fluorescence threshold at which efficiencies for both amplicons have been defined. From this defined fluorescence threshold, the cycle time (Ct) and the error for both amplicons are calculated and used to determine the expression ratio. Ratios in complementary DNA (cDNA) dilution assays from qPCR data were analyzed by the Q-Anal method and compared with the threshold method and an extrapolation method. Dilution ratios determined by the Q-Anal and threshold methods were 86 to 118% of the expected cDNA ratios, but relative errors for the Q-Anal method were 4 to 10%, compared with 4 to 34% for the threshold method. In contrast, ratios determined by the extrapolation method were 32 to 242% of the expected cDNA ratios, with relative errors of 67 to 193%. Q-Anal will be a valuable and quick method for minimizing error in comparative qPCR analysis.
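The flavor of the Q-Anal idea can be sketched as follows, under our own assumptions about the details: slide a fixed-width window along the log-fluorescence curve, keep the most linear positive-slope window as the exponential phase, and invert the fit to read off Ct at a chosen fluorescence threshold. This is not the published algorithm's exact procedure.

```python
import numpy as np

def exponential_window(cycles, fluor, width=5):
    """Iterative linear regression: find the window of cycles where
    log-fluorescence is most linear with positive slope (the exponential
    phase). fluor must be baseline-subtracted and strictly positive."""
    logf = np.log(fluor)
    best = None
    for s in range(len(cycles) - width + 1):
        c, y = cycles[s:s + width], logf[s:s + width]
        slope, intercept = np.polyfit(c, y, 1)
        resid = y - (slope * c + intercept)
        sse = float(resid @ resid)
        if slope > 0 and (best is None or sse < best[0]):
            best = (sse, slope, intercept, s)
    return best  # (regression error, slope, intercept, window start)

def ct_at_threshold(slope, intercept, threshold):
    """Cycle time at which the fitted exponential crosses the threshold."""
    return (np.log(threshold) - intercept) / slope
```

In the paper's scheme the threshold would then be chosen where the regression error of both target and reference fits is minimal, so both efficiencies are defined at the same fluorescence level.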
Jing, X; Cimino, J J
2014-01-01
Graphical displays can make data more understandable; however, large graphs can challenge human comprehension. We have previously described a filtering method to provide high-level summary views of large data sets. In this paper we demonstrate our method for setting and selecting thresholds to limit graph size while retaining important information, applying it to large single and paired data sets taken from patient and bibliographic databases. Four case studies are used to illustrate the method. The data are either patient discharge diagnoses (coded using the International Classification of Diseases, Clinical Modification [ICD9-CM]) or Medline citations (coded using the Medical Subject Headings [MeSH]). We use combinations of different thresholds to obtain filtered graphs for detailed analysis. The setting and selection of thresholds, such as thresholds for node counts, class counts, ratio values, p values (for difference data sets), and percentiles of selected class-count thresholds, are demonstrated in detail in the case studies. The main steps include: data preparation, data manipulation, computation, and threshold selection and visualization. We also describe the data models for the different types of thresholds and the considerations involved in threshold selection. The filtered graphs are 1%-3% of the size of the original graphs. For our case studies, the graphs provide 1) the most heavily used ICD9-CM codes, 2) the codes with the most patients in a research hospital in 2011, 3) a profile of publications on "heavily represented topics" in MEDLINE in 2011, and 4) validated knowledge about adverse effects of rosiglitazone and new interesting areas in the ICD9-CM hierarchy associated with patients taking pioglitazone. Our filtering method reduces large graphs to a manageable size by removing relatively unimportant nodes. The graphical method provides summary views based on computation of usage frequency and the semantic context of hierarchical terminology. The method is applicable to large data sets (a hundred thousand records or more) and can be used to generate new hypotheses from data sets coded with hierarchical terminologies.
How to determine an optimal threshold to classify real-time crash-prone traffic conditions?
Yang, Kui; Yu, Rongjie; Wang, Xuesong; Quddus, Mohammed; Xue, Lifang
2018-08-01
One of the proactive approaches to reducing traffic crashes is to identify hazardous traffic conditions that may lead to a crash, known as real-time crash prediction. Threshold selection is one of its essential steps: once a crash risk evaluation model has estimated the probability of a crash occurring given a specific traffic condition, the threshold provides the cut-off point for the posterior probability that separates potential crash warnings from normal traffic conditions. There is, however, a dearth of research on how to effectively determine an optimal threshold; the few studies that discuss the predictive performance of the models have chosen thresholds subjectively. Subjective methods cannot automatically identify optimal thresholds under the different traffic and weather conditions met in real applications, so a theoretical method of selecting the threshold value is needed to avoid subjective judgments. The purpose of this study is to provide such a method for automatically identifying the optimal threshold. Considering the random effects of variable factors across roadway segments, a mixed logit model was used to develop the crash risk evaluation model and evaluate crash risk. Cross-entropy, between-class variance and other theories were investigated as ways of empirically identifying the optimal threshold, and K-fold cross-validation was used to validate the performance of the proposed threshold selection methods against several evaluation criteria. The results indicate that (i) the mixed logit model achieves good performance, and (ii) the classification performance of the threshold selected by the minimum cross-entropy method outperforms the other methods according to the criteria. This method is well suited to identifying thresholds automatically in crash prediction: it minimizes the cross entropy between the original dataset, with its continuous probabilities of a crash occurring, and the binarized dataset obtained after the threshold separates potential crash warnings from normal traffic conditions.
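The minimum cross-entropy idea reported above as the best performer can be sketched roughly as follows, assuming a formulation in the spirit of Li's minimum cross-entropy thresholding applied to the model's predicted crash probabilities; the paper's exact objective may differ.

```python
import numpy as np

def min_cross_entropy_cutoff(probs):
    """Pick the cut-off minimizing the cross entropy between the continuous
    crash probabilities and their binarized two-class-mean representation.
    A sketch, not the paper's exact formulation; probs must lie in (0, 1)."""
    probs = np.asarray(probs, dtype=float)
    best_t, best_d = None, np.inf
    for t in np.unique(probs):
        lo, hi = probs[probs < t], probs[probs >= t]
        if len(lo) == 0 or len(hi) == 0:
            continue  # a valid cut-off must leave both classes non-empty
        # cross-entropy criterion (Li-style, up to an additive constant)
        d = -(lo * np.log(lo.mean())).sum() - (hi * np.log(hi.mean())).sum()
        if d < best_d:
            best_t, best_d = t, d
    return best_t

# predicted probabilities above the cut-off become crash warnings:
# warn = probs >= min_cross_entropy_cutoff(probs)
```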
Optimum threshold selection method of centroid computation for Gaussian spot
NASA Astrophysics Data System (ADS)
Li, Xuxu; Li, Xinyang; Wang, Caixia
2015-10-01
Centroid computation of a Gaussian spot is often conducted to obtain the exact position of a target or to measure wave-front slopes in the fields of target tracking and wave-front sensing. Center of Gravity (CoG) is the most traditional method of centroid computation, known for its low algorithmic complexity. However, both electronic noise from the detector and photonic noise from the environment reduce its accuracy. To improve the accuracy, thresholding before centroid computation is unavoidable, and an optimum threshold needs to be selected. In this paper, a model of the Gaussian spot is established to analyze the performance of the optimum threshold under different Signal-to-Noise Ratio (SNR) conditions. Two optimum threshold selection methods are introduced: TmCoG, which uses m% of the maximum spot intensity as the threshold, and TkCoG, which uses μn + kσn as the threshold, where μn and σn are the mean and standard deviation of the background noise. Firstly, the impact of each method on the detection error under various SNR conditions is simulated to establish how to set the value of k or m. Then, a comparison between them is made. According to the simulation results, TmCoG is superior to TkCoG in the accuracy of the selected threshold and yields a lower detection error.
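A compact sketch of the two selection rules as we read them: TmCoG subtracts a threshold set at a fraction m of the peak intensity, TkCoG subtracts one set at μn + kσn, and both then take the center of gravity of the residual signal. Clipping negative residuals to zero is our assumption, not stated in the abstract.

```python
import numpy as np

def cog(img):
    """Plain center of gravity of a spot image (assumes nonzero total)."""
    total = img.sum()
    ys, xs = np.indices(img.shape)
    return (ys * img).sum() / total, (xs * img).sum() / total

def tm_cog(img, m=0.2):
    """TmCoG: threshold at a fraction m of the maximum intensity
    (m=0.2 corresponds to 20%), subtract, clip, then CoG."""
    t = m * img.max()
    return cog(np.clip(img.astype(float) - t, 0, None))

def tk_cog(img, mu_n, sigma_n, k=3.0):
    """TkCoG: threshold at mu_n + k*sigma_n, where mu_n and sigma_n are
    the mean and standard deviation of the background noise."""
    t = mu_n + k * sigma_n
    return cog(np.clip(img.astype(float) - t, 0, None))
```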
A New Integrated Threshold Selection Methodology for Spatial Forecast Verification of Extreme Events
NASA Astrophysics Data System (ADS)
Kholodovsky, V.
2017-12-01
Extreme weather and climate events such as heavy precipitation, heat waves and strong winds can cause extensive damage to society in terms of human lives and financial losses. As the climate changes, it is important to understand how extreme weather events may change as a result. Climate and statistical models are often used independently to model these phenomena. To better assess the performance of climate models, a variety of spatial forecast verification methods have been developed. However, the spatial verification metrics widely used in comparing mean states in most cases lack an adequate theoretical justification for benchmarking extreme weather events. We propose a new integrated threshold selection methodology for spatial forecast verification of extreme events that couples existing pattern recognition indices with high threshold choices. This integrated approach has three main steps: 1) dimension reduction; 2) geometric domain mapping; and 3) threshold clustering. We apply this approach to an observed precipitation dataset over CONUS. The results are evaluated by displaying the threshold distribution seasonally, monthly and annually. The method offers users the flexibility of selecting a high threshold that is linked to desired geometrical properties. The proposed high-threshold methodology could either complement existing spatial verification methods, where threshold selection is arbitrary, or be directly applicable in extreme value theory.
Meta‐analysis of test accuracy studies using imputation for partial reporting of multiple thresholds
Deeks, J.J.; Martin, E.C.; Riley, R.D.
2017-01-01
Introduction: For tests reporting continuous results, primary studies usually provide test performance at multiple but often different thresholds. This creates missing data when performing a meta‐analysis at each threshold. A standard meta‐analysis (no imputation [NI]) ignores such missing data. A single imputation (SI) approach was recently proposed to recover missing threshold results. Here, we propose a new method that performs multiple imputation of the missing threshold results using discrete combinations (MIDC). Methods: The new MIDC method imputes missing threshold results by randomly selecting from the set of all possible discrete combinations which lie between the results for 2 known bounding thresholds. Imputed and observed results are then synthesised at each threshold. This is repeated multiple times, and the multiple pooled results at each threshold are combined using Rubin's rules to give final estimates. We compared the NI, SI, and MIDC approaches via simulation. Results: Both imputation methods outperform the NI method in simulations. There was generally little difference between the SI and MIDC methods, but the latter was noticeably better at estimating the between‐study variances and generally gave better coverage, due to slightly larger standard errors of pooled estimates. Given selective reporting of thresholds, the imputation methods also reduced bias in the summary receiver operating characteristic curve. Simulations demonstrate that the imputation methods rely on an equal threshold spacing assumption. A real example is presented. Conclusions: The SI and, in particular, MIDC methods can be used to examine the impact of missing threshold results in meta‐analysis of test accuracy studies. PMID:29052347
Thresholding Based on Maximum Weighted Object Correlation for Rail Defect Detection
NASA Astrophysics Data System (ADS)
Li, Qingyong; Huang, Yaping; Liang, Zhengping; Luo, Siwei
Automatic thresholding is an important technique for rail defect detection, but traditional methods are not well suited to the characteristics of this application. This paper proposes the Maximum Weighted Object Correlation (MWOC) thresholding method, designed around the facts that rail images are unimodal and that the defect proportion is small. MWOC selects a threshold by optimizing the product of the object correlation and a weight term that expresses the proportion of thresholded defects. Our experimental results demonstrate that MWOC achieves a misclassification error of 0.85% and outperforms the other well-established thresholding methods, including Otsu's method, maximum correlation thresholding, maximum entropy thresholding and the valley-emphasis method, for the application of rail defect detection.
Lower-upper-threshold correlation for underwater range-gated imaging self-adaptive enhancement.
Sun, Liang; Wang, Xinwei; Liu, Xiaoquan; Ren, Pengdao; Lei, Pingshun; He, Jun; Fan, Songtao; Zhou, Yan; Liu, Yuliang
2016-10-10
In underwater range-gated imaging (URGI), enhancement of low-brightness, low-contrast images is critical for human observation. Traditional histogram equalization over-enhances images, with the result that details are lost. To suppress over-enhancement, a lower-upper-threshold correlation method is proposed for self-adaptive enhancement in underwater range-gated imaging, based on double-plateau histogram equalization. The lower threshold determines image details and compresses over-enhancement, and it is correlated with the upper threshold. First, the upper threshold is updated by searching for the local maximum in real time; the lower threshold is then calculated from the upper threshold and the number of nonzero units selected from a filtered histogram. With this method, the backgrounds of underwater images are constrained while details are enhanced. Finally, validation experiments were performed. Peak signal-to-noise ratio, variance, contrast, and human visual properties are used to evaluate the objective quality of the global images and of regions of interest. The evaluation results demonstrate that the proposed method adaptively selects proper upper and lower thresholds under different conditions and contributes to URGI with effective image enhancement for human eyes.
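A minimal sketch of the double-plateau equalization step described above, with the lower and upper plateaus passed in directly; the paper's real-time upper-threshold search (local maxima) and the lower-upper correlation rule are not reproduced here.

```python
import numpy as np

def double_plateau_equalize(img, t_low, t_up):
    """Double-plateau histogram equalization: histogram bins are clipped to
    [t_low, t_up] before the cumulative mapping, so sparse detail bins are
    lifted (lower plateau) and dominant background bins are capped (upper
    plateau), compressing over-enhancement. img must be uint8."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    clipped = np.clip(hist, t_low, t_up)
    clipped[hist == 0] = 0   # unused gray levels get no share of the range
    cdf = np.cumsum(clipped).astype(float)
    lut = np.round(255 * cdf / cdf[-1]).astype(np.uint8)
    return lut[img]
```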
Method and apparatus for monitoring a hydrocarbon-selective catalytic reduction device
Schmieg, Steven J; Viola, Michael B; Cheng, Shi-Wai S; Mulawa, Patricia A; Hilden, David L; Sloane, Thompson M; Lee, Jong H
2014-05-06
A method for monitoring a hydrocarbon-selective catalytic reactor device of an exhaust aftertreatment system of an internal combustion engine operating lean of stoichiometry includes injecting a reductant into an exhaust gas feedstream upstream of the hydrocarbon-selective catalytic reactor device at a predetermined mass flowrate of the reductant, and determining a space velocity associated with a predetermined forward portion of the hydrocarbon-selective catalytic reactor device. When the space velocity exceeds a predetermined threshold space velocity, a temperature differential across the predetermined forward portion of the hydrocarbon-selective catalytic reactor device is determined, and a threshold temperature as a function of the space velocity and the mass flowrate of the reductant is determined. If the temperature differential across the predetermined forward portion of the hydrocarbon-selective catalytic reactor device is below the threshold temperature, operation of the engine is controlled to regenerate the hydrocarbon-selective catalytic reactor device.
Threshold selection for classification of MR brain images by clustering method
NASA Astrophysics Data System (ADS)
Moldovanu, Simona; Obreja, Cristian; Moraru, Luminita
2015-12-01
Given a grey-intensity image, our method detects the optimal threshold for a suitable binarization of MR brain images. In MR brain image processing, the grey levels of pixels belonging to the object are not substantially different from the grey levels belonging to the background. Threshold optimization is an effective tool for separating objects from the background and, further, for classification applications. This paper gives a detailed investigation of the selection of thresholds. Our method does not use the well-known binarization methods; instead, we perform a simple threshold optimization which, in turn, allows the best classification of the analyzed images into healthy subjects and multiple sclerosis patients. The dissimilarity (or distance between classes) is established using a clustering method based on dendrograms. We tested our method on two classes of images: 20 T2-weighted and 20 proton density (PD)-weighted scans, acquired from two healthy subjects and two patients with multiple sclerosis. For each image and each threshold, the number of white pixels (i.e., the area of white objects in the binary image) was determined; these pixel counts serve as the objects in the clustering operation. The following optimum threshold values were obtained: T = 80 for PD images and T = 30 for T2-weighted images. Each threshold clearly separates the clusters belonging to the studied groups, healthy subjects and patients with multiple sclerosis.
NASA Astrophysics Data System (ADS)
Zhu, C.; Zhang, S.; Xiao, F.; Li, J.; Yuan, L.; Zhang, Y.; Zhu, T.
2018-05-01
The NASA Operation IceBridge (OIB) mission, initiated in 2009, is currently the largest airborne program in polar remote sensing science observation, collecting airborne remote sensing measurements to bridge the gap between NASA's ICESat and the upcoming ICESat-2 mission. This paper develops an improved method that optimizes the selection of Digital Mapping System (DMS) images and uses an optimal threshold, obtained from experiments in the Beaufort Sea, to calculate the local instantaneous sea surface height in this area. The optimal threshold was determined by comparing manual selection with the lowest Airborne Topographic Mapper (ATM) L1B elevation thresholds of 2%, 1%, 0.5%, 0.2%, 0.1% and 0.05% in sections A, B and C; the corresponding means of the mean differences are 0.166 m, 0.124 m, 0.083 m, 0.018 m, 0.002 m and -0.034 m. Our study shows that the lowest 0.1% of the L1B data is the optimal threshold. The optimal threshold and manual selections were also used to calculate the instantaneous sea surface height over images with leads, and we find that the improved method agrees more closely with the L1B manual selections. For images without leads, the local instantaneous sea surface height is estimated using the linear equations relating distance to the sea surface heights calculated over images with leads.
Effects of threshold on the topology of gene co-expression networks.
Couto, Cynthia Martins Villar; Comin, César Henrique; Costa, Luciano da Fontoura
2017-09-26
Several developments regarding the analysis of gene co-expression profiles using complex network theory have been reported recently. Such approaches usually start with the construction of an unweighted gene co-expression network, which requires the selection of a suitable threshold defining which pairs of vertices will be connected. We address this important problem by suggesting and comparing five different approaches to threshold selection, each based on a respective biologically motivated criterion for choosing a potentially suitable threshold. A set of 21 microarray experiments from different biological groups was used to investigate the effect of applying the five proposed criteria in several biological situations. For each experiment, we used the Pearson correlation coefficient to measure the relationship between each gene pair, and the resulting weight matrices were thresholded at several values, generating the respective adjacency matrices (co-expression networks). Each of the five proposed criteria was then applied to select the respective threshold value. The effects of these thresholding approaches on the topology of the resulting networks were compared using several measurements, and we verified that, depending on the database, the impact on the topological properties can be large. However, a group of databases was found to be similarly affected by most of the considered criteria. Based on these results, we suggest that when the generated networks present similar measurements, the thresholding method can be chosen with greater freedom; if the generated networks are markedly different, the thresholding method that better suits the interests of the specific research study is a reasonable choice.
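A small sketch of the experimental pipeline described above, using NumPy only: correlate expression profiles, threshold the absolute correlations into an adjacency matrix, and summarize a few topological measurements per threshold. The specific measurements and the input format are our choices, not the paper's.

```python
import numpy as np

def coexpression_network(expr, threshold):
    """Unweighted co-expression network: genes are nodes, and an edge joins
    pairs whose |Pearson correlation| meets the threshold.
    expr: (genes x samples) array."""
    corr = np.corrcoef(expr)
    adj = np.abs(corr) >= threshold
    np.fill_diagonal(adj, False)
    return adj

def topology_summary(adj):
    """A few simple topological measurements to compare across thresholds."""
    n = adj.shape[0]
    degrees = adj.sum(axis=1)
    return {"mean_degree": float(degrees.mean()),
            "density": float(degrees.sum() / (n * (n - 1))),
            "isolated_nodes": int((degrees == 0).sum())}

# sweep thresholds and watch how the topology responds:
# expr = np.loadtxt("expression_matrix.txt")   # hypothetical input file
# for t in np.arange(0.50, 0.95, 0.05):
#     print(round(t, 2), topology_summary(coexpression_network(expr, t)))
```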
Fransz, Duncan P; Huurnink, Arnold; de Boode, Vosse A; Kingma, Idsart; van Dieën, Jaap H
2016-02-08
We aimed to provide insight into how threshold selection affects time to stabilization (TTS) and its reliability, to support the selection of methods for determining TTS. Eighty-two elite youth soccer players performed six single-leg drop jump landings. The TTS was calculated from four processed signals: the raw ground reaction force (GRF) signal (RAW), a moving root mean square window (RMS), a sequential average (SA), and an unbounded third-order polynomial fit (TOP). For each trial and processing method a wide range of thresholds was applied. Per threshold, the reliability of the TTS was assessed through intra-class correlation coefficients (ICCs) for the vertical (V), anteroposterior (AP) and mediolateral (ML) directions of force. Low thresholds resulted in a sharp increase in TTS values and in the percentage of trials in which the TTS exceeded the trial duration. The TTS and ICCs were essentially similar for RAW and RMS in all directions; ICCs were mostly 'insufficient' (<0.4) to 'fair' (0.4-0.6) over the entire range of thresholds. The SA signals resulted in the most stable ICC values across thresholds, being 'substantial' (>0.8) for V and 'moderate' (0.6-0.8) for AP and ML. The ICCs for TOP were 'substantial' for V, 'moderate' for AP, and 'fair' for ML. The present findings did not reveal an optimal threshold for assessing TTS in elite youth soccer players following a single-leg drop jump landing. Irrespective of threshold selection, the SA and TOP methods yielded sufficiently reliable TTS values, while for RAW and RMS the reliability was insufficient to differentiate between players.
Method for photon activation positron annihilation analysis
Akers, Douglas W.
2006-06-06
A non-destructive testing method comprises providing a specimen having at least one positron emitter therein; determining a threshold energy for activating the positron emitter; and determining whether the half-life of the positron emitter is less than a selected half-life. If the half-life of the positron emitter is greater than or equal to the selected half-life, the positron emitter is activated by bombarding the specimen with photons having energies greater than the threshold energy, and gamma rays produced by annihilation of positrons in the specimen are detected. If the half-life of the positron emitter is less than the selected half-life, the method alternates between activating the positron emitter by bombarding the specimen with photons having energies greater than the threshold energy and detecting gamma rays produced by positron annihilation within the specimen.
Wang, Ruiping; Jiang, Yonggen; Michael, Engelgau; Zhao, Genming
2017-06-12
The Chinese Center for Disease Control and Prevention (China CDC) developed the China Infectious Disease Automated-alert and Response System (CIDARS) in 2005. The CIDARS is used to strengthen infectious disease surveillance and aid in the early warning of outbreaks, and it has been integrated into the routine outbreak monitoring efforts of CDCs at all levels in China. The early warning threshold is crucial for outbreak detection in the CIDARS, but CDCs at all levels currently use thresholds recommended by the China CDC, and these recommended thresholds have recognized limitations. Our study therefore explores an operational method of selecting proper early warning thresholds according to the epidemic features of local infectious diseases. The data used in this study were extracted from the web-based Nationwide Notifiable Infectious Diseases Reporting Information System (NIDRIS), with infectious disease cases organized by calendar week (1-52) and year (2009-2015) in Excel format. Px was calculated using a percentile-based moving window (5 weeks × 5 years), where x represents one of 12 centiles (0.40, 0.45, 0.50, ..., 0.95). Outbreak signals for the 12 Px were calculated using the moving percentile method (MPM) based on data from the CIDARS. When the outbreak signals generated by the 'mean + 2SD' gold standard were in line with the Px-generated outbreak signals for each week of 2014, that Px was defined as the proper threshold for the infectious disease. Finally, the performance of the newly selected threshold for each infectious disease was evaluated against simulated outbreak signals based on 2015 data. Six infectious diseases were selected for this study: chickenpox, mumps, hand, foot and mouth disease (HFMD), scarlet fever, influenza and rubella. Proper thresholds were identified for chickenpox (P75), mumps (P80), influenza (P75), rubella (P45), HFMD (P75), and scarlet fever (P80). The selected thresholds for these six infectious diseases could detect almost all simulated outbreaks within a shorter time period than the thresholds recommended by the China CDC. It is beneficial to select proper early warning thresholds for detecting infectious disease aberrations in the CIDARS based on the characteristics and epidemic features of local diseases.
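A rough sketch of the moving percentile method as described, under our reading of the "[5 week*5 year]" window: the threshold for a given week is a percentile of the counts in the same five-calendar-week band over the five preceding years. The data structure and the wrap-around handling are assumptions.

```python
import numpy as np

def mpm_threshold(counts, year, week, x, window_weeks=2, window_years=5):
    """Moving percentile method: the early-warning threshold for (year, week)
    is the x-th percentile (x in percent, e.g. 75) of historical counts in a
    5-week x 5-year window ending the year before.
    counts: dict mapping (year, week) -> case count."""
    history = []
    for y in range(year - window_years, year):
        for w in range(week - window_weeks, week + window_weeks + 1):
            w_adj = (w - 1) % 52 + 1          # wrap weeks to 1..52
            if (y, w_adj) in counts:
                history.append(counts[(y, w_adj)])
    return np.percentile(history, x) if history else np.inf

# a current-week count above the threshold raises an outbreak signal:
# signal = counts[(2015, 20)] > mpm_threshold(counts, 2015, 20, x=75)
```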
Methods, apparatus and system for selective duplication of subtasks
Andrade Costa, Carlos H.; Cher, Chen-Yong; Park, Yoonho; Rosenburg, Bryan S.; Ryu, Kyung D.
2016-03-29
A method for selective duplication of subtasks in a high-performance computing system includes: monitoring a health status of one or more nodes in a high-performance computing system, where one or more subtasks of a parallel task execute on the one or more nodes; identifying one or more nodes as having a likelihood of failure which exceeds a first prescribed threshold; selectively duplicating the one or more subtasks that execute on the one or more nodes having a likelihood of failure which exceeds the first prescribed threshold; and notifying a messaging library that one or more subtasks were duplicated.
Zou, W; Ouyang, H
2016-02-01
We propose a multiple estimation adjustment (MEA) method to correct effect overestimation due to selection bias from a hypothesis-generating study (HGS) in pharmacogenetics. MEA uses a hierarchical Bayesian approach to jointly model individual effect estimates from maximum likelihood estimation (MLE) in a region and shrink them toward the regional effect. Unlike many methods that model a fixed selection scheme, MEA capitalizes on local multiplicity independent of selection. We compared mean square errors (MSEs) in simulated HGSs from naive MLE, MEA and a conditional likelihood adjustment (CLA) method that models threshold selection bias. We observed that MEA effectively reduced the MSE of MLE on null effects with or without selection, and that it had a clear advantage over CLA on extreme MLE estimates from null effects under lenient threshold selection in small samples, which are common among 'top' associations from a pharmacogenetics HGS.
Nastasi, Michael Anthony; Wang, Yongqiang; Fraboni, Beatrice; Cosseddu, Piero; Bonfiglio, Annalisa
2013-06-11
Organic thin film devices that included an organic thin film subjected to a selected dose of ions at a selected energy exhibited a stabilized mobility (μ) and threshold voltage (V_T), a decrease in contact resistance R_C, and an extended operational lifetime that did not degrade after 2000 hours of operation in air.
Basis Selection for Wavelet Regression
NASA Technical Reports Server (NTRS)
Wheeler, Kevin R.; Lau, Sonie (Technical Monitor)
1998-01-01
A wavelet basis selection procedure is presented for wavelet regression. Both the basis and the threshold are selected using cross-validation. The method can incorporate prior knowledge of smoothness (or of the shape of the basis functions) into the basis selection procedure. The method is demonstrated on sampled functions widely used in the wavelet regression literature, and its results are contrasted with other published methods.
Bayesian methods for estimating GEBVs of threshold traits
Wang, C-L; Ding, X-D; Wang, J-Y; Liu, J-F; Fu, W-X; Zhang, Z; Yin, Z-J; Zhang, Q
2013-01-01
Estimation of genomic breeding values is the key step in genomic selection (GS). Many methods have been proposed for continuous traits, but methods for threshold traits are still scarce. Here we introduced the threshold model into the framework of GS; specifically, we extended the three Bayesian methods BayesA, BayesB and BayesCπ on the basis of the threshold model for estimating genomic breeding values of threshold traits, terming the extended methods BayesTA, BayesTB and BayesTCπ, respectively. Computing procedures for the three BayesT methods using the Markov chain Monte Carlo algorithm were derived. A simulation study was performed to investigate the accuracy benefit of the presented methods for the genomic estimated breeding values (GEBVs) of threshold traits, and the factors affecting the performance of the three BayesT methods were addressed. As expected, the three BayesT methods generally performed better than the corresponding normal Bayesian methods, in particular when the number of phenotypic categories was small. In the standard scenario (number of categories = 2, incidence = 30%, number of quantitative trait loci = 50, h² = 0.3), the accuracies were improved by 30.4, 2.4, and 5.7 percentage points, respectively. In most scenarios, BayesTB and BayesTCπ generated similar accuracies and both performed better than BayesTA. In conclusion, our work proved that the threshold model fits well for predicting GEBVs of threshold traits, and BayesTCπ is suggested as the method of choice for GS of threshold traits. PMID:23149458
NASA Technical Reports Server (NTRS)
Heine, John J. (Inventor); Clarke, Laurence P. (Inventor); Deans, Stanley R. (Inventor); Stauduhar, Richard Paul (Inventor); Cullers, David Kent (Inventor)
2001-01-01
A system and method for analyzing a medical image to determine whether an abnormality is present, for example, in digital mammograms, includes the application of a wavelet expansion to a raw image to obtain subspace images of varying resolution. At least one subspace image is selected that has a resolution commensurate with a desired predetermined detection resolution range. A functional form of a probability distribution function is determined for each selected subspace image, and an optimal statistical normal image region test is determined for each selected subspace image. A threshold level for the probability distribution function is established from the optimal statistical normal image region test for each selected subspace image. A region size comprising at least one sector is defined, and an output image is created that includes a combination of all regions for each selected subspace image. Each region has a first value when the region intensity level is above the threshold and a second value when the region intensity level is below the threshold. This permits the localization of a potential abnormality within the image.
Experimental and environmental factors affect spurious detection of ecological thresholds
Daily, Jonathan P.; Hitt, Nathaniel P.; Smith, David; Snyder, Craig D.
2012-01-01
Threshold detection methods are increasingly popular for assessing nonlinear responses to environmental change, but their statistical performance remains poorly understood. We simulated linear change in stream benthic macroinvertebrate communities and evaluated the performance of commonly used threshold detection methods based on model fitting (piecewise quantile regression [PQR]), data partitioning (nonparametric change point analysis [NCPA]), and a hybrid approach (significant zero crossings [SiZer]). We demonstrated that false detection of ecological thresholds (type I errors) and inferences on threshold locations are influenced by sample size, rate of linear change, and frequency of observations across the environmental gradient (i.e., sample-environment distribution, SED). However, the relative importance of these factors varied among statistical methods and between inference types. False detection rates were influenced primarily by user-selected parameters for PQR (τ) and SiZer (bandwidth) and secondarily by sample size (for PQR) and SED (for SiZer). In contrast, the location of reported thresholds was influenced primarily by SED. Bootstrapped confidence intervals for NCPA threshold locations revealed strong correspondence to SED. We conclude that the choice of statistical methods for threshold detection should be matched to experimental and environmental constraints to minimize false detection rates and avoid spurious inferences regarding threshold location.
Rate-Compatible Protograph LDPC Codes
NASA Technical Reports Server (NTRS)
Nguyen, Thuy V. (Inventor); Nosratinia, Aria (Inventor); Divsalar, Dariush (Inventor)
2014-01-01
Digital communication coding methods resulting in rate-compatible low density parity-check (LDPC) codes built from protographs. Described digital coding methods start with a desired code rate and a selection of the numbers of variable nodes and check nodes to be used in the protograph. Constraints are set to satisfy a linear minimum distance growth property for the protograph. All possible edges in the graph are searched for the minimum iterative decoding threshold and the protograph with the lowest iterative decoding threshold is selected. Protographs designed in this manner are used in decode and forward relay channels.
Navarro, Pedro J; Fernández-Isla, Carlos; Alcover, Pedro María; Suardíaz, Juan
2016-07-27
This paper presents a robust method for defect detection in textures, entropy-based automatic selection of the wavelet decomposition level (EADL), based on a wavelet reconstruction scheme, for detecting defects in a wide variety of structural and statistical textures. Two main features are presented. One of the new features is an original use of the normalized absolute function value (NABS) calculated from the wavelet coefficients derived at various different decomposition levels in order to identify textures where the defect can be isolated by eliminating the texture pattern in the first decomposition level. The second is the use of Shannon's entropy, calculated over detail subimages, for automatic selection of the band for image reconstruction, which, unlike other techniques, such as those based on the co-occurrence matrix or on energy calculation, provides a lower decomposition level, thus avoiding excessive degradation of the image, allowing a more accurate defect segmentation. A metric analysis of the results of the proposed method with nine different thresholding algorithms determined that selecting the appropriate thresholding method is important to achieve optimum performance in defect detection. As a consequence, several different thresholding algorithms depending on the type of texture are proposed.
75 FR 18607 - Mandatory Reporting of Greenhouse Gases: Petroleum and Natural Gas Systems
Federal Register 2010, 2011, 2012, 2013, 2014
2010-04-12
... Comments may be submitted by any of the following methods: Federal eRulemaking Portal: http://www.regulations.gov. The notice includes sections on selection of the source category, selection of the reporting threshold, and selection of the proposed monitoring methods, and provides a brief summary of, and rationale for, the rule and the monitoring methods proposed.
Defect Detection of Steel Surfaces with Global Adaptive Percentile Thresholding of Gradient Image
NASA Astrophysics Data System (ADS)
Neogi, Nirbhar; Mohanta, Dusmanta K.; Dutta, Pranab K.
2017-12-01
Steel strips are used extensively for white goods, auto bodies and other applications where surface defects are not acceptable. On-line surface inspection systems can effectively detect and classify defects and help in taking corrective actions. For defect detection, gradients are widely used to highlight and subsequently segment areas of interest in a surface inspection system. Segmentation with a fixed threshold value often leads to unsatisfactory results, and because defects can be both very small and large in size, segmenting a gradient image with a fixed percentile threshold can lead to inadequate or excessive segmentation of defective regions. A global adaptive percentile thresholding of the gradient image has been formulated for blister defects and water deposits (a pseudo-defect) in steel strips. The developed method adaptively changes the percentile used for thresholding depending on the number of pixels above specific gray levels of the gradient image. The method segments defective regions selectively, preserving the characteristics of defects irrespective of their size, and performs better than Otsu's thresholding method and an adaptive thresholding method based on local properties.
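The adaptive rule might be sketched as follows; every parameter value here is an illustrative guess, since the abstract specifies only that the percentile is adjusted according to the count of high-gray-level gradient pixels.

```python
import numpy as np

def adaptive_percentile_threshold(grad, base_pct=99.5, high_gray=50,
                                  pct_drop=1.0, count_step=2000):
    """Sketch of global adaptive percentile thresholding: the percentile used
    to threshold the gradient image is lowered as the number of high-gradient
    pixels grows, so large defects are not truncated and clean images are not
    over-segmented. All parameter values are illustrative assumptions."""
    n_high = int((grad > high_gray).sum())
    pct = base_pct - pct_drop * (n_high // count_step)
    pct = max(pct, 90.0)            # keep the percentile within a sane range
    t = np.percentile(grad, pct)
    return grad > t, t              # binary defect mask and threshold used
```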
Threshold automatic selection hybrid phase unwrapping algorithm for digital holographic microscopy
NASA Astrophysics Data System (ADS)
Zhou, Meiling; Min, Junwei; Yao, Baoli; Yu, Xianghua; Lei, Ming; Yan, Shaohui; Yang, Yanlong; Dan, Dan
2015-01-01
The conventional quality-guided (QG) phase unwrapping algorithm is hard to apply to digital holographic microscopy because of its long execution time. In this paper, we present a hybrid phase unwrapping algorithm with automatic threshold selection that combines the existing QG algorithm with the flood-fill (FF) algorithm to solve this problem. The original wrapped phase map is divided into high- and low-quality sub-maps by an automatically selected threshold, and the FF and QG unwrapping algorithms are then used to unwrap the respective levels. The feasibility of the proposed method is demonstrated by experimental results, and its execution is shown to be much faster than that of the original QG unwrapping algorithm.
Uncertainty in determining extreme precipitation thresholds
NASA Astrophysics Data System (ADS)
Liu, Bingjun; Chen, Junfan; Chen, Xiaohong; Lian, Yanqing; Wu, Lili
2013-10-01
Extreme precipitation events are rare and occur mostly on a relatively small, local scale, which makes it difficult to set thresholds for extreme precipitation in a large basin. Based on long-term daily precipitation data from 62 observation stations in the Pearl River Basin, this study assessed the applicability of non-parametric, parametric, and detrended fluctuation analysis (DFA) methods for determining the extreme precipitation threshold (EPT), and the certainty of the EPTs from each method. Analyses from this study show that the non-parametric absolute-critical-value method is easy to use but unable to reflect differences in the spatial distribution of rainfall. The non-parametric percentile method can account for the spatial distribution of precipitation, but its threshold value is sensitive to the size of the rainfall data series and to the choice of percentile, making it difficult to determine reasonable threshold values for a large basin. The parametric method can provide the most apt description of extreme precipitation by fitting extreme precipitation distributions with probability distribution functions; however, the selection of probability distribution functions, the goodness-of-fit tests, and the size of the rainfall data series can greatly affect the fitting accuracy. In contrast to the non-parametric and parametric methods, which are unable to provide EPTs with certainty, the DFA method, although involving complicated computational processes, proved to be the most appropriate method, providing a unique set of EPTs for a large basin with uneven spatio-temporal precipitation distribution. The consistency of the spatial distribution of the DFA-based thresholds with the annual average precipitation and with the coefficients of variation (CV) and skewness (CS) of daily precipitation further shows that the EPTs determined by the DFA method are reasonable and applicable for the Pearl River Basin.
Evaluation of different methods for determining growing degree-day thresholds in apricot cultivars
NASA Astrophysics Data System (ADS)
Ruml, Mirjana; Vuković, Ana; Milatović, Dragan
2010-07-01
The aim of this study was to examine different methods for determining growing degree-day (GDD) threshold temperatures for two phenological stages (full bloom and harvest) and to select the optimal thresholds for a large number of apricot (Prunus armeniaca L.) cultivars grown in the Belgrade region. A 10-year data series was used. Several commonly used methods for determining threshold temperatures from field observations were evaluated: (1) the least standard deviation in GDD; (2) the least standard deviation in days; (3) the least coefficient of variation in GDD; (4) the regression coefficient; (5) the least standard deviation in days with a mean temperature above the threshold; (6) the least coefficient of variation in days with a mean temperature above the threshold; and (7) the smallest root mean square error between the observed and predicted number of days. In addition, two methods of calculating daily GDD and two methods of calculating daily mean air temperature were tested, to emphasize the differences that can arise from different interpretations of the basic GDD equation. The best agreement with observations was attained by method (7). The lower threshold temperature obtained by this method differed among cultivars from -5.6 to -1.7°C for full bloom, and from -0.5 to 6.6°C for harvest. However, the "Null" method (lower threshold set to 0°C) and the "Fixed Value" method (lower threshold set to -2°C for full bloom and 3°C for harvest) also gave very good results. The limitations of the widely used method (1) and of methods (5) and (6), which generally performed worst, are discussed in the paper.
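Method (7), which agreed best with observations, can be sketched as a brute-force search over base temperatures: for each candidate, derive the mean GDD requirement at the observed stage, predict the stage date in each year, and keep the base that minimizes the RMSE in days. In-sample calibration is used here for brevity; the names and the search grid are ours.

```python
import numpy as np

def gdd(tmean, base):
    """Daily growing degree-days above a base temperature."""
    return np.maximum(tmean - base, 0.0)

def best_base_temperature(years_tmean, observed_doy,
                          bases=np.arange(-6.0, 8.0, 0.1)):
    """Method (7) in miniature: choose the base temperature minimizing the
    RMSE between observed and predicted stage dates.
    years_tmean: list of daily mean-temperature arrays (one per year, Jan 1 on);
    observed_doy: observed day-of-year of the stage in each year."""
    best = (None, np.inf)
    for b in bases:
        sums = [np.cumsum(gdd(t, b)) for t in years_tmean]
        # mean cumulative GDD at the observed stage defines the requirement
        req = np.mean([s[d - 1] for s, d in zip(sums, observed_doy)])
        # predicted date: first day the cumulative GDD reaches the requirement
        pred = np.array([int(np.searchsorted(s, req)) + 1 for s in sums])
        rmse = np.sqrt(np.mean((pred - np.array(observed_doy)) ** 2))
        if rmse < best[1]:
            best = (float(b), float(rmse))
    return best  # (base temperature, RMSE in days)
```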
Joint Dictionary Learning for Multispectral Change Detection.
Lu, Xiaoqiang; Yuan, Yuan; Zheng, Xiangtao
2017-04-01
Change detection is one of the most important applications of remote sensing technology. It is a challenging task due to the obvious variations in the radiometric values of spectral signatures and the limited capability of utilizing spectral information. In this paper, an improved sparse coding method for change detection is proposed. The intuition behind the proposed method is that unchanged pixels in different images can be well reconstructed by a joint dictionary, which encodes knowledge of unchanged pixels, while changed pixels cannot. First, a query image pair is projected onto the joint dictionary to constitute the knowledge of unchanged pixels. The reconstruction error is then used to discriminate between changed and unchanged pixels in the different images. To select proper thresholds for determining changed regions, an automatic threshold selection strategy is presented that minimizes the reconstruction errors of the changed pixels. Extensive experiments on multispectral data were conducted, and comparisons with state-of-the-art methods demonstrate the superiority of the proposed method. The contributions of the proposed method can be summarized as follows: 1) joint dictionary learning is proposed to explore the intrinsic information of different images for change detection, so that change detection can be cast as a sparse representation problem; to the authors' knowledge, few publications utilize joint dictionary learning in change detection; 2) an automatic threshold selection strategy is presented that minimizes the reconstruction errors of the changed pixels without prior assumptions about the spectral signature, so the threshold value provided by the proposed method can adapt to different data thanks to the characteristics of joint dictionary learning; and 3) the proposed method makes no prior assumption about the modeling and handling of the spectral signature and can therefore be adapted to different data.
Froud, Robert; Abel, Gary
2014-01-01
Background: Receiver Operating Characteristic (ROC) curves are used to identify Minimally Important Change (MIC) thresholds on scales that measure a change in health status. In quasi-continuous patient-reported outcome measures, such as those that measure changes in chronic diseases with variable clinical trajectories, sensitivity and specificity are often valued equally. Although methodologists agree that these should be valued equally, different approaches have been taken to estimating MIC thresholds using ROC curves. Aims and objectives: We aimed to compare the different approaches in use with a new approach, exploring the extent to which the methods choose different thresholds and considering the effect of the differences on conclusions in responder analyses. Methods: Using graphical methods, hypothetical data, and data from a large randomised controlled trial of manual therapy for low back pain, we compared two existing approaches with a new approach based on the sum of squares of 1-sensitivity and 1-specificity. Results: The thresholds chosen by different estimators can diverge. The cut-point selected by an estimator depends on the relationship between the cut-points in ROC space and the different contours described by the estimators; in particular, asymmetry and the number of possible cut-points affect threshold selection. Conclusion: The choice of MIC estimator is important. Different methods of choosing cut-points can lead to materially different MIC thresholds and thus affect the results of responder analyses and trial conclusions. An estimator based on the smallest sum of squares of 1-sensitivity and 1-specificity is preferable when sensitivity and specificity are valued equally. Unlike other methods currently in use, the cut-point chosen by the sum-of-squares method always and efficiently chooses the cut-point closest to the top-left corner of ROC space, regardless of the shape of the ROC curve. PMID:25474472
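The preferred estimator is simple to state in code: scan the candidate cut-points and keep the one minimizing (1-sensitivity)² + (1-specificity)², i.e., the ROC point closest to the top-left corner. A minimal sketch, assuming change scores where higher means more improvement and a binary external criterion:

```python
import numpy as np

def sum_of_squares_cutpoint(change_scores, improved):
    """MIC cut-point minimizing (1 - sensitivity)^2 + (1 - specificity)^2,
    i.e., the ROC point closest to the top-left corner.
    change_scores: observed change on the instrument;
    improved: boolean external criterion (True = genuinely improved)."""
    scores = np.asarray(change_scores, dtype=float)
    improved = np.asarray(improved, dtype=bool)
    best_c, best_d = None, np.inf
    for c in np.unique(scores):
        pred = scores >= c                       # classified as improved
        sens = (pred & improved).sum() / improved.sum()
        spec = (~pred & ~improved).sum() / (~improved).sum()
        d = (1 - sens) ** 2 + (1 - spec) ** 2
        if d < best_d:
            best_c, best_d = c, d
    return best_c
```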
Binarization of Gray-Scaled Digital Images Via Fuzzy Reasoning
NASA Technical Reports Server (NTRS)
Dominquez, Jesus A.; Klinko, Steve; Voska, Ned (Technical Monitor)
2002-01-01
A new fast-computational technique based on a fuzzy entropy measure has been developed to find an optimal binary image threshold. In this method, the image pixel membership functions depend on the threshold value and reflect the distribution of pixel values in the two classes; the technique thus minimizes the classification error. The new method is compared with two of the best-known threshold selection techniques, Otsu and Huang-Wang. The performance of the proposed method surpasses that of the Huang-Wang and Otsu methods when the image consists of a textured background and poor printing quality. All three methods perform well, though they yield different binarizations, when the background and foreground of the image have well-separated gray-level ranges.
Adaptive 4D PSI-Based Change Detection
NASA Astrophysics Data System (ADS)
Yang, Chia-Hsiang; Soergel, Uwe
2018-04-01
In a previous work, we proposed a PSI-based 4D change detection method to detect disappearing and emerging PS points (3D) along with their occurrence dates (1D). Such change points are usually caused by anthropic events, e.g., building construction in cities. The method first divides an entire SAR image stack into several subsets separated by a set of break dates. PS points, selected based on their temporal coherence before or after a break date, are regarded as change candidates. Change points are then extracted from these candidates according to their change indices, which are modelled from the temporal coherences of the divided image subsets. Finally, we check the evolution of the change indices for each change point to detect the break date at which the change occurred. Experiments validated both the feasibility and the applicability of our method. However, two questions remain. First, the selection of the temporal coherence threshold involves a trade-off between the quality and quantity of PS points, and this selection also influences the number of change points in a more complex way. Second, heuristic selection of the change index thresholds is fragile and causes loss of change points. In this study, we adapt our approach to identify change points based on the statistical characteristics of the change indices rather than by thresholding. Experiments validate this adaptive approach and show an increase in change points compared with the previous version. In addition, we explore and discuss the optimal selection of the temporal coherence threshold.
Accuracy of cancellous bone volume fraction measured by micro-CT scanning.
Ding, M; Odgaard, A; Hvid, I
1999-03-01
Volume fraction, the single most important parameter in describing trabecular microstructure, can easily be calculated from three-dimensional reconstructions of micro-CT images. This study sought to quantify the accuracy of this measurement. One hundred and sixty human cancellous bone specimens which covered a large range of volume fraction (9.8-39.8%) were produced. The specimens were micro-CT scanned, and the volume fraction based on Archimedes' principle was determined as a reference. After scanning, all micro-CT data were segmented using individual thresholds determined by the scanner supplied algorithm (method I). A significant deviation of volume fraction from method I was found: both the y-intercept and the slope of the regression line were significantly different from those of the Archimedes-based volume fraction (p < 0.001). New individual thresholds were determined based on a calibration of volume fraction to the Archimedes-based volume fractions (method II). The mean thresholds of the two methods were applied to segment 20 randomly selected specimens. The results showed that volume fraction using the mean threshold of method I was underestimated by 4% (p = 0.001), whereas the mean threshold of method II yielded accurate values. The precision of the measurement was excellent. Our data show that care must be taken when applying thresholds in generating 3-D data, and that a fixed threshold may be used to obtain reliable volume fraction data. This fixed threshold may be determined from the Archimedes-based volume fraction of a subgroup of specimens. The threshold may vary between different materials, and so it should be determined whenever a study series is performed.
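Method II's calibration step can be sketched directly: since the volume fraction above a gray threshold decreases monotonically in that threshold, the specimen-specific threshold matching an Archimedes-based volume fraction is simply a gray-level percentile, and averaging over a calibration subgroup yields a fixed global threshold. A sketch under those assumptions:

```python
import numpy as np

def calibrate_fixed_threshold(stacks, reference_vf):
    """Find, per specimen, the gray threshold whose bone volume fraction
    matches the Archimedes-based reference, then return the mean threshold
    for use as a fixed global value (in the spirit of method II).
    stacks: list of gray-scale voxel arrays;
    reference_vf: matching list of Archimedes-based volume fractions (0..1)."""
    thresholds = []
    for vol, vf in zip(stacks, reference_vf):
        # the fraction of voxels above t equals vf exactly when t is the
        # (1 - vf) quantile of the gray values
        thresholds.append(np.percentile(vol, 100 * (1 - vf)))
    return float(np.mean(thresholds))

# apply the fixed threshold to a new scan:
# bone_mask = new_stack >= calibrate_fixed_threshold(calib_stacks, calib_vf)
```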
Queiroz, Polyane Mazucatto; Rovaris, Karla; Santaella, Gustavo Machado; Haiter-Neto, Francisco; Freitas, Deborah Queiroz
2017-01-01
Calculating root canal volume and surface area in micro-CT images requires image segmentation by threshold selection, which can be performed visually or automatically. Visual determination is influenced by the operator's visual acuity, while the automatic method is carried out entirely by computer algorithms. The aim was to compare visual and automatic segmentation and to determine the influence of the operator's visual acuity on the reproducibility of root canal volume and surface area measurements. Images from 31 extracted human anterior teeth were scanned with a μCT scanner. Three experienced examiners performed visual image segmentation, and threshold values were recorded. Automatic segmentation was done using the "Automatic Threshold Tool" available in the dedicated software provided by the scanner's manufacturer. Volume and surface area measurements were performed using the threshold values determined both visually and automatically. A paired Student's t-test showed no significant difference between the visual and automatic segmentation methods for root canal volume (p=0.93) or surface area (p=0.79). Although both methods can be used to determine the threshold and calculate root canal volume and surface area, the automatic method may be the most suitable for ensuring the reproducibility of threshold determination.
Improving ontology matching with propagation strategy and user feedback
NASA Astrophysics Data System (ADS)
Li, Chunhua; Cui, Zhiming; Zhao, Pengpeng; Wu, Jian; Xin, Jie; He, Tianxu
2015-07-01
Markov logic networks, which unify probabilistic graphical models and first-order logic, provide an excellent framework for ontology matching. The existing approach requires a threshold to produce matching candidates and uses a small set of constraints as a filter to select the final alignments. We introduce a novel match propagation strategy to model the influence between potential entity mappings across ontologies, which helps to identify correct correspondences and to recover missed ones. Since estimating an appropriate threshold is a difficult task, we propose an interactive method for threshold selection through which we obtain an additional measurable improvement. Experiments on a public dataset demonstrate the effectiveness of the proposed approach in terms of the quality of the resulting alignment.
Wang, Ruiping; Jiang, Yonggen; Guo, Xiaoqin; Wu, Yiling; Zhao, Genming
2017-01-01
Objective: The Chinese Center for Disease Control and Prevention developed the China Infectious Disease Automated-alert and Response System (CIDARS) in 2008. The CIDARS can detect outbreak signals in a timely manner but generates many false-positive signals, especially for diseases with seasonality. We assessed the influence of seasonality on infectious disease outbreak detection performance. Methods: Chickenpox surveillance data from Songjiang District, Shanghai were used. Optimized early alert thresholds for chickenpox were selected according to three algorithm evaluation indexes: sensitivity (Se), false alarm rate (FAR), and time to detection (TTD). The performance of the selected thresholds was assessed using data external to the study period. Results: The optimized early alert threshold for chickenpox during the epidemic season was the percentile P65, which demonstrated an Se of 93.33%, a FAR of 0%, and a TTD of 0 days. The optimized early alert threshold in the nonepidemic season was P50, demonstrating an Se of 100%, a FAR of 18.94%, and a TTD of 2.5 days. The performance evaluation demonstrated that using an optimized threshold adjusted for seasonality could reduce the FAR and shorten the TTD. Conclusions: Selection of optimized early alert thresholds based on local infectious disease seasonality could improve the performance of the CIDARS. PMID:28728470
Rainfall threshold calculation for debris flow early warning in areas with scarcity of data
NASA Astrophysics Data System (ADS)
Pan, Hua-Li; Jiang, Yuan-Jun; Wang, Jun; Ou, Guo-Qiang
2018-05-01
Debris flows are natural disasters that frequently occur in mountainous areas, usually accompanied by serious losses of lives and property. One of the most commonly used approaches to mitigating debris flow risk is the implementation of early warning systems based on well-calibrated rainfall thresholds. However, many mountainous areas have little rainfall and hazard data, especially in debris-flow-forming regions, so the traditional statistical approach of fitting an empirical relationship between rainstorms and debris flow events cannot produce reliable rainfall thresholds there. After the severe Wenchuan earthquake, large volumes of loose material were deposited in the gullies, which resulted in several debris flow events, and the triggering rainfall threshold decreased markedly. To obtain a reliable and accurate rainfall threshold and improve the accuracy of debris flow early warning, this paper develops a quantitative method, suited to debris flow triggering mechanisms in meizoseismal areas, that identifies the rainfall threshold in areas with a scarcity of data from the initiation mechanism of hydraulically driven debris flow. First, we studied the characteristics of the study area, including its meteorology, hydrology, topography, and the physical characteristics of the loose solid materials. Then, the rainfall threshold was calculated from the initiation mechanism of hydraulic debris flow; the resulting threshold curve is a function of the antecedent precipitation index (API) and the 1 h rainfall. To test the proposed method, we selected the Guojuanyan gully, a typical debris flow valley in the meizoseismal area of the Wenchuan earthquake that experienced several debris flow events during 2008-2013, as a case study. The comparison with other threshold models and configurations shows that the proposed approach is a promising starting point for further studies on debris flow early warning systems in areas with a scarcity of data.
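To make the threshold-curve idea concrete, here is a hedged sketch of an API-based warning check. The decay coefficient and the linear curve coefficients are placeholders, not the values calibrated in the paper:

```python
import numpy as np

def antecedent_precipitation_index(daily_rain_mm, k=0.84):
    """API as a decayed sum of prior daily rainfall, where
    daily_rain_mm[0] is yesterday's total. The decay coefficient
    k=0.84 is a common literature value, not the paper's."""
    days = np.arange(1, len(daily_rain_mm) + 1)
    return float(np.sum(np.asarray(daily_rain_mm) * k ** days))

def warning_triggered(api_mm, rain_1h_mm, a=0.2, b=25.0):
    """Hypothetical linear threshold curve: warn when the 1 h rainfall
    exceeds b - a*API. Coefficients a and b are placeholders that
    would come from the hydraulic initiation model."""
    return rain_1h_mm > b - a * api_mm

# Example: wet antecedent conditions lower the 1 h rainfall needed to warn.
api = antecedent_precipitation_index([12.0, 8.0, 0.0, 20.0])
print(api, warning_triggered(api, rain_1h_mm=22.0))
```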
Thresher: an improved algorithm for peak height thresholding of microbial community profiles.
Starke, Verena; Steele, Andrew
2014-11-15
This article presents Thresher, an improved technique for finding peak height thresholds for automated rRNA intergenic spacer analysis (ARISA) profiles. We argue that thresholds must be sample dependent, taking community richness into account. In most previous fragment analyses, a common threshold is applied to all samples simultaneously, ignoring richness variations among samples and thereby compromising cross-sample comparison. Our technique solves this problem and at the same time provides a robust method for outlier rejection, selecting for removal any replicate pairs that are not valid replicates. Thresholds are calculated individually for each replicate in a pair, and separately for each sample, and are selected to be the ones that minimize the dissimilarity between the replicates after thresholding. If a choice of threshold results in the two replicates in a pair failing a quantitative test of similarity, either that threshold or that sample must be rejected. We compare thresholded ARISA results with sequencing results and demonstrate that the Thresher algorithm outperforms conventional thresholding techniques. The software is implemented in R, and the code is available at http://verenastarke.wordpress.com or by contacting the author (vstarke@ciw.edu). Supplementary data are available at Bioinformatics online.
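A minimal Python sketch of the core idea, choosing the threshold that minimizes replicate dissimilarity after thresholding. The original Thresher is implemented in R, applies per-replicate thresholds, and includes outlier rejection, so this is illustrative only:

```python
import numpy as np
from scipy.spatial.distance import braycurtis

def choose_threshold(rep1, rep2, candidates):
    """Return the peak-height threshold minimizing the Bray-Curtis
    dissimilarity between two replicate profiles after thresholding.
    The choice of Bray-Curtis as the dissimilarity is an assumption."""
    best_t, best_d = None, np.inf
    for t in candidates:
        a = np.where(rep1 >= t, rep1, 0.0)   # suppress sub-threshold peaks
        b = np.where(rep2 >= t, rep2, 0.0)
        if a.any() and b.any():
            d = braycurtis(a, b)
            if d < best_d:
                best_t, best_d = t, d
    return best_t, best_d

# Example with two noisy replicate profiles:
rng = np.random.default_rng(1)
base = rng.exponential(100, 40)
r1, r2 = base + rng.normal(0, 5, 40), base + rng.normal(0, 5, 40)
print(choose_threshold(r1, r2, candidates=np.arange(0, 50, 1.0)))
```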
ERIC Educational Resources Information Center
Shinn-Cunningham, Barbara
2017-01-01
Purpose: This review provides clinicians with an overview of recent findings relevant to understanding why listeners with normal hearing thresholds (NHTs) sometimes suffer from communication difficulties in noisy settings. Method: The results from neuroscience and psychoacoustics are reviewed. Results: In noisy settings, listeners focus their…
NASA Astrophysics Data System (ADS)
Hsu, Kuo-Hsien
2012-11-01
Formosat-2 imagery is a kind of high-spatial-resolution (2 m GSD) remote sensing satellite data comprising one panchromatic band and four multispectral bands (blue, green, red, near-infrared). An essential step in the daily processing of received Formosat-2 imagery is to estimate the cloud statistics of each image using an Automatic Cloud Coverage Assessment (ACCA) algorithm; the cloud statistics are then recorded as important metadata in the image product catalog. In this paper, we propose an ACCA method with two consecutive stages: pre-processing and post-processing analysis. In the pre-processing stage, unsupervised K-means classification, Sobel's method, a thresholding method, re-examination of non-cloudy pixels, and a cross-band filter are applied in sequence to determine the cloud statistics. In the post-processing stage, a box-counting fractal method is applied. In other words, the cloud statistics are first determined in the pre-processing stage, and their correctness across spectral bands is then cross-examined qualitatively and quantitatively in the post-processing stage. The choice of thresholding method is critical to the result of the ACCA method. We therefore first conducted a series of experiments with clustering-based and spatial thresholding methods, including Otsu's method and the Local Entropy (LE), Joint Entropy (JE), Global Entropy (GE), and Global Relative Entropy (GRE) methods, for performance comparison. The results show that Otsu's and GE methods both perform better than the others for Formosat-2 imagery. Our proposed ACCA method, with Otsu's method selected as the thresholding stage, successfully extracted the cloudy pixels of Formosat-2 images for accurate cloud statistics estimation.
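Since the pipeline above settles on Otsu's method for its thresholding stage, a compact reference implementation of Otsu's histogram criterion may be useful; the bin count is an assumption:

```python
import numpy as np

def otsu_threshold(gray, nbins=256):
    """Classic Otsu threshold: pick the gray level that maximizes the
    between-class variance of the image histogram."""
    hist, edges = np.histogram(gray.ravel(), bins=nbins)
    p = hist.astype(float) / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                    # class-0 (background) probability
    mu = np.cumsum(p * centers)          # cumulative first moment
    mu_t = mu[-1]                        # global mean
    valid = (w0 > 0) & (w0 < 1)
    sigma_b2 = np.zeros_like(w0)
    sigma_b2[valid] = (mu_t * w0[valid] - mu[valid]) ** 2 \
        / (w0[valid] * (1.0 - w0[valid]))
    return centers[int(np.argmax(sigma_b2))]

# Example: bimodal data separates near the midpoint of the two modes.
rng = np.random.default_rng(0)
img = np.concatenate([rng.normal(60, 10, 5000), rng.normal(180, 15, 2000)])
print(otsu_threshold(img))
```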
Global gray-level thresholding based on object size.
Ranefall, Petter; Wählby, Carolina
2016-04-01
In this article, we propose a fast and robust global gray-level thresholding method based on object size, where the selection of the threshold level is based on recall and maximum precision with regard to objects within a given size interval. The method relies on the component tree representation, which can be computed in quasi-linear time. Feature-based segmentation is especially suitable for biomedical microscopy applications, where objects often vary in number but have limited variation in size. We show that for real images of cell nuclei and for synthetic data sets mimicking fluorescent spots, the proposed method is more robust than all standard global thresholding methods available for microscopy applications in ImageJ and CellProfiler. The proposed method, provided as ImageJ and CellProfiler plugins, is simple to use, and the only required input is an interval of expected object sizes. © 2016 International Society for Advancement of Cytometry.
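A simplified stand-in for the size-based selection idea (not the component-tree algorithm itself): sweep candidate gray levels and keep the one producing the most connected components inside the expected size interval. The exhaustive relabeling at every level is what the component tree avoids:

```python
import numpy as np
from scipy import ndimage

def size_based_threshold(img, smin, smax, candidates):
    """Pick the gray level yielding the most connected components whose
    pixel count falls in [smin, smax]. A crude proxy for the paper's
    recall/precision criterion; a full implementation would reuse a
    component tree instead of relabeling at every candidate level."""
    best_t, best_n = None, -1
    for t in candidates:
        labels, n = ndimage.label(img >= t)
        if n:
            sizes = np.bincount(labels.ravel())[1:]   # skip background
            n_in = int(((sizes >= smin) & (sizes <= smax)).sum())
            if n_in > best_n:
                best_t, best_n = t, n_in
    return best_t
```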
Optical spectral singularities as threshold resonances
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mostafazadeh, Ali
2011-04-15
Spectral singularities are among generic mathematical features of complex scattering potentials. Physically they correspond to scattering states that behave like zero-width resonances. For a simple optical system, we show that a spectral singularity appears whenever the gain coefficient coincides with its threshold value and other parameters of the system are selected properly. We explore a concrete realization of spectral singularities for a typical semiconductor gain medium and propose a method of constructing a tunable laser that operates at threshold gain.
Pressure Flammability Thresholds in Oxygen of Selected Aerospace Materials
NASA Technical Reports Server (NTRS)
Hirsch, David; Williams, Jim; Harper, Susana; Beeson, Harold; Ruff, Gary; Pedley, Mike
2010-01-01
The experimental approach consisted of concentrating the testing in the flammability transition zone following the Bruceton up-and-down method. For attribute data, this method has been shown to be very repeatable and most efficient. Other methods for characterizing critical levels (Kärber and Probit) were also considered. The data yielded the upward limiting pressure index (ULPI), the pressure level at which approximately 50% of materials self-extinguish in a given environment. Parametric flammability thresholds other than oxygen concentration can be determined with the methodology proposed for evaluating the MOC when extinguishment occurs. In this case, a pressure threshold in 99.8% oxygen was determined with the methodology and found to be 0.4 to 0.9 psia for typical spacecraft materials. Correlation of flammability thresholds obtained with chemical, hot-wire, and other ignition sources will be conducted to provide recommendations for using alternate ignition sources to evaluate the flammability of aerospace materials.
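A toy simulation of the Bruceton up-and-down staircase may help make the procedure concrete. The averaging estimator below is a common simplification; the actual Bruceton analysis uses its own tabular computation, and the burn-probability model here is invented for the demo:

```python
import numpy as np

def up_and_down(test_fn, start, step, n_trials=20):
    """Bruceton-style staircase: after each trial the stimulus (here,
    pressure) moves one step against the outcome, so testing
    concentrates near the ~50% level. Returns the mean of the levels
    visited after the first reversal as a simple ULPI-style estimate."""
    level, levels, outcomes = start, [], []
    for _ in range(n_trials):
        burned = test_fn(level)            # True if the sample burned
        levels.append(level)
        outcomes.append(burned)
        level += -step if burned else step
    first_rev = next((i for i in range(1, n_trials)
                      if outcomes[i] != outcomes[i - 1]), 1)
    return float(np.mean(levels[first_rev:]))

# Toy usage: burn probability rises linearly from 0.4 to 1.0 psia.
rng = np.random.default_rng(2)
print(up_and_down(lambda p: rng.random() < (p - 0.4) / 0.6,
                  start=1.0, step=0.1))
```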
Swarm: robust and fast clustering method for amplicon-based studies.
Mahé, Frédéric; Rognes, Torbjørn; Quince, Christopher; de Vargas, Colomban; Dunthorn, Micah
2014-01-01
Popular de novo amplicon clustering methods suffer from two fundamental flaws: arbitrary global clustering thresholds, and input-order dependency induced by centroid selection. Swarm was developed to address these issues by first clustering nearly identical amplicons iteratively using a local threshold, and then by using clusters' internal structure and amplicon abundances to refine its results. This fast, scalable, and input-order independent approach reduces the influence of clustering parameters and produces robust operational taxonomic units.
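A minimal sketch of Swarm's local-threshold idea for d = 1 (grow clusters by single-edit neighbors). The real tool adds abundance-based refinement and fast k-mer indexing, so this is a conceptual illustration only:

```python
def within_d1(a, b):
    """True if the edit distance between sequences a and b is <= 1."""
    if a == b:
        return True
    la, lb = len(a), len(b)
    if la == lb:                         # one substitution at most
        return sum(x != y for x, y in zip(a, b)) <= 1
    if abs(la - lb) != 1:
        return False
    if la > lb:
        a, b = b, a                      # make a the shorter sequence
    i = 0                                # one insertion: skip mismatch
    while i < len(a) and a[i] == b[i]:
        i += 1
    return a[i:] == b[i + 1:]

def swarm_like(amplicons):
    """Greedy local clustering: grow each cluster by repeatedly adding
    any unassigned amplicon within distance 1 of a cluster member."""
    pool, clusters = set(amplicons), []
    while pool:
        seed = pool.pop()
        cluster, frontier = [seed], [seed]
        while frontier:
            member = frontier.pop()
            hits = {s for s in pool if within_d1(member, s)}
            pool -= hits
            cluster.extend(hits)
            frontier.extend(hits)
        clusters.append(cluster)
    return clusters

print(swarm_like(["ACGT", "ACGA", "ACGAA", "TTTT"]))  # two clusters
```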
Method for depositing layers of high quality semiconductor material
Guha, Subhendu; Yang, Chi C.
2001-08-14
Plasma deposition of substantially amorphous semiconductor materials is carried out under a set of deposition parameters which are selected so that the process operates near the amorphous/microcrystalline threshold. This threshold varies as a function of the thickness of the depositing semiconductor layer; and, deposition parameters, such as diluent gas concentrations, must be adjusted as a function of layer thickness. Also, this threshold varies as a function of the composition of the depositing layer, and in those instances where the layer composition is profiled throughout its thickness, deposition parameters must be adjusted accordingly so as to maintain the amorphous/microcrystalline threshold.
Optimal Clustering in Graphs with Weighted Edges: A Unified Approach to the Threshold Problem.
ERIC Educational Resources Information Center
Goetschel, Roy; Voxman, William
1987-01-01
Relations on a finite set V are viewed as weighted graphs. Using the language of graph theory, two methods of partitioning V are examined: selecting threshold values and applying them to a maximal weighted spanning forest, and using a parametric linear program to obtain a most adhesive partition. (Author/EM)
Sampling Based Influence Maximization on Linear Threshold Model
NASA Astrophysics Data System (ADS)
Jia, Su; Chen, Ling
2018-04-01
A sampling-based method for influence maximization on the linear threshold (LT) model is presented. The method samples the routes in the possible worlds of the social network and uses the Chernoff bound to estimate the number of samples needed so that the error is constrained within a given bound. The activation probabilities of the routes in the possible worlds are then calculated and used to compute the influence spread of each node in the network. Our experimental results show that the method effectively selects seed node sets that spread larger influence than those of other similar methods.
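For contrast with the paper's sampling approach, a plain Monte Carlo estimate of LT-model influence spread looks like this; the graph representation and edge weights are illustrative:

```python
import random

def lt_influence(graph, weights, seeds, n_sim=1000, seed=1):
    """Monte Carlo estimate of influence spread under the linear
    threshold model: each node draws a uniform threshold and activates
    once the summed weights of its active in-neighbours reach it.
    Plain simulation; the paper instead samples live routes and bounds
    the sample count with the Chernoff bound."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n_sim):
        theta = {v: rng.random() for v in graph}
        active = set(seeds)
        changed = True
        while changed:
            newly = {v for v in graph if v not in active
                     and sum(weights.get((u, v), 0.0) for u in active)
                     >= theta[v]}
            changed = bool(newly)
            active |= newly
        total += len(active)
    return total / n_sim

# Tiny example: path a -> b -> c with strong edges.
g = {'a': ['b'], 'b': ['c'], 'c': []}
w = {('a', 'b'): 0.9, ('b', 'c'): 0.9}
print(lt_influence(g, w, seeds={'a'}))
```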
Salicylate-induced changes in auditory thresholds of adolescent and adult rats.
Brennan, J F; Brown, C A; Jastreboff, P J
1996-01-01
Shifts in auditory intensity thresholds after salicylate administration were examined in postweanling and adult pigmented rats at frequencies ranging from 1 to 35 kHz. A total of 132 subjects from both age levels were tested under two-way or one-way active avoidance paradigms. Estimated thresholds were inferred from behavioral responses to descending and ascending series of intensities at each test frequency. Reliable threshold estimates were obtained under both avoidance conditioning methods. Compared with controls, subjects at both age levels showed selective threshold shifts at higher frequencies after salicylate injection, and the extent of the shifts was related to salicylate dose level.
Sieracki, M E; Reichenbach, S E; Webb, K L
1989-01-01
The accurate measurement of bacterial and protistan cell biomass is necessary for understanding their population and trophic dynamics in nature. Direct measurement of fluorescently stained cells is often the method of choice. The tedium of making such measurements visually on the large numbers of cells required has prompted the use of automatic image analysis for this purpose. Accurate measurements by image analysis require an accurate, reliable method of segmenting the image, that is, distinguishing the brightly fluorescing cells from a dark background. This is commonly done by visually choosing a threshold intensity value which most closely coincides with the outline of the cells as perceived by the operator. Ideally, an automated method based on the cell image characteristics should be used. Since the optical nature of edges in images of light-emitting, microscopic fluorescent objects is different from that of images generated by transmitted or reflected light, automatic segmentation of such images might require special considerations. We tested nine automated threshold selection methods using standard fluorescent microspheres ranging in size and fluorescence intensity, and fluorochrome-stained samples of cells from cultures of cyanobacteria, flagellates, and ciliates. The methods included several variations based on the maximum intensity gradient of the sphere profile (first derivative), the minimum in the second derivative of the sphere profile, the minimum of the image histogram, and the midpoint intensity. Our results indicated that thresholds determined visually and by first-derivative methods tended to overestimate the threshold, causing an underestimation of microsphere size. The method based on the minimum of the second derivative of the profile yielded the most accurate area estimates for spheres of different sizes and brightnesses and for four of the five cell types tested. A simple model of the optical properties of fluorescing objects and the video acquisition system is described which explains how the second derivative best approximates the position of the edge. PMID:2516431
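A hedged sketch of the second-derivative criterion on a 1-D edge profile; the synthetic sigmoid edge and the profile orientation are assumptions for the demo:

```python
import numpy as np

def second_derivative_threshold(profile):
    """Given a 1-D intensity profile across a fluorescing object's edge
    (ordered from inside the object, bright, toward the background),
    return the intensity at the minimum of the second derivative, the
    criterion the study found best marks the true edge position."""
    profile = np.asarray(profile, dtype=float)
    d2 = np.gradient(np.gradient(profile))
    return profile[int(np.argmin(d2))]

# Synthetic blurred edge: bright plateau falling to background level.
x = np.linspace(-3, 3, 61)
edge = 100.0 / (1.0 + np.exp(4 * x)) + 10.0
print(second_derivative_threshold(edge))
```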
Twelve automated thresholding methods for segmentation of PET images: a phantom study.
Prieto, Elena; Lecumberri, Pablo; Pagola, Miguel; Gómez, Marisol; Bilbao, Izaskun; Ecay, Margarita; Peñuelas, Iván; Martí-Climent, Josep M
2012-06-21
Tumor volume delineation over positron emission tomography (PET) images is of great interest for proper diagnosis and therapy planning. However, standard segmentation techniques (manual or semi-automated) are operator dependent and time consuming, while fully automated procedures are cumbersome or require complex mathematical development. The aim of this study was to segment PET images in a fully automated way by implementing a set of 12 automated thresholding algorithms that are classical in fields such as optical character recognition, tissue engineering, and non-destructive testing of high-tech structures. Automated thresholding algorithms select a specific threshold for each image without any a priori spatial information about the segmented object or any special calibration of the tomograph, as opposed to the usual thresholding methods for PET. Spherical (18)F-filled objects of different volumes were acquired on a clinical PET/CT and on a small-animal PET scanner, with three different signal-to-background ratios. Images were segmented with the 12 automatic thresholding algorithms, and the results were compared with the standard segmentation reference, a threshold at 42% of the maximum uptake. The Ridler and Ramesh thresholding algorithms, based on clustering and histogram-shape information, respectively, provided better results than the classical 42%-based threshold (p < 0.05). We have herein demonstrated that fully automated thresholding algorithms can provide better results than classical PET segmentation tools.
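Of the twelve algorithms, Ridler's clustering method (often called isodata) is simple to state: iterate the threshold to the midpoint of the two class means it induces. A sketch, with an added iteration cap as a safeguard:

```python
import numpy as np

def ridler_threshold(values, tol=1e-4):
    """Ridler-Calvard (isodata) threshold: starting from the global
    mean, repeatedly set t to the midpoint of the means of the two
    classes that t itself induces, until the change is below tol."""
    v = np.asarray(values, dtype=float).ravel()
    t = v.mean()
    for _ in range(200):                  # convergence guard
        lo, hi = v[v <= t], v[v > t]
        if lo.size == 0 or hi.size == 0:  # degenerate split; stop
            break
        t_new = 0.5 * (lo.mean() + hi.mean())
        if abs(t_new - t) < tol:
            break
        t = t_new
    return t

# Example: converges between the background and hot-sphere intensities.
rng = np.random.default_rng(0)
voxels = np.concatenate([rng.normal(1.0, 0.2, 9000), rng.normal(6.0, 1.0, 1000)])
print(ridler_threshold(voxels))
```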
Prefixed-threshold real-time selection method in free-space quantum key distribution
NASA Astrophysics Data System (ADS)
Wang, Wenyuan; Xu, Feihu; Lo, Hoi-Kwong
2018-03-01
Free-space quantum key distribution allows two parties to share a random key with unconditional security, between ground stations, between mobile platforms, and even in satellite-ground quantum communications. Atmospheric turbulence causes fluctuations in transmittance, which further affect the quantum bit error rate and the secure key rate. Previous postselection methods to combat atmospheric turbulence require a threshold value determined after all quantum transmission. In contrast, here we propose a method where we predetermine the optimal threshold value even before quantum transmission. Therefore, the receiver can discard useless data immediately, thus greatly reducing data storage requirements and computing resources. Furthermore, our method can be applied to a variety of protocols, including, for example, not only single-photon BB84 but also asymptotic and finite-size decoy-state BB84, which can greatly increase its practicality.
Adaptive compressed sensing of remote-sensing imaging based on the sparsity prediction
NASA Astrophysics Data System (ADS)
Yang, Senlin; Li, Xilong; Chong, Xin
2017-10-01
Conventional compressive sensing is based on non-adaptive linear projections, and the number of measurements is usually set empirically; as a result, the quality of image reconstruction suffers. First, block-based compressed sensing (BCS) with the conventional selection of compressive measurements is described. Then, an estimation method for image sparsity is proposed based on the two-dimensional discrete cosine transform (2D DCT). Given an energy threshold, the DCT coefficients are energy-normalized and sorted in descending order, and the sparsity of the image is obtained from the proportion of dominant coefficients. Simulation results show that the method estimates image sparsity effectively and provides a practical basis for selecting the number of compressive observations. Because the number of observations is selected according to the sparsity estimated under the given energy threshold, the proposed method can ensure the quality of image reconstruction.
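The normalize-sort-accumulate procedure can be sketched directly; the 0.99 energy fraction is an assumed value, not the paper's threshold:

```python
import numpy as np
from scipy.fft import dctn

def estimate_sparsity(img, energy_fraction=0.99):
    """Estimate sparsity as the fraction of 2-D DCT coefficients needed
    to retain `energy_fraction` of the total energy: normalize the
    coefficient energies, sort descending, and count the dominant ones."""
    c = dctn(np.asarray(img, dtype=float), norm='ortho')
    energy = np.sort((c ** 2).ravel())[::-1]      # descending order
    energy /= energy.sum()                        # energy normalization
    k = int(np.searchsorted(np.cumsum(energy), energy_fraction)) + 1
    return k / energy.size                        # sparsity ratio

# Example: a smooth gradient image is highly compressible in the DCT.
img = np.outer(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
print(estimate_sparsity(img))
```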
Adaptive compressed sensing of multi-view videos based on the sparsity estimation
NASA Astrophysics Data System (ADS)
Yang, Senlin; Li, Xilong; Chong, Xin
2017-11-01
Conventional compressive sensing for video is based on non-adaptive linear projections, and the number of measurements is usually set empirically; as a result, the quality of video reconstruction suffers. First, block-based compressed sensing (BCS) with the conventional selection of compressive measurements is described. Then, an estimation method for the sparsity of multi-view videos is proposed based on the two-dimensional discrete wavelet transform (2D DWT). Given an energy threshold, the DWT coefficients are energy-normalized and sorted in descending order, and the sparsity of the multi-view video is obtained from the proportion of dominant coefficients. Simulation results show that the method estimates the sparsity of video frames effectively and provides a practical basis for selecting the number of compressive observations. Because the number of observations is selected according to the sparsity estimated under the given energy threshold, the proposed method can ensure the reconstruction quality of multi-view videos.
Positive-negative corresponding normalized ghost imaging based on an adaptive threshold
NASA Astrophysics Data System (ADS)
Li, G. L.; Zhao, Y.; Yang, Z. H.; Liu, X.
2016-11-01
Ghost imaging (GI) has attracted increasing attention as a new imaging technique in recent years. However, the signal-to-noise ratio (SNR) of GI with pseudo-thermal light must be improved before it meets engineering application demands. We therefore propose a new scheme, positive-negative correspondence normalized GI based on an adaptive threshold (PCNGI-AT), which achieves good performance with a smaller amount of data by exploiting the advantages of both normalized GI (NGI) and positive-negative correspondence GI (P-NCGI). After proving the correctness and feasibility of the scheme in theory, we designed an adaptive threshold selection method in which the parameter of the object-signal selection condition is replaced by the normalized value. Simulation and experimental results reveal that the SNR of the proposed scheme is better than that of time-correspondence differential GI (TCDGI), while avoiding the calculation of the correlation matrix and reducing the amount of data used. The proposed method will make GI far more practical in engineering applications.
NASA Astrophysics Data System (ADS)
Tong, Xin; Winney, Alexander H.; Willitsch, Stefan
2010-10-01
We present a new method for the generation of rotationally and vibrationally state-selected, translationally cold molecular ions in ion traps. Our technique is based on the state-selective threshold photoionization of neutral molecules followed by sympathetic cooling of the resulting ions with laser-cooled calcium ions. Using N2+ ions as a test system, we achieve >90% selectivity in the preparation of the ground rovibrational level and state lifetimes on the order of 15 minutes limited by collisions with background-gas molecules. The technique can be employed to produce a wide range of apolar and polar molecular ions in the ground and excited rovibrational states. Our approach opens up new perspectives for cold quantum-controlled ion-molecule-collision studies, frequency-metrology experiments with state-selected molecular ions and molecular-ion qubits.
Impacts of selected stimulation patterns on the perception threshold in electrocutaneous stimulation
2011-01-01
Background: Consistency is one of the most important concerns in conveying stable, artificially induced sensory feedback. However, the constancy of perceived sensations cannot be guaranteed, because the artificially evoked sensation is a function of the interaction of the stimulation parameters. The hypothesis of this study is that the selected stimulation parameters in multi-electrode cutaneous stimulation have significant impacts on the perception threshold. Methods: The investigated parameters were the stimulation site, the number of active electrodes, the number of pulses, and the interleaved time between a pair of electrodes. Biphasic rectangular pulses were applied via five surface electrodes placed on the forearm of 12 healthy subjects. Results: Our main findings were that 1) the perception thresholds at the five stimulation sites were significantly different (p < 0.0001); 2) dual-channel simultaneous stimulation lowered the perception thresholds and led to smaller variance in perception thresholds than single-channel stimulation; 3) the perception threshold was inversely related to the number of pulses; and 4) the perception threshold increased with increasing interleaved time when the interleaved time between two electrodes was below 500 μs. Conclusions: To maintain a consistent perception threshold, our findings indicate that dual-channel simultaneous stimulation with at least five pulses should be used and that the interleaved time between two electrodes should be longer than 500 μs. We believe these findings have implications for the design of reliable sensory feedback codes. PMID:21306616
Adaptive Spot Detection With Optimal Scale Selection in Fluorescence Microscopy Images.
Basset, Antoine; Boulanger, Jérôme; Salamero, Jean; Bouthemy, Patrick; Kervrann, Charles
2015-11-01
Accurately detecting subcellular particles in fluorescence microscopy is of primary interest for further quantitative analysis such as counting, tracking, or classification. Our primary goal is to segment vesicles likely to share nearly the same size in fluorescence microscopy images. Our method termed adaptive thresholding of Laplacian of Gaussian (LoG) images with autoselected scale (ATLAS) automatically selects the optimal scale corresponding to the most frequent spot size in the image. Four criteria are proposed and compared to determine the optimal scale in a scale-space framework. Then, the segmentation stage amounts to thresholding the LoG of the intensity image. In contrast to other methods, the threshold is locally adapted given a probability of false alarm (PFA) specified by the user for the whole set of images to be processed. The local threshold is automatically derived from the PFA value and local image statistics estimated in a window whose size is not a critical parameter. We also propose a new data set for benchmarking, consisting of six collections of one hundred images each, which exploits backgrounds extracted from real microscopy images. We have carried out an extensive comparative evaluation on several data sets with ground-truth, which demonstrates that ATLAS outperforms existing methods. ATLAS does not need any fine parameter tuning and requires very low computation time. Convincing results are also reported on real total internal reflection fluorescence microscopy images.
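A simplified global approximation of the ATLAS pipeline (scale selection by maximal normalized LoG response, then a PFA-style quantile threshold); ATLAS's actual criteria and locally adaptive threshold differ, so treat this as a sketch:

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def autoselect_scale(img, sigmas):
    """Pick the LoG scale with the strongest sigma^2-normalized
    response. Bright spots give strongly negative LoG values, hence
    the sign flip. Returns the chosen sigma and the response image."""
    responses = [-(s ** 2) * gaussian_laplace(np.asarray(img, float), s)
                 for s in sigmas]
    best = int(np.argmax([r.max() for r in responses]))
    return sigmas[best], responses[best]

def threshold_by_pfa(log_img, pfa=1e-3):
    """Global stand-in for the locally adaptive rule: keep pixels whose
    LoG response exceeds the (1 - pfa) quantile, i.e. an approximate
    user-specified probability of false alarm under the background."""
    return log_img > np.quantile(log_img, 1.0 - pfa)

# Example: detect a synthetic bright spot on a noisy background.
rng = np.random.default_rng(0)
img = rng.normal(0, 1, (128, 128))
img[60:66, 60:66] += 8.0
sigma, resp = autoselect_scale(img, sigmas=[1, 2, 3, 4])
print(sigma, threshold_by_pfa(resp).sum(), "pixels flagged")
```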
A Bayesian Approach to the Overlap Analysis of Epidemiologically Linked Traits.
Asimit, Jennifer L; Panoutsopoulou, Kalliope; Wheeler, Eleanor; Berndt, Sonja I; Cordell, Heather J; Morris, Andrew P; Zeggini, Eleftheria; Barroso, Inês
2015-12-01
Diseases co-occur in individuals more often than expected by chance, which may be explained by shared underlying genetic etiology. A common approach to genetic overlap analyses is to use summary genome-wide association study data to identify single-nucleotide polymorphisms (SNPs) that are associated with multiple traits at a selected P-value threshold. However, P-values do not account for differences in power, whereas Bayes factors (BFs) do and may be approximated using summary statistics. We use simulation studies to compare the power of frequentist and Bayesian approaches to overlap analyses and to decide on appropriate thresholds for comparison between the two methods. It is empirically illustrated that BFs have the advantage over P-values of a decreasing type I error rate as study size increases for single-disease associations. Consequently, the overlap analysis of traits from different-sized studies encounters issues in fair P-value threshold selection, whereas BFs are adjusted automatically. Extensive simulations show that Bayesian overlap analyses tend to have higher power than those that assess association strength with P-values, particularly in low-power scenarios. Calibration tables between BFs and P-values are provided for a range of sample sizes, as well as an approximation approach for sample sizes that are not in the calibration table. Although P-values are sometimes thought more intuitive, these tables assist in removing the opaqueness of Bayesian thresholds and may also be used in the selection of a BF threshold to meet a certain type I error rate. An application of our methods is used to identify variants associated with both obesity and osteoarthritis. © 2015 The Authors. Genetic Epidemiology published by Wiley Periodicals, Inc.
Improved Controller Design of Grid Friendly™ Appliances for Primary Frequency Response
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lian, Jianming; Sun, Yannan; Marinovici, Laurentiu D.
2015-09-01
The Grid Friendly™ Appliance (GFA) controller, developed at Pacific Northwest National Laboratory, can autonomously switch off appliances by detecting under-frequency events. In this paper, the impacts of the curtailing frequency threshold on the performance of frequency-responsive GFAs are first analyzed carefully. The current method of selecting curtailing frequency thresholds for GFAs is found to be insufficient to guarantee the desired performance, especially when the frequency deviation is shallow. In addition, the power reduction of online GFAs can be so excessive that it impacts the system response negatively. As a remedy to this deficiency of the current controller design, a different way of selecting curtailing frequency thresholds is proposed to ensure the effectiveness of GFAs in frequency protection. Moreover, it is also proposed to introduce a supervisor at each distribution feeder to monitor the curtailing frequency thresholds of online GFAs and take corrective actions if necessary.
Identifying Turbulent Structures through Topological Segmentation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bremer, Peer-Timo; Gruber, Andrea; Bennett, Janine C.
2016-01-01
A new method of extracting vortical structures from a turbulent flow is proposed, whereby topological segmentation of an indicator-function scalar field is used to identify the regions of influence of the individual vortices. This addresses a long-standing challenge in vector field topological analysis: commonly used indicator functions produce a scalar field from the local velocity vector field, and reconstructing the region of influence of a particular structure requires selecting a threshold to define vortex extent; in practice, the same threshold is rarely meaningful throughout a given flow. By also considering the topology of the indicator function field, the characteristics of vortex strength and extent can be separated and the ambiguity in the choice of threshold reduced. The proposed approach is able to identify simultaneously several types of vortices observed in a jet-in-cross-flow configuration, where no single threshold value for a selection of common indicator functions appears able to identify all of these vortex types.
Detection and Modeling of High-Dimensional Thresholds for Fault Detection and Diagnosis
NASA Technical Reports Server (NTRS)
He, Yuning
2015-01-01
Many Fault Detection and Diagnosis (FDD) systems use discrete models for detection and reasoning. To obtain categorical values like "oil pressure too high", analog sensor values need to be discretized using a suitable threshold. Time series of analog and discrete sensor readings are processed and discretized as they come in; this task is usually performed by the "wrapper code" of the FDD system, together with signal preprocessing and filtering. In practice, selecting the right threshold is very difficult, because it heavily influences the quality of diagnosis. If a threshold causes the alarm to trigger even in nominal situations, false alarms will be the consequence. On the other hand, if the threshold setting does not trigger in an off-nominal condition, important alarms might be missed, potentially causing hazardous situations. In this paper, we describe in detail the underlying statistical modeling techniques and algorithm, as well as the Bayesian method for selecting the most likely shape and its parameters. Our approach is illustrated by several examples from the aerospace domain.
NASA Astrophysics Data System (ADS)
Song, Yong-Ak; Melik, Rohat; Rabie, Amr N.; Ibrahim, Ahmed M. S.; Moses, David; Tan, Ara; Han, Jongyoon; Lin, Samuel J.
2011-12-01
Conventional functional electrical stimulation aims to restore functional motor activity of patients with disabilities resulting from spinal cord injury or neurological disorders. However, intervention with functional electrical stimulation in neurological diseases lacks an effective implantable method that suppresses unwanted nerve signals. We have developed an electrochemical method to activate and inhibit a nerve by electrically modulating ion concentrations in situ along the nerve. Using ion-selective membranes to achieve different excitability states of the nerve, we observe either a reduction of the electrical threshold for stimulation by up to approximately 40%, or voluntary, reversible inhibition of nerve signal propagation. This low-threshold electrochemical stimulation method is applicable in current implantable neuroprosthetic devices, whereas the on-demand nerve-blocking mechanism could offer effective clinical intervention in disease states caused by uncontrolled nerve activation, such as epilepsy and chronic pain syndromes.
A SVM-based quantitative fMRI method for resting-state functional network detection.
Song, Xiaomu; Chen, Nan-kuei
2014-09-01
Resting-state functional magnetic resonance imaging (fMRI) aims to measure baseline neuronal connectivity independent of specific functional tasks and to capture changes in connectivity due to neurological diseases. Most existing network detection methods rely on a fixed threshold to identify functionally connected voxels under the resting state. Due to fMRI non-stationarity, such a threshold cannot adapt to variations in data characteristics across sessions and subjects, and it generates unreliable mapping results. In this study, a new method is presented for resting-state fMRI data analysis. Specifically, resting-state network mapping is formulated as an outlier detection process implemented using a one-class support vector machine (SVM). The results are refined by a spatial-feature-domain prototype selection method and two-class SVM reclassification. The final decision on each voxel is made by comparing its probabilities of being functionally connected and unconnected, rather than by applying a threshold. Multiple features for resting-state analysis were extracted and examined using an SVM-based feature selection method, and the most representative features were identified. The proposed method was evaluated using synthetic and experimental fMRI data, and a comparison study was performed with independent component analysis (ICA) and correlation analysis. The experimental results show that the proposed method can provide comparable or better network detection performance than ICA and correlation analysis. The method is potentially applicable to various resting-state quantitative fMRI studies. Copyright © 2014 Elsevier Inc. All rights reserved.
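The outlier-detection formulation can be illustrated with scikit-learn's one-class SVM on synthetic voxel features; the feature set, the contamination level nu, and the data are assumptions:

```python
import numpy as np
from sklearn.svm import OneClassSVM

# Sketch: voxels are rows of a feature matrix (e.g., seed correlation
# and spectral features). The one-class SVM learns the "unconnected"
# bulk, and outliers become candidate network voxels. nu=0.05 is an
# assumed contamination level, not a value from the paper.
rng = np.random.default_rng(0)
background = rng.normal(0.0, 1.0, size=(980, 3))   # unconnected voxels
connected = rng.normal(3.0, 0.5, size=(20, 3))     # network voxels
features = np.vstack([background, connected])

oc_svm = OneClassSVM(kernel='rbf', nu=0.05, gamma='scale').fit(features)
candidates = np.flatnonzero(oc_svm.predict(features) == -1)  # -1 = outlier
print(f"{candidates.size} candidate network voxels flagged")
```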
NASA Technical Reports Server (NTRS)
Howard, Richard T. (Inventor); Bryan, Thomas C. (Inventor); Book, Michael L. (Inventor)
2004-01-01
A method and system for processing an image, including capturing an image and storing the image as image pixel data. Each image pixel datum is stored in a respective memory location having a corresponding address. Threshold pixel data are selected from the image pixel data, and linear spot segments are identified from the selected threshold pixel data. The positions of only a first pixel and a last pixel of each linear segment are saved. Movement of one or more objects is tracked by comparing the positions of the first and last pixels of a linear segment present in the captured image with the respective first and last pixel positions in subsequently captured images. Alternatively, additional data for each linear segment are saved, such as the sum of pixels and the weighted sum of pixels (i.e., each threshold pixel value multiplied by that pixel's x-location).
Getting the message across: using ecological integrity to communicate with resource managers
Mitchell, Brian R.; Tierney, Geraldine L.; Schweiger, E. William; Miller, Kathryn M.; Faber-Langendoen, Don; Grace, James B.
2014-01-01
This chapter describes and illustrates how concepts of ecological integrity, thresholds, and reference conditions can be integrated into a research and monitoring framework for natural resource management. Ecological integrity has been defined as a measure of the composition, structure, and function of an ecosystem in relation to the system’s natural or historical range of variation, as well as perturbations caused by natural or anthropogenic agents of change. Using ecological integrity to communicate with managers requires five steps, often implemented iteratively: (1) document the scale of the project and the current conceptual understanding and reference conditions of the ecosystem, (2) select appropriate metrics representing integrity, (3) define externally verified assessment points (metric values that signify an ecological change or need for management action) for the metrics, (4) collect data and calculate metric scores, and (5) summarize the status of the ecosystem using a variety of reporting methods. While we present the steps linearly for conceptual clarity, actual implementation of this approach may require addressing the steps in a different order or revisiting steps (such as metric selection) multiple times as data are collected. Knowledge of relevant ecological thresholds is important when metrics are selected, because thresholds identify where small changes in an environmental driver produce large responses in the ecosystem. Metrics with thresholds at or just beyond the limits of a system’s range of natural variability can be excellent, since moving beyond the normal range produces a marked change in their values. Alternatively, metrics with thresholds within but near the edge of the range of natural variability can serve as harbingers of potential change. Identifying thresholds also contributes to decisions about selection of assessment points. In particular, if there is a significant resistance to perturbation in an ecosystem, with threshold behavior not occurring until well beyond the historical range of variation, this may provide a scientific basis for shifting an ecological assessment point beyond the historical range. We present two case studies using ongoing monitoring by the US National Park Service Vital Signs program that illustrate the use of an ecological integrity approach to communicate ecosystem status to resource managers. The Wetland Ecological Integrity in Rocky Mountain National Park case study uses an analytical approach that specifically incorporates threshold detection into the process of establishing assessment points. The Forest Ecological Integrity of Northeastern National Parks case study describes a method for reporting ecological integrity to resource managers and other decision makers. We believe our approach has the potential for wide applicability for natural resource management.
Miles, Jeffrey Hilton
2011-05-01
Combustion noise from turbofan engines has become important as the noise from sources like the fan and jet is reduced. An aligned and un-aligned coherence technique has been developed to determine a threshold level for the coherence and thereby help separate the coherent combustion noise source from other noise sources measured with far-field microphones. This method is compared with a statistics-based coherence threshold estimation method. In addition, the un-aligned coherence procedure also reveals periodicities, spectral lines, and undamped sinusoids hidden by broadband turbofan engine noise. In calculating the coherence threshold with a statistical method, one may use either the number of independent records or a larger number corresponding to the number of overlapped records used to create the average. Using data from a turbofan engine and a simulation, this paper shows that applying the Fisher z-transform to the un-aligned coherence can aid in making the proper selection of samples and produce a reasonable statistics-based coherence threshold. Examples are presented showing that the underlying tonal and coherent broadband structure buried under random broadband noise and jet noise can be determined. The method also shows the possible presence of indirect combustion noise.
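For the statistics-based threshold mentioned above, the classical significance level for magnitude-squared coherence from independent averages is simple to compute; using the larger overlapped-record count instead would understate the threshold, which is the selection issue the paper examines. A sketch, assuming Carter's standard result applies:

```python
def coherence_threshold(n_segments, alpha=0.05):
    """Significance threshold for magnitude-squared coherence estimated
    from n_segments independent (non-overlapped) averages, under the
    null hypothesis of zero true coherence:

        gamma2 = 1 - alpha ** (1 / (n_segments - 1))

    Estimated coherence above this level is significant at level alpha."""
    return 1.0 - alpha ** (1.0 / (n_segments - 1))

# Example: 40 independent averages at the 5% significance level.
print(coherence_threshold(40))   # ~0.074
```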
The impact of manual threshold selection in medical additive manufacturing.
van Eijnatten, Maureen; Koivisto, Juha; Karhu, Kalle; Forouzanfar, Tymour; Wolff, Jan
2017-04-01
Medical additive manufacturing requires standard tessellation language (STL) models. Such models are commonly derived from computed tomography (CT) images using thresholding. Threshold selection can be performed manually or automatically. The aim of this study was to assess the impact of manual and default threshold selection on the reliability and accuracy of skull STL models using different CT technologies. One female and one male human cadaver head were imaged using multi-detector row CT, dual-energy CT, and two cone-beam CT scanners. Four medical engineers manually thresholded the bony structures on all CT images. The lowest and highest selected mean threshold values and the default threshold value were used to generate skull STL models. Geometric variations between all manually thresholded STL models were calculated. Furthermore, in order to calculate the accuracy of the manually and default thresholded STL models, all STL models were superimposed on an optical scan of the dry female and male skulls ("gold standard"). The intra- and inter-observer variability of the manual threshold selection was good (intra-class correlation coefficients >0.9). All engineers selected grey values closer to soft tissue to compensate for bone voids. Geometric variations between the manually thresholded STL models were 0.13 mm (multi-detector row CT), 0.59 mm (dual-energy CT), and 0.55 mm (cone-beam CT). All STL models demonstrated inaccuracies ranging from -0.8 to +1.1 mm (multi-detector row CT), -0.7 to +2.0 mm (dual-energy CT), and -2.3 to +4.8 mm (cone-beam CT). This study demonstrates that manual threshold selection results in better STL models than default thresholding. The use of dual-energy CT and cone-beam CT technology in its present form does not deliver reliable or accurate STL models for medical additive manufacturing. New approaches are required that are based on pattern recognition and machine learning algorithms.
Goldberg, J M; Lindblom, U
1979-01-01
Vibration threshold determinations were made by means of an electromagnetic vibrator at three sites (carpal, tibial, and tarsal) that were primarily selected for examining patients with polyneuropathy. Because of the vast variation demonstrated in both vibrator output and tissue damping, the thresholds were expressed in terms of the amplitude of stimulator movement measured with an accelerometer, instead of the commonly used applied voltage. Statistical analysis revealed a higher power of discrimination for amplitude measurements at all three stimulus sites. Digital read-out gave the best statistical result and was also most practical. Reference values obtained from 110 healthy males, 10 to 74 years of age, were highly correlated with age for both the upper and lower extremities. The variance of the vibration perception threshold was less than that of the disappearance threshold, and determination of the perception threshold alone may be sufficient in most cases. PMID:501379
Selective Auditory Attention in Adults: Effects of Rhythmic Structure of the Competing Language
ERIC Educational Resources Information Center
Reel, Leigh Ann; Hicks, Candace Bourland
2012-01-01
Purpose: The authors assessed adult selective auditory attention to determine effects of (a) differences between the vocal/speaking characteristics of different mixed-gender pairs of masking talkers and (b) the rhythmic structure of the language of the competing speech. Method: Reception thresholds for English sentences were measured for 50…
Effects of surface anchoring on the electric Frederiks transition in ferronematic systems
NASA Astrophysics Data System (ADS)
Farrokhbin, Mojtaba; Kadivar, Erfan
2016-11-01
The effects of the anchoring phenomenon on the electric Frederiks transition threshold field in a nematic liquid crystal doped with ferroelectric nanoparticles are discussed. The polarizability of these nanoparticles, in combination with confinement effects, has drastic effects on ferronematic systems. This study is based on the Frank free energy and the Rapini-Papoular surface energy for a ferronematic liquid crystal with finite anchoring. For different anchoring boundary conditions, the Euler-Lagrange equation of the total free energy is solved numerically using the finite difference method together with the relaxation method, and a Maxwell construction is used to select the physical solutions, allowing us to investigate the effects of different anchoring strengths on the Frederiks transition threshold field. The Maxwell construction is employed to select among three periodic solutions for the nematic director at the interfaces of a slab. In the interval from zero to π/2, there is only one solution for the director orientation; the NLC director rotates toward the surface normal as the applied electric field at the walls increases. Our numerical results show that, above the Frederiks transition and at intermediate anchoring strength, the nematic molecules adopt different orientations at the slab boundaries. We also study the effects of different anchoring strengths, nanoparticle volume fractions, and polarizations on the Frederiks transition threshold field, and we find that decreasing the nanoparticle polarization leads to saturation of the Frederiks threshold, whereas this does not occur for the nanoparticle volume fraction.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Yuyu; Smith, Steven J.; Elvidge, Christopher
Accurate information on urban areas at regional and global scales is important for both the science and policy-making communities. The Defense Meteorological Satellite Program/Operational Linescan System (DMSP/OLS) nighttime stable light (NTL) data provide a potentially economical and timely way to map urban areas and their dynamics. In this study, we developed a cluster-based method to estimate optimal thresholds and map urban extents from the DMSP/OLS NTL data in five major steps: data preprocessing, urban cluster segmentation, logistic model development, threshold estimation, and urban extent delineation. Unlike previous fixed-threshold methods, which suffer from over- and under-estimation, in our method the optimal thresholds are estimated from cluster size and overall nightlight magnitude within the cluster, and they vary between clusters. Two large countries with different urbanization patterns, the United States and China, were selected for mapping urban extents with the proposed method. The results indicate that urbanized area occupies about 2% of total land area in the US, ranging from below 0.5% to above 10% at the state level, and less than 1% in China, ranging from below 0.1% to about 5% at the province level, with some municipalities as high as 10%. The derived thresholds and urban extents were evaluated using high-resolution land cover data at the cluster and regional levels, and the method was found to map urban areas in both countries efficiently and accurately. Compared with previous threshold techniques, our method reduces over- and under-estimation when mapping urban extent over a large area. More importantly, it shows potential for mapping global urban extents and their temporal dynamics from DMSP/OLS NTL data in a timely, cost-effective way.
Müller, Dirk; Pulm, Jannis; Gandjour, Afschin
2012-01-01
To compare cost-effectiveness modeling analyses of strategies to prevent osteoporotic and osteopenic fractures either based on fixed thresholds using bone mineral density or based on variable thresholds including bone mineral density and clinical risk factors. A systematic review was performed by using the MEDLINE database and reference lists from previous reviews. On the basis of predefined inclusion/exclusion criteria, we identified relevant studies published since January 2006. Articles included for the review were assessed for their methodological quality and results. The literature search resulted in 24 analyses, 14 of them using a fixed-threshold approach and 10 using a variable-threshold approach. On average, 70% of the criteria for methodological quality were fulfilled, but almost half of the analyses did not include medication adherence in the base case. The results of variable-threshold strategies were more homogeneous and showed more favorable incremental cost-effectiveness ratios compared with those based on a fixed threshold with bone mineral density. For analyses with fixed thresholds, incremental cost-effectiveness ratios varied from €80,000 per quality-adjusted life-year in women aged 55 years to cost saving in women aged 80 years. For analyses with variable thresholds, the range was €47,000 to cost savings. Risk assessment using variable thresholds appears to be more cost-effective than selecting high-risk individuals by fixed thresholds. Although the overall quality of the studies was fairly good, future economic analyses should further improve their methods, particularly in terms of including more fracture types, incorporating medication adherence, and including or discussing unrelated costs during added life-years. Copyright © 2012 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
Identifying failure in a tree network of a parallel computer
Archer, Charles J.; Pinnow, Kurt W.; Wallenfelt, Brian P.
2010-08-24
Methods, parallel computers, and products are provided for identifying failure in a tree network of a parallel computer. The parallel computer includes one or more processing sets including an I/O node and a plurality of compute nodes. For each processing set embodiments include selecting a set of test compute nodes, the test compute nodes being a subset of the compute nodes of the processing set; measuring the performance of the I/O node of the processing set; measuring the performance of the selected set of test compute nodes; calculating a current test value in dependence upon the measured performance of the I/O node of the processing set, the measured performance of the set of test compute nodes, and a predetermined value for I/O node performance; and comparing the current test value with a predetermined tree performance threshold. If the current test value is below the predetermined tree performance threshold, embodiments include selecting another set of test compute nodes. If the current test value is not below the predetermined tree performance threshold, embodiments include selecting from the test compute nodes one or more potential problem nodes and testing individually potential problem nodes and links to potential problem nodes.
Higher criticism thresholding: Optimal feature selection when useful features are rare and weak.
Donoho, David; Jin, Jiashun
2008-09-30
In important application fields today (genomics and proteomics are examples), selecting a small subset of useful features is crucial for success of Linear Classification Analysis. We study feature selection by thresholding of feature Z-scores and introduce a principle of threshold selection, based on the notion of higher criticism (HC). For i = 1, 2, ..., p, let π_i denote the two-sided P-value associated with the ith feature Z-score and π_(i) denote the ith order statistic of the collection of P-values. The HC threshold is the absolute Z-score corresponding to the P-value maximizing the HC objective (i/p − π_(i)) / √((i/p)(1 − i/p)). We consider a rare/weak (RW) feature model, where the fraction of useful features is small and the useful features are each too weak to be of much use on their own. HC thresholding (HCT) has interesting behavior in this setting, with an intimate link between maximizing the HC objective and minimizing the error rate of the designed classifier, and very different behavior from popular threshold selection procedures such as false discovery rate thresholding (FDRT). In the most challenging RW settings, HCT uses an unconventionally low threshold; this keeps the missed-feature detection rate under better control than FDRT and yields a classifier with improved misclassification performance. Replacing cross-validated threshold selection in the popular Shrunken Centroid classifier with the computationally less expensive and simpler HCT reduces the variance of the selected threshold and the error rate of the constructed classifier. Results on standard real datasets and in asymptotic theory confirm the advantages of HCT.
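The HC objective above translates directly into a few lines of numpy. The following is a minimal sketch (the function name, the lower-half search range, and the simulated rare/weak data are illustrative choices, not from the paper):

```python
import numpy as np
from scipy.stats import norm

def hc_threshold(z_scores):
    # Two-sided P-values and their order statistics
    pvals = 2.0 * norm.sf(np.abs(z_scores))
    order = np.sort(pvals)
    p = len(order)
    half = p // 2                         # search the lower half, avoiding i = p
    i = np.arange(1, half + 1)
    hc = (i / p - order[:half]) / np.sqrt((i / p) * (1.0 - i / p))
    i_star = int(np.argmax(hc))
    # HC threshold: the |Z| whose two-sided P-value is the maximizer
    return norm.isf(order[i_star] / 2.0)

# Illustrative rare/weak setting: 1% useful features of modest strength
rng = np.random.default_rng(0)
z = np.concatenate([rng.normal(0, 1, 990), rng.normal(3, 1, 10)])
selected = np.abs(z) >= hc_threshold(z)
```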
Cloud Detection of Optical Satellite Images Using Support Vector Machine
NASA Astrophysics Data System (ADS)
Lee, Kuan-Yi; Lin, Chao-Hung
2016-06-01
Cloud cover is generally present in optical remote-sensing images, which limits the usage of acquired images and increases the difficulty of data analysis, such as image compositing, correction of atmospheric effects, calculation of vegetation indices, land cover classification, and land cover change detection. In previous studies, thresholding is a common and useful method in cloud detection. However, a selected threshold is usually suitable only for certain cases or local study areas and may fail in other cases. In other words, thresholding-based methods are data-sensitive. Besides, there are many exceptions to control, and the environment changes dynamically, so using the same threshold value on various data is not effective. In this study, a threshold-free method based on Support Vector Machine (SVM) is proposed, which can avoid the abovementioned problems. The main idea of this study is to adopt a statistical model to detect clouds instead of a subjective thresholding-based method. The features used in a classifier are the key to a successful classification. The Automatic Cloud Cover Assessment (ACCA) algorithm, which is based on physical characteristics of clouds, is used to distinguish clouds from other objects. Similarly, the algorithm called Fmask (Zhu et al., 2012) uses many thresholds and criteria to screen clouds, cloud shadows, and snow. Therefore, the feature extraction here is based on the ACCA algorithm and Fmask. Spatial and temporal information is also important for satellite images. Consequently, the co-occurrence matrix and temporal variance with uniformity of the major principal axis are used in the proposed method. We aim to classify images into three groups: cloud, non-cloud, and others. In experiments, images acquired by the Landsat 7 Enhanced Thematic Mapper Plus (ETM+), containing landscapes of agriculture, snow areas, and islands, are tested. Experimental results demonstrate that the detection accuracy of the proposed method is better than that of related methods.
Using an Outranking Method Supporting the Acquisition of Military Equipment
2009-10-01
selection methodology, taking several criteria into account. We show to what extent the class of PROMETHEE methods presents these features. We...functions, the indifference and preference thresholds and some other technical parameters. Then we discuss the capabilities of the PROMETHEE methods to...discuss the interpretation of the results given by these PROMETHEE methods. INTRODUCTION Outranking methods for multicriteria decision aid belong
Code of Federal Regulations, 2010 CFR
2010-10-01
48 CFR Federal Acquisition Regulations System, Section 736.602-5: Short selection process for procurements not to exceed the simplified acquisition threshold. References to FAR 36...
Subtil, Fabien; Rabilloud, Muriel
2015-07-01
The receiver operating characteristic (ROC) curves are often used to compare continuous diagnostic tests or determine the optimal threshold of a test; however, they do not consider the costs of misclassifications or the disease prevalence. The ROC graph was extended to allow for these aspects. Two new lines are added to the ROC graph: a sensitivity line and a specificity line. Their slopes depend on the disease prevalence and on the ratio of the net benefit of treating a diseased subject to the net cost of treating a nondiseased one. First, these lines help researchers determine the range of specificities within which comparisons of partial areas under the curves are clinically relevant. Second, the point of the ROC curve farthest from the specificity line is shown to be the optimal threshold in terms of expected utility. This method was applied (1) to determine the optimal threshold of the ratio of specific immunoglobulin G (IgG) to total IgG for the diagnosis of congenital toxoplasmosis, and (2) to select, between two markers, the more accurate one for the diagnosis of left ventricular hypertrophy in hypertensive subjects. The two additional lines transform the statistically valid ROC graph into a clinically relevant tool for test selection and threshold determination. Copyright © 2015 Elsevier Inc. All rights reserved.
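As a rough illustration of the geometry described above, the sketch below picks the ROC operating point farthest from an iso-utility line whose slope is set by prevalence and the cost/benefit ratio. The helper name and the use of sklearn's roc_curve are assumptions for illustration, not the authors' code:

```python
import numpy as np
from sklearn.metrics import roc_curve

def utility_optimal_threshold(y_true, scores, prevalence, cost_benefit_ratio):
    # Iso-utility lines in ROC space have slope
    # S = ((1 - prevalence) / prevalence) * cost_benefit_ratio,
    # where cost_benefit_ratio = net cost of treating a nondiseased subject
    # over net benefit of treating a diseased one. The optimal operating
    # point maximizes TPR - S*FPR, i.e., lies farthest from that line.
    fpr, tpr, thresholds = roc_curve(y_true, scores)
    slope = (1.0 - prevalence) / prevalence * cost_benefit_ratio
    k = int(np.argmax(tpr - slope * fpr))
    return thresholds[k], fpr[k], tpr[k]
```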
Code of Federal Regulations, 2012 CFR
2012-10-01
48 CFR Federal Acquisition Regulations System, Section 636.602-5: Short selection process for contracts not to exceed the simplified acquisition threshold. The short selection process described in FAR 36.602-5 is authorized for use for contracts not expected to exceed the simplified acquisition threshold...
Code of Federal Regulations, 2010 CFR
2010-10-01
48 CFR Federal Acquisition Regulations System, Section 1336.602-5: Short selection process for contracts not to exceed the simplified acquisition threshold. (a) In contracts not expected to exceed the simplified acquisition threshold, either or both of the short selection processes set out at...
Logarithmic compression methods for spectral data
Dunham, Mark E.
2003-01-01
A method is provided for logarithmic compression, transmission, and expansion of spectral data. A log Gabor transformation is made of incoming time series data to output spectral phase and logarithmic magnitude values. The output phase and logarithmic magnitude values are compressed by selecting only magnitude values above a selected threshold and corresponding phase values to transmit compressed phase and logarithmic magnitude values. A reverse log Gabor transformation is then performed on the transmitted phase and logarithmic magnitude values to output transmitted time series data to a user.
Classification Influence of Features on Given Emotions and Its Application in Feature Selection
NASA Astrophysics Data System (ADS)
Xing, Yin; Chen, Chuang; Liu, Li-Long
2018-04-01
In order to solve the problem that there is a large amount of redundant data in high-dimensional speech emotion features, we deeply analyze the extracted speech emotion features and select the better ones. Firstly, a given emotion is classified using each feature individually. Secondly, the recognition rates are ranked in descending order. Then, the optimal threshold on the features is determined by a rate criterion. Finally, the better features are obtained. When applied to the Berlin and Chinese emotional data sets, the experimental results show that the feature selection method outperforms the other traditional methods.
Wang, Rui-Ping; Jiang, Yong-Gen; Zhao, Gen-Ming; Guo, Xiao-Qin; Michael, Engelgau
2017-12-01
The China Infectious Disease Automated-alert and Response System (CIDARS) was successfully implemented and became operational nationwide in 2008. The CIDARS plays an important role in and has been integrated into the routine outbreak monitoring efforts of the Center for Disease Control (CDC) at all levels in China. In the CIDARS, thresholds were determined using the "Mean+2SD" method in the early stage, which has limitations. This study compared the performance of optimized thresholds defined using the "Mean+2SD" method to the performance of 5 novel algorithms to select the optimal "Outbreak Gold Standard" (OGS) and corresponding thresholds for outbreak detection. Data for infectious disease were organized by calendar week and year. The "Mean+2SD", C1, C2, moving average (MA), seasonal model (SM), and cumulative sum (CUSUM) algorithms were applied. Outbreak signals for the predicted value (Px) were calculated using a percentile-based moving window. When the outbreak signals generated by an algorithm were in line with a Px-generated outbreak signal for each week, this Px was then defined as the optimized threshold for that algorithm. In this study, six infectious diseases were selected and classified into TYPE A (chickenpox and mumps), TYPE B (influenza and rubella), and TYPE C [hand foot and mouth disease (HFMD) and scarlet fever]. Optimized thresholds for chickenpox (P55), mumps (P50), influenza (P40, P55, and P75), rubella (P45 and P75), HFMD (P65 and P70), and scarlet fever (P75 and P80) were identified. The C1, C2, CUSUM, SM, and MA algorithms were appropriate for TYPE A. All 6 algorithms were appropriate for TYPE B. The C1 and CUSUM algorithms were appropriate for TYPE C. It is critical to incorporate more flexible algorithms as the OGS into the CIDARS and to identify the proper OGS and corresponding recommended optimized threshold for different infectious disease types.
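For readers unfamiliar with these detectors, here is a hedged sketch of two of the ingredients named above: a percentile-based moving-window threshold (Px) and a C1-style control-chart signal in its commonly published form. Window lengths and constants are illustrative assumptions:

```python
import numpy as np

def percentile_threshold(history, x=65, window=104):
    # Px threshold from a moving window of recent weekly counts
    # (window length and percentile are illustrative)
    return np.percentile(history[-window:], x)

def c1_flags(counts, k=3.0):
    # C1-style signal in its commonly published form: flag week t
    # when its count exceeds mean + k*SD of the preceding 7 weeks
    counts = np.asarray(counts, dtype=float)
    flags = np.zeros(len(counts), dtype=bool)
    for t in range(7, len(counts)):
        base = counts[t - 7:t]
        flags[t] = counts[t] > base.mean() + k * base.std(ddof=1)
    return flags
```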
NASA Astrophysics Data System (ADS)
Deidda, Roberto; Mamalakis, Antonis; Langousis, Andreas
2015-04-01
One of the most crucial issues in statistical hydrology is the estimation of extreme rainfall from data. To that extent, based on asymptotic arguments from Extreme Excess (EE) theory, several studies have focused on developing new, or improving existing, methods to fit a Generalized Pareto Distribution (GPD) model to rainfall excesses above a properly selected threshold u. The latter is generally determined using various approaches that can be grouped into three basic classes: a) non-parametric methods that locate the changing point between extreme and non-extreme regions of the data, b) graphical methods where one studies the dependence of the GPD parameters (or related metrics) on the threshold level u, and c) Goodness of Fit (GoF) metrics that, for a certain level of significance, locate the lowest threshold u at which a GPD model is applicable. In this work, we review representative methods for GPD threshold detection, discuss fundamental differences in their theoretical bases, and apply them to daily rainfall records from the NOAA-NCDC open-access database (http://www.ncdc.noaa.gov/oa/climate/ghcn-daily/). We find that non-parametric methods that locate the changing point between extreme and non-extreme regions of the data are generally not reliable, while graphical methods and GoF metrics that rely on limiting arguments for the upper distribution tail lead to unrealistically high thresholds u. The latter is expected, since one checks the validity of the limiting arguments rather than the applicability of a GPD distribution model. Better performance is demonstrated by graphical methods and GoF metrics that rely on GPD properties. Finally, we discuss the effects of data quantization (common in hydrologic applications) on the estimated thresholds. Acknowledgments: The research project is implemented within the framework of the Action «Supporting Postdoctoral Researchers» of the Operational Program "Education and Lifelong Learning" (Action's Beneficiary: General Secretariat for Research and Technology), and is co-financed by the European Social Fund (ESF) and the Greek State.
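A minimal sketch of a GoF-style threshold scan in the spirit of class (c) above, assuming daily rainfall in a numpy array. The KS test, the minimum-excess floor, and the significance level are illustrative choices, not the authors' protocol:

```python
import numpy as np
from scipy.stats import genpareto, kstest

def gpd_threshold_scan(rain, candidates, alpha=0.05, min_excesses=30):
    # For each candidate u (ascending), fit a GPD to excesses above u and
    # keep the lowest u whose KS test does not reject the fit
    for u in np.sort(candidates):
        excesses = rain[rain > u] - u
        if len(excesses) < min_excesses:      # too few excesses to fit
            break
        c, _, scale = genpareto.fit(excesses, floc=0.0)
        if kstest(excesses, 'genpareto', args=(c, 0.0, scale)).pvalue > alpha:
            return u, c, scale
    return None
```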
Wind scatterometry with improved ambiguity selection and rain modeling
NASA Astrophysics Data System (ADS)
Draper, David Willis
Although generally accurate, the quality of SeaWinds on QuikSCAT scatterometer ocean vector winds is compromised by certain natural phenomena and retrieval algorithm limitations. This dissertation addresses three main contributors to scatterometer estimate error: poor ambiguity selection, estimate uncertainty at low wind speeds, and rain corruption. A quality assurance (QA) analysis performed on SeaWinds data suggests that about 5% of SeaWinds data contain ambiguity selection errors and that scatterometer estimation error is correlated with low wind speeds and rain events. Ambiguity selection errors are partly due to the "nudging" step (initialization from outside data). A sophisticated new non-nudging ambiguity selection approach produces generally more consistent wind than the nudging method in moderate wind conditions. The non-nudging method selects 93% of the same ambiguities as the nudged data, validating both techniques, and indicating that ambiguity selection can be accomplished without nudging. Variability at low wind speeds is analyzed using tower-mounted scatterometer data. According to theory, below a threshold wind speed, the wind fails to generate the surface roughness necessary for wind measurement. A simple analysis suggests the existence of the threshold in much of the tower-mounted scatterometer data. However, the backscatter does not "go to zero" beneath the threshold in an uncontrolled environment as theory suggests, but rather has a mean drop and higher variability below the threshold. Rain is the largest weather-related contributor to scatterometer error, affecting approximately 4% to 10% of SeaWinds data. A simple model formed via comparison of co-located TRMM PR and SeaWinds measurements characterizes the average effect of rain on SeaWinds backscatter. The model is generally accurate to within 3 dB over the tropics. The rain/wind backscatter model is used to simultaneously retrieve wind and rain from SeaWinds measurements. The simultaneous wind/rain (SWR) estimation procedure can improve wind estimates during rain, while providing a scatterometer-based rain rate estimate. SWR also affords improved rain flagging for low to moderate rain rates. QuikSCAT-retrieved rain rates correlate well with TRMM PR instantaneous measurements and TMI monthly rain averages. SeaWinds rain measurements can be used to supplement data from other rain-measuring instruments, filling spatial and temporal gaps in coverage.
Ensink, Elliot; Sinha, Jessica; Sinha, Arkadeep; Tang, Huiyuan; Calderone, Heather M; Hostetter, Galen; Winter, Jordan; Cherba, David; Brand, Randall E; Allen, Peter J; Sempere, Lorenzo F; Haab, Brian B
2015-10-06
Experiments involving the high-throughput quantification of image data require algorithms for automation. A challenge in the development of such algorithms is to properly interpret signals over a broad range of image characteristics, without the need for manual adjustment of parameters. Here we present a new approach for locating signals in image data, called Segment and Fit Thresholding (SFT). The method assesses statistical characteristics of small segments of the image and determines the best-fit trends between the statistics. Based on the relationships, SFT identifies segments belonging to background regions; analyzes the background to determine optimal thresholds; and analyzes all segments to identify signal pixels. We optimized the initial settings for locating background and signal in antibody microarray and immunofluorescence data and found that SFT performed well over multiple, diverse image characteristics without readjustment of settings. When used for the automated analysis of multicolor, tissue-microarray images, SFT correctly found the overlap of markers with known subcellular localization, and it performed better than a fixed threshold and Otsu's method for selected images. SFT promises to advance the goal of full automation in image analysis.
Noise-Riding Video Signal Threshold Generation Scheme for a Plurality of Video Signal Channels
2007-02-12
on the selected one signal channel to generate a new video signal threshold. The processing resource has an output to provide the new video signal threshold to the comparator circuit corresponding to the selected signal channel.
NASA Technical Reports Server (NTRS)
Miles, Jeffrey Hilton
2010-01-01
Combustion noise from turbofan engines has become important as the noise from sources like the fan and jet is reduced. An aligned and un-aligned coherence technique has been developed to determine a threshold level for the coherence and thereby help separate the coherent combustion noise source from other noise sources measured with far-field microphones. This method is compared with a statistics-based coherence threshold estimation method. In addition, the un-aligned coherence procedure at the same time also reveals periodicities, spectral lines, and undamped sinusoids hidden by broadband turbofan engine noise. In calculating the coherence threshold using a statistical method, one may use either the number of independent records or a larger number corresponding to the number of overlapped records used to create the average. Using data from a turbofan engine and a simulation, this paper shows that applying the Fisher z-transform to the un-aligned coherence can aid in making the proper selection of samples and produce a reasonable statistics-based coherence threshold. Examples are presented showing that the underlying tonal and coherent broadband structure which is buried under random broadband noise and jet noise can be determined. The method also shows the possible presence of indirect combustion noise. Copyright 2011 Acoustical Society of America. This article may be downloaded for personal use only. Any other use requires prior permission of the author and the Acoustical Society of America.
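For context, the statistics-based threshold mentioned above has a well-known closed form for magnitude-squared coherence estimated from independent averages, and the Fisher z-transform is a one-liner. The sketch below shows both; the choice of n_avg (independent versus overlapped record count) is exactly the selection question the paper addresses:

```python
import numpy as np

def coherence_threshold(n_avg, alpha=0.05):
    # Significance threshold for magnitude-squared coherence estimated from
    # n_avg independent (non-overlapped) records: under the zero-coherence
    # null, P(coh^2 > t) = alpha gives t = 1 - alpha**(1/(n_avg - 1)).
    return 1.0 - alpha ** (1.0 / (n_avg - 1))

def fisher_z(coh2):
    # Fisher z-transform of the coherence magnitude; approximately
    # variance-stabilizing, which eases threshold setting
    return np.arctanh(np.sqrt(coh2))
```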
A chaotic model for advertising diffusion problem with competition
NASA Astrophysics Data System (ADS)
Ip, W. H.; Yung, K. L.; Wang, Dingwei
2012-08-01
In this article, the authors extend Dawid and Feichtinger's chaotic advertising diffusion model to the duopoly case. A computer simulation system is used to test this enhanced model. Based on the analysis of simulation results, it is found that the best advertising strategy in duopoly is to increase the advertising investment to reach the best Win-Win situation, where oscillation of market portion will not occur. In order to effectively arrive at this situation, we define a synthetic index and two thresholds, and an estimation method for the parameters of the index and thresholds is proposed in this research. We can reach the Win-Win situation by simply selecting the control parameters to make the synthetic index close to the threshold of the min-oscillation state. The numerical example and computational results indicate that the proposed chaotic model is useful for describing and analysing the advertising diffusion process in duopoly and is an efficient tool for the selection and optimisation of advertising strategy.
A robustness test of the braided device foreshortening algorithm
NASA Astrophysics Data System (ADS)
Moyano, Raquel Kale; Fernandez, Hector; Macho, Juan M.; Blasco, Jordi; San Roman, Luis; Narata, Ana Paula; Larrabide, Ignacio
2017-11-01
Different computational methods have been recently proposed to simulate the virtual deployment of a braided stent inside a patient vasculature. Those methods are primarily based on the segmentation of the region of interest to obtain the local vessel morphology descriptors. The goal of this work is to evaluate the influence of the segmentation quality on the method named "Braided Device Foreshortening" (BDF). METHODS: We used the 3DRA images of 10 aneurysmatic patients (cases). The cases were segmented by applying a marching cubes algorithm with a broad range of thresholds in order to generate 10 surface models each. We selected a braided device and applied the BDF algorithm to each surface model. The range of the computed flow diverter lengths for each case was obtained to calculate the variability of the method against the threshold segmentation values. RESULTS: An evaluation study over 10 clinical cases indicates that the final length of the deployed flow diverter in each vessel model is stable, yielding a maximum difference of 11.19% in vessel diameter and a maximum of 9.14% in the simulated stent length across the threshold values. The average coefficient of variation was found to be 4.08%. CONCLUSION: A study evaluating how the segmentation threshold affects the simulated length of the deployed FD was presented. The segmentation algorithm used to segment intracranial aneurysm 3D angiography images presents small variation in the resulting stent simulation.
Speeding up Coarse Point Cloud Registration by Threshold-Independent Baysac Match Selection
NASA Astrophysics Data System (ADS)
Kang, Z.; Lindenbergh, R.; Pu, S.
2016-06-01
This paper presents an algorithm for the automatic registration of terrestrial point clouds by match selection using an efficient conditional sampling method, threshold-independent BaySAC (BAYes SAmpling Consensus), and employs the error metric of average point-to-surface residual to reduce the random measurement error and thereby approach the real registration error. BaySAC and other basic sampling algorithms usually need an artificially determined threshold by which inlier points are identified, which leads to a threshold-dependent verification process. Therefore, we applied the LMedS method to construct the cost function used to determine the optimum model, to reduce the influence of human factors and improve the robustness of the model estimate. Point-to-point and point-to-surface error metrics are most commonly used. However, point-to-point error in general consists of at least two components, random measurement error and systematic error as a result of a remaining error in the found rigid body transformation. Thus we employ the average point-to-surface residual to evaluate the registration accuracy. The proposed approaches, together with a traditional RANSAC approach, are tested on four data sets acquired by three different scanners in terms of their computational efficiency and quality of the final registration. The registration results show the standard deviation of the average point-to-surface residuals is reduced from 1.4 cm (plain RANSAC) to 0.5 cm (threshold-independent BaySAC). The results also show that, compared to the performance of RANSAC, our BaySAC strategies lead to fewer iterations and cheaper computational cost when the hypothesis set is contaminated with more outliers.
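The LMedS idea used above to avoid an artificial inlier threshold can be sketched in a few lines; the callback residual_fn and the hypothesis container are assumptions for illustration:

```python
import numpy as np

def lmeds_cost(residuals):
    # Least-median-of-squares cost: no inlier threshold needed
    return np.median(np.square(residuals))

def select_hypothesis(hypotheses, residual_fn):
    # residual_fn(h) -> residual vector of hypothesis h on the data
    # (an assumed callback); keep the hypothesis with minimal LMedS cost
    costs = [lmeds_cost(residual_fn(h)) for h in hypotheses]
    return hypotheses[int(np.argmin(costs))]
```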
Laser antisepsis of Porphyromonas gingivalis in vitro with dental lasers
NASA Astrophysics Data System (ADS)
Harris, David M.
2004-05-01
It has been shown that both pulsed Nd:YAG (1064 nm) and continuous diode (810 nm) dental lasers kill pathogenic bacteria (laser antisepsis), but a quantitative method for determining clinical dosimetry does not exist. The purpose of this study was to develop a method to quantify the efficacy of ablation of Porphyromonas gingivalis (Pg) in vitro for two different lasers. The ablation thresholds for the two lasers were compared in the following manner. The energy density was measured as a function of distance from the output of the fiber-optic delivery system. Pg cultures were grown on blood agar plates under standard anaerobic conditions. Blood agar provides an approximation of gingival tissue for the wavelengths tested in having hemoglobin as a primary absorber. Single pulses (Nd:YAG: 100 μs; diode: 100 ms) of laser energy were delivered to Pg colonies, and the energy density was increased until a small plume was observed coincident with a laser pulse. The energy density at this point defines the ablation threshold. Ablation thresholds to a single pulse were determined both for Pg and for blood agar alone. The large difference in ablation thresholds between the pigmented pathogen and the host matrix for the pulsed Nd:YAG represented a significant therapeutic ratio, and Pg was ablated without visible effect on the blood agar. Near threshold, the 810-nm diode laser destroyed both the pathogen and the gel. Clinically, the pulsed Nd:YAG may selectively destroy pigmented pathogens leaving the surrounding tissue intact. The 810-nm diode laser may not demonstrate this selectivity due to its longer pulse length and greater absorption by hemoglobin.
Ambient radiation levels in positron emission tomography/computed tomography (PET/CT) imaging center
Santana, Priscila do Carmo; de Oliveira, Paulo Marcio Campos; Mamede, Marcelo; Silveira, Mariana de Castro; Aguiar, Polyanna; Real, Raphaela Vila; da Silva, Teógenes Augusto
2015-01-01
Objective To evaluate the level of ambient radiation in a PET/CT center. Materials and Methods Previously selected and calibrated TLD-100H thermoluminescent dosimeters were utilized to measure room radiation levels. During 32 days, the detectors were placed at several strategically selected points inside the PET/CT center and in adjacent buildings. After the exposure period, the dosimeters were collected and processed to determine the radiation level. Results At none of the selected measurement points did the values exceed the radiation dose threshold for a controlled area (5 mSv/year) or a free area (0.5 mSv/year) as recommended by the Brazilian regulations. Conclusion In the present study the authors demonstrated that the whole shielding system is appropriate and, consequently, the workers are exposed to doses below the threshold established by Brazilian standards, provided the radiation protection standards are followed. PMID:25798004
Dobie, Robert A
2006-10-01
To discuss appropriate and inappropriate methods for comparing distributions of hearing thresholds of a study group with distributions in population standards, and to determine whether the thresholds of Washington State Ferries engineers are different from those of men in the general population, using both frequency-by-frequency comparisons and analysis of audiometric shape. The most recent hearing conservation program audiograms of 321 noise-exposed engineers, ages 35 to 64, were compared with the predictions of Annexes A, B, and C of ANSI S3.44. There was no screening by history or otoscopy; all audiograms were included. 95% confidence intervals (95% CIs) were calculated for the engineers' median thresholds for each ear, for the better ear (defined two ways), and for the binaural average. For Annex B, where 95% CIs are also available, it was possible to calculate z scores for the differences between Annex B and the engineers' better ears. Bulge depth, an audiometric shape statistic, measured curvature between 1 and 6 kHz. Engineers' better-ear median thresholds were worse than those in Annex A but (except at 1 kHz) were as good as or better than those in Annexes B and C, which are more appropriate for comparison to an unscreened noise-exposed group like the engineers. Average bulge depth for the engineers was similar to that of the Annex B standard (no added occupational noise) and was much less than that of audiograms created by using the standard with added occupational noise between 90 and 100 dBA. Audiograms from groups that have been selected for a particular exposure, but without regard to severity, can appropriately be compared with population standards if certain pitfalls are avoided. For unscreened study groups with large age-sex subgroups, a simple method to assess statistical significance, taking into consideration uncertainties in both the study group and the comparison standard, is the calculation of z scores for the proportion of better-ear thresholds above the Annex B median. A less powerful method combines small age-sex subgroups after age correction. Small threshold differences, even if statistically significant, may not be due to genuine differences in hearing sensitivity between the study group and the standard. Audiometric shape analysis offers an independent dimension of comparison between the study group and audiograms predicted from the ANSI S3.44 standard, with and without occupational noise exposure. Important pitfalls in comparison to population standards include nonrandom selection of study groups, inappropriate choice of population standard, use of the right- and left-ear thresholds instead of the better-ear threshold for comparison to Annex B, and comparing means with medians. The thresholds of the engineers in this study were similar to published standards for an unscreened population.
Design of a reliable and operational landslide early warning system at regional scale
NASA Astrophysics Data System (ADS)
Calvello, Michele; Piciullo, Luca; Gariano, Stefano Luigi; Melillo, Massimo; Brunetti, Maria Teresa; Peruccacci, Silvia; Guzzetti, Fausto
2017-04-01
Landslide early warning systems at regional scale are used to warn authorities, civil protection personnel and the population about the occurrence of rainfall-induced landslides over wide areas, typically through the prediction and measurement of meteorological variables. A warning model for these systems must include a regional correlation law and a decision algorithm. A regional correlation law can be defined as a functional relationship between rainfall and landslides; it is typically based on thresholds of rainfall indicators (e.g., cumulated rainfall, rainfall duration) related to different exceedance probabilities of landslide occurrence. A decision algorithm can be defined as a set of assumptions and procedures linking rainfall thresholds to warning levels. The design and the employment of an operational and reliable early warning system for rainfall-induced landslides at regional scale depend on the identification of a reliable correlation law as well as on the definition of a suitable decision algorithm. Herein, a five-step process chain addressing both issues and based on rainfall thresholds is proposed; the procedure is tested in a landslide-prone area of the Campania region in southern Italy. To this purpose, a database of 96 shallow landslides triggered by rainfall in the period 2003-2010 and rainfall data gathered from 58 rain gauges are used. First, a set of rainfall thresholds are defined applying a frequentist method to reconstructed rainfall conditions triggering landslides in the test area. In the second step, several thresholds at different exceedance probabilities are evaluated, and different percentile combinations are selected for the activation of three warning levels. Subsequently, within steps three and four, the issuing of warning levels is based on the comparison, over time and for each combination, between the measured rainfall and the pre-defined warning level thresholds. Finally, the optimal percentile combination to be employed in the regional early warning system is selected evaluating the model performance in terms of success and error indicators by means of the "event, duration matrix, performance" (EDuMaP) method.
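Step one above, the frequentist fit of rainfall thresholds, is commonly implemented as a percentile-shifted power law between rainfall duration and cumulated rainfall. The sketch below follows that generic recipe and is not necessarily the authors' exact procedure (residual modeling details are assumed):

```python
import numpy as np

def frequentist_threshold(duration_h, cum_rain_mm, percentile=5.0):
    # Fit log10(E) = a + b*log10(D) to triggering rainfall conditions,
    # then shift the intercept to the chosen percentile of the residuals
    # so that `percentile`% of triggering events lie below the curve
    logD = np.log10(duration_h)
    logE = np.log10(cum_rain_mm)
    b, a = np.polyfit(logD, logE, 1)              # slope, intercept
    resid = logE - (a + b * logD)
    a_p = a + np.percentile(resid, percentile)
    return 10.0 ** a_p, b                         # threshold E = alpha * D**beta
```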
Manning, F.W.; Groothuis, S.E.; Lykins, J.H.; Papke, D.M.
1962-06-12
An improved area radiation dose monitor is designed which is adapted to compensate continuously for background radiation below a threshold dose rate and to give warning when the dose integral of the dose rate of an above-threshold radiation excursion exceeds a selected value. This is accomplished by providing means for continuously charging an ionization chamber. The chamber provides a first current proportional to the incident radiation dose rate. Means are provided for generating a second current, including means for nulling out the first current with the second current at all values of the first current corresponding to dose rates below a selected threshold dose rate value. The second current has a maximum value corresponding to that of the first current at the threshold dose rate. The excess of the first current over the second current, which occurs above the threshold, is integrated and an alarm is given at a selected integrated value of the excess corresponding to a selected radiation dose. (AEC)
Selection of representative embankments based on rough set - fuzzy clustering method
NASA Astrophysics Data System (ADS)
Bin, Ou; Lin, Zhi-xiang; Fu, Shu-yan; Gao, Sheng-song
2018-02-01
The premise condition of comprehensive evaluation of embankment safety is the selection of representative unit embankments. On the basis of dividing the unit levee, the influencing factors and the classification of the unit embankments are drafted. Based on rough set - fuzzy clustering, the influence factors of the unit embankments are measured by quantitative and qualitative indexes. The fuzzy similarity matrix of the standard embankment is constructed, and the fuzzy equivalence matrix is calculated from the fuzzy similarity matrix by the square method. By setting a threshold on the fuzzy equivalence matrix, the unit embankments are clustered, and the representative unit embankment is selected from the resulting classification.
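The square method and threshold-based clustering described above can be sketched as follows: max-min composition is repeated until the matrix stabilizes (the transitive closure, i.e., the fuzzy equivalence matrix), then a lambda-cut groups the units. All names are illustrative:

```python
import numpy as np

def maxmin_compose(a, b):
    # Max-min composition of two fuzzy relations:
    # result[i, j] = max_k min(a[i, k], b[k, j])
    return np.max(np.minimum(a[:, :, None], b[None, :, :]), axis=1)

def fuzzy_equivalence(similarity):
    # Square method: repeatedly compose R with itself until unchanged
    r = similarity.copy()
    while True:
        r2 = maxmin_compose(r, r)
        if np.allclose(r2, r):
            return r
        r = r2

def lambda_cut_clusters(equiv, lam):
    # Units i and j share a class when equiv[i, j] >= lam (the threshold)
    n = len(equiv)
    labels, current = -np.ones(n, dtype=int), 0
    for i in range(n):
        if labels[i] < 0:
            labels[equiv[i] >= lam] = current
            current += 1
    return labels
```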
Stroke-model-based character extraction from gray-level document images.
Ye, X; Cheriet, M; Suen, C Y
2001-01-01
Global gray-level thresholding techniques such as Otsu's method, and local gray-level thresholding techniques such as edge-based segmentation or the adaptive thresholding method are powerful in extracting character objects from simple or slowly varying backgrounds. However, they are found to be insufficient when the backgrounds include sharply varying contours or fonts in different sizes. A stroke-model is proposed to depict the local features of character objects as double-edges in a predefined size. This model enables us to detect thin connected components selectively, while ignoring relatively large backgrounds that appear complex. Meanwhile, since the stroke width restriction is fully factored in, the proposed technique can be used to extract characters in predefined font sizes. To process large volumes of documents efficiently, a hybrid method is proposed for character extraction from various backgrounds. Using the measurement of class separability to differentiate images with simple backgrounds from those with complex backgrounds, the hybrid method can process documents with different backgrounds by applying the appropriate methods. Experiments on extracting handwriting from a check image, as well as machine-printed characters from scene images demonstrate the effectiveness of the proposed model.
Aruga, Yasuhiro; Kozuka, Masaya
2016-04-01
Needle-shaped precipitates in an aged Al-0.62Mg-0.93Si (mass%) alloy were identified using a compositional threshold method, an isoconcentration surface, in atom probe tomography (APT). The influence of thresholds on the morphological and compositional characteristics of the precipitates was investigated. Utilizing optimum parameters for the concentration space, a reliable number density of the precipitates is obtained without dependence on the elemental concentration threshold in comparison with evaluation by transmission electron microscopy (TEM). It is suggested that careful selection of the concentration space in APT can lead to a reasonable average Mg/Si ratio for the precipitates. It was found that the maximum length and maximum diameter of the precipitates are affected by the elemental concentration threshold. Adjustment of the concentration threshold gives better agreement with the precipitate dimensions measured by TEM. © The Author 2015. Published by Oxford University Press on behalf of The Japanese Society of Microscopy. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Everett, Kibri H; Potter, Margaret A; Wheaton, William D; Gleason, Sherrianne M; Brown, Shawn T; Lee, Bruce Y
2013-01-01
Public health agencies use mass immunization locations to quickly administer vaccines to protect a population against an epidemic. The selection of such locations is frequently determined by available staffing levels, and in some places not all potential sites can be opened, often because of a lack of resources. Public health agencies need assistance in determining which n sites are the prime ones to open, given available staff, to minimize travel time and travel distance for those in the population who need to get to a site to receive treatment. Employ geospatial analytical methods to identify the prime n locations from a predetermined set of potential locations (e.g., schools) and determine which locations may not be able to achieve the throughput necessary to reach the herd immunity threshold based on varying R0 values. Spatial location-allocation algorithms were used to select the ideal n mass vaccination locations. Allegheny County, Pennsylvania, served as the study area. The most favorable sites were selected, and the number of individuals required to be vaccinated to achieve the herd immunity threshold for a given R0, ranging from 1.5 to 7, was determined. Locations that did not meet the Centers for Disease Control and Prevention throughput recommendation for smallpox were identified. At R0 = 1.5, all mass immunization locations met the required throughput to achieve the herd immunity threshold within 5 days. As R0 increased from 2 to 7, an increasing number of sites were inadequate to meet throughput requirements. Identifying the top n sites and categorizing those with throughput challenges allows health departments to adjust staffing, shift length, or the number of sites. This method has the potential to be expanded to select immunization locations under a number of additional scenarios.
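The herd immunity threshold driving these throughput requirements is the classical 1 − 1/R0 relation. A small sketch (the population figure is an arbitrary assumption for illustration):

```python
def herd_immunity_coverage(r0):
    # Classical herd immunity threshold: immune fraction at which each
    # case infects fewer than one susceptible person on average
    return 1.0 - 1.0 / r0

population = 1_200_000                    # illustrative service population
for r0 in (1.5, 2.0, 3.0, 5.0, 7.0):
    needed = herd_immunity_coverage(r0) * population
    print(f"R0={r0}: about {needed:,.0f} people must be immunized")
```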
Wáng, Yì Xiáng J; Li, Yáo T; Chevallier, Olivier; Huang, Hua; Leung, Jason Chi Shun; Chen, Weitian; Lu, Pu-Xuan
2018-01-01
Background Intravoxel incoherent motion (IVIM) tissue parameters depend on the threshold b-value. Purpose To explore how the threshold b-value impacts PF (f), Dslow (D), and Dfast (D*) values and their performance for liver fibrosis detection. Material and Methods Fifteen healthy volunteers and 33 hepatitis B patients were included. With a 1.5-T magnetic resonance (MR) scanner and respiration gating, IVIM data were acquired with ten b-values of 10, 20, 40, 60, 80, 100, 150, 200, 400, and 800 s/mm². Signal measurement was performed on the right liver. Segmented-unconstrained analysis was used to compute IVIM parameters, and six threshold b-values in the range of 40-200 s/mm² were compared. PF, Dslow, and Dfast values were placed along the x-axis, y-axis, and z-axis, and a plane was defined to separate volunteers from patients. Results Higher threshold b-values were associated with higher PF measurements, while lower threshold b-values led to higher Dslow and Dfast measurements. The dependence of PF, Dslow, and Dfast on the threshold b-value differed between healthy livers and fibrotic livers, with the healthy livers showing a higher dependence. Threshold b-value = 60 s/mm² showed the largest mean distance between healthy-liver and fibrotic-liver datapoints, and a classification and regression tree showed that a combination of PF (PF < 9.5%), Dslow (Dslow < 1.239 × 10⁻³ mm²/s), and Dfast (Dfast < 20.85 × 10⁻³ mm²/s) differentiated healthy individuals from all individual fibrotic livers with an area under the curve of logistic regression (AUC) of 1. Conclusion For segmented-unconstrained analysis, the selection of threshold b-value = 60 s/mm² improves IVIM differentiation between healthy livers and fibrotic livers.
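Segmented IVIM analysis of the kind described above is typically a two-step fit around the threshold b-value. The sketch below follows that common recipe (unconstrained variant; S0 approximated by the lowest-b signal) rather than the authors' exact pipeline:

```python
import numpy as np
from scipy.optimize import curve_fit

def segmented_ivim(bvals, signal, b_threshold=60.0):
    b = np.asarray(bvals, dtype=float)
    s = np.asarray(signal, dtype=float)
    s0 = s[np.argmin(b)]                  # lowest-b signal approximates S(0)
    hi = b >= b_threshold
    # Step 1: mono-exponential fit on the high-b segment gives D_slow
    slope, log_s_int = np.polyfit(b[hi], np.log(s[hi]), 1)
    d_slow = -slope
    pf = 1.0 - np.exp(log_s_int) / s0     # perfusion fraction from intercept
    # Step 2: with PF and D_slow fixed, fit D_fast on all b-values
    model = lambda bb, d_fast: s0 * (pf * np.exp(-bb * d_fast)
                                     + (1.0 - pf) * np.exp(-bb * d_slow))
    (d_fast,), _ = curve_fit(model, b, s, p0=[20e-3])
    return pf, d_slow, d_fast

# Illustrative use with the b-values listed above and a synthetic signal
b = np.array([10, 20, 40, 60, 80, 100, 150, 200, 400, 800], dtype=float)
sig = 1000 * (0.08 * np.exp(-b * 0.02) + 0.92 * np.exp(-b * 1.1e-3))
print(segmented_ivim(b, sig))
```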
Method for nonlinear optimization for gas tagging and other systems
Chen, Ting; Gross, Kenny C.; Wegerich, Stephan
1998-01-01
A method and system for providing nuclear fuel rods with a configuration of isotopic gas tags. The method includes selecting a true location of a first gas tag node; selecting initial locations for the remaining n-1 nodes using target gas tag compositions; generating a set of random gene pools with L nodes; and applying a Hopfield network to compute an energy, or cost, for each of the L gene pools, using selected constraints to establish minimum energy states that identify optimal gas tag nodes, with each energy compared to a convergence threshold. Upon identifying a gas tag node, the procedure continues to establish the next gas tag node, until all remaining n nodes have been established.
Griffel, G; Marshall, W K; Gravé, I; Yariv, A; Nabiev, R
1991-08-01
Frequency selectivity of a novel type of multielement, multisection laterally coupled semiconductor laser array is studied using the round-trip method. It is found that such a structure should lead to a strong frequency selectivity owing to a periodic dependence of the threshold gain on the frequency. A gain-guided two-coupled-cavity device was fabricated. The experimental results show excellent agreement with the theoretical prediction.
Pavlidis, Paul; Qin, Jie; Arango, Victoria; Mann, John J; Sibille, Etienne
2004-06-01
One of the challenges in the analysis of gene expression data is placing the results in the context of other data available about genes and their relationships to each other. Here, we approach this problem in the study of gene expression changes associated with age in two areas of the human prefrontal cortex, comparing two computational methods. The first method, "overrepresentation analysis" (ORA), is based on statistically evaluating the fraction of genes in a particular gene ontology class found among the set of genes showing age-related changes in expression. The second method, "functional class scoring" (FCS), examines the statistical distribution of individual gene scores among all genes in the gene ontology class and does not involve an initial gene selection step. We find that FCS yields more consistent results than ORA, and the results of ORA depended strongly on the gene selection threshold. Our findings highlight the utility of functional class scoring for the analysis of complex expression data sets and emphasize the advantage of considering all available genomic information rather than sets of genes that pass a predetermined "threshold of significance."
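To make the contrast concrete, here is a hedged sketch of the two approaches: ORA as a hypergeometric tail test on a thresholded gene list (a standard instantiation of the idea, not necessarily the authors' exact statistic), and FCS as a threshold-free comparison of per-gene score distributions, with a rank-sum test standing in for whatever class statistic is used:

```python
import numpy as np
from scipy.stats import hypergeom, mannwhitneyu

def ora_pvalue(n_genes, n_selected, n_in_class, n_overlap):
    # ORA: probability of >= n_overlap class members among the selected
    # genes; note the result depends on the gene selection threshold
    return hypergeom.sf(n_overlap - 1, n_genes, n_in_class, n_selected)

def fcs_pvalue(scores_in_class, scores_outside):
    # FCS: compare per-gene score distributions with no selection step
    return mannwhitneyu(scores_in_class, scores_outside,
                        alternative='greater').pvalue
```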
System and method for progressive band selection for hyperspectral images
NASA Technical Reports Server (NTRS)
Fisher, Kevin (Inventor)
2013-01-01
Disclosed herein are systems, methods, and non-transitory computer-readable storage media for progressive band selection for hyperspectral images. A system having a module configured to control a processor to practice the method calculates the virtual dimensionality of a hyperspectral image having multiple bands to determine a quantity Q of how many bands are needed for a threshold level of information, ranks each band based on a statistical measure, selects Q bands from the multiple bands to generate a subset of bands based on the virtual dimensionality, and generates a reduced image based on the subset of bands. This approach can create reduced datasets of full hyperspectral images tailored to individual applications. The system uses a metric specific to a target application to rank the image bands, and then selects the most useful bands. The number of bands selected can be specified manually or calculated from the hyperspectral image's virtual dimensionality.
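A minimal sketch of the band ranking and selection step, with per-band variance standing in for the application-specific metric and q supplied by the virtual-dimensionality estimate:

```python
import numpy as np

def select_bands(cube, q, metric=np.var):
    # Rank every band of an (H, W, B) hyperspectral cube with `metric`
    # and keep the top q; variance is only an illustrative stand-in
    flat = cube.reshape(-1, cube.shape[-1])               # pixels x bands
    scores = np.array([metric(flat[:, j]) for j in range(flat.shape[1])])
    keep = np.sort(np.argsort(scores)[::-1][:q])          # top-q band indices
    return cube[..., keep], keep
```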
High mobility high efficiency organic films based on pure organic materials
Salzman, Rhonda F [Ann Arbor, MI; Forrest, Stephen R [Ann Arbor, MI
2009-01-27
A method of purifying small molecule organic material, performed as a series of operations beginning with a first sample of the organic small molecule material. The first step is to purify the organic small molecule material by thermal gradient sublimation. The second step is to test the purity of at least one sample from the purified organic small molecule material by spectroscopy. The third step is to repeat the first through third steps on the purified small molecule material if the spectroscopic testing reveals any peaks exceeding a threshold percentage of a magnitude of a characteristic peak of a target organic small molecule. The steps are performed at least twice. The threshold percentage is at most 10%. Preferably the threshold percentage is 5% and more preferably 2%. The threshold percentage may be selected based on the spectra of past samples that achieved target performance characteristics in finished devices.
Threshold regression to accommodate a censored covariate.
Qian, Jing; Chiou, Sy Han; Maye, Jacqueline E; Atem, Folefac; Johnson, Keith A; Betensky, Rebecca A
2018-06-22
In several common study designs, regression modeling is complicated by the presence of censored covariates. Examples of such covariates include maternal age of onset of dementia that may be right censored in an Alzheimer's amyloid imaging study of healthy subjects, metabolite measurements that are subject to limit of detection censoring in a case-control study of cardiovascular disease, and progressive biomarkers whose baseline values are of interest, but are measured post-baseline in longitudinal neuropsychological studies of Alzheimer's disease. We propose threshold regression approaches for linear regression models with a covariate that is subject to random censoring. Threshold regression methods allow for immediate testing of the significance of the effect of a censored covariate. In addition, they provide for unbiased estimation of the regression coefficient of the censored covariate. We derive the asymptotic properties of the resulting estimators under mild regularity conditions. Simulations demonstrate that the proposed estimators have good finite-sample performance, and often offer improved efficiency over existing methods. We also derive a principled method for selection of the threshold. We illustrate the approach in application to an Alzheimer's disease study that investigated brain amyloid levels in older individuals, as measured through positron emission tomography scans, as a function of maternal age of dementia onset, with adjustment for other covariates. We have developed an R package, censCov, for implementation of our method, available at CRAN. © 2018, The International Biometric Society.
Dudley-Javoroski, S.
2010-01-01
Summary Surveillance of femur metaphysis bone mineral density (BMD) decline after spinal cord injury (SCI) may be subject to slice placement error of 2.5%. Adaptations to anti-osteoporosis measures should exceed this potential source of error. Image analysis parameters likewise affect BMD output and should be selected strategically in longitudinal studies. Introduction Understanding the longitudinal changes in bone mineral density (BMD) after spinal cord injury (SCI) is important when assessing new interventions. We determined the longitudinal effect of SCI on BMD of the femur metaphysis. To facilitate interpretation of longitudinal outcomes, we (1) determined the BMD difference associated with erroneous peripheral quantitative computed tomography (pQCT) slice placement, and (2) determined the effect of operator-selected pQCT peel algorithms on BMD. Methods pQCT images were obtained from the femur metaphysis (12% of length from distal end) of adult subjects with and without SCI. Slice placement errors were simulated at 3 mm intervals and were processed in two ways (threshold-based vs. concentric peel). Results BMD demonstrated a rapid decline over 2 years post-injury. BMD differences attributable to operator-selected peel methods were large (17.3% for subjects with SCI). Conclusions Femur metaphysis BMD declines after SCI in a manner similar to other anatomic sites. Concentric (percentage-based) peel methods may be most appropriate when special sensitivity is required to detect BMD adaptations. Threshold-based methods may be more appropriate when asymmetric adaptations are observed. PMID:19707702
Operational Dynamic Configuration Analysis
NASA Technical Reports Server (NTRS)
Lai, Chok Fung; Zelinski, Shannon
2010-01-01
Sectors may combine or split within areas of specialization in response to changing traffic patterns. This method of managing capacity and controller workload could be made more flexible by dynamically modifying sector boundaries. Much work has been done on methods for dynamically creating new sector boundaries [1-5]. Many assessments of dynamic configuration methods assume the current day baseline configuration remains fixed [6-7]. A challenging question is how to select a dynamic configuration baseline to assess potential benefits of proposed dynamic configuration concepts. Bloem used operational sector reconfigurations as a baseline [8]. The main difficulty is that operational reconfiguration data is noisy. Reconfigurations often occur frequently to accommodate staff training or breaks, or to complete a more complicated reconfiguration through a rapid sequence of simpler reconfigurations. Gupta quantified a few aspects of airspace boundary changes from this data [9]. Most of these metrics are unique to sector combining operations and not applicable to more flexible dynamic configuration concepts. To better understand what sort of reconfigurations are acceptable or beneficial, more configuration change metrics should be developed and their distribution in current practice should be computed. This paper proposes a method to select a simple sequence of configurations among operational configurations to serve as a dynamic configuration baseline for future dynamic configuration concept assessments. New configuration change metrics are applied to the operational data to establish current day thresholds for these metrics. These thresholds are then corroborated, refined, or dismissed based on airspace practitioner feedback. The dynamic configuration baseline selection method uses a k-means clustering algorithm to select the sequence of configurations and trigger times from a given day of operational sector combination data. The clustering algorithm selects a simplified schedule containing k configurations based on stability score of the sector combinations among the raw operational configurations. In addition, the number of the selected configurations is determined based on balance between accuracy and assessment complexity.
2016-01-01
The objectives of the study were to (1) investigate the potential of using monopolar psychophysical detection thresholds for estimating spatial selectivity of neural excitation with cochlear implants and to (2) examine the effect of site removal on speech recognition based on the threshold measure. Detection thresholds were measured in Cochlear Nucleus® device users using monopolar stimulation for pulse trains that were of (a) low rate and long duration, (b) high rate and short duration, and (c) high rate and long duration. Spatial selectivity of neural excitation was estimated by a forward-masking paradigm, where the probe threshold elevation in the presence of a forward masker was measured as a function of masker-probe separation. The strength of the correlation between the monopolar thresholds and the slopes of the masking patterns systematically reduced as neural response of the threshold stimulus involved interpulse interactions (refractoriness and sub-threshold adaptation), and spike-rate adaptation. Detection threshold for the low-rate stimulus most strongly correlated with the spread of forward masking patterns and the correlation reduced for long and high rate pulse trains. The low-rate thresholds were then measured for all electrodes across the array for each subject. Subsequently, speech recognition was tested with experimental maps that deactivated five stimulation sites with the highest thresholds and five randomly chosen ones. Performance with deactivating the high-threshold sites was better than performance with the subjects’ clinical map used every day with all electrodes active, in both quiet and background noise. Performance with random deactivation was on average poorer than that with the clinical map but the difference was not significant. These results suggested that the monopolar low-rate thresholds are related to the spatial neural excitation patterns in cochlear implant users and can be used to select sites for more optimal speech recognition performance. PMID:27798658
NASA Astrophysics Data System (ADS)
Wang, Jingtao; Li, Lixiang; Peng, Haipeng; Yang, Yixian
2017-02-01
In this study, we propose the concept of judgment space to investigate the quantum-secret-sharing scheme based on local distinguishability (called LOCC-QSS). With this concept, the properties of orthogonal multiqudit entangled states under restricted local operation and classical communication (LOCC) can be described more clearly. According to these properties, we reveal that, in the previous (k, n)-threshold LOCC-QSS scheme, there are two required conditions for the selected quantum states to resist the unambiguous attack: (i) their k-level judgment spaces are orthogonal, and (ii) their (k-1)-level judgment spaces are equal. Practically, if k
Edge detection based on adaptive threshold b-spline wavelet for optical sub-aperture measuring
NASA Astrophysics Data System (ADS)
Zhang, Shiqi; Hui, Mei; Liu, Ming; Zhao, Zhu; Dong, Liquan; Liu, Xiaohua; Zhao, Yuejin
2015-08-01
In the research of optical synthetic aperture imaging systems, phase congruency is the main problem, and it is necessary to detect the sub-aperture phase. The edge of the sub-aperture system is more complex than that in a traditional optical imaging system. Moreover, with the existence of steep slopes on large-aperture optical components, interference fringes may be quite dense in interference imaging, and deep phase gradients may cause a loss of phase information. Therefore, an efficient edge detection method is urgently needed. Wavelet analysis is a powerful tool widely used in image processing. Based on its multi-scale transform properties, edge regions are detected with high precision at small scales, while noise is reduced as the scale increases, so the transform has a certain noise-suppression effect. In addition, an adaptive threshold method, which sets different thresholds in different regions, can separate edge points from noise. Firstly, the fringe pattern is obtained and a cubic b-spline wavelet is adopted as the smoothing function. After multi-scale wavelet decomposition of the whole image, we find the local modulus maxima along the gradient directions. Because these maxima still contain noise, the adaptive threshold method is used to select among them: points whose modulus maxima exceed the threshold are taken as boundary points. Finally, we apply erosion and dilation to the resulting image to obtain a continuous image boundary.
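The following sketch captures the shape of that pipeline under stated assumptions: Gaussian smoothing at several scales stands in for the cubic b-spline wavelet, the gradient magnitude stands in for the wavelet modulus maxima, and the per-block threshold rule (mean plus two standard deviations) is invented for illustration.

```python
# Simplified multi-scale edge detection with a region-adaptive threshold.
import numpy as np
from scipy import ndimage

def edge_map(fringe, scales=(1, 2, 4), block=32):
    edges = np.zeros(fringe.shape, dtype=bool)
    for s in scales:
        smooth = ndimage.gaussian_filter(fringe, s)   # smoothing function
        gx = ndimage.sobel(smooth, axis=1)
        gy = ndimage.sobel(smooth, axis=0)
        mag = np.hypot(gx, gy)
        # Adaptive threshold: a different cutoff inside each block region.
        for i in range(0, mag.shape[0], block):
            for j in range(0, mag.shape[1], block):
                w = mag[i:i + block, j:j + block]
                edges[i:i + block, j:j + block] |= w > w.mean() + 2 * w.std()
    # Erosion then dilation (morphological opening) removes isolated points
    # and helps produce a continuous boundary.
    return ndimage.binary_dilation(ndimage.binary_erosion(edges))

fringe = np.sin(np.add.outer(np.arange(128), np.arange(128)) / 4.0)
print(edge_map(fringe).sum(), "edge pixels detected")
```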
Code of Federal Regulations, 2010 CFR
2010-10-01
... factors in the selection decision. (iii) Orders exceeding $5 million. For task or delivery orders in... procedures in 5.705. (11) When using the Governmentwide commercial purchase card as a method of payment, orders at or below the micro-purchase threshold are exempt from verification in the Central Contractor...
Code of Federal Regulations, 2012 CFR
2012-10-01
... 48 Federal Acquisition Regulations System 4 2012-10-01 2012-10-01 false Short selection process for contracts not to exceed the simplified acquisition threshold. 436.602-5 Section 436.602-5 Federal... to exceed the simplified acquisition threshold. The HCA may include either or both procedures in FAR...
Grimbergen, M C M; van Swol, C F P; Kendall, C; Verdaasdonk, R M; Stone, N; Bosch, J L H R
2010-01-01
The overall quality of Raman spectra in the near-infrared region, where biological samples are often studied, has benefited from various improvements to optical instrumentation over the past decade. However, obtaining ample spectral quality for analysis is still challenging due to device requirements and short integration times required for (in vivo) clinical applications of Raman spectroscopy. Multivariate analytical methods, such as principal component analysis (PCA) and linear discriminant analysis (LDA), are routinely applied to Raman spectral datasets to develop classification models. Data compression is necessary prior to discriminant analysis to prevent or decrease the degree of over-fitting. The logical threshold for the selection of principal components (PCs) to be used in discriminant analysis is likely to be at a point before the PCs begin to introduce equivalent signal and noise and, hence, include no additional value. Assessment of the signal-to-noise ratio (SNR) at a certain peak or over a specific spectral region will depend on the sample measured. Therefore, the mean SNR over the whole spectral region (SNR_msr) is determined in the original spectrum as well as for spectra reconstructed from an increasing number of principal components. This paper introduces a method of assessing the influence of signal and noise from individual PC loads and indicates a method of selection of PCs for LDA. To evaluate this method, two data sets with different SNRs were used. The sets were obtained with the same Raman system and the same measurement parameters on bladder tissue collected during white light cystoscopy (set A) and fluorescence-guided cystoscopy (set B). This method shows that the mean SNR over the spectral range in the original Raman spectra of these two data sets is related to the signal and noise contribution of principal component loads. The difference in mean SNR over the spectral range can also be appreciated since fewer principal components can reliably be used in the low SNR data set (set B) compared to the high SNR data set (set A). Despite the fact that no definitive threshold could be found, this method may help to determine the cutoff for the number of principal components used in discriminant analysis. Future analysis of a selection of spectral databases using this technique will allow optimum thresholds to be selected for different applications and spectral data quality levels.
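A toy version of that selection idea, under stated assumptions: spectra are reconstructed from a growing number of PCs and a crude mean-SNR proxy (smoothed reconstruction as signal, high-frequency residual as noise) is tracked until it approaches the SNR of the original spectra. The synthetic data and the SNR definition are illustrative, not the paper's exact formulation.

```python
import numpy as np
from scipy.ndimage import uniform_filter1d
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
true = np.sin(np.linspace(0, 6 * np.pi, 400))          # shared spectral signal
spectra = true + 0.3 * rng.standard_normal((60, 400))  # 60 noisy spectra

def mean_snr(x):
    """Mean SNR proxy over the whole spectral region."""
    smooth = uniform_filter1d(x, 9, axis=1)
    return np.abs(smooth).mean() / (x - smooth).std()

pca = PCA(n_components=20).fit(spectra)
scores = pca.transform(spectra)
print(f"original spectra: SNR proxy = {mean_snr(spectra):.2f}")
for n in (1, 3, 5, 10, 20):
    recon = scores[:, :n] @ pca.components_[:n] + pca.mean_
    print(f"{n:2d} PCs: SNR proxy = {mean_snr(recon):.2f}")
```

As more PCs are added, the reconstruction re-absorbs noise and its SNR proxy falls toward that of the raw spectra; a cutoff just before that convergence is the kind of threshold the abstract describes.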
IMRT QA: Selecting gamma criteria based on error detection sensitivity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steers, Jennifer M.; Fraass, Benedick A., E-mail: benedick.fraass@cshs.org
Purpose: The gamma comparison is widely used to evaluate the agreement between measurements and treatment planning system calculations in patient-specific intensity modulated radiation therapy (IMRT) quality assurance (QA). However, recent publications have raised concerns about the lack of sensitivity when employing commonly used gamma criteria. Understanding the actual sensitivity of a wide range of different gamma criteria may allow the definition of more meaningful gamma criteria and tolerance limits in IMRT QA. We present a method that allows the quantitative determination of gamma criteria sensitivity to induced errors which can be applied to any unique combination of device, delivery technique, and software utilized in a specific clinic. Methods: A total of 21 DMLC IMRT QA measurements (ArcCHECK®, Sun Nuclear) were compared to QA plan calculations with induced errors. Three scenarios were studied: MU errors, multi-leaf collimator (MLC) errors, and the sensitivity of the gamma comparison to changes in penumbra width. Gamma comparisons were performed between measurements and error-induced calculations using a wide range of gamma criteria, resulting in a total of over 20 000 gamma comparisons. Gamma passing rates for each error class and case were graphed against error magnitude to create error curves in order to represent the range of missed errors in routine IMRT QA using 36 different gamma criteria. Results: This study demonstrates that systematic errors and case-specific errors can be detected by the error curve analysis. Depending on the location of the error curve peak (e.g., not centered about zero), 3%/3 mm threshold = 10% at 90% pixels passing may miss errors as large as 15% MU errors and ±1 cm random MLC errors for some cases. As the dose threshold parameter was increased for a given %Diff/distance-to-agreement (DTA) setting, error sensitivity was increased by up to a factor of two for select cases. This increased sensitivity with increasing dose threshold was consistent across all studied combinations of %Diff/DTA. Criteria such as 2%/3 mm and 3%/2 mm with a 50% threshold at 90% pixels passing are shown to be more appropriately sensitive without being overly strict. However, a broadening of the penumbra by as much as 5 mm in the beam configuration was difficult to detect with commonly used criteria, as well as with the previously mentioned criteria utilizing a threshold of 50%. Conclusions: We have introduced the error curve method, an analysis technique which allows the quantitative determination of gamma criteria sensitivity to induced errors. The application of the error curve method using DMLC IMRT plans measured on the ArcCHECK® device demonstrated that large errors can potentially be missed in IMRT QA with commonly used gamma criteria (e.g., 3%/3 mm, threshold = 10%, 90% pixels passing). Additionally, increasing the dose threshold value can offer dramatic increases in error sensitivity. This approach may allow the selection of more meaningful gamma criteria for IMRT QA and is straightforward to apply to other combinations of devices and treatment techniques.
Ding, Yi; Peng, Kai; Yu, Miao; Lu, Lei; Zhao, Kun
2017-08-01
The performance of the two selected-spatial-frequency phase unwrapping methods is limited by a phase error bound beyond which errors will occur in the fringe order, leading to a significant error in the recovered absolute phase map. In this paper, we propose a method to detect and correct wrong fringe orders. Two constraints are introduced during the fringe order determination of the two selected-spatial-frequency phase unwrapping methods. A strategy to detect and correct the wrong fringe orders is also described. Compared with existing methods, we do not need to estimate a threshold associated with absolute phase values to determine fringe order errors, which makes the approach more reliable and avoids the search procedure used in detecting and correcting successive fringe order errors. The effectiveness of the proposed method is validated by the experimental results.
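A minimal sketch of the underlying fringe-order arithmetic for two selected spatial frequencies is given below; the noise levels and the simple neighbour-consistency check used to flag suspect orders are assumptions for illustration, not the paper's actual constraints.

```python
# Fringe order determination for two selected spatial frequencies, with a
# naive consistency check standing in for the paper's two constraints.
import numpy as np

f_low, f_high = 1, 16
x = np.linspace(0, 1, 1000)
phase_true = 2 * np.pi * f_high * x              # absolute phase to recover
rng = np.random.default_rng(2)
wrap = lambda p: (p + np.pi) % (2 * np.pi) - np.pi
phi_high = wrap(phase_true + 0.05 * rng.standard_normal(x.size))
phi_low = wrap(2 * np.pi * f_low * x + 0.05 * rng.standard_normal(x.size))

# Fringe order from the scaled low-frequency phase.
k = np.rint((f_high / f_low * np.unwrap(phi_low) - phi_high) / (2 * np.pi))
absolute = phi_high + 2 * np.pi * k

# Flag suspicious orders: neighbouring fringe orders should differ by <= 1.
suspect = np.flatnonzero(np.abs(np.diff(k)) > 1)
print(f"max phase error: {np.abs(absolute - phase_true).max():.3f} rad,"
      f" suspect orders: {suspect.size}")
```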
An adaptive design for updating the threshold value of a continuous biomarker
Spencer, Amy V.; Harbron, Chris; Mander, Adrian; Wason, James; Peers, Ian
2017-01-01
Potential predictive biomarkers are often measured on a continuous scale, but in practice, a threshold value to divide the patient population into biomarker ‘positive’ and ‘negative’ is desirable. Early phase clinical trials are increasingly using biomarkers for patient selection, but at this stage, it is likely that little will be known about the relationship between the biomarker and the treatment outcome. We describe a single-arm trial design with adaptive enrichment, which can increase power to demonstrate efficacy within a patient subpopulation, the parameters of which are also estimated. Our design enables us to learn about the biomarker and optimally adjust the threshold during the study, using a combination of generalised linear modelling and Bayesian prediction. At the final analysis, a binomial exact test is carried out, allowing the hypothesis that ‘no population subset exists in which the novel treatment has a desirable response rate’ to be tested. Through extensive simulations, we are able to show increased power over fixed threshold methods in many situations without increasing the type-I error rate. We also show that estimates of the threshold, which defines the population subset, are unbiased and often more precise than those from fixed threshold studies. We provide an example of the method applied (retrospectively) to publicly available data from a study of the use of tamoxifen after mastectomy by the German Breast Study Group, where progesterone receptor is the biomarker of interest. PMID:27417407
IMRT QA: Selecting gamma criteria based on error detection sensitivity.
Steers, Jennifer M; Fraass, Benedick A
2016-04-01
The gamma comparison is widely used to evaluate the agreement between measurements and treatment planning system calculations in patient-specific intensity modulated radiation therapy (IMRT) quality assurance (QA). However, recent publications have raised concerns about the lack of sensitivity when employing commonly used gamma criteria. Understanding the actual sensitivity of a wide range of different gamma criteria may allow the definition of more meaningful gamma criteria and tolerance limits in IMRT QA. We present a method that allows the quantitative determination of gamma criteria sensitivity to induced errors which can be applied to any unique combination of device, delivery technique, and software utilized in a specific clinic. A total of 21 DMLC IMRT QA measurements (ArcCHECK®, Sun Nuclear) were compared to QA plan calculations with induced errors. Three scenarios were studied: MU errors, multi-leaf collimator (MLC) errors, and the sensitivity of the gamma comparison to changes in penumbra width. Gamma comparisons were performed between measurements and error-induced calculations using a wide range of gamma criteria, resulting in a total of over 20 000 gamma comparisons. Gamma passing rates for each error class and case were graphed against error magnitude to create error curves in order to represent the range of missed errors in routine IMRT QA using 36 different gamma criteria. This study demonstrates that systematic errors and case-specific errors can be detected by the error curve analysis. Depending on the location of the error curve peak (e.g., not centered about zero), 3%/3 mm threshold = 10% at 90% pixels passing may miss errors as large as 15% MU errors and ±1 cm random MLC errors for some cases. As the dose threshold parameter was increased for a given %Diff/distance-to-agreement (DTA) setting, error sensitivity was increased by up to a factor of two for select cases. This increased sensitivity with increasing dose threshold was consistent across all studied combinations of %Diff/DTA. Criteria such as 2%/3 mm and 3%/2 mm with a 50% threshold at 90% pixels passing are shown to be more appropriately sensitive without being overly strict. However, a broadening of the penumbra by as much as 5 mm in the beam configuration was difficult to detect with commonly used criteria, as well as with the previously mentioned criteria utilizing a threshold of 50%. We have introduced the error curve method, an analysis technique which allows the quantitative determination of gamma criteria sensitivity to induced errors. The application of the error curve method using DMLC IMRT plans measured on the ArcCHECK® device demonstrated that large errors can potentially be missed in IMRT QA with commonly used gamma criteria (e.g., 3%/3 mm, threshold = 10%, 90% pixels passing). Additionally, increasing the dose threshold value can offer dramatic increases in error sensitivity. This approach may allow the selection of more meaningful gamma criteria for IMRT QA and is straightforward to apply to other combinations of devices and treatment techniques.
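To make the error-curve idea concrete, here is a self-contained, deliberately simplified 1-D gamma comparison under stated assumptions: a Gaussian profile stands in for measured dose, errors are induced by scaling, and a brute-force search implements the gamma minimisation. Clinical QA uses 2-D/3-D dose grids and dedicated software; none of the values below come from the study.

```python
import numpy as np

def gamma_pass_rate(ref, ev, x, dose_pct, dta_mm, cutoff=0.10):
    """Global gamma: percent of above-cutoff reference points with gamma <= 1."""
    norm = dose_pct / 100 * ref.max()
    passes = []
    for xi, di in zip(x, ref):
        if di < cutoff * ref.max():
            continue                       # below the dose threshold: excluded
        g = np.sqrt(((ev - di) / norm) ** 2 + ((x - xi) / dta_mm) ** 2)
        passes.append(g.min() <= 1.0)
    return 100 * np.mean(passes)

x = np.linspace(-50, 50, 501)              # position in mm
ref = np.exp(-x ** 2 / (2 * 15 ** 2))      # toy reference "dose" profile
for err in (0, 3, 5, 10, 15):              # induced MU-like scaling errors, %
    ev = ref * (1 + err / 100)
    rate = gamma_pass_rate(ref, ev, x, dose_pct=3, dta_mm=3)
    print(f"{err:2d}% induced error -> 3%/3 mm pass rate: {rate:5.1f}%")
```

Plotting such pass rates against error magnitude for many criteria yields exactly the error curves described above, and repeating the sweep with a higher dose cutoff illustrates the reported gain in sensitivity.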
Recursive feature selection with significant variables of support vectors.
Tsai, Chen-An; Huang, Chien-Hsun; Chang, Ching-Wei; Chen, Chun-Houh
2012-01-01
The development of DNA microarrays allows researchers to screen thousands of genes simultaneously and helps determine high- and low-expression genes in normal and disease tissues. Selecting relevant genes for cancer classification is an important issue. Most gene selection methods use univariate ranking criteria and arbitrarily choose a threshold to select genes. However, the parameter setting may not be compatible with the selected classification algorithms. In this paper, we propose a new gene selection method (SVM-t) based on t-statistics embedded in a support vector machine. We compared its performance to two similar SVM-based methods: SVM recursive feature elimination (SVMRFE) and recursive support vector machine (RSVM). The three methods were compared based on extensive simulation experiments and analyses of two published microarray datasets. In the simulation experiments, we found that the proposed method is more robust in selecting informative genes than SVMRFE and RSVM and is capable of attaining good classification performance when the variations of informative and noninformative genes differ. In the analysis of the two microarray datasets, the proposed method yields better performance in identifying fewer genes with good prediction accuracy, compared to SVMRFE and RSVM.
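A hedged sketch of the core SVM-t step on synthetic data follows: fit a linear SVM, then compute t-statistics using only the support vectors and keep features below a significance cutoff. The data, the 0.01 cutoff, and the absence of any recursive refinement are all simplifying assumptions.

```python
import numpy as np
from scipy import stats
from sklearn.svm import SVC

rng = np.random.default_rng(3)
n, p, informative = 80, 200, 10
y = np.repeat([0, 1], n // 2)
X = rng.standard_normal((n, p))
X[y == 1, :informative] += 1.0            # informative genes shift in class 1

svm = SVC(kernel="linear").fit(X, y)
sv_X, sv_y = X[svm.support_], y[svm.support_]   # support vectors only
t, pval = stats.ttest_ind(sv_X[sv_y == 0], sv_X[sv_y == 1], axis=0)
selected = np.flatnonzero(pval < 0.01)
hits = np.intersect1d(selected, np.arange(informative)).size
print(f"selected {selected.size} genes, {hits} of {informative} true positives")
```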
Radiation area monitor device and method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vencelj, Matjaz; Stowe, Ashley C.; Petrovic, Toni
A radiation area monitor device/method, utilizing: a radiation sensor having a directional radiation sensing capability; a rotation mechanism operable for selectively rotating the radiation sensor such that the directional radiation sensing capability selectively sweeps an area of interest; and a processor operable for analyzing and storing a radiation fingerprint acquired by the radiation sensor as the directional radiation sensing capability selectively sweeps the area of interest. Optionally, the radiation sensor includes a gamma and/or neutron radiation sensor. The device/method selectively operates in: a first supervised mode during which a baseline radiation fingerprint is acquired by the radiation sensor; and a second unsupervised mode during which a subsequent radiation fingerprint is acquired by the radiation sensor, wherein the subsequent radiation fingerprint is compared to the baseline radiation fingerprint and, if a predetermined difference threshold is exceeded, an alert is issued.
Transverse tripolar stimulation of peripheral nerve: a modelling study of spatial selectivity.
Deurloo, K E; Holsheimer, J; Boom, H B
1998-01-01
Various anode-cathode configurations in a nerve cuff are modelled to predict their spatial selectivity characteristics for functional nerve stimulation. A 3D volume conductor model of a monofascicular nerve is used for the computation of stimulation-induced field potentials, whereas a cable model of myelinated nerve fibre is used for the calculation of the excitation thresholds of fibres. As well as the usual configurations (monopole, bipole, longitudinal tripole, 'steering' anode), a transverse tripolar configuration (central cathode) is examined. It is found that the transverse tripole is the only configuration giving convex recruitment contours and therefore maximises activation selectivity for a small (cylindrical) bundle of fibres in the periphery of a monofascicular nerve trunk. As the electrode configuration is changed to achieve greater selectivity, the threshold current increases. Therefore threshold currents for fibre excitation with a transverse tripole are relatively high. Inverse recruitment is less extreme than for the other configurations. The influences of several geometrical parameters and model conductivities of the transverse tripole on selectivity and threshold current are analysed. In chronic implantation, when electrodes are encapsulated by a layer of fibrous tissue, threshold currents are low, whereas the shape of the recruitment contours in transverse tripolar stimulation does not change.
Le, Trang T; Simmons, W Kyle; Misaki, Masaya; Bodurka, Jerzy; White, Bill C; Savitz, Jonathan; McKinney, Brett A
2017-09-15
Classification of individuals into disease or clinical categories from high-dimensional biological data with low prediction error is an important challenge of statistical learning in bioinformatics. Feature selection can improve classification accuracy but must be incorporated carefully into cross-validation to avoid overfitting. Recently, feature selection methods based on differential privacy, such as differentially private random forests and reusable holdout sets, have been proposed. However, for domains such as bioinformatics, where the number of features is much larger than the number of observations (p ≫ n), these differential privacy methods are susceptible to overfitting. We introduce private Evaporative Cooling, a stochastic privacy-preserving machine learning algorithm that uses Relief-F for feature selection and random forest for privacy-preserving classification and also prevents overfitting. We relate the privacy-preserving threshold mechanism to a thermodynamic Maxwell-Boltzmann distribution, where the temperature represents the privacy threshold. We use the thermal statistical physics concept of Evaporative Cooling of atomic gases to perform backward stepwise privacy-preserving feature selection. On simulated data with main effects and statistical interactions, we compare accuracies on holdout and validation sets for three privacy-preserving methods: the reusable holdout, reusable holdout with random forest, and private Evaporative Cooling, which uses Relief-F feature selection and random forest classification. In simulations where interactions exist between attributes, private Evaporative Cooling provides higher classification accuracy without overfitting based on an independent validation set. In simulations without interactions, thresholdout with random forest and private Evaporative Cooling give comparable accuracies. We also apply these privacy methods to human brain resting-state fMRI data from a study of major depressive disorder. Code is available at http://insilico.utulsa.edu/software/privateEC. Contact: brett-mckinney@utulsa.edu. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
A clustering algorithm for determining community structure in complex networks
NASA Astrophysics Data System (ADS)
Jin, Hong; Yu, Wei; Li, ShiJun
2018-02-01
Clustering algorithms are attractive for the task of community detection in complex networks. DENCLUE is a representative density-based clustering algorithm which has a firm mathematical basis and good clustering properties, allowing for arbitrarily shaped clusters in high-dimensional datasets. However, this method cannot be directly applied to community discovery due to its inability to deal with network data. Moreover, it requires a careful selection of the density parameter and the noise threshold. To solve these issues, a new community detection method is proposed in this paper. First, we use a spectral analysis technique to map the network data into a low-dimensional Euclidean space which preserves node structural characteristics. Then, DENCLUE is applied to detect the communities in the network. A mathematical method named the Sheather-Jones plug-in is chosen to select the density parameter, which can describe the intrinsic clustering structure accurately. Moreover, every node in the network is meaningful, so there are no noise nodes, and as a result the noise threshold can be ignored. We test our algorithm on both benchmark and real-life networks, and the results demonstrate the effectiveness of our algorithm over other popular density-based clustering algorithms adapted to community detection.
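The outline below mirrors that two-stage pipeline under loud assumptions: the normalized-Laplacian eigenvectors give the low-dimensional Euclidean embedding, and DBSCAN stands in for DENCLUE (both are density-based, but they are not the same algorithm); the eps value is hand-tuned rather than chosen by the Sheather-Jones rule.

```python
import networkx as nx
import numpy as np
from sklearn.cluster import DBSCAN

G = nx.karate_club_graph()
# Spectral embedding: eigenvectors of the normalized Laplacian preserve
# node structural characteristics in a low-dimensional Euclidean space.
L = nx.normalized_laplacian_matrix(G).toarray()
vals, vecs = np.linalg.eigh(L)
coords = vecs[:, 1:3]        # 2-D embedding, skipping the trivial eigenvector

# Density-based clustering of the embedded nodes. Note that DBSCAN can
# still emit a noise label (-1), which the paper's method avoids by design.
labels = DBSCAN(eps=0.08, min_samples=3).fit_predict(coords)
for c in sorted(set(labels)):
    print("community", c, ":", np.flatnonzero(labels == c).tolist())
```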
Surface Fitting Filtering of LIDAR Point Cloud with Waveform Information
NASA Astrophysics Data System (ADS)
Xing, S.; Li, P.; Xu, Q.; Wang, D.; Li, P.
2017-09-01
Full-waveform LiDAR is an active technology of photogrammetry and remote sensing. It provides more detailed information about objects along the path of a laser pulse than discrete-return topographic LiDAR. High-quality point cloud and waveform information can be obtained by waveform decomposition, which can contribute to accurate filtering. A surface fitting filtering method using waveform information is proposed to exploit this advantage. Firstly, the discrete point cloud and waveform parameters are resolved by globally convergent Levenberg-Marquardt decomposition. Secondly, the ground seed points are selected, and the abnormal ones among them are detected by waveform parameters and robust estimation. Thirdly, the terrain surface is fitted and the height difference threshold is determined in consideration of the window size and mean square error. Finally, the points are classified gradually as the window size rises. The filtering process terminates when the window size exceeds the threshold. Waveform data in urban, farmland and mountain areas from "WATER (Watershed Allied Telemetry Experimental Research)" were selected for experiments. Results prove that, compared with the traditional method, the accuracy of point cloud filtering is further improved and the proposed method has high practical value.
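A much-reduced 1-D sketch of the progressive loop follows: fit a low-order surface to ground seeds, reclassify points by height difference against a growing threshold, and repeat with a larger window. The synthetic terrain, the polynomial surface model, and the window/threshold schedule are illustrative assumptions; the waveform-based seed screening is omitted.

```python
import numpy as np

rng = np.random.default_rng(4)
x = np.linspace(0, 100, 400)
ground = 0.002 * (x - 50) ** 2 + 0.1 * rng.standard_normal(x.size)
z = ground.copy()
objects = rng.choice(x.size, 60, replace=False)
z[objects] += rng.uniform(2, 8, 60)        # building/vegetation returns

is_ground = z <= np.percentile(z, 20)      # initial ground seed points
for window, thresh in [(5, 0.5), (10, 0.8), (20, 1.2)]:
    coef = np.polyfit(x[is_ground], z[is_ground], deg=2)   # fitted surface
    surface = np.polyval(coef, x)
    # Height-difference threshold grows with the window size.
    is_ground = np.abs(z - surface) < thresh
print(f"ground points kept: {is_ground.sum()}/{x.size},"
      f" object returns misclassified: {is_ground[objects].sum()}")
```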
Liyanage, Ganesha S; Ayre, David J; Ooi, Mark K J
2016-11-01
The production of morphologically different seeds or fruits by the same individual plant is known as seed heteromorphism. Such variation is expected to be selected for in disturbance-prone environments to allow germination into inherently variable regeneration niches. However, there are few demonstrations that heteromorphic seed characteristics should be favored by selection or how they may be maintained. In fire-prone ecosystems, seed heteromorphism is found in the temperatures needed to break physical dormancy, with seeds responding to high or low temperatures, ensuring emergence under variable fire-regime-related soil heating. Because of the relationship between dormancy-breaking temperature thresholds and fire severity, we hypothesize that different post-fire resource conditions have selected for covarying seedling traits, which contribute to maintenance of such heteromorphism. Seeds with low thresholds emerge into competitive conditions, either after low-severity fire or in vegetation gaps, and are therefore likely to experience selection for seedling characteristics that make them good competitors. On the other hand, high-temperature-threshold seeds would emerge into less competitive environments, indicative of stand-clearing high-severity fires, and would not experience the same selective forces. We identified high and low-threshold seed morphs via dormancy-breaking heat treatments and germination trials for two study species and compared seed mass and other morphological characteristics between morphs. We then grew seedlings from the two different morphs, with and without competition, and measured growth and biomass allocation as indicators of seedling performance. Seedlings from low-threshold seeds of both species performed better than their high-threshold counterparts, growing more quickly under competitive conditions, confirming that different performance can result from this seed characteristic. Seed mass or appearance did not differ between morphs, indicating that dormancy-breaking temperature threshold variation is a form of cryptic heteromorphism. The potential shown for the selective influence of different post-fire environmental conditions on seedling performance provides evidence of a mechanism for the maintenance of heteromorphic variation in dormancy-breaking temperature thresholds. © 2016 The Authors. Ecology, published by Wiley Periodicals, Inc., on behalf of the Ecological Society of America.
Johansson, Linda; Singh, Tanoj; Leong, Thomas; Mawson, Raymond; McArthur, Sally; Manasseh, Richard; Juliano, Pablo
2016-01-01
Here we suggest a novel and straightforward approach for liter-scale ultrasound standing-wave particle-manipulation systems, using the sonochemiluminescent chemical luminol, to guide system design in terms of frequency and acoustic power for operating in either cavitation or non-cavitation regimes. We show that this method offers a simple way of determining in situ the cavitation threshold for a selected separation vessel geometry. Since the pressure field is system specific, the cavitation threshold is also system specific (for the threshold parameter range). In this study we discuss cavitation effects and also measure one implication of cavitation for the application of milk fat separation: the degree of milk fat lipid oxidation, by headspace volatile measurements. For the evaluated vessel, 2 MHz operation, as opposed to 1 MHz, enabled operation under non-cavitation or low-cavitation conditions as measured by the luminol intensity threshold method. In all cases the lipid-oxidation-derived volatiles were below the human sensory detection level. Ultrasound treatment did not significantly influence the oxidative changes in milk for either 1 MHz (doses of 46 kJ/L and 464 kJ/L) or 2 MHz (doses of 37 kJ/L and 373 kJ/L) operation. Copyright © 2015 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gong, Nan-Jie; Wong, Chun-Sing, E-mail: drcswong@gmail.com; Chu, Yiu-Ching
2013-10-01
Purpose: To improve the accuracy of volume and apparent diffusion coefficient (ADC) measurements in diffusion-weighted magnetic resonance imaging (MRI), we proposed a method based on thresholding both the b0 images and the ADC maps. Methods and Materials: In 21 heterogeneous lesions from patients with metastatic gastrointestinal stromal tumors (GIST), gross lesions were manually contoured, and the corresponding volumes and ADCs were denoted as gross tumor volume (GTV) and gross ADC (ADC_g), respectively. Using a k-means clustering algorithm, the probable high-cellularity tumor tissues were selected based on b0 images and ADC maps. ADC and volume of the tissues selected using the proposed method were denoted as thresholded ADC (ADC_thr) and high-cellularity tumor volume (HCTV), respectively. The metabolic tumor volume (MTV) in positron emission tomography (PET)/computed tomography (CT) was measured using 40% maximum standard uptake value (SUV_max) as the lower threshold, and the corresponding mean SUV (SUV_mean) was also measured. Results: HCTV had excellent concordance with MTV according to Pearson's correlation (r=0.984, P<.001) and linear regression (slope = 1.085, intercept = −4.731). In contrast, GTV overestimated the volume and differed significantly from MTV (P=.005). ADC_thr correlated significantly and strongly with SUV_mean (r=−0.807, P<.001) and SUV_max (r=−0.843, P<.001); both correlations were stronger than those of ADC_g. Conclusions: The proposed lesion-adaptive semiautomatic method can help segment high-cellularity tissues that match hypermetabolic tissues in PET/CT and enables more accurate volume and ADC delineation on diffusion-weighted MR images of GIST.
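The clustering step can be pictured with the toy sketch below: voxels inside a contoured lesion are clustered on (b0, ADC) features and the lower-ADC cluster is taken as the probable high-cellularity tissue. The two-cluster choice, the synthetic intensities, and the feature scaling are assumptions, not the study's protocol.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(5)
# Synthetic lesion: 300 high-cellularity voxels (higher b0, lower ADC) and
# 200 necrotic/edematous voxels (lower b0, higher ADC); units arbitrary.
b0 = np.r_[rng.normal(900, 60, 300), rng.normal(600, 60, 200)]
adc = np.r_[rng.normal(0.9, 0.1, 300), rng.normal(2.0, 0.2, 200)]
feats = np.c_[b0 / b0.std(), adc / adc.std()]       # crude feature scaling

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(feats)
means = [adc[labels == k].mean() for k in (0, 1)]
hc = labels == int(np.argmin(means))                # lower-ADC cluster = HCTV
print(f"HCTV fraction: {hc.mean():.2f}, ADC_thr: {adc[hc].mean():.2f}")
```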
ERIC Educational Resources Information Center
Smolkowski, Keith; Cummings, Kelli D.
2016-01-01
This comprehensive evaluation of the Dynamic Indicators of Basic Early Literacy Skills Sixth Edition (DIBELS6) set of measures gives a practical illustration of signal detection methods, the methods used to determine the value of screening and diagnostic systems, and offers an updated set of cut scores (decision thresholds). Data were drawn from a…
An Integrated Use of Topography with RSI in Gully Mapping, Shandong Peninsula, China
He, Fuhong; Wang, Tao; Gu, Lijuan; Li, Tao; Jiang, Weiguo; Shao, Hongbo
2014-01-01
Taking the Quickbird optical satellite imagery of the small watershed of Beiyanzigou valley of Qixia city, Shandong province, as the study data, we proposed a new method that uses a fused image of topography and remote sensing imagery (RSI) to achieve a high-precision interpretation of gully edge lines. The technique first transformed the remote sensing imagery from RGB color space into HSV color space. Then the slope threshold values of the gully edge line and gully thalweg were obtained through field survey, and the slope data were segmented by thresholding each. Based on the fused image in combination with the gully thalweg thresholding vectors, the gully thalweg thresholding vectors were amended. Lastly, the gully edge line could be interpreted based on the amended gully thalweg vectors, the fused image, the gully edge line thresholding vectors, and the slope data. A testing region was selected in the study area to assess the accuracy, and accuracy assessment of the gully information interpreted both from the remote sensing imagery alone and from the fused image was performed using the deviation, kappa coefficient, and overall accuracy of the error matrix. Compared with interpreting remote sensing imagery only, the overall accuracy and kappa coefficient increased by 24.080% and 264.364%, respectively. The average deviations of the gully head and gully edge line were reduced by 60.448% and 67.406%, respectively. The test results show that the thematic and positional accuracy of gullies interpreted by the new method are significantly higher. Finally, the error sources for the interpretation accuracy of the two methods were analyzed. PMID:25302333
Pain Intensity Recognition Rates via Biopotential Feature Patterns with Support Vector Machines
Gruss, Sascha; Treister, Roi; Werner, Philipp; Traue, Harald C.; Crawcour, Stephen; Andrade, Adriano; Walter, Steffen
2015-01-01
Background: The clinically used methods of pain diagnosis do not allow for objective and robust measurement, and physicians must rely on the patient's report of the pain sensation. Verbal scales, visual analog scales (VAS) or numeric rating scales (NRS) count among the most common tools, which are restricted to patients with normal mental abilities. There also exist instruments for pain assessment in people with verbal and/or cognitive impairments and instruments for pain assessment in people who are sedated and automatically ventilated. However, all these diagnostic methods either have limited reliability and validity or are very time-consuming. In contrast, biopotentials can be automatically analyzed with machine learning algorithms to provide a surrogate measure of pain intensity. Methods: In this context, we created a database of biopotentials to advance an automated pain recognition system, determine its theoretical testing quality, and optimize its performance. Eighty-five participants were subjected to painful heat stimuli (baseline, pain threshold, two intermediate thresholds, and pain tolerance threshold) under controlled conditions, and the signals of electromyography, skin conductance level, and electrocardiography were collected. A total of 159 features were extracted from the mathematical groupings of amplitude, frequency, stationarity, entropy, linearity, variability, and similarity. Results: We achieved classification rates of 90.94% for baseline vs. pain tolerance threshold and 79.29% for baseline vs. pain threshold. The most frequently selected pain features stemmed from the amplitude and similarity groups and were derived from facial electromyography. Conclusion: The machine learning measurement of pain in patients could provide valuable information for a clinical team and thus support treatment assessment. PMID:26474183
Measurand transient signal suppressor
NASA Technical Reports Server (NTRS)
Bozeman, Richard J., Jr. (Inventor)
1994-01-01
A transient signal suppressor for use in a control system which is adapted to respond to a change in a physical parameter whenever it crosses a predetermined threshold value in a selected direction of increasing or decreasing values with respect to the threshold value and is sustained for a selected discrete time interval is presented. The suppressor includes a sensor transducer for sensing the physical parameter and generating an electrical input signal whenever the sensed physical parameter crosses the threshold level in the selected direction. A manually operated switch is provided for adapting the suppressor to produce an output drive signal whenever the physical parameter crosses the threshold value in the selected direction of increasing or decreasing values. A time delay circuit is selectively adjustable for suppressing the transducer input signal for a preselected one of a plurality of available discrete suppression times, producing an output signal only if the input signal is sustained for a time greater than the selected suppression time. An electronic gate is coupled to receive the transducer input signal and the timer output signal and produce an output drive signal for energizing a control relay whenever the transducer input is a non-transient signal sustained beyond the selected time interval.
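The suppressor logic translates naturally into software; the sketch below, with arbitrary parameter values, emits a drive signal only when the threshold crossing persists in the selected direction for the full suppression interval.

```python
def suppressor(samples, threshold, rising=True, hold_samples=5):
    """Yield True only while a crossing has persisted past the hold time."""
    run = 0
    for v in samples:
        crossed = v > threshold if rising else v < threshold
        run = run + 1 if crossed else 0   # any transient dip resets the timer
        yield run > hold_samples

# One short transient (suppressed) followed by a sustained crossing (passed).
signal = [0, 9, 0, 0, 9, 9, 9, 9, 9, 9, 9, 0]
print(list(suppressor(signal, threshold=5)))
```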
Effect of thermal insulation on the electrical characteristics of NbOx threshold switches
NASA Astrophysics Data System (ADS)
Wang, Ziwen; Kumar, Suhas; Wong, H.-S. Philip; Nishi, Yoshio
2018-02-01
Threshold switches based on niobium oxide (NbOx) are promising candidates as bidirectional selector devices in crossbar memory arrays and building blocks for neuromorphic computing. Here, it is experimentally demonstrated that the electrical characteristics of NbOx threshold switches can be tuned by engineering the thermal insulation. Increasing the thermal insulation by ~10× is shown to produce ~7× reduction in threshold current and ~45% reduction in threshold voltage. The reduced threshold voltage leads to ~5× reduction in half-selection leakage, which highlights the effectiveness of reducing half-selection leakage of NbOx selectors by engineering the thermal insulation. A thermal feedback model based on Poole-Frenkel conduction in NbOx can explain the experimental results very well, which also serves as a piece of strong evidence supporting the validity of the Poole-Frenkel based mechanism in NbOx threshold switches.
Hu, Yanzhu; Ai, Xinbo
2016-01-01
Complex network methodology is very useful for exploring complex systems. However, the relationships among variables in a complex system are usually not clear. Therefore, inferring association networks among variables from their observed data has been a popular research topic. We propose a synthetic method, named small-shuffle partial symbolic transfer entropy spectrum (SSPSTES), for inferring association networks from multivariate time series. The method synthesizes surrogate data, partial symbolic transfer entropy (PSTE) and Granger causality. Proper threshold selection is crucial for common correlation identification methods, yet it is not easy for users. The proposed method can not only identify strong correlations without selecting a threshold but also offers correlation quantification, direction identification and temporal relation identification. The method can be divided into three layers, i.e. a data layer, a model layer and a network layer. In the model layer, the method identifies all possible pair-wise correlations. In the network layer, we introduce a filter algorithm to remove indirect weak correlations and retain strong correlations. Finally, we build a weighted adjacency matrix, the value of each entry representing the correlation level between pair-wise variables, and thus obtain the weighted directed association network. Two numerically simulated datasets, one from a linear system and one from a nonlinear system, illustrate the steps and performance of the proposed approach. Finally, the ability of the proposed method is demonstrated in an application. PMID:27832153
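The threshold-free idea can be shown in miniature with surrogate data: compare an observed dependence statistic against its distribution over small-shuffle surrogates, which destroy fine temporal alignment while preserving local structure. Plain lagged correlation stands in for PSTE here, and all data and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)
x = rng.standard_normal(500)
y = np.roll(x, 3) + 0.5 * rng.standard_normal(500)   # y driven by past of x

def lagged_corr(a, b, lag=3):
    return np.corrcoef(a[:-lag], b[lag:])[0, 1]

def small_shuffle(a, width=10):
    """Small-shuffle surrogate: perturb sample indices only locally."""
    idx = np.arange(a.size) + rng.uniform(-width, width, a.size)
    return a[np.argsort(idx)]

obs = lagged_corr(x, y)
null = [lagged_corr(small_shuffle(x), y) for _ in range(200)]
print(f"observed = {obs:.2f}, surrogate 95th percentile ="
      f" {np.quantile(null, 0.95):.2f}")
```

A correlation is declared significant when the observed value exceeds the surrogate distribution's upper quantile, so no user-specified correlation threshold is needed.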
NASA Astrophysics Data System (ADS)
Mamalakis, Antonios; Langousis, Andreas; Deidda, Roberto
2016-04-01
Estimation of extreme rainfall from data constitutes one of the most important issues in statistical hydrology, as it is associated with the design of hydraulic structures and flood water management. To that extent, based on asymptotic arguments from Extreme Excess (EE) theory, several studies have focused on developing new, or improving existing, methods to fit a generalized Pareto (GP) distribution model to rainfall excesses above a properly selected threshold u. The latter is generally determined using various approaches, such as non-parametric methods that are intended to locate the changing point between extreme and non-extreme regions of the data, graphical methods where one studies the dependence of GP distribution parameters (or related metrics) on the threshold level u, and Goodness of Fit (GoF) metrics that, for a certain level of significance, locate the lowest threshold u for which a GP distribution model is applicable. In this work, we review representative methods for GP threshold detection, discuss fundamental differences in their theoretical bases, and apply them to 1714 daily rainfall records from the NOAA-NCDC open-access database, with more than 110 years of data. We find that non-parametric methods that are intended to locate the changing point between extreme and non-extreme regions of the data are generally not reliable, while methods that are based on asymptotic properties of the upper distribution tail lead to unrealistically high threshold and shape parameter estimates. The latter is justified by theoretical arguments, and it is especially the case in rainfall applications, where the shape parameter of the GP distribution is low, i.e. on the order of 0.1-0.2. Better performance is demonstrated by graphical methods and GoF metrics that rely on pre-asymptotic properties of the GP distribution. For daily rainfall, we find that GP threshold estimates range between 2 and 12 mm/d with a mean value of 6.5 mm/d, while the existence of quantization in the empirical records, as well as variations in their size, constitute the two most important factors that may significantly affect the accuracy of the obtained results. Acknowledgments: The research project was implemented within the framework of the Action «Supporting Postdoctoral Researchers» of the Operational Program "Education and Lifelong Learning" (Action's Beneficiary: General Secretariat for Research and Technology), and co-financed by the European Social Fund (ESF) and the Greek State. The work conducted by Roberto Deidda was funded under the Sardinian Regional Law 7/2007 (funding call 2013).
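The graphical, pre-asymptotic approach favoured above can be reduced to a few lines: fit a GP distribution to excesses over a grid of candidate thresholds and look for the range where the shape estimate stabilises. The synthetic rainfall record below is an assumption for illustration; by construction its excesses are GP for every threshold, so the shape estimates should hover near the true value of 0.15.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Daily rainfall-like record: 70% dry days, GP-distributed wet-day amounts.
wet = stats.genpareto.rvs(0.15, scale=8, size=40000, random_state=7)
rain = np.where(rng.random(40000) < 0.7, 0.0, wet)

for u in (2, 4, 6, 8, 12, 20):                  # candidate thresholds, mm/d
    excess = rain[rain > u] - u
    shape, loc, scale = stats.genpareto.fit(excess, floc=0)
    print(f"u={u:2d} mm/d: n={excess.size:5d},"
          f" shape={shape:6.3f}, scale={scale:6.2f}")
```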
Jensen, Ralph J; Rizzo, Joseph F; Ziv, Ofer R; Grumet, Andrew; Wyatt, John
2003-08-01
To determine electrical thresholds required for extracellular activation of retinal ganglion cells as part of a project to develop an epiretinal prosthesis. Retinal ganglion cells were recorded extracellularly in retinas isolated from adult New Zealand White rabbits. Electrical current pulses of 100-µs duration were delivered to the inner surface of the retina from a 5-µm-long electrode. In about half of the cells, the point of lowest threshold was found by searching with anodal current pulses; in the other cells, cathodal current pulses were used. Threshold measurements were obtained near the cell bodies of 20 ganglion cells and near the axons of 19 ganglion cells. Both cathodal and anodal stimuli evoked a neural response in the ganglion cells that consisted of a single action potential of near-constant latency that persisted when retinal synaptic transmission was blocked with cadmium chloride. For cell bodies, but not axons, thresholds for both cathodal and anodal stimulation were dependent on the search method used to find the point of lowest threshold. With search and stimulation of matching polarity, cathodal stimuli evoked a ganglion cell response at lower currents (approximately one seventh to one tenth of axonal threshold) than did anodal stimuli for both cell bodies and axons. With cathodal search and stimulation, cell body median thresholds were somewhat lower (approximately one half) than the axonal median thresholds. With anodal search and stimulation, cell body median thresholds were approximately the same as axonal median thresholds. The results suggest that cathodal stimulation should produce lower thresholds, more localized stimulation, and somewhat better selectivity for cell bodies over axons than would anodal stimulation.
Bikel, Shirley; Jacobo-Albavera, Leonor; Sánchez-Muñoz, Fausto; Cornejo-Granados, Fernanda; Canizales-Quinteros, Samuel; Soberón, Xavier; Sotelo-Mundo, Rogerio R.; del Río-Navarro, Blanca E.; Mendoza-Vargas, Alfredo; Sánchez, Filiberto
2017-01-01
Background: In spite of the emergence of RNA sequencing (RNA-seq), microarrays remain in widespread use for gene expression analysis in the clinic. There are over 767,000 RNA microarrays from human samples in public repositories, which are an invaluable resource for biomedical research and personalized medicine. Absolute gene expression analysis allows the transcriptome profiling of all expressed genes under a specific biological condition without the need for a reference sample. However, the background fluorescence represents a challenge for determining absolute gene expression in microarrays. Given that the Y chromosome is absent in female subjects, we used it as a new approach for absolute gene expression analysis in which the fluorescence of the Y chromosome genes of female subjects was used as the background fluorescence for all the probes in the microarray. This fluorescence was used to establish an absolute gene expression threshold, allowing the differentiation between expressed and non-expressed genes in microarrays. Methods: We extracted RNA from 16 leukocyte samples from children (nine males and seven females, ages 6–10 years). An Affymetrix Gene Chip Human Gene 1.0 ST Array was carried out for each sample, and the fluorescence of 124 genes of the Y chromosome was used to calculate the absolute gene expression threshold. After that, several expressed and non-expressed genes according to our absolute gene expression threshold were compared against the expression obtained using real-time quantitative polymerase chain reaction (RT-qPCR). Results: From the 124 genes of the Y chromosome, three genes (DDX3Y, TXLNG2P and EIF1AY) that displayed significant differences between sexes were used to calculate the absolute gene expression threshold. Using this threshold, we selected 13 expressed and non-expressed genes and confirmed their expression levels by RT-qPCR. Then, we selected the top 5% most expressed genes and found that several KEGG pathways were significantly enriched. Interestingly, these pathways were related to the typical functions of leukocytes, such as antigen processing and presentation and natural killer cell mediated cytotoxicity. We also applied this method to obtain the absolute gene expression threshold in already published microarray data of liver cells, where the top 5% expressed genes showed an enrichment of typical KEGG pathways for liver cells. Our results suggest that the three selected genes of the Y chromosome can be used to calculate an absolute gene expression threshold, allowing transcriptome profiling of microarray data without the need for an additional reference experiment. Discussion: Our approach, based on the establishment of a threshold for absolute gene expression analysis, allows a new way to analyze thousands of microarrays from public databases. This enables the study of different human diseases without the need for additional samples for relative expression experiments. PMID:29230367
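The thresholding idea itself is compact, as the assumption-laden sketch below shows: in female samples the fluorescence of Y-chromosome probes (such as the DDX3Y, TXLNG2P and EIF1AY probes named above) is pure background, so an upper quantile of it can serve as an absolute expression threshold for every probe. The synthetic values and the 99th-percentile rule are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(8)
n_female, n_genes = 7, 5000
# Log2 fluorescence of 3 Y-chromosome probes in female samples = background.
background = rng.normal(4.0, 0.5, (n_female, 3))
expression = rng.normal(6.0, 1.5, (n_female, n_genes))   # all other probes

threshold = np.quantile(background, 0.99)      # absolute expression threshold
expressed = expression.mean(axis=0) > threshold
print(f"threshold = {threshold:.2f} log2 units,"
      f" expressed genes: {expressed.sum()}/{n_genes}")
```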
Code of Federal Regulations, 2010 CFR
2010-10-01
... for contracts not to exceed the simplified acquisition threshold. 836.602-5 Section 836.602-5 Federal... contracts not to exceed the simplified acquisition threshold. Either of the procedures provided in FAR 36... simplified acquisition threshold. ...
Cooperation and charity in spatial public goods game under different strategy update rules
NASA Astrophysics Data System (ADS)
Li, Yixiao; Jin, Xiaogang; Su, Xianchuang; Kong, Fansheng; Peng, Chengbin
2010-03-01
Human cooperation can be influenced by other human behaviors, and recent years have witnessed flourishing studies of the coevolution of cooperation and punishment, yet the common behavior of charity is seldom considered in game-theoretical models. In this article, we investigate the coevolution of altruistic cooperation and egalitarian charity in the spatial public goods game, by considering charity as the behavior of reducing inter-individual payoff differences. In our model, in each generation of the evolution, individuals first play games and accumulate payoff benefits, and then each egalitarian makes a charity donation by payoff transfer in its neighborhood. To study the individual-level evolutionary dynamics, we adopt different strategy update rules and investigate their effects on charity and cooperation. These rules can be classified into two global rules: the random selection rule, in which individuals randomly update strategies, and the threshold selection rule, where only those with payoffs below a threshold update strategies. Simulation results show that random selection enhances the cooperation level, while threshold selection lowers the threshold of the multiplication factor needed to maintain cooperation. When charity is considered, it is incapable of promoting cooperation under random selection, whereas it promotes cooperation under threshold selection. Interestingly, the evolution of charity strongly depends on the dispersion of payoff acquisitions in the population, which agrees with previous results. Our work may shed light on understanding human egalitarianism.
CombiROC: an interactive web tool for selecting accurate marker combinations of omics data.
Mazzara, Saveria; Rossi, Riccardo L; Grifantini, Renata; Donizetti, Simone; Abrignani, Sergio; Bombaci, Mauro
2017-03-30
Diagnostic accuracy can be improved considerably by combining multiple markers, whose performance in identifying diseased subjects is usually assessed via receiver operating characteristic (ROC) curves. The selection of multimarker signatures is a complicated process that requires integration of data signatures with sophisticated statistical methods. We developed a user-friendly tool, called CombiROC, to help researchers accurately determine optimal marker combinations from diverse omics methods. With CombiROC, data from different domains, such as proteomics and transcriptomics, can be analyzed using sensitivity/specificity filters: the number of candidate marker panels arising from combinatorial analysis is easily optimized, bypassing limitations imposed by the nature of different experimental approaches. Leaving the user full control over the initial selection stringency, CombiROC computes sensitivity and specificity for all marker combinations, the performance of the best combinations and ROC curves for automatic comparisons, all visualized in a graphical interface. CombiROC was designed without hard-coded thresholds, allowing a custom fit to each specific dataset: this dramatically reduces the computational burden and lowers the false negative rates given by fixed thresholds. The application was validated with published data, confirming the marker combinations originally described or even finding new ones. CombiROC is a novel tool for the scientific community, freely available at http://CombiROC.eu.
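A bare-bones version of the combinatorial screen that CombiROC automates might look as follows; the per-marker cutoffs (healthy mean plus one standard deviation) and the "any marker positive" combination rule are assumptions for illustration, not the tool's actual filters.

```python
import itertools
import numpy as np

rng = np.random.default_rng(9)
n = 200
y = np.repeat([0, 1], n // 2)                    # 0 = healthy, 1 = diseased
markers = rng.normal(0.0, 1.0, (n, 4))
markers[y == 1] += [1.2, 0.8, 0.1, 0.6]          # per-marker disease signal

cutoffs = markers[y == 0].mean(axis=0) + markers[y == 0].std(axis=0)
for combo in itertools.combinations(range(4), 2):
    pos = (markers[:, list(combo)] > cutoffs[list(combo)]).any(axis=1)
    sens = pos[y == 1].mean()                    # sensitivity of the panel
    spec = (~pos)[y == 0].mean()                 # specificity of the panel
    print(f"markers {combo}: sensitivity = {sens:.2f},"
          f" specificity = {spec:.2f}")
```

Ranking panels by such sensitivity/specificity pairs, and drawing ROC curves for the best ones, is the comparison the tool renders in its graphical interface.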
Locally Weighted Score Estimation for Quantile Classification in Binary Regression Models
Rice, John D.; Taylor, Jeremy M. G.
2016-01-01
One common use of binary response regression methods is classification based on an arbitrary probability threshold dictated by the particular application. Since this threshold is given to us a priori, it is sensible to incorporate it into our estimation procedure. Specifically, for the linear logistic model, we solve a set of locally weighted score equations, using a kernel-like weight function centered at the threshold. The bandwidth for the weight function is selected by cross-validation of a novel hybrid loss function that combines classification error and a continuous measure of divergence between observed and fitted values; other possible cross-validation functions based on more common binary classification metrics are also examined. This work has much in common with robust estimation, but differs from previous approaches in this area in its focus on prediction, specifically classification into high- and low-risk groups. Simulation results are given showing the reduction in error rates that can be obtained with this method when compared with maximum likelihood estimation, especially under certain forms of model misspecification. Analysis of a melanoma data set is presented to illustrate the use of the method in practice. PMID:28018492
An adaptive design for updating the threshold value of a continuous biomarker.
Spencer, Amy V; Harbron, Chris; Mander, Adrian; Wason, James; Peers, Ian
2016-11-30
Potential predictive biomarkers are often measured on a continuous scale, but in practice, a threshold value to divide the patient population into biomarker 'positive' and 'negative' is desirable. Early phase clinical trials are increasingly using biomarkers for patient selection, but at this stage, it is likely that little will be known about the relationship between the biomarker and the treatment outcome. We describe a single-arm trial design with adaptive enrichment, which can increase power to demonstrate efficacy within a patient subpopulation, the parameters of which are also estimated. Our design enables us to learn about the biomarker and optimally adjust the threshold during the study, using a combination of generalised linear modelling and Bayesian prediction. At the final analysis, a binomial exact test is carried out, allowing the hypothesis that 'no population subset exists in which the novel treatment has a desirable response rate' to be tested. Through extensive simulations, we are able to show increased power over fixed threshold methods in many situations without increasing the type-I error rate. We also show that estimates of the threshold, which defines the population subset, are unbiased and often more precise than those from fixed threshold studies. We provide an example of the method applied (retrospectively) to publicly available data from a study of the use of tamoxifen after mastectomy by the German Breast Study Group, where progesterone receptor is the biomarker of interest. © 2016 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Coronel-Brizio, H. F.; Hernández-Montoya, A. R.
2005-08-01
The so-called Pareto-Levy or power-law distribution has been successfully used as a model to describe probabilities associated with extreme variations of stock market indexes worldwide. The selection of the threshold parameter from empirical data, and consequently the determination of the exponent of the distribution, is often done using a simple graphical method based on a log-log scale, where a power-law probability plot shows a straight line with slope equal to the exponent of the power-law distribution. This procedure can be considered subjective, particularly with regard to the choice of the threshold or cutoff parameter. In this work, a more objective procedure based on a statistical measure of discrepancy between the empirical and the Pareto-Levy distribution is presented. The technique is illustrated for data sets from the New York Stock Exchange (DJIA) and the Mexican Stock Market (IPC).
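In the same objective spirit, a discrepancy-minimising threshold can be sketched in a few lines: for each candidate cutoff, estimate the exponent by maximum likelihood (the Hill estimator) and keep the threshold with the smallest Kolmogorov-Smirnov distance between the empirical and fitted tails, as popularised by Clauset and co-authors. The synthetic data and the candidate grid are assumptions; this is not necessarily the discrepancy measure used in the paper.

```python
import numpy as np

rng = np.random.default_rng(10)
data = rng.pareto(3.0, 5000) + 1            # heavy-tailed synthetic variations

best = (np.inf, None, None)
for u in np.quantile(data, np.linspace(0.50, 0.99, 50)):
    tail = np.sort(data[data >= u])
    alpha = 1 + tail.size / np.log(tail / u).sum()   # MLE (Hill) exponent
    cdf_fit = 1 - (tail / u) ** (1 - alpha)          # fitted power-law CDF
    cdf_emp = np.arange(1, tail.size + 1) / tail.size
    best = min(best, (np.abs(cdf_emp - cdf_fit).max(), u, alpha))
ks, u, alpha = best
print(f"KS = {ks:.4f} at threshold u = {u:.3f}, exponent alpha = {alpha:.2f}")
```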
Stefan, Sabina; Schorr, Barbara; Lopez-Rolon, Alex; Kolassa, Iris-Tatjana; Shock, Jonathan P; Rosenfelder, Martin; Heck, Suzette; Bender, Andreas
2018-04-17
We applied the following methods to resting-state EEG data from patients with disorders of consciousness (DOC) for consciousness indexing and outcome prediction: microstates, entropy (i.e. approximate, permutation), power in the alpha and delta frequency bands, and connectivity (i.e. weighted symbolic mutual information, symbolic transfer entropy, complex network analysis). Patients with unresponsive wakefulness syndrome (UWS) and patients in a minimally conscious state (MCS) were classified into these two categories by fitting and testing a generalised linear model. We subsequently aimed to develop an automated system for outcome prediction in severe DOC by selecting an optimal subset of features using sequential floating forward selection (SFFS). The two outcome categories were defined as UWS or dead, and MCS or emerged from MCS. The percentage of time spent in microstate D in the alpha frequency band performed best at distinguishing MCS from UWS patients. The average clustering coefficient obtained from thresholding beta coherence performed best at predicting outcome. The optimal subset of features selected with SFFS consisted of the frequency of microstate A in the 2-20 Hz frequency band, the path length obtained from thresholding alpha coherence, and the average path length obtained from thresholding alpha coherence. Combining these features seemed to afford high predictive power. Python and MATLAB toolboxes for the above calculations are freely available under the GNU public license for non-commercial use (https://qeeg.wordpress.com).
Chalfoun, J; Majurski, M; Peskin, A; Breen, C; Bajcsy, P; Brady, M
2015-10-01
New microscopy technologies are enabling image acquisition of terabyte-sized data sets consisting of hundreds of thousands of images. In order to retrieve and analyze the biological information in these large data sets, segmentation is needed to detect the regions containing cells or cell colonies. Our work with hundreds of large images (each 21,000×21,000 pixels) requires a segmentation method that: (1) yields high segmentation accuracy, (2) is applicable to multiple cell lines with various densities of cells and cell colonies, and several imaging modalities, (3) can process large data sets in a timely manner, (4) has a low memory footprint and (5) has a small number of user-set parameters that do not require adjustment during the segmentation of large image sets. None of the currently available segmentation methods meet all these requirements. Segmentation based on image gradient thresholding is fast and has a low memory footprint. However, existing techniques that automate the selection of the gradient image threshold do not work across image modalities, multiple cell lines, and a wide range of foreground/background densities (requirement 2), and all failed the requirement for robust parameters that do not require re-adjustment with time (requirement 5). We present a novel and empirically derived image gradient threshold selection method for separating foreground and background pixels in an image that meets all the requirements listed above. We quantify the difference between our approach and existing ones in terms of accuracy, execution speed, memory usage and number of adjustable parameters on a reference data set. This reference data set consists of 501 validation images with manually determined segmentations and image sizes ranging from 0.36 Megapixels to 850 Megapixels. It includes four different cell lines and two image modalities: phase contrast and fluorescent. Our new technique, called Empirical Gradient Threshold (EGT), is derived from this reference data set with a 10-fold cross-validation method. EGT segments cells or colonies with resulting Dice accuracy index measurements above 0.92 for all cross-validation data sets. EGT results have also been visually verified on a much larger data set that includes bright field and Differential Interference Contrast (DIC) images, 16 cell lines and 61 time-sequence data sets, for a total of 17,479 images. This method is implemented as an open-source plugin to ImageJ as well as a standalone executable that can be downloaded from the following link: https://isg.nist.gov/. © 2015 The Authors Journal of Microscopy © 2015 Royal Microscopical Society.
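For orientation, a generic gradient-magnitude thresholding baseline of the kind EGT improves upon can be sketched as follows; the percentile heuristic and the morphological cleanup are illustrative assumptions, not the published EGT rule.

```python
import numpy as np
from scipy import ndimage

def gradient_threshold_segment(img, percentile=90):
    """Foreground mask from gradient magnitude: fast and low memory."""
    gx = ndimage.sobel(img.astype(float), axis=0)
    gy = ndimage.sobel(img.astype(float), axis=1)
    grad = np.hypot(gx, gy)
    mask = grad > np.percentile(grad, percentile)  # keep strong edges
    # Close small gaps and fill enclosed regions to get solid objects.
    mask = ndimage.binary_fill_holes(ndimage.binary_closing(mask))
    return mask
```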
Ruth, Veikko; Kolditz, Daniel; Steiding, Christian; Kalender, Willi A
2017-06-01
The performance of metal artifact reduction (MAR) methods in x-ray computed tomography (CT) suffers from incorrect identification of metallic implants in the artifact-affected volumetric images. The aim of this study was to investigate potential improvements of state-of-the-art MAR methods by using prior information on geometry and material of the implant. The influence of a novel prior knowledge-based segmentation (PS) compared with threshold-based segmentation (TS) on 2 MAR methods (linear interpolation [LI] and normalized-MAR [NORMAR]) was investigated. The segmentation is the initial step of both MAR methods. Prior knowledge-based segmentation uses 3-dimensional registered computer-aided design (CAD) data as prior knowledge to estimate the correct position and orientation of the metallic objects. Threshold-based segmentation uses an adaptive threshold to identify metal. Subsequently, for LI and NORMAR, the selected voxels are projected into the raw data domain to mark metal areas. Attenuation values in these areas are replaced by different interpolation schemes followed by a second reconstruction. Finally, the previously selected metal voxels are replaced by the metal voxels determined by PS or TS in the initial reconstruction. First, we investigated in an elaborate phantom study if the knowledge of the exact implant shape extracted from the CAD data provided by the manufacturer of the implant can improve the MAR result. Second, the leg of a human cadaver was scanned using a clinical CT system before and after the implantation of an artificial knee joint. The results were compared regarding segmentation accuracy, CT number accuracy, and the restoration of distorted structures. The use of PS improved the efficacy of LI and NORMAR compared with TS. Artifacts caused by insufficient segmentation were reduced, and additional information was made available within the projection data. The estimation of the implant shape was more exact and not dependent on a threshold value. Consequently, the visibility of structures was improved when comparing the new approach to the standard method. This was further confirmed by improved CT value accuracy and reduced image noise. The PS approach based on prior implant information provides image quality which is superior to TS-based MAR, especially when the shape of the metallic implant is complex. The new approach can be useful for improving MAR methods and dose calculations within radiation therapy based on the MAR corrected CT images.
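Both LI and NORMAR share the projection-domain replacement step. A toy sketch of the linear-interpolation variant, assuming the segmented metal (whether from PS or TS) has already been projected into a per-angle boolean mask, might look like this:

```python
import numpy as np

def interpolate_metal_trace(sinogram, metal_mask):
    """Replace metal-corrupted detector bins row by row (one row per angle)
    with linear interpolation from the neighbouring uncorrupted bins."""
    out = sinogram.copy()
    bins = np.arange(sinogram.shape[1])
    for i in range(sinogram.shape[0]):
        bad = metal_mask[i]
        if bad.any() and not bad.all():
            out[i, bad] = np.interp(bins[bad], bins[~bad], sinogram[i, ~bad])
    return out
```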
Code of Federal Regulations, 2010 CFR
2010-10-01
... for contracts not to exceed the simplified acquisition threshold. 1436.602-5 Section 1436.602-5... for contracts not to exceed the simplified acquisition threshold. At each occurrence, CO approval...-engineer contracts not expected to exceed the simplified acquisition threshold. ...
2010-01-01
Background The origin and stability of cooperation is a hot topic in social and behavioural sciences. A conundrum exists because defectors have an advantage over cooperators whenever cooperation is costly, so that not cooperating pays off. In addition, the discovery that humans and some animal populations, such as lions, are polymorphic, with cooperators and defectors stably living together while defectors are not punished, is even more puzzling. Here we offer a novel explanation based on a Threshold Public Good Game (PGG) that includes the interaction of individual and group level selection, where individuals can contribute to multiple collective actions, in our model group hunting and group defense. Results Our results show that there are polymorphic equilibria in Threshold PGGs; that multi-level selection does not select for the most cooperators per group but selects for those close to the optimum number of cooperators (in terms of the Threshold PGG). In particular, for medium cost values, division of labour evolves within the group with regard to the two types of cooperative actions (hunting vs. defense). Moreover, we show evidence that spatial population structure promotes cooperation in multiple PGGs. We also demonstrate that these results apply for a wide range of non-linear benefit function types. Conclusions We demonstrate that cooperation can be stable in Threshold PGGs, even when the proportion of so-called free riders is high in the population. A fundamentally new mechanism is proposed by which laggards, individuals that have a high tendency to defect during one specific group action, can actually contribute to the fitness of the group by playing a part in an optimal resource allocation in Threshold Public Good Games. In general, our results show that acknowledging a multilevel selection process will open up novel explanations for collective actions. PMID:21044340
Salt, A N; DeMott, J
1992-01-01
A physiologic technique was developed to measure endolymphatic cross-sectional area in vivo using tetramethylammonium (TMA) as a volume marker. The technique was evaluated in guinea pigs as an animal model. In the method, the cochlea was exposed surgically and TMA was injected into endolymph of the second turn at a constant rate by iontophoresis. The concentration of TMA was monitored during and after the injection using ion-selective electrodes. Cross-section estimates derived from the TMA concentration measurements were compared in normal animals and animals in which endolymphatic hydrops had been induced by ablation of the endolymphatic duct and sac 8 weeks earlier. The method demonstrated a mean increase in cross-sectional area of 258% in the hydropic group. Individually measured area values were compared with action potential threshold shifts and the magnitude of the endocochlear potential (EP). Hydropic animals typically showed an increase in threshold to 2 kHz stimuli and a decrease in EP. However, the degree of threshold shift or EP decrease did not correlate well with the degree of hydrops present.
Automatic detection of malaria parasite in blood images using two parameters.
Kim, Jong-Dae; Nam, Kyeong-Min; Park, Chan-Young; Kim, Yu-Seop; Song, Hye-Jeong
2015-01-01
Malaria must be diagnosed quickly and accurately at the initial infection stage and treated early to be cured properly. Diagnosis by microscopy requires much labor and time from a skilled expert, and the diagnosis results vary greatly between individual diagnosticians. Therefore, to measure malaria parasite infection quickly and accurately, studies have been conducted on automated classification techniques using various parameters. In this study, by measuring classification performance as two parameters were varied, we determined the parameter values that best distinguish normal from plasmodium-infected red blood cells. To reduce the stain deviation of the acquired images, a principal component analysis (PCA) grayscale conversion method was used, and the two parameters were the malaria-infected area and the threshold value used in binarization. The best classification performance was obtained by selecting the malaria threshold value of 72, corresponding to the lowest error rate, on the basis of a cell threshold value of 128 for detecting plasmodium-infected red blood cells.
Color vision testing with a computer graphics system: preliminary results.
Arden, G; Gündüz, K; Perry, S
1988-06-01
We report a method for computer enhancement of color vision tests. In our graphics system 256 colors are selected from a much larger range and displayed on a screen divided into 768 x 288 pixels. Eight-bit digital-to-analogue converters drive a high quality monitor with separate inputs to the red, green, and blue amplifiers and calibrated gun chromaticities. The graphics are controlled by a PASCAL program written for a personal computer, which calculates the values of the red, green, and blue signals and specifies them in Commission Internationale de l'Eclairage X, Y, and Z fundamentals, so changes in chrominance occur without changes in luminance. The system for measuring color contrast thresholds with gratings is more than adequate in normal observers. In patients with mild retinal damage in whom other tests of visual function are normal, this method of testing color vision shows specific increases in contrast thresholds along tritan color-confusion lines. By the time the Hardy-Rand-Rittler and Farnsworth-Munsell 100-hue tests disclose abnormalities, gross defects in color contrast threshold can be seen with our system.
Amador, Carolina; Chen, Shigao; Manduca, Armando; Greenleaf, James F.; Urban, Matthew W.
2017-01-01
Quantitative ultrasound elastography is increasingly being used in the assessment of chronic liver disease. Many studies have reported ranges of liver shear wave velocities values for healthy individuals and patients with different stages of liver fibrosis. Nonetheless, ongoing efforts exist to stabilize quantitative ultrasound elastography measurements by assessing factors that influence tissue shear wave velocity values, such as food intake, body mass index (BMI), ultrasound scanners, scanning protocols, ultrasound image quality, etc. Time-to-peak (TTP) methods have been routinely used to measure the shear wave velocity. However, there is still a need for methods that can provide robust shear wave velocity estimation in the presence of noisy motion data. The conventional TTP algorithm is limited to searching for the maximum motion in time profiles at different spatial locations. In this study, two modified shear wave speed estimation algorithms are proposed. The first method searches for the maximum motion in both space and time (spatiotemporal peak, STP); the second method applies an amplitude filter (spatiotemporal thresholding, STTH) to select points with motion amplitude higher than a threshold for shear wave group velocity estimation. The two proposed methods (STP and STTH) showed higher precision in shear wave velocity estimates compared to TTP in phantom. Moreover, in a cohort of 14 healthy subjects STP and STTH methods improved both the shear wave velocity measurement precision and the success rate of the measurement compared to conventional TTP. PMID:28092532
Amador Carrascal, Carolina; Chen, Shigao; Manduca, Armando; Greenleaf, James F; Urban, Matthew W
2017-04-01
Quantitative ultrasound elastography is increasingly being used in the assessment of chronic liver disease. Many studies have reported ranges of liver shear wave velocity values for healthy individuals and patients with different stages of liver fibrosis. Nonetheless, ongoing efforts exist to stabilize quantitative ultrasound elastography measurements by assessing factors that influence tissue shear wave velocity values, such as food intake, body mass index, ultrasound scanners, scanning protocols, and ultrasound image quality. Time-to-peak (TTP) methods have been routinely used to measure the shear wave velocity. However, there is still a need for methods that can provide robust shear wave velocity estimation in the presence of noisy motion data. The conventional TTP algorithm is limited to searching for the maximum motion in time profiles at different spatial locations. In this paper, two modified shear wave speed estimation algorithms are proposed. The first method searches for the maximum motion in both space and time [spatiotemporal peak (STP)]; the second method applies an amplitude filter [spatiotemporal thresholding (STTH)] to select points with motion amplitude higher than a threshold for shear wave group velocity estimation. The two proposed methods (STP and STTH) showed higher precision in shear wave velocity estimates compared with TTP in phantom. Moreover, in a cohort of 14 healthy subjects, STP and STTH methods improved both the shear wave velocity measurement precision and the success rate of the measurement compared with conventional TTP.
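A minimal sketch of the TTP baseline and the amplitude-filtered (STTH-like) variant, assuming a motion matrix sampled over time and lateral position; the sampling values, names, and filter rule are placeholders rather than the paper's exact implementation.

```python
import numpy as np

def group_velocity_ttp(motion, dt, dx, amp_threshold=None):
    """motion[t, x]: particle motion over time and lateral position.

    TTP: regress lateral position on time-to-peak; the slope is the shear
    wave group velocity. With amp_threshold set, positions whose peak
    amplitude falls below the threshold are discarded (STTH-like filter).
    """
    ttp = motion.argmax(axis=0) * dt              # time of peak per position
    pos = np.arange(motion.shape[1]) * dx
    if amp_threshold is not None:
        keep = motion.max(axis=0) >= amp_threshold
        ttp, pos = ttp[keep], pos[keep]
    slope = np.polyfit(ttp, pos, 1)[0]            # dx/dt = group velocity
    return slope
```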
Loziuk, Philip L.; Sederoff, Ronald R.; Chiang, Vincent L.; Muddiman, David C.
2014-01-01
Quantitative mass spectrometry has become central to the field of proteomics and metabolomics. Selected reaction monitoring is a widely used method for the absolute quantification of proteins and metabolites. This method renders high specificity using several product ions measured simultaneously. With growing interest in quantification of molecular species in complex biological samples, confident identification and quantitation has been of particular concern. A method to confirm purity or contamination of product ion spectra has become necessary for achieving accurate and precise quantification. Ion abundance ratio assessments were introduced to alleviate some of these issues. Ion abundance ratios are based on the consistent relative abundance (RA) of specific product ions with respect to the total abundance of all product ions. To date, no standardized method of implementing ion abundance ratios has been established. Thresholds by which product ion contamination is confirmed vary widely and are often arbitrary. This study sought to establish criteria by which the relative abundance of product ions can be evaluated in an absolute quantification experiment. These findings suggest that evaluation of the absolute ion abundance for any given transition is necessary in order to effectively implement RA thresholds. Overall, the variation of the RA value was observed to be relatively constant beyond an absolute threshold ion abundance. Finally, these RA values were observed to fluctuate significantly over a 3 year period, suggesting that these values should be assessed as close as possible to the time at which data is collected for quantification. PMID:25154770
Fundamental Vocabulary Selection Based on Word Familiarity
NASA Astrophysics Data System (ADS)
Sato, Hiroshi; Kasahara, Kaname; Kanasugi, Tomoko; Amano, Shigeaki
This paper proposes a new method for selecting fundamental vocabulary. We are presently constructing the Fundamental Vocabulary Knowledge-base of Japanese that contains integrated information on syntax, semantics and pragmatics, for the purposes of advanced natural language processing. This database mainly consists of a lexicon and a treebank: Lexeed (a Japanese Semantic Lexicon) and the Hinoki Treebank. Fundamental vocabulary selection is the first step in the construction of Lexeed. The vocabulary should include sufficient words to describe general concepts for self-expandability, and should not be prohibitively large to construct and maintain. There are two conventional methods for selecting fundamental vocabulary. The first is intuition-based selection by experts. This is the traditional method for making dictionaries. A weak point of this method is that the selection strongly depends on personal intuition. The second is corpus-based selection. This method is superior in objectivity to intuition-based selection; however, it is difficult to compile a sufficiently balanced corpus. We propose a psychologically-motivated selection method that adopts word familiarity as the selection criterion. Word familiarity is a rating that represents the familiarity of a word as a real number ranging from 1 (least familiar) to 7 (most familiar). We determined the word familiarity ratings statistically based on psychological experiments over 32 subjects. We selected about 30,000 words as the fundamental vocabulary, based on a minimum word familiarity threshold of 5. We also evaluated the vocabulary by comparing its word coverage with conventional intuition-based and corpus-based selection over dictionary definition sentences and novels, and demonstrated the superior coverage of our lexicon. Based on this, we conclude that the proposed method is superior to conventional methods for fundamental vocabulary selection.
NASA Astrophysics Data System (ADS)
Zarekarizi, M.; Moradkhani, H.
2015-12-01
Extreme events are proven to be affected by climate change, influencing hydrologic simulations for which stationarity is usually a main assumption. Studies have discussed that this assumption can lead to large bias in model estimations and consequently to higher flood hazard. Motivated by the importance of non-stationarity, we determined how the exceedance probabilities have changed over time in Johnson Creek River, Oregon. This could help estimate the probability of failure of a structure that was primarily designed to resist less likely floods according to common practice. Therefore, we built a climate-informed Bayesian hierarchical model in which non-stationarity was considered in the modeling framework. Principal component analysis shows that the North Atlantic Oscillation (NAO), Western Pacific Index (WPI) and Eastern Asia (EA) indices mostly affect streamflow in this river. We modeled flood extremes using the peaks-over-threshold (POT) method rather than the conventional annual maximum flood (AMF) approach, mainly because it allows the model to be based on more information. We used available threshold selection methods to select a suitable threshold for the study area. Accounting for non-stationarity, model parameters vary through time with the climate indices. We developed several model scenarios and chose the one that best explained the variation in the data based on performance measures. We also estimated return periods under the non-stationarity condition. Results show that assuming stationarity could understate the flood hazard by up to a factor of four, which could increase the probability of an in-stream structure being overtopped.
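A sketch of the stationary peaks-over-threshold building block, with a quantile-based threshold and SciPy's generalized Pareto fit; the flow series, threshold quantile, and record length are synthetic placeholders (the study's climate-informed, non-stationary layer is not reproduced here).

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(2)
q = rng.gamma(2.0, 20.0, 365 * 30)          # placeholder daily flows, 30 yr

u = np.quantile(q, 0.98)                    # candidate POT threshold
exceedances = q[q > u] - u
shape, _, scale = genpareto.fit(exceedances, floc=0)

# Return level for annual exceedance probability p, lam = events per year.
lam = exceedances.size / 30.0
p = 0.01                                    # "100-year" event
rl = u + scale / shape * ((lam / p) ** shape - 1.0)
print(f"threshold={u:.1f}, 100-yr return level ~ {rl:.1f}")
```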
Torque-onset determination: Unintended consequences of the threshold method.
Dotan, Raffy; Jenkins, Glenn; O'Brien, Thomas D; Hansen, Steve; Falk, Bareket
2016-12-01
Compared with visual torque-onset-detection (TOD), threshold-based TOD produces onset bias, which increases with lower torques or rates of torque development (RTD). The aim was to compare the effects of differential TOD bias on common contractile parameters in two torque-disparate groups. Fifteen boys and 12 men performed maximal, explosive, isometric knee-extensions. Torque and EMG were recorded for each contraction. Best contractions were selected by peak torque (MVC) and peak RTD. Visual-TOD-based torque-time traces, electromechanical delays (EMD), and times to peak RTD (tRTD) were compared with corresponding data derived from fixed 4-Nm and relative 5%MVC thresholds. The 5%MVC TOD biases were similar for boys and men, but the corresponding 4-Nm-based biases were markedly different (40.3±14.1 vs. 18.4±7.1 ms, respectively; p<0.001). Boys-men EMD differences were most affected, increasing from 5.0 ms (visual) to 26.9 ms (4 Nm; p<0.01). Men's visually-based torque kinetics tended to be faster than the boys' (NS), but the 4-Nm-based kinetics erroneously depicted the boys as being much faster to any given %MVC (p<0.001). When comparing contractile properties of dissimilar groups, e.g., children vs. adults, threshold-based TOD methods can misrepresent reality and lead to erroneous conclusions. Relative thresholds (e.g., 5% MVC) still introduce error, but group comparisons are not confounded. Copyright © 2016 Elsevier Ltd. All rights reserved.
Bernstein, Joshua G.W.; Mehraei, Golbarg; Shamma, Shihab; Gallun, Frederick J.; Theodoroff, Sarah M.; Leek, Marjorie R.
2014-01-01
Background A model that can accurately predict speech intelligibility for a given hearing-impaired (HI) listener would be an important tool for hearing-aid fitting or hearing-aid algorithm development. Existing speech-intelligibility models do not incorporate variability in suprathreshold deficits that are not well predicted by classical audiometric measures. One possible approach to the incorporation of such deficits is to base intelligibility predictions on sensitivity to simultaneously spectrally and temporally modulated signals. Purpose The likelihood of success of this approach was evaluated by comparing estimates of spectrotemporal modulation (STM) sensitivity to speech intelligibility and to psychoacoustic estimates of frequency selectivity and temporal fine-structure (TFS) sensitivity across a group of HI listeners. Research Design The minimum modulation depth required to detect STM applied to an 86 dB SPL four-octave noise carrier was measured for combinations of temporal modulation rate (4, 12, or 32 Hz) and spectral modulation density (0.5, 1, 2, or 4 cycles/octave). STM sensitivity estimates for individual HI listeners were compared to estimates of frequency selectivity (measured using the notched-noise method at 500, 1000, 2000, and 4000 Hz), TFS processing ability (2 Hz frequency-modulation detection thresholds for 500, 1000, 2000, and 4000 Hz carriers) and sentence intelligibility in noise (at a 0 dB signal-to-noise ratio) that were measured for the same listeners in a separate study. Study Sample Eight normal-hearing (NH) listeners and 12 listeners with a diagnosis of bilateral sensorineural hearing loss participated. Data Collection and Analysis STM sensitivity was compared between NH and HI listener groups using a repeated-measures analysis of variance. A stepwise regression analysis compared STM sensitivity for individual HI listeners to audiometric thresholds, age, and measures of frequency selectivity and TFS processing ability. A second stepwise regression analysis compared speech intelligibility to STM sensitivity and the audiogram-based Speech Intelligibility Index. Results STM detection thresholds were elevated for the HI listeners, but only for low rates and high densities. STM sensitivity for individual HI listeners was well predicted by a combination of estimates of frequency selectivity at 4000 Hz and TFS sensitivity at 500 Hz but was unrelated to audiometric thresholds. STM sensitivity accounted for an additional 40% of the variance in speech intelligibility beyond the 40% accounted for by the audibility-based Speech Intelligibility Index. Conclusions Impaired STM sensitivity likely results from a combination of a reduced ability to resolve spectral peaks and a reduced ability to use TFS information to follow spectral-peak movements. Combining STM sensitivity estimates with audiometric threshold measures for individual HI listeners provided a more accurate prediction of speech intelligibility than audiometric measures alone. These results suggest a significant likelihood of success for an STM-based model of speech intelligibility for HI listeners. PMID:23636210
Ko, Hae-Youn; Kang, Si-Mook; Kim, Hee Eun; Kwon, Ho-Keun; Kim, Baek-Il
2015-05-01
Detection of approximal caries lesions can be difficult due to their anatomical position. This study aimed to assess the ability of quantitative light-induced fluorescence-digital (QLF-D) to detect approximal caries, and to compare its performance with those of the International Caries Detection and Assessment System II (ICDAS II) and digital radiography (DR). Extracted permanent teeth (n=100) were selected and mounted in pairs. The simulation pairs were assessed by one calibrated dentist using each detection method. After all the examinations, the teeth (n=95) were sectioned and examined histologically as the gold standard. The modalities were compared in terms of sensitivity, specificity, and areas under receiver operating characteristic curves (AUROC) at enamel (D1) and dentine (D3) levels. The intra-examiner reliability was assessed for all modalities. At the D1 threshold, the ICDAS II presented the highest sensitivity (0.80) while the DR showed the highest specificity (0.89); however, the methods with the greatest AUROC values at the D1 threshold were DR and QLF-D (0.80 and 0.80, respectively). At the D3 threshold, the methods with the highest sensitivity were ICDAS II and QLF-D (0.64 and 0.64, respectively) while the method with the lowest sensitivity was DR (0.50). However, with regard to the AUROC values at the D3 threshold, the QLF-D presented the highest value (0.76). All modalities showed excellent intra-examiner reliability. The newly developed QLF-D was not only able to detect approximal caries, but also showed performance comparable to visual inspection and radiography in detecting approximal caries. QLF-D has the potential to be a useful detection method for approximal caries. Copyright © 2015 Elsevier Ltd. All rights reserved.
Data compression using adaptive transform coding. Appendix 1: Item 1. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Rost, Martin Christopher
1988-01-01
Adaptive low-rate source coders are described in this dissertation. These coders adapt by adjusting the complexity of the coder to match the local coding difficulty of the image. This is accomplished by using a threshold-driven maximum distortion criterion to select the specific coder used. The different coders are built using variable blocksized transform techniques, and the threshold criterion selects small transform blocks to code the more difficult regions and larger blocks to code the less complex regions. A theoretical framework is constructed from which the study of these coders can be explored. An algorithm for selecting the optimal bit allocation for the quantization of transform coefficients is developed; it can be used to achieve more accurate bit assignments than the algorithms currently used in the literature. Some upper and lower bounds for the bit-allocation distortion-rate function are developed. An obtainable distortion-rate function is developed for a particular scalar quantizer mixing method that can be used to code transform coefficients at any rate.
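The classical marginal-analysis view of transform bit allocation, under the high-rate model D_i = sigma_i^2 * 2^(-2*b_i), assigns each additional bit where it buys the largest distortion drop. The generic sketch below illustrates that idea; it is not the dissertation's specific algorithm.

```python
import heapq

def greedy_bit_allocation(variances, total_bits):
    """Allocate integer bits across coefficients by marginal distortion gain."""
    bits = [0] * len(variances)
    # Max-heap keyed on the distortion reduction of the *next* bit for each
    # coefficient: one extra bit cuts D_i = var_i * 4**-b_i by a factor of 4,
    # so the first bit removes 0.75 * var_i of distortion.
    heap = [(-(v * 0.75), i) for i, v in enumerate(variances)]
    heapq.heapify(heap)
    for _ in range(total_bits):
        gain, i = heapq.heappop(heap)
        bits[i] += 1
        heapq.heappush(heap, (gain / 4.0, i))  # next bit gains 1/4 as much
    return bits

print(greedy_bit_allocation([16.0, 4.0, 1.0, 0.25], 8))
```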
Kelley, Keven M; Stenson, Alexandra C; Cooley, Racheal; Dey, Rajarashi; Whelton, Andrew J
2015-12-01
The influence of four different cleaning methods used for newly installed polyethylene (PEX) pipes on chemical and odor quality was determined. Bench-scale testing of two PEX (type b) pipe brands showed that the California Plumbing Code PEX installation method does not maximize total organic carbon (TOC) removal. TOC concentration and threshold odor number values varied significantly between the two pipe brands. Different cleaning methods affected carbon release and odor, as well as the level of the drinking water odorant ethyl tert-butyl ether. Both pipes caused odor values up to eight times greater than the US federal drinking water odor limit. Unique to this project was the finding that organic chemicals released by PEX pipe were affected by pipe brand, fill/empty cycle frequency, and the pipe cleaning method selected by the installer.
Local neutral networks help maintain inaccurately replicating ribozymes.
Szilágyi, András; Kun, Ádám; Szathmáry, Eörs
2014-01-01
The error threshold of replication limits the selectively maintainable genome size against recurrent deleterious mutations for most fitness landscapes. In the context of RNA replication, a distinction between the genotypic and the phenotypic error threshold has been made, where the latter concerns the maintenance of secondary structure rather than sequence. RNA secondary structure is treated as a proxy for function. The phenotypic error threshold allows higher per-digit mutation rates than its genotypic counterpart, and is known to increase with the frequency of neutral mutations in sequence space. Here we show that the degree of neutrality, i.e. the frequency of nearest-neighbour (one-step) neutral mutants, is a remarkably accurate proxy for the overall frequency of such mutants in an experimentally verifiable formula for the phenotypic error threshold; we achieve this by the full numerical solution for the concentration of all sequences in mutation-selection balance up to length 16. We reinforce our previous result that currently known ribozymes could be selectively maintained by the accuracy known from the best available polymerase ribozymes. Furthermore, we show that in silico stabilizing selection can increase the mutational robustness of ribozymes due to the fact that they were produced by artificial directional selection in the first place. Our finding offers a better understanding of the error threshold and provides further insight into the plausibility of an ancient RNA world.
NASA Technical Reports Server (NTRS)
Smith, Stephen W.; Seshadri, Banavara R.; Newman, John A.
2015-01-01
The experimental methods to determine near-threshold fatigue crack growth rate data are prescribed in ASTM standard E647. To produce near-threshold data at a constant stress ratio (R), the applied stress-intensity factor (K) is decreased as the crack grows based on a specified K-gradient. Consequently, as the fatigue crack growth rate threshold is approached and the crack tip opening displacement decreases, remote crack wake contact may occur due to the plastically deformed crack wake surfaces, shielding the growing crack tip and resulting in a reduced crack tip driving force and non-representative crack growth rate data. If such data are used to life a component, the evaluation could yield highly non-conservative predictions. Although this anomalous behavior has been shown to be affected by K-gradient, starting K level, residual stresses, environmental assisted cracking, specimen geometry, and material type, the specifications within the standard to avoid this effect are limited to a maximum fatigue crack growth rate and a suggestion for the K-gradient value. This paper provides parallel experimental and computational simulations of the K-decreasing method for two materials (an aluminum alloy, AA 2024-T3, and a titanium alloy, Ti 6-2-2-2-2) to aid in establishing a clear understanding of appropriate testing requirements. These simulations investigate the effects of K-gradient, the maximum value of stress-intensity factor applied, and material type. A material-independent term is developed to guide the selection of appropriate test conditions for most engineering alloys. With the use of such a term, near-threshold fatigue crack growth rate tests can be performed at accelerated rates, near-threshold data can be acquired in days instead of weeks without having to establish testing criteria through trial and error, and these data can be acquired for most engineering materials, even those that are produced in relatively small product forms.
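For reference, the exponential K-decreasing schedule of ASTM E647-style load shedding, with normalized K-gradient C = (1/K)(dK/da), can be written out directly; the numeric values below are placeholders, not test conditions from the paper.

```python
import numpy as np

def k_decreasing(K0, C, a, a0):
    """Applied stress-intensity factor along crack length a (E647-style),
    K(a) = K0 * exp(C * (a - a0)), with C the normalized K-gradient."""
    return K0 * np.exp(C * (a - a0))

a = np.linspace(10.0, 25.0, 6)                        # crack length, mm
print(k_decreasing(K0=12.0, C=-0.08, a=a, a0=10.0))   # C in 1/mm, K in MPa*sqrt(m)
```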
Rainfall thresholds and susceptibility mapping for shallow landslides and debris flows in Scotland
NASA Astrophysics Data System (ADS)
Postance, Benjamin; Hillier, John; Dijkstra, Tom; Dixon, Neil
2017-04-01
Shallow translational slides and debris flows (hereafter 'landslides') pose a significant threat to life and cause significant annual economic impacts (e.g. by damage and disruption of infrastructure). The focus of this research is on the definition of objective rainfall thresholds using a weather radar system and on landslide susceptibility mapping. For the study area, Scotland, an inventory of 75 known landslides was used for the period 2003 to 2016. First, the effect of using different rain records (i.e. time series length) on two threshold selection techniques in receiver operating characteristic (ROC) analysis was evaluated. The results show that thresholds selected by 'Threat Score' (minimising false alarms) are sensitive to rain record length, which is not routinely considered, whereas thresholds selected using 'Optimal Point' (minimising failed alarms) are not; the latter may therefore be suited to establishing lower-limit thresholds and be of interest to those developing early warning systems. Robust thresholds are found for combinations of normalised rain duration and accumulation at 1 and 12 days' antecedence, respectively; these are normalised using the rainy-day normal and an equivalent measure for rain intensity. This research indicates that, in Scotland, rain accumulation provides a better indicator than rain intensity and that landslides may be generated by threshold conditions lower than previously thought. Second, a landslide susceptibility map is constructed using a cross-validated logistic regression model. A novel element of the approach is that landslide susceptibility is calculated for individual hillslope sections. The developed thresholds and susceptibility map are combined to assess potential hazards and impacts posed to the national highway network in Scotland.
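The two selection rules can be made concrete on a labelled record of trigger values and landslide occurrences. Below is a NumPy sketch (Threat Score, i.e. critical success index, and the point closest to the ROC ideal corner); the data and variable names are placeholders, and both event classes are assumed present.

```python
import numpy as np

def pick_thresholds(rain, slide):
    """rain: trigger variable per period; slide: 1 if a landslide occurred."""
    cands = np.unique(rain)
    ts, op = [], []
    for u in cands:
        pred = rain >= u
        tp = np.sum(pred & (slide == 1))
        fp = np.sum(pred & (slide == 0))
        fn = np.sum(~pred & (slide == 1))
        tn = np.sum(~pred & (slide == 0))
        ts.append(tp / (tp + fp + fn))        # Threat Score (CSI)
        tpr, fpr = tp / (tp + fn), fp / (fp + tn)
        op.append(np.hypot(fpr, 1 - tpr))     # distance to ideal corner (0, 1)
    return cands[np.argmax(ts)], cands[np.argmin(op)]
```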
Coltharp, Carla; Kessler, Rene P.; Xiao, Jie
2012-01-01
Localization-based superresolution microscopy techniques such as Photoactivated Localization Microscopy (PALM) and Stochastic Optical Reconstruction Microscopy (STORM) have allowed investigations of cellular structures with unprecedented optical resolutions. One major obstacle to interpreting superresolution images, however, is the overcounting of molecule numbers caused by fluorophore photoblinking. Using both experimental and simulated images, we determined the effects of photoblinking on the accurate reconstruction of superresolution images and on quantitative measurements of structural dimension and molecule density made from those images. We found that structural dimension and relative density measurements can be made reliably from images that contain photoblinking-related overcounting, but accurate absolute density measurements, and consequently faithful representations of molecule counts and positions in cellular structures, require the application of a clustering algorithm to group localizations that originate from the same molecule. We analyzed how applying a simple algorithm with different clustering thresholds (tThresh and dThresh) affects the accuracy of reconstructed images, and developed an easy method to select optimal thresholds. We also identified an empirical criterion to evaluate whether an imaging condition is appropriate for accurate superresolution image reconstruction with the clustering algorithm. Both the threshold selection method and imaging condition criterion are easy to implement within existing PALM clustering algorithms and experimental conditions. The main advantage of our method is that it generates a superresolution image and molecule position list that faithfully represents molecule counts and positions within a cellular structure, rather than only summarizing structural properties into ensemble parameters. This feature makes it particularly useful for cellular structures of heterogeneous densities and irregular geometries, and allows a variety of quantitative measurements tailored to specific needs of different biological systems. PMID:23251611
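A sketch of the grouping idea, assuming localizations are merged into a cluster when they fall within a distance threshold (dThresh-like) of the cluster centroid and the dark gap is below a time threshold (tThresh-like); this greedy single-pass rule is an illustrative simplification, not the paper's algorithm.

```python
import numpy as np

def group_blinks(x, y, frame, d_thresh, t_thresh):
    """Assign each localization a molecule label by greedy clustering."""
    order = np.argsort(frame)
    clusters = []                       # each: running sums, count, last frame
    labels = np.empty(len(x), dtype=int)
    for i in order:
        for k, c in enumerate(clusters):
            near = np.hypot(x[i] - c["sx"] / c["n"],
                            y[i] - c["sy"] / c["n"]) < d_thresh
            fresh = frame[i] - c["last"] < t_thresh   # short dark gap
            if near and fresh:
                c["sx"] += x[i]; c["sy"] += y[i]; c["n"] += 1
                c["last"] = frame[i]
                labels[i] = k
                break
        else:                           # no open cluster matched: new molecule
            clusters.append({"sx": x[i], "sy": y[i], "n": 1, "last": frame[i]})
            labels[i] = len(clusters) - 1
    return labels
```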
A method for combining passive microwave and infrared rainfall observations
NASA Technical Reports Server (NTRS)
Kummerow, Christian; Giglio, Louis
1995-01-01
Because passive microwave instruments are confined to polar-orbiting satellites, rainfall estimates must interpolate across long time periods, during which no measurements are available. In this paper the authors discuss a technique that allows one to partially overcome the sampling limitations by using frequent infrared observations from geosynchronous platforms. To accomplish this, the technique compares all coincident microwave and infrared observations. From each coincident pair, the infrared temperature threshold is selected that corresponds to an area equal to the raining area observed in the microwave image. The mean conditional rainfall rate as determined from the microwave image is then assigned to pixels in the infrared image that are colder than the selected threshold. The calibration is also applied to a fixed threshold of 235 K for comparison with established infrared techniques. Once a calibration is determined, it is applied to all infrared images. Monthly accumulations for both methods are then obtained by summing rainfall from all available infrared images. Two examples are used to evaluate the performance of the technique. The first consists of a one-month period (February 1988) over Darwin, Australia, where good validation data are available from radar and rain gauges. For this case it was found that the technique approximately doubled the rain inferred by the microwave method alone and produced exceptional agreement with the validation data. The second example involved comparisons with atoll rain gauges in the western Pacific for June 1989. Results here are overshadowed by the fact that the hourly infrared estimates from established techniques, by themselves, produced very good correlations with the rain gauges. The calibration technique was not able to improve upon these results.
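A sketch of the calibration for one coincident image pair: pick the IR temperature threshold whose cold-area fraction matches the microwave raining fraction, then assign the mean conditional rain rate to colder pixels. The array names are placeholders, and at least one raining pixel is assumed.

```python
import numpy as np

def calibrate_ir(tb_ir, rain_mw):
    """tb_ir: IR brightness temperatures (K); rain_mw: rain rates (mm/h)."""
    rain_frac = np.mean(rain_mw > 0)
    t_thresh = np.quantile(tb_ir, rain_frac)   # coldest matching fraction
    mean_rate = rain_mw[rain_mw > 0].mean()    # mean conditional rain rate
    return t_thresh, mean_rate

def apply_ir(tb_ir, t_thresh, mean_rate):
    """Apply the calibration to any later IR-only image."""
    return np.where(tb_ir < t_thresh, mean_rate, 0.0)
```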
The sequence relay selection strategy based on stochastic dynamic programming
NASA Astrophysics Data System (ADS)
Zhu, Rui; Chen, Xihao; Huang, Yangchao
2017-07-01
Relay-assisted (RA) networks with relay node selection are an effective way to improve channel capacity and convergence performance. However, most existing research on relay selection considers neither statistical channel state information nor the selection cost, which limits the performance and application of RA networks in practical scenarios. To overcome this drawback, a sequence relay selection strategy (SRSS) is proposed, and its performance upper bound is analyzed in this paper. Furthermore, to make SRSS more practical, a novel threshold determination algorithm based on stochastic dynamic programming (SDP) is given to work with SRSS. Numerical results are also presented to exhibit the performance of SRSS with SDP.
Appearance of bony lesions on 3-D CT reconstructions: a case study in variable renderings
NASA Astrophysics Data System (ADS)
Mankovich, Nicholas J.; White, Stuart C.
1992-05-01
This paper discusses conventional 3-D reconstruction for bone visualization and presents a case study demonstrating the dangers of performing 3-D reconstructions without careful selection of the bone threshold. The visualization of midface bone lesions directly from axial CT images is difficult because of the complex anatomic relationships. Three-dimensional reconstructions were made from the CT data to provide graphic images showing lesions in relation to adjacent facial bones. Most commercially available 3-D image reconstruction requires that the radiologist or technologist identify a threshold image intensity value that can be used to distinguish bone from other tissues. Much has been made of the many disadvantages of this technique, but it continues as the predominant method for producing 3-D pictures for clinical use. This paper is intended to provide a clear demonstration for the physician of the caveats that should accompany 3-D reconstructions. We present a case of recurrent odontogenic keratocyst in the anterior maxilla in which the 3-D reconstructions, made with different bone thresholds (windows), are compared to the resected specimen. A DMI 3200 computer was used to convert the scan data from a GE 9800 CT into a 3-D shaded surface image. Threshold values were assigned to (1) generate the most clinically pleasing image, (2) produce maximum theoretical fidelity (using the midpoint image intensity between average cortical bone and average soft tissue), and (3) cover stepped threshold intensities between these two methods. We compared the reconstructed lesions with the resected specimen and noted measurement errors of up to 44 percent introduced by inappropriate bone threshold levels. We suggest clinically applicable standardization techniques for the 3-D reconstruction as well as cautionary language that should accompany the 3-D images.
Impervious surface mapping with Quickbird imagery
Lu, Dengsheng; Hetrick, Scott; Moran, Emilio
2010-01-01
This research selects two study areas with different urban developments, sizes, and spatial patterns to explore the suitable methods for mapping impervious surface distribution using Quickbird imagery. The selected methods include per-pixel based supervised classification, segmentation-based classification, and a hybrid method. A comparative analysis of the results indicates that per-pixel based supervised classification produces a large number of “salt-and-pepper” pixels, and segmentation based methods can significantly reduce this problem. However, neither method can effectively solve the spectral confusion of impervious surfaces with water/wetland and bare soils and the impacts of shadows. In order to accurately map impervious surface distribution from Quickbird images, manual editing is necessary and may be the only way to extract impervious surfaces from the confused land covers and the shadow problem. This research indicates that the hybrid method consisting of thresholding techniques, unsupervised classification and limited manual editing provides the best performance. PMID:21643434
Woolley, J.D.; Lam, O.; Chuang, B.; Ford, J.M.; Mathalon, D.H.; Vinogradov, S.
2015-01-01
Summary Background Olfaction plays an important role in mammalian social behavior. Olfactory deficits are common in schizophrenia and correlate with negative symptoms and low social drive. Despite their prominence and possible clinical relevance, little is understood about the pathological mechanisms underlying olfactory deficits in schizophrenia, and there are currently no effective treatments for these deficits. The prosocial neuropeptide oxytocin may affect the olfactory system when administered intranasally to humans, and there is growing interest in its therapeutic potential in schizophrenia. Methods To examine this possibility, we administered 40 IU of oxytocin and placebo intranasally to 31 patients with a schizophrenia spectrum illness and 34 age-matched healthy control participants in a randomized, double-blind, placebo-controlled, cross-over study. On each test day, participants completed an olfactory detection threshold test for two different odors: (1) lyral, a synthetic fragrance compound for which patients with schizophrenia have specific olfactory detection threshold deficits, possibly related to decreased cyclic adenosine 3′,5′-monophosphate (cAMP) signaling; and (2) anise, a compound for which olfactory detection thresholds change with menstrual cycle phase in women. Results On the placebo test day, patients with schizophrenia did not significantly differ from healthy controls in detection of either odor. We found that oxytocin administration significantly and selectively improved olfactory detection thresholds for lyral but not for anise in patients with schizophrenia. In contrast, oxytocin had no effect on detection of either odor in healthy controls. Discussion Our data indicate that oxytocin administration may ameliorate olfactory deficits in schizophrenia and suggest that the effects of intranasal oxytocin may extend to influencing the olfactory system. Given that oxytocin has been found to increase cAMP signaling in vitro, a possible mechanism for these effects is discussed. PMID:25637811
NASA Astrophysics Data System (ADS)
Prasad, M. N.; Brown, M. S.; Ahmad, S.; Abtin, F.; Allen, J.; da Costa, I.; Kim, H. J.; McNitt-Gray, M. F.; Goldin, J. G.
2008-03-01
Segmentation of lungs in the setting of scleroderma is a major challenge in medical image analysis. Threshold-based techniques tend to leave out lung regions that have increased attenuation, for example in the presence of interstitial lung disease or in noisy low dose CT scans. The purpose of this work is to perform segmentation of the lungs using a technique that selects an optimal threshold for a given scleroderma patient by comparing the curvature of the lung boundary to that of the ribs. Our approach is based on adaptive thresholding and exploits the fact that the curvature of the ribs and the curvature of the lung boundary are closely matched. First, the ribs are segmented and a polynomial is used to represent the ribs' curvature. A threshold value to segment the lungs is then selected iteratively such that the deviation of the lung boundary from the polynomial is minimized. A Naive Bayes classifier is used to build the model for selection of the best fitting lung boundary. The performance of the new technique was compared against a standard approach using a simple fixed threshold of -400 HU followed by region growing. The two techniques were evaluated against manual reference segmentations using a volumetric overlap fraction (VOF), and the adaptive threshold technique was found to be significantly better than the fixed threshold technique.
a Method of Generating dem from Dsm Based on Airborne Insar Data
NASA Astrophysics Data System (ADS)
Lu, W.; Zhang, J.; Xue, G.; Wang, C.
2018-04-01
Traditional methods of terrestrial survey for acquiring DEMs cannot meet the requirement of acquiring large quantities of data in real time, but a DSM can be obtained quickly using dual-antenna synthetic aperture radar interferometry, and a DEM generated from that DSM is faster to produce and more accurate. It is therefore important to acquire DEMs from DSMs based on airborne InSAR data. This paper presents a method for generating an accurate DEM from a DSM in two steps. First, when the DSM is generated by interferometry, unavoidable factors such as layover and shadow introduce gross errors that affect data accuracy, so an adaptive threshold segmentation method is adopted to remove the gross errors, with the threshold selected according to the coherence of the interferometry. Second, the DEM is generated by a progressive triangulated irregular network densification filtering algorithm. Finally, experimental results are compared with existing high-precision DEM results. The results show that this method can effectively filter out buildings, vegetation and other objects to obtain a high-precision DEM.
Chaotic Signal Denoising Based on Hierarchical Threshold Synchrosqueezed Wavelet Transform
NASA Astrophysics Data System (ADS)
Wang, Wen-Bo; Jing, Yun-yu; Zhao, Yan-chao; Zhang, Lian-Hua; Wang, Xiang-Li
2017-12-01
To overcome the shortcomings of the single-threshold synchrosqueezed wavelet transform (SWT) denoising method, an adaptive hierarchical-threshold SWT method for chaotic signal denoising is proposed. First, a new SWT threshold function, twice continuously differentiable, is constructed based on Stein's unbiased risk estimate. Then, using the new threshold function, a thresholding process based on the minimum mean square error is implemented, and the optimal estimate of each layer's threshold in SWT chaotic denoising is obtained. Experimental results on a simulated chaotic signal and measured sunspot signals show that the proposed method filters the noise of chaotic signals well, and the intrinsic chaotic characteristics of the original signal are recovered very well. Compared with the EEMD denoising method and the single-threshold SWT denoising method, the proposed method obtains better denoising results for chaotic signals.
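The per-level ("hierarchical") thresholding idea can be illustrated with a plain discrete wavelet transform via PyWavelets, standing in for the SWT used in the paper; the universal threshold and MAD noise estimate below are common defaults, not the paper's SURE-based rule.

```python
import numpy as np
import pywt

def level_threshold_denoise(sig, wavelet="db4", level=5):
    """Denoise by soft-thresholding each detail level with its own threshold."""
    coeffs = pywt.wavedec(sig, wavelet, level=level)
    out = [coeffs[0]]                              # keep approximation as-is
    for d in coeffs[1:]:
        sigma = np.median(np.abs(d)) / 0.6745      # per-level noise estimate
        t = sigma * np.sqrt(2 * np.log(len(d)))    # universal threshold
        out.append(pywt.threshold(d, t, mode="soft"))
    return pywt.waverec(out, wavelet)
```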
Threshold thickness for applying diffusion equation in thin tissue optical imaging
NASA Astrophysics Data System (ADS)
Zhang, Yunyao; Zhu, Jingping; Cui, Weiwen; Nie, Wei; Li, Jie; Xu, Zhenghong
2014-08-01
We investigated the suitability of the semi-infinite model of the diffusion equation when using diffuse optical imaging (DOI) to image thin tissues with double boundaries. Both the diffusion approximation and Monte Carlo methods were applied to simulate light propagation in the thin tissue model with variable optical parameters and tissue thicknesses. A threshold value of the tissue thickness, defined as the minimum thickness at which the semi-infinite model exhibits the same reflected intensity as the double-boundary model, was generated as the final result. In contrast to our initial hypothesis that all optical properties would affect the threshold thickness, our results show that the absorption coefficient is the dominant parameter and the others are negligible. The threshold thickness decreases from 1 cm to 4 mm as the absorption coefficient grows from 0.01 mm-1 to 0.2 mm-1. A look-up curve was derived to guide the selection of the appropriate model during the optical diagnosis of thin tissue cancers. These results are useful in guiding the development of endoscopic DOI for esophageal, cervical and colorectal cancers, among others.
Tang, Jing; Zheng, Jianbin; Wang, Yang; Yu, Lie; Zhan, Enqi; Song, Qiuzhi
2018-02-06
This paper presents a novel methodology for detecting the gait phase of human walking on level ground. The previous threshold method (TM) sets a threshold to divide the ground contact forces (GCFs) into on-ground and off-ground states. However, previous gait phase detection methods show no adaptability to different people and different walking speeds. Therefore, this paper presents a self-tuning triple threshold algorithm (STTTA) that calculates adjustable thresholds to adapt to human walking. Two force-sensitive resistors (FSRs) were placed on the ball and heel to measure GCFs. Three thresholds (i.e., high-threshold, middle-threshold, and low-threshold) were used to search for the maximum and minimum GCFs for the self-adjustment of the thresholds. The high-threshold was the main threshold used to divide the GCFs into on-ground and off-ground statuses. The gait phases were then obtained through the gait phase detection algorithm (GPDA), which provides the rules that determine the calculations for STTTA. Finally, the reliability of STTTA is determined by comparing its results with those of the Mariani method (referenced as the timing analysis module, TAM) and the Lopez-Meyer method. Experimental results show that the proposed method can detect gait phases in real time and achieves high reliability compared with previous methods in the literature. In addition, the proposed method exhibits strong adaptability to different wearers walking at different walking speeds.
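The self-tuning idea, deriving the on/off threshold from recent GCF extremes so it tracks the wearer and walking speed, can be sketched as follows; the window length and threshold fraction are placeholders, not the published tuning rule.

```python
import numpy as np

def detect_on_ground(gcf, window=200, frac=0.5):
    """Binary on-ground state from a ground contact force (GCF) trace.

    The threshold tracks the recent max/min GCF, so it adapts to different
    wearers and walking speeds instead of using one fixed cutoff.
    """
    on = np.zeros(len(gcf), dtype=bool)
    for i in range(len(gcf)):
        seg = gcf[max(0, i - window):i + 1]
        hi, lo = seg.max(), seg.min()
        threshold = lo + frac * (hi - lo)   # self-tuning "high-threshold"
        on[i] = gcf[i] > threshold
    return on
```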
Wyrwich, Kathleen W; Guo, Shien; Medori, Rossella; Altincatal, Arman; Wagner, Linda; Elkins, Jacob
2014-01-01
Background: The 29-item Multiple Sclerosis Impact Scale (MSIS-29) was developed to examine the impact of multiple sclerosis (MS) on physical and psychological functioning from a patient’s perspective. Objective: To determine the responder definition (RD) of the MSIS-29 physical impact subscale (PHYS) in a group of patients with relapsing–remitting MS (RRMS) participating in a clinical trial. Methods: Data from the SELECT trial comparing daclizumab high-yield process with placebo in patients with RRMS were used. Physical function was evaluated in SELECT using three patient-reported outcomes measures and the Expanded Disability Status Scale (EDSS). Anchor- and distribution-based methods were used to identify an RD for the MSIS-29. Results: Results across the anchor-based approach suggested MSIS-29 PHYS RD values of 6.91 (mean), 7.14 (median) and 7.50 (mode). Distribution-based RD estimates ranged from 6.24 to 10.40. An RD of 7.50 was selected as the most appropriate threshold for physical worsening based on corresponding changes in the EDSS (primary anchor of interest). Conclusion: These findings indicate that a ≥7.50 point worsening on the MSIS-29 PHYS is a reasonable and practical threshold for identifying patients with RRMS who have experienced a clinically significant change in the physical impact of MS. PMID:24740371
NASA Astrophysics Data System (ADS)
Amanda, A. R.; Widita, R.
2016-03-01
The aim of this research is to compare several image segmentation methods for lungs based on performance evaluation parameters (Mean Square Error (MSE) and Peak Signal-to-Noise Ratio (PSNR)). In this study, the methods compared were connected threshold, neighborhood connected, and threshold level set segmentation on images of the lungs. These three methods require one important parameter, i.e. the threshold. The threshold interval was obtained from the histogram of the original image. The software used to segment the images was InsightToolkit-4.7.0 (ITK). This research used 5 lung images for analysis. The results were then compared using the performance evaluation parameters, computed in MATLAB. A segmentation method is considered good if it has the smallest MSE value and the highest PSNR. The results show that connected threshold performs best for four of the sample images, while threshold level set segmentation performs best for the remaining one. Therefore, it can be concluded that the connected threshold method is better than the other two methods for these cases.
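The two evaluation parameters are standard; for completeness, a sketch of their computation with an assumed 8-bit intensity peak (a pair of identical images would give zero MSE and infinite PSNR):

```python
import numpy as np

def mse_psnr(reference, segmented, peak=255.0):
    """MSE and PSNR between a reference and a segmented image (8-bit peak)."""
    mse = np.mean((reference.astype(float) - segmented.astype(float)) ** 2)
    psnr = 10.0 * np.log10(peak ** 2 / mse)   # higher PSNR = closer match
    return mse, psnr
```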
Data Transmission Signal Design and Analysis
NASA Technical Reports Server (NTRS)
Moore, J. D.
1972-01-01
The error performances of several digital signaling methods are determined as a function of a specified signal-to-noise ratio. Results are obtained for Gaussian noise and impulse noise. Performance of a receiver for differentially encoded biphase signaling is obtained by extending the results for differential phase shift keying. The analysis presented obtains a closed-form answer through the use of some simplifying assumptions. The results give insight into the analysis problem; however, the actual error performance may show a degradation because of the assumptions made in the analysis. Bipolar signaling decision-threshold selection is also investigated. The optimum threshold depends on the signal-to-noise ratio and requires the use of an adaptive receiver.
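For the Gaussian-noise case, the closed-form error rates for two related signaling methods take textbook forms, e.g. coherent binary PSK and ideal binary DPSK as functions of Eb/N0; this is a generic sketch, not the report's specific receiver model.

```python
import numpy as np
from scipy.special import erfc

def ber_bpsk(ebn0_db):
    g = 10.0 ** (np.asarray(ebn0_db) / 10.0)
    return 0.5 * erfc(np.sqrt(g))    # coherent BPSK in Gaussian noise

def ber_dpsk(ebn0_db):
    g = 10.0 ** (np.asarray(ebn0_db) / 10.0)
    return 0.5 * np.exp(-g)          # ideal binary DPSK

print(ber_bpsk([4, 8, 12]))
print(ber_dpsk([4, 8, 12]))
```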
Multi-channel detector readout method and integrated circuit
Moses, William W.; Beuville, Eric; Pedrali-Noy, Marzio
2006-12-12
An integrated circuit which provides multi-channel detector readout from a detector array. The circuit receives multiple signals from the elements of a detector array and compares the sampled amplitudes of these signals against a noise-floor threshold and against one another. A digital signal is generated which corresponds to the location of the highest of these signal amplitudes which exceeds the noise-floor threshold. The digital signal is received by a multiplexing circuit which outputs an analog signal corresponding to the highest of the input signal amplitudes. In addition, a digital control section provides for programmatic control of the multiplexer circuit, amplifier gain, amplifier reset, masking selection, and test circuit functionality on each input thereof.
Multi-channel detector readout method and integrated circuit
Moses, William W.; Beuville, Eric; Pedrali-Noy, Marzio
2004-05-18
An integrated circuit which provides multi-channel detector readout from a detector array. The circuit receives multiple signals from the elements of a detector array and compares the sampled amplitudes of these signals against a noise-floor threshold and against one another. A digital signal is generated which corresponds to the location of the highest of these signal amplitudes which exceeds the noise-floor threshold. The digital signal is received by a multiplexing circuit which outputs an analog signal corresponding to the highest of the input signal amplitudes. In addition, a digital control section provides for programmatic control of the multiplexer circuit, amplifier gain, amplifier reset, masking selection, and test circuit functionality on each input thereof.
Swiderska, Zaneta; Markiewicz, Tomasz; Grala, Bartlomiej; Slodkowska, Janina
2015-01-01
The paper presents a combined method for automatic hot-spot area selection in whole-slide images, based on a penalty factor, to support the pathomorphological diagnostic procedure. The studied slides represent meningioma and oligodendroglioma tumors stained with the Ki-67/MIB-1 immunohistochemical reaction, which allows the tumor proliferation index to be determined and gives an indication for medical treatment and prognosis. A combined method based on mathematical morphology, thresholding, texture analysis and classification is proposed and verified. The presented algorithm includes building a specimen map, eliminating hemorrhages from it, and two methods for detecting hot-spot fields with respect to the introduced penalty factor. Furthermore, we propose a localization concordance measure to evaluate how well the algorithms localize hot spots relative to the expert's results. The influence of the penalty factor is presented and discussed; the best results were obtained for a penalty factor of 0.2, confirming the effectiveness of the applied approach.
Cell separation using electric fields
NASA Technical Reports Server (NTRS)
Eppich, Henry M. (Inventor); Mangano, Joseph A. (Inventor)
2003-01-01
The present invention involves methods and devices which enable discrete objects having a conducting inner core, surrounded by a dielectric membrane to be selectively inactivated by electric fields via irreversible breakdown of their dielectric membrane. One important application of the invention is in the selection, purification, and/or purging of desired or undesired biological cells from cell suspensions. According to the invention, electric fields can be utilized to selectively inactivate and render non-viable particular subpopulations of cells in a suspension, while not adversely affecting other desired subpopulations. According to the inventive methods, the cells can be selected on the basis of intrinsic or induced differences in a characteristic electroporation threshold, which can depend, for example, on a difference in cell size and/or critical dielectric membrane breakdown voltage. The invention enables effective cell separation without the need to employ undesirable exogenous agents, such as toxins or antibodies. The inventive method also enables relatively rapid cell separation involving a relatively low degree of trauma or modification to the selected, desired cells. The inventive method has a variety of potential applications in clinical medicine, research, etc., with two of the more important foreseeable applications being stem cell enrichment/isolation, and cancer cell purging.
Cell separation using electric fields
NASA Technical Reports Server (NTRS)
Mangano, Joseph (Inventor); Eppich, Henry (Inventor)
2009-01-01
The present invention involves methods and devices which enable discrete objects having a conducting inner core, surrounded by a dielectric membrane to be selectively inactivated by electric fields via irreversible breakdown of their dielectric membrane. One important application of the invention is in the selection, purification, and/or purging of desired or undesired biological cells from cell suspensions. According to the invention, electric fields can be utilized to selectively inactivate and render non-viable particular subpopulations of cells in a suspension, while not adversely affecting other desired subpopulations. According to the inventive methods, the cells can be selected on the basis of intrinsic or induced differences in a characteristic electroporation threshold, which can depend, for example, on a difference in cell size and/or critical dielectric membrane breakdown voltage. The invention enables effective cell separation without the need to employ undesirable exogenous agents, such as toxins or antibodies. The inventive method also enables relatively rapid cell separation involving a relatively low degree of trauma or modification to the selected, desired cells. The inventive method has a variety of potential applications in clinical medicine, research, etc., with two of the more important foreseeable applications being stem cell enrichment/isolation, and cancer cell purging.
NASA Astrophysics Data System (ADS)
Wang, Chao; Song, Bing; Li, Qingjiang; Zeng, Zhongming
2018-03-01
We herein present a novel unidirectional threshold selector for cross-point bipolar RRAM arrays. The proposed Ag/amorphous-Si-based threshold selector showed excellent threshold characteristics under positive fields, such as high selectivity (~10^5), a steep slope (< 5 mV/decade) and a low off-state current (< 300 pA). The selector also exhibited rectifying characteristics in the high-resistance state, with a rectification ratio as high as 10^3 at ±1.5 V. Owing to the high reverse current of about 9 mA at -3 V, this unidirectional threshold selector can be used as a selection element for bipolar-type RRAM. By integrating a bipolar RRAM device with the selector, experiments showed that the undesired sneak current was significantly suppressed, indicating its potential for high-density integrated nonvolatile memory applications.
NCEP Air Quality Forecast (AQF) Verification. NOAA/NWS/NCEP/EMC
Economic values under inappropriate normal distribution assumptions.
Sadeghi-Sefidmazgi, A; Nejati-Javaremi, A; Moradi-Shahrbabak, M; Miraei-Ashtiani, S R; Amer, P R
2012-08-01
The objectives of this study were to quantify the errors in economic values (EVs) for traits affected by cost or price thresholds when skewed or kurtotic distributions of varying degree are assumed to be normal, and when data with a normal distribution are subject to censoring. EVs were estimated for a continuous trait with dichotomous economic implications because of a price premium or penalty arising from a threshold ranging between -4 and 4 standard deviations from the mean. To evaluate the impacts of skewness and of positive and negative excess kurtosis, the standard skew-normal, Pearson and raised-cosine distributions were used, respectively. For the levels of skewness and kurtosis evaluated, the results showed that EVs can be underestimated or overestimated by more than 100% when price-determining thresholds fall within a range from the mean that might be expected in practice. Estimates of EVs were very sensitive to censoring or missing data. In contrast to practical genetic evaluation, economic evaluation is very sensitive to lack of normality and to missing data. Although in some special situations the presence of multiple thresholds may attenuate the combined effect of errors at each threshold point, in practical situations a few key thresholds tend to dominate the EV, and there are many situations where errors could be compounded across multiple thresholds. In the development of breeding objectives for non-normal continuous traits influenced by value thresholds, it is necessary to select a transformation that will resolve problems of non-normality or to consider alternative methods that are less sensitive to non-normality.
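Since the marginal EV of shifting the trait mean is proportional to the premium times the density at the price threshold, the size of the error from a false normality assumption can be sketched directly. The following illustration (hypothetical premium and skewness; not the authors' computation) compares a standardized skew-normal "truth" against the normal assumption:

```python
import numpy as np
from scipy.stats import norm, skewnorm

# Marginal economic value of shifting the trait mean is proportional to the
# density at the price threshold: d/dmu P(X > t) = f(t).
premium = 1.0                 # hypothetical premium per unit above threshold
shape = 4.0                   # skewness parameter of the assumed true distribution

# Standardize the skew-normal to zero mean, unit variance for a fair comparison.
mu_sn, sd_sn = skewnorm.mean(shape), skewnorm.std(shape)

for t in (-2.0, -1.0, 0.0, 1.0, 2.0):   # thresholds in SD units from the mean
    ev_true = premium * skewnorm.pdf(t * sd_sn + mu_sn, shape) * sd_sn
    ev_normal = premium * norm.pdf(t)
    print(f"t = {t:+.0f} SD: relative error of normal assumption "
          f"{100 * (ev_normal - ev_true) / ev_true:+.0f}%")
```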
Dynamic Network Selection for Multicast Services in Wireless Cooperative Networks
NASA Astrophysics Data System (ADS)
Chen, Liang; Jin, Le; He, Feng; Cheng, Hanwen; Wu, Lenan
In next-generation mobile multimedia communications, different wireless access networks are expected to cooperate. However, choosing an optimal transmission path in this scenario is a challenging task. This paper focuses on the problem of selecting the optimal access network for multicast services in cooperative mobile and broadcasting networks. An algorithm is proposed which considers multiple decision factors and multiple optimization objectives. An analytic hierarchy process (AHP) method is applied to schedule the service queue, and an artificial neural network (ANN) is used to improve the flexibility of the algorithm. Simulation results show that by applying the AHP method, a group of weight ratios can be obtained that improves the performance of multiple objectives, and that the ANN method effectively and adaptively adjusts the weight ratios when users' new waiting thresholds are generated.
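For readers unfamiliar with AHP, the weight ratios mentioned above come from the principal eigenvector of a pairwise comparison matrix. A minimal sketch with a hypothetical three-factor matrix (not the paper's data):

```python
import numpy as np

# Pairwise comparison matrix for three hypothetical decision factors
# (e.g., bandwidth, delay, cost); entry [i, j] rates factor i against factor j.
A = np.array([[1.0,   3.0, 5.0],
              [1/3.0, 1.0, 2.0],
              [1/5.0, 1/2.0, 1.0]])

# AHP weights: principal eigenvector of A, normalized to sum to 1.
eigvals, eigvecs = np.linalg.eig(A)
principal = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, principal].real)
weights /= weights.sum()

# Consistency ratio (CR): below ~0.1 is conventionally acceptable.
n = A.shape[0]
ci = (eigvals.real[principal] - n) / (n - 1)
cr = ci / 0.58                      # Saaty random index for n = 3
print("weights:", np.round(weights, 3), " CR:", round(cr, 3))
```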
MOJANA, FRANCESCA; BRAR, MANPREET; CHENG, LINGYUN; BARTSCH, DIRK-UWE G.; FREEMAN, WILLIAM R.
2012-01-01
PURPOSE To determine the long-term effect of sub-threshold diode laser treatment for drusen in patients with non-exudative age-related macular degeneration (AMD) using spectral domain optical coherence tomography combined with simultaneous scanning laser ophthalmoscopy (SD-OCT/SLO). METHODS Eight eyes of 4 consecutive AMD patients with bilateral drusen previously treated with sub-threshold diode laser were imaged with SD-OCT/SLO. Abnormalities in the reflectivity of the outer retinal layers as seen with SD-OCT/SLO were retrospectively analyzed and compared with color fundus pictures and autofluorescence (AF) images acquired immediately before and after the laser treatment. RESULTS Focal discrete disruptions in the reflectivity of the outer retinal layers were noted in 29% of the laser lesions. The junction between the inner and outer segments of the photoreceptors was most frequently affected, with associated focal damage of the outer nuclear layer. Defects of the RPE were occasionally detected. These changes did not correspond to threshold burns on color fundus photography, but corresponded to focal areas of increased AF in the majority of cases. CONCLUSIONS Sub-threshold diode laser treatment causes long-term disruption of the retinal photoreceptor layer as analyzed by SD-OCT/SLO. The concept that sub-threshold laser treatment can achieve a selective RPE effect without damage to rods and cones may be flawed. PMID:21157398
Patterns of threshold evolution in polyphenic insects under different developmental models.
Tomkins, Joseph L; Moczek, Armin P
2009-02-01
Two hypotheses address the evolution of polyphenic traits in insects. Under the developmental reprogramming model, individuals exceeding a threshold follow a different developmental pathway from individuals below the threshold. This decoupling is thought to free selection to independently hone alternative morphologies, increasing phenotypic plasticity and morphological diversity. Under the alternative model, extreme positive allometry explains the existence of alternative phenotypes, and divergent phenotypes are developmentally coupled by a continuous reaction norm, such that selection on either morph acts on both. We test the hypothesis that continuous reaction norm polyphenisms evolve through changes in the allometric parameters of even the smallest males with minimal trait expression, whereas threshold polyphenisms evolve independently of the allometric parameters of individuals below the threshold. We compare two polyphenic species: the dung beetle Onthophagus taurus, whose allometry has been modeled both as a threshold polyphenism and as a continuous reaction norm, and the earwig Forficula auricularia, whose allometry is best modeled with a discontinuous threshold. We find that across populations of both species, variation in forceps or horn allometry in minor males is correlated with the population's threshold. These findings suggest that regardless of developmental mode, alternative morphs do not evolve independently of one another.
Downie, Laura E; Naranjo Golborne, Cecilia; Chen, Merry; Ho, Ngoc; Hoac, Cam; Liyanapathirana, Dasun; Luo, Carol; Wu, Ruo Bing; Chinnery, Holly R
2018-06-01
Our aim was to compare regeneration of the sub-basal nerve plexus (SBNP) and superficial nerve terminals (SNT) following corneal epithelial injury. We also sought to compare agreement when quantifying nerve parameters using different image analysis techniques. Anesthetized, female C57BL/6 mice received central 1-mm corneal epithelial abrasions. Four weeks post-injury, eyes were enucleated and processed for PGP9.5 to visualize the corneal nerves using wholemount immunofluorescence staining and confocal microscopy. The percentage area of the SBNP and SNT was quantified using ImageJ automated thresholds, ImageJ manual thresholds and manual tracings in NeuronJ. Nerve sum length was quantified using NeuronJ and Imaris. Agreement between methods was assessed with Bland-Altman analyses. Four weeks post-injury, the sum length of nerve fibers in the SBNP, but not the SNT, was reduced compared with naïve eyes. In the periphery, but not the central cornea, of both naïve and injured eyes, nerve fiber lengths in the SBNP and SNT were strongly correlated. For quantifying SBNP nerve axon area, all image analysis methods were highly correlated. In the SNT, there was poor correlation between manual methods and auto-thresholding, with a trend toward underestimating nerve fiber area using auto-thresholding when higher proportions of nerve fibers were present. In conclusion, four weeks after superficial corneal injury there is differential recovery of epithelial nerve axons: SBNP sum length is reduced, whereas the sum length of SNTs is similar to that of naïve eyes. Care should be taken when selecting image analysis methods to compare nerve parameters at different depths of the corneal epithelium, due to differences in background autofluorescence. Copyright © 2018 Elsevier Ltd. All rights reserved.
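The Bland-Altman agreement analysis used above reduces to a bias and 95% limits of agreement on the paired differences. A minimal sketch with hypothetical paired measurements:

```python
import numpy as np

def bland_altman(method_a, method_b):
    """Bias and 95% limits of agreement between two measurement methods."""
    a, b = np.asarray(method_a, float), np.asarray(method_b, float)
    diffs = a - b
    bias = diffs.mean()
    loa = 1.96 * diffs.std(ddof=1)
    return bias, bias - loa, bias + loa

# Hypothetical nerve-area percentages from manual tracing vs auto-thresholding.
manual = [12.1, 9.8, 14.5, 11.2, 13.0, 10.4]
auto = [11.0, 9.9, 12.8, 10.5, 12.1, 9.6]
bias, low, high = bland_altman(manual, auto)
print(f"bias = {bias:.2f}, 95% limits of agreement = [{low:.2f}, {high:.2f}]")
```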
Sub-threshold Post Traumatic Stress Disorder in the WHO World Mental Health Surveys
McLaughlin, Katie A.; Koenen, Karestan C.; Friedman, Matthew J.; Ruscio, Ayelet Meron; Karam, Elie G.; Shahly, Victoria; Stein, Dan J.; Hill, Eric D.; Petukhova, Maria; Alonso, Jordi; Andrade, Laura Helena; Angermeyer, Matthias C.; Borges, Guilherme; de Girolamo, Giovanni; de Graaf, Ron; Demyttenaere, Koen; Florescu, Silvia E.; Mladenova, Maya; Posada-Villa, Jose; Scott, Kate M.; Takeshima, Tadashi; Kessler, Ronald C.
2014-01-01
Background Although only a minority of people exposed to a traumatic event (TE) develops PTSD, symptoms not meeting full PTSD criteria are common and often clinically significant. Individuals with these symptoms have sometimes been characterized as having sub-threshold PTSD, but no consensus exists on the optimal definition of this term. Data from a large cross-national epidemiological survey are used to provide a principled basis for such a definition. Methods The WHO World Mental Health (WMH) Surveys administered fully structured psychiatric diagnostic interviews to community samples in 13 countries containing assessments of PTSD associated with randomly selected TEs. Focusing on the 23,936 respondents reporting lifetime TE exposure, associations of approximated DSM-5 PTSD symptom profiles with six outcomes (distress-impairment, suicidality, comorbid fear-distress disorders, PTSD symptom duration) were examined to investigate the implications of different sub-threshold definitions. Results Although distress-impairment, suicidality, comorbidity, and symptom duration were consistently highest among the 3.0% of respondents meeting full DSM-5 PTSD criteria, the additional 3.6% of respondents meeting two or three of DSM-5 Criteria B-E also had significantly elevated scores for most outcomes. The proportion of cases with threshold versus sub-threshold PTSD varied depending on TE type, with threshold PTSD more common following interpersonal violence and sub-threshold PTSD more common following events happening to loved ones. Conclusions Sub-threshold DSM-5 PTSD is most usefully defined as meeting two or three of DSM-5 Criteria B-E. Use of a consistent definition is critical to advance understanding of the prevalence, predictors, and clinical significance of sub-threshold PTSD. PMID:24842116
Korzynska, Anna; Roszkowiak, Lukasz; Lopez, Carlos; Bosch, Ramon; Witkowski, Lukasz; Lejeune, Marylene
2013-03-25
The comparative study of the results of various segmentation methods for digital images of follicular lymphoma cancer tissue sections is described in this paper. The sensitivity, specificity and some other parameters of the following adaptive threshold segmentation methods are calculated: the Niblack method, the Sauvola method, the White method, the Bernsen method, the Yasuda method and the Palumbo method. The methods are applied to three types of images constructed by extracting the brown colour information from artificial images synthesized from counterpart experimentally captured images. This paper demonstrates the usefulness of the microscopic image synthesis method in evaluating and comparing image processing results. A thorough analysis of a broad range of adaptive threshold methods applied to (1) the blue channel of RGB, (2) the brown colour extracted by deconvolution and (3) the 'brown component' extracted from RGB allows method/image-type pairs to be selected for which a given method is most efficient under various criteria, e.g., accuracy and precision in area detection or accuracy in the number of objects detected. The comparison shows that the results of the White, the Bernsen and the Sauvola methods are better than those of the remaining methods for all types of monochromatic images. All three methods segment the immunopositive nuclei with overall mean accuracies of 0.9952, 0.9942 and 0.9944, respectively. However, the best results are achieved for the monochromatic image in which intensity encodes the brown colour map constructed by the colour deconvolution algorithm. The specificity of both the Bernsen and the White methods is 1, with sensitivities of 0.74 for White and 0.91 for Bernsen, while the Sauvola method achieves a sensitivity of 0.74 and a specificity of 0.99. According to the Bland-Altman plot, the objects selected by the Sauvola method are segmented without undercutting the area of true positive objects but with extra false positive objects. The Sauvola and the Bernsen methods give complementary results, which will be exploited when a new method of virtual tissue slide segmentation is developed. The virtual slides for this article can be found here: slide 1: http://diagnosticpathology.slidepath.com/dih/webViewer.php?snapshotId=13617947952577 and slide 2: http://diagnosticpathology.slidepath.com/dih/webViewer.php?snapshotId=13617948230017.
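Of the adaptive threshold methods compared above, Niblack and Sauvola are simple to express with local window statistics. The sketch below is a generic implementation (window size and k values are common defaults, not the paper's settings; the comparison sign depends on whether objects are brighter or darker than the background):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_stats(image, window):
    """Local mean and standard deviation via box filtering."""
    img = image.astype(np.float64)
    mean = uniform_filter(img, window)
    sq_mean = uniform_filter(img ** 2, window)
    std = np.sqrt(np.maximum(sq_mean - mean ** 2, 0.0))
    return mean, std

def niblack(image, window=25, k=-0.2):
    """Niblack threshold surface: local mean + k * local std."""
    mean, std = local_stats(image, window)
    return image > (mean + k * std)

def sauvola(image, window=25, k=0.5, r=128.0):
    """Sauvola threshold surface: mean * (1 + k * (std / r - 1))."""
    mean, std = local_stats(image, window)
    return image > (mean * (1.0 + k * (std / r - 1.0)))

rng = np.random.default_rng(1)
demo = (rng.random((64, 64)) * 255).astype(np.uint8)  # stand-in image
print("Niblack foreground fraction:", niblack(demo).mean())
print("Sauvola foreground fraction:", sauvola(demo).mean())
```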
AmpliVar: mutation detection in high-throughput sequence from amplicon-based libraries.
Hsu, Arthur L; Kondrashova, Olga; Lunke, Sebastian; Love, Clare J; Meldrum, Cliff; Marquis-Nicholson, Renate; Corboy, Greg; Pham, Kym; Wakefield, Matthew; Waring, Paul M; Taylor, Graham R
2015-04-01
Conventional means of identifying variants in high-throughput sequencing align each read against a reference sequence, and then call variants at each position. Here, we demonstrate an orthogonal means of identifying sequence variation by grouping the reads as amplicons prior to any alignment. We used AmpliVar to make key-value hashes of sequence reads and group reads as individual amplicons using a table of flanking sequences. Low-abundance reads were removed according to a selectable threshold, and reads above this threshold were aligned as groups, rather than as individual reads, permitting the use of sensitive alignment tools. We show that this approach is more sensitive, more specific, and more computationally efficient than comparable methods for the analysis of amplicon-based high-throughput sequencing data. The method can be extended to enable alignment-free confirmation of variants seen in hybridization capture target-enrichment data. © 2015 WILEY PERIODICALS, INC.
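The key-value grouping step described above can be sketched with an ordinary dictionary: reads are binned by flanking sequences, identical reads are counted, and low-abundance sequences are dropped before alignment. All sequences and the count threshold below are hypothetical:

```python
from collections import Counter

# Hypothetical flanking-sequence table: amplicon name -> (left flank, right flank).
FLANKS = {
    "amplicon_1": ("ACGT", "TTAG"),
    "amplicon_2": ("GGCA", "CCTA"),
}

def group_reads(reads, flanks, min_count=5):
    """Key-value grouping of identical reads per amplicon, then
    abundance filtering before any alignment."""
    groups = {name: Counter() for name in flanks}
    for read in reads:
        for name, (left, right) in flanks.items():
            if read.startswith(left) and read.endswith(right):
                groups[name][read] += 1
                break
    # Drop low-abundance sequences (likely sequencing noise).
    return {name: {seq: n for seq, n in counts.items() if n >= min_count}
            for name, counts in groups.items()}

reads = ["ACGTAAACTTAG"] * 8 + ["ACGTAAGCTTAG"] * 2 + ["GGCATTTCCTA"] * 6
print(group_reads(reads, FLANKS, min_count=5))
```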
Burr, Tom; Hamada, Michael S.; Howell, John; ...
2013-01-01
Process monitoring (PM) for nuclear safeguards sometimes requires estimation of thresholds corresponding to small false alarm rates. Threshold estimation dates to the 1920s with the Shewhart control chart; however, because possible new roles for PM are being evaluated in nuclear safeguards, it is timely to consider modern model selection options in the context of threshold estimation. One of the possible new PM roles involves PM residuals, where a residual is defined as residual = data − prediction. This paper reviews alarm threshold estimation, introduces model selection options, and considers a range of assumptions regarding the data-generating mechanism for PM residuals. Two PM examples from nuclear safeguards are included to motivate the need for alarm threshold estimation. The first example involves mixtures of probability distributions that arise in solution monitoring, which is a common type of PM. The second example involves periodic partial cleanout of in-process inventory, leading to challenging structure in the time series of PM residuals.
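To make the threshold-estimation problem concrete, the sketch below contrasts a parametric (single-normal) threshold with a distribution-free empirical quantile on mixture-distributed residuals of the kind the first example describes; the mixture parameters and target false alarm rate are hypothetical:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
# Residuals = data - prediction; here a two-component mixture stands in for
# solution-monitoring data with two operating modes.
residuals = np.concatenate([rng.normal(0, 1, 9000), rng.normal(0, 3, 1000)])

alpha = 1e-3  # target false alarm rate per observation

# Parametric estimate assuming a single normal component.
t_normal = norm.ppf(1 - alpha, loc=residuals.mean(), scale=residuals.std(ddof=1))

# Distribution-free estimate from the empirical quantile.
t_empirical = np.quantile(residuals, 1 - alpha)

print(f"normal-model threshold: {t_normal:.2f}")
print(f"empirical quantile:     {t_empirical:.2f}")
print(f"realized false alarm rate (normal t): {(residuals > t_normal).mean():.4f}")
```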
Ayers, Christopher A; Fisher, Lee E; Gaunt, Robert A; Weber, Douglas J
2016-07-01
Patterned microstimulation of the dorsal root ganglion (DRG) has been proposed as a method for delivering tactile and proprioceptive feedback to amputees. Previous studies demonstrated that large- and medium-diameter afferent neurons could be recruited separately, even several months after implantation. However, those studies did not examine the anatomical localization of sensory fibers recruited by microstimulation in the DRG. Achieving precise recruitment with respect to both modality and receptive field locations will likely be crucial to create a viable sensory neuroprosthesis. In this study, penetrating microelectrode arrays were implanted in the L5, L6, and L7 DRG of four isoflurane-anesthetized cats instrumented with nerve cuff electrodes around the proximal and distal branches of the sciatic and femoral nerves. A binary search was used to find the recruitment threshold for evoking a response in each nerve cuff. The selectivity of DRG stimulation was characterized by the ability to recruit individual distal branches to the exclusion of all others at threshold; 84.7% (n = 201) of the stimulation electrodes recruited a single nerve branch, with 9 of the 15 instrumented nerves recruited selectively. The median stimulation threshold was 0.68 nC/phase, and the median dynamic range (increase in charge while stimulation remained selective) was 0.36 nC/phase. These results demonstrate the ability of DRG microstimulation to achieve selective recruitment of the major nerve branches of the hindlimb, suggesting that this approach could be used to drive sensory input from localized regions of the limb. This sensory input might be useful for restoring tactile and proprioceptive feedback to a lower-limb amputee. Copyright © 2016 the American Physiological Society.
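The binary search for a recruitment threshold reduces to a bisection over stimulus charge, assuming the evoked response is monotone in charge. A minimal sketch with a toy response model (the 0.68 nC/phase value echoes the reported median, but the search logic is generic):

```python
def recruitment_threshold(evokes_response, lo=0.0, hi=10.0, tol=0.05):
    """Binary search for the smallest stimulus charge (nC/phase) that
    evokes a response, given a monotone response test."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if evokes_response(mid):
            hi = mid          # response seen: threshold is at or below mid
        else:
            lo = mid          # no response: threshold is above mid
    return hi

# Toy response model with a true threshold of 0.68 nC/phase.
print(recruitment_threshold(lambda q: q >= 0.68))
```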
Thymic selection threshold defined by compartmentalization of Ras/MAPK signalling.
Daniels, Mark A; Teixeiro, Emma; Gill, Jason; Hausmann, Barbara; Roubaty, Dominique; Holmberg, Kaisa; Werlen, Guy; Holländer, Georg A; Gascoigne, Nicholas R J; Palmer, Ed
2006-12-07
A healthy individual can mount an immune response to exogenous pathogens while avoiding an autoimmune attack on normal tissues. The ability to distinguish between self and non-self is called 'immunological tolerance' and, for T lymphocytes, involves the generation of a diverse pool of functional T cells through positive selection and the removal of overtly self-reactive thymocytes by negative selection during T-cell ontogeny. To elucidate how thymocytes arrive at these cell fate decisions, here we have identified ligands that define an extremely narrow gap spanning the threshold that distinguishes positive from negative selection. We show that, at the selection threshold, a small increase in ligand affinity for the T-cell antigen receptor leads to a marked change in the activation and subcellular localization of Ras and mitogen-activated protein kinase (MAPK) signalling intermediates and the induction of negative selection. The ability to compartmentalize signalling molecules differentially in the cell endows the thymocyte with the ability to convert a small change in analogue input (affinity) into a digital output (positive versus negative selection) and provides the basis for establishing central tolerance.
Systematic wavelength selection for improved multivariate spectral analysis
Thomas, Edward V.; Robinson, Mark R.; Haaland, David M.
1995-01-01
Methods and apparatus for determining in a biological material one or more unknown values of at least one known characteristic (e.g., the concentration of an analyte such as glucose in blood, or the concentration of one or more blood gas parameters) with a model based on a set of samples with known values of the known characteristics and a multivariate algorithm using several wavelength subsets. The method includes selecting multiple wavelength subsets, from the electromagnetic spectral region appropriate for determining the known characteristic, for use by an algorithm wherein the selection of wavelength subsets improves the model's fitness of the determination for the unknown values of the known characteristic. The selection process utilizes multivariate search methods that select both predictive and synergistic wavelengths within the range of wavelengths utilized. The fitness of the wavelength subsets is determined by the fitness function F = f(cost, performance). The method includes the steps of: (1) using one or more applications of a genetic algorithm to produce one or more count spectra, with multiple count spectra then combined to produce a combined count spectrum; (2) smoothing the count spectrum; (3) selecting a threshold count from a count spectrum to select those wavelength subsets which optimize the fitness function; and (4) eliminating a portion of the selected wavelength subsets. The determination of the unknown values can be made: (1) noninvasively and in vivo; (2) invasively and in vivo; or (3) in vitro.
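Steps (1)-(3) of the claimed method amount to thresholding a smoothed count spectrum. A minimal sketch with hypothetical GA output (random masks standing in for real genetic-algorithm runs):

```python
import numpy as np

rng = np.random.default_rng(3)
n_wavelengths, n_runs = 200, 25

# Hypothetical GA output: each run returns a binary mask of selected
# wavelengths; summing the masks gives a combined count spectrum.
runs = rng.random((n_runs, n_wavelengths)) < 0.3
count_spectrum = runs.sum(axis=0).astype(float)

# Smooth the count spectrum with a simple moving average.
kernel = np.ones(5) / 5.0
smoothed = np.convolve(count_spectrum, kernel, mode="same")

# Keep wavelengths whose smoothed count clears a threshold count.
threshold_count = 0.4 * n_runs
selected = np.flatnonzero(smoothed >= threshold_count)
print(f"{selected.size} wavelengths selected")
```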
NASA Astrophysics Data System (ADS)
Wen, Hongwei; Liu, Yue; Wang, Shengpei; Li, Zuoyong; Zhang, Jishui; Peng, Yun; He, Huiguang
2017-03-01
Tourette syndrome (TS) is a childhood-onset neurobehavioral disorder. To date, TS is still misdiagnosed due to its varied presentation and lack of obvious clinical symptoms; studies of objective imaging biomarkers are therefore of great importance for early TS diagnosis. Tic generation has been linked to disturbed structural networks, and many recent efforts have investigated brain functional or structural networks using machine learning methods for the purpose of disease diagnosis. However, few such studies have addressed TS, and those that have suffered from some drawbacks. We therefore propose a novel classification framework integrating a multi-threshold strategy and a network fusion scheme to address these preexisting drawbacks. We used diffusion MRI probabilistic tractography to construct the structural networks of 44 TS children and 48 healthy children, and adapted the similarity network fusion algorithm specifically to fuse the multi-threshold structural networks. Graph theoretical analysis was then implemented, and nodal degree, nodal efficiency and nodal betweenness centrality were selected as features. Finally, the support vector machine recursive feature elimination (SVM-RFE) algorithm was used for feature selection, and the optimal features were fed into an SVM to automatically discriminate TS children from controls. We achieved a high accuracy of 89.13% evaluated by nested cross-validation, demonstrating the superior performance of our framework over the comparison methods. The discriminative regions involved in classification were located primarily in the basal ganglia and frontal cortico-cortical networks, all highly related to the pathology of TS. Together, our study may provide potential neuroimaging biomarkers for early-stage TS diagnosis.
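SVM-RFE, the feature selection step named above, is available off the shelf. A minimal sketch using scikit-learn's RFE with a linear SVM on synthetic stand-ins for the nodal graph features (dimensions and effect size are hypothetical):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
# Stand-in for nodal graph features (degree, efficiency, betweenness) of
# 92 subjects; the first 10 features carry the group signal.
X = rng.normal(size=(92, 120))
y = np.array([0] * 48 + [1] * 44)
X[y == 1, :10] += 0.8

# Recursive feature elimination with a linear SVM, as in SVM-RFE.
selector = RFE(SVC(kernel="linear", C=1.0), n_features_to_select=20, step=5)
selector.fit(X, y)

# Accuracy with the selected features (note: for an unbiased estimate the
# selection itself should sit inside the cross-validation loop, as in the
# paper's nested cross-validation).
scores = cross_val_score(SVC(kernel="linear"), X[:, selector.support_], y, cv=5)
print(f"mean CV accuracy: {scores.mean():.3f}")
```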
Zinc oxide nanoparticles as selective killers of proliferating cells
Taccola, Liuba; Raffa, Vittoria; Riggio, Cristina; Vittorio, Orazio; Iorio, Maria Carla; Vanacore, Renato; Pietrabissa, Andrea; Cuschieri, Alfred
2011-01-01
Background: It has recently been demonstrated that zinc oxide nanoparticles (ZnO NPs) induce death of cancerous cells whilst having no cytotoxic effect on normal cells. However, several issues need to be resolved before translation of zinc oxide nanoparticles into medical use, including the lack of suitable biocompatible dispersion protocols and the need for a better understanding of the mechanism of their selective cytotoxic action. Methods: The nanoparticle dose affecting cell viability was evaluated in a model of proliferating cells, both experimentally and mathematically. The key issue of the selective toxicity of ZnO NPs toward proliferating cells was addressed by experiments using a biological model of noncancerous cells, i.e., mesenchymal stem cells before and after differentiation to the osteogenic lineage. Results: In this paper, we report a biocompatible protocol for the preparation of stable aqueous solutions of monodispersed zinc oxide nanoparticles. We found that the threshold intracellular ZnO NP concentration required to induce cell death in proliferating cells is 0.4 ± 0.02 mM. Finally, flow cytometry analysis revealed that the threshold dose of zinc oxide nanoparticles was lethal to proliferating pluripotent mesenchymal stem cells but exhibited negligible cytotoxic effects on osteogenically differentiated mesenchymal stem cells. Conclusion: The results confirm the selective cytotoxic action of ZnO NPs on rapidly proliferating cells, whether benign or malignant. PMID:21698081
Wang, Yi-Ting; Sung, Pei-Yuan; Lin, Peng-Lin; Yu, Ya-Wen; Chung, Ren-Hua
2015-05-15
Genome-wide association studies (GWAS) have become a common approach to identifying single nucleotide polymorphisms (SNPs) associated with complex diseases. As complex diseases are caused by the joint effects of multiple genes, while the effect of any individual gene or SNP is modest, a method considering the joint effects of multiple SNPs can be more powerful than testing individual SNPs. Multi-SNP analysis aims to test association based on a SNP set, usually defined based on biological knowledge such as a gene or pathway, which may contain only a portion of SNPs with effects on the disease. Therefore, a challenge for multi-SNP analysis is how to effectively select a subset of SNPs with promising association signals from the SNP set. We developed the Optimal P-value Threshold Pedigree Disequilibrium Test (OPTPDT). The OPTPDT uses general nuclear families. A variable p-value threshold algorithm is used to determine an optimal p-value threshold for selecting a subset of SNPs, and a permutation procedure is used to assess the significance of the test. We used simulations to verify that the OPTPDT has correct type I error rates. Our power studies showed that the OPTPDT can be more powerful than the set-based test in PLINK, the multi-SNP FBAT test, and the p-value-based test GATES. We applied the OPTPDT to a family-based autism GWAS dataset for gene-based association analysis and identified MACROD2-AS1 with genome-wide significance (p-value = 2.5 × 10⁻⁶). Our simulation results suggest that the OPTPDT is a valid and powerful test that will be helpful for gene-based or pathway association analysis. The method is ideal for the secondary analysis of existing GWAS datasets, which may identify a set of SNPs with joint effects on the disease.
Quantitative Ultrasound for Measuring Obstructive Severity in Children with Hydronephrosis.
Cerrolaza, Juan J; Peters, Craig A; Martin, Aaron D; Myers, Emmarie; Safdar, Nabile; Linguraru, Marius George
2016-04-01
We define sonographic biomarkers for hydronephrotic renal units that can predict the necessity of diuretic nuclear renography. We selected a cohort of 50 consecutive patients with hydronephrosis of varying severity in whom 2-dimensional sonography and diuretic mercaptoacetyltriglycine renography had been performed. A total of 131 morphological parameters were computed using quantitative image analysis algorithms. Machine learning techniques were then applied to identify ultrasound based safety thresholds that agreed with the t½ for washout. A best fit model was then derived for each threshold level of t½ that would be clinically relevant at 20, 30 and 40 minutes. Receiver operating characteristic curve analysis was performed. Sensitivity, specificity and area under the receiver operating characteristic curve were determined. Improvement obtained by the quantitative imaging method compared to the Society for Fetal Urology grading system and the hydronephrosis index was statistically verified. For the 3 thresholds considered and at 100% sensitivity the specificities of the quantitative imaging method were 94%, 70% and 74%, respectively. Corresponding area under the receiver operating characteristic curve values were 0.98, 0.94 and 0.94, respectively. Improvement obtained by the quantitative imaging method over the Society for Fetal Urology grade and hydronephrosis index was statistically significant (p <0.05 in all cases). Quantitative imaging analysis of renal sonograms in children with hydronephrosis can identify thresholds of clinically significant washout times with 100% sensitivity to decrease the number of diuretic renograms in up to 62% of children. Copyright © 2016 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.
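The operating points reported above (specificity at 100% sensitivity) can be read directly off an ROC curve. A minimal sketch with synthetic scores standing in for the quantitative imaging features:

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(5)
# Hypothetical data: 1 = prolonged washout (t1/2 above the cutoff), 0 = not.
labels = np.array([0] * 35 + [1] * 15)
scores = np.concatenate([rng.normal(0.30, 0.15, 35), rng.normal(0.75, 0.15, 15)])

fpr, tpr, thresholds = roc_curve(labels, scores)

# Most conservative operating point: the largest threshold that still gives
# 100% sensitivity (no missed obstructive cases).
idx = np.flatnonzero(tpr >= 1.0)[0]
print(f"threshold = {thresholds[idx]:.3f}, "
      f"specificity at 100% sensitivity = {1 - fpr[idx]:.2f}, "
      f"AUC = {roc_auc_score(labels, scores):.3f}")
```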
24 CFR 1003.302 - Project specific threshold requirements.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 24 Housing and Urban Development 4 2014-04-01 2014-04-01 false Project specific threshold requirements. 1003.302 Section 1003.302 Housing and Urban Development REGULATIONS RELATING TO HOUSING AND URBAN... Purpose Grant Application and Selection Process § 1003.302 Project specific threshold requirements. (a...
Bikel, Shirley; Jacobo-Albavera, Leonor; Sánchez-Muñoz, Fausto; Cornejo-Granados, Fernanda; Canizales-Quinteros, Samuel; Soberón, Xavier; Sotelo-Mundo, Rogerio R; Del Río-Navarro, Blanca E; Mendoza-Vargas, Alfredo; Sánchez, Filiberto; Ochoa-Leyva, Adrian
2017-01-01
In spite of the emergence of RNA sequencing (RNA-seq), microarrays remain in widespread use for gene expression analysis in the clinic. There are over 767,000 RNA microarrays from human samples in public repositories, which are an invaluable resource for biomedical research and personalized medicine. Absolute gene expression analysis allows the transcriptome profiling of all expressed genes under a specific biological condition without the need for a reference sample. However, background fluorescence presents a challenge for determining absolute gene expression in microarrays. Given that the Y chromosome is absent in female subjects, we used it as the basis of a new approach to absolute gene expression analysis, in which the fluorescence of the Y chromosome genes of female subjects was used as the background fluorescence for all probes in the microarray. This fluorescence was used to establish an absolute gene expression threshold, allowing the differentiation between expressed and non-expressed genes in microarrays. We extracted RNA from 16 children's leukocyte samples (nine males and seven females, ages 6-10 years). An Affymetrix GeneChip Human Gene 1.0 ST Array was run for each sample, and the fluorescence of 124 genes of the Y chromosome was used to calculate the absolute gene expression threshold. Several expressed and non-expressed genes according to our absolute gene expression threshold were then compared against the expression obtained using real-time quantitative polymerase chain reaction (RT-qPCR). Of the 124 genes of the Y chromosome, three genes (DDX3Y, TXLNG2P and EIF1AY) that displayed significant differences between sexes were used to calculate the absolute gene expression threshold. Using this threshold, we selected 13 expressed and non-expressed genes and confirmed their expression levels by RT-qPCR. We then selected the top 5% most expressed genes and found that several KEGG pathways were significantly enriched. Interestingly, these pathways were related to typical leukocyte functions, such as antigen processing and presentation and natural killer cell mediated cytotoxicity. We also applied this method to obtain the absolute gene expression threshold in already published microarray data of liver cells, where the top 5% expressed genes showed an enrichment of KEGG pathways typical for liver cells. Our results suggest that the three selected genes of the Y chromosome can be used to calculate an absolute gene expression threshold, allowing transcriptome profiling of microarray data without the need for an additional reference experiment. Our approach, based on establishing a threshold for absolute gene expression analysis, enables a new way to analyze thousands of microarrays from public databases and the study of different human diseases without the need for additional samples for relative expression experiments.
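The thresholding idea above is straightforward to sketch: Y-chromosome probes in female arrays measure pure background, and their fluorescence distribution sets the cutoff. The percentile rule and all numbers below are hypothetical, not the paper's procedure:

```python
import numpy as np

rng = np.random.default_rng(6)
n_probes = 20000

# Hypothetical log2 fluorescence for 7 female arrays. Y-chromosome probes in
# females measure pure background, so their distribution sets the threshold.
female_arrays = rng.normal(5.0, 1.0, size=(7, n_probes))
y_probe_idx = rng.choice(n_probes, size=124, replace=False)
background = female_arrays[:, y_probe_idx].ravel()

# Call a probe "expressed" if it exceeds, say, the 99th percentile of the
# Y-gene background distribution (the exact rule is an assumption here).
threshold = np.percentile(background, 99)

sample = rng.normal(5.5, 1.5, n_probes)       # one hypothetical array
print(f"threshold = {threshold:.2f} (log2 units); "
      f"{(sample > threshold).mean():.1%} of probes called expressed")
```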
Code of Federal Regulations, 2013 CFR
2013-10-01
... 48 Federal Acquisition Regulations System 1 2013-10-01 2013-10-01 false Short selection process for contracts not to exceed the simplified acquisition threshold. 36.602-5 Section 36.602-5 Federal Acquisition Regulations System FEDERAL ACQUISITION REGULATION SPECIAL CATEGORIES OF CONTRACTING CONSTRUCTION AND ARCHITECT-ENGINEER CONTRACTS...
Code of Federal Regulations, 2014 CFR
2014-10-01
... 48 Federal Acquisition Regulations System 1 2014-10-01 2014-10-01 false Short selection process for contracts not to exceed the simplified acquisition threshold. 36.602-5 Section 36.602-5 Federal Acquisition Regulations System FEDERAL ACQUISITION REGULATION SPECIAL CATEGORIES OF CONTRACTING CONSTRUCTION AND ARCHITECT-ENGINEER CONTRACTS...
Code of Federal Regulations, 2011 CFR
2011-10-01
... 48 Federal Acquisition Regulations System 5 2011-10-01 2011-10-01 false Short selection process for contracts not to exceed the simplified acquisition threshold. 1336.602-5 Section 1336.602-5 Federal Acquisition Regulations System DEPARTMENT OF COMMERCE SPECIAL CATEGORIES OF CONTRACTING CONSTRUCTION AND ARCHITECT-ENGINEER CONTRACTS...
Code of Federal Regulations, 2012 CFR
2012-10-01
... 48 Federal Acquisition Regulations System 5 2012-10-01 2012-10-01 false Short selection process for contracts not to exceed the simplified acquisition threshold. 836.602-5 Section 836.602-5 Federal Acquisition Regulations System DEPARTMENT OF VETERANS AFFAIRS SPECIAL CATEGORIES OF CONTRACTING CONSTRUCTION AND ARCHITECT-ENGINEER CONTRACTS...
Code of Federal Regulations, 2013 CFR
2013-10-01
... 48 Federal Acquisition Regulations System 4 2013-10-01 2013-10-01 false Short selection processes for contracts not to exceed the simplified acquisition threshold. 636.602-5 Section 636.602-5 Federal Acquisition Regulations System DEPARTMENT OF STATE SPECIAL CATEGORIES OF CONTRACTING CONSTRUCTION AND ARCHITECT-ENGINEER CONTRACTS...
Code of Federal Regulations, 2013 CFR
2013-10-01
... 48 Federal Acquisition Regulations System 4 2013-10-01 2013-10-01 false Short selection process for contracts not to exceed the simplified acquisition threshold. 436.602-5 Section 436.602-5 Federal Acquisition Regulations System DEPARTMENT OF AGRICULTURE SPECIAL CATEGORIES OF CONTRACTING CONSTRUCTION AND ARCHITECT-ENGINEER CONTRACTS...
Code of Federal Regulations, 2014 CFR
2014-10-01
... 48 Federal Acquisition Regulations System 5 2014-10-01 2014-10-01 false Short selection processes for contracts not to exceed the simplified acquisition threshold. 1436.602-5 Section 1436.602-5 Federal Acquisition Regulations System DEPARTMENT OF THE INTERIOR SPECIAL CATEGORIES OF CONTRACTING CONSTRUCTION AND ARCHITECT-ENGINEER CONTRACTS...
Code of Federal Regulations, 2011 CFR
2011-10-01
... 48 Federal Acquisition Regulations System 5 2011-10-01 2011-10-01 false Short selection process for procurements not to exceed the simplified acquisition threshold. 736.602-5 Section 736.602-5 Federal Acquisition Regulations System AGENCY FOR INTERNATIONAL DEVELOPMENT SPECIAL CATEGORIES OF CONTRACTING CONSTRUCTION AND ARCHITECT-ENGINEER CONTRACT...
Code of Federal Regulations, 2014 CFR
2014-10-01
... 48 Federal Acquisition Regulations System 5 2014-10-01 2014-10-01 false Short selection process for contracts not to exceed the simplified acquisition threshold. 1336.602-5 Section 1336.602-5 Federal Acquisition Regulations System DEPARTMENT OF COMMERCE SPECIAL CATEGORIES OF CONTRACTING CONSTRUCTION AND ARCHITECT-ENGINEER CONTRACTS...
Code of Federal Regulations, 2010 CFR
2010-10-01
... 48 Federal Acquisition Regulations System 4 2010-10-01 2010-10-01 false Short selection processes for contracts not to exceed the simplified acquisition threshold. 636.602-5 Section 636.602-5 Federal Acquisition Regulations System DEPARTMENT OF STATE SPECIAL CATEGORIES OF CONTRACTING CONSTRUCTION AND ARCHITECT-ENGINEER CONTRACTS...
Code of Federal Regulations, 2011 CFR
2011-10-01
... 48 Federal Acquisition Regulations System 5 2011-10-01 2011-10-01 false Short selection process for contracts not to exceed the simplified acquisition threshold. 1036.602-5 Section 1036.602-5 Federal Acquisition Regulations System DEPARTMENT OF THE TREASURY SPECIAL CATEGORIES OF CONTRACTING CONSTRUCTION AND ARCHITECT-ENGINEER CONTRACTS...
Code of Federal Regulations, 2014 CFR
2014-10-01
... 48 Federal Acquisition Regulations System 4 2014-10-01 2014-10-01 false Short selection process for contracts not to exceed the simplified acquisition threshold. 436.602-5 Section 436.602-5 Federal Acquisition Regulations System DEPARTMENT OF AGRICULTURE SPECIAL CATEGORIES OF CONTRACTING CONSTRUCTION AND ARCHITECT-ENGINEER CONTRACTS...
Code of Federal Regulations, 2011 CFR
2011-10-01
... 48 Federal Acquisition Regulations System 4 2011-10-01 2011-10-01 false Short selection process for contracts not to exceed the simplified acquisition threshold. 436.602-5 Section 436.602-5 Federal Acquisition Regulations System DEPARTMENT OF AGRICULTURE SPECIAL CATEGORIES OF CONTRACTING CONSTRUCTION AND ARCHITECT-ENGINEER CONTRACTS...
Code of Federal Regulations, 2011 CFR
2011-10-01
... 48 Federal Acquisition Regulations System 5 2011-10-01 2011-10-01 false Short selection processes for contracts not to exceed the simplified acquisition threshold. 1436.602-5 Section 1436.602-5 Federal Acquisition Regulations System DEPARTMENT OF THE INTERIOR SPECIAL CATEGORIES OF CONTRACTING CONSTRUCTION AND ARCHITECT-ENGINEER CONTRACTS...
Code of Federal Regulations, 2012 CFR
2012-10-01
... 48 Federal Acquisition Regulations System 5 2012-10-01 2012-10-01 false Short selection process for procurements not to exceed the simplified acquisition threshold. 736.602-5 Section 736.602-5 Federal Acquisition Regulations System AGENCY FOR INTERNATIONAL DEVELOPMENT SPECIAL CATEGORIES OF CONTRACTING CONSTRUCTION AND ARCHITECT-ENGINEER CONTRACT...
Code of Federal Regulations, 2010 CFR
2010-10-01
... 48 Federal Acquisition Regulations System 4 2010-10-01 2010-10-01 false Short selection process for contracts not to exceed the simplified acquisition threshold. 436.602-5 Section 436.602-5 Federal Acquisition Regulations System DEPARTMENT OF AGRICULTURE SPECIAL CATEGORIES OF CONTRACTING CONSTRUCTION AND ARCHITECT-ENGINEER CONTRACTS...
Code of Federal Regulations, 2011 CFR
2011-10-01
... 48 Federal Acquisition Regulations System 5 2011-10-01 2011-10-01 false Short selection process for contracts not to exceed the simplified acquisition threshold. 836.602-5 Section 836.602-5 Federal Acquisition Regulations System DEPARTMENT OF VETERANS AFFAIRS SPECIAL CATEGORIES OF CONTRACTING CONSTRUCTION AND ARCHITECT-ENGINEER CONTRACTS...
Code of Federal Regulations, 2013 CFR
2013-10-01
... 48 Federal Acquisition Regulations System 5 2013-10-01 2013-10-01 false Short selection process for contracts not to exceed the simplified acquisition threshold. 1336.602-5 Section 1336.602-5 Federal Acquisition Regulations System DEPARTMENT OF COMMERCE SPECIAL CATEGORIES OF CONTRACTING CONSTRUCTION AND ARCHITECT-ENGINEER CONTRACTS...
Code of Federal Regulations, 2012 CFR
2012-10-01
... 48 Federal Acquisition Regulations System 1 2012-10-01 2012-10-01 false Short selection process for contracts not to exceed the simplified acquisition threshold. 36.602-5 Section 36.602-5 Federal Acquisition Regulations System FEDERAL ACQUISITION REGULATION SPECIAL CATEGORIES OF CONTRACTING CONSTRUCTION AND ARCHITECT-ENGINEER CONTRACTS...
Code of Federal Regulations, 2014 CFR
2014-10-01
... 48 Federal Acquisition Regulations System 5 2014-10-01 2014-10-01 false Short selection process for contracts not to exceed the simplified acquisition threshold. 836.602-5 Section 836.602-5 Federal Acquisition Regulations System DEPARTMENT OF VETERANS AFFAIRS SPECIAL CATEGORIES OF CONTRACTING CONSTRUCTION AND ARCHITECT-ENGINEER CONTRACTS...
Code of Federal Regulations, 2012 CFR
2012-10-01
... 48 Federal Acquisition Regulations System 5 2012-10-01 2012-10-01 false Short selection process for contracts not to exceed the simplified acquisition threshold. 1336.602-5 Section 1336.602-5 Federal Acquisition Regulations System DEPARTMENT OF COMMERCE SPECIAL CATEGORIES OF CONTRACTING CONSTRUCTION AND ARCHITECT-ENGINEER CONTRACTS...
Code of Federal Regulations, 2012 CFR
2012-10-01
... 48 Federal Acquisition Regulations System 5 2012-10-01 2012-10-01 false Short selection processes for contracts not to exceed the simplified acquisition threshold. 1436.602-5 Section 1436.602-5 Federal Acquisition Regulations System DEPARTMENT OF THE INTERIOR SPECIAL CATEGORIES OF CONTRACTING CONSTRUCTION AND ARCHITECT-ENGINEER CONTRACTS...
Code of Federal Regulations, 2011 CFR
2011-10-01
... 48 Federal Acquisition Regulations System 4 2011-10-01 2011-10-01 false Short selection processes for contracts not to exceed the simplified acquisition threshold. 636.602-5 Section 636.602-5 Federal Acquisition Regulations System DEPARTMENT OF STATE SPECIAL CATEGORIES OF CONTRACTING CONSTRUCTION AND ARCHITECT-ENGINEER CONTRACTS...
Code of Federal Regulations, 2010 CFR
2010-10-01
... 48 Federal Acquisition Regulations System 1 2010-10-01 2010-10-01 false Short selection process for contracts not to exceed the simplified acquisition threshold. 36.602-5 Section 36.602-5 Federal Acquisition Regulations System FEDERAL ACQUISITION REGULATION SPECIAL CATEGORIES OF CONTRACTING CONSTRUCTION AND ARCHITECT-ENGINEER CONTRACTS...
Code of Federal Regulations, 2014 CFR
2014-10-01
... 48 Federal Acquisition Regulations System 5 2014-10-01 2014-10-01 false Short selection process for procurements not to exceed the simplified acquisition threshold. 736.602-5 Section 736.602-5 Federal Acquisition Regulations System AGENCY FOR INTERNATIONAL DEVELOPMENT SPECIAL CATEGORIES OF CONTRACTING CONSTRUCTION AND ARCHITECT-ENGINEER CONTRACT...
Code of Federal Regulations, 2013 CFR
2013-10-01
... 48 Federal Acquisition Regulations System 5 2013-10-01 2013-10-01 false Short selection process for contracts not to exceed the simplified acquisition threshold. 1036.602-5 Section 1036.602-5 Federal Acquisition Regulations System DEPARTMENT OF THE TREASURY SPECIAL CATEGORIES OF CONTRACTING CONSTRUCTION AND ARCHITECT-ENGINEER CONTRACTS...
Code of Federal Regulations, 2011 CFR
2011-10-01
... 48 Federal Acquisition Regulations System 1 2011-10-01 2011-10-01 false Short selection process for contracts not to exceed the simplified acquisition threshold. 36.602-5 Section 36.602-5 Federal Acquisition Regulations System FEDERAL ACQUISITION REGULATION SPECIAL CATEGORIES OF CONTRACTING CONSTRUCTION AND ARCHITECT-ENGINEER CONTRACTS...
Code of Federal Regulations, 2012 CFR
2012-10-01
... 48 Federal Acquisition Regulations System 5 2012-10-01 2012-10-01 false Short selection process for contracts not to exceed the simplified acquisition threshold. 1036.602-5 Section 1036.602-5 Federal Acquisition Regulations System DEPARTMENT OF THE TREASURY SPECIAL CATEGORIES OF CONTRACTING CONSTRUCTION AND ARCHITECT-ENGINEER CONTRACTS...
Code of Federal Regulations, 2014 CFR
2014-10-01
... 48 Federal Acquisition Regulations System 5 2014-10-01 2014-10-01 false Short selection process for contracts not to exceed the simplified acquisition threshold. 1036.602-5 Section 1036.602-5 Federal Acquisition Regulations System DEPARTMENT OF THE TREASURY SPECIAL CATEGORIES OF CONTRACTING CONSTRUCTION AND ARCHITECT-ENGINEER CONTRACTS...
Code of Federal Regulations, 2013 CFR
2013-10-01
... 48 Federal Acquisition Regulations System 5 2013-10-01 2013-10-01 false Short selection processes for contracts not to exceed the simplified acquisition threshold. 1436.602-5 Section 1436.602-5 Federal Acquisition Regulations System DEPARTMENT OF THE INTERIOR SPECIAL CATEGORIES OF CONTRACTING CONSTRUCTION AND ARCHITECT-ENGINEER CONTRACTS...
Code of Federal Regulations, 2014 CFR
2014-10-01
... 48 Federal Acquisition Regulations System 4 2014-10-01 2014-10-01 false Short selection processes for contracts not to exceed the simplified acquisition threshold. 636.602-5 Section 636.602-5 Federal Acquisition Regulations System DEPARTMENT OF STATE SPECIAL CATEGORIES OF CONTRACTING CONSTRUCTION AND ARCHITECT-ENGINEER CONTRACTS...
Thapaliya, Kiran; Pyun, Jae-Young; Park, Chun-Su; Kwon, Goo-Rak
2013-01-01
The level set approach is a powerful tool for segmenting images. This paper proposes a method for segmenting brain tumors in MR images. A new signed pressure function (SPF) that can efficiently stop the contours at weak or blurred edges is introduced. The local statistics of the different objects present in the MR images were calculated and used to identify the tumor objects among the different objects. In level set methods, the calculation of the parameters is a challenging task; here, the different parameters are calculated automatically for different types of images. The basic thresholding value is updated and adjusted automatically for different MR images and is used to calculate the different parameters in the proposed algorithm. The proposed algorithm was tested on magnetic resonance images of the brain for tumor segmentation, and its performance was evaluated visually and quantitatively. Numerical experiments on brain tumor images highlighted the efficiency and robustness of this method. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.
Lin, Daniel W; Crawford, E David; Keane, Thomas; Evans, Brent; Reid, Julia; Rajamani, Saradha; Brown, Krystal; Gutin, Alexander; Tward, Jonathan; Scardino, Peter; Brawer, Michael; Stone, Steven; Cuzick, Jack
2018-06-01
A combined clinical cell-cycle risk (CCR) score that incorporates prognostic molecular and clinical information has been recently developed and validated to improve prostate cancer mortality (PCM) risk stratification over clinical features alone. As clinical features are currently used to select men for active surveillance (AS), we developed and validated a CCR score threshold to improve the identification of men with low-risk disease who are appropriate for AS. The score threshold was selected based on the 90th percentile of CCR scores among men who might typically be considered for AS based on NCCN low/favorable-intermediate risk criteria (CCR = 0.8). The threshold was validated using 10-year PCM in an unselected, conservatively managed cohort and in the subset of the same cohort after excluding men with high-risk features. The clinical effect was evaluated in a contemporary clinical cohort. In the unselected validation cohort, men with CCR scores below the threshold had a predicted mean 10-year PCM of 2.7%, and the threshold significantly dichotomized low- and high-risk disease (P = 1.2 × 10⁻⁵). After excluding high-risk men from the validation cohort, men with CCR scores below the threshold had a predicted mean 10-year PCM of 2.3%, and the threshold significantly dichotomized low- and high-risk disease (P = 0.020). There were no prostate cancer-specific deaths in men with CCR scores below the threshold in either analysis. The proportion of men in the clinical testing cohort identified as candidates for AS was substantially higher using the threshold (68.8%) compared to clinicopathologic features alone (42.6%), while mean 10-year predicted PCM risks remained essentially identical (1.9% vs. 2.0%, respectively). The CCR score threshold appropriately dichotomized patients into low- and high-risk groups for 10-year PCM, and may enable more appropriate selection of patients for AS. Copyright © 2018 Elsevier Inc. All rights reserved.
Methods of Muscle Activation Onset Timing Recorded During Spinal Manipulation.
Currie, Stuart J; Myers, Casey A; Krishnamurthy, Ashok; Enebo, Brian A; Davidson, Bradley S
2016-05-01
The purpose of this study was to determine electromyographic threshold parameters that most reliably characterize the muscular response to spinal manipulation and compare 2 methods that detect muscle activity onset delay: the double-threshold method and cross-correlation method. Surface and indwelling electromyography were recorded during lumbar side-lying manipulations in 17 asymptomatic participants. Muscle activity onset delays in relation to the thrusting force were compared across methods and muscles using a generalized linear model. The threshold combinations that resulted in the lowest Detection Failures were the "8 SD-0 milliseconds" threshold (Detection Failures = 8) and the "8 SD-10 milliseconds" threshold (Detection Failures = 9). The average muscle activity onset delay for the double-threshold method across all participants was 149 ± 152 milliseconds for the multifidus and 252 ± 204 milliseconds for the erector spinae. The average onset delay for the cross-correlation method was 26 ± 101 for the multifidus and 67 ± 116 for the erector spinae. There were no statistical interactions, and a main effect of method demonstrated that the delays were higher when using the double-threshold method compared with cross-correlation. The threshold parameters that best characterized activity onset delays were an 8-SD amplitude and a 10-millisecond duration threshold. The double-threshold method correlated well with visual supervision of muscle activity. The cross-correlation method provides several advantages in signal processing; however, supervision was required for some results, negating this advantage. These results help standardize methods when recording neuromuscular responses of spinal manipulation and improve comparisons within and across investigations. Copyright © 2016 National University of Health Sciences. Published by Elsevier Inc. All rights reserved.
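The recommended double-threshold rule (8-SD amplitude, 10-ms duration) can be sketched as follows; the envelope smoothing window and the synthetic signal are assumptions, not the study's processing chain:

```python
import numpy as np

def onset_index(emg, fs, baseline_samples, k_sd=8, min_duration_ms=10):
    """Double-threshold onset detection: the rectified, smoothed signal must
    exceed (baseline mean + k_sd * SD) for at least min_duration_ms."""
    win = max(1, int(0.005 * fs))                      # 5 ms smoothing window
    env = np.convolve(np.abs(emg), np.ones(win) / win, mode="same")
    base = env[:baseline_samples]
    thr = base.mean() + k_sd * base.std(ddof=1)
    need = max(1, int(round(min_duration_ms * fs / 1000.0)))
    run = 0
    for i, above in enumerate(env > thr):
        run = run + 1 if above else 0
        if run >= need:
            return i - need + 1                        # first sample of the run
    return None

fs = 2000.0                                            # sampling rate, Hz
rng = np.random.default_rng(7)
signal = np.concatenate([rng.normal(0, 0.01, 1000),    # quiet baseline
                         rng.normal(0, 0.5, 400)])     # muscle burst
i = onset_index(signal, fs, baseline_samples=900)      # stats from clean baseline
print(f"onset detected at {i / fs * 1000:.1f} ms (true onset 500 ms)")
```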
Wavelet-based adaptive thresholding method for image segmentation
NASA Astrophysics Data System (ADS)
Chen, Zikuan; Tao, Yang; Chen, Xin; Griffis, Carl
2001-05-01
A nonuniform background distribution may cause a global thresholding method to fail to segment objects. One solution is using a local thresholding method that adapts to local surroundings. In this paper, we propose a novel local thresholding method for image segmentation, using multiscale threshold functions obtained by wavelet synthesis with weighted detail coefficients. In particular, the coarse-to-fine synthesis with attenuated detail coefficients produces a threshold function corresponding to a high-frequency-reduced signal. This wavelet-based local thresholding method adapts to both local size and local surroundings, and its implementation can take advantage of the fast wavelet algorithm. We applied this technique to physical contaminant detection for poultry meat inspection using x-ray imaging. Experiments showed that inclusion objects in deboned poultry could be extracted at multiple resolutions despite their irregular sizes and uneven backgrounds.
Zhou, Yulong; Gao, Min; Fang, Dan; Zhang, Baoquan
2016-01-01
In an effort to implement fast and effective tank segmentation from infrared images in complex backgrounds, the threshold of the maximum between-class variance method (i.e., the Otsu method) is analyzed and the working mechanism of the Otsu method is discussed. Subsequently, a fast and effective method for tank segmentation from infrared images in complex backgrounds is proposed, based on the Otsu method, via constraining the complex background of the image. Considering the complexity of the background, the original image is first divided into three classes (target region, middle background, and lower background) by maximizing the sum of their between-class variances. Then, an unsupervised background constraint is applied based on the within-class variance of the target region, so that the original image can be simplified. Finally, the Otsu method is applied to the simplified image for threshold selection. Experimental results on a variety of tank infrared images (880 × 480 pixels) in complex backgrounds demonstrate that the proposed method achieves better segmentation performance and is even comparable with manual segmentation. In addition, its average running time is only 9.22 ms, indicating that the new method performs well in real-time processing.
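For reference, the criterion this method extends is the classical single-threshold Otsu rule: choose the gray level that maximizes the between-class variance of the histogram. Below is a minimal Python sketch of that baseline rule, assuming an 8-bit grayscale NumPy image; the paper's three-class split and unsupervised background constraint are not reproduced here.

```python
# Minimal sketch of the classical Otsu criterion that the proposed
# method extends. Assumes an 8-bit grayscale image as a NumPy array.
import numpy as np

def otsu_threshold(image: np.ndarray) -> int:
    hist = np.bincount(image.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        between_var = w0 * w1 * (mu0 - mu1) ** 2  # between-class variance
        if between_var > best_var:
            best_t, best_var = t, between_var
    return best_t
```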
Li, Ke; Liu, Yi; Wang, Quanxin; Wu, Yalei; Song, Shimin; Sun, Yi; Liu, Tengchong; Wang, Jun; Li, Yang; Du, Shaoyi
2015-01-01
This paper proposes a novel multi-label classification method for resolving spacecraft electrical characteristics problems, which involve large amounts of unlabeled test data, high-dimensional features, long computing times, and slow identification rates. Firstly, both fuzzy c-means (FCM) offline clustering and principal component feature extraction algorithms are applied in the feature selection process. Secondly, an approximate weighted proximal support vector machine (WPSVM) online classification algorithm is used to reduce the feature dimension and further improve the recognition rate for spacecraft electrical characteristics. Finally, a data capture contribution method using thresholds is proposed to guarantee the validity and consistency of the data selection. The experimental results indicate that the proposed method can obtain better data features of the spacecraft electrical characteristics, improve the accuracy of identification, and shorten the computing time effectively. PMID:26544549
Sparse Zero-Sum Games as Stable Functional Feature Selection
Sokolovska, Nataliya; Teytaud, Olivier; Rizkalla, Salwa; Clément, Karine; Zucker, Jean-Daniel
2015-01-01
In large-scale systems biology applications, features are structured in hidden functional categories whose predictive power is identical. Feature selection, therefore, can not only lead to a problem of reduced dimensionality but also reveal some knowledge about the functional classes of variables. In this contribution, we propose a framework based on a sparse zero-sum game which performs stable functional feature selection. In particular, the approach is based on ranking feature subsets with a thresholding stochastic bandit. We provide a theoretical analysis of the introduced algorithm. We illustrate by experiments on both synthetic and real complex data that the proposed method is competitive from the predictive and stability viewpoints. PMID:26325268
Rios, Anthony; Kavuluru, Ramakanth
2013-09-01
Extracting diagnosis codes from medical records is a complex task carried out by trained coders who read all the documents associated with a patient's visit. With the popularity of electronic medical records (EMRs), computational approaches to code extraction have been proposed in recent years. Machine learning approaches to multi-label text classification provide an important methodology for this task, given that each EMR can be associated with multiple codes. In this paper, we study the role of feature selection, training data selection, and probabilistic threshold optimization in improving different multi-label classification approaches. We conduct experiments based on two different datasets: a recent gold standard dataset used for this task and a second, larger and more complex EMR dataset we curated from the University of Kentucky Medical Center. While conventional approaches achieve results comparable to the state of the art on the gold standard dataset, on our complex in-house dataset we show that feature selection, training data selection, and probabilistic thresholding provide significant gains in performance.
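Probabilistic thresholding of the kind studied here is commonly realized by tuning one decision threshold per label on held-out data. The following hedged sketch tunes per-label thresholds to maximize per-label F1; the array shapes and the F1 criterion are illustrative assumptions, not the authors' exact procedure.

```python
# Hedged sketch of per-label probabilistic threshold optimization for
# multi-label code extraction: each label's decision threshold is tuned
# on validation data to maximize that label's F1 score.
import numpy as np

def tune_thresholds(val_probs: np.ndarray, val_labels: np.ndarray) -> np.ndarray:
    # val_probs, val_labels: (n_examples, n_labels)
    n_labels = val_probs.shape[1]
    thresholds = np.full(n_labels, 0.5)
    for j in range(n_labels):
        best_f1 = -1.0
        for t in np.linspace(0.05, 0.95, 19):
            pred = val_probs[:, j] >= t
            tp = np.sum(pred & (val_labels[:, j] == 1))
            fp = np.sum(pred & (val_labels[:, j] == 0))
            fn = np.sum(~pred & (val_labels[:, j] == 1))
            f1 = 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 0.0
            if f1 > best_f1:
                best_f1, thresholds[j] = f1, t
    return thresholds
```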
NASA Astrophysics Data System (ADS)
Taki, Majid; San Miguel, Maxi; Santagiustina, Marco
2000-02-01
Degenerate optical parametric oscillators can exhibit both uniformly translating fronts and nonuniformly translating envelope fronts under the walk-off effect. The nonlinear dynamics near threshold is shown to be described by a real convective Swift-Hohenberg equation, which provides the main characteristics of the walk-off effect on pattern selection. The predictions of the selected wave vector and the absolute instability threshold are in very good quantitative agreement with numerical solutions found from the equations describing the optical parametric oscillator.
Simulation optimization of PSA-threshold based prostate cancer screening policies
Zhang, Jingyu; Denton, Brian T.; Shah, Nilay D.; Inman, Brant A.
2013-01-01
We describe a simulation optimization method to design PSA screening policies based on expected quality adjusted life years (QALYs). Our method integrates a simulation model in a genetic algorithm which uses a probabilistic method for selection of the best policy. We present computational results about the efficiency of our algorithm. The best policy generated by our algorithm is compared to previously recommended screening policies. Using the policies determined by our model, we present evidence that patients should be screened more aggressively but for a shorter length of time than previously published guidelines recommend. PMID:22302420
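A minimal sketch of this kind of wrapper is given below: candidate screening policies are encoded as bit strings over ages, fitness is an expected-QALY estimate returned by a simulation model, and parents are chosen probabilistically in proportion to fitness. The age range, operators, and the simulate_qalys stub are placeholder assumptions, not the authors' model.

```python
# Sketch of a genetic algorithm with a simulation-based fitness function
# and probabilistic (roulette-wheel) selection of the best policies.
import random

AGES = list(range(40, 76))  # candidate screening ages (assumption)

def simulate_qalys(policy):
    # Placeholder: a real implementation would run the disease simulation
    # and return the mean QALYs achieved under this screening policy.
    return random.random()

def evolve(pop_size=30, generations=50, mut_rate=0.02):
    pop = [[random.randint(0, 1) for _ in AGES] for _ in range(pop_size)]
    for _ in range(generations):
        fits = [simulate_qalys(p) for p in pop]
        total = sum(fits)
        def pick():  # fitness-proportional selection
            r, acc = random.uniform(0, total), 0.0
            for p, f in zip(pop, fits):
                acc += f
                if acc >= r:
                    return p
            return pop[-1]
        nxt = []
        for _ in range(pop_size):
            a, b = pick(), pick()
            cut = random.randrange(len(AGES))
            child = a[:cut] + b[cut:]  # one-point crossover
            child = [g ^ (random.random() < mut_rate) for g in child]  # mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=simulate_qalys)
```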
NASA Astrophysics Data System (ADS)
Giovanna, Vessia; Luca, Pisano; Carmela, Vennari; Mauro, Rossi; Mario, Parise
2016-01-01
This paper proposes an automated method for the selection of the rainfall data (duration, D, and cumulated rainfall, E) responsible for shallow landslide initiation. The method mimics an expert identifying D and E from rainfall records through a manual procedure whose rules are applied according to his/her judgment. The comparison between the two methods is based on 300 D-E pairs drawn from temporal rainfall data series recorded in a 30-day time lag before landslide occurrence. Statistical tests, employed on D and E samples considered both as paired and as independent values to verify whether they belong to the same population, show that the automated procedure is able to replicate the pairs drawn by expert judgment. Furthermore, a criterion based on cumulative distribution functions (CDFs) is proposed to select, among the six pairs drawn by the coded procedure, the D-E pair most closely related to the expert one for tracing the empirical rainfall threshold line.
NASA Astrophysics Data System (ADS)
Aoki, Hirooki; Ichimura, Shiro; Fujiwara, Toyoki; Kiyooka, Satoru; Koshiji, Kohji; Tsuzuki, Keishi; Nakamura, Hidetoshi; Fujimoto, Hideo
We proposed a calculation method for the ventilation threshold using non-contact respiration measurement with dot-matrix pattern light projection during pedaling exercise. The validity and effectiveness of our proposed method are examined by simultaneous measurement with an expiration gas analyzer. The experimental results showed that a correlation existed between the quasi ventilation thresholds calculated by our proposed method and the ventilation thresholds calculated by the expiration gas analyzer. This result indicates the possibility of non-contact measurement of the ventilation threshold by the proposed method.
Optimal thresholds for the estimation of area rain-rate moments by the threshold method
NASA Technical Reports Server (NTRS)
Short, David A.; Shimizu, Kunio; Kedem, Benjamin
1993-01-01
Optimization of the threshold method, achieved by determination of the threshold that maximizes the correlation between an area-average rain-rate moment and the area coverage of rain rates exceeding the threshold, is demonstrated empirically and theoretically. Empirical results for a sequence of GATE radar snapshots show optimal thresholds of 5 and 27 mm/h for the first and second moments, respectively. Theoretical optimization of the threshold method by the maximum-likelihood approach of Kedem and Pavlopoulos (1991) predicts optimal thresholds near 5 and 26 mm/h for lognormally distributed rain rates with GATE-like parameters. The agreement between theory and observations suggests that the optimal threshold can be understood as arising due to sampling variations, from snapshot to snapshot, of a parent rain-rate distribution. Optimal thresholds for gamma and inverse Gaussian distributions are also derived and compared.
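The empirical optimization described above can be reproduced in a few lines: scan candidate thresholds and keep the one maximizing the correlation, across snapshots, between the area-average rain-rate moment and the fractional area exceeding the threshold. In this hedged sketch the snapshots are synthetic lognormal fields with illustrative parameters, not the GATE data.

```python
# Empirical threshold optimization: maximize, over candidate thresholds tau,
# the correlation across snapshots between the area-average rain-rate moment
# and the fractional coverage of rain rates exceeding tau.
import numpy as np

rng = np.random.default_rng(0)
snapshots = rng.lognormal(mean=0.5, sigma=1.0, size=(200, 1000))  # mm/h

def optimal_threshold(snapshots, moment=1, taus=np.linspace(0.5, 50, 100)):
    m = (snapshots ** moment).mean(axis=1)         # area-average moment
    best_tau, best_r = taus[0], -1.0
    for tau in taus:
        frac = (snapshots > tau).mean(axis=1)      # fractional coverage
        r = np.corrcoef(m, frac)[0, 1]
        if r > best_r:
            best_tau, best_r = tau, r
    return best_tau, best_r

print(optimal_threshold(snapshots, moment=1))
```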
DOE Office of Scientific and Technical Information (OSTI.GOV)
Honorio, J.; Goldstein, R.
We propose a simple, well-grounded classification technique suited for group classification on brain fMRI data sets that have high dimensionality, a small number of subjects, high noise levels, high subject variability, and imperfect registration, and that capture subtle cognitive effects. We propose threshold-split region as a new feature selection method and majority vote as the classification technique. Our method does not require a predefined set of regions of interest. We use averages across sessions, only one feature per experimental condition, a feature independence assumption, and simple classifiers. The seemingly counter-intuitive approach of using a simple design is supported by signal processing and statistical theory. Experimental results on two block design data sets that capture brain function under distinct monetary rewards for cocaine-addicted and control subjects show that our method exhibits increased generalization accuracy compared with commonly used feature selection and classification techniques.
Thermal detection thresholds in 5-year-old preterm born children; IQ does matter.
de Graaf, Joke; Valkenburg, Abraham J; Tibboel, Dick; van Dijk, Monique
2012-07-01
Experiencing pain as a newborn may have consequences for one's somatosensory perception later in life. Children's perception of cold and warm stimuli may be determined with the Thermal Sensory Analyzer (TSA) device by two different methods. This pilot study in 5-year-old children born preterm aimed at establishing whether the TSA method of limits, which is dependent on reaction time, and the method of levels, which is independent of reaction time, would yield different cold and warm detection thresholds. The second aim was to establish possible associations between intellectual ability and the detection thresholds obtained with either method. A convenience sample was drawn from the participants in an ongoing 5-year follow-up study of a randomized controlled trial on the effects of morphine during mechanical ventilation. Thresholds were assessed using both methods and statistically compared. Possible associations between the child's intelligence quotient (IQ) and threshold levels were analyzed. The method of levels yielded more sensitive thresholds than did the method of limits, i.e., mean (SD) cold detection thresholds: 30.3 (1.4) versus 28.4 (1.7) (Cohen's d=1.2, P=0.001) and warm detection thresholds: 33.9 (1.9) versus 35.6 (2.1) (Cohen's d=0.8, P=0.04). IQ was statistically significantly associated only with the detection thresholds obtained with the method of limits (cold: r=0.64, warm: r=-0.52). The TSA method of levels is to be preferred over the method of limits in 5-year-old preterm born children, as it establishes more sensitive detection thresholds and is independent of IQ. Copyright © 2011 Elsevier Ltd. All rights reserved.
Wang, Z; Gu, J; Jiang, X J
2017-04-20
Objective: To investigate the relationship between the auditory steady-state response (ASSR) threshold and the C-level and behavioral T-level of cochlear implants in prelingually deaf children. Method: One hundred and twelve children with Nucleus CI24R(CA) cochlear implants were divided into a residual hearing group and a no residual hearing group on the basis of preoperative ASSR results. The two groups were compared on C-level and behavioral T-level one year after operation. Result: There were differences in C-level and behavioral T-level between the residual hearing group and the no residual hearing group (P<0.05 or P<0.01). Conclusion: According to the preoperative ASSR results, we can estimate the effect of cochlear implantation, providing a reference for selecting the ear to be operated on and a reasonable expectation for physicians and the patients' parents. Copyright© by the Editorial Department of Journal of Clinical Otorhinolaryngology Head and Neck Surgery.
Optimal estimation of recurrence structures from time series
NASA Astrophysics Data System (ADS)
beim Graben, Peter; Sellers, Kristin K.; Fröhlich, Flavio; Hutt, Axel
2016-05-01
Recurrent temporal dynamics is a phenomenon observed frequently in high-dimensional complex systems, and its detection is a challenging task. Recurrence quantification analysis utilizing recurrence plots may extract such dynamics; however, it still encounters an unsolved pertinent problem: the optimal selection of distance thresholds for estimating the recurrence structure of dynamical systems. The present work proposes a stochastic Markov model for the recurrent dynamics that allows for the analytical derivation of a criterion for the optimal distance threshold. The goodness of fit is assessed by a utility function which assumes a local maximum for the threshold reflecting the optimal estimate of the system's recurrence structure. We validate our approach by means of the nonlinear Lorenz system and its linearized stochastic surrogates. The final application to neurophysiological time series obtained from anesthetized animals illustrates the method and reveals novel dynamic features of the underlying system. We propose the number of optimal recurrence domains as a statistic for classifying an animal's state of consciousness.
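The basic objects involved are easy to compute. The sketch below builds a Lorenz trajectory, forms recurrence matrices at several candidate distance thresholds, and reports the recurrence rate for each; the paper's Markov-model utility function is replaced here by this simple placeholder, so only the threshold scan is illustrated, not the authors' optimality criterion.

```python
# Recurrence matrices of a Lorenz trajectory at candidate distance
# thresholds. The recurrence rate stands in for the paper's utility
# function, which is not reproduced here.
import numpy as np

def lorenz(n=2000, dt=0.01, s=10.0, r=28.0, b=8.0 / 3.0):
    xyz = np.empty((n, 3))
    xyz[0] = (1.0, 1.0, 1.0)
    for i in range(n - 1):  # simple Euler integration
        x, y, z = xyz[i]
        xyz[i + 1] = xyz[i] + dt * np.array([s * (y - x), x * (r - z) - y, x * y - b * z])
    return xyz

def recurrence_rate(traj, eps):
    d = np.linalg.norm(traj[:, None, :] - traj[None, :, :], axis=-1)
    return (d < eps).mean()  # fraction of recurrent point pairs

traj = lorenz(n=500)
for eps in (0.5, 1.0, 2.0, 5.0):
    print(eps, recurrence_rate(traj, eps))
```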
NASA Astrophysics Data System (ADS)
Chang, Q.; Jiao, W.
2017-12-01
Phenology is a sensitive and critical feature of vegetation change that has been regarded as a good indicator in climate change studies. So far, a variety of remote sensing data sources and phenology extraction methods have been developed to study the spatial-temporal dynamics of vegetation phenology. However, the differences between vegetation phenology results caused by the various satellite datasets and phenology extraction methods are not clear, and the reliability of the different phenology results extracted from remote sensing datasets has not been verified and compared using ground observation data. Based on the three most popular remote sensing phenology extraction methods, this research calculated the start of the growing season (SOS) for each pixel in the Northern Hemisphere for two long time series satellite datasets: GIMMS NDVIg (SOSg) and GIMMS NDVI3g (SOS3g). The three methods used in this research are the maximum increase method, the dynamic threshold method, and the midpoint method. This study then used SOS calculated from NEE datasets (SOS_NEE) monitored by 48 eddy flux tower sites in the global flux network to validate the reliability of the six phenology results calculated from remote sensing datasets. Results showed that both SOSg and SOS3g extracted by the maximum increase method are not correlated with ground observed phenology metrics. SOSg and SOS3g extracted by the dynamic threshold method and the midpoint method are both significantly correlated with SOS_NEE. Compared with SOSg extracted by the dynamic threshold method, SOSg extracted by the midpoint method has a stronger correlation with SOS_NEE, and the same holds for SOS3g. Additionally, SOSg showed a stronger correlation with SOS_NEE than SOS3g extracted by the same method. SOS extracted by the midpoint method from the GIMMS NDVIg dataset appears to be the most reliable result when validated against SOS_NEE. These results can be used as a reference for data and method selection in future phenology studies.
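As an illustration of one of the methods compared, the dynamic threshold approach defines SOS as the first date at which the NDVI, rescaled by its seasonal amplitude, crosses a fixed fraction. The sketch below uses a 0.5 fraction and a synthetic seasonal curve as assumptions.

```python
# Hedged sketch of the dynamic threshold method: SOS is the first day of
# year at which the amplitude-rescaled NDVI rises above a chosen fraction.
import numpy as np

def sos_dynamic_threshold(ndvi: np.ndarray, frac: float = 0.5) -> int:
    # ndvi: one value per day of year (or per composite period)
    ratio = (ndvi - ndvi.min()) / (ndvi.max() - ndvi.min())
    above = np.nonzero(ratio >= frac)[0]
    return int(above[0])  # index of the first crossing

doy = np.arange(365)
ndvi = 0.2 + 0.5 * np.exp(-((doy - 200) / 60.0) ** 2)  # synthetic seasonal curve
print(sos_dynamic_threshold(ndvi))
```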
Rosecrans, Celia Z.; Nolan, Bernard T.; Gronberg, JoAnn M.
2018-01-31
The prediction grids for selected redox constituents—dissolved oxygen and dissolved manganese—are intended to provide an understanding of groundwater-quality conditions at domestic and public-supply drinking water depths. The chemical quality of groundwater and the fate of many contaminants are influenced by redox processes in all aquifers, and understanding redox conditions horizontally and vertically is critical in evaluating groundwater quality. The redox condition of groundwater—whether oxic (oxygen present) or anoxic (oxygen absent)—strongly influences the oxidation state of a chemical in groundwater. The anoxic dissolved oxygen thresholds of <0.5 milligram per liter (mg/L), <1.0 mg/L, and <2.0 mg/L were selected to apply broadly to regional groundwater-quality investigations. Although the presence of dissolved manganese in groundwater indicates strongly reducing (anoxic) conditions, manganese is also considered a "nuisance" constituent that makes drinking water undesirable with respect to taste, staining, or scaling. Three dissolved manganese thresholds, <50 micrograms per liter (µg/L), <150 µg/L, and <300 µg/L, were selected to create predicted probabilities of exceedances in depth zones used by domestic and public-supply water wells. The 50 µg/L threshold represents the secondary maximum contaminant level (SMCL) benchmark for manganese (U.S. Environmental Protection Agency, 2017; California Division of Drinking Water, 2014), whereas the 300 µg/L threshold represents the U.S. Geological Survey (USGS) health-based screening level (HBSL) benchmark, used to put measured concentrations of drinking-water contaminants into a human-health context (Toccalino and others, 2014). The 150 µg/L threshold represents one-half the USGS HBSL. The resultant dissolved oxygen and dissolved manganese prediction grids may be of interest to water-resource managers, water-quality researchers, and groundwater modelers concerned with the occurrence of natural and anthropogenic contaminants related to anoxic conditions. Prediction grids for selected redox constituents and thresholds were created by the USGS National Water-Quality Assessment (NAWQA) modeling and mapping team.
Dexter, Franklin; Epstein, Richard H; Ledolter, Johannes; Dasovich, Susan M; Herman, Jay H; Maga, Joni M; Schwenk, Eric S
2018-05-01
Hospitals review allogeneic red blood cell (RBC) transfusions for appropriateness. Audit criteria have been published that apply to 5 common procedures. We expanded on this work to study the management decision of selecting which cases involving transfusion of at least 1 RBC unit to audit (review) among all surgical procedures, including those previously studied. This retrospective, observational study included 400,000 cases among 1891 different procedures over an 11-year period. There were 12,616 cases with RBC transfusion. We studied the proportions of cases that would be audited based on criteria of nadir hemoglobin (Hb) greater than the hospital's selected transfusion threshold, or absent Hb or missing estimated blood loss (EBL) among procedures with median EBL <500 mL. This threshold EBL was selected because it is approximately the volume removed during the donation of a single unit of whole blood at a blood bank. Missing EBL is important to the audit decision for cases in which the procedures' median EBL is <500 mL because, without an indication of the extent of bleeding, there are insufficient data to assume that there was sufficient blood loss to justify the transfusion. Most cases (>50%) that would be audited and most cases (>50%) with transfusion were among procedures with median EBL <500 mL (P < .0001). Among cases with transfusion and nadir Hb >9 g/dL, the procedure's median EBL was <500 mL for 3.0 times more cases than for procedures having a median EBL ≥500 mL. A greater percentage of cases would be recommended for audit based on missing values for Hb and/or EBL than based on exceeding the Hb threshold among cases of procedures with median EBL ≥500 mL (P < .0001). There were 3.7 times as many cases with transfusion that had missing values for Hb and/or EBL than had a nadir Hb >9 g/dL and median EBL for the procedure ≥500 mL. An automated process to select cases for audit of intraoperative transfusion of RBC needs to consider the median EBL of the procedure, whether the nadir Hb is below the hospital's Hb transfusion threshold for surgical cases, and the absence of either a Hb or entry of the EBL for the case. This conclusion applies to all surgical cases and procedures.
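The selection rule described above is straightforward to automate. The sketch below encodes it with hypothetical field names: audit when Hb or EBL is missing for a procedure whose median EBL is below 500 mL, or when the nadir Hb exceeds the hospital's transfusion threshold (9 g/dL is used as an example value).

```python
# Sketch of the audit-selection rule, with hypothetical field names.
HB_THRESHOLD_G_DL = 9.0  # hospital Hb transfusion threshold (example value)
LOW_EBL_ML = 500         # median-EBL cutoff from the study

def should_audit(case: dict, median_ebl_by_proc: dict) -> bool:
    median_ebl = median_ebl_by_proc.get(case["procedure"], 0)
    missing = case.get("nadir_hb") is None or case.get("ebl") is None
    # Missing Hb/EBL matters when the procedure typically loses little blood.
    if median_ebl < LOW_EBL_ML and missing:
        return True
    # Transfusion despite a nadir Hb above the threshold warrants review.
    if case.get("nadir_hb") is not None and case["nadir_hb"] > HB_THRESHOLD_G_DL:
        return True
    return False
```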
Improvements to the modal holographic wavefront sensor.
Kong, Fanpeng; Lambert, Andrew
2016-05-01
The Zernike coefficients of a light wavefront can be calculated directly by intensity ratios of pairs of spots in the reconstructed image plane of a holographic wavefront sensor (HWFS). However, the response curve of the HWFS heavily depends on the position and size of the detector for each spot and the distortions introduced by other aberrations. In this paper, we propose a method to measure the intensity of each spot by setting a threshold to select effective pixels and using the weighted average intensity within a selected window. Compared with using the integral intensity over a small window for each spot, we show through a numerical simulation that the proposed method reduces the dependency of the HWFS's response curve on the selection of the detector window. We also recorded a HWFS on a holographic plate using a blue laser and demonstrated its capability to detect the strength of encoded Zernike terms in an aberrated beam.
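A hedged sketch of the proposed intensity measure follows: within a window around a spot, pixels above a threshold are kept as effective pixels and their intensity-weighted average is reported. The window geometry and the threshold rule (a fraction of the window maximum) are assumptions for illustration.

```python
# Spot intensity via threshold-selected "effective" pixels and an
# intensity-weighted average within a selected window (assumed geometry).
import numpy as np

def spot_intensity(img: np.ndarray, row: int, col: int, half: int = 5,
                   thresh_frac: float = 0.2) -> float:
    win = img[row - half:row + half + 1, col - half:col + half + 1].astype(float)
    thresh = thresh_frac * win.max()        # threshold tied to window maximum
    eff = win[win > thresh]                 # effective pixels only
    return float((eff * eff).sum() / eff.sum())  # intensity-weighted average
```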
NASA Astrophysics Data System (ADS)
Zin, Wan Zawiah Wan; Shinyie, Wendy Ling; Jemain, Abdul Aziz
2015-02-01
In this study, two series of data for extreme rainfall events are generated based on the annual maximum and partial duration methods, derived from 102 rain-gauge stations in Peninsular Malaysia from 1982 to 2012. To determine the optimal threshold for each station, several requirements must be satisfied, and the adapted Hill estimator is employed for this purpose. A semi-parametric bootstrap is then used to estimate the mean square error (MSE) of the estimator at each threshold, and the optimal threshold is selected based on the smallest MSE. The mean annual frequency is also checked to ensure that it lies in the range of one to five, and the resulting data are de-clustered to ensure independence. The two data series are then fitted to the generalized extreme value and generalized Pareto distributions for the annual maximum and partial duration series, respectively. The parameter estimation methods used are the maximum likelihood and the L-moment methods. Two goodness-of-fit tests are then used to evaluate the best-fitted distribution. The results showed that the partial duration series with the generalized Pareto distribution and maximum likelihood parameter estimation provides the best representation of extreme rainfall events in Peninsular Malaysia for the majority of the stations studied. Based on these findings, several return values are also derived and spatial maps are constructed to identify the distribution characteristics of extreme rainfall in Peninsular Malaysia.
Wang, Yuliang; Zhang, Zaicheng; Wang, Huimin; Bi, Shusheng
2015-01-01
Cell image segmentation plays a central role in numerous biology studies and clinical applications. As a result, the development of cell image segmentation algorithms with high robustness and accuracy is attracting more and more attention. In this study, an automated cell image segmentation algorithm is developed to improve cell boundary detection and segmentation of clustered cells for all cells in the field of view in negative phase contrast images. A new method which combines thresholding and an edge-based active contour method is proposed to optimize cell boundary detection. In order to segment clustered cells, the geographic peaks of cell light intensity are utilized to detect the number and locations of the clustered cells. In this paper, the working principles of the algorithms are described. The influence of parameters in cell boundary detection and the selection of the threshold value on the final segmentation results are investigated. Finally, the proposed algorithm is applied to negative phase contrast images from different experiments, and the performance of the proposed method is evaluated. Results show that the proposed method can achieve optimized cell boundary detection and highly accurate segmentation for clustered cells. PMID:26066315
Beam, Elena; Germer, Jeffrey J; Lahr, Brian; Yao, Joseph D C; Limper, Andrew Harold; Binnicker, Matthew J; Razonable, Raymund R
2018-01-01
Cytomegalovirus (CMV) pneumonia causes major morbidity and mortality. Its diagnosis requires demonstration of viral cytopathic changes in tissue, entailing the risks of lung biopsy. This study aimed to determine CMV viral load (VL) thresholds in bronchoalveolar lavage fluid (BALF) for the diagnosis of CMV pneumonia in immunocompromised patients. CMV VL in BALF was studied in 17 patients (83% transplant recipients) and 21 control subjects with and without CMV pneumonia, respectively, using an FDA-approved PCR assay (Cobas® AmpliPrep/Cobas TaqMan® CMV Test, Roche Molecular Systems, Inc.) calibrated to the WHO International Standard for CMV DNA (NIBSC: 09/162). Receiver operating characteristic curve analysis produced a BALF CMV VL threshold of 34,800 IU/mL with 91.7% sensitivity and 100.0% specificity for the diagnosis of possible, probable, and proven CMV pneumonia in transplant patients, while a threshold of 656,000 IU/mL yielded 100% sensitivity and specificity among biopsy-proven cases. For all immunocompromised patients, a VL threshold of 274 IU/mL was selected. VL thresholds were also normalized to BALF cell count, yielding a threshold of 0.32 IU/10⁶ cells with 91.7% sensitivity and 90.5% specificity for possible, probable, and proven CMV pneumonia in transplant recipients. Monitoring CMV VL in BALF may be a less invasive method for diagnosing CMV pneumonia in immunocompromised patients. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Henry, Kenneth S.; Kale, Sushrut; Scheidt, Ryan E.; Heinz, Michael G.
2011-01-01
Non-invasive auditory brainstem responses (ABRs) are commonly used to assess cochlear pathology in both clinical and research environments. In the current study, we evaluated the relationship between ABR characteristics and more direct measures of cochlear function. We recorded ABRs and auditory nerve (AN) single-unit responses in seven chinchillas with noise induced hearing loss. ABRs were recorded for 1–8 kHz tone burst stimuli both before and several weeks after four hours of exposure to a 115 dB SPL, 50 Hz band of noise with a center frequency of 2 kHz. Shifts in ABR characteristics (threshold, wave I amplitude, and wave I latency) following hearing loss were compared to AN-fiber tuning curve properties (threshold and frequency selectivity) in the same animals. As expected, noise exposure generally resulted in an increase in ABR threshold and decrease in wave I amplitude at equal SPL. Wave I amplitude at equal sensation level (SL), however, was similar before and after noise exposure. In addition, noise exposure resulted in decreases in ABR wave I latency at equal SL and, to a lesser extent, at equal SPL. The shifts in ABR characteristics were significantly related to AN-fiber tuning curve properties in the same animal at the same frequency. Larger shifts in ABR thresholds and ABR wave I amplitude at equal SPL were associated with greater AN threshold elevation. Larger reductions in ABR wave I latency at equal SL, on the other hand, were associated with greater loss of AN frequency selectivity. This result is consistent with linear systems theory, which predicts shorter time delays for broader peripheral frequency tuning. Taken together with other studies, our results affirm that ABR thresholds and wave I amplitude provide useful estimates of cochlear sensitivity. Furthermore, comparisons of ABR wave I latency to normative data at the same SL may prove useful for detecting and characterizing loss of cochlear frequency selectivity. PMID:21699970
Kumagai, Naoki H; Yamano, Hiroya
2018-01-01
Coral reefs are one of the world's most threatened ecosystems, with global and local stressors contributing to their decline. Excessive sea-surface temperatures (SSTs) can cause coral bleaching, resulting in coral death and decreases in coral cover. A SST threshold of 1 °C over the climatological maximum is widely used to predict coral bleaching. In this study, we refined thermal indices predicting coral bleaching at high-spatial resolution (1 km) by statistically optimizing thermal thresholds, as well as considering other environmental influences on bleaching such as ultraviolet (UV) radiation, water turbidity, and cooling effects. We used a coral bleaching dataset derived from the web-based monitoring system Sango Map Project, at scales appropriate for the local and regional conservation of Japanese coral reefs. We recorded coral bleaching events in the years 2004-2016 in Japan. We revealed the influence of multiple factors on the ability to predict coral bleaching, including selection of thermal indices, statistical optimization of thermal thresholds, quantification of multiple environmental influences, and use of multiple modeling methods (generalized linear models and random forests). After optimization, differences in predictive ability among thermal indices were negligible. Thermal index, UV radiation, water turbidity, and cooling effects were important predictors of the occurrence of coral bleaching. Predictions based on the best model revealed that coral reefs in Japan have experienced recent and widespread bleaching. A practical method to reduce bleaching frequency by screening UV radiation was also demonstrated in this paper.
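The statistical optimization of a thermal threshold can be sketched as a scan over candidate SST-anomaly thresholds, keeping the one that maximizes a skill score against observed bleaching. The metric (true skill statistic) and the synthetic data below are assumptions; the study additionally models UV radiation, turbidity, and cooling with GLMs and random forests.

```python
# Statistically optimizing a thermal threshold against bleaching records:
# scan candidate SST-anomaly thresholds and keep the one maximizing the
# true skill statistic (sensitivity minus false alarm rate).
import numpy as np

rng = np.random.default_rng(1)
anom = rng.normal(0.8, 0.7, size=500)               # SST anomaly above climatology
bleached = (anom + rng.normal(0, 0.5, 500)) > 1.0   # synthetic observations

def best_threshold(anom, bleached, taus=np.linspace(0.0, 2.0, 41)):
    best_tau, best_tss = taus[0], -2.0
    for tau in taus:
        pred = anom > tau
        tp = np.sum(pred & bleached); fn = np.sum(~pred & bleached)
        fp = np.sum(pred & ~bleached); tn = np.sum(~pred & ~bleached)
        tss = tp / (tp + fn) - fp / (fp + tn)
        if tss > best_tss:
            best_tau, best_tss = tau, tss
    return best_tau, best_tss

print(best_threshold(anom, bleached))
```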
Tzaneva, L
1996-09-01
The discomfort threshold problem is not yet clear from the audiological point of view, and its significance for work physiology and hygiene has not been sufficiently clarified. This paper discusses the results of a study of the discomfort threshold in 385 operators from the State Company "Kremikovtzi", divided into 4 groups (3 groups according to length of service and one control group). The most prominent changes were found in operators with a tonal auditory threshold increased up to 45 and over 50 dB, with high statistical confidence. The observed changes fall into 3 groups: 1. increased tonal auditory threshold (up to 30 dB) without a decrease of the discomfort threshold; 2. decreased discomfort threshold (by about 15-20 dB) with an increased tonal auditory threshold (up to 45 dB); 3. decreased discomfort threshold with a strongly increased (over 50 dB) tonal auditory threshold. The auditory range of the operators belonging to groups III and IV (with the longest length of service) is narrowed, and distorted for the latter. This pathophysiological phenomenon can be explained by an enhanced effect of sound irritation and the presence of a recruitment phenomenon with possible involvement of the central part of the auditory analyzer. It is concluded that the discomfort threshold is a sensitive indicator of the state of the individual norms for speech-sound-noise discomfort. Comparison of the discomfort threshold with the hygienic standards and the noise levels at each particular working place can be used as a criterion for professional selection for work in conditions of masking noise and its tolerance with respect to reaching the individual discomfort level, depending on the intensity of the speech-sound-noise signals at a particular working place.
Automatic rice crop height measurement using a field server and digital image processing.
Sritarapipat, Tanakorn; Rakwatin, Preesan; Kasetkasem, Teerasit
2014-01-07
Rice crop height is an important agronomic trait linked to plant type and yield potential. This research developed an automatic image processing technique to detect rice crop height based on images taken by a digital camera attached to a field server. The camera acquires rice paddy images daily at a consistent time of day. The images include the rice plants and a marker bar used to provide a height reference. The rice crop height can be indirectly measured from the images by measuring the height of the marker bar compared to the height of the initial marker bar. Four digital image processing steps are employed to automatically measure the rice crop height: band selection, filtering, thresholding, and height measurement. Band selection is used to remove redundant features. Filtering extracts significant features of the marker bar. The thresholding method is applied to separate objects and boundaries of the marker bar versus other areas. The marker bar is detected and compared with the initial marker bar to measure the rice crop height. Our experiment used a field server with a digital camera to continuously monitor a rice field located in Suphanburi Province, Thailand. The experimental results show that the proposed method measures rice crop height effectively, with no human intervention required.
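The height-measurement step can be sketched as follows: after band selection and filtering, the marker bar is isolated by thresholding, its visible height in pixels is measured, and crop height is inferred from the shortening of the visible bar relative to the initial image. The fixed threshold and single-channel input are assumptions.

```python
# Hedged sketch of the marker-bar measurement: threshold the bar region,
# measure its visible height in pixels, and convert the shortening of the
# bar (occluded from below by the growing crop) into a crop height.
import numpy as np

def visible_bar_height(gray: np.ndarray, bar_cols, thresh=200) -> int:
    # gray: 2-D grayscale image; bar_cols: (start, stop) columns of the bar
    bar = gray[:, bar_cols[0]:bar_cols[1]] > thresh
    rows = np.nonzero(bar.any(axis=1))[0]
    return int(rows[-1] - rows[0] + 1) if rows.size else 0

def crop_height_mm(initial_px: int, current_px: int, bar_length_mm: float) -> float:
    # shorter visible bar -> taller crop in front of it
    return bar_length_mm * (initial_px - current_px) / initial_px
```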
Alabbadi, Ibrahim; Crealey, Grainne; Scott, Michael; Baird, Simon; Trouton, Tom; Mairs, Jill; McElnay, James
2006-01-01
System of Objectified Judgement Analysis (SOJA) is a structured approach to the selection of drugs for formulary inclusion. However, while SOJA is a very important advance in drug selection for formulary purposes, it is hospital based and can only be applied to one indication at a time. In SOJA, cost has been given a primary role in the selection process, as it has been included as a selection criterion from the start. Cost may therefore drive the selection of a particular drug product at the expense of other basic criteria such as safety or efficacy. The aims of this study were to use a modified SOJA approach in the selection of ACE inhibitors (ACEIs) for use in a joint formulary that bridges primary and secondary care within a health board in Northern Ireland, and to investigate the potential impact of the joint formulary on prescribing costs of ACEIs in that health board. The modified SOJA approach involved four phases in sequence: an evidence-based pharmacotherapeutic evaluation of all available ACEI drug entities, a separate safety/risk assessment analysis of products containing agents that exceeded the pharmacotherapeutic threshold, a budget-impact analysis and, finally, the selection of product lines. A comprehensive literature review and expert panel judgement informed the selection of criteria (and their relative weighting) for the pharmacotherapeutic evaluation. The resultant criteria/scoring system was circulated (in questionnaire format) to prescribers and stakeholders for comment. Based on statistical analysis of the latter survey results, the final scoring system was developed. Drug entities that exceeded the evidence threshold were sequentially entered into the second and third phases of the process. Five of the 11 drug entities currently available in the UK exceeded the evidence threshold, and 22 of 26 submitted product lines containing these drug entities satisfied the safety/risk assessment criteria. Three product lines, each containing a different drug entity, were selected for formulary inclusion after the budget impact analysis was performed. The estimated potential annual cost savings for ACEIs (based on estimated annual usage in defined daily doses) for this particular health board was 42%. The modified SOJA approach has a significant contribution to make in containing the costs of ACEIs. Applying modified SOJA as a practical method for all indications will allow the development of a unified formulary that bridges secondary and primary care.
Design and comparison of laser windows for high-power lasers
NASA Astrophysics Data System (ADS)
Niu, Yanxiong; Liu, Wenwen; Liu, Haixia; Wang, Caili; Niu, Haisha; Man, Da
2014-11-01
High-power laser systems are increasingly widely used in industry and military applications. It is necessary to develop high-power laser systems that can operate over long periods of time without appreciable degradation in performance. When a high-energy laser beam transmits through a laser window, permanent damage may be caused to the window because of energy absorption by the window material. Therefore, when designing a high-power laser system, a suitable window material must be selected and the laser damage threshold of the window must be known. In this paper, a thermal analysis model of a high-power laser window is established, and the relationship between the laser intensity and the thermal-stress field distribution is studied by deriving formulas using the integral-transform method. The influence of window radius, thickness, and laser intensity on the temperature and stress field distributions is analyzed. Then, the performance of K9 glass and fused silica glass is compared, and the laser-induced damage mechanism is analyzed. Finally, the damage thresholds of the laser windows are calculated. The results show that, compared with K9 glass, fused silica glass has a higher damage threshold due to its good thermodynamic properties. The presented theoretical analysis and simulation results are helpful for the design and selection of high-power laser windows.
Anaerobic Threshold and Salivary α-amylase during Incremental Exercise.
Akizuki, Kazunori; Yazaki, Syouichirou; Echizenya, Yuki; Ohashi, Yukari
2014-07-01
[Purpose] The purpose of this study was to clarify the validity of salivary α-amylase as a method of quickly estimating anaerobic threshold and to establish the relationship between salivary α-amylase and double-product breakpoint in order to create a way to adjust exercise intensity to a safe and effective range. [Subjects and Methods] Eleven healthy young adults performed an incremental exercise test using a cycle ergometer. During the incremental exercise test, oxygen consumption, carbon dioxide production, and ventilatory equivalent were measured using a breath-by-breath gas analyzer. Systolic blood pressure and heart rate were measured to calculate the double product, from which double-product breakpoint was determined. Salivary α-amylase was measured to calculate the salivary threshold. [Results] One-way ANOVA revealed no significant differences among workloads at the anaerobic threshold, double-product breakpoint, and salivary threshold. Significant correlations were found between anaerobic threshold and salivary threshold and between anaerobic threshold and double-product breakpoint. [Conclusion] As a method for estimating anaerobic threshold, salivary threshold was as good as or better than determination of double-product breakpoint because the correlation between anaerobic threshold and salivary threshold was higher than the correlation between anaerobic threshold and double-product breakpoint. Therefore, salivary threshold is a useful index of anaerobic threshold during an incremental workload.
An adaptive band selection method for dimension reduction of hyper-spectral remote sensing image
NASA Astrophysics Data System (ADS)
Yu, Zhijie; Yu, Hui; Wang, Chen-sheng
2014-11-01
Hyper-spectral remote sensing data can be acquired by imaging the same area at multiple wavelengths, and a dataset normally consists of hundreds of band images. Hyper-spectral images provide not only spatial information but also high-resolution spectral information, and they have been widely used in environment monitoring, mineral investigation, and military reconnaissance. However, because of the correspondingly large data volume, it is very difficult to transmit and store hyper-spectral images, and dimension reduction techniques are desired to resolve this problem. Because of the high correlation and high redundancy among hyper-spectral bands, applying dimension reduction to compress the data volume is feasible. This paper proposes a novel band-selection-based dimension reduction method which can adaptively select the bands that contain more information and detail. The proposed method is based on principal component analysis (PCA) and computes an index for every band. The indexes obtained are then ranked in order of magnitude from large to small. Based on a threshold, the system can adaptively and reasonably select bands. The proposed method can overcome the shortcomings of transform-based dimension reduction methods and prevent the original spectral information from being lost. The performance of the proposed method has been validated in several experiments. The experimental results show that the proposed algorithm can reduce the dimension of hyper-spectral images with little information loss by adaptively selecting band images.
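A hedged sketch of the band-selection idea: compute a per-band index from the PCA loadings (here, the energy of each band across the leading components weighted by explained variance), then adaptively keep bands whose index exceeds a threshold tied to the largest index. The exact index definition is an assumption, not the paper's formula.

```python
# PCA-based band selection: rank bands by a loading-derived index and
# keep those above an adaptive threshold. The index is an assumption.
import numpy as np

def select_bands(cube: np.ndarray, n_comp: int = 5, thresh_frac: float = 0.5):
    # cube: (rows, cols, bands) hyperspectral image
    X = cube.reshape(-1, cube.shape[-1]).astype(float)
    X -= X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)
    order = np.argsort(vals)[::-1][:n_comp]         # leading components
    scores = (vecs[:, order] ** 2) @ vals[order]    # per-band index
    keep = scores >= thresh_frac * scores.max()     # adaptive threshold
    return np.nonzero(keep)[0]

cube = np.random.rand(32, 32, 120)  # synthetic stand-in for a real scene
print(select_bands(cube))
```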
ERIC Educational Resources Information Center
Wang, Wen-Chung; Liu, Chen-Wei; Wu, Shiu-Lien
2013-01-01
The random-threshold generalized unfolding model (RTGUM) was developed by treating the thresholds in the generalized unfolding model as random effects rather than fixed effects to account for the subjective nature of the selection of categories in Likert items. The parameters of the new model can be estimated with the JAGS (Just Another Gibbs…
Naranjo, Steven E; Ellsworth, Peter C
2009-01-01
Fifty years ago, Stern, Smith, van den Bosch and Hagen outlined a simple but sophisticated idea of pest control predicated on the complementary action of chemical and biological control. This integrated control concept has since been a driving force and conceptual foundation for all integrated pest management (IPM) programs. The four basic elements include thresholds for determining the need for control, sampling to determine critical densities, understanding and conserving the biological control capacity in the system and the use of selective insecticides or selective application methods, when needed, to augment biological control. Here we detail the development, evolution, validation and implementation of an integrated control (IC) program for whitefly, Bemisia tabaci (Genn.), in the Arizona cotton system that provides a rare example of the vision of Stern and his colleagues. Economic thresholds derived from research-based economic injury levels were developed and integrated with rapid and accurate sampling plans into validated decision tools widely adopted by consultants and growers. Extensive research that measured the interplay among pest population dynamics, biological control by indigenous natural enemies and selective insecticides using community ordination methods, predator:prey ratios, predator exclusion and demography validated the critical complementary roles played by chemical and biological control. The term ‘bioresidual’ was coined to describe the extended environmental resistance from biological control and other forces possible when selective insecticides are deployed. The tangible benefits have been a 70% reduction in foliar insecticides, a >$200 million saving in control costs and yield, along with enhanced utilization of ecosystem services over the last 14 years. Published in 2009 by John Wiley & Sons, Ltd. PMID:19834884
Modeling jointly low, moderate, and heavy rainfall intensities without a threshold selection
NASA Astrophysics Data System (ADS)
Naveau, Philippe; Huser, Raphael; Ribereau, Pierre; Hannart, Alexis
2016-04-01
In statistics, extreme events are often defined as excesses above a given large threshold. This definition allows hydrologists and flood planners to apply Extreme-Value Theory (EVT) to their time series of interest. Even in the stationary univariate context, this approach has at least two main drawbacks. First, working with excesses implies that a lot of observations (those below the chosen threshold) are completely disregarded. The range of precipitation is artificially chopped into two pieces, namely large intensities and the rest, which necessarily imposes different statistical models for each piece. Second, this strategy raises a nontrivial and very practical difficulty: how to choose the optimal threshold which correctly discriminates between low and heavy rainfall intensities. To address these issues, we propose a statistical model in which EVT results apply not only to heavy but also to low precipitation amounts (zeros excluded). Our model is in compliance with EVT on both ends of the spectrum and allows a smooth transition between the two tails, while keeping a low number of parameters. In terms of inference, we have implemented and tested two classical methods of estimation: likelihood maximization and probability weighted moments. Last but not least, there is no need to choose a threshold to define low and high excesses. The performance and flexibility of this approach are illustrated on simulated data and hourly precipitation recorded in Lyon, France.
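One simple way to write down a model of this kind, under our reading of the construction and with notation that should be taken as an assumption rather than the paper's exact formulation, is to compose a generalized Pareto upper tail with a power-law transition function:

```latex
% A smooth-transition rainfall cdf (notation assumed): a GPD upper tail
% composed with the transition function G(u) = u^kappa.
\[
  F(x) = \bigl[ H_{\xi}\!\left(\tfrac{x}{\sigma}\right) \bigr]^{\kappa},
  \qquad
  H_{\xi}(z) = 1 - (1 + \xi z)^{-1/\xi}, \quad z > 0 .
\]
% Since H_xi(z) ~ z as z -> 0, the lower tail behaves like (x/sigma)^kappa,
% while for large x the GPD tail dominates; EVT then holds on both ends of
% the spectrum without any threshold separating the two regimes.
```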
Cochlear neuropathy and the coding of supra-threshold sound.
Bharadwaj, Hari M; Verhulst, Sarah; Shaheen, Luke; Liberman, M Charles; Shinn-Cunningham, Barbara G
2014-01-01
Many listeners with hearing thresholds within the clinically normal range nonetheless complain of difficulty hearing in everyday settings and understanding speech in noise. Converging evidence from human and animal studies points to one potential source of such difficulties: differences in the fidelity with which supra-threshold sound is encoded in the early portions of the auditory pathway. Measures of auditory subcortical steady-state responses (SSSRs) in humans and animals support the idea that the temporal precision of the early auditory representation can be poor even when hearing thresholds are normal. In humans with normal hearing thresholds (NHTs), paradigms that require listeners to make use of the detailed spectro-temporal structure of supra-threshold sound, such as selective attention and discrimination of frequency modulation (FM), reveal individual differences that correlate with subcortical temporal coding precision. Animal studies show that noise exposure and aging can cause a loss of a large percentage of auditory nerve fibers (ANFs) without any significant change in measured audiograms. Here, we argue that cochlear neuropathy may reduce encoding precision of supra-threshold sound, and that this manifests both behaviorally and in SSSRs in humans. Furthermore, recent studies suggest that noise-induced neuropathy may be selective for higher-threshold, lower-spontaneous-rate nerve fibers. Based on our hypothesis, we suggest some approaches that may yield particularly sensitive, objective measures of supra-threshold coding deficits that arise due to neuropathy. Finally, we comment on the potential clinical significance of these ideas and identify areas for future investigation.
Proposal and Implementation of a Robust Sensing Method for DVB-T Signal
NASA Astrophysics Data System (ADS)
Song, Chunyi; Rahman, Mohammad Azizur; Harada, Hiroshi
This paper proposes a sensing method for TV signals of the DVB-T standard to realize effective TV White Space (TVWS) communication. In the TVWS technology trial organized by the Infocomm Development Authority (iDA) of Singapore, with regard to sensing level and sensing time, detection of DVB-T signals at a level of -120dBm over an 8MHz channel with a sensing time below 1 second is required. To fulfill such a strict sensing requirement, we propose a smart sensing method which combines feature detection and energy detection (CFED) and is also characterized by dynamic threshold selection (DTS) based on a threshold table to improve sensing robustness to noise uncertainty. The DTS-based CFED (DTS-CFED) is evaluated by computer simulations and is also implemented in a hardware sensing prototype. The results show that the DTS-CFED achieves a detection probability above 0.9 for a target false alarm probability of 0.1 for DVB-T signals at a level of -120dBm over an 8MHz channel with a sensing time of 0.1 second.
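The energy-detection half of the combined detector, with the dynamic threshold drawn from a lookup table indexed by the estimated noise level, might look like the following sketch; the table values are placeholders, not the paper's calibrated thresholds.

```python
# Energy detection with dynamic threshold selection from a lookup table
# indexed by estimated noise level (placeholder values, not the paper's).
import numpy as np

THRESHOLD_TABLE = {  # estimated noise power (dBm) -> detection threshold
    -100: 3.2, -105: 3.6, -110: 4.1, -115: 4.8, -120: 5.6,
}

def energy_detect(samples: np.ndarray, noise_dbm_est: float) -> bool:
    key = min(THRESHOLD_TABLE, key=lambda k: abs(k - noise_dbm_est))
    threshold = THRESHOLD_TABLE[key]          # dynamic threshold selection
    noise_power = 10 ** (noise_dbm_est / 10)  # mW
    test_stat = np.mean(np.abs(samples) ** 2) / noise_power
    return test_stat > threshold
```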
Ding, Wen-jie; Chen, Wen-he; Deng, Ming-jia; Luo, Hui; Li, Lin; Liu, Jun-xin
2016-02-15
Co-processing of sewage sludge in cement kilns can realize harmless treatment, quantity reduction, stabilization, and reutilization of sludge. The moisture content should be reduced to below 30% to meet the requirements of combustion. Thermal drying is an effective way to desiccate sludge. Odors and volatile organic compounds are generated and released during the sludge drying process, which can lead to odor pollution. The main odor pollutants were selected by a multi-index integrated assessment method. Concentration, olfactory threshold, threshold limit value, smell security level, and saturated vapor pressure were considered as indexes, based on the related regulations in China and foreign countries. Taking pollution potential as the evaluation target, and risk index and odor emission intensity as evaluation indexes, a rated evaluation model of the odor pollution potential of the pollutants was built according to the Weber-Fechner law. The aim of the present study is to establish a rating evaluation method for odor pollution potential suitable for the direct drying process of sludge.
Automatic threshold selection for multi-class open set recognition
NASA Astrophysics Data System (ADS)
Scherreik, Matthew; Rigling, Brian
2017-05-01
Multi-class open set recognition is the problem of supervised classification with additional unknown classes encountered after a model has been trained. An open set classifier often has two core components. The first component is a base classifier which estimates the most likely class of a given example. The second component consists of open set logic which estimates whether the example is truly a member of the candidate class. Such a system is operated in a feed-forward fashion. That is, a candidate label is first estimated by the base classifier, and the true membership of the example in the candidate class is estimated afterward. Previous works have developed an iterative threshold selection algorithm for rejecting examples from classes which were not present at training time. In those studies, a Platt-calibrated SVM was used as the base classifier, and the thresholds were applied to class posterior probabilities for rejection. In this work, we investigate the effectiveness of other base classifiers when paired with the threshold selection algorithm and compare their performance with the original SVM solution.
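The feed-forward operation described above is compact to express. The sketch below assumes a base classifier exposing a scikit-learn-style predict_proba and a vector of per-class thresholds already produced by the selection algorithm; examples whose candidate-class confidence falls below the class threshold are rejected as unknown.

```python
# Feed-forward open-set prediction: the base classifier proposes a
# candidate label, then per-class thresholds on posterior probabilities
# decide whether to accept it or reject the example as unknown.
import numpy as np

UNKNOWN = -1

def open_set_predict(base_clf, thresholds: np.ndarray, X: np.ndarray):
    probs = base_clf.predict_proba(X)           # (n_examples, n_classes)
    cand = probs.argmax(axis=1)                 # candidate label per example
    conf = probs[np.arange(len(X)), cand]       # posterior of the candidate
    return np.where(conf >= thresholds[cand], cand, UNKNOWN)
```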
An integrative perspective of the anaerobic threshold.
Sales, Marcelo Magalhães; Sousa, Caio Victor; da Silva Aguiar, Samuel; Knechtle, Beat; Nikolaidis, Pantelis Theodoros; Alves, Polissandro Mortoza; Simões, Herbert Gustavo
2017-12-14
The concept of the anaerobic threshold (AT) was introduced during the 1960s. Since then, several methods to identify the AT have been studied and suggested as novel 'thresholds' based upon the variable used for detection (i.e., lactate threshold, ventilatory threshold, glucose threshold). These different techniques have brought some confusion about how this parameter should be named, for instance, as the anaerobic threshold or after the physiological measure used (i.e., lactate, ventilation). On the other hand, the modernization of scientific methods and apparatus to detect the AT, as well as the body of literature formed in the past decades, could provide a more cohesive understanding of the AT and the multiple physiological systems involved. Thus, the purpose of this review was to provide an integrative perspective of the methods to determine the AT. Copyright © 2017 Elsevier Inc. All rights reserved.
Reliability of TMS phosphene threshold estimation: Toward a standardized protocol.
Mazzi, Chiara; Savazzi, Silvia; Abrahamyan, Arman; Ruzzoli, Manuela
Phosphenes induced by transcranial magnetic stimulation (TMS) are a subjectively described visual phenomenon employed in basic and clinical research as an index of the excitability of retinotopically organized areas in the brain. Phosphene threshold estimation is a preliminary step in many TMS experiments in visual cognition for setting the appropriate level of TMS doses; however, the lack of a direct comparison of the available methods for phosphene threshold estimation leaves the reliability of those methods for setting TMS doses unresolved. The present work aims at filling this gap. We compared the most common methods for phosphene threshold calculation, namely the Method of Constant Stimuli (MOCS), the Modified Binary Search (MOBS) and the Rapid Estimation of Phosphene Threshold (REPT). In two experiments we tested the reliability of PT estimation under each of the three methods, considering the day of administration, participants' expertise in phosphene perception and the sensitivity of each method to the initial values used for the threshold calculation. We found that MOCS and REPT have comparable reliability when estimating phosphene thresholds, while MOBS estimations appear less stable. Based on our results, researchers and clinicians can estimate phosphene thresholds according to MOCS or REPT equally reliably, depending on their specific investigation goals. We suggest several important factors for consideration when calculating phosphene thresholds and describe strategies to adopt in experimental procedures. Copyright © 2017 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vedam, S.; Archambault, L.; Starkschall, G.
2007-11-15
Four-dimensional (4D) computed tomography (CT) imaging has found increasing importance in the localization of tumor and surrounding normal structures throughout the respiratory cycle. Based on such tumor motion information, it is possible to identify the appropriate phase interval for respiratory gated treatment planning and delivery. Such a gating phase interval is determined retrospectively based on tumor motion from internal tumor displacement. However, respiratory-gated treatment is delivered prospectively based on motion determined predominantly from an external monitor. Therefore, the simulation gate threshold determined from the retrospective phase interval selected for gating at 4D CT simulation may not correspond to the delivery gate threshold that is determined from the prospective external monitor displacement at treatment delivery. The purpose of the present work is to establish a relationship between the thresholds for respiratory gating determined at CT simulation and treatment delivery, respectively. One hundred fifty external respiratory motion traces, from 90 patients, with and without audio-visual biofeedback, are analyzed. Two respiratory phase intervals, 40%-60% and 30%-70%, are chosen for respiratory gating from the 4D CT-derived tumor motion trajectory. From residual tumor displacements within each such gating phase interval, a simulation gate threshold is defined based on (a) the average and (b) the maximum respiratory displacement within the phase interval. The duty cycle for prospective gated delivery is estimated from the proportion of external monitor displacement data points within both the selected phase interval and the simulation gate threshold. The delivery gate threshold is then determined iteratively to match the above determined duty cycle. The magnitude of the difference between such gate thresholds determined at simulation and treatment delivery is quantified in each case. Phantom motion tests yielded coincidence of simulation and delivery gate thresholds to within 0.3%. For patient data analysis, differences between simulation and delivery gate thresholds are reported as a fraction of the total respiratory motion range. For the smaller phase interval, the differences between simulation and delivery gate thresholds are 8 ± 11% and 14 ± 21% with and without audio-visual biofeedback, respectively, when the simulation gate threshold is determined based on the mean respiratory displacement within the 40%-60% gating phase interval. For the longer phase interval, corresponding differences are 4 ± 7% and 8 ± 15% with and without audio-visual biofeedback, respectively. Alternatively, when the simulation gate threshold is determined based on the maximum average respiratory displacement within the gating phase interval, greater differences between simulation and delivery gate thresholds are observed. A relationship between retrospective simulation gate threshold and prospective delivery gate threshold for respiratory gating is established and validated for regular and nonregular respiratory motion. Using this relationship, the delivery gate threshold can be reliably estimated at the time of 4D CT simulation, thereby improving the accuracy and efficiency of respiratory-gated radiation delivery.
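A hedged sketch of the duty-cycle matching step described above: given an external respiratory trace and the duty cycle computed at simulation, the delivery gate threshold is found iteratively, here by bisection on the displacement amplitude. Variable names and the gating convention (gating when displacement is at or below threshold) are illustrative assumptions.

```python
import numpy as np

def delivery_gate_threshold(external_trace, target_duty_cycle, tol=1e-3):
    """Find the external-monitor displacement threshold whose gated fraction
    of samples matches the duty cycle determined at 4D CT simulation."""
    lo, hi = external_trace.min(), external_trace.max()
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        duty = np.mean(external_trace <= mid)  # fraction of gated samples
        if duty < target_duty_cycle:
            lo = mid   # threshold too strict: gate admits too few points
        else:
            hi = mid
    return 0.5 * (lo + hi)
```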
Activity Detection and Retrieval for Image and Video Data with Limited Training
2015-06-10
Here we propose two techniques for image segmentation. The first involves an automata-based multiple threshold selection scheme, where a mixture of Gaussians is fitted to the ... For our second approach to segmentation, we employ a region-based segmentation technique that is capable of handling intensity inhomogeneity ...
Bart, Thomas; Boo, Michael; Balabanova, Snejana; Fischer, Yvonne; Nicoloso, Grazia; Foeken, Lydia; Oudshoorn, Machteld; Passweg, Jakob; Tichelli, Andre; Kindler, Vincent; Kurtzberg, Joanne; Price, Thomas; Regan, Donna; Shpall, Elizabeth J.; Schwabe, Rudolf
2013-01-01
Background Over the last 2 decades, cord blood (CB) has become an important source of blood stem cells. Clinical experience has shown that CB is a viable source for blood stem cells in the field of unrelated hematopoietic blood stem cell transplantation. Methods Studies of CB units (CBUs) stored in and ordered from the US (National Marrow Donor Program (NMDP)) and Swiss (Swiss Blood Stem Cells (SBSQ)) CB registries were conducted to assess whether these CBUs met the needs of transplantation patients, as evidenced by units being selected for transplantation. These data were compared to international banking and selection data (Bone Marrow Donors Worldwide (BMDW), World Marrow Donor Association (WMDA)). Further analysis was conducted on whether current CB banking practices were economically viable given the units being selected from the registries for transplant. It should be mentioned that our analysis focused on usage, deliberately omitting any information about clinical outcomes of CB transplantation. Results A disproportionate number of units with high total nucleated cell (TNC) counts are selected, compared to the distribution of units by TNC available. Therefore, the decision to use a low threshold for banking purposes cannot be supported by economic analysis and may limit the economic viability of future public CB banking. Conclusions We suggest significantly raising the TNC level used to determine a bankable unit. A level of 125 × 10⁷ TNCs, maybe even 150 × 10⁷ TNCs, might be a viable banking threshold. This would improve the return on inventory investments while meeting transplantation needs based on current selection criteria. PMID:23637645
The uncertain response in humans and animals
NASA Technical Reports Server (NTRS)
Smith, J. D.; Shields, W. E.; Schull, J.; Washburn, D. A.; Rumbaugh, D. M. (Principal Investigator)
1997-01-01
There has been no comparative psychological study of uncertainty processes. Accordingly, the present experiments asked whether animals, like humans, escape adaptively when they are uncertain. Human and animal observers were given two primary responses in a visual discrimination task, and the opportunity to escape from some trials into easier ones. In one psychophysical task (using a threshold paradigm), humans selectively escaped the difficult trials that left them uncertain of the stimulus. Two rhesus monkeys (Macaca mulatta) also showed this pattern. In a second psychophysical task (using the method of constant stimuli), some humans showed this pattern but one escaped infrequently and nonoptimally. Monkeys showed equivalent individual differences. The data suggest that escapes by humans and monkeys are interesting cognitive analogs and may reflect controlled decisional processes prompted by the perceptual ambiguity at threshold.
FOR Allocation to Distribution Systems based on Credible Improvement Potential (CIP)
NASA Astrophysics Data System (ADS)
Tiwary, Aditya; Arya, L. D.; Arya, Rajesh; Choube, S. C.
2017-02-01
This paper describes an algorithm for forced outage rate (FOR) allocation to each section of an electrical distribution system, subject to the satisfaction of reliability constraints at each load point. These constraints include threshold values of basic reliability indices, for example, failure rate, interruption duration and interruption duration per year at load points. A component improvement potential measure has been used for FOR allocation. The component with the greatest magnitude of the credible improvement potential (CIP) measure is selected for improving reliability performance. The approach adopted is a monovariable method: in each iteration, a single component is selected for FOR allocation based on the magnitude of its CIP. The developed algorithm is implemented on a sample radial distribution system.
Overview of field gamma spectrometries based on Si-photomultiplier
NASA Astrophysics Data System (ADS)
Denisov, Viktor; Korotaev, Valery; Titov, Aleksandr; Blokhina, Anastasia; Kleshchenok, Maksim
2017-05-01
The design of optical-electronic systems (OES) involves selecting technical configurations that are optimal, under the given initial requirements and conditions, according to certain criteria. The defining characteristic of an OES for any purpose is its threshold detection capability, since the required functional quality of the device or system is achieved on the basis of this property. The design criteria and optimization methods must therefore be subordinated to the goal of better detectability, which generally reduces to the problem of optimally selecting the expected (predetermined) signals under the predetermined observation conditions. Thus, the main purpose of optimizing the system with respect to its detectability is the choice of circuits and components that provide the most effective selection of a target.
Lei, Youming; Zheng, Fan
2016-12-01
Stochastic chaos induced by diffusion processes, with identical spectral density but different probability density functions (PDFs), is investigated in selected lightly damped Hamiltonian systems. The threshold amplitude of diffusion processes for the onset of chaos is derived by using the stochastic Melnikov method together with a mean-square criterion. Two quasi-Hamiltonian systems, namely, a damped single pendulum and a damped Duffing oscillator perturbed by stochastic excitations, are used as illustrative examples. Four different cases of stochastic processes are taken as the driving excitations. It is shown that in these two systems the spectral density of the diffusion processes completely determines the threshold amplitude for chaos, regardless of the shape of their PDFs, Gaussian or otherwise. Furthermore, the mean top Lyapunov exponent is employed to verify the analytical results. The results obtained by numerical simulations are in accordance with the analytical results. This demonstrates that the stochastic Melnikov method is effective in predicting the onset of chaos in quasi-Hamiltonian systems.
Characterizing air quality data from complex network perspective.
Fan, Xinghua; Wang, Li; Xu, Huihui; Li, Shasha; Tian, Lixin
2016-02-01
Air quality depends mainly on changes in the emission of pollutants and their precursors. Understanding its characteristics is the key to predicting and controlling air quality. In this study, complex networks were built to analyze the topological characteristics of air quality data using the correlation coefficient method. First, PM2.5 (particulate matter with aerodynamic diameter less than 2.5 μm) indexes of eight monitoring sites in Beijing were selected as samples from January 2013 to December 2014. Second, the C-C method was applied to determine the structure of the phase space. Points in the reconstructed phase space were taken as the nodes of the mapped network. Edges were then created between nodes whose correlation exceeded a critical threshold. Three properties of the constructed networks, degree distribution, clustering coefficient, and modularity, were used to determine the optimal value of the critical threshold. Finally, by analyzing and comparing topological properties, we pointed out that similarities and differences in the constructed complex networks revealed influencing factors and their different roles in the real air quality system.
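An illustrative reconstruction of the network-building step: delay-embed the PM2.5 series, take the embedded points as nodes, and link pairs whose Pearson correlation exceeds a critical threshold. The delay and embedding dimension below are assumptions; the paper selects them with the C-C method.

```python
import numpy as np
import networkx as nx

def phase_space_network(series, dim=4, tau=2, r_crit=0.95):
    """Map a scalar time series to a correlation network on its
    reconstructed phase-space points."""
    n = len(series) - (dim - 1) * tau
    points = np.array([series[i:i + (dim - 1) * tau + 1:tau] for i in range(n)])
    corr = np.corrcoef(points)                 # node-by-node correlation matrix
    G = nx.Graph()
    G.add_nodes_from(range(n))
    ii, jj = np.where(np.triu(corr, k=1) > r_crit)
    G.add_edges_from(zip(ii, jj))
    return G

# Topological diagnostics used to tune r_crit:
# nx.degree_histogram(G), nx.average_clustering(G)
```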
Linear segmentation algorithm for detecting layer boundary with lidar.
Mao, Feiyue; Gong, Wei; Logan, Timothy
2013-11-04
The automatic detection of aerosol- and cloud-layer boundaries (base and top) is important in atmospheric lidar data processing, because the boundary information is not only useful for environment and climate studies, but can also be used as input for further data processing. Previous methods have demonstrated limitations in defining the base and top and in window-size setting, and have neglected the in-layer attenuation. To overcome these limitations, we present a new layer detection scheme for up-looking lidars based on linear segmentation with a reasonable threshold setting, boundary selecting, and false-positive removal strategies. Preliminary results from both real and simulated data show that this algorithm can not only detect the layer base as accurately as the simple multi-scale method, but can also detect the layer top more accurately. Our algorithm can be directly applied to uncalibrated data without requiring any additional measurements or window size selections.
Wavelet Fusion for Concealed Object Detection Using Passive Millimeter Wave Sequence Images
NASA Astrophysics Data System (ADS)
Chen, Y.; Pang, L.; Liu, H.; Xu, X.
2018-04-01
PMMW imaging systems can create interpretable imagery of objects concealed under clothing, which gives them a great advantage in security check systems. This paper addresses wavelet fusion to detect concealed objects using passive millimeter wave (PMMW) sequence images. First, in accordance with the image characteristics and storage methods of the PMMW real-time imager, the sum of squared differences (SSD) is used as an image-relatedness parameter to screen the sequence images. Secondly, the selected images are fused using a wavelet fusion algorithm. Finally, the concealed objects are detected by mean filtering, threshold segmentation and edge detection. The experimental results show that this method improves the detection of concealed objects by selecting the most relevant images from the PMMW sequence and using wavelet fusion to enhance the information of the concealed objects. The method can be effectively applied to the detection of objects concealed on the human body in millimeter wave video.
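A minimal sketch of the frame-screening step, assuming the PMMW sequence is an array of equally sized grayscale frames; the reference frame, threshold value and function names are illustrative.

```python
import numpy as np

def screen_frames(frames, ref_index=0, ssd_threshold=1e4):
    """Keep the frames most related to a reference frame, measured by the
    sum of squared differences (SSD); lower SSD means more related."""
    ref = frames[ref_index].astype(float)
    ssd = [np.sum((f.astype(float) - ref) ** 2) for f in frames]
    return [i for i, s in enumerate(ssd) if s <= ssd_threshold]

# The fusion itself (not shown) would combine wavelet coefficients of the
# kept frames, e.g. via pywt.wavedec2 / pywt.waverec2, before thresholding
# and edge detection.
```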
NASA Astrophysics Data System (ADS)
Wang, Xuejuan; Wu, Shuhang; Liu, Yunpeng
2018-04-01
This paper presents a new method for wood defect detection. It can solve the over-segmentation problem existing in local threshold segmentation methods. The method effectively combines the advantages of visual saliency and local threshold segmentation. First, defect areas are coarsely located by using the spectral residual method to compute their global visual saliency. Then, maximum inter-class variance (Otsu) thresholding is applied around the coarsely located areas to position and segment the wood surface defects precisely. Lastly, we use mathematical morphology to process the binary images after segmentation, which reduces noise and removes small false objects. Experiments on test images of insect holes, dead knots and sound knots show that the proposed method obtains ideal segmentation results and is superior to existing segmentation methods based on edge detection, Otsu thresholding and threshold segmentation.
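A hedged sketch of the coarse-to-fine pipeline: spectral residual saliency (in the sense of Hou and Zhang's method) followed by Otsu thresholding and a morphological opening. Kernel sizes and the smoothing steps are assumptions, not the authors' exact settings.

```python
import numpy as np
import cv2

def saliency_then_otsu(gray):
    """Coarse defect localization by spectral residual saliency, then Otsu
    (maximum inter-class variance) thresholding of the saliency map."""
    f = np.fft.fft2(gray.astype(float))
    log_amp = np.log1p(np.abs(f))
    phase = np.angle(f)
    # spectral residual: log amplitude minus its local average
    residual = log_amp - cv2.blur(log_amp, (3, 3))
    saliency = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    saliency = cv2.GaussianBlur(saliency, (9, 9), 2.5)
    sal8 = cv2.normalize(saliency, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    _, mask = cv2.threshold(sal8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # morphological opening suppresses noise and small false objects
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
```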
NASA Technical Reports Server (NTRS)
Brucker, G. J.; Stassinopoulos, E. G.
1991-01-01
An analysis of the expected space radiation effects on the single event upset (SEU) properties of CMOS/bulk memories onboard the Combined Release and Radiation Effects Satellite (CRRES) is presented. Dose-imprint data from ground test irradiations of identical devices are applied to the predictions of cosmic-ray-induced space upset rates in the memories onboard the spacecraft. The calculations take into account the effect of total dose on the SEU sensitivity of the devices as the dose accumulates in orbit. Estimates of error rates, which involved an arbitrary selection of a single pair of threshold linear energy transfer (LET) and asymptotic cross-section values, were compared to the results of an integration over the cross-section curves versus LET. The integration gave lower upset rates than the use of the selected values of the SEU parameters. Since the integration approach is more accurate and eliminates the need for an arbitrary definition of threshold LET and asymptotic cross section, it is recommended for all error rate predictions where experimental sigma-versus-LET curves are available.
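A hedged sketch of the recommended integration: instead of a single (threshold LET, asymptotic cross-section) pair, fold the measured sigma-versus-LET curve into the differential LET flux spectrum. Both curves below are illustrative placeholders, not CRRES data.

```python
import numpy as np

L = np.logspace(0, 2, 200)                    # LET grid, MeV*cm^2/mg (assumed units)
sigma = 1e-6 / (1 + np.exp(-(L - 20) / 4))    # placeholder sigma-vs-LET curve, cm^2
dphi_dL = 1e2 * L ** -3                       # placeholder differential LET spectrum

# upset rate per device = integral over LET of sigma(L) * dphi/dL
upset_rate = np.trapz(sigma * dphi_dL, L)
```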
Chen, Xiao-Min; Feng, Ming-Jun; Shen, Cai-Jie; He, Bin; Du, Xian-Feng; Yu, Yi-Bo; Liu, Jing; Chu, Hui-Min
2017-07-01
The present study was designed to develop a novel method for identifying significant pathways associated with human hypertrophic cardiomyopathy (HCM), based on gene co-expression analysis. The microarray dataset associated with HCM (E-GEOD-36961) was obtained from the European Molecular Biology Laboratory-European Bioinformatics Institute database. Informative pathways were selected based on the Reactome pathway database and screening treatments. An empirical Bayes method was utilized to construct co-expression networks for the informative pathways, and a weight value was assigned to each pathway. Differential pathways were extracted based on a weight threshold, which was calculated using a random model. In order to assess whether the co-expression method was feasible, it was compared with traditional pathway enrichment analysis of differentially expressed genes, which were identified using the significance analysis of microarrays package. A total of 1,074 informative pathways were selected for subsequent investigation and their weight values were obtained. According to the weight-value threshold of 0.01057, 447 differential pathways, including folding of actin by chaperonin containing T-complex protein 1 (CCT)/T-complex protein 1 ring complex (TRiC), purine ribonucleoside monophosphate biosynthesis and ubiquinol biosynthesis, were obtained. Compared with traditional pathway enrichment analysis, the number of pathways obtained from the co-expression approach was increased. The results of the present study demonstrated that this method may be useful to predict marker pathways for HCM. The pathways of folding of actin by CCT/TRiC and purine ribonucleoside monophosphate biosynthesis may provide evidence of the underlying molecular mechanisms of HCM, and offer novel therapeutic directions for HCM.
NASA Astrophysics Data System (ADS)
Jakob, Matthias; Weatherly, Hamish
2003-09-01
Landslides triggered by rainfall are the cause of thousands of deaths worldwide every year. One possible approach to limiting the socioeconomic consequences of such events is the development of climatic thresholds for landslide initiation. In this paper, we propose a method that incorporates antecedent rainfall and streamflow data to develop a landslide initiation threshold for the North Shore Mountains of Vancouver, British Columbia. Hydroclimatic data were gathered for 18 storms that triggered landslides and 18 storms that did not. Discriminant function analysis separated the landslide-triggering storms from those storms that did not trigger landslides and selected the most meaningful variables that allow this separation. Discriminant functions were also developed for the landslide-triggering and nonlandslide-triggering storms. The difference of the discriminant scores, ΔCS, for both groups is a measure of landslide susceptibility during a storm. The variables identified that optimize the separation of the two storm groups are the 4-week rainfall prior to a significant storm, the 6-h rainfall during a storm, and the number of hours a discharge of 1 m³/s was exceeded at Mackay Creek during a storm. Three thresholds were identified. The Landslide Warning Threshold (LWT) is reached when ΔCS is -1. The Conditional Landslide Initiation Threshold (CTL_I) is reached when ΔCS is zero; it implies that landslides are likely once a rainfall intensity of 4 mm/h is exceeded, at which point the Imminent Landslide Initiation Threshold (ITL_I) is reached. The LWT allows time for the issuance of a landslide advisory and for moving personnel out of hazardous areas. The methodology proposed in this paper can be transferred to other regions worldwide where the type and quality of data are appropriate for this type of analysis.
The Principle of the Micro-Electronic Neural Bridge and a Prototype System Design.
Huang, Zong-Hao; Wang, Zhi-Gong; Lu, Xiao-Ying; Li, Wen-Yuan; Zhou, Yu-Xuan; Shen, Xiao-Yan; Zhao, Xin-Tai
2016-01-01
The micro-electronic neural bridge (MENB) aims to rebuild lost motor function in paralyzed humans by routing movement-related signals from the brain, around the damaged part of the spinal cord, to the external effectors. This study focused on the prototype system design of the MENB, including the principle of the MENB, the design of the neural signal detecting circuit and the functional electrical stimulation (FES) circuit, and the spike detecting and sorting algorithm. In this study, we developed a novel improved amplitude-threshold spike detecting method based on a variable forward-difference threshold for both the training and the bridging phase. The discrete wavelet transform (DWT), a new level-feature coefficient selection method based on the Lilliefors test, and the k-means clustering method based on the Mahalanobis distance were used for spike sorting. A real-time online spike detecting and sorting algorithm based on the DWT and the Euclidean distance was also implemented for the bridging phase. Tested on the data sets available at Caltech, in the training phase, the average sensitivity, specificity, and clustering accuracy are 99.43%, 97.83%, and 95.45%, respectively. Validated by the three-fold cross-validation method, the average sensitivity, specificity, and classification accuracy are 99.43%, 97.70%, and 96.46%, respectively.
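A minimal sketch of amplitude-threshold spike detection on the forward difference of the recorded signal; the variable-threshold rule of the paper is simplified here to a robust fixed multiple of a noise estimate, and all parameter values are illustrative.

```python
import numpy as np

def detect_spikes(signal, k=4.0, refractory=30):
    """Flag samples whose forward difference exceeds k times a robust
    noise estimate, enforcing a refractory gap between detections."""
    diff = np.diff(signal)                     # forward difference
    noise = np.median(np.abs(diff)) / 0.6745   # robust noise std estimate
    idx = np.where(np.abs(diff) > k * noise)[0]
    spikes, last = [], -refractory
    for i in idx:                              # enforce the refractory period
        if i - last >= refractory:
            spikes.append(i)
            last = i
    return np.array(spikes)
```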
Yuan, Su-Fen; Liu, Ze-Hua; Lian, Hai-Xian; Yang, Chuangtao; Lin, Qing; Yin, Hua; Dang, Zhi
2016-10-01
A simple online headspace solid-phase microextraction (HS-SPME) method coupled with gas chromatography-mass spectrometry (GC-MS) was developed for the simultaneous determination of trace amounts of nine estrogenic odorant alkylphenols, chlorophenols and their derivatives in water samples. The extraction conditions of HS-SPME were optimized, including fiber selection, extraction temperature, extraction time, and salt concentration. Results showed that the divinylbenzene/Carboxen/polydimethylsiloxane (DVB/CAR/PDMS) fiber was the most appropriate of the three selected commercial fibers, and the optimal extraction temperature, time, and salt concentration were 70 °C, 30 min, and 0.25 g/mL, respectively. The developed method was validated and showed good linearity (R² > 0.989), low limits of detection (LOD, 0.002-0.5 μg/L), and excellent recoveries (76-126 %) with low relative standard deviations (RSD, 0.7-12.9 %). The developed method was finally applied to two surface water samples, and some of the target compounds were detected. All detected compounds were below their odor thresholds, except for 2,4,6-TCAS and 2,4,6-TBAS, whose concentrations were near their odor thresholds. However, in the two surface water samples, the detected compounds contributed a certain amount of estrogenicity, which seems to suggest that more attention should be paid to the issue of estrogenicity rather than to the odor problem.
Elizabeth A. Freeman; Gretchen G. Moisen
2008-01-01
Modelling techniques used in binary classification problems often result in a predicted probability surface, which is then translated into a presence-absence classification map. However, this translation requires a (possibly subjective) choice of threshold above which the variable of interest is predicted to be present. The selection of this threshold value can have...
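One common, objective way to make that choice, sketched here: pick the threshold on the predicted probability surface that maximizes Youden's J (sensitivity + specificity - 1) on validation data. Other criteria, such as maximum kappa or prevalence matching, are equally standard; this is an illustration, not the authors' specific recommendation.

```python
import numpy as np
from sklearn.metrics import roc_curve

def youden_threshold(y_true, y_prob):
    """Threshold maximizing sensitivity + specificity - 1."""
    fpr, tpr, thresholds = roc_curve(y_true, y_prob)
    return thresholds[np.argmax(tpr - fpr)]
```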
Pool desiccation and developmental thresholds in the common frog, Rana temporaria.
Lind, Martin I; Persbo, Frida; Johansson, Frank
2008-05-07
The developmental threshold is the minimum size or condition that a developing organism must have reached in order for a life-history transition to occur. Although developmental thresholds have been observed for many organisms, inter-population variation among natural populations has not been examined. Since isolated populations can be subjected to strong divergent selection, population divergence in developmental thresholds can be predicted if environmental conditions favour fast or slow developmental time in different populations. Amphibian metamorphosis is a well-studied life-history transition, and using a common garden approach we compared the development time and the developmental threshold of metamorphosis in four island populations of the common frog Rana temporaria: two populations originating from islands with only temporary breeding pools and two from islands with permanent pools. As predicted, tadpoles from time-constrained temporary pools had a genetically shorter development time than those from permanent pools. Furthermore, the variation in development time among females from temporary pools was low, consistent with the action of selection on rapid development in this environment. However, there were no clear differences in the developmental thresholds between the populations, indicating that the main response to life in a temporary pool is to shorten the development time.
Mulsow, Jason; Finneran, James J; Houser, Dorian S
2011-04-01
Although electrophysiological methods of measuring the hearing sensitivity of pinnipeds are not yet as refined as those for dolphins and porpoises, they appear to be a promising supplement to traditional psychophysical procedures. In order to further standardize electrophysiological methods with pinnipeds, a within-subject comparison of psychophysical and auditory steady-state response (ASSR) measures of aerial hearing sensitivity was conducted with a 1.5-yr-old California sea lion. The psychophysical audiogram was similar to those previously reported for otariids, with a U-shape, and thresholds near 10 dB re 20 μPa at 8 and 16 kHz. ASSR thresholds measured using both single and multiple simultaneous amplitude-modulated tones closely reproduced the psychophysical audiogram, although the mean ASSR thresholds were elevated relative to psychophysical thresholds. Differences between psychophysical and ASSR thresholds were greatest at the low- and high-frequency ends of the audiogram. Thresholds measured using the multiple ASSR method were not different from those measured using the single ASSR method. The multiple ASSR method was more rapid than the single ASSR method, and allowed for threshold measurements at seven frequencies in less than 20 min. The multiple ASSR method may be especially advantageous for hearing sensitivity measurements with otariid subjects that are untrained for psychophysical procedures.
Improved Sparse Multi-Class SVM and Its Application for Gene Selection in Cancer Classification
Huang, Lingkang; Zhang, Hao Helen; Zeng, Zhao-Bang; Bushel, Pierre R.
2013-01-01
Background Microarray techniques provide promising tools for cancer diagnosis using gene expression profiles. However, molecular diagnosis based on high-throughput platforms presents great challenges due to the overwhelming number of variables versus the small sample size and the complex nature of multi-type tumors. Support vector machines (SVMs) have shown superior performance in cancer classification due to their ability to handle high dimensional low sample size data. The multi-class SVM algorithm of Crammer and Singer provides a natural framework for multi-class learning. Despite its effective performance, the procedure utilizes all variables without selection. In this paper, we propose to improve the procedure by imposing shrinkage penalties in learning to enforce solution sparsity. Results The original multi-class SVM of Crammer and Singer is effective for multi-class classification but does not conduct variable selection. We improved the method by introducing soft-thresholding type penalties to incorporate variable selection into multi-class classification for high dimensional data. The new methods were applied to simulated data and two cancer gene expression data sets. The results demonstrate that the new methods can select a small number of genes for building accurate multi-class classification rules. Furthermore, the important genes selected by the methods overlap significantly, suggesting general agreement among different variable selection schemes. Conclusions High accuracy and sparsity make the new methods attractive for cancer diagnostics with gene expression data and defining targets of therapeutic intervention. Availability: The source MATLAB code is available from http://math.arizona.edu/~hzhang/software.html. PMID:23966761
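The soft-thresholding operator underlying such sparsity penalties, shown here as a hedged illustration: coefficients with magnitude below lambda are zeroed, which removes the corresponding variables (genes) from the classifier.

```python
import numpy as np

def soft_threshold(w, lam):
    """Shrink coefficients toward zero; zero out those below lam."""
    return np.sign(w) * np.maximum(np.abs(w) - lam, 0.0)

w = np.array([0.8, -0.05, 0.02, -1.3])
print(soft_threshold(w, 0.1))   # -> [ 0.7 -0.   0.  -1.2]
```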
Using principal component analysis for selecting network behavioral anomaly metrics
NASA Astrophysics Data System (ADS)
Gregorio-de Souza, Ian; Berk, Vincent; Barsamian, Alex
2010-04-01
This work addresses new approaches to behavioral analysis of networks and hosts for the purposes of security monitoring and anomaly detection. Most commonly used approaches simply implement anomaly detectors for one, or a few, simple metrics, and those metrics can exhibit unacceptable false alarm rates. For instance, the anomaly score of network communication is defined as the reciprocal of the likelihood that a given host uses a particular protocol (or destination); this definition may result in an unrealistically high threshold for alerting to avoid being flooded by false positives. We demonstrate that selecting and adapting the metrics and thresholds, on a host-by-host or protocol-by-protocol basis, can be done by established multivariate analyses such as PCA. We will show how to determine one or more metrics, for each network host, that record the highest available amount of information regarding the baseline behavior and show relevant deviances reliably. We describe the methodology used to pick from a large selection of available metrics, and illustrate a method for comparing the resulting classifiers. Using our approach we are able to reduce the resources required to properly identify misbehaving hosts, protocols, or networks, by dedicating system resources to only those metrics that actually matter in detecting network deviations.
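A hedged sketch of the selection idea: standardize per-host metrics, run PCA, and keep the metrics with the largest loadings on the leading components, i.e. those carrying the most baseline-behavior variance. The weighting scheme and metric names are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def rank_metrics(X, metric_names, n_components=3):
    """Rank candidate metrics (columns of X, one row per host observation)
    by their variance-weighted loadings on the leading principal components."""
    Z = StandardScaler().fit_transform(X)
    pca = PCA(n_components=n_components).fit(Z)
    # weight each metric's loadings by the variance its components explain
    score = np.abs(pca.components_.T) @ pca.explained_variance_ratio_
    order = np.argsort(score)[::-1]
    return [(metric_names[i], score[i]) for i in order]
```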
Bechard, Jeff; Gibson, John Ken; Killingsworth, Cheryl R; Wheeler, Jeffery J; Schneidkraut, Marlowe J; Huang, Jian; Ideker, Raymond E; McAfee, Donald A
2011-03-01
Vernakalant is a novel antiarrhythmic agent that has demonstrated clinical efficacy for the treatment of atrial fibrillation. Vernakalant blocks, to various degrees, cardiac sodium and potassium channels with a pattern that suggests atrial selectivity. We hypothesized, therefore, that vernakalant would affect atrial more than ventricular effective refractory period (ERP) and have little or no effect on ventricular defibrillation threshold (DFT). Atrial and ventricular ERP and ventricular DFT were determined before and after treatment with vernakalant or vehicle in 23 anesthetized male mixed-breed pigs. Vernakalant was infused at a rate designed to achieve stable plasma levels similar to those in human clinical trials. Atrial and ventricular ERP were determined by endocardial extrastimuli delivered to the right atria or right ventricle. Defibrillation was achieved using external biphasic shocks delivered through adhesive defibrillation patches placed on the thorax after 10 seconds of electrically induced ventricular fibrillation. The DFT was estimated using the Dixon "up-and-down" method. Vernakalant significantly increased atrial ERP compared with vehicle controls (34 ± 8 versus 9 ± 7 msec, respectively) without significantly affecting ventricular ERP or DFT. This is consistent with atrial selective actions and supports the conclusion that vernakalant does not alter the efficacy of electrical defibrillation.
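An illustrative sketch of the Dixon "up-and-down" estimator mentioned above: the shock energy steps down after a successful defibrillation and up after a failure, and the threshold is estimated as the mean energy over the trials following the first reversal. The success model and all parameters below are stand-ins, not the study's pig data.

```python
import numpy as np

def up_and_down(first_energy, step, n_trials, success_prob):
    """Staircase DFT estimate; success_prob(energy) returns the assumed
    probability of successful defibrillation at that energy."""
    energy, energies, outcomes = first_energy, [], []
    for _ in range(n_trials):
        energies.append(energy)
        success = np.random.rand() < success_prob(energy)
        outcomes.append(success)
        energy += -step if success else step   # down on success, up on failure
    rev = [i for i in range(1, len(outcomes)) if outcomes[i] != outcomes[i - 1]]
    start = rev[0] if rev else 0               # first reversal of direction
    return np.mean(energies[start:])
```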
Phillips, Glenn A; Wyrwich, Kathleen W; Guo, Shien; Medori, Rossella; Altincatal, Arman; Wagner, Linda; Elkins, Jacob
2014-11-01
The 29-item Multiple Sclerosis Impact Scale (MSIS-29) was developed to examine the impact of multiple sclerosis (MS) on physical and psychological functioning from a patient's perspective. To determine the responder definition (RD) of the MSIS-29 physical impact subscale (PHYS) in a group of patients with relapsing-remitting MS (RRMS) participating in a clinical trial. Data from the SELECT trial comparing daclizumab high-yield process with placebo in patients with RRMS were used. Physical function was evaluated in SELECT using three patient-reported outcomes measures and the Expanded Disability Status Scale (EDSS). Anchor- and distribution-based methods were used to identify an RD for the MSIS-29. Results across the anchor-based approach suggested MSIS-29 PHYS RD values of 6.91 (mean), 7.14 (median) and 7.50 (mode). Distribution-based RD estimates ranged from 6.24 to 10.40. An RD of 7.50 was selected as the most appropriate threshold for physical worsening based on corresponding changes in the EDSS (primary anchor of interest). These findings indicate that a ≥7.50 point worsening on the MSIS-29 PHYS is a reasonable and practical threshold for identifying patients with RRMS who have experienced a clinically significant change in the physical impact of MS. © The Author(s), 2014.
Lane identification and path planning for autonomous mobile robots
NASA Astrophysics Data System (ADS)
McKeon, Robert T.; Paulik, Mark; Krishnan, Mohan
2006-10-01
This work has been performed in conjunction with the University of Detroit Mercy's (UDM) ECE Department autonomous vehicle entry in the 2006 Intelligent Ground Vehicle Competition (www.igvc.org). The IGVC challenges engineering students to design autonomous vehicles and compete in a variety of unmanned mobility competitions. The course to be traversed in the competition consists of a lane demarcated by painted lines on grass with the possibility of one of the two lines being deliberately left out over segments of the course. The course also consists of other challenging artifacts such as sandpits, ramps, potholes, and colored tarps that alter the color composition of scenes, and obstacles set up using orange and white construction barrels. This paper describes a composite lane edge detection approach that uses three algorithms to implement noise filters enabling increased removal of noise prior to the application of image thresholding. The first algorithm uses a row-adaptive statistical filter to establish an intensity floor followed by a global threshold based on a reverse cumulative intensity histogram and a priori knowledge about lane thickness and separation. The second method first improves the contrast of the image by implementing an arithmetic combination of the blue plane (RGB format) and a modified saturation plane (HSI format). A global threshold is then applied based on the mean of the intensity image and a user-defined offset. The third method applies the horizontal component of the Sobel mask to a modified gray scale of the image, followed by a thresholding method similar to the one used in the second method. The Hough transform is applied to each of the resulting binary images to select the most probable line candidates. Finally, a heuristics-based confidence interval is determined, and the results sent on to a separate fuzzy polar-based navigation algorithm, which fuses the image data with that produced by a laser scanner (for obstacle detection).
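A hedged sketch of the third detector in the pipeline: the horizontal component of the Sobel mask applied to a modified grayscale image, a mean-plus-offset global threshold, then a Hough transform to keep the most probable line candidates. The gradient direction, offset value and Hough settings are assumptions.

```python
import cv2
import numpy as np

def detect_lane_lines(gray, offset=25):
    """Sobel gradient, global mean-plus-offset threshold, Hough line search."""
    edges = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)  # horizontal gradient
    mag = np.abs(edges)
    binary = (mag > mag.mean() + offset).astype(np.uint8) * 255
    # returns (rho, theta) pairs for the strongest line candidates, or None
    return cv2.HoughLines(binary, 1, np.pi / 180, 120)
```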
Visualizing the deep end of sound: plotting multi-parameter results from infrasound data analysis
NASA Astrophysics Data System (ADS)
Perttu, A. B.; Taisne, B.
2016-12-01
Infrasound is sound below the threshold of human hearing, approximately 20 Hz. The field of infrasound research, like other waveform-based fields, relies on several standard processing methods and data visualizations, including waveform plots and spectrograms. The installation of the International Monitoring System (IMS) global network of infrasound arrays contributed to the resurgence of infrasound research. Array processing is an important method used in infrasound research; however, this method produces data sets with a large number of parameters and requires innovative plotting techniques. The goal in designing new figures is to present easily comprehensible, information-rich plots through careful selection of data density and plotting methods.
Should we expect population thresholds for wildlife disease?
Lloyd-Smith, James O.; Cross, P.C.; Briggs, C.J.; Daugherty, M.; Getz, W.M.; Latto, J.; Sanchez, M.; Smith, A.; Swei, A.
2005-01-01
Host population thresholds for invasion or persistence of infectious disease are core concepts of disease ecology, and underlie on-going and controversial disease control policies based on culling and vaccination. Empirical evidence for these thresholds in wildlife populations has been sparse, however, though recent studies have narrowed this gap. Here we review the theoretical bases for population thresholds for disease, revealing why they are difficult to measure and sometimes are not even expected, and identifying important facets of wildlife ecology left out of current theories. We discuss strengths and weaknesses of selected empirical studies that have reported disease thresholds for wildlife, identify recurring obstacles, and discuss implications of our imperfect understanding of wildlife thresholds for disease control policy.
NASA Astrophysics Data System (ADS)
Solari, Sebastián.; Egüen, Marta; Polo, María. José; Losada, Miguel A.
2017-04-01
Threshold estimation in the Peaks Over Threshold (POT) method and the impact of the estimation method on the calculation of high return period quantiles and their uncertainty (or confidence intervals) are issues that are still unresolved. In the past, methods based on goodness of fit tests and EDF-statistics have yielded satisfactory results, but their use has not yet been systematized. This paper proposes a methodology for automatic threshold estimation, based on the Anderson-Darling EDF-statistic and goodness of fit test. When combined with bootstrapping techniques, this methodology can be used to quantify both the uncertainty of threshold estimation and its impact on the uncertainty of high return period quantiles. This methodology was applied to several simulated series and to four precipitation/river flow data series. The results obtained confirmed its robustness. For the measured series, the estimated thresholds corresponded to those obtained by nonautomatic methods. Moreover, even though the uncertainty of the threshold estimation was high, this did not have a significant effect on the width of the confidence intervals of high return period quantiles.
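A hedged sketch of the automatic selection loop described above: for each candidate threshold, fit a Generalized Pareto Distribution (GPD) to the excesses and compute the Anderson-Darling statistic; the lowest threshold whose statistic falls under a chosen critical value is accepted. The critical value and minimum sample size here are illustrative, and the paper's bootstrap uncertainty step is omitted.

```python
import numpy as np
from scipy import stats

def ad_statistic(excesses, c, scale):
    """Anderson-Darling A^2 for a fitted GPD."""
    z = np.sort(stats.genpareto.cdf(excesses, c, scale=scale))
    n = len(z)
    i = np.arange(1, n + 1)
    return -n - np.mean((2 * i - 1) * (np.log(z) + np.log1p(-z[::-1])))

def select_threshold(data, candidates, a2_crit=1.0):
    for u in sorted(candidates):
        exc = data[data > u] - u
        if len(exc) < 30:            # too few exceedances to fit reliably
            break
        c, loc, scale = stats.genpareto.fit(exc, floc=0.0)
        if ad_statistic(exc, c, scale) < a2_crit:
            return u                 # first threshold passing the test
    return None
```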
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maspero, M.; Meijer, G.J.; Lagendijk, J.J.W.
2015-06-15
Purpose: To develop an image processing method for MRI-based generation of electron density maps, known as pseudo-CT (pCT), without using model- or atlas-based segmentation, and to evaluate the method in the pelvic and head-neck regions against CT. Methods: CT and MRI scans were obtained from the pelvic region of four patients in supine position, using a flat table top only for CT. Stratified CT maps were generated by classifying each voxel, based on HU ranges, into one of four classes: air, adipose tissue, soft tissue or bone. A hierarchical region-selective algorithm, based on automatic thresholding and clustering, was used to classify tissues from MR Dixon reconstructed fat, In-Phase (IP) and Opposed-Phase (OP) images. First, a body mask was obtained by thresholding the IP image. Subsequently, an automatic threshold on the Dixon fat image differentiated soft and adipose tissue. K-means clustering on the IP and OP images resulted in a mask from which, via a connected-neighborhood analysis, the user could select the components corresponding to bone structures. The pCT was estimated through assignment of bulk HU values to the tissue classes. Bone-only Digital Reconstructed Radiographs (DRRs) were generated as well. The pCT images were rigidly registered to the stratified CT to allow a volumetric and voxelwise comparison. Moreover, pCTs were also calculated within the head-neck region in two volunteers using the same pipeline. Results: The volumetric comparison resulted in differences <1% for each tissue class. A voxelwise comparison showed good classification, ranging from 64% to 98%. The primary misclassified classes were adipose/soft tissue and bone/soft tissue. As the patients were imaged on different table tops, part of the misclassification error can be explained by misregistration. Conclusion: The proposed approach does not rely on an anatomy model, providing the flexibility to successfully generate the pCT at two different body sites. This research is funded by the ZonMw IMDI Programme, project name: "RASOR sharp: MRI based radiotherapy planning using a single MRI sequence", project number: 10-104003010.
Wilhelm, K P; Kaspar, K; Funkel, O
2001-04-01
Sun protection factor (SPF) measurement is based on the determination of the minimal erythema dose (MED). The ratio of doses required to induce a minimal erythema between product-treated and untreated skin is defined as the SPF. The aim of this study was to validate the conventionally used visual scoring against two non-invasive methods: high resolution laser Doppler imaging (HR-LDI) and colorimetry. Another goal was to check whether suberythemal reactions could be detected by means of HR-LDI measurements. Four sunscreens were selected. The measurements were made on the back of 10 subjects. A solar simulator SU 5000 (m.u.t., Wedel, Germany) served as the radiation source. For the visual assessment, the erythema was defined according to COLIPA as the first perceptible, clearly defined unambiguous redness of the skin. For the colorimetric determination of the erythema, a Chromameter CR 300 (Minolta, Osaka, Japan) was used. The threshold for the colorimetry was chosen according to the COLIPA recommendation as an increase of the redness parameter Δa* = 2.5. For the non-contact perfusion measurements of skin blood flow, a two-dimensional high resolution laser Doppler imager (HR-LDI) (Lisca, Linköping, Sweden) was used. For the HR-LDI measurements, an optimal threshold perfusion needed to be established. For the HR-LDI measurements, basal perfusion + 1 standard deviation of all basal measurements was found to be a reliable threshold perfusion corresponding to the minimal erythema. Smaller thresholds, which would be necessary for detection of suberythemal responses, did not provide unambiguous data. All three methods, visual scoring, colorimetry and HR-LDI, produced similar SPFs for the test products with a variability of < 5% between methods. The HR-LDI method showed the lowest variation of the mean SPF. Neither of the instrumental methods, however, resulted in an increase of the sensitivity of SPF determination as compared with visual scoring. Both HR-LDI and colorimetry are suitable, reliable and observer-independent methods for MED determination. However, they do not provide greater sensitivity and thus do not result in lower UV dose requirements for testing.
2014-02-01
... installation based on a Euclidean distance allocation and assigned that installation's threshold values. The second approach used a thin-plate spline ... installation critical nLS+ thresholds involved spatial interpolation. A thin-plate spline radial basis function (RBF) was selected as the ... the interpolation of installation results using a thin-plate spline radial basis function technique. 6.5 OBJECTIVE #5: DEVELOP AND ...
Koyama, Kazuya; Mitsumoto, Takuya; Shiraishi, Takahiro; Tsuda, Keisuke; Nishiyama, Atsushi; Inoue, Kazumasa; Yoshikawa, Kyosan; Hatano, Kazuo; Kubota, Kazuo; Fukushi, Masahiro
2017-09-01
We aimed to determine the difference in tumor volume associated with the reconstruction model in positron-emission tomography (PET). To reduce the influence of the reconstruction model, we suggested a method to measure the tumor volume using the relative threshold method with a fixed threshold based on the peak standardized uptake value (SUVpeak). The efficacy of our method was verified using 18F-2-fluoro-2-deoxy-D-glucose PET/computed tomography images of 20 patients with lung cancer. The tumor volume was determined using the relative threshold method with a fixed threshold based on the SUVpeak. The PET data were reconstructed using the ordered-subset expectation maximization (OSEM) model, the OSEM + time-of-flight (TOF) model, and the OSEM + TOF + point-spread function (PSF) model. The volume differences associated with the reconstruction algorithm (%VD) were compared. For comparison, the tumor volume was measured using the relative threshold method based on the maximum SUV (SUVmax). For the OSEM and TOF models, the mean %VD values were -0.06 ± 8.07 and -2.04 ± 4.23% for the fixed 40% threshold according to the SUVmax and the SUVpeak, respectively. The effect of our method in this case appeared to be minor. For the OSEM and PSF models, the mean %VD values were -20.41 ± 14.47 and -13.87 ± 6.59% for the fixed 40% threshold according to the SUVmax and SUVpeak, respectively. Our new method enabled the measurement of tumor volume with a fixed threshold and reduced the influence of changes in tumor volume associated with the reconstruction model.
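A minimal sketch of the fixed relative-threshold volume: voxels at or above 40% of SUVpeak are counted. SUVpeak is conventionally the mean SUV in a roughly 1 cm³ sphere at the hottest location; the cubic-neighborhood approximation and parameter values below are assumptions.

```python
import numpy as np
from scipy import ndimage

def tumor_volume(suv, voxel_volume_ml, frac=0.40, peak_size=3):
    """Volume of voxels above frac * SUVpeak, with SUVpeak approximated as
    the maximum of a local-mean-filtered SUV map."""
    local_mean = ndimage.uniform_filter(suv, size=peak_size)
    suv_peak = local_mean.max()            # SUVpeak approximation
    mask = suv >= frac * suv_peak
    return mask.sum() * voxel_volume_ml
```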
NASA Astrophysics Data System (ADS)
Khorashadi Zadeh, Farkhondeh; Nossent, Jiri; van Griensven, Ann; Bauwens, Willy
2017-04-01
Parameter estimation is a major concern in hydrological modeling, which may limit the use of complex simulators with a large number of parameters. To support the selection of parameters to include in or exclude from the calibration process, Global Sensitivity Analysis (GSA) is widely applied in modeling practice. Based on the results of GSA, the influential and the non-influential parameters are identified (i.e. parameter screening). Nevertheless, the choice of the screening threshold below which parameters are considered non-influential is a critical issue, which has recently received more attention in the GSA literature. In theory, the sensitivity index of a non-influential parameter has a value of zero. However, since numerical approximations, rather than analytical solutions, are utilized in GSA methods to calculate the sensitivity indices, small but non-zero indices may be obtained for non-influential parameters. In order to assess the threshold that identifies non-influential parameters in GSA methods, we propose to calculate the sensitivity index of a "dummy parameter". This dummy parameter has no influence on the model output, but will have a non-zero sensitivity index, representing the error due to the numerical approximation. Hence, the parameters whose indices are above the sensitivity index of the dummy parameter can be classified as influential, whereas the parameters whose indices are below this index are within the range of the numerical error and should be considered non-influential. To demonstrate the effectiveness of the proposed "dummy parameter approach", 26 parameters of a Soil and Water Assessment Tool (SWAT) model are selected to be analyzed and screened, using the variance-based Sobol' and moment-independent PAWN methods. The sensitivity index of the dummy parameter is calculated from sampled data, without changing the model equations. Moreover, the calculation does not even require additional model evaluations for the Sobol' method. A formal statistical test validates these parameter screening results. Based on the dummy parameter screening, 11 model parameters are identified as influential. Therefore, it can be noted that the "dummy parameter approach" can facilitate the parameter screening process and provide guidance for GSA users to define a screening threshold, with only limited additional resources. Key words: Parameter screening, Global sensitivity analysis, Dummy parameter, Variance-based method, Moment-independent method
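A hedged sketch of dummy-parameter screening with a variance-based method, assuming the SALib package's saltelli/sobol interface: an extra parameter the model never reads is appended, and its Sobol' indices estimate the numerical-approximation error floor. The toy model and bounds are illustrative, not the SWAT setup.

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 3,
    "names": ["k", "x0", "dummy"],          # "dummy" is ignored by the model
    "bounds": [[0.1, 1.0], [0.0, 5.0], [0.0, 1.0]],
}

X = saltelli.sample(problem, 1024)
Y = X[:, 0] * np.exp(-X[:, 1])              # toy model; column 2 is unused
Si = sobol.analyze(problem, Y)

# screen: keep parameters whose total-order index exceeds the dummy's
influential = [name for name, st in zip(problem["names"], Si["ST"])
               if st > Si["ST"][2]]
```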
NASA Astrophysics Data System (ADS)
Young, Kenneth C.; Cook, James J. H.; Oduko, Jennifer M.; Bosmans, Hilde
2006-03-01
European Guidelines for quality control in digital mammography specify minimum and achievable standards of image quality in terms of threshold contrast, based on readings of images of the CDMAM test object by human observers. However, this is time-consuming and has large inter-observer error. To overcome these problems a software program (CDCOM) is available to automatically read CDMAM images, but the optimal method of interpreting the output is not defined. This study evaluates methods of determining threshold contrast from the program, and compares these to human readings for a variety of mammography systems. The methods considered are (A) simple thresholding, (B) psychometric curve fitting, (C) smoothing and interpolation, and (D) smoothing and psychometric curve fitting. Each method leads to similar threshold contrasts but with different reproducibility. Method (A) had relatively poor reproducibility, with a standard error in threshold contrast of 18.1 ± 0.7%. This was reduced to 8.4% by using a contrast-detail curve fitting procedure. Method (D) had the best reproducibility, with an error of 6.7%, reducing to 5.1% with curve fitting. A panel of 3 human observers had an error of 4.4%, reduced to 2.9% by curve fitting. All automatic methods led to threshold contrasts that were lower than for humans. The ratio of human to program threshold contrasts varied with detail diameter and was 1.50 ± 0.04 (SEM) at 0.1 mm and 1.82 ± 0.06 at 0.25 mm for method (D). There were good correlations between the threshold contrasts determined by humans and the automated methods.
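An illustrative psychometric fit of the kind used in methods (B) and (D): the fraction of correctly located discs versus log contrast is fitted with a logistic curve, and the threshold contrast is read off at the 62.5% correct level commonly used for CDMAM scoring. The guess rate, fit form and starting values are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def psychometric(log_c, mu, sigma):
    # guess rate 0.25 assumed for the 4-alternative forced-choice geometry
    return 0.25 + 0.75 / (1.0 + np.exp(-(log_c - mu) / sigma))

def threshold_contrast(contrasts, frac_correct, level=0.625):
    popt, _ = curve_fit(psychometric, np.log(contrasts), frac_correct,
                        p0=[np.log(np.median(contrasts)), 0.5])
    mu, sigma = popt
    p = (level - 0.25) / 0.75          # invert the logistic at `level`
    return np.exp(mu + sigma * np.log(p / (1 - p)))
```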
A wavelet and least square filter based spatial-spectral denoising approach of hyperspectral imagery
NASA Astrophysics Data System (ADS)
Li, Ting; Chen, Xiao-Mei; Chen, Gang; Xue, Bo; Ni, Guo-Qiang
2009-11-01
Noise reduction is a crucial step in hyperspectral imagery pre-processing. Owing to sensor characteristics, the noise of hyperspectral imagery appears in both the spatial and the spectral domain. However, most prevailing denoising techniques process the imagery in only one specific domain and thus do not exploit the multi-domain nature of hyperspectral imagery. In this paper, a new spatial-spectral noise reduction algorithm is proposed, based on wavelet analysis and least squares filtering techniques. First, in the spatial domain, a new stationary wavelet shrinking algorithm with an improved threshold function is utilized to adjust the noise level band-by-band. This new algorithm uses BayesShrink for threshold estimation, and amends the traditional soft-threshold function by adding shape-tuning parameters. Compared with the soft or hard threshold functions, the improved one, which is first-order differentiable and has a smooth transitional region between noise and signal, preserves more image edge detail and weakens pseudo-Gibbs artifacts. Then, in the spectral domain, a cubic Savitzky-Golay filter based on the least squares method is used to remove spectral noise and any artificial noise that may have been introduced during the spatial denoising. With the filter window width selected appropriately according to prior knowledge, this algorithm smooths the spectral curve effectively. The performance of the new algorithm is evaluated on a set of Hyperion images acquired in 2007. The results show that the new spatial-spectral denoising algorithm provides a more significant signal-to-noise-ratio improvement than traditional spatial-only or spectral-only methods, while better preserving local spectral absorption features.
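An illustrative smooth shrinkage curve of the kind described: it matches soft thresholding for large coefficients but is first-order differentiable through the transition region. The specific functional form is an assumption for illustration, not the authors' exact threshold function.

```python
import numpy as np

def soft(w, lam):
    """Classical soft threshold: non-differentiable at |w| = lam."""
    return np.sign(w) * np.maximum(np.abs(w) - lam, 0.0)

def smooth_shrink(w, lam):
    """First-order differentiable shrinkage: ~0 near the origin and
    approaching w - lam*sign(w) for |w| >> lam."""
    return w - lam * np.tanh(w / lam)

w = np.linspace(-5, 5, 11)
print(soft(w, 1.0))
print(smooth_shrink(w, 1.0))   # close to soft() away from the transition
```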
Gas composition sensing using carbon nanotube arrays
NASA Technical Reports Server (NTRS)
Li, Jing (Inventor); Meyyappan, Meyya (Inventor)
2008-01-01
A method and system for estimating one, two or more unknown components in a gas. A first array of spaced-apart carbon nanotubes ("CNTs") is connected to a variable pulse voltage source at a first end of at least one of the CNTs. A second end of the at least one CNT is provided with a relatively sharp tip and is located at a distance within a selected range of a constant voltage plate. A sequence of voltage pulses {V(t_n)}_n at times t = t_n (n = 1, …, N1; N1 ≥ 3) is applied to the at least one CNT, and a pulse discharge breakdown threshold voltage is estimated for one or more gas components from an analysis of a curve I(t_n) for current or a curve e(t_n) for electric charge transported from the at least one CNT to the constant voltage plate. Each estimated pulse discharge breakdown threshold voltage is compared with known threshold voltages for candidate gas components to estimate whether at least one candidate gas component is present in the gas. The procedure can be repeated at higher pulse voltages to estimate a pulse discharge breakdown threshold voltage for a second component present in the gas.
[Selection of distance thresholds of urban forest landscape connectivity in Shenyang City].
Liu, Chang-fu; Zhou, Bin; He, Xing-yuan; Chen, Wei
2010-10-01
By using the QuickBird remote sensing image interpretation data of urban forests in Shenyang City in 2006, and with the help of a geographic information system, this paper analyzed the landscape patches of the urban forests in the area inside the third ring-road of Shenyang. Based on habitat availability and the dispersal potential of animal and plant species, 8 distance thresholds (50, 100, 200, 400, 600, 800, 1000, and 1200 m) were selected to compute the integral index of connectivity, the probability of connectivity, and the importance value of the landscape patches, and the computed values were used for analyzing and screening the distance thresholds of urban forest landscape connectivity in the City. The results showed that the appropriate distance thresholds of urban forest landscape connectivity in Shenyang City in 2006 ranged from 100 to 400 m, with 200 m being the most appropriate. It is suggested that the distance thresholds should be increased or decreased according to the performability of urban forest landscape connectivity and the different demands for landscape levels.
Voxel classification based airway tree segmentation
NASA Astrophysics Data System (ADS)
Lo, Pechin; de Bruijne, Marleen
2008-03-01
This paper presents a voxel classification based method for segmenting the human airway tree in volumetric computed tomography (CT) images. In contrast to standard methods that use only voxel intensities, our method uses a more complex appearance model based on a set of local image appearance features and Kth nearest neighbor (KNN) classification. The optimal set of features for classification is selected automatically from a large set of features describing the local image structure at several scales. The use of multiple features enables the appearance model to differentiate between airway tree voxels and other voxels of similar intensities in the lung, thus making the segmentation robust to pathologies such as emphysema. The classifier is trained on imperfect segmentations that can easily be obtained using region growing with a manual threshold selection. Experiments show that the proposed method results in a more robust segmentation that can grow into the smaller airway branches without leaking into emphysematous areas, and is able to segment many branches that are not present in the training set.
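The classification step lends itself to a short sketch. The feature dimensions, sample sizes, and labels below are synthetic stand-ins; only the KNN-with-soft-output pattern reflects the approach described above.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical setup: each row is one voxel, columns are multi-scale local
# image features; labels come from an imperfect region-growing segmentation
# (the training strategy described in the abstract). Feature choice is
# illustrative, not the automatically selected set used by the authors.
rng = np.random.default_rng(1)
X_train = rng.normal(size=(5000, 12))
y_train = rng.integers(0, 2, size=5000)          # 1 = airway, 0 = background

knn = KNeighborsClassifier(n_neighbors=15)
knn.fit(X_train, y_train)

# Soft airway probability for new voxels = fraction of airway-labeled neighbors
X_new = rng.normal(size=(100, 12))
p_airway = knn.predict_proba(X_new)[:, 1]
```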
NASA Technical Reports Server (NTRS)
Hirsch, David
2009-01-01
Spacecraft fire safety emphasizes fire prevention, which is achieved primarily through the use of fire-resistant materials. Materials selection for spacecraft is based on conventional flammability acceptance tests, along with prescribed quantity limitations and configuration control for items that do not pass or are questionable. ISO 14624-1 and -2 are the major methods used to evaluate the flammability of polymeric materials intended for use in the habitable environments of spacecraft. The methods are upward flame-propagation tests initiated in static environments using a well-defined igniter flame at the bottom of the sample. The tests are conducted in the most severe flaming combustion environment expected in the spacecraft. The pass/fail test logic of ISO 14624-1 and -2 does not allow a quantitative comparison with reduced-gravity or microgravity test results; their use is therefore limited, and possibilities for in-depth theoretical analyses and realistic estimates of spacecraft fire extinguishment requirements are practically eliminated. To better understand the applicability of laboratory test data to actual spacecraft environments, a modified ISO 14624 protocol has been proposed that, as an alternative to qualifying materials as pass/fail in the worst-expected environments, measures the actual upward flammability limit for the material. A working group established by NASA to provide recommendations for exploration spacecraft internal atmospheres realized the importance of correlating laboratory data with real-life environments and recommended that NASA develop a flammability threshold test method. The working group indicated that for the Constellation Program, flammability threshold information will allow NASA to identify materials with increased flammability risk from oxygen concentration and total pressure changes, minimize potential impacts, and allow for the development of sound requirements for new spacecraft and extravehicular landers and habitats. Furthermore, recent research has shown that current normal-gravity materials flammability tests do not correlate with flammability in ventilated, micro- or reduced-gravity conditions. Currently, materials selection for spacecraft is based on the assumption of commonality between ground flammability test results and spacecraft environments, which does not appear to be valid. Materials flammability threshold data acquired in normal gravity can be correlated with data obtained in microgravity or reduced-gravity experiments, and consequently a more accurate assessment of the margin of safety of the material in the real environment can be made. In addition, the method allows the option of selecting better or best space system materials, as opposed to those considered merely acceptable from a flammability point of view, and a realistic assessment of spacecraft fire extinguishment needs, which could result in significant weight savings. The knowledge afforded by this technique allows for limited extrapolations of flammability behavior to conditions not specifically tested, which could potentially result in significant cost and time savings. The intent of this Technical Specification is to bring to the attention of the international aerospace community the importance of correlating laboratory test data with real-life space systems applications. The method presented is just one of the possibilities that are believed to lead to a better understanding of the applicability of laboratory aerospace materials flammability test data.
International feedback on improving the proposed method, as well as suggestions for correlating other laboratory aerospace test data with real-life applications relevant to space systems, are being sought.
A New Cloud and Aerosol Layer Detection Method Based on Micropulse Lidar Measurements
NASA Astrophysics Data System (ADS)
Wang, Q.; Zhao, C.; Wang, Y.; Li, Z.; Wang, Z.; Liu, D.
2014-12-01
A new algorithm is developed to detect aerosols and clouds based on micropulse lidar (MPL) measurements. In this method, a semi-discretization processing (SDP) technique is first used to inhibit the impact of noise that increases with distance; then a value distribution equalization (VDE) method is introduced to reduce the magnitude of signal variations with distance. Combined with empirical threshold values, clouds and aerosols are detected and separated. The method can detect clouds and aerosols with high accuracy, although the classification of aerosols and clouds is sensitive to the thresholds selected. Compared with the existing Atmospheric Radiation Measurement (ARM) program lidar-based cloud product, the new method detects more high clouds. The algorithm was applied to a year of observations at both the U.S. Southern Great Plains (SGP) site and the Taihu site in China. At SGP, the cloud frequency shows a clear seasonal variation with maximum values in winter and spring, and a bi-modal vertical distribution with maximum frequencies at around 3-6 km and 8-12 km. The annual averaged cloud frequency is about 50%. By contrast, the cloud frequency at Taihu shows no clear seasonal variation, and the maximum frequency is at around 1 km; the annual averaged cloud frequency is about 15% higher than that at SGP.
Garrusi, Behshid; Baneshi, Mohammad Reza
2013-01-01
Background: Many sociocultural variables could affect eating disorders in Asian countries. In Iran, there is little research regarding eating disorders and their contributing factors. The aim of this study is to explore the frequency of eating disorders and their risk factors in an Iranian population. Materials and Methods: About 1204 participants aged between 14 and 55 years were selected. The frequency of eating disorders and the effects of variables such as demographic characteristics, Body Mass Index (BMI), use of media, body dissatisfaction, self-esteem, social comparison, and social pressure for thinness were assessed in individuals with and without eating disorders. Findings: The prevalence of eating disorders was 11.5%, comprising 0.8% anorexia nervosa, 6.2% full-threshold bulimia nervosa, 1.4% sub-threshold anorexia nervosa, and 3.1% sub-threshold binge eating disorder. Symptoms of bulimic syndrome were greater in males. Conclusion: In Iran, eating disorders and related problems are a new issue that should be taken seriously. The identification of these disorders and their contributing factors is necessary for management and for planning preventive programs. PMID:23283053
Influenza surveillance in Europe: establishing epidemic thresholds by the Moving Epidemic Method
Vega, Tomás; Lozano, Jose Eugenio; Meerhoff, Tamara; Snacken, René; Mott, Joshua; Ortiz de Lejarazu, Raul; Nunes, Baltazar
2012-01-01
Please cite this paper as: Vega et al. (2012) Influenza surveillance in Europe: establishing epidemic thresholds by the moving epidemic method. Influenza and Other Respiratory Viruses 7(4), 546–558. Background Timely influenza surveillance is important to monitor influenza epidemics. Objectives (i) To calculate the epidemic threshold for influenza‐like illness (ILI) and acute respiratory infections (ARI) in 19 countries, as well as the thresholds for different levels of intensity. (ii) To evaluate the performance of these thresholds. Methods The moving epidemic method (MEM) has been developed to determine the baseline influenza activity and an epidemic threshold. False alerts, detection lags and timeliness of the detection of epidemics were calculated. The performance was evaluated using a cross‐validation procedure. Results The overall sensitivity of the MEM threshold was 71·8% and the specificity was 95·5%. The median of the timeliness was 1 week (range: 0–4·5). Conclusions The method produced a robust and specific signal to detect influenza epidemics. The good balance between the sensitivity and specificity of the epidemic threshold to detect seasonal epidemics and avoid false alerts has advantages for public health purposes. This method may serve as a standard to define the start of the annual influenza epidemic in countries in Europe. PMID:22897919
Oldenkamp, Rik; Huijbregts, Mark A J; Ragas, Ad M J
2016-05-01
The selection of priority APIs (Active Pharmaceutical Ingredients) can benefit from a spatially explicit approach, since an API might exceed the threshold of environmental concern in one location, while staying below that same threshold in another. However, such a spatially explicit approach is relatively data intensive and subject to parameter uncertainty due to limited data. This raises the question to what extent a spatially explicit approach for the environmental prioritisation of APIs remains worthwhile when accounting for uncertainty in parameter settings. We show here that the inclusion of spatially explicit information enables a more efficient environmental prioritisation of APIs in Europe, compared with a non-spatial EU-wide approach, also under uncertain conditions. In a case study with nine antibiotics, uncertainty distributions of the PAF (Potentially Affected Fraction) of aquatic species were calculated in 100 × 100 km² environmental grid cells throughout Europe, and used for the selection of priority APIs. Two APIs have median PAF values that exceed a threshold PAF of 1% in at least one environmental grid cell in Europe, i.e., oxytetracycline and erythromycin. At a tenfold lower threshold PAF (i.e., 0.1%), two additional APIs would be selected, i.e., cefuroxime and ciprofloxacin. However, in 94% of the environmental grid cells in Europe, no APIs exceed either of the thresholds. This illustrates the advantage of following a location-specific approach in the prioritisation of APIs. This added value remains when accounting for uncertainty in parameter settings, i.e., if the 95th percentile of the PAF instead of its median value is compared with the threshold. In 96% of the environmental grid cells, the location-specific approach still enables a reduction of the selection of priority APIs of at least 50%, compared with an EU-wide prioritisation. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Bai, F.; Gagar, D.; Foote, P.; Zhao, Y.
2017-02-01
Acoustic Emission (AE) monitoring can be used to detect the presence of damage as well as determine its location in Structural Health Monitoring (SHM) applications. Information on the time difference of the signal generated by the damage event arriving at different sensors in an array is essential for performing localisation. Currently, this is determined using a fixed threshold, which is particularly prone to errors when not set to an optimal value. This paper presents three new methods for determining the onset of AE signals without the need for a predetermined threshold. The performance of the techniques is evaluated using AE signals generated during fatigue crack growth and compared to the established Akaike Information Criterion (AIC) and fixed-threshold methods. It was found that the 1D location accuracy of the new methods was within the range of <1% to 7.1% of the monitored region, compared to 2.7% for the AIC method and a range of 1.8-9.4% for the conventional fixed-threshold method at different threshold levels.
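The abstract's baseline, the AIC onset picker, is simple enough to sketch. The formulation below is the widely used Maeda form; the synthetic waveform is illustrative.

```python
import numpy as np

def aic_onset(x):
    """Akaike Information Criterion onset picker (Maeda's formulation):
    AIC(k) = k*log(var(x[:k])) + (N-k-1)*log(var(x[k:]));
    the global minimum marks the most likely signal onset."""
    N = len(x)
    aic = np.full(N, np.inf)
    for k in range(2, N - 2):
        v1, v2 = np.var(x[:k]), np.var(x[k:])
        if v1 > 0 and v2 > 0:
            aic[k] = k * np.log(v1) + (N - k - 1) * np.log(v2)
    return int(np.argmin(aic))

# Synthetic AE burst: noise followed by a decaying oscillation at sample 800
rng = np.random.default_rng(2)
t = np.arange(2000)
sig = 0.05 * rng.normal(size=2000)
sig[800:] += np.exp(-(t[800:] - 800) / 200.0) * np.sin(0.3 * t[800:])
print(aic_onset(sig))   # should land near sample 800
```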
Selection Strategies for Social Influence in the Threshold Model
NASA Astrophysics Data System (ADS)
Karampourniotis, Panagiotis; Szymanski, Boleslaw; Korniss, Gyorgy
The ubiquity of online social networks makes the study of social influence extremely significant for its applications to marketing, politics and security. Maximizing the spread of influence by strategically selecting nodes as initiators of a new opinion or trend is a challenging problem. We study the performance of various strategies for selection of large fractions of initiators on a classical social influence model, the Threshold model (TM). Under the TM, a node adopts a new opinion only when the fraction of its first neighbors possessing that opinion exceeds a pre-assigned threshold. The strategies we study are of two kinds: strategies based solely on the initial network structure (Degree-rank, Dominating Sets, PageRank etc.) and strategies that take into account the change of the states of the nodes during the evolution of the cascade, e.g. the greedy algorithm. We find that the performance of these strategies depends largely on both the network structure properties, e.g. the assortativity, and the distribution of the thresholds assigned to the nodes. We conclude that the optimal strategy needs to combine the network specifics and the model specific parameters to identify the most influential spreaders. Supported in part by ARL NS-CTA, ARO, and ONR.
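A minimal simulation of the Threshold Model with degree-rank initiator selection, one of the structural strategies mentioned above, might look as follows; graph size, threshold value, and initiator fraction are illustrative choices, not the study's parameters.

```python
import networkx as nx

def threshold_cascade(G, thresholds, initiators):
    """Run the Threshold Model to completion: a node activates once the
    fraction of its active first neighbors exceeds its threshold."""
    active = set(initiators)
    changed = True
    while changed:
        changed = False
        for n in G.nodes():
            if n in active:
                continue
            nbrs = list(G.neighbors(n))
            if nbrs and sum(v in active for v in nbrs) / len(nbrs) > thresholds[n]:
                active.add(n)
                changed = True
    return active

# Degree-rank initiator selection on an ER graph with uniform threshold 0.4
G = nx.gnp_random_graph(1000, 0.01, seed=3)
thr = {n: 0.4 for n in G.nodes()}
by_degree = sorted(G.nodes(), key=G.degree, reverse=True)
cascade = threshold_cascade(G, thr, by_degree[:100])
print(len(cascade) / G.number_of_nodes())   # final cascade size
```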
NASA Astrophysics Data System (ADS)
Sosnowski, M.; Eager, G. S., Jr.
1983-06-01
Threshold voltages of oil-impregnated paper-insulated cables are investigated. Experimental work was done on model cables specially manufactured for this project. The cables were impregnated with mineral and with synthetic oils. Standard impulse breakdown voltage tests and impulse voltage breakdown tests with dc prestressing were performed at room temperature and at 100°C. The most important result is the finding of a very high threshold voltage stress level for oil-impregnated paper-insulated cables. This threshold voltage is approximately 1.5 times higher than that of crosslinked polyethylene insulated cables.
VUV photodynamics and chiral asymmetry in the photoionization of gas phase alanine enantiomers.
Tia, Maurice; Cunha de Miranda, Barbara; Daly, Steven; Gaie-Levrel, François; Garcia, Gustavo A; Nahon, Laurent; Powis, Ivan
2014-04-17
The valence shell photoionization of the simplest proteinaceous chiral amino acid, alanine, is investigated over the vacuum ultraviolet region from its ionization threshold up to 18 eV. Tunable and variable polarization synchrotron radiation was coupled to a double imaging photoelectron/photoion coincidence (i²PEPICO) spectrometer to produce mass-selected threshold photoelectron spectra and derive the state-selected fragmentation channels. The photoelectron circular dichroism (PECD), an orbital-sensitive, conformer-dependent chiroptical effect, was also recorded at various photon energies and compared to continuum multiple scattering calculations. Two complementary vaporization methods-aerosol thermodesorption and a resistively heated sample oven coupled to an adiabatic expansion-were applied to promote pure enantiomers of alanine into the gas phase, yielding neutral alanine with different internal energy distributions. A comparison of the photoelectron spectroscopy, fragmentation, and dichroism measured for each of the vaporization methods was rationalized in terms of internal energy and conformer populations and supported by theoretical calculations. The analytical potential of the so-called PECD-PICO detection technique-where the electron spectroscopy and circular dichroism can be obtained as a function of mass and ion translational energy-is underlined and applied to characterize the origin of the various species found in the experimental mass spectra. Finally, the PECD findings are discussed within an astrochemical context, and possible implications regarding the origin of biomolecular asymmetry are identified.
Derkach, Andriy; Chiang, Theodore; Gong, Jiafen; Addis, Laura; Dobbins, Sara; Tomlinson, Ian; Houlston, Richard; Pal, Deb K.; Strug, Lisa J.
2014-01-01
Motivation: Sufficiently powered case–control studies with next-generation sequence (NGS) data remain prohibitively expensive for many investigators. If feasible, a more efficient strategy would be to include publicly available sequenced controls. However, these studies can be confounded by differences in sequencing platform; alignment, single nucleotide polymorphism and variant calling algorithms; read depth; and selection thresholds. Assuming one can match cases and controls on the basis of ethnicity and other potential confounding factors, and one has access to the aligned reads in both groups, we investigate the effect of systematic differences in read depth and selection threshold when comparing allele frequencies between cases and controls. We propose a novel likelihood-based method, the robust variance score (RVS), that substitutes genotype calls by their expected values given observed sequence data. Results: We show theoretically that the RVS eliminates read depth bias in the estimation of minor allele frequency. We also demonstrate that, using simulated and real NGS data, the RVS method controls Type I error and has comparable power to the ‘gold standard’ analysis with the true underlying genotypes for both common and rare variants. Availability and implementation: An RVS R script and instructions can be found at strug.research.sickkids.ca, and at https://github.com/strug-lab/RVS. Contact: lisa.strug@utoronto.ca Supplementary information: Supplementary data are available at Bioinformatics online. PMID:24733292
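The core substitution, replacing genotype calls by their expected values given the sequence data, can be sketched generically. The Hardy-Weinberg prior and the toy likelihoods below are assumptions, and the robust variance step of the actual RVS is not shown.

```python
import numpy as np

def expected_genotype(geno_lik, maf):
    """Posterior-mean genotype dosage given genotype likelihoods.

    geno_lik: (n_samples, 3) array of P(reads | G = 0, 1, 2 alt alleles),
    e.g. recovered from VCF PL fields. The prior assumes Hardy-Weinberg
    equilibrium at minor-allele frequency `maf`. This is only the generic
    posterior-mean construction; the published RVS adds a robust variance
    estimate on top of such dosages.
    """
    prior = np.array([(1 - maf) ** 2, 2 * maf * (1 - maf), maf ** 2])
    post = geno_lik * prior                      # unnormalized posterior
    post /= post.sum(axis=1, keepdims=True)
    return post @ np.array([0.0, 1.0, 2.0])     # E[G | data]

# Deep-coverage sample (confident het) vs. shallow sample (uncertain call)
gl = np.array([[1e-6, 1.0, 1e-6],
               [0.30, 0.50, 0.20]])
print(expected_genotype(gl, maf=0.2))
```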
Fuzzy connected object definition in images with respect to co-objects
NASA Astrophysics Data System (ADS)
Udupa, Jayaram K.; Saha, Punam K.; Lotufo, Roberto A.
1999-05-01
Tangible solutions to practical image segmentation are vital to ensure progress in many applications of medical imaging. Toward this goal, we previously proposed a theory and algorithms for fuzzy connected object definition in n-dimensional images. Their effectiveness has been demonstrated in several applications including multiple sclerosis lesion detection/delineation, MR angiography, and craniofacial imaging. The purpose of this work is to extend the earlier theory and algorithms to fuzzy connected object definition that considers all relevant objects in the image simultaneously. In the previous theory, delineation of the final object from the fuzzy connectivity scene required the selection of a threshold specifying the weakest 'hanging-togetherness' of image elements relative to each other in the object. Selecting such a threshold is not trivial and has been an active research area. In the proposed method of relative fuzzy connectivity, instead of defining an object on its own based on the strength of connectedness, all co-objects of importance present in the image are also considered, and the objects are allowed to compete among themselves for image elements as their members. In this competition, every pair of elements in the image has a strength of connectedness in each object; the object in which this strength is highest claims membership of the elements. This approach to fuzzy object definition using a relative strength of connectedness eliminates the need for the threshold of strength of connectedness that was part of the previous definition. It also seems more natural, since it relies on the fact that an object gets defined in an image by the presence of the other objects that coexist in the image. All specified objects are defined simultaneously in this approach. The concept of iterative relative fuzzy connectivity has also been introduced. Robustness of relative fuzzy objects with respect to the selection of reference image elements has been established. The effectiveness of the proposed method is demonstrated using a patient's 3D contrast-enhanced MR angiogram and a 2D phantom scene.
Subsurface characterization with localized ensemble Kalman filter employing adaptive thresholding
NASA Astrophysics Data System (ADS)
Delijani, Ebrahim Biniaz; Pishvaie, Mahmoud Reza; Boozarjomehry, Ramin Bozorgmehry
2014-07-01
Ensemble Kalman filter (EnKF), a Monte Carlo sequential data assimilation method, has emerged as a promising tool for subsurface media characterization during the past decade. Due to the high computational cost of large ensembles, EnKF is limited to small ensemble sets in practice. This results in the appearance of spurious correlations in the covariance structure, leading to incorrect updates or probable divergence of the updated realizations. In this paper, a universal/adaptive thresholding method is presented to remove and/or mitigate the spurious correlation problem in the forecast covariance matrix. This method is then extended to regularize the Kalman gain directly. Four different thresholding functions have been considered to threshold the forecast covariance and gain matrices: hard, soft, lasso, and Smoothly Clipped Absolute Deviation (SCAD) functions. Three benchmarks are used to evaluate the performance of these methods: a small 1D linear model and two 2D water flooding (petroleum reservoir) cases whose levels of heterogeneity/nonlinearity differ. It should be noted that besides the adaptive thresholding, standard distance-dependent localization and bootstrap Kalman gain are also implemented for comparison purposes. We assessed each setup with different ensemble sets to investigate the sensitivity of each method to ensemble size. The results indicate that thresholding of the forecast covariance yields more reliable performance than thresholding the Kalman gain. Among the thresholding functions, SCAD is more robust for both covariance and gain estimation. Our analyses emphasize that not all assimilation cycles require thresholding, and it should be performed wisely during the early assimilation cycles. The proposed adaptive thresholding scheme outperforms the other methods for subsurface characterization of the underlying benchmarks.
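Of the four thresholding functions, SCAD is the least standard, so a sketch may help. The formula is the usual Fan-Li definition; the toy covariance below is illustrative of the small-ensemble setting, not one of the paper's benchmarks.

```python
import numpy as np

def scad_threshold(z, lam, a=3.7):
    """SCAD thresholding function of Fan & Li (a > 2; 3.7 is customary).

    Soft-thresholds small entries, interpolates linearly in the middle
    zone, and leaves large entries untouched, so strong correlations are
    preserved while weak (likely spurious) ones are shrunk to zero.
    """
    z = np.asarray(z, dtype=float)
    return np.where(
        np.abs(z) <= 2 * lam,
        np.sign(z) * np.maximum(np.abs(z) - lam, 0.0),        # soft zone
        np.where(
            np.abs(z) <= a * lam,
            ((a - 1) * z - np.sign(z) * a * lam) / (a - 2),   # transition zone
            z,                                                 # keep as-is
        ),
    )

# Entry-wise thresholding of a noisy sample forecast covariance
rng = np.random.default_rng(4)
C = np.cov(rng.normal(size=(200, 30)))   # 200-dim state, 30 members -> noisy
C_thr = scad_threshold(C, lam=0.1)
np.fill_diagonal(C_thr, np.diag(C))      # variances are left untouched
```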
NASA Astrophysics Data System (ADS)
Antonov, E. N.; Krotova, L. I.; Minaev, N. V.; Minaeva, S. A.; Mironov, A. V.; Popov, V. K.; Bagratashvili, V. N.
2015-11-01
We report the implementation of a novel scheme for surface-selective laser sintering (SSLS) of polymer particles, based on using water as a sensitizer of laser heating and sintering of the particles, together with laser radiation at a wavelength of 1.94 μm, corresponding to the strong absorption band of water. A method is developed for sintering powders of poly(lactide-co-glycolide), a hydrophobic bioresorbable polymer, after modifying its surface with an aqueous solution of hyaluronic acid. The sintering thresholds for the wetted polymer are 3-4 times lower than those for sintering in air. The presence of water limits the temperature of the heated polymer, preventing its thermal destruction. Polymer matrices with a developed porous structure are obtained. The proposed SSLS method can be applied to produce bioresorbable polymer matrices for tissue engineering.
NASA Astrophysics Data System (ADS)
Oh, Kyonghwan; Kwon, Oh-Kyong
2012-03-01
A threshold-voltage-shift compensation and suppression method for active matrix organic light-emitting diode (AMOLED) displays fabricated using a hydrogenated amorphous silicon thin-film transistor (TFT) backplane is proposed. The proposed method compensates for the threshold voltage variation of TFTs due to different threshold voltage shifts during emission time and extends the lifetime of the AMOLED panel. Measurement results show that the error range of emission current is from -1.1 to +1.7% when the threshold voltage of TFTs varies from 1.2 to 3.0 V.
Reliability of the method of levels for determining cutaneous temperature sensitivity
NASA Astrophysics Data System (ADS)
Jakovljević, Miroljub; Mekjavić, Igor B.
2012-09-01
Determination of the thermal thresholds is used clinically for evaluation of peripheral nervous system function. The aim of this study was to evaluate reliability of the method of levels performed with a new, low cost device for determining cutaneous temperature sensitivity. Nineteen male subjects were included in the study. Thermal thresholds were tested on the right side at the volar surface of mid-forearm, lateral surface of mid-upper arm and front area of mid-thigh. Thermal testing was carried out by the method of levels with an initial temperature step of 2°C. Variability of thermal thresholds was expressed by means of the ratio between the second and the first testing, coefficient of variation (CV), coefficient of repeatability (CR), intraclass correlation coefficient (ICC), mean difference between sessions (S1-S2diff), standard error of measurement (SEM) and minimally detectable change (MDC). There were no statistically significant changes between sessions for warm or cold thresholds, or between warm and cold thresholds. Within-subject CVs were acceptable. The CR estimates for warm thresholds ranged from 0.74°C to 1.06°C and from 0.67°C to 1.07°C for cold thresholds. The ICC values for intra-rater reliability ranged from 0.41 to 0.72 for warm thresholds and from 0.67 to 0.84 for cold thresholds. S1-S2diff ranged from -0.15°C to 0.07°C for warm thresholds, and from -0.08°C to 0.07°C for cold thresholds. SEM ranged from 0.26°C to 0.38°C for warm thresholds, and from 0.23°C to 0.38°C for cold thresholds. Estimated MDC values were between 0.60°C and 0.88°C for warm thresholds, and 0.53°C and 0.88°C for cold thresholds. The method of levels for determining cutaneous temperature sensitivity has acceptable reliability.
NASA Astrophysics Data System (ADS)
Tibell, Lena A. E.; Harms, Ute
2017-11-01
Modern evolutionary theory is both a central theory and an integrative framework of the life sciences. This is reflected in the common references to evolution in modern science education curricula and contexts. In fact, evolution is a core idea that is supposed to support biology learning by facilitating the organization of relevant knowledge. In addition, evolution can function as a pivotal link between concepts and highlight similarities in the complexity of biological concepts. However, empirical studies in many countries have for decades identified deficiencies in students' scientific understanding of evolution mainly focusing on natural selection. Clearly, there are major obstacles to learning natural selection, and we argue that to overcome them, it is essential to address explicitly the general abstract concepts that underlie the biological processes, e.g., randomness or probability. Hence, we propose a two-dimensional framework for analyzing and structuring teaching of natural selection. The first—purely biological—dimension embraces the three main principles variation, heredity, and selection structured in nine key concepts that form the core idea of natural selection. The second dimension encompasses four so-called thresholds, i.e., general abstract and/or non-perceptual concepts: randomness, probability, spatial scales, and temporal scales. We claim that both of these dimensions must be continuously considered, in tandem, when teaching evolution in order to allow development of a meaningful understanding of the process. Further, we suggest that making the thresholds tangible with the aid of appropriate kinds of visualizations will facilitate grasping of the threshold concepts, and thus, help learners to overcome the difficulties in understanding the central theory of life.
Effects of sound intensity on temporal properties of inhibition in the pallid bat auditory cortex.
Razak, Khaleel A
2013-01-01
Auditory neurons in bats that use frequency modulated (FM) sweeps for echolocation are selective for the behaviorally-relevant rates and direction of frequency change. Such selectivity arises through spectrotemporal interactions between excitatory and inhibitory components of the receptive field. In the pallid bat auditory system, the relationship between FM sweep direction/rate selectivity and the spectral and temporal properties of sideband inhibition has been characterized. Of note is the temporal asymmetry in sideband inhibition, with low-frequency inhibition (LFI) exhibiting faster arrival times than high-frequency inhibition (HFI). Using the two-tone inhibition over time (TTI) stimulus paradigm, this study investigated the interactions between two sound parameters in shaping sideband inhibition: intensity and time. Specifically, the impact of changing the relative intensities of the excitatory and inhibitory tones on the arrival time of inhibition was studied. Single-unit data from the auditory cortex of pentobarbital-anesthetized bats show that the threshold for LFI is on average ~8 dB lower than for HFI. For equal-intensity tones near threshold, LFI is stronger than HFI. When the inhibitory tone intensity is increased further from threshold, the strength asymmetry decreases. The temporal asymmetry in LFI vs. HFI arrival time is strongest when the excitatory and inhibitory tones are of equal intensity or if the excitatory tone is louder. As the inhibitory tone intensity is increased, the temporal asymmetry decreases, suggesting that the relative magnitude of excitatory and inhibitory inputs shapes the arrival time of inhibition and FM sweep rate and direction selectivity. Given that most FM bats use downward sweeps as echolocation calls, a similar asymmetry in the threshold and strength of LFI vs. HFI may be a general adaptation to enhance direction selectivity while maintaining sweep-rate-selective responses to downward sweeps.
An experimental sample of the field gamma-spectrometer based on solid state Si-photomultiplier
NASA Astrophysics Data System (ADS)
Denisov, Viktor; Korotaev, Valery; Titov, Aleksandr; Blokhina, Anastasia; Kleshchenok, Maksim
2017-05-01
The design of optical-electronic devices and systems involves selecting technical solutions that are optimal, according to certain criteria, under the given initial requirements and conditions. The defining characteristic of an optical-electronic system (OES) for any purpose, and its most important capability, is threshold detection; the required functional quality of the device or system is achieved on the basis of this property. Therefore, the selection criteria and optimization methods must be subordinated to the goal of better detectability. The problem generally reduces to the optimal selection of the expected (predetermined) signals under the predetermined observation conditions. Thus, the main purpose of optimizing the system for detectability is the choice of circuits and components that provide the most effective selection of a target.
NASA Astrophysics Data System (ADS)
Durocher, M.; Mostofi Zadeh, S.; Burn, D. H.; Ashkar, F.
2017-12-01
Floods are one of the most costly hazards, and frequency analysis of river discharges is an important part of the tools at our disposal to evaluate their inherent risks and to provide an adequate response. In comparison to the common examination of annual streamflow maxima, peaks over threshold (POT) is an interesting alternative that makes better use of the available information by including more than one flood event per year (on average). However, a major challenge is the selection of a satisfactory threshold above which peaks are assumed to respect certain conditions necessary for an adequate estimation of the risk. Additionally, studies have shown that POT is also a valuable approach for investigating the evolution of flood regimes in the context of climate change. Recently, automatic procedures for threshold selection were suggested to guide this important choice, which otherwise relies on graphical tools and expert judgment. Furthermore, having an objective automatic procedure allows the analysis to be quickly repeated on a large number of samples, which is useful in the context of large databases or for uncertainty analysis based on resampling. This study investigates the impact of such procedures in a case study including many sites across Canada. A simulation study is conducted to evaluate the bias and predictive power of the automatic procedures in similar conditions, as well as the power of derived nonstationarity tests. The results are also evaluated in the light of expert judgments established in a previous study. Ultimately, this study provides a thorough examination of the considerations that need to be addressed when conducting POT analysis using automatic threshold selection.
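One common heuristic behind such automatic procedures, scanning candidate thresholds for stability of the fitted Generalized Pareto shape parameter, can be sketched as follows; it is a generic illustration, not necessarily the specific procedure evaluated in this study.

```python
import numpy as np
from scipy import stats

def pot_threshold_scan(flows, quantiles=np.arange(0.80, 0.99, 0.01)):
    """Fit a Generalized Pareto distribution to exceedances over each
    candidate threshold; a region where the shape estimate is roughly
    constant is a common (heuristic) basis for automatic selection."""
    rows = []
    for q in quantiles:
        u = np.quantile(flows, q)
        exc = flows[flows > u] - u
        if len(exc) < 30:                 # too few peaks to fit reliably
            continue
        shape, _, scale = stats.genpareto.fit(exc, floc=0.0)
        rows.append((q, u, len(exc), shape, scale))
    return rows

# Synthetic "daily flows" drawn from a GPD tail, for illustration only
daily_flows = stats.genpareto.rvs(0.1, loc=0, scale=50, size=20000,
                                  random_state=5)
for q, u, n, shape, scale in pot_threshold_scan(daily_flows):
    print(f"q={q:.2f}  u={u:8.1f}  n={n:5d}  shape={shape:+.3f}")
```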
Cascades in the Threshold Model for varying system sizes
NASA Astrophysics Data System (ADS)
Karampourniotis, Panagiotis; Sreenivasan, Sameet; Szymanski, Boleslaw; Korniss, Gyorgy
2015-03-01
A classical model in opinion dynamics is the Threshold Model (TM), aiming to model the spread of a new opinion driven by peer pressure. Under the TM a node adopts a new opinion only when the fraction of its first neighbors possessing that opinion exceeds a pre-assigned threshold. Cascades in the TM depend on multiple parameters, such as the number and selection strategy of the initially active nodes (initiators), and the threshold distribution of the nodes. For a uniform threshold in the network there is a critical fraction of initiators at which a transition from small to large cascades occurs, which for ER graphs is largely independent of the system size. Here, we study the spread contribution of each newly assigned initiator under the TM for different initiator selection strategies on synthetic graphs of various sizes. We observe that for ER graphs, when large cascades occur, the spread contribution of the initiator added at the transition point is independent of the system size, while the contribution of the rest of the initiators converges to zero at infinite system size. This property is used for the identification of large transitions for various threshold distributions. Supported in part by ARL NS-CTA, ARO, ONR, and DARPA.
24 CFR 1003.301 - Selection process.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 24 Housing and Urban Development 4 2010-04-01 2010-04-01 false Selection process. 1003.301 Section... Application and Selection Process § 1003.301 Selection process. (a) Threshold requirement. An applicant that... establish weights for the selection criteria, will specify the maximum points available, and will describe...
A new edge detection algorithm based on Canny idea
NASA Astrophysics Data System (ADS)
Feng, Yingke; Zhang, Jinmin; Wang, Siming
2017-10-01
The traditional Canny algorithm has poor threshold self-adaptability and is sensitive to noise. In order to overcome these drawbacks, this paper proposes a new edge detection method based on the Canny algorithm. Firstly, median filtering and a Euclidean-distance-based filter are applied to the image; secondly, the Frei-Chen operator is used to calculate the gradient magnitude; finally, Otsu's algorithm is applied to local regions of the gradient magnitude to obtain thresholds, the thresholds are averaged, half of the average is taken as the high threshold, and half of the high threshold as the low threshold. Experimental results show that this new method can effectively suppress noise, preserve edge information, and improve edge detection accuracy.
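A sketch of the threshold rule described in the final step follows, with a plain NumPy Otsu implementation and an assumed block size; the filtering and Frei-Chen stages are omitted.

```python
import numpy as np

def otsu_threshold(values, nbins=256):
    """Classic Otsu: pick the threshold maximizing between-class variance."""
    hist, edges = np.histogram(values, bins=nbins)
    p = hist.astype(float) / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)
    w1 = 1.0 - w0
    cum_mu = np.cumsum(p * centers)
    mu0 = cum_mu / np.maximum(w0, 1e-12)
    mu1 = (cum_mu[-1] - cum_mu) / np.maximum(w1, 1e-12)
    between = w0 * w1 * (mu0 - mu1) ** 2
    return centers[np.argmax(between)]

def canny_thresholds(grad_mag, block=64):
    """Average block-wise Otsu thresholds of the gradient magnitude;
    high threshold = half the average, low threshold = half the high
    (the rule described in the abstract; block size is an assumption)."""
    h, w = grad_mag.shape
    ts = [otsu_threshold(grad_mag[i:i + block, j:j + block].ravel())
          for i in range(0, h, block) for j in range(0, w, block)]
    high = 0.5 * float(np.mean(ts))
    return high, 0.5 * high
```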
NASA Astrophysics Data System (ADS)
Wang, J.; Feng, B.
2016-12-01
Impervious surface area (ISA) has long been studied as an important input into moisture flux models. In general, ISA impedes groundwater recharge, increases stormflow/flood frequency, and alters in-stream and riparian habitats. Urban areas are recognized as among the richest ISA environments, and urban ISA mapping assists flood prevention and urban planning. Hyperspectral imagery (HI), with its ability to detect subtle spectral signatures, is an ideal candidate for urban ISA mapping. Mapping ISA from HI involves endmember (EM) selection. The high degree of spatial and spectral heterogeneity of the urban environment makes this task difficult: a compromise is needed between the degree of automation and the representativeness of the method. This study tested one manual and two semi-automatic EM selection strategies. The manual method and the first semi-automatic method have been widely used in EM selection; the second semi-automatic method is rather new and had previously been proposed only for moderate-spatial-resolution satellite imagery. The manual method visually selected EM candidates from eight landcover types in the original image. The first semi-automatic method chose EM candidates using a threshold over the pixel purity index (PPI) map. The second semi-automatic method used the triangular shape of the HI scatter plot in the n-dimensional visualizer to identify the V-I-S (vegetation-impervious surface-soil) EM candidates: the pixels located at the triangle vertices. The initial EM candidates from the three methods were further refined by three indexes (EM average RMSE, minimum average spectral angle, and count-based EM selection), generating three spectral libraries that were used to classify the test image with the spectral angle mapper. The overall accuracies are 85% for the manual method, 81% for the PPI method, and 87% for the V-I-S method. The V-I-S EM selection method performed best in this study, demonstrating its value not only for moderate-spatial-resolution satellite images but also for increasingly accessible high-spatial-resolution airborne images. This semi-automatic EM selection method can be adopted for a wide range of remote sensing images and can provide ISA maps for hydrologic analysis.
Chipinda, Itai; Mbiya, Wilbes; Adigun, Risikat Ajibola; Morakinyo, Moshood K.; Law, Brandon F.; Simoyi, Reuben H.; Siegel, Paul D.
2015-01-01
Chemical allergens bind directly, or after metabolic or abiotic activation, to endogenous proteins to become allergenic. Assessment of this initial binding has been suggested as a target for the development of assays to screen chemicals for their allergenic potential. Recently we reported a nitrobenzenethiol (NBT) based method for screening thiol-reactive skin sensitizers; however, amine-selective sensitizers are not detected by this assay. In the present study we describe an amine (pyridoxylamine (PDA)) based kinetic assay to complement the NBT assay for identification of amine-selective and non-selective skin sensitizers. UV-Vis spectrophotometry and fluorescence were used to measure PDA reactivity for 57 chemicals including anhydrides, aldehydes, and quinones, where reaction rates ranged from 116 to 6.2 × 10⁻⁶ M⁻¹ s⁻¹ for extreme to weak sensitizers, respectively. No reactivity towards PDA was observed with the thiol-selective sensitizers, non-sensitizers, and prohaptens. The PDA rate constants correlated significantly with their respective murine local lymph node assay (LLNA) threshold EC3 values (R² = 0.76). The use of PDA serves as a simple, inexpensive amine-based method that shows promise as a preliminary screening tool for electrophilic, amine-selective skin sensitizers. PMID:24333919
DOT National Transportation Integrated Search
2003-01-01
Stainless steel-clad rebar provides an opportunity to significantly increase the Cl⁻ threshold concentration associated with active corrosion initiation compared to plain carbon steel. However, threshold Cl⁻ concentrations for 316L stainless steel-cl...
Kanerva's sparse distributed memory with multiple hamming thresholds
NASA Technical Reports Server (NTRS)
Pohja, Seppo; Kaski, Kimmo
1992-01-01
If the stored input patterns of Kanerva's Sparse Distributed Memory (SDM) are highly correlated, utilization of the storage capacity is very low compared to the case of uniformly distributed random input patterns. We consider a variation of SDM that has a better storage capacity utilization for correlated input patterns. This approach uses a separate selection threshold for each physical storage address or hard location. The selection of the hard locations for reading or writing can be done in parallel of which SDM implementations can benefit.
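The variation is easy to express in code: give each hard location its own Hamming radius. The sketch below is a toy counter-based SDM with illustrative sizes and radii, not the authors' implementation.

```python
import numpy as np

class MultiThresholdSDM:
    """Kanerva-style SDM in which each hard location has its own Hamming
    radius, so that locations in dense regions of correlated inputs can be
    made more selective and storage is spread more evenly (the idea
    described above; all parameter choices here are illustrative)."""

    def __init__(self, n_locations, dim, radii, rng=None):
        rng = rng or np.random.default_rng()
        self.addresses = rng.integers(0, 2, size=(n_locations, dim))
        self.radii = np.asarray(radii)               # one radius per location
        self.counters = np.zeros((n_locations, dim), dtype=int)

    def _selected(self, address):
        dists = (self.addresses != address).sum(axis=1)   # Hamming distances
        return dists <= self.radii                        # per-location test

    def write(self, address, data):
        sel = self._selected(address)
        self.counters[sel] += np.where(data == 1, 1, -1)

    def read(self, address):
        sel = self._selected(address)
        return (self.counters[sel].sum(axis=0) > 0).astype(int)

dim, n_loc = 256, 1000
rng = np.random.default_rng(6)
sdm = MultiThresholdSDM(n_loc, dim, radii=rng.integers(100, 120, n_loc), rng=rng)
pattern = rng.integers(0, 2, dim)
sdm.write(pattern, pattern)
print((sdm.read(pattern) == pattern).mean())   # recall accuracy
```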
Intelligent multi-spectral IR image segmentation
NASA Astrophysics Data System (ADS)
Lu, Thomas; Luong, Andrew; Heim, Stephen; Patel, Maharshi; Chen, Kang; Chao, Tien-Hsin; Chow, Edward; Torres, Gilbert
2017-05-01
This article presents a neural network based multi-spectral image segmentation method. A neural network is trained on the selected features of both the objects and background in the longwave (LW) Infrared (IR) images. Multiple iterations of training are performed until the accuracy of the segmentation reaches satisfactory level. The segmentation boundary of the LW image is used to segment the midwave (MW) and shortwave (SW) IR images. A second neural network detects the local discontinuities and refines the accuracy of the local boundaries. This article compares the neural network based segmentation method to the Wavelet-threshold and Grab-Cut methods. Test results have shown increased accuracy and robustness of this segmentation scheme for multi-spectral IR images.
A novel method for deriving thresholds of toxicological concern for vaccine constituents.
White, Jennifer; Wrzesinski, Claudia; Green, Martin; Johnson, Giffe T; McCluskey, James D; Abritis, Alison; Harbison, Raymond D
2016-05-01
Safety assessment evaluating the presence of impurities, residual materials, and contaminants in vaccines is a focus of current research. Thresholds of toxicological concern (TTCs) are mathematically modeled levels used for assessing the safety of many food and medication constituents. In this study, six algorithms are selected from the open-access ToxTree software program to derive a method for calculating TTCs for vaccine constituents: In Vivo Rodent Micronucleus assay/LD50, Benigni-Bossa/LD50, Cramer Extended/LD50, In Vivo Rodent Micronucleus assay/TDLo, Benigni-Bossa/TDLo, and the Cramer Extended/TDLo. Using an initial dataset (n = 197) taken from INCHEM, RepDose, RTECS, and TOXNET, the chemicals were divided into two families: "positive" - based on the presence of structures associated with adverse outcomes, or "negative" - no such structures or having structures that appear to be protective of health. The final validation indicated that the Benigni-Bossa/LD50 method is the most appropriate for calculating TTCs for vaccine constituents. Final TTCs were designated as 18.06 μg/person and 20.61 μg/person for the Benigni-Bossa/LD50 positive and negative structural families, respectively.
Psychophysics with children: Investigating the effects of attentional lapses on threshold estimates.
Manning, Catherine; Jones, Pete R; Dekker, Tessa M; Pellicano, Elizabeth
2018-03-26
When assessing the perceptual abilities of children, researchers tend to use psychophysical techniques designed for use with adults. However, children's poorer attentiveness might bias the threshold estimates obtained by these methods. Here, we obtained speed discrimination threshold estimates in 6- to 7-year-old children in UK Key Stage 1 (KS1), 7- to 9-year-old children in Key Stage 2 (KS2), and adults using three psychophysical procedures: QUEST, a 1-up 2-down Levitt staircase, and Method of Constant Stimuli (MCS). We estimated inattentiveness using responses to "easy" catch trials. As expected, children had higher threshold estimates and made more errors on catch trials than adults. Lower threshold estimates were obtained from psychometric functions fit to the data in the QUEST condition than the MCS and Levitt staircases, and the threshold estimates obtained when fitting a psychometric function to the QUEST data were also lower than when using the QUEST mode. This suggests that threshold estimates cannot be compared directly across methods. Differences between the procedures did not vary significantly with age group. Simulations indicated that inattentiveness biased threshold estimates particularly when threshold estimates were computed as the QUEST mode or the average of staircase reversals. In contrast, thresholds estimated by post-hoc psychometric function fitting were less biased by attentional lapses. Our results suggest that some psychophysical methods are more robust to attentiveness, which has important implications for assessing the perception of children and clinical groups.
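The bias mechanism can be demonstrated with a small simulation: a 1-up 2-down staircase run against an observer who occasionally lapses. The step-function observer model and all parameters below are illustrative, not the study's stimuli or procedures.

```python
import numpy as np

def run_staircase(true_threshold, lapse_rate, n_trials=80, rng=None):
    """Simulate a 1-up 2-down (Levitt) staircase, which converges near the
    70.7%-correct point, with an observer who lapses on a fraction of
    trials (responding at chance in a 2AFC task)."""
    rng = rng or np.random.default_rng()
    level, step = 8.0, 1.0                      # stimulus in arbitrary units
    run, reversals, last_dir = 0, [], 0
    for _ in range(n_trials):
        if rng.random() < lapse_rate:
            p = 0.5                              # lapse: respond at chance
        else:
            p = 1.0 if level > true_threshold else 0.5   # 2AFC guessing floor
        if rng.random() < p:                     # correct response
            run += 1
            if run == 2:                         # two correct -> step down
                run = 0
                if last_dir == +1:
                    reversals.append(level)
                level, last_dir = level - step, -1
        else:                                    # one wrong -> step up
            run = 0
            if last_dir == -1:
                reversals.append(level)
            level, last_dir = level + step, +1
    return float(np.mean(reversals[-6:]))        # average of last reversals

rng = np.random.default_rng(7)
print(run_staircase(4.0, lapse_rate=0.00, rng=rng))   # near the true threshold
print(run_staircase(4.0, lapse_rate=0.15, rng=rng))   # typically inflated
```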
Fuzzy pulmonary vessel segmentation in contrast enhanced CT data
NASA Astrophysics Data System (ADS)
Kaftan, Jens N.; Kiraly, Atilla P.; Bakai, Annemarie; Das, Marco; Novak, Carol L.; Aach, Til
2008-03-01
Pulmonary vascular tree segmentation has numerous applications in medical imaging and computer-aided diagnosis (CAD), including detection and visualization of pulmonary emboli (PE), improved lung nodule detection, and quantitative vessel analysis. We present a novel approach to pulmonary vessel segmentation based on a fuzzy segmentation concept, combining the strengths of both threshold and seed point based methods. The lungs of the original image are first segmented and a threshold-based approach identifies core vessel components with a high specificity. These components are then used to automatically identify reliable seed points for a fuzzy seed point based segmentation method, namely fuzzy connectedness. The output of the method consists of the probability of each voxel belonging to the vascular tree. Hence, our method provides the possibility to adjust the sensitivity/specificity of the segmentation result a posteriori according to application-specific requirements, through definition of a minimum vessel-probability required to classify a voxel as belonging to the vascular tree. The method has been evaluated on contrast-enhanced thoracic CT scans from clinical PE cases and demonstrates overall promising results. For quantitative validation we compare the segmentation results to randomly selected, semi-automatically segmented sub-volumes and present the resulting receiver operating characteristic (ROC) curves. Although we focus on contrast enhanced chest CT data, the method can be generalized to other regions of the body as well as to different imaging modalities.
Relationship between slow visual processing and reading speed in people with macular degeneration
Cheong, Allen MY; Legge, Gordon E; Lawrence, Mary G; Cheung, Sing-Hang; Ruff, Mary A
2007-01-01
Purpose People with macular degeneration (MD) often read slowly even with adequate magnification to compensate for acuity loss. Oculomotor deficits may affect reading in MD, but cannot fully explain the substantial reduction in reading speed. Central-field loss (CFL) is often a consequence of macular degeneration, necessitating the use of peripheral vision for reading. We hypothesized that slower temporal processing of visual patterns in peripheral vision is a factor contributing to slow reading performance in MD patients. Methods Fifteen subjects with MD, including 12 with CFL, and five age-matched control subjects were recruited. Maximum reading speed and critical print size were measured with RSVP (Rapid Serial Visual Presentation). Temporal processing speed was studied by measuring letter-recognition accuracy for strings of three randomly selected letters centered at fixation for a range of exposure times. Temporal threshold was defined as the exposure time yielding 80% recognition accuracy for the central letter. Results Temporal thresholds for the MD subjects ranged from 159 to 5881 ms, much longer than values for age-matched controls in central vision (13 ms, p<0.01). The mean temporal threshold for the 11 MD subjects who used eccentric fixation (1555.8 ± 1708.4 ms) was much longer than the mean temporal threshold (97.0 ms ± 34.2 ms, p<0.01) for the age-matched controls at 10° in the lower visual field. Individual temporal thresholds accounted for 30% of the variance in reading speed (p<0.05). Conclusion The significant association between increased temporal threshold for letter recognition and reduced reading speed is consistent with the hypothesis that slower visual processing of letter recognition is one of the factors limiting reading speed in MD subjects. PMID:17881032
Santos, Frédéric; Guyomarc'h, Pierre; Bruzek, Jaroslav
2014-12-01
Accuracy of identification tools in forensic anthropology relies primarily upon the variation inherent in the data upon which they are built. Sex determination methods based on craniometrics are widely used and known to be specific to several factors (e.g. sample distribution, population, age, secular trends, measurement technique, etc.). The goal of this study is to discuss the potential variations linked to the statistical treatment of the data. Traditional craniometrics of four samples extracted from documented osteological collections (from Portugal, France, the U.S.A., and Thailand) were used to test three different classification methods: linear discriminant analysis (LDA), logistic regression (LR), and support vector machines (SVM). The Portuguese sample was set as a training model to which the other samples were applied in order to assess the validity and reliability of the different models. The tests were performed using different parameters: some included the selection of the best predictors; some included a strict decision threshold (sex assessed only if the related posterior probability was high, introducing the notion of an indeterminate result); and some used an unbalanced sex ratio. Results indicated that LR tends to perform slightly better than the other techniques and offers a better selection of predictors. Also, the use of a decision threshold (i.e. p>0.95) is essential to ensure acceptable reliability of sex determination methods based on craniometrics. Although the Portuguese, French, and American samples share a similar sexual dimorphism, application of the Western models to the Thai sample (which displayed a lower degree of dimorphism) was unsuccessful. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
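The decision-threshold idea translates directly into code. The sketch below uses scikit-learn logistic regression on synthetic measurements, with the p > 0.95 cutoff and an explicit indeterminate class; the data and the dimorphism signal are stand-ins, not the study's craniometrics.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training set: craniometric measurements (columns) from a
# documented collection, with known sex (1 = male, 0 = female).
rng = np.random.default_rng(8)
n = 400
sex = rng.integers(0, 2, n)
X = rng.normal(size=(n, 6)) + 0.8 * sex[:, None]   # crude dimorphism signal

model = LogisticRegression().fit(X, sex)

def classify_with_threshold(model, X_new, cutoff=0.95):
    """Assign sex only when the posterior probability is high; otherwise
    return 'indeterminate', as advocated in the abstract above."""
    p_male = model.predict_proba(X_new)[:, 1]
    labels = np.where(p_male > cutoff, "male",
                      np.where(p_male < 1 - cutoff, "female", "indeterminate"))
    return labels, p_male

labels, probs = classify_with_threshold(model, X[:10])
print(list(zip(labels, probs.round(3))))
```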
de Kleijn, Jasper L; van Kalmthout, Ludwike W M; van der Vossen, Martijn J B; Vonck, Bernard M D; Topsakal, Vedat; Bruijnzeel, Hanneke
2018-05-24
Although current guidelines recommend cochlear implantation only for children with profound hearing impairment (HI) (>90 decibel [dB] hearing level [HL]), studies show that children with severe hearing impairment (>70-90 dB HL) could also benefit from cochlear implantation. To perform a systematic review to identify audiologic thresholds (in dB HL) that could serve as an audiologic candidacy criterion for pediatric cochlear implantation using 4 domains of speech and language development as independent outcome measures (speech production, speech perception, receptive language, and auditory performance). PubMed and Embase databases were searched up to June 28, 2017, to identify studies comparing speech and language development between children who were profoundly deaf using cochlear implants and children with severe hearing loss using hearing aids, because no studies are available directly comparing children with severe HI in both groups. If cochlear implant users with profound HI score better on speech and language tests than those with severe HI who use hearing aids, this outcome could support adjusting cochlear implantation candidacy criteria to lower audiologic thresholds. Literature search, screening, and article selection were performed using a predefined strategy. Article screening was executed independently by 4 authors in 2 pairs; consensus on article inclusion was reached by discussion between these 4 authors. This study is reported according to the Preferred Reporting Items for Systematic Review and Meta-analysis (PRISMA) statement. Title and abstract screening of 2822 articles resulted in selection of 130 articles for full-text review. Twenty-one studies were selected for critical appraisal, resulting in selection of 10 articles for data extraction. Two studies formulated audiologic thresholds (in dB HLs) at which children could qualify for cochlear implantation: (1) at 4-frequency pure-tone average (PTA) thresholds of 80 dB HL or greater based on speech perception and auditory performance subtests and (2) at PTA thresholds of 88 and 96 dB HL based on a speech perception subtest. In 8 of the 18 outcome measures, children with profound HI using cochlear implants performed similarly to children with severe HI using hearing aids. Better performance of cochlear implant users was shown with a picture-naming test and a speech perception in noise test. Owing to large heterogeneity in study population and selected tests, it was not possible to conduct a meta-analysis. Studies indicate that lower audiologic thresholds (≥80 dB HL) than are advised in current national and manufacturer guidelines would be appropriate as audiologic candidacy criteria for pediatric cochlear implantation.
Band selection method based on spectrum difference in targets of interest in hyperspectral imagery
NASA Astrophysics Data System (ADS)
Zhang, Xiaohan; Yang, Guang; Yang, Yongbo; Huang, Junhua
2016-10-01
While hyperspectral data provide rich spectral information, they contain many bands with high correlation coefficients, causing substantial data redundancy. Reasonable band selection is important for subsequent processing: bands with a large amount of information and low correlation should be selected. On this basis, and according to the needs of target detection applications, the spectral characteristics of the objects of interest are taken into consideration in this paper, and a new method based on spectral differences is proposed. Firstly, according to the spectral differences of the targets of interest, a difference matrix is constructed that represents the difference in spectral reflectance between targets in each band. By setting a threshold, the bands satisfying the condition are retained, constituting a subset of bands. Then, the correlation coefficients between bands are calculated to give a correlation matrix, and the bands are divided into several groups according to the size of the correlation coefficients. Finally, the normalized variance is used to represent the information content of each band, and the bands are sorted by normalized variance. Given the required number of bands, the optimal band combination is obtained through these three steps. This method retains the greatest degree of difference between the targets of interest and is easy to automate on a computer. In addition, a false-color image synthesis experiment is carried out using the bands selected by this method as well as by three other methods to show the performance of the proposed method.
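A rough sketch of the three steps follows, with illustrative thresholds and a greedy correlation rule standing in for the grouping step described in the abstract.

```python
import numpy as np

def select_bands(cube, target_spectra, n_bands, diff_thresh, corr_thresh=0.9):
    """Band selection sketched from the abstract: (1) keep bands where the
    reflectance difference between target classes is large; (2) compute
    inter-band correlations; (3) rank by normalized variance, greedily
    skipping bands too correlated with those already chosen. All threshold
    values are illustrative."""
    # Step 1: pairwise spectral differences between targets, per band
    diffs = np.abs(target_spectra[:, None, :] - target_spectra[None, :, :])
    keep = np.where(diffs.max(axis=(0, 1)) > diff_thresh)[0]

    # Step 2: correlation matrix of the retained bands (pixels x bands)
    flat = cube.reshape(-1, cube.shape[-1])[:, keep]
    corr = np.corrcoef(flat, rowvar=False)

    # Step 3: normalized variance as an information-content proxy
    nvar = flat.var(axis=0) / (flat.mean(axis=0) ** 2 + 1e-12)

    order = np.argsort(nvar)[::-1]
    chosen = []
    for i in order:
        if all(abs(corr[i, j]) < corr_thresh for j in chosen):
            chosen.append(i)
        if len(chosen) == n_bands:
            break
    return keep[chosen]

rng = np.random.default_rng(11)
cube = rng.normal(size=(40, 40, 120))       # toy hyperspectral cube
targets = rng.normal(size=(3, 120))         # mean spectra of 3 targets
print(select_bands(cube, targets, n_bands=10, diff_thresh=0.5))
```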
Cost-effectiveness thresholds: methods for setting and examples from around the world.
Santos, André Soares; Guerra-Junior, Augusto Afonso; Godman, Brian; Morton, Alec; Ruas, Cristina Mariano
2018-06-01
Cost-effectiveness thresholds (CETs) are used to judge if an intervention represents sufficient value for money to merit adoption in healthcare systems. The study was motivated by the Brazilian context of HTA, where meetings are being conducted to decide on the definition of a threshold. Areas covered: An electronic search was conducted on Medline (via PubMed), Lilacs (via BVS) and ScienceDirect followed by a complementary search of references of included studies, Google Scholar and conference abstracts. Cost-effectiveness thresholds are usually calculated through three different approaches: the willingness-to-pay, representative of welfare economics; the precedent method, based on the value of an already funded technology; and the opportunity cost method, which links the threshold to the volume of health displaced. An explicit threshold has never been formally adopted in most places. Some countries have defined thresholds, with some flexibility to consider other factors. An implicit threshold could be determined by research of funded cases. Expert commentary: CETs have had an important role as a 'bridging concept' between the world of academic research and the 'real world' of healthcare prioritization. The definition of a cost-effectiveness threshold is paramount for the construction of a transparent and efficient Health Technology Assessment system.
A study of the threshold method utilizing raingage data
NASA Technical Reports Server (NTRS)
Short, David A.; Wolff, David B.; Rosenfeld, Daniel; Atlas, David
1993-01-01
The threshold method for estimating area-average rain rate relies on determining the fractional area where the rain rate exceeds a preset level of intensity. Previous studies have shown that the optimal threshold level depends on the climatological rain-rate distribution (RRD). It has also been noted, however, that the climatological RRD may be composed of an aggregate of distributions, one for each of several distinctly different synoptic conditions, each having its own optimal threshold. In this study, the impact of RRD variations on the threshold method is shown in an analysis of 1-min rain-rate data from a network of tipping-bucket gauges in Darwin, Australia. Data are analyzed for two distinct regimes: the premonsoon environment, having isolated intense thunderstorms, and the active monsoon rains, having organized convective cell clusters that generate large areas of stratiform rain. It is found that a threshold of 10 mm/h results in the same threshold coefficient for both regimes, suggesting an alternative definition of the optimal threshold as that which is least sensitive to distribution variations. The observed behavior of the threshold coefficient is well simulated by assuming lognormal distributions with different scale parameters and the same shape parameter.
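The "least sensitive to distribution variations" definition can be illustrated numerically. The sketch below, using scipy, computes the threshold coefficient S(tau) = E[R | rain] / P(R > tau | rain) for two hypothetical lognormal regimes sharing a shape parameter, then picks the tau where the two coefficients agree best; all parameter values are made up for illustration and are not taken from the Darwin data.

```python
import numpy as np
from scipy import stats

def threshold_coefficient(mu, sigma, tau):
    """S(tau) = E[R | rain] / P(R > tau | rain) for a lognormal rain-rate pdf.

    The area-average rain rate is then estimated as S(tau) * F(tau), where
    F(tau) is the observed fraction of the area with rain rate above tau.
    """
    mean = np.exp(mu + sigma**2 / 2)                        # lognormal mean
    exceed = stats.lognorm.sf(tau, s=sigma, scale=np.exp(mu))
    return mean / exceed

# Two hypothetical regimes: same shape (sigma), different scale (mu),
# mimicking the premonsoon vs. monsoon distributions discussed above.
taus = np.linspace(1, 30, 300)
s1 = threshold_coefficient(mu=0.5, sigma=1.2, tau=taus)     # illustrative values
s2 = threshold_coefficient(mu=1.0, sigma=1.2, tau=taus)

# "Optimal" threshold in the sense used above: least sensitive to the
# distribution change, i.e. where the two coefficients agree best.
tau_opt = taus[np.argmin(np.abs(s1 - s2))]
print(f"least-sensitive threshold ~ {tau_opt:.1f} mm/h")
```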
Yin, Xiaoxia; Ng, Brian W-H; He, Jing; Zhang, Yanchun; Abbott, Derek
2014-01-01
In this paper, we demonstrate a comprehensive method for segmenting the retinal vasculature in camera images of the fundus. This is of interest in the area of diagnostics for eye diseases that affect the blood vessels in the eye. In a departure from other state-of-the-art methods, vessels are first pre-grouped together with graph partitioning, using a spectral clustering technique based on morphological features. Local curvature is estimated over the whole image using eigenvalues of the Hessian matrix in order to enhance the vessels, which appear as ridges in images of the retina. The result is combined with a binarized image, obtained using a threshold that maximizes entropy, to extract the retinal vessels from the background. Speckle-type noise is reduced by applying a connectivity constraint on the extracted curvature-based enhanced image. This constraint is varied over the image according to each region's predominant blood vessel size. The resultant image exhibits the central light reflex of retinal arteries and veins, which prevents the segmentation of whole vessels. To address this, the earlier entropy-based binarization technique is repeated on the original image, but crucially, with a different threshold to incorporate the central reflex vessels. The final segmentation is achieved by combining the segmented vessels with and without central light reflex. We carry out our approach on DRIVE and REVIEW, two publicly available collections of retinal images for research purposes. The obtained results are compared with state-of-the-art methods in the literature using metrics such as sensitivity (true positive rate), selectivity (false positive rate) and accuracy rates for the DRIVE images and measured vessel widths for the REVIEW images. Our approach outperforms the methods in the literature. PMID:24781033
Mizumura, Sunao; Nishikawa, Kazuhiro; Murata, Akihiro; Yoshimura, Kosei; Ishii, Nobutomo; Kokubo, Tadashi; Morooka, Miyako; Kajiyama, Akiko; Terahara, Atsuro
2018-05-01
In Japan, the Southampton method for dopamine transporter (DAT) SPECT is widely used to quantitatively evaluate striatal radioactivity. The specific binding ratio (SBR) is the ratio of specific to non-specific binding observed after placing pentagonal striatal voxels of interest (VOIs) as references. Although the method can reduce the partial volume effect, the SBR may fluctuate due to the presence of low-count areas of cerebrospinal fluid (CSF), caused by brain atrophy, in the striatal VOIs. We examined the effect of the exclusion of low-count VOIs on SBR measurement. We retrospectively reviewed DAT imaging of 36 patients with parkinsonian syndromes performed after injection of 123I-FP-CIT. SPECT data were reconstructed using three conditions. We defined the CSF area in each SPECT image after segmenting the brain tissues. A merged image of gray and white matter images was constructed from each patient's magnetic resonance imaging (MRI) to create an idealized brain image that excluded the CSF fraction (MRI-mask method). We calculated the SBR and asymmetry index (AI) with the MRI-mask method for each reconstruction condition. We then calculated the mean and standard deviation (SD) of voxel RI counts in the reference VOI without the striatal VOIs in each image, and determined the SBR by excluding the low-count pixels (threshold method) using five thresholds: mean-0.0SD, mean-0.5SD, mean-1.0SD, mean-1.5SD, and mean-2.0SD. We also calculated the AIs from the SBRs measured using the threshold method. We examined the correlation among the SBRs of the threshold method, between the uncorrected SBRs and the SBRs of the MRI-mask method, and between the uncorrected AIs and the AIs of the MRI-mask method. The intraclass correlation coefficient indicated an extremely high correlation among the SBRs and among the AIs of the MRI-mask and threshold methods at thresholds between mean-2.0SD and mean-1.0SD, regardless of the reconstruction correction. The differences among the SBRs and the AIs of the two methods were smallest at thresholds between mean-2.0SD and mean-1.0SD. The SBR calculated using the threshold method was highly correlated with the MRI-SBR. These results suggest that the CSF correction of the threshold method is effective for the calculation of idealized SBR and AI values.
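A minimal sketch of the low-count exclusion step is given below, assuming the usual Southampton-style SBR definition (specific = striatal mean minus reference mean, divided by the reference mean) and a conventional asymmetry-index formula; neither detail is quoted from the paper itself.

```python
import numpy as np

def sbr_with_threshold(striatal_counts, reference_counts, k=1.5):
    """Specific binding ratio with the low-count exclusion described above.

    Voxels in the reference region below (mean - k*SD) are treated as CSF
    and excluded before computing non-specific binding; k between 1.0 and
    2.0 corresponds to the mean-1.0SD ... mean-2.0SD thresholds tested.
    A sketch, not the authors' exact implementation.
    """
    ref = np.asarray(reference_counts, dtype=float)
    cutoff = ref.mean() - k * ref.std()
    ref_kept = ref[ref >= cutoff]                  # exclude low-count (CSF) voxels
    nonspecific = ref_kept.mean()
    specific = np.asarray(striatal_counts, dtype=float).mean() - nonspecific
    return specific / nonspecific                  # SBR

def asymmetry_index(sbr_left, sbr_right):
    """A common asymmetry-index definition (assumed, not from the paper)."""
    return abs(sbr_left - sbr_right) / ((sbr_left + sbr_right) / 2) * 100
```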
NASA Astrophysics Data System (ADS)
Choi, Woo Young; Woo, Dong-Soo; Choi, Byung Yong; Lee, Jong Duk; Park, Byung-Gook
2004-04-01
We propose a stable extraction algorithm for the threshold voltage using the transconductance change method with an optimized node interval. With this algorithm, noise-free gm2 (= dgm/dVGS) profiles can be extracted within one percent error, which leads to a more physically meaningful threshold voltage calculation by the transconductance change method. The extracted threshold voltage predicts the gate-to-source voltage at which the surface potential is within kT/q of φs = 2φf + VSB. Our algorithm makes the transconductance change method more practical by overcoming its noise problem. This threshold voltage extraction algorithm yields the threshold roll-off behavior of nanoscale metal-oxide-semiconductor field-effect transistors (MOSFETs) accurately and makes it possible to calculate the surface potential φs at any other point on the drain-to-source current (IDS) versus gate-to-source voltage (VGS) curve. It will provide a useful analysis tool in the field of device modeling, simulation and characterization.
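Numerically, the transconductance change method amounts to a double differentiation of the IDS-VGS curve, and the node interval controls how strongly differentiation noise is suppressed. The following sketch shows the idea with simple central differences; the authors' exact optimization of the node interval is not reproduced, and all names are illustrative.

```python
import numpy as np

def gm2_profile(vgs, ids, node_interval=5):
    """Transconductance-change sketch: gm2 = d(gm)/dVGS with a node interval.

    Differentiating measured IDS-VGS data twice amplifies noise; a wider
    node interval (central differences over +/- node_interval points)
    smooths the gm2 profile, in the spirit of the optimization above.
    """
    vgs = np.asarray(vgs, float)
    ids = np.asarray(ids, float)
    h = node_interval
    step = vgs[1] - vgs[0]                          # assume a uniform VGS grid
    gm = np.gradient(ids, step)                     # first derivative: transconductance
    gm2 = (gm[2 * h:] - gm[:-2 * h]) / (2 * h * step)   # wide central difference
    v_mid = vgs[h:-h]
    vth = v_mid[np.argmax(gm2)]                     # threshold voltage at the gm2 peak
    return v_mid, gm2, vth
```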
Threshold Assessment of Gear Diagnostic Tools on Flight and Test Rig Data
NASA Technical Reports Server (NTRS)
Dempsey, Paula J.; Mosher, Marianne; Huff, Edward M.
2003-01-01
A method for defining thresholds for vibration-based algorithms that provides the minimum number of false alarms while maintaining sensitivity to gear damage was developed. This analysis focused on two vibration-based gear damage detection algorithms, FM4 and MSA. The method was developed using vibration data collected during surface fatigue tests performed in a spur gearbox rig. The thresholds were defined based on damage progression during tests with damage. The thresholds' false-alarm rates were then evaluated on spur gear tests without damage. Next, the same thresholds were applied to flight data from an OH-58 helicopter transmission. Results showed that thresholds defined in test rigs can be used to define thresholds in flight that correctly classify the transmission operation as normal.
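For context, FM4 is conventionally defined as the normalized kurtosis of the difference signal; a sketch of that metric follows. The construction of the difference signal itself (removing regular gear-mesh components from the time-synchronous average) and the MSA indicator are not shown here.

```python
import numpy as np

def fm4(difference_signal):
    """FM4 gear-damage metric: normalized kurtosis of the difference signal.

    FM4 is about 3 for a healthy gear (Gaussian-like difference signal) and
    rises as localized damage appears; thresholding FM4 is one of the two
    indicators evaluated above. The difference-signal construction is assumed
    to have been done beforehand.
    """
    d = np.asarray(difference_signal, float)
    d = d - d.mean()
    return len(d) * np.sum(d**4) / np.sum(d**2) ** 2
```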
Prediction of insufficient serum vitamin D status in older women: a validated model.
Merlijn, T; Swart, K M A; Lips, P; Heymans, M W; Sohl, E; Van Schoor, N M; Netelenbos, C J; Elders, P J M
2018-05-28
We developed an externally validated simple prediction model to predict serum 25(OH)D levels < 30, < 40, < 50 and < 60 nmol/L in older women with risk factors for fractures. The benefit of the model decreases when a higher 25(OH)D threshold is chosen. Vitamin D deficiency is associated with increased fracture risk in older persons. General supplementation of all older women with vitamin D could cause medicalization and costs. We developed a clinical model to identify insufficient serum 25-hydroxyvitamin D (25(OH)D) status in older women at risk for fractures. In a sample of 2689 women ≥ 65 years selected from general practices, with at least one risk factor for fractures, a questionnaire was administered and serum 25(OH)D was measured. Multivariable logistic regression models with backward selection were developed to select predictors of insufficient serum 25(OH)D status, using the separate thresholds of 30, 40, 50 and 60 nmol/L. Internal and external model validations were performed. Predictors in the models were as follows: age, BMI, vitamin D supplementation, multivitamin supplementation, calcium supplementation, daily use of margarine, fatty fish ≥ 2×/week, ≥ 1 hour/day outdoors in summer, season of blood sampling, the use of a walking aid and smoking. The AUC was 0.77 for the model using a 30 nmol/L threshold and decreased in the models with higher thresholds, to 0.72 for 60 nmol/L. We demonstrate that the model can help to distinguish patients with or without insufficient serum 25(OH)D levels at thresholds of 30 and 40 nmol/L, but not when a threshold of 50 nmol/L is demanded. This externally validated model can predict the presence of vitamin D insufficiency in women at risk for fractures. The potential clinical benefit of this tool is highly dependent on the chosen 25(OH)D threshold and decreases when a higher threshold is used.
NASA Astrophysics Data System (ADS)
Zhong, Keyuan; Zheng, Fenli; Xu, Ximeng; Qin, Chao
2018-06-01
Different precipitation phases (rain, snow or sleet) differ greatly in their hydrological and erosional processes. Therefore, accurate discrimination of the precipitation phase is highly important when researching hydrologic processes and climate change at high latitudes and in mountainous regions. The objective of this study was to identify suitable temperature thresholds for discriminating the precipitation phase in the Songhua River Basin (SRB), based on 20 years of daily precipitation data collected from 60 meteorological stations located in and around the basin. Two methods, the air temperature method (AT method) and the wet bulb temperature method (WBT method), were used to discriminate the precipitation phase. Thirteen temperature thresholds were used to discriminate snowfall in the SRB. These thresholds included air temperatures from 0 to 5.5 °C at intervals of 0.5 °C and the wet bulb temperature (WBT). Three evaluation indices, the error percentage of discriminated snowfall days (Ep), the relative error of discriminated snowfall (Re) and the determination coefficient (R2), were applied to assess the discrimination accuracy. The results showed that 2.5 °C was the optimum threshold temperature for discriminating snowfall at the scale of the entire basin. Due to differences in the landscape conditions at the different stations, the optimum threshold varied by station. The optimal threshold ranged from 1.5 to 4.0 °C; 19, 17 and 18 stations had optimal thresholds of 2.5 °C, 3.0 °C and 3.5 °C respectively, together accounting for 90% of all stations. Compared with using a single basin-wide temperature threshold to discriminate snowfall, it was more accurate to use the optimum threshold at each station to estimate snowfall in the basin. In addition, snowfall was underestimated when the temperature threshold was the WBT or a value below 2.5 °C, whereas snowfall was overestimated when the temperature threshold exceeded 4.0 °C at most stations. The results of this study provide information for climate change research and hydrological process simulations in the SRB, as well as reference information for discriminating the precipitation phase in other regions.
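A sketch of the threshold discrimination and two of the evaluation indices is below, assuming the common convention that a day is classified as snowfall when the chosen temperature is at or below the threshold; the implementations of Ep and Re are plausible readings of the index names, not the authors' code.

```python
import numpy as np

def discriminate_phase(temp, precip, threshold=2.5):
    """Classify daily precipitation as snow when temperature <= threshold (deg C).

    `temp` may be air temperature (AT method) or wet bulb temperature
    (WBT method); 2.5 deg C is the basin-wide optimum reported above.
    Returns the estimated daily snowfall series.
    """
    temp = np.asarray(temp, float)
    precip = np.asarray(precip, float)
    return np.where(temp <= threshold, precip, 0.0)

def evaluation_indices(snow_est, snow_obs):
    """Two of the indices used above (illustrative implementations)."""
    snow_est = np.asarray(snow_est, float)
    snow_obs = np.asarray(snow_obs, float)
    ep = np.mean((snow_est > 0) != (snow_obs > 0)) * 100           # % misclassified days
    re = (snow_est.sum() - snow_obs.sum()) / snow_obs.sum() * 100  # relative error, %
    return ep, re
```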
DYNAMIC PATTERN RECOGNITION BY MEANS OF THRESHOLD NETS,
A method is expounded for the recognition of visual patterns. A circuit diagram of a device is described which is based on a multilayer threshold ... structure synthesized in accordance with the proposed method. Coded signals received each time an image is displayed are transmitted to the threshold ... circuit which distinguishes the signs, and from there to the layers of threshold resolving elements. The image at each layer is made to correspond
NASA Astrophysics Data System (ADS)
Li, Q.; Wang, Y. L.; Li, H. C.; Zhang, M.; Li, C. Z.; Chen, X.
2017-12-01
Rainfall thresholds play an important role in flash flood warning. A simple and easy method, using the Rational Equation to calculate rainfall thresholds, is proposed in this study. The critical rainfall equation was deduced from the Rational Equation. On the basis of the Manning equation and the results of the Chinese Flash Flood Survey and Evaluation (CFFSE) Project, the critical flow was obtained and the net rainfall was calculated. Three aspects of rainfall losses, i.e. depression storage, vegetation interception, and soil infiltration, were considered. The critical rainfall is the sum of the net rainfall and the rainfall losses. The rainfall threshold was then estimated from the critical rainfall after accounting for the watershed soil moisture. To demonstrate this method, the Zuojiao watershed in Yunnan Province was chosen as the study area. The results showed that the rainfall thresholds calculated by the Rational Equation method were close to the rainfall thresholds obtained from the CFFSE and were consistent with the observed rainfall during flash flood events. Thus the calculated results are reasonable and the method is effective. This study provides a quick and convenient way for grass-roots staff to calculate rainfall thresholds for flash flood warning and offers technical support for rainfall threshold estimation.
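Under the usual metric form of the Rational Equation, Q = 0.278·C·i·A (Q in m3/s, i in mm/h, A in km2), the critical net rainfall intensity can be inverted directly and the losses added back, as sketched below; the loss values and the one-hour duration are placeholders, not figures from the study.

```python
def critical_rainfall(q_critical, runoff_coeff, area_km2,
                      depression_mm=2.0, interception_mm=1.0, infiltration_mm=5.0):
    """Back out a rainfall threshold from the Rational Equation (sketch).

    Q = 0.278 * C * i * A  (Q in m3/s, i in mm/h, A in km2), so the net
    critical intensity is i = Q / (0.278 * C * A). The threshold is the net
    rainfall plus the three loss terms considered above. All loss defaults
    are placeholders, not values from the study.
    """
    net_rain = q_critical / (0.278 * runoff_coeff * area_km2)   # net rainfall, mm/h
    losses = depression_mm + interception_mm + infiltration_mm  # lumped losses, mm
    return net_rain + losses    # critical rainfall for a 1-hour duration (assumed)

# Example: a 40 km2 watershed with critical flow 120 m3/s and C = 0.6
print(critical_rainfall(120.0, 0.6, 40.0))   # ~ 26 mm
```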
A threshold method for immunological correlates of protection
2013-01-01
Background Immunological correlates of protection are biological markers such as disease-specific antibodies which correlate with protection against disease and which are measurable with immunological assays. It is common in vaccine research and in setting immunization policy to rely on threshold values for the correlate, where the accepted threshold differentiates between individuals who are considered to be protected against disease and those who are susceptible. Examples where thresholds are used include the development of a new-generation 13-valent pneumococcal conjugate vaccine, which was required in clinical trials to meet accepted thresholds for the older 7-valent vaccine, and public health decision making on vaccination policy based on long-term maintenance of protective thresholds for Hepatitis A, rubella, measles, Japanese encephalitis and others. Despite widespread use of such thresholds in vaccine policy and research, few statistical approaches have been formally developed which specifically incorporate a threshold parameter in order to estimate the value of the protective threshold from data. Methods We propose a 3-parameter statistical model called the a:b model which incorporates parameters for a threshold and constant but different infection probabilities below and above the threshold, estimated using profile likelihood or least squares methods. Evaluation of the estimated threshold can be performed by a significance test for the existence of a threshold using a modified likelihood ratio test which follows a chi-squared distribution with 3 degrees of freedom, and confidence intervals for the threshold can be obtained by bootstrapping. The model also permits assessment of the relative risk of infection in patients achieving the threshold or not. Goodness-of-fit of the a:b model may be assessed using the Hosmer-Lemeshow approach. The model is applied to 15 datasets from published clinical trials on pertussis, respiratory syncytial virus and varicella. Results Highly significant thresholds with p-values less than 0.01 were found for 13 of the 15 datasets. Considerable variability was seen in the widths of confidence intervals. Relative risks indicated around 70% or better protection in 11 datasets and the relevance of the estimated threshold for implying strong protection. Goodness-of-fit was generally acceptable. Conclusions The a:b model offers a formal statistical method of estimating thresholds differentiating susceptible from protected individuals, which has previously depended on putative statements based on visual inspection of data. PMID:23448322
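The profile-likelihood estimation is straightforward to sketch: for each candidate threshold, the maximum-likelihood estimates of the two constant probabilities are simply the observed infection rates below and above it. The code below illustrates this idea only; it is not the authors' software, and the bootstrap confidence intervals and the modified likelihood-ratio test are omitted.

```python
import numpy as np

def fit_ab_model(titers, infected):
    """Profile-likelihood fit of the a:b threshold model described above.

    Below a candidate threshold the infection probability is a constant `a`,
    above it a constant `b`; each candidate threshold is profiled and the one
    maximizing the binomial log-likelihood is returned.
    """
    titers = np.asarray(titers, float)
    infected = np.asarray(infected, int)

    def loglik(p, y):
        p = min(max(p, 1e-9), 1 - 1e-9)             # guard against log(0)
        return y.sum() * np.log(p) + (len(y) - y.sum()) * np.log(1 - p)

    best = (-np.inf, None, None, None)
    for t in np.unique(titers)[1:]:                 # candidate thresholds
        below, above = infected[titers < t], infected[titers >= t]
        if len(below) == 0 or len(above) == 0:
            continue
        a, b = below.mean(), above.mean()           # MLEs of the two probabilities
        ll = loglik(a, below) + loglik(b, above)
        if ll > best[0]:
            best = (ll, t, a, b)
    return {"threshold": best[1], "a": best[2], "b": best[3], "loglik": best[0]}
```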
Electrical and optical co-stimulation in the deaf white cat
NASA Astrophysics Data System (ADS)
Cao, Zhiping; Xu, Yingyue; Tan, Xiaodong; Suematsu, Naofumi; Robinson, Alan; Richter, Claus-Peter
2018-02-01
Spatial selectivity of neural stimulation with photons, such as infrared neural stimulation (INS), is higher than the selectivity obtained with electrical stimulation. To obtain more independent channels for stimulation in neural prostheses, INS may be implemented to better restore the fidelity of the damaged neural system. However, irradiation with infrared light also bears the risk of heat accumulation in the target tissue with subsequent neural damage. Lowering the threshold for stimulation could reduce the amount of heat delivered to the tissue and the risk of subsequent tissue damage. It has been shown in the rat sciatic nerve that simultaneous irradiation with infrared light and the delivery of biphasic sub-threshold electrical pulses can reduce the threshold for INS [1]. In this study, deaf white cats were used to test whether opto-electrical co-stimulation can reduce the stimulation threshold for INS in the auditory system as well. The cochleae of the deaf white cats have largely reduced spiral ganglion neuron counts and significant degeneration of the organ of Corti, and do not respond to acoustic stimuli. Combined electrical and optical stimulation was used to demonstrate that simultaneous stimulation with infrared light and biphasic electrical pulses can reduce the threshold for stimulation.
Physiology-Based Modeling May Predict Surgical Treatment Outcome for Obstructive Sleep Apnea
Li, Yanru; Ye, Jingying; Han, Demin; Cao, Xin; Ding, Xiu; Zhang, Yuhuan; Xu, Wen; Orr, Jeremy; Jen, Rachel; Sands, Scott; Malhotra, Atul; Owens, Robert
2017-01-01
Study Objectives: To test whether the integration of both anatomical and nonanatomical parameters (ventilatory control, arousal threshold, muscle responsiveness) in a physiology-based model will improve the ability to predict outcomes after upper airway surgery for obstructive sleep apnea (OSA). Methods: In 31 patients who underwent upper airway surgery for OSA, loop gain and arousal threshold were calculated from preoperative polysomnography (PSG). Three models were compared: (1) a multiple regression based on an extensive list of PSG parameters alone; (2) a multivariate regression using PSG parameters plus PSG-derived estimates of loop gain, arousal threshold, and other trait surrogates; (3) a physiological model incorporating selected variables as surrogates of anatomical and nonanatomical traits important for OSA pathogenesis. Results: Although preoperative loop gain was positively correlated with postoperative apnea-hypopnea index (AHI) (P = .008) and arousal threshold was negatively correlated (P = .011), in both model 1 and 2, the only significant variable was preoperative AHI, which explained 42% of the variance in postoperative AHI. In contrast, the physiological model (model 3), which included AHIREM (anatomy term), fraction of events that were hypopnea (arousal term), the ratio of AHIREM and AHINREM (muscle responsiveness term), loop gain, and central/mixed apnea index (control of breathing terms), was able to explain 61% of the variance in postoperative AHI. Conclusions: Although loop gain and arousal threshold are associated with residual AHI after surgery, only preoperative AHI was predictive using multivariate regression modeling. Instead, incorporating selected surrogates of physiological traits on the basis of OSA pathophysiology created a model that has more association with actual residual AHI. Commentary: A commentary on this article appears in this issue on page 1023. Clinical Trial Registration: ClinicalTrials.Gov; Title: The Impact of Sleep Apnea Treatment on Physiology Traits in Chinese Patients With Obstructive Sleep Apnea; Identifier: NCT02696629; URL: https://clinicaltrials.gov/show/NCT02696629 Citation: Li Y, Ye J, Han D, Cao X, Ding X, Zhang Y, Xu W, Orr J, Jen R, Sands S, Malhotra A, Owens R. Physiology-based modeling may predict surgical treatment outcome for obstructive sleep apnea. J Clin Sleep Med. 2017;13(9):1029–1037. PMID:28818154
Lesmes, Luis A.; Lu, Zhong-Lin; Baek, Jongsoo; Tran, Nina; Dosher, Barbara A.; Albright, Thomas D.
2015-01-01
Motivated by Signal Detection Theory (SDT), we developed a family of novel adaptive methods that estimate the sensitivity threshold—the signal intensity corresponding to a pre-defined sensitivity level (d′ = 1)—in Yes-No (YN) and Forced-Choice (FC) detection tasks. Rather than focus stimulus sampling to estimate a single level of %Yes or %Correct, the current methods sample psychometric functions more broadly, to concurrently estimate sensitivity and decision factors, and thereby estimate thresholds that are independent of decision confounds. Developed for four tasks—(1) simple YN detection, (2) cued YN detection, which cues the observer's response state before each trial, (3) rated YN detection, which incorporates a Not Sure response, and (4) FC detection—the qYN and qFC methods yield sensitivity thresholds that are independent of the task's decision structure (YN or FC) and/or the observer's subjective response state. Results from simulation and psychophysics suggest that 25 trials (and sometimes fewer) are sufficient to estimate YN thresholds with reasonable precision (s.d. = 0.10–0.15 decimal log units), but more trials are needed for FC thresholds. When the same subjects were tested across tasks of simple, cued, rated, and FC detection, adaptive threshold estimates exhibited excellent agreement with the method of constant stimuli (MCS), and with each other. These YN adaptive methods deliver criterion-free thresholds that have previously been exclusive to FC methods. PMID:26300798
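As a deliberately simplified illustration of the underlying SDT computation (not the adaptive qYN/qFC procedures themselves), the sketch below converts Yes-No hit and false-alarm rates to d′ and interpolates the intensity at which d′ reaches the target of 1; all data values are made up.

```python
import numpy as np
from scipy.stats import norm

def yn_sensitivity_threshold(intensities, hit_rates, false_alarm_rate, d_target=1.0):
    """Estimate the intensity at which d' reaches d_target in a Yes-No task.

    d' = z(hit rate) - z(false-alarm rate); assuming d' increases with
    intensity and grows roughly linearly with log intensity, the threshold
    is found by interpolation. A stand-in for the adaptive machinery above.
    """
    z = norm.ppf
    d_prime = (z(np.clip(hit_rates, 0.01, 0.99))
               - z(np.clip(false_alarm_rate, 0.01, 0.99)))
    # interpolate the log-intensity at which d' = d_target
    return 10 ** np.interp(d_target, d_prime, np.log10(intensities))

# Illustrative use with made-up data:
print(yn_sensitivity_threshold([0.02, 0.04, 0.08, 0.16],
                               [0.20, 0.45, 0.80, 0.97], 0.10))
```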
NASA Astrophysics Data System (ADS)
Hori, Y.; Cheng, V. Y. S.; Gough, W. A.
2017-12-01
A network of winter roads in northern Canada connects a number of remote First Nations communities to all-season roads and rails. The extent of the winter road networks depends on geographic features, socio-economic activities, and the number of remote First Nations communities, and so differs among the provinces. The most extensive winter road networks south of the 60th parallel are located in Ontario and Manitoba, serving 32 and 18 communities respectively. In recent years, a warmer climate has resulted in a shorter winter road season and an increase in unreliable road conditions, thus limiting access among remote communities. This study focused on examining future freezing degree-day (FDD) accumulations during the winter road season at selected locations throughout Ontario's Far North and northern Manitoba, using recent climate model projections from multi-model ensembles of General Circulation Models (GCMs) under the Representative Concentration Pathway (RCP) scenarios. First, the non-parametric Mann-Kendall correlation test and the Theil-Sen method were used to identify any statistically significant trends in FDDs over time for the base period (1981-2010). Second, future climate scenarios were developed for the study areas using statistical downscaling methods. This study also examined the lowest threshold of FDDs for winter road construction in a future period. Our previous study established a lowest threshold of 380 FDDs, which was derived from the relationship between the FDDs and the opening dates of the James Bay Winter Road near the Hudson-James Bay coast. Thus, this study applied that threshold as a conservative estimate of the minimum FDDs required, to examine the effects of climate change on the winter road construction period.
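FDD accumulation itself is a one-liner under the usual definition (each day contributes max(0, −T) for daily mean temperature T in °C), which is assumed rather than quoted here; the example constructs a season that just reaches the 380-FDD opening threshold discussed above.

```python
import numpy as np

def freezing_degree_days(daily_mean_temp):
    """Accumulated freezing degree-days (FDDs) over a winter season.

    Each day contributes max(0, -T) for daily mean temperature T in deg C;
    this is the common definition, assumed to match the study's usage.
    """
    t = np.asarray(daily_mean_temp, dtype=float)
    return np.maximum(0.0, -t).sum()

# A season just reaching the 380-FDD opening threshold discussed above:
season = np.concatenate([np.full(30, -2.0), np.full(40, -8.0)])  # 60 + 320 FDDs
print(freezing_degree_days(season))  # 380.0
```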
Self-test web-based pure-tone audiometry: validity evaluation and measurement error analysis.
Masalski, Marcin; Kręcicki, Tomasz
2013-04-12
Potential methods of application of self-administered Web-based pure-tone audiometry, conducted at home on a PC with a sound card and ordinary headphones, depend on the value of the measurement error in such tests. The aim of this research was to determine the measurement error of the hearing threshold determined in the way described above and to identify and analyze factors influencing its value. The evaluation of the hearing threshold was made in three series: (1) tests on a clinical audiometer, (2) self-tests done on a specially calibrated computer under the supervision of an audiologist, and (3) self-tests conducted at home. The research was carried out on a group of 51 participants selected from patients of an audiology outpatient clinic. From the group of 51 patients examined in the first two series, the third series was self-administered at home by 37 subjects (73%). The average difference between the value of the hearing threshold determined in series 1 and in series 2 was -1.54 dB with a standard deviation of 7.88 dB and a Pearson correlation coefficient of .90. Between the first and third series, these values were -1.35 dB ± 10.66 dB and .84, respectively. In series 3, the standard deviation was most influenced by the error connected with the procedure of hearing threshold identification (6.64 dB), the calibration error (6.19 dB), and additionally at the frequency of 250 Hz by the frequency nonlinearity error (7.28 dB). The obtained results confirm the possibility of applying Web-based pure-tone audiometry in screening tests. In the future, modifications of the method leading to a decrease in measurement error could broaden the scope of Web-based pure-tone audiometry applications.
Methods for SBS Threshold Reduction
1994-01-30
We have investigated methods for reducing the threshold for stimulated Brillouin scattering (SBS) using a frequency-narrowed Cr,Tm,Ho:YAG laser ... operating at 2.12 micrometers. The SBS medium was carbon disulfide. Single-focus SBS and threshold reduction by using two foci, a loop, and a ring have
Peripheral neuropathy in children with type 1 diabetes.
Louraki, M; Karayianni, C; Kanaka-Gantenbein, C; Katsalouli, M; Karavanaki, K
2012-10-01
Diabetic neuropathy (DN) is a major complication of type 1 diabetes mellitus (T1DM) with significant morbidity and mortality in adulthood. Clinical neuropathy is rarely seen in paediatric populations, whereas subclinical neuropathy is commonly seen, especially in adolescents. Peripheral DN involves impairment of the large and/or small nerve fibres and can be diagnosed by various methods. Nerve conduction studies (NCS) are the gold-standard method for the detection of subclinical DN; however, they are invasive, difficult to perform, and selectively detect large-fibre abnormalities. Vibration sensation thresholds (VSTs) and thermal discrimination thresholds (TDTs) are quicker and easier to measure and, therefore, more suitable as screening tools. Poor glycaemic control is the most important risk factor for the development of DN. Maintaining near-normoglycaemia is the only way to prevent or reverse neural impairment, as the currently available treatments can only relieve the symptoms of DN. Early detection of children and adolescents with nervous system abnormalities is crucial to allow all appropriate measures to be taken to prevent the development of DN.
Partial least squares for efficient models of fecal indicator bacteria on Great Lakes beaches
Brooks, Wesley R.; Fienen, Michael N.; Corsi, Steven R.
2013-01-01
At public beaches, it is now common to mitigate the impact of water-borne pathogens by posting a swimmer's advisory when the concentration of fecal indicator bacteria (FIB) exceeds an action threshold. Since culturing the bacteria delays public notification when dangerous conditions exist, regression models are sometimes used to predict the FIB concentration based on readily-available environmental measurements. It is hard to know which environmental parameters are relevant to predicting FIB concentration, and the parameters are usually correlated, which can hurt the predictive power of a regression model. Here the method of partial least squares (PLS) is introduced to automate the regression modeling process. Model selection is reduced to the process of setting a tuning parameter to control the decision threshold that separates predicted exceedances of the standard from predicted non-exceedances. The method is validated by application to four Great Lakes beaches during the summer of 2010. Performance of the PLS models compares favorably to that of the existing state-of-the-art regression models at these four sites.
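A sketch of this workflow with scikit-learn's PLSRegression on synthetic data is below: the model predicts log-concentration, and a separate decision threshold, the single tuning parameter described above, converts predictions into advisory decisions. The regulatory value and all data are illustrative, not from the four Great Lakes sites.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Hypothetical predictors (turbidity, rainfall, wave height, ...) and
# synthetic log10 FIB concentrations; a sketch of the tuning described above.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))                         # correlated env. measurements
y = X[:, :3].sum(axis=1) + rng.normal(0.5, 0.5, 200)  # log10 CFU/100 mL (synthetic)

pls = PLSRegression(n_components=3).fit(X[:150], y[:150])
pred = pls.predict(X[150:]).ravel()

regulatory_standard = 2.37   # e.g. log10(235 CFU/100 mL), illustrative only
decision_threshold = 2.0     # the tuning parameter: lower it for more advisories
exceed_predicted = pred > decision_threshold
exceed_actual = y[150:] > regulatory_standard
sensitivity = (exceed_predicted & exceed_actual).sum() / max(exceed_actual.sum(), 1)
print(f"sensitivity at threshold {decision_threshold}: {sensitivity:.2f}")
```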
Liu, Chang; Jin, Su-Hyun
2015-11-01
This study investigated whether native listeners processed speech differently from non-native listeners in a speech detection task. Detection thresholds of Mandarin Chinese and Korean vowels and non-speech sounds in noise, frequency selectivity, and the nativeness of Mandarin Chinese and Korean vowels were measured for Mandarin Chinese- and Korean-native listeners. The two groups of listeners exhibited similar non-speech sound detection and frequency selectivity; however, the Korean listeners had better detection thresholds of Korean vowels than Chinese listeners, while the Chinese listeners performed no better at Chinese vowel detection than the Korean listeners. Moreover, thresholds predicted from an auditory model highly correlated with behavioral thresholds of the two groups of listeners, suggesting that detection of speech sounds not only depended on listeners' frequency selectivity, but also might be affected by their native language experience. Listeners evaluated their native vowels with higher nativeness scores than non-native listeners. Native listeners may have advantages over non-native listeners when processing speech sounds in noise, even without the required phonetic processing; however, such native speech advantages might be offset by Chinese listeners' lower sensitivity to vowel sounds, a characteristic possibly resulting from their sparse vowel system and their greater cognitive and attentional demands for vowel processing.
Anders, Royce; Riès, Stéphanie; Van Maanen, Leendert; Alario, F-Xavier
Patients with lesions in the left prefrontal cortex (PFC) have been shown to be impaired in lexical selection, especially when interference between semantically related alternatives is increased. To investigate more deeply which computational mechanisms may be impaired following left PFC damage due to stroke, a psychometric modelling approach is employed in which we assess the cognitive parameters of the patients from an evidence accumulation (sequential information sampling) model of their response data. We also compare the results to healthy speakers. Analysis of the cognitive parameters indicates an impairment of the PFC patients in appropriately adjusting their decision threshold in order to handle the increased item difficulty that is introduced by semantic interference. The modelling also contributes to other topics in psycholinguistic theory: specific effects are observed on the cognitive parameters according to item familiarization, and the opposing effects of priming (lower threshold) and semantic interference (lower drift) are found to depend on repetition. These results are developed for the blocked-cyclic picture naming paradigm, in which pictures are presented within semantically homogeneous (HOM) or heterogeneous (HET) blocks and are repeated several times per block. Overall, the results are in agreement with a role of the left PFC in adjusting the decision threshold for lexical selection in language production.
A new iterative triclass thresholding technique in image segmentation.
Cai, Hongmin; Yang, Zhong; Cao, Xinhua; Xia, Weiming; Xu, Xiaoyin
2014-03-01
We present a new method in image segmentation that is based on Otsu's method but iteratively searches for subregions of the image for segmentation, instead of treating the full image as a whole region for processing. The iterative method starts with Otsu's threshold and computes the mean values of the two classes as separated by the threshold. Based on Otsu's threshold and the two mean values, the method separates the image into three classes instead of two as the standard Otsu's method does. The first two classes are determined as the foreground and background and they will not be processed further. The third class is denoted as a to-be-determined (TBD) region that is processed at the next iteration. At the succeeding iteration, Otsu's method is applied on the TBD region to calculate a new threshold and two class means, and the TBD region is again separated into three classes, namely, foreground, background, and a new TBD region, which by definition is smaller than the previous TBD region. Then, the new TBD region is processed in the same manner. The process stops when the difference between the Otsu thresholds calculated in two successive iterations is less than a preset value. Then, all the intermediate foreground and background regions are, respectively, combined to create the final segmentation result. Tests on synthetic and real images showed that the new iterative method can achieve better performance than the standard Otsu's method in many challenging cases, such as identifying weak objects and revealing fine structures of complex objects, while the added computational cost is minimal.
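The iteration is easy to express with an off-the-shelf Otsu routine, as in the skimage-based sketch below; the stopping tolerance and the handling of the final TBD region are reasonable guesses rather than details from the paper.

```python
import numpy as np
from skimage.filters import threshold_otsu

def iterative_triclass_otsu(image, stop_eps=1e-3, max_iter=50):
    """Sketch of the iterative triclass method described above.

    At each pass Otsu's threshold splits the current TBD region; pixels
    above the upper class mean become foreground, pixels below the lower
    class mean become background, and the rest stay to-be-determined.
    """
    img = np.asarray(image, float)
    fg = np.zeros(img.shape, bool)          # settled foreground
    tbd = np.ones(img.shape, bool)          # to-be-determined region
    prev_t = None
    for _ in range(max_iter):
        vals = img[tbd]
        t = threshold_otsu(vals)
        if prev_t is not None and abs(t - prev_t) < stop_eps:
            break
        prev_t = t
        mu_low = vals[vals <= t].mean()     # mean of the lower class
        mu_high = vals[vals > t].mean()     # mean of the upper class
        fg |= tbd & (img > mu_high)         # settle new foreground
        tbd &= (img > mu_low) & (img <= mu_high)   # smaller TBD region
        if not tbd.any():
            break
    # Split any remaining TBD pixels by the last threshold (a guess).
    return fg | (tbd & (img > t))
```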
Method and apparatus for welding precipitation hardenable materials
Murray, Jr., Holt; Harris, Ian D.; Ratka, John O.; Spiegelberg, William D.
1994-01-01
A method for welding together members consisting of precipitation age hardened materials includes the steps of selecting a weld filler material that has substantially the same composition as the materials being joined, and an age hardening characteristic temperature age threshold below that of the aging kinetic temperature range of the materials being joined, whereby after welding the members together, the resulting weld and heat affected zone (HAZ) are heat treated at a temperature below that of the kinetic temperature range of the materials joined, for obtaining substantially the same mechanical characteristics for the weld and HAZ, as for the parent material of the members joined.
Computer-aided analysis with Image J for quantitatively assessing psoriatic lesion area.
Sun, Z; Wang, Y; Ji, S; Wang, K; Zhao, Y
2015-11-01
Body surface area is important in determining the severity of psoriasis. However, an objective, reliable, and practical method is still needed for this purpose. We performed a computer image analysis (CIA) of the psoriatic area using the ImageJ freeware to determine whether this method could be used for objective evaluation of the psoriatic area. Fifteen psoriasis patients were randomized to be treated with adalimumab or placebo in a clinical trial. At each visit, the psoriasis area of each body site was estimated by two physicians (E-method), and standard photographs were taken. The psoriasis area in the pictures was assessed with CIA using semi-automatic threshold selection (T-method) or manual selection (M-method, gold standard). The results assessed by the three methods were analyzed, with reliability and affecting factors evaluated. Both the T- and E-methods correlated strongly with the M-method, and the T-method had a slightly stronger correlation with the M-method. Both the T- and E-methods had good consistency between the evaluators. All three methods were able to detect the change in the psoriatic area after treatment, while the E-method tended to overestimate. CIA with the ImageJ freeware is reliable and practicable in quantitatively assessing the lesional area of psoriasis.
Automated analysis for microcalcifications in high resolution digital mammograms
Mascio, Laura N.
1996-01-01
A method for automatically locating microcalcifications indicating breast cancer. The invention assists mammographers in finding very subtle microcalcifications and in recognizing the pattern formed by all the microcalcifications. It also draws attention to microcalcifications that might be overlooked because a more prominent feature draws attention away from an important object. A new filter has been designed to weed out false positives in one of the steps of the method. Previously, an iterative selection threshold was used to separate microcalcifications from the spurious signals resulting from texture or other background. A Selective Erosion or Enhancement (SEE) Filter has been invented to improve this step. Since the algorithm detects areas containing potential calcifications on the mammogram, it can be used to determine which areas need be stored at the highest resolution available, while, in addition, the full mammogram can be reduced to an appropriate resolution for the remaining cancer signs.
Goyal, Vinay; Rajguru, Suhrud; Matic, Agnella I; Stock, Stuart R; Richter, Claus-Peter
2012-11-01
This article provides a mini review of the current state of infrared neural stimulation (INS), together with new experimental results concerning INS damage thresholds. INS promises to be an attractive alternative for neural interfaces. With this method, one can attain spatially selective neural stimulation that is not possible with electrical stimulation. INS is based on the delivery of short laser pulses that result in a transient temperature increase in the tissue and depolarize the neurons. At a high stimulation rate and/or high pulse energy, the method bears the risk of thermal damage to the tissue from the instantaneous temperature increase or from potential accumulation of thermal energy. In the present study, we determined the injury thresholds in guinea pig cochleae for acute INS using functional measurements (compound action potentials) and histological evaluation. The selected laser parameters for INS were the wavelength (λ = 1,869 nm), the pulse duration (100 μs), the pulse repetition rate (250 Hz), and the radiant energy (0-127 μJ/pulse). For up to 5 hr of continuous irradiation at 250 Hz and at radiant energies up to 25 μJ/pulse, we did not observe any functional or histological damage in the cochlea. Functional loss was observed for energies above 25 μJ/pulse, and the probability of injury to the target tissue resulting in functional loss increased with increasing radiant energy. Corresponding cochlear histology from control animals and animals exposed to 98 or 127 μJ/pulse at a 250 Hz pulse repetition rate did not show a loss of spiral ganglion cells, hair cells, or other soft tissue structures of the organ of Corti. Light microscopy did not reveal any structural changes in the soft tissue either. Additionally, microcomputed tomography was used to visualize the placement of the optical fiber within the cochlea.
A comparative assessment of statistical methods for extreme weather analysis
NASA Astrophysics Data System (ADS)
Schlögl, Matthias; Laaha, Gregor
2017-04-01
Extreme weather exposure assessment is of major importance for scientists and practitioners alike. We compare different extreme value approaches and fitting methods with respect to their value for assessing extreme precipitation and temperature impacts. Based on an Austrian data set from 25 meteorological stations representing diverse meteorological conditions, we assess the added value of partial duration series over the standardly used annual maxima series in order to give recommendations for performing extreme value statistics of meteorological hazards. Results show the merits of the robust L-moment estimation, which yielded better results than maximum likelihood estimation in 62 % of all cases. At the same time, the results question the general assumption that the threshold excess approach (employing partial duration series, PDS) is superior to the block maxima approach (employing annual maxima series, AMS) due to information gain. For low return periods (non-extreme events) the PDS approach tends to overestimate return levels as compared to the AMS approach, whereas the opposite behavior was found for high return levels (extreme events). In extreme cases, an inappropriate threshold was shown to lead to considerable biases that may far outweigh the possible information gain from including additional extreme events. This effect was visible neither from the square-root criterion nor from the standardly used graphical diagnostics (mean residual life plot), but only from a direct comparison of AMS and PDS in synoptic quantile plots. We therefore recommend performing the AMS and PDS approaches simultaneously in order to select the best-suited approach. This will make the analyses more robust, in cases where threshold selection and dependency introduce biases to the PDS approach, but also in cases where the AMS contains non-extreme events that may introduce similar biases. For assessing the performance of extreme events we recommend conditional performance measures that focus on rare events only, in addition to the standardly used unconditional indicators. The findings of this study are of relevance for a broad range of environmental variables, including meteorological and hydrological quantities.
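The recommended side-by-side AMS/PDS comparison can be set up in a few lines with scipy, as sketched below; note this sketch uses maximum-likelihood fitting for brevity, whereas the study found L-moment estimation more robust, and it ignores declustering of the partial duration series.

```python
import numpy as np
from scipy.stats import genextreme, genpareto

def return_levels(daily, years, return_periods, threshold):
    """Compare AMS/GEV and PDS/GPD return levels (sketch).

    daily          : array of daily values; length must be divisible by `years`
    return_periods : array of return periods in years, e.g. [10, 50, 100]
    """
    daily = np.asarray(daily, float)
    return_periods = np.asarray(return_periods, float)

    # Block maxima: one annual maximum per year, fitted with a GEV.
    ams = daily.reshape(years, -1).max(axis=1)
    c, loc, scale = genextreme.fit(ams)
    rl_ams = genextreme.isf(1.0 / return_periods, c, loc, scale)

    # Peaks over threshold (no declustering here), fitted with a GPD.
    excesses = daily[daily > threshold] - threshold
    rate = len(excesses) / years                      # mean exceedances per year
    xi, loc_g, sc = genpareto.fit(excesses, floc=0.0)
    rl_pds = threshold + genpareto.isf(1.0 / (rate * return_periods), xi, loc_g, sc)
    return rl_ams, rl_pds
```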
NASA Astrophysics Data System (ADS)
Liu, Yande; Ying, Yibin; Lu, Huishan; Fu, Xiaping
2005-11-01
A new method is proposed to eliminate the varying background and noise simultaneously for multivariate calibration of Fourier transform near infrared (FT-NIR) spectral signals. An ideal spectrum signal prototype was constructed based on the FT-NIR spectrum of fruit sugar content measurement. The performances of wavelet-based threshold de-noising approaches with different combinations of wavelet base functions were compared. Three families of wavelet base functions (Daubechies, Symlets and Coiflets) were applied to estimate the performance of the wavelet bases and threshold selection rules in a series of experiments. The experimental results show that the best de-noising performance is reached with the Daubechies 4 or Symlet 4 wavelet base functions. Based on the optimized parameters, wavelet regression models for the sugar content of pear were also developed, resulting in a smaller prediction error than a traditional Partial Least Squares Regression (PLSR) model.
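A sketch of wavelet threshold de-noising with PyWavelets, using the Daubechies-4 base reported as best above, is given below; the universal (VisuShrink) soft threshold used here is a standard choice, not necessarily the selection rule the authors found optimal.

```python
import numpy as np
import pywt

def wavelet_denoise(spectrum, wavelet="db4", level=4):
    """Wavelet threshold de-noising of a spectral signal (sketch).

    Uses the universal (VisuShrink) threshold with soft thresholding;
    'db4' and 'sym4' were the best-performing bases reported above.
    """
    spectrum = np.asarray(spectrum, float)
    coeffs = pywt.wavedec(spectrum, wavelet, level=level)
    # Noise scale estimated from the finest detail coefficients.
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thresh = sigma * np.sqrt(2 * np.log(len(spectrum)))
    denoised = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft")
                              for c in coeffs[1:]]
    return pywt.waverec(denoised, wavelet)[: len(spectrum)]
```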
Sato, Miki; Maeda, Yuki; Ishioka, Toshio; Harata, Akira
2017-11-20
The detection limits and photoionization thresholds of polycyclic aromatic hydrocarbons and their chlorinated and nitrated derivatives on the water surface are examined using laser two-photon ionization and single-photon ionization, respectively. The laser two-photon ionization methods are highly surface-selective, with high sensitivity for aromatic hydrocarbons, which tend to accumulate on the water surface in the natural environment owing to their highly hydrophobic nature. The dependence of the detection limits of the target aromatic molecules on their physicochemical properties (photoionization thresholds relating to excess energy, molar absorptivity, and the octanol-water partition coefficient) is discussed. The detection limit clearly depends on the product of the octanol-water partition coefficient and the molar absorptivity, and no clear dependence was found on excess energy. The detection limits of laser two-photon ionization for these types of molecules on the water surface are formulated.
Debt and growth: A non-parametric approach
NASA Astrophysics Data System (ADS)
Brida, Juan Gabriel; Gómez, David Matesanz; Seijas, Maria Nela
2017-11-01
In this study, we explore the dynamic relationship between public debt and economic growth by using a non-parametric approach based on data symbolization and clustering methods. The study uses annual data on the general government consolidated gross debt-to-GDP ratio and gross domestic product for sixteen countries between 1977 and 2015. Using symbolic sequences, we introduce a notion of distance between the dynamical paths of different countries. Then, a Minimal Spanning Tree and a Hierarchical Tree are constructed from the time series to help detect the existence of groups of countries sharing similar economic performance. The main finding of the study appears for the period 2008-2016, when several countries surpassed the 90% debt-to-GDP threshold. During this period, three groups (clubs) of countries are obtained: high-, mid- and low-indebted countries, suggesting that the employed debt-to-GDP threshold drives economic dynamics for the selected countries.
The comparison and analysis of extracting video key frame
NASA Astrophysics Data System (ADS)
Ouyang, S. Z.; Zhong, L.; Luo, R. Q.
2018-05-01
Video key frame extraction is an important part of large-scale video data processing. Building on previous work in key frame extraction, we summarize four important key frame extraction algorithms; these methods mostly work by comparing the difference between each pair of frames, and if the difference exceeds a threshold value, the corresponding frames are taken as two different key frames. We then propose key frame extraction based on mutual information, introducing information entropy, selecting appropriate threshold values to form the initial classes, and finally taking frames with a similar mean mutual information as candidate key frames. In this paper, these algorithms are used to extract the key frames of tunnel traffic videos. Then, through analysis of the experimental results and comparison of the pros and cons of these algorithms, a basis for practical applications is provided.
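The baseline scheme the surveyed algorithms share, declaring a new key frame when the inter-frame difference exceeds a threshold, can be sketched as follows with OpenCV; this is the common difference-based scheme only, not the mutual-information variant proposed in the paper, and the threshold value is illustrative.

```python
import cv2
import numpy as np

def keyframes_by_difference(video_path, thresh=30.0):
    """Baseline frame-difference key frame extraction (sketch).

    A frame becomes a key frame when its mean absolute gray-level
    difference from the last key frame exceeds `thresh`.
    Returns the indices of the selected key frames.
    """
    cap = cv2.VideoCapture(video_path)
    keyframes, last, idx = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(float)
        if last is None or np.abs(gray - last).mean() > thresh:
            keyframes.append(idx)
            last = gray
        idx += 1
    cap.release()
    return keyframes
```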
Ultrafast Mid-Infrared Dynamics in Quantum Cascade Lasers
2010-01-07
pump and probe were tuned to be resonant with the gain transition at each bias. In Fig. 2(a), selected bias-dependent DT results at 30 K are displayed ... emission just below threshold. Well below threshold, the phonon-assisted lifetime is weakly bias-dependent. Just below threshold, the photon density in ... corresponds to the decay of the lower lasing state via tunneling. The second component, on the time scale of 2 ps, shows a characteristic inverse dependence
NASA Astrophysics Data System (ADS)
Brigandì, Giuseppina; Tito Aronica, Giuseppe; Bonaccorso, Brunella; Gueli, Roberto; Basile, Giuseppe
2017-09-01
The main focus of the paper is to present a flood and landslide early warning system, named HEWS (Hydrohazards Early Warning System), specifically developed for the Civil Protection Department of Sicily, based on the combined use of rainfall thresholds, soil moisture modelling and quantitative precipitation forecast (QPF). The warning system refers to the 9 different Alert Zones into which Sicily has been divided and is based on a threshold system of three increasing critical levels: ordinary, moderate and high. In this system, for early flood warning, a Soil Moisture Accounting (SMA) model provides daily soil moisture conditions, which allow a specific set of three rainfall thresholds, one for each critical level considered, to be selected for issuing the alert bulletin. Wetness indexes, representative of the soil moisture conditions of a catchment, are calculated using a simple, spatially lumped rainfall-streamflow model, based on the SCS-CN method and on the unit hydrograph approach, which requires daily observed and/or predicted rainfall and temperature data as input. For the calibration of this model, daily continuous time series of rainfall, streamflow and air temperature data are used. An event-based lumped rainfall-runoff model has instead been used for the derivation of the rainfall thresholds for each catchment in Sicily with an area larger than 50 km2. In particular, a Kinematic Instantaneous Unit Hydrograph based lumped rainfall-runoff model with the SCS-CN routine for net rainfall was developed for this purpose. For rainfall-induced shallow landslide warning, empirical rainfall thresholds provided by Gariano et al. (2015) have been included in the system. They were derived on an empirical basis starting from a catalogue of 265 shallow landslides in Sicily in the period 2002-2012. Finally, the Delft-FEWS operational forecasting platform has been applied to link input data, the SMA model and the rainfall threshold models to produce warnings on a daily basis for the entire region.
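The SCS-CN routine mentioned above computes net rainfall from a curve number; a standard-form sketch follows, with the common initial-abstraction ratio of 0.2 assumed since the study's calibrated parameters are not given here.

```python
def scs_cn_runoff(p_mm, cn):
    """SCS-CN net rainfall (runoff depth), as used in the model above.

    Standard formulation with the common initial-abstraction ratio of 0.2;
    the study's calibrated parameters are not reproduced here.
    """
    s = 25400.0 / cn - 254.0          # potential maximum retention, mm
    ia = 0.2 * s                      # initial abstraction, mm
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

# Example: 60 mm of rain on a catchment with curve number CN = 75
print(scs_cn_runoff(60.0, 75))        # ~14.5 mm of net rainfall
```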
Quantifying ecological thresholds from response surfaces
Heather E. Lintz; Bruce McCune; Andrew N. Gray; Katherine A. McCulloh
2011-01-01
Ecological thresholds are abrupt changes of ecological state. While the ecological threshold is a widely accepted concept, most empirical methods detect thresholds in time or across geographic space. Although useful, these approaches do not quantify the direct drivers of threshold response. Causal understanding of thresholds detected empirically requires their investigation...
Threshold-adaptive canny operator based on cross-zero points
NASA Astrophysics Data System (ADS)
Liu, Boqi; Zhang, Xiuhua; Hong, Hanyu
2018-03-01
Canny edge detection[1] is a technique to extract useful structural information from different vision objects while dramatically reducing the amount of data to be processed. It has been widely applied in various computer vision systems. Two thresholds have to be set before edges can be separated from the background. Usually, two static values are chosen as the thresholds based on the developers' experience[2]. In this paper, a novel automatic thresholding method is proposed. The relation between the thresholds and cross-zero points is analyzed, and an interpolation function is deduced to determine the thresholds. Comprehensive experimental results demonstrate the effectiveness of the proposed method and its advantage for stable edge detection under changing illumination.
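For comparison, the most common automatic alternative to static Canny thresholds is the median heuristic shown below; it is included only to illustrate automatic threshold selection and is explicitly not the cross-zero-point interpolation proposed in this paper.

```python
import cv2
import numpy as np

def auto_canny(image, sigma=0.33):
    """Automatic Canny thresholds from the intensity median (common heuristic).

    `image` is an 8-bit grayscale array; the lower and upper thresholds are
    placed a fraction `sigma` below and above the median gray level.
    """
    v = np.median(image)
    lower = int(max(0, (1.0 - sigma) * v))
    upper = int(min(255, (1.0 + sigma) * v))
    return cv2.Canny(image, lower, upper)
```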
Methods for automatic trigger threshold adjustment
Welch, Benjamin J; Partridge, Michael E
2014-03-18
Methods are presented for adjusting trigger threshold values to compensate for drift in the quiescent level of a signal monitored for initiating a data recording event, thereby avoiding false triggering conditions. Initial threshold values are periodically adjusted by re-measuring the quiescent signal level, and adjusting the threshold values by an offset computation based upon the measured quiescent signal level drift. Re-computation of the trigger threshold values can be implemented on time based or counter based criteria. Additionally, a qualification width counter can be utilized to implement a requirement that a trigger threshold criterion be met a given number of times prior to initiating a data recording event, further reducing the possibility of a false triggering situation.
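A minimal software analogue of this scheme, re-centering the trigger levels on a re-measured quiescent level and requiring the criterion to hold for a qualification width, is sketched below; names and structure are illustrative, not the patented implementation.

```python
import numpy as np

def adjust_thresholds(quiescent_samples, offset, qualify_width=3):
    """Drift-compensating trigger thresholds, in the spirit described above.

    The quiescent level is re-measured periodically (call this function
    again on fresh samples) and the trigger levels are re-centered on it;
    the qualification width requires the criterion to be met for several
    consecutive samples before a recording event is triggered.
    """
    quiescent = np.median(quiescent_samples)   # robust quiescent-level estimate
    upper, lower = quiescent + offset, quiescent - offset

    def triggered(signal):
        count = 0
        for x in signal:
            count = count + 1 if (x > upper or x < lower) else 0
            if count >= qualify_width:         # criterion met enough times in a row
                return True
        return False

    return upper, lower, triggered
```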
Mathers, Jonathan; Sitch, Alice; Parry, Jayne
2016-10-01
Medical schools are increasingly using novel tools to select applicants. The UK Clinical Aptitude Test (UKCAT) is one such tool; it measures mental abilities, attitudes and professional behaviour conducive to being a doctor using constructs likely to be less affected by socio-demographic factors than traditional measures of potential. Universities are free to use the UKCAT as they see fit, but three broad modalities have been observed: 'borderline', 'factor' and 'threshold'. This paper aims to provide the first longitudinal analyses assessing the impact of the different uses of the UKCAT on making offers to applicants with different socio-demographic characteristics. Multilevel regression was used to model the outcome of applications to UK medical schools during the period 2004-2011 (data obtained from UCAS), adjusted for sex, ethnicity, schooling, parental occupation, educational attainment, year of application and UKCAT use (borderline, factor and threshold). The three ways of using the UKCAT did not differ in their impact on making the selection process more equitable, other than a marked reversal of the female advantage when applied in a 'threshold' manner. Our attempt to model the longitudinal impact of the use of the UKCAT in its threshold format again found the reversal of the female advantage, but did not demonstrate similar statistically significant reductions in the advantages associated with White ethnicity, higher social class and selective schooling. Our findings demonstrate attenuation of the advantage of being female but no changes in admission rates based on White ethnicity, higher social class and selective schooling. In view of this, the utility of the UKCAT as a means to widen access to medical schools among non-White and less advantaged applicants remains unproven.
Measurement of visual contrast sensitivity
NASA Astrophysics Data System (ADS)
Vongierke, H. E.; Marko, A. R.
1985-04-01
This invention involves measurement of the visual contrast sensitivity (modulation transfer) function of a human subject by means of a linear or circular spatial-frequency pattern on a cathode ray tube whose contrast automatically decreases or increases depending on whether the subject presses or releases a hand-switch button. The subject finds the detection threshold of the pattern modulation by adjusting the contrast to values that vary about the threshold, thereby determining the threshold; the magnitude of the contrast fluctuations between reversals also provides an estimate of the variability of the subject's absolute threshold. The invention also involves slow automatic sweeping of the pattern's spatial frequency, either after preset time intervals or after the threshold has been defined at each frequency by a selected number of subject-determined threshold crossings (i.e., contrast reversals).
Chen, Siyuan; Epps, Julien
2014-12-01
Monitoring pupil and blink dynamics has applications in cognitive load measurement during human-machine interaction. However, accurate, efficient, and robust pupil size and blink estimation pose significant challenges to the efficacy of real-time applications due to the variability of eye images; hence, to date, such methods have required manual intervention for fine tuning of parameters. In this paper, a novel self-tuning threshold method, which is applicable to any infrared-illuminated eye images without a tuning parameter, is proposed for segmenting the pupil from the background in images recorded by a low-cost webcam placed near the eye. A convex hull and a dual-ellipse fitting method are also proposed to select pupil boundary points and to detect the eyelid occlusion state. Experimental results on a realistic video dataset show that the measurement accuracy using the proposed methods is higher than that of widely used manually tuned parameter methods or fixed-parameter methods. Importantly, it demonstrates convenience and robustness for an accurate and fast estimate of eye activity in the presence of variations due to different users, task types, load, and environments. Cognitive load measurement in human-machine interaction can benefit from this computationally efficient implementation without requiring a threshold calibration beforehand. Thus, one can envisage a mini IR camera embedded in a lightweight glasses frame, like Google Glass, for convenient applications of real-time adaptive aiding and task management in the future.
Identifying biologically relevant putative mechanisms in a given phenotype comparison
Hanoudi, Samer; Donato, Michele; Draghici, Sorin
2017-01-01
A major challenge in life science research is understanding the mechanism involved in a given phenotype. The ability to identify the correct mechanisms is needed in order to understand fundamental and very important phenomena such as mechanisms of disease, immune system responses to various challenges, and mechanisms of drug action. Current data analysis methods focus on the identification of differentially expressed (DE) genes using their fold changes and/or p-values. Major shortcomings of this approach are that: i) it does not consider the interactions between genes; ii) its results are sensitive to the selection of the threshold(s) used; and iii) the set of genes produced by this approach is not always conducive to formulating mechanistic hypotheses. Here we present a method that can construct networks of genes that can be considered putative mechanisms. The putative mechanisms constructed by this approach are not limited to the set of DE genes, but also consider all known and relevant gene-gene interactions. We analyzed three real datasets for which both the causes of the phenotype and the true mechanisms were known, and show that the method identified the correct mechanisms when applied to mouse microarray datasets. We compared the results of our method with those of the classical approach, showing that our method produces more meaningful biological insights. PMID:28486531
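The abstract gives only the high-level idea, so the sketch below is a hedged toy version of "not limited to the DE set": grow a subnetwork from the DE genes by pulling in their known interaction partners. The function name and the networkx representation are assumptions for illustration only.

```python
import networkx as nx

def putative_mechanism(interactions, de_genes):
    """Grow a putative mechanism: start from the DE genes and keep any
    interaction partner connected to them, so the resulting network is
    not limited to the DE set itself."""
    g = nx.Graph(interactions)               # edges: known gene-gene interactions
    keep = set(de_genes)
    for a, b in g.edges():
        if a in keep or b in keep:
            keep.update((a, b))
    return g.subgraph(keep).copy()

# Example: putative_mechanism([("A","B"), ("B","C"), ("D","E")], {"A"})
# returns the A-B-C component, pulling in B and C via known interactions.
```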
Diao, Wen-wen; Ni, Dao-feng; Li, Feng-rong; Shang, Ying-ying
2011-03-01
Auditory brainstem response (ABR) testing evoked by tone bursts is an important method of hearing assessment in infants referred after hearing screening. The present study compared the thresholds of tone-burst ABR with filter settings of 30-1500 Hz and 30-3000 Hz at each frequency, characterized the ABR thresholds under the two filter settings and the effect of waveform judgement, and thereby sought a more optimal frequency-specific ABR test parameter. Thresholds with filter settings of 30-1500 Hz and 30-3000 Hz in children aged 2-33 months were recorded by click and tone-burst ABR. A total of 18 patients (8 male/10 female), 22 ears, were included. The thresholds of tone-burst ABR with filter settings of 30-3000 Hz were higher than those with filter settings of 30-1500 Hz. A significant difference was detected at 0.5 kHz and 2.0 kHz (t values were 2.238 and 2.217, P < 0.05); no significant difference between the two filter settings was detected at the remaining frequencies. The ABR waveform with filter settings of 30-1500 Hz was smoother than that with filter settings of 30-3000 Hz at the same stimulus intensity; the response curve of the latter showed jagged small interfering waves. The filter setting of 30-1500 Hz may be a more optimal parameter for frequency-specific ABR, improving the accuracy of frequency-specific ABR for infants' hearing assessment.
Lubiprostone does not Influence Visceral Pain Thresholds in Patients with Irritable Bowel Syndrome
Whitehead, William E.; Palsson, Olafur S.; Gangarosa, Lisa; Turner, Marsha; Tucker, Jane
2011-01-01
Background: In clinical trials, lubiprostone reduced the severity of abdominal pain. Aims: The primary aim was to determine whether lubiprostone raises the threshold for abdominal pain induced by intraluminal balloon distention. A secondary aim was to determine whether changes in pain sensitivity influence clinical pain independently of changes in transit time. Methods: Sixty-two patients with irritable bowel syndrome with constipation (IBS-C) participated in an 8-week crossover study. All subjects completed a 14-day baseline ending with a barostat test of pain and urge sensory thresholds. Half, randomly selected, then received 48 μg/day of lubiprostone for 14 days ending with a pain sensitivity test and a Sitzmark test of transit time. This was followed by a 14-day washout and then a crossover to 14 days of placebo with tests of pain sensitivity and transit time. The other half of the subjects received placebo before lubiprostone. All kept symptom diaries. Results: Stools were significantly softer when taking lubiprostone compared to placebo (Bristol Stool scores 4.20 vs. 3.44, p<0.001). However, thresholds for pain (17.36 vs. 17.83 mmHg, lubiprostone vs. placebo) and urgency to defecate (14.14 vs. 14.53 mmHg) were not affected by lubiprostone. Transit time was not significantly different between lubiprostone and placebo (51.27 vs. 51.81 hours), and neither pain sensitivity nor transit time was a significant predictor of clinical pain. Conclusions: Lubiprostone has no effect on visceral sensory thresholds. The reductions in clinical pain that occur while taking lubiprostone appear to be secondary to changes in stool consistency. PMID:21914041
NASA Astrophysics Data System (ADS)
Feng, Wenjie; Wu, Shenghe; Yin, Yanshu; Zhang, Jiajia; Zhang, Ke
2017-07-01
A training image (TI) can be regarded as a database of spatial structures and their low to higher order statistics used in multiple-point geostatistics (MPS) simulation. Presently, there are a number of methods to construct a series of candidate TIs (CTIs) for MPS simulation based on a modeler's subjective criteria. The spatial structures of TIs are often various, meaning that the compatibilities of different CTIs with the conditioning data are different. Therefore, evaluation and optimal selection of CTIs before MPS simulation is essential. This paper proposes a CTI evaluation and optimal selection method based on minimum data event distance (MDevD). In the proposed method, a set of MDevD properties are established through calculation of the MDevD of conditioning data events in each CTI. Then, CTIs are evaluated and ranked according to the mean value and variance of the MDevD properties. The smaller the mean value and variance of an MDevD property are, the more compatible the corresponding CTI is with the conditioning data. In addition, data events with low compatibility in the conditioning data grid can be located to help modelers select a set of complementary CTIs for MPS simulation. The MDevD property can also help to narrow the range of the distance threshold for MPS simulation. The proposed method was evaluated using three examples: a 2D categorical example, a 2D continuous example, and an actual 3D oil reservoir case study. To illustrate the method, a C++ implementation of the method is attached to the paper.
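A rough reading of the MDevD idea: for every conditioning data event, find the closest matching data event in the candidate TI, then rank TIs by the mean and variance of those minimum distances. The sketch below treats data events as flattened value vectors for simplicity; the real method works on spatial patterns extracted from the TI grid, so the names and the Euclidean distance choice here are assumptions.

```python
import numpy as np

def mdevd_score(ti_events, cond_events):
    """For each conditioning data event, find its minimum distance to any
    data event extracted from the candidate TI; a TI is ranked by the mean
    and variance of these minimum distances (smaller = more compatible)."""
    dists = []
    for ev in cond_events:                   # each event: flattened value vector
        dists.append(np.min(np.linalg.norm(ti_events - ev, axis=1)))
    dists = np.asarray(dists)
    return dists.mean(), dists.var()

# Ranking: sort candidate TIs by (mean, variance) ascending, e.g.
# ranked = sorted(range(len(tis)), key=lambda i: mdevd_score(tis[i], cond))
```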
Segmentation of singularity maps in the context of soil porosity
NASA Astrophysics Data System (ADS)
Martin-Sotoca, Juan J.; Saa-Requejo, Antonio; Grau, Juan; Tarquis, Ana M.
2016-04-01
Geochemical exploration has found increasing interest and benefit in using fractal (power-law) models to characterize geochemical distributions, including the concentration-area (C-A) model (Cheng et al., 1994; Cheng, 2012) and the concentration-volume (C-V) model (Afzal et al., 2011), to name a few examples. These methods are based on singularity maps of a measure that at each point define areas with self-similar properties, revealed as power-law relationships in concentration-area plots (C-A method). The C-A method together with the singularity map (the "Singularity-CA" method) defines thresholds that can be applied to segment the map. Recently, the "Singularity-CA" method has been applied to binarize 2D grayscale Computed Tomography (CT) soil images (Martin-Sotoca et al., 2015). Unlike image segmentation based on global thresholding methods, the "Singularity-CA" method quantifies the local scaling property of the grayscale value map in the space domain and determines the intensity of local singularities. It can be used as a high-pass-filter technique to enhance high-frequency patterns, usually regarded as anomalies, when applied to maps. In this work we pay special attention to how to select the singularity thresholds in the C-A plot to segment the image. We compare two methods: 1) cross point of linear regressions and 2) Wavelet Transform Modulus Maxima (WTMM) singularity function detection. REFERENCES: Cheng, Q., Agterberg, F. P. and Ballantyne, S. B. (1994). The separation of geochemical anomalies from background by fractal methods. Journal of Geochemical Exploration, 51, 109-130. Cheng, Q. (2012). Singularity theory and methods for mapping geochemical anomalies caused by buried sources and for predicting undiscovered mineral deposits in covered areas. Journal of Geochemical Exploration, 122, 55-70. Afzal, P., Fadakar Alghalandis, Y., Khakzad, A., Moarefvand, P. and Rashidnejad Omran, N. (2011). Delineation of mineralization zones in porphyry Cu deposits by fractal concentration-volume modeling. Journal of Geochemical Exploration, 108, 220-232. Martín-Sotoca, J. J., Tarquis, A. M., Saa-Requejo, A. and Grau, J. B. (2015). Pore detection in Computed Tomography (CT) soil images through singularity map analysis. Oral presentation at PedoFract VIII Congress (June, La Coruña, Spain).
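The "cross point of linear regressions" option can be illustrated as a two-segment fit in log-log space: compute the area A(s) above each threshold s, then scan candidate breakpoints and keep the one minimizing the combined least-squares error. A minimal sketch, assuming a 1D array of singularity values:

```python
import numpy as np

def ca_breakpoint(values, thresholds):
    """Concentration-Area method: A(s) = number of pixels with value >= s.
    In log-log space the curve splits into near-linear segments; the cross
    point of two fitted lines marks a segmentation threshold."""
    thresholds = np.asarray(thresholds, float)
    counts = np.array([np.sum(values >= s) for s in thresholds])
    keep = counts > 0                        # avoid log(0) at extreme thresholds
    logs, logA = np.log(thresholds[keep]), np.log(counts[keep])
    best_k, best_sse = None, np.inf
    for k in range(2, len(logs) - 2):        # candidate breakpoints
        sse = 0.0
        for sl in (slice(0, k), slice(k, None)):
            coef = np.polyfit(logs[sl], logA[sl], 1)
            sse += np.sum((np.polyval(coef, logs[sl]) - logA[sl]) ** 2)
        if sse < best_sse:
            best_k, best_sse = k, sse
    return np.exp(logs[best_k])              # threshold at the cross point
```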
Assessment of statistical methods used in library-based approaches to microbial source tracking.
Ritter, Kerry J; Carruthers, Ethan; Carson, C Andrew; Ellender, R D; Harwood, Valerie J; Kingsley, Kyle; Nakatsu, Cindy; Sadowsky, Michael; Shear, Brian; West, Brian; Whitlock, John E; Wiggins, Bruce A; Wilbur, Jayson D
2003-12-01
Several commonly used statistical methods for fingerprint identification in microbial source tracking (MST) were examined to assess the effectiveness of pattern-matching algorithms to correctly identify sources. Although numerous statistical methods have been employed for source identification, no widespread consensus exists as to which is most appropriate. A large-scale comparison of several MST methods, using identical fecal sources, presented a unique opportunity to assess the utility of several popular statistical methods. These included discriminant analysis, nearest neighbour analysis, maximum similarity and average similarity, along with several measures of distance or similarity. Threshold criteria for excluding uncertain or poorly matched isolates from final analysis were also examined for their ability to reduce false positives and increase prediction success. Six independent libraries used in the study were constructed from indicator bacteria isolated from fecal materials of humans, seagulls, cows and dogs. Three of these libraries were constructed using the rep-PCR technique and three relied on antibiotic resistance analysis (ARA). Five of the libraries were constructed using Escherichia coli and one using Enterococcus spp. (ARA). Overall, the outcome of this study suggests a high degree of variability across statistical methods. Despite large differences in correct classification rates among the statistical methods, no single statistical approach emerged as superior. Thresholds failed to consistently increase rates of correct classification and improvement was often associated with substantial effective sample size reduction. Recommendations are provided to aid in selecting appropriate analyses for these types of data.
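The threshold criterion examined above can be made concrete with a maximum-similarity classifier that refuses to assign poorly matched isolates. The sketch below is a simplified illustration: the Pearson similarity on fingerprint profiles and the 0.8 cut-off are assumptions, not the study's settings.

```python
import numpy as np

def classify_isolate(profile, library, labels, min_sim=0.8):
    """Maximum-similarity source assignment with a threshold criterion:
    isolates whose best match falls below min_sim are left unclassified
    rather than forced into a source class (reducing false positives)."""
    sims = np.array([np.corrcoef(profile, ref)[0, 1] for ref in library])
    best = int(np.argmax(sims))
    return labels[best] if sims[best] >= min_sim else "unclassified"
```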
NASA Astrophysics Data System (ADS)
Chen, Hai-Wen; McGurr, Mike; Brickhouse, Mark
2015-11-01
We present a newly developed feature transformation (FT) detection method for hyperspectral imagery (HSI) sensors. In essence, the FT method, by transforming the original features (spectral bands) to a different feature domain, may considerably increase the statistical separation between the target and background probability density functions, and thus may significantly improve target detection and identification performance, as evidenced by the test results in this paper. We show that by differentiating the original spectra, one can completely separate targets from the background using a single spectral band, leading to perfect detection results. In addition, we propose an automated best-spectral-band selection process with a double-threshold scheme that can rank the available spectral bands from best to worst for target detection. Finally, we also propose an automated cross-spectrum fusion process to further improve detection performance in the lower spectral range (<1000 nm) by selecting the best spectral band pair with multivariate analysis. Promising detection performance was achieved using a small background material signature library for concept proving, and was then further evaluated and verified using a real background HSI scene collected by a HYDICE sensor.
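As a hedged illustration of the differencing-plus-ranking idea: first-difference each spectrum along wavelength, then score every derived band by a two-class separation statistic and sort. The paper's double-threshold scheme is not detailed in the abstract, so Fisher's criterion is used below as a stand-in separation measure.

```python
import numpy as np

def rank_bands_by_separation(target_spectra, background_spectra):
    """First-difference the spectra along wavelength (the feature transform),
    then score each derived band by Fisher's two-class criterion and rank
    the bands from best to worst for target detection."""
    dt = np.diff(target_spectra, axis=1)     # rows: pixels, cols: bands
    db = np.diff(background_spectra, axis=1)
    num = (dt.mean(0) - db.mean(0)) ** 2     # squared mean separation per band
    den = dt.var(0) + db.var(0) + 1e-12      # within-class scatter per band
    return np.argsort(num / den)[::-1]       # band indices, best first
```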
NASA Astrophysics Data System (ADS)
Feng, Judy J.; Ip, Horace H.; Cheng, Shuk H.
2004-05-01
Many grey-level thresholding methods based on histograms or other statistics of the image of interest, such as maximum entropy, have been proposed in the past. However, most methods based on statistical analysis of the images take little account of the morphology of the objects of interest, which can provide very important indications for finding the optimum threshold, especially for organisms with special texture morphologies such as vasculature or neural networks in medical imaging. In this paper, we propose a novel method for thresholding fluorescent vasculature image series recorded with a confocal scanning laser microscope. After extracting the basic orientation of the vessel slice inside each sub-region partitioned from the images, we analyze the intensity profiles perpendicular to the vessel orientation to obtain a reasonable initial threshold for each region. The threshold values of regions neighbouring the region of interest, both in the x-y plane and in the optical direction, are then referenced to obtain the final region thresholds, which makes the whole stack of images more continuous. The resulting images are characterized by suppression of both noise and non-target tissues conglutinated to vessels, while improving vessel connectivity and edge definition. The value of the method for thresholding fluorescence images of biological objects is demonstrated by comparing the results of 3D vascular reconstruction.
Analyses of Fatigue Crack Growth and Closure Near Threshold Conditions for Large-Crack Behavior
NASA Technical Reports Server (NTRS)
Newman, J. C., Jr.
1999-01-01
A plasticity-induced crack-closure model was used to study fatigue crack growth and closure in thin 2024-T3 aluminum alloy under constant-R and constant-K(sub max) threshold testing procedures. Two methods of calculating crack-opening stresses were compared: one based on contact-K analyses and the other on crack-opening-displacement (COD) analyses. These methods gave nearly identical results under constant-amplitude loading, but under threshold simulations the contact-K analyses gave lower opening stresses than the contact-COD method. Crack-growth predictions tend to support the use of contact-K analyses. Crack-growth simulations showed that remote closure can cause a rapid rise in opening stresses in the near-threshold regime for low constraint and high applied stress levels. Under low applied stress levels and high constraint, a rise in opening stresses was not observed near threshold conditions, but crack-tip-opening displacements (CTOD) were of the order of measured oxide thicknesses in the 2024 alloy under constant-R simulations. In contrast, under constant-K(sub max) testing the CTOD near threshold conditions were an order of magnitude larger than measured oxide thicknesses. Residual plastic deformations under both constant-R and constant-K(sub max) threshold simulations were several times larger than the expected oxide thicknesses. Thus, residual plastic deformations, in addition to oxide and roughness, play an integral part in threshold development.
Derkach, Andriy; Chiang, Theodore; Gong, Jiafen; Addis, Laura; Dobbins, Sara; Tomlinson, Ian; Houlston, Richard; Pal, Deb K; Strug, Lisa J
2014-08-01
Sufficiently powered case-control studies with next-generation sequence (NGS) data remain prohibitively expensive for many investigators. If feasible, a more efficient strategy would be to include publicly available sequenced controls. However, these studies can be confounded by differences in sequencing platform; alignment, single nucleotide polymorphism and variant calling algorithms; read depth; and selection thresholds. Assuming one can match cases and controls on the basis of ethnicity and other potential confounding factors, and one has access to the aligned reads in both groups, we investigate the effect of systematic differences in read depth and selection threshold when comparing allele frequencies between cases and controls. We propose a novel likelihood-based method, the robust variance score (RVS), that substitutes genotype calls by their expected values given observed sequence data. We show theoretically that the RVS eliminates read depth bias in the estimation of minor allele frequency. We also demonstrate that, using simulated and real NGS data, the RVS method controls Type I error and has comparable power to the 'gold standard' analysis with the true underlying genotypes for both common and rare variants. An RVS R script and instructions can be found at strug.research.sickkids.ca, and at https://github.com/strug-lab/RVS. lisa.strug@utoronto.ca Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
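The core substitution in the RVS is to replace a hard genotype call with its conditional expectation given the reads, which removes read-depth-dependent calling bias. A minimal sketch of that step, assuming per-sample genotype likelihoods and Hardy-Weinberg priors (the robust variance correction that gives the method its name is omitted here):

```python
import numpy as np

def expected_genotype(genotype_likelihoods, maf):
    """Replace a hard genotype call by its conditional expectation
    E[G | reads] = sum_g g * P(g | data), using Hardy-Weinberg priors.
    genotype_likelihoods: (n_samples, 3) array of P(data | G = g), g = 0,1,2."""
    p = maf
    prior = np.array([(1 - p) ** 2, 2 * p * (1 - p), p ** 2])
    post = genotype_likelihoods * prior            # unnormalised posterior
    post /= post.sum(axis=1, keepdims=True)
    return post @ np.array([0.0, 1.0, 2.0])        # expected dosage per sample
```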
Quality Aware Compression of Electrocardiogram Using Principal Component Analysis.
Gupta, Rajarshi
2016-05-01
Electrocardiogram (ECG) compression finds wide application in various patient monitoring purposes. Quality control in ECG compression ensures reconstruction quality and its clinical acceptance for diagnostic decision making. In this paper, a quality-aware compression method for single-lead ECG is described using principal component analysis (PCA). After pre-processing, beat extraction and PCA decomposition, one of two independent quality criteria, namely a bit rate control (BRC) or an error control (EC) criterion, is set to select the optimal principal components, eigenvectors and their quantization level to achieve the desired bit rate or error measure. The selected principal components and eigenvectors are finally compressed using a modified delta and Huffman encoder. The algorithms were validated with 32 sets of MIT-BIH Arrhythmia data and 60 normal and 30 sets of diagnostic ECG data from the PTB Diagnostic ECG database (ptbdb), all at 1 kHz sampling. For BRC with a CR threshold of 40, an average compression ratio (CR), percentage root mean squared difference normalized (PRDN) and maximum absolute error (MAE) of 50.74, 16.22 and 0.243 mV, respectively, were obtained. For EC with an upper limit of 5% PRDN and 0.1 mV MAE, an average CR, PRDN and MAE of 9.48, 4.13 and 0.049 mV, respectively, were obtained. For mitdb record 117, the reconstruction quality could be preserved up to a CR of 68.96 by extending the BRC threshold. The proposed method yields better results than recently published works on quality-controlled ECG compression.
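The error-control (EC) branch can be pictured as: keep adding principal components until the PRDN of the reconstruction drops below the limit. A minimal sketch on a beat matrix (rows = extracted beats); quantization and the delta/Huffman stage are omitted, and PRDN is computed against the mean-removed signal as is conventional.

```python
import numpy as np

def compress_beats(beats, prdn_limit=5.0):
    """Error-control criterion: keep the smallest number of principal
    components whose reconstruction meets the PRDN limit."""
    mu = beats.mean(axis=0)
    U, S, Vt = np.linalg.svd(beats - mu, full_matrices=False)
    for k in range(1, len(S) + 1):
        recon = U[:, :k] * S[:k] @ Vt[:k] + mu       # rank-k reconstruction
        prdn = 100 * np.linalg.norm(beats - recon) / np.linalg.norm(beats - mu)
        if prdn <= prdn_limit:
            return k, recon                          # components kept, signal
    return len(S), beats
```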
Vanamail, P; Subramanian, S; Srividya, A; Ravi, R; Krishnamoorthy, K; Das, P K
2006-08-01
Lot quality assurance sampling (LQAS) with a two-stage sampling plan was applied for rapid monitoring of coverage after every round of mass drug administration (MDA). A Primary Health Centre (PHC) consisting of 29 villages in Thiruvannamalai district, Tamil Nadu was selected as the study area. Two threshold levels of coverage were used: threshold A (maximum: 60%; minimum: 40%) and threshold B (maximum: 80%; minimum: 60%). Based on these thresholds, one sampling plan each for A and B was derived, with the necessary sample size and the number of allowable defectives (defectives being those who have not received the drug). Using data generated through simple random sampling (SRSI) of 1,750 individuals in the study area, LQAS was validated with the above two sampling plans for its diagnostic and field applicability. Simultaneously, a household survey (SRSH) was conducted for validation and cost-effectiveness analysis. Based on the SRSH survey, the estimated coverage was 93.5% (CI: 91.7-95.3%). LQAS with threshold A revealed that by sampling a maximum of 14 individuals and allowing four defectives, the coverage was ≥60% in >90% of villages at the first stage. Similarly, with threshold B, by sampling a maximum of nine individuals and allowing four defectives, the coverage was ≥80% in >90% of villages at the first stage. These analyses suggest that the sampling plan (14, 4, 52, 25) of threshold A may be adopted in MDA to assess whether a minimum coverage of 60% has been achieved. However, to achieve the goal of elimination, the sampling plan (9, 4, 42, 29) of threshold B can identify villages in which the coverage is <80% so that remedial measures can be taken. Cost-effectiveness analysis showed that both options of LQAS are more cost-effective than SRSH in detecting a village with a given level of coverage. The cost per village was US$ 76.18 under SRSH; the cost of LQAS was US$ 65.81 and US$ 55.63 per village for thresholds A and B, respectively. The total financial cost of classifying a village correctly at the given threshold level by LQAS could thus be reduced by 14% and 26% relative to the cost of the conventional SRSH method.
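The operating characteristic of a single-stage plan like (n = 14, d = 4) follows directly from the binomial distribution: a village "passes" if at most d sampled individuals are uncovered. The sketch below computes that acceptance probability; the study's actual plans are two-stage, so this is a simplified first-stage view.

```python
from scipy.stats import binom

def acceptance_prob(n, d, coverage):
    """Probability that a village passes the lot test: at most d 'defectives'
    (unreached individuals) among n sampled, given the true coverage."""
    return binom.cdf(d, n, 1.0 - coverage)

# Threshold A plan: sample n = 14, allow d = 4 defectives
for cov in (0.40, 0.60, 0.80):
    print(f"coverage {cov:.0%}: P(accept) = {acceptance_prob(14, 4, cov):.3f}")
```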
Enhancement of surface damage resistance by selective chemical removal of CeO2
NASA Astrophysics Data System (ADS)
Kamimura, Tomosumi; Motokoshi, Shinji; Sakamoto, Takayasu; Jitsuno, Takahisa; Shiba, Haruya; Akamatsu, Shigenori; Horibe, Hideo; Okamoto, Takayuki; Yoshida, Kunio
2005-02-01
The laser-induced damage threshold of polished fused silica surfaces is much lower than the damage threshold of the bulk. It is well known that contamination of the polished surface is one of the causes of the low threshold for laser-induced surface damage. In particular, polishing compounds such as cerium dioxide (CeO2) used in the optical polishing process become embedded inside the surface layer and cannot be removed by conventional cleaning. To enhance surface damage resistance, various surface treatments have been applied to remove the embedded polishing compound. In this paper, we propose a new method using selective chemical removal with high-temperature sulfuric acid (H2SO4). Sulfuric acid dissolves only the CeO2 from the fused silica surface, and the surface roughness of fused silica treated with H2SO4 was maintained through the treatment process. At a wavelength of 355 nm, the surface damage threshold was drastically improved, to nearly the same level as that of the bulk. However, the effect of our treatment was not observed at a wavelength of 1064 nm. A comparison with our previous results obtained from other surface treatments is discussed.
Flood return level analysis of Peaks over Threshold series under changing climate
NASA Astrophysics Data System (ADS)
Li, L.; Xiong, L.; Hu, T.; Xu, C. Y.; Guo, S.
2016-12-01
Obtaining insight into future flood estimation is of great significance for water planning and management. Traditional flood return level analysis under the stationarity assumption has been challenged by changing environments. A method that takes the nonstationary context into consideration has been extended to derive flood return levels for Peaks over Threshold (POT) series. For POT series, a Poisson distribution is normally assumed to describe the arrival rate of exceedance events, but this distributional assumption has at times been reported as invalid. The Negative Binomial (NB) distribution is therefore proposed as an alternative to the Poisson assumption. Flood return levels were extrapolated in a nonstationary context for the POT series of the Weihe basin, China under future climate scenarios. The results show that the flood return levels estimated under nonstationarity differ depending on whether a Poisson or an NB distribution is assumed, and the difference is found to be related to the threshold value of the POT series. The study indicates the importance of distribution selection in flood return level analysis under nonstationarity and provides a reference on the impact of climate change on flood estimation in the Weihe basin.
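For reference, the standard POT return level under the Poisson-arrival assumption combines the GPD exceedance model with the annual exceedance rate; swapping in an NB arrival model changes the event-count distribution while the GPD part is unchanged. A minimal sketch with illustrative parameter values:

```python
import numpy as np

def pot_return_level(u, sigma, xi, lam, T):
    """T-year return level for a Peaks-over-Threshold model: GPD(sigma, xi)
    exceedances over threshold u, arriving as a Poisson process with rate
    lam events/year. (The NB alternative replaces only the arrival model.)"""
    if abs(xi) < 1e-9:                       # Gumbel limit as xi -> 0
        return u + sigma * np.log(lam * T)
    return u + (sigma / xi) * ((lam * T) ** xi - 1.0)

# Illustrative values only, not the Weihe basin estimates:
print(pot_return_level(u=500.0, sigma=120.0, xi=0.1, lam=2.5, T=100))
```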
NASA Astrophysics Data System (ADS)
Xie, Huan; Luo, Xin; Xu, Xiong; Wang, Chen; Pan, Haiyan; Tong, Xiaohua; Liu, Shijie
2016-10-01
Water bodies are a fundamental element of urban ecosystems, and water mapping is critical for urban and landscape planning and management. While remote sensing has increasingly been used for water mapping in rural areas, applying this spatially explicit approach in urban areas remains challenging, because urban water bodies are mainly small and spectral confusion between water and the complex features of the urban environment is widespread. The water index is the most common method for water extraction at the pixel level, and spectral mixture analysis (SMA) has recently been widely employed for analyzing the urban environment at the subpixel level. In this paper, we introduce an automatic subpixel water mapping method for urban areas using multispectral remote sensing data. The objectives of this research are: (1) to develop an automatic technique for extracting land-water mixed pixels using a water index (WI); (2) to derive the most representative endmembers of water and land by utilizing neighboring water pixels and an adaptively, iteratively selected optimal neighboring land pixel, respectively; and (3) to apply a linear unmixing model for subpixel water fraction estimation. Specifically, to automatically extract land-water pixels, locally weighted scatterplot smoothing is first applied to the original histogram curve of the WI image. The Otsu threshold is then used as the starting point for selecting land-water pixels from the histogram of the WI image, with the land and water thresholds determined from the slopes of the histogram curve. Based on this pixel-level processing, the image is divided into three parts: water pixels, land pixels, and mixed land-water pixels. Spectral mixture analysis is then applied to the land-water mixed pixels for water fraction estimation at the subpixel level. Under the assumption that the endmember signature of a target pixel should be more similar to adjacent pixels due to spatial dependence, the water and land endmembers are determined from neighboring pure water or pure land pixels within a given distance. To obtain the most representative endmembers for SMA, we designed an adaptive iterative endmember selection method based on the spatial similarity of adjacent pixels: according to spectral similarity in a spatially adjacent region, the land endmember spectrum is determined by selecting the most representative land pixel in a local window, and the water endmember spectrum is determined by averaging the water pixels in the local window. The proposed hierarchical processing method based on WI and SMA (WISMA) was applied to urban areas for reliability evaluation using Landsat-8 Operational Land Imager (OLI) images. For comparison, four methods at the pixel and subpixel levels were chosen. Results indicate that the water maps generated by the proposed method correspond closely with the reference water maps at subpixel precision, and a comprehensive analysis of different accuracy evaluation indexes (RMSE and SE) showed that WISMA achieved the best performance in water mapping.
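With one water and one land endmember, the linear unmixing step reduces to a one-parameter least-squares problem per mixed pixel. A minimal sketch, assuming the endmember spectra have already been selected by the neighborhood search described above:

```python
import numpy as np

def water_fraction(pixel, water_em, land_em):
    """Two-endmember linear unmixing: solve pixel = f*water + (1-f)*land
    in the least-squares sense over all bands, clipped to [0, 1]."""
    d = water_em - land_em
    f = np.dot(pixel - land_em, d) / np.dot(d, d)
    return float(np.clip(f, 0.0, 1.0))

# pixel, water_em, land_em: 1D arrays of reflectance over the sensor bands.
```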
Peng, Mei; Jaeger, Sara R; Hautus, Michael J
2014-03-01
Psychometric functions are predominantly used for estimating detection thresholds in vision and audition. However, the requirement of large data quantities for fitting psychometric functions (>30 replications) reduces their suitability in olfactory studies, because olfactory response data are often limited (<4 replications) due to the susceptibility of human olfactory receptors to fatigue and adaptation. This article introduces a new method for fitting individual-judge psychometric functions to olfactory data obtained using the current standard protocol, American Society for Testing and Materials (ASTM) E679. The slope parameter of the individual-judge psychometric function is fixed at the group-function value, and the same-shaped symmetrical sigmoid function is fitted using only the intercept. This study evaluated the proposed method by comparing it with two available methods. Comparison with conventional psychometric functions (fitted slope and intercept) indicated that the assumption of a fixed slope did not compromise the precision of the threshold estimates. No systematic difference was obtained between the proposed method and the ASTM method in terms of group threshold estimates or threshold distributions, but there were changes in the rank, by threshold, of judges in the group. Overall, the fixed-slope psychometric function is recommended for obtaining relatively reliable individual threshold estimates when the quantity of data is limited.
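The fixed-slope fit can be reproduced with any sigmoid whose slope is pinned to the group value, leaving only the threshold (intercept) free. A minimal sketch using a logistic function; note that ASTM E679 uses forced-choice data, and the chance-level correction is omitted here for brevity.

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_fixed_slope(conc, correct, group_slope):
    """Fit a logistic psychometric function P(c) = 1/(1 + exp(-b*(c - t)))
    with the slope b fixed at the group value; only the threshold t
    is estimated for the individual judge."""
    f = lambda c, t: 1.0 / (1.0 + np.exp(-group_slope * (c - t)))
    (t_hat,), _ = curve_fit(f, conc, correct, p0=[np.median(conc)])
    return t_hat

# conc: log concentration steps; correct: 0/1 detection outcomes (few replicates).
```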
Libong, Danielle; Bouchonnet, Stéphane; Ricordel, Ivan
2003-01-01
A gas chromatography-ion trap tandem mass spectrometry (GC-ion trap MS-MS) method for detection and quantitation of LSD in whole blood is presented. The sample preparation process, including a solid-phase extraction step with Bond Elut cartridges, was performed with 2 mL of whole blood. Eight microliters of the purified extract was injected with a cold on-column injection method. Positive chemical ionization was performed using acetonitrile as reagent gas; LSD was detected in the MS-MS mode. The chromatograms obtained from blood extracts showed the great selectivity of the method. GC-MS quantitation was performed using lysergic acid methylpropylamide as the internal standard. The response of the MS was linear for concentrations ranging from 0.02 ng/mL (detection threshold) to 10.0 ng/mL. Several parameters such as the choice of the capillary column, the choice of the internal standard and that of the ionization mode (positive CI vs. EI) were rationalized. Decomposition pathways under both ionization modes were studied. Within-day and between-day stability were evaluated.
Applications of spectral methods to turbulent magnetofluids in space and fusion research
NASA Technical Reports Server (NTRS)
Montgomery, D.; Voigt, R. G. (Editor); Gottlieb, D. (Editor); Hussaini, M. Y. (Editor)
1984-01-01
Recent and potential applications of spectral method computation to incompressible, dissipative magnetohydrodynamics are surveyed. Linear stability problems for one dimensional, quasi-equilibria are approachable through a close analogue of the Orr-Sommerfeld equation. It is likely that for Reynolds-like numbers above certain as-yet-undetermined thresholds, all magnetofluids are turbulent. Four recent effects in MHD turbulence are remarked upon, as they have displayed themselves in spectral method computations: (1) inverse cascades; (2) small-scale intermittent dissipative structures; (3) selective decays of ideal global invariants relative to each other; and (4) anisotropy induced by a mean dc magnetic field. Two more conjectured applications are suggested. All the turbulent processes discussed are sometimes involved in current carrying confined fusion magnetoplasmas and in space plasmas.
Automatic blood vessel based-liver segmentation using the portal phase abdominal CT
NASA Astrophysics Data System (ADS)
Maklad, Ahmed S.; Matsuhiro, Mikio; Suzuki, Hidenobu; Kawata, Yoshiki; Niki, Noboru; Shimada, Mitsuo; Iinuma, Gen
2018-02-01
Liver segmentation is the basis for computer-based planning of hepatic surgical interventions, and automatic segmentation of the liver is highly important in the diagnosis and analysis of hepatic diseases and in surgery planning. Blood vessels (BVs) have shown high value for liver segmentation. In our previous work, we developed a semi-automatic method that segments the liver from portal-phase abdominal CT images in two stages. The first stage was interactive segmentation of abdominal blood vessels (ABVs) and their subsequent classification into hepatic (HBVs) and non-hepatic (non-HBVs). This stage required five interactions: a selective threshold for bone segmentation, selection of two seed points for kidney segmentation, selection of the inferior vena cava (IVC) entrance for starting ABV segmentation, identification of the portal vein (PV) entrance to the liver, and identification of the IVC exit for classifying HBVs from other ABVs (non-HBVs). The second stage is automatic segmentation of the liver based on the segmented ABVs, as described in [4]. Toward full automation of our method, we developed a method [5] that segments ABVs automatically, addressing the first three interactions. In this paper, we propose full automation of the classification of ABVs into HBVs and non-HBVs and, consequently, full automation of the liver segmentation proposed in [4]. Results illustrate that the method is effective at segmenting the liver from portal-phase abdominal CT images.
DOE Office of Scientific and Technical Information (OSTI.GOV)
O’Connor, D; Nguyen, D; Voronenko, Y
Purpose: Integrated beam orientation and fluence map optimization is expected to be the foundation of robust automated planning but existing heuristic methods do not promise global optimality. We aim to develop a new method for beam angle selection in 4π non-coplanar IMRT systems based on solving (globally) a single convex optimization problem, and to demonstrate the effectiveness of the method by comparison with a state of the art column generation method for 4π beam angle selection. Methods: The beam angle selection problem is formulated as a large scale convex fluence map optimization problem with an additional group sparsity term that encourages most candidate beams to be inactive. The optimization problem is solved using an accelerated first-order method, the Fast Iterative Shrinkage-Thresholding Algorithm (FISTA). The beam angle selection and fluence map optimization algorithm is used to create non-coplanar 4π treatment plans for several cases (including head and neck, lung, and prostate cases) and the resulting treatment plans are compared with 4π treatment plans created using the column generation algorithm. Results: In our experiments the treatment plans created using the group sparsity method meet or exceed the dosimetric quality of plans created using the column generation algorithm, which was shown superior to clinical plans. Moreover, the group sparsity approach converges in about 3 minutes in these cases, as compared with runtimes of a few hours for the column generation method. Conclusion: This work demonstrates the first non-greedy approach to non-coplanar beam angle selection, based on convex optimization, for 4π IMRT systems. The method given here improves both treatment plan quality and runtime as compared with a state of the art column generation algorithm. When the group sparsity term is set to zero, we obtain an excellent method for fluence map optimization, useful when beam angles have already been selected. NIH R43CA183390, NIH R01CA188300, Varian Medical Systems; Part of this research took place while D. O'Connor was a summer intern at RefleXion Medical.
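The group-sparsity term is what turns fluence map optimization into beam selection: the proximal step inside FISTA shrinks each beam's whole beamlet block toward zero as a unit, so entire candidate beams switch off. A minimal sketch of that proximal operator (the dose objective, dose matrix, and step size are omitted):

```python
import numpy as np

def prox_group_l1(x, groups, t):
    """Proximal operator of the group-sparsity penalty t * sum_g ||x_g||_2.
    Each group g (the beamlet indices of one candidate beam) is shrunk as a
    block: small blocks go exactly to zero, deactivating that beam."""
    out = x.copy()
    for g in groups:                       # g: index array for one beam's beamlets
        nrm = np.linalg.norm(x[g])
        out[g] = 0.0 if nrm <= t else (1.0 - t / nrm) * x[g]
    return out

# Inside FISTA, this prox is applied after each gradient step on the
# smooth dose-fidelity term; beams whose blocks stay at zero are dropped.
```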
NASA Astrophysics Data System (ADS)
Rossi, M.; Luciani, S.; Valigi, D.; Kirschbaum, D.; Brunetti, M. T.; Peruccacci, S.; Guzzetti, F.
2017-05-01
Models for forecasting rainfall-induced landslides are mostly based on the identification of empirical rainfall thresholds obtained by exploiting rain gauge data. Despite their increased availability, satellite rainfall estimates are scarcely used for this purpose. Satellite data should be useful in ungauged and remote areas, or should provide a significant spatial and temporal reference in gauged areas. In this paper, the reliability of rainfall thresholds based on remotely sensed and rain gauge rainfall data for the prediction of landslide occurrence is analyzed. To date, the estimation of the uncertainty associated with empirical rainfall thresholds has mostly been based on a bootstrap resampling of the rainfall duration and cumulated event rainfall pairs (D,E) characterizing rainfall events responsible for past failures. This estimation does not consider the measurement uncertainty associated with D and E. In the paper, we propose (i) a new automated procedure to reconstruct the ED conditions responsible for landslide triggering and their uncertainties, and (ii) three new methods to identify rainfall thresholds for possible landslide occurrence, exploiting rain gauge and satellite data. In particular, the proposed methods are based on Least Squares (LS), Quantile Regression (QR) and Nonlinear Least Squares (NLS) statistical approaches. We applied the new procedure and methods to define empirical rainfall thresholds and their associated uncertainties in the Umbria region (central Italy) using both rain-gauge measurements and satellite estimates. We finally validated the thresholds and tested the effectiveness of the different threshold definition methods with independent landslide information. Among the three methods, NLS performed best in calculating thresholds over the full range of rainfall durations. We found that the thresholds obtained from satellite data are lower than those obtained from rain gauge measurements. This is in agreement with the literature, where satellite rainfall data underestimate the "ground" rainfall registered by rain gauges.
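Of the three proposed estimators, the quantile regression (QR) variant is the most compact to illustrate: fit the usual power-law threshold E = αD^β at a low quantile of the (D, E) cloud, so that most triggering events lie above the curve. A minimal sketch, with the 5% quantile as an assumed choice:

```python
import numpy as np
import statsmodels.api as sm

def fit_qr_threshold(D, E, tau=0.05):
    """Fit the power-law rainfall threshold E = alpha * D**beta at quantile
    tau: linear quantile regression in log-log space, so that roughly
    (1 - tau) of landslide-triggering (D, E) pairs lie above the curve."""
    X = sm.add_constant(np.log(D))           # columns: intercept, log duration
    res = sm.QuantReg(np.log(E), X).fit(q=tau)
    log_alpha, beta = res.params
    return np.exp(log_alpha), beta

# D: rainfall durations (h); E: cumulated event rainfall (mm) of past failures.
```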
Bettembourg, Charles; Diot, Christian; Dameron, Olivier
2015-01-01
Background The analysis of gene annotations referencing back to Gene Ontology plays an important role in the interpretation of high-throughput experiments results. This analysis typically involves semantic similarity and particularity measures that quantify the importance of the Gene Ontology annotations. However, there is currently no sound method supporting the interpretation of the similarity and particularity values in order to determine whether two genes are similar or whether one gene has some significant particular function. Interpretation is frequently based either on an implicit threshold, or an arbitrary one (typically 0.5). Here we investigate a method for determining thresholds supporting the interpretation of the results of a semantic comparison. Results We propose a method for determining the optimal similarity threshold by minimizing the proportions of false-positive and false-negative similarity matches. We compared the distributions of the similarity values of pairs of similar genes and pairs of non-similar genes. These comparisons were performed separately for all three branches of the Gene Ontology. In all situations, we found overlap between the similar and the non-similar distributions, indicating that some similar genes had a similarity value lower than the similarity value of some non-similar genes. We then extend this method to the semantic particularity measure and to a similarity measure applied to the ChEBI ontology. Thresholds were evaluated over the whole HomoloGene database. For each group of homologous genes, we computed all the similarity and particularity values between pairs of genes. Finally, we focused on the PPAR multigene family to show that the similarity and particularity patterns obtained with our thresholds were better at discriminating orthologs and paralogs than those obtained using default thresholds. Conclusion We developed a method for determining optimal semantic similarity and particularity thresholds. We applied this method on the GO and ChEBI ontologies. Qualitative analysis using the thresholds on the PPAR multigene family yielded biologically-relevant patterns. PMID:26230274
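The threshold-selection principle described above reduces to scanning candidate cut-offs and minimizing the summed proportions of false negatives (similar pairs falling below) and false positives (non-similar pairs falling above). A minimal sketch, assuming two arrays of similarity values for known similar and non-similar gene pairs:

```python
import numpy as np

def optimal_threshold(sim_similar, sim_nonsimilar):
    """Scan candidate thresholds and pick the one minimizing the summed
    proportions of false negatives (similar pairs below the cut) and
    false positives (non-similar pairs at or above it)."""
    cands = np.unique(np.concatenate([sim_similar, sim_nonsimilar]))
    best_t, best_err = None, np.inf
    for t in cands:
        fn = np.mean(sim_similar < t)
        fp = np.mean(sim_nonsimilar >= t)
        if fn + fp < best_err:
            best_t, best_err = t, fn + fp
    return best_t
```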
A ROC-based feature selection method for computer-aided detection and diagnosis
NASA Astrophysics Data System (ADS)
Wang, Songyuan; Zhang, Guopeng; Liao, Qimei; Zhang, Junying; Jiao, Chun; Lu, Hongbing
2014-03-01
Image-based computer-aided detection and diagnosis (CAD) has been a very active research topic, aiming to assist physicians in detecting lesions and distinguishing benign from malignant ones. However, the datasets fed into a classifier usually suffer from a small number of samples, as well as significantly fewer samples available in one class (with disease) than in the other, resulting in suboptimal classifier performance. Identifying the most characterizing features of the observed data for lesion detection is therefore critical for improving the sensitivity and minimizing the false positives of a CAD system. In this study, we propose a novel feature selection method, mR-FAST, that combines the minimal-redundancy-maximal-relevance (mRMR) framework with the selection metric FAST (feature assessment by sliding thresholds), which is based on the area under a ROC curve (AUC) generated on optimal simple linear discriminants. With three feature datasets extracted from CAD systems for colon polyps and bladder cancer, we show that the space of candidate features selected by mR-FAST is more characterizing for lesion detection, with higher AUC, enabling a compact subset of superior features to be found at low cost.
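A hedged reconstruction of the mR-FAST idea: relevance is a per-feature AUC (the FAST ingredient, computed here on raw feature values rather than fitted linear discriminants), and redundancy is the mean correlation with features already selected (the mRMR ingredient). The greedy trade-off below is an illustrative simplification, not the paper's exact algorithm.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def mr_fast_select(X, y, n_select):
    """Greedy mRMR-style selection: relevance = per-feature AUC against the
    binary labels; redundancy = mean |correlation| with already-chosen
    features. (Assumes higher feature values indicate the positive class.)"""
    auc = np.array([roc_auc_score(y, X[:, j]) for j in range(X.shape[1])])
    chosen = [int(np.argmax(auc))]
    corr = np.abs(np.corrcoef(X.T))          # feature-feature correlations
    while len(chosen) < n_select:
        red = corr[:, chosen].mean(axis=1)
        score = auc - red                    # relevance minus redundancy
        score[chosen] = -np.inf
        chosen.append(int(np.argmax(score)))
    return chosen
```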
Young, Brian; King, Jonathan L; Budowle, Bruce; Armogida, Luigi
2017-01-01
Amplicon (targeted) sequencing by massively parallel sequencing (PCR-MPS) is a potential method for use in forensic DNA analyses. In this application, PCR-MPS may supplement or replace other instrumental analysis methods such as capillary electrophoresis and Sanger sequencing for STR and mitochondrial DNA typing, respectively. PCR-MPS also may enable the expansion of forensic DNA analysis methods to include new marker systems such as single nucleotide polymorphisms (SNPs) and insertion/deletions (indels) that currently are assayable using various instrumental analysis methods including microarray and quantitative PCR. Acceptance of PCR-MPS as a forensic method will depend in part upon developing protocols and criteria that define the limitations of a method, including a defensible analytical threshold or method detection limit. This paper describes an approach to establish objective analytical thresholds suitable for multiplexed PCR-MPS methods. A definition is proposed for PCR-MPS method background noise, and an analytical threshold based on background noise is described.
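The generic recipe behind a noise-based analytical threshold is to characterize the background read-count distribution and place the cut-off several standard deviations above its mean. The paper develops its own noise definition, so the mean-plus-k·SD rule below is the textbook detection-limit convention, not the authors' exact formula.

```python
import numpy as np

def analytical_threshold(noise_counts, k=3.0):
    """Set an analytical threshold from background noise reads observed in
    negative or known-source samples: mean + k standard deviations
    (k ~ 3 is a common detection-limit convention)."""
    noise = np.asarray(noise_counts, dtype=float)
    return noise.mean() + k * noise.std(ddof=1)

# noise_counts: read counts at positions/alleles known to be artifactual.
```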
Tsaneva, L
1993-01-01
The results from an investigation of the threshold of discomfort in 385 operators from the firm "Kremikovtsi" are discussed. The most pronounced changes were found in operators with a tonal auditory threshold raised up to 45 dB and above 50 dB, with high confidence. The observed changes in the threshold of discomfort are classified into three groups: 1) raised tonal auditory threshold (up to 30 dB) without a decrease in the threshold of discomfort; 2) decreased threshold of discomfort (by about 15-20 dB) with a raised tonal auditory threshold (up to 45 dB); 3) decreased threshold of discomfort against the background of a tonal auditory threshold raised above 50 dB. Four figures present audiograms illustrating the state of the tonal auditory threshold, the field of hearing and the threshold of discomfort. The field of hearing of the operators in groups III and IV is narrowed, and in the latter also deformed. The explanation of this pathophysiological phenomenon relates to the increased effect of sound irritation and the presence of the recruitment phenomenon, with possible involvement of the central end of the auditory analyser. It is emphasized that the threshold of discomfort is a sensitive index of the state of each operator's individual norms for speech-sound-noise discomfort. (ABSTRACT TRUNCATED AT 250 WORDS)
Pérez-Báez, Wendy; García-Latorre, Ethel A; Maldonado-Martínez, Héctor Aquiles; Coronado-Martínez, Iris; Flores-García, Leonardo; Taja-Chayeb, Lucía
2017-10-01
Treatment in metastatic colorectal cancer (mCRC) has expanded with monoclonal antibodies targeting the epidermal growth factor receptor, but is restricted to patients with a wild-type (WT) KRAS mutational status. The most sensitive assays for KRAS mutation detection in formalin-fixed paraffin-embedded (FFPE) tissues are based on real-time PCR. Among them, high resolution melting analysis (HRMA) is a simple, fast, highly sensitive, specific and cost-effective method proposed as an adjunct for KRAS mutation detection. However, the method for categorizing WT vs mutant sequences in HRMA is not clearly specified in the available studies, and the impact of FFPE artifacts on HRMA performance has not been addressed either. Avowedly adequate samples from 104 consecutive mCRC patients were tested for KRAS mutations by Therascreen™ (FDA-validated test), HRMA, and HRMA with UDG pre-treatment to reverse FFPE fixation artifacts. Comparisons of KRAS status allocation among the three methods were made. Focusing on HRMA as a screening test, ROC curve analyses were performed for HRMA and HRMA-UDG against Therascreen™, in order to evaluate their discriminative power and to determine the threshold of profile concordance between the WT control and the sample for KRAS status determination. Comparing HRMA and HRMA-UDG against Therascreen™ as a surrogate gold standard, sensitivity was 1 for both HRMA and HRMA-UDG; specificity and positive predictive values were, respectively, 0.838 and 0.939, and 0.777 and 0.913. As evaluated by the McNemar test, HRMA-UDG allocated samples to a WT/mutated genotype in a significantly different way from HRMA (p < 0.001). On the other hand, HRMA-UDG did not differ from Therascreen™ (p = 0.125). ROC curve analysis showed a significant discriminative power for both HRMA and HRMA-UDG against Therascreen™ (respectively, AUC of 0.978, p < 0.0001, 95% CI 0.957-0.999; and AUC of 0.98, p < 0.0001, 95% CI 0.000-1.0). For HRMA as a screening tool, the best threshold (degree of concordance between sample curves and the WT control) was attained at 92.14% for HRMA (specificity of 0.887) and at 92.55% for HRMA-UDG (specificity of 0.952). HRMA is a highly sensitive method for KRAS mutation detection, with apparently adequate and statistically significant discriminative power. FFPE sample fixation artifacts have an impact on HRMA results, so pre-treatment with UDG should be strongly suggested for HRMA on FFPE samples. The choice of threshold for melting curve concordance also has a great impact on HRMA performance; a threshold of 93% or greater might be adequate when using HRMA as a screening tool. Further validation of this threshold is required. Copyright © 2017 Elsevier Ltd. All rights reserved.
Functional properties of models for direction selectivity in the retina.
Grzywacz, N M; Koch, C
1987-01-01
Poggio and Reichardt (Kybernetik, 13:223-227, 1973) showed that if the average response of a visual system to a moving stimulus is directionally selective, then this sensitivity must be mediated by a nonlinear operation. In particular, it has been proposed that at the behavioral level, motion-sensitive biological systems are implemented by quadratic nonlinearities (Hassenstein and Reichardt: Z. Naturforsch., 11b:513-524, 1956; van Santen and Sperling: J. Opt. Soc. Am. [A] 1:451-473, 1984; Adelson and Bergen: J. Opt. Soc. Am. [A], 2:284-299, 1985). This paper analyzes theoretically two nonlinear neural mechanisms that possibly underlie retinal direction selectivity and explores the conditions under which they behave as a quadratic nonlinearity. The first mechanism is shunting inhibition (Torre and Poggio: Proc. R. Soc. Lond. [Biol.], 202:409-416, 1978), and the second consists of the linear combination of the outputs of a depolarizing and a hyperpolarizing synapse, followed by a threshold operation. It was found that although sometimes possible, it is in practice hard to approximate the Shunting Inhibition and the Threshold models for direction selectivity by quadratic systems. For instance, the level of the threshold on the Threshold model must be close to the steady-state level of the cell's combined synaptic input. Furthermore, for both the Shunting and the Threshold models, the approximation by a quadratic system is only possible for a small range of low contrast stimuli and for situations where the rectifications due to the ON-OFF mechanisms, and to the ganglion cells' action potentials, can be linearized. The main question that this paper leaves open is, how do we account for the apparent quadratic properties of motion perception given that the same properties seem so fragile at the single cell level? Finally, as a result of this study, some system analysis experiments were proposed that can distinguish between different instances of the models.
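The two mechanisms analyzed above can be written down schematically: the Threshold model rectifies a linear sum of a depolarizing and a hyperpolarizing input, while shunting inhibition divides rather than subtracts. The steady-state forms below are textbook simplifications for illustration, not the paper's full dynamical analysis.

```python
import numpy as np

def threshold_model(exc, inh, theta):
    """Threshold model: linear combination of a depolarizing (exc) and a
    hyperpolarizing (inh) synaptic input, followed by rectification at
    level theta."""
    return np.maximum(exc - inh - theta, 0.0)

def shunting_model(g_exc, g_inh, g_leak=1.0):
    """Shunting inhibition: the inhibitory conductance divides the response,
    as in a single-compartment membrane equation at steady state."""
    return g_exc / (g_leak + g_exc + g_inh)

# Preferred direction: excitation leads inhibition, so the sum clears theta;
# null direction: inhibition coincides with excitation and the sum stays
# subthreshold, giving direction-selective output.
```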
Model selection for clustering of pharmacokinetic responses.
Guerra, Rui P; Carvalho, Alexandra M; Mateus, Paulo
2018-08-01
Pharmacokinetics comprises the study of drug absorption, distribution, metabolism and excretion over time. Clinical pharmacokinetics, focusing on therapeutic management, offers important insights towards personalised medicine through the study of the efficacy and toxicity of drug therapies. This study is hampered by subjects' high variability in drug blood concentration when starting a therapy with the same drug dosage. Clustering of pharmacokinetic responses has recently been addressed as a way to stratify subjects and provide different drug doses for each stratum. This clustering method, however, is not able to determine the correct number of clusters automatically, relying on a user-defined parameter for collapsing clusters that are closer than a given heuristic threshold. We aim to use information-theoretical approaches to achieve parameter-free model selection. We propose two model selection criteria for clustering pharmacokinetic responses, founded on the Minimum Description Length and on the Normalised Maximum Likelihood. Experimental results show the ability of these model selection schemes to unveil the correct number of clusters underlying the mixture of pharmacokinetic responses. In this work we devised two model selection criteria to determine the number of clusters in a mixture of pharmacokinetic curves, advancing over previous works. A cost-efficient parallel implementation in Java of the proposed method is publicly available to the community. Copyright © 2018 Elsevier B.V. All rights reserved.
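The flavor of a description-length criterion for choosing the number of clusters: total code length = data fit (negative log-likelihood) plus a parameter-cost penalty, minimized over k. The sketch below uses a Gaussian mixture and a BIC-style two-part code as a stand-in for the paper's MDL/NML criteria, which are tailored to pharmacokinetic curves.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def pick_k_mdl(curves, k_max=8):
    """Two-part code length ~ BIC: negative log-likelihood plus (d/2) log n
    for the parameters; choose the cluster count minimizing it.
    curves: (n_subjects, n_timepoints) array of sampled PK responses."""
    n, p = curves.shape
    best_k, best_len = 1, np.inf
    for k in range(1, k_max + 1):
        gm = GaussianMixture(n_components=k, random_state=0).fit(curves)
        d = k * (p + p * (p + 1) / 2) + (k - 1)     # means, covariances, weights
        code_len = -gm.score(curves) * n + 0.5 * d * np.log(n)
        if code_len < best_len:
            best_k, best_len = k, code_len
    return best_k
```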
Entropy-Based Search Algorithm for Experimental Design
NASA Astrophysics Data System (ADS)
Malakar, N. K.; Knuth, K. H.
2011-03-01
The scientific method relies on the iterated processes of inference and inquiry. The inference phase consists of selecting the most probable models based on the available data; whereas the inquiry phase consists of using what is known about the models to select the most relevant experiment. Optimizing inquiry involves searching the parameterized space of experiments to select the experiment that promises, on average, to be maximally informative. In the case where it is important to learn about each of the model parameters, the relevance of an experiment is quantified by Shannon entropy of the distribution of experimental outcomes predicted by a probable set of models. If the set of potential experiments is described by many parameters, we must search this high-dimensional entropy space. Brute force search methods will be slow and computationally expensive. We present an entropy-based search algorithm, called nested entropy sampling, to select the most informative experiment for efficient experimental design. This algorithm is inspired by Skilling's nested sampling algorithm used in inference and borrows the concept of a rising threshold while a set of experiment samples are maintained. We demonstrate that this algorithm not only selects highly relevant experiments, but also is more efficient than brute force search. Such entropic search techniques promise to greatly benefit autonomous experimental design.
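Before introducing the nested sampler, the underlying selection rule is worth making concrete: score each candidate experiment by the Shannon entropy of the outcomes predicted by a set of probable models, and run the highest-entropy one. The brute-force scan below is exactly what nested entropy sampling is designed to avoid; `predict` is an assumed user-supplied function returning a discretised outcome.

```python
import numpy as np

def most_informative(experiments, models, predict):
    """Score each candidate experiment by the Shannon entropy of the
    distribution of outcomes predicted by the sampled models; the
    highest-entropy experiment is, on average, the most informative."""
    def entropy(counts):
        p = counts / counts.sum()
        p = p[p > 0]
        return -(p * np.log(p)).sum()
    scores = []
    for e in experiments:
        outcomes = np.array([predict(m, e) for m in models])  # discretised
        _, counts = np.unique(outcomes, return_counts=True)
        scores.append(entropy(counts))
    return experiments[int(np.argmax(scores))]
```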
Trend-Residual Dual Modeling for Detection of Outliers in Low-Cost GPS Trajectories.
Chen, Xiaojian; Cui, Tingting; Fu, Jianhong; Peng, Jianwei; Shan, Jie
2016-12-01
Low-cost GPS receivers have become a ubiquitous and integral part of our daily life. Despite noticeable advantages such as being cheap, small, light, and easy to use, their limited positioning accuracy devalues and hampers their wide application for reliable mapping and analysis. Two conventional techniques for removing outliers from a GPS trajectory are thresholding and Kalman-based methods, for which it is difficult to select appropriate thresholds and to model the trajectories. Moreover, they are insensitive to medium and small outliers, especially for low-sample-rate trajectories. This paper proposes a model-based GPS trajectory cleaner. Rather than examining speed and acceleration or assuming a pre-determined trajectory model, we first use a cubic smoothing spline to adaptively model the trend of the trajectory. The residuals, i.e., the differences between the trend and the GPS measurements, are then further modeled by time-series methods. Outliers are detected by scoring the residuals at every GPS trajectory point. Compared to the conventional procedures, the trend-residual dual-modeling approach has the following features: (a) it models trajectories and detects outliers adaptively; (b) only one critical value, for the outlier scores, needs to be set; (c) it robustly detects unapparent outliers; and (d) it is effective in cleaning outliers from GPS trajectories with low sample rates. Tests were carried out on three real-world GPS trajectory datasets. The evaluation demonstrates, on average, 9.27 times better performance in outlier detection for GPS trajectories than thresholding and Kalman-based techniques.
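A simplified version of the trend-residual split: fit a cubic smoothing spline to one coordinate of the trajectory, then score the residuals against a rolling robust scale so that a single critical value on |z| flags outliers. The window size and MAD-based scoring below are illustrative assumptions standing in for the paper's time-series residual model.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def outlier_scores(t, x, smooth=None, window=10):
    """Trend-residual dual modelling, simplified: a cubic smoothing spline
    captures the trend of one coordinate (t must be increasing); residuals
    are scored with a rolling MAD-based z-score, and large |z| flags outliers."""
    trend = UnivariateSpline(t, x, k=3, s=smooth)(t)
    r = x - trend
    z = np.empty_like(r)
    for i in range(len(r)):
        seg = r[max(0, i - window): i + window + 1]
        med = np.median(seg)
        mad = np.median(np.abs(seg - med)) + 1e-9
        z[i] = 0.6745 * (r[i] - med) / mad           # robust z-score
    return np.abs(z)

# Apply to each coordinate (lat, lon) and threshold |z| with one critical value.
```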
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mitchell K Meyer
Blister-threshold testing of fuel plates is a standard method through which the safety margin for operation of plate-type fuel in research and test reactors is assessed. The blister-threshold temperature is indicative of the ability of fuel to operate at high temperatures for short periods of time (transient conditions) without failure. This method of testing was applied to the newly developed U-Mo monolithic fuel system. Blister annealing studies on the U-Mo monolithic fuel plates began in 2007, with the Reduced Enrichment for Research and Test Reactors (RERTR)-6 experiment, and they have continued as the U-Mo fuel system has evolved through the research and development process. Blister anneal threshold temperatures from early irradiation experiments (RERTR-6 through RERTR-10) ranged from 400 to 500°C. These temperatures were projected to be acceptable for NRC-licensed research reactors and the high-power Advanced Test Reactor (ATR) and the High Flux Isotope Reactor (HFIR) based on current safety-analysis reports (SARs). Initial blister testing results from the RERTR-12 experiment capsules X1 and X2 showed a decrease in the blister-threshold temperatures. Blister threshold temperatures from this experiment ranged from 300 to 400°C. Selected plates from the AFIP-4 experiment, which was fabricated using a process similar to that used to fabricate the RERTR-12 experiment, also underwent blister testing to determine whether results would be similar. The measured blister-threshold temperatures from the AFIP-4 plates fell within the same blister-threshold temperature range measured in the RERTR-12 plates. Investigation of the cause of this decrease in blister threshold temperature is being conducted under the guidance of Idaho National Laboratory PLN-4155, “Analysis of Low Blister Threshold Temperatures in the RERTR-12 and AFIP-4 Experiments,” and is driven by hypotheses. The main focus of the investigation is in the following areas:
1. Fabrication variables
2. Pre-irradiation characterization
3. Irradiation conditions
4. Post-irradiation examination
5. Additional blister testing
6. Mechanical modeling
This report documents the preliminary results of this investigation. Several hypotheses can be dismissed as a result of this investigation. Two primary categories of causes remain. The most prominent theory, supported by the data, is that low blister-threshold temperature is the result of mechanical energy imparted on the samples during the fabrication process (hot and cold rolling) without adequate post-processing (annealing). The mechanisms are not clearly understood and require further investigation, but can be divided into two categories:
• Residual stress
• Undesirable interaction boundary and/or U-Mo microstructure change
A secondary theory that cannot be dismissed with the information that is currently available is that a change in the test conditions has resulted in a statistically significant downward shift of measured blister temperature. This report outlines the results of the forensic investigations conducted to date. The data and conclusions presented in this report are preliminary. Definitive cause-and-effect relationships will be established by future experimental programs.
NASA Astrophysics Data System (ADS)
López-Coto, R.; Mazin, D.; Paoletti, R.; Blanch Bigas, O.; Cortina, J.
2016-04-01
Imaging atmospheric Cherenkov telescopes (IACTs) such as the Major Atmospheric Gamma-ray Imaging Cherenkov (MAGIC) telescopes endeavor to reach the lowest possible energy threshold. In doing so, the trigger system is a key element. Reducing the trigger threshold is hampered by the rapid increase of accidental triggers generated by ambient light (the so-called Night Sky Background, NSB). In this paper we present a topological trigger, dubbed Topo-trigger, which rejects events on the basis of their relative orientation in the telescope cameras. We have simulated and tested the trigger selection algorithm in the MAGIC telescopes. The algorithm was tested using Monte Carlo simulations and shows a rejection of 85% of the accidental stereo triggers while preserving 99% of the gamma rays. A full implementation of this trigger system would achieve an increase in collection area between 10 and 20% at the energy threshold. The analysis energy threshold of the instrument is expected to decrease by ~8%. The selection algorithm was tested on real MAGIC data taken with the current trigger configuration and no γ-like events were found to be lost.
A Cyfip2-Dependent Excitatory Interneuron Pathway Establishes the Innate Startle Threshold.
Marsden, Kurt C; Jain, Roshan A; Wolman, Marc A; Echeverry, Fabio A; Nelson, Jessica C; Hayer, Katharina E; Miltenberg, Ben; Pereda, Alberto E; Granato, Michael
2018-04-17
Sensory experiences dynamically modify whether animals respond to a given stimulus, but it is unclear how innate behavioral thresholds are established. Here, we identify molecular and circuit-level mechanisms underlying the innate threshold of the zebrafish startle response. From a forward genetic screen, we isolated five mutant lines with reduced innate startle thresholds. Using whole-genome sequencing, we identify the causative mutation for one line to be in the fragile X mental retardation protein (FMRP)-interacting protein cyfip2. We show that cyfip2 acts independently of FMRP and that reactivation of cyfip2 restores the baseline threshold after phenotype onset. Finally, we show that cyfip2 regulates the innate startle threshold by reducing neural activity in a small group of excitatory hindbrain interneurons. Thus, we identify a selective set of genes critical to establishing an innate behavioral threshold and uncover a circuit-level role for cyfip2 in this process. Copyright © 2018 The Author(s). Published by Elsevier Inc. All rights reserved.
Receptive fields selection for binary feature description.
Fan, Bin; Kong, Qingqun; Trzcinski, Tomasz; Wang, Zhiheng; Pan, Chunhong; Fua, Pascal
2014-06-01
Feature description for local image patches is widely used in computer vision. While the conventional way to design local descriptors is based on expert experience and knowledge, learning-based methods for designing local descriptors have become more and more popular because of their good performance and data-driven property. This paper proposes a novel data-driven method for designing binary feature descriptors, which we call the receptive fields descriptor (RFD). Technically, RFD is constructed by thresholding responses of a set of receptive fields, which are selected from a large number of candidates according to their distinctiveness and correlations in a greedy way. Using two different kinds of receptive fields (namely rectangular pooling area and Gaussian pooling area) for selection, we obtain two binary descriptors, RFDR and RFDG, accordingly. Image matching experiments on the well-known patch data set and Oxford data set demonstrate that RFD significantly outperforms the state-of-the-art binary descriptors, and is comparable with the best float-valued descriptors at a fraction of processing time. Finally, experiments on object recognition tasks confirm that both RFDR and RFDG successfully bridge the performance gap between binary descriptors and their floating-point competitors.
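A rough sketch of the construction: pooled responses are binarized by per-field thresholds, and fields are kept greedily by distinctiveness while rejecting candidates too correlated with those already chosen. The median threshold and the balance-based distinctiveness score below are stand-ins for the paper's learned quantities:

```python
import numpy as np

def build_rfd(responses, n_select, corr_max=0.8):
    """Greedy receptive-field selection for a binary descriptor.

    responses: (n_patches, n_fields) pooled responses of candidate fields.
    Returns the binary descriptor matrix and the indices of kept fields."""
    thresholds = np.median(responses, axis=0)        # per-field threshold
    bits = (responses > thresholds).astype(np.uint8)
    distinct = np.abs(bits.mean(axis=0) - 0.5)       # balanced bits score best
    order = np.argsort(distinct)                     # most distinctive first
    chosen = []
    for j in order:
        if all(abs(np.corrcoef(bits[:, j], bits[:, k])[0, 1]) < corr_max
               for k in chosen):                     # reject correlated fields
            chosen.append(j)
        if len(chosen) == n_select:
            break
    return bits[:, chosen], chosen
```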
NASA Astrophysics Data System (ADS)
Jones, Mackenzie L.; Hickox, Ryan C.; Mutch, Simon J.; Croton, Darren J.; Ptak, Andrew F.; DiPompeo, Michael A.
2017-07-01
In studies of the connection between active galactic nuclei (AGNs) and their host galaxies, there is widespread disagreement on some key aspects of the connection. These disagreements largely stem from a lack of understanding of the nature of the full underlying AGN population. Recent attempts to probe this connection utilize both observations and simulations to correct for a missed population, but presently are limited by intrinsic biases and complicated models. We take a simple simulation for galaxy evolution and add a new prescription for AGN activity to connect galaxy growth to dark matter halo properties and AGN activity to star formation. We explicitly model selection effects to produce an “observed” AGN population for comparison with observations and empirically motivated models of the local universe. This allows us to bypass the difficulties inherent in models that attempt to infer the AGN population by inverting selection effects. We investigate the impact of selecting AGNs based on thresholds in luminosity or Eddington ratio on the “observed” AGN population. By limiting our model AGN sample in luminosity, we are able to recreate the observed local AGN luminosity function and specific star formation-stellar mass distribution, and show that using an Eddington ratio threshold introduces less bias into the sample by selecting the full range of growing black holes, despite the challenge of selecting low-mass black holes. We find that selecting AGNs using these various thresholds yields samples with different AGN host galaxy properties.
NASA Astrophysics Data System (ADS)
Wang, Heming; Liu, Yu; Song, Yongchen; Zhao, Yuechao; Zhao, Jiafei; Wang, Dayong
2012-11-01
Pore structure is one of the important factors affecting the properties of porous media, but it is difficult to describe the complexity of pore structure exactly. Fractal theory is an effective and available method for quantifying the complex and irregular pore structure. In this paper, the fractal dimension calculated by the box-counting method based on fractal theory was applied to characterize the pore structure of artificial cores. The microstructure or pore distribution in the porous material was obtained using nuclear magnetic resonance imaging (MRI). Three classical fractals and one sand packed bed model were selected as the experimental material to investigate the influence of box sizes, threshold value, and image resolution when performing fractal analysis. To avoid the influence of box sizes, a sequence of divisors of the image was proposed and compared with two other algorithms (geometric sequence and arithmetic sequence) for its ability to partition the image completely with the least fitting error. Threshold values selected manually and automatically showed that thresholding plays an important role during image binarization and that the minimum-error method can be used to obtain an appropriate or reasonable one. Images obtained under different pixel matrices in MRI were used to analyze the influence of image resolution. Higher image resolution can detect more of the pore structure and increase its irregularity. Accounting for these influencing factors, fractal analysis on four kinds of artificial cores showed that the fractal dimension can be used to distinguish the different kinds of artificial cores and that the relationship between fractal dimension and porosity or permeability can be expressed by the model D = a - b·ln(x + c).
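The box-counting estimate itself is standard and easy to reproduce. The sketch below assumes a pre-binarized pore image and, following the paper's recommendation, works best when the box sizes are divisors of the image dimensions:

```python
import numpy as np

def box_counting_dimension(binary_img, box_sizes):
    """Estimate the fractal dimension of a binary (pore/solid) image as the
    slope of log N(s) versus log(1/s), where N(s) counts boxes of side s
    containing at least one pore pixel."""
    counts = []
    for s in box_sizes:
        h = binary_img.shape[0] // s * s          # crop to a multiple of s
        w = binary_img.shape[1] // s * s
        blocks = binary_img[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(np.count_nonzero(blocks.any(axis=(1, 3))))
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(box_sizes, dtype=float)),
                          np.log(counts), 1)
    return slope
```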
NASA Astrophysics Data System (ADS)
Liang, J.; Liu, D.
2017-12-01
Emergency responses to floods require timely information on water extents that can be produced by satellite-based remote sensing. As SAR images can be acquired in adverse illumination and weather conditions, they are particularly suitable for delineating water extent during a flood event. Thresholding SAR imagery is one of the most widely used approaches to delineate water extent. However, most studies apply only one threshold to separate water and dry land without considering the complexity and variability of different dry land surface types in an image. This paper proposes a new thresholding method for SAR imagery to delineate water from different other land cover types. A probability distribution of SAR backscatter intensity is fitted for each land cover type, including water, before a flood event, and the intersection between two distributions is regarded as a threshold to classify the two. To extract water, a set of thresholds is applied to several pairs of land cover types—water and urban or water and forest. The subsets are merged to form the water distribution for the SAR image during or after the flooding. Experiments show that this land-cover-based thresholding approach outperformed traditional single thresholding by about 5% to 15%. This method has great application potential given the broad acceptance of thresholding-based methods and the availability of land cover data, especially for heterogeneous regions.
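For two land-cover classes fitted with normal distributions, the class threshold sits where the two densities intersect, which reduces to a quadratic equation. A sketch follows (the Gaussian fit is an assumption; the abstract does not name the distribution family):

```python
import numpy as np

def gaussian_intersection(mu1, s1, mu2, s2):
    """Backscatter threshold between two land-cover classes, taken as the
    intersection of two fitted normal densities (solve p1(x) = p2(x))."""
    a = 1 / s1**2 - 1 / s2**2
    b = 2 * (mu2 / s2**2 - mu1 / s1**2)
    c = mu1**2 / s1**2 - mu2**2 / s2**2 + 2 * np.log(s1 / s2)
    roots = np.roots([a, b, c])          # handles a == 0 (equal variances) too
    # keep the root lying between the two class means
    return next(r.real for r in roots
                if min(mu1, mu2) <= r.real <= max(mu1, mu2))
```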
Methods for threshold determination in multiplexed assays
Tammero, Lance F. Bentley; Dzenitis, John M; Hindson, Benjamin J
2014-06-24
Methods for determination of threshold values of signatures included in an assay are described. Each signature enables detection of a target. The methods determine a probability density function of negative samples and a corresponding false positive rate curve. A false positive criterion is established, and a threshold for that signature is determined as the point at which the false positive rate curve intersects the false positive criterion. A method for quantitative analysis and interpretation of assay results, together with a method for determination of a desired limit of detection of a signature in an assay, are also described.
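A minimal sketch of the described procedure, assuming a kernel density estimate stands in for the negative-sample probability density function:

```python
import numpy as np
from scipy.stats import gaussian_kde

def signature_threshold(negative_signals, fp_criterion=0.01):
    """Threshold for one assay signature: estimate the density of
    negative-sample signals, build the false-positive-rate curve
    FPR(t) = P(signal > t | negative), and return the first t where it
    falls to the chosen false-positive criterion."""
    kde = gaussian_kde(negative_signals)
    ts = np.linspace(min(negative_signals), 2 * max(negative_signals), 2000)
    # survival function of the fitted density = false positive rate
    fpr = np.array([kde.integrate_box_1d(t, np.inf) for t in ts])
    return ts[np.argmax(fpr <= fp_criterion)]
```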
Selection of entropy-measure parameters for knowledge discovery in heart rate variability data
2014-01-01
Background Heart rate variability is the variation of the time interval between consecutive heartbeats. Entropy is a commonly used tool to describe the regularity of data sets. Entropy functions are defined using multiple parameters, the selection of which is controversial and depends on the intended purpose. This study describes the results of tests conducted to support parameter selection, towards the goal of enabling further biomarker discovery. Methods This study deals with approximate, sample, fuzzy, and fuzzy measure entropies. All data were obtained from PhysioNet, a free-access, on-line archive of physiological signals, and represent various medical conditions. Five tests were defined and conducted to examine the influence of: varying the threshold value r (as multiples of the sample standard deviation σ, or the entropy-maximizing rChon), the data length N, the weighting factors n for fuzzy and fuzzy measure entropies, and the thresholds rF and rL for fuzzy measure entropy. The results were tested for normality using Lilliefors' composite goodness-of-fit test. Consequently, the p-value was calculated with either a two-sample t-test or a Wilcoxon rank sum test. Results The first test shows a cross-over of entropy values with regard to a change of r. Thus, a higher entropy value cannot simply be equated with higher irregularity; it is rather an indicator of differences in regularity. N should be at least 200 data points for r = 0.2 σ and should even exceed a length of 1000 for r = rChon. The results for the weighting parameters n for the fuzzy membership function show different behavior when coupled with different r values; therefore, the weighting parameters have been chosen independently for the different threshold values. The tests concerning rF and rL showed that there is no optimal choice, but r = rF = rL is reasonable with r = rChon or r = 0.2σ. Conclusions Some of the tests showed a dependency of the test significance on the data at hand. Nevertheless, as the medical conditions are unknown beforehand, compromises had to be made. Optimal parameter combinations are suggested for the methods considered. Yet, due to the high number of potential parameter combinations, further investigations of entropy for heart rate variability data will be necessary. PMID:25078574
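For reference, a compact (quadratic-memory) sample entropy with the common choice r = 0.2σ discussed above; this is the textbook definition, not the authors' code:

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """Sample entropy of an RR-interval series: compare the counts of
    matching templates of length m and m + 1 under Chebyshev distance,
    with self-matches excluded and r = r_factor * sigma."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std(ddof=0)

    def count_matches(length):
        templates = np.array([x[i:i + length] for i in range(len(x) - length)])
        d = np.max(np.abs(templates[:, None] - templates[None, :]), axis=2)
        return (np.sum(d <= r) - len(templates)) / 2   # exclude self-matches

    b, a = count_matches(m), count_matches(m + 1)
    return -np.log(a / b)
```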
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Kyung-Min; Min Kim, Chul; Moon Jeong, Tae, E-mail: jeongtm@gist.ac.kr
A computational method based on a first-principles multiscale simulation has been used for calculating the optical response and the ablation threshold of an optical material irradiated with an ultrashort intense laser pulse. The method employs Maxwell's equations to describe laser pulse propagation and time-dependent density functional theory to describe the generation of conduction band electrons in an optical medium. Optical properties, such as reflectance and absorption, were investigated for laser intensities in the range 10^10 W/cm^2 to 2 × 10^15 W/cm^2 based on the theory of generation and spatial distribution of the conduction band electrons. The method was applied to investigate the changes in the optical reflectance of α-quartz bulk, half-wavelength thin-film, and quarter-wavelength thin-film and to estimate their ablation thresholds. Despite the adiabatic local density approximation used in calculating the exchange–correlation potential, the reflectance and the ablation threshold obtained from our method agree well with previous theoretical and experimental results. The method can be applied to estimate the ablation thresholds for optical materials in general. The ablation threshold data can be used to design ultra-broadband high-damage-threshold coating structures.
NASA Technical Reports Server (NTRS)
Hirsch, David B.; Williams, James H.; Harper, Susan A.; Beeson, Harold; Pedley, Michael D.
2007-01-01
Materials selection for spacecraft is based on an upward flammability test conducted in a quiescent environment at the highest expected oxygen concentration. The test conditions and pass/fail logic do not provide sufficient quantitative materials flammability information for an advanced space exploration program. A modified approach has been suggested for the determination of materials' self-extinguishment limits. The flammability threshold information will allow NASA to identify materials with increased flammability risk from oxygen concentration and total pressure changes, minimize potential impacts, and allow for development of sound requirements for new spacecraft and extraterrestrial landers and habitats. This paper provides data on oxygen concentration self-extinguishment limits under quiescent conditions for selected materials considered for the Constellation Program.
Detection of immunocytological markers in photomicroscopic images
NASA Astrophysics Data System (ADS)
Friedrich, David; zur Jacobsmühlen, Joschka; Braunschweig, Till; Bell, André; Chaisaowong, Kraisorn; Knüchel-Clarke, Ruth; Aach, Til
2012-03-01
Early detection of cervical cancer can be achieved through visual analysis of cell anomalies. The established PAP smear achieves a sensitivity of 50-90%; most false negative results are caused by mistakes in the preparation of the specimen or by reader variability in the subjective visual investigation. Since cervical cancer is caused by human papillomavirus (HPV), the detection of HPV-infected cells opens new perspectives for screening of precancerous abnormalities. Immunocytochemical preparation marks HPV-positive cells in brush smears of the cervix with high sensitivity and specificity. The goal of this work is the automated detection of all marker-positive cells in microscopic images of a sample slide stained with an immunocytochemical marker. A color separation technique is used to estimate the concentrations of the immunocytochemical marker stain as well as of the counterstain used to color the nuclei. Segmentation methods based on Otsu's threshold selection method and Mean Shift are adapted to the task of segmenting marker-positive cells and their nuclei. The best detection performance for single marker-positive cells was achieved with the adapted thresholding method, with a sensitivity of 95.9%. The contours differed by a modified Hausdorff Distance (MHD) of 2.8 μm. Nuclei of single marker-positive cells were detected with a sensitivity of 95.9% and MHD = 1.02 μm.
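Otsu's threshold selection, referenced here and analyzed in this collection's opening abstract, can be written compactly for an 8-bit stain-concentration image; this is the standard formulation, not the authors' adapted variant:

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's threshold for an 8-bit image: pick the gray level that
    maximizes the between-class variance of foreground and background."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                     # class-0 probability
    mu = np.cumsum(p * np.arange(256))       # first cumulative moment
    mu_t = mu[-1]                            # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b2 = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    return int(np.nanargmax(sigma_b2))       # level of max between-class variance
```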
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feng, Y; Olsen, J.; Parikh, P.
2014-06-01
Purpose: Evaluate commonly used segmentation algorithms on a commercially available real-time MR image guided radiotherapy (MR-IGRT) system (ViewRay), compare the strengths and weaknesses of each method, with the purpose of improving motion tracking for more accurate radiotherapy. Methods: MR motion images of bladder, kidney, duodenum, and liver tumor were acquired for three patients using a commercial on-board MR imaging system and an imaging protocol used during MR-IGRT. A series of 40 frames were selected for each case to cover at least 3 respiratory cycles. Thresholding, Canny edge detection, fuzzy k-means (FKM), k-harmonic means (KHM), and reaction-diffusion level set evolution (RD-LSE), along with the ViewRay treatment planning and delivery system (TPDS), were included in the comparisons. To evaluate the segmentation results, expert manual contouring of the organs or tumor from a physician was used as a ground-truth. Metric values of sensitivity, specificity, Jaccard similarity, and Dice coefficient were computed for comparison. Results: In the segmentation of a single image frame, all methods successfully segmented the bladder and kidney, but only FKM, KHM and TPDS were able to segment the liver tumor and the duodenum. For segmenting motion image series, the TPDS method had the highest sensitivity, Jaccard, and Dice coefficients in segmenting bladder and kidney, while FKM and KHM had a slightly higher specificity. A similar pattern was observed when segmenting the liver tumor and the duodenum. The Canny method is not suitable for consistently segmenting motion frames in an automated process, while thresholding and RD-LSE cannot consistently segment a liver tumor and the duodenum. Conclusion: The study compared six different segmentation methods and showed the effectiveness of the ViewRay TPDS algorithm in segmenting motion images during MR-IGRT. Future studies include a selection of conformal segmentation methods based on image/organ-specific information, different filtering methods and their influences on the segmentation results. Parag Parikh receives a research grant from ViewRay. Sasa Mutic has consulting and research agreements with ViewRay. Yanle Hu receives travel reimbursement from ViewRay. Iwan Kawrakow and James Dempsey are ViewRay employees.
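The Jaccard and Dice overlap metrics used in the comparison are straightforward to compute from binary masks:

```python
import numpy as np

def overlap_metrics(pred, truth):
    """Jaccard similarity and Dice coefficient between a segmented mask
    and an expert ground-truth mask (both boolean arrays)."""
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    jaccard = inter / union
    dice = 2 * inter / (pred.sum() + truth.sum())
    return jaccard, dice
```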
Kramer, Gerbrand Maria; Frings, Virginie; Hoetjes, Nikie; Hoekstra, Otto S; Smit, Egbert F; de Langen, Adrianus Johannes; Boellaard, Ronald
2016-09-01
Change in (18)F-FDG uptake may predict response to anticancer treatment. PERCIST suggests a threshold of 30% change in SUV to define partial response and progressive disease. Evidence underlying these thresholds consists of mixed stand-alone PET and PET/CT data with variable uptake intervals and no consensus on the number of lesions to be assessed. Additionally, there is increasing interest in alternative (18)F-FDG uptake measures such as metabolically active tumor volume and total lesion glycolysis (TLG). The aim of this study was to comprehensively investigate the repeatability of various quantitative whole-body (18)F-FDG metrics in non-small cell lung cancer (NSCLC) patients as a function of tracer uptake interval and lesion selection strategies. Eleven NSCLC patients, with at least 1 intrathoracic lesion 3 cm or greater, underwent double baseline whole-body (18)F-FDG PET/CT scans at 60 and 90 min after injection within 3 d. All (18)F-FDG-avid tumors were delineated with a 50% threshold of SUVpeak adapted for local background. SUVmax, SUVmean, SUVpeak, TLG, metabolically active tumor volume, and tumor-to-blood and -liver ratios were evaluated, as well as the influence of lesion selection and 2 methods for correction of uptake time differences. The best repeatability was found using the SUV metrics of the averaged PERCIST target lesions (repeatability coefficients < 10%). The correlation between test and retest scans was strong for all uptake measures at either uptake interval (intraclass correlation coefficient > 0.97 and R(2) > 0.98). There were no significant differences in repeatability between data obtained 60 and 90 min after injection. When only PERCIST-defined target lesions were included (n = 34), repeatability improved for all uptake values. Normalization to liver or blood uptake or glucose correction did not improve repeatability. However, after correction for uptake time the correlation of SUV measures and TLG between the 60- and 90-min data significantly improved without affecting test-retest performance. This study suggests that a 15% change of SUVmean/SUVpeak at 60 min after injection can be used to assess response in advanced NSCLC patients if up to 5 PERCIST target lesions are assessed. Lower thresholds could be used in averaged PERCIST target lesions (<10%). © 2016 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
Bierer, Julie Arenberg; Faulkner, Kathleen F; Tremblay, Kelly L
2011-01-01
The goal of this study was to compare cochlear implant behavioral measures and electrically evoked auditory brain stem responses (EABRs) obtained with a spatially focused electrode configuration. It has been shown previously that channels with high thresholds, when measured with the tripolar configuration, exhibit relatively broad psychophysical tuning curves. The elevated threshold and degraded spatial/spectral selectivity of such channels are consistent with a poor electrode-neuron interface, defined as suboptimal electrode placement or reduced nerve survival. However, the psychophysical methods required to obtain these data are time intensive and may not be practical during a clinical mapping session, especially for young children. Here, we have extended the previous investigation to determine whether a physiological approach could provide a similar assessment of channel functionality. We hypothesized that, in accordance with the perceptual measures, higher EABR thresholds would correlate with steeper EABR amplitude growth functions, reflecting a degraded electrode-neuron interface. Data were collected from six cochlear implant listeners implanted with the HiRes 90k cochlear implant (Advanced Bionics). Single-channel thresholds and most comfortable listening levels were obtained for stimuli that varied in presumed electrical field size by using the partial tripolar configuration, for which a fraction of current (σ) from a center active electrode returns through two neighboring electrodes and the remainder through a distant indifferent electrode. EABRs were obtained in each subject for the two channels having the highest and lowest tripolar (σ = 1 or 0.9) behavioral threshold. Evoked potentials were measured with both the monopolar (σ = 0) and a more focused partial tripolar (σ ≥ 0.50) configuration. Consistent with previous studies, EABR thresholds were highly and positively correlated with behavioral thresholds obtained with both the monopolar and partial tripolar configurations. The Wave V amplitude growth functions with increasing stimulus level showed the predicted effect of shallower growth for the partial tripolar than for the monopolar configuration, but this was observed only for the low-threshold channels. In contrast, high-threshold channels showed the opposite effect; steeper growth functions were seen for the partial tripolar configuration. These results suggest that behavioral thresholds or EABRs measured with a restricted stimulus can be used to identify potentially impaired cochlear implant channels. Channels having high thresholds and steep growth functions would likely not activate the appropriate spatially restricted region of the cochlea, leading to suboptimal perception. As a clinical tool, quick identification of impaired channels could lead to patient-specific mapping strategies and result in improved speech and music perception.
Kavakiotis, Ioannis; Samaras, Patroklos; Triantafyllidis, Alexandros; Vlahavas, Ioannis
2017-11-01
Single Nucleotide Polymorphisms (SNPs) are nowadays becoming the marker of choice for biological analyses involving a wide range of applications with great medical, biological, economic and environmental interest. Classification tasks, i.e., the assignment of individuals to groups of origin based on their (multi-locus) genotypes, are performed in many fields such as forensic investigations, discrimination between wild and/or farmed populations, and others. These tasks should be performed with a small number of loci, for computational as well as biological reasons. Thus, feature selection should precede classification tasks, especially for SNP datasets, where the number of features can amount to hundreds of thousands or millions. In this paper, we present a novel data mining approach, called FIFS - Frequent Item Feature Selection, based on the use of frequent items for selection of the most informative markers from population genomic data. It is a modular method, consisting of two main components. The first one identifies the most frequent and unique genotypes for each sampled population. The second one selects the most appropriate among them, in order to create the informative SNP subsets to be returned. The proposed method (FIFS) was tested on a real dataset, which comprised a comprehensive coverage of pig breed types present in Britain. This dataset consisted of 446 individuals divided in 14 sub-populations, genotyped at 59,436 SNPs. Our method outperforms the state-of-the-art and baseline methods in every case. More specifically, our method surpassed the assignment accuracy threshold of 95% needing only half the number of SNPs selected by other methods (FIFS: 28 SNPs; Delta: 70 SNPs; pairwise FST: 70 SNPs; In: 100 SNPs). Our approach successfully deals with the problem of informative marker selection in high dimensional genomic datasets. It offers better results compared to existing approaches and can aid biologists in selecting the most informative markers with maximum discrimination power for optimization of cost-effective panels with applications related to e.g. species identification, wildlife management, and forensics. Copyright © 2017 Elsevier Ltd. All rights reserved.
Dark field photoelectron emission microscopy of micron scale few layer graphene
NASA Astrophysics Data System (ADS)
Barrett, N.; Conrad, E.; Winkler, K.; Krömker, B.
2012-08-01
We demonstrate dark field imaging in photoelectron emission microscopy (PEEM) of heterogeneous few-layer graphene (FLG) furnace-grown on SiC(000-1). Energy-filtered, threshold PEEM is used to locate distinct zones of FLG. In each region, selected by a field aperture, the k-space information is imaged using appropriate transfer optics. By selecting the photoelectron intensity at a given wave vector and using the inverse transfer optics, dark field PEEM gives a spatial distribution of the angular photoelectron emission. In the results presented here, the wave vector coordinates of the Dirac cones characteristic of commensurate rotations of FLG on SiC(000-1) are selected, providing a map of the commensurate rotations across the surface. This special type of contrast is therefore a method to map the spatial distribution of the local band structure and offers a new laboratory tool for the characterisation of technically relevant, microscopically structured matter.
NASA Astrophysics Data System (ADS)
Therrien, A. C.; Lemaire, W.; Lecoq, P.; Fontaine, R.; Pratte, J.-F.
2018-01-01
The advantages of Time-of-Flight positron emission tomography (TOF-PET) have pushed the development of detectors with better time resolution. In particular, Silicon Photomultipliers (SiPM) have evolved tremendously in the past decade and arrays with a fully digital readout are the next logical step (dSiPM). New multi-timestamp methods use the precise time information of multiple photons to estimate the time of a PET event with greater accuracy, resulting in excellent time resolution. We propose a method which uses the same timestamps as the time estimator to perform energy discrimination, thus using data obtained within 5 ns of the beginning of the event. Having collected all the necessary information, the dSiPM could then be disabled for the remaining scintillation while dedicated electronics process the collected data. This would reduce afterpulsing as the SPAD would be turned off for several hundred nanoseconds, emptying the majority of traps. The proposed method uses a strategy based on subtraction and minimal electronics to reject energy below a selected threshold. This method achieves an error rate of less than 3% for photopeak discrimination (threshold at 400 keV) for dark count rates up to 100 cps/μm2, time-to-digital converter resolution up to 50 ps and a photon detection efficiency ranging from 10 to 70%.
NASA Astrophysics Data System (ADS)
Wan, Renzhi; Zu, Yunxiao; Shao, Lin
2018-04-01
The blood echo signal obtained by medical ultrasound Doppler devices always includes a vascular wall pulsation signal. The traditional method to remove the wall signal is a high-pass filter, which also removes the low-frequency part of the blood flow signal. Some scholars have put forward a method based on region-selective reduction, which first estimates the wall pulsation signal and then removes it from the mixed signal. Apparently, this method uses the correlation between wavelet coefficients to distinguish the blood signal from the wall signal, but in fact it is a kind of wavelet threshold de-noising method, whose effect is not ideal. In order to achieve a better result, this paper proposes an improved method based on wavelet coefficient correlation to separate the blood signal from the wall signal, and the algorithm is verified by computer simulation.
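For context, a generic wavelet threshold de-noising baseline (the kind of approach the paper argues is insufficient for wall-signal removal) might look like the following PyWavelets sketch; the db4 wavelet, decomposition level, and universal threshold are conventional choices, not the paper's:

```python
import numpy as np
import pywt

def wavelet_denoise(signal, wavelet="db4", level=5):
    """Generic wavelet threshold de-noising: decompose, soft-threshold the
    detail coefficients with the universal threshold, reconstruct."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745     # noise level estimate
    t = sigma * np.sqrt(2 * np.log(len(signal)))       # universal threshold
    coeffs[1:] = [pywt.threshold(c, t, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)
```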
Novel wavelet threshold denoising method in axle press-fit zone ultrasonic detection
NASA Astrophysics Data System (ADS)
Peng, Chaoyong; Gao, Xiaorong; Peng, Jianping; Wang, Ai
2017-02-01
Axles are an important part of railway locomotives and vehicles. Periodic ultrasonic inspection of axles can effectively detect and monitor axle fatigue cracks. However, in the axle press-fit zone, the complex interface contact condition reduces the signal-to-noise ratio (SNR). Therefore, the probability of false positives and false negatives increases. In this work, a novel wavelet threshold function is created to remove noise and suppress press-fit interface echoes in axle ultrasonic defect detection. The novel wavelet threshold function with two variables is designed to ensure the precision of the optimum searching process. Based on the positive correlation between the correlation coefficient and SNR, and on the experimental observation that the defect and the press-fit interface echo have different axle-circumferential correlation characteristics, a discrete optimum search for the two undetermined variables in the novel wavelet threshold function is conducted. The performance of the proposed method is assessed by comparing it with traditional threshold methods using real data. The statistical results for the amplitude and the peak SNR of defect echoes show that the proposed wavelet threshold denoising method not only maintains the amplitude of defect echoes but also achieves a higher peak SNR.
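The paper's threshold function is not reproduced in the abstract; the sketch below is a hypothetical two-variable threshold function with the described character (soft-like near the threshold, approaching hard thresholding for large coefficients), whose parameters (alpha, beta) would be tuned by the discrete optimum search:

```python
import numpy as np

def two_param_threshold(w, t, alpha, beta):
    """Hypothetical two-variable wavelet threshold function: coefficients
    below t are zeroed; above t, the shrinkage beta*t / (1 + alpha*(|w|-t))
    decays with |w|, so the rule tends to hard thresholding for large |w|."""
    w = np.asarray(w, dtype=float)
    out = np.zeros_like(w)
    mask = np.abs(w) > t
    decay = 1.0 + alpha * (np.abs(w[mask]) - t)     # >= 1 for alpha >= 0
    out[mask] = np.sign(w[mask]) * (np.abs(w[mask]) - beta * t / decay)
    return out
```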
Habing, Greg; Djordjevic, Catherine; Schuenemann, Gustavo M; Lakritz, Jeff
2016-08-01
Reductions in livestock antimicrobial use (AMU) can be achieved through identification of effective antimicrobial alternatives as well as accurate and stringent identification of cases requiring antimicrobial therapy. Objective measurements of selectivity that incorporate appropriate case definitions are necessary to understand the need and potential for reductions in AMU through judicious use. The objective of this study was to measure selectivity using a novel disease severity treatment threshold for calf diarrhea, and identify predictors of more selective application of antimicrobials among conventional dairy producers. A second objective of this study was to describe the usage frequency and perceptions of efficacy of common antimicrobial alternatives among conventional and organic producers. The cross-sectional survey was mailed to Michigan and Ohio, USA dairy producers and contained questions on AMU attitudes, AMU practices, veterinary-written protocols, and antimicrobial alternatives. The treatment threshold, defined based on the case severity where the producer would normally apply antimicrobials, was identified with a series of descriptions with increasing severity, and ordinal multivariable logistic regression was used to determine the association between the treatment threshold and individual or herd characteristics. The response rate was 49% (727/1488). Overall, 42% of conventional producers reported any veterinary-written treatment protocol, and 27% (113/412) of conventional producers had a veterinary-written protocol for the treatment of diarrhea that included a case identification. The majority (58%, 253/437) of conventional producers, but a minority (7%) of organic producers disagreed that antibiotic use in agriculture led to resistant bacterial infections in people. Among conventional producers, the proportion of producers applying antimicrobials for therapy increased from 13% to 67% with increasing case severity. The treatment threshold was low, medium, and high for 11% (47/419), 57% (251/419), and 28% (121/419) of conventional producers, respectively. Treatment threshold was not significantly associated with the use of protocols or frequency of veterinary visits; however, individuals with more concern for the public health impact of livestock AMU had a significantly higher treatment threshold (i.e. more selective) (p<0.05). Alternative therapies were used by both organic and conventional producers, but, garlic, aloe, and "other herbal therapies" with little documented efficacy were used by a majority (>60%) of organic producers. Overall, findings from this study highlight the need for research on antimicrobial alternatives, wider application of treatment protocols, and farm personnel education and training on diagnostic criteria for initiation of antimicrobial therapy. Copyright © 2016 Elsevier B.V. All rights reserved.
Regional rainfall thresholds for landslide occurrence using a centenary database
NASA Astrophysics Data System (ADS)
Vaz, Teresa; Luís Zêzere, José; Pereira, Susana; Cruz Oliveira, Sérgio; Garcia, Ricardo A. C.; Quaresma, Ivânia
2018-04-01
This work proposes a comprehensive method to assess rainfall thresholds for landslide initiation using a centenary landslide database associated with a single centenary daily rainfall data set. The method is applied to the Lisbon region and includes the rainfall return period analysis that was used to identify the critical rainfall combination (cumulated rainfall duration) related to each landslide event. The spatial representativeness of the reference rain gauge is evaluated and the rainfall thresholds are assessed and calibrated using the receiver operating characteristic (ROC) metrics. Results show that landslide events located up to 10 km from the rain gauge can be used to calculate the rainfall thresholds in the study area; however, these thresholds may be used with acceptable confidence up to 50 km from the rain gauge. The rainfall thresholds obtained using linear and potential regression perform well in ROC metrics. However, the intermediate thresholds based on the probability of landslide events established in the zone between the lower-limit threshold and the upper-limit threshold are much more informative as they indicate the probability of landslide event occurrence given rainfall exceeding the threshold. This information can be easily included in landslide early warning systems, especially when combined with the probability of rainfall above each threshold.
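One common ROC-based calibration, choosing the cumulated-rainfall threshold that maximizes Youden's J = TPR − FPR, can be sketched as below; the abstract does not state which ROC statistic was optimized, so J is an assumption:

```python
import numpy as np

def calibrate_threshold(rain, landslide, candidates):
    """Pick the rainfall threshold maximizing Youden's J = TPR - FPR.

    rain: cumulated event rainfall per day/event;
    landslide: boolean flag marking days/events with landslides."""
    best_t, best_j = None, -np.inf
    for t in candidates:
        alarm = rain >= t
        tpr = np.mean(alarm[landslide])        # hit rate
        fpr = np.mean(alarm[~landslide])       # false alarm rate
        if tpr - fpr > best_j:
            best_t, best_j = t, tpr - fpr
    return best_t
```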
Optimal Binarization of Gray-Scaled Digital Images via Fuzzy Reasoning
NASA Technical Reports Server (NTRS)
Dominguez, Jesus A. (Inventor); Klinko, Steven J. (Inventor)
2007-01-01
A technique for finding an optimal threshold for binarization of a gray scale image employs fuzzy reasoning. A triangular membership function is employed which is dependent on the degree to which the pixels in the image belong to either the foreground class or the background class. Use of a simplified linear fuzzy entropy factor function facilitates short execution times and use of membership values between 0.0 and 1.0 for improved accuracy. To improve accuracy further, the membership function employs lower and upper bound gray level limits that can vary from image to image and are selected to be equal to the minimum and the maximum gray levels, respectively, that are present in the image to be converted. To identify the optimal binarization threshold, an iterative process is employed in which different possible thresholds are tested and the one providing the minimum fuzzy entropy measure is selected.
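One plausible reading of the iterative procedure (the patent's exact membership and entropy factor functions are not given in the abstract) is sketched below: per-image gray-level bounds, a triangular-style membership peaking at each class mean, and a linear fuzzy entropy to minimize:

```python
import numpy as np

def fuzzy_binarization_threshold(gray):
    """Scan candidate thresholds; for each, give every pixel a membership
    (0..1) to its own class that peaks at the class mean, and keep the
    threshold minimizing a linear fuzzy entropy 2*min(mu, 1-mu)."""
    g = gray.ravel().astype(float)
    lo, hi = g.min(), g.max()            # per-image gray-level bounds
    best_t, best_e = None, np.inf
    for t in np.arange(lo + 1, hi):
        m0, m1 = g[g <= t].mean(), g[g > t].mean()        # class means
        dist = np.where(g <= t, np.abs(g - m0), np.abs(g - m1))
        mu = 1.0 - dist / (hi - lo)                       # triangular-style
        e = np.mean(2 * np.minimum(mu, 1 - mu))           # linear fuzzy entropy
        if e < best_e:
            best_t, best_e = t, e
    return best_t
```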
Image intensifier gain uniformity improvements in sealed tubes by selective scrubbing
Thomas, S.W.
1995-04-18
The gain uniformity of sealed microchannel plate image intensifiers (MCPIs) is improved by selectively scrubbing the high gain sections with a controlled bright light source. Using the premise that ions returning to the cathode from the microchannel plate (MCP) damage the cathode and reduce its sensitivity, a HeNe laser beam light source is raster scanned across the cathode of a microchannel plate image intensifier (MCPI) tube. Cathode current is monitored and when it exceeds a preset threshold, the sweep rate is decreased 1000 times, giving 1000 times the exposure to cathode areas with sensitivity greater than the threshold. The threshold is set at the cathode current corresponding to the lowest sensitivity in the active cathode area so that sensitivity of the entire cathode is reduced to this level. This process reduces tube gain by between 10% and 30% in the high gain areas while gain reduction in low gain areas is negligible. 4 figs.
Lin, Nan; Jiang, Junhai; Guo, Shicheng; Xiong, Momiao
2015-01-01
Due to the advancement in sensor technology, the growing volume of large medical image data makes it possible to visualize anatomical changes in biological tissues. As a consequence, medical images have the potential to enhance the diagnosis of disease, the prediction of clinical outcomes and the characterization of disease progression. But in the meantime, the growing data dimensions pose great methodological and computational challenges for the representation and selection of features in image cluster analysis. To address these challenges, we first extend functional principal component analysis (FPCA) from one dimension to two dimensions to fully capture the spatial variation of the image signals. The image signals contain a large number of redundant features which provide no additional information for clustering analysis. The widely used methods for removing the irrelevant features are sparse clustering algorithms using a lasso-type penalty to select the features. However, the accuracy of clustering using a lasso-type penalty depends on the selection of the penalty parameters and the threshold value. In practice, they are difficult to determine. Recently, randomized algorithms have received a great deal of attention in big data analysis. This paper presents a randomized algorithm for accurate feature selection in image clustering analysis. The proposed method is applied to both the liver and kidney cancer histology image data from the TCGA database. The results demonstrate that the randomized feature selection method coupled with functional principal component analysis substantially outperforms the current sparse clustering algorithms in image cluster analysis. PMID:26196383
Wavelet tree structure based speckle noise removal for optical coherence tomography
NASA Astrophysics Data System (ADS)
Yuan, Xin; Liu, Xuan; Liu, Yang
2018-02-01
We report a new speckle noise removal algorithm in optical coherence tomography (OCT). Though wavelet domain thresholding algorithms have demonstrated superior advantages in suppressing noise magnitude and preserving image sharpness in OCT, the wavelet tree structure has not been investigated in previous applications. In this work, we propose an adaptive wavelet thresholding algorithm via exploiting the tree structure in wavelet coefficients to remove the speckle noise in OCT images. The threshold for each wavelet band is adaptively selected following a special rule to retain the structure of the image across different wavelet layers. Our results demonstrate that the proposed algorithm outperforms conventional wavelet thresholding, with significant advantages in preserving image features.
Genetic variance of tolerance and the toxicant threshold model.
Tanaka, Yoshinari; Mano, Hiroyuki; Tatsuta, Haruki
2012-04-01
A statistical genetics method is presented for estimating the genetic variance (heritability) of tolerance to pollutants on the basis of a standard acute toxicity test conducted on several isofemale lines of cladoceran species. To analyze the genetic variance of tolerance in the case when the response is measured as a few discrete states (quantal endpoints), the authors attempted to apply the threshold character model in quantitative genetics to the threshold model separately developed in ecotoxicology. The integrated threshold model (toxicant threshold model) assumes that the response of a particular individual occurs at a threshold toxicant concentration and that the individual tolerance characterized by the individual's threshold value is determined by genetic and environmental factors. As a case study, the heritability of tolerance to p-nonylphenol in the cladoceran species Daphnia galeata was estimated by using the maximum likelihood method and nested analysis of variance (ANOVA). Broad-sense heritability was estimated to be 0.199 ± 0.112 by the maximum likelihood method and 0.184 ± 0.089 by ANOVA; both results implied that the species examined had the potential to acquire tolerance to this substance by evolutionary change. Copyright © 2012 SETAC.
Lin, Guoping; Candela, Y; Tillement, O; Cai, Zhiping; Lefèvre-Seguin, V; Hare, J
2012-12-15
A method based on thermal bistability for ultralow-threshold microlaser optimization is demonstrated. When sweeping the pump laser frequency across a pump resonance, the dynamic thermal bistability slows down the power variation. The resulting line shape modification enables a real-time monitoring of the laser characteristic. We demonstrate this method for a functionalized microsphere exhibiting a submicrowatt laser threshold. This approach is confirmed by comparing the results with a step-by-step recording in quasi-static thermal conditions.
McCambridge, Jim; Kypri, Kypros; McElduff, Patrick
2014-02-01
Reductions in drinking among individuals randomised to control groups in brief alcohol intervention trials are common and suggest that asking study participants about their drinking may itself cause them to reduce their consumption. We sought to test the hypothesis that the statistical artefact regression to the mean (RTM) explains part of the reduction in such studies. 967 participants in a cohort study of alcohol consumption in New Zealand provided data at baseline and again six months later. We use graphical methods and apply thresholds of 8, 12, 16 and 20 in AUDIT scores to explore RTM. There was a negative association between baseline AUDIT scores and change in AUDIT scores from baseline to six months, which, in the absence of bias and confounding, is RTM. Students with lower baseline scores tended to have higher follow-up scores and conversely, those with higher baseline scores tended to have lower follow-up scores. When a threshold score of 8 was used to select a subgroup, the observed mean change was approximately half of that observed without a threshold. The application of higher thresholds produced greater apparent reductions in alcohol consumption. Part of the reduction seen in the control groups of brief alcohol intervention trials is likely to be due to RTM, and the amount of change is likely to be greater as the threshold for entry to the trial increases. Quantification of RTM warrants further study and should assist understanding of assessment and other research participation effects. Copyright © 2013 The Authors. Published by Elsevier Ireland Ltd.. All rights reserved.
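RTM is easy to demonstrate by simulation: with no intervention at all, selecting participants above a baseline cutoff produces an apparent reduction at follow-up that grows with the cutoff. The score distribution below is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
true_score = rng.normal(12, 4, n)               # stable drinking level
baseline = true_score + rng.normal(0, 3, n)     # noisy AUDIT at baseline
followup = true_score + rng.normal(0, 3, n)     # noisy AUDIT at six months

for cutoff in (None, 8, 12, 16, 20):
    keep = slice(None) if cutoff is None else baseline >= cutoff
    change = (followup[keep] - baseline[keep]).mean()
    print(f"entry threshold {cutoff}: mean change = {change:+.2f}")
# With no intervention at all, the apparent 'reduction' grows as the
# entry threshold rises -- pure regression to the mean.
```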
Libbrecht, Maxwell W; Bilmes, Jeffrey A; Noble, William Stafford
2018-04-01
Selecting a non-redundant representative subset of sequences is a common step in many bioinformatics workflows, such as the creation of non-redundant training sets for sequence and structural models or selection of "operational taxonomic units" from metagenomics data. Previous methods for this task, such as CD-HIT, PISCES, and UCLUST, apply a heuristic threshold-based algorithm that has no theoretical guarantees. We propose a new approach based on submodular optimization. Submodular optimization, a discrete analogue to continuous convex optimization, has been used with great success for other representative set selection problems. We demonstrate that the submodular optimization approach results in representative protein sequence subsets with greater structural diversity than sets chosen by existing methods, using as a gold standard the SCOPe library of protein domain structures. In this setting, submodular optimization consistently yields protein sequence subsets that include more SCOPe domain families than sets of the same size selected by competing approaches. We also show how the optimization framework allows us to design a mixture objective function that performs well for both large and small representative sets. The framework we describe is the best possible in polynomial time (under some assumptions), and it is flexible and intuitive because it applies a suite of generic methods to optimize one of a variety of objective functions. © 2018 Wiley Periodicals, Inc.
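The abstract does not name its exact objective, but facility location is a typical monotone submodular choice for representative subset selection, and the classic greedy algorithm carries the (1 − 1/e) guarantee mentioned in this literature; a sketch over a precomputed similarity matrix:

```python
import numpy as np

def greedy_facility_location(sim, k):
    """Greedy maximization of the facility-location objective
    f(S) = sum_i max_{j in S} sim[i, j], a monotone submodular function;
    the greedy solution is within (1 - 1/e) of optimal.

    sim: (n, n) pairwise sequence-similarity matrix; returns k indices."""
    n = sim.shape[0]
    selected, best_cover = [], np.zeros(n)
    for _ in range(k):
        # marginal gain of each candidate given current coverage
        gains = np.maximum(sim, best_cover[:, None]).sum(axis=0) - best_cover.sum()
        gains[selected] = -np.inf                 # never re-pick a member
        j = int(np.argmax(gains))
        selected.append(j)
        best_cover = np.maximum(best_cover, sim[:, j])
    return selected
```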
USDA-ARS?s Scientific Manuscript database
(Co)variance components for calving ease and stillbirth in US Holsteins were estimated using a single-trait threshold animal model and two different sets of data edits. Six sets of approximately 250,000 records each were created by randomly selecting herd codes without replacement from the data used...
LinkImputeR: user-guided genotype calling and imputation for non-model organisms.
Money, Daniel; Migicovsky, Zoë; Gardner, Kyle; Myles, Sean
2017-07-10
Genomic studies such as genome-wide association and genomic selection require genome-wide genotype data. All existing technologies used to create these data result in missing genotypes, which are often then inferred using genotype imputation software. However, existing imputation methods most often make use only of genotypes that are successfully inferred after having passed a certain read depth threshold. Because of this, any read information for genotypes that did not pass the threshold, and were thus set to missing, is ignored. Most genomic studies also choose read depth thresholds and quality filters without investigating their effects on the size and quality of the resulting genotype data. Moreover, almost all genotype imputation methods require ordered markers and are therefore of limited utility in non-model organisms. Here we introduce LinkImputeR, a software program that exploits the read count information that is normally ignored, and makes use of all available DNA sequence information for the purposes of genotype calling and imputation. It is specifically designed for non-model organisms since it requires neither ordered markers nor a reference panel of genotypes. Using next-generation DNA sequence (NGS) data from apple, cannabis and grape, we quantify the effect of varying read count and missingness thresholds on the quantity and quality of genotypes generated from LinkImputeR. We demonstrate that LinkImputeR can increase the number of genotype calls by more than an order of magnitude, can improve genotyping accuracy by several percent and can thus improve the power of downstream analyses. Moreover, we show that the effects of quality and read depth filters can differ substantially between data sets and should therefore be investigated on a per-study basis. By exploiting DNA sequence data that is normally ignored during genotype calling and imputation, LinkImputeR can significantly improve both the quantity and quality of genotype data generated from NGS technologies. It enables the user to quickly and easily examine the effects of varying thresholds and filters on the number and quality of the resulting genotype calls. In this manner, users can decide on thresholds that are most suitable for their purposes. We show that LinkImputeR can significantly augment the value and utility of NGS data sets, especially in non-model organisms with poor genomic resources.
NASA Astrophysics Data System (ADS)
Susanti, D.; Hartini, E.; Permana, A.
2017-01-01
Growing sales competition between companies in Indonesia means that every company needs proper planning in order to win the competition with other companies. One of the things that can be done to design such a plan is to forecast car sales for the next few periods, since the inventory of cars to be sold should be proportional to the number of cars needed. One of the methods that can be used to obtain a correct forecast is Adaptive Spline Threshold Autoregression (ASTAR). This discussion therefore focuses on the use of the ASTAR method in forecasting the volume of car sales at PT. Srikandi Diamond Motors using time series data. In this research, forecasting with the ASTAR method produced approximately correct results.
Selective document image data compression technique
Fu, C.Y.; Petrich, L.I.
1998-05-19
A method of storing information from filled-in form-documents comprises extracting the unique user information in the foreground from the document form information in the background. The contrast of the pixels is enhanced by a gamma correction on an image array, and then the color value of each pixel is enhanced. The color pixels lying on edges of an image are converted to black and an adjacent pixel is converted to white. The distance between black pixels and other pixels in the array is determined, and a filled-edge array of pixels is created. User information is then converted to a two-color format by creating a first two-color image of the scanned image, converting all pixels darker than a threshold color value to black and all pixels lighter than the threshold color value to white. Then a second two-color image of the filled-edge file is generated by converting all pixels darker than a second threshold value to black and all pixels lighter than the second threshold color value to white. The first two-color image and the second two-color image are then combined and filtered to smooth the edges of the image. The image may be compressed with a unique Huffman coding table for that image. The image file is also decimated to create a decimated-image file which can later be interpolated back to produce a reconstructed image file using a bilinear interpolation kernel. 10 figs.
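The gamma-correction and two-color conversion steps are simple to sketch; the gamma value and threshold below are placeholders, and the edge-filling, combination, and Huffman/decimation stages of the patent are omitted:

```python
import numpy as np

def to_two_color(img, threshold, gamma=0.7):
    """Sketch of the two-color conversion step: gamma-correct the grayscale
    image, then map pixels darker than the threshold to black (0) and
    lighter ones to white (255)."""
    corrected = 255.0 * (img / 255.0) ** gamma      # gamma correction
    return np.where(corrected < threshold, 0, 255).astype(np.uint8)
```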
A Topological Criterion for Filtering Information in Complex Brain Networks
Latora, Vito; Chavez, Mario
2017-01-01
In many biological systems, the network of interactions between the elements can only be inferred from experimental measurements. In neuroscience, non-invasive imaging tools are extensively used to derive either structural or functional brain networks in-vivo. As a result of the inference process, we obtain a matrix of values corresponding to a fully connected and weighted network. To turn this into a useful sparse network, thresholding is typically adopted to cancel a percentage of the weakest connections. The structural properties of the resulting network depend on how much of the inferred connectivity is eventually retained. However, how to objectively fix this threshold is still an open issue. We introduce a criterion, the efficiency cost optimization (ECO), to select a threshold based on the optimization of the trade-off between the efficiency of a network and its wiring cost. We prove analytically and we confirm through numerical simulations that the connection density maximizing this trade-off emphasizes the intrinsic properties of a given network, while preserving its sparsity. Moreover, this density threshold can be determined a-priori, since the number of connections to filter only depends on the network size according to a power-law. We validate this result on several brain networks, from micro- to macro-scales, obtained with different imaging modalities. Finally, we test the potential of ECO in discriminating brain states with respect to alternative filtering methods. ECO advances our ability to analyze and compare biological networks, inferred from experimental data, in a fast and principled way. PMID:28076353
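A sketch of density-based filtering in the spirit of ECO: keep the strongest connections at a connection density fixed a priori (per the paper, the optimal density follows a power law in network size) and report an efficiency/cost trade-off; the exact form of the trade-off J below is an assumption, not the paper's definition:

```python
import numpy as np
import networkx as nx

def eco_filter(weights, density):
    """Threshold a weighted connectivity matrix at a fixed connection
    density, keeping the strongest edges, and report a trade-off score
    J = (global efficiency + local efficiency) / wiring cost."""
    n = weights.shape[0]
    n_keep = int(density * n * (n - 1) / 2)
    iu = np.triu_indices(n, k=1)
    order = np.argsort(weights[iu])[::-1][:n_keep]   # strongest edges first
    g = nx.Graph()
    g.add_nodes_from(range(n))
    g.add_edges_from(zip(iu[0][order], iu[1][order]))
    cost = 2 * g.number_of_edges() / (n * (n - 1))   # realized density
    j = (nx.global_efficiency(g) + nx.local_efficiency(g)) / cost
    return g, j
```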
Trotta-Moreu, Nuria; Lobo, Jorge M
2010-02-01
Predictions from individual distribution models for Mexican Geotrupinae species were overlaid to obtain a total species richness map for this group. A database (GEOMEX) that compiles available information from the literature and from several entomological collections was used. A Maximum Entropy method (MaxEnt) was applied to estimate the distribution of each species, taking into account 19 climatic variables as predictors. For each species, suitability values ranging from 0 to 100 were calculated for each grid cell on the map, and 21 different thresholds were used to convert these continuous suitability values into binary ones (presence-absence). By summing all of the individual binary maps, we generated a species richness prediction for each of the considered thresholds. The number of species and faunal composition thus predicted for each Mexican state were subsequently compared with those observed in a preselected set of well-surveyed states. Our results indicate that the sum of individual predictions tends to overestimate species richness but that the selection of an appropriate threshold can reduce this bias. Even under the most optimistic prediction threshold, the mean species richness error is 61% of the observed species richness, with commission errors being significantly more common than omission errors (71 ± 29 versus 18 ± 10%). The estimated distribution of Geotrupinae species richness in Mexico is discussed, although our conclusions are preliminary and contingent on the scarce and probably biased available data.
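The thresholding-and-stacking step lends itself to a compact sketch. Assuming an array of MaxEnt suitability values (one row per species, one column per grid cell, scaled 0 to 100; these inputs and the error measure are illustrative stand-ins), a richness map is the column-wise sum of the binary maps, and the threshold can be screened against observed richness in well-surveyed cells:

```python
import numpy as np

def richness_map(suitability, threshold):
    """Sum per-species presence/absence maps obtained at a given cut-off."""
    presence = suitability >= threshold        # binary map per species
    return presence.sum(axis=0)                # predicted richness per cell

def best_threshold(suitability, observed, candidates=np.linspace(0, 100, 21)):
    """Pick the cut-off minimizing mean richness error on reference cells."""
    errors = [np.abs(richness_map(suitability, t) - observed).mean()
              for t in candidates]
    return candidates[int(np.argmin(errors))]
```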
Deficient cortical face-sensitive N170 responses and basic visual processing in schizophrenia.
Maher, S; Mashhoon, Y; Ekstrom, T; Lukas, S; Chen, Y
2016-01-01
Face detection, an ability to identify a visual stimulus as a face, is impaired in patients with schizophrenia. It is unclear whether impaired face processing in this psychiatric disorder results from face-specific domains or stems from more basic visual domains. In this study, we examined cortical face-sensitive N170 response in schizophrenia, taking into account deficient basic visual contrast processing. We equalized visual contrast signals among patients (n=20) and controls (n=20) and between face and tree images, based on their individual perceptual capacities (determined using psychophysical methods). We measured N170, a putative temporal marker of face processing, during face detection and tree detection. In controls, N170 amplitudes were significantly greater for faces than trees across all three visual contrast levels tested (perceptual threshold, two times perceptual threshold and 100%). In patients, however, N170 amplitudes did not differ between faces and trees, indicating diminished face selectivity (indexed by the differential responses to face vs. tree). These results indicate a lack of face-selectivity in temporal responses of brain machinery putatively responsible for face processing in schizophrenia. This neuroimaging finding suggests that face-specific processing is compromised in this psychiatric disorder. Copyright © 2015 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Kaewkasi, Pitchaya; Widjaja, Joewono; Uozumi, Jun
2007-03-01
Effects of threshold value on detection performance of the modified amplitude-modulated joint transform correlator are quantitatively studied using computer simulation. Fingerprint and human face images are used as test scenes in the presence of noise and a contrast difference. Simulation results demonstrate that this correlator improves detection performance for both types of image used, but more so for human face images. Optimal detection of low-contrast human face images obscured by strong noise can be obtained by selecting an appropriate threshold value.
Crowell, Sara E.; Wells-Berlin, Alicia M.; Therrien, Ronald E.; Yannuzzi, Sally E.; Carr, Catherine E.
2016-01-01
Auditory sensitivity was measured in a species of diving duck that is not often kept in captivity, the lesser scaup. Behavioral (psychoacoustics) and electrophysiological [the auditory brainstem response (ABR)] methods were used to measure in-air auditory sensitivity, and the resulting audiograms were compared. Both approaches yielded audiograms with similar U-shapes and regions of greatest sensitivity (2000−3000 Hz). However, ABR thresholds were higher than psychoacoustic thresholds at all frequencies. This difference was least at the highest frequency tested using both methods (5700 Hz) and greatest at 1000 Hz, where the ABR threshold was 26.8 dB higher than the behavioral measure of threshold. This difference is commonly reported in studies involving many different species. These results highlight the usefulness of each method, depending on the testing conditions and availability of the animals.
synthesis procedures; a ’best’ method is definitely established. (2) ’Symmetry Types for Threshold Logic’ is a tutorial exposition including a careful...development of the Goto-Takahasi self-dual type ideas. (3) ’Best Threshold Gate Decisions’ reports a comparison, on the 2470 7-argument threshold ...interpretation is shown best. (4) ’Threshold Gate Networks’ reviews the previously discussed 2-algorithm in geometric terms, describes our FORTRAN
Allee effect in the selection for prime-numbered cycles in periodical cicadas.
Tanaka, Yumi; Yoshimura, Jin; Simon, Chris; Cooley, John R; Tainaka, Kei-ichi
2009-06-02
Periodical cicadas are well known for their prime-numbered life cycles (17 and 13 years) and their mass periodical emergences. The origination and persistence of prime-numbered cycles are explained by the hybridization hypothesis on the basis of their lower likelihood of hybridization with other cycles. Recently, we showed by using an integer-based numerical model that prime-numbered cycles are indeed selected for among 10- to 20-year cycles. Here, we develop a real-number-based model to investigate the factors affecting the selection of prime-numbered cycles. We include an Allee effect in our model, such that a critical population size is set as an extinction threshold. We compare the real-number models with and without the Allee effect. The results show that in the presence of an Allee effect, prime-numbered life cycles are most likely to persist and to be selected under a wide range of extinction thresholds.
Quantitative assessment model for gastric cancer screening
Chen, Kun; Yu, Wei-Ping; Song, Liang; Zhu, Yi-Min
2005-01-01
AIM: To set up a mathematical model for gastric cancer screening and to evaluate its function in mass screening for gastric cancer. METHODS: A case-control study was carried out on 66 patients and 198 normal people; the risk and protective factors of gastric cancer were determined, including heavy manual work, foods such as small yellow-fin tuna, dried small shrimps, squills, crabs, mothers suffering from gastric diseases, spouse alive, use of refrigerators and hot food, etc. According to principles and methods of probability and fuzzy mathematics, a quantitative assessment model was established as follows: first, we selected factors significant in statistics and calculated a weight coefficient for each one by two different methods; second, the population space was divided into a gastric cancer fuzzy subset and a non-gastric-cancer fuzzy subset, a mathematical model was established for each subset, and we obtained a mathematical expression of attribute degree (AD). RESULTS: Based on the data of 63 patients and 693 normal people, the AD of each subject was calculated. Considering the sensitivity and specificity, the thresholds of the calculated AD values were set at 0.20 and 0.17, respectively. According to these thresholds, the sensitivity and specificity of the quantitative model were about 69% and 63%. Moreover, a statistical test showed that the identification outcomes of the two different calculation methods were identical (P>0.05). CONCLUSION: The validity of this method is satisfactory. It is convenient, feasible, economic and can be used to determine individual and population risks of gastric cancer. PMID:15655813
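A minimal sketch of the attribute-degree scoring follows. The factor coding, the weights, and the decision rule combining the two subsets are assumptions, since the abstract reports only the thresholds (0.20 and 0.17) and the general weighted-membership construction.

```python
import numpy as np

def attribute_degree(x, w):
    """Weighted membership score of a subject in one fuzzy subset.

    x: hypothetical 0/1 coding of the subject's risk/protective factors;
    w: hypothetical per-factor weight coefficients (both assumed here).
    """
    return float(np.dot(w, x) / np.sum(w))

def classify(x, w_cancer, w_normal, t_cancer=0.20, t_normal=0.17):
    """Assumed decision rule using the two reported AD thresholds."""
    ad_c = attribute_degree(x, w_cancer)   # membership in the cancer subset
    ad_n = attribute_degree(x, w_normal)   # membership in the non-cancer subset
    return "high risk" if (ad_c >= t_cancer and ad_n < t_normal) else "low risk"
```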
A masking level difference due to harmonicity.
Treurniet, W C; Boucher, D R
2001-01-01
The role of harmonicity in masking was studied by comparing the effect of harmonic and inharmonic maskers on the masked thresholds of noise probes using a three-alternative, forced-choice method. Harmonic maskers were created by selecting sets of partials from a harmonic series with an 88-Hz fundamental and 45 consecutive partials. Inharmonic maskers differed in that the partial frequencies were perturbed to nearby values that were not integer multiples of the fundamental frequency. Average simultaneous-masked thresholds were as much as 10 dB lower with the harmonic masker than with the inharmonic masker, and this difference was unaffected by masker level. It was reduced or eliminated when the harmonic partials were separated by more than 176 Hz, suggesting that the effect is related to the extent to which the harmonics are resolved by auditory filters. The threshold difference was not observed in a forward-masking experiment. Finally, an across-channel mechanism was implicated when the threshold difference was found between a harmonic masker flanked by harmonic bands and a harmonic masker flanked by inharmonic bands. A model developed to explain the observed difference recognizes that an auditory filter output envelope is modulated when the filter passes two or more sinusoids, and that the modulation rate depends on the differences among the input frequencies. For a harmonic masker, the frequency differences of adjacent partials are identical, and all auditory filters have the same dominant modulation rate. For an inharmonic masker, however, the frequency differences are not constant and the envelope modulation rate varies across filters. The model proposes that a lower variability facilitates detection of a probe-induced change in the variability, thus accounting for the masked threshold difference. The model was supported by significantly improved predictions of observed thresholds when the predictor variables included envelope modulation rate variance measured using simulated auditory filters.
Heydari, Azhdar; Davoudi, Shima
2017-02-01
Serotonin is a key regulatory neurotransmitter in the CNS which plays an important role in seizure through different receptors, especially the 5-HT1A subtype. The role of sertraline through the 5-HT1A receptor and nitric oxide interaction on the PTZ-induced seizure threshold was investigated in this study. Seventy white male mice were randomly divided into 10 groups including intact control, sham-control and eight experimental groups which received sertraline, 8-OH-DPAT, WAY100635, WAY100635+sertraline, WAY100635+8-OH-DPAT, L-NAME, L-NAME+sertraline and L-NAME+8-OH-DPAT. After 14 days of treatment in the different groups, the PTZ-induced seizure threshold was assessed, and nitric oxide metabolites in the brain tissue were measured with the Griess method. The seizure threshold was significantly increased in the sertraline and 8-OH-DPAT groups compared to the sham group (P<0.001). In the presence of WAY100635, the effect of both sertraline and 8-OH-DPAT in raising the seizure threshold was more prominent (P<0.001); on the other hand, in the presence of L-NAME, an increase in the anticonvulsant effect of 8-OH-DPAT was observed, while L-NAME alone had no effect on the seizure threshold (P<0.001). The NOx concentration was significantly decreased in the 8-OH-DPAT-treated group (P<0.01), while WAY100635 reversed it and the combination of 8-OH-DPAT with L-NAME reduced the NOx levels (P<0.001). These findings support the anticonvulsant effect of SSRIs and selective 5-HT1A receptor agonists, although serotonin receptors other than the 5-HT1A subtype may be involved, and it is probable that some anticonvulsant effects of sertraline and 8-OH-DPAT act through modulation of the nitrergic system. Copyright © 2016 British Epilepsy Association. Published by Elsevier Ltd. All rights reserved.
Nefopam, a Non-sedative Benzoxazocine Analgesic, Selectively Reduces the Shivering Threshold
Alfonsi, Pascal; Adam, Frederic; Passard, Andrea; Guignard, Bruno; Sessler, Daniel I.; Chauvin, Marcel
2005-01-01
Background: The analgesic nefopam does not compromise ventilation, is minimally sedating, and is effective as a treatment for postoperative shivering. We evaluated the effects of nefopam on the major thermoregulatory responses in humans: sweating, vasoconstriction, and shivering. Methods: Nine volunteers were studied on three randomly assigned days: 1) control (saline), 2) nefopam at a target plasma concentration of 35 ng/ml (Small Dose), and 3) nefopam at a target concentration of 70 ng/ml (Large Dose, ≈20 mg total). Each day, skin and core temperatures were increased to provoke sweating and then reduced to elicit peripheral vasoconstriction and shivering. We determined the thresholds (triggering core temperature at a designated skin temperature of 34°C) by mathematically compensating for changes in skin temperature using the established linear cutaneous contributions to control of each response. Results: Nefopam did not significantly modify the slopes for sweating (0.0 ± 4.9°C·μg⁻¹·ml; r² = 0.73 ± 0.32) or vasoconstriction (−3.6 ± 5.0°C·μg⁻¹·ml; r² = −0.47 ± 0.41). In contrast, nefopam significantly reduced the slope of shivering (−16.8 ± 9.3°C·μg⁻¹·ml; r² = 0.92 ± 0.06). Large-Dose nefopam thus reduced the shivering threshold by 0.9 ± 0.4°C (P<0.001) without any discernible effect on the sweating or vasoconstriction thresholds. Conclusions: Most drugs with thermoregulatory actions, including anesthetics, sedatives, and opioids, synchronously reduce the vasoconstriction and shivering thresholds. Nefopam, however, reduced only the shivering threshold. This pattern has not previously been reported for a centrally acting drug. That pharmacologic modulation of vasoconstriction and shivering can be separated is of clinical and physiologic interest. PMID:14695722
Bierer, Julie Arenberg; Faulkner, Kathleen F
2010-04-01
The goal of this study was to evaluate the ability of a threshold measure, made with a restricted electrode configuration, to identify channels exhibiting relatively poor spatial selectivity. With a restricted electrode configuration, channel-to-channel variability in threshold may reflect variations in the interface between the electrodes and auditory neurons (i.e., nerve survival, electrode placement, and tissue impedance). These variations in the electrode-neuron interface should also be reflected in psychophysical tuning curve (PTC) measurements. Specifically, it is hypothesized that high single-channel thresholds obtained with the spatially focused partial tripolar (pTP) electrode configuration are predictive of wide or tip-shifted PTCs. Data were collected from five cochlear implant listeners implanted with the HiRes90k cochlear implant (Advanced Bionics Corp., Sylmar, CA). Single-channel thresholds and most comfortable listening levels were obtained for stimuli that varied in presumed electrical field size by using the pTP configuration for which a fraction of current (sigma) from a center-active electrode returns through two neighboring electrodes and the remainder through a distant indifferent electrode. Forward-masked PTCs were obtained for channels with the highest, lowest, and median tripolar (sigma = 1 or 0.9) thresholds. The probe channel and level were fixed and presented with either the monopolar (sigma = 0) or a more focused pTP (sigma > or = 0.55) configuration. The masker channel and level were varied, whereas the configuration was fixed to sigma = 0.5. A standard, three-interval, two-alternative forced choice procedure was used for thresholds and masked levels. Single-channel threshold and variability in threshold across channels systematically increased as the compensating current, sigma, increased and the presumed electrical field became more focused. Across subjects, channels with the highest single-channel thresholds, when measured with a narrow, pTP stimulus, had significantly broader PTCs than the lowest threshold channels. In two subjects, the tips of the tuning curves were shifted away from the probe channel. Tuning curves were also wider for the monopolar probes than with pTP probes for both the highest and lowest threshold channels. These results suggest that single-channel thresholds measured with a restricted stimulus can be used to identify cochlear implant channels with poor spatial selectivity. Channels having wide or tip-shifted tuning characteristics would likely not deliver the appropriate spectral information to the intended auditory neurons, leading to suboptimal perception. As a clinical tool, quick identification of impaired channels could lead to patient-specific mapping strategies and result in improved speech and music perception.
Jarvis, M F; Wessale, J L; Zhu, C Z; Lynch, J J; Dayton, B D; Calzadilla, S V; Padley, R J; Opgenorth, T J; Kowaluk, E A
2000-01-24
Tactile allodynia, the enhanced perception of pain in response to normally non-painful stimulation, represents a common complication of diabetic neuropathy. The activation of endothelin ET(A) receptors has been implicated in diabetes-induced reductions in peripheral neurovascularization and concomitant endoneurial hypoxia. Endothelin receptor activation has also been shown to alter the peripheral and central processing of nociceptive information. The present study was conducted to evaluate the antinociceptive effects of the novel endothelin ET(A) receptor-selective antagonist, 2R-(4-methoxyphenyl)-4S-(1,3-benzodioxol-5-yl)-1-(N,N-di(n-butyl)aminocarbonyl-methyl)-pyrrolidine-3R-carboxylic acid (ABT-627), in the streptozotocin-induced diabetic rat model of neuropathic pain. Rats were injected with 75 mg/kg streptozotocin (i.p.), and drug effects were assessed 8-12 weeks following streptozotocin treatment to allow for stabilization of blood glucose levels (≥240 mg/dl) and tactile allodynia thresholds (≤8.0 g). Systemic (i.p.) administration of ABT-627 (1 and 10 mg/kg) was found to produce a dose-dependent increase in tactile allodynia thresholds. A significant antinociceptive effect (40-50% increase in tactile allodynia thresholds, P<0.05) was observed at the dose of 10 mg/kg, i.p., within 0.5-2 h post-dosing. The antinociceptive effects of ABT-627 (10 mg kg⁻¹ day⁻¹, p.o.) were maintained following chronic administration of the antagonist in drinking water for 7 days. In comparison, morphine administered acutely at a dose of 8 mg/kg, i.p., produced a significant 90% increase in streptozotocin-induced tactile allodynia thresholds. The endothelin ET(B) receptor-selective antagonist, 2R-(4-propoxyphenyl)-4S-(1,3-benzodioxol-5-yl)-1-(N-(2,6-diethylphenyl)aminocarbonyl-methyl)-pyrrolidine-3R-carboxylic acid (A-192621; 20 mg/kg, i.p.), did not significantly alter tactile allodynia thresholds in streptozotocin-treated rats. Although combined i.p. administration of ABT-627 and A-192621 produced a significant, acute increase in tactile allodynia thresholds, this effect was significantly less than that produced by ABT-627 alone. These results indicate that the selective blockade of endothelin ET(A) receptors results in an attenuation of tactile allodynia in the streptozotocin-treated rat.
Goldrath, Dara A.; Wright, Michael T.; Belitz, Kenneth
2010-01-01
Groundwater quality in the 188-square-mile Colorado River Study unit (COLOR) was investigated October through December 2007 as part of the Priority Basin Project of the California State Water Resources Control Board (SWRCB) Groundwater Ambient Monitoring and Assessment (GAMA) Program. The GAMA Priority Basin Project was developed in response to the Groundwater Quality Monitoring Act of 2001, and the U.S. Geological Survey (USGS) is the technical project lead. The Colorado River study was designed to provide a spatially unbiased assessment of the quality of raw groundwater used for public water supplies within COLOR, and to facilitate statistically consistent comparisons of groundwater quality throughout California. Samples were collected from 28 wells in three study areas in San Bernardino, Riverside, and Imperial Counties. Twenty wells were selected using a spatially distributed, randomized grid-based method to provide statistical representation of the study unit; these wells are termed 'grid wells'. Eight additional wells were selected to evaluate specific water-quality issues in the study area; these wells are termed 'understanding wells'. The groundwater samples were analyzed for organic constituents (volatile organic compounds [VOC], gasoline oxygenates and degradates, pesticides and pesticide degradates, pharmaceutical compounds), constituents of special interest (perchlorate, 1,4-dioxane, and 1,2,3-trichloropropane [1,2,3-TCP]), naturally occurring inorganic constituents (nutrients, major and minor ions, and trace elements), and radioactive constituents. Concentrations of naturally occurring isotopes (tritium, carbon-14, and stable isotopes of hydrogen and oxygen in water), and dissolved noble gases also were measured to help identify the sources and ages of the sampled groundwater. In total, approximately 220 constituents and water-quality indicators were investigated. Quality-control samples (blanks, replicates, and matrix spikes) were collected at approximately 30 percent of the wells, and the results were used to evaluate the quality of the data obtained from the groundwater samples. Field blanks rarely contained detectable concentrations of any constituent, suggesting that contamination was not a significant source of bias in the data. Differences between replicate samples were within acceptable ranges, and matrix-spike recoveries were within acceptable ranges for most compounds. This study did not attempt to evaluate the quality of water delivered to consumers; after withdrawal from the ground, raw groundwater typically is treated, disinfected, or blended with other waters to maintain acceptable water quality. Regulatory thresholds apply to water that is served to the consumer, not to raw groundwater. However, to provide some context for the results, concentrations of constituents measured in the raw groundwater were compared to regulatory and nonregulatory health-based thresholds established by the U.S. Environmental Protection Agency (USEPA) and the California Department of Public Health (CDPH) and to thresholds established for aesthetic concerns by CDPH. Comparisons between data collected for this study and drinking-water thresholds are for illustrative purposes only and do not indicate compliance or noncompliance with those thresholds. The concentrations of most constituents detected in groundwater samples were below drinking-water thresholds.
Volatile organic compounds (VOC) were detected in approximately 35 percent of grid well samples; all concentrations were below health-based thresholds. Pesticides and pesticide degradates were detected in about 20 percent of all samples; detections were below health-based thresholds. No concentrations of constituents of special interest or nutrients were detected above health-based thresholds. Most of the major and minor ion constituents sampled do not have health-based thresholds; the exception is chloride. Concentrations of chloride, sulfate, and total dis
3D SAPIV particle field reconstruction method based on adaptive threshold.
Qu, Xiangju; Song, Yang; Jin, Ying; Li, Zhenhua; Wang, Xuezhen; Guo, ZhenYan; Ji, Yunjing; He, Anzhi
2018-03-01
Particle image velocimetry (PIV) is a necessary flow field diagnostic technique that provides instantaneous velocimetry information non-intrusively. Three-dimensional (3D) PIV methods can supply the full understanding of a 3D structure, the complete stress tensor, and the vorticity vector in the complex flows. In synthetic aperture particle image velocimetry (SAPIV), the flow field can be measured with large particle intensities from the same direction by different cameras. During SAPIV particle reconstruction, particles are commonly reconstructed by manually setting a threshold to filter out unfocused particles in the refocused images. In this paper, the particle intensity distribution in refocused images is analyzed, and a SAPIV particle field reconstruction method based on an adaptive threshold is presented. By using the adaptive threshold to filter the 3D measurement volume integrally, the three-dimensional location information of the focused particles can be reconstructed. The cross correlations between images captured from cameras and images projected by the reconstructed particle field are calculated for different threshold values. The optimal threshold is determined by cubic curve fitting and is defined as the threshold value that causes the correlation coefficient to reach its maximum. The numerical simulation of a 16-camera array and a particle field at two adjacent time events quantitatively evaluates the performance of the proposed method. An experimental system consisting of a camera array of 16 cameras was used to reconstruct the four adjacent frames in a vortex flow field. The results show that the proposed reconstruction method can effectively reconstruct the 3D particle fields.
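The threshold-selection step can be sketched directly from the description: evaluate the camera-to-reprojection correlation at candidate thresholds, fit a cubic, and take the maximizer of the fitted curve. The corr_fn callable below is a hypothetical stand-in for one reconstruct-and-reproject cycle of the SAPIV pipeline.

```python
import numpy as np

def optimal_threshold(corr_fn, candidates):
    """Select the reconstruction threshold via cubic curve fitting.

    corr_fn(t) should return the cross-correlation between the captured
    camera images and the images projected from the particle field
    reconstructed with threshold t (assumed interface, not the paper's API).
    """
    t = np.asarray(candidates, dtype=float)
    r = np.array([corr_fn(ti) for ti in t])
    coeffs = np.polyfit(t, r, deg=3)                 # cubic fit of r(t)
    t_fine = np.linspace(t.min(), t.max(), 1000)
    return t_fine[int(np.argmax(np.polyval(coeffs, t_fine)))]
```

Fitting a smooth curve before taking the maximum is what makes the choice robust to noise in the individual correlation evaluations.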
NASA Astrophysics Data System (ADS)
Khamwan, Kitiwat; Krisanachinda, Anchali; Pluempitiwiriyawej, Charnchai
2012-10-01
This study presents an automatic method to trace the boundary of the tumour in positron emission tomography (PET) images. It has been discovered that Otsu's threshold value is biased when the within-class variances between the object and the background are significantly different. To solve the problem, a double-stage threshold search that minimizes the energy between the first Otsu's threshold and the maximum intensity value is introduced. Such shifted-optimal thresholding is embedded into a region-based active contour so that both algorithms are performed consecutively. The efficiency of the method is validated using six sphere inserts (0.52-26.53 cc volume) of the IEC/2001 torso phantom. Both spheres and phantom were filled with 18F solution with four source-to-background ratio (SBR) measurements of PET images. The results illustrate that the tumour volumes segmented by combined algorithm are of higher accuracy than the traditional active contour. The method had been clinically implemented in ten oesophageal cancer patients. The results are evaluated and compared with the manual tracing by an experienced radiation oncologist. The advantage of the algorithm is the reduced erroneous delineation that improves the precision and accuracy of PET tumour contouring. Moreover, the combined method is robust, independent of the SBR threshold-volume curves, and it does not require prior lesion size measurement.
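One plausible reading of the double-stage search is a second Otsu pass restricted to the interval between the first threshold and the maximum intensity, which counteracts the bias when the object and background variances differ. The sketch below implements that reading with a plain histogram-based Otsu; it is an illustration of the idea, not the authors' exact energy formulation.

```python
import numpy as np

def otsu(values, bins=256):
    """Classic Otsu threshold maximizing between-class variance."""
    hist, edges = np.histogram(values, bins=bins)
    p = hist.astype(float) / max(hist.sum(), 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                      # class-0 probability
    m = np.cumsum(p * centers)             # cumulative mean
    mg = m[-1]                             # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mg * w0 - m) ** 2 / (w0 * (1.0 - w0))
    return centers[np.nanargmax(sigma_b)]

def double_stage_threshold(img):
    """First Otsu pass, then a second pass restricted to [t1, max]."""
    t1 = otsu(img.ravel())
    upper = img[img >= t1]                 # search only above the first cut
    return otsu(upper) if upper.size else t1
```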
Riedel, Damien; Bocquet, Marie-Laure; Lesnard, Hervé; Lastapis, Mathieu; Lorente, Nicolas; Sonnet, Philippe; Dujardin, Gérald
2009-06-03
Selective electron-induced reactions of individual biphenyl molecules adsorbed in their weakly chemisorbed configuration on a Si(100) surface are investigated by using the tip of a low-temperature (5 K) scanning tunnelling microscope (STM) as an atomic-size source of electrons. Selected types of molecular reactions are produced, depending on the polarity of the surface voltage during STM excitation. At negative surface voltages, the biphenyl molecule diffuses across the surface in its weakly chemisorbed configuration. At positive surface voltages, different types of molecular reactions are activated, which involve the change of adsorption configuration from the weakly chemisorbed to the strongly chemisorbed bistable and quadristable configurations. Calculated reaction pathways of the molecular reactions on the silicon surface, using the nudged elastic band method, provide evidence that the observed selectivity as a function of the surface voltage polarity cannot be ascribed to different activation energies. These results, together with the measured threshold surface voltages and the calculated molecular electronic structures via density functional theory, suggest that the electron-induced molecular reactions are driven by selective electron detachment (oxidation) or attachment (reduction) processes.
Peptide Peak Detection for Low Resolution MALDI-TOF Mass Spectrometry.
Yao, Jingwen; Utsunomiya, Shin-Ichi; Kajihara, Shigeki; Tabata, Tsuyoshi; Aoshima, Ken; Oda, Yoshiya; Tanaka, Koichi
2014-01-01
A new peak detection method has been developed for rapid selection of peptide and fragment ion peaks for protein identification using tandem mass spectrometry. The algorithm applies classification of the peak intensities present in a defined mass range to determine the noise level. A threshold is then given to select ion peaks according to the determined noise level in each mass range. This algorithm was initially designed for peak detection in low resolution peptide mass spectra, such as matrix-assisted laser desorption/ionization Time-of-Flight (MALDI-TOF) mass spectra, but it can also be applied to other types of mass spectra. The method achieves a good ratio of real ion peaks to noise peaks even for poorly fragmented peptide spectra. Using peak lists generated by this method produces improved protein scores in database search results, and the reliability of the protein identifications is increased by finding more peptide identifications. This software tool is freely available at the Mass++ home page (http://www.first-ms3d.jp/english/achievement/software/).
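The per-window noise logic can be illustrated with a short sketch. The median is used here as a stand-in for the paper's intensity-classification step, and the window width and signal-to-noise factor are assumed parameters.

```python
import numpy as np

def detect_peaks(mz, intensity, window=100.0, snr=3.0):
    """Keep peaks whose intensity exceeds a per-window noise estimate.

    The noise level in each m/z window is estimated from the local intensity
    distribution (median here, as a simple assumption), and the selection
    threshold is snr times that level.
    """
    keep = np.zeros(mz.size, dtype=bool)
    for lo in np.arange(mz.min(), mz.max(), window):
        in_win = (mz >= lo) & (mz < lo + window)
        if not in_win.any():
            continue
        noise = np.median(intensity[in_win])          # local noise level
        keep |= in_win & (intensity > snr * noise)    # threshold per window
    return mz[keep], intensity[keep]
```

Estimating noise per mass window rather than globally is the point of the method: low-resolution MALDI-TOF baselines vary strongly across the mass range, so a single global threshold either drowns low-mass fragments or admits high-mass noise.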
Informational masking and musical training
NASA Astrophysics Data System (ADS)
Oxenham, Andrew J.; Fligor, Brian J.; Mason, Christine R.; Kidd, Gerald
2003-09-01
The relationship between musical training and informational masking was studied for 24 young adult listeners with normal hearing. The listeners were divided into two groups based on musical training. In one group, the listeners had little or no musical training; the other group was comprised of highly trained, currently active musicians. The hypothesis was that musicians may be less susceptible to informational masking, which is thought to reflect central, rather than peripheral, limitations on the processing of sound. Masked thresholds were measured in two conditions, similar to those used by Kidd et al. [J. Acoust. Soc. Am. 95, 3475-3480 (1994)]. In both conditions the signal was comprised of a series of repeated tone bursts at 1 kHz. The masker was comprised of a series of multitone bursts, gated with the signal. In one condition the frequencies of the masker were selected randomly for each burst; in the other condition the masker frequencies were selected randomly for the first burst of each interval and then remained constant throughout the interval. The difference in thresholds between the two conditions was taken as a measure of informational masking. Frequency selectivity, using the notched-noise method, was also estimated in the two groups. The results showed no difference in frequency selectivity between the two groups, but showed a large and significant difference in the amount of informational masking between musically trained and untrained listeners. This informational masking task, which requires no knowledge specific to musical training (such as note or interval names) and is generally not susceptible to systematic short- or medium-term training effects, may provide a basis for further studies of analytic listening abilities in different populations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, Mackenzie L.; Hickox, Ryan C.; DiPompeo, Michael A.
In studies of the connection between active galactic nuclei (AGNs) and their host galaxies, there is widespread disagreement on some key aspects of the connection. These disagreements largely stem from a lack of understanding of the nature of the full underlying AGN population. Recent attempts to probe this connection utilize both observations and simulations to correct for a missed population, but presently are limited by intrinsic biases and complicated models. We take a simple simulation for galaxy evolution and add a new prescription for AGN activity to connect galaxy growth to dark matter halo properties and AGN activity to star formation. We explicitly model selection effects to produce an “observed” AGN population for comparison with observations and empirically motivated models of the local universe. This allows us to bypass the difficulties inherent in models that attempt to infer the AGN population by inverting selection effects. We investigate the impact of selecting AGNs based on thresholds in luminosity or Eddington ratio on the “observed” AGN population. By limiting our model AGN sample in luminosity, we are able to recreate the observed local AGN luminosity function and specific star formation-stellar mass distribution, and show that using an Eddington ratio threshold introduces less bias into the sample by selecting the full range of growing black holes, despite the challenge of selecting low-mass black holes. We find that selecting AGNs using these various thresholds yields samples with different AGN host galaxy properties.
NASA Technical Reports Server (NTRS)
Rost, Martin C.; Sayood, Khalid
1991-01-01
A method for efficiently coding natural images using a vector-quantized variable-blocksized transform source coder is presented. The method, mixture block coding (MBC), incorporates variable-rate coding by using a mixture of discrete cosine transform (DCT) source coders. The selection of the coders used to code any given image region is made through a threshold-driven distortion criterion. In this paper, MBC is used in two different applications. The base method is concerned with single-pass low-rate image data compression. The second is a natural extension of the base method which allows for low-rate progressive transmission (PT). Since the base method adapts easily to progressive coding, it offers the aesthetic advantage of progressive coding without incorporating extensive channel overhead. Image compression rates of approximately 0.5 bit/pel are demonstrated for both monochrome and color images.
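The threshold-driven coder selection amounts to a quadtree-style recursion: code a block, and if the distortion exceeds the threshold, descend to smaller blocks. The sketch below substitutes simple DCT coefficient truncation for the paper's vector-quantized coders, and assumes image dimensions divisible by the largest block size; thresholds and block sizes are illustrative.

```python
import numpy as np
from scipy.fft import dctn, idctn

def code_block(block, keep=8):
    """Crude fixed-rate coder: keep only the keep x keep lowest DCT terms."""
    c = dctn(block, norm="ortho")
    mask = np.zeros_like(c)
    mask[:keep, :keep] = 1.0
    return idctn(c * mask, norm="ortho")

def mixture_block_code(img, size=32, dist_thresh=50.0, min_size=4):
    """Recursively split blocks whose coding distortion exceeds the threshold."""
    out = img.astype(float).copy()

    def encode(y, x, s):
        rec = code_block(out[y:y + s, x:x + s])
        mse = np.mean((out[y:y + s, x:x + s] - rec) ** 2)
        if mse > dist_thresh and s > min_size:
            h = s // 2                      # distortion too high: refine block
            for dy in (0, h):
                for dx in (0, h):
                    encode(y + dy, x + dx, h)
        else:
            out[y:y + s, x:x + s] = rec

    for y in range(0, img.shape[0], size):
        for x in range(0, img.shape[1], size):
            encode(y, x, size)
    return out
```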
Roubeix, Vincent; Danis, Pierre-Alain; Feret, Thibaut; Baudoin, Jean-Marc
2016-04-01
In aquatic ecosystems, the identification of ecological thresholds may be useful for managers as it can help to diagnose ecosystem health and to identify key levers to enable the success of preservation and restoration measures. A recent statistical method, gradient forest, based on random forests, was used to detect thresholds of phytoplankton community change in lakes along different environmental gradients. It performs exploratory analyses of multivariate biological and environmental data to estimate the location and importance of community thresholds along gradients. The method was applied to a data set of 224 French lakes which were characterized by 29 environmental variables and the mean abundances of 196 phytoplankton species. Results showed the high importance of geographic variables for the prediction of species abundances at the scale of the study. A second analysis was performed on a subset of lakes defined by geographic thresholds and presenting a higher biological homogeneity. Community thresholds were identified for the most important physico-chemical variables including water transparency, total phosphorus, ammonia, nitrates, and dissolved organic carbon. Gradient forest appeared as a powerful method at a first exploratory step, to detect ecological thresholds at large spatial scale. The thresholds that were identified here must be reinforced by the separate analysis of other aquatic communities and may be used then to set protective environmental standards after consideration of natural variability among lakes.
Adaptive threshold shearlet transform for surface microseismic data denoising
NASA Astrophysics Data System (ADS)
Tang, Na; Zhao, Xian; Li, Yue; Zhu, Dan
2018-06-01
Random noise suppression plays an important role in microseismic data processing. The microseismic data is often corrupted by strong random noise, which would directly influence identification and location of microseismic events. Shearlet transform is a new multiscale transform, which can effectively process the low magnitude of microseismic data. In shearlet domain, due to different distributions of valid signals and random noise, shearlet coefficients can be shrunk by threshold. Therefore, threshold is vital in suppressing random noise. The conventional threshold denoising algorithms usually use the same threshold to process all coefficients, which causes noise suppression inefficiency or valid signals loss. In order to solve above problems, we propose the adaptive threshold shearlet transform (ATST) for surface microseismic data denoising. In the new algorithm, we calculate the fundamental threshold for each direction subband firstly. In each direction subband, the adjustment factor is obtained according to each subband coefficient and its neighboring coefficients, in order to adaptively regulate the fundamental threshold for different shearlet coefficients. Finally we apply the adaptive threshold to deal with different shearlet coefficients. The experimental denoising results of synthetic records and field data illustrate that the proposed method exhibits better performance in suppressing random noise and preserving valid signal than the conventional shearlet denoising method.
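The subband-adaptive shrinkage can be sketched generically for one directional subband. The universal base threshold and the energy-based adjustment factor below are assumed forms, not the paper's exact expressions; only the structure (a fundamental per-subband threshold regulated per coefficient by its neighbours) follows the description.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def adaptive_soft_threshold(subband, sigma_n, win=3):
    """Shrink one directional subband with a neighbourhood-adjusted threshold.

    sigma_n is the noise standard deviation in this subband. Where the local
    energy is high (likely a valid microseismic event) the threshold is
    lowered, preserving signal; where the neighbourhood looks like noise the
    threshold stays near the fundamental value.
    """
    base = sigma_n * np.sqrt(2.0 * np.log(subband.size))   # fundamental threshold
    local_energy = uniform_filter(subband ** 2, size=win)  # neighbourhood energy
    adjust = sigma_n ** 2 / (sigma_n ** 2 + local_energy)  # factor in (0, 1]
    T = base * adjust
    return np.sign(subband) * np.maximum(np.abs(subband) - T, 0.0)
```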
Carbon deposition thresholds on nickel-based solid oxide fuel cell anodes I. Fuel utilization
NASA Astrophysics Data System (ADS)
Kuhn, J.; Kesler, O.
2015-03-01
In the first of a two-part publication, the effect of fuel utilization (Uf) on carbon deposition rates in solid oxide fuel cell nickel-based anodes was studied. Representative 5-component CH4 reformate compositions (CH4, H2, CO, H2O, & CO2) were selected graphically by plotting the solutions to a system of mass-balance constraint equations. The centroid of the solution space was chosen to represent a typical anode gas mixture for each nominal Uf value. Selected 5-component and 3-component gas mixtures were then delivered to anode-supported cells for 10 h, followed by determination of the resulting deposited carbon mass. The empirical carbon deposition thresholds were affected by atomic carbon (C), hydrogen (H), and oxygen (O) fractions of the delivered gas mixtures and temperature. It was also found that CH4-rich gas mixtures caused irreversible damage, whereas atomically equivalent CO-rich compositions did not. The coking threshold predicted by thermodynamic equilibrium calculations employing graphite for the solid carbon phase agreed well with empirical thresholds at 700 °C (Uf ≈ 32%); however, at 600 °C, poor agreement was observed with the empirical threshold of ∼36%. Finally, cell operating temperatures correlated well with the difference in enthalpy between the supplied anode gas mixtures and their resulting thermodynamic equilibrium gas mixtures.
Quantification of intraventricular blood clot in MR-guided focused ultrasound surgery
NASA Astrophysics Data System (ADS)
Hess, Maggie; Looi, Thomas; Lasso, Andras; Fichtinger, Gabor; Drake, James
2015-03-01
Intraventricular hemorrhage (IVH) affects nearly 15% of preterm infants. It can lead to ventricular dilation and cognitive impairment. To ablate IVH clots, MR-guided focused ultrasound surgery (MRgFUS) is investigated. This procedure requires accurate, fast and consistent quantification of ventricle and clot volumes. We developed a semi-autonomous segmentation (SAS) algorithm for measuring changes in the ventricle and clot volumes. Images are normalized, and then ventricle and clot masks are registered to the images. Voxels of the registered masks and voxels obtained by thresholding the normalized images are used as seed points for competitive region growing, which provides the final segmentation. The user selects the areas of interest for correspondence after thresholding and these selections are the final seeds for region growing. SAS was evaluated on an IVH porcine model. SAS was compared to ground truth manual segmentation (MS) for accuracy, efficiency, and consistency. Accuracy was determined by comparing clot and ventricle volumes produced by SAS and MS, and comparing contours by calculating 95% Hausdorff distances between the two labels. In Two-One-Sided Test, SAS and MS were found to be significantly equivalent (p < 0.01). SAS on average was found to be 15 times faster than MS (p < 0.01). Consistency was determined by repeated segmentation of the same image by both SAS and manual methods, SAS being significantly more consistent than MS (p < 0.05). SAS is a viable method to quantify the IVH clot and the lateral brain ventricles and it is serving in a large-scale porcine study of MRgFUS treatment of IVH clot lysis.
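The competitive region-growing step can be sketched as a Dijkstra-style flood from the seed sets, in which the cheapest intensity path claims each voxel. This is a generic sketch under assumed details (6-connectivity, absolute intensity steps as cost), not the authors' implementation; normalization and mask registration are assumed to have produced the seeds already.

```python
import heapq
import numpy as np

def competitive_region_growing(img, seeds):
    """Grow labelled regions from seeds; the cheapest claim wins each voxel.

    seeds: dict mapping a nonzero integer label (e.g. 1=ventricle, 2=clot)
    to a list of (z, y, x) seed coordinates. Label 0 means unclaimed.
    """
    labels = np.zeros(img.shape, dtype=int)
    heap = []
    for lab, pts in seeds.items():
        for p in pts:
            heapq.heappush(heap, (0.0, tuple(p), lab))
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
               (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while heap:
        cost, p, lab = heapq.heappop(heap)
        if labels[p]:
            continue                       # already claimed by a cheaper region
        labels[p] = lab
        for d in offsets:
            q = tuple(p[i] + d[i] for i in range(3))
            if all(0 <= q[i] < img.shape[i] for i in range(3)) and not labels[q]:
                step = cost + abs(float(img[q]) - float(img[p]))
                heapq.heappush(heap, (step, q, lab))
    return labels
```

Because voxels are claimed in order of accumulated cost, the clot and ventricle regions compete at their shared boundary, which is what removes most of the manual contouring the ground-truth segmentation requires.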
Noh, Ji-Woong; Park, Byoung-Sun; Kim, Mee-Young; Lee, Lim-Kyu; Yang, Seung-Min; Lee, Won-Deok; Shin, Yong-Sub; Kang, Ji-Hye; Kim, Ju-Hyun; Lee, Jeong-Uk; Kwak, Taek-Yong; Lee, Tae-Hyun; Kim, Ju-Young; Kim, Junghwan
2015-06-01
[Purpose] This study investigated two-point discrimination (TPD) and the electrical sensory threshold of the blind to define the effect of using Braille on the tactile and electrical senses. [Subjects and Methods] Twenty-eight blind participants were divided equally into a text-reading and a Braille-reading group. We measured tactile sensory and electrical thresholds using the TPD method and a transcutaneous electrical nerve stimulator. [Results] The left palm TPD values were significantly different between the groups. The values of the electrical sensory threshold in the left hand, the electrical pain threshold in the left hand, and the electrical pain threshold in the right hand were significantly lower in the Braille group than in the text group. [Conclusion] These findings make it difficult to explain the difference in tactility between groups, excluding both palms. However, our data show that using Braille can enhance development of the sensory median nerve in the blind, particularly in terms of the electrical sensory and pain thresholds.