Wang, Rui; Zhou, Yongquan; Zhao, Chengyan; Wu, Haizhou
2015-01-01
Multi-threshold image segmentation is a powerful image processing technique used for preprocessing in pattern recognition and computer vision. However, traditional multilevel thresholding methods are computationally expensive because they exhaustively search for the optimal thresholds that maximize the objective function. To overcome this drawback, this paper proposes a flower pollination algorithm with a randomized location modification. The proposed algorithm is used to find optimal threshold values that maximize Otsu's objective function for eight medical grayscale images. Numerical results, including Otsu's objective values and standard deviations, show the new algorithm to be robust and effective when benchmarked against other state-of-the-art evolutionary algorithms.
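For illustration, the quantity such metaheuristics maximize can be written down compactly. The following is a minimal sketch (not the authors' implementation) of the multilevel Otsu objective, i.e. the between-class variance of the grey-level histogram for a candidate set of thresholds; the flower pollination search and the randomized location modification are not shown.

```python
import numpy as np

def multilevel_otsu_objective(hist, thresholds):
    """Between-class variance for a candidate threshold set (higher is better).

    hist: grey-level histogram (e.g., 256 bins of counts);
    thresholds: integer cut points strictly inside the histogram range.
    """
    hist = np.asarray(hist, dtype=float)
    p = hist / hist.sum()                              # grey-level probabilities
    levels = np.arange(len(hist))
    bounds = [0, *sorted(int(t) for t in thresholds), len(hist)]
    mu_total = (p * levels).sum()                      # global mean grey level
    variance = 0.0
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        w = p[lo:hi].sum()                             # class probability
        if w > 0:
            mu = (p[lo:hi] * levels[lo:hi]).sum() / w  # class mean
            variance += w * (mu - mu_total) ** 2
    return variance
```

A population-based optimizer such as flower pollination would propose candidate threshold vectors and retain those with the largest objective value, avoiding the exhaustive search the abstract refers to.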
A fuzzy optimal threshold technique for medical images
NASA Astrophysics Data System (ADS)
Thirupathi Kannan, Balaji; Krishnasamy, Krishnaveni; Pradeep Kumar Kenny, S.
2012-01-01
A new fuzzy-based thresholding method for medical images, especially cervical cytology images with blob and mosaic structures, is proposed in this paper. Many existing thresholding algorithms can segment either blob or mosaic images, but no single algorithm handles both. In this paper, an input cervical cytology image is binarized and preprocessed, and the pixel value with the minimum Fuzzy Gaussian Index is identified as the optimal threshold value and used for segmentation. The proposed technique is tested on various cervical cytology images with blob or mosaic structures and compared with several existing algorithms, over which it shows better performance.
Martin, J.; Runge, M.C.; Nichols, J.D.; Lubow, B.C.; Kendall, W.L.
2009-01-01
Thresholds and their relevance to conservation have become a major topic of discussion in the ecological literature. Unfortunately, in many cases the lack of a clear conceptual framework for thinking about thresholds may have led to confusion in attempts to apply the concept of thresholds to conservation decisions. Here, we advocate a framework for thinking about thresholds in terms of a structured decision making process. The purpose of this framework is to promote a logical and transparent process for making informed decisions for conservation. Specification of such a framework leads naturally to consideration of definitions and roles of different kinds of thresholds in the process. We distinguish among three categories of thresholds. Ecological thresholds are values of system state variables at which small changes bring about substantial changes in system dynamics. Utility thresholds are components of management objectives (determined by human values) and are values of state or performance variables at which small changes yield substantial changes in the value of the management outcome. Decision thresholds are values of system state variables at which small changes prompt changes in management actions in order to reach specified management objectives. The approach that we present focuses directly on the objectives of management, with an aim to providing decisions that are optimal with respect to those objectives. This approach clearly distinguishes the components of the decision process that are inherently subjective (management objectives, potential management actions) from those that are more objective (system models, estimates of system state). Optimization based on these components then leads to decision matrices specifying optimal actions to be taken at various values of system state variables. Values of state variables separating different actions in such matrices are viewed as decision thresholds. Utility thresholds are included in the objectives component, and ecological thresholds may be embedded in models projecting consequences of management actions. Decision thresholds are determined by the above-listed components of a structured decision process. These components may themselves vary over time, inducing variation in the decision thresholds inherited from them. These dynamic decision thresholds can then be determined using adaptive management. We provide numerical examples (that are based on patch occupancy models) of structured decision processes that include all three kinds of thresholds. © 2009 by the Ecological Society of America.
Modified Discrete Grey Wolf Optimizer Algorithm for Multilevel Image Thresholding
Sun, Lijuan; Guo, Jian; Xu, Bin; Li, Shujing
2017-01-01
Image segmentation becomes computationally more complicated as the number of thresholds increases, and selecting and applying thresholds becomes an NP-hard problem. This paper puts forward the modified discrete grey wolf optimizer algorithm (MDGWO), which improves the search agents' optimal-solution updating mechanism through weights. Taking Kapur's entropy as the objective function and exploiting the discreteness of thresholds in image segmentation, the paper first discretizes the grey wolf optimizer (GWO) and then proposes a new attack strategy that uses a weight coefficient to replace the optimal-solution search formula of the original algorithm. The experimental results show that MDGWO can find the optimal thresholds efficiently and precisely, with values very close to those obtained by exhaustive search. In comparison with electromagnetism optimization (EMO), differential evolution (DE), the Artificial Bee Colony (ABC), and the classical GWO, MDGWO has advantages in terms of image segmentation quality, objective function values, and their stability. PMID:28127305
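As with the previous entry, the objective being optimized can be sketched independently of the search heuristic. Below is a minimal sketch of Kapur's entropy criterion for a candidate set of thresholds; the grey wolf updating rules and the weight-based attack strategy described in the abstract are not reproduced.

```python
import numpy as np

def kapur_entropy_objective(hist, thresholds):
    """Sum of within-class entropies for a candidate threshold set (higher is better)."""
    hist = np.asarray(hist, dtype=float)
    p = hist / hist.sum()
    bounds = [0, *sorted(int(t) for t in thresholds), len(hist)]
    total_entropy = 0.0
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        w = p[lo:hi].sum()                 # probability mass of this class
        if w > 0:
            q = p[lo:hi] / w               # normalized within-class distribution
            q = q[q > 0]
            total_entropy += -(q * np.log(q)).sum()
    return total_entropy
```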
Value of information and pricing new healthcare interventions.
Willan, Andrew R; Eckermann, Simon
2012-06-01
Previous applications of value-of-information methods to optimal clinical trial design have predominantly taken a societal decision-making perspective, implicitly assuming that healthcare costs are covered through public expenditure and trial research is funded by government or donation-based philanthropic agencies. In this paper, we consider the interaction between interrelated perspectives of a societal decision maker (e.g. the National Institute for Health and Clinical Excellence [NICE] in the UK) charged with the responsibility for approving new health interventions for reimbursement and the company that holds the patent for a new intervention. We establish optimal decision making from societal and company perspectives, allowing for trade-offs between the value and cost of research and the price of the new intervention. Given the current level of evidence, there exists a maximum (threshold) price acceptable to the decision maker. Submission for approval with prices above this threshold will be refused. Given the current level of evidence and the decision maker's threshold price, there exists a minimum (threshold) price acceptable to the company. If the decision maker's threshold price exceeds the company's, then current evidence is sufficient since any price between the thresholds is acceptable to both. On the other hand, if the decision maker's threshold price is lower than the company's, then no price is acceptable to both and the company's optimal strategy is to commission additional research. The methods are illustrated using a recent example from the literature.
Bettembourg, Charles; Diot, Christian; Dameron, Olivier
2015-01-01
Background The analysis of gene annotations referencing back to Gene Ontology plays an important role in the interpretation of the results of high-throughput experiments. This analysis typically involves semantic similarity and particularity measures that quantify the importance of the Gene Ontology annotations. However, there is currently no sound method supporting the interpretation of the similarity and particularity values in order to determine whether two genes are similar or whether one gene has some significant particular function. Interpretation is frequently based either on an implicit threshold, or an arbitrary one (typically 0.5). Here we investigate a method for determining thresholds supporting the interpretation of the results of a semantic comparison. Results We propose a method for determining the optimal similarity threshold by minimizing the proportions of false-positive and false-negative similarity matches. We compared the distributions of the similarity values of pairs of similar genes and pairs of non-similar genes. These comparisons were performed separately for all three branches of the Gene Ontology. In all situations, we found overlap between the similar and the non-similar distributions, indicating that some similar genes had a similarity value lower than the similarity value of some non-similar genes. We then extended this method to the semantic particularity measure and to a similarity measure applied to the ChEBI ontology. Thresholds were evaluated over the whole HomoloGene database. For each group of homologous genes, we computed all the similarity and particularity values between pairs of genes. Finally, we focused on the PPAR multigene family to show that the similarity and particularity patterns obtained with our thresholds were better at discriminating orthologs and paralogs than those obtained using default thresholds. Conclusion We developed a method for determining optimal semantic similarity and particularity thresholds. We applied this method on the GO and ChEBI ontologies. Qualitative analysis using the thresholds on the PPAR multigene family yielded biologically-relevant patterns. PMID:26230274
Connolly, Declan A J
2012-09-01
The purpose of this article is to assess the value of the anaerobic threshold for use in clinical populations with the intent to improve exercise adaptations and outcomes. The anaerobic threshold is generally poorly understood, improperly used, and poorly measured. It is rarely used in clinical settings and often reserved for athletic performance testing. Increased exercise participation within both clinical and other less healthy populations has increased our attention to optimizing exercise outcomes. Of particular interest is the optimization of lipid metabolism during exercise in order to improve numerous conditions such as blood lipid profile, insulin sensitivity and secretion, and weight loss. Numerous authors report on the benefits of appropriate exercise intensity in optimizing outcomes even though regulation of intensity has proved difficult for many. Despite limited use, selected exercise physiology markers have considerable merit in exercise-intensity regulation. The anaerobic threshold, and other markers such as heart rate, may well provide a simple and valuable mechanism for regulating exercising intensity. The use of the anaerobic threshold and accurate target heart rate to regulate exercise intensity is a valuable approach that is under-utilized across populations. The measurement of the anaerobic threshold can be simplified to allow clients to use nonlaboratory measures, for example heart rate, in order to self-regulate exercise intensity and improve outcomes.
Automatic threshold optimization in nonlinear energy operator based spike detection.
Malik, Muhammad H; Saeed, Maryam; Kamboh, Awais M
2016-08-01
In neural spike sorting systems, the performance of the spike detector has to be maximized because it affects the performance of all subsequent blocks. The non-linear energy operator (NEO) is a popular spike detector due to its detection accuracy and its hardware-friendly architecture. However, it involves a thresholding stage whose value is usually approximated and is thus not optimal. This approximation deteriorates performance in real-time systems where signal-to-noise ratio (SNR) estimation is a challenge, especially at lower SNRs. In this paper, we propose an automatic and robust threshold calculation method using an empirical gradient technique. The method is tested on two different datasets. The results show that our optimized threshold improves the detection accuracy for both high-SNR and low-SNR signals. Boxplots are presented that provide a statistical analysis of improvements in accuracy; for instance, the 75th percentile was at 98.7% and 93.5% for the optimized NEO threshold and the traditional NEO threshold, respectively.
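The NEO itself and the conventional scaled-mean threshold that the abstract calls approximate are simple enough to sketch. The authors' empirical-gradient optimization is not detailed in the abstract, so only the baseline detector is shown below, and the scaling constant c is an illustrative assumption.

```python
import numpy as np

def neo(x):
    """Nonlinear energy operator: psi[n] = x[n]^2 - x[n-1] * x[n+1]."""
    x = np.asarray(x, dtype=float)
    psi = np.zeros_like(x)
    psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
    return psi

def detect_spikes(x, c=8.0):
    """Flag samples whose NEO output exceeds the conventional threshold c * mean(psi)."""
    psi = neo(x)
    threshold = c * psi.mean()   # the stage the paper replaces with an optimized value
    return np.flatnonzero(psi > threshold), threshold
```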
NASA Astrophysics Data System (ADS)
Cai, Wenli; Yoshida, Hiroyuki; Harris, Gordon J.
2007-03-01
Measurement of the volume of focal liver tumors, called liver tumor volumetry, is indispensable for assessing the growth of tumors and for monitoring the response of tumors to oncology treatments. Traditional edge models, such as the maximum gradient and zero-crossing methods, often fail to detect the accurate boundary of a fuzzy object such as a liver tumor. As a result, the computerized volumetry based on these edge models tends to differ from manual segmentation results performed by physicians. In this study, we developed a novel computerized volumetry method for fuzzy objects, called dynamic-thresholding level set (DT level set). An optimal threshold value computed from a histogram tends to shift, relative to the theoretical threshold value obtained from a normal distribution model, toward a smaller region in the histogram. We thus designed a mobile shell structure, called a propagating shell, which is a thick region encompassing the level set front. The optimal threshold calculated from the histogram of the shell drives the level set front toward the boundary of a liver tumor. When the volume ratio between the object and the background in the shell approaches one, the optimal threshold value best fits the theoretical threshold value and the shell stops propagating. Application of the DT level set to 26 hepatic CT cases with 63 biopsy-confirmed hepatocellular carcinomas (HCCs) and metastases showed that the computer measured volumes were highly correlated with those of tumors measured manually by physicians. Our preliminary results showed that DT level set was effective and accurate in estimating the volumes of liver tumors detected in hepatic CT images.
Damian, Anne M; Jacobson, Sandra A; Hentz, Joseph G; Belden, Christine M; Shill, Holly A; Sabbagh, Marwan N; Caviness, John N; Adler, Charles H
2011-01-01
To perform an item analysis of the Montreal Cognitive Assessment (MoCA) versus the Mini-Mental State Examination (MMSE) in the prediction of cognitive impairment, and to examine the characteristics of different MoCA threshold scores. 135 subjects enrolled in a longitudinal clinicopathologic study were administered the MoCA by a single physician and the MMSE by a trained research assistant. Subjects were classified as cognitively impaired or cognitively normal based on independent neuropsychological testing. 89 subjects were found to be cognitively normal, and 46 cognitively impaired (20 with dementia, 26 with mild cognitive impairment). The MoCA was superior in both sensitivity and specificity to the MMSE, although not all MoCA tasks were of equal predictive value. A MoCA threshold score of 26 had a sensitivity of 98% and a specificity of 52% in this population. In a population with a 20% prevalence of cognitive impairment, a threshold of 24 was optimal (negative predictive value 96%, positive predictive value 47%). This analysis suggests the potential for creating an abbreviated MoCA. For screening in primary care, the MoCA threshold of 26 appears optimal. For testing in a memory disorders clinic, a lower threshold has better predictive value. Copyright © 2011 S. Karger AG, Basel.
How to Assess the Value of Medicines?
Simoens, Steven
2010-01-01
This study aims to discuss approaches to assessing the value of medicines. Economic evaluation assesses value by means of the incremental cost-effectiveness ratio (ICER). Health is maximized by selecting medicines with increasing ICERs until the budget is exhausted. The budget size determines the value of the threshold ICER and vice versa. Alternatively, the threshold value can be inferred from pricing/reimbursement decisions, although such values vary between countries. Threshold values derived from the value-of-life literature depend on the technique used. The World Health Organization has proposed a threshold value tied to the national GDP. As decision makers may wish to consider multiple criteria, variable threshold values and weighted ICERs have been suggested. Other approaches (i.e., replacement approach, program budgeting and marginal analysis) have focused on improving resource allocation, rather than maximizing health subject to a budget constraint. Alternatively, the generalized optimization framework and multi-criteria decision analysis make it possible to consider other criteria in addition to value. PMID:21607066
Threshold-driven optimization for reference-based auto-planning
NASA Astrophysics Data System (ADS)
Long, Troy; Chen, Mingli; Jiang, Steve; Lu, Weiguo
2018-02-01
We study threshold-driven optimization methodology for automatically generating a treatment plan that is motivated by a reference DVH for IMRT treatment planning. We present a framework for threshold-driven optimization for reference-based auto-planning (TORA). Commonly used voxel-based quadratic penalties have two components for penalizing under- and over-dosing of voxels: a reference dose threshold and associated penalty weight. Conventional manual- and auto-planning using such a function involves iteratively updating the preference weights while keeping the thresholds constant, an unintuitive and often inconsistent method for planning toward some reference DVH. However, driving a dose distribution by threshold values instead of preference weights can achieve similar plans with less computational effort. The proposed methodology spatially assigns reference DVH information to threshold values, and iteratively improves the quality of that assignment. The methodology effectively handles both sub-optimal and infeasible DVHs. TORA was applied to a prostate case and a liver case as a proof-of-concept. Reference DVHs were generated using a conventional voxel-based objective, then altered to be either infeasible or easy-to-achieve. TORA was able to closely recreate reference DVHs in 5-15 iterations of solving a simple convex sub-problem. TORA has the potential to be effective for auto-planning based on reference DVHs. As dose prediction and knowledge-based planning becomes more prevalent in the clinical setting, incorporating such data into the treatment planning model in a clear, efficient way will be crucial for automated planning. A threshold-focused objective tuning should be explored over conventional methods of updating preference weights for DVH-guided treatment planning.
Sato, Atsushi; Shimizu, Yusaku; Koyama, Junichi; Hongo, Kazuhiro
2017-06-01
Tissue plasminogen activator (tPA) is effective for the treatment of acute brain ischemia, but may trigger fatal brain edema or hemorrhage if the brain ischemia results in a large infarct. Herein, we attempted to predict the extent of infarcts by determining the optimal threshold of ADC values on DWI that predictively distinguishes between infarct and reversible areas, and by reconstructing color-coded images based on this threshold. The study subjects consisted of 36 patients with acute brain ischemia in whom MRA had confirmed reopening of the occluded arteries in a short time (mean: 99 min) after tPA treatment. We measured the apparent diffusion coefficient (ADC) values in several small regions of interest over the white matter within high-intensity areas on the initial diffusion weighted image (DWI); then, by comparing the findings to the follow-up images, we obtained the optimal threshold of ADC values using receiver-operating characteristic analysis. The threshold obtained (583×10⁻⁶ mm²/s) was lower than those previously reported; this threshold could distinguish between infarct and reversible areas with considerable accuracy (sensitivity: 0.87, specificity: 0.94). The threshold obtained and the reconstructed images were predictive of the final radiological result of tPA treatment, and this threshold may be helpful in determining the appropriate management of patients with acute brain ischemia. Copyright © 2017 Elsevier Masson SAS. All rights reserved.
Hard decoding algorithm for optimizing thresholds under general Markovian noise
NASA Astrophysics Data System (ADS)
Chamberland, Christopher; Wallman, Joel; Beale, Stefanie; Laflamme, Raymond
2017-04-01
Quantum error correction is instrumental in protecting quantum systems from noise in quantum computing and communication settings. Pauli channels can be efficiently simulated and threshold values for Pauli error rates under a variety of error-correcting codes have been obtained. However, realistic quantum systems can undergo noise processes that differ significantly from Pauli noise. In this paper, we present an efficient hard decoding algorithm for optimizing thresholds and lowering failure rates of an error-correcting code under general completely positive and trace-preserving (i.e., Markovian) noise. We use our hard decoding algorithm to study the performance of several error-correcting codes under various non-Pauli noise models by computing threshold values and failure rates for these codes. We compare the performance of our hard decoding algorithm to decoders optimized for depolarizing noise and show improvements in thresholds and reductions in failure rates by several orders of magnitude. Our hard decoding algorithm can also be adapted to take advantage of a code's non-Pauli transversal gates to further suppress noise. For example, we show that using the transversal gates of the 5-qubit code allows arbitrary rotations around certain axes to be perfectly corrected. Furthermore, we show that Pauli twirling can increase or decrease the threshold depending upon the code properties. Lastly, we show that even if the physical noise model differs slightly from the hypothesized noise model used to determine an optimized decoder, failure rates can still be reduced by applying our hard decoding algorithm.
Defining operating rules for mitigation of drought effects on water supply systems
NASA Astrophysics Data System (ADS)
Rossi, G.; Caporali, E.; Garrote, L.; Federici, G. V.
2012-04-01
Reservoirs play a pivotal role in water supply system regulation and management, especially during drought periods. Optimization of reservoir releases, related to drought mitigation rules, is particularly required. The hydrologic state of the system is evaluated by defining threshold values expressed in probabilistic terms. Risk deficit curves are used to reduce the ensemble of possible rules for simulation. Threshold values can be linked to specific actions in an operational context at different levels of severity, i.e. normal, pre-alert, alert and emergency scenarios. A simplified model of the water resources system is built to evaluate the threshold values and the management rules. The threshold values are defined considering the probability of satisfying a given fraction of the demand in a certain time horizon, and are validated with a long-term simulation that takes into account the characteristics of the evaluated system. The threshold levels determine curves that define reservoir releases as a function of existing storage volume. A demand reduction is associated with each threshold level. The rules to manage the system in drought conditions, the threshold levels and the reductions are optimized using long-term simulations with different hypothesized states of the system. Synthetic flow sequences with the same statistical properties as the historical ones are produced to evaluate the system behaviour. The performance of different reduction values and different threshold curves is evaluated using different objective functions and performance indices. The methodology is applied to the urban area of Firenze-Prato-Pistoia in central Tuscany, Italy. The demand centres considered are Firenze and Bagno a Ripoli, which have, according to the ISTAT 2001 census, a total of 395,000 inhabitants.
The variance of length of stay and the optimal DRG outlier payments.
Felder, Stefan
2009-09-01
Prospective payment schemes in health care often include supply-side insurance for cost outliers. In hospital reimbursement, prospective payments for patient discharges, based on their classification into diagnosis related group (DRGs), are complemented by outlier payments for long stay patients. The outlier scheme fixes the length of stay (LOS) threshold, constraining the profit risk of the hospitals. In most DRG systems, this threshold increases with the standard deviation of the LOS distribution. The present paper addresses the adequacy of this DRG outlier threshold rule for risk-averse hospitals with preferences depending on the expected value and the variance of profits. It first shows that the optimal threshold solves the hospital's tradeoff between higher profit risk and lower premium loading payments. It then demonstrates for normally distributed truncated LOS that the optimal outlier threshold indeed decreases with an increase in the standard deviation.
[The analysis of threshold effect using Empower Stats software].
Lin, Lin; Chen, Chang-zhong; Yu, Xiao-dan
2013-11-01
In many biomedical studies of how a factor influences an outcome variable, the factor has no influence, or a positive effect, within a certain range; beyond a certain threshold value, the size and/or direction of the effect changes. This is called a threshold effect. Whether there is a threshold effect in the relationship between a factor (x) and the outcome variable (y) can be assessed by fitting a smooth curve and observing whether a piecewise linear relationship exists, and then analysing the threshold effect using a segmented regression model, a likelihood ratio test (LRT) and bootstrap resampling. Empower Stats software, developed by the American company X & Y Solutions Inc., has a threshold-effect analysis module. The user may input a threshold value to simulate data segmented at the given threshold, or may omit the threshold and let the software automatically determine the optimal threshold from the data and calculate its confidence interval.
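A minimal sketch of the kind of segmented (piecewise linear) regression such a module fits is given below, assuming a simple grid search over candidate breakpoints and ordinary least squares; the LRT against a single-line model and the bootstrap confidence interval mentioned above are not shown.

```python
import numpy as np

def fit_threshold_model(x, y, candidates=None):
    """Grid-search the breakpoint of a two-segment linear regression by least squares."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    if candidates is None:
        candidates = np.quantile(x, np.linspace(0.1, 0.9, 81))   # avoid sparse tails
    best_threshold, best_rss = None, np.inf
    for t in candidates:
        hinge = np.clip(x - t, 0.0, None)                        # slope change above t
        design = np.column_stack([np.ones_like(x), x, hinge])
        coef, resid, *_ = np.linalg.lstsq(design, y, rcond=None)
        rss = resid[0] if resid.size else ((y - design @ coef) ** 2).sum()
        if rss < best_rss:
            best_threshold, best_rss = t, rss
    return best_threshold, best_rss
```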
Patel, Bhavik N; Farjat, Alfredo; Schabel, Christoph; Duvnjak, Petar; Mileto, Achille; Ramirez-Giraldo, Juan Carlos; Marin, Daniele
2018-05-01
The purpose of this study was to determine in vitro and in vivo the optimal threshold for renal lesion vascularity at low-energy (40-60 keV) virtual monoenergetic imaging. A rod simulating unenhanced renal parenchymal attenuation (35 HU) was fitted with a syringe containing water. Three iodinated solutions (0.38, 0.57, and 0.76 mg I/mL) were inserted into another rod that simulated enhanced renal parenchyma (180 HU). Rods were inserted into cylindric phantoms of three different body sizes and scanned with single- and dual-energy MDCT. In addition, 102 patients (32 men, 70 women; mean age, 66.8 ± 12.9 [SD] years) with 112 renal lesions (67 nonvascular, 45 vascular) measuring 1.1-8.9 cm underwent single-energy unenhanced and contrast-enhanced dual-energy CT. Optimal threshold attenuation values that differentiated vascular from nonvascular lesions at 40-60 keV were determined. Mean optimal threshold values were 30.2 ± 3.6 (standard error), 20.9 ± 1.3, and 16.1 ± 1.0 HU in the phantom, and 35.9 ± 3.6, 25.4 ± 1.8, and 17.8 ± 1.8 HU in the patients at 40, 50, and 60 keV. Sensitivity and specificity for the thresholds did not change significantly between low-energy and 70-keV virtual monoenergetic imaging (sensitivity, 87-98%; specificity, 90-91%). The AUC from 40 to 70 keV was 0.96 (95% CI, 0.93-0.99) to 0.98 (95% CI, 0.95-1.00). Low-energy virtual monoenergetic imaging at energy-specific optimized attenuation thresholds can be used for reliable characterization of renal lesions.
NASA Astrophysics Data System (ADS)
Song, Chen; Zhong-Cheng, Wu; Hong, Lv
2018-03-01
Building energy forecasting plays an important role in energy management and planning. Using a mind evolutionary algorithm to find the optimal network weights and thresholds for a BP neural network can overcome the BP network's tendency to become trapped in local minima. The optimized network is used both for time-series prediction and for same-month forecasting, yielding two predicted values. These two predictions are then fed into a further neural network to obtain the final forecast. The effectiveness of the method was verified experimentally with energy data from three buildings in Hefei.
NASA Astrophysics Data System (ADS)
Kaewkasi, Pitchaya; Widjaja, Joewono; Uozumi, Jun
2007-03-01
Effects of the threshold value on the detection performance of the modified amplitude-modulated joint transform correlator are quantitatively studied using computer simulation. Fingerprint and human face images are used as test scenes in the presence of noise and a contrast difference. Simulation results demonstrate that this correlator improves detection performance for both types of image used, but more so for human face images. Optimal detection of low-contrast human face images obscured by strong noise can be obtained by selecting an appropriate threshold value.
A low threshold nanocavity in a two-dimensional 12-fold photonic quasicrystal
NASA Astrophysics Data System (ADS)
Ren, Jie; Sun, XiaoHong; Wang, Shuai
2018-05-01
In this article, a low-threshold nanocavity is built and investigated in a two-dimensional 12-fold holographic photonic quasicrystal (PQC). The cavity is formed by using the method of multi-beam common-path interference. By finely adjusting the structure parameters of the cavity, the Q factor and the mode volume are optimized, which are the two keys to a low threshold on the basis of the Purcell effect. Finally, an optimal cavity is obtained with a Q value of 6023 and a mode volume of 1.24×10⁻¹² cm³. On the other hand, by Fourier transformation of the electric field components in the cavity, the in-plane wave vectors are calculated and fitted to evaluate the cavity performance. The performance analysis of the cavity further proves the effectiveness of the optimization process. This has guiding significance for research on low-threshold nano-lasers.
Truscott, James E; Werkman, Marleen; Wright, James E; Farrell, Sam H; Sarkar, Rajiv; Ásbjörnsdóttir, Kristjana; Anderson, Roy M
2017-06-30
There is an increased focus on whether mass drug administration (MDA) programmes alone can interrupt the transmission of soil-transmitted helminths (STH). Mathematical models can be used to model these interventions and are increasingly being implemented to inform investigators about expected trial outcome and the choice of optimum study design. One key factor is the choice of threshold for detecting elimination. However, there are currently no thresholds defined for STH regarding breaking transmission. We develop a simulation of an elimination study, based on the DeWorm3 project, using an individual-based stochastic disease transmission model in conjunction with models of MDA, sampling, diagnostics and the construction of study clusters. The simulation is then used to analyse the relationship between the study end-point elimination threshold and whether elimination is achieved in the long term within the model. We analyse the quality of a range of statistics in terms of the positive predictive values (PPV) and how they depend on a range of covariates, including threshold values, baseline prevalence, measurement time point and how clusters are constructed. End-point infection prevalence performs well in discriminating between villages that achieve interruption of transmission and those that do not, although the quality of the threshold is sensitive to baseline prevalence and threshold value. Optimal post-treatment prevalence threshold value for determining elimination is in the range 2% or less when the baseline prevalence range is broad. For multiple clusters of communities, both the probability of elimination and the ability of thresholds to detect it are strongly dependent on the size of the cluster and the size distribution of the constituent communities. Number of communities in a cluster is a key indicator of probability of elimination and PPV. Extending the time, post-study endpoint, at which the threshold statistic is measured improves PPV value in discriminating between eliminating clusters and those that bounce back. The probability of elimination and PPV are very sensitive to baseline prevalence for individual communities. However, most studies and programmes are constructed on the basis of clusters. Since elimination occurs within smaller population sub-units, the construction of clusters introduces new sensitivities for elimination threshold values to cluster size and the underlying population structure. Study simulation offers an opportunity to investigate key sources of sensitivity for elimination studies and programme designs in advance and to tailor interventions to prevailing local or national conditions.
Watanabe, Ayumi; Inoue, Yusuke; Asano, Yuji; Kikuchi, Kei; Miyatake, Hiroki; Tokushige, Takanobu
2017-01-01
The specific binding ratio (SBR) was first reported by Tossici-Bolt et al. as a quantitative indicator for dopamine transporter (DAT) imaging. It is defined as the ratio of the specific binding concentration in the striatum to the non-specific binding concentration in the whole brain other than the striatum. The non-specific binding concentration is calculated from a region of interest (ROI) set 20 mm inside the outer contour, which is defined by a threshold technique. Tossici-Bolt et al. used a 50% threshold, but with a 50% threshold we sometimes could not define the ROI for the non-specific binding concentration (reference region) and calculate the SBR appropriately. Therefore, we sought a new method for determining the reference region when calculating the SBR. We used data from 20 patients who had undergone DAT imaging in our hospital to calculate the non-specific binding concentration by two methods: fixing the threshold that defines the reference region at specific values (the fixing method), and having an examiner visually optimize the reference region for each examination (the visual optimization method). First, we assessed the reference region of each method visually, and afterward we quantitatively compared the SBR calculated with each method. In the visual assessment, the scores of the fixing method at 30% and of the visual optimization method were higher than the scores of the fixing method at other values, with or without scatter correction. In the quantitative assessment, the SBR obtained by visual optimization of the reference region, based on the consensus of three radiological technologists, was used as a baseline (the standard method). The SBR values showed good agreement between the standard method and both the fixing method at 30% and the visual optimization method, with or without scatter correction. Therefore, the fixing method at 30% and the visual optimization method were equally suitable for determining the reference region.
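For orientation, a rough sketch of the SBR computation described above is given below; the voxel size, the 30% threshold fraction and the use of morphological erosion to move 20 mm inside the outer contour are illustrative assumptions, and the striatal VOI definition is not shown.

```python
import numpy as np
from scipy.ndimage import binary_erosion

def reference_mask(volume, threshold_fraction=0.30, erosion_mm=20.0, voxel_mm=2.0):
    """Non-specific (reference) region: voxels above threshold_fraction * max,
    eroded roughly 20 mm inward from the outer contour (striatal voxels should
    additionally be excluded before averaging)."""
    outer = volume > threshold_fraction * volume.max()
    iterations = max(1, int(round(erosion_mm / voxel_mm)))
    return binary_erosion(outer, iterations=iterations)

def specific_binding_ratio(striatal_concentration, reference_concentration):
    """SBR = (striatal - non-specific) / non-specific binding concentration."""
    return (striatal_concentration - reference_concentration) / reference_concentration
```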
An n -material thresholding method for improving integerness of solutions in topology optimization
Watts, Seth; Tortorelli, Daniel A.
2016-04-10
It is common in solving topology optimization problems to replace an integer-valued characteristic function design field with the material volume fraction field, a real-valued approximation of the design field that permits "fictitious" mixtures of materials during intermediate iterations in the optimization process. This is reasonable so long as one can interpolate properties for such materials and so long as the final design is integer valued. For this purpose, we present a method for smoothly thresholding the volume fractions of an arbitrary number of material phases which specify the design. This method is trivial for two-material design problems, for example, the canonical topology design problem of specifying the presence or absence of a single material within a domain, but it becomes more complex when three or more materials are used, as often occurs in material design problems. We take advantage of the similarity in properties between the volume fractions and the barycentric coordinates on a simplex to derive a thresholding method which is applicable to an arbitrary number of materials. As we show in a sensitivity analysis, this method has smooth derivatives, allowing it to be used in gradient-based optimization algorithms. Finally, we present results which show synergistic effects when used with Solid Isotropic Material with Penalty and Rational Approximation of Material Properties material interpolation functions, popular methods of ensuring integerness of solutions.
NASA Astrophysics Data System (ADS)
Härer, Stefan; Bernhardt, Matthias; Siebers, Matthias; Schulz, Karsten
2018-05-01
Knowledge of current snow cover extent is essential for characterizing energy and moisture fluxes at the Earth's surface. The snow-covered area (SCA) is often estimated by using optical satellite information in combination with the normalized-difference snow index (NDSI). The NDSI relies on a threshold that defines whether a satellite pixel is assumed to be snow covered or snow free. The spatiotemporal representativeness of the standard threshold of 0.4 is however questionable at the local scale. Here, we use local snow cover maps derived from ground-based photography to continuously calibrate the NDSI threshold values (NDSIthr) of Landsat satellite images at two European mountain sites over the period from 2010 to 2015. The Research Catchment Zugspitzplatt (RCZ, Germany) and the Vernagtferner area (VF, Austria) are both located within a single Landsat scene. Nevertheless, the long-term analysis demonstrated that the NDSIthr values at these sites are not correlated (r = 0.17) and differ from the standard threshold of 0.4. For further comparison, a dynamic and locally optimized NDSI threshold was used as well as another locally optimized literature threshold value (0.7). It was shown that large uncertainties in the prediction of the SCA of up to 24.1 % exist in satellite snow cover maps in cases where the standard threshold of 0.4 is used, but a newly developed calibrated quadratic polynomial model which accounts for seasonal threshold dynamics can reduce this error. The model minimizes the SCA uncertainties at the calibration site VF by 50 % in the evaluation period and was also able to improve the results at RCZ in a significant way. Additionally, a scaling experiment shows that the positive effect of a locally adapted threshold diminishes for pixel sizes of 500 m or larger, underlining the general applicability of the standard threshold at larger scales.
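A minimal sketch of the NDSI-based snow classification that such threshold calibration targets is shown below; the band names are assumptions (green and shortwave-infrared reflectances, as used for Landsat), and the photograph-based calibration of the threshold itself is not reproduced.

```python
import numpy as np

def snow_mask(green, swir, ndsi_threshold=0.4):
    """Classify pixels as snow where NDSI = (green - SWIR) / (green + SWIR) exceeds the threshold."""
    green = np.asarray(green, dtype=float)
    swir = np.asarray(swir, dtype=float)
    ndsi = (green - swir) / np.clip(green + swir, 1e-6, None)
    return ndsi > ndsi_threshold

# Snow-covered area fraction of a scene, using the standard 0.4 threshold
# or a locally calibrated, seasonally varying value:
# sca_fraction = snow_mask(green_band, swir_band, ndsi_threshold=0.4).mean()
```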
Minet, V; Baudar, J; Bailly, N; Douxfils, J; Laloy, J; Lessire, S; Gourdin, M; Devalet, B; Chatelain, B; Dogné, J M; Mullier, F
2014-06-01
Accurate diagnosis of heparin-induced thrombocytopenia (HIT) is essential but remains challenging. We have previously demonstrated, in a retrospective study, the usefulness of the combination of the 4Ts score, AcuStar HIT and heparin-induced multiple electrode aggregometry (HIMEA) with optimized thresholds. We aimed at exploring prospectively the performances of our optimized diagnostic algorithm on suspected HIT patients. The secondary objective is to evaluate performances of AcuStar HIT-Ab (PF4-H) in comparison with the clinical outcome. 116 inpatients with clinically suspected immune HIT were included. Our optimized diagnostic algorithm was applied to each patient. Sensitivity, specificity, negative predictive value (NPV), positive predictive value (PPV) of the overall diagnostic strategy as well as AcuStar HIT-Ab (at manufacturer's thresholds and at our thresholds) were calculated using clinical diagnosis as the reference. Among 116 patients, 2 patients had clinically-diagnosed HIT. These 2 patients were positive on AcuStar HIT-Ab, AcuStar HIT-IgG and HIMEA. Using our optimized algorithm, all patients were correctly diagnosed. AcuStar HIT-Ab at our cut-off (>9.41 U/mL) and at manufacturer's cut-off (>1.00 U/mL) showed both a sensitivity of 100.0% and a specificity of 99.1% and 90.4%, respectively. The combination of the 4Ts score, the HemosIL® AcuStar HIT and HIMEA with optimized thresholds may be useful for the rapid and accurate exclusion of the diagnosis of immune HIT. Copyright © 2014 Elsevier Ltd. All rights reserved.
Wang, Yi-Ting; Sung, Pei-Yuan; Lin, Peng-Lin; Yu, Ya-Wen; Chung, Ren-Hua
2015-05-15
Genome-wide association studies (GWAS) have become a common approach to identifying single nucleotide polymorphisms (SNPs) associated with complex diseases. As complex diseases are caused by the joint effects of multiple genes, while the effect of an individual gene or SNP is modest, a method considering the joint effects of multiple SNPs can be more powerful than testing individual SNPs. The multi-SNP analysis aims to test association based on a SNP set, usually defined based on biological knowledge such as gene or pathway, which may contain only a portion of SNPs with effects on the disease. Therefore, a challenge for the multi-SNP analysis is how to effectively select a subset of SNPs with promising association signals from the SNP set. We developed the Optimal P-value Threshold Pedigree Disequilibrium Test (OPTPDT). The OPTPDT uses general nuclear families. A variable p-value threshold algorithm is used to determine an optimal p-value threshold for selecting a subset of SNPs. A permutation procedure is used to assess the significance of the test. We used simulations to verify that the OPTPDT has correct type I error rates. Our power studies showed that the OPTPDT can be more powerful than the set-based test in PLINK, the multi-SNP FBAT test, and the p-value based test GATES. We applied the OPTPDT to a family-based autism GWAS dataset for gene-based association analysis and identified MACROD2-AS1 with genome-wide significance (p-value = 2.5×10⁻⁶). Our simulation results suggested that the OPTPDT is a valid and powerful test. The OPTPDT will be helpful for gene-based or pathway association analysis. The method is ideal for the secondary analysis of existing GWAS datasets, which may identify a set of SNPs with joint effects on the disease.
Bilevel thresholding of sliced image of sludge floc.
Chu, C P; Lee, D J
2004-02-15
This work examined the feasibility of employing various thresholding algorithms to determine the optimal bilevel thresholding value for estimating the geometric parameters of sludge flocs from microtome-sliced images and from confocal laser scanning microscope images. Morphological information extracted from images depends on the bilevel thresholding value. According to the evaluation on luminescence-inverted images and fractal curves (the quadric Koch curve and the Sierpinski carpet), Otsu's method yields more stable performance than other histogram-based algorithms and is chosen to obtain the porosity. The maximum convex perimeter method, however, can probe the shapes and spatial distribution of the pores among the biomass granules in real sludge flocs. A combined algorithm is recommended for probing the sludge floc structure.
Simen, Patrick; Contreras, David; Buck, Cara; Hu, Peter; Holmes, Philip; Cohen, Jonathan D
2009-12-01
The drift-diffusion model (DDM) implements an optimal decision procedure for stationary, 2-alternative forced-choice tasks. The height of a decision threshold applied to accumulating information on each trial determines a speed-accuracy tradeoff (SAT) for the DDM, thereby accounting for a ubiquitous feature of human performance in speeded response tasks. However, little is known about how participants settle on particular tradeoffs. One possibility is that they select SATs that maximize a subjective rate of reward earned for performance. For the DDM, there exist unique, reward-rate-maximizing values for its threshold and starting point parameters in free-response tasks that reward correct responses (R. Bogacz, E. Brown, J. Moehlis, P. Holmes, & J. D. Cohen, 2006). These optimal values vary as a function of response-stimulus interval, prior stimulus probability, and relative reward magnitude for correct responses. We tested the resulting quantitative predictions regarding response time, accuracy, and response bias under these task manipulations and found that grouped data conformed well to the predictions of an optimally parameterized DDM.
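The reward-rate argument can be made concrete with the standard closed-form error rate and decision time of the pure DDM. The sketch below follows the form given by Bogacz et al. (2006) but ignores error penalties, and all parameter values are illustrative assumptions rather than those used in the study.

```python
import numpy as np

def ddm_reward_rate(z, drift=0.1, noise=0.1, t_nondecision=0.3, rsi=1.0):
    """Reward rate at decision threshold z for a free-response task rewarding correct responses."""
    k = drift * z / noise ** 2
    error_rate = 1.0 / (1.0 + np.exp(2.0 * k))    # probability of hitting the wrong bound
    decision_time = (z / drift) * np.tanh(k)      # mean time to reach a bound
    return (1.0 - error_rate) / (decision_time + t_nondecision + rsi)

# The reward-rate-maximizing threshold can be located by a simple grid search:
# thresholds = np.linspace(0.01, 0.5, 500)
# z_optimal = thresholds[np.argmax([ddm_reward_rate(z) for z in thresholds])]
```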
Jafri, Nazia F; Newitt, David C; Kornak, John; Esserman, Laura J; Joe, Bonnie N; Hylton, Nola M
2014-08-01
To evaluate optimal contrast kinetics thresholds for measuring functional tumor volume (FTV) by breast magnetic resonance imaging (MRI) for assessment of recurrence-free survival (RFS). In this Institutional Review Board (IRB)-approved retrospective study of 64 patients (ages 29-72, median age 48.6) undergoing neoadjuvant chemotherapy (NACT) for breast cancer, all patients underwent pretreatment (MRI1) and postchemotherapy (MRI4) breast MRI. Tumor was defined as voxels meeting thresholds for early percent enhancement (PEthresh) and early-to-late signal enhancement ratio (SERthresh), and FTV (PEthresh, SERthresh) was computed by summing all voxels meeting the threshold criteria and minimum connectivity requirements. Ranges of PEthresh from 50% to 220% and SERthresh from 0.0 to 2.0 were evaluated. A Cox proportional hazard model determined associations between change in FTV over treatment and RFS at different PE and SER thresholds. The plot of hazard ratios for change in FTV from MRI1 to MRI4 showed a broad peak, with the maximum hazard ratio and highest significance occurring at a PE threshold of 70% and an SER threshold of 1.0 (hazard ratio = 8.71, 95% confidence interval 2.86-25.5, P < 0.00015), indicating optimal model fit. Enhancement thresholds affect the ability of MRI tumor volume to predict RFS. The value is robust over a wide range of thresholds, supporting the use of FTV as a biomarker. © 2013 Wiley Periodicals, Inc.
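The voxel-level definition of FTV described above lends itself to a short sketch; the array names and the omission of the minimum-connectivity filtering are simplifications made for illustration.

```python
import numpy as np

def functional_tumor_volume(s_pre, s_early, s_late, voxel_volume_cc,
                            pe_threshold=0.70, ser_threshold=1.0):
    """FTV: total volume of voxels whose early percent enhancement (PE) and
    early-to-late signal enhancement ratio (SER) both meet their thresholds."""
    s_pre = np.asarray(s_pre, dtype=float)
    s_early = np.asarray(s_early, dtype=float)
    s_late = np.asarray(s_late, dtype=float)
    pe = (s_early - s_pre) / np.clip(s_pre, 1e-6, None)            # early percent enhancement
    ser = (s_early - s_pre) / np.clip(s_late - s_pre, 1e-6, None)  # signal enhancement ratio
    tumor_voxels = (pe >= pe_threshold) & (ser >= ser_threshold)
    return tumor_voxels.sum() * voxel_volume_cc
```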
Threshold selection for classification of MR brain images by clustering method
NASA Astrophysics Data System (ADS)
Moldovanu, Simona; Obreja, Cristian; Moraru, Luminita
2015-12-01
Given a grey-intensity image, our method detects the optimal threshold for a suitable binarization of MR brain images. In MR brain image processing, the grey levels of pixels belonging to the object are not substantially different from the grey levels belonging to the background. Threshold optimization is an effective tool to separate objects from the background and, further, in classification applications. This paper gives a detailed investigation of the selection of thresholds. Our method does not use the well-known methods for binarization; instead, we perform a simple threshold optimization which, in turn, allows the best classification of the analyzed images into healthy and multiple sclerosis classes. The dissimilarity (or the distance between classes) has been established using a clustering method based on dendrograms. We tested our method using two classes of images: 20 T2-weighted and 20 proton density (PD)-weighted scans from two healthy subjects and from two patients with multiple sclerosis. For each image and for each threshold, the number of white pixels (i.e., the area of white objects in the binary image) has been determined. These pixel counts represent the objects in the clustering operation. The following optimum threshold values are obtained: T = 80 for PD images and T = 30 for T2w images. Each threshold clearly separates the clusters belonging to the studied groups, healthy subjects and patients with multiple sclerosis.
Park, Chul-Hyun; Kim, Don-Kyu; Lee, Yong-Taek; Yi, Youbin; Lee, Jung-Sang; Kim, Kunwoo; Park, Jung Ho; Yoon, Kyung Jae
2017-10-01
To compare swallowing function between healthy subjects and patients with pharyngeal dysphagia using high resolution manometry (HRM) and to evaluate the usefulness of HRM for detecting pharyngeal dysphagia. Seventy-five patients with dysphagia and 28 healthy subjects were included in this study. Diagnosis of dysphagia was confirmed by videofluoroscopy. HRM was performed to measure pressure and timing information at the velopharynx (VP), tongue base (TB), and upper esophageal sphincter (UES). HRM parameters were compared between the dysphagia and healthy groups. Optimal threshold values of significant HRM parameters for dysphagia were determined. VP maximal pressure, TB maximal pressure, UES relaxation duration, and UES resting pressure were lower in the dysphagia group than in the healthy group. UES minimal pressure was higher in the dysphagia group than in the healthy group. Receiver operating characteristic (ROC) analyses were conducted to validate optimal threshold values for significant HRM parameters to identify patients with pharyngeal dysphagia. With maximal VP pressure at a threshold value of 144.0 mmHg, dysphagia was identified with 96.4% sensitivity and 74.7% specificity. With maximal TB pressure at a threshold value of 158.0 mmHg, dysphagia was identified with 96.4% sensitivity and 77.3% specificity. At a threshold value of 2.0 mmHg for UES minimal pressure, dysphagia was diagnosed with 74.7% sensitivity and 60.7% specificity. Lastly, UES relaxation duration of <0.58 seconds had 85.7% sensitivity and 65.3% specificity, and UES resting pressure of <75.0 mmHg had 89.3% sensitivity and 90.7% specificity for identifying dysphagia. We present evidence that HRM could be a useful evaluation tool for detecting pharyngeal dysphagia.
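The abstract does not state which criterion was used to pick the optimal cut-offs from the ROC curves; a common choice is Youden's J, and the following sketch illustrates that approach for a parameter that is lower in dysphagia (such as VP maximal pressure).

```python
import numpy as np

def optimal_threshold_youden(values, has_dysphagia, higher_is_abnormal=False):
    """Cut-off maximizing Youden's J = sensitivity + specificity - 1."""
    values = np.asarray(values, dtype=float)
    labels = np.asarray(has_dysphagia, dtype=bool)
    best = (-1.0, None, None, None)          # (J, cut-off, sensitivity, specificity)
    for cut in np.unique(values):
        positive = values > cut if higher_is_abnormal else values < cut
        sensitivity = (positive & labels).sum() / labels.sum()
        specificity = (~positive & ~labels).sum() / (~labels).sum()
        j = sensitivity + specificity - 1.0
        if j > best[0]:
            best = (j, cut, sensitivity, specificity)
    return best[1], best[2], best[3]
```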
Optimal glottal configuration for ease of phonation.
Lucero, J C
1998-06-01
Recent experimental studies have shown the existence of optimal values of the glottal width and convergence angle, at which the phonation threshold pressure is minimum. These results indicate the existence of an optimal glottal configuration for ease of phonation, not predicted by the previous theory. In this paper, the origin of the optimal configuration is investigated using a low dimensional mathematical model of the vocal fold. Two phenomena of glottal aerodynamics are examined: pressure losses due to air viscosity, and air flow separation from a divergent glottis. The optimal glottal configuration seems to be a consequence of the combined effect of both factors. The results agree with the experimental data, showing that the phonation threshold pressure is minimum when the vocal folds are slightly separated in a near rectangular glottis.
NASA Astrophysics Data System (ADS)
Vujović, D.; Paskota, M.; Todorović, N.; Vučković, V.
2015-07-01
The pre-convective atmosphere over Serbia during the ten-year period (2001-2010) was investigated using the radiosonde data from one meteorological station and the thunderstorm observations from thirteen SYNOP meteorological stations. In order to verify their ability to forecast a thunderstorm, several stability indices were examined. Rank sum scores (RSSs) were used to segregate indices and parameters which can differentiate between a thunderstorm and no-thunderstorm event. The following indices had the best RSS values: Lifted index (LI), K index (KI), Showalter index (SI), Boyden index (BI), Total totals (TT), dew-point temperature and mixing ratio. The threshold value test was used in order to determine the appropriate threshold values for these variables. The threshold with the best skill scores was chosen as the optimal. The thresholds were validated in two ways: through the control data set, and comparing the calculated indices thresholds with the values of indices for a randomly chosen day with an observed thunderstorm. The index with the highest skill for thunderstorm forecasting was LI, and then SI, KI and TT. The BI had the poorest skill scores.
How to determine an optimal threshold to classify real-time crash-prone traffic conditions?
Yang, Kui; Yu, Rongjie; Wang, Xuesong; Quddus, Mohammed; Xue, Lifang
2018-08-01
One proactive approach to reducing traffic crashes is to identify hazardous traffic conditions that may lead to a crash, known as real-time crash prediction. Threshold selection is one of the essential steps of real-time crash prediction: it provides the cut-off point for the posterior probability, produced by a crash risk evaluation model for a specific traffic condition, that is used to separate potential crash warnings from normal traffic conditions. There is, however, a dearth of research on how to effectively determine an optimal threshold; the few studies that discuss the predictive performance of such models have used subjective methods to choose the threshold. Subjective methods cannot automatically identify the optimal thresholds in different traffic and weather conditions in real applications, so a theoretical method for selecting the threshold value is needed to avoid subjective judgments. The purpose of this study is to provide a theoretical method for automatically identifying the optimal threshold. Considering the random effects of variable factors across all roadway segments, a mixed logit model was used to develop the crash risk evaluation model and evaluate crash risk. Cross-entropy, between-class variance and other criteria were employed and investigated to empirically identify the optimal threshold, and K-fold cross-validation was used to validate the performance of the proposed threshold selection methods with the help of several evaluation criteria. The results indicate that (i) the mixed logit model obtains good performance, and (ii) the classification performance of the threshold selected by the minimum cross-entropy method outperforms the other methods according to the criteria. This method can automatically identify thresholds in crash prediction by minimizing the cross entropy between the original dataset, with its continuous probability of a crash occurring, and the binarized dataset obtained after using the thresholds to separate potential crash warnings from normal traffic conditions. Copyright © 2018 Elsevier Ltd. All rights reserved.
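The abstract does not give the exact cross-entropy formulation; the sketch below follows Li's minimum cross-entropy thresholding criterion, applied as an assumption to the predicted crash probabilities, where each side of the candidate cut-off is represented by its class mean.

```python
import numpy as np

def min_cross_entropy_threshold(crash_probabilities, n_candidates=200):
    """Cut-off minimizing the cross entropy between continuous crash probabilities
    and their two-class representation (each class replaced by its mean)."""
    p = np.asarray(crash_probabilities, dtype=float)
    p = p[p > 0]                                          # log terms require positive values
    candidates = np.linspace(p.min(), p.max(), n_candidates)[1:-1]
    best_threshold, best_ce = None, np.inf
    for t in candidates:
        low, high = p[p < t], p[p >= t]
        if low.size == 0 or high.size == 0:
            continue
        mu0, mu1 = low.mean(), high.mean()
        cross_entropy = (low * np.log(low / mu0)).sum() + (high * np.log(high / mu1)).sum()
        if cross_entropy < best_ce:
            best_threshold, best_ce = t, cross_entropy
    return best_threshold
```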
Bettinger, Nicolas; Khalique, Omar K; Krepp, Joseph M; Hamid, Nadira B; Bae, David J; Pulerwitz, Todd C; Liao, Ming; Hahn, Rebecca T; Vahl, Torsten P; Nazif, Tamim M; George, Isaac; Leon, Martin B; Einstein, Andrew J; Kodali, Susheel K
The threshold for the optimal computed tomography (CT) number in Hounsfield Units (HU) to quantify aortic valvular calcium on contrast-enhanced scans has not been standardized. Our aim was to find the most accurate threshold to predict paravalvular regurgitation (PVR) after transcatheter aortic valve replacement (TAVR). 104 patients who underwent TAVR with the CoreValve prosthesis were studied retrospectively. Luminal attenuation (LA) in HU was measured at the level of the aortic annulus. The calcium volume score for the aortic valvular complex was measured using six threshold cutoffs (650 HU, 850 HU, LA × 1.25, LA × 1.5, LA + 50, LA + 100). Receiver-operating characteristic (ROC) analysis was performed to assess the predictive value for > mild PVR (n = 16). Multivariable analysis was performed to determine the accuracy in predicting > mild PVR after adjustment for depth and perimeter oversizing. ROC analysis showed lower area under the curve (AUC) values for the fixed threshold cutoffs (650 and 850 HU) than for the thresholds relative to LA. The LA + 100 threshold had the highest AUC (0.81), higher than that of all other studied protocols; for the LA × 1.25 and LA + 50 protocols the difference only approached statistical significance (p = 0.05 and 0.068, respectively). Multivariable analysis showed calcium volume determined by the LA × 1.25, LA × 1.5, LA + 50, and LA + 100 HU protocols to independently predict PVR. Calcium volume scoring thresholds that are relative to LA are more predictive of PVR post-TAVR than those using fixed cutoffs. A threshold of LA + 100 HU had the highest predictive value. Copyright © 2017 Society of Cardiovascular Computed Tomography. Published by Elsevier Inc. All rights reserved.
Optimal Clustering in Graphs with Weighted Edges: A Unified Approach to the Threshold Problem.
ERIC Educational Resources Information Center
Goetschel, Roy; Voxman, William
1987-01-01
Relations on a finite set V are viewed as weighted graphs. Using the language of graph theory, two methods of partitioning V are examined: selecting threshold values and applying them to a maximal weighted spanning forest, and using a parametric linear program to obtain a most adhesive partition. (Author/EM)
NASA Astrophysics Data System (ADS)
Khamwan, Kitiwat; Krisanachinda, Anchali; Pluempitiwiriyawej, Charnchai
2012-10-01
This study presents an automatic method to trace the boundary of the tumour in positron emission tomography (PET) images. It has been discovered that Otsu's threshold value is biased when the within-class variances of the object and the background are significantly different. To solve the problem, a double-stage threshold search that minimizes the energy between the first Otsu threshold and the maximum intensity value is introduced. Such shifted-optimal thresholding is embedded into a region-based active contour so that both algorithms are performed consecutively. The efficiency of the method is validated using six sphere inserts (0.52-26.53 cc volume) of the IEC/2001 torso phantom. Both the spheres and the phantom background were filled with 18F solution, and PET images were acquired at four source-to-background ratios (SBRs). The results illustrate that the tumour volumes segmented by the combined algorithm are more accurate than those obtained with the traditional active contour. The method was then applied clinically in ten oesophageal cancer patients, and the results were evaluated against manual tracing by an experienced radiation oncologist. The advantage of the algorithm is the reduced erroneous delineation, which improves the precision and accuracy of PET tumour contouring. Moreover, the combined method is robust, independent of the SBR threshold-volume curves, and does not require prior lesion size measurement.
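A minimal sketch of such a double-stage search is given below, assuming the second pass is simply an Otsu search restricted to the range between the first Otsu threshold and the image maximum; the exact energy term minimized in the paper may differ.

    import numpy as np
    from skimage.filters import threshold_otsu

    def double_stage_otsu(pet_img):
        # First stage: ordinary Otsu threshold over the whole image.
        t1 = threshold_otsu(pet_img)
        # Second stage: repeat the search restricted to [t1, max intensity],
        # shifting the threshold upwards when the background class dominates.
        upper = pet_img[pet_img >= t1]
        return threshold_otsu(upper) if upper.size > 1 else t1

    # mask = pet_img >= double_stage_otsu(pet_img)   # e.g. a seed region for the active contour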
Optimal Binarization of Gray-Scaled Digital Images via Fuzzy Reasoning
NASA Technical Reports Server (NTRS)
Dominguez, Jesus A. (Inventor); Klinko, Steven J. (Inventor)
2007-01-01
A technique for finding an optimal threshold for binarization of a gray scale image employs fuzzy reasoning. A triangular membership function is employed which is dependent on the degree to which the pixels in the image belong to either the foreground class or the background class. Use of a simplified linear fuzzy entropy factor function facilitates short execution times and use of membership values between 0.0 and 1.0 for improved accuracy. To improve accuracy further, the membership function employs lower and upper bound gray level limits that can vary from image to image and are selected to be equal to the minimum and the maximum gray levels, respectively, that are present in the image to be converted. To identify the optimal binarization threshold, an iterative process is employed in which different possible thresholds are tested and the one providing the minimum fuzzy entropy measure is selected.
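A rough sketch of the iterative search follows; the membership bounded by the image's own minimum and maximum gray levels mirrors the description above, while the linear entropy factor S(u) = 2*min(u, 1 - u) is an assumed stand-in for the "simplified linear fuzzy entropy factor".

    import numpy as np

    def fuzzy_entropy_threshold(img):
        g = img.astype(float).ravel()
        span = max(g.max() - g.min(), 1e-9)          # lower/upper bounds taken from the image itself
        best_t, best_h = None, np.inf
        for t in np.unique(g)[1:-1]:                 # candidate thresholds
            mu0, mu1 = g[g <= t].mean(), g[g > t].mean()
            # membership: the closer a pixel is to its class mean, the closer to 1
            u = np.where(g <= t,
                         1.0 / (1.0 + np.abs(g - mu0) / span),
                         1.0 / (1.0 + np.abs(g - mu1) / span))
            h = np.mean(2.0 * np.minimum(u, 1.0 - u))    # assumed linear entropy factor
            if h < best_h:
                best_t, best_h = t, h
        return best_t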
Wang, Rui-Ping; Jiang, Yong-Gen; Zhao, Gen-Ming; Guo, Xiao-Qin; Michael, Engelgau
2017-12-01
The China Infectious Disease Automated-alert and Response System (CIDARS) was successfully implemented and became operational nationwide in 2008. The CIDARS plays an important role in, and has been integrated into, the routine outbreak monitoring efforts of the Center for Disease Control (CDC) at all levels in China. In the CIDARS, thresholds were initially determined using the "Mean+2SD" method, which has limitations. This study compared the performance of optimized thresholds defined using the "Mean+2SD" method with that of 5 novel algorithms, in order to select the optimal "Outbreak Gold Standard (OGS)" and corresponding thresholds for outbreak detection. Data for infectious disease were organized by calendar week and year. The "Mean+2SD", C1, C2, moving average (MA), seasonal model (SM), and cumulative sum (CUSUM) algorithms were applied. Outbreak signals for the predicted value (Px) were calculated using a percentile-based moving window. When the outbreak signals generated by an algorithm were in line with a Px-generated outbreak signal for each week, this Px was defined as the optimized threshold for that algorithm. In this study, six infectious diseases were selected and classified into TYPE A (chickenpox and mumps), TYPE B (influenza and rubella) and TYPE C [hand foot and mouth disease (HFMD) and scarlet fever]. Optimized thresholds for chickenpox (P55), mumps (P50), influenza (P40, P55, and P75), rubella (P45 and P75), HFMD (P65 and P70), and scarlet fever (P75 and P80) were identified. The C1, C2, CUSUM, SM, and MA algorithms were appropriate for TYPE A. All 6 algorithms were appropriate for TYPE B. The C1 and CUSUM algorithms were appropriate for TYPE C. It is critical to incorporate more flexible algorithms as OGS into the CIDARS and to identify the proper OGS and corresponding recommended optimized threshold for different infectious disease types.
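For illustration, the baseline "Mean+2SD" threshold and a percentile-based moving-window threshold Px might look as follows; the one-year window is an assumed value, and the matching of algorithm signals against Px signals used to select the optimized percentile is omitted.

    import numpy as np

    def mean_plus_2sd_threshold(weekly_counts):
        # Baseline CIDARS-style threshold: historical mean plus two standard deviations.
        x = np.asarray(weekly_counts, dtype=float)
        return x.mean() + 2.0 * x.std(ddof=1)

    def percentile_threshold(weekly_counts, week, percentile, window=52):
        # Px for a given week: the chosen percentile of the preceding `window` weeks.
        x = np.asarray(weekly_counts, dtype=float)
        history = x[max(0, week - window):week]
        return np.percentile(history, percentile)

    # signal = weekly_counts[week] > percentile_threshold(weekly_counts, week, 55)  # e.g. P55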
Threshold selection for classification of MR brain images by clustering method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moldovanu, Simona; Dumitru Moţoc High School, 15 Milcov St., 800509, Galaţi; Obreja, Cristian
Given a grey-intensity image, our method detects the optimal threshold for a suitable binarization of MR brain images. In MR brain image processing, the grey levels of pixels belonging to the object are not substantially different from the grey levels belonging to the background. Threshold optimization is an effective tool to separate objects from the background and, further, to support classification applications. This paper gives a detailed investigation of the selection of thresholds. Our method does not use the well-known methods for binarization; instead, we perform a simple threshold optimization which, in turn, allows the best classification of the analyzed images into healthy and multiple sclerosis disease. The dissimilarity (the distance between classes) has been established using a clustering method based on dendrograms. We tested our method using two classes of images, consisting of 20 T2-weighted and 20 proton density (PD)-weighted scans from two healthy subjects and from two patients with multiple sclerosis. For each image and for each threshold, the number of white pixels (the area of white objects in the binary image) was determined. These pixel numbers represent the objects in the clustering operation. The following optimum threshold values are obtained: T = 80 for PD images and T = 30 for T2-weighted images. Each threshold clearly separates the clusters belonging to the two studied groups, healthy subjects and patients with multiple sclerosis.
Tay, Timothy Kwang Yong; Thike, Aye Aye; Pathmanathan, Nirmala; Jara-Lazaro, Ana Richelia; Iqbal, Jabed; Sng, Adeline Shi Hui; Ye, Heng Seow; Lim, Jeffrey Chun Tatt; Koh, Valerie Cui Yun; Tan, Jane Sie Yong; Yeong, Joe Poh Sheng; Chow, Zi Long; Li, Hui Hua; Cheng, Chee Leong; Tan, Puay Hoon
2018-01-01
Background: Ki67 positivity in invasive breast cancers has an inverse correlation with survival outcomes and serves as an immunohistochemical surrogate for molecular subtyping of breast cancer, particularly ER positive breast cancer. The optimal threshold of Ki67 in both settings, however, remains elusive. We use computer assisted image analysis (CAIA) to determine the optimal threshold for Ki67 in predicting survival outcomes and differentiating luminal B from luminal A breast cancers. Methods: Quantitative scoring of Ki67 on tissue microarray (TMA) sections of 440 invasive breast cancers was performed using Aperio ePathology ImmunoHistochemistry Nuclear Image Analysis algorithm, with TMA slides digitally scanned via Aperio ScanScope XT System. Results: On multivariate analysis, tumours with Ki67 ≥14% had an increased likelihood of recurrence (HR 1.941, p=0.021) and shorter overall survival (HR 2.201, p=0.016). Similar findings were observed in the subset of 343 ER positive breast cancers (HR 2.409, p=0.012 and HR 2.787, p=0.012 respectively). The value of Ki67 associated with ER+HER2-PR<20% tumours (Luminal B subtype) was found to be <17%. Conclusion: Using CAIA, we found optimal thresholds for Ki67 that predict a poorer prognosis and an association with the Luminal B subtype of breast cancer. Further investigation and validation of these thresholds are recommended. PMID:29545924
Shen, Jing; Hu, Yanyun; Liu, Fang; Zeng, Hui; Li, Lianxi; Zhao, Jun; Zhao, Jungong; Zheng, Taishan; Lu, Huijuan; Lu, Fengdi; Bao, Yuqian; Jia, Weiping
2013-10-01
We investigated the relationship between vibration perception threshold and diabetic retinopathy and verified the screening value of vibration perception threshold for severe diabetic retinopathy. A total of 955 patients with type 2 diabetes were recruited and divided into three groups according to their fundus oculi photography results: no diabetic retinopathy (n = 654, 68.48%), non-sight-threatening diabetic retinopathy (n = 189, 19.79%) and sight-threatening diabetic retinopathy (n = 112, 11.73%). Their clinical and biochemical characteristics, vibration perception threshold and diabetic retinopathy grades were recorded and compared. There were significant differences in diabetes duration and blood glucose levels among the three groups (all p < 0.05). The values of vibration perception threshold increased with the rising severity of retinopathy, and the vibration perception threshold level of the sight-threatening diabetic retinopathy group was significantly higher than that of both the non-sight-threatening diabetic retinopathy and no diabetic retinopathy groups (both p < 0.01). The prevalence of sight-threatening diabetic retinopathy in the vibration perception threshold >25 V group was significantly higher than that in the 16-24 V group (p < 0.01). The severity of diabetic retinopathy was positively associated with diabetes duration, blood glucose indexes and vibration perception threshold (all p < 0.01). Multiple stepwise regression analysis showed that glycosylated haemoglobin (β = 0.385, p = 0.000), diabetes duration (β = 0.275, p = 0.000) and vibration perception threshold (β = 0.180, p = 0.015) were independent risk factors for diabetic retinopathy. Receiver operating characteristic analysis further revealed that a vibration perception threshold higher than 18 V was the optimal cut point for reflecting high risk of sight-threatening diabetic retinopathy (odds ratio = 4.20, 95% confidence interval = 2.67-6.59). There was a close association between vibration perception threshold and the severity of diabetic retinopathy. Vibration perception threshold is a potential screening method for diabetic retinopathy, and its optimal cut-off for indicating high risk of sight-threatening retinopathy was 18 V. Copyright © 2013 John Wiley & Sons, Ltd.
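The abstract does not state which ROC criterion yielded the 18 V cut point; a common choice is the Youden index, sketched here purely for illustration (the variable names are hypothetical).

    import numpy as np
    from sklearn.metrics import roc_curve

    def youden_optimal_cutoff(y_true, scores):
        # Cut point maximizing Youden's J = sensitivity + specificity - 1.
        fpr, tpr, thresholds = roc_curve(y_true, scores)
        return thresholds[np.argmax(tpr - fpr)]

    # cutoff = youden_optimal_cutoff(has_sight_threatening_dr, vpt_volts)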
Adaptive time-sequential binary sensing for high dynamic range imaging
NASA Astrophysics Data System (ADS)
Hu, Chenhui; Lu, Yue M.
2012-06-01
We present a novel image sensor for high dynamic range imaging. The sensor performs an adaptive one-bit quantization at each pixel, with the pixel output switched from 0 to 1 only if the number of photons reaching that pixel is greater than or equal to a quantization threshold. With an oracle knowledge of the incident light intensity, one can pick an optimal threshold (for that light intensity), and the corresponding Fisher information contained in the output sequence closely follows that of an ideal unquantized sensor over a wide range of intensity values. This observation suggests the potential gains one may achieve by adaptively updating the quantization thresholds. As the main contribution of this work, we propose a time-sequential threshold-updating rule that asymptotically approaches the performance of the oracle scheme. With every threshold mapped to a number of ordered states, the dynamics of the proposed scheme can be modeled as a parametric Markov chain. We show that the frequencies of different thresholds converge to a steady-state distribution that is concentrated around the optimal choice. Moreover, numerical experiments show that the theoretical performance measures (Fisher information and Cramér-Rao bounds) can be achieved by a maximum likelihood estimator, which is guaranteed to find the globally optimal solution due to the concavity of the log-likelihood functions. Compared with conventional image sensors and the strategy that utilizes a constant single-photon threshold considered in previous work, the proposed scheme attains orders of magnitude improvement in terms of sensor dynamic range.
Sesay, Musa; Robin, Georges; Tauzin-Fin, Patrick; Sacko, Oumar; Gimbert, Edouard; Vignes, Jean-Rodolphe; Liguoro, Dominique; Nouette-Gaulain, Karine
2015-04-01
The autonomic nervous system is influenced by many stimuli including pain. Heart rate variability (HRV) is an indirect marker of the autonomic nervous system. Because of the paucity of data, this study sought to determine the optimal thresholds of HRV above which patients are in pain after minor spinal surgery (MSS). Secondly, we evaluated the correlation between HRV and the numeric rating scale (NRS). Following institutional review board approval, patients who underwent MSS were assessed in the postanesthesia care unit after extubation. A laptop containing the HRV software was connected to the ECG monitor. The low-frequency band (LF: 0.04 to 0.15 Hz) denoted both sympathetic and parasympathetic activities, whereas the high-frequency band (HF: 0.15 to 0.4 Hz) represented parasympathetic activity. LF/HF was the sympathovagal balance. Pain was quantified by the NRS ranging from 0 (no pain) to 10 (worst imaginable pain). Simultaneously, HRV parameters were noted. Optimal thresholds were calculated using receiver operating characteristic curves with NRS>3 as cutoff. The correlation between HRV and NRS was assessed using the Spearman rank test. We included 120 patients (64 men and 56 women), mean age 51±14 years. The optimal pain threshold values were 298 ms for LF and 3.12 for LF/HF, with no significant change in HF. NRS was correlated with LF (r=0.29, P<0.005) and LF/HF (r=0.31, P<0.001) but not with HF (r=0.09, NS). This study suggests that, after MSS, values of LF > 298 ms and LF/HF > 3.12 denote acute pain (NRS>3). These HRV parameters are significantly correlated with NRS.
Optimal thresholds for the estimation of area rain-rate moments by the threshold method
NASA Technical Reports Server (NTRS)
Short, David A.; Shimizu, Kunio; Kedem, Benjamin
1993-01-01
Optimization of the threshold method, achieved by determination of the threshold that maximizes the correlation between an area-average rain-rate moment and the area coverage of rain rates exceeding the threshold, is demonstrated empirically and theoretically. Empirical results for a sequence of GATE radar snapshots show optimal thresholds of 5 and 27 mm/h for the first and second moments, respectively. Theoretical optimization of the threshold method by the maximum-likelihood approach of Kedem and Pavlopoulos (1991) predicts optimal thresholds near 5 and 26 mm/h for lognormally distributed rain rates with GATE-like parameters. The agreement between theory and observations suggests that the optimal threshold can be understood as arising due to sampling variations, from snapshot to snapshot, of a parent rain-rate distribution. Optimal thresholds for gamma and inverse Gaussian distributions are also derived and compared.
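The empirical optimization amounts to sweeping candidate thresholds and keeping the one whose fractional area coverage correlates best with the area-average moment across snapshots; a sketch with an illustrative threshold grid in mm/h follows (it is a reconstruction of the idea, not the authors' code).

    import numpy as np

    def optimal_threshold_for_moment(snapshots, order=1, candidates=None):
        """snapshots: list of 2-D rain-rate fields (mm/h); order: moment order."""
        fields = [np.asarray(s, dtype=float) for s in snapshots]
        moments = np.array([np.mean(f ** order) for f in fields])
        if candidates is None:
            candidates = np.linspace(0.5, 50.0, 100)       # illustrative grid, mm/h
        best_tau, best_r = None, -np.inf
        for tau in candidates:
            coverage = np.array([np.mean(f > tau) for f in fields])
            if coverage.std() == 0:
                continue
            r = np.corrcoef(moments, coverage)[0, 1]       # correlation across snapshots
            if r > best_r:
                best_tau, best_r = tau, r
        return best_tau, best_r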
Narrowing of ischiofemoral and quadratus femoris spaces in pediatric ischiofemoral impingement.
Goldberg-Stein, Shlomit; Friedman, Avi; Gao, Qi; Choi, Jaeun; Schulz, Jacob; Fornari, Eric; Taragin, Benjamin
2018-05-05
To correlate MRI findings of quadratus femoris muscle edema (QFME) with narrowing of the ischiofemoral space (IFS) and quadratus femoris space (QFS) in children, and to identify threshold values reflecting an anatomic architecture that may predispose to ischiofemoral impingement. A case-control retrospective MRI review of 49 hips in 27 children (mean age, 13 years) with QFME was compared to 49 hips in 27 gender- and age-matched controls. Two radiologists independently measured IFS and QFS. Generalized linear mixed-effects models were fit to compare IFS and QFS values between cases and controls and to adjust for correlation in repeated measures from the same subject. Receiver operating characteristic (ROC) analysis determined optimal threshold values. Compared to controls, cases had significantly smaller IFS (p < 0.001, both readers) and QFS (reader 1: p < 0.001; reader 2: p = 0.003). When stratified as preteen (< 13 years) or teenage (≥ 13 years), lower mean IFS and QFS were observed in cases versus controls in both age groups. The area under the ROC curve for IFS and QFS was high in preteens (0.77 and 0.71) and teens (0.94 and 0.88). Threshold values were 14.9 mm (preteens) and 19 mm (teens) for IFS and 11.2 mm (preteens) and 11.1 mm (teens) for QFS. IFS and QFS were modestly correlated with age among controls only. Pediatric patients with QFME had significantly narrower QFS and IFS compared with controls. IFS and QFS were found to normally increase in size with age. Optimal cutoff threshold values were identified for QFS and IFS in preteens and teenagers.
Assenova, Valentina A
2018-01-01
Complex innovations (ideas, practices, and technologies that hold uncertain benefits for potential adopters) often vary in their ability to diffuse in different communities over time. To explain why, I develop a model of innovation adoption in which agents engage in naïve (DeGroot) learning about the value of an innovation within their social networks. Using simulations on Bernoulli random graphs, I examine how adoption varies with network properties and with the distribution of initial opinions and adoption thresholds. The results show that: (i) low-density and high-asymmetry networks produce polarization in influence to adopt an innovation over time, (ii) increasing network density and asymmetry promote adoption under a variety of opinion and threshold distributions, and (iii) the optimal levels of density and asymmetry in networks depend on the distribution of thresholds: networks with high density (>0.25) and high asymmetry (>0.50) are optimal for maximizing diffusion when adoption thresholds are right-skewed (i.e., barriers to adoption are low), but networks with low density (<0.01) and low asymmetry (<0.25) are optimal when thresholds are left-skewed. I draw on data from a diffusion field experiment to predict adoption over time and compare the results to observed outcomes.
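A compact simulation of the described setup, naïve (DeGroot) averaging of opinions on a directed Bernoulli random graph with adoption once an agent's opinion exceeds a personal threshold, is sketched below; the opinion and threshold distributions, the self-weighting, and the parameter values are illustrative assumptions rather than the study's actual settings.

    import numpy as np

    def simulate_adoption(n=200, density=0.05, steps=50, seed=0):
        rng = np.random.default_rng(seed)
        A = (rng.random((n, n)) < density).astype(float)   # directed Bernoulli graph
        np.fill_diagonal(A, 1.0)                           # agents also weigh their own view
        W = A / A.sum(axis=1, keepdims=True)               # row-stochastic influence weights
        opinions = rng.uniform(0.0, 1.0, size=n)           # initial opinions of the innovation's value
        thresholds = rng.beta(2, 5, size=n)                # right-skewed thresholds (low barriers)
        adopted = np.zeros(n, dtype=bool)
        for _ in range(steps):
            opinions = W @ opinions                        # naive DeGroot averaging update
            adopted |= opinions > thresholds
        return adopted.mean()                              # fraction of adopters

    print(simulate_adoption())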
Binarization of Gray-Scaled Digital Images Via Fuzzy Reasoning
NASA Technical Reports Server (NTRS)
Dominquez, Jesus A.; Klinko, Steve; Voska, Ned (Technical Monitor)
2002-01-01
A new fast-computational technique based on a fuzzy entropy measure has been developed to find an optimal binary image threshold. In this method, the image pixel membership functions are dependent on the threshold value and reflect the distribution of pixel values in two classes; thus, this technique minimizes the classification error. The new method is compared with two of the best-known threshold selection techniques, Otsu and Huang-Wang. The performance of the proposed method supersedes that of the Huang-Wang and Otsu methods when the image has a textured background and poor printing quality. The three methods perform well but yield different binarization approaches if the background and foreground of the image have well-separated gray-level ranges.
Kang, Hyunchul
2015-01-01
We investigate the in-network processing of an iceberg join query in wireless sensor networks (WSNs). An iceberg join is a special type of join where only those joined tuples whose cardinality exceeds a certain threshold (called iceberg threshold) are qualified for the result. Processing such a join involves the value matching for the join predicate as well as the checking of the cardinality constraint for the iceberg threshold. In the previous scheme, the value matching is carried out as the main task for filtering non-joinable tuples while the iceberg threshold is treated as an additional constraint. We take an alternative approach, meeting the cardinality constraint first and matching values next. In this approach, with a logical fragmentation of the join operand relations on the aggregate counts of the joining attribute values, the optimal sequence of 2-way fragment semijoins is generated, where each fragment semijoin employs a Bloom filter as a synopsis of the joining attribute values. This sequence filters non-joinable tuples in an energy-efficient way in WSNs. Through implementation and a set of detailed experiments, we show that our alternative approach considerably outperforms the previous one. PMID:25774710
WE-H-207A-06: Hypoxia Quantification in Static PET Images: The Signal in the Noise
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keller, H; Yeung, I; Milosevic, M
2016-06-15
Purpose: Quantification of hypoxia from PET images is of considerable clinical interest. In the absence of dynamic PET imaging, the hypoxic fraction (HF) of a tumor has to be estimated from voxel values of activity concentration of a radioactive hypoxia tracer. This work is part of an effort to standardize quantification of tumor hypoxic fraction from PET images. Methods: A simple hypoxia imaging model in the tumor was developed. The distribution of the tracer activity was described as the sum of two different probability distributions, one for the normoxic (and necrotic) voxels, the other for the hypoxic voxels. The widths of the distributions arise from variability in transport, tumor tissue inhomogeneity, tracer binding kinetics, and PET image noise. Quantification of HF was performed for various levels of variability using two different methodologies: a) classification thresholds between normoxic and hypoxic voxels based on a non-hypoxic surrogate (muscle), and b) estimation of the (posterior) probability distributions based on maximum likelihood optimization, which does not require a surrogate. Data from the hypoxia imaging model and from 27 cervical cancer patients enrolled in a FAZA PET study were analyzed. Results: In the model, where the true value of HF is known, thresholds usually underestimate the value for large variability. For the patients, a significant uncertainty of the HF values (an average intra-patient range of 17%) was caused by spatial non-uniformity of image noise, which is a hallmark of all PET images. Maximum likelihood estimation (MLE) is able to directly optimize the weights of both distributions, but may suffer from poor optimization convergence. For some patients, MLE-based HF values showed significant differences from threshold-based HF values. Conclusion: HF values depend critically on the magnitude of the different sources of tracer uptake variability. A measure of confidence should also be reported.
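The two-distribution model can be illustrated by a maximum-likelihood (EM) fit of a two-component mixture to the voxel activity values, taking the weight of the higher-uptake component as the hypoxic fraction. Gaussian components and the scikit-learn EM fit are assumptions here, since the abstract does not specify the distribution families.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    def hypoxic_fraction_mle(voxel_activity):
        x = np.asarray(voxel_activity, dtype=float).reshape(-1, 1)
        gmm = GaussianMixture(n_components=2, n_init=5, random_state=0).fit(x)
        hypoxic = int(np.argmax(gmm.means_.ravel()))   # higher-mean component = hypoxic voxels
        return float(gmm.weights_[hypoxic])            # mixture weight = hypoxic fraction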
NASA Technical Reports Server (NTRS)
Heine, John J. (Inventor); Clarke, Laurence P. (Inventor); Deans, Stanley R. (Inventor); Stauduhar, Richard Paul (Inventor); Cullers, David Kent (Inventor)
2001-01-01
A system and method for analyzing a medical image to determine whether an abnormality is present, for example, in digital mammograms, includes the application of a wavelet expansion to a raw image to obtain subspace images of varying resolution. At least one subspace image is selected that has a resolution commensurate with a desired predetermined detection resolution range. A functional form of a probability distribution function is determined for each selected subspace image, and an optimal statistical normal image region test is determined for each selected subspace image. A threshold level for the probability distribution function is established from the optimal statistical normal image region test for each selected subspace image. A region size comprising at least one sector is defined, and an output image is created that includes a combination of all regions for each selected subspace image. Each region has a first value when the region intensity level is above the threshold and a second value when the region intensity level is below the threshold. This permits the localization of a potential abnormality within the image.
A derivation of the stable cavitation threshold accounting for bubble-bubble interactions.
Guédra, Matthieu; Cornu, Corentin; Inserra, Claude
2017-09-01
The subharmonic emission of sound coming from the nonlinear response of a bubble population is the most used indicator for stable cavitation. When driven at twice their resonance frequency, bubbles can exhibit subharmonic spherical oscillations if the acoustic pressure amplitude exceeds a threshold value. Although various theoretical derivations exist for the subharmonic emission by free or coated bubbles, they all rest on the single bubble model. In this paper, we propose an analytical expression of the subharmonic threshold for interacting bubbles in a homogeneous, monodisperse cloud. This theory predicts a shift of the subharmonic resonance frequency and a decrease of the corresponding pressure threshold due to the interactions. For a given sonication frequency, these results show that an optimal value of the interaction strength (i.e. the number density of bubbles) can be found for which the subharmonic threshold is minimum, which is consistent with recently published experiments conducted on ultrasound contrast agents. Copyright © 2017 Elsevier B.V. All rights reserved.
[Target volume segmentation of PET images by an iterative method based on threshold value].
Castro, P; Huerga, C; Glaría, L A; Plaza, R; Rodado, S; Marín, M D; Mañas, A; Serrada, A; Núñez, L
2014-01-01
An automatic segmentation method is presented for PET images, based on an iterative threshold approximation that accounts for both lesion size and the background present during acquisition. Optimal threshold values representing correct segmentation of volumes were determined from a PET phantom study containing spheres of different sizes and different known background conditions. These optimal values were normalized to the background and fitted by regression to a function of two variables: lesion volume and signal-to-background ratio (SBR). This adjustment function was used to build an iterative segmentation method and, based on it, an automatic delineation procedure was proposed. The procedure was validated on phantom images and its viability was confirmed by retrospectively applying it to two oncology patients. The resulting adjustment function depended linearly on the SBR and decreased with increasing volume. During validation of the proposed method, the volume deviations with respect to the real volume and to the CT volume were below 10% and 9%, respectively, except for lesions with a volume below 0.6 ml. The proposed automatic segmentation method can be applied in clinical practice to tumor radiotherapy treatment planning in a simple and reliable way, with a precision close to the resolution of PET images. Copyright © 2013 Elsevier España, S.L.U. and SEMNIM. All rights reserved.
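The iterative use of the adjustment function can be sketched as a fixed-point iteration: segment with the current threshold, measure lesion volume and SBR, recompute the threshold from the adjustment function, and repeat until it stops changing. The commented adjustment function below is a hypothetical placeholder with the reported qualitative behaviour (increasing with SBR, decreasing with volume), not the regression actually fitted in the study.

    import numpy as np

    def iterative_threshold_segmentation(pet_img, background, voxel_ml, adjust,
                                         n_iter=20, tol=1e-3):
        """adjust(volume_ml, sbr) returns the background-normalized threshold."""
        t = 0.5 * pet_img.max()                       # initial guess
        for _ in range(n_iter):
            mask = pet_img >= t
            if not mask.any():
                break
            volume_ml = mask.sum() * voxel_ml         # current lesion volume
            sbr = pet_img[mask].mean() / background   # current signal-to-background ratio
            t_new = adjust(volume_ml, sbr) * background
            converged = abs(t_new - t) < tol * max(t, 1e-9)
            t = t_new
            if converged:
                break
        return t, pet_img >= t

    # Hypothetical adjustment function with the qualitative behaviour reported above:
    # adjust = lambda v, sbr: 1.0 + 0.4 * sbr - 0.05 * v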
Chaotic Signal Denoising Based on Hierarchical Threshold Synchrosqueezed Wavelet Transform
NASA Astrophysics Data System (ADS)
Wang, Wen-Bo; Jing, Yun-yu; Zhao, Yan-chao; Zhang, Lian-Hua; Wang, Xiang-Li
2017-12-01
To overcome the shortcomings of the single-threshold synchrosqueezed wavelet transform (SWT) denoising method, an adaptive hierarchical-threshold SWT chaotic signal denoising method is proposed. Firstly, a new SWT threshold function is constructed based on Stein's unbiased risk estimate, which is twice continuously differentiable. Then, using the new threshold function, a thresholding process based on the minimum mean square error is implemented, and the optimal estimate of each layer's threshold in SWT chaotic denoising is obtained. Experimental results on a simulated chaotic signal and on measured sunspot signals show that the proposed method filters the noise of the chaotic signal well and recovers the intrinsic chaotic characteristics of the original signal. Compared with the EEMD denoising method and the single-threshold SWT denoising method, the proposed method obtains better denoising results for the chaotic signal.
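The hierarchical idea, estimating a separate threshold for each decomposition layer, can be illustrated with an ordinary discrete wavelet transform and a per-level universal threshold; this is only an analogy for the synchrosqueezed transform and the SURE-based threshold function actually proposed, which are more involved.

    import numpy as np
    import pywt

    def per_level_denoise(signal, wavelet="db4", level=5):
        coeffs = pywt.wavedec(signal, wavelet, level=level)
        out = [coeffs[0]]                              # keep the approximation layer
        n = len(signal)
        for d in coeffs[1:]:                           # one threshold per detail layer
            sigma = np.median(np.abs(d)) / 0.6745      # robust per-layer noise estimate
            thr = sigma * np.sqrt(2.0 * np.log(n))     # universal threshold (stand-in for SURE)
            out.append(pywt.threshold(d, thr, mode="soft"))
        return pywt.waverec(out, wavelet)[:n]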
Maulidiani; Rudiyanto; Abas, Faridah; Ismail, Intan Safinar; Lajis, Nordin H
2018-06-01
The optimization process is an important aspect of natural product extraction. Herein, an alternative approach to extraction optimization is proposed, namely Generalized Likelihood Uncertainty Estimation (GLUE). The approach combines Latin hypercube sampling, the feasible ranges of the independent variables, Monte Carlo simulation, and threshold criteria on the response variables. The GLUE method is tested on three different techniques, including ultrasound-, microwave-, and supercritical CO2-assisted extraction, using data from previously published reports. The study found that this method can: provide more information on the combined effects of the independent variables on the response variables through dotty plots; deal with an unlimited number of independent and response variables; consider combined multiple threshold criteria, which are subjective and depend on the target of the investigation for the response variables; and provide a range of values, with their distribution, for the optimization. Copyright © 2018 Elsevier Ltd. All rights reserved.
Cluster-based analysis improves predictive validity of spike-triggered receptive field estimates
Malone, Brian J.
2017-01-01
Spectrotemporal receptive field (STRF) characterization is a central goal of auditory physiology. STRFs are often approximated by the spike-triggered average (STA), which reflects the average stimulus preceding a spike. In many cases, the raw STA is subjected to a threshold defined by gain values expected by chance. However, such correction methods have not been universally adopted, and the consequences of specific gain-thresholding approaches have not been investigated systematically. Here, we evaluate two classes of statistical correction techniques, using the resulting STRF estimates to predict responses to a novel validation stimulus. The first, more traditional technique eliminated STRF pixels (time-frequency bins) with gain values expected by chance. This correction method yielded significant increases in prediction accuracy, including when the threshold setting was optimized for each unit. The second technique was a two-step thresholding procedure wherein clusters of contiguous pixels surviving an initial gain threshold were then subjected to a cluster mass threshold based on summed pixel values. This approach significantly improved upon even the best gain-thresholding techniques. Additional analyses suggested that allowing threshold settings to vary independently for excitatory and inhibitory subfields of the STRF resulted in only marginal additional gains, at best. In summary, augmenting reverse correlation techniques with principled statistical correction choices increased prediction accuracy by over 80% for multi-unit STRFs and by over 40% for single-unit STRFs, furthering the interpretational relevance of the recovered spectrotemporal filters for auditory systems analysis. PMID:28877194
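The two-step procedure can be sketched with connected-component labelling: discard pixels below the chance-level gain threshold, then keep only contiguous clusters whose summed absolute gain exceeds the cluster-mass threshold. In practice both threshold values would be derived from a null distribution; here they are simply inputs to the sketch.

    import numpy as np
    from scipy import ndimage

    def cluster_mass_threshold(sta, gain_thresh, mass_thresh):
        mask = np.abs(sta) >= gain_thresh              # step 1: pixel-wise gain threshold
        labels, n_clusters = ndimage.label(mask)       # contiguous time-frequency clusters
        out = np.zeros_like(sta)
        for k in range(1, n_clusters + 1):             # step 2: cluster-mass threshold
            cluster = labels == k
            if np.abs(sta[cluster]).sum() >= mass_thresh:
                out[cluster] = sta[cluster]
        return out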
Ramírez-Vélez, Robinson; Correa-Bautista, Jorge Enrique; Martínez-Torres, Javier; González-Ruíz, Katherine; González-Jiménez, Emilio; Schmidt-RioValle, Jacqueline; Garcia-Hermoso, Antonio
2016-01-01
This study aimed to determine thresholds for percentage of body fat (BF%) corresponding to the cut-off values for overweight/obesity as recommended by the International Obesity Task Force (IOTF), using two bioelectrical impedance analyzers (BIA), and described the likelihood of increased cardiometabolic risk in our cohort defined by the IOTF and BF% status. Participants included 1165 children and adolescents (54.9% girls) from Bogotá (Colombia). Body mass index (BMI) was calculated from height and weight. BF% of each youth was assessed first using the Tanita BC-418® followed by a Tanita BF-689®. The sensitivity and specificity of both devices and their ability to correctly classify children as overweight/obesity (≥2 standard deviation), as defined by IOTF, was investigated using receiver operating characteristic (ROC) by sex and age groups (9–11, 12–14, and 13–17 years old); Area under curve (AUC) values were also reported. For girls, the optimal BF% threshold for classifying into overweight/obesity was found to be between 25.2 and 28.5 (AUC = 0.91–0.97) and 23.9 to 26.6 (AUC = 0.90–0.99) for Tanita BC-418® and Tanita BF-689®, respectively. For boys, the optimal threshold was between 16.5 and 21.1 (AUC = 0.93–0.96) and 15.8 to 20.6 (AUC = 0.92–0.94) by Tanita BC-418® and Tanita BF-689®, respectively. All AUC values for ROC curves were statistically significant and there were no differences between AUC values measured by both BIA devices. The BF% values associated with the IOTF-recommended BMI cut-off for overweight/obesity may require age- and sex-specific threshold values in Colombian children and adolescents aged 9–17 years and could be used as a surrogate method to identify individuals at risk of excess adiposity. PMID:27782039
Recent Results on "Approximations to Optimal Alarm Systems for Anomaly Detection"
NASA Technical Reports Server (NTRS)
Martin, Rodney Alexander
2009-01-01
An optimal alarm system and its approximations may use Kalman filtering for univariate linear dynamic systems driven by Gaussian noise to provide a layer of predictive capability. Predicted Kalman filter future process values and a fixed critical threshold can be used to construct a candidate level-crossing event over a predetermined prediction window. An optimal alarm system can be designed to elicit the fewest false alarms for a fixed detection probability in this particular scenario.
Zhao, Yanfeng; Li, Xiaolu; Wang, Xiaoyi; Lin, Meng; Zhao, Xinming; Luo, Dehong; Li, Jianying
2017-01-01
Background: To investigate the value of single-source dual-energy spectral CT imaging in improving the accuracy of preoperative diagnosis of lymph node metastasis of thyroid carcinoma. Methods: Thirty-four thyroid carcinoma patients were enrolled and received spectral CT scanning before thyroidectomy and cervical lymph node dissection surgery. Iodine-based material decomposition (MD) images and 101 sets of monochromatic images from 40 to 140 keV were reconstructed after the CT scans. The iodine concentration (IC) of each lymph node was measured on the MD images and normalized to that of the common carotid artery to obtain the normalized iodine concentration (NIC). The CT number of lymph nodes as a function of photon energy was measured on the 101 sets of images to generate a spectral HU curve and to calculate its slope λHU. The measurements for the metastatic and non-metastatic lymph nodes were statistically compared, and receiver operating characteristic (ROC) curves were used to determine the optimal thresholds of these measurements for diagnosing lymph node metastasis. Results: There were 136 pathologically confirmed lymph nodes. Among them, 102 (75%) were metastatic and 34 (25%) were non-metastatic. The IC, NIC and slope λHU of the metastatic lymph nodes were 3.93±1.58 mg/mL, 0.70±0.55 and 4.63±1.91, respectively. These values were statistically higher than the respective values of 1.77±0.71 mg/mL, 0.29±0.16 and 2.19±0.91 for the non-metastatic lymph nodes (all P<0.001). ROC analysis determined the optimal diagnostic threshold for IC as 2.56 mg/mL, with sensitivity, specificity and accuracy of 83.3%, 91.2% and 85.3%, respectively. The optimal threshold for NIC was 0.289, with sensitivity, specificity and accuracy of 96.1%, 76.5% and 91.2%, respectively. The optimal threshold for the spectral curve slope λHU was 2.692, with sensitivity, specificity and accuracy of 88.2%, 82.4% and 86.8%, respectively. Conclusions: The measurements obtained with dual-energy spectral CT improve the sensitivity and accuracy of preoperative diagnosis of lymph node metastasis in thyroid carcinoma. PMID:29268547
Contraction Options and Optimal Multiple-Stopping in Spectrally Negative Lévy Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yamazaki, Kazutoshi, E-mail: kyamazak@kansai-u.ac.jp
This paper studies the optimal multiple-stopping problem arising in the context of the timing option to withdraw from a project in stages. The profits are driven by a general spectrally negative Lévy process. This allows the model to incorporate sudden declines of the project values, generalizing greatly the classical geometric Brownian motion model. We solve the one-stage case as well as the extension to the multiple-stage case. The optimal stopping times are of threshold-type and the value function admits an expression in terms of the scale function. A series of numerical experiments are conducted to verify the optimality and to evaluate the efficiency of the algorithm.
Higher criticism thresholding: Optimal feature selection when useful features are rare and weak.
Donoho, David; Jin, Jiashun
2008-09-30
In important application fields today (genomics and proteomics are examples), selecting a small subset of useful features is crucial for success of Linear Classification Analysis. We study feature selection by thresholding of feature Z-scores and introduce a principle of threshold selection, based on the notion of higher criticism (HC). For i = 1, 2, ..., p, let π_i denote the two-sided P-value associated with the ith feature Z-score and π_(i) denote the ith order statistic of the collection of P-values. The HC threshold is the absolute Z-score corresponding to the P-value maximizing the HC objective (i/p - π_(i)) / sqrt{(i/p)(1 - i/p)}. We consider a rare/weak (RW) feature model, where the fraction of useful features is small and the useful features are each too weak to be of much use on their own. HC thresholding (HCT) has interesting behavior in this setting, with an intimate link between maximizing the HC objective and minimizing the error rate of the designed classifier, and very different behavior from popular threshold selection procedures such as false discovery rate thresholding (FDRT). In the most challenging RW settings, HCT uses an unconventionally low threshold; this keeps the missed-feature detection rate under better control than FDRT and yields a classifier with improved misclassification performance. Replacing cross-validated threshold selection in the popular Shrunken Centroid classifier with the computationally less expensive and simpler HCT reduces the variance of the selected threshold and the error rate of the constructed classifier. Results on standard real datasets and in asymptotic theory confirm the advantages of HCT.
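Since the HC objective is stated explicitly, it translates directly into a short routine: compute two-sided P-values from the feature Z-scores, maximize the objective over the sorted P-values, and return the corresponding absolute Z-score as the threshold. The sketch below is a straightforward reading of that definition; the optional restriction of the search to small i/p used in some variants is omitted.

    import numpy as np
    from scipy.stats import norm

    def higher_criticism_threshold(z_scores):
        z = np.asarray(z_scores, dtype=float)
        p = z.size
        pvals = 2.0 * norm.sf(np.abs(z))               # two-sided P-values
        order = np.argsort(pvals)
        pi_sorted = pvals[order]                       # order statistics pi_(i)
        frac = np.arange(1, p + 1) / p                 # i/p
        hc = np.full(p, -np.inf)
        valid = frac < 1.0                             # guard the i/p = 1 endpoint
        hc[valid] = (frac[valid] - pi_sorted[valid]) / np.sqrt(frac[valid] * (1.0 - frac[valid]))
        i_star = int(np.argmax(hc))
        return float(np.abs(z[order][i_star]))         # HC threshold on |Z|

    # keep = np.abs(z_scores) >= higher_criticism_threshold(z_scores)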
Zheng, Hong; Clausen, Morten Rahr; Dalsgaard, Trine Kastrup; Mortensen, Grith; Bertram, Hanne Christine
2013-08-06
We describe a time-saving protocol for the processing of LC-MS-based metabolomics data by optimizing parameter settings in XCMS and threshold settings for removing noisy and low-intensity peaks using design of experiment (DoE) approaches including Plackett-Burman design (PBD) for screening and central composite design (CCD) for optimization. A reliability index, which is based on evaluation of the linear response to a dilution series, was used as a parameter for the assessment of data quality. After identifying the significant parameters in the XCMS software by PBD, CCD was applied to determine their values by maximizing the reliability and group indexes. Optimal settings by DoE resulted in improvements of 19.4% and 54.7% in the reliability index for a standard mixture and human urine, respectively, as compared with the default setting, and a total of 38 h was required to complete the optimization. Moreover, threshold settings were optimized by using CCD for further improvement. The approach combining optimal parameter setting and the threshold method improved the reliability index about 9.5 times for a standards mixture and 14.5 times for human urine data, which required a total of 41 h. Validation results also showed improvements in the reliability index of about 5-7 times even for urine samples from different subjects. It is concluded that the proposed methodology can be used as a time-saving approach for improving the processing of LC-MS-based metabolomics data.
Prefixed-threshold real-time selection method in free-space quantum key distribution
NASA Astrophysics Data System (ADS)
Wang, Wenyuan; Xu, Feihu; Lo, Hoi-Kwong
2018-03-01
Free-space quantum key distribution allows two parties to share a random key with unconditional security, between ground stations, between mobile platforms, and even in satellite-ground quantum communications. Atmospheric turbulence causes fluctuations in transmittance, which further affect the quantum bit error rate and the secure key rate. Previous postselection methods to combat atmospheric turbulence require a threshold value determined after all quantum transmission. In contrast, here we propose a method where we predetermine the optimal threshold value even before quantum transmission. Therefore, the receiver can discard useless data immediately, thus greatly reducing data storage requirements and computing resources. Furthermore, our method can be applied to a variety of protocols, including, for example, not only single-photon BB84 but also asymptotic and finite-size decoy-state BB84, which can greatly increase its practicality.
NASA Astrophysics Data System (ADS)
Nlandu Kamavuako, Ernest; Scheme, Erik Justin; Englehart, Kevin Brian
2016-08-01
Objective. For over two decades, Hudgins' set of time domain features has been extensively applied for classification of hand motions. The calculation of the slope sign change and zero crossing features uses a threshold to attenuate the effect of background noise. However, there is no consensus on the optimum threshold value. In this study, we investigate for the first time the effect of threshold selection on the feature space and classification accuracy using multiple datasets. Approach. In the first part, four datasets were used, and classification error (CE), separability index, scatter matrix separability criterion, and cardinality of the features were used as performance measures. In the second part, data for eight classes were collected from eight able-bodied subjects on two separate days, with two days in between. The threshold for each feature was computed as a factor (R = 0:0.01:4) times the average root mean square of the data during rest. For each day, we quantified the CE for R = 0 (CEr0) and the minimum error (CEbest). Moreover, a cross-day threshold validation was applied where, for example, the CE of day two (CEodt) is computed based on the optimum threshold from day one, and vice versa. Finally, we quantified the effect of the threshold when using training data from one day and test data from the other. Main results. All performance metrics generally degraded with increasing threshold values. On average, CEbest (5.26 ± 2.42%) was significantly better than CEr0 (7.51 ± 2.41%, P = 0.018) and CEodt (7.50 ± 2.50%, P = 0.021). During the two-fold validation between days, CEbest performed similarly to CEr0. Interestingly, when the threshold values optimized per subject from day one and day two, respectively, were used for cross-day classification, the performance decreased. Significance. We have demonstrated that the threshold value has a strong impact on the feature space and that an optimum threshold can be quantified. However, this optimum threshold is highly data- and subject-driven and thus does not generalize well. There is strong evidence that R = 0 provides a good trade-off between system performance and generalization. These findings are important for the practical use of pattern recognition based myoelectric control.
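For reference, the rest-referenced threshold and the two threshold-dependent Hudgins features can be sketched with their standard definitions; windowing details and any deviations from the study's exact implementation are assumptions.

    import numpy as np

    def rest_threshold(rest_emg, R):
        # Threshold = R times the average root mean square of the rest data.
        rest = np.asarray(rest_emg, dtype=float)
        return R * np.sqrt(np.mean(rest ** 2))

    def zero_crossings(x, thr):
        # Count sign changes whose amplitude step also exceeds the threshold.
        x = np.asarray(x, dtype=float)
        return int(np.sum((x[:-1] * x[1:] < 0) & (np.abs(x[:-1] - x[1:]) >= thr)))

    def slope_sign_changes(x, thr):
        # Count slope reversals where at least one adjacent difference exceeds the threshold.
        x = np.asarray(x, dtype=float)
        d_prev, d_next = x[1:-1] - x[:-2], x[1:-1] - x[2:]
        return int(np.sum((d_prev * d_next > 0) &
                          ((np.abs(d_prev) >= thr) | (np.abs(d_next) >= thr))))

    # thr = rest_threshold(rest_window, R=1.0)   # R was swept from 0 to 4 in the study
    # features = [zero_crossings(emg_window, thr), slope_sign_changes(emg_window, thr)]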
Fransz, Duncan P; Huurnink, Arnold; de Boode, Vosse A; Kingma, Idsart; van Dieën, Jaap H
2016-02-08
We aimed to provide insight into how threshold selection affects time to stabilization (TTS) and its reliability, in order to support the selection of methods to determine TTS. Eighty-two elite youth soccer players performed six single-leg drop jump landings. The TTS was calculated based on four processed signals: the raw ground reaction force (GRF) signal (RAW), a moving root mean square window (RMS), a sequential average (SA) or an unbounded third order polynomial fit (TOP). For each trial and processing method, a wide range of thresholds was applied. Per threshold, the reliability of the TTS was assessed through intra-class correlation coefficients (ICCs) for the vertical (V), anteroposterior (AP) and mediolateral (ML) directions of force. Low thresholds resulted in a sharp increase in TTS values and in the percentage of trials in which the TTS exceeded the trial duration. The TTS and ICC were essentially similar for RAW and RMS in all directions; ICCs were mostly 'insufficient' (<0.4) to 'fair' (0.4-0.6) for the entire range of thresholds. The SA signals resulted in the most stable ICC values across thresholds, being 'substantial' (>0.8) for V, and 'moderate' (0.6-0.8) for AP and ML. The ICCs for TOP were 'substantial' for V, 'moderate' for AP, and 'fair' for ML. The present findings did not reveal an optimal threshold to assess TTS in elite youth soccer players following a single-leg drop jump landing. Irrespective of threshold selection, the SA and TOP methods yielded sufficiently reliable TTS values, while for RAW and RMS the reliability was insufficient to differentiate between players. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Oby, Emily R.; Perel, Sagi; Sadtler, Patrick T.; Ruff, Douglas A.; Mischel, Jessica L.; Montez, David F.; Cohen, Marlene R.; Batista, Aaron P.; Chase, Steven M.
2016-06-01
Objective. A traditional goal of neural recording with extracellular electrodes is to isolate action potential waveforms of an individual neuron. Recently, in brain-computer interfaces (BCIs), it has been recognized that threshold crossing events of the voltage waveform also convey rich information. To date, the threshold for detecting threshold crossings has been selected to preserve single-neuron isolation. However, the optimal threshold for single-neuron identification is not necessarily the optimal threshold for information extraction. Here we introduce a procedure to determine the best threshold for extracting information from extracellular recordings. We apply this procedure in two distinct contexts: the encoding of kinematic parameters from neural activity in primary motor cortex (M1), and visual stimulus parameters from neural activity in primary visual cortex (V1). Approach. We record extracellularly from multi-electrode arrays implanted in M1 or V1 in monkeys. Then, we systematically sweep the voltage detection threshold and quantify the information conveyed by the corresponding threshold crossings. Main Results. The optimal threshold depends on the desired information. In M1, velocity is optimally encoded at higher thresholds than speed; in both cases the optimal thresholds are lower than are typically used in BCI applications. In V1, information about the orientation of a visual stimulus is optimally encoded at higher thresholds than is visual contrast. A conceptual model explains these results as a consequence of cortical topography. Significance. How neural signals are processed impacts the information that can be extracted from them. Both the type and quality of information contained in threshold crossings depend on the threshold setting. There is more information available in these signals than is typically extracted. Adjusting the detection threshold to the parameter of interest in a BCI context should improve our ability to decode motor intent, and thus enhance BCI control. Further, by sweeping the detection threshold, one can gain insights into the topographic organization of the nearby neural tissue.
Helgeson, Melvin D; Kang, Daniel G; Lehman, Ronald A; Dmitriev, Anton E; Luhmann, Scott J
2013-08-01
There is currently no reliable technique for intraoperative assessment of pedicle screw fixation strength and optimal screw size. Several studies have evaluated pedicle screw insertional torque (IT) and its direct correlation with pullout strength. However, there is limited clinical application with pedicle screw IT as it must be measured during screw placement and rarely causes the spine surgeon to change screw size. To date, no study has evaluated tapping IT, which precedes screw insertion, and its ability to predict pedicle screw pullout strength. The objective of this study was to investigate tapping IT and its ability to predict pedicle screw pullout strength and optimal screw size. In vitro human cadaveric biomechanical analysis. Twenty fresh-frozen human cadaveric thoracic vertebral levels were prepared and dual-energy radiographic absorptiometry scanned for bone mineral density (BMD). All specimens were osteoporotic with a mean BMD of 0.60 ± 0.07 g/cm(2). Five specimens (n=10) were used to perform a pilot study, as there were no previously established values for optimal tapping IT. Each pedicle during the pilot study was measured using a digital caliper as well as computed tomography measurements, and the optimal screw size was determined to be equal to or the first size smaller than the pedicle diameter. The optimal tap size was then selected as the tap diameter 1 mm smaller than the optimal screw size. During optimal tap size insertion, all peak tapping IT values were found to be between 2 in-lbs and 3 in-lbs. Therefore, the threshold tapping IT value for optimal pedicle screw and tap size was determined to be 2.5 in-lbs, and a comparison tapping IT value of 1.5 in-lbs was selected. Next, 15 test specimens (n=30) were measured with digital calipers, probed, tapped, and instrumented using a paired comparison between the two threshold tapping IT values (Group 1: 1.5 in-lbs; Group 2: 2.5 in-lbs), randomly assigned to the left or right pedicle on each specimen. Each pedicle was incrementally tapped to increasing size (3.75, 4.00, 4.50, and 5.50 mm) until the threshold value was reached based on the assigned group. Pedicle screw size was determined by adding 1 mm to the tap size that crossed the threshold torque value. Torque measurements were recorded with each revolution during tap and pedicle screw insertion. Each specimen was then individually potted and pedicle screws pulled out "in-line" with the screw axis at a rate of 0.25 mm/sec. Peak pullout strength (POS) was measured in Newtons (N). The peak tapping IT was significantly increased (50%) in Group 2 (3.23 ± 0.65 in-lbs) compared with Group 1 (2.15 ± 0.56 in-lbs) (p=.0005). The peak screw IT was also significantly increased (19%) in Group 2 (8.99 ± 2.27 in-lbs) compared with Group 1 (7.52 ± 2.96 in-lbs) (p=.02). The pedicle screw pullout strength was also significantly increased (23%) in Group 2 (877.9 ± 235.2 N) compared with Group 1 (712.3 ± 223.1 N) (p=.017). The mean pedicle screw diameter was significantly increased in Group 2 (5.70 ± 1.05 mm) compared with Group 1 (5.00 ± 0.80 mm) (p=.0002). There was also an increased rate of optimal pedicle screw size selection in Group 2 with 9 of 15 (60%) pedicle screws compared with Group 1 with 4 of 15 (26.7%) pedicle screws within 1 mm of the measured pedicle width. There was a moderate correlation for tapping IT with both screw IT (r=0.54; p=.002) and pedicle screw POS (r=0.55; p=.002). 
Our findings suggest that tapping IT directly correlates with pedicle screw IT, pedicle screw pullout strength, and optimal pedicle screw size. Therefore, tapping IT may be used during thoracic pedicle screw instrumentation as an adjunct to preoperative imaging and clinical experience to maximize fixation strength and optimize pedicle "fit and fill" with the largest screw possible. However, further prospective, in vivo studies are necessary to evaluate the intraoperative use of tapping IT to predict screw loosening/complications. Published by Elsevier Inc.
Image quality, threshold contrast and mean glandular dose in CR mammography
NASA Astrophysics Data System (ADS)
Jakubiak, R. R.; Gamba, H. R.; Neves, E. B.; Peixoto, J. E.
2013-09-01
In many countries, computed radiography (CR) systems represent the majority of equipment used in digital mammography. This study presents a method for optimizing image quality and dose in CR mammography of patients with breast thicknesses between 45 and 75 mm. Initially, clinical images of 67 patients (group 1) were analyzed by three experienced radiologists, reporting about anatomical structures, noise and contrast in low and high pixel value areas, and image sharpness and contrast. Exposure parameters (kV, mAs and target/filter combination) used in the examinations of these patients were reproduced to determine the contrast-to-noise ratio (CNR) and mean glandular dose (MGD). The parameters were also used to radiograph a CDMAM (version 3.4) phantom (Artinis Medical Systems, The Netherlands) for image threshold contrast evaluation. After that, different breast thicknesses were simulated with polymethylmethacrylate layers and various sets of exposure parameters were used in order to determine optimal radiographic parameters. For each simulated breast thickness, optimal beam quality was defined as giving a target CNR to reach the threshold contrast of CDMAM images for acceptable MGD. These results were used for adjustments in the automatic exposure control (AEC) by the maintenance team. Using optimized exposure parameters, clinical images of 63 patients (group 2) were evaluated as described above. Threshold contrast, CNR and MGD for such exposure parameters were also determined. Results showed that the proposed optimization method was effective for all breast thicknesses studied in phantoms. The best result was found for breasts of 75 mm. While in group 1 there was no detection of the 0.1 mm critical diameter detail with threshold contrast below 23%, after the optimization, detection occurred in 47.6% of the images. There was also an average MGD reduction of 7.5%. The clinical image quality criteria were attended in 91.7% for all breast thicknesses evaluated in both patient groups. Finally, this study also concluded that the use of the AEC of the x-ray unit based on the constant dose to the detector may bring some difficulties to CR systems to operate under optimal conditions. More studies must be performed, so that the compatibility between systems and optimization methodologies can be evaluated, as well as this optimization method. Most methods are developed for phantoms, so comparative studies including clinical images must be developed.
De Cloedt, Lise; Emeriaud, Guillaume; Lefebvre, Émilie; Kleiber, Niina; Robitaille, Nancy; Jarlot, Christine; Lacroix, Jacques; Gauvin, France
2018-04-01
The incidence of transfusion-associated circulatory overload (TACO) is not well known in children, especially in pediatric intensive care unit (PICU) patients. All consecutive patients admitted over 1 year to the PICU of CHU Sainte-Justine were included after they received their first red blood cell transfusion. TACO was diagnosed using the criteria of the International Society of Blood Transfusion, with two different ways of defining abnormal values: 1) using normal pediatric values published in the Nelson Textbook of Pediatrics and 2) by using the patient as its own control and comparing pre- and posttransfusion values with either 10 or 20% difference threshold. We monitored for TACO up to 24 hours posttransfusion. A total of 136 patients were included. Using the "normal pediatric values" definition, we diagnosed 63, 88, and 104 patients with TACO at 6, 12, and 24 hours posttransfusion, respectively. Using the "10% threshold" definition we detected 4, 15, and 27 TACO cases in the same periods, respectively; using the "20% threshold" definition, the number of TACO cases was 2, 6, and 17, respectively. Chest radiograph was the most frequent missing item, especially at 6 and 12 hours posttransfusion. Overall, the incidence of TACO varied from 1.5% to 76% depending on the definition. A more operational definition of TACO is needed in PICU patients. Using a threshold could be more optimal but more studies are needed to confirm the best threshold. © 2018 AABB.
Bahouth, George; Digges, Kennerly; Schulman, Carl
2012-01-01
This paper presents methods to estimate crash injury risk based on crash characteristics captured by some passenger vehicles equipped with Advanced Automatic Crash Notification technology. The resulting injury risk estimates could be used within an algorithm to optimize rescue care. Regression analysis was applied to the National Automotive Sampling System / Crashworthiness Data System (NASS/CDS) to determine how variations in a specific injury risk threshold would influence the accuracy of predicting crashes with serious injuries. The recommended thresholds for classifying crashes with severe injuries are 0.10 for frontal crashes and 0.05 for side crashes. The regression analysis of NASS/CDS indicates that these thresholds will provide sensitivity above 0.67 while maintaining a positive predictive value in the range of 0.20. PMID:23169132
Adaptive Spot Detection With Optimal Scale Selection in Fluorescence Microscopy Images.
Basset, Antoine; Boulanger, Jérôme; Salamero, Jean; Bouthemy, Patrick; Kervrann, Charles
2015-11-01
Accurately detecting subcellular particles in fluorescence microscopy is of primary interest for further quantitative analysis such as counting, tracking, or classification. Our primary goal is to segment vesicles likely to share nearly the same size in fluorescence microscopy images. Our method termed adaptive thresholding of Laplacian of Gaussian (LoG) images with autoselected scale (ATLAS) automatically selects the optimal scale corresponding to the most frequent spot size in the image. Four criteria are proposed and compared to determine the optimal scale in a scale-space framework. Then, the segmentation stage amounts to thresholding the LoG of the intensity image. In contrast to other methods, the threshold is locally adapted given a probability of false alarm (PFA) specified by the user for the whole set of images to be processed. The local threshold is automatically derived from the PFA value and local image statistics estimated in a window whose size is not a critical parameter. We also propose a new data set for benchmarking, consisting of six collections of one hundred images each, which exploits backgrounds extracted from real microscopy images. We have carried out an extensive comparative evaluation on several data sets with ground-truth, which demonstrates that ATLAS outperforms existing methods. ATLAS does not need any fine parameter tuning and requires very low computation time. Convincing results are also reported on real total internal reflection fluorescence microscopy images.
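The two ATLAS ingredients named above, a Laplacian-of-Gaussian response at a selected scale and a local threshold derived from a user-specified probability of false alarm, can be sketched in a few lines. The sketch below assumes a Gaussian background model and a fixed estimation window; the scale-selection criteria and the exact statistics used in ATLAS are not reproduced here.

import numpy as np
from scipy import ndimage
from scipy.stats import norm

def log_spot_mask(image, scale, pfa=1e-3, win=31):
    """Segment bright spots by locally thresholding a Laplacian-of-Gaussian image.

    scale : Gaussian standard deviation (ideally the most frequent spot size)
    pfa   : probability of false alarm specified by the user, converted to a z-score
    win   : side of the square window used to estimate local mean and std
    """
    # negated LoG so that bright blobs give positive responses
    resp = -ndimage.gaussian_laplace(image.astype(float), sigma=scale)
    local_mean = ndimage.uniform_filter(resp, size=win)
    local_sq = ndimage.uniform_filter(resp ** 2, size=win)
    local_std = np.sqrt(np.maximum(local_sq - local_mean ** 2, 1e-12))
    # threshold chosen so that a Gaussian background exceeds it with probability pfa
    z = norm.isf(pfa)
    return resp > local_mean + z * local_std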
NASA Astrophysics Data System (ADS)
Mesoloras, Geraldine
Yttrium-90 (90Y) microsphere therapy is being utilized as a treatment option for patients with primary and metastatic liver cancer due to its ability to target tumors within the liver. The success of this treatment is dependent on many factors, including the extent and type of disease and the nature of prior treatments received. Metabolic activity, as determined by PET imaging, may correlate with the number of viable cancer cells and reflect changes in viable cancer cell volume. However, contouring of PET images by hand is labor intensive and introduces an element of irreproducibility into the determination of functional target/tumor volume (FTV). A computer-assisted method to aid in the automatic contouring of FTV has the potential to substantially improve treatment individualization and outcome assessment. Commercial software to determine FTV in FDG-avid primary and metastatic liver tumors has been evaluated and optimized. Volumes determined using the automated technique were compared to those from manually drawn contours identified using the same cutoff in the standard uptake value (SUV). The reproducibility of FTV is improved through the introduction of an optimal threshold value determined from phantom experiments. Application of the optimal threshold value from the phantom experiments to patient scans was in good agreement with hand-drawn determinations of the FTV. It is concluded that computer-assisted contouring of the FTV for primary and metastatic liver tumors improves reproducibility and increases accuracy, especially when combined with the selection of an optimal SUV threshold determined from phantom experiments. A method to link the pre-treatment assessment of functional (PET based) and anatomical (CT based) parameters to post-treatment survival and time to progression was evaluated in 22 patients with colorectal cancer liver metastases treated using 90Y microspheres and chemotherapy. The values for pre-treatment parameters that were the best predictors of response were determined for FTV, anatomical tumor volume, total lesion glycolysis, and the tumor marker, CEA. Of the parameters considered, the best predictors of response were found to be pre-treatment FTV ≤153 cm3, ATV ≤163 cm3, TLG ≤144 g in the chemo-SIRT treated field, and CEA ≤11.6 ng/mL.
Blood pressure-to-height ratio for screening prehypertension and hypertension in Chinese children.
Dong, B; Wang, Z; Wang, H-J; Ma, J
2015-10-01
The diagnosis of hypertension in children is complicated because of the multiple age-, sex- and height-specific thresholds. To simplify the process of diagnosis, blood pressure-to-height ratio (BPHR) was employed in this study. Data were obtained from a Chinese national survey conducted in 2010, and 197 191 children aged 7-17 years were included. High normal and elevated blood pressure (BP) were defined according to the National High Blood Pressure Education Program (NHBPEP) Working Group definition. The optimal thresholds were selected by Youden's index. Sensitivity, specificity, negative predictive value (NPV), positive predictive value (PPV) and area under the curve (AUC) were assessed for the performance of these thresholds. The systolic and diastolic BPHR thresholds for identifying high normal BP were 0.84/0.55, 0.78/0.50 and 0.75/0.46 for children aged 7-8 years, 9-11 years and 12-17 years, respectively. The corresponding thresholds for identifying elevated BP were 0.87/0.57, 0.81/0.53 and 0.76/0.49, respectively. These proposed thresholds revealed high sensitivity and NPVs, all above 0.96, moderate to high specificity and AUCs, and low PPVs. Our finding suggested the proposed BPHR thresholds were accurate for identifying children without high normal or elevated BP, and could be employed to simplify the procedure of screening prehypertension and hypertension in children.
Leong, Tora; Rehman, Michaela B.; Pastormerlo, Luigi Emilio; Harrell, Frank E.; Coats, Andrew J. S.; Francis, Darrel P.
2014-01-01
Background Clinicians are sometimes advised to make decisions using thresholds in measured variables, derived from prognostic studies. Objectives We studied why there are conflicting apparently-optimal prognostic thresholds, for example in exercise peak oxygen uptake (pVO2), ejection fraction (EF), and Brain Natriuretic Peptide (BNP) in heart failure (HF). Data Sources and Eligibility Criteria Studies testing pVO2, EF or BNP prognostic thresholds in heart failure, published between 1990 and 2010, listed on Pubmed. Methods First, we examined studies testing pVO2, EF or BNP prognostic thresholds. Second, we created repeated simulations of 1500 patients to identify whether an apparently-optimal prognostic threshold indicates step change in risk. Results 33 studies (8946 patients) tested a pVO2 threshold. 18 found it prognostically significant: the actual reported threshold ranged widely (10–18 ml/kg/min) but was overwhelmingly controlled by the individual study population's mean pVO2 (r = 0.86, p<0.00001). In contrast, the 15 negative publications were testing thresholds 199% further from their means (p = 0.0001). Likewise, of 35 EF studies (10220 patients), the thresholds in the 22 positive reports were strongly determined by study means (r = 0.90, p<0.0001). Similarly, in the 19 positives of 20 BNP studies (9725 patients): r = 0.86 (p<0.0001). Second, survival simulations always discovered a “most significant” threshold, even when there was definitely no step change in mortality. With linear increase in risk, the apparently-optimal threshold was always near the sample mean (r = 0.99, p<0.001). Limitations This study cannot report the best threshold for any of these variables; instead it explains how common clinical research procedures routinely produce false thresholds. Key Findings First, shifting (and/or disappearance) of an apparently-optimal prognostic threshold is strongly determined by studies' average pVO2, EF or BNP. Second, apparently-optimal thresholds always appear, even with no step in prognosis. Conclusions Emphatic therapeutic guidance based on thresholds from observational studies may be ill-founded. We should not assume that optimal thresholds, or any thresholds, exist. PMID:24475020
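The second part of the argument, that scanning candidate cutpoints always yields a "most significant" threshold near the sample mean even when risk rises smoothly, is easy to reproduce. The sketch below is not the authors' simulation: the marker distribution, the linear risk model and the chi-square test are assumptions made for illustration only.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def apparent_optimal_threshold(n=1500):
    """Simulate a marker with a purely linear (step-free) relation to risk and
    return the dichotomization point with the smallest chi-square p-value."""
    x = rng.normal(15, 4, n)                        # pVO2-like marker
    p_event = np.clip(0.6 - 0.02 * x, 0.05, 0.95)   # risk falls linearly with x
    event = rng.random(n) < p_event
    best_p, best_thr = 1.0, np.nan
    for thr in np.quantile(x, np.linspace(0.1, 0.9, 81)):
        lo, hi = event[x < thr], event[x >= thr]
        table = [[lo.sum(), lo.size - lo.sum()],
                 [hi.sum(), hi.size - hi.sum()]]
        _, p, _, _ = stats.chi2_contingency(table)
        if p < best_p:
            best_p, best_thr = p, thr
    return best_thr, x.mean()

# Repeated calls give "optimal" thresholds clustered near the sample mean,
# even though no step change in risk exists anywhere.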
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hintermueller, M., E-mail: hint@math.hu-berlin.de; Kao, C.-Y., E-mail: Ckao@claremontmckenna.edu; Laurain, A., E-mail: laurain@math.hu-berlin.de
2012-02-15
This paper focuses on the study of a linear eigenvalue problem with indefinite weight and Robin type boundary conditions. We investigate the minimization of the positive principal eigenvalue under the constraint that the absolute value of the weight is bounded and the total weight is a fixed negative constant. Biologically, this minimization problem is motivated by the question of determining the optimal spatial arrangement of favorable and unfavorable regions for a species to survive. For rectangular domains with Neumann boundary condition, it is known that there exists a threshold value such that if the total weight is below this threshold value then the optimal favorable region is like a section of a disk at one of the four corners; otherwise, the optimal favorable region is a strip attached to the shorter side of the rectangle. Here, we investigate the same problem with mixed Robin-Neumann type boundary conditions and study how this boundary condition affects the optimal spatial arrangement.
NASA Astrophysics Data System (ADS)
Bai, F.; Gagar, D.; Foote, P.; Zhao, Y.
2017-02-01
Acoustic Emission (AE) monitoring can be used to detect the presence of damage as well as determine its location in Structural Health Monitoring (SHM) applications. Information on the time difference of the signal generated by the damage event arriving at different sensors in an array is essential in performing localisation. Currently, this is determined using a fixed threshold which is particularly prone to errors when not set to optimal values. This paper presents three new methods for determining the onset of AE signals without the need for a predetermined threshold. The performance of the techniques is evaluated using AE signals generated during fatigue crack growth and compared to the established Akaike Information Criterion (AIC) and fixed threshold methods. It was found that the 1D location accuracy of the new methods was within the range of < 1 - 7.1 % of the monitored region compared to 2.7% for the AIC method and a range of 1.8-9.4% for the conventional Fixed Threshold method at different threshold levels.
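For reference, the established AIC picker mentioned above locates the onset as the minimum of an Akaike-style criterion computed over all candidate split points of the waveform. A common formulation is sketched below; the variable names and the brute-force loop are illustrative and not taken from the paper.

import numpy as np

def aic_onset(signal):
    """AIC onset picker: AIC(k) = k*log(var(x[:k])) + (N-k-1)*log(var(x[k:]));
    the onset is the sample index minimizing AIC over all split points k."""
    x = np.asarray(signal, dtype=float)
    n = x.size
    aic = np.full(n, np.inf)
    for k in range(1, n - 1):
        v1, v2 = np.var(x[:k]), np.var(x[k:])
        if v1 > 0.0 and v2 > 0.0:
            aic[k] = k * np.log(v1) + (n - k - 1) * np.log(v2)
    return int(np.argmin(aic))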
Akagunduz, Ozlem Ozkaya; Savas, Recep; Yalman, Deniz; Kocacelebi, Kenan; Esassolak, Mustafa
2015-11-01
To evaluate the predictive value of adaptive threshold-based metabolic tumor volume (MTV), maximum standardized uptake value (SUVmax) and maximum lean body mass corrected SUV (SULmax) measured on pretreatment positron emission tomography and computed tomography (PET/CT) imaging in head and neck cancer patients treated with definitive radiotherapy/chemoradiotherapy. Pretreatment PET/CT of the 62 patients with locally advanced head and neck cancer who were treated consecutively between May 2010 and February 2013 were reviewed retrospectively. The maximum FDG uptake of the primary tumor was defined according to SUVmax and SULmax. Multiple threshold levels between 60% and 10% of the SUVmax and SULmax were tested with intervals of 5% to 10% in order to define the most suitable threshold value for the metabolic activity of each patient's tumor (adaptive threshold). MTV was calculated according to this value. We evaluated the relationship of mean values of MTV, SUVmax and SULmax with treatment response, local recurrence, distant metastasis and disease-related death. Receiver-operating characteristic (ROC) curve analysis was done to obtain optimal predictive cut-off values for MTV and SULmax which were found to have a predictive value. Local recurrence-free (LRFS), disease-free (DFS) and overall survival (OS) were examined according to these cut-offs. Forty six patients had complete response, 15 had partial response, and 1 had stable disease 6 weeks after the completion of treatment. Median follow-up of the entire cohort was 18 months. Of 46 complete responders 10 had local recurrence, and of 16 partial or no responders 10 had local progression. Eighteen patients died. Adaptive threshold-based MTV had significant predictive value for treatment response (p=0.011), local recurrence/progression (p=0.050), and disease-related death (p=0.024). SULmax had a predictive value for local recurrence/progression (p=0.030). ROC curves analysis revealed a cut-off value of 14.00 mL for MTV and 10.15 for SULmax. Three-year LRFS and DFS rates were significantly lower in patients with MTV ≥ 14.00 mL (p=0.026, p=0.018 respectively), and SULmax≥10.15 (p=0.017, p=0.022 respectively). SULmax did not have a significant predictive value for OS whereas MTV had (p=0.025). Adaptive threshold-based MTV and SULmax could have a role in predicting local control and survival in head and neck cancer patients. Copyright © 2015 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Gariano, Stefano Luigi; Brunetti, Maria Teresa; Iovine, Giulio; Melillo, Massimo; Peruccacci, Silvia; Terranova, Oreste Giuseppe; Vennari, Carmela; Guzzetti, Fausto
2015-04-01
Prediction of rainfall-induced landslides can rely on empirical rainfall thresholds. These are obtained from the analysis of past rainfall events that have (or have not) resulted in slope failures. Accurate prediction requires reliable thresholds, which need to be validated before their use in operational landslide warning systems. Despite the clear relevance of validation, only a few studies have addressed the problem, and have proposed and tested robust validation procedures. We propose a validation procedure that allows for the definition of optimal thresholds for early warning purposes. The validation is based on contingency table, skill scores, and receiver operating characteristic (ROC) analysis. To establish the optimal threshold, which maximizes the correct landslide predictions and minimizes the incorrect predictions, we propose an index that results from the linear combination of three weighted skill scores. Selection of the optimal threshold depends on the scope and the operational characteristics of the early warning system. The choice is made by selecting appropriately the weights, and by searching for the optimal (maximum) value of the index. We discuss weakness in the validation procedure caused by the inherent lack of information (epistemic uncertainty) on landslide occurrence typical of large study areas. When working at the regional scale, landslides may have occurred and may have not been reported. This results in biases and variations in the contingencies and the skill scores. We introduce two parameters to represent the unknown proportion of rainfall events (above and below the threshold) for which landslides occurred and went unreported. We show that even a very small underestimation in the number of landslides can result in a significant decrease in the performance of a threshold measured by the skill scores. We show that the variations in the skill scores are different for different uncertainty of events above or below the threshold. This has consequences in the ROC analysis. We applied the proposed procedure to a catalogue of rainfall conditions that have resulted in landslides, and to a set of rainfall events that - presumably - have not resulted in landslides, in Sicily, in the period 2002-2012. First, we determined regional event duration-cumulated event (ED) rainfall thresholds for shallow landslide occurrence using 200 rainfall conditions that have resulted in 223 shallow landslides in Sicily in the period 2002-2011. Next, we validated the thresholds using 29 rainfall conditions that have triggered 42 shallow landslides in Sicily in 2012, and 1250 rainfall events that presumably have not resulted in landslides in the same year. We performed a back analysis simulating the use of the thresholds in a hypothetical landslide warning system operating in 2012.
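The threshold-selection step, building a contingency table for every candidate threshold and maximizing a weighted combination of skill scores, can be sketched as follows. The particular skill scores and weights below are placeholders; the study defines its own index and chooses the weights according to the warning system's operational requirements.

import numpy as np

def best_threshold(rain, landslide, thresholds, weights=(1.0, 1.0, 1.0)):
    """Return the rainfall threshold maximizing a weighted skill-score index.

    rain      : rainfall measure per event (e.g. cumulated event rainfall)
    landslide : boolean array, True if the event triggered landslides
    """
    w1, w2, w3 = weights
    best_thr, best_score = None, -np.inf
    for thr in thresholds:
        pred = rain >= thr
        tp = np.sum(pred & landslide)      # correct alarms
        fn = np.sum(~pred & landslide)     # missed landslides
        fp = np.sum(pred & ~landslide)     # false alarms
        tn = np.sum(~pred & ~landslide)    # correct non-alarms
        pod = tp / (tp + fn) if tp + fn else 0.0   # probability of detection
        far = fp / (fp + tp) if fp + tp else 0.0   # false alarm ratio
        pofd = fp / (fp + tn) if fp + tn else 0.0  # probability of false detection
        score = w1 * pod - w2 * far - w3 * pofd    # illustrative linear index
        if score > best_score:
            best_thr, best_score = thr, score
    return best_thr, best_score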
Melamed, N; Hiersch, L; Gabbay-Benziv, R; Bardin, R; Meizner, I; Wiznitzer, A; Yogev, Y
2015-07-01
To assess the accuracy and determine the optimal threshold of sonographic cervical length (CL) for the prediction of preterm delivery (PTD) in women with twin pregnancies presenting with threatened preterm labor (PTL). This was a retrospective study of women with twin pregnancies who presented with threatened PTL and underwent sonographic measurement of CL in a tertiary center. The accuracy of CL in predicting PTD in women with twin pregnancies was compared with that in a control group of women with singleton pregnancies. Overall, 218 women with a twin pregnancy and 1077 women with a singleton pregnancy, who presented with PTL, were included in the study. The performance of CL as a predictive test for PTD was similar in twins and singletons, as reflected by the similar correlation between CL and the examination-to-delivery interval (r, 0.30 vs 0.29; P = 0.9), the similar association of CL with risk of PTD, and the similar areas under the receiver-operating characteristics curves for differing delivery outcomes (range, 0.653-0.724 vs 0.620-0.682, respectively; P = 0.3). The optimal threshold of CL for any given target sensitivity or specificity was lower in twin than in singleton pregnancies. However, in order to achieve a negative predictive value of 95%, a higher threshold (28-30 mm) should be used in twin pregnancies. Using this twin-specific CL threshold, women with twins who present with PTL are more likely to have a positive CL test, and therefore to require subsequent interventions, than are women with singleton pregnancies with PTL (55% vs 4.2%, respectively). In women with PTL, the performance of CL as a test for the prediction of PTD is similar in twin and singleton pregnancies. However, the optimal threshold of CL for the prediction of PTD appears to be higher in twin pregnancies, mainly owing to the higher baseline risk for PTD in these pregnancies. Copyright © 2014 ISUOG. Published by John Wiley & Sons Ltd.
McCaffrey, Nikki; Agar, Meera; Harlum, Janeane; Karnon, Jonathon; Currow, David; Eckermann, Simon
2015-01-01
Introduction Comparing multiple, diverse outcomes with cost-effectiveness analysis (CEA) is important, yet challenging in areas like palliative care where domains are unamenable to integration with survival. Generic multi-attribute utility values exclude important domains and non-health outcomes, while partial analyses, where outcomes are considered separately with their joint relationship under uncertainty ignored, lead to incorrect inference regarding preferred strategies. Objective The objective of this paper is to consider whether such decision making can be better informed with alternative presentation and summary measures, extending methods previously shown to have advantages in multiple strategy comparison. Methods Multiple outcomes CEA of a home-based palliative care model (PEACH) relative to usual care is undertaken in cost disutility (CDU) space and compared with analysis on the cost-effectiveness plane. Summary measures developed for comparing strategies across potential threshold values for multiple outcomes include: expected net loss (ENL) planes quantifying differences in expected net benefit; the ENL contour identifying preferred strategies minimising ENL and their expected value of perfect information; and cost-effectiveness acceptability planes showing probability of strategies minimising ENL. Results Conventional analysis suggests PEACH is cost-effective when the threshold value per additional day at home (λ1) exceeds $1,068, or is dominated by usual care when only the proportion of home deaths is considered. In contrast, neither alternative dominates in CDU space where cost and outcomes are jointly considered, with the optimal strategy depending on threshold values. For example, PEACH minimises ENL when λ1 = $2,000 and λ2 = $2,000 (threshold value for dying at home), with a 51.6% chance of PEACH being cost-effective. Conclusion Comparison in CDU space and associated summary measures have distinct advantages over multiple domain comparisons, aiding transparent and robust joint comparison of costs and multiple effects under uncertainty across potential threshold values for effect, better informing net benefit assessment and related reimbursement and research decisions. PMID:25751629
Diagnostic performance of BMI percentiles to identify adolescents with metabolic syndrome.
Laurson, Kelly R; Welk, Gregory J; Eisenmann, Joey C
2014-02-01
To compare the diagnostic performance of the Centers for Disease Control and Prevention (CDC) and FITNESSGRAM (FGram) BMI standards for quantifying metabolic risk in youth. Adolescents in the NHANES (n = 3385) were measured for anthropometric variables and metabolic risk factors. BMI percentiles were calculated, and youth were categorized by weight status (using CDC and FGram thresholds). Participants were also categorized by presence or absence of metabolic syndrome. The CDC and FGram standards were compared by prevalence of metabolic abnormalities, various diagnostic criteria, and odds of metabolic syndrome. Receiver operating characteristic curves were also created to identify optimal BMI percentiles to detect metabolic syndrome. The prevalence of metabolic syndrome in obese youth was 19% to 35%, compared with <2% in the normal-weight groups. The odds of metabolic syndrome for obese boys and girls were 46 to 67 and 19 to 22 times greater, respectively, than for normal-weight youth. The receiver operating characteristic analyses identified optimal thresholds similar to the CDC standards for boys and the FGram standards for girls. Overall, BMI thresholds were more strongly associated with metabolic syndrome in boys than in girls. Both the CDC and FGram standards are predictive of metabolic syndrome. The diagnostic utility of the CDC thresholds outperformed the FGram values for boys, whereas FGram standards were slightly better thresholds for girls. The use of a common set of thresholds for school and clinical applications would provide advantages for public health and clinical research and practice.
Application of automatic threshold in dynamic target recognition with low contrast
NASA Astrophysics Data System (ADS)
Miao, Hua; Guo, Xiaoming; Chen, Yu
2014-11-01
A hybrid photoelectric joint transform correlator can realize automatic real-time recognition with high precision through the combination of optical and electronic devices. When recognizing targets with low contrast using a photoelectric joint transform correlator, because of differences in attitude, brightness and grayscale between target and template, only four to five frames of dynamic targets can be recognized without any processing. A CCD camera is used to capture the dynamic target images at a speed of 25 frames per second. Automatic thresholding has many advantages, such as fast processing speed, effective shielding of noise interference, enhancement of the diffraction energy of useful information and better preservation of the outlines of target and template, so it plays a very important role in target recognition with the optical correlation method. However, the threshold obtained automatically by the program cannot achieve the best recognition results for dynamic targets, because the outline information is broken to some extent; in most cases the optimal threshold is obtained by manual intervention. To address the characteristics of dynamic targets, an improved automatic threshold procedure was implemented by multiplying the Otsu threshold of target and template by a scale coefficient of the processed image and combining the result with mathematical morphology. The optimal threshold can then be obtained automatically for dynamic low-contrast target images. The recognition rate of dynamic targets is improved through a reduced background noise effect and increased correlation information. A series of dynamic tank images moving at about 70 km/h was adopted as target images. Without any processing, the 1st frame of this series can be correlated only with the 3rd frame. With the Otsu threshold, the 80th frame can be recognized. With automatic threshold processing of the joint images, this number can be increased to 89 frames. Experimental results show that the improved automatic threshold processing has special application value for the recognition of dynamic targets with low contrast.
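A minimal version of the scaled-Otsu idea, multiplying the global Otsu threshold by a scale coefficient and cleaning the result with a morphological opening, is sketched below. The coefficient value, the structuring element and the library calls are assumptions for illustration; the paper's joint-image processing chain is more involved.

import numpy as np
from scipy import ndimage
from skimage.filters import threshold_otsu

def scaled_otsu_mask(gray, coeff=0.8, open_size=3):
    """Binarize a low-contrast frame with a scaled Otsu threshold, then apply
    a morphological opening to suppress small noise while keeping outlines."""
    t = coeff * threshold_otsu(gray)
    mask = gray > t
    structure = np.ones((open_size, open_size), dtype=bool)
    return ndimage.binary_opening(mask, structure=structure)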
The variance needed to accurately describe jump height from vertical ground reaction force data.
Richter, Chris; McGuinness, Kevin; O'Connor, Noel E; Moran, Kieran
2014-12-01
In functional principal component analysis (fPCA) a threshold is chosen to define the number of retained principal components, which corresponds to the amount of preserved information. A variety of thresholds have been used in previous studies and the chosen threshold is often not evaluated. The aim of this study is to identify the optimal threshold that preserves the information needed to describe a jump height accurately utilizing vertical ground reaction force (vGRF) curves. To find an optimal threshold, a neural network was used to predict jump height from vGRF curve measures generated using different fPCA thresholds. The findings indicate that a threshold from 99% to 99.9% (6-11 principal components) is optimal for describing jump height, as these thresholds generated significantly lower jump height prediction errors than other thresholds.
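The thresholding step itself, keeping enough principal components to preserve a chosen fraction of the variance, is easy to make concrete. The sketch below uses ordinary PCA on time-normalized curves as a stand-in for fPCA; the matrix layout and function name are assumptions.

import numpy as np

def n_components_for_threshold(curves, threshold=0.99):
    """Number of principal components needed to preserve a given fraction of
    variance in a set of time-normalized vGRF curves, one curve per row."""
    X = curves - curves.mean(axis=0)
    # singular values give the variance explained by each component
    s = np.linalg.svd(X, full_matrices=False, compute_uv=False)
    explained = (s ** 2) / np.sum(s ** 2)
    return int(np.searchsorted(np.cumsum(explained), threshold) + 1)

# e.g. k99 = n_components_for_threshold(grf_matrix, 0.99)
#      k999 = n_components_for_threshold(grf_matrix, 0.999)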
Adubeiro, Nuno; Nogueira, Maria Luísa; Nunes, Rita G; Ferreira, Hugo Alexandre; Ribeiro, Eduardo; La Fuente, José Maria Ferreira
To determine the optimal b-value pair for differentiation between normal and prostate cancer (PCa) tissues. Forty-three patients with a diagnosis of PCa or with PCa symptoms were included. The apparent diffusion coefficient (ADC) was estimated using minimum b-values of 0, 50, 100, 150, 200 and 500 s/mm² and maximum b-values of 500, 800, 1100, 1400, 1700 and 2000 s/mm². Diagnostic performance was considered adequate when the area under the curve (AUC) exceeded 95%; 15 of the 35 b-value pairs surpassed this AUC threshold. The pair (50, 2000 s/mm²) provided the highest AUC (96%), with an ADC cutoff of 0.89 × 10⁻³ mm²/s, sensitivity 95.5%, specificity 93.2% and accuracy 94.4%. The best b-value pair was therefore b = 50 and 2000 s/mm². Copyright © 2017 Elsevier Inc. All rights reserved.
Influence of the helium-pressure on diode-pumped alkali-vapor laser
NASA Astrophysics Data System (ADS)
Gao, Fei; Chen, Fei; Xie, Ji-jiang; Zhang, Lai-ming; Li, Dian-jun; Yang, Gui-long; Guo, Jing
2013-05-01
Diode-pumped alkali-vapor laser (DPAL) is a kind of laser attracted much attention for its merits, such as high quantum efficiency, excellent beam quality, favorable thermal management, and potential scalability to high power and so on. Based on the rate-equation theory of end-pumped DPAL, the performances of DPAL using Cs-vapor collisionally broadened by helium are simulated and studied. With the increase of helium pressure, the numerical results show that: 1) the absorption line-width increases and the stimulated absorption cross-section decreases contrarily; 2) the threshold pumping power decreases to minimum and then rolls over to increase linearly; 3) the absorption efficiency rises to maximum initially due to enough large stimulated absorption cross-section in the far wings of collisionally broadened D2 transition (absorption transition), and then begins to reduce; 4) an optimal value of helium pressure exists to obtain the highest output power, leading to an optimal optical-optical efficiency. Furthermore, to generate the self-oscillation of laser, a critical value of helium pressure occurs when small-signal gain equals to the threshold gain.
Suppressing epidemic spreading by risk-averse migration in dynamical networks
NASA Astrophysics Data System (ADS)
Yang, Han-Xin; Tang, Ming; Wang, Zhen
2018-01-01
In this paper, we study the interplay between individual behaviors and epidemic spreading in a dynamical network. We distribute agents on a square-shaped region with periodic boundary conditions. Every agent is regarded as a node of the network and a wireless link is established between two agents if their geographical distance is less than a certain radius. At each time step, every agent assesses the epidemic situation and makes a decision on whether to stay in or leave its current place. An agent leaves its current place at a given speed if the number of infected neighbors reaches or exceeds a critical value E. Owing to the movement of agents, the network's structure is dynamical. Interestingly, we find that there exists an optimal value of E leading to the maximum epidemic threshold. This means that epidemic spreading can be effectively controlled by risk-averse migration. Besides, we find that the epidemic threshold increases as the recovery rate increases, decreases as the contact radius increases, and is maximized by an optimal moving speed. Our findings offer a deeper understanding of epidemic spreading in dynamical networks.
Irwin, R John; Irwin, Timothy C
2011-06-01
Making clinical decisions on the basis of diagnostic tests is an essential feature of medical practice and the choice of the decision threshold is therefore crucial. A test's optimal diagnostic threshold is the threshold that maximizes expected utility. It is given by the product of the prior odds of a disease and a measure of the importance of the diagnostic test's sensitivity relative to its specificity. Choosing this threshold is the same as choosing the point on the Receiver Operating Characteristic (ROC) curve whose slope equals this product. We contend that a test's likelihood ratio is the canonical decision variable and contrast diagnostic thresholds based on likelihood ratio with two popular rules of thumb for choosing a threshold. The two rules are appealing because they have clear graphical interpretations, but they yield optimal thresholds only in special cases. The optimal rule can be given similar appeal by presenting indifference curves, each of which shows a set of equally good combinations of sensitivity and specificity. The indifference curve is tangent to the ROC curve at the optimal threshold. Whereas ROC curves show what is feasible, indifference curves show what is desirable. Together they show what should be chosen. Copyright © 2010 European Federation of Internal Medicine. Published by Elsevier B.V. All rights reserved.
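The decision rule described above can be written down directly: the expected-utility-optimal operating point is where the ROC curve's slope equals the prior odds of non-disease multiplied by the importance of correct classification in non-diseased relative to diseased patients. The sketch below assumes an empirical ROC curve and positive net benefits; the parameter names are illustrative.

import numpy as np

def optimal_operating_point(fpr, tpr, prevalence,
                            net_benefit_neg=1.0, net_benefit_pos=1.0):
    """Pick the ROC point where expected utility is maximized.

    The optimal slope is m = ((1 - prevalence) / prevalence) *
    (net benefit of correctly classifying a non-diseased person) /
    (net benefit of correctly classifying a diseased person);
    the tangency point maximizes tpr - m * fpr.
    """
    m = (1.0 - prevalence) / prevalence * (net_benefit_neg / net_benefit_pos)
    idx = int(np.argmax(np.asarray(tpr) - m * np.asarray(fpr)))
    return idx, m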
Jiang, Li; Lee, Donghoon; Yilmaz, Hakan; Stefanopoulou, Anna
2014-10-28
Methods and systems for engine control optimization are provided. A first and a second operating condition of a vehicle engine are detected. An initial value is identified for a first and a second engine control parameter corresponding to a combination of the detected operating conditions according to a first and a second engine map look-up table. The initial values for the engine control parameters are adjusted based on a detected engine performance variable to cause the engine performance variable to approach a target value. A first and a second sensitivity of the engine performance variable are determined in response to changes in the engine control parameters. The first engine map look-up table is adjusted when the first sensitivity is greater than a threshold, and the second engine map look-up table is adjusted when the second sensitivity is greater than a threshold.
NASA Astrophysics Data System (ADS)
Zin, Wan Zawiah Wan; Shinyie, Wendy Ling; Jemain, Abdul Aziz
2015-02-01
In this study, two series of data for extreme rainfall events are generated based on the Annual Maximum and Partial Duration methods, derived from 102 rain-gauge stations in Peninsular Malaysia from 1982 to 2012. To determine the optimal threshold for each station, several requirements must be satisfied and the Adapted Hill estimator is employed for this purpose. A semi-parametric bootstrap is then used to estimate the mean square error (MSE) of the estimator at each threshold and the optimal threshold is selected based on the smallest MSE. The mean annual frequency is also checked to ensure that it lies in the range of one to five and the resulting data are de-clustered to ensure independence. The two data series are then fitted to the Generalized Extreme Value and Generalized Pareto distributions for the annual maximum and partial duration series, respectively. The parameter estimation methods used are the Maximum Likelihood and L-moment methods. Two goodness-of-fit tests are then used to evaluate the best-fitted distribution. The results showed that the Partial Duration series with the Generalized Pareto distribution and Maximum Likelihood parameter estimation provides the best representation of extreme rainfall events in Peninsular Malaysia for the majority of the stations studied. Based on these findings, several return values are also derived and spatial maps are constructed to identify the distribution characteristics of extreme rainfall in Peninsular Malaysia.
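A much-simplified stand-in for the threshold-selection step is sketched below: for each candidate number of upper order statistics the Hill estimate is bootstrapped, and the candidate minimizing an empirical proxy of the mean square error (bootstrap variance plus squared deviation from a pilot estimate) is kept. This is only a loose illustration of the idea; the study's adapted Hill estimator, its semi-parametric bootstrap, and the additional requirements on mean annual frequency and de-clustering are not reproduced.

import numpy as np

def hill(sample, k):
    """Hill tail-index estimate from the k largest observations."""
    x = np.sort(np.asarray(sample, dtype=float))[::-1]
    return np.mean(np.log(x[:k])) - np.log(x[k])

def choose_k_by_bootstrap(sample, ks, n_boot=200, seed=0):
    """Pick the number of exceedances k (and hence the threshold, the (k+1)-th
    largest value) whose bootstrapped Hill estimate has the smallest MSE proxy."""
    sample = np.asarray(sample, dtype=float)
    rng = np.random.default_rng(seed)
    pilot = hill(sample, max(ks))          # crude reference tail index
    scores = []
    for k in ks:
        reps = np.array([hill(rng.choice(sample, size=sample.size, replace=True), k)
                         for _ in range(n_boot)])
        scores.append(reps.var() + (reps.mean() - pilot) ** 2)
    return ks[int(np.argmin(scores))]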
Saito, Hirotaka; McKenna, Sean A
2007-07-01
An approach for delineating high anomaly density areas within a mixture of two or more spatial Poisson fields based on limited sample data collected along strip transects was developed. All sampled anomalies were transformed to anomaly count data and indicator kriging was used to estimate the probability of exceeding a threshold value derived from the cdf of the background homogeneous Poisson field. The threshold value was determined so that the delineation of high-density areas was optimized. Additionally, a low-pass filter was applied to the transect data to enhance such segmentation. Example calculations were completed using a controlled military model site, in which accurate delineation of clusters of unexploded ordnance (UXO) was required for site cleanup.
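The threshold-derivation step can be made concrete with one scipy call: given the intensity of the background homogeneous Poisson field, the count threshold is the smallest value that the background reaches or exceeds with a probability no larger than a chosen false-alarm level. The rate, window area and alpha below are placeholders; the indicator-kriging and low-pass filtering steps are omitted.

from scipy.stats import poisson

def count_threshold(background_rate, window_area, alpha=0.01):
    """Smallest anomaly count per window that a homogeneous Poisson background
    reaches or exceeds with probability at most alpha."""
    lam = background_rate * window_area    # expected background count per window
    return int(poisson.isf(alpha, lam)) + 1

# e.g. 0.2 anomalies per m^2 and 25 m^2 transect windows: count_threshold(0.2, 25)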
Abejón, David; Rueda, Pablo; del Saz, Javier; Arango, Sara; Monzón, Eva; Gilsanz, Fernando
2015-04-01
Neurostimulation is the process and technology derived from the application of electricity with different parameters to activate or inhibit nerve pathways. Pulse width (Pw) is the duration of each electrical impulse and, along with amplitude (I), determines the total energy charge of the stimulation. The aim of the study was to test Pw values to find the most adequate pulse widths in rechargeable systems to obtain the largest coverage of the painful area, the most comfortable paresthesia, and the greatest patient satisfaction. A study of the parameters was performed, varying Pw while maintaining a fixed frequency at 50 Hz. Data on perception threshold (Tp ), discomfort threshold (Td ), and therapeutic threshold (Tt ) were recorded, applying 14 increasing Pw values ranging from 50 µsec to 1000 µsec. Lastly, the behavior of the therapeutic range (TR), the coverage of the painful area, the subjective patient perception of paresthesia, and the degree of patient satisfaction were assessed. The findings after analyzing the different thresholds were as follows: When varying the Pw, the differences obtained at each threshold (Tp , Tt , and Td ) were statistically significant (p < 0.05). The differences among the resulting Tp values and among the resulting Tt values were statistically significant when varying Pw from 50 up to 600 µsec (p < 0.05). For Pw levels 600 µsec and up, no differences were observed in these thresholds. In the case of Td , significant differences existed as Pw increased from 50 to 700 µsec (p ≤ 0.05). The coverage increased in a statistically significant way (p < 0.05) from Pw values of 50 µsec to 300 µsec. Good or very good subjective perception was shown at about Pw 300 µsec. The patient paresthesia coverage was introduced as an extra variable in the chronaxie-rheobase curve, allowing the adjustment of Pw values for optimal programming. The coverage of the patient against the current chronaxie-rheobase formula will be represented on three axes; an extra axis (z) will appear, multiplying each combination of Pw value and amplitude by the percentage of coverage corresponding to those values. Using this new comparison of chronaxie-rheobase curve vs. coverage, maximum Pw values will be obtained different from those obtained by classic methods. © 2014 International Neuromodulation Society.
Optimal Stimulus Amplitude for Vestibular Stochastic Stimulation to Improve Sensorimotor Function
NASA Technical Reports Server (NTRS)
Goel, R.; Kofman, I.; DeDios, Y. E.; Jeevarajan, J.; Stepanyan, V.; Nair, M.; Congdon, S.; Fregia, M.; Cohen, H.; Bloomberg, J. J.;
2014-01-01
Sensorimotor changes such as postural and gait instabilities can affect the functional performance of astronauts when they transition across different gravity environments. We are developing a method, based on stochastic resonance (SR), to enhance information transfer by applying non-zero levels of external noise on the vestibular system (vestibular stochastic resonance, VSR). Our previous work has shown the advantageous effects of VSR in a balance task of standing on an unstable surface. This technique to improve detection of vestibular signals uses a stimulus delivery system that is wearable or portable and provides imperceptibly low levels of white noise-based binaural bipolar electrical stimulation of the vestibular system. The goal of this project is to determine optimal levels of stimulation for SR applications by using a defined vestibular threshold of motion detection. A series of experiments were carried out to determine a robust paradigm to identify a vestibular threshold that can then be used to recommend optimal stimulation levels for SR training applications customized to each crewmember. Customizing stimulus intensity can maximize treatment effects. The amplitude of stimulation to be used in the VSR application has varied across studies in the literature such as 60% of nociceptive stimulus thresholds. We compared subjects' perceptual threshold with that obtained from two measures of body sway. Each test session was 463s long and consisted of several 15s sinusoidal stimuli, at different current amplitudes (0-2 mA), interspersed with 20-20.5s periods of no stimulation. Subjects sat on a chair with their eyes closed and had to report their perception of motion through a joystick. A force plate underneath the chair recorded medio-lateral shear forces and roll moments. First we determined the percent time during stimulation periods for which perception of motion (activity above a pre-defined threshold) was reported using the joystick, and body sway (two standard deviation of the noise level in the baseline measurement) was detected by the sensors. The percentage time at each stimulation level for motion detection was normalized with respect to the largest value and a logistic regression curve fit was applied to these data. The threshold was defined at the 50% probability of motion detection. Comparison of threshold of motion detection obtained from joystick data versus body sway suggests that perceptual thresholds were significantly lower, and were not impacted by system noise. Further, in order to determine optimal stimulation amplitude to improve balance, two sets of experiments were carried out. In the first set of experiments, all subjects received the same level of stimuli and the intensity of optimal performance was projected back on subjects' vestibular threshold curve. In the second set of experiments, on different subjects, stimulation was administered from 20-400% of subjects' vestibular threshold obtained from joystick data. Preliminary results of our study show that, in general, using stimulation amplitudes at 40-60% of perceptual motion threshold improved balance performance significantly compared to control (no stimulation). The amplitude of vestibular stimulation that improved balance function was predominantly in the range of +/- 100 to +/- 400 micro A. 
We hypothesize that VSR stimulation will act synergistically with sensorimotor adaptability (SA) training to improve adaptability by increasing utilization of vestibular information and therefore will help us to optimize and personalize a SA countermeasure prescription. This combination will help to significantly reduce the number of days required to recover functional performance to preflight levels after long-duration spaceflight.
Kanoun, Salim; Tal, Ilan; Berriolo-Riedinger, Alina; Rossi, Cédric; Riedinger, Jean-Marc; Vrigneaud, Jean-Marc; Legrand, Louis; Humbert, Olivier; Casasnovas, Olivier; Brunotte, François; Cochet, Alexandre
2015-01-01
To investigate the respective influence of software tool and total metabolic tumor volume (TMTV0) calculation method on prognostic stratification of baseline 2-deoxy-2-[18F]fluoro-D-glucose positron emission tomography ([18F]FDG-PET) in newly diagnosed Hodgkin lymphoma (HL). 59 patients with newly diagnosed HL were retrospectively included. [18F]FDG-PET was performed before any treatment. Four sets of TMTV0 were calculated with Beth Israel (BI) software: based on an absolute threshold selecting voxel with standardized uptake value (SUV) >2.5 (TMTV02.5), applying a per-lesion threshold of 41% of the SUV max (TMTV041) and using a per-patient adapted threshold based on SUV max of the liver (>125% and >140% of SUV max of the liver background; TMTV0125 and TMTV0140). TMTV041 was also determined with commercial software for comparison of software tools. ROC curves were used to determine the optimal threshold for each TMTV0 to predict treatment failure. Median follow-up was 39 months. There was an excellent correlation between TMTV041 determined with BI and with the commercial software (r = 0.96, p<0.0001). The median TMTV0 value for TMTV041, TMTV02.5, TMTV0125 and TMTV0140 were respectively 160 (used as reference), 210 ([28;154] p = 0.005), 183 ([-4;114] p = 0.06) and 143 ml ([-58;64] p = 0.9). The respective optimal TMTV0 threshold and area under curve (AUC) for prediction of progression free survival (PFS) were respectively: 313 ml and 0.70, 432 ml and 0.68, 450 ml and 0.68, 330 ml and 0.68. There was no significant difference between ROC curves. High TMTV0 value was predictive of poor PFS in all methodologies: 4-years PFS was 83% vs 42% (p = 0.006) for TMTV02.5, 83% vs 41% (p = 0.003) for TMTV041, 85% vs 40% (p<0.001) for TMTV0125 and 83% vs 42% (p = 0.004) for TMTV0140. In newly diagnosed HL, baseline metabolic tumor volume values were significantly influenced by the choice of the method used for determination of volume. However, no significant differences were found in term of prognosis.
Policy Tree Optimization for Adaptive Management of Water Resources Systems
NASA Astrophysics Data System (ADS)
Herman, J. D.; Giuliani, M.
2016-12-01
Water resources systems must cope with irreducible uncertainty in supply and demand, requiring policy alternatives capable of adapting to a range of possible future scenarios. Recent studies have developed adaptive policies based on "signposts" or "tipping points", which are threshold values of indicator variables that signal a change in policy. However, there remains a need for a general method to optimize the choice of indicators and their threshold values in a way that is easily interpretable for decision makers. Here we propose a conceptual framework and computational algorithm to design adaptive policies as a tree structure (i.e., a hierarchical set of logical rules) using a simulation-optimization approach based on genetic programming. We demonstrate the approach using Folsom Reservoir, California as a case study, in which operating policies must balance the risk of both floods and droughts. Given a set of feature variables, such as reservoir level, inflow observations and forecasts, and time of year, the resulting policy defines the conditions under which flood control and water supply hedging operations should be triggered. Importantly, the tree-based rule sets are easy to interpret for decision making, and can be compared to historical operating policies to understand the adaptations needed under possible climate change scenarios. Several remaining challenges are discussed, including the empirical convergence properties of the method, and extensions to irreversible decisions such as infrastructure. Policy tree optimization, and corresponding open-source software, provide a generalizable, interpretable approach to designing adaptive policies under uncertainty for water resources systems.
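For readers unfamiliar with the representation, a policy tree is a hierarchy of indicator-threshold rules mapping observed system state to an action; the optimization searches over both the indicators and their threshold values. The hand-written example below only shows the kind of rule set being optimized; the variables, thresholds and action names are made-up placeholders, not results for Folsom Reservoir.

def release_policy(storage, inflow_forecast, day_of_year,
                   flood_space=200.0, drought_storage=350.0):
    """Example of a tree-structured operating policy (illustrative thresholds)."""
    wet_season = day_of_year >= 274 or day_of_year <= 120   # roughly Oct-Apr
    if wet_season and inflow_forecast > flood_space:
        return "flood_control_release"    # free up storage ahead of high inflows
    if storage < drought_storage:
        return "hedge_water_supply"       # cut deliveries to protect storage
    return "normal_operations"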
White, Khendi T.; Moorthy, M.V.; Akinkuolie, Akintunde O.; Demler, Olga; Ridker, Paul M; Cook, Nancy R.; Mora, Samia
2015-01-01
Background Nonfasting triglycerides are similar to or superior to fasting triglycerides at predicting cardiovascular events. However, diagnostic cutpoints are based on fasting triglycerides. We examined the optimal cutpoint for increased nonfasting triglycerides. Methods Baseline nonfasting (<8 hours since last meal) samples were obtained from 6,391 participants in the Women’s Health Study, followed prospectively for up to 17 years. The optimal diagnostic threshold for nonfasting triglycerides, determined by logistic regression models using c-statistics and Youden index (sum of sensitivity and specificity minus one), was used to calculate hazard ratios for incident cardiovascular events. Performance was compared to thresholds recommended by the American Heart Association (AHA) and European guidelines. Results The optimal threshold was 175 mg/dL (1.98 mmol/L), corresponding to a c-statistic of 0.656 that was statistically better than the AHA cutpoint of 200 mg/dL (c-statistic of 0.628). For nonfasting triglycerides above and below 175 mg/dL, adjusting for age, hypertension, smoking, hormone use, and menopausal status, the hazard ratio for cardiovascular events was 1.88 (95% CI, 1.52–2.33, P<0.001), and for triglycerides measured at 0–4 and 4–8 hours since last meal, hazard ratios (95% CIs) were 2.05 (1.54–2.74) and 1.68 (1.21–2.32), respectively. Performance of this optimal cutpoint was validated using ten-fold cross-validation and bootstrapping of multivariable models that included standard risk factors plus total and HDL cholesterol, diabetes, body-mass index, and C-reactive protein. Conclusions In this study of middle-aged and older apparently healthy women, we identified a diagnostic threshold for nonfasting hypertriglyceridemia of 175 mg/dL (1.98 mmol/L), with the potential to more accurately identify cases than the currently recommended AHA cutpoint. PMID:26071491
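The Youden index used above is simply sensitivity + specificity - 1, maximized over candidate cutpoints. The sketch below illustrates that search on synthetic data; it does not reproduce the Women's Health Study analysis, and the simulated outcome model is invented.

```python
# Illustrative search for the cutpoint that maximizes the Youden index
# (sensitivity + specificity - 1) over candidate thresholds. Data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
triglycerides = rng.lognormal(mean=4.8, sigma=0.4, size=5000)    # mg/dL, synthetic
event = rng.random(5000) < 0.02 * (triglycerides / 100)          # synthetic outcome

def youden(threshold, x, y):
    pred = x >= threshold
    sens = np.mean(pred[y])        # true-positive rate among events
    spec = np.mean(~pred[~y])      # true-negative rate among non-events
    return sens + spec - 1

candidates = np.arange(100, 301, 5)
j = [youden(t, triglycerides, event) for t in candidates]
best = candidates[int(np.argmax(j))]
print(f"optimal cutpoint ~ {best} mg/dL (Youden J = {max(j):.3f})")
```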
Using Reanalysis Data for the Prediction of Seasonal Wind Turbine Power Losses Due to Icing
NASA Astrophysics Data System (ADS)
Burtch, D.; Mullendore, G. L.; Delene, D. J.; Storm, B.
2013-12-01
The Northern Plains region of the United States is home to a significant amount of potential wind energy. However, in winter months the capture of this potential power is severely impacted by meteorological conditions, in the form of icing. The expected loss in power production due to icing is a valuable parameter that can be used in wind turbine operations, in the determination of wind turbine site locations, and in the long-term energy estimates used for financing purposes. Currently, losses due to icing must be estimated when developing predictions for turbine feasibility and financing studies, while icing maps, a tool commonly used in Europe, are lacking in the United States. This study uses the Modern-Era Retrospective Analysis for Research and Applications (MERRA) dataset in conjunction with turbine production data to investigate various methods of predicting seasonal losses (October-March) due to icing at two wind turbine sites located 121 km apart in North Dakota. The prediction of icing losses is based on temperature and relative humidity thresholds and is accomplished using three methods. For each of the three methods, the required atmospheric variables are determined in one of two ways: using industry-specific software to correlate anemometer data in conjunction with the MERRA dataset and using only the MERRA dataset for all variables. For each season, a percentage of the total expected generated power lost due to icing is determined and compared to observed losses from the production data. An optimization is performed in order to determine the relative humidity threshold that minimizes the difference between the predicted and observed values. Eight seasons of data are used to determine an optimal relative humidity threshold, and a further three seasons of data are used to test this threshold. Preliminary results have shown that the optimized relative humidity threshold for the northern turbine is higher than that for the southern turbine for all methods. For the three test seasons, the optimized thresholds tend to under-predict the icing losses. However, the threshold determined using boundary-layer similarity theory predicts the power losses due to icing more closely than the other methods. For the northern turbine, the average predicted power loss over the three seasons is 4.65% while the observed power loss is 6.22% (average difference of 1.57%). For the southern turbine, the average predicted power loss and observed power loss over the same time period are 4.43% and 6.16%, respectively (average difference of 1.73%). The three-year average, however, does not clearly capture the variability that exists from season to season. On examination of each of the test seasons individually, the optimized relative humidity threshold methodology performs better than the fixed power loss estimates commonly used in the wind energy industry.
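The core calibration step, choosing the relative humidity cutoff that best matches observed seasonal losses, can be sketched as a simple grid search. The example below uses synthetic hourly data and an assumed sub-freezing temperature criterion; it is only a schematic of the optimization, not the study's methods.

```python
# Hedged sketch of the threshold-optimization idea: choose the relative-humidity
# cutoff (with temperature below freezing) that minimizes the gap between the
# predicted and observed seasonal icing losses. All data below are synthetic.
import numpy as np

rng = np.random.default_rng(1)
temp_c = rng.normal(-5, 8, size=4000)        # hourly temperature, synthetic
rel_hum = rng.uniform(40, 100, size=4000)    # hourly relative humidity, synthetic
potential_mwh = rng.uniform(0, 1.5, size=4000)
observed_loss_frac = 0.06                    # observed fraction of production lost

def predicted_loss(rh_threshold):
    icing = (temp_c < 0.0) & (rel_hum >= rh_threshold)
    return potential_mwh[icing].sum() / potential_mwh.sum()

candidates = np.arange(80.0, 100.0, 0.5)
errors = [abs(predicted_loss(t) - observed_loss_frac) for t in candidates]
best = candidates[int(np.argmin(errors))]
print(f"optimized RH threshold ~ {best}% (predicted loss {predicted_loss(best):.3f})")
```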
Grosso, Matthew J; Frangiamore, Salvatore J; Ricchetti, Eric T; Bauer, Thomas W; Iannotti, Joseph P
2014-03-19
Propionibacterium acnes is a clinically relevant pathogen with total shoulder arthroplasty. The purpose of this study was to determine the sensitivity of frozen section histology in identifying patients with Propionibacterium acnes infection during revision total shoulder arthroplasty and investigate various diagnostic thresholds of acute inflammation that may improve frozen section performance. We reviewed the results of forty-five patients who underwent revision total shoulder arthroplasty. Patients were divided into the non-infection group (n = 15), the Propionibacterium acnes infection group (n = 18), and the other infection group (n = 12). Routine preoperative testing was performed and intraoperative tissue culture and frozen section histology were collected for each patient. The histologic diagnosis was determined by one pathologist for each of the four different thresholds. The absolute maximum polymorphonuclear leukocyte concentration was used to construct a receiver operating characteristics curve to determine a new potential optimal threshold. Using the current thresholds for grading frozen section histology, the sensitivity was lower for the Propionibacterium acnes infection group (50%) compared with the other infection group (67%). The specificity of frozen section was 100%. Using a receiver operating characteristics curve, an optimized threshold was found at a total of ten polymorphonuclear leukocytes in five high-power fields (400×). Using this threshold, the sensitivity of frozen section for Propionibacterium acnes was increased to 72%, and the specificity remained at 100%. Using current histopathology grading systems, frozen sections were specific but showed low sensitivity with respect to the Propionibacterium acnes infection. A new threshold value of a total of ten or more polymorphonuclear leukocytes in five high-power fields may increase the sensitivity of frozen section, with minimal impact on specificity.
Precipitation phase partitioning variability across the Northern Hemisphere
NASA Astrophysics Data System (ADS)
Jennings, K. S.; Winchell, T. S.; Livneh, B.; Molotch, N. P.
2017-12-01
Precipitation phase drives myriad hydrologic, climatic, and biogeochemical processes. Despite its importance, many of the land surface models used to simulate such processes and their sensitivity to climate warming rely on simple, spatially uniform air temperature thresholds to partition rainfall and snowfall. Our analysis of a 29-year dataset with 18.7 million observations of precipitation phase from 12,143 stations across the Northern Hemisphere land surface showed marked spatial variability in the near-surface air temperature at which precipitation is equally likely to fall as rain and snow, the 50% rain-snow threshold. This value averaged 1.0°C and ranged from -0.4°C to 2.4°C for 95% of the stations analyzed. High-elevation continental areas such as the Rocky Mountains of the western U.S. and the Tibetan Plateau of central Asia generally exhibited the warmest thresholds, in some cases exceeding 3.0°C. Conversely, the coldest thresholds were observed on the Pacific Coast of North America, the southeast U.S., and parts of Eurasia, with values dropping below -0.5°C. Analysis of the meteorological conditions during storm events showed relative humidity exerted the strongest control on phase partitioning, with surface pressure playing a secondary role. Lower relative humidity and surface pressure were both associated with warmer 50% rain-snow thresholds. Additionally, we trained a binary logistic regression model on the observations to classify rain and snow events and found including relative humidity as a predictor variable significantly increased model performance between 0.6°C and 3.8°C when phase partitioning is most uncertain. We then used the optimized model and a spatially continuous reanalysis product to map the 50% rain-snow threshold across the Northern Hemisphere. The map reproduced patterns in the observed thresholds with a mean bias of 0.5°C relative to the station data. The above results suggest land surface models could be improved by incorporating relative humidity into their precipitation phase prediction schemes or by using a spatially variable, optimized rain-snow temperature threshold. This is particularly important for climate warming simulations where misdiagnosing a shift from snow to rain or inaccurately quantifying snowfall fraction would likely lead to biased results.
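A binary logistic regression of precipitation phase on temperature and humidity, of the kind described above, can be sketched as follows. The data, coefficients, and the synthetic "truth" are invented for illustration; only the structure (fit the classifier, then read off the temperature where the rain probability crosses 50% at a given humidity) reflects the text.

```python
# Illustrative rain/snow classifier: logistic regression on air temperature and
# relative humidity, with the 50% rain-snow threshold read off as the temperature
# where P(rain) = 0.5 at a given humidity. Synthetic data, not the study's records.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 20000
temp = rng.uniform(-6, 8, n)          # near-surface air temperature (deg C)
rh = rng.uniform(40, 100, n)          # relative humidity (%)
# Synthetic truth: drier air shifts the rain-snow transition to warmer temperatures.
p_rain = 1 / (1 + np.exp(-(temp - (3.0 - 0.03 * rh))))
is_rain = rng.random(n) < p_rain

model = LogisticRegression().fit(np.column_stack([temp, rh]), is_rain)
b0, (b1, b2) = model.intercept_[0], model.coef_[0]
for humidity in (60, 80, 100):
    t50 = -(b0 + b2 * humidity) / b1  # temperature where P(rain) = 0.5
    print(f"RH {humidity}%: 50% rain-snow threshold ~ {t50:.1f} deg C")
```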
Optimal Decision Making in a Class of Uncertain Systems Based on Uncertain Variables
NASA Astrophysics Data System (ADS)
Bubnicki, Z.
2006-06-01
The paper is concerned with a class of uncertain systems described by relational knowledge representations with unknown parameters which are assumed to be values of uncertain variables characterized by a user in the form of certainty distributions. The first part presents the basic optimization problem consisting in finding the decision maximizing the certainty index that the requirement given by a user is satisfied. The main part is devoted to the description of the optimization problem with the given certainty threshold. It is shown how the approach presented in the paper may be applied to some problems for anticipatory systems.
Measuring the value of accurate link prediction for network seeding.
Wei, Yijin; Spencer, Gwen
2017-01-01
The influence-maximization literature seeks small sets of individuals whose structural placement in the social network can drive large cascades of behavior. Optimization efforts to find the best seed set often assume perfect knowledge of the network topology. Unfortunately, social network links are rarely known in an exact way. When do seeding strategies based on less-than-accurate link prediction provide valuable insight? We introduce optimized-against-a-sample ([Formula: see text]) performance to measure the value of optimizing seeding based on a noisy observation of a network. Our computational study investigates [Formula: see text] under several threshold-spread models in synthetic and real-world networks. Our focus is on measuring the value of imprecise link information. The level of investment in link prediction that is strategic appears to depend closely on spread model: in some parameter ranges investments in improving link prediction can pay substantial premiums in cascade size. For other ranges, such investments would be wasted. Several trends were remarkably consistent across topologies.
Compton, Wilson M.; Dawson, Deborah A.; Goldstein, Risë B.; Grant, Bridget F.
2013-01-01
Background Ascertaining agreement between DSM-IV and DSM-5 is important to determine the applicability of treatments for DSM-IV conditions to persons diagnosed according to the proposed DSM-5. Methods Data from a nationally representative sample of US adults were used to compare concordance of past-year DSM-IV Opioid, Cannabis, Cocaine and Alcohol Dependence with past-year DSM-5 disorders at thresholds of 3+, 4+, 5+ and 6+ positive DSM-5 criteria among past-year users of opioids (n=264), cannabis (n=1,622), cocaine (n=271) and alcohol (n=23,013). Substance-specific 2×2 tables yielded overall concordance (kappa), sensitivity, specificity, positive predictive values (PPV) and negative predictive values (NPV). Results For DSM-IV Alcohol, Cocaine and Opioid Dependence, optimal concordance occurred when 4+ DSM-5 criteria were endorsed, corresponding to the threshold for moderate DSM-5 Alcohol, Cocaine and Opioid Use Disorders. Maximal concordance of DSM-IV Cannabis Dependence and DSM-5 Cannabis Use Disorder occurred when 6+ criteria were endorsed, corresponding to the threshold for severe DSM-5 Cannabis Use Disorder. At these optimal thresholds, sensitivity, specificity, PPV and NPV generally exceeded 85% (>75% for cannabis). Conclusions Overall, excellent correspondence of DSM-IV Dependence with DSM-5 Substance Use Disorders was documented in this general population sample of alcohol, cannabis, cocaine and opioid users. Applicability of treatments tested for DSM-IV Dependence is supported by these results for those with a DSM-5 Alcohol, Cocaine or Opioid Use Disorder of at least moderate severity or Severe Cannabis Use Disorder. Further research is needed to provide evidence for applicability of treatments for persons with milder substance use disorders. PMID:23642316
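The agreement statistics reported above all derive from a 2×2 table of DSM-IV Dependence against a DSM-5 criterion threshold. A minimal sketch of those calculations, with invented counts, is shown below.

```python
# Agreement metrics from a 2x2 table: kappa, sensitivity, specificity, PPV and NPV.
# Here DSM-IV Dependence is taken as the reference (rows) and a DSM-5 criterion
# threshold as the test (columns). The counts are invented for illustration.
def agreement(a, b, c, d):
    """a = both positive, b = DSM-IV+/DSM-5-, c = DSM-IV-/DSM-5+, d = both negative."""
    n = a + b + c + d
    po = (a + d) / n                                     # observed agreement
    pe = ((a + b) * (a + c) + (c + d) * (b + d)) / n**2  # chance agreement
    return {"kappa": (po - pe) / (1 - pe),
            "sensitivity": a / (a + b),
            "specificity": d / (c + d),
            "PPV": a / (a + c),
            "NPV": d / (b + d)}

print(agreement(a=180, b=20, c=15, d=785))   # hypothetical counts
```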
Succession of hide–seek and pursuit–evasion at heterogeneous locations
Gal, Shmuel; Casas, Jérôme
2014-01-01
Many interactions between searching agents and their elusive targets are composed of a succession of steps, whether in the context of immune systems, predation or counterterrorism. In the simplest case, a two-step process starts with a search-and-hide phase, also called a hide-and-seek phase, followed by a round of pursuit–escape. Our aim is to link these two processes, usually analysed separately and with different models, in a single game theory context. We define a matrix game in which a searcher, searching for a hider, looks into each of a fixed number of discrete locations only once; the hider can escape with varying probabilities according to its location. The value of the game is the overall probability of capture after k looks. The optimal search and hide strategies are described. If a searcher looks only once into any of the locations, an optimal hider chooses its hiding place so as to make all locations equally attractive. This optimal strategy remains true as long as the number of looks is below an easily calculated threshold; however, above this threshold, the optimal position for the hider is where it has the highest probability of escaping once spotted. PMID:24621817
Gerdes, Lars; Iwobi, Azuka; Busch, Ulrich; Pecoraro, Sven
2016-03-01
Digital PCR in droplets (ddPCR) is an emerging method for more and more applications in DNA (and RNA) analysis. Special requirements when establishing ddPCR for analysis of genetically modified organisms (GMO) in a laboratory include the choice between validated official qPCR methods and the optimization of these assays for a ddPCR format. Differentiation between droplets with positive reaction and negative droplets, that is setting of an appropriate threshold, can be crucial for a correct measurement. This holds true in particular when independent transgene and plant-specific reference gene copy numbers have to be combined to determine the content of GM material in a sample. Droplets which show fluorescent units ranging between those of explicit positive and negative droplets are called 'rain'. Signals of such droplets can hinder analysis and the correct setting of a threshold. In this manuscript, a computer-based algorithm has been carefully designed to evaluate assay performance and facilitate objective criteria for assay optimization. Optimized assays in return minimize the impact of rain on ddPCR analysis. We developed an Excel based 'experience matrix' that reflects the assay parameters of GMO ddPCR tests performed in our laboratory. Parameters considered include singleplex/duplex ddPCR, assay volume, thermal cycler, probe manufacturer, oligonucleotide concentration, annealing/elongation temperature, and a droplet separation evaluation. We additionally propose an objective droplet separation value which is based on both absolute fluorescence signal distance of positive and negative droplet populations and the variation within these droplet populations. The proposed performance classification in the experience matrix can be used for a rating of different assays for the same GMO target, thus enabling employment of the best suited assay parameters. Main optimization parameters include annealing/extension temperature and oligonucleotide concentrations. The droplet separation value allows for easy and reproducible assay performance evaluation. The combination of separation value with the experience matrix simplifies the choice of adequate assay parameters for a given GMO event.
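The abstract describes a separation value built from the distance between the positive and negative droplet populations and their internal variation. The sketch below illustrates one plausible form of such a score (mean difference divided by the sum of standard deviations); the authors' exact formula is not reproduced here, and all amplitudes are synthetic.

```python
# Hedged sketch of a droplet "separation" score: distance between the positive and
# negative fluorescence populations scaled by their spread. The specific formula is
# an assumption for illustration, not the paper's definition.
import numpy as np

rng = np.random.default_rng(3)
negatives = rng.normal(1800, 120, size=12000)   # fluorescence of negative droplets
positives = rng.normal(9500, 400, size=3000)    # fluorescence of positive droplets
rain = rng.uniform(3000, 8000, size=150)        # intermediate "rain" droplets

amplitudes = np.concatenate([negatives, positives, rain])
threshold = 4500.0                               # example threshold between populations
neg = amplitudes[amplitudes < threshold]
pos = amplitudes[amplitudes >= threshold]

separation = (pos.mean() - neg.mean()) / (pos.std() + neg.std())
print(f"separation value: {separation:.1f}")
```

A well-optimized assay (tight, widely separated populations and little rain) yields a large separation value, which is the property the experience matrix is meant to track.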
3D SAPIV particle field reconstruction method based on adaptive threshold.
Qu, Xiangju; Song, Yang; Jin, Ying; Li, Zhenhua; Wang, Xuezhen; Guo, ZhenYan; Ji, Yunjing; He, Anzhi
2018-03-01
Particle image velocimetry (PIV) is a necessary flow field diagnostic technique that provides instantaneous velocimetry information non-intrusively. Three-dimensional (3D) PIV methods can supply a full understanding of the 3D structure, the complete stress tensor, and the vorticity vector in complex flows. In synthetic aperture particle image velocimetry (SAPIV), the flow field can be measured with large particle intensities from the same direction by different cameras. During SAPIV particle reconstruction, particles are commonly reconstructed by manually setting a threshold to filter out unfocused particles in the refocused images. In this paper, the particle intensity distribution in refocused images is analyzed, and a SAPIV particle field reconstruction method based on an adaptive threshold is presented. By using the adaptive threshold to filter the 3D measurement volume integrally, the three-dimensional location information of the focused particles can be reconstructed. The cross-correlations between the images captured by the cameras and the images projected from the reconstructed particle field are calculated for different threshold values. The optimal threshold is determined by cubic curve fitting and is defined as the threshold value that causes the correlation coefficient to reach its maximum. The numerical simulation of a 16-camera array and a particle field at two adjacent time events quantitatively evaluates the performance of the proposed method. An experimental system consisting of an array of 16 cameras was used to reconstruct four adjacent frames in a vortex flow field. The results show that the proposed reconstruction method can effectively reconstruct the 3D particle fields.
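The threshold-selection step described above (score each candidate threshold, fit a cubic, take the maximizer) can be sketched generically as follows; the correlation values here are synthetic stand-ins, not SAPIV reconstructions.

```python
# Sketch of cubic-fit threshold selection: compute a quality score (a stand-in for
# the image cross-correlation coefficient) over candidate thresholds, fit a cubic
# polynomial, and take the threshold at the fitted maximum. Scores are synthetic.
import numpy as np

thresholds = np.linspace(0.05, 0.95, 10)
corr = (-1.5 * (thresholds - 0.55) ** 2 + 0.9
        + np.random.default_rng(4).normal(0, 0.01, 10))   # synthetic scores

coeffs = np.polyfit(thresholds, corr, deg=3)               # cubic curve fitting
fine = np.linspace(thresholds.min(), thresholds.max(), 1001)
fitted = np.polyval(coeffs, fine)
optimal = fine[int(np.argmax(fitted))]                     # threshold at the maximum
print(f"optimal reconstruction threshold ~ {optimal:.3f}")
```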
Determination of the threshold dose distribution in photodynamic action from in vitro experiments.
de Faria, Clara Maria Gonçalves; Inada, Natalia Mayumi; Kurachi, Cristina; Bagnato, Vanderlei Salvador
2016-09-01
The concept of threshold in photodynamic action on cells or microorganisms is well observed but not fully explored in in vitro experiments. The intercomparison of the light and photosensitizer used among different experiments is also poorly evaluated. In this report, we present an analytical model that allows the threshold dose distribution to be extracted from survival rate experiments, i.e., the distribution of energies and photosensitizer concentrations necessary to produce cell death. Then, we use this model to investigate photodynamic therapy (PDT) data previously published in the literature. The concept of a threshold dose distribution, instead of a single threshold value, is a rich one for comparing photodynamic action in different situations, allowing analysis of its efficiency as well as determination of optimized conditions for PDT. We observed that, in general, as it becomes more difficult to kill a population, the distribution tends to broaden, which means it presents a large spectrum of threshold values within the same cell type population. From the distribution parameters (center peak and full width), we also observed a clear distinction among cell types regarding their response to PDT that can be quantified. Comparing data obtained with the same cell line and photosensitizer (PS), where the only distinct condition was the light source's wavelength, we found that the differences in the distribution parameters were comparable to the differences in PS absorption. Finally, we observed evidence that the threshold dose distribution matches the curve of apoptotic activity for some PSs. Copyright © 2016 Elsevier B.V. All rights reserved.
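The underlying idea can be illustrated with a simple parametric version: if individual thresholds are assumed normally distributed with centre mu and width sigma, the survival fraction at dose D is 1 - Phi((D - mu)/sigma), and the two distribution parameters can be fitted to survival data. This sketch uses invented data and a Gaussian assumption, which is not necessarily the analytical model of the paper.

```python
# Hedged sketch: fit a Gaussian threshold-dose distribution to survival fractions.
# Survival at dose D = fraction of the population whose threshold exceeds D.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def survival(dose, mu, sigma):
    return 1.0 - norm.cdf(dose, loc=mu, scale=sigma)

dose = np.array([0, 5, 10, 15, 20, 25, 30, 40])                 # J/cm^2, synthetic
surv = np.array([1.00, 0.97, 0.85, 0.55, 0.28, 0.10, 0.04, 0.01])

(mu, sigma), _ = curve_fit(survival, dose, surv, p0=[15, 5])
print(f"centre ~ {mu:.1f}, full width (FWHM) ~ {2.355 * sigma:.1f}")
```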
An adaptive design for updating the threshold value of a continuous biomarker
Spencer, Amy V.; Harbron, Chris; Mander, Adrian; Wason, James; Peers, Ian
2017-01-01
Potential predictive biomarkers are often measured on a continuous scale, but in practice, a threshold value to divide the patient population into biomarker ‘positive’ and ‘negative’ is desirable. Early phase clinical trials are increasingly using biomarkers for patient selection, but at this stage, it is likely that little will be known about the relationship between the biomarker and the treatment outcome. We describe a single-arm trial design with adaptive enrichment, which can increase power to demonstrate efficacy within a patient subpopulation, the parameters of which are also estimated. Our design enables us to learn about the biomarker and optimally adjust the threshold during the study, using a combination of generalised linear modelling and Bayesian prediction. At the final analysis, a binomial exact test is carried out, allowing the hypothesis that ‘no population subset exists in which the novel treatment has a desirable response rate’ to be tested. Through extensive simulations, we are able to show increased power over fixed threshold methods in many situations without increasing the type-I error rate. We also show that estimates of the threshold, which defines the population subset, are unbiased and often more precise than those from fixed threshold studies. We provide an example of the method applied (retrospectively) to publicly available data from a study of the use of tamoxifen after mastectomy by the German Breast Study Group, where progesterone receptor is the biomarker of interest. PMID:27417407
Matsui, Yusuke; Horikawa, Masahiro; Jahangiri Noudeh, Younes; Kaufman, John A; Kolbeck, Kenneth J; Farsad, Khashayar
2017-12-01
The aim of the study was to evaluate the association of baseline Lipiodol uptake in hepatocellular carcinoma (HCC) after transarterial chemoembolization (TACE) with early tumor recurrence, and to identify a threshold baseline uptake value predicting tumor response. A single-institution retrospective database of HCC treated with Lipiodol-TACE was reviewed. Forty-six tumors in 30 patients treated with a Lipiodol-chemotherapy emulsion and no additional particle embolization were included. Baseline Lipiodol uptake was measured as the mean Hounsfield units (HU) on a CT within one week after TACE. Washout rate was calculated dividing the difference in HU between the baseline CT and follow-up CT by time (HU/month). Cox proportional hazard models were used to correlate baseline Lipiodol uptake and other variables with tumor response. A receiver operating characteristic (ROC) curve was used to identify the optimal threshold for baseline Lipiodol uptake predicting tumor response. During the follow-up period (mean 5.6 months), 19 (41.3%) tumors recurred (mean time to recurrence = 3.6 months). In a multivariate model, low baseline Lipiodol uptake and higher washout rate were significant predictors of early tumor recurrence (P = 0.001 and P < 0.0001, respectively). On ROC analysis, a threshold Lipiodol uptake of 270.2 HU was significantly associated with tumor response (95% sensitivity, 93% specificity). Baseline Lipiodol uptake and washout rate on follow-up were independent predictors of early tumor recurrence. A threshold value of baseline Lipiodol uptake > 270.2 HU was highly sensitive and specific for tumor response. These findings may prove useful for determining subsequent treatment strategies after Lipiodol TACE.
Physically Unclonable Cryptographic Primitives by Chemical Vapor Deposition of Layered MoS2.
Alharbi, Abdullah; Armstrong, Darren; Alharbi, Somayah; Shahrjerdi, Davood
2017-12-26
Physically unclonable cryptographic primitives are promising for securing the rapidly growing number of electronic devices. Here, we introduce physically unclonable primitives from layered molybdenum disulfide (MoS2) by leveraging the natural randomness of their island growth during chemical vapor deposition (CVD). We synthesize a MoS2 monolayer film covered with speckles of multilayer islands, where the growth process is engineered for an optimal speckle density. Using the Clark-Evans test, we confirm that the distribution of islands on the film exhibits complete spatial randomness, hence indicating the growth of multilayer speckles is a spatial Poisson process. Such a property is highly desirable for constructing unpredictable cryptographic primitives. The security primitive is an array of 2048 pixels fabricated from this film. The complex structure of the pixels makes the physical duplication of the array impossible (i.e., physically unclonable). A unique optical response is generated by applying an optical stimulus to the structure. The basis for this unique response is the dependence of the photoemission on the number of MoS2 layers, which by design is random throughout the film. Using a threshold value for the photoemission, we convert the optical response into binary cryptographic keys. We show that the proper selection of this threshold is crucial for maximizing combination randomness and that the optimal value of the threshold is linked directly to the growth process. This study reveals an opportunity for generating robust and versatile security primitives from layered transition metal dichalcogenides.
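The key-generation step reduces to thresholding a per-pixel optical response into bits. The sketch below uses synthetic pixel values and the median as an illustrative threshold choice (which balances zeros and ones); the paper links the actual optimal threshold to the growth process rather than to this simple rule.

```python
# Minimal sketch of converting a per-pixel optical response into a binary key with
# a threshold. Pixel values and the threshold rule are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(5)
photoemission = rng.normal(1.0, 0.35, size=2048)   # per-pixel photoluminescence (a.u.)
threshold = np.median(photoemission)               # illustrative choice: balance 0s and 1s

key_bits = (photoemission > threshold).astype(np.uint8)
print(key_bits[:32], f"fraction of ones = {key_bits.mean():.2f}")
```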
NASA Astrophysics Data System (ADS)
Liang, Juhua; Tang, Sanyi; Cheke, Robert A.
2016-07-01
Pest resistance to pesticides is usually managed by switching between different types of pesticides. The optimal switching time, which depends on the dynamics of the pest population and on the evolution of the pesticide resistance, is critical. Here we address how the dynamic complexity of the pest population, the development of resistance and the spraying frequency of pulsed chemical control affect optimal switching strategies given different control aims. To do this, we developed novel discrete pest population growth models with both impulsive chemical control and the evolution of pesticide resistance. Strong and weak threshold conditions which guarantee the extinction of the pest population, based on the threshold values of the analytical formula for the optimal switching time, were derived. Further, we addressed switching strategies in the light of chosen economic injury levels. Moreover, the effects of the complex dynamical behaviour of the pest population on the pesticide switching times were also studied. The pesticide application period, the evolution of pesticide resistance and the dynamic complexity of the pest population may result in complex outbreak patterns, with consequent effects on the pesticide switching strategies.
NASA Astrophysics Data System (ADS)
Takabe, Satoshi; Hukushima, Koji
2016-05-01
Typical behavior of the linear programming (LP) problem is studied as a relaxation of the minimum vertex cover (min-VC), a type of integer programming (IP) problem. A lattice-gas model on the Erdös-Rényi random graphs of α-uniform hyperedges is proposed to express both the LP and IP problems of the min-VC in the common statistical mechanical model with a one-parameter family. Statistical mechanical analyses reveal for α=2 that the LP optimal solution is typically equal to that given by the IP below the critical average degree c=e in the thermodynamic limit. The critical threshold for good accuracy of the relaxation extends the mathematical result c=1 and coincides with the replica symmetry-breaking threshold of the IP. The LP relaxation for the minimum hitting sets with α≥3, minimum vertex covers on α-uniform random graphs, is also studied. Analytic and numerical results strongly suggest that the LP relaxation fails to estimate optimal values above the critical average degree c=e/(α-1) where the replica symmetry is broken.
Threshold-selecting strategy for best possible ground state detection with genetic algorithms
NASA Astrophysics Data System (ADS)
Lässig, Jörg; Hoffmann, Karl Heinz
2009-04-01
Genetic algorithms are a standard heuristic to find states of low energy in complex state spaces as given by physical systems such as spin glasses but also in combinatorial optimization. The paper considers the problem of selecting individuals in the current population in genetic algorithms for crossover. Many schemes have been considered in the literature as possible crossover selection strategies. We show for a large class of quality measures that the best possible probability distribution for selecting individuals in each generation of the algorithm execution is a rectangular distribution over the individuals sorted by their energy values. This means that uniform probabilities are assigned to a group of individuals with the lowest energies in the population, while probabilities of zero are assigned to individuals whose energy values lie above a fixed cutoff, corresponding to a certain rank in the vector of states sorted by energy in the current population. The considered strategy is dubbed threshold selecting. The proof applies basic arguments of Markov chains and linear optimization and makes only a few assumptions on the underlying principles and hence applies to a large class of algorithms.
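In practice, threshold selecting amounts to drawing crossover parents uniformly from the k lowest-energy individuals and giving zero weight to the rest. The snippet below is a minimal illustration with a placeholder population and energy function, not the paper's benchmark problems.

```python
# Illustrative "threshold selecting": crossover parents are drawn uniformly at
# random from the k lowest-energy individuals; individuals above that rank cutoff
# get selection probability zero. Population and energy function are placeholders.
import numpy as np

rng = np.random.default_rng(6)
population = rng.integers(0, 2, size=(50, 30))     # 50 binary individuals
energy = population.sum(axis=1).astype(float)      # placeholder energy to minimize

def threshold_select(population, energy, k, n_pairs, rng):
    order = np.argsort(energy)                     # sort individuals by energy
    elite = order[:k]                              # rank cutoff: keep the k best only
    idx = rng.choice(elite, size=(n_pairs, 2))     # uniform probability within the elite
    return population[idx]

parents = threshold_select(population, energy, k=15, n_pairs=10, rng=rng)
print(parents.shape)   # (10, 2, 30): ten parent pairs ready for crossover
```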
Sel, Davorka; Lebar, Alenka Macek; Miklavcic, Damijan
2007-05-01
In electrochemotherapy (ECT), electropermeabilization parameters (pulse amplitude, electrode setup) need to be customized in order to expose the whole tumor to electric field intensities above the permeabilizing threshold and thus achieve effective ECT. In this paper, we present a model-based optimization approach toward the determination of optimal electropermeabilization parameters for effective ECT. The optimization is carried out by minimizing the difference between the permeabilization threshold and electric field intensities computed by a finite element model in selected points of the tumor. We examined the feasibility of model-based optimization of electropermeabilization parameters on a model geometry generated from computer tomography images, representing brain tissue with tumor. The continuous parameter subject to optimization was the pulse amplitude. The distance between electrode pairs was optimized as a discrete parameter. Optimization also considered the pulse generator constraints on voltage and current. During optimization, both constraints were reached, preventing exposure of the entire tumor volume to electric field intensities above the permeabilizing threshold. However, despite the fact that with the particular needle array holder and pulse generator the entire volume of the tumor was not permeabilized, the maximal extent of permeabilization for the particular case (electrodes, tissue) was determined with the proposed approach. The model-based optimization approach could also be used for electro-gene transfer, where electric field intensities should be distributed between the permeabilizing threshold and the irreversible threshold, the latter causing tissue necrosis. This can be obtained by adding constraints on the maximum electric field intensity in the optimization procedure.
Learning to wait: A laboratory investigation
Oprea, R.; Friedman, D.; Anderson, S.T.
2009-01-01
Human subjects decide when to sink a fixed cost C to seize an irreversible investment opportunity whose value V is governed by Brownian motion. The optimal policy is to invest when V first crosses a threshold V* = (1 + w*) C, where the wait option premium w* depends on drift, volatility, and expiration hazard parameters. Subjects in the Low w* treatment on average invested at values quite close to optimum. Subjects in the two Medium and the High w* treatments invested at values below optimum, but with the predicted ordering, and values approached the optimum by the last block of 20 periods. © 2009 The Review of Economic Studies Limited.
OPTIMIZING THE PRECISION OF TOXICITY THRESHOLD ESTIMATION USING A TWO-STAGE EXPERIMENTAL DESIGN
An important consideration for risk assessment is the existence of a threshold, i.e., the highest toxicant dose where the response is not distinguishable from background. We have developed methodology for finding an experimental design that optimizes the precision of threshold mo...
Brown, Joshua B; Gestring, Mark L; Leeper, Christine M; Sperry, Jason L; Peitzman, Andrew B; Billiar, Timothy R; Gaines, Barbara A
2017-06-01
The Injury Severity Score (ISS) is the most commonly used injury scoring system in trauma research and benchmarking. An ISS greater than 15 conventionally defines severe injury; however, no studies evaluate whether ISS performs similarly between adults and children. Our objective was to evaluate ISS and Abbreviated Injury Scale (AIS) to predict mortality and define optimal thresholds of severe injury in pediatric trauma. Patients from the Pennsylvania trauma registry 2000-2013 were included. Children were defined as younger than 16 years. Logistic regression predicted mortality from ISS for children and adults. The optimal ISS cutoff for mortality that maximized diagnostic characteristics was determined in children. Regression also evaluated the association between mortality and maximum AIS in each body region, controlling for age, mechanism, and nonaccidental trauma. Analysis was performed in single and multisystem injuries. Sensitivity analyses with alternative outcomes were performed. Included were 352,127 adults and 50,579 children. Children had similar predicted mortality at ISS of 25 as adults at ISS of 15 (5%). The optimal ISS cutoff in children was ISS greater than 25 and had a positive predictive value of 19% and negative predictive value of 99% compared to a positive predictive value of 7% and negative predictive value of 99% for ISS greater than 15 to predict mortality. In single-system-injured children, mortality was associated with head (odds ratio, 4.80; 95% confidence interval, 2.61-8.84; p < 0.01) and chest AIS (odds ratio, 3.55; 95% confidence interval, 1.81-6.97; p < 0.01), but not abdomen, face, neck, spine, or extremity AIS (p > 0.05). For multisystem injury, all body region AIS scores were associated with mortality except extremities. Sensitivity analysis demonstrated ISS greater than 23 to predict need for full trauma activation, and ISS greater than 26 to predict impaired functional independence were optimal thresholds. An ISS greater than 25 may be a more appropriate definition of severe injury in children. Pattern of injury is important, as only head and chest injury drive mortality in single-system-injured children. These findings should be considered in benchmarking and performance improvement efforts. Epidemiologic study, level III.
Image denoising in mixed Poisson-Gaussian noise.
Luisier, Florian; Blu, Thierry; Unser, Michael
2011-03-01
We propose a general methodology (PURE-LET) to design and optimize a wide class of transform-domain thresholding algorithms for denoising images corrupted by mixed Poisson-Gaussian noise. We express the denoising process as a linear expansion of thresholds (LET) that we optimize by relying on a purely data-adaptive unbiased estimate of the mean-squared error (MSE), derived in a non-Bayesian framework (PURE: Poisson-Gaussian unbiased risk estimate). We provide a practical approximation of this theoretical MSE estimate for the tractable optimization of arbitrary transform-domain thresholding. We then propose a pointwise estimator for undecimated filterbank transforms, which consists of subband-adaptive thresholding functions with signal-dependent thresholds that are globally optimized in the image domain. We finally demonstrate the potential of the proposed approach through extensive comparisons with state-of-the-art techniques that are specifically tailored to the estimation of Poisson intensities. We also present denoising results obtained on real images of low-count fluorescence microscopy.
Response threshold variance as a basis of collective rationality.
Yamamoto, Tatsuhiro; Hasegawa, Eisuke
2017-04-01
Determining the optimal choice among multiple options is necessary in various situations, and the collective rationality of groups has recently become a major topic of interest. Social insects are thought to make such optimal choices by collecting individuals' responses relating to an option's value (=a quality-graded response). However, this behaviour cannot explain the collective rationality of brains because neurons can make only 'yes/no' responses on the basis of the response threshold. Here, we elucidate the basic mechanism underlying the collective rationality of such simple units and show that an ant species uses this mechanism. A larger number of units respond 'yes' to the best option available to a collective decision-maker using only the yes/no mechanism; thus, the best option is always selected by majority decision. Colonies of the ant Myrmica kotokui preferred the better option in a binary choice experiment. The preference of a colony was demonstrated by the workers, which exhibited variable thresholds between two options' qualities. Our results demonstrate how a collective decision-maker comprising simple yes/no judgement units achieves collective rationality without using quality-graded responses. This mechanism has broad applicability to collective decision-making in brain neurons, swarm robotics and human societies.
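The yes/no mechanism described above can be simulated directly: each unit holds its own response threshold and answers "yes" to an option only if that option's quality exceeds its threshold, so the better option collects more votes whenever thresholds are spread across the quality range. The qualities and threshold distribution below are hypothetical.

```python
# Sketch of collective choice from yes/no units with variable response thresholds.
import numpy as np

rng = np.random.default_rng(7)
thresholds = rng.uniform(0.0, 1.0, size=1000)   # one threshold per unit (neuron/worker)

def yes_votes(quality, thresholds):
    return int(np.sum(quality > thresholds))    # units saying "yes" to this option

quality_a, quality_b = 0.62, 0.48               # hypothetical option qualities
votes_a = yes_votes(quality_a, thresholds)
votes_b = yes_votes(quality_b, thresholds)
print(f"option A: {votes_a} yes, option B: {votes_b} yes ->",
      "A" if votes_a > votes_b else "B", "wins the majority decision")
```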
A new method for automated discontinuity trace mapping on rock mass 3D surface model
NASA Astrophysics Data System (ADS)
Li, Xiaojun; Chen, Jianqin; Zhu, Hehua
2016-04-01
This paper presents an automated discontinuity trace mapping method on a 3D surface model of rock mass. Feature points of discontinuity traces are first detected using the Normal Tensor Voting Theory, which is robust to noisy point cloud data. Discontinuity traces are then extracted from feature points in four steps: (1) trace feature point grouping, (2) trace segment growth, (3) trace segment connection, and (4) redundant trace segment removal. A sensitivity analysis is conducted to identify optimal values for the parameters used in the proposed method. The optimal triangular mesh element size is between 5 cm and 6 cm; the angle threshold in the trace segment growth step is between 70° and 90°; the angle threshold in the trace segment connection step is between 50° and 70°, and the distance threshold should be at least 15 times the mean triangular mesh element size. The method is applied to the excavation face trace mapping of a drill-and-blast tunnel. The results show that the proposed discontinuity trace mapping method is fast and effective and could be used as a supplement to traditional direct measurement of discontinuity traces.
McKenzie, Elizabeth M.; Balter, Peter A.; Stingo, Francesco C.; Jones, Jimmy; Followill, David S.; Kry, Stephen F.
2014-01-01
Purpose: The authors investigated the performance of several patient-specific intensity-modulated radiation therapy (IMRT) quality assurance (QA) dosimeters in terms of their ability to correctly identify dosimetrically acceptable and unacceptable IMRT patient plans, as determined by an in-house-designed multiple ion chamber phantom used as the gold standard. A further goal was to examine optimal threshold criteria that were consistent and based on the same criteria among the various dosimeters. Methods: The authors used receiver operating characteristic (ROC) curves to determine the sensitivity and specificity of (1) a 2D diode array undergoing anterior irradiation with field-by-field evaluation, (2) a 2D diode array undergoing anterior irradiation with composite evaluation, (3) a 2D diode array using planned irradiation angles with composite evaluation, (4) a helical diode array, (5) radiographic film, and (6) an ion chamber. This was done with a variety of evaluation criteria for a set of 15 dosimetrically unacceptable and 9 acceptable clinical IMRT patient plans, where acceptability was defined on the basis of multiple ion chamber measurements using independent ion chambers and a phantom. The area under the curve (AUC) on the ROC curves was used to compare dosimeter performance across all thresholds. Optimal threshold values were obtained from the ROC curves while incorporating considerations for cost and prevalence of unacceptable plans. Results: Using common clinical acceptance thresholds, most devices performed very poorly in terms of identifying unacceptable plans. Grouping the detector performance based on AUC showed two significantly different groups. The ion chamber, radiographic film, helical diode array, and anterior-delivered composite 2D diode array were in the better-performing group, whereas the anterior-delivered field-by-field and planned gantry angle delivery using the 2D diode array performed less well. Additionally, based on the AUCs, there was no significant difference in the performance of any device between gamma criteria of 2%/2 mm, 3%/3 mm, and 5%/3 mm. Finally, optimal cutoffs (e.g., percent of pixels passing gamma) were determined for each device and while clinical practice commonly uses a threshold of 90% of pixels passing for most cases, these results showed variability in the optimal cutoff among devices. Conclusions: IMRT QA devices have differences in their ability to accurately detect dosimetrically acceptable and unacceptable plans. Field-by-field analysis with a MapCheck device and use of the MapCheck with a MapPhan phantom while delivering at planned rotational gantry angles resulted in a significantly poorer ability to accurately sort acceptable and unacceptable plans compared with the other techniques examined. Patient-specific IMRT QA techniques in general should be thoroughly evaluated for their ability to correctly differentiate acceptable and unacceptable plans. Additionally, optimal agreement thresholds should be identified and used as common clinical thresholds typically worked very poorly to identify unacceptable plans. PMID:25471949
Wong, Carlos K H; Lang, Brian H H; Guo, Vivian Y W; Lam, Cindy L K
2016-12-01
The aim of this paper was to critically review the literature on the cost effectiveness of cancer screening interventions, and examine the incremental cost-effectiveness ratios (ICERs) that may influence government recommendations on cancer screening strategies and funding for mass implementation in the Hong Kong healthcare system. We conducted a literature review of cost-effectiveness studies in the Hong Kong population related to cancer screening published up to 2015, through a hand search and database search of PubMed, Web of Science, Embase, and OVID Medline. Binary data on the government's decisions were obtained from the Cancer Expert Working Group, Department of Health. Mixed-effect logistic regression analysis was used to examine the impact of ICERs on decision making. Using Youden's index, an optimal ICER threshold value for positive decisions was examined by area under receiver operating characteristic curve (AUC). Eight studies reporting 30 cost-effectiveness pairwise comparisons of population-based cancer screening were identified. Most studies reported an ICER for a cancer screening strategy versus a comparator with outcomes in terms of cost per life-years (55.6 %), or cost per quality-adjusted life-years (55.6 %). Among comparisons with a mean ICER of US$102,931 (range 800-715,137), the increase in ICER value by 1000 was associated with decreased odds (odds ratio 0.990, 0.981-0.999; p = 0.033) of a positive recommendation. An optimal ICER value of US$61,600 per effectiveness unit yielded a high sensitivity of 90 % and specificity of 85 % for a positive recommendation. A lower ICER threshold value of below US$8044 per effectiveness unit was detected for a positive funding decision. Linking published evidence to Government recommendations and practice on cancer screening, ICERs influence decisions on the adoption of health technologies in Hong Kong. The potential ICER threshold for recommendation in Hong Kong may be higher than those of developed countries.
High efficiency low threshold current 1.3 μm InAs quantum dot lasers on on-axis (001) GaP/Si
NASA Astrophysics Data System (ADS)
Jung, Daehwan; Norman, Justin; Kennedy, M. J.; Shang, Chen; Shin, Bongki; Wan, Yating; Gossard, Arthur C.; Bowers, John E.
2017-09-01
We demonstrate highly efficient, low threshold InAs quantum dot lasers epitaxially grown on on-axis (001) GaP/Si substrates using molecular beam epitaxy. Electron channeling contrast imaging measurements show a threading dislocation density of 7.3 × 10⁶ cm⁻² from an optimized GaAs template grown on GaP/Si. The high-quality GaAs templates enable as-cleaved quantum dot lasers to achieve a room-temperature continuous-wave (CW) threshold current of 9.5 mA, a threshold current density as low as 132 A/cm², a single-side output power of 175 mW, and a wall-plug efficiency of 38.4% at room temperature. As-cleaved QD lasers show ground-state CW lasing up to 80 °C. The application of a 95% high-reflectivity coating on one laser facet results in a CW threshold current of 6.7 mA, which is a record-low value for any kind of Fabry-Perot laser grown on Si.
On plant detection of intact tomato fruits using image analysis and machine learning methods.
Yamamoto, Kyosuke; Guo, Wei; Yoshioka, Yosuke; Ninomiya, Seishi
2014-07-09
Fully automated yield estimation of intact fruits prior to harvesting provides various benefits to farmers. Until now, several studies have been conducted to estimate fruit yield using image-processing technologies. However, most of these techniques require thresholds for features such as color, shape and size. In addition, their performance strongly depends on the thresholds used, although optimal thresholds tend to vary with images. Furthermore, most of these techniques have attempted to detect only mature and immature fruits, although the number of young fruits is more important for the prediction of long-term fluctuations in yield. In this study, we aimed to develop a method to accurately detect individual intact tomato fruits including mature, immature and young fruits on a plant using a conventional RGB digital camera in conjunction with machine learning approaches. The developed method did not require an adjustment of threshold values for fruit detection from each image because image segmentation was conducted based on classification models generated in accordance with the color, shape, texture and size of the images. The results of fruit detection in the test images showed that the developed method achieved a recall of 0.80, while the precision was 0.88. The recall values of mature, immature and young fruits were 1.00, 0.80 and 0.78, respectively.
Dual-threshold segmentation using Arimoto entropy based on chaotic bee colony optimization
NASA Astrophysics Data System (ADS)
Li, Li
2018-03-01
In order to extract targets from complex backgrounds more quickly and accurately, and to further improve defect detection, a dual-threshold segmentation method using Arimoto entropy based on chaotic bee colony optimization is proposed. Firstly, the method of single-threshold selection based on Arimoto entropy was extended to dual-threshold selection in order to separate the target from the background more accurately. Then, the intermediate variables in the Arimoto entropy dual-threshold selection formulae were calculated recursively to eliminate redundant computation and reduce the amount of calculation. Finally, the local search phase of the artificial bee colony algorithm was improved with a chaotic sequence based on the tent map. A fast search for the two optimal thresholds was achieved using the improved bee colony optimization algorithm, accelerating the search considerably. Extensive experimental results show that, compared with existing segmentation methods such as multi-threshold segmentation using maximum Shannon entropy, two-dimensional Shannon entropy segmentation, two-dimensional Tsallis gray entropy segmentation and multi-threshold segmentation using reciprocal gray entropy, the proposed method can segment targets more quickly and accurately, with a superior segmentation effect. It proves to be a fast and effective method for image segmentation.
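For contrast with the accelerated search described above, the brute-force baseline it replaces can be sketched as an exhaustive scan over all threshold pairs. The criterion below is a simple Otsu-style between-class variance, not Arimoto entropy, and the histogram is synthetic; the sketch only illustrates the size of the search space that swarm optimization avoids.

```python
# Brute-force dual-threshold baseline: score every pair (t1, t2) on a grey-level
# histogram with an Otsu-style between-class criterion (not Arimoto entropy).
import numpy as np

def dual_threshold_exhaustive(hist):
    p = hist / hist.sum()
    levels = np.arange(len(p))
    best, best_pair = -np.inf, (0, 0)
    for t1 in range(1, len(p) - 1):
        for t2 in range(t1 + 1, len(p)):
            score = 0.0
            for lo, hi in ((0, t1), (t1, t2), (t2, len(p))):
                w = p[lo:hi].sum()
                if w > 0:
                    mu = (levels[lo:hi] * p[lo:hi]).sum() / w
                    score += w * mu ** 2          # between-class variance term
            if score > best:
                best, best_pair = score, (t1, t2)
    return best_pair

rng = np.random.default_rng(8)
pixels = np.concatenate([rng.normal(60, 10, 4000), rng.normal(130, 12, 3000),
                         rng.normal(200, 8, 3000)]).clip(0, 255).astype(int)
hist = np.bincount(pixels, minlength=256)
print(dual_threshold_exhaustive(hist))   # thresholds land between the three modes
```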
On the Design of a Fuzzy Logic-Based Control System for Freeze-Drying Processes.
Fissore, Davide
2016-12-01
This article is focused on the design of a fuzzy logic-based control system to optimize a drug freeze-drying process. The goal of the system is to keep product temperature as close as possible to the threshold value of the formulation being processed, without exceeding it, in such a way that product quality is not jeopardized and the sublimation flux is maximized. The method involves the measurement of product temperature and a set of rules obtained through process simulation, with the goal of obtaining a unique set of rules for products with very different characteristics. Input variables are the difference between the temperature of the product and the threshold value, the difference between the temperature of the heating fluid and that of the product, and the rate of change of product temperature. The output variables are the variation of the temperature of the heating fluid and the pressure in the drying chamber. The effect of the starting values of the input variables and of the control interval has been investigated, resulting in the optimal configuration of the control system. Experimental investigations in a pilot-scale freeze-dryer were carried out to validate the proposed system. Copyright © 2016 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
A model for HIV/AIDS pandemic with optimal control
NASA Astrophysics Data System (ADS)
Sule, Amiru; Abdullah, Farah Aini
2015-05-01
Human immunodeficiency virus and acquired immune deficiency syndrome (HIV/AIDS) is pandemic; it has affected nearly 60 million people since the detection of the disease in 1981. In this paper, a basic deterministic HIV/AIDS model with a mass-action incidence function is developed. Stability analysis is carried out, and the disease-free equilibrium of the basic model is found to be locally asymptotically stable whenever the threshold parameter (R0) is less than one, and unstable otherwise. The model is extended by introducing two optimal control strategies, namely CD4 counts and treatment for the infective, using optimal control theory. Numerical simulations were carried out to illustrate the analytic results.
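The threshold role of R0 can be illustrated with a minimal sketch of a deterministic susceptible-infective model with mass-action incidence. The parameter values below are illustrative, not those of the paper, and the model is a generic simplification rather than the authors' full HIV/AIDS system.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters: Lambda = recruitment, beta = transmission rate,
# mu = natural death rate, d = disease-induced death rate.
Lambda, mu, d = 100.0, 0.02, 0.1

def model(t, y, beta):
    S, I = y
    dS = Lambda - beta * S * I - mu * S
    dI = beta * S * I - (mu + d) * I
    return [dS, dI]

S0 = Lambda / mu                      # susceptible level at the disease-free equilibrium
for beta in (1e-6, 5e-5):
    R0 = beta * S0 / (mu + d)         # basic reproduction number for this simple model
    sol = solve_ivp(model, (0, 500), [S0, 10.0], args=(beta,))
    print(f"R0 = {R0:.2f}, infectives at t = 500: {sol.y[1, -1]:.2f}")
```

With R0 below one the infective class decays to zero; above one the infection persists, which is the qualitative threshold behaviour described in the abstract.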
Dong, Liang; Zheng, Lei; Yang, Suwen; Yan, Zhenguang; Jin, Weidong; Yan, Yuhong
2017-05-01
Hexabromocyclododecane (HBCD) is a brominated flame retardant used throughout the world. It has been detected in various environmental media and has been shown to be toxic to aquatic life. The toxic effects of HBCD on aquatic organisms in Chinese freshwater ecosystems are discussed here. Experiments were conducted with nine types of acute toxicity testing and three types of chronic toxicity testing. After comparing a range of species sensitivity distribution models, the optimal Burr III model was used to derive the safety thresholds for HBCD. The acute safety threshold and the chronic safety threshold of HBCD for Chinese freshwater organisms were found to be 2.32 mg/L and 0.128 mg/L, respectively. Both values were verified by the methods of the Netherlands and the United States. HBCD was found to be less toxic than other widely used brominated flame retardants. The present results provide valuable information for revision of the water quality standard for HBCD in China. Copyright © 2017 Elsevier Inc. All rights reserved.
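The species sensitivity distribution (SSD) approach behind such safety thresholds can be sketched as follows: fit a distribution to per-species toxicity endpoints and read off a low percentile (commonly the 5th, HC5) as the protective threshold. The endpoint values below are hypothetical, and a lognormal distribution is used as a simple stand-in for the Burr III model selected in the study.

```python
import numpy as np
from scipy import stats

# Hypothetical acute toxicity endpoints (mg/L) for several freshwater species.
lc50 = np.array([3.1, 5.4, 8.2, 12.0, 15.5, 22.0, 30.0, 41.0, 55.0])

# Fit a lognormal SSD (stand-in for the Burr III model of the paper).
shape, loc, scale = stats.lognorm.fit(lc50, floc=0)

# Hazardous concentration for 5% of species (HC5), often used as a safety threshold.
hc5 = stats.lognorm.ppf(0.05, shape, loc=loc, scale=scale)
print(f"HC5 = {hc5:.3f} mg/L")
```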
Clark, D Angus; Bowles, Ryan P
2018-04-23
In exploratory item factor analysis (IFA), researchers may use model fit statistics and commonly invoked fit thresholds to help determine the dimensionality of an assessment. However, these indices and thresholds may mislead as they were developed in a confirmatory framework for models with continuous, not categorical, indicators. The present study used Monte Carlo simulation methods to investigate the ability of popular model fit statistics (chi-square, root mean square error of approximation, the comparative fit index, and the Tucker-Lewis index) and their standard cutoff values to detect the optimal number of latent dimensions underlying sets of dichotomous items. Models were fit to data generated from three-factor population structures that varied in factor loading magnitude, factor intercorrelation magnitude, number of indicators, and whether cross loadings or minor factors were included. The effectiveness of the thresholds varied across fit statistics, and was conditional on many features of the underlying model. Together, results suggest that conventional fit thresholds offer questionable utility in the context of IFA.
Li, Mengshan; Zhang, Huaijing; Chen, Bingsheng; Wu, Yan; Guan, Lixin
2018-03-05
The pKa value of drugs is an important parameter in drug design and pharmacology. In this paper, an improved particle swarm optimization (PSO) algorithm was proposed based on population entropy diversity. In the improved algorithm, when the population entropy was higher than the set maximum threshold, the convergence strategy was adopted; when the population entropy was lower than the set minimum threshold, the divergence strategy was adopted; and when the population entropy was between the maximum and minimum thresholds, the self-adaptive adjustment strategy was maintained. The improved PSO algorithm was applied to the training of a radial basis function artificial neural network (RBF ANN) model and the selection of molecular descriptors. A quantitative structure-activity relationship model based on an RBF ANN trained by the improved PSO algorithm was proposed to predict the pKa values of 74 neutral and basic drugs and was then validated on another database containing 20 molecules. The validation results showed that the model had good prediction performance. The absolute average relative error, root mean square error, and squared correlation coefficient were 0.3105, 0.0411, and 0.9685, respectively. The model can be used as a reference for exploring other quantitative structure-activity relationships.
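A minimal sketch of the entropy-based switching idea is given below: the diversity of the swarm is measured by a normalized Shannon entropy over the particles' positions, and the strategy for the next iteration is chosen by comparing it with two thresholds. The exact entropy definition, thresholds, and strategies of the paper are not specified in the abstract, so everything here is illustrative.

```python
import numpy as np

def population_entropy(positions, bins=10):
    """Shannon entropy of the particle distribution, averaged over dimensions and
    normalized to [0, 1] (1 means the particles are spread uniformly over the bins)."""
    n, dim = positions.shape
    ent = 0.0
    for d in range(dim):
        hist, _ = np.histogram(positions[:, d], bins=bins)
        p = hist[hist > 0] / n
        ent += -np.sum(p * np.log(p)) / np.log(bins)
    return ent / dim

def choose_strategy(entropy, h_min=0.3, h_max=0.7):
    """Per-iteration strategy switch; threshold values are illustrative."""
    if entropy > h_max:
        return "convergence"      # tighten the swarm, exploit the best region
    if entropy < h_min:
        return "divergence"       # re-energize the swarm, explore
    return "self-adaptive"        # keep adapting parameters gradually

rng = np.random.default_rng(1)
swarm = rng.uniform(-5, 5, size=(30, 4))
h = population_entropy(swarm)
print(h, choose_strategy(h))
```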
An adaptive design for updating the threshold value of a continuous biomarker.
Spencer, Amy V; Harbron, Chris; Mander, Adrian; Wason, James; Peers, Ian
2016-11-30
Potential predictive biomarkers are often measured on a continuous scale, but in practice, a threshold value to divide the patient population into biomarker 'positive' and 'negative' is desirable. Early phase clinical trials are increasingly using biomarkers for patient selection, but at this stage, it is likely that little will be known about the relationship between the biomarker and the treatment outcome. We describe a single-arm trial design with adaptive enrichment, which can increase power to demonstrate efficacy within a patient subpopulation, the parameters of which are also estimated. Our design enables us to learn about the biomarker and optimally adjust the threshold during the study, using a combination of generalised linear modelling and Bayesian prediction. At the final analysis, a binomial exact test is carried out, allowing the hypothesis that 'no population subset exists in which the novel treatment has a desirable response rate' to be tested. Through extensive simulations, we are able to show increased power over fixed threshold methods in many situations without increasing the type-I error rate. We also show that estimates of the threshold, which defines the population subset, are unbiased and often more precise than those from fixed threshold studies. We provide an example of the method applied (retrospectively) to publicly available data from a study of the use of tamoxifen after mastectomy by the German Breast Study Group, where progesterone receptor is the biomarker of interest. © 2016 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
Convergence of decision rules for value-based pricing of new innovative drugs.
Gandjour, Afschin
2015-04-01
Given the high costs of innovative new drugs, most European countries have introduced policies for price control, in particular value-based pricing (VBP) and international reference pricing. The purpose of this study is to describe how profit-maximizing manufacturers would optimally adjust their launch sequence to these policies and how VBP countries may best respond. To decide about the launching sequence, a manufacturer must consider a tradeoff between price and sales volume in any given country as well as the effect of price in a VBP country on the price in international reference pricing countries. Based on the manufacturer's rationale, it is best for VBP countries in Europe to implicitly collude in the long term and set cost-effectiveness thresholds at the level of the lowest acceptable VBP country. This way, international reference pricing countries would also converge towards the lowest acceptable threshold in Europe.
Fabrication and Characteristics of High Mobility InSnZnO Thin Film Transistors.
Choi, Pyungho; Lee, Junki; Park, Hyoungsun; Baek, Dohyun; Lee, Jaehyeong; Yi, Junsin; Kim, Sangsoo; Choi, Byoungdeog
2016-05-01
In this paper, we describe the fabrication of thin film transistors (TFTs) with amorphous indium-tin-zinc-oxide (ITZO) as the active material. A transparent ITZO channel layer was formed under an optimized oxygen partial pressure (OPP (%) = O2/(Ar + O2)) and a subsequent annealing process. The electrical properties exhibited by this device include field-effect mobility (μ(eff)), sub-threshold swing (SS), and on/off current ratio (I(ON/OFF)) values of 28.97 cm²/V·s, 0.2 V/decade, and 2.64 × 10⁷, respectively. The average transmittance values for each OPP condition in the visible range were greater than 80%. The positive gate bias stress resulted in a positive threshold voltage (V(th)) shift in the transfer curves and degraded the parameters μ(eff) and SS. These phenomena originated from electron trapping from the ITZO channel layer into the oxide/ITZO interface trap sites.
Chorel, Marine; Lanternier, Thomas; Lavastre, Éric; Bonod, Nicolas; Bousquet, Bruno; Néauport, Jérôme
2018-04-30
We report on a numerical optimization of the laser induced damage threshold of multi-dielectric high reflection mirrors in the sub-picosecond regime. We highlight the interplay between the electric field distribution, refractive index and intrinsic laser induced damage threshold of the materials on the overall laser induced damage threshold (LIDT) of the multilayer. We describe an optimization method of the multilayer that minimizes the field enhancement in high refractive index materials while preserving a near perfect reflectivity. This method yields a significant improvement of the damage resistance since a maximum increase of 40% can be achieved on the overall LIDT of the multilayer.
Diao, Wen-wen; Ni, Dao-feng; Li, Feng-rong; Shang, Ying-ying
2011-03-01
Auditory brainstem responses (ABRs) evoked by tone bursts are an important method of hearing assessment in infants referred after hearing screening. The present study compared the thresholds of tone burst ABR recorded with filter settings of 30 - 1500 Hz and 30 - 3000 Hz at each frequency, characterized the ABR thresholds obtained with the two filter settings and the effect on waveform judgement, and thereby aimed to select a more optimal parameter for frequency-specific ABR testing. Thresholds with filter settings of 30 - 1500 Hz and 30 - 3000 Hz were recorded in children aged 2 - 33 months using click and tone burst ABR. A total of 18 patients (8 male/10 female) and 22 ears were included. The thresholds of tone burst ABR with filter settings of 30 - 3000 Hz were higher than those with filter settings of 30 - 1500 Hz. A significant difference was detected at 0.5 kHz and 2.0 kHz (t values 2.238 and 2.217, P < 0.05); no significant difference between the two filter settings was detected at the remaining frequencies. The ABR waveform with filter settings of 30 - 1500 Hz was smoother than that with filter settings of 30 - 3000 Hz at the same stimulus intensity; the response curve of the latter showed jagged, small interfering waves. The filter setting of 30 - 1500 Hz may therefore be the more optimal parameter for frequency-specific ABR and may improve the accuracy of infants' hearing assessment.
Optimizing Retransmission Threshold in Wireless Sensor Networks
Bi, Ran; Li, Yingshu; Tan, Guozhen; Sun, Liang
2016-01-01
The retransmission threshold in wireless sensor networks is critical to the latency of data delivery in the networks. However, existing works on data transmission in sensor networks did not consider the optimization of the retransmission threshold; they simply set the same retransmission threshold for all sensor nodes in advance. That approach does not take link quality and delay requirements into account, which decreases the probability of a packet traversing its delivery path within a given deadline. This paper investigates the problem of finding optimal retransmission thresholds for relay nodes along a delivery path in a sensor network. The objective of optimizing retransmission thresholds is to maximize the summation of the probabilities of the packet being successfully delivered to the next relay node or destination node in time. A dynamic programming-based distributed algorithm for finding optimal retransmission thresholds for relay nodes along a delivery path in the sensor network is proposed. The time complexity is O(nΔ·max_{1≤i≤n}{u_i}), where u_i is the given upper bound of the retransmission threshold of sensor node i in a given delivery path, n is the length of the delivery path, and Δ is the given upper bound of the transmission delay of the delivery path. If Δ is too large for this to be polynomial, a linear programming-based (1 + p_min)-approximation algorithm is proposed to reduce the time complexity. Furthermore, when the ranges of the upper and lower bounds of retransmission thresholds are large enough, a Lagrange multiplier-based distributed O(1)-approximation algorithm with time complexity O(1) is proposed. Experimental results show that the proposed algorithms have better performance. PMID:27171092
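The flavor of the dynamic program can be sketched under a strong simplification: assume each attempt on hop i succeeds independently with probability p_i and consumes one slot of a total budget Δ, and choose per-hop retransmission limits u_i maximizing the product of per-hop on-time success probabilities. This centralized toy version is not the paper's distributed algorithm or exact objective; it only illustrates the threshold-allocation structure.

```python
import numpy as np

def optimal_thresholds(p, delta):
    """dp[i][b] = best log-probability of clearing hops i..n-1 with at most b
    attempts in total, where P(hop i succeeds within u attempts) = 1-(1-p[i])**u."""
    n = len(p)
    NEG = -np.inf
    dp = [[NEG] * (delta + 1) for _ in range(n + 1)]
    choice = [[0] * (delta + 1) for _ in range(n)]
    dp[n] = [0.0] * (delta + 1)                     # no hops left: probability 1
    for i in range(n - 1, -1, -1):
        for b in range(1, delta + 1):
            for u in range(1, b + 1):               # attempts allotted to hop i
                succ = 1.0 - (1.0 - p[i]) ** u
                val = np.log(succ) + dp[i + 1][b - u]
                if val > dp[i][b]:
                    dp[i][b], choice[i][b] = val, u
    u_opt, b = [], delta                            # recover the chosen thresholds
    for i in range(n):
        u_opt.append(choice[i][b])
        b -= choice[i][b]
    return u_opt, np.exp(dp[0][delta])

# three hops with different link qualities, eight attempts allowed in total
print(optimal_thresholds([0.9, 0.6, 0.75], delta=8))
```

As expected, the weakest link (p = 0.6) is allotted the largest retransmission threshold.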
NASA Astrophysics Data System (ADS)
Zhang, Xin; Liu, Zhiwen; Miao, Qiang; Wang, Lei
2018-03-01
A time-varying filter based empirical mode decomposition (TVF-EMD) method was recently proposed to solve the mode-mixing problem of the EMD method. Compared with the classical EMD, TVF-EMD was proven to improve the frequency separation performance and to be robust to noise interference. However, the decomposition parameters (i.e., bandwidth threshold and B-spline order) significantly affect the decomposition results of this method. In the original TVF-EMD method, the parameter values are assigned in advance, which makes it difficult to achieve satisfactory analysis results. To solve this problem, this paper develops an optimized TVF-EMD method based on the grey wolf optimizer (GWO) algorithm for fault diagnosis of rotating machinery. Firstly, a measurement index termed the weighted kurtosis index is constructed from the kurtosis index and the correlation coefficient. Subsequently, the optimal TVF-EMD parameters that match the input signal can be obtained by the GWO algorithm using the maximum weighted kurtosis index as the objective function. Finally, fault features can be extracted by analyzing the sensitive intrinsic mode function (IMF) with the maximum weighted kurtosis index. Simulations and comparisons highlight the performance of the TVF-EMD method for signal decomposition, and meanwhile verify that the bandwidth threshold and B-spline order are critical to the decomposition results. Two case studies on rotating machinery fault diagnosis demonstrate the effectiveness and advantages of the proposed method.
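A rough sketch of the objective and the parameter search is given below. The weighted kurtosis index is taken as kurtosis times the absolute correlation with the raw signal (a plausible reading of the abstract), a simple grid search stands in for the grey wolf optimizer, and `tvf_emd` is a hypothetical decomposition callable (TVF-EMD is not a standard library routine), so only the index computation is demonstrated directly.

```python
import numpy as np
from scipy.stats import kurtosis, pearsonr

def weighted_kurtosis(imf, signal):
    """Kurtosis of an IMF weighted by its absolute correlation with the raw signal."""
    k = kurtosis(imf, fisher=False)
    r, _ = pearsonr(imf, signal)
    return k * abs(r)

def tune_tvf_emd(signal, tvf_emd, bw_thresholds, bspline_orders):
    """Grid search (stand-in for GWO) over the bandwidth threshold and B-spline
    order; `tvf_emd` is assumed to return a list of IMFs and is hypothetical."""
    best = (None, -np.inf, None)
    for xi in bw_thresholds:
        for n in bspline_orders:
            imfs = tvf_emd(signal, bw_threshold=xi, bspline_order=n)
            score = max(weighted_kurtosis(imf, signal) for imf in imfs)
            if score > best[1]:
                best = ((xi, n), score, imfs)
    return best   # (optimal parameters, objective value, decomposition)

# toy demo of the index itself on an impulsive signal
t = np.linspace(0, 1, 2000)
sig = np.sin(2 * np.pi * 50 * t) + (np.random.default_rng(0).random(2000) < 0.005) * 5.0
print(weighted_kurtosis(sig, sig))    # high kurtosis, correlation of 1 with itself
```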
Wang, Ruiping; Jiang, Yonggen; Guo, Xiaoqin; Wu, Yiling; Zhao, Genming
2017-01-01
Objective The Chinese Center for Disease Control and Prevention developed the China Infectious Disease Automated-alert and Response System (CIDARS) in 2008. The CIDARS can detect outbreak signals in a timely manner but generates many false-positive signals, especially for diseases with seasonality. We assessed the influence of seasonality on infectious disease outbreak detection performance. Methods Chickenpox surveillance data in Songjiang District, Shanghai were used. The optimized early alert thresholds for chickenpox were selected according to three algorithm evaluation indexes: sensitivity (Se), false alarm rate (FAR), and time to detection (TTD). Performance of selected proper thresholds was assessed by data external to the study period. Results The optimized early alert threshold for chickenpox during the epidemic season was the percentile P65, which demonstrated an Se of 93.33%, FAR of 0%, and TTD of 0 days. The optimized early alert threshold in the nonepidemic season was P50, demonstrating an Se of 100%, FAR of 18.94%, and TTD was 2.5 days. The performance evaluation demonstrated that the use of an optimized threshold adjusted for seasonality could reduce the FAR and shorten the TTD. Conclusions Selection of optimized early alert thresholds based on local infectious disease seasonality could improve the performance of the CIDARS. PMID:28728470
Optimization of a hardware implementation for pulse coupled neural networks for image applications
NASA Astrophysics Data System (ADS)
Gimeno Sarciada, Jesús; Lamela Rivera, Horacio; Warde, Cardinal
2010-04-01
Pulse-coupled neural networks (PCNNs) are a very useful tool for image processing and visual applications, since they are invariant to image changes such as rotation, scale, or certain distortions. Among other characteristics, the PCNN changes a given image input into a temporal representation which can easily be analyzed later for pattern recognition. The structure of a PCNN, though, makes it necessary to determine all of its parameters very carefully in order for it to function optimally, so that the responses to the kinds of inputs it will be subjected to are clearly discriminated, allowing easy and fast post-processing that yields useful results. This tweaking of the system is a taxing process. In this paper we analyze and compare two methods for modeling PCNNs. A purely mathematical model is programmed and a similar circuital model is also designed. Both are then used to determine the optimal values of the several parameters of a PCNN: gain, threshold, and the time constants for feeding, threshold and linking, leading to an optimal design for image recognition. The results are compared for usefulness, accuracy and speed, as well as the performance and time requirements for fast and easy design, thus providing a tool for future ease of management of a PCNN for different tasks.
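For readers unfamiliar with the model, a minimal discrete PCNN iteration is sketched below, exposing the gain (beta), dynamic threshold, and decay constants that the paper tunes. This is a common simplified form (the feeding input is taken to be the stimulus itself), and all constants are illustrative rather than the optimal values found in the paper.

```python
import numpy as np
from scipy.ndimage import convolve

def pcnn(stimulus, steps=20, beta=0.2, vL=1.0, vT=20.0, aL=0.7, aT=0.8):
    """Minimal pulse-coupled neural network.  Each step produces a binary firing
    map; the per-step firing counts form the temporal signature of the image."""
    kernel = np.array([[0.5, 1.0, 0.5],
                       [1.0, 0.0, 1.0],
                       [0.5, 1.0, 0.5]])
    F = stimulus.astype(float)              # feeding input (here simply the image)
    L = np.zeros_like(F)                    # linking input
    theta = np.ones_like(F)                 # dynamic threshold
    Y = np.zeros_like(F)                    # firing map
    signature = []
    for _ in range(steps):
        L = aL * L + vL * convolve(Y, kernel, mode="constant")
        U = F * (1.0 + beta * L)            # internal activity
        Y = (U > theta).astype(float)       # neurons fire when activity exceeds threshold
        theta = aT * theta + vT * Y         # fired neurons raise their own threshold
        signature.append(Y.sum())           # number of firing neurons at this step
    return np.array(signature)

img = np.random.default_rng(0).random((32, 32))
print(pcnn(img))
```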
Pires, J C M; Gonçalves, B; Azevedo, F G; Carneiro, A P; Rego, N; Assembleia, A J B; Lima, J F B; Silva, P A; Alves, C; Martins, F G
2012-09-01
This study proposes three methodologies to define artificial neural network models through genetic algorithms (GAs) to predict the next-day hourly average surface ozone (O(3)) concentrations. GAs were applied to define the activation function in the hidden layer and the number of hidden neurons. Two of the methodologies define threshold models, which assume that the behaviour of the dependent variable (O(3) concentrations) changes when it enters a different regime (two and four regimes were considered in this study). The change from one regime to another depends on a specific value (threshold value) of an explanatory variable (threshold variable), which is also defined by GAs. The predictor variables were the hourly average concentrations of carbon monoxide (CO), nitrogen oxide, nitrogen dioxide (NO(2)), and O(3) (recorded on the previous day at an urban site with traffic influence) and also meteorological data (hourly averages of temperature, solar radiation, relative humidity and wind speed). The study was performed for the period from May to August 2004. Several models were achieved and only the best model of each methodology was analysed. In the threshold models, the variables selected by GAs to define the O(3) regimes were temperature, CO and NO(2) concentrations, due to their importance in O(3) chemistry in an urban atmosphere. In the prediction of O(3) concentrations, the threshold model that considers two regimes was the one that fitted the data most efficiently.
Mendez, Gustavo; Foster, Bryan R; Li, Xin; Shannon, Jackilen; Garzotto, Mark; Amling, Christopher L; Coakley, Fergus V
2018-04-25
To evaluate the length of contact between dominant tumor foci and the prostatic capsule as a sign of extracapsular extension at endorectal multiparametric MR imaging. We retrospectively identified 101 patients over a three-year interval who underwent endorectal multiparametric prostate MR imaging prior to radical prostatectomy for prostate cancer. Two readers identified the presence of dominant tumor focus (largest lesion with PI-RADS version 2 score of 4 or 5), and measured the length of tumor capsular contact and likelihood of extracapsular extension by standard criteria (1-5 Likert scale). Results were analyzed using histopathological review as reference standard. Extracapsular extension was found at histopathological review in 27 patients. Reader 1 (2) identified dominant tumor in 79 (73) patients, with mean tumor capsular contact length of 18.2 (14.0) mm. The area under the receiver operating characteristic curve for identification of extracapsular extension by tumor capsular contact length was 0.76 for reader 1 and 0.77 for reader 2, with optimal discrimination at values of 18 mm and 21 mm, respectively. In the subset of patients without obvious extracapsular extension by standard criteria (Likert scores 1-3), corresponding values were 0.74 and 0.66 with optimal thresholds of 24 and 21 mm. Length of contact between the dominant tumor focus and the capsule is a moderately useful sign of extracapsular extension at endorectal multiparametric prostate MR imaging, including the subset of patients without obvious extracapsular extension by standard criteria, with optimal discrimination at threshold values of 18 to 24 mm. Copyright © 2018 Elsevier Inc. All rights reserved.
A study of the threshold method utilizing raingage data
NASA Technical Reports Server (NTRS)
Short, David A.; Wolff, David B.; Rosenfeld, Daniel; Atlas, David
1993-01-01
The threshold method for estimation of area-average rain rate relies on determination of the fractional area where rain rate exceeds a preset level of intensity. Previous studies have shown that the optimal threshold level depends on the climatological rain-rate distribution (RRD). It has also been noted, however, that the climatological RRD may be composed of an aggregate of distributions, one for each of several distinctly different synoptic conditions, each having its own optimal threshold. In this study, the impact of RRD variations on the threshold method is shown in an analysis of 1-min rain-rate data from a network of tipping-bucket gauges in Darwin, Australia. Data are analyzed for two distinct regimes: the premonsoon environment, having isolated intense thunderstorms, and the active monsoon rains, having organized convective cell clusters that generate large areas of stratiform rain. It is found that a threshold of 10 mm/h results in the same threshold coefficient for both regimes, suggesting an alternative definition of the optimal threshold as that which is least sensitive to distribution variations. The observed behavior of the threshold coefficient is well simulated by assuming lognormal distributions with different scale parameters and the same shape parameter.
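The method itself is compact: the area-average rain rate is approximated as S(τ)·F(τ), where F(τ) is the fraction of gauges (or pixels) exceeding the threshold τ and S(τ) is a coefficient fitted on training scenes. The sketch below fits S(τ) by least squares through the origin on synthetic scenes; the rain statistics are invented purely for illustration.

```python
import numpy as np

def threshold_coefficient(rainmaps, tau):
    """Fit S(tau) so that area-average rain rate ≈ S(tau) * fractional coverage
    above tau, and return it with the per-scene coverages and true averages."""
    coverage = np.array([(r > tau).mean() for r in rainmaps])
    mean_rate = np.array([r.mean() for r in rainmaps])
    s_tau = np.sum(coverage * mean_rate) / np.sum(coverage ** 2)
    return s_tau, coverage, mean_rate

# synthetic "scenes": lognormal rain over a random wet fraction of each scene
rng = np.random.default_rng(7)
maps = []
for _ in range(200):
    r = np.zeros(1000)
    wet = rng.random(1000) < rng.uniform(0.05, 0.5)
    r[wet] = rng.lognormal(mean=1.0, sigma=1.0, size=wet.sum())
    maps.append(r)

s10, cov, avg = threshold_coefficient(maps, tau=10.0)
print(f"S(10 mm/h) = {s10:.2f}, mean absolute error = "
      f"{np.mean(np.abs(s10 * cov - avg)):.3f} mm/h")
```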
Ritchie, Brianne M; Connors, Jean M; Sylvester, Katelyn W
2017-04-01
Previous studies have demonstrated optimized diagnostic accuracy in utilizing higher antiheparin-platelet factor 4 (PF4) enzyme-linked immunosorbent assay (ELISA) optical density (OD) thresholds for diagnosing heparin-induced thrombocytopenia (HIT). We describe the incidence of positive serotonin release assay (SRA) results, as well as performance characteristics, for antiheparin-PF4 ELISA thresholds ≥0.4, ≥0.8, and ≥1.0 OD units in the diagnosis of HIT at our institution. Following institutional review board approval, we conducted a single-center retrospective chart review on adult inpatients with a differential diagnosis of HIT evaluated by both antiheparin-PF4 ELISA and SRA from 2012 to 2014. The major endpoints were to assess incidence of positive SRA results, sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and accuracy at antiheparin-PF4 ELISA values ≥0.4 OD units when compared to values ≥0.8 and ≥1.0 OD units. Clinical characteristics, including demographics, laboratory values, clinical and safety outcomes, length of stay, and mortality, were collected. A total of 140 patients with 140 antiheparin-PF4 ELISA and SRA values were evaluated, of which 23 patients were SRA positive (16.4%) and 117 patients were SRA negative (83.6%). We identified a sensitivity of 91.3% versus 82.6% and 73.9%, specificity of 61.5% versus 87.2% and 91.5%, PPV of 31.8% versus 55.9% and 63.0%, NPV of 97.3% versus 96.2% and 94.7%, and accuracy of 66.4% versus 86.4% and 88.6% at antiheparin-PF4 ELISA thresholds ≥0.4, ≥0.8, and ≥1.0 OD units, respectively. Our study suggests an increased antiheparin-PF4 ELISA threshold of 0.8 or 1.0 OD units enhances specificity, PPV, and accuracy while maintaining NPV with decreased sensitivity.
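The performance characteristics reported above come from standard 2x2 contingency calculations at each candidate OD cutoff against the SRA reference. The sketch below shows those calculations on synthetic OD values (the patient numbers mimic the cohort size, but the OD distributions are invented).

```python
import numpy as np

def diagnostics(od, sra_positive, cutoff):
    """Sensitivity, specificity, PPV, NPV, and accuracy of the ELISA at a cutoff."""
    test_pos = od >= cutoff
    tp = (test_pos & sra_positive).sum()
    fp = (test_pos & ~sra_positive).sum()
    fn = (~test_pos & sra_positive).sum()
    tn = (~test_pos & ~sra_positive).sum()
    return dict(sens=tp / (tp + fn), spec=tn / (tn + fp),
                ppv=tp / (tp + fp), npv=tn / (tn + fn),
                acc=(tp + tn) / len(od))

rng = np.random.default_rng(5)
sra = np.zeros(140, dtype=bool)
sra[:23] = True                                   # 23 SRA-positive patients
od = np.where(sra, rng.normal(1.6, 0.6, 140), rng.normal(0.5, 0.4, 140)).clip(0, 3)
for cutoff in (0.4, 0.8, 1.0):
    print(cutoff, diagnostics(od, sra, cutoff))
```

Raising the cutoff trades sensitivity for specificity and PPV, which is the pattern the study reports.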
Quantitative Ultrasound Assessment of Duchenne Muscular Dystrophy Using Edge Detection Analysis.
Koppaka, Sisir; Shklyar, Irina; Rutkove, Seward B; Darras, Basil T; Anthony, Brian W; Zaidman, Craig M; Wu, Jim S
2016-09-01
The purpose of this study was to investigate the ability of quantitative ultrasound (US) using edge detection analysis to assess patients with Duchenne muscular dystrophy (DMD). After Institutional Review Board approval, US examinations with fixed technical parameters were performed unilaterally in 6 muscles (biceps, deltoid, wrist flexors, quadriceps, medial gastrocnemius, and tibialis anterior) in 19 boys with DMD and 21 age-matched control participants. The muscles of interest were outlined by a tracing tool, and the upper third of the muscle was used for analysis. Edge detection values for each muscle were quantified by the Canny edge detection algorithm and then normalized to the number of edge pixels in the muscle region. The edge detection values were extracted at multiple sensitivity thresholds (0.01-0.99) to determine the optimal threshold for distinguishing DMD from normal. Area under the receiver operating characteristic curve values were generated for each muscle and averaged across the 6 muscles. The average age in the DMD group was 8.8 years (range, 3.0-14.3 years), and the average age in the control group was 8.7 years (range, 3.4-13.5 years). For edge detection, a Canny threshold of 0.05 provided the best discrimination between DMD and normal (area under the curve, 0.96; 95% confidence interval, 0.84-1.00). According to a Mann-Whitney test, edge detection values were significantly different between DMD and controls (P < .0001). Quantitative US imaging using edge detection can distinguish patients with DMD from healthy controls at low Canny thresholds, at which discrimination of small structures is best. Edge detection by itself or in combination with other tests can potentially serve as a useful biomarker of disease progression and effectiveness of therapy in muscle disorders.
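The edge-based feature can be sketched as the fraction of region-of-interest pixels marked as edges by a Canny detector run at a low sensitivity threshold. The study used MATLAB's Canny implementation; the sketch below uses scikit-image's quantile-based hysteresis thresholds as an approximate analogue, with an invented image and ROI.

```python
import numpy as np
from skimage import feature

def edge_fraction(image, roi_mask, threshold=0.05, sigma=1.0):
    """Fraction of ROI pixels marked as edges by the Canny detector.  `threshold`
    is applied as the high hysteresis quantile (an approximation of the MATLAB
    sensitivity threshold used in the study)."""
    edges = feature.canny(image, sigma=sigma,
                          low_threshold=0.4 * threshold,
                          high_threshold=threshold,
                          use_quantiles=True)
    return (edges & roi_mask).sum() / roi_mask.sum()

rng = np.random.default_rng(2)
img = rng.random((128, 128))          # stand-in for a muscle ultrasound image
mask = np.zeros_like(img, dtype=bool)
mask[20:100, 20:100] = True           # stand-in for the traced muscle region
print(edge_fraction(img, mask))
```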
Hekmat, D; Bauer, R; Fricke, J
2003-12-01
An optimized repeated-fed-batch fermentation process for the synthesis of dihydroxyacetone (DHA) from glycerol utilizing Gluconobacter oxydans is presented. Cleaning, sterilization, and inoculation procedures could be reduced significantly compared to the conventional fed-batch process. A stringent requirement was that the product concentration was kept below a critical threshold level at all times in order to avoid irreversible product inhibition of the cells. On the basis of experimentally validated model calculations, a threshold value of about 60 kg·m⁻³ DHA was obtained. The innovative bioreactor system consisted of a stirred tank reactor combined with a packed trickle-bed column. In the packed column, active cells could be retained by in situ immobilization on a hydrophilized Ralu-ring carrier material. Within 17 days, the productivity of the process could be increased by 75% to about 2.8 kg·m⁻³·h⁻¹. However, it was observed that the maximum achievable productivity had not been reached yet.
Engdahl, Bo; Tambs, Kristian; Borchgrevink, Hans M; Hoffman, Howard J
2005-01-01
This study aims to describe the association between otoacoustic emissions (OAEs) and pure-tone hearing thresholds (PTTs) in an unscreened adult population (N = 6415), to determine the efficiency by which TEOAEs and DPOAEs can identify ears with elevated PTTs, and to investigate whether a combination of DPOAE and TEOAE responses improves this performance. Associations were examined by linear regression analysis and ANOVA. Test performance was assessed by receiver operator characteristic (ROC) curves. The relation between OAEs and PTTs appeared curvilinear with a moderate degree of non-linearity. Combining DPOAEs and TEOAEs improved performance. Test performance depended on the cut-off thresholds defining elevated PTTs, with optimal values between 25 and 45 dB HL, depending on frequency and type of OAE measure. The unique constitution of the present large sample, which reflects the general adult population, makes these results applicable to population-based studies and screening programs.
An Algorithm to Automate Yeast Segmentation and Tracking
Doncic, Andreas; Eser, Umut; Atay, Oguzhan; Skotheim, Jan M.
2013-01-01
Our understanding of dynamic cellular processes has been greatly enhanced by rapid advances in quantitative fluorescence microscopy. Imaging single cells has emphasized the prevalence of phenomena that can be difficult to infer from population measurements, such as all-or-none cellular decisions, cell-to-cell variability, and oscillations. Examination of these phenomena requires segmenting and tracking individual cells over long periods of time. However, accurate segmentation and tracking of cells is difficult and is often the rate-limiting step in an experimental pipeline. Here, we present an algorithm that accomplishes fully automated segmentation and tracking of budding yeast cells within growing colonies. The algorithm incorporates prior information of yeast-specific traits, such as immobility and growth rate, to segment an image using a set of threshold values rather than one specific optimized threshold. Results from the entire set of thresholds are then used to perform a robust final segmentation. PMID:23520484
Symptomatic pericardial effusion after chemoradiation therapy in esophageal cancer patients.
Fukada, Junichi; Shigematsu, Naoyuki; Takeuchi, Hiroya; Ohashi, Toshio; Saikawa, Yoshiro; Takaishi, Hiromasa; Hanada, Takashi; Shiraishi, Yutaka; Kitagawa, Yuko; Fukuda, Keiichi
2013-11-01
We investigated clinical and treatment-related factors as predictors of symptomatic pericardial effusion in esophageal cancer patients after concurrent chemoradiation therapy. We reviewed 214 consecutive primary esophageal cancer patients treated with concurrent chemoradiation therapy between 2001 and 2010 in our institute. Pericardial effusion was detected on follow-up computed tomography. Symptomatic effusion was defined as effusion ≥grade 3 according to Common Terminology Criteria for Adverse Events v4.0 criteria. Percent volume irradiated with 5 to 65 Gy (V5-V65) and mean dose to the pericardium were evaluated employing dose-volume histograms. To evaluate dosimetry for patients treated with two-dimensional planning in the earlier period (2001-2005), computed tomography data at diagnosis were transferred to a treatment planning system to reconstruct three-dimensional plans without modification. Optimal dosimetric thresholds for symptomatic pericardial effusion were calculated by receiver operating characteristic curves. Associating clinical and treatment-related risk factors for symptomatic pericardial effusion were detected by univariate and multivariate analyses. The median follow-up was 29 (range, 6-121) months for eligible 167 patients. Symptomatic pericardial effusion was observed in 14 (8.4%) patients. Dosimetric analyses revealed average values of V30 to V45 for the pericardium and mean pericardial doses were significantly higher in patients with symptomatic pericardial effusion than in those with asymptomatic pericardial effusion (P<.05). Pericardial V5 to V55 and mean pericardial doses were significantly higher in patients with symptomatic pericardial effusion than in those without pericardial effusion (P<.001). Mean pericardial doses of 36.5 Gy and V45 of 58% were selected as optimal cutoff values for predicting symptomatic pericardial effusion. Multivariate analysis identified mean pericardial dose as the strongest risk factor for symptomatic pericardial effusion. Dose-volume thresholds for the pericardium facilitate predicting symptomatic pericardial effusion. Mean pericardial dose was selected based not only on the optimal dose-volume threshold but also on the most significant risk factor for symptomatic pericardial effusion. Copyright © 2013 Elsevier Inc. All rights reserved.
Diabetes screening in overweight and obese children and adolescents: choosing the right test.
Ehehalt, Stefan; Wiegand, Susanna; Körner, Antje; Schweizer, Roland; Liesenkötter, Klaus-Peter; Partsch, Carl-Joachim; Blumenstock, Gunnar; Spielau, Ulrike; Denzer, Christian; Ranke, Michael B; Neu, Andreas; Binder, Gerhard; Wabitsch, Martin; Kiess, Wieland; Reinehr, Thomas
2017-01-01
Type 2 diabetes can occur without any symptoms, and health problems associated with the disease are serious. Screening tests allowing an early diagnosis are desirable. However, optimal screening tests for diabetes in obese youth are discussed controversially. We performed an observational multicenter analysis including 4848 (2668 female) overweight and obese children aged 7 to 17 years without previously known diabetes. Using HbA1c and OGTT as diagnostic criteria, 2.4% (n = 115, 55 female) could be classified as having diabetes. Within this group, 68.7% had HbA1c levels ≥48 mmol/mol (≥6.5%). FPG ≥126 mg/dl (≥7.0 mmol/l) and/or 2-h glucose levels ≥200 mg/dl (≥11.1 mmol/l) were found in 46.1%. Out of the 115 cases fulfilling the OGTT and/or HbA1c criteria for diabetes, diabetes was confirmed in 43.5%. For FPG, the ROC analysis revealed an optimal threshold of 98 mg/dl (5.4 mmol/l) (sensitivity 70%, specificity 88%). For HbA1c, the best cut-off value was 42 mmol/mol (6.0%) (sensitivity 94%, specificity 93%). HbA1c seems to be more reliable than OGTT for diabetes screening in overweight and obese children and adolescents. The optimal HbA1c threshold for identifying patients with diabetes was found to be 42 mmol/mol (6.0%). What is Known: • The prevalence of obesity is increasing and health problems related to type 2 DM can be serious. However, an optimal screening test for diabetes in obese youth seems to be controversial in the literature. What is New: • In our study, the ROC analysis revealed for FPG an optimal threshold of 98 mg/dl (5.4 mmol/l, sensitivity 70%, specificity 88%) and for HbA1c a best cut-off value of 42 mmol/mol (6.0%, sensitivity 94%, specificity 93%) to detect diabetes. Thus, in overweight and obese children and adolescents, HbA1c seems to be a more reliable screening tool than OGTT.
Wang, Yuliang; Zhang, Zaicheng; Wang, Huimin; Bi, Shusheng
2015-01-01
Cell image segmentation plays a central role in numerous biology studies and clinical applications. As a result, the development of cell image segmentation algorithms with high robustness and accuracy is attracting more and more attention. In this study, an automated cell image segmentation algorithm is developed to get improved cell image segmentation with respect to cell boundary detection and segmentation of the clustered cells for all cells in the field of view in negative phase contrast images. A new method which combines the thresholding method and edge based active contour method was proposed to optimize cell boundary detection. In order to segment clustered cells, the geographic peaks of cell light intensity were utilized to detect numbers and locations of the clustered cells. In this paper, the working principles of the algorithms are described. The influence of parameters in cell boundary detection and the selection of the threshold value on the final segmentation results are investigated. At last, the proposed algorithm is applied to the negative phase contrast images from different experiments. The performance of the proposed method is evaluated. Results show that the proposed method can achieve optimized cell boundary detection and highly accurate segmentation for clustered cells. PMID:26066315
Cockburn, Neil; Kovacs, Michael
2016-01-01
CT Perfusion (CTP) derived cerebral blood flow (CBF) thresholds have been proposed as the optimal parameter for distinguishing the infarct core prior to reperfusion. Previous threshold-derivation studies have been limited by uncertainties introduced by infarct expansion between the acute phase of stroke and follow-up imaging, or DWI lesion reversibility. In this study a model is proposed for determining infarction CBF thresholds at 3 h ischemia time by comparing contemporaneously acquired CTP derived CBF maps to 18F-FFMZ-PET imaging, with the objective of deriving a CBF threshold for infarction after 3 hours of ischemia. Endothelin-1 (ET-1) was injected into the brain of Duroc-Cross pigs (n = 11) through a burr hole in the skull. CTP images were acquired 10 and 30 minutes post ET-1 injection and then every 30 minutes for 150 minutes. 370 MBq of 18F-FFMZ was injected ~120 minutes post ET-1 injection and PET images were acquired for 25 minutes starting ~155–180 minutes post ET-1 injection. CBF maps from each CTP acquisition were co-registered and converted into a median CBF map. The median CBF map was co-registered to blood volume maps for vessel exclusion, an average CT image for grey/white matter segmentation, and 18F-FFMZ-PET images for infarct delineation. Logistic regression and ROC analysis were performed on infarcted and non-infarcted pixel CBF values for each animal that developed infarct. Six of the eleven animals developed infarction. The mean CBF value corresponding to the optimal operating point of the ROC curves for the 6 animals was 12.6 ± 2.8 mL·min-1·100g-1 for infarction after 3 hours of ischemia. The porcine ET-1 model of cerebral ischemia is easier to implement than other large animal models of stroke, and performs similarly as long as CBF is monitored using CTP to prevent reperfusion. PMID:27347877
Automatic burst detection for the EEG of the preterm infant.
Jennekens, Ward; Ruijs, Loes S; Lommen, Charlotte M L; Niemarkt, Hendrik J; Pasman, Jaco W; van Kranen-Mastenbroek, Vivianne H J M; Wijn, Pieter F F; van Pul, Carola; Andriessen, Peter
2011-10-01
To aid with prognosis and stratification of clinical treatment for preterm infants, a method for automated detection of bursts, interburst-intervals (IBIs) and continuous patterns in the electroencephalogram (EEG) is developed. Results are evaluated for preterm infants with normal neurological follow-up at 2 years. The detection algorithm (MATLAB®) for burst, IBI and continuous pattern is based on selection by amplitude, time span, number of channels and numbers of active electrodes. Annotations of two neurophysiologists were used to determine threshold values. The training set consisted of EEG recordings of four preterm infants with postmenstrual age (PMA, gestational age + postnatal age) of 29-34 weeks. Optimal threshold values were based on overall highest sensitivity. For evaluation, both observers verified detections in an independent dataset of four EEG recordings with comparable PMA. Algorithm performance was assessed by calculation of sensitivity and positive predictive value. The results of algorithm evaluation are as follows: sensitivity values of 90% ± 6%, 80% ± 9% and 97% ± 5% for burst, IBI and continuous patterns, respectively. Corresponding positive predictive values were 88% ± 8%, 96% ± 3% and 85% ± 15%, respectively. In conclusion, the algorithm showed high sensitivity and positive predictive values for bursts, IBIs and continuous patterns in preterm EEG. Computer-assisted analysis of EEG may allow objective and reproducible analysis for clinical treatment.
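The rule set described above (amplitude, time span, and number of active channels) can be sketched compactly. The threshold values below are illustrative only; the paper derives its thresholds from expert annotations on a training set.

```python
import numpy as np

def detect_bursts(eeg, fs, amp_uV=30.0, min_dur_s=1.0, min_channels=3):
    """Return a boolean vector marking samples belonging to detected bursts.
    eeg: array of shape (n_channels, n_samples) in microvolts.  A sample is
    'active' on a channel if |amplitude| exceeds amp_uV; a burst requires at
    least min_channels active channels sustained for at least min_dur_s."""
    active = (np.abs(eeg) > amp_uV).sum(axis=0) >= min_channels
    burst = np.zeros_like(active)
    min_len = int(min_dur_s * fs)
    start = None
    for i, a in enumerate(np.append(active, False)):   # sentinel closes the last run
        if a and start is None:
            start = i
        elif not a and start is not None:
            if i - start >= min_len:
                burst[start:i] = True
            start = None
    return burst

rng = np.random.default_rng(4)
fs = 256
eeg = rng.normal(0, 10, (8, fs * 30))                  # 30 s of low-amplitude activity
eeg[:, 5 * fs:8 * fs] += rng.normal(0, 60, (8, 3 * fs))  # 3 s high-amplitude burst
b = detect_bursts(eeg, fs)
print("detected burst duration (s):", b.sum() / fs)
```

Interburst intervals and continuous patterns would then be derived from the complement and the run lengths of this burst mask.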
Bào, Yīmíng; Amarasinghe, Gaya K; Basler, Christopher F; Bavari, Sina; Bukreyev, Alexander; Chandran, Kartik; Dolnik, Olga; Dye, John M; Ebihara, Hideki; Formenty, Pierre; Hewson, Roger; Kobinger, Gary P; Leroy, Eric M; Mühlberger, Elke; Netesov, Sergey V; Patterson, Jean L; Paweska, Janusz T; Smither, Sophie J; Takada, Ayato; Towner, Jonathan S; Volchkov, Viktor E; Wahl-Jensen, Victoria; Kuhn, Jens H
2017-05-11
The mononegaviral family Filoviridae has eight members assigned to three genera and seven species. Until now, genus and species demarcation were based on arbitrarily chosen filovirus genome sequence divergence values (≈50% for genera, ≈30% for species) and arbitrarily chosen phenotypic virus or virion characteristics. Here we report filovirus genome sequence-based taxon demarcation criteria using the publicly accessible PAirwise Sequencing Comparison (PASC) tool of the US National Center for Biotechnology Information (Bethesda, MD, USA). Comparison of all available filovirus genomes in GenBank using PASC revealed optimal demarcation at the 55-58% sequence diversity threshold range for genera and at the 23-36% sequence diversity threshold range for species. Because these thresholds do not change the current official filovirus classification, these values are now implemented as filovirus taxon demarcation criteria that may solely be used for filovirus classification in case additional data are absent. A near-complete, coding-complete, or complete filovirus genome sequence will now be required to allow official classification of any novel "filovirus." Classification of filoviruses into existing taxa or determining the need for novel taxa is now straightforward and could even become automated using a presented algorithm/flowchart rooted in RefSeq (type) sequences.
On-Board Event-Based State Estimation for Trajectory Approaching and Tracking of a Vehicle
Martínez-Rey, Miguel; Espinosa, Felipe; Gardel, Alfredo; Santos, Carlos
2015-01-01
For the problem of pose estimation of an autonomous vehicle using networked external sensors, the processing capacity and battery consumption of these sensors, as well as the communication channel load should be optimized. Here, we report an event-based state estimator (EBSE) consisting of an unscented Kalman filter that uses a triggering mechanism based on the estimation error covariance matrix to request measurements from the external sensors. This EBSE generates the events of the estimator module on-board the vehicle and, thus, allows the sensors to remain in stand-by mode until an event is generated. The proposed algorithm requests a measurement every time the estimation distance root mean squared error (DRMS) value, obtained from the estimator's covariance matrix, exceeds a threshold value. This triggering threshold can be adapted to the vehicle's working conditions rendering the estimator even more efficient. An example of the use of the proposed EBSE is given, where the autonomous vehicle must approach and follow a reference trajectory. By making the threshold a function of the distance to the reference location, the estimator can halve the use of the sensors with a negligible deterioration in the performance of the approaching maneuver. PMID:26102489
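The triggering rule lends itself to a compact sketch: compute the DRMS from the position block of the estimation covariance and request an external measurement whenever it exceeds a threshold that grows with the distance to the reference location. The covariance update below is a crude prediction-only stand-in for the unscented Kalman filter, and all constants are illustrative.

```python
import numpy as np

def drms(P):
    """Distance root-mean-square error from the 2x2 position block of the
    estimation error covariance matrix."""
    return float(np.sqrt(P[0, 0] + P[1, 1]))

def measurement_needed(P_pos, dist_to_reference, base_threshold=0.5, k=0.05):
    """Request an external measurement when the DRMS exceeds a threshold that
    increases with distance to the reference (fewer requests far from the goal)."""
    return drms(P_pos) > base_threshold + k * dist_to_reference

P = np.diag([0.05, 0.05, 0.01])       # toy covariance: x, y, heading
Q = np.diag([0.02, 0.02, 0.005])      # process noise added each prediction step
for step in range(10):
    P = P + Q                         # covariance growth between measurements
    if measurement_needed(P[:2, :2], dist_to_reference=20 - 2 * step):
        print(f"step {step}: request measurement (DRMS = {drms(P[:2, :2]):.2f})")
        P = 0.2 * P                   # assume the update shrinks the covariance
```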
NASA Astrophysics Data System (ADS)
Zhu, C.; Zhang, S.; Xiao, F.; Li, J.; Yuan, L.; Zhang, Y.; Zhu, T.
2018-05-01
The NASA Operation IceBridge (OIB) mission, initiated in 2009, is currently the largest airborne program in polar remote sensing science observation; it collects airborne remote sensing measurements to bridge the gap between NASA's ICESat and the upcoming ICESat-2 mission. This paper develops an improved method that optimizes the selection of Digital Mapping System (DMS) images and uses an optimal threshold obtained from experiments in the Beaufort Sea to calculate the local instantaneous sea surface height in this area. The optimal threshold was determined by comparing manual selections with the lowest 2%, 1%, 0.5%, 0.2%, 0.1% and 0.05% of Airborne Topographic Mapper (ATM) L1B elevations in sections A, B and C; the means of the mean differences are 0.166 m, 0.124 m, 0.083 m, 0.018 m, 0.002 m and -0.034 m. Our study shows that the lowest 0.1% of L1B elevations is the optimal threshold. The optimal threshold and the manual selections were also used to calculate the instantaneous sea surface height over images with leads, and the improved method shows closer agreement with the L1B manual selections. For images without leads, the local instantaneous sea surface height is estimated using linear relationships between distance and the sea surface heights calculated over images with leads.
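The lowest-percentile rule itself is simple: take the mean of the lowest fraction of ATM L1B elevations in a segment as the local instantaneous sea surface height, on the assumption that leads are the lowest surfaces in the scene. The elevations below are synthetic.

```python
import numpy as np

def local_sea_surface_height(elevations, fraction=0.001):
    """Mean of the lowest `fraction` (0.1% here, the abstract's optimal threshold)
    of ATM L1B elevations in a segment, taken as the local sea surface height."""
    elev = np.sort(np.asarray(elevations))
    n = max(1, int(round(fraction * elev.size)))
    return elev[:n].mean()

# synthetic segment: sea-ice surface ~0.25 m above a -0.20 m sea surface,
# plus a few lead returns at the sea surface itself
rng = np.random.default_rng(11)
ice = rng.normal(0.25, 0.10, 50000)
leads = rng.normal(-0.20, 0.02, 100)
print(local_sea_surface_height(np.concatenate([ice, leads]), fraction=0.001))
```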
A Topological Criterion for Filtering Information in Complex Brain Networks
Latora, Vito; Chavez, Mario
2017-01-01
In many biological systems, the network of interactions between the elements can only be inferred from experimental measurements. In neuroscience, non-invasive imaging tools are extensively used to derive either structural or functional brain networks in-vivo. As a result of the inference process, we obtain a matrix of values corresponding to a fully connected and weighted network. To turn this into a useful sparse network, thresholding is typically adopted to cancel a percentage of the weakest connections. The structural properties of the resulting network depend on how much of the inferred connectivity is eventually retained. However, how to objectively fix this threshold is still an open issue. We introduce a criterion, the efficiency cost optimization (ECO), to select a threshold based on the optimization of the trade-off between the efficiency of a network and its wiring cost. We prove analytically and we confirm through numerical simulations that the connection density maximizing this trade-off emphasizes the intrinsic properties of a given network, while preserving its sparsity. Moreover, this density threshold can be determined a-priori, since the number of connections to filter only depends on the network size according to a power-law. We validate this result on several brain networks, from micro- to macro-scales, obtained with different imaging modalities. Finally, we test the potential of ECO in discriminating brain states with respect to alternative filtering methods. ECO advances our ability to analyze and compare biological networks, inferred from experimental data, in a fast and principled way. PMID:28076353
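A rough sketch of the efficiency cost optimization idea: scan candidate connection densities, keep that fraction of the strongest connections, and score each thresholded graph by (global efficiency + local efficiency) divided by density. This brute-force scan stands in for the paper's closed-form result (where the optimal number of connections follows a power law of network size) and uses networkx's efficiency measures.

```python
import numpy as np
import networkx as nx

def eco_threshold(weights, densities=np.linspace(0.02, 0.5, 25)):
    """For each candidate density keep the strongest edges and score the graph by
    (global efficiency + local efficiency) / density; return the best density."""
    n = weights.shape[0]
    iu = np.triu_indices(n, k=1)
    order = np.argsort(weights[iu])[::-1]             # strongest connections first
    best_density, best_score = None, -np.inf
    for d in densities:
        k = int(round(d * len(order)))
        G = nx.Graph()
        G.add_nodes_from(range(n))
        G.add_edges_from((iu[0][i], iu[1][i]) for i in order[:k])
        score = (nx.global_efficiency(G) + nx.local_efficiency(G)) / d
        if score > best_score:
            best_density, best_score = d, score
    return best_density

rng = np.random.default_rng(6)
W = rng.random((30, 30))
W = (W + W.T) / 2                                     # symmetric "connectivity" matrix
print(eco_threshold(W))
```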
Lindahl, Jonas; Danell, Rickard
The aim of this study was to provide a framework for evaluating bibliometric indicators as decision support tools from a decision-making perspective and to examine the information value of early career publication rate as a predictor of future productivity. We used ROC analysis to evaluate a bibliometric indicator as a tool for binary decision making. The dataset consisted of 451 early career researchers in the mathematical sub-field of number theory. We investigated the effect of three different definitions of top performance groups (top 10, top 25, and top 50 %); the consequences of using different thresholds in the prediction models; and the added prediction value of information on early career research collaboration and publications in prestige journals. We conclude that early career publication rate has information value in all tested decision scenarios, but future performance is more predictable if the definition of the high performance group is more exclusive. Optimal decision thresholds estimated with the Youden index indicated that the top 10 % decision scenario should use 7 articles, the top 25 % scenario should use 7 articles, and the top 50 % scenario should use 5 articles to minimize prediction errors. A comparative analysis between the decision thresholds provided by the Youden index, which takes consequences into consideration, and a method commonly used in evaluative bibliometrics, which does not, indicated that the differences are trivial for the top 25 and top 50 % groups. However, a statistically significant difference between the methods was found for the top 10 % group. Information on early career collaboration and publication strategies did not add any prediction value to the bibliometric indicator publication rate in any of the models. The key contributions of this research are the focus on consequences in terms of prediction errors and the notion of transforming uncertainty into risk when choosing decision thresholds in bibliometrically informed decision making. The significance of our results is discussed from the point of view of science policy and management.
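The Youden-index threshold selection used above is a one-liner once the ROC curve is available: J = sensitivity + specificity − 1 is maximized over candidate cutoffs. The sketch below applies it to synthetic publication counts; the data and the correlation structure are invented, not the study's.

```python
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(8)
n = 451
pubs = rng.poisson(4, n)                         # early career publication counts (synthetic)
future_score = pubs + rng.poisson(3, n)          # future productivity, correlated with pubs
future_top = future_score >= np.percentile(future_score, 75)   # "top 25%" label

fpr, tpr, thresholds = roc_curve(future_top, pubs)
youden = tpr - fpr                               # J = sensitivity + specificity - 1
best = thresholds[np.argmax(youden)]
print(f"optimal publication-count decision threshold: {best}")
```

A cost-weighted variant of J would shift the threshold when false positives and false negatives carry different consequences, which is the distinction the abstract draws between consequence-aware and conventional bibliometric cutoffs.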
Dowd, Kieran P.; Harrington, Deirdre M.; Donnelly, Alan E.
2012-01-01
Background The activPAL has been identified as an accurate and reliable measure of sedentary behaviour. However, only limited information is available on the accuracy of the activPAL activity count function as a measure of physical activity, while no unit calibration of the activPAL has been completed to date. This study aimed to investigate the criterion validity of the activPAL, examine the concurrent validity of the activPAL, and perform and validate a value calibration of the activPAL in an adolescent female population. The performance of the activPAL in estimating posture was also compared with sedentary thresholds used with the ActiGraph accelerometer. Methodologies Thirty adolescent females (15 developmental; 15 cross-validation) aged 15–18 years performed 5 activities while wearing the activPAL, ActiGraph GT3X, and the Cosmed K4B2. A random coefficient statistics model examined the relationship between metabolic equivalent (MET) values and activPAL counts. Receiver operating characteristic analysis was used to determine activity thresholds and for cross-validation. The random coefficient statistics model showed a concordance correlation coefficient of 0.93 (standard error of the estimate = 1.13). An optimal moderate threshold of 2997 was determined using mixed regression, while an optimal vigorous threshold of 8229 was determined using receiver operating statistics. The activPAL count function demonstrated very high concurrent validity (r = 0.96, p<0.01) with the ActiGraph count function. Levels of agreement for sitting, standing, and stepping between direct observation and the activPAL and ActiGraph were 100%, 98.1%, 99.2% and 100%, 0%, 100%, respectively. Conclusions These findings suggest that the activPAL is a valid, objective measurement tool that can be used for both the measurement of physical activity and sedentary behaviours in an adolescent female population. PMID:23094069
Bauer, Robert; Fels, Meike; Royter, Vladislav; Raco, Valerio; Gharabaghi, Alireza
2016-09-01
Considering self-rated mental effort during neurofeedback may improve training of brain self-regulation. Twenty-one healthy, right-handed subjects performed kinesthetic motor imagery of opening their left hand, while threshold-based classification of beta-band desynchronization resulted in proprioceptive robotic feedback. The experiment consisted of two blocks in a cross-over design. The participants rated their perceived mental effort nine times per block. In the adaptive block, the threshold was adjusted on the basis of these ratings whereas adjustments were carried out at random in the other block. Electroencephalography was used to examine the cortical activation patterns during the training sessions. The perceived mental effort was correlated with the difficulty threshold of neurofeedback training. Adaptive threshold-setting reduced mental effort and increased the classification accuracy and positive predictive value. This was paralleled by an inter-hemispheric cortical activation pattern in low frequency bands connecting the right frontal and left parietal areas. Optimal balance of mental effort was achieved at thresholds significantly higher than maximum classification accuracy. Rating of mental effort is a feasible approach for effective threshold-adaptation during neurofeedback training. Closed-loop adaptation of the neurofeedback difficulty level facilitates reinforcement learning of brain self-regulation. Copyright © 2016 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
CSOLNP: Numerical Optimization Engine for Solving Non-linearly Constrained Problems.
Zahery, Mahsa; Maes, Hermine H; Neale, Michael C
2017-08-01
We introduce the optimizer CSOLNP, which is a C++ implementation of the R package RSOLNP (Ghalanos & Theussl, 2012, Rsolnp: General non-linear optimization using augmented Lagrange multiplier method. R package version 1) alongside some improvements. CSOLNP solves non-linearly constrained optimization problems using a Sequential Quadratic Programming (SQP) algorithm. CSOLNP, NPSOL (a very popular implementation of the SQP method in FORTRAN (Gill et al., 1986, User's guide for NPSOL (version 4.0): A Fortran package for nonlinear programming (No. SOL-86-2). Stanford, CA: Stanford University Systems Optimization Laboratory)), and SLSQP (another SQP implementation available as part of the NLOPT collection (Johnson, 2014, The NLopt nonlinear-optimization package. Retrieved from http://ab-initio.mit.edu/nlopt)) are three optimizers available in the OpenMx package. These optimizers are compared in terms of runtimes, final objective values, and memory consumption. A Monte Carlo analysis of the performance of the optimizers was performed on ordinal and continuous models with five variables and one or two factors. While the relative difference between the objective values is less than 0.5%, CSOLNP is in general faster than NPSOL and SLSQP for ordinal analysis. As for continuous data, none of the optimizers performs consistently faster than the others. In terms of memory usage, we used Valgrind's heap profiler tool, called Massif, on one-factor threshold models. CSOLNP and NPSOL consume the same amount of memory, while SLSQP uses 71 MB more memory than the other two optimizers.
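As a practical illustration of the problem class these optimizers target, the sketch below poses a small non-linearly constrained problem to SciPy's SLSQP, one of the SQP implementations named above. It is not CSOLNP and does not use the OpenMx interface; the objective and constraints are invented.

```python
# Sketch of a non-linearly constrained problem solved with SciPy's SLSQP;
# the objective and constraints are illustrative stand-ins.
import numpy as np
from scipy.optimize import minimize

def objective(x):
    # Rosenbrock function as a stand-in for a model fit criterion
    return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2

constraints = [
    {"type": "ineq", "fun": lambda x: 1.5 - x[0]**2 - x[1]**2},  # x0^2 + x1^2 <= 1.5
    {"type": "eq",   "fun": lambda x: x[0] + x[1] - 1.0},        # x0 + x1 = 1
]

result = minimize(objective, x0=np.array([0.5, 0.5]), method="SLSQP",
                  constraints=constraints, options={"maxiter": 200})
print(result.x, result.fun, result.success)
```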
D-optimal experimental designs to test for departure from additivity in a fixed-ratio mixture ray.
Coffey, Todd; Gennings, Chris; Simmons, Jane Ellen; Herr, David W
2005-12-01
Traditional factorial designs for evaluating interactions among chemicals in a mixture may be prohibitive when the number of chemicals is large. Using a mixture of chemicals with a fixed ratio (mixture ray) results in an economical design that allows estimation of additivity or nonadditive interaction for a mixture of interest. This methodology is extended easily to a mixture with a large number of chemicals. Optimal experimental conditions can be chosen that result in increased power to detect departures from additivity. Although these designs are used widely for linear models, optimal designs for nonlinear threshold models are less well known. In the present work, the use of D-optimal designs is demonstrated for nonlinear threshold models applied to a fixed-ratio mixture ray. For a fixed sample size, this design criterion selects the experimental doses and number of subjects per dose level that result in minimum variance of the model parameters and thus increased power to detect departures from additivity. An optimal design is illustrated for a 2:1 ratio (chlorpyrifos:carbaryl) mixture experiment. For this example, and in general, the optimal designs for the nonlinear threshold model depend on prior specification of the slope and dose threshold parameters. Use of a D-optimal criterion produces experimental designs with increased power, whereas standard nonoptimal designs with equally spaced dose groups may result in low power if the active range or threshold is missed.
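To make the design criterion concrete, here is a hedged toy sketch of local D-optimality for a hockey-stick threshold model mu(d) = b0 + b1*max(0, d - delta): candidate dose placements are scored by the determinant of the Fisher information evaluated at assumed prior values of the slope and threshold. The model, prior values, dose grid, and equal replication are assumptions, not the chlorpyrifos:carbaryl design.

```python
# Illustrative sketch of a local D-optimal search for a hockey-stick threshold
# model mu(d) = b0 + b1*max(0, d - delta); slope/threshold priors are assumed.
import numpy as np
from itertools import combinations

b0, b1, delta, sigma = 1.0, 0.8, 2.0, 1.0           # hypothetical prior guesses
candidate_doses = np.linspace(0.0, 10.0, 21)         # candidate dose grid
n_per_dose = 10                                      # equal replication per selected dose

def info_matrix(doses):
    M = np.zeros((3, 3))
    for d in doses:
        # gradient of the mean with respect to (b0, b1, delta)
        g = np.array([1.0, max(0.0, d - delta), -b1 * float(d > delta)])
        M += n_per_dose * np.outer(g, g) / sigma**2
    return M

best = max(combinations(candidate_doses, 4),
           key=lambda ds: np.linalg.det(info_matrix(ds)))
print("locally D-optimal 4-point design:", [float(d) for d in best])
```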
NASA Astrophysics Data System (ADS)
Prasad, M. N.; Brown, M. S.; Ahmad, S.; Abtin, F.; Allen, J.; da Costa, I.; Kim, H. J.; McNitt-Gray, M. F.; Goldin, J. G.
2008-03-01
Segmentation of lungs in the setting of scleroderma is a major challenge in medical image analysis. Threshold-based techniques tend to leave out lung regions that have increased attenuation, for example in the presence of interstitial lung disease or in noisy low dose CT scans. The purpose of this work is to perform segmentation of the lungs using a technique that selects an optimal threshold for a given scleroderma patient by comparing the curvature of the lung boundary to that of the ribs. Our approach is based on adaptive thresholding and it tries to exploit the fact that the curvature of the ribs and the curvature of the lung boundary are closely matched. First, the ribs are segmented and a polynomial is used to represent the ribs' curvature. A threshold value to segment the lungs is selected iteratively such that the deviation of the lung boundary from the polynomial is minimized. A Naive Bayes classifier is used to build the model for selection of the best fitting lung boundary. The performance of the new technique was compared against a standard approach using a simple fixed threshold of -400 HU followed by region growing. The two techniques were evaluated against manual reference segmentations using a volumetric overlap fraction (VOF), and the adaptive threshold technique was found to be significantly better than the fixed threshold technique.
Yeatts, Sharon D.; Gennings, Chris; Crofton, Kevin M.
2014-01-01
Traditional additivity models provide little flexibility in modeling the dose–response relationships of the single agents in a mixture. While the flexible single chemical required (FSCR) methods allow greater flexibility, its implicit nature is an obstacle in the formation of the parameter covariance matrix, which forms the basis for many statistical optimality design criteria. The goal of this effort is to develop a method for constructing the parameter covariance matrix for the FSCR models, so that (local) alphabetic optimality criteria can be applied. Data from Crofton et al. are provided as motivation; in an experiment designed to determine the effect of 18 polyhalogenated aromatic hydrocarbons on serum total thyroxine (T4), the interaction among the chemicals was statistically significant. Gennings et al. fit the FSCR interaction threshold model to the data. The resulting estimate of the interaction threshold was positive and within the observed dose region, providing evidence of a dose-dependent interaction. However, the corresponding likelihood-ratio-based confidence interval was wide and included zero. In order to more precisely estimate the location of the interaction threshold, supplemental data are required. Using the available data as the first stage, the Ds-optimal second-stage design criterion was applied to minimize the variance of the hypothesized interaction threshold. Practical concerns associated with the resulting design are discussed and addressed using the penalized optimality criterion. Results demonstrate that the penalized Ds-optimal second-stage design can be used to more precisely define the interaction threshold while maintaining the characteristics deemed important in practice. PMID:22640366
Time-Dependent Computed Tomographic Perfusion Thresholds for Patients With Acute Ischemic Stroke.
d'Esterre, Christopher D; Boesen, Mari E; Ahn, Seong Hwan; Pordeli, Pooneh; Najm, Mohamed; Minhas, Priyanka; Davari, Paniz; Fainardi, Enrico; Rubiera, Marta; Khaw, Alexander V; Zini, Andrea; Frayne, Richard; Hill, Michael D; Demchuk, Andrew M; Sajobi, Tolulope T; Forkert, Nils D; Goyal, Mayank; Lee, Ting Y; Menon, Bijoy K
2015-12-01
Among patients with acute ischemic stroke, we determine computed tomographic perfusion (CTP) thresholds associated with follow-up infarction at different stroke onset-to-CTP and CTP-to-reperfusion times. Acute ischemic stroke patients with occlusion on computed tomographic angiography were acutely imaged with CTP. Noncontrast computed tomography and magnetic resonance diffusion-weighted imaging between 24 and 48 hours were used to delineate follow-up infarction. Reperfusion was assessed on conventional angiogram or 4-hour repeat computed tomographic angiography. Tmax, cerebral blood flow, and cerebral blood volume derived from delay-insensitive CTP postprocessing were analyzed using receiver operating characteristic curves to derive optimal thresholds for combined patient data (pooled analysis) and individual patients (patient-level analysis) based on time from stroke onset-to-CTP and CTP-to-reperfusion. One-way ANOVA and locally weighted scatterplot smoothing regression were used to test whether the derived optimal CTP thresholds were different by time. One hundred and thirty-two patients were included. Tmax thresholds of >16.2 and >15.8 s and absolute cerebral blood flow thresholds of <8.9 and <7.4 mL·min⁻¹·100 g⁻¹ were associated with infarct if reperfused <90 min from CTP with onset <180 min. The discriminative ability of cerebral blood volume was modest. No statistically significant relationship was noted between stroke onset-to-CTP time and the optimal CTP thresholds for all parameters based on discrete or continuous time analysis (P>0.05). A statistically significant relationship existed between CTP-to-reperfusion time and the optimal thresholds for cerebral blood flow (P<0.001; r=0.59 and 0.77 for gray and white matter, respectively) and Tmax (P<0.001; r=-0.68 and -0.60 for gray and white matter, respectively) parameters. Optimal CTP thresholds associated with follow-up infarction depend on time from imaging to reperfusion. © 2015 American Heart Association, Inc.
FOREX Trades: Can the Takens Algorithm Help to Obtain Steady Profit at Investment Reallocations?
NASA Astrophysics Data System (ADS)
Petrov, V. Yu.; Tribelsky, M. I.
2015-12-01
We report preliminary results on applying the Takens algorithm to build a FOREX trading strategy that yields a steady long-term gain for a trader. Actual historical rates for the EUR/USD pair are used. The values of various parameters of the problem, including the "stop loss" and "take profit" thresholds, are optimized to provide the maximal gain during the training period. These values are then employed for trading. We succeeded in obtaining a steady gain, provided the spread is neglected. This indicates that the FOREX market is predictable.
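For readers unfamiliar with the Takens construction, the sketch below (an editorial illustration, not the authors' trading system) builds a delay embedding of a synthetic return series and forecasts the next return from the nearest embedded neighbours; the embedding dimension, delay, and series are assumptions.

```python
# Toy Takens delay embedding with a nearest-neighbour predictor; the series,
# embedding dimension, and delay are assumptions, not the authors' system.
import numpy as np

rng = np.random.default_rng(1)
prices = 1.10 + np.cumsum(rng.normal(0, 1e-4, 5000))    # hypothetical EUR/USD-like series
returns = np.diff(prices)

dim, delay = 4, 3                                        # embedding parameters (assumed)
n = len(returns) - (dim - 1) * delay - 1
X = np.stack([returns[i * delay: i * delay + n] for i in range(dim)], axis=1)
y = returns[(dim - 1) * delay + 1: (dim - 1) * delay + 1 + n]   # next return after each state

def predict_next(state, k=10):
    # average the continuations of the k nearest embedded states (excluding the query itself)
    d = np.linalg.norm(X[:-1] - state, axis=1)
    return y[:-1][np.argsort(d)[:k]].mean()

print("predicted:", predict_next(X[-1]), "actual:", y[-1])
```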
A threshold selection method based on edge preserving
NASA Astrophysics Data System (ADS)
Lou, Liantang; Dan, Wei; Chen, Jiaqi
2015-12-01
A method of automatic threshold selection for image segmentation is presented. An optimal threshold is selected so that image edges are preserved as well as possible during segmentation. The shortcoming of Otsu's method, which is based on gray-level histograms, is analyzed. The edge energy function of a bivariate continuous function is expressed as a line integral, while the edge energy function of an image is approximated by discretizing that integral. An optimal threshold method that maximizes the edge energy function is given. Several experimental results are also presented and compared with Otsu's method.
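One plausible discretization of this idea, offered as a sketch rather than the paper's algorithm: score each candidate threshold by the mean gradient magnitude along the resulting foreground boundary and compare the maximizer with Otsu's threshold. The synthetic image and the scoring rule are assumptions.

```python
# Hedged sketch of threshold selection by maximizing an edge-energy score
# (mean gradient magnitude along the thresholded boundary), compared with Otsu.
import numpy as np
from scipy import ndimage
from skimage.filters import threshold_otsu

rng = np.random.default_rng(0)
img = np.full((128, 128), 0.2)
img[40:90, 40:90] = 0.8                               # synthetic bright square
img += rng.normal(0, 0.05, img.shape)                 # additive noise

gx, gy = ndimage.sobel(img, axis=0), ndimage.sobel(img, axis=1)
grad = np.hypot(gx, gy)

def edge_energy(t):
    mask = img > t
    boundary = mask ^ ndimage.binary_erosion(mask)    # one-pixel boundary of the foreground
    return grad[boundary].mean() if boundary.any() else 0.0

candidates = np.linspace(img.min(), img.max(), 100)[1:-1]
t_edge = candidates[np.argmax([edge_energy(t) for t in candidates])]
print(f"edge-energy threshold {t_edge:.3f} vs Otsu {threshold_otsu(img):.3f}")
```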
NASA Technical Reports Server (NTRS)
Giesy, D. P.
1978-01-01
A technique is presented for the calculation of Pareto-optimal solutions to a multiple-objective constrained optimization problem by solving a series of single-objective problems. Threshold-of-acceptability constraints are placed on the objective functions at each stage to both limit the area of search and to mathematically guarantee convergence to a Pareto optimum.
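In the same spirit, and only as a sketch under invented objectives, the loop below minimizes one objective while a threshold of acceptability on the second objective is tightened stage by stage, tracing out Pareto-optimal points with SciPy's SLSQP.

```python
# Illustrative threshold-of-acceptability (epsilon-constraint style) sweep:
# minimize f1 subject to f2 <= threshold, tightening the threshold each stage.
import numpy as np
from scipy.optimize import minimize

f1 = lambda x: (x[0] - 1.0)**2 + x[1]**2
f2 = lambda x: x[0]**2 + (x[1] - 1.0)**2

pareto_points = []
x0 = np.array([0.0, 0.0])
for threshold in np.linspace(2.0, 0.1, 8):            # progressively stricter acceptability level
    cons = [{"type": "ineq", "fun": lambda x, t=threshold: t - f2(x)}]   # f2(x) <= t
    res = minimize(f1, x0, method="SLSQP", constraints=cons)
    if res.success:
        pareto_points.append((f1(res.x), f2(res.x)))
        x0 = res.x                                     # warm-start the next stage
for p1, p2 in pareto_points:
    print(f"f1={p1:.3f}  f2={p2:.3f}")
```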
NASA Astrophysics Data System (ADS)
Petroliagkis, Thomas I.; Camia, Andrea; Liberta, Giorgio; Durrant, Tracy; Pappenberger, Florian; San-Miguel-Ayanz, Jesus
2014-05-01
The European Forest Fire Information System (EFFIS) has been established by the Joint Research Centre (JRC) and the Directorate General for Environment (DG ENV) of the European Commission (EC) to support the services in charge of the protection of forests against fires in the EU and neighbour countries, and also to provide the EC services and the European Parliament with information on forest fires in Europe. Within its applications, EFFIS provides current and forecast meteorological fire danger maps up to 6 days. Weather plays a key role in affecting wildfire occurrence and behaviour. Meteorological parameters can be used to derive meteorological fire weather indices that provide estimations of fire danger level at a given time over a specified area of interest. In this work, we investigate the suitability of critical thresholds of fire danger to provide an early warning for megafires (fires > 500 ha) over Europe. Past trends of fire danger are analysed by computing daily fire danger from weather data taken from re-analysis fields for a period of 31 years (1980 to 2010). Re-analysis global data sets coming from the construction of high-quality climate records, which combine past observations collected from many different observing and measuring platforms, are capable of describing how Fire Danger Indices have evolved over time at a global scale. The latest and most updated ERA-Interim dataset of the European Centre for Medium-Range Weather Forecasts (ECMWF) was used to extract meteorological variables needed to compute daily values of the Canadian Fire Weather Index (CFWI) over Europe, with a horizontal resolution of about 75x75 km. Daily time series of CFWI were constructed and analysed over a total of 1,071 European NUTS3 centroids, resulting in a set of percentiles and critical thresholds. Such percentiles could be used as thresholds to help fire services establish a measure of the significance of CFWI outputs as they relate to levels of fire potential, fuel conditions and fire danger. Median percentile values of fire days accumulated over the 31-year period were compared to median values of all days from that period. As expected, the CFWI time series exhibit different values on fire days than on all days. In addition, a percentile analysis was performed in order to determine the behaviour of index values corresponding to fire events falling into the megafire category. This analysis resulted in a set of critical thresholds based on percentiles. By utilising such thresholds, an initial framework of an early warning system has been established. By lowering the value of any of these thresholds, the number of hits could be increased until all extremes were captured (resulting in zero misses). However, in doing so, the number of false alarms tends to increase significantly. Consequently, an optimal trade-off between hits and false alarms has to be established when setting different (critical) CFWI thresholds.
Quantifying cerebellum grey matter and white matter perfusion using pulsed arterial spin labeling.
Li, Xiufeng; Sarkar, Subhendra N; Purdy, David E; Briggs, Richard W
2014-01-01
To facilitate quantification of cerebellum cerebral blood flow (CBF), studies were performed to systematically optimize arterial spin labeling (ASL) parameters for measuring cerebellum perfusion, segment cerebellum to obtain separate CBF values for grey matter (GM) and white matter (WM), and compare FAIR ASST to PICORE. Cerebellum GM and WM CBF were measured with optimized ASL parameters using FAIR ASST and PICORE in five subjects. Influence of volume averaging in voxels on cerebellar grey and white matter boundaries was minimized by high-probability threshold masks. Cerebellar CBF values determined by FAIR ASST were 43.8 ± 5.1 mL/100 g/min for GM and 27.6 ± 4.5 mL/100 g/min for WM. Quantitative perfusion studies indicated that CBF in cerebellum GM is 1.6 times greater than that in cerebellum WM. Compared to PICORE, FAIR ASST produced similar CBF estimations but less subtraction error and lower temporal, spatial, and intersubject variability. These are important advantages for detecting group and/or condition differences in CBF values.
Masoli, Stefano; Rizza, Martina F; Sgritta, Martina; Van Geit, Werner; Schürmann, Felix; D'Angelo, Egidio
2017-01-01
In realistic neuronal modeling, once the ionic channel complement has been defined, the maximum ionic conductance (Gi-max) values need to be tuned in order to match the firing pattern revealed by electrophysiological recordings. Recently, selection/mutation genetic algorithms have been proposed to efficiently and automatically tune these parameters. Nonetheless, since similar firing patterns can be achieved through different combinations of Gi-max values, it is not clear how well these algorithms approximate the corresponding properties of real cells. Here we have evaluated the issue by exploiting a unique opportunity offered by the cerebellar granule cell (GrC), which is electrotonically compact and has therefore allowed the direct experimental measurement of ionic currents. Previous models were constructed using empirical tuning of Gi-max values to match the original data set. Here, by using repetitive discharge patterns as a template, the optimization procedure yielded models that closely approximated the experimental Gi-max values. These models, in addition to repetitive firing, captured additional features, including inward rectification, near-threshold oscillations, and resonance, which were not used as features. Thus, parameter optimization using genetic algorithms provided an efficient modeling strategy for reconstructing the biophysical properties of neurons and for the subsequent reconstruction of large-scale neuronal network models.
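A minimal selection/mutation genetic algorithm in the spirit described above, fitting two "maximum conductance" parameters of a toy firing-rate model to a target discharge curve. The model, fitness function, and GA settings are illustrative assumptions, not the granule-cell model or the optimizer the authors used.

```python
# Minimal selection/mutation GA sketch: tuning two "maximum conductance"
# parameters of a toy firing-rate model to match a target discharge curve.
import numpy as np

rng = np.random.default_rng(2)
currents = np.linspace(0.0, 1.0, 20)

def firing_rate(g, I):
    g_na, g_k = g
    return np.maximum(0.0, 80.0 * g_na * I - 30.0 * g_k)     # toy f-I relationship

g_true = np.array([1.2, 0.6])
target = firing_rate(g_true, currents)

def fitness(g):
    return -np.mean((firing_rate(g, currents) - target)**2)   # higher is better

pop = rng.uniform(0.1, 2.0, size=(40, 2))
for generation in range(100):
    scores = np.array([fitness(g) for g in pop])
    parents = pop[np.argsort(scores)[-10:]]                    # elitist selection
    children = parents[rng.integers(0, 10, 30)] + rng.normal(0, 0.05, (30, 2))  # mutation
    pop = np.vstack([parents, np.clip(children, 0.05, 3.0)])

best = pop[np.argmax([fitness(g) for g in pop])]
print("recovered conductances:", best, "true:", g_true)
```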
Mate choice when males are in patches: optimal strategies and good rules of thumb.
Hutchinson, John M C; Halupka, Konrad
2004-11-07
In standard mate-choice models, females encounter males sequentially and decide whether to inspect the quality of another male or to accept a male already inspected. What changes when males are clumped in patches and there is a significant cost to travel between patches? We use stochastic dynamic programming to derive optimum strategies under various assumptions. With zero costs to returning to a male in the current patch, the optimal strategy accepts males above a quality threshold which is constant whenever one or more males in the patch remain uninspected; this threshold drops when inspecting the last male in the patch, so returns may occur only then and are never to a male in a previously inspected patch. With non-zero within-patch return costs, such a two-threshold rule still performs extremely well, but a more gradual decline in acceptance threshold is optimal. Inability to return at all need not decrease performance by much. The acceptance threshold should also decline if it gets harder to discover the last males in a patch. Optimal strategies become more complex when mean male quality varies systematically between patches or years, and females estimate this in a Bayesian manner through inspecting male qualities. It can then be optimal to switch patch before inspecting all males on a patch, or, exceptionally, to return to an earlier patch. We compare performance of various rules of thumb in these environments and in ones without a patch structure. A two-threshold rule performs excellently, as do various simplifications of it. The best-of-N rule outperforms threshold rules only in non-patchy environments with between-year quality variation. The cutoff rule performs poorly.
Optimal dividends in the Brownian motion risk model with interest
NASA Astrophysics Data System (ADS)
Fang, Ying; Wu, Rong
2009-07-01
In this paper, we consider a Brownian motion risk model, and in addition, the surplus earns investment income at a constant force of interest. The objective is to find a dividend policy so as to maximize the expected discounted value of dividend payments. It is well known that optimality is achieved by using a barrier strategy for unrestricted dividend rate. However, ultimate ruin of the company is certain if a barrier strategy is applied. In many circumstances this is not desirable. This consideration leads us to impose a restriction on the dividend stream. We assume that dividends are paid to the shareholders according to admissible strategies whose dividend rate is bounded by a constant. Under this additional constraint, we show that the optimal dividend strategy is formed by a threshold strategy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pal, Karoly F.; Vertesi, Tamas
In a recent paper, Bancal et al. [Phys. Rev. Lett. 106, 250404 (2011)] put forward the concept of device-independent witnesses of genuine multipartite entanglement. These witnesses are capable of verifying genuine multipartite entanglement produced in a laboratory without resorting to any knowledge of the dimension of the state space or of the specific form of the measurement operators. As a by-product they found a multiparty three-setting Bell inequality which makes it possible to detect genuine n-partite entanglement in a noisy n-qubit Greenberger-Horne-Zeilinger (GHZ) state for visibilities as low as 2/3 in a device-independent way. In this paper, we generalize this inequality to an arbitrary number of settings, demonstrating a threshold visibility of 2/π ≈ 0.6366 as the number of settings goes to infinity. We also present a pseudotelepathy Bell inequality achieving the same threshold value. We argue that our device-independent witnesses are optimal in the sense that for n odd the above value cannot be beaten with n-party-correlation Bell inequalities.
Kitagawa, Noriyuki; Ushigome, Emi; Matsumoto, Shinobu; Oyabu, Chikako; Ushigome, Hidetaka; Yokota, Isao; Asano, Mai; Tanaka, Muhei; Yamazaki, Masahiro; Fukui, Michiaki
2018-03-01
This cross-sectional multicenter study was designed to evaluate the threshold value of home pulse pressure (PP) and home systolic blood pressure (SBP) predicting the arterial stiffness in 876 patients with type 2 diabetes. We measured the area under the receiver-operating characteristic curve (AUC) and estimated the ability of home PP to identify arterial stiffness using Youden-Index defined cut-off point. The arterial stiffness was measured using the brachial-ankle pulse wave velocity (baPWV). AUC for arterial stiffness in morning PP was significantly greater than that in morning SBP (P < .001). AUC for arterial stiffness in evening PP was also significantly greater than that in evening SBP (P < .001). The optimal cut-off points for morning PP and evening PP, which predicted arterial stiffness, were 54.6 and 56.9 mm Hg, respectively. Our findings indicate that we should pay more attention to increased home PP in patients with type 2 diabetes. ©2018 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Shan, Bonan; Wang, Jiang; Deng, Bin; Wei, Xile; Yu, Haitao; Zhang, Zhen; Li, Huiyan
2016-07-01
This paper proposes an epilepsy detection and closed-loop control strategy based on the Particle Swarm Optimization (PSO) algorithm. The proposed strategy can effectively suppress epileptic spikes in neural mass models, where epileptiform spikes are recognized as biomarkers of transitions from normal (interictal) activity to seizure (ictal) activity. In addition, the PSO algorithm accurately estimates the time evolution of key model parameters and reliably detects all epileptic spikes. The estimation of unmeasurable parameters is improved significantly compared with the unscented Kalman filter. When the estimated excitatory-inhibitory ratio exceeds a threshold value, the epileptiform spikes can be inhibited immediately by a proportional-integral controller. Numerical simulations illustrate the effectiveness of the proposed method as well as its potential value for model-based early seizure detection and closed-loop treatment design.
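A bare-bones particle swarm optimization sketch for least-squares parameter estimation, included only to illustrate the algorithm class; the sinusoidal objective stands in for fitting neural mass model parameters, and all settings are assumptions.

```python
# Bare-bones PSO sketch for least-squares parameter estimation; the objective
# is a stand-in, not the paper's neural mass model.
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0, 1, 200)
true_params = np.array([2.5, 7.0])
data = true_params[0] * np.sin(true_params[1] * t) + rng.normal(0, 0.05, t.size)

def cost(p):
    return np.mean((p[0] * np.sin(p[1] * t) - data)**2)

n_particles, n_iter = 30, 200
pos = rng.uniform([0, 1], [5, 15], size=(n_particles, 2))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_cost = np.array([cost(p) for p in pos])
gbest = pbest[np.argmin(pbest_cost)]

w, c1, c2 = 0.7, 1.5, 1.5                       # inertia and acceleration coefficients
for _ in range(n_iter):
    r1, r2 = rng.random((n_particles, 2)), rng.random((n_particles, 2))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    costs = np.array([cost(p) for p in pos])
    improved = costs < pbest_cost
    pbest[improved], pbest_cost[improved] = pos[improved], costs[improved]
    gbest = pbest[np.argmin(pbest_cost)]

print("estimated parameters:", gbest, "true:", true_params)
```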
Albanese, Mark A; Farrell, Philip; Dottl, Susan L
2005-01-01
Using Medical College Admission Test-grade point average (MCAT-GPA) scores as a threshold has the potential to address issues raised in recent Supreme Court cases, but it introduces complicated methodological issues for medical school admissions. To assess various statistical indexes to determine optimally discriminating thresholds for MCAT-GPA scores. Entering classes from 1992 through 1998 (N = 752) are used to develop guidelines for cut scores that optimize discrimination between students who pass and do not pass the United States Medical Licensing Examination (USMLE) Step 1 on the first attempt. Risk differences, odds ratios, sensitivity, and specificity discriminated best for setting thresholds. Compensatory versus noncompensatory procedures both accounted for 54% of Step 1 failures, but demanded different performance requirements (noncompensatory MCAT-biological sciences = 8, physical sciences = 7, verbal reasoning = 7--sum of scores = 22; compensatory MCAT total = 24). Rational and defensible intellectual achievement thresholds that are likely to comply with recent Supreme Court decisions can be set from MCAT scores and GPAs.
A stimulus-dependent spike threshold is an optimal neural coder
Jones, Douglas L.; Johnson, Erik C.; Ratnam, Rama
2015-01-01
A neural code based on sequences of spikes can consume a significant portion of the brain's energy budget. Thus, energy considerations would dictate that spiking activity be kept as low as possible. However, a high spike-rate improves the coding and representation of signals in spike trains, particularly in sensory systems. These are competing demands, and selective pressure has presumably worked to optimize coding by apportioning a minimum number of spikes so as to maximize coding fidelity. The mechanisms by which a neuron generates spikes while maintaining a fidelity criterion are not known. Here, we show that a signal-dependent neural threshold, similar to a dynamic or adapting threshold, optimizes the trade-off between spike generation (encoding) and fidelity (decoding). The threshold mimics a post-synaptic membrane (a low-pass filter) and serves as an internal decoder. Further, it sets the average firing rate (the energy constraint). The decoding process provides an internal copy of the coding error to the spike-generator which emits a spike when the error equals or exceeds a spike threshold. When optimized, the trade-off leads to a deterministic spike firing-rule that generates optimally timed spikes so as to maximize fidelity. The optimal coder is derived in closed-form in the limit of high spike-rates, when the signal can be approximated as a piece-wise constant signal. The predicted spike-times are close to those obtained experimentally in the primary electrosensory afferent neurons of weakly electric fish (Apteronotus leptorhynchus) and pyramidal neurons from the somatosensory cortex of the rat. We suggest that KCNQ/Kv7 channels (underlying the M-current) are good candidates for the decoder. They are widely coupled to metabolic processes and do not inactivate. We conclude that the neural threshold is optimized to generate an energy-efficient and high-fidelity neural code. PMID:26082710
Ye, Qing; Pan, Hao; Liu, Changhua
2015-01-01
This research proposes a novel framework for final drive simultaneous failure diagnosis comprising feature extraction, training of paired diagnostic models, generation of a decision threshold, and recognition of simultaneous failure modes. In the feature extraction module, wavelet packet transform and fuzzy entropy are adopted to reduce noise interference and extract representative features of each failure mode. Single-failure samples are used to construct probability classifiers based on a paired sparse Bayesian extreme learning machine, which is trained only on single failure modes and inherits the high generalization and sparsity of the sparse Bayesian learning approach. To generate an optimal decision threshold that converts the probability outputs of the classifiers into final simultaneous failure modes, this research proposes using samples containing both single and simultaneous failure modes together with a grid search method, which is superior to traditional techniques in global optimization. Compared with other frequently used diagnostic approaches based on support vector machines and probabilistic neural networks, experimental results based on the F1-measure verify that the diagnostic accuracy and efficiency of the proposed framework, which are crucial for simultaneous failure diagnosis, are superior to those of existing approaches. PMID:25722717
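A sketch of the threshold-generation step as described: sweep a common decision threshold over per-mode probability outputs and keep the value that maximizes micro-averaged F1 on a validation set. The classifier outputs are simulated, and the original framework's full grid search is not reproduced.

```python
# Sketch of grid-searching a common decision threshold that converts per-mode
# probability outputs into simultaneous failure labels, scored by micro F1.
import numpy as np
from sklearn.metrics import f1_score

rng = np.random.default_rng(4)
n_samples, n_modes = 300, 4
y_true = rng.integers(0, 2, size=(n_samples, n_modes))                     # multi-label ground truth
probs = np.clip(0.6 * y_true + rng.normal(0.2, 0.15, (n_samples, n_modes)), 0, 1)  # noisy probabilities

thresholds = np.linspace(0.05, 0.95, 19)
scores = [f1_score(y_true, (probs >= t).astype(int), average="micro") for t in thresholds]
best_t = thresholds[int(np.argmax(scores))]
print(f"optimal decision threshold {best_t:.2f}, micro F1 {max(scores):.3f}")
```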
I. RENAL THRESHOLDS FOR HEMOGLOBIN IN DOGS
Lichty, John A.; Havill, William H.; Whipple, George H.
1932-01-01
We use the term "renal threshold for hemoglobin" to indicate the smallest amount of hemoglobin which given intravenously will effect the appearance of recognizable hemoglobin in the urine. The initial renal threshold level for dog hemoglobin is established by the methods employed at an average value of 155 mg. hemoglobin per kilo body weight with maximal values of 210 and minimal of 124. Repeated daily injections of hemoglobin will depress this initial renal threshold level on the average 46 per cent with maximal values of 110 and minimal values of 60 mg. hemoglobin per kilo body weight. This minimal or depression threshold is relatively constant if the injections are continued. Rest periods without injections cause a return of the renal threshold for hemoglobin toward the initial threshold levels—recovery threshold level. Injections of hemoglobin below the initial threshold level but above the minimal or depression threshold will eventually reduce the renal threshold for hemoglobin to its depression threshold level. We believe the depression threshold or minimal renal threshold level due to repeated hemoglobin injections is a little above the glomerular threshold which we assume is the base line threshold for hemoglobin. Our reasons for this belief in the glomerular threshold are given above and in the other papers of this series. PMID:19870016
Sensitivity of goodness-of-fit statistics to rainfall data rounding off
NASA Astrophysics Data System (ADS)
Deidda, Roberto; Puliga, Michelangelo
An analysis based on the L-moments theory suggests adopting the generalized Pareto distribution to interpret daily rainfall depths recorded by the rain-gauge network of the Hydrological Survey of the Sardinia Region. Nevertheless, a significant and not yet completely resolved problem arises in the estimation of a left-censoring threshold able to assure a good fitting of rainfall data with the generalized Pareto distribution. In order to detect an optimal threshold while keeping the largest possible number of data, we chose to apply a “failure-to-reject” method based on goodness-of-fit tests, as it was proposed by Choulakian and Stephens [Choulakian, V., Stephens, M.A., 2001. Goodness-of-fit tests for the generalized Pareto distribution. Technometrics 43, 478-484]. Unfortunately, the application of the test, using percentage points provided by Choulakian and Stephens (2001), did not succeed in detecting a useful threshold value in most analyzed time series. A deeper analysis revealed that these failures are mainly due to the presence of large quantities of rounded-off values among sample data, affecting the distribution of goodness-of-fit statistics and leading to significant departures from percentage points expected for continuous random variables. A procedure based on Monte Carlo simulations is thus proposed to overcome these problems.
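A hedged sketch of the failure-to-reject strategy: fit the generalized Pareto distribution to excesses over each candidate threshold and retain the lowest threshold whose fit is not rejected. A Kolmogorov-Smirnov test stands in for the Anderson-Darling statistic of Choulakian and Stephens (2001), and the synthetic rainfall series is an assumption.

```python
# Hedged "failure-to-reject" threshold search: fit a generalized Pareto
# distribution to excesses over each candidate threshold and keep the lowest
# threshold whose fit is not rejected (KS test used as a stand-in).
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
daily_rain = rng.gamma(shape=0.4, scale=8.0, size=8000)        # synthetic daily depths (mm)

def first_accepted_threshold(data, candidates, alpha=0.10):
    for u in candidates:
        excesses = data[data > u] - u
        if excesses.size < 50:                                  # too few exceedances to fit
            break
        c, loc, scale = stats.genpareto.fit(excesses, floc=0.0)
        p_value = stats.kstest(excesses, "genpareto", args=(c, loc, scale)).pvalue
        if p_value > alpha:                                     # failure to reject the GPD fit
            return u, c, scale, p_value
    return None

print(first_accepted_threshold(daily_rain, np.arange(1.0, 40.0, 1.0)))
```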
Zhang, Yifei; Kang, Jian
2017-11-01
The building of biomass combined heat and power (CHP) plants is an effective means of developing biomass energy because they can satisfy demands for winter heating and electricity consumption. The purpose of this study was to analyse the effect of the distribution density of a biomass CHP plant network on heat utilisation efficiency in a village-town system. The distribution density is determined based on the heat transmission threshold, and the heat utilisation efficiency is determined based on the heat demand distribution, heat output efficiency, and heat transmission loss. The objective of this study was to ascertain the optimal value for the heat transmission threshold using a multi-scheme comparison based on an analysis of these factors. To this end, a model of a biomass CHP plant network was built using geographic information system tools to simulate and generate three planning schemes with different heat transmission thresholds (6, 8, and 10 km) according to the heat demand distribution. The heat utilisation efficiencies of these planning schemes were then compared by calculating the gross power, heat output efficiency, and heat transmission loss of the biomass CHP plant for each scenario. This multi-scheme comparison yielded the following results: when the heat transmission threshold was low, the distribution density of the biomass CHP plant network was high and the biomass CHP plants tended to be relatively small. In contrast, when the heat transmission threshold was high, the distribution density of the network was low and the biomass CHP plants tended to be relatively large. When the heat transmission threshold was 8 km, the distribution density of the biomass CHP plant network was optimised for efficient heat utilisation. To promote the development of renewable energy sources, a planning scheme for a biomass CHP plant network that maximises heat utilisation efficiency can be obtained using the optimal heat transmission threshold and the nonlinearity coefficient for local roads. Copyright © 2017 Elsevier Ltd. All rights reserved.
Selection of entropy-measure parameters for knowledge discovery in heart rate variability data.
Mayer, Christopher C; Bachler, Martin; Hörtenhuber, Matthias; Stocker, Christof; Holzinger, Andreas; Wassertheurer, Siegfried
2014-01-01
Heart rate variability is the variation of the time interval between consecutive heartbeats. Entropy is a commonly used tool to describe the regularity of data sets. Entropy functions are defined using multiple parameters, the selection of which is controversial and depends on the intended purpose. This study describes the results of tests conducted to support parameter selection, towards the goal of enabling further biomarker discovery. This study deals with approximate, sample, fuzzy, and fuzzy measure entropies. All data were obtained from PhysioNet, a free-access, on-line archive of physiological signals, and represent various medical conditions. Five tests were defined and conducted to examine the influence of: varying the threshold value r (as multiples of the sample standard deviation σ, or the entropy-maximizing rChon), the data length N, the weighting factors n for fuzzy and fuzzy measure entropies, and the thresholds rF and rL for fuzzy measure entropy. The results were tested for normality using Lilliefors' composite goodness-of-fit test. Consequently, the p-value was calculated with either a two sample t-test or a Wilcoxon rank sum test. The first test shows a cross-over of entropy values with regard to a change of r. Thus, a clear statement that a higher entropy corresponds to a high irregularity is not possible, but is rather an indicator of differences in regularity. N should be at least 200 data points for r = 0.2 σ and should even exceed a length of 1000 for r = rChon. The results for the weighting parameters n for the fuzzy membership function show different behavior when coupled with different r values, therefore the weighting parameters have been chosen independently for the different threshold values. The tests concerning rF and rL showed that there is no optimal choice, but r = rF = rL is reasonable with r = rChon or r = 0.2σ. Some of the tests showed a dependency of the test significance on the data at hand. Nevertheless, as the medical conditions are unknown beforehand, compromises had to be made. Optimal parameter combinations are suggested for the methods considered. Yet, due to the high number of potential parameter combinations, further investigations of entropy for heart rate variability data will be necessary.
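For concreteness, a direct O(N^2) sample entropy implementation using m = 2 and r = 0.2 sigma, the parameter choice discussed above; the simulated RR-interval series and the simplified template counting are assumptions, not the study's code.

```python
# Direct O(N^2) sample entropy sketch with m = 2 and r = 0.2*sigma; the
# RR-interval series is simulated.
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()
    n = len(x)

    def count_matches(length):
        templates = np.array([x[i:i + length] for i in range(n - length)])
        count = 0
        for i in range(len(templates)):
            dist = np.max(np.abs(templates - templates[i]), axis=1)   # Chebyshev distance
            count += np.sum(dist <= r) - 1                            # exclude the self-match
        return count

    b, a = count_matches(m), count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

rng = np.random.default_rng(6)
rr_intervals = 0.8 + 0.05 * rng.standard_normal(1000)                 # seconds
print(f"SampEn(m=2, r=0.2*sigma) = {sample_entropy(rr_intervals):.3f}")
```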
HbA1c as a Screening tool for Ketosis in Patients with Type 2 Diabetes Mellitus
Zhu, Bing; Bu, Le; Zhang, Manna; Gusdon, Aaron M.; Zheng, Liang; Rampersad, Sharvan; Li, Jue; Qu, Shen
2016-01-01
Ketosis in patients with type 2 diabetes mellitus (T2DM) is overlooked due to atypical symptoms. The objective of this study is to evaluate the value of hemoglobin A1c (HbA1c) as a screening tool for ketosis in T2DM patients. This retrospective study consisted of 253 T2DM patients with ketosis at Shanghai 10th People’s Hospital during a period from January 1, 2011 to June 30, 2015. A control group consisted of 221 T2DM patients without ketosis randomly selected from inpatients during the same period. Receiver operating characteristic curve (ROC) analysis was used to examine the sensitivity and specificity of HbA1c as an indicator for ketosis. Higher HbA1c levels were correlated with ketosis. In patients with newly diagnosed T2DM, the area under the curve (AUC) was 0.832, with 95% confidence interval (CI) 0.754–0.911. The optimal threshold was 10.1% (87 mmol/mol). In patients with previously diagnosed T2DM, the AUC was 0.811 (95% CI: 0.767–0.856), with an optimal threshold of 8.6% (70 mmol/mol). HbA1c is a potential screening tool for ketosis in patients with T2DM. Ketosis is much more likely with HbA1c values at ≥10.1% in patients with newly diagnosed T2DM and HbA1c values at ≥8.6% in patients with previously diagnosed T2DM. PMID:28009017
Performance evaluation and optimization of the MiniPET-II scanner
NASA Astrophysics Data System (ADS)
Lajtos, Imre; Emri, Miklos; Kis, Sandor A.; Opposits, Gabor; Potari, Norbert; Kiraly, Beata; Nagy, Ferenc; Tron, Lajos; Balkay, Laszlo
2013-04-01
This paper presents results of the performance of a small animal PET system (MiniPET-II) installed at our Institute. MiniPET-II is a full ring camera that includes 12 detector modules in a single ring comprised of 1.27×1.27×12 mm3 LYSO scintillator crystals. The axial field of view and the inner ring diameter are 48 mm and 211 mm, respectively. The goal of this study was to determine the NEMA-NU4 performance parameters of the scanner. In addition, we also investigated how the calculated parameters depend on the coincidence time window (τ=2, 3 and 4 ns) and the low threshold settings of the energy window (Elt=250, 350 and 450 keV). Independent measurements supported optimization of the effective system radius and the coincidence time window of the system. We found that the optimal coincidence time window and low threshold energy window are 3 ns and 350 keV, respectively. The spatial resolution was close to 1.2 mm in the center of the FOV with an increase of 17% at the radial edge. The maximum value of the absolute sensitivity was 1.37% for a point source. Count rate tests resulted in peak values for the noise equivalent count rate (NEC) curve and scatter fraction of 14.2 kcps (at 36 MBq) and 27.7%, respectively, using the rat phantom. Numerical values of the same parameters obtained for the mouse phantom were 55.1 kcps (at 38.8 MBq) and 12.3%, respectively. The recovery coefficients of the image quality phantom ranged from 0.1 to 0.87. Altering the τ and Elt resulted in substantial changes in the NEC peak and the sensitivity while the effect on the image quality was negligible. The spatial resolution proved to be, as expected, independent of the τ and Elt. The calculated optimal effective system radius (resulting in the best image quality) was 109 mm. Although the NEC peak parameters do not compare favorably with those of other small animal scanners, it can be concluded that under normal counting situations the MiniPET-II imaging capability assures remarkably good image quality, sensitivity and spatial resolution.
Non-adaptive and adaptive hybrid approaches for enhancing water quality management
NASA Astrophysics Data System (ADS)
Kalwij, Ineke M.; Peralta, Richard C.
2008-09-01
Summary: Using optimization to help solve groundwater management problems cost-effectively is becoming increasingly important. Hybrid optimization approaches, that combine two or more optimization algorithms, will become valuable and common tools for addressing complex nonlinear hydrologic problems. Hybrid heuristic optimizers have capabilities far beyond those of a simple genetic algorithm (SGA), and are continuously improving. SGAs having only parent selection, crossover, and mutation are inefficient and rarely used for optimizing contaminant transport management. Even an advanced genetic algorithm (AGA) that includes elitism (to emphasize using the best strategies as parents) and healing (to help assure optimal strategy feasibility) is undesirably inefficient. Much more efficient than an AGA is the presented hybrid (AGCT), which adds comprehensive tabu search (TS) features to an AGA. TS mechanisms (TS probability, tabu list size, search coarseness and solution space size, and a TS threshold value) force the optimizer to search portions of the solution space that yield superior pumping strategies, and to avoid reproducing similar or inferior strategies. An AGCT characteristic is that TS control parameters are unchanging during optimization. However, TS parameter values that are ideal for optimization commencement can be undesirable when nearing assumed global optimality. The second presented hybrid, termed global converger (GC), is significantly better than the AGCT. GC includes AGCT plus feedback-driven auto-adaptive control that dynamically changes TS parameters during run-time. Before comparing AGCT and GC, we empirically derived scaled dimensionless TS control parameter guidelines by evaluating 50 sets of parameter values for a hypothetical optimization problem. For the hypothetical area, AGCT optimized both well locations and pumping rates. The parameters are useful starting values because using trial-and-error to identify an ideal combination of control parameter values for a new optimization problem can be time consuming. For comparison, AGA, AGCT, and GC are applied to optimize pumping rates for assumed well locations of a complex large-scale contaminant transport and remediation optimization problem at Blaine Naval Ammunition Depot (NAD). Both hybrid approaches converged more closely to the optimal solution than the non-hybrid AGA. GC averaged 18.79% better convergence than AGCT, and 31.9% than AGA, within the same computation time (12.5 days). AGCT averaged 13.1% better convergence than AGA. The GC can significantly reduce the burden of employing computationally intensive hydrologic simulation models within a limited time period and for real-world optimization problems. Although demonstrated for a groundwater quality problem, it is also applicable to other arenas, such as managing salt water intrusion and surface water contaminant loading.
Variational-based segmentation of bio-pores in tomographic images
NASA Astrophysics Data System (ADS)
Bauer, Benjamin; Cai, Xiaohao; Peth, Stephan; Schladitz, Katja; Steidl, Gabriele
2017-01-01
X-ray computed tomography (CT) combined with a quantitative analysis of the resulting volume images is a fruitful technique in soil science. However, the variations in X-ray attenuation due to different soil components keep the segmentation of single components within these highly heterogeneous samples a challenging problem. Particularly demanding are bio-pores due to their elongated shape and the low gray value difference to the surrounding soil structure. Recently, variational models in connection with algorithms from convex optimization were successfully applied for image segmentation. In this paper we apply these methods for the first time for the segmentation of bio-pores in CT images of soil samples. We introduce a novel convex model which enforces smooth boundaries of bio-pores and takes the varying attenuation values in the depth into account. Segmentation results are reported for different real-world 3D data sets as well as for simulated data. These results are compared with two gray value thresholding methods, namely indicator kriging and a global thresholding procedure, and with a morphological approach. Pros and cons of the methods are assessed by considering geometric features of the segmented bio-pore systems. The variational approach features well-connected smooth pores while not detecting smaller or shallower pores. This is an advantage in cases where the main bio-pores network is of interest and where infillings, e.g., excrements of earthworms, would result in losing pore connections as observed for the other thresholding methods.
NASA Astrophysics Data System (ADS)
Braud, A.; Girard, S.; Doualan, J. L.; Thuau, M.; Moncorgé, R.; Tkachuk, A. M.
2000-02-01
Energy-transfer processes have been quantitatively studied in various Tm:Yb-doped fluoride crystals. A comparison between the three host crystals which have been examined (KY₃F₁₀, LiYF₄, and BaY₂F₈) shows clearly that the efficiency of the Yb→Tm energy transfers is larger in KY₃F₁₀ than in LiYF₄ or BaY₂F₈. The dependence of the energy-transfer parameters upon the codopant concentrations has been experimentally measured and compared with the results calculated on the basis of migration-assisted energy-transfer models. Using these energy-transfer parameters and a rate equation model, we have performed a theoretical calculation of the laser thresholds for the ³H₄→³F₄ and ³H₄→³H₅ laser transitions of the Tm ion around 1.5 and 2.3 μm, respectively. Laser experiments performed at 1.5 μm in Yb:Tm:LiYF₄ then led to laser threshold values in good agreement with those derived theoretically. Based on these results, optimized values for the Yb and Tm dopant concentrations for typical values of laser cavity and pump modes were finally derived to minimize the threshold pump powers for the laser transitions around 1.5 and 2.3 μm.
Jiang, Wei-jie; Jin, Fan; Zhou, Li-ming
2016-05-01
To investigate the influence of the DNA integrity of optimized sperm on the embryonic development and clinical outcomes of in vitro fertilization and embryo transfer (IVF-ET). This study included 605 cycles of conventional IVF-ET for pure oviductal infertility performed from January 1, 2013 to December 31, 2014. On the day of retrieval, we examined the DNA integrity of the sperm using the sperm chromatin dispersion method. According to the ROC curve and Youden index, we grouped the cycles based on the sperm DNA fragmentation index (DFI) threshold value for predicting implantation failure, early miscarriage, and fertilization failure, followed by analysis of the correlation between DFI and the outcomes of IVF-ET. According to the DFI threshold values obtained, the 605 cycles fell into four groups (DFI value < 5%, 5-10%, 10-15%, and ≥ 15%). Statistically significant differences were observed among the four groups in the rates of fertilization, cleavage, high-quality embryo, implantation, clinical pregnancy, early miscarriage, and live birth (P < 0.05), but not in the rates of multiple pregnancy, premature birth, and low birth weight (P > 0.05). DFI was found to be correlated negatively with the rates of fertilization (r = -0.32, P < 0.01), cleavage (r = -0.19, P < 0.01), high-quality embryo (r = -0.40, P < 0.01), clinical pregnancy (r = -0.20, P < 0.01), and live birth (r = -0.09 P = 0.04), positively with the rate of early miscarriage (r = 0.23, P < 0.01), but not with the rates of multiple pregnancy (r = -0.01, P = 0.83), premature birth (r = 0.04, P = 0.54), and low birth weight (r = 0.03, P = 0.62). The DNA integrity of optimized sperm influences fertilization, embryonic development, early miscarriage, and live birth of IVF-ET, but its correlation with premature birth and low birth weight has to be further studied.
Lin, Guoping; Candela, Y; Tillement, O; Cai, Zhiping; Lefèvre-Seguin, V; Hare, J
2012-12-15
A method based on thermal bistability for ultralow-threshold microlaser optimization is demonstrated. When sweeping the pump laser frequency across a pump resonance, the dynamic thermal bistability slows down the power variation. The resulting line shape modification enables a real-time monitoring of the laser characteristic. We demonstrate this method for a functionalized microsphere exhibiting a submicrowatt laser threshold. This approach is confirmed by comparing the results with a step-by-step recording in quasi-static thermal conditions.
Raghunandhan, S; Ravikumar, A; Kameswaran, Mohan; Mandke, Kalyani; Ranjith, R
2014-05-01
Indications for cochlear implantation have expanded today to include very young children and those with syndromes/multiple handicaps. Programming the implant based on behavioural responses may be tedious for audiologists in such cases, wherein matching an effective Measurable Auditory Percept (MAP) and appropriate MAP becomes the key issue in the habilitation program. In 'Difficult to MAP' scenarios, objective measures become paramount to predict optimal current levels to be set in the MAP. We aimed to (a) study the trends in multi-modal electrophysiological tests and behavioural responses sequentially over the first year of implant use; (b) generate normative data from the above; (c) correlate the multi-modal electrophysiological thresholds levels with behavioural comfort levels; and (d) create predictive formulae for deriving optimal comfort levels (if unknown), using linear and multiple regression analysis. This prospective study included 10 profoundly hearing impaired children aged between 2 and 7 years with normal inner ear anatomy and no additional handicaps. They received the Advanced Bionics HiRes 90 K Implant with Harmony Speech processor and used HiRes-P with Fidelity 120 strategy. They underwent, impedance telemetry, neural response imaging, electrically evoked stapedial response telemetry (ESRT), and electrically evoked auditory brainstem response (EABR) tests at 1, 4, 8, and 12 months of implant use, in conjunction with behavioural mapping. Trends in electrophysiological and behavioural responses were analyzed using paired t-test. By Karl Pearson's correlation method, electrode-wise correlations were derived for neural response imaging (NRI) thresholds versus most comfortable level (M-levels) and offset based (apical, mid-array, and basal array) correlations for EABR and ESRT thresholds versus M-levels were calculated over time. These were used to derive predictive formulae by linear and multiple regression analysis. Such statistically predicted M-levels were compared with the behaviourally recorded M-levels among the cohort, using Cronbach's alpha reliability test method for confirming the efficacy of this method. NRI, ESRT, and EABR thresholds showed statistically significant positive correlations with behavioural M-levels, which improved with implant use over time. These correlations were used to derive predicted M-levels using regression analysis. On an average, predicted M-levels were found to be statistically reliable and they were a fair match to the actual behavioural M-levels. When applied in clinical practice, the predicted values were found to be useful for programming members of the study group. However, individuals showed considerable deviations in behavioural M-levels, above and below the electrophysiologically predicted values, due to various factors. While the current method appears helpful as a reference to predict initial maps in 'difficult to Map' subjects, it is recommended that behavioural measures are mandatory to further optimize the maps for these individuals. The study explores the trends, correlations and individual variabilities that occur between electrophysiological tests and behavioural responses, recorded over time among a cohort of cochlear implantees. The statistical method shown may be used as a guideline to predict optimal behavioural levels in difficult situations among future implantees, bearing in mind that optimal M-levels for individuals can vary from predicted values. 
In 'Difficult to MAP' scenarios, following a protocol of sequential behavioural programming, in conjunction with electrophysiological correlates will provide the best outcomes.
NASA Astrophysics Data System (ADS)
Mahalakshmi; Murugesan, R.
2018-04-01
This paper addresses the minimization of the total cost of greenhouse gas (GHG) efficiency in an Automated Storage and Retrieval System (AS/RS). A mathematical model is constructed based on the tax cost, penalty cost, and discount cost of GHG emission of the AS/RS. A two-stage algorithm, namely the positive selection based clonal selection principle (PSBCSP), is used to find the optimal solution of the constructed model. In the first stage, the positive selection principle is used to reduce the search space of the optimal solution by fixing a threshold value. In the second stage, the clonal selection principle is used to generate the best solutions. The obtained results are compared with other existing algorithms in the literature, showing that the proposed algorithm yields better results.
On the optimal z-score threshold for SISCOM analysis to localize the ictal onset zone.
De Coster, Liesbeth; Van Laere, Koen; Cleeren, Evy; Baete, Kristof; Dupont, Patrick; Van Paesschen, Wim; Goffin, Karolien E
2018-04-17
In epilepsy patients, SISCOM or subtraction ictal single photon emission computed tomography co-registered to magnetic resonance imaging has become a routinely used, non-invasive technique to localize the ictal onset zone (IOZ). Thresholding of clusters with a predefined number of standard deviations from normality (z-score) is generally accepted to localize the IOZ. In this study, we aimed to assess the robustness of this parameter in a group of patients with well-characterized drug-resistant epilepsy in whom the exact location of the IOZ was known after successful epilepsy surgery. Eighty patients underwent preoperative SISCOM and were seizure free in a postoperative period of minimum 1 year. SISCOMs with z-threshold 2 and 1.5 were analyzed by two experienced readers separately, blinded from the clinical ground truth data. Their reported location of the IOZ was compared with the operative resection zone. Furthermore, confidence scores of the SISCOM IOZ were compared for the two thresholds. Visual reporting with a z-score threshold of 1.5 and 2 showed no statistically significant difference in localizing correspondence with the ground truth (70 vs. 72% respectively, p = 0.17). Interrater agreement was moderate (κ = 0.65) at the threshold of 1.5, but high (κ = 0.84) at a threshold of 2, where also reviewers were significantly more confident (p < 0.01). SISCOM is a clinically useful, routinely used modality in the preoperative work-up in many epilepsy surgery centers. We found no significant differences in localizing value of the IOZ using a threshold of 1.5 or 2, but interrater agreement and reader confidence were higher using a z-score threshold of 2.
Multiple Fingers - One Gestalt.
Lezkan, Alexandra; Manuel, Steven G; Colgate, J Edward; Klatzky, Roberta L; Peshkin, Michael A; Drewing, Knut
2016-01-01
The Gestalt theory of perception offered principles by which distributed visual sensations are combined into a structured experience ("Gestalt"). We demonstrate conditions whereby haptic sensations at two fingertips are integrated in the perception of a single object. When virtual bumps were presented simultaneously to the right hand's thumb and index finger during lateral arm movements, participants reported perceiving a single bump. A discrimination task measured the bump's perceived location and perceptual reliability (assessed by differential thresholds) for four finger configurations, which varied in their adherence to the Gestalt principles of proximity (small versus large finger separation) and synchrony (virtual spring to link movements of the two fingers versus no spring). According to models of integration, reliability should increase with the degree to which multi-finger cues integrate into a unified percept. Differential thresholds were smaller in the virtual-spring condition (synchrony) than when fingers were unlinked. Additionally, in the condition with reduced synchrony, greater proximity led to lower differential thresholds. Thus, with greater adherence to Gestalt principles, thresholds approached values predicted for optimal integration. We conclude that the Gestalt principles of synchrony and proximity apply to haptic perception of surface properties and that these principles can interact to promote multi-finger integration.
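The "values predicted for optimal integration" refer to maximum-likelihood cue combination, in which cue reliabilities (inverse variances) add, so the combined differential threshold falls below either single-cue threshold. A minimal sketch with made-up single-finger threshold values:

```python
import math

# Hypothetical single-finger differential thresholds (arbitrary units).
# Under maximum-likelihood integration, reliabilities (1/variance) add,
# so the predicted combined threshold is below either single-cue threshold.
t_thumb, t_index = 1.2, 1.5
var_thumb, var_index = t_thumb ** 2, t_index ** 2

var_combined = 1.0 / (1.0 / var_thumb + 1.0 / var_index)
t_combined = math.sqrt(var_combined)
print(f"predicted optimally integrated threshold: {t_combined:.3f}")
# Observed two-finger thresholds approaching this value indicate integration.
```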
Prevalence scaling: applications to an intelligent workstation for the diagnosis of breast cancer.
Horsch, Karla; Giger, Maryellen L; Metz, Charles E
2008-11-01
Our goal was to investigate the effects that changes in the prevalence of cancer in a population have on the probability of malignancy (PM) output and on the optimal combination of true-positive fraction (TPF) and false-positive fraction (FPF) of a mammographic and sonographic automatic classifier for the diagnosis of breast cancer. We investigate how a prevalence-scaling transformation, used to change the prevalence inherent in the computer estimates of the PM, affects the numerical and histographic output of a previously developed multimodality intelligent workstation. Using Bayes' rule and the binormal model, we study how changes in the prevalence of cancer in the diagnostic breast population affect our computer classifiers' optimal operating points, as defined by maximizing the expected utility. Prevalence scaling affects the threshold at which a particular TPF and FPF pair is achieved. Tables giving the thresholds on the scaled PM estimates that result in particular pairs of TPF and FPF are presented. Histograms of PMs scaled to reflect clinically relevant prevalence values differ greatly from histograms of laboratory-designed PMs. The optimal pair (TPF, FPF) of our lower-performing mammographic classifier is more sensitive to changes in clinical prevalence than that of our higher-performing sonographic classifier. Prevalence scaling can be used to change the computer PM output to reflect a clinically more appropriate prevalence. Relatively small changes in clinical prevalence can have large effects on the computer classifier's optimal operating point.
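A minimal sketch of prevalence scaling as it is usually formulated: Bayes' rule is applied to the (prevalence-independent) likelihood ratio so that a PM estimate produced at the laboratory prevalence is re-expressed at a clinically relevant prevalence. The prevalence values below are illustrative only, not the study's.

```python
def prevalence_scale(pm, p_lab, p_clin):
    """Rescale a probability-of-malignancy estimate trained at prevalence
    p_lab so that it reflects a clinical prevalence p_clin (Bayes' rule
    applied to the likelihood ratio, which is prevalence-independent)."""
    num = pm * (p_clin / p_lab)
    den = num + (1.0 - pm) * ((1.0 - p_clin) / (1.0 - p_lab))
    return num / den

# Example: a classifier developed on an enriched laboratory set (50% cancer)
# applied in a diagnostic population with ~3% prevalence (illustrative values).
for pm in (0.2, 0.5, 0.8):
    print(pm, "->", round(prevalence_scale(pm, 0.50, 0.03), 4))
```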
Subtil, Fabien; Rabilloud, Muriel
2015-07-01
Receiver operating characteristic (ROC) curves are often used to compare continuous diagnostic tests or to determine the optimal threshold of a test; however, they do not consider the costs of misclassifications or the disease prevalence. The ROC graph was extended to allow for these aspects. Two new lines are added to the ROC graph: a sensitivity line and a specificity line. Their slopes depend on the disease prevalence and on the ratio of the net benefit of treating a diseased subject to the net cost of treating a nondiseased one. First, these lines help researchers determine the range of specificities within which comparison of tests by partial areas under the curves is clinically relevant. Second, the ROC curve point farthest from the specificity line is shown to be the optimal threshold in terms of expected utility. This method was applied: (1) to determine the optimal threshold of the ratio of specific immunoglobulin G (IgG) to total IgG for the diagnosis of congenital toxoplasmosis and (2) to select, between two markers, the more accurate for the diagnosis of left ventricular hypertrophy in hypertensive subjects. The two additional lines transform the statistically valid ROC graph into a clinically relevant tool for test selection and threshold determination. Copyright © 2015 Elsevier Inc. All rights reserved.
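The geometric construction can be translated into a simple numerical search: the optimal threshold maximizes expected utility, which for a binary decision reduces to maximizing Se - m(1 - Sp), with m determined by the prevalence and the benefit/cost ratio. The sketch below uses synthetic marker values and assumed prevalence and benefit ratio, not the paper's data.

```python
import numpy as np

def optimal_threshold(scores, labels, prevalence, benefit_ratio):
    """Pick the test threshold maximizing expected utility.

    benefit_ratio = (net benefit of treating a diseased subject) /
                    (net cost of treating a nondiseased subject).
    Maximizing prevalence*benefit*Se - (1-prevalence)*cost*(1-Sp) is
    equivalent to maximizing Se - m*(1-Sp) with
    m = (1 - prevalence) / (prevalence * benefit_ratio).
    """
    m = (1.0 - prevalence) / (prevalence * benefit_ratio)
    best = None
    for t in np.unique(scores):
        pred = scores >= t
        se = np.mean(pred[labels == 1])           # sensitivity at threshold t
        sp = np.mean(~pred[labels == 0])          # specificity at threshold t
        utility = se - m * (1.0 - sp)
        if best is None or utility > best[0]:
            best = (utility, t, se, sp)
    return best

rng = np.random.default_rng(0)
labels = rng.integers(0, 2, 500)
scores = rng.normal(loc=labels * 1.2, scale=1.0)  # synthetic marker values
print(optimal_threshold(scores, labels, prevalence=0.2, benefit_ratio=3.0))
```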
41 CFR 102-73.40 - What happens if the dollar value of the project exceeds the prospectus threshold?
Code of Federal Regulations, 2010 CFR
2010-07-01
... dollar value of the project exceeds the prospectus threshold? 102-73.40 Section 102-73.40 Public... § 102-73.40 What happens if the dollar value of the project exceeds the prospectus threshold? Projects... the prospectus threshold. To obtain this approval, the Administrator of General Services will transmit...
Briaire, Jeroen J; Frijns, Johan H M
2006-04-01
Cochlear implant research endeavors to optimize the spatial selectivity, threshold and dynamic range with the objective of improving the speech perception performance of the implant user. One of the ways to achieve some of these goals is by electrode design. New cochlear implant electrode designs strive to bring the electrode contacts into close proximity to the nerve fibers in the modiolus: this is done by placing the contacts on the medial side of the array and positioning the implant against the medial wall of scala tympani. The question remains whether this is the optimal position for a cochlea with intact neural fibers and, if so, whether it is also true for a cochlea with degenerated neural fibers. In this study a computational model of the implanted human cochlea is used to investigate the optimal position of the array with respect to threshold, dynamic range and spatial selectivity for a cochlea with intact nerve fibers and for degenerated nerve fibers. In addition, the model is used to evaluate the predictive value of eCAP measurements for obtaining peri-operative information on the neural status. The model predicts improved threshold, dynamic range and spatial selectivity for the peri-modiolar position at the basal end of the cochlea, with minimal influence of neural degeneration. At the apical end of the array (1.5 cochlear turns), the dynamic range and the spatial selectivity are limited due to the occurrence of cross-turn stimulation, with the exception of the condition without neural degeneration and with the electrode array along the lateral wall of scala tympani. The eCAP simulations indicate that a large P(0) peak occurs before the N(1)P(1) complex when the fibers are not degenerated. The absence of this peak might be used as an indicator for neural degeneration.
Two-step adaptive management for choosing between two management actions
Moore, Alana L.; Walker, Leila; Runge, Michael C.; McDonald-Madden, Eve; McCarthy, Michael A
2017-01-01
Adaptive management is widely advocated to improve environmental management. Derivations of optimal strategies for adaptive management, however, tend to be case specific and time consuming. In contrast, managers might seek relatively simple guidance, such as insight into when a new potential management action should be considered, and how much effort should be expended on trialing such an action. We constructed a two-time-step scenario where a manager is choosing between two possible management actions. The manager has a total budget that can be split between a learning phase and an implementation phase. We use this scenario to investigate when and how much a manager should invest in learning about the management actions available. The optimal investment in learning can be understood intuitively by accounting for the expected value of sample information, the benefits that accrue during learning, the direct costs of learning, and the opportunity costs of learning. We find that the optimal proportion of the budget to spend on learning is characterized by several critical thresholds that mark a jump from spending a large proportion of the budget on learning to spending nothing. For example, as sampling variance increases, it is optimal to spend a larger proportion of the budget on learning, up to a point: if the sampling variance passes a critical threshold, it is no longer beneficial to invest in learning. Similar thresholds are observed as a function of the total budget and the difference in the expected performance of the two actions. We illustrate how this model can be applied using a case study of choosing between alternative rearing diets for hihi, an endangered New Zealand passerine. Although the model presented is a simplified scenario, we believe it is relevant to many management situations. Managers often have relatively short time horizons for management, and might be reluctant to consider further investment in learning and monitoring beyond collecting data from a single time period.
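The authors derive their results analytically; the sketch below is only a Monte Carlo illustration of the same trade-off under assumed action means, sampling variance, and budget: a fraction q of the budget funds equal trials of both actions, and the remainder implements whichever action looked better in the trials.

```python
import numpy as np

rng = np.random.default_rng(42)

def expected_outcome(q, budget=100.0, mu=(1.0, 1.2), sigma=2.0,
                     unit_cost=1.0, n_sim=4000):
    """Monte Carlo sketch: a fraction q of the budget funds a learning phase
    (equal trials of both actions); the remainder implements whichever
    action looked better. Returns the mean total benefit."""
    n_trials = int(q * budget // (2 * unit_cost))        # trials per action
    implement_units = (budget - 2 * n_trials * unit_cost) / unit_cost
    totals = []
    for _ in range(n_sim):
        if n_trials > 0:
            obs = [rng.normal(m, sigma, n_trials) for m in mu]
            learn_benefit = sum(o.sum() for o in obs)    # benefit accrued while learning
            choice = int(np.argmax([o.mean() for o in obs]))
        else:
            learn_benefit = 0.0
            choice = rng.integers(0, 2)                  # uninformed choice
        totals.append(learn_benefit + implement_units * mu[choice])
    return float(np.mean(totals))

for q in (0.0, 0.1, 0.2, 0.4, 0.6):
    print(f"learning fraction {q:.1f}: expected total benefit {expected_outcome(q):7.1f}")
```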
NASA Astrophysics Data System (ADS)
Wang, Shengling; Cui, Yong; Koodli, Rajeev; Hou, Yibin; Huang, Zhangqin
Due to the dynamics of topology and resources, Call Admission Control (CAC) plays a significant role in increasing the resource utilization ratio and guaranteeing users' QoS requirements in wireless/mobile networks. In this paper, a dynamic multi-threshold CAC scheme is proposed to serve multi-class service in a wireless/mobile network. The thresholds are renewed at the beginning of each time interval to react to the changing mobility rate and network load. To find suitable thresholds, a reward-penalty model is designed, which assigns different priorities to different service classes and call types through different reward/penalty policies according to the network load and average call arrival rate. To speed up the running time of CAC, an Optimized Genetic Algorithm (OGA) is presented, whose components (encoding, population initialization, fitness function, mutation, etc.) are all tailored to the traits of the CAC problem. Simulations demonstrate that the proposed CAC scheme outperforms similar schemes, which indicates that the optimization is realized. Finally, the simulations show the efficiency of OGA.
Chen, Sam Li-Sheng; Hsu, Chen-Yang; Yen, Amy Ming-Fang; Young, Graeme P; Chiu, Sherry Yueh-Hsia; Fann, Jean Ching-Yuan; Lee, Yi-Chia; Chiu, Han-Mo; Chiou, Shu-Ti; Chen, Hsiu-Hsi
2018-06-01
Background: Despite age and sex differences in fecal hemoglobin (f-Hb) concentrations, most fecal immunochemical test (FIT) screening programs use population-average cut-points for test positivity. The impact of age/sex-specific thresholds on FIT accuracy and colonoscopy demand for colorectal cancer screening is unknown. Methods: Using data from 723,113 participants enrolled in a Taiwanese population-based colorectal cancer screening program with a single FIT between 2004 and 2009, sensitivity and specificity were estimated for various f-Hb thresholds for test positivity. This included estimates based on a "universal" threshold, a receiver-operating-characteristic curve-derived threshold, targeted sensitivity, targeted false-positive rate, and a colonoscopy-capacity-adjusted method integrating colonoscopy workload, with and without age/sex adjustments. Results: Optimal age/sex-specific thresholds were found to be equal to or lower than the universal 20 μg Hb/g threshold. For older males, a higher threshold (24 μg Hb/g) was identified using a 5% false-positive rate. Importantly, a nonlinear relationship was observed between sensitivity and colonoscopy workload, with workload rising disproportionately to sensitivity at 16 μg Hb/g. At this "colonoscopy-capacity-adjusted" threshold, the test positivity (colonoscopy workload) was 4.67% and sensitivity was 79.5%, compared with a lower 4.0% workload and a lower 78.7% sensitivity using 20 μg Hb/g. When constrained on capacity, age/sex-adjusted estimates were generally lower. However, optimizing age/sex-adjusted thresholds increased colonoscopy demand across models by 17% or greater compared with a universal threshold. Conclusions: Age/sex-specific thresholds improve FIT accuracy with modest increases in colonoscopy demand. Impact: Colonoscopy-capacity-adjusted and age/sex-specific f-Hb thresholds may be useful in optimizing individual screening programs based on detection accuracy, population characteristics, and clinical capacity. Cancer Epidemiol Biomarkers Prev; 27(6); 704-9. ©2018 American Association for Cancer Research.
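The threshold comparison just described can be reproduced in outline with synthetic data: for each candidate f-Hb cutoff, compute sensitivity, specificity, and the positivity rate (a proxy for colonoscopy workload). The distributions below are illustrative and not calibrated to the Taiwanese cohort.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic faecal haemoglobin concentrations (μg Hb/g faeces): cancers tend
# to have higher f-Hb; parameters here are illustrative, not fitted to the study.
n = 50_000
cancer = rng.random(n) < 0.005
f_hb = np.where(cancer,
                rng.lognormal(mean=4.0, sigma=1.2, size=n),
                rng.lognormal(mean=1.0, sigma=1.2, size=n))

print("cutoff  sensitivity  specificity  positivity (workload)")
for cutoff in (10, 16, 20, 24):
    positive = f_hb >= cutoff
    sens = positive[cancer].mean()
    spec = (~positive[~cancer]).mean()
    print(f"{cutoff:>6}  {sens:10.3f}  {spec:11.3f}  {positive.mean():10.4f}")
```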
Methods for automatic trigger threshold adjustment
Welch, Benjamin J; Partridge, Michael E
2014-03-18
Methods are presented for adjusting trigger threshold values to compensate for drift in the quiescent level of a signal monitored for initiating a data recording event, thereby avoiding false triggering conditions. Initial threshold values are periodically adjusted by re-measuring the quiescent signal level, and adjusting the threshold values by an offset computation based upon the measured quiescent signal level drift. Re-computation of the trigger threshold values can be implemented on time based or counter based criteria. Additionally, a qualification width counter can be utilized to implement a requirement that a trigger threshold criterion be met a given number of times prior to initiating a data recording event, further reducing the possibility of a false triggering situation.
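A hedged sketch of the described logic (not the patented implementation): thresholds are periodically offset by the measured drift of the quiescent level, and a qualification counter requires several consecutive excursions before an event is declared. Signal values, limits, and the drift estimator are assumptions for illustration.

```python
import numpy as np

class DriftAdjustedTrigger:
    """Sketch: periodically re-measure the quiescent signal level and shift the
    trigger thresholds by the measured drift; require the threshold to be
    exceeded `qualify_width` consecutive samples before declaring a trigger,
    to suppress false events."""

    def __init__(self, low, high, qualify_width=3, recompute_every=1000):
        self.low, self.high = low, high
        self.baseline = 0.0
        self.qualify_width = qualify_width
        self.recompute_every = recompute_every
        self._count = 0
        self._samples_seen = 0
        self._recent = []

    def update(self, sample):
        self._samples_seen += 1
        self._recent.append(sample)
        # Periodic re-baselining: offset thresholds by the quiescent drift.
        if self._samples_seen % self.recompute_every == 0:
            drift = float(np.median(self._recent)) - self.baseline
            self.low += drift
            self.high += drift
            self.baseline += drift
            self._recent.clear()
        # Qualification counter: only trigger after repeated excursions.
        if sample > self.high or sample < self.low:
            self._count += 1
        else:
            self._count = 0
        return self._count >= self.qualify_width

trigger = DriftAdjustedTrigger(low=-1.0, high=1.0)
signal = np.concatenate([np.random.default_rng(0).normal(0.0, 0.1, 3000) + 0.3,
                         np.full(10, 2.0)])   # offset (drifted) baseline, then an event
events = [i for i, s in enumerate(signal) if trigger.update(s)]
print("first triggered sample index:", events[0] if events else None)
```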
Bossard, N; Descotes, F; Bremond, A G; Bobin, Y; De Saint Hilaire, P; Golfier, F; Awada, A; Mathevet, P M; Berrerd, L; Barbier, Y; Estève, J
2003-11-01
The prognostic value of cathepsin D has recently been recognized but, as with many quantitative tumor markers, its clinical use remains unclear, partly because of methodological issues in defining cut-off values. Guidelines have been proposed for analyzing quantitative prognostic factors, underlining the need to keep data continuous instead of categorizing them. Flexible approaches, parametric and non-parametric, have been proposed in order to improve knowledge of the functional form relating a continuous factor to the risk. We studied the prognostic value of cathepsin D in a retrospective hospital cohort of 771 patients with breast cancer, and focused our overall survival analysis, based on Cox regression, on two flexible approaches: smoothing splines and fractional polynomials. We also determined a cut-off value from the maximum likelihood estimate of a threshold model. These different approaches complemented each other for (1) identifying the functional form relating cathepsin D to the risk and obtaining a cut-off value, and (2) optimizing the adjustment for complex covariates such as age at diagnosis in the final multivariate Cox model. We found a significant increase in the death rate, reaching 70% with a doubling of the level of cathepsin D, above the threshold of 37.5 pmol mg(-1). The proper prognostic impact of this marker could be confirmed, and a methodology providing appropriate ways to use markers in clinical practice was proposed.
Anderson, Jeffrey R; Barrett, Steven F
2009-01-01
Image segmentation is the process of isolating distinct objects within an image. Computer algorithms have been developed to aid in the process of object segmentation, but a completely autonomous segmentation algorithm has yet to be developed [1]. This is because computers do not have the capability to understand images and recognize complex objects within the image. However, computer segmentation methods [2] requiring user input have been developed to quickly segment objects in serially sectioned images, such as magnetic resonance images (MRI) and confocal laser scanning microscope (CLSM) images. In these cases, the segmentation process becomes a powerful tool in visualizing the 3D nature of an object. The user input is an important part of improving the performance of many segmentation methods. A double-threshold segmentation method has been investigated [3] to separate objects in gray-scale images, where the gray level of the object lies within the range of the background gray levels. In order to best determine the threshold values for this segmentation method, the image must be manipulated for optimal contrast. The same is true of other segmentation and edge-detection methods as well. Typically, the better the image contrast, the better the segmentation results. This paper describes a graphical user interface (GUI) that allows the user to easily change image contrast parameters that will optimize the performance of subsequent object segmentation. This approach makes use of the fact that the human brain is extremely effective at object recognition and understanding. The GUI provides the user with the ability to define the gray-scale range of the object of interest. The lower and upper bounds of this range are used in a histogram-stretching process to improve image contrast. Also, the user can interactively modify the gamma correction factor, which provides a non-linear distribution of gray-scale values, while observing the corresponding changes to the image. This interactive approach gives the user the power to make optimal choices of the contrast-enhancement parameters.
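A minimal sketch of the contrast-enhancement step described above: histogram stretching between user-selected lower and upper gray-level bounds followed by gamma correction. The bounds and gamma value are arbitrary example inputs standing in for the GUI controls.

```python
import numpy as np

def enhance_contrast(image, lower, upper, gamma=1.0):
    """Histogram stretching between user-chosen gray-level bounds followed by
    gamma correction, as a preprocessing step before double-threshold
    segmentation. `image` is a float array scaled to [0, 1]."""
    stretched = np.clip((image - lower) / (upper - lower), 0.0, 1.0)
    return stretched ** gamma          # gamma < 1 brightens, gamma > 1 darkens

rng = np.random.default_rng(3)
img = rng.random((64, 64)) * 0.4 + 0.3   # low-contrast synthetic image
enhanced = enhance_contrast(img, lower=0.3, upper=0.7, gamma=0.8)
print("original range:", img.min().round(3), img.max().round(3))
print("enhanced range:", enhanced.min().round(3), enhanced.max().round(3))
```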
[The new German general threshold limit value for dust--pro and contra the adoption in Austria].
Godnic-Cvar, Jasminka; Ponocny, Ivo
2004-01-01
Since it has been realised that inhalation of inert dust is one of the important confounding variables for the development of chronic bronchitis, the threshold values for occupational exposure to these dusts need to be further decreased. The German Commission for the Investigation of Health Hazards of Chemical Compounds in the Work Area (MAK-Commission) set a new threshold (MAK-Value) for inert dusts (4 mg/m3 for inhalable dust, 1.5 mg/m3 for respirable dust) in 1997. This value is much lower than the threshold values currently used world-wide. The aim of the present article is to assess the scientific plausibility of the methodology (databases and statistics) used to set these new German MAK-Values, with regard to their adoption in Austria. Although we believe that it is essential to lower the MAK-Value for inert dust in order to prevent the development of chronic bronchitis as a consequence of occupational exposure to inert dusts, the methodology applied by the German MAK-Commission in 1997 to set the new MAK-Values does not justify the reduction of the threshold limit value. A carefully designed study to establish an appropriate scientific basis for setting a new threshold value for inert dusts in the workplace should be carried out. Meanwhile, at least the currently internationally applied threshold values should be adopted in Austria.
Fujisawa, Jun-Ichi; Osawa, Ayumi; Hanaya, Minoru
2016-08-10
Photoinduced carrier injection from dyes to inorganic semiconductors is a crucial process in various dye-sensitized solar energy conversions such as photovoltaics and photocatalysis. It has been reported that an energy offset larger than 0.2-0.3 eV (threshold value) is required for efficient electron injection from excited dyes to metal-oxide semiconductors such as titanium dioxide (TiO2). Because the energy offset directly causes a loss in the potential of the injected electrons, it is a crucial issue to minimize the energy offset for efficient solar energy conversions. However, a fundamental understanding of the energy offset, especially of the threshold value, has not yet been obtained. In this paper, we report the origin of the threshold value of the energy offset, solving the long-standing questions of why such a large energy offset is necessary for the electron injection and which factors govern the threshold value, and suggest a strategy to minimize the threshold value. The threshold value is determined by the sum of two reorganization energies in the one-electron reduction of semiconductors and of typically used donor-acceptor (D-A) dyes. In fact, the estimated values (0.21-0.31 eV) for several D-A dyes are in good agreement with the threshold value, supporting our conclusion. In addition, our results reveal that the threshold value can be reduced by enlarging the π-conjugated system of the acceptor moiety in dyes and enhancing its structural rigidity. Furthermore, we extend the analysis to hole injection from excited dyes to semiconductors. In this case, the threshold value is given by the sum of two reorganization energies in the one-electron oxidation of semiconductors and D-A dyes.
Optimal estimation of recurrence structures from time series
NASA Astrophysics Data System (ADS)
beim Graben, Peter; Sellers, Kristin K.; Fröhlich, Flavio; Hutt, Axel
2016-05-01
Recurrent temporal dynamics is a phenomenon observed frequently in high-dimensional complex systems, and its detection is a challenging task. Recurrence quantification analysis utilizing recurrence plots may extract such dynamics; however, it still encounters an unsolved, pertinent problem: the optimal selection of distance thresholds for estimating the recurrence structure of dynamical systems. The present work proposes a stochastic Markov model for the recurrent dynamics that allows for the analytical derivation of a criterion for the optimal distance threshold. The goodness of fit is assessed by a utility function which assumes a local maximum at the threshold reflecting the optimal estimate of the system's recurrence structure. We validate our approach by means of the nonlinear Lorenz system and its linearized stochastic surrogates. The final application to neurophysiological time series obtained from anesthetized animals illustrates the method and reveals novel dynamic features of the underlying system. We propose the number of optimal recurrence domains as a statistic for classifying an animal's state of consciousness.
Tang, Sanyi; Liang, Juhua; Tan, Yuanshun; Cheke, Robert A
2013-01-01
Impulsive differential equations (hybrid dynamical systems) can provide a natural description of pulse-like actions such as when a pesticide kills a pest instantly. However, pesticides may have long-term residual effects, with some remaining active against pests for several weeks, months or years. Therefore, a more realistic method for modelling chemical control in such cases is to use continuous or piecewise-continuous periodic functions which affect growth rates. How to evaluate the effects of the duration of the pesticide residual effectiveness on successful pest control is key to the implementation of integrated pest management (IPM) in practice. To address these questions in detail, we have modelled IPM including residual effects of pesticides in terms of fixed pulse-type actions. The stability threshold conditions for pest eradication are given. Moreover, effects of the killing efficiency rate and the decay rate of the pesticide on the pest and on its natural enemies, the duration of residual effectiveness, the number of pesticide applications and the number of natural enemy releases on the threshold conditions are investigated with regard to the extent of depression or resurgence resulting from pulses of pesticide applications and predator releases. Latin Hypercube Sampling/Partial Rank Correlation uncertainty and sensitivity analysis techniques are employed to investigate the key control parameters which are most significantly related to threshold values. The findings combined with Volterra's principle confirm that when the pesticide has a strong effect on the natural enemies, repeated use of the same pesticide can result in target pest resurgence. The results also indicate that there exists an optimal number of pesticide applications which can suppress the pest most effectively, and this may help in the design of an optimal control strategy.
Choi, Hyeok; Yang, Seong-Yoon; Cho, Hee-Seung; Kim, Woorim; Park, Eun-Cheol; Han, Kyu-Tae
2017-07-17
Many studies have assessed the volume-outcome relationship in cancer patients, but most focused on better outcomes in higher-volume groups rather than identifying a specific threshold that could assist in clinical decision-making for achieving the best outcomes. The current study suggests an optimal volume for achieving good outcomes, as an extension of previous studies on the volume-outcome relationship in stomach cancer patients. We used National Health Insurance Service (NHIS) Sampling Cohort data during 2004-2013, comprising healthcare claims for 2550 patients with newly diagnosed stomach cancer. We conducted survival analyses using the Cox proportional hazards model to investigate the association between three surgical-volume thresholds, derived with the Youden index, and cancer-specific mortality in stomach cancer patients. Overall, 17.10% of patients died due to cancer during the study period. The risk of mortality among patients who received surgical treatment gradually decreased with increasing surgical volume at the hospital, while the risk of mortality increased again in "high" surgical-volume hospitals, resulting in a J-shaped curve (mid-low = hazard ratio (HR) 0.773, 95% confidence interval (CI) 0.608-0.983; mid-high = HR 0.541, 95% CI 0.372-0.788; high = HR 0.659, 95% CI 0.473-0.917; ref = low). These associations were especially significant in regions with low surgical volumes and in less severe cases. The optimal surgical volume threshold was about 727.3 surgical cases for stomach cancer per hospital over a 1-year period in South Korea. However, such positive effects decreased after exceeding a certain volume of surgeries.
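A small sketch of threshold selection with the Youden index on synthetic volume/mortality data; the data generation and the orientation convention (higher volume is protective, so the marker is negated) are assumptions for illustration, not the study's cohort or its Cox-model analysis.

```python
import numpy as np

def youden_threshold(marker, event):
    """Cut-point maximizing Youden's J = sensitivity + specificity - 1.
    Convention: marker >= threshold predicts the event; negate the marker if
    higher values are protective (as with hospital surgical volume)."""
    best = (-1.0, None)
    for t in np.unique(marker):
        pred = marker >= t
        sens = np.mean(pred[event == 1])
        spec = np.mean(~pred[event == 0])
        j = sens + spec - 1.0
        if j > best[0]:
            best = (j, t)
    return best

rng = np.random.default_rng(11)
volume = rng.gamma(shape=2.0, scale=300.0, size=2000)          # cases per hospital-year
death = rng.random(2000) < np.clip(0.25 - volume / 4000, 0.05, 0.25)
j, t = youden_threshold(-volume, death.astype(int))            # negate: high volume protective
print(f"optimal volume threshold ~ {-t:.0f} cases (J = {j:.3f})")
```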
van Gestel, A J R; Clarenbach, C F; Stöwhas, A C; Teschler, S; Russi, E W; Teschler, H; Kohler, M
2012-01-01
Previous studies with small sample sizes reported contradictory findings as to whether pulmonary function tests can predict exercise-induced oxygen desaturation (EID). We aimed to evaluate whether forced expiratory volume in one second (FEV(1)), resting oxygen saturation (SpO(2)) and diffusion capacity for carbon monoxide (DLCO) are predictors of EID in chronic obstructive pulmonary disease (COPD). We measured FEV(1), DLCO, and SpO(2) at rest and during a 6-min walking test, as well as physical activity by an accelerometer. A drop in SpO(2) of >4% to a value <90% was defined as EID. To evaluate associations between measures of lung function and EID, univariate and multivariate analyses were used, and positive/negative predictive values were calculated. Receiver operating characteristic curve analysis was performed to determine the most useful threshold for predicting/excluding EID. We included 154 patients with COPD (87 females). The mean FEV(1) was 43.0% (19.2) predicted and the prevalence of EID was 61.7%. The only independent predictor of EID was FEV(1), and the optimal cutoff value of FEV(1) was 50% predicted (area under ROC curve, 0.85; p < 0.001). The positive predictive value of a threshold of FEV(1) <50% was 0.83 with a likelihood ratio of 3.03, and the negative predictive value of a threshold of FEV(1) ≥80% was 1.0. The severity of EID was correlated with daily physical activity (r = -0.31, p = 0.008). EID is highly prevalent among patients with COPD and can be predicted by FEV(1). EID seems to be associated with impaired daily physical activity, which supports its clinical importance. Copyright © 2012 S. Karger AG, Basel.
Optimization under uncertainty of parallel nonlinear energy sinks
NASA Astrophysics Data System (ADS)
Boroson, Ethan; Missoum, Samy; Mattei, Pierre-Olivier; Vergez, Christophe
2017-04-01
Nonlinear Energy Sinks (NESs) are a promising technique for passively reducing the amplitude of vibrations. Through nonlinear stiffness properties, a NES is able to passively and irreversibly absorb energy. Unlike the traditional Tuned Mass Damper (TMD), NESs do not require a specific tuning and absorb energy over a wider range of frequencies. Nevertheless, they are still only efficient over a limited range of excitations. In order to mitigate this limitation and maximize the efficiency range, this work investigates the optimization of multiple NESs configured in parallel. It is well known that the efficiency of a NES is extremely sensitive to small perturbations in loading conditions or design parameters. In fact, the efficiency of a NES has been shown to be nearly discontinuous in the neighborhood of its activation threshold. For this reason, uncertainties must be taken into account in the design optimization of NESs. In addition, the discontinuities require a specific treatment during the optimization process. In this work, the objective of the optimization is to maximize the expected value of the efficiency of NESs in parallel. The optimization algorithm is able to tackle design variables with uncertainty (e.g., nonlinear stiffness coefficients) as well as aleatory variables such as the initial velocity of the main system. The optimal design of several parallel NES configurations for maximum mean efficiency is investigated. Specifically, NES nonlinear stiffness properties, considered random design variables, are optimized for cases with 1, 2, 3, 4, 5, and 10 NESs in parallel. The distributions of efficiency for the optimal parallel configurations are compared to distributions of efficiencies of non-optimized NESs. It is observed that the optimization enables a sharp increase in the mean value of efficiency while reducing the corresponding variance, thus leading to more robust NES designs.
Quantifying Cerebellum Grey Matter and White Matter Perfusion Using Pulsed Arterial Spin Labeling
Li, Xiufeng; Sarkar, Subhendra N.; Purdy, David E.; Briggs, Richard W.
2014-01-01
To facilitate quantification of cerebellum cerebral blood flow (CBF), studies were performed to systematically optimize arterial spin labeling (ASL) parameters for measuring cerebellum perfusion, segment cerebellum to obtain separate CBF values for grey matter (GM) and white matter (WM), and compare FAIR ASST to PICORE. Cerebellum GM and WM CBF were measured with optimized ASL parameters using FAIR ASST and PICORE in five subjects. Influence of volume averaging in voxels on cerebellar grey and white matter boundaries was minimized by high-probability threshold masks. Cerebellar CBF values determined by FAIR ASST were 43.8 ± 5.1 mL/100 g/min for GM and 27.6 ± 4.5 mL/100 g/min for WM. Quantitative perfusion studies indicated that CBF in cerebellum GM is 1.6 times greater than that in cerebellum WM. Compared to PICORE, FAIR ASST produced similar CBF estimations but less subtraction error and lower temporal, spatial, and intersubject variability. These are important advantages for detecting group and/or condition differences in CBF values. PMID:24949416
NASA Astrophysics Data System (ADS)
Tankam, Israel; Tchinda Mouofo, Plaire; Mendy, Abdoulaye; Lam, Mountaga; Tewa, Jean Jules; Bowong, Samuel
2015-06-01
We investigate the effects of time delay and piecewise-linear threshold policy harvesting in a delayed predator-prey model. It is the first time that a Holling type III response function and the present threshold policy harvesting have been combined with time delay. The trajectories of our delayed system are bounded; the stability of each equilibrium is analyzed with and without delay; local bifurcations such as saddle-node and Hopf bifurcations occur; and optimal harvesting is also investigated. Numerical simulations are provided in order to illustrate each result.
CHOW PARAMETERS IN THRESHOLD LOGIC,
...respect to threshold functions, they provide the optimal test-synthesis method for completely specified 7-argument (or less) functions, reflect the... signs and relative magnitudes of realizing weights and threshold, and can be used themselves as approximating weights. Results are reproved in a...
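For concreteness, the Chow parameters of a completely specified Boolean function can be computed by enumeration; the sketch below (majority of three variables, a threshold function with weights (1, 1, 1) and threshold 2) is a generic illustration, not the report's synthesis procedure.

```python
from itertools import product

def chow_parameters(f, n):
    """Chow parameters of a Boolean function f: {0,1}^n -> {0,1}:
    m0 = number of true points, and m_i = number of true points with x_i = 1.
    For threshold (linearly separable) functions these parameters determine
    the function uniquely and reflect the signs and relative magnitudes of
    realizing weights, which underlies Chow-parameter synthesis."""
    params = [0] * (n + 1)
    for x in product((0, 1), repeat=n):
        if f(x):
            params[0] += 1
            for i, xi in enumerate(x):
                params[i + 1] += xi
    return params

# Example: the 3-input majority function, a threshold function with
# weights (1, 1, 1) and threshold 2.
majority = lambda x: int(sum(x) >= 2)
print(chow_parameters(majority, 3))   # -> [4, 3, 3, 3]
```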
Zhang, Lin; Huttin, Olivier; Marie, Pierre-Yves; Felblinger, Jacques; Beaumont, Marine; Chillou, Christian DE; Girerd, Nicolas; Mandry, Damien
2016-11-01
To compare three widely used methods for myocardial infarct (MI) sizing on late gadolinium-enhanced (LGE) magnetic resonance (MR) images: manual delineation and two semiautomated techniques (full-width at half-maximum [FWHM] and n-standard deviation [SD]). 3T phase-sensitive inversion-recovery (PSIR) LGE images of 114 patients after an acute MI (2-4 days and 6 months) were analyzed by two independent observers to determine both total and core infarct sizes (TIS/CIS). Manual delineation served as the reference for determination of optimal thresholds for semiautomated methods after thresholding at multiple values. Reproducibility and accuracy were expressed as overall bias ± 95% limits of agreement. Mean infarct sizes by manual methods were 39.0%/24.4% for the acute MI group (TIS/CIS) and 29.7%/17.3% for the chronic MI group. The optimal thresholds (ie, providing the closest mean value to the manual method) were FWHM30% and 3SD for the TIS measurement and FWHM45% and 6SD for the CIS measurement (paired t-test; all P > 0.05). The best reproducibility was obtained using FWHM. For TIS measurement in the acute MI group, intra-/interobserver agreements, from Bland-Altman analysis, with FWHM30%, 3SD, and manual were -0.02 ± 7.74%/-0.74 ± 5.52%, 0.31 ± 9.78%/2.96 ± 16.62% and -2.12 ± 8.86%/0.18 ± 16.12, respectively; in the chronic MI group, the corresponding values were 0.23 ± 3.5%/-2.28 ± 15.06, -0.29 ± 10.46%/3.12 ± 13.06% and 1.68 ± 6.52%/-2.88 ± 9.62%, respectively. A similar trend for reproducibility was obtained for CIS measurement. However, semiautomated methods produced inconsistent results (variabilities of 24-46%) compared to manual delineation. The FWHM technique was the most reproducible method for infarct sizing both in acute and chronic MI. However, both FWHM and n-SD methods showed limited accuracy compared to manual delineation. J. Magn. Reson. Imaging 2016;44:1206-1217. © 2016 International Society for Magnetic Resonance in Medicine.
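Both semiautomated criteria are simple intensity thresholds and can be sketched directly; the synthetic image, masks, and the 30%/3SD settings below are illustrative stand-ins for the PSIR LGE data and the study's optimal thresholds.

```python
import numpy as np

def infarct_masks(lge, myocardium, remote, fwhm_frac=0.30, n_sd=3.0):
    """Semiautomated infarct sizing on an LGE image (2-D intensity array).
    `myocardium` and `remote` are boolean masks of the whole myocardium and
    of remote (non-enhanced) myocardium.

    FWHM method: voxels above fwhm_frac * maximum myocardial intensity
    (30% here, i.e. 'FWHM30%'; 50% is the classical half-maximum).
    n-SD method: voxels above mean(remote) + n_sd * std(remote)."""
    peak = lge[myocardium].max()
    fwhm_mask = myocardium & (lge >= fwhm_frac * peak)
    mu, sd = lge[remote].mean(), lge[remote].std()
    nsd_mask = myocardium & (lge >= mu + n_sd * sd)
    return fwhm_mask, nsd_mask

rng = np.random.default_rng(5)
lge = rng.normal(100, 10, (128, 128))
myo = np.zeros_like(lge, dtype=bool); myo[40:90, 40:90] = True
lge[60:80, 60:80] += 300                         # synthetic hyperenhanced core
remote = myo.copy(); remote[55:85, 55:85] = False
fwhm_mask, nsd_mask = infarct_masks(lge, myo, remote)
size = lambda m: 100.0 * m.sum() / myo.sum()
print(f"infarct size: FWHM30% {size(fwhm_mask):.1f}%  vs  3SD {size(nsd_mask):.1f}%")
```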
Sensitivity of photon-counting based K-edge imaging in X-ray computed tomography.
Roessl, Ewald; Brendel, Bernhard; Engel, Klaus-Jürgen; Schlomka, Jens-Peter; Thran, Axel; Proksa, Roland
2011-09-01
The feasibility of K-edge imaging using energy-resolved, photon-counting transmission measurements in X-ray computed tomography (CT) has been demonstrated by simulations and experiments. The method is based on probing the discontinuities of the attenuation coefficient of heavy elements above and below the K-edge energy by using energy-sensitive, photon counting X-ray detectors. In this paper, we investigate the dependence of the sensitivity of K-edge imaging on the atomic number Z of the contrast material, on the object diameter D, on the spectral response of the X-ray detector and on the X-ray tube voltage. We assume a photon-counting detector equipped with six adjustable energy thresholds. Physical effects leading to a degradation of the energy resolution of the detector are taken into account using the concept of a spectral response function R(E,U) for which we assume four different models. As a validation of our analytical considerations and in order to investigate the influence of elliptically shaped phantoms, we provide CT simulations of an anthropomorphic Forbild-Abdomen phantom containing a gold-contrast agent. The dependence on the values of the energy thresholds is taken into account by optimizing the achievable signal-to-noise ratios (SNR) with respect to the threshold values. We find that for a given X-ray spectrum and object size the SNR in the heavy element's basis material image peaks for a certain atomic number Z. The dependence of the SNR in the high-Z basis-material image on the object diameter is the natural, exponential decrease with particularly deteriorating effects in the case where the attenuation from the object itself causes a total signal loss below the K-edge. The influence of the energy-response of the detector is very important. We observed that the optimal SNR values obtained with an ideal detector and with a CdTe pixel detector whose response, showing significant tailing, has been determined at a synchrotron differ by factors of about two to three. The potentially very important impact of scattered X-ray radiation and pulse pile-up occurring at high photon rates on the sensitivity of the technique is qualitatively discussed.
Rodbard, David
2012-10-01
We describe a new approach to estimate the risks of hypo- and hyperglycemia based on the mean and SD of the glucose distribution using optional transformations of the glucose scale to achieve a more nearly symmetrical and Gaussian distribution, if necessary. We examine the correlation of risks of hypo- and hyperglycemia calculated using different glucose thresholds and the relationships of these risks to the mean glucose, SD, and percentage coefficient of variation (%CV). Using representative continuous glucose monitoring datasets, one can predict the risk of glucose values above or below any arbitrary threshold if the glucose distribution is Gaussian or can be transformed to be Gaussian. Symmetry and gaussianness can be tested objectively and used to optimize the transformation. The method performs well with excellent correlation of predicted and observed risks of hypo- or hyperglycemia for individual subjects by time of day or for a specified range of dates. One can compare observed and calculated risks of hypo- and hyperglycemia for a series of thresholds considering their uncertainties. Thresholds such as 80 mg/dL can be used as surrogates for thresholds such as 50 mg/dL. We observe a high correlation of risk of hypoglycemia with %CV and illustrate the theoretical basis for that relationship. One can estimate the historical risks of hypo- and hyperglycemia by time of day, date, day of the week, or range of dates, using any specified thresholds. Risks of hypoglycemia with one threshold (e.g., 80 mg/dL) can be used as an effective surrogate marker for hypoglycemia at other thresholds (e.g., 50 mg/dL). These estimates of risk can be useful in research studies and in the clinical care of patients with diabetes.
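The core calculation is a normal-distribution tail probability computed from the mean and SD (after any transformation needed to make the distribution approximately Gaussian). A minimal sketch with illustrative CGM summary statistics, not values from the paper:

```python
from statistics import NormalDist

def glucose_risks(mean, sd, hypo=70.0, hyper=180.0):
    """Estimate the probabilities of glucose readings below `hypo` and above
    `hyper` mg/dL, assuming a (possibly transformed-to-) Gaussian distribution
    with the given mean and SD, in the spirit of the abstract's approach."""
    dist = NormalDist(mean, sd)
    return dist.cdf(hypo), 1.0 - dist.cdf(hyper)

mean, sd = 154.0, 48.0                   # illustrative CGM summary statistics
p_hypo, p_hyper = glucose_risks(mean, sd)
print(f"%CV = {100 * sd / mean:.1f}%  "
      f"P(<70) = {p_hypo:.3f}  P(>180) = {p_hyper:.3f}")
# Risks at surrogate thresholds (e.g., 80 vs. 50 mg/dL) are strongly correlated,
# which is why a higher threshold can stand in for a rarer, lower one.
print("P(<80):", round(NormalDist(mean, sd).cdf(80.0), 4),
      "P(<50):", round(NormalDist(mean, sd).cdf(50.0), 5))
```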
Seymour, Erlene K; Schiffer, Charles A; de Souza, Jonas A
2017-12-01
The ASCO Value Framework calculates the value of cancer therapies. Given costly novel therapeutics for chronic lymphocytic leukemia, we used the framework to compare net health benefit (NHB) and cost within Medicare of all regimens listed in the National Comprehensive Cancer Network (NCCN) guidelines. The current NCCN guidelines for chronic lymphocytic leukemia were reviewed. All referenced studies were screened, and only randomized controlled prospective trials were included. The revised ASCO Value Framework was used to calculate NHB. Medicare drug pricing was used to calculate the cost of therapies. Forty-nine studies were screened. The following observations were made: only 10 studies (20%) could be evaluated; when comparing regimens studied against the same control arm, ranking NHB scores were comparable to their preference in guidelines; NHB scores varied depending on which variables were used, and there were no clinically validated thresholds for low or high values; treatment-related deaths were not weighted in the toxicity scores; and six of the 10 studies used less potent control arms, ranked as the least-preferred NCCN-recommended regimens. The ASCO Value Framework is an important initial step to quantify value of therapies. Essential limitations include the lack of clinically relevant validated thresholds for NHB scores and lack of incorporation of grade 5 toxicities/treatment-related mortality into its methodology. To optimize its application for clinical practice, we urge investigators/sponsors to incorporate and report the required variables to calculate the NHB of regimens and encourage trials with stronger comparator arms to properly quantify the relative value of therapies.
Montesinos, Isabel; Brancart, Françoise; Schepers, Kinda; Jacobs, Frederique; Denis, Olivier; Delforge, Marie-Luce
2015-06-01
A total of 120 bronchoalveolar lavage specimens from HIV and non-HIV immunocompromised patients, positive for Pneumocystis jirovecii by an "in house" real-time polymerase chain reaction (PCR), were evaluated by the Bio-Evolution Pneumocystis real-time PCR, a commercial quantitative assay. Patients were classified into 2 categories based on clinical and radiological findings: definite and unlikely Pneumocystis pneumonia (PCP). For the "in house" PCR, a cycle threshold of 34 was established as the cut-off value to discriminate definite from unlikely PCP, with a sensitivity of 65% and a specificity of 85%. For the Bio-Evolution quantitative PCR, a cut-off value of 2.8×10(5) copies/mL was defined, with a sensitivity of 72% and a specificity of 82%. Overlapping ranges of results for definite and unlikely PCP were observed. Quantitative PCR is probably a useful tool for PCP diagnosis. However, for optimal management of PCP in non-HIV immunocompromised patients, operational thresholds should be assessed according to underlying diseases and other clinical and radiological parameters. Copyright © 2015 Elsevier Inc. All rights reserved.
Randomness fault detection system
NASA Technical Reports Server (NTRS)
Russell, B. Don (Inventor); Aucoin, B. Michael (Inventor); Benner, Carl L. (Inventor)
1996-01-01
A method and apparatus are provided for detecting a fault on a power line carrying a line parameter such as a load current. The apparatus monitors and analyzes the load current to obtain an energy value. The energy value is compared to a threshold value stored in a buffer. If the energy value is greater than the threshold value, a counter is incremented. If the energy value is greater than a high-value threshold or less than a low-value threshold, then a second counter is incremented. If the difference between two subsequent energy values is greater than a constant, then a third counter is incremented. A fault signal is issued if the first counter is greater than a counter limit value and either the second counter is greater than a second limit value or the third counter is greater than a third limit value.
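A hedged sketch of the three-counter logic as described in the abstract; the energy values and limits are invented for illustration, and the real apparatus operates on measured load-current energies.

```python
def fault_detector(energies, threshold, high, low, delta,
                   limit1, limit2, limit3):
    """Three counters track (1) energies above a threshold, (2) energies
    outside a high/low band, and (3) large jumps between consecutive
    energies; a fault is flagged when counter 1 exceeds its limit and
    either counter 2 or counter 3 exceeds its own limit."""
    c1 = c2 = c3 = 0
    prev = None
    for e in energies:
        if e > threshold:
            c1 += 1
        if e > high or e < low:
            c2 += 1
        if prev is not None and abs(e - prev) > delta:
            c3 += 1
        prev = e
        if c1 > limit1 and (c2 > limit2 or c3 > limit3):
            return True
    return False

# Illustrative values: a run of erratic, elevated energies trips the detector.
normal = [1.0, 1.1, 0.9, 1.05] * 10
arcing = [3.0, 0.5, 4.2, 0.8, 3.8, 0.6] * 5
print(fault_detector(normal + arcing, threshold=2.0, high=3.5, low=0.2,
                     delta=2.0, limit1=5, limit2=3, limit3=3))
```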
Optimizing Functional Network Representation of Multivariate Time Series
NASA Astrophysics Data System (ADS)
Zanin, Massimiliano; Sousa, Pedro; Papo, David; Bajo, Ricardo; García-Prieto, Juan; Pozo, Francisco Del; Menasalvas, Ernestina; Boccaletti, Stefano
2012-09-01
By combining complex network theory and data mining techniques, we provide objective criteria for optimization of the functional network representation of generic multivariate time series. In particular, we propose a method for the principled selection of the threshold value for functional network reconstruction from raw data, and for proper identification of the network's indicators that unveil the most discriminative information on the system for classification purposes. We illustrate our method by analysing networks of functional brain activity of healthy subjects, and patients suffering from Mild Cognitive Impairment, an intermediate stage between the expected cognitive decline of normal aging and the more pronounced decline of dementia. We discuss extensions of the scope of the proposed methodology to network engineering purposes, and to other data mining tasks.
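The abstract ties principled threshold selection to downstream classification performance; the sketch below shows only the basic scaffolding, building a functional network by thresholding absolute correlations and reporting link density, on synthetic multichannel data. Sweeping the threshold and scoring a classifier on the resulting network indicators would be one practical proxy for the selection step.

```python
import numpy as np

def functional_network(series, threshold):
    """Link channels whose absolute Pearson correlation exceeds `threshold`;
    returns the boolean adjacency matrix and the link density."""
    corr = np.corrcoef(series)                    # channels x channels
    adj = (np.abs(corr) >= threshold)
    np.fill_diagonal(adj, False)
    n = adj.shape[0]
    density = adj.sum() / (n * (n - 1))
    return adj, density

rng = np.random.default_rng(2)
data = rng.normal(size=(16, 500))                 # 16 channels, 500 time points
data[1] += 0.8 * data[0]                          # induce one strong coupling
for th in (0.1, 0.3, 0.5):
    _, dens = functional_network(data, th)
    print(f"threshold {th:.1f}: link density {dens:.3f}")
```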
NASA Astrophysics Data System (ADS)
Kitt, R.; Kalda, J.
2006-03-01
The question of the optimal portfolio is addressed. The conventional Markowitz portfolio optimisation is discussed and the shortcomings due to non-Gaussian security returns are outlined. A method is proposed to minimise the likelihood of extreme non-Gaussian drawdowns of the portfolio value. The theory is called leptokurtic because it minimises the effects of the "fat tails" of returns. The leptokurtic portfolio theory provides an optimal portfolio for investors who define their risk-aversion as unwillingness to experience sharp drawdowns in asset prices. Two types of risks in asset returns are defined: a fluctuation risk, which has a Gaussian distribution, and a drawdown risk, which deals with the distribution tails. These risks are quantitatively measured by defining the "noise kernel" — an ellipsoidal cloud of points in the space of asset returns. The size of the ellipse is controlled with the threshold parameter: the larger the threshold parameter, the larger the returns that are accepted as normal fluctuations. The return vectors falling into the kernel are used for the calculation of fluctuation risk. Analogously, the data points falling outside the kernel are used for the calculation of drawdown risks. As a result, the portfolio optimisation problem becomes three-dimensional: in addition to the return, there are two types of risks involved. The optimal portfolio for drawdown-averse investors is the portfolio minimising the variance outside the noise kernel. The theory has been tested with MSCI North America, Europe and Pacific total return stock indices.
Multiobjective hedging rules for flood water conservation
NASA Astrophysics Data System (ADS)
Ding, Wei; Zhang, Chi; Cai, Ximing; Li, Yu; Zhou, Huicheng
2017-03-01
Flood water conservation can be beneficial for water uses, especially in areas with water stress, but it can also pose additional flood risk. The potential of flood water conservation is affected by many factors, especially decision makers' preference for water conservation and reservoir inflow forecast uncertainty. This paper discusses the individual and joint effects of these two factors on the trade-off between flood control and water conservation, using a multiobjective, two-stage reservoir optimal operation model. It is shown that hedging between current water conservation and future flood control exists only when forecast uncertainty or decision makers' preference is within a certain range, beyond which hedging is trivial and the multiobjective optimization problem is reduced to a single-objective problem with either flood control or water conservation. Different types of hedging rules are identified for different levels of flood water conservation preference, forecast uncertainty, acceptable flood risk, and reservoir storage capacity. Critical values of decision preference (represented by a weight) and inflow forecast uncertainty (represented by standard deviation) are identified. These provide reservoir managers with a feasible range for their preference for water conservation and with thresholds of forecast uncertainty, specifying the possible water conservation within those thresholds. The analysis also provides inputs for setting up an optimization model by providing the range of objective weights and the choice of hedging rule types. A case study is conducted to illustrate the concepts and analyses.
Designing a Broadband Pump for High-Quality Micro-Lasers via Modified Net Radiation Method.
Nechayev, Sergey; Reusswig, Philip D; Baldo, Marc A; Rotschild, Carmel
2016-12-07
High-quality micro-lasers are key ingredients in non-linear optics, communication, sensing and low-threshold solar-pumped lasers. However, such micro-lasers exhibit negligible absorption of free-space broadband pump light. Recently, this limitation was lifted by cascade energy transfer, in which the absorption and quality factor are modulated with wavelength, enabling non-resonant pumping of high-quality micro-lasers and solar-pumped laser to operate at record low solar concentration. Here, we present a generic theoretical framework for modeling the absorption, emission and energy transfer of incoherent radiation between cascade sensitizer and laser gain media. Our model is based on linear equations of the modified net radiation method and is therefore robust, fast converging and has low complexity. We apply this formalism to compute the optimal parameters of low-threshold solar-pumped lasers. It is revealed that the interplay between the absorption and self-absorption of such lasers defines the optimal pump absorption below the maximal value, which is in contrast to conventional lasers for which full pump absorption is desired. Numerical results are compared to experimental data on a sensitized Nd 3+ :YAG cavity, and quantitative agreement with theoretical models is found. Our work modularizes the gain and sensitizing components and paves the way for the optimal design of broadband-pumped high-quality micro-lasers and efficient solar-pumped lasers.
Lefevre, N; Bohu, Y; Naouri, J F; Klouche, S; Herman, S
2014-02-01
The main goal of this study was to compare the results of the GNRB(®) arthrometer to those of Telos™ in the diagnosis of partial-thickness tears of the anterior cruciate ligament (ACL). A prospective study performed from January to December 2011 included all patients presenting with partial or full-thickness ACL tears without ACL reconstruction and with a healthy contralateral knee. Anterior laxity was measured in all patients with the Telos™ and GNRB(®) devices. This series included 139 patients, mean age 30.7 ± 9.3 years. Arthroscopic reconstruction was performed in 109 patients: 97 for complete tears and 12 single-bundle reconstructions for partial-thickness tears. Conservative treatment was proposed in 30 patients with a partial-thickness tear. The correlation between the two devices was evaluated by the Spearman coefficient. The optimal laxity thresholds were determined with ROC curves, and the diagnostic value of the tests was assessed by the area under the curve (AUC). The differential laxities of full- and partial-thickness tears were significantly different with the two tests. The correlation between the results of laxity measurement with the two devices was fair, with the strongest correlation between Telos™ 250 N and GNRB(®) 250 N (r = 0.46, p = 0.00001). Evaluation of the AUC showed that the informative value of all tests was fair, with the best results for the GNRB(®) 250 N: AUC = 0.89 [95 % CI 0.83-0.94]. The optimal differential laxity threshold with the GNRB(®) 250 N was 2.5 mm (Se = 84 %, Sp = 81 %). The diagnostic value of the GNRB(®) was better than that of Telos™ for ACL partial-thickness tears.
Shah, Sachita P; Penn, Kevin; Kaplan, Stephen J; Vrablik, Michael; Jablonowski, Karl; Pham, Tam N; Reed, May J
2018-04-14
Frailty is linked to poor outcomes in older patients. We prospectively compared the utility of the picture-based Clinical Frailty Scale (CFS9), clinical assessments, and ultrasound muscle measurements against the reference FRAIL scale in older adult trauma patients in the emergency department (ED). We recruited a convenience sample of adults 65 yrs. or older with blunt trauma and injury severity scores <9. We queried subjects (or surrogates) on the FRAIL scale, and compared this to: physician-based and subject/surrogate-based CFS9; mid-upper arm circumference (MUAC) and grip strength; and ultrasound (US) measures of muscle thickness (limbs and abdominal wall). We derived optimal diagnostic thresholds and calculated performance metrics for each comparison using sensitivity, specificity, predictive values, and area under receiver operating characteristic curves (AUROC). Fifteen of 65 patients were frail by FRAIL scale (23%). CFS9 performed well when assessed by subject/surrogate (AUROC 0.91 [95% CI 0.84-0.98] or physician (AUROC 0.77 [95% CI 0.63-0.91]. Optimal thresholds for both physician and subject/surrogate were CFS9 of 4 or greater. If both physician and subject/surrogate provided scores <4, sensitivity and negative predictive value were 90.0% (54.1-99.5%) and 95.0% (73.1-99.7%). Grip strength and MUAC were not predictors. US measures that combined biceps and quadriceps thickness showed an AUROC of 0.75 compared to the reference standard. The ED needs rapid, validated tools to screen for frailty. The CFS9 has excellent negative predictive value in ruling out frailty. Ultrasound of combined biceps and quadriceps has modest concordance as an alternative in trauma patients who cannot provide a history. Copyright © 2017 Elsevier Inc. All rights reserved.
Azuara, Daniel; Santos, Cristina; Lopez-Doriga, Adriana; Grasselli, Julieta; Nadal, Marga; Sanjuan, Xavier; Marin, Fátima; Vidal, Joana; Montal, Robert; Moreno, Victor; Bellosillo, Beatriz; Argiles, Guillem; Elez, Elena; Dienstmann, Rodrigo; Montagut, Clara; Tabernero, Josep; Capellá, Gabriel; Salazar, Ramon
2016-05-01
The clinical significance of low-frequency RAS pathway-mutated alleles and the optimal sensitivity cutoff value in the prediction of response to anti-EGFR therapy in metastatic colorectal cancer (mCRC) patients remains controversial. We aimed to evaluate the added value of genotyping an extended RAS panel using a robust nanofluidic digital PCR (dPCR) approach. A panel of 34 hotspots, including RAS (KRAS and NRAS exons 2/3/4) and BRAF (V600E), was analyzed in tumor FFPE samples from 102 mCRC patients treated with anti-EGFR therapy. dPCR was compared with conventional quantitative PCR (qPCR). Response rates, progression-free survival (PFS), and overall survival (OS) were correlated to the mutational status and the mutated allele fraction. Tumor response evaluations were not available in 9 patients, who were excluded from response rate analysis. Twenty-two percent of patients were positive for one mutation with qPCR (mutated alleles ranged from 2.1% to 66.6%). Analysis by dPCR increased the number of positive patients to 47%. Mutated alleles for patients only detected by dPCR ranged from 0.04% to 10.8%. An inverse correlation between the fraction of mutated alleles and radiologic response was observed. ROC analysis showed that a fraction of 1% or higher of any mutated alleles offered the best predictive value for all combinations of RAS and BRAF analysis. In addition, this threshold also optimized prediction of both PFS and OS. We conclude that mutation testing using an extended gene panel, including RAS and BRAF, with a threshold of 1% improved prediction of response to anti-EGFR therapy. Mol Cancer Ther; 15(5); 1106-12. ©2016 American Association for Cancer Research.
Application of machine learning methodology for pet-based definition of lung cancer
Kerhet, A.; Small, C.; Quon, H.; Riauka, T.; Schrader, L.; Greiner, R.; Yee, D.; McEwan, A.; Roa, W.
2010-01-01
We applied a learning methodology framework to assist in the threshold-based segmentation of non-small-cell lung cancer (nsclc) tumours in positron-emission tomography–computed tomography (pet–ct) imaging for use in radiotherapy planning. Gated and standard free-breathing studies of two patients were independently analysed (four studies in total). Each study had a pet–ct and a treatment-planning ct image. The reference gross tumour volume (gtv) was identified by two experienced radiation oncologists who also determined reference standardized uptake value (suv) thresholds that most closely approximated the gtv contour on each slice. A set of uptake distribution-related attributes was calculated for each pet slice. A machine learning algorithm was trained on a subset of the pet slices to cope with slice-to-slice variation in the optimal suv threshold: that is, to predict the most appropriate suv threshold from the calculated attributes for each slice. The algorithm’s performance was evaluated using the remainder of the pet slices. A high degree of geometric similarity was achieved between the areas outlined by the predicted and the reference suv thresholds (Jaccard index exceeding 0.82). No significant difference was found between the gated and the free-breathing results in the same patient. In this preliminary work, we demonstrated the potential applicability of a machine learning methodology as an auxiliary tool for radiation treatment planning in nsclc. PMID:20179802
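The slice-wise idea above can be sketched as follows: a generic regressor (used here as a stand-in for the paper's learning algorithm) predicts a per-slice SUV threshold from uptake-distribution attributes, and the thresholded region is compared to a reference mask with the Jaccard index. The attributes and data are synthetic assumptions, not the paper's pipeline.

```python
# Hedged sketch: learn a per-slice SUV threshold, then score geometric agreement.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def jaccard(mask_a, mask_b):
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return inter / union if union else 1.0

rng = np.random.default_rng(1)
n_slices = 60
# toy per-slice attributes: max SUV, mean SUV, SUV standard deviation
X = np.column_stack([rng.uniform(4, 12, n_slices),
                     rng.uniform(1, 4, n_slices),
                     rng.uniform(0.5, 2, n_slices)])
y = 0.4 * X[:, 0] + rng.normal(0, 0.3, n_slices)   # "reference" optimal SUV threshold per slice

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X[:40], y[:40])                          # train on a subset of slices
pred = model.predict(X[40:])                       # predict thresholds for held-out slices

# evaluate geometric agreement on one held-out toy slice
suv_slice = rng.uniform(0, 12, (64, 64))
ref_mask = suv_slice >= y[40]
pred_mask = suv_slice >= pred[0]
print("Jaccard index:", round(jaccard(ref_mask, pred_mask), 3))
```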
Modeling jointly low, moderate, and heavy rainfall intensities without a threshold selection
NASA Astrophysics Data System (ADS)
Naveau, Philippe; Huser, Raphael; Ribereau, Pierre; Hannart, Alexis
2016-04-01
In statistics, extreme events are often defined as excesses above a given large threshold. This definition allows hydrologists and flood planners to apply Extreme-Value Theory (EVT) to their time series of interest. Even in the stationary univariate context, this approach has at least two main drawbacks. First, working with excesses implies that a lot of observations (those below the chosen threshold) are completely disregarded. The range of precipitation is artificially chopped into two pieces, namely large intensities and the rest, which necessarily imposes different statistical models for each piece. Second, this strategy raises a nontrivial and very practical difficulty: how to choose the optimal threshold which correctly discriminates between low and heavy rainfall intensities. To address these issues, we propose a statistical model in which EVT results apply not only to heavy, but also to low precipitation amounts (zeros excluded). Our model is in compliance with EVT on both ends of the spectrum and allows a smooth transition between the two tails, while keeping a low number of parameters. In terms of inference, we have implemented and tested two classical methods of estimation: likelihood maximization and probability weighted moments. Last but not least, there is no need to choose a threshold to define low and high excesses. The performance and flexibility of this approach are illustrated on simulated data and on hourly precipitation recorded in Lyon, France.
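For readers who want a concrete starting point, the sketch below fits one simple member of an extended-GPD family, F(x) = H(x; ξ, σ)^κ with H the GPD distribution function, by maximum likelihood. Whether this is exactly the parameterization used by the authors is not asserted here; the simulated data, starting values, and parameter names are assumptions for illustration.

```python
# Hedged sketch: a threshold-free model whose lower and upper tails are both controlled.
import numpy as np
from scipy.stats import genpareto
from scipy.optimize import minimize

def neg_log_lik(params, x):
    kappa, sigma, xi = params
    if kappa <= 0 or sigma <= 0:
        return np.inf
    H = genpareto.cdf(x, c=xi, scale=sigma)
    h = genpareto.pdf(x, c=xi, scale=sigma)
    # density of F(x) = H(x)^kappa is kappa * h(x) * H(x)^(kappa - 1)
    with np.errstate(divide="ignore"):
        ll = np.log(kappa) + np.log(h) + (kappa - 1.0) * np.log(H)
    return -np.sum(ll)

rng = np.random.default_rng(2)
# simulate positive rainfall amounts by inversion: X = H^{-1}(U^{1/kappa})
kappa_true, sigma_true, xi_true = 0.7, 8.0, 0.15
u = np.clip(rng.uniform(size=5000), 1e-12, 1 - 1e-12)
x = genpareto.ppf(u ** (1.0 / kappa_true), c=xi_true, scale=sigma_true)

fit = minimize(neg_log_lik, x0=[1.0, 5.0, 0.1], args=(x,), method="Nelder-Mead")
print("fitted (kappa, sigma, xi):", np.round(fit.x, 3))
```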
Li, Jing; Xu, Xin; Yang, Jun; Liu, Zhidong; Xu, Lei; Gao, Jinghong; Liu, Xiaobo; Wu, Haixia; Wang, Jun; Yu, Jieqiong; Jiang, Baofa; Liu, Qiyong
2017-07-01
Understanding the health consequences of continuously rising temperatures, as is projected for China, is important in terms of developing heat-health adaptation and intervention programs. This study aimed to examine the association between mortality and daily maximum (Tmax), mean (Tmean), and minimum (Tmin) temperatures in warmer months; to explore threshold temperatures; and to identify optimal heat indicators and vulnerable populations. Daily data on temperature and mortality were obtained for the period 2007-2013. Heat thresholds for condition-specific mortality were estimated using an observed/expected analysis. We used a generalised additive model with a quasi-Poisson distribution to examine the association between mortality and Tmax/Tmin/Tmean values higher than the threshold values, after adjustment for covariates. Tmax/Tmean/Tmin thresholds were 32/28/24°C for non-accidental deaths; 32/28/24°C for cardiovascular deaths; 35/31/26°C for respiratory deaths; and 34/31/28°C for diabetes-related deaths. For each 1°C increase in Tmax/Tmean/Tmin above the threshold, the mortality risk of non-accidental, cardiovascular, respiratory, and diabetes-related death increased by 2.8/5.3/4.8%, 4.1/7.2/6.6%, 6.6/25.3/14.7%, and 13.3/30.5/47.6%, respectively. Thresholds for mortality differed according to health condition when stratified by sex, age, and education level. For non-accidental deaths, effects were significant in individuals aged ≥65 years (relative risk=1.038, 95% confidence interval: 1.026-1.050), but not for those ≤64 years. For most outcomes, women and people ≥65 years were more vulnerable. High temperature significantly increases the risk of mortality in the population of Jinan, China. Climate change, with rising temperatures, may make this situation worse. Public health programs should be improved and implemented to prevent and reduce health risks during hot days, especially for the identified vulnerable groups. Copyright © 2017. Published by Elsevier Inc.
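A minimal version of the above-threshold regression can be sketched with a quasi-Poisson fit on the piecewise covariate max(Tmax - threshold, 0); the 32 °C threshold, the covariate set, and the simulated counts are illustrative assumptions rather than the study's full generalised additive model.

```python
# Hedged sketch: excess-mortality risk per degree above an assumed heat threshold.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
days = 1000
tmax = rng.normal(28, 5, days)                       # daily maximum temperature (deg C)
excess = np.maximum(tmax - 32.0, 0.0)                # degrees above the assumed threshold
sunday = (rng.integers(0, 7, days) == 0).astype(float)  # crude day-of-week covariate
lam = np.exp(3.0 + 0.028 * excess + 0.01 * sunday)
deaths = rng.poisson(lam)

X = sm.add_constant(np.column_stack([excess, sunday]))
fit = sm.GLM(deaths, X, family=sm.families.Poisson()).fit(scale="X2")  # Pearson scale ~ quasi-Poisson
rr_per_degree = np.exp(fit.params[1])
print(f"Estimated mortality increase per 1 deg C above threshold: {100 * (rr_per_degree - 1):.1f}%")
```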
Reduced rank regression via adaptive nuclear norm penalization
Chen, Kun; Dong, Hongbo; Chan, Kung-Sik
2014-01-01
Summary We propose an adaptive nuclear norm penalization approach for low-rank matrix approximation, and use it to develop a new reduced rank estimation method for high-dimensional multivariate regression. The adaptive nuclear norm is defined as the weighted sum of the singular values of the matrix, and it is generally non-convex under the natural restriction that the weight decreases with the singular value. However, we show that the proposed non-convex penalized regression method has a global optimal solution obtained from an adaptively soft-thresholded singular value decomposition. The method is computationally efficient, and the resulting solution path is continuous. The rank consistency of and prediction/estimation performance bounds for the estimator are established for a high-dimensional asymptotic regime. Simulation studies and an application in genetics demonstrate its efficacy. PMID:25045172
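The core estimator can be illustrated with an adaptively soft-thresholded SVD in which the shrinkage weight decreases with the singular value; the weight form w_i = d_i^(-gamma) and the tuning constants below are assumptions for illustration, not the paper's tuning procedure.

```python
# Hedged sketch: adaptive singular-value soft-thresholding for low-rank recovery.
import numpy as np

def adaptive_svt(Y, lam=1000.0, gamma=2.0):
    U, d, Vt = np.linalg.svd(Y, full_matrices=False)
    w = d ** (-gamma)                      # smaller weight for larger singular values
    d_shrunk = np.maximum(d - lam * w, 0)  # adaptive soft-thresholding
    return (U * d_shrunk) @ Vt

rng = np.random.default_rng(4)
B = rng.normal(size=(50, 3)) @ rng.normal(size=(3, 20))   # true rank-3 signal
Y = B + 0.5 * rng.normal(size=B.shape)                    # noisy observation
B_hat = adaptive_svt(Y, lam=1000.0, gamma=2.0)
print("estimated rank:", np.linalg.matrix_rank(B_hat, tol=1e-8))
```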
Lasing in optimized two-dimensional iron-nail-shaped rod photonic crystals
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kwon, Soon-Yong; Moon, Seul-Ki; Yang, Jin-Kyu, E-mail: jinkyuyang@kongju.ac.kr
2016-03-15
We demonstrated lasing at the Γ-point band-edge (BE) modes in optimized two-dimensional iron-nail-shaped rod photonic crystals by optical pulse pumping at room temperature. As the radius of the rod increased quadratically toward the edge of the pattern, the quality factor of the Γ-point BE mode increased up to three times, and the modal volume decreased to 56% compared with the values of the original Γ-point BE mode because of the reduction of the optical loss in the horizontal direction. Single-mode lasing from an optimized iron-nail-shaped rod array with an InGaAsP multiple quantum well embedded in the nail heads was observed at a low threshold pump power of 160 μW. Real-image-based numerical simulations showed that the lasing actions originated from the optimized Γ-point BE mode and agreed well with the measurement results, including the lasing polarization, wavelength, and near-field image.
Single-agent parallel window search
NASA Technical Reports Server (NTRS)
Powley, Curt; Korf, Richard E.
1991-01-01
Parallel window search is applied to single-agent problems by having different processes simultaneously perform iterations of Iterative-Deepening-A* (IDA*) on the same problem but with different cost thresholds. This approach is limited by the time to perform the goal iteration. To overcome this disadvantage, the authors consider node ordering. They discuss how global node ordering by minimum h among nodes with equal f = g + h values can reduce the time complexity of serial IDA* by reducing the time to perform the iterations prior to the goal iteration. Finally, the two ideas of parallel window search and node ordering are combined to eliminate the weaknesses of each approach while retaining the strengths. The resulting approach, called simply parallel window search, can be used to find a near-optimal solution quickly, improve the solution until it is optimal, and then finally guarantee optimality, depending on the amount of time available.
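A compact sketch of the threshold iterations underlying this approach is given below: each IDA* iteration is a depth-first search bounded by f = g + h, and parallel window search would hand different thresholds to different processes. The toy graph and heuristic are assumptions for illustration.

```python
# Hedged sketch: IDA* cost-threshold iterations on a toy weighted graph.
GRAPH = {                      # node -> [(neighbour, edge_cost), ...]
    "S": [("A", 2), ("B", 3)],
    "A": [("G", 5)],
    "B": [("G", 2)],
    "G": [],
}
H = {"S": 4, "A": 4, "B": 2, "G": 0}   # admissible heuristic estimates to goal "G"

def ida_star_iteration(node, g, threshold, path):
    """Depth-first search bounded by f = g + h <= threshold.
    Returns (goal cost or None, smallest f that exceeded the threshold)."""
    f = g + H[node]
    if f > threshold:
        return None, f
    if node == "G":
        return g, f
    next_threshold = float("inf")
    for nbr, cost in GRAPH[node]:
        if nbr in path:
            continue
        found, t = ida_star_iteration(nbr, g + cost, threshold, path | {nbr})
        if found is not None:
            return found, t
        next_threshold = min(next_threshold, t)
    return None, next_threshold

threshold = H["S"]
while True:                                 # one worker per threshold in the parallel variant
    found, threshold = ida_star_iteration("S", 0, threshold, {"S"})
    if found is not None:
        print("optimal cost:", found)       # expected: 5 via S -> B -> G
        break
```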
The limits of thresholds: silica and the politics of science, 1935 to 1990.
Markowitz, G; Rosner, D
1995-01-01
Since the 1930s threshold limit values have been presented as an objectively established measure of US industrial safety. However, there have been important questions raised regarding the adequacy of these thresholds for protecting workers from silicosis. This paper explores the historical debates over silica threshold limit values and the intense political negotiation that accompanied their establishment. In the 1930s and early 1940s, a coalition of business, public health, insurance, and political interests formed in response to a widely perceived "silicosis crisis." Part of the resulting program aimed at containing the crisis was the establishment of threshold limit values. Yet silicosis cases continued to be documented. By the 1960s these cases had become the basis for a number of revisions to the thresholds. In the 1970s, following a National Institute for Occupational Safety and Health recommendation to lower the threshold limit value for silica and to eliminate sand as an abrasive in blasting, industry fought attempts to make the existing values more stringent. This paper traces the process by which threshold limit values became part of a compromise between the health of workers and the economic interests of industry. PMID:7856788
A new edge detection algorithm based on Canny idea
NASA Astrophysics Data System (ADS)
Feng, Yingke; Zhang, Jinmin; Wang, Siming
2017-10-01
The traditional Canny algorithm has poor threshold self-adaptability and is more sensitive to noise. In order to overcome these drawbacks, this paper proposes a new edge detection method based on the Canny algorithm. Firstly, median filtering and Euclidean-distance-based filtering are applied to the image; secondly, the Frei-Chen operator is used to calculate the gradient amplitude; finally, the Otsu algorithm is applied to partial gradient-amplitude regions to obtain local threshold values, the average of these thresholds is computed, half of this average is taken as the high threshold value, and half of the high threshold value as the low threshold value. Experimental results show that this new method can effectively suppress noise disturbance, keep the edge information, and also improve the edge detection accuracy.
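The threshold recipe described above can be sketched as follows, with the Sobel operator standing in for Frei-Chen and an assumed block size for the partial Otsu computation; the details are illustrative rather than the authors' implementation.

```python
# Hedged sketch: median filtering, block-wise Otsu on the gradient image, then Canny
# with high = (average block threshold) / 2 and low = high / 2.
import numpy as np
from scipy.ndimage import median_filter
from skimage import data, filters, feature

image = data.camera().astype(float) / 255.0
smoothed = median_filter(image, size=3)
gradient = filters.sobel(smoothed)                 # gradient-amplitude image

block = 64
otsu_values = []
for i in range(0, gradient.shape[0], block):       # Otsu threshold of each block ("partial" Otsu)
    for j in range(0, gradient.shape[1], block):
        patch = gradient[i:i + block, j:j + block]
        if patch.std() > 0:
            otsu_values.append(filters.threshold_otsu(patch))

high = np.mean(otsu_values) / 2.0                  # half of the average of the block thresholds
low = high / 2.0                                   # half of the high threshold
edges = feature.canny(smoothed, sigma=1.0, low_threshold=low, high_threshold=high)
print("edge pixels:", int(edges.sum()), "high/low thresholds:", round(high, 4), round(low, 4))
```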
The diagnostic utility of separation anxiety disorder symptoms: An item response theory analysis
Cooper-Vince, Christine E.; Emmert-Aronson, Benjamin O.; Pincus, Donna B.; Comer, Jonathan S.
2013-01-01
At present, it is not clear whether the current definition of separation anxiety disorder (SAD) is the optimal classification of developmentally inappropriate, severe, and interfering separation anxiety in youth. Much remains to be learned about the relative contributions of individual SAD symptoms for informing diagnosis. Two-parameter logistic Item Response Theory analyses were conducted on the eight core SAD symptoms in an outpatient anxiety sample of treatment-seeking children (N=359, 59.3% female, MAge=11.2) and their parents to determine the diagnostic utility of each of these symptoms. Analyses considered values of item threshold, which characterize the SAD severity level at which each symptom has a 50% chance of being endorsed, and item discrimination, which characterize how well each symptom distinguishes individuals with higher and lower levels of SAD. Distress related to separation and fear of being alone without major attachment figures showed the strongest discrimination properties and the lowest thresholds for being endorsed. In contrast, worry about harm befalling attachment figures showed the poorest discrimination properties, and nightmares about separation showed the highest threshold for being endorsed. Distress related to separation demonstrated crossing differential item functioning associated with age—at lower separation anxiety levels excessive fear at separation was more likely to be endorsed for children ≥9 years, whereas at higher levels this symptom was more likely to be endorsed by children <9 years. Implications are discussed for optimizing the taxonomy of SAD in youth. PMID:23963543
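For reference, the two-parameter logistic model behind these analyses can be written down directly: the probability of endorsing a symptom rises with latent severity theta, governed by a discrimination parameter a and a threshold (difficulty) parameter b. The parameter values and item labels below are illustrative, not the paper's estimates.

```python
# Hedged sketch: 2PL item characteristic curves for a low-threshold and a high-threshold item.
import numpy as np

def item_characteristic_curve(theta, a, b):
    """P(endorse | theta) for a 2PL item with discrimination a and threshold b."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

theta = np.linspace(-3, 3, 7)
for a, b, label in [(2.0, -0.5, "distress at separation"),
                    (0.7, 1.5, "nightmares about separation")]:
    probs = item_characteristic_curve(theta, a, b)
    print(f"{label:28s}", np.round(probs, 2))
```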
Uncertainties in extreme surge level estimates from observational records.
van den Brink, H W; Können, G P; Opsteegh, J D
2005-06-15
Ensemble simulations with a total length of 7540 years are generated with a climate model, and coupled to a simple surge model to transform the wind field over the North Sea to the skew surge level at Delfzijl, The Netherlands. The 65 constructed surge records, each with a record length of 116 years, are analysed with the generalized extreme value (GEV) and the generalized Pareto distribution (GPD) to study both the model and sample uncertainty in surge level estimates with a return period of 10^4 years, as derived from 116-year records. The optimal choice of the threshold, needed for an unbiased GPD estimate from peak over threshold (POT) values, cannot be determined objectively from a 100-year dataset. This fact, in combination with the sensitivity of the GPD estimate to the threshold, and its tendency towards too low estimates, leaves the application of the GEV distribution to storm-season maxima as the best approach. If the GPD analysis is applied, then the exceedance rate, lambda, chosen should not be larger than 4. The climate model hints at the existence of a second population of very intense storms. As the existence of such a second population can never be excluded from a 100-year record, the estimated 10^4-year wind-speed from such records has always to be interpreted as a lower limit.
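A minimal sketch of the recommended GEV route is shown below: fit storm-season maxima and read off the 10^4-year return level. The simulated maxima stand in for the 116-year surge records and the parameter values are assumptions.

```python
# Hedged sketch: GEV fit to seasonal maxima and a 10^4-year return level.
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(5)
season_maxima = genextreme.rvs(c=-0.1, loc=250.0, scale=40.0, size=116, random_state=rng)

c, loc, scale = genextreme.fit(season_maxima)
return_period = 1.0e4
level = genextreme.ppf(1.0 - 1.0 / return_period, c, loc=loc, scale=scale)
print(f"estimated 10^4-year surge level: {level:.0f} cm")
```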
Optimization of a matched-filter receiver for frequency hopping code acquisition in jamming
NASA Astrophysics Data System (ADS)
Pawlowski, P. R.; Polydoros, A.
A matched-filter receiver for frequency hopping (FH) code acquisition is optimized when either partial-band tone jamming or partial-band Gaussian noise jamming is present. The receiver is matched to a segment of the FH code sequence, sums hard per-channel decisions to form a test, and uses multiple tests to verify acquisition. The length of the matched filter and the number of verification tests are fixed. Optimization is then choosing thresholds to maximize performance based upon the receiver's degree of knowledge about the jammer ('side-information'). Four levels of side-information are considered, ranging from none to complete. The latter level results in a constant-false-alarm-rate (CFAR) design. At each level, performance sensitivity to threshold choice is analyzed. Robust thresholds are chosen to maximize performance as the jammer varies its power distribution, resulting in simple design rules which aid threshold selection. Performance results, which show that optimum distributions for the jammer power over the total FH bandwidth exist, are presented.
Health Monitoring Survey of Bell 412EP Transmissions
NASA Technical Reports Server (NTRS)
Tucker, Brian E.; Dempsey, Paula J.
2016-01-01
Health and usage monitoring systems (HUMS) use vibration-based Condition Indicators (CI) to assess the health of helicopter powertrain components. A fault is detected when a CI exceeds its threshold value. The effectiveness of fault detection can be judged on the basis of assessing the condition of actual components from fleet aircraft. The Bell 412 HUMS-equipped helicopter is chosen for such an evaluation. A sample of 20 aircraft included 12 aircraft with confirmed transmission and gearbox faults (detected by CIs) and eight aircraft with no known faults. The associated CI data is classified into "healthy" and "faulted" populations based on actual condition and these populations are compared against their CI thresholds to quantify the probability of false alarm and the probability of missed detection. Receiver Operator Characteristic analysis is used to optimize thresholds. Based on the results of the analysis, shortcomings in the classification method are identified for slow-moving CI trends. Recommendations for improving classification using time-dependent receiver-operator characteristic methods are put forth. Finally, lessons learned regarding OEM-operator communication are presented.
Catch bonds govern adhesion through L-selectin at threshold shear.
Yago, Tadayuki; Wu, Jianhua; Wey, C Diana; Klopocki, Arkadiusz G; Zhu, Cheng; McEver, Rodger P
2004-09-13
Flow-enhanced cell adhesion is an unexplained phenomenon that might result from a transport-dependent increase in on-rates or a force-dependent decrease in off-rates of adhesive bonds. L-selectin requires a threshold shear to support leukocyte rolling on P-selectin glycoprotein ligand-1 (PSGL-1) and other vascular ligands. Low forces decrease L-selectin-PSGL-1 off-rates (catch bonds), whereas higher forces increase off-rates (slip bonds). We determined that a force-dependent decrease in off-rates dictated flow-enhanced rolling of L-selectin-bearing microspheres or neutrophils on PSGL-1. Catch bonds enabled increasing force to convert short-lived tethers into longer-lived tethers, which decreased rolling velocities and increased the regularity of rolling steps as shear rose from the threshold to an optimal value. As shear increased above the optimum, transitions to slip bonds shortened tether lifetimes, which increased rolling velocities and decreased rolling regularity. Thus, force-dependent alterations of bond lifetimes govern L-selectin-dependent cell adhesion below and above the shear optimum. These findings establish the first biological function for catch bonds as a mechanism for flow-enhanced cell adhesion.
Face verification with balanced thresholds.
Yan, Shuicheng; Xu, Dong; Tang, Xiaoou
2007-01-01
The process of face verification is guided by a pre-learned global threshold, which, however, is often inconsistent with class-specific optimal thresholds. It is, hence, beneficial to pursue a balance of the class-specific thresholds in the model-learning stage. In this paper, we present a new dimensionality reduction algorithm tailored to the verification task that ensures threshold balance. This is achieved by the following aspects. First, feasibility is guaranteed by employing an affine transformation matrix, instead of the conventional projection matrix, for dimensionality reduction, and, hence, we call the proposed algorithm threshold balanced transformation (TBT). Then, the affine transformation matrix, constrained as the product of an orthogonal matrix and a diagonal matrix, is optimized to improve the threshold balance and classification capability in an iterative manner. Unlike most algorithms for face verification which are directly transplanted from face identification literature, TBT is specifically designed for face verification and clarifies the intrinsic distinction between these two tasks. Experiments on three benchmark face databases demonstrate that TBT significantly outperforms the state-of-the-art subspace techniques for face verification.
NASA Astrophysics Data System (ADS)
Mroczyński, R.; Wachnicki, Ł.; Gierałtowska, S.
2016-12-01
In this work, we present the design of the technology and fabrication of TFTs with amorphous IGZO semiconductor and a high-k gate dielectric layer in the form of hafnium oxide (HfOx). In the course of this work, the IGZO fabrication was optimized by means of the Taguchi orthogonal tables approach in order to obtain an active semiconductor with a reasonably high concentration of charge carriers, low roughness and relatively high mobility. The obtained thin-film transistors are characterized by very good electrical parameters, i.e., an effective mobility (μeff ≈ 12.8 cm²V⁻¹s⁻¹) significantly higher than that of a-Si TFTs (μeff ≈ 1 cm²V⁻¹s⁻¹). However, the value of the sub-threshold swing (i.e., 640 mV/dec) indicates that the IGZO/HfOx interface is characterized by a high interface state density (Dit), which, in turn, demands further optimization for future applications of the demonstrated TFT structures.
NASA Astrophysics Data System (ADS)
Barker, Cathleen; Zhu, Ting; Rolison, Lucas; Kiff, Scott; Jordan, Kelly; Enqvist, Andreas
2018-01-01
Using natural helium (helium-4), the Arktis 180-bar pressurized gas scintillator is capable of detecting and distinguishing fast neutrons and gammas. The detector has a unique design of three optically separated segments in which 12 silicon-photomultiplier (SiPM) pairs are positioned equilaterally across the detector, allowing them to be fully immersed in the helium-4 gas volume; consequently, no additional optical interfaces are necessary. The SiPM signals were amplified, shaped, and read out by an analog board; a 250 MHz, 14-bit digitizer was used to examine the output pulses from each SiPM-pair channel. The SiPM over-voltage had to be adjusted in order to reduce pulse clipping and negative overshoot, which was observed for events with high scintillation production. Pulse shape discrimination (PSD) was conducted by evaluating three different parameters: time over threshold (TOT), pulse amplitude, and pulse integral. In order to differentiate high- and low-energy events, a 30 ns gate window was implemented to group pulses from two or more SiPM channels for the calculation of TOT. It was demonstrated that pulses from a single SiPM channel within the 30 ns window corresponded to low-energy gamma events, while groups of pulses from two channels or more were most likely neutron events. Because gamma pulses have lower pulse amplitudes, the percentage of measured gammas also depends on the threshold value in TOT calculations. Similarly, the threshold values were varied for the optimal PSD methods using the pulse amplitude and pulse area parameters. Helium-4 detectors equipped with SiPMs are excellent for in-the-field radiation measurement of spent nuclear fuel casks. With optimized PSD methods, the goal of developing a fuel cask content monitoring and inspection system based on these helium-4 detectors will be achieved.
Flagging threshold optimization for manual blood smear review in primary care laboratory.
Bihl, Pierre-Adrien
2018-04-01
Manual blood smear review is required when an anomaly detected by the automated hematologic analyzer triggers a flag. Our aim in this study is to optimize these flagging thresholds for manual slide review in order to limit workload while maintaining clinical care, i.e., without adding false negatives. Flagging causes of 4,373 samples were investigated by manual slide review after the samples had been run on the ADVIA 2120i. A set of 6 user adjustments is proposed. By implementing all of our recommendations, the false-positive rate falls from 81.8% to 58.6%, while the PPV increases from 18.2% to 23.7%. Hence, use of such optimized thresholds enables us to maximize efficiency without altering clinical care, but each laboratory should establish its own criteria to take local distinctive features into consideration.
Laursen, Stig B; Dalton, Harry R; Murray, Iain A; Michell, Nick; Johnston, Matt R; Schultz, Michael; Hansen, Jane M; Schaffalitzky de Muckadell, Ove B; Blatchford, Oliver; Stanley, Adrian J
2015-01-01
Upper gastrointestinal hemorrhage (UGIH) is a common cause of hospital admission. The Glasgow Blatchford score (GBS) is an accurate determinant of patients' risk for hospital-based intervention or death. Patients with a GBS of 0 are at low risk for poor outcome and could be managed as outpatients. Some investigators therefore have proposed extending the definition of low-risk patients by using a higher GBS cut-off value, possibly with an age adjustment. We compared 3 thresholds of the GBS and 2 age-adjusted modifications to identify the optimal cut-off value or modification. We performed an observational study of 2305 consecutive patients presenting with UGIH at 4 centers (Scotland, England, Denmark, and New Zealand). The performance of each threshold and modification was evaluated based on sensitivity and specificity analyses, the proportion of low-risk patients identified, and outcomes of patients classified as low risk. There were differences in age (P = .0001), need for intervention (P < .0001), mortality (P < .015), and GBS (P = .0001) among sites. All systems identified low-risk patients with high levels of sensitivity (>97%). The GBS at cut-off values of ≤1 and ≤2, and both modifications, identified low-risk patients with higher levels of specificity (40%-49%) than the GBS with a cut-off value of 0 (22% specificity; P < .001). The GBS at a cut-off value of ≤2 had the highest specificity, but 3% of patients classified as low-risk patients had adverse outcomes. All GBS cut-off values, and score modifications, had low levels of specificity when tested in New Zealand (2.5%-11%). A GBS cut-off value of ≤1 and both GBS modifications identify almost twice as many low-risk patients with UGIH as a GBS at a cut-off value of 0. Implementing a protocol for outpatient management, based on one of these scores, could reduce hospital admissions by 15% to 20%. Copyright © 2015 AGA Institute. Published by Elsevier Inc. All rights reserved.
Wáng, Yì Xiáng J; Li, Yáo T; Chevallier, Olivier; Huang, Hua; Leung, Jason Chi Shun; Chen, Weitian; Lu, Pu-Xuan
2018-01-01
Background Intravoxel incoherent motion (IVIM) tissue parameters depend on the threshold b-value. Purpose To explore how threshold b-value impacts PF (f), Dslow (D), and Dfast (D*) values and their performance for liver fibrosis detection. Material and Methods Fifteen healthy volunteers and 33 hepatitis B patients were included. With a 1.5-T magnetic resonance (MR) scanner and respiration gating, IVIM data were acquired with ten b-values of 10, 20, 40, 60, 80, 100, 150, 200, 400, and 800 s/mm^2. Signal measurement was performed on the right liver. Segmented-unconstrained analysis was used to compute IVIM parameters, and six threshold b-values in the range of 40-200 s/mm^2 were compared. PF, Dslow, and Dfast values were placed along the x-axis, y-axis, and z-axis, and a plane was defined to separate volunteers from patients. Results Higher threshold b-values were associated with higher PF measurement, while lower threshold b-values led to higher Dslow and Dfast measurements. The dependence of PF, Dslow, and Dfast on threshold b-value differed between healthy livers and fibrotic livers, with the healthy livers showing a higher dependence. Threshold b-value = 60 s/mm^2 showed the largest mean distance between healthy-liver datapoints and fibrotic-liver datapoints, and a classification and regression tree showed that a combination of PF (PF < 9.5%), Dslow (Dslow < 1.239 × 10^-3 mm^2/s), and Dfast (Dfast < 20.85 × 10^-3 mm^2/s) differentiated healthy individuals from all individual fibrotic livers with an area under the curve of logistic regression (AUC) of 1. Conclusion For segmented-unconstrained analysis, the selection of threshold b-value = 60 s/mm^2 improves IVIM differentiation between healthy livers and fibrotic livers.
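A simplified segmented fit of the kind described above can be sketched as follows; the synthetic signal, the fixed threshold of 60 s/mm^2, and the two-step fitting details are assumptions for illustration, not the study's processing pipeline.

```python
# Hedged sketch: segmented IVIM fit (Dslow and PF from the high-b tail, Dfast from the full curve).
import numpy as np
from scipy.optimize import curve_fit

b = np.array([10, 20, 40, 60, 80, 100, 150, 200, 400, 800], dtype=float)  # s/mm^2
f_true, dslow_true, dfast_true = 0.12, 1.1e-3, 30e-3
signal = f_true * np.exp(-b * dfast_true) + (1 - f_true) * np.exp(-b * dslow_true)
signal *= 1.0 + np.random.default_rng(6).normal(0, 0.005, b.size)          # mild noise, S0 = 1

threshold = 60.0                                          # threshold b-value (s/mm^2)
high = b >= threshold
slope, intercept = np.polyfit(b[high], np.log(signal[high]), 1)
dslow = -slope                                            # tissue diffusion from the tail
pf = 1.0 - np.exp(intercept)                              # perfusion fraction from the intercept

def full_model(b, dfast):
    return pf * np.exp(-b * dfast) + (1 - pf) * np.exp(-b * dslow)

(dfast,), _ = curve_fit(full_model, b, signal, p0=[10e-3])
print(f"PF = {pf:.1%}, Dslow = {dslow * 1e3:.3f} x10^-3 mm^2/s, Dfast = {dfast * 1e3:.1f} x10^-3 mm^2/s")
```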
NASA Astrophysics Data System (ADS)
Farano, Mirko; Cherubini, Stefania; Robinet, Jean-Christophe; De Palma, Pietro
2016-12-01
Subcritical transition in plane Poiseuille flow is investigated by means of a Lagrange-multiplier direct-adjoint optimization procedure with the aim of finding localized three-dimensional perturbations optimally growing in a given time interval (target time). Space localization of these optimal perturbations (OPs) is achieved by choosing as objective function either a p-norm (with p ≫ 1) of the perturbation energy density in a linear framework; or the classical (1-norm) perturbation energy, including nonlinear effects. This work aims at analyzing the structure of linear and nonlinear localized OPs for Poiseuille flow, and comparing their transition thresholds and scenarios. The nonlinear optimization approach provides three types of solutions: a weakly nonlinear, a hairpin-like and a highly nonlinear optimal perturbation, depending on the value of the initial energy and the target time. The former shows localization only in the wall-normal direction, whereas the latter appears much more localized and breaks the spanwise symmetry found at lower target times. Both solutions show spanwise inclined vortices and large values of the streamwise component of velocity already at the initial time. On the other hand, p-norm optimal perturbations, although being strongly localized in space, keep a shape similar to linear 1-norm optimal perturbations, showing streamwise-aligned vortices characterized by low values of the streamwise velocity component. When used for initializing direct numerical simulations, in most of the cases nonlinear OPs provide the most efficient route to transition in terms of time to transition and initial energy, even when they are less localized in space than the p-norm OP. The p-norm OP follows a transition path similar to the oblique transition scenario, with slightly oscillating streaks which saturate and eventually experience secondary instability. On the other hand, the nonlinear OP rapidly forms large-amplitude bent streaks and skips the phases of streak saturation, providing a simultaneous growth of all of the velocity components due to strong nonlinear coupling.
Dynamic Sensor Tasking for Space Situational Awareness via Reinforcement Learning
NASA Astrophysics Data System (ADS)
Linares, R.; Furfaro, R.
2016-09-01
This paper studies the Sensor Management (SM) problem for optical Space Object (SO) tracking. The tasking problem is formulated as a Markov Decision Process (MDP) and solved using Reinforcement Learning (RL). The RL problem is solved using the actor-critic policy gradient approach. The actor provides a policy which is random over actions and given by a parametric probability density function (pdf). The critic evaluates the policy by calculating the estimated total reward or the value function for the problem. The parameters of the policy action pdf are optimized using gradients with respect to the reward function. Both the critic and the actor are modeled using deep neural networks (multi-layer neural networks). The policy neural network takes the current state as input and outputs probabilities for each possible action. This policy is random, and can be evaluated by sampling random actions using the probabilities determined by the policy neural network's outputs. The critic approximates the total reward using a neural network. The estimated total reward is used to approximate the gradient of the policy network with respect to the network parameters. This approach is used to find the non-myopic optimal policy for tasking optical sensors to estimate SO orbits. The reward function is based on reducing the uncertainty for the overall catalog to below a user specified uncertainty threshold. This work uses a 30 km total position error for the uncertainty threshold. This work provides the RL method with a negative reward as long as any SO has a total position error above the uncertainty threshold. This penalizes policies that take longer to achieve the desired accuracy. A positive reward is provided when all SOs are below the catalog uncertainty threshold. An optimal policy is sought that takes actions to achieve the desired catalog uncertainty in minimum time. This work trains the policy in simulation by letting it task a single sensor to "learn" from its performance. The proposed approach for the SM problem is tested in simulation and good performance is found using the actor-critic policy gradient method.
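The reward shaping described above reduces to a simple rule, sketched here with illustrative numbers; the function name and values are assumptions.

```python
# Hedged sketch: negative reward while any object exceeds the catalog uncertainty threshold,
# positive reward once every object is below it.
def catalog_reward(position_errors_km, threshold_km=30.0):
    """Reward the agent only when every object meets the catalog uncertainty requirement."""
    return 1.0 if all(err < threshold_km for err in position_errors_km) else -1.0

print(catalog_reward([12.0, 28.5, 45.0]))   # -1.0: one object still above threshold
print(catalog_reward([12.0, 28.5, 19.0]))   #  1.0: catalog requirement met
```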
Jiang, Hao; Zhao, Dehua; Cai, Ying; An, Shuqing
2012-01-01
In previous attempts to identify aquatic vegetation from remotely-sensed images using classification trees (CT), the images used to apply CT models to different times or locations necessarily originated from the same satellite sensor as that from which the original images used in model development came, greatly limiting the application of CT. We have developed an effective normalization method to improve the robustness of CT models when applied to images originating from different sensors and dates. A total of 965 ground-truth samples of aquatic vegetation types were obtained in 2009 and 2010 in Taihu Lake, China. Using relevant spectral indices (SI) as classifiers, we manually developed a stable CT model structure and then applied a standard CT algorithm to obtain quantitative (optimal) thresholds from 2009 ground-truth data and images from Landsat7-ETM+, HJ-1B-CCD, Landsat5-TM and ALOS-AVNIR-2 sensors. Optimal CT thresholds produced average classification accuracies of 78.1%, 84.7% and 74.0% for emergent vegetation, floating-leaf vegetation and submerged vegetation, respectively. However, the optimal CT thresholds for different sensor images differed from each other, with an average relative variation (RV) of 6.40%. We developed and evaluated three new approaches to normalizing the images. The best-performing method (Method of 0.1% index scaling) normalized the SI images using tailored percentages of extreme pixel values. Using the images normalized by Method of 0.1% index scaling, CT models for a particular sensor in which thresholds were replaced by those from the models developed for images originating from other sensors provided average classification accuracies of 76.0%, 82.8% and 68.9% for emergent vegetation, floating-leaf vegetation and submerged vegetation, respectively. Applying the CT models developed for normalized 2009 images to 2010 images resulted in high classification (78.0%–93.3%) and overall (92.0%–93.1%) accuracies. Our results suggest that Method of 0.1% index scaling provides a feasible way to apply CT models directly to images from sensors or time periods that differ from those of the images used to develop the original models.
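A percentile-based normalization in the spirit of the "0.1% index scaling" method might look like the sketch below: the spectral-index image is clipped at its 0.1st and 99.9th percentiles and rescaled to [0, 1], so that classification-tree thresholds become comparable across sensors. The exact definition used by the authors may differ; this is an assumption for illustration.

```python
# Hedged sketch: clip extreme pixel values and rescale a spectral-index image.
import numpy as np

def index_scale(si_image, lower_pct=0.1, upper_pct=99.9):
    lo, hi = np.percentile(si_image, [lower_pct, upper_pct])
    clipped = np.clip(si_image, lo, hi)
    return (clipped - lo) / (hi - lo)

rng = np.random.default_rng(7)
ndvi_like = rng.normal(0.3, 0.2, (100, 100))       # toy spectral-index image
scaled = index_scale(ndvi_like)
print(scaled.min(), scaled.max())                  # 0.0 and 1.0 after scaling
```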
Validity of Simpson-Angus Scale (SAS) in a naturalistic schizophrenia population.
Janno, Sven; Holi, Matti M; Tuisku, Katinka; Wahlbeck, Kristian
2005-03-17
Simpson-Angus Scale (SAS) is an established instrument for neuroleptic-induced parkinsonism (NIP), but its statistical properties have been studied insufficiently. Some shortcomings concerning its content have been suggested as well. According to a recent report, the widely used SAS mean score cut-off value of 0.3 for NIP detection may be too low. Our aim was to evaluate SAS against DSM-IV diagnostic criteria for NIP and objective motor assessment (actometry). Ninety-nine chronic institutionalised schizophrenia patients were evaluated during the same interview by standardised actometric recording and SAS. The diagnosis of NIP was based on DSM-IV criteria. Internal consistency measured by Cronbach's alpha, convergence to actometry and the capacity for NIP case detection were assessed. Cronbach's alpha for the scale was 0.79. SAS discriminated between DSM-IV NIP and non-NIP patients. The actometric findings did not correlate with SAS. ROC-analysis yielded a good case detection power for SAS mean score. The optimal threshold value of SAS mean score was between 0.65 and 0.95, i.e. clearly higher than the previously suggested threshold value. We conclude that SAS seems a reliable and valid instrument. The previously commonly used cut-off mean score of 0.3 has been too low, resulting in low specificity, and we suggest a new cut-off value of 0.65, whereby specificity could be doubled without losing sensitivity.
Validity of Simpson-Angus Scale (SAS) in a naturalistic schizophrenia population
Janno, Sven; Holi, Matti M; Tuisku, Katinka; Wahlbeck, Kristian
2005-01-01
Background Simpson-Angus Scale (SAS) is an established instrument for neuroleptic-induced parkinsonism (NIP), but its statistical properties have been studied insufficiently. Some shortcomings concerning its content have been suggested as well. According to a recent report, the widely used SAS mean score cut-off value of 0.3 for NIP detection may be too low. Our aim was to evaluate SAS against DSM-IV diagnostic criteria for NIP and objective motor assessment (actometry). Methods Ninety-nine chronic institutionalised schizophrenia patients were evaluated during the same interview by standardised actometric recording and SAS. The diagnosis of NIP was based on DSM-IV criteria. Internal consistency measured by Cronbach's α, convergence to actometry and the capacity for NIP case detection were assessed. Results Cronbach's α for the scale was 0.79. SAS discriminated between DSM-IV NIP and non-NIP patients. The actometric findings did not correlate with SAS. ROC-analysis yielded a good case detection power for SAS mean score. The optimal threshold value of SAS mean score was between 0.65 and 0.95, i.e. clearly higher than the previously suggested threshold value. Conclusion We conclude that SAS seems a reliable and valid instrument. The previously commonly used cut-off mean score of 0.3 has been too low, resulting in low specificity, and we suggest a new cut-off value of 0.65, whereby specificity could be doubled without losing sensitivity. PMID:15774006
Dynamical Behavior of a Malaria Model with Discrete Delay and Optimal Insecticide Control
NASA Astrophysics Data System (ADS)
Kar, Tuhin Kumar; Jana, Soovoojeet
In this paper we propose and analyze a simple three-dimensional mathematical model related to malaria. We consider three state variables associated with the susceptible human population, the infected human population and the infected mosquitoes, respectively. A discrete delay parameter has been incorporated to account for the incubation period in infected mosquitoes. We consider the effect of insecticide control, which is applied to the mosquitoes. The basic reproduction number is derived for the proposed model, and it is shown that when this threshold is less than unity the system moves to the disease-free state, whereas for values greater than unity the system tends to an endemic state. On the other hand, if we consider the system with delay, there may exist cases where the endemic equilibrium is unstable even though the numerical value of the basic reproduction number is greater than one. We formulate and solve the optimal control problem by considering insecticide as the control variable. The optimal control problem is shown to yield better results than the no-control situation. Numerical illustrations are provided in support of the theoretical results.
A new potential for radiation studies of borosilicate glass
NASA Astrophysics Data System (ADS)
Alharbi, Amal F.; Jolley, Kenny; Smith, Roger; Archer, Andrew J.; Christie, Jamieson K.
2017-02-01
Borosilicate glass containing 70 mol% SiO2 and 30 mol% B2O3 is investigated theoretically using fixed charge potentials. An existing potential parameterisation for borosilicate glass is found to give good agreement for the bond angle and bond length distributions compared to experimental values, but the optimal density is 30% higher than experiment. Therefore the potential parameters are refitted to give an optimal density of 2.1 g/cm³, in line with experiment. To determine the optimal density, a series of random initial structures are quenched at a rate of 5 × 10^12 K/s using constant volume molecular dynamics. An average of 10 such quenches is carried out for each fixed volume. For each quenched structure, the bond angles, bond lengths, mechanical properties and melting points are determined. The new parameterisation is found to give the density, bond angles, bond lengths and Young's modulus comparable with experimental data; however, the melting points and Poisson's ratio are higher than the reported experimental values. The displacement energy thresholds are computed to be similar to those determined with the earlier parameterisation, which is lower than those for ionic crystalline materials.
Maximizing algebraic connectivity in interconnected networks.
Shakeri, Heman; Albin, Nathan; Darabi Sahneh, Faryad; Poggi-Corradini, Pietro; Scoglio, Caterina
2016-03-01
Algebraic connectivity, the second eigenvalue of the Laplacian matrix, is a measure of node and link connectivity on networks. When studying interconnected networks it is useful to consider a multiplex model, where the component networks operate together with interlayer links among them. In order to have a well-connected multilayer structure, it is necessary to optimally design these interlayer links considering realistic constraints. In this work, we solve the problem of finding an optimal weight distribution for one-to-one interlayer links under budget constraint. We show that for the special multiplex configurations with identical layers, the uniform weight distribution is always optimal. On the other hand, when the two layers are arbitrary, increasing the budget reveals the existence of two different regimes. Up to a certain threshold budget, the second eigenvalue of the supra-Laplacian is simple, the optimal weight distribution is uniform, and the Fiedler vector is constant on each layer. Increasing the budget past the threshold, the optimal weight distribution can be nonuniform. The interesting consequence of this result is that there is no need to solve the optimization problem when the available budget is less than the threshold, which can be easily found analytically.
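The quantity being optimized can be sketched directly: build the supra-Laplacian of a two-layer multiplex with uniform one-to-one interlayer weights and track its second-smallest eigenvalue as the weight (budget) grows. The example graphs and weight values are illustrative assumptions.

```python
# Hedged sketch: algebraic connectivity of a two-layer supra-Laplacian vs. interlayer weight.
import numpy as np
import networkx as nx

n = 20
layer1 = nx.erdos_renyi_graph(n, 0.3, seed=1)
layer2 = nx.erdos_renyi_graph(n, 0.3, seed=2)
L1 = nx.laplacian_matrix(layer1).toarray().astype(float)
L2 = nx.laplacian_matrix(layer2).toarray().astype(float)

def algebraic_connectivity(weight):
    """Second-smallest eigenvalue of the supra-Laplacian with uniform interlayer weight."""
    W = weight * np.eye(n)
    supra = np.block([[L1 + W, -W], [-W, L2 + W]])
    return np.sort(np.linalg.eigvalsh(supra))[1]

for w in [0.1, 0.5, 1.0, 2.0, 4.0]:
    print(f"interlayer weight {w:>4}: lambda_2 = {algebraic_connectivity(w):.3f}")
```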
Tao, Chao; Regner, Michael F.; Zhang, Yu; Jiang, Jack J.
2014-01-01
Summary The relationship between the vocal fold elongation and the phonation threshold pressure (PTP) was experimentally and theoretically investigated. The PTP values of seventeen excised canine larynges with 0% to 15% bilateral vocal fold elongations in 5% elongation steps were measured using an excised larynx phonation system. It was found that twelve larynges exhibited a monotonic relationship between PTP and elongation; in these larynges, the 0% elongation condition had the lowest PTP. Five larynges exhibited a PTP minimum at 5% elongation. To provide a theoretical explanation of these phenomena, a two-mass model was modified to simulate vibration of the elongated vocal folds. Two pairs of longitudinal springs were used to represent the longitudinal elastin in the vocal folds. This model showed that when the vocal folds were elongated, the increased longitudinal tension would increase the PTP value and the increased vocal fold length would decrease the PTP value. The antagonistic effects contributed by these two factors were found to be able to cause either a monotonic or a non-monotonic relationship between PTP and elongation, which were consistent with experimental observations. Because PTP describes the ease of phonation, this study suggests that there may exist a nonzero optimal vocal fold elongation for the greatest ease for phonation in some larynges. PMID:25530744
Choi, H Y; Kim, H Y; Baek, S Y; Kang, B C; Lee, S W
1999-01-01
The objective of this article is to evaluate the significance of resistive index in differentiation between benign and malignant breast lesions on duplex ultrasonographic examination. Resistive indices obtained in 106 breast lesions of 104 patients were included. Sixty-four were benign (mean age: 32.4 +/- 11.1 years), and 42 were malignant lesions (mean age: 47.8 +/- 11.4 years). The resistive index was classified as follows: below 0.49, from 0.5 to 0.59, 0.6 to 0.69, 0.7 to 0.79, and above 0.8. We analyzed and defined the optimal threshold value of RI between benign and malignant lesions. The mean values of the RI of benign and malignant lesions were 0.62 +/- 0.095 (range 0.44-0.86) and 0.74 +/- 0.097 (range, 0.50-0.92), respectively. The resistive index exceeded 0.7 in 80% of malignant lesions. The difference of the RI between malignant and benign lesions was statistically significant when the threshold value was 0.7 (P < 0.001). A resistive index over 0.7 may suggest malignant lesions. Due to the considerable overlap of the range of the RI, it may not be diagnostic in any single patient; however, it may be helpful in conjunct with gray-scale image.
Hybrid Artificial Root Foraging Optimizer Based Multilevel Threshold for Image Segmentation
Liu, Yang; Liu, Junfei
2016-01-01
This paper proposes a new plant-inspired optimization algorithm for multilevel threshold image segmentation, namely, the hybrid artificial root foraging optimizer (HARFO), which essentially mimics iterative root foraging behaviors. In this algorithm the new growth operators of branching, regrowing, and shrinkage are initially designed to optimize continuous space search by combining a root-to-root communication and coevolution mechanism. With the auxin-regulated scheme, the various root growth operators are guided systematically. With root-to-root communication, individuals exchange information in different efficient topologies, which essentially improves the exploration ability. With the coevolution mechanism, the hierarchical spatial population driven by the evolutionary pressure of multiple subpopulations is structured, which ensures that the diversity of the root population is well maintained. The comparative results on a suite of benchmarks show the superiority of the proposed algorithm. Finally, the proposed HARFO algorithm is applied to the complex image segmentation problem based on multilevel thresholds. Computational results of this approach on a set of test images show the outperformance of the proposed algorithm in terms of optimization accuracy and computation efficiency. PMID:27725826
Hybrid Artificial Root Foraging Optimizer Based Multilevel Threshold for Image Segmentation.
Liu, Yang; Liu, Junfei; Tian, Liwei; Ma, Lianbo
2016-01-01
This paper proposes a new plant-inspired optimization algorithm for multilevel threshold image segmentation, namely, the hybrid artificial root foraging optimizer (HARFO), which essentially mimics iterative root foraging behaviors. In this algorithm the new growth operators of branching, regrowing, and shrinkage are initially designed to optimize continuous space search by combining a root-to-root communication and coevolution mechanism. With the auxin-regulated scheme, the various root growth operators are guided systematically. With root-to-root communication, individuals exchange information in different efficient topologies, which essentially improves the exploration ability. With the coevolution mechanism, the hierarchical spatial population driven by the evolutionary pressure of multiple subpopulations is structured, which ensures that the diversity of the root population is well maintained. The comparative results on a suite of benchmarks show the superiority of the proposed algorithm. Finally, the proposed HARFO algorithm is applied to the complex image segmentation problem based on multilevel thresholds. Computational results of this approach on a set of test images show the outperformance of the proposed algorithm in terms of optimization accuracy and computation efficiency.
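HARFO itself is a metaheuristic and is not reproduced here, but the multilevel-threshold objective it optimizes can be illustrated with a conventional exhaustive multi-Otsu baseline, which maximises between-class variance for a chosen number of thresholds.

```python
# Hedged sketch: standard multi-Otsu as a baseline for multilevel threshold segmentation.
import numpy as np
from skimage import data
from skimage.filters import threshold_multiotsu

image = data.camera()
thresholds = threshold_multiotsu(image, classes=4)      # three optimal threshold values
segments = np.digitize(image, bins=thresholds)          # label each pixel with its class
print("thresholds:", thresholds, "classes present:", np.unique(segments))
```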
2013-01-01
Background Insulin resistance has been associated with metabolic and hemodynamic alterations and higher cardio metabolic risk. There is great variability in the threshold homeostasis model assessment of insulin resistance (HOMA-IR) levels used to define insulin resistance. The purpose of this study was to describe the influence of age and gender on the estimation of HOMA-IR optimal cut-off values to identify subjects with higher cardio metabolic risk in a general adult population. Methods It included 2459 adults (range 20–92 years, 58.4% women) in a random Spanish population sample. As an accurate indicator of cardio metabolic risk, Metabolic Syndrome (MetS), both by International Diabetes Federation criteria and by Adult Treatment Panel III criteria, was used. The effect of age was analyzed in individuals with and without diabetes mellitus separately. ROC regression methodology was used to evaluate the effect of age on HOMA-IR performance in classifying cardio metabolic risk. Results In the Spanish population the threshold value of HOMA-IR drops from 3.46 using the 90th percentile criterion to 2.05 when MetS components are taken into account. In non-diabetic women, but not in men, we found a significant non-linear effect of age on the accuracy of HOMA-IR. In non-diabetic men, the cut-off value was 1.85. All values are between the 70th-75th percentiles of HOMA-IR levels in the adult Spanish population. Conclusions The consideration of cardio metabolic risk to establish the cut-off points of HOMA-IR to define insulin resistance, instead of using a percentile of the population distribution, would increase its clinical utility in identifying those patients in whom the presence of multiple metabolic risk factors imparts an increased metabolic and cardiovascular risk. The threshold levels must be modified by age in non-diabetic women. PMID:24131857
Gayoso-Diz, Pilar; Otero-González, Alfonso; Rodriguez-Alvarez, María Xosé; Gude, Francisco; García, Fernando; De Francisco, Angel; Quintela, Arturo González
2013-10-16
Insulin resistance has been associated with metabolic and hemodynamic alterations and higher cardio metabolic risk. There is great variability in the threshold homeostasis model assessment of insulin resistance (HOMA-IR) levels used to define insulin resistance. The purpose of this study was to describe the influence of age and gender on the estimation of HOMA-IR optimal cut-off values to identify subjects with higher cardio metabolic risk in a general adult population. It included 2459 adults (range 20-92 years, 58.4% women) in a random Spanish population sample. As an accurate indicator of cardio metabolic risk, Metabolic Syndrome (MetS), both by International Diabetes Federation criteria and by Adult Treatment Panel III criteria, was used. The effect of age was analyzed in individuals with and without diabetes mellitus separately. ROC regression methodology was used to evaluate the effect of age on HOMA-IR performance in classifying cardio metabolic risk. In the Spanish population the threshold value of HOMA-IR drops from 3.46 using the 90th percentile criterion to 2.05 when MetS components are taken into account. In non-diabetic women, but not in men, we found a significant non-linear effect of age on the accuracy of HOMA-IR. In non-diabetic men, the cut-off value was 1.85. All values are between the 70th-75th percentiles of HOMA-IR levels in the adult Spanish population. The consideration of cardio metabolic risk to establish the cut-off points of HOMA-IR to define insulin resistance, instead of using a percentile of the population distribution, would increase its clinical utility in identifying those patients in whom the presence of multiple metabolic risk factors imparts an increased metabolic and cardiovascular risk. The threshold levels must be modified by age in non-diabetic women.
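For context, HOMA-IR itself is computed from fasting glucose and insulin with the standard formula, and a cut-off of the kind reported above can then be applied; the example values below are illustrative.

```python
# Hedged sketch: HOMA-IR computation and an illustrative insulin-resistance cut-off.
def homa_ir(fasting_glucose_mmol_l, fasting_insulin_uU_ml):
    """HOMA-IR = glucose (mmol/L) x insulin (uU/mL) / 22.5."""
    return fasting_glucose_mmol_l * fasting_insulin_uU_ml / 22.5

def insulin_resistant(homa, cutoff=1.85):
    # 1.85 corresponds to the non-diabetic-men cut-off reported above; other groups differ
    return homa >= cutoff

value = homa_ir(5.4, 9.0)
print(round(value, 2), insulin_resistant(value))    # 2.16, flagged above the 1.85 cut-off
```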
2012-03-01
both a transmitter and receiver antenna. The lower coil was located 42 cm above the ground surface for optimal data collection using the standard wheel... eccentricity. Over 54% (26 of the 46) had P0x parameter values below the 4,500 Category 3 threshold in order to reduce the risk of missing TOI smaller... it is not uncommon to have a large eccentricity for an ordnance item. As previously stated, URS used LM as secondary, in that it served to override
Threshold network of a financial market using the P-value of correlation coefficients
NASA Astrophysics Data System (ADS)
Ha, Gyeong-Gyun; Lee, Jae Woo; Nobi, Ashadun
2015-06-01
Threshold methods in financial networks are important tools for obtaining important information about the financial state of a market. Previously, absolute thresholds of correlation coefficients have been used; however, they have no relation to the length of time. We assign a threshold value depending on the size of the time window by using the P-value concept of statistics. We construct a threshold network (TN) at the same threshold value for two different time window sizes in the Korean Composite Stock Price Index (KOSPI). We measure network properties, such as the edge density, clustering coefficient, assortativity coefficient, and modularity. We determine that a significant difference exists between the network properties of the two time windows at the same threshold, especially during crises. This implies that the market information depends on the length of the time window when constructing the TN. We apply the same technique to Standard and Poor's 500 (S&P500) and observe similar results.
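A minimal sketch of a P-value-based threshold network is given below: an edge is kept only when the correlation's P-value falls under a significance level, so the effective correlation cut-off adapts to the window length. The simulated returns and the significance level are assumptions for illustration.

```python
# Hedged sketch: build a threshold network from return correlations using P-values.
import numpy as np
import networkx as nx
from scipy.stats import pearsonr

rng = np.random.default_rng(8)
n_stocks, window = 30, 250                       # e.g. roughly one trading year of daily returns
common = rng.normal(0, 1, window)                # shared market factor
returns = 0.3 * common[:, None] + rng.normal(0, 1, (window, n_stocks))

alpha = 0.01
G = nx.Graph()
G.add_nodes_from(range(n_stocks))
for i in range(n_stocks):
    for j in range(i + 1, n_stocks):
        r, p = pearsonr(returns[:, i], returns[:, j])
        if p < alpha:                            # P-value threshold instead of a fixed |r| cut-off
            G.add_edge(i, j, weight=r)

print("edge density:", round(nx.density(G), 3),
      "average clustering:", round(nx.average_clustering(G), 3))
```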
NASA Astrophysics Data System (ADS)
Rossi, Giuseppe; Garrote, Luis; Caporali, Enrica
2010-05-01
Identifying the occurrence, extent and magnitude of a drought can be delicate, requiring detection of depletions of supplies and increases in demand. Drought indices, particularly the meteorological ones, can describe the onset and persistence of droughts, especially in natural systems. However, they have to be used cautiously when applied to water supply systems. They show little correlation with water shortage situations, since water storage, as well as demand fluctuation, plays an important role in water resources management. For that reason a more dynamic indicator relating supply and demand is required in order to identify situations when there is a risk of water shortage. In water supply systems there is great variability in the natural water resources and also in the demands. These quantities can only be defined probabilistically. This great variability is handled by defining threshold values, expressed in probabilistic terms, that measure the hydrologic state of the system. They can trigger specific actions in an operational context at different levels of severity, such as the normal, pre-alert, alert and emergency scenarios. They can simplify the decision-making required during stressful periods and can help mitigate the impacts of drought by clearly defining the conditions requiring action. The threshold values are defined considering the probability of satisfying a given fraction of the demand within a certain time horizon, and are calibrated through discussion with water managers. A simplified model of the water resources system is built to evaluate the threshold values and the management rules. The threshold values are validated with a long-term simulation that takes into account the characteristics of the evaluated system. The levels and volumes in the different reservoirs are simulated using 20-30-year time series. The critical situations are assessed month by month in order to evaluate optimal management rules during the year and avoid conditions of total water shortage. The methodology is applied to the urban area Firenze-Prato-Pistoia in central Tuscany, Italy. The catchment of the investigated area has a surface of 1231 km² and, according to the 2001 ISTAT census, 945,972 inhabitants.
AN EVALUATION OF HEURISTICS FOR THRESHOLD-FUNCTION TEST-SYNTHESIS,
Linear programming offers the most attractive procedure for testing and obtaining optimal threshold gate realizations for functions generated in... The design of the experiments may be of general interest to students of automatic problem solving; the results should be of interest in threshold logic and linear programming. (Author)
Ultra-low threshold polariton condensation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steger, Mark; Fluegel, Brian; Alberi, Kirstin
2017-03-13
Here, we demonstrate the condensation of microcavity polaritons with a very sharp threshold occurring at a pump intensity two orders of magnitude lower than in previous demonstrations of condensation. The long cavity lifetime and the trapping and pumping geometries are crucial to the realization of this low threshold. Polariton condensation, or 'polariton lasing', has long been proposed as a promising source of coherent light at a lower threshold than traditional lasing, and these results indicate some considerations for optimizing designs for lower thresholds.
Optimization of MLS receivers for multipath environments
NASA Technical Reports Server (NTRS)
Mcalpine, G. A.; Irwin, S. H.; NELSON; Roleyni, G.
1977-01-01
Optimal design studies of MLS angle-receivers and a theoretical design-study of MLS DME-receivers are reported. The angle-receiver results include an integration of the scan data processor and tracking filter components of the optimal receiver into a unified structure. An extensive simulation study comparing the performance of the optimal and threshold receivers in a wide variety of representative dynamical interference environments was made. The optimal receiver was generally superior. A simulation of the performance of the threshold and delay-and-compare receivers in various signal environments was performed. An analysis of combined errors due to lateral reflections from vertical structures with small differential path delays, specular ground reflections with negligible differential path delays, and thermal noise in the receivers is provided.
Enhanced Detectability of Community Structure in Multilayer Networks through Layer Aggregation.
Taylor, Dane; Shai, Saray; Stanley, Natalie; Mucha, Peter J
2016-06-03
Many systems are naturally represented by a multilayer network in which edges exist in multiple layers that encode different, but potentially related, types of interactions, and it is important to understand limitations on the detectability of community structure in these networks. Using random matrix theory, we analyze detectability limitations for multilayer (specifically, multiplex) stochastic block models (SBMs) in which L layers are derived from a common SBM. We study the effect of layer aggregation on detectability for several aggregation methods, including summation of the layers' adjacency matrices for which we show the detectability limit vanishes as O(L^{-1/2}) with increasing number of layers, L. Importantly, we find a similar scaling behavior when the summation is thresholded at an optimal value, providing insight into the common-but not well understood-practice of thresholding pairwise-interaction data to obtain sparse network representations.
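A small numerical illustration of the aggregation-and-thresholding idea in the abstract above, assuming a two-block stochastic block model common to all layers. Layers are summed, optionally thresholded at an integer value, and community recovery is assessed with a simple modularity-based spectral split. The parameters are chosen only so that a single layer is near the detectability limit; nothing here reproduces the paper's analysis.

```python
import numpy as np

rng = np.random.default_rng(2)

def sbm_layer(labels, p_in, p_out):
    """One undirected, unweighted layer drawn from a common two-block SBM."""
    n = labels.size
    probs = np.where(labels[:, None] == labels[None, :], p_in, p_out)
    upper = np.triu(rng.random((n, n)) < probs, k=1)
    return (upper | upper.T).astype(float)

def spectral_split(adj):
    """Two communities from the leading eigenvector of the modularity matrix."""
    k = adj.sum(axis=1)
    two_m = k.sum()
    if two_m == 0:
        return np.zeros(adj.shape[0], dtype=int)
    B = adj - np.outer(k, k) / two_m
    _, vecs = np.linalg.eigh(B)
    return (vecs[:, -1] > 0).astype(int)

def accuracy(pred, truth):
    agree = (pred == truth).mean()
    return max(agree, 1.0 - agree)          # communities are defined up to relabelling

n, p_in, p_out = 200, 0.06, 0.04            # weak structure: hard from a single layer
labels = np.repeat([0, 1], n // 2)

for L in (1, 4, 16):
    layers = [sbm_layer(labels, p_in, p_out) for _ in range(L)]
    summed = np.sum(layers, axis=0)
    acc_sum = accuracy(spectral_split(summed), labels)
    acc_thr = max(
        accuracy(spectral_split((summed >= t).astype(float)), labels)
        for t in range(1, L + 1)
    )
    print(f"L={L:2d}: summed-layer acc={acc_sum:.2f}, best thresholded acc={acc_thr:.2f}")
```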
Non-Gaussian, non-dynamical stochastic resonance
NASA Astrophysics Data System (ADS)
Szczepaniec, Krzysztof; Dybiec, Bartłomiej
2013-11-01
The classical model revealing stochastic resonance is the motion of an overdamped particle in a double-well fourth-order potential, where the combined action of noise and external periodic driving results in the amplification of weak signals. Resonance behavior can also be observed in non-dynamical systems. The simplest example is a threshold-triggered device. It consists of a periodically modulated input and noise. Every time the output crosses the threshold, the event is recorded. Such a digitally filtered signal is sensitive to the noise intensity. There exists an optimal value of the noise intensity resulting in the "most" periodic output. Here, we explore properties of the non-dynamical stochastic resonance in non-equilibrium situations, i.e. when the Gaussian noise is replaced by an α-stable noise. We demonstrate that non-equilibrium α-stable noises, depending on noise parameters, can either weaken or enhance the non-dynamical stochastic resonance.
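A minimal sketch of such a threshold-triggered device, assuming a sub-threshold sinusoid plus noise and using the output spectral power at the driving frequency as a crude periodicity measure; scipy's levy_stable distribution stands in for the α-stable noise (the specific α and parameterization are arbitrary choices, not the paper's).

```python
import numpy as np
from scipy.stats import levy_stable

def threshold_device(signal, noise, threshold=1.0):
    """Binary output: 1 whenever the input exceeds the threshold."""
    return (signal + noise > threshold).astype(float)

def power_at_drive(out, drive_freq, dt):
    """Spectral power of the output at the driving frequency."""
    freqs = np.fft.rfftfreq(out.size, dt)
    spec = np.abs(np.fft.rfft(out - out.mean())) ** 2 / out.size
    return spec[np.argmin(np.abs(freqs - drive_freq))]

rng = np.random.default_rng(3)
dt, n = 0.01, 2 ** 16
t = np.arange(n) * dt
drive_freq = 0.5
signal = 0.4 * np.sin(2 * np.pi * drive_freq * t)   # sub-threshold: amplitude < threshold

for sigma in (0.1, 0.3, 0.6, 1.0, 2.0):
    gauss = rng.normal(0.0, sigma, n)
    p_gauss = power_at_drive(threshold_device(signal, gauss), drive_freq, dt)
    stable = levy_stable.rvs(alpha=1.5, beta=0.0, scale=sigma, size=n, random_state=rng)
    p_stable = power_at_drive(threshold_device(signal, stable), drive_freq, dt)
    print(f"sigma={sigma:4.1f}: P_drive Gaussian={p_gauss:8.2f}, alpha-stable={p_stable:8.2f}")
```

Scanning the noise scale makes the non-monotonic (resonance-like) dependence of the output periodicity on noise intensity visible for both noise types.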
Estimation of the geochemical threshold and its statistical significance
Miesch, A.T.
1981-01-01
A statistic is proposed for estimating the geochemical threshold and its statistical significance, or it may be used to identify a group of extreme values that can be tested for significance by other means. The statistic is the maximum gap between adjacent values in an ordered array after each gap has been adjusted for the expected frequency. The values in the ordered array are geochemical values transformed by either ln(?? - ??) or ln(?? - ??) and then standardized so that the mean is zero and the variance is unity. The expected frequency is taken from a fitted normal curve with unit area. The midpoint of an adjusted gap that exceeds the corresponding critical value may be taken as an estimate of the geochemical threshold, and the associated probability indicates the likelihood that the threshold separates two geochemical populations. The adjusted gap test may fail to identify threshold values if the variation tends to be continuous from background values to the higher values that reflect mineralized ground. However, the test will serve to identify other anomalies that may be too subtle to have been noted by other means. ?? 1981.
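The sketch below follows the recipe in the abstract above only loosely: standardize the (log-transformed) values, compute gaps between adjacent order statistics, weight each gap by the expected unit-area normal frequency at its midpoint, and take the midpoint of the largest adjusted gap as the candidate threshold. The exact adjustment and the critical values in the original method may differ; the data are synthetic.

```python
import numpy as np
from scipy.stats import norm

def adjusted_gap_threshold(values):
    """Candidate geochemical threshold from the maximum adjusted gap."""
    x = np.sort(np.asarray(values, dtype=float))
    z = (x - x.mean()) / x.std(ddof=1)          # standardize: mean 0, variance 1
    gaps = np.diff(z)
    mids = (z[1:] + z[:-1]) / 2.0
    adjusted = gaps * norm.pdf(mids)            # weight gaps by the expected frequency
    i = np.argmax(adjusted)
    threshold = mids[i] * x.std(ddof=1) + x.mean()
    return threshold, adjusted[i]

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    background = rng.lognormal(mean=1.0, sigma=0.4, size=300)
    anomalies = rng.lognormal(mean=2.5, sigma=0.3, size=15)
    data = np.log(np.concatenate([background, anomalies]))   # ln-transformed values
    thr, stat = adjusted_gap_threshold(data)
    print(f"threshold (ln scale) = {thr:.2f}, max adjusted gap = {stat:.3f}")
```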
Optimal control strategy for a novel computer virus propagation model on scale-free networks
NASA Astrophysics Data System (ADS)
Zhang, Chunming; Huang, Haitao
2016-06-01
This paper aims to study the combined impact of reinstalling system and network topology on the spread of computer viruses over the Internet. Based on scale-free network, this paper proposes a novel computer viruses propagation model-SLBOSmodel. A systematic analysis of this new model shows that the virus-free equilibrium is globally asymptotically stable when its spreading threshold is less than one; nevertheless, it is proved that the viral equilibrium is permanent if the spreading threshold is greater than one. Then, the impacts of different model parameters on spreading threshold are analyzed. Next, an optimally controlled SLBOS epidemic model on complex networks is also studied. We prove that there is an optimal control existing for the control problem. Some numerical simulations are finally given to illustrate the main results.
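The SLBOS threshold expression is not given in the abstract, so purely as an assumption the sketch below computes the classic heterogeneous mean-field spreading threshold for an uncorrelated network, <k>/<k^2>, from a sampled power-law degree sequence. It only illustrates why scale-free topology pushes the threshold down as the network grows; it is not the authors' model.

```python
import numpy as np

def mean_field_threshold(degrees):
    """Classic heterogeneous mean-field epidemic threshold <k>/<k^2>."""
    k = np.asarray(degrees, dtype=float)
    return k.mean() / (k ** 2).mean()

def powerlaw_degrees(n, gamma, k_min=2, k_max=None, rng=None):
    """Sample degrees from P(k) ~ k^-gamma by inverse-transform sampling."""
    rng = rng or np.random.default_rng()
    k_max = k_max or int(np.sqrt(n))          # structural cutoff, an assumption
    u = rng.random(n)
    a = 1.0 - gamma
    k = ((k_max ** a - k_min ** a) * u + k_min ** a) ** (1.0 / a)
    return np.floor(k).astype(int)

rng = np.random.default_rng(5)
for n in (10_000, 100_000, 1_000_000):
    deg = powerlaw_degrees(n, gamma=2.5, rng=rng)
    print(f"N={n:>9,}: <k>={deg.mean():5.2f}, <k^2>={np.mean(deg ** 2.0):9.1f}, "
          f"threshold={mean_field_threshold(deg):.4f}")
```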
Huang, Daizheng; Wu, Zhihui
2017-01-01
Accurately predicting the trend of outpatient visits by mathematical modeling can help policy makers manage hospitals effectively, reasonably organize schedules for human resources and finances, and appropriately distribute hospital material resources. In this study, a hybrid method based on empirical mode decomposition and back-propagation artificial neural networks optimized by particle swarm optimization is developed to forecast outpatient visits on the basis of monthly numbers. First, the monthly outpatient visit data from January 2005 to December 2013 are retrieved as the original time series. Second, the original time series is decomposed into a finite and often small number of intrinsic mode functions by the empirical mode decomposition technique. Third, a three-layer back-propagation artificial neural network is constructed to forecast each intrinsic mode function. To improve network performance and avoid falling into a local minimum, particle swarm optimization is employed to optimize the weights and thresholds of the back-propagation artificial neural networks. Finally, the superposition of the forecasting results of the intrinsic mode functions is regarded as the ultimate forecasting value. Simulation indicates that the proposed method attains a better performance index than the other four methods. PMID:28222194
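A heavily simplified sketch of one building block of the pipeline above: a particle swarm optimizing the weights and biases of a small one-hidden-layer network on a single component series. In the method described above each intrinsic mode function obtained from empirical mode decomposition would be forecast this way and the forecasts summed; the EMD step is omitted here (third-party packages such as PyEMD provide it), and the toy series, network size and PSO settings are all invented.

```python
import numpy as np

rng = np.random.default_rng(6)

def make_lagged(series, lags=4):
    X = np.column_stack([series[i:len(series) - lags + i] for i in range(lags)])
    return X, series[lags:]

def mlp_forecast(params, X, n_hidden):
    """One-hidden-layer tanh network; params is a flat vector of weights/biases."""
    n_in, i = X.shape[1], 0
    W1 = params[i:i + n_in * n_hidden].reshape(n_in, n_hidden); i += n_in * n_hidden
    b1 = params[i:i + n_hidden]; i += n_hidden
    W2 = params[i:i + n_hidden]; i += n_hidden
    b2 = params[i]
    return np.tanh(X @ W1 + b1) @ W2 + b2

def pso_train(X, y, n_hidden=6, n_particles=30, iters=200):
    """Particle swarm optimization of the network parameters (MSE objective)."""
    dim = X.shape[1] * n_hidden + n_hidden + n_hidden + 1
    pos = rng.normal(0, 0.5, (n_particles, dim))
    vel = np.zeros_like(pos)
    cost = lambda p: np.mean((mlp_forecast(p, X, n_hidden) - y) ** 2)
    pbest = pos.copy()
    pbest_cost = np.array([cost(p) for p in pos])
    g = pbest[np.argmin(pbest_cost)].copy()
    w, c1, c2 = 0.7, 1.5, 1.5
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (g - pos)
        pos += vel
        costs = np.array([cost(p) for p in pos])
        improved = costs < pbest_cost
        pbest[improved], pbest_cost[improved] = pos[improved], costs[improved]
        g = pbest[np.argmin(pbest_cost)].copy()
    return g, pbest_cost.min()

# toy monthly-count series standing in for a single decomposed component
t = np.arange(108)
series = 50 + 10 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 2, t.size)
series = (series - series.mean()) / series.std()
X, y = make_lagged(series, lags=4)
params, mse = pso_train(X[:-12], y[:-12])
pred = mlp_forecast(params, X[-12:], n_hidden=6)
print(f"train MSE={mse:.3f}, last-12-step test MSE={np.mean((pred - y[-12:]) ** 2):.3f}")
```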
Code of Federal Regulations, 2012 CFR
2012-07-01
... 30 Mineral Resources 1 2012-07-01 2012-07-01 false Inhalation hazards; threshold limit values for... SURFACE WORK AREAS OF UNDERGROUND COAL MINES Airborne Contaminants § 71.700 Inhalation hazards; threshold... containing quartz, and asbestos dust) in excess of, on the basis of a time-weighted average, the threshold...
van Wagenberg, Coen P A; Backus, Gé B C; Wisselink, Henk J; van der Vorst, Jack G A J; Urlings, Bert A P
2013-09-01
In this paper we analyze the impact of the sensitivity and specificity of a Mycobacterium avium (Ma) test on pig producer incentives to control Ma in finishing pigs. A possible Ma control system which includes a serodiagnostic test and a penalty on finishing pigs in herds detected with Ma infection was modelled. Using a dynamic optimization model and a grid search of deliveries of herds from pig producers to slaughterhouse, optimal control measures for pig producers and optimal penalty values for deliveries with increased Ma risk were identified for different sensitivity and specificity values. Results showed that higher sensitivity and lower specificity induced use of more intense control measures and resulted in higher pig producer costs and lower Ma seroprevalence. The minimal penalty value needed to comply with a threshold for Ma seroprevalence in finishing pigs at slaughter was lower at higher sensitivity and lower specificity. With imperfect specificity a larger sample size decreased pig producer incentives to control Ma seroprevalence, because the higher number of false positives resulted in an increased probability of rejecting a batch of finishing pigs irrespective of whether the pig producer applied control measures. We conclude that test sensitivity and specificity must be considered in incentive system design to induce pig producers to control Ma in finishing pigs with minimum negative effects. Copyright © 2013 Elsevier B.V. All rights reserved.
Ravindra, Vijay M; Riva-Cambrin, Jay; Horn, Kevin P; Ginos, Jason; Brockmeyer, Russell; Guan, Jian; Rampton, John; Brockmeyer, Douglas L
2017-04-01
OBJECTIVE Measurement of the occipital condyle-C1 interval (CCI) is important in the evaluation of atlantooccipital dislocation (AOD) in pediatric trauma patients. The authors studied a large cohort of children with and without AOD to identify a 2D measurement threshold that maximizes the diagnostic yield of the CCI on cervical spine CT scans obtained in trauma patients. METHODS This retrospective, single-center study included all children who underwent CT of the cervical spine at Primary Children's Hospital from January 1, 2011, through December 31, 2014, for trauma evaluation. Bilateral CCI measurements in the coronal (3 measurements per side) and sagittal (4 measurements per side) planes were recorded. Using an iterative method, the authors determined optimal cutoffs for the maximal CCI in each plane in relation to AOD. The primary outcome was AOD requiring occipitocervical fusion. RESULTS A total of 597 pediatric patients underwent cervical spine CT for trauma evaluation: 578 patients without AOD and 19 patients with AOD requiring occipitocervical fusion. The authors found a statistically significant correlation between CCI and age (p < 0.001), with younger patients having higher CCIs. Using a 2D threshold requiring a sagittal CCI ≥ 2.5 mm and a coronal CCI ≥ 3.5 mm predicted AOD with a sensitivity of 95%, a specificity of 73%, positive predictive value of 10.3%, and negative predictive value of 99%. The accuracy of this 2D threshold was 84%. CONCLUSIONS In the present study population, age-dependent differences in the CCI were found on CT scans of the cervical spine in a large cohort of patients with and without AOD. A 2D CCI threshold as a screening method maximizes identification of patients at high risk for AOD while minimizing unnecessary imaging studies in children being evaluated for trauma.
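The study's iterative cutoff search can be mimicked with a simple grid over joint (sagittal, coronal) cutoffs, scoring each pair by sensitivity and specificity. The sketch below does this on synthetic measurements; the distributions, cohort sizes and the Youden-style selection rule are illustrative assumptions, not the authors' data or exact method.

```python
import numpy as np

rng = np.random.default_rng(7)

# synthetic maximal CCI measurements (mm): controls vs AOD cases, illustrative
n_ctrl, n_aod = 578, 19
sag = np.concatenate([rng.normal(1.8, 0.5, n_ctrl), rng.normal(3.2, 0.7, n_aod)])
cor = np.concatenate([rng.normal(2.6, 0.6, n_ctrl), rng.normal(4.3, 0.8, n_aod)])
label = np.concatenate([np.zeros(n_ctrl, bool), np.ones(n_aod, bool)])

def stats_2d(sag_cut, cor_cut):
    pred = (sag >= sag_cut) & (cor >= cor_cut)   # positive only if both cutoffs are exceeded
    tp = np.sum(pred & label);  fn = np.sum(~pred & label)
    fp = np.sum(pred & ~label); tn = np.sum(~pred & ~label)
    sens, spec = tp / (tp + fn), tn / (tn + fp)
    ppv, npv = tp / max(tp + fp, 1), tn / max(tn + fn, 1)
    return sens, spec, ppv, npv

best = max(
    ((sc, cc) + stats_2d(sc, cc)
     for sc in np.arange(1.5, 4.01, 0.25)
     for cc in np.arange(2.5, 5.01, 0.25)),
    key=lambda r: r[2] + r[3],                   # Youden-style: sensitivity + specificity
)
sc, cc, sens, spec, ppv, npv = best
print(f"sagittal>={sc:.2f} mm & coronal>={cc:.2f} mm: "
      f"sens={sens:.2f}, spec={spec:.2f}, PPV={ppv:.2f}, NPV={npv:.2f}")
```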
Thresholds for conservation and management: structured decision making as a conceptual framework
Nichols, James D.; Eaton, Mitchell J.; Martin, Julien; Edited by Guntenspergen, Glenn R.
2014-01-01
changes in system dynamics. They are frequently incorporated into ecological models used to project system responses to management actions. Utility thresholds are components of management objectives and are values of state or performance variables at which small changes yield substantial changes in the value of the management outcome. Decision thresholds are values of system state variables at which small changes prompt changes in management actions in order to reach specified management objectives. Decision thresholds are derived from the other components of the decision process. We advocate a structured decision making (SDM) approach within which the following components are identified: objectives (possibly including utility thresholds), potential actions, models (possibly including ecological thresholds), monitoring program, and a solution algorithm (which produces decision thresholds). Adaptive resource management (ARM) is described as a special case of SDM developed for recurrent decision problems that are characterized by uncertainty. We believe that SDM, in general, and ARM, in particular, provide good approaches to conservation and management. Use of SDM and ARM also clarifies the distinct roles of ecological thresholds, utility thresholds, and decision thresholds in informed decision processes.
Lerman, Tamara; Depenbusch, Marion; Schultze-Mosgau, Askan; von Otte, Soeren; Scheinhardt, Markus; Koenig, Inke; Kamischke, Axel; Macek, Milan; Schwennicke, Arne; Segerer, Sabine; Griesinger, Georg
2017-05-01
The incidence of low (<6 oocytes) and high (>18 oocytes) ovarian response to 150 µg corifollitropin alfa in relation to anti-Müllerian hormone (AMH) and other biomarkers was studied in a multi-centre (n = 5), multi-national, prospective, investigator-initiated, observational cohort study. Infertile women (n = 212), body weight >60 kg, underwent controlled ovarian stimulation in a gonadotrophin-releasing hormone-antagonist multiple-dose protocol. Demographic, sonographic and endocrine parameters were prospectively assessed on cycle day 2 or 3 of a spontaneous menstruation before the administration of 150 µg corifollitropin alfa. Serum AMH showed the best correlation with the number of oocytes obtained among all predictor variables. In receiver-operating characteristic analysis, AMH at a threshold of 0.91 ng/ml showed a sensitivity of 82.4%, specificity of 82.4%, positive predictive value 52.9% and negative predictive value 95.1% for predicting low response (area under the curve [AUC], 95% CI; P-value: 0.853, 0.769-0.936; <0.0001). For predicting high response, the optimal threshold for AMH was 2.58 ng/ml, relating to a sensitivity of 80.0%, specificity 82.1%, positive predictive value 42.5% and negative predictive value 96.1% (AUC, 95% CI; P-value: 0.871, 0.787-0.955; <0.0001). In conclusion, patients with serum AMH concentrations between approximately 0.9 and 2.6 ng/ml were unlikely to show extremes of response. Copyright © 2017. Published by Elsevier Ltd.
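A generic sketch of how an optimal ROC cutoff of this kind can be derived with the Youden index, together with sensitivity, specificity, PPV and NPV at that cutoff. The data are synthetic stand-ins for a marker that is low in the outcome group; nothing here uses the study's data or its exact threshold-selection rule.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(8)

# synthetic AMH-like marker: low responders vs others (values illustrative)
n_low, n_other = 40, 172
marker = np.concatenate([rng.lognormal(-0.6, 0.5, n_low), rng.lognormal(0.6, 0.6, n_other)])
is_low = np.concatenate([np.ones(n_low, int), np.zeros(n_other, int)])

# low response is predicted by LOW marker values, so the score is the negated marker
fpr, tpr, thresholds = roc_curve(is_low, -marker)
best = np.argmax(tpr - fpr)                   # Youden index J = sens + spec - 1
cutoff = -thresholds[best]                    # back to the original marker scale

pred = marker <= cutoff
tp = np.sum(pred & (is_low == 1)); fn = np.sum(~pred & (is_low == 1))
fp = np.sum(pred & (is_low == 0)); tn = np.sum(~pred & (is_low == 0))
print(f"AUC={roc_auc_score(is_low, -marker):.3f}, cutoff={cutoff:.2f} ng/ml")
print(f"sens={tp/(tp+fn):.2f}, spec={tn/(tn+fp):.2f}, "
      f"PPV={tp/(tp+fp):.2f}, NPV={tn/(tn+fn):.2f}")
```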
Terayama, Yasushi; Uchiyama, Shigeharu; Ueda, Kazuhiko; Iwakura, Nahoko; Ikegami, Shota; Kato, Yoshiharu; Kato, Hiroyuki
2018-06-01
Imaging criteria for diagnosing compressive ulnar neuropathy at the elbow (UNE) have recently been established as the maximum ulnar nerve cross-sectional area (UNCSA) upon magnetic resonance imaging (MRI) and/or ultrasonography (US). However, the levels of maximum UNCSA and diagnostic cutoff values have not yet been established. We therefore analyzed UNCSA by MRI and US in patients with UNE and in controls. We measured UNCSA at 7 levels in 30 patients with UNE and 28 controls by MRI and at 15 levels in 12 patients with UNE and 24 controls by US. We compared UNCSA as determined by MRI or US and determined optimal diagnostic cutoff values based on receiver operating characteristic curve analysis. The UNCSA was significantly larger in the UNE group than in controls at 3, 2, 1, and 0 cm proximal and 1, 2, and 3 cm distal to the medial epicondyle for both modalities. The UNCSA was maximal at 1 cm proximal to the medial epicondyle for MRI (16.1 ± 3.5 mm2) as well as for US (17 ± 7 mm2). A cutoff value of 11.0 mm2 for MRI and US was found to be optimal for differentiating between patients with UNE and controls, with an area under the receiver operating characteristic curve of 0.95 for MRI and 0.96 for US. The UNCSA measured by MRI was not significantly different from that by US. Intra-rater and interrater reliabilities for UNCSA were all greater than 0.77. The UNCSA in the severe nerve dysfunction group of 18 patients was significantly larger than that in the mild nerve dysfunction group of 12 patients. By measuring UNCSA with MRI or US at 1 cm proximal to the medial epicondyle, patients with and without UNE could be discriminated at a cutoff threshold of 11.0 mm2 with high sensitivity, specificity, and reliability. Level of evidence: Diagnostic III. Copyright © 2018 American Society for Surgery of the Hand. Published by Elsevier Inc. All rights reserved.
Manning, F.W.; Groothuis, S.E.; Lykins, J.H.; Papke, D.M.
1962-06-12
An improved area radiation dose monitor is designed which is adapted to compensate continuously for background radiation below a threshold dose rate and to give warning when the dose integral of the dose rate of an above-threshold radiation excursion exceeds a selected value. This is accomplished by providing means for continuously charging an ionization chamber. The chamber provides a first current proportional to the incident radiation dose rate. Means are provided for generating a second current including means for nulling out the first current with the second current at all values of the first current corresponding to dose rates below a selected threshold dose rate value. The second current has a maximum value corresponding to that of the first current at the threshold dose rate. The excess of the first current over the second current, which occurs above the threshold, is integrated and an alarm is given at a selected integrated value of the excess corresponding to a selected radiation dose. (AEC)
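A toy digital analogue of the circuit behaviour described above: background below the threshold dose rate is nulled out, only the excess above threshold is integrated, and an alarm is raised when the integral reaches a set dose. All numerical values are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(9)

THRESHOLD_RATE = 2.0      # dose rate below which background is nulled out (mR/h)
ALARM_DOSE = 0.5          # integrated excess dose that triggers the alarm (mR)
DT_H = 1.0 / 3600.0       # one-second samples, expressed in hours

# simulated dose-rate record: steady background with a ten-minute excursion
rate = rng.normal(1.0, 0.1, 7200)
rate[3000:3600] += 8.0

integral = 0.0
for i, r in enumerate(rate):
    excess = max(0.0, r - THRESHOLD_RATE)   # nulling: only above-threshold "current" counts
    integral += excess * DT_H
    if integral >= ALARM_DOSE:
        print(f"alarm at t = {i} s, integrated excess dose = {integral:.2f} mR")
        break
else:
    print(f"no alarm: integrated excess dose = {integral:.3f} mR")
```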
Validity of the Talk Test for exercise prescription after myocardial revascularization.
Zanettini, Renzo; Centeleghe, Paola; Franzelli, Cristina; Mori, Ileana; Benna, Stefania; Penati, Chiara; Sorlini, Nadia
2013-04-01
For exercise prescription, rating of perceived exertion is the subjective tool most frequently used in addition to methods based on percentage of peak exercise variables. The aim of this study was the validation of a subjective method widely called the Talk Test (TT) for optimization of training intensity in patients with recent myocardial revascularization. Fifty patients with recent myocardial revascularization (17 by coronary artery bypass grafting and 33 by percutaneous coronary intervention) were enrolled in a cardiac rehabilitation programme. Each patient underwent three repetitions of the TT during three different exercise sessions to evaluate the within-patient and between-operators reliability in assessing the workload (WL) at TT thresholds. These parameters were then compared with the data of a final cardiopulmonary exercise testing, and the WL range between the individual aerobic threshold (AeT) and anaerobic threshold (AnT) was considered as the optimal training zone. The within-patient and between-operators reliability in assessing TT thresholds were satisfactory. No significant differences were found between patients' and physiotherapists' evaluations of WL at different TT thresholds. WL at Last TT+ was between AeT and AnT in 88% of patients and slightly
Prostate-specific antigen 1.5-4.0 ng/mL: a diagnostic challenge and danger zone.
Crawford, E David; Moul, Judd W; Rove, Kyle O; Pettaway, Curtis A; Lamerato, Lois E; Hughes, Alexa
2011-12-01
What's known on the subject? and What does the study add? Large population screening trials like the ERSPC, PCPT and PLCO have noted that men with seemingly low PSA (even as low as 0.5 ng/dL) still can have prostate cancer. Despite these findings, PSA is still predominantly used as a current indicator for possible presence of prostate cancer rather than also serving as a prognostic marker. This study examines a larger number of men in a diverse US population to determine the prognostic value of a man's baseline or first PSA. • To assess the value of a PSA threshold of 1.5 ng/mL as a predictor of increased prostate cancer risk over a four-year period based on a man's first PSA test, including racial differences. • To review the risk of progression of benign prostatic hyperplasia (BPH) based on a similar PSA threshold. • A retrospective review involving 21,502 men from a large Midwestern health system was performed. • Men at least 40 years old with baseline PSA values between 0 and 4.0 ng/mL and at least four years of follow-up after initial PSA test were included. • Optimal PSA threshold and predictive value of PSA for development of prostate cancer were calculated. • Prostate cancer rates were 15-fold higher in patients with PSA ≥1.5 ng/mL vs patients with PSA <1.5 ng/mL (7.85% vs 0.51%). • African American patients with baseline PSA <1.5 ng/mL faced prostate cancer rates similar to the whole study population (0.54% vs 0.51%, respectively), while African American patients with PSA 1.5-4.0 ng/mL faced a 19-fold increase in prostate cancer. • Both Caucasian and African American men with baseline PSA values between 1.5 and 4.0 ng/mL are at increased risk for future prostate cancer compared with those who have an initial PSA value below the 1.5 ng/mL threshold. • Based on a growing body of literature and this analysis, it is recommended that a first PSA test threshold of 1.5 ng/mL and above, or somewhere between 1.5 and 4.0 ng/mL, represent the Early-Warning PSA Zone (EWP Zone). • This should serve to inform patients and clinicians alike to future clinical activities with respect to prostate cancer and BPH. © 2011 THE AUTHORS. BJU INTERNATIONAL © 2011 BJU INTERNATIONAL.
NASA Astrophysics Data System (ADS)
Bellingeri, Michele; Agliari, Elena; Cassi, Davide
2015-10-01
The best strategy to immunize a complex network is usually evaluated in terms of the percolation threshold, i.e. the number of vaccine doses which make the largest connected cluster (LCC) vanish. The strategy inducing the minimum percolation threshold represents the optimal way to immunize the network. Here we show that the efficacy of the immunization strategies can change during the immunization process. This means that, if the number of doses is limited, the best strategy is not necessarily the one leading to the smallest percolation threshold. This outcome should warn about the adoption of global measures in order to evaluate the best immunization strategy.
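A small networkx experiment in the spirit of the abstract above: track the largest connected cluster as nodes are removed under two strategies and compare them at several fixed dose budgets rather than only at the percolation threshold. The graph model and the two strategies (degree-first and approximate betweenness-first) are illustrative choices, not the paper's.

```python
import networkx as nx

def lcc_fraction(g, n_total):
    if g.number_of_nodes() == 0:
        return 0.0
    return max(len(c) for c in nx.connected_components(g)) / n_total

def immunize(g, order, doses, n_total):
    h = g.copy()
    h.remove_nodes_from(order[:doses])
    return lcc_fraction(h, n_total)

g = nx.barabasi_albert_graph(2000, 3, seed=10)
n = g.number_of_nodes()

# strategy A: highest degree first; strategy B: highest (approximate) betweenness first
by_degree = [v for v, _ in sorted(g.degree, key=lambda kv: kv[1], reverse=True)]
btw = nx.betweenness_centrality(g, k=200, seed=10)
by_betweenness = sorted(btw, key=btw.get, reverse=True)

for doses in (50, 200, 600, 1000):
    a = immunize(g, by_degree, doses, n)
    b = immunize(g, by_betweenness, doses, n)
    print(f"doses={doses:4d}: LCC fraction  degree-first={a:.3f}  betweenness-first={b:.3f}")
```

Comparing the two columns at different budgets shows how the ranking of strategies can change along the immunization process, which is the point the abstract makes.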
Optimal Energy Efficiency Fairness of Nodes in Wireless Powered Communication Networks.
Zhang, Jing; Zhou, Qingjie; Ng, Derrick Wing Kwan; Jo, Minho
2017-09-15
In wireless powered communication networks (WPCNs), it is essential to research energy efficiency fairness in order to evaluate the balance of nodes for receiving information and harvesting energy. In this paper, we propose an efficient iterative algorithm for optimal energy efficiency proportional fairness in WPCN. The main idea is to use stochastic geometry to derive the mean proportional fairness utility function with respect to user association probability and receive threshold. Subsequently, we prove that the relaxed proportional fairness utility function is a concave function for user association probability and receive threshold, respectively. At the same time, a sub-optimal algorithm that exploits an alternating optimization approach is proposed. Through numerical simulations, we demonstrate that our sub-optimal algorithm can obtain a result close to optimal energy efficiency proportional fairness with a significant reduction in computational complexity. PMID:28914818
The absolute threshold of cone vision
Koeing, Darran; Hofer, Heidi
2013-01-01
We report measurements of the absolute threshold of cone vision, which has been previously underestimated due to sub-optimal conditions or overly strict subjective response criteria. We avoided these limitations by using optimized stimuli and experimental conditions while having subjects respond within a rating scale framework. Small (1′ fwhm), brief (34 msec), monochromatic (550 nm) stimuli were foveally presented at multiple intensities in dark-adapted retina for 5 subjects. For comparison, 4 subjects underwent similar testing with rod-optimized stimuli. Cone absolute threshold, that is, the minimum light energy for which subjects were just able to detect a visual stimulus with any response criterion, was 203 ± 38 photons at the cornea, ∼0.47 log units lower than previously reported. Two-alternative forced-choice measurements in a subset of subjects yielded consistent results. Cone thresholds were less responsive to criterion changes than rod thresholds, suggesting a limit to the stimulus information recoverable from the cone mosaic in addition to the limit imposed by Poisson noise. Results were consistent with expectations for detection in the face of stimulus uncertainty. We discuss implications of these findings for modeling the first stages of human cone vision and interpreting psychophysical data acquired with adaptive optics at the spatial scale of the receptor mosaic. PMID:21270115
Fatness and fitness: exposing the logic of evolutionary explanations for obesity.
Higginson, Andrew D; McNamara, John M; Houston, Alasdair I
2016-01-13
To explore the logic of evolutionary explanations of obesity we modelled food consumption in an animal that minimizes mortality (starvation plus predation) by switching between activities that differ in energy gain and predation. We show that if switching does not incur extra predation risk, the animal should have a single threshold level of reserves above which it performs the safe activity and below which it performs the dangerous activity. The value of the threshold is determined by the environmental conditions, implying that animals should have variable 'set points'. Selection pressure to prevent energy stores exceeding the optimal level is usually weak, suggesting that immediate rewards might easily overcome the controls against becoming overweight. The risk of starvation can have a strong influence on the strategy even when starvation is extremely uncommon, so the incidence of mortality during famine in human history may be unimportant for explanations for obesity. If there is an extra risk of switching between activities, the animal should have two distinct thresholds: one to initiate weight gain and one to initiate weight loss. Contrary to the dual intervention point model, these thresholds will be inter-dependent, such that altering the predation risk alters the location of both thresholds; a result that undermines the evolutionary basis of the drifty genes hypothesis. Our work implies that understanding the causes of obesity can benefit from a better understanding of how evolution shapes the mechanisms that control body weight. © 2016 The Authors.
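A minimal dynamic-programming sketch of the kind of model described above: an animal with discrete energy reserves chooses, at each step, between a safe low-gain activity and a dangerous high-gain activity so as to minimize mortality (starvation plus predation) over a finite horizon, and the optimal policy comes out as a threshold on reserves. The rates, horizon and state space are invented, and the switching cost that produces two distinct thresholds in the paper is omitted.

```python
import numpy as np

MAX_RES, HORIZON = 20, 200
COST, GAIN = 1, 2        # reserves lost per step, reserves gained when food is found
# (probability of finding food, predation risk per step) for each activity, illustrative
ACTS = {"safe": (0.50, 0.0005), "dangerous": (0.70, 0.003)}

def solve():
    """Backward induction: V[x] = probability of surviving to the horizon
    with reserves x under the optimal activity choice at each step."""
    V = np.ones(MAX_RES + 1)
    V[0] = 0.0                                   # empty reserves -> starvation
    policy = np.empty((HORIZON, MAX_RES + 1), dtype=object)
    for t in range(HORIZON - 1, -1, -1):
        newV = np.zeros_like(V)
        for x in range(1, MAX_RES + 1):
            best, best_act = -1.0, None
            for act, (p_gain, risk) in ACTS.items():
                up = min(MAX_RES, x - COST + GAIN)   # found food
                down = x - COST                      # no food
                v = (1.0 - risk) * (p_gain * V[up] + (1.0 - p_gain) * V[down])
                if v > best:
                    best, best_act = v, act
            newV[x], policy[t, x] = best, best_act
        V = newV
    return V, policy

V, policy = solve()
switch = next((x for x in range(1, MAX_RES + 1) if policy[0, x] == "safe"), None)
print("survival probabilities by reserve level:", np.round(V[1:], 3))
print("policy at t=0 switches to the safe activity at reserves >=", switch)
```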
Martín-Dávila, P; Fortún, J; Gutiérrez, C; Martí-Belda, P; Candelas, A; Honrubia, A; Barcena, R; Martínez, A; Puente, A; de Vicente, E; Moreno, S
2005-06-01
Preemptive therapy requires highly predictive tests for CMV disease. The CMV antigenemia assay (pp65 Ag) has been commonly used for rapid diagnosis of CMV infection. Amplification methods for early detection of CMV DNA are under analysis. To compare two diagnostic methods for CMV infection and disease in this population: quantitative PCR (qPCR) performed in two different samples, plasma and leukocytes (PMNs) and using a commercial diagnostic test (COBAS Amplicor Monitor Test) versus pp65 Ag. Prospective study conducted in liver transplant recipients from February 2000 to February 2001. Analyses were performed on 164 samples collected weekly during the early post-transplant period from 33 patients. Agreements higher than 78% were observed between the three assays. Optimal qPCR cut-off values were calculated using ROC curves for two specific antigenemia values. For antigenemia ≥10 positive cells, the optimal cut-off value for qPCR in plasma was 1330 copies/ml, with a sensitivity (S) of 58% and a specificity (E) of 98%, and the optimal cut-off value for qPCR-cells was 713 copies/5x10(6) cells (S:91.7% and E:86%). Using a threshold of antigenemia ≥20 positive cells, the optimal cut-off values were 1330 copies/ml for qPCR-plasma (S 87%; E 98%) and 4755 copies/5x10(6) cells for qPCR-cells (S 87.5%; E 98%). Prediction values for the three assays were calculated in patients with CMV disease (9 pts; 27%). Considering the assays in a qualitative way, the most sensitive was CMV PCR in cells (S: 100%, E: 54%, PPV: 40%; NPV: 100%). Using specific cut-off values for disease detection, the sensitivity, specificity, PPV and NPV for antigenemia ≥10 positive cells were: 89%; 83%; 67%; 95%, respectively. For qPCR-cells ≥713 copies/5x10(6) cells they were 100%; 54%; 33% and 100%, and for plasma-qPCR ≥1330 copies/ml: 78%, 77%, 47%, 89%, respectively. Optimal cut-offs for viral load performed in plasma and cells can be obtained for the breakpoint antigenemia value recommended for initiating preemptive therapy, with high specificities and sensitivities. Diagnostic assays like CMV pp65 Ag and quantitative PCR for CMV have similar efficiency and could be recommended as methods of choice for diagnosis and monitoring of active CMV infection after transplantation.
Zhang, Shuo; Zhang, Chengning; Han, Guangwei; Wang, Qinghui
2014-01-01
A dual-motor coupling-propulsion electric bus (DMCPEB) is modeled, and its optimal control strategy is studied in this paper. The necessary dynamic features of energy loss for the subsystems are modeled. The dynamic programming (DP) technique is applied to find the optimal control strategy including the upshift threshold, downshift threshold, and power split ratio between the main motor and auxiliary motor. Improved control rules are extracted from the DP-based control solution, forming near-optimal control strategies. Simulation results demonstrate that a significant improvement in reducing energy loss due to the dual-motor coupling-propulsion system (DMCPS) running is realized without increasing the frequency of the mode switch. PMID:25540814
NASA Astrophysics Data System (ADS)
Akkala, Arun Goud
Leakage currents in CMOS transistors have risen dramatically with technology scaling, leading to a significant increase in standby power consumption. Among the various transistor candidates, the excellent short channel immunity of Silicon double gate FinFETs has made them the best contender for successful scaling to sub-10nm nodes. For sub-10nm FinFETs, new quantum mechanical leakage mechanisms such as direct source to drain tunneling (DSDT) of charge carriers through the channel potential energy barrier, arising due to the proximity of source/drain regions coupled with the high transport-direction electric field, are expected to dominate overall leakage. To counter the effects of DSDT and worsening short channel effects, and to maintain Ion/Ioff, performance and power consumption at reasonable values, device optimization techniques are necessary for deeply scaled transistors. In this work, source/drain underlapping of FinFETs has been explored using quantum mechanical device simulations as a potentially promising method to lower DSDT while maintaining the Ion/Ioff ratio at acceptable levels. By adopting a device/circuit/system level co-design approach, it is shown that asymmetric underlapping, where the drain side underlap is longer than the source side underlap, results in optimal energy efficiency for logic circuits in near-threshold as well as standard, super-threshold operating regimes. In addition, read/write conflict in 6T SRAMs and the degradation in cell noise margins due to the low supply voltage can be mitigated by using optimized asymmetric underlapped n-FinFETs for the access transistor, thereby leading to robust cache memories. When gate-workfunction tuning is possible, using asymmetric underlapped n-FinFETs for both access and pull-down devices in an SRAM bit cell can lead to high-speed and low-leakage caches. Further, it is shown that threshold voltage degradation in the presence of Hot Carrier Injection (HCI) is less severe in asymmetric underlap n-FinFETs. A lifetime projection is carried out assuming that HCI is the major degradation mechanism, and it is shown that a 3.4x improvement in device lifetime is possible over symmetric underlapped n-FinFETs.
Active adaptive management for reintroduction of an animal population
Runge, Michael C.
2013-01-01
Captive animals are frequently reintroduced to the wild in the face of uncertainty, but that uncertainty can often be reduced over the course of the reintroduction effort, providing the opportunity for adaptive management. One common uncertainty in reintroductions is the short-term survival rate of released adults (a release cost), an important factor because it can affect whether releasing adults or juveniles is better. Information about this rate can improve the success of the reintroduction program, but does the expected gain offset the costs of obtaining the information? I explored this question for reintroduction of the griffon vulture (Gyps fulvus) by framing the management question as a belief Markov decision process, characterizing uncertainty about release cost with 2 information state variables, and finding the solution using stochastic dynamic programming. For a reintroduction program of fixed length (e.g., 5 years of releases), the optimal policy in the final release year resembles the deterministic solution: release either all adults or all juveniles depending on whether the point estimate for the survival rate in question is above or below a specific threshold. But the optimal policy in the earlier release years 1) includes release of a mixture of juveniles and adults under some circumstances, and 2) recommends release of adults even when the point estimate of survival is much less than the deterministic threshold. These results show that in an iterated decision setting, the optimal decision in early years can be quite different from that in later years because of the value of learning.
Lanspa, Michael J.; Grissom, Colin K.; Hirshberg, Eliotte L.; Jones, Jason P.; Brown, Samuel M.
2013-01-01
Background Volume expansion is a mainstay of therapy in septic shock, although its effect is difficult to predict using conventional measurements. Dynamic parameters, which vary with respiratory changes, appear to predict hemodynamic response to fluid challenge in mechanically ventilated, paralyzed patients. Whether they predict response in patients who are free from mechanical ventilation is unknown. We hypothesized that dynamic parameters would be predictive in patients not receiving mechanical ventilation. Methods This is a prospective, observational, pilot study. Patients with early septic shock who were not receiving mechanical ventilation received 10 ml/kg volume expansion (VE) at their treating physician's discretion after initial resuscitation in the emergency department. We used transthoracic echocardiography to measure vena cava collapsibility index (VCCI) and aortic velocity variation (AoVV) prior to VE. We used a pulse contour analysis device to measure stroke volume variation (SVV). Cardiac index was measured immediately before and after VE using transthoracic echocardiography. Hemodynamic response was defined as an increase in cardiac index ≥ 15%. Results 14 patients received VE, 5 of whom demonstrated a hemodynamic response. VCCI and SVV were predictive (Area under curve = 0.83, 0.92, respectively). Optimal thresholds were calculated: VCCI ≥ 15% (Positive predictive value, PPV 62%, negative predictive value, NPV 100%, p = 0.03); SVV ≥ 17% (PPV 100%, NPV 82%, p = 0.03). AoVV was not predictive. Conclusions VCCI and SVV predict hemodynamic response to fluid challenge in patients with septic shock who are not mechanically ventilated. Optimal thresholds differ from those described in mechanically ventilated patients. PMID:23324885
Takahashi, Shigekiyo; Kawasaki, Masanori; Miyata, Shusaku; Suzuki, Keita; Yamaura, Makoto; Ido, Takahisa; Aoyama, Takuma; Fujiwara, Hisayoshi; Minatoguchi, Shinya
2016-01-01
Recently, a new generation of multi-detector row computed tomography (CT) with 320-detector rows (DR) has become available in the clinical settings. The purpose of the present study was to determine the cutoff values of Hounsfield unit (HU) for discrimination of plaque components by comparing HU of coronary plaques with integrated backscatter intravascular ultrasound (IB-IVUS) serving as a gold standard. Seventy-seven coronary atherosclerotic lesions in 77 patients with angina were visualized by both 320-DR CT (Aquilion One, Toshiba, Japan) and IB-IVUS at the same site. To determine the thresholds for discrimination of plaque components, we compared HU with IB values as a gold standard. Optimal thresholds were determined from receiver operating characteristic (ROC) curves analysis. The HU values of lipid pool (n = 115), fibrosis (n = 93), vessel lumen and calcification (n = 73) were 28 ± 19 HU (range -18 to 69 HU), 98 ± 31 HU (44 to 195 HU), 357 ± 65 HU (227 to 534 HU) and 998 ± 236 HU (366 to 1,489 HU), respectively. The thresholds of 56 HU, 210 HU and 490 HU were the most reliable predictors of lipid pool, fibrosis, vessel lumen and calcification, respectively. Lipid volume measured by 320-DR CT was correlated with that measured by IB-IVUS (r = 0.63, p < 0.05), whereas fibrous volume measured by 320-DR CT was not. Lipid volume measured by 320-DR CT was correlated with that measured by IB-IVUS, whereas fibrous volume was not correlated with that measured by IB-IVUS because manual exclusion of the outside of vessel hindered rigorous discrimination between fibrosis and extravascular components.
Determination of the measurement threshold in gamma-ray spectrometry.
Korun, M; Vodenik, B; Zorko, B
2017-03-01
In gamma-ray spectrometry the measurement threshold describes the lower boundary of the interval of peak areas originating in the response of the spectrometer to gamma-rays from the sample measured. In this sense it presents a generalization of the net indication corresponding to the decision threshold, which is the measurement threshold at the quantity value zero for a predetermined probability for making errors of the first kind. Measurement thresholds were determined for peaks appearing in the spectra of radon daughters 214Pb and 214Bi by measuring the spectrum 35 times under repeatable conditions. For the calculation of the measurement threshold the probability for detection of the peaks and the mean relative uncertainty of the peak area were used. The relative measurement thresholds, the ratios between the measurement threshold and the mean peak area uncertainty, were determined for 54 peaks where the probability for detection varied between a few percent and about 95% and the relative peak area uncertainty between 30% and 80%. The relative measurement thresholds vary considerably from peak to peak, although the nominal value of the sensitivity parameter defining the sensitivity for locating peaks was equal for all peaks. At the value of the sensitivity parameter used, the peak analysis does not locate peaks corresponding to the decision threshold with a probability in excess of 50%. This implies that peaks in the spectrum may not be located, although the true value of the measurand exceeds the decision threshold. Copyright © 2017 Elsevier Ltd. All rights reserved.
Pressure and cold pain threshold reference values in a large, young adult, pain-free population.
Waller, Robert; Smith, Anne Julia; O'Sullivan, Peter Bruce; Slater, Helen; Sterling, Michele; McVeigh, Joanne Alexandra; Straker, Leon Melville
2016-10-01
Currently there is a lack of large population studies that have investigated pain sensitivity distributions in healthy pain free people. The aims of this study were: (1) to provide sex-specific reference values of pressure and cold pain thresholds in young pain-free adults; (2) to examine the association of potential correlates of pain sensitivity with pain threshold values. This study investigated sex specific pressure and cold pain threshold estimates for young pain free adults aged 21-24 years. A cross-sectional design was utilised using participants (n=617) from the Western Australian Pregnancy Cohort (Raine) Study at the 22-year follow-up. The association of site, sex, height, weight, smoking, health related quality of life, psychological measures and activity with pain threshold values was examined. Pressure pain threshold (lumbar spine, tibialis anterior, neck and dorsal wrist) and cold pain threshold (dorsal wrist) were assessed using standardised quantitative sensory testing protocols. Reference values for pressure pain threshold (four body sites) stratified by sex and site, and cold pain threshold (dorsal wrist) stratified by sex are provided. Statistically significant, independent correlates of increased pressure pain sensitivity measures were site (neck, dorsal wrist), sex (female), higher waist-hip ratio and poorer mental health. Statistically significant, independent correlates of increased cold pain sensitivity measures were, sex (female), poorer mental health and smoking. These data provide the most comprehensive and robust sex specific reference values for pressure pain threshold specific to four body sites and cold pain threshold at the dorsal wrist for young adults aged 21-24 years. Establishing normative values in this young age group is important given that the transition from adolescence to adulthood is a critical temporal period during which trajectories for persistent pain can be established. These data will provide an important research resource to enable more accurate profiling and interpretation of pain sensitivity in clinical pain disorders in young adults. The robust and comprehensive data can assist interpretation of future clinical pain studies and provide further insight into the complex associations of pain sensitivity that can be used in future research. Crown Copyright © 2016. Published by Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Fanti, Riccardo; Segoni, Samuele; Rosi, Ascanio; Lagomarsino, Daniela; Catani, Filippo
2017-04-01
SIGMA is a regional landslide warning system that operates in the Emilia Romagna region (Italy). In this work, we describe its origin and its continuous, still ongoing, development process after over a decade of operational use. Traditionally, landslide rainfall thresholds are defined by the empirical correspondence between a rainfall database and a landslide database. However, in the early stages of the research, a complete catalogue of dated landslides was not available. Therefore, the prototypal version of SIGMA was based on rainfall thresholds defined by means of a statistical analysis performed over the rainfall time series. SIGMA was purposely designed to take into account both shallow and deep-seated landslides, and it was based on the hypothesis that anomalous or extreme values of accumulated rainfall are responsible for landslide triggering. The statistical distribution of the rainfall series was analyzed, and multiples of the standard deviation (σ) were used as thresholds to discriminate between ordinary and extraordinary rainfall events. In the warning system, the measured and the forecasted rainfall are compared with these thresholds. Since the response of slope stability to rainfall may be complex, SIGMA is based on a decision algorithm aimed at identifying short but exceptionally intense rainfalls and mild but exceptionally prolonged rains: while the former are commonly associated with shallow landslides, the latter are mainly associated with deep-seated landslides. In the first case, the rainfall threshold is defined by high σ values and short durations (i.e. a few days); in the second case, σ values are lower but the decision algorithm checks long durations (i.e. some months). The exact definition of "high" and "low" σ values and of "short" and "long" durations varied through time, as it was adjusted during the evolution of the model. Indeed, since 2005, constant work has been carried out to gather and organize newly available data (rainfall recordings and occurred landslides) and to use them to define more robust relationships between rainfall and landslide triggering, with the final aim of increasing the forecasting effectiveness of the warning system. The updated rainfall and landslide databases were used to periodically perform a quantitative validation and to analyze the errors affecting the system forecasts. The error characterization was used to implement a continuous process of updating and modification of SIGMA, which included:
- Main model upgrades (generalization from a pilot test site to the whole Emilia Romagna region; calibration against well documented landslide events to define specific σ levels for each territorial unit; definition of different alert levels according to the number of expected landslides).
- Ordinary updates (periodically, the new landslide and rainfall data were used to re-calibrate the thresholds, taking into account a more robust sample).
- Model tuning (set up of the optimal version of the decisional algorithm, including different definitions of "long" and "short" periods; selection of the optimal reference rain gauge for each Territorial Unit; modification of the boundaries of some territorial units).
- Additional features (definition of a module that takes into account the effect of snow melt and snow accumulation; coupling with a landslide susceptibility model to improve the spatial accuracy of the model).
- Various performance tests (including the comparison with alternate versions of SIGMA or with thresholds based on rainfall intensity and duration).
This process has led to an evolution of the warning system and to a documented improvement of its forecasting effectiveness. Landslide forecasting at regional scale is a very complex task, but as time passes by and with the systematic gathering of new substantial data and the continuous progresses of research, uncertainties can be progressively reduced and a warning system can be set that increases its performances and reliability with time.
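A sketch of the σ-multiple idea only: rainfall is accumulated over a short and a long window, each accumulation is compared against a threshold built from the mean and standard deviation of its own historical distribution, and exceedances are flagged. The actual SIGMA algorithm, durations and σ levels are calibrated per territorial unit and are not reproduced here; the rainfall series and numbers below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(11)

years, days = 20, 365
rain = rng.gamma(shape=0.4, scale=8.0, size=years * days)   # synthetic daily rainfall (mm)

def rolling_sum(x, window):
    c = np.concatenate([[0.0], np.cumsum(x)])
    return c[window:] - c[:-window]

def sigma_threshold(accumulations, n_sigma):
    """Threshold as mean + n_sigma * std of the historical accumulations."""
    return accumulations.mean() + n_sigma * accumulations.std(ddof=1)

SHORT, LONG = 3, 90                 # days: "short but intense" vs "mild but prolonged"
short_acc = rolling_sum(rain, SHORT)
long_acc = rolling_sum(rain, LONG)

thr_short = sigma_threshold(short_acc, n_sigma=3.0)   # high sigma, short duration
thr_long = sigma_threshold(long_acc, n_sigma=2.0)     # lower sigma, long duration

# evaluate the last year against thresholds calibrated on the whole record
alerts_short = np.sum(short_acc[-days:] > thr_short)
alerts_long = np.sum(long_acc[-days:] > thr_long)
print(f"{SHORT}-day threshold = {thr_short:.1f} mm, exceedances in last year: {alerts_short}")
print(f"{LONG}-day threshold = {thr_long:.1f} mm, exceedances in last year: {alerts_long}")
```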
Antunes, Amanda H; Alberton, Cristine L; Finatto, Paula; Pinto, Stephanie S; Cadore, Eduardo L; Zaffari, Paula; Kruel, Luiz F M
2015-01-01
Maximal tests conducted on land are not suitable for the prescription of aquatic exercises, which makes it difficult to optimize the intensity of water aerobics classes. The aim of the present study was to evaluate the maximal and anaerobic threshold cardiorespiratory responses to 6 water aerobics exercises. Volunteers performed 3 of the exercises in the sagittal plane and 3 in the frontal plane. Twelve active female volunteers (aged 24 ± 2 years) performed 6 maximal progressive test sessions. Throughout the exercise tests, we measured heart rate (HR) and oxygen consumption (VO2). We randomized all sessions with a minimum interval of 48 hr between each session. For statistical analysis, we used repeated-measures 1-way analysis of variance. Regarding the maximal responses, for the peak VO2, abductor hop and jumping jacks (JJ) showed significantly lower values than frontal kick and cross-country skiing (CCS; p < .001; partial η(2) = .509), while for the peak HR, JJ showed statistically significantly lower responses compared with stationary running and CCS (p < .001; partial η(2) = .401). At anaerobic threshold intensity expressed as the percentage of the maximum values, no statistically significant differences were found among exercises. Cardiorespiratory responses are directly associated with the muscle mass involved in the exercise. Thus, it is worth emphasizing the importance of performing a maximal test that is specific to the analyzed exercise so the prescription of the intensity can be safer and valid.
Thresholding Based on Maximum Weighted Object Correlation for Rail Defect Detection
NASA Astrophysics Data System (ADS)
Li, Qingyong; Huang, Yaping; Liang, Zhengping; Luo, Siwei
Automatic thresholding is an important technique for rail defect detection, but traditional methods are not well suited to the characteristics of this application. This paper proposes the Maximum Weighted Object Correlation (MWOC) thresholding method, which exploits the facts that rail images are unimodal and the defect proportion is small. MWOC selects a threshold by optimizing the product of object correlation and the weight term that expresses the proportion of thresholded defects. Our experimental results demonstrate that MWOC achieves a misclassification error of 0.85%, and outperforms other well-established thresholding methods, including Otsu's method, maximum correlation thresholding, maximum entropy thresholding and the valley-emphasis method, for the application of rail defect detection.
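The abstract does not give the MWOC objective itself, so the sketch below only shows the exhaustive threshold-search skeleton, using Otsu's between-class variance (one of the comparison methods named above) as the criterion; swapping in the weighted object-correlation objective would only change the `criterion` function. The synthetic image is invented for illustration.

```python
import numpy as np

def otsu_criterion(hist, t):
    """Between-class variance of a 256-bin histogram split at grey level t."""
    p = hist / hist.sum()
    w0, w1 = p[:t].sum(), p[t:].sum()
    if w0 == 0 or w1 == 0:
        return 0.0
    bins = np.arange(hist.size)
    mu0 = (bins[:t] * p[:t]).sum() / w0
    mu1 = (bins[t:] * p[t:]).sum() / w1
    return w0 * w1 * (mu0 - mu1) ** 2

def best_threshold(image, criterion=otsu_criterion):
    """Exhaustive search over all grey levels for the criterion maximum."""
    hist, _ = np.histogram(image, bins=256, range=(0, 256))
    scores = [criterion(hist, t) for t in range(1, 256)]
    return 1 + int(np.argmax(scores))

if __name__ == "__main__":
    rng = np.random.default_rng(12)
    # unimodal bright background with a tiny proportion of dark "defect" pixels
    img = rng.normal(170, 12, (256, 256))
    defect = rng.normal(60, 10, 300)
    img.ravel()[:defect.size] = defect
    img = np.clip(img, 0, 255)
    t = best_threshold(img)
    print(f"selected threshold = {t}, thresholded proportion = {(img < t).mean():.4f}")
```

On unimodal images with such a small defect proportion, the between-class-variance criterion tends to land inside the background mode, which is exactly the weakness the proposed weighted criterion is meant to address.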
Incorporating uncertainty of management costs in sensitivity analyses of matrix population models.
Salomon, Yacov; McCarthy, Michael A; Taylor, Peter; Wintle, Brendan A
2013-02-01
The importance of accounting for economic costs when making environmental-management decisions subject to resource constraints has been increasingly recognized in recent years. In contrast, uncertainty associated with such costs has often been ignored. We developed a method, on the basis of economic theory, that accounts for the uncertainty in population-management decisions. We considered the case where, rather than taking fixed values, model parameters are random variables that represent the situation when parameters are not precisely known. Hence, the outcome is not precisely known either. Instead of maximizing the expected outcome, we maximized the probability of obtaining an outcome above a threshold of acceptability. We derived explicit analytical expressions for the optimal allocation and its associated probability, as a function of the threshold of acceptability, where the model parameters were distributed according to normal and uniform distributions. To illustrate our approach we revisited a previous study that incorporated cost-efficiency analyses in management decisions that were based on perturbation analyses of matrix population models. Incorporating derivations from this study into our framework, we extended the model to address potential uncertainties. We then applied these results to 2 case studies: management of a Koala (Phascolarctos cinereus) population and conservation of an olive ridley sea turtle (Lepidochelys olivacea) population. For low aspirations, that is, when the threshold of acceptability is relatively low, the optimal strategy was obtained by diversifying the allocation of funds. Conversely, for high aspirations, the budget was directed toward management actions with the highest potential effect on the population. The exact optimal allocation was sensitive to the choice of uncertainty model. Our results highlight the importance of accounting for uncertainty when making decisions and suggest that more effort should be placed on understanding the distributional characteristics of such uncertainty. Our approach provides a tool to improve decision making. © 2013 Society for Conservation Biology.
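A sketch of the core idea under a strong simplifying assumption: if the per-action effects are independent normal random variables, the outcome of a budget allocation is also normal, so the probability of exceeding an acceptability threshold has a closed form and can be maximized over the budget. The two-action setup and all numbers are invented and do not correspond to the paper's case studies or derivations.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

# per-unit-budget effect of each management action: Normal(mu_i, sd_i), illustrative
mu = np.array([1.0, 0.8])
sd = np.array([0.6, 0.15])     # action 1: higher mean, much more uncertain
BUDGET = 10.0

def prob_above(alloc, threshold):
    """P(sum_i alloc_i * X_i >= threshold) with independent normal X_i."""
    mean = alloc @ mu
    std = np.sqrt(alloc ** 2 @ sd ** 2)
    return norm.sf((threshold - mean) / std)

def optimal_allocation(threshold):
    # parameterize the allocation to action 1 as a fraction f of the budget
    res = minimize(lambda f: -prob_above(np.array([f[0], BUDGET - f[0]]), threshold),
                   x0=[BUDGET / 2], bounds=[(0.0, BUDGET)])
    f = res.x[0]
    return np.array([f, BUDGET - f])

for threshold in (6.0, 8.0, 10.0, 12.0):
    alloc = optimal_allocation(threshold)
    print(f"threshold={threshold:5.1f}: allocation={alloc.round(2)}, "
          f"P(outcome >= threshold)={prob_above(alloc, threshold):.3f}")
```

Raising the threshold of acceptability shifts the optimal allocation toward the action with the higher (but more uncertain) expected effect, which mirrors the low- versus high-aspiration behaviour discussed in the abstract.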
Midline Shift Threshold Value for Hemiparesis in Chronic Subdural Hematoma.
Juković, Mirela F; Stojanović, Dejan B
2015-01-01
Chronic subdural hematoma (CSDH) has a variety of clinical presentations, with numerous neurological symptoms and signs. Hemiparesis is one of the leading signs that potentially indicates CSDH. The purpose of this study was to determine the threshold (cut-off) value of midsagittal line (MSL) shift after which hemiparesis is likely to appear. The study evaluated 83 patients with 53 unilateral and 30 bilateral CSDHs over a period of three years. The computed tomography (CT) findings evaluated in patients with CSDH were the diameter of the hematoma and the midsagittal line shift, measured on non-contrast CT scans in relation to the occurrence of hemiparesis. Threshold values of MSL shift for both types of CSDH were obtained as the point of maximal (equal) sensitivity and specificity (intersection of the curves). MSL shift was a good predictor of hemiparesis occurrence (total sample, AUROC 0.75, p=0.0001). Unilateral and bilateral CSDHs had different threshold values of MSL shift for hemiparesis development. The results suggested that in unilateral CSDH the threshold value of MSL shift could be 10 mm (AUROC=0.65; p=0.07), while for bilateral CSDH the threshold level of MSL shift was 4.5 mm (AUROC=0.77; p=0.01). Our study points to the phenomenon that midsagittal line shift can predict hemiparesis occurrence. Hemiparesis in patients with bilateral CSDH was more closely related to midsagittal line shift than in unilateral CSDH. When the midsagittal line shift exceeds the threshold level, hemiparesis is likely to occur.
Potgieter, Danielle; Simmers, Dale; Ryan, Lisa; Biccard, Bruce M; Lurati-Buse, Giovanna A; Cardinale, Daniela M; Chong, Carol P W; Cnotliwy, Miloslaw; Farzi, Sylvia I; Jankovic, Radmilo J; Lim, Wen Kwang; Mahla, Elisabeth; Manikandan, Ramaswamy; Oscarsson, Anna; Phy, Michael P; Rajagopalan, Sriram; Van Gaal, William J; Waliszek, Marek; Rodseth, Reitze N
2015-08-01
N-terminal fragment B-type natriuretic peptide (NT-proBNP) prognostic utility is commonly determined post hoc by identifying a single optimal discrimination threshold tailored to the individual study population. The authors aimed to determine how using these study-specific post hoc thresholds impacts meta-analysis results. The authors conducted a systematic review of studies reporting the ability of preoperative NT-proBNP measurements to predict the composite outcome of all-cause mortality and nonfatal myocardial infarction at 30 days after noncardiac surgery. Individual patient-level data NT-proBNP thresholds were determined using two different methodologies. First, a single combined NT-proBNP threshold was determined for the entire cohort of patients, and a meta-analysis conducted using this single threshold. Second, study-specific thresholds were determined for each individual study, with meta-analysis being conducted using these study-specific thresholds. The authors obtained individual patient data from 14 studies (n = 2,196). Using a single NT-proBNP cohort threshold, the odds ratio (OR) associated with an increased NT-proBNP measurement was 3.43 (95% CI, 2.08 to 5.64). Using individual study-specific thresholds, the OR associated with an increased NT-proBNP measurement was 6.45 (95% CI, 3.98 to 10.46). In smaller studies (<100 patients) a single cohort threshold was associated with an OR of 5.4 (95% CI, 2.27 to 12.84) as compared with an OR of 14.38 (95% CI, 6.08 to 34.01) for study-specific thresholds. Post hoc identification of study-specific prognostic biomarker thresholds artificially maximizes biomarker predictive power, resulting in an amplification or overestimation during meta-analysis of these results. This effect is accentuated in small studies.
Value-based differential pricing: efficient prices for drugs in a global context.
Danzon, Patricia; Towse, Adrian; Mestre-Ferrandiz, Jorge
2015-03-01
This paper analyzes pharmaceutical pricing between and within countries to achieve second-best static and dynamic efficiency. We distinguish countries with and without universal insurance, because insurance undermines patients' price sensitivity, potentially leading to prices above second-best efficient levels. In countries with universal insurance, if each payer unilaterally sets an incremental cost-effectiveness ratio (ICER) threshold based on its citizens' willingness-to-pay for health; manufacturers price to that ICER threshold; and payers limit reimbursement to patients for whom a drug is cost-effective at that price and ICER, then the resulting price levels and use within each country and price differentials across countries are roughly consistent with second-best static and dynamic efficiency. These value-based prices are expected to differ cross-nationally with per capita income and be broadly consistent with Ramsey optimal prices. Countries without comprehensive insurance avoid its distorting effects on prices but also lack financial protection and affordability for the poor. Improving pricing efficiency in these self-pay countries includes improving regulation and consumer information about product quality and enabling firms to price discriminate within and between countries. © 2013 The Authors. Health Economics published by John Wiley & Sons Ltd.
Optimization of rare-earth-doped fluorides for infrared lasers
NASA Astrophysics Data System (ADS)
Peterson, Rita Dedomenico
2000-11-01
The rare-earth-doped fluoride crystals Tm,Dy:BaY2F8 (Tm,Dy:BYF), Yb,Pr:NaYF4 (Yb,Pr:NYF), and Nd:NYF show considerable promise as infrared laser materials, operating at 3 μm, 1.3 μm, and 1.06 μm respectively. Lasing has been reported previously on all three ionic transitions, but not in these crystals. Optimization of these materials for laser applications requires a more complete spectroscopic characterization than is currently available, particularly with regard to the key parameters of fluorescence lifetime and stimulated emission cross section. To further the optimization process, polarized absorption and emission have been measured for Tm,Dy:BYF, Yb,Pr:NYF, and Nd:NYF, and relevant fluorescence lifetimes have been measured or estimated. For Tm,Dy:BYF and Yb,Pr:NYF which rely upon sensitization, energy transfer parameters were calculated. Results were used in a mathematical model to determine the conditions in which lasing may be obtained. The long upper laser level lifetime in Tm,Dy:BYF translates into low threshold pump intensity, but the ability to reach threshold depends strongly on active ion concentration. The short lifetime in Yb,Pr:NYF leads to much higher threshold pump intensities, but lasing is still attainable if resonator loss is minimized. In Nd:NYF lasing was demonstrated, with a maximum of 60 mW output from an absorbed pump power of 345 mW, and a slope efficiency of 21%. Thresholds were high owing to resonator losses near 9%. Two chief issues involving the optimization of these laser materials were identified and explored. First, identification of the orientation for which emission cross section is highest is complicated in Tm,Dy:BYF by the presence of strong magnetic dipole radiation on the 3 μm transition. This effect makes it necessary to account for the polarization of both the electric and magnetic fields of the emitted radiation when determining an optimal crystal orientation, an accounting further complicated by the low symmetry of the monoclinic BYF host crystal. Second, the effect of host crystal on fluorescence lifetime was considered by comparing lifetime values for the same ionic manifolds in BYF, NYF, and other host crystals. NYF has especially low phonon energies, which leads to longer lifetimes on the longer wavelength transitions which are susceptible to multiphonon relaxation. This advantage is especially needed for lasing at 1.3 μm in Pr where the upper level lifetime is very short. On the shorter wavelength transitions in Tm and Nd, however, the role of phonons is negligible and lifetimes are somewhat shorter than in other fluoride hosts.
Clark, Duncan B; Martin, Christopher S; Chung, Tammy; Gordon, Adam J; Fiorentino, Lisa; Tootell, Mason; Rubio, Doris M
2016-06-01
To examine the National Institute on Alcohol Abuse and Alcoholism Youth Guide alcohol frequency screening thresholds when applied to Diagnostic and Statistical Manual of Mental Disorders, 5th Edition (DSM-5) diagnostic criteria, and to describe alcohol use patterns and alcohol use disorder (AUD) characteristics in rural youth from primary care settings. Adolescents (n = 1193; ages 12 through 20 years) visiting their primary care practitioner for outpatient visits in six rural primary care clinics were assessed prior to their practitioner visit. A tablet computer collected youth self-report of past-year frequency and quantity of alcohol use and DSM-5 AUD symptoms. Sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) were determined. For early adolescents (ages 12 through 14 years), 1.9% met DSM-5 criteria for past-year AUD and ≥3 days with alcohol use in the past year yielded a screen for DSM-5 with optimal psychometric properties (sensitivity: 89%; specificity: 95%; PPV: 37%; NPV: 100%). For middle adolescents (ages 15 through 17 years), 9.5% met DSM-5 AUD criteria, and ≥3 past year drinking days showed optimal screening results (sensitivity: 91%; specificity: 89%; PPV: 50%; NPV: 99%). For late adolescents (ages 18 through 20 years), 10.0% met DSM-5 AUD criteria, and ≥12 past year drinking days showed optimal screening results (sensitivity: 92%; specificity: 75%; PPV: 31%; NPV: 99%). The age stratified National Institute on Alcohol Abuse and Alcoholism frequency thresholds also produced effective results. In rural primary care clinics, 10% of youth over age 14 years had a past-year DSM-5 AUD. These at-risk adolescents can be identified with a single question on alcohol use frequency. Copyright © 2016 Elsevier Inc. All rights reserved.
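For readers who want to reproduce screening statistics of this kind, the following minimal Python sketch computes sensitivity, specificity, PPV and NPV for a candidate drinking-days threshold; the data in the example are invented and do not come from the study sample.

    import numpy as np

    def screening_metrics(drinking_days, has_aud, threshold):
        """Metrics of the rule 'screen positive if past-year drinking days >= threshold'."""
        screen_pos = np.asarray(drinking_days) >= threshold
        aud = np.asarray(has_aud, dtype=bool)
        tp = np.sum(screen_pos & aud)
        fp = np.sum(screen_pos & ~aud)
        fn = np.sum(~screen_pos & aud)
        tn = np.sum(~screen_pos & ~aud)
        return {"sensitivity": tp / (tp + fn),
                "specificity": tn / (tn + fp),
                "ppv": tp / (tp + fp),
                "npv": tn / (tn + fn)}

    # Illustrative data only: past-year drinking days and 0/1 DSM-5 AUD status.
    days = [0, 2, 3, 10, 1, 25, 0, 4, 12, 6]
    aud  = [0, 0, 0, 1,  0, 1,  0, 0, 1,  0]
    print(screening_metrics(days, aud, threshold=3))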
Threshold concepts: implications for the management of natural resources
Guntenspergen, Glenn R.; Gross, John
2014-01-01
Threshold concepts can have broad relevance in natural resource management. However, the concept of ecological thresholds has not been widely incorporated or adopted in management goals. This largely stems from the uncertainty revolving around threshold levels and the post hoc analyses that have generally been used to identify them. Natural resource managers need new tools and approaches that will help them assess the existence and detection of conditions that demand management actions. Additional threshold concepts include utility thresholds (which are based on human values about ecological systems) and decision thresholds (which reflect management objectives and values and include ecological knowledge about a system), as well as ecological thresholds. All of these concepts provide a framework for considering the use of threshold concepts in natural resource decision making.
A novel gene network inference algorithm using predictive minimum description length approach.
Chaitankar, Vijender; Ghosh, Preetam; Perkins, Edward J; Gong, Ping; Deng, Youping; Zhang, Chaoyang
2010-05-28
Reverse engineering of gene regulatory networks using information theory models has received much attention due to its simplicity, low computational cost, and capability of inferring large networks. One of the major problems with information theory models is determining the threshold that defines the regulatory relationships between genes. The minimum description length (MDL) principle has been implemented to overcome this problem. The description length of the MDL principle is the sum of the model length and the data encoding length. A user-specified fine-tuning parameter is used as a control mechanism between model and data encoding, but it is difficult to find the optimal parameter. In this work, we proposed a new inference algorithm that incorporates mutual information (MI), conditional mutual information (CMI) and the predictive minimum description length (PMDL) principle to infer gene regulatory networks from DNA microarray data. In this algorithm, the information-theoretic quantities MI and CMI determine the regulatory relationships between genes, and the PMDL principle method attempts to determine the best MI threshold without the need for a user-specified fine-tuning parameter. The performance of the proposed algorithm was evaluated using both synthetic time series data sets and a biological time series data set for the yeast Saccharomyces cerevisiae. The benchmark quantities precision and recall were used as performance measures. The results show that the proposed algorithm produced fewer false edges and significantly improved the precision compared to the existing algorithm. For further analysis, the performance of the algorithms was observed over different sizes of data. We have proposed a new algorithm that implements the PMDL principle for inferring gene regulatory networks from time series DNA microarray data and eliminates the need for a fine-tuning parameter. The evaluation results obtained from both synthetic and actual biological data sets show that the PMDL principle is effective in determining the MI threshold and that the developed algorithm improves the precision of gene regulatory network inference. Based on the sensitivity analysis of all tested cases, an optimal CMI threshold value was identified. Finally, it was observed that the performance of the algorithms saturates at a certain threshold of data size.
NASA Technical Reports Server (NTRS)
Goel, R.; Kofman, I.; DeDios, Y. E.; Jeevarajan, J.; Stepanyan, V.; Nair, M.; Congdon, S.; Fregia, M.; Cohen, H.; Bloomberg, J.J.;
2015-01-01
Sensorimotor changes such as postural and gait instabilities can affect the functional performance of astronauts when they transition across different gravity environments. We are developing a method, based on stochastic resonance (SR), to enhance information transfer by applying non-zero levels of external noise on the vestibular system (vestibular stochastic resonance, VSR). Our previous work has shown the advantageous effects of VSR in a balance task of standing on an unstable surface [1]. This technique to improve detection of vestibular signals uses a stimulus delivery system that provides imperceptibly low levels of white noise-based binaural bipolar electrical stimulation of the vestibular system. The goal of this project is to determine optimal levels of stimulation for SR applications by using a defined vestibular threshold of motion detection. A series of experiments were carried out to determine a robust paradigm to identify a vestibular threshold that can then be used to recommend optimal stimulation levels for sensorimotor adaptability (SA) training applications customized to each crewmember. The amplitude of stimulation to be used in the VSR application has varied across studies in the literature such as 60% of nociceptive stimulus thresholds [2]. We compared subjects' perceptual threshold with that obtained from two measures of body sway. Each test session was 463s long and consisted of several 15s long sinusoidal stimuli, at different current amplitudes (0-2 mA), interspersed with 20-20.5s periods of no stimulation. Subjects sat on a chair with their eyes closed and had to report their perception of motion through a joystick. A force plate underneath the chair recorded medio-lateral shear forces and roll moments. Comparison of threshold of motion detection obtained from joystick data versus body sway suggests that perceptual thresholds were significantly lower. In the balance task, subjects stood on an unstable surface and had to maintain balance, and the stimulation was administered from 20-400% of subjects' vestibular threshold. Optimal stimulation amplitude was determined at which the balance performance was best compared to control (no stimulation). Preliminary results show that, in general, using stimulation amplitudes at 40-60% of perceptual motion threshold significantly improved the balance performance. We hypothesize that VSR stimulation will act synergistically with SA training to improve adaptability by increasing utilization of vestibular information and therefore will help us to optimize and personalize a SA countermeasure prescription. This combination may help to significantly reduce the number of days required to recover functional performance to preflight levels after long-duration spaceflight.
Schomaker, Michael; Egger, Matthias; Ndirangu, James; Phiri, Sam; Moultrie, Harry; Technau, Karl; Cox, Vivian; Giddy, Janet; Chimbetete, Cleophas; Wood, Robin; Gsponer, Thomas; Bolton Moore, Carolyn; Rabie, Helena; Eley, Brian; Muhe, Lulu; Penazzato, Martina; Essajee, Shaffiq; Keiser, Olivia; Davies, Mary-Ann
2013-01-01
Background There is limited evidence on the optimal timing of antiretroviral therapy (ART) initiation in children 2–5 y of age. We conducted a causal modelling analysis using the International Epidemiologic Databases to Evaluate AIDS–Southern Africa (IeDEA-SA) collaborative dataset to determine the difference in mortality when starting ART in children aged 2–5 y immediately (irrespective of CD4 criteria), as recommended in the World Health Organization (WHO) 2013 guidelines, compared to deferring to lower CD4 thresholds, for example, the WHO 2010 recommended threshold of CD4 count <750 cells/mm3 or CD4 percentage (CD4%) <25%. Methods and Findings ART-naïve children enrolling in HIV care at IeDEA-SA sites who were between 24 and 59 mo of age at first visit and with ≥1 visit prior to ART initiation and ≥1 follow-up visit were included. We estimated mortality for ART initiation at different CD4 thresholds for up to 3 y using g-computation, adjusting for measured time-dependent confounding of CD4 percent, CD4 count, and weight-for-age z-score. Confidence intervals were constructed using bootstrapping. The median (first; third quartile) age at first visit of 2,934 children (51% male) included in the analysis was 3.3 y (2.6; 4.1), with a median (first; third quartile) CD4 count of 592 cells/mm3 (356; 895) and median (first; third quartile) CD4% of 16% (10%; 23%). The estimated cumulative mortality after 3 y for ART initiation at different CD4 thresholds ranged from 3.4% (95% CI: 2.1–6.5) (no ART) to 2.1% (95% CI: 1.3%–3.5%) (ART irrespective of CD4 value). Estimated mortality was overall higher when initiating ART at lower CD4 values or not at all. There was no mortality difference between starting ART immediately, irrespective of CD4 value, and ART initiation at the WHO 2010 recommended threshold of CD4 count <750 cells/mm3 or CD4% <25%, with mortality estimates of 2.1% (95% CI: 1.3%–3.5%) and 2.2% (95% CI: 1.4%–3.5%) after 3 y, respectively. The analysis was limited by loss to follow-up and the unavailability of WHO staging data. Conclusions The results indicate no mortality difference for up to 3 y between ART initiation irrespective of CD4 value and ART initiation at a threshold of CD4 count <750 cells/mm3 or CD4% <25%, but there are overall higher point estimates for mortality when ART is initiated at lower CD4 values. Please see later in the article for the Editors' Summary PMID:24260029
Problematic smartphone use, nature connectedness, and anxiety.
Richardson, Miles; Hussain, Zaheer; Griffiths, Mark D
2018-03-01
Background Smartphone use has increased greatly at a time when concerns about society's disconnection from nature have also markedly increased. Recent research has also indicated that smartphone use can be problematic for a small minority of individuals. Methods In this study, associations between problematic smartphone use (PSU), nature connectedness, and anxiety were investigated using a cross-sectional design (n = 244). Results Associations between PSU and both nature connectedness and anxiety were confirmed. Receiver operating characteristic (ROC) curves were used to identify threshold values on the Problematic Smartphone Use Scale (PSUS) at which strong associations with anxiety and nature connectedness occur. The area under the curve was calculated and positive likelihood ratios used as a diagnostic parameter to identify optimal cut-off for PSU. These provided good diagnostic ability for nature connectedness, but poor and non-significant results for anxiety. ROC analysis showed the optimal PSUS threshold for high nature connectedness to be 15.5 (sensitivity: 58.3%; specificity: 78.6%) in response to an LR+ of 2.88. Conclusions The results demonstrate the potential utility for the PSUS as a diagnostic tool, with a level of smartphone use that users may perceive as non-problematic being a significant cut-off in terms of achieving beneficial levels of nature connectedness. Implications of these findings are discussed.
The topographic grain concept in DEM-based geomorphometric mapping
NASA Astrophysics Data System (ADS)
Józsa, Edina
2016-04-01
A common drawback of geomorphological analyses based on digital elevation datasets is the definition of the search window size for the derivation of morphometric variables. The fixed-size neighbourhood determines the scale of the analysis and mapping, which can lead to the generalization of smaller surface details or the elimination of larger landform elements. The methods of DEM-based geomorphometric mapping are constantly developing in the direction of multi-scale landform delineation, but the optimal threshold for the search window size is still a limiting factor. A possible way to determine a suitable value for this parameter is to consider the topographic grain principle (Wood, W. F. - Snell, J. B. 1960, Pike, R. J. et al. 1989). The calculation is implemented as a bash shell script for GRASS GIS to determine the optimal threshold for the r.geomorphon module. The approach relies on the potential of the topographic grain to detect the characteristic local ridgeline-to-channel spacing. By calculating relative relief values with nested neighbourhood matrices it is possible to define a break-point where the rate of increase of the local relief encountered by the sample is significantly reduced. The geomorphons approach (Jasiewicz, J. - Stepinski, T. F. 2013) is a cell-based DEM classification method for the identification of landform elements at a broad range of scales using a line-of-sight technique. Landforms larger than the maximum lookup distance are broken down into smaller elements, so the threshold needs to be set to a relatively large value; on the other hand, the computational requirements and the size of the study sites determine the upper limit for the value. Therefore the aim was to create a tool that helps determine the optimal parameter for the r.geomorphon tool. As a result it would be possible to produce more objective and consistent maps while achieving the full efficiency of this mapping technique. For a thorough analysis of the applicability of the proposed methodology, a test site covering hilly and low mountainous regions in Southern Transdanubia, Hungary was chosen. As the elevation dataset, the freely available SRTM DSM with 1 arc-second resolution was used, after implementing the necessary error correction. Based on the delineated landform elements and morphometric variables, the physiographic characteristics of the landscape could be analysed and compared with the existing expert-based map of microregions. References: Wood, W. F. and J. B. Snell (1960). A quantitative system for classifying landforms. Technical Report EP-124, U.S. Army Quartermaster Research and Engineering Center, Natick, 20 pp. Pike, R. J., et al. (1989). Topographic grain automated from digital elevation models. Proceedings, Auto-Carto 9, ASPRS/ACSM, Baltimore, MD, 2-7 April 1989. Jasiewicz, J. and T. F. Stepinski (2013). Geomorphons - a pattern recognition approach to classification and mapping of landforms. Geomorphology 182(0): 147-156.
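The break-point idea can be sketched in a few lines of Python (the published implementation is a bash script for GRASS GIS; the stopping rule used below, a fixed fraction of the initial growth rate, is an assumption for illustration, not the exact published criterion):

    import numpy as np
    from scipy.ndimage import maximum_filter, minimum_filter

    def topographic_grain(dem, radii, drop_fraction=0.5):
        """Mean local relief for nested square windows plus a simple break-point estimate."""
        relief = []
        for r in radii:
            size = 2 * r + 1
            local = maximum_filter(dem, size) - minimum_filter(dem, size)
            relief.append(local.mean())
        relief = np.array(relief)
        rate = np.diff(relief) / np.diff(radii)      # growth rate of relief with window size
        below = np.where(rate < drop_fraction * rate[0])[0]
        grain = radii[below[0] + 1] if below.size else radii[-1]
        return relief, grain

    rng = np.random.default_rng(0)
    dem = np.cumsum(rng.normal(size=(200, 200)), axis=0)   # synthetic DEM for demonstration
    relief, grain = topographic_grain(dem, radii=range(1, 21))
    print("estimated grain (cells):", grain)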
Fluorescently labeled bevacizumab in human breast cancer: defining the classification threshold
NASA Astrophysics Data System (ADS)
Koch, Maximilian; de Jong, Johannes S.; Glatz, Jürgen; Symvoulidis, Panagiotis; Lamberts, Laetitia E.; Adams, Arthur L. L.; Kranendonk, Mariëtte E. G.; Terwisscha van Scheltinga, Anton G. T.; Aichler, Michaela; Jansen, Liesbeth; de Vries, Jakob; Lub-de Hooge, Marjolijn N.; Schröder, Carolien P.; Jorritsma-Smit, Annelies; Linssen, Matthijs D.; de Boer, Esther; van der Vegt, Bert; Nagengast, Wouter B.; Elias, Sjoerd G.; Oliveira, Sabrina; Witkamp, Arjen J.; Mali, Willem P. Th. M.; Van der Wall, Elsken; Garcia-Allende, P. Beatriz; van Diest, Paul J.; de Vries, Elisabeth G. E.; Walch, Axel; van Dam, Gooitzen M.; Ntziachristos, Vasilis
2017-07-01
Breast cancer specimens containing an in-vivo fluorescently labelled drug (bevacizumab) were obtained from patients. We propose a new structured method to determine the optimal classification threshold in targeted fluorescence intra-operative imaging.
Percolation threshold determines the optimal population density for public cooperation
NASA Astrophysics Data System (ADS)
Wang, Zhen; Szolnoki, Attila; Perc, Matjaž
2012-03-01
While worldwide census data provide statistical evidence that firmly link the population density with several indicators of social welfare, the precise mechanisms underlying these observations are largely unknown. Here we study the impact of population density on the evolution of public cooperation in structured populations and find that the optimal density is uniquely related to the percolation threshold of the host graph irrespective of its topological details. We explain our observations by showing that spatial reciprocity peaks in the vicinity of the percolation threshold, when the emergence of a giant cooperative cluster is hindered neither by vacancy nor by invading defectors, thus discovering an intuitive yet universal law that links the population density with social prosperity.
NASA Astrophysics Data System (ADS)
Ren, Zhong; Liu, Guodong; Xiong, Zhihua
2016-10-01
Denoising the photoacoustic signals of glucose is one of the most important steps in fruit quality identification because the real-time photoacoustic signals of glucose are easily corrupted by various kinds of noise. To remove the noise and some useless information, an improved wavelet threshold function was proposed. Compared with the traditional hard and soft wavelet threshold functions, the improved wavelet threshold function can overcome the pseudo-oscillation effect in the denoised photoacoustic signals owing to its continuity, and the error between the denoised signals and the original signals is decreased. To validate the feasibility of denoising with the improved wavelet threshold function, denoising simulation experiments based on MATLAB programming were performed. In the simulation experiments, a standard test signal was used, and three other denoising methods were applied and compared with the improved wavelet threshold function. The signal-to-noise ratio (SNR) and root-mean-square error (RMSE) values were used to evaluate the denoising performance of the improved wavelet threshold function. The experimental results demonstrate that the SNR value of the improved wavelet threshold function is the largest and the RMSE value is the smallest, which verifies that denoising with the improved wavelet threshold function is feasible. Finally, the improved wavelet threshold function was used to remove the noise from the photoacoustic signals of glucose solutions, and the denoising effect was also very good. Therefore, the improved wavelet threshold function denoising proposed in this paper has potential value in the field of denoising photoacoustic signals.
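The abstract does not give the analytical form of the improved threshold function, so the sketch below uses one common continuous compromise between hard and soft thresholding (zero below the threshold, continuous at the threshold, and tending to hard thresholding for large coefficients) and compares SNR and RMSE on a synthetic test signal with PyWavelets; the functional form, wavelet and noise level are assumptions for illustration.

    import numpy as np
    import pywt

    def improved_threshold(coeffs, thr, alpha=2.0):
        """Continuous hard/soft compromise (assumed form, not the paper's exact function)."""
        c = np.asarray(coeffs)
        shrink = np.sign(c) * (np.abs(c) - thr / np.exp(alpha * (np.abs(c) - thr)))
        return np.where(np.abs(c) > thr, shrink, 0.0)

    def denoise(signal, wavelet="db4", level=4, mode="improved"):
        coeffs = pywt.wavedec(signal, wavelet, level=level)
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # noise estimate
        thr = sigma * np.sqrt(2 * np.log(len(signal)))        # universal threshold
        new = [coeffs[0]]
        for d in coeffs[1:]:
            new.append(improved_threshold(d, thr) if mode == "improved"
                       else pywt.threshold(d, thr, mode=mode))  # "hard" or "soft"
        return pywt.waverec(new, wavelet)[: len(signal)]

    t = np.linspace(0, 1, 2048)
    clean = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sign(np.sin(2 * np.pi * 20 * t))
    noisy = clean + 0.3 * np.random.default_rng(1).normal(size=t.size)
    for m in ("hard", "soft", "improved"):
        den = denoise(noisy, mode=m)
        rmse = np.sqrt(np.mean((den - clean) ** 2))
        snr = 10 * np.log10(np.sum(clean ** 2) / np.sum((den - clean) ** 2))
        print(f"{m:9s}  SNR = {snr:5.2f} dB  RMSE = {rmse:.4f}")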
An Optimal Method for Detecting Internal and External Intrusion in MANET
NASA Astrophysics Data System (ADS)
Rafsanjani, Marjan Kuchaki; Aliahmadipour, Laya; Javidi, Mohammad M.
A Mobile Ad hoc Network (MANET) is formed by a set of mobile hosts that communicate among themselves through radio waves. The hosts establish an infrastructure and cooperate to forward data in a multi-hop fashion without central administration. Due to their communication type and resource constraints, MANETs are vulnerable to diverse types of attacks and intrusions. In this paper, we propose a method for preventing internal intrusions and detecting external intrusions in mobile ad hoc networks by using game theory. One optimal solution for reducing the resource consumption of external intrusion detection is to elect a leader for each cluster to provide the intrusion detection service to the other nodes in its cluster; we call this moderate mode. Moderate mode is only suitable when the probability of attack is low. Once the probability of attack is high, victim nodes should launch their own IDS to detect and thwart intrusions; we call this robust mode. In this paper, the leader should not be a malicious or selfish node and must detect external intrusions in its cluster at minimum cost. Our proposed method has three steps: the first step builds trust relationships between nodes and estimates a trust value for each node to prevent internal intrusion; in the second step we propose an optimal method for leader election using the trust values; and in the third step we find the threshold value for notifying a victim node to launch its IDS once the probability of attack exceeds that value. In the first and third steps we apply Bayesian game theory. By using game theory, trust values, and an honest leader, our method can effectively improve network security and performance and reduce resource consumption.
Decision curve analysis: a novel method for evaluating prediction models.
Vickers, Andrew J; Elkin, Elena B
2006-01-01
Diagnostic and prognostic models are typically evaluated with measures of accuracy that do not address clinical consequences. Decision-analytic techniques allow assessment of clinical outcomes but often require collection of additional information and may be cumbersome to apply to models that yield a continuous result. The authors sought a method for evaluating and comparing prediction models that incorporates clinical consequences, requires only the data set on which the models are tested, and can be applied to models that have either continuous or dichotomous results. The authors describe decision curve analysis, a simple, novel method of evaluating predictive models. They start by assuming that the threshold probability of a disease or event at which a patient would opt for treatment is informative of how the patient weighs the relative harms of a false-positive and a false-negative prediction. This theoretical relationship is then used to derive the net benefit of the model across different threshold probabilities. Plotting net benefit against threshold probability yields the "decision curve." The authors apply the method to models for the prediction of seminal vesicle invasion in prostate cancer patients. Decision curve analysis identified the range of threshold probabilities in which a model was of value, the magnitude of benefit, and which of several models was optimal. Decision curve analysis is a suitable method for evaluating alternative diagnostic and prognostic strategies that has advantages over other commonly used measures and techniques.
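The net-benefit computation that underlies a decision curve is simple enough to state directly; the sketch below is a minimal Python illustration in which the outcomes, predicted risks and threshold grid are invented and do not come from the prostate cancer data set used by the authors.

    import numpy as np

    def net_benefit(y_true, risk, threshold):
        """Net benefit of 'treat if predicted risk >= threshold': TP/n - FP/n * pt/(1 - pt)."""
        y = np.asarray(y_true, dtype=bool)
        treat = np.asarray(risk) >= threshold
        n = y.size
        tp = np.sum(treat & y)
        fp = np.sum(treat & ~y)
        return tp / n - fp / n * threshold / (1 - threshold)

    rng = np.random.default_rng(0)
    y = rng.random(500) < 0.3                                   # illustrative outcomes
    risk = np.clip(0.2 + 0.5 * y.astype(float) + rng.normal(0, 0.15, 500), 0.01, 0.99)

    thresholds = np.linspace(0.05, 0.6, 12)
    curve_model = [net_benefit(y, risk, pt) for pt in thresholds]
    curve_all = [np.mean(y) - (1 - np.mean(y)) * pt / (1 - pt) for pt in thresholds]
    # Plotting curve_model, curve_all and the treat-none line (zero) against the
    # thresholds gives the decision curve.
    print(list(zip(np.round(thresholds, 2), np.round(curve_model, 3))))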
Ochando-Pulido, J M; Hodaifa, G; Victor-Ortega, M D; Rodriguez-Vives, S; Martinez-Ferez, A
2013-12-15
Production of olive oil results in the generation of large amounts of heavily polluted effluents characterized by an extremely variable degree of contamination, which makes treatment notably complex. In this work, batch membrane processes in series comprising ultrafiltration (UF), nanofiltration (NF) and reverse osmosis (RO) are used to purify the effluents exiting both the two-phase and three-phase extraction processes to a grade compatible with discharge into municipal sewer systems in Spain and Italy. However, one main problem in applying this technology to wastewater management is membrane fouling. In recent years, the threshold flux theory was introduced as a key tool for understanding fouling problems, and threshold flux measurement can give valuable information regarding optimal membrane process design and operation. In the present manuscript, a mathematical approach to threshold flux conditions for membrane operation is addressed, also implementing proper pretreatment processes such as pH-T flocculation and UV/TiO2 photocatalysis with ferromagnetic-core nanoparticles in order to reduce membrane fouling. Both pretreatments influence the organic matter content as well as the particle size distribution of the solutes remaining in the wastewater stream, leading, when properly applied, to reduced fouling and higher rejection and recovery values, thus enhancing the economic feasibility of the process. Copyright © 2013 Elsevier B.V. All rights reserved.
Imaging performance of a Timepix detector based on semi-insulating GaAs
NASA Astrophysics Data System (ADS)
Zaťko, B.; Zápražný, Z.; Jakůbek, J.; Šagátová, A.; Boháček, P.; Sekáčová, M.; Korytár, D.; Nečas, V.; Žemlička, J.; Mora, Y.; Pichotka, M.
2018-01-01
This work focused on a Timepix chip [1] coupled with a bulk semi-insulating GaAs sensor. The sensor consisted of a matrix of 256 × 256 pixels with a pitch of 55 μm bump-bonded to a Timepix ASIC. The sensor was processed on a 350 μm-thick SI GaAs wafer. We carried out detector adjustment to optimize its performance. This included threshold equalization with setting up parameters of the Timepix chip, such as Ikrum, Pream, Vfbk, and so on. The energy calibration of the GaAs Timepix detector was realized using a 241Am radioisotope in two Timepix detector modes: time-over-threshold and threshold scan. An energy resolution of 4.4 keV in FWHM (Full Width at Half Maximum) was observed for 59.5 keV γ-photons using threshold scan mode. The X-ray imaging quality of the GaAs Timepix detector was tested using various samples irradiated by an X-ray source with a focal spot size smaller than 8 μm and accelerating voltage up to 80 kV. A 700 μm × 700 μm gold testing object (X-500-200-16Au with Siemens star) fabricated with high precision was used for the spatial resolution testing at different values of X-ray image magnification (up to 45). The measured spatial resolution of our X-ray imaging system was about 4 μm.
Rapid and Reliable Damage Proxy Map from InSAR Coherence
NASA Technical Reports Server (NTRS)
Yun, Sang-Ho; Fielding, Eric; Simons, Mark; Agram, Piyush; Rosen, Paul; Owen, Susan; Webb, Frank
2012-01-01
Future radar satellites will visit SoCal within a day after a disaster event. Data acquisition latency in 2015-2020 is 8 to approximately 15 hours. Data transfer latency, which often involves human/agency intervention, far exceeds the data acquisition latency, so interagency cooperation is needed to establish an automatic pipeline for data transfer. The algorithm is tested with ALOS PALSAR data of Pasadena, California. Quantitative quality assessment is being pursued by meeting with Pasadena City Hall computer engineers to obtain a complete list of demolition/construction projects, in order to (1) estimate the probability of detection and probability of false alarm and (2) estimate the optimal threshold value.
Fidelity matters: the birth of entanglement in the mixing of Gaussian states.
Olivares, Stefano; Paris, Matteo G A
2011-10-21
We address the interaction of two Gaussian states through bilinear exchange Hamiltonians and analyze the correlations exhibited by the resulting bipartite systems. We demonstrate that entanglement arises if and only if the fidelity between the two input Gaussian states falls under a threshold value depending only on their purities, first moments, and the strength of the coupling. Our result clarifies the role of quantum fluctuations (squeezing) as a prerequisite for entanglement generation and provides a tool to optimize the generation of entanglement in linear systems of interest for quantum technology. © 2011 American Physical Society
Objective lens simultaneously optimized for pupil ghosting, wavefront delivery and pupil imaging
NASA Technical Reports Server (NTRS)
Olczak, Eugene G (Inventor)
2011-01-01
An objective lens includes multiple optical elements disposed between a first end and a second end, each optical element oriented along an optical axis. Each optical surface of the multiple optical elements provides an angle of incidence to a marginal ray that is above a minimum threshold angle. This threshold angle minimizes pupil ghosts that may enter an interferometer. The objective lens also optimizes wavefront delivery and pupil imaging onto an optical surface under test.
2010-01-01
Background The origin and stability of cooperation is a hot topic in social and behavioural sciences. A complicated conundrum exists because defectors have an advantage over cooperators whenever cooperation is costly, so that not cooperating pays off. In addition, the discovery that humans and some animal populations, such as lions, are polymorphic, with cooperators and defectors stably living together while defectors are not being punished, is even more puzzling. Here we offer a novel explanation based on a Threshold Public Good Game (PGG) that includes the interaction of individual and group level selection, where individuals can contribute to multiple collective actions, in our model group hunting and group defense. Results Our results show that there are polymorphic equilibria in Threshold PGGs; multi-level selection does not select for the most cooperators per group but rather for groups close to the optimum number of cooperators (in terms of the Threshold PGG). In particular, for medium cost values, division of labour evolves within the group with regard to the two types of cooperative actions (hunting vs. defense). Moreover, we show evidence that spatial population structure promotes cooperation in multiple PGGs. We also demonstrate that these results apply for a wide range of non-linear benefit function types. Conclusions We demonstrate that cooperation can be stable in the Threshold PGG, even when the proportion of so-called free riders is high in the population. A fundamentally new mechanism is proposed for how laggards, individuals that have a high tendency to defect during one specific group action, can actually contribute to the fitness of the group by playing a part in an optimal resource allocation in Threshold Public Good Games. In general, our results show that acknowledging a multilevel selection process will open up novel explanations for collective actions. PMID:21044340
Quintana, Penelope J E; Matt, Georg E; Chatfield, Dale; Zakarian, Joy M; Fortmann, Addie L; Hoh, Eunha
2013-09-01
Secondhand smoke contains a mixture of pollutants that can persist in air, dust, and on surfaces for months or longer. This persistent residue is known as thirdhand smoke (THS). Here, we detail a simple method of wipe sampling for nicotine as a marker of accumulated THS on surfaces. We analyzed findings from 5 real-world studies to investigate the performance of wipe sampling for nicotine on surfaces in homes, cars, and hotels in relation to smoking behavior and smoking restrictions. The intraclass correlation coefficient for side-by-side samples was 0.91 (95% CI: 0.87-0.94). Wipe sampling for nicotine reliably distinguished between private homes, private cars, rental cars, and hotels with and without smoking bans and was significantly positively correlated with other measures of tobacco smoke contamination such as air and dust nicotine. The sensitivity and specificity of possible threshold values (0.1, 1, and 10 μg/m(2)) were evaluated for distinguishing between nonsmoking and smoking environments. Sensitivity was highest at a threshold of 0.1 μg/m(2), with 74%-100% of smoker environments showing nicotine levels above threshold. Specificity was highest at a threshold of 10 μg/m(2), with 81%-100% of nonsmoker environments showing nicotine levels below threshold. The optimal threshold will depend on the desired balance of sensitivity and specificity and on the types of smoking and nonsmoking environments. Surface wipe sampling for nicotine is a reliable, valid, and relatively simple collection method to quantify THS contamination on surfaces across a wide range of field settings and to distinguish between nonsmoking and smoking environments.
Vuralli, Doga; Evren Boran, H; Cengiz, Bulent; Coskun, Ozlem; Bolay, Hayrunnisa
2016-10-01
Migraine headache attacks have been shown to be accompanied by significant prolongation of somatosensory temporal discrimination threshold values, supporting signs of disrupted sensorial processing in migraine. Chronic migraine is one of the most debilitating and challenging headache disorders with no available biomarker. We aimed to test the diagnostic value of somatosensory temporal discrimination for chronic migraine in this prospective, controlled study. Fifteen chronic migraine patients and 15 healthy controls completed the study. Chronic migraine patients were evaluated twice, during a headache and headache-free period. Somatosensory temporal discrimination threshold values were evaluated in both hands. Duration of migraine and chronic migraine, headache intensity, clinical features accompanying headache such as nausea, photophobia, phonophobia and osmophobia, and pressure pain thresholds were also recorded. In the chronic migraine group, somatosensory temporal discrimination threshold values on the headache day (138.8 ± 21.8 ms for the right hand and 141.2 ± 17.4 ms for the left hand) were significantly higher than somatosensory temporal discrimination threshold values on the headache free day (121.5 ± 13.8 ms for the right hand and 122.8 ± 12.6 ms for the left hand, P = .003 and P < .0001, respectively) and somatosensory temporal discrimination thresholds of healthy volunteers (35.4 ± 5.5 ms for the right hand and 36.4 ± 5.4 ms for the left hand, P < .0001 and P < .0001, respectively). Somatosensory temporal discrimination threshold values of chronic migraine patients on the headache free day were significantly prolonged compared to somatosensory temporal discrimination threshold values of the control group (121.5 ± 13.8 ms vs 35.4 ± 5.5 ms for the right hand, P < .0001 and 122.8 ± 12.6 ms vs 36.4 ± 5.4 ms for the left hand, P < .0001). Somatosensory temporal discrimination threshold values of the hand contralateral to the headache lateralization (153.3 ± 13.7 ms) were significantly higher (P < .0001) than the ipsilateral hand (118.2 ± 11.9 ms) in chronic migraine patients when headache was lateralized. The headache intensity of chronic migraine patients rated with visual analog score was positively correlated with the contralateral somatosensory temporal discrimination threshold values. Somatosensory temporal discrimination thresholds persist elevated during the headache-free intervals in patients with chronic migraine. By providing evidence for the first time for unremitting disruption of central sensory processing, somatosensory temporal discrimination test stands out as a promising neurophysiological biomarker for chronic migraine. © 2016 American Headache Society.
Cuckoo search algorithm based satellite image contrast and brightness enhancement using DWT-SVD.
Bhandari, A K; Soni, V; Kumar, A; Singh, G K
2014-07-01
This paper presents a new contrast enhancement approach based on the Cuckoo Search (CS) algorithm and DWT-SVD for quality improvement of low-contrast satellite images. The input image is decomposed into four frequency subbands through the Discrete Wavelet Transform (DWT), the CS algorithm is used to optimize each DWT subband, the singular value matrix of the low-low thresholded subband image is then obtained, and finally the enhanced image is reconstructed by applying the IDWT. The singular value matrix contains the intensity information of the particular image, and any modification of the singular values changes the intensity of the given image. The experimental results show the superiority of the proposed method in terms of PSNR, MSE, Mean and Standard Deviation over conventional and state-of-the-art techniques. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
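As a rough illustration of the DWT-SVD step (without the Cuckoo Search optimization, and with a simple contrast-stretched reference standing in for the optimized correction), the following Python sketch scales the singular values of the LL subband and reconstructs the image; the reference choice and scaling rule are assumptions, not the authors' procedure.

    import numpy as np
    import pywt

    def dwt_svd_enhance(img):
        """Scale the LL-subband singular values toward a contrast-stretched reference."""
        img = img.astype(float)
        LL, (LH, HL, HH) = pywt.dwt2(img, "haar")
        ref = (LL - LL.min()) / (np.ptp(LL) + 1e-12) * 255.0     # simple full-range stretch
        U, S, Vt = np.linalg.svd(LL, full_matrices=False)
        _, S_ref, _ = np.linalg.svd(ref, full_matrices=False)
        xi = S_ref.max() / S.max()                               # correction coefficient
        LL_new = U @ np.diag(xi * S) @ Vt
        out = pywt.idwt2((LL_new, (LH, HL, HH)), "haar")
        return np.clip(out, 0, 255)

    rng = np.random.default_rng(4)
    img = 60 + 40 * rng.random((128, 128))                       # synthetic low-contrast image
    enhanced = dwt_svd_enhance(img)
    print(f"{img.min():.1f}-{img.max():.1f}  ->  {enhanced.min():.1f}-{enhanced.max():.1f}")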
An adaptive threshold detector and channel parameter estimator for deep space optical communications
NASA Technical Reports Server (NTRS)
Arabshahi, P.; Mukai, R.; Yan, T. -Y.
2001-01-01
This paper presents a method for optimal adaptive setting of pulse-position-modulation pulse detection thresholds, which minimizes the total probability of error for the dynamically fading optical free-space channel.
Optimal control of population recovery--the role of economic restoration threshold.
Lampert, Adam; Hastings, Alan
2014-01-01
A variety of ecological systems around the world have been damaged in recent years, either by natural factors such as invasive species, storms and global change or by direct human activities such as overfishing and water pollution. Restoration of these systems to provide ecosystem services entails significant economic benefits. Thus, choosing how and when to restore in an optimal fashion is important, but has not been well studied. Here we examine a general model where population growth can be induced or accelerated by investing in active restoration. We show that the most cost-effective method to restore an ecosystem dictates investment until the population approaches an 'economic restoration threshold', a density above which the ecosystem should be left to recover naturally. Therefore, determining this threshold is a key general approach for guiding efficient restoration management, and we demonstrate how to calculate this threshold for both deterministic and stochastic ecosystems. © 2013 John Wiley & Sons Ltd/CNRS.
Optimal threshold estimation for binary classifiers using game theory.
Sanchez, Ignacio Enrique
2016-01-01
Many bioinformatics algorithms can be understood as binary classifiers. They are usually compared using the area under the receiver operating characteristic (ROC) curve. On the other hand, choosing the best threshold for practical use is a complex task, due to uncertain and context-dependent skews in the abundance of positives in nature and in the yields/costs for correct/incorrect classification. We argue that considering a classifier as a player in a zero-sum game allows us to use the minimax principle from game theory to determine the optimal operating point. The proposed classifier threshold corresponds to the intersection between the ROC curve and the descending diagonal in ROC space and yields a minimax accuracy of 1-FPR. Our proposal can be readily implemented in practice, and reveals that the empirical condition for threshold estimation of "specificity equals sensitivity" maximizes robustness against uncertainties in the abundance of positives in nature and classification costs.
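The proposed operating point can be located directly from a ROC curve: it is the threshold whose (FPR, TPR) pair lies closest to the descending diagonal TPR = 1 - FPR, i.e. where sensitivity and specificity coincide. A minimal Python sketch (with invented classifier scores) follows.

    import numpy as np
    from sklearn.metrics import roc_curve

    def minimax_threshold(y_true, scores):
        """Score threshold where the ROC curve crosses the descending diagonal."""
        fpr, tpr, thr = roc_curve(y_true, scores)
        gap = np.abs(tpr - (1 - fpr))          # distance from the line tpr = 1 - fpr
        i = np.argmin(gap)
        return thr[i], tpr[i], 1 - fpr[i]

    rng = np.random.default_rng(2)
    y = np.r_[np.zeros(300), np.ones(300)]                     # illustrative labels
    s = np.r_[rng.normal(0, 1, 300), rng.normal(1.2, 1, 300)]  # illustrative scores
    t, sens, spec = minimax_threshold(y, s)
    print(f"threshold={t:.3f}  sensitivity={sens:.3f}  specificity={spec:.3f}")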
Holzgreve, Adrien; Brendel, Matthias; Gu, Song; Carlsen, Janette; Mille, Erik; Böning, Guido; Mastrella, Giorgia; Unterrainer, Marcus; Gildehaus, Franz J; Rominger, Axel; Bartenstein, Peter; Kälin, Roland E; Glass, Rainer; Albert, Nathalie L
2016-01-01
Noninvasive tumor growth monitoring is of particular interest for the evaluation of experimental glioma therapies. This study investigates the potential of positron emission tomography (PET) using O-(2-(18)F-fluoroethyl)-L-tyrosine ([(18)F]-FET) to determine tumor growth in a murine glioblastoma (GBM) model-including estimation of the biological tumor volume (BTV), which has hitherto not been investigated in the pre-clinical context. Fifteen GBM-bearing mice (GL261) and six control mice (shams) were investigated during 5 weeks by PET followed by autoradiographic and histological assessments. [(18)F]-FET PET was quantitated by calculation of maximum and mean standardized uptake values within a universal volume-of-interest (VOI) corrected for healthy background (SUVmax/BG, SUVmean/BG). A partial volume effect correction (PVEC) was applied in comparison to ex vivo autoradiography. BTVs obtained by predefined thresholds for VOI definition (SUV/BG: ≥1.4; ≥1.6; ≥1.8; ≥2.0) were compared to the histologically assessed tumor volume (n = 8). Finally, individual "optimal" thresholds for BTV definition best reflecting the histology were determined. In GBM mice SUVmax/BG and SUVmean/BG clearly increased with time, however at high inter-animal variability. No relevant [(18)F]-FET uptake was observed in shams. PVEC recovered signal loss of SUVmean/BG assessment in relation to autoradiography. BTV as estimated by predefined thresholds strongly differed from the histology volume. Strikingly, the individual "optimal" thresholds for BTV assessment correlated highly with SUVmax/BG (ρ = 0.97, p < 0.001), allowing SUVmax/BG-based calculation of individual thresholds. The method was verified by a subsequent validation study (n = 15, ρ = 0.88, p < 0.01) leading to extensively higher agreement of BTV estimations when compared to histology in contrast to predefined thresholds. [(18)F]-FET PET with standard SUV measurements is feasible for glioma imaging in the GBM mouse model. PVEC is beneficial to improve accuracy of [(18)F]-FET PET SUV quantification. Although SUVmax/BG and SUVmean/BG increase during the disease course, these parameters do not correlate with the respective tumor size. For the first time, we propose a histology-verified method allowing appropriate individual BTV estimation for volumetric in vivo monitoring of tumor growth with [(18)F]-FET PET and show that standardized thresholds from routine clinical practice seem to be inappropriate for BTV estimation in the GBM mouse model.
Study of blur discrimination for 3D stereo viewing
NASA Astrophysics Data System (ADS)
Subedar, Mahesh; Karam, Lina J.
2014-03-01
Blur is an important attribute in the study and modeling of the human visual system. Blur discrimination was studied extensively using 2D test patterns. In this study, we present the details of subjective tests performed to measure blur discrimination thresholds using stereoscopic 3D test patterns. Specifically, the effect of disparity on the blur discrimination thresholds is studied on a passive stereoscopic 3D display. The blur discrimination thresholds are measured using stereoscopic 3D test patterns with positive, negative and zero disparity values, at multiple reference blur levels. A disparity value of zero represents the 2D viewing case where both the eyes will observe the same image. The subjective test results indicate that the blur discrimination thresholds remain constant as we vary the disparity value. This further indicates that binocular disparity does not affect blur discrimination thresholds and the models developed for 2D blur discrimination thresholds can be extended to stereoscopic 3D blur discrimination thresholds. We have presented fitting of the Weber model to the 3D blur discrimination thresholds measured from the subjective experiments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cain, W.S.; Shoaf, C.R.; Velasquez, S.F.
1992-03-01
In response to numerous requests for information related to odor thresholds, this document was prepared by the Air Risk Information Support Center in its role in providing technical assistance to State and Local government agencies on risk assessment of air pollutants. A discussion of basic concepts related to olfactory function and the measurement of odor thresholds is presented. A detailed discussion of criteria which are used to evaluate the quality of published odor threshold values is provided. The use of odor threshold information in risk assessment is discussed. The results of a literature search and review of odor threshold information for the chemicals listed as hazardous air pollutants in the Clean Air Act amendments of 1990 are presented. The published odor threshold values are critically evaluated based on the criteria discussed, and the values of acceptable quality are used to determine a geometric mean or best estimate.
Control of Smart Building Using Advanced SCADA
NASA Astrophysics Data System (ADS)
Samuel, Vivin Thomas
For complete control of a building, a proper SCADA implementation and optimization strategy have to be built, and for better communication and efficiency a proper channel between the communication protocol and SCADA has to be designed. This paper concentrates mainly on the interface between the communication protocol and the SCADA implementation, from which a better optimization and energy-saving strategy is derived for large-scale industrial buildings. The communication channel is used to control the building completely and remotely from a distant location. For an efficient result, we consider the temperature values and the power ratings of the equipment, so that while controlling the equipment we set threshold values for implementing the FDD technique. Building management systems have become vital for maintaining any building and for safety purposes. Smart buildings refer to various distinct features, including complete automation systems, office building controls, and data center controls. ELCs are used to communicate the load values of the building to a remote server from a far location with the help of an Ethernet communication channel. Based on demand fluctuation and peak voltage, the loads operate differently, increasing the consumption rate and thus increasing the annual consumption bill. Nowadays, saving energy and reducing the consumption bill are essential for the long and reliable operation of any building. The equipment is monitored regularly and an optimization strategy is implemented for cost reduction in the automation system, resulting in a reduction of annual cost and an increase in load lifetime.
Optimizing computer-aided colonic polyp detection for CT colonography by evolving the Pareto front
Li, Jiang; Huang, Adam; Yao, Jack; Liu, Jiamin; Van Uitert, Robert L.; Petrick, Nicholas; Summers, Ronald M.
2009-01-01
A multiobjective genetic algorithm is designed to optimize a computer-aided detection (CAD) system for identifying colonic polyps. Colonic polyps appear as elliptical protrusions on the inner surface of the colon. Curvature-based features for colonic polyp detection have proved to be successful in several CT colonography (CTC) CAD systems. Our CTC CAD program uses a sequential classifier to form initial polyp detections on the colon surface. The classifier utilizes a set of thresholds on curvature-based features to cluster suspicious colon surface regions into polyp candidates. The thresholds were previously chosen experimentally by using feature histograms. The chosen thresholds were effective for detecting polyps sized 10 mm or larger in diameter. However, many medium-sized polyps, 6–9 mm in diameter, were missed in the initial detection procedure. In this paper, the task of finding optimal thresholds as a multiobjective optimization problem was formulated, and a genetic algorithm to solve it was utilized by evolving the Pareto front of the Pareto optimal set. The new CTC CAD system was tested on 792 patients. The sensitivities of the optimized system improved significantly, from 61.68% to 74.71% with an increase of 13.03% (95% CI [6.57%, 19.5%], p=7.78×10−5) for the size category of 6–9 mm polyps, from 65.02% to 77.4% with an increase of 12.38% (95% CI [6.23%, 18.53%], p=7.95×10−5) for polyps 6 mm or larger, and from 82.2% to 90.58% with an increase of 8.38% (95%CI [0.75%, 16%], p=0.03) for polyps 8 mm or larger at comparable false positive rates. The sensitivities of the optimized system are nearly equivalent to those of expert radiologists. PMID:19235388
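Independently of the genetic algorithm used to evolve it, the Pareto front itself is just the set of non-dominated threshold settings when sensitivity is maximized and false positives are minimized. The short Python sketch below extracts that set from a list of hypothetical candidate settings; the numbers are invented and are not the CAD system's operating points.

    import numpy as np

    def pareto_front(points):
        """Indices of non-dominated (sensitivity, false positives per patient) pairs:
        sensitivity is maximized, false positives per patient minimized."""
        pts = np.asarray(points, dtype=float)
        keep = []
        for i, (sens_i, fp_i) in enumerate(pts):
            dominated = np.any((pts[:, 0] >= sens_i) & (pts[:, 1] <= fp_i)
                               & ((pts[:, 0] > sens_i) | (pts[:, 1] < fp_i)))
            if not dominated:
                keep.append(i)
        return keep

    # Hypothetical candidate threshold settings: (sensitivity, false positives per patient).
    candidates = [(0.62, 2.1), (0.68, 2.4), (0.75, 3.0), (0.74, 2.6),
                  (0.80, 4.8), (0.71, 3.5), (0.78, 3.2)]
    front = pareto_front(candidates)
    print("Pareto-optimal settings:", [candidates[i] for i in front])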
Is "No-Threshold" a "Non-Concept"?
NASA Astrophysics Data System (ADS)
Schaeffer, David J.
1981-11-01
A controversy prominent in scientific literature that has carried over to newspapers, magazines, and popular books is having serious social and political expressions today: “Is there, or is there not, a threshold below which exposure to a carcinogen will not induce cancer?” The distinction between establishing the existence of this threshold (which is a theoretical question) and its value (which is an experimental one) gets lost in the scientific arguments. Establishing the existence of this threshold has now become a philosophical question (and an emotional one). In this paper I qualitatively outline theoretical reasons why a threshold must exist, discuss experiments which measure thresholds on two chemicals, and describe and apply a statistical method for estimating the threshold value from exposure-response data.
Devic, Slobodan; Mohammed, Huriyyah; Tomic, Nada; Aldelaijan, Saad; De Blois, François; Seuntjens, Jan; Lehnert, Shirley; Faria, Sergio
2016-06-01
Integration of fluorine-18 fludeoxyglucose ((18)F-FDG)-positron emission tomography (PET) functional data into conventional anatomically based gross tumour volume delineation may lead to optimization of dose to biological target volumes (BTV) in radiotherapy. We describe a method for defining tumour subvolumes using (18)F-FDG-PET data, based on the decomposition of differential uptake volume histograms (dUVHs). For 27 patients with histopathologically proven non-small-cell lung carcinoma (NSCLC), background uptake values were sampled within the healthy lung contralateral to a tumour in those image slices containing tumour and then scaled by the ratio of mass densities between the healthy lung and tumour. Signal-to-background (S/B) uptake values within volumes of interest encompassing the tumour were used to reconstruct the dUVHs. These were subsequently decomposed into the minimum number of analytical functions (in the form of differential uptake values as a function of S/B) that yielded acceptable net fits, as assessed by χ(2) values. Six subvolumes consistently emerged from the fitted dUVHs over the sampled volume of interest on PET images. Based on the assumption that each function used to decompose the dUVH may correspond to a single subvolume, the intersection between the two adjacent functions could be interpreted as a threshold value that differentiates them. Assuming that the first two subvolumes spread over the tumour boundary, we concentrated on four subvolumes with the highest uptake values, and their S/B thresholds [mean ± standard deviation (SD)] were 2.88 ± 0.98, 4.05 ± 1.55, 5.48 ± 2.06 and 7.34 ± 2.89 for adenocarcinoma, 3.01 ± 0.71, 4.40 ± 0.91, 5.99 ± 1.31 and 8.17 ± 2.42 for large-cell carcinoma and 4.54 ± 2.11, 6.46 ± 2.43, 8.87 ± 5.37 and 12.11 ± 7.28 for squamous cell carcinoma, respectively. (18)F-FDG-based PET data may potentially be used to identify BTV within the tumour in patients with NSCLC. Using the one-way analysis of variance statistical tests, we found a significant difference among all threshold levels among adenocarcinomas, large-cell carcinoma and squamous cell carcinomas. On the other hand, the observed significant variability in threshold values throughout the patient cohort (expressed as large SDs) can be explained as a consequence of differences in the physiological status of the tumour volume for each patient at the time of the PET/CT scan. This further suggests that patient-specific threshold values for the definition of BTVs could be determined by creation and curve fitting of dUVHs on a patient-by-patient basis. The method of (18)F-FDG-PET-based dUVH decomposition described in this work may lead to BTV segmentation in tumours.
Araki, Tetsuro; Sholl, Lynette M.; Gerbaudo, Victor H.; Hatabu, Hiroto; Nishino, Mizuki
2014-01-01
OBJECTIVE The purpose of this article is to investigate the imaging characteristics of pathologically proven thymic hyperplasia and to identify features that can differentiate true hyperplasia from lymphoid hyperplasia. MATERIALS AND METHODS Thirty-one patients (nine men and 22 women; age range, 20–68 years) with pathologically confirmed thymic hyperplasia (18 true and 13 lymphoid) who underwent preoperative CT (n = 27), PET/CT (n = 5), or MRI (n = 6) were studied. The length and thickness of each thymic lobe and the transverse and anterior-posterior diameters and attenuation of the thymus were measured on CT. Thymic morphologic features and heterogeneity on CT and chemical shift on MRI were evaluated. Maximum standardized uptake values were measured on PET. Imaging features between true and lymphoid hyperplasia were compared. RESULTS No significant differences were observed between true and lymphoid hyperplasia in terms of thymic length, thickness, diameters, morphologic features, and other qualitative features (p > 0.16). The length, thickness, and diameters of thymic hyperplasia were significantly larger than the mean values of normal glands in the corresponding age group (p < 0.001). CT attenuation of lymphoid hyperplasia was significantly higher than that of true hyperplasia among 15 patients with contrast-enhanced CT (median, 47.9 vs 31.4 HU; Wilcoxon p = 0.03). The receiver operating characteristic analysis yielded greater than 41.2 HU as the optimal threshold for differentiating lymphoid hyperplasia from true hyperplasia, with 83% sensitivity and 89% specificity. A decrease of signal intensity on opposed-phase images was present in all four cases with in- and opposed-phase imaging. The mean maximum standardized uptake value was 2.66. CONCLUSION CT attenuation of the thymus was significantly higher in lymphoid hyperplasia than in true hyperplasia, with an optimal threshold of greater than 41.2 HU in this cohort of patients with pathologically confirmed thymic hyperplasia. PMID:24555583
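The optimal attenuation cut-off reported above comes from a receiver operating characteristic analysis. A standard way to pick such a cut-off is to maximize the Youden index (sensitivity + specificity − 1), as sketched below with scikit-learn; the HU values are synthetic placeholders, not the study's patient data, and the paper does not state which criterion was used.

```python
# Minimal sketch of choosing a CT-attenuation cut-off from an ROC curve by
# maximizing the Youden index. Attenuation values and labels are synthetic.
import numpy as np
from sklearn.metrics import roc_curve

hu = np.array([28, 31, 35, 30, 33, 29, 45, 50, 48, 42, 47, 52])   # thymic attenuation (HU)
is_lymphoid = np.array([0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1])      # 1 = lymphoid hyperplasia

fpr, tpr, thresholds = roc_curve(is_lymphoid, hu)
youden = tpr - fpr
best = np.argmax(youden)
print(f"optimal cut-off > {thresholds[best]:.1f} HU "
      f"(sensitivity {tpr[best]:.2f}, specificity {1 - fpr[best]:.2f})")
```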
Wang, L Y; Liu, Q; Cheng, X T; Jiang, J J; Wang, H
2017-07-01
We aimed to evaluate the performance of the blood pressure-to-height ratio (BPHR) and establish its optimal thresholds for elevated blood pressure (BP) among children aged 6 to 17 years in Chongqing, China. Data were collected from 11 029 children and adolescents aged 6-17 years in 12 schools in Chongqing according to a multistage stratified cluster sampling method. The gold standard for elevated BP was defined as systolic blood pressure (SBP) and/or diastolic blood pressure (DBP) ⩾95th percentile for gender, age and height. The diagnostic performance of systolic BPHR (SBPHR) and diastolic BPHR (DBPHR) to screen for elevated BP was evaluated through receiver-operating characteristic curves (including the area under the curve (AUC) and its 95% confidence interval, sensitivity and specificity). The prevalence of elevated BP in children and adolescents in Chongqing was 10.36% by SBP and/or DBP ⩾95th percentile for gender, age and height. The optimal thresholds of SBPHR/DBPHR for identifying elevated BP were 0.86/0.58 for boys and 0.85/0.57 for girls among children aged 6 to 8 years, 0.81/0.53 for boys and 0.80/0.52 for girls among children aged 9 to 11 years and 0.71/0.45 for boys and 0.72/0.47 for girls among adolescents aged 12-17 years, respectively. Across gender and the specified age groups, AUC ranged from 0.82 to 0.88, sensitivities were above 0.94 and specificities were over 0.7. The positive predictive values ranged from 0.30 to 0.38 and the negative predictive values were ⩾0.99. BPHR, with uniform values across broad age groups (6-8, 9-11 and 12-17 years) for boys and for girls, is a simple indicator for screening elevated BP in children and adolescents in Chongqing.
Raja, Harish; Snyder, Melissa R; Johnston, Patrick B; O'Neill, Brian P; Caraballo, Juline N; Balsanek, Joseph G; Peters, Brian E; Decker, Paul A; Pulido, Jose S
2013-01-01
Intraocular cytokines are promising diagnostic biomarkers of vitreoretinal lymphoma. Here, we evaluate the utility of IL-10, IL-6 and IL-10/IL-6 for discriminating lymphoma from uveitis and report the effects of intraocular methotrexate and rituximab on aqueous cytokine levels in eyes with lymphoma. This is a retrospective case series including 10 patients with lymphoma and 7 patients with uveitis. Non-parametric Mann-Whitney analysis was performed to determine the statistical significance of differences in interleukin levels between lymphoma and uveitis. Compared to eyes with uveitis, eyes with lymphoma had higher levels of IL-10 (U = 7.0; two-tailed p = 0.004) and IL-10/IL-6 (U = 6.0; two-tailed p = 0.003), whereas IL-6 levels were higher, although not significantly, in patients with uveitis than in those with lymphoma (U = 15.0; two-tailed p = ns). Using a receiver operating characteristic analysis to identify threshold values diagnostic for lymphoma, optimal sensitivity and specificity improved to 80.0% and 100%, respectively, for IL-10>7.025 pg/ml and 90.0% and 100.0%, respectively, for IL-10/IL-6>0.02718. In patients in whom serial interleukin levels were available, regular intravitreal treatment with methotrexate and rituximab was associated with reduction in IL-10 levels over time. In conclusion, optimal IL-10 and IL-10/IL-6 threshold values are associated with a diagnostic sensitivity ≥80% and specificity of 100%. Therefore, these cytokines may serve as a useful adjunct in the diagnosis of lymphoma. While negative IL-10 and IL-10/IL-6 values do not exclude a diagnosis of lymphoma, elevated levels do appear to be consistent with lymphoma clinically. Moreover, elevated levels of IL-10 in the setting of a clinically quiet eye may point to impending disease recurrence. Lastly, once lymphoma is diagnosed, IL-10 levels can be monitored over time to assess disease activity and therapeutic response.
Optimization of microphysics in the Unified Model, using the Micro-genetic algorithm.
NASA Astrophysics Data System (ADS)
Jang, J.; Lee, Y.; Lee, H.; Lee, J.; Joo, S.
2016-12-01
This study focuses on parameter optimization of microphysics in the Unified Model (UM) using the Micro-genetic algorithm (Micro-GA). Optimization of the UM microphysics is needed because microphysics in a Numerical Weather Prediction (NWP) model is important for Quantitative Precipitation Forecasting (QPF). The Micro-GA searches for optimal parameters on the basis of a fitness function. Five target parameters are chosen: x1 and x2, related to the raindrop size distribution, the cloud-rain correlation coefficient, the surface droplet number and the droplet taper height. The fitness function is based on skill scores, namely BIAS and the Critical Success Index (CSI). An interface between the UM and the Micro-GA is developed and applied to three precipitation cases in Korea: (ⅰ) heavy rainfall in the southern area caused by typhoon NAKRI, (ⅱ) heavy rainfall in the Youngdong area, and (ⅲ) heavy rainfall in the Seoul metropolitan area. When the optimized result is compared to the control result (using the UM default values, CNTL), the optimized result leads to improvements in the precipitation forecast, especially for heavy rainfall at late forecast times. We also analyze the skill scores of precipitation forecasts at various thresholds for CNTL, the optimized result, and experiments in which each of the five parameters is optimized individually. Generally, the improvement is maximized when the five optimized parameters are used simultaneously. Therefore, this study demonstrates the ability to improve Korean precipitation forecasts by optimizing microphysics in the UM.
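As a quick reference for the two skill scores named above, the sketch below computes the frequency BIAS and Critical Success Index from a 2x2 contingency table of forecast versus observed precipitation exceeding a threshold. The counts are illustrative only.

```python
# Minimal sketch of BIAS and CSI from a forecast/observation contingency table.
def bias_score(hits, misses, false_alarms):
    """Frequency bias: forecast-event count over observed-event count."""
    return (hits + false_alarms) / (hits + misses)

def csi(hits, misses, false_alarms):
    """Critical Success Index (threat score)."""
    return hits / (hits + misses + false_alarms)

if __name__ == "__main__":
    h, m, f = 42, 18, 25   # illustrative counts
    print(f"BIAS = {bias_score(h, m, f):.2f}, CSI = {csi(h, m, f):.2f}")
```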
Optimizing the clinical utility of PCA3 to diagnose prostate cancer in initial prostate biopsy.
Rubio-Briones, Jose; Borque, Angel; Esteban, Luis M; Casanova, Juan; Fernandez-Serra, Antonio; Rubio, Luis; Casanova-Salas, Irene; Sanz, Gerardo; Domínguez-Escrig, Jose; Collado, Argimiro; Gómez-Ferrer, Alvaro; Iborra, Inmaculada; Ramírez-Backhaus, Miguel; Martínez, Francisco; Calatrava, Ana; Lopez-Guerrero, Jose A
2015-09-11
PCA3 has been included in a nomogram outperforming previous clinical models for the prediction of any prostate cancer (PCa) and high-grade PCa (HGPCa) at the initial prostate biopsy (IBx). Our objective is to validate this IBx-specific PCA3-based nomogram. We also aim to optimize the use of the nomogram in clinical practice through the definition of risk groups. Independent external validation. Clinical and biopsy data were taken from a contemporary cohort of 401 men with the same inclusion criteria as those used to build the reference nomogram in IBx. The predictive value of the nomogram was assessed by means of calibration curves, and discrimination ability through the area under the curve (AUC). Clinical utility of the nomogram was analyzed by choosing threshold points that minimize the overlap between the probability density functions (PDF) of the PCa and no-PCa groups and of the HGPCa and no-HGPCa groups, and net benefit was assessed by decision curves. We detected 28% of PCa and 11% of HGPCa in IBx, contrasting with 46% and 20% in the reference series. Because of this, the calibration curve for PCa shows an overestimation of the nomogram probabilities. The AUC values are 0.736 for PCa (95% CI: 0.68-0.79) and 0.786 for HGPCa (95% CI: 0.71-0.87), showing adequate discrimination ability. The PDFs show differences in the distributions of nomogram probabilities between the PCa and no-PCa patient groups. Minimizing the overlap between these curves confirms that the threshold probability of harboring PCa >30% proposed by Hansen is useful for indicating an IBx, but a cut-off >40% could be better in series of opportunistic screening like ours. Similar results appear in the HGPCa analysis. The decision curve also shows a net benefit of 6.31% for the threshold probability of 40%. PCA3 is a useful tool to select patients for IBx. Patients with a calculated probability of having PCa over 40% should be counseled to undergo an IBx if opportunistic screening is required.
Novel methodologies for spectral classification of exon and intron sequences
NASA Astrophysics Data System (ADS)
Kwan, Hon Keung; Kwan, Benjamin Y. M.; Kwan, Jennifer Y. Y.
2012-12-01
Digital processing of a nucleotide sequence requires it to be mapped to a numerical sequence in which the choice of nucleotide to numeric mapping affects how well its biological properties can be preserved and reflected from nucleotide domain to numerical domain. Digital spectral analysis of nucleotide sequences unfolds a period-3 power spectral value which is more prominent in an exon sequence as compared to that of an intron sequence. The success of a period-3 based exon and intron classification depends on the choice of a threshold value. The main purposes of this article are to introduce novel codes for 1-sequence numerical representations for spectral analysis and compare them to existing codes to determine appropriate representation, and to introduce novel thresholding methods for more accurate period-3 based exon and intron classification of an unknown sequence. The main findings of this study are summarized as follows: Among sixteen 1-sequence numerical representations, the K-Quaternary Code I offers an attractive performance. A windowed 1-sequence numerical representation (with window length of 9, 15, and 24 bases) offers a possible speed gain over non-windowed 4-sequence Voss representation which increases as sequence length increases. A winner threshold value (chosen from the best among two defined threshold values and one other threshold value) offers a top precision for classifying an unknown sequence of specified fixed lengths. An interpolated winner threshold value applicable to an unknown and arbitrary length sequence can be estimated from the winner threshold values of fixed length sequences with a comparable performance. In general, precision increases as sequence length increases. The study contributes an effective spectral analysis of nucleotide sequences to better reveal embedded properties, and has potential applications in improved genome annotation.
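To make the period-3 screening idea concrete, the sketch below uses the classic 4-sequence Voss (binary indicator) representation: the power spectra of the indicator sequences are summed and the value at frequency N/3 is compared with the average spectral power, then thresholded. The test sequence and the decision threshold are illustrative; the paper's proposed 1-sequence codes and thresholding methods are not reproduced here.

```python
# Minimal sketch of period-3 based exon/intron screening with the Voss representation.
import numpy as np

def period3_snr(seq):
    seq = seq.upper()
    n = len(seq)
    total_power = np.zeros(n)
    for base in "ACGT":
        indicator = np.array([1.0 if c == base else 0.0 for c in seq])
        total_power += np.abs(np.fft.fft(indicator)) ** 2
    k = n // 3                               # period-3 frequency bin
    return total_power[k] / total_power[1:].mean()   # peak relative to mean (DC excluded)

if __name__ == "__main__":
    test_seq = "ATG" * 120                   # artificial, strongly period-3 sequence
    snr = period3_snr(test_seq)
    threshold = 4.0                          # illustrative threshold value
    print(f"P(N/3)/mean = {snr:.2f} ->", "exon-like" if snr > threshold else "intron-like")
```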
Austel, Michaela; Hensel, Patrick; Jackson, Dawn; Vidyashankar, Anand; Zhao, Ying; Medleau, Linda
2006-06-01
The purpose of this study was to determine the optimal histamine concentration and 'irritant' allergen threshold concentrations in intradermal testing (IDT) in normal cats. Thirty healthy cats were tested with three different histamine concentrations and four different concentrations of each allergen. The optimal histamine concentration was determined to be 1: 50,000 w/v (0.05 mg mL(-1)). Using this histamine concentration, the 'irritant' threshold concentration for most allergens was above the highest concentrations tested (4,000 PNU mL(-1) for 41 allergens and 700 PNU mL(-1) for human dander). The 'irritant' threshold concentration for flea antigen was determined to be 1:750 w/v. More than 10% of the tested cats showed positive reactions to Dermatophagoides farinae, Dermatophagoides pteronyssinus, housefly, mosquito and moth at every allergen concentration, which suggests that the 'irritant' threshold concentration for these allergens is below 1,000 PNU mL(-1), the lowest allergen concentration tested. Our results confirm previous studies in indicating that allergen and histamine concentrations used in feline IDT may need to be revised.
Olfactory Threshold of Chlorine in Oxygen.
1977-09-01
The odor threshold of chlorine in oxygen was determined. Measurements were conducted in an altitude chamber, which provided an odor-free and noise-free background. Human male volunteers, with no previous olfactory acuity testing experience, served as panelists. Threshold values were affected by time intervals between trials and by age differences. The mean threshold value for 11 subjects was 0.08 ppm, obtained by positive responses to the lowest detectable level of chlorine in oxygen 50% of the time. (Author)
Peroni, M; Golland, P; Sharp, G C; Baroni, G
2016-02-01
A crucial issue in deformable image registration is achieving a robust registration algorithm at a reasonable computational cost. Given the iterative nature of the optimization procedure, an algorithm must automatically detect convergence and stop the iterative process when most appropriate. This paper ranks the performance of three stopping criteria and six stopping value computation strategies for a Log-Domain Demons deformable registration method, simulating both a coarse and a fine registration. The analyzed stopping criteria are: (a) velocity field update magnitude, (b) mean squared error, and (c) harmonic energy. Each stopping condition is formulated so that the user defines a threshold ∊, which quantifies the residual error that is acceptable for the particular problem and calculation strategy. In this work, we did not aim at assigning a value to ∊, but at giving insight into how to evaluate and set the threshold for a given exit strategy in a very popular registration scheme. Experiments on phantom and patient data demonstrate that comparing the optimization metric minimum over the most recent three iterations with the minimum over the fourth to sixth most recent iterations can be an appropriate algorithm stopping strategy. The harmonic energy was found to provide the best trade-off between robustness and speed of convergence for the analyzed registration method at coarse registration, but was outperformed by the mean squared error when all the original pixel information is used. This suggests the need to develop mathematically sound new convergence criteria in which both image and vector field information could be used to detect the actual convergence, which could be especially useful when considering multi-resolution registrations. Further work should also be dedicated to studying the performance of the same strategies in other deformable registration methods and anatomical regions. © The Author(s) 2014.
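The stopping strategy described above can be sketched as follows: compare the minimum of the optimization metric over the three most recent iterations with the minimum over the fourth-to-sixth most recent, and stop once the improvement falls below a user-chosen threshold. The relative-improvement form of the test and the synthetic metric history are my own illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch of the "min over last 3 vs min over 4th-6th most recent" stopping test.
def should_stop(metric_history, eps):
    if len(metric_history) < 6:
        return False
    recent = min(metric_history[-3:])       # three most recent iterations
    earlier = min(metric_history[-6:-3])    # fourth to sixth most recent iterations
    return (earlier - recent) <= eps * abs(earlier)

if __name__ == "__main__":
    history = [100.0, 60.0, 45.0, 41.0, 40.95, 40.93, 40.92]   # synthetic metric values
    print(should_stop(history, eps=1e-2))   # True: recent improvement is negligible
```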
Finite mixture modeling for vehicle crash data with application to hotspot identification.
Park, Byung-Jung; Lord, Dominique; Lee, Chungwon
2014-10-01
The application of finite mixture regression models has recently gained an interest from highway safety researchers because of its considerable potential for addressing unobserved heterogeneity. Finite mixture models assume that the observations of a sample arise from two or more unobserved components with unknown proportions. Both fixed and varying weight parameter models have been shown to be useful for explaining the heterogeneity and the nature of the dispersion in crash data. Given the superior performance of the finite mixture model, this study, using observed and simulated data, investigated the relative performance of the finite mixture model and the traditional negative binomial (NB) model in terms of hotspot identification. For the observed data, rural multilane segment crash data for divided highways in California and Texas were used. The results showed that the difference measured by the percentage deviation in ranking orders was relatively small for this dataset. Nevertheless, the ranking results from the finite mixture model were considered more reliable than the NB model because of the better model specification. This finding was also supported by the simulation study which produced a high number of false positives and negatives when a mis-specified model was used for hotspot identification. Regarding an optimal threshold value for identifying hotspots, another simulation analysis indicated that there is a discrepancy between false discovery (increasing) and false negative rates (decreasing). Since the costs associated with false positives and false negatives are different, it is suggested that the selected optimal threshold value should be decided by considering the trade-offs between these two costs so that unnecessary expenses are minimized. Copyright © 2014 Elsevier Ltd. All rights reserved.
Boyes, Allison; D'Este, Catherine; Carey, Mariko; Lecathelinais, Christophe; Girgis, Afaf
2013-01-01
Use of the Distress Thermometer (DT) as a screening tool is increasing across the cancer trajectory. This study examined the accuracy and optimal cut-off score of the DT compared to the Hospital Anxiety and Depression Scale (HADS) for detecting possible cases of psychological morbidity among adults in early survivorship. This study is a cross-sectional survey of 1,323 adult cancer survivors recruited from two state-based cancer registries in Australia. Participants completed the DT and the HADS at 6 months post-diagnosis. Compared to the HADS subscale threshold ≥8, the DT performed well in discriminating between cases and non-cases of anxiety, depression and comorbid anxiety-depression with an area under the curve of 0.85, 0.84 and 0.87, respectively. A DT cut-off score of ≥2 was best for clinical use (sensitivity, 87-95 %; specificity, 60-68 %), ≥4 was best for research use (sensitivity, 67-82 %; specificity, 81-88 %) and ≥3 was the best balance between sensitivity (77-88 %) and specificity (72-79 %) for detecting cases of anxiety, depression and comorbid anxiety-depression. The DT demonstrated a high level of precision in identifying non-cases of psychological morbidity at all possible thresholds (negative predictive value, 77-99 %). The recommended DT cut-off score of ≥4 was not supported for universal use among recent cancer survivors. The optimal DT threshold depends upon whether the tool is being used in the clinical or research setting. The DT may best serve to initially identify non-cases as part of a two-stage screening process. The performance of the DT against 'gold standard' clinical interview should be evaluated with cancer survivors.
Diagnostic accuracy of FEV1/forced vital capacity ratio z scores in asthmatic patients.
Lambert, Allison; Drummond, M Bradley; Wei, Christine; Irvin, Charles; Kaminsky, David; McCormack, Meredith; Wise, Robert
2015-09-01
The FEV1/forced vital capacity (FVC) ratio is used as a criterion for airflow obstruction; however, the test characteristics of spirometry in the diagnosis of asthma are not well established. The accuracy of a test depends on the pretest probability of disease. We wanted to estimate the FEV1/FVC ratio z score threshold with optimal accuracy for the diagnosis of asthma for different pretest probabilities. Asthmatic patients enrolled in 4 trials from the Asthma Clinical Research Centers were included in this analysis. Measured and predicted FEV1/FVC ratios were obtained, with calculation of z scores for each participant. Across a range of asthma prevalences and z score thresholds, the overall diagnostic accuracy was calculated. One thousand six hundred eight participants were included (mean age, 39 years; 71% female; 61% white). The mean FEV1 percent predicted value was 83% (SD, 15%). In a symptomatic population with 50% pretest probability of asthma, optimal accuracy (68%) is achieved with a z score threshold of -1.0 (16th percentile), corresponding to a 6 percentage point reduction from the predicted ratio. However, in a screening population with a 5% pretest probability of asthma, the optimum z score is -2.0 (second percentile), corresponding to a 12 percentage point reduction from the predicted ratio. These findings were not altered by markers of disease control. Reduction of the FEV1/FVC ratio can support the diagnosis of asthma; however, the ratio is neither sensitive nor specific enough for diagnostic accuracy. When interpreting spirometric results, the pretest probability is an important consideration in the diagnosis of asthma based on airflow limitation. Copyright © 2015 American Academy of Allergy, Asthma & Immunology. Published by Elsevier Inc. All rights reserved.
Chen, Hung-Yuan; Chiu, Yen-Ling; Hsu, Shih-Ping; Pai, Mei-Fen; Yang, Ju-Yeh; Lai, Chun-Fu; Lu, Hui-Min; Huang, Shu-Chen; Yang, Shao-Yu; Wen, Su-Yin; Chiu, Hsien-Ching; Hu, Fu-Chang; Peng, Yu-Sen; Jee, Shiou-Hwa
2013-01-01
Background Uremic pruritus is a common and intractable symptom in patients on chronic hemodialysis, but factors associated with the severity of pruritus remain unclear. This study aimed to explore the associations of metabolic factors and dialysis adequacy with the aggravation of pruritus. Methods We conducted a 5-year prospective cohort study on patients with maintenance hemodialysis. A visual analogue scale (VAS) was used to assess the intensity of pruritus. Patient demographic and clinical characteristics, laboratory parameters, dialysis adequacy (assessed by Kt/V), and pruritus intensity were recorded at baseline and follow-up. Change score analysis of the difference score of VAS between baseline and follow-up was performed using multiple linear regression models. The optimal threshold of Kt/V, which is associated with the aggravation of uremic pruritus, was determined by generalized additive models and receiver operating characteristic analysis. Results A total of 111 patients completed the study. Linear regression analysis showed that lower Kt/V and use of low-flux dialyzer were significantly associated with the aggravation of pruritus after adjusting for the baseline pruritus intensity and a variety of confounding factors. The optimal threshold value of Kt/V for pruritus was 1.5 suggested by both generalized additive models and receiver operating characteristic analysis. Conclusions Hemodialysis with the target of Kt/V ≥1.5 and use of high-flux dialyzer may reduce the intensity of pruritus in patients on chronic hemodialysis. Further clinical trials are required to determine the optimal dialysis dose and regimen for uremic pruritus. PMID:23940749
Dynamic remapping decisions in multi-phase parallel computations
NASA Technical Reports Server (NTRS)
Nicol, D. M.; Reynolds, P. F., Jr.
1986-01-01
The effectiveness of any given mapping of workload to processors in a parallel system is dependent on the stochastic behavior of the workload. Program behavior is often characterized by a sequence of phases, with phase changes occurring unpredictably. During a phase, the behavior is fairly stable, but may become quite different during the next phase. Thus a workload assignment generated for one phase may hinder performance during the next phase. We consider the problem of deciding whether to remap a parallel computation in the face of uncertainty in remapping's utility. Fundamentally, it is necessary to balance the expected remapping performance gain against the delay cost of remapping. This paper treats this problem formally by constructing a probabilistic model of a computation with at most two phases. We use stochastic dynamic programming to show that the remapping decision policy which minimizes the expected running time of the computation has an extremely simple structure: the optimal decision at any step is made by comparing the probability of remapping gain against a threshold. This theoretical result stresses the importance of detecting a phase change, and assessing the possibility of gain from remapping. We also empirically study the sensitivity of optimal performance to an imprecise decision threshold. Under a wide range of model parameter values, we find nearly optimal performance if remapping is chosen simply when the gain probability is high. These results strongly suggest that except in extreme cases, the remapping decision problem is essentially that of dynamically determining whether gain can be achieved by remapping after a phase change; precise quantification of the decision model parameters is not necessary.
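The threshold-type policy described above can be illustrated with a simplified break-even rule: remap only when the probability of gain exceeds the ratio of remapping delay to expected gain. This is my own simplification for illustration, not the paper's stochastic dynamic program, and the numbers are hypothetical.

```python
# Minimal sketch of a threshold-type remapping decision rule.
def decision_threshold(expected_gain, remap_delay):
    """Break-even probability: expected benefit must at least cover the delay cost."""
    return min(1.0, remap_delay / expected_gain) if expected_gain > 0 else 1.0

def should_remap(p_gain, expected_gain, remap_delay):
    return p_gain > decision_threshold(expected_gain, remap_delay)

if __name__ == "__main__":
    # Remapping costs 5 time units but would save about 20 units if the phase really changed.
    print(should_remap(p_gain=0.4, expected_gain=20.0, remap_delay=5.0))   # True
    print(should_remap(p_gain=0.2, expected_gain=20.0, remap_delay=5.0))   # False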
NASA Astrophysics Data System (ADS)
Smetanin, S. N.; Jelínek, M., Jr.; Kubeček, V.; Jelínková, H.
2015-09-01
Optimal conditions of low-threshold collinear parametric Raman comb generation in calcite (CaCO3) are experimentally investigated under 20 ps laser pulse excitation, in agreement with the theoretical study. The collinear parametric Raman generation of the highest number of Raman components in the short calcite crystals corresponding to the optimal condition of Stokes-anti-Stokes coupling was achieved. At the excitation wavelength of 1064 nm, using the optimum-length crystal resulted in the effective multi-octave frequency Raman comb generation containing up to five anti-Stokes and more than four Stokes components (from 674 nm to 1978 nm). The 532 nm pumping resulted in the frequency Raman comb generation from the 477 nm 2nd anti-Stokes up to the 692 nm 4th Stokes component. Using the crystal with a non-optimal length leads to the Stokes components generation only with higher thresholds because of the cascade-like stimulated Raman scattering with suppressed parametric coupling.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, F; Shandong Cancer Hospital and Insititute, Jinan, Shandong; Bowsher, J
2014-06-01
Purpose: PET imaging with F18-FDG is utilized for treatment planning, treatment assessment, and prognosis. A region of interest (ROI) encompassing the tumor may be determined on the PET image, often by a threshold T on the PET standard uptake values (SUVs). Several studies have shown prognostic value for relevant ROI properties including maximum SUV value (SUVmax), metabolic tumor volume (MTV), and total glycolytic activity (TGA). The choice of threshold T may affect mean SUV value (SUVmean), MTV, and TGA. Recently spatial resolution modeling (SRM) has been introduced on many PET systems. SRM may also affect these ROI properties. The purpose of this work is to investigate the relative influence of SRM and threshold choice T on SUVmean, MTV, TGA, and SUVmax. Methods: For 9 anal cancer patients, 18F-FDG PET scans were performed prior to treatment. PET images were reconstructed by 2 iterations of Ordered Subsets Expectation Maximization (OSEM), with and without SRM. ROI contours were generated by 5 different SUV threshold values T: 2.5, 3.0, 30%, 40%, and 50% of SUVmax. Paired-samples t tests were used to compare SUVmean, MTV, and TGA (a) for SRM on versus off and (b) between each pair of threshold values T. SUVmax was also compared for SRM on versus off. Results: For almost all (57/60) comparisons of 2 different threshold values, SUVmean, MTV, and TGA showed statistically significant variation. For comparison of SRM on versus off, there were no statistically significant changes in SUVmax and TGA, but there were statistically significant changes in MTV for T=2.5 and T=3.0 and in SUVmean for all T. Conclusion: The near-universal statistical significance of threshold choice T suggests that, regarding harmonization across sites, threshold choice may be a greater concern than choice of SRM. However, broader study is warranted, e.g. other iterations of OSEM should be considered.
Dynamical decoupling of local transverse random telegraph noise in a two-qubit gate
NASA Astrophysics Data System (ADS)
D'Arrigo, A.; Falci, G.; Paladino, E.
2015-10-01
Achieving high-fidelity universal two-qubit gates is a central requisite of any implementation of quantum information processing. The presence of spurious fluctuators of various physical origin represents a limiting factor for superconducting nanodevices. Operating qubits at optimal points, where the qubit-fluctuator interaction is transverse with respect to the single qubit Hamiltonian, considerably improved single qubit gates. Further enhancement has been achieved by dynamical decoupling (DD). In this article we investigate DD of transverse random telegraph noise acting locally on each of the qubits forming an entangling gate. Our analysis is based on the exact numerical solution of the stochastic Schrödinger equation. We evaluate the gate error under local periodic, Carr-Purcell and Uhrig DD sequences. We find that a threshold value of the number, n, of pulses exists above which the gate error decreases with a sequence-specific power-law dependence on n. Below threshold, DD may even increase the error with respect to the unconditioned evolution, a behaviour reminiscent of the anti-Zeno effect.
A mathematical study of a model for childhood diseases with non-permanent immunity
NASA Astrophysics Data System (ADS)
Moghadas, S. M.; Gumel, A. B.
2003-08-01
Protecting children from diseases that can be prevented by vaccination is a primary goal of health administrators. Since vaccination is considered to be the most effective strategy against childhood diseases, the development of a framework that would predict the optimal vaccine coverage level needed to prevent the spread of these diseases is crucial. This paper provides this framework via qualitative and quantitative analysis of a deterministic mathematical model for the transmission dynamics of a childhood disease in the presence of a preventive vaccine that may wane over time. Using global stability analysis of the model, based on constructing a Lyapunov function, it is shown that the disease can be eradicated from the population if the vaccination coverage level exceeds a certain threshold value. It is also shown that the disease will persist within the population if the coverage level is below this threshold. These results are verified numerically by constructing, and then simulating, a robust semi-explicit second-order finite-difference method.
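For orientation, the standard critical-coverage condition for a simple SIR-type model with an imperfect vaccine is shown below. This is a textbook result offered for context only; the threshold derived in the paper additionally accounts for waning immunity and therefore differs in detail.

```latex
% Standard critical-coverage condition (illustrative): vaccine efficacy \varepsilon,
% basic reproduction number R_0, coverage p.
\[
  p_c \;=\; \frac{1}{\varepsilon}\left(1 - \frac{1}{R_0}\right),
  \qquad \text{eradication requires } p \ge p_c .
\]
```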
Characterizing air quality data from complex network perspective.
Fan, Xinghua; Wang, Li; Xu, Huihui; Li, Shasha; Tian, Lixin
2016-02-01
Air quality depends mainly on changes in emission of pollutants and their precursors. Understanding its characteristics is the key to predicting and controlling air quality. In this study, complex networks were built to analyze topological characteristics of air quality data by the correlation coefficient method. First, PM2.5 (particulate matter with aerodynamic diameter less than 2.5 μm) indexes from eight monitoring sites in Beijing were selected as samples from January 2013 to December 2014. Second, the C-C method was applied to determine the structure of the phase space. Points in the reconstructed phase space were taken as nodes of the mapped network. Then, edges were determined by connecting nodes whose correlation was greater than a critical threshold. Three properties of the constructed networks, degree distribution, clustering coefficient, and modularity, were used to determine the optimal value of the critical threshold. Finally, by analyzing and comparing topological properties, we point out that similarities and differences among the constructed complex networks reveal influencing factors and their different roles in the real air quality system.
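The mapping from a time series to a network can be sketched as follows: delay-embed the series into phase-space points (nodes) and connect two nodes when the Pearson correlation between their embedding vectors exceeds a critical threshold. The embedding parameters and the threshold below are illustrative; in the paper they are chosen with the C-C method and by comparing degree distribution, clustering and modularity across thresholds.

```python
# Minimal sketch of building a correlation-threshold network from a time series.
import numpy as np
import networkx as nx

def embed(series, dim=4, tau=2):
    n = len(series) - (dim - 1) * tau
    return np.array([series[i:i + (dim - 1) * tau + 1:tau] for i in range(n)])

def build_network(series, r_c=0.95, dim=4, tau=2):
    vectors = embed(np.asarray(series, dtype=float), dim, tau)
    corr = np.corrcoef(vectors)                     # node-by-node correlation matrix
    g = nx.Graph()
    g.add_nodes_from(range(len(vectors)))
    rows, cols = np.where(np.triu(corr, k=1) > r_c) # edges above the critical threshold
    g.add_edges_from(zip(rows.tolist(), cols.tolist()))
    return g

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    pm25 = np.cumsum(rng.normal(size=400))          # synthetic stand-in for a PM2.5 series
    g = build_network(pm25, r_c=0.98)
    print(g.number_of_nodes(), g.number_of_edges(), nx.average_clustering(g))
```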
Use of an Atrial Lead with Very Short Tip-To-Ring Spacing Avoids Oversensing of Far-Field R-Wave
Kolb, Christof; Nölker, Georg; Lennerz, Carsten; Jetter, Hansmartin; Semmler, Verena; Pürner, Klaus; Gutleben, Klaus-Jürgen; Reents, Tilko; Lang, Klaus; Lotze, Ulrich
2012-01-01
Objective The AVOID-FFS (Avoidance of Far-Field R-wave Sensing) study aimed to investigate whether an atrial lead with a very short tip-to-ring spacing without optimization of pacemaker settings shows equally low incidence of far-field R-wave sensing (FFS) when compared to a conventional atrial lead in combination with optimization of the programming. Methods Patients receiving a dual chamber pacemaker were randomly assigned to receive an atrial lead with a tip-to-ring spacing of 1.1 mm or a lead with a conventional tip-to-ring spacing of 10 mm. Postventricular atrial blanking (PVAB) was programmed to the shortest possible value of 60 ms in the study group, and to an individually determined optimized value in the control group. Atrial sensing threshold was programmed to 0.3 mV in both groups. False positive mode switch caused by FFS was evaluated at one and three months post implantation. Results A total of 204 patients (121 male; age 73±10 years) were included in the study. False positive mode switch caused by FFS was detected in one (1%) patient of the study group and two (2%) patients of the control group (p = 0.62). Conclusion The use of an atrial electrode with a very short tip-to-ring spacing avoids inappropriate mode switch caused by FFS without the need for individual PVAB optimization. Trial Registration ClinicalTrials.gov NCT00512915 PMID:22745661
Zinn, Caryn; Rush, Amy; Johnson, Rebecca
2018-01-01
Objective The low-carbohydrate, high-fat (LCHF) diet is becoming increasingly employed in clinical dietetic practice as a means to manage many health-related conditions. Yet, it continues to remain contentious in nutrition circles due to a belief that the diet is devoid of nutrients and concern around its saturated fat content. This work aimed to assess the micronutrient intake of the LCHF diet under two conditions of saturated fat thresholds. Design In this descriptive study, two LCHF meal plans were designed for two hypothetical cases representing the average Australian male and female weight-stable adult. National documented heights, a body mass index of 22.5 to establish weight and a 1.6 activity factor were used to estimate total energy intake using the Schofield equation. Carbohydrate was limited to <130 g, protein was set at 15%–25% of total energy and fat supplied the remaining calories. One version of the diet aligned with the national saturated fat guideline threshold of <10% of total energy and the other included saturated fat ad libitum. Primary outcomes The primary outcomes included all micronutrients, which were assessed using FoodWorks dietary analysis software against national Australian/New Zealand nutrient reference value (NRV) thresholds. Results All of the meal plans exceeded the minimum NRV thresholds, apart from iron in the female meal plans, which achieved 86%–98% of the threshold. Saturated fat intake was logistically unable to be reduced below the 10% threshold for the male plan but exceeded the threshold by 2 g (0.6%). Conclusion Despite macronutrient proportions not aligning with current national dietary guidelines, a well-planned LCHF meal plan can be considered micronutrient replete. This is an important finding for health professionals, consumers and critics of LCHF nutrition, as it dispels the myth that these diets are suboptimal in their micronutrient supply. As with any diet, for optimal nutrient achievement, meals need to be well formulated. PMID:29439004
Perez, Claudio A; Cohn, Theodore E; Medina, Leonel E; Donoso, José R
2007-08-31
Stochastic resonance (SR) is the counterintuitive phenomenon in which noise enhances detection of sub-threshold stimuli. The SR psychophysical threshold theory establishes that the required amplitude to exceed the sensory threshold barrier can be reached by adding noise to a sub-threshold stimulus. The aim of this study was to test the SR theory by comparing detection results from two different randomly-presented stimulus conditions. In the first condition, optimal noise was present during the whole attention interval; in the second, the optimal noise was restricted to the same time interval as the stimulus. SR threshold theory predicts no difference between the two conditions because noise helps the sub-threshold stimulus to reach threshold in both cases. The psychophysical experimental method used a 300 ms rectangular force pulse as a stimulus within an attention interval of 1.5 s, applied to the index finger of six human subjects in the two distinct conditions. For all subjects we show that in the condition in which the noise was present only when synchronized with the stimulus, detection was better (p<0.05) than in the condition in which the noise was delivered throughout the attention interval. These results provide the first direct evidence that SR threshold theory is incomplete and that a new phenomenon has been identified, which we call Coincidence-Enhanced Stochastic Resonance (CESR). We propose that CESR might occur because subject uncertainty is reduced when noise points at the same temporal window as the stimulus.
Thresholds for the cost-effectiveness of interventions: alternative approaches.
Marseille, Elliot; Larson, Bruce; Kazi, Dhruv S; Kahn, James G; Rosen, Sydney
2015-02-01
Many countries use the cost-effectiveness thresholds recommended by the World Health Organization's Choosing Interventions that are Cost-Effective project (WHO-CHOICE) when evaluating health interventions. This project sets the threshold for cost-effectiveness as the cost of the intervention per disability-adjusted life-year (DALY) averted less than three times the country's annual gross domestic product (GDP) per capita. Highly cost-effective interventions are defined as meeting a threshold per DALY averted of once the annual GDP per capita. We argue that reliance on these thresholds reduces the value of cost-effectiveness analyses and makes such analyses too blunt to be useful for most decision-making in the field of public health. Use of these thresholds has little theoretical justification, skirts the difficult but necessary ranking of the relative values of locally-applicable interventions and omits any consideration of what is truly affordable. The WHO-CHOICE thresholds set such a low bar for cost-effectiveness that very few interventions with evidence of efficacy can be ruled out. The thresholds have little value in assessing the trade-offs that decision-makers must confront. We present alternative approaches for applying cost-effectiveness criteria to choices in the allocation of health-care resources.
Defining ADHD symptom persistence in adulthood: optimizing sensitivity and specificity.
Sibley, Margaret H; Swanson, James M; Arnold, L Eugene; Hechtman, Lily T; Owens, Elizabeth B; Stehli, Annamarie; Abikoff, Howard; Hinshaw, Stephen P; Molina, Brooke S G; Mitchell, John T; Jensen, Peter S; Howard, Andrea L; Lakes, Kimberley D; Pelham, William E
2017-06-01
Longitudinal studies of children diagnosed with ADHD report widely ranging ADHD persistence rates in adulthood (5-75%). This study documents how information source (parent vs. self-report), method (rating scale vs. interview), and symptom threshold (DSM vs. norm-based) influence reported ADHD persistence rates in adulthood. Five hundred seventy-nine children were diagnosed with DSM-IV ADHD-Combined Type at baseline (ages 7.0-9.9 years), and 289 classmates served as a local normative comparison group (LNCG); 476 and 241 of these groups, respectively, were evaluated in adulthood (mean age = 24.7 years). Parent and self-reports of symptoms and impairment on rating scales and structured interviews were used to investigate ADHD persistence in adulthood. Persistence rates were higher when using parent rather than self-reports, structured interviews rather than rating scales (for self-report but not parent report), and a norm-based (NB) threshold of 4 symptoms rather than DSM criteria. Receiver-Operating Characteristics (ROC) analyses revealed that sensitivity and specificity were optimized by combining parent and self-reports on a rating scale and applying a NB threshold. The interview format optimizes young adult self-reporting when parent reports are not available. However, the combination of parent and self-reports from rating scales, using an 'or' rule and a NB threshold, optimized the balance between sensitivity and specificity. With this definition, 60% of the ADHD group demonstrated symptom persistence and 41% met both symptom and impairment criteria in adulthood. © 2016 Association for Child and Adolescent Mental Health.
Reaction Q-Values and Thresholds. This tool computes reaction Q-values and thresholds using uncertainties and correlations over 30 energy ranges. Simple tables of reaction uncertainties are also provided.
Thresholds of Extinction: Simulation Strategies in Environmental Values Education.
ERIC Educational Resources Information Center
Glew, Frank
1990-01-01
Describes a simulation exercise for campers and an accompanying curriculum unit--"Thresholds of Extinction"--that addresses the issues of endangered species. Uses this context to illustrate steps in the process of values development: awareness, gathering data, resolution (decision making), responsibility (acting on values), and…
NASA Astrophysics Data System (ADS)
Salam, Afifah Salmi Abdul; Isa, Mohd. Nazrin Md.; Ahmad, Muhammad Imran; Che Ismail, Rizalafande
2017-11-01
This paper focuses on studying and identifying various threshold values for two commonly used edge detection techniques, Sobel and Canny edge detection. The idea is to determine which values give accurate results in identifying a particular leukemic cell. In addition, evaluating the suitability of edge detectors is essential, because feature extraction of the cell depends greatly on image segmentation (edge detection). First, an image of the M7 subtype of Acute Myelocytic Leukemia (AML) is chosen, because diagnosis of this subtype has been found lacking. Next, to enhance image quality, noise filters are applied, and useful information is acquired by comparing images with no filter, a median filter and an average filter. Each threshold value is fixed at 0, 0.25 and 0.5. The investigation found that, without any filter, Canny with a threshold value of 0.5 yields the best result.
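The comparison above can be sketched with scikit-image, thresholding a Sobel gradient-magnitude map and running Canny with low/high hysteresis thresholds. The library, sample image and exact threshold semantics are my own assumptions for illustration; the authors' toolbox and cell images are not reproduced here.

```python
# Minimal sketch comparing Sobel and Canny edge maps under different threshold values.
import numpy as np
from skimage import data, filters, feature, util

image = util.img_as_float(data.camera())        # stand-in for a leukemic-cell image

# Sobel: threshold the gradient-magnitude map at a fraction of its maximum.
sobel_mag = filters.sobel(image)
sobel_edges = sobel_mag > 0.25 * sobel_mag.max()

# Canny: low/high hysteresis thresholds on the float image in [0, 1].
canny_edges = feature.canny(image, sigma=1.0, low_threshold=0.25, high_threshold=0.5)

print("Sobel edge pixels:", int(sobel_edges.sum()))
print("Canny edge pixels:", int(canny_edges.sum()))
```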
A framework for optimizing phytosanitary thresholds in seed systems
USDA-ARS?s Scientific Manuscript database
Seedborne pathogens and pests limit production in many agricultural systems. Quarantine programs help prevent the introduction of exotic pathogens into a country, but few regulations directly apply to reducing the reintroduction and spread of endemic pathogens. Use of phytosanitary thresholds helps ...
The stability of color discrimination threshold determined using pseudoisochromatic test plates
NASA Astrophysics Data System (ADS)
Zutere, B.; Jurasevska Luse, K.; Livzane, A.
2014-09-01
Congenital red-green color vision deficiency is one of the most common genetic disorders. A previously printed set of pseudoisochromatic plates (KAMS test, 2012) was created for individual discrimination threshold determination in cases of mild congenital red-green color vision deficiency using neutral colors (colors confused with gray). The diagnostics of color blind subjects was performed with the Richmond HRR (4th edition, 2002) test and the Oculus HMC anomaloscope, and the examination was then continued using the KAMS test. Four male subjects aged 20 to 24 years participated in the study; all of them were diagnosed with deuteranomalia. Due to the design of the plates, the threshold of every subject in each trial was defined as the plate total color difference value ΔE at which the stimulus was detected 75% of the time, so the just-noticeable difference (jnd) was calculated in CIE LAB DeltaE (ΔE) units. The authors performed repeated discrimination threshold measurements (5 times) for all four subjects under controlled illumination conditions. Psychophysical data were taken by sampling an observer's performance on a psychophysical task at a number of different stimulus saturation levels. Results show that a total color difference value ΔE threshold exists for each individual tested with the KAMS pseudoisochromatic plates; this threshold value does not change significantly across multiple measurements. Deuteranomal threshold values acquired using greenish plates of the KAMS test are significantly higher than thresholds acquired using reddish plates. A strong positive correlation exists between the anomaloscope matching range (MR) and deuteranomal thresholds acquired by the KAMS test (R=0.94) and between the error score in the Richmond HRR test and thresholds acquired by the KAMS test (R=0.81).
Castelli, Joël; Depeursinge, Adrien; de Bari, Berardino; Devillers, Anne; de Crevoisier, Renaud; Bourhis, Jean; Prior, John O
2017-06-01
In the context of oropharyngeal cancer treated with definitive radiotherapy, the aim of this retrospective study was to identify the best threshold value to compute metabolic tumor volume (MTV) and/or total lesion glycolysis to predict local-regional control (LRC) and disease-free survival. One hundred twenty patients with a locally advanced oropharyngeal cancer from 2 different institutions treated with definitive radiotherapy underwent FDG PET/CT before treatment. Various MTVs and total lesion glycolysis were defined based on 2 segmentation methods: (i) an absolute threshold of SUV (0-20 g/mL) or (ii) a relative threshold for SUVmax (0%-100%). The parameters' predictive capabilities for disease-free survival and LRC were assessed using the Harrell C-index and Cox regression model. Relative thresholds between 40% and 68% and absolute threshold between 5.5 and 7 had a similar predictive value for LRC (C-index = 0.65 and 0.64, respectively). Metabolic tumor volume had a higher predictive value than gross tumor volume (C-index = 0.61) and SUVmax (C-index = 0.54). Metabolic tumor volume computed with a relative threshold of 51% of SUVmax was the best predictor of disease-free survival (hazard ratio, 1.23 [per 10 mL], P = 0.009) and LRC (hazard ratio: 1.22 [per 10 mL], P = 0.02). The use of different thresholds within a reasonable range (between 5.5 and 7 for an absolute threshold and between 40% and 68% for a relative threshold) seems to have no major impact on the predictive value of MTV. This parameter may be used to identify patient with a high risk of recurrence and who may benefit from treatment intensification.
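The two segmentation variants compared above (absolute SUV cut-off versus fraction of SUVmax) can be sketched as follows for computing metabolic tumor volume (MTV) and total lesion glycolysis (TLG = SUVmean × MTV) within a tumor region of interest. The SUV values and voxel size are synthetic; this is not the study's segmentation pipeline.

```python
# Minimal sketch of MTV/TLG computation under absolute and relative SUV thresholds.
import numpy as np

def mtv_tlg(suv_roi, voxel_volume_ml, absolute=None, relative=None):
    """Return (MTV in mL, TLG) for voxels above the chosen threshold."""
    if absolute is not None:
        mask = suv_roi >= absolute
    else:
        mask = suv_roi >= relative * suv_roi.max()
    mtv = mask.sum() * voxel_volume_ml
    tlg = suv_roi[mask].mean() * mtv if mask.any() else 0.0   # SUVmean x MTV
    return mtv, tlg

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    suv = rng.gamma(shape=2.0, scale=3.0, size=(20, 20, 10))  # synthetic tumor SUVs
    vox = 0.4 * 0.4 * 0.4                                     # 4 mm isotropic voxels -> mL
    print("absolute 6.0:", mtv_tlg(suv, vox, absolute=6.0))
    print("relative 51%:", mtv_tlg(suv, vox, relative=0.51))
```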
Kaur, Taranjit; Saini, Barjinder Singh; Gupta, Savita
2018-03-01
In the present paper, a hybrid multilevel thresholding technique that combines intuitionistic fuzzy sets and tsallis entropy has been proposed for the automatic delineation of the tumor from magnetic resonance images having vague boundaries and poor contrast. This novel technique takes into account both the image histogram and the uncertainty information for the computation of multiple thresholds. The benefit of the methodology is that it provides fast and improved segmentation for the complex tumorous images with imprecise gray levels. To further boost the computational speed, the mutation based particle swarm optimization is used that selects the most optimal threshold combination. The accuracy of the proposed segmentation approach has been validated on simulated, real low-grade glioma tumor volumes taken from MICCAI brain tumor segmentation (BRATS) challenge 2012 dataset and the clinical tumor images, so as to corroborate its generality and novelty. The designed technique achieves an average Dice overlap equal to 0.82010, 0.78610 and 0.94170 for three datasets. Further, a comparative analysis has also been made between the eight existing multilevel thresholding implementations so as to show the superiority of the designed technique. In comparison, the results indicate a mean improvement in Dice by an amount equal to 4.00% (p < 0.005), 9.60% (p < 0.005) and 3.58% (p < 0.005), respectively in contrast to the fuzzy tsallis approach.
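For readers unfamiliar with Tsallis-entropy thresholding, the sketch below shows the bi-level form of the criterion: the threshold maximizes S_A(t) + S_B(t) + (1 − q)·S_A(t)·S_B(t), where S_A and S_B are the Tsallis entropies of the background and foreground class distributions. The paper combines a multilevel version of this criterion with intuitionistic fuzzy sets and a mutation-based PSO search; here a plain exhaustive search on a synthetic histogram is used instead.

```python
# Minimal sketch of bi-level Tsallis-entropy thresholding on an image histogram.
import numpy as np

def tsallis_entropy(p, q):
    p = p[p > 0]
    return (1.0 - np.sum(p ** q)) / (q - 1.0)

def best_tsallis_threshold(hist, q=0.8):
    p = hist / hist.sum()
    best_t, best_val = 0, -np.inf
    for t in range(1, len(p) - 1):
        pa, pb = p[:t].sum(), p[t:].sum()
        if pa == 0 or pb == 0:
            continue
        sa = tsallis_entropy(p[:t] / pa, q)     # background class entropy
        sb = tsallis_entropy(p[t:] / pb, q)     # foreground class entropy
        val = sa + sb + (1.0 - q) * sa * sb     # pseudo-additive combination
        if val > best_val:
            best_t, best_val = t, val
    return best_t

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    pixels = np.concatenate([rng.normal(60, 10, 5000), rng.normal(170, 15, 3000)])
    hist, _ = np.histogram(np.clip(pixels, 0, 255), bins=256, range=(0, 256))
    print("Tsallis threshold:", best_tsallis_threshold(hist))
```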
Shakir, Nabeel A; George, Arvin K; Siddiqui, M Minhaj; Rothwax, Jason T; Rais-Bahrami, Soroush; Stamatakis, Lambros; Su, Daniel; Okoro, Chinonyerem; Raskolnikov, Dima; Walton-Diaz, Annerleim; Simon, Richard; Turkbey, Baris; Choyke, Peter L; Merino, Maria J; Wood, Bradford J; Pinto, Peter A
2014-12-01
Prostate specific antigen sensitivity increases with lower threshold values but with a corresponding decrease in specificity. Magnetic resonance imaging/ultrasound targeted biopsy detects prostate cancer more efficiently and of higher grade than standard 12-core transrectal ultrasound biopsy but the optimal population for its use is not well defined. We evaluated the performance of magnetic resonance imaging/ultrasound targeted biopsy vs 12-core biopsy across a prostate specific antigen continuum. We reviewed the records of all patients enrolled in a prospective trial who underwent 12-core transrectal ultrasound and magnetic resonance imaging/ultrasound targeted biopsies from August 2007 through February 2014. Patients were stratified by each of 4 prostate specific antigen cutoffs. The greatest Gleason score using either biopsy method was compared in and across groups as well as across the population prostate specific antigen range. Clinically significant prostate cancer was defined as Gleason 7 (4 + 3) or greater. Univariate and multivariate analyses were performed. A total of 1,003 targeted and 12-core transrectal ultrasound biopsies were performed, of which 564 diagnosed prostate cancer for a 56.2% detection rate. Targeted biopsy led to significantly more upgrading to clinically significant disease compared to 12-core biopsy. This trend increased more with increasing prostate specific antigen, specifically in patients with prostate specific antigen 4 to 10 and greater than 10 ng/ml. Prostate specific antigen 5.2 ng/ml or greater captured 90% of upgrading by targeted biopsy, corresponding to 64% of patients who underwent multiparametric magnetic resonance imaging and subsequent fusion biopsy. Conversely a greater proportion of clinically insignificant disease was detected by 12-core vs targeted biopsy overall. These differences persisted when controlling for potential confounders on multivariate analysis. Prostate cancer upgrading with targeted biopsy increases with an increasing prostate specific antigen cutoff. Above a prostate specific antigen threshold of 5.2 ng/ml most upgrading to clinically significant disease was achieved by targeted biopsy. In our population this corresponded to potentially sparing biopsy in 36% of patients who underwent multiparametric magnetic resonance imaging. Below this value 12-core biopsy detected more clinically insignificant cancer. Thus, the diagnostic usefulness of targeted biopsy is optimized in patients with prostate specific antigen 5.2 ng/ml or greater. Copyright © 2014 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.
de Lemos Zingano, Bianca; Guarnieri, Ricardo; Diaz, Alexandre Paim; Schwarzbold, Marcelo Liborio; Bicalho, Maria Alice Horta; Claudino, Lucia Sukys; Markowitsch, Hans J; Wolf, Peter; Lin, Katia; Walz, Roger
2015-09-01
This study aimed to evaluate the diagnostic accuracy of the Hamilton Rating Scale for Depression (HRSD), the Beck Depression Inventory (BDI), the Hospital Anxiety and Depression Scale (HADS), and the Hospital Anxiety and Depression Scale-Depression subscale (HADS-D) as diagnostic tests for depressive disorder in drug-resistant mesial temporal lobe epilepsy with hippocampal sclerosis (MTLE-HS). One hundred three patients with drug-resistant MTLE-HS were enrolled. All patients underwent a neurological examination, interictal and ictal video-electroencephalogram (V-EEG) analyses, and magnetic resonance imaging (MRI). Psychiatric interviews were based on DSM-IV-TR criteria and ILAE Commission of Psychobiology classification as a gold standard; HRSD, BDI, HADS, and HADS-D were used as psychometric diagnostic tests, and receiver operating characteristic (ROC) curves were used to determine the optimal threshold scores. For all the scales, the areas under the curve (AUCs) were approximately 0.8, and they were able to identify depression in this sample. A threshold of ≥9 on the HRSD and a threshold of ≥8 on the HADS-D showed a sensitivity of 70% and specificity of 80%. A threshold of ≥19 on the BDI and HADS-D total showed a sensitivity of 55% and a specificity of approximately 90%. The instruments showed a negative predictive value of approximately 87% and a positive predictive value of approximately 65% for the BDI and HADS total and approximately 60% for the HRSD and HADS-D. HRSD≥9 and HADS-D≥8 had the best balance between sensitivity (approximately 70%) and specificity (approximately 80%). However, with these thresholds, these diagnostic tests do not appear useful in identifying depressive disorder in this population with epilepsy, and their specificity (approximately 80%) and PPV (approximately 55%) were lower than those of the other scales. We believe that the BDI and HADS total are valid diagnostic tests for depressive disorder in patients with MTLE-HS, as both scales showed acceptable (though not high) specificity and PPV for this type of study. Copyright © 2015 Elsevier Inc. All rights reserved.
Topology-optimized silicon photonic wire mode (de)multiplexer
NASA Astrophysics Data System (ADS)
Frellsen, Louise F.; Frandsen, Lars H.; Ding, Yunhong; Elesin, Yuriy; Sigmund, Ole; Yvind, Kresten
2015-02-01
We have designed and for the first time experimentally verified a topology optimized mode (de)multiplexer, which demultiplexes the fundamental and the first order mode of a double mode photonic wire to two separate single mode waveguides (and multiplexes vice versa). The device has a footprint of ~4.4 μm x ~2.8 μm and was fabricated for different design resolutions and design threshold values to verify the robustness of the structure to fabrication tolerances. The multiplexing functionality was confirmed by recording mode profiles using an infrared camera and vertical grating couplers. All structures were experimentally found to maintain functionality throughout a 100 nm wavelength range limited by available laser sources and insertion losses were generally lower than 1.3 dB. The cross talk was around -12 dB and the extinction ratio was measured to be better than 8 dB.
NASA Astrophysics Data System (ADS)
Hu, Xiaoqian; Tao, Jinxu; Ye, Zhongfu; Qiu, Bensheng; Xu, Jinzhang
2018-05-01
In order to solve the problem of medical image segmentation, a wavelet neural network medical image segmentation algorithm based on a combined maximum entropy criterion is proposed. First, a bee colony algorithm is used to optimize the network parameters of the wavelet neural network (the network structure, initial weights, threshold values, and so on), so that training converges quickly to high precision and avoids falling into local extrema; then the optimal number of iterations is obtained by calculating the maximum entropy of the segmented image, achieving automatic and accurate segmentation. Medical image segmentation experiments show that the proposed algorithm effectively reduces sample training time and improves convergence precision, and that its segmentation is more accurate and effective than that of a traditional BP neural network (back-propagation neural network: a multilayer feed-forward neural network trained by the error back-propagation algorithm).
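As a rough illustration of a maximum-entropy criterion like the one invoked above, the sketch below selects a single gray-level threshold by maximizing the summed entropy of the two resulting classes (a Kapur-style rule). It is not the authors' combined criterion or wavelet-network pipeline; the image array and the 8-bit intensity range are illustrative assumptions.

```python
# Kapur-style maximum-entropy threshold selection (illustrative sketch only).
import numpy as np

def max_entropy_threshold(image, bins=256):
    # histogram of gray levels, assuming an 8-bit-style intensity range
    hist, _ = np.histogram(image.ravel(), bins=bins, range=(0, bins))
    p = hist / hist.sum()
    best_t, best_h = 0, -np.inf
    for t in range(1, bins - 1):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        p0, p1 = p[:t] / w0, p[t:] / w1          # class-conditional distributions
        h = -(p0[p0 > 0] * np.log(p0[p0 > 0])).sum() \
            - (p1[p1 > 0] * np.log(p1[p1 > 0])).sum()
        if h > best_h:                            # keep the threshold with maximum total entropy
            best_t, best_h = t, h
    return best_t
```

A binary segmentation would then follow as `image >= max_entropy_threshold(image)`.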
NASA Astrophysics Data System (ADS)
Garg, Ishita; Karwoski, Ronald A.; Camp, Jon J.; Bartholmai, Brian J.; Robb, Richard A.
2005-04-01
Chronic obstructive pulmonary diseases (COPD) are debilitating conditions of the lung and are the fourth leading cause of death in the United States. Early diagnosis is critical for timely intervention and effective treatment. The ability to quantify particular imaging features of specific pathology and accurately assess progression or response to treatment with current imaging tools is relatively poor. The goal of this project was to develop automated segmentation techniques that would be clinically useful as computer assisted diagnostic tools for COPD. The lungs were segmented using an optimized segmentation threshold and the trachea was segmented using a fixed threshold characteristic of air. The segmented images were smoothed by a morphological close operation using spherical elements of different sizes. The results were compared to other segmentation approaches using an optimized threshold to segment the trachea. Comparison of the segmentation results from 10 datasets showed that the method of trachea segmentation using a fixed air threshold followed by morphological closing with a spherical element of size 23x23x5 yielded the best results. Inclusion of a greater number of pulmonary vessels in the lung volume is important for the development of computer assisted diagnostic tools because the physiological changes of COPD can result in quantifiable anatomic changes in pulmonary vessels. Using a fixed threshold to segment the trachea removed airways from the lungs to a better extent than using an optimized threshold. Preliminary measurements gathered from patients' CT scans suggest that segmented images can be used for accurate analysis of total lung volume and volumes of regional lung parenchyma. Additionally, reproducible segmentation allows for quantification of specific pathologic features, such as lower intensity pixels, which are characteristic of abnormal air spaces in diseases like emphysema.
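The two steps described above, a fixed air threshold followed by a morphological close, can be sketched as follows. This is not the authors' pipeline: the HU cut-off of -950, the box-shaped structuring element standing in for the spherical one, and the array names are illustrative assumptions.

```python
# Minimal sketch: fixed-threshold air segmentation plus morphological closing.
import numpy as np
from scipy import ndimage

def segment_air_and_close(ct_volume_hu, air_threshold_hu=-950, struct_size=(23, 23, 5)):
    """Return a binary air mask from a CT volume, smoothed by a morphological close."""
    air_mask = ct_volume_hu < air_threshold_hu        # fixed threshold characteristic of air (assumed value)
    structure = np.ones(struct_size, dtype=bool)      # box element standing in for the 23x23x5 spherical one
    return ndimage.binary_closing(air_mask, structure=structure)
```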
Gauging the likelihood of stable cavitation from ultrasound contrast agents
NASA Astrophysics Data System (ADS)
Bader, Kenneth B.; Holland, Christy K.
2013-01-01
The mechanical index (MI) was formulated to gauge the likelihood of adverse bioeffects from inertial cavitation. However, the MI formulation did not consider bubble activity from stable cavitation. This type of bubble activity can be readily nucleated from ultrasound contrast agents (UCAs) and has the potential to promote beneficial bioeffects. Here, the presence of stable cavitation is determined numerically by tracking the onset of subharmonic oscillations within a population of bubbles for frequencies up to 7 MHz and peak rarefactional pressures up to 3 MPa. In addition, the acoustic pressure rupture threshold of an UCA population was determined using the Marmottant model. The threshold for subharmonic emissions of optimally sized bubbles was found to be lower than the inertial cavitation threshold for all frequencies studied. The rupture thresholds of optimally sized UCAs were found to be lower than the threshold for subharmonic emissions for either single cycle or steady state acoustic excitations. Because the thresholds of both subharmonic emissions and UCA rupture are linearly dependent on frequency, an index of the form ICAV = Pr/f (where Pr is the peak rarefactional pressure in MPa and f is the frequency in MHz) was derived to gauge the likelihood of subharmonic emissions due to stable cavitation activity nucleated from UCAs.
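A worked example of the proposed index follows; the exposure values are purely illustrative and are not thresholds reported in the paper.

```python
# I_CAV = Pr / f, with Pr in MPa and f in MHz (illustrative values only).
def cavitation_index(pr_mpa, f_mhz):
    """Peak rarefactional pressure in MPa divided by frequency in MHz."""
    return pr_mpa / f_mhz

print(cavitation_index(pr_mpa=1.2, f_mhz=3.0))  # -> 0.4
```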
Optimally achieving milk bulk tank somatic cell count thresholds.
Troendle, Jason A; Tauer, Loren W; Gröhn, Yrjo T
2017-01-01
High somatic cell count in milk leads to reduced shelf life in fluid milk and lower processed yields in manufactured dairy products. As a result, farmers are often penalized for high bulk tank somatic cell count or paid a premium for low bulk tank somatic cell count. Many countries also require all milk from a farm to be lower than a specified regulated somatic cell count. Thus, farms often cull cows that have high somatic cell count to meet somatic cell count thresholds. Rather than naïvely cull the highest somatic cell count cows, a mathematical programming model was developed that determines the cows to be culled from the herd by maximizing the net present value of the herd, subject to meeting any specified bulk tank somatic cell count level. The model was applied to test-day cows on 2 New York State dairy farms. Results showed that the net present value of the herd was increased by using the model to meet the somatic cell count restriction compared with naïvely culling the highest somatic cell count cows. Implementation of the model would be straightforward in dairy management decision software. Copyright © 2017 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
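The decision problem described above can be illustrated with the small greedy heuristic below: choose cows to cull so that the milk-weighted bulk tank somatic cell count meets a limit while giving up as little herd net present value as possible. This is only a stand-in for the authors' mathematical-programming model; the cow records and the SCC limit are invented for the example.

```python
# Greedy culling heuristic (illustrative; not the paper's optimization model).
def bulk_tank_scc(cows):
    total_milk = sum(c["milk"] for c in cows)
    return sum(c["scc"] * c["milk"] for c in cows) / total_milk

def greedy_cull(cows, scc_limit):
    keep, culled = list(cows), []
    while bulk_tank_scc(keep) > scc_limit and len(keep) > 1:
        def cost_per_reduction(c):
            # NPV sacrificed per unit of bulk tank SCC reduction if cow c is culled
            remaining = [k for k in keep if k is not c]
            reduction = bulk_tank_scc(keep) - bulk_tank_scc(remaining)
            return float("inf") if reduction <= 0 else c["npv"] / reduction
        victim = min(keep, key=cost_per_reduction)
        keep.remove(victim)
        culled.append(victim)
    return keep, culled

herd = [{"npv": 1200, "milk": 30, "scc": 900},
        {"npv": 800,  "milk": 25, "scc": 150},
        {"npv": 1500, "milk": 35, "scc": 400}]
keep, culled = greedy_cull(herd, scc_limit=400)
```

Unlike naively culling the highest-SCC cows, the heuristic weighs the value lost against the SCC reduction gained, which is the trade-off the paper's model formalizes.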
Middle Electrode in a Vertical Transistor Structure Using an Sn Layer by Thermal Evaporation
NASA Astrophysics Data System (ADS)
Nogueira, Gabriel Leonardo; da Silva Ozório, Maiza; da Silva, Marcelo Marques; Morais, Rogério Miranda; Alves, Neri
2018-05-01
We report a process for fabricating the middle electrode of a vertical field effect transistor (VOFET) by the evaporation of a tin (Sn) layer. Bare aluminum oxide (Al2O3), obtained by anodization, and Al2O3 covered with a polymethylmethacrylate (PMMA) layer were used as the gate dielectric. We measured the electrical resistance of the Sn while the evaporation was carried out to find the best condition for preparing the middle electrode, that is, good lateral conduction associated with openings that make the electrode permeable to the electric field in the vertical direction. This process showed that a 55 nm thick Sn layer is suitable for use in a VOFET, and that the optimal thickness is easier to achieve when the Sn is evaporated onto PMMA than onto bare Al2O3. The addition of a PMMA layer on the Al2O3 surface modifies the morphology of the Sn layer, resulting in a lowering of the threshold voltage. Threshold voltage and electric field values of VTH = -8 V and ETH = 354.5 MV/m, respectively, were calculated using a 20 nm thick Al2O3 film covered with a 14 nm PMMA layer as the gate dielectric, while for bare Al2O3 these values were VTH = -10 V and ETH = 500 MV/m.
Improved LIDT values for dielectric dispersive compensating mirrors applying ternary composites
NASA Astrophysics Data System (ADS)
Willemsen, T.; Schlichting, S.; Gyamfi, M.; Jupé, M.; Ehlers, H.; Morgner, U.; Ristau, D.
2016-12-01
The present contribution addresses an improved method for fabricating dielectric dispersive compensating mirrors (CMs) with an increased laser induced damage threshold (LIDT) through the use of ternary composite layers. Taking advantage of a novel in-situ phase monitor system, the sensitive deposition process can be controlled more precisely. The study begins with a design synthesis to achieve optimum reflection and GDD values for a conventional high-low stack (HL)n. Afterwards the field intensity is analyzed, and the layers exposed to the highest electric field intensities are replaced by ternary TaxSiyOz composites. Both designs have similar target specifications, one using ternary composites and the other a conventional (HL)n stack. The first layers of the stack are switched applying in-situ optical broadband monitoring in conjunction with a forward re-optimization algorithm, which also adjusts the layers remaining for deposition at each switching event. To achieve the demanded GDD spectra, the last layers are controlled by a novel in-situ white light interferometer operating in the infrared spectral range. Finally, the CMs are measured in a 10,000-on-1 procedure according to ISO 21254, applying pulses with a duration of 130 fs at a central wavelength of 775 nm, to determine the laser induced damage threshold.
Seo, Joo-Hyun; Park, Jihyang; Kim, Eun-Mi; Kim, Juhan; Joo, Keehyoung; Lee, Jooyoung; Kim, Byung-Gee
2014-02-01
Sequence subgrouping for a given sequence set can enable various informative tasks such as the functional discrimination of sequence subsets and the functional inference of unknown sequences. Because the identity threshold for sequence subgrouping may vary with the given sequence set, it is highly desirable to construct a robust subgrouping algorithm that automatically identifies an optimal identity threshold and generates subgroups for a given sequence set. To this end, an automatic sequence subgrouping method named 'Subgrouping Automata' (SA) was constructed. First, the tree analysis module analyzes the structure of the tree and calculates all possible subgroups at each node. The sequence similarity analysis module calculates the average sequence similarity for all subgroups at each node. The representative sequence generation module finds a representative sequence for each subgroup using profile analysis and self-scoring. For all nodes, average sequence similarities are calculated, and 'Subgrouping Automata' searches for the node showing the statistically maximal increase in sequence similarity using Student's t-value. The node showing the maximum t-value, which gives the most significant difference in average sequence similarity between two adjacent nodes, is determined to be the optimum subgrouping node in the phylogenetic tree. Further analysis showed that the optimum subgrouping node from SA prevents under-subgrouping and over-subgrouping. Copyright © 2013. Published by Elsevier Ltd.
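A simplified stand-in for the node-selection step described above is sketched below: given the per-subgroup average similarities at successive tree-cut levels, pick the level whose increase over the previous level has the largest Student's t statistic. The data layout and the use of a two-sample, unequal-variance test are assumptions for illustration, not the authors' exact implementation.

```python
# Pick the tree level with the most significant jump in average subgroup similarity.
from scipy import stats

def best_subgrouping_level(similarities_per_level):
    """similarities_per_level: list of lists, each holding per-subgroup average similarities."""
    best_level, best_t = None, float("-inf")
    for i in range(1, len(similarities_per_level)):
        t, _ = stats.ttest_ind(similarities_per_level[i],
                               similarities_per_level[i - 1],
                               equal_var=False)
        if t > best_t:
            best_level, best_t = i, t
    return best_level, best_t
```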
Zhang, Huai-zhu; Lin, Jun; Zhang, Huai-Zhu
2014-06-01
In the present paper, outlier detection methods for the determination of oil yield in oil shale using near-infrared (NIR) diffuse reflection spectroscopy were studied. In quantitative analysis with near-infrared spectroscopy, environmental changes and operator errors both produce outliers. The presence of outliers affects the overall distribution trend of the samples and leads to a decrease in predictive capability. Thus, the detection of outliers is important for the construction of high-quality calibration models. The methods of principal component analysis-Mahalanobis distance (PCA-MD) and resampling by half-means (RHM) were applied to the discrimination and elimination of outliers in this work. The threshold for MD and the confidence level for RHM were optimized using the performance of partial least squares (PLS) models constructed after the elimination of outliers. Compared with the model constructed from the full data set, the RMSEP values of the models constructed after applying PCA-MD with a threshold equal to the sum of the mean and standard deviation of the MD, RHM with a confidence level of 85%, and the combination of PCA-MD and RHM were reduced by 48.3%, 27.5% and 44.8%, respectively. The predictive ability of the calibration model was thus improved effectively.
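A minimal sketch of the PCA-Mahalanobis-distance screening described above is given below, with the cut-off set to the mean plus one standard deviation of the distances. The number of principal components and the spectra array are illustrative assumptions, not values from the study.

```python
# PCA scores followed by Mahalanobis-distance outlier flagging (illustrative sketch).
import numpy as np

def pca_md_outliers(spectra, n_components=5):
    X = spectra - spectra.mean(axis=0)
    # project onto the leading principal components via SVD
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    scores = X @ Vt[:n_components].T
    cov = np.cov(scores, rowvar=False)
    inv_cov = np.linalg.inv(cov)
    diff = scores - scores.mean(axis=0)
    md = np.sqrt(np.einsum("ij,jk,ik->i", diff, inv_cov, diff))
    threshold = md.mean() + md.std()              # mean + one standard deviation of MD
    return np.where(md > threshold)[0], md
```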
Ex vivo Mueller polarimetric imaging of the uterine cervix: a first statistical evaluation
NASA Astrophysics Data System (ADS)
Rehbinder, Jean; Haddad, Huda; Deby, Stanislas; Teig, Benjamin; Nazac, André; Novikova, Tatiana; Pierangelo, Angelo; Moreau, François
2016-07-01
Early detection through screening plays a major role in reducing the impact of cervical cancer on patients. When detected before the invasive stage, precancerous lesions can be eliminated with very limited surgery. Polarimetric imaging is a potential alternative to the standard screening methods currently used. In a previous proof-of-concept study, significant contrasts have been found in polarimetric images acquired for healthy and precancerous regions of excised cervical tissue. To quantify the ability of the technique to differentiate between healthy and precancerous tissue, polarimetric images of seventeen cervical conization specimens (cone-shaped or cylindrical wedges from the uterine cervix) are compared with results from histopathological diagnoses, which is considered to be the "gold standard." The sensitivity and specificity of the technique are calculated for images acquired at wavelengths of 450, 550, and 600 nm, aiming to differentiate between high-grade cervical intraepithelial neoplasia (CIN 2-3) and healthy squamous epithelium. To do so, a sliding threshold for the scalar retardance parameter was used for the sample zones, as labeled after histological diagnosis. An optimized value of ˜83% is achieved for both sensitivity and specificity for images acquired at 450 nm and for a threshold scalar retardance value of 10.6 deg. This study paves the way for an application of polarimetry in the clinic.
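The sliding-threshold evaluation described above can be illustrated with the short sketch below: sweep a cut-off on the scalar retardance and compute sensitivity and specificity against the histological labels. The arrays, the candidate grid, and the sign convention (calling a zone a lesion when retardance is at or above the cut-off) are assumptions for illustration; the paper reports an optimum near 10.6 deg at 450 nm.

```python
# Sensitivity/specificity sweep over candidate retardance thresholds (illustrative).
import numpy as np

def sweep_threshold(retardance_deg, labels, candidates):
    """labels: 1 = CIN 2-3 zone, 0 = healthy squamous epithelium."""
    results = []
    for t in candidates:
        pred = retardance_deg >= t                # assumed sign convention
        tp = np.sum(pred & (labels == 1))
        fn = np.sum(~pred & (labels == 1))
        tn = np.sum(~pred & (labels == 0))
        fp = np.sum(pred & (labels == 0))
        results.append((t, tp / (tp + fn), tn / (tn + fp)))
    return results  # (threshold, sensitivity, specificity) triples
```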
SU-C-9A-01: Parameter Optimization in Adaptive Region-Growing for Tumor Segmentation in PET
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tan, S; Huazhong University of Science and Technology, Wuhan, Hubei; Xue, M
Purpose: To design a reliable method to determine the optimal parameter in the adaptive region-growing (ARG) algorithm for tumor segmentation in PET. Methods: The ARG uses an adaptive similarity criterion m - fσ ≤ IPET ≤ m + fσ, so that a neighboring voxel is appended to the region based on its similarity to the current region. When increasing the relaxing factor f (f ≥ 0), the resulting volumes monotonically increased, with a sharp increase when the region just grew into the background. The optimal f that separates the tumor from the background is defined as the first point with the local maximum curvature on an error function fitted to the f-volume curve. The ARG was tested on a tumor segmentation benchmark that includes ten lung cancer patients with 3D pathologic tumor volume as ground truth. For comparison, the widely used 42% and 50% SUVmax thresholding, Otsu optimal thresholding, Active Contours (AC), Geodesic Active Contours (GAC), and Graph Cuts (GC) methods were tested. The dice similarity index (DSI), volume error (VE), and maximum axis length error (MALE) were calculated to evaluate the segmentation accuracy. Results: The ARG provided the highest accuracy among all tested methods. Specifically, the ARG has an average DSI, VE, and MALE of 0.71, 0.29, and 0.16, respectively, better than the absolute 42% thresholding (DSI=0.67, VE=0.57, and MALE=0.23), the relative 42% thresholding (DSI=0.62, VE=0.41, and MALE=0.23), the absolute 50% thresholding (DSI=0.62, VE=0.48, and MALE=0.21), the relative 50% thresholding (DSI=0.48, VE=0.54, and MALE=0.26), OTSU (DSI=0.44, VE=0.63, and MALE=0.30), AC (DSI=0.46, VE=0.85, and MALE=0.47), GAC (DSI=0.40, VE=0.85, and MALE=0.46) and GC (DSI=0.66, VE=0.54, and MALE=0.21) methods. Conclusions: The results suggest that the proposed method reliably identified the optimal relaxing factor in ARG for tumor segmentation in PET. This work was supported in part by National Cancer Institute Grant R01 CA172638; the dataset was provided by AAPM TG211.
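The parameter-selection step described above can be sketched as follows: fit an error-function-shaped curve to the relaxing-factor-versus-volume data and take the point of maximum curvature as the optimal f. The functional form, the use of the global (rather than the first local) curvature maximum, and the input arrays are assumptions for illustration.

```python
# Fit an erf-shaped curve to the f-volume data and locate the curvature maximum.
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

def fitted_volume(f, a, b, c, d):
    return a * erf((f - b) / c) + d

def optimal_relaxing_factor(f_values, volumes):
    popt, _ = curve_fit(fitted_volume, f_values, volumes,
                        p0=[np.ptp(volumes) / 2, f_values.mean(), 1.0, volumes.mean()])
    f_dense = np.linspace(f_values.min(), f_values.max(), 2000)
    v = fitted_volume(f_dense, *popt)
    dv = np.gradient(v, f_dense)
    d2v = np.gradient(dv, f_dense)
    curvature = np.abs(d2v) / (1.0 + dv ** 2) ** 1.5
    return f_dense[np.argmax(curvature)]      # global curvature maximum as a stand-in
```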
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilson, JM; Samei, E; Departments of Physics, Electrical and Computer Engineering, and Biomedical Engineering, and Medical Physics Graduate Program, Duke University, Durham, NC
2016-06-15
Purpose: Recent legislative and accreditation requirements have driven rapid development and implementation of CT radiation dose monitoring solutions. Institutions must determine how to improve the quality, safety, and consistency of their clinical performance. The purpose of this work was to design a strategy and a meaningful characterization of results from an in-house, clinically deployed dose monitoring solution. Methods: A dose monitoring platform was designed by our imaging physics group that focused on extracting protocol parameters, dose metrics, and patient demographics and size. Compared to most commercial solutions, which focus on individual exam alerts and global thresholds, the program sought to characterize overall consistency and targeted thresholds based on eight analytic interrogations. These were based on explicit questions related to protocol application, national benchmarks, protocol- and size-specific dose targets, operational consistency, outliers, temporal trends, intra-system variability, and consistent use of electronic protocols. Using historical data since the start of 2013, 95% and 99% intervals were used to establish yellow and amber parameterized dose alert thresholds, respectively, as a function of protocol, scanner, and size. Results: Quarterly reports have been generated for three hospitals for 3 quarters of 2015, totaling 27,880, 28,502, and 30,631 exams, respectively. Four adult and two pediatric protocols were higher than external institutional benchmarks. Four protocol dose levels were being inconsistently applied as a function of patient size. For the three hospitals, the minimum and maximum amber outlier percentages were [1.53%, 2.28%], [0.76%, 1.8%], and [0.94%, 1.17%], respectively. Compared with the electronic protocols, 10 protocols were found to be used with some inconsistency. Conclusion: Dose monitoring can satisfy requirements with global alert thresholds and patient dose records, but the real value is in optimizing patient-specific protocols, balancing the image quality trade-offs that dose-reduction strategies promise, and improving the performance and consistency of a clinical operation. Data plots that capture patient demographics and scanner performance demonstrate that value.
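The parameterized alert thresholds described above can be illustrated with the short sketch below: per protocol/scanner/size group, the 95th and 99th percentiles of historical dose metrics become the yellow and amber limits. The data layout (a list of exam dictionaries), the group keys, and the use of CTDIvol as the metric are illustrative assumptions.

```python
# Percentile-based yellow/amber alert thresholds from historical exam data (illustrative).
import numpy as np
from collections import defaultdict

def alert_thresholds(exams, group_keys=("protocol", "scanner", "size_bin")):
    grouped = defaultdict(list)
    for exam in exams:
        grouped[tuple(exam[k] for k in group_keys)].append(exam["ctdi_vol"])
    return {group: {"yellow": np.percentile(doses, 95), "amber": np.percentile(doses, 99)}
            for group, doses in grouped.items()}
```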
Polymer Composite Containing Carbon Nanotubes and their Applications.
Park, Sung-Hoon; Bae, Joonwon
2017-07-10
Carbon nanotubes (CNTs) are attractive nanostructures for use as fillers in polymers, primarily due to their high aspect ratio coupled with high thermal and electrical conductivities. Consequently, CNT polymer composites have been extensively investigated for various applications, owing to their light weight and processibility. However, several issues have affected the utilization of CNTs, such as aggregation (bundling), which leads to non-uniform dispersion and poor interfacial bonding of the CNTs with the polymer, resulting in variation in composite performance, along with the additional issue of the high cost of CNTs. In this article, recent research and patents for polymer composites containing carbon nanomaterials are presented and summarized. In addition, a rationale for optimally designed carbon nanotube polymer composites and their applications is suggested. Above the electrical percolation threshold, a transition from insulator to conductor occurs. The percolation threshold values of CNT composites depend on the filler shape, the intrinsic properties of the filler, the type of polymer, the CNT dispersion condition, and so on. Percolation threshold values reported for different CNT polymer composites are summarized. The differences in percolation threshold and conductivity of CNT composites can be explained by the degree of effective interaction between the nanotubes and the polymer matrix. Reactions between surface functional groups of the CNTs and the polymer can contribute to better dispersion of CNTs in the polymer matrix, increasing the number of electrical networks of CNTs in the polymer and thereby enhancing composite conductivity. In addition, to exfoliate nanotubes from heavy bundles, ultrasonication with a proper solvent and three-roll milling processes were used. Potential covalent bonding reactions between functionalized CNTs and the polymer are suggested based on the above rationale. Through the use of CNT functionalization, high aspect ratio CNTs, and proper fabrication, uniform dispersion of nanotubes in the polymer can be achieved, leading to considerable improvement in electrical conductivity and electromagnetic interference (EMI) shielding properties. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
Novel threshold pressure sensors based on nonlinear dynamics of MEMS resonators
NASA Astrophysics Data System (ADS)
Hasan, Mohammad H.; Alsaleem, Fadi M.; Ouakad, Hassen M.
2018-06-01
Triggering an alarm in a car for low tire air pressure or tripping an HVAC compressor if the refrigerant pressure drops below a threshold value are examples of applications where measuring the amount of pressure is not as important as determining whether the pressure has crossed a threshold value for an action to occur. Unfortunately, current technology still relies on analog pressure sensors to perform this functionality by adding a complex interface (extra circuitry, controllers, and/or decision units). In this paper, we demonstrate two new smart tunable-threshold pressure switch concepts that can reduce the complexity of a threshold pressure sensor. The first concept is based on the nonlinear subharmonic resonance of a straight double cantilever microbeam with a proof mass, and the other concept is based on the snap-through bi-stability of a clamped-clamped MEMS shallow arch. In both designs, the sensor operation concept is simple. Any actuation performed at a pressure lower than the threshold value will activate a nonlinear dynamic behavior (subharmonic resonance or snap-through bi-stability) yielding a large output that is interpreted as a logic value of ONE, or ON. Once the pressure exceeds the threshold value, the nonlinear response ceases to exist, yielding a small output that is interpreted as a logic value of ZERO, or OFF. A lumped single-degree-of-freedom model for the double cantilever beam, validated using experimental data, and a continuous beam model for the arch beam are used to simulate the operation range of the proposed sensors by identifying the relationship between the excitation signal and the critical cut-off pressure.
Optimizing the motion of a folding molecular motor in soft matter.
Rajonson, Gabriel; Ciobotarescu, Simona; Teboul, Victor
2018-04-18
We use molecular dynamics simulations to investigate the displacement of a periodically folding molecular motor in a viscous environment. Our aim is to find significant parameters for optimizing the displacement of the motor. We find that the choice of a heavy host or of small host molecules significantly increases the motor displacement. In the same environment, the motor moves with hopping, solid-like motions while the host moves with diffusive, liquid-like motions, a result that originates from the motor's larger size. Due to the hopping motions, there are thresholds on the force necessary for the motor to reach stable positions in the medium. These force thresholds result in a threshold on the motor size required to induce a significant displacement, which is followed by plateaus in the motor displacement.
NASA Astrophysics Data System (ADS)
Avice, J.; Piombini, H.; Boscher, C.; Belleville, P.; Vaudel, G.; Brotons, G.; Ruello, P.; Gusev, V.
2017-11-01
The MegaJoule Laser (LMJ) for inertial confinement fusion experiments is currently in operation at CEA-CESTA in France. All the lenses are coated with an antireflective (AR) layer to optimize the light power transmission. This AR layer is manufactured by a sol-gel process, a soft chemical process, associated with a liquid-phase coating technique to produce thin metal oxide films. These optical components are hardened in ammonia vapors in order to mechanically reinforce the AR coating and make the components easier to handle. This hardening induces a thickness reduction of the layer, hence an increase in stiffness and sometimes crazing of the layer. Because these optical components are exposed to a high-power laser beam, it is important to verify whether the optical and mechanical properties of the AR layer influence the laser damage threshold. A series of coated samples with variable elastic moduli has been manufactured to examine this point. For that purpose, a homemade Laser Induced Damage Threshold (LIDT) setup has been developed to test the layers under laser flux. We describe the methods used and present the results. Preliminary results obtained on several coated samples with variable elastic moduli are presented. We show that, whatever the elastic stiffness of the AR coating, an overall decrease of the threshold appears, with no noticeable effect of the mechanical properties of the AR coatings. Some possible explanations are given.
Collective organization in aerotactic motion
NASA Astrophysics Data System (ADS)
Mazza, Marco G.
Some bacteria exhibit interesting behavior in the presence of an oxygen gradient. They perform an aerotactic motion along the gradient until they reach their optimal oxygen concentration. They often organize collectively by forming dense regions, called 'bands', that travel towards the oxygen source. We have developed a model of swimmers with stochastic interaction rules moving in the vicinity of an air bubble. We perform molecular dynamics simulations and also solve advection-diffusion equations that reproduce the aerotactic behavior of mono-flagellated, facultative anaerobic bacteria. If the oxygen concentration in the system falls locally below a threshold value, the formation of an aerotactic band migrating toward the bubble can be observed.
Policy tree optimization for adaptive management of water resources systems
NASA Astrophysics Data System (ADS)
Herman, Jonathan; Giuliani, Matteo
2017-04-01
Water resources systems must cope with irreducible uncertainty in supply and demand, requiring policy alternatives capable of adapting to a range of possible future scenarios. Recent studies have developed adaptive policies based on "signposts" or "tipping points" that suggest the need of updating the policy. However, there remains a need for a general method to optimize the choice of the signposts to be used and their threshold values. This work contributes a general framework and computational algorithm to design adaptive policies as a tree structure (i.e., a hierarchical set of logical rules) using a simulation-optimization approach based on genetic programming. Given a set of feature variables (e.g., reservoir level, inflow observations, inflow forecasts), the resulting policy defines both the optimal reservoir operations and the conditions under which such operations should be triggered. We demonstrate the approach using Folsom Reservoir (California) as a case study, in which operating policies must balance the risk of both floods and droughts. Numerical results show that the tree-based policies outperform the ones designed via Dynamic Programming. In addition, they display good adaptive capacity to the changing climate, successfully adapting the reservoir operations across a large set of uncertain climate scenarios.
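A tree-structured operating policy of the kind described above can be illustrated with the minimal sketch below: internal nodes test a feature against a threshold and leaves name an action. This is not the authors' genetic-programming framework; the feature names, thresholds, and actions are invented for the example.

```python
# Minimal rule-tree policy: internal nodes test features, leaves prescribe actions.
class Node:
    def __init__(self, feature=None, threshold=None, low=None, high=None, action=None):
        self.feature, self.threshold = feature, threshold
        self.low, self.high, self.action = low, high, action

    def decide(self, state):
        if self.action is not None:               # leaf: return the prescribed operation
            return self.action
        branch = self.low if state[self.feature] < self.threshold else self.high
        return branch.decide(state)

# Example policy: hedge releases when storage is low unless a large inflow is forecast.
policy = Node(feature="storage", threshold=0.4,
              low=Node(feature="inflow_forecast", threshold=0.6,
                       low=Node(action="hedged_release"),
                       high=Node(action="normal_release")),
              high=Node(action="normal_release"))

print(policy.decide({"storage": 0.3, "inflow_forecast": 0.2}))  # -> "hedged_release"
```

In the paper's framework, both the features tested at each node and their threshold values are the quantities optimized by the simulation-optimization procedure.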
Simulation and optimization of deep violet InGaN double quantum well laser
NASA Astrophysics Data System (ADS)
Alahyarizadeh, Gh.; Ghazai, A. J.; Rahmani, R.; Mahmodi, H.; Hassan, Z.
2012-03-01
The performance characteristics of a deep violet InGaN double quantum well laser diode (LD), such as threshold current (Ith), external differential quantum efficiency (DQE) and output power, have been investigated using the Integrated System Engineering Technical Computer Aided Design (ISE-TCAD) software. Its operating parameters, such as internal quantum efficiency (ηi), internal loss (αi) and transparency threshold current density (J0), have also been studied. Because we are interested in investigating these characteristics and parameters independently of the well and barrier thickness, the indium mole fraction of the wells and barriers was varied accordingly to reach the desired output wavelength. The indium mole fractions of the well and barrier layers were taken as 0.08 and 0.0, respectively. Some important parameters that affect the performance characteristics, such as the Al mole fraction of the electron blocking layer (EBL) and the cavity length, were also investigated. The optimum values of the Al mole fraction and cavity length in this study are 0.15 and 400 μm, respectively. The lowest threshold current, highest DQE and highest output power, obtained at an emission wavelength of 391.5 nm, are 43.199 mA, 44.99% and 10.334 mW, respectively.
Cadence (steps/min) and intensity during ambulation in 6-20 year olds: the CADENCE-kids study.
Tudor-Locke, Catrine; Schuna, John M; Han, Ho; Aguiar, Elroy J; Larrivee, Sandra; Hsia, Daniel S; Ducharme, Scott W; Barreira, Tiago V; Johnson, William D
2018-02-26
Steps/day is widely utilized to estimate the total volume of ambulatory activity, but it does not directly reflect intensity, a central tenet of public health guidelines. Cadence (steps/min) represents an overlooked opportunity to describe the intensity of ambulatory activity. We sought to establish thresholds linking directly observed cadence with objectively measured intensity in 6-20 year olds. One hundred twenty participants completed multiple 5-min bouts on a treadmill, from 13.4 m/min (0.80 km/h) to 134.0 m/min (8.04 km/h). The protocol was terminated when participants naturally transitioned to running, or if they chose to not continue. Steps were visually counted and intensity was objectively measured using a portable metabolic system. Youth metabolic equivalents (METy) were calculated for 6-17 year olds, with moderate intensity defined as ≥4 and < 6 METy, and vigorous intensity as ≥6 METy. Traditional METs were calculated for 18-20 year olds, with moderate intensity defined as ≥3 and < 6 METs, and vigorous intensity defined as ≥6 METs. Optimal cadence thresholds for moderate and vigorous intensity were identified using segmented random coefficients models and receiver operating characteristic (ROC) curves. Participants were on average (± SD) aged 13.1 ± 4.3 years, weighed 55.8 ± 22.3 kg, and had a BMI z-score of 0.58 ± 1.21. Moderate intensity thresholds (from regression and ROC analyses) ranged from 128.4 steps/min among 6-8 year olds to 87.3 steps/min among 18-20 year olds. Comparable values for vigorous intensity ranged from 157.7 steps/min among 6-8 year olds to 119.3 steps/min among 18-20 year olds. Considering both regression and ROC approaches, heuristic cadence thresholds (i.e., evidence-based, practical, rounded) ranged from 125 to 90 steps/min for moderate intensity, and 155 to 125 steps/min for vigorous intensity, with higher cadences for younger age groups. Sensitivities and specificities for these heuristic thresholds ranged from 77.8 to 99.0%, indicating fair to excellent classification accuracy. These heuristic cadence thresholds may be used to prescribe physical activity intensity in public health recommendations. In the research and clinical context, these heuristic cadence thresholds have apparent value for accelerometer-based analytical approaches to determine the intensity of ambulatory activity.
Jiang, Wei-Jie; Jin, Fan; Zhou, Li-Ming
2016-06-01
To investigate the effects of the DNA fragmentation index (DFI) and sperm malformation rate (SMR) of optimized sperm on embryonic development and early spontaneous abortion in conventional in vitro fertilization and embryo transfer (IVF-ET). We selected 602 cycles of conventional IVF-ET for pure oviductal infertility that had achieved clinical pregnancy, including 505 cycles with ongoing pregnancy and 97 cycles with early spontaneous abortion. On the day of ovum retrieval, we examined the DNA integrity and morphology of the remaining optimized sperm using the SCD and Diff-Quik methods, established a joint predictor (JP) by logistic equation, and assessed the value of the DFI and SMR in predicting early spontaneous abortion using ROC curves. The DFI, SMR, and high-quality embryo rate were (15.91±3.69)%, (82.85±10.24)%, and 46.53% (342/735) in the early spontaneous abortion group and (9.30±4.22)%, (77.32±9.19)%, and 56.43% (2263/4010), respectively, in the ongoing pregnancy group, all with statistically significant differences between the two groups (P<0.05). Both the DFI and SMR were risk factors for early spontaneous abortion (OR = 5.96 and 1.66; both P<0.01). The areas under the ROC curve for the DFI, SMR, and JP were 0.893±0.019, 0.685±0.028, and 0.898±0.018, respectively. According to the Youden index, the optimal cut-off values of the DFI and SMR for the prediction of early spontaneous abortion were approximately 15% and 80%. The DFI was positively correlated with the SMR (r = 0.31, P<0.01), whereas the high-quality embryo rate was negatively correlated with both the DFI (r = -0.45, P<0.01) and the SMR (r = -0.22, P<0.01). The DFI and SMR of optimized sperm are closely associated with embryonic development in IVF. The DFI has a certain value for predicting early spontaneous abortion, with a threshold of approximately 15%, whereas the SMR may have a lower predictive value.
Epicardial fat thickness: threshold values and lifestyle association in male adolescents.
Cena, H; Fonte, M L; Casali, P M; Maffoni, S; Roggi, C; Biino, G
2015-04-01
Obese adolescents with high proportion of visceral fat are at higher risk of developing the metabolic syndrome. The study aims to investigate if echocardiographic epicardial fat thickness (EF) could be predictive of visceral obesity (VO) early in life and to provide EF threshold values specific for male adolescents. Further aim was to investigate the association between EF, lifestyle and metabolic disease familiarity. Anthropometric data were collected from 102 normal weight and overweight, healthy male adolescents (mean age: 14.91 ± 1.98 years); bioelectrical impedance analysis and transthoracic echocardiogram were performed in the same sample. Each participant fulfilled a validated self-administered lifestyle questionnaire. We found higher EF values in sedentary adolescents (P < 0.05), in those who never eat fruit and vegetables (P < 0.05), and in those with overweight mothers (P < 0.05). The strongest independent predictor of EF was waist circumference (P < 0.0001). Using the waist to height ratio as a marker of VO, logistic regression analysis revealed that 1 mm EF gain is responsible for seven times higher VO risk (P < 0.0001). Receiver Operating Characteristic (ROC) analysis showed that the optimal cut-off for EF thickness associated to youth VO is 3.2 mm. Ultrasonography EF measurement might be a second-level assessment tool, useful to detect early cardiometabolic damage stage. © 2014 The Authors. Pediatric Obesity © 2014 World Obesity.
2012-01-01
values of EAFP, EAFN, and EAF can be compared with three user-defined threshold values, TAFP, TAFN, and TAF. These threshold values determine the update ... values were chosen as TAFP = E0AFP + 0.02, TAFN = E0AFN + 0.02, and TAF = E0AF + 0.02). We called the value of 0.02 the margin of error tolerance. In
Effects of spot parameters in pencil beam scanning treatment planning.
Kraan, Aafke Christine; Depauw, Nicolas; Clasie, Ben; Giunta, Marina; Madden, Tom; Kooy, Hanne M
2018-01-01
Spot size σ (in air at isocenter), interspot spacing d, and spot charge q influence dose delivery efficiency and plan quality in Intensity Modulated Proton Therapy (IMPT) treatment planning. The choice and range of parameters varies among different manufacturers. The goal of this work is to demonstrate the influence of the spot parameters on dose quality and delivery in IMPT treatment plans, to show their interdependence, and to make practitioners aware of the spot parameter values for a certain facility. Our study could help as a guideline to make the trade-off between treatment quality and time in existing PBS centers and in future systems. We created plans for seven patients and a phantom, with different tumor sites and volumes, and compared the effect of small-, medium-, and large-spot widths (σ = 2.5, 5, and 10 mm) and interspot distances (1σ, 1.5σ, and 1.75σ) on dose, spot charge, and treatment time. Moreover, we quantified how postplanning charge threshold cuts affect plan quality and the total number of spots to deliver, for different spot widths and interspot distances. We show the effect of a minimum charge (or MU) cutoff value for a given proton delivery system. Spot size had a strong influence on dose: larger spots resulted in more protons delivered outside the target region. We observed dose differences of 2-13 Gy (RBE) between 2.5 mm and 10 mm spots, where the amount of extra dose was due to dose penumbra around the target region. Interspot distance had little influence on dose quality for our patient group. Both parameters strongly influence spot charge in the plans and thus the possible impact of postplanning charge threshold cuts. If such charge thresholds are not included in the treatment planning system (TPS), it is important that the practitioner validates that a given combination of lower charge threshold, interspot spacing, and spot size does not result in a plan degradation. Low average spot charge occurs for small spots, small interspot distances, many beam directions, and low fractional dose values. The choice of spot parameter values is a trade-off between accelerator and beam line design, plan quality, and treatment efficiency. We recommend the use of small spot sizes for better organ-at-risk sparing and lateral interspot distances of 1.5σ to avoid long treatment times. We note that plan quality is influenced by the charge cutoff. Our results show that the charge cutoff can be sufficiently large (i.e., 10^6 protons) to accommodate limitations on beam delivery systems. It is, therefore, not necessary per se to include the charge cutoff in the treatment planning optimization such that Pareto navigation (e.g., as practiced at our institution) is not excluded and optimal plans can be obtained without, perhaps, a bias from the charge cutoff. We recommend that the impact of a minimum charge cut is carefully verified for the spot sizes and spot distances applied or that it is accommodated in the TPS. © 2017 American Association of Physicists in Medicine.
Cascaded systems analysis of photon counting detectors.
Xu, J; Zbijewski, W; Gang, G; Stayman, J W; Taguchi, K; Lundqvist, M; Fredenberg, E; Carrino, J A; Siewerdsen, J H
2014-10-01
Photon counting detectors (PCDs) are an emerging technology with applications in spectral and low-dose radiographic and tomographic imaging. This paper develops an analytical model of PCD imaging performance, including the system gain, modulation transfer function (MTF), noise-power spectrum (NPS), and detective quantum efficiency (DQE). A cascaded systems analysis model describing the propagation of quanta through the imaging chain was developed. The model was validated in comparison to the physical performance of a silicon-strip PCD implemented on an experimental imaging bench. The signal response, MTF, and NPS were measured and compared to theory as a function of exposure conditions (70 kVp, 1-7 mA), detector threshold, and readout mode (i.e., the option for coincidence detection). The model sheds new light on the dependence of spatial resolution, charge sharing, and additive noise effects on threshold selection and was used to investigate the factors governing PCD performance, including the fundamental advantages and limitations of PCDs in comparison to energy-integrating detectors (EIDs) in the linear regime for which pulse pileup can be ignored. The detector exhibited highly linear mean signal response across the system operating range and agreed well with theoretical prediction, as did the system MTF and NPS. The DQE analyzed as a function of kilovolt (peak), exposure, detector threshold, and readout mode revealed important considerations for system optimization. The model also demonstrated the important implications of false counts from both additive electronic noise and charge sharing and highlighted the system design and operational parameters that most affect detector performance in the presence of such factors: for example, increasing the detector threshold from 0 to 100 (arbitrary units of pulse height threshold roughly equivalent to 0.5 and 6 keV energy threshold, respectively), increased the f50 (spatial-frequency at which the MTF falls to a value of 0.50) by ∼30% with corresponding improvement in DQE. The range in exposure and additive noise for which PCDs yield intrinsically higher DQE was quantified, showing performance advantages under conditions of very low-dose, high additive noise, and high fidelity rejection of coincident photons. The model for PCD signal and noise performance agreed with measurements of detector signal, MTF, and NPS and provided a useful basis for understanding complex dependencies in PCD imaging performance and the potential advantages (and disadvantages) in comparison to EIDs as well as an important guide to task-based optimization in developing new PCD imaging systems.
A Framework for Optimizing Phytosanitary Thresholds in Seed Systems.
Choudhury, Robin Alan; Garrett, Karen A; Klosterman, Steven J; Subbarao, Krishna V; McRoberts, Neil
2017-10-01
Seedborne pathogens and pests limit production in many agricultural systems. Quarantine programs help prevent the introduction of exotic pathogens into a country, but few regulations directly apply to reducing the reintroduction and spread of endemic pathogens. Use of phytosanitary thresholds helps limit the movement of pathogen inoculum through seed, but the costs associated with rejected seed lots can be prohibitive for voluntary implementation of phytosanitary thresholds. In this paper, we outline a framework to optimize thresholds for seedborne pathogens, balancing the cost of rejected seed lots and benefit of reduced inoculum levels. The method requires relatively small amounts of data, and the accuracy and robustness of the analysis improves over time as data accumulate from seed testing. We demonstrate the method first and illustrate it with a case study of seedborne oospores of Peronospora effusa, the causal agent of spinach downy mildew. A seed lot threshold of 0.23 oospores per seed could reduce the overall number of oospores entering the production system by 90% while removing 8% of seed lots destined for distribution. Alternative mitigation strategies may result in lower economic losses to seed producers, but have uncertain efficacy. We discuss future challenges and prospects for implementing this approach.
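The trade-off underlying the framework described above can be illustrated with the short sketch below: for candidate per-seed thresholds, compute the share of total oospores excluded and the share of seed lots rejected. The lot data are invented; the paper reports that a threshold of 0.23 oospores per seed removed about 90% of oospores while rejecting about 8% of lots in its case study.

```python
# Inoculum-removed vs lots-rejected trade-off across candidate thresholds (illustrative).
import numpy as np

def threshold_tradeoff(oospores_per_seed, seeds_per_lot, candidates):
    lot_load = oospores_per_seed * seeds_per_lot        # total oospores carried by each lot
    out = []
    for t in candidates:
        rejected = oospores_per_seed > t
        out.append((t, lot_load[rejected].sum() / lot_load.sum(), rejected.mean()))
    return out  # (threshold, fraction of oospores removed, fraction of lots rejected)
```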
Hemanth, M; Deoli, Shilpi; Raghuveer, H P; Rani, M S; Hegde, Chatura; Vedavathi, B
2015-08-01
Orthodontic tooth movement is a complex procedure that occurs due to various biomechanical changes in the periodontium. Optimal orthodontic forces yield maximum tooth movement whereas if the forces fall beyond the optimal threshold it can cause deleterious effects. Among various types of tooth movements intrusion and lingual root torque are associated with causing root resoprtion, especially with the incisors. Therefore in this study, the stress patterns in the periodontal ligament (PDL) were evaluated with intrusion and lingual root torque using finite element method (FEM). A three-dimensional (3D) FEM model of the maxillary incisors was generated using SOLIDWORKS modeling software. Stresses in the PDL were evaluated with intrusive and lingual root torque movements by a 3D FEM using ANSYS software using linear stress analysis. It was observed that with the application of intrusive load compressive stresses were distributed at the apex whereas tensile stress was seen at the cervical margin. With the application of lingual root torque maximum compressive stress was distributed at the apex and tensile stress was distributed throughout the PDL. For intrusive and lingual root torque movements stress values over the PDL was within the range of optimal stress value as proposed by Lee, with a given force system by Proffit as optimum forces for orthodontic tooth movement using linear properties.
Optimal resource diffusion for suppressing disease spreading in multiplex networks
NASA Astrophysics Data System (ADS)
Chen, Xiaolong; Wang, Wei; Cai, Shimin; Stanley, H. Eugene; Braunstein, Lidia A.
2018-05-01
Resource diffusion is a ubiquitous phenomenon, but how it impacts epidemic spreading has received little study. We propose a model that couples epidemic spreading and resource diffusion in multiplex networks. The spread of disease in a physical contact layer and the recovery of the infected nodes are both strongly dependent upon resources supplied by their counterparts in the social layer. The generation and diffusion of resources in the social layer are in turn strongly dependent upon the state of the nodes in the physical contact layer. Resources diffuse preferentially or randomly in this model. To quantify the degree of preferential diffusion, a bias parameter that controls the resource diffusion is proposed. We conduct extensive simulations and find that the preferential resource diffusion can change phase transition type of the fraction of infected nodes. When the degree of interlayer correlation is below a critical value, increasing the bias parameter changes the phase transition from double continuous to single continuous. When the degree of interlayer correlation is above a critical value, the phase transition changes from multiple continuous to first discontinuous and then to hybrid. We find hysteresis loops in the phase transition. We also find that there is an optimal resource strategy at each fixed degree of interlayer correlation under which the threshold reaches a maximum and the disease can be maximally suppressed. In addition, the optimal controlling parameter increases as the degree of inter-layer correlation increases.
2016-07-02
Applications of Bessel beams: superresolution machining. The threshold effect of ablation means that the structure diameter is less than the beam diameter; fs pulses at 800 nm yield 200 nm. Approved for public release: distribution unlimited.
Quantitative somatosensory testing of the penis: optimizing the clinical neurological examination.
Bleustein, Clifford B; Eckholdt, Haftan; Arezzo, Joseph C; Melman, Arnold
2003-06-01
Quantitative somatosensory testing, including vibration, pressure, spatial perception and thermal thresholds of the penis, has demonstrated neuropathy in patients with a history of erectile dysfunction of all etiologies. We evaluated which measurement of neurological function of the penis was best at predicting erectile dysfunction and examined the impact of location on the penis for quantitative somatosensory testing measurements. A total of 107 patients were evaluated. All patients were required to complete the erectile function domain of the International Index of Erectile Function (IIEF) questionnaire, of whom 24 had no complaints of erectile dysfunction and scored within the "normal" range on the IIEF. Patients were subsequently tested on the ventral middle penile shaft, proximal dorsal midline penile shaft and glans penis (with foreskin retracted) for vibration, pressure, spatial perception, and warm and cold thermal thresholds. Mixed models repeated measures analysis of variance controlling for age, diabetes and hypertension revealed that method of measurement (quantitative somatosensory testing) was predictive of IIEF score (F = 209, df = 4,1315, p <0.001), while site of measurement on the penis was not. To determine the best method of measurement, we used hierarchical regression, which revealed that warm temperature was the best predictor of erectile dysfunction with pseudo R2 = 0.19, p <0.0007. There was no significant improvement in predicting erectile dysfunction when another test was added. Using 37°C and greater as the warm thermal threshold yielded a sensitivity of 88.5%, specificity of 70.0% and positive predictive value of 85.5%. Quantitative somatosensory testing using warm thermal threshold measurements taken at the glans penis can be used alone to assess the neurological status of the penis. Warm thermal thresholds alone offer a quick, noninvasive, accurate method of evaluating penile neuropathy in an office setting.
Nadkarni, Tanvi N; Andreoli, Matthew J; Nair, Veena A; Yin, Peng; Young, Brittany M; Kundu, Bornali; Pankratz, Joshua; Radtke, Andrew; Holdsworth, Ryan; Kuo, John S; Field, Aaron S; Baskaya, Mustafa K; Moritz, Chad H; Meyerand, M Elizabeth; Prabhakaran, Vivek
2015-01-01
Functional magnetic resonance imaging (fMRI) is a non-invasive pre-surgical tool used to assess localization and lateralization of language function in brain tumor and vascular lesion patients in order to guide neurosurgeons as they devise a surgical approach to treat these lesions. We investigated the effect of varying the statistical thresholds as well as the type of language tasks on functional activation patterns and language lateralization. We hypothesized that language lateralization indices (LIs) would be threshold- and task-dependent. Imaging data were collected from brain tumor patients (n = 67, average age 48 years) and vascular lesion patients (n = 25, average age 43 years) who received pre-operative fMRI scanning. Both patient groups performed expressive (antonym and/or letter-word generation) and receptive (tumor patients performed text-reading; vascular lesion patients performed text-listening) language tasks. A control group (n = 25, average age 45 years) performed the letter-word generation task. Brain tumor patients showed left-lateralization during the antonym-word generation and text-reading tasks at high threshold values and bilateral activation during the letter-word generation task, irrespective of the threshold values. Vascular lesion patients showed left-lateralization during the antonym and letter-word generation, and text-listening tasks at high threshold values. Our results suggest that the type of task and the applied statistical threshold influence LI and that the threshold effects on LI may be task-specific. Thus identifying critical functional regions and computing LIs should be conducted on an individual subject basis, using a continuum of threshold values with different tasks to provide the most accurate information for surgical planning to minimize post-operative language deficits.
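A common way to compute a threshold-dependent lateralization index is LI = (L - R)/(L + R), where L and R count suprathreshold voxels in left- and right-hemisphere masks; the sketch below assumes this definition, which the abstract does not state explicitly:

```python
import numpy as np

def lateralization_index(act_map, left_mask, right_mask, thresholds):
    """Compute LI = (L - R) / (L + R) over a range of statistical thresholds,
    where L and R are counts of suprathreshold voxels in the left and right
    hemisphere masks.  This is one common LI definition; the study's exact
    formula, masks, and threshold grid are not given in the abstract."""
    lis = []
    for t in thresholds:
        L = np.count_nonzero(act_map[left_mask] > t)
        R = np.count_nonzero(act_map[right_mask] > t)
        lis.append((L - R) / (L + R) if (L + R) > 0 else np.nan)
    return np.array(lis)
```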
Koyama, Kazuya; Mitsumoto, Takuya; Shiraishi, Takahiro; Tsuda, Keisuke; Nishiyama, Atsushi; Inoue, Kazumasa; Yoshikawa, Kyosan; Hatano, Kazuo; Kubota, Kazuo; Fukushi, Masahiro
2017-09-01
We aimed to determine the difference in tumor volume associated with the reconstruction model in positron-emission tomography (PET). To reduce the influence of the reconstruction model, we suggested a method to measure the tumor volume using the relative threshold method with a fixed threshold based on the peak standardized uptake value (SUVpeak). The efficacy of our method was verified using 18F-2-fluoro-2-deoxy-D-glucose PET/computed tomography images of 20 patients with lung cancer. The tumor volume was determined using the relative threshold method with a fixed threshold based on the SUVpeak. The PET data were reconstructed using the ordered-subset expectation maximization (OSEM) model, the OSEM + time-of-flight (TOF) model, and the OSEM + TOF + point-spread function (PSF) model. The volume differences associated with the reconstruction algorithm (%VD) were compared. For comparison, the tumor volume was measured using the relative threshold method based on the maximum SUV (SUVmax). For the OSEM and TOF models, the mean %VD values were -0.06 ± 8.07 and -2.04 ± 4.23% for the fixed 40% threshold according to the SUVmax and the SUVpeak, respectively. The effect of our method in this case seemed to be minor. For the OSEM and PSF models, the mean %VD values were -20.41 ± 14.47 and -13.87 ± 6.59% for the fixed 40% threshold according to the SUVmax and SUVpeak, respectively. Our new method enabled the measurement of tumor volume with a fixed threshold and reduced the influence of the changes in tumor volume associated with the reconstruction model.
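A minimal sketch of the volume measurement and the %VD comparison, assuming a 40% relative threshold and treating the choice of reference reconstruction as an open assumption:

```python
import numpy as np

def tumor_volume(suv, voxel_volume_ml, reference_suv, fraction=0.40):
    """Volume (ml) of voxels at or above fraction * reference_suv.
    reference_suv may be SUVmax or SUVpeak (the study uses SUVpeak, i.e. the
    mean SUV in a small region around the hottest voxel)."""
    return np.count_nonzero(suv >= fraction * reference_suv) * voxel_volume_ml

def percent_volume_difference(v_model_a, v_model_b):
    """%VD between two reconstructions, expressed relative to the second; the
    abstract does not state which reconstruction serves as the reference, so
    this convention is an assumption."""
    return 100.0 * (v_model_a - v_model_b) / v_model_b
```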
NASA Astrophysics Data System (ADS)
Bénichou, O.; Bhat, U.; Krapivsky, P. L.; Redner, S.
2018-02-01
We introduce the frugal foraging model in which a forager performs a discrete-time random walk on a lattice in which each site initially contains S food units. The forager metabolizes one unit of food at each step and starves to death when it last ate S steps in the past. Whenever the forager eats, it consumes all food at its current site and this site remains empty forever (no food replenishment). The crucial property of the forager is that it is frugal and eats only when encountering food within at most k steps of starvation. We compute the average lifetime analytically as a function of the frugality threshold and show that there exists an optimal strategy, namely, an optimal frugality threshold k* that maximizes the forager lifetime.
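A Monte-Carlo sketch of the model on a one-dimensional lattice (the paper's treatment is analytical and more general; the bookkeeping below is illustrative):

```python
import random

def frugal_forager_lifetime(S, k, n_steps_max=10**6):
    """Simulate one frugal forager on a 1-D lattice.  Each site starts with S
    food units; eating empties the site forever.  The forager starves when it
    last ate S steps ago, and it only eats when it is within k steps of
    starvation (the frugality threshold)."""
    food = {}                      # sites absent from the dict still hold S units
    pos, since_meal = 0, 0
    for t in range(1, n_steps_max + 1):
        pos += random.choice((-1, 1))
        since_meal += 1
        if since_meal == S:        # it last ate S steps ago: starvation
            return t
        has_food = food.get(pos, S) > 0
        close_to_starving = (S - since_meal) <= k
        if has_food and close_to_starving:
            food[pos] = 0          # consume everything; site stays empty forever
            since_meal = 0
    return n_steps_max
```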
Optimization of the design of Gas Cherenkov Detectors for ICF diagnosis
NASA Astrophysics Data System (ADS)
Liu, Bin; Hu, Huasi; Han, Hetong; Lv, Huanwen; Li, Lan
2018-07-01
A design method, which combines a genetic algorithm (GA) with Monte-Carlo simulation, is established and applied to two different types of Cherenkov detectors, namely the Gas Cherenkov Detector (GCD) and the Gamma Reaction History (GRH) detector. To accelerate the optimization program, Open MPI (Message Passing Interface) is used in the Geant4 simulations. Compared with the traditional optical ray-tracing method, the performance of these detectors has been improved with the optimization method. The efficiency of the GCD system, with a threshold of 6.3 MeV, is enhanced by ∼20% and the time response is improved by ∼7.2%. For the GRH system, with a threshold of 10 MeV, the efficiency is enhanced by ∼76% in comparison with previously published results.
Defect Detection of Steel Surfaces with Global Adaptive Percentile Thresholding of Gradient Image
NASA Astrophysics Data System (ADS)
Neogi, Nirbhar; Mohanta, Dusmanta K.; Dutta, Pranab K.
2017-12-01
Steel strips are used extensively for white goods, auto bodies and other purposes where surface defects are not acceptable. On-line surface inspection systems can effectively detect and classify defects and help in taking corrective actions. For defect detection, gradients are widely used to highlight and subsequently segment areas of interest in a surface inspection system. Most of the time, segmentation by a fixed threshold value leads to unsatisfactory results. As defects can be both very small and large in size, segmentation of a gradient image based on percentile thresholding can lead to inadequate or excessive segmentation of defective regions. A global adaptive percentile thresholding of the gradient image has been formulated for blister defects and water deposits (a pseudo defect) in steel strips. The developed method adaptively changes the percentile value used for thresholding depending on the number of pixels above specific gray-level values of the gradient image. The method is able to segment defective regions selectively, preserving the characteristics of the defects irrespective of their size. The developed method performs better than the Otsu thresholding method and an adaptive thresholding method based on local properties.
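The abstract does not give the exact adaptation rule; a plausible sketch, in which the percentile is lowered whenever many gradient pixels exceed reference gray levels (so that large defects are not truncated while small ones are not missed), is shown below. All parameter names and values are assumptions:

```python
import numpy as np
from scipy import ndimage

def adaptive_percentile_threshold(image, gray_levels=(30, 60), base_pct=99.0,
                                  pct_step=1.0, count_threshold=500):
    """Global adaptive percentile thresholding of a gradient image (sketch).
    The percentile used for thresholding is lowered by pct_step for every
    reference gray level exceeded by more than count_threshold gradient
    pixels; the exact rule in the paper may differ."""
    gx = ndimage.sobel(image.astype(float), axis=0)
    gy = ndimage.sobel(image.astype(float), axis=1)
    grad = np.hypot(gx, gy)                      # gradient magnitude
    pct = base_pct
    for level in gray_levels:
        if np.count_nonzero(grad > level) > count_threshold:
            pct -= pct_step                      # relax the percentile for large defects
    thresh = np.percentile(grad, pct)
    return grad > thresh, thresh
```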
NASA Astrophysics Data System (ADS)
Zhu, Yanli; Chen, Haiqiang
2017-05-01
In this paper, we revisit the issue of whether U.S. monetary policy is asymmetric by estimating a forward-looking threshold Taylor rule with quarterly data from 1955 to 2015. To capture potential heterogeneity in the regime-shift mechanism under different economic conditions, we modify the threshold model by treating the threshold value as a latent variable that follows an autoregressive (AR) process. We use the unemployment rate as the threshold variable and separate the sample into two regimes: expansion periods and recession periods. Our findings support the view that U.S. monetary policy operations are asymmetric across these two regimes. More precisely, the monetary authority tends to implement an active Taylor rule with a weaker response to the inflation gap (the deviation of inflation from its target) and a stronger response to the output gap (the deviation of output from its potential level) in recession periods. The threshold value, interpreted as the targeted unemployment rate of the monetary authorities, exhibits significant time-varying properties, confirming the conjecture that policy makers adjust their reference point for the unemployment rate over time to reflect their assessment of the health of the general economy.
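The abstract does not reproduce the model equations; a stylized forward-looking threshold Taylor rule with interest-rate smoothing and a latent autoregressive threshold, consistent with the description (notation and the smoothing form are assumptions), could be written as

$$
i_t = (1-\rho)\Big[r^{*}+\pi^{*}+\beta_{\pi}^{(s_t)}\big(\mathrm{E}_t\pi_{t+h}-\pi^{*}\big)+\beta_{y}^{(s_t)}\,y_t\Big]+\rho\, i_{t-1}+\varepsilon_t,
$$
$$
s_t=\begin{cases}\text{recession}, & u_t>\tau_t\\ \text{expansion}, & u_t\le\tau_t\end{cases},
\qquad
\tau_t=\mu+\phi\,\tau_{t-1}+\eta_t,
$$

where i_t is the policy rate, E_t π_{t+h} expected inflation, y_t the output gap, u_t the unemployment rate, and τ_t the latent, time-varying threshold (the targeted unemployment rate); the regime-specific coefficients β^(s) capture the asymmetry the paper reports.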
NASA Astrophysics Data System (ADS)
Lagomarsino, Daniela; Rosi, Ascanio; Rossi, Guglielmo; Segoni, Samuele; Catani, Filippo
2014-05-01
This work makes a quantitative comparison between the results of landslide forecasting obtained using two different rainfall threshold models, one using intensity-duration thresholds and the other based on cumulative rainfall thresholds, in an area of northern Tuscany of 116 km². The first methodology identifies rainfall intensity-duration thresholds by means of a software tool called MaCumBA (Massive CUMulative Brisk Analyzer) that analyzes rain-gauge records, extracts the intensities (I) and durations (D) of the rainstorms associated with the initiation of landslides, plots these values on a diagram, and identifies thresholds that define the lower bounds of the I-D values. A back analysis using data from past events can be used to identify the threshold conditions associated with the fewest false alarms. The second method (SIGMA) is based on the hypothesis that anomalous or extreme values of rainfall are responsible for landslide triggering: the statistical distribution of the rainfall series is analyzed, and multiples of the standard deviation (σ) are used as thresholds to discriminate between ordinary and extraordinary rainfall events. The name of the model, SIGMA, reflects the central role of the standard deviations in the proposed methodology. The definition of intensity-duration rainfall thresholds requires the combined use of rainfall measurements and an inventory of dated landslides, whereas the SIGMA model can be implemented using only rainfall data. These two methodologies were applied in an area of 116 km² where a database of 1200 landslides was available for the period 2000-2012. The results obtained are compared and discussed. Although several examples of visual comparisons between different intensity-duration rainfall thresholds are reported in the international literature, a quantitative comparison between thresholds obtained in the same area using different techniques and approaches remains a relatively unexplored research topic.
A critique of the use of indicator-species scores for identifying thresholds in species responses
Cuffney, Thomas F.; Qian, Song S.
2013-01-01
Identification of ecological thresholds is important for both theoretical and applied ecology. Recently, Baker and King (2010; King and Baker 2010) proposed a method, threshold indicator taxa analysis (TITAN), to calculate species and community thresholds based on indicator species scores adapted from Dufrêne and Legendre (1997). We tested the ability of TITAN to detect thresholds using models with (broken-stick, disjointed broken-stick, dose-response, step-function, Gaussian) and without (linear) definitive thresholds. TITAN accurately and consistently detected thresholds in step-function models, but not in models characterized by abrupt changes in response slopes or response direction. Threshold detection in TITAN was very sensitive to the distribution of 0 values, which caused TITAN to identify thresholds associated with relatively small differences in the distribution of 0 values while ignoring thresholds associated with large changes in abundance. Threshold identification and tests of statistical significance were based on the same data permutations, resulting in inflated estimates of statistical significance. Application of bootstrapping to the split-point problem that underlies TITAN led to underestimates of the confidence intervals of thresholds. Bias in the derivation of the z-scores used to identify TITAN thresholds and skewness in the distribution of data along the gradient produced TITAN thresholds that were much more similar to one another than the actual thresholds were. This tendency may account for the synchronicity of thresholds reported in TITAN analyses. The thresholds identified by TITAN represented disparate characteristics of species responses; this, coupled with the inability of TITAN to identify thresholds accurately and consistently, does not support the aggregation of individual species thresholds into a community threshold.
Salicylate-induced changes in auditory thresholds of adolescent and adult rats.
Brennan, J F; Brown, C A; Jastreboff, P J
1996-01-01
Shifts in auditory intensity thresholds after salicylate administration were examined in postweanling and adult pigmented rats at frequencies ranging from 1 to 35 kHz. A total of 132 subjects from both age levels were tested under two-way active avoidance or one-way active avoidance paradigms. Thresholds were inferred from behavioral responses to descending and ascending series of intensities at each test frequency. Reliable threshold estimates were obtained under both avoidance conditioning methods. Compared with controls, subjects at both age levels showed threshold shifts at selected higher frequencies after salicylate injection, and the extent of the shifts was related to the salicylate dose level.
Branion-Calles, Michael C; Nelson, Trisalyn A; Henderson, Sarah B
2015-11-19
There is no safe concentration of radon gas, but guideline values provide threshold concentrations that are used to map areas at higher risk. These values vary between different regions, countries, and organizations, which can lead to differential classification of risk. For example, the World Health Organization suggests a 100 Bq m⁻³ value, while Health Canada recommends 200 Bq m⁻³. Our objective was to describe how different thresholds characterized ecological radon risk and their visual association with lung cancer mortality trends in British Columbia, Canada. Eight threshold values between 50 and 600 Bq m⁻³ were identified, and classes of radon vulnerability were defined based on whether the observed 95th percentile radon concentration was above or below each value. A balanced random forest algorithm was used to model vulnerability, and the results were mapped. We compared high-vulnerability areas, their estimated populations, and differences in lung cancer mortality trends stratified by smoking prevalence and sex. Classification accuracy improved as the threshold concentrations decreased and the area classified as high vulnerability increased. The majority of the population lived within areas of lower vulnerability regardless of the threshold value. Thresholds as low as 50 Bq m⁻³ were associated with higher lung cancer mortality, even in areas with low smoking prevalence. Temporal trends in lung cancer mortality were increasing for women, while decreasing for men. Radon contributes to lung cancer in British Columbia. The results of the study contribute evidence supporting the use of a reference level lower than the current guideline of 200 Bq m⁻³ for the province.
Chang, Shuhua; Wang, Xinyu; Wang, Zheng
2015-01-01
Transboundary industrial pollution requires international actions to control its formation and effects. In this paper, we present a stochastic differential game to model the transboundary industrial pollution problems with emission permits trading. More generally, the process of emission permits price is assumed to be stochastic and to follow a geometric Brownian motion (GBM). We make use of stochastic optimal control theory to derive the system of Hamilton-Jacobi-Bellman (HJB) equations satisfied by the value functions for the cooperative and the noncooperative games, respectively, and then propose a so-called fitted finite volume method to solve it. The efficiency and the usefulness of this method are illustrated by the numerical experiments. The two regions' cooperative and noncooperative optimal emission paths, which maximize the regions' discounted streams of the net revenues, together with the value functions, are obtained. Additionally, we can also obtain the threshold conditions for the two regions to decide whether they cooperate or not in different cases. The effects of parameters in the established model on the results have been also examined. All the results demonstrate that the stochastic emission permits prices can motivate the players to make more flexible strategic decisions in the games.
Prognostic value of inflammation-based scores in patients with osteosarcoma
Liu, Bangjian; Huang, Yujing; Sun, Yuanjue; Zhang, Jianjun; Yao, Yang; Shen, Zan; Xiang, Dongxi; He, Aina
2016-01-01
Systemic inflammation responses have been associated with cancer development and progression. C-reactive protein (CRP), Glasgow prognostic score (GPS), neutrophil-lymphocyte ratio (NLR), platelet-lymphocyte ratio (PLR), lymphocyte-monocyte ratio (LMR), and neutrophil-platelet score (NPS) have been shown to be independent risk factors in various types of malignant tumors. This retrospective analysis of 162 osteosarcoma cases was performed to estimate the prognostic value of these scores for survival in osteosarcoma. All statistical analyses were performed with SPSS statistical software. Receiver operating characteristic (ROC) analysis was used to set optimal thresholds; the area under the curve (AUC) was used to show the discriminatory abilities of the inflammation-based scores; Kaplan-Meier analysis was performed to plot the survival curves; Cox regression models were employed to determine the independent prognostic factors. The optimal cut-off points of NLR, PLR, and LMR were 2.57, 123.5 and 4.73, respectively. GPS and NLR had a markedly larger AUC than CRP, PLR and LMR. High levels of CRP, GPS, NLR, PLR, and a low level of LMR were significantly associated with adverse prognosis (P < 0.05). Multivariate Cox regression analyses revealed that GPS, NLR, and occurrence of metastasis were the top risk factors associated with death in osteosarcoma patients. PMID:28008988
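The abstract does not state the criterion used to choose the cut-off points; the Youden index (sensitivity + specificity - 1) is a common choice and is used in this sketch with hypothetical inputs:

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

def optimal_cutoff(event, score):
    """Find a cut-off for an inflammation score (e.g. NLR) from ROC analysis,
    using the Youden index as the selection criterion (an assumption; the
    paper may have used a different rule)."""
    fpr, tpr, thresholds = roc_curve(event, score)
    j = tpr - fpr                       # Youden's J at each candidate threshold
    return thresholds[np.argmax(j)], roc_auc_score(event, score)

# usage sketch with hypothetical data:
# cutoff, auc = optimal_cutoff(died, nlr)   # e.g. a cutoff near 2.57 as in the study
```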
The effect of decentralized behavioral decision making on system-level risk.
Kaivanto, Kim
2014-12-01
Certain classes of system-level risk depend partly on decentralized lay decision making. For instance, an organization's network security risk depends partly on its employees' responses to phishing attacks. On a larger scale, the risk within a financial system depends partly on households' responses to mortgage sales pitches. Behavioral economics shows that lay decisionmakers typically depart in systematic ways from the normative rationality of expected utility (EU), and instead display heuristics and biases as captured in the more descriptively accurate prospect theory (PT). In turn, psychological studies show that successful deception ploys eschew direct logical argumentation and instead employ peripheral-route persuasion, manipulation of visceral emotions, urgency, and familiar contextual cues. The detection of phishing emails and inappropriate mortgage contracts may be framed as a binary classification task. Signal detection theory (SDT) offers the standard normative solution, formulated as an optimal cutoff threshold, for distinguishing between good/bad emails or mortgages. In this article, we extend SDT behaviorally by rederiving the optimal cutoff threshold under PT. Furthermore, we incorporate the psychology of deception into determination of SDT's discriminability parameter. With the neo-additive probability weighting function, the optimal cutoff threshold under PT is rendered unique under well-behaved sampling distributions, tractable in computation, and transparent in interpretation. The PT-based cutoff threshold is (i) independent of loss aversion and (ii) more conservative than the classical SDT cutoff threshold. Independently of any possible misalignment between individual-level and system-level misclassification costs, decentralized behavioral decisionmakers are biased toward underdetection, and system-level risk is consequently greater than in analyses predicated upon normative rationality. © 2014 Society for Risk Analysis.
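For reference, the classical expected-value SDT solution that the article generalizes places the cutoff x* where the likelihood ratio equals a prior-and-payoff term (standard result; notation assumed here):

$$
\frac{f_{S}(x^{*})}{f_{N}(x^{*})}
=\frac{P(N)\,\bigl(C_{\mathrm{FA}}-C_{\mathrm{CR}}\bigr)}
      {P(S)\,\bigl(C_{\mathrm{Miss}}-C_{\mathrm{Hit}}\bigr)},
$$

where f_S and f_N are the score densities of bad and good items, P(S) and P(N) their prior probabilities, and the C's the costs of the four outcomes; items whose likelihood ratio exceeds the right-hand side are classified as bad. The article rederives this cutoff after replacing the expected-utility objective with a prospect-theory value function and a neo-additive probability weighting function, which, per the abstract, yields a more conservative cutoff and hence under-detection.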
NASA Technical Reports Server (NTRS)
Moore, E. N.; Altick, P. L.
1972-01-01
The research performed is briefly reviewed. A simple method was developed for the calculation of continuum states of atoms when autoionization is present. The method was employed to give the first theoretical cross section for beryllium and magnesium; the results indicate that the values used previously at threshold were sometimes seriously in error. These threshold values have potential applications in astrophysical abundance estimates.
Grant, Wally; Curthoys, Ian
2017-09-01
Vestibular otolithic organs are recognized as transducers of head acceleration, and they function as such up to their corner frequency, or undamped natural frequency. It is also well recognized that these organs respond to frequencies above their corner frequency, up to the 2-3 kHz range (Curthoys et al., 2016). A mechanics model for the transduction of these organs is developed that predicts the response below the undamped natural frequency as an accelerometer and above that frequency as a seismometer. The model is converted to a transfer function using hair cell bundle deflection. Measured threshold acceleration stimuli, together with threshold bundle deflections, are used to obtain threshold transfer function values. These are compared to model-predicted values both below and above the undamped natural frequency. Threshold deflection values are adjusted to match the model transfer function, and the resulting values remain well within measured threshold bundle deflection ranges. Vestibular evoked myogenic potential (VEMP) testing today routinely uses stimulus frequencies of 500 and 1000 Hz, and the otoliths have been established incontrovertibly by clinical and neural evidence as the stimulus source. A mechanism for stimulation at these frequencies, above the undamped natural frequency of the otoliths, is presented in which the otoliths operate in a seismometer mode of response for VEMP transduction. Copyright © 2017 Elsevier B.V. All rights reserved.
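A minimal single-degree-of-freedom version of such a mechanics model (the paper's model additionally maps relative displacement to hair-bundle deflection; notation here is assumed) is

$$
\ddot{x}+2\zeta\omega_{n}\dot{x}+\omega_{n}^{2}x=a(t),
\qquad
\frac{X(s)}{A(s)}=\frac{1}{s^{2}+2\zeta\omega_{n}s+\omega_{n}^{2}},
$$

so that for ω much below ω_n the relative displacement of the otoconial layer tracks head acceleration (|X| ≈ |A|/ω_n², accelerometer regime), while for ω much above ω_n it tracks head displacement (|X| ≈ |A|/ω², seismometer regime).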
LEDs on the threshold for use in projection systems: challenges, limitations and applications
NASA Astrophysics Data System (ADS)
Moffat, Bryce Anton
2006-02-01
The use of coloured LEDs as light sources in digital projectors depends on an optimal combination of optical, electrical and thermal parameters to meet the performance and cost targets needed to enable these products to compete in the marketplace. This paper describes the system design methodology for a digital micromirror display (DMD) based optical engine using LEDs as the light source, starting at the basic physical and geometrical parameters of the DMD and other optical elements through characterization of the LEDs to optimizing the system performance by determining optimal driving conditions. The main challenge in using LEDs is the luminous flux density, which is just at the threshold of acceptance in projection systems and thus only a fully optimized optical system with a uniformly bright set of LEDs can be used. As a result of this work we have developed two applications: a compact pocket projector and a rear projection television.
Gharipour, Mojgan; Sadeghi, Masoumeh; Dianatkhah, Minoo; Nezafati, Pouya; Talaie, Mohammad; Oveisgharan, Shahram; Golshahi, Jafar
2016-01-01
High triglyceride (TG) and low high-density lipoprotein cholesterol (HDL-C) are important cardiovascular risk factors. The exact prognostic value of the TG/HDL-C ratio, a marker for cardiovascular events, is currently unknown among Iranians, so this study sought to determine the optimal cutoff point for the TG/HDL-C ratio in predicting cardiovascular disease events in the Iranian population. The Isfahan Cohort Study (ICS) is an ongoing, longitudinal, population-based study that was originally conducted on adults aged ≥ 35 years, living in urban and rural areas of three districts in central Iran. After 10 years of follow-up, 5431 participants were re-evaluated using a standard protocol similar to the one used at baseline. At both measurements, participants underwent medical interviews, physical examinations, and fasting blood measurements. "High-risk" subjects were defined by the discriminatory power of the indices, which was assessed using receiver operating characteristic (ROC) analysis; the optimal cutoff point value for each index was then derived. The mean age of the participants was 50.7 ± 11.6 years. The TG/HDL-C ratio, at a threshold of 3.68, was used to screen for cardiovascular events among the study population. Subjects were divided into two groups ("low" and "high" risk) according to the TG/HDL-C concentration ratio at baseline. A slightly higher proportion of high-risk individuals was identified using the European cutoff points (63.7%) than with the ICS cutoff points (49.5%). The unadjusted hazard ratio (HR) was greatest in high-risk individuals identified by the ICS cutoff points (HR = 1.54, 95% CI [1.33-1.79]) vs the European cutoff points (HR = 1.38, 95% CI [1.17-1.63]). There were no remarkable changes after adjusting for differences in sex and age (HR = 1.58, 95% CI [1.36-1.84] vs HR = 1.44, 95% CI [1.22-1.71]) for the ICS and European cutoff points, respectively. The threshold of TG/HDL ≥ 3.68 is the optimal cutoff point for predicting cardiovascular events in Iranian individuals. Copyright © 2016 National Lipid Association. Published by Elsevier Inc. All rights reserved.
Threshold-based insulin-pump interruption for reduction of hypoglycemia.
Bergenstal, Richard M; Klonoff, David C; Garg, Satish K; Bode, Bruce W; Meredith, Melissa; Slover, Robert H; Ahmann, Andrew J; Welsh, John B; Lee, Scott W; Kaufman, Francine R
2013-07-18
The threshold-suspend feature of sensor-augmented insulin pumps is designed to minimize the risk of hypoglycemia by interrupting insulin delivery at a preset sensor glucose value. We evaluated sensor-augmented insulin-pump therapy with and without the threshold-suspend feature in patients with nocturnal hypoglycemia. We randomly assigned patients with type 1 diabetes and documented nocturnal hypoglycemia to receive sensor-augmented insulin-pump therapy with or without the threshold-suspend feature for 3 months. The primary safety outcome was the change in the glycated hemoglobin level. The primary efficacy outcome was the area under the curve (AUC) for nocturnal hypoglycemic events. Two-hour threshold-suspend events were analyzed with respect to subsequent sensor glucose values. A total of 247 patients were randomly assigned to receive sensor-augmented insulin-pump therapy with the threshold-suspend feature (threshold-suspend group, 121 patients) or standard sensor-augmented insulin-pump therapy (control group, 126 patients). The changes in glycated hemoglobin values were similar in the two groups. The mean AUC for nocturnal hypoglycemic events was 37.5% lower in the threshold-suspend group than in the control group (980 ± 1200 mg per deciliter [54.4 ± 66.6 mmol per liter] × minutes vs. 1568 ± 1995 mg per deciliter [87.0 ± 110.7 mmol per liter] × minutes, P<0.001). Nocturnal hypoglycemic events occurred 31.8% less frequently in the threshold-suspend group than in the control group (1.5 ± 1.0 vs. 2.2 ± 1.3 per patient-week, P<0.001). The percentages of nocturnal sensor glucose values of less than 50 mg per deciliter (2.8 mmol per liter), 50 to less than 60 mg per deciliter (3.3 mmol per liter), and 60 to less than 70 mg per deciliter (3.9 mmol per liter) were significantly reduced in the threshold-suspend group (P<0.001 for each range). After 1438 instances at night in which the pump was stopped for 2 hours, the mean sensor glucose value was 92.6 ± 40.7 mg per deciliter (5.1 ± 2.3 mmol per liter). Four patients (all in the control group) had a severe hypoglycemic event; no patients had diabetic ketoacidosis. This study showed that over a 3-month period the use of sensor-augmented insulin-pump therapy with the threshold-suspend feature reduced nocturnal hypoglycemia, without increasing glycated hemoglobin values. (Funded by Medtronic MiniMed; ASPIRE ClinicalTrials.gov number, NCT01497938.).
Threshold matrix for digital halftoning by genetic algorithm optimization
NASA Astrophysics Data System (ADS)
Alander, Jarmo T.; Mantere, Timo J.; Pyylampi, Tero
1998-10-01
Digital halftoning is used in both low- and high-resolution high-quality printing technologies. Our method is designed mainly for low-resolution ink-jet marking machines to produce both gray-tone and color images. The main problem with digital halftoning is pink noise caused by the human eye's visual transfer function. To compensate for this, the random dot patterns used are optimized to contain more blue noise than pink noise. Several such dot-pattern-generator threshold matrices have been created automatically by using genetic algorithm optimization, a non-deterministic global optimization method imitating natural evolution and genetics. A hybrid of a genetic algorithm and a search method based on local backtracking was developed, together with several fitness functions evaluating dot patterns for rectangular grids. By modifying the fitness function, a family of dot generators results, each with its particular statistical features. Several versions of genetic algorithms, backtracking and fitness functions were tested to find a reasonable combination. The generated threshold matrices have been tested by simulating a set of test images using the Khoros image processing system. Even though the work was focused on developing low-resolution marking technology, the resulting family of dot generators can also be applied in other halftoning areas, including high-resolution printing technology.
Digital audio watermarking using moment-preserving thresholding
NASA Astrophysics Data System (ADS)
Choi, DooSeop; Jung, Hae Kyung; Choi, Hyuk; Kim, Taejeong
2007-09-01
The moment-preserving thresholding (MPT) technique for digital images has been used in digital image processing for decades, especially for image binarization and image compression. Its main strength is that the binary values it produces, called representative values, are usually unaffected when the thresholded signal goes through a signal processing operation. The two representative values and the threshold value are obtained by solving the system of preservation equations for the first, second, and third moments. Relying on this robustness of the representative values to the various signal processing attacks considered in the watermarking context, this paper proposes a new watermarking scheme for audio signals. The watermark is embedded in the root-sum-square (RSS) of the two representative values of each signal block using a quantization technique. As a result, the RSS values are modified by scaling the signal according to the watermark bit sequence, under the constraint of inaudibility relative to the human psycho-acoustic model. We also address, and suggest solutions to, the problems of synchronization and power-scaling attacks. Experimental results show that the proposed scheme maintains high audio quality and robustness to various attacks including MP3 compression, re-sampling, jittering, and DA/AD conversion.
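The moment-preserving computation itself is standard (Tsai-style); a sketch for one signal block is given below. The audio-specific RSS quantization and the psycho-acoustic constraint of the proposed scheme are not reproduced here:

```python
import numpy as np

def moment_preserving_threshold(block):
    """Moment-preserving thresholding of one signal block: find two
    representative values z0 < z1 and a threshold that preserve the first
    three sample moments of the block."""
    x = np.asarray(block, dtype=float).ravel()
    m1, m2, m3 = x.mean(), (x**2).mean(), (x**3).mean()
    # z0, z1 are the roots of z^2 + c1*z + c0 = 0, where the coefficients
    # satisfy the moment relations m2 + c1*m1 + c0 = 0 and m3 + c1*m2 + c0*m1 = 0.
    c0, c1 = np.linalg.solve([[1.0, m1], [m1, m2]], [-m2, -m3])
    disc = np.sqrt(max(c1**2 - 4.0 * c0, 0.0))
    z0, z1 = (-c1 - disc) / 2.0, (-c1 + disc) / 2.0
    p0 = (z1 - m1) / (z1 - z0)          # fraction of samples assigned to z0
    threshold = np.quantile(x, p0)      # p0-tile of the block
    return z0, z1, threshold
```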
NASA Astrophysics Data System (ADS)
Zhong, Keyuan; Zheng, Fenli; Xu, Ximeng; Qin, Chao
2018-06-01
Different precipitation phases (rain, snow or sleet) differ greatly in their hydrological and erosional processes. Therefore, accurate discrimination of the precipitation phase is highly important when researching hydrologic processes and climate change at high latitudes and in mountainous regions. The objective of this study was to identify suitable temperature thresholds for discriminating the precipitation phase in the Songhua River Basin (SRB) based on 20 years of daily precipitation data collected from 60 meteorological stations located in and around the basin. Two methods, the air temperature method (AT method) and the wet bulb temperature method (WBT method), were used to discriminate the precipitation phase. Thirteen temperature thresholds were used to discriminate snowfall in the SRB. These thresholds included air temperatures from 0 to 5.5 °C at intervals of 0.5 °C and the wet bulb temperature (WBT). Three evaluation indices, the error percentage of discriminated snowfall days (Ep), the relative error of discriminated snowfall (Re) and the determination coefficient (R²), were applied to assess the discrimination accuracy. The results showed that 2.5 °C was the optimum threshold temperature for discriminating snowfall at the scale of the entire basin. Due to differences in the landscape conditions at the different stations, the optimum threshold varied by station. The optimal threshold ranged from 1.5 to 4.0 °C; 19, 17 and 18 stations had optimal thresholds of 2.5 °C, 3.0 °C and 3.5 °C, respectively, together accounting for 90% of all stations. Compared with using a single suitable temperature threshold to discriminate snowfall throughout the basin, it was more accurate to use the optimum threshold at each station to estimate snowfall in the basin. In addition, snowfall was underestimated when the temperature threshold was the WBT or was below 2.5 °C, whereas snowfall was overestimated when the temperature threshold exceeded 4.0 °C at most stations. The results of this study provide information for climate change research and hydrological process simulations in the SRB, as well as reference information for discriminating the precipitation phase in other regions.
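A sketch of how a single temperature threshold could be scored against observations; the formulas for Ep and Re below are plausible readings of the abstract's indices, not the paper's exact definitions:

```python
import numpy as np

def snow_discrimination_errors(temp, precip, is_snow_obs, threshold):
    """Evaluate one temperature threshold for snowfall discrimination.
    Days with temperature <= threshold are classified as snow days; Ep is the
    percentage of misclassified days and Re the relative error of the total
    discriminated snowfall (both illustrative definitions)."""
    pred_snow = temp <= threshold
    ep = 100.0 * np.mean(pred_snow != is_snow_obs)
    snow_pred = precip[pred_snow].sum()
    snow_obs = precip[is_snow_obs].sum()
    re = 100.0 * (snow_pred - snow_obs) / snow_obs
    return ep, re
```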
Guan, Yue; Shi, Hua; Chen, Ying; Liu, Song; Li, Weifeng; Jiang, Zhuoran; Wang, Huanhuan; He, Jian; Zhou, Zhengyang; Ge, Yun
2016-01-01
The aim of this study was to explore the application of whole-lesion histogram analysis of apparent diffusion coefficient (ADC) values of cervical cancer. A total of 54 women (mean age, 53 years) with cervical cancer prospectively underwent 3-T diffusion-weighted imaging with b values of 0 and 800 s/mm². Whole-lesion histogram analysis of ADC values was performed. A paired-sample t test was used to compare differences in ADC histogram parameters between cervical cancers and normal cervical tissues. Receiver operating characteristic curves were constructed to identify the optimal threshold of each parameter. All histogram parameters in this study, including ADCmean, ADCmin, ADC10%-ADC90%, mode, skewness, and kurtosis, were significantly lower in cervical cancers than in normal cervical tissues (all P < 0.0001). ADC90% had the largest area under the receiver operating characteristic curve (0.996). Whole-lesion histogram analysis of ADC maps is useful in the assessment of cervical cancer.
Yin, X X; Ng, B W-H; Ramamohanarao, K; Baghai-Wadji, A; Abbott, D
2012-09-01
It has been shown that magnetic resonance images (MRIs) with a sparse representation in a transformed domain, e.g. spatial finite differences (FD) or the discrete cosine transform (DCT), can be restored from undersampled k-space by applying current compressive sampling theory. The paper presents a model-based method for the restoration of MRIs. The reduced-order model, in which a full system response is projected onto a subspace of lower dimensionality, has been used to accelerate image reconstruction by reducing the size of the involved linear system. In this paper, the singular value threshold (SVT) technique is applied as a denoising scheme to reduce and select the model order of the inverse Fourier transform image, and to restore multi-slice breast MRIs that have been compressively sampled in k-space. The restored MRIs with SVT for denoising show reduced sampling errors compared to direct MRI restoration methods via spatial FD or DCT. Compressive sampling is a technique for finding sparse solutions to underdetermined linear systems. The sparsity that is implicit in MRIs is exploited to find the solution to MRI reconstruction, after transformation, from significantly undersampled k-space. The challenge, however, is that the incoherent artifacts resulting from the random undersampling add noise-like interference to the sparsely represented image. The recovery algorithms in the literature are not capable of fully removing these artifacts. It is necessary to introduce a denoising procedure to improve the quality of image recovery. This paper applies a singular value threshold algorithm to reduce the model order of the image basis functions, which allows further improvement of the quality of image reconstruction with removal of noise artifacts. The principle of the denoising scheme is to reconstruct the sparse MRI matrices optimally with a lower rank by selecting a smaller number of dominant singular values. The singular value threshold algorithm is performed by minimizing the nuclear norm of the difference between the sampled image and the recovered image. It has been illustrated that this algorithm improves the ability of previous image reconstruction algorithms to remove noise artifacts while significantly improving the quality of MRI recovery.
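The core singular value thresholding step, soft-thresholding of the singular values (the proximal operator of the nuclear norm), can be sketched as:

```python
import numpy as np

def singular_value_threshold(X, tau):
    """Soft-threshold the singular values of X by tau.  Shrinking small
    singular values to zero lowers the rank (model order) of the
    reconstructed image matrix and suppresses noise-like aliasing artifacts."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)
    return (U * s_shrunk) @ Vt
```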
A new function for estimating local rainfall thresholds for landslide triggering
NASA Astrophysics Data System (ADS)
Cepeda, J.; Nadim, F.; Høeg, K.; Elverhøi, A.
2009-04-01
The widely used power law for establishing rainfall thresholds for the triggering of landslides was first proposed by N. Caine in 1980. The most up-to-date global thresholds, presented by F. Guzzetti and co-workers in 2008, were derived using Caine's power law and a rigorous and comprehensive collection of global data. Caine's function is defined as I = α×D^β, where I and D are the mean intensity and total duration of rainfall, and α and β are parameters estimated for a lower boundary curve to most or all of the positive observations (i.e., landslide-triggering rainfall events). This function does not account for the effect of antecedent precipitation as a conditioning factor for slope instability, an approach that may be adequate for global or regional thresholds that include landslides in surface geologies with a wide range of subsurface drainage conditions and pore-pressure responses to sustained rainfall. However, at a local scale and in geological settings dominated by a narrow range of drainage conditions and pore-pressure responses, the inclusion of antecedent precipitation in the definition of thresholds becomes necessary in order to ensure their optimum performance, especially when they are used as part of early warning systems (i.e., false alarms and missed events must be kept to a minimum). Some authors have incorporated the effect of antecedent rainfall in a discrete manner by first comparing the accumulated precipitation during a specified number of days against a reference value and then using a Caine-type threshold only when that reference value is exceeded. Other authors have instead calculated threshold values as linear combinations of several triggering and antecedent parameters. The present study is aimed at proposing a new threshold function based on a generalisation of Caine's power law. The proposed function has the form I = (α1×An^α2)×D^β, where I and D are defined as previously. The expression in parentheses is equivalent to Caine's α parameter; α1, α2 and β are parameters estimated for the threshold, and An is the n-day cumulative antecedent rainfall. The suggested procedure to estimate the threshold is as follows: (1) Given N storms, assign one of the following flags to each storm: nL (non-triggering storms), yL (triggering storms), uL (uncertain-triggering storms). Successful predictions correspond to nL and yL storms occurring below and above the threshold, respectively. Storms flagged as uL are assigned either an nL or a yL flag using a randomization procedure. (2) Establish a set of values of ni (e.g. 1, 4, 7, 10, 15 days, etc.) to test for accumulated precipitation. (3) For each storm and each ni value, obtain the antecedent accumulated precipitation in ni days, Ani. (4) Generate a 3D grid of values of α1, α2 and β. (5) For a given value of ni, generate confusion matrices for the N storms at each grid point and estimate an evaluation metrics parameter EMP (e.g., accuracy, specificity, etc.). (6) Repeat the previous step for all values of ni. (7) From the 3D grid corresponding to each ni value, search for the optimum grid point EMPopti (global minimum or maximum). (8) Search for the optimum value of ni in the space ni vs EMPopti. (9) The threshold is defined by the value of ni obtained in the previous step and the corresponding values of α1, α2 and β.
The procedure is illustrated using rainfall data and landslide observations from the San Salvador volcano, where a rainfall-triggered debris flow destroyed a neighbourhood in the capital city of El Salvador on 19 September 1982, killing at least 300 people.
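A sketch of the grid-search steps (4)-(7) for one antecedent window n, using accuracy as the evaluation metrics parameter (the abstract leaves the choice of EMP open):

```python
import numpy as np

def calibrate_threshold(I, D, An, triggered, alpha1_grid, alpha2_grid, beta_grid):
    """Grid-search calibration of the threshold I = alpha1 * An**alpha2 * D**beta
    for one antecedent window.  I, D, An are arrays of storm mean intensity,
    duration, and n-day antecedent rainfall; triggered is a boolean array
    (yL vs nL storms).  Returns the best accuracy and parameter triple."""
    best = (-np.inf, None)
    for a1 in alpha1_grid:
        for a2 in alpha2_grid:
            for b in beta_grid:
                threshold_I = a1 * An**a2 * D**b
                predicted = I > threshold_I      # storm lies above the threshold
                accuracy = np.mean(predicted == triggered)
                if accuracy > best[0]:
                    best = (accuracy, (a1, a2, b))
    return best
```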
Guo, Jun; Yao, Chengjun; Chen, Hong; Zhuang, Dongxiao; Tang, Weijun; Ren, Guang; Wang, Yin; Wu, Jinsong; Huang, Fengping; Zhou, Liangfu
2012-08-01
The marginal delineation of gliomas cannot be defined by conventional imaging due to their infiltrative growth pattern. Here we investigate the relationship between changes in glioma metabolism by proton magnetic resonance spectroscopic imaging ((1)H-MRSI) and histopathological findings in order to determine an optimal threshold value of choline/N-acetyl-aspartate (Cho/NAA) that can be used to define the extent of glioma spread. Eighteen patients with different grades of glioma were examined using (1)H-MRSI. Needle biopsies were performed under the guidance of neuronavigation prior to craniotomy. Intraoperative magnetic resonance imaging (MRI) was performed to evaluate the accuracy of sampling. Haematoxylin and eosin, and immunohistochemical staining with IDH1, MIB-1, p53, CD34 and glial fibrillary acidic protein (GFAP) antibodies were performed on all samples. Logistic regression analysis was used to determine the relationship between Cho/NAA and MIB-1, p53, CD34, and the degree of tumour infiltration. The clinical threshold ratio distinguishing tumour tissue in high-grade (grades III and IV) glioma (HGG) and low-grade (grade II) glioma (LGG) was calculated. In HGG, higher Cho/NAA ratios were associated with a greater probability of higher MIB-1 counts, stronger CD34 expression, and tumour infiltration. Ratio threshold values of 0.5, 1.0, 1.5 and 2.0 appeared to predict the specimens containing the tumour with respective probabilities of 0.38, 0.60, 0.79, 0.90 in HGG and 0.16, 0.39, 0.67, 0.87 in LGG. HGG and LGG exhibit different spectroscopic patterns. Using (1)H-MRSI to guide the extent of resection has the potential to improve the clinical outcome of glioma surgery.
Sarikaya, B; Lohman, B; Mckinney, A M; Gadani, S; Irfan, M; Lucato, L
2012-01-01
Objectives Previous evidence supports a direct relationship between the calcium burden (volume) on post-contrast CT and the percent internal carotid artery (ICA) stenosis at the carotid bifurcation. We sought to further investigate this relationship by comparing non-enhanced CT (NECT) and digital subtraction angiography (DSA). Methods 50 patients (aged 41–82 years) were retrospectively identified who had undergone cervical NECT and DSA. A 64-multidetector array CT (MDCT) scanner was utilised and the images reviewed using preset window widths/levels (30/300) optimised to calcium, with the volumes measured via three-dimensional reconstructive software. Stenosis measurements were performed on DSA and luminal diameter stenoses >40% were considered "significant". Volume thresholds of 0.01, 0.03, 0.06, 0.09 and 0.12 cm³ were utilised and Pearson's correlation coefficient (r) was calculated to correlate the calcium volume with percent stenosis. Results Of 100 carotid bifurcations, 88 were available and of these 7 were significantly stenotic. The NECT calcium volume moderately correlated with percent stenosis on DSA (r=0.53, p<0.01). A moderate–strong correlation was found between the square root of the calcium volume on NECT and percent stenosis on DSA (r=0.60, p<0.01). Via a receiver operating characteristic curve, 0.06 cm³ was determined to be the best threshold (sensitivity 100%, specificity 90.1%, negative predictive value 100% and positive predictive value 46.7%) for detecting significant stenoses. Conclusion This preliminary investigation confirms a correlation between carotid bifurcation calcium volume and percent ICA stenosis and is promising for the optimal threshold for stenosis detection. Future studies could utilise calcium volumes to create a "score" that could predict high grade stenosis. PMID:21896662
Missing value imputation strategies for metabolomics data.
Armitage, Emily Grace; Godzien, Joanna; Alonso-Herranz, Vanesa; López-Gonzálvez, Ángeles; Barbas, Coral
2015-12-01
Missing values can arise for different reasons, and depending on these origins they should be considered and dealt with in different ways. In this research, four methods of imputation have been compared with respect to their effects on the normality and variance of the data, on statistical significance, and on the approximation of a suitable threshold for accepting missing data as truly missing. Additionally, the effects of different strategies for controlling the familywise error rate or false discovery rate, and how they interact with the different strategies for missing value imputation, have been evaluated. Missing values were found to affect the normality and variance of the data, and k-means nearest neighbour imputation was the best of the tested methods for restoring these properties. Bonferroni correction was the best method for maximizing true positives and minimizing false positives, and it was observed that as low as 40% missing data could be truly missing. The range between 40 and 70% missing values was defined as a "gray area", and therefore a strategy has been proposed that provides a balance between the optimal imputation strategy (k-means nearest neighbour) and the best approximation for positioning real zeros. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Dependence of Interfacial Excess on the Threshold Value of the Isoconcentration Surface
NASA Technical Reports Server (NTRS)
Yoon, Kevin E.; Noebe, Ronald D.; Hellman, Olof C.; Seidman, David N.
2004-01-01
The proximity histogram (or proxigram for short) is used for analyzing data collected by a three-dimensional atom probe microscope. The interfacial excess of Re (2.41 ± 0.68 atoms/nm²) is calculated by employing a proxigram in a completely geometrically independent way for gamma/gamma' interfaces in Rene N6, a third-generation single-crystal Ni-based superalloy. A possible dependence of interfacial excess on the variation of the threshold value of an isoconcentration surface is investigated using the data collected for the Rene N6 alloy. It is demonstrated that the dependence of the interfacial excess value on the threshold value of the isoconcentration surface is weak.
Designing a Pediatric Severe Sepsis Screening Tool
Sepanski, Robert J.; Godambe, Sandip A.; Mangum, Christopher D.; Bovat, Christine S.; Zaritsky, Arno L.; Shah, Samir H.
2014-01-01
We sought to create a screening tool with improved predictive value for pediatric severe sepsis (SS) and septic shock that can be incorporated into the electronic medical record and actively screen all patients arriving at a pediatric emergency department (ED). “Gold standard” SS cases were identified using a combination of coded discharge diagnosis and physician chart review from 7,402 children who visited a pediatric ED over 2 months. The tool’s identification of SS was initially based on International Consensus Conference on Pediatric Sepsis (ICCPS) parameters that were refined by an iterative, virtual process that allowed us to propose successive changes in sepsis detection parameters in order to optimize the tool’s predictive value based on receiver operating characteristics (ROC). Age-specific normal and abnormal values for heart rate (HR) and respiratory rate (RR) were empirically derived from 143,603 children seen in a second pediatric ED over 3 years. Univariate analyses were performed for each measure in the tool to assess its association with SS and to characterize it as an “early” or “late” indicator of SS. A split-sample was used to validate the final, optimized tool. The final tool incorporated age-specific thresholds for abnormal HR and RR and employed a linear temperature correction for each category. The final tool’s positive predictive value was 48.7%, a significant, nearly threefold improvement over the original ICCPS tool. False positive systemic inflammatory response syndrome identifications were nearly sixfold lower. PMID:24982852
Vernon, John A; Goldberg, Robert; Golec, Joseph
2009-01-01
In this article we describe how reimbursement cost-effectiveness thresholds, per unit of health benefit, whether set explicitly or observed implicitly via historical reimbursement decisions, serve as a signal to firms about the commercial viability of their R&D projects (including candidate products for in-licensing). Traditional finance methods for R&D project valuation, such as net present value (NPV) analysis, incorporate information from these payer reimbursement signals to help determine which R&D projects should be continued and which should be terminated (in the case of the latter, because they yield an NPV < 0). Because the influence these signals have on firm R&D investment decisions is so significant, we argue that it is important for reimbursement thresholds to reflect the economic value of the unit of health benefit being considered for reimbursement. Thresholds set too low (below the economic value of the health benefit) will result in R&D investment levels that are too low relative to the economic value of R&D (on the margin). Similarly, thresholds set too high (above the economic value of the health benefit) will result in inefficiently high levels of R&D spending. The US in particular, which represents approximately half of the global pharmaceutical market (based on sales), and which seems poised to begin undertaking cost-effectiveness analysis in a systematic way, needs to exert caution in setting policies that explicitly or implicitly establish cost-effectiveness reimbursement thresholds for healthcare products and technologies, such as pharmaceuticals.
Predicting reactivity threshold in children with anaphylaxis to peanut.
Reier-Nilsen, T; Michelsen, M M; Lødrup Carlsen, K C; Carlsen, K-H; Mowinckel, P; Nygaard, U C; Namork, E; Borres, M P; Håland, G
2018-04-01
Peanut allergy necessitates dietary restrictions, preferably individualized by determining reactivity threshold through an oral food challenge (OFC). However, risk of systemic reactions often precludes OFC in children with severe peanut allergy. We aimed to determine whether clinical and/or immunological characteristics were associated with reactivity threshold in children with anaphylaxis to peanut and secondarily, to investigate whether these characteristics were associated with severity of the allergic reaction during OFC. A double-blinded placebo-controlled food challenge (DBPCFC) with peanut was performed in 96 5- to 15-year-old children with a history of severe allergic reactions to peanut and/or sensitization to peanut (skin prick test [SPT] ≥3 mm or specific immunoglobulin E [s-IgE] ≥0.35 kUA/L). Investigations preceding the DBPCFC included a structured interview, SPT, lung function measurements, serological immunology assessment (IgE, IgG and IgG4), basophil activation test (BAT) and conjunctival allergen provocation test (CAPT). International standards were used to define anaphylaxis and grade the allergic reaction during OFC. During DBPCFC, all 96 children (median age 9.3, range 5.1-15.2) reacted with anaphylaxis (moderate objective symptoms from at least two organ systems). Basophil activation (CD63+ basophils ≥15%), peanut SPT and the ratio of peanut s-IgE/total IgE were significantly associated with reactivity threshold and lowest observed adverse events level (LOAEL) (all P < .04). Basophil activation best predicted very low threshold level (<3 mg of peanut protein), with an optimal cut-off of 75.8% giving a 93.5% negative predictive value. None of the characteristics were significantly associated with the severity of allergic reaction. In children with anaphylaxis to peanut, basophil activation, peanut SPT and the ratio of peanut s-IgE/total IgE were associated with reactivity threshold and LOAEL, but not with allergy reaction severity. © 2017 John Wiley & Sons Ltd.
Low-threshold field emission in planar cathodes with nanocarbon materials
NASA Astrophysics Data System (ADS)
Zhigalov, V.; Petukhov, V.; Emelianov, A.; Timoshenkov, V.; Chaplygin, Yu.; Pavlov, A.; Shamanaev, A.
2016-12-01
Nanocarbon materials are of great interest as field emission cathodes due to their low threshold voltage. In this work, the current-voltage characteristics of nanocarbon electrodes were studied. Low-threshold emission was found in planar samples where field enhancement is negligible (<10). Electron work function values calculated from Fowler-Nordheim theory are anomalously low (<1 eV) and conflict with the directly measured work function values of the fabricated planar samples (4.1-4.4 eV). The non-applicability of Fowler-Nordheim theory to these nanocarbon materials was confirmed. The reasons for low-threshold emission in nanocarbon materials are discussed.
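For context, the Fowler-Nordheim relation usually used to extract a work function from emitter current-voltage data is (standard form, with approximate constants; E is the applied macroscopic field and β the field-enhancement factor):

$$
J=\frac{A\,(\beta E)^{2}}{\phi}\exp\!\left(-\frac{B\,\phi^{3/2}}{\beta E}\right),
\qquad
A\approx 1.54\times10^{-6}\ \mathrm{A\,eV\,V^{-2}},\quad
B\approx 6.83\times10^{9}\ \mathrm{eV^{-3/2}\,V\,m^{-1}}.
$$

Since the slope of a ln(J/E²) versus 1/E plot scales as φ^{3/2}/β, fitting planar data with negligible enhancement (β near 1) to this form returns an anomalously low apparent φ whenever the underlying emission mechanism is not Fowler-Nordheim tunnelling, which is the inconsistency the study reports.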
Estimating economic thresholds for pest control: an alternative procedure.
Ramirez, O A; Saunders, J L
1999-04-01
An alternative methodology to determine profit-maximizing economic thresholds is developed and illustrated. An optimization problem based on the main biological and economic relations involved in determining a profit-maximizing economic threshold is first advanced. From it, a more manageable model of two nonsimultaneous reduced-form equations is derived, which represents a simpler but conceptually and statistically sound alternative. The model recognizes that yields and pest control costs are a function of the economic threshold used. Higher (less strict) economic thresholds can result in lower yields and, therefore, a lower gross income from the sale of the product, but could also be less costly to maintain. The highest possible profits will be obtained by using the economic threshold that results in the maximum difference between the gross income and pest control cost functions.
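A minimal sketch of the reduced-form idea, profit(T) = price * yield(T) - cost(T) maximized over the threshold T, with placeholder functional forms standing in for the relations the paper estimates from field data:

```python
from scipy.optimize import minimize_scalar

def optimal_economic_threshold(price, yield_fn, cost_fn, t_bounds=(0.1, 50.0)):
    """Maximize profit(T) = price * yield(T) - cost(T) over the economic
    threshold T.  yield_fn and cost_fn are the estimated reduced-form
    relations; the bounds and forms below are illustrative assumptions."""
    profit = lambda T: price * yield_fn(T) - cost_fn(T)
    res = minimize_scalar(lambda T: -profit(T), bounds=t_bounds, method="bounded")
    return res.x, profit(res.x)

# illustrative placeholder relations (assumptions, not the paper's estimates):
# yield_fn = lambda T: 10.0 - 0.05 * T**1.5   # yield falls as the threshold is relaxed
# cost_fn  = lambda T: 120.0 / (1.0 + T)      # control cost falls as the threshold is relaxed
```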
Detectability Thresholds and Optimal Algorithms for Community Structure in Dynamic Networks
NASA Astrophysics Data System (ADS)
Ghasemian, Amir; Zhang, Pan; Clauset, Aaron; Moore, Cristopher; Peel, Leto
2016-07-01
The detection of communities within a dynamic network is a common means for obtaining a coarse-grained view of a complex system and for investigating its underlying processes. While a number of methods have been proposed in the machine learning and physics literature, we lack a theoretical analysis of their strengths and weaknesses, or of the ultimate limits on when communities can be detected. Here, we study the fundamental limits of detecting community structure in dynamic networks. Specifically, we analyze the limits of detectability for a dynamic stochastic block model where nodes change their community memberships over time, but where edges are generated independently at each time step. Using the cavity method, we derive a precise detectability threshold as a function of the rate of change and the strength of the communities. Below this sharp threshold, we claim that no efficient algorithm can identify the communities better than chance. We then give two algorithms that are optimal in the sense that they succeed all the way down to this threshold. The first uses belief propagation, which gives asymptotically optimal accuracy, and the second is a fast spectral clustering algorithm, based on linearizing the belief propagation equations. These results extend our understanding of the limits of community detection in an important direction, and introduce new mathematical tools for similar extensions to networks with other types of auxiliary information.
Bayesian methods for estimating GEBVs of threshold traits
Wang, C-L; Ding, X-D; Wang, J-Y; Liu, J-F; Fu, W-X; Zhang, Z; Yin, Z-J; Zhang, Q
2013-01-01
Estimation of genomic breeding values is the key step in genomic selection (GS). Many methods have been proposed for continuous traits, but methods for threshold traits are still scarce. Here we introduce the threshold model into the framework of GS; specifically, we extend the three Bayesian methods BayesA, BayesB and BayesCπ on the basis of the threshold model for estimating genomic breeding values of threshold traits, and the extended methods are correspondingly termed BayesTA, BayesTB and BayesTCπ. Computing procedures for the three BayesT methods using a Markov chain Monte Carlo algorithm were derived. A simulation study was performed to investigate the benefit of the presented methods in the accuracy of genomic estimated breeding values (GEBVs) for threshold traits. Factors affecting the performance of the three BayesT methods were addressed. As expected, the three BayesT methods generally performed better than the corresponding normal Bayesian methods, in particular when the number of phenotypic categories was small. In the standard scenario (number of categories=2, incidence=30%, number of quantitative trait loci=50, h2=0.3), the accuracies were improved by 30.4, 2.4, and 5.7 percentage points, respectively. In most scenarios, BayesTB and BayesTCπ generated similar accuracies and both performed better than BayesTA. In conclusion, our work shows that the threshold model is well suited to predicting GEBVs of threshold traits, and BayesTCπ is recommended as the method of choice for GS of threshold traits. PMID:23149458
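A minimal sketch of the threshold (liability) model underlying the BayesT methods: an unobserved continuous liability is mapped to observed categories through a fixed threshold. Names, dimensions and effect sizes are illustrative assumptions; this is not the authors' MCMC implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
n_ind, n_snp = 500, 200

X = rng.binomial(2, 0.3, size=(n_ind, n_snp)).astype(float)   # SNP genotypes (0/1/2)
beta = rng.normal(0.0, 0.05, size=n_snp)                       # true marker effects (hypothetical)
liability = X @ beta + rng.normal(0.0, 1.0, size=n_ind)        # unobserved continuous liability

t = np.quantile(liability, 0.70)          # threshold chosen so incidence is ~30%
y = (liability > t).astype(int)           # observed binary (threshold) trait

# GEBV prediction works on the liability scale (with estimated effects in practice):
gebv = X @ beta
print("incidence:", y.mean(), "corr(GEBV, liability):", round(np.corrcoef(gebv, liability)[0, 1], 2))
```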
Method and apparatus for analog pulse pile-up rejection
De Geronimo, Gianluigi
2013-12-31
A method and apparatus for pulse pile-up rejection are disclosed. The apparatus comprises a delay value application constituent configured to receive a threshold-crossing time value, and provide an adjustable value according to a delay value and the threshold-crossing time value; and a comparison constituent configured to receive a peak-occurrence time value and the adjustable value, compare the peak-occurrence time value with the adjustable value, indicate pulse acceptance if the peak-occurrence time value is less than or equal to the adjustable value, and indicate pulse rejection if the peak-occurrence time value is greater than the adjustable value.
The impact of manual threshold selection in medical additive manufacturing.
van Eijnatten, Maureen; Koivisto, Juha; Karhu, Kalle; Forouzanfar, Tymour; Wolff, Jan
2017-04-01
Medical additive manufacturing requires standard tessellation language (STL) models. Such models are commonly derived from computed tomography (CT) images using thresholding. Threshold selection can be performed manually or automatically. The aim of this study was to assess the impact of manual and default threshold selection on the reliability and accuracy of skull STL models using different CT technologies. One female and one male human cadaver head were imaged using multi-detector row CT, dual-energy CT, and two cone-beam CT scanners. Four medical engineers manually thresholded the bony structures on all CT images. The lowest and highest selected mean threshold values and the default threshold value were used to generate skull STL models. Geometric variations between all manually thresholded STL models were calculated. Furthermore, in order to calculate the accuracy of the manually and default thresholded STL models, all STL models were superimposed on an optical scan of the dry female and male skulls ("gold standard"). The intra- and inter-observer variability of the manual threshold selection was good (intra-class correlation coefficients >0.9). All engineers selected grey values closer to soft tissue to compensate for bone voids. Geometric variations between the manually thresholded STL models were 0.13 mm (multi-detector row CT), 0.59 mm (dual-energy CT), and 0.55 mm (cone-beam CT). All STL models demonstrated inaccuracies ranging from -0.8 to +1.1 mm (multi-detector row CT), -0.7 to +2.0 mm (dual-energy CT), and -2.3 to +4.8 mm (cone-beam CT). This study demonstrates that manual threshold selection results in better STL models than default thresholding. The use of dual-energy CT and cone-beam CT technology in its present form does not deliver reliable or accurate STL models for medical additive manufacturing. New approaches are required that are based on pattern recognition and machine learning algorithms.
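A minimal sketch of the thresholding-to-STL pipeline the study evaluates, assuming a CT volume already loaded as a NumPy array; it uses scikit-image marching cubes and numpy-stl, and the threshold value and voxel spacing in the usage comment are placeholders, not values from the study:

```python
import numpy as np
from skimage import measure
from stl import mesh  # numpy-stl

def ct_to_stl(volume, threshold, out_path, spacing=(1.0, 1.0, 1.0)):
    """Extract an iso-surface at the chosen grey-value threshold and save it as STL."""
    verts, faces, _, _ = measure.marching_cubes(volume, level=threshold, spacing=spacing)
    surface = mesh.Mesh(np.zeros(faces.shape[0], dtype=mesh.Mesh.dtype))
    for i, f in enumerate(faces):
        surface.vectors[i] = verts[f]   # each face becomes one triangle of the STL
    surface.save(out_path)

# The threshold argument is exactly the quantity whose manual vs. default choice
# the study assesses, e.g.:
# ct_to_stl(ct_volume, threshold=300, out_path="skull.stl", spacing=(0.5, 0.5, 0.5))
```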
Extremal Optimization for estimation of the error threshold in topological subsystem codes at T = 0
NASA Astrophysics Data System (ADS)
Millán-Otoya, Jorge E.; Boettcher, Stefan
2014-03-01
Quantum decoherence is a problem that arises in implementations of quantum computing proposals. Topological subsystem codes (TSC) have been suggested as a way to overcome decoherence, as they offer a higher optimal error tolerance than typical error-correcting algorithms. A TSC has been translated into a planar Ising spin glass with constrained bimodal three-spin couplings. This spin glass has been considered at finite temperature to determine the phase boundary between the unstable phase and the stable phase, where error recovery is possible [1]. We approach the study of the error threshold problem by exploring ground states of this spin glass with the Extremal Optimization (EO) algorithm [2]. EO has proven to be an effective heuristic for exploring ground-state configurations of glassy spin systems [3].
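A minimal sketch of the tau-EO heuristic referred to above, shown here on a generic Ising spin glass with pairwise couplings for brevity (the actual TSC mapping uses constrained three-spin couplings); the fitness definition, tau value and system size are illustrative:

```python
import numpy as np

def tau_eo_ground_state(J, tau=1.4, steps=20000, rng=None):
    """tau-EO for an Ising spin glass H = -0.5 * s^T J s (pairwise sketch)."""
    rng = rng or np.random.default_rng(0)
    n = J.shape[0]
    s = rng.choice([-1, 1], size=n)
    best_s, best_E = s.copy(), -0.5 * s @ J @ s
    ranks = np.arange(1, n + 1)
    p = ranks ** (-tau); p /= p.sum()            # power-law selection over fitness ranks
    for _ in range(steps):
        lam = s * (J @ s)                        # local fitness of each spin (higher = more satisfied)
        order = np.argsort(lam)                  # worst (lowest-fitness) spins first
        k = rng.choice(n, p=p)
        s[order[k]] *= -1                        # flip the selected spin unconditionally
        E = -0.5 * s @ J @ s
        if E < best_E:
            best_E, best_s = E, s.copy()
    return best_s, best_E

# Example: random bimodal +/-1 couplings on a small system
rng = np.random.default_rng(42)
J = rng.choice([-1.0, 1.0], size=(64, 64)); J = np.triu(J, 1); J = J + J.T
print(tau_eo_ground_state(J)[1])
```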
NASA Astrophysics Data System (ADS)
Stollsteiner, P.; Bessiere, H.; Nicolas, J.; Allier, D.; Berthet, O.
2015-04-01
This article is based on a BRGM study on piezometric indicators and threshold values of discharge and groundwater levels for the assessment of the potentially exploitable water resources of chalky watersheds. A method for estimating low water levels based on groundwater levels is presented through three examples representing chalk aquifers with different cycles: annual, combined and interannual. The first is located in Picardy and the two others in the Champagne-Ardennes region. The piezometers with annual cycles used in these examples are assumed to be representative of the aquifer hydrodynamics. Except for multi-annual systems, the analysis of discharge measurements at a hydrometric station against groundwater levels measured at a piezometer representative of the main aquifer leads to relatively precise and satisfactory relationships in a chalky context. These relationships may be useful for monitoring, validation, extension or reconstruction of low-water-flow data. On the one hand, they allow definition of the piezometric levels corresponding to the different alert thresholds of river discharges. On the other hand, they clarify the proportions of low surface water flow that come from runoff or from drainage of the aquifer. Finally, these correlations give an assessment of the minimum flow for the coming weeks. However, they cannot be used to optimize the value of the exploitable water resource because it is difficult to integrate the effective rainfall that could occur during the draining period. Moreover, in the case of multi-annual systems, the solution is to attempt comprehensive system modelling and, if it is satisfactory, to use the simulated values to filter out spurious values or to run the model for forecasting purposes.
Robust crop and weed segmentation under uncontrolled outdoor illumination
USDA-ARS?s Scientific Manuscript database
A new machine vision algorithm for weed detection was developed from RGB color images. Processes included in the detection algorithm were excess green conversion, threshold value computation by statistical analysis, adaptive image segmentation by adjusting the threshold value, median filter, ...
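A minimal sketch of the processing chain named in the abstract (excess-green conversion, a statistically computed threshold, and a median filter). The particular statistic used for the threshold and the filter size are assumptions, not the USDA-ARS implementation:

```python
import numpy as np
from scipy import ndimage

def vegetation_mask(rgb, k=1.0):
    """Excess-green segmentation sketch: ExG = 2g - r - b on chromatic coordinates,
    thresholded at mean + k*std of ExG (image-dependent value), then median-filtered."""
    rgb = rgb.astype(float)
    s = rgb.sum(axis=2) + 1e-9
    r, g, b = rgb[..., 0] / s, rgb[..., 1] / s, rgb[..., 2] / s
    exg = 2.0 * g - r - b
    threshold = exg.mean() + k * exg.std()        # adaptive, recomputed per image
    mask = (exg > threshold).astype(np.uint8)
    return ndimage.median_filter(mask, size=3).astype(bool)   # remove salt-and-pepper noise

# mask = vegetation_mask(image)   # image: HxWx3 RGB array under outdoor illumination
```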
Limitations and opportunities for the social cost of carbon (Invited)
NASA Astrophysics Data System (ADS)
Rose, S. K.
2010-12-01
Estimates of the marginal value of carbon dioxide-the social cost of carbon (SCC)-were recently adopted by the U.S. Government in order to satisfy requirements to value estimated GHG changes of new federal regulations. However, the development and use of SCC estimates of avoided climate change impacts comes with significant challenges and controversial decisions. Fortunately, economics can provide some guidance for conceptually appropriate estimates. At the same time, economics defaults to a benefit-cost decision framework to identify socially optimal policies. However, not all current policy decisions are benefit-cost based, depend on monetized information, or even share the same threshold for information. While a conceptually appropriate SCC is a useful metric, how far can we take it? This talk discusses potential applications of the SCC, limitations based on the state of research and methods, as well as opportunities for, among other things, consistency with climate risk management and with research and decision-making tools.
Pre-impact fall detection system using dynamic threshold and 3D bounding box
NASA Astrophysics Data System (ADS)
Otanasap, Nuth; Boonbrahm, Poonpong
2017-02-01
Fall prevention and detection systems must overcome many challenges in order to be efficient. Some of the difficult problems in vision-based systems are obtrusion, occlusion and overlay. Other associated issues are privacy, cost, noise, computational complexity and the definition of threshold values. Estimating human motion with vision-based methods usually involves partial overlay, caused by the direction of the viewpoint between objects or body parts and the camera, and these issues have to be taken into consideration. This paper proposes a dynamic-threshold-based and bounding box posture analysis method with a multiple-Kinect camera setup for human posture analysis and fall detection. The proposed work uses only two Kinect cameras for acquiring distributed values and distinguishing falls from normal activities. If the peak value of head velocity is greater than the dynamic threshold value, bounding box posture analysis is used to confirm fall occurrence. Furthermore, information captured by multiple Kinects placed at a right angle addresses the skeleton overlay problem of a single Kinect. This work contributes a fusion of multiple Kinect-based skeletons based on dynamic threshold and bounding box posture analysis, which is the only such approach reported so far.
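A minimal sketch of the two-stage decision rule described above: a dynamic threshold on peak head velocity, confirmed by the shape of the skeleton's 3D bounding box. How the dynamic threshold is formed (mean plus k standard deviations of recent velocities) and the aspect-ratio confirmation rule are assumptions, not the authors' exact criteria:

```python
import numpy as np

def detect_fall(head_positions, timestamps, joint_xyz, k=3.0):
    """head_positions: Nx3 head trajectory; timestamps: N; joint_xyz: skeleton joints
    (Mx3) around the candidate event. Returns True if a fall is detected."""
    v = np.linalg.norm(np.diff(head_positions, axis=0), axis=1) / np.diff(timestamps)
    dyn_threshold = v.mean() + k * v.std()        # dynamic threshold from recent history
    if v.max() <= dyn_threshold:
        return False                              # stage 1: no abnormal head velocity
    extent = joint_xyz.max(axis=0) - joint_xyz.min(axis=0)   # 3D bounding box (dx, dy, dz)
    height, horizontal = extent[1], max(extent[0], extent[2])
    return horizontal > height                    # stage 2: box wider than tall => lying posture
```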
The conventional tuning fork as a quantitative tool for vibration threshold.
Alanazy, Mohammed H; Alfurayh, Nuha A; Almweisheer, Shaza N; Aljafen, Bandar N; Muayqil, Taim
2018-01-01
This study was undertaken to describe a method for quantifying vibration when using a conventional tuning fork (CTF) in comparison to a Rydel-Seiffer tuning fork (RSTF) and to provide reference values. Vibration thresholds at index finger and big toe were obtained in 281 participants. Spearman's correlations were performed. Age, weight, and height were analyzed for their covariate effects on vibration threshold. Reference values at the fifth percentile were obtained by quantile regression. The correlation coefficients between CTF and RSTF values at finger/toe were 0.59/0.64 (P = 0.001 for both). Among covariates, only age had a significant effect on vibration threshold. Reference values for CTF at finger/toe for the age groups 20-39 and 40-60 years were 7.4/4.9 and 5.8/4.6 s, respectively. Reference values for RSTF at finger/toe for the age groups 20-39 and 40-60 years were 6.9/5.5 and 6.2/4.7, respectively. CTF provides quantitative values that are as good as those provided by RSTF. Age-stratified reference data are provided. Muscle Nerve 57: 49-53, 2018. © 2017 Wiley Periodicals, Inc.
ESTIMATION OF FUNCTIONALS OF SPARSE COVARIANCE MATRICES
Fan, Jianqing; Rigollet, Philippe; Wang, Weichen
2016-01-01
High-dimensional statistical tests often ignore correlations to gain simplicity and stability, leading to null distributions that depend on functionals of correlation matrices such as their Frobenius norm and other ℓr norms. Motivated by the computation of critical values of such tests, we investigate the difficulty of estimating functionals of sparse correlation matrices. Specifically, we show that simple plug-in procedures based on thresholded estimators of correlation matrices are sparsity-adaptive and minimax optimal over a large class of correlation matrices. Akin to previous results on functional estimation, the minimax rates exhibit an elbow phenomenon. Our results are further illustrated on simulated data as well as in an empirical study of data arising in financial econometrics. PMID:26806986
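A minimal sketch of the plug-in idea: threshold the sample correlation matrix entrywise and evaluate the functional of interest on the thresholded estimate. The universal-threshold choice and the particular functional (the squared off-diagonal Frobenius norm) are illustrative assumptions, not the paper's tuned procedure:

```python
import numpy as np

def thresholded_frobenius(X, tau=None):
    """X: n x p data matrix. Returns a plug-in estimate of sum_{i!=j} rho_ij^2."""
    n, p = X.shape
    R = np.corrcoef(X, rowvar=False)
    if tau is None:
        tau = 2.0 * np.sqrt(np.log(p) / n)        # common universal-threshold choice (assumed)
    R_t = np.where(np.abs(R) >= tau, R, 0.0)      # entrywise hard thresholding
    np.fill_diagonal(R_t, 1.0)
    off = R_t - np.eye(p)
    return np.sum(off ** 2)
```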
[Electronic indicators for determining the optimum moment for inseminating cows and sheep].
Kostov, I; Bodurov, N; Todorova, I
1984-01-01
Three types of electronic indicators for measuring the electrical resistance of the vagina in cows and ewes are described. The first type gives a direct scale reading of the resistance in ohms. The remaining two types are of simpler construction, are intended for practical use, and indicate only whether the threshold value of resistance has been reached. Using these indicators, it was found that the optimal time for insemination is when the electrical resistance is below 250 Ω for cows and below 550 Ω for ewes. The indicators are built entirely with Bulgarian transistors and integrated circuits.
ERIC Educational Resources Information Center
Johnson, Brittney; McCracken, I. Moriah
2016-01-01
In 2015, threshold concepts formed the foundation of two disciplinary documents: The "ACRL Framework for Information Literacy" (2015) and "Naming What We Know: Threshold Concepts of Writing Studies" (2015). While there is no consensus in the fields about the value of threshold concepts in teaching, reading the six Frames in the…
Snik, A; Cremers, C
2004-02-01
Typically, an implantable hearing device consists of electronics and a transducer that is coupled to the ossicular chain; the coupling is of major importance. The Vibrant Soundbridge (VSB) is such an implantable device; normally, the VSB transducer is fixed to the ossicular chain by means of a special clip that is crimped around the long process of the incus. In addition to crimping, bone cement was used to optimize the fixation in six patients. Long-term results were compared to those of five controls with crimp fixation alone. To assess the effect of the bone cement (SerenoCem, Corinthian Medical Ltd, Nottingham, UK) on hearing thresholds, long-term post-surgery thresholds were compared to pre-surgery thresholds. Bone cement did not have any negative effect. Next, to test the hypothesis that aided thresholds might be better with the use of bone cement, aided thresholds were studied. After correction for the severity of hearing loss, only a small difference was found between the two groups at one frequency, viz. 2 kHz. It was concluded that there was no negative effect of using bone cement; however, there is also no reason to use bone cement in VSB users on a regular basis.
Dantan, Etienne; Foucher, Yohann; Lorent, Marine; Giral, Magali; Tessier, Philippe
2018-06-01
Defining thresholds of prognostic markers is essential for stratified medicine. Such thresholds are mostly estimated from purely statistical measures regardless of patient preferences, potentially leading to unacceptable medical decisions. Quality-Adjusted Life-Years (QALYs) are a widely used preference-based measure of health outcomes. We develop a time-dependent QALY-based expected utility function for censored data that should be maximized to estimate an optimal threshold. We performed a simulation study to compare the thresholds estimated with the proposed expected utility approach and with purely statistical estimators. Two applications illustrate the usefulness of the proposed methodology, which was implemented in the R package ROCt (www.divat.fr). First, by reanalysing data from a randomized clinical trial comparing the efficacy of prednisone vs. placebo in patients with chronic liver cirrhosis, we demonstrate the utility of treating patients with a prothrombin level higher than 89%. Second, we reanalyse the data of an observational cohort of kidney transplant recipients and conclude that the Kidney Transplant Failure Score is not useful for adapting the frequency of clinical visits. Applying such a patient-centered methodology may improve the future transfer of novel prognostic scoring systems or markers into clinical practice.
A threshold method for immunological correlates of protection
2013-01-01
Background Immunological correlates of protection are biological markers such as disease-specific antibodies which correlate with protection against disease and which are measurable with immunological assays. It is common in vaccine research and in setting immunization policy to rely on threshold values for the correlate where the accepted threshold differentiates between individuals who are considered to be protected against disease and those who are susceptible. Examples where thresholds are used include development of a new generation 13-valent pneumococcal conjugate vaccine which was required in clinical trials to meet accepted thresholds for the older 7-valent vaccine, and public health decision making on vaccination policy based on long-term maintenance of protective thresholds for Hepatitis A, rubella, measles, Japanese encephalitis and others. Despite widespread use of such thresholds in vaccine policy and research, few statistical approaches have been formally developed which specifically incorporate a threshold parameter in order to estimate the value of the protective threshold from data. Methods We propose a 3-parameter statistical model called the a:b model which incorporates parameters for a threshold and constant but different infection probabilities below and above the threshold estimated using profile likelihood or least squares methods. Evaluation of the estimated threshold can be performed by a significance test for the existence of a threshold using a modified likelihood ratio test which follows a chi-squared distribution with 3 degrees of freedom, and confidence intervals for the threshold can be obtained by bootstrapping. The model also permits assessment of relative risk of infection in patients achieving the threshold or not. Goodness-of-fit of the a:b model may be assessed using the Hosmer-Lemeshow approach. The model is applied to 15 datasets from published clinical trials on pertussis, respiratory syncytial virus and varicella. Results Highly significant thresholds with p-values less than 0.01 were found for 13 of the 15 datasets. Considerable variability was seen in the widths of confidence intervals. Relative risks indicated around 70% or better protection in 11 datasets and relevance of the estimated threshold to imply strong protection. Goodness-of-fit was generally acceptable. Conclusions The a:b model offers a formal statistical method of estimation of thresholds differentiating susceptible from protected individuals which has previously depended on putative statements based on visual inspection of data. PMID:23448322
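A minimal sketch of the a:b idea, assuming per-subject antibody titres and infection outcomes: constant infection probabilities below (a) and above (b) each candidate threshold, with the threshold maximizing the binomial log-likelihood retained. This is an illustrative grid search, not the authors' full implementation (which also includes the modified likelihood ratio test, bootstrap intervals and goodness-of-fit checks):

```python
import numpy as np

def fit_ab_model(titre, infected):
    """titre: array of correlate measurements; infected: boolean array of outcomes."""
    best = (-np.inf, None, None, None)
    eps = 1e-9
    for t in np.unique(titre):
        below = titre < t
        a = infected[below].mean() if below.any() else 0.0    # infection prob. below threshold
        b = infected[~below].mean() if (~below).any() else 0.0  # infection prob. at/above threshold
        ll = (np.sum(np.log(np.where(infected[below], a, 1 - a) + eps)) +
              np.sum(np.log(np.where(infected[~below], b, 1 - b) + eps)))
        if ll > best[0]:
            best = (ll, t, a, b)
    return {"threshold": best[1], "p_below": best[2], "p_above": best[3], "loglik": best[0]}
```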
Using machine learning to examine medication adherence thresholds and risk of hospitalization.
Lo-Ciganic, Wei-Hsuan; Donohue, Julie M; Thorpe, Joshua M; Perera, Subashan; Thorpe, Carolyn T; Marcum, Zachary A; Gellad, Walid F
2015-08-01
Quality improvement efforts are frequently tied to patients achieving ≥80% medication adherence. However, there is little empirical evidence that this threshold optimally predicts important health outcomes. To apply machine learning to examine how adherence to oral hypoglycemic medications is associated with avoidance of hospitalizations, and to identify adherence thresholds for optimal discrimination of hospitalization risk. A retrospective cohort study of 33,130 non-dual-eligible Medicaid enrollees with type 2 diabetes. We randomly selected 90% of the cohort (training sample) to develop the prediction algorithm and used the remaining (testing sample) for validation. We applied random survival forests to identify predictors for hospitalization and fit survival trees to empirically derive adherence thresholds that best discriminate hospitalization risk, using the proportion of days covered (PDC). Time to first all-cause and diabetes-related hospitalization. The training and testing samples had similar characteristics (mean age, 48 y; 67% female; mean PDC=0.65). We identified 8 important predictors of all-cause hospitalizations (rank in order): prior hospitalizations/emergency department visit, number of prescriptions, diabetes complications, insulin use, PDC, number of prescribers, Elixhauser index, and eligibility category. The adherence thresholds most discriminating for risk of all-cause hospitalization varied from 46% to 94% according to patient health and medication complexity. PDC was not predictive of hospitalizations in the healthiest or most complex patient subgroups. Adherence thresholds most discriminating of hospitalization risk were not uniformly 80%. Machine-learning approaches may be valuable to identify appropriate patient-specific adherence thresholds for measuring quality of care and targeting nonadherent patients for intervention.
Noh, Ji-Woong; Park, Byoung-Sun; Kim, Mee-Young; Lee, Lim-Kyu; Yang, Seung-Min; Lee, Won-Deok; Shin, Yong-Sub; Kang, Ji-Hye; Kim, Ju-Hyun; Lee, Jeong-Uk; Kwak, Taek-Yong; Lee, Tae-Hyun; Kim, Ju-Young; Kim, Junghwan
2015-06-01
[Purpose] This study investigated two-point discrimination (TPD) and the electrical sensory threshold of the blind to define the effect of using Braille on the tactile and electrical senses. [Subjects and Methods] Twenty-eight blind participants were divided equally into a text-reading and a Braille-reading group. We measured tactile sensory and electrical thresholds using the TPD method and a transcutaneous electrical nerve stimulator. [Results] The left palm TPD values were significantly different between the groups. The values of the electrical sensory threshold in the left hand, the electrical pain threshold in the left hand, and the electrical pain threshold in the right hand were significantly lower in the Braille group than in the text group. [Conclusion] These findings make it difficult to explain the difference in tactility between groups, excluding both palms. However, our data show that using Braille can enhance development of the sensory median nerve in the blind, particularly in terms of the electrical sensory and pain thresholds.
NASA Astrophysics Data System (ADS)
Zhao, Libo; Xia, Yong; Hebibul, Rahman; Wang, Jiuhong; Zhou, Xiangyang; Hu, Yingjie; Li, Zhikang; Luo, Guoxi; Zhao, Yulong; Jiang, Zhuangde
2018-03-01
This paper presents an experimental study using image processing to investigate the width and width uniformity of sub-micrometer polyethylene oxide (PEO) lines fabricated by the near-field electrospinning (NFES) technique. An adaptive thresholding method was developed to determine the optimal gray values for accurately extracting the profiles of printed lines from the original optical images, and its feasibility was demonstrated. The proposed thresholding method is believed to exploit the statistical properties of the image and to suppress halo-induced errors. The triangular method and the relative standard deviation (RSD) were introduced to calculate line width and width uniformity, respectively. Based on these image processing methods, the effects of process parameters including substrate speed (v), applied voltage (U), nozzle-to-collector distance (H), and syringe pump flow rate (Q) on the width and width uniformity of printed lines were discussed. The results are helpful for promoting the NFES technique for fabricating high-resolution micro- and sub-micrometer lines, and for optical image processing at the sub-micrometer level.
Health hazards of ultrafine metal and metal oxide powders
NASA Technical Reports Server (NTRS)
Boylen, G. W., Jr.; Chamberlin, R. I.; Viles, F. J.
1969-01-01
Study reveals that suggested threshold limit values are from two to fifty times lower than current recommended threshold limit values. Proposed safe limits of exposure to the ultrafine dusts are based on known toxic potential of various materials as determined in particle size ranges.
48 CFR 41.401 - Monthly and annual review.
Code of Federal Regulations, 2010 CFR
2010-10-01
... values exceeding the simplified acquisition threshold, on an annual basis. Annual reviews of accounts with annual values at or below the simplified acquisition threshold shall be conducted when deemed... services to each facility under the utility's most economical, applicable rate and to examine competitive...
Spirtas, R; Steinberg, M; Wands, R C; Weisburger, E K
1986-01-01
The Chemical Substances Threshold Limit Value Committee of the American Conference of Governmental Industrial Hygienists has refined its procedures for evaluating carcinogens. Types of epidemiologic and toxicologic evidence used are reviewed and a discussion is presented on how the Committee evaluates data on carcinogenicity. Although it has not been conclusively determined whether biological thresholds exist for all types of carcinogens, the Committee will continue to develop guidelines for permissible exposures to carcinogens. The Committee will continue to use the safety factor approach to setting Threshold Limit Values for carcinogens, despite its shortcomings. A compilation has been developed for lists of substances considered to be carcinogenic by several scientific groups. The Committee will use this information to help to identify and classify carcinogens for its evaluation. PMID:3752326
Kassanjee, Reshma; Pilcher, Christopher D; Busch, Michael P; Murphy, Gary; Facente, Shelley N; Keating, Sheila M; Mckinney, Elaine; Marson, Kara; Price, Matthew A; Martin, Jeffrey N; Little, Susan J; Hecht, Frederick M; Kallas, Esper G; Welte, Alex
2016-01-01
Objective Assays for classifying HIV infections as ‘recent’ or ‘non-recent’ for incidence surveillance fail to simultaneously achieve large mean durations of ‘recent’ infection (MDRIs) and low ‘false-recent’ rates (FRRs), particularly in virally suppressed persons. The potential for optimizing recent infection testing algorithms (RITAs), by introducing viral load criteria and tuning thresholds used to dichotomize quantitative measures, is explored. Design The Consortium for the Evaluation and Performance of HIV Incidence Assays characterized over 2000 possible RITAs constructed from seven assays (LAg, BED, Less-sensitive Vitros, Vitros Avidity, BioRad Avidity, Architect Avidity and Geenius) applied to 2500 diverse specimens. Methods MDRIs were estimated using regression, and FRRs as observed ‘recent’ proportions, in various specimen sets. Context-specific FRRs were estimated for hypothetical scenarios. FRRs were made directly comparable by constructing RITAs with the same MDRI through the tuning of thresholds. RITA utility was summarized by the precision of incidence estimation. Results All assays produce high FRRs amongst treated subjects and elite controllers (10%-80%). Viral load testing reduces FRRs, but diminishes MDRIs. Context-specific FRRs vary substantially by scenario – BioRad Avidity and LAg provided the lowest FRRs and highest incidence precision in scenarios considered. Conclusions The introduction of a low viral load threshold provides crucial improvements in RITAs. However, it does not eliminate non-zero FRRs, and MDRIs must be consistently estimated. The tuning of thresholds is essential for comparing and optimizing the use of assays. The translation of directly measured FRRs into context-specific FRRs critically affects their magnitudes and our understanding of the utility of assays. PMID:27454561
Artz, Andrew S.; Logan, Brent; Zhu, Xiaochun; Akpek, Gorgun; Bufarull, Rodrigo Martino; Gupta, Vikas; Lazarus, Hillard M.; Litzow, Mark; Loren, Alison; Majhail, Navneet S.; Maziarz, Richard T.; McCarthy, Philip; Popat, Uday; Saber, Wael; Spellman, Stephen; Ringden, Olle; Wickrema, Amittha; Pasquini, Marcelo C.; Cooke, Kenneth R.
2016-01-01
We sought to confirm the prognostic importance of simple clinically available biomarkers of C-reactive protein, serum albumin, and ferritin prior to allogeneic hematopoietic cell transplantation. The study population consisted of 784 adults with acute myeloid leukemia in remission or myelodysplastic syndromes undergoing unrelated donor transplant reported to the Center for International Blood and Marrow Transplant Research. C-reactive protein and ferritin were centrally quantified by ELISA from cryopreserved plasma whereas each center provided pre-transplant albumin. In multivariate analysis, transplant-related mortality was associated with the pre-specified thresholds of C-reactive protein more than 10 mg/L (P=0.008) and albumin less than 3.5 g/dL (P=0.01) but not ferritin more than 2500 ng/mL. Only low albumin independently influenced overall mortality. Optimal thresholds affecting transplant-related mortality were defined as: C-reactive protein more than 3.67 mg/L, log(ferritin), and albumin less than 3.4 g/dL. A 3-level biomarker risk group based on these values separated risks of transplant-related mortality: low risk (reference), intermediate (HR=1.66, P=0.015), and high risk (HR=2.7, P<0.001). One-year survival was 74%, 67% and 56% for low-, intermediate- and high-risk groups. Routinely available pre-transplant biomarkers independently risk-stratify for transplant-related mortality and survival. PMID:27662010
NASA Astrophysics Data System (ADS)
Panagoulia, D.; Trichakis, I.
2012-04-01
Considering the growing interest in simulating hydrological phenomena with artificial neural networks (ANNs), it is useful to figure out the potential and limits of these models. In this study, the main objective is to examine how to improve the ability of an ANN model to simulate extreme values of flow utilizing a priori knowledge of threshold values. A three-layer feedforward ANN was trained by using the back propagation algorithm and the logistic function as activation function. By using the thresholds, the flow was partitioned in low (x < μ), medium (μ ≤ x ≤ μ + 2σ) and high (x > μ + 2σ) values. The employed ANN model was trained for high flow partition and all flow data too. The developed methodology was implemented over a mountainous river catchment (the Mesochora catchment in northwestern Greece). The ANN model received as inputs pseudo-precipitation (rain plus melt) and previous observed flow data. After the training was completed the bootstrapping methodology was applied to calculate the ANN confidence intervals (CIs) for a 95% nominal coverage. The calculated CIs included only the uncertainty, which comes from the calibration procedure. The results showed that an ANN model trained specifically for high flows, with a priori knowledge of the thresholds, can simulate these extreme values much better (RMSE is 31.4% less) than an ANN model trained with all data of the available time series and using a posteriori threshold values. On the other hand the width of CIs increases by 54.9% with a simultaneous increase by 64.4% of the actual coverage for the high flows (a priori partition). The narrower CIs of the high flows trained with all data may be attributed to the smoothing effect produced from the use of the full data sets. Overall, the results suggest that an ANN model trained with a priori knowledge of the threshold values has an increased ability in simulating extreme values compared with an ANN model trained with all the data and a posteriori knowledge of the thresholds.
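A minimal sketch of the a priori partition described above: define the high-flow class as values above mu + 2*sigma of the training series and train a dedicated logistic-activation feedforward network on that partition only. The network size, solver and library are assumptions (scikit-learn's MLPRegressor standing in for the paper's back-propagation ANN), and no confidence-interval bootstrapping is shown:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def train_high_flow_ann(X, y):
    """X: inputs (pseudo-precipitation and lagged observed flows); y: observed flow."""
    mu, sigma = y.mean(), y.std()
    high = y > mu + 2.0 * sigma                      # a priori threshold for the high-flow class
    ann = MLPRegressor(hidden_layer_sizes=(10,), activation="logistic",
                       solver="lbfgs", max_iter=5000, random_state=0)
    ann.fit(X[high], y[high])                        # trained on high flows only
    return ann, (mu, sigma)
```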
Raja, Harish; Snyder, Melissa R.; Johnston, Patrick B.; O’Neill, Brian P.; Caraballo, Juline N.; Balsanek, Joseph G.; Peters, Brian E.; Decker, Paul A.; Pulido, Jose S.
2013-01-01
Intraocular cytokines are promising diagnostic biomarkers of vitreoretinal lymphoma. Here, we evaluate the utility of IL-10, IL-6 and IL-10/IL-6 for discriminating lymphoma from uveitis and report the effects of intraocular methotrexate and rituximab on aqueous cytokine levels in eyes with lymphoma. This is a retrospective case series including 10 patients with lymphoma and 7 patients with uveitis. Non-parametric Mann-Whitney analysis was performed to determine statistical significance of difference in interleukin levels between lymphoma and uveitis. Compared to eyes with uveitis, eyes with lymphoma had higher levels of IL-10 (U = 7.0; two-tailed p = 0.004) and IL-10/IL-6 (U = 6.0; two-tailed p = 0.003), whereas IL-6 levels were more elevated, although insignificant, in those patients with uveitis than in lymphoma (U = 15.0; two-tailed p = ns). Using a receiver operating characteristic analysis to identify threshold values diagnostic for lymphoma, optimal sensitivity and specificity improved to 80.0% and 100%, respectively, for IL-10>7.025 pg/ml and 90.0% and 100.0%, respectively, for IL-10/IL-6>0.02718. In patients in whom serial interleukin levels were available, regular intravitreal treatment with methotrexate and rituximab was associated with reduction in IL-10 levels over time. In conclusion, optimal IL-10 and IL-10/IL-6 threshold values are associated with a diagnostic sensitivity ≥80% and specificity of 100%. Therefore, these cytokines may serve as a useful adjunct in the diagnosis of lymphoma. While negative IL-10 and IL-10/IL-6 values do not exclude a diagnosis of lymphoma, elevated levels do appear to be consistent with lymphoma clinically. Moreover, elevated levels of IL-10 in the setting of a clinically quiet eye may point to impending disease recurrence. Lastly, once lymphoma is diagnosed, IL-10 levels can be monitored over time to assess disease activity and therapeutic response. PMID:23750271
Devlin, Michelle; Painting, Suzanne; Best, Mike
2007-01-01
The EU Water Framework Directive recognises that ecological status is supported by the prevailing physico-chemical conditions in each water body. This paper describes an approach to providing guidance on setting thresholds for nutrients taking account of the biological response to nutrient enrichment evident in different types of water. Indices of pressure, state and impact are used to achieve a robust nutrient (nitrogen) threshold by considering each individual index relative to a defined standard, scale or threshold. These indices include winter nitrogen concentrations relative to a predetermined reference value; the potential of the waterbody to support phytoplankton growth (estimated as primary production); and detection of an undesirable disturbance (measured as dissolved oxygen). Proposed reference values are based on a combination of historical records, offshore (limited human influence) nutrient concentrations, literature values and modelled data. Statistical confidence is based on a number of attributes, including distance of confidence limits away from a reference threshold and how well the model is populated with real data. This evidence based approach ensures that nutrient thresholds are based on knowledge of real and measurable biological responses in transitional and coastal waters.
Nadkarni, Tanvi N.; Andreoli, Matthew J.; Nair, Veena A.; Yin, Peng; Young, Brittany M.; Kundu, Bornali; Pankratz, Joshua; Radtke, Andrew; Holdsworth, Ryan; Kuo, John S.; Field, Aaron S.; Baskaya, Mustafa K.; Moritz, Chad H.; Meyerand, M. Elizabeth; Prabhakaran, Vivek
2014-01-01
Background and purpose Functional magnetic resonance imaging (fMRI) is a non-invasive pre-surgical tool used to assess localization and lateralization of language function in brain tumor and vascular lesion patients in order to guide neurosurgeons as they devise a surgical approach to treat these lesions. We investigated the effect of varying the statistical thresholds as well as the type of language tasks on functional activation patterns and language lateralization. We hypothesized that language lateralization indices (LIs) would be threshold- and task-dependent. Materials and methods Imaging data were collected from brain tumor patients (n = 67, average age 48 years) and vascular lesion patients (n = 25, average age 43 years) who received pre-operative fMRI scanning. Both patient groups performed expressive (antonym and/or letter-word generation) and receptive (tumor patients performed text-reading; vascular lesion patients performed text-listening) language tasks. A control group (n = 25, average age 45 years) performed the letter-word generation task. Results Brain tumor patients showed left-lateralization during the antonym-word generation and text-reading tasks at high threshold values and bilateral activation during the letter-word generation task, irrespective of the threshold values. Vascular lesion patients showed left-lateralization during the antonym and letter-word generation, and text-listening tasks at high threshold values. Conclusion Our results suggest that the type of task and the applied statistical threshold influence LI and that the threshold effects on LI may be task-specific. Thus identifying critical functional regions and computing LIs should be conducted on an individual subject basis, using a continuum of threshold values with different tasks to provide the most accurate information for surgical planning to minimize post-operative language deficits. PMID:25685705
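A minimal sketch of a commonly used lateralization index, LI = (L - R) / (L + R), computed from suprathreshold voxel counts in left and right language regions over a continuum of statistical thresholds, as the conclusion above recommends. The masks, map and threshold grid are placeholders, not the study's processing pipeline:

```python
import numpy as np

def lateralization_index(tmap, left_mask, right_mask, thresholds):
    """tmap: statistical map; left_mask/right_mask: boolean ROI arrays of the same shape."""
    lis = []
    for t in thresholds:
        L = np.count_nonzero(tmap[left_mask] > t)     # suprathreshold voxels, left hemisphere ROI
        R = np.count_nonzero(tmap[right_mask] > t)    # suprathreshold voxels, right hemisphere ROI
        lis.append((L - R) / (L + R) if (L + R) > 0 else np.nan)
    return np.array(lis)   # LI > 0: left-lateralized; LI < 0: right-lateralized
```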
Vemer, Pepijn; Rutten-van Mölken, Maureen P M H
2011-10-01
Recently, several checklists systematically assessed factors that affect the transferability of cost-effectiveness (CE) studies between jurisdictions. The role of the threshold value for a QALY has been given little consideration in these checklists, even though the importance of a factor as a cause of between country differences in CE depends on this threshold. In this paper, we study the impact of the willingness-to-pay (WTP) per QALY on the importance of transferability factors in the case of smoking cessation support (SCS). We investigated, for several values of the WTP, how differences between six countries affect the incremental net monetary benefit (INMB) of SCS. The investigated factors were demography, smoking prevalence, mortality, epidemiology and costs of smoking-related diseases, resource use and unit costs of SCS, utility weights and discount rates. We found that when the WTP decreased, factors that mainly affect health outcomes became less important and factors that mainly effect costs became more important. With a WTP below
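To make the mechanism concrete, a minimal sketch of the incremental net monetary benefit on which the transferability analysis above rests: INMB = WTP * dQALY - dCost, so as the WTP per QALY falls the cost difference dominates, and as it rises the health-outcome difference does. The numbers in the usage comment are purely illustrative, not values from the study:

```python
def inmb(delta_qaly, delta_cost, wtp):
    """Incremental net monetary benefit of an intervention vs. its comparator."""
    return wtp * delta_qaly - delta_cost

# e.g. a hypothetical smoking cessation support comparison:
# inmb(delta_qaly=0.05, delta_cost=300.0, wtp=20000.0)  # -> 700.0
```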
NASA Technical Reports Server (NTRS)
Smith, Paul L.; VonderHaar, Thomas H.
1996-01-01
The principal goal of this project is to establish relationships that would allow application of area-time integral (ATI) calculations based upon satellite data to estimate rainfall volumes. The research is being carried out as a collaborative effort between the two participating organizations, with the satellite data analysis to determine values for the ATIs being done primarily by the STC-METSAT scientists and the associated radar data analysis to determine the 'ground-truth' rainfall estimates being done primarily at the South Dakota School of Mines and Technology (SDSM&T). Synthesis of the two separate kinds of data and investigation of the resulting rainfall-versus-ATI relationships is then carried out jointly. The research has been pursued using two different approaches, which for convenience can be designated as the 'fixed-threshold approach' and the 'adaptive-threshold approach'. In the former, an attempt is made to determine a single temperature threshold in the satellite infrared data that would yield ATI values for identifiable cloud clusters which are closely related to the corresponding rainfall amounts as determined by radar. Work on the second, or 'adaptive-threshold', approach for determining the satellite ATI values has explored two avenues: (1) one attempt involved choosing IR thresholds to match the satellite ATI values with ones separately calculated from the radar data on a case-by-case basis; and (2) another attempt involved a straightforward screening analysis to determine the (fixed) offset that would lead to the strongest correlation and lowest standard error of estimate in the relationship between the satellite ATI values and the corresponding rainfall volumes.
NASA Astrophysics Data System (ADS)
Zhang, Fengtian; Wang, Chao; Yuan, Mingquan; Tang, Bin; Xiong, Zhuang
2017-12-01
Most of the MEMS inertial switches developed in recent years are intended for shock and impact sensing with threshold values above 50 g. To meet the requirement of detecting linear acceleration signals at the low-g level, a silicon-based MEMS inertial switch with a threshold value of 5 g was designed, fabricated and characterized. The switch consists of a large proof mass supported by circular spiral springs. An analytical model of the structural stiffness of the proposed switch was derived and verified by finite-element simulation. The structure fabrication was based on a customized double-buried-layer silicon-on-insulator wafer and encapsulated by glass wafers. Centrifugal and nanoindentation experiments were performed to measure the threshold value and the structural stiffness. The actual threshold values were measured to be 0.1-0.3 g lower than the pre-designed value of 5 g due to dimension loss during non-contact lithography processing. Concerning reliability assessment, a series of environmental experiments were conducted and the switches remained operational without excessive errors. However, both the random vibration and the shock tests indicate that metal particles generated during collision of the contact parts might affect contact reliability and long-term stability. Accordingly, a careful study of switch contact behavior should be included in future research.
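A minimal sketch of how a threshold value arises in a simple spring-mass contact switch: the switch closes when the inertial force on the proof mass overcomes the spring force over the contact gap, i.e. a_th = k * gap / m. This first-order model and the example numbers are assumptions for illustration only; the paper's analytical stiffness model for circular spiral springs is not reproduced here:

```python
def threshold_acceleration(k, gap, mass, g=9.81):
    """a_th (in units of g) such that m*a_th = k*gap, closing the contact gap."""
    return k * gap / mass / g

# e.g. k = 2.0 N/m, gap = 25e-6 m, proof mass = 1.0e-6 kg  ->  ~5.1 g
# print(threshold_acceleration(2.0, 25e-6, 1.0e-6))
```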
Self-Organization on Social Media: Endo-Exo Bursts and Baseline Fluctuations
Oka, Mizuki; Hashimoto, Yasuhiro; Ikegami, Takashi
2014-01-01
A salient dynamic property of social media is bursting behavior. In this paper, we study bursting behavior in terms of the temporal relation between a preceding baseline fluctuation and the successive burst response using a frequency time series of 3,000 keywords on Twitter. We found that there is a fluctuation threshold up to which the burst size increases as the fluctuation increases and that above the threshold, there appears a variety of burst sizes. We call this threshold the critical threshold. Investigating this threshold in relation to endogenous bursts and exogenous bursts based on peak ratio and burst size reveals that the bursts below this threshold are endogenously caused and above this threshold, exogenous bursts emerge. Analysis of the 3,000 keywords shows that all the nouns have both endogenous and exogenous origins of bursts and that each keyword has a critical threshold in the baseline fluctuation value to distinguish between the two. Having a threshold for an input value for activating the system implies that Twitter is an excitable medium. These findings are useful for characterizing how excitable a keyword is on Twitter and could be used, for example, to predict the response to particular information on social media. PMID:25329610
Cost-effectiveness thresholds: methods for setting and examples from around the world.
Santos, André Soares; Guerra-Junior, Augusto Afonso; Godman, Brian; Morton, Alec; Ruas, Cristina Mariano
2018-06-01
Cost-effectiveness thresholds (CETs) are used to judge if an intervention represents sufficient value for money to merit adoption in healthcare systems. The study was motivated by the Brazilian context of HTA, where meetings are being conducted to decide on the definition of a threshold. Areas covered: An electronic search was conducted on Medline (via PubMed), Lilacs (via BVS) and ScienceDirect followed by a complementary search of references of included studies, Google Scholar and conference abstracts. Cost-effectiveness thresholds are usually calculated through three different approaches: the willingness-to-pay, representative of welfare economics; the precedent method, based on the value of an already funded technology; and the opportunity cost method, which links the threshold to the volume of health displaced. An explicit threshold has never been formally adopted in most places. Some countries have defined thresholds, with some flexibility to consider other factors. An implicit threshold could be determined by research of funded cases. Expert commentary: CETs have had an important role as a 'bridging concept' between the world of academic research and the 'real world' of healthcare prioritization. The definition of a cost-effectiveness threshold is paramount for the construction of a transparent and efficient Health Technology Assessment system.
Threshold Values for Identification of Contamination Predicted by Reduced-Order Models
Last, George V.; Murray, Christopher J.; Bott, Yi-Ju; ...
2014-12-31
The U.S. Department of Energy’s (DOE’s) National Risk Assessment Partnership (NRAP) Project is developing reduced-order models to evaluate potential impacts on underground sources of drinking water (USDWs) if CO2 or brine leaks from deep CO2 storage reservoirs. Threshold values, below which there would be no predicted impacts, were determined for portions of two aquifer systems. These threshold values were calculated using an interwell approach for determining background groundwater concentrations that is an adaptation of methods described in the U.S. Environmental Protection Agency’s Unified Guidance for Statistical Analysis of Groundwater Monitoring Data at RCRA Facilities.
Processing circuitry for single channel radiation detector
NASA Technical Reports Server (NTRS)
Holland, Samuel D. (Inventor); Delaune, Paul B. (Inventor); Turner, Kathryn M. (Inventor)
2009-01-01
Processing circuitry is provided for a high voltage operated radiation detector. An event detector utilizes a comparator configured to produce an event signal based on a leading edge threshold value. A preferred event detector does not produce another event signal until a trailing edge threshold value is satisfied. The event signal can be utilized for counting the number of particle hits and also for controlling data collection operation for a peak detect circuit and timer. The leading edge threshold value is programmable such that it can be reprogrammed by a remote computer. A digital high voltage control is preferably operable to monitor and adjust high voltage for the detector.
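A minimal software sketch of the leading/trailing-edge logic described above, written as a simple hysteresis loop over digitized samples: an event is signalled when the leading-edge threshold is crossed, and the detector is re-armed only after the signal satisfies the trailing-edge threshold, so one particle hit yields a single count. Sample-based processing and the function name are assumptions; the patent describes analog circuitry:

```python
def detect_events(samples, leading, trailing):
    """Return indices where an event signal would be produced."""
    events, armed = [], True
    for i, v in enumerate(samples):
        if armed and v >= leading:
            events.append(i)      # event signal: also gates the peak-detect circuit and timer
            armed = False
        elif not armed and v <= trailing:
            armed = True          # trailing-edge threshold satisfied; re-arm the detector
    return events
```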
Black, Robert W; Moran, Patrick W; Frankforter, Jill D
2011-04-01
Many streams within the United States are impaired due to nutrient enrichment, particularly in agricultural settings. The present study examines the response of benthic algal communities in agricultural and minimally disturbed sites from across the western United States to a suite of environmental factors, including nutrients, collected at multiple scales. The first objective was to identify the relative importance of nutrients, habitat and watershed features, and macroinvertebrate trophic structure to explain algal metrics derived from deposition and erosion habitats. The second objective was to determine if thresholds in total nitrogen (TN) and total phosphorus (TP) related to algal metrics could be identified and how these thresholds varied across metrics and habitats. Nutrient concentrations within the agricultural areas were elevated and greater than published threshold values. All algal metrics examined responded to nutrients as hypothesized. Although nutrients typically were the most important variables in explaining the variation in each of the algal metrics, environmental factors operating at multiple scales also were important. Calculated thresholds for TN or TP based on the algal metrics generated from samples collected from erosion and deposition habitats were not significantly different. Little variability in threshold values for each metric for TN and TP was observed. The consistency of the threshold values measured across multiple metrics and habitats suggest that the thresholds identified in this study are ecologically relevant. Additional work to characterize the relationship between algal metrics, physical and chemical features, and nuisance algal growth would be of benefit to the development of nutrient thresholds and criteria.
NASA Astrophysics Data System (ADS)
Segoni, Samuele; Rosi, Ascanio; Lagomarsino, Daniela; Fanti, Riccardo; Casagli, Nicola
2018-03-01
We communicate the results of a preliminary investigation aimed at improving a state-of-the-art RSLEWS (regional-scale landslide early warning system) based on rainfall thresholds by integrating mean soil moisture values averaged over the territorial units of the system. We tested two approaches. The simplest can be easily applied to improve other RSLEWS: it is based on a soil moisture threshold value under which rainfall thresholds are not used because landslides are not expected to occur. Another approach deeply modifies the original RSLEWS: thresholds based on antecedent rainfall accumulated over long periods are substituted with soil moisture thresholds. A back analysis demonstrated that both approaches consistently reduced false alarms, while the second approach reduced missed alarms as well.
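A minimal sketch of the simpler of the two approaches described above: the rainfall threshold is evaluated only when the mean soil moisture of the territorial unit exceeds a minimum value, below which no landslides are expected. Variable names and the single-threshold form are assumptions for illustration:

```python
def issue_warning(rainfall, soil_moisture, rain_threshold, sm_threshold):
    """Return True if a landslide warning should be issued for the territorial unit."""
    if soil_moisture < sm_threshold:
        return False               # too dry: rainfall exceedance ignored (fewer false alarms)
    return rainfall > rain_threshold
```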
NASA Astrophysics Data System (ADS)
Loutas, T. H.; Bourikas, A.
2017-12-01
We revisit the optimal sensor placement problem for engineering structures with an emphasis on in-plane dynamic strain measurements, aiming at modal identification as well as vibration-based damage detection for structural health monitoring purposes. The approach utilized is based on maximizing a norm of the Fisher Information Matrix (FIM) built from numerically obtained mode shapes of the structure, while prohibiting the sensorization of neighboring degrees of freedom as well as of those carrying similar information, in order to obtain satisfactory coverage. A new convergence criterion for the FIM norm is proposed in order to deal with the issue of choosing an appropriate sensor redundancy threshold, a concept recently introduced but whose choice has not been further investigated. The sensor configurations obtained via a forward sequential placement algorithm are sub-optimal in terms of FIM norm values, but the selected sensors are not allowed to be placed at neighboring degrees of freedom, thus providing better coverage of the structure and a subsequently better identification of the experimental mode shapes. The issue of how service-induced damage affects the sensor configuration initially nominated as optimal is also investigated and reported. The numerical model of a composite sandwich panel serves as a representative aerospace structure for these investigations.
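A minimal sketch of a forward sequential placement of the kind described above: greedily add the candidate degree of freedom that maximizes a FIM norm (here det(Phi_s^T Phi_s) with a small ridge for numerical stability) while banning neighbors of already selected sensors. The specific norm, the ridge term and the neighbor bookkeeping are illustrative assumptions, not the authors' exact algorithm:

```python
import numpy as np

def place_sensors(Phi, n_sensors, neighbors):
    """Phi: (n_dof x n_modes) mode-shape matrix; neighbors: dict dof -> list of adjacent dofs."""
    selected, banned = [], set()
    for _ in range(n_sensors):
        best_dof, best_val = None, -np.inf
        for dof in range(Phi.shape[0]):
            if dof in selected or dof in banned:
                continue                               # coverage: skip neighbors of chosen dofs
            S = Phi[selected + [dof], :]
            val = np.linalg.det(S.T @ S + 1e-12 * np.eye(S.shape[1]))  # FIM norm of candidate set
            if val > best_val:
                best_val, best_dof = val, dof
        selected.append(best_dof)
        banned.update(neighbors.get(best_dof, []))
    return selected
```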
To what extent can ecosystem services motivate protecting biodiversity?
Dee, Laura E; De Lara, Michel; Costello, Christopher; Gaines, Steven D
2017-08-01
Society increasingly focuses on managing nature for the services it provides people rather than for the existence of particular species. How much biodiversity protection would result from this modified focus? Although biodiversity contributes to ecosystem services, the details of which species are critical, and whether they will go functionally extinct in the future, are fraught with uncertainty. Explicitly considering this uncertainty, we develop an analytical framework to determine how much biodiversity protection would arise solely from optimising net value from an ecosystem service. Using stochastic dynamic programming, we find that protecting a threshold number of species is optimal, and uncertainty surrounding how biodiversity produces services makes it optimal to protect more species than are presumed critical. We define conditions under which the economically optimal protection strategy is to protect all species, no species, and cases in between. We show how the optimal number of species to protect depends upon different relationships between species and services, including considering multiple services. Our analysis provides simple criteria to evaluate when managing for particular ecosystem services could warrant protecting all species, given uncertainty. Evaluating this criterion with empirical estimates from different ecosystems suggests that optimising some services will be more likely to protect most species than others. © 2017 John Wiley & Sons Ltd/CNRS.