NASA Technical Reports Server (NTRS)
Giesy, D. P.
1978-01-01
A technique is presented for calculating Pareto-optimal solutions to a multiple-objective constrained optimization problem by solving a series of single-objective problems. Threshold-of-acceptability constraints are placed on the objective functions at each stage, both to limit the area of search and to mathematically guarantee convergence to a Pareto optimum.
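The staged construction described above resembles an epsilon-constraint scheme. A minimal sketch of that general idea follows (the illustrative objectives and the SciPy solver choice are assumptions, not Giesy's formulation): each stage minimizes one objective while a threshold-of-acceptability constraint caps the other, and tightening the threshold traces out Pareto-optimal points.

```python
# Minimal epsilon-constraint-style sketch (not the paper's exact algorithm):
# minimize f1 subject to a threshold-of-acceptability constraint on f2, then
# tighten the threshold stage by stage to sample the Pareto front.
import numpy as np
from scipy.optimize import minimize

def f1(x): return (x[0] - 1.0) ** 2 + x[1] ** 2      # objective minimized at each stage
def f2(x): return x[0] ** 2 + (x[1] - 2.0) ** 2      # objective held below a threshold

x0 = np.array([0.0, 0.0])
pareto = []
for tau in np.linspace(5.0, 0.5, 10):                 # thresholds of acceptability on f2
    res = minimize(f1, x0,
                   constraints=[{"type": "ineq", "fun": lambda x, t=tau: t - f2(x)}])
    if res.success:
        x0 = res.x                                    # warm-start the next stage
        pareto.append((round(f1(res.x), 3), round(f2(res.x), 3)))

print(pareto)                                         # sampled points along the Pareto front
```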
Mate choice when males are in patches: optimal strategies and good rules of thumb.
Hutchinson, John M C; Halupka, Konrad
2004-11-07
In standard mate-choice models, females encounter males sequentially and decide whether to inspect the quality of another male or to accept a male already inspected. What changes when males are clumped in patches and there is a significant cost to travel between patches? We use stochastic dynamic programming to derive optimum strategies under various assumptions. With zero costs to returning to a male in the current patch, the optimal strategy accepts males above a quality threshold which is constant whenever one or more males in the patch remain uninspected; this threshold drops when inspecting the last male in the patch, so returns may occur only then and are never to a male in a previously inspected patch. With non-zero within-patch return costs, such a two-threshold rule still performs extremely well, but a more gradual decline in acceptance threshold is optimal. Inability to return at all need not decrease performance by much. The acceptance threshold should also decline if it gets harder to discover the last males in a patch. Optimal strategies become more complex when mean male quality varies systematically between patches or years, and females estimate this in a Bayesian manner through inspecting male qualities. It can then be optimal to switch patch before inspecting all males on a patch, or, exceptionally, to return to an earlier patch. We compare performance of various rules of thumb in these environments and in ones without a patch structure. A two-threshold rule performs excellently, as do various simplifications of it. The best-of-N rule outperforms threshold rules only in non-patchy environments with between-year quality variation. The cutoff rule performs poorly.
Value of information and pricing new healthcare interventions.
Willan, Andrew R; Eckermann, Simon
2012-06-01
Previous applications of value-of-information methods to optimal clinical trial design have predominantly taken a societal decision-making perspective, implicitly assuming that healthcare costs are covered through public expenditure and trial research is funded by government or donation-based philanthropic agencies. In this paper, we consider the interaction between interrelated perspectives of a societal decision maker (e.g. the National Institute for Health and Clinical Excellence [NICE] in the UK) charged with the responsibility for approving new health interventions for reimbursement and the company that holds the patent for a new intervention. We establish optimal decision making from societal and company perspectives, allowing for trade-offs between the value and cost of research and the price of the new intervention. Given the current level of evidence, there exists a maximum (threshold) price acceptable to the decision maker. Submissions for approval with prices above this threshold will be refused. Given the current level of evidence and the decision maker's threshold price, there exists a minimum (threshold) price acceptable to the company. If the decision maker's threshold price exceeds the company's, then current evidence is sufficient, since any price between the thresholds is acceptable to both. On the other hand, if the decision maker's threshold price is lower than the company's, then no price is acceptable to both and the company's optimal strategy is to commission additional research. The methods are illustrated using a recent example from the literature.
McKenzie, Elizabeth M.; Balter, Peter A.; Stingo, Francesco C.; Jones, Jimmy; Followill, David S.; Kry, Stephen F.
2014-01-01
Purpose: The authors investigated the performance of several patient-specific intensity-modulated radiation therapy (IMRT) quality assurance (QA) dosimeters in terms of their ability to correctly identify dosimetrically acceptable and unacceptable IMRT patient plans, as determined by an in-house-designed multiple ion chamber phantom used as the gold standard. A further goal was to examine optimal threshold criteria that were consistent and based on the same criteria among the various dosimeters. Methods: The authors used receiver operating characteristic (ROC) curves to determine the sensitivity and specificity of (1) a 2D diode array undergoing anterior irradiation with field-by-field evaluation, (2) a 2D diode array undergoing anterior irradiation with composite evaluation, (3) a 2D diode array using planned irradiation angles with composite evaluation, (4) a helical diode array, (5) radiographic film, and (6) an ion chamber. This was done with a variety of evaluation criteria for a set of 15 dosimetrically unacceptable and 9 acceptable clinical IMRT patient plans, where acceptability was defined on the basis of multiple ion chamber measurements using independent ion chambers and a phantom. The area under the curve (AUC) on the ROC curves was used to compare dosimeter performance across all thresholds. Optimal threshold values were obtained from the ROC curves while incorporating considerations for cost and prevalence of unacceptable plans. Results: Using common clinical acceptance thresholds, most devices performed very poorly in terms of identifying unacceptable plans. Grouping the detector performance based on AUC showed two significantly different groups. The ion chamber, radiographic film, helical diode array, and anterior-delivered composite 2D diode array were in the better-performing group, whereas the anterior-delivered field-by-field and planned gantry angle delivery using the 2D diode array performed less well. Additionally, based on the AUCs, there was no significant difference in the performance of any device between gamma criteria of 2%/2 mm, 3%/3 mm, and 5%/3 mm. Finally, optimal cutoffs (e.g., percent of pixels passing gamma) were determined for each device, and while clinical practice commonly uses a threshold of 90% of pixels passing for most cases, these results showed variability in the optimal cutoff among devices. Conclusions: IMRT QA devices have differences in their ability to accurately detect dosimetrically acceptable and unacceptable plans. Field-by-field analysis with a MapCheck device and use of the MapCheck with a MapPhan phantom while delivering at planned rotational gantry angles resulted in a significantly poorer ability to accurately sort acceptable and unacceptable plans compared with the other techniques examined. Patient-specific IMRT QA techniques in general should be thoroughly evaluated for their ability to correctly differentiate acceptable and unacceptable plans. Additionally, optimal agreement thresholds should be identified and used, as common clinical thresholds typically worked very poorly to identify unacceptable plans.
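The ROC analysis described above can be sketched as follows with scikit-learn; the passing-rate distributions, misclassification costs, and prevalence below are invented for illustration and are not the study's data.

```python
# Sketch of an ROC analysis with a cost- and prevalence-aware optimal cutoff
# (illustrative numbers only).  Each plan has a QA score (percent of pixels passing
# gamma) and a label: 1 = dosimetrically unacceptable, 0 = acceptable.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)
unacceptable = np.clip(rng.normal(88, 6, 15), 0, 100)   # hypothetical passing rates, bad plans
acceptable = np.clip(rng.normal(96, 3, 9), 0, 100)      # hypothetical passing rates, good plans
scores = np.concatenate([unacceptable, acceptable])
labels = np.concatenate([np.ones(15), np.zeros(9)])

# Lower passing rates should flag unacceptable plans, so use score = -passing rate.
fpr, tpr, thresholds = roc_curve(labels, -scores)
print("AUC =", roc_auc_score(labels, -scores))

prevalence = 15 / 24                       # assumed fraction of unacceptable plans
cost_fn, cost_fp = 5.0, 1.0                # assumed cost of a missed bad plan vs. a false alarm
expected_cost = cost_fn * (1 - tpr) * prevalence + cost_fp * fpr * (1 - prevalence)
best = int(np.argmin(expected_cost))
print("optimal cutoff: flag plans with passing rate below %.1f%%" % (-thresholds[best]))
```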
[Loudness optimized registration of compound action potential in cochlear implant recipients].
Berger, Klaus; Hocke, Thomas; Hessel, Horst
2017-11-01
Background: Postoperative measurements of compound action potentials are not always possible because of insufficient acceptance by CI recipients. This study investigated the impact of different parameters on the acceptance of the measurements. Methods: Compound action potentials of 16 CI recipients were measured with different pulse widths. Recipients rated the loudness at the potential thresholds for the different sequences. Results: Compound action potentials obtained with greater pulse widths were rated softer than those obtained with smaller pulse widths. Conclusions: Compound action potentials measured with greater pulse widths leave a gap between the loudest acceptable presentation level and the potential threshold. This gap contributes to a higher acceptance of postoperative measurements.
Joint optimization of maintenance, buffers and machines in manufacturing lines
NASA Astrophysics Data System (ADS)
Nahas, Nabil; Nourelfath, Mustapha
2018-01-01
This article considers a series manufacturing line composed of several machines separated by intermediate buffers of finite capacity. The goal is to find the optimal number of preventive maintenance actions performed on each machine, the optimal selection of machines and the optimal buffer allocation plan that minimize the total system cost, while providing the desired system throughput level. The mean times between failures of all machines are assumed to increase when applying periodic preventive maintenance. To estimate the production line throughput, a decomposition method is used. The decision variables in the formulated optimal design problem are buffer levels, types of machines and times between preventive maintenance actions. Three heuristic approaches are developed to solve the formulated combinatorial optimization problem. The first heuristic consists of a genetic algorithm, the second is based on the nonlinear threshold accepting metaheuristic and the third is an ant colony system. The proposed heuristics are compared and their efficiency is shown through several numerical examples. It is found that the nonlinear threshold accepting algorithm outperforms the genetic algorithm and ant colony system, while the genetic algorithm provides better results than the ant colony system for longer manufacturing lines.
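For readers unfamiliar with threshold accepting, a generic sketch of the metaheuristic is shown below on a toy buffer-allocation cost function; the cost model, neighbourhood move, and threshold schedule are assumptions for illustration, not the authors' implementation.

```python
# Generic threshold-accepting sketch (not the paper's algorithm or cost model).
# A candidate move is accepted whenever it worsens the cost by less than a
# threshold that shrinks over the iterations.
import random

def cost(buffers, target=20):
    # toy surrogate: penalize total buffer space and deviation from a throughput proxy
    return sum(buffers) + 5 * abs(target - min(buffers) * len(buffers))

def neighbour(buffers):
    b = list(buffers)
    i = random.randrange(len(b))
    b[i] = max(1, b[i] + random.choice([-1, 1]))
    return b

random.seed(1)
current = [3, 3, 3, 3]
best = current
threshold = 10.0
for _ in range(2000):
    cand = neighbour(current)
    if cost(cand) - cost(current) < threshold:    # accept non-improving moves up to the threshold
        current = cand
        if cost(current) < cost(best):
            best = current
    threshold *= 0.995                            # threshold schedule (could be nonlinear)

print("best buffer allocation:", best, "cost:", cost(best))
```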
Nascimento, D L; Nascimento, F S
2012-11-01
The ability to discriminate nestmates from non-nestmates in insect societies is essential to protect colonies from conspecific invaders. The acceptance threshold hypothesis predicts that organisms whose recognition systems cannot classify recipients without error should optimize the balance between acceptance and rejection errors. In this process, cuticular hydrocarbons play an important role as recognition cues in social insects. The aims of this study were to determine whether guards exhibit a restrictive level of rejection towards chemically distinct individuals, becoming more permissive during encounters with either nestmate or non-nestmate individuals bearing chemically similar profiles. The study demonstrates that Melipona asilvai (Hymenoptera: Apidae: Meliponini) guards exhibit a flexible system of nestmate recognition according to the degree of chemical similarity between the incoming forager and their own cuticular hydrocarbon profile. Guards became less restrictive in their acceptance rates when they encountered non-nestmates with highly similar chemical profiles, which they probably mistook for nestmates, hence broadening their acceptance level.
LEDs on the threshold for use in projection systems: challenges, limitations and applications
NASA Astrophysics Data System (ADS)
Moffat, Bryce Anton
2006-02-01
The use of coloured LEDs as light sources in digital projectors depends on an optimal combination of optical, electrical and thermal parameters to meet the performance and cost targets needed to enable these products to compete in the marketplace. This paper describes the system design methodology for a digital micromirror display (DMD) based optical engine using LEDs as the light source, starting at the basic physical and geometrical parameters of the DMD and other optical elements through characterization of the LEDs to optimizing the system performance by determining optimal driving conditions. The main challenge in using LEDs is the luminous flux density, which is just at the threshold of acceptance in projection systems and thus only a fully optimized optical system with a uniformly bright set of LEDs can be used. As a result of this work we have developed two applications: a compact pocket projector and a rear projection television.
Incorporating uncertainty of management costs in sensitivity analyses of matrix population models.
Salomon, Yacov; McCarthy, Michael A; Taylor, Peter; Wintle, Brendan A
2013-02-01
The importance of accounting for economic costs when making environmental-management decisions subject to resource constraints has been increasingly recognized in recent years. In contrast, uncertainty associated with such costs has often been ignored. We developed a method, on the basis of economic theory, that accounts for the uncertainty in population-management decisions. We considered the case where, rather than taking fixed values, model parameters are random variables that represent the situation when parameters are not precisely known. Hence, the outcome is not precisely known either. Instead of maximizing the expected outcome, we maximized the probability of obtaining an outcome above a threshold of acceptability. We derived explicit analytical expressions for the optimal allocation and its associated probability, as a function of the threshold of acceptability, where the model parameters were distributed according to normal and uniform distributions. To illustrate our approach we revisited a previous study that incorporated cost-efficiency analyses in management decisions that were based on perturbation analyses of matrix population models. Incorporating derivations from this study into our framework, we extended the model to address potential uncertainties. We then applied these results to 2 case studies: management of a Koala (Phascolarctos cinereus) population and conservation of an olive ridley sea turtle (Lepidochelys olivacea) population. For low aspirations, that is, when the threshold of acceptability is relatively low, the optimal strategy was obtained by diversifying the allocation of funds. Conversely, for high aspirations, the budget was directed toward management actions with the highest potential effect on the population. The exact optimal allocation was sensitive to the choice of uncertainty model. Our results highlight the importance of accounting for uncertainty when making decisions and suggest that more effort should be placed on understanding the distributional characteristics of such uncertainty. Our approach provides a tool to improve decision making. © 2013 Society for Conservation Biology.
Convergence of decision rules for value-based pricing of new innovative drugs.
Gandjour, Afschin
2015-04-01
Given the high costs of innovative new drugs, most European countries have introduced policies for price control, in particular value-based pricing (VBP) and international reference pricing. The purpose of this study is to describe how profit-maximizing manufacturers would optimally adjust their launch sequence to these policies and how VBP countries may best respond. To decide about the launching sequence, a manufacturer must consider a tradeoff between price and sales volume in any given country as well as the effect of price in a VBP country on the price in international reference pricing countries. Based on the manufacturer's rationale, it is best for VBP countries in Europe to implicitly collude in the long term and set cost-effectiveness thresholds at the level of the lowest acceptable VBP country. This way, international reference pricing countries would also converge towards the lowest acceptable threshold in Europe.
Abe, Toshikazu; Tokuda, Yasuharu; Cook, E Francis
2011-01-01
Optimal acceptable time intervals from collapse to bystander cardiopulmonary resuscitation (CPR) for neurologically favorable outcome among adults with witnessed out-of-hospital cardiopulmonary arrest (CPA) have been unclear. Our aim was to assess the optimal acceptable thresholds of the time intervals of CPR for neurologically favorable outcome and survival using a recursive partitioning model. From January 1, 2005 through December 31, 2009, we conducted a prospective population-based observational study across Japan involving consecutive out-of-hospital CPA patients (N = 69,648) who received witnessed bystander CPR. Of the 69,648 patients, 34,605 were assigned to the derivation data set and 35,043 to the validation data set. The outcomes of interest were survival and neurologically favorable outcome at one month, defined as category one (good cerebral performance) or two (moderate cerebral disability) of the cerebral performance categories. Based on the recursive partitioning model fitted to the derivation dataset (n = 34,605) to predict neurologically favorable outcome at one month, the acceptable time intervals were 5 min from collapse to CPR initiation, 11 min from collapse to ambulance arrival, 18 min from collapse to return of spontaneous circulation (ROSC), and 19 min from collapse to hospital arrival. Among the validation dataset (n = 35,043), 209/2,292 (9.1%) of all patients with the acceptable time intervals and 1,388/2,706 (52.1%) of the subgroup with the acceptable time intervals and pre-hospital ROSC showed a neurologically favorable outcome. Initiation of CPR should occur within 5 min to obtain a neurologically favorable outcome among adults with witnessed out-of-hospital CPA. Patients with acceptable time intervals of bystander CPR and pre-hospital ROSC within 18 min had about a 50% chance of a neurologically favorable outcome.
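A toy sketch of the recursive partitioning idea is given below using a shallow scikit-learn decision tree on simulated data; the simulated delays, outcome model, and resulting cutpoints are illustrative only and do not reproduce the registry thresholds above.

```python
# Sketch of recursive partitioning on simulated data (not the registry data above):
# a shallow decision tree splits collapse-to-CPR and collapse-to-ROSC times, and the
# learned cutpoints play the role of "acceptable" time-interval thresholds.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(42)
n = 5000
t_cpr = rng.exponential(4.0, n)              # minutes from collapse to CPR (simulated)
t_rosc = t_cpr + rng.exponential(12.0, n)    # minutes from collapse to ROSC (simulated)
# Simulated probability of a neurologically favorable outcome decays with both delays.
p_good = 0.6 * np.exp(-t_cpr / 6.0) * np.exp(-t_rosc / 25.0)
outcome = rng.random(n) < p_good

tree = DecisionTreeClassifier(max_depth=2, min_samples_leaf=200)
tree.fit(np.column_stack([t_cpr, t_rosc]), outcome)
print(export_text(tree, feature_names=["collapse_to_CPR_min", "collapse_to_ROSC_min"]))
```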
Mujagic, Samir; Sarkander, Jana; Erber, Barbara; Erber, Joachim
2010-01-01
The experiments analyze different forms of learning and 24-h retention in the field and in the laboratory in bees that accept sucrose at either low (≤3%) or high (≥30% or ≥50%) concentrations. In the field we studied color learning at a food site and at the hive entrance. In the laboratory, olfactory conditioning of the proboscis extension response (PER) was examined. In the color learning protocol at a feeder, bees with low sucrose acceptance thresholds (≤3%) show significantly faster and better acquisition than bees with high thresholds (≥50%). Retention after 24 h is significantly different between the two groups of bees and the choice reactions converge. Bees with low and high acceptance thresholds in the field show no differences in the sucrose sensitivity PER tests in the laboratory. Acceptance thresholds in the field are thus a more sensitive behavioral measure than PER responsiveness in the laboratory. Bees with low acceptance thresholds show significantly better acquisition and 24-h retention in olfactory learning in the laboratory compared to bees with high thresholds. In the learning protocol at the hive entrance, bees learn without sucrose reward that a color cue signals an open entrance. In this experiment, bees with high sucrose acceptance thresholds showed significantly better learning and reversal learning than bees with low thresholds. These results demonstrate that sucrose acceptance thresholds affect only those forms of learning in which sucrose serves as the reward. The results also show that foraging behavior in the field is a good predictor of learning behavior in the field and in the laboratory.
Image quality, threshold contrast and mean glandular dose in CR mammography
NASA Astrophysics Data System (ADS)
Jakubiak, R. R.; Gamba, H. R.; Neves, E. B.; Peixoto, J. E.
2013-09-01
In many countries, computed radiography (CR) systems represent the majority of equipment used in digital mammography. This study presents a method for optimizing image quality and dose in CR mammography of patients with breast thicknesses between 45 and 75 mm. Initially, clinical images of 67 patients (group 1) were analyzed by three experienced radiologists, reporting on anatomical structures, noise and contrast in low and high pixel value areas, and image sharpness and contrast. Exposure parameters (kV, mAs and target/filter combination) used in the examinations of these patients were reproduced to determine the contrast-to-noise ratio (CNR) and mean glandular dose (MGD). The parameters were also used to radiograph a CDMAM (version 3.4) phantom (Artinis Medical Systems, The Netherlands) for image threshold contrast evaluation. After that, different breast thicknesses were simulated with polymethylmethacrylate layers and various sets of exposure parameters were used in order to determine optimal radiographic parameters. For each simulated breast thickness, optimal beam quality was defined as giving a target CNR to reach the threshold contrast of CDMAM images for acceptable MGD. These results were used for adjustments in the automatic exposure control (AEC) by the maintenance team. Using optimized exposure parameters, clinical images of 63 patients (group 2) were evaluated as described above. Threshold contrast, CNR and MGD for such exposure parameters were also determined. Results showed that the proposed optimization method was effective for all breast thicknesses studied in phantoms. The best result was found for breasts of 75 mm. While in group 1 there was no detection of the 0.1 mm critical diameter detail with threshold contrast below 23%, after the optimization, detection occurred in 47.6% of the images. There was also an average MGD reduction of 7.5%. The clinical image quality criteria were met in 91.7% of images for all breast thicknesses evaluated in both patient groups. Finally, this study also concluded that the use of the AEC of the x-ray unit based on a constant dose to the detector may make it difficult for CR systems to operate under optimal conditions. More studies are needed to evaluate the compatibility between CR systems and optimization methodologies, including the method proposed here. Most methods are developed on phantoms, so comparative studies that include clinical images must also be developed.
Spirometry in 3-5-year-old children with asthma.
Nève, Véronique; Edmé, Jean-Louis; Devos, Patrick; Deschildre, Antoine; Thumerelle, Caroline; Santos, Clarisse; Methlin, Catherine-Marie; Matran, Murielle; Matran, Régis
2006-08-01
Spirometry with incentive games was applied to 207 2-5-year-old preschool children (PSC) with asthma in order to refine the quality-control criteria proposed by Aurora et al. (Am J Respir Crit Care Med 2004;169:1152-1159). The data set in our study was much larger than that of Aurora et al. (Am J Respir Crit Care Med 2004;169:1152-1159), who studied 42 children with cystic fibrosis and 37 healthy controls. At least two acceptable maneuvers were obtained in 178 (86%) children. Data were focused on 3-5-year-old children (n = 171). The proportion of children achieving a larger number of thresholds for each quality-control criterion (backward-extrapolated volume (Vbe), Vbe in percent of forced vital capacity (FVC, Vbe/FVC), time-to-peak expiratory flow (time-to-PEF), and the difference (Δ) between the two "best" curves in FVC (ΔFVC), forced expiratory volume in 1 sec (ΔFEV1), and forced expiratory volume in 0.5 sec (ΔFEV0.5)) was calculated, and cumulative plots were obtained. The optimal threshold was determined for all ages by the derivative function of the rate-of-success versus threshold curves, close to the inflexion point. The following thresholds were defined for acceptability: Vbe
Hozo, Iztok; Schell, Michael J; Djulbegovic, Benjamin
2008-07-01
The absolute truth in research is unobtainable, as no evidence or research hypothesis is ever 100% conclusive. Therefore, all data and inferences can in principle be considered "inconclusive." Scientific inference and decision-making need to take into account errors, which are unavoidable in the research enterprise. The errors can occur at the level of conclusions, which aim to discern the truthfulness of a research hypothesis based on the accuracy of the research evidence and hypothesis, and at the level of decisions, the goal of which is to enable optimal decision-making under present and specific circumstances. To optimize the chance of both correct conclusions and correct decisions, a synthesis of all major statistical approaches to clinical research is needed. The integration of these approaches (frequentist, Bayesian, and decision-analytic) can be accomplished through formal risk:benefit (R:B) analysis. This chapter illustrates the rational choice of a research hypothesis using R:B analysis based on a decision-theoretic expected utility framework and the concept of "acceptable regret" to calculate the threshold probability of the "truth" above which the benefit of accepting a research hypothesis outweighs its risks.
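One simplified way to express such a threshold probability (an expected-utility sketch, not the chapter's full acceptable-regret derivation) is shown below; the symbols B and R for benefit and risk are assumptions for illustration.

```python
# Simplified expected-utility sketch (not the chapter's full acceptable-regret model).
# If accepting a true hypothesis yields benefit B and accepting a false one costs R,
# the expected net utility of acceptance is p*B - (1 - p)*R, which is positive when
# p > R / (R + B).  That ratio is the threshold probability of the "truth".
def threshold_probability(risk, benefit):
    return risk / (risk + benefit)

print(threshold_probability(risk=1.0, benefit=4.0))   # accept once P(hypothesis true) > 0.2
```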
Optimal thresholds for the estimation of area rain-rate moments by the threshold method
NASA Technical Reports Server (NTRS)
Short, David A.; Shimizu, Kunio; Kedem, Benjamin
1993-01-01
Optimization of the threshold method, achieved by determination of the threshold that maximizes the correlation between an area-average rain-rate moment and the area coverage of rain rates exceeding the threshold, is demonstrated empirically and theoretically. Empirical results for a sequence of GATE radar snapshots show optimal thresholds of 5 and 27 mm/h for the first and second moments, respectively. Theoretical optimization of the threshold method by the maximum-likelihood approach of Kedem and Pavlopoulos (1991) predicts optimal thresholds near 5 and 26 mm/h for lognormally distributed rain rates with GATE-like parameters. The agreement between theory and observations suggests that the optimal threshold can be understood as arising due to sampling variations, from snapshot to snapshot, of a parent rain-rate distribution. Optimal thresholds for gamma and inverse Gaussian distributions are also derived and compared.
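A numerical sketch of the threshold optimization is given below: lognormal rain-rate snapshots are simulated, and the threshold maximizing the correlation between the area-average rain rate and the fractional coverage above the threshold is located. The distribution parameters are invented, not the GATE values.

```python
# Numerical sketch of the threshold-method optimization with lognormal rain rates
# (illustrative parameters, not the GATE values).  For each candidate threshold,
# correlate the snapshot-mean rain rate with the fractional area exceeding it.
import numpy as np

rng = np.random.default_rng(0)
n_snapshots, n_pixels = 200, 2000
# Let the lognormal scale parameter vary between snapshots to mimic a fluctuating parent distribution.
mu = rng.normal(0.0, 0.5, n_snapshots)
rain = np.exp(mu[:, None] + 1.2 * rng.standard_normal((n_snapshots, n_pixels)))

area_mean = rain.mean(axis=1)                           # first moment per snapshot
thresholds = np.linspace(0.5, 30.0, 60)                 # mm/h
corr = [np.corrcoef(area_mean, (rain > t).mean(axis=1))[0, 1] for t in thresholds]
print("optimal threshold for the first moment: %.1f mm/h" % thresholds[int(np.argmax(corr))])
```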
NASA Astrophysics Data System (ADS)
Tajuddin, Wan Ahmad
1994-02-01
Ease in finding the configuration at the global energy minimum in a symmetric neural network is important for combinatorial optimization problems. We carry out a comprehensive survey of available strategies for seeking global minima by comparing their performances in the binary representation problem. We recall our previous comparison of steepest descent with analog dynamics, genetic hill-climbing, simulated diffusion, simulated annealing, threshold accepting and simulated tunneling. To this, we add comparisons to other strategies including taboo search and one with field-ordered updating.
Eaton, Mitchell J.; Martin, Julien; Nichols, James D.; McIntyre, Carol; McCluskie, Maggie C.; Schmutz, Joel A.; Lubow, Bruce L.; Runge, Michael C.; Edited by Guntenspergen, Glenn R.
2014-01-01
In this chapter, we demonstrate the application of the various classes of thresholds, detailed in earlier chapters and elsewhere, via an actual but simplified natural resource management case study. We intend our example to provide the reader with the ability to recognize and apply the theoretical concepts of utility, ecological and decision thresholds to management problems through a formalized decision-analytic process. Our case study concerns the management of human recreational activities in Alaska’s Denali National Park, USA, and the possible impacts of such activities on nesting Golden Eagles, Aquila chrysaetos. Managers desire to allow visitors the greatest amount of access to park lands, provided that eagle nesting-site occupancy is maintained at a level determined to be acceptable by the managers themselves. As these two management objectives are potentially at odds, we treat minimum desired occupancy level as a utility threshold which, then, serves to guide the selection of annual management alternatives in the decision process. As human disturbance is not the only factor influencing eagle occupancy, we model nesting-site dynamics as a function of both disturbance and prey availability. We incorporate uncertainty in these dynamics by considering several hypotheses, including a hypothesis that site occupancy is affected only at a threshold level of prey abundance (i.e., an ecological threshold effect). By considering competing management objectives and accounting for two forms of thresholds in the decision process, we are able to determine the optimal number of annual nesting-site restrictions that will produce the greatest long-term benefits for both eagles and humans. Setting a utility threshold of 75 occupied sites, out of a total of 90 potential nesting sites, the optimization specified a decision threshold at approximately 80 occupied sites. At the point that current occupancy falls below 80 sites, the recommended decision is to begin restricting access to humans; above this level, it is recommended that all eagle territories be opened to human recreation. We evaluated the sensitivity of the decision threshold to uncertainty in system dynamics and to management objectives (i.e., to the utility threshold).
Color difference threshold determination for acrylic denture base resins.
Ren, Jiabao; Lin, Hong; Huang, Qingmei; Liang, Qifan; Zheng, Gang
2015-01-01
This study aimed to set evaluation indicators, i.e., perceptibility and acceptability color difference thresholds, of color stability for acrylic denture base resins for a spectrophotometric assessing method, which offered an alternative to the visual method described in ISO 20795-1:2013. A total of 291 disk specimens 50±1 mm in diameter and 0.5±0.1 mm thick were prepared (ISO 20795-1:2013) and processed through radiation tests in an accelerated aging chamber (ISO 7491:2000) for increasing times of 0 to 42 hours. Color alterations were measured with a spectrophotometer and evaluated using the CIE L*a*b* colorimetric system. Color differences were calculated through the CIEDE2000 color difference formula. Thirty-two dental professionals without color vision deficiencies completed perceptibility and acceptability assessments under controlled conditions in vitro. An S-curve fitting procedure was used to analyze the 50:50% perceptibility and acceptability thresholds. Furthermore, perceptibility and acceptability against the differences of the three color attributes, lightness, chroma, and hue, were also investigated. According to the S-curve fitting procedure, the 50:50% perceptibility threshold was 1.71 ΔE00 (r² = 0.88) and the 50:50% acceptability threshold was 4.00 ΔE00 (r² = 0.89). Within the limitations of this study, 1.71/4.00 ΔE00 could be used as perceptibility/acceptability thresholds for acrylic denture base resins.
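The S-curve fitting step can be sketched with a logistic fit, as below; the simulated observer judgments and the fitted midpoint are illustrative and unrelated to the study's 1.71/4.00 ΔE00 values.

```python
# Sketch of S-curve fitting to locate a 50:50% threshold (simulated data).
# The fraction of observers judging a pair "acceptable" is modeled as a logistic
# function of the color difference, and the 50:50% point is the fitted midpoint.
import numpy as np
from scipy.optimize import curve_fit

def logistic(dE, midpoint, slope):
    return 1.0 / (1.0 + np.exp(slope * (dE - midpoint)))

rng = np.random.default_rng(3)
dE00 = np.linspace(0.5, 8.0, 25)                      # color differences of the sample pairs
true_fraction = logistic(dE00, 4.0, 1.5)
observed = rng.binomial(30, true_fraction) / 30.0     # 30 observers per pair

(midpoint, slope), _ = curve_fit(logistic, dE00, observed, p0=[3.0, 1.0])
print("50:50%% acceptability threshold ~ %.2f dE00" % midpoint)
```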
Monro, Donald M; Rakshit, Soumyadip; Zhang, Dexin
2007-04-01
This paper presents a novel iris coding method based on differences of discrete cosine transform (DCT) coefficients of overlapped angular patches from normalized iris images. The feature extraction capabilities of the DCT are optimized on the two largest publicly available iris image data sets, 2,156 images of 308 eyes from the CASIA database and 2,955 images of 150 eyes from the Bath database. On this data, we achieve 100 percent Correct Recognition Rate (CRR) and perfect Receiver-Operating Characteristic (ROC) Curves with no registered false accepts or rejects. Individual feature bit and patch position parameters are optimized for matching through a product-of-sum approach to Hamming distance calculation. For verification, a variable threshold is applied to the distance metric and the False Acceptance Rate (FAR) and False Rejection Rate (FRR) are recorded. A new worst-case metric is proposed for predicting practical system performance in the absence of matching failures, and the worst-case theoretical Equal Error Rate (EER) is predicted to be as low as 2.59 × 10⁻⁴ on the available data sets.
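The verification-threshold analysis can be sketched as follows: sweep a decision threshold over match scores, record FAR and FRR, and read off the equal error rate. The Hamming-distance distributions below are simulated, not taken from the CASIA or Bath sets.

```python
# Sketch of FAR/FRR/EER computation from match scores (simulated Hamming distances,
# not the CASIA or Bath data).  Genuine comparisons should give small distances,
# impostor comparisons larger ones; a claim is accepted when distance <= threshold.
import numpy as np

rng = np.random.default_rng(7)
genuine = rng.normal(0.25, 0.05, 2000)      # hypothetical genuine Hamming distances
impostor = rng.normal(0.47, 0.03, 20000)    # hypothetical impostor Hamming distances

thresholds = np.linspace(0.0, 0.6, 601)
far = np.array([(impostor <= t).mean() for t in thresholds])   # false accepts
frr = np.array([(genuine > t).mean() for t in thresholds])     # false rejects
eer_idx = int(np.argmin(np.abs(far - frr)))
print("EER ~ %.4f at threshold %.3f" % ((far[eer_idx] + frr[eer_idx]) / 2, thresholds[eer_idx]))
```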
Dental ceramics: CIEDE2000 acceptability thresholds for lightness, chroma and hue differences.
Perez, María Del Mar; Ghinea, Razvan; Herrera, Luis Javier; Ionescu, Ana Maria; Pomares, Héctor; Pulgar, Rosa; Paravina, Rade D
2011-12-01
To determine the visual 50:50% acceptability thresholds for lightness, chroma and hue for dental ceramics using the CIEDE2000(KL:KC:KH) formula, and to evaluate the formula performance using different parametric factors. A 30-observer panel evaluated three subsets of ceramic samples: lightness subset (|ΔL'/ΔE00| ≥ 0.9), chroma subset (|ΔC'/ΔE00| ≥ 0.9) and hue subset (|ΔH'/ΔE00| ≥ 0.9). A Takagi-Sugeno-Kang Fuzzy Approximation was used as the fitting procedure, and the 50:50% acceptability thresholds were calculated. A t-test was used in statistical analysis of the threshold values. The performance of the CIEDE2000(1:1:1) and CIEDE2000(2:1:1) colour difference formulas against visual results was tested using the PF/3 performance factor. The 50:50% CIEDE2000 acceptability thresholds were ΔL' = 2.92 (95% CI 1.22-4.96; r² = 0.76), ΔC' = 2.52 (95% CI 1.31-4.19; r² = 0.71) and ΔH' = 1.90 (95% CI 1.63-2.15; r² = 0.88). The 50:50% acceptability threshold for colour difference (ΔE') for CIEDE2000(1:1:1) was 1.87, whilst the corresponding value for CIEDE2000(2:1:1) was 1.78. The PF/3 values were 139.86 for CIEDE2000(1:1:1), and 132.31 for CIEDE2000(2:1:1). There was a statistically significant difference amongst the CIEDE2000 50:50% acceptability thresholds for lightness, chroma and hue differences for dental ceramics. The CIEDE2000(2:1:1) formula performed better than CIEDE2000(1:1:1).
NASA Astrophysics Data System (ADS)
Oby, Emily R.; Perel, Sagi; Sadtler, Patrick T.; Ruff, Douglas A.; Mischel, Jessica L.; Montez, David F.; Cohen, Marlene R.; Batista, Aaron P.; Chase, Steven M.
2016-06-01
Objective. A traditional goal of neural recording with extracellular electrodes is to isolate action potential waveforms of an individual neuron. Recently, in brain-computer interfaces (BCIs), it has been recognized that threshold crossing events of the voltage waveform also convey rich information. To date, the threshold for detecting threshold crossings has been selected to preserve single-neuron isolation. However, the optimal threshold for single-neuron identification is not necessarily the optimal threshold for information extraction. Here we introduce a procedure to determine the best threshold for extracting information from extracellular recordings. We apply this procedure in two distinct contexts: the encoding of kinematic parameters from neural activity in primary motor cortex (M1), and visual stimulus parameters from neural activity in primary visual cortex (V1). Approach. We record extracellularly from multi-electrode arrays implanted in M1 or V1 in monkeys. Then, we systematically sweep the voltage detection threshold and quantify the information conveyed by the corresponding threshold crossings. Main Results. The optimal threshold depends on the desired information. In M1, velocity is optimally encoded at higher thresholds than speed; in both cases the optimal thresholds are lower than are typically used in BCI applications. In V1, information about the orientation of a visual stimulus is optimally encoded at higher thresholds than is visual contrast. A conceptual model explains these results as a consequence of cortical topography. Significance. How neural signals are processed impacts the information that can be extracted from them. Both the type and quality of information contained in threshold crossings depend on the threshold setting. There is more information available in these signals than is typically extracted. Adjusting the detection threshold to the parameter of interest in a BCI context should improve our ability to decode motor intent, and thus enhance BCI control. Further, by sweeping the detection threshold, one can gain insights into the topographic organization of the nearby neural tissue.
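A toy sketch of the threshold sweep is shown below: synthetic traces whose threshold-crossing counts depend on a two-condition stimulus are generated, and each candidate detection threshold is scored by the mutual information between crossing counts and the stimulus. The signal model and numbers are assumptions, not the M1/V1 recordings.

```python
# Toy sketch of the detection-threshold sweep (synthetic voltage traces, not the M1/V1 data).
# For each candidate threshold, count sub-threshold samples per trial and score the
# threshold by the mutual information between those counts and the stimulus condition.
import numpy as np
from sklearn.metrics import mutual_info_score

rng = np.random.default_rng(1)
n_trials, n_samples = 400, 1000
stimulus = rng.integers(0, 2, n_trials)                  # two stimulus conditions
rates = np.where(stimulus == 1, 0.02, 0.008)             # spikes per sample, condition-dependent

def trial_trace(rate):
    spikes = rng.random(n_samples) < rate
    trace = rng.normal(0.0, 1.0, n_samples)              # background noise (a.u.)
    trace[spikes] -= rng.uniform(3.0, 6.0, spikes.sum()) # negative-going spike deflections
    return trace

traces = np.array([trial_trace(r) for r in rates])

for thr in [-2.0, -2.5, -3.0, -3.5, -4.0, -4.5]:
    counts = (traces < thr).sum(axis=1)                  # threshold-crossing samples per trial
    mi = mutual_info_score(stimulus, counts)
    print("threshold %.1f sd: MI = %.3f nats" % (thr, mi))
```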
The nearest neighbor and the Bayes error rates.
Loizou, G; Maybank, S J
1987-02-01
The (k, l) nearest neighbor method of pattern classification is compared to the Bayes method. If the two acceptance rates are equal then the asymptotic error rates satisfy the inequalities E_{k,l+1} ≤ E*(λ) ≤ E_{k,l} ≤ dE*(λ), where d is a function of k, l, and the number of pattern classes, and λ is the reject threshold for the Bayes method. An explicit expression for d is given which is optimal in the sense that for some probability distributions E_{k,l} and dE*(λ) are equal.
Meeting the challenges of developing LED-based projection displays
NASA Astrophysics Data System (ADS)
Geißler, Enrico
2006-04-01
The main challenge in developing a LED-based projection system is to meet the brightness requirements of the market. Therefore a balanced combination of optical, electrical and thermal parameters must be reached to achieve these performance and cost targets. This paper describes the system design methodology for a digital micromirror display (DMD) based optical engine using LEDs as the light source, starting at the basic physical and geometrical parameters of the DMD and other optical elements through characterization of the LEDs to optimizing the system performance by determining optimal driving conditions. LEDs have a luminous flux density which is just at the threshold of acceptance in projection systems and thus only a fully optimized optical system with a matched set of LEDs can be used. This work resulted in two projection engines, one for a compact pocket projector and the other for a rear projection television, both of which are currently in commercialization.
Modified Discrete Grey Wolf Optimizer Algorithm for Multilevel Image Thresholding
Sun, Lijuan; Guo, Jian; Xu, Bin; Li, Shujing
2017-01-01
The computation of image segmentation becomes more complicated as the number of thresholds increases, and the selection and application of thresholds in image thresholding has at the same time become an NP problem. The paper puts forward the modified discrete grey wolf optimizer algorithm (MDGWO), which improves the optimal-solution updating mechanism of the search agents by means of weights. Taking Kapur's entropy as the optimized function and based on the discreteness of thresholds in image segmentation, the paper first discretizes the grey wolf optimizer (GWO) and then proposes a new attack strategy that uses a weight coefficient to replace the search formula for the optimal solution used in the original algorithm. The experimental results show that MDGWO can search out the optimal thresholds efficiently and precisely, and that they are very close to the results found by exhaustive searches. In comparison with electromagnetism optimization (EMO), differential evolution (DE), the Artificial Bee Colony (ABC), and the classical GWO, it is concluded that MDGWO has advantages over the latter four in terms of image segmentation quality, objective function values, and their stability.
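For context, a sketch of the objective being optimized is given below: Kapur's entropy for two thresholds, maximized by brute force on a synthetic histogram. MDGWO and the other metaheuristics compared in the paper are ways of avoiding this exhaustive search as the number of thresholds grows.

```python
# Sketch of Kapur's entropy for two-level thresholding, maximized by brute force on
# a synthetic 8-bit histogram (the metaheuristics in the paper replace this search).
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
pixels = np.concatenate([rng.normal(60, 10, 30000),
                         rng.normal(130, 12, 40000),
                         rng.normal(200, 8, 30000)]).clip(0, 255).astype(int)
hist = np.bincount(pixels, minlength=256).astype(float)
p = hist / hist.sum()

def kapur(thresholds):
    # Sum of the Shannon entropies of the classes defined by the thresholds.
    edges = [0] + list(thresholds) + [256]
    total = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        w = p[lo:hi].sum()
        if w <= 0:
            return -np.inf
        q = p[lo:hi] / w
        q = q[q > 0]
        total += -(q * np.log(q)).sum()
    return total

best = max(combinations(range(1, 256), 2), key=kapur)
print("optimal thresholds by Kapur's entropy:", best)
```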
Acceptance threshold theory can explain occurrence of homosexual behaviour.
Engel, Katharina C; Männer, Lisa; Ayasse, Manfred; Steiger, Sandra
2015-01-01
Same-sex sexual behaviour (SSB) has been documented in a wide range of animals, but its evolutionary causes are not well understood. Here, we investigated SSB in the light of Reeve's acceptance threshold theory. When recognition is not error-proof, the acceptance threshold used by males to recognize potential mating partners should be flexibly adjusted to maximize the fitness pay-off between the costs of erroneously accepting males and the benefits of accepting females. By manipulating male burying beetles' search time for females and their reproductive potential, we influenced their perceived costs of making an acceptance or rejection error. As predicted, when the costs of rejecting females increased, males exhibited more permissive discrimination decisions and showed high levels of SSB; when the costs of accepting males increased, males were more restrictive and showed low levels of SSB. Our results support the idea that in animal species, in which the recognition cues of females and males overlap to a certain degree, SSB is a consequence of an adaptive discrimination strategy to avoid the costs of making rejection errors. © 2015 The Author(s) Published by the Royal Society. All rights reserved.
Leong, Tora; Rehman, Michaela B.; Pastormerlo, Luigi Emilio; Harrell, Frank E.; Coats, Andrew J. S.; Francis, Darrel P.
2014-01-01
Background Clinicians are sometimes advised to make decisions using thresholds in measured variables, derived from prognostic studies. Objectives We studied why there are conflicting apparently-optimal prognostic thresholds, for example in exercise peak oxygen uptake (pVO2), ejection fraction (EF), and Brain Natriuretic Peptide (BNP) in heart failure (HF). Data Sources and Eligibility Criteria Studies testing pVO2, EF or BNP prognostic thresholds in heart failure, published between 1990 and 2010, listed on Pubmed. Methods First, we examined studies testing pVO2, EF or BNP prognostic thresholds. Second, we created repeated simulations of 1500 patients to identify whether an apparently-optimal prognostic threshold indicates step change in risk. Results 33 studies (8946 patients) tested a pVO2 threshold. 18 found it prognostically significant: the actual reported threshold ranged widely (10–18 ml/kg/min) but was overwhelmingly controlled by the individual study population's mean pVO2 (r = 0.86, p<0.00001). In contrast, the 15 negative publications were testing thresholds 199% further from their means (p = 0.0001). Likewise, of 35 EF studies (10220 patients), the thresholds in the 22 positive reports were strongly determined by study means (r = 0.90, p<0.0001). Similarly, in the 19 positives of 20 BNP studies (9725 patients): r = 0.86 (p<0.0001). Second, survival simulations always discovered a “most significant” threshold, even when there was definitely no step change in mortality. With linear increase in risk, the apparently-optimal threshold was always near the sample mean (r = 0.99, p<0.001). Limitations This study cannot report the best threshold for any of these variables; instead it explains how common clinical research procedures routinely produce false thresholds. Key Findings First, shifting (and/or disappearance) of an apparently-optimal prognostic threshold is strongly determined by studies' average pVO2, EF or BNP. Second, apparently-optimal thresholds always appear, even with no step in prognosis. Conclusions Emphatic therapeutic guidance based on thresholds from observational studies may be ill-founded. We should not assume that optimal thresholds, or any thresholds, exist. PMID:24475020
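The paper's simulation argument can be sketched compactly as below, using a binary outcome instead of survival times: risk changes linearly with the marker, with no true step, yet the cutpoint with the smallest p-value lands near the sample mean. All numbers are invented for illustration.

```python
# Simplified sketch of the simulation argument (binary outcome rather than survival):
# event risk falls linearly with the marker (as with pVO2), with no true step, yet the
# cutpoint giving the smallest p-value sits close to the sample mean.
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(5)
n = 1500
marker = rng.normal(14.0, 4.0, n)                    # e.g., a pVO2-like variable
p_event = np.clip(0.55 - 0.02 * marker, 0.05, 0.95)  # linear risk, no threshold effect
event = rng.random(n) < p_event

best_p, best_cut = 1.0, None
for cut in np.percentile(marker, np.arange(10, 91, 2)):
    low, high = marker < cut, marker >= cut
    table = [[event[low].sum(), (~event[low]).sum()],
             [event[high].sum(), (~event[high]).sum()]]
    chi2, p, dof, _ = chi2_contingency(table)
    if p < best_p:
        best_p, best_cut = p, cut

print("sample mean = %.1f, 'optimal' threshold = %.1f" % (marker.mean(), best_cut))
```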
Optimization of airport security process
NASA Astrophysics Data System (ADS)
Wei, Jianan
2017-05-01
To facilitate passenger travel while ensuring public safety, the airport security process and its scheduling are optimized. A stochastic Petri net is used to simulate the single-channel security process, the reachability graph is drawn, and a homogeneous Markov chain is constructed to analyze the performance of the security process network and find the bottleneck that limits passenger throughput. The initial state opens one security channel, and the number of open channels then follows the changing passenger flow. When passengers arrive at a rate that exceeds the processing capacity of the open security channels, they are queued. The moment when passenger queuing time reaches an acceptable threshold is taken as the time to open or close the next channel, and simulation of this dynamic scheduling of security channels reduces passenger queuing time.
On the optimal z-score threshold for SISCOM analysis to localize the ictal onset zone.
De Coster, Liesbeth; Van Laere, Koen; Cleeren, Evy; Baete, Kristof; Dupont, Patrick; Van Paesschen, Wim; Goffin, Karolien E
2018-04-17
In epilepsy patients, SISCOM or subtraction ictal single photon emission computed tomography co-registered to magnetic resonance imaging has become a routinely used, non-invasive technique to localize the ictal onset zone (IOZ). Thresholding of clusters with a predefined number of standard deviations from normality (z-score) is generally accepted to localize the IOZ. In this study, we aimed to assess the robustness of this parameter in a group of patients with well-characterized drug-resistant epilepsy in whom the exact location of the IOZ was known after successful epilepsy surgery. Eighty patients underwent preoperative SISCOM and were seizure free for a minimum of 1 year postoperatively. SISCOMs with z-score thresholds of 2 and 1.5 were analyzed by two experienced readers separately, blinded to the clinical ground truth data. Their reported location of the IOZ was compared with the operative resection zone. Furthermore, confidence scores of the SISCOM IOZ were compared for the two thresholds. Visual reporting with a z-score threshold of 1.5 and 2 showed no statistically significant difference in localizing correspondence with the ground truth (70 vs. 72% respectively, p = 0.17). Interrater agreement was moderate (κ = 0.65) at the threshold of 1.5, but high (κ = 0.84) at a threshold of 2, where reviewers were also significantly more confident (p < 0.01). SISCOM is a clinically useful, routinely used modality in the preoperative work-up in many epilepsy surgery centers. We found no significant difference in the localizing value of the IOZ using a threshold of 1.5 or 2, but interrater agreement and reader confidence were higher using a z-score threshold of 2.
Translucency thresholds for dental materials.
Salas, Marianne; Lucena, Cristina; Herrera, Luis Javier; Yebra, Ana; Della Bona, Alvaro; Pérez, María M
2018-05-12
To determine the translucency acceptability and perceptibility thresholds for dental resin composites using the CIEDE2000 and CIELAB color difference formulas. A 30-observer panel performed perceptibility and acceptability judgments on 50 pairs of resin composite discs (diameter: 10 mm; thickness: 1 mm). Disc pair differences for the Translucency Parameter (ΔTP) were calculated using both color difference formulas (ΔTP00 ranged from 0.11 to 7.98, and ΔTPab ranged from 0.01 to 12.79). A Takagi-Sugeno-Kang (TSK) Fuzzy Approximation was used as the fitting procedure. From the resultant fitting curves, the 95% confidence intervals were estimated and the 50:50% translucency perceptibility and acceptability thresholds (TPT and TAT) were calculated. Differences between thresholds were statistically analyzed using Student t tests (α=0.05). The CIEDE2000 50:50% TPT was 0.62 and the TAT was 2.62. The corresponding CIELAB values were 1.33 and 4.43, respectively. Translucency perceptibility and acceptability thresholds were significantly different for the two color difference formulas (p=0.01 for TPT and p=0.005 for TAT). The CIEDE2000 color difference formula provided a better data fit than the CIELAB formula. The visual translucency difference thresholds determined with the CIEDE2000 color difference formula can serve as reference values in the selection of resin composites and the evaluation of their clinical performance.
Pittara, Melpo; Theocharides, Theocharis; Orphanidou, Christina
2017-07-01
A new method for deriving pulse rate from PPG obtained from ambulatory patients is presented. The method employs Ensemble Empirical Mode Decomposition to identify the pulsatile component from noise-corrupted PPG, and then uses a set of physiologically-relevant rules followed by adaptive thresholding, in order to estimate the pulse rate in the presence of noise. The method was optimized and validated using 63 hours of data obtained from ambulatory hospital patients. The F1 score obtained with respect to expertly annotated data was 0.857 and the mean absolute errors of estimated pulse rates with respect to heart rates obtained from ECG collected in parallel were 1.72 bpm for "good" quality PPG and 4.49 bpm for "bad" quality PPG. Both errors are within the clinically acceptable margin-of-error for pulse rate/heart rate measurements, showing the promise of the proposed approach for inclusion in next generation wearable sensors.
Quantifying ecological thresholds from response surfaces
Heather E. Lintz; Bruce McCune; Andrew N. Gray; Katherine A. McCulloh
2011-01-01
Ecological thresholds are abrupt changes of ecological state. While an ecological threshold is a widely accepted concept, most empirical methods detect them in time or across geographic space. Although useful, these approaches do not quantify the direct drivers of threshold response. Causal understanding of thresholds detected empirically requires their investigation...
NASA Astrophysics Data System (ADS)
Kitt, R.; Kalda, J.
2006-03-01
The question of the optimal portfolio is addressed. The conventional Markowitz portfolio optimisation is discussed and the shortcomings due to non-Gaussian security returns are outlined. A method is proposed to minimise the likelihood of extreme non-Gaussian drawdowns of the portfolio value. The theory is called Leptokurtic because it minimises the effects of the "fat tails" of returns. The leptokurtic portfolio theory provides an optimal portfolio for investors who define their risk-aversion as unwillingness to experience sharp drawdowns in asset prices. Two types of risks in asset returns are defined: a fluctuation risk, which has a Gaussian distribution, and a drawdown risk, which deals with the distribution tails. These risks are quantitatively measured by defining the "noise kernel": an ellipsoidal cloud of points in the space of asset returns. The size of the ellipse is controlled with the threshold parameter: the larger the threshold parameter, the larger the returns that are accepted as normal fluctuations. The return vectors falling into the kernel are used for the calculation of the fluctuation risk. Analogously, the data points falling outside the kernel are used for the calculation of drawdown risks. As a result the portfolio optimisation problem becomes three-dimensional: in addition to the return, there are two types of risks involved. The optimal portfolio for drawdown-averse investors is the portfolio minimising the variance outside the noise kernel. The theory has been tested with the MSCI North America, Europe and Pacific total return stock indices.
Multiobjective hedging rules for flood water conservation
NASA Astrophysics Data System (ADS)
Ding, Wei; Zhang, Chi; Cai, Ximing; Li, Yu; Zhou, Huicheng
2017-03-01
Flood water conservation can be beneficial for water use, especially in areas with water stress, but can also pose additional flood risk. The potential of flood water conservation is affected by many factors, especially decision makers' preference for water conservation and reservoir inflow forecast uncertainty. This paper discusses the individual and joint effects of these two factors on the trade-off between flood control and water conservation, using a multiobjective, two-stage reservoir optimal operation model. It is shown that hedging between current water conservation and future flood control exists only when forecast uncertainty or decision makers' preference is within a certain range, beyond which hedging is trivial and the multiobjective optimization problem reduces to a single-objective problem with either flood control or water conservation. Different types of hedging rules are identified for different levels of flood water conservation preference, forecast uncertainty, acceptable flood risk, and reservoir storage capacity. Critical values of decision preference (represented by a weight) and inflow forecast uncertainty (represented by standard deviation) are identified. These inform reservoir managers of a feasible range for their water conservation preference and of thresholds of forecast uncertainty, specifying the possible water conservation within those thresholds. The analysis also provides inputs for setting up an optimization model by providing the range of objective weights and the choice of hedging rule types. A case study is conducted to illustrate the concepts and analyses.
The variance needed to accurately describe jump height from vertical ground reaction force data.
Richter, Chris; McGuinness, Kevin; O'Connor, Noel E; Moran, Kieran
2014-12-01
In functional principal component analysis (fPCA), a threshold is chosen to define the number of retained principal components, which corresponds to the amount of preserved information. A variety of thresholds have been used in previous studies and the chosen threshold is often not evaluated. The aim of this study is to identify the optimal threshold that preserves the information needed to describe jump height accurately using vertical ground reaction force (vGRF) curves. To find an optimal threshold, a neural network was used to predict jump height from vGRF curve measures generated using different fPCA thresholds. The findings indicate that a threshold from 99% to 99.9% (6-11 principal components) is optimal for describing jump height, as these thresholds generated significantly lower jump height prediction errors than other thresholds.
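As a simple illustration of variance-based component retention (not the study's fPCA pipeline or neural network), the sketch below picks the number of principal components whose cumulative explained variance first reaches a chosen threshold; the stand-in vGRF matrix is random.

```python
# Choose the number of principal components reaching a cumulative
# explained-variance threshold (e.g. 99%). The data matrix is a random
# placeholder for the study's vGRF curves.
import numpy as np

def n_components(curves, threshold=0.99):
    """curves: (n_trials, n_time_points) array."""
    centred = curves - curves.mean(axis=0)
    _, s, _ = np.linalg.svd(centred, full_matrices=False)
    explained = s**2 / np.sum(s**2)
    return int(np.searchsorted(np.cumsum(explained), threshold) + 1)

curves = np.random.default_rng(1).random((50, 200))      # placeholder vGRF data
for thr in (0.95, 0.99, 0.999):
    print(f"threshold {thr:>5}: {n_components(curves, thr)} components")
```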
Relationship Between Consumer Acceptability and Pungency-Related Flavor Compounds of Vidalia Onions.
Kim, Ha-Yeon; Jackson, Daniel; Adhikari, Koushik; Riner, Cliff; Sanchez-Brambila, Gabriela
2017-10-01
A consumer study was conducted to evaluate preferences in Vidalia onions, and to define consumer acceptability thresholds for commonly analyzed flavor compounds associated with pungency. Two varieties of Vidalia onions (Plethora and Sapelo Sweet) were grown at 3 fertilizer application rates (37.5 and 0; 134.5 and 59.4; and 190 and 118.8 kg/ha of nitrogen and sulfur, respectively), creating 6 treatments with various flavor attributes to use in the study. Bulb soluble solids, sugars, pyruvic acid, lachrymatory factor (LF; propanethial S-oxide), and methyl thiosulfinate (MT) content were determined and compared to sensory responses for overall liking, intensity of the sharp/pungent/burning sensation (SPB), and intent to buy provided by 142 consumers. Onion pyruvate, LF, MT, and sugar content increased as fertilization rate increased, regardless of onion variety. Consumer responses showed participants preferred onions with low SPB, which correlated positively with lower pyruvate, LF and MT concentrations, but showed no relationship to total sugars in the onion bulb. Regression analyses revealed that the majority of consumers (≥55%) found the flavor of Vidalia onions acceptable when the concentrations of LF, pyruvic acid, and MT within the bulbs were below 2.21, 4.83, and 0.43 nmol/mL, respectively. These values will support future studies aimed at identifying the optimal cultivation practices for production of sweet Vidalia onions, and can serve as an industry benchmark for quality control, thus ensuring the flavor of Vidalia onions will be acceptable to the majority of consumers. This study identified the relationship between consumer preferences and commonly analyzed flavor compounds in Vidalia onions, and established thresholds for these compounds at concentrations which the majority of consumers will find desirable. These relationships and thresholds will support future research investigating how cultural practices impact onion quality, and can be used to assist growers in variety selection decisions. In addition, this information will provide a benchmark to Vidalia onion producers for quality control of the sweet onions produced, ensuring that the onions are consistently of a desired quality, thereby increasing consumers' confidence in the Vidalia onion brand. © 2017 Institute of Food Technologists®.
Beyond gains and losses: the effect of need on risky choice in framed decisions.
Mishra, Sandeep; Fiddick, Laurence
2012-06-01
Substantial evidence suggests people are risk-averse when making decisions described in terms of gains and risk-prone when making decisions described in terms of losses, a phenomenon known as the framing effect. Little research, however, has examined whether framing effects are a product of normative risk-sensitive cognitive processes. In 5 experiments, it is demonstrated that framing effects in the Asian disease problem can be explained by risk-sensitivity theory, which predicts that decision makers adjust risk acceptance on the basis of minimal acceptable thresholds, or need. Both explicit and self-determined need requirements eliminated framing effects and affected risk acceptance consistent with risk-sensitivity theory. Furthermore, negative language choice in loss frames conferred the perception of high need and led to the construction of higher minimal acceptable thresholds. The results of this study suggest that risk-sensitivity theory provides a normative rationale for framing effects based on sensitivity to minimal acceptable thresholds, or needs. 2012 APA, all rights reserved
Irwin, R John; Irwin, Timothy C
2011-06-01
Making clinical decisions on the basis of diagnostic tests is an essential feature of medical practice and the choice of the decision threshold is therefore crucial. A test's optimal diagnostic threshold is the threshold that maximizes expected utility. It is given by the product of the prior odds of a disease and a measure of the importance of the diagnostic test's sensitivity relative to its specificity. Choosing this threshold is the same as choosing the point on the Receiver Operating Characteristic (ROC) curve whose slope equals this product. We contend that a test's likelihood ratio is the canonical decision variable and contrast diagnostic thresholds based on likelihood ratio with two popular rules of thumb for choosing a threshold. The two rules are appealing because they have clear graphical interpretations, but they yield optimal thresholds only in special cases. The optimal rule can be given similar appeal by presenting indifference curves, each of which shows a set of equally good combinations of sensitivity and specificity. The indifference curve is tangent to the ROC curve at the optimal threshold. Whereas ROC curves show what is feasible, indifference curves show what is desirable. Together they show what should be chosen. Copyright © 2010 European Federation of Internal Medicine. Published by Elsevier B.V. All rights reserved.
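The toy sketch below illustrates the underlying decision-theoretic idea on a simulated test: scanning candidate score thresholds and keeping the one that maximizes expected utility, which is equivalent to picking the ROC point whose slope matches the prior-odds/utility product discussed above. Prevalence, utilities and score distributions are all invented.

```python
# Pick the score threshold that maximizes expected utility on a
# simulated diagnostic test (illustrative numbers only).
import numpy as np

rng = np.random.default_rng(1)
prevalence = 0.2
n = 10_000
diseased = rng.random(n) < prevalence
score = np.where(diseased, rng.normal(1.0, 1.0, n), rng.normal(0.0, 1.0, n))

u_tp, u_fn, u_tn, u_fp = 1.0, -3.0, 0.0, -1.0   # utilities (assumed)

thresholds = np.linspace(score.min(), score.max(), 500)
best_t, best_u = None, -np.inf
for t in thresholds:
    positive = score >= t
    tp = np.mean(positive & diseased)
    fn = np.mean(~positive & diseased)
    tn = np.mean(~positive & ~diseased)
    fp = np.mean(positive & ~diseased)
    eu = u_tp * tp + u_fn * fn + u_tn * tn + u_fp * fp
    if eu > best_u:
        best_t, best_u = t, eu
print(f"utility-maximizing threshold ≈ {best_t:.2f}, expected utility {best_u:.3f}")
```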
Summary: Background. It is widely accepted that substances that cannot penetrate the skin will not be sensitisers. Thresholds based on relevant physicochemical parameters, such as a LogKow > 1 and a MW < 500, are assumed and widely accepted as self-evident truths. Objective...
48 CFR 3409.570 - Certification at or below the simplified acquisition threshold.
Code of Federal Regulations, 2013 CFR
2013-10-01
... the simplified acquisition threshold. 3409.570 Section 3409.570 Federal Acquisition Regulations System... threshold. By accepting any contract, including orders against any Schedule or Government-wide Acquisition Contract (GWAC), with the Department at or below the simplified acquisition threshold: (a) The contractor...
Automatic threshold optimization in nonlinear energy operator based spike detection.
Malik, Muhammad H; Saeed, Maryam; Kamboh, Awais M
2016-08-01
In neural spike sorting systems, the performance of the spike detector has to be maximized because it affects the performance of all subsequent blocks. The nonlinear energy operator (NEO) is a popular spike detector due to its detection accuracy and hardware-friendly architecture. However, it involves a thresholding stage whose value is usually approximated and is thus not optimal. This approximation deteriorates performance in real-time systems where signal-to-noise ratio (SNR) estimation is a challenge, especially at lower SNRs. In this paper, we propose an automatic and robust threshold calculation method using an empirical gradient technique. The method is tested on two different datasets. The results show that our optimized threshold improves the detection accuracy for both high-SNR and low-SNR signals. Boxplots are presented that provide a statistical analysis of the improvements in accuracy; for instance, the 75th percentile was at 98.7% and 93.5% for the optimized NEO threshold and the traditional NEO threshold, respectively.
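For reference, the sketch below computes the NEO and applies the conventional scaled-mean threshold Thr = C·mean(ψ) that the paper's empirical-gradient method improves upon; the scaling factor C and the synthetic spike train are assumptions, and the proposed gradient technique itself is not reproduced.

```python
# NEO spike detection with the traditional scaled-mean threshold.
import numpy as np

def neo(x):
    """Nonlinear energy operator: psi[n] = x[n]^2 - x[n-1]*x[n+1]."""
    psi = np.zeros_like(x)
    psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
    return psi

rng = np.random.default_rng(2)
signal = rng.normal(0, 1, 5000)
signal[::500] += 8.0                      # synthetic spikes every 500 samples

psi = neo(signal)
C = 8.0                                   # threshold scaling factor (assumed)
threshold = C * psi.mean()
spike_samples = np.flatnonzero(psi > threshold)
print(f"threshold = {threshold:.2f}, {spike_samples.size} supra-threshold samples")
```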
Wang, Rui; Zhou, Yongquan; Zhao, Chengyan; Wu, Haizhou
2015-01-01
Multi-threshold image segmentation is a powerful image processing technique that is used for the preprocessing of pattern recognition and computer vision. However, traditional multilevel thresholding methods are computationally expensive because they involve exhaustively searching for the optimal thresholds that optimize the objective functions. To overcome this drawback, this paper proposes a flower pollination algorithm with a randomized location modification. The proposed algorithm is used to find optimal threshold values maximizing Otsu's objective functions for eight medical grayscale images. When benchmarked against other state-of-the-art evolutionary algorithms, the new algorithm proves robust and effective, as shown by numerical experimental results including Otsu's objective values and standard deviations.
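The sketch below shows the bi-level Otsu objective (between-class variance for two thresholds) that a metaheuristic such as the flower pollination algorithm would maximize; for clarity it is evaluated here by brute force on a synthetic trimodal histogram rather than by the proposed algorithm.

```python
# Two-threshold Otsu objective evaluated by exhaustive search on a
# synthetic 8-bit histogram (the metaheuristic search is not shown).
import numpy as np

def otsu_objective(hist, t1, t2):
    """Between-class variance for classes [0,t1), [t1,t2), [t2,256)."""
    p = hist / hist.sum()
    levels = np.arange(256)
    total_mean = (p * levels).sum()
    var = 0.0
    for lo, hi in ((0, t1), (t1, t2), (t2, 256)):
        w = p[lo:hi].sum()
        if w == 0:
            continue
        mu = (p[lo:hi] * levels[lo:hi]).sum() / w
        var += w * (mu - total_mean) ** 2
    return var

rng = np.random.default_rng(3)
pixels = np.concatenate([rng.normal(60, 10, 4000),
                         rng.normal(130, 12, 4000),
                         rng.normal(200, 8, 2000)]).clip(0, 255)
hist, _ = np.histogram(pixels, bins=256, range=(0, 256))

best = max(((otsu_objective(hist, t1, t2), t1, t2)
            for t1 in range(1, 255) for t2 in range(t1 + 1, 256)),
           key=lambda x: x[0])
print(f"optimal thresholds: {best[1]}, {best[2]} (between-class variance {best[0]:.1f})")
```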
Sel, Davorka; Lebar, Alenka Macek; Miklavcic, Damijan
2007-05-01
In electrochemotherapy (ECT), electropermeabilization parameters (pulse amplitude, electrode setup) need to be customized in order to expose the whole tumor to electric field intensities above the permeabilizing threshold and thus achieve effective ECT. In this paper, we present a model-based optimization approach toward the determination of optimal electropermeabilization parameters for effective ECT. The optimization is carried out by minimizing the difference between the permeabilization threshold and the electric field intensities computed by a finite element model in selected points of the tumor. We examined the feasibility of model-based optimization of electropermeabilization parameters on a model geometry generated from computed tomography images, representing brain tissue with a tumor. The continuous parameter subject to optimization was pulse amplitude. The distance between electrode pairs was optimized as a discrete parameter. The optimization also considered the pulse generator's constraints on voltage and current. During optimization the two constraints were reached, preventing the exposure of the entire volume of the tumor to electric field intensities above the permeabilizing threshold. However, despite the fact that with the particular needle array holder and pulse generator the entire volume of the tumor was not permeabilized, the maximal extent of permeabilization for the particular case (electrodes, tissue) was determined with the proposed approach. The model-based optimization approach could also be used for electro-gene transfer, where electric field intensities should be distributed between the permeabilizing threshold and the irreversible threshold, the latter causing tissue necrosis. This can be obtained by adding constraints on the maximum electric field intensity in the optimization procedure.
OPTIMIZING THE PRECISION OF TOXICITY THRESHOLD ESTIMATION USING A TWO-STAGE EXPERIMENTAL DESIGN
An important consideration for risk assessment is the existence of a threshold, i.e., the highest toxicant dose where the response is not distinguishable from background. We have developed methodology for finding an experimental design that optimizes the precision of threshold mo...
Sensitivity Analysis in Sequential Decision Models.
Chen, Qiushi; Ayer, Turgay; Chhatwal, Jagpreet
2017-02-01
Sequential decision problems, which are commonly solved using Markov decision processes (MDPs), are frequently encountered in medical decision making. Modeling guidelines recommend conducting sensitivity analyses in decision-analytic models to assess the robustness of the model results against uncertainty in model parameters. However, standard methods of conducting sensitivity analyses cannot be directly applied to sequential decision problems because this would require evaluating all possible decision sequences, typically on the order of trillions, which is not practically feasible. As a result, most MDP-based modeling studies do not examine confidence in their recommended policies. In this study, we provide an approach to estimate uncertainty and confidence in the results of sequential decision models. First, we provide a probabilistic univariate method to identify the most sensitive parameters in MDPs. Second, we present a probabilistic multivariate approach to estimate the overall confidence in the recommended optimal policy considering joint uncertainty in the model parameters. We provide a graphical representation, which we call a policy acceptability curve, to summarize the confidence in the optimal policy by incorporating stakeholders' willingness to accept the base case policy. For a cost-effectiveness analysis, we provide an approach to construct a cost-effectiveness acceptability frontier, which shows the most cost-effective policy as well as the confidence in that policy for a given willingness-to-pay threshold. We demonstrate our approach using a simple MDP case study. We developed a method to conduct sensitivity analysis in sequential decision models, which could increase the credibility of these models among stakeholders.
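A minimal sketch of the multivariate idea is given below: model parameters are drawn from their uncertainty distributions, the MDP is solved for each draw, and the "confidence" in the base-case policy is the fraction of draws in which it remains optimal. The tiny treat/wait MDP, its rewards and parameter distributions are invented for illustration and are not from the case study above.

```python
# Monte Carlo confidence in a base-case MDP policy under joint
# parameter uncertainty (toy three-state model: healthy, sick, dead).
import numpy as np

rng = np.random.default_rng(10)
gamma, horizon = 0.97, 200

def solve(p_cure, p_prog):
    """Return the optimal action in the 'sick' state (0=wait, 1=treat)."""
    T = np.zeros((2, 3, 3))                              # T[action, s, s']
    T[:, 0, 0], T[:, 0, 1] = 0.95, 0.05                  # healthy
    T[0, 1] = [0.0, 1.0 - p_prog, p_prog]                # sick, wait
    T[1, 1] = [p_cure, 1.0 - p_cure - 0.02, 0.02]        # sick, treat
    T[:, 2, 2] = 1.0                                     # dead is absorbing
    R = np.array([[1.0, 0.6, 0.0],                       # reward by action, state
                  [1.0, 0.5, 0.0]])
    V = np.zeros(3)
    for _ in range(horizon):                             # value iteration
        Q = R + gamma * T @ V
        V = Q.max(axis=0)
    return int(Q[:, 1].argmax())

base = solve(p_cure=0.30, p_prog=0.20)
samples = [solve(rng.beta(30, 70), rng.beta(20, 80)) for _ in range(500)]
confidence = np.mean([s == base for s in samples])
print(f"base-case action in 'sick' state: {base}; confidence = {confidence:.2f}")
```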
A threshold method for immunological correlates of protection
2013-01-01
Background Immunological correlates of protection are biological markers such as disease-specific antibodies which correlate with protection against disease and which are measurable with immunological assays. It is common in vaccine research and in setting immunization policy to rely on threshold values for the correlate where the accepted threshold differentiates between individuals who are considered to be protected against disease and those who are susceptible. Examples where thresholds are used include development of a new generation 13-valent pneumococcal conjugate vaccine which was required in clinical trials to meet accepted thresholds for the older 7-valent vaccine, and public health decision making on vaccination policy based on long-term maintenance of protective thresholds for Hepatitis A, rubella, measles, Japanese encephalitis and others. Despite widespread use of such thresholds in vaccine policy and research, few statistical approaches have been formally developed which specifically incorporate a threshold parameter in order to estimate the value of the protective threshold from data. Methods We propose a 3-parameter statistical model called the a:b model which incorporates parameters for a threshold and constant but different infection probabilities below and above the threshold estimated using profile likelihood or least squares methods. Evaluation of the estimated threshold can be performed by a significance test for the existence of a threshold using a modified likelihood ratio test which follows a chi-squared distribution with 3 degrees of freedom, and confidence intervals for the threshold can be obtained by bootstrapping. The model also permits assessment of relative risk of infection in patients achieving the threshold or not. Goodness-of-fit of the a:b model may be assessed using the Hosmer-Lemeshow approach. The model is applied to 15 datasets from published clinical trials on pertussis, respiratory syncytial virus and varicella. Results Highly significant thresholds with p-values less than 0.01 were found for 13 of the 15 datasets. Considerable variability was seen in the widths of confidence intervals. Relative risks indicated around 70% or better protection in 11 datasets and relevance of the estimated threshold to imply strong protection. Goodness-of-fit was generally acceptable. Conclusions The a:b model offers a formal statistical method of estimation of thresholds differentiating susceptible from protected individuals which has previously depended on putative statements based on visual inspection of data. PMID:23448322
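A minimal numerical sketch of the threshold-estimation idea (not the paper's full a:b model with its modified likelihood ratio test and bootstrap intervals) is given below: for each candidate threshold, constant infection probabilities below and above it are estimated from the data, and the threshold maximizing the binomial log-likelihood is reported. The assay values and infection outcomes are simulated.

```python
# Profile-likelihood style grid search for a protective threshold with
# constant infection probabilities below/above it (synthetic data).
import numpy as np

rng = np.random.default_rng(4)
titre = rng.lognormal(mean=2.0, sigma=0.8, size=400)      # assay values
true_thr, p_below, p_above = 10.0, 0.30, 0.05
infected = rng.random(400) < np.where(titre < true_thr, p_below, p_above)

def log_lik(threshold):
    below = titre < threshold
    ll = 0.0
    for mask in (below, ~below):
        n, k = mask.sum(), infected[mask].sum()
        if n == 0:
            continue
        p = min(max(k / n, 1e-9), 1 - 1e-9)
        ll += k * np.log(p) + (n - k) * np.log(1 - p)
    return ll

candidates = np.quantile(titre, np.linspace(0.05, 0.95, 181))
est = max(candidates, key=log_lik)
print(f"estimated protective threshold ≈ {est:.2f} (true {true_thr})")
```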
Image denoising in mixed Poisson-Gaussian noise.
Luisier, Florian; Blu, Thierry; Unser, Michael
2011-03-01
We propose a general methodology (PURE-LET) to design and optimize a wide class of transform-domain thresholding algorithms for denoising images corrupted by mixed Poisson-Gaussian noise. We express the denoising process as a linear expansion of thresholds (LET) that we optimize by relying on a purely data-adaptive unbiased estimate of the mean-squared error (MSE), derived in a non-Bayesian framework (PURE: Poisson-Gaussian unbiased risk estimate). We provide a practical approximation of this theoretical MSE estimate for the tractable optimization of arbitrary transform-domain thresholding. We then propose a pointwise estimator for undecimated filterbank transforms, which consists of subband-adaptive thresholding functions with signal-dependent thresholds that are globally optimized in the image domain. We finally demonstrate the potential of the proposed approach through extensive comparisons with state-of-the-art techniques that are specifically tailored to the estimation of Poisson intensities. We also present denoising results obtained on real images of low-count fluorescence microscopy.
Marcus, Carol S
2015-07-01
On February 9, 2015, I submitted a petition to the U.S. Nuclear Regulatory Commission (NRC) to reject the linear-no threshold (LNT) hypothesis and ALARA as the bases for radiation safety regulation in the United States, using instead threshold and hormesis evidence. In this article, I will briefly review the history of LNT and its use by regulators, the lack of evidence supporting LNT, and the large body of evidence supporting thresholds and hormesis. Physician acceptance of cancer risk from low dose radiation based upon federal regulatory claims is unfortunate and needs to be reevaluated. This is dangerous to patients and impedes good medical care. A link to my petition is available: http://radiationeffects.org/wp-content/uploads/2015/03/Hormesis-Petition-to-NRC-02-09-15.pdf, and support by individual physicians once the public comment period begins would be extremely important.
Dual-threshold segmentation using Arimoto entropy based on chaotic bee colony optimization
NASA Astrophysics Data System (ADS)
Li, Li
2018-03-01
In order to extract the target from a complex background more quickly and accurately, and to further improve the detection of defects, a method of dual-threshold segmentation using Arimoto entropy based on chaotic bee colony optimization was proposed. Firstly, the method of single-threshold selection based on Arimoto entropy was extended to dual-threshold selection in order to separate the target from the background more accurately. Then the intermediate variables in the formulae for Arimoto entropy dual-threshold selection were calculated by recursion to eliminate redundant computation and reduce the amount of calculation. Finally, the local search phase of the artificial bee colony algorithm was improved with a chaotic sequence based on the tent map. The fast search for the two optimal thresholds was achieved using the improved bee colony optimization algorithm, substantially accelerating the search. A large number of experimental results show that, compared with existing segmentation methods such as multi-threshold segmentation using maximum Shannon entropy, two-dimensional Shannon entropy segmentation, two-dimensional Tsallis gray entropy segmentation and multi-threshold segmentation using reciprocal gray entropy, the proposed method segments the target more quickly and accurately, with superior segmentation effect. It proves to be a fast and effective method for image segmentation.
NASA Astrophysics Data System (ADS)
Akkala, Arun Goud
Leakage currents in CMOS transistors have risen dramatically with technology scaling, leading to a significant increase in standby power consumption. Among the various transistor candidates, the excellent short-channel immunity of silicon double-gate FinFETs has made them the best contender for successful scaling to sub-10nm nodes. For sub-10nm FinFETs, new quantum mechanical leakage mechanisms such as direct source-to-drain tunneling (DSDT) of charge carriers through the channel potential energy barrier, arising from the proximity of the source/drain regions coupled with the high transport-direction electric field, are expected to dominate overall leakage. To counter the effects of DSDT and worsening short channel effects, and to maintain Ion/Ioff, performance and power consumption at reasonable values, device optimization techniques are necessary for deeply scaled transistors. In this work, source/drain underlapping of FinFETs has been explored using quantum mechanical device simulations as a potentially promising method to lower DSDT while maintaining the Ion/Ioff ratio at acceptable levels. By adopting a device/circuit/system level co-design approach, it is shown that asymmetric underlapping, where the drain-side underlap is longer than the source-side underlap, results in optimal energy efficiency for logic circuits in near-threshold as well as standard, super-threshold operating regimes. In addition, the read/write conflict in 6T SRAMs and the degradation in cell noise margins due to the low supply voltage can be mitigated by using optimized asymmetric underlapped n-FinFETs for the access transistor, thereby leading to robust cache memories. When gate-workfunction tuning is possible, using asymmetric underlapped n-FinFETs for both access and pull-down devices in an SRAM bit cell can lead to high-speed and low-leakage caches. Further, it is shown that threshold voltage degradation in the presence of Hot Carrier Injection (HCI) is less severe in asymmetric underlapped n-FinFETs. A lifetime projection is carried out assuming that HCI is the major degradation mechanism, and it is shown that a 3.4x improvement in device lifetime is possible over symmetric underlapped n-FinFETs.
Chorel, Marine; Lanternier, Thomas; Lavastre, Éric; Bonod, Nicolas; Bousquet, Bruno; Néauport, Jérôme
2018-04-30
We report on a numerical optimization of the laser induced damage threshold of multi-dielectric high reflection mirrors in the sub-picosecond regime. We highlight the interplay between the electric field distribution, refractive index and intrinsic laser induced damage threshold of the materials on the overall laser induced damage threshold (LIDT) of the multilayer. We describe an optimization method of the multilayer that minimizes the field enhancement in high refractive index materials while preserving a near perfect reflectivity. This method yields a significant improvement of the damage resistance since a maximum increase of 40% can be achieved on the overall LIDT of the multilayer.
Optimizing Retransmission Threshold in Wireless Sensor Networks
Bi, Ran; Li, Yingshu; Tan, Guozhen; Sun, Liang
2016-01-01
The retransmission threshold in wireless sensor networks is critical to the latency of data delivery in the networks. However, existing works on data transmission in sensor networks did not consider the optimization of the retransmission threshold; they simply set the same retransmission threshold for all sensor nodes in advance. That method did not take link quality and delay requirements into account, which decreases the probability of a packet traversing its delivery path within a given deadline. This paper investigates the problem of finding optimal retransmission thresholds for relay nodes along a delivery path in a sensor network. The objective of optimizing the retransmission thresholds is to maximize the summation of the probabilities of the packet being successfully delivered to the next relay node or the destination node in time. A dynamic programming-based distributed algorithm for finding optimal retransmission thresholds for relay nodes along a delivery path in the sensor network is proposed. The time complexity is O(nΔ·max{u_1, …, u_n}), where u_i is the given upper bound of the retransmission threshold of sensor node i in a given delivery path, n is the length of the delivery path and Δ is the given upper bound on the transmission delay of the delivery path. If Δ is not polynomially bounded, then to reduce the time complexity a linear programming-based (1+p_min)-approximation algorithm is proposed. Furthermore, when the ranges of the upper and lower bounds of the retransmission thresholds are large enough, a Lagrange multiplier-based distributed O(1)-approximation algorithm with time complexity O(1) is proposed. Experimental results show that the proposed algorithms have better performance. PMID:27171092
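As one possible reading of the dynamic-programming formulation (a simplified, centralized sketch rather than the paper's distributed algorithm), the code below allocates a total transmission budget Δ over the hops of a path, where giving hop i a retransmission threshold r yields on-time success probability 1-(1-p_i)^r, and maximizes the sum of those probabilities; all link probabilities and bounds are illustrative.

```python
# Budgeted allocation of retransmission thresholds along a path by
# dynamic programming (simplified interpretation, illustrative numbers).
import numpy as np

def optimal_thresholds(p, delta, upper):
    """p: per-hop single-transmission success probabilities;
    delta: total transmission budget; upper: per-hop threshold caps."""
    n = len(p)
    best = np.full((n + 1, delta + 1), -np.inf)
    best[0, :] = 0.0
    choice = np.zeros((n + 1, delta + 1), dtype=int)
    for i in range(1, n + 1):
        for budget in range(delta + 1):
            for r in range(1, min(upper[i - 1], budget) + 1):
                cand = best[i - 1, budget - r] + (1 - (1 - p[i - 1]) ** r)
                if cand > best[i, budget]:
                    best[i, budget] = cand
                    choice[i, budget] = r
    thresholds, budget = [], delta          # backtrack the chosen thresholds
    for i in range(n, 0, -1):
        r = choice[i, budget]
        thresholds.append(r)
        budget -= r
    return thresholds[::-1], best[n, delta]

p = [0.6, 0.8, 0.5, 0.7]
thr, value = optimal_thresholds(p, delta=10, upper=[4, 4, 4, 4])
print("thresholds per hop:", thr, "objective:", round(float(value), 3))
```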
Wang, Ruiping; Jiang, Yonggen; Guo, Xiaoqin; Wu, Yiling; Zhao, Genming
2017-01-01
Objective The Chinese Center for Disease Control and Prevention developed the China Infectious Disease Automated-alert and Response System (CIDARS) in 2008. The CIDARS can detect outbreak signals in a timely manner but generates many false-positive signals, especially for diseases with seasonality. We assessed the influence of seasonality on infectious disease outbreak detection performance. Methods Chickenpox surveillance data in Songjiang District, Shanghai were used. The optimized early alert thresholds for chickenpox were selected according to three algorithm evaluation indexes: sensitivity (Se), false alarm rate (FAR), and time to detection (TTD). Performance of selected proper thresholds was assessed by data external to the study period. Results The optimized early alert threshold for chickenpox during the epidemic season was the percentile P65, which demonstrated an Se of 93.33%, FAR of 0%, and TTD of 0 days. The optimized early alert threshold in the nonepidemic season was P50, demonstrating an Se of 100%, FAR of 18.94%, and TTD was 2.5 days. The performance evaluation demonstrated that the use of an optimized threshold adjusted for seasonality could reduce the FAR and shorten the TTD. Conclusions Selection of optimized early alert thresholds based on local infectious disease seasonality could improve the performance of the CIDARS. PMID:28728470
A study of the threshold method utilizing raingage data
NASA Technical Reports Server (NTRS)
Short, David A.; Wolff, David B.; Rosenfeld, Daniel; Atlas, David
1993-01-01
The threshold method for estimation of area-average rain rate relies on determination of the fractional area where rain rate exceeds a preset level of intensity. Previous studies have shown that the optimal threshold level depends on the climatological rain-rate distribution (RRD). It has also been noted, however, that the climatological RRD may be composed of an aggregate of distributions, one for each of several distinctly different synoptic conditions, each having its own optimal threshold. In this study, the impact of RRD variations on the threshold method is shown in an analysis of 1-min rain-rate data from a network of tipping-bucket gauges in Darwin, Australia. Data are analyzed for two distinct regimes: the premonsoon environment, having isolated intense thunderstorms, and the active monsoon rains, having organized convective cell clusters that generate large areas of stratiform rain. It is found that a threshold of 10 mm/h results in the same threshold coefficient for both regimes, suggesting an alternative definition of the optimal threshold as that which is least sensitive to distribution variations. The observed behavior of the threshold coefficient is well simulated by assuming lognormal distributions with different scale parameters and the same shape parameter.
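The sketch below illustrates the threshold method and the alternative definition of the optimal threshold on synthetic data: two lognormal rain-rate regimes share a shape parameter but differ in scale, and the threshold whose coefficient S(τ) = mean rate / fractional area above τ is least sensitive to the regime change is located by a simple scan. The distribution parameters are arbitrary, so the resulting value is not the 10 mm/h reported above.

```python
# Find the rain-rate threshold whose coefficient S(tau) is least
# sensitive to a change of regime (illustrative lognormal regimes).
import numpy as np

rng = np.random.default_rng(5)
regime_a = rng.lognormal(mean=0.5, sigma=1.2, size=200_000)   # isolated storms
regime_b = rng.lognormal(mean=1.0, sigma=1.2, size=200_000)   # organized rain

def S(rates, tau):
    """Threshold coefficient: mean rate / fraction exceeding tau."""
    frac = np.mean(rates > tau)
    return rates.mean() / frac if frac > 0 else np.nan

taus = np.linspace(1.0, 30.0, 59)
spread = [abs(S(regime_a, t) - S(regime_b, t)) / S(regime_a, t) for t in taus]
best = taus[int(np.nanargmin(spread))]
print(f"threshold least sensitive to regime change ≈ {best:.1f} mm/h")
```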
Sadrhaghighi, Amir Houman; Zarghami, Afsaneh; Sadrhaghighi, Shahrzad; Mohammadi, Amir; Eskandarinezhad, Mahsa
2017-01-01
Culture and ethnicity are among the factors affecting the esthetic judgment of individuals. This study aimed to assess the acceptability threshold of variations in four components of an esthetic smile, namely vertical lip thickness, dental midline deviation, buccal corridor, and the golden ratio in maxillary lateral incisor display, among laypersons of different races and cultures. Raters (n = 35 in each city) among laypersons of nine cities, namely Istanbul, Isfahan, Tabriz, Tehran, Doha, Rome, Sydney, Chicago, and Yazd, were given a photo album containing 27 random images of an attractive female smile, digitally altered with regard to the four smile components. They scored each picture from 0 to 100 in terms of smile attractiveness. Data were analyzed using SPSS 13 and the acceptability threshold for each component was calculated for each city using the Spearman and Wilcoxon tests. P < 0.05 was considered statistically significant. No significant differences were noted with regard to increased vertical lip thickness, and an acceptability threshold could not be determined for it. The acceptability thresholds for midline deviation, buccal corridor, and the golden ratio differed among the cities. A one-millimeter increase in the displayed width of the maxillary lateral incisors was more desirable than the golden-ratio standard width. Culture and race may significantly affect the esthetic preferences of individuals with regard to smile attractiveness.
How to determine an optimal threshold to classify real-time crash-prone traffic conditions?
Yang, Kui; Yu, Rongjie; Wang, Xuesong; Quddus, Mohammed; Xue, Lifang
2018-08-01
One of the proactive approaches to reducing traffic crashes is to identify hazardous traffic conditions that may lead to a crash, known as real-time crash prediction. Threshold selection is an essential step in real-time crash prediction: once a crash risk evaluation model has produced the probability of a crash occurring given a specific traffic condition, the threshold provides the cut-off point on that posterior probability which separates potential crash warnings from normal traffic conditions. There is, however, a dearth of research on how to effectively determine an optimal threshold. The few studies that address it do so only when discussing the predictive performance of the models and rely on subjective methods to choose the threshold. Subjective methods cannot automatically identify optimal thresholds under different traffic and weather conditions in real applications, so a theoretical method for selecting the threshold value is needed to avoid subjective judgments. The purpose of this study is to provide such a method for automatically identifying the optimal threshold. Considering the random effects of variable factors across roadway segments, a mixed logit model was used to develop the crash risk evaluation model and evaluate crash risk. Cross-entropy, between-class variance and other criteria were investigated to empirically identify the optimal threshold, and K-fold cross-validation was used to validate the performance of the proposed threshold selection methods against several evaluation criteria. The results indicate that (i) the mixed logit model obtains good performance, and (ii) the classification performance of the threshold selected by the minimum cross-entropy method outperforms the other methods according to the criteria. This method is well suited to automatically identifying thresholds in crash prediction: it minimizes the cross-entropy between the original dataset, with its continuous probability of a crash occurring, and the binarized dataset produced by using the threshold to separate potential crash warnings from normal traffic conditions. Copyright © 2018 Elsevier Ltd. All rights reserved.
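For illustration, the sketch below applies one standard minimum cross-entropy criterion (Li's two-level formulation, which is not necessarily the exact definition used in the paper) to a set of synthetic predicted crash probabilities: each candidate threshold splits the probabilities into two groups represented by their means, and the threshold minimizing the cross-entropy between the original values and that two-level representation is selected.

```python
# Minimum cross-entropy warning threshold on synthetic crash
# probabilities, using Li's classic two-level criterion.
import numpy as np

rng = np.random.default_rng(6)
p = rng.beta(1.0, 20.0, size=5000)            # stand-in crash probabilities

def li_cross_entropy(values, t):
    low, high = values[values < t], values[values >= t]
    if low.size == 0 or high.size == 0:
        return np.inf
    m1, m2 = low.mean(), high.mean()
    return (np.sum(low * np.log(low / m1)) +
            np.sum(high * np.log(high / m2)))

candidates = np.linspace(p.min() + 1e-4, p.max() - 1e-4, 200)
best = min(candidates, key=lambda t: li_cross_entropy(p, t))
print(f"minimum cross-entropy warning threshold ≈ {best:.4f}")
```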
Shieh, Yiwey; Eklund, Martin; Madlensky, Lisa; Sawyer, Sarah D; Thompson, Carlie K; Stover Fiscalini, Allison; Ziv, Elad; Van't Veer, Laura J; Esserman, Laura J; Tice, Jeffrey A
2017-01-01
Ongoing controversy over the optimal approach to breast cancer screening has led to discordant professional society recommendations, particularly in women age 40 to 49 years. One potential solution is risk-based screening, where decisions around the starting age, stopping age, frequency, and modality of screening are based on individual risk to maximize the early detection of aggressive cancers and minimize the harms of screening through optimal resource utilization. We present a novel approach to risk-based screening that integrates clinical risk factors, breast density, a polygenic risk score representing the cumulative effects of genetic variants, and sequencing for moderate- and high-penetrance germline mutations. We demonstrate how thresholds of absolute risk estimates generated by our prediction tools can be used to stratify women into different screening strategies (biennial mammography, annual mammography, annual mammography with adjunctive magnetic resonance imaging, defer screening at this time) while informing the starting age of screening for women age 40 to 49 years. Our risk thresholds and corresponding screening strategies are based on current evidence but need to be tested in clinical trials. The Women Informed to Screen Depending On Measures of risk (WISDOM) Study, a pragmatic, preference-tolerant randomized controlled trial of annual vs personalized screening, will study our proposed approach. WISDOM will evaluate the efficacy, safety, and acceptability of risk-based screening beginning in the fall of 2016. The adaptive design of this trial allows continued refinement of our risk thresholds as the trial progresses, and we discuss areas where we anticipate emerging evidence will impact our approach. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
A random optimization approach for inherent optic properties of nearshore waters
NASA Astrophysics Data System (ADS)
Zhou, Aijun; Hao, Yongshuai; Xu, Kuo; Zhou, Heng
2016-10-01
Traditional water quality sampling is time-consuming and costly and can no longer meet growing monitoring needs. Hyperspectral remote sensing offers good temporal resolution, broad spatial coverage and rich spectral information, and has strong potential for water quality supervision. Via a semi-analytical method, remote sensing information can be related to water quality. The inherent optical properties are used to quantify water quality, and an optical model of the water column is established to analyse its features. Using the stochastic optimization algorithm Threshold Accepting, a global optimum of the unknown model parameters can be found, yielding the distributions of chlorophyll, dissolved organic matter and suspended particles in the water. Improving the search step of the optimization algorithm markedly reduces processing time and makes it feasible to increase the number of parameters. Redefining the optimization steps and acceptance criterion makes the whole inversion process more targeted, improving inversion accuracy. Based on its application to simulated data provided by the IOCCG and field data provided by NASA, the model has been continuously improved and enhanced. The result is a low-cost, effective model for retrieving water quality from hyperspectral remote sensing.
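The generic Threshold Accepting search named above is easy to sketch: like simulated annealing, but a candidate move is accepted whenever it worsens the objective by less than the current threshold, which is lowered over time. The toy misfit function, step size and threshold schedule below are assumptions standing in for the real spectral inversion.

```python
# Generic Threshold Accepting optimizer with a toy misfit objective.
import numpy as np

def threshold_accepting(objective, x0, step=0.1, thresholds=None,
                        iters_per_level=200, seed=0):
    rng = np.random.default_rng(seed)
    if thresholds is None:
        thresholds = np.linspace(1.0, 0.0, 20)     # decreasing acceptance thresholds
    x = np.asarray(x0, dtype=float)
    fx = objective(x)
    best_x, best_f = x.copy(), fx
    for thr in thresholds:
        for _ in range(iters_per_level):
            cand = x + rng.normal(0.0, step, size=x.shape)
            fc = objective(cand)
            if fc - fx < thr:                      # accept if not much worse
                x, fx = cand, fc
                if fx < best_f:
                    best_x, best_f = x.copy(), fx
    return best_x, best_f

# toy misfit with several local minima in place of the real spectral model
toy = lambda v: np.sum((v**2 - 1.0)**2) + 0.1 * np.sum(np.sin(10 * v)**2)
x_opt, f_opt = threshold_accepting(toy, x0=[2.0, -1.5, 0.3])
print("optimum ≈", np.round(x_opt, 3), "misfit", round(float(f_opt), 4))
```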
32 CFR 32.44 - Procurement procedures.
Code of Federal Regulations, 2010 CFR
2010-07-01
... acceptable characteristics or minimum acceptable standards. (iv) The specific features of “brand name or... expected to exceed the simplified acquisition threshold, specifies a “brand name” product. (4) The proposed...
NASA Astrophysics Data System (ADS)
Zhu, C.; Zhang, S.; Xiao, F.; Li, J.; Yuan, L.; Zhang, Y.; Zhu, T.
2018-05-01
The NASA Operation IceBridge (OIB) mission, initiated in 2009, is currently the largest airborne polar remote sensing observation program; it collects airborne remote sensing measurements to bridge the gap between NASA's ICESat and the upcoming ICESat-2 mission. This paper develops an improved method that optimizes the selection of Digital Mapping System (DMS) images and uses an optimal threshold, obtained from experiments in the Beaufort Sea, to calculate the local instantaneous sea surface height in this area. The optimal threshold was determined by comparing manual selections with the lowest Airborne Topographic Mapper (ATM) L1B elevations at thresholds of 2%, 1%, 0.5%, 0.2%, 0.1% and 0.05% in sections A, B and C; the corresponding means of the mean differences are 0.166 m, 0.124 m, 0.083 m, 0.018 m, 0.002 m and -0.034 m. Our study shows that the lowest 0.1% of L1B data is the optimal threshold. The optimal threshold and manual selections were also used to calculate the instantaneous sea surface height over images with leads, and we find that the improved method agrees more closely with the L1B manual selections. For images without leads, the local instantaneous sea surface height was estimated using the linear relations between distance and the sea surface heights calculated over images with leads.
D-optimal experimental designs to test for departure from additivity in a fixed-ratio mixture ray.
Coffey, Todd; Gennings, Chris; Simmons, Jane Ellen; Herr, David W
2005-12-01
Traditional factorial designs for evaluating interactions among chemicals in a mixture may be prohibitive when the number of chemicals is large. Using a mixture of chemicals with a fixed ratio (mixture ray) results in an economical design that allows estimation of additivity or nonadditive interaction for a mixture of interest. This methodology is extended easily to a mixture with a large number of chemicals. Optimal experimental conditions can be chosen that result in increased power to detect departures from additivity. Although these designs are used widely for linear models, optimal designs for nonlinear threshold models are less well known. In the present work, the use of D-optimal designs is demonstrated for nonlinear threshold models applied to a fixed-ratio mixture ray. For a fixed sample size, this design criterion selects the experimental doses and number of subjects per dose level that result in minimum variance of the model parameters and thus increased power to detect departures from additivity. An optimal design is illustrated for a 2:1 ratio (chlorpyrifos:carbaryl) mixture experiment. For this example, and in general, the optimal designs for the nonlinear threshold model depend on prior specification of the slope and dose threshold parameters. Use of a D-optimal criterion produces experimental designs with increased power, whereas standard nonoptimal designs with equally spaced dose groups may result in low power if the active range or threshold is missed.
The variance of length of stay and the optimal DRG outlier payments.
Felder, Stefan
2009-09-01
Prospective payment schemes in health care often include supply-side insurance for cost outliers. In hospital reimbursement, prospective payments for patient discharges, based on their classification into diagnosis related groups (DRGs), are complemented by outlier payments for long-stay patients. The outlier scheme fixes the length-of-stay (LOS) threshold, constraining the profit risk of the hospitals. In most DRG systems, this threshold increases with the standard deviation of the LOS distribution. The present paper addresses the adequacy of this DRG outlier threshold rule for risk-averse hospitals with preferences depending on the expected value and the variance of profits. It first shows that the optimal threshold solves the hospital's trade-off between higher profit risk and lower premium loading payments. It then demonstrates, for truncated normally distributed LOS, that the optimal outlier threshold indeed decreases with an increase in the standard deviation.
Yeatts, Sharon D.; Gennings, Chris; Crofton, Kevin M.
2014-01-01
Traditional additivity models provide little flexibility in modeling the dose–response relationships of the single agents in a mixture. While the flexible single chemical required (FSCR) methods allow greater flexibility, its implicit nature is an obstacle in the formation of the parameter covariance matrix, which forms the basis for many statistical optimality design criteria. The goal of this effort is to develop a method for constructing the parameter covariance matrix for the FSCR models, so that (local) alphabetic optimality criteria can be applied. Data from Crofton et al. are provided as motivation; in an experiment designed to determine the effect of 18 polyhalogenated aromatic hydrocarbons on serum total thyroxine (T4), the interaction among the chemicals was statistically significant. Gennings et al. fit the FSCR interaction threshold model to the data. The resulting estimate of the interaction threshold was positive and within the observed dose region, providing evidence of a dose-dependent interaction. However, the corresponding likelihood-ratio-based confidence interval was wide and included zero. In order to more precisely estimate the location of the interaction threshold, supplemental data are required. Using the available data as the first stage, the Ds-optimal second-stage design criterion was applied to minimize the variance of the hypothesized interaction threshold. Practical concerns associated with the resulting design are discussed and addressed using the penalized optimality criterion. Results demonstrate that the penalized Ds-optimal second-stage design can be used to more precisely define the interaction threshold while maintaining the characteristics deemed important in practice. PMID:22640366
COMPARISON OF NONLINEAR DYNAMICS OPTIMIZATION METHODS FOR APS-U
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun, Y.; Borland, Michael
Many different objectives and genetic algorithms have been proposed for storage ring nonlinear dynamics performance optimization. These optimization objectives include nonlinear chromaticities and driving/detuning terms, on-momentum and off-momentum dynamic acceptance, chromatic detuning, local momentum acceptance, variation of transverse invariant, Touschek lifetime, etc. In this paper, the effectiveness of several different optimization methods and objectives are compared for the nonlinear beam dynamics optimization of the Advanced Photon Source upgrade (APS-U) lattice. The optimized solutions from these different methods are preliminarily compared in terms of the dynamic acceptance, local momentum acceptance, chromatic detuning, and other performance measures.
Time-Dependent Computed Tomographic Perfusion Thresholds for Patients With Acute Ischemic Stroke.
d'Esterre, Christopher D; Boesen, Mari E; Ahn, Seong Hwan; Pordeli, Pooneh; Najm, Mohamed; Minhas, Priyanka; Davari, Paniz; Fainardi, Enrico; Rubiera, Marta; Khaw, Alexander V; Zini, Andrea; Frayne, Richard; Hill, Michael D; Demchuk, Andrew M; Sajobi, Tolulope T; Forkert, Nils D; Goyal, Mayank; Lee, Ting Y; Menon, Bijoy K
2015-12-01
Among patients with acute ischemic stroke, we determine computed tomographic perfusion (CTP) thresholds associated with follow-up infarction at different stroke onset-to-CTP and CTP-to-reperfusion times. Acute ischemic stroke patients with occlusion on computed tomographic angiography were acutely imaged with CTP. Noncontrast computed tomography and magnetic resonance diffusion-weighted imaging between 24 and 48 hours were used to delineate follow-up infarction. Reperfusion was assessed on conventional angiogram or 4-hour repeat computed tomographic angiography. Tmax, cerebral blood flow, and cerebral blood volume derived from delay-insensitive CTP postprocessing were analyzed using receiver operating characteristic curves to derive optimal thresholds for combined patient data (pooled analysis) and individual patients (patient-level analysis) based on time from stroke onset-to-CTP and CTP-to-reperfusion. One-way ANOVA and locally weighted scatterplot smoothing regression were used to test whether the derived optimal CTP thresholds differed by time. One hundred and thirty-two patients were included. Tmax thresholds of >16.2 and >15.8 s and absolute cerebral blood flow thresholds of <8.9 and <7.4 mL·min(-1)·100 g(-1) were associated with infarct if reperfused <90 min from CTP with onset <180 min. The discriminative ability of cerebral blood volume was modest. No statistically significant relationship was noted between stroke onset-to-CTP time and the optimal CTP thresholds for all parameters based on discrete or continuous time analysis (P>0.05). A statistically significant relationship existed between CTP-to-reperfusion time and the optimal thresholds for cerebral blood flow (P<0.001; r=0.59 and 0.77 for gray and white matter, respectively) and Tmax (P<0.001; r=-0.68 and -0.60 for gray and white matter, respectively) parameters. Optimal CTP thresholds associated with follow-up infarction depend on the time from imaging to reperfusion. © 2015 American Heart Association, Inc.
A threshold selection method based on edge preserving
NASA Astrophysics Data System (ADS)
Lou, Liantang; Dan, Wei; Chen, Jiaqi
2015-12-01
A method of automatic threshold selection for image segmentation is presented. An optimal threshold is selected so that image edges are preserved in the segmentation. The shortcoming of Otsu's method based on gray-level histograms is analyzed. The edge energy function of a bivariate continuous function is expressed as a line integral, while the edge energy function of an image is approximated by discretizing the integral. An optimal threshold method based on maximizing the edge energy function is given. Several experimental results are also presented for comparison with Otsu's method.
Robotic Vision, Tray-Picking System Design Using Multiple, Optical Matched Filters
NASA Astrophysics Data System (ADS)
Leib, Kenneth G.; Mendelsohn, Jay C.; Grieve, Philip G.
1986-10-01
The optical correlator is applied to a robotic vision, tray-picking problem. Complex matched filters (MFs) are designed to provide sufficient optical memory for accepting any orientation of the desired part, and a multiple holographic lens (MHL) is used to increase the memory for continuous coverage. It is shown that with appropriate thresholding a small part can be selected using optical matched filters. A number of criteria are presented for optimizing the vision system. Two of the part-filled trays that Mendelsohn used are considered in this paper, which is the analog (optical) extension of his work. We view the optical correlator as a cueing device for subsequent, finer vision techniques.
NASA Technical Reports Server (NTRS)
Carver, Kyle L.; Saulsberry, Regor L.; Nichols, Charles T.; Spencer, Paul R.; Lucero, Ralph E.
2012-01-01
Eddy current testing (ET) was used to scan bare metallic liners used in the fabrication of composite overwrapped pressure vessels (COPVs) for flaws which could result in premature failure of the vessel. The main goal of the project was to make improvements in the areas of scan signal-to-noise ratio, sensitivity of flaw detection, and estimation of flaw dimensions. Scan settings were optimized, resulting in an increased signal-to-noise ratio. Previously undiscovered flaw indications were observed and investigated. Threshold criteria were determined for the system software's flaw reporting, and estimates of flaw dimensions were brought to an acceptable level of accuracy. Computer algorithms were written to import data for filtering, and a numerical derivative filtering algorithm was evaluated.
Connolly, Declan A J
2012-09-01
The purpose of this article is to assess the value of the anaerobic threshold for use in clinical populations with the intent of improving exercise adaptations and outcomes. The anaerobic threshold is generally poorly understood, improperly used, and poorly measured. It is rarely used in clinical settings and often reserved for athletic performance testing. Increased exercise participation within both clinical and other less healthy populations has increased our attention to optimizing exercise outcomes. Of particular interest is the optimization of lipid metabolism during exercise in order to improve numerous conditions such as the blood lipid profile, insulin sensitivity and secretion, and weight loss. Numerous authors report on the benefits of appropriate exercise intensity in optimizing outcomes, even though regulation of intensity has proved difficult for many. Despite limited use, selected exercise physiology markers have considerable merit in exercise-intensity regulation. The anaerobic threshold, and other markers such as heart rate, may well provide a simple and valuable mechanism for regulating exercise intensity. The use of the anaerobic threshold and an accurate target heart rate to regulate exercise intensity is a valuable approach that is under-utilized across populations. The measurement of the anaerobic threshold can be simplified to allow clients to use non-laboratory measures, for example heart rate, in order to self-regulate exercise intensity and improve outcomes.
Li, Nan; Zarepisheh, Masoud; Uribe-Sanchez, Andres; Moore, Kevin; Tian, Zhen; Zhen, Xin; Graves, Yan Jiang; Gautier, Quentin; Mell, Loren; Zhou, Linghong; Jia, Xun; Jiang, Steve
2013-12-21
Adaptive radiation therapy (ART) can reduce normal tissue toxicity and/or improve tumor control through treatment adaptations based on the current patient anatomy. Developing an efficient and effective re-planning algorithm is an important step toward the clinical realization of ART. For the re-planning process, a manual trial-and-error approach to fine-tune planning parameters is time-consuming and is usually considered impractical, especially for online ART. It is desirable to automate this step to yield a plan of acceptable quality with minimal intervention. In ART, prior information from the original plan, such as the dose-volume histogram (DVH), is available and can be employed to facilitate the automatic re-planning process. The goal of this work is to develop an automatic re-planning algorithm to generate a plan with similar, or possibly better, DVH curves compared with the clinically delivered original plan. Specifically, our algorithm iterates the following two loops. The inner loop is the traditional fluence map optimization, in which we optimize a quadratic objective function penalizing the deviation of the dose received by each voxel from its prescribed or threshold dose with a set of fixed voxel weighting factors. In the outer loop, the voxel weighting factors in the objective function are adjusted according to the deviation of the current DVH curves from those in the original plan. The process is repeated until the DVH curves are acceptable or the maximum number of iterations is reached. The whole algorithm is implemented on GPU for high efficiency. The feasibility of our algorithm has been demonstrated with three head-and-neck cancer IMRT cases, each having an initial planning CT scan and another treatment CT scan acquired in the middle of the treatment course. Compared with the DVH curves in the original plan, the DVH curves in the resulting plan using our algorithm with 30 iterations are better for almost all structures. The re-optimization process takes about 30 s using our in-house optimization engine.
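A toy sketch of this two-loop structure is given below: the inner loop is a fixed-weight quadratic fluence optimization and the outer loop raises the penalty weight of voxels whose achieved dose falls short of a reference value, mimicking DVH-driven weight adjustment. The dose matrix, prescriptions and step sizes are invented, and this is not the paper's GPU implementation.

```python
# Two-loop re-planning sketch: inner weighted least squares on the
# fluence, outer DVH-style weight update (illustrative numbers only).
import numpy as np

rng = np.random.default_rng(9)
n_vox, n_beam = 50, 15
A = rng.random((n_vox, n_beam)) * 0.08          # toy dose-deposition matrix
presc = rng.uniform(0.6, 1.0, n_vox)            # per-voxel prescribed/threshold dose
ref_dose = presc * 0.95                         # stand-in for the original-plan DVH
w = np.ones(n_vox)                              # voxel weighting factors

x = np.zeros(n_beam)
for outer in range(10):
    step = 0.1 / w.max()                        # keep the gradient step stable
    for inner in range(300):                    # inner loop: fluence optimization
        d = A @ x
        grad = 2 * A.T @ (w * (d - presc))
        x = np.maximum(x - step * grad, 0.0)    # non-negative fluence
    d = A @ x
    # outer loop: increase weights where the achieved dose falls short of
    # the reference, mimicking DVH-driven weight adjustment
    w = np.where(d < ref_dose, w * 1.5, w)

print("fraction of voxels at or above reference dose:",
      round(float(np.mean(A @ x >= ref_dose)), 3))
```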
Using instrumental (CIE and reflectance) measures to predict consumers' acceptance of beef colour.
Holman, Benjamin W B; van de Ven, Remy J; Mao, Yanwei; Coombs, Cassius E O; Hopkins, David L
2017-05-01
We aimed to establish colorimetric thresholds based upon the capacity for instrumental measures to predict consumer satisfaction with beef colour. A web-based survey was used to distribute standardised photographs of beef M. longissimus lumborum with known colorimetrics (L*, a*, b*, hue, chroma, ratio of reflectance at 630nm and 580nm, and estimated deoxymyoglobin, oxymyoglobin and metmyoglobin concentrations) for scrutiny. Consumer demographics and perceived importance of colour to beef value were also evaluated. It was found that a* provided the simplest and most robust prediction of beef colour acceptability. Beef colour was considered acceptable (with 95% acceptance) when a* values were equal to or above 14.5. Demographic effects on this threshold were negligible, but consumer nationality and gender did contribute to variation in the relative importance of colour to beef value. These results provide future beef colour studies with context to interpret objective colour measures in terms of consumer acceptance and market appeal. Crown Copyright © 2017. Published by Elsevier Ltd. All rights reserved.
Threshold-driven optimization for reference-based auto-planning
NASA Astrophysics Data System (ADS)
Long, Troy; Chen, Mingli; Jiang, Steve; Lu, Weiguo
2018-02-01
We study threshold-driven optimization methodology for automatically generating a treatment plan that is motivated by a reference DVH for IMRT treatment planning. We present a framework for threshold-driven optimization for reference-based auto-planning (TORA). Commonly used voxel-based quadratic penalties have two components for penalizing under- and over-dosing of voxels: a reference dose threshold and associated penalty weight. Conventional manual- and auto-planning using such a function involves iteratively updating the preference weights while keeping the thresholds constant, an unintuitive and often inconsistent method for planning toward some reference DVH. However, driving a dose distribution by threshold values instead of preference weights can achieve similar plans with less computational effort. The proposed methodology spatially assigns reference DVH information to threshold values, and iteratively improves the quality of that assignment. The methodology effectively handles both sub-optimal and infeasible DVHs. TORA was applied to a prostate case and a liver case as a proof-of-concept. Reference DVHs were generated using a conventional voxel-based objective, then altered to be either infeasible or easy-to-achieve. TORA was able to closely recreate reference DVHs in 5-15 iterations of solving a simple convex sub-problem. TORA has the potential to be effective for auto-planning based on reference DVHs. As dose prediction and knowledge-based planning becomes more prevalent in the clinical setting, incorporating such data into the treatment planning model in a clear, efficient way will be crucial for automated planning. A threshold-focused objective tuning should be explored over conventional methods of updating preference weights for DVH-guided treatment planning.
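For contrast with the weight-driven loop sketched after the ART abstract above, the toy code below nudges the per-voxel thresholds (not the weights) toward a reference dose, the behaviour TORA is built around; the one-sided quadratic penalties, toy dose matrices and step sizes are illustrative and do not reproduce the authors' implementation.

```python
# Threshold-driven tuning sketch: fixed weights, moving per-voxel
# thresholds on target under-dose and OAR over-dose penalties.
import numpy as np

rng = np.random.default_rng(7)
n_t, n_o, n_b = 30, 30, 20                       # target voxels, OAR voxels, beamlets
A_t = rng.random((n_t, n_b)) * 0.08 + 0.02       # toy dose-deposition matrices
A_o = rng.random((n_o, n_b)) * 0.05

ref_target = 1.0                                  # reference target dose (assumed)
ref_oar = 0.6                                     # reference OAR cap (assumed)
t_under = np.full(n_t, ref_target)                # per-voxel thresholds (tuned)
t_over = np.full(n_o, ref_oar)

x = np.zeros(n_b)
for outer in range(15):
    for inner in range(300):                      # inner loop: gradient descent
        d_t, d_o = A_t @ x, A_o @ x
        g = (-2 * A_t.T @ np.maximum(t_under - d_t, 0.0)
             + 2 * A_o.T @ np.maximum(d_o - t_over, 0.0))
        x = np.maximum(x - 0.1 * g, 0.0)          # non-negative fluence
    d_t, d_o = A_t @ x, A_o @ x
    # outer loop: move thresholds, not weights, toward the reference DVH
    t_under = np.where(d_t < ref_target, t_under + 0.3 * (ref_target - d_t), t_under)
    t_over = np.where(d_o > ref_oar, t_over - 0.3 * (d_o - ref_oar), t_over)

print("min target dose:", round(float((A_t @ x).min()), 3),
      " max OAR dose:", round(float((A_o @ x).max()), 3))
```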
Wang, Rui-Ping; Jiang, Yong-Gen; Zhao, Gen-Ming; Guo, Xiao-Qin; Michael, Engelgau
2017-12-01
The China Infectious Disease Automated-alert and Response System (CIDARS) was successfully implemented and became operational nationwide in 2008. The CIDARS plays an important role in, and has been integrated into, the routine outbreak monitoring efforts of the Center for Disease Control (CDC) at all levels in China. In the CIDARS, thresholds were initially determined using the "Mean+2SD" method, which has limitations. This study compared the performance of optimized thresholds defined using the "Mean+2SD" method to the performance of 5 novel algorithms to select the optimal "Outbreak Gold Standard (OGS)" and corresponding thresholds for outbreak detection. Data for infectious disease were organized by calendar week and year. The "Mean+2SD", C1, C2, moving average (MA), seasonal model (SM), and cumulative sum (CUSUM) algorithms were applied. Outbreak signals for the predicted value (Px) were calculated using a percentile-based moving window. When the outbreak signals generated by an algorithm were in line with a Px-generated outbreak signal for each week, this Px was then defined as the optimized threshold for that algorithm. In this study, six infectious diseases were selected and classified into TYPE A (chickenpox and mumps), TYPE B (influenza and rubella) and TYPE C [hand foot and mouth disease (HFMD) and scarlet fever]. Optimized thresholds for chickenpox (P55), mumps (P50), influenza (P40, P55, and P75), rubella (P45 and P75), HFMD (P65 and P70), and scarlet fever (P75 and P80) were identified. The C1, C2, CUSUM, SM, and MA algorithms were appropriate for TYPE A. All 6 algorithms were appropriate for TYPE B. C1 and CUSUM algorithms were appropriate for TYPE C. It is critical to incorporate more flexible algorithms as OGS into the CIDARS and to identify the proper OGS and corresponding recommended optimized threshold for different infectious disease types.
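To illustrate two of the algorithm families compared (CUSUM and the percentile-based Px thresholds), here is a hedged Python sketch; the parameter values and the standardization step are generic textbook choices, not those used by CIDARS.

```python
import numpy as np

def cusum_signals(counts, k=0.5, h=4.0):
    """One-sided CUSUM on standardized weekly counts; a signal fires when the
    cumulative positive deviation exceeds h standard deviations."""
    counts = np.asarray(counts, dtype=float)
    mu, sd = counts.mean(), counts.std(ddof=1)
    z = (counts - mu) / sd
    s, signals = 0.0, []
    for zi in z:
        s = max(0.0, s + zi - k)   # accumulate only upward deviations
        signals.append(s > h)
    return np.array(signals)

def percentile_threshold(history, p):
    """Px threshold computed from a moving window of historical weekly counts."""
    return np.percentile(history, p)
```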
Optimization of contrast-enhanced spectral mammography depending on clinical indication.
Dromain, Clarisse; Canale, Sandra; Saab-Puong, Sylvie; Carton, Ann-Katherine; Muller, Serge; Fallenberg, Eva Maria
2014-10-01
The objective is to optimize low-energy (LE) and high-energy (HE) exposure parameters of contrast-enhanced spectral mammography (CESM) examinations in four different clinical applications for which different levels of average glandular dose (AGD) and ratios between LE and total doses are required. The optimization was performed on a Senographe DS with a SenoBright® upgrade. Simulations were performed to find the optima by maximizing the contrast-to-noise ratio (CNR) on the recombined CESM image using different targeted doses and LE image quality. The linearity between iodine concentration and CNR as well as the minimal detectable iodine concentration was assessed. The image quality of the LE image was assessed on the CDMAM contrast-detail phantom. Experiments confirmed the optima found on simulation. The CNR was higher for each clinical indication than for SenoBright®, including the screening indication for which the total AGD was 22% lower. Minimal iodine concentrations detectable in the case of a 3-mm-diameter round tumor were 12.5% lower than those obtained for the same dose in the clinical routine. LE image quality satisfied EUREF acceptable limits for threshold contrast. This newly optimized set of acquisition parameters allows increased contrast detectability compared to parameters currently used without a significant loss in LE image quality.
Reliability of the method of levels for determining cutaneous temperature sensitivity
NASA Astrophysics Data System (ADS)
Jakovljević, Miroljub; Mekjavić, Igor B.
2012-09-01
Determination of thermal thresholds is used clinically for the evaluation of peripheral nervous system function. The aim of this study was to evaluate the reliability of the method of levels performed with a new, low-cost device for determining cutaneous temperature sensitivity. Nineteen male subjects were included in the study. Thermal thresholds were tested on the right side at the volar surface of the mid-forearm, the lateral surface of the mid-upper arm and the front area of the mid-thigh. Thermal testing was carried out by the method of levels with an initial temperature step of 2°C. Variability of thermal thresholds was expressed by means of the ratio between the second and the first testing, the coefficient of variation (CV), the coefficient of repeatability (CR), the intraclass correlation coefficient (ICC), the mean difference between sessions (S1-S2diff), the standard error of measurement (SEM) and the minimally detectable change (MDC). There were no statistically significant changes between sessions for warm or cold thresholds, or between warm and cold thresholds. Within-subject CVs were acceptable. The CR estimates for warm thresholds ranged from 0.74°C to 1.06°C and from 0.67°C to 1.07°C for cold thresholds. The ICC values for intra-rater reliability ranged from 0.41 to 0.72 for warm thresholds and from 0.67 to 0.84 for cold thresholds. S1-S2diff ranged from -0.15°C to 0.07°C for warm thresholds, and from -0.08°C to 0.07°C for cold thresholds. SEM ranged from 0.26°C to 0.38°C for warm thresholds, and from 0.23°C to 0.38°C for cold thresholds. Estimated MDC values were between 0.60°C and 0.88°C for warm thresholds, and 0.53°C and 0.88°C for cold thresholds. The method of levels for determining cutaneous temperature sensitivity has acceptable reliability.
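The reliability statistics reported above can be computed from two testing sessions as follows. This sketch assumes the ICC has already been estimated elsewhere (e.g. with a two-way mixed model) and uses the common formulas SEM = SD*sqrt(1 - ICC), MDC = 1.96*sqrt(2)*SEM and CR = 1.96*SD of the between-session differences.

```python
import numpy as np

def reliability_stats(session1, session2, icc):
    """Test-retest summary statistics for paired threshold measurements."""
    s1, s2 = np.asarray(session1, dtype=float), np.asarray(session2, dtype=float)
    diff = s1 - s2
    pooled_sd = np.std(np.concatenate([s1, s2]), ddof=1)
    sem = pooled_sd * np.sqrt(1.0 - icc)      # standard error of measurement
    mdc = 1.96 * np.sqrt(2.0) * sem           # minimally detectable change
    cr = 1.96 * np.std(diff, ddof=1)          # coefficient of repeatability
    return {"mean_diff": diff.mean(), "SEM": sem, "MDC": mdc, "CR": cr}
```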
A fuzzy optimal threshold technique for medical images
NASA Astrophysics Data System (ADS)
Thirupathi Kannan, Balaji; Krishnasamy, Krishnaveni; Pradeep Kumar Kenny, S.
2012-01-01
A new fuzzy-based thresholding method for medical images, especially cervical cytology images having blob and mosaic structures, is proposed in this paper. Many existing thresholding algorithms may segment either blob or mosaic images, but there is no single algorithm that can do both. In this paper, an input cervical cytology image is binarized and preprocessed, and the pixel value with the minimum Fuzzy Gaussian Index is identified as the optimal threshold value and used for segmentation. The proposed technique is tested on various cervical cytology images having blob or mosaic structures, compared with various existing algorithms, and shown to perform better than the existing algorithms.
Albanese, Mark A; Farrell, Philip; Dottl, Susan L
2005-01-01
Using Medical College Admission Test-grade point average (MCAT-GPA) scores as a threshold has the potential to address issues raised in recent Supreme Court cases, but it introduces complicated methodological issues for medical school admissions. To assess various statistical indexes to determine optimally discriminating thresholds for MCAT-GPA scores, entering classes from 1992 through 1998 (N = 752) were used to develop guidelines for cut scores that optimize discrimination between students who pass and do not pass the United States Medical Licensing Examination (USMLE) Step 1 on the first attempt. Risk differences, odds ratios, sensitivity, and specificity discriminated best for setting thresholds. Compensatory and noncompensatory procedures both accounted for 54% of Step 1 failures, but demanded different performance requirements (noncompensatory: MCAT biological sciences = 8, physical sciences = 7, verbal reasoning = 7, sum of scores = 22; compensatory: MCAT total = 24). Rational and defensible intellectual achievement thresholds that are likely to comply with recent Supreme Court decisions can be set from MCAT scores and GPAs.
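A small sketch of how sensitivity, specificity and the risk difference can be evaluated for a candidate cut score; the variable names and the simple below-threshold flagging rule are illustrative, not the authors' exact procedure.

```python
import numpy as np

def threshold_screen(scores, passed_step1, cutoff):
    """Screening statistics for flagging students whose total score falls below
    a candidate cut score."""
    scores = np.asarray(scores, dtype=float)
    passed = np.asarray(passed_step1, dtype=bool)
    flagged, failed = scores < cutoff, ~passed
    sensitivity = (flagged & failed).sum() / failed.sum()    # failures that were flagged
    specificity = (~flagged & passed).sum() / passed.sum()   # passers not flagged
    risk_diff = failed[flagged].mean() - failed[~flagged].mean()
    return sensitivity, specificity, risk_diff
```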
A stimulus-dependent spike threshold is an optimal neural coder
Jones, Douglas L.; Johnson, Erik C.; Ratnam, Rama
2015-01-01
A neural code based on sequences of spikes can consume a significant portion of the brain's energy budget. Thus, energy considerations would dictate that spiking activity be kept as low as possible. However, a high spike-rate improves the coding and representation of signals in spike trains, particularly in sensory systems. These are competing demands, and selective pressure has presumably worked to optimize coding by apportioning a minimum number of spikes so as to maximize coding fidelity. The mechanisms by which a neuron generates spikes while maintaining a fidelity criterion are not known. Here, we show that a signal-dependent neural threshold, similar to a dynamic or adapting threshold, optimizes the trade-off between spike generation (encoding) and fidelity (decoding). The threshold mimics a post-synaptic membrane (a low-pass filter) and serves as an internal decoder. Further, it sets the average firing rate (the energy constraint). The decoding process provides an internal copy of the coding error to the spike-generator which emits a spike when the error equals or exceeds a spike threshold. When optimized, the trade-off leads to a deterministic spike firing-rule that generates optimally timed spikes so as to maximize fidelity. The optimal coder is derived in closed-form in the limit of high spike-rates, when the signal can be approximated as a piece-wise constant signal. The predicted spike-times are close to those obtained experimentally in the primary electrosensory afferent neurons of weakly electric fish (Apteronotus leptorhynchus) and pyramidal neurons from the somatosensory cortex of the rat. We suggest that KCNQ/Kv7 channels (underlying the M-current) are good candidates for the decoder. They are widely coupled to metabolic processes and do not inactivate. We conclude that the neural threshold is optimized to generate an energy-efficient and high-fidelity neural code. PMID:26082710
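A toy Python sketch of the encode/decode loop described above: an internal leaky (low-pass) reconstruction plays the role of the decoder, and a spike is emitted whenever the coding error reaches the threshold. The time constant, threshold value and fixed per-spike kick are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

def adaptive_threshold_encoder(signal, dt=1e-3, tau=0.02, theta=0.5):
    """Emit a spike whenever the error between the input signal and the internal
    low-pass-filtered reconstruction reaches the spike threshold theta."""
    recon, spikes = 0.0, []
    alpha = dt / tau
    for s in signal:
        recon += alpha * (-recon)        # leaky decay of the internal reconstruction
        if s - recon >= theta:           # coding error has reached the threshold
            spikes.append(True)
            recon += theta               # each spike adds a fixed kick to the decoder
        else:
            spikes.append(False)
    return np.array(spikes)
```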
Optimizing fluence and debridement effects on cutaneous resurfacing carbon dioxide laser surgery.
Weisberg, N K; Kuo, T; Torkian, B; Reinisch, L; Ellis, D L
1998-10-01
To develop methods to compare carbon dioxide (CO2) resurfacing lasers, fluence, and debridement effects on tissue shrinkage and histological thermal denaturation. In vitro human or in vivo porcine skin samples received up to 5 passes with scanner or short-pulsed CO2 resurfacing lasers. Fluences ranging from 2.19 to 17.58 J/cm2 (scanner) and 1.11 to 5.56 J/cm2 (short pulsed) were used to determine each laser's threshold energy for clinical effect. Variable amounts of debridement were also studied. Tissue shrinkage was evaluated by using digital photography to measure linear distance change of the treated tissue. Tissue histological studies were evaluated using quantitative computer image analysis. Fluence-independent in vitro tissue shrinkage was seen with the scanned and short-pulsed lasers above threshold fluence levels of 5.9 and 2.5 J/cm2, respectively. Histologically, fluence-independent thermal depths of damage of 77 microns (scanner) and 25 microns (pulsed) were observed. Aggressive debridement of the tissue increased the shrinkage per pass of the laser, and decreased the fluence required for the threshold effect. In vivo experiments confirmed the in vitro results, although the in vivo threshold fluence level was slightly higher and the shrinkage obtained was slightly lower per pass. Our methods allow comparison of different resurfacing lasers' acute effects. We found equivalent laser tissue effects using lower fluences than those currently accepted clinically. This suggests that the morbidity associated with CO2 laser resurfacing may be minimized by lowering levels of tissue input energy and controlling for tissue debridement.
Meier-Dinkel, Lisa; Gertheiss, Jan; Schnäckel, Wolfram; Mörlein, Daniel
2016-08-01
Characteristic off-flavours may occur in uncastrated male pigs depending on the accumulation of androstenone and skatole. Feasible processing of strongly tainted carcasses is challenging but gains in importance due to the European ban on piglet castration in 2018. This paper investigates consumers' acceptability of two sausage types: (a) emulsion-type (BOILED) and (b) smoked raw-fermented (FERM). Liking (9-point scales) and flavour perception (check-all-that-apply with both typical and negatively connoted sensory terms) were evaluated by 120 consumers (within-subject design). The proportion of tainted boar meat (0, 50, 100%) affected overall liking of BOILED, F (2, 238)=23.22, P<.001, but not of FERM sausages, F (2, 238)=0.89, P=.414. Consumers described the flavour of BOILED-100 as strong and sweaty. In conclusion, FERM products seem promising for processing of tainted carcasses, whereas formulations must be optimized for BOILED in order to eliminate perceptible off-flavours. Boar taint rejection thresholds may be higher for processed meat than those suggested for unprocessed meat cuts. Copyright © 2016 Elsevier Ltd. All rights reserved.
A Connection Admission Control Method for Web Server Systems
NASA Astrophysics Data System (ADS)
Satake, Shinsuke; Inai, Hiroshi; Saito, Tomoya; Arai, Tsuyoshi
Most browsers establish multiple connections and download files in parallel to reduce the response time. On the other hand, a web server limits the total number of connections to prevent itself from being overloaded. That could decrease the response time, but would increase the loss probability, i.e. the probability that a newly arriving client is rejected. This paper proposes a connection admission control method which accepts only one connection from a newly arriving client when the number of connections exceeds a threshold, but accepts new multiple connections when the number of connections is less than the threshold. Our method is aimed at reducing the response time by allowing as many clients as possible to establish multiple connections, while also reducing the loss probability. To reduce the time web server administrators spend determining an adequate threshold, we introduce a procedure which approximately calculates the loss probability under the condition that the threshold is given. Via simulation, we validate the approximation and show the effectiveness of the admission control.
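The admission rule itself is simple to state in code. The sketch below is our reading of the proposed method, with the hard connection limit and the threshold as hypothetical parameters.

```python
def admit(new_client_connections, current_connections, threshold, hard_limit):
    """Admission rule sketched from the abstract: below the threshold, accept all
    parallel connections of a newly arriving client; above it, accept only one;
    reject the client if even one connection would exceed the server's hard limit."""
    if (current_connections < threshold and
            current_connections + new_client_connections <= hard_limit):
        return new_client_connections      # full parallel download allowed
    if current_connections + 1 <= hard_limit:
        return 1                           # degraded single-connection service
    return 0                               # client rejected (contributes to loss)
```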
Jafri, Nazia F; Newitt, David C; Kornak, John; Esserman, Laura J; Joe, Bonnie N; Hylton, Nola M
2014-08-01
To evaluate optimal contrast kinetics thresholds for measuring functional tumor volume (FTV) by breast magnetic resonance imaging (MRI) for assessment of recurrence-free survival (RFS). In this Institutional Review Board (IRB)-approved retrospective study of 64 patients (ages 29-72, median age of 48.6) undergoing neoadjuvant chemotherapy (NACT) for breast cancer, all patients underwent pre-MRI1 and postchemotherapy MRI4 of the breast. Tumor was defined as voxels meeting thresholds for early percent enhancement (PEthresh) and early-to-late signal enhancement ratio (SERthresh); and FTV (PEthresh, SERthresh) by summing all voxels meeting threshold criteria and minimum connectivity requirements. Ranges of PEthresh from 50% to 220% and SERthresh from 0.0 to 2.0 were evaluated. A Cox proportional hazard model determined associations between change in FTV over treatment and RFS at different PE and SER thresholds. The plot of hazard ratios for change in FTV from MRI1 to MRI4 showed a broad peak with the maximum hazard ratio and highest significance occurring at PE threshold of 70% and SER threshold of 1.0 (hazard ratio = 8.71, 95% confidence interval 2.86-25.5, P < 0.00015), indicating optimal model fit. Enhancement thresholds affect the ability of MRI tumor volume to predict RFS. The value is robust over a wide range of thresholds, supporting the use of FTV as a biomarker. © 2013 Wiley Periodicals, Inc.
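A minimal sketch of the functional tumor volume computation implied above, using the 70% PE and 1.0 SER thresholds found optimal. Connectivity filtering is omitted, the enhancement formulas are the standard definitions assumed here, and the code assumes positive enhancement denominators.

```python
import numpy as np

def functional_tumor_volume(early, late, pre, voxel_vol,
                            pe_thresh=0.70, ser_thresh=1.0):
    """FTV: total volume of voxels whose early percent enhancement (PE) and
    early-to-late signal enhancement ratio (SER) both meet the thresholds."""
    pe = (early - pre) / pre            # early percent enhancement (0.70 = 70%)
    ser = (early - pre) / (late - pre)  # early-to-late signal enhancement ratio
    mask = (pe >= pe_thresh) & (ser >= ser_thresh)
    return mask.sum() * voxel_vol
```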
Threshold selection for classification of MR brain images by clustering method
NASA Astrophysics Data System (ADS)
Moldovanu, Simona; Obreja, Cristian; Moraru, Luminita
2015-12-01
Given a grey-intensity image, our method detects the optimal threshold for a suitable binarization of MR brain images. In MR brain image processing, the grey levels of pixels belonging to the object are not substantially different from the grey levels belonging to the background. Threshold optimization is an effective tool to separate objects from the background and, further, in classification applications. This paper gives a detailed investigation on the selection of thresholds. Our method does not use the well-known methods for binarization. Instead, we perform a simple threshold optimization which, in turn, allows the best classification of the analyzed images into healthy subjects and multiple sclerosis disease. The dissimilarity (or the distance between classes) has been established using a clustering method based on dendrograms. We tested our method using two classes of images: 20 T2-weighted and 20 proton density (PD)-weighted scans from two healthy subjects and from two patients with multiple sclerosis. For each image and for each threshold, the number of white pixels (or the area of white objects in the binary image) has been determined. These pixel numbers represent the objects in the clustering operation. The following optimum threshold values are obtained: T = 80 for PD images and T = 30 for T2w images. Each of these thresholds clearly separates the clusters belonging to the studied groups, healthy subjects and multiple sclerosis disease.
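A compact sketch of the pipeline described: count white pixels across candidate thresholds for each image, then cluster the resulting count vectors hierarchically. Ward linkage is an assumption here; the paper only specifies dendrogram-based clustering.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def white_pixel_counts(image, thresholds):
    """Number of white pixels after binarizing a grey-level image at each threshold."""
    return np.array([(image >= t).sum() for t in thresholds])

def cluster_images(count_vectors, n_clusters=2):
    """Hierarchical (dendrogram-based) clustering of per-image count vectors."""
    z = linkage(np.vstack(count_vectors), method="ward")
    return fcluster(z, t=n_clusters, criterion="maxclust")
```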
Colour perception with changes in levels of illumination
NASA Astrophysics Data System (ADS)
Baah, Kwame F.; Green, Phil; Pointer, Michael
2012-01-01
The perceived colour of a stimulus depends on the conditions under which it is viewed. For colours employed as an important cue or identifier, such as signage and brand colours, colour reproduction tolerances are critically important. Typically, such stimuli would be judged using a known level of illumination but, in the target environment, the level of illumination used to view the samples may be entirely different. The effect of changes in the viewing condition on the perceptibility and acceptability of small colour differences should be understood when such tolerances, and the associated viewing conditions, are specified. A series of psychophysical experiments was conducted to determine whether changes in illumination level significantly alter acceptability and perceptibility thresholds of uniform colour stimuli. It was found that perceived colour discrimination thresholds varied by up to 2.0 ΔE00. For the perceptual correlate of hue, however, this variation could be significant if the accepted colour-difference error was at the threshold, raising the possibility of rejection with changes in illumination level. Lightness and chroma, on the other hand, exhibited greater tolerance and were less likely to be rejected with illuminance changes.
Heng, Shi Thong; Tan, Michelle; Young, Barnaby; Lye, David; Ng, Tat Ming
2017-01-01
Abstract Background Antibiotic clinical decision support systems (CDSS) were implemented to provide stewardship at the point of ordering of broad-spectrum antibiotics (piperacillin-tazobactam and carbapenems). We postulated that a YouTube-based educational video package (EP) with quizzes could help to improve CDSS acceptance. Methods A before-after study was conducted in general wards at Tan Tock Seng Hospital from April 2016 to March 2017. Baseline data were collected for 6 months before EP was implemented and during the next 6 months with EP dissemination to all doctors. Acceptance of CDSS recommendations between both phases was compared. Independent factors associated with acceptance of specific CDSS recommendations were identified by logistic regression. Results The numbers of patients recruited before and after EP were 1642 and 1313, respectively. The overall CDSS acceptance rate was similar before and after EP. There was improved acceptance of recommendations for dose optimization, antibiotic optimization and set duration (Figures 1 and 2). Independent factors of CDSS acceptance for dose optimization, antibiotic optimization and set duration are shown in Table 1. EP implementation was independently associated with acceptance of recommendations to set duration and optimize antibiotics. Conclusion EP was independently associated with increased CDSS acceptance on antibiotic duration and antibiotic optimization. Although acceptance of dose optimization was improved, EP was not associated independently with acceptance of those recommendations.
Figure 2. Acceptance of CDSS recommendations by classification of recommendation.
Table 1. Multivariate models of acceptance of CDSS recommendations, odds ratio [95% CI], columns: set duration / antibiotic optimization / dose optimization:
Lung infection: 2.71 [2.13-3.45] / 2.08 [1.71-2.52] / 2.79 [2.19-3.55]
Unknown sepsis source: 1.73 [1.27-2.35] / – / 1.44 [1.05-1.96]
Piperacillin-tazobactam use: 3.02 [2.17-4.19] / – / –
Temperature during initiation of antibiotics: 0.86 [0.79-0.94] / – / –
Oxygen supplementation during initiation of antibiotics: – / 0.76 [0.64-0.91] / 0.76 [0.64-0.91]
EP implementation: 1.38 [1.18-1.62] / 1.21 [1.02-1.43] / –
Disclosures All authors: No reported disclosures.
Nordanstig, J; Pettersson, M; Morgan, M; Falkenberg, M; Kumlien, C
2017-09-01
Patient reported outcomes are increasingly used to assess outcomes after peripheral arterial disease (PAD) interventions. VascuQoL-6 (VQ-6) is a PAD-specific health-related quality of life (HRQoL) instrument for routine clinical practice and clinical research. This study assessed the minimum important difference for the VQ-6 and determined thresholds for the minimum important difference and substantial clinical benefit following PAD revascularisation. This was a population-based observational cohort study. VQ-6 data from the Swedvasc Registry (January 2014 to September 2016) were analysed for revascularised PAD patients. The minimum important difference was determined using a combination of a distribution-based and an anchor-based method, while receiver operating characteristic (ROC) curve analysis was used to determine optimal thresholds for a substantial clinical benefit following revascularisation. A total of 3194 revascularised PAD patients with complete VQ-6 baseline recordings (intermittent claudication (IC) n = 1622 and critical limb ischaemia (CLI) n = 1572) were studied, of which 2996 had complete VQ-6 recordings 30 days after the vascular intervention and 1092 one year after. The minimum important difference 1 year after revascularisation for IC patients ranged from 1.7 to 2.2 scale steps, depending on the method of analysis. Among CLI patients, the minimum important difference after 1 year was 1.9 scale steps. ROC analyses demonstrated that the VQ-6 discriminative properties for a substantial clinical benefit were excellent for IC patients (area under curve (AUC) 0.87, sensitivity 0.81, specificity 0.76) and acceptable in CLI (AUC 0.736, sensitivity 0.63, specificity 0.72). An optimal VQ-6 threshold for a substantial clinical benefit was determined at 3.5 scale steps among IC patients and 4.5 in CLI patients. The suggested thresholds for minimum important difference and substantial clinical benefit could be used when evaluating VQ-6 outcomes following different interventions in PAD and in the design of clinical trials. Copyright © 2017 European Society for Vascular Surgery. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Wang, Junhua; Li, Dazhen; Wang, Bo; Yang, Jing; Yang, Houwen; Wang, Xiaoqian; Cheng, Wenyong
2017-11-01
In inertial confinement fusion, ultraviolet laser damage of the fused silica lens is an important limiting factor for the load capability of the laser driver. To solve this problem, a new configuration of frequency tripling is proposed in this paper. The frequency tripling crystal is placed downstream of the focusing lens, so that sum frequency generation of the fundamental-frequency light and the doubled-frequency light occurs in the beam convergence path. The focusing lens is only irradiated by the fundamental and doubled-frequency light, and thus its damage threshold will increase. LiB3O5 (LBO) crystals are employed as frequency tripling crystals for their larger acceptance angle and higher damage threshold than those of KDP/DKDP crystals. With the limitations of acceptance angle and crystal growth size taken into account, a tiling scheme of LBO crystals is proposed and designed optimally to adapt to the total convergence angle of 36.0 mrad. Theoretical results indicate that 3 LBO crystals tiled with different cutting angles in the θ direction can meet the phase matching condition. Compared with frequency tripling of a parallel beam using one LBO crystal, 83.8% (93.1% with 5 LBO crystals tiled) of the frequency tripling conversion efficiency can be obtained with this new configuration. The results of a principle experiment also support this scheme. By employing this new design, not only will the load capacity of a laser driver be significantly improved, but the fused silica lens can also be changed to a K9 glass lens, which has mature technology and low cost.
Simulation models in population breast cancer screening: A systematic review.
Koleva-Kolarova, Rositsa G; Zhan, Zhuozhao; Greuter, Marcel J W; Feenstra, Talitha L; De Bock, Geertruida H
2015-08-01
The aim of this review was to critically evaluate published simulation models for breast cancer screening of the general population and provide a direction for future modeling. A systematic literature search was performed to identify simulation models with more than one application. A framework for qualitative assessment was developed which incorporated model type; input parameters; modeling approach; transparency of input data sources/assumptions; sensitivity analyses and risk of bias; validation; and outcomes. Predicted mortality reduction (MR) and cost-effectiveness (CE) were compared to estimates from meta-analyses of randomized control trials (RCTs) and acceptability thresholds. Seven original simulation models were distinguished, all sharing common input parameters. The modeling approach was based on tumor progression (except one model) with internal and cross validation of the resulting models, but without any external validation. Differences in lead times for invasive or non-invasive tumors, and the option for cancers not to progress, were not explicitly modeled. The models tended to overestimate the MR due to screening (11-24%) compared with the 10% MR (95% CI -2 to 21%) from optimal RCTs. Only recently have potential harms due to regular breast cancer screening been reported. Most scenarios resulted in acceptable cost-effectiveness estimates given current thresholds. The selected models have been repeatedly applied in various settings to inform decision making, and the critical analysis revealed high risk of bias in their outcomes. Given the importance of the models, there is a need for externally validated models which use systematic evidence for input data to allow for more critical evaluation of breast cancer screening. Copyright © 2015 Elsevier Ltd. All rights reserved.
Lin, Guoping; Candela, Y; Tillement, O; Cai, Zhiping; Lefèvre-Seguin, V; Hare, J
2012-12-15
A method based on thermal bistability for ultralow-threshold microlaser optimization is demonstrated. When sweeping the pump laser frequency across a pump resonance, the dynamic thermal bistability slows down the power variation. The resulting line shape modification enables a real-time monitoring of the laser characteristic. We demonstrate this method for a functionalized microsphere exhibiting a submicrowatt laser threshold. This approach is confirmed by comparing the results with a step-by-step recording in quasi-static thermal conditions.
A low threshold nanocavity in a two-dimensional 12-fold photonic quasicrystal
NASA Astrophysics Data System (ADS)
Ren, Jie; Sun, XiaoHong; Wang, Shuai
2018-05-01
In this article, a low-threshold nanocavity is built and investigated in a two-dimensional 12-fold holographic photonic quasicrystal (PQC). The cavity is formed by using the method of multi-beam common-path interference. By finely adjusting the structure parameters of the cavity, the Q factor and the mode volume are optimized, which are the two keys to a low threshold on the basis of the Purcell effect. Finally, an optimal cavity is obtained with a Q value of 6023 and a mode volume of 1.24 × 10^-12 cm^3. On the other hand, by Fourier transformation of the electric field components in the cavity, the in-plane wave vectors are calculated and fitted to evaluate the cavity performance. The performance analysis of the cavity further proves the effectiveness of the optimization process. This has guiding significance for research on low-threshold nanolasers.
Subtil, Fabien; Rabilloud, Muriel
2015-07-01
The receiver operating characteristic (ROC) curves are often used to compare continuous diagnostic tests or determine the optimal threshold of a test; however, they do not consider the costs of misclassifications or the disease prevalence. The ROC graph was extended to allow for these aspects. Two new lines are added to the ROC graph: a sensitivity line and a specificity line. Their slopes depend on the disease prevalence and on the ratio of the net benefit of treating a diseased subject to the net cost of treating a nondiseased one. First, these lines help researchers determine the range of specificities within which comparison of partial areas under the curves is clinically relevant. Second, the ROC curve point farthest from the specificity line is shown to be the optimal threshold in terms of expected utility. This method was applied: (1) to determine the optimal threshold of the ratio of specific immunoglobulin G (IgG) to total IgG for the diagnosis of congenital toxoplasmosis and (2) to select, among two markers, the most accurate for the diagnosis of left ventricular hypertrophy in hypertensive subjects. The two additional lines transform the statistically valid ROC graph into a clinically relevant tool for test selection and threshold determination. Copyright © 2015 Elsevier Inc. All rights reserved.
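The geometric rule can be written compactly: with indifference lines of slope ((1 - prevalence)/prevalence) * (net cost/net benefit), the expected-utility-optimal threshold maximizes TPR - slope*FPR, i.e. it is the ROC point farthest from the specificity line. A hedged sketch (function and argument names are our own):

```python
import numpy as np

def optimal_roc_threshold(fpr, tpr, thresholds, prevalence, benefit, cost):
    """Pick the ROC point maximizing expected utility, where benefit is the net
    benefit of treating a diseased subject and cost the net cost of treating a
    nondiseased one."""
    slope = (1.0 - prevalence) / prevalence * (cost / benefit)
    utility = np.asarray(tpr) - slope * np.asarray(fpr)  # distance to the line, up to scale
    best = int(np.argmax(utility))
    return thresholds[best], fpr[best], tpr[best]
```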
NASA Astrophysics Data System (ADS)
Wang, Shengling; Cui, Yong; Koodli, Rajeev; Hou, Yibin; Huang, Zhangqin
Due to the dynamics of topology and resources, Call Admission Control (CAC) plays a significant role in increasing the resource utilization ratio and guaranteeing users' QoS requirements in wireless/mobile networks. In this paper, a dynamic multi-threshold CAC scheme is proposed to serve multi-class service in a wireless/mobile network. The thresholds are renewed at the beginning of each time interval to react to the changing mobility rate and network load. To find suitable thresholds, a reward-penalty model is designed, which provides different priorities between different service classes and call types through different reward/penalty policies according to network load and average call arrival rate. To speed up the running time of CAC, an Optimized Genetic Algorithm (OGA) is presented, whose components (encoding, population initialization, fitness function, mutation, etc.) are all optimized in terms of the traits of the CAC problem. The simulation demonstrates that the proposed CAC scheme outperforms similar schemes, which means the optimization is realized. Finally, the simulation shows the efficiency of OGA.
Chen, Sam Li-Sheng; Hsu, Chen-Yang; Yen, Amy Ming-Fang; Young, Graeme P; Chiu, Sherry Yueh-Hsia; Fann, Jean Ching-Yuan; Lee, Yi-Chia; Chiu, Han-Mo; Chiou, Shu-Ti; Chen, Hsiu-Hsi
2018-06-01
Background: Despite age and sex differences in fecal hemoglobin (f-Hb) concentrations, most fecal immunochemical test (FIT) screening programs use population-average cut-points for test positivity. The impact of age/sex-specific thresholds on FIT accuracy and colonoscopy demand for colorectal cancer screening is unknown. Methods: Using data from 723,113 participants enrolled in a Taiwanese population-based colorectal cancer screening with single FIT between 2004 and 2009, sensitivity and specificity were estimated for various f-Hb thresholds for test positivity. This included estimates based on a "universal" threshold, a receiver-operating-characteristic curve-derived threshold, targeted sensitivity, targeted false-positive rate, and a colonoscopy-capacity-adjusted method integrating colonoscopy workload with and without age/sex adjustments. Results: Optimal age/sex-specific thresholds were found to be equal to or lower than the universal 20 μg Hb/g threshold. For older males, a higher threshold (24 μg Hb/g) was identified using a 5% false-positive rate. Importantly, a nonlinear relationship was observed between sensitivity and colonoscopy workload, with workload rising disproportionately to sensitivity at 16 μg Hb/g. At this "colonoscopy-capacity-adjusted" threshold, the test positivity (colonoscopy workload) was 4.67% and sensitivity was 79.5%, compared with a lower 4.0% workload and a lower 78.7% sensitivity using 20 μg Hb/g. When constrained on capacity, age/sex-adjusted estimates were generally lower. However, optimizing age/sex-adjusted thresholds increased colonoscopy demand across models by 17% or greater compared with a universal threshold. Conclusions: Age/sex-specific thresholds improve FIT accuracy with modest increases in colonoscopy demand. Impact: Colonoscopy-capacity-adjusted and age/sex-specific f-Hb thresholds may be useful in optimizing individual screening programs based on detection accuracy, population characteristics, and clinical capacity. Cancer Epidemiol Biomarkers Prev; 27(6); 704-9. ©2018 American Association for Cancer Research.
20 CFR 404.1641 - Standards of performance.
Code of Federal Regulations, 2010 CFR
2010-04-01
.... (a) General. The performance standards include both a target level of performance and a threshold level of performance for the State agency. The target level represents a level of performance that we and the States will work to attain in the future. The threshold level is the minimum acceptable level...
20 CFR 416.1041 - Standards of performance.
Code of Federal Regulations, 2010 CFR
2010-04-01
... performance. (a) General. The performance standards include both a target level of performance and a threshold level of performance for the State agency. The target level represents a level of performance that we and the States will work to attain in the future. The threshold level is the minimum acceptable level...
Patel, Bhavik N; Farjat, Alfredo; Schabel, Christoph; Duvnjak, Petar; Mileto, Achille; Ramirez-Giraldo, Juan Carlos; Marin, Daniele
2018-05-01
The purpose of this study was to determine in vitro and in vivo the optimal threshold for renal lesion vascularity at low-energy (40-60 keV) virtual monoenergetic imaging. A rod simulating unenhanced renal parenchymal attenuation (35 HU) was fitted with a syringe containing water. Three iodinated solutions (0.38, 0.57, and 0.76 mg I/mL) were inserted into another rod that simulated enhanced renal parenchyma (180 HU). Rods were inserted into cylindric phantoms of three different body sizes and scanned with single- and dual-energy MDCT. In addition, 102 patients (32 men, 70 women; mean age, 66.8 ± 12.9 [SD] years) with 112 renal lesions (67 nonvascular, 45 vascular) measuring 1.1-8.9 cm underwent single-energy unenhanced and contrast-enhanced dual-energy CT. Optimal threshold attenuation values that differentiated vascular from nonvascular lesions at 40-60 keV were determined. Mean optimal threshold values were 30.2 ± 3.6 (standard error), 20.9 ± 1.3, and 16.1 ± 1.0 HU in the phantom, and 35.9 ± 3.6, 25.4 ± 1.8, and 17.8 ± 1.8 HU in the patients at 40, 50, and 60 keV. Sensitivity and specificity for the thresholds did not change significantly between low-energy and 70-keV virtual monoenergetic imaging (sensitivity, 87-98%; specificity, 90-91%). The AUC from 40 to 70 keV was 0.96 (95% CI, 0.93-0.99) to 0.98 (95% CI, 0.95-1.00). Low-energy virtual monoenergetic imaging at energy-specific optimized attenuation thresholds can be used for reliable characterization of renal lesions.
Hard decoding algorithm for optimizing thresholds under general Markovian noise
NASA Astrophysics Data System (ADS)
Chamberland, Christopher; Wallman, Joel; Beale, Stefanie; Laflamme, Raymond
2017-04-01
Quantum error correction is instrumental in protecting quantum systems from noise in quantum computing and communication settings. Pauli channels can be efficiently simulated and threshold values for Pauli error rates under a variety of error-correcting codes have been obtained. However, realistic quantum systems can undergo noise processes that differ significantly from Pauli noise. In this paper, we present an efficient hard decoding algorithm for optimizing thresholds and lowering failure rates of an error-correcting code under general completely positive and trace-preserving (i.e., Markovian) noise. We use our hard decoding algorithm to study the performance of several error-correcting codes under various non-Pauli noise models by computing threshold values and failure rates for these codes. We compare the performance of our hard decoding algorithm to decoders optimized for depolarizing noise and show improvements in thresholds and reductions in failure rates by several orders of magnitude. Our hard decoding algorithm can also be adapted to take advantage of a code's non-Pauli transversal gates to further suppress noise. For example, we show that using the transversal gates of the 5-qubit code allows arbitrary rotations around certain axes to be perfectly corrected. Furthermore, we show that Pauli twirling can increase or decrease the threshold depending upon the code properties. Lastly, we show that even if the physical noise model differs slightly from the hypothesized noise model used to determine an optimized decoder, failure rates can still be reduced by applying our hard decoding algorithm.
Influence of sensor ingestion timing on consistency of temperature measures.
Goodman, Daniel A; Kenefick, Robert W; Cadarette, Bruce S; Cheuvront, Samuel N
2009-03-01
The validity and the reliability of using intestinal temperature (Tint) via ingestible temperature sensors (ITS) to measure core body temperature have been demonstrated. However, the effect of elapsed time between ITS ingestion and Tint measurement has not been thoroughly studied. Eight volunteers (six men and two women) swallowed ITS 5 h (ITS-5) and 29 h (ITS-29) before 4 h of varying intensity activity. Tint was measured simultaneously from both ITS, and Tint differences between the ITS-5 and the ITS-29 over the 4 h of activity were plotted and compared relative to a meaningful threshold of acceptance (+/-0.25 degrees C). The percentage of time in which the differences between paired ITS (ITS-5 vs ITS-29) were greater than or less than the threshold of acceptance was calculated. Tint values showed no systematic bias, were normally distributed, and ranged from 36.94 degrees C to 39.24 degrees C. The maximum Tint difference between paired ITS was 0.83 degrees C with a minimum difference of 0.00 degrees C. The typical magnitude of the differences (SE of the estimate) was 0.24 degrees C, and these differences were uniform across the entire range of observed temperatures. Paired Tint measures fell outside of the threshold of acceptance 43.8% of the time during the 4 h of activity. The differences between ITS-5 and ITS-29 were larger than the threshold of acceptance during a substantial portion of the observed 4-h activity period. Ingesting an ITS more than 5 h before activity will not completely eliminate confounding factors but may improve accuracy and consistency of core body temperature.
Optimal estimation of recurrence structures from time series
NASA Astrophysics Data System (ADS)
beim Graben, Peter; Sellers, Kristin K.; Fröhlich, Flavio; Hutt, Axel
2016-05-01
Recurrent temporal dynamics is a phenomenon observed frequently in high-dimensional complex systems and its detection is a challenging task. Recurrence quantification analysis utilizing recurrence plots may extract such dynamics; however, it still encounters an unsolved, pertinent problem: the optimal selection of distance thresholds for estimating the recurrence structure of dynamical systems. The present work proposes a stochastic Markov model for the recurrent dynamics that allows for the analytical derivation of a criterion for the optimal distance threshold. The goodness of fit is assessed by a utility function which assumes a local maximum for the threshold reflecting the optimal estimate of the system's recurrence structure. We validate our approach by means of the nonlinear Lorenz system and its linearized stochastic surrogates. The final application to neurophysiological time series obtained from anesthetized animals illustrates the method and reveals novel dynamic features of the underlying system. We propose the number of optimal recurrence domains as a statistic for classifying an animal's state of consciousness.
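A minimal sketch of threshold selection for recurrence plots: build the recurrence matrix for each candidate distance threshold and keep the one maximizing a utility function, here left as a user-supplied stand-in for the Markov-model criterion derived in the paper.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def recurrence_matrix(x, eps):
    """Recurrence plot: R_ij = 1 when the states at times i and j are closer than eps."""
    states = np.asarray(x, dtype=float).reshape(len(x), -1)
    d = squareform(pdist(states))
    return (d <= eps).astype(int)

def best_threshold(x, candidates, utility):
    """Scan candidate distance thresholds and keep the one maximizing the
    supplied utility of the resulting recurrence matrix."""
    scores = [utility(recurrence_matrix(x, eps)) for eps in candidates]
    return candidates[int(np.argmax(scores))]
```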
Threshold effect under nonlinear limitation of the intensity of high-power light
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tereshchenko, S A; Podgaetskii, V M; Gerasimenko, A Yu
2015-04-30
A model is proposed to describe the properties of limiters of high-power laser radiation, which takes into account the threshold character of nonlinear interaction of radiation with the working medium of the limiter. The generally accepted non-threshold model is a particular case of the threshold model if the threshold radiation intensity is zero. Experimental z-scan data are used to determine the nonlinear optical characteristics of media with carbon nanotubes, polymethine and pyran dyes, zinc selenide, porphyrin-graphene and fullerene-graphene. A threshold effect of nonlinear interaction between laser radiation and some of the investigated working media of limiters is revealed. It is shown that the threshold model more adequately describes experimental z-scan data. (nonlinear optical phenomena)
NASA Astrophysics Data System (ADS)
Tankam, Israel; Tchinda Mouofo, Plaire; Mendy, Abdoulaye; Lam, Mountaga; Tewa, Jean Jules; Bowong, Samuel
2015-06-01
We investigate the effects of time delay and piecewise-linear threshold policy harvesting in a delayed predator-prey model. This is the first time that a Holling type III functional response and the present threshold policy harvesting have been combined with a time delay. The trajectories of our delayed system are bounded; the stability of each equilibrium is analyzed with and without delay; there are local bifurcations such as saddle-node and Hopf bifurcations; optimal harvesting is also investigated. Numerical simulations are provided in order to illustrate each result.
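A rough numerical sketch of such a model, combining a Holling type III response, a delayed prey term in the predator equation and piecewise-linear threshold harvesting. The specific equations, parameter values and the simple Euler-with-history integrator are illustrative assumptions, not the authors' model.

```python
def harvest(x, t1, t2, hmax):
    """Piecewise-linear threshold harvesting: no harvest below t1, full rate hmax
    above t2, linear ramp in between."""
    if x < t1:
        return 0.0
    if x > t2:
        return hmax
    return hmax * (x - t1) / (t2 - t1)

def simulate(r=1.0, k=10.0, a=1.0, b=4.0, c=0.5, d=0.3, tau=1.0,
             t1=2.0, t2=4.0, hmax=0.5, dt=0.01, steps=20000):
    """Euler integration of a delayed predator-prey model with a Holling type III
    response a*x^2/(b + x^2); the predator growth term uses the prey level tau
    time units in the past (all parameter values are illustrative)."""
    lag = int(tau / dt)
    x_hist = [5.0] * (lag + 1)          # constant prey history on [-tau, 0]
    x, y = 5.0, 1.0
    for _ in range(steps):
        x_del = x_hist[-lag - 1]        # delayed prey level x(t - tau)
        resp = a * x**2 / (b + x**2)
        resp_del = a * x_del**2 / (b + x_del**2)
        dx = r * x * (1 - x / k) - resp * y - harvest(x, t1, t2, hmax)
        dy = c * resp_del * y - d * y
        x, y = max(x + dt * dx, 0.0), max(y + dt * dy, 0.0)
        x_hist.append(x)
    return x, y
```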
CHOW PARAMETERS IN THRESHOLD LOGIC,
respect to threshold functions, they provide the optimal test-synthesis method for completely specified 7-argument (or less) functions, reflect the...signs and relative magnitudes of realizing weights and threshold , and can be used themselves as approximating weights. Results are reproved in a
Data Assimilation Experiments using Quality Controlled AIRS Version 5 Temperature Soundings
NASA Technical Reports Server (NTRS)
Susskind, Joel
2008-01-01
The AIRS Science Team Version 5 retrieval algorithm has been finalized and is now operational at the Goddard DAAC in the processing (and reprocessing) of all AIRS data. Version 5 contains accurate case-by-case error estimates for most derived products, which are also used for quality control. We have conducted forecast impact experiments assimilating AIRS quality controlled temperature profiles using the NASA GEOS-5 data assimilation system, consisting of the NCEP GSI analysis coupled with the NASA FVGCM. Assimilation of quality controlled temperature profiles resulted in significantly improved forecast skill in both the Northern Hemisphere and Southern Hemisphere Extra-Tropics, compared to that obtained from analyses obtained when all data used operationally by NCEP except for AIRS data is assimilated. Experiments using different Quality Control thresholds for assimilation of AIRS temperature retrievals showed that a medium quality control threshold performed better than a tighter threshold, which provided better overall sounding accuracy, or a looser threshold, which provided better spatial coverage of accepted soundings. We are conducting more experiments to further optimize this balance of spatial coverage and sounding accuracy from the data assimilation perspective. In all cases, temperature soundings were assimilated well below cloud level in partially cloudy cases. The positive impact of assimilating AIRS derived atmospheric temperatures all but vanished when only AIRS stratospheric temperatures were assimilated. Forecast skill resulting from assimilation of AIRS radiances uncontaminated by clouds, instead of AIRS temperature soundings, was only slightly better than that resulting from assimilation of only stratospheric AIRS temperatures. This reduction in forecast skill is most likely the result of significant loss of tropospheric information when only AIRS radiances unaffected by clouds are used in the data assimilation process.
Tay, Timothy Kwang Yong; Thike, Aye Aye; Pathmanathan, Nirmala; Jara-Lazaro, Ana Richelia; Iqbal, Jabed; Sng, Adeline Shi Hui; Ye, Heng Seow; Lim, Jeffrey Chun Tatt; Koh, Valerie Cui Yun; Tan, Jane Sie Yong; Yeong, Joe Poh Sheng; Chow, Zi Long; Li, Hui Hua; Cheng, Chee Leong; Tan, Puay Hoon
2018-01-01
Background Ki67 positivity in invasive breast cancers has an inverse correlation with survival outcomes and serves as an immunohistochemical surrogate for molecular subtyping of breast cancer, particularly ER positive breast cancer. The optimal threshold of Ki67 in both settings, however, remains elusive. We use computer assisted image analysis (CAIA) to determine the optimal threshold for Ki67 in predicting survival outcomes and differentiating luminal B from luminal A breast cancers. Methods Quantitative scoring of Ki67 on tissue microarray (TMA) sections of 440 invasive breast cancers was performed using Aperio ePathology ImmunoHistochemistry Nuclear Image Analysis algorithm, with TMA slides digitally scanned via Aperio ScanScope XT System. Results On multivariate analysis, tumours with Ki67 ≥14% had an increased likelihood of recurrence (HR 1.941, p=0.021) and shorter overall survival (HR 2.201, p=0.016). Similar findings were observed in the subset of 343 ER positive breast cancers (HR 2.409, p=0.012 and HR 2.787, p=0.012 respectively). The value of Ki67 associated with ER+HER2-PR<20% tumours (Luminal B subtype) was found to be <17%. Conclusion Using CAIA, we found optimal thresholds for Ki67 that predict a poorer prognosis and an association with the Luminal B subtype of breast cancer. Further investigation and validation of these thresholds are recommended. PMID:29545924
Martin, J.; Runge, M.C.; Nichols, J.D.; Lubow, B.C.; Kendall, W.L.
2009-01-01
Thresholds and their relevance to conservation have become a major topic of discussion in the ecological literature. Unfortunately, in many cases the lack of a clear conceptual framework for thinking about thresholds may have led to confusion in attempts to apply the concept of thresholds to conservation decisions. Here, we advocate a framework for thinking about thresholds in terms of a structured decision making process. The purpose of this framework is to promote a logical and transparent process for making informed decisions for conservation. Specification of such a framework leads naturally to consideration of definitions and roles of different kinds of thresholds in the process. We distinguish among three categories of thresholds. Ecological thresholds are values of system state variables at which small changes bring about substantial changes in system dynamics. Utility thresholds are components of management objectives (determined by human values) and are values of state or performance variables at which small changes yield substantial changes in the value of the management outcome. Decision thresholds are values of system state variables at which small changes prompt changes in management actions in order to reach specified management objectives. The approach that we present focuses directly on the objectives of management, with an aim to providing decisions that are optimal with respect to those objectives. This approach clearly distinguishes the components of the decision process that are inherently subjective (management objectives, potential management actions) from those that are more objective (system models, estimates of system state). Optimization based on these components then leads to decision matrices specifying optimal actions to be taken at various values of system state variables. Values of state variables separating different actions in such matrices are viewed as decision thresholds. Utility thresholds are included in the objectives component, and ecological thresholds may be embedded in models projecting consequences of management actions. Decision thresholds are determined by the above-listed components of a structured decision process. These components may themselves vary over time, inducing variation in the decision thresholds inherited from them. These dynamic decision thresholds can then be determined using adaptive management. We provide numerical examples (that are based on patch occupancy models) of structured decision processes that include all three kinds of thresholds. © 2009 by the Ecological Society of America.
Process Control for Precipitation Prevention in Space Water Recovery Systems
NASA Technical Reports Server (NTRS)
Sargusingh, Miriam; Callahan, Michael R.; Muirhead, Dean
2015-01-01
The ability to recover and purify water through physiochemical processes is crucial for realizing long-term human space missions, including both planetary habitation and space travel. Because of their robust nature, rotary distillation systems have been actively pursued by NASA as one of the technologies for water recovery from wastewater primarily comprised of human urine. A specific area of interest is the prevention of the formation of solids that could clog fluid lines and damage rotating equipment. To mitigate the formation of solids, operational constraints are in place that limit water recovery such that the concentration of key precipitating ions in the wastewater brine remains below the theoretical threshold. This control is effected by limiting the amount of water recovered such that the risk of reaching the precipitation threshold is within acceptable limits. The water recovery limit is based on an empirically derived worst-case wastewater composition. During the batch process, water recovery is estimated by monitoring the throughput of the system. NASA Johnson Space Center is working on means of enhancing the process controls to increase water recovery. Options include more precise prediction of the precipitation threshold. To this end, JSC is developing a means of more accurately measuring the constituents of the brine and/or wastewater. Another option would be to more accurately monitor the throughput of the system. In spring of 2015, testing will be performed to evaluate strategies for optimizing water recovery without increasing the risk of solids formation in the brine.
Optimization of a matched-filter receiver for frequency hopping code acquisition in jamming
NASA Astrophysics Data System (ADS)
Pawlowski, P. R.; Polydoros, A.
A matched-filter receiver for frequency hopping (FH) code acquisition is optimized when either partial-band tone jamming or partial-band Gaussian noise jamming is present. The receiver is matched to a segment of the FH code sequence, sums hard per-channel decisions to form a test, and uses multiple tests to verify acquisition. The length of the matched filter and the number of verification tests are fixed. Optimization is then choosing thresholds to maximize performance based upon the receiver's degree of knowledge about the jammer ('side-information'). Four levels of side-information are considered, ranging from none to complete. The latter level results in a constant-false-alarm-rate (CFAR) design. At each level, performance sensitivity to threshold choice is analyzed. Robust thresholds are chosen to maximize performance as the jammer varies its power distribution, resulting in simple design rules which aid threshold selection. Performance results, which show that optimum distributions for the jammer power over the total FH bandwidth exist, are presented.
NASA Astrophysics Data System (ADS)
Kefayati, Mahdi; Baldick, Ross
2015-07-01
Flexible loads, i.e. loads whose power trajectory is not bound to a specific one, constitute a sizable portion of current and future electric demand. This flexibility can be used to improve the performance of the grid, should the right incentives be in place. In this paper, we consider the optimal decision-making problem faced by a flexible load demanding a certain amount of energy over its availability period, subject to rate constraints. The load is also capable of providing ancillary services (AS) by decreasing or increasing its consumption in response to signals from the independent system operator (ISO). Under arbitrarily distributed and correlated Markovian energy and AS prices, we obtain the optimal policy for minimising expected total cost, which includes the cost of energy and benefits from AS provision, assuming no capacity reservation requirement for AS provision. We also prove that the optimal policy has a multi-threshold form and can be computed, stored and operated efficiently. We further study the effectiveness of our proposed optimal policy and its impact on the grid. We show that, while optimal simultaneous consumption and AS provision under real-time stochastic prices are achievable with acceptable computational burden, the impact of adopting such real-time pricing schemes on the network might not be as good as suggested by the majority of the existing literature. In fact, we show that such price-responsive loads are likely to induce peak-to-average ratios much higher than those observed in current distribution networks and to adversely affect the grid.
McCaffrey, Nikki; Agar, Meera; Harlum, Janeane; Karnon, Jonathon; Currow, David; Eckermann, Simon
2015-01-01
Introduction Comparing multiple, diverse outcomes with cost-effectiveness analysis (CEA) is important, yet challenging in areas like palliative care where domains are unamenable to integration with survival. Generic multi-attribute utility values exclude important domains and non-health outcomes, while partial analyses, where outcomes are considered separately and their joint relationship under uncertainty is ignored, lead to incorrect inference regarding preferred strategies. Objective The objective of this paper is to consider whether such decision making can be better informed with alternative presentation and summary measures, extending methods previously shown to have advantages in multiple strategy comparison. Methods Multiple outcomes CEA of a home-based palliative care model (PEACH) relative to usual care is undertaken in cost disutility (CDU) space and compared with analysis on the cost-effectiveness plane. Summary measures developed for comparing strategies across potential threshold values for multiple outcomes include: expected net loss (ENL) planes quantifying differences in expected net benefit; the ENL contour identifying preferred strategies minimising ENL and their expected value of perfect information; and cost-effectiveness acceptability planes showing the probability of strategies minimising ENL. Results Conventional analysis suggests PEACH is cost-effective when the threshold value per additional day at home exceeds $1,068, or is dominated by usual care when only the proportion of home deaths is considered. In contrast, neither alternative dominates in CDU space, where cost and outcomes are jointly considered and the optimal strategy depends on threshold values. For example, PEACH minimises ENL when the threshold value per additional day at home and the threshold value for dying at home are both $2,000, with a 51.6% chance of PEACH being cost-effective. Conclusion Comparison in CDU space and associated summary measures have distinct advantages for multiple domain comparisons, aiding transparent and robust joint comparison of costs and multiple effects under uncertainty across potential threshold values for effect, better informing net benefit assessment and related reimbursement and research decisions. PMID:25751629
Face verification with balanced thresholds.
Yan, Shuicheng; Xu, Dong; Tang, Xiaoou
2007-01-01
The process of face verification is guided by a pre-learned global threshold, which, however, is often inconsistent with class-specific optimal thresholds. It is, hence, beneficial to pursue a balance of the class-specific thresholds in the model-learning stage. In this paper, we present a new dimensionality reduction algorithm tailored to the verification task that ensures threshold balance. This is achieved by the following aspects. First, feasibility is guaranteed by employing an affine transformation matrix, instead of the conventional projection matrix, for dimensionality reduction, and, hence, we call the proposed algorithm threshold balanced transformation (TBT). Then, the affine transformation matrix, constrained as the product of an orthogonal matrix and a diagonal matrix, is optimized to improve the threshold balance and classification capability in an iterative manner. Unlike most algorithms for face verification which are directly transplanted from face identification literature, TBT is specifically designed for face verification and clarifies the intrinsic distinction between these two tasks. Experiments on three benchmark face databases demonstrate that TBT significantly outperforms the state-of-the-art subspace techniques for face verification.
Automatic treatment plan re-optimization for adaptive radiotherapy guided with the initial plan DVHs
NASA Astrophysics Data System (ADS)
Li, Nan; Zarepisheh, Masoud; Uribe-Sanchez, Andres; Moore, Kevin; Tian, Zhen; Zhen, Xin; Jiang Graves, Yan; Gautier, Quentin; Mell, Loren; Zhou, Linghong; Jia, Xun; Jiang, Steve
2013-12-01
Adaptive radiation therapy (ART) can reduce normal tissue toxicity and/or improve tumor control through treatment adaptations based on the current patient anatomy. Developing an efficient and effective re-planning algorithm is an important step toward the clinical realization of ART. For the re-planning process, a manual trial-and-error approach to fine-tuning planning parameters is time-consuming and is usually considered impractical, especially for online ART. It is desirable to automate this step to yield a plan of acceptable quality with minimal intervention. In ART, prior information in the original plan is available, such as the dose-volume histogram (DVH), which can be employed to facilitate the automatic re-planning process. The goal of this work is to develop an automatic re-planning algorithm to generate a plan with similar, or possibly better, DVH curves compared with the clinically delivered original plan. Specifically, our algorithm iterates the following two loops. The inner loop is the traditional fluence map optimization, in which we optimize a quadratic objective function penalizing the deviation of the dose received by each voxel from its prescribed or threshold dose with a set of fixed voxel weighting factors. In the outer loop, the voxel weighting factors in the objective function are adjusted according to the deviation of the current DVH curves from those in the original plan. The process is repeated until the DVH curves are acceptable or the maximum number of iterations is reached. The whole algorithm is implemented on GPU for high efficiency. The feasibility of our algorithm has been demonstrated with three head-and-neck cancer IMRT cases, each having an initial planning CT scan and another treatment CT scan acquired in the middle of the treatment course. Compared with the DVH curves in the original plan, the DVH curves in the resulting plan using our algorithm with 30 iterations are better for almost all structures. The re-optimization process takes about 30 s using our in-house optimization engine. This work was originally presented at the 54th AAPM annual meeting in Charlotte, NC, July 29-August 2, 2012.
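A toy 1D sketch of the two-loop structure under stated assumptions: the inner loop minimizes a weighted quadratic dose objective by projected gradient descent on a made-up dose-deposition matrix, and the outer loop raises the weights of voxels whose dose still deviates from reference values standing in for the original-plan DVH; dimensions, step-size rule and update factors are illustrative only:

import numpy as np

rng = np.random.default_rng(1)
n_vox, n_beamlets = 50, 20
D = rng.uniform(0, 1, size=(n_vox, n_beamlets))   # toy dose-deposition matrix
d_ref = rng.uniform(0.8, 1.2, size=n_vox)         # stands in for original-plan doses
w = np.ones(n_vox)                                 # voxel weighting factors
x = np.zeros(n_beamlets)                           # fluence map

for outer in range(10):
    # Inner loop: minimize sum_i w_i * (D x - d_ref)_i^2 by projected gradient descent.
    H_norm = 2 * np.linalg.norm(np.sqrt(w)[:, None] * D, 2) ** 2   # Lipschitz constant of the gradient
    for _ in range(200):
        grad = 2 * D.T @ (w * (D @ x - d_ref))
        x = np.maximum(x - grad / H_norm, 0.0)     # fluence must stay non-negative
    # Outer loop: increase weights where the current dose still deviates most.
    dev = np.abs(D @ x - d_ref)
    w *= 1.0 + dev / (dev.max() + 1e-12)

print(float(np.abs(D @ x - d_ref).mean()))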
Flagging threshold optimization for manual blood smear review in primary care laboratory.
Bihl, Pierre-Adrien
2018-04-01
Manual blood smear review is required when an anomaly detected by the automated hematologic analyzer triggers a flag. Our aim in this study is to optimize these flagging thresholds for manual slide review in order to limit the workload while preserving clinical care by introducing no additional false negatives. The flagging causes of 4,373 samples were investigated by manual slide review after the samples had been run on an ADVIA 2120i. A set of 6 user adjustments is proposed. By implementing all of our recommendations, the false-positive rate falls from 81.8% to 58.6%, while the PPV increases from 18.2% to 23.7%. Hence, use of such optimized thresholds enables us to maximize efficiency without altering clinical care, but each laboratory should establish its own criteria to take into consideration local distinctive features.
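For reference, the two reported screening metrics can be computed from a confusion table as below; the counts are hypothetical, and the false-positive rate here is taken among flagged samples, one plausible definition:

def review_metrics(tp: int, fp: int, fn: int, tn: int):
    """False-positive rate among flagged smears and positive predictive value."""
    flagged = tp + fp
    return {"false_positive_rate": fp / flagged,
            "ppv": tp / flagged,
            "false_negatives": fn}

# Hypothetical counts before and after re-tuning the flagging thresholds
print(review_metrics(tp=50, fp=150, fn=0, tn=800))
print(review_metrics(tp=50, fp=100, fn=0, tn=850))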
Adaptive time-sequential binary sensing for high dynamic range imaging
NASA Astrophysics Data System (ADS)
Hu, Chenhui; Lu, Yue M.
2012-06-01
We present a novel image sensor for high dynamic range imaging. The sensor performs an adaptive one-bit quantization at each pixel, with the pixel output switched from 0 to 1 only if the number of photons reaching that pixel is greater than or equal to a quantization threshold. With an oracle knowledge of the incident light intensity, one can pick an optimal threshold (for that light intensity), and the corresponding Fisher information contained in the output sequence follows closely that of an ideal unquantized sensor over a wide range of intensity values. This observation suggests the potential gains one may achieve by adaptively updating the quantization thresholds. As the main contribution of this work, we propose a time-sequential threshold-updating rule that asymptotically approaches the performance of the oracle scheme. With every threshold mapped to a number of ordered states, the dynamics of the proposed scheme can be modeled as a parametric Markov chain. We show that the frequencies of different thresholds converge to a steady-state distribution that is concentrated around the optimal choice. Moreover, numerical experiments show that the theoretical performance measures (Fisher information and Cramér-Rao bounds) can be achieved by a maximum likelihood estimator, which is guaranteed to find the globally optimal solution due to the concavity of the log-likelihood functions. Compared with conventional image sensors and the strategy that utilizes a constant single-photon threshold considered in previous work, the proposed scheme attains orders of magnitude improvement in terms of sensor dynamic range.
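A sketch of the estimation step for a fixed one-bit threshold, under the usual Poisson photon-count assumption: each binary output is Bernoulli with success probability P(Poisson(lambda) >= q), and lambda is recovered by maximizing the concave log-likelihood on a grid (the adaptive threshold-update rule itself is not reproduced here):

import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(0)
lam_true, q, n = 12.0, 10, 2000                    # intensity, one-bit threshold, pixels
bits = (rng.poisson(lam_true, size=n) >= q).astype(int)

lam_grid = np.linspace(0.5, 40.0, 800)
p = poisson.sf(q - 1, lam_grid)                    # P(N >= q) for each candidate lambda
p = np.clip(p, 1e-12, 1 - 1e-12)
loglik = bits.sum() * np.log(p) + (n - bits.sum()) * np.log(1 - p)
print(float(lam_grid[np.argmax(loglik)]))          # ML estimate, close to lam_true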
Maximizing algebraic connectivity in interconnected networks.
Shakeri, Heman; Albin, Nathan; Darabi Sahneh, Faryad; Poggi-Corradini, Pietro; Scoglio, Caterina
2016-03-01
Algebraic connectivity, the second eigenvalue of the Laplacian matrix, is a measure of node and link connectivity on networks. When studying interconnected networks it is useful to consider a multiplex model, where the component networks operate together with interlayer links among them. In order to have a well-connected multilayer structure, it is necessary to optimally design these interlayer links considering realistic constraints. In this work, we solve the problem of finding an optimal weight distribution for one-to-one interlayer links under budget constraint. We show that for the special multiplex configurations with identical layers, the uniform weight distribution is always optimal. On the other hand, when the two layers are arbitrary, increasing the budget reveals the existence of two different regimes. Up to a certain threshold budget, the second eigenvalue of the supra-Laplacian is simple, the optimal weight distribution is uniform, and the Fiedler vector is constant on each layer. Increasing the budget past the threshold, the optimal weight distribution can be nonuniform. The interesting consequence of this result is that there is no need to solve the optimization problem when the available budget is less than the threshold, which can be easily found analytically.
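A small numerical illustration of the setup, assuming two arbitrary Erdos-Renyi layers: build the supra-Laplacian of a two-layer multiplex with uniform one-to-one interlayer weights and read off its second-smallest eigenvalue (the algebraic connectivity):

import numpy as np
import networkx as nx

n = 30
L1 = nx.laplacian_matrix(nx.erdos_renyi_graph(n, 0.15, seed=1)).toarray().astype(float)
L2 = nx.laplacian_matrix(nx.erdos_renyi_graph(n, 0.15, seed=2)).toarray().astype(float)

def algebraic_connectivity(weight_per_link: float) -> float:
    W = weight_per_link * np.eye(n)                     # uniform one-to-one interlayer weights
    supra = np.block([[L1 + W, -W], [-W, L2 + W]])      # supra-Laplacian of the multiplex
    return float(np.sort(np.linalg.eigvalsh(supra))[1]) # second-smallest eigenvalue

budget = 15.0                                           # total interlayer weight to distribute
print(algebraic_connectivity(budget / n))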
Hybrid Artificial Root Foraging Optimizer Based Multilevel Threshold for Image Segmentation
Liu, Yang; Liu, Junfei
2016-01-01
This paper proposes a new plant-inspired optimization algorithm for multilevel threshold image segmentation, namely, the hybrid artificial root foraging optimizer (HARFO), which essentially mimics iterative root foraging behaviors. In this algorithm, new growth operators of branching, regrowing, and shrinkage are initially designed to optimize continuous space search by combining root-to-root communication and a coevolution mechanism. With the auxin-regulated scheme, the various root growth operators are guided systematically. With root-to-root communication, individuals exchange information in different efficient topologies, which essentially improves the exploration ability. With the coevolution mechanism, the hierarchical spatial population driven by the evolutionary pressure of multiple subpopulations is structured, which ensures that the diversity of the root population is well maintained. The comparative results on a suite of benchmarks show the superiority of the proposed algorithm. Finally, the proposed HARFO algorithm is applied to handle the complex image segmentation problem based on multilevel thresholding. Computational results of this approach on a set of test images show the outperformance of the proposed algorithm in terms of optimization accuracy and computation efficiency. PMID:27725826
Hybrid Artificial Root Foraging Optimizer Based Multilevel Threshold for Image Segmentation.
Liu, Yang; Liu, Junfei; Tian, Liwei; Ma, Lianbo
2016-01-01
This paper proposes a new plant-inspired optimization algorithm for multilevel threshold image segmentation, namely, the hybrid artificial root foraging optimizer (HARFO), which essentially mimics iterative root foraging behaviors. In this algorithm, new growth operators of branching, regrowing, and shrinkage are initially designed to optimize continuous space search by combining root-to-root communication and a coevolution mechanism. With the auxin-regulated scheme, the various root growth operators are guided systematically. With root-to-root communication, individuals exchange information in different efficient topologies, which essentially improves the exploration ability. With the coevolution mechanism, the hierarchical spatial population driven by the evolutionary pressure of multiple subpopulations is structured, which ensures that the diversity of the root population is well maintained. The comparative results on a suite of benchmarks show the superiority of the proposed algorithm. Finally, the proposed HARFO algorithm is applied to handle the complex image segmentation problem based on multilevel thresholding. Computational results of this approach on a set of test images show the outperformance of the proposed algorithm in terms of optimization accuracy and computation efficiency.
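As a point of reference for the objective such an optimizer searches over, the sketch below evaluates a between-class-variance (Otsu-style) multilevel criterion by brute force for two thresholds on a synthetic histogram; the exhaustive search merely stands in for the evolutionary HARFO search:

import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
img = np.concatenate([rng.normal(60, 10, 4000), rng.normal(130, 12, 4000),
                      rng.normal(200, 8, 2000)]).clip(0, 255).astype(np.uint8)
hist = np.bincount(img, minlength=256) / img.size
levels = np.arange(256)

def between_class_variance(thresholds):
    """Otsu-style criterion: weighted variance of class means about the global mean."""
    edges = [0] + [t + 1 for t in thresholds] + [256]
    mu_total = (hist * levels).sum()
    var = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        w = hist[lo:hi].sum()
        if w > 0:
            mu = (hist[lo:hi] * levels[lo:hi]).sum() / w
            var += w * (mu - mu_total) ** 2
    return var

# Brute force over all two-threshold pairs (slow but simple stand-in for HARFO)
best = max(combinations(range(1, 255), 2), key=between_class_variance)
print(best)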
Code of Federal Regulations, 2013 CFR
2013-10-01
... 48 Federal Acquisition Regulations System 6 2013-10-01 2013-10-01 false SBA acceptance under partnership agreements for acquisitions exceeding the simplified acquisition threshold. 2419.804-370 Section 2419.804-370 Federal Acquisition Regulations System DEPARTMENT OF HOUSING AND URBAN DEVELOPMENT SOCIOECONOMIC PROGRAMS SMALL BUSINESS PROGRAMS...
Code of Federal Regulations, 2014 CFR
2014-10-01
... 48 Federal Acquisition Regulations System 6 2014-10-01 2014-10-01 false SBA acceptance under partnership agreements for acquisitions exceeding the simplified acquisition threshold. 2419.804-370 Section 2419.804-370 Federal Acquisition Regulations System DEPARTMENT OF HOUSING AND URBAN DEVELOPMENT SOCIOECONOMIC PROGRAMS SMALL BUSINESS PROGRAMS...
AN EVALUATION OF HEURISTICS FOR THRESHOLD-FUNCTION TEST-SYNTHESIS,
Linear programming offers the most attractive procedure for testing and obtaining optimal threshold gate realizations for functions generated in... The design of the experiments may be of general interest to students of automatic problem solving; the results should be of interest in threshold logic and linear programming. (Author)
Ultra-low threshold polariton condensation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steger, Mark; Fluegel, Brian; Alberi, Kirstin
Here, we demonstrate the condensation of microcavity polaritons with a very sharp threshold occurring at a pump intensity two orders of magnitude lower than in previous demonstrations of condensation. The long cavity lifetime and the trapping and pumping geometries are crucial to the realization of this low threshold. Polariton condensation, or 'polariton lasing', has long been proposed as a promising source of coherent light at a lower threshold than traditional lasing, and these results indicate some considerations for optimizing designs for lower thresholds.
Ultra-low threshold polariton condensation
Steger, Mark; Fluegel, Brian; Alberi, Kirstin; ...
2017-03-13
Here, we demonstrate the condensation of microcavity polaritons with a very sharp threshold occurring at a pump intensity two orders of magnitude lower than in previous demonstrations of condensation. The long cavity lifetime and the trapping and pumping geometries are crucial to the realization of this low threshold. Polariton condensation, or 'polariton lasing', has long been proposed as a promising source of coherent light at a lower threshold than traditional lasing, and these results indicate some considerations for optimizing designs for lower thresholds.
Optimization of MLS receivers for multipath environments
NASA Technical Reports Server (NTRS)
Mcalpine, G. A.; Irwin, S. H.; NELSON; Roleyni, G.
1977-01-01
Optimal design studies of MLS angle-receivers and a theoretical design-study of MLS DME-receivers are reported. The angle-receiver results include an integration of the scan data processor and tracking filter components of the optimal receiver into a unified structure. An extensive simulation study comparing the performance of the optimal and threshold receivers in a wide variety of representative dynamical interference environments was made. The optimal receiver was generally superior. A simulation of the performance of the threshold and delay-and-compare receivers in various signal environments was performed. An analysis of combined errors due to lateral reflections from vertical structures with small differential path delays, specular ground reflections with negligible differential path delays, and thermal noise in the receivers is provided.
Optimal control strategy for a novel computer virus propagation model on scale-free networks
NASA Astrophysics Data System (ADS)
Zhang, Chunming; Huang, Haitao
2016-06-01
This paper aims to study the combined impact of system reinstallation and network topology on the spread of computer viruses over the Internet. Based on a scale-free network, this paper proposes a novel computer virus propagation model, the SLBOS model. A systematic analysis of this new model shows that the virus-free equilibrium is globally asymptotically stable when its spreading threshold is less than one; conversely, it is proved that the viral equilibrium is permanent if the spreading threshold is greater than one. Then, the impacts of different model parameters on the spreading threshold are analyzed. Next, an optimally controlled SLBOS epidemic model on complex networks is also studied. We prove that an optimal control exists for the control problem. Some numerical simulations are finally given to illustrate the main results.
Validity of the Talk Test for exercise prescription after myocardial revascularization.
Zanettini, Renzo; Centeleghe, Paola; Franzelli, Cristina; Mori, Ileana; Benna, Stefania; Penati, Chiara; Sorlini, Nadia
2013-04-01
For exercise prescription, rating of perceived exertion is the subjective tool most frequently used in addition to methods based on percentage of peak exercise variables. The aim of this study was to validate a widely used subjective method, the Talk Test (TT), for optimization of training intensity in patients with recent myocardial revascularization. Fifty patients with recent myocardial revascularization (17 by coronary artery bypass grafting and 33 by percutaneous coronary intervention) were enrolled in a cardiac rehabilitation programme. Each patient underwent three repetitions of the TT during three different exercise sessions to evaluate the within-patient and between-operators reliability in assessing the workload (WL) at TT thresholds. These parameters were then compared with the data of a final cardiopulmonary exercise test, and the WL range between the individual aerobic threshold (AeT) and anaerobic threshold (AnT) was considered as the optimal training zone. The within-patient and between-operators reliability in assessing TT thresholds was satisfactory. No significant differences were found between patients' and physiotherapists' evaluations of WL at different TT thresholds. WL at Last TT+ was between AeT and AnT in 88% of patients and slightly
NASA Astrophysics Data System (ADS)
Bellingeri, Michele; Agliari, Elena; Cassi, Davide
2015-10-01
The best strategy to immunize a complex network is usually evaluated in terms of the percolation threshold, i.e. the number of vaccine doses which make the largest connected cluster (LCC) vanish. The strategy inducing the minimum percolation threshold represents the optimal way to immunize the network. Here we show that the efficacy of the immunization strategies can change during the immunization process. This means that, if the number of doses is limited, the best strategy is not necessarily the one leading to the smallest percolation threshold. This outcome should warn about the adoption of global measures in order to evaluate the best immunization strategy.
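A small simulation in this spirit, with an arbitrary scale-free graph and a highest-degree-first strategy: remove one node per vaccine dose and track the largest connected cluster, so strategies can be compared at any fixed number of doses rather than only at the percolation threshold:

import networkx as nx

G = nx.barabasi_albert_graph(500, 3, seed=0)
order = sorted(G.nodes, key=lambda v: G.degree(v), reverse=True)  # highest-degree-first doses

H = G.copy()
lcc_sizes = []
for node in order[:100]:
    H.remove_node(node)                                           # one vaccine dose
    lcc_sizes.append(max(len(c) for c in nx.connected_components(H)))

print(lcc_sizes[9], lcc_sizes[49], lcc_sizes[99])                 # LCC after 10, 50, 100 doses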
2010-01-01
Background Herpes zoster (HZ) is a painful disease affecting a considerable part of the elderly. Programmatic HZ vaccination of elderly people may considerably reduce HZ morbidity and its related costs, but the extent of these effects is unknown. In this article, the potential effects and cost-effectiveness of programmatic HZ vaccination of elderly in the Netherlands have been assessed according to a framework that was developed to support evidence-based decision making regarding inclusion of new vaccines in the Dutch National Immunization Program. Methods An analytical framework was used combining a checklist, which structured relevant data on the vaccine, pathogen and disease, and a cost-effectiveness analysis. The cost-effectiveness analysis was performed from a societal perspective, using a Markov-cohort-model. Simultaneous vaccination with influenza was assumed. Results Due to the combination of waning immunity after vaccination and a reduced efficacy of vaccination at high ages, the most optimal cost-effectiveness ratio (€21716 per QALY) for HZ vaccination in the Netherlands was found for 70-year olds. This estimated ratio is just above the socially accepted threshold in the Netherlands of €20000 per QALY. If additional reduction of postherpetic neuralgia was included, the cost-effectiveness ratio improved (~€10000 per QALY) but uncertainty for this scenario is high. Conclusions Vaccination against HZ at the age of 70 years seems marginally cost-effective in the Netherlands. Due to limited vaccine efficacy a considerable part of the disease burden caused by HZ will remain, even with optimal acceptance of programmatic vaccination. PMID:20707884
van Lier, Alies; van Hoek, Albert Jan; Opstelten, Wim; Boot, Hein J; de Melker, Hester E
2010-08-13
Herpes zoster (HZ) is a painful disease affecting a considerable part of the elderly. Programmatic HZ vaccination of elderly people may considerably reduce HZ morbidity and its related costs, but the extent of these effects is unknown. In this article, the potential effects and cost-effectiveness of programmatic HZ vaccination of elderly in the Netherlands have been assessed according to a framework that was developed to support evidence-based decision making regarding inclusion of new vaccines in the Dutch National Immunization Program. An analytical framework was used combining a checklist, which structured relevant data on the vaccine, pathogen and disease, and a cost-effectiveness analysis. The cost-effectiveness analysis was performed from a societal perspective, using a Markov-cohort-model. Simultaneous vaccination with influenza was assumed. Due to the combination of waning immunity after vaccination and a reduced efficacy of vaccination at high ages, the most optimal cost-effectiveness ratio (21716 euro per QALY) for HZ vaccination in the Netherlands was found for 70-year olds. This estimated ratio is just above the socially accepted threshold in the Netherlands of 20000 euro per QALY. If additional reduction of postherpetic neuralgia was included, the cost-effectiveness ratio improved (approximately 10000 euro per QALY) but uncertainty for this scenario is high. Vaccination against HZ at the age of 70 years seems marginally cost-effective in the Netherlands. Due to limited vaccine efficacy a considerable part of the disease burden caused by HZ will remain, even with optimal acceptance of programmatic vaccination.
Optimal Energy Efficiency Fairness of Nodes in Wireless Powered Communication Networks.
Zhang, Jing; Zhou, Qingjie; Ng, Derrick Wing Kwan; Jo, Minho
2017-09-15
In wireless powered communication networks (WPCNs), it is essential to research energy efficiency fairness in order to evaluate the balance of nodes for receiving information and harvesting energy. In this paper, we propose an efficient iterative algorithm for optimal energy efficiency proportional fairness in WPCN. The main idea is to use stochastic geometry to derive the mean proportional fairness utility function with respect to user association probability and receive threshold. Subsequently, we prove that the relaxed proportional fairness utility function is concave in the user association probability and in the receive threshold, respectively. At the same time, a sub-optimal algorithm exploiting an alternating optimization approach is proposed. Through numerical simulations, we demonstrate that our sub-optimal algorithm can obtain a result close to optimal energy efficiency proportional fairness with a significant reduction in computational complexity.
Optimal Energy Efficiency Fairness of Nodes in Wireless Powered Communication Networks
Zhou, Qingjie; Ng, Derrick Wing Kwan; Jo, Minho
2017-01-01
In wireless powered communication networks (WPCNs), it is essential to research energy efficiency fairness in order to evaluate the balance of nodes for receiving information and harvesting energy. In this paper, we propose an efficient iterative algorithm for optimal energy efficiency proportional fairness in WPCN. The main idea is to use stochastic geometry to derive the mean proportional fairness utility function with respect to user association probability and receive threshold. Subsequently, we prove that the relaxed proportional fairness utility function is concave in the user association probability and in the receive threshold, respectively. At the same time, a sub-optimal algorithm exploiting an alternating optimization approach is proposed. Through numerical simulations, we demonstrate that our sub-optimal algorithm can obtain a result close to optimal energy efficiency proportional fairness with a significant reduction in computational complexity. PMID:28914818
Consumer perception of astringency in clear acidic whey protein beverages.
Childs, Jessica L; Drake, MaryAnne
2010-01-01
Acidic whey protein beverages are a growing component of the functional food and beverage market. These beverages are also astringent, but astringency is an expected and desirable attribute of many beverages (red wine, tea, coffee) and may not necessarily be a negative attribute of acidic whey protein beverages. The goal of this study was to define the consumer perception of astringency in clear acidic whey protein beverages. Six focus groups (n=49) were held to gain understanding of consumer knowledge of astringency. Consumers were presented with beverages and asked to map them based on astringent mouthfeel and liking. Orthonasal thresholds for whey protein isolate (WPI) in water and flavored model beverages were determined using a 7-series ascending forced choice method. Mouthfeel/basic taste thresholds were determined for WPI in water. Acceptance tests on model beverages were conducted using consumers (n=120) with and without wearing nose clips. Consumers in focus groups were able to identify astringency in beverages. Astringency intensity was not directly related to dislike. The orthonasal threshold for WPI in water was lower (P < 0.05) than the mouthfeel/basic taste threshold of WPI in water. Consumer acceptance of beverages containing WPI was lower (P < 0.05) when consumers were not wearing nose clips compared to acceptance scores of beverages when consumers were wearing nose clips. These results suggest that flavors contributed by WPI in acidic beverages are more objectionable than the astringent mouthfeel and that both flavor and astringency should be the focus of ongoing studies to improve the palatability of these products.
Assenova, Valentina A
2018-01-01
Complex innovations (ideas, practices, and technologies that hold uncertain benefits for potential adopters) often vary in their ability to diffuse in different communities over time. To explain why, I develop a model of innovation adoption in which agents engage in naïve (DeGroot) learning about the value of an innovation within their social networks. Using simulations on Bernoulli random graphs, I examine how adoption varies with network properties and with the distribution of initial opinions and adoption thresholds. The results show that: (i) low-density and high-asymmetry networks produce polarization in influence to adopt an innovation over time, (ii) increasing network density and asymmetry promote adoption under a variety of opinion and threshold distributions, and (iii) the optimal levels of density and asymmetry in networks depend on the distribution of thresholds: networks with high density (>0.25) and high asymmetry (>0.50) are optimal for maximizing diffusion when adoption thresholds are right-skewed (i.e., barriers to adoption are low), but networks with low density (<0.01) and low asymmetry (<0.25) are optimal when thresholds are left-skewed. I draw on data from a diffusion field experiment to predict adoption over time and compare the results to observed outcomes.
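A compact sketch of the adoption mechanics described, with illustrative (not the paper's) parameters: agents update opinions by naive (DeGroot) averaging over a Bernoulli random graph and adopt once their opinion clears an individual threshold:

import numpy as np

rng = np.random.default_rng(0)
n, density = 200, 0.05
A = (rng.random((n, n)) < density).astype(float)
np.fill_diagonal(A, 1.0)                          # agents also weight their own opinion
W = A / A.sum(axis=1, keepdims=True)              # row-stochastic DeGroot weight matrix

opinions = rng.uniform(0, 1, n)                   # initial opinions about the innovation
thresholds = rng.beta(2, 5, n)                    # right-skewed adoption thresholds

adopted = np.zeros(n, dtype=bool)
for _ in range(50):
    opinions = W @ opinions                       # naive learning step
    adopted |= opinions >= thresholds
print(int(adopted.sum()), "of", n, "agents adopted")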
The absolute threshold of cone vision
Koeing, Darran; Hofer, Heidi
2013-01-01
We report measurements of the absolute threshold of cone vision, which has been previously underestimated due to sub-optimal conditions or overly strict subjective response criteria. We avoided these limitations by using optimized stimuli and experimental conditions while having subjects respond within a rating scale framework. Small (1′ fwhm), brief (34 msec), monochromatic (550 nm) stimuli were foveally presented at multiple intensities in dark-adapted retina for 5 subjects. For comparison, 4 subjects underwent similar testing with rod-optimized stimuli. Cone absolute threshold, that is, the minimum light energy for which subjects were just able to detect a visual stimulus with any response criterion, was 203 ± 38 photons at the cornea, ∼0.47 log units lower than previously reported. Two-alternative forced-choice measurements in a subset of subjects yielded consistent results. Cone thresholds were less responsive to criterion changes than rod thresholds, suggesting a limit to the stimulus information recoverable from the cone mosaic in addition to the limit imposed by Poisson noise. Results were consistent with expectations for detection in the face of stimulus uncertainty. We discuss implications of these findings for modeling the first stages of human cone vision and interpreting psychophysical data acquired with adaptive optics at the spatial scale of the receptor mosaic. PMID:21270115
Zhang, Shuo; Zhang, Chengning; Han, Guangwei; Wang, Qinghui
2014-01-01
A dual-motor coupling-propulsion electric bus (DMCPEB) is modeled, and its optimal control strategy is studied in this paper. The necessary dynamic features of energy loss for the subsystems are modeled. A dynamic programming (DP) technique is applied to find the optimal control strategy, including the upshift threshold, downshift threshold, and power split ratio between the main motor and auxiliary motor. Improved control rules are extracted from the DP-based control solution, forming near-optimal control strategies. Simulation results demonstrate that a significant improvement in reducing the energy loss of the running dual-motor coupling-propulsion system (DMCPS) is realized without increasing the frequency of mode switching. PMID:25540814
Zhang, Shuo; Zhang, Chengning; Han, Guangwei; Wang, Qinghui
2014-01-01
A dual-motor coupling-propulsion electric bus (DMCPEB) is modeled, and its optimal control strategy is studied in this paper. The necessary dynamic features of energy loss for the subsystems are modeled. A dynamic programming (DP) technique is applied to find the optimal control strategy, including the upshift threshold, downshift threshold, and power split ratio between the main motor and auxiliary motor. Improved control rules are extracted from the DP-based control solution, forming near-optimal control strategies. Simulation results demonstrate that a significant improvement in reducing the energy loss of the running dual-motor coupling-propulsion system (DMCPS) is realized without increasing the frequency of mode switching.
Relationship between consumer ranking of lamb colour and objective measures of colour.
Khliji, S; van de Ven, R; Lamb, T A; Lanza, M; Hopkins, D L
2010-06-01
Given the lack of data relating consumer acceptance of lamb colour to instrumental measures, a study was undertaken to establish the acceptability thresholds for fresh and displayed meat. Consumers (n=541) were asked to score 20 samples of lamb loin (m. longissimus thoracis et lumborum; LL) on an ordinal scale of 1 (very acceptable) to 5 (very unacceptable). A sample was considered acceptable by a consumer if it scored three or less. Ten samples were used for testing consumer response to fresh colour and 10 to test consumer response to colour during display of up to 4 days. The colour of fresh meat was measured using a Minolta chromameter with a closed cone, and a Hunter Lab Miniscan was used for measuring meat on display. For fresh meat, when the a* (redness) and L* (lightness) values are equal to or exceed 9.5 and 34, respectively, on average consumers will consider the meat colour acceptable. However, a* and L* values must be much higher (14.5 and 44, respectively) to have 95% confidence that a randomly selected consumer will consider a sample acceptable. For aged meat, when the wavelength ratio (630/580 nm) and the a* values are equal to or greater than 3.3 and 14.8, respectively, on average consumers will consider the meat acceptable. These thresholds need to be increased to 6.8 for the ratio (630/580 nm) and 21.7 for a* to be 95% confident that a randomly selected consumer will consider a sample acceptable.
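The reported average-consumer cut-offs for fresh meat translate directly into a simple acceptability check:

def fresh_lamb_colour_acceptable(a_star: float, L_star: float,
                                 a_min: float = 9.5, L_min: float = 34.0) -> bool:
    """Average-consumer acceptability rule for fresh lamb colour from the study:
    both redness (a*) and lightness (L*) must reach their thresholds."""
    return a_star >= a_min and L_star >= L_min

print(fresh_lamb_colour_acceptable(10.2, 36.0))   # True
print(fresh_lamb_colour_acceptable(8.9, 40.0))    # False: not red enough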
NASA Technical Reports Server (NTRS)
Baxley, Brian T.; Murdoch, Jennifer L.; Swieringa, Kurt A.; Barmore, Bryan E.; Capron, William R.; Hubbs, Clay E.; Shay, Richard F.; Abbott, Terence S.
2013-01-01
The predicted increase in the number of commercial aircraft operations creates a need for improved operational efficiency. Two areas believed to offer increases in aircraft efficiency are optimized profile descents and dependent parallel runway operations. Using Flight deck Interval Management (FIM) software and procedures during these operations, flight crews can achieve, by the runway threshold, an interval assigned by air traffic control (ATC) behind the preceding aircraft that maximizes runway throughput while minimizing additional fuel consumption and pilot workload. This document describes an experiment where 24 pilots flew arrivals into the Dallas Fort-Worth terminal environment using one of three simulators at NASA's Langley Research Center. Results indicate that pilots delivered their aircraft to the runway threshold within +/- 3.5 seconds of their assigned time interval, and reported low workload levels. In general, pilots found the FIM concept, procedures, speeds, and interface acceptable. Analysis of the time error and FIM speed changes as a function of arrival stream position suggests the spacing algorithm generates stable behavior in the presence of continuous (wind) or impulse (offset) error. Concerns reported included multiple speed changes within a short time period, and an airspeed increase followed shortly by an airspeed decrease.
Flethøj, Mette; Kanters, Jørgen K; Pedersen, Philip J; Haugaard, Maria M; Carstensen, Helena; Olsen, Lisbeth H; Buhl, Rikke
2016-11-28
Although premature beats are a matter of concern in horses, the interpretation of equine ECG recordings is complicated by a lack of standardized analysis criteria and a limited knowledge of the normal beat-to-beat variation of equine cardiac rhythm. The purpose of this study was to determine the appropriate threshold levels of maximum acceptable deviation of RR intervals in equine ECG analysis, and to evaluate a novel two-step timing algorithm by quantifying the frequency of arrhythmias in a cohort of healthy adult endurance horses. Beat-to-beat variation differed considerably with heart rate (HR), and an adaptable model consisting of three different HR ranges with separate threshold levels of maximum acceptable RR deviation was consequently defined. For resting HRs <60 beats/min (bpm) the threshold level of RR deviation was set at 20%, for HRs in the intermediate range between 60 and 100 bpm the threshold was 10%, and for exercising HRs >100 bpm, the threshold level was 4%. Supraventricular premature beats represented the most prevalent arrhythmia category with varying frequencies in seven horses at rest (median 7, range 2-86) and six horses during exercise (median 2, range 1-24). Beat-to-beat variation of equine cardiac rhythm varies according to HR, and threshold levels in equine ECG analysis should be adjusted accordingly. Standardization of the analysis criteria will enable comparisons of studies and follow-up examinations of patients. A small number of supraventricular premature beats appears to be a normal finding in endurance horses. Further studies are required to validate the findings and determine the clinical significance of premature beats in horses.
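The heart-rate-adaptive thresholds translate into a simple screening rule; the sketch below flags RR intervals whose deviation from the preceding interval exceeds the 20%/10%/4% levels reported above (the interval values used are made up):

def rr_deviation_threshold(heart_rate_bpm: float) -> float:
    """Maximum acceptable RR deviation for a given heart rate (levels from the study)."""
    if heart_rate_bpm < 60:
        return 0.20
    if heart_rate_bpm <= 100:
        return 0.10
    return 0.04

def flag_beats(rr_intervals_s):
    """Indices of beats whose RR interval deviates too much from the previous one."""
    flagged = []
    for i in range(1, len(rr_intervals_s)):
        hr = 60.0 / rr_intervals_s[i]
        deviation = abs(rr_intervals_s[i] - rr_intervals_s[i - 1]) / rr_intervals_s[i - 1]
        if deviation > rr_deviation_threshold(hr):
            flagged.append(i)
    return flagged

print(flag_beats([1.5, 1.52, 1.1, 1.48, 1.5]))   # the premature beat and the return beat are flagged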
Bettembourg, Charles; Diot, Christian; Dameron, Olivier
2015-01-01
Background The analysis of gene annotations referencing back to Gene Ontology plays an important role in the interpretation of high-throughput experiments results. This analysis typically involves semantic similarity and particularity measures that quantify the importance of the Gene Ontology annotations. However, there is currently no sound method supporting the interpretation of the similarity and particularity values in order to determine whether two genes are similar or whether one gene has some significant particular function. Interpretation is frequently based either on an implicit threshold, or an arbitrary one (typically 0.5). Here we investigate a method for determining thresholds supporting the interpretation of the results of a semantic comparison. Results We propose a method for determining the optimal similarity threshold by minimizing the proportions of false-positive and false-negative similarity matches. We compared the distributions of the similarity values of pairs of similar genes and pairs of non-similar genes. These comparisons were performed separately for all three branches of the Gene Ontology. In all situations, we found overlap between the similar and the non-similar distributions, indicating that some similar genes had a similarity value lower than the similarity value of some non-similar genes. We then extend this method to the semantic particularity measure and to a similarity measure applied to the ChEBI ontology. Thresholds were evaluated over the whole HomoloGene database. For each group of homologous genes, we computed all the similarity and particularity values between pairs of genes. Finally, we focused on the PPAR multigene family to show that the similarity and particularity patterns obtained with our thresholds were better at discriminating orthologs and paralogs than those obtained using default thresholds. Conclusion We developed a method for determining optimal semantic similarity and particularity thresholds. We applied this method on the GO and ChEBI ontologies. Qualitative analysis using the thresholds on the PPAR multigene family yielded biologically-relevant patterns. PMID:26230274
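A minimal sketch of the threshold-selection idea: given similarity scores for known-similar and known-non-similar gene pairs (synthetic stand-ins below, not GO-based values), scan candidate thresholds and keep the one minimizing the sum of the false-positive and false-negative proportions:

import numpy as np

rng = np.random.default_rng(0)
similar = rng.beta(6, 2, 1000)        # synthetic similarity scores for similar pairs
non_similar = rng.beta(2, 6, 1000)    # ... and for non-similar pairs

candidates = np.linspace(0, 1, 501)
fp = np.array([(non_similar >= t).mean() for t in candidates])   # non-similar called similar
fn = np.array([(similar < t).mean() for t in candidates])        # similar called non-similar
best = candidates[np.argmin(fp + fn)]
print(round(float(best), 3))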
Optimal glottal configuration for ease of phonation.
Lucero, J C
1998-06-01
Recent experimental studies have shown the existence of optimal values of the glottal width and convergence angle, at which the phonation threshold pressure is minimum. These results indicate the existence of an optimal glottal configuration for ease of phonation, not predicted by the previous theory. In this paper, the origin of the optimal configuration is investigated using a low dimensional mathematical model of the vocal fold. Two phenomena of glottal aerodynamics are examined: pressure losses due to air viscosity, and air flow separation from a divergent glottis. The optimal glottal configuration seems to be a consequence of the combined effect of both factors. The results agree with the experimental data, showing that the phonation threshold pressure is minimum when the vocal folds are slightly separated in a near rectangular glottis.
Thresholding Based on Maximum Weighted Object Correlation for Rail Defect Detection
NASA Astrophysics Data System (ADS)
Li, Qingyong; Huang, Yaping; Liang, Zhengping; Luo, Siwei
Automatic thresholding is an important technique for rail defect detection, but traditional methods are not competent enough to fit the characteristics of this application. This paper proposes the Maximum Weighted Object Correlation (MWOC) thresholding method, which fits the characteristics that rail images are unimodal and the defect proportion is small. MWOC selects a threshold by optimizing the product of the object correlation and a weight term that expresses the proportion of thresholded defects. Our experimental results demonstrate that MWOC achieves a misclassification error of 0.85%, and outperforms the other well-established thresholding methods, including Otsu, maximum correlation thresholding, maximum entropy thresholding and the valley-emphasis method, for the application of rail defect detection.
NASA Astrophysics Data System (ADS)
Gariano, Stefano Luigi; Brunetti, Maria Teresa; Iovine, Giulio; Melillo, Massimo; Peruccacci, Silvia; Terranova, Oreste Giuseppe; Vennari, Carmela; Guzzetti, Fausto
2015-04-01
Prediction of rainfall-induced landslides can rely on empirical rainfall thresholds. These are obtained from the analysis of past rainfall events that have (or have not) resulted in slope failures. Accurate prediction requires reliable thresholds, which need to be validated before their use in operational landslide warning systems. Despite the clear relevance of validation, only a few studies have addressed the problem, and have proposed and tested robust validation procedures. We propose a validation procedure that allows for the definition of optimal thresholds for early warning purposes. The validation is based on contingency table, skill scores, and receiver operating characteristic (ROC) analysis. To establish the optimal threshold, which maximizes the correct landslide predictions and minimizes the incorrect predictions, we propose an index that results from the linear combination of three weighted skill scores. Selection of the optimal threshold depends on the scope and the operational characteristics of the early warning system. The choice is made by selecting appropriately the weights, and by searching for the optimal (maximum) value of the index. We discuss weakness in the validation procedure caused by the inherent lack of information (epistemic uncertainty) on landslide occurrence typical of large study areas. When working at the regional scale, landslides may have occurred and may have not been reported. This results in biases and variations in the contingencies and the skill scores. We introduce two parameters to represent the unknown proportion of rainfall events (above and below the threshold) for which landslides occurred and went unreported. We show that even a very small underestimation in the number of landslides can result in a significant decrease in the performance of a threshold measured by the skill scores. We show that the variations in the skill scores are different for different uncertainty of events above or below the threshold. This has consequences in the ROC analysis. We applied the proposed procedure to a catalogue of rainfall conditions that have resulted in landslides, and to a set of rainfall events that - presumably - have not resulted in landslides, in Sicily, in the period 2002-2012. First, we determined regional event duration-cumulated event (ED) rainfall thresholds for shallow landslide occurrence using 200 rainfall conditions that have resulted in 223 shallow landslides in Sicily in the period 2002-2011. Next, we validated the thresholds using 29 rainfall conditions that have triggered 42 shallow landslides in Sicily in 2012, and 1250 rainfall events that presumably have not resulted in landslides in the same year. We performed a back analysis simulating the use of the thresholds in a hypothetical landslide warning system operating in 2012.
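A sketch of the validation arithmetic under stated assumptions: build the contingency table for a candidate threshold, compute a few common skill scores, and combine them with user-chosen weights into a single index to be maximized; the particular scores, weights and counts below are illustrative, not the paper's exact combination:

def skill_scores(hits, misses, false_alarms, correct_negatives):
    pod = hits / (hits + misses)                                  # probability of detection
    pofa = false_alarms / (false_alarms + correct_negatives)      # probability of false alarm
    far = false_alarms / (hits + false_alarms)                    # false alarm ratio
    hk = pod - pofa                                               # Hanssen-Kuipers skill score
    return pod, far, hk

def weighted_index(table, w_pod=0.5, w_far=0.25, w_hk=0.25):
    pod, far, hk = skill_scores(*table)
    return w_pod * pod + w_far * (1 - far) + w_hk * hk            # larger is better

# Hypothetical contingency tables (hits, misses, false alarms, correct negatives)
# for two candidate rainfall thresholds
print(weighted_index((35, 7, 180, 1070)))
print(weighted_index((30, 12, 90, 1160)))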
Potgieter, Danielle; Simmers, Dale; Ryan, Lisa; Biccard, Bruce M; Lurati-Buse, Giovanna A; Cardinale, Daniela M; Chong, Carol P W; Cnotliwy, Miloslaw; Farzi, Sylvia I; Jankovic, Radmilo J; Lim, Wen Kwang; Mahla, Elisabeth; Manikandan, Ramaswamy; Oscarsson, Anna; Phy, Michael P; Rajagopalan, Sriram; Van Gaal, William J; Waliszek, Marek; Rodseth, Reitze N
2015-08-01
N-terminal fragment B-type natriuretic peptide (NT-proBNP) prognostic utility is commonly determined post hoc by identifying a single optimal discrimination threshold tailored to the individual study population. The authors aimed to determine how using these study-specific post hoc thresholds impacts meta-analysis results. The authors conducted a systematic review of studies reporting the ability of preoperative NT-proBNP measurements to predict the composite outcome of all-cause mortality and nonfatal myocardial infarction at 30 days after noncardiac surgery. Individual patient-level data NT-proBNP thresholds were determined using two different methodologies. First, a single combined NT-proBNP threshold was determined for the entire cohort of patients, and a meta-analysis conducted using this single threshold. Second, study-specific thresholds were determined for each individual study, with meta-analysis being conducted using these study-specific thresholds. The authors obtained individual patient data from 14 studies (n = 2,196). Using a single NT-proBNP cohort threshold, the odds ratio (OR) associated with an increased NT-proBNP measurement was 3.43 (95% CI, 2.08 to 5.64). Using individual study-specific thresholds, the OR associated with an increased NT-proBNP measurement was 6.45 (95% CI, 3.98 to 10.46). In smaller studies (<100 patients) a single cohort threshold was associated with an OR of 5.4 (95% CI, 2.27 to 12.84) as compared with an OR of 14.38 (95% CI, 6.08 to 34.01) for study-specific thresholds. Post hoc identification of study-specific prognostic biomarker thresholds artificially maximizes biomarker predictive power, resulting in an amplification or overestimation during meta-analysis of these results. This effect is accentuated in small studies.
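For reference, the odds ratio and its 95% confidence interval for a single dichotomized biomarker threshold follow from the 2x2 table of events by threshold status; the counts below are hypothetical:

import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """OR and 95% CI for a 2x2 table:
    a/b = events/non-events above threshold, c/d = events/non-events below."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

print(odds_ratio_ci(a=60, b=540, c=20, d=980))   # hypothetical counts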
Optimal Binarization of Gray-Scaled Digital Images via Fuzzy Reasoning
NASA Technical Reports Server (NTRS)
Dominguez, Jesus A. (Inventor); Klinko, Steven J. (Inventor)
2007-01-01
A technique for finding an optimal threshold for binarization of a gray scale image employs fuzzy reasoning. A triangular membership function is employed which is dependent on the degree to which the pixels in the image belong to either the foreground class or the background class. Use of a simplified linear fuzzy entropy factor function facilitates short execution times and use of membership values between 0.0 and 1.0 for improved accuracy. To improve accuracy further, the membership function employs lower and upper bound gray level limits that can vary from image to image and are selected to be equal to the minimum and the maximum gray levels, respectively, that are present in the image to be converted. To identify the optimal binarization threshold, an iterative process is employed in which different possible thresholds are tested and the one providing the minimum fuzzy entropy measure is selected.
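A schematic version of the idea, with plausible stand-ins for the patented forms: a triangular (linear) membership around each class mean bounded by the image's own minimum and maximum gray levels, a linear entropy factor, and an exhaustive scan for the threshold minimizing the total fuzzy entropy:

import numpy as np

rng = np.random.default_rng(0)
img = np.concatenate([rng.normal(70, 15, 6000), rng.normal(180, 15, 4000)]).clip(0, 255)
img = img.astype(np.uint8)
hist = np.bincount(img, minlength=256).astype(float)
levels = np.arange(256, dtype=float)
gmin, gmax = float(img.min()), float(img.max())       # image-specific membership bounds

def fuzzy_entropy(t):
    w0, w1 = hist[:t + 1].sum(), hist[t + 1:].sum()
    if w0 == 0 or w1 == 0:
        return np.inf
    m0 = (hist[:t + 1] * levels[:t + 1]).sum() / w0    # background mean gray level
    m1 = (hist[t + 1:] * levels[t + 1:]).sum() / w1    # foreground mean gray level
    # Triangular membership to the assigned class (stand-in form, not the patented one)
    mu = np.where(levels <= t, 1 - np.abs(levels - m0) / (gmax - gmin),
                               1 - np.abs(levels - m1) / (gmax - gmin))
    return float((hist * 2 * (1 - mu)).sum())          # linear entropy factor: 0 crisp, larger = fuzzier

best_t = min(range(256), key=fuzzy_entropy)
print(best_t)                                          # lands between the two intensity modes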
Optimal Stimulus Amplitude for Vestibular Stochastic Stimulation to Improve Sensorimotor Function
NASA Technical Reports Server (NTRS)
Goel, R.; Kofman, I.; DeDios, Y. E.; Jeevarajan, J.; Stepanyan, V.; Nair, M.; Congdon, S.; Fregia, M.; Cohen, H.; Bloomberg, J. J.;
2014-01-01
Sensorimotor changes such as postural and gait instabilities can affect the functional performance of astronauts when they transition across different gravity environments. We are developing a method, based on stochastic resonance (SR), to enhance information transfer by applying non-zero levels of external noise on the vestibular system (vestibular stochastic resonance, VSR). Our previous work has shown the advantageous effects of VSR in a balance task of standing on an unstable surface. This technique to improve detection of vestibular signals uses a stimulus delivery system that is wearable or portable and provides imperceptibly low levels of white noise-based binaural bipolar electrical stimulation of the vestibular system. The goal of this project is to determine optimal levels of stimulation for SR applications by using a defined vestibular threshold of motion detection. A series of experiments were carried out to determine a robust paradigm to identify a vestibular threshold that can then be used to recommend optimal stimulation levels for SR training applications customized to each crewmember. Customizing stimulus intensity can maximize treatment effects. The amplitude of stimulation to be used in the VSR application has varied across studies in the literature, for example 60% of nociceptive stimulus thresholds. We compared subjects' perceptual threshold with that obtained from two measures of body sway. Each test session was 463s long and consisted of several 15s sinusoidal stimuli, at different current amplitudes (0-2 mA), interspersed with 20-20.5s periods of no stimulation. Subjects sat on a chair with their eyes closed and had to report their perception of motion through a joystick. A force plate underneath the chair recorded medio-lateral shear forces and roll moments. First, we determined the percent time during stimulation periods for which perception of motion (activity above a pre-defined threshold) was reported using the joystick, and body sway (two standard deviations of the noise level in the baseline measurement) was detected by the sensors. The percentage of time at each stimulation level for motion detection was normalized with respect to the largest value and a logistic regression curve fit was applied to these data. The threshold was defined at the 50% probability of motion detection. Comparison of the threshold of motion detection obtained from joystick data versus body sway suggests that perceptual thresholds were significantly lower, and were not impacted by system noise. Further, in order to determine the optimal stimulation amplitude to improve balance, two sets of experiments were carried out. In the first set of experiments, all subjects received the same level of stimuli and the intensity of optimal performance was projected back on subjects' vestibular threshold curve. In the second set of experiments, on different subjects, stimulation was administered from 20-400% of subjects' vestibular threshold obtained from joystick data. Preliminary results of our study show that, in general, using stimulation amplitudes at 40-60% of perceptual motion threshold improved balance performance significantly compared to control (no stimulation). The amplitude of vestibular stimulation that improved balance function was predominantly in the range of +/- 100 to +/- 400 micro A.
We hypothesize that VSR stimulation will act synergistically with sensorimotor adaptability (SA) training to improve adaptability by increasing utilization of vestibular information and therefore will help us to optimize and personalize a SA countermeasure prescription. This combination will help to significantly reduce the number of days required to recover functional performance to preflight levels after long-duration spaceflight.
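A sketch of the threshold definition used: fit a logistic curve to the normalized detection frequency versus stimulation amplitude and read off the amplitude at 50% probability of motion detection; the data points below are made up:

import numpy as np
from scipy.optimize import curve_fit

amplitude_mA = np.array([0.1, 0.25, 0.5, 0.75, 1.0, 1.5, 2.0])
detect_freq = np.array([0.02, 0.10, 0.35, 0.62, 0.85, 0.97, 1.00])   # hypothetical, normalized

def logistic(x, x0, k):
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

(x0, k), _ = curve_fit(logistic, amplitude_mA, detect_freq, p0=[0.6, 5.0])
print(round(float(x0), 3), "mA at 50% probability of motion detection")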
Peroni, M; Golland, P; Sharp, G C; Baroni, G
2016-02-01
A crucial issue in deformable image registration is achieving a robust registration algorithm at a reasonable computational cost. Given the iterative nature of the optimization procedure, an algorithm must automatically detect convergence and stop the iterative process when most appropriate. This paper ranks the performance of three stopping criteria and six stopping value computation strategies for a Log-Domain Demons deformable registration method, simulating both a coarse and a fine registration. The analyzed stopping criteria are: (a) velocity field update magnitude, (b) mean squared error, and (c) harmonic energy. Each stopping condition is formulated so that the user defines a threshold ε, which quantifies the residual error that is acceptable for the particular problem and calculation strategy. In this work, we did not aim to assign a value to ε, but to give insight into how to evaluate and set the threshold for a given exit strategy in a very popular registration scheme. Experiments on phantom and patient data demonstrate that comparing the optimization metric minimum over the most recent three iterations with the minimum over the fourth to sixth most recent iterations can be an appropriate algorithm stopping strategy. The harmonic energy was found to provide the best trade-off between robustness and speed of convergence for the analyzed registration method at coarse registration, but was outperformed by the mean squared error when all the original pixel information is used. This suggests the need to develop mathematically sound new convergence criteria in which both image and vector field information could be used to detect the actual convergence, which could be especially useful when considering multi-resolution registrations. Further work should also be dedicated to studying the performance of the same strategies in other deformable registration methods and body districts.
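The recommended stopping strategy compares the best metric value over the last three iterations with the best over the three iterations before that; a minimal sketch of that check, with the registration metric replaced by a placeholder history:

def should_stop(metric_history, eps):
    """Stop when the best value of the last 3 iterations improves on the best of
    iterations 4-6 back by less than eps (the user-defined acceptable residual)."""
    if len(metric_history) < 6:
        return False
    recent = min(metric_history[-3:])
    earlier = min(metric_history[-6:-3])
    return earlier - recent < eps        # metric is being minimized

history = [10.0, 6.0, 4.5, 4.1, 4.05, 4.02, 4.01]   # e.g. mean squared error per iteration
print(should_stop(history, eps=0.1))                 # True: improvement has stalled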
Shi, Feng; Tian, Ye; Peng, Xiaoqiang; Dai, Yifan
2014-02-01
The inadequate laser-induced damage threshold (LIDT) of optical elements limits the future development of high-power laser systems. With the aim of raising the LIDT, the elastic passivating treatment mechanism and parameter optimization of a combined magnetorheological finishing (MRF) and HF etching process are investigated. The relationships among the width/depth ratio of defects and parameters of the passivating treatment process (MRF and HF etching), relative intensity (RI), and LIDT of fused silica (FS) optics are revealed through a set of simulations and experiments. For high-efficiency improvement of LIDT, in an elastic passivating treatment process, scratches or other defects need not be wiped off entirely, but only passivated or enlarged to an acceptable profile. This combined process can be applied in polishing high-power-laser-irradiated components with high efficiency, low damage, and high LIDT. A 100 mm×100 mm×10 mm FS optic window is treated, and the width/depth ratio rises from 3 to 11, RI decreases from 4 to 1.2, and LIDT is improved from 7.8 to 17.8 J/cm2 after 385 min of MRF elastic polishing and 60 min of HF etching. Comparing this defect-carrying sample to the defect-free one, the MRF polishing time is shortened, obviously, from 1100 to 385 min, and the LIDT is merely decreased from 19.4 to 17.8 J/cm2. Due to the optimized technique, the fabricating time was shortened by a factor of 2.6, while the LIDT decreased merely 8.2%.
NASA Technical Reports Server (NTRS)
Goel, R.; Kofman, I.; DeDios, Y. E.; Jeevarajan, J.; Stepanyan, V.; Nair, M.; Congdon, S.; Fregia, M.; Cohen, H.; Bloomberg, J.J.;
2015-01-01
Sensorimotor changes such as postural and gait instabilities can affect the functional performance of astronauts when they transition across different gravity environments. We are developing a method, based on stochastic resonance (SR), to enhance information transfer by applying non-zero levels of external noise on the vestibular system (vestibular stochastic resonance, VSR). Our previous work has shown the advantageous effects of VSR in a balance task of standing on an unstable surface [1]. This technique to improve detection of vestibular signals uses a stimulus delivery system that provides imperceptibly low levels of white noise-based binaural bipolar electrical stimulation of the vestibular system. The goal of this project is to determine optimal levels of stimulation for SR applications by using a defined vestibular threshold of motion detection. A series of experiments were carried out to determine a robust paradigm to identify a vestibular threshold that can then be used to recommend optimal stimulation levels for sensorimotor adaptability (SA) training applications customized to each crewmember. The amplitude of stimulation to be used in the VSR application has varied across studies in the literature such as 60% of nociceptive stimulus thresholds [2]. We compared subjects' perceptual threshold with that obtained from two measures of body sway. Each test session was 463s long and consisted of several 15s long sinusoidal stimuli, at different current amplitudes (0-2 mA), interspersed with 20-20.5s periods of no stimulation. Subjects sat on a chair with their eyes closed and had to report their perception of motion through a joystick. A force plate underneath the chair recorded medio-lateral shear forces and roll moments. Comparison of threshold of motion detection obtained from joystick data versus body sway suggests that perceptual thresholds were significantly lower. In the balance task, subjects stood on an unstable surface and had to maintain balance, and the stimulation was administered from 20-400% of subjects' vestibular threshold. Optimal stimulation amplitude was determined at which the balance performance was best compared to control (no stimulation). Preliminary results show that, in general, using stimulation amplitudes at 40-60% of perceptual motion threshold significantly improved the balance performance. We hypothesize that VSR stimulation will act synergistically with SA training to improve adaptability by increasing utilization of vestibular information and therefore will help us to optimize and personalize a SA countermeasure prescription. This combination may help to significantly reduce the number of days required to recover functional performance to preflight levels after long-duration spaceflight.
Fluorescently labeled bevacizumab in human breast cancer: defining the classification threshold
NASA Astrophysics Data System (ADS)
Koch, Maximilian; de Jong, Johannes S.; Glatz, Jürgen; Symvoulidis, Panagiotis; Lamberts, Laetitia E.; Adams, Arthur L. L.; Kranendonk, Mariëtte E. G.; Terwisscha van Scheltinga, Anton G. T.; Aichler, Michaela; Jansen, Liesbeth; de Vries, Jakob; Lub-de Hooge, Marjolijn N.; Schröder, Carolien P.; Jorritsma-Smit, Annelies; Linssen, Matthijs D.; de Boer, Esther; van der Vegt, Bert; Nagengast, Wouter B.; Elias, Sjoerd G.; Oliveira, Sabrina; Witkamp, Arjen J.; Mali, Willem P. Th. M.; Van der Wall, Elsken; Garcia-Allende, P. Beatriz; van Diest, Paul J.; de Vries, Elisabeth G. E.; Walch, Axel; van Dam, Gooitzen M.; Ntziachristos, Vasilis
2017-07-01
Breast cancer specimens containing in-vivo fluorescently labelled drug (bevacizumab) were obtained from patients. We propose a new structured method to determine the optimal classification threshold in targeted fluorescence intra-operative imaging.
Optimization of vehicle-trailer connection systems
NASA Astrophysics Data System (ADS)
Sorge, F.
2016-09-01
The three main requirements of a vehicle-trailer connection system are: en route stability, over- or under-steering restraint, and minimum off-tracking along a curved path. By linking the two units with four-bar trapezoidal linkages, wider stability margins may be attained in comparison with the conventional pintle hitch for both instability types, divergent or oscillating. The stability maps are traced by applying the Hurwitz method or by direct analysis of the characteristic equation at the instability threshold. Several types of four-bar linkages may be quickly tested, with the drawbars converging towards the trailer or the towing unit. The latter configuration appears preferable in terms of self-stability and may yield high critical speeds by optimising the geometrical and physical properties. Nevertheless, the system stability may be improved in general by additional vibration dampers in parallel with the connection linkage. Moreover, the four-bar connection may produce significant corrections of the under-steering or over-steering behaviour of the vehicle-train after a steering command from the driver. The off-tracking along curved paths may also be optimized or kept inside prefixed margins of acceptability. By activating electronic stability systems if necessary, fair results are obtainable for both the steering behaviour and the off-tracking.
NASA Astrophysics Data System (ADS)
Cai, Wenli; Yoshida, Hiroyuki; Harris, Gordon J.
2007-03-01
Measurement of the volume of focal liver tumors, called liver tumor volumetry, is indispensable for assessing the growth of tumors and for monitoring the response of tumors to oncology treatments. Traditional edge models, such as the maximum gradient and zero-crossing methods, often fail to detect the accurate boundary of a fuzzy object such as a liver tumor. As a result, the computerized volumetry based on these edge models tends to differ from manual segmentation results performed by physicians. In this study, we developed a novel computerized volumetry method for fuzzy objects, called dynamic-thresholding level set (DT level set). An optimal threshold value computed from a histogram tends to shift, relative to the theoretical threshold value obtained from a normal distribution model, toward a smaller region in the histogram. We thus designed a mobile shell structure, called a propagating shell, which is a thick region encompassing the level set front. The optimal threshold calculated from the histogram of the shell drives the level set front toward the boundary of a liver tumor. When the volume ratio between the object and the background in the shell approaches one, the optimal threshold value best fits the theoretical threshold value and the shell stops propagating. Application of the DT level set to 26 hepatic CT cases with 63 biopsy-confirmed hepatocellular carcinomas (HCCs) and metastases showed that the computer measured volumes were highly correlated with those of tumors measured manually by physicians. Our preliminary results showed that DT level set was effective and accurate in estimating the volumes of liver tumors detected in hepatic CT images.
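The core of the dynamic-thresholding idea described above is that the threshold is recomputed from the histogram of a thick shell around the evolving front rather than from the whole image. A minimal sketch of that step is given below, assuming a simple Otsu-style optimal threshold and a shell built by dilating and eroding the current front mask; the function names, shell radius, and the "object is brighter than background" convention are illustrative assumptions, not the published DT level set implementation.

```python
import numpy as np
from scipy.ndimage import binary_dilation, binary_erosion

def otsu_threshold(values, nbins=256):
    """Otsu threshold (maximum between-class variance) for a 1-D sample."""
    hist, edges = np.histogram(values, bins=nbins)
    p = hist.astype(float) / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                       # cumulative background weight
    w1 = 1.0 - w0                           # foreground weight
    m = np.cumsum(p * centers)              # cumulative mean
    with np.errstate(divide="ignore", invalid="ignore"):
        between = (m[-1] * w0 - m) ** 2 / (w0 * w1)
    between[~np.isfinite(between)] = 0.0
    return centers[np.argmax(between)]

def shell_threshold(volume, front_mask, shell_radius=3):
    """Threshold computed only from voxels in a thick shell around the front."""
    shell = binary_dilation(front_mask, iterations=shell_radius) & (
        ~binary_erosion(front_mask, iterations=shell_radius)
    )
    vals = volume[shell]
    t = otsu_threshold(vals)
    ratio = (vals >= t).sum() / max((vals < t).sum(), 1)   # object/background ratio
    return t, ratio     # propagation could stop once this ratio approaches 1
```

In the full method, the level set front would be advanced using this shell-derived threshold and propagation would stop once the object/background volume ratio inside the shell approaches one.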
Percolation threshold determines the optimal population density for public cooperation
NASA Astrophysics Data System (ADS)
Wang, Zhen; Szolnoki, Attila; Perc, Matjaž
2012-03-01
While worldwide census data provide statistical evidence that firmly link the population density with several indicators of social welfare, the precise mechanisms underlying these observations are largely unknown. Here we study the impact of population density on the evolution of public cooperation in structured populations and find that the optimal density is uniquely related to the percolation threshold of the host graph irrespective of its topological details. We explain our observations by showing that spatial reciprocity peaks in the vicinity of the percolation threshold, when the emergence of a giant cooperative cluster is hindered neither by vacancy nor by invading defectors, thus discovering an intuitive yet universal law that links the population density with social prosperity.
Intelligent Network Flow Optimization (INFLO) prototype acceptance test summary.
DOT National Transportation Integrated Search
2015-05-01
This report summarizes the results of System Acceptance Testing for the implementation of the Intelligent Network Flow Optimization (INFLO) Prototype bundle within the Dynamic Mobility Applications (DMA) portion of the Connected Vehicle Program. This...
Cowley, Laura E; Maguire, Sabine; Farewell, Daniel M; Quinn-Scoggins, Harriet D; Flynn, Matthew O; Kemp, Alison M
2018-05-09
The validated Predicting Abusive Head Trauma (PredAHT) tool estimates the probability of abusive head trauma (AHT) based on combinations of six clinical features: head/neck bruising; apnea; seizures; rib fractures; long-bone fractures; retinal hemorrhages. We aimed to determine the acceptability of PredAHT to child protection professionals. We conducted qualitative semi-structured interviews with 56 participants: clinicians (25), child protection social workers (10), legal practitioners (9, including 4 judges), police officers (8), and pathologists (4), purposively sampled across the southwest United Kingdom. Interviews were recorded, transcribed and imported into NVivo for thematic analysis (38% double-coded). We explored participants' evaluations of PredAHT, their opinions about the optimal way to present the calculated probabilities, and their interpretation of probabilities in the context of suspected AHT. Clinicians, child protection social workers and police thought PredAHT would be beneficial as an objective adjunct to their professional judgment, to give them greater confidence in their decisions. Lawyers and pathologists appreciated its value for prompting multidisciplinary investigations, but were uncertain of its usefulness in court. Perceived disadvantages included possible over-reliance and false reassurance from a low score. Interpretations regarding which percentages equate to 'low', 'medium' or 'high' likelihood of AHT varied; participants preferred a precise % probability over these general terms. Participants would use PredAHT with provisos: if they received multi-agency training to define accepted risk thresholds for consistent interpretation; with knowledge of its development; if it was accepted by colleagues. PredAHT may therefore increase professionals' confidence in their decision-making when investigating suspected AHT, but may be of less value in court. Copyright © 2018 Elsevier Ltd. All rights reserved.
Objective lens simultaneously optimized for pupil ghosting, wavefront delivery and pupil imaging
NASA Technical Reports Server (NTRS)
Olczak, Eugene G (Inventor)
2011-01-01
An objective lens includes multiple optical elements disposed between a first end and a second end, each optical element oriented along an optical axis. Each optical surface of the multiple optical elements provides an angle of incidence to a marginal ray that is above a minimum threshold angle. This threshold angle minimizes pupil ghosts that may enter an interferometer. The objective lens also optimizes wavefront delivery and pupil imaging onto an optical surface under test.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cain, W.S.; Shoaf, C.R.; Velasquez, S.F.
1992-03-01
In response to numerous requests for information related to odor thresholds, this document was prepared by the Air Risk Information Support Center in its role in providing technical assistance to State and Local government agencies on risk assessment of air pollutants. A discussion of basic concepts related to olfactory function and the measurement of odor thresholds is presented. A detailed discussion of criteria which are used to evaluate the quality of published odor threshold values is provided. The use of odor threshold information in risk assessment is discussed. The results of a literature search and review of odor threshold information for the chemicals listed as hazardous air pollutants in the Clean Air Act amendments of 1990 are presented. The published odor threshold values are critically evaluated based on the criteria discussed, and the values of acceptable quality are used to determine a geometric mean or best estimate.
Geneste, J.; Pereira, B.; Arnaud, B.; Christol, N.; Liotier, J.; Blanc, O.; Teissedre, F.; Hope, S.; Schwan, R.; Llorca, P.M.; Schmidt, J.; Cherpitel, C.J.; Malet, L.; Brousse, G.
2012-01-01
Aims: A number of screening instruments are routinely used in Emergency Department (ED) situations to identify alcohol-use disorders (AUD). We wished to study the psychometric features, particularly concerning optimal threshold scores (TSs), of four assessment scales frequently used to screen for abuse and/or dependence, the Cut-down, Annoyed, Guilty, Eye-opener (CAGE), Rapid Alcohol Problem Screen 4 (RAPS4), RAPS4-quantity-frequency and AUD Identification Test (AUDIT) questionnaires, particularly in the sub-group of people admitted for acute alcohol intoxication (AAI). Methods: All included patients [AAI admitted to ED (blood alcohol level ≥0.8 g/l)] were assessed by the four scales, and with a gold standard (alcohol dependence/abuse section of the Mini International Neuropsychiatric Interview), to determine AUD status. To investigate the TSs of the scales, we used Youden's index, efficiency, receiver operating characteristic (ROC) curve techniques and the quality ROC curve technique for optimized TS (indices of quality). Results: A total of 164 persons (122 males, 42 females) were included in the study. Nineteen (11.60%) were identified as alcohol abusers alone and 128 (78.1%) as alcohol dependents (DSM-IV). Results suggest a statistically significant difference between men and women (P < 0.05) in performance of the screening tests RAPS4 (≥1) and CAGE (≥2) for detecting abuse. Also, in this population, we show an increase in TSs of RAPS4 (≥2) and CAGE (≥3) for detecting dependence compared with those typically accepted in non-intoxicated individuals. The AUDIT test demonstrates good performance for detecting alcohol abuse and/or alcohol-dependent patients (≥7 for women and ≥12 for men) and for distinguishing alcohol dependence (≥11 for women and ≥14 for men) from other conditions. Conclusion: Our study underscores for the first time the need to adapt, taking gender into account, the thresholds of the tests typically used for detection of abuse and dependence in this population. PMID:22414922
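As a rough illustration of the threshold-score search described above, the sketch below scans every observed score and keeps the cut-off maximizing Youden's J = sensitivity + specificity − 1; the data layout and function name are assumptions, and efficiency or quality-ROC criteria would be computed analogously.

```python
import numpy as np

def youden_optimal_threshold(scores, labels):
    """Pick the cut-off that maximizes Youden's J = sensitivity + specificity - 1.

    scores : questionnaire totals (e.g. AUDIT); labels : 1 = AUD by the gold standard.
    """
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    best_t, best_j = None, -np.inf
    for t in np.unique(scores):
        pred = scores >= t                       # "positive screen" at this cut-off
        tp = np.sum(pred & labels)
        tn = np.sum(~pred & ~labels)
        fp = np.sum(pred & ~labels)
        fn = np.sum(~pred & labels)
        sens = tp / (tp + fn) if tp + fn else 0.0
        spec = tn / (tn + fp) if tn + fp else 0.0
        j = sens + spec - 1.0
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j
```

Running the same search separately on male and female subsets would yield the kind of gender-specific thresholds reported in the abstract.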
An adaptive threshold detector and channel parameter estimator for deep space optical communications
NASA Technical Reports Server (NTRS)
Arabshahi, P.; Mukai, R.; Yan, T. -Y.
2001-01-01
This paper presents a method for optimal adaptive setting of pulse-position-modulation pulse detection thresholds, which minimizes the total probability of error for the dynamically fading optical free-space channel.
Optimal control of population recovery--the role of economic restoration threshold.
Lampert, Adam; Hastings, Alan
2014-01-01
A variety of ecological systems around the world have been damaged in recent years, either by natural factors such as invasive species, storms and global change or by direct human activities such as overfishing and water pollution. Restoration of these systems to provide ecosystem services entails significant economic benefits. Thus, choosing how and when to restore in an optimal fashion is important, but has not been well studied. Here we examine a general model where population growth can be induced or accelerated by investing in active restoration. We show that the most cost-effective method to restore an ecosystem dictates investment until the population approaches an 'economic restoration threshold', a density above which the ecosystem should be left to recover naturally. Therefore, determining this threshold is a key general approach for guiding efficient restoration management, and we demonstrate how to calculate this threshold for both deterministic and stochastic ecosystems. © 2013 John Wiley & Sons Ltd/CNRS.
Optimal threshold estimation for binary classifiers using game theory.
Sanchez, Ignacio Enrique
2016-01-01
Many bioinformatics algorithms can be understood as binary classifiers. They are usually compared using the area under the receiver operating characteristic (ROC) curve. On the other hand, choosing the best threshold for practical use is a complex task, due to uncertain and context-dependent skews in the abundance of positives in nature and in the yields/costs for correct/incorrect classification. We argue that considering a classifier as a player in a zero-sum game allows us to use the minimax principle from game theory to determine the optimal operating point. The proposed classifier threshold corresponds to the intersection between the ROC curve and the descending diagonal in ROC space and yields a minimax accuracy of 1-FPR. Our proposal can be readily implemented in practice, and reveals that the empirical condition for threshold estimation of "specificity equals sensitivity" maximizes robustness against uncertainties in the abundance of positives in nature and classification costs.
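The proposed operating point is where the ROC curve crosses the descending diagonal, i.e. where sensitivity equals specificity (TPR = 1 − FPR). A small sketch of that selection, assuming per-item scores and binary labels with a "score above threshold is positive" convention, is given below.

```python
import numpy as np

def minimax_threshold(scores, labels):
    """Operating point where the ROC curve crosses the descending diagonal,
    i.e. where sensitivity is (approximately) equal to specificity."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    best_t, best_gap = None, np.inf
    for t in np.unique(scores):
        pred = scores >= t
        tpr = np.sum(pred & labels) / max(np.sum(labels), 1)       # sensitivity
        fpr = np.sum(pred & ~labels) / max(np.sum(~labels), 1)
        gap = abs(tpr - (1.0 - fpr))        # distance to the descending diagonal
        if gap < best_gap:
            best_t, best_gap = t, gap
    return best_t
```

At that point the guaranteed accuracy is 1 − FPR regardless of the (unknown) prevalence, which is the minimax robustness argument made above.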
Perceptions of midline deviations among different facial types.
Williams, Ryan P; Rinchuse, Daniel J; Zullo, Thomas G
2014-02-01
The correction of a deviated midline can involve complicated mechanics and a protracted treatment. The threshold below which midline deviations are considered acceptable might depend on multiple factors. The objective of this study was to evaluate the effect of facial type on laypersons' perceptions of various degrees of midline deviation. Smiling photographs of male and female subjects were altered to create 3 facial type variations (euryprosopic, mesoprosopic, and leptoprosopic) and deviations in the midline ranging from 0.0 to 4.0 mm. Evaluators rated the overall attractiveness and acceptability of each photograph. Data were collected from 160 raters. The overall threshold for the acceptability of a midline deviation was 2.92 ± 1.10 mm, with the threshold for the male subject significantly lower than that for the female subject. The euryprosopic facial type showed no decrease in mean attractiveness until the deviations were 2 mm or more. All other facial types were rated as decreasingly attractive from 1 mm onward. Among all facial types, the attractiveness of the male subject was only affected at deviations of 2 mm or greater; for the female subject, the attractiveness scores were significantly decreased at 1 mm. The mesoprosopic facial type was most attractive for the male subject but was the least attractive for the female subject. Facial type and sex may affect the thresholds at which a midline deviation is detected and above which a midline deviation is considered unacceptable. Both the euryprosopic facial type and male sex were associated with higher levels of attractiveness at relatively small levels of deviations. Copyright © 2014 American Association of Orthodontists. Published by Mosby, Inc. All rights reserved.
Minet, V; Baudar, J; Bailly, N; Douxfils, J; Laloy, J; Lessire, S; Gourdin, M; Devalet, B; Chatelain, B; Dogné, J M; Mullier, F
2014-06-01
Accurate diagnosis of heparin-induced thrombocytopenia (HIT) is essential but remains challenging. We have previously demonstrated, in a retrospective study, the usefulness of the combination of the 4Ts score, AcuStar HIT and heparin-induced multiple electrode aggregometry (HIMEA) with optimized thresholds. We aimed to prospectively explore the performance of our optimized diagnostic algorithm in patients with suspected HIT. The secondary objective was to evaluate the performance of AcuStar HIT-Ab (PF4-H) in comparison with the clinical outcome. In total, 116 inpatients with clinically suspected immune HIT were included. Our optimized diagnostic algorithm was applied to each patient. The sensitivity, specificity, negative predictive value (NPV) and positive predictive value (PPV) of the overall diagnostic strategy, as well as of AcuStar HIT-Ab (at the manufacturer's thresholds and at our thresholds), were calculated using the clinical diagnosis as the reference. Among the 116 patients, 2 patients had clinically-diagnosed HIT. These 2 patients were positive on AcuStar HIT-Ab, AcuStar HIT-IgG and HIMEA. Using our optimized algorithm, all patients were correctly diagnosed. AcuStar HIT-Ab at our cut-off (>9.41 U/mL) and at the manufacturer's cut-off (>1.00 U/mL) both showed a sensitivity of 100.0%, with specificities of 99.1% and 90.4%, respectively. The combination of the 4Ts score, the HemosIL® AcuStar HIT and HIMEA with optimized thresholds may be useful for the rapid and accurate exclusion of the diagnosis of immune HIT. Copyright © 2014 Elsevier Ltd. All rights reserved.
Reasoning in psychosis: risky but not necessarily hasty.
Moritz, Steffen; Scheu, Florian; Andreou, Christina; Pfueller, Ute; Weisbrod, Matthias; Roesch-Ely, Daniela
2016-01-01
A liberal acceptance (LA) threshold for hypotheses has been put forward to explain the well-replicated "jumping to conclusions" (JTC) bias in psychosis, particularly in patients with paranoid symptoms. According to this account, schizophrenia patients rest their decisions on lower subjective probability estimates. The initial formulation of the LA account also predicts an absence of the JTC bias under high task ambiguity (i.e., if more than one response option surpasses the subjective acceptance threshold). Schizophrenia patients (n = 62) with current or former delusions and healthy controls (n = 30) were compared on six scenarios of a variant of the beads task paradigm. Decision-making was assessed under low and high task ambiguity. Along with decision judgments (optional), participants were required to provide probability estimates for each option in order to determine decision thresholds (i.e., the probability the individual deems sufficient for a decision). In line with the LA account, schizophrenia patients showed a lowered decision threshold compared to controls (82% vs. 93%) which predicted both more errors and less draws to decisions. Group differences on thresholds were comparable across conditions. At the same time, patients did not show hasty decision-making, reflecting overall lowered probability estimates in patients. Results confirm core predictions derived from the LA account. Our results may (partly) explain why hasty decision-making is sometimes aggravated and sometimes abolished in psychosis. The proneness to make risky decisions may contribute to the pathogenesis of psychosis. A revised LA account is put forward.
[The analysis of threshold effect using Empower Stats software].
Lin, Lin; Chen, Chang-zhong; Yu, Xiao-dan
2013-11-01
In many biomedical studies of how a factor (x) influences an outcome variable (y), the factor has no influence, or a positive effect, only within a certain range; beyond a certain threshold value, the size and/or direction of the effect changes, which is called a threshold effect. Whether a threshold effect is present can be assessed by fitting a smooth curve to see whether the relationship is piecewise linear, and then analyzed using a segmented regression model, a likelihood ratio test (LRT) and bootstrap resampling. The Empower Stats software, developed by X & Y Solutions Inc. (USA), has a threshold effect analysis module. The user can either specify the threshold value, in which case the module fits the segmented model at that cut point, or leave it unspecified, in which case the software determines the optimal threshold automatically and calculates its confidence interval.
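A minimal sketch of that workflow is given below: fit a continuous two-segment linear model over a grid of candidate thresholds, keep the threshold with the smallest residual sum of squares, compare it against a single straight line with a likelihood-ratio-style statistic, and bootstrap the threshold location for a confidence interval. This illustrates the general technique, not the Empower Stats implementation; the grid, statistic, and function names are assumptions.

```python
import numpy as np

def fit_piecewise(x, y, k):
    """Continuous two-segment linear fit with a knot at k; returns RSS and coefficients."""
    X = np.column_stack([np.ones_like(x), x, np.clip(x - k, 0, None)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    return rss, beta

def threshold_effect(x, y, n_boot=500, seed=0):
    x, y = np.asarray(x, float), np.asarray(y, float)
    grid = np.quantile(x, np.linspace(0.1, 0.9, 81))      # candidate thresholds
    rss = np.array([fit_piecewise(x, y, k)[0] for k in grid])
    k_hat = grid[np.argmin(rss)]

    # crude likelihood-ratio-style comparison against a single straight line
    rss0 = np.sum((y - np.polyval(np.polyfit(x, y, 1), x)) ** 2)
    n = len(y)
    lr = n * np.log(rss0 / rss.min())

    # bootstrap the threshold location for a percentile confidence interval
    rng = np.random.default_rng(seed)
    boots = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)
        r = [fit_piecewise(x[idx], y[idx], k)[0] for k in grid]
        boots.append(grid[int(np.argmin(r))])
    ci = np.percentile(boots, [2.5, 97.5])
    return k_hat, lr, ci
```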
Optimizing computer-aided colonic polyp detection for CT colonography by evolving the Pareto front
Li, Jiang; Huang, Adam; Yao, Jack; Liu, Jiamin; Van Uitert, Robert L.; Petrick, Nicholas; Summers, Ronald M.
2009-01-01
A multiobjective genetic algorithm is designed to optimize a computer-aided detection (CAD) system for identifying colonic polyps. Colonic polyps appear as elliptical protrusions on the inner surface of the colon. Curvature-based features for colonic polyp detection have proved to be successful in several CT colonography (CTC) CAD systems. Our CTC CAD program uses a sequential classifier to form initial polyp detections on the colon surface. The classifier utilizes a set of thresholds on curvature-based features to cluster suspicious colon surface regions into polyp candidates. The thresholds were previously chosen experimentally by using feature histograms. The chosen thresholds were effective for detecting polyps sized 10 mm or larger in diameter. However, many medium-sized polyps, 6–9 mm in diameter, were missed in the initial detection procedure. In this paper, the task of finding optimal thresholds was formulated as a multiobjective optimization problem, and a genetic algorithm was used to solve it by evolving the Pareto front of the Pareto-optimal set. The new CTC CAD system was tested on 792 patients. The sensitivities of the optimized system improved significantly, from 61.68% to 74.71% with an increase of 13.03% (95% CI [6.57%, 19.5%], p=7.78×10−5) for the size category of 6–9 mm polyps, from 65.02% to 77.4% with an increase of 12.38% (95% CI [6.23%, 18.53%], p=7.95×10−5) for polyps 6 mm or larger, and from 82.2% to 90.58% with an increase of 8.38% (95% CI [0.75%, 16%], p=0.03) for polyps 8 mm or larger at comparable false positive rates. The sensitivities of the optimized system are nearly equivalent to those of expert radiologists. PMID:19235388
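The multiobjective step described above keeps only non-dominated threshold settings, trading per-polyp sensitivity against false positives per patient; a genetic algorithm would then evolve this Pareto front. The sketch below shows just the non-dominated filtering on toy candidate scores; the numbers and names are illustrative, not taken from the paper.

```python
import numpy as np

def pareto_front(points):
    """Keep candidates that are not dominated.

    points[i] = (sensitivity_i, false_positives_i); sensitivity is maximized,
    false positives are minimized.
    """
    points = np.asarray(points, dtype=float)
    keep = np.ones(len(points), dtype=bool)
    for i, (s_i, f_i) in enumerate(points):
        for j, (s_j, f_j) in enumerate(points):
            if j != i and s_j >= s_i and f_j <= f_i and (s_j > s_i or f_j < f_i):
                keep[i] = False
                break
    return np.flatnonzero(keep)

# toy example: each row is one candidate set of curvature-feature thresholds
candidates = np.array([
    [0.62, 8.1],   # (per-polyp sensitivity, false positives per patient)
    [0.70, 8.0],
    [0.74, 9.5],
    [0.69, 10.2],
])
print(pareto_front(candidates))   # indices of non-dominated candidates -> [1, 2]
```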
Farooq, Zerwa; Behzadi, Ashkan Heshmatzadeh; Blumenfeld, Jon D; Zhao, Yize; Prince, Martin R
To compare MRI segmentation methods for measuring liver cyst volumes in autosomal dominant polycystic kidney disease (ADPKD). Liver cyst volumes in 42 ADPKD patients were measured using region growing, thresholding and cyst diameter techniques. Manual segmentation was the reference standard. Root mean square deviation was 113, 155, and 500 for cyst diameter, thresholding and region growing, respectively. Thresholding error for cyst volumes below 500 ml was 550% vs 17% for cyst volumes above 500 ml (p<0.001). For measuring the volume of a small number of cysts, cyst diameter and manual segmentation methods are recommended. For severe disease with numerous, large hepatic cysts, thresholding is an acceptable alternative. Copyright © 2017 Elsevier Inc. All rights reserved.
Sarmast, Nima D; Angelov, Nikola; Ghinea, Razvan; Powers, John M; Paravina, Rade D
The CIELab and CIEDE2000 coverage error (ΔE*COV and ΔE'COV, respectively) of basic shades of different gingival shade guides and gingiva-colored restorative dental materials (n = 5) was calculated as compared to a previously compiled database on healthy human gingiva. Data were analyzed using analysis of variance with the Tukey-Kramer multiple-comparison test (P < .05). A 50:50% acceptability threshold of 4.6 for ΔE* and 4.1 for ΔE' was used to interpret the results. ΔE*COV/ΔE'COV ranged from 4.4/3.5 to 8.6/6.9. The majority of gingival shade guides and gingiva-colored restorative materials exhibited statistically significant coverage errors above the 50:50% acceptability threshold and uneven shade distribution.
Zheng, Hong; Clausen, Morten Rahr; Dalsgaard, Trine Kastrup; Mortensen, Grith; Bertram, Hanne Christine
2013-08-06
We describe a time-saving protocol for the processing of LC-MS-based metabolomics data by optimizing parameter settings in XCMS and threshold settings for removing noisy and low-intensity peaks using design of experiment (DoE) approaches including Plackett-Burman design (PBD) for screening and central composite design (CCD) for optimization. A reliability index, which is based on evaluation of the linear response to a dilution series, was used as a parameter for the assessment of data quality. After identifying the significant parameters in the XCMS software by PBD, CCD was applied to determine their values by maximizing the reliability and group indexes. Optimal settings by DoE resulted in improvements of 19.4% and 54.7% in the reliability index for a standard mixture and human urine, respectively, as compared with the default setting, and a total of 38 h was required to complete the optimization. Moreover, threshold settings were optimized by using CCD for further improvement. The approach combining optimal parameter setting and the threshold method improved the reliability index about 9.5 times for a standards mixture and 14.5 times for human urine data, which required a total of 41 h. Validation results also showed improvements in the reliability index of about 5-7 times even for urine samples from different subjects. It is concluded that the proposed methodology can be used as a time-saving approach for improving the processing of LC-MS-based metabolomics data.
de Lemos Zingano, Bianca; Guarnieri, Ricardo; Diaz, Alexandre Paim; Schwarzbold, Marcelo Liborio; Bicalho, Maria Alice Horta; Claudino, Lucia Sukys; Markowitsch, Hans J; Wolf, Peter; Lin, Katia; Walz, Roger
2015-09-01
This study aimed to evaluate the diagnostic accuracy of the Hamilton Rating Scale for Depression (HRSD), the Beck Depression Inventory (BDI), the Hospital Anxiety and Depression Scale (HADS), and the Hospital Anxiety and Depression Scale-Depression subscale (HADS-D) as diagnostic tests for depressive disorder in drug-resistant mesial temporal lobe epilepsy with hippocampal sclerosis (MTLE-HS). One hundred three patients with drug-resistant MTLE-HS were enrolled. All patients underwent a neurological examination, interictal and ictal video-electroencephalogram (V-EEG) analyses, and magnetic resonance imaging (MRI). Psychiatric interviews were based on DSM-IV-TR criteria and ILAE Commission of Psychobiology classification as a gold standard; HRSD, BDI, HADS, and HADS-D were used as psychometric diagnostic tests, and receiver operating characteristic (ROC) curves were used to determine the optimal threshold scores. For all the scales, the areas under the curve (AUCs) were approximately 0.8, and they were able to identify depression in this sample. A threshold of ≥9 on the HRSD and a threshold of ≥8 on the HADS-D showed a sensitivity of 70% and specificity of 80%. A threshold of ≥19 on the BDI and HADS-D total showed a sensitivity of 55% and a specificity of approximately 90%. The instruments showed a negative predictive value of approximately 87% and a positive predictive value of approximately 65% for the BDI and HADS total and approximately 60% for the HRSD and HADS-D. HRSD≥9 and HADS-D≥8 had the best balance between sensitivity (approximately 70%) and specificity (approximately 80%). However, with these thresholds, these diagnostic tests do not appear useful in identifying depressive disorder in this population with epilepsy, and their specificity (approximately 80%) and PPV (approximately 55%) were lower than those of the other scales. We believe that the BDI and HADS total are valid diagnostic tests for depressive disorder in patients with MTLE-HS, as both scales showed acceptable (though not high) specificity and PPV for this type of study. Copyright © 2015 Elsevier Inc. All rights reserved.
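The predictive values quoted above depend on the prevalence of depressive disorder in the tested sample as well as on sensitivity and specificity. The small worked example below makes that relationship explicit using the standard Bayes formulas; the prevalence value is purely illustrative, since it is not stated in the abstract.

```python
def predictive_values(sens, spec, prev):
    """Positive and negative predictive values from sensitivity, specificity
    and the prevalence of depressive disorder in the tested population."""
    ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
    npv = spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)
    return ppv, npv

# e.g. HRSD >= 9: sensitivity ~0.70, specificity ~0.80, assumed prevalence 0.35
print(predictive_values(0.70, 0.80, 0.35))   # ~ (0.65, 0.83)
```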
Children's perceptions of smile esthetics and their influence on social judgment.
Rossini, Gabriele; Parrini, Simone; Castroflorio, Tommaso; Fortini, Arturo; Deregibus, Andrea; Debernardi, Cesare L
2016-11-01
To define a threshold of acceptance of smile esthetics for children and adolescents. A systematic search in the medical literature (PubMed, PubMed Central, National Library of Medicine's Medline, Embase, Cochrane Central Register of Controlled Clinical Trials, Web of Knowledge, Scopus, Google Scholar, and LILACs) was performed to identify all peer-reviewed papers reporting data regarding the evaluation of children's and adolescents' perceptions of dental esthetic factors. The search was conducted using a research strategy based on keywords such as "children," "adolescents," "smile aesthetics perception," "smile aesthetics evaluation." Studies analyzing smile esthetics involving at least 10 observers younger than 18 years of age were selected. Among the 1667 analyzed articles, five studies were selected for the final review process. No study included in the review analyzed perception of smile anomalies in a quantitative or qualitative way, thus no threshold was identified for smile features. Among the analyzed samples, unaltered smiles were always significantly associated with better evaluation scores when compared with altered smiles. Smile esthetics influence social perception during childhood and adolescence. However, thresholds of smile esthetic acceptance in children and adolescents are still not available.
Austel, Michaela; Hensel, Patrick; Jackson, Dawn; Vidyashankar, Anand; Zhao, Ying; Medleau, Linda
2006-06-01
The purpose of this study was to determine the optimal histamine concentration and 'irritant' allergen threshold concentrations in intradermal testing (IDT) in normal cats. Thirty healthy cats were tested with three different histamine concentrations and four different concentrations of each allergen. The optimal histamine concentration was determined to be 1:50,000 w/v (0.05 mg mL(-1)). Using this histamine concentration, the 'irritant' threshold concentration for most allergens was above the highest concentrations tested (4,000 PNU mL(-1) for 41 allergens and 700 PNU mL(-1) for human dander). The 'irritant' threshold concentration for flea antigen was determined to be 1:750 w/v. More than 10% of the tested cats showed positive reactions to Dermatophagoides farinae, Dermatophagoides pteronyssinus, housefly, mosquito and moth at every allergen concentration, which suggests that the 'irritant' threshold concentration for these allergens is below 1,000 PNU mL(-1), the lowest allergen concentration tested. Our results confirm previous studies in indicating that allergen and histamine concentrations used in feline IDT may need to be revised.
Regional rainfall thresholds for landslide occurrence using a centenary database
NASA Astrophysics Data System (ADS)
Vaz, Teresa; Luís Zêzere, José; Pereira, Susana; Cruz Oliveira, Sérgio; Garcia, Ricardo A. C.; Quaresma, Ivânia
2018-04-01
This work proposes a comprehensive method to assess rainfall thresholds for landslide initiation using a centenary landslide database associated with a single centenary daily rainfall data set. The method is applied to the Lisbon region and includes the rainfall return period analysis that was used to identify the critical rainfall combination (cumulated rainfall duration) related to each landslide event. The spatial representativeness of the reference rain gauge is evaluated and the rainfall thresholds are assessed and calibrated using the receiver operating characteristic (ROC) metrics. Results show that landslide events located up to 10 km from the rain gauge can be used to calculate the rainfall thresholds in the study area; however, these thresholds may be used with acceptable confidence up to 50 km from the rain gauge. The rainfall thresholds obtained using linear and potential regression perform well in ROC metrics. However, the intermediate thresholds based on the probability of landslide events established in the zone between the lower-limit threshold and the upper-limit threshold are much more informative as they indicate the probability of landslide event occurrence given rainfall exceeding the threshold. This information can be easily included in landslide early warning systems, especially when combined with the probability of rainfall above each threshold.
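A minimal sketch of the calibration step described above: classify each day by whether the cumulated rainfall for a fixed duration exceeds a candidate threshold, compare with the landslide-event record, and score each candidate with ROC-style rates (here summarized by the true skill statistic, TPR − FPR). The variable names, the daily time step, and the choice of summary score are assumptions.

```python
import numpy as np

def calibrate_rainfall_threshold(cum_rain, landslide_day, candidates):
    """Score candidate rainfall thresholds against a landslide-event record.

    cum_rain      : cumulated rainfall (mm) for a fixed duration, one value per day
    landslide_day : boolean array, True if a landslide event occurred that day
    candidates    : candidate threshold values (mm) to evaluate
    """
    cum_rain = np.asarray(cum_rain, float)
    landslide_day = np.asarray(landslide_day, bool)
    results = []
    for t in candidates:
        alarm = cum_rain >= t
        tpr = np.sum(alarm & landslide_day) / max(np.sum(landslide_day), 1)
        fpr = np.sum(alarm & ~landslide_day) / max(np.sum(~landslide_day), 1)
        results.append((t, tpr, fpr, tpr - fpr))     # last column: true skill statistic
    results = np.array(results)
    best = results[np.argmax(results[:, 3])]
    return best, results
```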
NASA Astrophysics Data System (ADS)
Clarkin, T. J.; Kasprzyk, J. R.; Raseman, W. J.; Herman, J. D.
2015-12-01
This study contributes a diagnostic assessment of multiobjective evolutionary algorithm (MOEA) search on a set of water resources problem formulations with different configurations of constraints. Unlike constraints in classical optimization modeling, constraints within MOEA simulation-optimization represent limits on acceptable performance that delineate whether solutions within the search problem are feasible. Constraints are relevant because of the emergent pressures on water resources systems: increasing public awareness of their sustainability, coupled with regulatory pressures on water management agencies. In this study, we test several state-of-the-art MOEAs that utilize restricted tournament selection for constraint handling on varying configurations of water resources planning problems. For example, a problem that has no constraints on performance levels will be compared with a problem with several severe constraints, and a problem with constraints that have less severe values on the constraint thresholds. One such problem, Lower Rio Grande Valley (LRGV) portfolio planning, has been solved with a suite of constraints that ensure high reliability, low cost variability, and acceptable performance in a single year severe drought. But to date, it is unclear whether or not the constraints are negatively affecting MOEAs' ability to solve the problem effectively. Two categories of results are explored. The first category uses control maps of algorithm performance to determine if the algorithm's performance is sensitive to user-defined parameters. The second category uses run-time performance metrics to determine the time required for the algorithm to reach sufficient levels of convergence and diversity on the solution sets. Our work exploring the effect of constraints will better enable practitioners to define MOEA problem formulations for real-world systems, especially when stakeholders are concerned with achieving fixed levels of performance according to one or more metrics.
NASA Astrophysics Data System (ADS)
Smetanin, S. N.; Jelínek, M., Jr.; Kubeček, V.; Jelínková, H.
2015-09-01
Optimal conditions of low-threshold collinear parametric Raman comb generation in calcite (CaCO3) are experimentally investigated under 20 ps laser pulse excitation, in agreement with the theoretical study. The collinear parametric Raman generation of the highest number of Raman components in the short calcite crystals corresponding to the optimal condition of Stokes-anti-Stokes coupling was achieved. At the excitation wavelength of 1064 nm, using the optimum-length crystal resulted in the effective multi-octave frequency Raman comb generation containing up to five anti-Stokes and more than four Stokes components (from 674 nm to 1978 nm). The 532 nm pumping resulted in the frequency Raman comb generation from the 477 nm 2nd anti-Stokes up to the 692 nm 4th Stokes component. Using the crystal with a non-optimal length leads to the Stokes components generation only with higher thresholds because of the cascade-like stimulated Raman scattering with suppressed parametric coupling.
Hariharan, Prasanna; D’Souza, Gavin A.; Horner, Marc; Morrison, Tina M.; Malinauskas, Richard A.; Myers, Matthew R.
2017-01-01
A “credible” computational fluid dynamics (CFD) model has the potential to provide a meaningful evaluation of safety in medical devices. One major challenge in establishing “model credibility” is to determine the required degree of similarity between the model and experimental results for the model to be considered sufficiently validated. This study proposes a “threshold-based” validation approach that provides well-defined acceptance criteria, which are a function of how close the simulation and experimental results are to the safety threshold, for establishing the model validity. The validation criteria developed following the threshold approach are not only a function of the Comparison Error, E (which is the difference between experiments and simulations) but also take into account the risk to patient safety because of E. The method is applicable for scenarios in which a safety threshold can be clearly defined (e.g., the viscous shear-stress threshold for hemolysis in blood-contacting devices). The applicability of the new validation approach was tested on the FDA nozzle geometry. The context of use (COU) was to evaluate if the instantaneous viscous shear stress in the nozzle geometry at Reynolds numbers (Re) of 3500 and 6500 was below the commonly accepted threshold for hemolysis. The CFD results (“S”) of velocity and viscous shear stress were compared with inter-laboratory experimental measurements (“D”). The uncertainties in the CFD and experimental results due to input parameter uncertainties were quantified following the ASME V&V 20 standard. The CFD models for both Re = 3500 and 6500 could not be sufficiently validated by performing a direct comparison between CFD and experimental results using the Student’s t-test. However, following the threshold-based approach, a Student’s t-test comparing |S-D| and |Threshold-S| showed that relative to the threshold, the CFD and experimental datasets for Re = 3500 were statistically similar and the model could be considered sufficiently validated for the COU. However, for Re = 6500, at certain locations where the shear stress is close to the hemolysis threshold, the CFD model could not be considered sufficiently validated for the COU. Our analysis showed that the model could be sufficiently validated either by reducing the uncertainties in experiments, simulations, and the threshold or by increasing the sample size for the experiments and simulations. The threshold approach can be applied to all types of computational models and provides an objective way of determining model credibility and for evaluating medical devices. PMID:28594889
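A minimal sketch of the comparison described above, a Student's t-test between the comparison error |S − D| and the margin to the safety threshold |Threshold − S| at matched measurement locations, is given below. Whether the test is paired or independent, one- or two-sided, is not specified in the abstract, so the paired one-sided form here is an assumption.

```python
import numpy as np
from scipy import stats

def threshold_based_validation(sim, exp, threshold, alpha=0.05):
    """Compare the comparison error |S - D| with the margin to the safety
    threshold |Threshold - S| at matched locations (paired, one-sided test)."""
    sim, exp = np.asarray(sim, float), np.asarray(exp, float)
    comparison_error = np.abs(sim - exp)          # |S - D|
    margin = np.abs(threshold - sim)              # |Threshold - S|
    t_stat, p_two_sided = stats.ttest_rel(comparison_error, margin)
    # one-sided question: is |S - D| significantly smaller than |Threshold - S|?
    p_one_sided = p_two_sided / 2 if t_stat < 0 else 1 - p_two_sided / 2
    validated = p_one_sided < alpha
    return validated, t_stat, p_one_sided
```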
Paradigm shift in lead design.
Irnich, W
1999-09-01
During the past 30 years there has been a tremendous development in electrode technology from bulky (90 mm2) to pin-sized (1.0 mm2) electrodes. Simultaneously, impedance has increased from 110 Ohms to >1 kOhms, which has been termed a "paradigm shift" in lead design. If current is responsible for stimulation, why is its impedance a key factor in saving energy? Further, what mechanism is behind this development based on experimental findings and what conclusion can be drawn from it to optimize electrode size? If it is assumed that there is always a layer of nonexcitable tissue between the electrode surface and excitable myocardium and that the electric field (potential gradient) produced by the electrode at this boundary is reaching threshold level, then a formula can be derived for the voltage threshold that completely describes the electrophysiology and electrophysics of a hemispherical electrode. Assuming that the mean chronic threshold for porous steroid-eluting electrodes is 0.6 V with 0.5-ms pulse duration, thickness of nonexcitable tissue can be estimated to be 1.5 mm. Taking into account this measure and the relationship between chronaxie and electrode area, voltage threshold, impedance, and energy as a function of surface area can be calculated. The lowest voltage for 0.5-ms pulse duration is reached with r(o) = 0.5 d, yielding a surface area of 4 mm2 and a voltage threshold of 0.62 V, an impedance of 1 kOhms, and an energy level of 197 nJ. It can be deduced from our findings that a further reduction of surface areas below 1.6 mm2 will not diminish energy threshold substantially, if pulse duration remains at 0.5 ms. Lowest energy is reached with t = chronaxie, yielding an energy level <100 nJ with surface areas < or =1.5 mm2. It is striking to see how well the theoretically derived results correspond to the experimental findings. It is also surprising that the hemispheric model so accurately approximates experimental results with differently shaped electrodes that it can be concluded that electrode shape seems to play a minor role in electrode efficiency. Further energy reduction can only be achieved by reducing the pulse duration to chronaxie. A real paradigm shift will occur only if the fundamentals of electrostimulation in combination with electrophysics are accepted by the pacing community.
Using Reanalysis Data for the Prediction of Seasonal Wind Turbine Power Losses Due to Icing
NASA Astrophysics Data System (ADS)
Burtch, D.; Mullendore, G. L.; Delene, D. J.; Storm, B.
2013-12-01
The Northern Plains region of the United States is home to a significant amount of potential wind energy. However, in winter months capturing this potential power is severely impacted by the meteorological conditions, in the form of icing. Predicting the expected loss in power production due to icing is a valuable parameter that can be used in wind turbine operations, determination of wind turbine site locations and long-term energy estimates which are used for financing purposes. Currently, losses due to icing must be estimated when developing predictions for turbine feasibility and financing studies, while icing maps, a tool commonly used in Europe, are lacking in the United States. This study uses the Modern-Era Retrospective Analysis for Research and Applications (MERRA) dataset in conjunction with turbine production data to investigate various methods of predicting seasonal losses (October-March) due to icing at two wind turbine sites located 121 km apart in North Dakota. The prediction of icing losses is based on temperature and relative humidity thresholds and is accomplished using three methods. For each of the three methods, the required atmospheric variables are determined in one of two ways: using industry-specific software to correlate anemometer data in conjunction with the MERRA dataset and using only the MERRA dataset for all variables. For each season, a percentage of the total expected generated power lost due to icing is determined and compared to observed losses from the production data. An optimization is performed in order to determine the relative humidity threshold that minimizes the difference between the predicted and observed values. Eight seasons of data are used to determine an optimal relative humidity threshold, and a further three seasons of data are used to test this threshold. Preliminary results have shown that the optimized relative humidity threshold for the northern turbine is higher than the southern turbine for all methods. For the three test seasons, the optimized thresholds tend to under-predict the icing losses. However, the threshold determined using boundary layer similarity theory most closely predicts the power losses due to icing versus the other methods. For the northern turbine, the average predicted power loss over the three seasons is 4.65 % while the observed power loss is 6.22 % (average difference of 1.57 %). For the southern turbine, the average predicted power loss and observed power loss over the same time period are 4.43 % and 6.16 %, respectively (average difference of 1.73 %). The three-year average, however, does not clearly capture the variability that exists season-to-season. On examination of each of the test seasons individually, the optimized relative humidity threshold methodology performs better than fixed power loss estimates commonly used in the wind energy industry.
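A rough sketch of the threshold optimization described above: flag an hour as iced when the temperature is at or below freezing and the relative humidity exceeds a candidate threshold, convert the flagged hours into a fraction of expected seasonal energy lost, and pick the threshold whose predicted loss best matches the observed loss. The loss model, variable names, and candidate grid are assumptions, not the study's exact formulation.

```python
import numpy as np

def predicted_icing_loss(temp_c, rel_hum, expected_power, rh_threshold, t_threshold=0.0):
    """Fraction of expected seasonal energy lost during hours flagged as icing."""
    temp_c, rel_hum = np.asarray(temp_c, float), np.asarray(rel_hum, float)
    expected_power = np.asarray(expected_power, float)
    iced = (temp_c <= t_threshold) & (rel_hum >= rh_threshold)
    return expected_power[iced].sum() / expected_power.sum()

def optimize_rh_threshold(temp_c, rel_hum, expected_power, observed_loss,
                          candidates=np.arange(80.0, 100.5, 0.5)):
    """Choose the RH threshold whose predicted loss best matches the observed loss."""
    errors = [abs(predicted_icing_loss(temp_c, rel_hum, expected_power, rh) - observed_loss)
              for rh in candidates]
    return candidates[int(np.argmin(errors))]
```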
Perez, Claudio A; Cohn, Theodore E; Medina, Leonel E; Donoso, José R
2007-08-31
Stochastic resonance (SR) is the counterintuitive phenomenon in which noise enhances detection of sub-threshold stimuli. The SR psychophysical threshold theory establishes that the required amplitude to exceed the sensory threshold barrier can be reached by adding noise to a sub-threshold stimulus. The aim of this study was to test the SR theory by comparing detection results from two different randomly-presented stimulus conditions. In the first condition, optimal noise was present during the whole attention interval; in the second, the optimal noise was restricted to the same time interval as the stimulus. SR threshold theory predicts no difference between the two conditions because noise helps the sub-threshold stimulus to reach threshold in both cases. The psychophysical experimental method used a 300 ms rectangular force pulse as a stimulus within an attention interval of 1.5 s, applied to the index finger of six human subjects in the two distinct conditions. For all subjects we show that in the condition in which the noise was present only when synchronized with the stimulus, detection was better (p<0.05) than in the condition in which the noise was delivered throughout the attention interval. These results provide the first direct evidence that SR threshold theory is incomplete and that a new phenomenon has been identified, which we call Coincidence-Enhanced Stochastic Resonance (CESR). We propose that CESR might occur because subject uncertainty is reduced when noise points at the same temporal window as the stimulus.
Sato, Atsushi; Shimizu, Yusaku; Koyama, Junichi; Hongo, Kazuhiro
2017-06-01
Tissue plasminogen activator (tPA) is effective for the treatment of acute brain ischemia, but may trigger fatal brain edema or hemorrhage if the brain ischemia results in a large infarct. Herein, we attempted to predict the extent of infarcts by determining the optimal threshold of ADC values on DWI that predictively distinguishes between infarct and reversible areas, and by reconstructing color-coded images based on this threshold. The study subjects consisted of 36 patients with acute brain ischemia in whom MRA had confirmed reopening of the occluded arteries in a short time (mean: 99 min) after tPA treatment. We measured the apparent diffusion coefficient (ADC) values in several small regions of interest over the white matter within high-intensity areas on the initial diffusion weighted image (DWI); then, by comparing the findings to the follow-up images, we obtained the optimal threshold of ADC values using receiver-operating characteristic analysis. The threshold obtained (583×10-6 mm2/s) was lower than those previously reported; this threshold could distinguish between infarct and reversible areas with considerable accuracy (sensitivity: 0.87, specificity: 0.94). The threshold obtained and the reconstructed images were predictive of the final radiological result of tPA treatment, and this threshold may be helpful in determining the appropriate management of patients with acute brain ischemia. Copyright © 2017 Elsevier Masson SAS. All rights reserved.
Defining ADHD symptom persistence in adulthood: optimizing sensitivity and specificity.
Sibley, Margaret H; Swanson, James M; Arnold, L Eugene; Hechtman, Lily T; Owens, Elizabeth B; Stehli, Annamarie; Abikoff, Howard; Hinshaw, Stephen P; Molina, Brooke S G; Mitchell, John T; Jensen, Peter S; Howard, Andrea L; Lakes, Kimberley D; Pelham, William E
2017-06-01
Longitudinal studies of children diagnosed with ADHD report widely ranging ADHD persistence rates in adulthood (5-75%). This study documents how information source (parent vs. self-report), method (rating scale vs. interview), and symptom threshold (DSM vs. norm-based) influence reported ADHD persistence rates in adulthood. Five hundred seventy-nine children were diagnosed with DSM-IV ADHD-Combined Type at baseline (ages 7.0-9.9 years); 289 classmates served as a local normative comparison group (LNCG); 476 and 241 of these, respectively, were evaluated in adulthood (mean age = 24.7). Parent and self-reports of symptoms and impairment on rating scales and structured interviews were used to investigate ADHD persistence in adulthood. Persistence rates were higher when using parent rather than self-reports, structured interviews rather than rating scales (for self-report but not parent report), and a norm-based (NB) threshold of 4 symptoms rather than DSM criteria. Receiver-operating characteristic (ROC) analyses revealed that sensitivity and specificity were optimized by combining parent and self-reports on a rating scale and applying an NB threshold. The interview format optimizes young adult self-reporting when parent reports are not available. However, the combination of parent and self-reports from rating scales, using an 'or' rule and an NB threshold, optimized the balance between sensitivity and specificity. With this definition, 60% of the ADHD group demonstrated symptom persistence and 41% met both symptom and impairment criteria in adulthood. © 2016 Association for Child and Adolescent Mental Health.
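The reported 'or' rule with a norm-based (NB) symptom threshold can be written down directly; the sketch below illustrates that combination rule only, with the field names and impairment handling as assumptions.

```python
def persistent_adhd(parent_symptoms, self_symptoms,
                    parent_impair=False, self_impair=False,
                    nb_threshold=4, require_impairment=False):
    """'Or' rule across informants with a norm-based symptom-count threshold."""
    symptomatic = (parent_symptoms >= nb_threshold) or (self_symptoms >= nb_threshold)
    if require_impairment:
        return symptomatic and (parent_impair or self_impair)
    return symptomatic

print(persistent_adhd(parent_symptoms=5, self_symptoms=2))   # True under the 'or' rule
```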
A framework for optimizing phytosanitary thresholds in seed systems
USDA-ARS?s Scientific Manuscript database
Seedborne pathogens and pests limit production in many agricultural systems. Quarantine programs help prevent the introduction of exotic pathogens into a country, but few regulations directly apply to reducing the reintroduction and spread of endemic pathogens. Use of phytosanitary thresholds helps ...
Kaur, Taranjit; Saini, Barjinder Singh; Gupta, Savita
2018-03-01
In the present paper, a hybrid multilevel thresholding technique that combines intuitionistic fuzzy sets and Tsallis entropy has been proposed for the automatic delineation of the tumor from magnetic resonance images having vague boundaries and poor contrast. This novel technique takes into account both the image histogram and the uncertainty information for the computation of multiple thresholds. The benefit of the methodology is that it provides fast and improved segmentation for complex tumorous images with imprecise gray levels. To further boost the computational speed, mutation-based particle swarm optimization is used to select the optimal threshold combination. The accuracy of the proposed segmentation approach has been validated on simulated and real low-grade glioma tumor volumes taken from the MICCAI brain tumor segmentation (BRATS) challenge 2012 dataset and on clinical tumor images, so as to corroborate its generality and novelty. The designed technique achieves an average Dice overlap equal to 0.82010, 0.78610 and 0.94170 for the three datasets. Further, a comparative analysis has also been made with eight existing multilevel thresholding implementations so as to show the superiority of the designed technique. The results indicate a mean improvement in Dice of 4.00% (p < 0.005), 9.60% (p < 0.005) and 3.58% (p < 0.005), respectively, relative to the fuzzy Tsallis approach.
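For context, a single-threshold version of Tsallis-entropy thresholding is sketched below: for each candidate gray level, the foreground and background Tsallis entropies are combined pseudo-additively and the level maximizing the sum is kept. The intuitionistic-fuzzy uncertainty term and the mutation-based particle swarm search used in the paper for multiple thresholds are not reproduced here, and the entropic index q is an illustrative choice.

```python
import numpy as np

def tsallis_threshold(image, q=0.8, nbins=256):
    """Single-threshold Tsallis-entropy segmentation of a grayscale image."""
    hist, edges = np.histogram(image.ravel(), bins=nbins)
    p = hist.astype(float) / hist.sum()
    best_t, best_s = 1, -np.inf
    for t in range(1, nbins - 1):
        pa, pb = p[:t].sum(), p[t:].sum()
        if pa <= 0 or pb <= 0:
            continue
        sa = (1.0 - np.sum((p[:t] / pa) ** q)) / (q - 1.0)   # background class entropy
        sb = (1.0 - np.sum((p[t:] / pb) ** q)) / (q - 1.0)   # foreground class entropy
        total = sa + sb + (1.0 - q) * sa * sb                # pseudo-additive combination
        if total > best_s:
            best_t, best_s = t, total
    return 0.5 * (edges[best_t] + edges[best_t + 1])         # threshold in image units
```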
NASA Technical Reports Server (NTRS)
James, G. K.; Slevin, J. A.; Shemansky, D. E.; McConkey, J. W.; Bray, I.; Dziczek, D.; Kanik, I.; Ajello, J. M.
1997-01-01
The optical excitation function of prompt Lyman-Alpha radiation, produced by electron impact on atomic hydrogen, has been measured over the extended energy range from threshold to 1.8 keV. Measurements were obtained in a crossed-beams experiment using both magnetically confined and electrostatically focused electrons in collision with atomic hydrogen produced by an intense discharge source. A vacuum-ultraviolet monochromator system was used to measure the emitted Lyman-Alpha radiation. The absolute H(1s-2p) electron impact excitation cross section was obtained from the experimental optical excitation function by normalizing to the accepted optical oscillator strength, with corrections for polarization and cascade. Statistical and known systematic uncertainties in our data range from +/- 4% near threshold to +/- 2% at 1.8 keV. Multistate coupling affecting the shape of the excitation function up to 1 keV impact energy is apparent in both the present experimental data and present theoretical results obtained with convergent close-coupling (CCC) theory. This shape function effect leads to an uncertainty in absolute cross sections at the 10% level in the analysis of the experimental data. The derived optimized absolute cross sections are within 7% of the CCC calculations over the 14 eV-1.8 keV range. The present CCC calculations converge on the Bethe-Fano profile for H(1s-2p) excitation at high energy. For this reason agreement with the CCC values to within 3% is achieved in a nonoptimal normalization of the experimental data to the Bethe-Fano profile. The fundamental H(1s-2p) electron impact cross section is thereby determined to an unprecedented accuracy over the 14 eV-1.8 keV energy range.
Lane change warning threshold based on driver perception characteristics.
Wang, Chang; Sun, Qinyu; Fu, Rui; Li, Zhen; Zhang, Qiong
2018-08-01
A lane change warning (LCW) system is intended to alleviate driver workload and improve the safety of lane changes. Depending on its warning threshold, the system issues cautions to drivers. Although the system offers substantial benefits, it may disturb the driver's normal operation and impair driver judgment if the warning threshold does not conform to the driver's perception of safety. It is therefore essential to establish an appropriate warning threshold to enhance the accuracy and acceptability of the lane change warning system. This research aims to identify the threshold that conforms to the driver's perception of the ability to change lanes safely with a rear vehicle approaching fast. We propose a theoretical lane change warning model based on a safe minimum distance and the deceleration of the rear vehicle. To capture different safety levels of lane changes, 30 licensed drivers were recruited, and the extreme moments reflecting driver perception characteristics were obtained from a Front Extremity Test and a Rear Extremity Test conducted on the freeway. The required deceleration of the rear vehicle corresponding to the extreme time was calculated according to the proposed model. In light of the discrepancies in deceleration between these extremity experiments, we determine two levels of a hierarchical warning system. The primary warning reminds drivers of the existence of potentially dangerous vehicles, and the second warning tells the driver to stop changing lanes immediately. Signal detection theory was used to analyze the data. Ultimately, we confirm that the first deceleration threshold is 1.5 m/s2 and the second deceleration threshold is 2.7 m/s2. The findings provide a basis for the algorithm design of LCW systems and enhance the acceptability of the intelligent system. Copyright © 2018 Elsevier Ltd. All rights reserved.
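As a hedged illustration of the kind of model described above, the sketch below computes the constant deceleration the approaching rear vehicle would need so that the gap never falls below a safe minimum distance while it slows to the lane-changing vehicle's speed, and maps that value onto the two reported warning levels. The kinematic form and the minimum-distance parameter are generic assumptions, not the authors' exact model.

```python
def required_rear_deceleration(v_rear, v_lead, gap, d_min=5.0):
    """Constant deceleration (m/s2) the approaching rear vehicle needs so the
    gap never shrinks below d_min while it slows to the lane changer's speed."""
    closing_speed = max(v_rear - v_lead, 0.0)
    usable_gap = gap - d_min
    if closing_speed == 0.0:
        return 0.0
    if usable_gap <= 0.0:
        return float("inf")                 # lane change cannot be made safely
    return closing_speed ** 2 / (2.0 * usable_gap)

# warn if the required deceleration exceeds the calibrated thresholds
a_req = required_rear_deceleration(v_rear=30.0, v_lead=25.0, gap=20.0)
level = 2 if a_req > 2.7 else (1 if a_req > 1.5 else 0)
print(a_req, level)    # ~0.83 m/s2 -> no warning in this toy case
```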
NASA Astrophysics Data System (ADS)
Garg, Ishita; Karwoski, Ronald A.; Camp, Jon J.; Bartholmai, Brian J.; Robb, Richard A.
2005-04-01
Chronic obstructive pulmonary diseases (COPD) are debilitating conditions of the lung and are the fourth leading cause of death in the United States. Early diagnosis is critical for timely intervention and effective treatment. The ability to quantify particular imaging features of specific pathology and accurately assess progression or response to treatment with current imaging tools is relatively poor. The goal of this project was to develop automated segmentation techniques that would be clinically useful as computer assisted diagnostic tools for COPD. The lungs were segmented using an optimized segmentation threshold and the trachea was segmented using a fixed threshold characteristic of air. The segmented images were smoothed by a morphological close operation using spherical elements of different sizes. The results were compared to other segmentation approaches using an optimized threshold to segment the trachea. Comparison of the segmentation results from 10 datasets showed that the method of trachea segmentation using a fixed air threshold followed by morphological closing with spherical element of size 23x23x5 yielded the best results. Inclusion of greater number of pulmonary vessels in the lung volume is important for the development of computer assisted diagnostic tools because the physiological changes of COPD can result in quantifiable anatomic changes in pulmonary vessels. Using a fixed threshold to segment the trachea removed airways from the lungs to a better extent as compared to using an optimized threshold. Preliminary measurements gathered from patients' CT scans suggest that segmented images can be used for accurate analysis of total lung volume and volumes of regional lung parenchyma. Additionally, reproducible segmentation allows for quantification of specific pathologic features, such as lower intensity pixels, which are characteristic of abnormal air spaces in diseases like emphysema.
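The core of the described pipeline, a fixed air threshold followed by morphological closing with a 23x23x5 element, can be sketched as follows. The -950 HU cut-off, the ellipsoidal element construction and the stand-in volume are assumptions for illustration; only the element size is taken from the abstract.

```python
# Minimal sketch of the thresholding and morphological-closing step, assuming the
# CT volume is in Hounsfield units. The -950 HU air cut-off and the ellipsoid
# construction are assumptions; only the 23x23x5 element size is from the abstract.
import numpy as np
from scipy import ndimage

def ellipsoid_element(nx=23, ny=23, nz=5):
    # Binary ellipsoidal structuring element of size nx x ny x nz.
    x, y, z = np.ogrid[-1:1:nx * 1j, -1:1:ny * 1j, -1:1:nz * 1j]
    return (x ** 2 + y ** 2 + z ** 2) <= 1.0

def segment_airways(ct_hu, air_threshold=-950):
    mask = ct_hu < air_threshold                       # fixed threshold characteristic of air
    return ndimage.binary_closing(mask, structure=ellipsoid_element())

ct = np.random.randint(-1000, 100, size=(48, 48, 24))  # stand-in CT volume
airways = segment_airways(ct)
print(airways.shape, int(airways.sum()))
```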
Acceptable regret in medical decision making.
Djulbegovic, B; Hozo, I; Schwartz, A; McMasters, K M
1999-09-01
When faced with medical decisions involving uncertain outcomes, the principles of decision theory hold that we should select the option with the highest expected utility to maximize health over time. Whether a decision proves right or wrong can be learned only in retrospect, when it may become apparent that another course of action would have been preferable. This realization may bring a sense of loss, or regret. When anticipated regret is compelling, a decision maker may choose to violate expected utility theory to avoid regret. We formulate a concept of acceptable regret in medical decision making that explicitly introduces the patient's attitude toward loss of health due to a mistaken decision into decision making. In most cases, minimizing expected regret results in the same decision as maximizing expected utility. However, when acceptable regret is taken into consideration, the threshold probability below which we can comfortably withhold treatment is a function only of the net benefit of the treatment, and the threshold probability above which we can comfortably administer the treatment depends only on the magnitude of the risks associated with the therapy. By considering acceptable regret, we develop new conceptual relations that can help decide whether treatment should be withheld or administered, especially when the diagnosis is uncertain. This may be particularly beneficial in deciding what constitutes futile medical care.
Why does society accept a higher risk for alcohol than for other voluntary or involuntary risks?
Rehm, Jürgen; Lachenmeier, Dirk W; Room, Robin
2014-10-21
Societies tend to accept much higher risks for voluntary behaviours, those based on individual decisions (for example, to smoke, to consume alcohol, or to ski), than for involuntary exposure such as exposure to risks in soil, drinking water or air. In high-income societies, an acceptable risk to those voluntarily engaging in a risky behaviour seems to be about one death in 1,000 on a lifetime basis. However, drinking more than 20 g pure alcohol per day over an adult lifetime exceeds a threshold of one in 100 deaths, based on a calculation from World Health Organization data of the odds in six European countries of dying from alcohol-attributable causes at different levels of drinking. The voluntary mortality risk of alcohol consumption exceeds the risks of other lifestyle risk factors. In addition, evidence shows that the involuntary risks resulting from customary alcohol consumption far exceed the acceptable threshold for other involuntary risks (such as those established by the World Health Organization or national environmental agencies), and would be judged as not acceptable. Alcohol's exceptional status reflects vagaries of history, which have so far resulted in alcohol being exempted from key food legislation (no labelling of ingredients and nutritional information) and from international conventions governing all other psychoactive substances (both legal and illegal). This is along with special treatment of alcohol in the public health field, in part reflecting overestimation of its beneficial effect on ischaemic disease when consumed in moderation. A much higher mortality risk from alcohol than from other risk factors is currently accepted by high income countries.
Damian, Anne M; Jacobson, Sandra A; Hentz, Joseph G; Belden, Christine M; Shill, Holly A; Sabbagh, Marwan N; Caviness, John N; Adler, Charles H
2011-01-01
To perform an item analysis of the Montreal Cognitive Assessment (MoCA) versus the Mini-Mental State Examination (MMSE) in the prediction of cognitive impairment, and to examine the characteristics of different MoCA threshold scores. 135 subjects enrolled in a longitudinal clinicopathologic study were administered the MoCA by a single physician and the MMSE by a trained research assistant. Subjects were classified as cognitively impaired or cognitively normal based on independent neuropsychological testing. 89 subjects were found to be cognitively normal, and 46 cognitively impaired (20 with dementia, 26 with mild cognitive impairment). The MoCA was superior in both sensitivity and specificity to the MMSE, although not all MoCA tasks were of equal predictive value. A MoCA threshold score of 26 had a sensitivity of 98% and a specificity of 52% in this population. In a population with a 20% prevalence of cognitive impairment, a threshold of 24 was optimal (negative predictive value 96%, positive predictive value 47%). This analysis suggests the potential for creating an abbreviated MoCA. For screening in primary care, the MoCA threshold of 26 appears optimal. For testing in a memory disorders clinic, a lower threshold has better predictive value. Copyright © 2011 S. Karger AG, Basel.
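The predictive values quoted above follow from the usual Bayes calculation linking sensitivity, specificity and prevalence. The sketch below shows that calculation; note that it uses the threshold-26 sensitivity and specificity purely as an example, since the abstract does not report the corresponding values for the threshold of 24.

```python
# Standard Bayes calculation linking sensitivity, specificity and prevalence to
# predictive values. The 0.98/0.52 pair is the threshold-26 result quoted above;
# the abstract does not give the corresponding pair for the threshold of 24.
def predictive_values(sens, spec, prevalence):
    tp = sens * prevalence
    fn = (1 - sens) * prevalence
    fp = (1 - spec) * (1 - prevalence)
    tn = spec * (1 - prevalence)
    return tp / (tp + fp), tn / (tn + fn)   # (PPV, NPV)

ppv, npv = predictive_values(sens=0.98, spec=0.52, prevalence=0.20)
print(f"PPV = {ppv:.2f}, NPV = {npv:.2f}")
```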
Gauging the likelihood of stable cavitation from ultrasound contrast agents
NASA Astrophysics Data System (ADS)
Bader, Kenneth B.; Holland, Christy K.
2013-01-01
The mechanical index (MI) was formulated to gauge the likelihood of adverse bioeffects from inertial cavitation. However, the MI formulation did not consider bubble activity from stable cavitation. This type of bubble activity can be readily nucleated from ultrasound contrast agents (UCAs) and has the potential to promote beneficial bioeffects. Here, the presence of stable cavitation is determined numerically by tracking the onset of subharmonic oscillations within a population of bubbles for frequencies up to 7 MHz and peak rarefactional pressures up to 3 MPa. In addition, the acoustic pressure rupture threshold of an UCA population was determined using the Marmottant model. The threshold for subharmonic emissions of optimally sized bubbles was found to be lower than the inertial cavitation threshold for all frequencies studied. The rupture thresholds of optimally sized UCAs were found to be lower than the threshold for subharmonic emissions for either single cycle or steady state acoustic excitations. Because the thresholds of both subharmonic emissions and UCA rupture are linearly dependent on frequency, an index of the form ICAV = Pr/f (where Pr is the peak rarefactional pressure in MPa and f is the frequency in MHz) was derived to gauge the likelihood of subharmonic emissions due to stable cavitation activity nucleated from UCAs.
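For comparison with the familiar mechanical index, the proposed index is a one-line calculation; the example values below are arbitrary, and only the two formulas (ICAV = Pr/f from the abstract, and the standard MI = Pr/sqrt(f)) are assumed.

```python
# Numeric illustration of ICAV = Pr/f (from the abstract) next to the standard
# mechanical index MI = Pr/sqrt(f); Pr in MPa, f in MHz, example values arbitrary.
from math import sqrt

def mechanical_index(pr_mpa, f_mhz):
    return pr_mpa / sqrt(f_mhz)

def cavitation_index(pr_mpa, f_mhz):
    return pr_mpa / f_mhz

for pr, f in [(0.5, 1.0), (1.0, 2.0), (3.0, 7.0)]:
    print(f"Pr = {pr} MPa, f = {f} MHz: MI = {mechanical_index(pr, f):.2f}, "
          f"ICAV = {cavitation_index(pr, f):.2f}")
```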
Gauging the likelihood of stable cavitation from ultrasound contrast agents.
Bader, Kenneth B; Holland, Christy K
2013-01-07
The mechanical index (MI) was formulated to gauge the likelihood of adverse bioeffects from inertial cavitation. However, the MI formulation did not consider bubble activity from stable cavitation. This type of bubble activity can be readily nucleated from ultrasound contrast agents (UCAs) and has the potential to promote beneficial bioeffects. Here, the presence of stable cavitation is determined numerically by tracking the onset of subharmonic oscillations within a population of bubbles for frequencies up to 7 MHz and peak rarefactional pressures up to 3 MPa. In addition, the acoustic pressure rupture threshold of an UCA population was determined using the Marmottant model. The threshold for subharmonic emissions of optimally sized bubbles was found to be lower than the inertial cavitation threshold for all frequencies studied. The rupture thresholds of optimally sized UCAs were found to be lower than the threshold for subharmonic emissions for either single cycle or steady state acoustic excitations. Because the thresholds of both subharmonic emissions and UCA rupture are linearly dependent on frequency, an index of the form I(CAV) = P(r)/f (where P(r) is the peak rarefactional pressure in MPa and f is the frequency in MHz) was derived to gauge the likelihood of subharmonic emissions due to stable cavitation activity nucleated from UCAs.
Gauging the likelihood of stable cavitation from ultrasound contrast agents
Bader, Kenneth B; Holland, Christy K
2015-01-01
The mechanical index (MI) was formulated to gauge the likelihood of adverse bioeffects from inertial cavitation. However, the MI formulation did not consider bubble activity from stable cavitation. This type of bubble activity can be readily nucleated from ultrasound contrast agents (UCAs) and has the potential to promote beneficial bioeffects. Here, the presence of stable cavitation is determined numerically by tracking the onset of subharmonic oscillations within a population of bubbles for frequencies up to 7 MHz and peak rarefactional pressures up to 3 MPa. In addition, the acoustic pressure rupture threshold of an UCA population was determined using the Marmottant model. The threshold for subharmonic emissions of optimally sized bubbles was found to be lower than the inertial cavitation threshold for all frequencies studied. The rupture thresholds of optimally sized UCAs were found to be lower than the threshold for subharmonic emissions for either single cycle or steady state acoustic excitations. Because the thresholds of both subharmonic emissions and UCA rupture are linearly dependent on frequency, an index of the form ICAV = Pr/f (where Pr is the peak rarefactional pressure in MPa and f is the frequency in MHz) was derived to gauge the likelihood of subharmonic emissions due to stable cavitation activity nucleated from UCAs. PMID:23221109
SU-C-9A-01: Parameter Optimization in Adaptive Region-Growing for Tumor Segmentation in PET
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tan, S; Huazhong University of Science and Technology, Wuhan, Hubei; Xue, M
Purpose: To design a reliable method to determine the optimal parameter in the adaptive region-growing (ARG) algorithm for tumor segmentation in PET. Methods: The ARG uses an adaptive similarity criterion m - fσ ≤ I_PET ≤ m + fσ, so that a neighboring voxel is appended to the region based on its similarity to the current region. When increasing the relaxing factor f (f ≥ 0), the resulting volumes monotonically increased with a sharp increase when the region just grew into the background. The optimal f that separates the tumor from the background is defined as the first point with the local maximum curvature on an Error function fitted to the f-volume curve. The ARG was tested on a tumor segmentation Benchmark that includes ten lung cancer patients with 3D pathologic tumor volume as ground truth. For comparison, the widely used 42% and 50% SUVmax thresholding, Otsu optimal thresholding, Active Contours (AC), Geodesic Active Contours (GAC), and Graph Cuts (GC) methods were tested. The Dice similarity index (DSI), volume error (VE), and maximum axis length error (MALE) were calculated to evaluate the segmentation accuracy. Results: The ARG provided the highest accuracy among all tested methods. Specifically, the ARG has an average DSI, VE, and MALE of 0.71, 0.29, and 0.16, respectively, better than the absolute 42% thresholding (DSI=0.67, VE=0.57, and MALE=0.23), the relative 42% thresholding (DSI=0.62, VE=0.41, and MALE=0.23), the absolute 50% thresholding (DSI=0.62, VE=0.48, and MALE=0.21), the relative 50% thresholding (DSI=0.48, VE=0.54, and MALE=0.26), OTSU (DSI=0.44, VE=0.63, and MALE=0.30), AC (DSI=0.46, VE=0.85, and MALE=0.47), GAC (DSI=0.40, VE=0.85, and MALE=0.46) and GC (DSI=0.66, VE=0.54, and MALE=0.21) methods. Conclusions: The results suggest that the proposed method reliably identified the optimal relaxing factor in ARG for tumor segmentation in PET. This work was supported in part by National Cancer Institute Grant R01 CA172638; the dataset is provided by AAPM TG211.
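A minimal sketch of the parameter-selection step, fitting an error-function model to an f-volume curve and taking the first local maximum of its curvature as the optimal relaxing factor, might look as follows; the synthetic data and the particular parametrisation are assumptions for illustration.

```python
# Sketch of selecting the relaxing factor f: fit an error-function model to the
# f-volume curve and take the first local maximum of its curvature. The synthetic
# volumes and the parametrisation below are assumptions for illustration.
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

def vol_model(f, a, b, mu, s):
    return a * erf((f - mu) / s) + b

f = np.linspace(0.0, 4.0, 81)
v_meas = vol_model(f, 40.0, 50.0, 2.0, 0.3) + np.random.normal(0.0, 0.5, f.size)

popt, _ = curve_fit(vol_model, f, v_meas, p0=[30.0, 40.0, 1.5, 0.5])
v = vol_model(f, *popt)
dv = np.gradient(v, f)
d2v = np.gradient(dv, f)
curvature = np.abs(d2v) / (1.0 + dv ** 2) ** 1.5

# first local maximum of the curvature along increasing f
i_opt = next(i for i in range(1, len(f) - 1)
             if curvature[i] > curvature[i - 1] and curvature[i] >= curvature[i + 1])
print("optimal relaxing factor f* ~", round(float(f[i_opt]), 2))
```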
40 CFR 30.44 - Procurement procedures.
Code of Federal Regulations, 2010 CFR
2010-07-01
... characteristics or minimum acceptable standards. (iv) The specific features of “brand name or equal” descriptions..., specifies a “brand name” product. (4) The proposed award over the small purchase threshold is to be awarded...
Dose-response studies and 'no-effect-levels' of N-nitroso compounds: some general aspects.
Preussmann, R
1980-01-01
One major problem in the evaluation of potential carcinogenic food additives and contaminants is that of thresholds or, better, of 'no-adverse-effect-levels'. Arguments in favor of the postulated 'irreversibility' of carcinogenic effects are based on dose-response studies, single dose and multigeneration experiments as well as on the concept of somatic mutation as the first step in carcinogenesis with subsequent transmittance of induced defects during cell replication. The problem of extrapolation of results of animal experiments using high doses to low exposure and low incidences in man is not yet solved satisfactorily. Possible practical consequences include zero tolerance, acceptable thresholds at low risk and safety factors. Acceptable intakes should never be considered constants but should be changeable as soon as new facts in regard to the safety evaluation are available.
Rodríguez Barrios, José Manuel; Pérez Alcántara, Ferran; Crespo Palomo, Carlos; González García, Paloma; Antón De Las Heras, Enrique; Brosa Riestra, Max
2012-12-01
The objective of this study was to evaluate the methodological characteristics of cost-effectiveness evaluations carried out in Spain, since 1990, which include LYG as an outcome to measure the incremental cost-effectiveness ratio. A systematic review of published studies was conducted describing their characteristics and methodological quality. We analyse the cost per LYG results in relation with a commonly accepted Spanish cost-effectiveness threshold and the possible relation with the cost per quality adjusted life year (QALY) gained when they both were calculated for the same economic evaluation. A total of 62 economic evaluations fulfilled the selection criteria, 24 of them including the cost per QALY gained result as well. The methodological quality of the studies was good (55%) or very good (26%). A total of 124 cost per LYG results were obtained with a mean ratio of 49,529
Shen, Jing; Hu, Yanyun; Liu, Fang; Zeng, Hui; Li, Lianxi; Zhao, Jun; Zhao, Jungong; Zheng, Taishan; Lu, Huijuan; Lu, Fengdi; Bao, Yuqian; Jia, Weiping
2013-10-01
We investigated the relationship between vibration perception threshold and diabetic retinopathy and verified the screening value of vibration perception threshold for severe diabetic retinopathy. A total of 955 patients with type 2 diabetes were recruited and divided into three groups according to their fundus oculi photography results: no diabetic retinopathy (n = 654, 68.48%), non-sight-threatening diabetic retinopathy (n = 189, 19.79%) and sight-threatening diabetic retinopathy (n = 112, 11.73%). Their clinical and biochemical characteristics, vibration perception threshold and the diabetic retinopathy grades were detected and compared. There were significant differences in diabetes duration and blood glucose levels among the three groups (all p < 0.05). The values of vibration perception threshold increased with the rising severity of retinopathy, and the vibration perception threshold level of the sight-threatening diabetic retinopathy group was significantly higher than in both the non-sight-threatening diabetic retinopathy and no diabetic retinopathy groups (both p < 0.01). The prevalence of sight-threatening diabetic retinopathy in the vibration perception threshold >25 V group was significantly higher than that in the 16-24 V group (p < 0.01). The severity of diabetic retinopathy was positively associated with diabetes duration, blood glucose indexes and vibration perception threshold (all p < 0.01). Multiple stepwise regression analysis proved that glycosylated haemoglobin (β = 0.385, p = 0.000), diabetes duration (β = 0.275, p = 0.000) and vibration perception threshold (β = 0.180, p = 0.015) were independent risk factors for diabetic retinopathy. Receiver operating characteristic analysis further revealed that a vibration perception threshold higher than 18 V was the optimal cut point for reflecting high risk of sight-threatening diabetic retinopathy (odds ratio = 4.20, 95% confidence interval = 2.67-6.59). There was a close association between vibration perception threshold and the severity of diabetic retinopathy. Vibration perception threshold was a potential screening method for diabetic retinopathy, and its optimal cut-off for prompting high risk of sight-threatening retinopathy was 18 V. Copyright © 2013 John Wiley & Sons, Ltd.
van Rhoon, Gerard C; Samaras, Theodoros; Yarmolenko, Pavel S; Dewhirst, Mark W; Neufeld, Esra; Kuster, Niels
2013-08-01
To define thresholds of safe local temperature increases for MR equipment that exposes patients to radiofrequency fields of high intensities for long duration. These MR systems induce heterogeneous energy absorption patterns inside the body and can create localised hotspots with a risk of overheating. The MRI + EUREKA research consortium organised a "Thermal Workshop on RF Hotspots". The available literature on thresholds for thermal damage and the validity of the thermal dose (TD) model were discussed. The following global TD threshold guidelines for safe use of MR are proposed:
1. All persons: maximum local temperature of any tissue limited to 39 °C.
2. Persons with compromised thermoregulation and (a) uncontrolled conditions: maximum local temperature limited to 39 °C; (b) controlled conditions: TD < 2 CEM43°C.
3. Persons with uncompromised thermoregulation and (a) uncontrolled conditions: TD < 2 CEM43°C; (b) controlled conditions: TD < 9 CEM43°C.
The following definitions are applied: controlled conditions, a medical doctor or a dedicated trained person can respond instantly to heat-induced physiological stress; compromised thermoregulation, all persons with impaired systemic or reduced local thermoregulation.
• Standard MRI can cause local heating by radiofrequency absorption. • Monitoring thermal dose (in units of CEM43°C) can control risk during MRI. • 9 CEM43°C seems an acceptable thermal dose threshold for most patients. • For skin, muscle, fat and bone, 16 CEM43°C is likely acceptable.
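The thermal dose unit used in these guidelines, cumulative equivalent minutes at 43 °C (CEM43), can be computed from a local temperature history with the standard Sapareto-Dewey formulation, sketched below with an invented temperature trace.

```python
# Standard Sapareto-Dewey CEM43 (cumulative equivalent minutes at 43 °C) thermal
# dose, the unit used in the guidelines above; the temperature trace is invented.
def cem43(temps_c, dt_min):
    """temps_c: local tissue temperatures (°C) sampled every dt_min minutes."""
    dose = 0.0
    for t in temps_c:
        r = 0.5 if t >= 43.0 else 0.25
        dose += dt_min * r ** (43.0 - t)
    return dose

trace = [37.0, 38.5, 39.0, 39.5, 40.0, 39.0]   # hypothetical one-minute samples
dose = cem43(trace, dt_min=1.0)
print(f"thermal dose = {dose:.3f} CEM43°C; within the 2 CEM43°C limit: {dose < 2.0}")
```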
Optimizing the motion of a folding molecular motor in soft matter.
Rajonson, Gabriel; Ciobotarescu, Simona; Teboul, Victor
2018-04-18
We use molecular dynamics simulations to investigate the displacement of a periodically folding molecular motor in a viscous environment. Our aim is to find significant parameters to optimize the displacement of the motor. We find that the choice of a massy host or of small host molecules significantly increases the motor displacements. In the same environment, the motor moves with hopping, solid-like motions while the host moves with diffusive, liquid-like motions, a result that originates from the motor's larger size. Due to the hopping motions, there are thresholds on the force necessary for the motor to reach stable positions in the medium. These force thresholds result in a threshold in the size of the motor needed to induce a significant displacement, which is followed by plateaus in the motor displacement.
Watanabe, Ayumi; Inoue, Yusuke; Asano, Yuji; Kikuchi, Kei; Miyatake, Hiroki; Tokushige, Takanobu
2017-01-01
The specific binding ratio (SBR) was first reported by Tossici-Bolt et al. as a quantitative indicator for dopamine transporter (DAT) imaging. It is defined as the ratio of the specific binding concentration of the striatum to the non-specific binding concentration of the whole brain other than the striatum. The non-specific binding concentration is calculated from a region of interest (ROI) set 20 mm inside the outer contour, which is defined by a threshold technique. Tossici-Bolt et al. used a 50% threshold, but we could not always define the ROI of the non-specific binding concentration (the reference region) and calculate the SBR appropriately with a 50% threshold. Therefore, we sought a new method for determining the reference region when calculating the SBR. We used data from 20 patients who had undergone DAT imaging in our hospital and calculated the non-specific binding concentration by two methods: fixing the threshold that defines the reference region at specific values (the fixing method), and visually optimizing the reference region at every examination (the visual optimization method). First, we assessed the reference region of each method visually; afterward, we quantitatively compared the SBR calculated with each method. In the visual assessment, the scores of the fixing method at 30% and of the visual optimization method were higher than the scores of the fixing method at other values, with or without scatter correction. In the quantitative assessment, the SBR obtained by visual optimization of the reference region, based on the consensus of three radiological technologists, was used as a baseline (the standard method). The SBR values showed good agreement between the standard method and both the fixing method at 30% and the visual optimization method, with or without scatter correction. Therefore, the fixing method at 30% and the visual optimization method were equally suitable for determining the reference region.
Oberle, Eva; Schonert-Reichl, Kimberly A; Thomson, Kimberly C
2010-11-01
Past studies have investigated relationships between peer acceptance and peer-rated social behaviors. However, relatively little is known about the manner in which indices of well-being such as optimism and positive affect may predict peer acceptance above and beyond peer ratings of antisocial and prosocial behaviors. Early adolescence, roughly between the ages of 9 and 14, is a time in the life span in which individuals undergo a myriad of changes at many different levels, such as changes due to cognitive development, pubertal development, and social role redefinitions. The present study investigated the relationship of self-reported affective empathy, optimism, anxiety (trait measures), and positive affect (state measure) to peer-reported peer acceptance in 99 (43% girls) 4th and 5th grade early adolescents. Because our preliminary analyses revealed gender-specific patterns, hierarchical regression analyses were conducted to investigate the predictors of peer acceptance separately for boys and for girls. Girls' acceptance of peers was significantly predicted by higher levels of empathy and optimism, and lower positive affect. For boys, higher positive affect, lower empathy, and lower anxiety significantly predicted peer acceptance. The results emphasize the importance of including indices of social and emotional well-being in addition to peer ratings in understanding peer acceptance in early adolescence, and call for more research on gender-specific patterns of peer acceptance.
Psychological Resources and Self-rated Health Status on Fifty-year-old Women
2015-01-01
Objectives: The aim of the study is to expand knowledge about predictors of self-rated health and mental health in fifty-year-old women. The study explores links between self-rated health/mental health and optimism, self-esteem, acceptance of the changes in physical look, and some sociodemographic factors. Methods: Participants in this study were 209 women aged 50 to 59. Single-item measures of self-rated health and mental health were used. Self-esteem was measured with the Rosenberg Self-Esteem Scale; optimism with the OPEB questionnaire; acceptance of the changes in physical look was rated by respondents on a seven-point scale. Participants were also asked about weight loss attempts, the amount of leisure time, and going on vacation during the last year. Results: Predictors of self-rated mental health in women aged 50 to 59 were acceptance of the changes in physical look, self-esteem and optimism. Predictors of self-rated health were optimism and acceptance of the changes in physical look. Conclusion: Optimism and acceptance of the changes in physical look seem to be important factors that may affect the subjective physical and mental health of women in their 50s. The role of leisure time and vacation in promoting subjective health requires further investigation. PMID:26793678
Holman, Benjamin W B; Mao, Yanwei; Coombs, Cassius E O; van de Ven, Remy J; Hopkins, David L
2016-11-01
The relationship between instrumental colorimetric values (L*, a*, b*, the ratio of reflectance at 630nm and 580nm) and consumer perception of acceptable beef colour was evaluated using a web-based survey and standardised photographs of beef m. longissimus lumborum with known colorimetrics. Only L* and b* were found to relate to average consumer opinions of beef colour acceptability. Respondent nationality was also identified as a source of variation in beef colour acceptability score. Although this is a preliminary study with the findings necessitating additional investigation, these results suggest L* and b* as candidates for developing instrumental thresholds for consumer beef colour expectations. Crown Copyright © 2016. Published by Elsevier Ltd. All rights reserved.
Le Prell, Colleen G; Brungart, Douglas S
2016-09-01
In humans, the accepted clinical standards for detecting hearing loss are the behavioral audiogram, based on the absolute detection threshold of pure-tones, and the threshold auditory brainstem response (ABR). The audiogram and the threshold ABR are reliable and sensitive measures of hearing thresholds in human listeners. However, recent results from noise-exposed animals demonstrate that noise exposure can cause substantial neurodegeneration in the peripheral auditory system without degrading pure-tone audiometric thresholds. It has been suggested that clinical measures of auditory performance conducted with stimuli presented above the detection threshold may be more sensitive than the behavioral audiogram in detecting early-stage noise-induced hearing loss in listeners with audiometric thresholds within normal limits. Supra-threshold speech-in-noise testing and supra-threshold ABR responses are reviewed here, given that they may be useful supplements to the behavioral audiogram for assessment of possible neurodegeneration in noise-exposed listeners. Supra-threshold tests may be useful for assessing the effects of noise on the human inner ear, and the effectiveness of interventions designed to prevent noise trauma. The current state of the science does not necessarily allow us to define a single set of best practice protocols. Nonetheless, we encourage investigators to incorporate these metrics into test batteries when feasible, with an effort to standardize procedures to the greatest extent possible as new reports emerge.
Rejection thresholds in solid chocolate-flavored compound coating.
Harwood, Meriel L; Ziegler, Gregory R; Hayes, John E
2012-10-01
Classical detection thresholds do not predict liking, as they focus on the presence or absence of a sensation. Recently however, Prescott and colleagues described a new method, the rejection threshold, where a series of forced choice preference tasks are used to generate a dose-response function to determine hedonically acceptable concentrations. That is, how much is too much? To date, this approach has been used exclusively in liquid foods. Here, we determined group rejection thresholds in solid chocolate-flavored compound coating for bitterness. The influences of self-identified preferences for milk or dark chocolate, as well as eating style (chewers compared to melters) on rejection thresholds were investigated. Stimuli included milk chocolate-flavored compound coating spiked with increasing amounts of sucrose octaacetate, a bitter and generally recognized as safe additive. Paired preference tests (blank compared to spike) were used to determine the proportion of the group that preferred the blank. Across pairs, spiked samples were presented in ascending concentration. We were able to quantify and compare differences between 2 self-identified market segments. The rejection threshold for the dark chocolate preferring group was significantly higher than the milk chocolate preferring group (P= 0.01). Conversely, eating style did not affect group rejection thresholds (P= 0.14), although this may reflect the amount of chocolate given to participants. Additionally, there was no association between chocolate preference and eating style (P= 0.36). Present work supports the contention that this method can be used to examine preferences within specific market segments and potentially individual differences as they relate to ingestive behavior. This work makes use of the rejection threshold method to study market segmentation, extending its use to solid foods. We believe this method has broad applicability to the sensory specialist and product developer by providing a process to identify how much is too much when formulating products, even in the context of specific market segments. We illustrate this in solid chocolate-flavored compound coating, identifying substantial differences in the amount of acceptable bitterness in those who prefer milk chocolate compared to dark chocolate. This method provides a direct means to answer the question of how much is too much. © 2012 Institute of Food Technologists®
A Framework for Optimizing Phytosanitary Thresholds in Seed Systems.
Choudhury, Robin Alan; Garrett, Karen A; Klosterman, Steven J; Subbarao, Krishna V; McRoberts, Neil
2017-10-01
Seedborne pathogens and pests limit production in many agricultural systems. Quarantine programs help prevent the introduction of exotic pathogens into a country, but few regulations directly apply to reducing the reintroduction and spread of endemic pathogens. Use of phytosanitary thresholds helps limit the movement of pathogen inoculum through seed, but the costs associated with rejected seed lots can be prohibitive for voluntary implementation of phytosanitary thresholds. In this paper, we outline a framework to optimize thresholds for seedborne pathogens, balancing the cost of rejected seed lots and benefit of reduced inoculum levels. The method requires relatively small amounts of data, and the accuracy and robustness of the analysis improves over time as data accumulate from seed testing. We demonstrate the method first and illustrate it with a case study of seedborne oospores of Peronospora effusa, the causal agent of spinach downy mildew. A seed lot threshold of 0.23 oospores per seed could reduce the overall number of oospores entering the production system by 90% while removing 8% of seed lots destined for distribution. Alternative mitigation strategies may result in lower economic losses to seed producers, but have uncertain efficacy. We discuss future challenges and prospects for implementing this approach.
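The threshold trade-off at the heart of the framework, the fraction of seed lots rejected versus the fraction of inoculum kept out of production, can be illustrated with a few lines of code. The lognormal lot contamination data below are invented and equal lot sizes are assumed; a real analysis would use measured oospores-per-seed values.

```python
# Threshold trade-off sketch: fraction of seed lots rejected vs. fraction of total
# inoculum removed, assuming equal lot sizes. The lognormal contamination levels
# are invented; a real analysis would use measured oospores-per-seed values per lot.
import numpy as np

rng = np.random.default_rng(0)
oospores_per_seed = rng.lognormal(mean=-3.0, sigma=2.0, size=500)   # hypothetical lots

def tradeoff(levels, threshold):
    rejected = levels > threshold
    return rejected.mean(), levels[rejected].sum() / levels.sum()

for thr in (0.05, 0.1, 0.23, 0.5):
    lots, inoculum = tradeoff(oospores_per_seed, thr)
    print(f"threshold {thr:>4}: {lots:5.1%} of lots rejected, "
          f"{inoculum:5.1%} of oospores removed")
```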
2016-07-02
Applications of Bessel beams to superresolution machining: the threshold effect of ablation means that the structure diameter can be smaller than the beam diameter; fs pulses at 800 nm yield 200 nm structures. Approved for public release: distribution unlimited.
NASA Astrophysics Data System (ADS)
Bénichou, O.; Bhat, U.; Krapivsky, P. L.; Redner, S.
2018-02-01
We introduce the frugal foraging model in which a forager performs a discrete-time random walk on a lattice in which each site initially contains S food units. The forager metabolizes one unit of food at each step and starves to death when it last ate S steps in the past. Whenever the forager eats, it consumes all food at its current site and this site remains empty forever (no food replenishment). The crucial property of the forager is that it is frugal and eats only when encountering food within at most k steps of starvation. We compute the average lifetime analytically as a function of the frugality threshold and show that there exists an optimal strategy, namely, an optimal frugality threshold k* that maximizes the forager lifetime.
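A simple Monte Carlo version of the model, restricted here to a one-dimensional lattice for brevity, reproduces the qualitative picture of a lifetime that depends on the frugality threshold k; the geometry and parameter values are assumptions for illustration, since the paper treats the model analytically.

```python
# Monte Carlo sketch of the frugal forager on a one-dimensional lattice: every site
# starts with S food units, the forager starves S steps after its last meal, and it
# eats (emptying the site) only when within k steps of starvation. The 1D geometry
# and the parameters are illustrative; the paper treats the model analytically.
import random

def lifetime(S, k, rng):
    eaten = set()              # sites already emptied
    pos, hunger, t = 0, 0, 0   # hunger = steps since the last meal
    while hunger < S:
        pos += rng.choice((-1, 1))
        t += 1
        hunger += 1
        if pos not in eaten and hunger >= S - k:
            eaten.add(pos)     # consume all food at this site
            hunger = 0
    return t

rng = random.Random(1)
S = 20
for k in (0, 2, 5, 10, 20):
    mean_life = sum(lifetime(S, k, rng) for _ in range(2000)) / 2000
    print(f"k = {k:2d}: mean lifetime ~ {mean_life:.0f} steps")
```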
Optimization of the design of Gas Cherenkov Detectors for ICF diagnosis
NASA Astrophysics Data System (ADS)
Liu, Bin; Hu, Huasi; Han, Hetong; Lv, Huanwen; Li, Lan
2018-07-01
A design method that combines a genetic algorithm (GA) with Monte-Carlo simulation is established and applied to two different types of Cherenkov detectors, namely the Gas Cherenkov Detector (GCD) and the Gamma Reaction History (GRH) diagnostic. To accelerate the optimization program, Open MPI (Message Passing Interface) is used in the Geant4 simulation. Compared with the traditional optical ray-tracing method, the performances of these detectors have been improved with the optimization method. The efficiency of the GCD system, with a threshold of 6.3 MeV, is enhanced by ∼20% and the time response is improved by ∼7.2%. For the GRH system, with a threshold of 10 MeV, the efficiency is enhanced by ∼76% in comparison with previously published results.
Towards a Delamination Fatigue Methodology for Composite Materials
NASA Technical Reports Server (NTRS)
OBrien, Thomas K.
2007-01-01
A methodology that accounts for both delamination onset and growth in composite structural components is proposed for improved fatigue life prediction to reduce life cycle costs and improve accept/reject criteria for manufacturing flaws. The benefits of using a Delamination Onset Threshold (DOT) approach in combination with a Modified Damage Tolerance (MDT) approach are highlighted. The use of this combined approach to establish accept/reject criteria, requiring less conservative initial manufacturing flaw sizes, is illustrated.
Comparison of algorithms of testing for use in automated evaluation of sensation.
Dyck, P J; Karnes, J L; Gillen, D A; O'Brien, P C; Zimmerman, I R; Johnson, D M
1990-10-01
Estimates of vibratory detection threshold may be used to detect, characterize, and follow the course of sensory abnormality in neurologic disease. The approach is especially useful in epidemiologic and controlled clinical trials. We studied which algorithm of testing and finding threshold should be used in automatic systems by comparing among algorithms and stimulus conditions for the index finger of healthy subjects and for the great toe of patients with mild neuropathy. Appearance thresholds obtained by linear ramps increasing at a rate less than 4.15 microns/sec provided accurate and repeatable thresholds compared with thresholds obtained by forced-choice testing. These rates would be acceptable if only sensitive sites were studied, but they were too slow for use in automatic testing of insensitive parts. Appearance thresholds obtained by fast linear rates (4.15 or 16.6 microns/sec) overestimated threshold, especially for sensitive parts. Use of the mean of appearance and disappearance thresholds, with the stimulus increasing exponentially at rates of 0.5 or 1.0 just noticeable difference (JND) units per second and with interspersed null stimuli (Békésy tracking with null stimuli), provided accurate, repeatable, and fast estimates of threshold for sensitive parts. Despite the good performance of Békésy testing, we prefer forced choice for evaluation of the sensation of patients with neuropathy.
Chaotic Signal Denoising Based on Hierarchical Threshold Synchrosqueezed Wavelet Transform
NASA Astrophysics Data System (ADS)
Wang, Wen-Bo; Jing, Yun-yu; Zhao, Yan-chao; Zhang, Lian-Hua; Wang, Xiang-Li
2017-12-01
To overcome the shortcomings of single-threshold synchrosqueezed wavelet transform (SWT) denoising, an adaptive hierarchical-threshold SWT chaotic-signal denoising method is proposed. First, a new SWT threshold function is constructed based on Stein's unbiased risk estimate; the function is twice continuously differentiable. Then, using the new threshold function, a thresholding process based on the minimum mean square error is implemented, and the optimal estimate of each layer's threshold in SWT chaotic denoising is obtained. Experimental results on a simulated chaotic signal and measured sunspot signals show that the proposed method filters the noise of the chaotic signal well and recovers the intrinsic chaotic characteristics of the original signal. Compared with the EEMD denoising method and the single-threshold SWT denoising method, the proposed method obtains better denoising results for chaotic signals.
NASA Astrophysics Data System (ADS)
Härer, Stefan; Bernhardt, Matthias; Siebers, Matthias; Schulz, Karsten
2018-05-01
Knowledge of the current snow cover extent is essential for characterizing energy and moisture fluxes at the Earth's surface. The snow-covered area (SCA) is often estimated from optical satellite information in combination with the normalized-difference snow index (NDSI). The NDSI uses a threshold to decide whether a satellite pixel is classified as snow covered or snow free. The spatiotemporal representativeness of the standard threshold of 0.4 is, however, questionable at the local scale. Here, we use local snow cover maps derived from ground-based photography to continuously calibrate the NDSI threshold values (NDSIthr) of Landsat satellite images at two European mountain sites over the period from 2010 to 2015. The Research Catchment Zugspitzplatt (RCZ, Germany) and the Vernagtferner area (VF, Austria) are both located within a single Landsat scene. Nevertheless, the long-term analysis demonstrated that the NDSIthr values at these sites are not correlated (r = 0.17) and differ from the standard threshold of 0.4. For further comparison, a dynamic and locally optimized NDSI threshold was used, as well as another locally optimized literature threshold value (0.7). It was shown that large uncertainties in the prediction of the SCA of up to 24.1 % exist in satellite snow cover maps when the standard threshold of 0.4 is used, but a newly developed calibrated quadratic polynomial model, which accounts for seasonal threshold dynamics, can reduce this error. The model reduces the SCA uncertainties at the calibration site VF by 50 % in the evaluation period and was also able to improve the results at RCZ significantly. Additionally, a scaling experiment shows that the positive effect of a locally adapted threshold diminishes for pixel sizes of 500 m or larger, underlining the general applicability of the standard threshold at larger scales.
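For reference, the NDSI-based snow mask underlying this analysis amounts to a band ratio and a threshold comparison, as in the sketch below; the reflectance arrays are random stand-ins for the Landsat green and shortwave-infrared bands.

```python
# NDSI snow-mask sketch: band ratio plus threshold, comparing the standard 0.4
# threshold with the 0.7 literature value mentioned above. The reflectance arrays
# are random stand-ins for the Landsat green and shortwave-infrared bands.
import numpy as np

def snow_covered_fraction(green, swir, ndsi_thr):
    ndsi = (green - swir) / np.clip(green + swir, 1e-6, None)
    return (ndsi > ndsi_thr).mean()

rng = np.random.default_rng(42)
green = rng.uniform(0.05, 0.9, size=(512, 512))
swir = rng.uniform(0.05, 0.5, size=(512, 512))

for thr in (0.4, 0.7):
    print(f"NDSI threshold {thr}: SCA = {snow_covered_fraction(green, swir, thr):.1%}")
```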
Application of automatic threshold in dynamic target recognition with low contrast
NASA Astrophysics Data System (ADS)
Miao, Hua; Guo, Xiaoming; Chen, Yu
2014-11-01
A hybrid photoelectric joint transform correlator can achieve automatic real-time recognition with high precision through the combination of optical and electronic devices. When recognizing low-contrast targets with a photoelectric joint transform correlator, differences in attitude, brightness and grayscale between target and template mean that only four to five frames of a dynamic target can be recognized without any processing. A CCD camera is used to capture the dynamic target images at 25 frames per second. Automatic thresholding has many advantages, such as fast processing speed, effective suppression of noise interference, enhancement of the diffraction energy of useful information and better preservation of the outlines of target and template, so this method plays an important role in target recognition with the optical correlation method. However, the threshold obtained automatically by the program cannot achieve the best recognition results for dynamic targets, because the outline information is partially broken; in most cases the optimal threshold is obtained by manual intervention. Aiming at the characteristics of dynamic targets, an improved automatic threshold procedure is implemented by multiplying the Otsu threshold of target and template by a scale coefficient of the processed image and combining the result with mathematical morphology. The optimal threshold for dynamic low-contrast target images can then be obtained automatically. The recognition rate of dynamic targets is improved through reduced background noise and increased correlation information. A series of dynamic tank images moving at about 70 km/h is adopted as target images. Without any processing, the 1st frame of this series can correlate only with the 3rd frame. With Otsu thresholding, the 80th frame can be recognized. With automatic threshold processing of the joint images, this number increases to 89 frames. Experimental results show that the improved automatic threshold processing has particular application value for the recognition of dynamic low-contrast targets.
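A rough sketch of the thresholding idea, an Otsu threshold scaled by a coefficient followed by a morphological clean-up, is given below; the scale factor, the specific closing operation and the random test image are assumptions for illustration rather than the authors' exact procedure.

```python
# Rough sketch of a scaled-Otsu threshold followed by a morphological clean-up.
# The 0.8 scale factor, the 3x3 closing and the random test frame are assumptions.
import numpy as np
from scipy import ndimage
from skimage.filters import threshold_otsu

def improved_threshold(image, scale=0.8):
    t = scale * threshold_otsu(image)            # Otsu threshold scaled by a coefficient
    binary = image > t
    return ndimage.binary_closing(binary, structure=np.ones((3, 3)))

frame = np.random.default_rng(7).uniform(0, 255, size=(128, 128))
mask = improved_threshold(frame)
print("foreground fraction:", round(float(mask.mean()), 3))
```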
Effect of 2 Bedding Materials on Ammonia Levels in Individually Ventilated Cages
Koontz, Jason M; Kumsher, David M; III, Richard Kelly; Stallings, Jonathan D
2016-01-01
This study sought to identify an optimal rodent bedding and cage-change interval to establish standard procedures for the IVC in our rodent vivarium. Disposable cages were prefilled with either corncob or α-cellulose bedding and were used to house 2 adult Sprague–Dawley rats (experimental condition) or contained no animals (control). Rats were observed and intracage ammonia levels measured daily for 21 d. Intracage ammonia accumulation became significant by day 8 in experimental cages containing α-cellulose bedding, whereas experimental cages containing corncob bedding did not reach detectable levels of ammonia until day 14. In all 3 experimental cages containing α-cellulose, ammonia exceeded 100 ppm (our maximum acceptable limit) by day 11. Two experimental corncob cages required changing at days 16 and 17, whereas the remaining cage containing corncob bedding lasted the entire 21 d without reaching the 100-ppm ammonia threshold. These data suggest that corncob bedding provides nearly twice the service life of α-cellulose bedding in the IVC system. PMID:26817976
Gilad, O; Horesh, L; Holder, D S
2007-07-01
For the novel application of recording of resistivity changes related to neuronal depolarization in the brain with electrical impedance tomography, optimal recording is with applied currents below 100 Hz, which might cause neural stimulation of skin or underlying brain. The purpose of this work was to develop a method for application of low frequency currents to the scalp which delivered the maximum current without significant stimulation of skin or underlying brain. We propose a recessed electrode design which enabled current injection with an acceptable skin sensation to be increased from 100 μA using EEG electrodes to 1 mA in 16 normal volunteers. The effect of current delivered to the brain was assessed with an anatomically realistic finite element model of the adult head. The modelled peak cerebral current density was 0.3 A/m², which was 5- to 25-fold less than the threshold for stimulation of the brain estimated from a literature review.
Phospholipid imprinted polymers as selective endotoxin scavengers
NASA Astrophysics Data System (ADS)
Sulc, Robert; Szekely, Gyorgy; Shinde, Sudhirkumar; Wierzbicka, Celina; Vilela, Filipe; Bauer, David; Sellergren, Börje
2017-03-01
Herein we explore phospholipid imprinting as a means to design receptors for complex glycolipids comprising the toxic lipopolysaccharide endotoxin. A series of polymerizable bis-imidazolium and urea hosts were evaluated as cationic and neutral hosts for phosphates and phosphonates, the latter used as mimics of the phospholipid head groups. The bis-imidazolium hosts interacted with the guests in a cooperative manner, leading to the presence of tight and well-defined 1:2 ternary complexes. Optimized monomer combinations were subsequently used for imprinting of phosphatidic acid as an endotoxin dummy template. The presence of the aforementioned ternary complexes during polymerization resulted in imprinting of lipid dimers, the latter believed to crudely mimic the endotoxin Lipid A motif. The polymers were characterized with respect to template rebinding, binding affinity, capacity and common structural properties, leading to the identification of polymers which were thereafter subjected to an industrially validated endotoxin removal test. Two of the polymers were capable of removing endotoxin down to levels well below the accepted threshold (0.005 EU/mg API) in pharmaceutical production.
Effect of 2 Bedding Materials on Ammonia Levels in Individually Ventilated Cages.
Koontz, Jason M; Kumsher, David M; Kelly, Richard; Stallings, Jonathan D
2016-01-01
This study sought to identify an optimal rodent bedding and cage-change interval to establish standard procedures for the IVC in our rodent vivarium. Disposable cages were prefilled with either corncob or α-cellulose bedding and were used to house 2 adult Sprague-Dawley rats (experimental condition) or contained no animals (control). Rats were observed and intracage ammonia levels measured daily for 21 d. Intracage ammonia accumulation became significant by day 8 in experimental cages containing α-cellulose bedding, whereas experimental cages containing corncob bedding did not reach detectable levels of ammonia until day 14. In all 3 experimental cages containing α-cellulose, ammonia exceeded 100 ppm (our maximum acceptable limit) by day 11. Two experimental corncob cages required changing at days 16 and 17, whereas the remaining cage containing corncob bedding lasted the entire 21 d without reaching the 100-ppm ammonia threshold. These data suggest that corncob bedding provides nearly twice the service life of α-cellulose bedding in the IVC system.
Hypoglycaemia and hypoxic-ischaemic encephalopathy.
Boardman, James P; Hawdon, Jane M
2015-04-01
The transition from fetal to neonatal life requires metabolic adaptation to ensure that energy supply to vital organs and systems is maintained after separation from the placental circulation. Under normal conditions, this is achieved through the mobilization and use of alternative cerebral fuels (fatty acids, ketone bodies, and lactate) when blood glucose concentration falls. Severe hypoxia-ischaemia is associated with impaired metabolic adaptation, and animal and human data suggest that levels of hypoglycaemia that are tolerated under normal conditions can be harmful in association with hypoxia-ischaemia. The optimal target blood glucose level for ensuring adequate energy provision in hypoxic-ischaemic encephalopathy (HIE) remains unknown. However, recent data support guidance to maintain a blood glucose concentration of 2.5 mmol/L or more in neonates with signs of acute neurological dysfunction, which includes those with HIE, and this is higher than the accepted threshold of 2 mmol/L in infants without signs of neurological dysfunction or hyperinsulinism. © The Authors. Journal compilation © 2015 Mac Keith Press.
The effect of decentralized behavioral decision making on system-level risk.
Kaivanto, Kim
2014-12-01
Certain classes of system-level risk depend partly on decentralized lay decision making. For instance, an organization's network security risk depends partly on its employees' responses to phishing attacks. On a larger scale, the risk within a financial system depends partly on households' responses to mortgage sales pitches. Behavioral economics shows that lay decisionmakers typically depart in systematic ways from the normative rationality of expected utility (EU), and instead display heuristics and biases as captured in the more descriptively accurate prospect theory (PT). In turn, psychological studies show that successful deception ploys eschew direct logical argumentation and instead employ peripheral-route persuasion, manipulation of visceral emotions, urgency, and familiar contextual cues. The detection of phishing emails and inappropriate mortgage contracts may be framed as a binary classification task. Signal detection theory (SDT) offers the standard normative solution, formulated as an optimal cutoff threshold, for distinguishing between good/bad emails or mortgages. In this article, we extend SDT behaviorally by rederiving the optimal cutoff threshold under PT. Furthermore, we incorporate the psychology of deception into determination of SDT's discriminability parameter. With the neo-additive probability weighting function, the optimal cutoff threshold under PT is rendered unique under well-behaved sampling distributions, tractable in computation, and transparent in interpretation. The PT-based cutoff threshold is (i) independent of loss aversion and (ii) more conservative than the classical SDT cutoff threshold. Independently of any possible misalignment between individual-level and system-level misclassification costs, decentralized behavioral decisionmakers are biased toward underdetection, and system-level risk is consequently greater than in analyses predicated upon normative rationality. © 2014 Society for Risk Analysis.
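For context, the classical (expected-value) signal detection theory cutoff that the paper takes as its normative baseline can be written down directly under the common equal-variance Gaussian assumption; the prospect-theory rederivation is not reproduced here, and the priors, costs and discriminability below are illustrative.

```python
# Classical expected-value SDT cutoff under the equal-variance Gaussian model:
# "bad" items score N(d', 1), "good" items N(0, 1). Priors, costs and d' below are
# illustrative; the prospect-theory cutoff derived in the paper is not shown here.
from math import log

def sdt_cutoff(d_prime, p_bad, cost_false_alarm, cost_miss):
    """Score above which an item (e-mail, mortgage offer) is classified as 'bad'."""
    beta = ((1 - p_bad) * cost_false_alarm) / (p_bad * cost_miss)   # optimal likelihood ratio
    return d_prime / 2.0 + log(beta) / d_prime

# A costlier miss pulls the cutoff down, so more items get flagged.
print(sdt_cutoff(d_prime=2.0, p_bad=0.01, cost_false_alarm=1.0, cost_miss=50.0))
print(sdt_cutoff(d_prime=2.0, p_bad=0.01, cost_false_alarm=1.0, cost_miss=5.0))
```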
Efficiencies of joint non-local update moves in Monte Carlo simulations of coarse-grained polymers
NASA Astrophysics Data System (ADS)
Austin, Kieran S.; Marenz, Martin; Janke, Wolfhard
2018-03-01
In this study four update methods are compared in their performance in a Monte Carlo simulation of polymers in continuum space. The efficiencies of the update methods and combinations thereof are compared with the aid of the autocorrelation time with a fixed (optimal) acceptance ratio. Results are obtained for polymer lengths N = 14, 28 and 42 and temperatures below, at and above the collapse transition. In terms of autocorrelation, the optimal acceptance ratio is approximately 0.4. Furthermore, an overview of the step sizes of the update methods that correspond to this optimal acceptance ratio is given. This shall serve as a guide for future studies that rely on efficient computer simulations.
Sensitivity to coincidences and paranormal belief.
Hadlaczky, Gergö; Westerlund, Joakim
2011-12-01
Often it is difficult to find a natural explanation as to why a surprising coincidence occurs. In attempting to find one, people may be inclined to accept paranormal explanations. The objective of this study was to investigate whether people with a lower threshold for being surprised by coincidences have a greater propensity to become believers compared to those with a higher threshold. Participants were exposed to artificial coincidences, which were formally defined as less or more probable, and were asked to provide remarkability ratings. Paranormal belief was measured by the Australian Sheep-Goat Scale. An analysis of the remarkability ratings revealed a significant interaction effect between Sheep-Goat score and type of coincidence, suggesting that people with lower thresholds of surprise, when experiencing coincidences, harbor higher paranormal belief than those with a higher threshold. The theoretical aspects of these findings were discussed.
Maulidiani; Rudiyanto; Abas, Faridah; Ismail, Intan Safinar; Lajis, Nordin H
2018-06-01
The optimization process is an important aspect of natural product extraction. Herein, an alternative approach is proposed for optimizing extractions, namely the Generalized Likelihood Uncertainty Estimation (GLUE). The approach combines Latin hypercube sampling, the feasible ranges of the independent variables, Monte Carlo simulation, and threshold criteria on the response variables. The GLUE method is tested on three different techniques, including ultrasound-, microwave-, and supercritical-CO2-assisted extractions, using data from previously published reports. The study found that this method can: provide more information on the combined effects of the independent variables on the response variables in the dotty plots; deal with an unlimited number of independent and response variables; consider combined multiple threshold criteria, which are subjective and depend on the target of the investigation for the response variables; and provide a range of values, with their distribution, for the optimization. Copyright © 2018 Elsevier Ltd. All rights reserved.
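A schematic version of the GLUE workflow described above, Latin hypercube sampling over feasible ranges, a simulated response, and a threshold criterion that retains the acceptable region, is sketched below; the response surface, variable ranges and threshold are invented placeholders.

```python
# Schematic GLUE-style run: Latin hypercube sampling over feasible ranges, a noisy
# response model standing in for the extraction experiment, and a threshold
# criterion on the response. The response surface, ranges and threshold are invented.
import numpy as np
from scipy.stats import qmc

sampler = qmc.LatinHypercube(d=3, seed=0)
lower, upper = [30.0, 5.0, 50.0], [70.0, 60.0, 100.0]   # temperature (C), time (min), power (W)
x = qmc.scale(sampler.random(n=2000), lower, upper)

def simulated_yield(temp, time_min, power):
    base = 50 - 0.02 * (temp - 55) ** 2 - 0.01 * (time_min - 35) ** 2 + 0.05 * power
    return base + np.random.default_rng(1).normal(0.0, 1.0, size=np.shape(temp))

y = simulated_yield(x[:, 0], x[:, 1], x[:, 2])
acceptable = x[y >= 52.0]                                # threshold criterion on the response
print(f"{len(acceptable)} of {len(x)} samples meet the yield criterion")
print(f"acceptable temperature range: {acceptable[:, 0].min():.1f}-{acceptable[:, 0].max():.1f} C")
```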
Threshold matrix for digital halftoning by genetic algorithm optimization
NASA Astrophysics Data System (ADS)
Alander, Jarmo T.; Mantere, Timo J.; Pyylampi, Tero
1998-10-01
Digital halftoning is used both in low and high resolution high quality printing technologies. Our method is designed to be mainly used for low resolution ink jet marking machines to produce both gray tone and color images. The main problem with digital halftoning is pink noise caused by the human eye's visual transfer function. To compensate for this the random dot patterns used are optimized to contain more blue than pink noise. Several such dot pattern generator threshold matrices have been created automatically by using genetic algorithm optimization, a non-deterministic global optimization method imitating natural evolution and genetics. A hybrid of genetic algorithm with a search method based on local backtracking was developed together with several fitness functions evaluating dot patterns for rectangular grids. By modifying the fitness function, a family of dot generators results, each with its particular statistical features. Several versions of genetic algorithms, backtracking and fitness functions were tested to find a reasonable combination. The generated threshold matrices have been tested by simulating a set of test images using the Khoros image processing system. Even though the work was focused on developing low resolution marking technology, the resulting family of dot generators can be applied also in other halftoning application areas including high resolution printing technology.
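To show how a threshold matrix is applied once it has been generated, the sketch below performs ordered dithering with a classic 4x4 Bayer matrix standing in for the GA-optimised blue-noise matrices described in the abstract.

```python
# Ordered dithering with a threshold matrix: the matrix is tiled over the image and
# a dot is printed wherever the grey level exceeds the local threshold. A classic
# 4x4 Bayer matrix stands in for the GA-optimised blue-noise matrices.
import numpy as np

bayer4 = np.array([[ 0,  8,  2, 10],
                   [12,  4, 14,  6],
                   [ 3, 11,  1,  9],
                   [15,  7, 13,  5]]) / 16.0

def halftone(gray, matrix):
    """gray: 2D array of grey levels in [0, 1]; returns a binary dot pattern."""
    h, w = gray.shape
    th, tw = matrix.shape
    tiled = np.tile(matrix, (h // th + 1, w // tw + 1))[:h, :w]
    return (gray > tiled).astype(np.uint8)

ramp = np.tile(np.linspace(0.0, 1.0, 64), (16, 1))   # horizontal grey ramp
dots = halftone(ramp, bayer4)
print("mean dot coverage:", round(float(dots.mean()), 3))   # ~ mean grey level of the ramp
```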
NASA Astrophysics Data System (ADS)
Zhong, Keyuan; Zheng, Fenli; Xu, Ximeng; Qin, Chao
2018-06-01
Different precipitation phases (rain, snow or sleet) differ greatly in their hydrological and erosional processes. Therefore, accurate discrimination of the precipitation phase is highly important when researching hydrologic processes and climate change at high latitudes and in mountainous regions. The objective of this study was to identify suitable temperature thresholds for discriminating the precipitation phase in the Songhua River Basin (SRB) based on 20 years of daily precipitation data collected from 60 meteorological stations located in and around the basin. Two methods, the air temperature method (AT method) and the wet bulb temperature method (WBT method), were used to discriminate the precipitation phase. Thirteen temperature thresholds were used to discriminate snowfall in the SRB: air temperatures from 0 to 5.5 °C at intervals of 0.5 °C, and the wet bulb temperature (WBT). Three evaluation indices, the error percentage of discriminated snowfall days (Ep), the relative error of discriminated snowfall (Re) and the determination coefficient (R2), were applied to assess the discrimination accuracy. The results showed that 2.5 °C was the optimum threshold temperature for discriminating snowfall at the scale of the entire basin. Due to differences in the landscape conditions at the different stations, the optimum threshold varied by station. The optimal thresholds ranged from 1.5 to 4.0 °C, and 19, 17 and 18 stations had optimal thresholds of 2.5 °C, 3.0 °C and 3.5 °C, respectively, together accounting for 90% of all stations. Compared with using a single basin-wide temperature threshold to discriminate snowfall, it was more accurate to use the optimum threshold at each station to estimate snowfall in the basin. In addition, snowfall was underestimated when the temperature threshold was the WBT or an air temperature below 2.5 °C, whereas snowfall was overestimated when the temperature threshold exceeded 4.0 °C at most stations. The results of this study provide information for climate change research and hydrological process simulations in the SRB, as well as reference information for discriminating the precipitation phase in other regions.
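The threshold evaluation can be mimicked with synthetic data as below: classify a day as snowfall when the air temperature falls below a candidate threshold and score the candidate with simple versions of the error percentage of snowfall days (Ep) and the relative error of snowfall (Re). The data, and the exact definitions of the two indices, are plausible simplifications rather than the study's own.

```python
# Synthetic sketch of scoring a candidate temperature threshold for snowfall
# discrimination with simplified versions of Ep and Re; the data and the exact
# index definitions are assumptions, not the study's own.
import numpy as np

rng = np.random.default_rng(5)
temp = rng.normal(1.0, 4.0, 3000)                      # daily mean air temperature (°C)
precip = rng.gamma(2.0, 3.0, 3000)                     # daily precipitation (mm)
obs_snow = temp + rng.normal(0.0, 1.5, 3000) < 2.0     # synthetic "observed" phase

def scores(threshold):
    pred_snow = temp < threshold
    ep = 100.0 * np.mean(pred_snow != obs_snow)        # % of days misclassified
    re = abs(precip[pred_snow].sum() - precip[obs_snow].sum()) / precip[obs_snow].sum()
    return ep, re

for thr in np.arange(0.0, 5.5, 0.5):
    ep, re = scores(thr)
    print(f"threshold {thr:3.1f} °C: Ep = {ep:4.1f}%, Re = {re:.3f}")
```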
NASA Astrophysics Data System (ADS)
Khamwan, Kitiwat; Krisanachinda, Anchali; Pluempitiwiriyawej, Charnchai
2012-10-01
This study presents an automatic method to trace the boundary of the tumour in positron emission tomography (PET) images. It has been discovered that Otsu's threshold value is biased when the within-class variances of the object and the background are significantly different. To solve the problem, a double-stage threshold search that minimizes the energy between the first Otsu's threshold and the maximum intensity value is introduced. Such shifted-optimal thresholding is embedded into a region-based active contour so that both algorithms are performed consecutively. The efficiency of the method is validated using six sphere inserts (0.52-26.53 cc volume) of the IEC/2001 torso phantom. Both the spheres and the phantom were filled with 18F solution, and PET images were acquired at four source-to-background ratios (SBRs). The results illustrate that the tumour volumes segmented by the combined algorithm are more accurate than those obtained with the traditional active contour. The method was then applied clinically in ten oesophageal cancer patients, and the results were evaluated and compared with manual tracing by an experienced radiation oncologist. The advantage of the algorithm is the reduced erroneous delineation, which improves the precision and accuracy of PET tumour contouring. Moreover, the combined method is robust, independent of the SBR threshold-volume curves, and it does not require prior lesion size measurement.
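A minimal sketch of the double-stage idea, assuming a plain NumPy implementation of Otsu's criterion: a first Otsu threshold is computed over the full intensity range, and a second search is then restricted to intensities between that threshold and the image maximum (the energy formulation and the active-contour coupling of the actual method are omitted).

```python
import numpy as np

def otsu_threshold(values, bins=256):
    """Return the threshold maximizing the between-class variance (Otsu)."""
    hist, edges = np.histogram(values, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(hist).astype(float)
    w1 = w0[-1] - w0
    cum_mass = np.cumsum(hist * centers)
    m0 = cum_mass / np.maximum(w0, 1e-12)
    m1 = (cum_mass[-1] - cum_mass) / np.maximum(w1, 1e-12)
    between = w0 * w1 * (m0 - m1) ** 2
    return centers[np.argmax(between[:-1])]        # skip the degenerate last bin

def double_stage_threshold(image):
    """First Otsu pass on the full image, second pass restricted to the range
    between the first threshold and the maximum intensity."""
    t1 = otsu_threshold(image.ravel())
    upper = image[image >= t1]
    return otsu_threshold(upper) if upper.size > 1 else t1

# Synthetic PET-like image: dim background plus a small bright "tumour"
rng = np.random.default_rng(2)
img = rng.normal(100, 10, (128, 128))
img[50:60, 50:60] += rng.normal(400, 30, (10, 10))
print(double_stage_threshold(img))
```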
Estimating economic thresholds for pest control: an alternative procedure.
Ramirez, O A; Saunders, J L
1999-04-01
An alternative methodology to determine profit-maximizing economic thresholds is developed and illustrated. An optimization problem based on the main biological and economic relations involved in determining a profit-maximizing economic threshold is first advanced. From it, a more manageable model of two nonsimultaneous reduced-form equations is derived, which represents a simpler but conceptually and statistically sound alternative. The model recognizes that yields and pest control costs are a function of the economic threshold used. Higher (less strict) economic thresholds can result in lower yields and, therefore, a lower gross income from the sale of the product, but could also be less costly to maintain. The highest possible profits will be obtained by using the economic threshold that results in the maximum difference between the gross income and pest control cost functions.
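The reduced-form approach can be illustrated with a toy numerical search: assume (or fit) gross income and pest control cost as functions of the economic threshold and pick the threshold that maximizes their difference. The functional forms and numbers below are hypothetical placeholders, not the paper's estimates.

```python
import numpy as np

# Hypothetical reduced-form relations (placeholders, not the paper's fitted equations):
# gross income falls as the threshold is relaxed, and so does the control cost.
def gross_income(threshold):       # $/ha from the sale of the product
    return 2000.0 - 35.0 * threshold ** 1.2

def control_cost(threshold):       # $/ha spent on pest control
    return 600.0 * np.exp(-0.25 * threshold)

thresholds = np.linspace(0.5, 15.0, 300)           # candidate pest-density thresholds
profit = gross_income(thresholds) - control_cost(thresholds)
best = thresholds[np.argmax(profit)]
print(f"profit-maximizing economic threshold ~ {best:.2f}, profit ~ {profit.max():.0f} $/ha")
```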
Detectability Thresholds and Optimal Algorithms for Community Structure in Dynamic Networks
NASA Astrophysics Data System (ADS)
Ghasemian, Amir; Zhang, Pan; Clauset, Aaron; Moore, Cristopher; Peel, Leto
2016-07-01
The detection of communities within a dynamic network is a common means for obtaining a coarse-grained view of a complex system and for investigating its underlying processes. While a number of methods have been proposed in the machine learning and physics literature, we lack a theoretical analysis of their strengths and weaknesses, or of the ultimate limits on when communities can be detected. Here, we study the fundamental limits of detecting community structure in dynamic networks. Specifically, we analyze the limits of detectability for a dynamic stochastic block model where nodes change their community memberships over time, but where edges are generated independently at each time step. Using the cavity method, we derive a precise detectability threshold as a function of the rate of change and the strength of the communities. Below this sharp threshold, we claim that no efficient algorithm can identify the communities better than chance. We then give two algorithms that are optimal in the sense that they succeed all the way down to this threshold. The first uses belief propagation, which gives asymptotically optimal accuracy, and the second is a fast spectral clustering algorithm, based on linearizing the belief propagation equations. These results extend our understanding of the limits of community detection in an important direction, and introduce new mathematical tools for similar extensions to networks with other types of auxiliary information.
Adaptive Spot Detection With Optimal Scale Selection in Fluorescence Microscopy Images.
Basset, Antoine; Boulanger, Jérôme; Salamero, Jean; Bouthemy, Patrick; Kervrann, Charles
2015-11-01
Accurately detecting subcellular particles in fluorescence microscopy is of primary interest for further quantitative analysis such as counting, tracking, or classification. Our primary goal is to segment vesicles likely to share nearly the same size in fluorescence microscopy images. Our method termed adaptive thresholding of Laplacian of Gaussian (LoG) images with autoselected scale (ATLAS) automatically selects the optimal scale corresponding to the most frequent spot size in the image. Four criteria are proposed and compared to determine the optimal scale in a scale-space framework. Then, the segmentation stage amounts to thresholding the LoG of the intensity image. In contrast to other methods, the threshold is locally adapted given a probability of false alarm (PFA) specified by the user for the whole set of images to be processed. The local threshold is automatically derived from the PFA value and local image statistics estimated in a window whose size is not a critical parameter. We also propose a new data set for benchmarking, consisting of six collections of one hundred images each, which exploits backgrounds extracted from real microscopy images. We have carried out an extensive comparative evaluation on several data sets with ground-truth, which demonstrates that ATLAS outperforms existing methods. ATLAS does not need any fine parameter tuning and requires very low computation time. Convincing results are also reported on real total internal reflection fluorescence microscopy images.
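The sketch below follows the spirit of the ATLAS pipeline rather than its exact criteria: a scale-normalized LoG response is computed at a few scales, the strongest-responding scale is selected, and a locally adapted threshold is derived from a user-specified false-alarm probability using window statistics; all parameter values are illustrative.

```python
import numpy as np
from scipy import ndimage
from scipy.stats import norm

def detect_spots(image, scales=(1, 2, 3, 4), pfa=1e-3, win=15):
    """Illustrative LoG spot detection with automatic scale selection and a
    locally adapted threshold derived from a false-alarm probability."""
    # Scale selection: keep the scale with the strongest scale-normalized response
    responses = [-(s ** 2) * ndimage.gaussian_laplace(image.astype(float), s) for s in scales]
    best = int(np.argmax([r.max() for r in responses]))
    log_img = responses[best]
    # Local threshold: mean + quantile * std estimated in a sliding window
    local_mean = ndimage.uniform_filter(log_img, win)
    local_sq = ndimage.uniform_filter(log_img ** 2, win)
    local_std = np.sqrt(np.maximum(local_sq - local_mean ** 2, 1e-12))
    k = norm.ppf(1.0 - pfa)                        # Gaussian quantile for the requested PFA
    return log_img > local_mean + k * local_std, scales[best]

rng = np.random.default_rng(3)
img = rng.normal(0, 1, (256, 256))
for y, x in rng.integers(20, 236, (30, 2)):        # synthetic bright spots
    img[y - 2:y + 3, x - 2:x + 3] += 5.0
mask, scale = detect_spots(img)
print(scale, mask.sum())
```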
Defect Detection of Steel Surfaces with Global Adaptive Percentile Thresholding of Gradient Image
NASA Astrophysics Data System (ADS)
Neogi, Nirbhar; Mohanta, Dusmanta K.; Dutta, Pranab K.
2017-12-01
Steel strips are used extensively for white goods, auto bodies and other purposes where surface defects are not acceptable. On-line surface inspection systems can effectively detect and classify defects and help in taking corrective actions. For defect detection, gradient-based methods are widely used to highlight and subsequently segment areas of interest in a surface inspection system. Most of the time, however, segmentation by a fixed-value threshold leads to unsatisfactory results. As defects can be both very small and very large, segmentation of a gradient image based on a fixed percentile threshold can lead to inadequate or excessive segmentation of defective regions. A global adaptive percentile thresholding of the gradient image has been formulated for blister defects and water deposits (a pseudo defect) in steel strips. The developed method adaptively changes the percentile value used for thresholding depending on the number of pixels above specific gray-level values in the gradient image. The method is able to segment defective regions selectively, preserving the characteristics of defects irrespective of their size. The developed method performs better than the Otsu thresholding method and an adaptive thresholding method based on local properties.
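A minimal sketch of the adaptive-percentile idea under assumed parameter values (not the authors'): the percentile used to threshold the gradient image is lowered when many pixels exceed a chosen gray level in the gradient image, so that large defects are not clipped while defect-free images are not over-segmented.

```python
import numpy as np
from scipy import ndimage

def adaptive_percentile_threshold(gray, high_level=60, low_pct=99.5, high_pct=95.0,
                                  pixel_count_cut=2000):
    """Segment a steel-strip image by thresholding its gradient at a percentile
    chosen adaptively from the gradient content (illustrative values only)."""
    grad = np.hypot(ndimage.sobel(gray.astype(float), axis=0),
                    ndimage.sobel(gray.astype(float), axis=1))
    strong_pixels = int((grad > high_level).sum())
    # Many strong-gradient pixels -> likely a large defect -> use a lower percentile
    pct = high_pct if strong_pixels > pixel_count_cut else low_pct
    return grad > np.percentile(grad, pct)

rng = np.random.default_rng(4)
strip = rng.normal(120, 3, (200, 400))
strip[80:120, 150:260] -= 40                       # synthetic blister-like defect
print(adaptive_percentile_threshold(strip).sum())
```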
Type I and Type II error concerns in fMRI research: re-balancing the scale
Cunningham, William A.
2009-01-01
Statistical thresholding (i.e. P-values) in fMRI research has become increasingly conservative over the past decade in an attempt to diminish Type I errors (i.e. false alarms) to a level traditionally allowed in behavioral science research. In this article, we examine the unintended negative consequences of this single-minded devotion to Type I errors: increased Type II errors (i.e. missing true effects), a bias toward studying large rather than small effects, a bias toward observing sensory and motor processes rather than complex cognitive and affective processes and deficient meta-analyses. Power analyses indicate that the reductions in acceptable P-values over time are producing dramatic increases in the Type II error rate. Moreover, the push for a mapwide false discovery rate (FDR) of 0.05 is based on the assumption that this is the FDR in most behavioral research; however, this is an inaccurate assessment of the conventions in actual behavioral research. We report simulations demonstrating that combined intensity and cluster size thresholds such as P < 0.005 with a 10 voxel extent produce a desirable balance between Types I and II error rates. This joint threshold produces high but acceptable Type II error rates and produces a FDR that is comparable to the effective FDR in typical behavioral science articles (while a 20 voxel extent threshold produces an actual FDR of 0.05 with relatively common imaging parameters). We recommend a greater focus on replication and meta-analysis rather than emphasizing single studies as the unit of analysis for establishing scientific truth. From this perspective, Type I errors are self-erasing because they will not replicate, thus allowing for more lenient thresholding to avoid Type II errors. PMID:20035017
Gözübüyük, Gökhan; Koç, Mevlüt; Kaypaklı, Onur; Şahin, Durmuş Yıldıray
2016-11-01
There are not enough data about threshold changes in patients with CRT. In this study, we aimed to investigate the frequency of significant threshold increases of the left ventricular (LV) lead and to determine the clinical, demographic, medical and laboratory parameters associated with threshold increase in CRT-implanted patients. We included 200 CRT-implanted patients (124 males, 76 females; mean age 65.8 ± 10.3 years) in this study. Basal and third-month LV R-wave amplitude, electrode impedance, and threshold values were recorded. A threshold increase was defined as ≥0.1 V and a significant increase as >1 V. Patients were divided into two groups: increased and non-increased LV lead threshold. The number of patients with an increased LV threshold was 68 (37.6%). Furthermore, 8% of patients had a severe increase (≥1 V) in LV threshold. We observed that serum levels of hs-CRP and 1,25(OH)2 vitamin D were independently associated with increased LV threshold. A 1 mg/dl increase in hs-CRP and a 1 mg/dl decrease in vitamin D were associated with 25.3% and 4.5% increases in the odds of an increased LV threshold, respectively. Increased hs-CRP and decreased 1,25(OH)2 vitamin D are the strongest predictors of increased LV lead thresholds. We suggest that hs-CRP and 1,25(OH)2 vitamin D may be used as markers to predict and follow patients with increased thresholds. It may be useful to finalize the CRT procedure with a more appropriate basal threshold in patients with high serum hs-CRP and low 1,25(OH)2 vitamin D levels.
NASA Astrophysics Data System (ADS)
Vartanian, Garen V.
Subconscious vision is a recent focus of the vision science community, brought on by the discovery of a previously unknown photoreceptor in the retina dedicated to driving non-image-forming responses, intrinsically photosensitive retinal ganglion cells (ipRGCs). In addition to accepting inputs from rod and cone photoreceptors, ipRGCs contain their own photopigment, melanopsin, and are considered true photoreceptors. ipRGCs drive various non-image-forming photoresponses, including circadian photoentrainment, melatonin suppression, and pupil constriction. In order to understand more about ipRGC function in humans, we studied its sensitivity to light stimuli in the evening and day. First, we measured the sensitivity threshold of melatonin suppression at night. Using a protocol that enhances data precision, we have found the threshold for human melatonin suppression to be two orders of magnitude lower than previously reported. This finding has far-reaching implications since there is mounting evidence that nocturnal activation of the circadian system can be harmful. Paradoxically, ipRGCs are understimulated during the day. Optimizing daytime non-image-forming photostimulation has health benefits, such as increased alertness, faster reaction times, better sleep quality, and treatment of depression. In order to enhance ipRGC excitation, we aimed to circumvent adaptation (i.e. desensitization) of the photoresponse by using flickering instead of steady light. We find that properly timed flickering light enhances pupillary light reflex significantly when compared to steady light with 9-fold more energy density. Employing our findings, a new form of LED light is proposed to enhance subconscious visual responses at a typical indoor illuminance level. Using the silent substitution technique, a melanopsin-selective flicker is introduced into the light. A linear optimization algorithm is used to maximize the contrast of the subconscious, melanopsin-based response function while keeping conscious, cone-driven responses to the pulsing light fixed. Additional boundary conditions utilizing test color samples as an environmental mimic are introduced to limit the amount of perceived color change in a simulated environment. Two examples of lights are given to illustrate potential applications for general illumination and therapeutic purposes. For the lighting and electronics industry, we hope our study of subconscious-stimulative thresholds at night will better inform their design guidelines for health conscious products.
Extremal Optimization for estimation of the error threshold in topological subsystem codes at T = 0
NASA Astrophysics Data System (ADS)
Millán-Otoya, Jorge E.; Boettcher, Stefan
2014-03-01
Quantum decoherence is a problem that arises in implementations of quantum computing proposals. Topological subsystem codes (TSC) have been suggested as a way to overcome decoherence. These offer a higher optimal error tolerance when compared to typical error-correcting algorithms. A TSC has been translated into a planar Ising spin-glass with constrained bimodal three-spin couplings. This spin-glass has been considered at finite temperature to determine the phase boundary between the unstable phase and the stable phase, where error recovery is possible.[1] We approach the study of the error threshold problem by exploring ground states of this spin-glass with the Extremal Optimization algorithm (EO).[2] EO has proven to be an effective heuristic for exploring ground-state configurations of glassy spin systems.[3]
Houser, Dorian S; Finneran, James J
2006-09-01
Variable stimulus presentation methods are used in auditory evoked potential (AEP) estimates of cetacean hearing sensitivity, each of which might affect stimulus reception and hearing threshold estimates. This study quantifies differences in underwater hearing thresholds obtained by AEP and behavioral means. For AEP estimates, a transducer embedded in a suction cup (jawphone) was coupled to the dolphin's lower jaw for stimulus presentation. Underwater AEP thresholds were obtained for three dolphins in San Diego Bay and for one dolphin in a quiet pool. Thresholds were estimated from the envelope following response at carrier frequencies ranging from 10 to 150 kHz. One animal, with an atypical audiogram, demonstrated significantly greater hearing loss in the right ear than in the left. Across test conditions, the range and average difference between AEP and behavioral threshold estimates were consistent with published comparisons between underwater behavioral and in-air AEP thresholds. AEP thresholds for one animal obtained in-air and in a quiet pool demonstrated a range of differences of -10 to 9 dB (mean = 3 dB). Results suggest that for the frequencies tested, the presentation of sound stimuli through a jawphone, underwater and in-air, results in acceptable differences to AEP threshold estimates.
Defining operating rules for mitigation of drought effects on water supply systems
NASA Astrophysics Data System (ADS)
Rossi, G.; Caporali, E.; Garrote, L.; Federici, G. V.
2012-04-01
Reservoirs play a pivotal role in the regulation and management of water supply systems, especially during drought periods. Optimization of reservoir releases related to drought mitigation rules is therefore particularly important. The hydrologic state of the system is evaluated by defining threshold values expressed in probabilistic terms. Risk deficit curves are used to reduce the ensemble of possible rules for simulation. Threshold values can be linked to specific actions in an operational context at different levels of severity, i.e. normal, pre-alert, alert and emergency scenarios. A simplified model of the water resources system is built to evaluate the threshold values and the management rules. The threshold values are defined considering the probability of satisfying a given fraction of the demand over a certain time horizon, and are validated with a long-term simulation that takes into account the characteristics of the evaluated system. The threshold levels determine curves that define reservoir releases as a function of the existing storage volume, and a demand reduction is associated with each threshold level. The rules to manage the system in drought conditions, the threshold levels and the reductions are optimized using long-term simulations with different hypothesized states of the system. Synthetic flow sequences with the same statistical properties as the historical ones are produced to evaluate the system behaviour. The performance of different reduction values and threshold curves is evaluated using several objective functions and performance indices. The methodology is applied to the Firenze-Prato-Pistoia urban area in central Tuscany, Italy. The considered demand centres are Firenze and Bagno a Ripoli which, according to the 2001 ISTAT census, have a total of 395,000 inhabitants.
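A toy sketch of a threshold-based operating rule of the kind described, using the scenario names from the abstract; the storage thresholds and demand reductions are purely illustrative, not the optimized values for the Firenze-Prato-Pistoia system.

```python
def drought_scenario(storage_fraction, thresholds=(0.6, 0.4, 0.25),
                     reductions=(0.0, 0.10, 0.25, 0.40)):
    """Map current reservoir storage (fraction of capacity) to a drought scenario
    and a demand reduction. Threshold and reduction values are illustrative only."""
    pre_alert, alert, emergency = thresholds
    if storage_fraction > pre_alert:
        return "normal", reductions[0]
    if storage_fraction > alert:
        return "pre-alert", reductions[1]
    if storage_fraction > emergency:
        return "alert", reductions[2]
    return "emergency", reductions[3]

for s in (0.8, 0.5, 0.3, 0.15):
    print(s, drought_scenario(s))
```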
Patient understanding of drug risks: an evaluation of medication guide assessments
Knox, Caitlin; Hampp, Christian; Willy, Mary; Winterstein, Almut G.; Dal Pan, Gerald
2016-01-01
Purpose When a Medication Guide (MG) is part of Risk Evaluation and Mitigation Strategy (REMS), manufacturers assess the effectiveness of MGs through patient surveys, which have not undergone systematic evaluation. We aimed to characterize knowledge rates from these patient surveys, describe their design and respondent characteristics, and explore predictors of acceptable knowledge rates. Methods We analyzed MG assessments submitted to the Food and Drug Administration from September 2008 through June 2012. We evaluated the prevalence of specific characteristics, and calculated knowledge rates, whereby we defined “acceptable knowledge” when ≥ 80% of respondents correctly answered questions about the primary drug risk. Univariate logistic models were used to investigate the predictors of acceptable knowledge rates. Results We analyzed the first completed MG assessment for each drug with a patient survey, resulting in 66 unique MG assessments. The mean knowledge rate was 63.8%, with 20 MG assessments (30.3%) achieving the 80% threshold. Compared to assessments that did not reach acceptable knowledge rates, those that did were more likely associated with additional REMS elements (e.g. Elements to Assure Safe Use or Communication Plans). Other factors, including mean age, reading or understanding the MG, and being offered or accepting counseling were not associated with knowledge rates. There was considerable variation in the design of MG assessments. Conclusions Most MG assessments did not reach the 80% knowledge threshold, but those associated with additional interventions were more likely to achieve it. Our study highlights the need to improve patient-directed information and the methods of assessing it. PMID:25808393
White, Khendi T.; Moorthy, M.V.; Akinkuolie, Akintunde O.; Demler, Olga; Ridker, Paul M; Cook, Nancy R.; Mora, Samia
2015-01-01
Background Nonfasting triglycerides are similar to or superior to fasting triglycerides at predicting cardiovascular events. However, diagnostic cutpoints are based on fasting triglycerides. We examined the optimal cutpoint for increased nonfasting triglycerides. Methods Baseline nonfasting (<8 hours since last meal) samples were obtained from 6,391 participants in the Women’s Health Study, followed prospectively for up to 17 years. The optimal diagnostic threshold for nonfasting triglycerides, determined by logistic regression models using c-statistics and Youden index (sum of sensitivity and specificity minus one), was used to calculate hazard ratios for incident cardiovascular events. Performance was compared to thresholds recommended by the American Heart Association (AHA) and European guidelines. Results The optimal threshold was 175 mg/dL (1.98 mmol/L), corresponding to a c-statistic of 0.656 that was statistically better than the AHA cutpoint of 200 mg/dL (c-statistic of 0.628). For nonfasting triglycerides above and below 175 mg/dL, adjusting for age, hypertension, smoking, hormone use, and menopausal status, the hazard ratio for cardiovascular events was 1.88 (95% CI, 1.52–2.33, P<0.001), and for triglycerides measured at 0–4 and 4–8 hours since last meal, hazard ratios (95%CIs) were 2.05 (1.54– 2.74) and 1.68 (1.21–2.32), respectively. Performance of this optimal cutpoint was validated using ten-fold cross-validation and bootstrapping of multivariable models that included standard risk factors plus total and HDL cholesterol, diabetes, body-mass index, and C-reactive protein. Conclusions In this study of middle aged and older apparently healthy women, we identified a diagnostic threshold for nonfasting hypertriglyceridemia of 175 mg/dL (1.98 mmol/L), with the potential to more accurately identify cases than the currently recommended AHA cutpoint. PMID:26071491
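The Youden-index selection of a cutpoint can be reproduced with standard tools; the snippet below is a generic sketch on synthetic data, not the Women's Health Study analysis.

```python
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(5)
n = 5000
triglycerides = rng.lognormal(mean=4.9, sigma=0.4, size=n)          # synthetic mg/dL values
risk = 1 / (1 + np.exp(-0.01 * (triglycerides - 150)))              # synthetic event probability
event = rng.random(n) < 0.1 * risk                                  # incident CVD events

fpr, tpr, thresholds = roc_curve(event, triglycerides)
youden = tpr - fpr                                                  # sensitivity + specificity - 1
optimal_cut = thresholds[np.argmax(youden)]
print(f"Youden-optimal nonfasting triglyceride cutpoint ~ {optimal_cut:.0f} mg/dL")
```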
Sesay, Musa; Robin, Georges; Tauzin-Fin, Patrick; Sacko, Oumar; Gimbert, Edouard; Vignes, Jean-Rodolphe; Liguoro, Dominique; Nouette-Gaulain, Karine
2015-04-01
The autonomic nervous system is influenced by many stimuli including pain. Heart rate variability (HRV) is an indirect marker of the autonomic nervous system. Because of the paucity of data, this study sought to determine the optimal thresholds of HRV above which patients are in pain after minor spinal surgery (MSS). Secondly, we evaluated the correlation between HRV and the numeric rating scale (NRS). Following institutional review board approval, patients who underwent MSS were assessed in the postanesthesia care unit after extubation. A laptop containing the HRV software was connected to the ECG monitor. The low-frequency band (LF: 0.04 to 0.15 Hz) denoted both sympathetic and parasympathetic activities, whereas the high-frequency band (HF: 0.15 to 0.4 Hz) represented parasympathetic activity. LF/HF was the sympathovagal balance. Pain was quantified by the NRS ranging from 0 (no pain) to 10 (worst imaginable pain). Simultaneously, HRV parameters were noted. Optimal thresholds were calculated using receiver operating characteristic curves with NRS>3 as cutoff. The correlation between HRV and NRS was assessed using the Spearman rank test. We included 120 patients (64 men and 56 women), mean age 51±14 years. The optimal pain threshold values were 298 ms for LF and 3.12 for LF/HF, with no significant change in HF. NRS was correlated with LF (r=0.29, P<0.005) and LF/HF (r=0.31, P<0.001) but not with HF (r=0.09, NS). This study suggests that, after MSS, values of LF > 298 ms and LF/HF > 3.1 denote acute pain (NRS>3). These HRV parameters are significantly correlated with NRS.
Snik, A; Cremers, C
2004-02-01
Typically, an implantable hearing device consists of a transducer that is coupled to the ossicular chain and electronics. The coupling is of major importance. The Vibrant Soundbridge (VSB) is such an implantable device; normally, the VSB transducer is fixed to the ossicular chain by means of a special clip that is crimped around the long process of the incus. In addition to crimping, bone cement was used to optimize the fixation in six patients. Long-term results were compared to those of five controls with crimp fixation alone. To assess the effect of bone cement (SerenoCem, Corinthian Medical Ltd, Nottingham, UK) on hearing thresholds, long-term post-surgery thresholds were compared to pre-surgery thresholds. Bone cement did not have any negative effect. Next, to test the hypothesis that aided thresholds might be better with the use of bone cement, aided thresholds were studied. After correction for the severity of hearing loss, only a small difference was found between the two groups at one frequency, viz. 2 kHz. It was concluded that there was no negative effect of using bone cement; however, there is also no reason to use bone cement in VSB users on a regular basis.
Dantan, Etienne; Foucher, Yohann; Lorent, Marine; Giral, Magali; Tessier, Philippe
2018-06-01
Defining thresholds of prognostic markers is essential for stratified medicine. Such thresholds are mostly estimated from purely statistical measures regardless of patient preferences, potentially leading to unacceptable medical decisions. Quality-Adjusted Life-Years are a widely used preferences-based measure of health outcomes. We develop a time-dependent Quality-Adjusted Life-Years-based expected utility function for censored data that should be maximized to estimate an optimal threshold. We performed a simulation study to compare estimated thresholds when using the proposed expected utility approach and purely statistical estimators. Two applications illustrate the usefulness of the proposed methodology, which was implemented in the R package ROCt (www.divat.fr). First, by reanalysing data of a randomized clinical trial comparing the efficacy of prednisone vs. placebo in patients with chronic liver cirrhosis, we demonstrate the utility of treating patients with a prothrombin level higher than 89%. Second, we reanalyse the data of an observational cohort of kidney transplant recipients and conclude that the Kidney Transplant Failure Score is not useful for adapting the frequency of clinical visits. Applying such a patient-centered methodology may improve the future transfer of novel prognostic scoring systems or markers into clinical practice.
Using machine learning to examine medication adherence thresholds and risk of hospitalization.
Lo-Ciganic, Wei-Hsuan; Donohue, Julie M; Thorpe, Joshua M; Perera, Subashan; Thorpe, Carolyn T; Marcum, Zachary A; Gellad, Walid F
2015-08-01
Quality improvement efforts are frequently tied to patients achieving ≥80% medication adherence. However, there is little empirical evidence that this threshold optimally predicts important health outcomes. To apply machine learning to examine how adherence to oral hypoglycemic medications is associated with avoidance of hospitalizations, and to identify adherence thresholds for optimal discrimination of hospitalization risk. A retrospective cohort study of 33,130 non-dual-eligible Medicaid enrollees with type 2 diabetes. We randomly selected 90% of the cohort (training sample) to develop the prediction algorithm and used the remaining (testing sample) for validation. We applied random survival forests to identify predictors for hospitalization and fit survival trees to empirically derive adherence thresholds that best discriminate hospitalization risk, using the proportion of days covered (PDC). Time to first all-cause and diabetes-related hospitalization. The training and testing samples had similar characteristics (mean age, 48 y; 67% female; mean PDC=0.65). We identified 8 important predictors of all-cause hospitalizations (rank in order): prior hospitalizations/emergency department visit, number of prescriptions, diabetes complications, insulin use, PDC, number of prescribers, Elixhauser index, and eligibility category. The adherence thresholds most discriminating for risk of all-cause hospitalization varied from 46% to 94% according to patient health and medication complexity. PDC was not predictive of hospitalizations in the healthiest or most complex patient subgroups. Adherence thresholds most discriminating of hospitalization risk were not uniformly 80%. Machine-learning approaches may be valuable to identify appropriate patient-specific adherence thresholds for measuring quality of care and targeting nonadherent patients for intervention.
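The random-survival-forest and survival-tree machinery is not reproduced here; as a simplified stand-in, the sketch below fits a depth-one classification tree on synthetic data to find the single PDC split that best separates hospitalized from non-hospitalized patients (a binary outcome replaces the time-to-event analysis).

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(6)
n = 10000
pdc = rng.beta(2, 1, n)                              # proportion of days covered
comorbidity = rng.poisson(2, n)
# Synthetic outcome: hospitalization more likely at low adherence and high comorbidity
p_hosp = 1 / (1 + np.exp(-(1.5 - 3.0 * pdc + 0.3 * comorbidity)))
hospitalized = rng.random(n) < p_hosp

# A depth-one tree on PDC alone: its single split is the most discriminating threshold
stump = DecisionTreeClassifier(max_depth=1).fit(pdc.reshape(-1, 1), hospitalized)
print(f"most discriminating PDC threshold ~ {stump.tree_.threshold[0]:.2f}")
```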
NASA Astrophysics Data System (ADS)
Liang, J.; Liu, D.
2017-12-01
Emergency responses to floods require timely information on water extent, which can be produced by satellite-based remote sensing. As SAR images can be acquired under adverse illumination and weather conditions, they are particularly suitable for delineating water extent during a flood event. Thresholding SAR imagery is one of the most widely used approaches to delineate water extent. However, most studies apply only one threshold to separate water from dry land without considering the complexity and variability of the different dry-land surface types in an image. This paper proposes a new thresholding method for SAR images to delineate water from different land cover types. A probability distribution of SAR backscatter intensity is fitted for each land cover type, including water, before a flood event, and the intersection between two distributions is taken as the threshold separating the two types. To extract water, a set of thresholds is applied to several pairs of land cover types, such as water and urban or water and forest, and the resulting subsets are merged to form the water extent for the SAR image acquired during or after the flood. Experiments show that this land-cover-based thresholding approach outperformed traditional single thresholding by about 5% to 15%. This method has great application potential given the broad acceptance of thresholding-based methods and the availability of land cover data, especially for heterogeneous regions.
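A minimal sketch of the land-cover-specific thresholding idea: fit a backscatter distribution to water and to one dry-land class and take the intersection of the two densities as the class-pair threshold. Gaussian distributions and the dB values below are assumptions for illustration; the paper does not state the distribution family used.

```python
import numpy as np

def gaussian_intersection(mu1, sd1, mu2, sd2):
    """Backscatter value where two fitted normal densities are equal
    (the root lying between the two means is the usual choice)."""
    a = 1 / (2 * sd1 ** 2) - 1 / (2 * sd2 ** 2)
    b = mu2 / sd2 ** 2 - mu1 / sd1 ** 2
    c = mu1 ** 2 / (2 * sd1 ** 2) - mu2 ** 2 / (2 * sd2 ** 2) + np.log(sd1 / sd2)
    roots = np.roots([a, b, c]) if abs(a) > 1e-12 else np.array([-c / b])
    between = [r for r in roots if min(mu1, mu2) <= r <= max(mu1, mu2)]
    return float(between[0]) if between else float(roots[0])

rng = np.random.default_rng(7)
water_db = rng.normal(-18.0, 1.5, 5000)        # synthetic pre-flood water backscatter (dB)
forest_db = rng.normal(-9.0, 2.0, 5000)        # synthetic forest backscatter (dB)
t = gaussian_intersection(water_db.mean(), water_db.std(),
                          forest_db.mean(), forest_db.std())
print(f"water/forest threshold ~ {t:.1f} dB")
```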
Goldrath, Dara A.; Wright, Michael T.; Belitz, Kenneth
2010-01-01
Groundwater quality in the 188-square-mile Colorado River Study unit (COLOR) was investigated October through December 2007 as part of the Priority Basin Project of the California State Water Resources Control Board (SWRCB) Groundwater Ambient Monitoring and Assessment (GAMA) Program. The GAMA Priority Basin Project was developed in response to the Groundwater Quality Monitoring Act of 2001, and the U.S. Geological Survey (USGS) is the technical project lead. The Colorado River study was designed to provide a spatially unbiased assessment of the quality of raw groundwater used for public water supplies within COLOR, and to facilitate statistically consistent comparisons of groundwater quality throughout California. Samples were collected from 28 wells in three study areas in San Bernardino, Riverside, and Imperial Counties. Twenty wells were selected using a spatially distributed, randomized grid-based method to provide statistical representation of the Study unit; these wells are termed 'grid wells'. Eight additional wells were selected to evaluate specific water-quality issues in the study area; these wells are termed `understanding wells.' The groundwater samples were analyzed for organic constituents (volatile organic compounds [VOC], gasoline oxygenates and degradates, pesticides and pesticide degradates, pharmaceutical compounds), constituents of special interest (perchlorate, 1,4-dioxane, and 1,2,3-trichlorpropane [1,2,3-TCP]), naturally occurring inorganic constituents (nutrients, major and minor ions, and trace elements), and radioactive constituents. Concentrations of naturally occurring isotopes (tritium, carbon-14, and stable isotopes of hydrogen and oxygen in water), and dissolved noble gases also were measured to help identify the sources and ages of the sampled groundwater. In total, approximately 220 constituents and water-quality indicators were investigated. Quality-control samples (blanks, replicates, and matrix spikes) were collected at approximately 30 percent of the wells, and the results were used to evaluate the quality of the data obtained from the groundwater samples. Field blanks rarely contained detectable concentrations of any constituent, suggesting that contamination was not a significant source of bias in the data. Differences between replicate samples were within acceptable ranges and matrix-spike recoveries were within acceptable ranges for most compounds. This study did not attempt to evaluate the quality of water delivered to consumers; after withdrawal from the ground, raw groundwater typically is treated, disinfected, or blended with other waters to maintain acceptable water quality. Regulatory thresholds apply to water that is served to the consumer, not to raw groundwater. However, to provide some context for the results, concentrations of constituents measured in the raw groundwater were compared to regulatory and nonregulatory health-based thresholds established by the U.S. Environmental Protection Agency (USEPA) and the California Department of Public Health (CDPH) and to thresholds established for aesthetic concerns by CDPH. Comparisons between data collected for this study and drinking-water thresholds are for illustrative purposes only and do not indicate compliance or noncompliance with those thresholds. The concentrations of most constituents detected in groundwater samples were below drinking-water thresholds. 
Volatile organic compounds (VOC) were detected in approximately 35 percent of grid well samples; all concentrations were below health-based thresholds. Pesticides and pesticide degradates were detected in about 20 percent of all samples; detections were below health-based thresholds. No concentrations of constituents of special interest or nutrients were detected above health-based thresholds. Most of the major and minor ion constituents sampled do not have health-based thresholds; the exception is chloride. Concentrations of chloride, sulfate, and total dis
Betran, Ana Pilar; Torloni, Maria Regina; Zhang, Jun; Ye, Jiangfeng; Mikolajczyk, Rafael; Deneux-Tharaux, Catherine; Oladapo, Olufemi Taiwo; Souza, João Paulo; Tunçalp, Özge; Vogel, Joshua Peter; Gülmezoglu, Ahmet Metin
2015-06-21
In 1985, WHO stated that there was no justification for caesarean section (CS) rates higher than 10-15% at the population level. While the CS rates worldwide have continued to increase in an unprecedented manner over the subsequent three decades, concern has been raised about the validity of the 1985 landmark statement. We conducted a systematic review to identify, critically appraise and synthesize the analyses of the ecologic association between CS rates and maternal, neonatal and infant outcomes. Four electronic databases were searched for ecologic studies published between 2000 and 2014 that analysed the possible association between CS rates and maternal, neonatal or infant mortality or morbidity. Two reviewers performed study selection, data extraction and quality assessment independently. We identified 11,832 unique citations and eight studies were included in the review. Seven studies correlated CS rates with maternal mortality, five with neonatal mortality, four with infant mortality, two with low birth weight (LBW) and one with stillbirths. Except for one, all studies were cross-sectional in design and five were global analyses of national-level CS rates versus mortality outcomes. Although the overall quality of the studies was acceptable, only two studies controlled for socio-economic factors and none controlled for clinical or demographic characteristics of the population. In unadjusted analyses, authors found a strong inverse relationship between CS rates and the mortality outcomes so that maternal, neonatal and infant mortality decrease as CS rates increase up to a certain threshold. In the eight studies included in this review, this threshold was at CS rates between 9 and 16%. However, in the two studies that adjusted for socio-economic factors, this relationship was either weakened or disappeared after controlling for these confounders. CS rates above the threshold of 9-16% were not associated with decreases in mortality outcomes regardless of adjustments. Our findings could be interpreted to mean that at CS rates below this threshold, socio-economic development may be driving the ecologic association between CS rates and mortality. On the other hand, at rates higher than this threshold, there is no association between CS and mortality outcomes regardless of adjustment. The ecological association between CS rates and relevant morbidity outcomes needs to be evaluated before drawing more definite conclusions at population level.
Kassanjee, Reshma; Pilcher, Christopher D; Busch, Michael P; Murphy, Gary; Facente, Shelley N; Keating, Sheila M; Mckinney, Elaine; Marson, Kara; Price, Matthew A; Martin, Jeffrey N; Little, Susan J; Hecht, Frederick M; Kallas, Esper G; Welte, Alex
2016-01-01
Objective Assays for classifying HIV infections as ‘recent’ or ‘non-recent’ for incidence surveillance fail to simultaneously achieve large mean durations of ‘recent’ infection (MDRIs) and low ‘false-recent’ rates (FRRs), particularly in virally suppressed persons. The potential for optimizing recent infection testing algorithms (RITAs), by introducing viral load criteria and tuning thresholds used to dichotomize quantitative measures, is explored. Design The Consortium for the Evaluation and Performance of HIV Incidence Assays characterized over 2000 possible RITAs constructed from seven assays (LAg, BED, Less-sensitive Vitros, Vitros Avidity, BioRad Avidity, Architect Avidity and Geenius) applied to 2500 diverse specimens. Methods MDRIs were estimated using regression, and FRRs as observed ‘recent’ proportions, in various specimen sets. Context-specific FRRs were estimated for hypothetical scenarios. FRRs were made directly comparable by constructing RITAs with the same MDRI through the tuning of thresholds. RITA utility was summarized by the precision of incidence estimation. Results All assays produce high FRRs amongst treated subjects and elite controllers (10%-80%). Viral load testing reduces FRRs, but diminishes MDRIs. Context-specific FRRs vary substantially by scenario – BioRad Avidity and LAg provided the lowest FRRs and highest incidence precision in scenarios considered. Conclusions The introduction of a low viral load threshold provides crucial improvements in RITAs. However, it does not eliminate non-zero FRRs, and MDRIs must be consistently estimated. The tuning of thresholds is essential for comparing and optimizing the use of assays. The translation of directly measured FRRs into context-specific FRRs critically affects their magnitudes and our understanding of the utility of assays. PMID:27454561
NASA Astrophysics Data System (ADS)
Ye, Ming; Li, Yun; He, Yongning; Daneshmand, Mojgan
2017-05-01
With the development of space technology, microwave components with increased power handling capability and reduced weight are urgently required. In this work, perforated waveguide technology is proposed to suppress the multipactor effect in high-power microwave components. Meanwhile, this novel method has the advantage of reducing component weight, which gives it great potential for space applications. The perforated part of the waveguide components can be seen as an electron absorber (namely, its total electron emission yield is zero) since most of the electrons impacting on this part will leave the components. Based on thoroughly benchmarked numerical simulation procedures, we simulated an S band and an X band waveguide transformer to conceptually verify this idea. Both electron dynamic simulations and electrical loss simulations demonstrate that the perforation technology can improve the multipactor threshold by at least ˜8 dB while maintaining an acceptable insertion loss level compared with the un-perforated components. We also found that components with a larger minimum gap achieve multipactor suppression more easily. This effect is interpreted by a parallel plate waveguide model. Moreover, to improve the multipactor threshold of the X band waveguide transformer with a minimum gap of ˜0.1 mm, we proposed a perforation structure with a sloped edge and explained its mechanism. Future study will focus on further optimization of the perforation structure, size, and distribution to maximize the overall performance of microwave components.
The enigma of inhaled salbutamol and sport: unresolved after 45 years.
Fitch, Ken D
2017-07-01
During the past 45 years, there have been more changes on the World Anti-Doping Agency's (WADA) Prohibited List (the List) to the status of inhaled salbutamol than any other substance. With 658 athletes, 6.1% of all participating athletes approved to inhale salbutamol at the 2008 Beijing Games, it is one of the medications used most frequently by Olympic athletes. Nevertheless, since the 2008 Games, WADA has made numerous changes to inhaled salbutamol on the List including prohibiting its use, then a year later permitting it without prior notification and recommending a pharmacokinetic study if an athlete exceeds the urinary threshold of 1000 ng/mL. Recently, an elite athlete undertook two pharmacokinetic studies and the results have raised several questions. These include whether WADA should continue to permit nebulized salbutamol as an acceptable method of inhalation and there is some justification for nebulized salbutamol to be prohibited in sport. Another question is whether the modified advisory on salbutamol in the 2017 List appropriately informs athletes of the risks of exceeding the urinary threshold and the recent changes may not inform athletes optimally. Finally, concern is expressed at the persistent failure of WADA to apply a correction down to a specific gravity of 1.020 when an exogenous substance is identified in the urine of a dehydrated athlete. It is recommended that this should be implemented. Copyright © 2017 John Wiley & Sons, Ltd.
Quality Aware Compression of Electrocardiogram Using Principal Component Analysis.
Gupta, Rajarshi
2016-05-01
Electrocardiogram (ECG) compression finds wide application in various patient monitoring purposes. Quality control in ECG compression ensures reconstruction quality and its clinical acceptance for diagnostic decision making. In this paper, a quality aware compression method of single lead ECG is described using principal component analysis (PCA). After pre-processing, beat extraction and PCA decomposition, two independent quality criteria, namely, bit rate control (BRC) or error control (EC) criteria were set to select optimal principal components, eigenvectors and their quantization level to achieve desired bit rate or error measure. The selected principal components and eigenvectors were finally compressed using a modified delta and Huffman encoder. The algorithms were validated with 32 sets of MIT Arrhythmia data and 60 normal and 30 sets of diagnostic ECG data from PTB Diagnostic ECG data ptbdb, all at 1 kHz sampling. For BRC with a CR threshold of 40, an average Compression Ratio (CR), percentage root mean squared difference normalized (PRDN) and maximum absolute error (MAE) of 50.74, 16.22 and 0.243 mV respectively were obtained. For EC with an upper limit of 5 % PRDN and 0.1 mV MAE, the average CR, PRDN and MAE of 9.48, 4.13 and 0.049 mV respectively were obtained. For mitdb data 117, the reconstruction quality could be preserved up to CR of 68.96 by extending the BRC threshold. The proposed method yields better results than recently published works on quality controlled ECG compression.
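A toy sketch of the error-control (EC) flavour of the method: decompose an aligned-beat matrix with PCA and keep the smallest number of principal components whose reconstruction meets a PRDN target. Quantization, the delta/Huffman encoder and the beat-extraction pre-processing are omitted, and the synthetic beats are placeholders.

```python
import numpy as np

def prdn(original, reconstructed):
    """Percentage root-mean-squared difference, normalized (mean removed)."""
    centred = original - original.mean()
    return 100.0 * np.linalg.norm(original - reconstructed) / np.linalg.norm(centred)

def pca_compress(beats, prdn_target=5.0):
    """Keep the fewest principal components meeting the PRDN target.
    `beats` is an (n_beats, samples_per_beat) matrix of aligned ECG beats."""
    mean = beats.mean(axis=0)
    centred = beats - mean
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    for k in range(1, vt.shape[0] + 1):
        coeffs = centred @ vt[:k].T
        recon = coeffs @ vt[:k] + mean
        if prdn(beats, recon) <= prdn_target:
            return k, coeffs, vt[:k], mean
    return vt.shape[0], centred @ vt.T, vt, mean

# Synthetic "beats": a common template plus small per-beat variation
rng = np.random.default_rng(8)
t = np.linspace(0, 1, 360)
template = np.exp(-((t - 0.5) ** 2) / 0.001)
beats = template + 0.05 * rng.normal(size=(100, 360))
k, *_ = pca_compress(beats)
print(f"{k} principal components meet the 5% PRDN target")
```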
Bashir, Mustafa R; Weber, Paul W; Husarik, Daniela B; Howle, Laurens E; Nelson, Rendon C
2012-08-01
To assess whether a scan triggering technique based on the slope of the time-attenuation curve combined with table speed optimization may improve arterial enhancement in aortic CT angiography compared to conventional threshold-based triggering techniques. Measurements of arterial enhancement were performed in a physiologic flow phantom over a range of simulated cardiac outputs (2.2-8.1 L/min) using contrast media boluses of 80 and 150 mL injected at 4 mL/s. These measurements were used to construct computer models of aortic attenuation in CT angiography, using cardiac output, aortic diameter, and CT table speed as input parameters. In-plane enhancement was calculated for normal and aneurysmal aortic diameters. Calculated arterial enhancement was poor (<150 HU) along most of the scan length using the threshold-based triggering technique for low cardiac outputs and the aneurysmal aorta model. Implementation of the slope-based triggering technique with table speed optimization improved enhancement in all scenarios and yielded good- (>200 HU; 13/16 scenarios) to excellent-quality (>300 HU; 3/16 scenarios) enhancement in all cases. Slope-based triggering with table speed optimization may improve the technical quality of aortic CT angiography over conventional threshold-based techniques, and may reduce technical failures related to low cardiac output and slow flow through an aneurysmal aorta.
An n -material thresholding method for improving integerness of solutions in topology optimization
Watts, Seth; Tortorelli, Daniel A.
2016-04-10
It is common in solving topology optimization problems to replace an integer-valued characteristic function design field with the material volume fraction field, a real-valued approximation of the design field that permits "fictitious" mixtures of materials during intermediate iterations in the optimization process. This is reasonable so long as one can interpolate properties for such materials and so long as the final design is integer valued. For this purpose, we present a method for smoothly thresholding the volume fractions of an arbitrary number of material phases which specify the design. This method is trivial for two-material design problems, for example, the canonical topology design problem of specifying the presence or absence of a single material within a domain, but it becomes more complex when three or more materials are used, as often occurs in material design problems. We take advantage of the similarity in properties between the volume fractions and the barycentric coordinates on a simplex to derive a thresholding method which is applicable to an arbitrary number of materials. As we show in a sensitivity analysis, this method has smooth derivatives, allowing it to be used in gradient-based optimization algorithms. Finally, we present results, which show synergistic effects when used with Solid Isotropic Material with Penalty and Rational Approximation of Material Properties material interpolation functions, popular methods of ensuring integerness of solutions.
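The authors' barycentric construction is not reproduced here; the snippet below is only an illustrative smooth sharpening on the simplex (normalized powers of the volume fractions) showing the kind of differentiable push-toward-a-vertex behaviour that an n-material thresholding needs.

```python
import numpy as np

def sharpen_fractions(x, p=4.0):
    """Smoothly push volume fractions (rows summing to 1) toward the dominant
    material; p controls how hard the thresholding is. Illustrative only."""
    xp = np.clip(x, 1e-9, None) ** p
    return xp / xp.sum(axis=-1, keepdims=True)

mix = np.array([[0.5, 0.3, 0.2],       # ambiguous three-material mixture
                [0.9, 0.05, 0.05]])    # nearly pure material 1
print(sharpen_fractions(mix, p=2))
print(sharpen_fractions(mix, p=8))     # closer to an integer-valued design
```

Increasing p over the course of the optimization progressively removes intermediate mixtures, analogous to continuation on the penalization exponent in SIMP-style schemes.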
Optimal Clustering in Graphs with Weighted Edges: A Unified Approach to the Threshold Problem.
ERIC Educational Resources Information Center
Goetschel, Roy; Voxman, William
1987-01-01
Relations on a finite set V are viewed as weighted graphs. Using the language of graph theory, two methods of partitioning V are examined: selecting threshold values and applying them to a maximal weighted spanning forest, and using a parametric linear program to obtain a most adhesive partition. (Author/EM)
ERIC Educational Resources Information Center
Antunes, Amanda H.; Alberton, Cristine L.; Finatto, Paula; Pinto, Stephanie S.; Cadore, Eduardo L.; Zaffari, Paula; Kruel, Luiz F. M.
2015-01-01
Purpose: Maximal tests conducted on land are not suitable for the prescription of aquatic exercises, which makes it difficult to optimize the intensity of water aerobics classes. The aim of the present study was to evaluate the maximal and anaerobic threshold cardiorespiratory responses to 6 water aerobics exercises. Volunteers performed 3 of the…
Diagnostic performance of BMI percentiles to identify adolescents with metabolic syndrome.
Laurson, Kelly R; Welk, Gregory J; Eisenmann, Joey C
2014-02-01
To compare the diagnostic performance of the Centers for Disease Control and Prevention (CDC) and FITNESSGRAM (FGram) BMI standards for quantifying metabolic risk in youth. Adolescents in the NHANES (n = 3385) were measured for anthropometric variables and metabolic risk factors. BMI percentiles were calculated, and youth were categorized by weight status (using CDC and FGram thresholds). Participants were also categorized by presence or absence of metabolic syndrome. The CDC and FGram standards were compared by prevalence of metabolic abnormalities, various diagnostic criteria, and odds of metabolic syndrome. Receiver operating characteristic curves were also created to identify optimal BMI percentiles to detect metabolic syndrome. The prevalence of metabolic syndrome in obese youth was 19% to 35%, compared with <2% in the normal-weight groups. The odds of metabolic syndrome for obese boys and girls were 46 to 67 and 19 to 22 times greater, respectively, than for normal-weight youth. The receiver operating characteristic analyses identified optimal thresholds similar to the CDC standards for boys and the FGram standards for girls. Overall, BMI thresholds were more strongly associated with metabolic syndrome in boys than in girls. Both the CDC and FGram standards are predictive of metabolic syndrome. The diagnostic utility of the CDC thresholds outperformed the FGram values for boys, whereas FGram standards were slightly better thresholds for girls. The use of a common set of thresholds for school and clinical applications would provide advantages for public health and clinical research and practice.
NASA Astrophysics Data System (ADS)
Yang, Wen; Fung, Richard Y. K.
2014-06-01
This article considers an order acceptance problem in a make-to-stock manufacturing system with multiple demand classes in a finite time horizon. Demands in different periods are random variables and are independent of one another, and replenishments of inventory deviate from the scheduled quantities. The objective of this work is to maximize the expected net profit over the planning horizon by deciding the fraction of the demand that is going to be fulfilled. This article presents a stochastic order acceptance optimization model and analyses the existence of the optimal promising policies. An example of a discrete problem is used to illustrate the policies by applying the dynamic programming method. In order to solve the continuous problems, a heuristic algorithm based on stochastic approximation (HASA) is developed. Finally, the computational results of a case example illustrate the effectiveness and efficiency of the HASA approach, and make the application of the proposed model readily acceptable.
Bettinger, Nicolas; Khalique, Omar K; Krepp, Joseph M; Hamid, Nadira B; Bae, David J; Pulerwitz, Todd C; Liao, Ming; Hahn, Rebecca T; Vahl, Torsten P; Nazif, Tamim M; George, Isaac; Leon, Martin B; Einstein, Andrew J; Kodali, Susheel K
The threshold for the optimal computed tomography (CT) number in Hounsfield Units (HU) to quantify aortic valvular calcium on contrast-enhanced scans has not been standardized. Our aim was to find the most accurate threshold to predict paravalvular regurgitation (PVR) after transcatheter aortic valve replacement (TAVR). 104 patients who underwent TAVR with the CoreValve prosthesis were studied retrospectively. Luminal attenuation (LA) in HU was measured at the level of the aortic annulus. The calcium volume score for the aortic valvular complex was measured using six threshold cutoffs (650 HU, 850 HU, LA × 1.25, LA × 1.5, LA + 50, LA + 100). Receiver-operating characteristic (ROC) analysis was performed to assess the predictive value for > mild PVR (n = 16). Multivariable analysis was performed to determine the accuracy in predicting > mild PVR after adjustment for depth and perimeter oversizing. ROC analysis showed lower area under the curve (AUC) values for fixed threshold cutoffs (650 or 850 HU) compared to thresholds relative to LA. The LA + 100 threshold had the highest AUC (0.81), and its AUC was higher than that of all other protocols studied, except the LA × 1.25 and LA + 50 protocols, where the difference approached statistical significance (p = 0.05 and 0.068, respectively). Multivariable analysis showed calcium volume determined by the LA × 1.25, LA × 1.5, LA + 50, and LA + 100 HU protocols to independently predict PVR. Calcium volume scoring thresholds which are relative to LA are more predictive of PVR post-TAVR than those which use fixed cutoffs. A threshold of LA + 100 HU had the highest predictive value. Copyright © 2017 Society of Cardiovascular Computed Tomography. Published by Elsevier Inc. All rights reserved.
Mechanisms of carcinogenesis: dose response
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gehring, P.J.; Blau, G.E.
There is great controversy over whether the carcinogenicity of chemicals is dose-dependent and whether a threshold dose exists below which cancer will not be induced by exposure. Evidence for dose-dependency exists and is believed to be accepted generally if extricated as it should be from the threshold concept. The threshold concept conflict is not likely to be resolved in the foreseeable future; proponents and opponents argue their case in a manner similar to those arguing religion. In this paper the various arguments are reviewed. Subsequently, a chemical process model for carcinogenesis is developed based on the generally accepted evidence that the carcinogenic activity of many chemicals can be related to electrophilic alkylation of DNA. Using this model, some incidence of cancer, albeit negligible, will be predicted regardless of how low the dose is. However, the model reveals that the incidence of cancer induced by real-life exposures is likely to be greatly overestimated by currently used stochastic statistical extrapolations. Even more important, modeling of the chemical processes involved in the fate of a carcinogenic chemical in the body reveals experimental approaches to elucidating the mechanism(s) of carcinogenesis and ultimately a more scientifically sound basis for assessing the hazard of low-level exposure to a chemical carcinogen.
The impact of uncertainty on optimal emission policies
NASA Astrophysics Data System (ADS)
Botta, Nicola; Jansson, Patrik; Ionescu, Cezar
2018-05-01
We apply a computational framework for specifying and solving sequential decision problems to study the impact of three kinds of uncertainties on optimal emission policies in a stylized sequential emission problem. We find that uncertainties about the implementability of decisions on emission reductions (or increases) have a greater impact on optimal policies than uncertainties about the availability of effective emission reduction technologies and uncertainties about the implications of exceeding critical cumulated emission thresholds. The results show that uncertainties about the implementability of decisions on emission reductions (or increases) call for more precautionary policies. In other words, delaying emission reductions to the point in time when effective technologies will become available is suboptimal when these uncertainties are accounted for rigorously. By contrast, uncertainties about the implications of exceeding critical cumulated emission thresholds tend to make early emission reductions less rewarding.
Kumagai, Naoki H; Yamano, Hiroya
2018-01-01
Coral reefs are one of the world's most threatened ecosystems, with global and local stressors contributing to their decline. Excessive sea-surface temperatures (SSTs) can cause coral bleaching, resulting in coral death and decreases in coral cover. A SST threshold of 1 °C over the climatological maximum is widely used to predict coral bleaching. In this study, we refined thermal indices predicting coral bleaching at high-spatial resolution (1 km) by statistically optimizing thermal thresholds, as well as considering other environmental influences on bleaching such as ultraviolet (UV) radiation, water turbidity, and cooling effects. We used a coral bleaching dataset derived from the web-based monitoring system Sango Map Project, at scales appropriate for the local and regional conservation of Japanese coral reefs. We recorded coral bleaching events in the years 2004-2016 in Japan. We revealed the influence of multiple factors on the ability to predict coral bleaching, including selection of thermal indices, statistical optimization of thermal thresholds, quantification of multiple environmental influences, and use of multiple modeling methods (generalized linear models and random forests). After optimization, differences in predictive ability among thermal indices were negligible. Thermal index, UV radiation, water turbidity, and cooling effects were important predictors of the occurrence of coral bleaching. Predictions based on the best model revealed that coral reefs in Japan have experienced recent and widespread bleaching. A practical method to reduce bleaching frequency by screening UV radiation was also demonstrated in this paper.
Secure Distributed Detection under Energy Constraint in IoT-Oriented Sensor Networks.
Zhang, Guomei; Sun, Hao
2016-12-16
We study the secure distributed detection problems under energy constraint for IoT-oriented sensor networks. The conventional channel-aware encryption (CAE) is an efficient physical-layer secure distributed detection scheme in light of its energy efficiency, good scalability and robustness over diverse eavesdropping scenarios. However, in the CAE scheme, it remains an open problem how to optimize the key thresholds for the estimated channel gain, which are used to determine the sensor's reporting action. Moreover, the CAE scheme does not jointly consider the accuracy of local detection results in determining whether a sensor should stay dormant. To solve these problems, we first analyze the error probability and derive the optimal thresholds in the CAE scheme under a specified energy constraint. These results build a convenient mathematical framework for our further innovative design. Under this framework, we propose a hybrid secure distributed detection scheme. Our proposal can satisfy the energy constraint by keeping some sensors inactive according to the local detection confidence level, which is characterized by likelihood ratio. Meanwhile, security is guaranteed through randomly flipping the local decisions forwarded to the fusion center based on the channel amplitude. We further optimize the key parameters of our hybrid scheme, including two local decision thresholds and one channel comparison threshold. Performance evaluation results demonstrate that our hybrid scheme outperforms the CAE under stringent energy constraints, especially in the high signal-to-noise ratio scenario, while the security is still assured.
Optimal Sequential Rules for Computer-Based Instruction.
ERIC Educational Resources Information Center
Vos, Hans J.
1998-01-01
Formulates sequential rules for adapting the appropriate amount of instruction to learning needs in the context of computer-based instruction. Topics include Bayesian decision theory, threshold and linear-utility structure, psychometric model, optimal sequential number of test questions, and an empirical example of sequential instructional…
Simen, Patrick; Contreras, David; Buck, Cara; Hu, Peter; Holmes, Philip; Cohen, Jonathan D
2009-12-01
The drift-diffusion model (DDM) implements an optimal decision procedure for stationary, 2-alternative forced-choice tasks. The height of a decision threshold applied to accumulating information on each trial determines a speed-accuracy tradeoff (SAT) for the DDM, thereby accounting for a ubiquitous feature of human performance in speeded response tasks. However, little is known about how participants settle on particular tradeoffs. One possibility is that they select SATs that maximize a subjective rate of reward earned for performance. For the DDM, there exist unique, reward-rate-maximizing values for its threshold and starting point parameters in free-response tasks that reward correct responses (R. Bogacz, E. Brown, J. Moehlis, P. Holmes, & J. D. Cohen, 2006). These optimal values vary as a function of response-stimulus interval, prior stimulus probability, and relative reward magnitude for correct responses. We tested the resulting quantitative predictions regarding response time, accuracy, and response bias under these task manipulations and found that grouped data conformed well to the predictions of an optimally parameterized DDM.
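A minimal simulation sketch of the DDM's speed-accuracy tradeoff follows: raising the decision threshold slows responses but raises accuracy, and a reward rate can be computed per threshold. All parameter values are illustrative assumptions, not values fitted to the study's data.

```python
import random

# Illustrative drift-diffusion model (DDM) simulation: a single threshold setting
# trades speed against accuracy. Parameter values are assumptions for demonstration.

def ddm_trial(drift=0.2, threshold=1.0, noise=1.0, dt=0.001, non_decision=0.3):
    """Simulate one trial; return (response_time_seconds, correct)."""
    x, t = 0.0, 0.0
    while abs(x) < threshold:
        x += drift * dt + noise * (dt ** 0.5) * random.gauss(0.0, 1.0)
        t += dt
    return t + non_decision, x > 0

def speed_accuracy(threshold, n_trials=2000):
    trials = [ddm_trial(threshold=threshold) for _ in range(n_trials)]
    mean_rt = sum(rt for rt, _ in trials) / n_trials
    accuracy = sum(ok for _, ok in trials) / n_trials
    return mean_rt, accuracy

if __name__ == "__main__":
    rsi = 1.0  # assumed response-stimulus interval for a free-response task
    for thr in (0.5, 1.0, 1.5):
        rt, acc = speed_accuracy(thr)
        reward_rate = acc / (rt + rsi)  # correct responses earned per second
        print(f"threshold={thr}: mean RT={rt:.2f}s accuracy={acc:.2f} reward rate={reward_rate:.2f}/s")
```

Scanning such a grid of thresholds and picking the one with the highest reward rate mimics, in simplified form, the reward-rate-maximizing threshold discussed above.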
Insecticides for suppression of Nylanderia fulva
USDA-ARS?s Scientific Manuscript database
Nylanderia fulva (Mayr) is an invasive ant that is a serious pest in the southern United States. Pest control operators and homeowners are challenged to manage pest populations below acceptable thresholds. Contact and bait insecticides are key components of an Integrated Pest Management (IPM) strate...
Air Traffic Controller Acceptability of Unmanned Aircraft System Detect-and-Avoid Thresholds
NASA Technical Reports Server (NTRS)
Mueller, Eric R.; Isaacson, Douglas R.; Stevens, Derek
2016-01-01
A human-in-the-loop experiment was conducted with 15 retired air traffic controllers to investigate two research questions: (a) what procedures are appropriate for the use of unmanned aircraft system (UAS) detect-and-avoid systems, and (b) how long in advance of a predicted close encounter should pilots request or execute a separation maneuver. The controller participants managed a busy Oakland air route traffic control sector with mixed commercial/general aviation and manned/UAS traffic, providing separation services, miles-in-trail restrictions and issuing traffic advisories. Controllers filled out post-scenario and post-simulation questionnaires, and metrics were collected on the acceptability of procedural options and temporal thresholds. The states of aircraft were also recorded when controllers issued traffic advisories. Subjective feedback indicated a strong preference for pilots to request maneuvers to remain well clear from intruder aircraft rather than deviate from their IFR clearance. Controllers also reported that maneuvering at 120 seconds until closest point of approach (CPA) was too early; maneuvers executed with less than 90 seconds until CPA were more acceptable. The magnitudes of the requested maneuvers were frequently judged to be too large, indicating a possible discrepancy between the quantitative UAS well clear standard and the one employed subjectively by manned pilots. The ranges between pairs of aircraft and the times to CPA at which traffic advisories were issued were used to construct empirical probability distributions of those metrics. Given these distributions, we propose that UAS pilots wait until an intruder aircraft is approximately 80 seconds to CPA or 6 nmi away before requesting a maneuver, and maneuver immediately if the intruder is within 60 seconds and 4 nmi. These thresholds should make the use of UAS detect and avoid systems compatible with current airspace procedures and controller expectations.
Is there a kink in consumers' threshold value for cost-effectiveness in health care?
O'Brien, Bernie J; Gertsen, Kirsten; Willan, Andrew R; Faulkner, Lisa A
2002-03-01
A reproducible observation is that consumers' willingness-to-accept (WTA) monetary compensation to forgo a program is greater than their stated willingness-to-pay (WTP) for the same benefit. Several explanations exist, including the psychological principle that the utility of losses weighs heavier than gains. We sought to quantify the WTP-WTA disparity from published literature and explore implications for cost-effectiveness analysis accept-reject thresholds in the south-west quadrant of the cost-effectiveness plane (less effect, less cost). We reviewed published studies (health and non-health) to estimate the ratio of WTA to WTP for the same program benefit for each study and to determine if WTA is consistently greater than WTP in the literature. WTA/WTP ratios were greater than unity for every study we reviewed. The ratios ranged from 3.2 to 89.4 for environmental studies (n=7), 1.9 to 6.4 for health care studies (n=2), 1.1 to 3.6 for safety studies (n=4) and 1.3 to 2.6 for experimental studies (n=7). Given that WTA is greater than WTP based on individual preferences, should not societal preferences used to determine cost-effectiveness thresholds reflect this disparity? Current convention in cost-effectiveness analysis is that any given accept-rejection criterion (e.g. $50 k/QALY gained) is symmetric - a straight line through the origin of the cost-effectiveness plane. The WTA-WTP evidence suggests a downward 'kink' through the origin for the south-west quadrant, such that the 'selling price' of a QALY is greater than the 'buying price'. The possibility of 'kinky cost-effectiveness' decision rules and the size of the kink merits further exploration. Copyright 2002 John Wiley & Sons, Ltd.
Multi-mode ultrasonic welding control and optimization
Tang, Jason C.H.; Cai, Wayne W
2013-05-28
A system and method for providing multi-mode control of an ultrasonic welding system. In one embodiment, the control modes include the energy of the weld, the time of the welding process and the compression displacement of the parts being welded during the welding process. The method includes providing thresholds for each of the modes, and terminating the welding process after the threshold for each mode has been reached, the threshold for more than one mode has been reached or the threshold for one of the modes has been reached. The welding control can be either open-loop or closed-loop, where the open-loop process provides the mode thresholds and once one or more of those thresholds is reached the welding process is terminated. The closed-loop control provides feedback of the weld energy and/or the compression displacement so that the weld power and/or weld pressure can be increased or decreased accordingly.
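A minimal sketch of the multi-mode termination logic described above, assuming hypothetical threshold values and a configurable "any / all / more than one" stopping policy; it is not the patented controller.

```python
from dataclasses import dataclass

# Sketch of open-loop, multi-mode weld termination: stop when energy, time, or
# displacement thresholds are reached. Values and the stopping policy are assumptions.

@dataclass
class Thresholds:
    energy_j: float = 1500.0        # weld energy threshold
    time_s: float = 2.0             # weld time threshold
    displacement_mm: float = 0.35   # compression displacement threshold

def should_terminate(energy_j, time_s, displacement_mm, th: Thresholds, policy="any"):
    """Return True when the configured combination of mode thresholds is reached."""
    reached = [energy_j >= th.energy_j,
               time_s >= th.time_s,
               displacement_mm >= th.displacement_mm]
    if policy == "any":       # stop when one mode reaches its threshold
        return any(reached)
    if policy == "all":       # stop only when every mode has reached its threshold
        return all(reached)
    return sum(reached) >= 2  # stop when more than one mode has been reached

if __name__ == "__main__":
    # Example: poll (energy, time, displacement) readings during a weld (made-up values).
    readings = [(400.0, 0.5, 0.10), (900.0, 1.1, 0.22), (1550.0, 1.6, 0.30)]
    th = Thresholds()
    for e, t, d in readings:
        if should_terminate(e, t, d, th, policy="any"):
            print(f"terminate weld at t={t}s (energy={e} J, displacement={d} mm)")
            break
```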
The safe volume threshold for chest drain removal following pulmonary resection.
Yap, Kok Hooi; Soon, Jia Lin; Ong, Boon Hean; Loh, Yee Jim
2017-11-01
A best evidence topic in thoracic surgery was written according to a structured protocol. The question addressed was 'In patients undergoing pulmonary resection, is there a safe drainage volume threshold for chest drain removal?' Altogether 1054 papers were found, of which 5 papers represented the best evidence. The authors, journal, date and country of publication, patient group studied, study type, relevant outcomes and results of these papers are tabulated. Chest drainage threshold, where used, ranged from 250 to 500 ml/day. Both randomized controlled trials showed no significant difference in reintervention rates with a higher chest drainage volume threshold. Four studies that performed analysis on other complications showed no statistically significant difference with a higher chest drainage volume threshold. Four studies evaluating length of hospital stay showed reduced or no difference in the length of stay with a higher chest drainage volume threshold. Two cohort studies reported a mortality rate of 0-0.01% with a higher chest drainage volume threshold. We conclude that early chest drain removal after pulmonary resection, accepting a higher chest drainage volume threshold of 250-500 ml/day, is safe, and may result in shorter hospital stay without increasing reintervention, morbidity or mortality. © The Author 2017. Published by Oxford University Press on behalf of the European Association for Cardio-Thoracic Surgery. All rights reserved.
NASA Astrophysics Data System (ADS)
Kaewkasi, Pitchaya; Widjaja, Joewono; Uozumi, Jun
2007-03-01
Effects of threshold value on detection performance of the modified amplitude-modulated joint transform correlator are quantitatively studied using computer simulation. Fingerprint and human face images are used as test scenes in the presence of noise and a contrast difference. Simulation results demonstrate that this correlator improves detection performance for both types of image used, but more so for human face images. Optimal detection of low-contrast human face images obscured by strong noise can be obtained by selecting an appropriate threshold value.
Grosso, Matthew J; Frangiamore, Salvatore J; Ricchetti, Eric T; Bauer, Thomas W; Iannotti, Joseph P
2014-03-19
Propionibacterium acnes is a clinically relevant pathogen with total shoulder arthroplasty. The purpose of this study was to determine the sensitivity of frozen section histology in identifying patients with Propionibacterium acnes infection during revision total shoulder arthroplasty and investigate various diagnostic thresholds of acute inflammation that may improve frozen section performance. We reviewed the results of forty-five patients who underwent revision total shoulder arthroplasty. Patients were divided into the non-infection group (n = 15), the Propionibacterium acnes infection group (n = 18), and the other infection group (n = 12). Routine preoperative testing was performed and intraoperative tissue culture and frozen section histology were collected for each patient. The histologic diagnosis was determined by one pathologist for each of the four different thresholds. The absolute maximum polymorphonuclear leukocyte concentration was used to construct a receiver operating characteristics curve to determine a new potential optimal threshold. Using the current thresholds for grading frozen section histology, the sensitivity was lower for the Propionibacterium acnes infection group (50%) compared with the other infection group (67%). The specificity of frozen section was 100%. Using a receiver operating characteristics curve, an optimized threshold was found at a total of ten polymorphonuclear leukocytes in five high-power fields (400×). Using this threshold, the sensitivity of frozen section for Propionibacterium acnes was increased to 72%, and the specificity remained at 100%. Using current histopathology grading systems, frozen sections were specific but showed low sensitivity with respect to the Propionibacterium acnes infection. A new threshold value of a total of ten or more polymorphonuclear leukocytes in five high-power fields may increase the sensitivity of frozen section, with minimal impact on specificity.
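The receiver operating characteristic (ROC) step can be illustrated with a short sketch that picks the threshold maximizing Youden's J (sensitivity + specificity − 1); the polymorphonuclear (PMN) counts below are synthetic, not the study's data.

```python
import numpy as np

# Choose a diagnostic threshold from an ROC-style sweep by maximizing Youden's J.
# The PMN counts per 5 high-power fields and infection labels are synthetic examples.

def roc_best_threshold(scores, labels):
    """labels: 1 = infected, 0 = not infected; scores: PMN count per 5 HPF."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    best = None
    for thr in np.unique(scores):
        pred = scores >= thr
        sens = np.mean(pred[labels == 1])
        spec = np.mean(~pred[labels == 0])
        j = sens + spec - 1.0
        if best is None or j > best[0]:
            best = (j, thr, sens, spec)
    return best

if __name__ == "__main__":
    pmn = [0, 1, 2, 3, 4, 6, 8, 10, 12, 15, 20, 25]
    infected = [0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1]
    j, thr, sens, spec = roc_best_threshold(pmn, infected)
    print(f"optimal PMN threshold >= {thr:g}: sensitivity={sens:.2f}, specificity={spec:.2f}")
```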
NASA Astrophysics Data System (ADS)
Lachaut, T.; Yoon, J.; Klassert, C. J. A.; Talozi, S.; Mustafa, D.; Knox, S.; Selby, P. D.; Haddad, Y.; Gorelick, S.; Tilmant, A.
2016-12-01
Probabilistic approaches to uncertainty in water systems management can face challenges of several types: non stationary climate, sudden shocks such as conflict-driven migrations, or the internal complexity and dynamics of large systems. There has been a rising trend in the development of bottom-up methods that place focus on the decision side instead of probability distributions and climate scenarios. These approaches are based on defining acceptability thresholds for the decision makers and considering the entire range of possibilities over which such thresholds are crossed. We aim at improving the knowledge on the applicability and relevance of this approach by enlarging its scope beyond climate uncertainty and single decision makers; thus including demographic shifts, internal system dynamics, and multiple stakeholders at different scales. This vulnerability analysis is part of the Jordan Water Project and makes use of an ambitious multi-agent model developed by its teams with the extensive cooperation of the Ministry of Water and Irrigation of Jordan. The case of Jordan is a relevant example for migration spikes, rapid social changes, resource depletion and climate change impacts. The multi-agent modeling framework used provides a consistent structure to assess the vulnerability of complex water resources systems with distributed acceptability thresholds and stakeholder interaction. A proof of concept and preliminary results are presented for a non-probabilistic vulnerability analysis that involves different types of stakeholders, uncertainties other than climatic and the integration of threshold-based indicators. For each stakeholder (agent) a vulnerability matrix is constructed over a multi-dimensional domain, which includes various hydrologic and/or demographic variables.
Amoodi, Hosam A; Mick, Paul T; Shipp, David B; Friesen, Lendra M; Nedzelski, Julian M; Chen, Joseph M; Lin, Vincent Y W
2012-01-01
The primary purpose of this study was to evaluate a group of postlingually deafened adults, whose aided speech recognition exceeded commonly accepted candidacy criteria for implantation. The study aimed to define performance and qualitative outcomes of cochlear implants in these individuals compared with their optimally fitted hearing aid(s). Retrospective case series. Tertiary referral center. All postlingually deafened subjects (N = 27), who were unsuccessful hearing aid users implanted between 2000 and 2010 with a preimplantation Hearing in Noise Test (HINT) score of 60% or more were included. We compared patients' preoperative performance (HINT score) with hearing aids to postoperative performance with the cochlear implant after 12 months of device use. In addition, the Hearing Handicap Inventory questionnaire was used to quantify the hearing-related handicap change perceived after the implantation. The study group demonstrated significant postoperative improvement on all outcome measures; most notably, the mean HINT score improved from 68.4% (standard deviation, 8.3) to 91.9% (standard deviation, 9.7). Additionally, there was a significant improvement in hearing-related handicap perceived by all patients. The envelope of implantation candidacy criteria continues to expand as shown by this study's cohort. Patient satisfaction and speech recognition results are very encouraging in support of treating those who currently perform at a level above the conventional candidacy threshold but struggle with optimally fitted hearing aids.
Ensink, Elliot; Sinha, Jessica; Sinha, Arkadeep; Tang, Huiyuan; Calderone, Heather M; Hostetter, Galen; Winter, Jordan; Cherba, David; Brand, Randall E; Allen, Peter J; Sempere, Lorenzo F; Haab, Brian B
2015-10-06
Experiments involving the high-throughput quantification of image data require algorithms for automation. A challenge in the development of such algorithms is to properly interpret signals over a broad range of image characteristics, without the need for manual adjustment of parameters. Here we present a new approach for locating signals in image data, called Segment and Fit Thresholding (SFT). The method assesses statistical characteristics of small segments of the image and determines the best-fit trends between the statistics. Based on the relationships, SFT identifies segments belonging to background regions; analyzes the background to determine optimal thresholds; and analyzes all segments to identify signal pixels. We optimized the initial settings for locating background and signal in antibody microarray and immunofluorescence data and found that SFT performed well over multiple, diverse image characteristics without readjustment of settings. When used for the automated analysis of multicolor, tissue-microarray images, SFT correctly found the overlap of markers with known subcellular localization, and it performed better than a fixed threshold and Otsu's method for selected images. SFT promises to advance the goal of full automation in image analysis.
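A greatly simplified sketch of the segment-statistics idea follows: split the image into small segments, treat the lowest-intensity segments as background candidates, and derive a signal threshold from their statistics. This is not the published SFT implementation; the segment size, background fraction, and k factor are assumptions.

```python
import numpy as np

# Simplified segment-based thresholding sketch (not the published SFT algorithm).

def segment_stats(image, seg=16):
    """Return (mean, std) for each non-overlapping seg x seg block."""
    h, w = image.shape
    stats = []
    for i in range(0, h - seg + 1, seg):
        for j in range(0, w - seg + 1, seg):
            block = image[i:i + seg, j:j + seg]
            stats.append((block.mean(), block.std()))
    return np.array(stats)

def signal_threshold(image, seg=16, background_fraction=0.5, k=5.0):
    stats = segment_stats(image, seg)
    order = np.argsort(stats[:, 0])                        # lowest-mean segments first
    background = stats[order[: int(len(order) * background_fraction)]]
    mu, sigma = background[:, 0].mean(), background[:, 1].mean()
    return mu + k * sigma                                  # threshold above background

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.normal(100.0, 5.0, size=(256, 256))          # synthetic background
    img[100:120, 100:120] += 60.0                          # a bright "spot" signal
    thr = signal_threshold(img)
    print(f"threshold={thr:.1f}; signal pixels found: {(img > thr).sum()}")
```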
Optimal Investment Under Transaction Costs: A Threshold Rebalanced Portfolio Approach
NASA Astrophysics Data System (ADS)
Tunc, Sait; Donmez, Mehmet Ali; Kozat, Suleyman Serdar
2013-06-01
We study optimal investment in a financial market having a finite number of assets from a signal processing perspective. We investigate how an investor should distribute capital over these assets and when he should reallocate the distribution of the funds over these assets to maximize the cumulative wealth over any investment period. In particular, we introduce a portfolio selection algorithm that maximizes the expected cumulative wealth in i.i.d. two-asset discrete-time markets where the market levies proportional transaction costs in buying and selling stocks. We achieve this using "threshold rebalanced portfolios", where trading occurs only if the portfolio breaches certain thresholds. Under the assumption that the relative price sequences have log-normal distribution from the Black-Scholes model, we evaluate the expected wealth under proportional transaction costs and find the threshold rebalanced portfolio that achieves the maximal expected cumulative wealth over any investment period. Our derivations can be readily extended to markets having more than two stocks, where these extensions are pointed out in the paper. As predicted from our derivations, we significantly improve the achieved wealth over portfolio selection algorithms from the literature on historical data sets.
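A minimal two-asset sketch of threshold rebalancing under proportional transaction costs follows, assuming illustrative return dynamics and cost parameters rather than the paper's Black-Scholes derivation.

```python
import random

# Two-asset threshold rebalanced portfolio: trade (and pay proportional costs) only
# when the stock weight drifts outside a band around the target. All parameters are
# illustrative assumptions.

def simulate(target=0.5, band=0.05, cost=0.002, periods=250, seed=1):
    random.seed(seed)
    wealth, weight = 1.0, target
    for _ in range(periods):
        r_stock = random.lognormvariate(0.0004, 0.01)    # stock gross return per period
        r_cash = 1.0001                                   # cash gross return per period
        stock, cash = wealth * weight * r_stock, wealth * (1 - weight) * r_cash
        wealth, weight = stock + cash, stock / (stock + cash)
        if abs(weight - target) > band:                   # threshold breached: rebalance
            traded = abs(weight - target) * wealth
            wealth -= cost * traded                       # proportional transaction cost
            weight = target
    return wealth

if __name__ == "__main__":
    for band in (0.0, 0.05, 0.10):
        print(f"band={band:.2f}: final wealth={simulate(band=band):.4f}")
```

Sweeping the band width and comparing final wealth illustrates why, with transaction costs, trading only at thresholds can beat continuous rebalancing.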
Cong, Rui; Li, Jing; Wang, Xuejiao
2017-10-01
We determined the diagnostic performance of combinations of shear wave elastography (SWE) and B-mode ultrasound (US) in differentiating malignant from benign breast masses, and we investigated whether performance is affected by mass size. In this prospective study of 315 consecutive patients with 326 breast masses, US and SWE were performed before biopsy. Masses were categorized into two subgroups on the basis of mass size (≤15 mm and >15 mm), and the optimal thresholds for the SWE parameters were determined for each subgroup using receiver operating characteristic curves. The combination proposed here achieved an area under the receiver operating characteristic curve of 0.943, 95.00% sensitivity and 81.18% specificity, which approximated the diagnostic performance of US alone. The performance of the combinations using the subgroups' thresholds did not differ significantly from those based on the entire study group's thresholds, but the optimal thresholds were higher in the subgroup of larger masses. Further research is needed to determine whether mass size affects the performance of combinations of SWE and US. Copyright © 2017 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.
Health Effects of Air Pollution.
ERIC Educational Resources Information Center
Environmental Education Report and Newsletter, 1985
1985-01-01
Summarizes health hazards associated with air pollution, highlighting the difficulty in establishing acceptable thresholds of exposure. Respiratory disease, asthma, cancer, cardiovascular disease, and other problems are addressed. Indicates that a wide range of effects from any one chemical exists and that there are differences in sensitivity to…
von Helversen, Bettina; Mata, Rui
2012-12-01
We investigated the contribution of cognitive ability and affect to age differences in sequential decision making by asking younger and older adults to shop for items in a computerized sequential decision-making task. Older adults performed poorly compared to younger adults partly due to searching too few options. An analysis of the decision process with a formal model suggested that older adults set lower thresholds for accepting an option than younger participants. Further analyses suggested that positive affect, but not fluid abilities, was related to search in the sequential decision task. A second study that manipulated affect in younger adults supported the causal role of affect: Increased positive affect lowered the initial threshold for accepting an attractive option. In sum, our results suggest that positive affect is a key factor determining search in sequential decision making. Consequently, increased positive affect in older age may contribute to poorer sequential decisions by leading to insufficient search. 2013 APA, all rights reserved
Influence of musical and psychoacoustical training on pitch discrimination.
Micheyl, Christophe; Delhommeau, Karine; Perrot, Xavier; Oxenham, Andrew J
2006-09-01
This study compared the influence of musical and psychoacoustical training on auditory pitch discrimination abilities. In a first experiment, pitch discrimination thresholds for pure and complex tones were measured in 30 classical musicians and 30 non-musicians, none of whom had prior psychoacoustical training. The non-musicians' mean thresholds were more than six times larger than those of the classical musicians initially, and still about four times larger after 2h of training using an adaptive two-interval forced-choice procedure; this difference is two to three times larger than suggested by previous studies. The musicians' thresholds were close to those measured in earlier psychoacoustical studies using highly trained listeners, and showed little improvement with training; this suggests that classical musical training can lead to optimal or nearly optimal pitch discrimination performance. A second experiment was performed to determine how much additional training was required for the non-musicians to obtain thresholds as low as those of the classical musicians from experiment 1. Eight new non-musicians with no prior training practiced the frequency discrimination task for a total of 14 h. It took between 4 and 8h of training for their thresholds to become as small as those measured in the classical musicians from experiment 1. These findings supplement and qualify earlier data in the literature regarding the respective influence of musical and psychoacoustical training on pitch discrimination performance.
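The adaptive two-interval forced-choice procedure can be sketched as a 2-down/1-up staircase, which converges near the 70.7%-correct point; the simulated listener and step sizes below are assumptions for illustration.

```python
import random

# 2-down/1-up adaptive staircase sketch for a frequency-discrimination task.
# The simulated listener's psychometric function and step factors are assumptions.

def simulated_listener(delta_f_percent, true_threshold=0.5):
    """Probability of a correct response rises with the frequency difference."""
    p_correct = 0.5 + 0.5 / (1.0 + (true_threshold / max(delta_f_percent, 1e-6)) ** 2)
    return random.random() < p_correct

def staircase(start=4.0, step_factor=1.5, reversals_needed=12):
    delta, direction, consecutive_correct = start, 0, 0
    reversal_values = []
    while len(reversal_values) < reversals_needed:
        correct = simulated_listener(delta)
        consecutive_correct = consecutive_correct + 1 if correct else 0
        move = 0
        if consecutive_correct == 2:          # two correct in a row -> make task harder
            move, consecutive_correct = -1, 0
        elif not correct:                     # one wrong -> make task easier
            move = +1
        if move:
            if direction and move != direction:
                reversal_values.append(delta)  # record a reversal of direction
            direction = move
            delta = delta * step_factor if move > 0 else delta / step_factor
    return sum(reversal_values[-8:]) / 8       # threshold estimate from last reversals

if __name__ == "__main__":
    random.seed(0)
    print(f"estimated frequency-difference threshold: {staircase():.2f}%")
```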
2009-01-01
Background: Airports represent a complex source type of increasing importance contributing to air toxics risks. Comprehensive atmospheric dispersion models are beyond the scope of many applications, so it would be valuable to rapidly but accurately characterize the risk-relevant exposure implications of emissions at an airport. Methods: In this study, we apply a high resolution atmospheric dispersion model (AERMOD) to 32 airports across the United States, focusing on benzene, 1,3-butadiene, and benzo[a]pyrene. We estimate the emission rates required at these airports to exceed a 10⁻⁶ lifetime cancer risk for the maximally exposed individual (emission thresholds) and estimate the total population risk at these emission rates. Results: The emission thresholds vary by two orders of magnitude across airports, with variability predicted by proximity of populations to the airport and mixing height (R² = 0.74–0.75 across pollutants). At these emission thresholds, the population risk within 50 km of the airport varies by two orders of magnitude across airports, driven by substantial heterogeneity in total population exposure per unit emissions that is related to population density and uncorrelated with emission thresholds. Conclusion: Our findings indicate that site characteristics can be used to accurately predict maximum individual risk and total population risk at a given level of emissions, but that optimizing on one endpoint will be non-optimal for the other. PMID:19426510
NASA Astrophysics Data System (ADS)
Yuan, Chang-Qing; Zhao, Tong-Jun; Zhan, Yong; Zhang, Su-Hua; Liu, Hui; Zhang, Yu-Hong
2009-11-01
Based on the well-accepted Hodgkin-Huxley neuron model, the intrinsic excitability of the neuron is studied when it is subject to varying environmental temperature, a typical way in which the environment regulates neuronal behavior. Computer simulation shows that altering the environmental temperature can enhance or inhibit the neuronal intrinsic excitability and thereby influence the neuronal spiking properties. These environmental impacts can be understood as the neuronal spiking threshold being essentially shaped by fluctuations in the environment. As the environmental temperature varies, burst spiking of the neuronal membrane voltage emerges because of the environment-dependent spiking threshold. This bursting induced by changes in the spiking threshold is different from that excited by input currents or other stimuli.
Explanatory style, dispositional optimism, and reported parental behavior.
Hjelle, L A; Busch, E A; Warren, J E
1996-12-01
The relationship between two cognitive personality constructs (explanatory style and dispositional optimism) and retrospective self-reports of maternal and paternal behavior were investigated. College students (62 men and 145 women) completed the Life Orientation Test, Attributional Style Questionnaire, and Parental Acceptance-Rejection Questionnaire in a single session. As predicted, dispositional optimism was positively correlated with reported maternal and paternal warmth/acceptance and negatively correlated with aggression/hostility, neglect/indifference, and undifferentiated rejection during middle childhood. Unexpectedly, explanatory style was found to be more strongly associated with retrospective reports of paternal as opposed to maternal behavior. The implications of these results for future research concerning the developmental antecedents of differences in explanatory style and dispositional optimism are discussed.
Schmitt, Stephen J.; Fram, Miranda S.; Milby Dawson, Barbara J.; Belitz, Kenneth
2008-01-01
Ground-water quality in the approximately 3,340 square mile Middle Sacramento Valley study unit (MSACV) was investigated from June through September, 2006, as part of the California Groundwater Ambient Monitoring and Assessment (GAMA) program. The GAMA Priority Basin Assessment project was developed in response to the Groundwater Quality Monitoring Act of 2001 and is being conducted by the U.S. Geological Survey (USGS) in cooperation with the California State Water Resources Control Board (SWRCB). The Middle Sacramento Valley study was designed to provide a spatially unbiased assessment of raw ground-water quality within MSACV, as well as a statistically consistent basis for comparing water quality throughout California. Samples were collected from 108 wells in Butte, Colusa, Glenn, Sutter, Tehama, Yolo, and Yuba Counties. Seventy-one wells were selected using a randomized grid-based method to provide statistical representation of the study unit (grid wells), 15 wells were selected to evaluate changes in water chemistry along ground-water flow paths (flow-path wells), and 22 were shallow monitoring wells selected to assess the effects of rice agriculture, a major land use in the study unit, on ground-water chemistry (RICE wells). The ground-water samples were analyzed for a large number of synthetic organic constituents (volatile organic compounds [VOCs], gasoline oxygenates and degradates, pesticides and pesticide degradates, and pharmaceutical compounds), constituents of special interest (perchlorate, N-nitrosodimethylamine [NDMA], and 1,2,3-trichloropropane [1,2,3-TCP]), inorganic constituents (nutrients, major and minor ions, and trace elements), radioactive constituents, and microbial indicators. Naturally occurring isotopes (tritium, and carbon-14, and stable isotopes of hydrogen, oxygen, nitrogen, and carbon), and dissolved noble gases also were measured to help identify the sources and ages of the sampled ground water. Quality-control samples (blanks, replicates, laboratory matrix spikes) were collected at approximately 10 percent of the wells, and the results for these samples were used to evaluate the quality of the data for the ground-water samples. Field blanks rarely contained detectable concentrations of any constituent, suggesting that contamination was not a noticeable source of bias in the data for the ground-water samples. Differences between replicate samples were within acceptable ranges, indicating acceptably low variability. Matrix spike recoveries were within acceptable ranges for most constituents. This study did not attempt to evaluate the quality of water delivered to consumers; after withdrawal from the ground, water typically is treated, disinfected, or blended with other waters to maintain acceptable water quality. Regulatory thresholds apply to treated water that is served to the consumer, not to raw ground water. However, to provide some context for the results, concentrations of constituents measured in the raw ground water were compared with health-based thresholds established by the U.S. Environmental Protection Agency (USEPA) and California Department of Public Health (CDPH) and thresholds established for aesthetic concerns (secondary maximum contaminant levels, SMCL-CA) by CDPH. Comparisons between data collected for this study and drinking-water thresholds are for illustrative purposes only and are not indicative of compliance or noncompliance with regulatory thresholds. 
Most constituents that were detected in ground-water samples were found at concentrations below drinking-water thresholds. VOCs were detected in less than one-third and pesticides and pesticide degradates in just over one-half of the grid wells, and all detections of these constituents in samples from all wells of the MSACV study unit were below health-based thresholds. All detections of trace elements in samples from MSACV grid wells were below health-based thresholds, with the exceptions of arsenic and boron.
Heist, E Kevin; Herre, John M; Binkley, Philip F; Van Bakel, Adrian B; Porterfield, James G; Porterfield, Linda M; Qu, Fujian; Turkel, Melanie; Pavri, Behzad B
2014-10-15
Detect Fluid Early from Intrathoracic Impedance Monitoring (DEFEAT-PE) is a prospective, multicenter study of multiple intrathoracic impedance vectors to detect pulmonary congestion (PC) events. Changes in intrathoracic impedance between the right ventricular (RV) coil and device can (RVcoil→Can) of implantable cardioverter-defibrillators (ICDs) and cardiac resynchronization therapy ICDs (CRT-Ds) are used clinically for the detection of PC events, but other impedance vectors and algorithms have not been studied prospectively. An initial 75-patient study was used to derive optimal impedance vectors to detect PC events, with 2 vector combinations selected for prospective analysis in DEFEAT-PE (ICD vectors: RVring→Can + RVcoil→Can, detection threshold 13 days; CRT-D vectors: left ventricular ring→Can + RVcoil→Can, detection threshold 14 days). Impedance changes were considered true positive if detected <30 days before an adjudicated PC event. One hundred sixty-two patients were enrolled (80 with ICDs and 82 with CRT-Ds), all with ≥1 previous PC event. One hundred forty-four patients provided study data, with 214 patient-years of follow-up and 139 PC events. Sensitivity for PC events of the prespecified algorithms was as follows: ICD: sensitivity 32.3%, false-positive rate 1.28 per patient-year; CRT-D: sensitivity 32.4%, false-positive rate 1.66 per patient-year. An alternative algorithm, ultimately approved by the US Food and Drug Administration (RVring→Can + RVcoil→Can, detection threshold 14 days), resulted in (for all patients) sensitivity of 21.6% and a false-positive rate of 0.9 per patient-year. The CRT-D thoracic impedance vector algorithm selected in the derivation study was not superior to the ICD algorithm RVring→Can + RVcoil→Can when studied prospectively. In conclusion, to achieve an acceptably low false-positive rate, the intrathoracic impedance algorithms studied in DEFEAT-PE resulted in low sensitivity for the prediction of heart failure events. Copyright © 2014 Elsevier Inc. All rights reserved.
The economics of motion perception and invariants of visual sensitivity.
Gepshtein, Sergei; Tyukin, Ivan; Kubovy, Michael
2007-06-21
Neural systems face the challenge of optimizing their performance with limited resources, just as economic systems do. Here, we use tools of neoclassical economic theory to explore how a frugal visual system should use a limited number of neurons to optimize perception of motion. The theory prescribes that vision should allocate its resources to different conditions of stimulation according to the degree of balance between measurement uncertainties and stimulus uncertainties. We find that human vision approximately follows the optimal prescription. The equilibrium theory explains why human visual sensitivity is distributed the way it is and why qualitatively different regimes of apparent motion are observed at different speeds. The theory offers a new normative framework for understanding the mechanisms of visual sensitivity at the threshold of visibility and above the threshold and predicts large-scale changes in visual sensitivity in response to changes in the statistics of stimulation and system goals.
ANOTHER LOOK AT THE FAST ITERATIVE SHRINKAGE/THRESHOLDING ALGORITHM (FISTA)*
Kim, Donghwan; Fessler, Jeffrey A.
2017-01-01
This paper provides a new way of developing the “Fast Iterative Shrinkage/Thresholding Algorithm (FISTA)” [3] that is widely used for minimizing composite convex functions with a nonsmooth term such as the ℓ1 regularizer. In particular, this paper shows that FISTA corresponds to an optimized approach to accelerating the proximal gradient method with respect to a worst-case bound of the cost function. This paper then proposes a new algorithm that is derived by instead optimizing the step coefficients of the proximal gradient method with respect to a worst-case bound of the composite gradient mapping. The proof is based on the worst-case analysis called Performance Estimation Problem in [11]. PMID:29805242
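For reference, here is a compact sketch of classical FISTA applied to ℓ1-regularized least squares (the composite setting named above); this is the standard algorithm, not the new optimized variant proposed in the paper.

```python
import numpy as np

# Classical FISTA for: minimize 0.5*||Ax - b||^2 + lam*||x||_1.
# Illustrates the accelerated proximal gradient method discussed above.

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fista(A, b, lam, iters=200):
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the smooth gradient
    x = np.zeros(A.shape[1])
    y, t = x.copy(), 1.0
    for _ in range(iters):
        grad = A.T @ (A @ y - b)
        x_new = soft_threshold(y - grad / L, lam / L)   # proximal (shrinkage) step
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)   # momentum (acceleration) step
        x, t = x_new, t_new
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 100))
    x_true = np.zeros(100)
    x_true[:5] = 1.0
    b = A @ x_true + 0.01 * rng.standard_normal(50)
    x_hat = fista(A, b, lam=0.1)
    print("nonzeros recovered:", int(np.sum(np.abs(x_hat) > 1e-3)))
```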
NASA Astrophysics Data System (ADS)
Nadiv, Roey; Shtein, Michael; Shachar, Gal; Varenik, Maxim; Regev, Oren
2017-07-01
A major challenge in nanocomposite research is to predict the optimal nanomaterial concentration (ONC) yielding a maximal reinforcement in a given property. We present a simple approach to identify the ONC based on our finding that it is typically located in close proximity to an abrupt increase in polymer matrix viscosity, termed the rheological percolation threshold, and thus may be used as an indicator of the ONC. This premise was validated by rheological and fractography studies of composites loaded by nanomaterials including graphene nanoribbons or carbon or tungsten disulfide nanotubes. The correlation between in situ viscosity, the rheological percolation threshold concentration and the nanocomposite fractography demonstrates the utility of the method.
NASA Astrophysics Data System (ADS)
Fan, Shuwei; Bai, Liang; Chen, Nana
2016-08-01
As one of the key elements of high-power laser systems, the pulse compression multilayer dielectric grating is required to provide broader bandwidth, higher diffraction efficiency and a higher damage threshold. In this paper, the multilayer dielectric film and the multilayer dielectric gratings (MDG) were designed by the eigen-matrix method and optimized with the help of a genetic algorithm and the rigorous coupled wave method. The reflectivity was close to 100% and the bandwidth was over 250 nm, twice that of the unoptimized film structure. Simulation software for the standing-wave field distribution within the MDG was developed, and the electric field of the MDG was calculated. The key parameters affecting the electric field distribution were also studied.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wells, J; Zhang, L; Samei, E
Purpose: To develop and validate more robust methods for automated lung, spine, and hardware detection in AP/PA chest images. This work is part of a continuing effort to automatically characterize the perceptual image quality of clinical radiographs. [Y. Lin et al. Med. Phys. 39, 7019–7031 (2012)] Methods: Our previous implementation of lung/spine identification was applicable to only one vendor. A more generalized routine was devised based on three primary components: lung boundary detection, fuzzy c-means (FCM) clustering, and a clinically-derived lung pixel probability map. Boundary detection was used to constrain the lung segmentations. FCM clustering produced grayscale- and neighborhood-based pixel classification probabilities which are weighted by the clinically-derived probability maps to generate a final lung segmentation. Lung centerlines were set along the left-right lung midpoints. Spine centerlines were estimated as a weighted average of body contour, lateral lung contour, and intensity-based centerline estimates. Centerline estimation was tested on 900 clinical AP/PA chest radiographs which included inpatient/outpatient, upright/bedside, men/women, and adult/pediatric images from multiple imaging systems. Our previous implementation further did not account for the presence of medical hardware (pacemakers, wires, implants, staples, stents, etc.) potentially biasing image quality analysis. A hardware detection algorithm was developed using a gradient-based thresholding method. The training and testing paradigm used a set of 48 images from which 1920 51×51 pixel² ROIs with and 1920 ROIs without hardware were manually selected. Results: Acceptable lung centerlines were generated in 98.7% of radiographs while spine centerlines were acceptable in 99.1% of radiographs. Following threshold optimization, the hardware detection software yielded average true positive and true negative rates of 92.7% and 96.9%, respectively. Conclusion: Updated segmentation and centerline estimation methods in addition to new gradient-based hardware detection software provide improved data integrity control and error-checking for automated clinical chest image quality characterization across multiple radiography systems.
Search for η-mesic helium using WASA-at-COSY
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moskal, P.; Institut fuer Kernphysik and Juelich Center for Hadron Physics, Forschungszentrum Juelich, Juelich
2010-08-05
The installation of the WASA detector at the cooler synchrotron COSY opened the possibility to search for η-mesic helium with high statistics and high acceptance. A search for the ⁴He-η bound state is conducted via an exclusive measurement of the excitation function for the dd→³Hepπ⁻ reaction, varying the beam momentum continuously around the threshold for the dd→⁴Heη reaction. Ramping the beam momentum and taking advantage of the large acceptance of the WASA detector allows the systematic uncertainties to be minimized.
How to avoid simulation sickness in virtual environments during user displacement
NASA Astrophysics Data System (ADS)
Kemeny, A.; Colombet, F.; Denoual, T.
2015-03-01
Driving simulation (DS) and Virtual Reality (VR) share the same technologies for visualization and 3D vision and may use the same techniques for head movement tracking. They also experience similar difficulties when rendering the displacements of the observer in virtual environments, especially when these displacements are carried out using driver commands, including steering wheels, joysticks and nomad devices. High values of transport delay (the time lag between the action and the corresponding rendering cues) and/or visual-vestibular conflict, due to the discrepancies perceived by the human visual and vestibular systems when driving or displacing using a control device, induce the so-called simulation sickness. While the visual transport delay can be efficiently reduced using a high frame rate, the visual-vestibular conflict is inherent to VR when motion platforms are not used. In order to study the impact of displacements on simulation sickness, we have tested various driving scenarios in Renault's 5-sided ultra-high-resolution CAVE. First results indicate that low-speed displacements with longitudinal and lateral accelerations below given perception thresholds are well accepted by a large number of users, whereas relatively high values are accepted only by experienced users and induce VR-induced symptoms and effects (VRISE) in novice users, with the worst-case scenario corresponding to rotational displacements. These results will be used in optimization techniques at Arts et Métiers ParisTech for motion sickness reduction in virtual environments for industrial, research, educational or gaming applications.
Coyle, Doug; Ko, Yoo-Joung; Coyle, Kathryn; Saluja, Ronak; Shah, Keya; Lien, Kelly; Lam, Henry; Chan, Kelvin K W
2017-04-01
To assess the cost-effectiveness of gemcitabine (G), G + 5-fluorouracil, G + capecitabine, G + cisplatin, G + oxaliplatin, G + erlotinib, G + nab-paclitaxel (GnP), and FOLFIRINOX in the treatment of advanced pancreatic cancer from a Canadian public health payer's perspective, using data from a recently published Bayesian network meta-analysis. Analysis was conducted through a three-state Markov model and used data on the progression of disease with treatment from the gemcitabine arms of randomized controlled trials combined with estimates from the network meta-analysis for the newer regimens. Estimates of health care costs were obtained from local providers, and utilities were derived from the literature. The model estimates the effect of treatment regimens on costs and quality-adjusted life-years (QALYs) discounted at 5% per annum. At a willingness-to-pay (WTP) threshold of greater than $30,666 per QALY, FOLFIRINOX would be the most optimal regimen. For a WTP threshold of $50,000 per QALY, the probability that FOLFIRINOX would be optimal was 57.8%. There was no price reduction for nab-paclitaxel when GnP was optimal. From a Canadian public health payer's perspective at the present time and drug prices, FOLFIRINOX is the optimal regimen on the basis of the cost-effectiveness criterion. GnP is not cost-effective regardless of the WTP threshold. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
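The decision rule at a given willingness-to-pay threshold can be sketched with net monetary benefit (NMB = QALYs × WTP − cost); the cost and QALY figures below are invented placeholders, not the study's estimates.

```python
# Choose the optimal regimen at a willingness-to-pay (WTP) threshold by maximizing
# net monetary benefit. Costs and QALYs are invented placeholders for illustration.

REGIMENS = {
    "gemcitabine": {"cost": 20_000.0, "qalys": 0.45},
    "GnP":         {"cost": 45_000.0, "qalys": 0.55},
    "FOLFIRINOX":  {"cost": 40_000.0, "qalys": 0.65},
}

def optimal_regimen(wtp):
    """Return (best regimen, NMB by regimen) at the given WTP per QALY."""
    nmb = {name: r["qalys"] * wtp - r["cost"] for name, r in REGIMENS.items()}
    return max(nmb, key=nmb.get), nmb

if __name__ == "__main__":
    for wtp in (20_000.0, 50_000.0, 100_000.0):
        best, nmb = optimal_regimen(wtp)
        print(f"WTP ${wtp:,.0f}/QALY -> optimal: {best} (NMB ${nmb[best]:,.0f})")
```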
Software thresholds alter the bias of actigraphy for monitoring sleep in team-sport athletes.
Fuller, Kate L; Juliff, Laura; Gore, Christopher J; Peiffer, Jeremiah J; Halson, Shona L
2017-08-01
Actical ® actigraphy is commonly used to monitor athlete sleep. The proprietary software, called Actiware ® , processes data with three different sleep-wake thresholds (Low, Medium or High), but there is no standardisation regarding their use. The purpose of this study was to examine validity and bias of the sleep-wake thresholds for processing Actical ® sleep data in team sport athletes. Validation study comparing actigraph against accepted gold standard polysomnography (PSG). Sixty seven nights of sleep were recorded simultaneously with polysomnography and Actical ® devices. Individual night data was compared across five sleep measures for each sleep-wake threshold using Actiware ® software. Accuracy of each sleep-wake threshold compared with PSG was evaluated from mean bias with 95% confidence limits, Pearson moment-product correlation and associated standard error of estimate. The Medium threshold generated the smallest mean bias compared with polysomnography for total sleep time (8.5min), sleep efficiency (1.8%) and wake after sleep onset (-4.1min); whereas the Low threshold had the smallest bias (7.5min) for wake bouts. Bias in sleep onset latency was the same across thresholds (-9.5min). The standard error of the estimate was similar across all thresholds; total sleep time ∼25min, sleep efficiency ∼4.5%, wake after sleep onset ∼21min, and wake bouts ∼8 counts. Sleep parameters measured by the Actical ® device are greatly influenced by the sleep-wake threshold applied. In the present study the Medium threshold produced the smallest bias for most parameters compared with PSG. Given the magnitude of measurement variability, confidence limits should be employed when interpreting changes in sleep parameters. Copyright © 2017 Sports Medicine Australia. All rights reserved.
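The bias statistics used in this kind of validation study can be sketched as mean bias with 95% limits of agreement for total sleep time against PSG; the paired values below are synthetic, not the study's recordings.

```python
import numpy as np

# Mean bias and 95% limits of agreement for actigraphy vs. PSG total sleep time.
# The paired minute values are synthetic examples.

def bias_stats(actigraphy_min, psg_min):
    d = np.asarray(actigraphy_min, dtype=float) - np.asarray(psg_min, dtype=float)
    mean_bias = d.mean()
    sd = d.std(ddof=1)
    return mean_bias, (mean_bias - 1.96 * sd, mean_bias + 1.96 * sd)

if __name__ == "__main__":
    psg = [420, 390, 450, 400, 470, 430, 410]
    act = [432, 395, 470, 415, 480, 441, 425]   # e.g. output of one sleep-wake threshold
    bias, (lo, hi) = bias_stats(act, psg)
    print(f"mean bias = {bias:.1f} min, 95% limits of agreement = ({lo:.1f}, {hi:.1f}) min")
```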
van Hooff, Miranda L; Mannion, Anne F; Staub, Lukas P; Ostelo, Raymond W J G; Fairbank, Jeremy C T
2016-10-01
The achievement of a given change score on a valid outcome instrument is commonly used to indicate whether a clinically relevant change has occurred after spine surgery. However, the achievement of such a change score can be dependent on baseline values and does not necessarily indicate whether the patient is satisfied with the current state. The achievement of an absolute score equivalent to a patient acceptable symptom state (PASS) may be a more stringent measure to indicate treatment success. This study aimed to estimate the score on the Oswestry Disability Index (ODI, version 2.1a; 0-100) corresponding to a PASS in patients who had undergone surgery for degenerative disorders of the lumbar spine. This is a cross-sectional study of diagnostic accuracy using follow-up data from an international spine surgery registry. The sample includes 1,288 patients with degenerative lumbar spine disorders who had undergone elective spine surgery, registered in the EUROSPINE Spine Tango Spine Surgery Registry. The main outcome measure was the ODI (version 2.1a). Surgical data and data from the ODI and Core Outcome Measures Index (COMI) were included to determine the ODI threshold equivalent to PASS at 1 year (±1.5 months; n=780) and 2 years (±2 months; n=508) postoperatively. The symptom-specific well-being item of the COMI was used as the external criterion in the receiver operating characteristic (ROC) analysis to determine the ODI threshold equivalent to PASS. Separate sensitivity analyses were performed based on the different definitions of an "acceptable state" and for subgroups of patients. JF is a copyright holder of the ODI. The ODI threshold for PASS was 22, irrespective of the time of follow-up (area under the curve [AUC]: 0.89 [sensitivity {Se}: 78.3%, specificity {Sp}: 82.1%] and AUC: 0.91 [Se: 80.7%, Sp: 85.6] for the 1- and 2-year follow-ups, respectively). Sensitivity analyses showed that the absolute ODI-22 threshold for the two follow-up time-points were robust. A stricter definition of PASS resulted in lower ODI thresholds, varying from 16 (AUC=0.89; Se: 80.2%, Sp: 82.0%) to 18 (AUC=0.90; Se: 82.4%, Sp: 80.4%) depending on the time of follow-up. An ODI score ≤22 indicates the achievement of an acceptable symptom state and can hence be used as a criterion of treatment success alongside the commonly used change score measures. At the individual level, the threshold could be used to indicate whether or not a patient with a lumbar spine disorder is a "responder" after elective surgery. Copyright © 2016 Elsevier Inc. All rights reserved.
Young, Brian; King, Jonathan L; Budowle, Bruce; Armogida, Luigi
2017-01-01
Amplicon (targeted) sequencing by massively parallel sequencing (PCR-MPS) is a potential method for use in forensic DNA analyses. In this application, PCR-MPS may supplement or replace other instrumental analysis methods such as capillary electrophoresis and Sanger sequencing for STR and mitochondrial DNA typing, respectively. PCR-MPS also may enable the expansion of forensic DNA analysis methods to include new marker systems such as single nucleotide polymorphisms (SNPs) and insertion/deletions (indels) that currently are assayable using various instrumental analysis methods including microarray and quantitative PCR. Acceptance of PCR-MPS as a forensic method will depend in part upon developing protocols and criteria that define the limitations of a method, including a defensible analytical threshold or method detection limit. This paper describes an approach to establish objective analytical thresholds suitable for multiplexed PCR-MPS methods. A definition is proposed for PCR-MPS method background noise, and an analytical threshold based on background noise is described.
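One common construction of an analytical threshold from background noise is mean plus k standard deviations of non-allelic read counts; the sketch below uses invented counts and an assumed k, not the paper's exact definition.

```python
import statistics

# Set an analytical threshold as mean + k * SD of background noise read counts.
# The counts are invented and k is an assumption, not the paper's derived criterion.

def analytical_threshold(noise_counts, k=3.0):
    mu = statistics.mean(noise_counts)
    sd = statistics.stdev(noise_counts)
    return mu + k * sd

if __name__ == "__main__":
    noise = [2, 0, 1, 3, 2, 4, 1, 0, 2, 3, 5, 1]   # reads at non-allelic positions
    thr = analytical_threshold(noise)
    print(f"call alleles only above ~{thr:.1f} reads")
```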
Sinclair, R C F; Danjoux, G R; Goodridge, V; Batterham, A M
2009-11-01
The variability between observers in the interpretation of cardiopulmonary exercise tests may impact upon clinical decision making and affect the risk stratification and peri-operative management of a patient. The purpose of this study was to quantify the inter-reader variability in the determination of the anaerobic threshold (V-slope method). A series of 21 cardiopulmonary exercise tests from patients attending a surgical pre-operative assessment clinic were read independently by nine experienced clinicians regularly involved in clinical decision making. The grand mean for the anaerobic threshold was 10.5 ml O₂·kg body mass⁻¹·min⁻¹. The technical error of measurement was 8.1% (circa 0.9 ml·kg⁻¹·min⁻¹; 90% confidence interval, 7.4-8.9%). The mean absolute difference between readers was 4.5% with a typical random error of 6.5% (6.0-7.2%). We conclude that the inter-observer variability for experienced clinicians determining the anaerobic threshold from cardiopulmonary exercise tests is acceptable.
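For context, an inter-reader technical error of measurement can be computed as the pooled within-test standard deviation across readers, expressed as a percentage of the grand mean. The sketch below shows that bookkeeping on hypothetical readings; the exact formula used in the study may differ.

```python
# Illustrative sketch: inter-reader technical error of measurement (TEM) for
# anaerobic threshold readings (hypothetical data; formula choice is an assumption).
import numpy as np

rng = np.random.default_rng(2)
n_tests, n_readers = 21, 9
true_at = rng.normal(10.5, 2.0, size=(n_tests, 1))          # ml O2/kg/min
readings = true_at + rng.normal(0, 0.9, size=(n_tests, n_readers))

grand_mean = readings.mean()
# Within-test variance across readers, pooled over tests (one-way ANOVA error term).
within_var = readings.var(axis=1, ddof=1).mean()
tem = np.sqrt(within_var)

print(f"grand mean = {grand_mean:.1f} ml O2/kg/min")
print(f"TEM = {tem:.2f} ({100 * tem / grand_mean:.1f}% of grand mean)")
```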
Open | SpeedShop: An Open Source Infrastructure for Parallel Performance Analysis
Schulz, Martin; Galarowicz, Jim; Maghrak, Don; ...
2008-01-01
Over the last decades a large number of performance tools have been developed to analyze and optimize high performance applications. Their acceptance by end users, however, has been slow: each tool alone is often limited in scope and comes with widely varying interfaces and workflow constraints, requiring different changes in the often complex build and execution infrastructure of the target application. We started the Open | SpeedShop project about 3 years ago to overcome these limitations and provide efficient, easy to apply, and integrated performance analysis for parallel systems. Open | SpeedShop has two different faces: it provides an interoperable tool set covering the most common analysis steps as well as a comprehensive plugin infrastructure for building new tools. In both cases, the tools can be deployed to large scale parallel applications using DPCL/Dyninst for distributed binary instrumentation. Further, all tools developed within or on top of Open | SpeedShop are accessible through multiple fully equivalent interfaces, including an easy-to-use GUI as well as an interactive command line interface, reducing the usage threshold for those tools.
Study of the solar coronal hole rotation
NASA Astrophysics Data System (ADS)
Oghrapishvili, N. B.; Bagashvili, S. R.; Maghradze, D. A.; Gachechiladze, T. Z.; Japaridze, D. R.; Shergelashvili, B. M.; Mdzinarishvili, T. G.; Chargeishvili, B. B.
2018-06-01
Rotation of coronal holes is studied using data from SDO/AIA for 2014 and 2015. A new approach to the treatment of the data is applied. Instead of calculating average angular velocities of each coronal hole centroid and then grouping them in latitudinal bins to calculate average rotation rates of the corresponding latitudes, we compiled instantaneous rotation rates of centroids and their corresponding heliographic coordinates in one matrix for further processing. Even unfiltered data showed the clearly differential nature of the rotation of coronal holes. We studied possible reasons for distortion of the data by limb effects to eliminate some discrepancies at high latitudes caused by the high degree of scatter of the data in that region. A study of the longitudinal distribution of angular velocities revealed the optimal longitudinal interval for the best result. We examined different methods of data filtering and found that filtering that targets the local medians of the data with a constant threshold is a more acceptable approach that is not biased towards a predefined notion of an expected result. The results showed a differential pattern of rotation of coronal holes.
Improving Earth/Prediction Models to Improve Network Processing
NASA Astrophysics Data System (ADS)
Wagner, G. S.
2017-12-01
The United States Atomic Energy Detection System (USAEDS) primary seismic network consists of a relatively small number of arrays and three-component stations. The relatively small number of stations in the USAEDS primary network makes it both necessary and feasible to optimize both station and network processing. Station processing improvements include detector tuning efforts that use Receiver Operator Characteristic (ROC) curves to help judiciously set acceptable Type 1 (false) vs. Type 2 (miss) error rates. Other station processing improvements include the use of empirical/historical observations and continuous background noise measurements to compute time-varying, maximum likelihood probability of detection thresholds. The USAEDS network processing software makes extensive use of the azimuth and slowness information provided by frequency-wavenumber analysis at array sites, and polarization analysis at three-component sites. Most of the improvements in USAEDS network processing are due to improvements in the models used to predict azimuth, slowness, and probability of detection. Kriged travel-time, azimuth, and slowness corrections, and their associated uncertainties, are computed using a ground truth database. Improvements in station processing and the use of improved models for azimuth, slowness, and probability of detection have led to significant improvements in USAEDS network processing.
Methodology for balancing design and process tradeoffs for deep-subwavelength technologies
NASA Astrophysics Data System (ADS)
Graur, Ioana; Wagner, Tina; Ryan, Deborah; Chidambarrao, Dureseti; Kumaraswamy, Anand; Bickford, Jeanne; Styduhar, Mark; Wang, Lee
2011-04-01
For process development of deep-subwavelength technologies, it has become accepted practice to use model-based simulation to predict systematic and parametric failures. Increasingly, these techniques are being used by designers to ensure layout manufacturability, as an alternative to, or complement to, restrictive design rules. The benefit of model-based simulation tools in the design environment is that manufacturability problems are addressed in a design-aware way by making appropriate trade-offs, e.g., between overall chip density and manufacturing cost and yield. The paper shows how library elements and the full ASIC design flow benefit from eliminating hot spots and improving design robustness early in the design cycle. It demonstrates a path to yield optimization and first time right designs implemented in leading edge technologies. The approach described herein identifies those areas in the design that could benefit from being fixed early, leading to design updates and avoiding later design churn by careful selection of design sensitivities. This paper shows how to achieve this goal by using simulation tools incorporating various models from sparse to rigorously physical, pattern detection and pattern matching, checking and validating failure thresholds.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, Qingjun; Feng, Lisha; Wu, Chuanjia
A combustion solution method was developed to fabricate amorphous ZnAlSnO (a-ZATO) for thin-film transistors (TFTs). The properties of a-ZATO films and behaviors of a-ZATO TFTs were studied in detail. An appropriate Al content in the matrix could suppress the formation of oxygen vacancies efficiently and achieve densely amorphous films. The a-ZATO TFTs exhibited acceptable performances, with an on/off current ratio of ∼10⁶, field-effect mobility of 2.33 cm²·V⁻¹·s⁻¹, threshold voltage of 2.39 V, and subthreshold swing of 0.52 V/decade at an optimal Al content (0.5). The relation between on- and off-resistance of the ZATO TFT was also within the range expected for fast switching devices. More importantly, the introduced Al with an appropriate content had the ability to evidently enhance the device long-term stability under working bias stress and storage durations. The obtained indium- and gallium-free a-ZATO TFTs are very promising for the next-generation displays.
Saco-Alvarez, Liliana; Durán, Iria; Ignacio Lorenzo, J; Beiras, Ricardo
2010-05-01
The sea-urchin embryo test (SET) has been frequently used as a rapid, sensitive, and cost-effective biological tool for marine monitoring worldwide, but the selection of a sensitive, objective, and automatically readable endpoint, a stricter quality control to guarantee optimum handling and biological material, and the identification of confounding factors that interfere with the response have hampered its widespread routine use. Size increase in a minimum of n=30 individuals per replicate, either normal larvae or earlier developmental stages, was preferred to observer-dependent, discontinuous responses as the test endpoint. Control size increase after 48 h incubation at 20 °C must meet an acceptability criterion of 218 µm. In order to avoid false positives, minimums of 32‰ salinity, pH 7, and 2 mg/L oxygen, and a maximum of 40 µg/L NH₃ (NOEC), are required in the incubation media. For in situ testing, size increase rates must be corrected on a degree-day basis using 12 °C as the developmental threshold. Copyright 2010 Elsevier Inc. All rights reserved.
Evaluation of a Teleform-based data collection system: a multi-center obesity research case study.
Jenkins, Todd M; Wilson Boyce, Tawny; Akers, Rachel; Andringa, Jennifer; Liu, Yanhong; Miller, Rosemary; Powers, Carolyn; Ralph Buncher, C
2014-06-01
Utilizing electronic data capture (EDC) systems in data collection and management allows automated validation programs to preemptively identify and correct data errors. For our multi-center, prospective study we chose to use TeleForm, a paper-based data capture software that uses recognition technology to create case report forms (CRFs) with similar functionality to EDC, including custom scripts to identify entry errors. We quantified the accuracy of the optimized system through a data audit of CRFs and the study database, examining selected critical variables for all subjects in the study, as well as an audit of all variables for 25 randomly selected subjects. Overall we found 6.7 errors per 10,000 fields, with similar estimates for critical (6.9/10,000) and non-critical (6.5/10,000) variables-values that fall below the acceptable quality threshold of 50 errors per 10,000 established by the Society for Clinical Data Management. However, error rates were found to widely vary by type of data field, with the highest rate observed with open text fields. Copyright © 2014 Elsevier Ltd. All rights reserved.
Cotté, François-Emery; Mercier, Florence; De Pouvourville, Gérard
2008-12-01
Nonadherence to treatment is an important determinant of long-term outcomes in women with osteoporosis. This study was conducted to investigate the association between adherence and osteoporotic fracture risk and to identify optimal thresholds for good compliance and persistence. A secondary objective was to perform a preliminary evaluation of the cost consequences of adherence. This was a retrospective case-control analysis. Data were derived from the Thales prescription database, which contains information on >1.6 million patients in the primary health care setting in France. Cases were women aged ≥50 years who had an osteoporosis-related fracture in 2006. For each case, 5 matched controls were randomly selected. Both compliance and persistence aspects of treatment adherence were examined. Compliance was estimated based on the medication possession ratio (MPR). Persistence was calculated as the time from the initial filling of a prescription for osteoporosis medication until its discontinuation. The mean (SD) MPR was lower in cases compared with controls (58.8% [34.7%] vs 72.1% [28.8%], respectively; P < 0.001). Cases were more likely than controls to discontinue osteoporosis treatment (50.0% vs 25.3%; P < 0.001), yielding a significantly lower proportion of patients who were still persistent at 1 year (34.1% vs 40.9%; P < 0.001). MPR was the best predictor of fracture risk, with an area under the receiver-operating-characteristic curve that was higher than that for persistence (0.59 vs 0.55). The optimal MPR threshold for predicting fracture risk was ≥68.0%. Compared with less-compliant women, women who achieved this threshold had a 51% reduction in fracture risk. The difference in annual drug expenditure between women achieving this threshold and those who did not was approximately €300. The optimal threshold for persistence with therapy was at least 6 months. Attaining this threshold was associated with a 28% reduction in fracture risk compared with less-persistent women. In this study, better treatment adherence was associated with a greater reduction in fracture risk. Compliance appeared to predict fracture risk better than did persistence.
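The medication possession ratio used as the compliance measure is conventionally computed from dispensing records as the days' supply obtained divided by the days in the observation window, capped at 100%. The sketch below shows this on hypothetical refill data and flags whether the ≥68% threshold reported above is met; field names and dates are assumptions.

```python
# Illustrative sketch: medication possession ratio (MPR) from refill records
# (hypothetical data; record layout is an assumption).
from datetime import date

refills = [  # (dispense date, days' supply) for one hypothetical patient
    (date(2006, 1, 10), 30),
    (date(2006, 2, 12), 30),
    (date(2006, 4, 2), 90),
    (date(2006, 8, 1), 30),
]
window_start, window_end = date(2006, 1, 1), date(2006, 12, 31)

days_in_window = (window_end - window_start).days + 1
days_supplied = sum(supply for _, supply in refills)
mpr = min(days_supplied / days_in_window, 1.0)   # cap at 100%

print(f"MPR = {mpr:.1%}, meets >=68% threshold: {mpr >= 0.68}")
```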
NASA Astrophysics Data System (ADS)
Hou, Huirang; Zheng, Dandan; Nie, Laixiao
2015-04-01
For gas ultrasonic flowmeters, the signals received by ultrasonic sensors are susceptible to noise interference. If signals are mingled with noise, a large error in flow measurement can be caused by mistaken triggering when the traditional double-threshold method is used. To solve this problem, genetic-ant colony optimization (GACO) based on the ultrasonic pulse received-signal model is proposed. Furthermore, in consideration of the real-time performance of the flow measurement system, the improvement of processing only the first three cycles of the received signals rather than the whole signal is proposed. Simulation results show that the GACO algorithm has the best estimation accuracy and anti-noise ability compared with the genetic algorithm, ant colony optimization, double-threshold, and enveloped zero-crossing methods. Local convergence does not appear with the GACO algorithm until -10 dB. For the GACO algorithm, the convergence accuracy, convergence speed, and amount of computation are further improved when using only the first three cycles (called GACO-3cycles). Experimental results involving actual received signals show that the accuracy of single-gas ultrasonic flow rate measurement can reach 0.5% with GACO-3cycles, which is better than with the double-threshold method.
Acquisition of decision making criteria: reward rate ultimately beats accuracy.
Balci, Fuat; Simen, Patrick; Niyogi, Ritwik; Saxe, Andrew; Hughes, Jessica A; Holmes, Philip; Cohen, Jonathan D
2011-02-01
Speed-accuracy trade-offs strongly influence the rate of reward that can be earned in many decision-making tasks. Previous reports suggest that human participants often adopt suboptimal speed-accuracy trade-offs in single session, two-alternative forced-choice tasks. We investigated whether humans acquired optimal speed-accuracy trade-offs when extensively trained with multiple signal qualities. When performance was characterized in terms of decision time and accuracy, our participants eventually performed nearly optimally in the case of higher signal qualities. Rather than adopting decision criteria that were individually optimal for each signal quality, participants adopted a single threshold that was nearly optimal for most signal qualities. However, setting a single threshold for different coherence conditions resulted in only negligible decrements in the maximum possible reward rate. Finally, we tested two hypotheses regarding the possible sources of suboptimal performance: (1) favoring accuracy over reward rate and (2) misestimating the reward rate due to timing uncertainty. Our findings provide support for both hypotheses, but also for the hypothesis that participants can learn to approach optimality. We find specifically that an accuracy bias dominates early performance, but diminishes greatly with practice. The residual discrepancy between optimal and observed performance can be explained by an adaptive response to uncertainty in time estimation.
NASA Astrophysics Data System (ADS)
Hinsby, Klaus; Markager, Stiig; Kronvang, Brian; Windolf, Jørgen; Sonnenborg, Torben; Sørensen, Lærke
2015-04-01
Nitrate, which typically makes up the major part (~>90%) of dissolved inorganic nitrogen in groundwater and surface water, is the pollutant most frequently responsible for European groundwater bodies failing to meet the good status objectives of the European Water Framework Directive, generally assessed by comparing groundwater monitoring data with the nitrate quality standard of the Groundwater Directive (50 mg/l, equal to the WHO drinking water standard). Still, while more than 50 % of the European surface water bodies do not meet the objective of good ecological status, "only" 25 % of groundwater bodies do not meet the objective of good chemical status according to the river basin management plans reported by the EU member states. However, based on a study on interactions between groundwater, streams and a Danish estuary we argue that nitrate threshold values for aerobic groundwater often need to be significantly below the nitrate quality standard to ensure good ecological status of associated surface water bodies, and hence that the chemical status of European groundwater is worse than indicated by the present assessments. Here we suggest a methodology for derivation of groundwater and stream threshold values for total nitrogen ("nitrate") in a coastal catchment based on assessment of maximum acceptable nitrogen loadings (thresholds) to the associated vulnerable estuary. The applied method uses existing information on agricultural practices and point source emissions in the catchment, together with groundwater and stream quantity and quality monitoring data, all of which feed an integrated groundwater and surface water modelling tool, enabling us to assess total nitrogen loads and to derive threshold concentrations that ensure/restore good ecological status of the investigated estuary. For the catchment to the Horsens estuary in Denmark we estimate the stream and groundwater thresholds for total nitrogen to be about 13 and 27 mg/l (~ 12 and 25 mg/l of nitrate). The shown example of deriving nitrogen threshold concentrations is for groundwater and streams in a coastal catchment discharging to a vulnerable estuary in Denmark, but the principles may be applied to large river basins with sub-catchments in several countries such as the Danube or the Rhine. In this case the relevant countries need to collaborate on derivation of nitrogen thresholds based on e.g. maximum acceptable nitrogen loadings to the Black Sea / the North Sea, and finally agree on thresholds for different parts of the river basin. Phosphorus is another nutrient which frequently results in or contributes to the eutrophication of surface waters. The transport and retention processes of total phosphorus (TP) are more complex than for nitrate (or alternatively total N), and presently we are able to establish TP thresholds for streams but not for groundwater. Derivation of TP thresholds is covered in an accompanying paper by Kronvang et al.
Truscott, James E; Werkman, Marleen; Wright, James E; Farrell, Sam H; Sarkar, Rajiv; Ásbjörnsdóttir, Kristjana; Anderson, Roy M
2017-06-30
There is an increased focus on whether mass drug administration (MDA) programmes alone can interrupt the transmission of soil-transmitted helminths (STH). Mathematical models can be used to model these interventions and are increasingly being implemented to inform investigators about expected trial outcome and the choice of optimum study design. One key factor is the choice of threshold for detecting elimination. However, there are currently no thresholds defined for STH regarding breaking transmission. We develop a simulation of an elimination study, based on the DeWorm3 project, using an individual-based stochastic disease transmission model in conjunction with models of MDA, sampling, diagnostics and the construction of study clusters. The simulation is then used to analyse the relationship between the study end-point elimination threshold and whether elimination is achieved in the long term within the model. We analyse the quality of a range of statistics in terms of the positive predictive values (PPV) and how they depend on a range of covariates, including threshold values, baseline prevalence, measurement time point and how clusters are constructed. End-point infection prevalence performs well in discriminating between villages that achieve interruption of transmission and those that do not, although the quality of the threshold is sensitive to baseline prevalence and threshold value. Optimal post-treatment prevalence threshold value for determining elimination is in the range 2% or less when the baseline prevalence range is broad. For multiple clusters of communities, both the probability of elimination and the ability of thresholds to detect it are strongly dependent on the size of the cluster and the size distribution of the constituent communities. Number of communities in a cluster is a key indicator of probability of elimination and PPV. Extending the time, post-study endpoint, at which the threshold statistic is measured improves PPV value in discriminating between eliminating clusters and those that bounce back. The probability of elimination and PPV are very sensitive to baseline prevalence for individual communities. However, most studies and programmes are constructed on the basis of clusters. Since elimination occurs within smaller population sub-units, the construction of clusters introduces new sensitivities for elimination threshold values to cluster size and the underlying population structure. Study simulation offers an opportunity to investigate key sources of sensitivity for elimination studies and programme designs in advance and to tailor interventions to prevailing local or national conditions.
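The positive predictive value of an end-point prevalence threshold can be tabulated directly from simulated cluster trajectories: PPV is the fraction of clusters below the threshold at the study end point that truly go on to eliminate. The sketch below illustrates this bookkeeping on hypothetical simulation output; it is not the DeWorm3 simulation itself, and all distributions are placeholders.

```python
# Illustrative sketch: PPV of an end-point prevalence threshold for predicting
# long-term elimination, from hypothetical simulated cluster outcomes.
import numpy as np

rng = np.random.default_rng(3)
n_clusters = 1000
endpoint_prev = rng.beta(0.5, 10, n_clusters)              # prevalence at study end point
# Hypothetical long-term outcome: elimination more likely at low end-point prevalence.
p_elim = np.exp(-60 * endpoint_prev)
eliminated = rng.random(n_clusters) < p_elim

def ppv(threshold: float) -> float:
    """PPV of 'end-point prevalence < threshold' as a predictor of elimination."""
    predicted_positive = endpoint_prev < threshold
    if predicted_positive.sum() == 0:
        return float("nan")
    return eliminated[predicted_positive].mean()

for thr in (0.01, 0.02, 0.05):
    print(f"threshold {thr:.0%}: PPV = {ppv(thr):.2f}")
```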
Samanta, Rahul; Kumar, Saurabh; Chik, William; Qian, Pierre; Barry, Michael A; Al Raisi, Sara; Bhaskaran, Abhishek; Farraha, Melad; Nadri, Fazlur; Kizana, Eddy; Thiagalingam, Aravinda; Kovoor, Pramesh; Pouliopoulos, Jim
2017-10-01
Recent studies have demonstrated that intramyocardial adipose tissue (IMAT) may contribute to ventricular electrophysiological remodeling in patients with chronic myocardial infarction. Using an ovine model of myocardial infarction, we aimed to determine the influence of IMAT on scar tissue identification during endocardial contact mapping and optimal voltage-based mapping criteria for defining IMAT-dense regions. In 7 sheep, left ventricular endocardial and transmural mapping was performed 84 weeks (15-111 weeks) post-myocardial infarction. The Spearman rank correlation coefficient was used to assess the relationship between endocardial contact electrogram amplitude and histological composition of myocardium. Receiver operator characteristic curves were used to derive optimal electrogram thresholds for IMAT delineation during endocardial mapping and to describe the use of endocardial mapping for delineation of IMAT-dense regions within scar. Endocardial electrogram amplitude correlated significantly with IMAT (unipolar r = -0.48±0.12, P <0.001; bipolar r = -0.45±0.22, P =0.04) but not collagen (unipolar r = -0.36±0.24, P =0.13; bipolar r = -0.43±0.31, P =0.16). IMAT-dense regions of myocardium were reliably identified using endocardial mapping with thresholds of <3.7 mV and <0.6 mV for the unipolar and bipolar modalities, respectively, and with both thresholds for the combined modality (single modality area under the curve=0.80, P <0.001; combined modality area under the curve=0.84, P <0.001). Unipolar mapping using optimal thresholding remained significantly reliable (area under the curve=0.76, P <0.001) during mapping of IMAT confined to putative scar border zones (bipolar amplitude, 0.5-1.5 mV). These novel findings enhance our understanding of the confounding influence of IMAT on endocardial scar mapping. Combined bipolar and unipolar voltage mapping using optimal thresholds may be useful for delineating IMAT-dense regions of myocardium in postinfarct cardiomyopathy. © 2017 American Heart Association, Inc.
Smeared spectrum jamming suppression based on generalized S transform and threshold segmentation
NASA Astrophysics Data System (ADS)
Li, Xin; Wang, Chunyang; Tan, Ming; Fu, Xiaolong
2018-04-01
Smeared Spectrum (SMSP) jamming is an effective jamming technique against linear frequency modulation (LFM) radar. Based on the difference in time-frequency distribution between the jamming and the echo, a jamming suppression method using the Generalized S transform (GST) and threshold segmentation is proposed. The sub-pulse period is first estimated from the autocorrelation function. Secondly, the time-frequency image and the related grayscale image are obtained with the GST. Finally, the Tsallis cross entropy is utilized to compute the optimized segmentation threshold, and the jamming suppression filter is then constructed based on that threshold. Simulation results show that the proposed method performs well in suppressing the false targets produced by SMSP jamming.
Bilevel thresholding of sliced image of sludge floc.
Chu, C P; Lee, D J
2004-02-15
This work examined the feasibility of employing various thresholding algorithms to determine the optimal bilevel threshold value for estimating the geometric parameters of sludge flocs from microtome-sliced images and from confocal laser scanning microscope images. Morphological information extracted from images depends on the bilevel threshold value. Based on an evaluation using luminescence-inverted images and fractal curves (the quadric Koch curve and the Sierpinski carpet), Otsu's method yields more stable performance than other histogram-based algorithms and was chosen to obtain the porosity. The maximum convex perimeter method, however, can probe the shapes and spatial distribution of the pores among the biomass granules in real sludge flocs. A combined algorithm is recommended for probing the sludge floc structure.
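Otsu's method, which the study found most stable among the histogram-based algorithms, chooses the grey-level cut that maximizes between-class variance. A compact NumPy sketch operating on an 8-bit greyscale image is given below; scikit-image's skimage.filters.threshold_otsu provides an equivalent, production-quality implementation, and the synthetic image and porosity convention here are assumptions for illustration.

```python
# Minimal Otsu bilevel thresholding for an 8-bit greyscale image (NumPy only).
import numpy as np

def otsu_threshold(image: np.ndarray) -> int:
    """Return the grey level maximizing between-class variance."""
    hist = np.bincount(image.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    omega = np.cumsum(prob)                     # class-0 probability up to each level
    mu = np.cumsum(prob * np.arange(256))       # cumulative mean
    mu_total = mu[-1]
    # Between-class variance; guard against division by zero at the extremes.
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b2 = (mu_total * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b2[~np.isfinite(sigma_b2)] = 0.0
    return int(np.argmax(sigma_b2))

# Example on a synthetic two-mode image (dark pores vs. bright biomass, an assumption).
rng = np.random.default_rng(4)
img = np.clip(np.concatenate([rng.normal(70, 15, 5000),
                              rng.normal(180, 20, 5000)]), 0, 255).astype(np.uint8)
img = img.reshape(100, 100)
t = otsu_threshold(img)
binary = img > t                                # True = biomass, False = pore
print(f"Otsu threshold = {t}, porosity estimate = {1 - binary.mean():.2f}")
```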
Engberg, Lovisa; Forsgren, Anders; Eriksson, Kjell; Hårdemark, Björn
2017-06-01
To formulate convex planning objectives of treatment plan multicriteria optimization with explicit relationships to the dose-volume histogram (DVH) statistics used in plan quality evaluation. Conventional planning objectives are designed to minimize the violation of DVH statistics thresholds using penalty functions. Although successful in guiding the DVH curve towards these thresholds, conventional planning objectives offer limited control of the individual points on the DVH curve (doses-at-volume) used to evaluate plan quality. In this study, we abandon the usual penalty-function framework and propose planning objectives that more closely relate to DVH statistics. The proposed planning objectives are based on mean-tail-dose, resulting in convex optimization. We also demonstrate how to adapt a standard optimization method to the proposed formulation in order to obtain a substantial reduction in computational cost. We investigated the potential of the proposed planning objectives as tools for optimizing DVH statistics through juxtaposition with the conventional planning objectives on two patient cases. Sets of treatment plans with differently balanced planning objectives were generated using either the proposed or the conventional approach. Dominance in the sense of better distributed doses-at-volume was observed in plans optimized within the proposed framework. The initial computational study indicates that the DVH statistics are better optimized and more efficiently balanced using the proposed planning objectives than using the conventional approach. © 2017 American Association of Physicists in Medicine.
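Mean-tail-dose, the mean of the hottest or coldest fraction of a structure's dose distribution, is closely related to conditional value-at-risk and is convex, unlike dose-at-volume itself. The short sketch below computes upper and lower mean-tail-dose for a sampled dose vector; the volume fractions and dose distribution are illustrative, not the paper's exact formulation.

```python
# Illustrative sketch: upper/lower mean-tail-dose of a dose distribution
# (CVaR-style statistics used as convex surrogates for dose-at-volume).
import numpy as np

def mean_upper_tail_dose(dose: np.ndarray, v: float) -> float:
    """Mean dose of the hottest fraction v of the structure volume."""
    n_tail = max(1, int(round(v * dose.size)))
    return float(np.sort(dose)[-n_tail:].mean())

def mean_lower_tail_dose(dose: np.ndarray, v: float) -> float:
    """Mean dose of the coldest fraction v of the structure volume."""
    n_tail = max(1, int(round(v * dose.size)))
    return float(np.sort(dose)[:n_tail].mean())

rng = np.random.default_rng(5)
oar_dose = rng.gamma(shape=4.0, scale=5.0, size=10_000)   # hypothetical OAR voxel doses (Gy)

# E.g. to control a near-maximum dose such as D_2%, penalize the mean of the hottest 2%.
print(f"upper mean-tail-dose (2%):  {mean_upper_tail_dose(oar_dose, 0.02):.1f} Gy")
print(f"lower mean-tail-dose (95%): {mean_lower_tail_dose(oar_dose, 0.95):.1f} Gy")
```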
Design of Quiet Rotorcraft Approach Trajectories: Verification Phase
NASA Technical Reports Server (NTRS)
Padula, Sharon L.
2010-01-01
Flight testing that is planned for October 2010 will provide an opportunity to evaluate rotorcraft trajectory optimization techniques. The flight test will involve a fully instrumented MD-902 helicopter, which will be flown over an array of microphones. In this work, the helicopter approach trajectory is optimized via a multiobjective genetic algorithm to improve community noise, passenger comfort, and pilot acceptance. Previously developed optimization strategies are modified to accommodate new helicopter data and to increase pilot acceptance. This paper describes the MD-902 trajectory optimization plus general optimization strategies and modifications that are needed to reduce the uncertainty in noise predictions. The constraints that are imposed by the flight test conditions and characteristics of the MD-902 helicopter limit the testing possibilities. However, the insights that will be gained through this research will prove highly valuable.
Optimizing Precipitation Thresholds for Best Correlation Between Dry Lightning and Wildfires
NASA Astrophysics Data System (ADS)
Vant-Hull, Brian; Thompson, Tollisha; Koshak, William
2018-03-01
This work examines how to adjust the definition of "dry lightning" in order to optimize the correlation between dry lightning flash count and the climatology of large (>400 km2) lightning-ignited wildfires over the contiguous United States (CONUS). The National Lightning Detection Network™ and National Centers for Environmental Prediction Stage IV radar-based, gauge-adjusted precipitation data are used to form climatic data sets. For a 13 year analysis period over CONUS, a correlation of 0.88 is found between annual totals of wildfires and dry lightning. This optimal correlation is found by defining dry lightning as follows: on a 0.1° hourly grid, a precipitation threshold of no more than 0.3 mm may accumulate during any hour over a period of 3-4 days preceding the flash. Regional optimized definitions vary. When annual totals are analyzed as done here, no clear advantage is found by weighting positive polarity cloud-to-ground (+CG) lightning differently than -CG lightning. The high variability of dry lightning relative to the precipitation and lightning from which it is derived suggests it would be an independent and useful climate indicator.
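The optimized definition above translates directly into a gridded lookup: for each flash, scan the hourly precipitation in its 0.1° cell over the preceding 3 days and flag the flash as dry if no single hour exceeded 0.3 mm. The sketch below shows that bookkeeping on toy arrays; the grid indexing, data layout, and flash list are assumptions.

```python
# Illustrative sketch: flag "dry" lightning flashes using the optimized definition
# (no hour with > 0.3 mm in the flash's 0.1-degree cell during the preceding 3 days).
import numpy as np

rng = np.random.default_rng(6)
n_hours, n_lat, n_lon = 24 * 30, 50, 80          # one month of hourly 0.1-degree data (toy)
precip = rng.exponential(0.05, size=(n_hours, n_lat, n_lon))   # mm per hour (hypothetical)

flashes = [  # (hour index, lat index, lon index) of cloud-to-ground flashes (hypothetical)
    (500, 10, 20), (510, 10, 20), (600, 30, 55),
]

LOOKBACK_H = 72          # 3 days; the study found 3-4 days optimal
HOURLY_MAX_MM = 0.3

def is_dry(hour: int, i: int, j: int) -> bool:
    start = max(0, hour - LOOKBACK_H)
    window = precip[start:hour, i, j]
    return window.size == 0 or float(window.max()) <= HOURLY_MAX_MM

dry_count = sum(is_dry(h, i, j) for h, i, j in flashes)
print(f"{dry_count} of {len(flashes)} flashes classified as dry lightning")
```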
Ideal Standards, Acceptance, and Relationship Satisfaction: Latitudes of Differential Effects
Buyukcan-Tetik, Asuman; Campbell, Lorne; Finkenauer, Catrin; Karremans, Johan C.; Kappen, Gesa
2017-01-01
We examined whether the relations of consistency between ideal standards and perceptions of a current romantic partner with partner acceptance and relationship satisfaction level off, or decelerate, above a threshold. We tested our hypothesis using a 3-year longitudinal data set collected from heterosexual newlywed couples. We used two indicators of consistency: pattern correspondence (within-person correlation between ideal standards and perceived partner ratings) and mean-level match (difference between ideal standards score and perceived partner score). Our results revealed that pattern correspondence had no relation with partner acceptance, but a positive linear/exponential association with relationship satisfaction. Mean-level match had a significant positive association with actor’s acceptance and relationship satisfaction up to the point where perceived partner score equaled ideal standards score. Partner effects did not show a consistent pattern. The results suggest that the consistency between ideal standards and perceived partner attributes has a non-linear association with acceptance and relationship satisfaction, although the results were more conclusive for mean-level match. PMID:29033876
NASA Astrophysics Data System (ADS)
Bijl, Piet
2016-10-01
When acquiring a new imaging system and operational task performance is a critical factor for success, it is necessary to specify minimum acceptance requirements that need to be met using a sensor performance model and/or performance tests. Currently, there exists a variety of models and tests of different origins (defense, security, road safety, optometry), and they all make different predictions. This study reviews a number of frequently used methods and shows the effects that small changes in procedure or threshold criteria can have on the outcome of a test. For example, a system may meet the acceptance requirements but not satisfy the needs of the operational task, or the choice of test may determine the rank order of candidate sensors. The goal of the paper is to make people aware of the pitfalls associated with the acquisition process, by i) illustrating potential tricks to have a system accepted that is actually not suited for the operational task, and ii) providing tips to avoid this unwanted situation.
Dufour, Simon; Latour, Sylvie; Chicoine, Yvan; Fecteau, Gilles; Forget, Sylvain; Moreau, Jean; Trépanier, André
2012-01-01
A script concordance test (SCT) was developed measuring clinical reasoning of food-ruminant practitioners for whom potential clinical competence difficulties were identified by their provincial professional organization. The SCT was designed to be used as part of a broader evaluation procedure. A scoring key was developed based on answers from a reference panel of 12 experts and using the modified aggregate method commonly used for SCTs. A convenience sample of 29 food-ruminant practitioners was assembled to assess the reliability and precision of the SCT and to determine a fair threshold value for success. Cronbach's α coefficients were computed to evaluate internal reliability. To evaluate SCT precision, a test-retest methodology was used and measures of agreement beyond chance were computed at question and test levels. After optimization, the 36-question SCT yielded acceptable internal reliability (Cronbach's α=0.70). Precision of the SCT at question level was excellent, with 33 questions (92%) yielding moderate to almost perfect agreement between administrations. At test level, fair agreement (concordance correlation coefficient=0.32) was observed between administrations. A slight SCT score improvement (M=+2.8 points) on the second administration was in part responsible for some of the disagreement and was potentially a result of an adaptation to the SCT format. The score distribution was used to determine a fair threshold value for success, while considering the underlying objectives of the examination. The data suggest that the developed SCT can be used as a reliable and precise measurement of clinical reasoning of food-ruminant practitioners.
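The internal-reliability figure quoted above (Cronbach's α = 0.70) can be reproduced for any respondents-by-items score matrix with the standard formula α = k/(k−1) · (1 − Σ item variances / variance of total score). The sketch below applies it to hypothetical SCT scores; the SCT's modified aggregate scoring itself is not reproduced.

```python
# Illustrative sketch: Cronbach's alpha for a respondents x questions score matrix
# (hypothetical scores).
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """scores: shape (n_respondents, n_items)."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)
    total_var = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(7)
ability = rng.normal(0, 1, size=(29, 1))                      # 29 practitioners
scores = 0.5 * ability + rng.normal(0, 1, size=(29, 36))      # 36 questions
print(f"Cronbach's alpha = {cronbach_alpha(scores):.2f}")
```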
Colomer, Fernando Llavador; Espinós-Morató, Héctor; Iglesias, Enrique Mantilla; Pérez, Tatiana Gómez; Campos-Candel, Andreu; Lozano, Caterina Coll
2012-08-01
A monitoring program based on an indirect method was conducted to assess the approximation of the olfactory impact in several wastewater treatment plants (in the present work, only one is shown). The method uses H2S passive sampling using Palmes-type diffusion tubes impregnated with silver nitrate and fluorometric analysis employing fluorescein mercuric acetate. The analytical procedure was validated in the exposure chamber. Exposure periods of at least 4 days are recommended. The quantification limit of the procedure is 0.61 ppb for a 5-day sampling, which allows the H2S immission (ground concentration) level to be measured within its low odor threshold, from 0.5 to 300 ppb. Experimental results suggest an exposure time greater than 4 days, while recovery efficiency of the procedure, 93.0 ± 1.8%, seems not to depend on the amount of H2S collected by the samplers within their application range. The repeatability, expressed as relative standard deviation, is lower than 7%, which is within the limits normally accepted for this type of sampler. Statistical comparison showed that this procedure and the reference method provide analogous accuracy. The proposed procedure was applied in two experimental campaigns, one intensive and the other extensive, and concentrations within the H2S low odor threshold were quantified at each sampling point. From these results, it can be concluded that the procedure shows good potential for monitoring the olfactory impact around facilities where H2S emissions are dominant.
NASA Astrophysics Data System (ADS)
Choi, Woo Young; Woo, Dong-Soo; Choi, Byung Yong; Lee, Jong Duk; Park, Byung-Gook
2004-04-01
We propose a stable extraction algorithm for the threshold voltage using the transconductance change method by optimizing the node interval. With the algorithm, noise-free gm2 (= dgm/dVGS) profiles can be extracted within one percent error, which leads to a more physically meaningful threshold voltage calculation by the transconductance change method. The extracted threshold voltage corresponds to the gate-to-source voltage at which the surface potential is within kT/q of φs = 2φf + VSB. Our algorithm makes the transconductance change method more practical by overcoming the noise problem. This threshold voltage extraction algorithm yields the threshold roll-off behavior of nanoscale metal-oxide-semiconductor field-effect transistors (MOSFETs) accurately and makes it possible to calculate the surface potential φs at any other point on the drain-to-source current (IDS) versus gate-to-source voltage (VGS) curve. It provides a useful analysis tool in the field of device modeling, simulation and characterization.
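In the transconductance change method, the threshold voltage is taken as the gate voltage at which gm2 = dgm/dVGS (the second derivative of IDS with respect to VGS) peaks; numerical differentiation amplifies noise, which is why the node-interval optimization above matters. The sketch below shows the basic extraction with simple moving-average smoothing on synthetic transfer data; the device model and the smoothing choice are assumptions, not the paper's algorithm.

```python
# Illustrative sketch: threshold-voltage extraction by the transconductance change
# method (peak of dgm/dVGS = d2IDS/dVGS2) on synthetic, slightly noisy transfer data.
import numpy as np

vgs = np.linspace(0.0, 1.2, 241)                   # gate-source voltage sweep (V)
vt_true, ss = 0.45, 0.08
# Smooth softplus-like transfer characteristic (an assumption, not a device model).
ids = 1e-4 * ss * np.log1p(np.exp((vgs - vt_true) / ss))
ids = ids + np.random.default_rng(8).normal(0, 1e-8, vgs.size)   # measurement noise

def smooth(y, width=15):
    """Moving-average smoothing; 'width' plays the role of the node interval."""
    return np.convolve(y, np.ones(width) / width, mode="same")

gm = np.gradient(smooth(ids), vgs)                 # transconductance dIDS/dVGS
gm2 = np.gradient(smooth(gm), vgs)                 # transconductance change dgm/dVGS

interior = slice(20, -20)                          # ignore smoothing edge artifacts
vt_extracted = vgs[interior][np.argmax(gm2[interior])]
print(f"extracted VT = {vt_extracted:.3f} V (true value {vt_true} V)")
```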
NASA Astrophysics Data System (ADS)
Bai, F.; Gagar, D.; Foote, P.; Zhao, Y.
2017-02-01
Acoustic Emission (AE) monitoring can be used to detect the presence of damage as well as determine its location in Structural Health Monitoring (SHM) applications. Information on the time difference of the signal generated by the damage event arriving at different sensors in an array is essential in performing localisation. Currently, this is determined using a fixed threshold which is particularly prone to errors when not set to optimal values. This paper presents three new methods for determining the onset of AE signals without the need for a predetermined threshold. The performance of the techniques is evaluated using AE signals generated during fatigue crack growth and compared to the established Akaike Information Criterion (AIC) and fixed threshold methods. It was found that the 1D location accuracy of the new methods was within the range of < 1 - 7.1 % of the monitored region compared to 2.7% for the AIC method and a range of 1.8-9.4% for the conventional Fixed Threshold method at different threshold levels.
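For reference, the Akaike Information Criterion picker used as the baseline comparison locates the onset as the sample minimizing AIC(k) = k·log(var(x[1..k])) + (N−k−1)·log(var(x[k+1..N])) over the waveform. A minimal version on a synthetic AE-like transient is sketched below; the burst shape and noise level are arbitrary.

```python
# Minimal AIC onset picker for an acoustic emission waveform (synthetic example).
import numpy as np

def aic_onset(x: np.ndarray) -> int:
    """Return the sample index minimizing the two-segment AIC of the waveform."""
    n = x.size
    k = np.arange(1, n - 1)
    # Running variances of the leading and trailing segments.
    var_front = np.array([x[:i].var() for i in k])
    var_back = np.array([x[i:].var() for i in k])
    eps = 1e-20                                   # avoid log(0) on silent segments
    aic = k * np.log(var_front + eps) + (n - k - 1) * np.log(var_back + eps)
    return int(k[np.argmin(aic)])

# Synthetic AE-like record: noise followed by a decaying burst at sample 600.
rng = np.random.default_rng(9)
n, onset_true = 2048, 600
t = np.arange(n)
signal = np.zeros(n)
signal[onset_true:] = np.sin(0.3 * t[: n - onset_true]) * np.exp(-t[: n - onset_true] / 300)
waveform = signal + rng.normal(0, 0.05, n)

print(f"picked onset = {aic_onset(waveform)} (true onset = {onset_true})")
```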
Marginally perceptible outcome feedback, motor learning and implicit processes.
Masters, Rich S W; Maxwell, Jon P; Eves, Frank F
2009-09-01
Participants struck 500 golf balls to a concealed target. Outcome feedback was presented at the subjective or objective threshold of awareness of each participant or at a supraliminal threshold. Participants who received fully perceptible (supraliminal) feedback learned to strike the ball onto the target, as did participants who received feedback that was only marginally perceptible (subjective threshold). Participants who received feedback that was not perceptible (objective threshold) showed no learning. Upon transfer to a condition in which the target was unconcealed, performance increased in both the subjective and the objective threshold condition, but decreased in the supraliminal condition. In all three conditions, participants reported minimal declarative knowledge of their movements, suggesting that deliberate hypothesis testing about how best to move in order to perform the motor task successfully was disrupted by the impoverished disposition of the visual outcome feedback. It was concluded that sub-optimally perceptible visual feedback evokes implicit processes.
Robust Adaptive Thresholder For Document Scanning Applications
NASA Astrophysics Data System (ADS)
Hsing, To R.
1982-12-01
In document scanning applications, thresholding is used to obtain binary data from a scanner. However, due to: (1) a wide range of different color backgrounds; (2) density variations of printed text information; and (3) the shading effect caused by the optical systems, the use of adaptive thresholding to enhance the useful information is highly desired. This paper describes a new robust adaptive thresholder for obtaining valid binary images. It is basically a memory-type algorithm which can dynamically update the black and white reference levels to optimize a local adaptive threshold function. High-quality binary images can be obtained by this algorithm for different types of simulated test patterns. The software algorithm is described and experimental results are presented to illustrate the procedure. Results also show that the techniques described here can be used for real-time signal processing in a variety of applications.
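The memory-type scheme described can be sketched as a scanline pass that keeps running estimates of the local black and white reference levels and thresholds midway between them; the update constant, initialization, and midpoint rule below are assumptions for illustration, not the paper's tuned algorithm.

```python
# Illustrative sketch of a memory-type adaptive thresholder for one scanline:
# running black/white reference levels are updated and the threshold tracks their midpoint.
import numpy as np

def adaptive_threshold(scanline: np.ndarray, alpha: float = 0.05) -> np.ndarray:
    """Return a binary scanline (True = white/background). alpha = memory update rate."""
    white_ref = float(scanline[:16].mean())      # initial estimates (assumption)
    black_ref = white_ref * 0.3
    out = np.zeros(scanline.size, dtype=bool)
    for i, v in enumerate(scanline):
        threshold = 0.5 * (white_ref + black_ref)
        is_white = v >= threshold
        out[i] = is_white
        # Slowly update the reference level of the class the pixel was assigned to,
        # so the threshold adapts to background shading and ink-density variations.
        if is_white:
            white_ref = (1 - alpha) * white_ref + alpha * v
        else:
            black_ref = (1 - alpha) * black_ref + alpha * v
    return out

# Toy scanline: shaded background with two dark text strokes.
x = np.arange(512)
line = 200 - 0.1 * x                              # gradual shading across the page
line[100:110] = line[300:315] = 60                # dark strokes
binary = adaptive_threshold(line)
print(f"dark pixels detected: {int((~binary).sum())}")
```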
A Queueing Approach to Optimal Resource Replication in Wireless Sensor Networks
2009-04-29
...replication strategies in wireless sensor networks. The model can be used to minimize either the total transmission rate of the network (an energy-centric approach) or to ensure the proportion of query failures does not exceed a predetermined threshold (a failure-centric approach). The model explicitly
NASA Astrophysics Data System (ADS)
Hardiyanti, Y.; Haekal, M.; Waris, A.; Haryanto, F.
2016-08-01
This research compares quadratic optimization programs for Intensity Modulated Radiation Therapy Treatment Planning (IMRTP) with the Computational Environment for Radiotherapy Research (CERR) software. Treatment plans were generated with 9 and 13 beams, using 6 MV beams at a source-to-skin distance (SSD) of 100 cm from the target volume. Dose calculation used the Quadratic Infinite Beam (QIB) method from CERR. CERR was used to compare the Gauss primary threshold method and the Gauss primary exponential method for dose calculation. In the lung cancer case, threshold values of 0.01 and 0.004 were used. The resulting dose distributions were analysed as dose-volume histograms (DVHs) from CERR. The maximum dose distributions were obtained on the Planning Target Volume (PTV), Clinical Target Volume (CTV), Gross Tumor Volume (GTV), liver, and skin when the exponential dose calculation method was used with 9 beams. When the threshold dose calculation method was used with 13 beams, the maximum dose distributions were obtained on the PTV, GTV, heart, and skin.
Optimal maintenance policy incorporating system level and unit level for mechanical systems
NASA Astrophysics Data System (ADS)
Duan, Chaoqun; Deng, Chao; Wang, Bingran
2018-04-01
This study develops a multi-level maintenance policy combining the system level and the unit level under soft and hard failure modes. The system undergoes system-level preventive maintenance (SLPM) when the conditional reliability of the entire system exceeds the SLPM threshold, and each unit is also subject to a two-level maintenance policy: one level is initiated when the unit exceeds its own preventive maintenance (PM) threshold, and the other is performed opportunistically whenever any other unit undergoes maintenance. The units undergo both periodic inspections and aperiodic inspections triggered by failures of hard-type units. To model practical situations, two types of economic dependence are taken into account: set-up cost dependence and maintenance expertise dependence, which arises because the same technology and tools/equipment can be utilised. The optimisation problem is formulated and solved in a semi-Markov decision process framework. The objective is to find the optimal system-level threshold and unit-level thresholds by minimising the long-run expected average cost per unit time. A formula for the mean residual life is derived for the proposed multi-level maintenance policy. The method is illustrated by a real case study of the feed subsystem of a boring machine, and a comparison with other policies demonstrates the effectiveness of our approach.
Performance breakdown in optimal stimulus decoding
NASA Astrophysics Data System (ADS)
Kostal, Lubomir; Lansky, Petr; Pilarski, Stevan
2015-06-01
Objective. One of the primary goals of neuroscience is to understand how neurons encode and process information about their environment. The problem is often approached indirectly by examining the degree to which the neuronal response reflects the stimulus feature of interest. Approach. In this context, the methods of signal estimation and detection theory provide the theoretical limits on the decoding accuracy with which the stimulus can be identified. The Cramér-Rao lower bound on the decoding precision is widely used, since it can be evaluated easily once the mathematical model of the stimulus-response relationship is determined. However, little is known about the behavior of different decoding schemes with respect to the bound if the neuronal population size is limited. Main results. We show that under broad conditions the optimal decoding displays a threshold-like shift in performance in dependence on the population size. The onset of the threshold determines a critical range where a small increment in size, signal-to-noise ratio or observation time yields a dramatic gain in the decoding precision. Significance. We demonstrate the existence of such threshold regions in early auditory and olfactory information coding. We discuss the origin of the threshold effect and its impact on the design of effective coding approaches in terms of relevant population size.
Pacing threshold changes after transvenous catheter countershock.
Yee, R; Jones, D L; Klein, G J
1984-02-01
The serial changes in pacing threshold and R-wave amplitude were examined after insertion of a countershock catheter in 12 patients referred for management of recurrent ventricular tachyarrhythmias. In 6 patients, values before and immediately after catheter countershock were monitored. Pacing threshold increased (from 1.4 +/- 0.2 to 2.4 +/- 0.5 V, mean +/- standard error of the mean, p less than 0.05) while the R-wave amplitude decreased (bipolar R wave from 5.9 +/- 1.1 to 3.4 +/- 0.7 mV, p less than 0.01; unipolar R wave recorded from the distal ventricular electrode from 8.9 +/- 1.8 to 4.6 +/- 1.2 mV, p less than 0.01; and proximal ventricular electrode from 7.7 +/- 1.5 to 5.0 +/- 1.0 mV, p less than 0.01). A return to control values occurred within 10 minutes. In all patients, pacing threshold increased by 154 +/- 30% (p less than 0.001) during the first 7 days that the catheter was in place. It is concluded that catheter countershock causes an acute increase in pacing threshold and decrease in R-wave amplitude. A catheter used for countershock may not be acceptable as a backup pacing catheter.
Chen, Can; Li, Chentong; Kang, Yanmei
2018-02-14
Fire blight is one of the most devastating plant diseases in the world. This paper proposes a Filippov fire-blight model incorporating cutting off infected branches and replanting susceptible trees. The Filippov-type model is formulated by considering that no control strategy is taken if the number of infected trees is less than an infected threshold level Ic; further, we cut off infected branches once the number of infected trees exceeds Ic; meanwhile, we replant trees if the number of susceptible trees is less than a susceptible threshold level Sc. The global dynamical behaviour of the Filippov system is investigated. It is shown that model solutions ultimately converge to the positive equilibrium that lies in the region above Ic, or below Ic, or on I = Ic, as we vary the susceptible and infected threshold values Sc and Ic. Our results indicate that proper combinations of the susceptible and infected threshold values based on the threshold policy can lead the number of infected trees to an acceptable level, when complete eradication is not economically desirable. Copyright © 2017 Elsevier Ltd. All rights reserved.
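The switching structure of such a threshold policy can be illustrated with a simple simulation: integrate an SI-type tree disease model and turn on branch cutting only while I > Ic and replanting only while S < Sc. The equations, parameter values, and control rates below are placeholders, not the paper's model, and the crude Euler integration ignores sliding-mode subtleties of the true Filippov system.

```python
# Illustrative sketch of a threshold (Filippov-type) control policy for a plant
# disease model: cut infected trees only when I > Ic, replant only when S < Sc.
# All equations and parameters are placeholders, not the paper's model.
import numpy as np

beta, mu = 0.002, 0.01         # transmission and background removal rates (assumed)
cut_rate, replant_rate = 0.3, 5.0
Ic, Sc = 40.0, 400.0           # infected / susceptible threshold levels

def simulate(S0=500.0, I0=10.0, dt=0.01, t_end=400.0):
    S, I = S0, I0
    trajectory = []
    for _ in range(int(t_end / dt)):
        cutting = cut_rate if I > Ic else 0.0          # control only above the threshold
        replanting = replant_rate if S < Sc else 0.0
        dS = -beta * S * I + replanting
        dI = beta * S * I - mu * I - cutting * I
        S, I = S + dt * dS, I + dt * dI
        trajectory.append((S, I))
    return np.array(trajectory)

traj = simulate()
print(f"final state: S = {traj[-1, 0]:.1f}, I = {traj[-1, 1]:.1f} (Ic = {Ic})")
```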
Low threshold and high efficiency solar-pumped laser with Fresnel lens and a grooved Nd:YAG rod
NASA Astrophysics Data System (ADS)
Guan, Zhe; Zhao, Changming; Yang, Suhui; Wang, Yu; Ke, Jieyao; Gao, Fengbin; Zhang, Haiyang
2016-11-01
Sunlight is considered a new efficient source for directly optically pumped solid state lasers. High-efficiency solar-pumped lasers with low threshold power would be more promising than semiconductor lasers with large solar panels for space laser communication. Here we report a significant advance in solar-pumped laser threshold by pumping a Nd:YAG rod with a grooved sidewall. Two solar-pumped laser setups are devised. In both cases, a Fresnel lens is used as the primary sunlight concentrator. A gold-plated conical cavity with a liquid light-guide lens is used as the secondary concentrator to further increase the solar energy concentration. In the first setup, solar pumping a 6 mm diameter Nd:YAG rod, a maximum cw laser power of 31.0 W/m2 at 1064 nm is produced, which is higher than the reported record, and the slope efficiency is 4.98% with a threshold power of 200 W on the surface of the Fresnel lens. In the second setup, a 5 mm diameter laser rod yields an output power of 29.8 W/m2 with a slope efficiency of 4.3%. A threshold power of 102 W is obtained, which is 49% lower than the former. Meanwhile, a theoretical calculation of the threshold power and slope efficiency of the solar-pumped laser has been established based on the rate equations of a four-level system. The results of the finite element analysis by simulation software are verified in experiment. The optimization of the conical cavity with TracePro software and the optimization of the laser resonator with LASCAD are useful for the design of a miniaturized solar-pumped laser.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Yuyu; Smith, Steven J.; Elvidge, Christopher
Accurate information on urban areas at regional and global scales is important for both the science and policy-making communities. The Defense Meteorological Satellite Program/Operational Linescan System (DMSP/OLS) nighttime stable light data (NTL) provide a potential way to map urban areas and their dynamics in an economical and timely manner. In this study, we developed a cluster-based method to estimate the optimal thresholds and map urban extents from the DMSP/OLS NTL data in five major steps, including data preprocessing, urban cluster segmentation, logistic model development, threshold estimation, and urban extent delineation. Different from previous fixed-threshold methods, which suffer from over- and under-estimation issues, in our method the optimal thresholds are estimated based on cluster size and overall nightlight magnitude in the cluster, and they vary from cluster to cluster. Two large countries with different urbanization patterns, the United States and China, were selected to map urban extents using the proposed method. The result indicates that the urbanized area occupies about 2% of total land area in the US, ranging from lower than 0.5% to higher than 10% at the state level, and less than 1% in China, ranging from lower than 0.1% to about 5% at the province level with some municipalities as high as 10%. The derived thresholds and urban extents were evaluated using high-resolution land cover data at the cluster and regional levels. It was found that our method can map urban area in both countries efficiently and accurately. Compared to previous threshold techniques, our method reduces the over- and under-estimation issues when mapping urban extent over a large area. More importantly, our method shows its potential to map global urban extents and temporal dynamics using the DMSP/OLS NTL data in a timely, cost-effective way.
The Impact of Heterogeneous Thresholds on Social Contagion with Multiple Initiators
Karampourniotis, Panagiotis D.; Sreenivasan, Sameet; Szymanski, Boleslaw K.; Korniss, Gyorgy
2015-01-01
The threshold model is a simple but classic model of contagion spreading in complex social systems. To capture the complex nature of social influencing we investigate numerically and analytically the transition in the behavior of threshold-limited cascades in the presence of multiple initiators as the distribution of thresholds is varied between the two extreme cases of identical thresholds and a uniform distribution. We accomplish this by employing a truncated normal distribution of the nodes’ thresholds and observe a non-monotonic change in the cascade size as we vary the standard deviation. Further, for a sufficiently large spread in the threshold distribution, the tipping-point behavior of the social influencing process disappears and is replaced by a smooth crossover governed by the size of initiator set. We demonstrate that for a given size of the initiator set, there is a specific variance of the threshold distribution for which an opinion spreads optimally. Furthermore, in the case of synthetic graphs we show that the spread asymptotically becomes independent of the system size, and that global cascades can arise just by the addition of a single node to the initiator set. PMID:26571486
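A minimal numerical version of this setup seeds a few initiators on a random graph, draws node thresholds from a (here, simply clipped) normal distribution, and repeatedly activates any node whose active-neighbor fraction meets its threshold, so that cascade size can be examined as the threshold spread varies. The graph model, parameters, and clipping shortcut below are illustrative choices, not the paper's exact construction.

```python
# Illustrative sketch: threshold-limited cascades with heterogeneous thresholds and
# a set of initiator nodes, on an Erdos-Renyi graph.
import numpy as np
import networkx as nx

def cascade_size(n=2000, mean_deg=8, phi_mean=0.25, phi_std=0.1,
                 n_initiators=20, seed=0):
    rng = np.random.default_rng(seed)
    g = nx.fast_gnp_random_graph(n, mean_deg / n, seed=seed)
    # Normal thresholds clipped to [0, 1] (a simple stand-in for a truncated normal).
    phi = np.clip(rng.normal(phi_mean, phi_std, n), 0.0, 1.0)
    active = np.zeros(n, dtype=bool)
    active[rng.choice(n, n_initiators, replace=False)] = True
    changed = True
    while changed:                                # synchronous updates until stable
        changed = False
        for v in range(n):
            if active[v]:
                continue
            neigh = list(g.neighbors(v))
            if neigh and np.mean(active[neigh]) >= phi[v]:
                active[v] = True
                changed = True
    return active.mean()

for std in (0.0, 0.1, 0.2, 0.3):
    print(f"threshold std = {std:.1f}: cascade reaches {cascade_size(phi_std=std):.1%} of nodes")
```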
Bennett, Peter A.; Bennett, George L.; Belitz, Kenneth
2009-01-01
Groundwater quality in the approximately 1,180-square-mile Northern Sacramento Valley study unit (REDSAC) was investigated in October 2007 through January 2008 as part of the Priority Basin Project of the Groundwater Ambient Monitoring and Assessment (GAMA) Program. The GAMA Priority Basin Project was developed in response to the Groundwater Quality Monitoring Act of 2001, and is being conducted by the U.S. Geological Survey (USGS) in cooperation with the California State Water Resources Control Board (SWRCB). The study was designed to provide a spatially unbiased assessment of the quality of raw groundwater used for public water supplies within REDSAC and to facilitate statistically consistent comparisons of groundwater quality throughout California. Samples were collected from 66 wells in Shasta and Tehama Counties. Forty-three of the wells were selected using a spatially distributed, randomized grid-based method to provide statistical representation of the study area (grid wells), and 23 were selected to aid in evaluation of specific water-quality issues (understanding wells). The groundwater samples were analyzed for a large number of synthetic organic constituents (volatile organic compounds [VOC], pesticides and pesticide degradates, and pharmaceutical compounds), constituents of special interest (perchlorate and N-nitrosodimethylamine [NDMA]), naturally occurring inorganic constituents (nutrients, major and minor ions, and trace elements), radioactive constituents, and microbial constituents. Naturally occurring isotopes (tritium, carbon-14, stable isotopes of nitrogen and oxygen in nitrate, and stable isotopes of hydrogen and oxygen of water) and dissolved noble gases also were measured to help identify the sources and ages of the sampled ground water. In total, over 275 constituents and field water-quality indicators were investigated. Three types of quality-control samples (blanks, replicates, and matrix spikes) were collected at approximately 8 to 11 percent of the wells, and the results for these samples were used to evaluate the quality of the data obtained from the groundwater samples. Field blanks rarely contained detectable concentrations of any constituent, suggesting that contamination was not a noticeable source of bias in the data for the groundwater samples. Differences between replicate samples were within acceptable ranges for nearly all compounds, indicating acceptably low variability. Matrix-spike recoveries were within acceptable ranges for most compounds. This study did not attempt to evaluate the quality of water delivered to consumers; after withdrawal from the ground, raw groundwater typically is treated, disinfected, or blended with other waters to maintain water quality. Regulatory thresholds apply to water that is served to the consumer, not to raw ground water. However, to provide some context for the results, concentrations of constituents measured in the raw groundwater were compared with regulatory and nonregulatory health-based thresholds established by the U.S. Environmental Protection Agency (USEPA) and California Department of Public Health (CDPH) and with aesthetic and technical thresholds established by CDPH. Comparisons between data collected for this study and drinking-water thresholds are for illustrative purposes only and do not indicate compliance or noncompliance with those thresholds. The concentrations of most constituents detected in groundwater samples from REDSAC were below drinking-water thresholds.
Volatile organic compounds (VOC) and pesticides were detected in less than one-quarter of the samples and were generally less than a hundredth of any health-based thresholds. NDMA was detected in one grid well above the NL-CA. Concentrations of all nutrients and trace elements in samples from REDSAC wells were below the health-based thresholds except those of arsenic in three samples, which were above the USEPA maximum contaminant level (MCL-US). However
Diagnostic depressive symptoms of the mixed bipolar episode.
Cassidy, F; Ahearn, E; Murry, E; Forest, K; Carroll, B J
2000-03-01
There is not yet consensus on the best diagnostic definition of mixed bipolar episodes. Many have suggested the DSM-III-R/-IV definition is too rigid. We propose alternative criteria using data from a large patient cohort. We evaluated 237 manic in-patients using DSM-III-R criteria and the Scale for Manic States (SMS). A bimodally distributed factor of dysphoric mood has been reported from the SMS data. We used both the factor and the DSM-III-R classifications to identify candidate depressive symptoms and then developed three candidate depressive symptom sets. Using ROC analysis, we determined the optimal threshold number of symptoms in each set and compared the three ROC solutions. The optimal solution was tested against the DSM-III-R classification for cross-validation. The optimal ROC solution was a set, derived from both the DSM-III-R and the SMS, and the optimal threshold for diagnosis was two or more symptoms. Applying this set iteratively to the DSM-III-R classification produced the identical ROC solution. The prevalence of mixed episodes in the cohort was 13.9% by DSM-III-R, 20.2% by the dysphoria factor and 27.4% by the new ROC solution. A diagnostic set of six dysphoric symptoms (depressed mood, anhedonia, guilt, suicide, fatigue and anxiety), with a threshold of two symptoms, is proposed for a mixed episode. This new definition has a foundation in clinical data, in the proven diagnostic performance of the qualifying symptoms, and in ROC validation against two previous definitions that each have face validity.
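The ROC step described above, sweeping candidate symptom-count cutoffs and keeping the best-performing one, can be illustrated with a short sketch. This is not the authors' code; it assumes per-patient dysphoric-symptom counts and a binary reference classification (both hypothetical here) and selects the cutoff that maximizes Youden's J.

```python
import numpy as np

def optimal_symptom_threshold(symptom_counts, reference_labels):
    """Return the symptom-count cutoff maximizing Youden's J = sensitivity + specificity - 1."""
    counts = np.asarray(symptom_counts)
    labels = np.asarray(reference_labels, dtype=bool)  # True = mixed episode under the reference definition
    best_cut, best_j = None, -1.0
    for cut in range(1, counts.max() + 1):
        predicted = counts >= cut
        sensitivity = predicted[labels].mean()
        specificity = (~predicted[~labels]).mean()
        j = sensitivity + specificity - 1.0
        if j > best_j:
            best_cut, best_j = cut, j
    return best_cut, best_j

# Hypothetical data: dysphoric-symptom counts and a reference classification for ten patients.
counts = [0, 1, 2, 3, 0, 4, 1, 2, 5, 0]
labels = [0, 0, 1, 1, 0, 1, 0, 0, 1, 0]
print(optimal_symptom_threshold(counts, labels))   # best cutoff is 2 for this toy data
```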
Nkpaa, K W; Patrick-Iwuanyanwu, K C; Wegwu, M O; Essien, E B
2016-01-01
This study was designed to investigate the human health risk through consumption of seafood from contaminated sites in Kaa, B-Dere, and Bodo City, all in Ogoniland. The potential non-carcinogenic health risks for consumers were investigated by assessing the estimated daily intake and target hazard quotients for Cr, Cd, Zn, Pb, Mn, and Fe, while the carcinogenic health effect from Cr, Cd, and Pb was also estimated. The estimated daily intakes from seafood consumption were below the threshold values for Cr, Mn, and Zn, while they exceeded the thresholds for Cd, Pb, and Fe. The target hazard quotients for Zn and Cr were below 1. Target hazard quotient values for Cd, Pb, Mn, and Fe were greater than 1, except for the Fe level in Liza falcipinis from Kaa. Furthermore, the estimated carcinogenic risk for Cr in all samples under study exceeded the accepted risk level of 10E-4. Also, the Cd carcinogenic risk level for L. falcipinis and Callinectes pallidus collected from B-Dere and C. pallidus collected from Bodo City was 1.1E-3, which also exceeded the accepted risk level of 10E-4 for Cd. The estimated carcinogenic risk for Pb was within the acceptable range of 10E-4. Consumers of seafood from these sites in Ogoniland may be exposed to metal pollution.
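The risk indices referenced above follow the standard USEPA-style arithmetic: estimated daily intake (EDI) from concentration, consumption rate, and body weight; target hazard quotient THQ = EDI / reference dose; and lifetime carcinogenic risk = EDI x cancer slope factor. The sketch below is only illustrative, with hypothetical inputs rather than the study's measured values.

```python
def estimated_daily_intake(conc_mg_per_kg, intake_kg_per_day, body_weight_kg):
    """EDI in mg per kg of body weight per day."""
    return conc_mg_per_kg * intake_kg_per_day / body_weight_kg

def target_hazard_quotient(edi, reference_dose):
    """THQ > 1 suggests a potential non-carcinogenic health risk."""
    return edi / reference_dose

def carcinogenic_risk(edi, slope_factor):
    """Lifetime cancer risk; values above about 1E-4 are commonly treated as unacceptable."""
    return edi * slope_factor

# Hypothetical Cd exposure: 0.5 mg/kg in seafood, 50 g consumed per day by a 60 kg adult.
edi_cd = estimated_daily_intake(0.5, 0.05, 60)
print(target_hazard_quotient(edi_cd, reference_dose=1e-3))  # reference dose is an assumed value
print(carcinogenic_risk(edi_cd, slope_factor=0.38))         # slope factor is an assumed value
```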
Frank, T
2001-04-01
The first purpose of this study was to determine high-frequency (8 to 16 kHz) thresholds for standardizing reference equivalent threshold sound pressure levels (RETSPLs) for a Sennheiser HDA 200 earphone. The second and perhaps more important purpose of this study was to determine whether repeated high-frequency thresholds using a Sennheiser HDA 200 earphone had a lower intrasubject threshold variability than the ASHA 1994 significant threshold shift criteria for ototoxicity. High-frequency thresholds (8 to 16 kHz) were obtained for 100 (50 male, 50 female) normally hearing (0.25 to 8 kHz) young adults (mean age of 21.2 yr) in four separate test sessions using a Sennheiser HDA 200 earphone. The mean and median high-frequency thresholds were similar for each test session and increased as frequency increased. At each frequency, the high-frequency thresholds were not significantly (p > 0.05) different for gender, test ear, or test session. The median thresholds at each frequency were similar to the 1998 interim ISO RETSPLs; however, large standard deviations and wide threshold distributions indicated very high intersubject threshold variability, especially at 14 and 16 kHz. Threshold repeatability was determined by finding the threshold differences between each possible test session comparison (N = 6). About 98% of all of the threshold differences were within a clinically acceptable range of +/-10 dB from 8 to 14 kHz. The threshold differences between each subject's second, third, and fourth minus their first test session were also found to determine whether intrasubject threshold variability was less than the ASHA 1994 criteria for determining a significant threshold shift due to ototoxicity. The results indicated a false-positive rate of 0% for a threshold shift > or = 20 dB at any frequency and a false-positive rate of 2% for a threshold shift >10 dB at two consecutive frequencies. This study verified that the output of high-frequency audiometers at 0 dB HL using Sennheiser HDA 200 earphones should equal the 1998 interim ISO RETSPLs from 8 to 16 kHz. Further, because the differences between repeated thresholds were well within +/-10 dB and had an extremely low false-positive rate in reference to the ASHA 1994 criteria for a significant threshold shift due to ototoxicity, a Sennheiser HDA 200 earphone can be used for serial monitoring to determine whether significant high-frequency threshold shifts have occurred for patients receiving potentially ototoxic drug therapy.
Perceptual precision of passive body tilt is consistent with statistically optimal cue integration
Karmali, Faisal; Nicoucar, Keyvan; Merfeld, Daniel M.
2017-01-01
When making perceptual decisions, humans have been shown to optimally integrate independent noisy multisensory information, matching maximum-likelihood (ML) limits. Such ML estimators provide a theoretic limit to perceptual precision (i.e., minimal thresholds). However, how the brain combines two interacting (i.e., not independent) sensory cues remains an open question. To study the precision achieved when combining interacting sensory signals, we measured perceptual roll tilt and roll rotation thresholds between 0 and 5 Hz in six normal human subjects. Primary results show that roll tilt thresholds between 0.2 and 0.5 Hz were significantly lower than predicted by a ML estimator that includes only vestibular contributions that do not interact. In this paper, we show how other cues (e.g., somatosensation) and an internal representation of sensory and body dynamics might independently contribute to the observed performance enhancement. In short, a Kalman filter was combined with an ML estimator to match human performance, whereas the potential contribution of nonvestibular cues was assessed using published bilateral loss patient data. Our results show that a Kalman filter model including previously proven canal-otolith interactions alone (without nonvestibular cues) can explain the observed performance enhancements as can a model that includes nonvestibular contributions. NEW & NOTEWORTHY We found that human whole body self-motion direction-recognition thresholds measured during dynamic roll tilts were significantly lower than those predicted by a conventional maximum-likelihood weighting of the roll angular velocity and quasistatic roll tilt cues. Here, we show that two models can each match this “apparent” better-than-optimal performance: 1) inclusion of a somatosensory contribution and 2) inclusion of a dynamic sensory interaction between canal and otolith cues via a Kalman filter model. PMID:28179477
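The maximum-likelihood benchmark mentioned above has a simple closed form for two independent cues: the combined variance is the inverse sum of the single-cue inverse variances, so the combined threshold is always at or below the better single-cue threshold. A minimal sketch of that prediction (the single-cue thresholds below are placeholders, not the study's measurements); a measured threshold well below this prediction is what motivates the Kalman-filter account of interacting canal-otolith cues.

```python
import math

def ml_combined_threshold(sigma_a, sigma_b):
    """Maximum-likelihood prediction for two independent cues:
    1/sigma_combined^2 = 1/sigma_a^2 + 1/sigma_b^2."""
    combined_variance = 1.0 / (1.0 / sigma_a**2 + 1.0 / sigma_b**2)
    return math.sqrt(combined_variance)

# Placeholder single-cue thresholds (arbitrary units).
canal_threshold, otolith_threshold = 2.0, 1.5
print(ml_combined_threshold(canal_threshold, otolith_threshold))  # ~1.2, below either cue alone
```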
Kirchner, J; Kickuth, R; Laufer, U; Noack, M; Liermann, D
2000-05-01
Ultrafast detector technology enables bolus-triggered application of contrast media. In a prospective study we investigated the benefit of this new method with the intention of optimizing enhancement during examination of the chest and abdomen. In total, we examined 548 patients under standardized conditions. All examinations were performed on a Somatom Plus 4 Power CT system (Siemens Corp., Forchheim, Germany) using the CARE-Bolus software. This produces repetitive low-dose test images (e.g. for the lung: 140 kV, 43 mA, TI 0.5 s) and measures the Hounsfield attenuation in a pre-selected region of interest. Once the attenuation exceeded a defined threshold, a diagnostic spiral CT examination was begun automatically. The data obtained from 321 abdominal CT and 179 lung CT examinations were correlated with different parameters such as age, weight and height of the patients and parameters of vascular access. In a group of 80 patients, the injection of contrast medium was stopped after reaching a pre-defined threshold of an increase of 100 HU over the baseline. Then, we assessed the maximal enhancement of the liver, pulmonary artery trunk and aortic arch. There was no correlation between bolus geometry and age, body surface or weight. In helical CT of the abdomen the threshold was reached after a mean trigger time of 27 s (range 13-67 s) and only 65 ml (range 41-105 ml) of contrast medium were administered. In helical CT of the lung the threshold was reached after 21 s (range 12-48 s) and the mean amount of administered contrast medium was 48 ml (range 38-71 ml). Bolus triggering allows optimized enhancement of the organs and reduces the dose of contrast material required compared with standard administration. Copyright 2000 The Royal College of Radiologists.
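The triggering logic described above amounts to polling the region-of-interest attenuation on low-dose monitoring images and starting the diagnostic spiral once enhancement exceeds the baseline by a set amount (100 HU in the subgroup where injection was also stopped). The sketch below is our own schematic, not the CARE-Bolus implementation; the scanner and injector callbacks are hypothetical.

```python
import time

THRESHOLD_HU_OVER_BASELINE = 100  # enhancement increment used in the 80-patient subgroup

def monitor_and_trigger(measure_roi_hu, start_diagnostic_scan, stop_contrast_injection,
                        interval_s=1.0, timeout_s=70.0):
    """Poll ROI attenuation until the enhancement threshold is exceeded, then stop the
    injection and start the diagnostic spiral. The three callbacks are hypothetical
    hooks into scanner/injector control software."""
    baseline = measure_roi_hu()
    elapsed = 0.0
    while elapsed < timeout_s:
        time.sleep(interval_s)
        elapsed += interval_s
        if measure_roi_hu() - baseline >= THRESHOLD_HU_OVER_BASELINE:
            stop_contrast_injection()
            start_diagnostic_scan()
            return elapsed            # trigger time in seconds
    start_diagnostic_scan()           # fall back to a fixed start if never triggered
    return elapsed

# Simulated run with a linearly rising ROI attenuation (purely illustrative).
hu_values = iter(range(0, 400, 20))
print(monitor_and_trigger(lambda: next(hu_values), lambda: print("scan started"),
                          lambda: print("injection stopped"), interval_s=0.01))
```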
Method and apparatus for analog pulse pile-up rejection
De Geronimo, Gianluigi
2013-12-31
A method and apparatus for pulse pile-up rejection are disclosed. The apparatus comprises a delay value application constituent configured to receive a threshold-crossing time value, and provide an adjustable value according to a delay value and the threshold-crossing time value; and a comparison constituent configured to receive a peak-occurrence time value and the adjustable value, compare the peak-occurrence time value with the adjustable value, indicate pulse acceptance if the peak-occurrence time value is less than or equal to the adjustable value, and indicate pulse rejection if the peak-occurrence time value is greater than the adjustable value.
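The acceptance rule in the claims reduces to one comparison: the pulse is kept only if its peak occurs no later than the threshold-crossing time plus an adjustable delay, otherwise it is flagged as piled up. A minimal sketch of that logic (the function and variable names are ours, not the patent's):

```python
def accept_pulse(threshold_crossing_time, peak_occurrence_time, delay):
    """Return True (accept) if the peak arrives within `delay` of the threshold crossing,
    False (reject as pile-up) otherwise."""
    adjustable_value = threshold_crossing_time + delay
    return peak_occurrence_time <= adjustable_value

print(accept_pulse(threshold_crossing_time=1.0, peak_occurrence_time=1.4, delay=0.5))  # True: accepted
print(accept_pulse(threshold_crossing_time=1.0, peak_occurrence_time=1.8, delay=0.5))  # False: rejected
```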
Method and apparatus for analog pulse pile-up rejection
De Geronimo, Gianluigi
2014-11-18
A method and apparatus for pulse pile-up rejection are disclosed. The apparatus comprises a delay value application constituent configured to receive a threshold-crossing time value, and provide an adjustable value according to a delay value and the threshold-crossing time value; and a comparison constituent configured to receive a peak-occurrence time value and the adjustable value, compare the peak-occurrence time value with the adjustable value, indicate pulse acceptance if the peak-occurrence time value is less than or equal to the adjustable value, and indicate pulse rejection if the peak-occurrence time value is greater than the adjustable value.
Bahouth, George; Digges, Kennerly; Schulman, Carl
2012-01-01
This paper presents methods to estimate crash injury risk based on crash characteristics captured by some passenger vehicles equipped with Advanced Automatic Crash Notification technology. The resulting injury risk estimates could be used within an algorithm to optimize rescue care. Regression analysis was applied to the National Automotive Sampling System / Crashworthiness Data System (NASS/CDS) to determine how variations in a specific injury risk threshold would influence the accuracy of predicting crashes with serious injuries. The recommended thresholds for classifying crashes with severe injuries are 0.10 for frontal crashes and 0.05 for side crashes. The regression analysis of NASS/CDS indicates that these thresholds will provide sensitivity above 0.67 while maintaining a positive predictive value in the range of 0.20. PMID:23169132
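The threshold recommendation above reflects a trade-off between sensitivity and positive predictive value for a rare outcome. The sketch below shows how the two quantities are computed for a given risk cutoff; the risk scores and injury labels are synthetic, not NASS/CDS data.

```python
import numpy as np

def sensitivity_and_ppv(risk_scores, serious_injury, cutoff):
    """Classify crashes as 'predicted serious' when risk >= cutoff; return (sensitivity, PPV)."""
    risk = np.asarray(risk_scores)
    injured = np.asarray(serious_injury, dtype=bool)
    predicted = risk >= cutoff
    sensitivity = predicted[injured].mean() if injured.any() else float("nan")
    ppv = injured[predicted].mean() if predicted.any() else float("nan")
    return sensitivity, ppv

# Synthetic crash population: serious injury is rare and shifts the risk score upward.
rng = np.random.default_rng(2)
injured = rng.random(5000) < 0.02
risk = np.clip(0.05 + 0.30 * injured + 0.10 * rng.standard_normal(5000), 0.0, 1.0)
for cutoff in (0.05, 0.10, 0.20):
    print(cutoff, sensitivity_and_ppv(risk, injured, cutoff))
```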
ERIC Educational Resources Information Center
Simen, Patrick; Contreras, David; Buck, Cara; Hu, Peter; Holmes, Philip; Cohen, Jonathan D.
2009-01-01
The drift-diffusion model (DDM) implements an optimal decision procedure for stationary, 2-alternative forced-choice tasks. The height of a decision threshold applied to accumulating information on each trial determines a speed-accuracy tradeoff (SAT) for the DDM, thereby accounting for a ubiquitous feature of human performance in speeded response…
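The speed-accuracy tradeoff attributed to the decision threshold can be reproduced with a few lines of simulation: raising the boundary of a two-choice drift-diffusion process increases accuracy at the cost of longer decision times. A minimal sketch with arbitrary parameters (not taken from the study):

```python
import numpy as np

def simulate_ddm(drift=0.3, threshold=1.0, noise=1.0, dt=0.005, max_t=5.0, n_trials=500, seed=0):
    """Simulate a two-boundary drift-diffusion process; return (accuracy, mean decision time)."""
    rng = np.random.default_rng(seed)
    correct, times = [], []
    for _ in range(n_trials):
        x, t = 0.0, 0.0
        while abs(x) < threshold and t < max_t:
            x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        correct.append(x >= threshold)   # upper boundary taken as the correct response
        times.append(t)
    return float(np.mean(correct)), float(np.mean(times))

for boundary in (0.5, 1.0, 1.5):   # higher thresholds: more accurate but slower
    print(boundary, simulate_ddm(threshold=boundary))
```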
ERIC Educational Resources Information Center
Snodgrass, Michael; Shevrin, Howard
2006-01-01
Although the veridicality of unconscious perception is increasingly accepted, core issues remain unresolved [Jack, A., & Shallice, T. (2001). Introspective physicalism as an approach to the science of consciousness. "Cognition, 79," 161-196], and sharp disagreement persists regarding fundamental methodological and theoretical issues. The most…
Demographic and habitat requirements for conservation of bull trout
Bruce E. Rieman; John D. Mclntyre
1993-01-01
Elements in bull trout biology, population dynamics, habitat, and biotic interactions important to conservation of the species are identified. Bull trout appear to have more specific habitat requirements than other salmonids, but no critical thresholds of acceptable habitat condition were found. Size, temporal variation, and spatial distribution are likely to influence...
Evaluation of an Airborne Spacing Concept, On-Board Spacing Tool, and Pilot Interface
NASA Technical Reports Server (NTRS)
Swieringa, Kurt; Murdoch, Jennifer L.; Baxley, Brian; Hubbs, Clay
2011-01-01
The number of commercial aircraft operations is predicted to increase in the next ten years, creating a need for improved operational efficiency. Two areas believed to offer significant increases in efficiency are optimized profile descents and dependent parallel runway operations. It is envisioned that during both of these types of operations, flight crews will precisely space their aircraft behind preceding aircraft at air traffic control assigned intervals to increase runway throughput and maximize the use of existing infrastructure. This paper describes a human-in-the-loop experiment designed to study the performance of an onboard spacing algorithm and pilots' ratings of the usability and acceptability of an airborne spacing concept that supports dependent parallel arrivals. Pilot participants flew arrivals into the Dallas/Fort Worth terminal environment using one of three different simulators located at the National Aeronautics and Space Administration's (NASA) Langley Research Center. Scenarios were flown using Interval Management with Spacing (IM-S) and Required Time of Arrival (RTA) control methods during conditions of no error, error in the forecast wind, and offset (disturbance) to the arrival flow. Results indicate that pilots delivered their aircraft to the runway threshold within +/- 3.5 seconds of their assigned arrival time and reported that both the IM-S and RTA procedures were associated with low workload levels. In general, pilots found the IM-S concept, procedures, speeds, and interface acceptable; with 92% of pilots rating the procedures as complete and logical, 218 out of 240 responses agreeing that the IM-S speeds were acceptable, and 63% of pilots reporting that the displays were easy to understand and displayed in appropriate locations. The 22 (out of 240) responses indicating that the commanded speeds were not acceptable or appropriate occurred during scenarios containing wind error and offset error. Concerns cited included the occurrence of multiple speed changes within a short time period, speed changes required within twenty miles of the runway, and an increase in airspeed followed shortly by a decrease in airspeed. Within this paper, appropriate design recommendations are provided, and the need for continued, iterative human-centered design is discussed.
NASA Astrophysics Data System (ADS)
Wan, Minjie; Gu, Guohua; Qian, Weixian; Ren, Kan; Chen, Qian; Maldague, Xavier
2018-06-01
Infrared image enhancement plays a significant role in intelligent urban surveillance systems for smart city applications. Unlike existing methods that only exaggerate the global contrast, we propose a particle swarm optimization-based local entropy weighted histogram equalization which involves the enhancement of both local details and foreground and background contrast. First of all, a novel local entropy weighted histogram depicting the distribution of detail information is calculated based on a modified hyperbolic tangent function. Then, the histogram is divided into two parts via a threshold maximizing the inter-class variance in order to improve the contrasts of foreground and background, respectively. To avoid over-enhancement and noise amplification, double plateau thresholds of the presented histogram are formulated by means of a particle swarm optimization algorithm. Lastly, each sub-image is equalized independently according to the constrained sub-local entropy weighted histogram. Comparative experiments implemented on real infrared images prove that our algorithm outperforms other state-of-the-art methods in terms of both visual and quantized evaluations.
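The foreground/background split described above uses the classic between-class-variance criterion (as in Otsu's method). The sketch below shows that single step on a hypothetical bimodal histogram; the local-entropy weighting, the PSO-selected double plateau limits, and the sub-image equalization are omitted.

```python
import numpy as np

def max_interclass_variance_threshold(hist):
    """Return the bin index that maximizes the between-class variance of a histogram."""
    p = np.asarray(hist, dtype=float)
    p = p / p.sum()
    bins = np.arange(len(p))
    best_t, best_var = 0, -1.0
    for t in range(1, len(p)):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 == 0.0 or w1 == 0.0:
            continue
        mu0 = (bins[:t] * p[:t]).sum() / w0
        mu1 = (bins[t:] * p[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Hypothetical 8-bit histogram: a cool background mode plus a warmer foreground mode.
rng = np.random.default_rng(1)
values = np.concatenate([rng.normal(60, 10, 5000), rng.normal(180, 15, 2000)])
hist, _ = np.histogram(values, bins=256, range=(0, 255))
print(max_interclass_variance_threshold(hist))   # lands between the two modes
```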
VirSSPA- a virtual reality tool for surgical planning workflow.
Suárez, C; Acha, B; Serrano, C; Parra, C; Gómez, T
2009-03-01
A virtual reality tool, called VirSSPA, was developed to optimize the planning of surgical processes. Segmentation algorithms were implemented for Computed Tomography (CT) images: a region growing procedure was used for soft tissues and a thresholding algorithm was used to segment bones. The algorithms operate semiautomatically, since they only need seed selection with the mouse on each tissue segmented by the user. The novelty of the paper is the adaptation of an enhancement method based on histogram thresholding applied to CT images for surgical planning, which simplifies subsequent segmentation. A substantial improvement of the virtual reality tool VirSSPA was obtained with these algorithms. VirSSPA was used to optimize surgical planning, to decrease the time spent on surgical planning and to improve operative results. The success rate increases due to surgeons being able to see the exact extent of the patient's ailment. This tool can decrease operating room time, thus resulting in reduced costs. Virtual simulation was effective for optimizing surgical planning, which could, consequently, result in improved outcomes with reduced costs.
Shriner, Susan A; VanDalen, Kaci K; Root, J Jeffrey; Sullivan, Heather J
2016-02-01
The availability of a validated commercial assay is an asset for any wildlife investigation. However, commercial products are often developed for use in livestock and are not optimized for wildlife. Consequently, it is incumbent upon researchers and managers to apply commercial products appropriately to optimize program outcomes. We tested more than 800 serum samples from mallards for antibodies to influenza A virus with the IDEXX AI MultiS-Screen Ab test to evaluate assay performance. Applying the test per manufacturer's recommendations resulted in good performance with 84% sensitivity and 100% specificity. However, performance was improved to 98% sensitivity and 98% specificity by increasing the recommended cut-off. Using this alternative threshold for identifying positive and negative samples would greatly improve sample classification, especially for field samples collected months after infection when antibody titers have waned from the initial primary immune response. Furthermore, a threshold that balances sensitivity and specificity reduces estimation bias in seroprevalence estimates. Published by Elsevier B.V.
Environmental statistics and optimal regulation
NASA Astrophysics Data System (ADS)
Sivak, David; Thomson, Matt
2015-03-01
The precision with which an organism can detect its environment, and the timescale for and statistics of environmental change, will affect the suitability of different strategies for regulating protein levels in response to environmental inputs. We propose a general framework--here applied to the enzymatic regulation of metabolism in response to changing nutrient concentrations--to predict the optimal regulatory strategy given the statistics of fluctuations in the environment and measurement apparatus, and the costs associated with enzyme production. We find: (i) relative convexity of enzyme expression cost and benefit influences the fitness of thresholding or graded responses; (ii) intermediate levels of measurement uncertainty call for a sophisticated Bayesian decision rule; and (iii) in dynamic contexts, intermediate levels of uncertainty call for retaining memory of the past. Statistical properties of the environment, such as variability and correlation times, set optimal biochemical parameters, such as thresholds and decay rates in signaling pathways. Our framework provides a theoretical basis for interpreting molecular signal processing algorithms and a classification scheme that organizes known regulatory strategies and may help conceptualize heretofore unknown ones.
Recent Results on "Approximations to Optimal Alarm Systems for Anomaly Detection"
NASA Technical Reports Server (NTRS)
Martin, Rodney Alexander
2009-01-01
An optimal alarm system and its approximations may use Kalman filtering for univariate linear dynamic systems driven by Gaussian noise to provide a layer of predictive capability. Predicted Kalman filter future process values and a fixed critical threshold can be used to construct a candidate level-crossing event over a predetermined prediction window. An optimal alarm system can be designed to elicit the fewest false alarms for a fixed detection probability in this particular scenario.
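A minimal sketch of the idea, simplified well beyond the paper's optimal alarm formulation: propagate a scalar Kalman estimate forward over a prediction window and raise an alarm when the (approximate) probability of crossing the fixed critical threshold exceeds a design level. The dynamics, the numbers, and the independence approximation across steps are all our own assumptions.

```python
from math import erf, sqrt

def predicted_crossing_probability(x_est, p_est, a, q, threshold, horizon):
    """Propagate x ~ N(x_est, p_est) through x[k+1] = a*x[k] + w, w ~ N(0, q), with no new
    measurements, and return an approximate probability that any of the next `horizon`
    predicted values exceeds `threshold` (steps treated as independent)."""
    prob_no_cross = 1.0
    x, p = x_est, p_est
    for _ in range(horizon):
        x, p = a * x, a * a * p + q                      # Kalman prediction step only
        z = (threshold - x) / sqrt(p)
        p_exceed = 0.5 * (1.0 - erf(z / sqrt(2.0)))      # P(predicted value > threshold)
        prob_no_cross *= (1.0 - p_exceed)
    return 1.0 - prob_no_cross

# Hypothetical state estimate and mildly unstable dynamics; alarm if probability > 0.1.
prob = predicted_crossing_probability(x_est=0.0, p_est=0.1, a=1.02, q=0.05, threshold=2.0, horizon=20)
print(prob, prob > 0.1)
```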
Optimizing the Respiratory Pump: Harnessing Inspiratory Resistance to Treat Systemic Hypotension
2011-06-01
Lurie KG, Vicaut E, Martin D, Gueugniaud PY, Petit JL, Payen D. Evaluation of an impedance threshold device in patients receiving active compression...inspiratory impedance threshold device for out-of-hospital cardiac arrest. Circulation 2003;108(18):2201-2205. 32. Lindstrom DA, Parquette BA. Use...Lurie KG, Voelckel WG, Zielinski T, McKnite S, Lindstrom P, Peterson C, et al. Improving standard cardiopulmonary resuscitation with an inspiratory
NASA Astrophysics Data System (ADS)
Kosugi, Akito; Takemi, Mitsuaki; Tia, Banty; Castagnola, Elisa; Ansaldo, Alberto; Sato, Kenta; Awiszus, Friedemann; Seki, Kazuhiko; Ricci, Davide; Fadiga, Luciano; Iriki, Atsushi; Ushiba, Junichi
2018-06-01
Objective. The motor map has been widely used as an indicator of motor skills and learning, cortical injury, plasticity, and functional recovery. Cortical stimulation mapping using epidural electrodes has recently been adopted for animal studies. However, several technical limitations still remain. Test-retest reliability of epidural cortical stimulation (ECS) mapping has not been examined in detail. Many previous studies defined evoked movements and motor thresholds by visual inspection, and thus lacked quantitative measurements. A reliable and quantitative motor map is important to elucidate the mechanisms of motor cortical reorganization. The objective of the current study was to perform reliable ECS mapping of motor representations based on the motor thresholds, which were stochastically estimated from motor evoked potentials recorded via chronically implanted micro-electrocorticographical (µECoG) electrode arrays, in common marmosets. Approach. ECS was applied using the implanted µECoG electrode arrays in three adult common marmosets under awake conditions. Motor evoked potentials were recorded through electromyographical electrodes implanted in upper limb muscles. The motor threshold was calculated through a modified maximum likelihood threshold-hunting algorithm fitted with the recorded data from marmosets. Further, a computer simulation confirmed the reliability of the algorithm. Main results. The computer simulation suggested that the modified maximum likelihood threshold-hunting algorithm enabled estimation of the motor threshold with acceptable precision. In vivo ECS mapping showed high test-retest reliability with respect to the excitability and location of the cortical forelimb motor representations. Significance. Using implanted µECoG electrode arrays and a modified motor threshold-hunting algorithm, we were able to achieve reliable motor mapping in common marmosets with the ECS system.
Kosugi, Akito; Takemi, Mitsuaki; Tia, Banty; Castagnola, Elisa; Ansaldo, Alberto; Sato, Kenta; Awiszus, Friedemann; Seki, Kazuhiko; Ricci, Davide; Fadiga, Luciano; Iriki, Atsushi; Ushiba, Junichi
2018-06-01
The motor map has been widely used as an indicator of motor skills and learning, cortical injury, plasticity, and functional recovery. Cortical stimulation mapping using epidural electrodes has recently been adopted for animal studies. However, several technical limitations still remain. Test-retest reliability of epidural cortical stimulation (ECS) mapping has not been examined in detail. Many previous studies defined evoked movements and motor thresholds by visual inspection, and thus lacked quantitative measurements. A reliable and quantitative motor map is important to elucidate the mechanisms of motor cortical reorganization. The objective of the current study was to perform reliable ECS mapping of motor representations based on the motor thresholds, which were stochastically estimated from motor evoked potentials recorded via chronically implanted micro-electrocorticographical (µECoG) electrode arrays, in common marmosets. ECS was applied using the implanted µECoG electrode arrays in three adult common marmosets under awake conditions. Motor evoked potentials were recorded through electromyographical electrodes implanted in upper limb muscles. The motor threshold was calculated through a modified maximum likelihood threshold-hunting algorithm fitted with the recorded data from marmosets. Further, a computer simulation confirmed the reliability of the algorithm. The computer simulation suggested that the modified maximum likelihood threshold-hunting algorithm enabled estimation of the motor threshold with acceptable precision. In vivo ECS mapping showed high test-retest reliability with respect to the excitability and location of the cortical forelimb motor representations. Using implanted µECoG electrode arrays and a modified motor threshold-hunting algorithm, we were able to achieve reliable motor mapping in common marmosets with the ECS system.
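The threshold-hunting step common to both reports can be sketched as a grid-search maximum-likelihood fit: model the probability of an above-criterion motor evoked potential as a sigmoid of stimulus intensity and take the candidate threshold with the highest likelihood given the stimulation history. This is a simplified stand-in for the modified Awiszus-style algorithm, with a fixed slope and hypothetical data.

```python
import numpy as np

def ml_threshold_estimate(intensities, responses, candidate_thresholds, slope=2.0):
    """Return the candidate threshold maximizing the likelihood of the observed responses,
    assuming P(response at intensity x) = logistic((x - threshold) / slope)."""
    x = np.asarray(intensities, dtype=float)
    y = np.asarray(responses, dtype=float)          # 1 = MEP above criterion, 0 = below
    best_t, best_ll = None, -np.inf
    for t in candidate_thresholds:
        p = 1.0 / (1.0 + np.exp(-(x - t) / slope))
        p = np.clip(p, 1e-9, 1.0 - 1e-9)
        ll = np.sum(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))
        if ll > best_ll:
            best_ll, best_t = ll, float(t)
    return best_t

# Hypothetical stimulation history (intensity in arbitrary stimulator units, response presence).
stims = [50, 60, 55, 52, 58, 54, 56]
resps = [0, 1, 1, 0, 1, 0, 1]
print(ml_threshold_estimate(stims, resps, candidate_thresholds=np.arange(40.0, 70.0, 0.5)))
```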
The importance of decision onset
Grinband, Jack; Ferrera, Vincent
2015-01-01
The neural mechanisms of decision making are thought to require the integration of evidence over time until a response threshold is reached. Much work suggests that response threshold can be adjusted via top-down control as a function of speed or accuracy requirements. In contrast, the time of integration onset has received less attention and is believed to be determined mostly by afferent or preprocessing delays. However, a number of influential studies over the past decade challenge this assumption and begin to paint a multifaceted view of the phenomenology of decision onset. This review highlights the challenges involved in initiating the integration of evidence at the optimal time and the potential benefits of adjusting integration onset to task demands. The review outlines behavioral and electrophysiological studies suggesting that the onset of the integration process may depend on properties of the stimulus, the task, attention, and response strategy. Most importantly, the aggregate findings in the literature suggest that integration onset may be amenable to top-down regulation, and may be adjusted much like response threshold to exert cognitive control and strategically optimize the decision process to fit immediate behavioral requirements. PMID:26609111
NASA Technical Reports Server (NTRS)
Heine, John J. (Inventor); Clarke, Laurence P. (Inventor); Deans, Stanley R. (Inventor); Stauduhar, Richard Paul (Inventor); Cullers, David Kent (Inventor)
2001-01-01
A system and method for analyzing a medical image to determine whether an abnormality is present, for example, in digital mammograms, includes the application of a wavelet expansion to a raw image to obtain subspace images of varying resolution. At least one subspace image is selected that has a resolution commensurate with a desired predetermined detection resolution range. A functional form of a probability distribution function is determined for each selected subspace image, and an optimal statistical normal image region test is determined for each selected subspace image. A threshold level for the probability distribution function is established from the optimal statistical normal image region test for each selected subspace image. A region size comprising at least one sector is defined, and an output image is created that includes a combination of all regions for each selected subspace image. Each region has a first value when the region intensity level is above the threshold and a second value when the region intensity level is below the threshold. This permits the localization of a potential abnormality within the image.
Three validation metrics for automated probabilistic image segmentation of brain tumours
Zou, Kelly H.; Wells, William M.; Kikinis, Ron; Warfield, Simon K.
2005-01-01
The validity of brain tumour segmentation is an important issue in image processing because it has a direct impact on surgical planning. We examined the segmentation accuracy based on three two-sample validation metrics against the estimated composite latent gold standard, which was derived from several experts' manual segmentations by an EM algorithm. The distribution functions of the tumour and control pixel data were parametrically assumed to be a mixture of two beta distributions with different shape parameters. We estimated the corresponding receiver operating characteristic curve, Dice similarity coefficient, and mutual information, over all possible decision thresholds. Based on each validation metric, an optimal threshold was then computed via maximization. We illustrated these methods on MR imaging data from nine brain tumour cases of three different tumour types, each consisting of a large number of pixels. The automated segmentation yielded satisfactory accuracy with varied optimal thresholds. The performances of these validation metrics were also investigated via Monte Carlo simulation. Extensions of incorporating spatial correlation structures using a Markov random field model were considered. PMID:15083482
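Of the three metrics, the Dice similarity coefficient is the simplest to reproduce; the sketch below computes it over a sweep of decision thresholds applied to a probabilistic segmentation and reports the maximizing threshold. The probability map and gold standard here are synthetic arrays, not the MR data used in the paper.

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = np.asarray(a, dtype=bool), np.asarray(b, dtype=bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def best_dice_threshold(prob_map, gold, thresholds=np.linspace(0.05, 0.95, 19)):
    """Sweep decision thresholds on a probabilistic segmentation and return the
    (threshold, Dice) pair that maximizes Dice against the gold standard."""
    return max(((t, dice(prob_map >= t, gold)) for t in thresholds), key=lambda s: s[1])

# Synthetic example: a noisy probability map around a square 'tumour'.
rng = np.random.default_rng(0)
gold = np.zeros((64, 64), dtype=bool)
gold[20:40, 20:40] = True
prob_map = np.clip(0.1 + 0.8 * gold + 0.15 * rng.standard_normal(gold.shape), 0.0, 1.0)
print(best_dice_threshold(prob_map, gold))
```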
NASA Astrophysics Data System (ADS)
Doi, Masafumi; Tokutomi, Tsukasa; Hachiya, Shogo; Kobayashi, Atsuro; Tanakamaru, Shuhei; Ning, Sheyang; Ogura Iwasaki, Tomoko; Takeuchi, Ken
2016-08-01
NAND flash memory’s reliability degrades with increasing endurance, retention-time and/or temperature. After a comprehensive evaluation of 1X nm triple-level cell (TLC) NAND flash, two highly reliable techniques are proposed. The first proposal, quick low-density parity check (Quick-LDPC), requires only one cell read in order to accurately estimate a bit-error rate (BER) that includes the effects of temperature, write and erase (W/E) cycles and retention-time. As a result, 83% read latency reduction is achieved compared to conventional AEP-LDPC. Also, W/E cycling is extended by 100% compared with conventional Bose-Chaudhuri-Hocquenghem (BCH) error-correcting code (ECC). The second proposal, dynamic threshold voltage optimization (DVO) has two parts, adaptive V Ref shift (AVS) and V TH space control (VSC). AVS reduces read error and latency by adaptively optimizing the reference voltage (V Ref) based on temperature, W/E cycles and retention-time. AVS stores the optimal V Ref’s in a table in order to enable one cell read. VSC further improves AVS by optimizing the voltage margins between V TH states. DVO reduces BER by 80%.
Cereda, Carlo W; Christensen, Søren; Campbell, Bruce Cv; Mishra, Nishant K; Mlynash, Michael; Levi, Christopher; Straka, Matus; Wintermark, Max; Bammer, Roland; Albers, Gregory W; Parsons, Mark W; Lansberg, Maarten G
2016-10-01
Differences in research methodology have hampered the optimization of Computed Tomography Perfusion (CTP) for identification of the ischemic core. We aim to optimize CTP core identification using a novel benchmarking tool. The benchmarking tool consists of an imaging library and a statistical analysis algorithm to evaluate the performance of CTP. The tool was used to optimize and evaluate an in-house developed CTP-software algorithm. Imaging data of 103 acute stroke patients were included in the benchmarking tool. Median time from stroke onset to CT was 185 min (IQR 180-238), and the median time between completion of CT and start of MRI was 36 min (IQR 25-79). Volumetric accuracy of the CTP-ROIs was optimal at an rCBF threshold of <38%; at this threshold, the mean difference was 0.3 ml (SD 19.8 ml), the mean absolute difference was 14.3 (SD 13.7) ml, and CTP was 67% sensitive and 87% specific for identification of DWI positive tissue voxels. The benchmarking tool can play an important role in optimizing CTP software as it provides investigators with a novel method to directly compare the performance of alternative CTP software packages. © The Author(s) 2015.
Effective sextic superpotential and B-L violation in NMSGUT
NASA Astrophysics Data System (ADS)
Aulakh, C. S.; Awasthi, R. L.; Krishna, Shri
2017-10-01
We list operators of the superpotential of the effective MSSM that emerge from the NMSGUT up to sextic degree. We give illustrative expressions for the coefficients in terms of NMSGUT parameters. We also estimate the impact of GUT scale threshold corrections on these effective operators in view of the demonstration that B violation via quartic superpotential terms can be suppressed to acceptable levels after including such corrections in the NMSGUT. We find a novel B, B-L violating quintic operator that leads to the decay mode n → e^- K^+. We also remark that the threshold corrections to the Type-I seesaw mechanism make the deviation of right-handed neutrino masses from the GUT scale more natural, while Type-II seesaw neutrino masses, which earlier tended to be utterly negligible, receive a threshold enhancement. Our results are of relevance for analysing B-L violating operator-based, sphaleron-safe baryogenesis.
Study of communications data compression methods
NASA Technical Reports Server (NTRS)
Jones, H. W.
1978-01-01
A simple monochrome conditional replenishment system was extended to higher compression and to higher motion levels, by incorporating spatially adaptive quantizers and field repeating. Conditional replenishment combines intraframe and interframe compression, and both areas are investigated. The gain of conditional replenishment depends on the fraction of the image changing, since only changed parts of the image need to be transmitted. If the transmission rate is set so that only one fourth of the image can be transmitted in each field, greater change fractions will overload the system. A computer simulation was prepared which incorporated (1) field repeat of changes, (2) a variable change threshold, (3) frame repeat for high change, and (4) two mode, variable rate Hadamard intraframe quantizers. The field repeat gives 2:1 compression in moving areas without noticeable degradation. Variable change threshold allows some flexibility in dealing with varying change rates, but the threshold variation must be limited for acceptable performance.
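The core of conditional replenishment is block-level change detection against the stored previous field: only blocks whose change exceeds a threshold are transmitted, and if the changed fraction exceeds the per-field budget (one fourth of the image in the configuration above), the system falls back to repeating. A minimal sketch of that selection step (block size, threshold, and budget are placeholders; the Hadamard quantizers and field-repeat logic are omitted):

```python
import numpy as np

def select_changed_blocks(prev_field, curr_field, block=8, change_threshold=10.0,
                          budget_fraction=0.25):
    """Return the indices of blocks whose mean absolute change exceeds the threshold,
    plus a flag indicating whether the change fraction overloads the per-field budget."""
    h, w = curr_field.shape
    changed = []
    for i in range(0, h, block):
        for j in range(0, w, block):
            diff = np.abs(curr_field[i:i+block, j:j+block].astype(float) -
                          prev_field[i:i+block, j:j+block].astype(float)).mean()
            if diff > change_threshold:
                changed.append((i, j))
    total_blocks = (h // block) * (w // block)
    overloaded = len(changed) > budget_fraction * total_blocks
    return changed, overloaded

# Hypothetical fields: only a small moving region differs between the two.
prev = np.zeros((64, 64), dtype=np.uint8)
curr = prev.copy()
curr[8:24, 8:24] = 200
blocks, overloaded = select_changed_blocks(prev, curr)
print(len(blocks), overloaded)   # 4 changed blocks, well under the 25% budget
```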
Löffler, Frank E.; Tiedje, James M.; Sanford, Robert A.
1999-01-01
Measurements of the hydrogen consumption threshold and the tracking of electrons transferred to the chlorinated electron acceptor (fe) reliably detected chlororespiratory physiology in both mixed cultures and pure cultures capable of using tetrachloroethene, cis-1,2-dichloroethene, vinyl chloride, 2-chlorophenol, 3-chlorobenzoate, 3-chloro-4-hydroxybenzoate, or 1,2-dichloropropane as an electron acceptor. Hydrogen was consumed to significantly lower threshold concentrations of less than 0.4 ppmv compared with the values obtained for the same cultures without a chlorinated compound as an electron acceptor. The fe values ranged from 0.63 to 0.7, values which are in good agreement with theoretical calculations based on the thermodynamics of reductive dechlorination as the terminal electron-accepting process. In contrast, a mixed methanogenic culture that cometabolized 3-chlorophenol exhibited a significantly lower fe value, 0.012. PMID:10473415
Navarro-Mesa, Juan L.; Juliá-Serdá, Gabriel; Ramírez-Ávila, G. Marcelo; Ravelo-García, Antonio G.
2018-01-01
Our contribution focuses on the characterization of sleep apnea from a cardiac rate point of view, using Recurrence Quantification Analysis (RQA), based on a Heart Rate Variability (HRV) feature selection process. Three parameters are crucial in RQA: those related to the embedding process (dimension and delay) and the threshold distance. There are no overall accepted parameters for the study of HRV using RQA in sleep apnea. We focus on finding an overall acceptable combination, sweeping a range of values for each of them simultaneously. Together with the commonly used RQA measures, we include features related to recurrence times, and features originating in the complex network theory. To the best of our knowledge, no author has used them all for sleep apnea previously. The best performing feature subset is entered into a Linear Discriminant classifier. The best results in the “Apnea-ECG Physionet database” and the “HuGCDN2014 database” are, according to the area under the receiver operating characteristic curve, 0.93 (Accuracy: 86.33%) and 0.86 (Accuracy: 84.18%), respectively. Our system outperforms, using a relatively small set of features, previously existing studies in the context of sleep apnea. We conclude that working with dimensions around 7–8 and delays about 4–5, and using for the threshold distance the Fixed Amount of Nearest Neighbours (FAN) method with 5% of neighbours, yield the best results. Therefore, we would recommend these reference values for future work when applying RQA to the analysis of HRV in sleep apnea. We also conclude that, together with the commonly used vertical and diagonal RQA measures, there are newly used features that contribute valuable information for apnea minutes discrimination. Therefore, they are especially interesting for characterization purposes. Using two different databases supports that the conclusions reached are potentially generalizable, and are not limited by database variability. PMID:29621264
Martín-González, Sofía; Navarro-Mesa, Juan L; Juliá-Serdá, Gabriel; Ramírez-Ávila, G Marcelo; Ravelo-García, Antonio G
2018-01-01
Our contribution focuses on the characterization of sleep apnea from a cardiac rate point of view, using Recurrence Quantification Analysis (RQA), based on a Heart Rate Variability (HRV) feature selection process. Three parameters are crucial in RQA: those related to the embedding process (dimension and delay) and the threshold distance. There are no overall accepted parameters for the study of HRV using RQA in sleep apnea. We focus on finding an overall acceptable combination, sweeping a range of values for each of them simultaneously. Together with the commonly used RQA measures, we include features related to recurrence times, and features originating in the complex network theory. To the best of our knowledge, no author has used them all for sleep apnea previously. The best performing feature subset is entered into a Linear Discriminant classifier. The best results in the "Apnea-ECG Physionet database" and the "HuGCDN2014 database" are, according to the area under the receiver operating characteristic curve, 0.93 (Accuracy: 86.33%) and 0.86 (Accuracy: 84.18%), respectively. Our system outperforms, using a relatively small set of features, previously existing studies in the context of sleep apnea. We conclude that working with dimensions around 7-8 and delays about 4-5, and using for the threshold distance the Fixed Amount of Nearest Neighbours (FAN) method with 5% of neighbours, yield the best results. Therefore, we would recommend these reference values for future work when applying RQA to the analysis of HRV in sleep apnea. We also conclude that, together with the commonly used vertical and diagonal RQA measures, there are newly used features that contribute valuable information for apnea minutes discrimination. Therefore, they are especially interesting for characterization purposes. Using two different databases supports that the conclusions reached are potentially generalizable, and are not limited by database variability.
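The recommended settings (embedding dimension around 7-8, delay around 4-5, FAN threshold keeping about 5% of neighbours) map directly onto the construction of the recurrence matrix from which the RQA measures are computed. A minimal sketch of that construction, with a synthetic RR-interval series standing in for real HRV data:

```python
import numpy as np

def recurrence_matrix_fan(series, dim=7, delay=4, neighbour_fraction=0.05):
    """Time-delay embed a series and build a recurrence matrix with the Fixed Amount of
    Nearest Neighbours (FAN) criterion: each column keeps its closest fraction of points."""
    x = np.asarray(series, dtype=float)
    n = len(x) - (dim - 1) * delay
    embedded = np.stack([x[i:i + n] for i in range(0, dim * delay, delay)], axis=1)
    distances = np.linalg.norm(embedded[:, None, :] - embedded[None, :, :], axis=2)
    k = max(1, int(neighbour_fraction * n))
    recurrence = np.zeros((n, n), dtype=bool)
    for j in range(n):
        nearest = np.argsort(distances[:, j])[:k + 1]   # the extra point is the self-distance of zero
        recurrence[nearest, j] = True
    return recurrence

# Synthetic RR-interval series (seconds): a slow oscillation plus noise.
rng = np.random.default_rng(0)
rr = 0.8 + 0.05 * np.sin(np.arange(300) / 10.0) + 0.01 * rng.standard_normal(300)
rec = recurrence_matrix_fan(rr)
print(rec.shape, rec.mean())   # recurrence rate stays near the chosen neighbour fraction
```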
A new temperature threshold detector - Application to missile monitoring
NASA Astrophysics Data System (ADS)
Coston, C. J.; Higgins, E. V.
Comprehensive thermal surveys within the case of solid propellant ballistic missile flight motors are highly desirable. For example, a problem involving motor failures due to insulator cracking at motor ignition, which took several years to solve, could have been identified immediately on the basis of a suitable thermal survey. Using conventional point measurements, such as those utilizing typical thermocouples, for such a survey on a full scale motor is not feasible because of the great number of sensors and measurements required. An alternate approach recognizes that temperatures below a threshold (which depends on the material being monitored) are acceptable, but higher temperatures exceed design margins. In this case, hot spots can be located by a grid of wire-like sensors which are sensitive to temperature above the threshold anywhere along the sensor. A new type of temperature threshold detector is being developed for flight missile use. The device consists of KNO3 separating copper and Constantan metals. Above the melting point of the KNO3, galvanic action provides a voltage output of a few tenths of a volt.
Rich Sliding Motion and Dynamics in a Filippov Plant-Disease System
NASA Astrophysics Data System (ADS)
Chen, Can; Chen, Xi
In order to reduce the spread of plant diseases and maintain the number of infected trees below an economic threshold, we choose the number of infected trees and the number of susceptible plants as the control indexes that determine whether to implement control strategies. A Filippov plant-disease model incorporating cutting off infected branches and replanting susceptible trees is then proposed. Based on the theory of Filippov systems, the sliding mode dynamics and conditions for the existence of all the possible equilibria and Lotka-Volterra cycles are presented. We find that model solutions ultimately approach the positive equilibrium that lies in the region above the infected threshold value T_I, or the periodic trajectories that lie in the region below T_I, or the pseudo-attractor E_T = (T_S, T_I), as we vary the susceptible and infected threshold values. This indicates that the plant-disease transmission is tolerable if the trajectories approach E_T = (T_S, T_I) or if the periodic trajectories lie in the region below T_I. Hence an acceptable level of the number of infected trees can be achieved when the susceptible and infected threshold values are chosen appropriately.
NASA Astrophysics Data System (ADS)
Young, B. A.; Gao, Xiaosheng; Srivatsan, T. S.
2009-10-01
In this paper we compare and contrast the crack growth rate of a nickel-base superalloy (Alloy 690) in the Pressurized Water Reactor (PWR) environment. Over the last few years, a preponderance of test data has been gathered on both Alloy 690 thick plate and Alloy 690 tubing. The original model, essentially based on a small data set for thick plate, compensated for temperature, load ratio and stress-intensity range but did not compensate for the fatigue threshold of the material. As additional test data on both plate and tube product became available, the model was gradually revised to account for threshold properties. Both the original and revised models generated acceptable results for data that were above 1 × 10^-11 m/s. However, the test data at the lower growth rates were over-predicted by the non-threshold model. Since the original model did not take the fatigue threshold into account, this model predicted no operating stress below which the material would effectively cease to undergo fatigue crack growth. Because of an over-prediction of the growth rate below 1 × 10^-11 m/s, due to a combination of low stress, small crack size and long rise-time, the model in general leads to an under-prediction of the total available life of the components.
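One common way to fold a fatigue threshold into a Paris-type growth law is to drive growth with the excess of the stress-intensity range over the threshold, so the predicted rate vanishes as the range approaches the threshold. The sketch below only illustrates the qualitative difference between the non-threshold and threshold forms; the constants are arbitrary and are not the Alloy 690 model parameters.

```python
def growth_rate_no_threshold(delta_k, c=1e-12, n=3.0):
    """Paris-type law without a threshold term (growth per cycle)."""
    return c * delta_k ** n

def growth_rate_with_threshold(delta_k, delta_k_th=8.0, c=1e-12, n=3.0):
    """Threshold-corrected form: no growth is predicted at or below delta_k_th."""
    return c * max(delta_k - delta_k_th, 0.0) ** n

for dk in (6.0, 9.0, 15.0, 30.0):   # stress-intensity ranges in MPa*sqrt(m), arbitrary values
    print(dk, growth_rate_no_threshold(dk), growth_rate_with_threshold(dk))
```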
Comparative Pessimism or Optimism: Depressed Mood, Risk-Taking, Social Utility and Desirability.
Milhabet, Isabelle; Le Barbenchon, Emmanuelle; Cambon, Laurent; Molina, Guylaine
2015-03-05
Comparative optimism can be defined as a self-serving, asymmetric judgment of the future. It is often thought to be beneficial and socially accepted, whereas comparative pessimism is correlated with depression and socially rejected. Our goal was to examine the social acceptance of comparative optimism and the social rejection of comparative pessimism in two dimensions of social judgment, social desirability and social utility, considering the attributions of dysphoria and risk-taking potential (studies 2 and 3) on outlooks on the future. In three experiments, the participants assessed either one (study 1) or several (studies 2 and 3) fictional targets in two dimensions, social utility and social desirability. Targets exhibiting comparatively optimistic or pessimistic outlooks on the future were presented as non-depressed, depressed, or neither (control condition) (study 1); non-depressed or depressed (study 2); and non-depressed or in control condition (study 3). Two significant results were obtained: (1) social rejection of comparative pessimism in the social desirability dimension, which can be explained by its depressive feature; and (2) comparative optimism was socially accepted on the social utility dimension, which can be explained by the perception that comparatively optimistic individuals are potential risk-takers.
Diao, Wen-wen; Ni, Dao-feng; Li, Feng-rong; Shang, Ying-ying
2011-03-01
Auditory brainstem responses (ABR) evoked by tone bursts are an important method of hearing assessment in infants referred after hearing screening. The present study aimed to compare the thresholds of tone burst ABR with filter settings of 30 - 1500 Hz and 30 - 3000 Hz at each frequency, to characterize the ABR thresholds obtained with the two filter settings and the effect of waveform judgement, and thereby to select a more optimal frequency-specific ABR test parameter. Thresholds with filter settings of 30 - 1500 Hz and 30 - 3000 Hz in children aged 2 - 33 months were recorded by click and tone burst ABR. A total of 18 patients (8 male/10 female), 22 ears were included. The thresholds of tone burst ABR with filter settings of 30 - 3000 Hz were higher than those with filter settings of 30 - 1500 Hz. A significant difference was detected at 0.5 kHz and 2.0 kHz (t values were 2.238 and 2.217, P < 0.05); no significant difference between the two filter settings was detected for the tone-evoked ABR thresholds at the remaining frequencies. The waveform of ABR with filter settings of 30 - 1500 Hz was smoother than that with filter settings of 30 - 3000 Hz at the same stimulus intensity; the response curve of the latter showed jagged, small interfering waves. The filter setting of 30 - 1500 Hz may be a more optimal parameter for frequency-specific ABR to improve the accuracy of hearing assessment in infants.
Dowie, Jack
2004-05-01
In many health decision making situations there is a requirement that the effectiveness of interventions, usually their 'clinical' effectiveness, be established, as well as their cost-effectiveness. Often indeed this is effectively a prior requirement for their cost-effectiveness being investigated. If, however, one accepts the ethical argument for using a threshold incremental cost-effectiveness ratio (ICER) for interventions that are more effective but more costly (i.e. fall in the NE quadrant of the cost-effectiveness plane), one should apply the same decision rule in the SW quadrant, where the intervention is less effective but less costly. This implication is present in most standard treatments of cost-effectiveness analysis, including recent stochastic versions, and had gone relatively unquestioned within the discipline until the recent suggestion that the ICER threshold might be 'kinked'. A kinked threshold would, O'Brien et al. argue, better reflect the asymmetrical individual preferences found in empirical studies of consumer's willingness to pay and willingness to accept and justify different decision rules in the NE and SW quadrants. We reject the validity of such asymmetric preferences in the context of public health care decisions and consider and counter the two main 'ethical' objections that probably underlie the asymmetry in this case--the objection to 'taking away' and the objection to being required to undergo treatment that is less effective than no treatment at all. Copyright 2004 John Wiley & Sons, Ltd.
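The symmetric decision rule at issue can be written as one net-monetary-benefit test, lambda·ΔE − ΔC > 0, applied with the same threshold lambda in both the north-east and the south-west quadrants. A 'kinked' rule instead uses a larger willingness-to-accept threshold when effectiveness is lost. The sketch below contrasts the two on a hypothetical south-west-quadrant intervention; the numbers are illustrative only.

```python
def accept_symmetric(delta_e, delta_c, threshold):
    """Single-threshold rule: adopt whenever net monetary benefit is positive, in any quadrant."""
    return threshold * delta_e - delta_c > 0

def accept_kinked(delta_e, delta_c, wtp, wta):
    """Kinked rule: willingness to pay (NE quadrant) differs from willingness to accept (SW quadrant)."""
    threshold = wtp if delta_e >= 0 else wta
    return threshold * delta_e - delta_c > 0

# Hypothetical SW-quadrant intervention: 0.1 QALY less effective but 10,000 cheaper per patient.
print(accept_symmetric(delta_e=-0.1, delta_c=-10000, threshold=50000))        # True under one threshold
print(accept_kinked(delta_e=-0.1, delta_c=-10000, wtp=50000, wta=150000))     # False once WTA is inflated
```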
NASA Astrophysics Data System (ADS)
Sykes, J. F.; Kang, M.; Thomson, N. R.
2007-12-01
The TCE release from The Lockformer Company in Lisle Illinois resulted in a plume in a confined aquifer that is more than 4 km long and impacted more than 300 residential wells. Many of the wells are on the fringe of the plume and have concentrations that did not exceed 5 ppb. The settlement for the Chapter 11 bankruptcy protection of Lockformer involved the establishment of a trust fund that compensates individuals with cancers with payments being based on cancer type, estimated TCE concentration in the well and the duration of exposure to TCE. The estimation of early arrival times and hence low likelihood events is critical in the determination of the eligibility of an individual for compensation. Thus, an emphasis must be placed on the accuracy of the leading tail region in the likelihood distribution of possible arrival times at a well. The estimation of TCE arrival time, using a three-dimensional analytical solution, involved parameter estimation and uncertainty analysis. Parameters in the model included TCE source parameters, groundwater velocities, dispersivities and the TCE decay coefficient for both the confining layer and the bedrock aquifer. Numerous objective functions, which include the well-known L2-estimator, robust estimators (L1-estimators and M-estimators), penalty functions, and dead zones, were incorporated in the parameter estimation process to treat insufficiencies in both the model and observational data due to errors, biases, and limitations. The concept of equifinality was adopted and multiple maximum likelihood parameter sets were accepted if pre-defined physical criteria were met. The criteria ensured that a valid solution predicted TCE concentrations for all TCE impacted areas. Monte Carlo samples are found to be inadequate for uncertainty analysis of this case study due to its inability to find parameter sets that meet the predefined physical criteria. Successful results are achieved using a Dynamically-Dimensioned Search sampling methodology that inherently accounts for parameter correlations and does not require assumptions regarding parameter distributions. For uncertainty analysis, multiple parameter sets were obtained using a modified Cauchy's M-estimator. Penalty functions had to be incorporated into the objective function definitions to generate a sufficient number of acceptable parameter sets. The combined effect of optimization and the application of the physical criteria perform the function of behavioral thresholds by reducing anomalies and by removing parameter sets with high objective function values. The factors that are important to the creation of an uncertainty envelope for TCE arrival at wells are outlined in the work. In general, greater uncertainty appears to be present at the tails of the distribution. For a refinement of the uncertainty envelopes, the application of additional physical criteria or behavioral thresholds is recommended.
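Among the robust objective functions mentioned, the Cauchy M-estimator is representative: it grows only logarithmically in the residual, so a few grossly erroneous observations cannot dominate the calibration the way they do under least squares. A minimal sketch with synthetic residuals (the scale constant is a placeholder):

```python
import numpy as np

def least_squares_objective(residuals):
    """Sum of squared residuals; a single outlier can dominate the fit."""
    return float(np.sum(np.asarray(residuals) ** 2))

def cauchy_objective(residuals, scale=1.0):
    """Cauchy M-estimator: rho(r) = (scale^2 / 2) * log(1 + (r/scale)^2); outlier influence is bounded."""
    r = np.asarray(residuals) / scale
    return float(np.sum(0.5 * scale**2 * np.log1p(r ** 2)))

residuals = np.array([0.2, -0.5, 0.1, 8.0])   # one gross outlier, e.g. a mis-reported observation
print(least_squares_objective(residuals))      # ~64.3, dominated by the outlier
print(cauchy_objective(residuals))             # ~2.2, the outlier's contribution is damped
```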
Nair, Harish; Verma, Vasundhara R; Theodoratou, Evropi; Zgaga, Lina; Huda, Tanvir; Simões, Eric A F; Wright, Peter F; Rudan, Igor; Campbell, Harry
2011-04-13
Respiratory Syncytial Virus (RSV) is the leading cause of acute lower respiratory infections (ALRI) in children. It is estimated to cause approximately 33.8 million new episodes of ALRI in children annually, 96% of these occurring in developing countries. It is also estimated to result in about 53,000 to 199,000 deaths annually in young children. Currently there are several vaccine and immunoprophylaxis candidates against RSV in the developmental phase targeting active and passive immunization. We used a modified CHNRI methodology for setting priorities in health research investments. This was done in two stages. In Stage I, we systematically reviewed the literature related to emerging vaccines against RSV relevant to 12 criteria of interest. In Stage II, we conducted an expert opinion exercise by inviting 20 experts (leading basic scientists, international public health researchers, international policy makers and representatives of pharmaceutical companies). The policy makers and industry representatives accepted our invitation on the condition of anonymity, due to the sensitive nature of their involvement in such exercises. They answered questions from the CHNRI framework and their "collective optimism" towards each criterion was documented on a scale from 0 to 100%. In the case of candidate vaccines for active immunization of infants against RSV, the experts expressed very low levels of optimism for low product cost, affordability and low cost of development; moderate levels of optimism regarding the criteria of answerability, likelihood of efficacy, deliverability, sustainability and acceptance to end users for the interventions; and high levels of optimism regarding impact on equity and acceptance to health workers. While considering the candidate vaccines targeting pregnant women, the panel expressed low levels of optimism for low product cost, affordability, answerability and low development cost; moderate levels of optimism for likelihood of efficacy, deliverability, sustainability and impact on equity; high levels of optimism regarding acceptance to end users and health workers. The group also evaluated immunoprophylaxis against RSV using monoclonal antibodies and expressed no optimism towards low product cost; very low levels of optimism regarding deliverability, affordability, sustainability, low implementation cost and impact on equity; moderate levels of optimism against the criteria of answerability, likelihood of efficacy, acceptance to end-users and health workers; and high levels of optimism regarding low development cost. They felt that either of these vaccines would have a high impact on reducing burden of childhood ALRI due to RSV and reduce the overall childhood ALRI burden by a maximum of about 10%. Although monoclonal antibodies have proven to be effective in providing protection to high-risk infants, their introduction in resource poor settings might be limited by high cost associated with them. Candidate vaccines for active immunization of infants against RSV hold greatest promise. Introduction of a low cost vaccine against RSV would reduce the inequitable distribution of burden due to childhood ALRI and will most likely have a high impact on morbidity and mortality due to severe ALRI.
Eicher, Manuela; Ribi, Karin; Senn-Dubey, Catherine; Senn, Stefanie; Ballabeni, Pierluigi; Betticher, Daniel
2018-04-14
We developed 2 intensity levels of a complex intervention for interprofessional supportive care in cancer (IPSC-C) to facilitate resilience and reduce unmet supportive care needs. We aimed to test the feasibility, acceptability, and preliminary effectiveness of both intensity levels in routine practice. In a randomized, noncomparative phase II trial, newly diagnosed patients received either low (LI-IPSC-C) or high (HI-IPSC-C) intensity interventions. Low-intensity-interprofessional supportive care in cancer (LI-IPSC-C) consisted of 3 electronic assessments of resilience, unmet supportive care needs, mood, and coping effort over 16 weeks with an immediate feedback to clinicians including tailored intervention recommendations to facilitate resilience and supportive care. High-intensity-interprofessional supportive care in cancer (HI-IPSC-C) added 5 structured consultations (face-to-face and telephone) provided by specialized nurses. Primary outcome was a change ≥5 in resilience score on the Connor-Davidson Resilience Scale (CD-RISC). Secondary outcomes were unmet supportive care needs, mood, and coping effort. We assessed feasibility by clinician-provided tailored interventions as recommended and acceptability through qualitative interviews with clinicians and patients. In the LI-IPSC-C arm, 11 of 41, in the HI-IPSC-C arm 17 of 43, patients increased resilience scores by ≥5. Relatively more patients decreased unmet needs in HI-IPSC-C arm. Mood, in both arms, and coping effort, in HI-IPSC-C arm, improved meaningfully. Feasibility was limited for the LI-IPSC-C arm, mainly due to lack of time; acceptability was high in both arms. Neither LI-IPSC-C nor HI-IPSC-C interventions reached the desired threshold. HI-IPSC-C showed positive effects on secondary outcomes and was feasible. Resilience as measured by the CD-RISC may not be the optimal outcome measure for this intervention. Copyright © 2018 John Wiley & Sons, Ltd.
Shading correction assisted iterative cone-beam CT reconstruction
NASA Astrophysics Data System (ADS)
Yang, Chunlin; Wu, Pengwei; Gong, Shutao; Wang, Jing; Lyu, Qihui; Tang, Xiangyang; Niu, Tianye
2017-11-01
Recent advances in total variation (TV) technology enable accurate CT image reconstruction from highly under-sampled and noisy projection data. The standard iterative reconstruction algorithms, which work well in conventional CT imaging, fail to perform as expected in cone beam CT (CBCT) applications, wherein the non-ideal physics issues, including scatter and beam hardening, are more severe. These physics issues result in large areas of shading artifacts and degrade the piecewise constant property assumed in reconstructed images. To overcome this obstacle, we incorporate a shading correction scheme into low-dose CBCT reconstruction and propose a clinically acceptable and stable three-dimensional iterative reconstruction method that is referred to as the shading correction assisted iterative reconstruction. In the proposed method, we modify the TV regularization term by adding a shading compensation image to the reconstructed image to compensate for the shading artifacts while leaving the data fidelity term intact. This compensation image is generated empirically, using image segmentation and low-pass filtering, and updated in the iterative process whenever necessary. When the compensation image is determined, the objective function is minimized using the fast iterative shrinkage-thresholding algorithm accelerated on a graphic processing unit. The proposed method is evaluated using CBCT projection data of the Catphan© 600 phantom and two pelvis patients. Compared with the iterative reconstruction without shading correction, the proposed method reduces the overall CT number error from around 200 HU to around 25 HU and increases the spatial uniformity by 20 percent, given the same number of sparsely sampled projections. A clinically acceptable and stable iterative reconstruction algorithm for CBCT is proposed in this paper. Unlike existing algorithms, this algorithm incorporates a shading correction scheme into the low-dose CBCT reconstruction and achieves a more stable optimization path and a more clinically acceptable reconstructed image. The proposed method does not rely on prior information and is thus practically attractive for low-dose CBCT imaging in the clinic.
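To make the modification concrete, the following is a minimal sketch of the idea on a toy 2D problem: the data fidelity term is left untouched while the TV penalty acts on the sum of the reconstruction and a shading-compensation image. The identity forward model, the hand-made compensation image, and the plain (sub)gradient descent loop are all assumptions for illustration; the abstract's method instead derives the compensation image from the current reconstruction by segmentation and low-pass filtering and minimizes the objective with GPU-accelerated FISTA.

```python
import numpy as np

def tv_grad(u, eps=1e-8):
    """Approximate (sub)gradient of the isotropic total variation of a 2D image."""
    gx = np.diff(u, axis=1, append=u[:, -1:])
    gy = np.diff(u, axis=0, append=u[-1:, :])
    norm = np.sqrt(gx ** 2 + gy ** 2 + eps)
    px, py = gx / norm, gy / norm
    div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
    return -div

def reconstruct(A, b, c, shape, lam=0.1, step=0.5, n_iter=200):
    """Minimize ||A x - b||^2 + lam * TV(x + c) by gradient descent.

    The compensation image c absorbs the shading so that x + c, rather than
    x alone, is pushed toward piecewise constancy; the data fidelity term is
    left intact."""
    x = np.zeros(shape)
    for _ in range(n_iter):
        grad_fid = (A.T @ (A @ x.ravel() - b)).reshape(shape)
        x -= step * (grad_fid + lam * tv_grad(x + c))
    return x

# Toy example: the "measured" data contain the object plus a smooth shading field.
rng = np.random.default_rng(0)
n = 32
truth = np.zeros((n, n)); truth[8:24, 8:24] = 1.0
shading = 0.3 * np.linspace(0, 1, n)[None, :] * np.ones((n, n))
A = np.eye(n * n)                                   # identity forward model (illustration only)
b = (truth + shading).ravel() + 0.05 * rng.standard_normal(n * n)
c = -shading                                        # hypothetical compensation image
x_rec = reconstruct(A, b, c, (n, n))
print("MAE of shading-compensated image vs. truth:",
      round(float(np.abs(x_rec + c - truth).mean()), 4))
```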
ERIC Educational Resources Information Center
Oberle, Eva; Schonert-Reichl, Kimberly A.; Thomson, Kimberly C.
2010-01-01
Past studies have investigated relationships between peer acceptance and peer-rated social behaviors. However, relatively little is known about the manner in which indices of well-being such as optimism and positive affect may predict peer acceptance above and beyond peer ratings of antisocial and prosocial behaviors. Early adolescence--roughly…
Weather and place-based human behavior: recreational preferences and sensitivity
NASA Astrophysics Data System (ADS)
de Freitas, C. R.
2015-01-01
This study examines the links between biometeorological variables and the behavior of beach recreationists along with their rating of overall weather conditions. To identify and describe significance of on-site atmospheric conditions, two separate forms of response are examined. The first is sensory perception of the immediate atmospheric surround expressed verbally, which was the subject of earlier work. In the research reported here, on-site observations of behavior that reflect the effects of weather and climate are examined. By employing, independently, separate indicators of on-site experience, the reliability of each is examined and interpreted and apparent threshold conditions verified. The study site is King's Beach located on the coast of Queensland, Australia. On-site observations of atmospheric variables and beach user behavior are made for the daylight hours of 45 days spread over a 12-month period. The results show that behavioral data provide reliable and meaningful indications of the significance of the atmospheric environment for leisure. Atmospheric conditions within the zone of acceptability are those that the beach users can readily cope with or modify by a range of minor behavioral adjustments. Optimal weather conditions appear to be those requiring no specific behavioral adjustment. Attendance levels reflect only the outer limits of acceptability of the meteorological environment, while duration of visit enables calibration of levels of approval in so far as it reflects rating of on-site weather within a broad zone of tolerance. In a broad theoretical sense, the results add to an understanding of the relationship between weather and human behavior. This information is potentially useful in effective tourism management and planning.
Co-extrusion of food grains-banana pulp for nutritious snacks: optimization of process variables.
Mridula, D; Sethi, Swati; Tushir, Surya; Bhadwal, Sheetal; Gupta, R K; Nanda, S K
2017-08-01
The present study was undertaken to optimize the process conditions for the development of nutritious expanded snacks based on food grains (maize, defatted soy flour, sesame seed) and banana using extrusion processing. Experiments were designed using a Box-Behnken design with banana pulp (8-24 g), screw speed (300-350 rpm) and feed moisture (14-16% w.b.). Seven responses, viz. expansion ratio (ER), bulk density (BD), water absorption index (WAI), protein, minerals, iron and sensory acceptability, were considered for optimizing the independent parameters. ER, BD, WAI, protein content, total minerals, iron content, and overall acceptability ranged 2.69-3.36, 153.43-238.83 kg/m3, 4.56-4.88 g/g, 15.19-15.52%, 2.06-2.27%, 4.39-4.67 mg/100 g (w.b.) and 6.76-7.36, respectively. ER was significantly affected by all three process variables, while BD was influenced by banana pulp and screw speed only. The studied process variables did not affect colour quality, except for the 'a' value, which was affected by banana pulp and screw speed. Banana pulp had a positive correlation with water solubility index, total minerals and iron content, and a negative correlation with WAI, protein and overall acceptability. Based upon multiple response analysis, the optimized conditions were 8 g banana pulp, 350 rpm screw speed and 14% feed moisture, giving protein, calorie, iron content and overall sensory acceptability values of 15.46%, 401 kcal/100 g, 4.48 mg/100 g and 7.6, respectively.
Succession of hide–seek and pursuit–evasion at heterogeneous locations
Gal, Shmuel; Casas, Jérôme
2014-01-01
Many interactions between searching agents and their elusive targets are composed of a succession of steps, whether in the context of immune systems, predation or counterterrorism. In the simplest case, a two-step process starts with a search-and-hide phase, also called a hide-and-seek phase, followed by a round of pursuit–escape. Our aim is to link these two processes, usually analysed separately and with different models, in a single game theory context. We define a matrix game in which a searcher looks only once each at a fixed number of discrete locations, searching for a hider, which can escape with varying probabilities according to its location. The value of the game is the overall probability of capture after k looks. The optimal search and hide strategies are described. If a searcher looks only once into any of the locations, an optimal hider chooses its hiding place so as to make all locations equally attractive. This strategy remains optimal as long as the number of looks is below an easily calculated threshold; however, above this threshold, the optimal position for the hider is where it has the highest probability of escaping once spotted. PMID:24621817
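The single-look equalization result is easy to verify numerically. The sketch below uses made-up per-location capture probabilities and is only an illustrative check of the argument, not the authors' model or code: a hider who weights each location inversely to its capture probability makes every pure one-look search strategy equally attractive, and the resulting game value is the reciprocal of the summed reciprocal capture probabilities.

```python
import numpy as np

# Hypothetical probability of capture at each location, given that the
# searcher looks there (1 minus that location's escape probability).
capture = np.array([0.9, 0.6, 0.3, 0.75])

# Hider mixed strategy that equalizes h_i * capture_i across locations.
hider = (1.0 / capture) / np.sum(1.0 / capture)

# Expected capture probability against every pure one-look search strategy.
values = hider * capture
print("capture probability per searched location:", np.round(values, 4))
print("game value for one look:", round(1.0 / np.sum(1.0 / capture), 4))
```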
Acceptance criteria for urban dispersion model evaluation
NASA Astrophysics Data System (ADS)
Hanna, Steven; Chang, Joseph
2012-05-01
The authors suggested acceptance criteria for rural dispersion models' performance measures in this journal in 2004. The current paper suggests modified values of acceptance criteria for urban applications and tests them with tracer data from four urban field experiments. For the arc-maximum concentrations, the fractional bias should have a magnitude <0.67 (i.e., the relative mean bias is less than a factor of 2); the normalized mean-square error should be <6 (i.e., the random scatter is less than about 2.4 times the mean); and the fraction of predictions that are within a factor of two of the observations (FAC2) should be >0.3. For all data paired in space, for which a threshold concentration must always be defined, the normalized absolute difference should be <0.50, when the threshold is three times the instrument's limit of quantification (LOQ). An overall criterion is then applied that the total set of acceptance criteria should be satisfied in at least half of the field experiments. These acceptance criteria are applied to evaluations of the US Department of Defense's Joint Effects Model (JEM) with tracer data from US urban field experiments in Salt Lake City (U2000), Oklahoma City (JU2003), and Manhattan (MSG05 and MID05). JEM includes the SCIPUFF dispersion model with the urban canopy option and the urban dispersion model (UDM) option. In each set of evaluations, three or four likely options are tested for meteorological inputs (e.g., a local building top wind speed, the closest National Weather Service airport observations, or outputs from numerical weather prediction models). It is found that, due to large natural variability in the urban data, there is not a large difference between the performance measures for the two model options and the three or four meteorological input options. The more detailed UDM and the state-of-the-art numerical weather models do provide a slight improvement over the other options. The proposed urban dispersion model acceptance criteria are satisfied at over half of the field experiments.
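The quoted performance measures are simple statistics of paired observed and predicted concentrations, so the acceptance test can be expressed in a few lines. The sketch below uses standard textbook-style definitions of the measures and made-up data; it is an illustration of the criteria, not the authors' evaluation code.

```python
import numpy as np

def performance_measures(obs, pred):
    """Fractional bias, normalized mean-square error, FAC2 and NAD."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    fb = (obs.mean() - pred.mean()) / (0.5 * (obs.mean() + pred.mean()))
    nmse = ((obs - pred) ** 2).mean() / (obs.mean() * pred.mean())
    fac2 = np.mean((pred / obs >= 0.5) & (pred / obs <= 2.0))
    nad = np.abs(obs - pred).sum() / (obs.sum() + pred.sum())
    return fb, nmse, fac2, nad

obs = [1.2, 3.4, 0.8, 2.5, 5.1]     # hypothetical arc-maximum observations
pred = [0.9, 4.0, 1.1, 2.0, 3.8]    # hypothetical model predictions
fb, nmse, fac2, nad = performance_measures(obs, pred)

# Urban acceptance criteria quoted in the abstract.
print("FB   %.3f -> %s" % (fb,   "accept" if abs(fb) < 0.67 else "reject"))
print("NMSE %.3f -> %s" % (nmse, "accept" if nmse < 6 else "reject"))
print("FAC2 %.3f -> %s" % (fac2, "accept" if fac2 > 0.3 else "reject"))
print("NAD  %.3f -> %s" % (nad,  "accept" if nad < 0.50 else "reject"))
```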
Optimization of lens layout for THz signal free-space delivery
NASA Astrophysics Data System (ADS)
Yu, Jimmy; Zhou, Wen
2018-03-01
We investigate how to extend the air-space (free-space) delivery distance of a terahertz (THz) signal by using an optimized lens layout. After delivery over 129.6 cm of air space with this optimized lens layout, we achieve a BER below 1 × 10^-4 for a 10 Gb/s QPSK signal at 450 GHz. If only two lenses are employed, the BER is higher than the forward error correction (FEC) threshold at an input power of 15 dBm into the photodiode.
Student-Centredness: The Link between Transforming Students and Transforming Ourselves
ERIC Educational Resources Information Center
Blackie, Margaret A. L.; Case, Jennifer M.; Jawitz, Jeff
2010-01-01
It is widely accepted in the higher education literature that a student-centred approach is pedagogically superior to a teacher-centred approach. In this paper, we explore the notion of student-centredness as a threshold concept and the implications this might have for academic staff development. We argue that the term student-centred in the…
Chance-Constrained Guidance With Non-Convex Constraints
NASA Technical Reports Server (NTRS)
Ono, Masahiro
2011-01-01
Missions to small bodies, such as comets or asteroids, require autonomous guidance for descent to these small bodies. Such guidance is made challenging by uncertainty in the position and velocity of the spacecraft, as well as the uncertainty in the gravitational field around the small body. In addition, the requirement to avoid collision with the asteroid represents a non-convex constraint, which means that finding the optimal guidance trajectory is, in general, intractable. In this innovation, a new approach is proposed for chance-constrained optimal guidance with non-convex constraints. Chance-constrained guidance takes into account uncertainty so that the probability of collision is below a specified threshold. In this approach, a new bounding method has been developed to obtain a set of decomposed chance constraints that is a sufficient condition for the original chance constraint. The decomposition of the chance constraint enables its efficient evaluation, as well as the application of the branch and bound method. Branch and bound enables non-convex problems to be solved efficiently to global optimality. Considering the problem of finite-horizon robust optimal control of dynamic systems under Gaussian-distributed stochastic uncertainty, with state and control constraints, a discrete-time, continuous-state linear dynamics model is assumed. Gaussian-distributed stochastic uncertainty is a more natural model for exogenous disturbances such as wind gusts and turbulence than the previously studied set-bounded models. However, with stochastic uncertainty, it is often impossible to guarantee that state constraints are satisfied, because there is typically a non-zero probability of having a disturbance that is large enough to push the state out of the feasible region. An effective framework to address robustness with stochastic uncertainty is optimization with chance constraints. These require that the probability of violating the state constraints (i.e., the probability of failure) is below a user-specified bound known as the risk bound. An example problem is to drive a car to a destination as fast as possible while limiting the probability of an accident to 10^-7. This framework allows users to trade conservatism against performance by choosing the risk bound. The more risk the user accepts, the better performance they can expect.
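For intuition, the decomposition can be illustrated in the simplest one-dimensional Gaussian setting: Boole's inequality allows the joint risk bound to be split across time steps, and each resulting individual chance constraint becomes a deterministic margin on the mean state. The uniform risk allocation, the fixed per-step standard deviations, and the scipy-based implementation below are illustrative assumptions, not the bounding method of the innovation, which is designed to stay tight enough for branch and bound.

```python
import numpy as np
from scipy.stats import norm

def deterministic_margins(limit, sigmas, total_risk):
    """Tighten a state limit so that P(any x_t > limit) <= total_risk.

    By Boole's inequality, allocating risk delta_t to each step (uniformly
    here) and enforcing P(x_t > limit) <= delta_t is sufficient for the
    joint chance constraint.  For Gaussian x_t with standard deviation
    sigma_t this reduces to mean_t <= limit - Phi^{-1}(1 - delta_t) * sigma_t."""
    sigmas = np.asarray(sigmas, float)
    delta = total_risk / len(sigmas)            # uniform risk allocation
    return limit - norm.ppf(1.0 - delta) * sigmas

sigmas = [0.1, 0.2, 0.3, 0.4]                   # hypothetical per-step state stds
margins = deterministic_margins(limit=1.0, sigmas=sigmas, total_risk=1e-3)
print("tightened limits on the mean state per step:", np.round(margins, 3))
```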
How to Assess the Value of Medicines?
Simoens, Steven
2010-01-01
This study aims to discuss approaches to assessing the value of medicines. Economic evaluation assesses value by means of the incremental cost-effectiveness ratio (ICER). Health is maximized by selecting medicines with increasing ICERs until the budget is exhausted. The budget size determines the value of the threshold ICER and vice versa. Alternatively, the threshold value can be inferred from pricing/reimbursement decisions, although such values vary between countries. Threshold values derived from the value-of-life literature depend on the technique used. The World Health Organization has proposed a threshold value tied to the national GDP. As decision makers may wish to consider multiple criteria, variable threshold values and weighted ICERs have been suggested. Other approaches (i.e., replacement approach, program budgeting and marginal analysis) have focused on improving resource allocation, rather than maximizing health subject to a budget constraint. Alternatively, the generalized optimization framework and multi-criteria decision analysis make it possible to consider other criteria in addition to value. PMID:21607066
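The allocation rule sketched in the abstract (adopt medicines in order of increasing ICER until the budget is exhausted, with the ICER of the last adoptable medicine acting as the implied threshold) can be written out directly. The numbers and the assumption of a fixed budget impact per medicine below are made up for illustration.

```python
# Hypothetical medicines: (name, ICER in cost per QALY, budget impact if adopted).
medicines = [
    ("A", 12_000, 4_000_000),
    ("B", 30_000, 2_500_000),
    ("C", 55_000, 6_000_000),
    ("D", 80_000, 1_000_000),
]

budget = 8_000_000
adopted, spent, threshold_icer = [], 0, None
for name, icer, cost in sorted(medicines, key=lambda m: m[1]):
    if spent + cost > budget:           # budget exhausted: stop adopting
        break
    adopted.append(name)
    spent += cost
    threshold_icer = icer               # implied threshold ICER given this budget
print("adopted:", adopted, "| spent:", spent, "| implied threshold ICER:", threshold_icer)
```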
A derivation of the stable cavitation threshold accounting for bubble-bubble interactions.
Guédra, Matthieu; Cornu, Corentin; Inserra, Claude
2017-09-01
The subharmonic emission of sound coming from the nonlinear response of a bubble population is the most used indicator for stable cavitation. When driven at twice their resonance frequency, bubbles can exhibit subharmonic spherical oscillations if the acoustic pressure amplitude exceeds a threshold value. Although various theoretical derivations exist for the subharmonic emission by free or coated bubbles, they all rest on the single bubble model. In this paper, we propose an analytical expression of the subharmonic threshold for interacting bubbles in a homogeneous, monodisperse cloud. This theory predicts a shift of the subharmonic resonance frequency and a decrease of the corresponding pressure threshold due to the interactions. For a given sonication frequency, these results show that an optimal value of the interaction strength (i.e. the number density of bubbles) can be found for which the subharmonic threshold is minimum, which is consistent with recently published experiments conducted on ultrasound contrast agents. Copyright © 2017 Elsevier B.V. All rights reserved.
Agathian, G; Semwal, A D; Sharma, G K
2015-07-01
The aim of the experiment was to optimize barrel temperature (122 to 178 ± 0.5 °C) and red kidney bean flour percentage (KBF) (12 to 68 ± 0.5%) based on physical properties of the extrudates such as flash-off percentage, water absorption index (WAI), water solubility index (WSI), bulk density (BD), radial expansion ratio (RER) and overall acceptability (OAA), using a single-screw extruder. The study was carried out with a central composite rotatable design (CCRD) using response surface methodology (RSM), and the feed moisture content was kept constant at 16.0 ± 0.5% throughout the experiments. Mathematical models for the various responses were found to fit significantly (P < 0.05) for prediction. Optimization of the experimental conditions was carried out using a numerical optimization technique; the optimum barrel temperatures and kidney bean flour percentage were 120 °C (T1) and 142.62 °C (T2 = T3) and 20%, respectively, with a desirability value of 0.909. Experiments were carried out using the predicted values and verified using a t-test and the coefficient of variation percentage. The extruded snack prepared with rice flour (80%) and kidney bean flour (20%) at the optimized conditions was accepted by the taste panellists, and incorporation of more than 20% KBF was found to decrease the overall acceptability score.
Transport temperatures observed during the commercial transportation of animals.
Fiore, Gianluca; Hofherr, Johann; Natale, Fabrizio; Mainetti, Sergio; Ruotolo, Espedito
2012-01-01
Current temperature standards and those proposed by the European Food Safety Authority (EFSA) were compared with the actual practices of commercial transport in the European Union. Temperature and humidity records recorded for a year on 21 vehicles over 905 journeys were analysed. Differences in temperature and humidity recorded by sensors at four different positions in the vehicles exceeded 10°C between the highest and lowest temperatures in nearly 7% of cases. The number and position of temperature sensors are important to ensure the correct representation of temperature conditions in the different parts of a vehicle. For all journeys and all animal categories, a relatively high percentage of beyond threshold temperatures can be observed in relation to the temperature limits of 30°C and 5°C. Most recorded temperature values lie within the accepted tolerance of ±5°C stipulated in European Community Regulation (EC) 1/2005. The temperature thresholds proposed by EFSA would result in a higher percentage of non-compliant conditions which are more pronounced at the lower threshold, compared to the thresholds laid down in Regulation (EC) 1/2005. With respect to the different animal categories, the non-compliant temperature occurrences were more frequent in pigs and sheep, in particular with regard to the thresholds proposed by EFSA.
Rational Design of Plasmonic Nanoparticles for Enhanced Cavitation and Cell Perforation.
Lachaine, Rémi; Boutopoulos, Christos; Lajoie, Pierre-Yves; Boulais, Étienne; Meunier, Michel
2016-05-11
Metallic nanoparticles are routinely used as nanoscale antennas capable of absorbing and converting photon energy with subwavelength resolution. Many applications, notably in nanomedicine and nanobiotechnology, benefit from the enhanced optical properties of these materials, which can be exploited to image, damage, or destroy targeted cells and subcellular structures with unprecedented precision. Modern inorganic chemistry enables the synthesis of a large library of nanoparticles with an increasing variety of shapes, compositions, and optical characteristics. However, identifying and tailoring nanoparticle morphology to specific applications remains challenging and limits the development of efficient nanoplasmonic technologies. In this work, we report a strategy for the rational design of gold plasmonic nanoshells (AuNS) for efficient ultrafast laser-based nanoscale bubble generation and cell membrane perforation, which constitutes one of the most crucial challenges toward the development of effective gene therapy treatments. We develop an in silico rational design framework that we use to tune AuNS morphology to simultaneously optimize for the reduction of the cavitation threshold while preserving the particle structural integrity. Our optimization procedure yields optimal AuNS that are slightly detuned from their plasmonic resonance conditions, with an optical breakdown threshold 30% lower than randomly selected AuNS and 13% lower than similarly optimized gold nanoparticles (AuNP). This design strategy is validated using time-resolved bubble spectroscopy, shadowgraphy imaging and electron microscopy, which confirm the particle structural integrity and a reduction of 51% in the cavitation threshold relative to optimal AuNP. Rationally designed AuNS are finally used to perforate cancer cells with an efficiency of 61%, using 33% less energy compared to AuNP, which demonstrates that our rational design framework is readily transferable to a cell environment. The methodology developed here thus provides a general strategy for the systematic design of nanoparticles for nanomedical applications and should be broadly applicable to bioimaging and cell nanosurgery.
NASA Astrophysics Data System (ADS)
Song, Chen; Zhong-Cheng, Wu; Hong, Lv
2018-03-01
Building energy forecasting plays an important role in energy management and planning. Using a mind evolutionary algorithm to find optimal network weights and thresholds for the BP neural network can overcome the tendency of BP training to become trapped in a local minimum. The optimized network is used both for time-series prediction and for same-month forecasting, yielding two predictive values. These two predictive values are then fed into a neural network to obtain the final forecast value. The effectiveness of the method was verified by experiments with the energy consumption of three buildings in Hefei.
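The overall pipeline (search over initial weights and thresholds with an evolutionary method, then train the BP network from the best starting point) can be sketched generically. The toy series, the tiny network, and the plain random-population search standing in for the mind evolutionary algorithm are all assumptions for illustration; the study's Hefei data and algorithmic details are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy monthly "energy consumption" series: trend plus seasonality plus noise.
t = np.arange(120, dtype=float)
y = 50 + 0.2 * t + 10 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 1, t.size)
y = (y - y.mean()) / y.std()                      # standardize the target
X = np.column_stack([t / t.max(),
                     np.sin(2 * np.pi * t / 12),
                     np.cos(2 * np.pi * t / 12)])

def init(n_in, n_hidden):
    """Random initial weights and biases (the quantities the evolutionary search tunes)."""
    return [rng.normal(0, 0.5, (n_in, n_hidden)), np.zeros(n_hidden),
            rng.normal(0, 0.5, n_hidden), 0.0]

def forward(params, X):
    W1, b1, w2, b2 = params
    h = np.tanh(X @ W1 + b1)
    return h @ w2 + b2, h

def mse(params, X, y):
    return np.mean((forward(params, X)[0] - y) ** 2)

def train_bp(params, X, y, lr=0.1, epochs=500):
    """Plain back-propagation (gradient descent) from the given initial weights."""
    W1, b1, w2, b2 = [p.copy() if hasattr(p, "copy") else p for p in params]
    for _ in range(epochs):
        pred, h = forward([W1, b1, w2, b2], X)
        err = (pred - y) / len(y)
        w2_grad, b2_grad = h.T @ err, err.sum()
        dh = np.outer(err, w2) * (1 - h ** 2)
        W1 -= lr * (X.T @ dh); b1 -= lr * dh.sum(axis=0)
        w2 -= lr * w2_grad;    b2 -= lr * b2_grad
    return [W1, b1, w2, b2]

# Evolutionary-style search over initializations: briefly train each candidate,
# keep the one with the lowest error, then train it fully with BP.
best, best_err = None, np.inf
for _ in range(20):
    cand = init(X.shape[1], 8)
    err = mse(train_bp(cand, X, y, epochs=50), X, y)
    if err < best_err:
        best, best_err = cand, err
model = train_bp(best, X, y, epochs=2000)
print("final training MSE (standardized units):", round(mse(model, X, y), 4))
```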
Murphy, Colin T; Galloway, Thomas J; Handorf, Elizabeth A; Egleston, Brian L; Wang, Lora S; Mehra, Ranee; Flieder, Douglas B; Ridge, John A
2016-01-10
To estimate the overall survival (OS) impact from increasing time to treatment initiation (TTI) for patients with head and neck squamous cell carcinoma (HNSCC). Using the National Cancer Data Base (NCDB), we examined patients who received curative therapy for the following sites: oral tongue, oropharynx, larynx, and hypopharynx. TTI was the number of days from diagnosis to initiation of curative treatment. The effect of TTI on OS was determined by using Cox regression models (MVA). Recursive partitioning analysis (RPA) identified TTI thresholds via conditional inference trees to estimate the greatest differences in OS on the basis of randomly selected training and validation sets, and repeated this 1,000 times to ensure robustness of TTI thresholds. A total of 51,655 patients were included. On MVA, TTI of 61 to 90 days versus less than 30 days (hazard ratio [HR], 1.13; 95% CI, 1.08 to 1.19) independently increased mortality risk. TTI of 67 days appeared as the optimal threshold on the training RPA, statistical significance was confirmed in the validation set (P < .001), and the 67-day TTI was the optimal threshold in 54% of repeated simulations. Overall, 96% of simulations validated two optimal TTI thresholds, with ranges of 46 to 52 days and 62 to 67 days. The median OS for TTI of 46 to 52 days or fewer versus 53 to 67 days versus greater than 67 days was 71.9 months (95% CI, 70.3 to 73.5 months) versus 61 months (95% CI, 57 to 66.1 months) versus 46.6 months (95% CI, 42.8 to 50.7 months), respectively (P < .001). In the most recent year with available data (2011), 25% of patients had TTI of greater than 46 days. TTI independently affects survival. One in four patients experienced treatment delay. TTI of greater than 46 to 52 days introduced an increased risk of death that was most consistently detrimental beyond 60 days. Prolonged TTI is currently affecting survival. © 2015 by American Society of Clinical Oncology.
Melamed, N; Hiersch, L; Gabbay-Benziv, R; Bardin, R; Meizner, I; Wiznitzer, A; Yogev, Y
2015-07-01
To assess the accuracy and determine the optimal threshold of sonographic cervical length (CL) for the prediction of preterm delivery (PTD) in women with twin pregnancies presenting with threatened preterm labor (PTL). This was a retrospective study of women with twin pregnancies who presented with threatened PTL and underwent sonographic measurement of CL in a tertiary center. The accuracy of CL in predicting PTD in women with twin pregnancies was compared with that in a control group of women with singleton pregnancies. Overall, 218 women with a twin pregnancy and 1077 women with a singleton pregnancy, who presented with PTL, were included in the study. The performance of CL as a predictive test for PTD was similar in twins and singletons, as reflected by the similar correlation between CL and the examination-to-delivery interval (r, 0.30 vs 0.29; P = 0.9), the similar association of CL with risk of PTD, and the similar areas under the receiver-operating characteristics curves for differing delivery outcomes (range, 0.653-0.724 vs 0.620-0.682, respectively; P = 0.3). The optimal threshold of CL for any given target sensitivity or specificity was lower in twin than in singleton pregnancies. However, in order to achieve a negative predictive value of 95%, a higher threshold (28-30 mm) should be used in twin pregnancies. Using this twin-specific CL threshold, women with twins who present with PTL are more likely to have a positive CL test, and therefore to require subsequent interventions, than are women with singleton pregnancies with PTL (55% vs 4.2%, respectively). In women with PTL, the performance of CL as a test for the prediction of PTD is similar in twin and singleton pregnancies. However, the optimal threshold of CL for the prediction of PTD appears to be higher in twin pregnancies, mainly owing to the higher baseline risk for PTD in these pregnancies. Copyright © 2014 ISUOG. Published by John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Gariano, S. L.; Brunetti, M. T.; Iovine, G.; Melillo, M.; Peruccacci, S.; Terranova, O.; Vennari, C.; Guzzetti, F.
2015-01-01
Empirical rainfall thresholds are tools to forecast the possible occurrence of rainfall-induced shallow landslides. Accurate prediction of landslide occurrence requires reliable thresholds, which need to be properly validated before their use in operational warning systems. We exploited a catalogue of 200 rainfall conditions that have resulted in at least 223 shallow landslides in Sicily, southern Italy, in the 11-year period 2002-2011, to determine regional event duration-cumulated event rainfall (ED) thresholds for shallow landslide occurrence. We computed ED thresholds for different exceedance probability levels and determined the uncertainty associated with the thresholds using a consolidated bootstrap nonparametric technique. We further determined subregional thresholds, and we studied the role of lithology and seasonal periods in the initiation of shallow landslides in Sicily. Next, we validated the regional rainfall thresholds using 29 rainfall conditions that have resulted in 42 shallow landslides in Sicily in 2012. We based the validation on contingency tables, skill scores, and a receiver operating characteristic (ROC) analysis for thresholds at different exceedance probability levels, from 1% to 50%. Validation of rainfall thresholds is hampered by lack of information on landslide occurrence. Therefore, we considered the effects of variations in the contingencies and the skill scores caused by lack of information. Based on the results obtained, we propose a general methodology for the objective identification of a threshold that provides an optimal balance between maximization of correct predictions and minimization of incorrect predictions, including missed and false alarms. We expect that the methodology will increase the reliability of rainfall thresholds, fostering the use of validated rainfall thresholds in operational early warning systems for regional shallow landslide forecasting.
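The validation step amounts to building a contingency table for each candidate threshold and scoring it. The sketch below illustrates the idea on synthetic data, using a simple cumulated-rainfall cut value in place of an ED curve and the true skill statistic as the balancing score; the data, candidate thresholds, and choice of score are assumptions for illustration, not the catalogue or methodology of the study.

```python
import numpy as np

def contingency(pred_alarm, landslide):
    tp = np.sum(pred_alarm & landslide)       # hits
    fp = np.sum(pred_alarm & ~landslide)      # false alarms
    fn = np.sum(~pred_alarm & landslide)      # misses
    tn = np.sum(~pred_alarm & ~landslide)     # correct non-events
    return tp, fp, fn, tn

def true_skill_statistic(tp, fp, fn, tn):
    pod = tp / (tp + fn) if tp + fn else 0.0      # probability of detection
    pofd = fp / (fp + tn) if fp + tn else 0.0     # probability of false detection
    return pod - pofd

rng = np.random.default_rng(1)
# Hypothetical validation set: cumulated event rainfall and landslide occurrence.
E = rng.lognormal(mean=3.0, sigma=0.6, size=200)
landslide = rng.random(200) < np.clip((E - 15) / 60, 0, 1)

# Candidate thresholds standing in for ED curves at different exceedance probabilities.
candidates = np.percentile(E, [1, 5, 10, 20, 30, 50])
scores = [true_skill_statistic(*contingency(E > thr, landslide)) for thr in candidates]
best = candidates[int(np.argmax(scores))]
print("optimal threshold:", round(float(best), 1), "TSS:", round(float(max(scores)), 3))
```

In practice the candidates would be ED thresholds at different exceedance probabilities, and the contingencies would also be perturbed to account for missing landslide information, as discussed above.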
Roach, Shane M.; Song, Dong; Berger, Theodore W.
2012-01-01
Activity-dependent variation of neuronal thresholds for action potential (AP) generation is one of the key determinants of spike-train temporal-pattern transformations from presynaptic to postsynaptic spike trains. In this study, we model the nonlinear dynamics of the threshold variation during synaptically driven broadband intracellular activity. First, membrane potentials of single CA1 pyramidal cells were recorded under physiologically plausible broadband stimulation conditions. Second, a method was developed to measure AP thresholds from the continuous recordings of membrane potentials. It involves measuring the turning points of APs by analyzing the third-order derivatives of the membrane potentials. Four stimulation paradigms with different temporal patterns were applied to validate this method by comparing the measured AP turning points and the actual AP thresholds estimated with varying stimulation intensities. Results show that the AP turning points provide consistent measurement of the AP thresholds, except for a constant offset. It indicates that 1) the variation of AP turning points represents the nonlinearities of threshold dynamics; and 2) an optimization of the constant offset is required to achieve accurate spike prediction. Third, a nonlinear dynamical third-order Volterra model was built to describe the relations between the threshold dynamics and the AP activities. Results show that the model can predict threshold accurately based on the preceding APs. Finally, the dynamic threshold model was integrated into a previously developed single neuron model and resulted in a 33% improvement in spike prediction. PMID:22156947
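The turning-point measurement lends itself to a compact illustration: take three numerical derivatives of the membrane potential and look for the peak of the third derivative shortly before each action potential. The synthetic ramp-plus-spike trace, the fixed search window, and the simple argmax peak-picking below are illustrative assumptions rather than the authors' recordings or analysis code, and the constant offset discussed above is not modeled.

```python
import numpy as np

def ap_turning_points(v, dt, peak_indices, window=100):
    """Estimate each AP turning point as the time of the maximum of the
    third derivative of the membrane potential shortly before the AP peak."""
    d3 = np.gradient(np.gradient(np.gradient(v, dt), dt), dt)
    turning = []
    for p in peak_indices:
        start = max(0, p - window)
        turning.append(start + int(np.argmax(d3[start:p])))
    return np.array(turning)

# Synthetic trace: slow depolarizing ramp plus one stylized spike (illustrative only).
dt = 1e-4                                  # 10 kHz sampling, seconds
t = np.arange(0, 0.1, dt)
v = -65.0 + 10.0 * t / t.max()             # slow ramp from -65 mV
v += 90.0 * np.exp(-((t - 0.06) ** 2) / (2 * 0.0005 ** 2))   # spike waveform

peaks = [int(np.argmax(v))]
turn = ap_turning_points(v, dt, peaks)
print("estimated threshold voltage (mV):", round(float(v[turn[0]]), 2))
```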
High power single mode 980 nm AlGaInAs/AlGaAs quantum well lasers with a very low threshold current
NASA Astrophysics Data System (ADS)
Zhen, Dong; Cuiluan, Wang; Hongqi, Jing; Suping, Liu; Xiaoyu, Ma
2013-11-01
To achieve a low threshold current as well as high single-mode output power, a graded index separate confinement heterostructure (GRIN-SCH) AlGaInAs/AlGaAs quantum well laser with an optimized ridge waveguide was fabricated. The threshold current was reduced to 8 mA. An output power of 76 mW was achieved at 100 mA current at room temperature, with a slope efficiency of 0.83 W/A and a horizontal divergence angle of 6.3°. The maximum single-mode output power of the device reached 450 mW.
Reconstruction of Sensory Stimuli Encoded with Integrate-and-Fire Neurons with Random Thresholds
Lazar, Aurel A.; Pnevmatikakis, Eftychios A.
2013-01-01
We present a general approach to the reconstruction of sensory stimuli encoded with leaky integrate-and-fire neurons with random thresholds. The stimuli are modeled as elements of a Reproducing Kernel Hilbert Space. The reconstruction is based on finding a stimulus that minimizes a regularized quadratic optimality criterion. We discuss in detail the reconstruction of sensory stimuli modeled as absolutely continuous functions as well as stimuli with absolutely continuous first-order derivatives. Reconstruction results are presented for stimuli encoded with single as well as a population of neurons. Examples are given that demonstrate the performance of the reconstruction algorithms as a function of threshold variability. PMID:24077610
NASA Astrophysics Data System (ADS)
Michels, François; Mazzoni, Federico; Becucci, Maurizio; Müller-Dethlefs, Klaus
2017-10-01
An improved detection scheme is presented for threshold ionization spectroscopy with simultaneous recording of the Zero Electron Kinetic Energy (ZEKE) and Mass Analysed Threshold Ionisation (MATI) signals. The objective is to obtain accurate dissociation energies for larger molecular clusters by simultaneously detecting the fragment and parent ion MATI signals with identical transmission. The scheme preserves an optimal ZEKE spectral resolution together with excellent separation of the spontaneous ion and MATI signals in the time-of-flight mass spectrum. The resulting improvement in sensitivity will allow for the determination of dissociation energies in clusters with substantial mass difference between parent and daughter ions.
Anaerobic threshold, is it a magic number to determine fitness for surgery?
Older, Paul
2013-02-21
The use of cardiopulmonary exercise testing (CPET) to evaluate cardiac and respiratory function was pioneered as part of preoperative assessment in the mid 1990s. Surgical procedures have changed since then. The patient population may have aged; however, the physiology has remained the same. The use of an accurate physiological evaluation remains as germane today as it was then. Certainly no 'magic' is involved. The author recognizes that not everyone accepts the classical theories of the anaerobic threshold (AT) and that there is some discussion around lactate and exercise. The article looks at aerobic capacity as an important predictor of perioperative mortality and also looks at some aspects of CPET relative to surgical risk evaluation.
77 FR 65506 - Airworthiness Directives; The Boeing Company Airplanes
Federal Register 2010, 2011, 2012, 2013, 2014
2012-10-29
...We propose to supersede an existing airworthiness directive (AD) that applies to certain The Boeing Company Model 757-200 and - 200PF series airplanes. The existing AD currently requires modification of the nacelle strut and wing structure, and repair of any damage found during the modification. Since we issued that AD, a compliance time error involving the optional threshold formula was discovered, which could allow an airplane to exceed the acceptable compliance time for addressing the unsafe condition. This proposed AD would specify a maximum compliance time limit that overrides the optional threshold formula results. We are proposing this AD to prevent fatigue cracking in primary strut structure and consequent reduced structural integrity of the strut.
NASA Technical Reports Server (NTRS)
Olorenshaw, Lex; Trawick, David
1991-01-01
The purpose was to develop a speech recognition system able to detect speech that is pronounced incorrectly, given that the text of the spoken speech is known to the recognizer. Better mechanisms are provided for using speech recognition in a literacy tutor application. Using a combination of scoring normalization techniques and cheater-mode decoding, a reasonable acceptance/rejection threshold was established. In continuous speech, the system was shown to provide above 80 percent correct acceptance of words, while correctly rejecting over 80 percent of incorrectly pronounced words.
Koplan, Bruce A; Gilligan, David M; Nguyen, Luc S; Lau, Theodore K; Thackeray, Lisa M; Berg, Kellie Chase
2008-11-01
An automatic capture (AC) algorithm adjusts ventricular pacing output to capture the ventricle while optimizing output to 0.5 V above threshold. AC maintains this output and confirms capture on a beat-to-beat basis in bipolar and unipolar pacing and sensing. The aim was to assess the AC algorithm and its impact on device longevity. Patients implanted with a pacemaker were randomized 1:1 to have the AC feature on or off for 12 months. Two threshold tests were conducted at each visit: an automatic threshold test and a manual threshold test. Average ventricular voltage output and projected device longevity were compared between AC on and off using nonparametric tests. Nine hundred ten patients were enrolled and underwent device implantation. Average ventricular voltage output was 1.6 V for the AC on arm (n = 444) and 3.1 V for the AC off arm (n = 446) (P < 0.001). Projected device longevity was 10.3 years for AC on and 8.9 years for AC off (P < 0.0001), or a 16% increase in longevity for AC on. The proportion of patients in whom there was a difference between automatic threshold and manual threshold of
2012-05-01
undergo wavefront-guided (WFG) photorefractive keratectomy (PRK), WFG laser in situ keratomileusis (LASIK), wavefront optimized (WFO) PRK or WFO...Military, Refractive Surgery, PRK, LASIK, Night Vision, Wavefront Optimized, Wavefront Guided, Visual Performance, Quality of Vision, Outcomes...military. In a prospective, randomized treatment trial we will enroll 224 nearsighted soldiers to WFG photorefractive keratectomy (PRK), WFG LASIK, WFO PRK
2013-05-01
and Sensors Directorate. • Study participants and physicians select treatment: PRK or LASIK. WFG vs. WFO treatment modality is randomized. The...to undergo wavefront-guided (WFG) photorefractive keratectomy (PRK), WFG laser in situ keratomileusis (LASIK), wavefront optimized (WFO) PRK or WFO...TERMS Military, Refractive Surgery, PRK, LASIK, Night Vision, Wavefront Optimized, Wavefront Guided, Visual Performance, Quality of Vision, Outcomes
Blind One-Bit Compressive Sampling
2013-01-17
[14] Q. Li, C. A. Micchelli, L. Shen, and Y. Xu, A proximity algorithm accelerated by Gauss-Seidel iterations for L1/TV denoising models, Inverse...methods for nonconvex optimization on the unit sphere and has provable convergence guarantees. Binary iterative hard thresholding (BIHT) algorithms were... Convergence analysis of the algorithm is presented. Our approach is to obtain a sequence of optimization problems by successively approximating the ℓ0
Ferrari, Matthew J.; Fram, Miranda S.; Belitz, Kenneth
2008-01-01
Ground-water quality in the approximately 950 square kilometer (370 square mile) Central Sierra study unit (CENSIE) was investigated in May 2006 as part of the Priority Basin Assessment project of the Groundwater Ambient Monitoring and Assessment (GAMA) Program. The GAMA Priority Basin Assessment project was developed in response to the Ground-Water Quality Monitoring Act of 2001, and is being conducted by the U.S. Geological Survey (USGS) in cooperation with the California State Water Resources Control Board (SWRCB). This study was designed to provide a spatially unbiased assessment of the quality of raw ground water used for drinking-water supplies within CENSIE, and to facilitate statistically consistent comparisons of ground-water quality throughout California. Samples were collected from thirty wells in Madera County. Twenty-seven of the wells were selected using a spatially distributed, randomized grid-based method to provide statistical representation of the study area (grid wells), and three were selected to aid in evaluation of specific water-quality issues (understanding wells). Ground-water samples were analyzed for a large number of synthetic organic constituents (volatile organic compounds [VOCs], gasoline oxygenates and degradates, pesticides and pesticide degradates), constituents of special interest (N-nitrosodimethylamine, perchlorate, and 1,2,3-trichloropropane), naturally occurring inorganic constituents (nutrients, major and minor ions, and trace elements), radioactive constituents, and microbial indicators. Naturally occurring isotopes (tritium, carbon-14, and stable isotopes of hydrogen, oxygen, nitrogen, and carbon) and dissolved noble gases also were measured to help identify the sources and ages of the sampled ground water. In total, over 250 constituents and water-quality indicators were investigated. Quality-control samples (blanks, replicates, and samples for matrix spikes) were collected at approximately one-sixth of the wells, and the results for these samples were used to evaluate the quality of the data for the ground-water samples. Results from field blanks indicated contamination was not a noticeable source of bias in the data for ground-water samples. Differences between replicate samples were within acceptable ranges, indicating acceptably low variability. Matrix spike recoveries were within acceptable ranges for most constituents. This study did not attempt to evaluate the quality of water delivered to consumers; after withdrawal from the ground, water typically is treated, disinfected, or blended with other waters to maintain water quality. Regulatory thresholds apply to water that is served to the consumer, not to raw ground water. However, to provide some context for the results, concentrations of constituents measured in the raw ground water were compared with health-based thresholds established by the U.S. Environmental Protection Agency (USEPA) and California Department of Public Health (CDPH), and thresholds established for aesthetic concerns (Secondary Maximum Contaminant Levels, SMCL-CA) by CDPH. Any comparisons of the results of this study to drinking-water standards are therefore for illustrative purposes only and are not indicative of compliance or non-compliance with those standards. Most constituents that were detected in ground-water samples were found at concentrations below drinking-water standards or thresholds. Six constituents (fluoride, arsenic, molybdenum, uranium, gross-alpha radioactivity, and radon-222) were detected at concentrations higher than thresholds set for health-based regulatory purposes. Three additional constituents (pH, iron, and manganese) were detected at concentrations above thresholds set for aesthetic concerns. Volatile organic compounds (VOCs) and pesticides were detected in less than one-third of the samples and generally at less than one one-hundredth of a health-based threshold.
Schmitt, Stephen J.; Milby Dawson, Barbara J.; Belitz, Kenneth
2009-01-01
Groundwater quality in the approximately 1,600 square-mile Antelope Valley study unit (ANT) was investigated from January to April 2008 as part of the Priority Basin Project of the Groundwater Ambient Monitoring and Assessment (GAMA) Program. The GAMA Priority Basin Project was developed in response to the Groundwater Quality Monitoring Act of 2001, and is being conducted by the U.S. Geological Survey (USGS) in cooperation with the California State Water Resources Control Board (SWRCB). The study was designed to provide a spatially unbiased assessment of the quality of raw groundwater used for public water supplies within ANT, and to facilitate statistically consistent comparisons of groundwater quality throughout California. Samples were collected from 57 wells in Kern, Los Angeles, and San Bernardino Counties. Fifty-six of the wells were selected using a spatially distributed, randomized, grid-based method to provide statistical representation of the study area (grid wells), and one additional well was selected to aid in evaluation of specific water-quality issues (understanding well). The groundwater samples were analyzed for a large number of organic constituents (volatile organic compounds [VOCs], gasoline additives and degradates, pesticides and pesticide degradates, fumigants, and pharmaceutical compounds), constituents of special interest (perchlorate, N-nitrosodimethylamine [NDMA], and 1,2,3-trichloropropane [1,2,3-TCP]), naturally occurring inorganic constituents (nutrients, major and minor ions, and trace elements), and radioactive constituents (gross alpha and gross beta radioactivity, radium isotopes, and radon-222). Naturally occurring isotopes (strontium, tritium, carbon-14, and stable isotopes of hydrogen and oxygen in water), and dissolved noble gases also were measured to help identify the sources and ages of the sampled groundwater. In total, 239 constituents and water-quality indicators (field parameters) were investigated. Quality-control samples (blanks, replicates, and samples for matrix spikes) were collected at 12 percent of the wells, and the results for these samples were used to evaluate the quality of the data for the groundwater samples. Field blanks rarely contained detectable concentrations of any constituent, suggesting that contamination was not a noticeable source of bias in the data for the groundwater samples. Differences between replicate samples generally were within acceptable ranges, indicating acceptably low variability. Matrix spike recoveries were within acceptable ranges for most compounds. This study did not evaluate the quality of water delivered to consumers; after withdrawal from the ground, water typically is treated, disinfected, or blended with other waters to maintain water quality. Regulatory thresholds apply to water that is served to the consumer, not to raw groundwater. However, to provide some context for the results, concentrations of constituents measured in the raw groundwater were compared with regulatory and non-regulatory health-based thresholds established by the U.S. Environmental Protection Agency (USEPA) and California Department of Public Health (CDPH) and thresholds established for aesthetic concerns (secondary maximum contaminant levels, SMCL-CA) by CDPH. Comparisons between data collected for this study and drinking-water thresholds are for illustrative purposes only, and are not indicative of compliance or non-compliance with drinking water standards. 
Most constituents that were detected in groundwater samples were found at concentrations below drinking-water thresholds. Volatile organic compounds (VOCs) were detected in about one-half of the samples and pesticides detected in about one-third of the samples; all detections of these constituents were below health-based thresholds. Most detections of trace elements and nutrients in samples from ANT wells were below health-based thresholds. Exceptions include: one detection of nitrite plus nitr
Rejection Thresholds in Solid Chocolate-Flavored Compound Coating
Harwood, Meriel L.; Ziegler, Gregory R.; Hayes, John E.
2012-01-01
Classical detection thresholds do not predict liking, as they focus on the presence or absence of a sensation. Recently however, Prescott and colleagues described a new method, the rejection threshold, where a series of forced choice preference tasks are used to generate a dose-response function to determine hedonically acceptable concentrations. That is, how much is too much? To date, this approach has been used exclusively in liquid foods. Here, we determined group rejection thresholds in solid chocolate-flavored compound coating for bitterness. The influences of self-identified preferences for milk or dark chocolate, as well as eating style (chewers versus melters) on rejection thresholds were investigated. Stimuli included milk chocolate-flavored compound coating spiked with increasing amounts of sucrose octaacetate (SOA), a bitter GRAS additive. Paired preference tests (blank vs. spike) were used to determine the proportion of the group that preferred the blank. Across pairs, spiked samples were presented in ascending concentration. We were able to quantify and compare differences between two self-identified market segments. The rejection threshold for the dark chocolate preferring group was significantly higher than the milk chocolate preferring group (p = 0.01). Conversely, eating style did not affect group rejection thresholds (p = 0.14), although this may reflect the amount of chocolate given to participants. Additionally, there was no association between chocolate preference and eating style (p = 0.36). Present work supports the contention that this method can be used to examine preferences within specific market segments and potentially individual differences as they relate to ingestive behavior. PMID:22924788
Zemek, Allison; Garg, Rohit; Wong, Brian J. F.
2014-01-01
Objectives/Hypothesis Characterizing the mechanical properties of structural cartilage grafts used in rhinoplasty is valuable because softer engineered tissues are more time- and cost-efficient to manufacture. The aim of this study is to quantitatively identify the threshold mechanical stability (e.g., Young’s modulus) of columellar, L-strut, and alar cartilage replacement grafts. Study Design Descriptive, focus group survey. Methods Ten mechanical phantoms of identical size (5 × 20 × 2.3 mm) and varying stiffness (0.360 to 0.85 MPa in 0.05 MPa increments) were made from urethane. A focus group of experienced rhinoplasty surgeons (n = 25, 5 to 30 years in practice) were asked to arrange the phantoms in order of increasing stiffness. Then, they were asked to identify the minimum acceptable stiffness that would still result in favorable surgical outcomes for three clinical applications: columellar, L-strut, and lateral crural replacement grafts. Available surgeons were tested again after 1 week to evaluate intra-rater consistency. Results For each surgeon, the threshold stiffness for each clinical application differed from the threshold values derived by logistic regression by no more than 0.05 MPa (accuracy to within 10%). Specific thresholds were 0.56, 0.59, and 0.49 MPa for columellar, L-strut, and alar grafts, respectively. For comparison, human nasal septal cartilage is approximately 0.8 MPa. Conclusions There was little inter- and intra-rater variation of the identified threshold values for adequate graft stiffness. The identified threshold values will be useful for the design of tissue-engineered or semisynthetic cartilage grafts for use in structural nasal surgery. PMID:20513022
Comparing population and incident data for optimal air ambulance base locations in Norway.
Røislien, Jo; van den Berg, Pieter L; Lindner, Thomas; Zakariassen, Erik; Uleberg, Oddvar; Aardal, Karen; van Essen, J Theresia
2018-05-24
Helicopter emergency medical services are important in many health care systems. Norway has a nationwide physician manned air ambulance service servicing a country with large geographical variations in population density and incident frequencies. The aim of the study was to compare optimal air ambulance base locations using both population and incident data. We used municipality population and incident data for Norway from 2015. The 428 municipalities had a median (5-95 percentile) of 4675 (940-36,264) inhabitants and 10 (2-38) incidents. Optimal helicopter base locations were estimated using the Maximal Covering Location Problem (MCLP) optimization model, exploring the number and location of bases needed to cover various fractions of the population for time thresholds 30 and 45 min, in green field scenarios and conditioned on the existing base structure. The existing bases covered 96.90% of the population and 91.86% of the incidents for time threshold 45 min. Correlation between municipality population and incident frequencies was -0.0027, and optimal base locations varied markedly between the two data types, particularly when lowering the target time. The optimal solution using population density data put focus on the greater Oslo area, where one third of Norwegians live, while using incident data put focus on low population high incident areas, such as northern Norway and winter sport resorts. Using population density data as a proxy for incident frequency is not recommended, as the two data types lead to different optimal base locations. Lowering the target time increases the sensitivity to choice of data.
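The MCLP can be solved exactly with integer programming, but a small greedy heuristic already conveys how coverage-maximizing base placement works and why the choice of demand weights (population versus incidents) matters. Everything below (coordinates, weights, coverage radius) is synthetic, and the greedy heuristic is a stand-in for the exact optimization used in the study.

```python
import numpy as np

def greedy_mclp(cover, demand, n_bases):
    """Greedy heuristic for the Maximal Covering Location Problem.

    cover[i, j] is True if candidate base i reaches demand point j within the
    time threshold; demand[j] is its weight (population or incident count)."""
    covered = np.zeros(cover.shape[1], dtype=bool)
    chosen = []
    for _ in range(n_bases):
        gains = [np.sum(demand[cover[i] & ~covered]) if i not in chosen else -1.0
                 for i in range(cover.shape[0])]
        best = int(np.argmax(gains))
        chosen.append(best)
        covered |= cover[best]
    return chosen, np.sum(demand[covered]) / np.sum(demand)

rng = np.random.default_rng(2)
municipal_xy = rng.uniform(0, 100, size=(50, 2))     # hypothetical municipality locations
population = rng.lognormal(8, 1, size=50)            # hypothetical demand weights
sites_xy = municipal_xy[:20]                         # candidate base sites
dist = np.linalg.norm(sites_xy[:, None, :] - municipal_xy[None, :, :], axis=2)
cover = dist <= 35                                   # stand-in for a 45-min flight-time radius

bases, frac = greedy_mclp(cover, population, n_bases=3)
print("chosen bases:", bases, "| covered fraction:", round(float(frac), 3))
```

Re-running the same code with incident counts in place of population as the demand vector would, as the study found for Norway, generally select a different set of bases.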
COST-BENEFIT Analysis in Railway Noise Control
NASA Astrophysics Data System (ADS)
OERTLI, J.
2000-03-01
A method to calculate the network-wide costs of realizing different noise control possibilities and their benefits in terms of noise reduction for lineside inhabitants has been implemented in Switzerland. These studies have shown that an optimal cost distribution consists of spending 65% of the available finances on rolling stock improvement, 30% on noise control barriers and 5% on insulated windows. This mix protects 70% of the lineside population for 30% of the cost necessary to attain threshold levels for all inhabitants. This noise control strategy has been accepted by the federal traffic and environment agencies involved and will save billions of Swiss francs. The success of the calculation methodology has prompted development of a Europe-wide decision support system to the same effect. Along two freight freeways the relationship between rolling stock improvement, noise barriers, insulated windows, operational measures and track characteristics is being studied. The decision support system will allow determination of those combinations with the best cost-benefit ratios. The study is currently being undertaken as a joint venture by the railways of Switzerland, France, Germany and the Netherlands as well as the European Rail Research Institute. The results constitute part of the negotiating strategy of the railways with European and national legislators.
Toward a perceptual video-quality metric
NASA Astrophysics Data System (ADS)
Watson, Andrew B.
1998-07-01
The advent of widespread distribution of digital video creates a need for automated methods for evaluating the visual quality of digital video. This is particularly so since most digital video is compressed using lossy methods, which involve the controlled introduction of potentially visible artifacts. Compounding the problem is the bursty nature of digital video, which requires adaptive bit allocation based on visual quality metrics, and the economic need to reduce bit-rate to the lowest level that yields acceptable quality. In previous work, we have developed visual quality metrics for evaluating, controlling, and optimizing the quality of compressed still images. These metrics incorporate simplified models of human visual sensitivity to spatial and chromatic visual signals. Here I describe a new video quality metric that is an extension of these still image metrics into the time domain. Like the still image metrics, it is based on the Discrete Cosine Transform. An effort has been made to minimize the amount of memory and computation required by the metric, in order that it might be applied in the widest range of applications. To calibrate the basic sensitivity of this metric to spatial and temporal signals we have made measurements of visual thresholds for temporally varying samples of DCT quantization noise.
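The spatial part of such a metric can be sketched as a block-DCT error measure in which each frequency coefficient of the difference between reference and test frames is weighted by a sensitivity factor. The weighting matrix and test frames below are invented for illustration, and the temporal extension described in the abstract is omitted; this is not Watson's metric, only a sketch of the general construction.

```python
import numpy as np
from scipy.fft import dctn

def blockwise_dct_error(ref, test, block=8):
    """Frequency-weighted error between two frames via 8x8 block DCTs."""
    h = ref.shape[0] // block * block
    w = ref.shape[1] // block * block
    # Hypothetical sensitivity: low spatial frequencies weighted more heavily.
    u = np.arange(block)
    weight = 1.0 / (1.0 + 0.25 * (u[:, None] + u[None, :]))
    total = 0.0
    for i in range(0, h, block):
        for j in range(0, w, block):
            d = dctn(ref[i:i+block, j:j+block] - test[i:i+block, j:j+block],
                     norm="ortho")
            total += np.sum((weight * d) ** 2)
    return np.sqrt(total / (h * w))

rng = np.random.default_rng(4)
ref = rng.random((64, 64))                            # hypothetical reference frame
test = ref + 0.02 * rng.standard_normal((64, 64))     # mildly distorted frame
print("weighted DCT error:", round(float(blockwise_dct_error(ref, test)), 4))
```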
Missing value imputation strategies for metabolomics data.
Armitage, Emily Grace; Godzien, Joanna; Alonso-Herranz, Vanesa; López-Gonzálvez, Ángeles; Barbas, Coral
2015-12-01
Missing values can arise for different reasons, and depending on their origin they should be considered and dealt with in different ways. In this research, four imputation methods have been compared with respect to their effects on the normality and variance of the data, on statistical significance, and on the approximation of a suitable threshold for accepting missing data as truly missing. Additionally, the effects of different strategies for controlling the familywise error rate or the false discovery rate, and how they interact with the different missing value imputation strategies, have been evaluated. Missing values were found to affect the normality and variance of the data, and k-means nearest neighbour imputation was the best method tested for restoring these. Bonferroni correction was the best method for maximizing true positives and minimizing false positives, and it was observed that data with as little as 40% missing values could be truly missing. The range between 40 and 70% missing values was defined as a "gray area", and a strategy has therefore been proposed that balances the optimal imputation strategy (k-means nearest neighbour) against the best approximation for positioning real zeros. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
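As a rough illustration of the workflow described above, the sketch below imputes missing values with a k-nearest-neighbour imputer and then applies a Bonferroni-corrected significance threshold. The paper's "k-means nearest neighbour" variant may differ from sklearn's KNNImputer, and the simulated data matrix is purely an assumption for demonstration.

```python
# Sketch: kNN imputation followed by Bonferroni-corrected t-tests on a
# simulated metabolomics matrix. Illustrative only; not the paper's pipeline.
import numpy as np
from sklearn.impute import KNNImputer
from scipy import stats

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 20))               # 40 samples x 20 metabolites
X[rng.random(X.shape) < 0.2] = np.nan       # 20% missing values

X_imp = KNNImputer(n_neighbors=5).fit_transform(X)

group = np.repeat([0, 1], 20)               # two groups of 20 samples
pvals = np.array([stats.ttest_ind(X_imp[group == 0, j],
                                  X_imp[group == 1, j]).pvalue
                  for j in range(X_imp.shape[1])])
alpha = 0.05 / len(pvals)                   # Bonferroni-corrected threshold
print("significant features:", np.where(pvals < alpha)[0])
```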
Hagen, Wim J H; Wan, William; Briggs, John A G
2017-02-01
Cryo-electron tomography (cryoET) allows 3D structural information to be obtained from cells and other biological samples in their close-to-native state. In combination with subtomogram averaging, detailed structures of repeating features can be resolved. CryoET data is collected as a series of images of the sample from different tilt angles; this is performed by physically rotating the sample in the microscope between each image. The angles at which the images are collected, and the order in which they are collected, together are called the tilt-scheme. Here we describe a "dose-symmetric tilt-scheme" that begins at low tilt and then alternates between increasingly positive and negative tilts. This tilt-scheme maximizes the amount of high-resolution information maintained in the tomogram for subsequent subtomogram averaging, and may also be advantageous for other applications. We describe implementation of the tilt-scheme in combination with further data-collection refinements including setting thresholds on acceptable drift and improving focus accuracy. Requirements for microscope set-up are introduced, and a macro is provided which automates the application of the tilt-scheme within SerialEM. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
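A minimal sketch of the alternating tilt order described above is shown below. The increment and maximum tilt are illustrative values, and the real acquisition logic (grouping, drift and focus thresholds) lives in the SerialEM macro supplied by the authors.

```python
# Minimal sketch of a dose-symmetric tilt-scheme: start at 0 degrees and
# alternate between increasingly positive and negative tilts.
def dose_symmetric_scheme(max_tilt=60, step=3):
    tilts = [0]
    angle = step
    while angle <= max_tilt:
        tilts += [angle, -angle]
        angle += step
    return tilts

print(dose_symmetric_scheme(max_tilt=12, step=3))
# [0, 3, -3, 6, -6, 9, -9, 12, -12]
```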
Crossing the quasi-threshold manifold of a noise-driven excitable system
Zhu, Jinjie; Liu, Xianbin
2017-01-01
We consider the noise-induced escapes in an excitable system possessing a quasi-threshold manifold, along which there exists a certain point of minimal quasi-potential. In the weak noise limit, the optimal escaping path turns out to approach this particular point asymptotically, making it analogous to an ordinary saddle. Numerical simulations are performed and an elaboration on the effect of small but finite noise is given, which shows that the ridges where the prehistory probability distribution peaks are located mainly within the region where the quasi-potential increases gently. The cases allowing anisotropic noise are discussed and we found that varying the noise term in the slow variable would dramatically raise the whole level of quasi-potentials, leading to significant changes in both patterns of optimal paths and exit locations. PMID:28588411
Environmental statistics and optimal regulation.
Sivak, David A; Thomson, Matt
2014-09-01
Any organism is embedded in an environment that changes over time. The timescale and statistics of environmental change, the precision with which the organism can detect its environment, and the costs and benefits of particular protein expression levels will all affect the suitability of different strategies, such as constitutive expression or graded response, for regulating protein levels in response to environmental inputs. We propose a general framework, here specifically applied to the enzymatic regulation of metabolism in response to changing concentrations of a basic nutrient, to predict the optimal regulatory strategy given the statistics of fluctuations in the environment and in the measurement apparatus, and the costs associated with enzyme production. We use this framework to address three fundamental questions: (i) when a cell should prefer thresholding to a graded response; (ii) when there is a fitness advantage to implementing a Bayesian decision rule; and (iii) when retaining memory of the past provides a selective advantage. We specifically find that: (i) relative convexity of enzyme expression cost and benefit influences the fitness of thresholding or graded responses; (ii) intermediate levels of measurement uncertainty call for a sophisticated Bayesian decision rule; and (iii) in dynamic contexts, intermediate levels of uncertainty call for retaining memory of the past. Statistical properties of the environment, such as variability and correlation times, set optimal biochemical parameters, such as thresholds and decay rates in signaling pathways. Our framework provides a theoretical basis for interpreting molecular signal processing algorithms and a classification scheme that organizes known regulatory strategies and may help conceptualize heretofore unknown ones.
Smart sensing to drive real-time loads scheduling algorithm in a domotic architecture
NASA Astrophysics Data System (ADS)
Santamaria, Amilcare Francesco; Raimondo, Pierfrancesco; De Rango, Floriano; Vaccaro, Andrea
2014-05-01
Reducing power consumption, with its associated costs and environmental sustainability problems, has become an important concern. Automatic load control based on power consumption and usage cycles is an effective way to restrain costs. Such systems modulate electricity demand and avoid uncoordinated operation of the loads, managing them with intelligent, real-time scheduling algorithms. The goal is to coordinate a set of electrical loads to optimize energy costs and consumption under the stipulated contract terms. The proposed algorithm uses two main notions: priority-driven loads and smart-scheduling loads. Priority-driven loads can be turned off (put on stand-by) according to a user-defined priority policy if consumption exceeds a defined threshold; smart-scheduling loads, by contrast, are scheduled so that their life cycle (LC) is not interrupted, safeguarding device functions and allowing the user to operate devices freely without the risk of exceeding the power threshold. Using these two notions and taking user requirements into account, the algorithm manages load activation and deactivation, allowing loads to complete their operation cycles without exceeding the consumption threshold, and shifting them to off-peak time ranges according to the electricity tariff. This logic is inspired by industrial lean manufacturing, whose focus is to minimize power waste by optimizing the available resources.
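A minimal sketch of the priority-driven idea follows: when total demand exceeds the contractual threshold, the lowest-priority sheddable loads are put on stand-by until the threshold is respected. The load names, fields and values are illustrative assumptions, not the architecture of the proposed domotic system.

```python
# Sketch of priority-driven load shedding against a power threshold.
def shed_loads(loads, threshold):
    """loads: list of dicts with 'name', 'power' (kW), 'priority'
       (lower value = shed first), 'sheddable' and 'on' flags."""
    total = sum(l["power"] for l in loads if l["on"])
    for load in sorted(loads, key=lambda l: l["priority"]):
        if total <= threshold:
            break
        if load["on"] and load["sheddable"]:
            load["on"] = False              # put the load in stand-by
            total -= load["power"]
    return total

loads = [
    {"name": "heat pump", "power": 2.0, "priority": 3, "sheddable": True,  "on": True},
    {"name": "oven",      "power": 2.5, "priority": 5, "sheddable": False, "on": True},
    {"name": "boiler",    "power": 1.5, "priority": 1, "sheddable": True,  "on": True},
]
print(shed_loads(loads, threshold=4.0))     # boiler and heat pump shed -> 2.5 kW
```

Smart-scheduling loads would instead be shifted in time rather than interrupted, which is not modeled in this fragment.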
NASA Astrophysics Data System (ADS)
Saidi, Hosni; Msahli, Melek; Ben Dhafer, Rania; Ridene, Said
2017-12-01
The band structure and optical gain properties of [111]-oriented AlGaInAs/AlGaInAs-delta-InGaAs multi-quantum wells subjected to a piezoelectric field are investigated in this paper for near-infrared laser diode applications. Using a genetic-algorithm-based optimization technique, we demonstrate that the structural parameters can be conveniently optimized to achieve high-efficiency laser diode performance at room temperature. The optimization targets are significant optical gain at the desired emission wavelength of 1.55 μm and a low threshold injection current. The end result of this optimization is a laser diode based on an InP substrate using the quaternary compound AlGaInAs, with different compositions, in both the quantum wells and the barriers. It is shown that a transverse-electric polarized optical gain reaching 3500 cm-1 may be obtained at λ = 1.55 μm with a threshold carrier density Nth ≈ 1.3 × 10^18 cm-3, which is very promising for an alternative active region for high-efficiency near-infrared lasers. Finally, from the design presented here, we show that it is possible to apply this technique to other III-V compound semiconductors and to wavelengths ranging from the deep ultraviolet to the far infrared.
Multiantenna Relay Beamforming Design for QoS Discrimination in Two-Way Relay Networks
Xiong, Ke; Zhang, Yu; Li, Dandan; Zhong, Zhangdui
2013-01-01
This paper investigates relay beamforming design for quality-of-service (QoS) discrimination in two-way relay networks. The purpose is to let legitimate two-way relay users exchange their information via a helping multiantenna relay with a QoS guarantee, while preventing the exchanged information from being overheard by an unauthorized receiver. To this end, we propose a physical-layer method in which the relay beamforming is jointly designed with artificial noise (AN), which is used to interfere with the unauthorized user's reception. We formulate the joint beamforming and AN (BFA) design as an optimization problem in which the received signal-to-interference-plus-noise ratio (SINR) at the two legitimate users must exceed a predefined QoS threshold while the received SINR at the unauthorized user is kept below a certain security threshold. The objective of the optimization problem is to find the optimal AN and beamforming vectors that minimize the total power consumed by the relay node. Since the optimization problem is nonconvex, we solve it using semidefinite programming (SDP) relaxation. For comparison, we also study optimal relay beamforming without AN (BFO) under the same QoS discrimination constraints. Simulation results show that both the proposed BFA and BFO can achieve QoS discrimination of the two-way transmission. However, the proposed BFA yields significant power savings and lower infeasibility rates compared with the BFO method. PMID:24391459
NASA Astrophysics Data System (ADS)
Zhang, Xin; Liu, Zhiwen; Miao, Qiang; Wang, Lei
2018-03-01
A time varying filtering based empirical mode decomposition (EMD) (TVF-EMD) method was proposed recently to solve the mode mixing problem of the EMD method. Compared with the classical EMD, TVF-EMD was proven to improve the frequency separation performance and be robust to noise interference. However, the decomposition parameters (i.e., bandwidth threshold and B-spline order) significantly affect the decomposition results of this method. In the original TVF-EMD method, the parameter values are assigned in advance, which makes it difficult to achieve satisfactory analysis results. To solve this problem, this paper develops an optimized TVF-EMD method based on the grey wolf optimizer (GWO) algorithm for fault diagnosis of rotating machinery. Firstly, a measurement index termed the weighted kurtosis index is constructed by using the kurtosis index and the correlation coefficient. Subsequently, the optimal TVF-EMD parameters that match the input signal can be obtained by the GWO algorithm using the maximum weighted kurtosis index as the objective function. Finally, fault features can be extracted by analyzing the sensitive intrinsic mode function (IMF) with the maximum weighted kurtosis index. Simulations and comparisons highlight the performance of the TVF-EMD method for signal decomposition and verify that the bandwidth threshold and B-spline order are critical to the decomposition results. Two case studies on rotating machinery fault diagnosis demonstrate the effectiveness and advantages of the proposed method.
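The objective function named above combines an IMF's kurtosis with its correlation to the raw signal. The sketch below shows only that stated combination; the exact weighting used in the paper, and the GWO/TVF-EMD machinery around it, are not reproduced here.

```python
# Sketch of a "weighted kurtosis index": kurtosis of an IMF weighted by its
# correlation with the raw signal. The paper's exact weighting may differ.
import numpy as np
from scipy.stats import kurtosis

def weighted_kurtosis_index(imf, signal):
    rho = abs(np.corrcoef(imf, signal)[0, 1])   # correlation coefficient
    return kurtosis(imf, fisher=False) * rho    # kurtosis index x correlation

# A GWO loop would evaluate this index for candidate (bandwidth threshold,
# B-spline order) pairs, run TVF-EMD, and keep the best-scoring parameters.
t = np.linspace(0, 1, 1000)
sig = np.sin(2 * np.pi * 10 * t) + 0.1 * np.random.randn(1000)
print(weighted_kurtosis_index(sig, sig))
```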
Schiffelers, Marie-Jeanne W A; Blaauboer, Bas J; Bakker, Wieger E; Beken, Sonja; Hendriksen, Coenraad F M; Koëter, Herman B W M; Krul, Cyrille
2014-06-01
Pharmaceuticals and chemicals are subjected to regulatory safety testing accounting for approximately 25% of laboratory animal use in Europe. This testing meets various objections and has led to the development of a range of 3R models to Replace, Reduce or Refine the animal models. However, these models must overcome many barriers before being accepted for regulatory risk management purposes. This paper describes the barriers, drivers and options for optimizing this acceptance process as identified by two expert panels, one on pharmaceuticals and one on chemicals. To untangle the complex acceptance process, the multilevel perspective on technology transitions is applied. This perspective defines influences at the micro, meso and macro levels which need alignment to induce regulatory acceptance of a 3R model. This paper shows that there are many similar mechanisms within both sectors that prevent 3R models from becoming accepted for regulatory risk assessment and management. Shared barriers include the uncertainty about the value of the new 3R models (micro level), the lack of harmonization of regulatory requirements and acceptance criteria (meso level) and the high levels of risk aversion (macro level). In optimizing the process, commitment, communication, cooperation and coordination are identified as critical drivers. Copyright © 2014 Elsevier Inc. All rights reserved.
Tarabichi, Majd; Shohat, Noam; Kheir, Michael M; Adelani, Muyibat; Brigati, David; Kearns, Sean M; Patel, Pankajkumar; Clohisy, John C; Higuera, Carlos A; Levine, Brett R; Schwarzkopf, Ran; Parvizi, Javad; Jiranek, William A
2017-09-01
Although HbA1c is commonly used for assessing glycemic control before surgery, there is no consensus regarding its role and the appropriate threshold in predicting adverse outcomes. This study was designed to evaluate the potential link between HbA1c and subsequent periprosthetic joint infection (PJI), with the intention of determining the optimal threshold for HbA1c. This is a multicenter retrospective study, which identified 1645 diabetic patients who underwent primary total joint arthroplasty (1004 knees and 641 hips) between 2001 and 2015. All patients had an HbA1c measured within 3 months of surgery. The primary outcome of interest was a PJI at 1 year based on the Musculoskeletal Infection Society criteria. Secondary outcomes included orthopedic (wound and mechanical complications) and nonorthopedic complications (sepsis, thromboembolism, genitourinary, and cardiovascular complications). A regression analysis was performed to determine the independent influence of HbA1c for predicting PJI. Overall 22 cases of PJI occurred at 1 year (1.3%). An HbA1c threshold of 7.7% was identified as the cut-off for predicting PJI (area under the curve, 0.65; 95% confidence interval, 0.51-0.78). Using this threshold, PJI rates increased from 0.8% (11 of 1441) to 5.4% (11 of 204). In the stepwise logistic regression analysis, PJI remained the only variable associated with higher HbA1c (odds ratio, 1.5; confidence interval, 1.2-2.0; P = .0001). There was no association between high HbA1c levels and other complications assessed. High HbA1c levels are associated with an increased risk for PJI. A threshold of 7.7% seems to be more indicative of infection than the commonly used 7% and should perhaps be the goal in preoperative patient optimization. Copyright © 2017 Elsevier Inc. All rights reserved.
Comparison of memory thresholds for planar qudit geometries
NASA Astrophysics Data System (ADS)
Marks, Jacob; Jochym-O'Connor, Tomas; Gheorghiu, Vlad
2017-11-01
We introduce and analyze a new type of decoding algorithm called general color clustering, based on renormalization group methods, to be used in qudit color codes. The performance of this decoder is analyzed under a generalized bit-flip error model, and is used to obtain the first memory threshold estimates for qudit 6-6-6 color codes. The proposed decoder is compared with similar decoding schemes for qudit surface codes as well as the current leading qubit decoders for both sets of codes. We find that, as with surface codes, clustering performs sub-optimally for qubit color codes, giving a threshold of 5.6% compared to the 8.0% obtained through surface projection decoding methods. However, the threshold rate increases by up to 112% for large qudit dimensions, plateauing around 11.9%. All the analysis is performed using QTop, a new open-source software for simulating and visualizing topological quantum error correcting codes.
NASA Astrophysics Data System (ADS)
Hu, Hang; Yu, Hong; Zhang, Yongzhi
2013-03-01
Cooperative spectrum sensing, which can greatly improve the ability to discover spectrum opportunities, is regarded as an enabling mechanism for cognitive radio (CR) networks. In this paper, we employ a double-threshold detection method in the energy detector to perform spectrum sensing; only the CR users with reliable sensing information are allowed to transmit a one-bit local decision to the fusion center. Simulation results show that the proposed double-threshold detection method not only improves the sensing performance but also saves bandwidth on the reporting channel compared with the conventional single-threshold detection method. By weighting the sensing performance and the consumption of system resources in a utility function that is maximized with respect to the number of CR users, we show that the optimal number of CR users is related to the price of these quality-of-service (QoS) requirements.
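The double-threshold rule described above can be sketched as follows: users whose energy statistic falls between the two thresholds are deemed unreliable and stay silent, and only the remaining users report a one-bit decision. The threshold values, sample counts and OR fusion rule below are illustrative assumptions, not the optimized settings from the paper.

```python
# Sketch of double-threshold energy detection with one-bit reporting.
import numpy as np

def local_decision(samples, lam_low, lam_high):
    energy = np.mean(np.abs(samples) ** 2)      # energy test statistic
    if energy < lam_low:
        return 0                                # report "channel free"
    if energy > lam_high:
        return 1                                # report "primary user present"
    return None                                 # unreliable: do not report

rng = np.random.default_rng(1)
reports = [local_decision(rng.normal(0, 1, 100), lam_low=0.8, lam_high=1.3)
           for _ in range(10)]
votes = [r for r in reports if r is not None]
print("fusion (OR rule):", int(any(votes)) if votes else "no reports")
```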
Improved Bat Algorithm Applied to Multilevel Image Thresholding
2014-01-01
Multilevel image thresholding is a very important image processing technique that is used as a basis for image segmentation and further higher level processing. However, the required computational time for exhaustive search grows exponentially with the number of desired thresholds. Swarm intelligence metaheuristics are well known as successful and efficient optimization methods for intractable problems. In this paper, we adjusted one of the latest swarm intelligence algorithms, the bat algorithm, for the multilevel image thresholding problem. The results of testing on standard benchmark images show that the bat algorithm is comparable with other state-of-the-art algorithms. We also improved the standard bat algorithm by adding elements from differential evolution and from the artificial bee colony algorithm. The proposed improved bat algorithm proved to be better than five other state-of-the-art algorithms, improving the quality of results in all cases and significantly improving convergence speed. PMID:25165733
LCAMP: Location Constrained Approximate Message Passing for Compressed Sensing MRI
Sung, Kyunghyun; Daniel, Bruce L; Hargreaves, Brian A
2016-01-01
Iterative thresholding methods have been extensively studied as faster alternatives to convex optimization methods for solving large-sized problems in compressed sensing. A novel iterative thresholding method called LCAMP (Location Constrained Approximate Message Passing) is presented for reducing computational complexity and improving reconstruction accuracy when a nonzero location (or sparse support) constraint can be obtained from view shared images. LCAMP modifies the existing approximate message passing algorithm by replacing the thresholding stage with a location constraint, which avoids adjusting regularization parameters or thresholding levels. This work is first compared with other conventional reconstruction methods using random 1D signals and then applied to dynamic contrast-enhanced breast MRI to demonstrate the excellent reconstruction accuracy (less than 2% absolute difference) and low computation time (5 - 10 seconds using Matlab) with highly undersampled 3D data (244 × 128 × 48; overall reduction factor = 10). PMID:23042658
2017-01-01
Objective To compare swallowing function between healthy subjects and patients with pharyngeal dysphagia using high resolution manometry (HRM) and to evaluate the usefulness of HRM for detecting pharyngeal dysphagia. Methods Seventy-five patients with dysphagia and 28 healthy subjects were included in this study. Diagnosis of dysphagia was confirmed by a videofluoroscopy. HRM was performed to measure pressure and timing information at the velopharynx (VP), tongue base (TB), and upper esophageal sphincter (UES). HRM parameters were compared between dysphagia and healthy groups. Optimal threshold values of significant HRM parameters for dysphagia were determined. Results VP maximal pressure, TB maximal pressure, UES relaxation duration, and UES resting pressure were lower in the dysphagia group than those in healthy group. UES minimal pressure was higher in dysphagia group than in the healthy group. Receiver operating characteristic (ROC) analyses were conducted to validate optimal threshold values for significant HRM parameters to identify patients with pharyngeal dysphagia. With maximal VP pressure at a threshold value of 144.0 mmHg, dysphagia was identified with 96.4% sensitivity and 74.7% specificity. With maximal TB pressure at a threshold value of 158.0 mmHg, dysphagia was identified with 96.4% sensitivity and 77.3% specificity. At a threshold value of 2.0 mmHg for UES minimal pressure, dysphagia was diagnosed at 74.7% sensitivity and 60.7% specificity. Lastly, UES relaxation duration of <0.58 seconds had 85.7% sensitivity and 65.3% specificity, and UES resting pressure of <75.0 mmHg had 89.3% sensitivity and 90.7% specificity for identifying dysphagia. Conclusion We present evidence that HRM could be a useful evaluation tool for detecting pharyngeal dysphagia. PMID:29201816
Park, Chul-Hyun; Kim, Don-Kyu; Lee, Yong-Taek; Yi, Youbin; Lee, Jung-Sang; Kim, Kunwoo; Park, Jung Ho; Yoon, Kyung Jae
2017-10-01
To compare swallowing function between healthy subjects and patients with pharyngeal dysphagia using high resolution manometry (HRM) and to evaluate the usefulness of HRM for detecting pharyngeal dysphagia. Seventy-five patients with dysphagia and 28 healthy subjects were included in this study. Diagnosis of dysphagia was confirmed by a videofluoroscopy. HRM was performed to measure pressure and timing information at the velopharynx (VP), tongue base (TB), and upper esophageal sphincter (UES). HRM parameters were compared between dysphagia and healthy groups. Optimal threshold values of significant HRM parameters for dysphagia were determined. VP maximal pressure, TB maximal pressure, UES relaxation duration, and UES resting pressure were lower in the dysphagia group than those in healthy group. UES minimal pressure was higher in dysphagia group than in the healthy group. Receiver operating characteristic (ROC) analyses were conducted to validate optimal threshold values for significant HRM parameters to identify patients with pharyngeal dysphagia. With maximal VP pressure at a threshold value of 144.0 mmHg, dysphagia was identified with 96.4% sensitivity and 74.7% specificity. With maximal TB pressure at a threshold value of 158.0 mmHg, dysphagia was identified with 96.4% sensitivity and 77.3% specificity. At a threshold value of 2.0 mmHg for UES minimal pressure, dysphagia was diagnosed at 74.7% sensitivity and 60.7% specificity. Lastly, UES relaxation duration of <0.58 seconds had 85.7% sensitivity and 65.3% specificity, and UES resting pressure of <75.0 mmHg had 89.3% sensitivity and 90.7% specificity for identifying dysphagia. We present evidence that HRM could be a useful evaluation tool for detecting pharyngeal dysphagia.
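The cut-offs reported above come from ROC analysis of the manometry parameters. The sketch below picks a cut-off by maximizing Youden's J statistic; the paper does not state which criterion it used, so the criterion, as well as the simulated pressure data, are assumptions made for illustration only.

```python
# Sketch of choosing an ROC-based cut-off for a manometry parameter.
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(2)
# Hypothetical VP maximal pressures (mmHg): dysphagia tends to be lower.
pressure = np.concatenate([rng.normal(120, 30, 75), rng.normal(180, 25, 28)])
dysphagia = np.concatenate([np.ones(75), np.zeros(28)])

# Lower pressure indicates dysphagia, so score with the negative pressure.
fpr, tpr, thresholds = roc_curve(dysphagia, -pressure)
best = np.argmax(tpr - fpr)                       # maximize Youden's J
print("cut-off:", -thresholds[best], "sens:", tpr[best], "spec:", 1 - fpr[best])
```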
Sommer, Martin; Norden, Christoph; Schmack, Lars; Rothkegel, Holger; Lang, Nicolas; Paulus, Walter
2013-05-01
Directional sensitivity is relevant for the excitability threshold of the human primary motor cortex, but its importance for externally induced plasticity is unknown. To study the influence of current direction on two paradigms inducing neuroplasticity by repetitive transcranial magnetic stimulation (rTMS). We studied short-lasting after-effects induced in the human primary motor cortex of 8 healthy subjects, using 5 Hz rTMS applied in six blocks of 200 pulses each, at 90% active motor threshold. We controlled for intensity, frequency, waveform and spinal effects. Only biphasic pulses with the effective component delivered in an anteroposterior direction (henceforth posteriorly directed) in the brain yielded an increase of motor-evoked potential (MEP) amplitudes outlasting rTMS. MEP latencies and F-wave amplitudes remained unchanged. Biphasic pulses directed posteroanterior (i.e. anteriorly) were ineffective, as were monophasic pulses from either direction. A 1 Hz study in a group of 12 healthy subjects confirmed facilitation after posteriorly directed biphasic pulses only. The anisotropy of the human primary motor cortex is relevant for induction of plasticity by subthreshold rTMS, with a current flow opposite to that providing lowest excitability thresholds. This is consistent with the idea of TMS primarily targeting cortical columns of the phylogenetically new M1 in the anterior bank of the central sulcus. For these, anteriorly directed currents are soma-depolarizing, therefore optimal for low thresholds, whereas posteriorly directed currents are soma-hyperpolarizing, likely dendrite-depolarizing and best suited for induction of plasticity. Our findings should help focus and enhance rTMS effects in experimental and clinical settings. Copyright © 2013 Elsevier Inc. All rights reserved.
Response threshold variance as a basis of collective rationality
Yamamoto, Tatsuhiro
2017-01-01
Determining the optimal choice among multiple options is necessary in various situations, and the collective rationality of groups has recently become a major topic of interest. Social insects are thought to make such optimal choices by collecting individuals' responses relating to an option's value (=a quality-graded response). However, this behaviour cannot explain the collective rationality of brains because neurons can make only ‘yes/no’ responses on the basis of the response threshold. Here, we elucidate the basic mechanism underlying the collective rationality of such simple units and show that an ant species uses this mechanism. A larger number of units respond ‘yes’ to the best option available to a collective decision-maker using only the yes/no mechanism; thus, the best option is always selected by majority decision. Colonies of the ant Myrmica kotokui preferred the better option in a binary choice experiment. The preference of a colony was demonstrated by the workers, which exhibited variable thresholds between two options' qualities. Our results demonstrate how a collective decision-maker comprising simple yes/no judgement units achieves collective rationality without using quality-graded responses. This mechanism has broad applicability to collective decision-making in brain neurons, swarm robotics and human societies. PMID:28484636
Response threshold variance as a basis of collective rationality.
Yamamoto, Tatsuhiro; Hasegawa, Eisuke
2017-04-01
Determining the optimal choice among multiple options is necessary in various situations, and the collective rationality of groups has recently become a major topic of interest. Social insects are thought to make such optimal choices by collecting individuals' responses relating to an option's value (=a quality-graded response). However, this behaviour cannot explain the collective rationality of brains because neurons can make only 'yes/no' responses on the basis of the response threshold. Here, we elucidate the basic mechanism underlying the collective rationality of such simple units and show that an ant species uses this mechanism. A larger number of units respond 'yes' to the best option available to a collective decision-maker using only the yes/no mechanism; thus, the best option is always selected by majority decision. Colonies of the ant Myrmica kotokui preferred the better option in a binary choice experiment. The preference of a colony was demonstrated by the workers, which exhibited variable thresholds between two options' qualities. Our results demonstrate how a collective decision-maker comprising simple yes/no judgement units achieves collective rationality without using quality-graded responses. This mechanism has broad applicability to collective decision-making in brain neurons, swarm robotics and human societies.
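The yes/no mechanism described above can be illustrated with a toy simulation: each unit says "yes" when an option's quality exceeds its personal threshold, and with thresholds spread between the two qualities, the better option always collects more votes. The qualities, threshold distribution and group size below are illustrative assumptions, not parameters from the ant experiment.

```python
# Toy simulation of majority decision from yes/no units with variable thresholds.
import numpy as np

rng = np.random.default_rng(3)
quality_a, quality_b = 0.6, 0.4                  # option A is objectively better
thresholds = rng.uniform(0.0, 1.0, size=200)     # variable response thresholds

votes_a = np.sum(quality_a > thresholds)
votes_b = np.sum(quality_b > thresholds)
print("A:", votes_a, "B:", votes_b, "-> choose", "A" if votes_a > votes_b else "B")
```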
A new method for automated discontinuity trace mapping on rock mass 3D surface model
NASA Astrophysics Data System (ADS)
Li, Xiaojun; Chen, Jianqin; Zhu, Hehua
2016-04-01
This paper presents an automated discontinuity trace mapping method on a 3D surface model of rock mass. Feature points of discontinuity traces are first detected using the Normal Tensor Voting Theory, which is robust to noisy point cloud data. Discontinuity traces are then extracted from feature points in four steps: (1) trace feature point grouping, (2) trace segment growth, (3) trace segment connection, and (4) redundant trace segment removal. A sensitivity analysis is conducted to identify optimal values for the parameters used in the proposed method. The optimal triangular mesh element size is between 5 cm and 6 cm; the angle threshold in the trace segment growth step is between 70° and 90°; the angle threshold in the trace segment connection step is between 50° and 70°, and the distance threshold should be at least 15 times the mean triangular mesh element size. The method is applied to the excavation face trace mapping of a drill-and-blast tunnel. The results show that the proposed discontinuity trace mapping method is fast and effective and could be used as a supplement to traditional direct measurement of discontinuity traces.
Müllner, Marie; Schlattl, Helmut; Hoeschen, Christoph; Dietrich, Olaf
2015-12-01
To demonstrate the feasibility of gold-specific spectral CT imaging for the detection of liver lesions in humans at low concentrations of gold as targeted contrast agent. A Monte Carlo simulation study of spectral CT imaging with a photon-counting and energy-resolving detector (with 6 energy bins) was performed in a realistic phantom of the human abdomen. The detector energy thresholds were optimized for the detection of gold. The simulation results were reconstructed with the K-edge imaging algorithm; the reconstructed gold-specific images were filtered and evaluated with respect to signal-to-noise ratio and contrast-to-noise ratio (CNR). The simulations demonstrate the feasibility of spectral CT with CNRs of the specific gold signal between 2.7 and 4.8 after bilateral filtering. Using the optimized bin thresholds increases the CNRs of the lesions by up to 23% compared to bin thresholds described in former studies. Gold is a promising new CT contrast agent for spectral CT in humans; minimum tissue mass fractions of 0.2 wt% of gold are required for sufficient image contrast. Copyright © 2015 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
Nykänen, Esa-Pekka A; Dunning, Hanna E; Aryeetey, Richmond N O; Robertson, Aileen; Parlesak, Alexandr
2018-04-07
The Ghanaian population suffers from a double burden of malnutrition. Cost of food is considered a barrier to achieving a health-promoting diet. Food prices were collected in major cities and in rural areas in southern Ghana. Linear programming (LP) was used to calculate nutritionally optimized diets (food baskets (FBs)) for a low-income Ghanaian family of four that fulfilled energy and nutrient recommendations in both rural and urban settings. Calculations included implementing cultural acceptability for families living in extreme and moderate poverty (food budget under USD 1.9 and 3.1 per day respectively). Energy-appropriate FBs minimized for cost, following Food Balance Sheets (FBS), lacked key micronutrients such as iodine, vitamin B12 and iron for the mothers. Nutritionally adequate FBs were achieved in all settings when optimizing for a diet cheaper than USD 3.1. However, when limiting the cost to USD 1.9 in rural areas, wild foods had to be included in order to meet nutritional adequacy. Optimization suggested reducing roots, tubers and fruits and increasing cereals, vegetables and oil-bearing crops compared with FBS. LP is a useful tool to design culturally acceptable diets at minimum cost for low-income Ghanaian families to help advise national authorities how to overcome the double burden of malnutrition.
Robertson, Aileen
2018-01-01
The Ghanaian population suffers from a double burden of malnutrition. Cost of food is considered a barrier to achieving a health-promoting diet. Food prices were collected in major cities and in rural areas in southern Ghana. Linear programming (LP) was used to calculate nutritionally optimized diets (food baskets (FBs)) for a low-income Ghanaian family of four that fulfilled energy and nutrient recommendations in both rural and urban settings. Calculations included implementing cultural acceptability for families living in extreme and moderate poverty (food budget under USD 1.9 and 3.1 per day respectively). Energy-appropriate FBs minimized for cost, following Food Balance Sheets (FBS), lacked key micronutrients such as iodine, vitamin B12 and iron for the mothers. Nutritionally adequate FBs were achieved in all settings when optimizing for a diet cheaper than USD 3.1. However, when limiting the cost to USD 1.9 in rural areas, wild foods had to be included in order to meet nutritional adequacy. Optimization suggested reducing roots, tubers and fruits and increasing cereals, vegetables and oil-bearing crops compared with FBS. LP is a useful tool to design culturally acceptable diets at minimum cost for low-income Ghanaian families to help advise national authorities how to overcome the double burden of malnutrition. PMID:29642444
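The core of the LP formulation above is a cost-minimizing diet subject to nutrient lower bounds. The sketch below shows that structure with made-up foods, prices and nutrient contents; it does not use the Ghanaian price data, cultural-acceptability constraints or nutrient set of the study.

```python
# Minimal linear-programming sketch of a least-cost food basket.
import numpy as np
from scipy.optimize import linprog

foods = ["maize", "beans", "fish", "leafy veg"]
cost = np.array([0.30, 0.80, 2.50, 0.50])        # USD per 100 g (illustrative)
# Rows: energy (kcal), protein (g), iron (mg) per 100 g of each food.
nutrients = np.array([[360, 340, 120, 25],
                      [  9,  22,  21,  3],
                      [  2,   5,   1,  4]])
requirement = np.array([2000, 50, 15])           # daily requirement

# linprog handles constraints as A_ub @ x <= b_ub, so flip the signs.
res = linprog(cost, A_ub=-nutrients, b_ub=-requirement,
              bounds=[(0, 10)] * len(foods), method="highs")
print(dict(zip(foods, np.round(res.x, 2))), "cost:", round(res.fun, 2))
```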
Optimizing interconnections to maximize the spectral radius of interdependent networks
NASA Astrophysics Data System (ADS)
Chen, Huashan; Zhao, Xiuyan; Liu, Feng; Xu, Shouhuai; Lu, Wenlian
2017-03-01
The spectral radius (i.e., the largest eigenvalue) of the adjacency matrices of complex networks is an important quantity that governs the behavior of many dynamic processes on the networks, such as synchronization and epidemics. Studies in the literature focused on bounding this quantity. In this paper, we investigate how to maximize the spectral radius of interdependent networks by optimally linking k internetwork connections (or interconnections for short). We derive formulas for the estimation of the spectral radius of interdependent networks and employ these results to develop a suite of algorithms that are applicable to different parameter regimes. In particular, a simple algorithm is to link the k nodes with the largest k eigenvector centralities in one network to the node in the other network with a certain property related to both networks. We demonstrate the applicability of our algorithms via extensive simulations. We discuss the physical implications of the results, including how the optimal interconnections can more effectively decrease the threshold of epidemic spreading in the susceptible-infected-susceptible model and the threshold of synchronization of coupled Kuramoto oscillators.
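The simple rule mentioned above, linking the k nodes with the largest eigenvector centralities in one network to a node of the other, can be sketched directly on adjacency matrices. The graphs and the choice of target node below are illustrative, and this fragment omits the estimation formulas and the other algorithms developed in the paper.

```python
# Sketch: couple two networks via top-k eigenvector-centrality nodes and
# check the spectral radius of the resulting interdependent network.
import numpy as np

def spectral_radius(A):
    return np.max(np.abs(np.linalg.eigvals(A)))

def couple(A, B, k, target):
    """Link the top-k eigenvector-centrality nodes of A to node `target` of B."""
    n, m = len(A), len(B)
    vals, vecs = np.linalg.eig(A)
    centrality = np.abs(vecs[:, np.argmax(vals.real)])   # leading eigenvector
    top = np.argsort(centrality)[-k:]
    C = np.block([[A, np.zeros((n, m))], [np.zeros((m, n)), B]])
    for i in top:                                         # add interconnections
        C[i, n + target] = C[n + target, i] = 1
    return C

A = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], float)    # triangle
B = np.array([[0, 1], [1, 0]], float)                      # single edge
print(spectral_radius(couple(A, B, k=2, target=0)))
```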
NASA Astrophysics Data System (ADS)
Liang, Juhua; Tang, Sanyi; Cheke, Robert A.
2016-07-01
Pest resistance to pesticides is usually managed by switching between different types of pesticides. The optimal switching time, which depends on the dynamics of the pest population and on the evolution of the pesticide resistance, is critical. Here we address how the dynamic complexity of the pest population, the development of resistance and the spraying frequency of pulsed chemical control affect optimal switching strategies given different control aims. To do this, we developed novel discrete pest population growth models with both impulsive chemical control and the evolution of pesticide resistance. Strong and weak threshold conditions which guarantee the extinction of the pest population, based on the threshold values of the analytical formula for the optimal switching time, were derived. Further, we addressed switching strategies in the light of chosen economic injury levels. Moreover, the effects of the complex dynamical behaviour of the pest population on the pesticide switching times were also studied. The pesticide application period, the evolution of pesticide resistance and the dynamic complexity of the pest population may result in complex outbreak patterns, with consequent effects on the pesticide switching strategies.
NASA Astrophysics Data System (ADS)
Takabe, Satoshi; Hukushima, Koji
2016-05-01
Typical behavior of the linear programming (LP) problem is studied as a relaxation of the minimum vertex cover (min-VC), a type of integer programming (IP) problem. A lattice-gas model on the Erdös-Rényi random graphs of α-uniform hyperedges is proposed to express both the LP and IP problems of the min-VC in the common statistical mechanical model with a one-parameter family. Statistical mechanical analyses reveal for α = 2 that the LP optimal solution is typically equal to that given by the IP below the critical average degree c = e in the thermodynamic limit. The critical threshold for good accuracy of the relaxation extends the mathematical result c = 1 and coincides with the replica symmetry-breaking threshold of the IP. The LP relaxation for the minimum hitting sets with α ≥ 3, minimum vertex covers on α-uniform random graphs, is also studied. Analytic and numerical results strongly suggest that the LP relaxation fails to estimate optimal values above the critical average degree c = e/(α - 1) where the replica symmetry is broken.
Takabe, Satoshi; Hukushima, Koji
2016-05-01
Typical behavior of the linear programming (LP) problem is studied as a relaxation of the minimum vertex cover (min-VC), a type of integer programming (IP) problem. A lattice-gas model on the Erdös-Rényi random graphs of α-uniform hyperedges is proposed to express both the LP and IP problems of the min-VC in the common statistical mechanical model with a one-parameter family. Statistical mechanical analyses reveal for α=2 that the LP optimal solution is typically equal to that given by the IP below the critical average degree c=e in the thermodynamic limit. The critical threshold for good accuracy of the relaxation extends the mathematical result c=1 and coincides with the replica symmetry-breaking threshold of the IP. The LP relaxation for the minimum hitting sets with α≥3, minimum vertex covers on α-uniform random graphs, is also studied. Analytic and numerical results strongly suggest that the LP relaxation fails to estimate optimal values above the critical average degree c=e/(α-1) where the replica symmetry is broken.
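The LP relaxation studied above minimizes the sum of vertex variables subject to edge-covering constraints with the integrality condition dropped. The sketch below sets up that relaxation on a small graph; the graph and solver choice are illustrative, and the statistical-mechanical analysis of random graphs is of course not reproduced.

```python
# Sketch of the LP relaxation of minimum vertex cover:
# minimize sum(x_v) s.t. x_u + x_v >= 1 for every edge, 0 <= x_v <= 1.
import numpy as np
from scipy.optimize import linprog

def vertex_cover_lp(n, edges):
    c = np.ones(n)
    A_ub = np.zeros((len(edges), n))
    for row, (u, v) in enumerate(edges):
        A_ub[row, u] = A_ub[row, v] = -1          # -(x_u + x_v) <= -1
    b_ub = -np.ones(len(edges))
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * n, method="highs")
    return res.x, res.fun

# Triangle: the LP optimum is 1.5 (all x_v = 1/2) while the integer optimum
# is 2, a classic case where the relaxation is not tight.
x, value = vertex_cover_lp(3, [(0, 1), (1, 2), (0, 2)])
print(np.round(x, 2), value)
```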
The optimal structure-conductivity relation in epoxy-phthalocyanine nanocomposites.
Huijbregts, L J; Brom, H B; Brokken-Zijp, J C M; Kemerink, M; Chen, Z; Goeje, M P de; Yuan, M; Michels, M A J
2006-11-23
Phthalcon-11 (aquocyanophthalocyaninatocobalt (III)) forms semiconducting nanocrystals that can be dispersed in epoxy coatings to obtain a semiconducting material with a low percolation threshold. We investigated the structure-conductivity relation in this composite and the deviation from its optimal realization by combining two techniques. The real parts of the electrical conductivity of a Phthalcon-11/epoxy coating and of Phthalcon-11 powder were measured by dielectric spectroscopy as a function of frequency and temperature. Conducting atomic force microscopy (C-AFM) was applied to quantify the conductivity through the coating locally along the surface. This combination gives an excellent tool to visualize the particle network. We found that a large fraction of the crystals is organized in conducting channels of fractal building blocks. In this picture, a low percolation threshold automatically leads to a conductivity that is much lower than that of the filler. Since the structure-conductivity relation for the found network is almost optimal, a drastic increase in the conductivity of the coating cannot be achieved by changing the particle network, but only by using a filler with a higher conductivity level.
Using price-volume agreements to manage pharmaceutical leakage and off-label promotion.
Zhang, Hui; Zaric, Gregory S
2015-09-01
Unapproved or "off-label" uses of prescription drugs are quite common. The extent of this use may be influenced by the promotional efforts of manufacturers. This paper investigates how a manufacturer makes promotional decisions in the presence of a price-volume agreement. We developed an optimization model in which the manufacturer maximizes its expected profit by choosing the level of marketing effort to promote uses for different indications. We considered several ways a volume threshold is determined. We also compared models in which off-label uses are reimbursed and those in which they are forbidden to illustrate the impact of off-label promotion on the optimal decisions and on the decision maker's performance. We found that the payer chooses a threshold which may be the same as the manufacturer's optimal decision. We also found that the manufacturer not only considers the promotional cost in promoting off-label uses but also considers the health benefit of off-label uses. In some situations, using a price-volume agreement to control leakage may be a better idea than simply preventing leakage without using the agreement, from a social welfare perspective.
Optimization of the highly strained InGaAs/GaAs quantum well lasers grown by MOVPE
NASA Astrophysics Data System (ADS)
Su, Y. K.; Chen, W. C.; Wan, C. T.; Yu, H. C.; Chuang, R. W.; Tsai, M. C.; Cheng, K. Y.; Hu, C.; Tsau, Seth
2008-07-01
In this article, we study the highly compressive-strained InGaAs/GaAs quantum wells and the broad-area lasers grown by MOVPE. Several epitaxial parameters were optimized, including the growth temperature, pressure and group V to group III (V/III) ratio. Grown with the optimized epitaxial parameters, the highly strained In0.39Ga0.61As/GaAs lasers could be continuously operated at 1.22 μm and their threshold current density Jth was 140 A/cm2. To the best of our knowledge, the demonstrated InGaAs QW laser has the lowest threshold current per quantum well (Jth/QW) of 46.7 A/cm2. The fitted characteristic temperature (T0) was 146.2 K, indicating the good electron confinement ability. Furthermore, by lowering the growth temperature down to 475 °C and the TBAs/III ratio to 5, the emission wavelength of the In0.42Ga0.58As/GaAs quantum wells was as long as 1245 nm and FWHM was 43 meV.
NASA Astrophysics Data System (ADS)
Cobourn, K. M.; Peckham, S. D.
2011-12-01
The vulnerability of agri-environmental systems to ecological threshold events depends on the combined influence of economic factors and natural drivers, such as climate and disturbance. This analysis builds an integrated ecologic-economic model to evaluate the behavioral response of agricultural producers to changing and uncertain natural conditions. The model explicitly reflects the effect of producer behavior on the likelihood of a threshold event that threatens the ecological and/or economic sustainability of the agri-environmental system. The foundation of the analysis is a threshold indicator that incorporates the population dynamics of a species that supports economic production and an episodic disturbance regime-in this case rangeland grass that is grazed by livestock and is subject to wildfire. This ecological indicator is integrated into an economic model in which producers choose grazing intensity given the state of the grass population and a set of economic parameters. We examine two model variants that characterize differing economic circumstances. The first characterizes the optimal grazing regime assuming that the system is managed by a single planner whose objective is to maximize the aggregate long-run returns of producers in the system. The second examines the case in which individual producers choose their own stocking rates in order to maximize their private economic benefit. The results from the first model variant illustrate the difference between an ecologic and an economic threshold. Failure to cross an ecological threshold does not necessarily ensure that the system remains economically viable: Economic sustainability, defined as the ability of the system to support optimal production into the infinite future, requires that the net growth rate of the supporting population exceeds the level required for ecological sustainability by an amount that depends on the market price of livestock and grazing efficiency. The results from the second model variant define the circumstances under which a system that is otherwise ecologically sustainable is driven over a threshold by the actions of economic agents. The difference between the two model solutions identifies bounds between which the viability of livestock production over the long-run is uncertain and depends upon the policy setting in which the agri-environmental system operates.
Zhang, Yifei; Kang, Jian
2017-11-01
The building of biomass combined heat and power (CHP) plants is an effective means of developing biomass energy because they can satisfy demands for winter heating and electricity consumption. The purpose of this study was to analyse the effect of the distribution density of a biomass CHP plant network on heat utilisation efficiency in a village-town system. The distribution density is determined based on the heat transmission threshold, and the heat utilisation efficiency is determined based on the heat demand distribution, heat output efficiency, and heat transmission loss. The objective of this study was to ascertain the optimal value for the heat transmission threshold using a multi-scheme comparison based on an analysis of these factors. To this end, a model of a biomass CHP plant network was built using geographic information system tools to simulate and generate three planning schemes with different heat transmission thresholds (6, 8, and 10 km) according to the heat demand distribution. The heat utilisation efficiencies of these planning schemes were then compared by calculating the gross power, heat output efficiency, and heat transmission loss of the biomass CHP plant for each scenario. This multi-scheme comparison yielded the following results: when the heat transmission threshold was low, the distribution density of the biomass CHP plant network was high and the biomass CHP plants tended to be relatively small. In contrast, when the heat transmission threshold was high, the distribution density of the network was low and the biomass CHP plants tended to be relatively large. When the heat transmission threshold was 8 km, the distribution density of the biomass CHP plant network was optimised for efficient heat utilisation. To promote the development of renewable energy sources, a planning scheme for a biomass CHP plant network that maximises heat utilisation efficiency can be obtained using the optimal heat transmission threshold and the nonlinearity coefficient for local roads. Copyright © 2017 Elsevier Ltd. All rights reserved.
Qualification of an Acceptable Alternative to Halon 1211 DOD Flightline Extinguishers
2008-09-01
ERIC Educational Resources Information Center
Gou, J.; Smith, J.; Valero, J.; Rubio, I.
2011-01-01
This paper reports on a clinical trial evaluating outcomes of a frequency-lowering technique for adolescents and young adults with severe to profound hearing impairment. Outcomes were defined by changes in aided thresholds, speech perception, and acceptance. The participants comprised seven young people aged between 13 and 25 years. They were…
Shao, Jing-Yuan; Qu, Hai-Bin; Gong, Xing-Chu
2018-05-01
In this work, two algorithms (the overlapping method and the probability-based method) for design space calculation were compared using data collected from the extraction process of Codonopsis Radix as an example. In the probability-based method, experimental error was simulated to calculate the probability of reaching the standard. The effects of several parameters on the calculated design space were studied, including the number of simulations, the step length, and the acceptable probability threshold. For the extraction process of Codonopsis Radix, 10 000 simulations and a calculation step length of 0.02 lead to a satisfactory design space. In general, the overlapping method is easy to understand and can be realized with several kinds of commercial software without writing programs, but it does not indicate how reliably the process evaluation indexes are met when operating within the design space. The probability-based method is computationally more complex, but it provides the reliability needed to ensure that the process indexes reach the standard within the acceptable probability threshold. In addition, with the probability-based method there is no abrupt change in probability at the edge of the design space. Therefore, the probability-based method is recommended for design space calculation. Copyright© by the Chinese Pharmaceutical Association.
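The probability-based idea above can be sketched as a small Monte Carlo loop: for each candidate operating point, simulate experimental error many times and accept the point only if the probability of meeting the specification exceeds a threshold. The process model, error level and specification below are placeholders, not the Codonopsis Radix extraction models from the study.

```python
# Sketch of a probability-based design space check via Monte Carlo simulation.
import numpy as np

rng = np.random.default_rng(4)

def predicted_yield(temp, time):                 # hypothetical process model
    return 0.5 + 0.004 * temp + 0.02 * time

def prob_in_spec(temp, time, spec=1.1, sigma=0.03, n_sim=10_000):
    sims = predicted_yield(temp, time) + rng.normal(0, sigma, n_sim)
    return np.mean(sims >= spec)

# Accept an operating point only if the probability exceeds the threshold.
for temp in (60, 80, 100):
    p = prob_in_spec(temp, time=15)
    print(temp, round(p, 3), "accepted" if p >= 0.95 else "rejected")
```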
Mehta, Dipakkumar; Kumar, M H Sathish; Sabikhi, Latha
2017-11-01
The current work aimed to formulate smoothie by optimizing varying levels of soy protein isolate (1.5-2.5% w/w), sucralose (150-190 ppm) and pectin (0.3-0.5% w/w) along with milk, legume (chickpea), vegetable (carrot), fruit (mango), honey and trisodium citrate by response surface methodology on the basis of sensory (color and appearance, flavor, consistency, sweetness and overall acceptability) and physical (expressible serum and viscosity) responses. Soy protein isolate and pectin levels influenced color and appearance, flavor, consistency and overall acceptability significantly. Soy protein isolate and pectin showed a positive correlation with viscosity of smoothie with reduced expressible serum. Smoothie was optimized with 1.8% (w/w) soy protein isolate, 166.8 ppm sucralose, and 0.5% (w/w) pectin with acceptable quality. One serving (325 ml) of optimized smoothie provides approximately 23% protein, 27% dietary fiber of the recommended daily values and provides approximately 74 kcal per 100 ml of smoothie, which renders smoothie as a high protein, high fiber, grab-and-go breakfast option.
Owen, Rhiannon K; Cooper, Nicola J; Quinn, Terence J; Lees, Rosalind; Sutton, Alex J
2018-07-01
Network meta-analyses (NMA) have extensively been used to compare the effectiveness of multiple interventions for health care policy and decision-making. However, methods for evaluating the performance of multiple diagnostic tests are less established. In a decision-making context, we are often interested in comparing and ranking the performance of multiple diagnostic tests, at varying levels of test thresholds, in one simultaneous analysis. Motivated by an example of cognitive impairment diagnosis following stroke, we synthesized data from 13 studies assessing the efficiency of two diagnostic tests: Mini-Mental State Examination (MMSE) and Montreal Cognitive Assessment (MoCA), at two test thresholds: MMSE <25/30 and <27/30, and MoCA <22/30 and <26/30. Using Markov chain Monte Carlo (MCMC) methods, we fitted a bivariate network meta-analysis model incorporating constraints on increasing test threshold, and accounting for the correlations between multiple test accuracy measures from the same study. We developed and successfully fitted a model comparing multiple tests/threshold combinations while imposing threshold constraints. Using this model, we found that MoCA at threshold <26/30 appeared to have the best true positive rate, whereas MMSE at threshold <25/30 appeared to have the best true negative rate. The combined analysis of multiple tests at multiple thresholds allowed for more rigorous comparisons between competing diagnostics tests for decision making. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
Percolation in insect nest networks: Evidence for optimal wiring
NASA Astrophysics Data System (ADS)
Valverde, Sergi; Corominas-Murtra, Bernat; Perna, Andrea; Kuntz, Pascale; Theraulaz, Guy; Solé, Ricard V.
2009-06-01
Optimization has been shown to be a driving force for the evolution of some biological structures, such as neural maps in the brain or transport networks. Here we show that insect networks also display characteristic traits of optimality. By using a graph representation of the chamber organization of termite nests and a disordered lattice model, it is found that these spatial nests are close to a percolation threshold. This suggests that termites build efficient systems of galleries spanning most of the nest volume at low cost. The evolutionary consequences are outlined.
NASA Technical Reports Server (NTRS)
Brown, Aaron J.
2011-01-01
Orbit maintenance is the series of burns performed during a mission to ensure the orbit satisfies mission constraints. Low-altitude missions often require non-trivial orbit maintenance ΔV due to sizable orbital perturbations and minimum altitude thresholds. A strategy is presented for minimizing this ΔV using impulsive burn parameter optimization. An initial estimate for the burn parameters is generated by considering a feasible solution to the orbit maintenance problem. An example demonstrates the ΔV savings from the feasible solution to the optimal solution.
DiFranza, Joseph; Ursprung, W W Sanouri; Lauzon, Béatrice; Bancej, Christina; Wellman, Robert J; Ziedonis, Douglas; Kim, Sun S; Gervais, André; Meltzer, Bruce; McKay, Colleen E; O'Loughlin, Jennifer; Okoli, Chizimuzo T C; Fortuna, Lisa R; Tremblay, Michèle
2010-05-01
The Diagnostic and Statistical Manual diagnostic criteria for nicotine dependence (DSM-ND) are based on the proposition that dependence is a syndrome that can be diagnosed only when a minimum of 3 of the 7 prescribed features are present. The DSM-ND criteria are an accepted research measure, but the validity of these criteria has not been subjected to a systematic evaluation. To systematically review evidence of validity and reliability for the DSM-ND criteria, a literature search was conducted of 16 national and international databases. Each article with original data was independently reviewed by two or more reviewers. In total, 380 potentially relevant articles were examined and 169 were reviewed in depth. The DSM-ND criteria have seen wide use in research settings, but sensitivity and specificity are well below the accepted standards for clinical applications. Predictive validity is generally poor. The 7 DSM-ND criteria are regarded as having face validity, but no data support a 3-symptom ND diagnostic threshold, or a 4-symptom withdrawal syndrome threshold. The DSM incorrectly states that daily smoking is a prerequisite for withdrawal symptoms. The DSM shows poor to modest concurrence with all other measures of nicotine dependence, smoking behaviors and biological measures of tobacco use. The data support the DSM-ND criteria as a valid measure of nicotine dependence severity for research applications. However, the data do not support the central premise of a 3-symptom diagnostic threshold, and no data establish that the DSM-ND criteria provide an accurate diagnosis of nicotine dependence. Copyright (c) 2009 Elsevier Ltd. All rights reserved.
Wavelet-based edge correlation incorporated iterative reconstruction for undersampled MRI.
Hu, Changwei; Qu, Xiaobo; Guo, Di; Bao, Lijun; Chen, Zhong
2011-09-01
Undersampling k-space is an effective way to decrease acquisition time for MRI. However, aliasing artifacts introduced by undersampling may blur the edges of magnetic resonance images, which often contain important information for clinical diagnosis. Moreover, k-space data is often contaminated by noise of unknown intensity. To better preserve the edge features while suppressing the aliasing artifacts and noise, we present a new wavelet-based algorithm for undersampled MRI reconstruction. The algorithm formulates image reconstruction as a standard optimization problem including an ℓ2 data-fidelity term and an ℓ1 sparsity regularization term. Rather than manually setting the regularization parameter for the ℓ1 term, which is directly related to the threshold, an automatically estimated threshold adaptive to the noise intensity is introduced in our proposed algorithm. In addition, a prior matrix based on edge correlation in the wavelet domain is incorporated into the regularization term. Compared with the nonlinear conjugate gradient descent algorithm, the iterative shrinkage/thresholding algorithm, the fast iterative soft-thresholding algorithm and the iterative thresholding algorithm with an exponentially decreasing threshold, the proposed algorithm yields reconstructions with better edge recovery and noise suppression. Copyright © 2011 Elsevier Inc. All rights reserved.
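The building block shared by the methods compared above is the iterative soft-thresholding step. The sketch below shows one such step on a generic compressed-sensing problem; the data-adaptive threshold and the edge-correlation prior matrix of the proposed algorithm are replaced by a fixed scalar threshold, so this illustrates only the generic mechanism, not the paper's method. The measurement operator and sparse signal are simulated.

```python
# Sketch of iterative soft-thresholding (ISTA) for a sparse recovery problem.
import numpy as np

def soft_threshold(x, lam):
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def ista_step(x, A, y, lam, step):
    """One step of min_x 0.5*||A x - y||_2^2 + lam*||x||_1."""
    grad = A.T @ (A @ x - y)                     # gradient of the data-fidelity term
    return soft_threshold(x - step * grad, step * lam)

rng = np.random.default_rng(5)
A = rng.normal(size=(30, 100)) / np.sqrt(30)     # undersampled measurement operator
x_true = np.zeros(100)
x_true[[3, 40, 77]] = [1.0, -2.0, 1.5]           # sparse ground truth
y = A @ x_true

step = 1.0 / np.linalg.norm(A, 2) ** 2           # step size from the spectral norm
x = np.zeros(100)
for _ in range(200):
    x = ista_step(x, A, y, lam=0.05, step=step)
print("largest recovered coefficients at indices:", np.argsort(np.abs(x))[-3:])
```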
Laser-induced retinal injury thresholds: variation with retinal irradiated area
NASA Astrophysics Data System (ADS)
Lund, David J.; Schulmeister, Karl; Seiser, Bernhard; Edthofer, Florian
2005-04-01
The retinal injury threshold for exposure to a laser source varies as a function of the irradiated area on the retina. Currently accepted guidelines for the safe use of lasers provide that the MPE increases with the diameter of the irradiated area for retinal spot diameters between 25 µm and 1700 µm, based on the ED50 data available in the late 1970s. Recent studies by Zuclich and Lund produced data showing that the ED50 for ns-duration exposures at 532 nm and ms-duration exposures at 590 nm varied as the square of the diameter of the irradiated area on the retina. This paper will discuss efforts to resolve the disagreement between the new data and the earlier data through an analysis of all accessible data relating the retinal injury threshold to the diameter of the incident beam on the retina and through simulations using computer models of laser-induced injury. The results show that the retinal radiant exposure required to produce retinal injury is a function of both exposure duration and retinal irradiance diameter and that the current guidelines for irradiance diameter dependence do not accurately reflect the variation of the threshold data.
Measurement of the helicity asymmetry E in ω → π⁺π⁻π⁰ photoproduction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Akbar, Z.; Roy, P.; Park, S.
The double-polarization observable E was studied for the reaction γp → pω using the CEBAF Large Acceptance Spectrometer (CLAS) in Hall B at the Thomas Jefferson National Accelerator Facility and the longitudinally-polarized frozen-spin target (FROST). The observable was measured from the charged decay mode of the meson, ω → π⁺π⁻π⁰, using a circularly-polarized tagged-photon beam with energies ranging from the ω threshold at 1.1 to 2.3 GeV. A partial-wave analysis within the Bonn-Gatchina framework found dominant contributions from the 3/2⁺ partial wave near threshold, which is identified with the sub-threshold N(1720) 3/2⁺ nucleon resonance. To describe the entire data set, which consisted of ω differential cross sections and a large variety of polarization observables, further contributions from other nucleon resonances were found to be necessary. Here, with respect to non-resonant mechanisms, π exchange in the t-channel was found to remain small across the analyzed energy range, while pomeron t-channel exchange gradually grew from the reaction threshold to dominate all other contributions above W ≈ 2 GeV.
Metter, E J; Granville, R L; Kussman, M J
1997-04-01
The study determines the extent to which payment thresholds for reporting malpractice claims to the National Practitioner Data Bank identify substandard health care delivery in the Department of Defense. Relevant data were available on 2,291 of 2,576 medical malpractice claims reported to the closed medical malpractice case database of the Office of the Assistant Secretary of Defense (Health Affairs). Amount paid was analyzed as a diagnostic test using standard of care assessment from each military Surgeon General office as the criterion. Using different paid threshold amounts per claim as a positive test, the sensitivity of identifying substandard care declined from 0.69 for all paid cases to 0.41 for claims over $40,000. Specificity increased from 0.75 for all paid claims to 0.89 for claims over $40,000. Positive and negative predictive values and likelihood ratio were similar at all thresholds. Malpractice case payment was of limited value for identifying substandard medical practice. All paid claims missed about 30% of substandard care, and reported about 25% of acceptable medical practice.
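A minimal sketch of the underlying analysis, treating payment amount as a diagnostic test and sweeping the payment threshold. The data are synthetic; the payment rates and amounts are assumptions, not the study's.

```python
# Sweep payment thresholds and report sensitivity/specificity for flagging
# substandard care (illustrative synthetic data, not the DoD claims database).
import numpy as np

rng = np.random.default_rng(1)
n = 2300
substandard = rng.random(n) < 0.35                     # reviewer-judged standard of care
pay_prob = np.where(substandard, 0.7, 0.3)             # assumed payment rates
was_paid = rng.random(n) < pay_prob
amount = np.where(substandard,
                  rng.lognormal(10.5, 1.0, n),
                  rng.lognormal(9.5, 1.0, n)) * was_paid

for threshold in [0, 20_000, 40_000, 80_000]:
    positive = amount > threshold                      # "test positive"
    sens = (positive & substandard).sum() / substandard.sum()
    spec = (~positive & ~substandard).sum() / (~substandard).sum()
    print(f"paid > ${threshold:>6,}: sensitivity={sens:.2f}, specificity={spec:.2f}")
```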
Measurement of the helicity asymmetry E in ω → π⁺π⁻π⁰ photoproduction
Akbar, Z.; Roy, P.; Park, S.; ...
2017-12-28
The double-polarization observable E was studied for the reaction γp → pω using the CEBAF Large Acceptance Spectrometer (CLAS) in Hall B at the Thomas Jefferson National Accelerator Facility and the longitudinally-polarized frozen-spin target (FROST). The observable was measured from the charged decay mode of the meson, ω → π⁺π⁻π⁰, using a circularly-polarized tagged-photon beam with energies ranging from the ω threshold at 1.1 to 2.3 GeV. A partial-wave analysis within the Bonn-Gatchina framework found dominant contributions from the 3/2⁺ partial wave near threshold, which is identified with the sub-threshold N(1720) 3/2⁺ nucleon resonance. To describe the entire data set, which consisted of ω differential cross sections and a large variety of polarization observables, further contributions from other nucleon resonances were found to be necessary. Here, with respect to non-resonant mechanisms, π exchange in the t-channel was found to remain small across the analyzed energy range, while pomeron t-channel exchange gradually grew from the reaction threshold to dominate all other contributions above W ≈ 2 GeV.
Evidence of hearing loss in a “normally-hearing” college-student population
Le Prell, C. G.; Hensley, B.N.; Campbell, K. C. M.; Hall, J. W.; Guire, K.
2011-01-01
We report pure-tone hearing threshold findings in 56 college students. All subjects reported normal hearing during telephone interviews, yet not all subjects had normal sensitivity as defined by well-accepted criteria. At one or more test frequencies (0.25–8 kHz), 7% of ears had thresholds ≥25 dB HL and 12% had thresholds ≥20 dB HL. The proportion of ears with abnormal findings decreased when three-frequency pure-tone-averages were used. Low-frequency PTA hearing loss was detected in 2.7% of ears and high-frequency PTA hearing loss was detected in 7.1% of ears; however, there was little evidence for “notched” audiograms. There was a statistically reliable relationship in which personal music player use was correlated with decreased hearing status in male subjects. Routine screening and education regarding hearing loss risk factors are critical as college students do not always self-identify early changes in hearing. Large-scale systematic investigations of college students’ hearing status appear to be warranted; the current sample size was not adequate to precisely measure potential contributions of different sound sources to the elevated thresholds measured in some subjects. PMID:21288064
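A small sketch of the screening logic described above; the frequency groupings used for the low- and high-frequency pure-tone averages are illustrative assumptions, not the paper's exact definitions.

```python
# Flag ears with any threshold >= 25 dB HL and compute low-/high-frequency
# pure-tone averages (PTAs) from an example audiogram.
import numpy as np

freqs_khz = [0.25, 0.5, 1, 2, 3, 4, 6, 8]
thresholds = np.array([10, 5, 5, 10, 20, 30, 25, 15])   # one ear, dB HL (illustrative)

any_25 = np.any(thresholds >= 25)
low_pta = thresholds[[1, 2, 3]].mean()        # 0.5, 1, 2 kHz
high_pta = thresholds[[4, 5, 7]].mean()       # 3, 4, 8 kHz (assumed grouping)

print(f"any frequency >= 25 dB HL: {any_25}")
print(f"low-frequency PTA:  {low_pta:.1f} dB HL")
print(f"high-frequency PTA: {high_pta:.1f} dB HL")
```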
Precipitation phase partitioning variability across the Northern Hemisphere
NASA Astrophysics Data System (ADS)
Jennings, K. S.; Winchell, T. S.; Livneh, B.; Molotch, N. P.
2017-12-01
Precipitation phase drives myriad hydrologic, climatic, and biogeochemical processes. Despite its importance, many of the land surface models used to simulate such processes and their sensitivity to climate warming rely on simple, spatially uniform air temperature thresholds to partition rainfall and snowfall. Our analysis of a 29-year dataset with 18.7 million observations of precipitation phase from 12,143 stations across the Northern Hemisphere land surface showed marked spatial variability in the near-surface air temperature at which precipitation is equally likely to fall as rain and snow, the 50% rain-snow threshold. This value averaged 1.0°C and ranged from -0.4°C to 2.4°C for 95% of the stations analyzed. High-elevation continental areas such as the Rocky Mountains of the western U.S. and the Tibetan Plateau of central Asia generally exhibited the warmest thresholds, in some cases exceeding 3.0°C. Conversely, the coldest thresholds were observed on the Pacific Coast of North America, the southeast U.S., and parts of Eurasia, with values dropping below -0.5°C. Analysis of the meteorological conditions during storm events showed relative humidity exerted the strongest control on phase partitioning, with surface pressure playing a secondary role. Lower relative humidity and surface pressure were both associated with warmer 50% rain-snow thresholds. Additionally, we trained a binary logistic regression model on the observations to classify rain and snow events and found including relative humidity as a predictor variable significantly increased model performance between 0.6°C and 3.8°C when phase partitioning is most uncertain. We then used the optimized model and a spatially continuous reanalysis product to map the 50% rain-snow threshold across the Northern Hemisphere. The map reproduced patterns in the observed thresholds with a mean bias of 0.5°C relative to the station data. The above results suggest land surface models could be improved by incorporating relative humidity into their precipitation phase prediction schemes or by using a spatially variable, optimized rain-snow temperature threshold. This is particularly important for climate warming simulations where misdiagnosing a shift from snow to rain or inaccurately quantifying snowfall fraction would likely lead to biased results.
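The classification idea can be sketched as follows: fit a binary logistic regression on air temperature and relative humidity, then solve for the temperature at which rain and snow are equally likely at a given humidity. The data and coefficients below are synthetic, not the study's fitted values.

```python
# Rain/snow phase classification with temperature + relative humidity, and the
# implied 50% rain-snow threshold temperature at several humidities.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000
temp = rng.uniform(-8, 10, n)                  # near-surface air temperature [deg C]
rh = rng.uniform(40, 100, n)                   # relative humidity [%]
# assumed generating rule: drier air lets snow persist to warmer temperatures
logit = 1.6 * (temp - (1.0 + 0.03 * (70 - rh)))
is_rain = rng.random(n) < 1 / (1 + np.exp(-logit))

model = LogisticRegression(max_iter=1000).fit(np.column_stack([temp, rh]), is_rain)
b0, (b_t, b_rh) = model.intercept_[0], model.coef_[0]

for rh_val in (50, 70, 90):
    t50 = -(b0 + b_rh * rh_val) / b_t          # P(rain) = 0.5 decision boundary
    print(f"RH={rh_val}%: 50% rain-snow threshold ≈ {t50:.2f} °C")
```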
Wang, Yi-Ting; Sung, Pei-Yuan; Lin, Peng-Lin; Yu, Ya-Wen; Chung, Ren-Hua
2015-05-15
Genome-wide association studies (GWAS) have become a common approach to identifying single nucleotide polymorphisms (SNPs) associated with complex diseases. As complex diseases are caused by the joint effects of multiple genes, while the effect of an individual gene or SNP is modest, a method considering the joint effects of multiple SNPs can be more powerful than testing individual SNPs. The multi-SNP analysis aims to test association based on a SNP set, usually defined based on biological knowledge such as gene or pathway, which may contain only a portion of SNPs with effects on the disease. Therefore, a challenge for the multi-SNP analysis is how to effectively select a subset of SNPs with promising association signals from the SNP set. We developed the Optimal P-value Threshold Pedigree Disequilibrium Test (OPTPDT). The OPTPDT uses general nuclear families. A variable p-value threshold algorithm is used to determine an optimal p-value threshold for selecting a subset of SNPs. A permutation procedure is used to assess the significance of the test. We used simulations to verify that the OPTPDT has correct type I error rates. Our power studies showed that the OPTPDT can be more powerful than the set-based test in PLINK, the multi-SNP FBAT test, and the p-value based test GATES. We applied the OPTPDT to a family-based autism GWAS dataset for gene-based association analysis and identified MACROD2-AS1 with genome-wide significance (p-value = 2.5×10⁻⁶). Our simulation results suggested that the OPTPDT is a valid and powerful test. The OPTPDT will be helpful for gene-based or pathway association analysis. The method is ideal for the secondary analysis of existing GWAS datasets, which may identify a set of SNPs with joint effects on the disease.
Weiss, Robert A; Ross, E Victor; Tanghetti, Emil A; Vasily, David B; Childs, James J; Smirnov, Mikhail Z; Altshuler, Gregory B
2011-02-01
An arc lamp-based device providing optimized spectrum and pulse shape was characterized and compared with two pulsed dye laser (PDL) systems using a vascular phantom. Safety and effectiveness for facial telangiectasia are presented in clinical case studies. An optimized pulsed light source's (OPL) spectral and power output were characterized and compared with two 595 nm PDL devices. Purpuric threshold fluences were determined for the OPL and PDLs on Fitzpatrick type II normal skin. A vascular phantom comprising blood-filled quartz capillaries beneath porcine skin was treated by the devices at their respective purpuric threshold fluences for 3 ms pulse widths, while vessel temperatures were monitored with an infrared (IR) camera. Patients with Fitzpatrick skin types II-III received a split-face treatment with the OPL and a 595 nm PDL. The OPL provided a dual-band output spectrum from 500 to 670 nm and 850-1,200 nm, pulse widths from 3 to 100 ms, and fluences up to 80 J/cm². The smooth output power measured during all pulse widths provides unambiguous vessel size selectivity. Percent energy in the near infra-red increased with decreasing output power from 45% to 60% and contributed 15-26% to heating of deep vessels, respectively. At purpuric threshold fluences the ratio of OPL to PDL vessel temperature rise was 1.7-2.8. OPL treatments of facial telangiectasia were well-tolerated by patients demonstrating significant improvements comparable to PDL with no downtime. Intense pulsed light (IPL) and PDL output pulse and spectral profiles are important for selective treatment of vessels in vascular lesions. The OPL's margin between purpuric threshold fluence and treatment fluence for deeper, larger vessels was greater than the corresponding margin with PDLs. The results warrant further comparison studies with IPLs and other PDLs. Copyright © 2011 Wiley-Liss, Inc.
Rosenbaum, Daniel G; Askin, Gulce; Beneck, Debra M; Kovanlikaya, Arzu
2017-10-01
The role of magnetic resonance imaging (MRI) in pediatric appendicitis is increasing; MRI findings predictive of appendiceal perforation have not been specifically evaluated. To assess the performance of MRI in differentiating perforated from non-perforated appendicitis. A retrospective review of pediatric patients undergoing contrast-enhanced MRI and subsequent appendectomy was performed, with surgicopathological confirmation of perforation. Appendiceal diameter and the following 10 MRI findings were assessed: appendiceal restricted diffusion, wall defect, appendicolith, periappendiceal free fluid, remote free fluid, restricted diffusion within free fluid, abscess, peritoneal enhancement, ileocecal wall thickening and ileus. Two-sample t-test and chi-square tests were used to analyze continuous and discrete data, respectively. Sensitivity and specificity for individual MRI findings were calculated and optimal thresholds for measures of accuracy were selected. Seventy-seven patients (mean age: 12.2 years) with appendicitis were included, of whom 22 had perforation. The perforated group had a larger mean appendiceal diameter and mean number of MRI findings than the non-perforated group (12.3 mm vs. 8.6 mm; 5.0 vs. 2.0, respectively). Abscess, wall defect and restricted diffusion within free fluid had the greatest specificity for perforation (1.00, 1.00 and 0.96, respectively) but low sensitivity (0.36, 0.25 and 0.32, respectively). The receiver operator characteristic curve for total number of MRI findings had an area under the curve of 0.92, with an optimal threshold of 3.5. A threshold of any 4 findings had the best ability to accurately discriminate between perforated and non-perforated cases, with a sensitivity of 82% and specificity of 85%. Contrast-enhanced MRI can differentiate perforated from non-perforated appendicitis. The presence of multiple findings increases diagnostic accuracy, with a threshold of any four findings optimally discriminating between perforated and non-perforated cases. These results may help guide management decisions as MRI assumes a greater role in the work-up of pediatric appendicitis.
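A short sketch of the threshold analysis described above, treating the number of positive MRI findings as a score and choosing the cutoff that maximizes Youden's J on an ROC curve. The counts below are illustrative, not the study data.

```python
# ROC analysis of "number of positive MRI findings" as a score for perforation,
# with an optimal cutoff picked by Youden's J.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(3)
perforated = np.r_[np.ones(22, dtype=bool), np.zeros(55, dtype=bool)]
n_findings = np.where(perforated,
                      rng.binomial(10, 0.5, size=77),     # ~5 findings if perforated
                      rng.binomial(10, 0.2, size=77))     # ~2 findings otherwise

fpr, tpr, cuts = roc_curve(perforated, n_findings)
best = np.argmax(tpr - fpr)                               # Youden's J
print(f"AUC = {roc_auc_score(perforated, n_findings):.2f}")
print(f"optimal rule: >= {cuts[best]:.0f} findings "
      f"(sensitivity {tpr[best]:.2f}, specificity {1 - fpr[best]:.2f})")
```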
Ross, Eric L; Cinti, Sandro K; Hutton, David W
2016-07-01
Preexposure prophylaxis (PrEP) is effective at preventing HIV infection among men who have sex with men (MSM), but there is uncertainty about how to identify high-risk MSM who should receive PrEP. We used a mathematical model to assess the cost-effectiveness of using the HIV Incidence Risk Index for MSM (HIRI-MSM) questionnaire to target PrEP to high-risk MSM. We simulated strategies of no PrEP, PrEP available to all MSM, and eligibility thresholds set to HIRI-MSM scores between 5 and 45, in increments of 5 (where a higher score predicts greater HIV risk). Based on the iPrEx, IPERGAY, and PROUD trials, we evaluated PrEP efficacies from 44% to 86% and annual costs from $5900 to 8700. We designate strategies with incremental cost-effectiveness ratio (ICER) ≤$100,000/quality-adjusted life-year (QALY) as "cost-effective." Over 20 years, making PrEP available to all MSM is projected to prevent 33.5% of new HIV infections, with an ICER of $1,474,000/QALY. Increasing the HIRI-MSM score threshold reduces the prevented infections, but improves cost-effectiveness. A threshold score of 25 is projected to be optimal (most QALYs gained while still being cost-effective) over a wide range of realistic PrEP efficacies and costs. At low cost and high efficacy (IPERGAY), thresholds of 15 or 20 are optimal across a range of other input assumptions; at high cost and low efficacy (iPrEx), 25 or 30 are generally optimal. The HIRI-MSM provides a clinically actionable means of guiding PrEP use. Using a score of 25 to determine PrEP eligibility could facilitate cost-effective use of PrEP among high-risk MSM who will benefit from it most.
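The decision logic can be sketched as a simple incremental cost-effectiveness comparison across eligibility thresholds. The strategy costs and QALYs below are placeholders rather than the model's outputs, and extended dominance is ignored for brevity.

```python
# Pick the most effective strategy whose ICER against the last accepted
# strategy stays at or below the willingness-to-pay threshold.
strategies = [                      # (name, total cost [$M], total QALYs) - placeholders
    ("no PrEP",      1000.0, 50_000.0),
    ("threshold 25", 1400.0, 54_500.0),
    ("threshold 15", 1900.0, 57_000.0),
    ("PrEP for all", 3200.0, 58_000.0),
]
wtp = 100_000.0                                        # willingness to pay per QALY

strategies.sort(key=lambda s: s[2])                    # order by effectiveness
chosen = strategies[0]
for name, cost, qaly in strategies[1:]:
    icer = (cost - chosen[1]) * 1e6 / (qaly - chosen[2])   # $ per QALY gained
    print(f"{name}: ICER vs '{chosen[0]}' = ${icer:,.0f}/QALY")
    if icer <= wtp:
        chosen = (name, cost, qaly)
print("optimal cost-effective strategy:", chosen[0])
```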
Liu, Zhao; Zheng, Chaorong; Wu, Yue
2017-09-01
Wind profilers have been widely adopted to observe wind field information in the atmosphere for different purposes, but the accuracy of their observations is limited by various noise sources and disturbances and hence needs to be further improved. In this paper, data measured under strong wind conditions using a 1290-MHz boundary layer profiler (BLP) are quality controlled via a composite quality control (QC) procedure proposed by the authors. Then, through comparison with data measured by radiosonde flights (balloon observations), the critical thresholds in the composite QC procedure, including the consensus average threshold T1 and the vertical shear threshold T3, are systematically discussed, and the performance of the BLP operated under precipitation is also evaluated. It is found that, to ensure high accuracy and a high data collectable rate, the optimal range of subsets is 4 m/s. Although the number of data rejected by the combined algorithm of vertical shear examination and small median test is quite limited, the algorithm proves useful for recognizing outliers with large discrepancies, and the optimal wind shear threshold T3 is recommended as 5 (m/s) per 100 m. During patchy precipitation, the quality of data measured by the four oblique beams (using the DBS measuring technique) can still be ensured. After the BLP data are quality controlled by the composite QC procedure, the output shows good agreement with the balloon observations.
A Topological Criterion for Filtering Information in Complex Brain Networks
Latora, Vito; Chavez, Mario
2017-01-01
In many biological systems, the network of interactions between the elements can only be inferred from experimental measurements. In neuroscience, non-invasive imaging tools are extensively used to derive either structural or functional brain networks in-vivo. As a result of the inference process, we obtain a matrix of values corresponding to a fully connected and weighted network. To turn this into a useful sparse network, thresholding is typically adopted to cancel a percentage of the weakest connections. The structural properties of the resulting network depend on how much of the inferred connectivity is eventually retained. However, how to objectively fix this threshold is still an open issue. We introduce a criterion, the efficiency cost optimization (ECO), to select a threshold based on the optimization of the trade-off between the efficiency of a network and its wiring cost. We prove analytically and we confirm through numerical simulations that the connection density maximizing this trade-off emphasizes the intrinsic properties of a given network, while preserving its sparsity. Moreover, this density threshold can be determined a-priori, since the number of connections to filter only depends on the network size according to a power-law. We validate this result on several brain networks, from micro- to macro-scales, obtained with different imaging modalities. Finally, we test the potential of ECO in discriminating brain states with respect to alternative filtering methods. ECO advances our ability to analyze and compare biological networks, inferred from experimental data, in a fast and principled way. PMID:28076353
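A rough sketch of the efficiency-cost trade-off described above: threshold a dense weighted network at increasing densities and keep the density maximizing J = (global efficiency + local efficiency) / density. This follows the stated criterion in spirit and is not the authors' implementation; the network here is random, not brain data.

```python
# Sweep connection densities (keeping strongest weights first) and select the
# one maximizing the efficiency/cost trade-off J.
import itertools
import networkx as nx
import numpy as np

rng = np.random.default_rng(0)
n = 60
W = rng.random((n, n)); W = (W + W.T) / 2; np.fill_diagonal(W, 0)   # dense weights

edges = sorted(((W[i, j], i, j) for i, j in itertools.combinations(range(n), 2)),
               reverse=True)                       # strongest connections first
best = (-np.inf, None)
for k in range(n, len(edges), n):                  # sweep number of retained edges
    G = nx.Graph()
    G.add_nodes_from(range(n))
    G.add_edges_from((i, j) for _, i, j in edges[:k])
    density = nx.density(G)
    J = (nx.global_efficiency(G) + nx.local_efficiency(G)) / density
    if J > best[0]:
        best = (J, density)
print(f"selected density ≈ {best[1]:.3f} (J = {best[0]:.2f})")
```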
Chaos-based wireless communication resisting multipath effects.
Yao, Jun-Liang; Li, Chen; Ren, Hai-Peng; Grebogi, Celso
2017-09-01
In an additive white Gaussian noise channel, chaos has been shown to be the optimal coherent communication waveform in the sense of using a very simple matched filter to maximize the signal-to-noise ratio. Recently, the Lyapunov exponent spectrum of the chaotic signals after being transmitted through a wireless channel has been shown to be unaltered, paving the way for wireless communication using chaos. In wireless communication systems, inter-symbol interference caused by multipath propagation is one of the main obstacles to achieving a high bit transmission rate and a low bit-error rate (BER). How to resist the multipath effect is a fundamental problem in a chaos-based wireless communication system (CWCS). In this paper, a CWCS is built to transmit chaotic signals generated by a hybrid dynamical system and then to filter the received signals by using the corresponding matched filter to decrease the noise effect and to detect the binary information. We find that the multipath effect can be effectively resisted by regrouping the return map of the received signal and by setting the corresponding threshold based on the available information. We show that the optimal threshold is a function of the channel parameters and of the information symbols. Practically, the channel parameters are time-variant, and the future information symbols are unavailable. In this case, a suboptimal threshold is proposed, and the BER using the suboptimal threshold is derived analytically. Simulation results show that the CWCS achieves a remarkable competitive performance even under inaccurate channel parameters.
Chaos-based wireless communication resisting multipath effects
NASA Astrophysics Data System (ADS)
Yao, Jun-Liang; Li, Chen; Ren, Hai-Peng; Grebogi, Celso
2017-09-01
In an additive white Gaussian noise channel, chaos has been shown to be the optimal coherent communication waveform in the sense of using a very simple matched filter to maximize the signal-to-noise ratio. Recently, the Lyapunov exponent spectrum of the chaotic signals after being transmitted through a wireless channel has been shown to be unaltered, paving the way for wireless communication using chaos. In wireless communication systems, inter-symbol interference caused by multipath propagation is one of the main obstacles to achieving a high bit transmission rate and a low bit-error rate (BER). How to resist the multipath effect is a fundamental problem in a chaos-based wireless communication system (CWCS). In this paper, a CWCS is built to transmit chaotic signals generated by a hybrid dynamical system and then to filter the received signals by using the corresponding matched filter to decrease the noise effect and to detect the binary information. We find that the multipath effect can be effectively resisted by regrouping the return map of the received signal and by setting the corresponding threshold based on the available information. We show that the optimal threshold is a function of the channel parameters and of the information symbols. Practically, the channel parameters are time-variant, and the future information symbols are unavailable. In this case, a suboptimal threshold is proposed, and the BER using the suboptimal threshold is derived analytically. Simulation results show that the CWCS achieves a remarkable competitive performance even under inaccurate channel parameters.
Optimal Network Modularity for Information Diffusion
NASA Astrophysics Data System (ADS)
Nematzadeh, Azadeh; Ferrara, Emilio; Flammini, Alessandro; Ahn, Yong-Yeol
2014-08-01
We investigate the impact of community structure on information diffusion with the linear threshold model. Our results demonstrate that modular structure may have counterintuitive effects on information diffusion when social reinforcement is present. We show that strong communities can facilitate global diffusion by enhancing local, intracommunity spreading. Using both analytic approaches and numerical simulations, we demonstrate the existence of an optimal network modularity, where global diffusion requires the minimal number of early adopters.
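A small simulation sketch of the linear threshold model on a two-community network, varying how many links run between communities; all parameter values are illustrative, not those used in the paper.

```python
# Linear threshold model on a stochastic block model with two communities:
# a node adopts once the fraction of adopting neighbors reaches theta,
# seeds start in community 0, and we vary the inter-community link fraction mu.
import numpy as np
import networkx as nx

def adopters(mu, n=1000, k=10, theta=0.25, seed_frac=0.3, rng=None):
    rng = rng or np.random.default_rng(0)
    p_in = k * (1 - mu) / (n / 2)             # within-community link probability
    p_out = k * mu / (n / 2)                  # between-community link probability
    G = nx.stochastic_block_model([n // 2, n // 2],
                                  [[p_in, p_out], [p_out, p_in]], seed=0)
    active = set(rng.choice(n // 2, int(seed_frac * n // 2), replace=False))
    changed = True
    while changed:                            # iterate threshold updates to a fixed point
        changed = False
        for v in G:
            if v in active:
                continue
            nbrs = list(G[v])
            if nbrs and sum(u in active for u in nbrs) / len(nbrs) >= theta:
                active.add(v)
                changed = True
    return len(active) / n

for mu in (0.05, 0.15, 0.30, 0.50):
    print(f"inter-community link fraction {mu:.2f}: "
          f"final adopter fraction = {adopters(mu):.2f}")
```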
Optimization of a hardware implementation for pulse coupled neural networks for image applications
NASA Astrophysics Data System (ADS)
Gimeno Sarciada, Jesús; Lamela Rivera, Horacio; Warde, Cardinal
2010-04-01
Pulse Coupled Neural Networks (PCNNs) are a very useful tool for image processing and visual applications, since they have the advantage of being invariant to image changes such as rotation, scale, or certain distortions. Among other characteristics, the PCNN changes a given image input into a temporal representation which can easily be analyzed later for pattern recognition. The structure of a PCNN, though, makes it necessary to determine all of its parameters very carefully in order for it to function optimally, so that the responses to the kinds of inputs it will be subjected to are clearly discriminated, allowing for easy and fast post-processing that yields useful results. This tweaking of the system is a taxing process. In this paper we analyze and compare two methods for modeling PCNNs. A purely mathematical model is programmed, and a similar circuital model is also designed. Both are then used to determine the optimal values of the several parameters of a PCNN: gain, threshold, time constants for feed-in and threshold, and linking, leading to an optimal design for image recognition. The results are compared for usefulness, accuracy, and speed, as well as the performance and time requirements for fast and easy design, thus providing a tool for future ease of management of a PCNN for different tasks.
Binarization of Gray-Scaled Digital Images Via Fuzzy Reasoning
NASA Technical Reports Server (NTRS)
Dominquez, Jesus A.; Klinko, Steve; Voska, Ned (Technical Monitor)
2002-01-01
A new fast-computational technique based on fuzzy entropy measure has been developed to find an optimal binary image threshold. In this method, the image pixel membership functions are dependent on the threshold value and reflect the distribution of pixel values in two classes; thus, this technique minimizes the classification error. This new method is compared with two of the best-known threshold selection techniques, Otsu and Huang-Wang. The performance of the proposed method supersedes the performance of Huang-Wang and Otsu methods when the image consists of textured background and poor printing quality. The three methods perform well but yield different binarization approaches if the background and foreground of the image have well-separated gray-level ranges.
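A compact sketch of fuzzy-entropy threshold selection in the spirit described above: pixel memberships reflect distance to the class mean, and the threshold minimizing total fuzzy entropy is chosen. This is an illustration, not the authors' exact formulation.

```python
# Fuzzy-entropy binarization threshold: for each candidate threshold, assign
# memberships based on distance to the class mean and keep the threshold with
# the lowest total Shannon fuzzy entropy (i.e. the clearest class memberships).
import numpy as np

def fuzzy_entropy_threshold(image):
    x = image.ravel().astype(float)
    C = x.max() - x.min()                       # normalizing constant
    best_t, best_H = None, np.inf
    for t in np.unique(x)[:-1]:                 # candidate thresholds
        lo, hi = x[x <= t], x[x > t]
        mu = np.where(x <= t, lo.mean(), hi.mean())
        u = 1.0 / (1.0 + np.abs(x - mu) / C)    # membership in [0.5, 1]
        u = np.clip(u, 1e-12, 1 - 1e-12)
        H = -(u * np.log(u) + (1 - u) * np.log(1 - u)).sum()
        if H < best_H:
            best_t, best_H = t, H
    return best_t

rng = np.random.default_rng(0)
img = np.concatenate([rng.normal(60, 12, 600), rng.normal(170, 20, 400)])
img = np.clip(img, 0, 255).astype(np.uint8)     # synthetic bimodal "image"
print("selected threshold:", fuzzy_entropy_threshold(img))
```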
Binarization of Gray-Scaled Digital Images Via Fuzzy Reasoning
NASA Technical Reports Server (NTRS)
Dominquez, Jesus A.; Klinko, Steve; Voska, Ned (Technical Monitor)
2002-01-01
A new fast-computational technique based on fuzzy entropy measure has been developed to find an optimal binary image threshold. In this method, the image pixel membership functions are dependent on the threshold value and reflect the distribution of pixel values in two classes; thus, this technique minimizes the classification error. This new method is compared with two of the best-known threshold selection techniques, Otsu and Huang-Wang. The performance of the proposed method supersedes the performance of Huang-Wang and Otsu methods when the image consists of textured background and poor printing quality. The three methods perform well but yield different binarization approaches if the background and foreground of the image have well-separated gray-level ranges.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mendell, Mark J.; Fisk, William J.
Background - The goal of this project, with a focus on commercial buildings in California, was to develop a new framework for evidence-based minimum ventilation rate (MVR) standards that protect occupants in buildings while also considering energy use and cost. This was motivated by research findings suggesting that current prescriptive MVRs in commercial buildings do not provide occupants with fully safe and satisfactory indoor environments. Methods - The project began with a broad review in several areas: the diverse strategies now used for standards or guidelines for MVRs or for environmental contaminant exposures, current knowledge about adverse human effects associated with VRs, and current knowledge about contaminants in commercial buildings, including their presence, their adverse human effects, and their relationships with VRs. Based on a synthesis of the reviewed information, new principles and approaches are proposed for setting evidence-based VR standards for commercial buildings, considering a range of human effects including health, performance, and acceptability of air. Results - A review and evaluation is first presented of current approaches to setting prescriptive building ventilation standards and setting acceptable limits for human contaminant exposures in outdoor air and occupational settings. Recent research on approaches to setting acceptable levels of environmental exposures in evidence-based MVR standards is also described. From a synthesis and critique of these materials, a set of principles for setting MVRs is presented, along with an example approach based on these principles. The approach combines two sequential strategies. In a first step, an acceptable threshold is set for each adverse outcome that has a demonstrated relationship to VRs, as an increase from a (low) outcome level at a high reference ventilation rate (RVR, the VR needed to attain the best achievable levels of the adverse outcome); MVRs required to meet each specific outcome threshold are estimated; and the highest of these MVRs, which would then meet all outcome thresholds, is selected as the target MVR. In a second step, implemented only if the target MVR from step 1 is judged impractically high, costs and benefits are estimated and this information is used in a risk management process. Four human outcomes with substantial quantitative evidence of relationships to VRs are identified for initial consideration in setting MVR standards. These are: building-related symptoms (sometimes called sick building syndrome symptoms), poor perceived indoor air quality, and diminished work performance, all with data relating them directly to VRs; and cancer and non-cancer chronic outcomes, related indirectly to VRs through specific VR-influenced indoor contaminants. In an application of step 1 for offices using a set of example outcome thresholds, a target MVR of 9 L/s (19 cfm) per person was needed. Because this target MVR was close to MVRs in current standards, use of a cost/benefit process seemed unnecessary. Selection of more stringent thresholds for one or more human outcomes, however, could raise the target MVR to 14 L/s (30 cfm) per person or higher, triggering the step 2 risk management process. Consideration of outdoor air pollutant effects would add further complexity to the framework.
For balancing the objective and subjective factors involved in setting MVRs in a cost-benefit process, it is suggested that a diverse group of stakeholders make the determination after assembling as much quantitative data as possible.
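Step 1 of the framework can be sketched as follows: for each outcome with a demonstrated VR relationship, find the lowest VR that keeps the outcome within its acceptability threshold, then take the most demanding of these as the target MVR. The outcome curves and thresholds below are made-up placeholders, not the project's models.

```python
# Step-1 sketch: per-outcome required ventilation rates, then the target MVR
# as the maximum (i.e. the rate that satisfies every outcome threshold).
import numpy as np

vr = np.linspace(2, 30, 281)                      # candidate VRs [L/s per person]

# assumed monotone outcome models: excess outcome level vs. VR, relative to a
# high reference ventilation rate; (curve, acceptability threshold) pairs
outcomes = {
    "building-related symptoms":     (30 * np.exp(-vr / 8), 10.0),
    "dissatisfied with air quality": (45 * np.exp(-vr / 10), 15.0),
    "work performance loss":         (4 * np.exp(-vr / 12), 1.0),
}

required = {}
for name, (excess, limit) in outcomes.items():
    ok = vr[excess <= limit]                      # VRs meeting this outcome's threshold
    required[name] = ok[0] if ok.size else np.inf
    print(f"{name}: requires >= {required[name]:.1f} L/s per person")

print(f"target MVR = {max(required.values()):.1f} L/s per person")
```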
The risk of radiation exposure to the eyes of the interventional pain physician.
Fish, David E; Kim, Andrew; Ornelas, Christopher; Song, Sungchan; Pangarkar, Sanjog
2011-01-01
It is widely accepted that the use of medical imaging continues to grow across the globe as does the concern for radiation safety. The danger of lens opacities and cataract formation related to radiation exposure is well documented in the medical literature. However, there continues to be controversy regarding actual dose thresholds of radiation exposure and whether these thresholds are still relevant to cataract formation. Eye safety and the risk involved for the interventional pain physician is not entirely clear. Given the available literature on measured radiation exposure to the interventionist, and the controversy regarding dose thresholds, it is our current recommendation that the interventional pain physician use shielded eyewear. As the breadth of interventional procedures continues to grow, so does the radiation risk to the interventional pain physician. In this paper, we attempt to outline the risk of cataract formation in the scope of practice of an interventional pain physician and describe techniques that may help reduce them.
The Risk of Radiation Exposure to the Eyes of the Interventional Pain Physician
Fish, David E.; Kim, Andrew; Ornelas, Christopher; Song, Sungchan; Pangarkar, Sanjog
2011-01-01
It is widely accepted that the use of medical imaging continues to grow across the globe as does the concern for radiation safety. The danger of lens opacities and cataract formation related to radiation exposure is well documented in the medical literature. However, there continues to be controversy regarding actual dose thresholds of radiation exposure and whether these thresholds are still relevant to cataract formation. Eye safety and the risk involved for the interventional pain physician is not entirely clear. Given the available literature on measured radiation exposure to the interventionist, and the controversy regarding dose thresholds, it is our current recommendation that the interventional pain physician use shielded eyewear. As the breadth of interventional procedures continues to grow, so does the radiation risk to the interventional pain physician. In this paper, we attempt to outline the risk of cataract formation in the scope of practice of an interventional pain physician and describe techniques that may help reduce them. PMID:22091381
Schumann, Karina; Ross, Michael
2010-11-01
Despite wide acceptance of the stereotype that women apologize more readily than men, there is little systematic evidence to support this stereotype or its supposed bases (e.g., men's fragile egos). We designed two studies to examine whether gender differences in apology behavior exist and, if so, why. In Study 1, participants reported in daily diaries all offenses they committed or experienced and whether an apology had been offered. Women reported offering more apologies than men, but they also reported committing more offenses. There was no gender difference in the proportion of offenses that prompted apologies. This finding suggests that men apologize less frequently than women because they have a higher threshold for what constitutes offensive behavior. In Study 2, we tested this threshold hypothesis by asking participants to evaluate both imaginary and recalled offenses. As predicted, men rated the offenses as less severe than women did. These different ratings of severity predicted both judgments of whether an apology was deserved and actual apology behavior.
Decision Models for Determining the Optimal Life Test Sampling Plans
NASA Astrophysics Data System (ADS)
Nechval, Nicholas A.; Nechval, Konstantin N.; Purgailis, Maris; Berzins, Gundars; Strelchonok, Vladimir F.
2010-11-01
A life test sampling plan is a technique that consists of sampling, inspection, and decision making in determining the acceptance or rejection of a batch of products by experiments for examining the continuous usage time of the products. In life testing studies, the lifetime is usually assumed to be distributed as either a one-parameter exponential distribution, or a two-parameter Weibull distribution with the assumption that the shape parameter is known. Such oversimplified assumptions can facilitate the follow-up analyses, but may overlook the fact that the lifetime distribution can significantly affect the estimation of the failure rate of a product. Moreover, sampling costs, inspection costs, warranty costs, and rejection costs are all essential, and ought to be considered in choosing an appropriate sampling plan. The choice of an appropriate life test sampling plan is a crucial decision problem because a good plan not only can help producers save testing time and reduce testing cost, but can also positively affect the image of the product, and thus attract more consumers to buy it. This paper develops frequentist (non-Bayesian) decision models for determining the optimal life test sampling plans with an aim of cost minimization by identifying the appropriate number of product failures in a sample that should be used as a threshold in judging the rejection of a batch. The two-parameter exponential and Weibull distributions with two unknown parameters are assumed to be appropriate for modelling the lifetime of a product. A practical numerical application is employed to demonstrate the proposed approach.
Development of an acceptable factor to estimate chronic end points from acute toxicity data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Venman, B.C.; Flaga, C.
1985-12-01
Acceptable daily intake (ADI) values are routinely developed for threshold toxicants from NOAELs determined from human or animal chronic or subchronic data. These NOAELs are then divided by appropriate uncertainty factors ranging from 10 to 1000 depending on the quality of the data. However, for the vast majority of chemicals used industrially, adequate toxicity data needed to use this process are not available. Thus, a procedure to estimate a chronic toxicity endpoint from acute toxicity data, such as an oral rat LD50, becomes necessary. An acute-to-chronic application factor of 0.0001 was developed, which when multiplied by an oral LD50 for an individual chemical, yields a surrogate chronic NOAEL. This figure can then be used to estimate an acceptable daily exposure for humans. The process used to estimate this application factor is detailed.
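A worked example of the application factor described above; all numbers are illustrative, including the choice of uncertainty factor.

```python
# Scale an oral LD50 by the acute-to-chronic application factor to obtain a
# surrogate chronic NOAEL, then divide by an uncertainty factor for an ADI.
ld50 = 1500.0              # oral rat LD50 [mg/kg body weight] (illustrative)
acf = 0.0001               # acute-to-chronic application factor
uncertainty_factor = 100   # assumed, depending on data quality

surrogate_noael = ld50 * acf                 # [mg/kg/day]
adi = surrogate_noael / uncertainty_factor   # [mg/kg/day]
print(f"surrogate NOAEL = {surrogate_noael:.3f} mg/kg/day")
print(f"estimated ADI   = {adi:.5f} mg/kg/day")
```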
Timko, C. Alix; Zucker, Nancy L.; Herbert, James D.; Rodriguez, Daniel; Merwin, Rhonda M.
2016-01-01
Family-based treatments have the most empirical support in the treatment of adolescent anorexia nervosa; yet, a significant percentage of adolescents and their families do not respond to manualized family-based treatment (FBT). The aim of this open trial was to conduct a preliminary evaluation of an innovative family-based approach to the treatment of anorexia: Acceptance-based Separated Family Treatment (ASFT). Treatment was grounded in Acceptance and Commitment Therapy (ACT), delivered in a separated format, and included an ACT-informed skills program. Adolescents (ages 12–18) with anorexia or sub-threshold anorexia and their families received 20 treatment sessions over 24 weeks. Outcome indices included eating disorder symptomatology reported by the parent and adolescent, percentage of expected body weight achieved, and changes in psychological acceptance/avoidance. Half of the adolescents (48.0%) met criteria for full remission at the end of treatment, 29.8% met criteria for partial remission, and 21.3% did not improve. Overall, adolescents had a significant reduction in eating disorder symptoms and reached expected body weight. Treatment resulted in changes in psychological acceptance in the expected direction for both parents and adolescents. This open trial provides preliminary evidence for the feasibility, acceptability, and efficacy of ASFT for adolescents with anorexia. Directions for future research are discussed. PMID:25898341
Anaerobic threshold, is it a magic number to determine fitness for surgery?
2013-01-01
The use of cardiopulmonary exercise testing (CPET) to evaluate cardiac and respiratory function was pioneered as part of preoperative assessment in the mid 1990s. Surgical procedures have changed since then. The patient population may have aged; however, the physiology has remained the same. The use of an accurate physiological evaluation remains as germane today as it was then. Certainly no ‘magic’ is involved. The author recognizes that not everyone accepts the classical theories of the anaerobic threshold (AT) and that there is some discussion around lactate and exercise. The article looks at aerobic capacity as an important predictor of perioperative mortality and also looks at some aspects of CPET relative to surgical risk evaluation. PMID:24472514
Ver Elst, K; Vermeiren, S; Schouwers, S; Callebaut, V; Thomson, W; Weekx, S
2013-12-01
CLSI recommends a minimal citrate tube fill volume of 90%. A validation protocol with clinical and analytical components was set up to determine the tube fill threshold for international normalized ratio of prothrombin time (PT-INR), activated partial thromboplastin time (aPTT) and fibrinogen. Citrated coagulation samples from 16 healthy donors and eight patients receiving vitamin K antagonists (VKA) were evaluated. Eighty-nine tubes were filled to varying volumes of >50%. Coagulation tests were performed on the ACL TOP 500 CTS®. A Receiver Operating Characteristic (ROC) plot, with total error (TE) and critical difference (CD) as possible acceptance criteria, was used to determine the fill threshold. ROC was the most accurate, with CD for PT-INR and TE for aPTT resulting in thresholds of 63% for PT and 80% for aPTT. By adapted ROC, based on threshold setting at a point of 100% sensitivity at a maximum specificity, CD was best for PT and TE for aPTT, resulting in thresholds of 73% for PT and 90% for aPTT. For fibrinogen, the method was only valid with the TE criterion at a 63% fill volume. In our study, we validated minimal citrate tube fill volumes of 73%, 90% and 63% for PT-INR, aPTT and fibrinogen, respectively. © 2013 John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Vujović, D.; Paskota, M.; Todorović, N.; Vučković, V.
2015-07-01
The pre-convective atmosphere over Serbia during the ten-year period (2001-2010) was investigated using radiosonde data from one meteorological station and thunderstorm observations from thirteen SYNOP meteorological stations. In order to verify their ability to forecast a thunderstorm, several stability indices were examined. Rank sum scores (RSSs) were used to segregate indices and parameters which can differentiate between thunderstorm and no-thunderstorm events. The following indices had the best RSS values: Lifted index (LI), K index (KI), Showalter index (SI), Boyden index (BI), Total totals (TT), dew-point temperature and mixing ratio. A threshold value test was used to determine the appropriate threshold values for these variables, and the threshold with the best skill scores was chosen as optimal. The thresholds were validated in two ways: through a control data set, and by comparing the calculated index thresholds with the values of the indices for a randomly chosen day with an observed thunderstorm. The index with the highest skill for thunderstorm forecasting was LI, followed by SI, KI and TT. The BI had the poorest skill scores.
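The threshold-selection step can be sketched as a sweep over candidate index values scored by a skill measure (here the true skill statistic); the index values below are synthetic, not the study's soundings.

```python
# Sweep candidate thresholds of a stability index over labelled thunderstorm /
# no-thunderstorm days and keep the threshold with the best true skill statistic.
import numpy as np

rng = np.random.default_rng(2)
storm = np.r_[np.ones(300, dtype=bool), np.zeros(700, dtype=bool)]
k_index = np.where(storm, rng.normal(30, 5, 1000), rng.normal(20, 6, 1000))

best = (-np.inf, None)
for t in np.arange(15, 36):
    forecast = k_index >= t
    hits = (forecast & storm).sum()
    misses = (~forecast & storm).sum()
    false_alarms = (forecast & ~storm).sum()
    correct_negatives = (~forecast & ~storm).sum()
    tss = hits / (hits + misses) - false_alarms / (false_alarms + correct_negatives)
    if tss > best[0]:
        best = (tss, t)
print(f"optimal K-index threshold ≈ {best[1]} (TSS = {best[0]:.2f})")
```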
Numerical investigation of the inertial cavitation threshold under multi-frequency ultrasound.
Suo, Dingjie; Govind, Bala; Zhang, Shengqi; Jing, Yun
2018-03-01
Through the introduction of multi-frequency sonication in High Intensity Focused Ultrasound (HIFU), enhancement of efficiency has been noted in several applications including thrombolysis, tissue ablation, sonochemistry, and sonoluminescence. One key experimental observation is that multi-frequency ultrasound can help lower the inertial cavitation threshold, thereby improving the power efficiency. However, this has not been well corroborated by theory. In this paper, a numerical investigation of the inertial cavitation threshold of microbubbles (MBs) under multi-frequency ultrasound irradiation is conducted. The relationships between the cavitation threshold and MB size at various frequencies and in different media are investigated. The results of single-, dual- and triple-frequency sonication show that introducing additional frequencies reduces the inertial cavitation threshold, which is consistent with previous experimental work. In addition, no significant difference is observed between dual-frequency sonication with various frequency differences. This study not only reaffirms the benefit of using multi-frequency ultrasound for various applications, but also provides a possible route for optimizing ultrasound excitations for initiating inertial cavitation. Copyright © 2017 Elsevier B.V. All rights reserved.
High efficiency low threshold current 1.3 μm InAs quantum dot lasers on on-axis (001) GaP/Si
NASA Astrophysics Data System (ADS)
Jung, Daehwan; Norman, Justin; Kennedy, M. J.; Shang, Chen; Shin, Bongki; Wan, Yating; Gossard, Arthur C.; Bowers, John E.
2017-09-01
We demonstrate highly efficient, low threshold InAs quantum dot lasers epitaxially grown on on-axis (001) GaP/Si substrates using molecular beam epitaxy. Electron channeling contrast imaging measurements show a threading dislocation density of 7.3 × 10⁶ cm⁻² from an optimized GaAs template grown on GaP/Si. The high-quality GaAs templates enable as-cleaved quantum dot lasers to achieve a room-temperature continuous-wave (CW) threshold current of 9.5 mA, a threshold current density as low as 132 A/cm², a single-side output power of 175 mW, and a wall-plug efficiency of 38.4% at room temperature. As-cleaved QD lasers show ground-state CW lasing up to 80 °C. The application of a 95% high-reflectivity coating on one laser facet results in a CW threshold current of 6.7 mA, which is a record-low value for any kind of Fabry-Perot laser grown on Si.
Pellacci, Benedetta; Verzini, Gianmaria
2018-05-01
We study the positive principal eigenvalue of a weighted problem associated with the Neumann spectral fractional Laplacian. This analysis is related to the investigation of the survival threshold in population dynamics. Our main result concerns the optimization of such a threshold with respect to the fractional order s ∈ (0, 1], the case s = 1 corresponding to the standard Neumann Laplacian: when the habitat is not too fragmented, the principal positive eigenvalue cannot have local minima for 0 < s < 1. As a consequence, the best strategy for survival is either following the diffusion with s = 1 (i.e. Brownian diffusion), or with the lowest possible s (i.e. diffusion allowing long jumps), depending on the size of the domain. In addition, we show that analogous results hold for the standard fractional Laplacian in [Formula: see text], in periodic environments.
Noise-induced escape in an excitable system
NASA Astrophysics Data System (ADS)
Khovanov, I. A.; Polovinkin, A. V.; Luchinsky, D. G.; McClintock, P. V. E.
2013-03-01
We consider the stochastic dynamics of escape in an excitable system, the FitzHugh-Nagumo (FHN) neuronal model, for different classes of excitability. We discuss, first, the threshold structure of the FHN model as an example of a system without a saddle state. We then develop a nonlinear (nonlocal) stability approach based on the theory of large fluctuations, including a finite-noise correction, to describe noise-induced escape in the excitable regime. We show that the threshold structure is revealed via patterns of most probable (optimal) fluctuational paths. The approach allows us to estimate the escape rate and the exit location distribution. We compare the responses of a monostable resonator and monostable integrator to stochastic input signals and to a mixture of periodic and stochastic stimuli. Unlike the commonly used local analysis of the stable state, our nonlocal approach based on optimal paths yields results that are in good agreement with direct numerical simulations of the Langevin equation.
NASA Astrophysics Data System (ADS)
Han, Xifeng; Zhou, Wen
2018-03-01
Optical vector radio-frequency (RF) signal generation based on optical carrier suppression (OCS) in one Mach-Zehnder modulator (MZM) can realize frequency doubling. In order to match the phase or amplitude of the recovered quadrature amplitude modulation (QAM) signal, phase or amplitude pre-coding is necessary at the transmitter side. The detected QAM signals usually have a non-uniform phase distribution after square-law detection at the photodiode because of the imperfect characteristics of the optical and electrical devices. We propose to use an optimal decision threshold adapted to this non-uniform phase distribution to reduce the bit error rate (BER). By employing this scheme, the BER of a 16 Gbaud (32 Gbit/s) quadrature-phase-shift-keying (QPSK) millimeter-wave signal at 36 GHz is improved from 1 × 10⁻³ to 1 × 10⁻⁴ at -4.6 dBm input power into the photodiode.
Abu Dayyeh, Barham K; Kumar, Nitin; Edmundowicz, Steven A; Jonnalagadda, Sreenivasa; Larsen, Michael; Sullivan, Shelby; Thompson, Christopher C; Banerjee, Subhas
2015-09-01
The increasing global burden of obesity and its associated comorbidities has created an urgent need for additional treatment options to fight this pandemic. Endoscopic bariatric therapies (EBTs) provide an effective and minimally invasive treatment approach to obesity that would increase treatment options beyond surgery, medications, and lifestyle measures. This systematic review and meta-analysis were performed by the American Society for Gastrointestinal Endoscopy (ASGE) Bariatric Endoscopy Task Force comprising experts in the subject area and the ASGE Technology Committee Chair to specifically assess whether acceptable performance thresholds outlined by an ASGE Preservation and Incorporation of Valuable endoscopic Innovations (PIVI) document for clinical adoption of available EBTs have been met. After conducting a comprehensive search of several English-language databases, we performed direct meta-analyses by using random-effects models to assess whether the Orbera intragastric balloon (IGB) (Apollo Endosurgery, Austin, Tex) and the EndoBarrier duodenal-jejunal bypass sleeve (DJBS) (GI Dynamics, Lexington, Mass) have met the PIVI thresholds. The meta-analyses results indicate that the Orbera IGB meets the PIVI thresholds for both primary and nonprimary bridge obesity therapy. Based on a meta-analysis of 17 studies including 1683 patients, the percentage of excess weight loss (%EWL) with the Orbera IGB at 12 months was 25.44% (95% confidence interval [CI], 21.47%-29.41%) (random model) with a mean difference in %EWL over controls of 26.9% (95% CI, 15.66%-38.24%; P ≤ .01) in 3 randomized, controlled trials. Furthermore, the pooled percentage of total body weight loss (% TBWL) after Orbera IGB implantation was 12.3% (95% CI, 7.9%–16.73%), 13.16% (95% CI, 12.37%–13.95%), and 11.27% (95% CI, 8.17%–14.36%) at 3, 6, and 12 months after implantation, respectively, thus exceeding the PIVI threshold of 5% TBWL for nonprimary (bridge) obesity therapy. With the data available, the DJBS liner does appear to meet the %EWL PIVI threshold at 12 months, resulting in 35% EWL (95% CI, 24%-46%) but does not meet the 15% EWL over control required by the PIVI. We await review of the pivotal trial data on the efficacy and safety of this device. Data are insufficient to evaluate PIVI thresholds for any other EBT at this time. Both evaluated EBTs had ≤5% incidence of serious adverse events as set by the PIVI document to indicate acceptable safety profiles. Our task force consequently recognizes the Orbera IGB for meeting the PIVI criteria for the management of obesity. As additional data from the other EBTs become available, we will update our recommendations accordingly.
Anadón, A; Martínez-Larrañaga, M R; Martínez, M A
2001-10-01
Authorization of plant protection products/agrochemicals/pesticides in the European Union is done on the basis of their toxicological properties. This paper reviews the current legislation for placing an agrochemical on the market (i.e., a new or an existing active substance), and the toxicology studies needed for inclusion of a substance in any of the annexes of Council Directive 91/414/EEC of the European Economic Community. Risk analysis and its steps are discussed. The "threshold toxicity" values employed to allow risk characterisation of plant protection products are described, such as the acceptable daily intake, acceptable operator exposure level, acute reference dose, and maximum admissible concentration in water.
Nault, Brian A; Huseth, Anders S
2016-08-01
Onion thrips, Thrips tabaci Lindeman (Thysanoptera: Thripidae), is a highly destructive pest of onion, Allium cepa L., and its management relies on multiple applications of foliar insecticides. Development of insecticide resistance is common in T. tabaci populations, and new strategies are needed to relax existing levels of insecticide use, but still provide protection against T. tabaci without compromising marketable onion yield. An action threshold-based insecticide program combined with or without a thrips-resistant onion cultivar was investigated as an improved approach for managing T. tabaci infestations in commercial onion fields. Regardless of cultivar type, the average number of insecticide applications needed to manage T. tabaci infestations in the action-threshold based program was 4.3, while the average number of sprays in the standard weekly program was 7.2 (a 40% reduction). The mean percent reduction in numbers of applications following the action threshold treatment in the thrips-resistant onion cultivar, 'Advantage', was 46.7% (range 40-50%) compared with the standard program, whereas the percentage reduction in applications in action threshold treatments in the thrips-susceptible onion cultivar, 'Santana', was 34.3% (range 13-50%) compared with the standard program, suggesting a benefit of the thrips-resistant cultivar. Marketable bulb yields for both 'Advantage' and 'Santana' in the action threshold-based program were nearly identical to those in the standard program, indicating that commercially acceptable bulb yields will be generated with fewer insecticide sprays following an action threshold-based program, saving money, time and benefiting the environment. © The Authors 2016. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
DOT National Transportation Integrated Search
2015-06-01
This Technical Report on Prototype Intelligent Network Flow Optimization (INFLO) Dynamic Speed Harmonization and Queue Warning is the final report for the project. It describes the prototyping, acceptance testing and small-scale demonstration of the ...
Hybrid Nested Partitions and Math Programming Framework for Large-scale Combinatorial Optimization
2010-03-31
optimization problems: 1) exact algorithms and 2) metaheuristic algorithms. This project will integrate concepts from these two technologies to develop... optimal solutions within an acceptable amount of computation time, and 2) metaheuristic algorithms such as genetic algorithms, tabu search, and the... integer programming decomposition approaches, such as Dantzig-Wolfe decomposition and Lagrangian relaxation, and metaheuristics such as the Nested
Optimal control of photoelectron emission by realistic waveforms
NASA Astrophysics Data System (ADS)
Solanpää, J.; Ciappina, M. F.; Räsänen, E.
2017-09-01
Recent experimental techniques in multicolor waveform synthesis allow the temporal shaping of strong femtosecond laser pulses with applications in the control of quantum mechanical processes in atoms, molecules, and nanostructures. Prediction of the shapes of the optimal waveforms can be done computationally using quantum optimal control theory. In this work we demonstrate the control of above-threshold photoemission of a one-dimensional hydrogen model with pulses feasible for experimental waveform synthesis. By mixing different spectral channels and thus lowering the intensity requirements for individual channels, the resulting optimal pulses can extend the cutoff energies by at least up to 50% and bring up the electron yield by several orders of magnitude. Insights into the electron dynamics for optimized photoelectron emission are obtained with a semiclassical two-step model.