The limits of thresholds: silica and the politics of science, 1935 to 1990.
Markowitz, G; Rosner, D
1995-01-01
Since the 1930s threshold limit values have been presented as an objectively established measure of US industrial safety. However, there have been important questions raised regarding the adequacy of these thresholds for protecting workers from silicosis. This paper explores the historical debates over silica threshold limit values and the intense political negotiation that accompanied their establishment. In the 1930s and early 1940s, a coalition of business, public health, insurance, and political interests formed in response to a widely perceived "silicosis crisis." Part of the resulting program aimed at containing the crisis was the establishment of threshold limit values. Yet silicosis cases continued to be documented. By the 1960s these cases had become the basis for a number of revisions to the thresholds. In the 1970s, following a National Institute for Occupational Safety and Health recommendation to lower the threshold limit value for silica and to eliminate sand as an abrasive in blasting, industry fought attempts to make the existing values more stringent. This paper traces the process by which threshold limit values became part of a compromise between the health of workers and the economic interests of industry. PMID:7856788
Randomness fault detection system
NASA Technical Reports Server (NTRS)
Russell, B. Don (Inventor); Aucoin, B. Michael (Inventor); Benner, Carl L. (Inventor)
1996-01-01
A method and apparatus are provided for detecting a fault on a power line carrying a line parameter such as a load current. The apparatus monitors and analyzes the load current to obtain an energy value. The energy value is compared to a threshold value stored in a buffer. If the energy value is greater than the threshold value a counter is incremented. If the energy value is greater than a high value threshold or less than a low value threshold then a second counter is incremented. If the difference between two subsequent energy values is greater than a constant then a third counter is incremented. A fault signal is issued if the counter is greater than a counter limit value and either the second counter is greater than a second limit value or the third counter is greater than a third limit value.
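The counter logic described in the abstract can be sketched directly. The code below is an illustrative reconstruction, not the patent's implementation; all numeric limits are placeholder values:

```python
def detect_fault(energy_values,
                 threshold=1.0,       # basic energy threshold
                 high=3.0, low=0.2,   # high/low value thresholds
                 delta=1.5,           # max allowed jump between samples
                 counter_limit=5, second_limit=2, third_limit=2):
    """Return True if the three-counter logic signals a fault.

    Counter 1: energy above the basic threshold.
    Counter 2: energy above the high threshold or below the low threshold.
    Counter 3: jump between subsequent energy values above a constant.
    """
    counter = second = third = 0
    prev = None
    for e in energy_values:
        if e > threshold:
            counter += 1
        if e > high or e < low:
            second += 1
        if prev is not None and abs(e - prev) > delta:
            third += 1
        prev = e
    return counter > counter_limit and (second > second_limit or
                                        third > third_limit)
```

A sustained burst of high energy values trips the first two counters, while a quiet, steady signal never raises a fault.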
Code of Federal Regulations, 2012 CFR
2012-07-01
... 30 Mineral Resources 1 2012-07-01 2012-07-01 false Inhalation hazards; threshold limit values for... SURFACE WORK AREAS OF UNDERGROUND COAL MINES Airborne Contaminants § 71.700 Inhalation hazards; threshold... containing quartz, and asbestos dust) in excess of, on the basis of a time-weighted average, the threshold...
Health hazards of ultrafine metal and metal oxide powders
NASA Technical Reports Server (NTRS)
Boylen, G. W., Jr.; Chamberlin, R. I.; Viles, F. J.
1969-01-01
The study reveals that the suggested threshold limit values are two to fifty times lower than the currently recommended threshold limit values. The proposed safe limits of exposure to the ultrafine dusts are based on the known toxic potential of various materials as determined in particle size ranges.
[The new German general threshold limit value for dust--pro and contra the adoption in Austria].
Godnic-Cvar, Jasminka; Ponocny, Ivo
2004-01-01
Since it has been realised that inhalation of inert dust is one of the important confounding variables for the development of chronic bronchitis, the threshold values for occupational exposure to these dusts need to be further decreased. The German Commission for the Investigation of Health Hazards of Chemical Compounds in the Work Area (MAK-Commission) set a new threshold (MAK-Value) for inert dusts (4 mg/m3 for inhalable dust, 1.5 mg/m3 for respirable dust) in 1997. This value is much lower than the threshold values currently used world-wide. The aim of the present article is to assess the scientific plausibility of the methodology (databases and statistics) used to set these new German MAK-Values, with regard to their adoption in Austria. Although we believe that it is essential to lower the MAK-Value for inert dust in order to prevent the development of chronic bronchitis as a consequence of occupational exposure to inert dusts, the methodology applied by the German MAK-Commission in 1997 to set the new MAK-Values does not justify the reduction of the threshold limit value. A carefully designed study to establish an appropriate scientific basis for setting a new threshold value for inert dusts in the workplace should be carried out. Meanwhile, at least the currently internationally applied threshold values should be adopted in Austria.
Spirtas, R; Steinberg, M; Wands, R C; Weisburger, E K
1986-01-01
The Chemical Substances Threshold Limit Value Committee of the American Conference of Governmental Industrial Hygienists has refined its procedures for evaluating carcinogens. Types of epidemiologic and toxicologic evidence used are reviewed and a discussion is presented on how the Committee evaluates data on carcinogenicity. Although it has not been conclusively determined whether biological thresholds exist for all types of carcinogens, the Committee will continue to develop guidelines for permissible exposures to carcinogens. The Committee will continue to use the safety factor approach to setting Threshold Limit Values for carcinogens, despite its shortcomings. A compilation has been developed for lists of substances considered to be carcinogenic by several scientific groups. The Committee will use this information to help to identify and classify carcinogens for its evaluation. PMID:3752326
Code of Federal Regulations, 2014 CFR
2014-07-01
... gases, dust, fumes, mists, and vapors. 71.700 Section 71.700 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR COAL MINE SAFETY AND HEALTH MANDATORY HEALTH STANDARDS-SURFACE COAL MINES AND... limit values adopted by the American Conference of Governmental Industrial Hygienists in “Threshold...
Code of Federal Regulations, 2013 CFR
2013-07-01
... 30 Mineral Resources 1 2013-07-01 2013-07-01 false Inhalation hazards; threshold limit values for gases, dust, fumes, mists, and vapors. 71.700 Section 71.700 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR COAL MINE SAFETY AND HEALTH MANDATORY HEALTH STANDARDS-SURFACE COAL MINES AND SURFACE WORK AREAS OF UNDERGROUND...
Code of Federal Regulations, 2010 CFR
2010-07-01
... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Inhalation hazards; threshold limit values for gases, dust, fumes, mists, and vapors. 71.700 Section 71.700 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR COAL MINE SAFETY AND HEALTH MANDATORY HEALTH STANDARDS-SURFACE COAL MINES AND SURFACE WORK AREAS OF UNDERGROUND...
Code of Federal Regulations, 2011 CFR
2011-07-01
... 30 Mineral Resources 1 2011-07-01 2011-07-01 false Inhalation hazards; threshold limit values for gases, dust, fumes, mists, and vapors. 71.700 Section 71.700 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR COAL MINE SAFETY AND HEALTH MANDATORY HEALTH STANDARDS-SURFACE COAL MINES AND SURFACE WORK AREAS OF UNDERGROUND...
Devlin, Michelle; Painting, Suzanne; Best, Mike
2007-01-01
The EU Water Framework Directive recognises that ecological status is supported by the prevailing physico-chemical conditions in each water body. This paper describes an approach to providing guidance on setting thresholds for nutrients, taking account of the biological response to nutrient enrichment evident in different types of water. Indices of pressure, state and impact are used to achieve a robust nutrient (nitrogen) threshold by considering each individual index relative to a defined standard, scale or threshold. These indices include winter nitrogen concentrations relative to a predetermined reference value; the potential of the waterbody to support phytoplankton growth (estimated as primary production); and detection of an undesirable disturbance (measured as dissolved oxygen). Proposed reference values are based on a combination of historical records, offshore (limited human influence) nutrient concentrations, literature values and modelled data. Statistical confidence is based on a number of attributes, including distance of confidence limits away from a reference threshold and how well the model is populated with real data. This evidence-based approach ensures that nutrient thresholds are based on knowledge of real and measurable biological responses in transitional and coastal waters.
Novel Threshold Changeable Secret Sharing Schemes Based on Polynomial Interpolation
Yuan, Lifeng; Li, Mingchu; Guo, Cheng; Choo, Kim-Kwang Raymond; Ren, Yizhi
2016-01-01
After any distribution of secret sharing shadows in a threshold changeable secret sharing scheme, the threshold may need to be adjusted to deal with changes in the security policy and adversary structure. For example, when employees leave the organization, it is not realistic to expect departing employees to ensure the security of their secret shadows. Therefore, in 2012, Zhang et al. proposed (t → t′, n) and ({t1, t2,⋯, tN}, n) threshold changeable secret sharing schemes. However, their schemes suffer from a number of limitations such as strict limit on the threshold values, large storage space requirement for secret shadows, and significant computation for constructing and recovering polynomials. To address these limitations, we propose two improved dealer-free threshold changeable secret sharing schemes. In our schemes, we construct polynomials to update secret shadows, and use two-variable one-way function to resist collusion attacks and secure the information stored by the combiner. We then demonstrate our schemes can adjust the threshold safely. PMID:27792784
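The abstract does not specify the construction of the improved schemes, but threshold changeable schemes of this kind build on classical Shamir (t, n) secret sharing via polynomial interpolation, which can be sketched as follows (the prime modulus and parameters are illustrative choices, not the paper's):

```python
import random

P = 2**61 - 1  # Mersenne prime field modulus (illustrative choice)

def make_shares(secret, t, n):
    """Split `secret` into n shares; any t of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):  # evaluate the degree-(t-1) polynomial via Horner's rule
        y = 0
        for c in reversed(coeffs):
            y = (y * x + c) % P
        return y
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        # modular inverse of den via Fermat's little theorem
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret
```

Changing the threshold after distribution, without a dealer, is exactly what such baseline schemes cannot do cheaply, which motivates the constructions discussed in the paper.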
Kim, Sung-Jin; Yokokawa, Ryuji; Takayama, Shuichi
2012-12-03
This paper reveals a critical limitation in the electro-hydraulic analogy between a microfluidic membrane-valve (μMV) and an electronic transistor. Unlike typical transistors that have similar on and off threshold voltages, in hydraulic μMVs, the threshold pressures for opening and closing are significantly different and can change, even for the same μMVs depending on overall circuit design and operation conditions. We explain, in particular, how the negative values of the closing threshold pressures significantly constrain operation of even simple hydraulic μMV circuits such as autonomously switching two-valve microfluidic oscillators. These understandings have significant implications in designing self-regulated microfluidic devices.
Estimation of risks by chemicals produced during laser pyrolysis of tissues
NASA Astrophysics Data System (ADS)
Weber, Lothar W.; Spleiss, Martin
1995-01-01
The use of laser systems in minimally invasive surgery results in the formation of a laser aerosol containing volatile organic compounds of possible health risk. Based on the chemical substances identified to date, an overview of the possibly associated risks to human health is given. The class of identified alkylnitriles appears to be a laser-specific toxicological problem. Other groups of chemicals belong to the Maillard reaction type, the fatty acid pyrolysis type, or thermally activated chemolysis. The possible exposure ranges of the identified substances are discussed in relation to the available threshold limit values. A rough estimation yields an exposure range of less than 1/100 of the limit for almost all substances with established human threshold limit values, without regard to possible interactions. For most of the identified alkylnitriles, alkenes, and heterocycles, no threshold limit values have been set, as there has so far been no practical need for them. Pyrolysis of organs anaesthetized with isoflurane gave no indication of additional pyrolysis products arising from fragment interactions with the resulting VOCs. Measurements of the pyrolysis gases detected small amounts of NO, with additional NO2 formation under plasma conditions.
Laabs, V; Leake, C; Botham, P; Melching-Kollmuß, S
2015-10-01
Non-relevant metabolites are defined in the EU regulation for plant protection product authorization, and a detailed definition of non-relevant metabolites is given in an EU Commission DG Sanco (now DG SANTE - Health and Food Safety) guidance document. However, in water legislation at EU and member state level, non-relevant metabolites of pesticides are either not specifically regulated or diverse threshold values are applied. Based on their inherent properties, non-relevant metabolites should be regulated through substance-specific, toxicity-based limit values in drinking water and groundwater, like other anthropogenic chemicals. Yet, if a general limit value for non-relevant metabolites in drinking water and groundwater is favored, applying the Threshold of Toxicological Concern (TTC) concept for Cramer class III compounds leads to a threshold value of 4.5 μg L(-1). This general value is shown, by way of example, to be protective for non-relevant metabolites, based on individual drinking water limit values derived for a set of 56 non-relevant metabolites. A consistent definition of non-relevant metabolites of plant protection products, as well as their uniform regulation in drinking water and groundwater in the EU, is important to achieve legal clarity for all stakeholders and to establish planning security for the development of plant protection products for the European market.
Weeks, James L
2006-06-01
The Mine Safety and Health Administration (MSHA) proposes to issue citations for non-compliance with the exposure limit for respirable coal mine dust when measured exposure exceeds the exposure limit with a "high degree of confidence." This criterion threshold value (CTV) is derived from the sampling and analytical error of the measurement method. This policy is based on a combination of statistical and legal reasoning: the one-tailed 95% confidence limit of the sampling method, the apparent principle of due process and a standard of proof analogous to "beyond a reasonable doubt." This policy raises the effective exposure limit, it is contrary to the precautionary principle, it is not a fair sharing of the burden of uncertainty, and it employs an inappropriate standard of proof. Its own advisory committee and NIOSH have advised against this policy. For longwall mining sections, it results in a failure to issue citations for approximately 36% of the measured values that exceed the statutory exposure limit. Citations for non-compliance with the respirable dust standard should be issued for any measured exposure that exceeds the exposure limit.
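The statistical reasoning behind the CTV can be illustrated with a small calculation. The formula below is a plausible reconstruction of a one-tailed 95% criterion from the measurement method's coefficient of variation, not MSHA's published procedure:

```python
def criterion_threshold(exposure_limit, cv, z=1.645):
    """Measured concentration above which the exposure limit is exceeded
    with ~95% one-tailed confidence, given the combined sampling and
    analytical coefficient of variation `cv`.

    Illustrative reconstruction only; z = 1.645 is the one-tailed 95%
    normal quantile.
    """
    return exposure_limit * (1.0 + z * cv)

# With a 2.0 mg/m3 dust limit and a 10% measurement CV, the CTV is
# about 2.33 mg/m3: measured values between 2.0 and 2.33 mg/m3 would
# draw no citation, which is how the policy raises the effective limit.
```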
48 CFR 8.405-6 - Limiting sources.
Code of Federal Regulations, 2013 CFR
2013-10-01
... or BPA with an estimated value exceeding the micro-purchase threshold not placed or established in... Schedule ordering procedures. The original order or BPA must not have been previously issued under sole... order or BPA exceeding the simplified acquisition threshold. (2) Posting. (i) Within 14 days after...
48 CFR 8.405-6 - Limiting sources.
Code of Federal Regulations, 2011 CFR
2011-10-01
... or BPA with an estimated value exceeding the micro-purchase threshold not placed or established in... Schedule ordering procedures. The original order or BPA must not have been previously issued under sole... order or BPA exceeding the simplified acquisition threshold. (2) Posting. (i) Within 14 days after...
48 CFR 8.405-6 - Limiting sources.
Code of Federal Regulations, 2012 CFR
2012-10-01
... or BPA with an estimated value exceeding the micro-purchase threshold not placed or established in... Schedule ordering procedures. The original order or BPA must not have been previously issued under sole... order or BPA exceeding the simplified acquisition threshold. (2) Posting. (i) Within 14 days after...
48 CFR 8.405-6 - Limiting sources.
Code of Federal Regulations, 2014 CFR
2014-10-01
... or BPA with an estimated value exceeding the micro-purchase threshold not placed or established in... Schedule ordering procedures. The original order or BPA must not have been previously issued under sole... order or BPA exceeding the simplified acquisition threshold. (2) Posting. (i) Within 14 days after...
Efficacy of the ejector flow-meter. A scavenging device for anaesthetic gases.
Obel, D; Jørgensen, S; Ferguson, A; Frandsen, K
1985-01-01
Measurements of air concentrations of nitrous oxide and halothane in the breathing zone of the anaesthetist and the operating-room nurse were carried out during inhalation anaesthesia with a Mapleson D system. Gas removal was performed from inside the breathing system at the same rate as that of the fresh gas inflow by means of an ejector flow-meter. The concentrations of nitrous oxide and halothane were maintained below the Danish Threshold Limit Values of 100 and 5 parts per million, respectively, by using this type of scavenging. When these anaesthetics were used simultaneously, the reduced Threshold Limit Values were not exceeded during endotracheal anaesthesia.
NASA Technical Reports Server (NTRS)
Janoudi, A.; Poff, K. L.
1990-01-01
The relationship between the amount of light and the amount of response for any photobiological process can be based on the number of incident quanta per unit time (fluence rate-response) or on the number of incident quanta during a given period of irradiation (fluence-response). Fluence-response and fluence rate-response relationships have been measured for second positive phototropism by seedlings of Arabidopsis thaliana. The fluence-response relationships exhibit a single limiting threshold at about 0.01 micromole per square meter when measured at fluence rates from 2.4 × 10⁻⁵ to 6.5 × 10⁻³ micromoles per square meter per second. The threshold values in the fluence rate-response curves decrease with increasing time of irradiation, but show a common fluence threshold at about 0.01 micromole per square meter. These thresholds are the same as the threshold of about 0.01 micromole per square meter measured for first positive phototropism. Based on these data, it is suggested that second positive curvature has a threshold in time of about 10 minutes. Moreover, if the times of irradiation exceed the time threshold, there is a single limiting fluence threshold at about 0.01 micromole per square meter. Thus, the limiting fluence threshold for second positive phototropism is the same as the fluence threshold for first positive phototropism. Based on these data, we suggest that this common fluence threshold for first positive and second positive phototropism is set by a single photoreceptor pigment system.
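The reciprocity arithmetic behind the suggested time threshold can be made explicit; the sketch below simply divides the common fluence threshold by a fluence rate:

```python
def seconds_to_fluence_threshold(fluence_rate, fluence_threshold=0.01):
    """Irradiation time (s) needed to accumulate the ~0.01 umol m^-2
    fluence threshold at a given fluence rate (umol m^-2 s^-1)."""
    return fluence_threshold / fluence_rate

# At the lowest fluence rate used (2.4e-5 umol m^-2 s^-1), the fluence
# threshold is only reached after roughly 7 minutes of irradiation,
# consistent with a time threshold of about 10 minutes for second
# positive curvature.
```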
Designing and Teaching Business & Society Courses from a Threshold Concept Approach
ERIC Educational Resources Information Center
Vidal, Natalia; Smith, Renae; Spetic, Wellington
2015-01-01
This article examines the redesign of an undergraduate course in Business & Society using a threshold concept approach. Business & Society courses may be troublesome for students because they depart from the premise that business is limited to creating value for shareholders. We argue that Business & Society courses contain a web of…
Modeling T-cell proliferation: an investigation of the consequences of the Hayflick limit.
Pilyugin, S; Mittler, J; Antia, R
1997-05-07
Somatic cells, including immune cells such as T-cells have a limited capacity for proliferation and can only replicate for a finite number of generations (known as the Hayflick limit) before dying. In this paper we use mathematical models to investigate the consequences of introducing a Hayflick limit on the dynamics of T-cells stimulated with specific antigen. We show that while the Hayflick limit does not alter the dynamics of T-cell response to antigen over the short term, it may have a profound effect on the long-term immune response. In particular we show that over the long term the Hayflick limit may be important in determining whether an immune response can be maintained to a persistent antigen (or parasite). The eventual outcome is determined by the magnitude of the Hayflick limit, the extent to which antigen reduces the input of T-cells from the thymus, and the rate of antigen-induced proliferation of T-cells. Counter to what might be expected we show that the persistence of an immune response (immune memory) requires the density of persistent antigen to be less than a defined threshold value. If the amount of persistent antigen (or parasite) is greater than this threshold value then immune memory will be relatively short lived. The consequences of this threshold for persistent mycobacterial and HIV infections and for the generation of vaccines are discussed.
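As an exploratory sketch (not the paper's exact equations), a generation-structured population with a Hayflick limit H can be simulated in discrete time; all rates below are illustrative assumptions:

```python
def simulate(H=10, steps=2000, antigen=0.5,
             thymic_input=1.0, prolif=0.1, death=0.05):
    """Total T-cell population after `steps` time steps.

    Cells divide at a rate proportional to the antigen level and move up
    one generation per division; cells at generation H can no longer
    divide. Antigen also suppresses thymic input of new naive cells.
    """
    pop = [0.0] * (H + 1)
    for _ in range(steps):
        new = [0.0] * (H + 1)
        new[0] = thymic_input * max(0.0, 1.0 - antigen)
        for g in range(H + 1):
            dividing = prolif * antigen * pop[g] if g < H else 0.0
            new[g] += pop[g] - death * pop[g] - dividing
            if g < H:
                new[g + 1] += 2.0 * dividing  # two daughters per division
        pop = new
    return sum(pop)
```

Varying `antigen` against `H` in such a toy model is one way to explore the abstract's claim that immune memory persists only when the antigen density stays below a threshold.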
Dobie, Robert A; Wojcik, Nancy C
2015-07-13
The US Occupational Safety and Health Administration (OSHA) Noise Standard provides the option for employers to apply age corrections to employee audiograms to consider the contribution of ageing when determining whether a standard threshold shift has occurred. Current OSHA age-correction tables are based on 40-year-old data, with small samples and an upper age limit of 60 years. By comparison, recent data (1999-2006) show that hearing thresholds in the US population have improved. Because hearing thresholds have improved, and because older people are increasingly represented in noisy occupations, the OSHA tables no longer represent the current US workforce. This paper presents 2 options for updating the age-correction tables and extending values to age 75 years using recent population-based hearing survey data from the US National Health and Nutrition Examination Survey (NHANES). Both options provide scientifically derived age-correction values that can be easily adopted by OSHA to expand their regulatory guidance to include older workers. Regression analysis was used to derive new age-correction values using audiometric data from the 1999-2006 US NHANES. Using the NHANES median, better-ear thresholds fit to simple polynomial equations, new age-correction values were generated for both men and women for ages 20-75 years. The new age-correction values are presented as 2 options. The preferred option is to replace the current OSHA tables with the values derived from the NHANES median better-ear thresholds for ages 20-75 years. The alternative option is to retain the current OSHA age-correction values up to age 60 years and use the NHANES-based values for ages 61-75 years. Recent NHANES data offer a simple solution to the need for updated, population-based, age-correction tables for OSHA. 
The options presented here provide scientifically valid and relevant age-correction values which can be easily adopted by OSHA to expand their regulatory guidance to include older workers.
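The regression approach described, fitting a simple polynomial to median better-ear thresholds as a function of age and reading off corrections relative to a reference age, can be sketched as follows. The data points are invented placeholders, not NHANES values:

```python
# Placeholder (age, median dB HL) points -- NOT NHANES data.
ages = [20, 30, 40, 50, 60, 70, 75]
median_db = [3, 4, 6, 10, 16, 25, 30]

def quad_fit(xs, ys):
    """Least-squares fit of y = c0 + c1*x + c2*x^2 via the 3x3
    normal equations, solved by Gaussian elimination."""
    S = [sum(x ** k for x in xs) for k in range(5)]
    T = [float(sum(y * x ** k for x, y in zip(xs, ys))) for k in range(3)]
    A = [[float(S[i + j]) for j in range(3)] for i in range(3)]
    for i in range(3):
        p = A[i][i]
        A[i] = [a / p for a in A[i]]
        T[i] /= p
        for r in range(3):
            if r != i:
                f = A[r][i]
                A[r] = [a - f * b for a, b in zip(A[r], A[i])]
                T[r] -= f * T[i]
    return T  # (c0, c1, c2)

c0, c1, c2 = quad_fit(ages, median_db)

def age_correction(age, ref=20):
    """Expected hearing-threshold shift attributable to ageing since `ref`."""
    fit = lambda x: c0 + c1 * x + c2 * x * x
    return fit(age) - fit(ref)
```

With real NHANES medians in place of the placeholders, tabulating `age_correction` over ages 20-75 would reproduce the kind of table the paper proposes.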
Rutstein, Sarah E.; Price, Joan T.; Rosenberg, Nora E.; Rennie, Stuart M.; Biddle, Andrea K.; Miller, William C.
2017-01-01
Cost-effectiveness analysis (CEA) is an increasingly appealing tool for evaluating and comparing health-related interventions in resource-limited settings. The goal is to inform decision-makers regarding the health benefits and associated costs of alternative interventions, helping guide allocation of limited resources by prioritizing interventions that offer the most health for the least money. Although only one component of a more complex decision-making process, CEAs influence the distribution of healthcare resources, directly influencing morbidity and mortality for the world’s most vulnerable populations. However, CEA-associated measures are frequently setting-specific valuations, and CEA outcomes may violate ethical principles of equity and distributive justice. We examine the assumptions and analytical tools used in CEAs that may conflict with societal values. We then evaluate contextual features unique to resource-limited settings, including the source of health-state utilities and disability weights; implications of CEA thresholds in light of economic uncertainty; and the role of external donors. Finally, we explore opportunities to help align interpretation of CEA outcomes with values and budgetary constraints in resource-limited settings. The ethical implications of CEAs in resource-limited settings are vast. It is imperative that CEA outcome summary measures and implementation thresholds adequately reflect societal values and ethical priorities in resource-limited settings. PMID:27141969
Using satellite data to guide emission control strategies for surface ozone pollution
NASA Astrophysics Data System (ADS)
Jin, X.; Fiore, A. M.
2017-12-01
Surface ozone (O3) has adverse effects on public health, agriculture and ecosystems. As a secondary pollutant, ozone is not emitted directly; it forms from two classes of precursors: NOx and VOCs. We use satellite observations of formaldehyde (a marker of VOCs) and NO2 (a marker of NOx) to identify areas that would benefit more from reducing NOx emissions (NOx-limited) versus areas where reducing VOC emissions would lead to lower ozone (VOC-limited). We use a global chemical transport model (GEOS-Chem) to develop a set of threshold values that separate NOx-limited and VOC-limited conditions. Combining these threshold values with a decadal record of satellite observations, we find that U.S. cities (e.g. New York, Chicago) have shifted from VOC-limited to NOx-limited ozone production regimes in the warm season. This transition reflects the NOx emission controls implemented over the past decade. Increasing NOx sensitivity implies that regional NOx emission control programs will improve O3 air quality more now than they would have a decade ago.
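A regime classification of the kind described can be sketched with the formaldehyde-to-NO2 column ratio. The transition thresholds below are illustrative placeholders, not the GEOS-Chem-derived values the paper develops:

```python
def ozone_regime(hcho_column, no2_column, low=1.0, high=2.0):
    """Classify ozone-production chemistry from the HCHO/NO2 column
    ratio: below `low` -> VOC-limited (cut VOC emissions), above
    `high` -> NOx-limited (cut NOx emissions), in between ->
    transitional. Threshold values are placeholders."""
    ratio = hcho_column / no2_column
    if ratio < low:
        return "VOC-limited"
    if ratio > high:
        return "NOx-limited"
    return "transitional"
```

Applied to a decadal satellite record, a rising ratio in a city would register as the VOC-limited-to-NOx-limited transition the paper reports.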
NASA Astrophysics Data System (ADS)
Lükens, G.; Yacoub, H.; Kalisch, H.; Vescan, A.
2016-05-01
The interface charge density between the gate dielectric and an AlGaN/GaN heterostructure has a significant impact on the absolute value and stability of the threshold voltage Vth of metal-insulator-semiconductor (MIS) heterostructure field effect transistors. It is shown that a dry-etching step (typically necessary for normally-off devices engineered by gate recessing) before the Al2O3 gate dielectric deposition introduces a high positive interface charge density. Its origin is most likely donor-type trap states shifting Vth to large negative values, which is detrimental for normally-off devices. We investigate the influence of oxygen plasma annealing techniques on the dry-etched AlGaN/GaN surface by capacitance-voltage measurements and demonstrate that the positive interface charge density can be effectively compensated. Furthermore, only a low Vth hysteresis is observable, making this approach suitable for threshold voltage engineering. Analysis of the electrostatics in the investigated MIS structures reveals that the maximum achievable Vth shift to positive voltages is fundamentally limited by the onset of accumulation of holes at the dielectric/barrier interface. In the case of the Al2O3/Al0.26Ga0.74N/GaN material system, this maximum threshold voltage shift is limited to 2.3 V.
A New Load Residual Threshold Definition for the Evaluation of Wind Tunnel Strain-Gage Balance Data
NASA Technical Reports Server (NTRS)
Ulbrich, N.; Volden, T.
2016-01-01
A new definition of a threshold for the detection of load residual outliers in wind tunnel strain-gage balance data was developed. The new threshold is defined as the product of the inverse of the absolute value of the primary gage sensitivity and an empirical limit of the electrical outputs of a strain gage. The empirical limit of the outputs is 2.5 microV/V for balance calibration or check load residuals. A reduced limit of 0.5 microV/V is recommended for the evaluation of differences between repeat load points because, by design, the calculation of these differences removes errors in the residuals that are associated with the regression analysis of the data itself. The definition of the new threshold and different methods for the determination of the primary gage sensitivity are discussed. In addition, calibration data of a six-component force balance and a five-component semi-span balance are used to illustrate the application of the proposed new threshold definition to different types of strain-gage balances. During the discussion of the force balance example it is also explained how the estimated maximum expected output of a balance gage can be used to better understand results of the application of the new threshold definition.
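The threshold definition above is a single division: the empirical electrical output limit over the absolute primary gage sensitivity. A minimal sketch (function name and unit conventions are ours, not the paper's):

```python
def residual_threshold(primary_gage_sensitivity, output_limit_uV_per_V=2.5):
    """Load residual outlier threshold in load units: the empirical
    electrical output limit (2.5 microV/V for calibration or check load
    residuals, 0.5 microV/V for repeat-point differences) divided by the
    absolute primary gage sensitivity (microV/V per unit load)."""
    return output_limit_uV_per_V / abs(primary_gage_sensitivity)
```

For example, a gage with a sensitivity of 5 microV/V per lbf would get a calibration-residual threshold of 0.5 lbf and a repeat-point threshold of 0.1 lbf under this sketch.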
Threshold kinetics of a solar-simulator-pumped iodine laser
NASA Technical Reports Server (NTRS)
Wilson, J. W.; Lee, Y.; Weaver, W. R.; Humes, D. H.; Lee, J. H.
1984-01-01
A model of the chemical kinetics of the n-C3F7I solar-simulator-pumped iodine laser is utilized to study the major kinetic processes associated with the threshold behavior of this experimental system. Excited-state diffusion to the cell wall is the dominant limiting factor below 5 torr. Excited-state recombination with the alkyl radical and quenching by the parent gas control the threshold at higher pressures. Treatment of the hyperfine splitting and uncertainty in the pressure broadening are important factors in fixing the threshold level. In spite of scatter in the experimental data caused by instabilities in the simulator's high-pressure arc, reasonable agreement is achieved between the model and experiment. The model parameters arrived at are within the uncertainty range of values found in the literature.
Wilson, Raymond C.
1997-01-01
Broad-scale variations in long-term precipitation climate may influence rainfall/debris-flow threshold values along the U.S. Pacific coast, where both the mean annual precipitation (MAP) and the number of rainfall days (#RDs) are controlled by topography, distance from the coastline, and geographic latitude. Previous authors have proposed that rainfall thresholds are directly proportional to MAP, but this appears to hold only within limited areas (< 1° latitude), where rainfall frequency (#RDs) is nearly constant. MAP-normalized thresholds underestimate the critical rainfall when applied to areas to the south, where the #RDs decrease, and overestimate threshold rainfall when applied to areas to the north, where the #RDs increase. For normalization between climates where both MAP and #RDs vary significantly, thresholds may best be described as multiples of the rainy-day normal, RDN = MAP/#RDs. Using data from several storms that triggered significant debris-flow activity in southern California, the San Francisco Bay region, and the Pacific Northwest, peak 24-hour rainfalls were plotted against RDN values, displaying a linear relationship with a lower bound at about 14 RDN. RDN ratios in this range may provide a threshold for broad-scale regional forecasting of debris-flow activity.
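The rainy-day normal and the proposed lower-bound threshold can be sketched directly; the input values in the usage note are invented for illustration:

```python
def rainy_day_normal(map_mm, rainfall_days):
    """Rainy-day normal (RDN): mean annual precipitation (mm) divided by
    the number of rainfall days per year."""
    return map_mm / rainfall_days

def debris_flow_threshold(map_mm, rainfall_days, multiple=14.0):
    """Lower-bound 24-hour rainfall threshold expressed as a multiple of
    the RDN (about 14 in the abstract's storm data)."""
    return multiple * rainy_day_normal(map_mm, rainfall_days)
```

A hypothetical site with 600 mm MAP spread over 60 rainfall days has an RDN of 10 mm, giving a sketched 24-hour debris-flow threshold of about 140 mm.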
He, Zhi Chao; Huang, Shuo; Guo, Qing Hai; Xiao, Li Shan; Yang, De Wei; Wang, Ying; Yang, Yi Fu
2016-08-01
Urban sprawl has increasingly impacted water environment quality in watersheds. Based on the water environmental response, simulation and prediction of the expansion threshold of urban building land could provide an alternative reference for urban construction planning. Taking three watersheds (i.e., Yundang Lake at the complete urbanization phase, Maluan Bay at the peri-urbanization phase and Xinglin Bay at the early urbanization phase) with 2009-2012 observation data as examples, we calculated the upper limit of TN and TP capacity in the three watersheds and identified the threshold value of urban building land in the watersheds using the regional nutrient management (ReNuMa) model, and also predicted the water environmental effects associated with changes in urban landscape pattern. Results indicated that the upper limit value of TN was 12900, 42800 and 43120 kg, while that of TP was 340, 420 and 450 kg for the Yundang, Maluan and Xinglin watersheds, respectively. In reality, the environmental capacity for pollutants in Yundang Lake was not yet saturated, while annual pollutant loads in Maluan Bay and Xinglin Bay were close to the upper limit. However, an obvious upward trend of annual TN and TP loads was observed in Xinglin Bay. The annual pollutant load did not exceed the annual upper limit in the three watersheds under Scenario 1, but did under Scenario 3. Under Scenario 2, the annual pollutant load in Yundang Lake remained below saturation, while the TN and TP loads in Maluan Bay exceeded their limits. The area thresholds of urban building land were 1320, 5600 and 4750 hm² for Yundang Lake, Maluan Bay and Xinglin Bay, respectively. This study could benefit the regulation of urban landscape planning.
NASA Technical Reports Server (NTRS)
Macewen, J. W.
1973-01-01
Oxygen toxicity is examined, including the effects of oxygen partial pressure variations on toxicity and oxygen effects on ozone and nitrogen dioxide toxicity. The toxicity of fuels and oxidizers, such as the hydrazines, is reported. Carbon monoxide, spacecraft threshold limit values, emergency exposure limits, spacecraft contaminants, and water quality standards for space missions are briefly summarized.
33 CFR 138.240 - Procedure for calculating limit of liability adjustments for inflation.
Code of Federal Regulations, 2010 CFR
2010-07-01
... calculating limit of liability adjustments for inflation. (a) Formula for calculating a cumulative percent... Current Period), using the following escalation formula: Percent change in the Annual CPI-U = [(Annual CPI.... This cumulative percent change value is rounded to one decimal place. (b) Significance threshold. Not...
Cabral, Ana Caroline; Stark, Jonathan S; Kolm, Hedda E; Martins, César C
2018-04-01
Sewage input and the relationship between chemical markers (linear alkylbenzenes and coprostanol) and fecal indicator bacteria (FIB, Escherichia coli and enterococci) were evaluated in order to establish threshold values for chemical markers in suspended particulate matter (SPM) as indicators of sewage contamination in two subtropical estuaries in South Atlantic Brazil. Neither chemical marker presented a linear relationship with FIB due to high spatial microbiological variability; however, microbiological water quality was related to coprostanol values when analyzed by logistic regression, indicating that linear models may not be the best representation of the relationship between the two classes of indicators. Logistic regression was performed with all data and separately for two sampling seasons, using 800 and 100 MPN 100 mL-1 for E. coli and enterococci, respectively, as the microbiological limits of sewage contamination. Threshold values of coprostanol varied depending on the FIB and season, ranging between 1.00 and 2.23 μg g-1 SPM. The range of threshold values of coprostanol for SPM is relatively higher and more variable than those suggested in the literature for sediments (0.10-0.50 μg g-1), probably due to higher concentrations of coprostanol in SPM than in sediment. Temperature may affect the relationship between microbiological indicators and coprostanol, since the threshold value of coprostanol found here was similar to tropical areas but lower than those found during winter in temperate areas, reinforcing the idea that threshold values should be calibrated for different climatic conditions. Copyright © 2018 Elsevier Ltd. All rights reserved.
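The logistic-regression step above implies a simple way to turn fitted coefficients into a marker threshold: the coprostanol concentration at which the modeled probability of exceeding the microbiological limit reaches a chosen level. A hedged sketch with hypothetical coefficients b0 and b1 (not the study's fitted values):

```python
import math

def exceedance_probability(coprostanol, b0, b1):
    """Logistic model: probability that FIB exceed the microbiological
    limit at a given coprostanol concentration (ug/g SPM). b0 and b1 are
    hypothetical fitted intercept and slope."""
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * coprostanol)))

def coprostanol_threshold(b0, b1, p=0.5):
    """Concentration at which the exceedance probability reaches p
    (inverse of the logistic link)."""
    return (math.log(p / (1.0 - p)) - b0) / b1
```

With illustrative coefficients b0 = -3 and b1 = 2, the p = 0.5 threshold falls at 1.5 μg g-1, inside the 1.00-2.23 μg g-1 range the abstract reports; the coefficients here are invented for the example.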
Oil-in-Water Emulsion Exhibits Bitterness-Suppressing Effects in a Sensory Threshold Study.
Torrico, Damir Dennis; Sae-Eaw, Amporn; Sriwattana, Sujinda; Boeneke, Charles; Prinyawiwatkul, Witoon
2015-06-01
Little is known about how emulsion characteristics affect saltiness/bitterness perception. Sensory detection and recognition thresholds of NaCl, caffeine, and KCl in aqueous solution compared with oil-in-water emulsion systems were evaluated. For emulsions, NaCl, KCl, or caffeine were dissolved in water + emulsifier and mixed with canola oil (20% by weight). Two emulsions were prepared: emulsion 1 (viscosity = 257 cP) and emulsion 2 (viscosity = 59 cP). The forced-choice ascending concentration series method of limits (ASTM E-679-04) was used to determine detection and/or recognition thresholds at 25 °C. Group best estimate threshold (GBET) geometric means were expressed as g/100 mL. Comparing NaCl with KCl, there were no significant differences in detection GBET values for all systems (0.0197 - 0.0354). For saltiness recognition thresholds, KCl GBET values were higher compared with NaCl GBET (0.0822 - 0.1070 compared with 0.0471 - 0.0501). For NaCl and KCl, emulsion 1 and/or emulsion 2 did not significantly affect the saltiness recognition threshold compared with that of the aqueous solution. However, the bitterness recognition thresholds of caffeine and KCl in solution were significantly lower than in the emulsions (0.0242 - 0.0586 compared with 0.0754 - 0.1025). Gender generally had a marginal effect on threshold values. This study showed that, compared with the aqueous solutions, emulsions did not significantly affect the saltiness recognition threshold of NaCl and KCl, but exhibited bitterness-suppressing effects on KCl and/or caffeine. © 2015 Institute of Food Technologists®
Dolan, David G; Naumann, Bruce D; Sargent, Edward V; Maier, Andrew; Dourson, Michael
2005-10-01
A scientific rationale is provided for estimating acceptable daily intake values (ADIs) for compounds with limited or no toxicity information to support pharmaceutical manufacturing operations. These ADIs are based on application of the "thresholds of toxicological concern" (TTC) principle, in which levels of human exposure are estimated that pose no appreciable risk to human health. The same concept has been used by the US Food and Drug Administration (FDA) to establish "thresholds of regulation" for indirect food additives and adopted by the Joint FAO/WHO Expert Committee on Food Additives for flavoring substances. In practice, these values are used as a statement of safety and indicate when no actions need to be taken in a given exposure situation. Pharmaceutical manufacturing relies on ADIs for cleaning validation of process equipment and atypical extraneous matter investigations. To provide practical guidance for handling situations where relatively unstudied compounds with limited or no toxicity data are encountered, recommendations are provided on ADI values that correspond to three categories of compounds: (1) compounds that are likely to be carcinogenic, (2) compounds that are likely to be potent or highly toxic, and (3) compounds that are not likely to be potent, highly toxic or carcinogenic. Corresponding ADIs for these categories of materials are 1, 10, and 100 microg/day, respectively.
NASA Astrophysics Data System (ADS)
Panagoulia, D.; Trichakis, I.
2012-04-01
Considering the growing interest in simulating hydrological phenomena with artificial neural networks (ANNs), it is useful to figure out the potential and limits of these models. In this study, the main objective is to examine how to improve the ability of an ANN model to simulate extreme values of flow by utilizing a priori knowledge of threshold values. A three-layer feedforward ANN was trained using the back-propagation algorithm and the logistic function as the activation function. Using the thresholds, the flow was partitioned into low (x < μ), medium (μ ≤ x ≤ μ + 2σ) and high (x > μ + 2σ) values. The ANN model was trained both for the high-flow partition and for all flow data. The developed methodology was implemented over a mountainous river catchment (the Mesochora catchment in northwestern Greece). The ANN model received as inputs pseudo-precipitation (rain plus melt) and previous observed flow data. After the training was completed, the bootstrapping methodology was applied to calculate the ANN confidence intervals (CIs) for a 95% nominal coverage. The calculated CIs included only the uncertainty that comes from the calibration procedure. The results showed that an ANN model trained specifically for high flows, with a priori knowledge of the thresholds, can simulate these extreme values much better (RMSE is 31.4% lower) than an ANN model trained with all data of the available time series and using a posteriori threshold values. On the other hand, the width of the CIs increases by 54.9%, with a simultaneous increase of 64.4% in the actual coverage for the high flows (a priori partition). The narrower CIs of the high flows trained with all data may be attributed to the smoothing effect produced by the use of the full data sets.
Overall, the results suggest that an ANN model trained with a priori knowledge of the threshold values has an increased ability in simulating extreme values compared with an ANN model trained with all the data and a posteriori knowledge of the thresholds.
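The flow partition used in the study follows directly from its definitions; a minimal sketch:

```python
def partition_flow(x, mu, sigma):
    """Partition a flow value into low (x < mu), medium
    (mu <= x <= mu + 2*sigma) or high (x > mu + 2*sigma), the classes
    used to select training data for the high-flow ANN."""
    if x < mu:
        return "low"
    if x <= mu + 2.0 * sigma:
        return "medium"
    return "high"
```

Training the "a priori" model then amounts to fitting only on the samples this predicate labels "high", while the baseline model fits on all samples.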
Methods for threshold determination in multiplexed assays
Tammero, Lance F. Bentley; Dzenitis, John M; Hindson, Benjamin J
2014-06-24
Methods for the determination of threshold values for the signatures included in an assay are described. Each signature enables detection of a target. The methods determine a probability density function of negative samples and a corresponding false positive rate curve. A false positive criterion is established, and the threshold for that signature is determined as the point at which the false positive rate curve intersects the false positive criterion. A method for quantitative analysis and interpretation of assay results, together with a method for determination of a desired limit of detection of a signature in an assay, are also described.
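An empirical sketch of the described procedure, substituting the observed negative-sample distribution for a fitted probability density (illustrative only, not the patented method's exact algorithm; assumes distinct signal values):

```python
def threshold_for_false_positive_rate(negative_signals, criterion=0.01):
    """Sort signals measured on known-negative samples, treat the fraction
    of negatives strictly above a candidate threshold as the empirical
    false positive rate, and return the smallest observed signal value
    whose false positive rate meets the criterion."""
    s = sorted(negative_signals)
    n = len(s)
    for i, t in enumerate(s):
        fpr = (n - i - 1) / n  # fraction of negatives strictly above t
        if fpr <= criterion:
            return t
    return s[-1]
```

With a fitted parametric density, the same intersection would be found analytically from the survival function rather than by scanning sorted samples.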
Boyd, Paul J
2006-12-01
The principal task in the programming of a cochlear implant (CI) speech processor is the setting of the electrical dynamic range (output) for each electrode, to ensure that a comfortable loudness percept is obtained for a range of input levels. This typically involves separate psychophysical measurement of electrical threshold (θe) and upper tolerance levels using short current bursts generated by the fitting software. Anecdotal clinical experience and some experimental studies suggest that the measurement of θe is relatively unimportant and that the setting of upper tolerance limits is more critical for processor programming. The present study aims to test this hypothesis and examines in detail how acoustic thresholds and speech recognition are affected by the setting of the lower limit of the output ("programming threshold" or "PT"), to better understand the influence of this parameter and how it interacts with certain other programming parameters. Test programs (maps) were generated with PT set to artificially high and low values and tested on users of the MED-EL COMBI 40+ CI system. Acoustic thresholds and speech recognition scores (sentence tests) were measured for each of the test maps. Acoustic thresholds were also measured using maps with a range of output compression functions ("maplaws"). In addition, subjective reports were recorded regarding the presence of "background threshold stimulation", which is occasionally reported by CI users if PT is set to relatively high values when using the CIS strategy. Manipulation of PT was found to have very little effect. Setting PT to minimum produced a mean 5 dB (S.D. = 6.25) increase in acoustic thresholds, relative to thresholds with PT set normally, and had no statistically significant effect on speech recognition scores on a sentence test.
On the other hand, maplaw setting was found to have a significant effect on acoustic thresholds (raised as the maplaw is made more linear), which provides some theoretical explanation as to why PT has little effect when using the default maplaw of c = 500. Subjective reports of background threshold stimulation showed that most users could perceive a relatively loud auditory percept, in the absence of microphone input, when PT was set to double the behaviorally measured electrical thresholds (θe), but that this produced little intrusion when microphone input was present. The results of these investigations have direct clinical relevance, showing that the setting of PT is indeed relatively unimportant in terms of speech discrimination, but that it is worth ensuring that PT is not set excessively high, as this can produce distracting background stimulation. Indeed, it may even be set to minimum values without deleterious effect.
Paganoni, C.A.; Chang, K.C.; Robblee, M.B.
2006-01-01
A significant data quality challenge for highly variant systems arises from the limited ability to quantify operationally reasonable limits on the data elements being collected and to provide reasonable threshold predictions. In many instances, the number of influences that drive a resulting value or operational range is too large to enable physical sampling for each influencer, or is too complicated to model accurately in an explicit simulation. An alternative method to determine reasonable observation thresholds is to employ an automation algorithm that emulates a human analyst visually inspecting data for limits. Using the visualization technique of self-organizing maps (SOM) on data having poorly understood relationships, a methodology for determining threshold limits was developed. To illustrate this approach, analysis of environmental influences that drive the abundance of a target indicator species (the pink shrimp, Farfantepenaeus duorarum) provided a real example of applicability. The relationship between salinity and temperature and the abundance of F. duorarum is well documented, but the effect of upstream water quality changes on pink shrimp abundance is not well understood. The highly variant nature of catches of a specific number of organisms in the wild, and the data available from upstream hydrology measures for salinity and temperature, made this an ideal candidate for the approach to provide a determination about the influence of changes in hydrology on populations of organisms.
Blom, Kimberly C; Farina, Sasha; Gomez, Yessica-Haydee; Campbell, Norm R C; Hemmelgarn, Brenda R; Cloutier, Lyne; McKay, Donald W; Dawes, Martin; Tobe, Sheldon W; Bolli, Peter; Gelfer, Mark; McLean, Donna; Bartlett, Gillian; Joseph, Lawrence; Featherstone, Robin; Schiffrin, Ernesto L; Daskalopoulou, Stella S
2015-04-01
Despite progress in automated blood pressure measurement (BPM) technology, there is limited research linking hard outcomes to automated office BPM (OBPM) treatment targets and thresholds. Equivalences for automated BPM devices have been estimated from approximations of standardized manual measurements of 140/90 mmHg. Until outcome-driven targets and thresholds become available for automated measurement methods, deriving evidence-based equivalences between automated methods and standardized manual OBPM is the next best solution. The MeasureBP study group was initiated by the Canadian Hypertension Education Program to close this critical knowledge gap. MeasureBP aims to define evidence-based equivalent values between standardized manual OBPM and automated BPM methods by synthesizing available evidence using a systematic review and individual subject-level data meta-analyses. This manuscript provides a review of the literature and the MeasureBP study protocol. These results will lay the evidence-based foundation to resolve uncertainties within blood pressure guidelines, which, in turn, will improve the management of hypertension.
Effects of pulse duration on magnetostimulation thresholds
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saritas, Emine U., E-mail: saritas@ee.bilkent.edu.tr; Department of Electrical and Electronics Engineering, Bilkent University, Bilkent, Ankara 06800; National Magnetic Resonance Research Center
Purpose: Medical imaging techniques such as magnetic resonance imaging and magnetic particle imaging (MPI) utilize time-varying magnetic fields that are subject to magnetostimulation limits, which often limit the speed of the imaging process. Various human-subject experiments have studied the amplitude and frequency dependence of these thresholds for gradient or homogeneous magnetic fields. Another contributing factor was shown to be the number of cycles in a magnetic pulse, where the thresholds decreased with longer pulses. The latter result was demonstrated on two subjects only, at a single frequency of 1.27 kHz. Hence, whether the observed effect was due to the number of cycles or due to the pulse duration was not specified. In addition, a gradient-type field was utilized; hence, whether the same phenomenon applies to homogeneous magnetic fields remained unknown. Here, the authors investigate the pulse duration dependence of magnetostimulation limits for a 20-fold range of frequencies using homogeneous magnetic fields, such as the ones used for the drive field in MPI. Methods: Magnetostimulation thresholds were measured in the arms of six healthy subjects (age: 27 ± 5 yr). Each experiment comprised testing the thresholds at eight different pulse durations between 2 and 125 ms at a single frequency, which took approximately 30–40 min/subject. A total of 34 experiments were performed at three different frequencies: 1.2, 5.7, and 25.5 kHz. A solenoid coil providing a homogeneous magnetic field was used to induce stimulation, and the field amplitude was measured in real time. A pre-emphasis based pulse shaping method was employed to accurately control the pulse durations. Subjects reported stimulation via a mouse click whenever they felt a twitching/tingling sensation. A sigmoid function was fitted to the subject responses to find the threshold at a specific frequency and duration, and the whole procedure was repeated at all relevant frequencies and pulse durations.
Results: The magnetostimulation limits decreased with increasing pulse duration (T_pulse). For T_pulse < 18 ms, the thresholds were significantly higher than at the longest pulse durations (p < 0.01, paired Wilcoxon signed-rank test). The normalized magnetostimulation threshold (B_Norm) vs duration curves at all three frequencies agreed almost identically, indicating that the observed effect is independent of the operating frequency. At the shortest pulse duration (T_pulse ≈ 2 ms), the thresholds were approximately 24% higher than at the asymptotes. The thresholds decreased to within 4% of their asymptotic values for T_pulse > 20 ms. These trends were well characterized (R² = 0.78) by a stretched exponential function given by B_Norm = 1 + α·e^(−(T_pulse/β)^γ), where the fitted parameters were α = 0.44, β = 4.32, and γ = 0.60. Conclusions: This work shows for the first time that the magnetostimulation thresholds decrease with increasing pulse duration, and that this effect is independent of the operating frequency. Normalized threshold vs duration trends are almost identical for a 20-fold range of frequencies: the thresholds are significantly higher at short pulse durations and settle to within 4% of their asymptotic values for durations longer than 20 ms. These results emphasize the importance of matching the human-subject experiments to the imaging conditions of a particular setup. Knowing the dependence of the safety limits on all contributing factors is critical for increasing the time-efficiency of imaging systems that utilize time-varying magnetic fields.
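The fitted stretched-exponential curve can be evaluated directly from the reported parameters; a minimal sketch using the abstract's fitted values:

```python
import math

def normalized_threshold(t_pulse_ms, alpha=0.44, beta=4.32, gamma=0.60):
    """Stretched-exponential fit from the study:
    B_Norm = 1 + alpha * exp(-(T_pulse / beta)**gamma),
    with T_pulse in milliseconds and B_Norm dimensionless."""
    return 1.0 + alpha * math.exp(-((t_pulse_ms / beta) ** gamma))
```

Evaluating at T_pulse ≈ 2 ms reproduces the roughly 24% elevation over the asymptote reported in the abstract, and the curve settles toward 1 for long pulses.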
Seismic signals hard clipping overcoming
NASA Astrophysics Data System (ADS)
Olszowa, Paula; Sokolowski, Jakub
2018-01-01
In signal processing, clipping is understood as the phenomenon of limiting a signal beyond a certain threshold. It is often related to overloading of a sensor. Two particular types of clipping are recognized: soft and hard. Beyond the limiting value, soft clipping reduces the signal's real gain, while hard clipping fixes the signal values at the limit. In both cases a certain amount of signal information is lost. Obviously, if one possesses a model describing the considered signal and the threshold value (which might be slightly more difficult to obtain in the soft clipping case), an attempt to restore the signal can be made. It is commonly assumed that seismic signals take the form of an impulse response of some specific system. This may lead to the belief that a sine wave is the most appropriate function to fit in the clipping period. However, this should be tested. In this paper the possibility of overcoming hard clipping in seismic signals originating from a geoseismic station belonging to an underground mine is considered. A set of raw signals will be hard-clipped manually, and then several different functions will be fitted and compared in terms of least squares. The results will then be analysed.
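Before any function can be fitted in the clipping period, the hard-clipped runs have to be located; a minimal sketch, assuming the clipping limit is known (as when the signals are clipped manually):

```python
def clipped_spans(signal, limit):
    """Return (start, end) index pairs (end exclusive) of hard-clipped
    runs, i.e. consecutive samples whose absolute value sits at or above
    the clipping limit. These are the spans a restoration model (e.g. a
    least-squares sine fit) would be fitted over."""
    spans, start = [], None
    for i, x in enumerate(signal):
        if abs(x) >= limit:
            if start is None:
                start = i
        elif start is not None:
            spans.append((start, i))
            start = None
    if start is not None:
        spans.append((start, len(signal)))
    return spans
```

Each returned span, together with the unclipped samples on either side, defines the local least-squares fitting problem for the candidate restoration functions.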
Corporate influence on threshold limit values.
Castleman, B I; Ziem, G E
1988-01-01
Investigations into the historical development of specific Threshold Limit Values (TLVs) for many substances have revealed serious shortcomings in the process followed by the American Conference of Governmental Industrial Hygienists. Unpublished corporate communications were important in developing TLVs for 104 substances; for 15 of these, the TLV documentation was based solely on such information. Efforts to obtain written copies of this unpublished material were mostly unsuccessful. Case studies on the TLV Committee's handling of lead and seven carcinogens illustrate various aspects of corporate influence and interaction with the committee. Corporate representatives listed officially as "consultants" since 1970 were given primary responsibility for developing TLVs on proprietary chemicals of the companies that employed them (Dow, DuPont). It is concluded that an ongoing international effort is needed to develop scientifically based guidelines to replace the TLVs in a climate of openness and without manipulation by vested interests.
Soil texture and climatic conditions for biocrust growth limitation: a meta-analysis
NASA Astrophysics Data System (ADS)
Fischer, Thomas; Subbotina, Mariia
2015-04-01
Along with afforestation, attempts have been made to combat desertification by managing soil crusts, and it has been reported that recovery rates of biocrusts are dependent on many factors, including the type, severity, and extent of disturbance; structure of the vascular plant community; conditions of adjoining substrates; availability of inoculation material; and climate during and after disturbance (Belnap & Eldridge 2001). Because biological soil crusts are known to be more stable on, and to prefer, fine substrates (Belnap 2001), the question arises as to how successful crust management practices can be applied to coarser soil. In previous studies we observed similar crust biomasses on finer soils under arid conditions and on coarser soils under temperate conditions. We hypothesized that the higher water holding capacity of finer substrates would favor crust development, and that the amount of silt and clay in the substrate required for enhanced crust development would vary with changes in climatic conditions. In a global meta-analysis, climatic and soil texture threshold values promoting BSC growth were derived. While examining literature sources, it became evident that the number of studies that could be incorporated into this meta-analysis was inversely related to the number of common environmental parameters they share. We selected annual mean precipitation, mean temperature and the amount of silt and clay as driving variables for crust growth. The response variable was the "relative crust biomass", computed per literature source as the ratio of each individual crust biomass value of the given study to the maximum value reported in that study. We distinguished lichen, green algal, cyanobacterial and moss crusts.
To quantify the threshold conditions at which crust biomass responded to differences in texture and climate, we (I) determined correlations between bioclimatic variables, (II) calculated linear models to determine the effect of typical climatic variables together with soil clay content, with study site as a random effect, and (III) identified threshold values of texture and climatic effects using a regression tree. Three mean annual temperature classes for texture-dependent BSC growth limitation were identified: (1) <9 °C, with a threshold value of 25% silt and clay (limited growth on coarser soils); (2) 9-19 °C, where texture had no influence on relative crust biomass; and (3) >19 °C, on soils with <4 or >17% silt and clay. Because biocrust development is limited under certain climatic and soil texture conditions, it is suggested to consider soil texture for biocrust rehabilitation purposes and in biogeochemical modeling of cryptogamic ground covers. References: Belnap, J. & Eldridge, D. 2001. Disturbance and Recovery of Biological Soil Crusts. In: Belnap, J. & Lange, O. (eds.) Biological Soil Crusts: Structure, Function, and Management, Springer, Berlin. Belnap, J. 2001. Biological Soil Crusts and Wind Erosion. In: Belnap, J. & Lange, O. (eds.) Biological Soil Crusts: Structure, Function, and Management, Springer, Berlin. Fischer, T., Subbotina, M. 2014. Climatic and soil texture threshold values for cryptogamic cover development: a meta analysis. Biologia 69/11:1520-1530.
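The regression-tree classes reported above can be sketched as a simple predicate; note that reading class (3) as limitation on soils with <4% or >17% silt and clay is our interpretation of the abstract, so the function is a hedged illustration rather than the study's model:

```python
def biocrust_growth_texture_limited(mat_c, silt_clay_pct):
    """Sketch of the three mean-annual-temperature (MAT, degC) classes:
    whether biocrust growth at a site is texture-limited, given the
    percentage of silt and clay in the substrate."""
    if mat_c < 9.0:
        return silt_clay_pct < 25.0  # limited growth on coarser soils
    if mat_c <= 19.0:
        return False                 # texture had no influence
    return silt_clay_pct < 4.0 or silt_clay_pct > 17.0
```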
Regional rainfall thresholds for landslide occurrence using a centenary database
NASA Astrophysics Data System (ADS)
Vaz, Teresa; Zêzere, José Luís; Pereira, Susana; Cruz Oliveira, Sérgio; Quaresma, Ivânia
2017-04-01
Rainfall is one of the most important triggering factors for landslide occurrence worldwide. The relation between rainfall and landslide occurrence is complex, and some approaches have focused on the identification of rainfall thresholds, i.e., critical rainfall values that, when exceeded, can initiate landslide activity. In line with these approaches, this work proposes and validates rainfall thresholds for the Lisbon region (Portugal), using a centenary landslide database together with a centenary daily rainfall database. The main objectives of the work are the following: i) to compute antecedent rainfall thresholds using linear and potential regression; ii) to define lower limit and upper limit rainfall thresholds; iii) to estimate the probability of critical rainfall conditions associated with landslide events; and iv) to assess the performance of the thresholds using receiver operating characteristic (ROC) metrics. In this study we consider the DISASTER database, which lists landslides occurring in Portugal from 1865 to 2010 that caused fatalities, injuries, missing people, or evacuated and homeless people. The DISASTER database was compiled by exploring several Portuguese daily and weekly newspapers. Using the same newspaper sources, the database was recently updated to also include landslides that did not cause any human damage; these were also considered in this study. The daily rainfall data were collected at the Lisboa-Geofísico meteorological station. This station was selected considering the quality and completeness of its rainfall data, with records starting in 1864. The methodology adopted included the computation, for each landslide event, of the cumulative antecedent rainfall for different durations (1 to 90 consecutive days). In a second step, for each rainfall quantity-duration combination, the return period was estimated using the Gumbel probability distribution.
The quantity-duration pair with the highest return period was considered the critical rainfall combination responsible for triggering the landslide event. Only events whose critical rainfall combinations have a return period above 3 years were included; this criterion reduces the likelihood of including events whose trigger was something other than rainfall. The rainfall quantity-duration threshold for the Lisbon region was first defined using linear and potential regression. Because this threshold allows the existence of false negatives (i.e., events below the threshold), we also identified lower limit and upper limit rainfall thresholds. These limits were defined empirically by establishing the quantity-duration combinations below which no landslides were recorded (lower limit) and the quantity-duration combinations above which only landslides were recorded, without any false positives (upper limit). The zone between the lower limit and upper limit thresholds was analysed using a probabilistic approach, quantifying the uncertainty of each critical rainfall condition in the triggering of landslides. Finally, the performance of the thresholds obtained in this study was assessed using ROC metrics. This work was supported by the project FORLAND - Hydrogeomorphologic risk in Portugal: driving forces and application for land use planning [grant number PTDC/ATPGEO/1660/2014] funded by the Portuguese Foundation for Science and Technology (FCT), Portugal. Sérgio Cruz Oliveira is a post-doc fellow of the FCT [grant number SFRH/BPD/85827/2012].
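The core of the methodology, fitting a Gumbel distribution to historical antecedent rainfall and selecting the quantity-duration pair with the highest return period, can be sketched as follows. This is a simplified method-of-moments illustration with invented numbers, not the study's data or fitting procedure:

```python
import math
from statistics import mean, stdev

EULER_GAMMA = 0.5772156649015329

def gumbel_return_period(sample, x):
    """Return period (years) of value x given a sample of annual maxima,
    using a method-of-moments Gumbel fit."""
    beta = stdev(sample) * math.sqrt(6) / math.pi   # scale parameter
    mu = mean(sample) - EULER_GAMMA * beta          # location parameter
    cdf = math.exp(-math.exp(-(x - mu) / beta))     # Gumbel CDF at x
    return 1.0 / (1.0 - cdf)

def critical_combination(event_rain, annual_maxima):
    """Pick the duration whose cumulated rainfall has the highest return period.
    event_rain: {duration_days: rainfall accumulated before the event}
    annual_maxima: {duration_days: historical annual maxima for that duration}"""
    return max(event_rain,
               key=lambda d: gumbel_return_period(annual_maxima[d], event_rain[d]))

# Hypothetical illustration with two durations (rainfall in mm)
annual_maxima = {1: [30, 42, 55, 38, 61, 47, 52, 44],
                 15: [120, 150, 180, 140, 200, 165, 175, 155]}
event_rain = {1: 50, 15: 230}
print(critical_combination(event_rain, annual_maxima))  # 15-day total is critical here
```

In the study this search runs over all durations from 1 to 90 days, and the event is retained only if the winning combination's return period exceeds 3 years.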
Connolly, Declan A J
2012-09-01
The purpose of this article is to assess the value of the anaerobic threshold for use in clinical populations with the intent to improve exercise adaptations and outcomes. The anaerobic threshold is generally poorly understood, improperly used, and poorly measured. It is rarely used in clinical settings and often reserved for athletic performance testing. Growing exercise participation within both clinical and other less healthy populations has focused attention on optimizing exercise outcomes. Of particular interest is the optimization of lipid metabolism during exercise in order to improve numerous conditions such as blood lipid profile, insulin sensitivity and secretion, and weight loss. Numerous authors report on the benefits of appropriate exercise intensity in optimizing outcomes, even though regulation of intensity has proved difficult for many. Despite limited use, selected exercise physiology markers have considerable merit in exercise-intensity regulation. The anaerobic threshold, and other markers such as heart rate, may well provide a simple and valuable mechanism for regulating exercise intensity. The use of the anaerobic threshold and an accurate target heart rate to regulate exercise intensity is a valuable approach that is under-utilized across populations. The measurement of the anaerobic threshold can be simplified to allow clients to use nonlaboratory measures, for example heart rate, in order to self-regulate exercise intensity and improve outcomes.
NASA Astrophysics Data System (ADS)
de Smet, Filip; Aeyels, Dirk
2010-12-01
We consider the stationary and the partially synchronous regimes in an all-to-all coupled neural network consisting of an infinite number of leaky integrate-and-fire neurons. Using analytical tools as well as simulation results, we show that two threshold values for the coupling strength may be distinguished. Below the lower threshold, no synchronization is possible; above the upper threshold, the stationary regime is unstable and partial synchrony prevails. In between there is a range of values for the coupling strength where both regimes may be observed. The assumption of an infinite number of neurons is crucial: simulations with a finite number of neurons indicate that above the lower threshold partial synchrony always prevails—but with a transient time that may be unbounded with increasing system size. For values of the coupling strength in a neighborhood of the lower threshold, the finite model repeatedly builds up toward synchronous behavior, followed by a sudden breakdown, after which the synchronization is slowly built up again. The “transient” time needed to build up synchronization again increases with increasing system size, and in the limit of an infinite number of neurons we retrieve stationary behavior. Similarly, within some range for the coupling strength in this neighborhood, a stable synchronous solution may exist for an infinite number of neurons.
El-Zaatari, Bassil M; Shete, Abhishek U; Adzima, Brian J; Kloxin, Christopher J
2016-09-14
The kinetic behaviour of the photo-induced copper(i)-catalyzed azide-alkyne cycloaddition (CuAAC) reaction was studied in detail using real-time Fourier transform infrared (FTIR) spectroscopy on both a solvent-based monofunctional system and a neat polymer-network-forming system. The results in the solvent-based system showed near first-order kinetics in copper and photoinitiator concentrations up to a threshold value, above which the kinetics switch to zeroth order. This kinetic shift shows that the photo-CuAAC reaction is not susceptible to side reactions such as copper disproportionation, copper(i) reduction, and radical termination at the early stages of the reaction. The overall reaction rate and conversion are highly dependent on the initial concentrations of photoinitiator and copper(ii) as well as their relative ratios. The conversion decreased when photoinitiator was used in excess of its threshold value. Interestingly, the reaction showed an induction period at relatively low light intensities; the induction period decreases with increasing light intensity and photoinitiator concentration. The reaction trends and limitations were further observed in a solventless polymer-network-forming system, which exhibited similar copper and photoinitiator threshold behaviour.
El-Zaatari, Bassil M.; Shete, Abhishek U.; Adzima, Brian J.; Kloxin, Christopher J.
2016-01-01
The kinetic behaviour of the photo-induced copper(I)-catalyzed azide-alkyne cycloaddition (CuAAC) reaction was studied in detail using real-time Fourier transform infrared (FTIR) spectroscopy on both a solvent-based monofunctional system and a neat polymer-network-forming system. The results in the solvent-based system showed near first-order kinetics in copper and photoinitiator concentrations up to a threshold value, above which the kinetics switch to zeroth order. This kinetic shift shows that the photo-CuAAC reaction is not susceptible to side reactions such as copper disproportionation, copper(I) reduction, and radical termination at the early stages of the reaction. The overall reaction rate and conversion are highly dependent on the initial concentrations of photoinitiator and copper(II), as well as their relative ratios. The conversion decreased when photoinitiator was used in excess of its threshold value. Interestingly, the reaction showed an induction period at relatively low light intensities; the induction period decreases with increasing light intensity and photoinitiator concentration. The reaction trends and limitations were further observed in a solventless polymer-network-forming system, which exhibited similar copper and photoinitiator threshold behaviour. PMID:27711587
The impact of exercise intensity on the release of cardiac biomarkers in marathon runners.
Legaz-Arrese, Alejandro; George, Keith; Carranza-García, Luis Enrique; Munguía-Izquierdo, Diego; Moros-García, Teresa; Serrano-Ostáriz, Enrique
2011-12-01
We sought to determine the influence of exercise intensity on the release of cardiac troponin I (cTnI) and N-terminal pro-brain natriuretic peptide (NT-proBNP) in amateur marathon runners. Fourteen runners completed three exercise trials of the same duration but at exercise intensities corresponding to: (a) a competitive marathon [mean ± SD: heart rate 159 ± 7 beat min(-1), finish time 202 ± 14 min]; (b) 95% of individual anaerobic threshold [heart rate 144 ± 6 beat min(-1)] and; (c) 85% of individual anaerobic threshold [heart rate 129 ± 5 beat min(-1)]. cTnI and NT-proBNP were assayed from blood samples collected before, 30 min and 3 h post-exercise for each trial. cTnI and NT-proBNP were not different at baseline before each trial. After exercise at 85% of individual anaerobic threshold cTnI was not significantly elevated. Conversely, cTnI was elevated after exercise at 95% of individual anaerobic threshold (0.016 μg L(-1)) and to an even greater extent after exercise at competition intensity (0.054 μg L(-1)). Peak post-exercise values of NT-proBNP were elevated to a similar extent after all exercise trials (P < 0.05). The upper reference limit for cTnI (0.04 μg L(-1)) was exceeded in six subjects at competition intensity. No data for NT-proBNP surpassed its upper reference limit. Peak post-exercise values for cTnI and NT-proBNP were correlated with their respective baseline values. These data suggest exercise intensity influences the release of cTnI, but not NT-proBNP, and that competitive marathon running intensity is required for cTnI to be elevated over its upper reference limit.
Staley, Dennis; Kean, Jason W.; Cannon, Susan H.; Schmidt, Kevin M.; Laber, Jayme L.
2012-01-01
Rainfall intensity–duration (ID) thresholds are commonly used to predict the temporal occurrence of debris flows and shallow landslides. Typically, thresholds are subjectively defined as the upper limit of peak rainstorm intensities that do not produce debris flows and landslides, or as the lower limit of peak rainstorm intensities that initiate debris flows and landslides. In addition, peak rainstorm intensities are often used to define thresholds, as data regarding the precise timing of debris flows and associated rainfall intensities are usually not available, and rainfall characteristics are often estimated from distant gauging locations. Here, we attempt to improve the performance of existing threshold-based predictions of post-fire debris-flow occurrence by utilizing data on the precise timing of debris flows relative to rainfall intensity, and develop an objective method to define the threshold intensities. We objectively defined the thresholds by maximizing the number of correct predictions of debris flow occurrence while minimizing the rate of both Type I (false positive) and Type II (false negative) errors. We identified that (1) there were statistically significant differences between peak storm and triggering intensities, (2) the objectively defined threshold model presents a better balance between predictive success, false alarms and failed alarms than previous subjectively defined thresholds, (3) thresholds based on measurements of rainfall intensity over shorter duration (≤60 min) are better predictors of post-fire debris-flow initiation than longer duration thresholds, and (4) the objectively defined thresholds were exceeded prior to the recorded time of debris flow at frequencies similar to or better than subjective thresholds. Our findings highlight the need to better constrain the timing and processes of initiation of landslides and debris flows for future threshold studies. 
In addition, the methods used to define rainfall thresholds in this study represent a computationally simple means of deriving critical values for other studies of nonlinear phenomena characterized by thresholds.
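The objective threshold definition described above, maximizing correct predictions while minimizing Type I (false positive) and Type II (false negative) errors, can be implemented as a search that maximizes the true skill statistic (TPR − FPR). A minimal sketch with hypothetical storm data; the authors' exact objective function may differ:

```python
def objective_threshold(intensity, triggered):
    """Choose the rainfall-intensity threshold maximizing TPR - FPR
    over observed storms.
    intensity: rainfall intensity per storm (e.g. mm/h over 60 min)
    triggered: True if the storm produced a debris flow"""
    best_t, best_skill = None, float("-inf")
    for t in sorted(set(intensity)):
        tp = sum(1 for i, y in zip(intensity, triggered) if y and i >= t)
        fn = sum(1 for i, y in zip(intensity, triggered) if y and i < t)
        fp = sum(1 for i, y in zip(intensity, triggered) if not y and i >= t)
        tn = sum(1 for i, y in zip(intensity, triggered) if not y and i < t)
        tpr = tp / (tp + fn) if tp + fn else 0.0   # 1 - Type II error rate
        fpr = fp / (fp + tn) if fp + tn else 0.0   # Type I error rate
        if tpr - fpr > best_skill:
            best_t, best_skill = t, tpr - fpr
    return best_t

# Hypothetical 60-min intensities (mm/h) and debris-flow outcomes
intensity = [4, 6, 8, 10, 12, 14, 16, 18, 20, 25]
triggered = [False, False, False, False, True, False, True, True, True, True]
print(objective_threshold(intensity, triggered))  # 12 for this toy dataset
```

The same search is computationally trivial to repeat for each candidate duration, which is how short-duration and long-duration thresholds can be compared on equal footing.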
Nielsen, Flemming K; Egund, Niels; Peters, David; Jurik, Anne Grethe
2014-12-20
Longitudinal assessment of bone marrow lesions (BMLs) in knee osteoarthritis (KOA) by MRI is usually performed using semi-quantitative grading methods. Quantitative segmentation methods may be more sensitive in detecting change over time. The purpose of this study was to evaluate and compare the validity and the sensitivity to change of two quantitative MR segmentation methods for measuring BMLs in KOA, one computer-assisted (CAS) and one manual (MS). Twenty-two patients with KOA confined to the medial femoro-tibial compartment obtained MRI at baseline and follow-up (median 334 days apart). STIR, T1, and fat-saturated T1 post-contrast sequences were obtained using a 1.5 T system. The 44 sagittal STIR sequences were assessed independently by two readers for quantification of BML. The signal intensities (SIs) of the normal bone marrow in the lateral femoral condyles and tibial plateaus were used as threshold values. The volume of bone marrow with SIs exceeding the threshold values (BML) was measured in the medial femoral condyle and tibial plateau and related to the total volume of the condyles/plateaus. The 95% limits of agreement at baseline were used to determine the sensitivity to change. The mean threshold values of CAS and MS were almost identical, but the absolute and relative BML volumes differed, being 1319 mm3/10% and 1828 mm3/15% in the femur and 941 mm3/7% and 2097 mm3/18% in the tibia using CAS and MS, respectively. The BML volumes obtained by CAS and MS were significantly correlated, but the tissue changes measured were different: CAS measured the volume of voxels exceeding the threshold values, whereas MS also included intervening voxels with normal SI. The 95% limits of agreement were narrower by CAS than by MS; a significant change of relative BML by CAS was outside the limits of -2.0% to 4.7%, whereas the limits by MS were -6.9% to 8.2%. The BML changed significantly in 13 knees using CAS and in 10 knees by MS.
CAS was a reliable method for measuring BML and more sensitive to detect changes over time than MS. The BML volumes measured by the two methods differed but were significantly correlated.
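The CAS measurement principle, summing the volume of voxels whose signal intensity exceeds the normal-marrow threshold, reduces to a simple count. A toy sketch with invented voxel values (all numbers below are hypothetical, not from the study):

```python
def bml_volume(si_values, threshold, voxel_mm3):
    """CAS-style BML measurement: total volume of voxels whose
    signal intensity exceeds the normal-marrow threshold."""
    return sum(voxel_mm3 for si in si_values if si > threshold)

# Hypothetical STIR signal intensities in a region of interest
voxels = [80, 95, 130, 150, 90, 160, 170, 85, 140, 100]
threshold = 120      # e.g. derived from normal lateral-compartment marrow
voxel_mm3 = 0.5      # hypothetical voxel volume
absolute = bml_volume(voxels, threshold, voxel_mm3)
relative = 100 * absolute / (len(voxels) * voxel_mm3)  # % of region volume
print(absolute, relative)  # 2.5 mm^3, 50.0 %
```

The manual method differs precisely in that a reader outlines a lesion region, so voxels of normal signal intensity lying inside the outline are also counted, which explains the larger MS volumes reported above.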
O'Brien, Anna; Keidser, Gitte; Yeend, Ingrid; Hartley, Lisa; Dillon, Harvey
2010-12-01
Audiometric measurements through a hearing aid ('in-situ') may facilitate provision of hearing services where these are limited. This study investigated the validity and reliability of in-situ air conduction hearing thresholds measured with closed and open domes relative to thresholds measured with insert earphones, and explored sources of variability in the measures. Twenty-four adults with sensorineural hearing impairment attended two sessions in which thresholds and real-ear-to-dial-difference (REDD) values were measured. Without correction, significantly higher low-frequency thresholds in dB HL were measured in-situ than with insert earphones. Differences were due predominantly to differences in ear canal SPL, as measured with the REDD, which were attributed to leaking low-frequency energy. Test-retest data yielded higher variability with the closed dome coupling due to inconsistent seals achieved with this tip. For all three conditions, inter-participant variability in the REDD values was greater than intra-participant variability. Overall, in-situ audiometry is as valid and reliable as conventional audiometry provided appropriate REDD corrections are made and ambient sound in the test environment is controlled.
Sazykina, Tatiana G; Kryshev, Alexander I
2016-12-01
Lower threshold dose rates and confidence limits are quantified for lifetime radiation effects in mammalian animals from internally deposited alpha-emitting radionuclides. Extensive datasets on effects from internal alpha-emitters were compiled from the International Radiobiological Archives. In total, the compiled database includes 257 records, which were analyzed by means of non-parametric order statistics. The generic lower threshold for alpha-emitters in mammalian animals (combined datasets) is 6.6·10^-5 Gy day^-1. Thresholds for individual alpha-emitting elements differ considerably: plutonium and americium, 2.0·10^-5 Gy day^-1; radium, 2.1·10^-4 Gy day^-1. The threshold for chronic low-LET radiation was previously estimated at 1·10^-3 Gy day^-1. For low exposures, the following values of the alpha radiation weighting factor w_R for internally deposited alpha-emitters in mammals are quantified: w_R(α) = 15 as a generic value for the whole group of alpha-emitters; w_R(Pu) = 50 for plutonium; w_R(Am) = 50 for americium; w_R(Ra) = 5 for radium. These values are proposed to serve as radiation weighting factors in calculations of equivalent doses to non-human biota. The lower threshold dose rate for long-lived mammals (dogs) is significantly lower than the threshold for short-lived mammals (mice): 2.7·10^-5 Gy day^-1 and 2.0·10^-4 Gy day^-1, respectively. The difference in thresholds exactly reflects the relationship between the natural longevities of these two species. A graded scale of severity of lifetime radiation effects in mammals was developed based on the compiled datasets. Placed on this severity scale, the effects of internal alpha-emitters fall in zones of considerably lower dose rates than effects of the same severity caused by low-LET radiation.
RBE values, calculated for effects of equal severity, were found to depend on the intensity of chronic exposure: different RBE values are characteristic of low, moderate, and high lifetime exposures (30, 70, and 13, respectively). The results of the study provide a basis for selecting correct values of radiation weighting factors in dose assessments for non-human biota. Copyright © 2016 Elsevier Ltd. All rights reserved.
Alonso-Coello, Pablo; Montori, Victor M; Díaz, M Gloria; Devereaux, Philip J; Mas, Gemma; Diez, Ana I; Solà, Ivan; Roura, Mercè; Souto, Juan C; Oliver, Sven; Ruiz, Rafael; Coll-Vinent, Blanca; Gich, Ignasi; Schünemann, Holger J; Guyatt, Gordon
2015-12-01
Exploration of values and preferences in the context of anticoagulation therapy for atrial fibrillation (AF) remains limited. To better characterize the distribution of patient and physician values and preferences relevant to decisions regarding anticoagulation in patients with AF, we conducted interviews with patients at risk of developing AF and with physicians who manage patients with AF. We interviewed 96 outpatients and 96 physicians in a multicenter study and elicited the maximal increase in bleeding risk (threshold risk) that respondents would tolerate with warfarin vs. aspirin to achieve a reduction of three strokes in 100 patients over a 2-year period. We used the probabilistic version of the threshold technique. The median threshold risk for both patients and physicians was 10 additional bleeds (P = 0.7). In both groups, we observed large variability in the threshold number of bleeds, with wider variability in patients than in clinicians (patient range: 0-100; physician range: 0-50). We observed one cluster of patients and physicians who would tolerate <10 bleeds and another cluster of patients, but not physicians, who would accept more than 35. Our findings suggest wide variability in patient and physician values and preferences regarding the trade-off between strokes and bleeds. The results suggest that in individual decision making, physician and patient values and preferences will often be discordant; this mandates tailoring treatment to the individual patient's preferences. © 2014 John Wiley & Sons Ltd.
On the Estimation of the Cost-Effectiveness Threshold: Why, What, How?
Vallejo-Torres, Laura; García-Lorenzo, Borja; Castilla, Iván; Valcárcel-Nazco, Cristina; García-Pérez, Lidia; Linertová, Renata; Polentinos-Castro, Elena; Serrano-Aguilar, Pedro
2016-01-01
Many health care systems claim to incorporate the cost-effectiveness criterion in their investment decisions. Information on the system's willingness to pay per effectiveness unit, normally measured as quality-adjusted life-years (QALYs), however, is not available in most countries. This is partly because of the controversy that remains around the use of a cost-effectiveness threshold, about what the threshold ought to represent, and about the appropriate methodology to arrive at a threshold value. The aim of this article was to identify and critically appraise the conceptual perspectives and methodologies used to date to estimate the cost-effectiveness threshold. We provided an in-depth discussion of different conceptual views and undertook a systematic review of empirical analyses. Identified studies were categorized into the two main conceptual perspectives that argue that the threshold should reflect 1) the value that society places on a QALY and 2) the opportunity cost of investment to the system given budget constraints. These studies showed different underpinning assumptions, strengths, and limitations, which are highlighted and discussed. Furthermore, this review allowed us to compare the cost-effectiveness threshold estimates derived from different types of studies. We found that thresholds based on society's valuation of a QALY are generally larger than thresholds resulting from estimating the opportunity cost to the health care system. This implies that some interventions with positive social net benefits, as informed by individuals' preferences, might not be an appropriate use of resources under fixed budget constraints. Copyright © 2016 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
10 CFR 851.27 - Reference sources.
Code of Federal Regulations, 2011 CFR
2011-01-01
....nfpa.org. (v) American Conference of Governmental Industrial Hygienist (ACGIH), 1330 Kemper Meadow...) American Society of Mechanical Engineers (ASME), P.O. Box 2300 Fairfield, NJ 07007. Telephone: 800-843-2763...) American Conference of Governmental Industrial Hygienists, “Threshold Limit Values for Chemical Substances...
Preclinical studies of photodynamic therapy of intracranial tissues
NASA Astrophysics Data System (ADS)
Lilge, Lothar D.; Sepers, Marja; Park, Jane; O'Carroll, Cindy; Pournazari, Poupak; Prosper, Joe; Wilson, Brian C.
1997-05-01
The applicability and limitations of the photodynamic threshold model were investigated for an intracranial tumor (VX2) and normal brain tissues in a rabbit model. Photodynamic threshold values for four different photosensitizers, i.e., Photofrin, 5-aminolaevulinic acid (5-ALA)-induced Protoporphyrin IX (PPIX), tin ethyl etiopurpurin (SnET2), and chloroaluminum phthalocyanine (AlClPc), were determined based on measured light fluence distributions, macroscopic photosensitizer concentrations in various brain structures, and the histologically determined extent of tissue necrosis following PDT. For Photofrin, AlClPc, and SnET2, normal brain displayed a significantly lower threshold value than VX2 tumor. For 5-ALA-induced PPIX and SnET2, no or very little white matter damage, corresponding to very high or infinite threshold values, was observed. Additionally, the latter two photosensitizers showed significantly lower uptake in white matter compared to other brain structures and VX2 tumor. Normal brain structures lacking a blood-brain barrier, such as the choroid plexus and the meninges, showed high uptake for all photosensitizers and, hence, are at risk when exposed to light. Results to date suggest that the photodynamic threshold values are valid for white matter, cortex, and VX2 tumor. For clinical PDT of intracranial neoplasms, 5-ALA-induced PPIX and SnET2 appear the most promising for selective tumor necrosis. However, the photosensitizer concentration in each normal brain structure and the fluence distribution throughout the treatment volume and adjacent tissues at risk must be monitored to maximize the selectivity of PDT for intracranial tumors.
Cunningham, Thomas V
2016-01-01
Three common ethical principles for establishing the limits of parental authority in pediatric treatment decision-making are the harm principle, the principle of best interest, and the threshold view. This paper considers how these principles apply to a case of a premature neonate with multiple significant co-morbidities whose mother wanted all possible treatments, and whose health care providers wondered whether it would be ethically permissible to allow him to die comfortably despite her wishes. Whether and how these principles help in understanding what was morally right for the child is questioned. The paper concludes that the principles were of some value in understanding the moral geography of the case; however, this case reveals that common bioethical principles for medical decision-making are problematically value-laden because they are inconsistent with the widespread moral value of medical vitalism.
Interlaminar shear fracture toughness and fatigue thresholds for composite materials
NASA Technical Reports Server (NTRS)
O'Brien, T. Kevin; Murri, Gretchen B.; Salpekar, Satish A.
1987-01-01
Static and cyclic end-notched flexure tests were conducted on a graphite epoxy, a glass epoxy, and a graphite thermoplastic to determine their interlaminar shear fracture toughness and fatigue thresholds for delamination, expressed as limiting values of the mode II strain energy release rate, G-II, for delamination growth. The influence of precracking and data reduction schemes is discussed. Finite element analysis indicated that the beam theory calculation for G-II, with the transverse shear contribution included, was reasonably accurate over the entire range of crack lengths. Cyclic loading significantly reduced the critical G-II for delamination. A threshold value of the maximum cyclic G-II below which no delamination occurred after one million cycles was identified for each material. Also, residual static toughness tests were conducted on glass epoxy specimens that had undergone one million cycles without delamination. A linear mixed-mode delamination criterion was used to characterize the static toughness of several composite materials; however, a total-G threshold criterion appears to characterize the fatigue delamination durability of composite materials with a wide range of static toughnesses.
NASA Astrophysics Data System (ADS)
Oliver, Jeffrey W.; Stolarski, David J.; Noojin, Gary D.; Hodnett, Harvey M.; Imholte, Michelle L.; Rockwell, Benjamin A.; Kumru, Semih S.
2007-02-01
A series of experiments in a new animal model for retinal damage, the cynomolgus monkey (Macaca fascicularis), was conducted to determine the damage threshold for 12.5-nanosecond laser exposures at 1064 nm. These results provide a direct comparison to threshold values obtained in the rhesus monkey (Macaca mulatta), the model historically used in establishing retinal maximum permissible exposure (MPE) limits. In this study, the irradiance level of a collimated Gaussian laser beam of 2.5 mm diameter at the cornea was randomly varied to produce a rectangular grid of exposures on the retina. Exposure sites were fundoscopically evaluated at post-irradiance intervals of 1 hour and 24 hours. Probit analysis was performed on the dose-response data to obtain probability-of-response curves. The 50% probability of damage (ED50) values for 1 and 24 hours post-exposure are 28.5 (22.7-38.4) μJ and 17.0 (12.9-21.8) μJ, respectively. These values compare favorably to data obtained with the rhesus model, 28.7 (22.3-39.3) μJ and 19.1 (13.6-24.4) μJ, suggesting that the cynomolgus monkey may be a suitable replacement for the rhesus monkey in photoacoustic minimum visible lesion threshold studies.
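Probit analysis estimates the ED50 as the midpoint of a cumulative normal dose-response curve fitted to binary lesion outcomes. A crude grid-search sketch of the idea, with hypothetical dose-response data rather than the study's measurements (real probit software uses maximum likelihood with fiducial confidence limits):

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def ed50_probit(doses, responded):
    """Grid-search the (mu, sigma) of a cumulative-normal response curve
    that maximizes the Bernoulli log-likelihood; ED50 = mu."""
    best_mu, best_ll = None, float("-inf")
    lo, hi = min(doses), max(doses)
    for i in range(201):
        mu = lo + (hi - lo) * i / 200.0
        for j in range(1, 101):
            sigma = (hi - lo) * j / 100.0
            ll = 0.0
            for d, y in zip(doses, responded):
                p = min(max(norm_cdf((d - mu) / sigma), 1e-12), 1 - 1e-12)
                ll += math.log(p) if y else math.log(1.0 - p)
            if ll > best_ll:
                best_mu, best_ll = mu, ll
    return best_mu

# Hypothetical dose-response data (pulse energy in uJ vs. lesion at 24 h)
doses =     [5, 8, 10, 12, 14, 16, 18, 20, 25, 30]
responded = [0, 0,  0,  0,  0,  1,  1,  1,  1,  1]
ed50 = ed50_probit(doses, responded)
print(round(ed50, 1))  # ED50 lands near the 14-16 uJ transition
```

With perfectly separated toy data like this the fit simply centers on the transition; real ocular-damage data overlap, which is what gives the ED50 its confidence interval.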
Soil contamination in landfills: a case study of a landfill in Czech Republic
NASA Astrophysics Data System (ADS)
Adamcová, D.; Vaverková, M. D.; Bartoň, S.; Havlíček, Z.; Břoušková, E.
2016-02-01
A phytotoxicity test was used to assess the ecotoxicity of landfill soil. Sinapis alba L. was used as a bioindicator of heavy metals. Soil samples 1-8, taken from the landfill body, the edge of the landfill body, and its vicinity, meet the limits for the heavy metals Co, Cd, Pb, and Zn specified in the applicable legislation. Hg and Mn threshold values are not established in legislation, but values have been determined for the needs of the landfill operator. For the heavy metals Cr, Cu, and Ni, sample 2 exceeded the threshold values and attained the highest values of all the samples tested; for Cr and Ni its values were several times higher than those of the other samples. Samples 6 and 7 showed the second highest values for Cr, Cu, and Ni, and both exceeded the set limits. An increase in plant biomass was observed in plants growing on plates with soil samples, but no changes in appearance, slow growth, or necrotic lesions appeared. Ecotoxicity tests show that the tested soils (concentration of 50 %) collected from the landfill body, the edge of the landfill body, and its vicinity reach high germination capacities for seeds of Sinapis alba L. (101-137 %). At a concentration of 25 %, the tested soil samples exhibit lower germination capacities, in particular samples 3 to 8, yet the seed germination capacity in all eight samples ranges between 86 and 137 %.
Soil contaminations in landfill: a case study of the landfill in Czech Republic
NASA Astrophysics Data System (ADS)
Adamcová, D.; Vaverková, M. D.; Bartoň, S.; Havlíček, Z.; Břoušková, E.
2015-10-01
A phytotoxicity test was used to assess the ecotoxicity of landfill soil, with Sinapis alba L. as a heavy-metal bioindicator. Soil samples 1-8, taken from the landfill body, the edge of the landfill body, and its vicinity, meet the limits for the heavy metals Co, Cd, Pb, and Zn specified in the applicable legislation. Hg and Mn threshold values are not established in legislation, but values have been determined for the needs of the landfill operator. For the heavy metals Cr, Cu, and Ni, sample 2 exceeded the threshold values and attained the highest values of all the samples tested; for Cr and Ni its values were several times higher than those of the other samples. Samples 6 and 7 showed the second highest values for Cr, Cu, and Ni, and both exceeded the set limits. An increase in plant biomass was observed in plants growing on plates with soil samples, but no changes in appearance, slow growth, or necrotic lesions appeared. Ecotoxicity tests show that the tested soils (concentration of 50 %) collected from the landfill body, the edge of the landfill body, and its vicinity reach high germination capacities for seeds of Sinapis alba L. (101-137 %). At a concentration of 25 %, the tested soil samples exhibit lower germination capacities, in particular samples 3 to 8, yet the seed germination capacity in all 8 samples ranges between 86 and 137 %.
Goodwin accelerator model revisited with fixed time delays
NASA Astrophysics Data System (ADS)
Matsumoto, Akio; Merlone, Ugo; Szidarovszky, Ferenc
2018-05-01
The dynamics of Goodwin's accelerator business cycle model are reconsidered. The model is characterized by a nonlinear accelerator and an investment time delay. The role of the nonlinearity in the birth of persistent oscillations is fully discussed in the existing literature; by contrast, little of the role of the delay has yet been revealed. The purpose of this paper is to show that the delay really matters. In the original framework of Goodwin [6], it is first demonstrated that there is a threshold value of the delay: limit cycles arise for delays smaller than the threshold, and sawtooth oscillations for larger ones. In the extended framework, in which a consumption or saving delay is introduced in addition to the investment delay, three main results are demonstrated under the assumption that the investment and consumption delays have identical length. First, the dynamics with a consumption delay are basically the same as those of the single-delay model. Second, with a saving delay, the steady state can coexist with stable and unstable limit cycles in the stable case. Third, in the unstable case, there is an interval of delay values in which either the limit cycle or the sawtooth oscillation emerges, depending on the choice of the constant initial function.
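A fixed time delay enters such models through the state at t − τ. The sketch below (a hypothetical right-hand side, not Goodwin's actual equations) shows the standard forward-Euler treatment of a delay differential equation with a constant initial function, the same kind of initial data the abstract refers to.

```python
import math

def simulate_dde(f, tau, x0, t_end, dt=0.01):
    """Forward-Euler integration of x'(t) = f(x(t), x(t - tau)).

    The constant initial function x(t) = x0 for t <= 0 plays the role of the
    'constant initial function' mentioned in the abstract.
    """
    n_delay = int(round(tau / dt))      # delay expressed in time steps
    history = [x0] * (n_delay + 1)      # values of x on [-tau, 0]
    xs = [x0]
    for _ in range(int(round(t_end / dt))):
        x_now = xs[-1]
        x_lag = history[0]              # x(t - tau)
        x_next = x_now + dt * f(x_now, x_lag)
        history.pop(0)
        history.append(x_next)
        xs.append(x_next)
    return xs

# Hypothetical nonlinear "accelerator": saturating response to the lagged state.
traj = simulate_dde(lambda x, xl: -x + math.tanh(2.0 * xl),
                    tau=1.0, x0=0.1, t_end=50.0)
```

Varying `tau` in such a scheme is how the threshold behavior described above would be explored numerically.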
Screening of phthalate esters in 47 branded perfumes.
Al-Saleh, Iman; Elkhatib, Rola
2016-01-01
In the last few years, the use of phthalates in perfumes has gained attention because these chemicals are sometimes added intentionally as a solvent and a fixative. Five phthalate esters, dimethyl phthalate (DMP), diethyl phthalate (DEP), dibutyl phthalate (DBP), benzyl butyl phthalate (BBP), and diethyl hexyl phthalate (DEHP), were measured in 47 branded perfumes using headspace solid phase microextraction (SPME) followed by gas chromatography-mass spectrometry (GC-MS). The results revealed considerable amounts of phthalates in all 47 brands with detection frequencies > limit of quantitation in the following order: DEP (47/47) > DMP (47/47) > BBP (47/47) > DEHP (46/47) > DBP (23/45). Of the 47 brands, 68.1, 72.3, 85.1, 36.2, and 6.7 % had DEP, DMP, BBP, DEHP, and DBP levels, respectively, above their reported threshold limits. Of these phthalates, DEP had the highest mean value (1621.625 ppm), with a maximum of 23,649.247 ppm. The use of DEP in the perfume industry is not restricted because it does not pose any known health risks for humans. DMP had the second highest level detected in the perfumes, with a mean value of 30.202 ppm and a maximum of 405.235 ppm. Although DMP may have some uses in cosmetics, it is not as commonly used as DEP, and again, there are no restrictions on its use. The levels of BBP were also high, with a mean value of 8.446 ppm and a maximum of 186.770 ppm. Although the EU banned the use of BBP in cosmetics, 27 of the tested perfumes had BBP levels above the threshold limit of 0.1 ppm. The mean value of DEHP found in this study was 5.962 ppm, with a maximum of 147.536 ppm. In spite of its prohibition by the EU, 7/28 perfumes manufactured in European countries had DEHP levels above the threshold limit of 1 ppm. The DBP levels were generally low, with a mean value of 0.0305 ppm and a maximum value of 0.594 ppm.
The EU banned the use of DBP in cosmetics; however, we found three brands above the threshold limit of 0.1 ppm, all manufactured in European countries. The results of this study are alarming and need to be brought to the attention of the public and health regulators. Although some phthalate compounds are still used in cosmetics, many scientists and environmental activists have argued that phthalates are endocrine-disrupting chemicals that have not yet been proven to be safe for any use, including cosmetics. Phthalates may also exhibit differing degrees of estrogenic activity. Furthermore, we should not dismiss the widespread use of phthalates in everyday products and exposure to these chemicals from sources such as food, medications, and other personal care products.
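The exceedance screening described above reduces to comparing each measured level with the relevant threshold. A minimal sketch with hypothetical levels; only the 0.1 ppm EU limit and the 0.594 ppm maximum are taken from the abstract:

```python
# Hypothetical measured levels (ppm) for one phthalate across a few brands;
# the 0.1 ppm threshold mirrors the EU cosmetics limit cited in the abstract.
levels_ppm = [0.02, 0.15, 0.594, 0.0305, 0.08, 0.30]
threshold_ppm = 0.1

above = [v for v in levels_ppm if v > threshold_ppm]
frequency = len(above) / len(levels_ppm)
print(f"{len(above)}/{len(levels_ppm)} samples exceed {threshold_ppm} ppm "
      f"({100 * frequency:.1f}%)")
```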
42 CFR 423.336 - Risk-sharing arrangements.
Code of Federal Regulations, 2010 CFR
2010-10-01
... range as follows: (A) First threshold lower limit. The first threshold lower limit of the corridor is equal to— (1) The target amount for the plan; minus (2) An amount equal to the first threshold risk.... (B) Second threshold lower limit. The second threshold lower limit of the corridor is equal to— (1...
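The excerpt above is truncated, but the corridor arithmetic it describes (target amount minus a threshold risk amount) can be sketched. All numbers, and the assumption that each risk amount is a percentage of the target, are hypothetical; the regulation's exact definitions should be consulted:

```python
def corridor_lower_limits(target, first_risk_pct, second_risk_pct):
    """Illustrative corridor arithmetic for the truncated 42 CFR 423.336 excerpt.

    Assumes each 'threshold risk amount' is the given fraction of the target
    amount (a simplifying assumption, not the regulation's wording)."""
    first_lower = target - first_risk_pct * target
    second_lower = target - second_risk_pct * target
    return first_lower, second_lower

first, second = corridor_lower_limits(target=1_000_000,
                                      first_risk_pct=0.05,
                                      second_risk_pct=0.10)
```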
Dobie, Robert A; Wojcik, Nancy C
2015-01-01
Objectives The US Occupational Safety and Health Administration (OSHA) Noise Standard provides the option for employers to apply age corrections to employee audiograms to account for the contribution of ageing when determining whether a standard threshold shift has occurred. Current OSHA age-correction tables are based on 40-year-old data, with small samples and an upper age limit of 60 years. By comparison, recent data (1999–2006) show that hearing thresholds in the US population have improved. Because hearing thresholds have improved, and because older people are increasingly represented in noisy occupations, the OSHA tables no longer represent the current US workforce. This paper presents 2 options for updating the age-correction tables and extending values to age 75 years using recent population-based hearing survey data from the US National Health and Nutrition Examination Survey (NHANES). Both options provide scientifically derived age-correction values that can be easily adopted by OSHA to expand their regulatory guidance to include older workers. Methods Regression analysis was used to derive new age-correction values from audiometric data in the 1999–2006 US NHANES. NHANES median better-ear thresholds were fit to simple polynomial equations, and new age-correction values were generated for both men and women for ages 20–75 years. Results The new age-correction values are presented as 2 options. The preferred option is to replace the current OSHA tables with the values derived from the NHANES median better-ear thresholds for ages 20–75 years. The alternative option is to retain the current OSHA age-correction values up to age 60 years and use the NHANES-based values for ages 61–75 years. Conclusions Recent NHANES data offer a simple solution to the need for updated, population-based, age-correction tables for OSHA.
The options presented here provide scientifically valid and relevant age-correction values which can be easily adopted by OSHA to expand their regulatory guidance to include older workers. PMID:26169804
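The age-correction procedure amounts to evaluating a fitted polynomial at two ages and taking the difference. A sketch with hypothetical coefficients; the actual NHANES fits are sex- and frequency-specific and are not reproduced here:

```python
def median_threshold(age, coeffs):
    """Evaluate a polynomial fit of median hearing threshold (dB HL) vs age.
    `coeffs` here are hypothetical, lowest power first."""
    return sum(c * age ** k for k, c in enumerate(coeffs))

def age_correction(current_age, baseline_age, coeffs):
    # Correction = expected ageing-related threshold shift between the two ages.
    return median_threshold(current_age, coeffs) - median_threshold(baseline_age, coeffs)

# Illustrative quadratic: thresholds rise slowly, then faster, with age (hypothetical).
coeffs = [5.0, -0.10, 0.01]        # 5 - 0.10*age + 0.01*age^2
corr = age_correction(65, 25, coeffs)
```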
Wall, Michael; Zamba, Gideon K D; Artes, Paul H
2018-01-01
It has been shown that threshold estimates below approximately 20 dB have little effect on the ability to detect visual field progression in glaucoma. We aimed to compare stimulus size V to stimulus size III, in areas of visual damage, to confirm these findings by using (1) a different dataset, (2) different techniques of progression analysis, and (3) an analysis to evaluate the effect of censoring on mean deviation (MD). In the Iowa Variability in Perimetry Study, 120 glaucoma subjects were tested every 6 months for 4 years with size III SITA Standard and size V Full Threshold. Progression was determined with three complementary techniques: pointwise linear regression (PLR), permutation of PLR, and linear regression of the MD index. All analyses were repeated on "censored" datasets in which threshold estimates below a given criterion value were set to equal the criterion value. Our analyses confirmed previous observations that threshold estimates below 20 dB contribute much less to visual field progression than estimates above this range. These findings were broadly similar with stimulus sizes III and V. Censoring of threshold values < 20 dB has relatively little impact on the rates of visual field progression in patients with mild to moderate glaucoma. Size V, which has lower retest variability, performs at least as well as size III for longitudinal glaucoma progression analysis and appears to have a larger useful dynamic range owing to the upper sensitivity limit being higher.
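The censoring operation used for the "censored" datasets is simply a floor at the criterion value:

```python
def censor(thresholds_db, criterion_db=20.0):
    """Set every threshold estimate below the criterion equal to the criterion,
    as in the study's censored datasets (field values here are hypothetical)."""
    return [max(t, criterion_db) for t in thresholds_db]

field = [31.0, 18.5, 4.0, 25.0, 0.0]
censored = censor(field)        # low estimates are floored at 20 dB
```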
Hopke, P K; Liu, C; Rubin, D B
2001-03-01
Many chemical and environmental data sets are complicated by the existence of fully missing values or censored values known to lie below detection thresholds. For example, week-long samples of airborne particulate matter were obtained at Alert, NWT, Canada, between 1980 and 1991, where some of the concentrations of 24 particulate constituents were coarsened in the sense of being either fully missing or below detection limits. To facilitate scientific analysis, it is appealing to create complete data by filling in missing values so that standard complete-data methods can be applied. We briefly review commonly used strategies for handling missing values and focus on the multiple-imputation approach, which generally leads to valid inferences when faced with missing data. Three statistical models are developed for multiply imputing the missing values of airborne particulate matter. We expect that these models are useful for creating multiple imputations in a variety of incomplete multivariate time series data sets.
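A toy sketch of the multiple-imputation idea: each censored entry is filled with a draw below the detection limit, m times over, and the completed datasets are then analyzed separately and pooled. Uniform draws are a stand-in for the paper's model-based imputations, and all values are hypothetical:

```python
import random

def multiply_impute(values, detection_limit, m=5, seed=1):
    """Create m completed datasets: censored entries (None) are replaced by
    draws below the detection limit. Uniform draws are a toy stand-in for
    model-based imputation."""
    rng = random.Random(seed)
    completed = []
    for _ in range(m):
        filled = [v if v is not None else rng.uniform(0.0, detection_limit)
                  for v in values]
        completed.append(filled)
    return completed

data = [0.8, None, 1.2, None, 0.5]     # None marks a value below the 0.3 limit
datasets = multiply_impute(data, detection_limit=0.3)
# Pool the per-dataset means (a simplified form of Rubin's combining rules).
pooled_mean = sum(sum(d) / len(d) for d in datasets) / len(datasets)
```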
NASA Astrophysics Data System (ADS)
Goeritno, Arief; Rasiman, Syofyan
2017-06-01
The performance of the bulk oil circuit breaker (BOCB) at the Bogor Baru substation (the State Electricity Company, PLN) was examined with respect to its parameters. It was found that (1) the dielectric strength of the oil still qualifies it as an insulating and cooling medium, because the average measured value remains above the minimum allowed limit of 80 kV/2.5 cm (32 kV/cm); (2) the simultaneity of the CB's contacts is still acceptable, so the BOCB can remain in operation, because the time difference between the highest and lowest values when the contacts open/close is less than 10 milliseconds (Δt < 10 ms, meeting the PLN standards as recommended by Alsthom); and (3) the resistance parameters meet the standards: (i) the insulation resistance is far above the allowed threshold, the minimum standard being above 2,000 MΩ (if meeting the ANSI standards) or 2,000 MΩ (if meeting the PLN standards), and (ii) the contact resistance is well below the allowed maximum, the standard being below 350 µΩ (if meeting the ANSI standards) or 200 µΩ (if meeting the PLN standards). The grounding resistance equals the specified maximum limit of 0.5 Ω (if meeting the PLN standard).
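The examination reduces to comparing measurements with the quoted limits. A sketch with hypothetical measured values and the PLN/ANSI limits cited above:

```python
def check_bocb(dielectric_kv_per_cm, insulation_mohm, contact_uohm, grounding_ohm):
    """Pass/fail checks against the limits quoted in the abstract (PLN values);
    the measured values passed in below are hypothetical."""
    return {
        "dielectric": dielectric_kv_per_cm >= 32.0,   # min 32 kV/cm
        "insulation": insulation_mohm > 2000.0,       # min 2,000 MOhm
        "contacts":   contact_uohm < 200.0,           # max 200 uOhm (PLN)
        "grounding":  grounding_ohm <= 0.5,           # max 0.5 Ohm
    }

result = check_bocb(dielectric_kv_per_cm=41.0, insulation_mohm=5200.0,
                    contact_uohm=150.0, grounding_ohm=0.5)
```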
NASA Astrophysics Data System (ADS)
Hinsby, K.; Markager, S.; Kronvang, B.; Windolf, J.; Sonnenborg, T. O.; Thorling, L.
2012-02-01
Intensive farming has severe impacts on the chemical status of groundwater and streams and consequently on the ecological status of dependent ecosystems. Eutrophication is a widespread problem in lakes and marine waters. Common problems are hypoxia, algal blooms and fish kills, and loss of water clarity, underwater vegetation, biodiversity, and recreational value. In this paper we evaluate the nitrogen (N) and phosphorus (P) chemistry of groundwater and surface water in a coastal catchment, the loadings and sources of N and P and their effect on the ecological status of an estuary. We calculate the necessary reductions in N and P loadings to the estuary for obtaining a good ecological status, which we define based on the number of days with N and P limitation, and the equivalent stream and groundwater threshold values assuming two different management options. The calculations are performed by the combined use of empirical models and a physically based 3-D integrated hydrological model of the whole catchment. The assessment of the ecological status indicates that the N and P loads to the investigated estuary should be reduced to 52% and 56% of the current loads, respectively, to restore good ecological status. Model estimates show that threshold total N concentrations should be in the range of 2.9 to 3.1 mg l-1 in inlet freshwater to Horsens Estuary and 6.0 to 9.3 mg l-1 in shallow aerobic groundwater (∼27-41 mg l-1 of nitrate), depending on the management measures implemented in the catchment. The situation for total P is more complex but data indicate that groundwater threshold values are not needed. The inlet freshwater threshold value for total P to Horsens Estuary for the selected management options is 0.084 mg l-1. Regional climate models project increasing winter precipitation and runoff in the investigated region resulting in increasing runoff and nutrient loads to coastal waters if present land use and farming practices continue.
Hence, lower threshold values are required in the future to ensure good status of all water bodies and ecosystems.
NASA Astrophysics Data System (ADS)
Hinsby, K.; Markager, S.; Kronvang, B.; Windolf, J.; Sonnenborg, T. O.; Thorling, L.
2012-08-01
Intensive farming has severe impacts on the chemical status of groundwater and streams and consequently on the ecological status of dependent ecosystems. Eutrophication is a widespread problem in lakes and marine waters. Common problems are hypoxia, algal blooms, fish kills, and loss of water clarity, underwater vegetation, biodiversity and recreational value. In this paper we evaluate the nitrogen (N) and phosphorus (P) concentrations of groundwater and surface water in a coastal catchment, the loadings and sources of N and P, and their effect on the ecological status of an estuary. We calculate the necessary reductions in N and P loadings to the estuary for obtaining a good ecological status, which we define based on the number of days with N and P limitation, and the corresponding stream and groundwater threshold values assuming two different management options. The calculations are performed by the combined use of empirical models and a physically based 3-D integrated hydrological model of the whole catchment. The assessment of the ecological status indicates that the N and P loads to the investigated estuary should be reduced to levels corresponding to 52 and 56% of the current loads, respectively, to restore good ecological status. Model estimates show that threshold total N (TN) concentrations should be in the range of 2.9 to 3.1 mg l-1 in inlet freshwater (streams) to Horsens estuary and 6.0 to 9.3 mg l-1 in shallow aerobic groundwater (∼ 27-41 mg l-1 of nitrate), depending on the management measures implemented in the catchment. The situation for total P (TP) is more complex, but data indicate that groundwater threshold values are not needed. The stream threshold value for TP to Horsens estuary for the selected management options is 0.084 mg l-1. 
Regional climate models project increasing winter precipitation and runoff in the investigated region resulting in increasing runoff and nutrient loads to the Horsens estuary and many other coastal waters if present land use and farming practices continue. Hence, lower threshold values are required in many coastal catchments in the future to ensure good status of water bodies and ecosystems.
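To first order, the link between a load target and a stream threshold concentration is allowable load divided by freshwater runoff. A back-of-envelope sketch in which the load and runoff figures are hypothetical; only the "reduce to 52% of current loads" target is taken from the abstract:

```python
def stream_threshold_mg_per_l(current_load_t, allowed_fraction, runoff_million_m3):
    """Back-of-envelope link between a load target and a concentration threshold:
    threshold = allowable annual load / annual runoff volume. The load and
    runoff below are hypothetical illustrative inputs."""
    allowable_load_t = current_load_t * allowed_fraction
    # tonnes -> mg (1 t = 1e9 mg); million m3 -> litres (1e6 m3 = 1e9 L)
    return allowable_load_t * 1e9 / (runoff_million_m3 * 1e9)

tn_threshold = stream_threshold_mg_per_l(current_load_t=580,
                                         allowed_fraction=0.52,
                                         runoff_million_m3=100)
```

For these illustrative inputs the result is about 3.0 mg l-1, the same order as the reported 2.9-3.1 mg l-1 TN range.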
Grantz, Erin; Haggard, Brian; Scott, J Thad
2018-06-12
We calculated four median datasets for three parameters (chlorophyll a, Chl a; total phosphorus, TP; and transparency) for approximately 100 Texas, USA reservoirs using multiple approaches to handling censored observations, including substituting fractions of the quantification limit (QL; dataset 1 = 1QL, dataset 2 = 0.5QL) and statistical methods for censored datasets (datasets 3-4). Trend analyses of differences between dataset 1 and dataset 3 medians indicated that percent difference increased linearly above thresholds in percent censored data (%Cen). This relationship was extrapolated to estimate medians for site-parameter combinations with %Cen > 80%, which were combined with dataset 3 as dataset 4. Changepoint analysis of Chl a- and transparency-TP relationships indicated threshold differences up to 50% between datasets. Recursive analysis identified secondary thresholds in dataset 4. Threshold differences show that information introduced via substitution, or missing due to limitations of statistical methods, biased values, underestimated error, and inflated the strength of TP thresholds identified in datasets 1-3. Analysis of covariance identified differences in linear regression models relating transparency to TP between datasets 1, 2, and the more statistically robust datasets 3-4. Study findings identify high-risk scenarios for biased analytical outcomes when using substitution. These include a high probability of median overestimation when %Cen > 50-60% for a single QL, or when %Cen is as low as 16% for multiple QLs. Changepoint analysis was uniquely vulnerable to substitution effects when using medians from sites with %Cen > 50%. Linear regression analysis was less sensitive to substitution and missing-data effects, but differences in model parameters for transparency cannot be discounted and could be magnified by log-transformation of the variables.
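The substitution datasets can be reproduced mechanically; with 50% censoring, the two substitution choices already shift the median, illustrating the bias the study quantifies. The observations and the QL below are hypothetical:

```python
from statistics import median

def substitute_censored(values, ql, fraction):
    """Replace censored observations (None, i.e. < QL) with fraction * QL,
    as in the study's substitution datasets (1.0*QL and 0.5*QL)."""
    return [v if v is not None else fraction * ql for v in values]

obs = [12.0, None, 30.0, None, 18.0, None]   # None: below a hypothetical QL of 10
dataset1 = substitute_censored(obs, ql=10.0, fraction=1.0)
dataset2 = substitute_censored(obs, ql=10.0, fraction=0.5)
m1, m2 = median(dataset1), median(dataset2)  # medians diverge at 50% censoring
```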
Safety in the Chemical Laboratory.
ERIC Educational Resources Information Center
Gerlach, Rudolph
1986-01-01
Background information is provided on the registered trademark "TLV" (Threshold Limit Value), the term used to express tolerable concentrations. The TLV of a compound is an estimate extrapolated from some defined damage to humans or animals at higher concentrations, or obtained by drawing analogies with similar compounds. (JN)
78 FR 37818 - Request for Information on Toluene Diisocyanates
Federal Register 2010, 2011, 2012, 2013, 2014
2013-06-24
.... engineering controls, work practices, personal protective equipment, exposure data before and after... Conference of Governmental Industrial Hygienists (ACGIH) threshold limit value (TLV) for TDI is 0.005 ppm... vitro and in vivo studies. (7) Information on control measures (e.g., engineering controls, work...
30 CFR 75.322 - Harmful quantities of noxious gases.
Code of Federal Regulations, 2013 CFR
2013-07-01
... Section 75.322 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR COAL MINE SAFETY AND HEALTH MANDATORY SAFETY STANDARDS-UNDERGROUND COAL MINES Ventilation § 75.322 Harmful... Governmental Industrial Hygienists in “Threshold Limit Values for Substance in Workroom Air” (1972). Detectors...
30 CFR 75.322 - Harmful quantities of noxious gases.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Section 75.322 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR COAL MINE SAFETY AND HEALTH MANDATORY SAFETY STANDARDS-UNDERGROUND COAL MINES Ventilation § 75.322 Harmful... Governmental Industrial Hygienists in “Threshold Limit Values for Substance in Workroom Air” (1972). Detectors...
n-hexane polyneuropathy in Japan: a review of n-hexane poisoning and its preventive measures.
Takeuchi, Y
1993-07-01
n-Hexane is used in industry as a solvent for adhesives and in dry cleaning and vegetable oil extraction. In 1963, the first case of severe polyneuropathy suspected to be caused by n-hexane was referred to us. Case studies, animal experiments, and field surveys on n-hexane poisoning were conducted, and preventive measures such as threshold limit value revision and biological monitoring were also studied. Here I briefly review the history of our investigations of n-hexane poisoning and its preventive measures in Japan. n-Hexane can cause overt polyneuropathy in workers exposed to more than 100 ppm as a time-weighted average (TWA) concentration. The present threshold limit value of 40 ppm in Japan is considered low enough to prevent subclinical impairment of the peripheral nerves caused by n-hexane. Urinary 2,5-hexanedione could be a good indicator for biological monitoring of n-hexane exposure. About 2.2 mg/liter of 2,5-hexanedione, measured by our improved method, corresponds to an exposure of 40 ppm (TWA) of n-hexane.
Ozenc, E; Seker, E; Baki Acar, D; Birdane, M K; Darbaz, I; Dogan, N
2011-12-01
This study investigated the bacterial agents causing sub-clinical mastitis and the mean somatic cell counts (SCC) of milk in Pirlak sheep at mid-lactation. The percentage of infected udder halves was 11.4% (53/464). The most frequently isolated species were coagulase-negative staphylococci (CNS) (64.2%), followed by Staphylococcus aureus (24.5%) and Escherichia coli (11.3%). Among the CNS, the most common species was Staphylococcus epidermidis (38.2%). The other species isolated from milk samples were Staphylococcus xylosus (17.7%), Staphylococcus chromogenes (14.7%), Staphylococcus simulans (8.8%) and Staphylococcus hyicus (8.8%). The mean SCC for culture-positive and culture-negative samples was 1742×10^3 and 161×10^3 cells/ml, respectively. A significant difference (p<0.05) in SCC values was determined between the groups with and without microbial growth. The threshold limit for SCC was 374×10^3 cells/ml for Pirlak sheep. In conclusion, SCC is considered an important predictor of sub-clinical mastitis in Pirlak sheep. This is the first study to describe the bacterial agents causing sub-clinical mastitis and the threshold limit for SCC in Pirlak sheep in Turkey. © 2011 Blackwell Verlag GmbH.
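Applying the reported SCC threshold as a classifier is straightforward; the two sample values below are the group means quoted in the abstract:

```python
SCC_THRESHOLD = 374e3   # cells/ml, the threshold reported for Pirlak sheep

def flag_subclinical_mastitis(scc_cells_per_ml):
    """Flag a milk sample as suspect when its SCC exceeds the reported threshold."""
    return scc_cells_per_ml > SCC_THRESHOLD

samples = {"culture_positive_mean": 1742e3, "culture_negative_mean": 161e3}
flags = {name: flag_subclinical_mastitis(scc) for name, scc in samples.items()}
```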
NASA Technical Reports Server (NTRS)
Long, Edward R., Jr.; Long, Sheila Ann T.; Gray, Stephanie L.; Collins, William D.
1989-01-01
The threshold values of total absorbed dose for causing changes in tensile properties of a polyetherimide film and the limitations of the absorbed dose rate for accelerated-exposure evaluation of the effects of electron radiation in geosynchronous orbit were studied. Total absorbed doses from 1 kGy to 100 MGy and absorbed dose rates from 0.01 MGy/hr to 100 MGy/hr were investigated, where 1 Gy equals 100 rads. Total doses less than 2.5 MGy did not significantly change the tensile properties of the film whereas doses higher than 2.5 MGy significantly reduced elongation-to-failure. There was no measurable effect of the dose rate on the tensile properties for accelerated electron exposures.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tereshchenko, S. A., E-mail: tsa@miee.ru; Savelyev, M. S.; Podgaetsky, V. M.
A threshold model is described which permits one to determine the properties of limiters for high-powered laser light. It takes into account the threshold characteristics of the nonlinear optical interaction between the laser beam and the limiter working material. The traditional non-threshold model is a particular case of the threshold model when the limiting threshold is zero. The nonlinear characteristics of carbon nanotubes in liquid and solid media are obtained from experimental Z-scan data. Specifically, the nonlinear threshold effect was observed for aqueous dispersions of nanotubes, but not for nanotubes in solid polymethylmethacrylate. The threshold model fits the experimental Z-scan data better than the non-threshold model. Output characteristics were obtained that integrally describe the nonlinear properties of the optical limiters.
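The distinction between the threshold and non-threshold models can be caricatured with a piecewise transfer function. This is a conceptual sketch only, with hypothetical parameters; the paper's model is fitted to Z-scan data:

```python
def limiter_output(f_in, f_th, linear_t=0.9, clamp=None):
    """Toy threshold limiter: linear transmission below the limiting
    threshold f_th, clamped output above it. With f_th = 0 and a fixed
    clamp, this reduces to a non-threshold limiter, mirroring the
    abstract's remark that the non-threshold model is a special case."""
    out = linear_t * f_in
    if f_in > f_th:
        ceiling = clamp if clamp is not None else linear_t * f_th
        out = min(out, ceiling)
    return out

low = limiter_output(0.5, f_th=1.0)    # below threshold: linear response
high = limiter_output(5.0, f_th=1.0)   # above threshold: clamped output
```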
NASA Astrophysics Data System (ADS)
Fereydooni, H.; Mojeddifar, S.
2017-09-01
This study introduces a different procedure for implementing the matched filtering (MF) algorithm on ASTER images to obtain a distribution map of alteration minerals in the northwestern part of the Kerman Cenozoic Magmatic Arc (KCMA). This region contains many areas with porphyry copper mineralization, such as Meiduk, Abdar, Kader, Godekolvari, Iju, Serenu, Chahfiroozeh and Parkam; argillization, sericitization and propylitization are the most common types of hydrothermal alteration in the area. Matched filtering results were produced for the alteration minerals as matched filtering scores, called MF images. To identify the pixels which contain only one material (endmember), an appropriate threshold value should be applied to the MF image: the chosen threshold classifies an MF image into background and target pixels. This article argues that the current thresholding process (the choice of a single threshold) misclassifies pixels in the MF image. To address the issue, we introduce the directed matched filtering (DMF) algorithm, in which a spectral signature-based filter (SSF) is used instead of the thresholding process. The SSF is a user-defined rule package containing numerical descriptions of the spectral reflectance of the alteration minerals; in it, each spectral band is bounded by an upper and a lower limit for each alteration mineral. SSF rules were developed for chlorite, kaolinite, alunite, and muscovite to map the alteration zones. Validation showed, first, that selecting a contiguous range of MF values does not identify the desired pixels and, second, that a considerable number of pure pixels was unexpectedly observed at MF scores below the threshold value. A comparison between the DMF results and field studies showed an accuracy of 88.51%.
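An SSF-style rule package is essentially a set of per-band reflectance windows for each mineral. A sketch in which the band names and window values are entirely hypothetical:

```python
# Hypothetical SSF rule package: per-mineral lower/upper reflectance windows
# for a few bands (band identifiers and ranges are illustrative only).
SSF = {
    "kaolinite": {"b5": (0.30, 0.55), "b6": (0.20, 0.40)},
    "chlorite":  {"b5": (0.10, 0.25), "b8": (0.05, 0.20)},
}

def classify_pixel(pixel, rules=SSF):
    """Return the first mineral whose every band window contains the pixel's
    reflectance; None if no rule matches (background)."""
    for mineral, windows in rules.items():
        if all(lo <= pixel.get(band, float("nan")) <= hi
               for band, (lo, hi) in windows.items()):
            return mineral
    return None

label = classify_pixel({"b5": 0.42, "b6": 0.31})
```

A missing band yields NaN, which fails both comparisons, so incomplete pixels fall through to background.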
NASA Astrophysics Data System (ADS)
Zha, N.; Capaldi, D. P. I.; Pike, D.; McCormack, D. G.; Cunningham, I. A.; Parraga, G.
2015-03-01
Pulmonary x-ray computed tomography (CT) may be used to characterize emphysema and airways disease in patients with chronic obstructive pulmonary disease (COPD). One analysis approach, parametric response mapping (PRM), utilizes registered inspiratory and expiratory CT image volumes and CT-density-histogram thresholds, but there is no consensus regarding the threshold values used, or their clinical meaning. Principal-component-analysis (PCA) of the CT density histogram can be exploited to quantify emphysema using data-driven CT-density-histogram thresholds. Thus, the objective of this proof-of-concept demonstration was to develop a PRM approach using PCA-derived thresholds in COPD patients and ex-smokers without airflow limitation. Methods: Fifteen COPD ex-smokers and 5 normal ex-smokers were evaluated. Thoracic CT images were acquired at full inspiration and full expiration, and these images were non-rigidly co-registered. PCA was performed on the CT density histograms, from which the components with eigenvalues greater than one were summed. Since the values of the principal component curve correlate directly with the variability in the sample, the maximum and minimum points on the curve were used as threshold values for the PCA-adjusted PRM technique. Results: A significant correlation was determined between conventional and PCA-adjusted PRM with 3He MRI apparent diffusion coefficient (p<0.001), with CT RA950 (p<0.0001), as well as with 3He MRI ventilation defect percent, a measurement of both small airways disease (p=0.049 and p=0.06, respectively) and emphysema (p=0.02). Conclusions: PRM generated using PCA thresholds of the CT density histogram showed significant correlations with CT and 3He MRI measurements of emphysema, but not airways disease.
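The data-driven threshold selection can be sketched as: take the leading principal component of the subjects' density histograms and read off the bins where that component curve peaks and dips. The histograms below are synthetic, and the clinical pipeline is considerably more involved:

```python
import numpy as np

def pca_thresholds(density_histograms):
    """Derive data-driven density thresholds: compute the leading principal
    component of the subjects' density histograms and return the bin indices
    where that component attains its maximum and minimum.
    Input shape: (n_subjects, n_bins)."""
    X = density_histograms - density_histograms.mean(axis=0)
    # Rows of vt are principal axes over the histogram bins.
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    pc1 = vt[0]
    return int(np.argmax(pc1)), int(np.argmin(pc1))

# Synthetic histograms: a shared Gaussian-shaped profile plus subject noise.
rng = np.random.default_rng(0)
base = np.exp(-0.5 * ((np.arange(100) - 50) / 15.0) ** 2)
hists = base + 0.05 * rng.standard_normal((20, 100))
hi_bin, lo_bin = pca_thresholds(hists)
```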
Price, W D; Williams, E R
1997-11-20
Unimolecular rate constants for blackbody infrared radiative dissociation (BIRD) were calculated for the model protonated peptide (AlaGly)_n (n = 2-32) using a variety of dissociation parameters. Combinations of dissociation threshold energies ranging from 0.8 to 1.7 eV and transition entropies corresponding to Arrhenius preexponential factors ranging from very "tight" (A∞ = 10^9.9 s^-1) to "loose" (A∞ = 10^16.8 s^-1) were selected to represent dissociation parameters within the experimental temperature range (300-520 K) and kinetic window (k_uni = 0.001-0.20 s^-1) typically used in the BIRD experiment. Arrhenius parameters were determined from the temperature dependence of these values and compared to those in the rapid energy exchange (REX) limit. In this limit, the internal energy of a population of ions is given by a Boltzmann distribution, and kinetics are the same as those in the traditional high-pressure limit. For a dissociation process to be in this limit, the rate of photon exchange between an ion and the vacuum chamber walls must be significantly greater than the dissociation rate. Kinetics rapidly approach the REX limit either as the molecular size or threshold dissociation energy increases or as the transition-state entropy or experimental temperature decreases. Under typical experimental conditions, peptide ions larger than 1.6 kDa should be in the REX limit. Smaller ions may also be in the REX limit depending on the value of the threshold dissociation energy and transition-state entropy. Either modeling or information about the dissociation mechanism must be known in order to confirm REX limit kinetics for these smaller ions. Three principal factors that lead to the size dependence of REX limit kinetics are identified.
With increasing molecular size, rates of radiative absorption and emission increase, internal energy distributions become relatively narrower, and the microcanonical dissociation rate constants increase more slowly over the energy distribution of ions. Guidelines established here should make BIRD an even more reliable method to obtain information about dissociation energetics and mechanisms for intermediate size molecules.
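In the REX limit the kinetics are Arrhenius: k = A∞ exp(−Ea / kB T). A quick numerical check that a "tight" ion at the low end of the threshold-energy range falls inside the reported kinetic window at a temperature within the experimental range (the specific temperature chosen here is illustrative):

```python
import math

KB_EV = 8.617e-5     # Boltzmann constant, eV/K

def arrhenius_k(a_inf, ea_ev, temp_k):
    """REX-limit (high-pressure-limit) Arrhenius rate constant k = A * exp(-Ea / kB T)."""
    return a_inf * math.exp(-ea_ev / (KB_EV * temp_k))

# A 'tight' transition state (A = 10^9.9 s^-1) with the lowest threshold energy
# from the abstract (0.8 eV), evaluated at 370 K (within the 300-520 K range).
k = arrhenius_k(a_inf=10 ** 9.9, ea_ev=0.8, temp_k=370.0)
in_window = 0.001 <= k <= 0.20      # the BIRD kinetic window from the abstract
```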
Price, William D.
2005-01-01
Unimolecular rate constants for blackbody infrared radiative dissociation (BIRD) were calculated for the model protonated peptide (AlaGly)_n (n = 2–32) using a variety of dissociation parameters. Combinations of dissociation threshold energies ranging from 0.8 to 1.7 eV and transition entropies corresponding to Arrhenius preexponential factors ranging from very “tight” (A∞ = 10^9.9 s^−1) to “loose” (A∞ = 10^16.8 s^−1) were selected to represent dissociation parameters within the experimental temperature range (300–520 K) and kinetic window (k_uni = 0.001–0.20 s^−1) typically used in the BIRD experiment. Arrhenius parameters were determined from the temperature dependence of these values and compared to those in the rapid energy exchange (REX) limit. In this limit, the internal energy of a population of ions is given by a Boltzmann distribution, and kinetics are the same as those in the traditional high-pressure limit. For a dissociation process to be in this limit, the rate of photon exchange between an ion and the vacuum chamber walls must be significantly greater than the dissociation rate. Kinetics rapidly approach the REX limit either as the molecular size or threshold dissociation energy increases or as the transition-state entropy or experimental temperature decreases. Under typical experimental conditions, peptide ions larger than 1.6 kDa should be in the REX limit. Smaller ions may also be in the REX limit depending on the value of the threshold dissociation energy and transition-state entropy. Either modeling or information about the dissociation mechanism must be known in order to confirm REX limit kinetics for these smaller ions. Three principal factors that lead to the size dependence of REX limit kinetics are identified.
With increasing molecular size, rates of radiative absorption and emission increase, internal energy distributions become relatively narrower, and the microcanonical dissociation rate constants increase more slowly over the energy distribution of ions. Guidelines established here should make BIRD an even more reliable method to obtain information about dissociation energetics and mechanisms for intermediate size molecules. PMID:16604162
Agustsson, R.; Pogorelsky, I.; Arab, E.; ...
2015-11-18
Optical photonic structures driven by picosecond, GW-class lasers are emerging as promising novel sources of electron beams and high quality X-rays. Due to the quadratic dependence of the laser ponderomotive potential on wavelength, the performance of such sources scales very favorably towards longer drive laser wavelengths. However, to take full advantage of photonic structures in the mid-IR spectral region, it is important to determine the optical breakdown limits of common optical materials. To this end, an experimental study was carried out at a wavelength of 5 µm, using a frequency-doubled CO2 laser source with a 5 ps pulse length. Single-shot optical breakdowns were detected and characterized at different laser intensities, and damage threshold values of 0.2, 0.3, and 7.0 J/cm^2 were established for Ge, Si, and sapphire, respectively. The measured damage threshold values were stable and repeatable within individual data sets and across varying experimental conditions.
Enhanced Detectability of Community Structure in Multilayer Networks through Layer Aggregation.
Taylor, Dane; Shai, Saray; Stanley, Natalie; Mucha, Peter J
2016-06-03
Many systems are naturally represented by a multilayer network in which edges exist in multiple layers that encode different, but potentially related, types of interactions, and it is important to understand limitations on the detectability of community structure in these networks. Using random matrix theory, we analyze detectability limitations for multilayer (specifically, multiplex) stochastic block models (SBMs) in which L layers are derived from a common SBM. We study the effect of layer aggregation on detectability for several aggregation methods, including summation of the layers' adjacency matrices, for which we show the detectability limit vanishes as O(L^{-1/2}) with increasing number of layers, L. Importantly, we find a similar scaling behavior when the summation is thresholded at an optimal value, providing insight into the common, but not well understood, practice of thresholding pairwise-interaction data to obtain sparse network representations.
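The summation-and-thresholding step described above can be sketched in a few lines; the toy layers and the threshold of 2 below are hypothetical, chosen only to illustrate the aggregation methods discussed:

```python
def aggregate_layers(layers, threshold=None):
    """Entrywise sum of the layers' adjacency matrices; optionally
    threshold the sum to recover a sparse binary aggregate network."""
    n = len(layers[0])
    summed = [[sum(layer[i][j] for layer in layers) for j in range(n)]
              for i in range(n)]
    if threshold is None:
        return summed
    return [[1 if summed[i][j] >= threshold else 0 for j in range(n)]
            for i in range(n)]

# Toy multiplex: 3 layers on 4 nodes derived from one underlying model.
layers = [
    [[0, 1, 1, 0], [1, 0, 1, 0], [1, 1, 0, 0], [0, 0, 0, 0]],
    [[0, 1, 0, 0], [1, 0, 1, 1], [0, 1, 0, 0], [0, 1, 0, 0]],
    [[0, 1, 1, 0], [1, 0, 0, 0], [1, 0, 0, 1], [0, 0, 1, 0]],
]
summed = aggregate_layers(layers)
binary = aggregate_layers(layers, threshold=2)  # keep edges seen in >= 2 layers
```

Thresholding at an optimal value, as the abstract notes, can retain most of the detectability benefit of the full summation while yielding a sparse representation.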
NASA Technical Reports Server (NTRS)
Zimmerman, G. A.; Olsen, E. T.
1992-01-01
Noise power estimation in the High-Resolution Microwave Survey (HRMS) sky survey element is considered as an example of a constant false alarm rate (CFAR) signal detection problem. Order-statistic-based noise power estimators for CFAR detection are considered in terms of required estimator accuracy and estimator dynamic range. By limiting the dynamic range of the value to be estimated, the performance of an order-statistic estimator can be achieved by simpler techniques requiring only a single pass of the data. Simple threshold-and-count techniques are examined, and it is shown how several parallel threshold-and-count estimation devices can be used to expand the dynamic range to meet HRMS system requirements with minimal hardware complexity. An input/output (I/O) efficient limited-precision order-statistic estimator with wide but limited dynamic range is also examined.
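A simple threshold-and-count estimator of the kind described can be sketched as follows, under the assumed model of exponentially distributed noise-power samples, for which the exceedance probability P(x > t) = exp(−t/mean) can be inverted; the threshold and sample count are illustrative, not HRMS system values:

```python
import math
import random

random.seed(1)

def threshold_and_count_estimate(samples, threshold):
    """Estimate mean noise power from the fraction of samples exceeding a
    fixed threshold, assuming exponentially distributed power samples,
    so that P(x > t) = exp(-t / mean) and mean = -t / ln(fraction)."""
    count = sum(1 for x in samples if x > threshold)
    frac = count / len(samples)
    if frac <= 0.0 or frac >= 1.0:
        raise ValueError("threshold outside the usable dynamic range")
    return -threshold / math.log(frac)

# Single pass over synthetic noise-power data with true mean 2.0.
true_mean = 2.0
data = [random.expovariate(1.0 / true_mean) for _ in range(100_000)]
estimate = threshold_and_count_estimate(data, threshold=2.0)
```

The guard clause reflects the dynamic-range limitation noted in the abstract: a single counter is only informative while the exceedance fraction is neither 0 nor 1, which is why several parallel devices with staggered thresholds extend the usable range.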
Metter, E J; Granville, R L; Kussman, M J
1997-04-01
The study determines the extent to which payment thresholds for reporting malpractice claims to the National Practitioner Data Bank identify substandard health care delivery in the Department of Defense. Relevant data were available on 2,291 of 2,576 medical malpractice claims reported to the closed medical malpractice case database of the Office of the Assistant Secretary of Defense (Health Affairs). Amount paid was analyzed as a diagnostic test, using the standard-of-care assessment from each military Surgeon General's office as the criterion. Using different paid-threshold amounts per claim as a positive test, the sensitivity for identifying substandard care declined from 0.69 for all paid cases to 0.41 for claims over $40,000. Specificity increased from 0.75 for all paid claims to 0.89 for claims over $40,000. Positive and negative predictive values and the likelihood ratio were similar at all thresholds. Malpractice case payment was of limited value for identifying substandard medical practice. Using all paid claims missed about 30% of substandard care and flagged about 25% of acceptable medical practice.
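The diagnostic-test computation used in the study can be sketched as below; the toy claim data are invented to reproduce the qualitative trend (sensitivity falls and specificity rises as the payment threshold increases), not the paper's actual figures:

```python
def screening_stats(claims, threshold):
    """Treat 'amount paid > threshold' as a positive test for substandard
    care and compute sensitivity and specificity.
    Each claim is a pair (paid_amount, substandard: bool)."""
    tp = sum(1 for amt, bad in claims if amt > threshold and bad)
    fn = sum(1 for amt, bad in claims if amt <= threshold and bad)
    tn = sum(1 for amt, bad in claims if amt <= threshold and not bad)
    fp = sum(1 for amt, bad in claims if amt > threshold and not bad)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical claims: (amount paid in $, care judged substandard?).
claims = ([(0, False)] * 60 + [(10_000, False)] * 20 +
          [(0, True)] * 10 + [(10_000, True)] * 10 + [(60_000, True)] * 10)

sens_all, spec_all = screening_stats(claims, threshold=0)        # all paid claims
sens_40k, spec_40k = screening_stats(claims, threshold=40_000)   # claims > $40,000
```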
45 CFR 149.115 - Cost threshold and cost limit.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 45 Public Welfare 1 2010-10-01 2010-10-01 false Cost threshold and cost limit. 149.115 Section 149... REQUIREMENTS FOR THE EARLY RETIREE REINSURANCE PROGRAM Reinsurance Amounts § 149.115 Cost threshold and cost limit. The following cost threshold and cost limits apply individually, to each early retiree as defined...
Zamba, Gideon K. D.; Artes, Paul H.
2018-01-01
Purpose It has been shown that threshold estimates below approximately 20 dB have little effect on the ability to detect visual field progression in glaucoma. We aimed to compare stimulus size V to stimulus size III, in areas of visual damage, to confirm these findings by using (1) a different dataset, (2) different techniques of progression analysis, and (3) an analysis to evaluate the effect of censoring on mean deviation (MD). Methods In the Iowa Variability in Perimetry Study, 120 glaucoma subjects were tested every 6 months for 4 years with size III SITA Standard and size V Full Threshold. Progression was determined with three complementary techniques: pointwise linear regression (PLR), permutation of PLR, and linear regression of the MD index. All analyses were repeated on “censored” datasets in which threshold estimates below a given criterion value were set to equal the criterion value. Results Our analyses confirmed previous observations that threshold estimates below 20 dB contribute much less to visual field progression than estimates above this range. These findings were broadly similar with stimulus sizes III and V. Conclusions Censoring of threshold values < 20 dB has relatively little impact on the rates of visual field progression in patients with mild to moderate glaucoma. Size V, which has lower retest variability, performs at least as well as size III for longitudinal glaucoma progression analysis and appears to have a larger useful dynamic range owing to the upper sensitivity limit being higher. PMID:29356822
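The censoring step described above is simple to state in code; the sample threshold values below are hypothetical:

```python
def censor(thresholds_db, floor_db=20.0):
    """Left-censor perimetric threshold estimates: every value below the
    criterion is set to equal the criterion value itself."""
    return [max(t, floor_db) for t in thresholds_db]

# Hypothetical point-wise threshold estimates (dB) from one visual field.
field = [31.0, 28.5, 19.0, 5.0, 0.0, 25.0]
censored = censor(field)  # values below 20 dB become 20 dB
```

Regressing MD or pointwise slopes on the censored series, as the study does, then reveals how much the sub-20 dB estimates actually contribute to detected progression.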
Rebuilding DEMATEL threshold value: an example of a food and beverage information system.
Hsieh, Yi-Fang; Lee, Yu-Cheng; Lin, Shao-Bin
2016-01-01
This study demonstrates how a decision-making trial and evaluation laboratory (DEMATEL) threshold value can be quickly and reasonably determined in the process of combining DEMATEL and decomposed theory of planned behavior (DTPB) models. Models are combined to identify the key factors of a complex problem. This paper presents a case study of a food and beverage information system as an example. The analysis of the example indicates that, given direct and indirect relationships among variables, if a traditional DTPB model only simulates the effects of the variables without considering that the variables will affect the original cause-and-effect relationships among them, then the original DTPB model variables cannot represent a complete relationship. For the food and beverage example, a DEMATEL method was employed to reconstruct a DTPB model and, more importantly, to calculate a reasonable DEMATEL threshold value for determining additional relationships of variables in the original DTPB model. This study is method-oriented, and the depth of investigation into any individual case is limited. Therefore, the methods proposed in various fields of study should ideally be used to identify deeper and more practical implications.
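A sketch of the standard DEMATEL pipeline: compute the total-relation matrix T = D(I − D)^−1 (here via its convergent power series T = D + D² + D³ + …) and apply the common mean-of-T threshold heuristic. The direct-influence matrix is invented for illustration, and the paper's own method for rebuilding the threshold differs in detail:

```python
def mat_mul(a, b):
    """Plain Python multiplication of two square matrices."""
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def dematel_total_relation(direct, terms=200):
    """Normalise the direct-influence matrix D and compute the DEMATEL
    total-relation matrix T = D (I - D)^-1 via the series D + D^2 + ..."""
    n = len(direct)
    s = max(max(sum(row) for row in direct),
            max(sum(direct[i][j] for i in range(n)) for j in range(n)))
    d = [[direct[i][j] / s for j in range(n)] for i in range(n)]
    total = [row[:] for row in d]
    power = [row[:] for row in d]
    for _ in range(terms):
        power = mat_mul(power, d)
        total = [[total[i][j] + power[i][j] for j in range(n)]
                 for i in range(n)]
    return total

# Hypothetical 0-3 scale direct-influence scores among four variables.
direct = [[0, 3, 2, 1],
          [1, 0, 3, 2],
          [2, 1, 0, 3],
          [1, 2, 1, 0]]
T = dematel_total_relation(direct)

# A widely used heuristic sets the threshold at the mean entry of T;
# only relationships above it are kept in the influence map.
threshold = sum(map(sum, T)) / (len(T) ** 2)
significant = [[T[i][j] > threshold for j in range(len(T))]
               for i in range(len(T))]
```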
Quality of life in childhood, adolescence and adult food allergy: Patient and parent perspectives.
Stensgaard, A; Bindslev-Jensen, C; Nielsen, D; Munch, M; DunnGalvin, A
2017-04-01
Studies of children with food allergy typically only include the mother and have not investigated the relationship between the amount of allergen needed to elicit a clinical reaction (threshold) and health-related quality of life (HRQL). Our aims were (i) to compare self-reported and parent-reported HRQL in different age groups, (ii) to evaluate the impact of severity of allergic reaction and threshold on HRQL, and (iii) to investigate factors associated with patient-reported and parent-reported HRQL. Age-appropriate Food Allergy Quality of Life Questionnaires (FAQLQ) were completed by 73 children, 49 adolescents and 29 adults with peanut, hazelnut or egg allergy. Parents (197 mothers, 120 fathers) assessed their child's HRQL using the FAQLQ-Parent form. Clinical data and threshold values were obtained from a hospital database. Significant factors for HRQL were investigated using univariate and multivariate regression. Female patients reported greater impact of food allergy on HRQL than males did. Egg and hazelnut thresholds did not affect HRQL, but lower peanut threshold was associated with worse HRQL. Both parents scored their child's HRQL better than the child's own assessment, but whereas mother-reported HRQL was significantly affected by limitations in the child's social life, father-reported HRQL was affected by limitations in the family's social life. Severity of allergic reaction did not contribute significantly to HRQL. The risk of accidental allergen ingestion and limitations in social life are associated with worse HRQL. Fathers provide a unique perspective and should have a greater opportunity to contribute to food allergy research. © 2016 John Wiley & Sons Ltd.
Wetzel, Hermann
2006-01-01
In a large number of mostly retrospective association studies, a statistical relationship between volume and quality of health care has been reported. However, the relevance of these results is frequently limited by methodological shortcomings. In this article, criteria for the evidence and definition of thresholds for volume-outcome relations are proposed, e.g. the specification of relevant outcomes for quality indicators, analysis of volume as a continuous variable with an adequate case-mix and risk adjustment, accounting for cluster effects and considering mathematical models for the derivation of cut-off values. Moreover, volume thresholds are regarded as surrogate parameters for the indirect classification of the quality of care, whose diagnostic validity and effectiveness in improving health care quality need to be evaluated in prospective studies.
The effect of laser ablation parameters on optical limiting properties of silver nanoparticles
NASA Astrophysics Data System (ADS)
Gursoy, Irmak; Yaglioglu, Halime Gul
2017-09-01
This paper presents the effect of laser ablation parameters on the optical limiting properties of silver nanoparticles. Current laser applications such as range finding, guidance, detection, illumination and designation have increased the potential for damaging optical imaging systems or eyes, temporarily or permanently. Laser applications pose risks for sensors or eyes when the laser power is higher than the damage threshold of the detection system. There are ways to protect these systems, such as neutral density (ND) filters, shutters, etc. However, these limiters reduce the total amount of light that gets into the system. Also, their response time may not be fast enough to prevent damage, and they can degrade performance through loss of transmission or contrast. Therefore, optical limiting filters are needed that are transparent at low laser intensities and limit or block high laser intensities. Metal nanoparticles are good candidates for such optical limiting filters against ns pulsed or CW lasers due to their high damage thresholds. In this study we investigated the optical limiting performance of silver nanoparticles produced by the laser ablation technique. A high-purity silver target immersed in pure water was ablated with a Nd:YAG nanosecond laser at 532 nm. The effect of varying laser power and ablation time on the laser ablation efficiency of nanoparticles was investigated experimentally, and optimum values were specified. An open-aperture Z-scan experiment was used to investigate the effect of laser ablation parameters on the optical limiting performance of silver nanoparticles in pure water. It was found that longer ablation time decreases the optical limiting threshold. These results are useful for obtaining high-performance optical limiters from silver nanoparticle solutions.
Saito, Hirotaka; McKenna, Sean A
2007-07-01
An approach for delineating high anomaly density areas within a mixture of two or more spatial Poisson fields based on limited sample data collected along strip transects was developed. All sampled anomalies were transformed to anomaly count data and indicator kriging was used to estimate the probability of exceeding a threshold value derived from the cdf of the background homogeneous Poisson field. The threshold value was determined so that the delineation of high-density areas was optimized. Additionally, a low-pass filter was applied to the transect data to enhance such segmentation. Example calculations were completed using a controlled military model site, in which accurate delineation of clusters of unexploded ordnance (UXO) was required for site cleanup.
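The threshold derivation from the background Poisson cdf can be sketched as follows; the background rate of 2 anomalies per segment and the 5% exceedance level are illustrative assumptions, not the values used at the model site:

```python
import math

def poisson_cdf(k, lam):
    """P(X <= k) for a Poisson(lam) random count."""
    return sum(math.exp(-lam) * lam**i / math.factorial(i)
               for i in range(k + 1))

def count_threshold(background_rate, alpha=0.05):
    """Smallest count k such that the background homogeneous Poisson
    field exceeds k with probability at most alpha."""
    k = 0
    while poisson_cdf(k, background_rate) < 1.0 - alpha:
        k += 1
    return k

# With a background of 2 anomalies per transect segment, segments whose
# counts exceed this threshold are candidates for high-density areas.
thresh = count_threshold(2.0, alpha=0.05)
```

Indicator kriging then interpolates the probability of exceeding this count between the sampled transects.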
I. RENAL THRESHOLDS FOR HEMOGLOBIN IN DOGS
Lichty, John A.; Havill, William H.; Whipple, George H.
1932-01-01
We use the term "renal threshold for hemoglobin" to indicate the smallest amount of hemoglobin which given intravenously will effect the appearance of recognizable hemoglobin in the urine. The initial renal threshold level for dog hemoglobin is established by the methods employed at an average value of 155 mg. hemoglobin per kilo body weight with maximal values of 210 and minimal of 124. Repeated daily injections of hemoglobin will depress this initial renal threshold level on the average 46 per cent with maximal values of 110 and minimal values of 60 mg. hemoglobin per kilo body weight. This minimal or depression threshold is relatively constant if the injections are continued. Rest periods without injections cause a return of the renal threshold for hemoglobin toward the initial threshold levels—recovery threshold level. Injections of hemoglobin below the initial threshold level but above the minimal or depression threshold will eventually reduce the renal threshold for hemoglobin to its depression threshold level. We believe the depression threshold or minimal renal threshold level due to repeated hemoglobin injections is a little above the glomerular threshold which we assume is the base line threshold for hemoglobin. Our reasons for this belief in the glomerular threshold are given above and in the other papers of this series. PMID:19870016
Debris Motion and Injury Relationships in All Hazard Environments
1976-07-01
reaction, limit of voluntary tolerance, injury threshold, LD50 value, limit of survival, etc. Our current state of knowledge concerning human impact...of-the-art review of "Human Impact Tolerance" up to approximately August 1970. Snyder concludes that current knowledge on human tolerance to impact...children and adults, suicides, high divers, skiers etc. have occurred and are reported in the literature (Refs. 19, 20, 21). With the objective of
Thresholds and the Evolution of Bedrock Channels on the Hawaiian Islands
NASA Astrophysics Data System (ADS)
Raming, L. W.; Whipple, K. X.
2017-12-01
Erosional thresholds are a key component of the non-linear dynamics of bedrock channel incision and long-term landscape evolution. Erosion thresholds, however, have remained difficult to quantify and uniquely identify in landscape evolution. Here we present an analysis of the morphology of canyons on the Hawaiian Islands and put forth the hypothesis that they are threshold-dominated landforms. Geologic (USGS), topographic (USGS 10 m DEM), runoff (USGS) and meteorological data (Rainfall Atlas of Hawai`i) were used in an analysis of catchments on the islands of Hawai`i, Kaua`i, Lāna`i, Maui, and Moloka'i. Channel incision was estimated by differencing the present topography from reconstructed pre-incision volcanic surfaces. Four key results were obtained from our analysis: (1) Mean total incision ranged from 11 to 684 m and exhibited no correlation with incision duration. (2) In major canyons on the Islands of Hawaii and Kauai, rejuvenated-stage basalt flow outcrops at river level show incision effectively ceased after a period no longer than 100 ka and 1.4 Ma, respectively. (3) Mean canyon wall gradient below knickpoints decreases with volcano age, with a median value of 1 measured on Hawaii and of 0.7 on Kauai. (4) Downstream of major knickpoints, which demarcate the upper limits of deep canyons, channel profiles have near-uniform channel steepness with most values ranging between 60 and 100. The presence of uniform channel steepness (KSN) implies uniform bed shear stress and typically is interpreted as a steady-state balance between uplift and incision in tectonically active landscapes. However, this is untenable for Hawaiian canyons, and subsequently we posit that uniform KSN represents a condition where flood shear stress has been reduced to threshold values and incision reduced to near zero. Uniform KSN values decrease with rainfall, consistent with wetter regions generating threshold shear stress at lower KSN.
This suggests that rapid incision occurred during brief intervals when thresholds were exceeded through a combination of initial slope, over-steepening due to cliff formation, and available runoff as a function of climate. From this analysis, we find significant evidence of the role of thresholds in landscape evolution and an alternative framework for viewing the evolution of the Hawaiian Islands.
The importance of reference materials in doping-control analysis.
Mackay, Lindsey G; Kazlauskas, Rymantas
2011-08-01
Currently a large range of pure substance reference materials are available for calibration of doping-control methods. These materials enable traceability to the International System of Units (SI) for the results generated by World Anti-Doping Agency (WADA)-accredited laboratories. Only a small number of prohibited substances have threshold limits for which quantification is highly important. For these analytes only the highest quality reference materials that are available should be used. Many prohibited substances have no threshold limits and reference materials provide essential identity confirmation. For these reference materials the correct identity is critical and the methods used to assess identity in these cases should be critically evaluated. There is still a lack of certified matrix reference materials to support many aspects of doping analysis. However, in key areas a range of urine matrix materials have been produced for substances with threshold limits, for example 19-norandrosterone and testosterone/epitestosterone (T/E) ratio. These matrix-certified reference materials (CRMs) are an excellent independent means of checking method recovery and bias and will typically be used in method validation and then regularly as quality-control checks. They can be particularly important in the analysis of samples close to threshold limits, in which measurement accuracy becomes critical. Some reference materials for isotope ratio mass spectrometry (IRMS) analysis are available and a matrix material certified for steroid delta values is currently under production. In other new areas, for example the Athlete Biological Passport, peptide hormone testing, designer steroids, and gene doping, reference material needs still need to be thoroughly assessed and prioritised.
Moody, J.A.; Martin, D.A.
2001-01-01
Wildfire alters the hydrologic response of watersheds, including the peak discharges resulting from subsequent rainfall. Improving predictions of the magnitude of flooding that follows wildfire is needed because of the increase in human population at risk in the wildland-urban interface. Because this wildland-urban interface is typically in mountainous terrain, we investigated rainfall-runoff relations by measuring the maximum 30 min rainfall intensity and the unit-area peak discharge (peak discharge divided by the area burned) in three mountainous watersheds (17–26.8 km^2) after a wildfire. We found rainfall-runoff relations that relate the unit-area peak discharges to the maximum 30 min rainfall intensities by a power law. These rainfall-runoff relations appear to have a threshold value for the maximum 30 min rainfall intensity (around 10 mm h^−1) such that, above this threshold, the magnitude of the flood peaks increases more rapidly with increases in intensity. This rainfall intensity could be used to set threshold limits in rain gauges that are part of an early-warning flood system after wildfire. The maximum unit-area peak discharges from these three burned watersheds ranged from 3.2 to 50 m^3 s^−1 km^−2. These values could provide initial estimates of the upper limits of runoff that can be used to predict floods after wildfires in mountainous terrain. Published in 2001 by John Wiley and Sons, Ltd.
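The thresholded power-law form suggested by these rainfall-runoff relations can be written as a short function; the coefficient and exponent below are placeholders for illustration, not the fitted values from the study:

```python
def unit_peak_discharge(i30, coeff=0.1, exponent=2.0, threshold=10.0):
    """Piecewise rainfall-runoff relation: unit-area peak discharge
    (m^3 s^-1 km^-2) grows as a power law of the maximum 30 min
    rainfall intensity i30 (mm/h) once i30 exceeds the threshold;
    below it, the burned watershed produces essentially no flood peak."""
    if i30 <= threshold:
        return 0.0
    return coeff * (i30 - threshold) ** exponent

# Below ~10 mm/h, little response; above it, the response rises rapidly.
low = unit_peak_discharge(8.0)
high = unit_peak_discharge(30.0)
```

A rain gauge in an early-warning system would alarm once i30 crosses the threshold, since the flood response above it is strongly non-linear.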
Chao, Wan-Tien; Lin, Yuan-Yao; Peng, Jin-Long; Huang, Chen-Bin
2014-02-15
Adiabatic soliton spectral compression in a dispersion-increasing fiber (DIF) with a linear dispersion ramp is studied both numerically and experimentally. The anticipated maximum spectral compression ratio (SCR) would be limited by the ratio of the DIF output to the input dispersion values. However, our numerical analyses indicate that an SCR greater than the DIF dispersion ratio is feasible, provided the input pulse duration is shorter than a threshold value along with adequate pulse energy control. Experimentally, an SCR of 28.6 is achieved in a 1 km DIF with a dispersion ratio of 22.5.
Occupational exposure limits for carcinogens--variant approaches by different countries
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cook, W.A.
1989-09-01
The differences in the treatment of occupational exposure limits for carcinogens in 24 countries are described, along with a discussion of the American Conference of Governmental Industrial Hygienists (ACGIH) threshold limit values (TLV) treatment, the similar treatment of the new Occupational Safety and Health Administration (OSHA) standard, and the treatment by the provinces of Canada. The unique listing by the Federal Republic of Germany of so-called technical guiding concentrations for a group of carcinogens is discussed, with the note that Austria used this same system. Publications on the justification for establishing occupational exposure limits for certain carcinogens are also discussed.
41 CFR 102-73.40 - What happens if the dollar value of the project exceeds the prospectus threshold?
Code of Federal Regulations, 2010 CFR
2010-07-01
... dollar value of the project exceeds the prospectus threshold? 102-73.40 Section 102-73.40 Public... § 102-73.40 What happens if the dollar value of the project exceeds the prospectus threshold? Projects... the prospectus threshold. To obtain this approval, the Administrator of General Services will transmit...
41 CFR 102-73.40 - What happens if the dollar value of the project exceeds the prospectus threshold?
Code of Federal Regulations, 2011 CFR
2011-01-01
... dollar value of the project exceeds the prospectus threshold? 102-73.40 Section 102-73.40 Public... § 102-73.40 What happens if the dollar value of the project exceeds the prospectus threshold? Projects... the prospectus threshold. To obtain this approval, the Administrator of General Services will transmit...
41 CFR 102-73.40 - What happens if the dollar value of the project exceeds the prospectus threshold?
Code of Federal Regulations, 2012 CFR
2012-01-01
... dollar value of the project exceeds the prospectus threshold? 102-73.40 Section 102-73.40 Public... § 102-73.40 What happens if the dollar value of the project exceeds the prospectus threshold? Projects... the prospectus threshold. To obtain this approval, the Administrator of General Services will transmit...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-06-08
...DoD is issuing this final rule amending the Defense Federal Acquisition Regulation Supplement (DFARS) to implement section 826 of the National Defense Authorization Act for Fiscal Year 2011. Section 826 amended the DoD pilot program for transition to follow-on contracting after use of other transaction authority, to establish that the threshold limitation of $50 million for contracts and subcontracts under the program includes the dollar value of all options.
Methods for automatic trigger threshold adjustment
Welch, Benjamin J; Partridge, Michael E
2014-03-18
Methods are presented for adjusting trigger threshold values to compensate for drift in the quiescent level of a signal monitored for initiating a data recording event, thereby avoiding false triggering conditions. Initial threshold values are periodically adjusted by re-measuring the quiescent signal level, and adjusting the threshold values by an offset computation based upon the measured quiescent signal level drift. Re-computation of the trigger threshold values can be implemented on time based or counter based criteria. Additionally, a qualification width counter can be utilized to implement a requirement that a trigger threshold criterion be met a given number of times prior to initiating a data recording event, further reducing the possibility of a false triggering situation.
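A minimal sketch of the scheme described in the abstract, with a periodically re-measured quiescent level, an offset-based threshold, and a qualification-width counter; all class names and parameter values are illustrative, not from the patent:

```python
class AutoTrigger:
    """Drift-compensated triggering: the threshold rides on a
    periodically re-measured quiescent level, and a qualification
    counter requires the criterion to be met several times in a row
    before a data recording event fires."""

    def __init__(self, offset, qualification_width):
        self.offset = offset                        # margin above quiescent
        self.qualification_width = qualification_width
        self.quiescent = 0.0
        self.hits = 0

    def recalibrate(self, quiet_samples):
        # Re-measure the quiescent level; the threshold follows the drift.
        self.quiescent = sum(quiet_samples) / len(quiet_samples)

    @property
    def threshold(self):
        return self.quiescent + self.offset

    def process(self, sample):
        # Returns True when a recording event should be initiated.
        if sample > self.threshold:
            self.hits += 1
        else:
            self.hits = 0                           # streak broken
        return self.hits >= self.qualification_width

trig = AutoTrigger(offset=5.0, qualification_width=3)
trig.recalibrate([0.9, 1.1, 1.0])   # quiescent level 1.0, threshold 6.0
events = [trig.process(x) for x in [2.0, 7.0, 7.5, 8.0, 1.0]]
```

Only the fourth sample triggers: the first exceedance alone is not qualified, which is how false triggering on isolated spikes is avoided.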
Influence of aging on thermal and vibratory thresholds of quantitative sensory testing.
Lin, Yea-Huey; Hsieh, Song-Chou; Chao, Chi-Chao; Chang, Yang-Chyuan; Hsieh, Sung-Tsang
2005-09-01
Quantitative sensory testing has become a common approach to evaluate thermal and vibratory thresholds in various types of neuropathies. To understand the effect of aging on sensory perception, we measured warm, cold, and vibratory thresholds by performing quantitative sensory testing on a population of 484 normal subjects (175 males and 309 females), aged 48.61 +/- 14.10 (range 20-86) years. Sensory thresholds of the hand and foot were measured with two algorithms: the method of limits (Limits) and the method of level (Level). Thresholds measured by Limits are reaction-time-dependent, while those measured by Level are independent of reaction time. In addition, we explored (1) the correlations of thresholds between these two algorithms, (2) the effect of age on differences in thresholds between algorithms, and (3) differences in sensory thresholds between the two test sites. Age was consistently and significantly correlated with sensory thresholds of all tested modalities measured by both algorithms on multivariate regression analysis compared with other factors, including gender, body height, body weight, and body mass index. When thresholds were plotted against age, slopes differed between sensory thresholds of the hand and those of the foot: for the foot, slopes were steeper compared with those for the hand for each sensory modality. Sensory thresholds of both test sites measured by Level were highly correlated with those measured by Limits, and thresholds measured by Limits were higher than those measured by Level. Differences in sensory thresholds between the two algorithms were also correlated with age: thresholds of the foot were higher than those of the hand for each sensory modality. This difference in thresholds (measured with both Level and Limits) between the hand and foot was also correlated with age. 
These findings suggest that age is the most significant factor in determining sensory thresholds compared with the other factors of gender and anthropometric parameters, and this provides a foundation for investigating the neurobiologic significance of aging on the processing of sensory stimuli.
Lindenblatt, G.; Silny, J.
2006-01-01
Leakage currents, tiny currents flowing from an everyday-life appliance through the body to the ground, can cause a non-adequate perception (called electrocutaneous sensation, ECS) or even pain and should be avoided. Safety standards for the low-frequency range are based on experimental results for the current thresholds of electrocutaneous sensations, which however show a wide range between about 50 μA (rms) and 1000 μA (rms). In order to explain these differences, the perception threshold was measured repeatedly in experiments with test persons under an identical experimental setup, but by means of different methods (measuring strategies), namely: direct adjustment, the classical threshold as the amperage of 50% perception probability, and the confidence rating procedure of signal detection theory. The current is injected using a 1 cm^2 electrode at the highly touch-sensitive part of the index fingertip. These investigations show for the first time that the threshold of electrocutaneous sensations is influenced both by adaptation to the non-adequate stimulus and by individual, emotional factors. Therefore, classical methods, on which the majority of the safety investigations are based, cannot be used to determine a leakage current threshold. The confidence rating procedure of modern signal detection theory yields a value of 179.5 μA (rms) at 50 Hz power supply net frequency as the lower end of the 95% confidence range, considering the variance in the investigated group. This value is expected to be free of adaptation influences, is distinctly lower than the European limits, and supports the stricter regulations of Canada and the USA. PMID:17111461
How to Assess the Value of Medicines?
Simoens, Steven
2010-01-01
This study aims to discuss approaches to assessing the value of medicines. Economic evaluation assesses value by means of the incremental cost-effectiveness ratio (ICER). Health is maximized by selecting medicines with increasing ICERs until the budget is exhausted. The budget size determines the value of the threshold ICER and vice versa. Alternatively, the threshold value can be inferred from pricing/reimbursement decisions, although such values vary between countries. Threshold values derived from the value-of-life literature depend on the technique used. The World Health Organization has proposed a threshold value tied to the national GDP. As decision makers may wish to consider multiple criteria, variable threshold values and weighted ICERs have been suggested. Other approaches (i.e., replacement approach, program budgeting and marginal analysis) have focused on improving resource allocation, rather than maximizing health subject to a budget constraint. Alternatively, the generalized optimization framework and multi-criteria decision analysis make it possible to consider other criteria in addition to value. PMID:21607066
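The core ICER comparison described above is a one-liner; the costs, QALY gains, and threshold value below are hypothetical:

```python
def icer(cost_new, effect_new, cost_old, effect_old):
    """Incremental cost-effectiveness ratio: extra cost per extra unit
    of health effect (e.g. per QALY gained) versus the comparator."""
    return (cost_new - cost_old) / (effect_new - effect_old)

# Hypothetical medicine: costs 12,000 more and yields 0.5 extra QALYs.
ratio = icer(cost_new=20_000, effect_new=1.5, cost_old=8_000, effect_old=1.0)

# Fund it only if the ratio does not exceed the threshold ICER, which
# may itself be derived from the budget, past decisions, or GDP.
threshold = 30_000
fundable = ratio <= threshold
```

Ranking candidate medicines by increasing ICER and funding down the list until the budget is exhausted is exactly the health-maximisation rule the abstract describes; the threshold and budget determine each other.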
45 CFR 149.115 - Cost threshold and cost limit.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 45 Public Welfare 1 2012-10-01 2012-10-01 false Cost threshold and cost limit. 149.115 Section 149.115 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES REQUIREMENTS RELATING TO HEALTH CARE ACCESS REQUIREMENTS FOR THE EARLY RETIREE REINSURANCE PROGRAM Reinsurance Amounts § 149.115 Cost threshold and cost limit. The following cost threshold...
45 CFR 149.115 - Cost threshold and cost limit.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 45 Public Welfare 1 2011-10-01 2011-10-01 false Cost threshold and cost limit. 149.115 Section 149.115 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES REQUIREMENTS RELATING TO HEALTH CARE ACCESS REQUIREMENTS FOR THE EARLY RETIREE REINSURANCE PROGRAM Reinsurance Amounts § 149.115 Cost threshold and cost limit. The following cost threshold...
45 CFR 149.115 - Cost threshold and cost limit.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 45 Public Welfare 1 2013-10-01 2013-10-01 false Cost threshold and cost limit. 149.115 Section 149.115 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES REQUIREMENTS RELATING TO HEALTH CARE ACCESS REQUIREMENTS FOR THE EARLY RETIREE REINSURANCE PROGRAM Reinsurance Amounts § 149.115 Cost threshold and cost limit. The following cost threshold...
45 CFR 149.115 - Cost threshold and cost limit.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 45 Public Welfare 1 2014-10-01 2014-10-01 false Cost threshold and cost limit. 149.115 Section 149.115 Public Welfare Department of Health and Human Services REQUIREMENTS RELATING TO HEALTH CARE ACCESS REQUIREMENTS FOR THE EARLY RETIREE REINSURANCE PROGRAM Reinsurance Amounts § 149.115 Cost threshold and cost limit. The following cost threshold...
On flows of viscoelastic fluids under threshold-slip boundary conditions
NASA Astrophysics Data System (ADS)
Baranovskii, E. S.
2018-03-01
We investigate a boundary-value problem for the steady isothermal flow of an incompressible viscoelastic fluid of Oldroyd type in a 3D bounded domain with impermeable walls. We use the Fujita threshold-slip boundary condition. This condition states that the fluid can slip along a solid surface when the shear stresses reach a certain critical value; otherwise the slipping velocity is zero. Assuming that the flow domain is not rotationally symmetric, we prove an existence theorem for the corresponding slip problem in the framework of weak solutions. The proof uses methods for solving variational inequalities with pseudo-monotone operators and convex functionals, the method of introduction of auxiliary viscosity, as well as a passage-to-limit procedure based on energy estimates of approximate solutions, Korn’s inequality, and compactness arguments. Also, some properties and estimates of weak solutions are established.
Generalized minimum dominating set and application in automatic text summarization
NASA Astrophysics Data System (ADS)
Xu, Yi-Zhi; Zhou, Hai-Jun
2016-03-01
For a graph formed by vertices and weighted edges, a generalized minimum dominating set (MDS) is a vertex set of smallest cardinality such that the summed weight of edges from each outside vertex to vertices in this set is equal to or larger than a certain threshold value. This generalized MDS problem reduces to the conventional MDS problem in the limiting case of all the edge weights being equal to the threshold value. We treat the generalized MDS problem in the present paper by a replica-symmetric spin glass theory and derive a set of belief-propagation equations. As a practical application we consider the problem of extracting a set of sentences that best summarize a given input text document. We carry out a preliminary test of the statistical physics-inspired method on this automatic text summarization problem.
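The domination condition described above can be checked directly. The following is a minimal sketch (not the paper's belief-propagation solver): it only verifies whether a candidate vertex set satisfies the generalized MDS condition, with an invented toy graph and threshold.

```python
# Hypothetical sketch: feasibility check for the generalized MDS condition.
# Edge weights, graph, and threshold are illustrative assumptions.

def is_dominating(adjacency, candidate, threshold):
    """adjacency: dict mapping vertex -> {neighbor: edge_weight}.
    Every vertex outside `candidate` must have summed edge weight
    into `candidate` of at least `threshold`."""
    for v, neighbors in adjacency.items():
        if v in candidate:
            continue
        covered = sum(w for u, w in neighbors.items() if u in candidate)
        if covered < threshold:
            return False
    return True

graph = {
    "a": {"b": 1.0, "c": 0.5},
    "b": {"a": 1.0, "c": 1.0},
    "c": {"a": 0.5, "b": 1.0},
}
print(is_dominating(graph, {"b"}, threshold=1.0))  # True: a and c each reach b with weight 1.0
```

Note that when every edge weight equals the threshold, this check reduces to ordinary domination, matching the limiting case stated in the abstract.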
Threshold effect under nonlinear limitation of the intensity of high-power light
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tereshchenko, S A; Podgaetskii, V M; Gerasimenko, A Yu
2015-04-30
A model is proposed to describe the properties of limiters of high-power laser radiation, which takes into account the threshold character of nonlinear interaction of radiation with the working medium of the limiter. The generally accepted non-threshold model is a particular case of the threshold model if the threshold radiation intensity is zero. Experimental z-scan data are used to determine the nonlinear optical characteristics of media with carbon nanotubes, polymethine and pyran dyes, zinc selenide, porphyrin-graphene and fullerene-graphene. A threshold effect of nonlinear interaction between laser radiation and some of the investigated working media of limiters is revealed. It is shown that the threshold model more adequately describes experimental z-scan data. (nonlinear optical phenomena)
Fujisawa, Jun-Ichi; Osawa, Ayumi; Hanaya, Minoru
2016-08-10
Photoinduced carrier injection from dyes to inorganic semiconductors is a crucial process in various dye-sensitized solar energy conversions such as photovoltaics and photocatalysis. It has been reported that an energy offset larger than 0.2-0.3 eV (threshold value) is required for efficient electron injection from excited dyes to metal-oxide semiconductors such as titanium dioxide (TiO2). Because the energy offset directly causes loss in the potential of injected electrons, it is a crucial issue to minimize the energy offset for efficient solar energy conversions. However, a fundamental understanding of the energy offset, especially the threshold value, has not been obtained yet. In this paper, we report the origin of the threshold value of the energy offset, solving the long-standing questions of why such a large energy offset is necessary for the electron injection and which factors govern the threshold value, and suggest a strategy to minimize the threshold value. The threshold value is determined by the sum of two reorganization energies in one-electron reduction of semiconductors and typically-used donor-acceptor (D-A) dyes. In fact, the estimated values (0.21-0.31 eV) for several D-A dyes are in good agreement with the threshold value, supporting our conclusion. In addition, our results reveal that the threshold value is possible to be reduced by enlarging the π-conjugated system of the acceptor moiety in dyes and enhancing its structural rigidity. Furthermore, we extend the analysis to hole injection from excited dyes to semiconductors. In this case, the threshold value is given by the sum of two reorganization energies in one-electron oxidation of semiconductors and D-A dyes.
2005 ACGIH Lifting TLV: Employee-Friendly Presentation and Guidance for Professional Judgment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Splittstoesser, Riley; O'Farrell, Daniel Edward; Hill, John
The American Conference of Governmental Industrial Hygienists (ACGIH) Lifting Threshold Limit Values (TLVs) provide a tool to reduce incidence of low back and shoulder injuries. However, application of the TLV is too complicated for floor-level workers and relies on professional judgment to assess commonly encountered tasks. This paper presents an Employee-Friendly Simplified Format of the TLV that has been adapted from Table 1 of the Lifting TLV presented in the 2005 TLVs and BEIs Based on the Documentation of the Threshold Limit Values for Chemical Substances and Physical Agents & Biological Exposure Indices. This simplified format can be employed by floor-level workers to self-assess lifting tasks. The Ergonomics Project Team also provides research-based guidance for applying professional judgment consistent with standard industry practice: Extended Work Shifts – Reduce weight by 20% for shifts lasting 8 to 12 hours; Constrained Lower Body Posture – Reduce weight by 25% when lifting in such postures; Infrequently Performed Lifts – Lift up to 15 lbs. ≤3 lifts per hour within the zones marked “No safe limit for repetitive lifting” in the TLVs Table 1; Asymmetry beyond 30° – Reduce weight by 10 lbs. for lifts with up to 60° asymmetry from sagittal plane.
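The professional-judgment adjustments listed above can be sketched as a simple rule chain. This is an illustration, not official ACGIH guidance: whether the percentage reductions compound multiplicatively is an assumption, and the function name and inputs are invented.

```python
# Illustrative sketch only (not official ACGIH guidance): applying the
# professional-judgment reductions listed above to a base TLV weight in lbs.
# Assumption: percentage reductions compound multiplicatively.

def adjusted_tlv(base_lbs, shift_hours=8, constrained_posture=False, asymmetry_deg=0):
    weight = base_lbs
    if 8 < shift_hours <= 12:          # extended work shift: reduce weight by 20%
        weight *= 0.80
    if constrained_posture:            # constrained lower-body posture: reduce by 25%
        weight *= 0.75
    if 30 < asymmetry_deg <= 60:       # asymmetry beyond 30 deg: reduce by 10 lbs
        weight -= 10
    return max(weight, 0)

print(adjusted_tlv(50, shift_hours=10, constrained_posture=True, asymmetry_deg=45))  # 20.0
```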
Marsh collapse thresholds for coastal Louisiana estimated using elevation and vegetation index data
Couvillion, Brady R.; Beck, Holly
2013-01-01
Forecasting marsh collapse in coastal Louisiana as a result of changes in sea-level rise, subsidence, and accretion deficits necessitates an understanding of thresholds beyond which inundation stress impedes marsh survival. The variability in thresholds at which different marsh types cease to occur (i.e., marsh collapse) is not well understood. We utilized remotely sensed imagery, field data, and elevation data to help gain insight into the relationships between vegetation health and inundation. A Normalized Difference Vegetation Index (NDVI) dataset was calculated using remotely sensed data at peak biomass (August) and used as a proxy for vegetation health and productivity. Statistics were calculated for NDVI values by marsh type for intermediate, brackish, and saline marsh in coastal Louisiana. Marsh-type specific NDVI values of 1.5 and 2 standard deviations below the mean were used as upper and lower limits to identify conditions indicative of collapse. As marshes seldom occur beyond these values, they are believed to represent a range within which marsh collapse is likely to occur. Inundation depth was selected as the primary candidate for evaluation of marsh collapse thresholds. Elevation relative to mean water level (MWL) was calculated by subtracting MWL from an elevation dataset compiled from multiple data types including light detection and ranging (lidar) and bathymetry. A polynomial cubic regression was used to examine a random subset of pixels to determine the relationship between elevation (relative to MWL) and NDVI. The marsh collapse uncertainty range values were found by locating the intercept of the regression line with the 1.5 and 2 standard deviations below the mean NDVI value for each marsh type. Results indicate marsh collapse uncertainty ranges of 30.7–35.8 cm below MWL for intermediate marsh, 20–25.6 cm below MWL for brackish marsh, and 16.9–23.5 cm below MWL for saline marsh. 
These values are thought to represent the ranges of inundation depths within which marsh collapse is probable.
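The threshold construction described above (bracketing collapse between 1.5 and 2 standard deviations below the mean NDVI for a marsh type) can be sketched in a few lines. The NDVI samples below are invented for illustration; the study's actual values came from satellite imagery at peak biomass.

```python
import statistics

# Minimal sketch of the NDVI-based collapse bracket described above:
# for each marsh type, take (mean - 2*SD, mean - 1.5*SD) as the range
# within which marsh collapse is likely. Sample values are made up.

def collapse_ndvi_range(ndvi_values):
    mean = statistics.mean(ndvi_values)
    sd = statistics.stdev(ndvi_values)
    return (mean - 2.0 * sd, mean - 1.5 * sd)  # (lower, upper) NDVI bounds

samples = [0.42, 0.48, 0.51, 0.55, 0.44, 0.50]
lo, hi = collapse_ndvi_range(samples)
print(round(lo, 3), round(hi, 3))
```

The study then intersected these NDVI bounds with a regression of NDVI on elevation relative to mean water level to express the bracket in centimeters of inundation depth.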
NASA Astrophysics Data System (ADS)
Telesman, J.; Smith, T. M.; Gabb, T. P.; Ring, A. J.
2018-06-01
Cyclic near-threshold fatigue crack growth (FCG) behavior of two disk superalloys was evaluated and was shown to exhibit an unexpected sudden failure mode transition from a mostly transgranular failure mode at higher stress intensity factor ranges to an almost completely intergranular failure mode in the threshold regime. The change in failure modes was associated with a crossover of FCG resistance curves in which the conditions that produced higher FCG rates in the Paris regime resulted in lower FCG rates and increased ΔKth values in the threshold region. High-resolution scanning and transmission electron microscopy were used to carefully characterize the crack tips at these near-threshold conditions. Formation of stable Al-oxide followed by Cr-oxide and Ti-oxides was found to occur at the crack tip prior to formation of unstable oxides. To contrast with the threshold failure mode regime, a quantitative assessment of the role that the intergranular failure mode has on cyclic FCG behavior in the Paris regime was also performed. It was demonstrated that even a very limited intergranular failure content dominates the FCG response under mixed mode failure conditions.
OLFACTORY STIMULATION OF BLOWFLIES BY HOMOLOGOUS ALCOHOLS
Dethier, V. G.; Yost, M. T.
1952-01-01
The response of the blowfly Phormia regina to stimulation by alcohols in the vapor phase has been investigated by means of an olfactometer which permitted quantitative control of stimulus concentration. The median rejection threshold was selected as a criterion of response. As was true in the case of contact chemoreception, the distribution of thresholds in the fly population is normal with respect to the logarithm of concentration. In terms of molar concentration the alcohols are rejected at logarithmically decreasing concentration as chain length is increased. Beyond decanol there is no further stimulation. When thresholds are expressed as pressures and plotted against saturated vapor pressures on logarithmic coordinates, the data fit a line the slope of which is not significantly different from 1; i.e., the thresholds vary directly with vapor pressure. Individual threshold values, however, deviate significantly from this line, and the deviation must be ascribed to other factors which have not as yet been identified. When thresholds are expressed as activities, all alcohols are equally stimulating. It appears that the limiting process of olfaction, at least in so far as the normal alcohols are concerned, may involve an equilibrium process. Conformity to this concept is most exact for intermediate members of the series. PMID:14938521
Oldenkamp, Rik; Huijbregts, Mark A J; Ragas, Ad M J
2016-05-01
The selection of priority APIs (Active Pharmaceutical Ingredients) can benefit from a spatially explicit approach, since an API might exceed the threshold of environmental concern in one location, while staying below that same threshold in another. However, such a spatially explicit approach is relatively data intensive and subject to parameter uncertainty due to limited data. This raises the question to what extent a spatially explicit approach for the environmental prioritisation of APIs remains worthwhile when accounting for uncertainty in parameter settings. We show here that the inclusion of spatially explicit information enables a more efficient environmental prioritisation of APIs in Europe, compared with a non-spatial EU-wide approach, also under uncertain conditions. In a case study with nine antibiotics, uncertainty distributions of the PAF (Potentially Affected Fraction) of aquatic species were calculated in 100×100 km² environmental grid cells throughout Europe, and used for the selection of priority APIs. Two APIs have median PAF values that exceed a threshold PAF of 1% in at least one environmental grid cell in Europe, i.e., oxytetracycline and erythromycin. At a tenfold lower threshold PAF (i.e., 0.1%), two additional APIs would be selected, i.e., cefuroxime and ciprofloxacin. However, in 94% of the environmental grid cells in Europe, no APIs exceed either of the thresholds. This illustrates the advantage of following a location-specific approach in the prioritisation of APIs. This added value remains when accounting for uncertainty in parameter settings, i.e., if the 95th percentile of the PAF instead of its median value is compared with the threshold. In 96% of the environmental grid cells, the location-specific approach still enables a reduction of the selection of priority APIs of at least 50%, compared with an EU-wide prioritisation. Copyright © 2016 Elsevier Ltd. All rights reserved.
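The cell-level selection rule described above (flag an API when a chosen percentile of its PAF distribution exceeds the threshold) might be sketched as follows. The PAF samples, API names in the toy data, and the crude percentile computation are assumptions for illustration, not the study's actual method or data.

```python
# Hedged sketch of per-grid-cell API prioritisation: an API is flagged when
# a chosen percentile (median = 50, precautionary = 95) of its PAF samples
# exceeds the threshold PAF. Data and percentile rule are illustrative.

def priority_apis(paf_samples_by_api, threshold=0.01, percentile=50):
    selected = []
    for api, samples in paf_samples_by_api.items():
        ordered = sorted(samples)
        # nearest-rank style percentile index (a simplification)
        idx = min(len(ordered) - 1, int(round(percentile / 100 * (len(ordered) - 1))))
        if ordered[idx] > threshold:
            selected.append(api)
    return selected

cell = {
    "oxytetracycline": [0.005, 0.012, 0.020],
    "cefuroxime": [0.001, 0.002, 0.004],
}
print(priority_apis(cell, threshold=0.01, percentile=50))   # median rule
print(priority_apis(cell, threshold=0.01, percentile=95))   # precautionary rule
```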
Thermalization threshold in models of 1D fermions
NASA Astrophysics Data System (ADS)
Mukerjee, Subroto; Modak, Ranjan; Ramswamy, Sriram
2013-03-01
The question of how isolated quantum systems thermalize is an interesting and open one. In this study we equate thermalization with non-integrability to try to answer this question. In particular, we study the effect of system size on the integrability of 1D systems of interacting fermions on a lattice. We find that for a finite-sized system, a non-zero value of an integrability breaking parameter is required to make an integrable system appear non-integrable. Using exact diagonalization and diagnostics such as energy level statistics and the Drude weight, we find that the threshold value of the integrability breaking parameter scales to zero as a power law with system size. We find the exponent to be the same for different models with its value depending on the random matrix ensemble describing the non-integrable system. We also study a simple analytical model of a non-integrable system with an integrable limit to better understand how a power law emerges.
The relationships between content of heavy metals in soil and in strawberries.
Bystricka, Judita; Musilova, Janette; Trebichalsky, Pavol; Tomas, Jan; Stanovic, Radovan; Bajcan, Daniel; Kavalcova, Petra
2016-01-01
The work was aimed at assessment of the quality of strawberries based on the contents of heavy metals, as well as possible correlations between selected heavy metals in soil and strawberries. The results revealed that, of all observed metals in soil determined in aqua regia, only in the case of cadmium was the maximum permissible limit exceeded, both in comparison with the limit resulting from the Law No. 220/2004 and with the threshold values proposed by the European Commission (EC) (2006). In our study the values of cadmium in the soil were 1.86 to 2.41 times higher than the limit valid in the Slovak Republic (0.7 mg/kg) and 2.6 to 3.38 times higher than the EC limit (0.5 mg/kg). The values of lead determined in 1 M NH4NO3 ranged from 0.125 to 0.205 mg/kg, exceeding the limit valid in the Slovak Republic (0.1 mg/kg) by 0.037-0.105 mg/kg. Despite the exceeded values of heavy metals in soil, no values above the limit were recorded directly in strawberries when compared to the Food Codex of the Slovak Republic as well as to Commission Regulation 1881/2006. Among the varieties, statistically significant differences (P < 0.05) in the intake of heavy metals were found.
Schomaker, Michael; Egger, Matthias; Ndirangu, James; Phiri, Sam; Moultrie, Harry; Technau, Karl; Cox, Vivian; Giddy, Janet; Chimbetete, Cleophas; Wood, Robin; Gsponer, Thomas; Bolton Moore, Carolyn; Rabie, Helena; Eley, Brian; Muhe, Lulu; Penazzato, Martina; Essajee, Shaffiq; Keiser, Olivia; Davies, Mary-Ann
2013-01-01
Background There is limited evidence on the optimal timing of antiretroviral therapy (ART) initiation in children 2–5 y of age. We conducted a causal modelling analysis using the International Epidemiologic Databases to Evaluate AIDS–Southern Africa (IeDEA-SA) collaborative dataset to determine the difference in mortality when starting ART in children aged 2–5 y immediately (irrespective of CD4 criteria), as recommended in the World Health Organization (WHO) 2013 guidelines, compared to deferring to lower CD4 thresholds, for example, the WHO 2010 recommended threshold of CD4 count <750 cells/mm3 or CD4 percentage (CD4%) <25%. Methods and Findings ART-naïve children enrolling in HIV care at IeDEA-SA sites who were between 24 and 59 mo of age at first visit and with ≥1 visit prior to ART initiation and ≥1 follow-up visit were included. We estimated mortality for ART initiation at different CD4 thresholds for up to 3 y using g-computation, adjusting for measured time-dependent confounding of CD4 percent, CD4 count, and weight-for-age z-score. Confidence intervals were constructed using bootstrapping. The median (first; third quartile) age at first visit of 2,934 children (51% male) included in the analysis was 3.3 y (2.6; 4.1), with a median (first; third quartile) CD4 count of 592 cells/mm3 (356; 895) and median (first; third quartile) CD4% of 16% (10%; 23%). The estimated cumulative mortality after 3 y for ART initiation at different CD4 thresholds ranged from 3.4% (95% CI: 2.1–6.5) (no ART) to 2.1% (95% CI: 1.3%–3.5%) (ART irrespective of CD4 value). Estimated mortality was overall higher when initiating ART at lower CD4 values or not at all. There was no mortality difference between starting ART immediately, irrespective of CD4 value, and ART initiation at the WHO 2010 recommended threshold of CD4 count <750 cells/mm3 or CD4% <25%, with mortality estimates of 2.1% (95% CI: 1.3%–3.5%) and 2.2% (95% CI: 1.4%–3.5%) after 3 y, respectively. 
The analysis was limited by loss to follow-up and the unavailability of WHO staging data. Conclusions The results indicate no mortality difference for up to 3 y between ART initiation irrespective of CD4 value and ART initiation at a threshold of CD4 count <750 cells/mm3 or CD4% <25%, but there are overall higher point estimates for mortality when ART is initiated at lower CD4 values. Please see later in the article for the Editors' Summary PMID:24260029
1987-07-06
levels of intelligence tests and academic background as values to predict promotion. The model, however, demonstrated only limited utility as a predictive...attributes in the form of promotion points or a minimum threshold scale would be one approach. Unfortunately, this may artificially force NCOs of less
Martin, J.; Runge, M.C.; Nichols, J.D.; Lubow, B.C.; Kendall, W.L.
2009-01-01
Thresholds and their relevance to conservation have become a major topic of discussion in the ecological literature. Unfortunately, in many cases the lack of a clear conceptual framework for thinking about thresholds may have led to confusion in attempts to apply the concept of thresholds to conservation decisions. Here, we advocate a framework for thinking about thresholds in terms of a structured decision making process. The purpose of this framework is to promote a logical and transparent process for making informed decisions for conservation. Specification of such a framework leads naturally to consideration of definitions and roles of different kinds of thresholds in the process. We distinguish among three categories of thresholds. Ecological thresholds are values of system state variables at which small changes bring about substantial changes in system dynamics. Utility thresholds are components of management objectives (determined by human values) and are values of state or performance variables at which small changes yield substantial changes in the value of the management outcome. Decision thresholds are values of system state variables at which small changes prompt changes in management actions in order to reach specified management objectives. The approach that we present focuses directly on the objectives of management, with an aim to providing decisions that are optimal with respect to those objectives. This approach clearly distinguishes the components of the decision process that are inherently subjective (management objectives, potential management actions) from those that are more objective (system models, estimates of system state). Optimization based on these components then leads to decision matrices specifying optimal actions to be taken at various values of system state variables. Values of state variables separating different actions in such matrices are viewed as decision thresholds. 
Utility thresholds are included in the objectives component, and ecological thresholds may be embedded in models projecting consequences of management actions. Decision thresholds are determined by the above-listed components of a structured decision process. These components may themselves vary over time, inducing variation in the decision thresholds inherited from them. These dynamic decision thresholds can then be determined using adaptive management. We provide numerical examples (that are based on patch occupancy models) of structured decision processes that include all three kinds of thresholds. © 2009 by the Ecological Society of America.
Ultrasonic attenuation in superconducting molybdenum-rhenium alloys.
NASA Technical Reports Server (NTRS)
Ashkin, M.; Deis, D. W.; Gottlieb, M.; Jones, C. K.
1971-01-01
Investigation of longitudinal sound attenuation in superconducting Mo-Re alloys as a function of temperature, magnetic field, and frequency. Evaporated thin film CdS transducers were used for the measurements at frequencies up to 3 GHz. The normal state attenuation coefficient was found to be proportional to the square of frequency over this frequency range. Measurements in zero magnetic field yielded a value of the energy gap parameter close to the threshold value of 3.56 kTc, appropriate to a weakly coupled dirty limit superconductor.
NASA Astrophysics Data System (ADS)
Sanchez, Andrea Nathalie
Corrosion initiates in reinforced concrete structures exposed to marine environments when the chloride ion concentration at the surface of an embedded steel reinforcing bar exceeds the chloride corrosion threshold (CT) value. The value of CT is generally assumed to have a conservative fixed value ranging from 0.2% to 0.5% of chloride ions by weight of cement. However, extensive experimental investigations confirmed that CT is not a fixed value and that the value of CT depends on many variables. Among those, the potential of passive steel embedded in concrete is a key influential factor on the value of CT and has received little attention in the literature. The phenomenon of a potential-dependent threshold (PDT) permits accounting for corrosion macrocell coupling between active and passive steel assembly components in corrosion forecast models, avoiding overly conservative long-term damage projections and leading to more efficient design. The objectives of this investigation were to 1) expand by a systematic experimental assessment the knowledge and data base on how dependent the chloride threshold is on the potential of the steel embedded in concrete and 2) introduce the chloride threshold dependence on steel potential as an integral part of corrosion-related service life prediction of reinforced concrete structures. Experimental assessments on PDT were found in the literature but for a limited set of conditions. Therefore, experiments were conducted with mortar and concrete specimens and exposed to conditions more representative of the field than those previously available. The experimental results confirmed the presence of the PDT effect and provided supporting information to use a value of -550 mV per decade of Cl- for the cathodic prevention slope βCT, a critical quantitative input for implementation in a practical model.
A refinement of a previous corrosion initiation-propagation model that incorporated PDT in a partially submerged reinforced concrete column in sea water was developed. Corrosion was assumed to start when the chloride corrosion threshold was reached in an active steel zone of a given size, followed by recalculating the potential distribution and update threshold values over the entire system at each time step. Notably, results of this work indicated that when PDT is ignored, as is the case in present forecasting model practice, the corrosion damage prediction can be overly conservative which could lead to structural overdesign or misguided future damage management planning. Implementation of PDT in next-generation models is therefore highly desirable. However, developing a mathematical model that forecasts the corrosion damage of an entire marine structure with a fully implemented PDT module can result in excessive computational complexity. Hence, a provisional simplified approach for incorporating the effect of PDT was developed. The approach uses a correction function to be applied to projections that have been computed using the traditional procedures.
Li, Kai; Chen, Wenyuan; Zhang, Weiping
2011-01-01
The beam’s multiple-contact mode, characterized by multiple and discrete contact regions, non-uniform stopper heights, irregular contact sequence, a seesaw-like effect, indirect interaction between different stoppers, and a complex coupling relationship between loads and deformation, is studied. A novel analysis method and a novel high-speed calculation model are developed for the multiple-contact mode under mechanical and electrostatic loads, without limitations on stopper height and distribution, provided the beam has a stepped or curved shape. Accurate values of deflection, contact load, contact region and so on are obtained directly, with a subsequent validation by CoventorWare. A new concept design of a high-g threshold microaccelerometer based on the multiple-contact mode is presented, featuring multiple acceleration thresholds for one sensitive component and consequently a small sensor size. PMID:22163897
Learning to wait: A laboratory investigation
Oprea, R.; Friedman, D.; Anderson, S.T.
2009-01-01
Human subjects decide when to sink a fixed cost C to seize an irreversible investment opportunity whose value V is governed by Brownian motion. The optimal policy is to invest when V first crosses a threshold V* = (1 + w*) C, where the wait option premium w* depends on drift, volatility, and expiration hazard parameters. Subjects in the Low w* treatment on average invest at values quite close to optimum. Subjects in the two Medium and the High w* treatments invested at values below optimum, but with the predicted ordering, and values approached the optimum by the last block of 20 periods. © 2009 The Review of Economic Studies Limited.
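The optimal stopping rule above can be written directly: invest the first time V crosses V* = (1 + w*) C. In this sketch the premium w* is taken as a given input rather than derived from the drift, volatility, and hazard parameters, and the function names are invented.

```python
# Minimal sketch of the investment threshold rule V* = (1 + w*) * C.
# w* is assumed given; deriving it from the Brownian-motion parameters
# is outside the scope of this illustration.

def optimal_threshold(cost, wait_premium):
    return (1 + wait_premium) * cost

def should_invest(value, cost, wait_premium):
    return value >= optimal_threshold(cost, wait_premium)

print(optimal_threshold(100, 0.25))   # V* = 125.0
print(should_invest(120, 100, 0.25))  # False: keep waiting
print(should_invest(130, 100, 0.25))  # True: invest
```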
Maximizing the value of gate capacitance in field-effect devices using an organic interface layer
NASA Astrophysics Data System (ADS)
Kwok, H. L.
2015-12-01
Past research has confirmed the existence of negative capacitance in organics such as tris (8-Hydroxyquinoline) Aluminum (Alq3). This work explored using such an organic interface layer to enhance the channel voltage in the field-effect transistor (FET) thereby lowering the sub-threshold swing. In particular, if the values of the positive and negative gate capacitances are approximately equal, the composite negative capacitance will increase by orders of magnitude. One concern is the upper frequency limit (∼100 Hz) over which negative capacitance has been observed. Nonetheless, this frequency limit can be raised to kHz when the organic layer is subjected to a DC bias.
A new edge detection algorithm based on Canny idea
NASA Astrophysics Data System (ADS)
Feng, Yingke; Zhang, Jinmin; Wang, Siming
2017-10-01
The traditional Canny algorithm has a poorly self-adapting threshold and is sensitive to noise. To overcome these drawbacks, this paper proposes a new edge detection method based on the Canny algorithm. Firstly, median filtering and filtering based on Euclidean distance are applied to the image; secondly, the Frei-Chen algorithm is used to calculate the gradient amplitude; finally, the Otsu algorithm is applied to partial gradient amplitudes to obtain threshold values for the image: the average of all computed thresholds is found, half of this average is taken as the high threshold, and half of the high threshold as the low threshold. Experimental results show that the new method effectively suppresses noise, preserves edge information, and improves edge detection accuracy.
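The threshold rule in the abstract reduces to simple arithmetic once the per-region Otsu values are known. The sketch below assumes those values are supplied (e.g., computed elsewhere with an Otsu implementation); the function name and sample values are illustrative.

```python
# Sketch of the threshold rule described above: average the Otsu thresholds
# of image sub-regions, take half the average as the Canny high threshold,
# and half of the high threshold as the low threshold.
# The per-region Otsu values are assumed inputs.

def canny_thresholds(region_otsu_values):
    avg = sum(region_otsu_values) / len(region_otsu_values)
    high = avg / 2
    low = high / 2
    return low, high

print(canny_thresholds([90, 110, 100, 120]))  # (26.25, 52.5)
```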
Perspectives on setting limits for RF contact currents: a commentary.
Tell, Richard A; Tell, Christopher A
2018-01-15
Limits for exposure to radiofrequency (RF) contact currents are specified in the two dominant RF safety standards and guidelines developed by the Institute of Electrical and Electronics Engineers (IEEE) and the International Commission on Non-Ionizing Radiation Protection (ICNIRP). These limits are intended to prevent RF burns when contacting RF energized objects caused by high local tissue current densities. We explain what contact currents are and review some history of the relevant limits with an emphasis on so-called "touch" contacts, i.e., contact between a person and a contact current source during touch via a very small contact area. Contact current limits were originally set on the basis of controlling the specific absorption rate resulting from the current flowing through regions of small conductive cross section within the body, such as the wrist or ankle. More recently, contact currents have been based on thresholds of perceived heating. In the latest standard from the IEEE developed for NATO, contact currents have been based on two research studies in which thresholds for perception of thermal warmth or thermal pain have been measured. Importantly, these studies maximized conductive contact between the subject and the contact current source. This factor was found to dominate the response to heating wherein high resistance contact, such as from dry skin, can result in local heating many times that from a highly conductive contact. Other factors such as electrode size and shape, frequency of the current and the physical force associated with contact are found to introduce uncertainty in threshold values when comparing data across multiple studies. Relying on studies in which the contact current is minimized for a given threshold does not result in conservative protection limits. 
Future efforts to develop limits on contact currents should include consideration of (1) the basis for the limits (perception, pain, tissue damage); (2) understanding of the practical conditions of real world exposure for contact currents such as contact resistance, size and shape of the contact electrode and applied force at the point of contact; (3) consistency of how contact currents are applied in research studies across different researchers; (4) effects of frequency.
Wildlife toxicity extrapolations: NOAEL versus LOAEL
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fairbrother, A.; Berg, M. van den
1995-12-31
Ecotoxicological assessments must rely on the extrapolation of toxicity data from a few indicator species to many species of concern. Data are available from laboratory studies (e.g., quail, mallards, rainbow trout, fathead minnow) and some planned or serendipitous field studies of a broader, but by no means comprehensive, suite of species. Yet all ecological risk assessments begin with an estimate of risk based on information gleaned from the literature. One is then confronted with the necessity of extrapolating toxicity information from a limited number of indicator species to all organisms of interest. This is a particularly acute problem when trying to estimate hazards to wildlife in terrestrial systems, as there is an extreme paucity of data for most chemicals in all but a handful of species. This section continues the debate among six panelists over the "correct" approach for determining wildlife toxicity thresholds, namely which toxicity value should be used for setting threshold criteria. Should the lowest observable adverse effect level (LOAEL) be used, or is it more appropriate to use the no observable adverse effect level (NOAEL)? What are the shortcomings of using either of these point estimates? Should a "benchmark" approach, similar to that proposed for human health risk assessments, be used instead, where an EC5 or EC10 and associated confidence limits are determined and then divided by a safety factor? How should knowledge of the slope of the dose-response curve be incorporated into determination of toxicity threshold values?
Quantifying Information Gain from Dynamic Downscaling Experiments
NASA Astrophysics Data System (ADS)
Tian, Y.; Peters-Lidard, C. D.
2015-12-01
Dynamic climate downscaling experiments are designed to produce information at higher spatial and temporal resolutions. Such additional information is generated from the low-resolution initial and boundary conditions via the predictive power of the physical laws. However, errors and uncertainties in the initial and boundary conditions can be propagated, and even amplified, in the downscaled simulations. Additionally, the limit of predictability in nonlinear dynamical systems will dampen the information gain even if the initial and boundary conditions were error-free. Thus it is critical to quantitatively define and measure the amount of information gained from dynamic downscaling experiments, to better understand and appreciate their potential and limitations. We present a scheme to objectively measure the information gain from such experiments. The scheme is based on information theory, and we argue that if a downscaling experiment is to exhibit value, it has to produce more information than can be simply inferred from information sources already available. These information sources include the initial and boundary conditions, the coarse-resolution model in which the higher-resolution models are embedded, and the same set of physical laws. These existing information sources define an "information threshold" as a function of the spatial and temporal resolution, and this threshold serves as a benchmark for quantifying the information gain from downscaling experiments, or from any other approach. For a downscaling experiment to show any value, its information content has to exceed this threshold. A recent NASA-supported downscaling experiment is used as an example to illustrate the application of this scheme.
Wáng, Yì Xiáng J; Li, Yáo T; Chevallier, Olivier; Huang, Hua; Leung, Jason Chi Shun; Chen, Weitian; Lu, Pu-Xuan
2018-01-01
Background: Intravoxel incoherent motion (IVIM) tissue parameters depend on the threshold b-value. Purpose: To explore how the threshold b-value impacts PF (f), Dslow (D), and Dfast (D*) values and their performance for liver fibrosis detection. Material and Methods: Fifteen healthy volunteers and 33 hepatitis B patients were included. With a 1.5-T magnetic resonance (MR) scanner and respiration gating, IVIM data were acquired with ten b-values of 10, 20, 40, 60, 80, 100, 150, 200, 400, and 800 s/mm². Signal measurement was performed on the right liver. Segmented-unconstrained analysis was used to compute IVIM parameters, and six threshold b-values in the range of 40-200 s/mm² were compared. PF, Dslow, and Dfast values were placed along the x-axis, y-axis, and z-axis, and a plane was defined to separate volunteers from patients. Results: Higher threshold b-values were associated with higher PF measurements, while lower threshold b-values led to higher Dslow and Dfast measurements. The dependence of PF, Dslow, and Dfast on the threshold b-value differed between healthy livers and fibrotic livers, with the healthy livers showing a higher dependence. A threshold b-value of 60 s/mm² showed the largest mean distance between healthy and fibrotic liver datapoints, and a classification and regression tree showed that a combination of PF (PF < 9.5%), Dslow (Dslow < 1.239 × 10⁻³ mm²/s), and Dfast (Dfast < 20.85 × 10⁻³ mm²/s) differentiated healthy individuals from all individual fibrotic livers with an area under the curve of logistic regression (AUC) of 1. Conclusion: For segmented-unconstrained analysis, selection of a threshold b-value of 60 s/mm² improves IVIM differentiation between healthy livers and fibrotic livers.
Societal-level Risk Factors Associated with Pediatric Hearing Loss: A Systematic Review
Vasconcellos, Adam P.; Colello, Stephanie; Kyle, Meghann E.; Shin, Jennifer J.
2015-01-01
Objective To determine if the current body of evidence describes specific threshold values of concern for modifiable societal-level risk factors for pediatric hearing loss, with the overarching goal of providing actionable guidance for the prevention and screening of audiological deficits in children. Data Sources Three related systematic reviews were performed. Computerized PubMed, Embase, and Cochrane Library searches were performed from inception through October 2013 and were supplemented with manual searches. Review Methods Inclusion/exclusion criteria were designed to determine specific threshold values of societal-level risk factors on hearing loss in the pediatric population. Searches and data extraction were performed by independent reviewers. Results There were 20 criterion-meeting studies with 29,128 participants. Infants less than 2 standard deviations below standardized weight, length, or body mass index were at increased risk. Specific nutritional deficiencies related to iodine and thiamine may also increase risk, although data are limited and threshold values of concern have not been quantified. Blood lead levels above 10 μg/dL were significantly associated with pediatric sensorineural loss, and mixed findings were noted for other heavy metals. Hearing loss was also more prevalent among children of socioeconomically disadvantaged families, as measured by a poverty income ratio less than 0.3 to 1, higher deprivation category status, and head of household employment as a manual laborer. Conclusions Increasing our understanding of specific thresholds of risk associated with causative factors forms the foundation for preventive and targeted screening programs as well as future research endeavors. PMID:24671458
Qi, Cong; Gu, Yiyang; Sun, Qing; Gu, Hongliang; Xu, Bo; Gu, Qing; Xiao, Jing; Lian, Yulong
2017-05-01
We assessed the risk of liver injuries following low doses of N,N-dimethylformamide (DMF) below threshold limit values (20 mg/m³) among leather industry workers and comparison groups. A cohort of 429 workers from a leather factory and 466 non-exposed subjects in China were followed for 4 years. Poisson regression and piece-wise linear regression were used to examine the relationship between DMF and liver injury. Workers exposed to a cumulative dose of DMF were significantly more likely than non-exposed workers to develop liver injury. A nonlinear relationship between DMF and liver injury was observed, and the threshold of the cumulative DMF dose for liver injury was 7.30 (mg/m³)·years. The findings indicate the importance of taking action to reduce DMF occupational exposure limits for promoting worker health.
The development of fluorides for high power laser optics
NASA Astrophysics Data System (ADS)
Ready, J. F.; Vora, H.
1980-07-01
The laser-assisted thermonuclear fusion program needs improved optical materials with high transmission in the ultraviolet and with low values of the nonlinear index of refraction. Lithium fluoride possesses a useful combination of optical properties, but single-crystalline LiF is limited by low mechanical strength. The technique of press forging to increase the mechanical strength is investigated. LiF single crystals were press forged over the temperature range 300-600 deg C to produce fine-grained polycrystalline material. Optical homogeneity at 633 nm, stress birefringence, scattering at 633 nm, residual absorption over the spectral range 339-3800 nm, and laser damage thresholds for 1 ns, 1064 nm and 700 ps, 266 nm laser pulses are evaluated. Single crystals can be press forged without seriously degrading their optical properties. Yield strength in compression, proportional limit and fracture strength in 3- and 4-point bending, fracture energy, and the threshold for microyield are discussed.
Monitoring Start of Season in Alaska
NASA Astrophysics Data System (ADS)
Robin, J.; Dubayah, R.; Sparrow, E.; Levine, E.
2006-12-01
In biomes that have distinct winter seasons, start of spring phenological events, specifically timing of budburst and green-up of leaves, coincides with transpiration. Seasons leave annual signatures that reflect the dynamic nature of the hydrologic cycle and link the different spheres of the Earth system. This paper evaluates whether continuity between AVHRR and MODIS normalized difference vegetation index (NDVI) is achievable for monitoring land surface phenology, specifically start of season (SOS), in Alaska. Additionally, two thresholds, one based on NDVI and the other on accumulated growing degree-days (GDD), are compared to determine which most accurately predicts SOS for Fairbanks. Ratio of maximum greenness at SOS was computed from biweekly AVHRR and MODIS composites for 2001 through 2004 for Anchorage and Fairbanks regions. SOS dates were determined from annual green-up observations made by GLOBE students. Results showed that different processing as well as spectral characteristics of each sensor restrict continuity between the two datasets. MODIS values were consistently higher and had less inter-annual variability during the height of the growing season than corresponding AVHRR values. Furthermore, a threshold of 131-175 accumulated GDD was a better predictor of SOS for Fairbanks than a NDVI threshold applied to AVHRR and MODIS datasets. The NDVI threshold was developed from biweekly AVHRR composites from 1982 through 2004 and corresponding annual green-up observations at University of Alaska-Fairbanks (UAF). The GDD threshold was developed from 20+ years of historic daily mean air temperature data and the same green-up observations. SOS dates computed with the GDD threshold most closely resembled actual green-up dates observed by GLOBE students and UAF researchers. Overall, biweekly composites and effects of clouds, snow, and conifers limit the ability of NDVI to monitor phenological changes in Alaska.
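The accumulated growing degree-day (GDD) predictor described above can be sketched as a simple running sum. The abstract reports the 131-175 GDD range but not the base temperature used, so the 0 °C base below is an assumption for illustration.

```python
def start_of_season(daily_mean_temps_c, gdd_threshold=131.0, base_temp_c=0.0):
    """Day of year on which accumulated growing degree-days first reach
    the threshold; returns None if the threshold is never reached.
    base_temp_c is an assumption; the abstract does not state the base."""
    gdd = 0.0
    for day, t in enumerate(daily_mean_temps_c, start=1):
        gdd += max(t - base_temp_c, 0.0)   # only above-base warmth accumulates
        if gdd >= gdd_threshold:
            return day
    return None
```

For example, ten days at 0 °C followed by days at 10 °C reach 131 GDD on the 24th day.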
John Kilgo; Mark Vukovich
2014-01-01
Thresholds in response by cavity-nesting bird populations to variations in the snag resource are poorly understood. In addition, limited information exists on the value of artificially created snags for cavity-nesting birds. Therefore, uncertainty exists in whether artificially created snags can yield a positive population response among snag-dependent birds. We used...
Desideri, Giovambattista; Virdis, Agostino; Casiglia, Edoardo; Borghi, Claudio
2018-06-01
The cardiovascular relevance of serum uric acid is growing dramatically, especially as a cardiovascular risk factor potentially able to exert either a direct deleterious impact or a synergistic effect with other cardiovascular risk factors. At present, the threshold level of serum uric acid that contributes to cardiovascular risk remains undefined. Indeed, the available epidemiological case studies are not homogeneous, and some preliminary data suggest that the so-called "cardiovascular threshold limit" may differ substantially from the cut-off that triggers an acute gout attack. In this scenario, it is necessary to clarify and quantify this threshold value, to insert it into risk stratification algorithm scores and, in turn, to adopt proper prevention and correction strategies. Clarifying the relationship between circulating levels of uric acid and cardio-nephro-metabolic disorders in a broad sample representative of the general population is critical to identifying the threshold value of serum uric acid that best discriminates the increased risk associated with uric acid. The Uric acid Right for heArt Health (URRAH) project has been designed to define, as its primary objective, the level of uricemia above which the independent risk of cardiovascular disease increases significantly in a general Italian population.
Threshold network of a financial market using the P-value of correlation coefficients
NASA Astrophysics Data System (ADS)
Ha, Gyeong-Gyun; Lee, Jae Woo; Nobi, Ashadun
2015-06-01
Threshold methods in financial networks are important tools for extracting information about the state of a market. Previously, absolute thresholds on correlation coefficients have been used; however, these have no relation to the length of the time window. We assign a threshold value that depends on the size of the time window by using the P-value concept of statistics. We construct a threshold network (TN) at the same threshold value for two different time window sizes in the Korean Composite Stock Price Index (KOSPI). We measure network properties such as the edge density, clustering coefficient, assortativity coefficient, and modularity. We find that a significant difference exists between the network properties of the two time windows at the same threshold, especially during crises. This implies that the market information depends on the length of the time window used when constructing the TN. We apply the same technique to the Standard and Poor's 500 (S&P500) and observe similar results.
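A minimal sketch of a window-length-dependent threshold: the critical |r| is derived from the two-sided P-value of the Pearson correlation t-test, so longer windows yield lower thresholds. The P-value of 0.01 and the use of a return matrix are illustrative assumptions, not details from the paper.

```python
import numpy as np
from scipy import stats

def critical_correlation(n_obs, p=0.01):
    """Smallest |r| whose two-sided Pearson-test P-value equals p for a
    window of n_obs observations (t-test with n_obs - 2 dof)."""
    t = stats.t.ppf(1.0 - p / 2.0, df=n_obs - 2)
    return t / np.sqrt(n_obs - 2 + t ** 2)

def threshold_network(returns, p=0.01):
    """Adjacency matrix of the threshold network: link assets i and j
    when |corr(i, j)| exceeds the window-length-dependent critical value."""
    n_obs = returns.shape[0]
    corr = np.corrcoef(returns, rowvar=False)
    r_crit = critical_correlation(n_obs, p)
    adj = np.abs(corr) >= r_crit
    np.fill_diagonal(adj, False)   # no self-loops
    return adj, r_crit
```

With the threshold tied to P-value rather than fixed, networks built from different window sizes become comparable at the same significance level.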
NASA Astrophysics Data System (ADS)
Rieder, H. E.; Staehelin, J.; Maeder, J. A.; Ribatet, M.; Stübi, R.; Weihs, P.; Holawe, F.; Peter, T.; Davison, A. C.
2009-04-01
Over the last few decades negative trends in stratospheric ozone have been studied because of the direct link between decreasing stratospheric ozone and increasing surface UV radiation. Recently a discussion on ozone recovery has begun. Long-term measurements of total ozone extending back earlier than 1958 are limited and only available from a few stations in the northern hemisphere. The world's longest total ozone record is available from Arosa, Switzerland (Staehelin et al., 1998a,b). At this site total ozone measurements have been made from late 1926 through the present day. In this study (Rieder et al., 2009), new tools from extreme value theory (e.g. Coles, 2001; Ribatet, 2007) are applied to select mathematically well-defined thresholds for extreme low and extreme high total ozone. A heavy-tail-focused approach is used by fitting the Generalized Pareto Distribution (GPD) to the Arosa time series. Asymptotic arguments (Pickands, 1975) justify the use of the GPD for modeling exceedances over a sufficiently high (or below a sufficiently low) threshold (Coles, 2001). More precisely, the GPD is the limiting distribution of normalized excesses over a threshold, as the threshold approaches the endpoint of the distribution. In practice, GPD parameters are fitted to exceedances by maximum likelihood or other methods, such as probability-weighted moments. A preliminary step consists in defining an appropriate threshold for which the asymptotic GPD approximation holds. Suitable tools for threshold selection, such as the MRL plot (mean residual life plot) and TC plot (stability plot) from the POT package (Ribatet, 2007), are presented. The frequency distributions of extremes in low (termed ELOs) and high (termed EHOs) total ozone and their influence on the long-term changes in total ozone are analyzed. Furthermore, it is shown that from the GPD model the distribution of so-called ozone mini holes (e.g. 
Bojkov and Balis, 2001) can be precisely estimated and that the "extremes concept" provides new information on the data distribution and variability within the Arosa record as well as on the influence of ELOs and EHOs on the long-term trends of the ozone time series. References: Bojkov, R. D., and Balis, D.S.: Characteristics of episodes with extremely low ozone values in the northern middle latitudes 1975-2000, Ann. Geophys., 19, 797-807, 2001. Coles, S.: An Introduction to Statistical Modeling of Extreme Values, Springer Series in Statistics, ISBN:1852334592, Springer, Berlin, 2001. Pickands, J.: Statistical inference using extreme order statistics, Ann. Stat., 3, 1, 119-131, 1975. Ribatet, M.: POT: Modelling peaks over a threshold, R News, 7, 34-36, 2007. Rieder, H.E., Staehelin, J., Maeder, J.A., Stübi, R., Weihs, P., Holawe, F., and M. Ribatet: From ozone mini holes and mini highs towards extreme value theory: New insights from extreme events and non stationarity, submitted to J. Geophys. Res., 2009. Staehelin, J., Kegel, R., and Harris, N. R.: Trend analysis of the homogenized total ozone series of Arosa (Switzerland), 1929-1996, J. Geophys. Res., 103(D7), 8389-8400, doi:10.1029/97JD03650, 1998a. Staehelin, J., Renaud, A., Bader, J., McPeters, R., Viatte, P., Hoegger, B., Bugnion, V., Giroud, M., and Schill, H.: Total ozone series at Arosa (Switzerland): Homogenization and data comparison, J. Geophys. Res., 103(D5), 5827-5842, doi:10.1029/97JD02402, 1998b.
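The peaks-over-threshold step described above can be sketched as below: fit the GPD to exceedances over a high threshold and compute a return level. The simulated series, the 95th-percentile threshold choice, and the use of SciPy rather than the R POT package are illustrative assumptions; the study selects its threshold with MRL and TC plots.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(42)
# Stand-in for a total-ozone series (Dobson-unit-like values)
series = rng.normal(loc=330.0, scale=30.0, size=10_000)

u = np.quantile(series, 0.95)        # candidate high threshold
excesses = series[series > u] - u    # exceedances over u
p_exceed = excesses.size / series.size

# Fit the GPD to the excesses (location fixed at 0 by construction)
shape, _, scale = genpareto.fit(excesses, floc=0.0)

def return_level(u, shape, scale, p_exceed, k):
    """Level exceeded on average once every k observations."""
    if abs(shape) < 1e-9:
        return u + scale * np.log(k * p_exceed)
    return u + (scale / shape) * ((k * p_exceed) ** shape - 1.0)
```

Extreme lows (the ozone mini holes) would be handled symmetrically by fitting deficits below a low threshold.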
Contrera, Joseph F
2011-02-01
The Threshold of Toxicological Concern (TTC) is a level of exposure to a genotoxic impurity that is considered to represent a negligible risk to humans. The TTC was derived from rodent carcinogenicity TD50 values, which are a measure of carcinogenic potency. The TTC currently sets a default limit of 1.5 μg/day in food contact substances and pharmaceuticals for all genotoxic impurities without carcinogenicity data. Bercu et al. (2010) used the QSAR-predicted TD50 to calculate a risk-specific dose (RSD), which is a carcinogenic-potency-adjusted TTC for genotoxic impurities. This promising approach is currently limited by the software used, a combination of MC4PC (www.multicase.com) and a Lilly Inc. in-house software (VISDOM) that is not available to the public. In this report the TD50 and RSD were predicted using commercially available software, SciQSAR (formerly MDL-QSAR, www.scimatics.com), employing the same TD50 training data set and external validation test set used by Bercu et al. (2010). The results demonstrate the general applicability of QSAR-predicted TD50 values for determining RSDs for genotoxic impurities and the improved performance of SciQSAR for predicting TD50 values. Copyright © 2010 Elsevier Inc. All rights reserved.
Higher certainty of the laser-induced damage threshold test with a redistributing data treatment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jensen, Lars; Mrohs, Marius; Gyamfi, Mark
2015-10-15
As a consequence of its statistical nature, the measurement of the laser-induced damage threshold always carries a risk of over- or underestimating the real threshold value. For the established S-on-1 (and 1-on-1) measurement procedures outlined in the corresponding ISO standard 21254, the results depend on the number of data points and their distribution over the fluence scale. With the limited space on a test sample, as well as the requirements on test-site separation and beam sizes, the amount of data from one test is restricted. This paper reports on a way to treat damage test data in order to reduce the statistical error and therefore the measurement uncertainty. Three simple assumptions allow for the assignment of one data point to multiple data bins and therefore virtually increase the available data base.
Optimal Binarization of Gray-Scaled Digital Images via Fuzzy Reasoning
NASA Technical Reports Server (NTRS)
Dominguez, Jesus A. (Inventor); Klinko, Steven J. (Inventor)
2007-01-01
A technique for finding an optimal threshold for binarization of a gray scale image employs fuzzy reasoning. A triangular membership function is employed which is dependent on the degree to which the pixels in the image belong to either the foreground class or the background class. Use of a simplified linear fuzzy entropy factor function facilitates short execution times and use of membership values between 0.0 and 1.0 for improved accuracy. To improve accuracy further, the membership function employs lower and upper bound gray level limits that can vary from image to image and are selected to be equal to the minimum and the maximum gray levels, respectively, that are present in the image to be converted. To identify the optimal binarization threshold, an iterative process is employed in which different possible thresholds are tested and the one providing the minimum fuzzy entropy measure is selected.
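The idea above can be caricatured in a few lines: triangular memberships anchored at the image's own minimum and maximum gray levels, a linear fuzzy entropy factor, and an exhaustive scan for the threshold minimizing total entropy. The exact membership and entropy functions of the patented method are not given in the abstract, so the specific forms below are assumptions.

```python
import numpy as np

def fuzzy_entropy_threshold(image):
    """Scan candidate thresholds; return the one minimizing a fuzzy
    entropy measure built from triangular memberships whose lower and
    upper bounds are the image's own min and max gray levels."""
    g = np.asarray(image, dtype=float).ravel()
    gmin, gmax = g.min(), g.max()
    if gmin == gmax:
        return gmin                         # flat image: nothing to split
    best_t, best_e = gmin, np.inf
    for t in np.unique(g):
        below = g <= t
        mu = np.empty_like(g)
        # background membership: 1 at gmin, 0.5 at t, 0 at gmax (assumed form)
        mu[below] = 1.0 - 0.5 * (g[below] - gmin) / max(t - gmin, 1e-12)
        mu[~below] = 0.5 * (gmax - g[~below]) / max(gmax - t, 1e-12)
        e = np.minimum(mu, 1.0 - mu).sum()  # linear fuzzy entropy factor
        if e < best_e:
            best_e, best_t = e, t
    return best_t
```

Pixels above the returned threshold are then binarized to foreground.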
An Algorithm to Automate Yeast Segmentation and Tracking
Doncic, Andreas; Eser, Umut; Atay, Oguzhan; Skotheim, Jan M.
2013-01-01
Our understanding of dynamic cellular processes has been greatly enhanced by rapid advances in quantitative fluorescence microscopy. Imaging single cells has emphasized the prevalence of phenomena that can be difficult to infer from population measurements, such as all-or-none cellular decisions, cell-to-cell variability, and oscillations. Examination of these phenomena requires segmenting and tracking individual cells over long periods of time. However, accurate segmentation and tracking of cells is difficult and is often the rate-limiting step in an experimental pipeline. Here, we present an algorithm that accomplishes fully automated segmentation and tracking of budding yeast cells within growing colonies. The algorithm incorporates prior information of yeast-specific traits, such as immobility and growth rate, to segment an image using a set of threshold values rather than one specific optimized threshold. Results from the entire set of thresholds are then used to perform a robust final segmentation. PMID:23520484
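The set-of-thresholds idea can be sketched as a voting scheme: segment at every threshold in the set and keep pixels classed as foreground by a majority. This is a simplified stand-in; the actual algorithm combines per-threshold segmentations with yeast-specific priors such as immobility and growth rate.

```python
import numpy as np

def robust_segment(image, thresholds, min_votes=None):
    """Foreground mask of pixels exceeding at least `min_votes` of the
    thresholds (strict majority by default)."""
    thresholds = list(thresholds)
    if min_votes is None:
        min_votes = len(thresholds) // 2 + 1
    votes = sum((np.asarray(image) > t).astype(int) for t in thresholds)
    return votes >= min_votes
```

Using the whole set rather than one optimized threshold makes the result less sensitive to any single cutoff choice.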
NASA Astrophysics Data System (ADS)
L'vov, Victor A.; Kosogor, Anna
2016-09-01
Application of a magnetic field leads to spatially inhomogeneous magnetostriction of twinned ferromagnetic martensite. When the increasing field and magnetostrictive strain reach certain threshold values, the motion of twin boundaries and the magnetically induced reorientation (MIR) of twinned martensite begin. MIR leads to giant magnetically induced deformation of twinned martensite. In the present article, the threshold field (TF) and the temperature range of observability of MIR were calculated for Ni-Mn-Ga martensite, assuming that the threshold strain (TS) is temperature-independent. The calculations show that if the TS is of the order of 10⁻⁴, the TF depends strongly on temperature and MIR can be observed only above a limiting temperature (~220 K). If the TS is of the order of 10⁻⁶, the TF depends weakly on temperature and MIR can be observed at extremely low temperatures. The obtained theoretical results are in agreement with available experimental data.
Experimental study of transient paths to the extinction in sonoluminescence.
Urteaga, Raúl; Dellavale, Damián; Puente, Gabriela F; Bonetto, Fabián J
2008-09-01
An experimental study of the extinction threshold of single bubble sonoluminescence in an air-water system is presented. Different runs from 5% to 100% of air concentrations were performed at room pressure and temperature. The intensity of sonoluminescence (SL) and time of collapse (t(c)) with respect to the driving were measured while the acoustic pressure was linearly increased from the onset of SL until the bubble extinction. The experimental data were compared with theoretical predictions for shape and position instability thresholds. It was found that the extinction of the bubble is determined by different mechanisms depending on the air concentration. For concentrations greater than approximately 30%-40% with respect to the saturation, the parametric instability limits the maximum value of R(0) that can be reached. On the other hand, for lower concentrations, the extinction appears as a limitation in the time of collapse. Two different mechanisms emerge in this range, i.e., the Bjerknes force and the Rayleigh-Taylor instability. The bubble acoustic emission produces backreaction on the bubble itself. This effect occurs in both mechanisms and is essential for the correct prediction of the extinction threshold in the case of low air dissolved concentration.
ZnO-PVA nanocomposite films for low threshold optical limiting applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Viswanath, Varsha; Beenakumari, C.; Muneera, C. I.
Zinc oxide-PVA nanocomposite films were fabricated by a simple solution-casting method, incorporating small weight percentages (<1.2 wt%) of ZnO in PVA (~0.625×10⁻³ M to 7×10⁻³ M), and their structure, morphology, and linear and low-threshold nonlinear optical properties were investigated. The films were characterized as nanostructured ZnO encapsulated between the molecules/chains of the semicrystalline host polymer PVA. The samples exhibited low-threshold nonlinear absorption and negative nonlinear refraction, as studied using the Z-scan technique. A switchover from saturable absorption (SA) to reverse saturable absorption (RSA) was observed as the concentration of ZnO was increased. Optical limiting of 632.8 nm CW laser light displayed by these nanocomposite films is also demonstrated. The estimated values of the effective coefficients of nonlinear absorption, nonlinear refraction, and third-order nonlinear susceptibility, |χ⁽³⁾|, are among the highest reported for continuous-wave laser excitation. The results show that ZnO-PVA nanocomposite films have great potential for applications in future optical and photonic devices.
Jing, X; Cimino, J J
2014-01-01
Graphical displays can make data more understandable; however, large graphs can challenge human comprehension. We have previously described a filtering method that provides high-level summary views of large data sets. In this paper we demonstrate our method for setting and selecting thresholds to limit graph size while retaining important information, by applying it to large single and paired data sets taken from patient and bibliographic databases. Four case studies are used to illustrate the method. The data are either patient discharge diagnoses (coded using the International Classification of Diseases, Clinical Modification [ICD9-CM]) or Medline citations (coded using the Medical Subject Headings [MeSH]). We use combinations of different thresholds to obtain filtered graphs for detailed analysis. Threshold setting and selection (thresholds for node counts, class counts, ratio values, p values for paired data sets, and percentiles of selected class-count thresholds) are demonstrated in detail in the case studies. The main steps include data preparation, data manipulation, computation, and threshold selection and visualization. We also describe the data models for the different types of thresholds and the considerations for threshold selection. The filtered graphs are 1%-3% of the size of the original graphs. For our case studies, the graphs provide 1) the most heavily used ICD9-CM codes, 2) the codes with the most patients in a research hospital in 2011, 3) a profile of publications on "heavily represented topics" in MEDLINE in 2011, and 4) validated knowledge about adverse effects of the medication rosiglitazone and new interesting areas in the ICD9-CM hierarchy associated with patients taking the medication pioglitazone. Our filtering method reduces large graphs to a manageable size by removing relatively unimportant nodes.
The graphical method provides summary views based on computation of usage frequency and semantic context of hierarchical terminology. The method is applicable to large data sets (such as a hundred thousand records or more) and can be used to generate new hypotheses from data sets coded with hierarchical terminologies.
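One simplified reading of the combined-threshold filtering: keep a node when its own usage count meets the node-count threshold and its share of its parent's count meets the ratio threshold. The function name, the dictionary representation, and the parent-share interpretation of the ratio threshold are assumptions for illustration, not the paper's data model.

```python
def filter_nodes(usage, parent, min_count, min_ratio):
    """usage: node -> usage count; parent: node -> parent node.
    Return the set of nodes passing both thresholds; roots are judged
    on count alone (their ratio is treated as 1)."""
    kept = set()
    for node, count in usage.items():
        p = parent.get(node)
        ratio = count / usage[p] if p in usage and usage[p] > 0 else 1.0
        if count >= min_count and ratio >= min_ratio:
            kept.add(node)
    return kept
```

Raising either threshold shrinks the summary graph further, which is the lever the case studies tune.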
Estimation of the geochemical threshold and its statistical significance
Miesch, A.T.
1981-01-01
A statistic is proposed for estimating the geochemical threshold and its statistical significance, or it may be used to identify a group of extreme values that can be tested for significance by other means. The statistic is the maximum gap between adjacent values in an ordered array after each gap has been adjusted for the expected frequency. The values in the ordered array are geochemical values transformed by either ln(?? - ??) or ln(?? - ??) and then standardized so that the mean is zero and the variance is unity. The expected frequency is taken from a fitted normal curve with unit area. The midpoint of an adjusted gap that exceeds the corresponding critical value may be taken as an estimate of the geochemical threshold, and the associated probability indicates the likelihood that the threshold separates two geochemical populations. The adjusted gap test may fail to identify threshold values if the variation tends to be continuous from background values to the higher values that reflect mineralized ground. However, the test will serve to identify other anomalies that may be too subtle to have been noted by other means. © 1981.
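A hedged sketch of the proposed statistic (without the significance test): standardize the transformed values, weight each gap between adjacent ordered values by the standard-normal density expected at its midpoint, and take the midpoint of the largest adjusted gap as the threshold estimate. The density-weighting form is an assumed reading of "adjusted for the expected frequency."

```python
import numpy as np
from scipy.stats import norm

def adjusted_gap_threshold(values):
    """Estimate a threshold as the midpoint (in original units) of the
    largest density-adjusted gap between adjacent ordered values."""
    v = np.sort(np.asarray(values, dtype=float))
    z = (v - v.mean()) / v.std()        # standardize: mean 0, variance 1
    gaps = np.diff(z)
    mids = (z[:-1] + z[1:]) / 2.0
    adjusted = gaps * norm.pdf(mids)    # weight gap by expected frequency
    i = int(np.argmax(adjusted))
    return (v[i] + v[i + 1]) / 2.0
```

A gap of a given width counts for more where the fitted normal curve expects the data to be dense, which is what separates a genuine break from ordinary tail sparseness.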
Thresholds for conservation and management: structured decision making as a conceptual framework
Nichols, James D.; Eaton, Mitchell J.; Martin, Julien; Edited by Guntenspergen, Glenn R.
2014-01-01
Ecological thresholds are values of system state or driver variables at which small changes produce substantial changes in system dynamics. They are frequently incorporated into ecological models used to project system responses to management actions. Utility thresholds are components of management objectives and are values of state or performance variables at which small changes yield substantial changes in the value of the management outcome. Decision thresholds are values of system state variables at which small changes prompt changes in management actions in order to reach specified management objectives. Decision thresholds are derived from the other components of the decision process. We advocate a structured decision making (SDM) approach within which the following components are identified: objectives (possibly including utility thresholds), potential actions, models (possibly including ecological thresholds), a monitoring program, and a solution algorithm (which produces decision thresholds). Adaptive resource management (ARM) is described as a special case of SDM developed for recurrent decision problems that are characterized by uncertainty. We believe that SDM in general, and ARM in particular, provide good approaches to conservation and management. Use of SDM and ARM also clarifies the distinct roles of ecological thresholds, utility thresholds, and decision thresholds in informed decision processes.
NASA Astrophysics Data System (ADS)
Rivlin, Lev A.
1990-05-01
A method is suggested for the generation of atomic beams with a high degree of monokinetization from beams of negative ions accelerated in an electric field up to a threshold moment at which, subject to the Doppler effect, the longitudinal component of the ion velocity becomes sufficient for the photodetachment of an electron from an ion by photons in a laser beam collinear with the ion beam. The resultant neutral atoms continue to move without acceleration and at the same longitudinal velocities equal to the threshold value. An analysis of a number of factors limiting this effect is given below.
NASA Astrophysics Data System (ADS)
Kurtzman, D.; Kanner, B.; Levy, Y.; Shapira, R. H.; Bar-Tal, A.
2017-12-01
Closed-root-zone experiments (e.g., pots, lysimeters) reveal in many cases a mineral-nitrogen (N) concentration above which root N-uptake efficiency declines significantly and nitrate leaching below the root zone increases dramatically. A less direct way to reveal this threshold concentration in agricultural fields is to calibrate N-transport models of the unsaturated zone to nitrate data from deep samples (under the root zone) by fitting the threshold concentration of the nitrate-uptake function. Independent research efforts of these two types in light soils, where nitrate problems in underlying aquifers are common, revealed: 1) that the threshold exists for most crops (field crops, vegetables, and orchards); 2) good agreement on the threshold value between the two very different research methodologies; and 3) that the threshold lies within 20-50 mg-N/L. Staying below the threshold is a relatively simple aim on the way to maintaining intensive agriculture with limited effects on the nitrate concentration in the underlying water resource. Our experience shows that in some crops this threshold coincides with the end of the rise of the N-yield curve (e.g., corn); in this case, it is relatively easy to convince farmers to fertilize below the threshold. In other crops (e.g., potato), although significant N is lost to leaching, the crop can still use a higher N concentration to increase yield.
Manning, F.W.; Groothuis, S.E.; Lykins, J.H.; Papke, D.M.
1962-06-12
An improved area radiation dose monitor is designed which is adapted to compensate continuously for background radiation below a threshold dose rate and to give warning when the dose integral of the dose rate of an above-threshold radiation excursion exceeds a selected value. This is accomplished by providing means for continuously charging an ionization chamber. The chamber provides a first current proportional to the incident radiation dose rate. Means are provided for generating a second current including means for nulling out the first current with the second current at all values of the first current corresponding to dose rates below a selected threshold dose rate value. The second current has a maximum value corresponding to that of the first current at the threshold dose rate. The excess of the first current over the second current, which occurs above the threshold, is integrated and an alarm is given at a selected integrated value of the excess corresponding to a selected radiation dose. (AEC)
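A digital analogue of the patent's nulling-and-integrating scheme might look like the sketch below. The real apparatus works with analog currents from an ionization chamber; the function and parameter names here are hypothetical.

```python
def dose_alarm(dose_rates, dt, rate_threshold, dose_limit):
    """Sketch of the monitor logic: dose rate below the threshold is
    nulled out (background compensation); only the excess above the
    threshold is integrated, and an alarm is raised once the integrated
    excess reaches the selected dose. All names are hypothetical."""
    integral = 0.0
    for rate in dose_rates:
        excess = max(0.0, rate - rate_threshold)  # nulled below threshold
        integral += excess * dt                   # integrate the excess only
        if integral >= dose_limit:
            return True  # alarm condition
    return False
```

Background at or below the threshold rate never accumulates, so only a genuine above-threshold excursion can trip the alarm.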
Asymptotic Laws of Thermovibrational Convection in a Horizontal Fluid Layer
NASA Astrophysics Data System (ADS)
Smorodin, B. L.; Myznikova, B. I.; Keller, I. O.
2017-02-01
A theoretical study of convective instability is applied to a horizontal layer of incompressible single-component fluid subjected to uniform steady gravity, longitudinal vibrations of arbitrary frequency, and an initial temperature difference. The mathematical model of thermovibrational convection has the form of an initial-boundary-value problem for the Oberbeck-Boussinesq system of equations. The problems are solved using several strategies: the method of averaging, the method of multiple scales, a Galerkin approach, the Wentzel-Kramers-Brillouin method, and the Floquet technique. The numerical analysis has shown that the effect of vibrations on the stability threshold is complex: vibrations can either stabilize or destabilize the basic state, depending on the values of the parameters. The influence of the Prandtl number on the instability thresholds is investigated. The asymptotic behaviour of the critical values of the parameters is studied in two limiting cases: (i) small amplitude and (ii) low frequency of vibration. In case (i), the instability is due to the influence of the thermovibrational mechanism on the classical Rayleigh-Benard convective instability. In case (ii), the instability is related to the instability of oscillating counter-streams with a cubic profile.
Energy Emergency Management Information System (EEMIS): Functional requirements
NASA Astrophysics Data System (ADS)
1980-10-01
These guidelines state that in order to create the widest practicable competition, the system's requirements, with few exceptions, must be expressed in functional terms without reference to specific hardware or software products, and that wherever exceptions are made a statement of justification must be provided. In addition, these guidelines set forth a recommended maximum threshold limit of annual contract value for schedule contract procurements.
An assessment of the Space Station Freedom program's leakage current requirement
NASA Technical Reports Server (NTRS)
Nagy, Michael
1991-01-01
The Space Station Freedom Program requires leakage currents to be limited to less than human perception level, which NASA presently defines as 5 mA for dc. The origin of this value is traced, and the literature for other dc perception threshold standards is surveyed. It is shown that while many varying standards exist, very little experimental data is available to support them.
NASA Astrophysics Data System (ADS)
Vembris, Aivars; Zarins, Elmars; Kokars, Valdis
2017-10-01
Organic solid-state lasers are thoroughly investigated due to their potential applications in communication, sensors, biomedicine, etc. A low amplified spontaneous emission (ASE) excitation threshold is essential for further use of a material in devices. Intermolecular interaction limits the density of molecules that can be loaded into the matrix. This is the case for the well-known red-light-emitting laser dye 4-(dicyanomethylene)-2-methyl-6-(4-dimethylaminostyryl)-4H-pyran (DCM). The lowest ASE threshold of this laser dye is obtained within the concentration range between 2 and 4 wt%; at higher concentrations the threshold energy increases drastically. In this work the optical and ASE properties of three original DCM derivatives in poly(N-vinylcarbazole) (PVK) at various concentrations are discussed. One of the derivatives is a modified DCM dye in which the methyl substituents in the electron-donor part have been replaced with bulky trityloxyethyl groups (DWK-1). These sterically significant functional groups do not influence electron transitions in the dye but prevent aggregation of the molecules. The chemical structure of the second investigated compound is similar to DWK-1, with the methyl group replaced by a tert-butyl substituent (DWK-1TB). The third derivative (DWK-2) contains two N,N-di(trityloxyethyl)amino electron-donor groups. All results were compared with the DCM:PVK system. The photoluminescence quantum yield (PLQY) is up to ten times larger for DWK-1TB than for the DCM systems. The bulky trityloxyethyl groups prevent aggregation of the molecules, decreasing the interaction between dyes and the amount of non-radiative decay. Red shifts of the photoluminescence and amplified spontaneous emission at higher concentrations were observed due to the solid-state solvation effect. The increased dye density in the matrix, with a smaller reduction in PLQY, resulted in a low ASE threshold energy.
The lowest threshold value, around 21 μJ/cm2 (2.1 kW/cm2), was obtained in DWK-1TB:PVK films.
Asamoah, Anita; Essumang, David Kofi; Muff, Jens; Kucheryavskiy, Sergey V; Søgaard, Erik Gydesen
2018-01-15
The aim of the study was to assess the levels of PCBs in the breast milk of Ghanaian women in a suspected hotspot area and a relatively non-hotspot area, and to find out whether the levels of these PCBs pose any risk to breastfed infants. A total of 128 individual human breast milk samples were collected from both primiparous and multiparous mothers, and the levels of PCBs in the milk samples were compared. Some of these mothers (105 individuals) work or reside in and around Agbogbloshie (hotspot), the largest electric and electronic waste dump and recycling site in Accra, Ghana. The others (23 donor mothers) reside in and around Kwabenya (non-hotspot), a mainly residential area without industrial activities. Samples were analyzed using GC-MS/MS. The total mean level and range of Σ7 PCBs were 3.64 ng/g lipid wt and <LOD-29.20 ng/g lipid wt, respectively. Mean concentrations from Agbogbloshie (hotspot) and Kwabenya (non-hotspot) were 4.43 ng/g lipid wt and 0.03 ng/g lipid wt, respectively. PCB-28 contributed the largest share of the total PCBs in the milk samples (29.5%) and PCB-101 the smallest (1.74%). The estimated daily intake of PCBs and the total PCB concentrations in this work were lower than those reported in similar studies across the world. The estimated hazard quotient using Health Canada's guideline threshold limit of 1 μg/kg bw/day showed no potential health risk to babies. However, considering the minimum tolerable value of 0.03 μg/kg bw/day defined by the Agency for Toxic Substances and Disease Registry (ATSDR), the values for some mothers were found to be at the threshold limit, which may indicate a potential health risk to their babies. The mothers with values at the threshold of the minimum tolerable limit are those who work or reside in and around the Agbogbloshie e-waste site. Copyright © 2017 Elsevier B.V. All rights reserved.
Neural tuning characteristics of auditory primary afferents in the chicken embryo.
Jones, S M; Jones, T A
1995-02-01
Primary afferent activity was recorded from the cochlear ganglion in chicken embryos (Gallus domesticus) at 19 days of incubation (E19). The ganglion was accessed via the recessus scala tympani and impaled with glass micropipettes. Frequency tuning curves were obtained using a computerized threshold tracking procedure. Tuning curves were evaluated to determine characteristic frequencies (CFs), CF thresholds, slopes of low- and high-frequency flanks, and tip sharpness (Q10dB). The majority of tuning curves exhibited the typical 'V' shape described for older birds and, on average, appeared relatively mature based on mean values for CF thresholds (59.6 +/- 20.3 dB SPL) and tip sharpness (Q10dB = 5.2 +/- 3). The mean slopes of the low- (61.9 +/- 37 dB/octave) and high- (64.6 +/- 33 dB/octave) frequency flanks, although comparable, were somewhat less than those reported for 21-day-old chickens. Approximately 14% of the tuning curves displayed an unusual 'saw-tooth' pattern. CFs ranged from 188 to 1623 Hz. The highest CF was well below those reported for post-hatch birds. In addition, a broader range of Q10dB values (1.2 to 16.9) may be related to a greater variability in embryonic tuning curves. Overall, these data suggest that an impressive functional maturity exists in the embryo at E19. The most significant sign of immaturity was the limited expression of high frequencies. It is argued that the limited high CF may in part be due to the developing middle-ear transfer function and/or to a functionally immature cochlear base.
What is the maximum mass of a Population III galaxy?
NASA Astrophysics Data System (ADS)
Visbal, Eli; Bryan, Greg L.; Haiman, Zoltán
2017-08-01
We utilize cosmological hydrodynamic simulations to study the formation of Population III (Pop III) stars in dark matter haloes exposed to strong ionizing radiation. We simulate the formation of three haloes subjected to a wide range of ionizing fluxes, and find that for high flux, ionization and photoheating can delay gas collapse and star formation up to halo masses significantly larger than the atomic cooling threshold. The threshold halo mass at which gas first collapses and cools increases with ionizing flux for intermediate values, and saturates at a value approximately an order of magnitude above the atomic cooling threshold for extremely high flux (e.g. ≈5 × 108 M⊙ at z ≈ 6). This behaviour can be understood in terms of photoheating, ionization/recombination and Ly α cooling in the pressure-supported, self-shielded gas core at the centre of the growing dark matter halo. We examine the spherically averaged radial velocity profiles of collapsing gas and find that a gas mass of up to ≈106 M⊙ can reach the central regions within 3 Myr, providing an upper limit on the amount of massive Pop III stars that can form. The ionizing radiation increases this limit by a factor of a few compared to strong Lyman-Werner radiation alone. We conclude that the bright He II 1640 Å emission recently observed from the high-redshift galaxy CR7 cannot be explained by Pop III stars alone. However, in some haloes, a sufficient number of Pop III stars may form to be detectable with future telescopes such as the James Webb Space Telescope.
Rogowski, W H; Grosse, S D; Meyer, E; John, J; Palmer, S
2012-05-01
Public decision makers face demands to invest in applied research in order to accelerate the adoption of new genetic tests. However, such an investment is profitable only if the results gained from further investigations have a significant impact on health care practice. An upper limit for the value of additional information aimed at improving the basis for reimbursement decisions is given by the expected value of perfect information (EVPI). This study illustrates the significance of the concept of EVPI on the basis of a probabilistic cost-effectiveness model of screening for hereditary hemochromatosis among German men. In the present example, population-based screening can barely be recommended at threshold values of 50,000 or 100,000 Euro per life year gained and also the value of additional research which might cause this decision to be overturned is small: At the mentioned threshold values, the EVPI in the German public health care system was ca. 500,000 and 2,200,000 Euro, respectively. An analysis of EVPI by individual parameters or groups of parameters shows that additional research about adherence to preventive phlebotomy could potentially provide the highest benefit. The potential value of further research also depends on methodological assumptions regarding the decision maker's time horizon as well as on scenarios with an impact on the number of affected patients and the cost-effectiveness of screening.
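The EVPI calculation itself is simple to sketch from the output of a probabilistic sensitivity analysis. The example below is a generic illustration of the concept, not the authors' German screening model; all names are invented.

```python
import numpy as np

def evpi(net_benefit_samples):
    """Per-person expected value of perfect information (EVPI).
    net_benefit_samples: array of shape (n_samples, n_strategies) of net
    monetary benefit (threshold value per life year x effect - cost) drawn
    from a probabilistic cost-effectiveness model. EVPI is the mean of the
    per-sample maximum minus the maximum of the per-strategy means."""
    nb = np.asarray(net_benefit_samples, dtype=float)
    return float(nb.max(axis=1).mean() - nb.mean(axis=0).max())

# Hypothetical two-strategy example (e.g., screening vs. no screening):
samples = np.random.default_rng(0).normal([100.0, 110.0], 50.0, (100000, 2))
per_person_evpi = evpi(samples)  # multiply by the affected population for total EVPI
```

Multiplying the per-person value by the size of the affected population and the decision maker's time horizon gives the population EVPI, the upper limit on the value of further research.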
Determination of the measurement threshold in gamma-ray spectrometry.
Korun, M; Vodenik, B; Zorko, B
2017-03-01
In gamma-ray spectrometry the measurement threshold describes the lower boundary of the interval of peak areas originating in the response of the spectrometer to gamma-rays from the sample measured. In this sense it presents a generalization of the net indication corresponding to the decision threshold, which is the measurement threshold at the quantity value zero for a predetermined probability of making errors of the first kind. Measurement thresholds were determined for peaks appearing in the spectra of the radon daughters 214Pb and 214Bi by measuring the spectrum 35 times under repeatable conditions. For the calculation of the measurement threshold the probability of detection of the peaks and the mean relative uncertainty of the peak area were used. The relative measurement thresholds, the ratios between the measurement threshold and the mean peak-area uncertainty, were determined for 54 peaks where the probability of detection varied between a few percent and about 95% and the relative peak-area uncertainty between 30% and 80%. The relative measurement thresholds vary considerably from peak to peak, although the nominal value of the sensitivity parameter defining the sensitivity for locating peaks was equal for all peaks. At the value of the sensitivity parameter used, the peak analysis does not locate peaks corresponding to the decision threshold with a probability in excess of 50%. This implies that peaks in the spectrum may not be located even though the true value of the measurand exceeds the decision threshold. Copyright © 2017 Elsevier Ltd. All rights reserved.
Computational analysis of thresholds for magnetophosphenes
NASA Astrophysics Data System (ADS)
Laakso, Ilkka; Hirata, Akimasa
2012-10-01
In international guidelines, basic restriction limits on the exposure of humans to low-frequency magnetic and electric fields are set with the objective of preventing the generation of phosphenes, visual sensations of flashing light not caused by light. Measured data on magnetophosphenes, i.e. phosphenes caused by a magnetically induced electric field on the retina, are available from volunteer studies. However, there is no simple way for determining the retinal threshold electric field or current density from the measured threshold magnetic flux density. In this study, the experimental field configuration of a previous study, in which phosphenes were generated in volunteers by exposing their heads to a magnetic field between the poles of an electromagnet, is computationally reproduced. The finite-element method is used for determining the induced electric field and current in five different MRI-based anatomical models of the head. The direction of the induced current density on the retina is dominantly radial to the eyeball, and the maximum induced current density is observed at the superior and inferior sides of the retina, which agrees with literature data on the location of magnetophosphenes at the periphery of the visual field. On the basis of computed data, the macroscopic retinal threshold current density for phosphenes at 20 Hz can be estimated as 10 mA m-2 (-20% to + 30%, depending on the anatomical model); this current density corresponds to an induced eddy current of 14 μA (-20% to + 10%), and about 20% of this eddy current flows through each eye. The ICNIRP basic restriction limit for the induced electric field in the case of occupational exposure is not exceeded until the magnetic flux density is about two to three times the measured threshold for magnetophosphenes, so the basic restriction limit does not seem to be conservative. 
However, the reasons for the non-conservativeness are purely technical: removal of the highest 1% of electric field values by taking the 99th percentile as recommended by the ICNIRP leads to the underestimation of the induced electric field, and there are difficulties in applying the basic restriction limit for the retinal electric field.
Pressure and cold pain threshold reference values in a large, young adult, pain-free population.
Waller, Robert; Smith, Anne Julia; O'Sullivan, Peter Bruce; Slater, Helen; Sterling, Michele; McVeigh, Joanne Alexandra; Straker, Leon Melville
2016-10-01
Currently there is a lack of large population studies that have investigated pain sensitivity distributions in healthy pain free people. The aims of this study were: (1) to provide sex-specific reference values of pressure and cold pain thresholds in young pain-free adults; (2) to examine the association of potential correlates of pain sensitivity with pain threshold values. This study investigated sex specific pressure and cold pain threshold estimates for young pain free adults aged 21-24 years. A cross-sectional design was utilised using participants (n=617) from the Western Australian Pregnancy Cohort (Raine) Study at the 22-year follow-up. The association of site, sex, height, weight, smoking, health related quality of life, psychological measures and activity with pain threshold values was examined. Pressure pain threshold (lumbar spine, tibialis anterior, neck and dorsal wrist) and cold pain threshold (dorsal wrist) were assessed using standardised quantitative sensory testing protocols. Reference values for pressure pain threshold (four body sites) stratified by sex and site, and cold pain threshold (dorsal wrist) stratified by sex are provided. Statistically significant, independent correlates of increased pressure pain sensitivity measures were site (neck, dorsal wrist), sex (female), higher waist-hip ratio and poorer mental health. Statistically significant, independent correlates of increased cold pain sensitivity measures were, sex (female), poorer mental health and smoking. These data provide the most comprehensive and robust sex specific reference values for pressure pain threshold specific to four body sites and cold pain threshold at the dorsal wrist for young adults aged 21-24 years. Establishing normative values in this young age group is important given that the transition from adolescence to adulthood is a critical temporal period during which trajectories for persistent pain can be established. 
These data will provide an important research resource to enable more accurate profiling and interpretation of pain sensitivity in clinical pain disorders in young adults. The robust and comprehensive data can assist interpretation of future clinical pain studies and provide further insight into the complex associations of pain sensitivity that can be used in future research. Crown Copyright © 2016. Published by Elsevier B.V. All rights reserved.
Ultrahigh Error Threshold for Surface Codes with Biased Noise
NASA Astrophysics Data System (ADS)
Tuckett, David K.; Bartlett, Stephen D.; Flammia, Steven T.
2018-02-01
We show that a simple modification of the surface code can exhibit an enormous gain in the error correction threshold for a noise model in which Pauli Z errors occur more frequently than X or Y errors. Such biased noise, where dephasing dominates, is ubiquitous in many quantum architectures. In the limit of pure dephasing noise we find a threshold of 43.7(1)% using a tensor network decoder proposed by Bravyi, Suchara, and Vargo. The threshold remains surprisingly large in the regime of realistic noise bias ratios, for example 28.2(2)% at a bias of 10. The performance is, in fact, at or near the hashing bound for all values of the bias. The modified surface code still uses only weight-4 stabilizers on a square lattice, but merely requires measuring products of Y instead of Z around the faces, as this doubles the number of useful syndrome bits associated with the dominant Z errors. Our results demonstrate that large efficiency gains can be found by appropriately tailoring codes and decoders to realistic noise models, even under the locality constraints of topological codes.
Hao, Yinglu; Li, Yanping; Liao, Derong; Yang, Ling; Liu, Fangyan
2017-03-01
Data comparing active with passive atrial lead fixation in Chinese patients with cardiovascular implantable electronic devices (CIEDs) for atrial pacing are limited. Our study evaluated the effectiveness of active versus passive fixation of atrial leads by observing the lead performance parameters. This retrospective, long-term, single-center study included a cohort of Chinese patients who underwent CIED implantation at the Department of Cardiology of the People's Hospital of Yuxi City, China, from 1 March 2010 to 1 March 2015. Efficacy was determined by comparing implantation time, threshold values, incidence of lead dislocation/failure, and lead-related complications between the two groups. Of the 1217 patients, active and passive atrial lead fixation were performed in 530 (mean age, 69.37 ± 11.44 years) and 497 (mean age, 68.33 ± 10.96 years) patients, respectively. The active fixation group had significantly lower mean atrial implantation times (P = .0001) and threshold values (P = .044) than the passive fixation group. In addition, threshold values in the active atrial lead fixation group were stable throughout the observation period. No instances of myocardial perforation, cardiac tamponade, implantation failure, or electrode dislocation/re-fixation were reported in the active atrial lead fixation group. A favorable decrease in patient comfort parameters such as bed rest time (P = .027) and duration of hospital stay (P = .038) was also observed in the active lead fixation group. Active atrial lead fixation demonstrated greater stability, steady long-term thresholds, and minimal lead-related complications compared with passive lead fixation in Chinese patients with CIEDs.
Midline Shift Threshold Value for Hemiparesis in Chronic Subdural Hematoma.
Juković, Mirela F; Stojanović, Dejan B
2015-01-01
Chronic subdural hematoma (CSDH) has a variety of clinical presentations, with numerous neurological symptoms and signs. Hemiparesis is one of the leading signs that potentially indicates CSDH. The purpose of this study was to determine the threshold (cut-off) value of midsagittal line (MSL) shift after which hemiparesis is likely to appear. The study evaluated 83 patients with 53 unilateral and 30 bilateral CSDHs over a period of three years. The computed tomography (CT) findings evaluated in patients with CSDH were the diameter of the hematoma and the midsagittal line shift, measured on non-contrast CT scans, in relation to the occurrence of hemiparesis. Threshold values of MSL shift for both types of CSDH were obtained at maximal (equal) sensitivity and specificity (the intersection of the curves). MSL shift is a good predictor of hemiparesis occurrence (total sample, AUROC 0.75, p=0.0001). Unilateral and bilateral CSDHs had different threshold values of MSL shift for hemiparesis development. The results suggested that in unilateral CSDH the threshold value of MSL shift could be 10 mm (AUROC=0.65; p=0.07), while for bilateral CSDH the threshold was 4.5 mm (AUROC=0.77; p=0.01). Our study points to the phenomenon that midsagittal line shift can predict hemiparesis occurrence. Hemiparesis in patients with bilateral CSDH was more related to midsagittal line shift than in unilateral CSDH. When the midsagittal line shift exceeds the threshold level, hemiparesis occurs with a certain probability.
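The "equal sensitivity and specificity" cut-off used in the study can be sketched as a scan over candidate thresholds, picking the one where the two measures are closest to equal. The data and names below are invented for illustration; the original analysis used ROC curves.

```python
def equal_sens_spec_cutoff(shifts, outcome):
    """Sketch: scan candidate cut-offs of midline shift (mm) and pick the
    one where sensitivity and specificity are closest to equal.
    outcome[i] is True if hemiparesis occurred. Names are hypothetical."""
    best = (float("inf"), None)
    for cut in sorted(set(shifts)):
        # Predict hemiparesis when the shift is at or above the cut-off.
        tp = sum(s >= cut and o for s, o in zip(shifts, outcome))
        fn = sum(s < cut and o for s, o in zip(shifts, outcome))
        tn = sum(s < cut and not o for s, o in zip(shifts, outcome))
        fp = sum(s >= cut and not o for s, o in zip(shifts, outcome))
        sens = tp / (tp + fn) if tp + fn else 0.0
        spec = tn / (tn + fp) if tn + fp else 0.0
        diff = abs(sens - spec)
        if diff < best[0]:
            best = (diff, cut)
    return best[1]
```

With well-separated toy data the scan lands on the cut-off that cleanly splits the two groups.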
Master, Hiral; Thoma, Louise M; Christiansen, Meredith B; Polakowski, Emily; Schmitt, Laura A; White, Daniel K
2018-07-01
Physical function difficulties, such as difficulty rising from a chair, may limit daily walking for people with knee osteoarthritis (OA). The purpose of this study was to identify minimum performance thresholds on clinical tests of physical function predictive of walking ≥6,000 steps/day. This benchmark is known to discriminate people with knee OA who develop functional limitation over time from those who do not. Using data from the Osteoarthritis Initiative, we quantified daily walking as average steps/day from an accelerometer (Actigraph GT1M) worn for ≥10 hours/day over 1 week. Physical function was quantified using 3 performance-based clinical tests: the 5 times sit-to-stand test, walking speed (tested over 20 meters), and the 400-meter walk test. To identify minimum performance thresholds for daily walking, we calculated physical function values corresponding to high specificity (80-95%) for predicting walking ≥6,000 steps/day. Among 1,925 participants (mean ± SD age 65.1 ± 9.1 years, mean ± SD body mass index 28.4 ± 4.8 kg/m2, and 55% female) with valid accelerometer data, 54.9% walked ≥6,000 steps/day. High-specificity thresholds of physical function for walking ≥6,000 steps/day ranged from 11.4-14.0 seconds on the 5 times sit-to-stand test, 1.13-1.26 meters/second for walking speed, and 315-349 seconds on the 400-meter walk test. Not meeting these minimum performance thresholds on clinical tests of physical function may indicate inadequate physical ability to walk ≥6,000 steps/day for people with knee OA. Rehabilitation may be indicated to address underlying impairments limiting physical function. © 2017, American College of Rheumatology.
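A rough sketch of deriving a high-specificity threshold for a timed test (where lower times are better) is shown below. The quantile-based shortcut and all names are assumptions for illustration, not the authors' procedure.

```python
import numpy as np

def high_specificity_threshold(test_times, meets_benchmark, specificity=0.9):
    """Sketch: classify someone as likely to walk >=6,000 steps/day when a
    timed test (e.g., five-times sit-to-stand; lower is better) is at or
    below a threshold. The threshold is chosen so that the desired fraction
    of people NOT meeting the benchmark fall above it, i.e. the target
    specificity is met. Names are hypothetical."""
    times = np.asarray(test_times, dtype=float)
    neg = times[~np.asarray(meets_benchmark, dtype=bool)]
    # specificity = P(time > t | does not meet benchmark) >= target,
    # so t is the (1 - specificity) quantile of the negative group's times.
    return float(np.quantile(neg, 1.0 - specificity))
```

A high-specificity threshold deliberately trades sensitivity for confidence: times above the cut strongly suggest the person does not reach 6,000 steps/day.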
Gatti, Daniel M.; Morgan, Daniel L.; Kissling, Grace E.; Shockley, Keith R.; Knudsen, Gabriel A.; Shepard, Kim G.; Price, Herman C.; King, Deborah; Witt, Kristine L.; Pedersen, Lars C.; Munger, Steven C.; Svenson, Karen L.; Churchill, Gary A.
2014-01-01
Background: Inhalation of benzene at levels below the current exposure limit values leads to hematotoxicity in occupationally exposed workers. Objective: We sought to evaluate Diversity Outbred (DO) mice as a tool for exposure threshold assessment and to identify genetic factors that influence benzene-induced genotoxicity. Methods: We exposed male DO mice to benzene (0, 1, 10, or 100 ppm; 75 mice/exposure group) via inhalation for 28 days (6 hr/day for 5 days/week). The study was repeated using two independent cohorts of 300 animals each. We measured micronuclei frequency in reticulocytes from peripheral blood and bone marrow and applied benchmark concentration modeling to estimate exposure thresholds. We genotyped the mice and performed linkage analysis. Results: We observed a dose-dependent increase in benzene-induced chromosomal damage and estimated a benchmark concentration limit of 0.205 ppm benzene using DO mice. This estimate is an order of magnitude below the value estimated using B6C3F1 mice. We identified a locus on Chr 10 (31.87 Mb) that contained a pair of overexpressed sulfotransferases that were inversely correlated with genotoxicity. Conclusions: The genetically diverse DO mice provided a reproducible response to benzene exposure. The DO mice display interindividual variation in toxicity response and, as such, may more accurately reflect the range of response that is observed in human populations. Studies using DO mice can localize genetic associations with high precision. The identification of sulfotransferases as candidate genes suggests that DO mice may provide additional insight into benzene-induced genotoxicity. Citation: French JE, Gatti DM, Morgan DL, Kissling GE, Shockley KR, Knudsen GA, Shepard KG, Price HC, King D, Witt KL, Pedersen LC, Munger SC, Svenson KL, Churchill GA. 2015. Diversity Outbred mice identify population-based exposure thresholds and genetic factors that influence benzene-induced genotoxicity.
Environ Health Perspect 123:237–245; http://dx.doi.org/10.1289/ehp.1408202 PMID:25376053
Fe induced optical limiting properties of Zn1-xFexS nanospheres
NASA Astrophysics Data System (ADS)
Vineeshkumar, T. V.; Raj, D. Rithesh; Prasanth, S.; Unnikrishnan, N. V.; Mahadevan Pillai, V. P.; Sudarasanakumar, C.
2018-02-01
Zn1-xFexS (x = 0.00, 0.01, 0.03, 0.05) nanospheres were synthesized by a polyethylene glycol assisted hydrothermal method. XRD studies revealed that samples of all concentrations exhibited cubic structure with crystallite grain sizes of 7-9 nm. TEM and SEM show the formation of nanospheres by dense aggregation of smaller particles. Increasing the Fe fraction tunes the band gap from 3.4 to 3.2 eV and also quenches the green luminescence. FTIR spectra reveal the presence of the capping agent, while the intensity variation and shifting of the LO and TO phonon modes confirm the presence of Fe ions. Nonlinear optical properties were measured using open- and closed-aperture z-scan techniques with a frequency-doubled 532 nm pump source, which indicated a reverse saturable absorption (RSA) process. The nonlinear optical coefficients were obtained from two-photon absorption (2PA). The composition-dependent nonlinear absorption coefficient β, nonlinear refractive index, third-order susceptibility and optical limiting threshold were estimated. The samples show good nonlinear absorption and enhanced optical limiting behavior with increasing Fe volume fraction. The contribution of RSA to the optical nonlinearity of Zn1-xFexS nanospheres was also investigated using three different input energies. With its comparatively small limiting threshold value, Zn1-xFexS is a promising candidate for optical power limiting applications.
[The analysis of threshold effect using Empower Stats software].
Lin, Lin; Chen, Chang-zhong; Yu, Xiao-dan
2013-11-01
In many biomedical studies of the influence of a factor on an outcome variable, the factor has no influence, or a positive effect, within a certain range; beyond a certain threshold value, the size and/or direction of the effect changes, which is called a threshold effect. Whether a threshold effect exists in the relationship between a factor (x) and the outcome variable (y) can be assessed by fitting a smooth curve and observing whether a piecewise linear relationship is present. A segmented regression model, a likelihood ratio test (LRT) and bootstrap resampling are then used to analyze the threshold effect. The Empower Stats software developed by the American company X & Y Solutions Inc. includes a threshold effect analysis module. The user may either input a threshold value to fit the segmented model at that given threshold, or let the software automatically determine the optimal threshold from the data and calculate a confidence interval for it.
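The segmented-regression step described in the abstract can be sketched in a few lines. This is a minimal illustration using ordinary least squares with a hinge term at a known breakpoint, not the Empower Stats implementation; the data and threshold are hypothetical.

```python
import numpy as np

def segmented_fit(x, y, threshold):
    """Fit a two-segment linear model y = b0 + b1*x + b2*(x - threshold)_+.

    b2 is the change in slope past the threshold; a b2 clearly different
    from zero suggests a threshold effect. Returns (b0, b1, b2).
    """
    hinge = np.maximum(x - threshold, 0.0)            # (x - threshold)_+
    X = np.column_stack([np.ones_like(x), x, hinge])  # design matrix
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# Synthetic data: slope 1 below x = 5, slope 3 above (so b2 = 2).
x = np.linspace(0, 10, 101)
y = x + 2.0 * np.maximum(x - 5.0, 0.0)
b0, b1, b2 = segmented_fit(x, y, threshold=5.0)
print(round(b1, 3), round(b2, 3))  # slope below threshold, change in slope above
```

In practice the optimal threshold is found by scanning candidate breakpoints and keeping the one with the smallest residual sum of squares, with an LRT against the single-line model and bootstrap resampling for the confidence interval, as the abstract describes.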
Threshold concepts: implications for the management of natural resources
Guntenspergen, Glenn R.; Gross, John
2014-01-01
Threshold concepts can have broad relevance in natural resource management. However, the concept of ecological thresholds has not been widely incorporated into management goals. This largely stems from the uncertainty surrounding threshold levels and the post hoc analyses that have generally been used to identify them. Natural resource managers need new tools and approaches to help them assess the existence and detection of conditions that demand management action. Additional threshold concepts include utility thresholds (which are based on human values about ecological systems) and decision thresholds (which reflect management objectives and values and include ecological knowledge about a system), alongside ecological thresholds. Together, these concepts provide a framework for the use of thresholds in natural resource decision making.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brisson, Michael J.; Ashley, Kevin
2005-08-16
Beryllium in various forms is widely used throughout the world in ceramics, aerospace and military applications, electronics, and sports equipment. Workplace exposure to beryllium is a growing industrial hygiene concern due to the potential for development of chronic beryllium disease (CBD), a lung condition with no known cure, in a small percentage of those exposed. There are workplace exposure limits for beryllium that have been in place for several decades. However, recent studies suggest that the current American Conference of Governmental Industrial Hygienists (ACGIH) Threshold Limit Value (TLV) and the Occupational Safety and Health Administration (OSHA) Permissible Exposure Limit (PEL) may not be sufficiently protective for workers who are potentially exposed to airborne beryllium. Early in 2005, ACGIH issued a Notice of Intended Change (NIC) to the current TLV for beryllium which entails a 100-fold reduction (from 2 to 0.02 micrograms per cubic meter of sampled air). It is noted that ACGIH TLVs do not carry legal force in the manner that OSHA PELs or other federal regulations do. Nevertheless, OSHA plans a beryllium rulemaking in the near future, and a reduction in the PEL is anticipated. Also, if this change in the TLV for beryllium is adopted, it is reasonable to assume that at least some sampling and analysis activities will need to be modified to address airborne beryllium at the lower levels. There are implications to both the industrial hygiene and the laboratory communities, which are discussed.
NASA Astrophysics Data System (ADS)
Kleinmann, Johanna; Wueller, Dietmar
2007-01-01
Since the signal to noise measuring method as standardized in the normative part of ISO 15739:2002(E) does not quantify noise in a way that matches the perception of the human eye, two alternative methods have been investigated which may be appropriate to quantify the noise perception in a physiological manner: - the model of visual noise measurement proposed by Hung et al. (as described in the informative annex of ISO 15739:2002) which tries to simulate the process of human vision by using the opponent space and contrast sensitivity functions and uses the CIE L*u*v* 1976 colour space for the determination of a so-called visual noise value. - The S-CIELab model and CIEDE2000 colour difference proposed by Fairchild et al., which simulates human vision approximately the same way as Hung et al. but uses an image comparison afterwards based on CIEDE2000. With a psychophysical experiment based on just noticeable difference (JND), threshold images could be defined, with which the two approaches mentioned above were tested. The assumption is that if the method is valid, the different threshold images should get the same 'noise value'. The visual noise measurement model results in similar visual noise values for all the threshold images. The method is reliable to quantify at least the JND for noise in uniform areas of digital images. While the visual noise measurement model can only evaluate uniform colour patches in images, the S-CIELab model can be used on images with spatial content as well. The S-CIELab model also results in similar colour difference values for the set of threshold images, but with some limitations: for images which contain spatial structures besides the noise, the colour difference varies depending on the contrast of the spatial content.
Vogel, M N; Schmücker, S; Maksimovic, O; Hartmann, J; Claussen, C D; Horger, M
2012-07-01
This study compares tumour response assessment by automated CT volumetry and standard manual measurements regarding the impact on treatment decisions and patient outcome. 58 consecutive patients with 203 pulmonary metastases undergoing baseline and follow-up multirow detector CT (MDCT) under chemotherapy were assessed for response to chemotherapy. Tumour burden of pulmonary target lesions was quantified in three ways: (1) following response evaluation criteria in solid tumours (RECIST); (2) following the volume equivalents of RECIST (i.e. with a threshold of -65/+73%); and (3) using calculated limits for stable disease (SD). For volumetry, calculated limits had been set at ±38% prior to the study by repeated quantification of nodules scanned twice. Results were compared using non-weighted κ-values and were evaluated for their impact on treatment decisions and patient outcome. In 15 (17%) of the 58 patients, the results of response assessment were inconsistent with 1 of the 3 methods, which would have had an impact on treatment decisions in 8 (13%). Patient outcome regarding therapy response could be verified in 5 (33%) of the 15 patients with inconsistent measurement results and was consistent with both RECIST and volumetry in 1, with calculated limits in 3 and with none in 1. Diagnosis as to the overall response was consistent with RECIST in six patients, with volumetry in six and with calculated limits in eight cases. There is an impact of different methods for therapy response assessment on treatment decisions. A reduction of threshold for SD to ±30-40% of volume change seems reasonable when using volumetry.
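The "volume equivalents of RECIST" quoted in the abstract (-65/+73%) follow directly from the RECIST diameter criteria (+20% diameter for progression, -30% for partial response) under the assumption of roughly spherical lesions, since volume then scales with the cube of diameter:

```python
# RECIST diameter criteria: progressive disease at +20% diameter,
# partial response at -30% diameter. For a sphere, V ~ d**3, so the
# equivalent volume-change limits are:
pd_volume = (1.20 ** 3 - 1.0) * 100   # cube of +20% diameter change
pr_volume = (0.70 ** 3 - 1.0) * 100   # cube of -30% diameter change
print(round(pd_volume, 1), round(pr_volume, 1))  # +72.8% and -65.7%
```

Rounding these gives the +73%/-65% thresholds used in the study, while the ±38% limits for volumetry were derived empirically from repeat scans.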
Uribe, Juan S; Isaacs, Robert E; Youssef, Jim A; Khajavi, Kaveh; Balzer, Jeffrey R; Kanter, Adam S; Küelling, Fabrice A; Peterson, Mark D
2015-04-01
This multicenter study aims to evaluate the utility of triggered electromyography (t-EMG) recorded throughout psoas retraction during lateral transpsoas interbody fusion to predict postoperative changes in motor function. Three hundred and twenty-three patients undergoing L4-5 minimally invasive lateral interbody fusion from 21 sites were enrolled. Intraoperative data collection included initial t-EMG thresholds in response to posterior retractor blade stimulation and subsequent t-EMG threshold values collected every 5 min throughout retraction. Additional data collection included dimensions/duration of retraction as well as pre- and postoperative lower extremity neurologic exams. Prior to expanding the retractor, the lowest t-EMG threshold was identified posterior to the retractor in 94% of cases. Postoperatively, 13 (4.5%) patients had a new motor weakness that was consistent with symptomatic neuropraxia (SN) of lumbar plexus nerves on the approach side. There were no significant differences between patients with or without a corresponding postoperative SN with respect to initial posterior blade reading (p = 0.600) or retraction dimensions (p > 0.05). Retraction time was significantly longer in those patients with SN vs. those without (p = 0.031). Stepwise logistic regression showed a significant positive relationship between the presence of new postoperative SN and total retraction time (p < 0.001), as well as change in t-EMG thresholds over time (p < 0.001), although false positive rates (increased threshold in patients with no new SN) remained high regardless of the absolute increase in threshold used to define an alarm criterion. Prolonged retraction time and coincident increases in t-EMG thresholds are predictors of declining nerve integrity. Increasing t-EMG thresholds, while predictive of injury, were also observed in a large number of patients without iatrogenic injury, with a greater predictive value in cases with extended duration.
In addition to a careful approach with minimal muscle retraction and consistent lumbar plexus directional retraction, the incidence of postoperative motor neuropraxia may be reduced by limiting retraction time and utilizing t-EMG throughout retraction, while understanding that the specificity of this monitoring technique is low during initial retraction and increases with longer retraction duration.
NASA Astrophysics Data System (ADS)
Ren, Zhong; Liu, Guodong; Xiong, Zhihua
2016-10-01
Denoising of the photoacoustic signals of glucose is one of the most important steps in fruit quality identification, because the real-time photoacoustic signals of glucose are easily contaminated by various noises. To remove the noise and useless information, an improved wavelet threshold function is proposed. Compared with the traditional wavelet hard and soft threshold functions, the improved wavelet threshold function can overcome the pseudo-oscillation effect in the denoised photoacoustic signals owing to its continuity, and the error between the denoised and original signals is decreased. To validate the feasibility of denoising with the improved wavelet threshold function, denoising simulation experiments based on MATLAB programming were performed. In the simulation experiments, a standard test signal was used, and three other denoising methods were compared with the improved wavelet threshold function. The signal-to-noise ratio (SNR) and root-mean-square error (RMSE) were used to evaluate denoising performance. The experimental results demonstrate that the SNR of the improved wavelet threshold function is the largest and its RMSE the smallest, which verifies that denoising with the improved wavelet threshold function is feasible. Finally, the improved wavelet threshold function was used to remove the noise from the photoacoustic signals of glucose solutions, with very good denoising results. Therefore, the improved wavelet threshold function denoising proposed in this paper has potential value in the field of photoacoustic signal denoising.
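The abstract does not give the exact form of the improved threshold function, but the problem it addresses can be illustrated with the standard hard and soft threshold functions and one common continuous compromise, the non-negative garrote; all functions and values below are illustrative, not the paper's method:

```python
import numpy as np

def hard_threshold(w, t):
    """Hard thresholding: keeps large coefficients unchanged but is
    discontinuous at |w| = t, which causes pseudo-oscillation artifacts."""
    return np.where(np.abs(w) > t, w, 0.0)

def soft_threshold(w, t):
    """Soft thresholding: continuous, but shrinks every surviving
    coefficient by t, biasing large coefficients."""
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

def garrote_threshold(w, t):
    """Non-negative garrote: continuous like soft thresholding, but the
    shrinkage t**2/w decays as |w| grows, reducing the bias."""
    safe_w = np.where(w == 0, 1.0, w)  # avoid division by zero
    return np.where(np.abs(w) > t, w - t**2 / safe_w, 0.0)

w = np.array([-3.0, -0.5, 0.5, 3.0])  # toy wavelet coefficients
t = 1.0                               # threshold
print(hard_threshold(w, t))    # large coefficients kept unchanged
print(soft_threshold(w, t))    # surviving coefficients shrunk by t
print(garrote_threshold(w, t)) # shrinkage vanishes for large |w|
```

An "improved" threshold function in this literature is typically another continuous interpolation between the hard and soft behaviours, applied to the detail coefficients of a wavelet decomposition before reconstruction.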
Tectonic uplift, threshold hillslopes, and denudation rates in a developing mountain range
Binnie, S.A.; Phillips, W.M.; Summerfield, M.A.; Fifield, L.K.
2007-01-01
Studies across a broad range of drainage basins have established a positive correlation between mean slope gradient and denudation rates. It has been suggested, however, that this relationship breaks down for catchments where slopes are at their threshold angle of stability because, in such cases, denudation is controlled by the rate of tectonic uplift through the rate of channel incision and frequency of slope failure. This mechanism is evaluated for the San Bernardino Mountains, California, a nascent range that incorporates both threshold hillslopes and remnants of pre-uplift topography. Concentrations of in situ-produced cosmogenic 10Be in alluvial sediments are used to quantify catchment-wide denudation rates and show a broadly linear relationship with mean slope gradient up to ∼30°; above this value denudation rates vary substantially for similar mean slope gradients. We propose that this decoupling in the slope gradient-denudation rate relationship marks the emergence of threshold topography and coincides with the transition from transport-limited to detachment-limited denudation. The survival in the San Bernardino Mountains of surfaces formed prior to uplift provides information on the topographic evolution of the range, in particular the transition from slope-gradient-dependent rates of denudation to a regime where denudation rates are controlled by rates of tectonic uplift. This type of transition may represent a general model for the denudational response to orogenic uplift and topographic evolution during the early stages of mountain building. © 2007 The Geological Society of America.
A Procedural Guide on Sick Building Syndrome.
1987-03-01
membrane irritation. Sodium dodecyl sulfate, an ingredient in many carpet shampoos, has also been associated with mucous membrane irritation in certain offices...American Industrial Hygiene Conference, 20 May 1986 13. Kreiss, K, and M.G. Gonzales, et al, "Respiratory Irritation Due to Carpet Shampoo," Chronic...Carpet Shampoo," Proc. Fifth Inter. Symp. Medichem, San Francisco, p. 347 (1977) 12 15. Threshold Limit Values and Biological Exposure Indices for 1985
Vuralli, Doga; Evren Boran, H; Cengiz, Bulent; Coskun, Ozlem; Bolay, Hayrunnisa
2016-10-01
Migraine headache attacks have been shown to be accompanied by significant prolongation of somatosensory temporal discrimination threshold values, supporting signs of disrupted sensorial processing in migraine. Chronic migraine is one of the most debilitating and challenging headache disorders with no available biomarker. We aimed to test the diagnostic value of somatosensory temporal discrimination for chronic migraine in this prospective, controlled study. Fifteen chronic migraine patients and 15 healthy controls completed the study. Chronic migraine patients were evaluated twice, during a headache and headache-free period. Somatosensory temporal discrimination threshold values were evaluated in both hands. Duration of migraine and chronic migraine, headache intensity, clinical features accompanying headache such as nausea, photophobia, phonophobia and osmophobia, and pressure pain thresholds were also recorded. In the chronic migraine group, somatosensory temporal discrimination threshold values on the headache day (138.8 ± 21.8 ms for the right hand and 141.2 ± 17.4 ms for the left hand) were significantly higher than somatosensory temporal discrimination threshold values on the headache free day (121.5 ± 13.8 ms for the right hand and 122.8 ± 12.6 ms for the left hand, P = .003 and P < .0001, respectively) and somatosensory temporal discrimination thresholds of healthy volunteers (35.4 ± 5.5 ms for the right hand and 36.4 ± 5.4 ms for the left hand, P < .0001 and P < .0001, respectively). Somatosensory temporal discrimination threshold values of chronic migraine patients on the headache free day were significantly prolonged compared to somatosensory temporal discrimination threshold values of the control group (121.5 ± 13.8 ms vs 35.4 ± 5.5 ms for the right hand, P < .0001 and 122.8 ± 12.6 ms vs 36.4 ± 5.4 ms for the left hand, P < .0001). 
Somatosensory temporal discrimination threshold values of the hand contralateral to the headache lateralization (153.3 ± 13.7 ms) were significantly higher (P < .0001) than those of the ipsilateral hand (118.2 ± 11.9 ms) in chronic migraine patients when headache was lateralized. The headache intensity of chronic migraine patients rated with a visual analog score was positively correlated with the contralateral somatosensory temporal discrimination threshold values. Somatosensory temporal discrimination thresholds remain elevated during headache-free intervals in patients with chronic migraine. By providing evidence for the first time of unremitting disruption of central sensory processing, the somatosensory temporal discrimination test stands out as a promising neurophysiological biomarker for chronic migraine. © 2016 American Headache Society.
Work climate and work load measurement in production room of Batik Merak Manis Laweyan
NASA Astrophysics Data System (ADS)
Suhardi, Bambang; Simanjutak, Sry Yohana; Laksono, Pringgo Widyo; Herjunowibowo, Dewanto
2017-11-01
The work environment comprises everything around workers that can affect them in carrying out their assigned duties. Within a work environment, both the workplace climate and the workload affect workers as they perform their jobs. The working climate is a physical factor that can cause health problems for workers under extreme hot or cold conditions that exceed the threshold limit value permitted by health standards. The working climate is closely related to the workload borne by workers in the performance of their duties. The influence of workload on the performance of human resources is fairly dominant and may negatively affect the safety and health of workers. This study aims to measure the effect of the working climate and the workload on worker productivity; several suggestions for increasing productivity are also recommended. The research was conducted in the production room of Batik Merak Manis Laweyan. The results showed that the workplace climate and the workload at eight stations in the Merak Manis production room do not comply with the established threshold limit values. It is therefore recommended to add more opening windows to increase the air velocity inside the building, so that humidity and temperature might be reduced.
Simoneau, Gabrielle; Levis, Brooke; Cuijpers, Pim; Ioannidis, John P A; Patten, Scott B; Shrier, Ian; Bombardier, Charles H; de Lima Osório, Flavia; Fann, Jesse R; Gjerdingen, Dwenda; Lamers, Femke; Lotrakul, Manote; Löwe, Bernd; Shaaban, Juwita; Stafford, Lesley; van Weert, Henk C P M; Whooley, Mary A; Wittkampf, Karin A; Yeung, Albert S; Thombs, Brett D; Benedetti, Andrea
2017-11-01
Individual patient data (IPD) meta-analyses are increasingly common in the literature. In the context of estimating the diagnostic accuracy of ordinal or semi-continuous scale tests, sensitivity and specificity are often reported for a given threshold or a small set of thresholds, and a meta-analysis is conducted via a bivariate approach to account for their correlation. When IPD are available, sensitivity and specificity can be pooled for every possible threshold. Our objective was to compare the bivariate approach, which can be applied separately at every threshold, to two multivariate methods: the ordinal multivariate random-effects model and the Poisson correlated gamma-frailty model. Our comparison was empirical, using IPD from 13 studies that evaluated the diagnostic accuracy of the 9-item Patient Health Questionnaire depression screening tool, and included simulations. The empirical comparison showed that the implementation of the two multivariate methods is more laborious in terms of computational time and sensitivity to user-supplied values compared to the bivariate approach. Simulations showed that ignoring the within-study correlation of sensitivity and specificity across thresholds did not worsen inferences with the bivariate approach compared to the Poisson model. The ordinal approach was not suitable for simulations because the model was highly sensitive to user-supplied starting values. We tentatively recommend the bivariate approach rather than more complex multivariate methods for IPD diagnostic accuracy meta-analyses of ordinal scale tests, although the limited type of diagnostic data considered in the simulation study restricts the generalization of our findings. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Lilge, L.; Olivo, M. C.; Schatz, S. W.; MaGuire, J. A.; Patterson, M. S.; Wilson, B. C.
1996-01-01
The applicability and limitations of a photodynamic threshold model, used to describe quantitatively the in vivo response of tissues to photodynamic therapy, are currently being investigated in a variety of normal and malignant tumour tissues. The model states that tissue necrosis occurs when the number of photons absorbed by the photosensitiser per unit tissue volume exceeds a threshold. New Zealand White rabbits were sensitised with porphyrin-based photosensitisers. Normal brain or intracranially implanted VX2 tumours were illuminated via an optical fibre placed into the tissue at craniotomy. The light fluence distribution in the tissue was measured by multiple interstitial optical fibre detectors. The tissue concentration of the photosensitiser was determined post mortem by absorption spectroscopy. The derived photodynamic threshold values for normal brain are significantly lower than for VX2 tumour for all photosensitisers examined. Neuronal damage is evident beyond the zone of frank necrosis. For Photofrin the threshold decreases with time delay between photosensitiser administration and light treatment. No significant difference in threshold is found between Photofrin and haematoporphyrin derivative. The threshold in normal brain (grey matter) is lowest for sensitisation by 5-aminolaevulinic acid. The results confirm the very high sensitivity of normal brain to porphyrin photodynamic therapy and show the importance of in situ light fluence monitoring during photodynamic irradiation. PMID:8562339
Sorensen, James P R; Baker, Andy; Cumberland, Susan A; Lapworth, Dan J; MacDonald, Alan M; Pedley, Steve; Taylor, Richard G; Ward, Jade S T
2018-05-01
We assess the use of fluorescent dissolved organic matter at excitation-emission wavelengths of 280 nm and 360 nm, termed tryptophan-like fluorescence (TLF), as an indicator of faecally contaminated drinking water. A significant logistic regression model was developed using TLF as a predictor of thermotolerant coliforms (TTCs) using data from groundwater- and surface water-derived drinking water sources in India, Malawi, South Africa and Zambia. A TLF threshold of 1.3 ppb dissolved tryptophan was selected to classify TTC contamination. Validation of the TLF threshold indicated a false-negative error rate of 15% and a false-positive error rate of 18%. The threshold was unsuccessful at classifying contaminated sources containing <10 TTC cfu per 100 mL, which we consider the current limit of detection. If only sources above this limit were classified, the false-negative error rate was very low at 4%. TLF intensity was very strongly correlated with TTC concentration (ρs = 0.80). A higher threshold of 6.9 ppb dissolved tryptophan is proposed to indicate heavily contaminated sources (≥100 TTC cfu per 100 mL). Current commercially available fluorimeters are easy-to-use, suitable for use online and in remote environments, require neither reagents nor consumables, and crucially provide an instantaneous reading. TLF measurements are not appreciably impaired by common interferents, such as pH, turbidity and temperature, within typical natural ranges. The technology is a viable option for the real-time screening of faecally contaminated drinking water globally. Copyright © 2017 Natural Environment Research Council (NERC), as represented by the British Geological Survey (BGS). Published by Elsevier B.V. All rights reserved.
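The threshold validation described in the abstract amounts to applying the 1.3 ppb cutoff to paired (TLF, contamination) observations and counting misclassifications. The sketch below uses made-up sample pairs, not the study's data; only the cutoff value comes from the abstract:

```python
# Hypothetical validation pairs: (TLF reading in ppb, TTC-contaminated?).
samples = [(0.2, False), (0.9, False), (1.6, True), (2.8, True),
           (1.1, True),  (1.5, False), (4.0, True), (0.4, False)]
THRESHOLD_PPB = 1.3  # cutoff reported in the study

# False negative: contaminated source classified as clean (TLF below cutoff).
fn = sum(1 for tlf, contaminated in samples if contaminated and tlf <= THRESHOLD_PPB)
# False positive: clean source classified as contaminated (TLF above cutoff).
fp = sum(1 for tlf, contaminated in samples if not contaminated and tlf > THRESHOLD_PPB)
positives = sum(1 for _, contaminated in samples if contaminated)
negatives = len(samples) - positives

print(f"false-negative rate: {fn / positives:.0%}")
print(f"false-positive rate: {fp / negatives:.0%}")
```

Raising the cutoff (e.g. to the 6.9 ppb level proposed for heavy contamination) trades false positives for false negatives in the usual way.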
Lazar, Aurel A; Pnevmatikakis, Eftychios A
2011-03-01
We investigate architectures for time encoding and time decoding of visual stimuli such as natural and synthetic video streams (movies, animation). The architecture for time encoding is akin to models of the early visual system. It consists of a bank of filters in cascade with single-input multi-output neural circuits. Neuron firing is based on either a threshold-and-fire or an integrate-and-fire spiking mechanism with feedback. We show that analog information is represented by the neural circuits as projections on a set of band-limited functions determined by the spike sequence. Under Nyquist-type and frame conditions, the encoded signal can be recovered from these projections with arbitrary precision. For the video time encoding machine architecture, we demonstrate that band-limited video streams of finite energy can be faithfully recovered from the spike trains and provide a stable algorithm for perfect recovery. The key condition for recovery calls for the number of neurons in the population to be above a threshold value.
Emissions of NO, NO2 and PM from inland shipping
NASA Astrophysics Data System (ADS)
Kurtenbach, Ralf; Vaupel, Kai; Kleffmann, Jörg; Klenk, Ulrich; Schmidt, Eberhard; Wiesen, Peter
2016-11-01
Particulate matter (PM) and nitrogen oxides NOx (NOx = NO2 + NO) are key species for urban air quality in Europe and are emitted by mobile sources. According to European recommendations, a significant fraction of road freight should be shifted to waterborne transport in the future. In order to better consider this emission change pattern in future emission inventories, in the present study inland water transport emissions of NOx, CO2 and PM were investigated under real world conditions on the river Rhine, Germany, in 2013. An average NO2 / NOx emission ratio of 0.08 ± 0.02 was obtained, which is indicative of ship diesel engines without exhaust gas aftertreatment systems. For all measured motor ship types and operation conditions, overall weighted average emission indices (EIs), as emitted mass of pollutant per kg burnt fuel, of EINOx = 54 ± 4 g kg⁻¹ and a lower limit EIPM1
Poupard, Laurent; Court-Fortune, Isabelle; Pichot, Vincent; Chouchou, Florian; Barthélémy, Jean-Claude; Roche, Frédéric
2011-12-01
Several studies have correlated the ratio of the very low frequency power spectral density of heart rate increment (%VLFI) with obstructive sleep apnoea syndrome (OSAS). However, patients with impaired heart rate variability may exhibit large variations of heart rate increment (HRI) spectral pattern and alter the screening accuracy of the method. To overcome this limitation, the present study uses the high-frequency increment (HFI) peak in the HRI spectrum, which corresponds to the respiratory influence on RR variations over the frequency range 0.2 to 0.4 Hz. We evaluated 288 consecutive patients referred for snoring, observed nocturnal breathing cessation and/or daytime sleepiness. Patients were classified as OSAS if their apnoea plus hypopnoea index (AHI) during polysomnography exceeded 15 events per hour. Synchronized electrocardiogram Holter monitoring allowed HRI analysis. Using a %VLFI threshold >2.4% for identifying the presence of OSAS, sensitivity for OSAS was 74.9%, specificity 51%, positive predictive value 54.9% and negative predictive value 71.7% (33 false negative subjects). Using threshold for %VLFI >2.4% and HFI peak position >0.4 Hz, negative predictive value increased to 78.2% while maintaining specificity at 50.6%. Among 11 subjects with %VLFI <2.4% and HFI peak >0.4 Hz, nine demonstrated moderate to severe OSAS (AHI >30). HFI represents a minimal physiological criterion for applying %VLFI by ensuring that heart rate variations are band frequency limited.
Study of blur discrimination for 3D stereo viewing
NASA Astrophysics Data System (ADS)
Subedar, Mahesh; Karam, Lina J.
2014-03-01
Blur is an important attribute in the study and modeling of the human visual system. Blur discrimination was studied extensively using 2D test patterns. In this study, we present the details of subjective tests performed to measure blur discrimination thresholds using stereoscopic 3D test patterns. Specifically, the effect of disparity on the blur discrimination thresholds is studied on a passive stereoscopic 3D display. The blur discrimination thresholds are measured using stereoscopic 3D test patterns with positive, negative and zero disparity values, at multiple reference blur levels. A disparity value of zero represents the 2D viewing case where both the eyes will observe the same image. The subjective test results indicate that the blur discrimination thresholds remain constant as we vary the disparity value. This further indicates that binocular disparity does not affect blur discrimination thresholds and the models developed for 2D blur discrimination thresholds can be extended to stereoscopic 3D blur discrimination thresholds. We have presented fitting of the Weber model to the 3D blur discrimination thresholds measured from the subjective experiments.
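The abstract reports fitting "the Weber model" to the measured thresholds. A common linear form in the blur-discrimination literature is ΔB = w·B_ref + ΔB₀, with Weber fraction w and base threshold ΔB₀; the model form and all numbers below are assumptions for illustration, not the paper's data:

```python
import numpy as np

# Hypothetical reference blur levels (arcmin) and discrimination thresholds,
# generated here to follow a Weber-type law dB = w*B_ref + dB0.
B_ref = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
dB = 0.15 * B_ref + 0.2

# Least-squares line fit: slope = Weber fraction, intercept = base threshold.
w, dB0 = np.polyfit(B_ref, dB, 1)
print(round(w, 3), round(dB0, 3))
```

The paper's finding that disparity does not change the thresholds means a single (w, ΔB₀) pair fitted on 2D data should also describe the stereoscopic 3D conditions.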
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cain, W.S.; Shoaf, C.R.; Velasquez, S.F.
1992-03-01
In response to numerous requests for information related to odor thresholds, this document was prepared by the Air Risk Information Support Center in its role of providing technical assistance to State and local government agencies on risk assessment of air pollutants. A discussion of basic concepts related to olfactory function and the measurement of odor thresholds is presented. A detailed discussion of the criteria used to evaluate the quality of published odor threshold values is provided. The use of odor threshold information in risk assessment is discussed. The results of a literature search and review of odor threshold information for the chemicals listed as hazardous air pollutants in the Clean Air Act amendments of 1990 are presented. The published odor threshold values are critically evaluated against the criteria discussed, and the values of acceptable quality are used to determine a geometric mean or best estimate.
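The "geometric mean or best estimate" aggregation of acceptable-quality thresholds can be sketched as follows (the input values are illustrative):

```python
import math

# A minimal sketch of the "geometric mean or best estimate" aggregation of
# acceptable-quality odor thresholds. The values below are illustrative.
def geometric_mean(values):
    return math.exp(sum(math.log(v) for v in values) / len(values))

best_estimate = geometric_mean([0.5, 2.0, 4.0])  # ppm, illustrative
```

The geometric mean suits odor thresholds because published values often span orders of magnitude, and it is less dominated by outliers than the arithmetic mean.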
Flexible circuits with integrated switches for robotic shape sensing
NASA Astrophysics Data System (ADS)
Harnett, C. K.
2016-05-01
Digital switches are commonly used for detecting surface contact and limb-position limits in robotics. The typical momentary-contact digital switch is a mechanical device made from metal springs, designed to connect with a rigid printed circuit board (PCB). However, flexible printed circuits are taking over from the rigid PCB in robotics because the circuits can bend while carrying signals and power through moving joints. This project is motivated by previous work in which an array of surface-mount momentary-contact switches on a flexible circuit acted as an all-digital shape sensor compatible with the power resources of energy harvesting systems. Without a rigid segment, the smallest commercially available surface-mount switches would detach from the flexible circuit after several bending cycles, sometimes violently. This report describes a low-cost, conductive-fiber-based method to integrate electromechanical switches into flexible circuits and other soft, bendable materials. Because the switches are digital (on/off), they differ from commercially available continuous-valued bend/flex sensors. No amplification or analog-to-digital conversion is needed to read the signal, but the tradeoff is that the digital switches only give a threshold curvature value. Boundary conditions on the edges of the flexible circuit are key to setting the threshold curvature value for switching. This presentation will discuss threshold-setting, size scaling of the design, automation for inserting a digital switch into the flexible circuit fabrication process, and methods for reconstructing a shape from an array of digital switch states.
NASA Technical Reports Server (NTRS)
Murri, Gretchen Bostaph; Martin, Roderick H.
1991-01-01
Static and fatigue double-cantilever beam (DCB) and end-notch flexure (ENF) tests were conducted to determine the effect of the simulated initial delamination on interlaminar fracture toughness, G(sub c), and fatigue fracture threshold, G(sub th). Unidirectional, 24-ply specimens of S2/SP250 glass/epoxy were tested using Kapton inserts of four different thicknesses (13, 25, 75, and 130 microns) at the midplane at one end, or with tension or shear precracks, to simulate an initial delamination. To determine G(sub th), the fatigue fracture threshold below which no delamination growth would occur in less than 1 x 10(exp 6) cycles, fatigue tests were conducted by cyclically loading specimens until delamination growth was detected. Consistent values of mode I fracture toughness, G(sub Ic), were measured from DCB specimens with inserts 75 microns thick or thinner, or with shear precracks. The fatigue DCB tests gave similar values of G(sub Ith) for the 13, 25, and 75 micron specimens. Results for the shear-precracked specimens were significantly lower than those for specimens without precracks. Results for both the static and fatigue ENF tests showed that the measured G(sub IIc) and G(sub IIth) values decreased with decreasing insert thickness, so that no limiting thickness could be determined. Results for specimens with inserts 75 microns or thicker were significantly higher than the results for precracked specimens or specimens with 13 or 25 micron inserts.
Haywood, P E; Teale, P; Moss, M S
1990-07-01
Thoroughbred geldings were fed racehorse cubes containing a predetermined concentration of theobromine in the form of cocoa husk. They were offered 7 kg of cubes per day, divided between morning and evening feed, and food consumption was monitored. Urinary concentrations of theobromine were determined following the consumption of cubes containing 11.5, 6.6, 2.0 and 1.2 mg per kg of theobromine, to verify whether or not such concentrations would produce positive urine tests. Pre-dose urine samples were collected to verify the absence of theobromine before each experiment. It became apparent from the results of the first three administrations that the limit of detection of theobromine, using such procedures, would be reached at a feed level of about 1 mg per kg theobromine. Therefore the final administration, using cubes containing 1.2 mg per kg theobromine, was singled out for additional analytical work and quantitative procedures were developed to measure urinary concentrations of theobromine. It was anticipated that the results would form a basis for discussions relating to the establishment of a threshold value for theobromine in horse urine. The Stewards of the Jockey Club subsequently gave notice that they had established a threshold level for theobromine in urine of 2 micrograms/ml.
Transport temperatures observed during the commercial transportation of animals.
Fiore, Gianluca; Hofherr, Johann; Natale, Fabrizio; Mainetti, Sergio; Ruotolo, Espedito
2012-01-01
Current temperature standards and those proposed by the European Food Safety Authority (EFSA) were compared with the actual practices of commercial transport in the European Union. Temperature and humidity data recorded over one year on 21 vehicles across 905 journeys were analysed. Differences in temperature recorded by sensors at four different positions in the vehicles exceeded 10°C between the highest and lowest readings in nearly 7% of cases. The number and position of temperature sensors are therefore important to ensure correct representation of the temperature conditions in different parts of a vehicle. For all journeys and all animal categories, a relatively high percentage of beyond-threshold temperatures was observed relative to the temperature limits of 30°C and 5°C. Most recorded temperature values lay within the accepted tolerance of ±5°C stipulated in European Community Regulation (EC) 1/2005. The temperature thresholds proposed by EFSA would result in a higher percentage of non-compliant conditions, more pronounced at the lower threshold, than the thresholds laid down in Regulation (EC) 1/2005. With respect to the different animal categories, non-compliant temperatures occurred more frequently in pigs and sheep, in particular with regard to the thresholds proposed by EFSA.
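The compliance check described above can be sketched as a simple range test; applying the ±5°C tolerance by widening the 5-30°C limits is an assumption made here, and the readings are illustrative:

```python
# A minimal sketch: flagging readings against the Regulation (EC) 1/2005
# limits of 5-30 degrees C, optionally widened by the +/-5 degree tolerance
# noted above. Applying the tolerance to the limits is an assumption here.
LOW_C, HIGH_C, TOL_C = 5.0, 30.0, 5.0

def compliant(temp_c, with_tolerance=True):
    lo = LOW_C - TOL_C if with_tolerance else LOW_C
    hi = HIGH_C + TOL_C if with_tolerance else HIGH_C
    return lo <= temp_c <= hi

readings = [4.0, 12.5, 31.0, 36.2, -1.5]  # degrees C, illustrative
share_noncompliant = sum(not compliant(t) for t in readings) / len(readings)
```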
Large exchange-dominated domain wall velocities in antiferromagnetically coupled nanowires
NASA Astrophysics Data System (ADS)
Kuteifan, Majd; Lubarda, M. V.; Fu, S.; Chang, R.; Escobar, M. A.; Mangin, S.; Fullerton, E. E.; Lomakin, V.
2016-04-01
Magnetic nanowires supporting field- and current-driven domain wall motion are envisioned for methods of information storage and processing. A major obstacle for their practical use is the domain-wall velocity, which is traditionally limited for low fields and currents due to the Walker breakdown occurring when the driving component reaches a critical threshold value. We show through numerical and analytical modeling that the Walker breakdown limit can be extended or completely eliminated in antiferromagnetically coupled magnetic nanowires. These coupled nanowires allow for large domain-wall velocities driven by field and/or current as compared to conventional nanowires.
NASA Astrophysics Data System (ADS)
Lv, Hongshui; Sun, Haiyan; Wang, Shoujuan; Kong, Fangong
2018-05-01
A novel dicyanoisophorone based fluorescent probe HP was developed to detect hydrazine. Upon the addition of hydrazine, probe HP displayed turn-on fluorescence in the red region with a large Stokes shift (180 nm). This probe exhibited high selectivity and high sensitivity to hydrazine in solution. The detection limit of HP was found to be 3.26 ppb, which was lower than the threshold limit value set by USEPA (10 ppb). Moreover, the probe was successfully applied to detect hydrazine in different water samples and living cells.
Dagnino, Alessandro; Bo, Tiziano; Copetta, Andrea; Fenoglio, Stefano; Oliveri, Caterina; Bencivenga, Mauro; Felli, Angelo; Viarengo, Aldo
2013-10-01
With the aim of supporting decision makers in managing contamination of freshwater environments, an innovative expert decision support system (EDSS) was developed. The EDSS was applied in a sediment quality assessment along the Bormida river (NW Italy), which has been heavily contaminated by an upstream industrial site for more than a century. Sampling sites were classified by comparing chemical concentrations with effect-based target values (threshold and probable effect concentrations). The level of each contaminant and the combined toxic pressure were used to rank sites into three categories: (i) uncontaminated (8 sites), (ii) mildly contaminated (4) and (iii) heavily contaminated (19). In heavily contaminated sediments, an environmental risk index (EnvRI) was determined by integrating chemical data with ecotoxicological and ecological parameters (triad approach). In addition, a sediment risk index (SedRI) was computed by combining chemical and ecotoxicological data. Eight sites exhibited EnvRI values ≥0.25, the safety threshold level (range of EnvRI values: 0.14-0.31), whereas SedRI exceeded the safety threshold level at 6 sites (range of SedRI values: 0.16-0.36). At sites classified as mildly contaminated, sublethal biomarkers were integrated with chemical data into a biological vulnerability index (BVI), which exceeded the safety threshold level at one site (BVI value: 0.28). Finally, potential human risk was assessed in selected stations (11 sites) by integrating genotoxicity biomarkers (GTI index falling in the range 0.00-0.53).
General conclusions drawn from the EDSS data include: (i) in sites classified as heavily contaminated, only a few exhibited some significant, yet limited, effects on biodiversity; (ii) restrictions in re-using sediments from heavily contaminated sites found little support in ecotoxicological data; (iii) in the majority of the sites classified as mildly contaminated, tested organisms exhibited low response levels; (iv) preliminary results on genotoxicity biomarkers indicate possible negative consequences for humans if exposed to river sediments from target areas. © 2013.
Failure modes in electroactive polymer thin films with elastic electrodes
NASA Astrophysics Data System (ADS)
De Tommasi, D.; Puglisi, G.; Zurlo, G.
2014-02-01
Based on an energy minimization approach, we analyse the elastic deformations of a thin electroactive polymer (EAP) film sandwiched between two elastic electrodes of non-negligible stiffness. We analytically show the existence of a critical value of the electrode voltage at which non-homogeneous solutions bifurcate from the homogeneous equilibrium state, leading to the pull-in phenomenon. This threshold is substantially lower than the limit value proposed in the literature, which considered only homogeneous deformations. We explicitly discuss the influence of geometric and material parameters, together with boundary conditions, on the attainment of the different failure modes observed in EAP devices. In particular, we obtain the optimum values of these parameters leading to the maximum activation performance of the device.
Uranium hexafluoride public risk
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fisher, D.R.; Hui, T.E.; Yurconic, M.
1994-08-01
The limiting value for uranium toxicity in a human being should be based on the concentration of uranium (U) in the kidneys. The threshold for nephrotoxicity appears to lie very near 3 µg U per gram kidney tissue. There does not appear to be strong scientific support for any other estimate, either higher or lower than this, of the threshold for uranium nephrotoxicity in a human being. The value 3 µg U per gram kidney is the concentration that results from a single intake of about 30 mg soluble uranium by inhalation (assuming the metabolism of a standard person). The concentration of uranium continues to increase in the kidneys after long-term, continuous (or chronic) exposure. After chronic intakes of soluble uranium by workers at the rate of 10 mg U per week, the concentration of uranium in the kidneys approaches and may even exceed the nephrotoxic limit of 3 µg U per gram kidney tissue. Precise values of the kidney concentration depend on the biokinetic model and model parameters assumed for such a calculation. Since it is possible for the concentration of uranium in the kidneys to exceed 3 µg per gram tissue at an intake rate of 10 mg U per week over long periods of time, we believe that the kidneys are protected from injury when intakes of soluble uranium at the rate of 10 mg U per week do not continue for more than two consecutive weeks. For long-term, continuous occupational exposure to low-level, soluble uranium, we recommend a reduced weekly intake limit of 5 mg uranium to prevent nephrotoxicity in workers. Our analysis shows that the nephrotoxic limit of 3 µg U per gram kidney tissue is not exceeded after long-term, continuous uranium intake at the rate of 5 mg soluble uranium per week.
Kotsiou, Ourania S; Tzortzi, Panagiota; Beta, Rafailia A A; Kyritsis, Athanasios; Gourgoulianis, Konstantinos I
2018-06-01
A follow-up thoracentesis is proposed in suspected atypical tuberculosis cases. The study aimed to define the variability of pleural ADA values across repeated thoracenteses in different types of pleural effusions (PEs) and to evaluate whether ADA variance, with regard to the cutoff value of 40 U/L, affected final diagnosis. A total of 131 patients with PEs of various etiologies underwent three repeated thoracenteses, and ADA values were estimated for each. 82% and 55% of patients had greater than 10% and 20% deviation from the highest ADA value, respectively. Of the patients with a variance of at least 20%, 36% had only increasing ADA values, while 19% had only decreasing values. With respect to the cutoff value of 40 U/L, in only two cases did ADA decrease below this threshold; these concerned a man with tuberculous pleurisy and a woman with lymphoma, both in the course of treatment. Furthermore, in only two cases with rising values did ADA finally exceed the cutoff limit; these concerned a man with rheumatoid pleurisy and a man with tuberculous pleurisy. Surprisingly, malignant PEs (MPEs) showed a higher percentage of increasing values compared to all other exudates, which did not, however, exceed the threshold. The determination of pleural ADA levels is a reproducible method for rapid tuberculosis diagnosis. The detected measurement deviations do not appear to affect final diagnosis. In specific situations, repeated ADA measurements may be valuable in directing further diagnostic evaluation. More investigation is needed to elucidate the possible prognostic significance of the increasing trend in ADA values in MPEs. © 2017 Wiley Periodicals, Inc.
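The deviation-from-peak statistic and the cutoff-crossing check used above can be sketched as follows; the ADA values are illustrative, not the study's data:

```python
# A minimal sketch of the deviation-from-peak statistic and the 40 U/L
# cutoff-crossing check described above. The ADA values are illustrative.
def ada_summary(values, cutoff=40.0):
    peak = max(values)
    max_deviation = (peak - min(values)) / peak   # fraction of highest value
    crosses_cutoff = (min(values) < cutoff) != (peak < cutoff)
    return max_deviation, crosses_cutoff

dev, crossed = ada_summary([52.0, 44.0, 38.0])  # U/L, illustrative
```

A patient can show substantial variance without crossing the 40 U/L cutoff, which is why large deviations rarely changed the final diagnosis.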
Is ``No-Threshold'' a ``Non-Concept''?
NASA Astrophysics Data System (ADS)
Schaeffer, David J.
1981-11-01
A controversy prominent in scientific literature that has carried over to newspapers, magazines, and popular books is having serious social and political expressions today: “Is there, or is there not, a threshold below which exposure to a carcinogen will not induce cancer?” The distinction between establishing the existence of this threshold (which is a theoretical question) and its value (which is an experimental one) gets lost in the scientific arguments. Establishing the existence of this threshold has now become a philosophical question (and an emotional one). In this paper I qualitatively outline theoretical reasons why a threshold must exist, discuss experiments which measure thresholds on two chemicals, and describe and apply a statistical method for estimating the threshold value from exposure-response data.
NASA Technical Reports Server (NTRS)
Smith, Stephen W.; Seshadri, Banavara R.; Newman, John A.
2015-01-01
The experimental methods to determine near-threshold fatigue crack growth rate data are prescribed in ASTM standard E647. To produce near-threshold data at a constant stress ratio (R), the applied stress-intensity factor (K) is decreased as the crack grows, based on a specified K-gradient. Consequently, as the fatigue crack growth rate threshold is approached and the crack tip opening displacement decreases, remote crack wake contact may occur due to the plastically deformed crack wake surfaces and shield the growing crack tip, resulting in a reduced crack tip driving force and non-representative crack growth rate data. If such data are used in component life assessment, the evaluation could yield highly non-conservative predictions. Although this anomalous behavior has been shown to be affected by K-gradient, starting K level, residual stresses, environmentally assisted cracking, specimen geometry, and material type, the specifications within the standard to avoid this effect are limited to a maximum fatigue crack growth rate and a suggestion for the K-gradient value. This paper provides parallel experimental and computational simulations of the K-decreasing method for two materials (an aluminum alloy, AA 2024-T3, and a titanium alloy, Ti 6-2-2-2-2) to aid in establishing a clear understanding of appropriate testing requirements. These simulations investigate the effects of K-gradient, the maximum value of stress-intensity factor applied, and material type. A material-independent term is developed to guide the selection of appropriate test conditions for most engineering alloys. With the use of such a term, near-threshold fatigue crack growth rate tests can be performed at accelerated rates, near-threshold data can be acquired in days instead of weeks without having to establish testing criteria through trial and error, and these data can be acquired for most engineering materials, even those produced in relatively small product forms.
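The K-decreasing schedule referenced above follows the exponential form in ASTM E647, K(a) = K0·exp[C(a − a0)], with a negative normalized K-gradient C; a minimal sketch with illustrative values:

```python
import math

# Sketch of the ASTM E647 K-decreasing schedule, K(a) = K0*exp(C*(a - a0)),
# where the normalized K-gradient C is negative for load shedding (the
# standard suggests C >= -0.08 per mm). Numerical values are illustrative.
def applied_K(a_mm, K0=10.0, a0_mm=15.0, C_per_mm=-0.08):
    return K0 * math.exp(C_per_mm * (a_mm - a0_mm))

K_start = applied_K(15.0)  # equals K0 at the starting crack length
K_later = applied_K(25.0)  # K shed after 10 mm of crack growth
```

A steeper (more negative) C sheds load faster and shortens the test, but increases the risk of the crack-wake shielding artifact the paper investigates.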
NASA Astrophysics Data System (ADS)
Liu, Laqun; Wang, Huihui; Guo, Fan; Zou, Wenkang; Liu, Dagang
2017-04-01
Based on the 3-dimensional particle-in-cell (PIC) code CHIPIC3D, with a new circuit boundary algorithm we developed, a conical magnetically insulated transmission line (MITL) driven by a 1.0-MV linear transformer driver (LTD) is explored numerically. The switch jitter times of the LTD are critical parameters for the system and are difficult to measure experimentally. In this paper, these values are obtained by comparing the PIC results with experimental data for a large diode-gap MITL. By decreasing the diode gap, we find that the PIC results agree well with the experimental data as long as the MITL operates in self-limited flow, regardless of how large the diode gap is. However, when the diode gap decreases to a threshold, the self-limited flow transitions to a load-limited flow. In this situation, the PIC results no longer agree with the experimental data, owing to anode plasma expansion in the diode load. This disagreement is used to estimate the plasma expansion speed.
Keir, Gregory J; Wort, S John; Kokosi, Maria; George, Peter M; Walsh, Simon L F; Jacob, Joseph; Price, Laura; Bax, Simon; Renzoni, Elisabetta A; Maher, Toby M; MacDonald, Peter; Hansell, David M; Wells, Athol U
2018-01-12
In interstitial lung disease (ILD), pulmonary hypertension (PH) is a major adverse prognostic determinant. Transthoracic echocardiography (TTE) is the most widely used tool when screening for PH, although discordance between TTE and right heart catheter (RHC) measured pulmonary haemodynamics is increasingly recognized. We evaluated the predictive utility of the updated European Society of Cardiology/European Respiratory Society (ESC/ERS) TTE screening recommendations against RHC testing in a large, well-characterized ILD cohort. Two hundred and sixty-five consecutive patients with ILD and suspected PH underwent comprehensive assessment, including RHC, between 2006 and 2012. ESC/ERS recommended tricuspid regurgitation (TR) velocity thresholds for assigning high (>3.4 m/s), intermediate (2.9-3.4 m/s) and low (<2.8 m/s) probabilities of PH were evaluated against RHC testing. RHC testing confirmed PH in 86% of subjects with a peak TR velocity >3.4 m/s, and excluded PH in 60% of ILD subjects with a TR velocity <2.8 m/s. Thus, the ESC/ERS guidelines misclassified 40% of subjects as 'low probability' of PH, when PH was confirmed on subsequent RHC. Evaluating alternative TR velocity thresholds for assigning a low probability of PH did not significantly improve the ability of TR velocity to exclude a diagnosis of PH. In patients with ILD and suspected PH, currently recommended ESC/ERS TR velocity screening thresholds were associated with a high positive predictive value (86%) for confirming PH, but were of limited value in excluding PH, with 40% of patients misclassified as low probability when PH was confirmed at subsequent RHC. © 2018 Asian Pacific Society of Respirology.
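The ESC/ERS TR-velocity bands quoted above map directly to a probability category; a minimal sketch (treating values from 2.9 m/s up as intermediate, an assumption made here since the text leaves the 2.8-2.9 m/s boundary unassigned):

```python
# A minimal sketch of the ESC/ERS TR-velocity bands quoted above. Treating
# values from 2.9 m/s upward as "intermediate" (the text leaves 2.8-2.9 m/s
# unassigned) is an assumption made here for completeness.
def ph_probability(tr_velocity_ms):
    if tr_velocity_ms > 3.4:
        return "high"
    if tr_velocity_ms >= 2.9:
        return "intermediate"
    return "low"
```

The study's finding is that the "low" branch of this mapping is unreliable in ILD: 40% of patients so classified had PH confirmed at RHC.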
NASA Astrophysics Data System (ADS)
Cho, Seung-Hyun; Lee, Sang-Soo; Shin, Dong-Wook
2010-06-01
We have experimentally demonstrated that the use of an optical receiver with decision threshold level adjustment (DTLA) improved the performance of an upstream transmission in reflective semiconductor optical amplifier (RSOA)-based loopback wavelength division multiplexing-passive optical network (WDM-PON). Even though the extinction ratio (ER) of the downstream signal was as much as 9 dB and the injection power into the RSOA at the optical network unit was about -24 dBm, we successfully obtained error-free transmission results for the upstream signal through careful control of the decision threshold value in the optical receiver located at optical line terminal (OLT). Using an optical receiver with DTLA for upstream signal detection overcame significant obstacles related to the injection power into the RSOA and the ER of the downstream signal, which were previously considered limitations of the wavelength remodulation scheme. This technique is expected to provide flexibility for the optical link design in the practical deployment of a WDM-PON.
Novel methodologies for spectral classification of exon and intron sequences
NASA Astrophysics Data System (ADS)
Kwan, Hon Keung; Kwan, Benjamin Y. M.; Kwan, Jennifer Y. Y.
2012-12-01
Digital processing of a nucleotide sequence requires it to be mapped to a numerical sequence in which the choice of nucleotide to numeric mapping affects how well its biological properties can be preserved and reflected from nucleotide domain to numerical domain. Digital spectral analysis of nucleotide sequences unfolds a period-3 power spectral value which is more prominent in an exon sequence as compared to that of an intron sequence. The success of a period-3 based exon and intron classification depends on the choice of a threshold value. The main purposes of this article are to introduce novel codes for 1-sequence numerical representations for spectral analysis and compare them to existing codes to determine appropriate representation, and to introduce novel thresholding methods for more accurate period-3 based exon and intron classification of an unknown sequence. The main findings of this study are summarized as follows: Among sixteen 1-sequence numerical representations, the K-Quaternary Code I offers an attractive performance. A windowed 1-sequence numerical representation (with window length of 9, 15, and 24 bases) offers a possible speed gain over non-windowed 4-sequence Voss representation which increases as sequence length increases. A winner threshold value (chosen from the best among two defined threshold values and one other threshold value) offers a top precision for classifying an unknown sequence of specified fixed lengths. An interpolated winner threshold value applicable to an unknown and arbitrary length sequence can be estimated from the winner threshold values of fixed length sequences with a comparable performance. In general, precision increases as sequence length increases. The study contributes an effective spectral analysis of nucleotide sequences to better reveal embedded properties, and has potential applications in improved genome annotation.
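A minimal sketch of the period-3 measure described above, using binary (Voss-style) indicator sequences and the DFT coefficient at index k = N/3; the sequences and threshold are illustrative:

```python
import cmath

# A minimal sketch of the period-3 measure: power of the four binary (Voss)
# indicator sequences at DFT index k = N/3, compared against a threshold.
# The sequences and threshold below are illustrative.
def period3_power(seq):
    n = len(seq)
    k = n / 3.0  # the period-3 component sits at one third of the length
    total = 0.0
    for base in "ACGT":
        x = [1.0 if c == base else 0.0 for c in seq]
        X = sum(v * cmath.exp(-2j * cmath.pi * k * m / n)
                for m, v in enumerate(x))
        total += abs(X) ** 2
    return total

def classify(seq, threshold):
    """Label a sequence 'exon' when its period-3 power exceeds the threshold."""
    return "exon" if period3_power(seq) > threshold else "intron"
```

As the article notes, the classification accuracy hinges on the threshold choice, which is what the proposed winner/interpolated thresholding methods address.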
Olfactory Threshold of Chlorine in Oxygen.
1977-09-01
The odor threshold of chlorine in oxygen was determined. Measurements were conducted in an altitude chamber, which provided an odor-free and noise-free background. Human male volunteers with no previous olfactory acuity testing experience served as panelists. Threshold values were affected by time intervals between trials and by age differences. The mean threshold value for 11 subjects was 0.08 ppm, obtained by positive responses to the lowest detectable level of chlorine in oxygen 50% of the time. (Author)
42 CFR 423.336 - Risk-sharing arrangements.
Code of Federal Regulations, 2012 CFR
2012-10-01
... payments to a Part D sponsor subject to risk—(1) Adjusted allowable risk corridor costs. For purposes of... equal to— (1) The target amount for the plan; minus (2) An amount equal to the first threshold risk.... (B) Second threshold lower limit. The second threshold lower limit of the corridor is equal to— (1...
42 CFR 423.336 - Risk-sharing arrangements.
Code of Federal Regulations, 2013 CFR
2013-10-01
... payments to a Part D sponsor subject to risk—(1) Adjusted allowable risk corridor costs. For purposes of... equal to— (1) The target amount for the plan; minus (2) An amount equal to the first threshold risk.... (B) Second threshold lower limit. The second threshold lower limit of the corridor is equal to— (1...
42 CFR 423.336 - Risk-sharing arrangements.
Code of Federal Regulations, 2011 CFR
2011-10-01
... payments to a Part D sponsor subject to risk—(1) Adjusted allowable risk corridor costs. For purposes of... equal to— (1) The target amount for the plan; minus (2) An amount equal to the first threshold risk.... (B) Second threshold lower limit. The second threshold lower limit of the corridor is equal to— (1...
Nonlinear multilayers as optical limiters
NASA Astrophysics Data System (ADS)
Turner-Valle, Jennifer Anne
1998-10-01
In this work we present a non-iterative technique for computing the steady-state optical properties of nonlinear multilayers, and we examine nonlinear multilayer designs for optical limiters. Optical limiters are filters with intensity-dependent transmission, designed to curtail the transmission of incident light above a threshold irradiance value in order to protect optical sensors from damage due to intense light. Thin-film multilayers composed of nonlinear materials exhibiting an intensity-dependent refractive index are used as the basis for optical limiter designs, in order to enhance the nonlinear filter response by magnifying the electric field in the nonlinear materials through interference effects. The nonlinear multilayer designs considered in this work are based on linear optical interference filter designs, which are selected for their spectral properties and electric field distributions. Quarter-wave stacks and cavity filters are examined for their suitability as sensor protectors and their manufacturability. The underlying non-iterative technique used to calculate the optical response of these filters derives from recognizing that the multi-valued calculation of output irradiance as a function of incident irradiance may be turned into a single-valued calculation of incident irradiance as a function of output irradiance. Finally, the benefits and drawbacks of using nonlinear multilayers for optical limiting are examined and future research directions are proposed.
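The non-iterative idea can be illustrated with a toy model: rather than solving the multi-valued I_out(I_in), sweep the output irradiance and evaluate the single-valued I_in(I_out). The saturable transmission function below is an assumed stand-in, not the thin-film calculation used in the work:

```python
# A toy illustration (assumed transmission model, not the paper's thin-film
# computation) of the non-iterative idea: sweep the output irradiance and
# evaluate the single-valued incident irradiance I_in(I_out), avoiding the
# multi-valued I_out(I_in).
def incident_irradiance(i_out, t_linear=0.9, i_sat=1.0):
    t = t_linear / (1.0 + i_out / i_sat)  # toy saturable transmission
    return i_out / t

# Sweep the output to trace the limiter curve (I_in, I_out) point by point.
curve = [(incident_irradiance(i / 10.0), i / 10.0) for i in range(1, 51)]
```

Because each output value yields exactly one input value, the full limiter curve is traced without iteration, even where the forward map would fold back on itself.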
Material Compatability with Threshold Limit Value Levels of Monomethyl Hydrazine
1988-10-26
The air supply was house compressed air, conditioned by passing through a series of demisters, a hot Hopcalite catalyst bed, and a reciprocating dual-tower ... recorded. At the end of a test, the tubing was rinsed with methanol and dried with compressed breathing air. Cleaning the tubing material between tests had ... Materials were evaluated for potential use as ambient air sample lines for hydrazines. Fluorinated polymers ...
Rationale for a Threshold Limit Value (TLV)R for JP-4/Jet B Wide Cut Aviation Turbine Fuel.
1983-04-01
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, F; Shandong Cancer Hospital and Insititute, Jinan, Shandong; Bowsher, J
2014-06-01
Purpose: PET imaging with F18-FDG is utilized for treatment planning, treatment assessment, and prognosis. A region of interest (ROI) encompassing the tumor may be determined on the PET image, often by applying a threshold T to the PET standard uptake values (SUVs). Several studies have shown prognostic value for relevant ROI properties including the maximum SUV value (SUVmax), metabolic tumor volume (MTV), and total glycolytic activity (TGA). The choice of threshold T may affect the mean SUV value (SUVmean), MTV, and TGA. Recently, spatial resolution modeling (SRM) has been introduced on many PET systems; SRM may also affect these ROI properties. The purpose of this work is to investigate the relative influence of SRM and threshold choice T on SUVmean, MTV, TGA, and SUVmax. Methods: For 9 anal cancer patients, 18F-FDG PET scans were performed prior to treatment. PET images were reconstructed by 2 iterations of Ordered Subsets Expectation Maximization (OSEM), with and without SRM. ROI contours were generated with 5 different SUV threshold values T: 2.5, 3.0, and 30%, 40%, and 50% of SUVmax. Paired-samples t tests were used to compare SUVmean, MTV, and TGA (a) for SRM on versus off and (b) between each pair of threshold values T. SUVmax was also compared for SRM on versus off. Results: For almost all (57/60) comparisons of two different threshold values, SUVmean, MTV, and TGA showed statistically significant variation. For comparison of SRM on versus off, there were no statistically significant changes in SUVmax and TGA, but there were statistically significant changes in MTV for T=2.5 and T=3.0 and in SUVmean for all T. Conclusion: The near-universal statistical significance of threshold choice T suggests that, regarding harmonization across sites, threshold choice may be a greater concern than the choice of SRM. However, broader study is warranted; e.g., other numbers of OSEM iterations should be considered.
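A minimal sketch of how the three ROI properties follow from a threshold T on an SUV array; the voxel volume and SUV values below are illustrative:

```python
import numpy as np

# A minimal sketch of deriving SUVmean, MTV and TGA from a threshold T on an
# SUV array, as in the ROI definitions above. Voxel volume and SUVs are
# illustrative.
def roi_metrics(suv, T, voxel_ml=0.064):
    mask = suv >= T
    suv_mean = float(suv[mask].mean())
    mtv_ml = float(mask.sum()) * voxel_ml  # metabolic tumor volume
    tga = suv_mean * mtv_ml                # total glycolytic activity
    return suv_mean, mtv_ml, tga

suv = np.array([1.0, 2.0, 3.0, 6.0, 8.0])
suv_mean, mtv, tga = roi_metrics(suv, T=0.4 * suv.max())  # 40% of SUVmax
```

Because the mask, and hence all three metrics, changes with T, harmonizing the threshold across sites matters more than the reconstruction choice, which is the paper's conclusion.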
Contribution to the hygienic assessment of atmospheric ozone
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eglite, M.E.
1968-01-01
The concentration of atmospheric ozone in Moscow in the autumn and winter of 1965 and 1966 and in the spring of 1967 amounted to 0.065 mg/m³, and in Riga in the spring and summer months of 1966 it ranged from 0.01 to 0.093 mg/m³. The odor threshold of ozone for the most sensitive persons was 0.015 mg/m³; the subthreshold concentration was 0.01 mg/m³. The threshold value of the reflex action of ozone on the electrical activity of the cerebral cortex was 0.01 mg/m³, and the subthreshold value 0.005 mg/m³. Round-the-clock chronic poisoning with ozone at a concentration of 0.11 mg/m³ for a period of 93 days produced in experimental rats a decrease in weight, inhibition of blood cholinesterase activity, a decrease in the oxygen consumption rate, a rise in the 17-ketosteroid content of the urine, and a fall in the ascorbic acid content of the suprarenal glands. Ozone at concentrations of 0.02 and 0.005 mg/m³ proved to be ineffective.
Somatosensory temporal discrimination is prolonged during migraine attacks.
Boran, H Evren; Cengiz, Bülent; Bolay, Hayrunnisa
2016-01-01
Symptoms and signs of sensorial disturbances are characteristic features of a migraine headache. Somatosensory temporal discrimination measures the temporal threshold to perceive two separate somaesthetic stimuli as clearly distinct. This study aimed to evaluate somaesthetic perception in migraine patients by measuring the somatosensory temporal discrimination thresholds. The study included 12 migraine patients without aura and 12 volunteers without headache. Somatosensory temporal discrimination threshold (STDT) values were measured in the face (V3) and hands (C7) during a lateralized headache attack and the headache-free interictal period. The disease duration, pain intensity, phonophobia, photophobia, nausea, vomiting, and brush allodynia were also recorded during the migraine attack. STDT values were within normal limits and not different between the control group and the interictal period in migraine patients. Compared to the headache-free period, STDT values during the attack were significantly prolonged in the contralateral hand (C7) (155.7 ± 84.2 vs 40.6 ± 16.1 ms [P < .001]), ipsilateral hand (C7) (88.6 ± 51.3 vs 31.4 ± 14.2 ms [P < .001]), contralateral face (V3) (65.5 ± 35.4 vs 37.6 ± 22.2 ms [P = .006]) and ipsilateral face (V3) (104.1 ± 44.5 vs 37.5 ± 21.4 ms [P < .001]) according to the lateralization of the headache. Ictal STDT values of the contralateral hand and ipsilateral face were significantly increased compared to those of the ipsilateral hand and contralateral face (155.7 ± 84.2 ms vs 88.6 ± 51.3 ms [P = .001]; 104.1 ± 44.5 ms vs 65.5 ± 35.4 ms [P = .001]). No allodynia was detected in the areas that were tested for somatosensory temporal discrimination.
The visual analog scale scores were correlated with the somatosensory temporal discrimination thresholds of the contralateral hand (r = 0.602, P = .038), whereas no correlation was detected between the somatosensory temporal discrimination thresholds and disease duration, brush allodynia in the forehead, phonophobia, photophobia, nausea and vomiting. The study demonstrates for the first time that somatosensory temporal discrimination thresholds are elevated during migraine attacks. A transient disruption of the central processing of somaesthetic stimuli during the lateralized migraine attack may provide additional information to understand the mechanisms of the cognitive and sensory perception impairment associated with migraine headache and may have diagnostic value. © 2015 American Headache Society.
Regression Discontinuity Designs in Epidemiology
Moscoe, Ellen; Mutevedzi, Portia; Newell, Marie-Louise; Bärnighausen, Till
2014-01-01
When patients receive an intervention based on whether they score below or above some threshold value on a continuously measured random variable, the intervention will be randomly assigned for patients close to the threshold. The regression discontinuity design exploits this fact to estimate causal treatment effects. In spite of its recent proliferation in economics, the regression discontinuity design has not been widely adopted in epidemiology. We describe regression discontinuity, its implementation, and the assumptions required for causal inference. We show that regression discontinuity is generalizable to the survival and nonlinear models that are mainstays of epidemiologic analysis. We then present an application of regression discontinuity to the much-debated epidemiologic question of when to start HIV patients on antiretroviral therapy. Using data from a large South African cohort (2007–2011), we estimate the causal effect of early versus deferred treatment eligibility on mortality. Patients whose first CD4 count was just below the 200 cells/μL CD4 count threshold had a 35% lower hazard of death (hazard ratio = 0.65 [95% confidence interval = 0.45–0.94]) than patients presenting with CD4 counts just above the threshold. We close by discussing the strengths and limitations of regression discontinuity designs for epidemiology. PMID:25061922
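The jump-at-the-cutoff idea can be sketched with a local linear fit on each side of the threshold; the data, cutoff, and bandwidth below are simulated for illustration, not the South African cohort:

```python
import numpy as np

def rdd_estimate(x, y, cutoff, bandwidth):
    """Sharp regression discontinuity: fit separate linear trends on each side
    of the cutoff within a bandwidth and take the jump in fitted values at the
    cutoff as the treatment effect estimate."""
    left = (x >= cutoff - bandwidth) & (x < cutoff)
    right = (x >= cutoff) & (x <= cutoff + bandwidth)
    bl = np.polyfit(x[left] - cutoff, y[left], 1)    # [slope, intercept]
    br = np.polyfit(x[right] - cutoff, y[right], 1)
    return br[1] - bl[1]  # difference of intercepts at the cutoff

rng = np.random.default_rng(0)
x = rng.uniform(50, 350, 4000)                 # running variable, e.g. first CD4 count
# Hypothetical outcome: smooth trend plus a drop of 1.5 for those below the
# cutoff (the "treated" group), plus noise.
y = 0.01 * x - (x < 200) * 1.5 + rng.normal(0, 0.1, x.size)
effect = rdd_estimate(x, y, cutoff=200.0, bandwidth=50.0)  # ~ +1.5 jump at 200
```

Patients far from the cutoff never enter the fit, which is what makes the comparison quasi-random for those near the threshold.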
Thresholds for the cost-effectiveness of interventions: alternative approaches.
Marseille, Elliot; Larson, Bruce; Kazi, Dhruv S; Kahn, James G; Rosen, Sydney
2015-02-01
Many countries use the cost-effectiveness thresholds recommended by the World Health Organization's Choosing Interventions that are Cost-Effective project (WHO-CHOICE) when evaluating health interventions. This project sets the threshold for cost-effectiveness as a cost per disability-adjusted life-year (DALY) averted of less than three times the country's annual gross domestic product (GDP) per capita. Highly cost-effective interventions are defined as meeting a threshold of once the annual GDP per capita per DALY averted. We argue that reliance on these thresholds reduces the value of cost-effectiveness analyses and makes such analyses too blunt to be useful for most decision-making in the field of public health. Use of these thresholds has little theoretical justification, skirts the difficult but necessary ranking of the relative values of locally-applicable interventions, and omits any consideration of what is truly affordable. The WHO-CHOICE thresholds set such a low bar for cost-effectiveness that very few interventions with evidence of efficacy can be ruled out. The thresholds have little value in assessing the trade-offs that decision-makers must confront. We present alternative approaches for applying cost-effectiveness criteria to choices in the allocation of health-care resources.
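The WHO-CHOICE rule criticized here reduces to a simple comparison of the cost-per-DALY ratio against 1× and 3× GDP per capita; the figures below are hypothetical:

```python
def who_choice_class(cost_per_daly_averted, gdp_per_capita):
    """WHO-CHOICE labels: 'highly cost-effective' under 1x GDP per capita per
    DALY averted, 'cost-effective' under 3x, otherwise 'not cost-effective'."""
    ratio = cost_per_daly_averted / gdp_per_capita
    if ratio < 1:
        return "highly cost-effective"
    if ratio < 3:
        return "cost-effective"
    return "not cost-effective"

# Hypothetical intervention in a country with GDP per capita of $1,500
label = who_choice_class(cost_per_daly_averted=2500.0, gdp_per_capita=1500.0)
```

The authors' point is visible even in this sketch: the rule ranks nothing, ignores the budget, and almost any efficacious intervention clears the 3× bar.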
Reaction Q-Values and Thresholds Viewer. This tool computes reaction Q-values and thresholds, with uncertainties and correlations, using 30 energy ranges. Simple tables of reaction uncertainties are also provided.
Vogel, M N; Schmücker, S; Maksimovic, O; Hartmann, J; Claussen, C D; Horger, M
2012-01-01
Objectives This study compares tumour response assessment by automated CT volumetry and standard manual measurements regarding the impact on treatment decisions and patient outcome. Methods 58 consecutive patients with 203 pulmonary metastases undergoing baseline and follow-up multirow detector CT (MDCT) under chemotherapy were assessed for response to chemotherapy. Tumour burden of pulmonary target lesions was quantified in three ways: (1) following response evaluation criteria in solid tumours (RECIST); (2) following the volume equivalents of RECIST (i.e. with a threshold of −65/+73%); and (3) using calculated limits for stable disease (SD). For volumetry, calculated limits had been set at ±38% prior to the study by repeated quantification of nodules scanned twice. Results were compared using non-weighted κ-values and were evaluated for their impact on treatment decisions and patient outcome. Results In 15 (17%) of the 58 patients, the results of response assessment were inconsistent with 1 of the 3 methods, which would have had an impact on treatment decisions in 8 (13%). Patient outcome regarding therapy response could be verified in 5 (33%) of the 15 patients with inconsistent measurement results and was consistent with both RECIST and volumetry in 1, with calculated limits in 3 and with none in 1. Diagnosis as to the overall response was consistent with RECIST in six patients, with volumetry in six and with calculated limits in eight cases. There is an impact of different methods for therapy response assessment on treatment decisions. Conclusion A reduction of threshold for SD to ±30–40% of volume change seems reasonable when using volumetry. PMID:22745205
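The quoted volume equivalents of RECIST follow from simple arithmetic: for sphere-like lesions V ∝ d³, so the RECIST diameter changes of −30% (partial response) and +20% (progression) translate to roughly −66%/+73% in volume, matching the −65/+73% threshold used in the study:

```python
# RECIST thresholds are defined on (sums of) lesion diameters; the volume
# equivalents follow from V proportional to d**3 for sphere-like lesions.
def volume_change(diameter_change):
    """Fractional volume change equivalent to a fractional diameter change."""
    return (1.0 + diameter_change) ** 3 - 1.0

pr = volume_change(-0.30)   # partial response: -30% diameter -> about -66% volume
pd = volume_change(+0.20)   # progressive disease: +20% diameter -> about +73% volume
```

The same cubing explains why the calculated ±38% volumetry limit for stable disease is far wider in volume terms than it would appear as a diameter change.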
Thresholds of Extinction: Simulation Strategies in Environmental Values Education.
ERIC Educational Resources Information Center
Glew, Frank
1990-01-01
Describes a simulation exercise for campers and an accompanying curriculum unit--"Thresholds of Extinction"--that addresses the issues of endangered species. Uses this context to illustrate steps in the process of values development: awareness, gathering data, resolution (decision making), responsibility (acting on values), and…
NASA Astrophysics Data System (ADS)
Salam, Afifah Salmi Abdul; Isa, Mohd. Nazrin Md.; Ahmad, Muhammad Imran; Che Ismail, Rizalafande
2017-11-01
This paper focuses on the study and identification of various threshold values for two commonly used edge detection techniques, Sobel and Canny edge detection. The idea is to determine which values are apt to give accurate results in identifying a particular leukemic cell. In addition, evaluating the suitability of edge detectors is also essential, as feature extraction of the cell depends greatly on image segmentation (edge detection). Firstly, an image of the M7 subtype of Acute Myelocytic Leukemia (AML) is chosen because diagnostic support for it was found to be lacking. Next, for an enhancement in image quality, noise filters are applied. Hence, by comparing images with no filter, a median filter, and an average filter, useful information can be acquired. Threshold values of 0, 0.25, and 0.5 are tested for each detector. The investigation found that, without any filter, Canny with a threshold value of 0.5 yields the best result.
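A rough illustration of how a normalized threshold such as 0.25 or 0.5 changes a Sobel edge map; this numpy-only sketch uses a synthetic disk rather than a leukemic cell image, and is not the paper's pipeline:

```python
import numpy as np

KX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)  # Sobel x kernel
KY = KX.T                                                    # Sobel y kernel

def filt2(img, k):
    """3x3 cross-correlation with zero padding (the kernel's sign does not
    matter for the gradient magnitude, so correlation suffices here)."""
    p = np.pad(img, 1)
    h, w = img.shape
    out = np.zeros((h, w))
    for i in range(3):
        for j in range(3):
            out += k[i, j] * p[i:i + h, j:j + w]
    return out

def sobel_edges(image, threshold):
    """Sobel gradient magnitude, normalized to [0, 1], binarized at `threshold`."""
    mag = np.hypot(filt2(image, KX), filt2(image, KY))
    if mag.max() > 0:
        mag = mag / mag.max()
    return mag > threshold

# Toy "cell": a bright disk on a dark background
yy, xx = np.mgrid[:64, :64]
img = (((yy - 32) ** 2 + (xx - 32) ** 2) < 15 ** 2).astype(float)

edges_low = sobel_edges(img, 0.25)   # permissive threshold: more edge pixels
edges_high = sobel_edges(img, 0.5)   # strict threshold: fewer edge pixels
```

Raising the threshold can only shrink the edge set, which is why threshold choice directly trades off boundary completeness against noise suppression.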
The stability of color discrimination threshold determined using pseudoisochromatic test plates
NASA Astrophysics Data System (ADS)
Zutere, B.; Jurasevska Luse, K.; Livzane, A.
2014-09-01
Congenital red-green color vision deficiency is one of the most common genetic disorders. A previously printed set of pseudoisochromatic plates (KAMS test, 2012) was created for individual discrimination threshold determination in cases of mild congenital red-green color vision deficiency using neutral colors (colors confused with gray). The diagnostics of color blind subjects was performed with the Richmond HRR (4th edition, 2002) test and the Oculus HMC anomaloscope, and further examination was made using the KAMS test. Four male subjects aged 20 to 24 years participated in the study; all of them were diagnosed with deuteranomaly. Due to the design of the plates, the threshold of every subject in each trial was defined as the plate total color difference value ΔE at which the stimulus was detected 75% of the time, so the just-noticeable difference (jnd) was calculated in CIE LAB ΔE units. The authors performed repeated discrimination threshold measurements (5 times) for all four subjects under controlled illumination conditions. Psychophysical data were taken by sampling an observer's performance on a psychophysical task at a number of different stimulus saturation levels. Results show that a total color difference threshold ΔE exists for each individual tested with the KAMS pseudoisochromatic plates; this threshold value does not change significantly across repeated measurements. Deuteranomal threshold values acquired using the greenish plates of the KAMS test are significantly higher than thresholds acquired using the reddish plates. A strong positive correlation exists between the anomaloscope matching range (MR) and deuteranomal thresholds acquired by the KAMS test (R=0.94), and between the error score in the Richmond HRR test and thresholds acquired by the KAMS test (R=0.81).
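Two small helpers illustrate the quantities involved: the CIE76 ΔE (Euclidean distance in CIELAB) and the 75%-detection threshold read off a sampled psychometric curve by linear interpolation. The coordinates and detection rates are made-up examples, not the study's measurements:

```python
import math

def delta_e_cie76(lab1, lab2):
    """Euclidean colour difference in CIELAB (the CIE76 Delta-E formula)."""
    return math.dist(lab1, lab2)

def threshold_75(levels, p_detect):
    """Stimulus level detected 75% of the time, by linear interpolation
    between sampled points of the psychometric function."""
    for k in range(len(levels) - 1):
        l0, l1 = levels[k], levels[k + 1]
        p0, p1 = p_detect[k], p_detect[k + 1]
        if p0 <= 0.75 <= p1:
            return l0 + (0.75 - p0) * (l1 - l0) / (p1 - p0)
    return None  # 75% point not bracketed by the sampled levels

# A neutral grey vs. a slightly reddish grey (hypothetical CIELAB coordinates)
de = delta_e_cie76((60.0, 0.0, 0.0), (60.0, 3.0, 4.0))

# Detection rates sampled at four plate saturation levels (hypothetical)
jnd = threshold_75([2.0, 4.0, 6.0, 8.0], [0.30, 0.60, 0.90, 1.00])
```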
Castelli, Joël; Depeursinge, Adrien; de Bari, Berardino; Devillers, Anne; de Crevoisier, Renaud; Bourhis, Jean; Prior, John O
2017-06-01
In the context of oropharyngeal cancer treated with definitive radiotherapy, the aim of this retrospective study was to identify the best threshold value to compute metabolic tumor volume (MTV) and/or total lesion glycolysis to predict local-regional control (LRC) and disease-free survival. One hundred twenty patients with locally advanced oropharyngeal cancer from 2 different institutions treated with definitive radiotherapy underwent FDG PET/CT before treatment. Various MTVs and total lesion glycolysis values were defined based on 2 segmentation methods: (i) an absolute threshold of SUV (0-20 g/mL) or (ii) a relative threshold of SUVmax (0%-100%). The parameters' predictive capabilities for disease-free survival and LRC were assessed using the Harrell C-index and a Cox regression model. Relative thresholds between 40% and 68% and absolute thresholds between 5.5 and 7 had a similar predictive value for LRC (C-index = 0.65 and 0.64, respectively). Metabolic tumor volume had a higher predictive value than gross tumor volume (C-index = 0.61) and SUVmax (C-index = 0.54). Metabolic tumor volume computed with a relative threshold of 51% of SUVmax was the best predictor of disease-free survival (hazard ratio, 1.23 [per 10 mL], P = 0.009) and LRC (hazard ratio, 1.22 [per 10 mL], P = 0.02). The use of different thresholds within a reasonable range (between 5.5 and 7 for an absolute threshold and between 40% and 68% for a relative threshold) seems to have no major impact on the predictive value of MTV. This parameter may be used to identify patients at high risk of recurrence who may benefit from treatment intensification.
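The Harrell C-index used to compare thresholds here can be illustrated in a few lines: among comparable patient pairs (the earlier time is an observed event), it is the fraction where the higher-risk patient fails earlier. The toy data below are hypothetical, not the cohort's values:

```python
def harrell_c(times, events, risk):
    """Harrell concordance index for right-censored data (basic version):
    a pair (i, j) is comparable when i has an observed event before time j;
    it is concordant when the model assigns i the higher risk."""
    conc = ties = comparable = 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if events[i] and times[i] < times[j]:
                comparable += 1
                if risk[i] > risk[j]:
                    conc += 1
                elif risk[i] == risk[j]:
                    ties += 1
    return (conc + 0.5 * ties) / comparable

# Toy data: larger MTV (used as the risk score) recurs earlier
times = [2.0, 5.0, 8.0, 10.0]          # months to recurrence / last follow-up
events = [True, True, False, True]      # False = censored
risk = [40.0, 30.0, 20.0, 10.0]        # e.g. MTV in mL
c = harrell_c(times, events, risk)      # perfectly concordant toy data
```

A C-index of 0.5 corresponds to a useless predictor and 1.0 to perfect ranking, which is the scale on which the 0.54-0.65 values above should be read.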
NASA Astrophysics Data System (ADS)
Nlandu Kamavuako, Ernest; Scheme, Erik Justin; Englehart, Kevin Brian
2016-08-01
Objective. For over two decades, Hudgins’ set of time domain features have extensively been applied for classification of hand motions. The calculation of slope sign change and zero crossing features uses a threshold to attenuate the effect of background noise. However, there is no consensus on the optimum threshold value. In this study, we investigate for the first time the effect of threshold selection on the feature space and classification accuracy using multiple datasets. Approach. In the first part, four datasets were used, and classification error (CE), separability index, scatter matrix separability criterion, and cardinality of the features were used as performance measures. In the second part, data from eight classes were collected during two separate days with two days in between from eight able-bodied subjects. The threshold for each feature was computed as a factor (R = 0:0.01:4) times the average root mean square of data during rest. For each day, we quantified CE for R = 0 (CEr0) and minimum error (CEbest). Moreover, a cross day threshold validation was applied where, for example, CE of day two (CEodt) is computed based on optimum threshold from day one and vice versa. Finally, we quantified the effect of the threshold when using training data from one day and test data of the other. Main results. All performance metrics generally degraded with increasing threshold values. On average, CEbest (5.26 ± 2.42%) was significantly better than CEr0 (7.51 ± 2.41%, P = 0.018), and CEodt (7.50 ± 2.50%, P = 0.021). During the two-fold validation between days, CEbest performed similar to CEr0. Interestingly, when using the threshold values optimized per subject from day one and day two respectively, on the cross-days classification, the performance decreased. Significance. We have demonstrated that threshold value has a strong impact on the feature space and that an optimum threshold can be quantified. 
However, this optimum threshold is highly data and subject driven and thus does not generalize well. There is strong evidence that R = 0 provides a good trade-off between system performance and generalization. These findings are important for the practical use of pattern recognition based myoelectric control.
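The two threshold-dependent Hudgins features can be sketched as follows, with the deadband set to R times the RMS of a rest recording as in the study. The slope-sign-change deadband test shown is one common variant (definitions differ slightly across papers), and all signals are synthetic:

```python
import numpy as np

def zero_crossings(x, thr):
    """Zero-crossing count with an amplitude deadband `thr` (Hudgins-style):
    adjacent samples must straddle zero AND differ by at least `thr`."""
    return int(np.sum((x[:-1] * x[1:] < 0) & (np.abs(x[:-1] - x[1:]) >= thr)))

def slope_sign_changes(x, thr):
    """Slope-sign-change count; one common variant of the deadband test."""
    d1 = x[1:-1] - x[:-2]
    d2 = x[1:-1] - x[2:]
    return int(np.sum((d1 * d2 > 0) & (np.maximum(np.abs(d1), np.abs(d2)) >= thr)))

# Threshold chosen as R times the RMS of a "rest" recording, as in the study
rng = np.random.default_rng(1)
rest = rng.normal(0.0, 0.01, 1000)
R = 1.0
thr = R * np.sqrt(np.mean(rest ** 2))

signal = np.sin(np.linspace(0.0, 8.0 * np.pi, 200))  # 4 cycles: 7 interior crossings
zc = zero_crossings(signal, thr)
ssc = slope_sign_changes(np.array([0.0, 1.0, 0.0, 1.0, 0.0]), 0.5)  # 3 slope turns
```

Raising R suppresses noise-driven counts but also discards genuine low-amplitude activity, which is the feature-space degradation the study measures.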
Truscott, James E; Werkman, Marleen; Wright, James E; Farrell, Sam H; Sarkar, Rajiv; Ásbjörnsdóttir, Kristjana; Anderson, Roy M
2017-06-30
There is an increased focus on whether mass drug administration (MDA) programmes alone can interrupt the transmission of soil-transmitted helminths (STH). Mathematical models can be used to model these interventions and are increasingly being implemented to inform investigators about expected trial outcome and the choice of optimum study design. One key factor is the choice of threshold for detecting elimination. However, there are currently no thresholds defined for STH regarding breaking transmission. We develop a simulation of an elimination study, based on the DeWorm3 project, using an individual-based stochastic disease transmission model in conjunction with models of MDA, sampling, diagnostics and the construction of study clusters. The simulation is then used to analyse the relationship between the study end-point elimination threshold and whether elimination is achieved in the long term within the model. We analyse the quality of a range of statistics in terms of the positive predictive values (PPV) and how they depend on a range of covariates, including threshold values, baseline prevalence, measurement time point and how clusters are constructed. End-point infection prevalence performs well in discriminating between villages that achieve interruption of transmission and those that do not, although the quality of the threshold is sensitive to baseline prevalence and threshold value. Optimal post-treatment prevalence threshold value for determining elimination is in the range 2% or less when the baseline prevalence range is broad. For multiple clusters of communities, both the probability of elimination and the ability of thresholds to detect it are strongly dependent on the size of the cluster and the size distribution of the constituent communities. Number of communities in a cluster is a key indicator of probability of elimination and PPV. 
Extending the time beyond the study endpoint at which the threshold statistic is measured improves the PPV in discriminating between eliminating clusters and those that bounce back. The probability of elimination and the PPV are very sensitive to baseline prevalence for individual communities. However, most studies and programmes are constructed on the basis of clusters. Since elimination occurs within smaller population sub-units, the construction of clusters introduces new sensitivities of elimination threshold values to cluster size and the underlying population structure. Study simulation offers an opportunity to investigate key sources of sensitivity for elimination studies and programme designs in advance, and to tailor interventions to prevailing local or national conditions.
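The PPV of a prevalence-based elimination flag is simply TP/(TP + FP) over clusters. A toy illustration with a 2% end-point threshold follows; all numbers are hypothetical, not DeWorm3 outputs:

```python
def ppv(predicted_eliminated, truly_eliminated):
    """Positive predictive value of an end-point threshold statistic:
    of the clusters flagged as eliminating, the fraction that truly do."""
    tp = sum(p and t for p, t in zip(predicted_eliminated, truly_eliminated))
    fp = sum(p and not t for p, t in zip(predicted_eliminated, truly_eliminated))
    return tp / (tp + fp) if tp + fp else float("nan")

# Hypothetical end-point prevalences per cluster and the (unobservable)
# long-term truth about whether transmission was interrupted
prevalence = [0.00, 0.01, 0.015, 0.03, 0.05, 0.01]
truth = [True, True, False, False, False, True]

flags = [p < 0.02 for p in prevalence]  # 2% end-point threshold
value = ppv(flags, truth)
```

Tightening the threshold trades flagged clusters (sensitivity) for PPV, which is why the optimal value shifts with baseline prevalence and cluster construction.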
Setting conservation management thresholds using a novel participatory modeling approach.
Addison, P F E; de Bie, K; Rumpff, L
2015-10-01
We devised a participatory modeling approach for setting management thresholds that show when management intervention is required to address undesirable ecosystem changes. This approach was designed to be used when management thresholds: must be set for environmental indicators in the face of multiple competing objectives; need to incorporate scientific understanding and value judgments; and will be set by participants with limited modeling experience. We applied our approach to a case study where management thresholds were set for a mat-forming brown alga, Hormosira banksii, in a protected area management context. Participants, including management staff and scientists, were involved in a workshop to test the approach, and set management thresholds to address the threat of trampling by visitors to an intertidal rocky reef. The approach involved trading off the environmental objective, to maintain the condition of intertidal reef communities, with social and economic objectives to ensure management intervention was cost-effective. Ecological scenarios, developed using scenario planning, were a key feature that provided the foundation for where to set management thresholds. The scenarios developed represented declines in percent cover of H. banksii that may occur under increased threatening processes. Participants defined 4 discrete management alternatives to address the threat of trampling and estimated the effect of these alternatives on the objectives under each ecological scenario. A weighted additive model was used to aggregate participants' consequence estimates. Model outputs (decision scores) clearly expressed uncertainty, which can be considered by decision makers and used to inform where to set management thresholds. This approach encourages a proactive form of conservation, where management thresholds and associated actions are defined a priori for ecological indicators, rather than reacting to unexpected ecosystem changes in the future. 
© 2015 The Authors Conservation Biology published by Wiley Periodicals, Inc. on behalf of Society for Conservation Biology.
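The weighted additive aggregation described above can be sketched in a few lines; the objectives, weights, and consequence estimates below are hypothetical stand-ins, not the workshop's elicited values:

```python
def decision_score(weights, consequences):
    """Weighted additive model: aggregate normalized (0-1) consequence
    estimates across objectives using elicited objective weights."""
    return sum(weights[obj] * consequences[obj] for obj in weights)

# Hypothetical objective weights (must sum to 1) and one management
# alternative's estimated performance under a given ecological scenario
weights = {"ecological": 0.5, "social": 0.25, "economic": 0.25}
alternative = {"ecological": 0.8, "social": 0.6, "economic": 0.4}

score = decision_score(weights, alternative)
```

Scoring every alternative under every scenario this way yields the decision-score tables from which participants read off where to place management thresholds.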
NASA Astrophysics Data System (ADS)
Cliatt, Larry J.; Hill, Michael A.; Haering, Edward A.; Arnac, Sarah R.
2015-10-01
In support of the ongoing effort by the National Aeronautics and Space Administration (NASA) to bring supersonic commercial travel to the public, NASA, in partnership with other industry organizations, conducted a flight research experiment to analyze acoustic propagation at the lateral edge of the sonic boom carpet. The name of the effort was the Farfield Investigation of No-boom Thresholds (FaINT). The research from FaINT determined an appropriate metric for sonic boom waveforms in the transition and shadow zones called Perceived Sound Exposure Level, established a value of 65 dB as a limit for the acoustic lateral extent of a sonic boom's noise region, analyzed change in sonic boom levels near lateral cutoff, and compared between real sonic boom measurements and numerical predictions.
Reimerdes, H; Garofalo, A M; Jackson, G L; Okabayashi, M; Strait, E J; Chu, M S; In, Y; La Haye, R J; Lanctot, M J; Liu, Y Q; Navratil, G A; Solomon, W M; Takahashi, H; Groebner, R J
2007-02-02
Recent DIII-D experiments with reduced neutral beam torque and minimum nonaxisymmetric perturbations of the magnetic field show a significant reduction of the toroidal plasma rotation required for the stabilization of the resistive-wall mode (RWM) below the threshold values observed in experiments that apply nonaxisymmetric magnetic fields to slow the plasma rotation. A toroidal rotation frequency of less than 10 krad/s at the q=2 surface (measured with charge exchange recombination spectroscopy using C VI) corresponding to 0.3% of the inverse of the toroidal Alfvén time is sufficient to sustain the plasma pressure above the ideal MHD no-wall stability limit. The low-rotation threshold is found to be consistent with predictions by a kinetic model of RWM damping.
Threshold-free method for three-dimensional segmentation of organelles
NASA Astrophysics Data System (ADS)
Chan, Yee-Hung M.; Marshall, Wallace F.
2012-03-01
An ongoing challenge in the field of cell biology is how to quantify the size and shape of organelles within cells. Automated image analysis methods often utilize thresholding for segmentation, but the calculated surface of objects depends sensitively on the exact threshold value chosen, and this problem is generally worse at the upper and lower z boundaries because of the anisotropy of the point spread function. We present here a threshold-independent method for extracting the three-dimensional surface of vacuoles in budding yeast whose limiting membranes are labeled with a fluorescent fusion protein. These organelles typically exist as a clustered set of 1-10 sphere-like compartments. Vacuole compartments and center points are identified manually within z-stacks taken using a spinning disk confocal microscope. A set of rays is defined originating from each center point and radiating outwards in random directions. Intensity profiles are calculated at coordinates along these rays, and intensity maxima are taken as the points at which the rays cross the limiting membrane of the vacuole. These points are then fit with a weighted sum of basis functions to define the surface of the vacuole, from which parameters such as volume and surface area are calculated. This method is able to determine the volume and surface area of spherical beads (0.96 to 2 microns in diameter) with less than 10% error, and validation using model convolution methods produces similar results. Thus, this method provides an accurate, automated way of measuring the size and morphology of organelles and can be generalized to measure cells and other objects on biologically relevant length scales.
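The ray-sampling step can be sketched under stated assumptions (nearest-neighbour sampling, a synthetic membrane shell as the test volume; the paper's subsequent basis-function surface fit is omitted):

```python
import numpy as np

def radius_along_ray(volume, center, direction, n_steps=100, step=0.5):
    """Distance from `center` to the intensity maximum along one ray, taken
    as the point where the ray crosses the labeled limiting membrane."""
    direction = np.asarray(direction, float)
    direction /= np.linalg.norm(direction)
    radii = np.arange(1, n_steps + 1) * step
    points = np.asarray(center, float) + radii[:, None] * direction
    # Nearest-neighbour sampling, clipped to the volume bounds
    idx = np.clip(np.round(points).astype(int), 0, np.array(volume.shape) - 1)
    profile = volume[idx[:, 0], idx[:, 1], idx[:, 2]]
    return radii[int(np.argmax(profile))]

# Toy vacuole: a bright spherical membrane shell of radius 10 around the center
z, y, x = np.mgrid[:40, :40, :40]
r = np.sqrt((z - 20.0) ** 2 + (y - 20.0) ** 2 + (x - 20.0) ** 2)
shell = np.exp(-((r - 10.0) ** 2) / 2.0)   # membrane signal peaks at r = 10

est = radius_along_ray(shell, (20, 20, 20), (1, 0, 0))
```

Repeating this over many random directions yields the point cloud on the membrane that the method then fits with basis functions; no intensity threshold enters at any point.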
Using ecosystem services to represent the environment in hydro-economic models
NASA Astrophysics Data System (ADS)
Momblanch, Andrea; Connor, Jeffery D.; Crossman, Neville D.; Paredes-Arquiola, Javier; Andreu, Joaquín
2016-07-01
Demand for water is expected to grow in line with global human population growth, but opportunities to augment supply are limited in many places due to resource limits and expected impacts of climate change. Hydro-economic models are often used to evaluate water resources management options, commonly with a goal of understanding how to maximise water use value and reduce conflicts among competing uses. The environment is now an important factor in decision making, which has resulted in its inclusion in hydro-economic models. We reviewed 95 studies applying hydro-economic models, and documented how the environment is represented in them and the methods they use to value environmental costs and benefits. We also sought out key gaps and inconsistencies in the treatment of the environment in hydro-economic models. We found that representation of environmental values of water is patchy in most applications, and there should be systematic consideration of the scope of environmental values to include and how they should be valued. We argue that the ecosystem services framework offers a systematic approach to identify the full range of environmental costs and benefits. The main challenges to more holistic representation of the environment in hydro-economic models are the current limits to understanding of ecological functions which relate physical, ecological and economic values and critical environmental thresholds; and the treatment of uncertainty.
Setting Occupational Exposure Limits for Genotoxic Substances in the Pharmaceutical Industry.
Lovsin Barle, Ester; Winkler, Gian Christian; Glowienke, Susanne; Elhajouji, Azeddine; Nunic, Jana; Martus, Hans-Joerg
2016-05-01
In the pharmaceutical industry, genotoxic drug substances are developed for life-threatening indications such as cancer. Healthy employees handle these substances during research, development, and manufacturing; therefore, safe handling of genotoxic substances is essential. When an adequate preclinical dataset is available, a risk-based decision related to exposure controls for manufacturing is made following a determination of safe health-based limits, such as an occupational exposure limit (OEL). OELs are calculated for substances based on a threshold dose-response once a threshold is identified. In this review, we present examples of genotoxic mechanisms where thresholds can be demonstrated and OELs can be calculated, including a holistic toxicity assessment. We also propose a novel approach for inhalation Threshold of Toxicological Concern (TTC) limit for genotoxic substances in cases where the database is not adequate to determine a threshold. © The Author 2016. Published by Oxford University Press on behalf of the Society of Toxicology. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
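A generic threshold-based OEL calculation of the kind described can be sketched with textbook default worker assumptions (70 kg body weight, 10 m³ shift breathing volume, a composite uncertainty factor of 100); the NOAEL and all defaults here are illustrative, not values from the review:

```python
def oel_mg_per_m3(noael_mg_per_kg_day, body_weight_kg=70.0,
                  breathing_volume_m3=10.0, uncertainty_factor=100.0,
                  absorption_fraction=1.0):
    """Illustrative threshold-based OEL: scale a NOAEL (mg/kg/day) to an
    8-hour air concentration (mg/m3) using default worker assumptions and a
    composite uncertainty factor. Conservatively assumes 100% absorption."""
    return (noael_mg_per_kg_day * body_weight_kg) / (
        uncertainty_factor * breathing_volume_m3 * absorption_fraction)

oel = oel_mg_per_m3(5.0)   # hypothetical NOAEL of 5 mg/kg/day
```

In practice the uncertainty factor is built from individual factors (interspecies, intraspecies, severity, database quality), and this route is only available once a genotoxic threshold has actually been demonstrated; otherwise the TTC-style limit discussed above applies.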
Cooper, Justin; Marx, Bernd; Buhl, Johannes; Hombach, Volker
2002-09-01
This paper investigates the minimum distance for a human body in the near field of a cellular telephone base station antenna for which there is compliance with the IEEE or ICNIRP threshold values for radio frequency electromagnetic energy absorption in the human body. First, local maximum specific absorption rates (SARs), measured and averaged over volumes equivalent to 1 and to 10 g tissue within the trunk region of a physical, liquid filled shell phantom facing and irradiated by a typical GSM 900 base station antenna, were compared to corresponding calculated SAR values. The calculation used a homogeneous Visible Human body model in front of a simulated base station antenna of the same type. Both real and simulated base station antennas operated at 935 MHz. Antenna-body distances were between 1 and 65 cm. The agreement between measurements and calculations was excellent. This gave confidence in the subsequent calculated SAR values for the heterogeneous Visible Human model, for which each tissue was assigned the currently accepted values for permittivity and conductivity at 935 MHz. Calculated SAR values within the trunk of the body were found to be about double those for the homogeneous case. When the IEEE standard and the ICNIRP guidelines are both to be complied with, the local SAR averaged over 1 g tissue was found to be the determining parameter. Emitted power values from the antenna that produced the maximum SAR value over 1 g specified in the IEEE standard at the base station are less than those needed to reach the ICNIRP threshold specified for the local SAR averaged over 10 g. For the GSM base station antenna investigated here operating at 935 MHz with 40 W emitted power, the model indicates that the human body should not be closer to the antenna than 18 cm for controlled environment exposure, or about 95 cm for uncontrolled environment exposure. These safe distance limits are for SARs averaged over 1 g tissue. 
The corresponding safety distance limits under the ICNIRP guidelines for SAR taken over 10 g tissue are 5 cm for occupational exposure and about 75 cm for general-public exposure. Copyright 2002 Wiley-Liss, Inc.
Anderson, John D.
1951-01-01
The plasmodium of Physarum polycephalum reacts to direct current by migration toward the cathode. Cathodal migration was obtained upon a variety of substrata such as baked clay, paper, cellophane, and agar with a current density in the substratum of 1.0 µA/mm². Injury was produced by current densities of 8.0 to 12.0 µA/mm². The negative galvanotactic response was not due to electrode products. Attempts to demonstrate that the response was due to gradients or orientation in the substratum, pH changes in the mold, cataphoresis, electroosmosis, or endosmosis were not successful. The addition of salts (CaCl2, LiCl, NaCl, Na2SO4, NaHCO3, KCl, MgSO4, sodium citrate, and sea water) to agar indicated that change of cations had more effect than anions upon galvanotaxis and that the effect was upon threshold values. The K ion (0.01 M KCl) increased the lower threshold value to 8.0 µA/mm² and the upper threshold value to 32.0 µA/mm², whereas the Li ion (0.01 M LiCl) increased the lower threshold to only 4.0 µA/mm² and the upper threshold to only 16.0 µA/mm². The passage of electric current produced no increase in the rate of cathodal migration; neither was there a decrease until injurious current densities were reached. With increase of subthreshold current densities there was a progressive decrease in rate of migration toward the anode until complete anodal inhibition occurred. There was orientation at right angles to the electrodes in alternating current (60 cycle) with a current density of 4.0 µA/mm², and in direct current of 5.0 µA/mm² when the polarity was reversed every minute. It is concluded that the negative galvanotactic response of P. polycephalum is due to inhibition of migration on the anodal side of the plasmodium and that this inhibition results in the limitation of the normal migration of the mold to a cathodal direction. The mechanism of the anodal inhibition has not been elucidated. PMID:14873916
Seymour, Erlene K; Schiffer, Charles A; de Souza, Jonas A
2017-12-01
The ASCO Value Framework calculates the value of cancer therapies. Given costly novel therapeutics for chronic lymphocytic leukemia, we used the framework to compare net health benefit (NHB) and cost within Medicare of all regimens listed in the National Comprehensive Cancer Network (NCCN) guidelines. The current NCCN guidelines for chronic lymphocytic leukemia were reviewed. All referenced studies were screened, and only randomized controlled prospective trials were included. The revised ASCO Value Framework was used to calculate NHB. Medicare drug pricing was used to calculate the cost of therapies. Forty-nine studies were screened. The following observations were made: only 10 studies (20%) could be evaluated; when regimens were studied against the same control arm, their ranking by NHB score was consistent with their order of preference in the guidelines; NHB scores varied depending on which variables were used, and there were no clinically validated thresholds for low or high values; treatment-related deaths were not weighted in the toxicity scores; and six of the 10 studies used less potent control arms, ranked as the least-preferred NCCN-recommended regimens. The ASCO Value Framework is an important initial step toward quantifying the value of therapies. Essential limitations include the lack of clinically relevant validated thresholds for NHB scores and the lack of incorporation of grade 5 toxicities/treatment-related mortality into its methodology. To optimize its application for clinical practice, we urge investigators/sponsors to incorporate and report the variables required to calculate the NHB of regimens and encourage trials with stronger comparator arms to properly quantify the relative value of therapies.
Thermal detection thresholds in 5-year-old preterm born children; IQ does matter.
de Graaf, Joke; Valkenburg, Abraham J; Tibboel, Dick; van Dijk, Monique
2012-07-01
Experiencing pain at newborn age may have consequences for one's somatosensory perception later in life. Children's perception of cold and warm stimuli may be determined with the Thermal Sensory Analyzer (TSA) device by two different methods. This pilot study in 5-year-old children born preterm aimed at establishing whether the TSA method of limits, which is dependent on reaction time, and the method of levels, which is independent of reaction time, would yield different cold and warm detection thresholds. The second aim was to establish possible associations between intellectual ability and the detection thresholds obtained with either method. A convenience sample was drawn from the participants in an ongoing 5-year follow-up study of a randomized controlled trial on effects of morphine during mechanical ventilation. Thresholds were assessed using both methods and statistically compared. Possible associations between the child's intelligence quotient (IQ) and threshold levels were analyzed. The method of levels yielded more sensitive thresholds than did the method of limits, i.e. mean (SD) cold detection thresholds: 30.3 (1.4) versus 28.4 (1.7) (Cohen's d=1.2, P=0.001) and warm detection thresholds: 33.9 (1.9) versus 35.6 (2.1) (Cohen's d=0.8, P=0.04). IQ was statistically significantly associated only with the detection thresholds obtained with the method of limits (cold: r=0.64, warm: r=-0.52). The TSA method of levels is to be preferred over the method of limits in 5-year-old preterm born children, as it establishes more sensitive detection thresholds and is independent of IQ. Copyright © 2011 Elsevier Ltd. All rights reserved.
Defining operating rules for mitigation of drought effects on water supply systems
NASA Astrophysics Data System (ADS)
Rossi, G.; Caporali, E.; Garrote, L.; Federici, G. V.
2012-04-01
Reservoirs play a pivotal role in water supply system regulation and management, especially during drought periods. Optimization of reservoir releases under drought mitigation rules is particularly needed. The hydrologic state of the system is evaluated by defining threshold values, expressed in probabilistic terms. Risk deficit curves are used to reduce the ensemble of possible rules for simulation. Threshold values can be linked to specific actions in an operational context at different levels of severity, i.e. normal, pre-alert, alert and emergency scenarios. A simplified model of the water resources system is built to evaluate the threshold values and the management rules. The threshold values are defined considering the probability of satisfying a given fraction of the demand in a certain time horizon, and are validated with a long-term simulation that takes into account the characteristics of the evaluated system. The threshold levels determine curves that define reservoir releases as a function of existing storage volume. A demand reduction is related to each threshold level. The rules to manage the system in drought conditions, the threshold levels and the reductions are optimized using long-term simulations with different hypothesized states of the system. Synthetic sequences of flows with the same statistical properties as the historical ones are produced to evaluate the system behaviour. The performance of different reduction values and different threshold curves is evaluated using different objective functions and performance indices. The methodology is applied to the Firenze-Prato-Pistoia urban area in central Tuscany, Italy. The demand centres considered are Firenze and Bagno a Ripoli, which according to the 2001 ISTAT census have a total of 395,000 inhabitants.
Sato, Atsushi; Shimizu, Yusaku; Koyama, Junichi; Hongo, Kazuhiro
2017-06-01
Tissue plasminogen activator (tPA) is effective for the treatment of acute brain ischemia, but may trigger fatal brain edema or hemorrhage if the brain ischemia results in a large infarct. Herein, we attempted to predict the extent of infarcts by determining the optimal threshold of apparent diffusion coefficient (ADC) values on diffusion-weighted imaging (DWI) that predictively distinguishes between infarct and reversible areas, and by reconstructing color-coded images based on this threshold. The study subjects consisted of 36 patients with acute brain ischemia in whom MRA had confirmed reopening of the occluded arteries in a short time (mean: 99 min) after tPA treatment. We measured the ADC values in several small regions of interest over the white matter within high-intensity areas on the initial DWI; then, by comparing the findings to the follow-up images, we obtained the optimal threshold of ADC values using receiver-operating characteristic analysis. The threshold obtained (583×10⁻⁶ mm²/s) was lower than those previously reported; this threshold could distinguish between infarct and reversible areas with considerable accuracy (sensitivity: 0.87, specificity: 0.94). The threshold obtained and the reconstructed images were predictive of the final radiological result of tPA treatment, and this threshold may be helpful in determining the appropriate management of patients with acute brain ischemia. Copyright © 2017 Elsevier Masson SAS. All rights reserved.
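The ROC step described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the ADC values are invented, and the use of Youden's J statistic (sensitivity + specificity − 1) to pick the cut-off is an assumed criterion.

```python
# Illustrative ROC-based threshold selection: infarcted tissue shows LOW
# ADC, so a candidate threshold t classifies a voxel as infarct if ADC <= t.
# We scan every observed value and keep the t maximizing Youden's J.

def optimal_threshold(infarct_adc, reversible_adc):
    """Return the ADC cut-off best separating infarct (low ADC) from
    reversible tissue, with its sensitivity and specificity."""
    candidates = sorted(set(infarct_adc) | set(reversible_adc))
    best = None
    for t in candidates:
        sens = sum(a <= t for a in infarct_adc) / len(infarct_adc)
        spec = sum(a > t for a in reversible_adc) / len(reversible_adc)
        j = sens + spec - 1.0
        if best is None or j > best[0]:
            best = (j, t, sens, spec)
    _, t, sens, spec = best
    return t, sens, spec

# Hypothetical ADC values, in units of 1e-6 mm^2/s
infarct = [450, 500, 520, 560, 580, 610]
reversible = [590, 640, 660, 700, 720, 760]
t, sens, spec = optimal_threshold(infarct, reversible)
print(t, round(sens, 2), round(spec, 2))
```

Scanning only the observed values is sufficient because the ROC curve changes only at observed ADC values.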
NASA Astrophysics Data System (ADS)
Miao, Qinghua; Yang, Dawen; Yang, Hanbo; Li, Zhe
2016-10-01
Flash flooding is one of the most common natural hazards in China, particularly in mountainous areas, and usually causes heavy damage and casualties. However, the forecasting of flash flooding in mountainous regions remains challenging because of the short response time and limited monitoring capacity. This paper aims to establish a strategy for flash flood warnings in mountainous ungauged catchments across humid, semi-humid and semi-arid regions of China. First, we implement a geomorphology-based hydrological model (GBHM) in four mountainous catchments with drainage areas that range from 493 to 1601 km². The results show that the GBHM can simulate flash floods appropriately in these four study catchments. We propose a method to determine the rainfall threshold for flood warning by using frequency analysis and binary classification based on long-term GBHM simulations that are forced by historical rainfall data, creating a practical and straightforward approach for flash flood forecasting in ungauged mountainous catchments with drainage areas from tens to hundreds of square kilometers. The results show that the rainfall threshold value decreases significantly with increasing antecedent soil moisture in humid regions, while this value decreases only slightly with increasing soil moisture in semi-humid and semi-arid regions. We also find that accumulative rainfall over a certain time span (or rainfall over a long time span) is an appropriate threshold for flash flood warnings in humid regions because runoff there is dominated by saturation excess. However, the rainfall intensity (or rainfall over a short time span) is more suitable in semi-humid and semi-arid regions because infiltration excess dominates the runoff in these regions. We conduct a comprehensive evaluation of the rainfall threshold and find that the proposed method produces reasonably accurate flash flood warnings in the study catchments.
An evaluation of the performance at uncalibrated interior points in the four gauged catchments provides results that are indicative of the expected performance at ungauged locations. We also find that insufficient historical data lengths (13 years with a 5-year flood return period in this study) may introduce uncertainty in the estimation of the flood/rainfall threshold because of the small number of flood events that are used in binary classification. A data sample that contains enough flood events (10 events suggested in the present study) that exceed the threshold value is necessary to obtain acceptable results from binary classification.
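The binary-classification step can be sketched as below. This is a minimal illustration under assumptions: each historical event pairs an accumulated-rainfall value with a flood/no-flood outcome (in the paper these come from long-term GBHM simulations), the skill score used here is the critical success index, and all numbers are invented.

```python
# Sketch: pick the rainfall warning threshold from a sample of historical
# events by maximizing CSI = hits / (hits + misses + false alarms).

def rainfall_threshold(events):
    """events: list of (accumulated_rainfall_mm, flood_occurred) pairs.
    Returns the best threshold and its CSI."""
    best_t, best_csi = None, -1.0
    for t in sorted({r for r, _ in events}):
        hits = sum(1 for r, flood in events if r >= t and flood)
        misses = sum(1 for r, flood in events if r < t and flood)
        false_alarms = sum(1 for r, flood in events if r >= t and not flood)
        denom = hits + misses + false_alarms
        csi = hits / denom if denom else 0.0
        if csi > best_csi:
            best_t, best_csi = t, csi
    return best_t, best_csi

# Invented event sample: (rainfall in mm, did a flood occur?)
events = [(20, False), (35, False), (40, True), (55, True),
          (60, False), (75, True), (90, True), (110, True)]
print(rainfall_threshold(events))
```

With few flood events the chosen threshold is unstable, which matches the paper's caution that roughly ten threshold-exceeding events are needed for an acceptable classification.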
Novel threshold pressure sensors based on nonlinear dynamics of MEMS resonators
NASA Astrophysics Data System (ADS)
Hasan, Mohammad H.; Alsaleem, Fadi M.; Ouakad, Hassen M.
2018-06-01
Triggering an alarm in a car for low air-pressure in the tire or tripping an HVAC compressor if the refrigerant pressure is lower than a threshold value are examples for applications where measuring the amount of pressure is not as important as determining if the pressure has exceeded a threshold value for an action to occur. Unfortunately, current technology still relies on analog pressure sensors to perform this functionality by adding a complex interface (extra circuitry, controllers, and/or decision units). In this paper, we demonstrate two new smart tunable-threshold pressure switch concepts that can reduce the complexity of a threshold pressure sensor. The first concept is based on the nonlinear subharmonic resonance of a straight double cantilever microbeam with a proof mass and the other concept is based on the snap-through bi-stability of a clamped-clamped MEMS shallow arch. In both designs, the sensor operation concept is simple. Any actuation performed at a certain pressure lower than a threshold value will activate a nonlinear dynamic behavior (subharmonic resonance or snap-through bi-stability) yielding a large output that would be interpreted as a logic value of ONE, or ON. Once the pressure exceeds the threshold value, the nonlinear response ceases to exist, yielding a small output that would be interpreted as a logic value of ZERO, or OFF. A lumped, single degree of freedom model for the double cantilever beam, that is validated using experimental data, and a continuous beam model for the arch beam, are used to simulate the operation range of the proposed sensors by identifying the relationship between the excitation signal and the critical cut-off pressure.
Djulbegovic, Benjamin; van den Ende, Jef; Hamm, Robert M; Mayrhofer, Thomas; Hozo, Iztok; Pauker, Stephen G
2015-05-01
The threshold model represents an important advance in the field of medical decision-making. It is a linchpin between evidence (which exists on a continuum of credibility) and decision-making (which is a categorical exercise - we decide to act or not to act). The threshold concept is closely related to the question of rational decision-making. When should the physician act, that is, order a diagnostic test or prescribe treatment? The threshold model embodies the decision-theoretic rationality that says the most rational decision is to prescribe treatment when the expected treatment benefit outweighs its expected harms. However, the well-documented large variation in the way physicians order diagnostic tests or decide to administer treatments is consistent with the notion that physicians' individual action thresholds vary. We present a narrative review summarizing the existing literature on physicians' use of a threshold strategy for decision-making. We found that the observed variation in decision action thresholds is partially due to the way people integrate benefits and harms. That is, explanation of variation in clinical practice can be reduced to a consideration of thresholds. Limited evidence suggests that non-expected utility threshold (non-EUT) models, such as regret-based and dual-processing models, may explain current medical practice better. However, inclusion of costs and recognition of risk attitudes toward uncertain treatment effects and comorbidities may improve the explanatory and predictive value of the EUT-based threshold models. The decision of when to act is closely related to the question of rational choice. We conclude that the medical community has not yet fully defined criteria for rational clinical decision-making.
The traditional notion of rationality rooted in EUT may need to be supplemented by reflective rationality, which strives to integrate all aspects of medical practice - medical, humanistic and socio-economic - within a coherent reasoning system. © 2015 Stichting European Society for Clinical Investigation Journal Foundation.
NASA Astrophysics Data System (ADS)
Li, Hui-Jia; Cheng, Qing; Mao, He-Jin; Wang, Huanian; Chen, Junhua
2017-03-01
The study of community structure is a primary focus of network analysis and has attracted considerable attention. In this paper, we focus on two well-known functions, the Hamiltonian function H and the modularity density measure D, and aim to uncover the effective thresholds of their corresponding resolution parameter γ that avoid the resolution limit problem. Two widely used example networks are employed: the ring network of lumps and the ad hoc network. In these two networks, we use discrete convex analysis to study the interval of the resolution parameter of H and D that will not cause misidentification. By comparison, we find that in both examples, for the Hamiltonian function H, the larger the value of the resolution parameter γ, the less the network suffers from the resolution limit; while for the modularity density D, the network suffers less from the resolution limit as we decrease the value of γ. Our framework is mathematically strict and efficient and can be applied in many scientific fields.
Investigating the Effects of the Interaction Intensity in a Weak Measurement.
Piacentini, Fabrizio; Avella, Alessio; Gramegna, Marco; Lussana, Rudi; Villa, Federica; Tosi, Alberto; Brida, Giorgio; Degiovanni, Ivo Pietro; Genovese, Marco
2018-05-03
Measurements are crucial in quantum mechanics, for fundamental research as well as for applied fields like quantum metrology, quantum-enhanced measurements and other quantum technologies. In recent years, weak-interaction-based protocols like Weak Measurements and Protective Measurements have been experimentally realized, showing peculiar features leading to surprising advantages in several different applications. In this work we analyze the validity range of such measurement protocols, that is, how the interaction strength affects the weak value extraction, by measuring different polarization weak values on heralded single photons. We show that, even in the weak interaction regime, the coupling intensity limits the range of achievable weak values, setting a threshold on the signal amplification effect exploited in many weak-measurement-based experiments.
Experimental evidence of hot carriers solar cell operation in multi-quantum wells heterostructures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rodière, Jean; Lombez, Laurent, E-mail: laurent.lombez@chimie-paristech.fr; Le Corre, Alain
We investigated a semiconductor heterostructure based on InGaAsP multi-quantum wells (QWs) using optical characterization and demonstrate its potential to work as a hot carrier cell absorber. By analyzing photoluminescence spectra, the quasi-Fermi level splitting Δμ and the carrier temperature are quantitatively measured as a function of the excitation power. Moreover, both thermodynamic values are measured at the QW and barrier emission energies. High values of Δμ are found for both transitions, and high carrier temperature values in the QWs. Remarkably, the quasi-Fermi level splitting measured at the barrier energy exceeds the absorption threshold of the QWs. This indicates a working condition beyond the classical Shockley-Queisser limit.
Kawase, Jun; Asakura, Hiroshi; Kurosaki, Morito; Oshiro, Hitoshi; Etoh, Yoshiki; Ikeda, Tetsuya; Watahiki, Masanori; Kameyama, Mitsuhiro; Hayashi, Fumi; Kawakami, Yuta; Murakami, Yoshiko; Tsunomori, Yoshie
2018-01-23
We previously developed a multiplex real-time PCR assay (Rapid Foodborne Bacterial Screening 24 ver.5 [RFBS24 ver.5]) for simultaneous detection of 24 foodborne bacterial targets. Here, to overcome the discrepancy between the results of RFBS24 ver.5 and bacterial culture (BC) methods, we analyzed 246 human clinical samples from 49 gastroenteritis outbreaks using RFBS24 ver.5 and evaluated the correlation between the cycle threshold (CT) value of RFBS24 ver.5 and the BC results. The results showed that RFBS24 ver.5 was more sensitive than BC for Campylobacter jejuni and Escherichia coli harboring astA or eae, with positive predictive values (PPV) of 45.5-87.0% and kappa coefficients (KC) of 0.60-0.92, respectively. The CTs were significantly different between BC-positive and BC-negative samples (p < 0.01). All RFBS24 ver.5-positive samples were BC-positive below the lower 95% or 99% confidence interval (CI) limit for the CT of the BC-negative samples. We set the 95% or 99% CI lower limit as the determination CT (d-CT) to discriminate assured BC-positive results (d-CTs: 27.42-30.86), and the PPVs (94.7%-100.0%) and KCs (0.89-0.95) of the 3 targets subsequently increased. Together, we conclude that a d-CT-based approach would be a valuable tool for rapid and accurate diagnoses using the RFBS24 ver.5 system.
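The d-CT idea can be sketched numerically. This is a hedged illustration only: the paper does not state how its CI lower limit is computed, so a normal approximation (mean − z·SD) is assumed here, and the CT values are invented.

```python
# Sketch of a determination-CT (d-CT) computation: take the lower limit
# of an approximate 95% interval over the CT values of culture-negative
# samples. A sample whose CT falls BELOW this limit (i.e. more template)
# is then called an assured culture-positive. Normal approximation and
# CT values are assumptions for illustration.
import statistics

def determination_ct(negative_cts, z=1.96):
    mean = statistics.mean(negative_cts)
    sd = statistics.stdev(negative_cts)  # sample standard deviation
    return mean - z * sd                 # lower limit of the ~95% interval

bc_negative_cts = [33.1, 34.5, 35.2, 33.8, 36.0, 34.9]
d_ct = determination_ct(bc_negative_cts)
print(round(d_ct, 2))
```

A sample with, say, CT = 29 would fall below this d-CT and be treated as an assured positive, while CTs near the negative-sample range would not.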
Efficiency degradation due to tracking errors for point focusing solar collectors
NASA Technical Reports Server (NTRS)
Hughes, R. O.
1978-01-01
An important parameter in the design of point-focusing solar collectors is the intercept factor, a measure of efficiency and of the energy available for use in the receiver. Using statistical methods, an expression for the expected value of the intercept factor is derived for various configurations and control law implementations. The analysis assumes that a radially symmetric flux distribution (not necessarily Gaussian) is generated at the focal plane due to the sun's finite image and various reflector errors. The time-varying tracking errors are assumed to be uniformly distributed within the threshold limits, which permits calculation of the expected value.
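The expected-value calculation can be illustrated with a Monte Carlo sketch. The paper derives closed-form expressions; here, as an assumed stand-in, the focal-plane flux is taken as a radially symmetric Gaussian, tracking error is drawn uniformly within the threshold limits, and all parameter values are invented.

```python
# Monte Carlo estimate of the expected intercept factor: the fraction of
# focal-plane flux landing inside the receiver aperture, averaged over
# tracking errors uniform within +/- the deadband threshold.
import math
import random

def expected_intercept_factor(threshold, sigma, receiver_radius,
                              n_err=100, n_flux=1000, seed=1):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_err):
        # pointing error, uniform within the threshold limits (per axis)
        ex = rng.uniform(-threshold, threshold)
        ey = rng.uniform(-threshold, threshold)
        hits = 0
        for _ in range(n_flux):
            # sample the Gaussian flux spot, displaced by the pointing error
            x = rng.gauss(ex, sigma)
            y = rng.gauss(ey, sigma)
            if math.hypot(x, y) <= receiver_radius:
                hits += 1
        total += hits / n_flux
    return total / n_err

# Invented geometry (arbitrary length units)
print(round(expected_intercept_factor(0.05, 0.02, 0.06), 3))
```

As expected, widening the deadband threshold lowers the expected intercept factor, which is the efficiency degradation the paper quantifies.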
Neghab, Masoud; Hosseinzadeh, Kiamars; Hassanzadeh, Jafar
2015-01-01
Background Unleaded petrol contains significant amounts of monocyclic aromatic hydrocarbons such as benzene, toluene, and xylenes (BTX). Toxic responses following occupational exposure to unleaded petrol have been evaluated only in limited studies. The main purpose of this study was to ascertain whether (or not) exposure to unleaded petrol, under normal working conditions, is associated with any hepatotoxic or nephrotoxic response. Methods This was a cross-sectional study in which 200 employees of Shiraz petrol stations with current exposure to unleaded petrol, as well as 200 unexposed employees, were investigated. Atmospheric concentrations of BTX were measured using standard methods. Additionally, urine and fasting blood samples were taken from individuals for urinalysis and routine biochemical tests of kidney and liver function. Results The geometric means of airborne concentrations of BTX were found to be 0.8 mg/m³, 1.4 mg/m³, and 2.8 mg/m³, respectively. Additionally, means of direct bilirubin, alanine aminotransferase, aspartate aminotransferase, blood urea and plasma creatinine were significantly higher in exposed individuals than in unexposed employees. Conversely, serum albumin, total protein, and serum concentrations of calcium and sodium were significantly lower in petrol station workers than in their unexposed counterparts. Conclusion The average exposure of petrol station workers to BTX did not exceed the current threshold limit values (TLVs) for these chemicals. However, evidence of subtle, subclinical and prepathologic early liver and kidney dysfunction was evident in exposed individuals. PMID:26929843
Gillespie-Gallery, Hanna; Konstantakopoulou, Evgenia; Harlow, Jonathan A; Barbur, John L
2013-09-09
It is challenging to separate the effects of normal aging of the retina and visual pathways from optical factors, decreased retinal illuminance, and early-stage disease. This study determined limits to describe the effect of light level on normal, age-related changes in monocular and binocular functional contrast sensitivity. We recruited 95 participants aged 20 to 85 years. Contrast thresholds for correct orientation discrimination of the gap in a Landolt C optotype were measured using a 4-alternative forced-choice (4AFC) procedure at screen luminances from 34 to 0.12 cd/m² at the fovea and parafovea (0° and ±4°). Pupil size was measured continuously. The Health of the Retina index (HRindex) was computed to capture the loss of contrast sensitivity with decreasing light level. Participants were excluded if they exhibited performance outside the normal limits of interocular differences or HRindex values, or signs of ocular disease. Parafoveal contrast thresholds showed a steeper decline and higher correlation with age than foveal thresholds. Of participants with clinical signs of ocular disease, 83% had HRindex values outside the normal limits. Binocular summation of contrast signals declined with age, independent of interocular differences. The HRindex worsens more rapidly with age at the parafovea, consistent with histologic findings of rod loss and its link to age-related degenerative disease of the retina. The HRindex and interocular differences could be used to screen for and separate the earliest stages of subclinical disease from changes caused by normal aging.
Jeong, Jee Yeon; Park, Jong Su; Kim, Pan Gyi
2016-06-01
Shipbuilding involves intensive welding activities, and welders are exposed to a variety of metal fumes, including manganese, that may be associated with neurological impairments. This study aimed to characterize total and size-fractionated manganese exposure resulting from welding operations in shipbuilding work areas. In this study, we characterized manganese-containing particulates with an emphasis on total mass (n = 86, closed-face 37-mm cassette samplers) and particle size-selective mass concentrations (n = 86, 8-stage cascade impactor samplers), particle size distributions, and a comparison of exposure levels determined using personal cassette and impactor samplers. Our results suggest that 67.4% of all samples were above the current American Conference of Governmental Industrial Hygienists manganese threshold limit value of 100 μg/m³ as inhalable mass. Furthermore, most of the manganese-containing particles generated in the welding process were of respirable size, and 90.7% of all samples exceeded the American Conference of Governmental Industrial Hygienists threshold limit value of 20 μg/m³ for respirable manganese. The concentrations measured with the two sampler types (cassette: total mass; impactor: inhalable mass) were significantly correlated (r = 0.964, p < 0.001), but the total concentration obtained using cassette samplers was lower than the inhalable concentration obtained using impactor samplers.
[Occupational exposure to hand-transmitted vibration in Poland].
Harazin, Barbara; Zieliński, Grzegorz
2004-01-01
Occupational exposure to hand-transmitted vibration may cause disorders of the upper extremities known as hand-arm vibration syndrome. It is therefore essential to know the sources of vibration, the occupational groups exposed, and the number of exposed workers. The aim of the study was to estimate the number of men and women exposed to hand-transmitted vibration in Poland. Completed questionnaires were obtained from 265 (80%) sanitary inspection stations. They included questions on: the names of workplaces; the names and types of vibration sources; workers' gender; the number of workers exposed to vibration, with the extent of exposure measured against three threshold limit value (TLV) bands (<0.5 TLV; 0.5-1 TLV; >1 TLV); and the number of workers exposed to hand-transmitted vibration not documented by measurements in particular workplaces, indicating one of three possible kinds of exposure (occasional, periodical and constant). The questionnaire data were based on measurements and analyses performed in 1997-2000. The results of the study showed that vibrating tools used by grinders, fitters, locksmiths, rammers, road workers, carpenters and smiths proved to be the most frequent sources of hand-transmitted vibration. It was revealed that 78.6% of the operators of these tools were exposed to vibration exceeding 1 TLV. The study also indicated that 17,000 workers, including 1,700 women, were exposed to vibration exceeding the threshold limit values.
Vehicle anti-rollover control strategy based on load transferring rate
NASA Astrophysics Data System (ADS)
Dai, W. T.; Du, H. Q.; Zhang, L.
2018-03-01
When a vehicle is driven on a low-adhesion road or through a high-speed sharp turn, it is prone to lateral stability problems such as sideslip or rollover. In order to improve vehicle anti-rollover stability under these limit conditions, an SUV model with a high mass center was built in the CarSim software, and a rollover stability controller was designed using a static threshold on the lateral load transfer rate (LTR). The simulations show that the vehicle's anti-rollover stability under limit conditions is improved with this controller.
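The static-threshold trigger can be sketched as follows. This is illustrative only: LTR is written in its standard form (the normalized difference between right- and left-side vertical tire loads), and the 0.8 threshold and all load values are assumed examples, not the paper's tuning.

```python
# Sketch of a static-threshold LTR rollover trigger.

def load_transfer_rate(fz_left, fz_right):
    """LTR = (Fz_right - Fz_left) / (Fz_right + Fz_left); ranges from -1
    (all load on the left wheels) to +1 (all load on the right wheels)."""
    return (fz_right - fz_left) / (fz_right + fz_left)

def rollover_warning(fz_left, fz_right, threshold=0.8):
    """Flag impending rollover when |LTR| exceeds the static threshold.
    The 0.8 value is an assumed example."""
    return abs(load_transfer_rate(fz_left, fz_right)) > threshold

print(rollover_warning(9500.0, 500.0))   # heavy lateral load transfer
print(rollover_warning(5200.0, 4800.0))  # near-even wheel loads
```

In an anti-rollover controller, crossing the threshold would typically trigger differential braking or torque reduction rather than just a warning flag.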
Bettembourg, Charles; Diot, Christian; Dameron, Olivier
2015-01-01
Background The analysis of gene annotations referencing back to Gene Ontology plays an important role in the interpretation of high-throughput experiments results. This analysis typically involves semantic similarity and particularity measures that quantify the importance of the Gene Ontology annotations. However, there is currently no sound method supporting the interpretation of the similarity and particularity values in order to determine whether two genes are similar or whether one gene has some significant particular function. Interpretation is frequently based either on an implicit threshold, or an arbitrary one (typically 0.5). Here we investigate a method for determining thresholds supporting the interpretation of the results of a semantic comparison. Results We propose a method for determining the optimal similarity threshold by minimizing the proportions of false-positive and false-negative similarity matches. We compared the distributions of the similarity values of pairs of similar genes and pairs of non-similar genes. These comparisons were performed separately for all three branches of the Gene Ontology. In all situations, we found overlap between the similar and the non-similar distributions, indicating that some similar genes had a similarity value lower than the similarity value of some non-similar genes. We then extend this method to the semantic particularity measure and to a similarity measure applied to the ChEBI ontology. Thresholds were evaluated over the whole HomoloGene database. For each group of homologous genes, we computed all the similarity and particularity values between pairs of genes. Finally, we focused on the PPAR multigene family to show that the similarity and particularity patterns obtained with our thresholds were better at discriminating orthologs and paralogs than those obtained using default thresholds. Conclusion We developed a method for determining optimal semantic similarity and particularity thresholds. 
We applied this method on the GO and ChEBI ontologies. Qualitative analysis using the thresholds on the PPAR multigene family yielded biologically-relevant patterns. PMID:26230274
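The threshold-selection idea above can be sketched as a simple scan. This is not the authors' implementation: the similarity values are invented, and the criterion shown (minimizing the summed proportions of false positives and false negatives) is the stated principle reduced to its simplest form.

```python
# Sketch: choose the similarity cut-off minimizing the summed proportions
# of false positives (non-similar pairs scoring >= t) and false negatives
# (similar pairs scoring < t). Overlapping distributions mean neither
# error rate can reach zero, as observed in the abstract.

def optimal_similarity_threshold(similar, non_similar):
    candidates = sorted(set(similar) | set(non_similar))
    best_t, best_err = None, float("inf")
    for t in candidates:
        fn = sum(s < t for s in similar) / len(similar)
        fp = sum(s >= t for s in non_similar) / len(non_similar)
        if fp + fn < best_err:
            best_t, best_err = t, fp + fn
    return best_t

# Invented semantic-similarity scores for gene pairs
similar = [0.55, 0.62, 0.70, 0.74, 0.81, 0.90]
non_similar = [0.20, 0.33, 0.41, 0.48, 0.58, 0.66]
print(optimal_similarity_threshold(similar, non_similar))
```

Note that the optimum here differs from the arbitrary 0.5 default criticized in the abstract, which is exactly the point of deriving a data-driven threshold.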
The mutation-drift balance in spatially structured populations.
Schneider, David M; Martins, Ayana B; de Aguiar, Marcus A M
2016-08-07
In finite populations the action of neutral mutations is balanced by genetic drift, leading to a stationary distribution of alleles that displays a transition between two different behaviors. For small mutation rates most individuals will carry the same allele at equilibrium, whereas for high mutation rates the alleles will be randomly distributed, with frequencies close to one half for a biallelic gene. For well-mixed haploid populations the mutation threshold is μc=1/2N, where N is the population size. In this paper we study how spatial structure affects this mutation threshold. Specifically, we study the stationary allele distribution for populations placed on regular networks where connected nodes represent potential mating partners. We show that the mutation threshold is sensitive to spatial structure only if the number of potential mates is very small. In this limit, the mutation threshold decreases substantially, increasing the diversity of the population at considerably lower mutation rates. Defining kc as the degree of the network for which the mutation threshold drops to half of its value in well-mixed populations, we show that kc grows slowly as a function of the population size, following a power law. Our calculations and simulations are based on the Moran model and on a mapping between the Moran model with mutations and the voter model with opinion makers. Copyright © 2016 Elsevier Ltd. All rights reserved.
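The well-mixed threshold quoted above, μc = 1/2N, is simple enough to illustrate directly. This is a toy sketch of that formula only; the network-degree correction kc studied in the paper is not modeled here, and the regime labels are an informal reading of the two stationary behaviors.

```python
# Critical mutation rate for a well-mixed haploid population of size N,
# mu_c = 1/(2N). Below mu_c the stationary population tends to be
# monomorphic; above it, allele frequencies approach one half for a
# biallelic gene.

def mutation_threshold(population_size):
    return 1.0 / (2.0 * population_size)

def regime(mu, population_size):
    """Informal classification of a mutation rate relative to mu_c."""
    if mu > mutation_threshold(population_size):
        return "polymorphic"
    return "monomorphic"

print(mutation_threshold(1000))
print(regime(1e-5, 1000))
print(regime(1e-3, 1000))
```

The inverse scaling with N is the key point: larger populations cross into the polymorphic regime at proportionally smaller mutation rates.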
Doctoral conceptual thresholds in cellular and molecular biology
NASA Astrophysics Data System (ADS)
Feldon, David F.; Rates, Christopher; Sun, Chongning
2017-12-01
In the biological sciences, very little is known about the mechanisms by which doctoral students acquire the skills they need to become independent scientists. In the postsecondary biology education literature, identification of specific skills and effective methods for helping students to acquire them are limited to undergraduate education. To establish a foundation from which to investigate the developmental trajectory of biologists' research skills, it is necessary to identify those skills which are integral to doctoral study and distinct from skills acquired earlier in students' educational pathways. In this context, the current study engages the framework of threshold concepts to identify candidate skills that are both obstacles and significant opportunities for developing proficiency in conducting research. Such threshold concepts are typically characterised as transformative, integrative, irreversible, and challenging. The results from interviews and focus groups with current and former doctoral students in cellular and molecular biology suggest two such threshold concepts relevant to their subfield: the first is an ability to effectively engage primary research literature from the biological sciences in a way that is critical without dismissing the value of its contributions. The second is the ability to conceptualise appropriate control conditions necessary to design and interpret the results of experiments in an efficient and effective manner for research in the biological sciences as a discipline. Implications for prioritising and sequencing graduate training experiences are discussed on the basis of the identified thresholds.
2012-01-01
values of EAFP, EAFN, and EAF can be compared with three user-defined threshold values, TAFP, TAFN, and TAF. These threshold values determine the update...values were chosen as TAFP = E0AFP + 0.02, TAFN = E0AFN + 0.02, and TAF = E0AF + 0.02). We called the value of 0.02 the margin of error tolerance. In
The effects of prolonged weightlessness and reduced gravity environments on human survival.
Taylor, R L
1993-03-01
The manned exploration of the solar system and the surfaces of some of the smaller planets and larger satellites requires that we be able to keep the adverse human physiological response to long-term exposure to near-zero and greatly reduced gravity environments within acceptable limits consistent with metabolic function. This paper examines the physiological changes associated with microgravity conditions, with particular reference to the weightless demineralization of bone (WDB). It is suggested that many of these changes are the result of physical/mechanical processes and are not primarily a medical problem. There are thus two immediately obvious and workable, if relatively costly, solutions to the problem of weightlessness: the provision of a near 1 g field during prolonged space flights, and/or the development of rapid-transit spacecraft capable of significant acceleration and short flight times. Although these developments could remove or greatly ameliorate the effects of weightlessness during long-distance space flights, there remains a problem relating to the long-term colonization of the surfaces of Mars, the Moon, and other small solar system bodies. It is not yet known whether or not there is a critical threshold value of 'g' below which viable human physiological function cannot be sustained. If such a threshold exists, permanent colonization may only be possible if the threshold value of 'g' is less than that at the surface of the planet on which we wish to settle.
Uncertainties in extreme surge level estimates from observational records.
van den Brink, H W; Können, G P; Opsteegh, J D
2005-06-15
Ensemble simulations with a total length of 7540 years are generated with a climate model, and coupled to a simple surge model to transform the wind field over the North Sea to the skew surge level at Delfzijl, The Netherlands. The 65 constructed surge records, each with a record length of 116 years, are analysed with the generalized extreme value (GEV) and the generalized Pareto distribution (GPD) to study both the model and sample uncertainty in surge level estimates with a return period of 10^4 years, as derived from 116-year records. The optimal choice of the threshold, needed for an unbiased GPD estimate from peak-over-threshold (POT) values, cannot be determined objectively from a 100-year dataset. This fact, in combination with the sensitivity of the GPD estimate to the threshold, and its tendency toward underestimation, leaves the application of the GEV distribution to storm-season maxima as the best approach. If the GPD analysis is applied, then the exceedance rate, lambda, chosen should not be larger than 4. The climate model hints at the existence of a second population of very intense storms. As the existence of such a second population can never be excluded from a 100-year record, the estimated 10^4-year wind speed from such records always has to be interpreted as a lower limit.
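The threshold sensitivity that the abstract above describes can be demonstrated with a minimal peak-over-threshold sketch: fit a GPD to exceedances over several candidate thresholds and compare the resulting return-level estimates. The data, thresholds, and return-level formula below are illustrative assumptions, not the paper's surge records or procedure.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(42)
# synthetic 116-year "surge" record: heavy-tailed event maxima (arbitrary units)
years = 116
surges = rng.gumbel(loc=1.0, scale=0.5, size=years * 10)

def gpd_return_level(data, threshold, n_years, return_period=1e4):
    """POT return-level estimate: x_T = u + sigma/xi * ((lam*T)^xi - 1)."""
    exceed = data[data > threshold] - threshold
    lam = len(exceed) / n_years              # exceedance rate per year
    xi, _, sigma = genpareto.fit(exceed, floc=0)
    m = lam * return_period
    if abs(xi) < 1e-9:                       # exponential-tail limit
        return threshold + sigma * np.log(m)
    return threshold + sigma / xi * (m**xi - 1)

# the estimate shifts with the chosen threshold -- the instability at issue
levels = {u: gpd_return_level(surges, u, years) for u in (2.0, 2.5, 3.0)}
```

Running this for a few thresholds typically yields noticeably different 10^4-year levels, which is the argument for preferring a GEV fit to block (storm-season) maxima.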
Orellana, Luis H.; Rodriguez-R, Luis M.; Konstantinidis, Konstantinos T.
2016-10-07
Functional annotation of metagenomic and metatranscriptomic data sets relies on similarity searches based on e-value thresholds resulting in an unknown number of false positive and negative matches. To overcome these limitations, we introduce ROCker, aimed at identifying position-specific, most-discriminant thresholds in sliding windows along the sequence of a target protein, accounting for non-discriminative domains shared by unrelated proteins. ROCker employs the receiver operating characteristic (ROC) curve to minimize the false discovery rate (FDR) and calculate the best thresholds based on how simulated shotgun metagenomic reads of known composition map onto well-curated reference protein sequences and thus, differs from HMM profiles and related methods. We showcase ROCker using ammonia monooxygenase (amoA) and nitrous oxide reductase (nosZ) genes, mediating the oxidation of ammonia and the reduction of the potent greenhouse gas, N2O, to inert N2, respectively. ROCker typically showed 60-fold lower FDR when compared to the common practice of using fixed e-values. Previously uncounted 'atypical' nosZ genes were found to be two times more abundant, on average, than their typical counterparts in most soil metagenomes, and the abundance of bacterial amoA was quantified against the highly related particulate methane monooxygenase (pmoA). Therefore, ROCker can reliably detect and quantify target genes in short-read metagenomes. PMID:28180325
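The position-specific thresholding idea behind ROCker can be sketched as follows: within each sliding window along the reference protein, pick the score cutoff that best separates reads from the target gene from reads of unrelated proteins. This is a simplified ROC-style illustration with synthetic reads and a Youden-type criterion, not the actual ROCker implementation.

```python
import numpy as np

def best_cutoff(scores, labels):
    """Score cutoff maximizing sensitivity minus false-positive rate."""
    order = np.argsort(scores)[::-1]
    s, y = scores[order], labels[order]
    tpr = np.cumsum(y) / max(y.sum(), 1)
    fpr = np.cumsum(~y) / max((~y).sum(), 1)
    return s[np.argmax(tpr - fpr)]

def window_thresholds(pos, scores, labels, length, window=60, step=30):
    """Position-specific cutoffs in sliding windows along the reference."""
    out = {}
    for start in range(0, length - window + 1, step):
        m = (pos >= start) & (pos < start + window)
        if m.sum() >= 10 and labels[m].any() and (~labels[m]).any():
            out[start] = best_cutoff(scores[m], labels[m])
    return out

rng = np.random.default_rng(1)
n = 2000
pos = rng.integers(0, 300, n)              # where each simulated read maps
labels = rng.random(n) < 0.5               # True: read is from the target gene
scores = np.where(labels, rng.normal(80, 10, n), rng.normal(50, 10, n))
cuts = window_thresholds(pos, scores, labels, length=300)
```

A single fixed e-value would apply one cutoff everywhere; the per-window cutoffs can tighten over non-discriminative domains, which is the source of the FDR reduction the abstract reports.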
Nadkarni, Tanvi N; Andreoli, Matthew J; Nair, Veena A; Yin, Peng; Young, Brittany M; Kundu, Bornali; Pankratz, Joshua; Radtke, Andrew; Holdsworth, Ryan; Kuo, John S; Field, Aaron S; Baskaya, Mustafa K; Moritz, Chad H; Meyerand, M Elizabeth; Prabhakaran, Vivek
2015-01-01
Functional magnetic resonance imaging (fMRI) is a non-invasive pre-surgical tool used to assess localization and lateralization of language function in brain tumor and vascular lesion patients in order to guide neurosurgeons as they devise a surgical approach to treat these lesions. We investigated the effect of varying the statistical thresholds as well as the type of language tasks on functional activation patterns and language lateralization. We hypothesized that language lateralization indices (LIs) would be threshold- and task-dependent. Imaging data were collected from brain tumor patients (n = 67, average age 48 years) and vascular lesion patients (n = 25, average age 43 years) who received pre-operative fMRI scanning. Both patient groups performed expressive (antonym and/or letter-word generation) and receptive (tumor patients performed text-reading; vascular lesion patients performed text-listening) language tasks. A control group (n = 25, average age 45 years) performed the letter-word generation task. Brain tumor patients showed left-lateralization during the antonym-word generation and text-reading tasks at high threshold values and bilateral activation during the letter-word generation task, irrespective of the threshold values. Vascular lesion patients showed left-lateralization during the antonym and letter-word generation, and text-listening tasks at high threshold values. Our results suggest that the type of task and the applied statistical threshold influence LI and that the threshold effects on LI may be task-specific. Thus identifying critical functional regions and computing LIs should be conducted on an individual subject basis, using a continuum of threshold values with different tasks to provide the most accurate information for surgical planning to minimize post-operative language deficits.
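The study's recommendation to compute lateralization indices over a continuum of thresholds can be illustrated with the standard voxel-count definition LI = (L - R) / (L + R). The toy t-value maps below are assumptions for demonstration, not patient data.

```python
import numpy as np

def lateralization_index(tmap_left, tmap_right, thresholds):
    """LI = (L - R) / (L + R) from suprathreshold voxel counts,
    evaluated over a continuum of statistical thresholds."""
    lis = {}
    for t in thresholds:
        L = int((tmap_left > t).sum())
        R = int((tmap_right > t).sum())
        lis[t] = (L - R) / (L + R) if (L + R) else 0.0
    return lis

rng = np.random.default_rng(7)
# toy hemisphere t-maps: left more strongly activated than right
left = rng.normal(2.0, 1.0, 5000)
right = rng.normal(1.0, 1.0, 5000)
lis = lateralization_index(left, right, thresholds=np.arange(1.0, 5.0, 0.5))
```

Note how LI itself changes with the threshold even for fixed activation maps, which is why a single-threshold LI can misclassify language dominance.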
Koyama, Kazuya; Mitsumoto, Takuya; Shiraishi, Takahiro; Tsuda, Keisuke; Nishiyama, Atsushi; Inoue, Kazumasa; Yoshikawa, Kyosan; Hatano, Kazuo; Kubota, Kazuo; Fukushi, Masahiro
2017-09-01
We aimed to determine the difference in tumor volume associated with the reconstruction model in positron-emission tomography (PET). To reduce the influence of the reconstruction model, we suggested a method to measure the tumor volume using the relative threshold method with a fixed threshold based on the peak standardized uptake value (SUVpeak). The efficacy of our method was verified using 18F-2-fluoro-2-deoxy-D-glucose PET/computed tomography images of 20 patients with lung cancer. The tumor volume was determined using the relative threshold method with a fixed threshold based on the SUVpeak. The PET data were reconstructed using the ordered-subset expectation maximization (OSEM) model, the OSEM + time-of-flight (TOF) model, and the OSEM + TOF + point-spread function (PSF) model. The volume differences associated with the reconstruction algorithm (%VD) were compared. For comparison, the tumor volume was measured using the relative threshold method based on the maximum SUV (SUVmax). For the OSEM and TOF models, the mean %VD values were -0.06 ± 8.07 and -2.04 ± 4.23% for the fixed 40% threshold according to the SUVmax and the SUVpeak, respectively. The effect of our method in this case seemed to be minor. For the OSEM and PSF models, the mean %VD values were -20.41 ± 14.47 and -13.87 ± 6.59% for the fixed 40% threshold according to the SUVmax and SUVpeak, respectively. Our new method enabled the measurement of tumor volume with a fixed threshold and reduced the influence of the changes in tumor volume associated with the reconstruction model.
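A fixed relative threshold on SUVpeak, as used above, can be sketched on a synthetic volume. Here SUVpeak is approximated as the mean SUV in a small cube around the hottest voxel (a stand-in for the usual ~1 cm^3 sphere), and the phantom, voxel size, and 40% fraction are illustrative assumptions, not the paper's protocol.

```python
import numpy as np

def tumor_volume(suv, voxel_ml, frac=0.40, peak_radius=1):
    """Segment voxels above frac * SUVpeak; return (volume in mL, SUVpeak)."""
    idx = np.unravel_index(np.argmax(suv), suv.shape)
    lo = [max(i - peak_radius, 0) for i in idx]
    hi = [i + peak_radius + 1 for i in idx]
    peak = suv[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]].mean()
    mask = suv >= frac * peak
    return mask.sum() * voxel_ml, peak

# toy "lesion": uniform background with a hot sphere, slightly peaked center
suv = np.full((40, 40, 40), 0.5)
zz, yy, xx = np.indices(suv.shape)
sphere = (zz - 20) ** 2 + (yy - 20) ** 2 + (xx - 20) ** 2 <= 6 ** 2
suv[sphere] = 8.0
suv[20, 20, 20] = 9.0
vol, peak = tumor_volume(suv, voxel_ml=0.064)  # 4 mm isotropic voxels
```

Because SUVpeak averages over a neighborhood, it is less sensitive than SUVmax to the point-spread sharpening introduced by PSF reconstruction, which is the motivation the abstract gives.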
DeForest, David K; Gilron, Guy; Armstrong, Sarah A; Robertson, Erin L
2012-01-01
A freshwater Se guideline was developed for consideration based on concentrations in fish eggs or ovaries, with a focus on Canadian species, following the Canadian Council of Ministers of the Environment protocol for developing guideline values. When sufficient toxicity data are available, the protocol recommends deriving guidelines as the 5th percentile of the species sensitivity distribution (SSD). When toxicity data are limited, the protocol recommends a lowest value approach, where the lowest toxicity threshold is divided by a safety factor (e.g., 10). On the basis of a comprehensive review of the current literature and an assessment of the data therein, there are sufficient egg and ovary Se data available for freshwater fish to develop an SSD. For most fish species, Se EC10 values (10% effect concentrations) could be derived, but for some species, only no-observed-effect concentrations and/or lowest-observed-effect concentrations could be identified. The 5th percentile egg and ovary Se concentrations from the SSD were consistently 20 µg/g dry weight (dw) for the best-fitting distributions. In contrast, the lowest value approach using a safety factor of 10 would result in a Se egg and ovary guideline of 2 µg/g dw, which is unrealistically conservative, as this falls within the range of egg and ovary Se concentrations in laboratory control fish and fish collected from reference sites. An egg and ovary Se guideline of 20 µg/g dw should be considered a conservative, broadly applicable guideline, as no species mean toxicity thresholds lower than this value have been identified to date. When concentrations exceed this guideline, site-specific studies with local fish species, conducted using a risk-based approach, may result in higher egg and ovary Se toxicity thresholds. Copyright © 2011 SETAC.
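The 5th-percentile calculation behind an SSD-based guideline can be sketched with a log-normal fit to species-mean toxicity values. The EC10 numbers below are hypothetical placeholders, and a simple normal fit on log values stands in for the best-fitting distributions the authors compared; this is not the CCME procedure itself.

```python
import numpy as np

def ssd_hc5(species_ec10s):
    """5th percentile of a log-normal species sensitivity distribution
    fitted to species-mean EC10s (illustrative sketch)."""
    logs = np.log(np.asarray(species_ec10s, dtype=float))
    mu, sd = logs.mean(), logs.std(ddof=1)
    z05 = -1.6449  # 5th-percentile standard normal quantile
    return float(np.exp(mu + z05 * sd))

# hypothetical egg/ovary Se EC10 values (ug/g dw) for several fish species
ec10s = [24, 27, 30, 33, 36, 40, 45, 52]
hc5 = ssd_hc5(ec10s)
```

Contrast this with the lowest-value approach: min(ec10s) / 10 would give 2.4 ug/g dw here, far below the fitted 5th percentile, mirroring the over-conservatism the abstract criticizes.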
Prognostic Value of Quantitative Stress Perfusion Cardiac Magnetic Resonance.
Sammut, Eva C; Villa, Adriana D M; Di Giovine, Gabriella; Dancy, Luke; Bosio, Filippo; Gibbs, Thomas; Jeyabraba, Swarna; Schwenke, Susanne; Williams, Steven E; Marber, Michael; Alfakih, Khaled; Ismail, Tevfik F; Razavi, Reza; Chiribiri, Amedeo
2018-05-01
This study sought to evaluate the prognostic usefulness of visual and quantitative perfusion cardiac magnetic resonance (CMR) ischemic burden in an unselected group of patients and to assess the validity of consensus-based ischemic burden thresholds extrapolated from nuclear studies. There are limited data on the prognostic value of assessing myocardial ischemic burden by CMR, and there are none using quantitative perfusion analysis. Patients with suspected coronary artery disease referred for adenosine-stress perfusion CMR were included (n = 395; 70% male; age 58 ± 13 years). The primary endpoint was a composite of cardiovascular death, nonfatal myocardial infarction, aborted sudden death, and revascularization after 90 days. Perfusion scans were assessed visually and with quantitative analysis. Cross-validated Cox regression analysis and net reclassification improvement were used to assess the incremental prognostic value of visual or quantitative perfusion analysis over a baseline clinical model, initially as continuous covariates, then using accepted thresholds of ≥2 segments or ≥10% myocardium. After a median 460 days (interquartile range: 190 to 869 days) follow-up, 52 patients reached the primary endpoint. At 2 years, the addition of ischemic burden was found to increase prognostic value over a baseline model of age, sex, and late gadolinium enhancement (baseline model area under the curve [AUC]: 0.75; visual AUC: 0.84; quantitative AUC: 0.85). Dichotomized quantitative ischemic burden performed better than visual assessment (net reclassification improvement 0.043 vs. 0.003 against baseline model). This study was the first to address the prognostic benefit of quantitative analysis of perfusion CMR and to support the use of consensus-based ischemic burden thresholds by perfusion CMR for prognostic evaluation of patients with suspected coronary artery disease. 
Quantitative analysis provided incremental prognostic value to visual assessment and established risk factors, potentially representing an important step forward in the translation of quantitative CMR perfusion analysis to the clinical setting. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
Defect Detection of Steel Surfaces with Global Adaptive Percentile Thresholding of Gradient Image
NASA Astrophysics Data System (ADS)
Neogi, Nirbhar; Mohanta, Dusmanta K.; Dutta, Pranab K.
2017-12-01
Steel strips are used extensively for white goods, auto bodies and other purposes where surface defects are not acceptable. On-line surface inspection systems can effectively detect and classify defects and help in taking corrective actions. For the detection of defects, the use of gradients is very popular in highlighting and subsequently segmenting areas of interest in a surface inspection system. Most of the time, segmentation by a fixed-value threshold leads to unsatisfactory results. As defects can be both very small and large in size, segmentation of a gradient image based on percentile thresholding can lead to inadequate or excessive segmentation of defective regions. A global adaptive percentile thresholding of the gradient image has been formulated for blister defects and water-deposits (a pseudo defect) in steel strips. The developed method adaptively changes the percentile value used for thresholding depending on the number of pixels above some specific values of gray level of the gradient image. The method is able to segment defective regions selectively, preserving the characteristics of defects irrespective of their size. The developed method performs better than the Otsu method of thresholding and an adaptive thresholding method based on local properties.
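The adaptive-percentile idea above can be sketched as: compute the gradient magnitude, then loosen the percentile when many pixels exceed a high gradient level (suggesting a large defect) and tighten it otherwise. All parameter values and the synthetic strip image below are illustrative assumptions, not the paper's tuned settings.

```python
import numpy as np

def adaptive_percentile_threshold(img, high_level=30.0,
                                  base_pct=99.5, dense_pct=95.0,
                                  density_cut=0.01):
    """Global adaptive percentile thresholding of a gradient image (sketch).

    If many pixels exceed `high_level` (a large defect), use a looser
    percentile so the whole defect survives; otherwise a strict one.
    """
    gy, gx = np.gradient(img.astype(float))
    grad = np.hypot(gx, gy)
    density = (grad > high_level).mean()
    pct = dense_pct if density > density_cut else base_pct
    thr = np.percentile(grad, pct)
    return grad > thr, thr

# synthetic strip image: flat background with one bright square "blister"
img = np.full((100, 100), 50.0)
img[40:60, 40:60] = 150.0
mask, thr = adaptive_percentile_threshold(img)
```

On this toy image only the defect edges carry gradient energy, so the mask traces the blister outline while the flat background stays unsegmented.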
NASA Astrophysics Data System (ADS)
Zhu, Yanli; Chen, Haiqiang
2017-05-01
In this paper, we revisit the question of whether U.S. monetary policy is asymmetric by estimating a forward-looking threshold Taylor rule with quarterly data from 1955 to 2015. In order to capture the potential heterogeneity of the regime-shift mechanism under different economic conditions, we modify the threshold model by assuming the threshold value to be a latent variable following an autoregressive (AR) dynamic process. We use the unemployment rate as the threshold variable and separate the sample into two periods: expansion periods and recession periods. Our findings support that the U.S. monetary policy operations are asymmetric across these two regimes. More precisely, the monetary authority tends to implement an active Taylor rule with a weaker response to the inflation gap (the deviation of inflation from its target) and a stronger response to the output gap (the deviation of output from its potential level) in recession periods. The threshold value, interpreted as the targeted unemployment rate of monetary authorities, exhibits significant time-varying properties, confirming the conjecture that policy makers may adjust their reference point for the unemployment rate accordingly to reflect their assessment of the health of the general economy.
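A threshold Taylor rule of the kind estimated above can be written down in a few lines: the response coefficients switch depending on whether unemployment is above or below the threshold. The coefficient values and inputs below are stylized illustrations, not the paper's estimates (and the paper's threshold is a latent AR process, not a constant).

```python
def taylor_rate(inflation, output_gap, unemployment, u_star,
                r_star=2.0, pi_star=2.0):
    """Threshold Taylor rule (stylized sketch): in recessions (u > u_star)
    the rule reacts less to the inflation gap and more to the output gap,
    as the abstract describes. All coefficients are illustrative."""
    if unemployment > u_star:            # recession regime
        a_pi, a_y = 0.5, 1.0
    else:                                # expansion regime
        a_pi, a_y = 1.5, 0.5
    return r_star + inflation + a_pi * (inflation - pi_star) + a_y * output_gap

i_exp = taylor_rate(inflation=3.0, output_gap=1.0, unemployment=4.5, u_star=6.0)
i_rec = taylor_rate(inflation=3.0, output_gap=-2.0, unemployment=8.0, u_star=6.0)
```

With identical inflation, the prescribed rate differs across regimes purely because the coefficients switch at the unemployment threshold.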
NASA Astrophysics Data System (ADS)
Vujović, D.; Paskota, M.; Todorović, N.; Vučković, V.
2015-07-01
The pre-convective atmosphere over Serbia during the ten-year period 2001-2010 was investigated using the radiosonde data from one meteorological station and the thunderstorm observations from thirteen SYNOP meteorological stations. Several stability indices were examined in order to verify their ability to forecast a thunderstorm. Rank sum scores (RSSs) were used to segregate indices and parameters which can differentiate between a thunderstorm and a no-thunderstorm event. The following indices had the best RSS values: Lifted index (LI), K index (KI), Showalter index (SI), Boyden index (BI), Total totals (TT), dew-point temperature and mixing ratio. The threshold value test was used to determine the appropriate threshold values for these variables. The threshold with the best skill scores was chosen as the optimal one. The thresholds were validated in two ways: through a control data set, and by comparing the calculated index thresholds with the values of the indices for a randomly chosen day with an observed thunderstorm. The index with the highest skill for thunderstorm forecasting was LI, followed by SI, KI and TT. The BI had the poorest skill scores.
NASA Astrophysics Data System (ADS)
Lagomarsino, Daniela; Rosi, Ascanio; Rossi, Guglielmo; Segoni, Samuele; Catani, Filippo
2014-05-01
This work makes a quantitative comparison between the results of landslide forecasting obtained using two different rainfall threshold models, one using intensity-duration thresholds and the other based on cumulative rainfall thresholds, in an area of northern Tuscany of 116 km2. The first methodology identifies rainfall intensity-duration thresholds by means of a software called MaCumBA (Massive CUMulative Brisk Analyzer) that analyzes rain-gauge records, extracts the intensities (I) and durations (D) of the rainstorms associated with the initiation of landslides, plots these values on a diagram, and identifies thresholds that define the lower bounds of the I-D values. A back analysis using data from past events can be used to identify the threshold conditions associated with the least amount of false alarms. The second method (SIGMA) is based on the hypothesis that anomalous or extreme values of rainfall are responsible for landslide triggering: the statistical distribution of the rainfall series is analyzed, and multiples of the standard deviation (σ) are used as thresholds to discriminate between ordinary and extraordinary rainfall events. The name of the model, SIGMA, reflects the central role of the standard deviations in the proposed methodology. The definition of intensity-duration rainfall thresholds requires the combined use of rainfall measurements and an inventory of dated landslides, whereas the SIGMA model can be implemented using only rainfall data. These two methodologies were applied in an area of 116 km2 where a database of 1200 landslides was available for the period 2000-2012, and the results obtained are compared and discussed. Although several examples of visual comparisons between different intensity-duration rainfall thresholds are reported in the international literature, a quantitative comparison between thresholds obtained in the same area using different techniques and approaches is a relatively undebated research topic.
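The SIGMA-style idea above, flagging cumulative rainfall that is an anomalous multiple of the standard deviation of the historical series, can be sketched as follows. The synthetic rainfall series, window length, and 3σ cutoff are illustrative assumptions, not the SIGMA model's actual decreasing threshold curves.

```python
import numpy as np

def sigma_exceedance(rain, window_days):
    """How many standard deviations each moving-window cumulative rainfall
    sits from the historical mean (SIGMA-style sketch)."""
    csum = np.convolve(rain, np.ones(window_days), mode="valid")
    mu, sd = csum.mean(), csum.std()
    return (csum - mu) / sd

rng = np.random.default_rng(3)
rain = rng.gamma(0.3, 8.0, 4000)       # skewed synthetic daily rainfall (mm)
rain[2000:2003] += 120.0               # an extreme three-day event
z = sigma_exceedance(rain, window_days=3)
alarm = np.where(z > 3.0)[0]           # candidate landslide-triggering windows
```

Note that, unlike intensity-duration thresholds, this requires no landslide inventory: only the rainfall record itself, which is the practical advantage the abstract highlights.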
NASA Astrophysics Data System (ADS)
Rajesh, K.; Arun, A.; Mani, A.; Praveen Kumar, P.
2016-10-01
4-Methylimidazolium picrate has been synthesized and characterized successfully. Single-crystal and powder x-ray diffraction studies were conducted, which confirmed the crystal structure, and the value of the strain was calculated. The crystal perfection was determined by a HRXR diffractometer. The transmission spectrum exhibited a better transmittance of the crystal in the entire visible region, with a lower cut-off wavelength of 209 nm. The linear absorption value was calculated by the optical limiting method. A birefringence study was also carried out. Second- and third-order nonlinear optical properties of the crystal were found by second harmonic generation and the z-scan technique. The crystals were also characterized by dielectric measurement and a photoconductivity analyzer to determine the dielectric property and the optical conductivity of the crystal. The laser damage threshold of the grown crystal was studied with a Q-switched Nd:YAG laser beam. Thermal studies established that the compound did not undergo a phase transition and was stable up to 240 °C.
NASA Technical Reports Server (NTRS)
Bassom, Andrew P.; Seddougui, Sharon O.
1991-01-01
There exist two types of stationary instability of the flow over a rotating disc, corresponding to the upper-branch, inviscid mode and the lower-branch mode, which has a triple-deck structure, of the neutral stability curve. A theoretical study of the linear problem and an account of the weakly nonlinear properties of the lower-branch modes have been undertaken by Hall and MacKerrell, respectively. Motivated by recent reports of experimental sightings of the lower-branch mode, and by an examination of the role of suction in the linear stability properties of the flow, the effects of suction on the nonlinear disturbance described by MacKerrell are studied here. The additional analysis required in order to incorporate suction is relatively straightforward and enables the derivation of an amplitude equation which describes the evolution of the mode. For each value of the suction, a threshold value of the disturbance amplitude is obtained; modes of size greater than this threshold grow without limit as they develop away from the point of neutral stability.
NASA Astrophysics Data System (ADS)
Bochkarev, N. N.; Kabanov, A. M.; Stepanov, A. N.
2008-02-01
Using two optoacoustic approaches, we experimentally investigated the spatial location of the filament zone in the propagation channel of focused laser radiation. For femtosecond pulses propagating in air, it was shown that the nonlinear focus length scales as 1/P for initial power P moderately above the self-focusing threshold, when the focal distance of the optical system is significantly shorter than the Rayleigh length of the beam. Results of an experimental optoacoustic investigation of femto- and nanosecond pulse attenuation by several biological tissues (muscle tissue, adipose tissue, skin, milk), and of the optical breakdown thresholds in these tissues, are presented. It was shown that the penetration depth of short laser pulses into biological tissue is the same as for longer pulses. However, the amplitude of the acoustic response to the interaction of a femtosecond laser pulse with biological tissue is several times larger than that for nanosecond pulses of the same power and spectral distribution. The threshold values obtained may be of interest for tabulating maximum permissible exposure levels for work with laser radiation. Such values are currently unknown for femtosecond laser pulses.
A critique of the use of indicator-species scores for identifying thresholds in species responses
Cuffney, Thomas F.; Qian, Song S.
2013-01-01
Identification of ecological thresholds is important both for theoretical and applied ecology. Recently, Baker and King (2010; King and Baker 2010) proposed a method, threshold indicator taxa analysis (TITAN), to calculate species and community thresholds based on indicator species scores adapted from Dufrêne and Legendre (1997). We tested the ability of TITAN to detect thresholds using models with (broken-stick, disjointed broken-stick, dose-response, step-function, Gaussian) and without (linear) definitive thresholds. TITAN accurately and consistently detected thresholds in step-function models, but not in models characterized by abrupt changes in response slopes or response direction. Threshold detection in TITAN was very sensitive to the distribution of 0 values, which caused TITAN to identify thresholds associated with relatively small differences in the distribution of 0 values while ignoring thresholds associated with large changes in abundance. Threshold identification and tests of statistical significance were based on the same data permutations, resulting in inflated estimates of statistical significance. Application of bootstrapping to the split-point problem that underlies TITAN led to underestimates of the confidence intervals of thresholds. Bias in the derivation of the z-scores used to identify TITAN thresholds, and skewness in the distribution of data along the gradient, produced TITAN thresholds that were much more similar to one another than the actual thresholds were. This tendency may account for the synchronicity of thresholds reported in TITAN analyses. The thresholds identified by TITAN represented disparate characteristics of species responses, which, coupled with the inability of TITAN to identify thresholds accurately and consistently, does not support the aggregation of individual species thresholds into a community threshold.
Salicylate-induced changes in auditory thresholds of adolescent and adult rats.
Brennan, J F; Brown, C A; Jastreboff, P J
1996-01-01
Shifts in auditory intensity thresholds after salicylate administration were examined in postweanling and adult pigmented rats at frequencies ranging from 1 to 35 kHz. A total of 132 subjects from both age levels were tested under two-way active avoidance or one-way active avoidance paradigms. Thresholds were inferred from behavioral responses to presentations of descending and ascending series of intensities at each test frequency. Reliable threshold estimates were found under both avoidance conditioning methods; compared to controls, subjects at both age levels showed threshold shifts at selected higher frequencies after salicylate injection, and the extent of the shifts was related to salicylate dose level.
Wang, Rui; Zhou, Yongquan; Zhao, Chengyan; Wu, Haizhou
2015-01-01
Multi-threshold image segmentation is a powerful image processing technique that is used for the preprocessing of pattern recognition and computer vision. However, traditional multilevel thresholding methods are computationally expensive because they involve exhaustively searching the optimal thresholds to optimize the objective functions. To overcome this drawback, this paper proposes a flower pollination algorithm with a randomized location modification. The proposed algorithm is used to find optimal threshold values for maximizing Otsu's objective functions with regard to eight medical grayscale images. When benchmarked against other state-of-the-art evolutionary algorithms, the new algorithm proves itself to be robust and effective through numerical experimental results including Otsu's objective values and standard deviations.
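The objective function being optimized above can be made concrete with a brute-force single-threshold Otsu search, which maximizes the between-class variance over all 255 candidate cutoffs. The synthetic bimodal image is an illustrative assumption; the paper's point is that this exhaustive search becomes intractable for multiple thresholds, which is what motivates metaheuristics such as the flower pollination algorithm.

```python
import numpy as np

def otsu_objective(hist, t):
    """Between-class variance for a single threshold t (Otsu's criterion)."""
    p = hist / hist.sum()
    levels = np.arange(len(hist))
    w0, w1 = p[:t].sum(), p[t:].sum()
    if w0 == 0 or w1 == 0:
        return 0.0
    mu0 = (levels[:t] * p[:t]).sum() / w0
    mu1 = (levels[t:] * p[t:]).sum() / w1
    return w0 * w1 * (mu0 - mu1) ** 2

def otsu_threshold(img):
    """Exhaustive single-threshold Otsu search over the 256-bin histogram."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    scores = [otsu_objective(hist, t) for t in range(1, 256)]
    return 1 + int(np.argmax(scores))

rng = np.random.default_rng(5)
# synthetic bimodal "image": dark and bright pixel populations
img = np.clip(np.concatenate([rng.normal(60, 10, 4000),
                              rng.normal(180, 10, 4000)]), 0, 255)
t = otsu_threshold(img)
```

For k thresholds the exhaustive search grows combinatorially (roughly 255 choose k evaluations), so a population-based optimizer evaluates far fewer candidate threshold vectors.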
Branion-Calles, Michael C; Nelson, Trisalyn A; Henderson, Sarah B
2015-11-19
There is no safe concentration of radon gas, but guideline values provide threshold concentrations that are used to map areas at higher risk. These values vary between regions, countries, and organizations, which can lead to differential classification of risk. For example, the World Health Organization suggests a 100 Bq m(-3) value, while Health Canada recommends 200 Bq m(-3). Our objective was to describe how different thresholds characterized ecological radon risk and their visual association with lung cancer mortality trends in British Columbia, Canada. Eight threshold values between 50 and 600 Bq m(-3) were identified, and classes of radon vulnerability were defined based on whether the observed 95th percentile radon concentration was above or below each value. A balanced random forest algorithm was used to model vulnerability, and the results were mapped. We compared high-vulnerability areas, their estimated populations, and differences in lung cancer mortality trends stratified by smoking prevalence and sex. Classification accuracy improved as the threshold concentrations decreased and the area classified as high vulnerability increased. The majority of the population lived within areas of lower vulnerability regardless of the threshold value. Thresholds as low as 50 Bq m(-3) were associated with higher lung cancer mortality, even in areas with low smoking prevalence. Temporal trends in lung cancer mortality were increasing for women, while decreasing for men. Radon contributes to lung cancer in British Columbia. The results of the study provide evidence supporting the use of a reference level lower than the current guideline of 200 Bq m(-3) for the province.
Shang, De-Sheng; Ruan, Ling-Xiang; Zhou, Shui-Hong; Bao, Yang-Yang; Cheng, Ke-Jia; Wang, Qin-Ying
2013-01-01
Background Diffusion-weighted magnetic resonance imaging (DWI) has been introduced in head and neck cancers. Due to limitations in the performance of laryngeal DWI, including the complex anatomical structure of the larynx leading to susceptibility effects, the value of DWI in differentiating benign from malignant laryngeal lesions has largely been ignored. We assessed whether a threshold for the apparent diffusion coefficient (ADC) was useful in differentiating preoperative laryngeal carcinomas from precursor lesions by turbo spin-echo (TSE) DWI and 3.0-T magnetic resonance. Methods We evaluated DWI and the ADC value in 33 pathologically proven laryngeal carcinomas and 17 precancerous lesions. Results The sensitivity, specificity, and accuracy of laryngostroboscopy were 81.8%, 64.7%, and 76.0%, respectively. The sensitivity, specificity, and accuracy of conventional magnetic resonance imaging were 90.9%, 76.5%, and 86.0%, respectively. Qualitative DWI analysis produced sensitivity, specificity, and accuracy values of 100.0%, 88.2%, and 96.0%, respectively. The ADC values were lower for patients with laryngeal carcinoma (mean 1.195±0.32×10−3 mm2/s) versus those with laryngeal precancerous lesions (mean 1.780±0.32×10−3 mm2/s; P<0.001). ROC analysis showed that the area under the curve was 0.956 and the optimum threshold for the ADC was 1.455×10−3 mm2/s, resulting in a sensitivity of 94.1%, a specificity of 90.9%, and an accuracy of 92.9%. Conclusions Despite some limitations, including the small number of laryngeal carcinomas included, DWI may detect changes in tumor size and shape before they are visible by laryngostroboscopy. The ADC values were lower for patients with laryngeal carcinoma than for those with laryngeal precancerous lesions. The proposed cutoff for the ADC may help distinguish laryngeal carcinomas from laryngeal precancerous lesions. PMID:23874693
NASA Technical Reports Server (NTRS)
Moore, E. N.; Altick, P. L.
1972-01-01
The research performed is briefly reviewed. A simple method was developed for the calculation of continuum states of atoms when autoionization is present. The method was employed to give the first theoretical cross section for beryllium and magnesium; the results indicate that the values used previously at threshold were sometimes seriously in error. These threshold values have potential applications in astrophysical abundance estimates.
Damian, Anne M; Jacobson, Sandra A; Hentz, Joseph G; Belden, Christine M; Shill, Holly A; Sabbagh, Marwan N; Caviness, John N; Adler, Charles H
2011-01-01
To perform an item analysis of the Montreal Cognitive Assessment (MoCA) versus the Mini-Mental State Examination (MMSE) in the prediction of cognitive impairment, and to examine the characteristics of different MoCA threshold scores. 135 subjects enrolled in a longitudinal clinicopathologic study were administered the MoCA by a single physician and the MMSE by a trained research assistant. Subjects were classified as cognitively impaired or cognitively normal based on independent neuropsychological testing. 89 subjects were found to be cognitively normal, and 46 cognitively impaired (20 with dementia, 26 with mild cognitive impairment). The MoCA was superior in both sensitivity and specificity to the MMSE, although not all MoCA tasks were of equal predictive value. A MoCA threshold score of 26 had a sensitivity of 98% and a specificity of 52% in this population. In a population with a 20% prevalence of cognitive impairment, a threshold of 24 was optimal (negative predictive value 96%, positive predictive value 47%). This analysis suggests the potential for creating an abbreviated MoCA. For screening in primary care, the MoCA threshold of 26 appears optimal. For testing in a memory disorders clinic, a lower threshold has better predictive value. Copyright © 2011 S. Karger AG, Basel.
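The dependence of predictive values on prevalence noted above follows directly from Bayes' rule. A small sketch with the generic formulas (not tied to the study's raw data):

```python
# Positive/negative predictive values from sensitivity, specificity,
# and prevalence, via the standard 2x2 decomposition.
def predictive_values(sens, spec, prev):
    tp = sens * prev              # true positives (per unit population)
    fp = (1 - spec) * (1 - prev)  # false positives
    fn = (1 - sens) * prev        # false negatives
    tn = spec * (1 - prev)        # true negatives
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return ppv, npv
```

With the quoted sensitivity of 98% and specificity of 52% at a threshold of 26, PPV stays modest at 20% prevalence even though NPV is excellent, which is why a lower threshold has better predictive value in a higher-prevalence memory clinic.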
Grant, Wally; Curthoys, Ian
2017-09-01
Vestibular otolithic organs are recognized as transducers of head acceleration, and they function as such up to their corner frequency or undamped natural frequency. It is well recognized that these organs respond to frequencies above their corner frequency, up to the 2-3 kHz range (Curthoys et al., 2016). A mechanics model for the transduction of these organs is developed that predicts the response below the undamped natural frequency as an accelerometer and above that frequency as a seismometer. The model is converted to a transfer function using hair cell bundle deflection. Measured threshold acceleration stimuli are used along with threshold deflections for threshold transfer function values. These are compared to model-predicted values, both below and above the undamped natural frequency. Threshold deflection values are adjusted to match the model transfer function. The resulting threshold deflection values were well within measured threshold bundle deflection ranges. Vestibular evoked myogenic potential (VEMP) testing today routinely uses stimulus frequencies of 500 and 1000 Hz, and the otoliths have been established incontrovertibly by clinical and neural evidence as the stimulus source. The mechanism of stimulation at these frequencies, above the undamped natural frequency of the otoliths, is presented: the otoliths utilize a seismometer mode of response for VEMP transduction. Copyright © 2017 Elsevier B.V. All rights reserved.
Threshold-based insulin-pump interruption for reduction of hypoglycemia.
Bergenstal, Richard M; Klonoff, David C; Garg, Satish K; Bode, Bruce W; Meredith, Melissa; Slover, Robert H; Ahmann, Andrew J; Welsh, John B; Lee, Scott W; Kaufman, Francine R
2013-07-18
The threshold-suspend feature of sensor-augmented insulin pumps is designed to minimize the risk of hypoglycemia by interrupting insulin delivery at a preset sensor glucose value. We evaluated sensor-augmented insulin-pump therapy with and without the threshold-suspend feature in patients with nocturnal hypoglycemia. We randomly assigned patients with type 1 diabetes and documented nocturnal hypoglycemia to receive sensor-augmented insulin-pump therapy with or without the threshold-suspend feature for 3 months. The primary safety outcome was the change in the glycated hemoglobin level. The primary efficacy outcome was the area under the curve (AUC) for nocturnal hypoglycemic events. Two-hour threshold-suspend events were analyzed with respect to subsequent sensor glucose values. A total of 247 patients were randomly assigned to receive sensor-augmented insulin-pump therapy with the threshold-suspend feature (threshold-suspend group, 121 patients) or standard sensor-augmented insulin-pump therapy (control group, 126 patients). The changes in glycated hemoglobin values were similar in the two groups. The mean AUC for nocturnal hypoglycemic events was 37.5% lower in the threshold-suspend group than in the control group (980 ± 1200 mg per deciliter [54.4 ± 66.6 mmol per liter] × minutes vs. 1568 ± 1995 mg per deciliter [87.0 ± 110.7 mmol per liter] × minutes, P<0.001). Nocturnal hypoglycemic events occurred 31.8% less frequently in the threshold-suspend group than in the control group (1.5 ± 1.0 vs. 2.2 ± 1.3 per patient-week, P<0.001). The percentages of nocturnal sensor glucose values of less than 50 mg per deciliter (2.8 mmol per liter), 50 to less than 60 mg per deciliter (3.3 mmol per liter), and 60 to less than 70 mg per deciliter (3.9 mmol per liter) were significantly reduced in the threshold-suspend group (P<0.001 for each range). 
After 1438 instances at night in which the pump was stopped for 2 hours, the mean sensor glucose value was 92.6 ± 40.7 mg per deciliter (5.1 ± 2.3 mmol per liter). Four patients (all in the control group) had a severe hypoglycemic event; no patients had diabetic ketoacidosis. This study showed that over a 3-month period the use of sensor-augmented insulin-pump therapy with the threshold-suspend feature reduced nocturnal hypoglycemia, without increasing glycated hemoglobin values. (Funded by Medtronic MiniMed; ASPIRE ClinicalTrials.gov number, NCT01497938.).
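The threshold-suspend behaviour evaluated above can be sketched as a simple control rule. This is an assumed simplification for illustration; the commercial algorithm's details are not given in the abstract:

```python
# One 5-minute control step of a threshold-suspend rule: interrupt basal
# insulin when sensor glucose falls to a preset threshold, for at most 2 h.
def pump_action(sensor_glucose, threshold=70, suspended_min=0, max_suspend=120):
    """Return (deliver_insulin, new_suspended_minutes) for one 5-min step."""
    if sensor_glucose <= threshold and suspended_min < max_suspend:
        return False, suspended_min + 5   # suspend insulin delivery
    return True, 0                        # resume normal basal delivery
```

The 2-hour cap (`max_suspend=120`) mirrors the two-hour pump-stop events analyzed in the trial; the threshold and step size here are illustrative defaults.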
Extreme value theory applied to the definition of bathing water quality discounting limits.
Haggarty, R A; Ferguson, C A; Scott, E M; Iroegbu, C; Stidson, R
2010-02-01
The European Community Bathing Water Directive (European Parliament, 2006) set compliance standards for bathing waters across Europe, with minimum standards for microbiological indicators to be attained at all locations by 2015. The Directive allows up to 15% of samples affected by short-term pollution episodes to be disregarded from the figures used to classify bathing waters, provided certain management criteria have been met, including informing the public of short-term water pollution episodes. Therefore, a scientifically justifiable discounting limit is required which could be used as a management tool to determine the samples that should be removed. This paper investigates different methods of obtaining discounting limits, focusing in particular on extreme value methodology applied to data from Scottish bathing waters. Return level based limits derived from threshold models applied at a site-specific level improved the percentage of sites which met at least the minimum required standard. This approach provides a method of obtaining limits which identify the samples that should be removed from compliance calculations, although care has to be taken in terms of the quantity of data which is removed. (c) 2009 Elsevier Ltd. All rights reserved.
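Return-level limits from a threshold (peaks-over-threshold) model follow a standard generalized Pareto formula (Coles, 2001). The sketch below uses that generic formula with illustrative parameter values, not fitted Scottish bathing-water data:

```python
# m-observation return level for a generalized Pareto threshold model.
from math import log

def gpd_return_level(u, sigma, xi, rate, m):
    """u: threshold; sigma/xi: GPD scale and shape;
    rate: proportion of observations exceeding u; m: return period (obs)."""
    if abs(xi) < 1e-9:                      # xi -> 0 limit (exponential tail)
        return u + sigma * log(m * rate)
    return u + (sigma / xi) * ((m * rate) ** xi - 1)
```

A site-specific discounting limit could then be set at the return level for a chosen return period, with samples above it flagged for removal from compliance calculations.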
Hauge-Nilsen, Kristin; Keller, Detlef
2015-01-01
Starting from a single generic limit value, the threshold of toxicological concern (TTC) concept has been further developed over the years, e.g., by including differentiated structural classes according to the rules of Cramer et al. (Food Chem Toxicol 16: 255-276, 1978). In practice, the refined TTC concept of Munro et al. (Food Chem Toxicol 34: 829-867, 1996) is often applied. The purpose of this work was to explore the possibility of refining the concept by introducing additional structure-activity relationships and available toxicity data. Computer modeling was performed using the OECD Toolbox. No observed (adverse) effect level (NO(A)EL) data of 176 substances were collected in a basic data set. New subgroups were created applying the following criteria: extended Cramer rules, low bioavailability, low acute toxicity, no protein binding affinity, and consideration of predicted liver metabolism. The highest TTC limit value of 236 µg/kg/day was determined for a subgroup that combined the criteria "no protein binding affinity" and "predicted liver metabolism." This value was approximately eight times higher than the original Cramer class 1 limit value of 30 µg/kg/day. The results of this feasibility study indicate that inclusion of the proposed criteria may lead to improved TTC values. Thereby, the applicability of the TTC concept in risk assessment could be extended which could reduce the need to perform animal tests.
NASA Astrophysics Data System (ADS)
Cavanaugh, K. C.; Kellner, J.; Cook-Patton, S.; Williams, P.; Feller, I. C.; Parker, J.
2014-12-01
Due to limitations of purely correlative species distribution models, there is a need for more integration of experimental approaches when studying impacts of climate change on species distributions. Here we used controlled experiments to identify physiological thresholds that control poleward range limits of three species of mangroves found in North America. We found that all three species exhibited a threshold response to extreme cold, but freeze tolerance thresholds varied among species. From these experiments we developed a climate metric, freeze degree days (FDD), which incorporates both the intensity and frequency of freezes. When included in distribution models, FDD was a better predictor of mangrove presence/absence than other temperature-based metrics. Using 27 years of satellite imagery, we linked FDD to past changes in mangrove abundance in Florida, further supporting the relevance of FDD. We then used downscaled climate projections of FDD to project poleward migration of these range limits over the next 50 years.
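A freeze-degree-day style metric can be sketched as follows. The exact definition used by the authors is not given in the abstract; this assumed form accumulates degrees below freezing over daily minima, so it grows with both the intensity and the frequency of freezes:

```python
# Assumed freeze-degree-day metric: degrees below freezing, summed over days.
def freeze_degree_days(daily_min_temps_c, freeze_point=0.0):
    return sum(freeze_point - t for t in daily_min_temps_c if t < freeze_point)
```

Under this form, one night at -4 °C and four nights at -1 °C contribute equally (FDD = 4), which is the sense in which a single metric can capture both freeze intensity and frequency.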
Limitations and opportunities for the social cost of carbon (Invited)
NASA Astrophysics Data System (ADS)
Rose, S. K.
2010-12-01
Estimates of the marginal value of carbon dioxide-the social cost of carbon (SCC)-were recently adopted by the U.S. Government in order to satisfy requirements to value estimated GHG changes of new federal regulations. However, the development and use of SCC estimates of avoided climate change impacts come with significant challenges and controversial decisions. Fortunately, economics can provide some guidance for conceptually appropriate estimates. At the same time, economics defaults to a benefit-cost decision framework to identify socially optimal policies. However, not all current policy decisions are benefit-cost based, depend on monetized information, or even apply the same evidentiary threshold. While a conceptually appropriate SCC is a useful metric, how far can we take it? This talk discusses potential applications of the SCC, limitations based on the state of research and methods, as well as opportunities for, among other things, consistency with climate risk management and research and decision-making tools.
Digital audio watermarking using moment-preserving thresholding
NASA Astrophysics Data System (ADS)
Choi, DooSeop; Jung, Hae Kyung; Choi, Hyuk; Kim, Taejeong
2007-09-01
The moment-preserving thresholding (MPT) technique has been used in digital image processing for decades, especially in image binarization and image compression. Its main strength is that the binary values it produces, called representative values, are usually unaffected when the thresholded signal goes through a signal processing operation. The two representative values in MPT, together with the threshold value, are obtained by solving the system of preservation equations for the first, second, and third moments. Relying on this robustness of the representative values to the various signal processing attacks considered in the watermarking context, this paper proposes a new watermarking scheme for audio signals. The watermark is embedded in the root-sum-square (RSS) of the two representative values of each signal block using the quantization technique. As a result, the RSS values are modified by scaling the signal according to the watermark bit sequence under the constraint of inaudibility relative to the human psycho-acoustic model. We also address and suggest solutions to the problems of synchronization and power scaling attacks. Experimental results show that the proposed scheme maintains high audio quality and robustness to various attacks including MP3 compression, re-sampling, jittering, and DA/AD conversion.
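The moment-preservation equations have a closed-form solution (Tsai's moment-preserving thresholding). A sketch of recovering the two representative values and the fraction of samples assigned to the lower one:

```python
# Solve p0*z0^i + (1-p0)*z1^i = m_i (i = 1..3) for z0 < z1 and p0,
# where m_i are the sample moments (Tsai's closed form).
from math import sqrt

def mpt_representatives(samples):
    n = len(samples)
    m1 = sum(samples) / n
    m2 = sum(x * x for x in samples) / n
    m3 = sum(x ** 3 for x in samples) / n
    cd = m2 - m1 * m1                       # determinant (m0 = 1)
    c0 = (m1 * m3 - m2 * m2) / cd
    c1 = (m1 * m2 - m3) / cd
    disc = sqrt(c1 * c1 - 4 * c0)
    z0 = (-c1 - disc) / 2                   # roots of z^2 + c1*z + c0 = 0
    z1 = (-c1 + disc) / 2
    p0 = (z1 - m1) / (z1 - z0)              # fraction assigned to z0
    return z0, z1, p0
```

In the audio scheme above it is the root-sum-square of `z0` and `z1` per block that carries the watermark bit; this sketch only covers the moment-preserving step.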
Harm is all you need? Best interests and disputes about parental decision-making.
Birchley, Giles
2016-02-01
A growing number of bioethics papers endorse the harm threshold when judging whether to override parental decisions. Among other claims, these papers argue that the harm threshold is easily understood by lay and professional audiences and correctly conforms to societal expectations of parents in regard to their children. English law contains a harm threshold which mediates the use of the best interests test in cases where a child may be removed from her parents. Using Diekema's seminal paper as an example, this paper explores the proposed workings of the harm threshold. I use examples from the practical use of the harm threshold in English law to argue that the harm threshold is an inadequate answer to the indeterminacy of the best interests test. I detail two criticisms: First, the harm standard has evaluative overtones and judges are loath to employ it where parental behaviour is misguided but they wish to treat parents sympathetically. Thus, by focusing only on 'substandard' parenting, harm is problematic where the parental attempts to benefit their child are misguided or wrong, such as in disputes about withdrawal of medical treatment. Second, when harm is used in genuine dilemmas, court judgments offer different answers to similar cases. This level of indeterminacy suggests that, in practice, the operation of the harm threshold would be indistinguishable from best interests. Since indeterminacy appears to be the greatest problem in elucidating what is best, bioethicists should concentrate on discovering the values that inform best interests. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
A new function for estimating local rainfall thresholds for landslide triggering
NASA Astrophysics Data System (ADS)
Cepeda, J.; Nadim, F.; Høeg, K.; Elverhøi, A.
2009-04-01
The widely used power law for establishing rainfall thresholds for triggering of landslides was first proposed by N. Caine in 1980. The most updated global thresholds, presented by F. Guzzetti and co-workers in 2008, were derived using Caine's power law and a rigorous and comprehensive collection of global data. Caine's function is defined as I = α×D^β, where I and D are the mean intensity and total duration of rainfall, and α and β are parameters estimated for a lower boundary curve to most or all of the positive observations (i.e., landslide-triggering rainfall events). This function does not account for the effect of antecedent precipitation as a conditioning factor for slope instability, an approach that may be adequate for global or regional thresholds that include landslides in surface geologies with a wide range of subsurface drainage conditions and pore-pressure responses to sustained rainfall. However, at a local scale and in geological settings dominated by a narrow range of drainage conditions and pore-pressure responses, the inclusion of antecedent precipitation in the definition of thresholds becomes necessary in order to ensure optimum performance, especially when the thresholds are used as part of early warning systems (i.e., false alarms and missed events must be kept to a minimum). Some authors have incorporated the effect of antecedent rainfall in a discrete manner by first comparing the accumulated precipitation during a specified number of days against a reference value and then using a Caine's function threshold only when that reference value is exceeded. Other authors have instead calculated threshold values as linear combinations of several triggering and antecedent parameters. The present study aims to propose a new threshold function based on a generalisation of Caine's power law. The proposed function has the form I = (α1×An^α2)×D^β, where I and D are defined as previously.
The expression in parentheses is equivalent to Caine's α parameter; α1, α2 and β are parameters estimated for the threshold, and An is the n-day cumulative rainfall. The suggested procedure to estimate the threshold is as follows: (1) Given N storms, assign one of the following flags to each storm: nL (non-triggering storms), yL (triggering storms), uL (uncertain-triggering storms). Successful predictions correspond to nL and yL storms occurring below and above the threshold, respectively. Storms flagged as uL are assigned either an nL or yL flag using a randomization procedure. (2) Establish a set of values of ni (e.g. 1, 4, 7, 10, 15 days, etc.) to test for accumulated precipitation. (3) For each storm and each ni value, obtain the antecedent accumulated precipitation in ni days, Ani. (4) Generate a 3D grid of values of α1, α2 and β. (5) For a given value of ni, generate confusion matrices for the N storms at each grid point and estimate an evaluation metrics parameter EMP (e.g., accuracy, specificity, etc.). (6) Repeat the previous step for all values in the set of ni. (7) From the 3D grid corresponding to each ni value, search for the optimum grid point EMPopti (global minimum or maximum of the parameter). (8) Search for the optimum value of ni in the space ni vs EMPopti. (9) The threshold is defined by the value of ni obtained in the previous step and the corresponding values of α1, α2 and β. The procedure is illustrated using rainfall data and landslide observations from the San Salvador volcano, where a rainfall-triggered debris flow destroyed a neighbourhood in the capital city of El Salvador on 19 September 1982, killing at least 300 people.
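The grid search in steps (4)-(5) hinges on evaluating the generalized threshold and building a confusion matrix per grid point. A minimal sketch (variable names and test values are illustrative):

```python
# Generalized threshold I = (a1 * An**a2) * D**b and a confusion-matrix
# evaluation over labelled storms.
def above_threshold(intensity, duration, antecedent, a1, a2, b):
    return intensity >= (a1 * antecedent ** a2) * duration ** b

def confusion(storms, a1, a2, b):
    """storms: list of (I, D, An, triggered_landslide) tuples.
    Returns (tp, fp, tn, fn)."""
    tp = fp = tn = fn = 0
    for i, d, an, triggered in storms:
        pred = above_threshold(i, d, an, a1, a2, b)
        if pred and triggered:
            tp += 1
        elif pred:
            fp += 1
        elif triggered:
            fn += 1
        else:
            tn += 1
    return tp, fp, tn, fn
```

An evaluation metric such as accuracy, (tp + tn) / N, would then be computed from these counts at every (α1, α2, β) grid point and for every candidate ni.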
Zhang, Lin; Huttin, Olivier; Marie, Pierre-Yves; Felblinger, Jacques; Beaumont, Marine; Chillou, Christian DE; Girerd, Nicolas; Mandry, Damien
2016-11-01
To compare three widely used methods for myocardial infarct (MI) sizing on late gadolinium-enhanced (LGE) magnetic resonance (MR) images: manual delineation and two semiautomated techniques (full-width at half-maximum [FWHM] and n-standard deviation [SD]). 3T phase-sensitive inversion-recovery (PSIR) LGE images of 114 patients after an acute MI (2-4 days and 6 months) were analyzed by two independent observers to determine both total and core infarct sizes (TIS/CIS). Manual delineation served as the reference for determination of optimal thresholds for semiautomated methods after thresholding at multiple values. Reproducibility and accuracy were expressed as overall bias ± 95% limits of agreement. Mean infarct sizes by manual methods were 39.0%/24.4% for the acute MI group (TIS/CIS) and 29.7%/17.3% for the chronic MI group. The optimal thresholds (ie, providing the closest mean value to the manual method) were FWHM30% and 3SD for the TIS measurement and FWHM45% and 6SD for the CIS measurement (paired t-test; all P > 0.05). The best reproducibility was obtained using FWHM. For TIS measurement in the acute MI group, intra-/interobserver agreements, from Bland-Altman analysis, with FWHM30%, 3SD, and manual were -0.02 ± 7.74%/-0.74 ± 5.52%, 0.31 ± 9.78%/2.96 ± 16.62% and -2.12 ± 8.86%/0.18 ± 16.12, respectively; in the chronic MI group, the corresponding values were 0.23 ± 3.5%/-2.28 ± 15.06, -0.29 ± 10.46%/3.12 ± 13.06% and 1.68 ± 6.52%/-2.88 ± 9.62%, respectively. A similar trend for reproducibility was obtained for CIS measurement. However, semiautomated methods produced inconsistent results (variabilities of 24-46%) compared to manual delineation. The FWHM technique was the most reproducible method for infarct sizing both in acute and chronic MI. However, both FWHM and n-SD methods showed limited accuracy compared to manual delineation. J. Magn. Reson. Imaging 2016;44:1206-1217. © 2016 International Society for Magnetic Resonance in Medicine.
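The two semiautomated thresholds compared above take simple generic forms; this sketch shows those forms, with the specific optima (FWHM30% and 3SD for total infarct size) taken from the study's reported results:

```python
# Generic forms of the two semiautomated infarct-segmentation thresholds.
def fwhm_threshold(infarct_max_signal, fraction=0.30):
    """Pixels above fraction * maximal infarct signal count as infarct
    (classic FWHM uses 0.50; the study's optimum for TIS was 0.30)."""
    return fraction * infarct_max_signal

def n_sd_threshold(remote_mean, remote_sd, n=3):
    """Pixels above remote-myocardium mean + n * SD count as infarct."""
    return remote_mean + n * remote_sd
```

Both reduce infarct sizing to comparing each pixel's signal against a single scalar cutoff, which is why their accuracy depends so strongly on the chosen fraction or SD multiple.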
NASA Astrophysics Data System (ADS)
Bérubé, P.-M.; Poirier, J.-S.; Margot, J.; Stafford, L.; Ndione, P. F.; Chaker, M.; Morandotti, R.
2009-09-01
The influence of surface chemistry in plasma etching of multicomponent oxides was investigated through measurements of the ion energy dependence of the etch yield. Using pulsed-laser-deposited CaxBa(1-x)Nb2O6 (CBN) and SrTiO3 thin films as examples, it was found that the etching energy threshold shifts toward values larger or smaller than the sputtering threshold depending on whether or not ion-assisted chemical etching is the dominant etching pathway and whether surface chemistry is enhancing or inhibiting desorption of the film atoms. In the case of CBN films etched in an inductively coupled Cl2 plasma, it is found that the chlorine uptake is inhibiting the etching reaction, with the desorption of nonvolatile NbCl2 and BaCl2 compounds being the rate-limiting step.
Radiation damage limits to XPCS studies of protein dynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vodnala, Preeti, E-mail: preeti.vodnala@gmail.com; Karunaratne, Nuwan; Lurio, Laurence
2016-07-27
The limitations to x-ray photon correlation spectroscopy (XPCS) imposed by radiation damage have been evaluated for suspensions of alpha crystallin. We find that the threshold for radiation damage to the measured protein diffusion rate is significantly lower than the threshold for damage to the protein structure. We provide damage thresholds beyond which the measured diffusion coefficients are modified, using both XPCS and dynamic light scattering (DLS).
Dependence of Interfacial Excess on the Threshold Value of the Isoconcentration Surface
NASA Technical Reports Server (NTRS)
Yoon, Kevin E.; Noebe, Ronald D.; Hellman, Olof C.; Seidman, David N.
2004-01-01
The proximity histogram (or proxigram for short) is used for analyzing data collected by a three-dimensional atom probe microscope. The interfacial excess of Re (2.41 +/- 0.68 atoms/sq nm) is calculated by employing a proxigram in a completely geometrically independent way for gamma/gamma' interfaces in Rene N6, a third-generation single-crystal Ni-based superalloy. A possible dependence of interfacial excess on the variation of the threshold value of an isoconcentration surface is investigated using the data collected for Rene N6 alloy. It is demonstrated that the dependence of the interfacial excess value on the threshold value of the isoconcentration surface is weak.
Vernon, John A; Goldberg, Robert; Golec, Joseph
2009-01-01
In this article we describe how reimbursement cost-effectiveness thresholds, per unit of health benefit, whether set explicitly or observed implicitly via historical reimbursement decisions, serve as a signal to firms about the commercial viability of their R&D projects (including candidate products for in-licensing). Traditional finance methods for R&D project valuation, such as net present value (NPV) analysis, incorporate information from these payer reimbursement signals to help determine which R&D projects should be continued and which should be terminated (the latter because they yield an NPV < 0). Because the influence these signals have on firm R&D investment decisions is so significant, we argue that it is important for reimbursement thresholds to reflect the economic value of the unit of health benefit being considered for reimbursement. Thresholds set too low (below the economic value of the health benefit) will result in R&D investment levels that are too low relative to the economic value of R&D (on the margin). Similarly, thresholds set too high (above the economic value of the health benefit) will result in inefficiently high levels of R&D spending. The US in particular, which represents approximately half of the global pharmaceutical market (based on sales), and which seems poised to begin undertaking cost-effectiveness analysis in a systematic way, needs to exert caution in setting policies that explicitly or implicitly establish cost-effectiveness reimbursement thresholds for healthcare products and technologies, such as pharmaceuticals.
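The signalling mechanism can be made concrete with a toy NPV screen in which the reimbursement threshold fixes expected revenue per unit of health benefit. All names and numbers here are illustrative, not from the article:

```python
# Toy NPV screen: a payer threshold (price per QALY) sets expected revenue,
# and a project is continued only if its NPV is positive.
def npv(cash_flows, rate):
    """Net present value of cash flows indexed by period (period 0 first)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def project_npv(rd_cost, qalys_per_year, threshold_price, years, rate):
    revenue = qalys_per_year * threshold_price
    return npv([-rd_cost] + [revenue] * years, rate)
```

Raising or lowering `threshold_price` flips the sign of `project_npv` for marginal projects, which is exactly how threshold choice translates into R&D continue/terminate decisions.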
NASA Astrophysics Data System (ADS)
Zorila, Alexandru; Stratan, Aurel; Nemes, George
2018-01-01
We compare the ISO-recommended (standard) data-reduction algorithm used to determine the surface laser-induced damage threshold of optical materials by the S-on-1 test with two newly suggested algorithms, both termed "cumulative" methods (a regular one and a limit-case one), intended to perform better in some respects than the standard one. To avoid additional errors due to real experiments, a simulated test is performed, named the reverse approach. This approach simulates real damage experiments by generating artificial test data of damaged and non-damaged sites, based on an assumed, known damage threshold fluence of the target and on a given probability distribution function for inducing the damage. In this work, a database of 12 sets of test data containing both damaged and non-damaged sites was generated by using four different reverse techniques and by assuming three specific damage probability distribution functions. The same value for the threshold fluence was assumed, and a Gaussian fluence distribution on each irradiated site was considered, as usual for the S-on-1 test. Each set of test data was independently processed by the standard and by the two cumulative data-reduction algorithms, the resulting fitted probability distributions were compared with the initially assumed probability distribution functions, and the quantities used to compare these algorithms were determined. These quantities characterize the accuracy and the precision in determining the damage threshold and the goodness of fit of the damage probability curves. The results indicate that the accuracy in determining the absolute damage threshold is best for the ISO-recommended method, the precision is best for the limit-case of the cumulative method, and the goodness-of-fit estimator (adjusted R-squared) is almost the same for all three algorithms.
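The reverse approach can be sketched generically: assume a known threshold fluence and a damage-probability curve, then draw damaged/non-damaged outcomes per irradiated site. The logistic curve below is purely an assumption for illustration, not one of the paper's three distribution functions:

```python
# Generate artificial damaged/non-damaged site data from an assumed
# threshold fluence f_th and an assumed damage-probability curve.
from math import exp
import random

def damage_probability(fluence, f_th, width):
    """Assumed logistic curve rising through the threshold fluence f_th."""
    return 1.0 / (1.0 + exp(-(fluence - f_th) / width))

def simulate_sites(fluences, f_th, width, seed=0):
    """Artificial test data: (fluence, damaged?) per irradiated site."""
    rng = random.Random(seed)
    return [(f, rng.random() < damage_probability(f, f_th, width))
            for f in fluences]
```

A data-reduction algorithm run on this synthetic data can then be scored against the known `f_th`, which is the comparison the paper performs for the standard and cumulative methods.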
An Analysis of Changes in Threshold Limit Values Over Time
1993-01-01
[Garbled table fragment: threshold limit value adoption years and CAS numbers for Metribuzin (21087-64-9), Mevinphos (7786-34-7), Mica (12001-26-2), mineral wool fiber, and molybdenum (as Mo; soluble and insoluble compounds); the original tabular layout was lost in extraction.]
Melching-Kollmuss, Stephanie; Dekant, Wolfgang; Kalberlah, Fritz
2010-03-01
Limits for tolerable concentrations of ground water metabolites ("non-relevant metabolites" without targeted toxicities and specific classification and labeling) derived from active ingredients (AI) of plant protection products (PPPs) are discussed in the European Union. Risk assessments for "non-relevant metabolites" need to be performed when concentrations are above 0.75 microg/L. Since oral uptake is the only relevant exposure pathway for "non-relevant metabolites", risk assessment approaches as used for other chemicals with predominantly oral exposure in humans are applicable. The concept of "thresholds of toxicological concern" (TTC) defines tolerable dietary intakes for chemicals without toxicity data and is widely applied to chemicals present in food in low concentrations, such as flavorings. Based on a statistical evaluation of the results of many toxicity studies and considerations of chemical structures, the TTC concept derives a maximum daily oral intake without concern of 90 microg/person/day for non-genotoxic chemicals, even for those with appreciable toxicity. When using the typical exposure assessment for drinking water contaminants (consumption of 2 L of drinking water/person/day, allocation of 10% of the tolerable daily intake to drinking water), a TTC-based upper concentration limit of 4.5 microg/L for "non-relevant metabolites" in ground/drinking water is delineated. In the present publication it was evaluated whether this value would cover all relevant toxicities (repeated dose, reproductive and developmental, and immune effects). Taking into account that, after evaluation of specific reproduction toxicity data from chemicals and pharmaceuticals, a value of 1 microg/kg bw/day has been assessed to cover developmental and reproduction toxicity, a TTC value of 60 microg/person/day was considered a safe value. Based on these reasonable worst-case assumptions, a TTC-derived threshold of 3 microg/L in drinking water is obtained.
When a non-relevant metabolite is present at a concentration below 3 microg/L, animal testing for toxicity is not considered necessary for a compound-specific risk assessment, since the application of the TTC covers all relevant toxicities to be considered in such an assessment and any health risk resulting from these exposures is very low. (c) 2009 Elsevier Inc. All rights reserved.
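The arithmetic behind the 4.5 and 3 microg/L figures is simple; a sketch with the stated assumptions (2 L/day drinking water consumption, 10% allocation of the tolerable daily intake to drinking water):

```python
# Convert a TTC daily intake (ug/person/day) into a drinking-water
# concentration limit (ug/L) under the stated allocation assumptions.
def ttc_water_limit_ug_per_l(ttc_ug_per_person_day,
                             allocation=0.10, litres_per_day=2.0):
    return ttc_ug_per_person_day * allocation / litres_per_day
```

With the general TTC of 90 microg/person/day this gives 4.5 microg/L, and with the reproduction-toxicity-protective value of 60 microg/person/day it gives the proposed 3 microg/L threshold.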
A fuzzy optimal threshold technique for medical images
NASA Astrophysics Data System (ADS)
Thirupathi Kannan, Balaji; Krishnasamy, Krishnaveni; Pradeep Kumar Kenny, S.
2012-01-01
A new fuzzy thresholding method for medical images, especially cervical cytology images containing blob and mosaic structures, is proposed in this paper. Many existing thresholding algorithms can segment either blob or mosaic images, but no single algorithm handles both. In this paper, an input cervical cytology image is binarized and preprocessed, and the pixel value with the minimum Fuzzy Gaussian Index is identified as the optimal threshold value and used for segmentation. The proposed technique is tested on various cervical cytology images containing blob or mosaic structures and compared with several existing algorithms, which it outperforms.
Low-threshold field emission in planar cathodes with nanocarbon materials
NASA Astrophysics Data System (ADS)
Zhigalov, V.; Petukhov, V.; Emelianov, A.; Timoshenkov, V.; Chaplygin, Yu.; Pavlov, A.; Shamanaev, A.
2016-12-01
Nanocarbon materials are of great interest as field emission cathodes due to their low threshold voltage. In this work, the current-voltage characteristics of nanocarbon electrodes were studied. Low-threshold emission was found in planar samples where field enhancement is negligible (<10). Electron work function values calculated from Fowler-Nordheim theory are anomalously low (<1 eV) and conflict with the directly measured work function values of the fabricated planar samples (4.1-4.4 eV). The non-applicability of Fowler-Nordheim theory to these nanocarbon materials was thus confirmed. The reasons for low-threshold emission in nanocarbon materials are discussed.
Munford, Luke A; Sidaway, Mark; Blakemore, Amy; Sutton, Matt; Bower, Pete
2017-01-01
Background Community assets are promoted as a way to improve quality of life and reduce healthcare usage. However, the quantitative impact of participation in community assets on these outcomes is not known. Methods We examined the association between participation in community assets and health-related quality of life (HRQoL) (EuroQol-5D-5L) and healthcare usage in 3686 individuals aged ≥65 years. We estimated the unadjusted differences in EuroQol-5D-5L scores and healthcare usage between participants and non-participants in community assets and then used multivariate regression to examine scores adjusted for sociodemographic and limiting long-term health conditions. We derived the net benefits of participation using a range of threshold values for a quality-adjusted life year (QALY). Results 50% of individuals reported participation in community assets. Their EuroQol-5D-5L scores were 0.094 (95% CI 0.077 to 0.111) points higher than non-participants. Controlling for sociodemographic characteristics reduced this differential to 0.081 (95% CI 0.064 to 0.098). Further controlling for limiting long-term conditions reduced this effect to 0.039 (95% CI 0.025 to 0.052). Once we adjusted for sociodemographic and limiting long-term conditions, the reductions in healthcare usage and costs associated with community asset participation were not statistically significant. Based on a threshold value of £20 000 per QALY, the net benefits of participation in community assets were £763 (95% CI £478 to £1048) per participant per year. Conclusions Participation in community assets is associated with substantially higher HRQoL but is not associated with lower healthcare costs. The social value of developing community assets is potentially substantial. PMID:28183807
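The headline net-benefit figure follows from the standard net monetary benefit formula; a minimal sketch (names are ours). Note the reported £763 also folds in the (non-significant) healthcare cost difference, so the illustrative zero-cost case below gives a nearby but not identical figure:

```python
def net_monetary_benefit(qaly_gain, incremental_cost, wtp_threshold=20000.0):
    """Net monetary benefit: the QALY gain valued at the willingness-to-pay
    threshold (here 20,000 GBP per QALY), minus any incremental cost."""
    return wtp_threshold * qaly_gain - incremental_cost

# The adjusted HRQoL gain of 0.039 with no cost difference values
# participation at about 780 GBP per participant per year.
```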
Bayesian methods for estimating GEBVs of threshold traits
Wang, C-L; Ding, X-D; Wang, J-Y; Liu, J-F; Fu, W-X; Zhang, Z; Yin, Z-J; Zhang, Q
2013-01-01
Estimation of genomic breeding values is the key step in genomic selection (GS). Many methods have been proposed for continuous traits, but methods for threshold traits are still scarce. Here we introduced the threshold model into the framework of GS; specifically, we extended the three Bayesian methods BayesA, BayesB and BayesCπ on the basis of the threshold model for estimating genomic breeding values of threshold traits, and the extended methods are correspondingly termed BayesTA, BayesTB and BayesTCπ. Computing procedures for the three BayesT methods using a Markov chain Monte Carlo algorithm were derived. A simulation study was performed to investigate the accuracy gains of the presented methods in genomic estimated breeding values (GEBVs) for threshold traits, and factors affecting the performance of the three BayesT methods were addressed. As expected, the three BayesT methods generally performed better than the corresponding normal Bayesian methods, in particular when the number of phenotypic categories was small. In the standard scenario (number of categories=2, incidence=30%, number of quantitative trait loci=50, h2=0.3), the accuracies were improved by 30.4, 2.4, and 5.7 percentage points, respectively. In most scenarios, BayesTB and BayesTCπ generated similar accuracies and both performed better than BayesTA. In conclusion, our work shows that the threshold model fits well for predicting GEBVs of threshold traits, and BayesTCπ is the method of choice for GS of threshold traits. PMID:23149458
The Limits to Adaptation: A Systems Approach
The ability to adapt to climate change is delineated by capacity thresholds, after which climate damages begin to overwhelm the adaptation response. Such thresholds depend upon physical properties (natural processes and engineering...
Method and apparatus for analog pulse pile-up rejection
De Geronimo, Gianluigi
2013-12-31
A method and apparatus for pulse pile-up rejection are disclosed. The apparatus comprises a delay value application constituent configured to receive a threshold-crossing time value, and provide an adjustable value according to a delay value and the threshold-crossing time value; and a comparison constituent configured to receive a peak-occurrence time value and the adjustable value, compare the peak-occurrence time value with the adjustable value, indicate pulse acceptance if the peak-occurrence time value is less than or equal to the adjustable value, and indicate pulse rejection if the peak-occurrence time value is greater than the adjustable value.
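The accept/reject logic of the claim condenses to a few lines; a sketch assuming the "adjustable value" is simply the threshold-crossing time plus the delay (the claim does not fix the exact combination):

```python
def accept_pulse(threshold_crossing_time, peak_time, delay):
    """Pile-up test: form the adjustable value from the threshold-crossing
    time and the delay (assumed here to be their sum), then accept the
    pulse only if its peak occurs no later than that value."""
    adjustable = threshold_crossing_time + delay
    return peak_time <= adjustable

# A peak arriving long after the threshold crossing suggests a second,
# piled-up pulse, so it is rejected.
```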
Method and apparatus for analog pulse pile-up rejection
De Geronimo, Gianluigi
2014-11-18
A method and apparatus for pulse pile-up rejection are disclosed. The apparatus comprises a delay value application constituent configured to receive a threshold-crossing time value, and provide an adjustable value according to a delay value and the threshold-crossing time value; and a comparison constituent configured to receive a peak-occurrence time value and the adjustable value, compare the peak-occurrence time value with the adjustable value, indicate pulse acceptance if the peak-occurrence time value is less than or equal to the adjustable value, and indicate pulse rejection if the peak-occurrence time value is greater than the adjustable value.
Mallik, Saurav; Bhadra, Tapas; Mukherji, Ayan
2018-04-01
Association rule mining is an important technique for identifying interesting relationships between gene pairs in a biological data set. Earlier methods basically work on a single biological data set, and, in most cases, a single minimum support cutoff is applied globally, i.e., across all genesets/itemsets. To overcome this limitation, in this paper, we propose a dynamic threshold-based FP-growth rule mining algorithm that integrates gene expression, methylation and protein-protein interaction profiles based on weighted shortest distance to find novel associations among pairs of genes in multi-view data sets. For this purpose, we introduce three new thresholds, namely Distance-based Variable/Dynamic Supports (DVS), Distance-based Variable Confidences (DVC), and Distance-based Variable Lifts (DVL) for each rule, by integrating the co-expression, co-methylation, and protein-protein interactions present in the multi-omics data set. We develop the proposed algorithm utilizing these three novel multiple threshold measures. In the proposed algorithm, the values of DVS, DVC, and DVL are computed for each rule separately, and it is then verified whether the support, confidence, and lift of each evolved rule are greater than or equal to the corresponding individual DVS, DVC, and DVL values, respectively. If all three conditions hold for a rule, the rule is treated as a resultant rule. One of the major advantages of the proposed method over related state-of-the-art methods is that it considers both the quantitative and interactive significance of all pairwise genes belonging to each rule. Moreover, the proposed method generates fewer rules, takes less running time, and provides greater biological significance for the resultant top-ranking rules than previous methods.
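The per-rule acceptance test described above reduces to three simultaneous comparisons; a minimal sketch (names are ours):

```python
def rule_passes(support, confidence, lift, dvs, dvc, dvl):
    """Keep a candidate rule only if its support, confidence, and lift each
    meet the rule's own dynamic thresholds (DVS, DVC, DVL)."""
    return support >= dvs and confidence >= dvc and lift >= dvl
```

Because DVS, DVC, and DVL are computed per rule from the multi-omics distances, two rules with identical support can still be filtered differently.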
Joint Dictionary Learning for Multispectral Change Detection.
Lu, Xiaoqiang; Yuan, Yuan; Zheng, Xiangtao
2017-04-01
Change detection is one of the most important applications of remote sensing technology. It is a challenging task due to the obvious variations in the radiometric value of the spectral signature and the limited capability of utilizing spectral information. In this paper, an improved sparse coding method for change detection is proposed. The intuition of the proposed method is that unchanged pixels in different images can be well reconstructed by the joint dictionary, which encodes knowledge of unchanged pixels, while changed pixels cannot. First, a query image pair is projected onto the joint dictionary to constitute the knowledge of unchanged pixels. Then the reconstruction error is obtained to discriminate between the changed and unchanged pixels in the different images. To select proper thresholds for determining changed regions, an automatic threshold selection strategy is presented by minimizing the reconstruction errors of the changed pixels. Extensive experiments on multispectral data were conducted, and the experimental results, compared with state-of-the-art methods, prove the superiority of the proposed method. The contributions of the proposed method can be summarized as follows: 1) joint dictionary learning is proposed to explore the intrinsic information of different images for change detection, so that change detection can be cast as a sparse representation problem; to the authors' knowledge, few publications utilize joint dictionary learning in change detection; 2) an automatic threshold selection strategy is presented, which minimizes the reconstruction errors of the changed pixels without prior assumptions about the spectral signature; as a result, the threshold value provided by the proposed method can adapt to different data due to the characteristics of joint dictionary learning; and 3) the proposed method makes no prior assumption about the modeling and handling of the spectral signature and can therefore be adapted to different data.
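The central quantity is the reconstruction error of a stacked pixel pair under the joint dictionary; a sketch that substitutes ordinary least squares for sparse coding purely to keep it short (names are ours):

```python
import numpy as np

def reconstruction_errors(pairs, dictionary):
    """Per-pixel reconstruction error under a joint dictionary.
    pairs: (n, d) array of stacked pixel pairs; dictionary: (d, k) atoms.
    Sparse coding is replaced by least squares for brevity."""
    codes, *_ = np.linalg.lstsq(dictionary, pairs.T, rcond=None)
    residuals = pairs.T - dictionary @ codes
    return np.linalg.norm(residuals, axis=0)

# Pixel pairs well explained by the dictionary (unchanged) get near-zero
# error; changed pairs keep a large residual and can be thresholded.
```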
Chandrasekar, Vaishnavi; Janes, Dustin W; Saylor, David M; Hood, Alan; Bajaj, Akhil; Duncan, Timothy V; Zheng, Jiwen; Isayeva, Irada S; Forrey, Christopher; Casey, Brendan J
2018-01-01
A novel approach for rapid risk assessment of targeted leachables in medical device polymers is proposed and validated. Risk evaluation involves understanding the potential of these additives to migrate out of the polymer, and comparing their exposure to a toxicological threshold value. In this study, we propose that a simple diffusive transport model can be used to provide conservative exposure estimates for phase-separated color additives in device polymers. This model has been illustrated using a representative phthalocyanine color additive (manganese phthalocyanine, MnPC) and polymer (PEBAX 2533) system. Sorption experiments of MnPC into PEBAX were conducted in order to experimentally determine the diffusion coefficient, D = (1.6 ± 0.5) × 10⁻¹¹ cm²/s, and the matrix solubility limit, Cs = 0.089 wt.%, and model-predicted exposure values were validated by extraction experiments. Exposure values for the color additive were compared to a toxicological threshold for a sample risk assessment. Results from this study indicate that a diffusion model-based approach to predict exposure has considerable potential for use as a rapid, screening-level tool to assess the risk of color additives and other small molecule additives in medical device polymers.
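For a rough sense of how a diffusive transport model bounds exposure, the classical early-time result for release from a semi-infinite medium into a perfect sink is M(t) = 2·C0·√(D·t/π) per unit area (Crank). A hedged sketch under those assumed boundary conditions, not the authors' exact model:

```python
import math

def released_mass_per_area(c0, diffusivity, t):
    """Early-time additive mass released per unit surface area from a
    polymer treated as semi-infinite, with uniform initial concentration
    c0 and a perfect-sink boundary: M(t) = 2*c0*sqrt(D*t/pi).
    A conservative screening estimate, not a validated device model."""
    return 2.0 * c0 * math.sqrt(diffusivity * t / math.pi)

# With D around 1.6e-11 cm^2/s (the value measured above), release grows
# only as sqrt(t), so short-contact exposures remain small.
```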
NASA Astrophysics Data System (ADS)
Zhu, Yueying; Alexandre Wang, Qiuping; Li, Wei; Cai, Xu
2017-09-01
The formation of continuous opinion dynamics is investigated based on a virtual gambling mechanism where agents fight for a limited resource. We propose a model with agents holding opinions between -1 and 1. Agents are segregated into two cliques according to the sign of their opinions. Local communication happens only when the opinion distance between corresponding agents is no larger than a pre-defined confidence threshold. Theoretical analysis of special cases provides a deep understanding of the roles of both the resource allocation parameter and the confidence threshold in the formation of opinion dynamics. For a sparse network, the evolution of opinion dynamics is negligible in the region of low confidence threshold when mindless agents are absent. Numerical results also imply that, in the presence of economic agents, a high confidence threshold is required for apparent clustering of agents in opinion. Moreover, a consensus state is generated only when the following three conditions are satisfied simultaneously: mindless agents are absent, the resource is concentrated in one clique, and the confidence threshold tends to a critical value of 1.25 + 2/k_a (for k_a > 8/3, where k_a is the average number of friends of individual agents). For a fixed confidence threshold and resource allocation parameter, the most chaotic steady state of the dynamics occurs when the fraction of mindless agents is about 0.7. It is also demonstrated that economic agents are more likely to win at gambling than mindless ones. Finally, the importance of the three involved parameters in establishing the uncertainty of the model response is quantified in terms of Latin hypercube sampling-based sensitivity analysis.
NASA Astrophysics Data System (ADS)
Phillips, C. B.; Jerolmack, D. J.
2017-12-01
Understanding when coarse sediment begins to move in a river is essential for linking rivers to the evolution of mountainous landscapes. Unfortunately, the threshold of surface particle motion is notoriously difficult to measure in the field. However, recent studies have shown that the threshold of surface motion is empirically correlated with channel slope, a property that is easy to measure and readily available from the literature. These studies have thoroughly examined the mechanistic underpinnings behind the observed correlation and produced suitably complex models. Those models are difficult to implement for natural rivers using widely available data, and thus others have treated the empirical regression between slope and the threshold of motion as a predictive model. We note that none of the authors of the original studies exploring this correlation suggested their empirical regressions be used in a predictive fashion; nevertheless, these regressions between slope and the threshold of motion have found their way into numerous recent studies, engendering potentially spurious conclusions. We demonstrate that there are two significant problems with using these empirical equations for prediction: (1) the empirical regressions are based on a limited sampling of the phase space of bed-load rivers, and (2) the empirical measurements of bankfull and critical shear stresses are paired. The upshot is that the empirical relations' predictive capacity is limited to field sites drawn from the same region of the bed-load river phase space, and that the paired nature of the data introduces a spurious correlation when considering the ratio of bankfull to critical shear stress. Using a large compilation of bed-load river hydraulic geometry data, we demonstrate that the variation within independently measured values of the threshold of motion changes systematically with bankfull Shields stress and not channel slope.
Additionally, using several recent datasets, we highlight the potential pitfalls of using simplistic empirical regressions to predict the threshold of motion, showing that while these concerns could be construed as subtle, the resulting implications can be substantial.
Akagunduz, Ozlem Ozkaya; Savas, Recep; Yalman, Deniz; Kocacelebi, Kenan; Esassolak, Mustafa
2015-11-01
To evaluate the predictive value of adaptive threshold-based metabolic tumor volume (MTV), maximum standardized uptake value (SUVmax) and maximum lean body mass corrected SUV (SULmax) measured on pretreatment positron emission tomography and computed tomography (PET/CT) imaging in head and neck cancer patients treated with definitive radiotherapy/chemoradiotherapy. Pretreatment PET/CT of the 62 patients with locally advanced head and neck cancer who were treated consecutively between May 2010 and February 2013 were reviewed retrospectively. The maximum FDG uptake of the primary tumor was defined according to SUVmax and SULmax. Multiple threshold levels between 60% and 10% of the SUVmax and SULmax were tested with intervals of 5% to 10% in order to define the most suitable threshold value for the metabolic activity of each patient's tumor (adaptive threshold). MTV was calculated according to this value. We evaluated the relationship of mean values of MTV, SUVmax and SULmax with treatment response, local recurrence, distant metastasis and disease-related death. Receiver-operating characteristic (ROC) curve analysis was done to obtain optimal predictive cut-off values for MTV and SULmax which were found to have a predictive value. Local recurrence-free (LRFS), disease-free (DFS) and overall survival (OS) were examined according to these cut-offs. Forty six patients had complete response, 15 had partial response, and 1 had stable disease 6 weeks after the completion of treatment. Median follow-up of the entire cohort was 18 months. Of 46 complete responders 10 had local recurrence, and of 16 partial or no responders 10 had local progression. Eighteen patients died. Adaptive threshold-based MTV had significant predictive value for treatment response (p=0.011), local recurrence/progression (p=0.050), and disease-related death (p=0.024). SULmax had a predictive value for local recurrence/progression (p=0.030). 
ROC curve analysis revealed a cut-off value of 14.00 mL for MTV and 10.15 for SULmax. Three-year LRFS and DFS rates were significantly lower in patients with MTV ≥ 14.00 mL (p=0.026 and p=0.018, respectively) and SULmax ≥ 10.15 (p=0.017 and p=0.022, respectively). SULmax did not have a significant predictive value for OS, whereas MTV did (p=0.025). Adaptive threshold-based MTV and SULmax could have a role in predicting local control and survival in head and neck cancer patients. Copyright © 2015 Elsevier Inc. All rights reserved.
The impact of manual threshold selection in medical additive manufacturing.
van Eijnatten, Maureen; Koivisto, Juha; Karhu, Kalle; Forouzanfar, Tymour; Wolff, Jan
2017-04-01
Medical additive manufacturing requires standard tessellation language (STL) models. Such models are commonly derived from computed tomography (CT) images using thresholding. Threshold selection can be performed manually or automatically. The aim of this study was to assess the impact of manual and default threshold selection on the reliability and accuracy of skull STL models using different CT technologies. One female and one male human cadaver head were imaged using multi-detector row CT, dual-energy CT, and two cone-beam CT scanners. Four medical engineers manually thresholded the bony structures on all CT images. The lowest and highest selected mean threshold values and the default threshold value were used to generate skull STL models. Geometric variations between all manually thresholded STL models were calculated. Furthermore, in order to calculate the accuracy of the manually and default thresholded STL models, all STL models were superimposed on an optical scan of the dry female and male skulls ("gold standard"). The intra- and inter-observer variability of the manual threshold selection was good (intra-class correlation coefficients >0.9). All engineers selected grey values closer to soft tissue to compensate for bone voids. Geometric variations between the manually thresholded STL models were 0.13 mm (multi-detector row CT), 0.59 mm (dual-energy CT), and 0.55 mm (cone-beam CT). All STL models demonstrated inaccuracies ranging from -0.8 to +1.1 mm (multi-detector row CT), -0.7 to +2.0 mm (dual-energy CT), and -2.3 to +4.8 mm (cone-beam CT). This study demonstrates that manual threshold selection results in better STL models than default thresholding. The use of dual-energy CT and cone-beam CT technology in its present form does not deliver reliable or accurate STL models for medical additive manufacturing. New approaches are required that are based on pattern recognition and machine learning algorithms.
Robust crop and weed segmentation under uncontrolled outdoor illumination
USDA-ARS?s Scientific Manuscript database
A new machine vision algorithm for weed detection was developed from RGB color model images. The steps of the detection algorithm include excessive green conversion, threshold value computation by statistical analysis, adaptive image segmentation by adjusting the threshold value, median filter, ...
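The "excessive green conversion" step referenced above is conventionally the excess-green index ExG = 2G − R − B followed by a threshold; a minimal sketch (the statistical, adaptively adjusted threshold computation itself is not reproduced here):

```python
import numpy as np

def excess_green(rgb):
    """Excess-green index ExG = 2G - R - B for an H x W x 3 RGB array; a
    standard first step for separating plant pixels from soil background."""
    rgb = rgb.astype(float)
    return 2.0 * rgb[..., 1] - rgb[..., 0] - rgb[..., 2]

def plant_mask(rgb, threshold=0.0):
    """Binary vegetation mask: ExG above the chosen threshold. In the
    manuscript the threshold is computed statistically and adjusted
    adaptively; a fixed value is used here only for illustration."""
    return excess_green(rgb) > threshold
```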
Method for early detection of cooling-loss events
Bermudez, Sergio A.; Hamann, Hendrik; Marianno, Fernando J.
2015-06-30
A method of detecting cooling-loss events early is provided. The method includes defining a relative humidity limit and a change threshold for a given space; measuring relative humidity in the given space; determining, with a processing unit, whether the measured relative humidity is within the defined relative humidity limit; generating a warning in the event the measured relative humidity is outside the defined relative humidity limit; determining whether the change in the measured relative humidity is less than the defined change threshold for the given space; and generating an alarm in the event the change is greater than the defined change threshold.
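The two-stage warning/alarm logic of the claim can be sketched directly; treating the RH limit as an upper bound is our assumption, since the claim only says "outside the defined limit":

```python
def humidity_events(rh, previous_rh, rh_limit, change_threshold):
    """Returns (warning, alarm). A warning fires when measured RH is
    outside the defined limit (modeled here as exceeding an upper bound,
    an assumption); an alarm fires when the change in RH between readings
    exceeds the defined change threshold."""
    warning = rh > rh_limit
    alarm = abs(rh - previous_rh) > change_threshold
    return warning, alarm
```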
Method for early detection of cooling-loss events
Bermudez, Sergio A.; Hamann, Hendrik F.; Marianno, Fernando J.
2015-12-22
A method of detecting cooling-loss events early is provided. The method includes defining a relative humidity limit and a change threshold for a given space; measuring relative humidity in the given space; determining, with a processing unit, whether the measured relative humidity is within the defined relative humidity limit; generating a warning in the event the measured relative humidity is outside the defined relative humidity limit; determining whether the change in the measured relative humidity is less than the defined change threshold for the given space; and generating an alarm in the event the change is greater than the defined change threshold.
Early Leakage Protection System of LPG (Liquefied Petroleum Gas) Based on ATMega 16 Microcontroller
NASA Astrophysics Data System (ADS)
Sriwati; Ikhsan Ilahi, Nur; Musrawati; Baco, Syarifuddin; Suyuti'Andani Achmad, Ansar; Umrianah, Ejah
2018-04-01
LPG (Liquefied Petroleum Gas) is a hydrocarbon gas produced by oil and gas refineries, with propane (C3H8) and butane (C4H10) as its major components. The flammable range describes the concentrations of the gas mixed with air that can ignite. The Lower Explosive Limit (LEL) is the minimum concentration of fuel vapor in air at which the gas will burn if an ignition source is present; the Upper Explosive Limit (UEL) is the maximum concentration of fuel vapor in air at which the gas will still burn. A protection system is a mechanism that defends the people, equipment, and buildings in the protected area. The goal of this research is to design a protection system against the consequences of LPG leakage based on the ATmega16 microcontroller. The method used in this research is to reduce the concentration of leaked LPG and to switch off the power source when the leaked LPG approaches the explosive limit. The designed protection system works accurately between 200 ppm and 10000 ppm, which is still below the explosive threshold, thereby providing early protection against the consequences of an LPG leak.
Pre-impact fall detection system using dynamic threshold and 3D bounding box
NASA Astrophysics Data System (ADS)
Otanasap, Nuth; Boonbrahm, Poonpong
2017-02-01
Fall prevention and detection systems must overcome many challenges to be efficient. Among the difficult problems in vision-based systems are obtrusion, occlusion and overlay. Other associated issues are privacy, cost, noise, computational complexity and the definition of threshold values. Vision-based estimation of human motion usually involves partial overlay, caused either by the viewing direction between the camera and objects or body parts, and these issues have to be taken into consideration. This paper proposes a dynamic threshold-based and bounding box posture analysis method using a multiple-Kinect-camera setup for human posture analysis and fall detection. The proposed work uses only two Kinect cameras for acquiring distributed values and differentiating between normal activities and falls. If the peak value of head velocity is greater than the dynamic threshold value, bounding box posture analysis is used to confirm fall occurrence. Furthermore, information captured by multiple Kinects placed at right angles addresses the skeleton overlay problem of a single Kinect. This work contributes a fusion of multiple Kinect-based skeletons, based on dynamic thresholds and bounding box posture analysis, which is the only such research work reported so far.
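The cascade described in the abstract (velocity gate, then posture check) can be sketched as follows; the wider-than-tall bounding-box criterion for a lying posture is our assumption, as the abstract does not spell out the posture rule:

```python
def detect_fall(peak_head_velocity, dynamic_threshold, bbox_width, bbox_height):
    """Two-stage fall test: a head-velocity peak above the dynamic
    threshold triggers the posture check; a 3D bounding box wider than it
    is tall is taken here as the lying-posture criterion (an assumption,
    since the paper's exact posture rule is not given in the abstract)."""
    if peak_head_velocity <= dynamic_threshold:
        return False  # no rapid head motion: no fall candidate
    return bbox_width > bbox_height
```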
The conventional tuning fork as a quantitative tool for vibration threshold.
Alanazy, Mohammed H; Alfurayh, Nuha A; Almweisheer, Shaza N; Aljafen, Bandar N; Muayqil, Taim
2018-01-01
This study was undertaken to describe a method for quantifying vibration when using a conventional tuning fork (CTF) in comparison to a Rydel-Seiffer tuning fork (RSTF) and to provide reference values. Vibration thresholds at index finger and big toe were obtained in 281 participants. Spearman's correlations were performed. Age, weight, and height were analyzed for their covariate effects on vibration threshold. Reference values at the fifth percentile were obtained by quantile regression. The correlation coefficients between CTF and RSTF values at finger/toe were 0.59/0.64 (P = 0.001 for both). Among covariates, only age had a significant effect on vibration threshold. Reference values for CTF at finger/toe for the age groups 20-39 and 40-60 years were 7.4/4.9 and 5.8/4.6 s, respectively. Reference values for RSTF at finger/toe for the age groups 20-39 and 40-60 years were 6.9/5.5 and 6.2/4.7, respectively. CTF provides quantitative values that are as good as those provided by RSTF. Age-stratified reference data are provided. Muscle Nerve 57: 49-53, 2018. © 2017 Wiley Periodicals, Inc.
ERIC Educational Resources Information Center
Johnson, Brittney; McCracken, I. Moriah
2016-01-01
In 2015, threshold concepts formed the foundation of two disciplinary documents: The "ACRL Framework for Information Literacy" (2015) and "Naming What We Know: Threshold Concepts of Writing Studies" (2015). While there is no consensus in the fields about the value of threshold concepts in teaching, reading the six Frames in the…
A threshold method for immunological correlates of protection
2013-01-01
Background Immunological correlates of protection are biological markers such as disease-specific antibodies which correlate with protection against disease and which are measurable with immunological assays. It is common in vaccine research and in setting immunization policy to rely on threshold values for the correlate where the accepted threshold differentiates between individuals who are considered to be protected against disease and those who are susceptible. Examples where thresholds are used include development of a new generation 13-valent pneumococcal conjugate vaccine which was required in clinical trials to meet accepted thresholds for the older 7-valent vaccine, and public health decision making on vaccination policy based on long-term maintenance of protective thresholds for Hepatitis A, rubella, measles, Japanese encephalitis and others. Despite widespread use of such thresholds in vaccine policy and research, few statistical approaches have been formally developed which specifically incorporate a threshold parameter in order to estimate the value of the protective threshold from data. Methods We propose a 3-parameter statistical model called the a:b model which incorporates parameters for a threshold and constant but different infection probabilities below and above the threshold estimated using profile likelihood or least squares methods. Evaluation of the estimated threshold can be performed by a significance test for the existence of a threshold using a modified likelihood ratio test which follows a chi-squared distribution with 3 degrees of freedom, and confidence intervals for the threshold can be obtained by bootstrapping. The model also permits assessment of relative risk of infection in patients achieving the threshold or not. Goodness-of-fit of the a:b model may be assessed using the Hosmer-Lemeshow approach. The model is applied to 15 datasets from published clinical trials on pertussis, respiratory syncytial virus and varicella. 
Results Highly significant thresholds with p-values less than 0.01 were found for 13 of the 15 datasets. Considerable variability was seen in the widths of confidence intervals. Relative risks indicated around 70% or better protection in 11 datasets, implying that the estimated thresholds correspond to strong protection. Goodness-of-fit was generally acceptable. Conclusions The a:b model offers a formal statistical method for estimating thresholds that differentiate susceptible from protected individuals, an estimation which has previously depended on putative statements based on visual inspection of data. PMID:23448322
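A least-squares version of the a:b fit can be sketched in a few lines: scan candidate thresholds, estimate constant infection probabilities a (below) and b (above), and keep the threshold with minimum squared error (names are ours; the paper also describes a profile likelihood variant):

```python
def fit_ab_model(assay_values, infected):
    """Least-squares fit of a 3-parameter a:b model: a threshold t with
    constant infection probabilities a below t and b at or above t.
    assay_values: correlate measurements; infected: 0/1 outcomes."""
    pairs = sorted(zip(assay_values, infected))
    best = None
    for i in range(1, len(pairs)):
        t = pairs[i][0]  # candidate threshold at an observed value
        below = [y for x, y in pairs if x < t]
        above = [y for x, y in pairs if x >= t]
        if not below or not above:
            continue
        a = sum(below) / len(below)
        b = sum(above) / len(above)
        sse = sum((y - a) ** 2 for y in below) + sum((y - b) ** 2 for y in above)
        if best is None or sse < best[0]:
            best = (sse, t, a, b)
    _, t, a, b = best
    return t, a, b
```

Confidence intervals for t would then be obtained by bootstrapping, as the abstract describes.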
Achieving metrological precision limits through postselection
NASA Astrophysics Data System (ADS)
Alves, G. Bié; Pimentel, A.; Hor-Meyll, M.; Walborn, S. P.; Davidovich, L.; Filho, R. L. de Matos
2017-01-01
Postselection strategies have been proposed with the aim of amplifying weak signals, which may help to overcome detection thresholds associated with technical noise in high-precision measurements. Here we use an optical setup to experimentally explore two different postselection protocols for the estimation of a small parameter: a weak-value amplification procedure and an alternative method that does not provide amplification but nonetheless is shown to be more robust for the sake of parameter estimation. Each technique leads approximately to the saturation of quantum limits for the estimation precision, expressed by the Cramér-Rao bound. For both situations, we show that parameter estimation is improved when the postselection statistics are considered together with the measurement device.
Noh, Ji-Woong; Park, Byoung-Sun; Kim, Mee-Young; Lee, Lim-Kyu; Yang, Seung-Min; Lee, Won-Deok; Shin, Yong-Sub; Kang, Ji-Hye; Kim, Ju-Hyun; Lee, Jeong-Uk; Kwak, Taek-Yong; Lee, Tae-Hyun; Kim, Ju-Young; Kim, Junghwan
2015-06-01
[Purpose] This study investigated two-point discrimination (TPD) and the electrical sensory threshold of the blind to define the effect of using Braille on the tactile and electrical senses. [Subjects and Methods] Twenty-eight blind participants were divided equally into a text-reading and a Braille-reading group. We measured tactile sensory and electrical thresholds using the TPD method and a transcutaneous electrical nerve stimulator. [Results] The left palm TPD values were significantly different between the groups. The values of the electrical sensory threshold in the left hand, the electrical pain threshold in the left hand, and the electrical pain threshold in the right hand were significantly lower in the Braille group than in the text group. [Conclusion] These findings make it difficult to explain the difference in tactility between groups, excluding both palms. However, our data show that using Braille can enhance development of the sensory median nerve in the blind, particularly in terms of the electrical sensory and pain thresholds.
Noh, Ji-Woong; Park, Byoung-Sun; Kim, Mee-Young; Lee, Lim-Kyu; Yang, Seung-Min; Lee, Won-Deok; Shin, Yong-Sub; Kang, Ji-Hye; Kim, Ju-Hyun; Lee, Jeong-Uk; Kwak, Taek-Yong; Lee, Tae-Hyun; Kim, Ju-Young; Kim, Junghwan
2015-01-01
[Purpose] This study investigated two-point discrimination (TPD) and the electrical sensory threshold of the blind to define the effect of using Braille on the tactile and electrical senses. [Subjects and Methods] Twenty-eight blind participants were divided equally into a text-reading and a Braille-reading group. We measured tactile sensory and electrical thresholds using the TPD method and a transcutaneous electrical nerve stimulator. [Results] The left palm TPD values were significantly different between the groups. The values of the electrical sensory threshold in the left hand, the electrical pain threshold in the left hand, and the electrical pain threshold in the right hand were significantly lower in the Braille group than in the text group. [Conclusion] These findings make it difficult to explain the difference in tactility between groups, excluding both palms. However, our data show that using Braille can enhance development of the sensory median nerve in the blind, particularly in terms of the electrical sensory and pain thresholds. PMID:26180348
Smolensky, Michael H; Reinberg, Alain E; Sackett-Lundeen, Linda
2017-01-01
The circadian time structure (CTS) and its disruption by rotating and nightshift schedules relative to work performance, accident risk, and health/wellbeing have long been areas of occupational medicine research. Yet, there has been little exploration of the relevance of the CTS to setting short-term, time-weighted, and ceiling threshold limit values (TLVs); conducting employee biological monitoring (BM); and establishing normative reference biological exposure indices (BEIs). Numerous publications during the past six decades document that the CTS substantially affects the disposition - absorption, distribution, metabolism, and elimination - and effects of medications. Additionally, laboratory animal and human studies verify that tolerance to chemical, biological (contagious), and physical agents can differ extensively according to the circadian time of exposure. Because of slow and usually incomplete CTS adjustment by rotating and permanent nightshift workers, occupational chemical and other contaminant encounters occur during a different circadian stage than for dayshift workers. Thus, the intended protection of some TLVs when working the nightshift compared to the dayshift might be insufficient, especially in high-risk settings. The CTS is germane to employee BM in that large-amplitude, predictable-in-time 24 h variation can occur in the concentration of monitored chemical contaminants and their metabolites in urine, blood, and saliva, plus biomarkers indicative of adverse xenobiotic exposure. The concept of biological time-qualified (for rhythms) reference values, currently of interest in clinical laboratory pathology practice, is seemingly applicable to industrial medicine as circadian time- and workshift-specific BEIs to improve surveillance of night workers in particular. Furthermore, BM as serial assessments performed frequently both during and off work, exemplified by employee self-measurement of lung function using a small portable peak expiratory flow meter, can readily identify intolerance before the induction of pathology.
Regional rainfall thresholds for landslide occurrence using a centenary database
NASA Astrophysics Data System (ADS)
Vaz, Teresa; Luís Zêzere, José; Pereira, Susana; Cruz Oliveira, Sérgio; Garcia, Ricardo A. C.; Quaresma, Ivânia
2018-04-01
This work proposes a comprehensive method to assess rainfall thresholds for landslide initiation using a centenary landslide database associated with a single centenary daily rainfall data set. The method is applied to the Lisbon region and includes the rainfall return period analysis that was used to identify the critical rainfall combination (cumulated rainfall duration) related to each landslide event. The spatial representativeness of the reference rain gauge is evaluated and the rainfall thresholds are assessed and calibrated using the receiver operating characteristic (ROC) metrics. Results show that landslide events located up to 10 km from the rain gauge can be used to calculate the rainfall thresholds in the study area; however, these thresholds may be used with acceptable confidence up to 50 km from the rain gauge. The rainfall thresholds obtained using linear and potential regression perform well in ROC metrics. However, the intermediate thresholds based on the probability of landslide events established in the zone between the lower-limit threshold and the upper-limit threshold are much more informative as they indicate the probability of landslide event occurrence given rainfall exceeding the threshold. This information can be easily included in landslide early warning systems, especially when combined with the probability of rainfall above each threshold.
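The ROC-based calibration described above can be sketched in a few lines: each candidate daily-rainfall threshold is treated as an alarm rule, scored against the landslide-event record, and the candidate maximizing a skill score such as Youden's J (TPR − FPR) is retained. This is an illustrative reconstruction with toy data, not the authors' code or their actual threshold form.

```python
def roc_point(rain_mm, landslide_days, threshold):
    """(true-positive rate, false-positive rate) of a daily-rainfall alarm."""
    tp = fp = fn = tn = 0
    for day, rain in enumerate(rain_mm):
        alarm = rain >= threshold       # alarm day: rainfall at/above threshold
        event = day in landslide_days   # hit: an alarm day with a landslide event
        if alarm and event:
            tp += 1
        elif alarm:
            fp += 1
        elif event:
            fn += 1
        else:
            tn += 1
    tpr = tp / (tp + fn) if (tp + fn) else 0.0
    fpr = fp / (fp + tn) if (fp + tn) else 0.0
    return tpr, fpr

def youden_j(rain_mm, landslide_days, threshold):
    tpr, fpr = roc_point(rain_mm, landslide_days, threshold)
    return tpr - fpr

def best_threshold(rain_mm, landslide_days, candidates):
    """Candidate threshold maximizing Youden's J = TPR - FPR."""
    return max(candidates, key=lambda t: youden_j(rain_mm, landslide_days, t))
```

In practice the candidate set would be rainfall-duration combinations rather than single daily totals, and lower/upper probabilistic thresholds would bracket the optimum.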
48 CFR 41.401 - Monthly and annual review.
Code of Federal Regulations, 2010 CFR
2010-10-01
... values exceeding the simplified acquisition threshold, on an annual basis. Annual reviews of accounts with annual values at or below the simplified acquisition threshold shall be conducted when deemed... services to each facility under the utility's most economical, applicable rate and to examine competitive...
NASA Technical Reports Server (NTRS)
Waller, Jess M.; Williams, James H.; Fries, Joseph (Technical Monitor)
1999-01-01
The permeation resistance of chlorinated polyethylene (CPE) used in totally encapsulating chemical protective suits against the aerospace fuels hydrazine, monomethylhydrazine, and uns-dimethylhydrazine was determined by measuring the breakthrough time (BT) and time-averaged vapor transmission rate (VTR) using procedures consistent with ASTM F 739 and ASTM F 1383. Two exposure scenarios were simulated: a 2 hour (h) fuel vapor exposure, and a liquid fuel "splash" followed by a 2 h vapor exposure. To simulate internal suit pressure during operation, a positive differential pressure of 0.3 in. water (75 Pa) on the collection side of the permeation apparatus was used. Using the available data, a model was developed to estimate propellant concentrations inside an air-line fed, totally encapsulating chemical protective suit. Concentrations were calculated under simulated conditions of fixed vapor transmission rate, variable breathing air flow rate, and variable splash exposure area. Calculations showed that the maximum allowable permeation rates of hydrazine fuels through CPE were of the order of 0.05 to 0.08 ng/sq cm min for encapsulating suits with low breathing air flow rates (of the order of 5 scfm or 140 L min-1). Above these permeation rates, the 10 parts-per-billion (ppb) threshold limit value time-weighted average could be exceeded. To evaluate suit performance at 10 ppb threshold limit value/time-weighted average concentrations, use of a sensitive analytical method, such as cation-exchange high-performance liquid chromatography with amperometric detection, was found to be essential. The analytical detection limit determines the lowest measurable VTR, which in turn governs the lowest permeant concentration that can be calculated inside the totally encapsulating chemical protective suit.
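A steady-state mixing model consistent with the numbers quoted above can be sketched as follows: vapor permeating at rate P (ng/cm²·min) through suit area A (cm²) is diluted by the breathing-air flow Q (L/min), and the resulting mass concentration is converted to ppb (v/v). This is my reconstruction, not the paper's model; the suit area and the 25 °C molar volume are assumptions.

```python
MOLAR_VOLUME_L = 24.45   # L/mol of ideal gas at 25 degC, 1 atm (assumed)
MW_HYDRAZINE = 32.05     # g/mol

def suit_concentration_ppb(perm_rate_ng_cm2_min, area_cm2, airflow_L_min,
                           mw=MW_HYDRAZINE):
    """Steady-state in-suit vapor concentration in ppb (v/v)."""
    ng_per_L = perm_rate_ng_cm2_min * area_cm2 / airflow_L_min  # mass concentration
    return ng_per_L * MOLAR_VOLUME_L / mw                       # ng/L -> ppb
```

With an assumed suit area of ~2.8 m² (28,000 cm²) at 140 L/min, a permeation rate of 0.065 ng/cm²·min lands near the 10 ppb TLV-TWA, consistent with the 0.05-0.08 ng/sq cm min range reported in the abstract.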
Kapellusch, Jay M; Gerr, Frederic E; Malloy, Elizabeth J; Garg, Arun; Harris-Adamson, Carisa; Bao, Stephen S; Burt, Susan E; Dale, Ann Marie; Eisen, Ellen A; Evanoff, Bradley A; Hegmann, Kurt T; Silverstein, Barbara A; Theise, Matthew S; Rempel, David M
2014-01-01
Objective This paper aimed to quantify exposure–response relationships between the American Conference of Governmental Industrial Hygienists’ (ACGIH) threshold limit value (TLV) for hand-activity level (HAL) and incidence of carpal tunnel syndrome (CTS). Methods Manufacturing and service workers previously studied by six research institutions had their data combined and re-analyzed. CTS cases were defined by symptoms and abnormal nerve conduction. Hazard ratios (HR) were calculated using proportional hazards regression after adjusting for age, gender, body mass index, and CTS predisposing conditions. Results The longitudinal study comprised 2751 incident-eligible workers, followed prospectively for up to 6.4 years and contributing 6243 person-years of data. Associations were found between CTS and TLV for HAL both as a continuous variable [HR 1.32 per unit, 95% confidence interval (95% CI) 1.11–1.57] and when categorized using the ACGIH action limit (AL) and TLV. Those between the AL and TLV and above the TLV had HR of 1.7 (95% CI 1.2–2.5) and 1.5 (95% CI 1.0–2.1), respectively. As independent variables (in the same adjusted model) the HR for peak force (PF) and HAL were 1.14 per unit (95% CI 1.05–1.25), and 1.04 per unit (95% CI 0.93–1.15), respectively. Conclusion Those with exposures above the AL were at increased risk of CTS, but there was no further increase in risk for workers above the TLV. This suggests that the current AL may not be sufficiently protective of workers. Combinations of PF and HAL are useful for predicting risk of CTS. PMID:25266844
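The per-unit hazard ratios above follow the standard Cox proportional-hazards interpretation: a HR of 1.32 per unit of the TLV for HAL scale compounds multiplicatively over a k-unit exposure difference. A small sketch of that reading (illustrative arithmetic, not code from the study):

```python
def hazard_ratio_for_difference(hr_per_unit, k_units):
    """HR between two workers whose exposure differs by k_units
    under a proportional-hazards model with a log-linear exposure term."""
    return hr_per_unit ** k_units

# e.g. a 2-unit difference on the TLV for HAL scale implies roughly
# 1.32**2, i.e. about 1.74x the baseline hazard of CTS.
```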
Occupational Heat Stress Impacts on Health and Productivity in a Steel Industry in Southern India.
Krishnamurthy, Manikandan; Ramalingam, Paramesh; Perumal, Kumaravel; Kamalakannan, Latha Perumal; Chinnadurai, Jeremiah; Shanmugam, Rekha; Srinivasan, Krishnan; Venugopal, Vidhya
2017-03-01
Workers laboring in steel industries in tropical settings with high ambient temperatures are subjected to thermally stressful environments that create well-known risks of heat-related illnesses and limit workers' productivity. A cross-sectional study undertaken in a steel industry in a city nicknamed "Steel City" in Southern India assessed thermal stress by wet bulb globe temperature (WBGT) and level of dehydration from urine color and urine specific gravity. A structured questionnaire captured self-reported heat-related health symptoms of workers. Some 90% of WBGT measurements (27.2-41.7°C) were higher than recommended threshold limit values for heavy and moderate workloads, and radiational heat from processes was very high in the blooming-mill/coke-oven area (67.6°C globe temperature). Heat-related health concerns were widespread among workers, with excessive sweating, fatigue, and tiredness reported by 50% of workers. Productivity loss was reported significantly more often by workers with direct heat exposures than by those with indirect heat exposures (χ² = 26.1258, degrees of freedom = 1, p < 0.001). A change in urine color was 7.4 times more likely among workers exposed to WBGTs above threshold limit values (TLVs). This preliminary evidence shows that high heat exposures and heavy workloads adversely affect workers' health and reduce their work capacities. Health and productivity risks in developing tropical country work settings can be further aggravated by the temperature rise predicted under climate change unless appropriate interventions are made. Apart from industries enhancing welfare facilities and designing control interventions, further physiological studies with a seasonal approach and interventional studies are needed to strengthen the evidence base for comprehensive policies to protect workers employed in high-heat industries.
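The WBGT index used in this study combines natural wet-bulb (Tnwb), globe (Tg), and dry-bulb (Tdb) temperatures with the standard ISO 7243 weightings; a minimal sketch for screening readings against a workload-specific TLV (the example temperatures below are illustrative, not the study's data):

```python
def wbgt(t_nwb, t_g, t_db=None):
    """WBGT in degC: outdoor (solar-load) form if t_db is given,
    otherwise the indoor / no-solar-load form (ISO 7243 weightings)."""
    if t_db is None:
        return 0.7 * t_nwb + 0.3 * t_g
    return 0.7 * t_nwb + 0.2 * t_g + 0.1 * t_db

def exceeds_tlv(t_nwb, t_g, tlv_c, t_db=None):
    """True if the computed WBGT exceeds the workload-specific TLV."""
    return wbgt(t_nwb, t_g, t_db) > tlv_c
```

For instance, a globe temperature of 67.6 °C like the blooming-mill/coke-oven reading dominates the indoor form and pushes WBGT far past a moderate-workload TLV in the high-20s °C.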
Higher criticism thresholding: Optimal feature selection when useful features are rare and weak
Donoho, David; Jin, Jiashun
2008-01-01
In important application fields today—genomics and proteomics are examples—selecting a small subset of useful features is crucial for success of Linear Classification Analysis. We study feature selection by thresholding of feature Z-scores and introduce a principle of threshold selection, based on the notion of higher criticism (HC). For i = 1, 2, …, p, let πi denote the two-sided P-value associated with the ith feature Z-score and π(i) denote the ith order statistic of the collection of P-values. The HC threshold is the absolute Z-score corresponding to the P-value maximizing the HC objective (i/p − π(i))/√(i/p(1 − i/p)). We consider a rare/weak (RW) feature model, where the fraction of useful features is small and the useful features are each too weak to be of much use on their own. HC thresholding (HCT) has interesting behavior in this setting, with an intimate link between maximizing the HC objective and minimizing the error rate of the designed classifier, and very different behavior from popular threshold selection procedures such as false discovery rate thresholding (FDRT). In the most challenging RW settings, HCT uses an unconventionally low threshold; this keeps the missed-feature detection rate under better control than FDRT and yields a classifier with improved misclassification performance. Replacing cross-validated threshold selection in the popular Shrunken Centroid classifier with the computationally less expensive and simpler HCT reduces the variance of the selected threshold and the error rate of the constructed classifier. Results on standard real datasets and in asymptotic theory confirm the advantages of HCT. PMID:18815365
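The HC recipe above is direct to implement: sort the two-sided p-values, maximize the HC objective over the order statistics, and return the |Z| cutoff whose p-value attains the maximum. A minimal sketch follows; restricting the search to the smallest α₀ fraction of p-values is a common practical convention I have assumed, not a detail from the abstract.

```python
import math

def normal_two_sided_p(z):
    """Two-sided p-value of a standard-normal Z-score."""
    return math.erfc(abs(z) / math.sqrt(2.0))

def hc_threshold(z_scores, alpha0=0.10):
    """|Z| cutoff chosen by higher criticism thresholding."""
    p = sorted(normal_two_sided_p(z) for z in z_scores)
    n = len(p)
    best, best_p = -math.inf, p[0]
    for i, pi in enumerate(p, start=1):
        if pi > alpha0:                 # assumed search restriction
            break
        frac = i / n
        if not 0.0 < frac < 1.0:
            continue
        hc = (frac - pi) / math.sqrt(frac * (1.0 - frac))
        if hc > best:
            best, best_p = hc, pi
    # invert the p-value map by bisection to recover the |Z| threshold
    lo, hi = 0.0, 40.0
    for _ in range(100):
        mid = (lo + hi) / 2.0
        if normal_two_sided_p(mid) > best_p:
            lo = mid
        else:
            hi = mid
    return lo
```

Features whose |Z| exceeds the returned cutoff are kept; in the rare/weak regime this cutoff sits notably lower than an FDR-style cutoff would.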
Nadkarni, Tanvi N.; Andreoli, Matthew J.; Nair, Veena A.; Yin, Peng; Young, Brittany M.; Kundu, Bornali; Pankratz, Joshua; Radtke, Andrew; Holdsworth, Ryan; Kuo, John S.; Field, Aaron S.; Baskaya, Mustafa K.; Moritz, Chad H.; Meyerand, M. Elizabeth; Prabhakaran, Vivek
2014-01-01
Background and purpose Functional magnetic resonance imaging (fMRI) is a non-invasive pre-surgical tool used to assess localization and lateralization of language function in brain tumor and vascular lesion patients in order to guide neurosurgeons as they devise a surgical approach to treat these lesions. We investigated the effect of varying the statistical thresholds as well as the type of language tasks on functional activation patterns and language lateralization. We hypothesized that language lateralization indices (LIs) would be threshold- and task-dependent. Materials and methods Imaging data were collected from brain tumor patients (n = 67, average age 48 years) and vascular lesion patients (n = 25, average age 43 years) who received pre-operative fMRI scanning. Both patient groups performed expressive (antonym and/or letter-word generation) and receptive (tumor patients performed text-reading; vascular lesion patients performed text-listening) language tasks. A control group (n = 25, average age 45 years) performed the letter-word generation task. Results Brain tumor patients showed left-lateralization during the antonym-word generation and text-reading tasks at high threshold values and bilateral activation during the letter-word generation task, irrespective of the threshold values. Vascular lesion patients showed left-lateralization during the antonym and letter-word generation, and text-listening tasks at high threshold values. Conclusion Our results suggest that the type of task and the applied statistical threshold influence LI and that the threshold effects on LI may be task-specific. Thus identifying critical functional regions and computing LIs should be conducted on an individual subject basis, using a continuum of threshold values with different tasks to provide the most accurate information for surgical planning to minimize post-operative language deficits. PMID:25685705
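The lateralization index behind these results is conventionally LI = (L − R)/(L + R), where L and R count suprathreshold voxels in homologous left- and right-hemisphere regions; the threshold dependence the authors report falls out of how those counts shrink at high cutoffs. A sketch (illustrative, not the study's pipeline):

```python
def lateralization_index(left_stats, right_stats, z_thresh):
    """LI in [-1, 1]; positive = left-lateralized. Counts voxels whose
    activation statistic exceeds z_thresh on each side."""
    L = sum(1 for v in left_stats if v > z_thresh)
    R = sum(1 for v in right_stats if v > z_thresh)
    return (L - R) / (L + R) if (L + R) else 0.0

# Sweeping a continuum of thresholds, as the authors recommend, rather
# than reporting a single cutoff:
# [lateralization_index(L, R, t) for t in (2.3, 3.1, 4.0, 5.0)]
```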
Vemer, Pepijn; Rutten-van Mölken, Maureen P M H
2011-10-01
Recently, several checklists have systematically assessed factors that affect the transferability of cost-effectiveness (CE) studies between jurisdictions. The role of the threshold value for a QALY has been given little consideration in these checklists, even though the importance of a factor as a cause of between-country differences in CE depends on this threshold. In this paper, we study the impact of the willingness-to-pay (WTP) per QALY on the importance of transferability factors in the case of smoking cessation support (SCS). We investigated, for several values of the WTP, how differences between six countries affect the incremental net monetary benefit (INMB) of SCS. The investigated factors were demography, smoking prevalence, mortality, epidemiology and costs of smoking-related diseases, resource use and unit costs of SCS, utility weights, and discount rates. We found that as the WTP decreased, factors that mainly affect health outcomes became less important and factors that mainly affect costs became more important. With a WTP below
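The incremental net monetary benefit underlying this comparison is the standard INMB = WTP × ΔQALY − ΔCost. A toy sketch (the numbers below are illustrative, not the study's):

```python
def inmb(wtp_per_qaly, delta_qaly, delta_cost):
    """Incremental net monetary benefit; positive => cost-effective
    at the given willingness-to-pay per QALY."""
    return wtp_per_qaly * delta_qaly - delta_cost
```

As the WTP falls, the QALY term shrinks relative to the cost term, which is exactly why cost-side transferability factors dominate at low WTP values.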
NASA Technical Reports Server (NTRS)
Smith, Paul L.; VonderHaar, Thomas H.
1996-01-01
The principal goal of this project is to establish relationships that would allow application of area-time integral (ATI) calculations based upon satellite data to estimate rainfall volumes. The research is being carried out as a collaborative effort between the two participating organizations, with the satellite data analysis to determine values for the ATIs being done primarily by the STC-METSAT scientists and the associated radar data analysis to determine the 'ground-truth' rainfall estimates being done primarily at the South Dakota School of Mines and Technology (SDSM&T). Synthesis of the two separate kinds of data and investigation of the resulting rainfall-versus-ATI relationships is then carried out jointly. The research has been pursued using two different approaches, which for convenience can be designated as the 'fixed-threshold approach' and the 'adaptive-threshold approach'. In the former, an attempt is made to determine a single temperature threshold in the satellite infrared data that would yield ATI values for identifiable cloud clusters which are closely related to the corresponding rainfall amounts as determined by radar. Work on the second, or 'adaptive-threshold', approach for determining the satellite ATI values has explored two avenues: (1) one attempt involved choosing IR thresholds to match the satellite ATI values with ones separately calculated from the radar data on a case-by-case basis; and (2) another involved a straightforward screening analysis to determine the (fixed) offset that would lead to the strongest correlation and lowest standard error of estimate in the relationship between the satellite ATI values and the corresponding rainfall volumes.
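The fixed-threshold ATI can be sketched concretely: for each satellite IR image in a sequence, count the cluster pixels colder than the threshold, multiply by pixel area and time step, and accumulate. This is my reconstruction of the general technique, not the project's code; pixel area, time step, and the 235 K example threshold are assumptions.

```python
def area_time_integral(ir_frames_K, pixel_area_km2, dt_h, thresh_K):
    """ATI in km^2*h: cold-cloud area integrated over the image sequence."""
    ati = 0.0
    for frame in ir_frames_K:            # each frame: iterable of brightness temps (K)
        cold_pixels = sum(1 for t in frame if t <= thresh_K)
        ati += cold_pixels * pixel_area_km2 * dt_h
    return ati
```

Rain volume is then estimated from a fitted relation against the radar ground truth, e.g. volume ≈ a·ATI + b, with a and b calibrated per region.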
NASA Astrophysics Data System (ADS)
Zhang, Fengtian; Wang, Chao; Yuan, Mingquan; Tang, Bin; Xiong, Zhuang
2017-12-01
Most of the MEMS inertial switches developed in recent years are intended for shock and impact sensing with threshold values above 50 g. To meet the requirement of detecting linear acceleration signals at the low-g level, a silicon-based MEMS inertial switch with a threshold value of 5 g was designed, fabricated, and characterized. The switch consists of a large proof mass supported by circular spiral springs. An analytical model of the structural stiffness of the proposed switch was derived and verified by finite-element simulation. Fabrication was based on a customized double-buried-layer silicon-on-insulator wafer, and the structure was encapsulated by glass wafers. Centrifugal and nanoindentation experiments were performed to measure the threshold value and the structural stiffness. The actual threshold values were measured to be 0.1-0.3 g lower than the pre-designed value of 5 g due to dimension loss during non-contact lithography processing. For the reliability assessment, a series of environmental experiments was conducted, and the switches remained operational without excessive errors. However, both the random vibration and the shock tests indicate that metal particles generated during collision of the contact parts might affect contact reliability and long-term stability. Accordingly, an attentive study of switch contact behavior should be included in future research.
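The threshold physics of such a switch admits a back-of-envelope check: the proof mass closes the contact gap d when the inertial force m·a exceeds the spring restoring force k·d, so a_threshold = k·d/m. A sketch under assumed numbers (these are not the paper's dimensions):

```python
G = 9.80665  # standard gravity, m/s^2

def threshold_accel_g(stiffness_N_m, gap_m, mass_kg):
    """Acceleration (in g) at which a spring-suspended proof mass
    closes its contact gap: a_th = k*d / (m*g)."""
    return stiffness_N_m * gap_m / (mass_kg * G)
```

A lower-than-designed spring stiffness k, e.g. from dimension loss during lithography, lowers a_threshold proportionally, which is consistent with the 0.1-0.3 g shortfall reported above.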
Self-Organization on Social Media: Endo-Exo Bursts and Baseline Fluctuations
Oka, Mizuki; Hashimoto, Yasuhiro; Ikegami, Takashi
2014-01-01
A salient dynamic property of social media is bursting behavior. In this paper, we study bursting behavior in terms of the temporal relation between a preceding baseline fluctuation and the successive burst response using a frequency time series of 3,000 keywords on Twitter. We found that there is a fluctuation threshold up to which the burst size increases as the fluctuation increases and that above the threshold, there appears a variety of burst sizes. We call this threshold the critical threshold. Investigating this threshold in relation to endogenous bursts and exogenous bursts based on peak ratio and burst size reveals that the bursts below this threshold are endogenously caused and above this threshold, exogenous bursts emerge. Analysis of the 3,000 keywords shows that all the nouns have both endogenous and exogenous origins of bursts and that each keyword has a critical threshold in the baseline fluctuation value to distinguish between the two. Having a threshold for an input value for activating the system implies that Twitter is an excitable medium. These findings are useful for characterizing how excitable a keyword is on Twitter and could be used, for example, to predict the response to particular information on social media. PMID:25329610
Cost-effectiveness thresholds: methods for setting and examples from around the world.
Santos, André Soares; Guerra-Junior, Augusto Afonso; Godman, Brian; Morton, Alec; Ruas, Cristina Mariano
2018-06-01
Cost-effectiveness thresholds (CETs) are used to judge whether an intervention represents sufficient value for money to merit adoption in healthcare systems. The study was motivated by the Brazilian context of HTA, where meetings are being conducted to decide on the definition of a threshold. Areas covered: An electronic search was conducted on Medline (via PubMed), Lilacs (via BVS) and ScienceDirect, followed by a complementary search of references of included studies, Google Scholar, and conference abstracts. Cost-effectiveness thresholds are usually calculated through three different approaches: willingness-to-pay, representative of welfare economics; the precedent method, based on the value of an already funded technology; and the opportunity-cost method, which links the threshold to the volume of health displaced. In most places, an explicit threshold has never been formally adopted. Some countries have defined thresholds, with some flexibility to consider other factors. An implicit threshold could be determined by researching funded cases. Expert commentary: CETs have had an important role as a 'bridging concept' between the world of academic research and the 'real world' of healthcare prioritization. The definition of a cost-effectiveness threshold is paramount for the construction of a transparent and efficient Health Technology Assessment system.
Threshold Values for Identification of Contamination Predicted by Reduced-Order Models
Last, George V.; Murray, Christopher J.; Bott, Yi-Ju; ...
2014-12-31
The U.S. Department of Energy’s (DOE’s) National Risk Assessment Partnership (NRAP) Project is developing reduced-order models to evaluate potential impacts on underground sources of drinking water (USDWs) if CO2 or brine leaks from deep CO2 storage reservoirs. Threshold values, below which there would be no predicted impacts, were determined for portions of two aquifer systems. These threshold values were calculated using an interwell approach for determining background groundwater concentrations that is an adaptation of methods described in the U.S. Environmental Protection Agency’s Unified Guidance for Statistical Analysis of Groundwater Monitoring Data at RCRA Facilities.
Processing circuitry for single channel radiation detector
NASA Technical Reports Server (NTRS)
Holland, Samuel D. (Inventor); Delaune, Paul B. (Inventor); Turner, Kathryn M. (Inventor)
2009-01-01
Processing circuitry is provided for a high voltage operated radiation detector. An event detector utilizes a comparator configured to produce an event signal based on a leading edge threshold value. A preferred event detector does not produce another event signal until a trailing edge threshold value is satisfied. The event signal can be utilized for counting the number of particle hits and also for controlling data collection operation for a peak detect circuit and timer. The leading edge threshold value is programmable such that it can be reprogrammed by a remote computer. A digital high voltage control is preferably operable to monitor and adjust high voltage for the detector.
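The leading/trailing-edge scheme described above is a hysteresis comparator: an event fires when the signal crosses the leading-edge threshold, and no new event may fire until the signal has fallen back through the trailing-edge threshold. A software analogue (illustrative, not the patented circuit):

```python
def count_events(samples, leading, trailing):
    """Count threshold crossings with hysteresis (requires leading > trailing).
    Ripple between the two thresholds does not retrigger an event."""
    events, armed = 0, True
    for s in samples:
        if armed and s >= leading:
            events += 1
            armed = False          # ignore ripple until the trailing edge is seen
        elif not armed and s <= trailing:
            armed = True           # re-arm once the signal falls back through trailing
    return events
```

The two-threshold design is what prevents noise near a single threshold from being double-counted as multiple particle hits.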
Black, Robert W; Moran, Patrick W; Frankforter, Jill D
2011-04-01
Many streams within the United States are impaired due to nutrient enrichment, particularly in agricultural settings. The present study examines the response of benthic algal communities in agricultural and minimally disturbed sites from across the western United States to a suite of environmental factors, including nutrients, collected at multiple scales. The first objective was to identify the relative importance of nutrients, habitat and watershed features, and macroinvertebrate trophic structure to explain algal metrics derived from deposition and erosion habitats. The second objective was to determine if thresholds in total nitrogen (TN) and total phosphorus (TP) related to algal metrics could be identified and how these thresholds varied across metrics and habitats. Nutrient concentrations within the agricultural areas were elevated and greater than published threshold values. All algal metrics examined responded to nutrients as hypothesized. Although nutrients typically were the most important variables in explaining the variation in each of the algal metrics, environmental factors operating at multiple scales also were important. Calculated thresholds for TN or TP based on the algal metrics generated from samples collected from erosion and deposition habitats were not significantly different. Little variability in threshold values for each metric for TN and TP was observed. The consistency of the threshold values measured across multiple metrics and habitats suggest that the thresholds identified in this study are ecologically relevant. Additional work to characterize the relationship between algal metrics, physical and chemical features, and nuisance algal growth would be of benefit to the development of nutrient thresholds and criteria.
The color masking ability of a zirconia ceramic on the substrates with different values.
Tabatabaian, Farhad; Javadi Sharif, Mahdiye; Massoumi, Farhood; Namdari, Mahshid
2017-01-01
Background. The color masking ability of a restoration plays a significant role in covering a discolored substructure; however, this optical property of zirconia ceramics has not been clearly determined yet. The aim of this in vitro study was to evaluate the color masking ability of a zirconia ceramic on substrates with different values. Methods. Ten zirconia disk specimens, 0.5 mm in thickness and 10 mm in diameter, were fabricated by a CAD/CAM system. Four substrates with different values were prepared, including: white (control), light grey, dark grey, and black. The disk specimens were placed over the substrates for spectrophotometric measurements. A spectrophotometer measured the L*, a*, and b* color attributes of the specimens. Additionally, ΔE values were calculated to determine the color differences between each group and the control, and were then compared with the perceptional threshold of ΔE=2.6. Repeated-measures ANOVA, Bonferroni, and one-sample t-test were used to analyze data. All the tests were carried out at the 0.05 level of significance. Results. The means and standard deviations of ΔE values for the three groups of light grey, dark grey and black were 9.94±2.11, 10.40±2.09, and 13.34±1.77 units, respectively. Significant differences were detected between the groups in the ΔE values (P<0.0001). The ΔE values in all the groups were more than the predetermined perceptional threshold (ΔE>2.6) (P<0.0001). Conclusion. Within the limitations of this study, it was concluded that the tested zirconia ceramic did not exhibit sufficient color masking ability to hide the grey and black substrates.
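The color difference used here is the 1976 CIELAB formula ΔE*ab = √(ΔL² + Δa² + Δb²), compared against the 2.6 perceptional threshold adopted in the study. A sketch of that comparison (the Lab triplets below are illustrative, not measured data):

```python
import math

def delta_e(lab1, lab2):
    """Euclidean CIELAB color difference (Delta E*ab, 1976)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

def masks_substrate(lab_over_substrate, lab_over_control, threshold=2.6):
    """True if the ceramic's color over the substrate is within the
    perceptional threshold of its color over the white control."""
    return delta_e(lab_over_substrate, lab_over_control) <= threshold
```

By this criterion, the reported group means of 9.94-13.34 units all fail the masking test by a wide margin.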
Garrison, Louis P; Neumann, Peter J; Willke, Richard J; Basu, Anirban; Danzon, Patricia M; Doshi, Jalpa A; Drummond, Michael F; Lakdawalla, Darius N; Pauly, Mark V; Phelps, Charles E; Ramsey, Scott D; Towse, Adrian; Weinstein, Milton C
2018-02-01
This summary section first lists key points from each of the six sections of the report, followed by six key recommendations. The Special Task Force chose to take a health economics approach to the question of whether a health plan should cover and reimburse a specific technology, beginning with the view that the conventional cost-per-quality-adjusted life-year metric has both strengths as a starting point and recognized limitations. This report calls for the development of a more comprehensive economic evaluation that could include novel elements of value (e.g., insurance value and equity) as part of either an "augmented" cost-effectiveness analysis or a multicriteria decision analysis. Given an aggregation of elements to a measure of value, consistent use of a cost-effectiveness threshold can help ensure the maximization of health gain and well-being for a given budget. These decisions can benefit from the use of deliberative processes. The six recommendations are to: 1) be explicit about decision context and perspective in value assessment frameworks; 2) base health plan coverage and reimbursement decisions on an evaluation of the incremental costs and benefits of health care technologies as is provided by cost-effectiveness analysis; 3) develop value thresholds to serve as one important input to help guide coverage and reimbursement decisions; 4) manage budget constraints and affordability on the basis of cost-effectiveness principles; 5) test and consider using structured deliberative processes for health plan coverage and reimbursement decisions; and 6) explore and test novel elements of benefit to improve value measures that reflect the perspectives of both plan members and patients. Copyright © 2018 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
Garg, A; Kapellusch, J; Hegmann, K; Wertsch, J; Merryweather, A; Deckow-Schaefer, G; Malloy, E J
2012-01-01
A cohort of 536 workers was enrolled from 10 diverse manufacturing facilities and was followed monthly for six years. Job physical exposures were individually measured. Worker demographics, medical history, psychosocial factors, current musculoskeletal disorders (MSDs) and nerve conduction studies (NCS) were obtained. Point and lifetime prevalence of carpal tunnel syndrome (CTS) at baseline (symptoms + abnormal NCS) were 10.3% and 19.8%. During follow-up, there were 35 new CTS cases (left, right or both hands). Factors predicting development of CTS included: job physical exposure (the American Conference of Governmental Industrial Hygienists Threshold Limit Value (ACGIH TLV) for Hand Activity Level (HAL) and the Strain Index (SI)), age, BMI, other MSDs, inflammatory arthritis, gardening outside of work and feelings of depression. In the adjusted models, the TLV for HAL and the SI were both significant per unit increase in exposure, with hazard ratios (HR) increasing up to a maximum of 5.4 (p = 0.05) and 5.3 (p = 0.03), respectively; however, similar to other reports, both suggested lower risk at higher exposures. Data suggest that the TLV for HAL and the SI are useful metrics for estimating exposure to biomechanical stressors. This study was conducted to determine how well the TLV for HAL and the SI predict risk of CTS using a prospective cohort design with survival analysis. Both the TLV for HAL and the SI were found to predict risk of CTS when adjusted for relevant covariates.
Ferguson, Sue A.; Allread, W. Gary; Burr, Deborah L.; Heaney, Catherine; Marras, William S.
2013-01-01
Background: Biomechanical, psychosocial, and individual risk factors for low back disorder have been studied extensively; however, few researchers have examined all three together. The objective of this study was to develop a low back disorder risk model for furniture distribution workers using biomechanical, psychosocial, and individual risk factors. Methods: This was a prospective study with a six-month follow-up. There were 454 subjects enrolled at 9 furniture distribution facilities. Biomechanical exposure was evaluated using the American Conference of Governmental Industrial Hygienists (2001) lifting threshold limit values for low back injury risk. Psychosocial and individual risk factors were evaluated via questionnaires. Low back health functional status was measured using the lumbar motion monitor. Low back disorder cases were defined as a loss of low back functional performance of −0.14 or more. Findings: There were 92 cases of meaningful loss in low back functional performance and 185 non-cases. A multivariate logistic regression model combining baseline functional performance probability, facility, perceived workload, intermediate reach distance, number of exertions above threshold limit values, job tenure, manual material handling, and age provided a model sensitivity of 68.5% and specificity of 71.9%. Interpretation: The results of this study indicate which biomechanical, individual, and psychosocial risk factors are important, as well as how much of each risk factor is too much, resulting in increased risk of low back disorder among furniture distribution workers. PMID:21955915
Automatic delineation of functional lung volumes with 68Ga-ventilation/perfusion PET/CT.
Le Roux, Pierre-Yves; Siva, Shankar; Callahan, Jason; Claudic, Yannis; Bourhis, David; Steinfort, Daniel P; Hicks, Rodney J; Hofman, Michael S
2017-10-10
Functional volumes computed from 68Ga-ventilation/perfusion (V/Q) PET/CT, which we have shown to correlate with pulmonary function test (PFT) parameters, have potential diagnostic utility in a variety of clinical applications, including radiotherapy planning. An automatic segmentation method would facilitate delineation of such volumes. The aim of this study was to develop an automated threshold-based approach to delineate functional volumes that best correlates with manual delineation. Thirty lung cancer patients undergoing both V/Q PET/CT and PFTs were analyzed. Images were acquired following inhalation of Galligas and, subsequently, intravenous administration of 68Ga-macroaggregated albumin (MAA). Using visually defined manual contours as the reference standard, various cutoff values, expressed as a percentage of the maximal pixel value, were applied. The average volume difference and Dice similarity coefficient (DSC) were calculated, measuring the similarity of the automatic segmentation and the reference standard. Pearson's correlation was also calculated to compare automated volumes with manual volumes, and automated volumes optimized to PFT indices. For ventilation volumes, the mean volume difference was lowest (-0.4%) using a 15%max threshold, with a Pearson's coefficient of 0.71. Applying this cutoff, the median DSC was 0.93 (0.87-0.95). Nevertheless, limits of agreement in volume differences were large (-31.0 and 30.2%), with differences ranging from -40.4 to +33.0%. For perfusion volumes, the mean volume difference was lowest and Pearson's coefficient highest using a 15%max threshold (3.3% and 0.81, respectively). Applying this cutoff, the median DSC was 0.93 (0.88-0.93). Nevertheless, limits of agreement were again large (-21.1 and 27.8%), with volume differences ranging from -18.6 to +35.5%. Using the 15%max threshold, moderate correlation was demonstrated with FEV1/FVC (r = 0.48 and r = 0.46 for ventilation and perfusion images, respectively).
No correlation was found with other PFT indices. To automatically delineate functional volumes with 68Ga-V/Q PET/CT, the most appropriate cutoff was 15%max for both ventilation and perfusion images. However, using this single threshold systematically produced unacceptable variability compared to the reference volume and relatively poor correlation with PFT parameters. Accordingly, a visually adapted semi-automatic method is favored, enabling rapid and quantitative delineation of lung functional volumes with 68Ga-V/Q PET/CT.
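The 15%max cutoff and the Dice similarity coefficient used above can be sketched as follows; the toy one-dimensional "image" and the reference mask are hypothetical, purely to show the mechanics:

```python
import numpy as np

def threshold_volume(img, cutoff_frac=0.15):
    """Binary functional volume: voxels above cutoff_frac * maximal pixel value."""
    return img > cutoff_frac * img.max()

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    intersection = np.logical_and(a, b).sum()
    return 2.0 * intersection / (a.sum() + b.sum())

# Toy 1-D intensity profile standing in for a PET image (hypothetical values)
img = np.array([0.0, 0.05, 0.2, 0.8, 1.0, 0.6, 0.1])
auto = threshold_volume(img)                           # automatic 15%max mask
manual = np.array([0, 0, 1, 1, 1, 1, 1], dtype=bool)   # "manual" reference mask
dsc = dice(auto, manual)
```

A DSC of 1.0 means perfect overlap; the study's median of 0.93 reflects good but imperfect agreement between automatic and manual volumes.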
NASA Astrophysics Data System (ADS)
Segoni, Samuele; Rosi, Ascanio; Lagomarsino, Daniela; Fanti, Riccardo; Casagli, Nicola
2018-03-01
We communicate the results of a preliminary investigation aimed at improving a state-of-the-art RSLEWS (regional-scale landslide early warning system) based on rainfall thresholds by integrating mean soil moisture values averaged over the territorial units of the system. We tested two approaches. The simplest can be easily applied to improve other RSLEWS: it is based on a soil moisture threshold value under which rainfall thresholds are not used because landslides are not expected to occur. Another approach deeply modifies the original RSLEWS: thresholds based on antecedent rainfall accumulated over long periods are substituted with soil moisture thresholds. A back analysis demonstrated that both approaches consistently reduced false alarms, while the second approach reduced missed alarms as well.
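The first approach described above can be sketched as a simple gate placed in front of the rainfall threshold check; the function name and all threshold values below are hypothetical:

```python
def issue_warning(rainfall_mm, soil_moisture, rain_threshold_mm, moisture_threshold):
    """Sketch of the simpler approach: ignore the rainfall threshold whenever
    mean soil moisture over the territorial unit is below a minimum value,
    since landslides are not expected to initiate on dry soil."""
    if soil_moisture < moisture_threshold:
        return False   # dry soil: suppress the alarm (fewer false alarms)
    return rainfall_mm > rain_threshold_mm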
Amador, Carolina; Chen, Shigao; Manduca, Armando; Greenleaf, James F.; Urban, Matthew W.
2017-01-01
Quantitative ultrasound elastography is increasingly being used in the assessment of chronic liver disease. Many studies have reported ranges of liver shear wave velocities values for healthy individuals and patients with different stages of liver fibrosis. Nonetheless, ongoing efforts exist to stabilize quantitative ultrasound elastography measurements by assessing factors that influence tissue shear wave velocity values, such as food intake, body mass index (BMI), ultrasound scanners, scanning protocols, ultrasound image quality, etc. Time-to-peak (TTP) methods have been routinely used to measure the shear wave velocity. However, there is still a need for methods that can provide robust shear wave velocity estimation in the presence of noisy motion data. The conventional TTP algorithm is limited to searching for the maximum motion in time profiles at different spatial locations. In this study, two modified shear wave speed estimation algorithms are proposed. The first method searches for the maximum motion in both space and time (spatiotemporal peak, STP); the second method applies an amplitude filter (spatiotemporal thresholding, STTH) to select points with motion amplitude higher than a threshold for shear wave group velocity estimation. The two proposed methods (STP and STTH) showed higher precision in shear wave velocity estimates compared to TTP in phantom. Moreover, in a cohort of 14 healthy subjects STP and STTH methods improved both the shear wave velocity measurement precision and the success rate of the measurement compared to conventional TTP. PMID:28092532
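The difference between the conventional TTP estimator and the spatiotemporal variants can be illustrated on a noise-free synthetic plane wave; this is a sketch of the general approach, not the authors' implementation, and all parameter values are hypothetical:

```python
import numpy as np

def ttp_velocity(motion, x, t):
    """Conventional time-to-peak (TTP): find the peak time of the motion
    profile at each lateral position, then take the slope of position vs.
    arrival time as the shear wave group velocity."""
    t_peak = t[np.argmax(motion, axis=1)]      # motion has shape (n_positions, n_times)
    return np.polyfit(t_peak, x, 1)[0]

def stth_velocity(motion, x, t, frac=0.5):
    """Spatiotemporal thresholding (STTH) sketch: keep every (position, time)
    sample whose amplitude exceeds frac * global max, then fit position
    against time over all surviving samples."""
    xi, ti = np.nonzero(motion > frac * motion.max())
    return np.polyfit(t[ti], x[xi], 1)[0]

# Noise-free synthetic plane shear wave travelling at 2 m/s (hypothetical)
c_true = 2.0                                   # m/s
x = np.linspace(0.002, 0.008, 7)               # lateral positions [m]
t = np.linspace(0.0, 0.01, 101)                # time samples [s]
motion = np.exp(-((t[None, :] - x[:, None] / c_true) / 5e-4) ** 2)

v_ttp = ttp_velocity(motion, x, t)             # ~2.0 m/s
v_stth = stth_velocity(motion, x, t)           # ~1.9 m/s on this coarse toy grid
```

On clean data both estimators recover roughly the assumed velocity; the precision and success-rate advantages the authors report only appear once the motion profiles are noisy.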
Diagnostic management of chronic obstructive pulmonary disease.
Broekhuizen, B D L; Sachs, A P E; Hoes, A W; Verheij, T J M; Moons, K G M
2012-01-01
Detection of early chronic obstructive pulmonary disease (COPD) in patients presenting with respiratory symptoms is recommended; however, diagnosing COPD is difficult because a single gold standard is not available. The aim of this article is to review and interpret the existing evidence, theories and consensus on the individual parts of the diagnostic work-up for COPD. Relevant articles are discussed under the subheadings: history taking, physical examination, spirometry and additional lung function assessment. Wheezing, cough, phlegm and breathlessness on exertion are suggestive signs for COPD. The diagnostic value of the physical examination is limited, except for auscultated pulmonary wheezing or reduced breath sounds, which increase the probability of COPD. Spirometric airflow obstruction after bronchodilation, defined as a lowered ratio of the forced expiratory volume in one second to the forced vital capacity (FEV1/FVC ratio), is a prerequisite, but can only confirm COPD in combination with suggestive symptoms. Different thresholds are recommended to define a low FEV1/FVC ratio, including a fixed threshold and one varying with gender and age; however, how physicians interpret these thresholds in their assessment is not well known. Body plethysmography allows a more complete assessment of pulmonary function, providing results on the total lung capacity and the residual volume, and is indicated when conventional spirometry results are inconclusive. Chest radiography has no diagnostic value for COPD but is useful to exclude alternative diagnoses such as heart failure or lung cancer. Extensive history taking is of key importance in diagnosing COPD.
Getting the message across: using ecological integrity to communicate with resource managers
Mitchell, Brian R.; Tierney, Geraldine L.; Schweiger, E. William; Miller, Kathryn M.; Faber-Langendoen, Don; Grace, James B.
2014-01-01
This chapter describes and illustrates how concepts of ecological integrity, thresholds, and reference conditions can be integrated into a research and monitoring framework for natural resource management. Ecological integrity has been defined as a measure of the composition, structure, and function of an ecosystem in relation to the system’s natural or historical range of variation, as well as perturbations caused by natural or anthropogenic agents of change. Using ecological integrity to communicate with managers requires five steps, often implemented iteratively: (1) document the scale of the project and the current conceptual understanding and reference conditions of the ecosystem, (2) select appropriate metrics representing integrity, (3) define externally verified assessment points (metric values that signify an ecological change or need for management action) for the metrics, (4) collect data and calculate metric scores, and (5) summarize the status of the ecosystem using a variety of reporting methods. While we present the steps linearly for conceptual clarity, actual implementation of this approach may require addressing the steps in a different order or revisiting steps (such as metric selection) multiple times as data are collected. Knowledge of relevant ecological thresholds is important when metrics are selected, because thresholds identify where small changes in an environmental driver produce large responses in the ecosystem. Metrics with thresholds at or just beyond the limits of a system’s range of natural variability can be excellent, since moving beyond the normal range produces a marked change in their values. Alternatively, metrics with thresholds within but near the edge of the range of natural variability can serve as harbingers of potential change. Identifying thresholds also contributes to decisions about selection of assessment points. 
In particular, if there is a significant resistance to perturbation in an ecosystem, with threshold behavior not occurring until well beyond the historical range of variation, this may provide a scientific basis for shifting an ecological assessment point beyond the historical range. We present two case studies using ongoing monitoring by the US National Park Service Vital Signs program that illustrate the use of an ecological integrity approach to communicate ecosystem status to resource managers. The Wetland Ecological Integrity in Rocky Mountain National Park case study uses an analytical approach that specifically incorporates threshold detection into the process of establishing assessment points. The Forest Ecological Integrity of Northeastern National Parks case study describes a method for reporting ecological integrity to resource managers and other decision makers. We believe our approach has the potential for wide applicability for natural resource management.
Cluster-based analysis improves predictive validity of spike-triggered receptive field estimates
Malone, Brian J.
2017-01-01
Spectrotemporal receptive field (STRF) characterization is a central goal of auditory physiology. STRFs are often approximated by the spike-triggered average (STA), which reflects the average stimulus preceding a spike. In many cases, the raw STA is subjected to a threshold defined by gain values expected by chance. However, such correction methods have not been universally adopted, and the consequences of specific gain-thresholding approaches have not been investigated systematically. Here, we evaluate two classes of statistical correction techniques, using the resulting STRF estimates to predict responses to a novel validation stimulus. The first, more traditional technique eliminated STRF pixels (time-frequency bins) with gain values expected by chance. This correction method yielded significant increases in prediction accuracy, including when the threshold setting was optimized for each unit. The second technique was a two-step thresholding procedure wherein clusters of contiguous pixels surviving an initial gain threshold were then subjected to a cluster mass threshold based on summed pixel values. This approach significantly improved upon even the best gain-thresholding techniques. Additional analyses suggested that allowing threshold settings to vary independently for excitatory and inhibitory subfields of the STRF resulted in only marginal additional gains, at best. In summary, augmenting reverse correlation techniques with principled statistical correction choices increased prediction accuracy by over 80% for multi-unit STRFs and by over 40% for single-unit STRFs, furthering the interpretational relevance of the recovered spectrotemporal filters for auditory systems analysis. PMID:28877194
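The two-step cluster-mass correction can be sketched in one dimension: bins surviving the gain threshold are grouped into contiguous runs, and each run is kept only if its summed magnitude exceeds the cluster-mass threshold. This is a simplified sketch, not the authors' code, and the STA values are hypothetical:

```python
import numpy as np

def cluster_mass_threshold(sta, gain_thresh, mass_thresh):
    """Two-step correction on a 1-D STA slice: (1) apply a gain threshold,
    (2) keep only contiguous runs of surviving bins whose summed magnitude
    exceeds the cluster-mass threshold."""
    mask = np.abs(sta) > gain_thresh
    out = np.zeros_like(sta)
    start = None
    for i, m in enumerate(np.append(mask, False)):  # sentinel closes the last run
        if m and start is None:
            start = i
        elif not m and start is not None:
            run = slice(start, i)
            if np.abs(sta[run]).sum() > mass_thresh:
                out[run] = sta[run]
            start = None
    return out

# Hypothetical STA slice: one strong cluster plus one isolated bin above gain
sta = np.array([0.2, 2.0, 2.5, 0.1, 0.0, 1.2, 0.0])
cleaned = cluster_mass_threshold(sta, gain_thresh=1.0, mass_thresh=3.0)
```

The isolated bin passes the gain threshold but fails the cluster-mass test and is discarded, which is exactly how the second step suppresses scattered chance-level pixels that survive the first.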
Metlapally, Sangeetha; Tong, Jianliang L.; Tahir, Humza J.; Schor, Clifton M.
2014-01-01
It has been proposed that the accommodation system could perform contrast discrimination between the two dioptric extremes of accommodative microfluctuations to extract directional signals for reflex accommodation. Higher-order aberrations (HOAs) may have a significant influence on the strength of these contrast signals. Our goal was to compute the effect HOAs may have on contrast signals for stimuli within the upper defocus limit by comparing computed microcontrast fluctuations with psychophysical contrast increment thresholds (Bradley & Ohzawa, 1986). Wavefront aberrations were measured while subjects viewed a Maltese spoke stimulus monocularly. Computations were performed for accommodation or disaccommodation stimuli from a 3 Diopter (D) baseline. Microfluctuations were estimated from the standard deviation of the wavefronts over time at baseline. Through-focus Modulation Transfer, optical contrast increments (ΔC), and Weber fractions (ΔC/C) were derived from point spread functions computed from the wavefronts at baseline for 2 and 4 cycles per degree (cpd) components, with and without HOAs. The ΔCs thus computed from the wavefronts were compared with psychophysical contrast increment threshold data. Microfluctuations are potentially useful for extracting directional information for defocus values within 3 D, where contrast increments for the 2 or 4 cpd components exceed psychophysical thresholds. HOAs largely reduce contrast signals produced by microfluctuations, depending on the mean focus error, and their magnitude in individual subjects, and they may shrink the effective stimulus range for reflex accommodation. The upper defocus limit could therefore be constrained by discrimination of microcontrast fluctuations. PMID:25342542
Angst, Ueli M.; Boschmann, Carolina; Wagner, Matthias; Elsener, Bernhard
2017-01-01
The aging of reinforced concrete infrastructure in developed countries imposes an urgent need for methods to reliably assess the condition of these structures. Corrosion of the embedded reinforcing steel is the most frequent cause for degradation. While it is well known that the ability of a structure to withstand corrosion depends strongly on factors such as the materials used or the age, it is common practice to rely on threshold values stipulated in standards or textbooks. These threshold values for corrosion initiation (Ccrit) are independent of the actual properties of a certain structure, which clearly limits the accuracy of condition assessments and service life predictions. The practice of using tabulated values can be traced to the lack of reliable methods to determine Ccrit on-site and in the laboratory. Here, an experimental protocol to determine Ccrit for individual engineering structures or structural members is presented. A number of reinforced concrete samples are taken from structures and laboratory corrosion testing is performed. The main advantage of this method is that it ensures real conditions concerning parameters that are well known to greatly influence Ccrit, such as the steel-concrete interface, which cannot be representatively mimicked in laboratory-produced samples. At the same time, the accelerated corrosion test in the laboratory permits the reliable determination of Ccrit prior to corrosion initiation on the tested structure; this is a major advantage over all common condition assessment methods that only permit estimating the conditions for corrosion after initiation, i.e., when the structure is already damaged. The protocol yields the statistical distribution of Ccrit for the tested structure. This serves as a basis for probabilistic prediction models for the remaining time to corrosion, which is needed for maintenance planning. 
This method can potentially be used in material testing of civil infrastructures, similar to established methods used for mechanical testing. PMID:28892023
Detection and quantification system for monitoring instruments
Dzenitis, John M [Danville, CA]; Hertzog, Claudia K [Houston, TX]; Makarewicz, Anthony J [Livermore, CA]; Henderer, Bruce D [Livermore, CA]; Riot, Vincent J [Oakland, CA]
2008-08-12
A method of detecting real events by obtaining a set of recent signal results, calculating measures of the noise or variation based on the set of recent signal results, calculating an expected baseline value based on the set of recent signal results, determining sample deviation, calculating an allowable deviation by multiplying the sample deviation by a threshold factor, setting an alarm threshold from the baseline value plus or minus the allowable deviation, and determining whether the signal results exceed the alarm threshold.
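The patented scheme reads as a running baseline-plus-deviation test. A minimal sketch, assuming a symmetric two-sided alarm band and hypothetical signal values:

```python
import statistics

def alarm_band(recent, threshold_factor=3.0):
    """Sketch of the scheme described above: expected baseline from recent
    signal results; allowable deviation = sample deviation * threshold factor;
    alarm thresholds at baseline +/- allowable deviation."""
    baseline = statistics.mean(recent)
    allowable = statistics.stdev(recent) * threshold_factor
    return baseline - allowable, baseline + allowable

def is_real_event(value, recent, threshold_factor=3.0):
    """Flag a new signal result that falls outside the alarm band."""
    low, high = alarm_band(recent, threshold_factor)
    return value < low or value > high

recent = [10.0, 10.2, 9.8, 10.1, 9.9]          # hypothetical recent signal results
```

Because the band is derived from the recent noise level rather than a fixed constant, the same code adapts its alarm threshold as instrument noise drifts.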
pp → A → Zh and the wrong-sign limit of the two-Higgs-doublet model
NASA Astrophysics Data System (ADS)
Ferreira, Pedro M.; Liebler, Stefan; Wittbrodt, Jonas
2018-03-01
We point out the importance of the decay channels A → Zh and H → VV in the wrong-sign limit of the two-Higgs-doublet model (2HDM) of type II. They can be the dominant decay modes at moderate values of tan β, even if the (pseudo)scalar mass is above the threshold where the decay into a pair of top quarks is kinematically open. Accordingly, large cross sections pp → A → Zh and pp → H → VV are obtained and currently probed by the LHC experiments, yielding conclusive statements about the remaining parameter space of the wrong-sign limit. In addition, mild excesses, as recently found in the ATLAS analysis bb̄ → A → Zh, could be explained. The wrong-sign limit makes other important testable predictions for the light Higgs boson couplings.
Kim, Dae-Young; Seo, Byoung-Do; Choi, Pan-Am
2014-04-01
[Purpose] This study was conducted to determine the influence of Taekwondo as security martial arts training on anaerobic threshold, cardiorespiratory fitness, and blood lactate recovery. [Subjects and Methods] Fourteen healthy university students were recruited and divided into an exercise group and a control group (n = 7 in each group). The subjects who participated in the experiment were subjected to an exercise loading test in which anaerobic threshold, value of ventilation, oxygen uptake, maximal oxygen uptake, heart rate, and maximal values of ventilation/heart rate were measured during the exercise, immediately after maximum exercise loading, and at 1, 3, 5, 10, and 15 min of recovery. [Results] At the anaerobic threshold time point, the exercise group showed a significantly longer time to reach anaerobic threshold. The exercise group showed significantly higher values for the time to reach VO2max, maximal values of ventilation, maximal oxygen uptake, and maximal values of ventilation/heart rate. Significant changes were observed in the value of ventilation volumes at the 1- and 5-min recovery time points within the exercise group; oxygen uptake and maximal oxygen uptake were significantly different at the 5- and 10-min time points; heart rate was significantly different at the 1- and 3-min time points; and maximal values of ventilation/heart rate were significantly different at the 5-min time point. The exercise group showed significant decreases in blood lactate levels at the 15- and 30-min recovery time points. [Conclusion] The study results revealed that Taekwondo as security martial arts training increases the maximal oxygen uptake and anaerobic threshold and accelerates an individual's recovery to the normal state of cardiorespiratory fitness and blood lactate level. These results are expected to contribute to the execution of more effective security services in emergencies in which violence can occur.
Periago, J F; Morente, A; Villanueva, M; Luna, A
1994-01-01
We determined the correlations between the concentrations of n-hexane and toluene in exhaled and environmental air in the shoe manufacturing industry. Data were collected in 1988 and in 1992 from a total of 265 subjects. Environmental air samples were collected with personal diffusive samplers by adsorption on activated charcoal during exposure and from end-expired air (alveolar air) on cartridges of activated charcoal after exposure. Both compounds were desorbed with carbon disulphide and analysed by gas chromatography. Linear regression analyses showed a good correlation between environmental and end-expired air concentrations (r = 0.82 for n-hexane and r = 0.81 for toluene). These correlations allowed us to calculate the concentrations in expired air corresponding to current environmental limit values. The calculated concentrations in end-expired air that correspond to current environmental threshold limit values of 176 mg m-3 for n-hexane and 377 mg m-3 for toluene are 28 mg m-3 (95% confidence limit, 27-29 mg m-3) and 40 mg m-3 (95% confidence limit, 39-41 mg m-3), respectively. Similar correlations were found when the data from the two study periods were analysed separately.
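The conversion from an environmental limit to an end-expired-air equivalent rests on a linear regression between paired measurements. The sketch below uses hypothetical paired values (not the study's 265-subject data), chosen only to mimic the kind of linear relationship reported:

```python
import numpy as np

# Hypothetical paired measurements (mg m^-3); NOT the study's data
env = np.array([50.0, 100.0, 150.0, 200.0, 250.0, 300.0])   # environmental air
alv = np.array([8.0, 16.5, 23.0, 32.0, 41.0, 47.5])         # end-expired air

slope, intercept = np.polyfit(env, alv, 1)     # least-squares regression line
r = np.corrcoef(env, alv)[0, 1]                # Pearson correlation

def expired_air_equivalent(env_limit):
    """End-expired-air concentration corresponding to an environmental limit."""
    return slope * env_limit + intercept
```

Applying the fitted line to an environmental threshold limit value (e.g. 176 mg m-3 for n-hexane) yields the corresponding biological monitoring value, which is how the study derived its 28 and 40 mg m-3 expired-air equivalents.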
Initial-state-independent equilibration at the breakdown of the eigenstate thermalization hypothesis
NASA Astrophysics Data System (ADS)
Khodja, Abdellah; Schmidtke, Daniel; Gemmer, Jochen
2016-04-01
This work aims at understanding the interplay between the eigenstate thermalization hypothesis (ETH), initial state independent equilibration, and quantum chaos in systems that do not have a direct classical counterpart. It is based on numerical investigations of asymmetric Heisenberg spin ladders with varied interaction strengths between the legs, i.e., along the rungs. The relaxation of the energy difference between the legs is investigated. Two different parameters, both intended to quantify the degree of accordance with the ETH, are computed. Both indicate violation of the ETH at large interaction strengths but at different thresholds. Indeed, the energy difference is found not to relax independently of its initial value above some critical interaction strength, which coincides with one of the thresholds. At the same point the level statistics shift from Poisson-type to Wigner-type. Hence, the system may be considered to become integrable again in the strong interaction limit.
Probing Sub-GeV Mass Strongly Interacting Dark Matter with a Low-Threshold Surface Experiment.
Davis, Jonathan H
2017-11-24
Using data from the ν-cleus detector, operated at the Earth's surface, we place constraints on dark matter in the form of strongly interacting massive particles (SIMPs) which interact with nucleons via nuclear-scale cross sections. For large SIMP-nucleon cross sections, the sensitivity of traditional direct dark matter searches using underground experiments is limited by the energy loss experienced by SIMPs, due to scattering with the rock overburden and experimental shielding on their way to the detector apparatus. Hence, a surface-based experiment is ideal for a SIMP search, despite the much larger background resulting from the lack of shielding. We show, using data from a recent surface run of a low-threshold cryogenic detector, that values of the SIMP-nucleon cross section up to approximately 10^{-27} cm^{2} can be excluded for SIMPs with masses above 100 MeV.
Denoising time-resolved microscopy image sequences with singular value thresholding.
Furnival, Tom; Leary, Rowan K; Midgley, Paul A
2017-07-01
Time-resolved imaging in microscopy is important for the direct observation of a range of dynamic processes in both the physical and life sciences. However, the image sequences are often corrupted by noise, either as a result of high frame rates or a need to limit the radiation dose received by the sample. Here we exploit both spatial and temporal correlations using low-rank matrix recovery methods to denoise microscopy image sequences. We also make use of an unbiased risk estimator to address the issue of how much thresholding to apply in a robust and automated manner. The performance of the technique is demonstrated using simulated image sequences, as well as experimental scanning transmission electron microscopy data, where surface adatom motion and nanoparticle structural dynamics are recovered at rates of up to 32 frames per second. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
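The low-rank recovery idea described above can be sketched as plain singular value thresholding on the Casorati matrix of the sequence. This is a simplified stand-in for the paper's method (no unbiased risk estimator; the threshold tau is chosen by hand):

```python
import numpy as np

def svt_denoise(frames, tau):
    """Denoise an image sequence by singular value thresholding.

    frames: array of shape (T, H, W). Each frame is unrolled into a row of
    a Casorati matrix, whose singular values are soft-thresholded at tau,
    enforcing a low-rank (spatiotemporally correlated) estimate.
    """
    T, H, W = frames.shape
    X = frames.reshape(T, H * W)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s_thr = np.maximum(s - tau, 0.0)        # soft threshold on singular values
    X_hat = (U * s_thr) @ Vt
    return X_hat.reshape(T, H, W)

# toy example: a rank-1 (static) sequence corrupted by Gaussian noise
rng = np.random.default_rng(1)
clean = np.outer(np.ones(8), rng.random(16 * 16)).reshape(8, 16, 16)
noisy = clean + 0.1 * rng.normal(size=clean.shape)
den = svt_denoise(noisy, tau=2.0)
print(np.abs(den - clean).mean() < np.abs(noisy - clean).mean())
```

In the paper the amount of thresholding is selected automatically via a risk estimator; here tau would have to be tuned to the noise level.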
NASA Astrophysics Data System (ADS)
Chitrambalam, S.; Manimaran, D.; Hubert Joe, I.; Rastogi, V. K.; Ul Hassan, Israr
2018-01-01
The organometallic crystal of dichlorobis(DL-valine)zinc(II) was grown by the solution growth method. The computed structural geometry, vibrational wavenumbers, and UV-visible spectra were compared with experimental results. A Hirshfeld surface map was used to locate electron density, and fingerprint plot percentages identified the intermolecular interactions responsible for stabilizing the molecular crystal. The second-order hyperpolarizability of the molecule was also calculated with the density functional theory method. The surface resistance and third-order nonlinear optical property of the crystal were studied by laser-induced surface damage threshold and Z-scan techniques, respectively, using an Nd:YAG laser at 532 nm. The open-aperture result exhibits reverse saturation absorption, indicating that this material is a potential candidate for optical limiting and optoelectronic applications.
Work environments and exposure to hazardous substances in korean tire manufacturing.
Lee, Naroo; Lee, Byung-Kyu; Jeong, Sijeong; Yi, Gwang Yong; Shin, Jungah
2012-06-01
The purpose of this study is to evaluate tire manufacturing work environments extensively and to identify workers' exposure to hazardous substances in various work processes. Personal air sampling was conducted to measure polycyclic aromatic hydrocarbons, carbon disulfide, 1,3-butadiene, styrene, methyl isobutyl ketone, methylcyclohexane, formaldehyde, sulfur dioxide, and rubber fume in tire manufacturing plants using the National Institute for Occupational Safety and Health Manual of Analytical Methods. Noise, carbon monoxide, and heat stress exposure were evaluated using direct-reading instruments. Past concentrations of rubber fume were assessed using regression analysis of total particulate data from 2003 to 2007, after identifying the correlation between the concentrations of total particulate and rubber fume. Workers were exposed to rubber fume that exceeded 0.6 mg/m(3), the maximum exposure limit of the UK, in curing and production management processes. Forty-seven percent of workers were exposed to noise levels exceeding 85 dBA. Workers in the production management process were exposed to 28.1℃ (wet bulb globe temperature, WBGT) even when the outdoor atmosphere was 2.7℃ (WBGT). Exposures to other substances were below the limit of detection or under a tenth of the threshold limit values given by the American Conference of Governmental Industrial Hygienists. To better classify exposure groups and to improve work environments, closely examining rubber fume components and temperature as risk indicators in tire manufacturing is recommended.
Value of information and pricing new healthcare interventions.
Willan, Andrew R; Eckermann, Simon
2012-06-01
Previous applications of value-of-information methods to optimal clinical trial design have predominantly taken a societal decision-making perspective, implicitly assuming that healthcare costs are covered through public expenditure and trial research is funded by government or donation-based philanthropic agencies. In this paper, we consider the interaction between interrelated perspectives of a societal decision maker (e.g. the National Institute for Health and Clinical Excellence [NICE] in the UK) charged with the responsibility for approving new health interventions for reimbursement and the company that holds the patent for a new intervention. We establish optimal decision making from societal and company perspectives, allowing for trade-offs between the value and cost of research and the price of the new intervention. Given the current level of evidence, there exists a maximum (threshold) price acceptable to the decision maker. Submission for approval with prices above this threshold will be refused. Given the current level of evidence and the decision maker's threshold price, there exists a minimum (threshold) price acceptable to the company. If the decision maker's threshold price exceeds the company's, then current evidence is sufficient since any price between the thresholds is acceptable to both. On the other hand, if the decision maker's threshold price is lower than the company's, then no price is acceptable to both and the company's optimal strategy is to commission additional research. The methods are illustrated using a recent example from the literature.
[Clinical experiences with four newly developed, surface modified stimulation electrodes].
Winter, U J; Fritsch, J; Liebing, J; Höpp, H W; Hilger, H H
1993-05-01
Newly developed pacing electrodes with so-called porous surfaces promise significantly improved post-operative pacing and sensing thresholds. We therefore investigated four newly developed leads (ELA-PMCF-860 n = 10; Biotronik-60/4-DNP n = 10, CPI-4010 n = 10, Intermedics-421-03-Biopore n = 6) connected to two different pacing devices (Intermedics NOVA II, Medtronic PASYS) in 36 patients (18 men, 18 women, age: 69.7 +/- 9.8 years) suffering from symptomatic bradycardia. The individual electrode maturation process was investigated by means of repeated measurements of pacing threshold and electrode impedance in the acute, subacute, and chronic phases, as well as energy consumption and sensing behavior in the chronic phase. However, with the exception of the 4010, the investigated leads showed largely varying values of the pacing threshold, with individual peaks occurring from the second up to the 13th week. All leads had similar chronic pacing thresholds (PMCF 0.13 +/- 0.07; DNP 0.25 +/- 0.18; Biopore 0.15 +/- 0.05; 4010 0.14 +/- 0.05 ms). Impedance measurements revealed higher, but not significantly different, values for the DNP (PMCF 582 +/- 112, DNP 755 +/- 88, Biopore 650 +/- 15, 4010 718 +/- 104 Ohm). Despite differing values for pacing threshold and impedance, the energy consumption in the chronic phase during threshold-adapted but secure stimulation (3 * impulse-width at pacing threshold) was comparable.
Ultrafast Passive Shields for Laser and Ballistic Protection
1991-07-15
chemically polymerized P(DPA)) as a binder, and these were tested for ablation (i.e. laser damage threshold) limits. Table IV below summarizes these results... 50, 100, 250 and 500 µJ/pulse; 1.0, 2.5, 5.0 mJ/pulse. The following energies were used for the preliminary laser damage threshold tests: 2.5, 5.0... these were tested for ablation (i.e. laser damage threshold) limits. Table VI summarizes these results, which are all for tests in the absence of an iris
Characterizing Decision-Analysis Performances of Risk Prediction Models Using ADAPT Curves.
Lee, Wen-Chung; Wu, Yun-Chun
2016-01-01
The area under the receiver operating characteristic curve is a widely used index to characterize the performance of diagnostic tests and prediction models. However, the index does not explicitly acknowledge the utilities of risk predictions. Moreover, for most clinical settings, what counts is whether a prediction model can guide therapeutic decisions in a way that improves patient outcomes, rather than simply update probabilities. Based on decision theory, the authors propose an alternative index, the "average deviation about the probability threshold" (ADAPT). An ADAPT curve (a plot of ADAPT value against the probability threshold) neatly characterizes the decision-analysis performance of a risk prediction model. Several prediction models can be compared for their ADAPT values at a chosen probability threshold, for a range of plausible threshold values, or for the whole ADAPT curves. This should greatly facilitate the selection of diagnostic tests and prediction models.
Incorporation of trace elements in Portland cement clinker: Threshold limits for Cu, Ni, Sn or Zn
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gineys, N., E-mail: nathalie.gineys@mines-douai.fr; EMDouai, LGCgE-MPE-GCE, F-59508 Douai; Aouad, G.
2011-11-15
This paper aims at defining precisely the threshold limits for several trace elements (Cu, Ni, Sn or Zn), which correspond to the maximum amount that could be incorporated into a standard clinker whilst reaching the limit of solid solution of its four major phases (C3S, C2S, C3A and C4AF). These threshold limits were investigated through laboratory-synthesised clinkers that were mainly studied by X-ray diffraction and scanning electron microscopy. The reference clinker was close to a typical Portland clinker (65% C3S, 18% C2S, 8% C3A and 8% C4AF). The threshold limits for Cu, Ni, Zn and Sn are quite high with respect to the current contents in clinker and were respectively equal to 0.35, 0.5, 0.7 and 1 wt.%. It appeared that beyond the defined threshold limits, trace elements had different behaviours. Ni was associated with Mg as a magnesium nickel oxide (MgNiO2) and Sn reacted with lime to form a calcium stannate (Ca2SnO4). Cu changed the crystallisation process and therefore affected the formation of C3S. Indeed, a high content of Cu in clinker led to the decomposition of C3S into C2S and free lime. Zn, in turn, affected the formation of C3A: Ca6Zn3Al4O15 was formed whilst a tremendous reduction of C3A content was identified. The reactivity of cements made with the clinkers at the threshold limits was followed by calorimetry and compressive strength measurements on cement paste. The results revealed that the doped cements were at least as reactive as the reference cement.
Effect of Three Different Core Materials on Masking Ability of a Zirconia Ceramic.
Tabatabaian, Farhad; Masoomi, Faeze; Namdari, Mahshid; Mahshid, Minoo
2016-09-01
Masking ability of a restorative material plays a role in hiding colored substructures; however, the masking ability of zirconia ceramic (ZRC) has not yet been clearly understood in zirconia-based restorations. This study evaluated the effect of three different core materials on the masking ability of a ZRC. Ten zirconia disc samples, 0.5 mm in thickness and 10 mm in diameter, were fabricated. A white (W) substrate (control) and three substrates of nickel-chromium alloy (NCA), non-precious gold alloy (NPGA), and ZRC were prepared. The zirconia discs were placed on the four types of substrates for spectrophotometry. The L*, a*, and b* values of the specimens were measured by a spectrophotometer, and color change (ΔE) values were calculated to determine color differences between the test and control groups and were then compared with the perceptual threshold. Randomized block ANOVA and Bonferroni tests were used to analyze the data, with a significance level of 0.05. The mean and standard deviation values of ΔE for the NCA, NPGA, and ZRC groups were 10.26±2.43, 9.45±1.74, and 6.70±1.91 units, respectively. Significant differences were found in the ΔE values between ZRC and the other two experimental groups (NCA and NPGA; P<0.0001 and P=0.001, respectively). The ΔE values for all groups exceeded the predetermined perceptual threshold. Within the limitations of this study, it was concluded that the tested ZRC could not adequately mask the examined core materials.
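The ΔE color differences reported above follow from the measured L*, a*, and b* values. A sketch of the CIE76 calculation, with made-up Lab readings and an assumed perceptual threshold (the abstract does not state the value it used):

```python
import math

def delta_e(lab1, lab2):
    """CIE76 color difference between two CIELAB triplets (L*, a*, b*):
    the Euclidean distance in Lab space."""
    return math.dist(lab1, lab2)

# hypothetical spectrophotometer readings for a zirconia disc over a test
# substrate vs the white control substrate
test_lab, control_lab = (61.0, 1.2, 8.0), (66.5, -0.3, 11.8)
de = delta_e(test_lab, control_lab)
perceptual_threshold = 2.7          # assumed value; reported thresholds vary
print(round(de, 2), de > perceptual_threshold)  # prints: 6.85 True
```

A ΔE above the perceptual threshold, as here, means the substrate visibly shows through, i.e. the ceramic fails to mask it.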
Connors, B M; Cooper, A B
2014-12-01
Categorization of the status of populations, species, and ecosystems underpins most conservation activities. Status is often based on how a system's current indicator value (e.g., change in abundance) relates to some threshold of conservation concern. Receiver operating characteristic (ROC) curves can be used to quantify the statistical reliability of indicators of conservation status and evaluate trade-offs between correct (true positive) and incorrect (false positive) classifications across a range of decision thresholds. However, ROC curves assume a discrete, binary relationship between an indicator and the conservation status it is meant to track, which is a simplification of the more realistic continuum of conservation status, and may limit the applicability of ROC curves in conservation science. We describe a modified ROC curve that treats conservation status as a continuum rather than a discrete state. We explored the influence of this continuum and typical sources of variation in abundance that can lead to classification errors (i.e., random variation and measurement error) on the true and false positive rates corresponding to varying decision thresholds and the reliability of change in abundance as an indicator of conservation status, respectively. We applied our modified ROC approach to an indicator of endangerment in Pacific salmon (Oncorhynchus nerka) (i.e., percent decline in geometric mean abundance) and an indicator of marine ecosystem structure and function (i.e., detritivore biomass). Failure to treat conservation status as a continuum when choosing thresholds for indicators resulted in the misidentification of trade-offs between true and false positive rates and the overestimation of an indicator's reliability. We argue for treating conservation status as a continuum when ROC curves are used to evaluate decision thresholds in indicators for the assessment of conservation status. © 2014 Society for Conservation Biology.
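The true/false positive trade-off across decision thresholds that underlies an ROC curve can be sketched as follows, for the discrete binary case the authors generalize; the data and the 30% decline cutoff are illustrative only:

```python
import numpy as np

def roc_points(indicator, declined, thresholds):
    """True and false positive rates of a decline indicator across decision
    thresholds. `declined` flags units whose true status is of conservation
    concern (the binary simplification the paper relaxes to a continuum)."""
    indicator = np.asarray(indicator, float)
    declined = np.asarray(declined, bool)
    pts = []
    for t in thresholds:
        flagged = indicator >= t                      # classified as declined
        tpr = (flagged & declined).sum() / declined.sum()
        fpr = (flagged & ~declined).sum() / (~declined).sum()
        pts.append((fpr, tpr))
    return pts

# toy data: observed percent declines in abundance; truth = decline >= 30%
ind = [5, 12, 28, 35, 50, 61, 18, 44]
truth = [d >= 30 for d in ind]        # noiseless truth, for illustration
pts = roc_points(ind, truth, thresholds=[10, 30, 50])
print(pts)
```

With noiseless data the threshold matching the true cutoff (30) gives a perfect (0, 1) point; measurement error and random variation, as discussed above, pull the curve away from that corner.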
Relationship between slow visual processing and reading speed in people with macular degeneration
Cheong, Allen MY; Legge, Gordon E; Lawrence, Mary G; Cheung, Sing-Hang; Ruff, Mary A
2007-01-01
Purpose People with macular degeneration (MD) often read slowly even with adequate magnification to compensate for acuity loss. Oculomotor deficits may affect reading in MD, but cannot fully explain the substantial reduction in reading speed. Central-field loss (CFL) is often a consequence of macular degeneration, necessitating the use of peripheral vision for reading. We hypothesized that slower temporal processing of visual patterns in peripheral vision is a factor contributing to slow reading performance in MD patients. Methods Fifteen subjects with MD, including 12 with CFL, and five age-matched control subjects were recruited. Maximum reading speed and critical print size were measured with RSVP (Rapid Serial Visual Presentation). Temporal processing speed was studied by measuring letter-recognition accuracy for strings of three randomly selected letters centered at fixation for a range of exposure times. Temporal threshold was defined as the exposure time yielding 80% recognition accuracy for the central letter. Results Temporal thresholds for the MD subjects ranged from 159 to 5881 ms, much longer than values for age-matched controls in central vision (13 ms, p<0.01). The mean temporal threshold for the 11 MD subjects who used eccentric fixation (1555.8 ± 1708.4 ms) was much longer than the mean temporal threshold (97.0 ms ± 34.2 ms, p<0.01) for the age-matched controls at 10° in the lower visual field. Individual temporal thresholds accounted for 30% of the variance in reading speed (p<0.05). Conclusion The significant association between increased temporal threshold for letter recognition and reduced reading speed is consistent with the hypothesis that slower visual processing of letter recognition is one of the factors limiting reading speed in MD subjects. PMID:17881032
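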
NASA Technical Reports Server (NTRS)
Minnis, Patrick; Harrison, Edwin F.; Gibson, Gary G.
1987-01-01
A set of visible and IR data obtained with GOES from July 17-31, 1983 is analyzed using a modified version of the hybrid bispectral threshold method developed by Minnis and Harrison (1984). This methodology can be divided into a set of procedures or optional techniques to determine the proper uncontaminated clear-sky temperature or IR threshold. The various optional techniques are described; the options are: standard, low-temperature limit, high-reflectance limit, low-reflectance limit, coldest pixel and thermal adjustment limit, IR-only low-cloud temperature limit, IR clear-sky limit, and IR overcast limit. Variations in the cloud parameters and the characteristics and diurnal cycles of trade cumulus and stratocumulus clouds over the eastern equatorial Pacific are examined. It is noted that the new method produces substantial changes in about one third of the cloud amount retrievals, and low cloud retrievals are affected most by the new constraints.
NASA Astrophysics Data System (ADS)
Härer, Stefan; Bernhardt, Matthias; Siebers, Matthias; Schulz, Karsten
2018-05-01
Knowledge of current snow cover extent is essential for characterizing energy and moisture fluxes at the Earth's surface. The snow-covered area (SCA) is often estimated by using optical satellite information in combination with the normalized-difference snow index (NDSI). The NDSI uses a threshold to decide whether a satellite pixel is classified as snow covered or snow free. The spatiotemporal representativeness of the standard threshold of 0.4 is, however, questionable at the local scale. Here, we use local snow cover maps derived from ground-based photography to continuously calibrate the NDSI threshold values (NDSIthr) of Landsat satellite images at two European mountain sites over the period from 2010 to 2015. The Research Catchment Zugspitzplatt (RCZ, Germany) and the Vernagtferner area (VF, Austria) are both located within a single Landsat scene. Nevertheless, the long-term analysis demonstrated that the NDSIthr values at these sites are uncorrelated (r = 0.17) and differ from the standard threshold of 0.4. For further comparison, a dynamic and locally optimized NDSI threshold was used, as well as another locally optimized literature threshold value (0.7). It was shown that large uncertainties in the prediction of the SCA of up to 24.1 % exist in satellite snow cover maps in cases where the standard threshold of 0.4 is used, but a newly developed calibrated quadratic polynomial model which accounts for seasonal threshold dynamics can reduce this error. The model reduces the SCA uncertainties at the calibration site VF by 50 % in the evaluation period and was also able to improve the results at RCZ significantly. Additionally, a scaling experiment shows that the positive effect of a locally adapted threshold diminishes at pixel sizes of 500 m or larger, underlining the general applicability of the standard threshold at larger scales.
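The NDSI thresholding described above reduces to a band ratio plus a cutoff. A minimal sketch with made-up green and shortwave-infrared reflectances, using the standard 0.4 threshold that the study calibrates locally:

```python
import numpy as np

def ndsi(green, swir):
    """Normalized-difference snow index from green and SWIR reflectances."""
    green = np.asarray(green, float)
    swir = np.asarray(swir, float)
    return (green - swir) / (green + swir)

def snow_mask(green, swir, threshold=0.4):
    """Binary snow map: a pixel is labelled snow if NDSI > threshold.
    0.4 is the standard value; the paper shows it can be locally biased."""
    return ndsi(green, swir) > threshold

# three hypothetical pixels: bright snow, bare rock, mixed/marginal
green = np.array([0.70, 0.30, 0.55])
swir = np.array([0.10, 0.25, 0.30])
print(ndsi(green, swir).round(2), snow_mask(green, swir))
```

Lowering or raising the threshold reclassifies marginal pixels such as the third one, which is how a locally calibrated NDSIthr changes the estimated SCA.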
Development of an epiphyte indicator of nutrient enrichment ...
Metrics of epiphyte load on macrophytes were evaluated for use as quantitative biological indicators for nutrient impacts in estuarine waters, based on review and analysis of the literature on epiphytes and macrophytes, primarily seagrasses but including some brackish and freshwater rooted macrophyte species. An approach is presented that empirically derives threshold epiphyte loads which are likely to cause specified levels of decrease in macrophyte response metrics such as biomass, shoot density, percent cover, production, and growth. Data from 36 studies of 10 macrophyte species were pooled to derive relationships between epiphyte load and 25% and 50% seagrass response (decline) levels, which are proposed as the primary basis for establishment of critical threshold values. Given multiple sources of variability in the response data, threshold ranges based on the range of values falling between the median and the 75th quantiles of observations at a given seagrass response level are proposed rather than single, critical point values. Four epiphyte load threshold categories (low, moderate, high, very high) are proposed. Comparison of values of epiphyte loads associated with 25% and 50% reductions in light to macrophytes suggests that the threshold ranges are realistic both in terms of the principal mechanism of impact to macrophytes and in terms of the magnitude of resultant impacts expressed by the macrophytes. Some variability in response levels was observed among
42 CFR 422.208 - Physician incentive plans: requirements and limitations.
Code of Federal Regulations, 2013 CFR
2013-10-01
... difference between the maximum potential payments and the minimum potential payments is more than 25 percent... have the effect of reducing or limiting the services provided to any plan enrollee. Potential payments... considered in this determination. (2) Risk threshold. The risk threshold is 25 percent of potential payments...
42 CFR 422.208 - Physician incentive plans: requirements and limitations.
Code of Federal Regulations, 2014 CFR
2014-10-01
... difference between the maximum potential payments and the minimum potential payments is more than 25 percent... have the effect of reducing or limiting the services provided to any plan enrollee. Potential payments... considered in this determination. (2) Risk threshold. The risk threshold is 25 percent of potential payments...
42 CFR 422.208 - Physician incentive plans: requirements and limitations.
Code of Federal Regulations, 2012 CFR
2012-10-01
... difference between the maximum potential payments and the minimum potential payments is more than 25 percent... have the effect of reducing or limiting the services provided to any plan enrollee. Potential payments... considered in this determination. (2) Risk threshold. The risk threshold is 25 percent of potential payments...
42 CFR 422.208 - Physician incentive plans: requirements and limitations.
Code of Federal Regulations, 2011 CFR
2011-10-01
... of reducing or limiting the services provided to any plan enrollee. Potential payments means the... determination. (2) Risk threshold. The risk threshold is 25 percent of potential payments. (3) Arrangements that...) Withholds greater than 25 percent of potential payments. (ii) Withholds less than 25 percent of potential...
42 CFR 422.208 - Physician incentive plans: requirements and limitations.
Code of Federal Regulations, 2010 CFR
2010-10-01
... of reducing or limiting the services provided to any plan enrollee. Potential payments means the... determination. (2) Risk threshold. The risk threshold is 25 percent of potential payments. (3) Arrangements that...) Withholds greater than 25 percent of potential payments. (ii) Withholds less than 25 percent of potential...
Enforcing positivity in intrusive PC-UQ methods for reactive ODE systems
Najm, Habib N.; Valorani, Mauro
2014-04-12
We explore the relation between the development of a non-negligible probability of negative states and the instability of numerical integration of the intrusive Galerkin ordinary differential equation system describing uncertain chemical ignition. To prevent this instability without resorting to either multi-element local polynomial chaos (PC) methods or increasing the order of the PC representation in time, we propose a procedure aimed at modifying the amplitude of the PC modes to bring the probability of negative state values below a user-defined threshold. This modification can be effectively described as a filtering procedure of the spectral PC coefficients, which is applied on the fly during the numerical integration when the current value of the probability of negative states exceeds the prescribed threshold. We demonstrate the filtering procedure using a simple model of an ignition process in a batch reactor. This is carried out by comparing different observables and error measures as obtained by non-intrusive Monte Carlo and Gauss-quadrature integration and the filtered intrusive procedure. Lastly, the filtering procedure has been shown to effectively stabilize divergent intrusive solutions, and also to improve the accuracy of stable intrusive solutions which are close to the stability limits.
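The mode-filtering idea can be illustrated in a stripped-down, one-dimensional form: estimate the probability of negative states by sampling a Hermite PC expansion, then damp the fluctuating modes until that probability falls below the threshold. This is a hypothetical sketch, not the paper's actual filter:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval

def neg_prob(modes, xi):
    """Monte Carlo estimate of P(state < 0) for a 1-D Hermite PC expansion
    evaluated at samples xi of the standard-normal germ."""
    return float((hermeval(xi, modes) < 0).mean())

def filter_modes(modes, xi, p_max=1e-3, damp=0.9):
    """Shrink the fluctuating modes (k >= 1) geometrically until the
    probability of negative states drops below p_max; the mean is kept."""
    modes = np.array(modes, dtype=float)
    while neg_prob(modes, xi) > p_max:
        modes[1:] *= damp
    return modes

rng = np.random.default_rng(2)
xi = rng.standard_normal(200_000)          # samples of the germ
modes = np.array([1.0, 0.8])               # mean 1.0, large fluctuation
filtered = filter_modes(modes, xi)
print(neg_prob(modes, xi), neg_prob(filtered, xi))
```

The unfiltered expansion here goes negative with probability near 0.11; damping the first mode pulls that below the prescribed 1e-3 while leaving the mean untouched.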
Value-based differential pricing: efficient prices for drugs in a global context.
Danzon, Patricia; Towse, Adrian; Mestre-Ferrandiz, Jorge
2015-03-01
This paper analyzes pharmaceutical pricing between and within countries to achieve second-best static and dynamic efficiency. We distinguish countries with and without universal insurance, because insurance undermines patients' price sensitivity, potentially leading to prices above second-best efficient levels. In countries with universal insurance, if each payer unilaterally sets an incremental cost-effectiveness ratio (ICER) threshold based on its citizens' willingness-to-pay for health; manufacturers price to that ICER threshold; and payers limit reimbursement to patients for whom a drug is cost-effective at that price and ICER, then the resulting price levels and use within each country and price differentials across countries are roughly consistent with second-best static and dynamic efficiency. These value-based prices are expected to differ cross-nationally with per capita income and be broadly consistent with Ramsey optimal prices. Countries without comprehensive insurance avoid its distorting effects on prices but also lack financial protection and affordability for the poor. Improving pricing efficiency in these self-pay countries includes improving regulation and consumer information about product quality and enabling firms to price discriminate within and between countries. © 2013 The Authors. Health Economics published by John Wiley & Sons Ltd.
Enhancement of the Daytime MODIS Based Aircraft Icing Potential Algorithm Using Mesoscale Model Data
2006-03-01
[List-of-figures excerpt: Figure 25 and Figure 26 show ROC curves using 3-hour PIREPs and the Alexander Tmap, with symbols plotted at the 0.5 threshold values; Table 4 gives results using T icing potential values from the Alexander Tmap and 3-hour PIREPs.]
Measurand transient signal suppressor
NASA Technical Reports Server (NTRS)
Bozeman, Richard J., Jr. (Inventor)
1994-01-01
A transient signal suppressor for use in a control system which is adapted to respond to a change in a physical parameter whenever it crosses a predetermined threshold value in a selected direction of increasing or decreasing values with respect to the threshold value and is sustained for a selected discrete time interval is presented. The suppressor includes a sensor transducer for sensing the physical parameter and generating an electrical input signal whenever the sensed physical parameter crosses the threshold level in the selected direction. A manually operated switch is provided for adapting the suppressor to produce an output drive signal whenever the physical parameter crosses the threshold value in the selected direction of increasing or decreasing values. A time delay circuit is selectively adjustable for suppressing the transducer input signal for a preselected one of a plurality of available discrete suppression times, producing an output signal only if the input signal is sustained for a time greater than the selected suppression time. An electronic gate is coupled to receive the transducer input signal and the timer output signal and to produce an output drive signal for energizing a control relay whenever the transducer input is a non-transient signal which is sustained beyond the selected time interval.
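The suppressor's logic, a threshold crossing that must be sustained for a discrete time interval before a drive signal is issued, can be sketched in software form (parameter names and values are illustrative, not from the patent):

```python
def suppress_transients(samples, threshold, hold_time, dt, rising=True):
    """Flag a crossing only after the signal stays past `threshold` (in the
    chosen direction) for at least `hold_time` seconds; brief transients
    that revert before the hold time elapses are suppressed."""
    out, elapsed = [], 0.0
    for v in samples:
        crossed = v > threshold if rising else v < threshold
        elapsed = elapsed + dt if crossed else 0.0   # reset on any dropout
        out.append(elapsed >= hold_time)
    return out

# a one-sample spike is suppressed; the sustained level trips the output
sig = [0, 1, 6, 0, 6, 6, 6, 6, 0]
flags = suppress_transients(sig, threshold=5, hold_time=0.3, dt=0.1)
print(flags)
```

The single spike at the third sample never trips the output; only the later run held above threshold for three consecutive samples (0.3 s) does.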
Kang, Hyunchul
2015-01-01
We investigate the in-network processing of an iceberg join query in wireless sensor networks (WSNs). An iceberg join is a special type of join where only those joined tuples whose cardinality exceeds a certain threshold (called iceberg threshold) are qualified for the result. Processing such a join involves the value matching for the join predicate as well as the checking of the cardinality constraint for the iceberg threshold. In the previous scheme, the value matching is carried out as the main task for filtering non-joinable tuples while the iceberg threshold is treated as an additional constraint. We take an alternative approach, meeting the cardinality constraint first and matching values next. In this approach, with a logical fragmentation of the join operand relations on the aggregate counts of the joining attribute values, the optimal sequence of 2-way fragment semijoins is generated, where each fragment semijoin employs a Bloom filter as a synopsis of the joining attribute values. This sequence filters non-joinable tuples in an energy-efficient way in WSNs. Through implementation and a set of detailed experiments, we show that our alternative approach considerably outperforms the previous one. PMID:25774710
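The "cardinality constraint first, value matching second" strategy can be illustrated with a toy in-memory iceberg join (omitting the Bloom-filter synopses and fragmentation, which are the paper's WSN-specific machinery):

```python
from collections import Counter

def iceberg_join(r, s, threshold):
    """Join two lists of (key, payload) tuples, keeping only keys whose
    joined-tuple cardinality meets the iceberg threshold. The cheap
    cardinality check runs first, so non-qualifying keys are filtered
    before any tuple-level matching."""
    cr = Counter(k for k, _ in r)
    cs = Counter(k for k, _ in s)
    qualified = {k for k in cr if k in cs and cr[k] * cs[k] >= threshold}
    return [(k, a, b) for k, a in r if k in qualified
            for k2, b in s if k2 == k]

r = [(1, 'a'), (1, 'b'), (2, 'c')]
s = [(1, 'x'), (1, 'y'), (2, 'z')]
# key 1 yields 2*2 = 4 joined tuples (qualifies); key 2 yields only 1
print(iceberg_join(r, s, threshold=4))
```

In the sensor-network setting the same counts are what the aggregate-based fragmentation exploits: fragments whose count products cannot reach the threshold need never be shipped between nodes.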
Ding, Jiule; Xing, Wei; Chen, Jie; Dai, Yongming; Sun, Jun; Li, Dengfa
2014-01-21
To explore the influence of signal-to-noise ratio (SNR) on the analysis of clear cell renal cell carcinoma (CCRCC) using DWI with multiple b values. The images of 17 cases with CCRCC were analyzed, including 17 masses and 9 pure cysts. The signal intensity of the cysts and masses was measured separately on DWI for each b value. The minimal SNR, taken as the threshold, was recorded when the signal curve appeared as a single exponential line. The SNR of the CCRCC was calculated on DWI for each b value and compared with the threshold by independent two-sample t test. The signal on DWI decreased with increasing b factors for both pure cysts and CCRCC. The threshold was 1.29 ± 0.17, and the signal intensity of the cysts on DWI with multiple b values appeared as a single exponential line when b ≤ 800 s/mm(2). For CCRCC, the SNR was similar to the threshold when b = 1 000 s/mm(2) (t = 0.40, P = 0.69) and lower when b = 1 200 s/mm(2) (t = -2.38, P = 0.03). The SNR should be sufficient for quantitative analysis of DWI, and the maximal b value is 1 000 s/mm(2) for CCRCC.
A Bayesian Approach to the Overlap Analysis of Epidemiologically Linked Traits.
Asimit, Jennifer L; Panoutsopoulou, Kalliope; Wheeler, Eleanor; Berndt, Sonja I; Cordell, Heather J; Morris, Andrew P; Zeggini, Eleftheria; Barroso, Inês
2015-12-01
Diseases co-occur in individuals more often than expected by chance, and this may be explained by shared underlying genetic etiology. A common approach to genetic overlap analyses is to use summary genome-wide association study data to identify single-nucleotide polymorphisms (SNPs) that are associated with multiple traits at a selected P-value threshold. However, P-values do not account for differences in power, whereas Bayes' factors (BFs) do, and may be approximated using summary statistics. We use simulation studies to compare the power of frequentist and Bayesian approaches to overlap analyses, and to decide on appropriate thresholds for comparison between the two methods. It is empirically illustrated that BFs have the advantage over P-values of a decreasing type I error rate as study size increases for single-disease associations. Consequently, the overlap analysis of traits from different-sized studies encounters issues in fair P-value threshold selection, whereas BFs are adjusted automatically. Extensive simulations show that Bayesian overlap analyses tend to have higher power than those that assess association strength with P-values, particularly in low-power scenarios. Calibration tables between BFs and P-values are provided for a range of sample sizes, as well as an approximation approach for sample sizes that are not in the calibration table. Although P-values are sometimes thought more intuitive, these tables assist in removing the opaqueness of Bayesian thresholds and may also be used in the selection of a BF threshold to meet a certain type I error rate. An application of our methods is used to identify variants associated with both obesity and osteoarthritis. © 2015 The Authors. Genetic Epidemiology published by Wiley Periodicals, Inc.
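Approximating a BF from summary statistics can be illustrated with a Wakefield-style approximate Bayes factor computed from an effect estimate and its standard error. This is a generic sketch, not the paper's exact calibration; the prior variance W = 0.21² is an arbitrary illustrative choice:

```python
import math

def approx_log10_bf(beta_hat, se, w=0.21 ** 2):
    """Approximate log10 Bayes factor (H1 vs H0) for an association,
    assuming beta_hat ~ N(beta, se^2) and a N(0, w) prior on beta."""
    v = se ** 2
    z2 = (beta_hat / se) ** 2
    log_bf = 0.5 * math.log(v / (v + w)) + 0.5 * z2 * w / (v + w)
    return log_bf / math.log(10)

# identical z-scores (z = 3) from a small and a large study
small = approx_log10_bf(0.30, 0.10)
large = approx_log10_bf(0.03, 0.01)
print(round(small, 2), round(large, 2))
```

For the same z-score (and hence the same P-value), the larger study yields a smaller BF, which is the sample-size adjustment that makes a fixed BF threshold behave like an increasingly strict P-value threshold as studies grow.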
NASA Astrophysics Data System (ADS)
Cai, Wenli; Yoshida, Hiroyuki; Harris, Gordon J.
2007-03-01
Measurement of the volume of focal liver tumors, called liver tumor volumetry, is indispensable for assessing the growth of tumors and for monitoring the response of tumors to oncology treatments. Traditional edge models, such as the maximum gradient and zero-crossing methods, often fail to detect the accurate boundary of a fuzzy object such as a liver tumor. As a result, the computerized volumetry based on these edge models tends to differ from manual segmentation results performed by physicians. In this study, we developed a novel computerized volumetry method for fuzzy objects, called dynamic-thresholding level set (DT level set). An optimal threshold value computed from a histogram tends to shift, relative to the theoretical threshold value obtained from a normal distribution model, toward a smaller region in the histogram. We thus designed a mobile shell structure, called a propagating shell, which is a thick region encompassing the level set front. The optimal threshold calculated from the histogram of the shell drives the level set front toward the boundary of a liver tumor. When the volume ratio between the object and the background in the shell approaches one, the optimal threshold value best fits the theoretical threshold value and the shell stops propagating. Application of the DT level set to 26 hepatic CT cases with 63 biopsy-confirmed hepatocellular carcinomas (HCCs) and metastases showed that the computer measured volumes were highly correlated with those of tumors measured manually by physicians. Our preliminary results showed that DT level set was effective and accurate in estimating the volumes of liver tumors detected in hepatic CT images.
Mercury demethylation in waterbird livers: Dose-response thresholds and differences among species
Eagles-Smith, Collin A.; Ackerman, Joshua T.; Julie, Y.E.E.; Adelsbach, T.L.
2009-01-01
We assessed methylmercury (MeHg) demethylation in the livers of adults and chicks of four waterbird species that commonly breed in San Francisco Bay: American avocets, black-necked stilts, Caspian terns, and Forster's terns. In adults (all species combined), we found strong evidence for a threshold model where MeHg demethylation occurred above a hepatic total mercury concentration threshold of 8.51 ± 0.93 µg/g dry weight, and there was a strong decline in %MeHg values as total mercury (THg) concentrations increased above 8.51 µg/g dry weight. Conversely, there was no evidence for a demethylation threshold in chicks, and we found that %MeHg values declined linearly with increasing THg concentrations. For adults, we also found taxonomic differences in the demethylation responses, with avocets and stilts showing a higher demethylation rate than that of terns when concentrations exceeded the threshold, whereas terns had a lower demethylation threshold (7.48 ± 1.48 µg/g dry wt) than that of avocets and stilts (9.91 ± 1.29 µg/g dry wt). Finally, we assessed the role of selenium (Se) in the demethylation process. Selenium concentrations were positively correlated with inorganic Hg in livers of birds above the demethylation threshold but not below. This suggests that Se may act as a binding site for demethylated Hg and may reduce the potential for secondary toxicity. Our findings indicate that waterbirds demethylate mercury in their livers if exposure exceeds a threshold value and suggest that taxonomic differences in demethylation ability may be an important factor in evaluating species-specific risk of MeHg exposure. Further, we provide strong evidence for a threshold of approximately 8.5 µg/g dry weight of THg in the liver where demethylation is initiated. © 2009 SETAC.
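The threshold model described for adults can be sketched as a simple hockey-stick function; only the 8.51 µg/g breakpoint comes from the study, while the intercept and slope below are hypothetical placeholders:

```python
def pct_mehg(thg, threshold=8.51, base=95.0, slope=-4.0):
    """Hockey-stick (threshold) model of hepatic %MeHg versus total Hg
    (THg, ug/g dry weight): %MeHg is roughly constant below the
    demethylation threshold and declines linearly above it.
    `base` and `slope` are illustrative placeholders; only the 8.51
    threshold is taken from the study."""
    return base + slope * max(thg - threshold, 0.0)
```

Below the breakpoint the model is flat (`pct_mehg(5.0)` returns `base`); above it, each additional µg/g of THg lowers the predicted %MeHg by `|slope|` percentage points.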
Fransz, Duncan P; Huurnink, Arnold; de Boode, Vosse A; Kingma, Idsart; van Dieën, Jaap H
2016-02-08
We aimed to provide insight in how threshold selection affects time to stabilization (TTS) and its reliability to support selection of methods to determine TTS. Eighty-two elite youth soccer players performed six single leg drop jump landings. The TTS was calculated based on four processed signals: raw ground reaction force (GRF) signal (RAW), moving root mean square window (RMS), sequential average (SA) or unbounded third order polynomial fit (TOP). For each trial and processing method a wide range of thresholds was applied. Per threshold, reliability of the TTS was assessed through intra-class correlation coefficients (ICC) for the vertical (V), anteroposterior (AP) and mediolateral (ML) direction of force. Low thresholds resulted in a sharp increase of TTS values and in the percentage of trials in which TTS exceeded trial duration. The TTS and ICC were essentially similar for RAW and RMS in all directions; ICC's were mostly 'insufficient' (<0.4) to 'fair' (0.4-0.6) for the entire range of thresholds. The SA signals resulted in the most stable ICC values across thresholds, being 'substantial' (>0.8) for V, and 'moderate' (0.6-0.8) for AP and ML. The ICC's for TOP were 'substantial' for V, 'moderate' for AP, and 'fair' for ML. The present findings did not reveal an optimal threshold to assess TTS in elite youth soccer players following a single leg drop jump landing. Irrespective of threshold selection, the SA and TOP methods yielded sufficiently reliable TTS values, while for RAW and RMS the reliability was insufficient to differentiate between players. Copyright © 2016 Elsevier Ltd. All rights reserved.
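A sequential-average (SA) based TTS calculation of the kind compared here can be sketched as follows; this is a simplified illustration with an assumed deviation-band threshold, not the authors' exact implementation:

```python
def time_to_stabilization(force, threshold, fs=1000.0):
    """Time to stabilization (TTS) via a sequential-average method:
    at each sample i the signal is replaced by the mean of samples
    0..i, and TTS is taken as the time of the last sample at which
    that running mean deviates from the whole-trial mean by more than
    `threshold` (same units as the force signal). Returns TTS in
    seconds, or None if the signal never settles within the trial.
    A simplified sketch; published SA variants differ in detail."""
    n = len(force)
    final_mean = sum(force) / n
    running = []
    total = 0.0
    for i, f in enumerate(force):
        total += f
        running.append(total / (i + 1))   # sequential average
    tts_idx = 0
    for i, m in enumerate(running):
        if abs(m - final_mean) > threshold:
            tts_idx = i + 1               # first sample inside the band for good
    return tts_idx / fs if tts_idx < n else None
```

As the abstract notes, lowering `threshold` inflates TTS and eventually pushes it past the trial duration (the `None` case here), which is why reliability must be checked across a range of thresholds.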
Jacobson, Linda S; McIntyre, Lauren; Mykusz, Jenny
2018-02-01
Objectives Real-time PCR provides quantitative information, recorded as the cycle threshold (Ct) value, about the number of organisms detected in a diagnostic sample. The Ct value correlates with the number of copies of the target organism in an inversely proportional and exponential relationship. The aim of the study was to determine whether Ct values could be used to distinguish between culture-positive and culture-negative samples. Methods This was a retrospective analysis of Ct values from dermatophyte PCR results in cats with suspicious skin lesions or suspected exposure to dermatophytosis. Results One hundred and thirty-two samples were included. Using culture as the gold standard, 28 were true positives, 12 were false positives and 92 were true negatives. The area under the curve for the pretreatment time point was 96.8% (95% confidence interval [CI] 94.2-99.5) compared with 74.3% (95% CI 52.6-96.0) for pooled data during treatment. Before treatment, a Ct cut-off of <35.7 (approximate DNA count 300) provided a sensitivity of 92.3% and specificity of 95.2%. There was no reliable cut-off Ct value between culture-positive and culture-negative samples during treatment. Ct values prior to treatment differed significantly between the true-positive and false-positive groups ( P = 0.0056). There was a significant difference between the pretreatment and first and second negative culture time points ( P = 0.0002 and P <0.0001, respectively). However, there was substantial overlap between Ct values for true positives and true negatives, and for pre- and intra-treatment time points. Conclusions and relevance Ct values had limited usefulness for distinguishing between culture-positive and culture-negative cases when field study samples were analyzed. In addition, Ct values were less reliable than fungal culture for determining mycological cure.
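The inverse exponential relationship between Ct value and starting copy number described above can be sketched as follows, calibrated (as an assumption) to the reported pretreatment cut-off of Ct 35.7 with an approximate DNA count of 300, and to perfect amplification efficiency:

```python
def copies_from_ct(ct, ct_ref=35.7, copies_ref=300.0, efficiency=1.0):
    """Estimate starting DNA copies from a qPCR cycle threshold (Ct).
    Assumes the exponential model copies = copies_ref * (1+E)^(ct_ref - ct),
    anchored here to the abstract's pretreatment cut-off (Ct 35.7,
    approximate DNA count 300). E = 1.0 means perfect doubling per cycle;
    real assays are calibrated against a standard curve."""
    return copies_ref * (1.0 + efficiency) ** (ct_ref - ct)

def classify(ct, cutoff=35.7):
    """Pretreatment call from the abstract: Ct below the cut-off
    predicts a culture-positive sample."""
    return "predicted culture-positive" if ct < cutoff else "predicted culture-negative"
```

Each cycle earlier that the signal crosses the threshold (one unit lower Ct) corresponds to roughly a doubling of the starting template under this model.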
1982-01-01
bromide is listed as a positive interference. Nitric oxide and nitrogen dioxide can be detected by using the Draeger nitrous fumes detector tube. A... fumes exhibit a delay from the time of exposure to the onset of symptoms. This time delay would not be conducive for a rapid field screening test. It...Dangerous when strongly heated, emits highly toxic fumes . THRESHOLD LIMIT VALUE: No information available PHYSIOLOGICAL EFFECTS: A. Intensely irritating to
1989-03-01
fibers do not appear to be a significant inhalation hazard nor are they biologically active in several in vitro test systems. Minor skin and eye...Additional emphasis on defining various methods to be utilized to define exposure including biological monitoring and application of various skin absorption...Threshold Limit Values and Biological Indices for 1988-1989, Cincinnati, Ohio Bartek, M.J., LaBulde, J.A., and Maibach, H.I. (1983). Skin permeability
NASA Astrophysics Data System (ADS)
Senese, Antonella; Maugeri, Maurizio; Vuillermoz, Elisa; Smiraglia, Claudio; Diolaiuti, Guglielmina
2014-05-01
Glacier melt occurs whenever the surface temperature is at the melting point (273.15 K) and the net energy budget is positive. These conditions can be assessed by analyzing meteorological and energy data acquired by a supraglacial Automatic Weather Station (AWS). When no such station is present at the glacier surface, assessing actual melting conditions and evaluating the melt amount is difficult, and degree-day (also called T-index) models are applied instead. These approaches require the choice of a correct air temperature threshold. In fact, melt does not necessarily occur at daily air temperatures higher than 273.15 K, since it is determined by the energy budget, which in turn is only indirectly affected by air temperature. This is the case in the late spring period, when ablation processes start at the glacier surface and progressively reduce snow thickness. In this study, to detect the air temperature threshold most indicative of melt conditions in the April-June period, we analyzed air temperature data recorded from 2006 to 2012 by a supraglacial AWS (at 2631 m a.s.l.) on the ablation tongue of the Forni Glacier (Italy), and by a weather station located near the studied glacier (at Bormio, 1225 m a.s.l.). Moreover, we evaluated the glacier energy budget (which gives the actual melt; Senese et al., 2012) and the snow water equivalent values during this time frame. The ablation amount was then estimated both from the surface energy balance (MEB, from supraglacial AWS data) and from a degree-day method (MT-INDEX, applying the mean tropospheric lapse rate to the temperature data acquired at Bormio while varying the air temperature threshold), and the results were compared. We found that the mean tropospheric lapse rate permits a good and reliable reconstruction of daily air temperature conditions at the glacier, and that the major uncertainty in computing snow melt from degree-day models comes from the choice of an appropriate air temperature threshold.
To select the most suitable threshold, we first analyzed hourly MEB values to detect whether ablation occurred and for how long (number of hours per day). The largest part of the melting (97.7%) occurred on days featuring at least 6 melting hours, suggesting the minimum mean daily air temperature of such days (268.1 K) as a suitable threshold. We then ran a simple T-index model applying different threshold values, and the threshold that best reproduced snow melt was indeed 268.1 K. In summary, a threshold value 5.0 K lower than the widely applied 273.15 K permits the best reconstruction of glacier melt, in agreement with the findings of van den Broeke et al. (2010) for the Greenland ice sheet. The choice of a 268 K threshold for computing degree-day amounts could therefore probably be generalized and applied not only to Greenland glaciers but also to mid-latitude and Alpine ones. This work was carried out under the umbrella of the SHARE Stelvio Project funded by the Lombardy Region and managed by FLA and the EvK2-CNR Committee.
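A degree-day (T-index) model of the kind tuned here reduces to a sum of positive degree-days above the chosen threshold; in this sketch the degree-day factor `ddf` is an assumed calibration constant, while the 268.1 K threshold is the value the study identifies:

```python
def degree_day_melt(daily_temps_k, ddf=4.5, t_threshold=268.1):
    """Positive degree-day (T-index) melt estimate.
    daily_temps_k: mean daily air temperatures (K) at the glacier.
    ddf: degree-day factor (mm w.e. per K per day) -- an assumed,
    site-specific calibration constant, not a value from the study.
    t_threshold: melt-onset air temperature; 268.1 K is the threshold
    the study found best reproduces energy-balance melt, versus the
    conventional 273.15 K."""
    return sum(ddf * max(t - t_threshold, 0.0) for t in daily_temps_k)

# The lower threshold attributes melt to days that a 273.15 K
# threshold would treat as melt-free:
temps = [266.0, 270.0, 272.0, 274.0]
print(degree_day_melt(temps, t_threshold=268.1))
print(degree_day_melt(temps, t_threshold=273.15))
```

With these example temperatures, only one day exceeds 273.15 K, so the conventional threshold misses most of the melt that the 268.1 K threshold captures.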
Yates, K.K.; Halley, R.B.
2006-01-01
The severity of the impact of elevated atmospheric pCO2 on coral reef ecosystems depends, in part, on how seawater pCO2 affects the balance between calcification and dissolution of carbonate sediments. Presently, there are insufficient published data that relate concentrations of pCO2 and CO32- to in situ rates of reef calcification in natural settings to accurately predict the impact of elevated atmospheric pCO2 on calcification and dissolution processes. Rates of net calcification and dissolution, CO32- concentrations, and pCO2 were measured, in situ, on patch reefs, bare sand, and coral rubble on the Molokai reef flat in Hawaii. Rates of calcification ranged from 0.03 to 2.30 mmol CaCO3 m-2 h-1 and dissolution ranged from -0.05 to -3.3 mmol CaCO3 m-2 h-1. Calcification and dissolution varied diurnally, with net calcification primarily occurring during the day and net dissolution occurring at night. These data were used to calculate threshold values for pCO2 and CO32- at which rates of calcification and dissolution are equivalent. Results indicate that calcification and dissolution are linearly correlated with both CO32- and pCO2. Threshold pCO2 and CO32- values for individual substrate types showed considerable variation. The average pCO2 threshold value for all substrate types was 654 ± 195 µatm and ranged from 467 to 1003 µatm. The average CO32- threshold value was 152 ± 24 µmol kg-1, ranging from 113 to 184 µmol kg-1. Ambient seawater measurements of pCO2 and CO32- indicate that CO32- and pCO2 threshold values for all substrate types were both exceeded, simultaneously, 13% of the time at present-day atmospheric pCO2 concentrations. It is predicted that atmospheric pCO2 will exceed the average pCO2 threshold value for calcification and dissolution on the Molokai reef flat by the year 2100.
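The threshold computation described, finding the pCO2 at which net calcification and dissolution balance, amounts to locating the zero crossing of a linear fit of net rate against pCO2; a sketch with hypothetical data (not the study's measurements):

```python
def pco2_threshold(pco2, net_calc):
    """Given paired in situ measurements of seawater pCO2 (uatm) and
    net calcification rate (mmol CaCO3 m-2 h-1; negative values mean
    net dissolution), fit net_calc = a + b*pco2 by ordinary least
    squares and return the pCO2 at which the rate crosses zero (-a/b).
    Illustrative only; the study's thresholds come from its own
    substrate-specific fits."""
    n = len(pco2)
    mx = sum(pco2) / n
    my = sum(net_calc) / n
    b = sum((x - mx) * (y - my) for x, y in zip(pco2, net_calc)) / \
        sum((x - mx) ** 2 for x in pco2)
    a = my - b * mx
    return -a / b
```

The CO32- threshold is obtained the same way, with carbonate ion concentration as the predictor instead of pCO2.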
Queiroz, Polyane Mazucatto; Rovaris, Karla; Santaella, Gustavo Machado; Haiter-Neto, Francisco; Freitas, Deborah Queiroz
2017-01-01
To calculate root canal volume and surface area in microCT images, an image segmentation by selecting threshold values is required, which can be determined by visual or automatic methods. Visual determination is influenced by the operator's visual acuity, while the automatic method is done entirely by computer algorithms. To compare between visual and automatic segmentation, and to determine the influence of the operator's visual acuity on the reproducibility of root canal volume and area measurements. Images from 31 extracted human anterior teeth were scanned with a μCT scanner. Three experienced examiners performed visual image segmentation, and threshold values were recorded. Automatic segmentation was done using the "Automatic Threshold Tool" available in the dedicated software provided by the scanner's manufacturer. Volume and area measurements were performed using the threshold values determined both visually and automatically. The paired Student's t-test showed no significant difference between visual and automatic segmentation methods regarding root canal volume measurements (p=0.93) and root canal surface (p=0.79). Although visual and automatic segmentation methods can be used to determine the threshold and calculate root canal volume and surface, the automatic method may be the most suitable for ensuring the reproducibility of threshold determination.
Wang, Hui; Liu, Huifang; Cao, Zhiyong; Wang, Bowen
2016-01-01
This paper presents a new perspective that there is a double-threshold effect in terms of the technology gap existing in the foreign direct investment (FDI) technology spillover process in different regional Chinese industrial sectors. In this paper, a double-threshold regression model was established to examine the relation between the threshold effect of the technology gap and technology spillover. Based on the provincial panel data of Chinese industrial sectors from 2000 to 2011, the empirical results reveal that there are two threshold values, which are 1.254 and 2.163, in terms of the technology gap in the industrial sector in eastern China. There are also two threshold values in both the central and western industrial sector, which are 1.516, 2.694 and 1.635, 2.714, respectively. The technology spillover is a decreasing function of the technology gap in both the eastern and western industrial sectors, but a concave curve function of the technology gap is in the central industrial sectors. Furthermore, the FDI technology spillover has increased gradually in recent years. Based on the empirical results, suggestions were proposed to elucidate the introduction of the FDI and the improvement in the industrial added value in different regions of China.
Jockusch, Rebecca A.; Williams, Evan R.
2005-01-01
The dissociation kinetics of protonated n-acetyl-L-alanine methyl ester dimer (AcAlaMEd), imidazole dimer, and their cross dimer were measured using blackbody infrared radiative dissociation (BIRD). Master equation modeling of these data was used to extract threshold dissociation energies (Eo) for the dimers. Values of 1.18 ± 0.06, 1.11 ± 0.04, and 1.12 ± 0.08 eV were obtained for AcAlaMEd, imidazole dimer, and the cross dimer, respectively. Assuming that the reverse activation barrier for dissociation of the ion–molecule complex is negligible, the value of Eo can be compared to the dissociation enthalpy (ΔHd°) from HPMS data. The Eo values obtained for the imidazole dimer and the cross dimer are in agreement with HPMS values; the value for AcAlaMEd is somewhat lower. Radiative rate constants used in the master equation modeling were determined using transition dipole moments calculated at the semiempirical (AM1) level for all dimers and compared to ab initio (RHF/3-21G*) calculations where possible. To reproduce the experimentally measured dissociation rates using master equation modeling, it was necessary to multiply semiempirical transition dipole moments by a factor between 2 and 3. Values for transition dipole moments from the ab initio calculations could be used for two of the dimers but appear to be too low for AcAlaMEd. These results demonstrate that BIRD, in combination with master equation modeling, can be used to determine threshold dissociation energies for intermediate size ions that are in neither the truncated Boltzmann nor the rapid energy exchange limit. PMID:16604163
Shlomai, Amir; Kariv, Revital; Leshno, Moshe; Beth-or, Anat; Sheinberg, Bracha; Halpern, Zamir
2010-10-01
Serum alanine aminotransferase (ALT) is commonly used to detect liver damage. Recent studies indicate that ALT levels at the upper range of normal limits are predictors of adverse outcomes, especially diabetes mellitus (DM) and the metabolic syndrome. The aim of our study was to define the ALT threshold for both men and women that may predict the onset of DM. We analyzed a large Health Maintenance Organization cohort of 157 308 healthy subjects with no evidence of liver disease and with baseline ALT levels ≤ 120 U/L, and identified those who developed DM within 6 years. Overall, an elevated baseline serum ALT value was significantly associated with the development of DM, with an odds ratio of 3.3 when comparing the higher and the lower quartiles of the whole study population. A subgroup analysis revealed that baseline ALT values higher than 10 U/L among women and 22 U/L among men were already significantly associated with an increased risk for DM for any increment in ALT level. Notably, ALT values higher than ∼55 U/L were associated with increased risk for DM that was relatively constant for any increment in ALT. Higher baseline ALT levels were stronger predictors for DM as compared with age, triglycerides and cholesterol levels. Our study implies that ALT values higher than 10 U/L and 22 U/L for women and men, respectively, may predict DM. We suggest redefining ALT values as either 'normal' or 'healthy', with the later reflecting much lower values, above which an individual is at increased risk for DM. © 2010 Journal of Gastroenterology and Hepatology Foundation and Blackwell Publishing Asia Pty Ltd.
NASA Astrophysics Data System (ADS)
Dubuc, Alexia; Waltham, Nathan; Malerba, Martino; Sheaves, Marcus
2017-11-01
Little is known about levels of dissolved oxygen fish are exposed to daily in typical urbanised tropical wetlands found along the Great Barrier Reef coastline. This study investigates diel dissolved oxygen (DO) dynamics in one of these typical urbanised wetlands, in tropical North Queensland, Australia. High frequency data loggers (DO, temperature, depth) were deployed for several days over the summer months in different tidal pools and channels that fish use as temporal or permanent refuges. DO was extremely variable over a 24 h cycle, and across the small-scale wetland. The high spatial and temporal DO variability measured was affected by time of day and tidal factors, namely water depth, tidal range and tidal direction (flood vs ebb). For the duration of the logging time, DO was mainly above the adopted threshold for hypoxia (50% saturation), however, for around 11% of the time, and on almost every logging day, DO values fell below the threshold, including a severe hypoxic event (<5% saturation) that continued for several hours. Fish still use this wetland intensively, so must be able to cope with low DO periods. Despite the ability of fish to tolerate extreme conditions, continuing urban expansion is likely to lead to further water quality degradation and so potential loss of nursery ground value. There is a substantial discontinuity between the recommended DO values in the Australian and New Zealand Guidelines for Fresh and Marine Water Quality and the values observed in this wetland, highlighting the limited value of these guidelines for management purposes. Local and regional high frequency data monitoring programs, in conjunction with local exposure risk studies are needed to underpin the development of the management that will ensure the sustainability of coastal wetlands.
Chemical sensing thresholds for mine detection dogs
NASA Astrophysics Data System (ADS)
Phelan, James M.; Barnett, James L.
2002-08-01
Mine detection dogs have been found to be an effective method to locate buried landmines. The capabilities of the canine olfaction method derive from a complex combination of training and the dog's inherent capacity for odor detection. The purpose of this effort was to explore the detection thresholds of a limited group of dogs that were trained specifically for landmine detection. Soils were contaminated with TNT and 2,4-DNT to develop chemical vapor standards to present to the dogs. The soils contained ultra-trace levels of TNT and DNT, which produce extremely low vapor levels. Three groups of dogs were presented the headspace vapors from the contaminated soils in the work environments for each dog group. One positive sample was placed among several that contained clean soils, and the location and vapor source (strength, type) were frequently changed. The detection thresholds for the dogs were determined from measured and extrapolated dilution of soil chemical residues and estimated soil vapor values using phase partitioning relationships. The results showed significant variance in dog sensing thresholds: some dogs could sense the lowest levels while others had trouble with even the highest source. The remarkable ultra-trace levels detectable by the dogs are consistent with the ultra-trace chemical residues derived from buried landmines; however, poor performance may go unnoticed without periodic challenge tests at levels consistent with performance requirements.
Threshold regression to accommodate a censored covariate.
Qian, Jing; Chiou, Sy Han; Maye, Jacqueline E; Atem, Folefac; Johnson, Keith A; Betensky, Rebecca A
2018-06-22
In several common study designs, regression modeling is complicated by the presence of censored covariates. Examples of such covariates include maternal age of onset of dementia that may be right censored in an Alzheimer's amyloid imaging study of healthy subjects, metabolite measurements that are subject to limit of detection censoring in a case-control study of cardiovascular disease, and progressive biomarkers whose baseline values are of interest, but are measured post-baseline in longitudinal neuropsychological studies of Alzheimer's disease. We propose threshold regression approaches for linear regression models with a covariate that is subject to random censoring. Threshold regression methods allow for immediate testing of the significance of the effect of a censored covariate. In addition, they provide for unbiased estimation of the regression coefficient of the censored covariate. We derive the asymptotic properties of the resulting estimators under mild regularity conditions. Simulations demonstrate that the proposed estimators have good finite-sample performance, and often offer improved efficiency over existing methods. We also derive a principled method for selection of the threshold. We illustrate the approach in application to an Alzheimer's disease study that investigated brain amyloid levels in older individuals, as measured through positron emission tomography scans, as a function of maternal age of dementia onset, with adjustment for other covariates. We have developed an R package, censCov, for implementation of our method, available at CRAN. © 2018, The International Biometric Society.
Mericske-Stern, R; Hofmann, J; Wedig, A; Geering, A H
1993-01-01
Numerous investigations give evidence of improvement of masticatory performance when edentulous patients have had implants placed. A comparative study was carried out to investigate the oral function and tactile sensibility of patients restored with implant-supported overdentures. Twenty-six patients with ITI implants and 18 patients with natural-tooth roots were selected. The minimal pressure threshold perceived in vertical and horizontal directions was registered with dynamometers. Maximal occlusal force was recorded with a miniature bite recorder placed between each pair of antagonistic teeth on both jaw sides separately. All measurements were repeated three times and the average was calculated. The records of minimal perceived pressure revealed a significantly higher threshold (factor 100) for the implant group. In both test groups, values registered in the vertical direction were slightly increased. A tendency for test subjects with implants to reach higher maximal occlusal force was observed, but not at a statistically significant level. In both test groups, the average maximum was found on the second premolar. The minimal pressure threshold seems to depend on the presence of receptors in the periodontal ligament. The records of maximal occlusal force, which were similar in both test groups, lead to the assumption that the limitation in maximal occlusal capacity of overdenture wearers is multifactorial and does not depend on the presence of a periodontal ligament.
Aquatic Rational Threshold Value (RTV) Concepts for Army Environmental Impact Assessment.
1979-07-01
...irreversible impacts. In aquatic systems, both the possible cause-effect relationships...dynamics, aqueous chemistry, toxicology, and aquatic ecology. In addition, when man's use...Examination of the etymology of "rational threshold value"...driving function...The shading effects of riparian...
Rainfall threshold definition using an entropy decision approach and radar data
NASA Astrophysics Data System (ADS)
Montesarchio, V.; Ridolfi, E.; Russo, F.; Napolitano, F.
2011-07-01
Flash flood events are floods characterised by a very rapid response of basins to storms, often resulting in loss of life and property damage. Due to the specific space-time scale of this type of flood, the lead time available for triggering civil protection measures is typically short. Rainfall threshold values specify the amount of precipitation for a given duration that generates a critical discharge in a given river cross section. If the threshold values are exceeded, it can produce a critical situation in river sites exposed to alluvial risk. It is therefore possible to directly compare the observed or forecasted precipitation with critical reference values, without running online real-time forecasting systems. The focus of this study is the Mignone River basin, located in Central Italy. The critical rainfall threshold values are evaluated by minimising a utility function based on the informative entropy concept and by using a simulation approach based on radar data. The study concludes with a system performance analysis, in terms of correctly issued warnings, false alarms and missed alarms.
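The concluding performance analysis, counting correctly issued warnings, false alarms, and missed alarms, can be sketched as a simple contingency evaluation of a candidate rainfall threshold (the inputs are hypothetical, and this omits the paper's entropy-based utility function):

```python
def warning_scores(rainfall, discharge, rain_thresh, critical_q):
    """Contingency counts for a candidate rainfall threshold:
    a warning is issued when event rainfall exceeds rain_thresh, and
    an event is truly critical when peak discharge at the river cross
    section exceeds critical_q. Returns (hits, false_alarms,
    missed_alarms). Inputs are hypothetical illustration data."""
    hits = false_alarms = missed = 0
    for r, q in zip(rainfall, discharge):
        warned, critical = r > rain_thresh, q > critical_q
        if warned and critical:
            hits += 1
        elif warned:
            false_alarms += 1
        elif critical:
            missed += 1
    return hits, false_alarms, missed
```

Sweeping `rain_thresh` over a range of values and scoring each candidate this way is the basic trade-off the utility-minimisation approach formalises: lower thresholds cut missed alarms but inflate false alarms.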
A score-statistic approach for determining threshold values in QTL mapping.
Kao, Chen-Hung; Ho, Hsiang-An
2012-06-01
Issues in determining threshold values for QTL mapping have so far been investigated mostly in backcross and F2 populations, which have relatively simple genome structures. The investigation of these issues in the progeny populations after F2 (advanced populations), which have relatively more complicated genomes, is generally inadequate. As these advanced populations have been well implemented in QTL mapping, it is important to address these issues for them in more detail. Due to an increasing number of meiosis cycles, the genomes of the advanced populations can be very different from the backcross and F2 genomes. Therefore, special devices that consider the specific genome structures present in the advanced populations are required to resolve these issues. By considering the differences in genome structure between populations, we formulate more general score test statistics and Gaussian processes to evaluate their threshold values. In general, we found that, given a significance level and a genome size, threshold values for QTL detection are higher in denser marker maps and in more advanced populations. Simulations were performed to validate our approach.
Economic values under inappropriate normal distribution assumptions.
Sadeghi-Sefidmazgi, A; Nejati-Javaremi, A; Moradi-Shahrbabak, M; Miraei-Ashtiani, S R; Amer, P R
2012-08-01
The objectives of this study were to quantify the errors in economic values (EVs) for traits affected by cost or price thresholds when skewed or kurtotic distributions of varying degree are assumed to be normal and when data with a normal distribution is subject to censoring. EVs were estimated for a continuous trait with dichotomous economic implications because of a price premium or penalty arising from a threshold ranging between -4 and 4 standard deviations from the mean. In order to evaluate the impacts of skewness, positive and negative excess kurtosis, standard skew normal, Pearson and the raised cosine distributions were used, respectively. For the various evaluable levels of skewness and kurtosis, the results showed that EVs can be underestimated or overestimated by more than 100% when price determining thresholds fall within a range from the mean that might be expected in practice. Estimates of EVs were very sensitive to censoring or missing data. In contrast to practical genetic evaluation, economic evaluation is very sensitive to lack of normality and missing data. Although in some special situations, the presence of multiple thresholds may attenuate the combined effect of errors at each threshold point, in practical situations there is a tendency for a few key thresholds to dominate the EV, and there are many situations where errors could be compounded across multiple thresholds. In the development of breeding objectives for non-normal continuous traits influenced by value thresholds, it is necessary to select a transformation that will resolve problems of non-normality or consider alternative methods that are less sensitive to non-normality.
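The sensitivity described here can be illustrated by writing the EV of a threshold-priced trait under a normal assumption: a unit shift in the trait mean changes the proportion of animals above the price threshold by the density at that threshold, so the EV is the premium times that density. A sketch (symbols and values are illustrative, not from the study):

```python
import math

def ev_normal(threshold, mu, sd, premium):
    """Economic value (EV) of a unit increase in the trait mean when a
    price premium applies above a threshold: EV = premium * dP(X > t)/dmu,
    which under normality equals the premium times the normal density
    evaluated at the threshold."""
    z = (threshold - mu) / sd
    return premium * math.exp(-0.5 * z * z) / (sd * math.sqrt(2.0 * math.pi))

# The EV shrinks rapidly as the threshold moves into the tail, which is
# why mis-specifying the tail shape (skewness, kurtosis, censoring) can
# bias the EV by large factors:
print(ev_normal(0.0, 0.0, 1.0, 100.0))   # threshold at the mean
print(ev_normal(3.0, 0.0, 1.0, 100.0))   # threshold 3 SD above the mean
```

Replacing the normal density with a skewed or kurtotic one changes the density value at tail thresholds disproportionately, reproducing the over- and under-estimation of more than 100% reported in the abstract.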
Zuclich, Joseph A; Lund, David J; Stuck, Bruce E
2007-01-01
This report summarizes the results of a series of infrared (IR) laser-induced ocular damage studies conducted over the past decade. The studies examined retinal, lens, and corneal effects of laser exposures in the near-IR to far-IR transition region (wavelengths from 1.3-1.4 µm with exposure durations ranging from Q-switched to continuous wave). The corneal and retinal damage thresholds are tabulated for all pulsewidth regimes, and the wavelength dependence of the IR thresholds is discussed and contrasted to laser safety standard maximum permissible exposure limits. The analysis suggests that the current maximum permissible exposure limits could be beneficially revised to (1) relax the IR limits over wavelength ranges where unusually high safety margins may unintentionally hinder applications of recently developed military and telecommunications laser systems; (2) replace step-function discontinuities in the IR limits by continuously varying analytical functions of wavelength and pulsewidth which more closely follow the trends of the experimental retinal (for point-source laser exposures) and corneal ED50 threshold data; and (3) result in an overall simplification of the permissible exposure limits over the wavelength range from 1.2-2.6 µm. A specific proposal for amending the IR maximum permissible exposure limits over this wavelength range is presented.
Shah, Anoop D.; Nicholas, Owen; Timmis, Adam D.; Feder, Gene; Abrams, Keith R.; Chen, Ruoling; Hingorani, Aroon D.; Hemingway, Harry
2011-01-01
Background Low haemoglobin concentration has been associated with adverse prognosis in patients with angina and myocardial infarction (MI), but the strength and shape of the association and the presence of any threshold have not been precisely evaluated. Methods and findings A retrospective cohort study was carried out using the UK General Practice Research Database. 20,131 people with a new diagnosis of stable angina and no previous acute coronary syndrome, and 14,171 people with first MI who survived for at least 7 days were followed up for a mean of 3.2 years. Using semi-parametric Cox regression and multiple adjustment, there was evidence of threshold haemoglobin values below which mortality increased in a graded continuous fashion. For men with MI, the threshold value was 13.5 g/dl (95% confidence interval [CI] 13.2–13.9); the 29.5% of patients with haemoglobin below this threshold had an associated hazard ratio for mortality of 2.00 (95% CI 1.76–2.29) compared to those with haemoglobin values in the lowest risk range. Women tended to have lower threshold haemoglobin values (e.g., for MI 12.8 g/dl; 95% CI 12.1–13.5) but the shape and strength of association did not differ between the genders, nor between patients with angina and MI. We did a systematic review and meta-analysis that identified ten previously published studies, reporting a total of only 1,127 endpoints, but none evaluated thresholds of risk. Conclusions There is an association between low haemoglobin concentration and increased mortality. A large proportion of patients with coronary disease have haemoglobin concentrations below the thresholds of risk defined here. Intervention trials would clarify whether increasing the haemoglobin concentration reduces mortality. Please see later in the article for the Editors' Summary PMID:21655315
Culture-area relation in Axelrod's model for culture dissemination.
Barbosa, Lauro A; Fontanari, José F
2009-11-01
Axelrod's model for culture dissemination offers a nontrivial answer to the question of why there is cultural diversity given that people's beliefs have a tendency to become more similar to each other's as they interact repeatedly. The answer depends on the two control parameters of the model, namely, the number F of cultural features that characterize each agent, and the number q of traits that each feature can take on, as well as on the size A of the territory or, equivalently, on the number of interacting agents. Here, we investigate the dependence of the number C of distinct coexisting cultures on the area A in Axelrod's model, the culture-area relationship, through extensive Monte Carlo simulations. We find a non-monotonous culture-area relation, for which the number of cultures decreases when the area grows beyond a certain size, provided that q is smaller than a threshold value qc = qc(F) and F ≥ 3. In the limit of infinite area, this threshold value signals the onset of a discontinuous transition between a globalized regime marked by a uniform culture (C = 1), and a completely polarized regime where all C = q^F possible cultures coexist. Otherwise, the culture-area relation exhibits the typical behavior of the species-area relation, i.e., a monotonically increasing curve the slope of which is steep at first and steadily levels off at some maximum diversity value.
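The update rule behind these simulations can be sketched in a few lines. The following is a minimal sketch of one common variant of Axelrod's model on a lattice with periodic boundaries; the lattice size and the values of F and q in the usage example are illustrative assumptions, not parameters from the study:

```python
import random

def axelrod_step(grid, F, rng):
    """One interaction of Axelrod's culture-dissemination model.

    grid[i][j] is a list of F traits (each an int in 0..q-1) on an L x L
    lattice with periodic boundaries.  A random agent interacts with a
    random neighbor with probability equal to their cultural overlap,
    copying one differing trait on success.
    """
    L = len(grid)
    i, j = rng.randrange(L), rng.randrange(L)
    di, dj = rng.choice([(-1, 0), (1, 0), (0, -1), (0, 1)])
    a, b = grid[i][j], grid[(i + di) % L][(j + dj) % L]
    shared = sum(x == y for x, y in zip(a, b))
    if 0 < shared < F and rng.random() < shared / F:
        k = rng.choice([f for f in range(F) if a[f] != b[f]])
        a[k] = b[k]  # copy one of the neighbor's differing traits

def count_cultures(grid):
    """Number C of distinct coexisting cultures on the lattice."""
    return len({tuple(cell) for row in grid for cell in row})
```

Repeated application drives C toward 1 (the globalized regime) or leaves up to q^F cultures coexisting (the polarized regime), depending on F, q, and the lattice area.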
Simmons, Andrea Megela; Hom, Kelsey N; Simmons, James A
2017-03-01
Thresholds to short-duration narrowband frequency-modulated (FM) sweeps were measured in six big brown bats (Eptesicus fuscus) in a two-alternative forced choice passive listening task before and after exposure to band-limited noise (lower and upper frequencies between 10 and 50 kHz, 1 h, 116-119 dB sound pressure level root mean square; sound exposure level 152 dB). At recovery time points of 2 and 5 min post-exposure, thresholds varied from -4 to +4 dB from pre-exposure threshold estimates. Thresholds after sham (control) exposures varied from -6 to +2 dB from pre-exposure estimates. The small differences in thresholds after noise and sham exposures support the hypothesis that big brown bats do not experience significant temporary threshold shifts under these experimental conditions. These results confirm earlier findings showing stability of thresholds to broadband FM sweeps at longer recovery times after exposure to broadband noise. Big brown bats may have evolved a lessened susceptibility to noise-induced hearing losses, related to the special demands of echolocation.
Revising two-point discrimination assessment in normal aging and in patients with polyneuropathies.
van Nes, S I; Faber, C G; Hamers, R M T P; Harschnitz, O; Bakkers, M; Hermans, M C E; Meijer, R J; van Doorn, P A; Merkies, I S J
2008-07-01
To revise the static and dynamic normative values for the two-point discrimination test and to examine its applicability and validity in patients with a polyneuropathy. Two-point discrimination threshold values were assessed in 427 healthy controls and 99 patients mildly affected by a polyneuropathy. The controls were divided into seven age groups ranging from 20-29, 30-39,..., up to 80 years and older; each group consisted of at least 30 men and 30 women. Two-point discrimination examination took place under standardised conditions on the index finger. Correlation studies were performed between the scores obtained and the values derived from the Weinstein Enhanced Sensory Test (WEST) and the arm grade of the Overall Disability SumScore (ODSS) in the patients' group (validity studies). Finally, the sensitivity to detect patients mildly affected by a polyneuropathy was evaluated for static and dynamic assessments. There was a significant age-dependent increase in the two-point discrimination values. No significant gender difference was found. The dynamic threshold values were lower than the static scores. The two-point discrimination values obtained correlated significantly with the arm grade of the ODSS (static values: r = 0.33, p = 0.04; dynamic values: r = 0.37, p = 0.02) and the scores of the WEST in patients (static values: r = 0.58, p = 0.0001; dynamic values: r = 0.55, p = 0.0002). The sensitivity for the static and dynamic threshold values was 28% and 33%, respectively. This study provides age-related normative two-point discrimination threshold values using a two-point discriminator (an aesthesiometer). This easily applicable instrument could be used as part of a more extensive neurological sensory evaluation.
NASA Astrophysics Data System (ADS)
Kaewkasi, Pitchaya; Widjaja, Joewono; Uozumi, Jun
2007-03-01
Effects of threshold value on detection performance of the modified amplitude-modulated joint transform correlator are quantitatively studied using computer simulation. Fingerprint and human face images are used as test scenes in the presence of noise and a contrast difference. Simulation results demonstrate that this correlator improves detection performance for both types of image used, but more so for human face images. Optimal detection of low-contrast human face images obscured by strong noise can be obtained by selecting an appropriate threshold value.
Three-dimensional Monte Carlo model of pulsed-laser treatment of cutaneous vascular lesions
NASA Astrophysics Data System (ADS)
Milanič, Matija; Majaron, Boris
2011-12-01
We present a three-dimensional Monte Carlo model of optical transport in skin with a novel approach to treatment of side boundaries of the volume of interest. This represents an effective way to overcome the inherent limitations of ``escape'' and ``mirror'' boundary conditions and enables high-resolution modeling of skin inclusions with complex geometries and arbitrary irradiation patterns. The optical model correctly reproduces measured values of diffuse reflectance for normal skin. When coupled with a sophisticated model of thermal transport and tissue coagulation kinetics, it also reproduces realistic values of radiant exposure thresholds for epidermal injury and for photocoagulation of port wine stain blood vessels in various skin phototypes, with or without application of cryogen spray cooling.
Smartphone-Based Hearing Screening in Noisy Environments
Na, Youngmin; Joo, Hyo Sung; Yang, Hyejin; Kang, Soojin; Hong, Sung Hwa; Woo, Jihwan
2014-01-01
It is important and recommended to detect hearing loss as soon as possible. If it is found early, proper treatment may help improve hearing and reduce the negative consequences of hearing loss. In this study, we developed smartphone-based hearing screening methods that can ubiquitously test hearing. However, environmental noise generally results in the loss of ear sensitivity, which causes a hearing threshold shift (HTS). To overcome this limitation in the hearing screening location, we developed a correction algorithm to reduce the HTS effect. A built-in microphone and headphone were calibrated to provide the standard units of measure. The HTSs in the presence of either white or babble noise were systematically investigated to determine the mean HTS as a function of noise level. When the hearing screening application runs, the smartphone automatically measures the environmental noise and provides the HTS value to correct the hearing threshold. A comparison to pure tone audiometry shows that this hearing screening method in the presence of noise could closely estimate the hearing threshold. We expect that the proposed ubiquitous hearing test method could be used as a simple hearing screening tool and could alert the user if they suffer from hearing loss. PMID:24926692
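The correction algorithm, as described, amounts to measuring the ambient noise level, looking up the mean HTS observed at that level, and subtracting it from the threshold measured in noise. A minimal sketch; the lookup-table values below are hypothetical placeholders, not the study's calibration data:

```python
# Hypothetical mean hearing threshold shift (dB) per ambient noise level
# (dB SPL) -- illustrative values only, not the study's measurements.
MEAN_HTS_DB = {40: 0.0, 50: 2.0, 60: 5.0, 70: 9.0}

def corrected_threshold(measured_db, noise_db):
    """Subtract the expected noise-induced shift from a measured threshold."""
    levels = sorted(MEAN_HTS_DB)
    # Use the nearest tabulated noise level at or below the measurement.
    nearest = max((lv for lv in levels if lv <= noise_db), default=levels[0])
    return measured_db - MEAN_HTS_DB[nearest]
```

In quiet conditions the correction is zero, so the method degrades gracefully to ordinary pure tone screening.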
Hamada, Nobuyuki; Fujimichi, Yuki
2014-01-01
Radiation exposure causes cancer and non-cancer health effects, each of which differs greatly in the shape of the dose–response curve, latency, persistency, recurrence, curability, fatality and impact on quality of life. In recent decades, for dose limitation purposes, the International Commission on Radiological Protection has divided such diverse effects into tissue reactions (formerly termed non-stochastic and deterministic effects) and stochastic effects. On the one hand, effective dose limits aim to reduce the risks of stochastic effects (cancer/heritable effects) and are based on the detriment-adjusted nominal risk coefficients, assuming a linear-non-threshold dose response and a dose and dose rate effectiveness factor of 2. On the other hand, equivalent dose limits aim to avoid tissue reactions (vision-impairing cataracts and cosmetically unacceptable non-cancer skin changes) and are based on a threshold dose. However, the boundary between these two categories is becoming vague. Thus, we review the changes in radiation effect classification, dose limitation concepts, and the definition of detriment and threshold. Then, the current situation is overviewed focusing on (i) stochastic effects with a threshold, (ii) tissue reactions without a threshold, (iii) target organs/tissues for circulatory disease, (iv) dose levels for limitation of cancer risks vs prevention of non-life-threatening tissue reactions vs prevention of life-threatening tissue reactions, (v) mortality or incidence of thyroid cancer, and (vi) the detriment for tissue reactions. For future discussion, one approach is suggested that classifies radiation effects according to whether effects are life threatening, and radiobiological research needs are also briefly discussed. PMID:24794798
Quantitative measurement of binocular color fusion limit for non-spectral colors.
Jung, Yong Ju; Sohn, Hosik; Lee, Seong-il; Ro, Yong Man; Park, Hyun Wook
2011-04-11
Human perception becomes difficult in the event of binocular color fusion when the color difference presented for the left and right eyes exceeds a certain threshold value, known as the binocular color fusion limit. This paper discusses the binocular color fusion limit for non-spectral colors within the color gamut of a conventional LCD 3DTV. We performed experiments to measure the color fusion limit for eight chromaticity points sampled from the CIE 1976 chromaticity diagram. A total of 2480 trials were recorded for a single observer. By analyzing the results, the color fusion limit was quantified by ellipses in the chromaticity diagram. The semi-minor axis of the ellipses ranges from 0.0415 to 0.0923 in terms of the Euclidean distance in the u′v′ chromaticity diagram and the semi-major axis ranges from 0.0640 to 0.1560. These eight ellipses are drawn on the chromaticity diagram. © 2011 Optical Society of America
When Is a Sprint a Sprint? A Review of the Analysis of Team-Sport Athlete Activity Profile
Sweeting, Alice J.; Cormack, Stuart J.; Morgan, Stuart; Aughey, Robert J.
2017-01-01
The external load of a team-sport athlete can be measured by tracking technologies, including global positioning systems (GPS), local positioning systems (LPS), and vision-based systems. These technologies allow for the calculation of displacement, velocity and acceleration during a match or training session. The accurate quantification of these variables is critical so that meaningful changes in team-sport athlete external load can be detected. High-velocity running, including sprinting, may be important for specific team-sport match activities, including evading an opponent or creating a shot on goal. Maximal accelerations are energetically demanding and frequently occur from a low velocity during team-sport matches. Despite extensive research, conjecture exists regarding the thresholds by which to classify the high velocity and acceleration activity of a team-sport athlete. There is currently no consensus on the definition of a sprint or acceleration effort, even within a single sport. The aim of this narrative review was to examine the varying velocity and acceleration thresholds reported in athlete activity profiling. The purposes of this review were therefore to (1) identify the various thresholds used to classify high-velocity or -intensity running plus accelerations; (2) examine the impact of individualized thresholds on reported team-sport activity profile; (3) evaluate the use of thresholds for court-based team-sports and; (4) discuss potential areas for future research. The presentation of velocity thresholds as a single value, with equivocal qualitative descriptors, is confusing when data lies between two thresholds. In Australian football, sprint efforts have been defined as activity >4.00 or >4.17 m·s−1. Acceleration thresholds differ across the literature, with >1.11, 2.78, 3.00, and 4.00 m·s−2 utilized across a number of sports. 
It is difficult to compare literature on field-based sports due to inconsistencies in velocity and acceleration thresholds, even within a single sport. Velocity and acceleration thresholds have been determined from physical capacity tests. Limited research exists on the classification of velocity and acceleration data by female team-sport athletes. Alternatively, data mining techniques may be used to report team-sport athlete external load, without the requirement of arbitrary or physiologically defined thresholds. PMID:28676767
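Whichever threshold is adopted, the classification itself is mechanically simple, which is why the choice of threshold so strongly shapes the reported activity profile. A sketch of sample labeling and effort counting; the band boundaries are illustrative assumptions drawn from the ranges quoted above:

```python
def classify_velocity(v, sprint_threshold=4.17):
    """Label a velocity sample (m/s).

    Thresholds vary across the literature (e.g. sprint efforts defined as
    >4.00 or >4.17 m/s in Australian football), so these bands are
    illustrative only.
    """
    if v > sprint_threshold:
        return "sprint"
    if v > 3.0:  # assumed high-velocity band boundary
        return "high-velocity running"
    return "low-velocity"

def count_sprint_efforts(velocities, sprint_threshold=4.17):
    """Count distinct efforts: a run of consecutive samples above the
    threshold counts as one effort."""
    efforts, in_effort = 0, False
    for v in velocities:
        above = v > sprint_threshold
        if above and not in_effort:
            efforts += 1
        in_effort = above
    return efforts
```

Shifting `sprint_threshold` by a fraction of a metre per second can change both the labels and the effort counts, which is the comparability problem the review describes.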
75 FR 24450 - Early Retiree Reinsurance Program
Federal Register 2010, 2011, 2012, 2013, 2014
2010-05-05
... option'' in the regulation when discussing the fact that there is only one cost threshold and cost limit... amounts, and to apply the cost threshold and cost limit, to periods of time that are 12 months in duration... program provides reimbursement to participating employment-based plans for a portion of the cost of health...
Mercury Dispersion Modeling And Purge Ventilation Stack Height Determination For Tank 40H
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rivera-Giboyeaux, A.
2017-05-19
The SRNL Atmospheric Technologies Group performed an analysis of mercury emissions from the H-Tank Farm - Tank 40 ventilation system exhaust in order to assess whether the Short Term Exposure Limit (STEL) or Threshold Limit Value (TLV) levels for mercury will be exceeded during bulk sludge slurry mixing and sludge removal operations. The American Meteorological Society/Environmental Protection Agency Regulatory Model (AERMOD) was used as the main dispersion modelling tool for this analysis. The results indicated that a 45-foot stack is sufficient to raise the plume centerline from the Tank 40 release to prevent mercury exposure problems for any of the stack discharge scenarios provided. However, a 42-foot stack at Tank 40 is sufficient to prevent mercury exposure concerns in all emission scenarios except the 50 mg/m3 release. At a 42-foot stack height, values exceeding the exposure standards are only measured on receptors located above 34 feet.
Rodriguez-Martinez, Carlos E; Sossa-Briceño, Monica P; Castro-Rodriguez, Jose A
2018-05-01
Asthma educational interventions have been shown to improve several clinically and economically important outcomes. However, these interventions are costly in themselves and could lead to even higher disease costs. A cost-effectiveness threshold analysis would be helpful in determining the threshold value of the cost of educational interventions, leading to these interventions being cost-effective. The aim of the present study was to perform a cost-effectiveness threshold analysis to determine the level at which the cost of a pediatric asthma educational intervention would be cost-effective and cost-saving. A Markov-type model was developed in order to estimate costs and health outcomes of a simulated cohort of pediatric patients with persistent asthma treated over a 12-month period. Effectiveness parameters were obtained from a single uncontrolled before-and-after study performed with Colombian asthmatic children. Cost data were obtained from official databases provided by the Colombian Ministry of Health. The main outcome was the variable "quality-adjusted life-years" (QALYs). A deterministic threshold sensitivity analysis showed that the asthma educational intervention will be cost-saving to the health system if its cost is under US$513.20. Additionally, the analysis showed that the cost of the intervention would have to be below US$967.40 in order to be cost-effective. This study identified the level at which the cost of a pediatric asthma educational intervention will be cost-effective and cost-saving for the health system in Colombia. Our findings could be a useful aid for decision makers in efficiently allocating limited resources when planning asthma educational interventions for pediatric patients.
[Research on the threshold of Chl-a in Lake Taihu based on microcystins].
Wei, Dai-chun; Su, Jing; Ji, Dan-feng; Fu, Xiao-yong; Wang, Ji; Huo, Shou-liang; Cui, Chi-fei; Tang, Jun; Xi, Bei-dou
2014-12-01
Water samples were collected in Lake Taihu from June to October in 2013 in order to investigate the threshold of chlorophyll a (Chl-a). The concentrations of three microcystins isomers (MC-LR, MC-RR, MC-YR) were detected by means of solid phase extraction and high performance liquid chromatography-tandem mass spectrometry. The correlations between various MCs and eutrophication factors, for instance of total nitrogen (TN), total phosphorus (TP), chlorophyll a, permanganate index etc were analyzed. The threshold of Chl-a was studied based on the relationships between MC-LR, MCs and Chl-a. The results showed that Lake Taihu was severely polluted by MCs and its spatial distribution could be described as follows: the concentration in Meiliang Bay was the highest, followed by Gonghu Bay and Western Lake, and Lake Center; the least polluted areas were in Lake Xuhu and Southern Lake. The concentration of MC-LR was the highest among the 3 MCs. The correlation analysis indicated that MC-LR, MC-RR, MC-YR and MCs had very positive correlation with permanganate index, TN, TP and Chl-a (P < 0.01). The threshold value of Chl-a was 12.26 mg x m(-3) according to the standard thresholds of MC-LR and MCs in drinking water. The threshold value of Chl-a in Lake Taihu was very close to the standard in the State of North Carolina, which demonstrated that the threshold value provided in this study was reasonable.
Bilevel thresholding of sliced image of sludge floc.
Chu, C P; Lee, D J
2004-02-15
This work examined the feasibility of employing various thresholding algorithms to determine the optimal bilevel thresholding value for estimating the geometric parameters of sludge flocs from microtome-sliced images and from confocal laser scanning microscope images. Morphological information extracted from images depends on the bilevel thresholding value. According to the evaluation on the luminescence-inverted images and fractal curves (quadric Koch curve and Sierpinski carpet), Otsu's method yields more stable performance than other histogram-based algorithms and is chosen to obtain the porosity. The maximum convex perimeter method, however, can probe the shapes and spatial distribution of the pores among the biomass granules in real sludge flocs. A combined algorithm is recommended for probing the sludge floc structure.
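Otsu's method, favored here for its stability, selects the bilevel threshold that maximizes the between-class variance of the gray-level histogram. A minimal sketch of that criterion (not the authors' implementation):

```python
import numpy as np

def otsu_threshold(image):
    """Return the gray level (0-255) maximizing between-class variance."""
    hist, _ = np.histogram(image.ravel(), bins=256, range=(0, 256))
    p = hist / hist.sum()               # gray-level probabilities
    omega = np.cumsum(p)                # class-0 probability per threshold
    mu = np.cumsum(p * np.arange(256))  # cumulative mean gray level
    mu_t = mu[-1]                       # global mean
    # Between-class variance for every candidate threshold; 0/0 at the
    # histogram tails is suppressed and zeroed out.
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    return int(np.argmax(np.nan_to_num(sigma_b)))
```

Pixels at or below the returned level form one class; on a strongly bimodal image the threshold falls between the two modes.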
CHANGES IN THE ANAEROBIC THRESHOLD IN AN ANNUAL CYCLE OF SPORT TRAINING OF YOUNG SOCCER PLAYERS
Andrzejewski, M.; Wieczorek, A.; Barinow-Wojewódzki, A.; Jadczak, Ł.; Adrian, S.; Pietrzak, M.; Wieczorek, S.
2013-01-01
The aim of the study was to assess changes in the anaerobic threshold of young soccer players in an annual training cycle. A group of highly trained 15-18 year old players of KKS Lech Poznań were tested. The tests included an annual training macrocycle, and its individual stages resulted from the time structure of the sports training. In order to assess the level of exercise capacities of the players, a field exercise test of increasing intensity was carried out on a soccer pitch. The test made it possible to determine the 4 millimolar lactate threshold (T LA 4 mmol · l-1) on the basis of the lactate concentration in blood [LA], to establish the threshold running speed and the threshold heart rate [HR]. The threshold running speed at the level of the 4 millimolar lactate threshold was established using the two-point form of the equation of a straight line. The obtained indicators of the threshold running speed allowed for precise establishment of effort intensity used in individual training in developing aerobic endurance. In order to test the significance of differences in mean values between four dates of tests, a non-parametric Friedman ANOVA test was used. The significance of differences between consecutive dates of tests was determined using a post-hoc Friedman ANOVA test. The tests showed significant differences in values of selected indicators determined at the anaerobic threshold in various stages of an annual training cycle of young soccer players. The most beneficial changes in terms of the threshold running speed were noted on the fourth date of tests, when the participants had the highest values of 4.01 m · s-1 for older juniors, and 3.80 m · s-1 for younger juniors. This may be indicative of effective application of an individualized programme of training loads and of good preparation of teams for competition in terms of players’ aerobic endurance. PMID:24744480
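The two-point form of the equation of a straight line used here amounts to linear interpolation between the two test stages bracketing the 4 mmol·l-1 lactate value. A sketch; the stage speeds and lactate values in the example are hypothetical:

```python
def threshold_speed(v1, la1, v2, la2, la_threshold=4.0):
    """Interpolate the running speed (m/s) at a target blood lactate value.

    (v1, la1) and (v2, la2) are the test stages bracketing the threshold;
    speeds in m/s, lactate in mmol/l.  Two-point form of a straight line.
    """
    if not la1 <= la_threshold <= la2:
        raise ValueError("threshold lactate must lie between the two stages")
    return v1 + (v2 - v1) * (la_threshold - la1) / (la2 - la1)

# Hypothetical stages: 3.6 m/s at 3.1 mmol/l and 4.0 m/s at 5.2 mmol/l.
v_threshold = threshold_speed(3.6, 3.1, 4.0, 5.2)
```

The resulting threshold speed is then used directly to prescribe individualized training intensities for aerobic endurance work.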
Zinn, Caryn; Rush, Amy; Johnson, Rebecca
2018-01-01
Objective The low-carbohydrate, high-fat (LCHF) diet is becoming increasingly employed in clinical dietetic practice as a means to manage many health-related conditions. Yet, it continues to remain contentious in nutrition circles due to a belief that the diet is devoid of nutrients and concern around its saturated fat content. This work aimed to assess the micronutrient intake of the LCHF diet under two conditions of saturated fat thresholds. Design In this descriptive study, two LCHF meal plans were designed for two hypothetical cases representing the average Australian male and female weight-stable adult. National documented heights, a body mass index of 22.5 to establish weight and a 1.6 activity factor were used to estimate total energy intake using the Schofield equation. Carbohydrate was limited to <130 g, protein was set at 15%–25% of total energy and fat supplied the remaining calories. One version of the diet aligned with the national saturated fat guideline threshold of <10% of total energy and the other included saturated fat ad libitum. Primary outcomes The primary outcomes included all micronutrients, which were assessed using FoodWorks dietary analysis software against national Australian/New Zealand nutrient reference value (NRV) thresholds. Results All of the meal plans exceeded the minimum NRV thresholds, apart from iron in the female meal plans, which achieved 86%–98% of the threshold. Saturated fat intake was logistically unable to be reduced below the 10% threshold for the male plan but exceeded the threshold by 2 g (0.6%). Conclusion Despite macronutrient proportions not aligning with current national dietary guidelines, a well-planned LCHF meal plan can be considered micronutrient replete. This is an important finding for health professionals, consumers and critics of LCHF nutrition, as it dispels the myth that these diets are suboptimal in their micronutrient supply. 
As with any diet, for optimal nutrient achievement, meals need to be well formulated. PMID:29439004
Wang, Rui-Ping; Jiang, Yong-Gen; Zhao, Gen-Ming; Guo, Xiao-Qin; Michael, Engelgau
2017-12-01
The China Infectious Disease Automated-alert and Response System (CIDARS) was successfully implemented and became operational nationwide in 2008. The CIDARS plays an important role in, and has been integrated into, the routine outbreak monitoring efforts of the Center for Disease Control (CDC) at all levels in China. In the CIDARS, thresholds were initially determined using the "Mean+2SD" method, which has limitations. This study compared the performance of optimized thresholds defined using the "Mean+2SD" method to the performance of 5 novel algorithms in order to select the optimal "Outbreak Gold Standard" (OGS) and corresponding thresholds for outbreak detection. Data for infectious disease were organized by calendar week and year. The "Mean+2SD", C1, C2, moving average (MA), seasonal model (SM), and cumulative sum (CUSUM) algorithms were applied. Outbreak signals for the predicted value (Px) were calculated using a percentile-based moving window. When the outbreak signals generated by an algorithm were in line with a Px-generated outbreak signal for each week, this Px was then defined as the optimized threshold for that algorithm. In this study, six infectious diseases were selected and classified into TYPE A (chickenpox and mumps), TYPE B (influenza and rubella) and TYPE C [hand foot and mouth disease (HFMD) and scarlet fever]. Optimized thresholds for chickenpox (P55), mumps (P50), influenza (P40, P55, and P75), rubella (P45 and P75), HFMD (P65 and P70), and scarlet fever (P75 and P80) were identified. The C1, C2, CUSUM, SM, and MA algorithms were appropriate for TYPE A. All 6 algorithms were appropriate for TYPE B. The C1 and CUSUM algorithms were appropriate for TYPE C. It is critical to incorporate more flexible algorithms as OGS into the CIDARS and to identify the proper OGS and corresponding recommended optimized threshold for different infectious disease types.
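The percentile-based moving-window thresholds and the legacy "Mean+2SD" rule compared in this study can both be sketched directly. In the sketch below, the historical window contents and the chosen Px are illustrative assumptions:

```python
import statistics

def percentile(values, p):
    """Nearest-rank style percentile of historical weekly counts."""
    s = sorted(values)
    k = max(0, min(len(s) - 1, round(p / 100 * (len(s) - 1))))
    return s[k]

def outbreak_signal(current_count, window, px=50):
    """Flag an outbreak when the current weekly case count exceeds the
    optimized percentile threshold Px, computed over a moving window of
    counts from comparable calendar weeks in earlier years."""
    return current_count > percentile(window, px)

def mean_2sd_threshold(window):
    """The early-stage CIDARS threshold: historical mean plus 2 SD."""
    return statistics.mean(window) + 2 * statistics.stdev(window)
```

A low Px (e.g. P50 for mumps) trades sensitivity for earlier signals, while the Mean+2SD rule is pulled upward by skewed historical counts, which is one of the limitations the study addresses.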
Cloud Detection of Optical Satellite Images Using Support Vector Machine
NASA Astrophysics Data System (ADS)
Lee, Kuan-Yi; Lin, Chao-Hung
2016-06-01
Cloud cover is generally present in optical remote-sensing images, which limits the usage of acquired images and increases the difficulty of data analysis, such as image compositing, correction of atmosphere effects, calculation of vegetation indices, land cover classification, and land cover change detection. In previous studies, thresholding has been a common and useful method in cloud detection. However, a selected threshold is usually suitable only for certain cases or local study areas, and it may fail in other cases. In other words, thresholding-based methods are data-sensitive. Moreover, there are many exceptions to handle, and the environment changes dynamically, so using the same threshold value on various data is not effective. In this study, a threshold-free method based on a Support Vector Machine (SVM) is proposed, which can avoid the abovementioned problems. A statistical model is adopted to detect clouds instead of a subjective thresholding-based method, which is the main idea of this study. The features used in a classifier are the key to a successful classification. The Automatic Cloud Cover Assessment (ACCA) algorithm, which is based on physical characteristics of clouds, is used to distinguish clouds from other objects. Similarly, the Fmask algorithm (Zhu et al., 2012) uses many thresholds and criteria to screen clouds, cloud shadows, and snow. Therefore, the feature extraction is based on the ACCA algorithm and Fmask. Spatial and temporal information are also important for satellite images; consequently, the co-occurrence matrix and temporal variance with uniformity of the major principal axis are used in the proposed method. We aim to classify images into three groups: cloud, non-cloud, and the others. In experiments, images acquired by the Landsat 7 Enhanced Thematic Mapper Plus (ETM+) and images containing the landscapes of agriculture, snow area, and island are tested.
Experimental results demonstrate that the detection accuracy of the proposed method is better than that of related methods.
Low latency counter event indication
Gara, Alan G [Mount Kisco, NY; Salapura, Valentina [Chappaqua, NY
2008-09-16
A hybrid counter array device for counting events with interrupt indication includes a first counter portion comprising N counter devices, each for counting signals representing event occurrences and providing a first count value representing lower order bits. An overflow bit device associated with each respective counter device is additionally set in response to an overflow condition. The hybrid counter array includes a second counter portion comprising a memory array device having N addressable memory locations in correspondence with the N counter devices, each addressable memory location for storing a second count value representing higher order bits. An operatively coupled control device monitors each associated overflow bit device and initiates incrementing a second count value stored at a corresponding memory location in response to a respective overflow bit being set. The incremented second count value is compared to an interrupt threshold value stored in a threshold register, and, when the second counter value is equal to the interrupt threshold value, a corresponding "interrupt arm" bit is set to enable a fast interrupt indication. On a subsequent roll-over of the lower bits of that counter, the interrupt will be fired.
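The arm-then-fire behavior described in this abstract can be modeled in software. The sketch below is an illustrative model only; the counter width and interrupt threshold are assumptions, not values from the patent:

```python
class HybridCounter:
    """Model of a hybrid counter: low-order bits in a fast counter,
    high-order count held in memory, with an 'interrupt arm' bit that
    makes the interrupt fire on the next roll-over of the low-order bits."""

    def __init__(self, low_bits=8, interrupt_threshold=2):
        self.low_max = 1 << low_bits  # roll-over point of low-order bits
        self.low = 0                  # low-order counter value
        self.high = 0                 # high-order count held in memory
        self.threshold = interrupt_threshold
        self.armed = False            # 'interrupt arm' bit
        self.fired = False

    def event(self):
        """Count one event occurrence."""
        self.low += 1
        if self.low == self.low_max:  # overflow of the low-order bits
            self.low = 0
            if self.armed:            # armed earlier: fire on this roll-over
                self.fired = True
            self.high += 1            # control logic bumps the memory count
            if self.high == self.threshold:
                self.armed = True     # arm the fast interrupt indication
```

With an 8-bit low counter and a threshold of 2, the arm bit is set on the roll-over at event 512 and the interrupt fires on the subsequent roll-over at event 768.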
Low latency counter event indication
Gara, Alan G.; Salapura, Valentina
2010-08-24
A hybrid counter array device for counting events with interrupt indication includes a first counter portion comprising N counter devices, each for counting signals representing event occurrences and providing a first count value representing lower order bits. An overflow bit device associated with each respective counter device is additionally set in response to an overflow condition. The hybrid counter array includes a second counter portion comprising a memory array device having N addressable memory locations in correspondence with the N counter devices, each addressable memory location for storing a second count value representing higher order bits. An operatively coupled control device monitors each associated overflow bit device and initiates incrementing a second count value stored at a corresponding memory location in response to a respective overflow bit being set. The incremented second count value is compared to an interrupt threshold value stored in a threshold register, and, when the second counter value is equal to the interrupt threshold value, a corresponding "interrupt arm" bit is set to enable a fast interrupt indication. On a subsequent roll-over of the lower bits of that counter, the interrupt will be fired.
NASA Astrophysics Data System (ADS)
Attal, M.; Hobley, D.; Cowie, P. A.; Whittaker, A. C.; Tucker, G. E.; Roberts, G. P.
2008-12-01
Prominent convexities in channel long profiles, or knickzones, are an expected feature of bedrock rivers responding to a change in the rate of base level fall driven by tectonic processes. In response to a change in relative uplift rate, the simple stream power model, which is characterized by a slope exponent equal to unity, predicts that knickzone retreat velocity is independent of uplift rate and that channel slope and uplift rate are linearly related along the reaches which have re-equilibrated with respect to the new uplift condition (i.e., downstream of the profile convexity). However, a threshold for erosion has been shown to introduce non-linearity between slope and uplift rate when associated with stochastic rainfall variability. We present field data regarding the height and retreat rates of knickzones in rivers upstream of active normal faults in the central Apennines, Italy, where excellent constraints exist on the temporal and spatial history of fault movement. The knickzones developed in response to an independently constrained increase in fault throw rate at 0.75 Ma. Channel characteristics and Shields stress values suggest that these rivers lie close to the detachment-limited end-member, but the knickzone retreat velocity (calculated from the time since fault acceleration) has been found to scale systematically with the known fault throw rates, even after accounting for differences in drainage area. In addition, the relationship between measured channel slope and relative uplift rate is non-linear, suggesting that a threshold for erosion might be effective in this setting. We use the Channel-Hillslope Integrated Landscape Development (CHILD) model to quantify the effect of such a threshold on river long profile development and knickzone retreat in response to tectonic perturbation.
In particular, we investigate the evolution of three Italian catchments of different sizes, characterized by contrasting degrees of tectonic perturbation, using physically realistic threshold values based on sediment grain-size measurements along the studied rivers. We show that the threshold alone cannot account for field observations of the size, position, and retreat rate of profile convexities, and that other factors neglected by the simple stream power law (e.g., the role of sediment) have to be invoked to explain the discrepancy between field observations and modeled topographies.
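The slope-uplift behavior discussed above can be made concrete with the steady-state form of the detachment-limited stream power law. The sketch below folds the erosion threshold in as a constant offset psi, a simplification of the stochastic-threshold models the abstract refers to; all parameter values are illustrative.

```python
def steady_state_slope(U, K, A, m=0.5, n=1.0, psi=0.0):
    """Steady-state channel slope for the detachment-limited stream power
    model with a constant erosion threshold psi: uplift balances erosion,
    U = K * A**m * S**n - psi, so S = ((U + psi) / (K * A**m)) ** (1/n)."""
    return ((U + psi) / (K * A ** m)) ** (1.0 / n)
```

With n = 1 and psi = 0, doubling the uplift rate exactly doubles the equilibrium slope (the linear prediction); a finite threshold breaks that proportionality.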
Leung, Gary N W; Ho, Emmie N M; Kwok, W Him; Leung, David K K; Tang, Francis P W; Wan, Terence S M; Wong, April S Y; Wong, Colton H F; Wong, Jenny K Y; Yu, Nola H
2007-09-07
Quantitative determination, particularly for threshold substances in biological samples, is much more demanding than qualitative identification. A proper assessment of any quantitative determination requires the measurement uncertainty (MU) associated with the determined value. The International Standard ISO/IEC 17025, "General requirements for the competence of testing and calibration laboratories", has more prescriptive requirements on the MU than the document it superseded, ISO/IEC Guide 25. Under both the 1999 and 2005 versions of the new standard, an estimation of the MU is mandatory for all quantitative determinations. To comply with the new requirement, a protocol was established in the authors' laboratory in 2001. The protocol has since evolved based on our practical experience, and a refined version was adopted in 2004. This paper describes our approach in establishing the MU, as well as some other important considerations, for the quantification of threshold substances in biological samples as applied in the area of doping control for horses. The testing of threshold substances can be viewed as a compliance test (or testing to a specified limit). As such, it should only be necessary to establish the MU at the threshold level. The steps in the "Bottom-Up" approach adopted by us are similar to those described in the EURACHEM/CITAC guide, "Quantifying Uncertainty in Analytical Measurement". They involve first specifying the measurand, including the relationship between the measurand and the input quantities upon which it depends. This is followed by identifying all applicable uncertainty contributions using a "cause and effect" diagram. The magnitude of each uncertainty component is then calculated and converted to a standard uncertainty. A recovery study is also conducted to determine if the method bias is significant and whether a recovery (or correction) factor needs to be applied.
All standard uncertainties with values greater than 30% of the largest one are then used to derive the combined standard uncertainty. Finally, an expanded uncertainty is calculated at the 99% one-tailed confidence level by multiplying the combined standard uncertainty by an appropriate coverage factor (k). A sample is considered positive if the determined concentration of the threshold substance exceeds its threshold by more than the expanded uncertainty. In addition, other important considerations, which can have a significant impact on quantitative analyses, are presented.
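The combination and compliance steps described above can be sketched as follows. The component values are invented, and k = 2.33 is used as the approximate coverage factor for 99% one-tailed coverage.

```python
import math

def expanded_uncertainty(std_uncertainties, k=2.33):
    """Combine standard uncertainties (root sum of squares), keeping only
    components greater than 30% of the largest one, then expand by the
    coverage factor k (k = 2.33 ~ 99% one-tailed coverage)."""
    largest = max(std_uncertainties)
    kept = [u for u in std_uncertainties if u > 0.3 * largest]
    combined = math.sqrt(sum(u * u for u in kept))
    return k * combined

def exceeds_threshold(measured, threshold, U):
    """Compliance decision: positive only if the measured concentration
    exceeds the threshold by more than the expanded uncertainty U."""
    return measured > threshold + U
```

The 30% screen drops components whose squared contribution is negligible, which simplifies the budget without materially changing the combined value.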
Cockburn, Neil; Kovacs, Michael
2016-01-01
CT perfusion (CTP)-derived cerebral blood flow (CBF) thresholds have been proposed as the optimal parameter for distinguishing the infarct core prior to reperfusion. Previous threshold-derivation studies have been limited by uncertainties introduced by infarct expansion between the acute phase of stroke and follow-up imaging, or by DWI lesion reversibility. In this study, a model is proposed for determining infarction CBF thresholds at a 3-hour ischemia time by comparing contemporaneously acquired CTP-derived CBF maps to 18F-FFMZ-PET imaging, with the objective of deriving a CBF threshold for infarction after 3 hours of ischemia. Endothelin-1 (ET-1) was injected into the brain of Duroc-Cross pigs (n = 11) through a burr hole in the skull. CTP images were acquired 10 and 30 minutes post ET-1 injection and then every 30 minutes for 150 minutes. 370 MBq of 18F-FFMZ was injected ~120 minutes post ET-1 injection, and PET images were acquired for 25 minutes starting ~155-180 minutes post ET-1 injection. CBF maps from each CTP acquisition were co-registered and converted into a median CBF map. The median CBF map was co-registered to blood volume maps for vessel exclusion, to an average CT image for grey/white matter segmentation, and to 18F-FFMZ-PET images for infarct delineation. Logistic regression and ROC analysis were performed on infarcted and non-infarcted pixel CBF values for each animal that developed infarction. Six of the eleven animals developed infarction. The mean CBF value corresponding to the optimal operating point of the ROC curves for the 6 animals was 12.6 ± 2.8 mL·min-1·100g-1 for infarction after 3 hours of ischemia. The porcine ET-1 model of cerebral ischemia is easier to implement than other large animal models of stroke, and performs similarly as long as CBF is monitored using CTP to prevent reperfusion. PMID:27347877
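One common way to define the "optimal operating point" of an ROC curve is to maximize Youden's J = sensitivity + specificity - 1. The sketch below applies it to invented pixel CBF values; the abstract does not specify the study's operating-point criterion, so the criterion and data here are assumptions for illustration.

```python
def optimal_cbf_threshold(infarct_cbf, healthy_cbf):
    """Pick the CBF cutoff maximizing Youden's J. Pixels with CBF at or
    below the cutoff are called infarct. Inputs are lists of per-pixel
    CBF values (mL/min/100g) for infarcted and non-infarcted tissue."""
    candidates = sorted(set(infarct_cbf + healthy_cbf))
    best_cut, best_j = None, -1.0
    for cut in candidates:
        sens = sum(v <= cut for v in infarct_cbf) / len(infarct_cbf)
        spec = sum(v > cut for v in healthy_cbf) / len(healthy_cbf)
        j = sens + spec - 1.0
        if j > best_j:
            best_j, best_cut = j, cut
    return best_cut, best_j
```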
Lee, Janet S; Yang, Jianing; Stockl, Karen M; Lew, Heidi; Solow, Brian K
2016-01-01
General eligibility criteria used by the Centers for Medicare & Medicaid Services (CMS) to identify patients for medication therapy management (MTM) services include having multiple chronic conditions, taking multiple Part D drugs, and being likely to incur annual drug costs that exceed a predetermined threshold. The performance of these criteria in identifying patients in greatest need of MTM services is unknown. Although there are numerous possible versions of MTM identification algorithms that satisfy these criteria, there are limited data that evaluate the performance of MTM services using eligibility thresholds representative of those used by the majority of Part D sponsors. To (a) evaluate the performance of the 2013 CMS MTM eligibility criteria thresholds in identifying Medicare Advantage Prescription Drug (MAPD) plan patients with at least 2 drug therapy problems (DTPs) relative to alternative criteria threshold levels and (b) identify additional patient risk factors significantly associated with the number of DTPs for consideration as potential future MTM eligibility criteria. All patients in the Medicare Advantage Part D population who had pharmacy eligibility as of December 31, 2013, were included in this retrospective cohort study. Study outcomes included 7 different types of DTPs: use of high-risk medications in the elderly, gaps in medication therapy, medication nonadherence, drug-drug interactions, duplicate therapy, drug-disease interactions, and brand-to-generic conversion opportunities. DTPs were identified for each member based on 6 months of most recent pharmacy claims data and 14 months of most recent medical claims data. Risk factors examined in this study included patient demographics and prior health care utilization in the most recent 6 months. Descriptive statistics were used to summarize patient characteristics and to evaluate unadjusted relationships between the average number of DTPs identified per patient and each risk factor. 
Quartile values identified in the study population for number of diseases, number of drugs, and annual spend were used as potential new criteria thresholds, resulting in 27 new MTM criteria combinations. The performance of each eligibility criterion was evaluated using sensitivity, specificity, positive predictive values (PPVs), and negative predictive values (NPVs). Patients identified with at least 2 DTPs were defined as those who would benefit from MTM services and were used as the gold standard. As part of a sensitivity analysis, patients identified with at least 1 DTP were used as the gold standard. Lastly, a multivariable negative binomial regression model was used to evaluate the relationship between each risk factor and the number of identified DTPs per patient while controlling for the patients' number of drugs, number of chronic diseases, and annual drug spend. A total of 2,578,336 patients were included in the study. The sensitivity, specificity, PPV, and NPV of CMS MTM criteria for the 2013 plan year were 15.3%, 95.6%, 51.3%, and 78.8%, respectively. Sensitivity and PPV improved when the drug count threshold increased from 8 to 10, and when the annual drug cost decreased from $3,144 to $2,239 or less. Results were consistent when at least 1 DTP was used as the gold standard. The adjusted rate of DTPs was significantly greater among patients identified with higher drug and disease counts, annual drug spend, and prior ER or outpatient or hospital visits. Patients with higher median household incomes who were male, younger, or white had significantly lower rates of DTPs. The performance of MTM eligibility criteria can be improved by increasing the threshold values for drug count while decreasing the threshold value for annual drug spend. Furthermore, additional risk factors, such as a recent ER or hospital visit, may be considered as potential MTM eligibility criteria.
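The four performance measures used to score each eligibility criterion follow directly from a 2x2 confusion table (the gold standard being patients with at least 2 drug therapy problems). A minimal sketch with invented counts:

```python
def screening_performance(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV, and NPV from confusion-table counts:
    tp/fp = criterion-positive patients with/without >= 2 DTPs,
    fn/tn = criterion-negative patients with/without >= 2 DTPs."""
    return {
        "sensitivity": tp / (tp + fn),  # DTP patients captured by criterion
        "specificity": tn / (tn + fp),  # non-DTP patients correctly excluded
        "ppv": tp / (tp + fp),          # criterion-positives who have DTPs
        "npv": tn / (tn + fn),          # criterion-negatives who do not
    }
```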
Sparing of normal urothelium in hexyl-aminolevulinate-mediated photodynamic therapy
NASA Astrophysics Data System (ADS)
Vaucher, Laurent; Jichlinski, Patrice; Lange, Norbert; Ritter-Schenk, Celine; van den Bergh, Hubert; Kucera, Pavel
2005-04-01
This work determines, in an in vitro porcine urothelium model, the threshold values of parameters such as photosensitizer concentration, irradiation parameters, and production of reactive oxygen species, in order to control the damage to normal urothelium and spare about 50% of the normal mucosa. For a three-hour HAL incubation time, these threshold values were 0.75 J/cm2 at 75 mW/cm2 or 0.15 J/cm2 at 30 mW/cm2 with blue light, and 0.55 J/cm2 at 30 mW/cm2 with white light. This means that for identical fluence rates, the threshold value for white light irradiation may be about 3 times higher than for blue light irradiation.
Dynamics of a network-based SIS epidemic model with nonmonotone incidence rate
NASA Astrophysics Data System (ADS)
Li, Chun-Hsien
2015-06-01
This paper studies the dynamics of a network-based SIS epidemic model with nonmonotone incidence rate. This type of nonlinear incidence can be used to describe the psychological effect of certain diseases spread in a contact network at high infective levels. We first find a threshold value for the transmission rate. This value completely determines the dynamics of the model and interestingly, the threshold is not dependent on the functional form of the nonlinear incidence rate. Furthermore, if the transmission rate is less than or equal to the threshold value, the disease will die out. Otherwise, it will be permanent. Numerical experiments are given to illustrate the theoretical results. We also consider the effect of the nonlinear incidence on the epidemic dynamics.
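For the standard SIS model on an uncorrelated network, the heterogeneous mean-field epidemic threshold is lambda_c = <k>/<k^2>. The sketch below computes that quantity from a degree distribution; the paper's threshold plays the same role (and is likewise independent of the incidence function's form), but its exact expression is not given in the abstract, so this standard value is used for illustration.

```python
def epidemic_threshold(degree_counts):
    """Heterogeneous mean-field SIS threshold lambda_c = <k> / <k^2>
    on an uncorrelated network. degree_counts maps degree -> number of
    nodes with that degree."""
    n = sum(degree_counts.values())
    k1 = sum(k * c for k, c in degree_counts.items()) / n       # <k>
    k2 = sum(k * k * c for k, c in degree_counts.items()) / n   # <k^2>
    return k1 / k2
```

Degree heterogeneity lowers the threshold: a network with hubs becomes permanent at smaller transmission rates than a regular network with the same mean degree.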
Continuous Seismic Threshold Monitoring
1992-05-31
Continuous threshold monitoring is a technique for using a seismic network to monitor a geographical area continuously in time. The method provides...area. Two approaches are presented. Site-specific monitoring: By focusing a seismic network on a specific target site, continuous threshold monitoring...recorded events at the site. We define the threshold trace for the network as the continuous time trace of computed upper magnitude limits of seismic
Abejón, David; Rueda, Pablo; del Saz, Javier; Arango, Sara; Monzón, Eva; Gilsanz, Fernando
2015-04-01
Neurostimulation is the process and technology derived from the application of electricity with different parameters to activate or inhibit nerve pathways. Pulse width (Pw) is the duration of each electrical impulse and, along with amplitude (I), determines the total energy charge of the stimulation. The aim of the study was to test Pw values to find the most adequate pulse widths in rechargeable systems to obtain the largest coverage of the painful area, the most comfortable paresthesia, and the greatest patient satisfaction. A study of the parameters was performed, varying Pw while maintaining a fixed frequency at 50 Hz. Data on perception threshold (Tp), discomfort threshold (Td), and therapeutic threshold (Tt) were recorded, applying 14 increasing Pw values ranging from 50 µsec to 1000 µsec. Lastly, the behavior of the therapeutic range (TR), the coverage of the painful area, the subjective patient perception of paresthesia, and the degree of patient satisfaction were assessed. The findings after analyzing the different thresholds were as follows: when varying the Pw, the differences obtained at each threshold (Tp, Tt, and Td) were statistically significant (p < 0.05). The differences among the resulting Tp values and among the resulting Tt values were statistically significant when varying Pw from 50 up to 600 µsec (p < 0.05). For Pw levels of 600 µsec and up, no differences were observed in these thresholds. In the case of Td, significant differences existed as Pw increased from 50 to 700 µsec (p ≤ 0.05). The coverage increased in a statistically significant way (p < 0.05) from Pw values of 50 µsec to 300 µsec. Good or very good subjective perception was shown at about Pw 300 µsec. The patient paresthesia coverage was introduced as an extra variable in the chronaxie-rheobase curve, allowing the adjustment of Pw values for optimal programming.
Patient coverage against the classic chronaxie-rheobase curve is represented on three axes: an extra axis (z) is added, multiplying each combination of Pw value and amplitude by the percentage of coverage corresponding to those values. Using this comparison of the chronaxie-rheobase curve vs. coverage, maximum Pw values are obtained that differ from those obtained by classic methods. © 2014 International Neuromodulation Society.
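The chronaxie-rheobase analysis above builds on the classic Weiss strength-duration relation, in which the threshold amplitude falls with pulse width toward the rheobase. A minimal sketch (parameter values illustrative):

```python
def threshold_current(pw_us, rheobase, chronaxie_us):
    """Weiss strength-duration relation: I(Pw) = I_rh * (1 + chronaxie / Pw),
    with Pw and chronaxie in microseconds and rheobase in the amplitude's
    units. By definition, at Pw = chronaxie the threshold is twice the
    rheobase."""
    return rheobase * (1.0 + chronaxie_us / pw_us)
```

This is why widening the pulse lowers the amplitude needed at each threshold, the trade-off the study exploits when programming rechargeable systems.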
Scaling Laws for NanoFET Sensors
NASA Astrophysics Data System (ADS)
Wei, Qi-Huo; Zhou, Fu-Shan
2008-03-01
In this paper, we report our numerical studies of the scaling laws for nanoplate field-effect transistor (FET) sensors, modeling the nanoplates as random resistor networks. Nanowire/tube FETs are included as the limiting cases where the device width becomes small. Computer simulations show that the field-effect strength exerted by the binding molecules has a significant impact on the scaling behaviors. When the field-effect strength is small, nanoFETs show little size and shape dependence. In contrast, when the field-effect strength becomes stronger, there exists a lower detection threshold for charge-accumulation FETs and an upper detection threshold for charge-depletion FET sensors. At these thresholds, the nanoFET devices undergo a transition between low and high sensitivity. These thresholds may set the detection limits of nanoFET sensors. We propose to eliminate these detection thresholds by employing devices with a very short source-drain distance and a large width.
Sazykina, Tatiana G
2018-02-01
Model predictions of population response to chronic ionizing radiation (endpoint 'morbidity') were made for 11 species of warm-blooded animals differing in body mass and lifespan, from mice to elephants. Predictions were also made for 3 bird species (duck, pigeon, and house sparrow). Calculations were based on analytical solutions of the mathematical model, simulating a population response to low-LET ionizing radiation in an ecosystem with a limiting resource (Sazykina, Kryshev, 2016). Model parameters for different species were taken from biological and radioecological databases; allometric relationships were employed for estimating some parameter values. As a threshold of decreased health status in exposed populations ('health threshold'), a 10% reduction in the self-repairing capacity of organisms was suggested, associated with a decline in the ability to sustain environmental stresses. Results of the modeling demonstrate a general increase of population vulnerability to ionizing radiation in animal species of larger size and longevity. Populations of small widespread species (mice, house sparrow; body mass 20-50 g), which are characterized by intensive metabolism and short lifespan, have calculated 'health thresholds' at dose rates of about 6.5-7.5 mGy day⁻¹. Widespread animals with body mass 200-500 g (rat, common pigeon) demonstrate 'health threshold' values at 4-5 mGy day⁻¹. For populations of animals with body mass 2-5 kg (rabbit, fox, raccoon), the indicators of a 10% health decrease are in the range 2-3.4 mGy day⁻¹. For animals with body mass 40-100 kg (wolf, sheep, wild boar), thresholds are within 0.5-0.8 mGy day⁻¹; for herbivorous animals with body mass 200-300 kg (deer, horse), 0.5-0.6 mGy day⁻¹. The lowest health threshold was estimated for the elephant (body mass around 5000 kg): 0.1 mGy day⁻¹.
According to the model results, the differences in the population sensitivities of warm-blooded animal species to ionizing radiation depend mainly on the metabolic rate and longevity of the organisms, as well as on the individual radiosensitivity of biological tissues. The results of the 'health threshold' calculations are formulated as a graded scale of wildlife sensitivities to chronic radiation stress, ranging from potentially vulnerable to more resistant species. Further studies are needed to expand the scale of population sensitivities to radiation to other groups of wildlife: cold-blooded species, invertebrates, and plants. Copyright © 2017 Elsevier Ltd. All rights reserved.
An avoidance behavior model for migrating whale populations
NASA Astrophysics Data System (ADS)
Buck, John R.; Tyack, Peter L.
2003-04-01
A new model is presented for the avoidance behavior of migrating marine mammals in the presence of a noise stimulus. This model assumes that each whale will adjust its movement pattern near a sound source to maintain its exposure below its own individually specific maximum received sound-pressure level, called its avoidance threshold. The probability distribution function (PDF) of this avoidance threshold across individuals characterizes the migrating population. The avoidance threshold PDF may be estimated by comparing the distributions of migrating whales during playback and control conditions at their closest point of approach to the sound source. The proposed model was applied to the January 1998 experiment which placed a single acoustic source from the U.S. Navy SURTASS-LFA system in the migration corridor of grey whales off the California coast. This analysis found that the median avoidance threshold for this migrating grey whale population was 135 dB, with 90% confidence that the median threshold was within ±3 dB of this value. This value is less than the 141 dB value for 50% avoidance obtained when the 1984 "Probability of Avoidance" model of Malme et al. was applied to the same data. [Work supported by ONR.]
The (in)famous GWAS P-value threshold revisited and updated for low-frequency variants.
Fadista, João; Manning, Alisa K; Florez, Jose C; Groop, Leif
2016-08-01
Genome-wide association studies (GWAS) have long relied on proposed statistical significance thresholds to differentiate true positives from false positives. Although the genome-wide significance P-value threshold of 5 × 10⁻⁸ has become a standard for common-variant GWAS, it has not been updated to cope with the lower allele frequency spectrum used in many recent array-based GWAS and sequencing studies. Using a whole-genome- and whole-exome-sequencing data set of 2875 individuals of European ancestry from the Genetics of Type 2 Diabetes (GoT2D) project and a whole-exome-sequencing data set of 13,000 individuals from five ancestries from the GoT2D and T2D-GENES (Type 2 Diabetes Genetic Exploration by Next-generation sequencing in multi-Ethnic Samples) projects, we describe guidelines for genome- and exome-wide association P-value thresholds needed to correct for multiple testing, explaining the impact of linkage disequilibrium thresholds for distinguishing independent variants, minor allele frequency, and ancestry characteristics. We emphasize the advantage of studying recent genetic isolate populations when performing rare and low-frequency genetic association analyses, as the multiple testing burden is diminished due to higher genetic homogeneity.
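The conventional genome-wide cutoff is essentially a Bonferroni correction for roughly one million independent common variants; studies testing more (rarer, less correlated) variants need a proportionally stricter threshold. A minimal sketch:

```python
def genomewide_threshold(n_independent_tests, alpha=0.05):
    """Bonferroni-style P-value threshold: the family-wise error rate
    alpha divided by the effective number of independent variants.
    With ~1e6 independent common variants this recovers the
    conventional 5e-8 genome-wide significance level."""
    return alpha / n_independent_tests
```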
Do poison center triage guidelines affect healthcare facility referrals?
Benson, B E; Smith, C A; McKinney, P E; Litovitz, T L; Tandberg, W D
2001-01-01
The purpose of this study was to determine the extent to which poison center triage guidelines influence healthcare facility referral rates for acute, unintentional acetaminophen-only poisoning and acute, unintentional adult formulation iron poisoning. Managers of US poison centers were interviewed by telephone to determine their center's triage threshold value (mg/kg) for acute iron and acute acetaminophen poisoning in 1997. Triage threshold values and healthcare facility referral rates were fit to a univariate logistic regression model for acetaminophen and iron using maximum likelihood estimation. Triage threshold values ranged from 120-201 mg/kg (acetaminophen) and 16-61 mg/kg (iron). Referral rates ranged from 3.1% to 24% (acetaminophen) and 3.7% to 46.7% (iron). There was a statistically significant inverse relationship between the triage value and the referral rate for acetaminophen (p < 0.001) and iron (p = 0.0013). The model explained 31.7% of the referral variation for acetaminophen but only 4.1% of the variation for iron. There is great variability in poison center triage values and referral rates for iron and acetaminophen poisoning. Guidelines can account for a meaningful proportion of referral variation. Their influence appears to be substance dependent. These data suggest that efforts to determine and utilize the highest, safe, triage threshold value could substantially decrease healthcare costs for poisonings as long as patient medical outcomes are not compromised.
[Follow-up examination of Danish stainless steel welders previously examined in 1987].
Knudsen, Lisbeth Ehlert; Burr, Herman
2003-07-14
A Danish cohort from 1987, consisting of 226 stainless steel welders and reference persons, is part of the European Study Group on Cytogenetic Biomarkers and Health (ESCH). In ESCH, increased cancer morbidity and mortality was significantly associated with high levels of chromosomal aberrations measured in blood samples several years prior to cancer registration. The positive association was found in two cohorts, from the Nordic countries and from Italy. ESCH followed all registered cancer cases and control persons by questionnaires and interviews to obtain information about exposures in the period from the time of blood sampling for chromosomal aberration analysis to the time of cancer diagnosis. In Denmark, the total cohort was included in the inquiry, and the ESCH questions were supplemented with questions from the Danish National Work Environment Cohort Study 1990-95. Responses from one hundred and forty-four persons showed that seventy-four were employed at the same workplace as in 1987. Differences in occupational exposures (more noise, heat, and insufficient lighting), but no differences in self-rated health, were found in comparison with the Danish National Work Environment Cohort Study as a whole and with its sample of metal workers. Only very few of the study persons knew the threshold limit value for welding fumes, but a majority found that the working environment had improved during the past ten years. This study confirms hazardous exposures in stainless steel welding. The threshold limit value, however, has been lowered since 1987, suggesting there is less cancer risk today from stainless steel welding.
Tsujimura, Hiroji; Taoda, Kazushi; Kitahara, Teruyo
2015-01-01
The aims of this study were to clarify in detail the levels of whole-body vibration (WBV) exposure from a variety of agricultural machines in a rice farmer over one year, and to evaluate the daily level of exposure compared with European and Japanese threshold limits. The subject was a full-time, male rice farmer. We measured vibration accelerations on the seat pan and at the seat base of four tractors with various implements attached, one rice-planting machine, two combine harvesters (produced by the same manufacturer), and one truck used for transportation of agricultural machines. The position and velocity of the machines were recorded in parallel with the WBV measurements. In addition, during the year starting in April 2010, the subject completed a questionnaire regarding his work (date, place, content, hours worked, machines used). We calculated the daily exposure to WBV, A(8), on all the days on which the subject used the agricultural machines. The WBV magnitude in farm fields was relatively high during tasks with high velocity and heavy mechanical load on the machine, and had no dominant axis. The subject worked for 159 days using the agricultural machines during the year, and the proportion of days on which A(8) values exceeded the thresholds was 90% for the Japan occupational exposure limit and 24% for the EU exposure action value. Our findings emphasize the need for rice farmers to have health management strategies suited to the farming seasons and measures to reduce WBV exposure during each farm task.
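The daily exposure A(8) used above is the energy-equivalent acceleration normalized to an 8-hour reference day, as defined in EU Directive 2002/44/EC (building on ISO 2631-1). A minimal sketch, with task accelerations and durations invented for illustration:

```python
import math

T0_HOURS = 8.0
EU_ACTION_VALUE = 0.5  # m/s^2, EU Directive 2002/44/EC daily exposure action value

def daily_exposure_a8(tasks):
    """Daily WBV exposure A(8): energy-equivalent acceleration over an
    8-hour day. tasks is a list of (a_w, hours) pairs, where a_w is the
    frequency-weighted r.m.s. acceleration (m/s^2) for that machine task."""
    return math.sqrt(sum(a * a * t for a, t in tasks) / T0_HOURS)
```

For a single task lasting exactly 8 hours, A(8) equals the task's weighted acceleration; shorter tasks contribute proportionally less energy.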
Absorption spectrum of a two-level atom in a bad cavity with injected squeezed vacuum
NASA Astrophysics Data System (ADS)
Zhou, Peng; Swain, S.
1996-02-01
We study the absorption spectrum of a coherently driven two-level atom interacting with a resonant cavity mode which is coupled to a broadband squeezed vacuum through its input-output mirror, in the bad-cavity limit. We study the modification of the two-photon correlation strength of the injected squeezed vacuum inside the cavity, and show that the equations describing probe absorption in the cavity environment are formally identical to those in free space, but with modified parameters describing the squeezed vacuum. The two-photon correlations induced by the squeezed vacuum are always weaker than in free space. We pay particular attention to the spectral behaviour at line centre in the region of intermediate-strength driving intensities, where anomalous spectral features such as hole-burning and dispersive profiles are displayed. These unusual spectral features are very sensitive to the squeezing phase and the Rabi frequency of the driving field. We also derive the threshold value of the Rabi frequency which gives rise to transparency of the probe beam at the driving frequency. When the Rabi frequency is less than the threshold value, the probe beam is absorbed, whilst the probe beam is amplified (without population inversion under certain conditions) when the Rabi frequency is larger than this threshold. The anomalous spectral features all take place in the vicinity of the critical point dividing the different dynamical regimes, probe absorption and amplification, of the atomic radiation. The physical origin of the strong amplification without population inversion, and the feasibility of observing it, are discussed.
Morphological analysis of pore size and connectivity in a thick mixed-culture biofilm.
Rosenthal, Alex F; Griffin, James S; Wagner, Michael; Packman, Aaron I; Balogun, Oluwaseyi; Wells, George F
2018-05-19
Morphological parameters are commonly used to predict transport and metabolic kinetics in biofilms. Yet quantification of biofilm morphology remains challenging due to imaging technology limitations and a lack of robust analytical approaches. We present a novel set of imaging and image analysis techniques to estimate internal porosity, pore size distributions, and pore network connectivity to a depth of 1 mm at a resolution of 10 µm in a biofilm exhibiting both heterotrophic and nitrifying activity. Optical coherence tomography (OCT) scans revealed an extensive pore network, with diameters as large as 110 µm, directly connected to the biofilm surface and surrounding fluid. Thin-section fluorescence in situ hybridization microscopy revealed ammonia-oxidizing bacteria (AOB) distributed through the entire thickness of the biofilm. AOB were particularly concentrated in the biofilm around internal pores. Areal porosity values estimated from OCT scans were consistently lower than those estimated from multiphoton laser scanning microscopy, though the two imaging modalities showed a statistically significant correlation (r = 0.49, p < 0.0001). Estimates of areal porosity were moderately sensitive to grey-level threshold selection, though several automated thresholding algorithms yielded values similar to those obtained by manual thresholding performed by a panel of environmental engineering researchers (±25% relative error). These findings advance our ability to quantitatively describe the geometry of biofilm internal pore networks at length scales relevant to engineered biofilm reactors and suggest that internal pore structures provide crucial habitat for nitrifier growth. This article is protected by copyright. All rights reserved.
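Otsu's method is one example of the kind of automated grey-level thresholding algorithm compared above (the abstract does not name the specific algorithms used, so its use here is an assumption). A minimal sketch on synthetic pixel values, with the porosity estimate that follows from the chosen threshold:

```python
def otsu_threshold(pixels, levels=256):
    """Otsu's automated threshold: choose the grey level t maximizing the
    between-class variance of the two resulting pixel classes."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    w0 = sum0 = 0
    best_t, best_var = 0, -1.0
    for t in range(levels):
        w0 += hist[t]                       # pixels at or below t
        if w0 == 0:
            continue
        w1 = total - w0
        if w1 == 0:
            break
        sum0 += t * hist[t]
        mu0 = sum0 / w0                     # mean of the dark class
        mu1 = (sum_all - sum0) / w1         # mean of the bright class
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

def areal_porosity(pixels, threshold):
    """Fraction of pixels at or below the threshold (dark = pore)."""
    return sum(p <= threshold for p in pixels) / len(pixels)
```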
Towards a clinically informed, data-driven definition of elderly onset epilepsy.
Josephson, Colin B; Engbers, Jordan D T; Sajobi, Tolulope T; Jette, Nathalie; Agha-Khani, Yahya; Federico, Paolo; Murphy, William; Pillay, Neelan; Wiebe, Samuel
2016-02-01
Elderly onset epilepsy represents a distinct subpopulation that has received considerable attention due to the unique features of the disease in this age group. Research into this particular patient group has been limited by a lack of a standardized definition and understanding of the attributes associated with elderly onset epilepsy. We used a prospective cohort database to examine differences in patients stratified according to age of onset. Linear support vector machine learning incorporating all significant variables was used to predict age of onset according to prespecified thresholds. Sensitivity and specificity were calculated and plotted in receiver-operating characteristic (ROC) space. Feature coefficients achieving an absolute value of 0.25 or greater were graphed by age of onset to define how they vary with time. We identified 2,449 patients, of whom 149 (6%) had an age of seizure onset of 65 or older. Fourteen clinical variables had an absolute predictive value of at least 0.25 at some point over the age of epilepsy-onset spectrum. Area under the curve in ROC space was maximized between ages of onset of 65 and 70. Features identified through machine learning were frequently threshold specific and were similar, but not identical, to those revealed through simple univariable and multivariable comparisons. This study provides an empirical, clinically informed definition of "elderly onset epilepsy." If validated, an age threshold of 65-70 years can be used for future studies of elderly onset epilepsy and permits targeted interventions according to the patient's age of onset. Wiley Periodicals, Inc. © 2015 International League Against Epilepsy.
NASA Astrophysics Data System (ADS)
Michel, P.; Benz, W.; Richardson, D. C.
2005-08-01
Recent simulations of asteroid break-ups, including both the fragmentation of the parent body and the gravitational interactions of the fragments, have successfully reproduced the main properties of asteroid families formed in different regimes of impact energy. Here, using the same kind of simulations, we concentrate on a single regime of impact energy, the so-called catastrophic threshold usually designated by Qcrit, which results in the escape of half of the target's mass. Considering a wide range of diameter values and two kinds of internal structures of the parent body, monolithic and pre-shattered, we analyse their potential influences on the value of Qcrit and on the collisional outcome, limited here to the fragment size and ejection speed distributions, which are the main outcome properties used by collisional models to study the evolutions of the different populations of small bodies. For all the considered diameters and the two internal structures of the parent body, we confirm that the process of gravitational reaccumulation is at the origin of the largest remnant's mass. We then find that, for a given diameter of the parent body, the impact energy corresponding to the catastrophic disruption threshold is highly dependent on the internal structure of the parent body. In particular, a pre-shattered parent body containing only damaged zones but no macroscopic voids is easier to disrupt than a monolithic parent body. Other kinds of internal properties that can also characterize small bodies in real populations will be investigated in a future work.
Climate limits across space and time on European forest structure
NASA Astrophysics Data System (ADS)
Moreno, A. L. S.; Neumann, M.; Hasenauer, H.
2017-12-01
The impact climate has on forests has been extensively studied. However, the large-scale effect climate has on forest structures, such as average diameters, heights and basal area, is understudied in a spatially explicit manner. The limits, tipping points and thresholds that climate places on forest structures dictate the services a forest may provide, the vulnerability of a forest to mortality and the potential value of the timber therein. The majority of current research either investigates climate impacts on forest pools and fluxes, on a tree physiological scale, or on case studies that are used to extrapolate results and potential impacts. A spatially explicit study on how climate affects forest structure over a large region would give valuable information to stakeholders who are more concerned with ecosystem services that cannot be described by pools and fluxes but require spatially explicit information, such as biodiversity, habitat suitability, and market values. In this study, we quantified the limits that climate (maximum and minimum temperature and precipitation) places on 3 forest structures, diameter at breast height, height, and basal area, throughout Europe. Our results show clear climatic zones of high and low upper limits for each forest structure variable studied. We also spatially analyzed how climate restricts the potential bio-physical upper limits and creates tipping points of each forest structure variable, and which climate factors are most limiting. Further, we demonstrated how climate change has affected 8 individual forests across Europe and then the continent as a whole. We find that diameter, height and basal area are limited by climate in different ways and that areas may have high upper limits in one structure and low upper limits in another, limited by different climate variables.
We also found that even though individual forests may have increased their potential upper limit forest structure values, European forests as a whole have lost, on average, 5.0%, 1.7% and 6.5% in potential mean forest diameter, height and basal area, respectively.
Ultrasonically triggered ignition at liquid surfaces.
Simon, Lars Hendrik; Meyer, Lennart; Wilkens, Volker; Beyer, Michael
2015-01-01
Ultrasound is considered to be an ignition source according to international standards, which set a threshold value of 1 mW/mm² [1] that is based on theoretical estimations but lacks experimental verification. Therefore, it is assumed that this threshold includes a large safety margin. At the same time, ultrasound is used in a variety of industrial applications where it can come into contact with explosive atmospheres. However, until now, no explosion accidents have been reported in connection with ultrasound, so it has been unclear whether the current threshold value is reasonable. Within this paper, it is shown that focused ultrasound coupled into a liquid can in fact ignite explosive atmospheres if a specific target positioned at the liquid's surface converts the acoustic energy into a hot spot. Based on ignition tests, conditions could be derived that are necessary for an ultrasonically triggered explosion. These conditions show that the current threshold value can be raised significantly. Copyright © 2014 Elsevier B.V. All rights reserved.
A new evaluation method of electron optical performance of high beam current probe forming systems.
Fujita, Shin; Shimoyama, Hiroshi
2005-10-01
A new numerical simulation method is presented for the electron optical property analysis of probe forming systems with point cathode guns such as cold field emitters and Schottky emitters. It has long been recognized that the gun aberrations are important parameters to be considered, since the intrinsically high brightness of the point cathode gun is reduced by its spherical aberration. The simulation method can evaluate the 'threshold beam current I_th' above which the apparent brightness starts to decrease from the intrinsic value. It is found that the threshold depends on the 'electron gun focal length' as well as on the spherical aberration of the gun. Formulas are presented to estimate the brightness reduction as a function of the beam current. The gun brightness reduction must be included when the probe property (the relation between the beam current I_b and the probe size on the sample, d) of the entire electron optical column is evaluated. Formulas that explicitly take the gun aberrations into account are presented. It is shown that the probe property curve consists of three segments in the order of increasing beam current: (i) the constant probe size region, (ii) the brightness limited region, where the probe size increases as d ∝ I_b^(3/8), and (iii) the angular current intensity limited region, in which the beam size increases rapidly as d ∝ I_b^(3/2). Some strategies are suggested to increase the threshold beam current and to extend the effective beam current range of the point cathode gun into the microampere regime.
Determination of γ -ray widths in 15N using nuclear resonance fluorescence
NASA Astrophysics Data System (ADS)
Szücs, T.; Bemmerer, D.; Caciolli, A.; Fülöp, Zs.; Massarczyk, R.; Michelagnoli, C.; Reinhardt, T. P.; Schwengner, R.; Takács, M. P.; Ur, C. A.; Wagner, A.; Wagner, L.
2015-07-01
Background: The stable nucleus 15N is the mirror of 15O, the bottleneck in the hydrogen burning CNO cycle. Most of the 15N level widths below the proton emission threshold are known from just one nuclear resonance fluorescence (NRF) measurement, with limited precision in some cases. A recent experiment with the AGATA demonstrator array determined level lifetimes using the Doppler shift attenuation method in 15O. As a reference and for testing the method, level lifetimes in 15N have also been determined in the same experiment. Purpose: The latest compilation of 15N level properties dates back to 1991. The limited precision in some cases in the compilation calls for a new measurement to enable a comparison to the AGATA demonstrator data. The widths of several 15N levels have been studied with the NRF method. Method: The solid nitrogen compounds enriched in 15N have been irradiated with bremsstrahlung. The γ rays following the deexcitation of the excited nuclear levels were detected with four high-purity germanium detectors. Results: Integrated photon-scattering cross sections of 10 levels below the proton emission threshold have been measured. Partial γ -ray widths of ground-state transitions were deduced and compared to the literature. The photon-scattering cross sections of two levels above the proton emission threshold, but still below other particle emission energies have also been measured, and proton resonance strengths and proton widths were deduced. Conclusions: Gamma and proton widths consistent with the literature values were obtained, but with greatly improved precision.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-10-05
...) program to establish appropriate emission thresholds for determining which new stationary sources and.... This action affects major stationary sources in Vermont that have GHG emissions above the thresholds... of GHG, and do not limit PSD applicability to GHGs to the higher thresholds in the Tailoring Rule...
Optimum threshold selection method of centroid computation for Gaussian spot
NASA Astrophysics Data System (ADS)
Li, Xuxu; Li, Xinyang; Wang, Caixia
2015-10-01
Centroid computation of a Gaussian spot is often conducted to get the exact position of a target or to measure wave-front slopes in the fields of target tracking and wave-front sensing. Center of Gravity (CoG) is the most traditional method of centroid computation, known for its low algorithmic complexity. However, both electronic noise from the detector and photonic noise from the environment reduce its accuracy. In order to improve the accuracy, thresholding is unavoidable before centroid computation, and an optimum threshold needs to be selected. In this paper, a model of the Gaussian spot is established to analyze the performance of the optimum threshold under different Signal-to-Noise Ratio (SNR) conditions. Besides, two optimum threshold selection methods are introduced: TmCoG (using m% of the maximum intensity of the spot as the threshold), and TkCoG (using μ_n + κσ_n as the threshold, where μ_n and σ_n are the mean value and standard deviation of the background noise). Firstly, their impact on the detection error under various SNR conditions is simulated to determine how to decide the value of κ or m. Then, a comparison between them is made. According to the simulation results, TmCoG is superior to TkCoG in the accuracy of the selected threshold, and its detection error is also lower.
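The two thresholding rules described in this abstract can be sketched in a few lines; this is a minimal illustration, not the authors' code, and the zero-below-threshold variant, the image size, and all numeric parameters are assumptions for the example:

```python
import numpy as np

def center_of_gravity(img):
    """Plain CoG centroid of a 2-D intensity array, returned as (x, y) in pixels."""
    ys, xs = np.indices(img.shape)
    total = img.sum()
    return (xs * img).sum() / total, (ys * img).sum() / total

def tm_cog(img, m=0.2):
    """TmCoG: zero out pixels below m * peak intensity, then take the CoG."""
    return center_of_gravity(np.where(img >= m * img.max(), img, 0.0))

def tk_cog(img, mu_n, sigma_n, kappa=3.0):
    """TkCoG: threshold at mu_n + kappa * sigma_n from background-noise statistics."""
    return center_of_gravity(np.where(img >= mu_n + kappa * sigma_n, img, 0.0))

# Synthetic Gaussian spot at (x0, y0) over a noisy background (parameters invented).
rng = np.random.default_rng(0)
x0, y0, width = 17.3, 12.6, 3.0
ys, xs = np.indices((32, 32))
spot = np.exp(-((xs - x0) ** 2 + (ys - y0) ** 2) / (2 * width ** 2))
frame = spot + rng.normal(0.02, 0.01, spot.shape)

x_m, y_m = tm_cog(frame, m=0.2)                           # close to (17.3, 12.6)
x_k, y_k = tk_cog(frame, mu_n=0.02, sigma_n=0.01, kappa=3.0)
```

Without the thresholding step, the uniform noise floor pulls the centroid toward the image center; either rule suppresses that bias, which is why a threshold must be chosen before CoG is applied.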
Löffler, Frank E.; Tiedje, James M.; Sanford, Robert A.
1999-01-01
Measurements of the hydrogen consumption threshold and the tracking of electrons transferred to the chlorinated electron acceptor (fe) reliably detected chlororespiratory physiology in both mixed cultures and pure cultures capable of using tetrachloroethene, cis-1,2-dichloroethene, vinyl chloride, 2-chlorophenol, 3-chlorobenzoate, 3-chloro-4-hydroxybenzoate, or 1,2-dichloropropane as an electron acceptor. Hydrogen was consumed to significantly lower threshold concentrations of less than 0.4 ppmv compared with the values obtained for the same cultures without a chlorinated compound as an electron acceptor. The fe values ranged from 0.63 to 0.7, values which are in good agreement with theoretical calculations based on the thermodynamics of reductive dechlorination as the terminal electron-accepting process. In contrast, a mixed methanogenic culture that cometabolized 3-chlorophenol exhibited a significantly lower fe value, 0.012. PMID:10473415
Cost-effectiveness thresholds: pros and cons.
Bertram, Melanie Y; Lauer, Jeremy A; De Joncheere, Kees; Edejer, Tessa; Hutubessy, Raymond; Kieny, Marie-Paule; Hill, Suzanne R
2016-12-01
Cost-effectiveness analysis is used to compare the costs and outcomes of alternative policy options. Each resulting cost-effectiveness ratio represents the magnitude of additional health gained per additional unit of resources spent. Cost-effectiveness thresholds allow cost-effectiveness ratios that represent good or very good value for money to be identified. In 2001, the World Health Organization's Commission on Macroeconomics in Health suggested cost-effectiveness thresholds based on multiples of a country's per-capita gross domestic product (GDP). In some contexts, in choosing which health interventions to fund and which not to fund, these thresholds have been used as decision rules. However, experience with the use of such GDP-based thresholds in decision-making processes at country level shows them to lack country specificity and this - in addition to uncertainty in the modelled cost-effectiveness ratios - can lead to the wrong decision on how to spend health-care resources. Cost-effectiveness information should be used alongside other considerations - e.g. budget impact and feasibility considerations - in a transparent decision-making process, rather than in isolation based on a single threshold value. Although cost-effectiveness ratios are undoubtedly informative in assessing value for money, countries should be encouraged to develop a context-specific process for decision-making that is supported by legislation, has stakeholder buy-in, for example the involvement of civil society organizations and patient groups, and is transparent, consistent and fair.
Oosterhuis, H J; Bouwsma, C; van Halsema, B; Hollander, R A; Kros, C J; Tombroek, I
1992-10-03
Quantification of vibration perception and fingertip sensation in routine neurological examination. Neurological Clinic, University Hospital, Groningen, the Netherlands. Prospective, controlled investigation. Vibration perception and fingertip sensation were quantified in a large group of normal control persons of various ages and in neurological patients, and compared with the usual sensory tests at routine neurological examination. The vibration perception limit was measured with a biothesiometer without an accelerometer, and fingertip sensation with a device for two-point discrimination slightly modified according to Renfrew (the 'Renfrew meter'). Concordance of the tests was studied by calculating kappa values. The normal values of both sensory qualities had a log-normal distribution and increased with age. The values obtained with the Renfrew meter correlated well with those of two-point discrimination and stereognosis but were systematically higher than those indicated by Renfrew. Both methods appear useful at routine neurological examination if certain measuring precautions are taken.
The Impact of Different Permissible Exposure Limits on Hearing Threshold Levels Beyond 25 dBA.
Sayapathi, Balachandar S; Su, Anselm Ting; Koh, David
2014-10-01
Development of noise-induced hearing loss depends on factors such as the frequency, intensity, and duration of noise exposure. The occurrence of this occupational malady has more than doubled, from 120 million to 250 million, in a decade. Countries such as Malaysia, India, and the US have adopted 90 dBA as the permissible exposure limit. According to the US Occupational Safety and Health Administration (OSHA), the exposure limit for noise is 90 dBA, while that of the US National Institute for Occupational Safety and Health (NIOSH) is 85 dBA for 8 hours of noise exposure. This study aimed to assess the development of hearing threshold levels beyond 25 dBA on adoption of 85 dBA as the permissible exposure limit compared to 90 dBA. This was an intervention study conducted in two automobile factories. There were 203 employees exposed to noise levels beyond the action level. Hearing protection devices were distributed to reduce noise levels to a level between the permissible exposure limit and the action level. The permissible exposure limits were 90 and 85 dBA in factories 1 and 2, respectively, while the action levels were 85 and 80 dBA, respectively. The hearing threshold levels of participants were measured at baseline and at the first month of post-shift noise exposure. The outcome was measured by a manual audiometer. McNemar and chi-square tests were used in the statistical analysis. We found that hearing threshold levels of more than 25 dBA changed significantly from pre-intervention to post-intervention among participants from both factories (3000 Hz for the right ear and 2000 Hz for the left ear). There was a statistically significant association with a 'deteriorated' level at 3000 Hz in the right ear (χ²(1) = 4.08, φ = -0.142, P = 0.043), with worsening of hearing thresholds beyond 25 dBA among those who adopted 90 dBA.
The adoption of 85 dBA as the permissible exposure limit preserved hearing threshold levels at 3000 Hz compared to those who adopted 90 dBA.
Spreading dynamics of a SIQRS epidemic model on scale-free networks
NASA Astrophysics Data System (ADS)
Li, Tao; Wang, Yuanmei; Guan, Zhi-Hong
2014-03-01
In order to investigate the influence of the heterogeneity of the underlying networks and of the quarantine strategy on epidemic spreading, an SIQRS epidemic model on scale-free networks is presented. Using mean-field theory, the spreading dynamics of the virus are analyzed. The spreading critical threshold and the equilibria are derived. Theoretical results indicate that the critical threshold value is significantly dependent on the topology of the underlying networks and on the quarantine rate. The existence of the equilibria is determined by the threshold value. The stability of the disease-free equilibrium and the permanence of the disease are proved. Numerical simulations confirmed the analytical results.
Laser damage threshold of gelatin and a copper phthalocyanine doped gelatin optical limiter
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brant, M.C.; McLean, D.G.; Sutherland, R.L.
1996-12-31
The authors demonstrate optical limiting in a unique guest-host system which uses neither a typical liquid nor a typical solid host. Instead, they dope a gelatin gel host with a water-soluble Copper (II) phthalocyaninetetrasulfonic acid, tetrasodium salt (CuPcTs). They report on the gelatin's viscoelasticity, laser damage threshold, and self healing of this damage. The viscoelastic gelatin has mechanical properties quite different from those of a liquid or solid. The authors' laser measurements demonstrate that the single-shot damage threshold of the undoped gelatin host increases with decreasing gelatin concentration. The gelatin also has a much higher laser damage threshold than a stiff acrylic. Unlike brittle solids, the soft gelatin self-heals from laser-induced damage. Optical limiting tests also show the utility of a gelatin host doped with CuPcTs. The CuPcTs/gelatin matrix is not damaged at incident laser energies 5 times the single-shot damage threshold of the gelatin host. However, at this high laser energy the CuPcTs is photobleached at the beam waist. The authors repair photobleached sites by annealing the CuPcTs/gelatin matrix.
Pulse oximeter based mobile biotelemetry application.
Işik, Ali Hakan; Güler, Inan
2012-01-01
Quality and features of tele-homecare are improved by information and communication technologies. In this context, a pulse oximeter-based mobile biotelemetry application was developed. With this application, patients can measure their own oxygen saturation and heart rate at home through a Bluetooth pulse oximeter. The Bluetooth virtual serial port protocol is used to send the test results from the pulse oximeter to a smart phone. These data are converted into XML and transmitted to a remote web server database via the smart phone; GPRS, WLAN or 3G can be used for the transmission. A rule-based algorithm is used in the decision-making process. By default, the threshold value for oxygen saturation is 80, and the heart rate threshold values are 40 and 150, respectively. If the patient's heart rate is outside its threshold values or the oxygen saturation is below its threshold value, an emergency SMS is sent to the doctor, who can then direct an ambulance to the patient. The doctor can change these threshold values for each patient. The conversion of the evaluated data into the SMS XML template is done on the web server. Another important component of the application is web-based monitoring of the pulse oximeter data. The web page provides access to all patient data, so doctors can follow their patients and send e-mails related to the evaluation of the disease. In addition, patients can follow their own data on this page. Eight patients took part in the procedure. It is believed that the developed application will facilitate pulse oximeter-based measurement from anywhere and at any time.
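The rule-based decision step described above amounts to a simple threshold check; a minimal sketch using the default thresholds quoted in the abstract (oxygen saturation below 80, heart rate outside 40-150), with the function name and per-patient override mechanism invented for illustration:

```python
# Default thresholds from the abstract; the doctor may override them per patient.
DEFAULTS = {"spo2_min": 80, "hr_min": 40, "hr_max": 150}

def needs_emergency_sms(spo2, heart_rate, **overrides):
    """Return True when a reading should trigger an emergency SMS to the doctor."""
    t = {**DEFAULTS, **overrides}
    return spo2 < t["spo2_min"] or not (t["hr_min"] <= heart_rate <= t["hr_max"])

print(needs_emergency_sms(75, 90))             # low oxygen saturation -> True
print(needs_emergency_sms(96, 160))            # heart rate too high -> True
print(needs_emergency_sms(96, 72))             # normal reading -> False
print(needs_emergency_sms(96, 45, hr_min=50))  # stricter per-patient limit -> True
```

In the application itself this check would run on the web server against each incoming XML reading, with the SMS dispatch handled separately.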
Dowd, Kieran P.; Harrington, Deirdre M.; Donnelly, Alan E.
2012-01-01
Background The activPAL has been identified as an accurate and reliable measure of sedentary behaviour. However, only limited information is available on the accuracy of the activPAL activity count function as a measure of physical activity, while no unit calibration of the activPAL has been completed to date. This study aimed to investigate the criterion validity of the activPAL, examine the concurrent validity of the activPAL, and perform and validate a value calibration of the activPAL in an adolescent female population. The performance of the activPAL in estimating posture was also compared with sedentary thresholds used with the ActiGraph accelerometer. Methodologies Thirty adolescent females (15 developmental; 15 cross-validation) aged 15–18 years performed 5 activities while wearing the activPAL, ActiGraph GT3X, and the Cosmed K4B2. A random coefficient statistics model examined the relationship between metabolic equivalent (MET) values and activPAL counts. Receiver operating characteristic analysis was used to determine activity thresholds and for cross-validation. The random coefficient statistics model showed a concordance correlation coefficient of 0.93 (standard error of the estimate = 1.13). An optimal moderate threshold of 2997 was determined using mixed regression, while an optimal vigorous threshold of 8229 was determined using receiver operating statistics. The activPAL count function demonstrated very high concurrent validity (r = 0.96, p<0.01) with the ActiGraph count function. Levels of agreement for sitting, standing, and stepping between direct observation and the activPAL and ActiGraph were 100%, 98.1%, 99.2% and 100%, 0%, 100%, respectively. Conclusions These findings suggest that the activPAL is a valid, objective measurement tool that can be used for both the measurement of physical activity and sedentary behaviours in an adolescent female population. PMID:23094069
A generalized methodology for identification of threshold for HRU delineation in SWAT model
NASA Astrophysics Data System (ADS)
M, J.; Sudheer, K.; Chaubey, I.; Raj, C.
2016-12-01
The Soil and Water Assessment Tool (SWAT) is a comprehensive distributed hydrological model widely used to support various decisions. The simulation accuracy of a distributed hydrological model differs according to the mechanism involved in the subdivision of the watershed. SWAT subdivides the watershed and the sub-basins into small computing units known as 'hydrologic response units' (HRUs). The delineation of HRUs is done based on unique combinations of land use, soil type, and slope within the sub-watersheds, which are not spatially defined. The computations in SWAT are done at the HRU level and are then aggregated up to the sub-basin outlet, which is routed through the stream system. Generally, the HRUs are delineated by considering a threshold percentage of land use, soil and slope, given by the modeler to decrease the computation time of the model. The thresholds constrain the minimum area for constructing an HRU. In the current HRU delineation practice in SWAT, any land use, soil or slope class within a sub-basin whose area is less than the predefined threshold will be subsumed by the dominating land use, soil and slope classes, which introduces some level of ambiguity into the process simulations in terms of inappropriate representation of the area. The loss of information due to variation in the threshold values, however, depends highly on the purpose of the study. Therefore, this research studies the effects of the threshold values of HRU delineation on the hydrological modeling of SWAT for sediment simulations and suggests guidelines for selecting appropriate threshold values considering the sediment simulation accuracy. A preliminary study was done on an Illinois watershed by assigning different thresholds for land use and soil. A general methodology was proposed for identifying an appropriate threshold for HRU delineation in the SWAT model that considers both computational time and the accuracy of the simulation.
The methodology can be adopted for identifying an appropriate threshold for SWAT model simulation in any watershed with a single simulation of the model with a zero-zero threshold.
Nonlinear Quantum Metrology of Many-Body Open Systems
NASA Astrophysics Data System (ADS)
Beau, M.; del Campo, A.
2017-07-01
We introduce general bounds for the parameter estimation error in nonlinear quantum metrology of many-body open systems in the Markovian limit. Given a k-body Hamiltonian and p-body Lindblad operators, the estimation error of a Hamiltonian parameter using a Greenberger-Horne-Zeilinger state as a probe is shown to scale as N^{-[k-(p/2)]}, surpassing the shot-noise limit for 2k > p + 1. Metrology equivalence between initial product states and maximally entangled states is established for p ≥ 1. We further show that one can estimate the system-environment coupling parameter with precision N^{-(p/2)}, while many-body decoherence enhances the precision to N^{-k} in the noise-amplitude estimation of a fluctuating k-body Hamiltonian. For the long-range Ising model, we show that the precision of this parameter beats the shot-noise limit when the range of interactions is below a threshold value.
Correlation of the tokamak H-mode density limit with ballooning stability at the separatrix
NASA Astrophysics Data System (ADS)
Eich, T.; Goldston, R. J.; Kallenbach, A.; Sieglin, B.; Sun, H. J.; ASDEX Upgrade Team; Contributors, JET
2018-03-01
We show for JET and ASDEX Upgrade, based on Thomson-scattering measurements, a clear correlation of the density limit of the tokamak H-mode high-confinement regime with the approach to the ideal ballooning instability threshold at the periphery of the plasma. It is shown that the MHD ballooning parameter at the separatrix position, α_sep, increases about linearly with the separatrix density normalized to the Greenwald density, n_e,sep/n_GW, for a wide range of discharge parameters in both devices. The observed operational space is found to reach at most n_e,sep/n_GW ≈ 0.4-0.5 at values of α_sep ≈ 2-2.5, in the range of theoretical predictions for ballooning instability. This work supports the hypothesis that the H-mode density limit may be set by ballooning stability at the separatrix.
NASA Astrophysics Data System (ADS)
Sharma, D.; Malik, B. P.; Gaur, A.
2016-11-01
Zinc oxide quantum dots (QDs) with Fe doping at different concentrations were prepared by the chemical co-precipitation method. The prepared QDs were characterized by UV-Vis spectroscopy, X-ray diffraction and the Z-scan technique. The sizes of the QDs were found to be within the 4.6-6.6 nm range. The nonlinear parameters, viz. the two-photon absorption coefficient (β_TPA) and the two-photon absorption cross-section (σ_TPA), were extracted with the help of the open-aperture Z-scan technique using a nanosecond Nd:YAG laser operating at a wavelength of 532 nm. The higher values of β_TPA and σ_TPA for Fe-doped ZnO imply that these are potential materials for the development of photonic devices and sensor protection applications. The Fe-doped sample (3% by wt) was found to be the best optical limiter, with a limiting threshold intensity of 0.64 TW/cm².
Burney, Peter; Minelli, Cosetta
2018-01-01
The impact of disease on population health is most commonly estimated by the population attributable fraction (PAF), or less commonly by the excess risk, an alternative measure that estimates the absolute risk of disease in the population that can be ascribed to the exposure. Using chronic airflow obstruction as an example, we examined the impact on these estimates of defining disease based on different "normal" values. We estimated PAF and the excess risk in scenarios in which the true rate of disease was 10% in the exposed and 5% in the unexposed, and where either 50% or 20% of the population was exposed. Disease definition was based on a "lower limit of normal", using the 5th, 1st and 0.2nd centile of values in a "normal" population as thresholds to define normality. Where normality is defined by centiles of values in a "normal" population, PAF is strongly influenced by which centile is selected to define normality. This is not true for the population excess risk. Care should be taken when interpreting estimates of PAF when disease is defined from a centile of a normal population. Copyright © 2017 Elsevier Inc. All rights reserved.
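The two measures compared in this abstract can be computed directly from the scenario it describes (10% risk in the exposed, 5% in the unexposed, 50% or 20% of the population exposed); a minimal sketch, with the function name invented for illustration:

```python
def paf_and_excess_risk(p_exposed, risk_exposed, risk_unexposed):
    """Population attributable fraction (PAF) and population excess risk."""
    # Overall population risk is the exposure-weighted average of the two risks.
    risk_population = p_exposed * risk_exposed + (1 - p_exposed) * risk_unexposed
    paf = (risk_population - risk_unexposed) / risk_population
    excess_risk = risk_population - risk_unexposed
    return paf, excess_risk

# Scenario from the abstract: 10% risk in the exposed, 5% in the unexposed.
paf_50, excess_50 = paf_and_excess_risk(0.5, 0.10, 0.05)  # 50% exposed: ~0.333, 0.025
paf_20, excess_20 = paf_and_excess_risk(0.2, 0.10, 0.05)  # 20% exposed: ~0.167, 0.010
```

Note that the excess risk depends only on the exposure prevalence and the two risks, whereas the PAF is additionally sensitive to the overall disease rate, which is where the choice of "normal" threshold enters.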
Early prediction of thiopurine-induced hepatotoxicity in inflammatory bowel disease.
Wong, D R; Coenen, M J H; Derijks, L J J; Vermeulen, S H; van Marrewijk, C J; Klungel, O H; Scheffer, H; Franke, B; Guchelaar, H-J; de Jong, D J; Engels, L G J B; Verbeek, A L M; Hooymans, P M
2017-02-01
Hepatotoxicity, gastrointestinal complaints and general malaise are common limiting adverse reactions of azathioprine and mercaptopurine in IBD patients, often related to high steady-state 6-methylmercaptopurine ribonucleotide (6-MMPR) metabolite concentrations. To determine the predictive value of 6-MMPR concentrations 1 week after treatment initiation (T1) for the development of these adverse reactions, especially hepatotoxicity, during the first 20 weeks of treatment. The cohort study consisted of the first 270 IBD patients starting thiopurine treatment as part of the Dutch randomised controlled trial evaluating pre-treatment thiopurine S-methyltransferase genotype testing (ClinicalTrials.gov NCT00521950). Blood samples for metabolite assessment were collected at T1. Hepatotoxicity was defined by alanine aminotransferase elevations >2 times the upper normal limit or a ratio of alanine aminotransferase/alkaline phosphatase ≥5. Forty-seven patients (17%) presented with hepatotoxicity during the first 20 weeks of thiopurine treatment. A T1 6-MMPR threshold of 3615 pmol/8 × 10^8 erythrocytes was defined. Analysis of patients on a stable thiopurine dose (n = 174) showed that those exceeding the 6-MMPR threshold were at increased risk of hepatotoxicity: OR = 3.8 (95% CI: 1.8-8.0). Age, male gender and BMI were significant determinants. A predictive algorithm was developed based on these determinants and the 6-MMPR threshold to assess hepatotoxicity risk [AUC = 0.83 (95% CI: 0.75-0.91)]. 6-MMPR concentrations above the threshold also correlated with gastrointestinal complaints: OR = 2.4 (95% CI: 1.4-4.3), and general malaise: OR = 2.0 (95% CI: 1.1-3.7). In more than 80% of patients, thiopurine-induced hepatotoxicity could be explained by elevated T1 6-MMPR concentrations and the independent risk factors age, gender and BMI, allowing personalised thiopurine treatment in IBD to prevent early failure. © 2016 John Wiley & Sons Ltd.
41 CFR 102-73.40 - What happens if the dollar value of the project exceeds the prospectus threshold?
Code of Federal Regulations, 2013 CFR
2013-07-01
... 41 Public Contracts and Property Management 3 2013-07-01 2013-07-01 false What happens if the dollar value of the project exceeds the prospectus threshold? 102-73.40 Section 102-73.40 Public Contracts and Property Management Federal Property Management Regulations System (Continued) FEDERAL MANAGEMENT REGULATION REAL PROPERTY 73-REAL ESTATE...
41 CFR 102-73.40 - What happens if the dollar value of the project exceeds the prospectus threshold?
Code of Federal Regulations, 2014 CFR
2014-01-01
... 41 Public Contracts and Property Management 3 2014-01-01 2014-01-01 false What happens if the dollar value of the project exceeds the prospectus threshold? 102-73.40 Section 102-73.40 Public Contracts and Property Management Federal Property Management Regulations System (Continued) FEDERAL MANAGEMENT REGULATION REAL PROPERTY 73-REAL ESTATE...
Masking ability of a zirconia ceramic on composite resin substrate shades.
Tabatabaian, Farhad; Shabani, Sima; Namdari, Mahshid; Sadeghpour, Koroush
2017-01-01
Masking ability of a restorative material plays an important role in covering discolored tooth structure; however, this ability has not yet been well understood in zirconia-based restorations. This study assessed the masking ability of a zirconia ceramic on composite resin substrates with different shades. Ten zirconia disc specimens, 0.5 mm thick and 10 mm in diameter, were fabricated by a computer-aided design/computer-aided manufacturing system. A white substrate (control) and six composite resin substrates with different shades including A1, A2, A3, B2, C2, and D3 were prepared. The substrates were cylindrical, 10 mm in diameter and height. The specimens were placed onto the substrates for spectrophotometric evaluation. A spectrophotometer measured the L*, a*, and b* values for the specimens. ΔE values were calculated to determine the color differences between the groups and the control and were then compared with a perceptibility threshold (ΔE = 2.6). Repeated measures ANOVA and Bonferroni tests were used for data analysis (P < 0.05). The mean and standard deviation of ΔE values for the A1, A2, A3, B2, C2, and D3 groups were 6.78 ± 1.59, 8.13 ± 1.66, 9.81 ± 2.64, 9.61 ± 1.38, 9.59 ± 2.63, and 8.13 ± 1.89, respectively. A significant difference was found among the groups in the ΔE values (P = 0.006). The ΔE values exceeded the perceptibility threshold in all the groups (P < 0.0001). Within the limitations of this study, it can be concluded that the tested zirconia ceramic could not thoroughly mask different shades of the composite resin substrates. Moreover, the color masking of zirconia depends on the shade of the substrate.
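The ΔE comparisons described above follow the classic CIE 1976 color-difference formula, ΔE*ab = √(ΔL² + Δa² + Δb²); a minimal sketch, using made-up L*a*b* measurements rather than the study's values:

```python
import math

def delta_e_ab(lab1, lab2):
    """CIE 1976 color difference (Delta-E*ab) between two (L*, a*, b*) triples."""
    dL, da, db = (x - y for x, y in zip(lab1, lab2))
    return math.sqrt(dL**2 + da**2 + db**2)

PERCEPTIBILITY_THRESHOLD = 2.6  # threshold used in the study

# Illustrative (made-up) readings: specimen over control vs over a darker substrate
over_control = (75.0, 1.2, 10.5)
over_a3 = (68.5, 3.0, 17.1)
de = delta_e_ab(over_control, over_a3)
print(f"dE = {de:.2f}, perceptible: {de > PERCEPTIBILITY_THRESHOLD}")
```

If the specimen fully masked the substrate, ΔE against the control would fall below the 2.6 perceptibility threshold; the reported group means (6.78-9.81) all exceed it.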
NASA Astrophysics Data System (ADS)
Amrani, Aumeur El; Es-saghiri, Abdeljabbar; Boufounas, El-Mahjoub; Lucas, Bruno
2018-06-01
The performance of a pentacene-based organic thin film transistor (OTFT) with polymethylmethacrylate as the dielectric insulator and an indium tin oxide gate electrode is investigated. We show that the threshold voltage increases with gate voltage and decreases with drain voltage, and that the onset voltage shifts toward positive values as the drain voltage increases. In addition, the threshold-onset differential voltage (TODV) is proposed as an original approach to estimate an averaged carrier density in pentacene. A value of about 4.5 × 10¹⁶ cm⁻³ is reached at a relatively high gate voltage of -50 V; this value is in good agreement with values reported in the literature from other measurement techniques. At low applied gate voltage, however, the averaged pentacene carrier density remains two orders of magnitude lower, about 2.8 × 10¹⁴ cm⁻³, similar to the value of about 2.2 × 10¹⁴ cm⁻³ obtained from the space-charge-limited-current approach at a low applied bias voltage. Furthermore, high IOn/IOff and IOn/IOnset current ratios of 5 × 10⁶ and 7.5 × 10⁷, respectively, are reported at lower drain voltage. The investigated OTFTs also show good electrical performance, including carrier mobility that increases with gate voltage; mobility values of 4.5 × 10⁻² cm² V⁻¹ s⁻¹ and 4.25 × 10⁻² cm² V⁻¹ s⁻¹ are reached in the linear and saturation regimes, respectively. These results are of particular interest because a current modulation ratio exceeding 10⁷ is a more important requirement than high mobility for some logic-gate applications.
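One plausible reading of a TODV-style estimate is converting the gate-induced areal charge across the threshold-onset voltage window into a volume density, n ≈ C_i·ΔV/(q·d). The sketch below assumes this form and uses illustrative capacitance and film-thickness values, not the device parameters from the paper:

```python
# Sketch of a TODV-style carrier-density estimate: n = C_i * dV / (q * d).
# C_I (insulator capacitance per area) and D_PENTACENE (film thickness) are
# assumed illustrative values, not the actual device parameters.
Q_E = 1.602e-19          # elementary charge (C)
C_I = 1.0e-8             # assumed PMMA capacitance per area (F/cm^2)
D_PENTACENE = 50e-7      # assumed pentacene thickness (cm), i.e. 50 nm

def carrier_density(delta_v):
    """Averaged volume carrier density (cm^-3) from a voltage window (V)."""
    return C_I * abs(delta_v) / (Q_E * D_PENTACENE)

print(f"n = {carrier_density(3.0):.2e} cm^-3")  # for an assumed 3 V window
```

With these assumed parameters a few-volt window lands in the 10¹⁶ cm⁻³ range, the same order as the high-gate-voltage value reported in the abstract.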
Johnson, Earl E
2017-11-01
To determine safe output sound pressure levels (SPL) for sound amplification devices to preserve hearing sensitivity after usage. A mathematical model consisting of the Modified Power Law (MPL) (Humes & Jesteadt, 1991) combined with equations for predicting temporary threshold shift (TTS) and subsequent permanent threshold shift (PTS) (Macrae, 1994b) was used to determine safe output SPL. The study involves no new human subject measurements of loudness tolerance or threshold shifts. PTS was determined by the MPL model for 234 audiograms and the SPL output recommended by four different validated prescription recommendations for hearing aids. PTS can, on rare occasion, occur as a result of SPL delivered by hearing aids at modern day prescription recommendations. The trading relationship of safe output SPL, decibel hearing level (dB HL) threshold, and PTS was captured with algebraic expressions. Better hearing thresholds lowered the safe output SPL and higher thresholds raised the safe output SPL. Safe output SPL can consider the magnitude of unaided hearing loss. For devices not set to prescriptive levels, limiting the output SPL below the safe levels identified should protect against threshold worsening as a result of long-term usage.
10 CFR Appendix E to Part 20 - Nationally Tracked Source Thresholds
Code of Federal Regulations, 2010 CFR
2010-01-01
... 10 Energy 1 2010-01-01 2010-01-01 false Nationally Tracked Source Thresholds E Appendix E to Part 20 Energy NUCLEAR REGULATORY COMMISSION STANDARDS FOR PROTECTION AGAINST RADIATION Pt. 20, App. E Appendix E to Part 20— Nationally Tracked Source Thresholds The Terabecquerel (TBq) values are the...
10 CFR Appendix E to Part 20 - Nationally Tracked Source Thresholds
Code of Federal Regulations, 2013 CFR
2013-01-01
... 10 Energy 1 2013-01-01 2013-01-01 false Nationally Tracked Source Thresholds E Appendix E to Part 20 Energy NUCLEAR REGULATORY COMMISSION STANDARDS FOR PROTECTION AGAINST RADIATION Pt. 20, App. E Appendix E to Part 20— Nationally Tracked Source Thresholds The Terabecquerel (TBq) values are the...
10 CFR Appendix E to Part 20 - Nationally Tracked Source Thresholds
Code of Federal Regulations, 2011 CFR
2011-01-01
... 10 Energy 1 2011-01-01 2011-01-01 false Nationally Tracked Source Thresholds E Appendix E to Part 20 Energy NUCLEAR REGULATORY COMMISSION STANDARDS FOR PROTECTION AGAINST RADIATION Pt. 20, App. E Appendix E to Part 20— Nationally Tracked Source Thresholds The Terabecquerel (TBq) values are the...
10 CFR Appendix E to Part 20 - Nationally Tracked Source Thresholds
Code of Federal Regulations, 2014 CFR
2014-01-01
... 10 Energy 1 2014-01-01 2014-01-01 false Nationally Tracked Source Thresholds E Appendix E to Part 20 Energy NUCLEAR REGULATORY COMMISSION STANDARDS FOR PROTECTION AGAINST RADIATION Pt. 20, App. E Appendix E to Part 20— Nationally Tracked Source Thresholds The Terabecquerel (TBq) values are the...
10 CFR Appendix E to Part 20 - Nationally Tracked Source Thresholds
Code of Federal Regulations, 2012 CFR
2012-01-01
... 10 Energy 1 2012-01-01 2012-01-01 false Nationally Tracked Source Thresholds E Appendix E to Part 20 Energy NUCLEAR REGULATORY COMMISSION STANDARDS FOR PROTECTION AGAINST RADIATION Pt. 20, App. E Appendix E to Part 20— Nationally Tracked Source Thresholds The Terabecquerel (TBq) values are the...
Percolation of disordered jammed sphere packings
NASA Astrophysics Data System (ADS)
Ziff, Robert M.; Torquato, Salvatore
2017-02-01
We determine the site and bond percolation thresholds for a system of disordered jammed sphere packings in the maximally random jammed state, generated by the Torquato-Jiao algorithm. For the site threshold, which gives the fraction of conducting versus non-conducting spheres necessary for percolation, we find p_c = 0.3116(3), consistent with the 1979 value of Powell, 0.310(5), and identical within errors to the threshold for the simple-cubic lattice, 0.311608, which shares the same average coordination number of 6. In terms of the volume fraction φ, the threshold corresponds to a critical value φ_c = 0.199. For the bond threshold, which apparently was not measured before, we find p_c = 0.2424(3). To find these thresholds, we considered two shape-dependent universal ratios involving the size of the largest cluster, fluctuations in that size, and the second moment of the size distribution; we confirmed the ratios' universality by also studying the simple-cubic lattice with a similar cubic boundary. The results are applicable to many problems including conductivity in random mixtures, glass formation, and drug loading in pharmaceutical tablets.
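The simple-cubic site threshold quoted above (0.311608) can be explored with a small Monte Carlo sketch: occupy sites with probability p and test for a spanning cluster with union-find. The lattice size and trial count below are arbitrary choices, and finite-size effects blur the transition:

```python
import random

def site_percolates(n, p, rng):
    """True if occupied sites span a simple-cubic n^3 lattice along z (union-find)."""
    occ = [rng.random() < p for _ in range(n * n * n)]
    parent = list(range(n * n * n + 2))   # two virtual nodes: top and bottom faces
    top, bottom = n * n * n, n * n * n + 1

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    def union(i, j):
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj

    idx = lambda x, y, z: (z * n + y) * n + x
    for z in range(n):
        for y in range(n):
            for x in range(n):
                i = idx(x, y, z)
                if not occ[i]:
                    continue
                if z == 0: union(i, top)
                if z == n - 1: union(i, bottom)
                # join occupied nearest neighbours in the +x, +y, +z directions
                for dx, dy, dz in ((1, 0, 0), (0, 1, 0), (0, 0, 1)):
                    nx, ny, nz = x + dx, y + dy, z + dz
                    if nx < n and ny < n and nz < n and occ[idx(nx, ny, nz)]:
                        union(i, idx(nx, ny, nz))
    return find(top) == find(bottom)

rng = random.Random(0)
for p in (0.25, 0.3116, 0.40):   # below, near, and above the site threshold
    frac = sum(site_percolates(16, p, rng) for _ in range(40)) / 40
    print(f"p = {p}: spanning fraction ~ {frac:.2f}")
```

The paper's actual measurement is far more careful, locating the threshold via shape-dependent universal ratios of cluster-size moments rather than raw spanning fractions.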
Reliability of the method of levels for determining cutaneous temperature sensitivity
NASA Astrophysics Data System (ADS)
Jakovljević, Miroljub; Mekjavić, Igor B.
2012-09-01
Determination of thermal thresholds is used clinically to evaluate peripheral nervous system function. The aim of this study was to evaluate the reliability of the method of levels performed with a new, low-cost device for determining cutaneous temperature sensitivity. Nineteen male subjects were included in the study. Thermal thresholds were tested on the right side at the volar surface of the mid-forearm, the lateral surface of the mid-upper arm and the front of the mid-thigh. Thermal testing was carried out by the method of levels with an initial temperature step of 2°C. Variability of thermal thresholds was expressed by means of the ratio between the second and the first testing, coefficient of variation (CV), coefficient of repeatability (CR), intraclass correlation coefficient (ICC), mean difference between sessions (S1-S2diff), standard error of measurement (SEM) and minimally detectable change (MDC). There were no statistically significant changes between sessions for warm or cold thresholds, or between warm and cold thresholds. Within-subject CVs were acceptable. The CR estimates ranged from 0.74°C to 1.06°C for warm thresholds and from 0.67°C to 1.07°C for cold thresholds. The ICC values for intra-rater reliability ranged from 0.41 to 0.72 for warm thresholds and from 0.67 to 0.84 for cold thresholds. S1-S2diff ranged from -0.15°C to 0.07°C for warm thresholds, and from -0.08°C to 0.07°C for cold thresholds. SEM ranged from 0.26°C to 0.38°C for warm thresholds, and from 0.23°C to 0.38°C for cold thresholds. Estimated MDC values were between 0.60°C and 0.88°C for warm thresholds, and between 0.53°C and 0.88°C for cold thresholds. The method of levels for determining cutaneous temperature sensitivity has acceptable reliability.
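The SEM and MDC figures reported here follow standard reliability formulas, SEM = SD·√(1 − ICC) and MDC = z·√2·SEM; a 90% confidence level (z ≈ 1.645) reproduces the reported SEM-to-MDC ratio closely. A minimal sketch, with an assumed between-subject SD for illustration:

```python
import math

def sem(sd, icc):
    """Standard error of measurement from between-subject SD and ICC."""
    return sd * math.sqrt(1 - icc)

def mdc(sem_value, z=1.645):
    """Minimally detectable change at the given confidence level (90% default)."""
    return z * math.sqrt(2) * sem_value

# Illustrative inputs: SD is assumed; ICC 0.72 is within the reported warm range
s = sem(sd=0.60, icc=0.72)
print(f"SEM = {s:.2f} degC, MDC90 = {mdc(s):.2f} degC")
```

Applying the MDC formula to the reported SEM extremes (0.26°C and 0.38°C) gives roughly 0.60°C and 0.88°C, matching the MDC range in the abstract.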
Titov, G.
1973-01-01
The aim of this research was to determine how the dynamics of the functional state of the neuromuscular apparatus, the visual analyser, the latent period of motor reactions and the supporting kinesthetic functions are related at different stages of athletes' preparation for important competitions. The following methods were used: the functional mobility of the neuromuscular apparatus was determined by electrostimulation (excitability thresholds, frequency range of the optimum and maximum rhythm, and changes in bioelectric potential in response to electrical stimuli); the functional state of the visual analyser was determined by electrostimulation (excitability thresholds and the frequency range of phosphenes in response to a threshold stimulus); cortical neurodynamics were determined by chronoreflexometry using auditory stimuli of different intensities, with or without preliminary tensing of the motor centres; the supporting kinesthetic functions were determined by seismotremography and stabilography (frequency and amplitude of tremor, and deviations of the body's centre of gravity in different positions of the Romberg test). All these systems were studied during the preparatory, main and competition periods of preparation for the important competitions. Gymnasts, boxers and fencers were observed; in all, 570 observations were made of 54 highly qualified athletes. At a satisfactory level of training, the functional state of the visual analyser was characterised by relatively low excitability thresholds and high frequency limits of phosphenes. The functional mobility of the neuromuscular apparatus was reduced during this period. The supporting kinesthetic functions showed instability in the frequency and amplitude characteristics of tremor and in the deviations of the body's centre of gravity.
The latent period of the motor reaction was longest without preliminary tensing of the motor centres. The performance of intense training loads was accompanied by distinct signs of nervous system excitation against a background of reduced functional mobility of the neuromuscular apparatus. Just before the main competitions, when the athletes were in good condition, the excitability thresholds of the visual analyser increased slightly; the frequency limits of phosphenes fell; the functional mobility of the neuromuscular apparatus reached its highest value; the supporting kinesthetic functions were characterised by stable values of tremor frequency and amplitude and of the deviations of the body's centre of gravity; and the minimum latent period of the motor reaction depended on preliminary tensing of the motor centres. The data obtained suggest that the dynamics of the functional state of the systems under consideration may characterise the level of operative rest, in A. A. Ukhtomsky's sense, as a state of readiness for physical activity.