NASA Astrophysics Data System (ADS)
Del Duca, V.; Laenen, E.; Magnea, L.; Vernazza, L.; White, C. D.
2017-11-01
We consider the production of an arbitrary number of colour-singlet particles near partonic threshold, and show that next-to-leading order cross sections for this class of processes have a simple universal form at next-to-leading power (NLP) in the energy of the emitted gluon radiation. Our analysis relies on a recently derived factorisation formula for NLP threshold effects at amplitude level, and therefore applies both if the leading-order process is tree-level and if it is loop-induced. It holds for differential distributions as well. The results can furthermore be seen as applications of recently derived next-to-soft theorems for gauge theory amplitudes. We use our universal expression to re-derive known results for the production of up to three Higgs bosons at NLO in the large top mass limit, and for the hadro-production of a pair of electroweak gauge bosons. Finally, we present new analytic results for Higgs boson pair production at NLO and NLP, with exact top-mass dependence.
Threshold resummation for top-pair hadroproduction to next-to-next-to-leading log
NASA Astrophysics Data System (ADS)
Czakon, Michal; Mitov, Alexander; Sterman, George
2009-10-01
We derive the threshold-resummed total cross section for heavy quark production in hadronic collisions accurate to next-to-next-to-leading logarithm, employing recent advances on soft anomalous dimension matrices for massive pair production in the relevant kinematic limit. We also derive the relation between heavy quark threshold resummations for fixed pair kinematics and the inclusive cross section. As a check of our results, we have verified that they reproduce all poles of the color-averaged qq¯→tt¯ amplitudes at two loops, noting that the latter are insensitive to the color-antisymmetric terms of the soft anomalous dimension.
Spatially Varying Spectral Thresholds for MODIS Cloud Detection
NASA Technical Reports Server (NTRS)
Haines, S. L.; Jedlovec, G. J.; Lafontaine, F.
2004-01-01
The EOS science team has developed an elaborate global MODIS cloud detection procedure, and the resulting MODIS product (MOD35) is used in the retrieval process of several geophysical parameters to mask out clouds. While the global application of the cloud detection approach appears quite robust, the product has some shortcomings on the regional scale, often over-determining clouds in a variety of settings, particularly at night. This over-determination of clouds can cause a reduction in the spatial coverage of MODIS-derived clear-sky products. To minimize this problem, a new regional cloud detection method for use with MODIS data has been developed at NASA's Global Hydrology and Climate Center (GHCC). The approach is similar to that used by the GHCC for GOES data over the continental United States. Several spatially varying thresholds are applied to MODIS spectral data to produce a set of tests for detecting clouds. The thresholds are valid for each MODIS orbital pass, and are derived from 20-day composites of GOES channels with wavelengths similar to those of MODIS. This paper and accompanying poster will introduce the GHCC MODIS cloud mask, provide some examples, and present some preliminary validation.
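The spatially varying threshold idea can be illustrated with a minimal sketch; the values and the warmest-pixel compositing rule below are hypothetical, not the GHCC algorithm. A per-pixel clear-sky composite is built from a 20-day stack, and a fixed offset from the composite defines the cloud test.

```python
# Illustrative sketch (not the GHCC code): build a per-pixel brightness-temperature
# threshold from a 20-day clear-sky composite, then flag pixels as cloudy when the
# observed value falls below the composite by more than a fixed offset.
import numpy as np

rng = np.random.default_rng(0)
days, ny, nx = 20, 50, 50

# Hypothetical stack of 20 daily 11-um brightness temperatures (K) for one region.
bt_stack = 290 + 5 * rng.standard_normal((days, ny, nx))

# Clear-sky composite: warmest observation per pixel over the 20-day window
# (clouds are colder in the IR window, so the maximum approximates clear sky).
clear_composite = bt_stack.max(axis=0)

offset_k = 8.0                            # hypothetical cloud/clear separation offset (K)
threshold = clear_composite - offset_k    # spatially varying threshold

today_bt = bt_stack[-1]
cloud_mask = today_bt < threshold         # True where the pixel is flagged cloudy
print(f"cloudy fraction: {cloud_mask.mean():.2f}")
```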
Thresholds of sea-level rise rate and sea-level acceleration rate in a vulnerable coastal wetland
NASA Astrophysics Data System (ADS)
Wu, W.; Biber, P.; Bethel, M.
2017-12-01
Feedbacks among inundation, sediment trapping, and vegetation productivity help maintain coastal wetlands facing sea-level rise (SLR). However, when the SLR rate exceeds a threshold, coastal wetlands can collapse. Understanding this threshold helps address a key challenge in ecology - the nonlinear response of ecosystems to environmental change - and promotes communication between ecologists and policy makers. We studied the threshold of SLR rate and developed a new threshold of SLR acceleration rate for the sustainability of coastal wetlands, as SLR is likely to accelerate due to enhanced anthropogenic forcing. We developed a mechanistic model to simulate wetland change and derived the SLR thresholds for Grand Bay, MS, a micro-tidal estuary with limited upland freshwater and sediment input in the northern Gulf of Mexico. The new SLR acceleration rate threshold complements the threshold of SLR rate and can help explain the temporal lag before the rapid decline of wetland area becomes evident after the SLR rate threshold is exceeded. Deriving these two thresholds depends on the temporal scale, the interaction of SLR with other environmental factors, and landscape metrics, which had not been fully accounted for before this study. The derived SLR rate thresholds range from 7.3 mm/yr to 11.9 mm/yr. The thresholds of SLR acceleration rate are 3.02×10⁻⁴ m/yr² and 9.62×10⁻⁵ m/yr² for 2050 and 2100, respectively. Based on the thresholds developed, the SLR projected to adversely impact the coastal wetlands in Grand Bay by 2100 falls within the likely range of SLR under a high warming scenario (RCP8.5) and beyond the very likely range under a low warming scenario (RCP2.6 or 3), highlighting the need to avoid the high warming scenario in the future if these marshes are to be preserved.
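As a purely illustrative aside (not the mechanistic wetland model used in the study), the link between the two kinds of threshold can be seen from simple kinematics: under a constant acceleration a, the SLR rate is r(t) = r0 + a*t, so an acceleration threshold implies a date at which the rate threshold is first exceeded. The present-day rate below is an assumed placeholder; only the two thresholds are taken from the abstract.

```python
# Illustrative kinematics only: when does a constant-acceleration SLR trajectory
# first exceed the lower SLR-rate threshold reported above?
r0 = 3.0e-3               # assumed present-day SLR rate, m/yr (hypothetical)
rate_threshold = 7.3e-3   # lower SLR-rate threshold from the abstract, m/yr
a_2100 = 9.62e-5          # SLR acceleration threshold for 2100 from the abstract, m/yr^2

years_to_exceed = (rate_threshold - r0) / a_2100
print(f"rate threshold first exceeded after ~{years_to_exceed:.0f} years")
```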
Estimating economic thresholds for pest control: an alternative procedure.
Ramirez, O A; Saunders, J L
1999-04-01
An alternative methodology to determine profit-maximizing economic thresholds is developed and illustrated. An optimization problem based on the main biological and economic relations involved in determining a profit-maximizing economic threshold is first advanced. From it, a more manageable model of two nonsimultaneous reduced-form equations is derived, which represents a simpler but conceptually and statistically sound alternative. The model recognizes that yields and pest control costs are a function of the economic threshold used. Higher (less strict) economic thresholds can result in lower yields and, therefore, a lower gross income from the sale of the product, but could also be less costly to maintain. The highest possible profits will be obtained by using the economic threshold that results in a maximum difference between the gross income and pest control cost functions.
Impact of Xanthylium Derivatives on the Color of White Wine.
Bührle, Franziska; Gohl, Anita; Weber, Fabian
2017-08-19
Xanthylium derivatives are yellow to orange pigments formed by dimerization of flavanols via a glyoxylic acid bridge, the glyoxylic acid arising from oxidative cleavage of tartaric acid. Although their structure and formation under wine-like conditions are well established, knowledge about their color properties and their occurrence and importance in wine is deficient. Xanthylium cations and their corresponding esters were synthesized in a model wine solution and isolated via high-performance countercurrent chromatography (HPCCC) and solid phase extraction (SPE). A Three-Alternative-Forced-Choice (3-AFC) test was applied to reveal the color perception threshold of the isolated compounds in white wine. Their presence and color impact were assessed in 70 different wines (58 white and 12 rosé wines) by UHPLC-DAD-ESI-MSn, and the storage stability in wine was determined. The thresholds in young Riesling wine were 0.57 mg/L (cations), 1.04 mg/L (esters) and 0.67 mg/L (1:1 (w/w) mixture), respectively. The low thresholds suggest a possible impact on white wine color, but concentrations in wines were below the threshold. The stability study showed the degradation of the compounds during storage under several conditions. Despite the low perception threshold, xanthylium derivatives might have no direct impact on white wine color, but might play a role in color formation as intermediate products in polymerization and browning.
Stark, Timo; Wollmann, Nadine; Wenker, Kerstin; Lösch, Sofie; Glabasnia, Arne; Hofmann, Thomas
2010-05-26
Aimed at investigating the concentrations and taste contribution of the oak-derived ellagitannins castalagin and vescalagin as well as their transformation products acutissimin A/B, epiacutissimin A/B, and beta-1-O-ethylvescalagin in red wine, a highly sensitive and accurate quantification method was developed on the basis of LC-MS/MS-MRM analysis with matrix calibration. Method validation showed good recovery rates ranging from 102.4 +/- 5.9% (vescalagin) to 113.7 +/- 15.2% (epiacutissimin A). In oak-matured wines, castalagin was found as the predominant ellagitannin, followed by beta-1-O-ethylvescalagin, whereas the flavano-C-ellagitannins (epi)acutissimin A/B were present in significantly lower amounts. In contrast to the high threshold concentration levels (600-1000 micromol/L) and the puckering astringent orosensation induced by flavan-3-ols, all of the ellagitannin derivatives were found to induce a smooth and velvety astringent oral sensation at rather low threshold concentrations ranging from 0.9 to 2.8 micromol/L. Dose/activity considerations demonstrated that, among all the ellagitannins investigated, castalagin exclusively exceeded its threshold concentration in various oak-matured wine samples.
Communication: Classical threshold law for ion-neutral-neutral three-body recombination
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pérez-Ríos, Jesús; Greene, Chris H.
2015-07-28
A recently developed method for classical trajectory calculations of three-body collisions [Pérez-Ríos et al., J. Chem. Phys. 140, 044307 (2014)] has been applied to describe ion-neutral-neutral ternary processes at low collision energies (0.1 mK-10 mK). As a result, a threshold law for the three-body recombination cross section is obtained and corroborated numerically. The derived threshold law predicts the formation of weakly bound dimers, with binding energies comparable to the collision energy of the collisional partners. In this low energy range, the analysis predicts that molecular ions should dominate over neutral molecules as the most abundant products formed.
Testing for thresholds of ecosystem collapse in seagrass meadows.
Connell, Sean D; Fernandes, Milena; Burnell, Owen W; Doubleday, Zoë A; Griffin, Kingsley J; Irving, Andrew D; Leung, Jonathan Y S; Owen, Samuel; Russell, Bayden D; Falkenberg, Laura J
2017-10-01
Although the public desire for healthy environments is clear-cut, the science and management of ecosystem health has not been as simple. Ecological systems can be dynamic and can shift abruptly from one ecosystem state to another. Such unpredictable shifts result when ecological thresholds are crossed; that is, small cumulative increases in an environmental stressor drive a much greater change than could be predicted from linear effects, suggesting an unforeseen tipping point is crossed. In coastal waters, broad-scale seagrass loss often occurs as a sudden event associated with human-driven nutrient enrichment (eutrophication). We tested whether the response of seagrass ecosystems to coastal nutrient enrichment is subject to a threshold effect. We exposed seagrass plots to different levels of nutrient enrichment (dissolved inorganic nitrogen) for 10 months and measured net production. Seagrass response exhibited a threshold pattern when nutrient enrichment exceeded moderate levels: there was an abrupt and large shift from positive to negative net leaf production (from approximately 0.04 leaf production to 0.02 leaf loss per day). Epiphyte load also increased as nutrient enrichment increased, which may have driven the shift in leaf production. Inadvertently crossing such thresholds, as can occur through ineffective management of land-derived inputs such as wastewater and stormwater runoff along urbanized coasts, may account for the widely observed sudden loss of seagrass meadows. Identification of tipping points may improve not only adaptive-management monitoring that seeks to avoid threshold effects, but also restoration approaches in systems that have crossed them. © 2017 Society for Conservation Biology.
Soil Production and Erosion Rates and Processes in Mountainous Landscapes
NASA Astrophysics Data System (ADS)
Heimsath, A. M.; DiBiase, R. A.; Whipple, K. X.
2012-12-01
We focus here on high-relief, steeply sloped landscapes from the Nepal Himalaya to the San Gabriels of California that are typically thought to be at a critical threshold of soil cover. Observations reveal that, instead, there are significant areas mantled with soil that fit the conceptual framework of a physically mobile layer derived from the underlying parent material with some locally-derived organic content. The extent and persistence of such soils depend on the long-term balance between soil production and erosion despite the perceived discrepancy between high erosion and low soil production rates. We present cosmogenic Be-10-derived soil production and erosion rates that show that soil production increases with catchment-averaged erosion, suggesting a feedback that enhances soil-cover persistence, even in threshold landscapes. Soil production rates do decline systematically with increasing soil thickness, but hint at the potential for separate soil production functions for different erosional regimes. We also show that a process transition to landslide-dominated erosion results in thinner, patchier soils and rockier topography, but find that there is no sudden transition to bedrock landscapes. Our landslide modeling is combined with a detailed quantification of bedrock exposure for these steep, mountainous landscapes. We also draw an important conclusion connecting the physical processes producing and transporting soil and the chemical processes weathering the parent material by measuring parent material strength across three different field settings. We observe that parent material strength increases with overlying soil thickness and, therefore, the weathered extent of the saprolite. Soil production rates, thus, decrease with increasing parent material competence. These observations highlight the importance of quantifying hillslope hydrologic processes where such multi-faceted measurements are made.
Batt, Ryan D.; Carpenter, Stephen R.; Cole, Jonathan J.; Pace, Michael L.; Johnson, Robert A.
2013-01-01
Environmental sensor networks are developing rapidly to assess changes in ecosystems and their services. Some ecosystem changes involve thresholds, and theory suggests that statistical indicators of changing resilience can be detected near thresholds. We examined the capacity of environmental sensors to assess resilience during an experimentally induced transition in a whole-lake manipulation. A trophic cascade was induced in a planktivore-dominated lake by slowly adding piscivorous bass, whereas a nearby bass-dominated lake remained unmanipulated and served as a reference ecosystem during the 4-y experiment. In both the manipulated and reference lakes, automated sensors were used to measure variables related to ecosystem metabolism (dissolved oxygen, pH, and chlorophyll-a concentration) and to estimate gross primary production, respiration, and net ecosystem production. Thresholds were detected in some automated measurements more than a year before the completion of the transition to piscivore dominance. Directly measured variables (dissolved oxygen, pH, and chlorophyll-a concentration) related to ecosystem metabolism were better indicators of the approaching threshold than were the estimates of rates (gross primary production, respiration, and net ecosystem production); this difference was likely a result of the larger uncertainties in the derived rate estimates. Thus, relatively simple characteristics of ecosystems that were observed directly by the sensors were superior indicators of changing resilience. Models linked to thresholds in variables that are directly observed by sensor networks may provide unique opportunities for evaluating resilience in complex ecosystems. PMID:24101479
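The abstract does not list the specific resilience indicators used; a common choice in this literature, sketched below under that assumption, is rolling-window variance and lag-1 autocorrelation of a directly measured sensor variable, both of which theory predicts should rise as a critical transition is approached.

```python
# Generic early-warning sketch (indicator choice is an assumption, not necessarily the
# study's method): rolling-window variance and lag-1 autocorrelation of a sensor series.
import numpy as np

def rolling_indicators(x, window):
    """Return rolling variance and lag-1 autocorrelation over a sliding window."""
    var, ar1 = [], []
    for i in range(window, len(x) + 1):
        w = x[i - window:i]
        var.append(np.var(w))
        ar1.append(np.corrcoef(w[:-1], w[1:])[0, 1])
    return np.array(var), np.array(ar1)

rng = np.random.default_rng(1)
# Hypothetical daily dissolved-oxygen anomaly series whose recovery rate slowly weakens.
n = 600
x = np.zeros(n)
for t in range(1, n):
    phi = 0.3 + 0.6 * t / n          # weakening recovery -> rising autocorrelation
    x[t] = phi * x[t - 1] + rng.standard_normal()

variance, autocorr = rolling_indicators(x, window=90)
print(f"AR(1) early vs late: {autocorr[0]:.2f} -> {autocorr[-1]:.2f}")
```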
Antonini, Samantha; Arias, Maria Alejandra; Eichert, Thomas; Clemens, Joachim
2012-11-01
A selection of six urine-derived struvite fertilizers generated by innovative precipitation technologies was assessed for their quality and their effectiveness as phosphorus sources for crops. Struvite purity was influenced by drying techniques and magnesium dosage. In a greenhouse experiment, the urine fertilizers led to biomass yields and phosphorus uptakes comparable to or higher than those induced by a commercial mineral fertilizer. Heavy metal concentrations of the different struvite fertilizers were below the threshold limits specified by the German Fertilizer and Sewage Sludge Regulations. The computed loading rates of heavy metals to agricultural land were also below the threshold limits decreed by the Federal Soil Protection Act. Urine-derived struvite contributed less to heavy metal inputs to farmland than other recycling products or commercial mineral and organic fertilizers. When combined with other soil conditioners, urine-derived struvite is an efficient fertilizer which covers the magnesium and more than half of the phosphorus demand of crops. Copyright © 2012 Elsevier Ltd. All rights reserved.
Development of an epiphyte indicator of nutrient enrichment ...
Metrics of epiphyte load on macrophytes were evaluated for use as quantitative biological indicators for nutrient impacts in estuarine waters, based on review and analysis of the literature on epiphytes and macrophytes, primarily seagrasses, but including some brackish and freshwater rooted macrophyte species. An approach is presented that empirically derives threshold epiphyte loads which are likely to cause specified levels of decrease in macrophyte response metrics such as biomass, shoot density, percent cover, production and growth. Data from 36 studies of 10 macrophyte species were pooled to derive relationships between epiphyte load and 25% and 50% seagrass response levels, which are proposed as the primary basis for establishment of critical threshold values. Given multiple sources of variability in the response data, threshold ranges based on the range of values falling between the median and the 75th quantiles of observations at a given seagrass response level are proposed rather than single, critical point values. Four epiphyte load threshold categories - low, moderate, high, and very high - are proposed. Comparison of values of epiphyte loads associated with 25 and 50% reductions in light to macrophytes suggests that the threshold ranges are realistic both in terms of the principal mechanism of impact to macrophytes and in terms of the magnitude of resultant impacts expressed by the macrophytes. Some variability in response levels was observed among
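A minimal sketch of the proposed threshold-range rule, using synthetic pooled data (the loads and their units are hypothetical): the range runs from the median to the 75th percentile of epiphyte loads observed at a given seagrass response level.

```python
# Sketch of the median-to-75th-percentile threshold-range rule described above,
# applied to hypothetical pooled observations at a 25% seagrass response level.
import numpy as np

rng = np.random.default_rng(9)
loads_at_25pct = rng.lognormal(mean=0.0, sigma=0.5, size=60)  # g epiphyte / g macrophyte

lo, hi = np.percentile(loads_at_25pct, [50, 75])
print(f"proposed threshold range at the 25% response level: {lo:.2f}-{hi:.2f} g/g")
```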
Faulkner, Hope; Clarke, Holly J.; O’Sullivan, Maurice G.; Kerry, Joseph P.
2018-01-01
There has been a surge of interest in differentiating dairy products derived from pasture versus confined systems. The impact of different forage types on the sensory properties of milk and cheese is complex because of the wide range of on-farm and production factors potentially involved. The main effect of a pasture diet on the sensory properties of bovine milk and cheese is increased yellow intensity, correlated with β-carotene content, which is a possible biomarker for pasture-derived dairy products. Pasture grazing also influences fat and fatty acid content, which has been implicated in texture perception changes in milk and cheese, and increases omega-3 fatty acids. Changes in polyunsaturated fatty acids in milk and cheese due to pasture diets have been suggested to increase susceptibility to lipid oxidation, but this does not appear to be an issue because of increased antioxidants and the reducing environment of cheese. Pasture-derived milk and cheese appear easier for trained panellists and consumers to discern than milk derived from conserved or concentrate diets. However, milk pasteurization, inclusion of concentrate in pasture diets, and cheese ripening time have all been linked to reduced pasture dietary effects on sensory perception. Sensory evaluation studies of milk and cheese have, in general, found that untrained assessors, who best represent consumers, appear less able to discriminate sensory differences than trained assessors, and that differences in visual and textural attributes are more likely to be perceived than flavour attributes. This suggests that sensory differences due to diet are often subtle. Evidence supports the direct transfer of some volatiles via inhalation or ingestion, but more so the indirect transfer of dietary components after rumen metabolism. The impact of dietary volatiles on the sensory perception of milk and dairy products obviously depends upon their concentration and odour activity; however, very few quantitative studies have been carried out to date. Some studies have highlighted a potential correlation of pasture with enhanced "barny" or "cowy" sensory attributes and subsequently linked these to accumulation of p-cresol from the metabolism of β-carotene and aromatic amino acids, or possibly isoflavones, in the rumen. p-Cresol has also been suggested as a potential biomarker for pasture-derived dairy products. Other studies have linked terpenes to specific sensory properties in milk and cheese, but this only appears to be relevant in milk and cheese derived from unseeded wild pasture where high concentrations accumulate, as their odour thresholds are quite high. Toluene, also a product of β-carotene metabolism, has been identified as a potential biomarker for pasture-derived dairy products, but it has little impact on sensory perception due to its high odour threshold. Dimethyl sulfone has been linked to pasture diets and could influence sensory perception, as its odour threshold is low. Other studies have linked the presence of maize and legumes (clover) in silage with adverse sensory impacts in milk and cheese. Considerably more research is required to define key diet-related impacts on the flavour of milk and cheese. PMID:29534042
NASA Astrophysics Data System (ADS)
Bai, Heming; Gong, Cheng; Wang, Minghuai; Zhang, Zhibo; L'Ecuyer, Tristan
2018-02-01
Precipitation susceptibility to aerosol perturbation plays a key role in understanding aerosol-cloud interactions and constraining aerosol indirect effects. However, large discrepancies exist in the previous satellite estimates of precipitation susceptibility. In this paper, multi-sensor aerosol and cloud products, including those from the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO), CloudSat, Moderate Resolution Imaging Spectroradiometer (MODIS), and Advanced Microwave Scanning Radiometer for the Earth Observing System (AMSR-E) from June 2006 to April 2011 are analyzed to estimate precipitation frequency susceptibility SPOP, precipitation intensity susceptibility SI, and precipitation rate susceptibility SR in warm marine clouds. We find that SPOP strongly depends on atmospheric stability, with larger values under more stable environments. Our results show that precipitation susceptibility for drizzle (with a -15 dBZ rainfall threshold) is significantly different than that for rain (with a 0 dBZ rainfall threshold). Onset of drizzle is not as readily suppressed in warm clouds as rainfall while precipitation intensity susceptibility is generally smaller for rain than for drizzle. We find that SPOP derived with respect to aerosol index (AI) is about one-third of SPOP derived with respect to cloud droplet number concentration (CDNC). Overall, SPOP demonstrates relatively robust features throughout independent liquid water path (LWP) products and diverse rain products. In contrast, the behaviors of SI and SR are subject to LWP or rain products used to derive them. Recommendations are further made for how to better use these metrics to quantify aerosol-cloud-precipitation interactions in observations and models.
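A minimal sketch assuming the conventional definition of precipitation susceptibility, S = -d ln X / d ln N (here with respect to cloud droplet number concentration), estimated as the slope of a log-log least-squares fit; the paper's exact estimator, binning by liquid water path, and use of aerosol index may differ.

```python
# Hedged sketch: estimate S_POP as minus the slope of ln(POP) vs ln(CDNC)
# using synthetic data with a known susceptibility.
import numpy as np

rng = np.random.default_rng(2)
cdnc = rng.uniform(20, 300, 2000)                 # cloud droplet number conc., cm^-3
true_s = 0.6
pop = 0.5 * (cdnc / 100.0) ** (-true_s) * np.exp(0.1 * rng.standard_normal(2000))

slope, _ = np.polyfit(np.log(cdnc), np.log(pop), 1)
s_pop = -slope
print(f"estimated S_POP = {s_pop:.2f} (true {true_s})")
```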
NASA Astrophysics Data System (ADS)
Svobodová, Eva; Trnka, Miroslav; Kopp, Radovan; Mareš, Jan; Dubrovský, Martin; Spurný, Petr; Žalud, Zděněk
2015-04-01
Freshwater fish production is significantly correlated with water temperature, which is expected to increase under climate change. This study estimates the change of water temperature in productive ponds and its impact on fisheries in the Czech Republic. A calculation of surface-water temperature based on the three-day mean of air temperature was developed and tested in several ponds in the three main fish production areas. Model output was compared with measured data and showed that the lower limit of model accuracy is a surface-water temperature of 3°C; below this threshold the model loses its predictive power. For surface-water temperatures above 3°C, the model showed good agreement between observed and modelled values (R = 0.79-0.96). The verified model was applied under climate change conditions determined by the pattern-scaling method, in which standardised scenarios were derived from five global circulation models: MPEH5, CSMK3, IPCM4, GFCM21 and HADGEM. Results were evaluated with regard to thresholds characterising the water temperature requirements of the fish species. The thresholds used were the upper temperature threshold for fish survival and the tolerable number of days in a continual period at or above that threshold surface-water temperature. Target fish species were common carp (Cyprinus carpio), maraena whitefish (Coregonus maraena), northern whitefish (Coregonus peled) and rainbow trout (Oncorhynchus mykiss). Results indicated limitations for Czech fish farming in terms of i) the increasing length of continual periods with surface-water temperature above the threshold tolerated by a given fish species, ii) the increasing number of such continual periods, and iii) the increasing overall number of days within continual periods with temperature above the threshold tolerated by a given fish species. ACKNOWLEDGEMENTS: This study was funded by project "Building up a multidisciplinary scientific team focused on drought" No. CZ.1.07/2.3.00/20.0248.
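A hedged sketch of the modelling chain described above, with hypothetical calibration coefficients (the published regression is not reproduced here): surface-water temperature is estimated from the three-day mean of air temperature, the 3°C validity limit is applied, and the longest continual period above a species threshold is counted.

```python
# Illustrative sketch only: 3-day mean air temperature -> modelled water temperature
# -> longest continual run of days above a species tolerance threshold.
import numpy as np

rng = np.random.default_rng(3)
days = np.arange(365)
air_t = 10 + 12 * np.sin(2 * np.pi * (days - 100) / 365) + 2 * rng.standard_normal(365)

air_3d = np.convolve(air_t, np.ones(3) / 3, mode="same")   # 3-day running mean

a, b = 0.9, 2.0                       # hypothetical calibration coefficients
water_t = a * air_3d + b              # modelled surface-water temperature (deg C)
valid = water_t >= 3.0                # model loses predictive power below 3 deg C

threshold = 20.0                      # hypothetical upper tolerance threshold (deg C)
above = valid & (water_t > threshold)

# longest continual period above the threshold
longest, current = 0, 0
for flag in above:
    current = current + 1 if flag else 0
    longest = max(longest, current)
print(f"longest continual period above {threshold} C: {longest} days")
```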
Mechanism of and Threshold Biomechanical Conditions for Falsetto Voice Onset
Deguchi, Shinji
2011-01-01
The sound source of a voice is produced by the self-excited oscillation of the vocal folds. In modal voice production, a drastic increase in transglottal pressure after vocal fold closure works as a driving force that develops self-excitation. Another type of vocal fold oscillation with less pronounced glottal closure observed in falsetto voice production has been accounted for by the mucosal wave theory. The classical theory assumes a quasi-steady flow, and the expected driving force onto the vocal folds under wavelike motion is derived from the Bernoulli effect. However, wavelike motion is not always observed during falsetto voice production. More importantly, the application of the quasi-steady assumption to a falsetto voice with a fundamental frequency of several hundred hertz is unsupported by experiments. These considerations suggested that the mechanism of falsetto voice onset may be essentially different from that explained by the mucosal wave theory. In this paper, an alternative mechanism is submitted that explains how self-excitation reminiscent of the falsetto voice could be produced independent of the glottal closure and wavelike motion. This new explanation is derived through analytical procedures by employing only general unsteady equations of motion for flow and solids. The analysis demonstrated that a convective acceleration of a flow induced by rapid wall movement functions as a negative damping force, leading to the self-excitation of the vocal folds. The critical subglottal pressure and volume flow are expressed as functions of vocal fold biomechanical properties, geometry, and voice fundamental frequency. The analytically derived conditions are qualitatively and quantitatively reasonable in view of reported measurement data of the thresholds required for falsetto voice onset. Understanding of the voice onset mechanism and the explicit mathematical descriptions of thresholds would be beneficial for the diagnosis and treatment of voice diseases and the development of artificial vocal folds. PMID:21408178
Anaerobic Threshold and Salivary α-amylase during Incremental Exercise.
Akizuki, Kazunori; Yazaki, Syouichirou; Echizenya, Yuki; Ohashi, Yukari
2014-07-01
[Purpose] The purpose of this study was to clarify the validity of salivary α-amylase as a method of quickly estimating anaerobic threshold and to establish the relationship between salivary α-amylase and double-product breakpoint in order to create a way to adjust exercise intensity to a safe and effective range. [Subjects and Methods] Eleven healthy young adults performed an incremental exercise test using a cycle ergometer. During the incremental exercise test, oxygen consumption, carbon dioxide production, and ventilatory equivalent were measured using a breath-by-breath gas analyzer. Systolic blood pressure and heart rate were measured to calculate the double product, from which double-product breakpoint was determined. Salivary α-amylase was measured to calculate the salivary threshold. [Results] One-way ANOVA revealed no significant differences among workloads at the anaerobic threshold, double-product breakpoint, and salivary threshold. Significant correlations were found between anaerobic threshold and salivary threshold and between anaerobic threshold and double-product breakpoint. [Conclusion] As a method for estimating anaerobic threshold, salivary threshold was as good as or better than determination of double-product breakpoint because the correlation between anaerobic threshold and salivary threshold was higher than the correlation between anaerobic threshold and double-product breakpoint. Therefore, salivary threshold is a useful index of anaerobic threshold during an incremental workload.
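The double product is heart rate multiplied by systolic blood pressure; a simple way to locate its breakpoint, sketched below with synthetic incremental-test data, is to scan candidate break indices and keep the two-segment linear fit with the smallest residual error. The study's exact determination method may differ.

```python
# Hedged sketch: two-segment least-squares scan to locate the double-product breakpoint.
import numpy as np

def double_product_breakpoint(workload, hr, sbp):
    dp = hr * sbp
    best_idx, best_sse = None, np.inf
    for k in range(3, len(workload) - 3):              # require >= 3 points per segment
        sse = 0.0
        for seg in (slice(0, k), slice(k, None)):
            coef = np.polyfit(workload[seg], dp[seg], 1)
            sse += np.sum((dp[seg] - np.polyval(coef, workload[seg])) ** 2)
        if sse < best_sse:
            best_idx, best_sse = k, sse
    return workload[best_idx]

# Hypothetical incremental cycle-ergometer data (workload in watts)
workload = np.arange(20, 220, 10, dtype=float)
hr = 80 + 0.45 * workload
sbp = 120 + 0.15 * workload + 0.45 * np.maximum(workload - 140, 0)  # steeper past ~140 W

w_break = double_product_breakpoint(workload, hr, sbp)
print(f"estimated double-product breakpoint near {w_break:.0f} W")
```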
Population Dynamics of Belonolaimus longicaudatus in a Cotton Production System
Crow, W. T.; Weingartner, D. P.; McSorley, R.; Dickson, D. W.
2000-01-01
Belonolaimus longicaudatus is a recognized pathogen of cotton (Gossypium hirsutum), but insufficient information is available on the population dynamics and economic thresholds of B. longicaudatus in cotton production. In this study, data collected from a field in Florida were used to develop models predicting population increases of B. longicaudatus on cotton and population declines under clean fallow. Population densities of B. longicaudatus increased on cotton, reaching a carrying capacity of 139 nematodes/130 cm³ of soil, but decreased exponentially during periods of bare fallow. The model indicated that population densities should decrease each year of monocropped cotton, if an alternate host is not present between sequential cotton crops. Economic thresholds derived from published damage functions and current prices for cotton and nematicides varied from 2 to 5 B. longicaudatus/130 cm³ of soil, depending on the nematicide used. PMID:19270968
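An illustrative sketch of the two reported behaviours (only the carrying capacity is taken from the abstract; the rate constants are hypothetical): logistic increase on cotton toward 139 nematodes/130 cm³ of soil, and exponential decline under clean fallow.

```python
# Illustrative population-dynamics sketch; growth and decline rates are hypothetical.
import numpy as np

K = 139.0        # carrying capacity from the abstract (nematodes / 130 cm^3 soil)
r_grow = 0.05    # hypothetical intrinsic growth rate on cotton (per day)
r_decay = 0.02   # hypothetical decline rate under clean fallow (per day)

def simulate(p0, crop_days=150, fallow_days=215):
    p = p0
    for _ in range(crop_days):                  # logistic growth during the cotton crop
        p += r_grow * p * (1 - p / K)
    p_end_crop = p
    p_end_year = p * np.exp(-r_decay * fallow_days)   # exponential decline during fallow
    return p_end_crop, p_end_year

end_crop, end_year = simulate(p0=10.0)
print(f"after crop: {end_crop:.0f}, after fallow: {end_year:.0f} nematodes/130 cm^3")
```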
Keeping it simple: Monitoring flood extent in large data-poor wetlands using MODIS SWIR data
NASA Astrophysics Data System (ADS)
Wolski, Piotr; Murray-Hudson, Mike; Thito, Kgalalelo; Cassidy, Lin
2017-05-01
Characterising inundation conditions for flood-pulsed wetlands is a critical first step towards assessment of flood risk as well as towards understanding the hydrological dynamics that underlie their ecology and functioning. In this paper, we develop a series of inundation maps for the Okavango Delta, Botswana, based on thresholding of the SWIR band (b7) of the MODIS MCD43A4 product. We show that in the Okavango Delta, SWIR is superior to other spectral bands or derived indices, and illustrate an innovative way of defining the spectral threshold used to separate inundated areas from dry land. The threshold is determined dynamically for each scene based on reflectances of training areas capturing end-members of the inundation spectrum. The method provides very good accuracy and is suitable for automated processing.
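A minimal sketch of per-scene dynamic thresholding (the training areas and the midpoint rule are assumptions, not necessarily the published procedure): the threshold is placed between the mean SWIR reflectances of permanently wet and permanently dry training areas and then applied to the whole scene.

```python
# Hedged sketch: scene-specific SWIR threshold from wet/dry end-member training areas.
import numpy as np

rng = np.random.default_rng(4)
# Hypothetical b7 reflectance scene with a flooded block in the middle.
scene = np.clip(0.25 + 0.08 * rng.standard_normal((200, 200)), 0, 1)
scene[60:140, 60:140] = np.clip(0.05 + 0.02 * rng.standard_normal((80, 80)), 0, 1)

wet_training = scene[90:110, 90:110]   # end-member: permanently inundated area
dry_training = scene[0:20, 0:20]       # end-member: permanently dry area

threshold = 0.5 * (wet_training.mean() + dry_training.mean())  # per-scene threshold
inundated = scene < threshold          # water absorbs strongly in the SWIR

print(f"threshold = {threshold:.3f}, inundated fraction = {inundated.mean():.2f}")
```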
Roberts, David W; Api, Anne Marie; Safford, Robert J; Lalko, Jon F
2015-08-01
An essential step in ensuring the toxicological safety of chemicals used in consumer products is the evaluation of their skin sensitising potential. The sensitising potency, coupled with information on exposure levels, can be used in a Quantitative Risk Assessment (QRA) to determine an acceptable level of a given chemical in a given product. Where consumer skin exposure is low, a risk assessment can be conducted using the Dermal Sensitisation Threshold (DST) approach, avoiding the need to determine potency experimentally. Since skin sensitisation involves chemical reaction with skin proteins, the first step in the DST approach is to assess, on the basis of the chemical structure, whether the chemical is expected to be reactive or not. Our accompanying publication describes the probabilistic derivation of a DST of 64 μg/cm² for chemicals assessed as reactive. This would protect against 95% of chemicals assessed as reactive, but the remaining 5% would include chemicals with very high potency. Here we discuss the chemical properties and structural features of high potency sensitisers, and derive an approach whereby they can be identified and consequently excluded from application of the DST. Copyright © 2015 Elsevier Inc. All rights reserved.
Laabs, V; Leake, C; Botham, P; Melching-Kollmuß, S
2015-10-01
Non-relevant metabolites are defined in the EU regulation for plant protection product authorization and a detailed definition of non-relevant metabolites is given in an EU Commission DG Sanco (now DG SANTE - Health and Food Safety) guidance document. However, in water legislation at EU and member state level non-relevant metabolites of pesticides are either not specifically regulated or diverse threshold values are applied. Based on their inherent properties, non-relevant metabolites should be regulated based on substance-specific and toxicity-based limit values in drinking and groundwater like other anthropogenic chemicals. Yet, if a general limit value for non-relevant metabolites in drinking and groundwater is favored, an application of a Threshold of Toxicological Concern (TTC) concept for Cramer class III compounds leads to a threshold value of 4.5 μg/L. This general value is exemplarily shown to be protective for non-relevant metabolites, based on individual drinking water limit values derived for a set of 56 non-relevant metabolites. A consistent definition of non-relevant metabolites of plant protection products, as well as their uniform regulation in drinking and groundwater in the EU, is important to achieve legal clarity for all stakeholders and to establish planning security for development of plant protection products for the European market. Copyright © 2015 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tan Lin; Liu Fuyi; Armentrout, P.B.
The kinetic energy dependences of the reactions of Feₙ⁺ (n = 1-19) with N₂ are studied in a guided ion beam tandem mass spectrometer over the energy range of 0-15 eV. In addition to collision-induced dissociation forming Feₘ⁺ ions, which dominate the product spectra, a variety of FeₘN₂⁺ and FeₘN⁺ product ions, where m ≤ n, is observed. All processes are observed to exhibit thresholds. Feₘ⁺-N and Feₘ⁺-2N bond energies as a function of cluster size are derived from the threshold analysis of the kinetic energy dependences of the endothermic reactions. The trends in this thermochemistry are compared to the isoelectronic D₀(Feₙ⁺-CH) values, and to bulk-phase values. A fairly uniform barrier of 0.48 ± 0.03 eV at 0 K is observed for formation of the FeₙN₂⁺ product ions (n = 12, 15-19) and can be related to the rate-limiting step in the Haber process for catalytic ammonia production.
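Threshold analyses of this kind are often based on an empirical threshold law of the form sigma(E) = sigma0 (E - E0)^n / E; the sketch below fits such a model to synthetic data by a simple grid search. This is only a schematic stand-in: the published analysis includes corrections (internal energies, kinetic-energy broadening, dissociation lifetimes) that are omitted here.

```python
# Hedged sketch: extract a threshold energy E0 by fitting an empirical threshold law
# sigma(E) = sigma0 * (E - E0)^n / E to synthetic cross-section data.
import numpy as np

def model(E, sigma0, E0, n):
    """Empirical threshold law: zero below E0, sigma0*(E - E0)^n / E above it."""
    out = np.zeros_like(E)
    above = E > E0
    out[above] = sigma0 * (E[above] - E0) ** n / E[above]
    return out

rng = np.random.default_rng(5)
E = np.linspace(0.5, 15.0, 60)                                         # collision energy (eV)
sigma = model(E, 2.0, 3.5, 1.5) + 0.05 * rng.standard_normal(E.size)   # synthetic data

# Grid search over E0 (exponent n fixed for simplicity); the amplitude sigma0
# follows from linear least squares at each candidate threshold.
n_fix = 1.5
best_e0, best_sse = None, np.inf
for e0 in np.linspace(1.0, 6.0, 501):
    basis = model(E, 1.0, e0, n_fix)
    s0 = np.dot(sigma, basis) / max(np.dot(basis, basis), 1e-12)
    sse = np.sum((sigma - s0 * basis) ** 2)
    if sse < best_sse:
        best_e0, best_sse = e0, sse
print(f"fitted threshold E0 = {best_e0:.2f} eV (synthetic true value 3.5 eV)")
```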
Calculating the dim light melatonin onset: the impact of threshold and sampling rate.
Molina, Thomas A; Burgess, Helen J
2011-10-01
The dim light melatonin onset (DLMO) is the most reliable circadian phase marker in humans, but the cost of assaying samples is relatively high. Therefore, the authors examined differences between DLMOs calculated from hourly versus half-hourly sampling and differences between DLMOs calculated with two recommended thresholds (a fixed threshold of 3 pg/mL and a variable "3k" threshold equal to the mean plus two standard deviations of the first three low daytime points). The authors calculated these DLMOs from salivary dim light melatonin profiles collected from 122 individuals (64 women) at baseline. DLMOs derived from hourly sampling occurred on average only 6-8 min earlier than the DLMOs derived from half-hourly saliva sampling, and they were highly correlated with each other (r ≥ 0.89, p < .001). However, in up to 19% of cases the DLMO derived from hourly sampling was >30 min from the DLMO derived from half-hourly sampling. The 3 pg/mL threshold produced significantly less variable DLMOs than the 3k threshold. However, the 3k threshold was significantly lower than the 3 pg/mL threshold (p < .001). The DLMOs calculated with the 3k method were significantly earlier (by 22-24 min) than the DLMOs calculated with the 3 pg/mL threshold, regardless of sampling rate. These results suggest that in large research studies and clinical settings, the more affordable and practical option of hourly sampling is adequate for a reasonable estimate of circadian phase. Although the 3 pg/mL fixed threshold is less variable than the 3k threshold, it produces estimates of the DLMO that are further from the initial rise of melatonin.
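A minimal sketch of the two DLMO rules compared above: the fixed 3 pg/mL threshold and the variable "3k" threshold (mean plus two standard deviations of the first three low daytime samples), with the onset taken as the linearly interpolated threshold crossing. The melatonin profile below is synthetic, and the use of the sample standard deviation is an assumption.

```python
# Sketch of DLMO computation with a fixed and a "3k" threshold (synthetic profile).
import numpy as np

def dlmo(times_h, melatonin_pg_ml, threshold):
    """Return the interpolated clock time at which melatonin first crosses the threshold."""
    for i in range(1, len(times_h)):
        lo, hi = melatonin_pg_ml[i - 1], melatonin_pg_ml[i]
        if lo < threshold <= hi:
            frac = (threshold - lo) / (hi - lo)
            return times_h[i - 1] + frac * (times_h[i] - times_h[i - 1])
    return None

times = np.arange(17.0, 24.5, 0.5)   # half-hourly samples, 17:00-24:00
mel = np.array([1.1, 0.9, 1.2, 1.0, 1.3, 1.8, 2.6, 3.9, 6.0, 9.0,
                12.5, 16.0, 19.0, 21.0, 22.5])

fixed = dlmo(times, mel, 3.0)                           # fixed 3 pg/mL threshold
thr_3k = mel[:3].mean() + 2 * mel[:3].std(ddof=1)       # "3k" threshold
print(f"DLMO (3 pg/mL): {fixed:.2f} h, 3k threshold = {thr_3k:.2f} pg/mL, "
      f"DLMO (3k): {dlmo(times, mel, thr_3k):.2f} h")
```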
New method to evaluate the 7Li(p, n)7Be reaction near threshold
NASA Astrophysics Data System (ADS)
Herrera, María S.; Moreno, Gustavo A.; Kreiner, Andrés J.
2015-04-01
In this work a complete description of the 7Li(p, n)7Be reaction near threshold is given using center-of-mass and relative coordinates. It is shown that this standard approach, not used before in this context, leads to a simple mathematical representation which gives easy access to all relevant quantities in the reaction and allows a precise numerical implementation. It also allows proton beam-energy spread effects to be included in a simple way. The method, implemented as a C++ code, was validated with both numerical and experimental data, finding good agreement. This tool is also used here to analyze scattered published measurements such as (p, n) cross sections and differential and total neutron yields for thick targets. Using these data we derive a consistent set of parameters to evaluate neutron production near threshold. Sensitivity of the results to data uncertainty and the possibility of incorporating new measurements are also discussed.
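As a hedged illustration (not the authors' C++ code), the laboratory threshold energy of the endothermic 7Li(p, n)7Be reaction follows from energy-momentum conservation; in the non-relativistic approximation E_th = -Q (m_p + m_Li) / m_Li, which reproduces the commonly quoted value near 1.880 MeV. The masses below are approximate, and a fully relativistic treatment shifts the result slightly.

```python
# Non-relativistic lab-frame threshold energy for 7Li(p, n)7Be.
Q = -1.6442                     # Q-value in MeV
m_p, m_Li = 1.00728, 7.01600    # approximate projectile and target masses (u)

E_th = -Q * (m_p + m_Li) / m_Li
print(f"7Li(p,n)7Be threshold ~ {E_th:.3f} MeV")   # ~1.880 MeV
```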
On thermonuclear ignition criterion at the National Ignition Facility
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheng, Baolian; Kwan, Thomas J. T.; Wang, Yi-Ming
2014-10-15
Sustained thermonuclear fusion at the National Ignition Facility remains elusive. Although recent experiments approached or exceeded the anticipated ignition thresholds, the nuclear performance of the laser-driven capsules was well below predictions in terms of energy and neutron production. Such discrepancies between expectations and reality motivate a reassessment of the physics of ignition. We have developed a predictive analytical model from fundamental physics principles. Based on the model, we obtained a general thermonuclear ignition criterion in terms of the areal density and temperature of the hot fuel. This newly derived ignition threshold and its alternative forms explicitly show the minimum requirements of the hot fuel pressure, mass, areal density, and burn fraction for achieving ignition. Comparison of our criterion with existing theories, simulations, and the experimental data shows that our ignition threshold is more stringent than those in the existing literature and that our results are consistent with the experiments.
Inclusive heavy flavor hadroproduction in NLO QCD: The exact analytic result
NASA Astrophysics Data System (ADS)
Czakon, M.; Mitov, A.
2010-01-01
We present the first exact analytic result for all partonic channels contributing to the total cross section for the production of a pair of heavy flavors in hadronic collisions in NLO QCD. Our calculation is a step in the derivation of the top quark pair production cross section at NNLO in QCD, which is a cornerstone of the precision LHC program. Our results uncover the analytical structures behind observables with heavy flavors at higher orders. They also reveal surprising and non-trivial implications for kinematics close to partonic threshold.
Activation cross sections of α-induced reactions on natZn for Ge and Ga production
NASA Astrophysics Data System (ADS)
Aikawa, M.; Saito, M.; Ebata, S.; Komori, Y.; Haba, H.
2018-07-01
The production cross sections of 68,69Ge and 66,67Ga by α-induced reactions on natZn have been measured using the stacked-foil activation method and off-line γ-ray spectrometry from their threshold energies up to 50.7 MeV. The derived cross sections were compared with previous experimental data and with the calculated values in the TENDL-2017 library. Our results show a slightly larger peak amplitude than the previous data, though the peak energy is consistent with them.
Ji, Qing; Li, Fei; Pang, Xiaoping; Luo, Cong
2018-04-05
The threshold of sea ice concentration (SIC) is the basis for accurately calculating sea ice extent based on passive microwave (PM) remote sensing data. However, the PM SIC threshold at the sea ice edge used in previous studies and released sea ice products has not always been consistent. To explore the representative value of the PM SIC threshold corresponding on average to the position of the Arctic sea ice edge during summer in recent years, we extracted sea ice edge boundaries from the Moderate-resolution Imaging Spectroradiometer (MODIS) sea ice product (MOD29, with a spatial resolution of 1 km), MODIS images (250 m), and sea ice ship-based observation points (1 km) during the fifth (CHINARE-2012) and sixth (CHINARE-2014) Chinese National Arctic Research Expeditions, and performed an overlay and comparison analysis with PM SIC derived from the Special Sensor Microwave Imager Sounder (SSMIS, with a spatial resolution of 25 km) in the summers of 2012 and 2014. Results showed that the average SSMIS SIC threshold at the Arctic sea ice edge based on ice-water boundary lines extracted from MOD29 was 33%, which was higher than the commonly used 15% discriminant threshold. The average SIC threshold at the sea ice edge based on ice-water boundary lines extracted by visual interpretation from four scenes of MODIS imagery was 35%, compared to an average value of 36% from the MOD29-extracted ice edge pixels for the same days. The average SIC of 31% at the sea ice edge points extracted from ship-based observations also confirmed that choosing around 30% as the SIC threshold during summer is recommended for sea ice extent calculations based on SSMIS PM data. These results can provide a reference for further studying the variation of sea ice in the rapidly changing Arctic.
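A minimal sketch of the overlay step (grids and edge coordinates are synthetic): the coarse PM SIC field is sampled at ice-edge pixel locations taken from a higher-resolution product, and the samples are averaged to obtain the SIC value that corresponds, on average, to the edge.

```python
# Hedged sketch: average PM SIC at ice-edge locations derived from a finer product.
import numpy as np

rng = np.random.default_rng(6)
# Synthetic 25 km SIC field (%) increasing from open water (left) to pack ice (right).
sic = np.clip(np.linspace(0, 100, 100)[None, :] + 5 * rng.standard_normal((100, 100)), 0, 100)

# Hypothetical ice-edge pixel coordinates (row, col) from a 1 km edge line,
# already mapped onto the 25 km PM grid.
edge_rc = [(r, 30 + int(3 * np.sin(r / 10))) for r in range(100)]

edge_sic = np.array([sic[r, c] for r, c in edge_rc])
print(f"mean PM SIC at the ice edge: {edge_sic.mean():.0f}%")
```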
Melching-Kollmuss, Stephanie; Dekant, Wolfgang; Kalberlah, Fritz
2010-03-01
Limits for tolerable concentrations of groundwater metabolites ("non-relevant metabolites" without targeted toxicities and without specific classification and labeling) derived from active ingredients (AIs) of plant protection products (PPPs) are discussed in the European Union. Risk assessments for "non-relevant metabolites" need to be performed when concentrations are above 0.75 μg/L. Since oral uptake is the only relevant exposure pathway for "non-relevant metabolites", risk assessment approaches used for other chemicals with predominantly oral exposure in humans are applicable. The concept of "thresholds of toxicological concern" (TTC) defines tolerable dietary intakes for chemicals without toxicity data and is widely applied to chemicals present in food in low concentrations, such as flavorings. Based on a statistical evaluation of the results of many toxicity studies and considerations of chemical structures, the TTC concept derives a maximum daily oral intake without concern of 90 μg/person/day for non-genotoxic chemicals, even for those with appreciable toxicity. Using the typical exposure assessment for drinking water contaminants (consumption of 2 L of drinking water/person/day, allocation of 10% of the tolerable daily intake to drinking water), a TTC-based upper concentration limit of 4.5 μg/L for "non-relevant metabolites" in ground/drinking water is delineated. In the present publication we evaluated whether this value would cover all relevant toxicities (repeated-dose, reproductive and developmental, and immune effects). Taking into account that, after evaluation of specific reproduction toxicity data for chemicals and pharmaceuticals, a value of 1 μg/kg bw/day was assessed to cover developmental and reproductive toxicity, a TTC value of 60 μg/person/day was assessed to represent a safe value. Based on these reasonable worst-case assumptions, a TTC-derived threshold of 3 μg/L in drinking water is obtained. When a non-relevant metabolite is present at a concentration below 3 μg/L, animal testing for toxicity is not considered necessary for a compound-specific risk assessment, since application of the TTC covers all relevant toxicities to be considered in such an assessment and any health risk resulting from these exposures is very low. (c) 2009 Elsevier Inc. All rights reserved.
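The two drinking-water limits quoted above follow from simple arithmetic: allocate 10% of the TTC-based tolerable daily intake to drinking water and divide by a default consumption of 2 L/person/day.

```python
# TTC-based daily intake -> drinking-water concentration limit (values from the abstract).
def ttc_to_water_limit(ttc_ug_per_day, allocation=0.10, water_l_per_day=2.0):
    return ttc_ug_per_day * allocation / water_l_per_day

print(ttc_to_water_limit(90))   # 4.5 ug/L (general TTC for non-genotoxic chemicals)
print(ttc_to_water_limit(60))   # 3.0 ug/L (TTC also covering reproductive/developmental effects)
```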
Harris, Andrew C.; Stepanov, Irina; Pentel, Paul R.; LeSage, Mark G.
2012-01-01
Rationale Animal models of tobacco addiction rely on administration of nicotine alone or nicotine combined with isolated constituents. Models using tobacco extracts derived from tobacco products and containing a range of tobacco constituents might more accurately simulate tobacco exposure in humans. Objective To compare the effects of nicotine alone and an aqueous smokeless tobacco extract in several addiction-related animal behavioral models. Methods Nicotine alone and nicotine dose-equivalent concentrations of extract were compared in terms of their acute effects on intracranial self-stimulation (ICSS) thresholds, discriminative stimulus effects, and effects on locomotor activity. Results Similar levels of nicotine and minor alkaloids were achieved using either artificial saliva or saline for extraction, supporting the clinical relevance of the saline extracts used in these studies. Extract produced reinforcement-enhancing (ICSS threshold-decreasing) effects similar to those of nicotine alone at low to moderate nicotine doses, but reduced reinforcement-attenuating (ICSS threshold-increasing) effects at a high nicotine dose. In rats trained to discriminate nicotine alone from saline, intermediate extract doses did not substitute for the training dose as well as nicotine alone. Locomotor stimulant effects and nicotine distribution to brain were similar following administration of extract or nicotine alone. Conclusions The reinforcement-attenuating and discriminative stimulus effects of nicotine delivered in an extract of a commercial smokeless tobacco product differed from those of nicotine alone. Extracts of tobacco products may be useful for evaluating the abuse liability of those products and understanding the role of non-nicotine constituents in tobacco addiction. PMID:21960181
Wang, Zhen; Scott, W Casan; Williams, E Spencer; Ciarlo, Michael; DeLeo, Paul C; Brooks, Bryan W
2018-04-01
Uncertainty factors (UFs) are commonly used during hazard and risk assessments to address uncertainties, including extrapolations among mammals and experimental durations. In risk assessment, default values are routinely used for interspecies extrapolation and interindividual variability. Whether default UFs are sufficient for various chemical uses or specific chemical classes remains understudied, particularly for ingredients in cleaning products. Therefore, we examined publicly available acute median lethal dose (LD50), and reproductive and developmental no-observed-adverse-effect level (NOAEL) and lowest-observed-adverse-effect level (LOAEL) values for the rat model (oral). We employed probabilistic chemical toxicity distributions to identify likelihoods of encountering acute, subacute, subchronic and chronic toxicity thresholds for specific chemical categories and ingredients in cleaning products. We subsequently identified thresholds of toxicological concern (TTC) and then various UFs for: 1) acute (LD50s)-to-chronic (reproductive/developmental NOAELs) ratios (ACRs), 2) exposure duration extrapolations (e.g., subchronic-to-chronic; reproductive/developmental), and 3) LOAEL-to-NOAEL ratios considering subacute/acute developmental responses. These ratios (95% CIs) were calculated from pairwise threshold levels using Monte Carlo simulations to identify UFs for all ingredients in cleaning products. Based on data availability, chemical category-specific UFs were also identified for aliphatic acids and salts, aliphatic alcohols, inorganic acids and salts, and alkyl sulfates. In a number of cases, derived UFs were smaller than default values (e.g., 10) employed by regulatory agencies; however, larger UFs were occasionally identified. Such UFs could be used by assessors instead of relying on default values. These approaches for identifying mammalian TTCs and diverse UFs represent robust alternatives to the application of default values for ingredients in cleaning products and other chemical classes. Findings can also support chemical substitutions during alternatives assessment, data dossier development (e.g., read-across), identification of TTCs, and screening-level hazard and risk assessment when toxicity data are unavailable for specific chemicals. Copyright © 2018 Elsevier Ltd. All rights reserved.
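A hedged sketch of the Monte Carlo step (the distribution shapes and parameters are hypothetical, not the fitted chemical toxicity distributions): acute LD50 and chronic NOAEL values are drawn, their ratio forms the acute-to-chronic ratio (ACR), and an upper percentile of the ratio distribution is taken as a data-derived uncertainty factor.

```python
# Illustrative Monte Carlo derivation of an acute-to-chronic ratio (ACR) based UF.
import numpy as np

rng = np.random.default_rng(7)
n = 100_000
log_ld50 = rng.normal(loc=3.0, scale=0.6, size=n)    # log10 mg/kg, hypothetical acute distribution
log_noael = rng.normal(loc=1.5, scale=0.7, size=n)   # log10 mg/kg/day, hypothetical chronic distribution

acr = 10 ** (log_ld50 - log_noael)                   # acute-to-chronic ratios
uf = np.percentile(acr, 95)                          # e.g. 95th percentile as a candidate UF
ci = np.percentile(acr, [2.5, 97.5])
print(f"median ACR = {np.median(acr):.0f}, 95th percentile UF = {uf:.0f}, 95% CI {ci.round(0)}")
```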
Borucki, Ewa; Berg, Bruce G
2017-05-01
This study investigated the psychophysical effects of distortion products in a listening task traditionally used to estimate the bandwidth of phase sensitivity. For a 2000 Hz carrier, estimates of the modulation depth necessary to discriminate amplitude-modulated (AM) tones from quasi-frequency-modulated (QFM) tones were measured in a two-interval forced-choice task as a function of modulation frequency. Temporal modulation transfer functions were often non-monotonic at modulation frequencies above 300 Hz. This was likely due to a spectral cue arising from the interaction of auditory distortion products and the lower sideband of the stimulus complex. When the stimulus duration was decreased from 200 ms to 20 ms, thresholds for low-frequency modulators rose to near-chance levels, whereas thresholds in the region of non-monotonicities were less affected. The decrease in stimulus duration appears to hinder the listener's ability to use temporal cues in order to discriminate between AM and QFM, whereas spectral information derived from distortion product cues appears more resilient. Copyright © 2017. Published by Elsevier B.V.
Zhai, S-Q; Guo, W; Hu, Y-Y; Yu, N; Chen, Q; Wang, J-Z; Fan, M; Yang, W-Y
2011-05-01
To explore the protective effects of brain-derived neurotrophic factor on the noise-damaged cochlear spiral ganglion. Recombinant adenovirus brain-derived neurotrophic factor vector, recombinant adenovirus LacZ and artificial perilymph were prepared. Guinea pigs with audiometric auditory brainstem response thresholds of more than 75 dB SPL, measured seven days after four hours of noise exposure at 135 dB SPL, were divided into three groups. Adenovirus brain-derived neurotrophic factor vector, adenovirus LacZ and perilymph were infused into the cochleae of the three groups, variously. Eight weeks later, the cochleae were stained immunohistochemically and the spiral ganglion cells counted. The auditory brainstem response threshold recorded before and seven days after noise exposure did not differ significantly between the three groups. However, eight weeks after cochlear perfusion, the group receiving brain-derived neurotrophic factor had a significantly decreased auditory brainstem response threshold and increased spiral ganglion cell count, compared with the adenovirus LacZ and perilymph groups. When administered via cochlear infusion following noise damage, brain-derived neurotrophic factor appears to improve the auditory threshold, and to have a protective effect on the spiral ganglion cells.
NASA Astrophysics Data System (ADS)
Sutula, Martha; Kudela, Raphael; Hagy, James D.; Harding, Lawrence W.; Senn, David; Cloern, James E.; Bricker, Suzanne; Berg, Gry Mine; Beck, Marcus
2017-10-01
San Francisco Bay (SFB), USA, is highly enriched in nitrogen and phosphorus, but has been resistant to the classic symptoms of eutrophication associated with over-production of phytoplankton. Observations in recent years suggest that this resistance may be weakening, shown by: significant increases of chlorophyll-a (chl-a) and decreases of dissolved oxygen (DO), common occurrences of phytoplankton taxa that can form Harmful Algal Blooms (HAB), and algal toxins in water and mussels reaching levels of concern. As a result, managers now ask: what levels of chl-a in SFB constitute tipping points of phytoplankton biomass beyond which water quality will become degraded, requiring significant nutrient reductions to avoid impairments? We analyzed data for DO, phytoplankton species composition, chl-a, and algal toxins to derive quantitative relationships between three indicators (HAB abundance, toxin concentrations, DO) and chl-a. Quantile regressions relating HAB abundance and DO to chl-a were significant, indicating SFB is at increased risk of adverse HAB and low DO levels if chl-a continues to increase. Conditional probability analysis (CPA) showed chl-a of 13 mg m-3 as a "protective" threshold below which probabilities for exceeding alert levels for HAB abundance and toxins were reduced. This threshold was similar to chl-a of 13-16 mg m-3 that would meet a SFB-wide 80% saturation Water Quality Criterion (WQC) for DO. Higher "at risk" chl-a thresholds from 25 to 40 mg m-3 corresponded to 0.5 probability of exceeding alert levels for HAB abundance, and for DO below a WQC of 5.0 mg L-1 designated for lower South Bay (LSB) and South Bay (SB). We submit these thresholds as a basis to assess eutrophication status of SFB and to inform nutrient management actions. This approach is transferrable to other estuaries to derive chl-a thresholds protective against eutrophication.
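A minimal sketch of the conditional probability analysis (CPA), using synthetic data (the dose-response relationship and alert rule are hypothetical): for each candidate chl-a threshold, compute the probability that an alert level is exceeded given chl-a above that threshold; a protective threshold is the lowest value keeping that probability acceptably small.

```python
# Hedged CPA sketch: P(alert exceeded | chl-a > threshold) for candidate thresholds.
import numpy as np

rng = np.random.default_rng(8)
n = 3000
chla = rng.lognormal(mean=2.0, sigma=0.8, size=n)        # mg m^-3, synthetic observations
p_alert = 1 / (1 + np.exp(-(chla - 25) / 6))             # synthetic dose-response
alert = rng.random(n) < p_alert                          # True where a HAB alert level is exceeded

for thr in (5, 13, 25, 40):
    sel = chla > thr
    prob = alert[sel].mean() if sel.any() else float("nan")
    print(f"P(alert | chl-a > {thr:>2} mg m^-3) = {prob:.2f}")
```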
NASA Technical Reports Server (NTRS)
Brubaker, N.; Jedlovec, G. J.
2004-01-01
With the preliminary release of AIRS Level 1 and 2 data to the scientific community, there is a growing need for an accurate AIRS cloud mask for data assimilation studies and for producing products derived from cloud-free radiances. Current cloud information provided with the AIRS data is limited or based on simplified threshold tests. A multispectral cloud detection approach has been developed for AIRS that utilizes its hyper-spectral capabilities to detect clouds based on specific cloud signatures across the short wave and long wave infrared window regions. This new AIRS cloud mask has been validated against the existing AIRS Level 2 cloud product and cloud information derived from MODIS. Preliminary results for both day and night applications over the continental U.S. are encouraging. Details of the cloud detection approach and validation results will be presented at the conference.
On the soft-gluon resummation in top quark pair production at hadron colliders
NASA Astrophysics Data System (ADS)
Czakon, M.; Mitov, A.
2009-09-01
We uncover a contribution to the NLO/NLL threshold resummed total cross section for top quark pair production at hadron colliders, which has not been taken into account in earlier literature. We derive this contribution - the difference between the singlet and octet hard (matching) coefficients - in exact analytic form. The numerical impact of our findings on the Sudakov resummed cross section turns out to be large, and comparable in size to the current estimates for the theoretical uncertainty of the total cross section. A rough estimate points toward a few percent decrease of the latter at the LHC.
NASA Astrophysics Data System (ADS)
Juhlke, Florian; Lorber, Katja; Wagenstaller, Maria; Buettner, Andrea
2017-12-01
Chlorinated guaiacol derivatives are found in waste water of pulp mills using chlorine in the bleaching process of wood pulp. They can also be detected in fish tissue, possibly causing off-odors. To date, there is no systematic investigation on the odor properties of halogenated guaiacol derivatives. To close this gap, odor thresholds in air and odor qualities of 14 compounds were determined by gas chromatography-olfactometry. Overall, the investigated compounds elicited smells that are characteristic for guaiacol, namely smoky, sweet, vanilla-like, but also medicinal and plaster-like. Their odor thresholds in air were, however, very low, ranging from 0.00072 to 23 ng/Lair. The lowest thresholds were found for 5-chloro- and 5-bromoguaiacol, followed by 4,5-dichloro- and 6-chloroguaiacol. Moreover, some inter-individual differences in odor threshold values could be observed, with the highest variations having been recorded for the individual values of 5-iodo- and 4-bromoguaiacol.
Stress/strain changes and triggered seismicity at The Geysers, California
NASA Astrophysics Data System (ADS)
Gomberg, Joan; Davis, Scott
1996-01-01
The principal results of this study of remotely triggered seismicity in The Geysers geothermal field are the demonstration that triggering (initiation of earthquake failure) depends on a critical strain threshold and that the threshold level increases with decreasing frequency, or, equivalently, depends on strain rate. This threshold function derives from (1) analyses of dynamic strains associated with surface waves of the triggering earthquakes, (2) statistically measured aftershock zone dimensions, and (3) analytic functional representations of strains associated with power production and tides. The threshold is also consistent with triggering by static strain changes and implies that both static and dynamic strains may cause aftershocks. The observation that triggered seismicity probably occurs in addition to background activity also provides an important constraint on the triggering process. Assuming the physical processes underlying earthquake nucleation to be the same, Gomberg [this issue] discusses seismicity triggered by the MW 7.3 Landers earthquake, its constraints on the variability of triggering thresholds with site, and the implications of time delays between triggering and triggered earthquakes. Our results enable us to reject the hypothesis that dynamic strains simply nudge prestressed faults over a Coulomb failure threshold sooner than they would have otherwise. We interpret the rate-dependent triggering threshold as evidence of several competing processes with different time constants, the faster one(s) facilitating failure and the other(s) inhibiting it. Such competition is a common feature of theories of slip instability. All these results, not surprisingly, imply that to understand earthquake triggering one must consider not only simple failure criteria requiring exceedence of some constant threshold but also the requirements for generating instabilities.
Deriving flow directions for coarse-resolution (1-4 km) gridded hydrologic modeling
NASA Astrophysics Data System (ADS)
Reed, Seann M.
2003-09-01
The National Weather Service Hydrology Laboratory (NWS-HL) is currently testing a grid-based distributed hydrologic model at a resolution (4 km) commensurate with operational, radar-based precipitation products. To implement distributed routing algorithms in this framework, a flow direction must be assigned to each model cell. A new algorithm, referred to as cell outlet tracing with an area threshold (COTAT) has been developed to automatically, accurately, and efficiently assign flow directions to any coarse-resolution grid cells using information from any higher-resolution digital elevation model. Although similar to previously published algorithms, this approach offers some advantages. Use of an area threshold allows more control over the tendency for producing diagonal flow directions. Analyses of results at different output resolutions ranging from 300 m to 4000 m indicate that it is possible to choose an area threshold that will produce minimal differences in average network flow lengths across this range of scales. Flow direction grids at a 4 km resolution have been produced for the conterminous United States.
A Supplementary Clear-Sky Snow and Ice Recognition Technique for CERES Level 2 Products
NASA Technical Reports Server (NTRS)
Radkevich, Alexander; Khlopenkov, Konstantin; Rutan, David; Kato, Seiji
2013-01-01
Identification of clear-sky snow and ice is an important step in the production of cryosphere radiation budget products, which are used in the derivation of long-term data series for climate research. In this paper, a new method of clear-sky snow/ice identification for Moderate Resolution Imaging Spectroradiometer (MODIS) is presented. The algorithm's goal is to enhance the identification of snow and ice within the Clouds and the Earth's Radiant Energy System (CERES) data after application of the standard CERES scene identification scheme. The input of the algorithm uses spectral radiances from five MODIS bands and surface skin temperature available in the CERES Single Scanner Footprint (SSF) product. The algorithm produces a cryosphere rating from an aggregated test: a higher rating corresponds to a more certain identification of the clear-sky snow/ice-covered scene. Empirical analysis of regions of interest representing distinctive targets such as snow, ice, ice and water clouds, open waters, and snow-free land selected from a number of MODIS images shows that the cryosphere rating of snow/ice targets falls into 95% confidence intervals lying above the same confidence intervals of all other targets. This enables recognition of clear-sky cryosphere by using a single threshold applied to the rating, which makes this technique different from traditional branching techniques based on multiple thresholds. Limited tests show that the established threshold clearly separates the cryosphere rating values computed for the cryosphere from those computed for noncryosphere scenes, whereas individual tests applied consequently cannot reliably identify the cryosphere for complex scenes.
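As an illustration of the single-rating, single-threshold idea described above (in contrast to multi-threshold branching tests), the following sketch aggregates several per-pixel test scores into one rating and applies one cutoff. The function names, weights, and the 0.7 cutoff are hypothetical placeholders, not the CERES/MODIS values.

import numpy as np

def cryosphere_rating(test_scores, weights=None):
    # Each test score is assumed to lie in [0, 1], with higher values more
    # snow/ice-like; the aggregation is a simple weighted mean for illustration.
    scores = np.asarray(test_scores, dtype=float)
    weights = np.ones_like(scores) if weights is None else np.asarray(weights, dtype=float)
    return float(np.dot(weights, scores) / weights.sum())

def is_clear_sky_snow_ice(rating, threshold=0.7):
    # Single-threshold decision applied to the aggregated rating.
    return rating > threshold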
Foster, David C.; Falsetta, Megan L.; Woeller, Collynn F.; Pollock, Stephen J.; Song, Kunchang; Bonham, Adrienne; Haidaris, Constantine G.; Stodgell, Chris J.; Messing, Susan P.; Iadarola, Michael; Phipps, Richard P.
2015-01-01
Fibroblast strains were derived from two regions of the lower genital tract of localized provoked vulvodynia (LPV) cases and pain-free controls. Sixteen strains were derived from four cases and four controls, age and race matched, following pre-sampling mechanical pain threshold assessments. Strains were challenged with six separate stimuli: live yeast species (C. albicans, C. glabrata, C. tropicalis, and S. cerevisiae), yeast extract (zymosan), or inactive vehicle. Production of prostaglandin E2 (PGE2) and interleukin-6 (IL-6) were pro-inflammatory response measures. Highest IL-6 and PGE2 occurred with vestibular strains following C. albicans, C. glabrata, and zymosan challenges, resulting in the ability to significantly predict IL-6 and PGE2 production by genital tract location. Following C. albicans and C. glabrata challenge of all sixteen fibroblast strains, adjusting for dual sampling of subjects, PGE2 and IL-6 production significantly predicted the pre-sampling pain threshold from the genital tract site of sampling. At the same location of pain assessment and fibroblast sampling, in situ immunohistochemical (IHC)(+) fibroblasts for IL-6 and Cox-2 were quantified microscopically. The correlation between IL-6 production and IL-6 IHC(+) was statistically significant yet biological significance is unknown because of the small number of IHC(+) IL-6 fibroblasts identified. A low fibroblast IL-6 IHC(+) count may result from most IL-6 produced by fibroblasts existing in a secreted, extracellular state. Enhanced, site-specific, innate immune responsiveness to yeast pathogens by fibroblasts may be an early step in LPV pathogenesis. Fibroblast strain testing may offer an attractive/objective marker of LPV pathology in women with vulvodynia of inflammatory origin. PMID:25679469
Threshold expansion of the gg (qq̄) → QQ̄ + X cross section at O(αs⁴)
NASA Astrophysics Data System (ADS)
Beneke, Martin; Czakon, Michal; Falgari, Pietro; Mitov, Alexander; Schwinn, Christian
2010-07-01
We derive the complete set of velocity-enhanced terms in the expansion of the total cross section for heavy-quark pair production in hadronic collisions at next-to-next-to-leading order. Our expression takes into account the effects of soft-gluon emission as well as that of potential-gluon exchanges. We prove that there are no enhancements due to subleading soft-gluon couplings multiplying the leading Coulomb singularity.
NASA Astrophysics Data System (ADS)
Lu, Zhaoyang; Xu, Wei; Sun, Decai; Han, Weiguo
2009-10-01
In this paper, the discounted penalty (Gerber-Shiu) functions for a risk model involving two independent classes of insurance risks under a threshold dividend strategy are developed. We also assume that the two claim number processes are independent Poisson and generalized Erlang (2) processes, respectively. When the surplus is above this threshold level, dividends are paid at a constant rate that does not exceed the premium rate. Two systems of integro-differential equations for discounted penalty functions are derived, based on whether the surplus is above this threshold level. Laplace transformations of the discounted penalty functions when the surplus is below the threshold level are obtained. And we also derive a system of renewal equations satisfied by the discounted penalty function with initial surplus above the threshold strategy via the Dickson-Hipp operator. Finally, analytical solutions of the two systems of integro-differential equations are presented.
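For readers unfamiliar with the setup, a sketch of the standard threshold dividend strategy and the Gerber-Shiu (expected discounted penalty) function is given below; the notation (premium rate c, dividend rate d, threshold b, discount rate δ, penalty w) is assumed here and may differ from the paper's.

% Surplus dynamics under a threshold dividend strategy (sketch):
\[
  \mathrm{d}U(t) =
  \begin{cases}
    c\,\mathrm{d}t - \mathrm{d}S(t), & U(t) < b,\\
    (c-d)\,\mathrm{d}t - \mathrm{d}S(t), & U(t) \ge b, \qquad 0 < d \le c,
  \end{cases}
\]
% where S(t) is the aggregate claim process of the two classes. With ruin time
% \tau and penalty w applied to the surplus just before ruin and the deficit at
% ruin, the Gerber-Shiu function is
\[
  \phi(u) = \mathbb{E}\left[ e^{-\delta\tau}\, w\bigl(U(\tau^-), |U(\tau)|\bigr)\,
            \mathbf{1}_{\{\tau<\infty\}} \,\middle|\, U(0)=u \right],
\]
% which is analyzed separately for initial surplus u below and above the threshold b.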
Coherent vector meson photoproduction from deuterium at intermediate energies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rogers, T.C.; Strikman, M.I.; Sargsian, M.M.
2006-04-15
We analyze the cross section for vector meson photoproduction off a deuteron for the intermediate range of photon energies starting at a few giga-electron-volts above the threshold and higher. We reproduce the steps in the derivation of the conventional nonrelativistic Glauber expression based on an effective diagrammatic method while making corrections for Fermi motion and intermediate-energy kinematic effects. We show that, for intermediate-energy vector meson production, the usual Glauber factorization breaks down, and we derive corrections to the usual Glauber method to linear order in longitudinal nucleon momentum. The purpose of our analysis is to establish methods for probing interesting physics in the production mechanism for φ mesons and heavier vector mesons. We demonstrate how neglecting the breakdown of Glauber factorization can lead to errors in measurements of basic cross sections extracted from nuclear data.
2012-10-25
Matching of the "real fuel combustion property targets" — hydrogen/carbon molar ratio (H/C), derived cetane number (DCN), threshold sooting index (TSI), and average mean molecular weight (MWave) — in diffusive soot extinction configurations.
Mapping irrigated areas in Afghanistan over the past decade using MODIS NDVI
Pervez, Md Shahriar; Budde, Michael; Rowland, James
2014-01-01
Agricultural production capacity contributes to food security in Afghanistan and is largely dependent on irrigated farming, mostly utilizing surface water fed by snowmelt. Because of the high contribution of irrigated crops (> 80%) to total agricultural production, knowing the spatial distribution and year-to-year variability in irrigated areas is imperative to monitoring food security for the country. We used 16-day composites of the Normalized Difference Vegetation Index (NDVI) from the Moderate Resolution Imaging Spectroradiometer (MODIS) sensor to create 23-point time series for each year from 2000 through 2013. Seasonal peak values and time series were used in a threshold-dependent decision tree algorithm to map irrigated areas in Afghanistan for the last 14 years. In the absence of ground reference irrigated area information, we evaluated these maps with the irrigated areas classified from multiple snapshots of the landscape during the growing season from Landsat 5 optical and thermal sensor images. We were able to identify irrigated areas using Landsat imagery by selecting as irrigated those areas with Landsat-derived NDVI greater than 0.30–0.45, depending on the date of the Landsat image, and surface temperature less than or equal to 310 Kelvin (36.9 °C). Due to the availability of Landsat images, we were able to compare with the MODIS-derived maps for four years: 2000, 2009, 2010, and 2011. The irrigated areas derived from Landsat agreed well (r2 = 0.91) with the irrigated areas derived from MODIS, providing confidence in the MODIS NDVI threshold approach. The maps portrayed a highly dynamic irrigated agriculture practice in Afghanistan, where the amount of irrigated area was largely determined by the availability of surface water, especially snowmelt, and varied by as much as 30% between water surplus and water deficit years. During the past 14 years, 2001, 2004, and 2008 showed the lowest levels of irrigated area (~ 1.5 million hectares), attesting to the severe drought conditions in those years, whereas 2009, 2012 and 2013 registered the largest irrigated area (~ 2.5 million hectares) due to record snowpack and snowmelt in the region. The model holds promise for providing near-real-time (by the end of the growing seasons) estimates of irrigated area, which are beneficial for food security monitoring as well as subsequent decision making for the country. While the model is developed for Afghanistan, it can be adapted, with appropriate adjustments in the derived threshold values, to map irrigated areas elsewhere.
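A minimal Python sketch of the two classification rules described above follows. The Landsat rule uses the thresholds quoted in the abstract (date-dependent NDVI cutoff of 0.30–0.45 and surface temperature <= 310 K); the MODIS function is only a stand-in for the threshold-dependent decision tree, and its parameter values are placeholders.

import numpy as np

def landsat_irrigated_mask(ndvi, lst_kelvin, ndvi_threshold):
    # Rule quoted in the abstract: NDVI above a date-dependent threshold
    # (0.30-0.45) and land surface temperature <= 310 K.
    return (np.asarray(ndvi) > ndvi_threshold) & (np.asarray(lst_kelvin) <= 310.0)

def modis_irrigated_mask(ndvi_series, peak_threshold=0.5, min_high_composites=3):
    # Stand-in for the threshold-dependent decision tree: flag pixels whose
    # seasonal NDVI peak exceeds a cutoff for enough 16-day composites.
    # Both parameter values are placeholders, not the published ones.
    ndvi_series = np.asarray(ndvi_series, dtype=float)  # shape (23, rows, cols)
    peak = ndvi_series.max(axis=0)
    n_high = (ndvi_series >= peak_threshold).sum(axis=0)
    return (peak >= peak_threshold) & (n_high >= min_high_composites)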
Lunz, John G; Specht, Susan M; Murase, Noriko; Isse, Kumiko; Demetris, Anthony J
2007-12-01
Intraorgan dendritic cells (DCs) monitor the environment and help translate triggers of innate immunity into adaptive immune responses. Liver-based DCs are continually exposed, via gut-derived portal venous blood, to potential antigens and bacterial products that can trigger innate immunity. However, somehow the liver avoids a state of perpetual inflammation and protects central immune organs from overstimulation. In this study, we tested the hypothesis that hepatic interleukin-6 (IL-6)/signal transducer and activator of transcription 3 (STAT3) activity increases the activation/maturation threshold of hepatic DCs toward innate immune signals. The results show that the liver nuclear STAT3 activity is significantly higher than that of other organs and is IL-6-dependent. Hepatic DCs in normal IL-6 wild-type (IL-6(+/+)) mice are phenotypically and functionally less mature than DCs from IL-6-deficient (IL-6(-/-)) or STAT3-inhibited IL-6(+/+) mice, as determined by surface marker expression, proinflammatory cytokine secretion, and allogeneic T-cell stimulation. IL-6(+/+) liver DCs produce IL-6 in response to exposure to lipopolysaccharide (LPS) and cytidine phosphate guanosine oligonucleotides (CpG) but are resistant to maturation compared with IL-6(-/-) liver DCs. Conversely, exogenous IL-6 inhibits LPS-induced IL-6(-/-) liver DC maturation. IL-6/STAT3 signaling influences the liver DC expression of toll-like receptor 9 and IL-1 receptor associated kinase-M. The depletion of gut commensal bacteria in IL-6(+/+) mice with oral antibiotics decreased portal blood endotoxin levels, lowered the expression of IL-6 and phospho-STAT3, and significantly increased liver DC maturation. Gut-derived bacterial products, by stimulating hepatic IL-6/STAT3 signaling, inhibit hepatic DC activation/maturation and thereby elevate the threshold needed for translating triggers of innate immunity into adaptive immune responses. Manipulating gut bacteria may therefore be an effective strategy for altering intrahepatic immune responses.
Ding, Changfeng; Ma, Yibing; Li, Xiaogang; Zhang, Taolin; Wang, Xingxiang
2018-04-01
Cadmium (Cd) is an environmental toxicant with high rates of soil-plant transfer. It is essential to establish an accurate soil threshold for the implementation of soil management practices. This study takes root vegetable as an example to derive soil thresholds for Cd based on the food quality standard as well as health risk assessment using species sensitivity distribution (SSD). A soil type-specific bioconcentration factor (BCF, ratio of Cd concentration in plant to that in soil) generated from soil with a proper Cd concentration gradient was calculated and applied in the derivation of soil thresholds instead of a generic BCF value to minimize the uncertainty. The sensitivity variations of twelve root vegetable cultivars for accumulating soil Cd and the empirical soil-plant transfer model were investigated and developed in greenhouse experiments. After normalization, the hazardous concentrations from the fifth percentile of the distribution based on added Cd (HC5 add ) were calculated from the SSD curves fitted by Burr Type III distribution. The derived soil thresholds were presented as continuous or scenario criteria depending on the combination of soil pH and organic carbon content. The soil thresholds based on food quality standard were on average 0.7-fold of those based on health risk assessment, and were further validated to be reliable using independent data from field survey and published articles. The results suggested that deriving soil thresholds for Cd using SSD method is robust and also applicable to other crops as well as other trace elements that have the potential to cause health risk issues. Copyright © 2017 Elsevier B.V. All rights reserved.
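The HC5 derivation from a species (here cultivar) sensitivity distribution can be sketched as below. The abstract specifies a Burr Type III fit, which scipy.stats.burr provides; the normalization to soil pH and organic carbon and the exact fitting procedure of the paper are not reproduced, and the example input values are invented.

import numpy as np
from scipy import stats

def hc5_from_ssd(sensitivity_values, dist=stats.burr):
    # sensitivity_values: cultivar-specific added-Cd thresholds after
    # normalization to reference soil conditions.
    # scipy.stats.burr is the Burr Type III family named in the abstract; the
    # fit here is generic maximum likelihood, not the published procedure.
    x = np.asarray(sensitivity_values, dtype=float)
    params = dist.fit(x)
    return dist.ppf(0.05, *params)  # 5th percentile of the SSD = HC5(add)

# Illustrative call with made-up cultivar thresholds (mg Cd per kg soil):
# hc5_add = hc5_from_ssd([0.8, 1.1, 1.3, 1.6, 2.0, 2.4, 2.9, 3.5, 4.1, 4.8, 5.6, 6.5])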
Potential effects of sulfur pollutants on grape production in New York State
DOE Office of Scientific and Technical Information (OSTI.GOV)
Knudson, D.A.; Viessman, S.
1983-01-01
This paper presents the results of a prototype analysis of sulfur pollutants on grape production in New York State. Principal grape production areas for the state are defined and predictions of sulfur dioxide concentrations associated with present and projected sources are computed. Sulfur dioxide concentrations are based on the results of a multi-source dispersion model, whereas concentrations for other pollutants are derived from observations. This information is used in conjunction with results from experiments conducted to identify threshold levels of damage and/or injury to a variety of grape species from pollutants. Determination is then made whether the subject crop is at risk from present and projected concentrations of pollutants.
NASA Astrophysics Data System (ADS)
Kim, Y.; Du, J.; Kimball, J. S.
2017-12-01
The landscape freeze-thaw (FT) status derived from satellite microwave remote sensing is closely linked to vegetation phenology and productivity, surface energy exchange, evapotranspiration, snow/ice melt dynamics, and trace gas fluxes over land areas affected by seasonally frozen temperatures. A long-term global satellite microwave Earth System Data Record of daily landscape freeze-thaw status (FT-ESDR) was developed using similar calibrated 37 GHz, vertically-polarized (V-pol) brightness temperatures (Tb) from SMMR, SSM/I, and SSMIS sensors. The FT-ESDR shows mean annual spatial classification accuracies of 90.3% and 84.3% for PM and AM overpass retrievals relative to surface air temperature (SAT) measurement-based FT estimates from global weather stations. However, the coarse FT-ESDR gridding (25-km) is insufficient to distinguish finer-scale FT heterogeneity. In this study, we tested alternative finer-scale FT estimates derived from two enhanced polar-grid (3.125-km and 6-km resolution), 36.5 GHz V-pol Tb records derived from calibrated AMSR-E and AMSR2 sensor observations. The daily FT estimates are derived using a modified seasonal threshold algorithm that classifies daily Tb variations in relation to grid cell-wise FT thresholds calibrated using ERA-Interim reanalysis-based SAT, downscaled using a digital terrain map and estimated temperature lapse rates. The resulting polar-grid FT records for a selected study year (2004) show mean annual spatial classification accuracies of 90.1% (84.2%) and 93.1% (85.8%) for respective PM (AM) 3.125-km and 6-km Tb retrievals relative to in situ SAT measurement-based FT estimates from regional weather stations. Areas with enhanced FT accuracy include water-land boundaries and mountainous terrain. Differences in FT patterns and relative accuracy obtained from the enhanced grid Tb records were attributed to several factors, including different noise contributions from underlying Tb processing and spatial mismatches between Tb retrievals and SAT-calibrated FT thresholds.
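A simplified sketch of a grid-cell-wise seasonal threshold calibration and FT classification is shown below; the search over candidate thresholds and the agreement metric are illustrative only and are not the published algorithm.

import numpy as np

def calibrate_ft_threshold(tb_series, sat_series, candidates=None):
    # tb_series: daily 36.5 GHz V-pol brightness temperatures (K) for one grid cell
    # sat_series: matching surface air temperatures (K), e.g. downscaled reanalysis SAT
    tb = np.asarray(tb_series, dtype=float)
    sat = np.asarray(sat_series, dtype=float)
    frozen_ref = sat <= 273.15
    if candidates is None:
        candidates = np.linspace(np.nanmin(tb), np.nanmax(tb), 101)
    accuracies = [np.mean((tb <= t) == frozen_ref) for t in candidates]
    return candidates[int(np.argmax(accuracies))]

def classify_ft(tb_series, threshold):
    # Frozen when Tb drops to or below the calibrated cell threshold.
    return np.asarray(tb_series, dtype=float) <= threshold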
Data Assimilation Experiments using Quality Controlled AIRS Version 5 Temperature Soundings
NASA Technical Reports Server (NTRS)
Susskind, Joel
2008-01-01
The AIRS Science Team Version 5 retrieval algorithm has been finalized and is now operational at the Goddard DAAC in the processing (and reprocessing) of all AIRS data. Version 5 contains accurate case-by-case error estimates for most derived products, which are also used for quality control. We have conducted forecast impact experiments assimilating AIRS quality controlled temperature profiles using the NASA GEOS-5 data assimilation system, consisting of the NCEP GSI analysis coupled with the NASA FVGCM. Assimilation of quality controlled temperature profiles resulted in significantly improved forecast skill in both the Northern Hemisphere and Southern Hemisphere Extra-Tropics, compared to that obtained from analyses in which all data used operationally by NCEP, except for AIRS data, were assimilated. Experiments using different Quality Control thresholds for assimilation of AIRS temperature retrievals showed that a medium quality control threshold performed better than a tighter threshold, which provided better overall sounding accuracy, or a looser threshold, which provided better spatial coverage of accepted soundings. We are conducting more experiments to further optimize this balance of spatial coverage and sounding accuracy from the data assimilation perspective. In all cases, temperature soundings were assimilated well below cloud level in partially cloudy cases. The positive impact of assimilating AIRS-derived atmospheric temperatures all but vanished when only AIRS stratospheric temperatures were assimilated. Forecast skill resulting from assimilation of AIRS radiances uncontaminated by clouds, instead of AIRS temperature soundings, was only slightly better than that resulting from assimilation of only stratospheric AIRS temperatures. This reduction in forecast skill is most likely the result of significant loss of tropospheric information when only AIRS radiances unaffected by clouds are used in the data assimilation process.
NASA Astrophysics Data System (ADS)
Hinsby, Klaus; Markager, Stiig; Kronvang, Brian; Windolf, Jørgen; Sonnenborg, Torben; Sørensen, Lærke
2015-04-01
Nitrate, which typically makes up the major part (~>90%) of dissolved inorganic nitrogen in groundwater and surface water, is the most frequent pollutant responsible for European groundwater bodies failing to meet the good status objectives of the European Water Framework Directive, generally when comparing groundwater monitoring data with the nitrate quality standard of the Groundwater Directive (50 mg/l = the WHO drinking water standard). Still, while more than 50% of the European surface water bodies do not meet the objective of good ecological status, "only" 25% of groundwater bodies do not meet the objective of good chemical status according to the river basin management plans reported by the EU member states. However, based on a study on interactions between groundwater, streams and a Danish estuary, we argue that nitrate threshold values for aerobic groundwater often need to be significantly below the nitrate quality standard to ensure good ecological status of associated surface water bodies, and hence that the chemical status of European groundwater is worse than indicated by the present assessments. Here we suggest a methodology for derivation of groundwater and stream threshold values for total nitrogen ("nitrate") in a coastal catchment based on assessment of maximum acceptable nitrogen loadings (thresholds) to the associated vulnerable estuary. The applied method uses existing information on agricultural practices and point source emissions in the catchment, together with groundwater and stream quantity and quality monitoring data, which all feed an integrated groundwater and surface water modelling tool, enabling us to conduct an assessment of total nitrogen loads and threshold concentrations derived to ensure/restore good ecological status of the investigated estuary. For the catchment to the Horsens estuary in Denmark we estimate the stream and groundwater thresholds for total nitrogen to be about 13 and 27 mg/l (~ 12 and 25 mg/l of nitrate). The shown example of deriving nitrogen threshold concentrations is for groundwater and streams in a coastal catchment discharging to a vulnerable estuary in Denmark, but the principles may be applied to large river basins with sub-catchments in several countries, such as the Danube or the Rhine. In this case the relevant countries need to collaborate on derivation of nitrogen thresholds based on, e.g., maximum acceptable nitrogen loadings to the Black Sea / the North Sea, and finally agree on thresholds for different parts of the river basin. Phosphorus is another nutrient which frequently results in or contributes to the eutrophication of surface waters. The transport and retention processes of total phosphorus (TP) are more complex than for nitrate (or alternatively total N), and presently we are able to establish TP thresholds for streams but not for groundwater. Derivation of TP thresholds is covered in an accompanying paper by Kronvang et al.
Lu, Yi Chen; Zhang, Shuang; Yang, Hong
2015-01-01
Isoproturon (IPU) is a herbicide widely used to prevent weeds in cereal production. Due to its extensive use in agriculture, residues of IPU are often detected in soils and crops. Overload of IPU to crops is associated with human health risks. Hence, there is an urgent need to develop an approach to mitigate its accumulation in crops. In this study, IPU residues and their degradation products in wheat were characterized using ultra-performance liquid chromatography-time-of-flight tandem mass spectrometry (UPLC-TOF-MS/MS). Most detected IPU-derivatives were sugar-conjugated. Degradation and glycosylation of IPU-derivatives could be enhanced by applying salicylic acid (SA). While more sugar-conjugated IPU-derivatives were identified in wheat with SA application, lower levels of IPU were detected, indicating that SA is able to accelerate intracellular IPU catabolism. All structures of IPU-derivatives and sugar-conjugated products were characterized. Comparative data were provided with specific activities and gene expression of certain glucosyltransferases. A pathway of IPU degradation and glucosylation was discussed. Our work indicates that SA-accelerated degradation is practically useful for wheat crops growing in IPU-contaminated soils because such crops with SA application can potentially lower or minimize IPU accumulation to levels below the threshold for adverse effects. Copyright © 2014 Elsevier B.V. All rights reserved.
Boddy, Lynne M; Noonan, Robert J; Kim, Youngwon; Rowlands, Alex V; Welk, Greg J; Knowles, Zoe R; Fairclough, Stuart J
2018-03-28
To examine the comparability of children's free-living sedentary time (ST) derived from raw acceleration thresholds for wrist-mounted GENEActiv accelerometer data, with ST estimated using the waist-mounted ActiGraph 100 count·min-1 threshold. Secondary data analysis. 108 10-11-year-old children (n=43 boys) from Liverpool, UK wore one ActiGraph GT3X+ and one GENEActiv accelerometer on their right hip and left wrist, respectively, for seven days. Signal vector magnitude (SVM; mg) was calculated using the ENMO approach for GENEActiv data. ST was estimated from hip-worn ActiGraph data, applying the widely used 100 count·min-1 threshold. ROC analysis using 10-fold hold-out cross-validation was conducted to establish a wrist-worn GENEActiv threshold comparable to the hip ActiGraph 100 count·min-1 threshold. GENEActiv data were also classified using three empirical wrist thresholds, and equivalence testing was completed. Analysis indicated that a GENEActiv SVM value of 51 mg demonstrated fair to moderate agreement (Kappa: 0.32-0.41) with the 100 count·min-1 threshold. However, the generated and empirical thresholds for GENEActiv devices were not significantly equivalent to ActiGraph 100 count·min-1. GENEActiv data classified using the 35.6 mg threshold intended for ActiGraph devices generated ST estimates significantly equivalent to the ActiGraph 100 count·min-1. The newly generated and empirical GENEActiv wrist thresholds do not provide equivalent estimates of ST to the ActiGraph 100 count·min-1 approach. More investigation is required to assess the validity of applying ActiGraph cutpoints to GENEActiv data. Future studies are needed to examine the backward compatibility of ST data and to produce a robust method of classifying SVM-derived ST. Copyright © 2018 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
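The ROC-based derivation of a wrist SVM cutpoint against the hip reference classification can be sketched as follows; the Youden-index selection is a common choice, and the 10-fold hold-out cross-validation used in the study is omitted here for brevity.

import numpy as np
from sklearn.metrics import roc_curve

def derive_svm_threshold(svm_mg, reference_sedentary):
    # svm_mg: per-epoch wrist SVM values (mg); reference_sedentary: 0/1 labels
    # from the hip 100 count/min rule. Sedentary epochs correspond to LOW SVM,
    # so the ROC score is the negated SVM.
    y_true = np.asarray(reference_sedentary, dtype=int)
    score = -np.asarray(svm_mg, dtype=float)
    fpr, tpr, thr = roc_curve(y_true, score)
    youden = tpr - fpr
    return -thr[int(np.argmax(youden))]  # back-transform to an mg cutpoint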
Residence time control on hot moments of net nitrate production and uptake in the hyporheic zone
Briggs, Martin A.; Lautz, Laura K.; Hare, Danielle K.
2014-01-01
moments of net production and uptake, enhancing NO3- production as residence times approach the anaerobic threshold, and changing zones of net NO3- production to uptake as residence times increase past the net sink threshold. The anaerobic and net sink thresholds for beaver-influenced streambed morphology occur at much shorter residence times (1.3 h and 2.3 h, respectively) compared to other documented hyporheic systems, and the net sink threshold compares favorably to the lower boundary of the anaerobic threshold determined for this system with the new oxygen Damkohler number. The consistency of the residence time threshold values of NO3- cycling in this study, despite environmental variability and disparate morphology, indicates that NO3- hot moment dynamics are primarily driven by changes in physical hydrology and associated residence times.
Ionization cross sections of the Au L subshells by electron impact from the L3 threshold to 100 keV
NASA Astrophysics Data System (ADS)
Barros, Suelen F.; Vanin, Vito R.; Maidana, Nora L.; Martins, Marcos N.; García-Alvarez, Juan A.; Santos, Osvaldo C. B.; Rodrigues, Cleber L.; Koskinas, Marina F.; Fernández-Varea, José M.
2018-01-01
We measured the cross sections for Au Lα, Lβ, Lγ, Lℓ and Lη x-ray production by the impact of electrons with energies from the L3 threshold to 100 keV using a thin Au film whose mass thickness was determined by Rutherford Backscattering Spectrometry. The x-ray spectra were acquired with a Si drift detector, which allowed us to separate the components of the Lγ multiplet lines. The measured Lα, Lβ, Lγ1, Lγ2,3,6, Lγ4,4′, Lγ5, Lℓ and Lη x-ray production cross sections were then employed to derive Au L1, L2 and L3 subshell ionization cross sections with relative uncertainties of 8%, 7% and 7%, respectively; these figures include the uncertainties in the atomic relaxation parameters. The correction for the increase in electron path length inside the Au film was estimated by means of Monte Carlo simulations. The experimental ionization cross sections are about 10% above the state-of-the-art distorted-wave calculations.
Cloud Type Classification (cldtype) Value-Added Product
DOE Office of Scientific and Technical Information (OSTI.GOV)
Flynn, Donna; Shi, Yan; Lim, K-S
The Cloud Type (cldtype) value-added product (VAP) provides an automated cloud type classification based on macrophysical quantities derived from vertically pointing lidar and radar. Up to 10 layers of clouds are classified into seven cloud types based on predetermined and site-specific thresholds of cloud top, base and thickness. Examples of thresholds for selected U.S. Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) Climate Research Facility sites are provided in Tables 1 and 2. Inputs for the cldtype VAP include lidar and radar cloud boundaries obtained from the Active Remotely Sensed Cloud Location (ARSCL) and Surface Meteorological Systems (MET) data. Rain rates from MET are used to determine when radar signal attenuation precludes accurate cloud detection. Temporal resolution and vertical resolution for cldtype are 1 minute and 30 m, respectively, and match the resolution of ARSCL. The cldtype classification is an initial step for further categorization of clouds. It was developed for use by the Shallow Cumulus VAP to identify potential periods of interest to the LASSO model and is intended to find clouds of interest for a variety of users.
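A toy version of such a base/top/thickness threshold classification is sketched below; the type labels and numeric thresholds are placeholders, not the site-specific ARM values tabulated in the VAP documentation.

def classify_cloud_layer(base_km, top_km, thresholds=None):
    # The real cldtype VAP assigns one of seven types from site-specific
    # thresholds on cloud base, top and thickness; the labels and numbers
    # below are illustrative only.
    if thresholds is None:
        thresholds = {"low_top_km": 3.5, "high_base_km": 7.0, "thick_km": 6.0}
    thickness = top_km - base_km
    if top_km <= thresholds["low_top_km"]:
        return "boundary-layer cloud"
    if base_km >= thresholds["high_base_km"]:
        return "high cloud"
    if thickness >= thresholds["thick_km"]:
        return "deep cloud"
    return "mid-level cloud"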
A Simplified Approach to Cloud Masking with VIIRS in the S-NPP/JPSS Era
NASA Technical Reports Server (NTRS)
Jedlovec, Gary J.; Lafontaine, Frank J.
2014-01-01
The quantitative detection of clouds in satellite imagery has a number of important applications in weather analysis. The proper interpretation of satellite imagery for improved situational awareness depends on knowing where the clouds are at all times of the day. Additionally, many products derived from infrared measurements need accurate cloud information to mask out regions where retrieval of geophysical parameters in the atmosphere or on the surface are not possible. Thus, the accurate detection of the presence of clouds in satellite imagery on a global basis is important to the product developers and the operational weather community to support their decision-making process. This abstract describes an application of a two-channel bispectral composite threshold (BCT) approach applied to VIIRS imagery. The simplified BCT approach uses only the 10.76 and 3.75 micrometer spectral channels in two spectral tests; a straightforward infrared threshold test with the longwave channel and a shortwave minus longwave channel difference test. The key to the success of this approach as demonstrated in past applications to GOES and MODIS data is the generation of temporally and spatially dependent thresholds used in the tests from a previous number of days at similar observations to the current data. The presentation will present an overview of the approach and intercomparison results with other satellites, methods, and against verification data.
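A minimal sketch of the two BCT tests is given below; the per-pixel clear-sky composite and the offset values are stand-ins, since the operational thresholds are derived from roughly 20 days of prior observations at similar times and are not reproduced here.

import numpy as np

def bct_cloud_mask(t11, t39, t11_clear_composite, offset_ir=8.0, offset_diff=10.0):
    # t11, t39: 10.76 and 3.75 micrometer brightness temperatures (K)
    # t11_clear_composite: per-pixel clear-sky 11 micrometer composite from
    #     recent days at similar observation times
    # offset_ir, offset_diff: placeholder offsets (K), not the operational values
    ir_test = t11 < (t11_clear_composite - offset_ir)          # much colder than clear sky
    diff_test = np.abs(t39 - t11) > offset_diff                # shortwave-longwave departure
    return ir_test | diff_test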
Xu, Lingyu; Xu, Yuancheng; Coulden, Richard; Sonnex, Emer; Hrybouski, Stanislau; Paterson, Ian; Butler, Craig
2018-05-11
Epicardial adipose tissue (EAT) volume derived from contrast enhanced (CE) computed tomography (CT) scans is not well validated. We aim to establish a reliable threshold to accurately quantify EAT volume from CE datasets. We analyzed EAT volume on paired non-contrast (NC) and CE datasets from 25 patients to derive appropriate Hounsfield (HU) cutpoints to equalize two EAT volume estimates. The gold standard threshold (-190HU, -30HU) was used to assess EAT volume on NC datasets. For CE datasets, EAT volumes were estimated using three previously reported thresholds: (-190HU, -30HU), (-190HU, -15HU), (-175HU, -15HU) and were analyzed by a semi-automated 3D Fat analysis software. Subsequently, we applied a threshold correction to (-190HU, -30HU) based on mean differences in radiodensity between NC and CE images (ΔEATrd = CE radiodensity - NC radiodensity). We then validated our findings on EAT threshold in 21 additional patients with paired CT datasets. EAT volume from CE datasets using previously published thresholds consistently underestimated EAT volume from NC dataset standard by a magnitude of 8.2%-19.1%. Using our corrected threshold (-190HU, -3HU) in CE datasets yielded statistically identical EAT volume to NC EAT volume in the validation cohort (186.1 ± 80.3 vs. 185.5 ± 80.1 cm3, Δ = 0.6 cm3, 0.3%, p = 0.374). Estimating EAT volume from contrast enhanced CT scans using a corrected threshold of -190HU, -3HU provided excellent agreement with EAT volume from non-contrast CT scans using a standard threshold of -190HU, -30HU. Copyright © 2018. Published by Elsevier B.V.
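The threshold correction itself amounts to shifting the upper HU bound of the non-contrast window by the mean CE minus NC radiodensity difference; a sketch follows, where the example mean HU values are hypothetical and chosen only so the result matches the reported (-190, -3) HU window.

def corrected_ce_threshold(nc_mean_hu, ce_mean_hu, base_threshold=(-190.0, -30.0)):
    # delta = CE radiodensity - NC radiodensity; applied to the upper bound only,
    # following the study design described in the abstract.
    delta = ce_mean_hu - nc_mean_hu
    lower, upper = base_threshold
    return (lower, upper + delta)

# Hypothetical example: corrected_ce_threshold(nc_mean_hu=-80.0, ce_mean_hu=-53.0)
# returns (-190.0, -3.0).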
Gleiser, R M; Gorla, D E
2007-12-01
Ochlerotatus albifasciatus is a vector of western equine encephalomyelitis in Argentina and a nuisance mosquito affecting beef and dairy production. The objective of this study was to analyze whether environmental proxy data derived from 1 km resolution NOAA-AVHRR images could be useful as a rapid tool for locating areas with higher potential for Oc. albifasciatus activity at a regional scale. Training sites for mosquito abundance categories were 3.3x3.3 km polygons over sampling sites. Abundance was classified into two categories according to a proposed threshold for economic losses. Data of channels 1, 2, 4 and 5 were used to calculate five biophysical variables: normalized differences vegetation index (NDVI), land surface temperature, total precipitable water, dew point and vapour saturation deficit. A discriminant analysis correctly classified 100% of the areas predicted to be above or below the economic threshold of 2500 mosquitoes per night of capture, respectively. Components of the NDVI, the total precipitable water and the dew point temperature contributed most to the function value. The results suggest that environmental data derived from AVHRR-NOAA could be useful for rapidly identifying adequate areas for mosquito development or activity.
USDA-ARS?s Scientific Manuscript database
Through the characterization of a metapopulation cattle disease model on a directed network having source, transit, and sink nodes, we derive two global epidemic invasion thresholds. The first threshold defines the conditions necessary for an epidemic to successfully spread at the global scale. The ...
Threshold Capabilities: Threshold Concepts and Knowledge Capability Linked through Variation Theory
ERIC Educational Resources Information Center
Baillie, Caroline; Bowden, John A.; Meyer, Jan H. F.
2013-01-01
The Threshold Capability Integrated Theoretical Framework (TCITF) is presented as a framework for the design of university curricula, aimed at developing graduates' capability to deal with previously unseen situations in their professional, social, and personal lives. The TCITF is a new theoretical framework derived from, and heavily dependent…
Spatial Distribution of the Threshold Beam Spots of Laser Weapons Simulators
1993-09-08
This paper was based on the transmission theory of elliptical Gaussian beam fluxes in deriving some transmission equations for the threshold beam spots of laser weapon simulators, in order to revise and expand the expressions for the threshold beam spots, their maximum range, the extinction
Yu, Siran; Zhao, Zhehao; Sun, Liming; Li, Ping
2017-02-15
The discovery of microRNAs encapsulated in milk-derived exosomes has revealed stability under extreme conditions reflecting the protection of membranes. We attempted to determine the variations in nanoparticles derived from milk after fermentation, and provide evidence to determine the effects of these exosomes on cells with potential bioactivity. Using scanning electron microscopy and dynamic light scattering, we compared the morphology and particle size distribution of exosomes from yogurt fermented with three different combinations of strains with those from raw milk. The protein content of the exosome was significantly reduced in fermented milk. The cycle threshold showed that the expression of miR-29b and miR-21 was relatively high in raw milk, indicating a loss of microRNA after fermentation. Milk-derived exosomes could promote cell growth and activate the mitogen-activated protein kinase pathway. These findings demonstrated biological functions in milk exosomes and provided new insight into the nutrient composition of dairy products.
Świąder, Mariusz J; Paruszewski, Ryszard; Łuszczki, Jarogniew J
2016-04-01
The aim of this study was to assess the anticonvulsant potency of 6 various benzylamide derivatives [i.e., nicotinic acid benzylamide (Nic-BZA), picolinic acid 2-fluoro-benzylamide (2F-Pic-BZA), picolinic acid benzylamide (Pic-BZA), (RS)-methyl-alanine-benzylamide (Me-Ala-BZA), isonicotinic acid benzylamide (Iso-Nic-BZA), and (R)-N-methyl-proline-benzylamide (Me-Pro-BZA)] in the threshold for maximal electroshock (MEST)-induced seizures in mice. Electroconvulsions (seizure activity) were produced in mice by means of a current (sine-wave, 50Hz, 500V, strength from 4 to 18mA, ear-clip electrodes, 0.2-s stimulus duration, tonic hindlimb extension taken as the endpoint). Nic-BZA, 2F-Pic-BZA, Pic-BZA, Me-Ala-BZA, Iso-Nic-BZA, and Me-Pro-BZA administered systemically (ip) in a dose-dependent manner increase the threshold for maximal electroconvulsions in mice. Linear regression analysis of Nic-BZA, 2F-Pic-BZA, Pic-BZA, MeAla-BZA, IsoNic-BZA, and Me-Pro-BZA doses and their corresponding threshold increases allowed determining threshold increasing doses by 20% (TID20 values) that elevate the threshold in drug-treated animals over the threshold in control animals. The experimentally derived TID20 values in the MEST test for Nic-BZA, 2F-Pic-BZA, Pic-BZA, Me-Ala-BZA, Iso-Nic-BZA, and Me-Pro-BZA were 7.45mg/kg, 7.72mg/kg, 8.74mg/kg, 15.11mg/kg, 21.95mg/kg and 28.06mg/kg, respectively. The studied benzylamide derivatives can be arranged with respect to their anticonvulsant potency in the MEST test as follows: Nic-BZA>2F-Pic-BZA>Pic-BZA>Me-Ala-BZA>Iso-Nic-BZA>Me-Pro-BZA. Copyright © 2015 Institute of Pharmacology, Polish Academy of Sciences. Published by Elsevier Urban & Partner Sp. z o.o. All rights reserved.
A Continuous Threshold Expectile Model.
Zhang, Feipeng; Li, Qunhua
2017-12-01
Expectile regression is a useful tool for exploring the relation between the response and the explanatory variables beyond the conditional mean. A continuous threshold expectile regression is developed for modeling data in which the effect of a covariate on the response variable is linear but varies below and above an unknown threshold in a continuous way. The estimators for the threshold and the regression coefficients are obtained using a grid search approach. The asymptotic properties for all the estimators are derived, and the estimator for the threshold is shown to achieve root-n consistency. A weighted CUSUM type test statistic is proposed for the existence of a threshold at a given expectile, and its asymptotic properties are derived under both the null and the local alternative models. This test only requires fitting the model under the null hypothesis in the absence of a threshold, thus it is computationally more efficient than the likelihood-ratio type tests. Simulation studies show that the proposed estimators and test have desirable finite sample performance in both homoscedastic and heteroscedastic cases. The application of the proposed method on a Dutch growth data and a baseball pitcher salary data reveals interesting insights. The proposed method is implemented in the R package cthreshER .
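A sketch of the bent-line specification and the asymmetric squared loss underlying such a model is given below; the notation is assumed here (additional covariates are omitted) and may differ from the paper's.

% Continuous threshold (bent-line) model at the tau-th expectile:
\[
  y_i = \beta_0 + \beta_1 x_i + \beta_2\,(x_i - t)_{+} + \varepsilon_i,
  \qquad (x_i - t)_{+} = \max(x_i - t,\,0),
\]
% so the slope in x is beta_1 below the threshold t and beta_1 + beta_2 above it,
% with the fit continuous at x = t. Estimation minimizes the asymmetric squared
% (expectile) loss, e.g. over a grid of candidate t:
\[
  \min_{t,\;\beta_0,\beta_1,\beta_2}\;\sum_i
  \bigl|\tau - \mathbf{1}\{y_i < m_i\}\bigr|\,(y_i - m_i)^2,
  \qquad m_i = \beta_0 + \beta_1 x_i + \beta_2 (x_i - t)_{+}.
\]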
NASA Astrophysics Data System (ADS)
Kim, D.-H.; Sandanayaka, A. S. D.; Zhao, L.; Pitrat, D.; Mulatier, J. C.; Matsushima, T.; Andraud, C.; Ribierre, J. C.; Adachi, C.
2017-01-01
We report on the photophysical, amplified spontaneous emission (ASE), and electroluminescence properties of a blue-emitting octafluorene derivative in spin-coated films. The neat film shows an extremely low ASE threshold of 90 nJ/cm², which is related to its high photoluminescence quantum yield of 87% and its large radiative decay rate of 1.7 × 10⁹ s⁻¹. Low-threshold organic distributed feedback semiconductor lasers and fluorescent organic light-emitting diodes with a maximum external quantum efficiency as high as 4.4% are then demonstrated, providing evidence that this octafluorene derivative is a promising candidate for organic laser applications.
Towards a unifying basis of auditory thresholds: binaural summation.
Heil, Peter
2014-04-01
Absolute auditory threshold decreases with increasing sound duration, a phenomenon explainable by the assumptions that the sound evokes neural events whose probabilities of occurrence are proportional to the sound's amplitude raised to an exponent of about 3 and that a constant number of events are required for threshold (Heil and Neubauer, Proc Natl Acad Sci USA 100:6151-6156, 2003). Based on this probabilistic model and on the assumption of perfect binaural summation, an equation is derived here that provides an explicit expression of the binaural threshold as a function of the two monaural thresholds, irrespective of whether they are equal or unequal, and of the exponent in the model. For exponents >0, the predicted binaural advantage is largest when the two monaural thresholds are equal and decreases towards zero as the monaural threshold difference increases. This equation is tested and the exponent derived by comparing binaural thresholds with those predicted on the basis of the two monaural thresholds for different values of the exponent. The thresholds, measured in a large sample of human subjects with equal and unequal monaural thresholds and for stimuli with different temporal envelopes, are compatible only with an exponent close to 3. An exponent of 3 predicts a binaural advantage of 2 dB when the two ears are equally sensitive. Thus, listening with two (equally sensitive) ears rather than one has the same effect on absolute threshold as doubling duration. The data suggest that perfect binaural summation occurs at threshold and that peripheral neural signals are governed by an exponent close to 3. They might also shed new light on mechanisms underlying binaural summation of loudness.
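A plausible reconstruction of the binaural threshold expression implied by the abstract is sketched below; it follows from the stated assumptions (event probability proportional to amplitude raised to the power p, a fixed event count required at threshold, perfect summation across ears) but may differ in detail from the paper's derivation.

% With event probability proportional to A^p and a fixed number of events
% required at threshold, perfect binaural summation pools events from both
% ears, so the binaural threshold amplitude A_b satisfies
\[
  \left(\frac{A_b}{A_L}\right)^{p} + \left(\frac{A_b}{A_R}\right)^{p} = 1
  \quad\Longrightarrow\quad
  A_b = \bigl(A_L^{-p} + A_R^{-p}\bigr)^{-1/p},
\]
% where A_L and A_R are the monaural threshold amplitudes. For equal ears
% (A_L = A_R) the binaural advantage is (20/p) log_10 2 dB, i.e. about 2 dB
% for p = 3, and it shrinks toward zero as the monaural thresholds diverge.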
Time-Dependent Computed Tomographic Perfusion Thresholds for Patients With Acute Ischemic Stroke.
d'Esterre, Christopher D; Boesen, Mari E; Ahn, Seong Hwan; Pordeli, Pooneh; Najm, Mohamed; Minhas, Priyanka; Davari, Paniz; Fainardi, Enrico; Rubiera, Marta; Khaw, Alexander V; Zini, Andrea; Frayne, Richard; Hill, Michael D; Demchuk, Andrew M; Sajobi, Tolulope T; Forkert, Nils D; Goyal, Mayank; Lee, Ting Y; Menon, Bijoy K
2015-12-01
Among patients with acute ischemic stroke, we determine computed tomographic perfusion (CTP) thresholds associated with follow-up infarction at different stroke onset-to-CTP and CTP-to-reperfusion times. Acute ischemic stroke patients with occlusion on computed tomographic angiography were acutely imaged with CTP. Noncontrast computed tomography and magnectic resonance diffusion-weighted imaging between 24 and 48 hours were used to delineate follow-up infarction. Reperfusion was assessed on conventional angiogram or 4-hour repeat computed tomographic angiography. Tmax, cerebral blood flow, and cerebral blood volume derived from delay-insensitive CTP postprocessing were analyzed using receiver-operator characteristic curves to derive optimal thresholds for combined patient data (pooled analysis) and individual patients (patient-level analysis) based on time from stroke onset-to-CTP and CTP-to-reperfusion. One-way ANOVA and locally weighted scatterplot smoothing regression was used to test whether the derived optimal CTP thresholds were different by time. One hundred and thirty-two patients were included. Tmax thresholds of >16.2 and >15.8 s and absolute cerebral blood flow thresholds of <8.9 and <7.4 mL·min(-1)·100 g(-1) were associated with infarct if reperfused <90 min from CTP with onset <180 min. The discriminative ability of cerebral blood volume was modest. No statistically significant relationship was noted between stroke onset-to-CTP time and the optimal CTP thresholds for all parameters based on discrete or continuous time analysis (P>0.05). A statistically significant relationship existed between CTP-to-reperfusion time and the optimal thresholds for cerebral blood flow (P<0.001; r=0.59 and 0.77 for gray and white matter, respectively) and Tmax (P<0.001; r=-0.68 and -0.60 for gray and white matter, respectively) parameters. Optimal CTP thresholds associated with follow-up infarction depend on time from imaging to reperfusion. © 2015 American Heart Association, Inc.
The Dubna-Mainz-Taipei Dynamical Model for πN Scattering and π Electromagnetic Production
NASA Astrophysics Data System (ADS)
Yang, Shin Nan
Some of the featured results of the Dubna-Mainz-Taipei (DMT) dynamical model for πN scattering and π0 electromagnetic production are summarized. These include results for threshold π0 production, deformation of Δ(1232), and the extracted properties of higher resonances below 2 GeV. The excellent agreement of the DMT model's predictions with threshold π0 production data, including the recent precision measurements from MAMI, establishes the DMT model's results as a benchmark for experimentalists and theorists in dealing with threshold pion production.
Higgs boson production at hadron colliders at N3LO in QCD
NASA Astrophysics Data System (ADS)
Mistlberger, Bernhard
2018-05-01
We present the Higgs boson production cross section at Hadron colliders in the gluon fusion production mode through N3LO in perturbative QCD. Specifically, we work in an effective theory where the top quark is assumed to be infinitely heavy and all other quarks are considered to be massless. Our result is the first exact formula for a partonic hadron collider cross section at N3LO in perturbative QCD. Furthermore, our result is an analytic computation of a hadron collider cross section involving elliptic integrals. We derive numerical predictions for the Higgs boson cross section at the LHC. Previously this result was approximated by an expansion of the cross section around the production threshold of the Higgs boson and we compare our findings. Finally, we study the impact of our new result on the state of the art prediction for the Higgs boson cross section at the LHC.
Level crossings and excess times due to a superposition of uncorrelated exponential pulses
NASA Astrophysics Data System (ADS)
Theodorsen, A.; Garcia, O. E.
2018-01-01
A well-known stochastic model for intermittent fluctuations in physical systems is investigated. The model is given by a superposition of uncorrelated exponential pulses, and the degree of pulse overlap is interpreted as an intermittency parameter. Expressions for excess time statistics, that is, the rate of level crossings above a given threshold and the average time spent above the threshold, are derived from the joint distribution of the process and its derivative. Limits of both high and low intermittency are investigated and compared to previously known results. In the case of a strongly intermittent process, the distribution of times spent above threshold is obtained analytically. This expression is verified numerically, and the distribution of times above threshold is explored for other intermittency regimes. The numerical simulations compare favorably to known results for the distribution of times above the mean threshold for an Ornstein-Uhlenbeck process. This contribution generalizes the excess time statistics for the stochastic model, which find applications in a wide diversity of natural and technological systems.
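The generic excess-time relations of this type, obtained from the joint distribution of the process and its derivative, are recalled below; the paper evaluates them specifically for the filtered Poisson (shot-noise) process.

\[
  \nu(\ell) = \int_{0}^{\infty} \dot{x}\, p_{X\dot{X}}(\ell,\dot{x})\,\mathrm{d}\dot{x}
  \qquad\text{(Rice's formula: rate of up-crossings of the level }\ell\text{)},
\]
\[
  \langle T_\ell \rangle = \frac{\Pr\{X > \ell\}}{\nu(\ell)}
  \qquad\text{(average duration of an excursion above }\ell\text{)}.
\]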
A derivation of the stable cavitation threshold accounting for bubble-bubble interactions.
Guédra, Matthieu; Cornu, Corentin; Inserra, Claude
2017-09-01
The subharmonic emission of sound coming from the nonlinear response of a bubble population is the most used indicator for stable cavitation. When driven at twice their resonance frequency, bubbles can exhibit subharmonic spherical oscillations if the acoustic pressure amplitude exceeds a threshold value. Although various theoretical derivations exist for the subharmonic emission by free or coated bubbles, they all rest on the single bubble model. In this paper, we propose an analytical expression of the subharmonic threshold for interacting bubbles in a homogeneous, monodisperse cloud. This theory predicts a shift of the subharmonic resonance frequency and a decrease of the corresponding pressure threshold due to the interactions. For a given sonication frequency, these results show that an optimal value of the interaction strength (i.e. the number density of bubbles) can be found for which the subharmonic threshold is minimum, which is consistent with recently published experiments conducted on ultrasound contrast agents. Copyright © 2017 Elsevier B.V. All rights reserved.
Use of Quality Controlled AIRS Temperature Soundings to Improve Forecast Skill
NASA Technical Reports Server (NTRS)
Susskind, Joel; Reale, Oreste; Iredell, Lena
2010-01-01
AIRS was launched on EOS Aqua on May 4, 2002, together with AMSU-A and HSB, to form a next generation polar orbiting infrared and microwave atmospheric sounding system. The primary products of AIRS/AMSU-A are twice daily global fields of atmospheric temperature-humidity profiles, ozone profiles, sea/land surface skin temperature, and cloud related parameters including OLR. Also included are the clear column radiances used to derive these products, which are representative of the radiances AIRS would have seen if there were no clouds in the field of view. All products also have error estimates. The sounding goals of AIRS are to produce 1 km tropospheric layer mean temperatures with an rms error of 1 K, and layer precipitable water with an rms error of 20 percent, in cases with up to 90 percent effective cloud cover. The products are designed for data assimilation purposes for the improvement of numerical weather prediction, as well as for the study of climate and meteorological processes. With regard to data assimilation, one can use either the products themselves or the clear column radiances from which the products were derived. The AIRS Version 5 retrieval algorithm is now being used operationally at the Goddard DISC in the routine generation of geophysical parameters derived from AIRS/AMSU data. A major innovation in Version 5 is the ability to generate case-by-case level-by-level error estimates for retrieved quantities and clear column radiances, and the use of these error estimates for Quality Control. The temperature profile error estimates are used to determine a case-by-case characteristic pressure pbest, down to which the profile is considered acceptable for data assimilation purposes. The characteristic pressure pbest is determined by comparing the case dependent error estimate δT(p) to the threshold values ΔT(p). The AIRS Version 5 data set provides error estimates of T(p) at all levels, and also profile dependent values of pbest based on use of a Standard profile dependent threshold ΔT(p). These Standard thresholds were designed as a compromise between optimal use for data assimilation purposes, which requires highest accuracy (tighter Quality Control), and climate purposes, which requires more spatial coverage (looser Quality Control). Subsequent research using Version 5 soundings and error estimates showed that tighter Quality Control performs better for data assimilation purposes, while looser Quality Control (better spatial coverage) performs better for climate purposes. We conducted a number of data assimilation experiments using the NASA GEOS-5 Data Assimilation System as a step toward finding an optimum balance of spatial coverage and sounding accuracy with regard to improving forecast skill. The model was run at a horizontal resolution of 0.5 degree latitude x 0.67 degree longitude with 72 vertical levels. These experiments were run during four different seasons, each using a different year. The AIRS temperature profiles were presented to the GEOS-5 analysis as rawinsonde profiles, and the profile error estimates δT(p) were used as the uncertainty for each measurement in the data assimilation process.
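A minimal sketch of the kind of quality-control comparison described above, in which a case-dependent error estimate δT(p) is compared against threshold values ΔT(p) to find the deepest acceptable pressure. The function name, level grid, and numbers are hypothetical, and the operational Version 5 procedure is more involved.

```python
import numpy as np

def find_p_best(pressure_hpa, delta_t, threshold_t):
    """Illustrative quality control: scan from the top of the atmosphere
    downward and return the deepest pressure level down to which the
    case-dependent error estimate delta_t(p) stays at or below the
    threshold DELTA_t(p).  Levels are assumed ordered top-down
    (increasing pressure)."""
    p_best = pressure_hpa[0]
    for p, err, thr in zip(pressure_hpa, delta_t, threshold_t):
        if err > thr:
            break
        p_best = p
    return p_best

levels = np.array([100., 200., 300., 500., 700., 850., 1000.])   # hPa
delta  = np.array([0.6, 0.7, 0.8, 0.9, 1.3, 1.8, 2.5])           # case error estimate (K)
thresh = np.array([1.0, 1.0, 1.0, 1.2, 1.2, 1.5, 1.5])           # QC threshold (K)
print(find_p_best(levels, delta, thresh))   # -> 500.0
```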
Threshold detection in an on-off binary communications channel with atmospheric scintillation
NASA Technical Reports Server (NTRS)
Webb, W. E.; Marino, J. T., Jr.
1974-01-01
The optimum detection threshold in an on-off binary optical communications system operating in the presence of atmospheric turbulence was investigated assuming a Poisson detection process and log normal scintillation. The dependence of the probability of bit error on log amplitude variance and received signal strength was analyzed, and semi-empirical relationships to predict the optimum detection threshold were derived. On the basis of this analysis a piecewise linear model for an adaptive threshold detection system is presented. Bit error probabilities for non-optimum threshold detection systems were also investigated.
Threshold detection in an on-off binary communications channel with atmospheric scintillation
NASA Technical Reports Server (NTRS)
Webb, W. E.
1975-01-01
The optimum detection threshold in an on-off binary optical communications system operating in the presence of atmospheric turbulence was investigated assuming a Poisson detection process and log normal scintillation. The dependence of the probability of bit error on log amplitude variance and received signal strength was analyzed, and semi-empirical relationships to predict the optimum detection threshold were derived. On the basis of this analysis a piecewise linear model for an adaptive threshold detection system is presented. The bit error probabilities for nonoptimum threshold detection systems were also investigated.
40 CFR 98.391 - Reporting threshold.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 22 2013-07-01 2013-07-01 false Reporting threshold. 98.391 Section 98.391 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Suppliers of Petroleum Products § 98.391 Reporting threshold. Any supplier of petroleum products who meets the...
40 CFR 98.391 - Reporting threshold.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 21 2014-07-01 2014-07-01 false Reporting threshold. 98.391 Section 98.391 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Suppliers of Petroleum Products § 98.391 Reporting threshold. Any supplier of petroleum products who meets the...
40 CFR 98.391 - Reporting threshold.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 21 2011-07-01 2011-07-01 false Reporting threshold. 98.391 Section 98.391 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Suppliers of Petroleum Products § 98.391 Reporting threshold. Any supplier of petroleum products who meets the...
Dark photon decay beyond the Euler-Heisenberg limit
NASA Astrophysics Data System (ADS)
McDermott, Samuel D.; Patel, Hiren H.; Ramani, Harikrishnan
2018-04-01
We calculate the exact width for a dark photon decaying to three photons at one-loop order for dark photon masses m' below the e+e- production threshold of 2 me. We find substantial deviations from previous results derived from the lowest order Euler-Heisenberg effective Lagrangian in the range me ≲ m' ≤ 2 me, where higher order terms in the derivative expansion are non-negligible. This mass range is precisely where the three-photon decay takes place on cosmologically relevant timescales. Our improved analysis reveals a window for dark photons in the range 850 keV ≲ m' ≤ 2 me, 10^-5 ≲ ε ≲ 10^-4, that is only constrained by possibly model-dependent bounds on the number of light degrees of freedom in the early Universe.
Using Remotely Sensed Information for Near Real-Time Landslide Hazard Assessment
NASA Technical Reports Server (NTRS)
Kirschbaum, Dalia; Adler, Robert; Peters-Lidard, Christa
2013-01-01
The increasing availability of remotely sensed precipitation and surface products provides a unique opportunity to explore how landslide susceptibility and hazard assessment may be approached at larger spatial scales with higher resolution remote sensing products. A prototype global landslide hazard assessment framework has been developed to evaluate how landslide susceptibility and satellite-derived precipitation estimates can be used to identify potential landslide conditions in near-real time. Preliminary analysis of this algorithm suggests that forecasting errors are geographically variable due to the resolution and accuracy of the current susceptibility map and the application of satellite-based rainfall estimates. This research is currently working to improve the algorithm through considering higher spatial and temporal resolution landslide susceptibility information and testing different rainfall triggering thresholds, antecedent rainfall scenarios, and various surface products at regional and global scales.
Scheerans, Christian; Heinig, Roland; Mueck, Wolfgang
2015-01-01
Recently, the European Medicines Agency (EMA) published the new draft guideline on the pharmacokinetic and clinical evaluation of modified release (MR) formulations. The draft guideline contains the new requirement of performing multiple dose (MD) bioequivalence studies, in the case when the MR formulation is expected to show ‘relevant’ drug accumulation at steady state (SS). This new requirement reveals three fundamental issues, which are discussed in the current work: first, measurement for the extent of drug accumulation (MEDA) predicted from single dose (SD) study data; second, its relationship with the percentage residual area under the plasma concentration–time curve (AUC) outside the dosing interval (τ) after SD administration, %AUC(τ-∞)SD; and third, the rationale for a threshold of %AUC(τ-∞)SD that predicts ‘relevant’ drug accumulation at SS. This work revealed that the accumulation ratio RA,AUC, derived from the ratio of the time-averaged plasma concentrations during τ at SS and after SD administration, respectively, is the ‘preferred’ MEDA for MR formulations. A causal relationship was derived between %AUC(τ-∞)SD and RA,AUC, which is valid for any drug (product) that shows (dose- and time-) linear pharmacokinetics regardless of the shape of the plasma concentration–time curve. Considering AUC thresholds from other guidelines together with the causal relationship between %AUC(τ-∞)SD and RA,AUC indicates that values of %AUC(τ-∞)SD ≤ 20%, resulting in RA,AUC ≤ 1.25, can be considered as leading to non-relevant drug accumulation. Hence, the authors suggest that 20% for %AUC(τ-∞)SD is a reasonable threshold and selection criterion between SD or MD study designs for bioequivalence studies of new MR formulations. © 2014 The Authors Biopharmaceutics & Drug Disposition Published by John Wiley & Sons Ltd. PMID:25327367
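Under the dose- and time-linearity assumption stated above, the relationship between the residual single-dose AUC fraction and the accumulation ratio can be written as follows; this is a sketch of the algebra, consistent with the 20% to 1.25 correspondence quoted in the abstract.

```latex
% Sketch, assuming dose- and time-linear pharmacokinetics, so that the AUC over
% one dosing interval at steady state equals AUC(0-inf) after a single dose:
R_{A,\mathrm{AUC}}
  = \frac{\mathrm{AUC}(0\text{--}\tau)_{\mathrm{SS}}}{\mathrm{AUC}(0\text{--}\tau)_{\mathrm{SD}}}
  = \frac{\mathrm{AUC}(0\text{--}\infty)_{\mathrm{SD}}}{\mathrm{AUC}(0\text{--}\tau)_{\mathrm{SD}}}
  = \frac{1}{1 - \%\mathrm{AUC}(\tau\text{--}\infty)_{\mathrm{SD}}/100}
% e.g. %AUC(tau-inf)_SD = 20% gives R_A,AUC = 1/(1 - 0.20) = 1.25
```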
Estimating background and threshold nitrate concentrations using probability graphs
Panno, S.V.; Kelly, W.R.; Martinsek, A.T.; Hackley, Keith C.
2006-01-01
Because of the ubiquitous nature of anthropogenic nitrate (NO3-) in many parts of the world, determining background concentrations of NO3- in shallow ground water from natural sources is probably impossible in most environments. Present-day background must now include diffuse sources of NO3- such as disruption of soils and oxidation of organic matter, and atmospheric inputs from products of combustion and evaporation of ammonia from fertilizer and livestock waste. Anomalies can be defined as NO3- derived from nitrogen (N) inputs to the environment from anthropogenic activities, including synthetic fertilizers, livestock waste, and septic effluent. Cumulative probability graphs were used to identify threshold concentrations separating background and anomalous NO3-N concentrations and to assist in the determination of sources of N contamination for 232 spring water samples and 200 well water samples from karst aquifers. Thresholds were 0.4, 2.5, and 6.7 mg/L for spring water samples, and 0.1, 2.1, and 17 mg/L for well water samples. The 0.4 and 0.1 mg/L values are assumed to represent thresholds for present-day precipitation. Thresholds at 2.5 and 2.1 mg/L are interpreted to represent present-day background concentrations of NO3-N. The population of spring water samples with concentrations between 2.5 and 6.7 mg/L represents an amalgam of all sources of NO3- in the ground water basins that feed each spring; concentrations >6.7 mg/L were typically samples collected soon after springtime application of synthetic fertilizer. The 17 mg/L threshold (adjusted to 15 mg/L) for well water samples is interpreted as the level above which livestock wastes dominate the N sources. Copyright © 2006 The Author(s).
NASA Astrophysics Data System (ADS)
Scholl, V.; Hulslander, D.; Goulden, T.; Wasser, L. A.
2015-12-01
Spatial and temporal monitoring of vegetation structure is important to the ecological community. Airborne Light Detection and Ranging (LiDAR) systems are used to efficiently survey large forested areas. From LiDAR data, three-dimensional models of forests called canopy height models (CHMs) are generated and used to estimate tree height. A common problem associated with CHMs is data pits, where LiDAR pulses penetrate the top of the canopy, leading to an underestimation of vegetation height. The National Ecological Observatory Network (NEON) currently implements an algorithm to reduce data pit frequency, which requires two height threshold parameters, increment size and range ceiling. CHMs are produced at a series of height increments up to a height range ceiling and combined to produce a CHM with reduced pits (referred to as a "pit-free" CHM). The current implementation uses static values for the height increment and ceiling (5 and 15 meters, respectively). To facilitate the generation of accurate pit-free CHMs across diverse NEON sites with varying vegetation structure, the impacts of adjusting the height threshold parameters were investigated through development of an algorithm which dynamically selects the height increment and ceiling. A series of pit-free CHMs were generated using three height range ceilings and four height increment values for three ecologically different sites. Height threshold parameters were found to change CHM-derived tree heights by up to 36% compared to the original CHMs. The extent of the parameters' influence on modelled tree heights was greater than expected, which will be considered during future CHM data product development at NEON. Figure caption: (A) aerial image of Harvard National Forest; (B) standard CHM containing pits, appearing as black speckles; (C) a pit-free CHM created with the static algorithm implementation; (D) a pit-free CHM created by varying the height threshold ceiling up to 82 m and the increment to 1 m.
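A heavily simplified sketch of the layered-threshold idea behind pit-free CHM generation: partial models are built from only the returns at or above each height threshold and combined by a per-pixel maximum. The rasterization/interpolation step is abstracted into a hypothetical helper, and NEON's actual implementation differs in detail.

```python
import numpy as np

def pit_free_chm(point_heights, rasterize, thresholds):
    """Heavily simplified sketch: build a partial canopy height model from only
    the LiDAR return heights at or above each threshold, then combine the
    partial models by a per-pixel maximum to suppress data pits.
    `rasterize` is a user-supplied (hypothetical) function that interpolates a
    set of return heights into a raster surface."""
    layers = [rasterize(point_heights[point_heights >= h]) for h in thresholds]
    return np.nanmax(np.stack(layers), axis=0)

# NEON's static parameters: 5 m increments up to a 15 m ceiling -> 0, 5, 10, 15 m
thresholds = np.arange(0.0, 15.0 + 5.0, 5.0)
```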
Can gravity waves significantly impact PSC occurrence in the Antarctic?
NASA Astrophysics Data System (ADS)
McDonald, A. J.; George, S. E.; Woollands, R. M.
2009-11-01
A combination of POAM III aerosol extinction and CHAMP RO temperature measurements are used to examine the role of atmospheric gravity waves in the formation of Antarctic Polar Stratospheric Clouds (PSCs). POAM III aerosol extinction observations and quality flag information are used to identify Polar Stratospheric Clouds using an unsupervised clustering algorithm. A PSC proxy, derived by thresholding Met Office temperature analyses with the PSC Type Ia formation temperature (TNAT), shows general agreement with the results of the POAM III analysis. However, in June the POAM III observations of PSC are more abundant than expected from temperature threshold crossings in five out of the eight years examined. In addition, September and October PSC identified using temperature thresholding is often significantly higher than that derived from POAM III; this observation probably being due to dehydration and denitrification. Comparison of the Met Office temperature analyses with corresponding CHAMP observations also suggests a small warm bias in the Met Office data in June. However, this bias cannot fully explain the differences observed. Analysis of CHAMP data indicates that temperature perturbations associated with gravity waves may partially explain the enhanced PSC incidence observed in June (relative to the Met Office analyses). For this month, approximately 40% of the temperature threshold crossings observed using CHAMP RO data are associated with small-scale perturbations. Examination of the distribution of temperatures relative to TNAT shows a large proportion of June data to be close to this threshold, potentially enhancing the importance of gravity wave induced temperature perturbations. Inspection of the longitudinal structure of PSC occurrence in June 2005 also shows that regions of enhancement are geographically associated with the Antarctic Peninsula; a known mountain wave "hotspot". The latitudinal variation of POAM III observations means that we only observe this region in June-July, and thus the true pattern of enhanced PSC production may continue operating into later months. The analysis has shown that early in the Antarctic winter stratospheric background temperatures are close to the TNAT threshold (and PSC formation), and are thus sensitive to temperature perturbations associated with mountain wave activity near the Antarctic peninsula (40% of PSC formation). Later in the season, and at latitudes away from the peninsula, temperature perturbations associated with gravity waves contribute to about 15% of the observed PSC (a value which corresponds well to several previous studies). This lower value is likely to be due to colder background temperatures already achieving the TNAT threshold unaided. Additionally, there is a reduction in the magnitude of gravity waves perturbations observed as POAM III samples poleward of the peninsula.
Ding, Changfeng; Li, Xiaogang; Zhang, Taolin; Ma, Yibing; Wang, Xingxiang
2014-10-01
Soil environmental quality standards with respect to heavy metals in farmland should be established considering both their effects on crop yield and their accumulation in the edible part. A greenhouse experiment was conducted to investigate the effects of chromium (Cr) on biomass production and Cr accumulation in carrot plants grown in a wide range of soils. The results revealed that carrot yield significantly decreased in 18 of the 20 soils when Cr was added at the level of the soil environmental quality standard of China. The Cr content of carrots grown in the five soils with pH>8.0 exceeded the maximum allowable level (0.5 mg kg(-1)) according to the Chinese General Standard for Contaminants in Foods. The relationship between carrot Cr concentration and soil pH could be well fitted (R(2)=0.70, P<0.0001) by a linear-linear segmented regression model. The addition of Cr to soil thus affected carrot yield before it affected food quality. The major soil factors controlling Cr phytotoxicity were identified, and prediction models were developed, using path analysis and stepwise multiple linear regression analysis. Soil Cr thresholds for phytotoxicity that also ensure food safety were then derived on the basis of a 10 percent yield reduction. Copyright © 2014 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Pardo, R.; Berg, A. A.; Warland, J. S.
2017-12-01
The use of microwave remote sensing for surface ground ice detection has been well documented using both active and passive systems. Typical validation of these remotely sensed F/T state products relies on in-situ air or soil temperature measurements and a threshold of 0°C to identify frozen soil. However, in soil pores, the effects of capillary and adsorptive forces combine with the presence of dissolved salts to depress the freezing point. This is further confounded by the fact that water over this temperature range releases/absorbs latent heat of freezing/fusion. Indeed, recent results from SLAPEx2015, a campaign conducted to evaluate the ability to detect F/T state and examine the controls on F/T detection at multiple resolutions, suggest that using a soil temperature of 0°C as a threshold for freezing may not be appropriate. Coaxial impedance sensors, like Steven's HydraProbeII (HP), are the most widely used soil sensor in water supply forecast and climatological networks. These soil moisture probes have recently been used to validate remote sensing F/T products. This kind of validation is still relatively uncommon and dependent on categorical techniques based on seasonal reference states of frozen and non-frozen soil conditions. An experiment was conducted to identify the correlation between the phase state of the soil moisture and the probe measurements. Eight soil cores were subjected to F/T transitions in an environmental chamber. For each core, at a depth of 2.5 cm, the temperature and real dielectric constant (rdc) were measured every five minutes using HPs while two heat pulse probes captured the apparent heat capacity 24 minutes apart. Preliminary results show the phase transition of water is bounded by inflection points in the soil temperature, attributed to latent heat. The rdc, however, appears to be highly sensitive to changes in the water preceding the phase change. This opens the possibility of estimating a dynamic temperature threshold for soil F/T by identifying the soil temperatures at the times during which these inflection points in the soil rdc occur. This technique provides a more accurate threshold for F/T product than the static reference temperature currently established.
Optimal thresholds for the estimation of area rain-rate moments by the threshold method
NASA Technical Reports Server (NTRS)
Short, David A.; Shimizu, Kunio; Kedem, Benjamin
1993-01-01
Optimization of the threshold method, achieved by determination of the threshold that maximizes the correlation between an area-average rain-rate moment and the area coverage of rain rates exceeding the threshold, is demonstrated empirically and theoretically. Empirical results for a sequence of GATE radar snapshots show optimal thresholds of 5 and 27 mm/h for the first and second moments, respectively. Theoretical optimization of the threshold method by the maximum-likelihood approach of Kedem and Pavlopoulos (1991) predicts optimal thresholds near 5 and 26 mm/h for lognormally distributed rain rates with GATE-like parameters. The agreement between theory and observations suggests that the optimal threshold can be understood as arising due to sampling variations, from snapshot to snapshot, of a parent rain-rate distribution. Optimal thresholds for gamma and inverse Gaussian distributions are also derived and compared.
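The optimization described above can be illustrated numerically: for synthetic lognormally distributed rain-rate snapshots, scan candidate thresholds and keep the one that maximizes the correlation between the area-average rain rate and the fractional coverage above the threshold. The distribution parameters below are illustrative, not the GATE values.

```python
import numpy as np

# Find the threshold tau that maximizes the correlation between the area-average
# rain rate and the fractional area with rain rate > tau, for synthetic
# lognormally distributed rain-rate "snapshots" (parameters illustrative).
rng = np.random.default_rng(2)
n_snapshots, n_pixels = 200, 2000
mu = rng.normal(0.5, 0.3, n_snapshots)                       # snapshot-to-snapshot variability
rain = np.exp(mu[:, None] + rng.standard_normal((n_snapshots, n_pixels)))

area_mean = rain.mean(axis=1)                                # first moment per snapshot
best_tau, best_r = None, -1.0
for tau in np.arange(0.5, 20.0, 0.5):                        # candidate thresholds (mm/h)
    coverage = (rain > tau).mean(axis=1)                     # fractional coverage above tau
    r = np.corrcoef(area_mean, coverage)[0, 1]
    if r > best_r:
        best_tau, best_r = tau, r
print(best_tau, best_r)
```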
Experimental studies of the near threshold production of K+K- pairs at COSY-11
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gil, Damian; Smyrski, Jerzy
2007-11-07
This paper sums up experimental studies of the near threshold production of K+K- pairs at COSY-11. The total cross section of the reaction pp → ppK+K- has been measured at five excess energies below the φ production threshold with the magnetic spectrometer COSY-11. The new data show a significant enhancement of the total cross section compared to pure phase space expectations.
Sieracki, M E; Reichenbach, S E; Webb, K L
1989-01-01
The accurate measurement of bacterial and protistan cell biomass is necessary for understanding their population and trophic dynamics in nature. Direct measurement of fluorescently stained cells is often the method of choice. The tedium of making such measurements visually on the large numbers of cells required has prompted the use of automatic image analysis for this purpose. Accurate measurements by image analysis require an accurate, reliable method of segmenting the image, that is, distinguishing the brightly fluorescing cells from a dark background. This is commonly done by visually choosing a threshold intensity value which most closely coincides with the outline of the cells as perceived by the operator. Ideally, an automated method based on the cell image characteristics should be used. Since the optical nature of edges in images of light-emitting, microscopic fluorescent objects is different from that of images generated by transmitted or reflected light, it seemed that automatic segmentation of such images may require special considerations. We tested nine automated threshold selection methods using standard fluorescent microspheres ranging in size and fluorescence intensity and fluorochrome-stained samples of cells from cultures of cyanobacteria, flagellates, and ciliates. The methods included several variations based on the maximum intensity gradient of the sphere profile (first derivative), the minimum in the second derivative of the sphere profile, the minimum of the image histogram, and the midpoint intensity. Our results indicated that thresholds determined visually and by first-derivative methods tended to overestimate the threshold, causing an underestimation of microsphere size. The method based on the minimum of the second derivative of the profile yielded the most accurate area estimates for spheres of different sizes and brightnesses and for four of the five cell types tested. A simple model of the optical properties of fluorescing objects and the video acquisition system is described which explains how the second derivative best approximates the position of the edge. PMID:2516431
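A minimal sketch of the second-derivative criterion that performed best in this study: locate the minimum of the second derivative along an intensity profile and take the intensity there as the segmentation threshold. The profile below is synthetic and the implementation details are assumptions.

```python
import numpy as np

def second_derivative_threshold(profile):
    """Pick a segmentation threshold from a 1-D intensity profile across the
    edge of a fluorescing object: take the intensity at the minimum of the
    profile's second derivative (the point of strongest downward curvature)."""
    d2 = np.gradient(np.gradient(profile))
    return profile[np.argmin(d2)]

# Synthetic blurred-edge profile: bright plateau falling off to dark background
x = np.linspace(-1.0, 1.0, 201)
profile = 200.0 / (1.0 + np.exp((np.abs(x) - 0.5) / 0.05)) + 10.0
print(second_derivative_threshold(profile))
```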
Lavender, Ashley L; Bartol, Soraya M; Bartol, Ian K
2014-07-15
Sea turtles reside in different acoustic environments with each life history stage and may have different hearing capacity throughout ontogeny. For this study, two independent yet complementary techniques for hearing assessment, i.e. behavioral and electrophysiological audiometry, were employed to (1) measure hearing in post-hatchling and juvenile loggerhead sea turtles Caretta caretta (19-62 cm straight carapace length) to determine whether these migratory turtles exhibit an ontogenetic shift in underwater auditory detection and (2) evaluate whether hearing frequency range and threshold sensitivity are consistent in behavioral and electrophysiological tests. Behavioral trials first required training turtles to respond to known frequencies, a multi-stage, time-intensive process, and then recording their behavior when they were presented with sound stimuli from an underwater speaker using a two-response forced-choice paradigm. Electrophysiological experiments involved submerging restrained, fully conscious turtles just below the air-water interface and recording auditory evoked potentials (AEPs) when sound stimuli were presented using an underwater speaker. No significant differences in behavior-derived auditory thresholds or AEP-derived auditory thresholds were detected between post-hatchling and juvenile sea turtles. While hearing frequency range (50-1000/1100 Hz) and highest sensitivity (100-400 Hz) were consistent in audiograms pooled by size class for both behavior and AEP experiments, both post-hatchlings and juveniles had significantly higher AEP-derived than behavior-derived auditory thresholds, indicating that behavioral assessment is a more sensitive testing approach. The results from this study suggest that post-hatchling and juvenile loggerhead sea turtles are low-frequency specialists, exhibiting little differences in threshold sensitivity and frequency bandwidth despite residence in acoustically distinct environments throughout ontogeny. © 2014. Published by The Company of Biologists Ltd.
Ottinger, Harald; Soldo, Tomislav; Hofmann, Thomas
2003-02-12
Application of a novel screening procedure, the comparative taste dilution analysis (cTDA), on the non-solvent-extractable reaction products formed in a thermally processed aqueous solution of glucose and l-alanine led to the discovery of the presence of a sweetness-enhancing Maillard reaction product. Isolation, followed by LC-MS and 1D- and 2D-NMR measurements, and synthesis led to its unequivocal identification as N-(1-carboxyethyl)-6-(hydroxymethyl)pyridinium-3-ol inner salt. This so-called alapyridaine, although being tasteless itself, is the first nonvolatile, sweetness-enhancing Maillard reaction product reported in the literature. Depending on the pH value, the detection thresholds of sweet sugars, amino acids, and aspartame, respectively, were found to be significantly decreased when alapyridaine was present; for example, the threshold of glucose decreased by a factor of 16 in an equimolar mixture of glucose and alapyridaine. Studies on the influence of the stereochemistry on taste-enhancing activity revealed that the (+)-(S)-alapyridaine is the physiologically active enantiomer, whereas the (-)-(R)-enantiomer did not affect sweetness perception at all. Thermal processing of aqueous solutions of alapyridaine at 80 degrees C demonstrated a high thermal and hydrolytic stability of that sweetness enhancer; for example, more than 90 or 80% of alapyridaine was recovered when heated for 5 h at pH 7.0, 5.0, or 3.0, respectively.
The perturbed compound Poisson risk model with constant interest and a threshold dividend strategy
NASA Astrophysics Data System (ADS)
Gao, Shan; Liu, Zaiming
2010-03-01
In this paper, we consider the compound Poisson risk model perturbed by diffusion with constant interest and a threshold dividend strategy. Integro-differential equations with certain boundary conditions for the moment-generating function and the nth moment of the present value of all dividends until ruin are derived. We also derive integro-differential equations with boundary conditions for the Gerber-Shiu functions. The special case in which the claim size distribution is exponential is considered in some detail.
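For orientation, one common way to write the surplus process underlying this class of models is sketched below; the notation (premium rate c, dividend rate α above threshold b, interest force δ, diffusion coefficient σ) is illustrative and may differ from the paper's exact formulation.

```latex
% Sketch (notation illustrative): surplus U(t) earns interest at force delta,
% receives premiums at rate c, pays compound Poisson claims S(t), is perturbed
% by a Brownian motion W(t), and pays dividends at rate alpha whenever it
% exceeds the threshold b.
dU(t) = \left[ c + \delta\, U(t) - \alpha\,\mathbf{1}\{U(t) > b\} \right] dt
        \;-\; dS(t) \;+\; \sigma\, dW(t),
\qquad
S(t) = \sum_{i=1}^{N(t)} X_i .
```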
Currie, Benjamin J; Johns, Chris; Chin, Matthew; Charalampopolous, Thanos; Elliot, Charlie A; Garg, Pankaj; Rajaram, Smitha; Hill, Catherine; Wild, Jim W; Condliffe, Robin A; Kiely, David G; Swift, Andy J
2018-06-01
Patients with pulmonary hypertension due to left heart disease (PH-LHD) have overlapping clinical features with pulmonary arterial hypertension, making diagnosis reliant on right heart catheterization (RHC). This study aimed to investigate computed tomography pulmonary angiography (CTPA)-derived cardiopulmonary structural metrics, in comparison to magnetic resonance imaging (MRI), for the diagnosis of left heart disease in patients with suspected pulmonary hypertension. Patients with suspected pulmonary hypertension who underwent CTPA, MRI and RHC were identified. Measurements of the cardiac chambers and vessels were recorded from CTPA and MRI. The diagnostic thresholds of individual measurements to detect elevated pulmonary arterial wedge pressure (PAWP) were identified in a derivation cohort (n = 235). Individual CT- and MRI-derived metrics were tested in a validation cohort (n = 211). In total, 446 patients were included, of whom 88 had left heart disease. Left atrial area was a strong predictor of elevated PAWP >15 mm Hg and PAWP >18 mm Hg, with area under the curve (AUC) 0.854 and 0.873, respectively. Similar accuracy was also identified for MRI-derived LA volume, AUC 0.852 and AUC 0.878 for PAWP > 15 and 18 mm Hg, respectively. Left atrial areas of 26.8 cm2 and 30.0 cm2 were the optimal specific thresholds for identification of PAWP > 15 and 18 mm Hg, with sensitivities of 60%/53% and specificities of 89%/94%, respectively, in the validation cohort. CTPA- and MRI-derived left atrial size identifies left heart disease in suspected pulmonary hypertension with high specificity. The proposed diagnostic thresholds for elevated left atrial area on routine CTPA may be useful to indicate the diagnosis of left heart disease in suspected pulmonary hypertension. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
Beauty and charm production in fixed target experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kidonakis, Nikolaos; Vogt, Ramona
We present calculations of NNLO threshold corrections for beauty and charm production in π-p and pp interactions at fixed-target experiments. Recent calculations for heavy quark hadroproduction have included next-to-next-to-leading-order (NNLO) soft-gluon corrections [1] to the double differential cross section from threshold resummation techniques [2]. These corrections are important for near-threshold beauty and charm production at fixed-target experiments, including HERA-B and some of the current and future heavy ion experiments.
Xu, Fuqing; Wang, Zhi-Wu; Tang, Li; Li, Yebo
2014-09-01
In solid-state anaerobic digestion (SS-AD) of cellulosic biomass, the volumetric methane production rate has often been found to increase with the increase in total solids (TS) content until a threshold is reached, and then to decrease. This phenomenon cannot be explained by conventional understanding derived from liquid anaerobic digestion. This study proposed that the high TS content-caused mass diffusion limitation may be responsible for the observed methane production deterioration. Based on this hypothesis, a new SS-AD model was developed by taking into account the mass diffusion limitation and hydrolysis inhibition. The good agreement between model simulation and the experimental as well as literature data verified that the observed reduction in volumetric methane production rate could be ascribed to hydrolysis inhibition as a result of the mass diffusion limitation in SS-AD. Copyright © 2014 Elsevier Ltd. All rights reserved.
Creating a Holistic Extractables and Leachables (E&L) Program for Biotechnology Products.
Li, Kim; Rogers, Gary; Nashed-Samuel, Yasser; Lee, Hans; Mire-Sluis, Anthony; Cherney, Barry; Forster, Ronald; Yeh, Ping; Markovic, Ingrid
2015-01-01
The risk mitigation of extractables and leachables presents significant challenges to regulators and drug manufacturers with respect to the development, as well as the lifecycle management, of drug products. A holistic program is proposed, using a science- and risk-based strategy for testing extractables and leachables from primary containers, drug delivery devices, and single-use systems for the manufacture of biotechnology products. The strategy adopts the principles and concepts from ICH Q9 and ICH Q8(R2). The strategy is phase-appropriate, progressing from extractables testing for material screening/selection/qualification through leachables testing of final products. The strategy is designed primarily to ensure patient safety and product quality of biotechnology products. The holistic program requires robust extraction studies using model solvents, with careful consideration of solvation effect, pH, ionic strength, temperature, and product-contact surface and duration. From a wide variety of process- and product-contact materials, such extraction studies have identified and quantified over 200 organic extractable compounds. The most commonly observed compounds were siloxanes, fatty acid amides, and methacrylates. Toxicology assessments were conducted on these compounds using risk-based decision analysis. Parenteral permitted daily exposure limits were derived, as appropriate, for the majority of these compounds. Analysis of the derived parenteral permitted daily exposure limits helped to establish action thresholds to target high-risk leachables in drug products on stability until expiry. Action thresholds serve to trigger quality investigations to determine potential product impact. The holistic program also evaluates the potential risk for immunogenicity. This approach for primary drug containers and delivery devices is also applicable to single-use systems when justified with a historical knowledge base and understanding of the manufacturing processes of biotechnology products. In the development of a drug product, careful consideration is given to impurities that may originate from manufacturing equipment, process components, and packaging materials. The majority of such impurities are common chemical additives used to improve the physicochemical properties of a wide range of plastic materials. Suppliers and drug manufacturers conduct studies to extract chemical additives from the plastic materials in order to screen and predict those that may leach into a drug product. In this context, the term extractables refers to a profile of extracted compounds observed in studies under harsh conditions. In contrast, the term leachables refers to those impurities that leach from the materials under real-use conditions and may be present in final drug products. The purpose of this article is to present a holistic approach that effectively minimizes the risk of leachables to patient safety and product quality. © PDA, Inc. 2015.
NASA Astrophysics Data System (ADS)
Muench, R.; Jones, M.; Herndon, K. E.; Bell, J. R.; Anderson, E. R.; Markert, K. N.; Molthan, A.; Adams, E. C.; Shultz, L.; Cherrington, E. A.; Flores, A.; Lucey, R.; Munroe, T.; Layne, G.; Pulla, S. T.; Weigel, A. M.; Tondapu, G.
2017-12-01
On August 25, 2017, Hurricane Harvey made landfall between Port Aransas and Port O'Connor, Texas, bringing with it unprecedented amounts of rainfall and flooding. In times of natural disasters of this nature, emergency responders require timely and accurate information about the hazard in order to assess and plan for disaster response. Due to the extreme flooding impacts associated with Hurricane Harvey, delineations of water extent were crucial to inform resource deployment. Through the USGS's Hazards Data Distribution System, government and commercial vendors were able to acquire and distribute various satellite imagery to analysts to create value-added products that can be used by these emergency responders. Rapid-response water extent maps were created through a collaborative multi-organization and multi-sensor approach. One team of researchers created Synthetic Aperture Radar (SAR) water extent maps using modified Copernicus Sentinel data (2017), processed by ESA. This group used backscatter images, pre-processed by the Alaska Satellite Facility's Hybrid Pluggable Processing Pipeline (HyP3), to identify and apply a threshold for detecting water in the image. Quality control was conducted by manually examining the image and correcting for potential errors. Another group of researchers and graduate student volunteers derived water masks from high resolution DigitalGlobe and SPOT images. Through a system of standardized image processing, quality control measures, and communication channels, the team provided timely and fairly accurate water extent maps to support a larger NASA Disasters Program response. The optical imagery was processed through a combination of various band thresholds using the Normalized Difference Water Index (NDWI), Modified Normalized Water Index (MNDWI), Normalized Difference Vegetation Index (NDVI), and cloud masking. Several aspects of the pre-processing and image access were run on internal servers to expedite the provision of images to analysts who could focus on manipulating thresholds and quality control checks for maximum accuracy within the time constraints. The combined results of the radar- and optical-derived value-added products through the coordination of multiple organizations provided timely information for emergency response and recovery efforts.
NASA Technical Reports Server (NTRS)
Muench, Rebekke; Jones, Madeline; Herndon, Kelsey; Schultz, Lori; Bell, Jordan; Anderson, Eric; Markert, Kel; Molthan, Andrew; Adams, Emily; Cherrington, Emil;
2017-01-01
On August 25, 2017, Hurricane Harvey made landfall between Port Aransas and Port O'Connor, Texas, bringing with it unprecedented amounts of rainfall and record flooding. In times of natural disasters of this nature, emergency responders require timely and accurate information about the hazard in order to assess and plan for disaster response. Due to the extreme flooding impacts associated with Hurricane Harvey, delineations of water extent were crucial to inform resource deployment. Through the USGS's Hazards Data Distribution System, government and commercial vendors were able to acquire and distribute various satellite imagery to analysts to create value-added products that can be used by these emergency responders. Rapid-response water extent maps were created through a collaborative multi-organization and multi-sensor approach. One team of researchers created Synthetic Aperture Radar (SAR) water extent maps using modified Copernicus Sentinel data (2017), processed by ESA. This group used backscatter images, pre-processed by the Alaska Satellite Facility's Hybrid Pluggable Processing Pipeline (HyP3), to identify and apply a threshold for detecting water in the image. Quality control was conducted by manually examining the image and correcting for potential errors. Another group of researchers and graduate student volunteers derived water masks from high resolution DigitalGlobe and SPOT images. Through a system of standardized image processing, quality control measures, and communication channels, the team provided timely and fairly accurate water extent maps to support a larger NASA Disasters Program response. The optical imagery was processed through a combination of various band thresholds using the Normalized Difference Water Index (NDWI), Modified Normalized Water Index (MNDWI), Normalized Difference Vegetation Index (NDVI), and cloud masking. Several aspects of the pre-processing and image access were run on internal servers to expedite the provision of images to analysts who could focus on manipulating thresholds and quality control checks for maximum accuracy within the time constraints. The combined results of the radar- and optical-derived value-added products through the coordination of multiple organizations provided timely information for emergency response and recovery efforts.
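A minimal sketch of one of the index-threshold steps mentioned in both summaries above, using the McFeeters NDWI; the threshold value and reflectance tiles are hypothetical and not those used by the response teams.

```python
import numpy as np

def ndwi_water_mask(green, nir, threshold=0.0):
    """Compute the Normalized Difference Water Index,
    NDWI = (green - NIR) / (green + NIR), and flag pixels above a chosen
    threshold as water.  The threshold of 0.0 is a common starting point,
    not the value used in the Harvey response."""
    green = green.astype(float)
    nir = nir.astype(float)
    ndwi = (green - nir) / (green + nir + 1e-9)   # small constant avoids divide-by-zero
    return ndwi > threshold

# Hypothetical 2x2 reflectance tiles: water is brighter in green, dark in NIR
green = np.array([[0.10, 0.30], [0.08, 0.25]])
nir   = np.array([[0.30, 0.05], [0.28, 0.04]])
print(ndwi_water_mask(green, nir))
```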
NASA Technical Reports Server (NTRS)
Temkin, A.
1984-01-01
Temkin (1982) has derived the ionization threshold law based on a Coulomb-dipole theory of the ionization process. The present investigation is concerned with a reexamination of several aspects of the Coulomb-dipole threshold law. Attention is given to the energy scale of the logarithmic denominator, the spin-asymmetry parameter, and an estimate of alpha and the energy range of validity of the threshold law, taking into account the result of the two-electron photodetachment experiment conducted by Donahue et al. (1984).
Wetting in Color: Designing a colorimetric indicator for wettability
NASA Astrophysics Data System (ADS)
Raymond, Kevin; Burgess, Ian B.; Koay, Natalie; Kolle, Mathias; Loncar, Marko; Aizenberg, Joanna
2012-02-01
Colorimetric litmus tests such as pH paper have enjoyed wide commercial success due to their inexpensive production and exceptional ease of use. While such indicators commonly rely on a specific photochemical response to an analyte, we exploit structural color, derived from coherent scattering from wavelength-scale porosity rather than molecular absorption or luminescence, to create a Wetting-in-Color-Kit (WICK). This inexpensive and highly selective colorimetric indicator for organic liquids employs chemically encoded inverse-opal photonic crystals to translate minute differences in liquids' wettability to macroscopically distinct, easy-to-visualize color patterns. The highly symmetric re-entrant inter-pore geometry imparts a highly specific wetting threshold for liquids. We developed surface modification techniques to generate built-in chemistry gradients within the porous network. These let us tailor the wettability threshold to specific liquids across a continuous range. As wetting is a generic fluidic phenomenon, we envision that WICK could be suitable for applications in authentication or identification of unknown liquids across a broad range of industries.
NASA Astrophysics Data System (ADS)
Chen, Wen-Shiang
Ultrasound contrast agents (UCA) have shown great potential in both diagnostic and therapeutic applications recently. To fully explore the possible applications and the safety concerns of using UCA, a complete understanding of the UCA responses to various acoustic fields is necessary. Therefore, we performed a series of experiments and simulations to investigate the various acoustic properties of UCA with different gases and shells. We also investigated the mechanisms of some UCA-enhanced bioeffects including thrombolysis, hemolysis and high-intensity focused ultrasound (HIFU) tumor ablation. Two pressure thresholds were found: the fragmentation threshold and the continuous inertial cavitation (IC) threshold. At the fragmentation threshold, bubbles were destroyed and the released gas dissolved in the surrounding solution at a rate which depended on the bubble's initial size and type of gas. The continuous IC threshold occurred at a higher pressure, where fragments of destroyed UCA (derivative bubbles) underwent violent inertial collapse; the period of activity depended on acoustic parameters such as frequency, pressure, pulse length, and pulse repetition frequency (PRF). Different UCA had different threshold pressures and demonstrated different magnitudes of IC activity after destruction. The number of derivative bubbles generated by IC was determined by several acoustic parameters including pressure, pulse length, and PRF. For the same acoustic energy delivered, longer pulses generated more bubbles. More IC could be induced if the derivative bubbles could survive through the 'off' period of the pulsed ultrasound waves and serve as nuclei for the subsequent IC. In therapeutic applications, evidence of IC activity was recorded during the hemolysis, thrombolysis, and lesion-formation processes with UCA. Hemolysis and thrombolysis were highly correlated to the presence of ultrasound and UCA, and correlated well with the amount of the IC activity. Finally, the 'tadpole-shaped' lesion formed during high-intensity focused ultrasound treatment was the result of bubble formation by boiling.
Dong, Liang; Zheng, Lei; Yang, Suwen; Yan, Zhenguang; Jin, Weidong; Yan, Yuhong
2017-05-01
Hexabromocyclododecane (HBCD) is a brominated flame retardant used throughout the world. It has been detected in various environmental media and has been shown to be toxic to aquatic life. The toxic effects of HBCD on aquatic organisms in Chinese freshwater ecosystems are discussed here. Experiments were conducted with nine types of acute toxicity testing and three types of chronic toxicity testing. After comparing a range of species sensitivity distribution models, the best-fitting Burr Type III model was used to derive the safety thresholds for HBCD. The acute safety threshold and the chronic safety threshold of HBCD for Chinese freshwater organisms were found to be 2.32 mg/L and 0.128 mg/L, respectively. Both values were verified by the methods of the Netherlands and the United States. HBCD was found to be less toxic than other widely used brominated flame retardants. The present results provide valuable information for revision of the water quality standard for HBCD in China. Copyright © 2017 Elsevier Inc. All rights reserved.
Model for Predicting Passage of Invasive Fish Species Through Culverts
NASA Astrophysics Data System (ADS)
Neary, V.
2010-12-01
Conservation efforts to promote or inhibit fish passage include the application of simple fish passage models to determine whether an open channel flow allows passage of a given fish species. Derivations of simple fish passage models for uniform and nonuniform flow conditions are presented. For uniform flow conditions, a model equation is developed that predicts the mean-current velocity threshold in a fishway, or velocity barrier, which causes exhaustion at a given maximum distance of ascent. The derivation of a simple expression for this exhaustion-threshold (ET) passage model is presented using kinematic principles coupled with fatigue curves for threatened and endangered fish species. Mean current velocities at or above the threshold predict failure to pass. Mean current velocities below the threshold predict successful passage. The model is therefore intuitive and easily applied to predict passage or exclusion. The ET model’s simplicity comes with limitations, however, including its application only to uniform flow, which is rarely found in the field. This limitation is addressed by deriving a model that accounts for nonuniform conditions, including backwater profiles and drawdown curves. Comparison of these models with experimental data from volitional swimming studies of fish indicates reasonable performance, but limitations are still present due to the difficulty in predicting fish behavior and passage strategies that can vary among individuals and different fish species.
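A kinematic sketch of the exhaustion-threshold idea for uniform flow: with a hypothetical log-linear fatigue curve, the maximum distance of ascent at a given water velocity is the best achievable ground speed multiplied by the corresponding endurance time, and the velocity threshold is the largest water velocity at which that distance still reaches the culvert length. All coefficients below are invented for illustration and are not the ET model's published parameters.

```python
import numpy as np

def max_ascent_distance(u_water, a=10.0, b=-3.0, u_min=0.5, u_max=3.0):
    """A fish swimming at ground speed (u_fish - u_water) for its endurance
    time t(u_fish) covers d = (u_fish - u_water) * t(u_fish).  Endurance is
    taken from a hypothetical log-linear fatigue curve ln(t) = a + b*u_fish
    (t in s, u in m/s); the fish is assumed to choose the swim speed that
    maximizes its distance of ascent."""
    u_fish = np.linspace(max(u_min, u_water + 1e-3), u_max, 500)
    endurance = np.exp(a + b * u_fish)
    return np.max((u_fish - u_water) * endurance)

def velocity_threshold(culvert_length, resolution=1e-3):
    """Largest mean current velocity for which the maximum distance of ascent
    still reaches the culvert length (higher velocities predict exclusion)."""
    u = 0.0
    while max_ascent_distance(u) >= culvert_length:
        u += resolution
    return u

print(velocity_threshold(30.0))   # e.g. threshold velocity for a 30 m culvert
```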
NASA Astrophysics Data System (ADS)
Sicard, Pierre; Martin-lauzer, François-regis
2017-04-01
In the context of global climate change and the design and implementation of adjustment/resilience policies, there is a need not only (i) for environmental monitoring, e.g. through a range of Earth Observation (EO) land "products", but also (ii) for a precise assessment of the uncertainties of this information that feeds environmental decision-making (to be introduced in the EO metadata), and (iii) for proper handling of the thresholds which help translate "environment tolerance limits" into detectable EO changes through ecosystem modelling. Insight into uncertainties means knowledge of precision and accuracy, and hence the ability to set thresholds for change detection systems. Traditionally, the validation of satellite-derived products has taken the form of intensive field campaigns to sanction the introduction of data processors in Payload Data Ground Segment chains. Because this approach is marred by logistical challenges and cost issues, it is complemented by specific surveys at ground-based monitoring sites which can provide near-continuous observations at high temporal resolution (e.g. RadCalNet). Unfortunately, most of the ground-level monitoring sites, numbering in the hundreds or thousands, which are part of wider observation networks (e.g. FLUXNET, NEON, IMAGINES), mainly monitor the state of the atmosphere and the radiation exchange at the surface, which differ from the products derived from EO data. In addition, they are "point-based" compared with the EO coverage to be obtained from Sentinel-2 or Sentinel-3. Yet data from these networks, processed by spatial extrapolation models, are well suited to the bottom-up approach and relevant to the validation of vegetation parameters' consistency (e.g. leaf area index, fraction of absorbed photosynthetically active radiation). Consistency means minimal errors in the spatial and temporal gradients of EO products. A test of the procedure for assessing the consistency of land-cover products against field measurements delivered by worldwide networks will be presented. The sample extrapolation models will make use of conventional geographic variables (e.g. major biogeographical regions or biomes, climatic and socio-economic zones, and different ecosystem types and land cover classes, focusing on important ecosystems such as forests and grasslands). The focus will be on (i) upscaling procedures, from in-situ data to land product matchups, and (ii) continuous calibration (spectral, radiometric) and adjustment (geometric, radiometric) of processors.
Modeling heat stress under different environmental conditions.
Carabaño, M J; Logar, B; Bormann, J; Minet, J; Vanrobays, M-L; Díaz, C; Tychon, B; Gengler, N; Hammami, H
2016-05-01
Renewed interest in heat stress effects on livestock productivity derives from climate change, which is expected to increase temperatures and the frequency of extreme weather events. This study aimed at evaluating the effect of temperature and humidity on milk production in highly selected dairy cattle populations across 3 European regions differing in climate and production systems to detect differences and similarities that can be used to optimize heat stress (HS) effect modeling. Milk, fat, and protein test day data from official milk recording for 1999 to 2010 in 4 Holstein populations located in the Walloon Region of Belgium (BEL), Luxembourg (LUX), Slovenia (SLO), and southern Spain (SPA) were merged with temperature and humidity data provided by the state meteorological agencies. After merging, the number of test day records/cows per trait ranged from 686,726/49,655 in SLO to 1,982,047/136,746 in BEL. The ranges of the daily average and maximum temperature-humidity indices (THIavg and THImax), given as THIavg/THImax, were widest in SLO (22-74/28-84) and narrowest in SPA (39-76/46-83). Change point techniques were used to determine comfort thresholds, which differed across traits and climatic regions. Milk yield showed an inverted U-shaped pattern of response across the THI scale with an HS threshold around 73 THImax units. For fat and protein, thresholds were lower than for milk yield and were shifted around 6 THI units toward larger values in SPA compared with the other countries. Fat showed lower HS thresholds than protein traits in all countries. The traditional broken line model was compared with quadratic and cubic fits of the pattern of response in production to increasing heat loads. A cubic polynomial model allowing for individual variation in patterns of response, with THIavg as the heat load measure, showed the best statistical features. Higher/lower producing animals showed less/more persistent production (quantity and quality) across the THI scale. The estimated correlations between comfort and THIavg values of 70 (which represents the upper end of the THIavg scale in BEL-LUX) were lower for BEL-LUX (0.70-0.80) than for SPA (0.83-0.85). Overall, animals producing in the more temperate climates and semi-extensive grazing systems of BEL and LUX showed HS at lower heat loads and more re-ranking across the THI scale than animals producing in the warmer climate and intensive indoor system of SPA. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
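A sketch of the heat-load calculation and the broken-line (plateau-then-decline) threshold fit referred to above; the THI formula shown is one commonly used formulation and the milk-yield data are simulated, so neither should be read as the study's exact choices.

```python
import numpy as np

def thi(temp_c, rh_percent):
    """One commonly used temperature-humidity index formulation (an assumption
    here, not necessarily the exact index used in the study):
    THI = (1.8*T + 32) - (0.55 - 0.0055*RH) * (1.8*T - 26), T in deg C, RH in %."""
    return (1.8 * temp_c + 32.0) - (0.55 - 0.0055 * rh_percent) * (1.8 * temp_c - 26.0)

def broken_line_sse(thi_values, milk_kg, threshold):
    """Sum of squared errors of a broken-line fit: constant plateau below the
    heat stress threshold, linear change above it."""
    excess = np.maximum(thi_values - threshold, 0.0)
    design = np.column_stack([np.ones_like(excess), excess])
    coef = np.linalg.lstsq(design, milk_kg, rcond=None)[0]
    resid = milk_kg - design @ coef
    return np.sum(resid ** 2)

# Simulated test-day data: yield flat up to THI ~ 73, then declining
rng = np.random.default_rng(3)
thi_obs = rng.uniform(40.0, 85.0, 400)
milk = 30.0 - 0.3 * np.maximum(thi_obs - 73.0, 0.0) + rng.normal(0.0, 0.5, 400)
candidates = np.arange(60.0, 80.0, 0.5)
best = min(candidates, key=lambda t: broken_line_sse(thi_obs, milk, t))
print(best)   # change-point estimate of the HS threshold, near 73
```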
Chaotic Signal Denoising Based on Hierarchical Threshold Synchrosqueezed Wavelet Transform
NASA Astrophysics Data System (ADS)
Wang, Wen-Bo; Jing, Yun-yu; Zhao, Yan-chao; Zhang, Lian-Hua; Wang, Xiang-Li
2017-12-01
To overcome the shortcomings of the single-threshold synchrosqueezed wavelet transform (SWT) denoising method, an adaptive hierarchical-threshold SWT chaotic signal denoising method is proposed. First, a new SWT threshold function is constructed based on Stein's unbiased risk estimate; it is twice continuously differentiable. Then, using the new threshold function, thresholding based on the minimum mean square error is carried out, and the optimal threshold estimate for each decomposition level in SWT chaotic denoising is obtained. Experimental results for a simulated chaotic signal and for measured sunspot signals show that the proposed method filters the noise of the chaotic signal well and recovers the intrinsic chaotic characteristics of the original signal. Compared with the EEMD denoising method and the single-threshold SWT denoising method, the proposed method obtains better denoising results for the chaotic signal.
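For context, here is a sketch of generic level-by-level (hierarchical) wavelet-coefficient thresholding with the classical soft-threshold function; the paper's smooth, SURE-based threshold function and the synchrosqueezed transform itself are not reproduced here.

```python
import numpy as np

def soft_threshold(coeffs, thr):
    """Classical soft-threshold function; the paper constructs a smoother,
    twice-continuously-differentiable alternative, which is not shown here."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - thr, 0.0)

def hierarchical_threshold(coeff_levels):
    """Illustrative hierarchical thresholding: apply a level-dependent threshold
    (here the common universal threshold with a per-level noise estimate from
    the median absolute deviation) to each set of transform coefficients."""
    out = []
    for c in coeff_levels:
        sigma = np.median(np.abs(c)) / 0.6745          # robust noise estimate
        thr = sigma * np.sqrt(2.0 * np.log(c.size))    # universal threshold
        out.append(soft_threshold(c, thr))
    return out

# Example with two hypothetical coefficient levels
rng = np.random.default_rng(5)
levels = [rng.normal(0.0, 1.0, 256), rng.normal(0.0, 0.5, 128)]
denoised = hierarchical_threshold(levels)
print([np.count_nonzero(d) for d in denoised])
```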
NASA Astrophysics Data System (ADS)
Ampil, L. J. Y.; Yao, J. G.; Lagrosas, N.; Lorenzo, G. R. H.; Simpas, J.
2017-12-01
The Global Precipitation Measurement (GPM) mission is a group of satellites that provides global observations of precipitation. Satellite-based observations act as an alternative if ground-based measurements are inadequate or unavailable. Data provided by satellites, however, must be validated for them to be reliable and used effectively. In this study, the Integrated Multisatellite Retrievals for GPM (IMERG) Final Run v3 half-hourly product is validated by comparing against interpolated ground measurements derived from sixteen ground stations in Metro Manila. The area considered in this study is the region 14.4° - 14.8° latitude and 120.9° - 121.2° longitude, subdivided into twelve 0.1° x 0.1° grid squares. Satellite data from June 1 to August 31, 2014, aggregated to 1-day temporal resolution, are used in this study. The satellite data are also directly compared to measurements from individual ground stations to determine the effect of the interpolation, in contrast with the comparison of satellite data and interpolated measurements. The comparisons are calculated by taking a fractional root-mean-square error (F-RMSE) between two datasets. The results show that interpolation improves errors compared to using raw station data except during days with very small amounts of rainfall. F-RMSE reaches extreme values of up to 654 without a rainfall threshold. A rainfall threshold is inferred to remove extreme error values and make the distribution of F-RMSE more consistent. Results show that the rainfall threshold varies slightly per month. The threshold for June is inferred to be 0.5 mm, reducing the maximum F-RMSE to 9.78, while the threshold for July and August is inferred to be 0.1 mm, reducing the maximum F-RMSE to 4.8 and 10.7, respectively. The maximum F-RMSE is reduced further as the threshold is increased. Maximum F-RMSE is reduced to 3.06 when a rainfall threshold of 10 mm is applied over the entire duration of JJA. These results indicate that IMERG performs well for moderate to high intensity rainfall and that the interpolation remains effective only when rainfall exceeds a certain threshold value. Over Metro Manila, a rainfall threshold of 0.5 mm indicated better correspondence between ground-measured and satellite-measured rainfall.
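A sketch of a fractional RMSE computed only over days exceeding a rainfall threshold, as in the analysis above; the exact normalization used in the study is assumed here to be the RMSE divided by the mean ground rainfall of the retained days, and the daily values are hypothetical.

```python
import numpy as np

def fractional_rmse(satellite, ground, rain_threshold=0.0):
    """Fractional root-mean-square error between satellite and ground rainfall,
    computed only over days when the ground value exceeds `rain_threshold` (mm),
    mirroring the thresholding used to exclude near-zero rainfall days.
    Normalization by the mean ground rainfall of retained days is an assumption."""
    sat = np.asarray(satellite, float)
    gnd = np.asarray(ground, float)
    keep = gnd > rain_threshold
    rmse = np.sqrt(np.mean((sat[keep] - gnd[keep]) ** 2))
    return rmse / np.mean(gnd[keep])

daily_ground = np.array([0.0, 0.2, 5.0, 12.0, 30.0])   # hypothetical mm/day
daily_imerg  = np.array([0.1, 1.0, 4.0, 15.0, 22.0])
print(fractional_rmse(daily_imerg, daily_ground, rain_threshold=0.5))
```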
Kastelein, Ronald A; Hoek, Lean; Wensveen, Paul J; Terhune, John M; de Jong, Christ A F
2010-02-01
The underwater hearing sensitivities of two 2-year-old female harbor seals were quantified in a pool built for acoustic research by using a behavioral psycho-acoustic technique. The animals were trained to respond only when they detected an acoustic signal ("go/no-go" response). Detection thresholds were obtained for pure tone signals (frequencies: 0.2-40 kHz; durations: 0.5-5000 ms, depending on the frequency; 59 frequency-duration combinations). Detection thresholds were quantified by varying the signal amplitude with the 1-up, 1-down staircase method, and were defined as the stimulus levels resulting in a 50% detection rate. The hearing thresholds of the two seals were similar for all frequencies except 40 kHz, for which the thresholds differed by, on average, 3.7 dB. There was an inverse relationship between the time constant (tau), derived from an exponential model of temporal integration, and the frequency [log(tau)=2.86-0.94 log(f); tau in ms and f in kHz]. Similarly, the thresholds increased when the pulse was shorter than approximately 780 cycles (independent of the frequency). For pulses shorter than the integration time, the thresholds increased by 9-16 dB per decade reduction in the duration or number of cycles in the pulse. The results of this study suggest that most published hearing thresholds
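The exponential temporal-integration model is only named in the abstract; a common form of such a model (an assumption here, not necessarily the exact one fitted) raises the threshold for short tones as sketched below, with the frequency-dependent time constant taken from the reported regression log(tau) = 2.86 - 0.94 log(f).

    import numpy as np

    def tau_ms(freq_khz):
        # Time constant (ms) from the reported regression log(tau) = 2.86 - 0.94 log(f).
        return 10.0 ** (2.86 - 0.94 * np.log10(freq_khz))

    def threshold_shift_db(duration_ms, freq_khz):
        # Threshold elevation (dB) of a short tone relative to a long one, using a
        # standard exponential-integration form, -10*log10(1 - exp(-t/tau)). Assumed form.
        return -10.0 * np.log10(1.0 - np.exp(-duration_ms / tau_ms(freq_khz)))

    # e.g. at 4 kHz (tau of roughly 200 ms), a 20 ms tone needs about 10 dB more level
    print(tau_ms(4.0), threshold_shift_db(20.0, 4.0))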
Drakesmith, M; Caeyenberghs, K; Dutt, A; Lewis, G; David, A S; Jones, D K
2015-09-01
Graph theory (GT) is a powerful framework for quantifying topological features of neuroimaging-derived functional and structural networks. However, false positive (FP) connections arise frequently and influence the inferred topology of networks. Thresholding is often used to overcome this problem, but an appropriate threshold often relies on a priori assumptions, which will alter inferred network topologies. Four common network metrics (global efficiency, mean clustering coefficient, mean betweenness and smallworldness) were tested using a model tractography dataset. It was found that all four network metrics were significantly affected even by just one FP. Results also show that thresholding effectively dampens the impact of FPs, but at the expense of adding significant bias to network metrics. In a larger number (n=248) of tractography datasets, statistics were computed across random group permutations for a range of thresholds, revealing that statistics for network metrics varied significantly more than for non-network metrics (i.e., number of streamlines and number of edges). Varying degrees of network atrophy were introduced artificially to half the datasets, to test sensitivity to genuine group differences. For some network metrics, this atrophy was detected as significant (p<0.05, determined using permutation testing) only across a limited range of thresholds. We propose a multi-threshold permutation correction (MTPC) method, based on the cluster-enhanced permutation correction approach, to identify sustained significant effects across clusters of thresholds. This approach minimises requirements to determine a single threshold a priori. We demonstrate improved sensitivity of MTPC-corrected metrics to genuine group effects compared to an existing approach and demonstrate the use of MTPC on a previously published network analysis of tractography data derived from a clinical population. In conclusion, we show that there are large biases and instability induced by thresholding, making statistical comparisons of network metrics difficult. However, by testing for effects across multiple thresholds using MTPC, true group differences can be robustly identified. Copyright © 2015. Published by Elsevier Inc.
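MTPC is only outlined above; the sketch below shows its basic ingredient, namely evaluating a network metric over a whole range of thresholds applied to a weighted connectivity matrix, so that group statistics can later be permutation-tested at every threshold instead of at a single a priori choice. The matrix, the metric, and the threshold grid are assumptions for illustration, not the authors' pipeline.

    import numpy as np
    import networkx as nx

    def metric_across_thresholds(weights, thresholds):
        # Global efficiency of a weighted connectivity matrix binarised at several
        # absolute thresholds; MTPC would then permutation-test group differences at
        # each threshold and look for clusters of sustained significance.
        values = []
        for t in thresholds:
            adj = (weights >= t).astype(int)
            np.fill_diagonal(adj, 0)
            values.append(nx.global_efficiency(nx.from_numpy_array(adj)))
        return np.array(values)

    rng = np.random.default_rng(1)
    w = rng.random((30, 30))
    w = (w + w.T) / 2.0   # toy symmetric "connectome"
    print(metric_across_thresholds(w, np.linspace(0.1, 0.9, 9)))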
The perturbed Sparre Andersen model with a threshold dividend strategy
NASA Astrophysics Data System (ADS)
Gao, Heli; Yin, Chuancun
2008-10-01
In this paper, we consider a Sparre Andersen model perturbed by diffusion, with generalized Erlang(n)-distributed inter-claim times and a threshold dividend strategy. Integro-differential equations with certain boundary conditions are derived for the moment-generating function and for the mth moment of the present value of all dividends paid until ruin. We also derive integro-differential equations with boundary conditions for the Gerber-Shiu functions. The special case where the inter-claim times are Erlang(2) distributed and the claim size distribution is exponential is considered in some detail.
Tokuda, Isao T; Shimamura, Ryo
2017-08-01
As an alternative factor to produce asymmetry between left and right vocal folds, the present study focuses on level difference, which is defined as the distance between the upper surfaces of the bilateral vocal folds in the inferior-superior direction. Physical models of the vocal folds were utilized to study the effect of the level difference on the phonation threshold pressure. A vocal tract model was also attached to the vocal fold model. For two types of different models, experiments revealed that the phonation threshold pressure tended to increase as the level difference was extended. Based upon a small amplitude approximation of the vocal fold oscillations, a theoretical formula was derived for the phonation threshold pressure. This theory agrees with the experiments, especially when the phase difference between the left and right vocal folds is not extensive. Furthermore, an asymmetric two-mass model was simulated with a level difference to validate the experiments as well as the theory. The primary conclusion is that the level difference has a potential effect on voice production especially for patients with an extended level of vertical difference in the vocal folds, which might be taken into account for the diagnosis of voice disorders.
Prediction of threshold pain skin temperature from thermal properties of materials in contact.
Stoll, A M; Chianta, M A; Piergallini, J R
1982-12-01
Aerospace design engineers have long sought concrete data with respect to the thermal safety of materials in contact with human skin. A series of studies on this subject has been completed and some of the results have been reported earlier. In these studies over 2,000 observations were made of pain threshold during contact with materials at elevated temperatures. Six materials were used representing the full range of thermal properties from good conductors to good insulators. Previous reports gave methods for determining the maximum permissible temperatures for any material in safe contact with bare skin for 1-5 s solely from a knowledge of its thermal properties. This report presents the comparison of the theoretical and experimental contact temperatures at pain threshold and provides a method for deriving the skin temperature productive of threshold pain from the thermal properties of any material within the range of those studies. Ratios reflecting the heat transfer coefficient associated with the materials in contact are related to their thermal properties so that the skin temperature at pain threshold may be determined from that calculated from heat transfer theory. Tabular and graphical representation of these data permits interpolation within the range of properties so that any material of known thermal conductivity, density and specific heat may be assessed with respect to its effect on the skin temperature during contact to the end point of pain. These data, in conjunction with those already reported, constitute a system for the complete assessment of the thermal aspects of practically any material suitable for construction and manufacturing applications with respect to safe contact with human skin.
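The report's empirical ratios are not reproduced in the abstract, but the heat-transfer-theory estimate it corrects is the classical interface temperature of two semi-infinite bodies in perfect contact, weighted by each material's thermal effusivity sqrt(k*rho*c). The sketch below uses that textbook formula with illustrative, assumed property values.

    import math

    def effusivity(k, rho, c):
        # Thermal effusivity from conductivity (W/m/K), density (kg/m3) and specific heat (J/kg/K).
        return math.sqrt(k * rho * c)

    def contact_temperature(t_skin, t_material, e_skin, e_material):
        # Interface temperature of two semi-infinite bodies in perfect contact
        # (classical result; the report adjusts this with empirically derived ratios).
        return (e_skin * t_skin + e_material * t_material) / (e_skin + e_material)

    # illustrative values: skin at 33 degC touching an aluminium-like conductor at 70 degC
    e_skin = effusivity(0.37, 1100.0, 3500.0)
    e_metal = effusivity(200.0, 2700.0, 900.0)
    print(contact_temperature(33.0, 70.0, e_skin, e_metal))   # close to 70 degC for a good conductor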
Reassessment of data used in setting exposure limits for hot particles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baum, J.W.; Kaurin, D.G.
1991-05-01
A critical review and a reassessment of data reviewed in NCRP Report 106 on effects of "hot particles" on the skin of pigs, monkeys, and humans were made. Our analysis of the data of Forbes and Mikhail on effects from activated UC2 particles, ranging in diameter from 144 μm to 328 μm, led to the formulation of a new model for prediction of both the threshold for acute ulceration and the ulcer diameter. In this model, a dose of 27 Gy at a depth of 1.33 mm in tissue will result in an acute ulcer with a diameter determined by the radius over which this dose (at 1.33-mm depth) extends. Application of the model to the Forbes-Mikhail data yielded a "threshold" (5% probability) of 6 × 10^9 beta particles from a point source on skin for mixed fission-product beta particles, or about 10^10 beta particles for Sr-Y-90, since few of the Sr-90 beta particles reach this depth. The data of Hopewell et al. for their 1 mm Sr-Y-90 exposures were also analyzed with the above model and yielded a predicted threshold of 2 × 10^10 Sr-Y-90 beta particles for a point source on skin. Dosimetry values employed in this latter analysis are 3.3 times higher than previously reported for this source. An alternate interpretation of the Forbes and Mikhail data, derived from linear plots of the data, is that the threshold depends strongly on particle size, with the smaller particles yielding a much lower threshold and a smaller minimum ulcer size. Additional animal exposures are planned to distinguish between the above explanations. 17 refs., 3 figs., 3 tabs.
NASA Astrophysics Data System (ADS)
Engel, Michael; Bertoldi, Giacomo; Notarnicola, Claudia; Comiti, Francesco
2017-04-01
To assess the performance of the simulated snow cover of hydrological models, it is common practice to compare simulated data with observations derived from satellite images such as MODIS. However, technical and methodological limitations, such as the data availability of MODIS products, their spatial resolution, or difficulties in finding appropriate parameterisations of the model, need to be addressed first. Another important assumption usually made concerns the threshold of minimum simulated snow depth, generally set to 10 mm, intended to respect the MODIS detection limits for snow cover. But is such a constant threshold appropriate for complex alpine terrain? How important is the impact of different snow depth thresholds on the spatial and temporal distribution of the pixel-based overall accuracy (OA)? To address this aspect, we compared the snow-covered area (SCA) simulated by the GEOtop 2.0 snow model to the daily composite 250 m EURAC MODIS SCA in the upper Saldur basin (61 km2, Eastern Italian Alps) during the period October 2011 - October 2013. Initially, we calibrated the snow model against snow depths and snow water equivalents at the point scale, taken from measurements at different meteorological stations. We applied different snow depth thresholds (0 mm, 10 mm, 50 mm, and 100 mm) to obtain the simulated snow cover and assessed the changes in OA both in time (over the entire evaluation period and over the accumulation and melting seasons) and in space (over the entire catchment and in areas of specific topographic characteristics such as elevation, slope, aspect, land cover, and roughness). The results show remarkable spatial and temporal differences in OA with respect to the different snow depth thresholds. Inaccuracies between simulated and observed SCA during the accumulation season September to November 2012 were located in areas with north-west aspect, slopes of 30°, or small elevation differences at the sub-pixel scale (-0.25 to 0 m). We obtained the best agreement with the MODIS SCA for a snow depth threshold of 100 mm, leading to increased OA (> 0.8) in 13‰ of the catchment area. SCA agreement in January 2012 and 2013 was slightly limited by MODIS sensor detection due to shading effects and low illumination in areas exposed north-west to north. By contrast, the agreement during the melting season in April 2013 and after the September 2013 snowfall event seemed to depend more on the parameterisation than on the snow depth thresholds. Likewise, inaccuracies during the melting season March to June 2013 could hardly be attributed to topographic characteristics or to the different snow depth thresholds, but rather to the model parameterisation. We identified specific conditions (e.g. particular snowfall events in autumn 2012 and spring 2013) when either the MODIS data or the hydrological model was less accurate, thus justifying the need for improved precision in the snow cover detection algorithms or in the model's process description. In consequence, our observations could support future snow cover evaluations in mountain areas, where spatially and temporally dynamic snow depth thresholds are transferred from the catchment scale to the regional scale. Keywords: snow cover, snow modelling, MODIS, snow depth sensitivity, alpine catchment
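As a minimal sketch of the evaluation step (assumed, since the study's processing chain is not described in detail), the pixel-based overall accuracy compares the simulated snow depth, binarised with a chosen depth threshold, against the MODIS snow-cover map.

    import numpy as np

    def overall_accuracy(sim_depth_mm, modis_snow, depth_threshold_mm):
        # Fraction of valid pixels where simulated snow presence (depth > threshold)
        # agrees with the MODIS snow flag (1 = snow, 0 = snow free, NaN = no data).
        sim_snow = np.asarray(sim_depth_mm) > depth_threshold_mm
        obs = np.asarray(modis_snow, float)
        valid = ~np.isnan(obs)
        return np.mean(sim_snow[valid] == (obs[valid] > 0.5))

    sim = np.array([[0.0, 5.0], [60.0, 120.0]])
    obs = np.array([[0.0, 0.0], [1.0, 1.0]])
    for thr in (0.0, 10.0, 50.0, 100.0):
        print(thr, overall_accuracy(sim, obs, thr))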
NASA Astrophysics Data System (ADS)
Verduzco, Vivian S.; Garatuza-Payán, Jaime; Yépez, Enrico A.; Watts, Christopher J.; Rodríguez, Julio C.; Robles-Morua, Agustin; Vivoni, Enrique R.
2015-10-01
Due to their large extent and high primary productivity, tropical dry forests (TDF) are important contributors to atmospheric carbon exchanges in subtropical and tropical regions. In northwest Mexico, a bimodal precipitation regime that includes winter precipitation derived from Pacific storms and summer precipitation from the North American monsoon (NAM) couples water availability with ecosystem processes. We investigated the net ecosystem production of a TDF ecosystem using a 4.5 year record of water and carbon fluxes obtained from the eddy covariance method complemented with remotely sensed data. We identified a large CO2 efflux at the start of the summer season that is strongly related to the preceding winter precipitation and greenness. Since this CO2 efflux occurs prior to vegetation green-up, we infer that respiration is mainly due to decomposition of soil organic matter accumulated from the prior growing season. Overall, ecosystem respiration has an important effect on the net ecosystem production but can be overwhelmed by the strength of the primary productivity during the NAM. Precipitation characteristics during NAM have significant controls on sustaining carbon fixation in the TDF into the fall season. We identified that a threshold of ~350 to 400 mm of monsoon precipitation leads to a switch in the annual carbon balance in the TDF ecosystem from a net source (+102 g C/m2/yr) to a net sink (-249 g C/m2/yr). This monsoonal precipitation threshold is typically exceeded one out of every 2 years. The close coupling of winter and summer periods with respect to carbon fluxes suggests that the annual carbon balance is dependent on precipitation amounts in both seasons in TDF ecosystems.
Frossard, Victor; Verneaux, Valérie; Millet, Laurent; Magny, Michel; Perga, Marie-Elodie
2015-06-01
Stable C isotope ratio (δ(13)C) values of chironomid remains (head capsules; HC) were used to infer changes in benthic C sources over the last 150 years for two French sub-Alpine lakes. The HCs were retrieved from a series of sediment cores from different depths. The HC δ(13)C values started to decrease with the onset of eutrophication. The HC δ(13)C temporal patterns varied among depths, which revealed spatial differences in the contribution of methanotrophic bacteria to the benthic secondary production. The estimates of the methane (CH4)-derived C contribution to chironomid biomass ranged from a few percent prior to the 1930s to up to 30 % in recent times. The chironomid fluxes increased concomitantly with changes in HC δ(13)C values before a drastic decrease due to the development of hypoxic conditions. The hypoxia reinforced the implication for CH4-derived C transfer to chironomid production. In Lake Annecy, the HC δ(13)C values were negatively correlated to total organic C (TOC) content in the sediment (Corg), whereas no relationship was found in Lake Bourget. In Lake Bourget, chironomid abundances reached their maximum with TOC contents between 1 and 1.5 % Corg, which could constitute a threshold for change in chironomid abundance and consequently for the integration of CH4-derived C into the lake food webs. Our results indicated that the CH4-derived C contribution to the benthic food webs occurred at different depths in these two large, deep lakes (deep waters and sublittoral zone), and that the trophic transfer of this C was promoted in sublittoral zones where O2 gradients were dynamic.
A study on the temperature dependence of the threshold switching characteristics of Ge2Sb2Te5
NASA Astrophysics Data System (ADS)
Lee, Suyoun; Jeong, Doo Seok; Jeong, Jeung-hyun; Zhe, Wu; Park, Young-Wook; Ahn, Hyung-Woo; Cheong, Byung-ki
2010-01-01
We investigated the temperature dependence of the threshold switching characteristics of a memory-type chalcogenide material, Ge2Sb2Te5. We found that the threshold voltage (Vth) decreased linearly with temperature, implying the existence of a critical conductivity of Ge2Sb2Te5 for its threshold switching. In addition, we investigated the effect of bias voltage and temperature on the delay time (tdel) of the threshold switching of Ge2Sb2Te5 and described the measured relationship by an analytic expression which we derived based on a physical model where thermally activated hopping is a dominant transport mechanism in the material.
40 CFR 98.121 - Reporting threshold.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 22 2012-07-01 2012-07-01 false Reporting threshold. 98.121 Section 98...) MANDATORY GREENHOUSE GAS REPORTING Fluorinated Gas Production § 98.121 Reporting threshold. You must report...). To calculate GHG emissions for comparison to the 25,000 metric ton CO2e per year emission threshold...
Choice of threshold line angle for binary phase-only filters
NASA Astrophysics Data System (ADS)
Vijaya Kumar, Bhagavatula; Hendrix, Charles D.
1993-10-01
The choice of threshold line angle (TLA) is an important issue in designing Binary Phase-Only Filters (BPOFs). In this paper, we derive expressions that explicitly relate the TLA to correlation peak intensity. We also show some examples that illustrate the effect of choosing the wrong TLA.
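The derived expressions are not quoted in the abstract; the sketch below shows one common way a threshold line angle enters BPOF design (an assumed construction, for illustration only): each complex sample of the reference's Fourier transform is binarised to +1 or -1 according to which side of a line at angle theta it falls on.

    import numpy as np

    def bpof(reference_image, tla_rad):
        # Binary phase-only filter: binarise the FFT of the reference to +/-1 using the
        # sign of the projection of each complex sample onto the threshold line angle.
        spectrum = np.fft.fft2(reference_image)
        projection = np.real(spectrum * np.exp(-1j * tla_rad))
        return np.where(projection >= 0.0, 1.0, -1.0)

    # correlation peak intensity of a matched input for a few TLA choices
    ref = np.zeros((64, 64))
    ref[28:36, 28:36] = 1.0
    for theta in (0.0, np.pi / 4, np.pi / 2):
        h = bpof(ref, theta)
        corr = np.fft.ifft2(np.fft.fft2(ref) * np.conj(h))
        print(theta, np.abs(corr).max() ** 2)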
Climatic thresholds for concentrations of minerals and heavy metals in Argentinean soybeans
USDA-ARS?s Scientific Manuscript database
Mineral undernourishment is of concern throughout the world, and plant-derived foods are considered a major dietary source contributing to adequate daily mineral intake. Soybeans and soy ingredients are consumed daily by humans and animals. In this study, we demonstrate the climate thresholds for op...
Threshold law for positron-atom impact ionisation
NASA Technical Reports Server (NTRS)
Temkin, A.
1982-01-01
The threshold law for ionisation of atoms by positron impact is adduced in analogy with our approach to electron-atom ionisation. It is concluded that the Coulomb-dipole region of the potential gives the essential part of the interaction in both cases and leads to the same kind of result: a modulated linear law. An additional process which enters positron ionisation is positronium formation in the continuum, but that will not dominate the threshold yield. The result is in sharp contrast to the positron threshold law recently derived by Klar on the basis of a Wannier-type analysis.
Statistical properties of light from optical parametric oscillators
NASA Astrophysics Data System (ADS)
Vyas, Reeta; Singh, Surendra
2009-12-01
Coherence properties of light beams generated by optical parametric oscillators (OPOs) are discussed in the region of threshold. Analytic expressions, valid throughout the threshold region, are derived for experimentally measurable quantities such as the mean and variance of photon number fluctuations, squeezing of field quadratures, and photon counting distributions. These expressions describe the non-Gaussian fluctuations of light in the region of threshold and reproduce Gaussian fluctuations below and above threshold, thus providing a bridge between the below- and above-threshold regimes of operation. They are used to study the transformation of the fluctuation properties of light as the OPO makes the transition from below to above threshold. The results for OPOs are compared to those for single-mode and two-mode lasers, and their similarities and differences are discussed.
ω meson production in pp collisions with a polarized beam
DOE Office of Scientific and Technical Information (OSTI.GOV)
Balasubramanyam, J.; Venkataraya,; Ramachandran, G.
2008-07-15
Model-independent formulas are derived for the beam analyzing power A_y and the beam-to-meson spin transfers in pp → ppω, taking into consideration all six threshold partial-wave amplitudes f_1, ..., f_6 covering the Ss, Sp, and Ps channels. It is shown that the lowest three partial-wave amplitudes f_1, f_2, f_3 can be determined empirically without any discrete ambiguities. Partial information with regard to the amplitudes f_4, f_5, f_6 covering the Ps channel may be extracted if the measurements are carried through at the double-differential level.
Kastelein, Ronald A; Wensveen, Paul J; Terhune, John M; de Jong, Christ A F
2011-01-01
Equal-loudness functions describe relationships between the frequencies of sounds and their perceived loudness. This pilot study investigated the possibility of deriving equal-loudness contours based on the assumption that sounds of equal perceived loudness elicit equal reaction times (RTs). During a psychoacoustic underwater hearing study, the responses of two young female harbor seals to tonal signals between 0.125 and 100 kHz were filmed. Frame-by-frame analysis was used to quantify RT (the time between the onset of the sound stimulus and the onset of movement of the seal away from the listening station). Near-threshold equal-latency contours, as surrogates for equal-loudness contours, were estimated from RT-level functions fitted to mean RT data. The closer the received sound pressure level was to the 50% detection hearing threshold, the more slowly the animals reacted to the signal (RT range: 188-982 ms). Equal-latency contours were calculated relative to the RTs shown by each seal at sound levels of 0, 10, and 20 dB above the detection threshold at 1 kHz. Fifty percent detection thresholds are obtained with well-trained subjects actively listening for faint familiar sounds. When calculating audibility ranges of sounds for harbor seals in nature, it may be appropriate to consider levels 20 dB above this threshold.
Lilge, L.; Olivo, M. C.; Schatz, S. W.; MaGuire, J. A.; Patterson, M. S.; Wilson, B. C.
1996-01-01
The applicability and limitations of a photodynamic threshold model, used to describe quantitatively the in vivo response of tissues to photodynamic therapy, are currently being investigated in a variety of normal and malignant tumour tissues. The model states that tissue necrosis occurs when the number of photons absorbed by the photosensitiser per unit tissue volume exceeds a threshold. New Zealand White rabbits were sensitised with porphyrin-based photosensitisers. Normal brain or intracranially implanted VX2 tumours were illuminated via an optical fibre placed into the tissue at craniotomy. The light fluence distribution in the tissue was measured by multiple interstitial optical fibre detectors. The tissue concentration of the photosensitiser was determined post mortem by absorption spectroscopy. The derived photodynamic threshold values for normal brain are significantly lower than for VX2 tumour for all photosensitisers examined. Neuronal damage is evident beyond the zone of frank necrosis. For Photofrin the threshold decreases with the time delay between photosensitiser administration and light treatment. No significant difference in threshold is found between Photofrin and haematoporphyrin derivative. The threshold in normal brain (grey matter) is lowest for sensitisation by 5-aminolaevulinic acid. The results confirm the very high sensitivity of normal brain to porphyrin photodynamic therapy and show the importance of in situ light fluence monitoring during photodynamic irradiation. PMID:8562339
Seasonal forecasts for the agricultural sector in Peru through user-tailored indices
NASA Astrophysics Data System (ADS)
Sedlmeier, Katrin; Gubler, Stefanie; Spierig, Christoph; Quevedo, Karim; Escajadillo, Yury; Avalos, Griña; Liniger, Mark A.; Schwierz, Cornelia
2017-04-01
In the agricultural sector, the demand for seasonal forecast information is high, since agriculture depends strongly on climatic conditions during the growing season. Unfavorable weather and climate events, such as droughts or frost events, can lead to crop losses and thereby to large economic damages, or to life-threatening conditions in the case of subsistence farming. The generally used presentation form of tercile probabilities of seasonally averaged meteorological quantities is not specific enough for end users; more user-tailored seasonal information is necessary. For example, warmer than average temperatures might be favorable for a crop as long as they remain below a plant-specific critical threshold. If, on the other hand, too many days show temperatures above this critical threshold, a mitigation action such as changing the crop type would be required. In the framework of the CLIMANDES project (a pilot project of the Global Framework for Climate Services led by WMO [http://www.wmo.int/gfcs/climandes]), user-tailored seasonal forecast products are developed for the agricultural sector in the Peruvian Andes. Such products include indices such as the frost risk, the occurrence of long dry periods, or the start of the rainy season, which is crucial for scheduling sowing. Furthermore, more specific indices derived from crop requirement studies are elaborated, such as the number of days exceeding or falling below plant-specific temperature thresholds in given phenological stages. The applicability of these products highly depends on forecast skill. In this study, the potential predictability and the skill of selected indicators are presented using seasonal hindcast data of the ECMWF System 4 for Peru during the period 1981-2010. Furthermore, the influence of ENSO on the prediction skill is investigated. Reanalysis data, ground measurements, and a gridded precipitation dataset are used for verification. The results indicate that temperature-based indicators show sizeable skill in the Peruvian highlands, while precipitation-based forecasts are much more challenging.
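Once daily forecast values are available, such indices reduce to counting threshold exceedances; the sketch below (field names, thresholds, and distributions are illustrative assumptions, not the project's operational code) counts, for each ensemble member, the days in a season falling below a plant-specific frost threshold and turns that into a probabilistic forecast of an intolerable number of frost days.

    import numpy as np

    def frost_risk(daily_tmin, frost_threshold_c=0.0, max_tolerable_days=5):
        # daily_tmin: array (members, days) of forecast minimum temperature in deg C.
        # Returns the forecast probability of more frost days than the crop tolerates.
        frost_days = np.sum(np.asarray(daily_tmin) < frost_threshold_c, axis=1)
        return np.mean(frost_days > max_tolerable_days)

    rng = np.random.default_rng(2)
    ensemble = rng.normal(loc=3.0, scale=4.0, size=(51, 90))   # toy 51-member, 90-day season
    print(frost_risk(ensemble, frost_threshold_c=0.0, max_tolerable_days=20))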
NASA Technical Reports Server (NTRS)
Fassnacht, Steven R.; Sexstone, Graham A.; Kashipazha, Amir H.; Lopez-Moreno, Juan Ignacio; Jasinski, Michael F.; Kampf, Stephanie K.; Von Thaden, Benjamin C.
2015-01-01
During the melting of a snowpack, snow water equivalent (SWE) can be correlated to snow-covered area (SCA) once snow-free areas appear, which is when SCA begins to decrease below 100%. This amount of SWE is called the threshold SWE. Daily SWE data from snow telemetry stations were related to SCA derived from Moderate Resolution Imaging Spectroradiometer (MODIS) images to produce snow-cover depletion curves. The snow depletion curves were created for an 80,000 sq km domain across southern Wyoming and northern Colorado encompassing 54 snow telemetry stations. Eight yearly snow depletion curves were compared, and it is shown that the slope of each is a function of the amount of snow received. Snow-cover depletion curves were also derived for all the individual stations, for which the threshold SWE could be estimated from peak SWE and the topography around each station. A station's peak SWE was much more important than the main topographic variables, which included location, elevation, slope, and modelled clear-sky solar radiation. The threshold SWE mostly illustrated inter-annual consistency.
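A minimal sketch of that definition (an assumed implementation built only from the abstract's description): the threshold SWE is the SWE on the first day of the melt season on which the snow-covered area drops below 100%.

    import numpy as np

    def threshold_swe(swe_mm, sca_percent):
        # swe_mm and sca_percent: matching daily series over the melt season.
        # Returns the SWE on the first day snow-covered area falls below 100%.
        swe = np.asarray(swe_mm, float)
        sca = np.asarray(sca_percent, float)
        below = np.nonzero(sca < 100.0)[0]
        return swe[below[0]] if below.size else np.nan

    swe = [300, 280, 240, 180, 110, 40, 0]
    sca = [100, 100, 100, 95, 70, 30, 0]
    print(threshold_swe(swe, sca))   # -> 180.0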
Montei, Carolyn; McDougal, Susan; Mozola, Mark; Rice, Jennifer
2014-01-01
The Soleris Non-fermenting Total Viable Count method was previously validated for a wide variety of food products, including cocoa powder. A matrix extension study was conducted to validate the method for use with cocoa butter and cocoa liquor. Test samples included naturally contaminated cocoa liquor and cocoa butter inoculated with natural microbial flora derived from cocoa liquor. A probability of detection statistical model was used to compare Soleris results at multiple test thresholds (dilutions) with aerobic plate counts determined using the AOAC Official Method 966.23 dilution plating method. Results of the two methods were not statistically different at any dilution level in any of the three trials conducted. The Soleris method offers the advantage of results within 24 h, compared to the 48 h required by standard dilution plating methods.
Dekant, Wolfgang; Melching-Kollmuss, Stephanie; Kalberlah, Fritz
2010-03-01
In Europe, limits for tolerable concentrations of "non-relevant metabolites" for active ingredients (AI) of plant protection products in drinking water between 0.1 and 10 microg/L are discussed depending on the toxicological information available. "Non-relevant metabolites" are degradation products of AIs, which do not or only partially retain the targeted toxicities of AIs. For "non-relevant metabolites" without genotoxicity (to be confirmed by testing in vitro), the application of the concept of "thresholds of toxicological concern" results in a health-based drinking water limit of 4.5 microg/L even for Cramer class III compounds, using the TTC threshold of 90 microg/person/day (divided by 10 and 2). Taking into account the thresholds derived from two reproduction toxicity data bases a drinking water limit of 3.0 microg/L is proposed. Therefore, for "non-relevant metabolites" whose drinking water concentration is below 3.0 microg/L, no toxicity testing is necessary. This work develops a toxicity assessment strategy as a basis to delineate health-based limits for "non-relevant metabolites" in ground and drinking water. Toxicological testing is recommended to investigate, whether the metabolites are relevant or not, based on the hazard properties of the parent AIs, as outlined in the SANCO Guidance document. Also, genotoxicity testing of the water metabolites is clearly recommended. In this publication, tiered testing strategies are proposed for non-relevant metabolites, when drinking water concentrations >3.0 microg/L will occur. Conclusions based on structure-activity relationships and the detailed toxicity database on the parent AI should be included. When testing in animals is required for risk assessment, key aspects are studies along OECD-testing guidelines with "enhanced" study designs addressing additional endpoints such as reproductive toxicity and a developmental screening test to derive health-based tolerable drinking water limits with a limited number of animals. The testing strategies are similar to those used in the initial hazard assessment of high production volume (HPV) chemicals. For "non-relevant metabolites" which are also formed as products of the biotransformation of the parent AI in mammals, the proposed toxicity testing strategies uses the repeat-dose oral toxicity study combined with a reproductive/developmental screening as outlined in OECD test guidelines 407 and 422 with integration of determination of hormonal activities. For "non-relevant metabolites" not formed during biotransformation of the AI in mammals, the strategy relies on an "enhanced" 90-day oral study covering additional endpoints regarding hormonal effects and male and female fertility in combination with a prenatal developmental toxicity study (OECD test guideline 414). The integration of the results of these studies into the risk assessment process applies large minimal margins of exposure (MOEs) to compensate for the shorter duration of the studies. The results of the targeted toxicity testing will provide a science basis for setting tolerable drinking water limits for "non-relevant metabolites" based on their toxicology. Based on the recommendations given in the SANCO guidance document and the work described in this and the accompanying paper, a concise re-evaluation of the Guidance document is proposed. (c) 2009 Elsevier Inc. All rights reserved.
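For clarity, the arithmetic behind the 4.5 microg/L figure appears to be the TTC value allocated and converted with a default drinking-water consumption rate, reading the 10 as the allocation/assessment factor and the 2 as the assumed litres of water consumed per day:

$\frac{90\ \mu\mathrm{g\,person^{-1}\,day^{-1}}}{10 \times 2\ \mathrm{L\,day^{-1}}} = 4.5\ \mu\mathrm{g/L}$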
Thresholds for conservation and management: structured decision making as a conceptual framework
Nichols, James D.; Eaton, Mitchell J.; Martin, Julien; Edited by Guntenspergen, Glenn R.
2014-01-01
Ecological thresholds are values of system state variables at which small changes produce substantial changes in system dynamics. They are frequently incorporated into ecological models used to project system responses to management actions. Utility thresholds are components of management objectives and are values of state or performance variables at which small changes yield substantial changes in the value of the management outcome. Decision thresholds are values of system state variables at which small changes prompt changes in management actions in order to reach specified management objectives. Decision thresholds are derived from the other components of the decision process. We advocate a structured decision making (SDM) approach within which the following components are identified: objectives (possibly including utility thresholds), potential actions, models (possibly including ecological thresholds), monitoring program, and a solution algorithm (which produces decision thresholds). Adaptive resource management (ARM) is described as a special case of SDM developed for recurrent decision problems that are characterized by uncertainty. We believe that SDM, in general, and ARM, in particular, provide good approaches to conservation and management. Use of SDM and ARM also clarifies the distinct roles of ecological thresholds, utility thresholds, and decision thresholds in informed decision processes.
Kanis, John A; Harvey, Nicholas C; Cooper, Cyrus; Johansson, Helena; Odén, Anders; McCloskey, Eugene V
2016-01-01
In most assessment guidelines, treatment for osteoporosis is recommended in individuals with prior fragility fractures, especially fractures at spine and hip. However, for those without prior fractures, the intervention thresholds can be derived using different methods. The aim of this report was to undertake a systematic review of the available information on the use of FRAX® in assessment guidelines, in particular the setting of thresholds and their validation. We identified 120 guidelines or academic papers that incorporated FRAX of which 38 provided no clear statement on how the fracture probabilities derived are to be used in decision-making in clinical practice. The remainder recommended a fixed intervention threshold (n=58), most commonly as a component of more complex guidance (e.g. bone mineral density (BMD) thresholds) or an age-dependent threshold (n=22). Two guidelines have adopted both age-dependent and fixed thresholds. Fixed probability thresholds have ranged from 4 to 20 % for a major fracture and 1.3-5 % for hip fracture. More than one half (39) of the 58 publications identified utilized a threshold probability of 20 % for a major osteoporotic fracture, many of which also mention a hip fracture probability of 3 % as an alternative intervention threshold. In nearly all instances, no rationale is provided other than that this was the threshold used by the National Osteoporosis Foundation of the US. Where undertaken, fixed probability thresholds have been determined from tests of discrimination (Hong Kong), health economic assessment (US, Switzerland), to match the prevalence of osteoporosis (China) or to align with pre-existing guidelines or reimbursement criteria (Japan, Poland). Age-dependent intervention thresholds, first developed by the National Osteoporosis Guideline Group (NOGG), are based on the rationale that if a woman with a prior fragility fracture is eligible for treatment, then, at any given age, a man or woman with the same fracture probability but in the absence of a previous fracture (i.e. at the ‘fracture threshold’) should also be eligible. Under current NOGG guidelines, based on age-dependent probability thresholds, inequalities in access to therapy arise especially at older ages (≥ 70 years) depending on the presence or absence of a prior fracture. An alternative threshold using a hybrid model reduces this disparity. The use of FRAX (fixed or age-dependent thresholds) as the gateway to assessment identifies individuals at high risk more effectively than the use of BMD. However, the setting of intervention thresholds need to be country-specific. PMID:27465509
A Model of Risk Analysis in Analytical Methodology for Biopharmaceutical Quality Control.
Andrade, Cleyton Lage; Herrera, Miguel Angel De La O; Lemes, Elezer Monte Blanco
2018-01-01
One key quality control parameter for biopharmaceutical products is the analysis of residual cellular DNA. To determine small amounts of DNA (around 100 pg) that may be present in a biologically derived drug substance, an analytical method should be sensitive, robust, reliable, and accurate. In principle, three techniques have the ability to measure residual cellular DNA: radioactive dot-blot, a type of hybridization; threshold analysis; and quantitative polymerase chain reaction. Quality risk management is a systematic process for evaluating, controlling, and reporting risks that may affect method capabilities, and it supports a scientific and practical approach to decision making. This paper evaluates, by quality risk management, an alternative approach to assessing the performance risks associated with quality control methods used with biopharmaceuticals, using the tool hazard analysis and critical control points. This tool provides the possibility to find the steps in an analytical procedure with the highest impact on method performance. By applying these principles to DNA analysis methods, we conclude that the radioactive dot-blot assay has the largest number of critical control points, followed by quantitative polymerase chain reaction and threshold analysis. From the analysis of hazards (i.e., points of method failure) and the associated critical control points of the method procedure, we conclude that the analytical methodology with the lowest risk of performance failure for residual cellular DNA testing is quantitative polymerase chain reaction. LAY ABSTRACT: In order to mitigate the risk of adverse events caused by residual cellular DNA that is not completely cleared from downstream production processes, regulatory agencies have required the industry to guarantee a very low level of DNA in biologically derived pharmaceutical products. The technique historically used was radioactive blot hybridization. However, this technique is challenging to implement in a quality control laboratory: it is laborious, time consuming, semi-quantitative, and requires a radioisotope. Along with dot-blot hybridization, two alternative techniques were evaluated: threshold analysis and quantitative polymerase chain reaction. Quality risk management tools were applied to compare the techniques, taking into account the uncertainties, the possibility of circumstances or future events, and their effects upon method performance. By illustrating the application of these tools with DNA methods, we provide an example of how they can be used to support a scientific and practical approach to decision making and how they can assess and manage method performance risk. This paper discusses, considering the principles of quality risk management, an additional approach to the development and selection of analytical quality control methods using the risk analysis tool hazard analysis and critical control points. This tool provides the possibility to find the method procedural steps with the highest impact on method reliability (called critical control points). Our model concluded that the radioactive dot-blot assay has the largest number of critical control points, followed by quantitative polymerase chain reaction and threshold analysis. Quantitative polymerase chain reaction is shown to be the better alternative analytical methodology for residual cellular DNA analysis. © PDA, Inc. 2018.
NASA Astrophysics Data System (ADS)
Chazette, Patrick; Royer, Philippe
2017-08-01
A study of the intense spring pollution events occurring between 2007 and 2016 over the Paris area is presented, using ground-based and spaceborne measurements. Emphasis is placed on 2011, when the data included ground-based lidar measurements; this period corresponds to the highest regional pollution levels of the past decade. The information threshold (daily average PM10, the mass concentration of particles with aerodynamic diameter less than 10 μm, above 50 μg m-3) was exceeded 16 times, while the alert threshold (daily average of PM10 > 80 μg m-3) was exceeded twice. The information (alert) threshold exists to protect the most fragile people (the entire population). Ground-based and spaceborne measurements demonstrate the benefit of their synergy, as each is representative of specific space and time scales. The operational products of the spaceborne instruments Cloud-Aerosol LIdar with Orthogonal Polarization (CALIOP) and the Moderate Resolution Imaging Spectroradiometer are used. For 2011, CALIOP vertical profiles are inverted to assess the backscatter-to-extinction ratio, which is then successfully compared with similar results derived from the CALIOP operational products, a ground-based lidar, and Sun photometers. The aerosols are identified as polluted continental and polluted dust aerosols, following the criteria used for the inversion of the CALIOP profiles. Aerosol typing is consistent between the ground-based and spaceborne lidars, demonstrating the importance of CALIOP for the other years, when the ground-based lidar was not in operation. The main pollution sources responsible for the spring aerosol pollution, occurring during anticyclonic meteorological conditions, are identified as coming from Western Europe: Benelux, the Rhine-Ruhr area, and the Lorraine area.
Baumert, Joseph L; Taylor, Steve L; Koppelman, Stef J
Peanut immunotherapy studies are conducted with the aim to decrease the sensitivity of patients to peanut exposure with the outcome evaluated by testing the threshold for allergic response in a double-blind placebo-controlled food challenge. The clinical relevance of increasing this threshold is not well characterized. We aimed to quantify the clinical benefit of an increased threshold for peanut-allergic patients. Quantitative risk assessment was performed by matching modeled exposure to peanut protein with individual threshold levels. Exposure was modeled by pairing US consumption data for various food product categories with potential contamination levels of peanut that have been demonstrated to be present on occasion in such food products. Cookies, ice cream, doughnuts/snack cakes, and snack chip mixes were considered in the risk assessment. Increasing the baseline threshold before immunotherapy from 100 mg or less peanut protein to 300 mg peanut protein postimmunotherapy reduces the risk of experiencing an allergic reaction by more than 95% for all 4 food product categories that may contain trace levels of peanut residue. Further increase in the threshold to 1000 mg of peanut protein had an additional quantitative benefit in risk reduction for all patients reacting to 300 mg or less at baseline. We conclude that achieving thresholds of 300 mg and 1000 mg of peanut protein by peanut immunotherapy is clinically relevant, and that the risk for peanut-allergic patients who have achieved this increased threshold to experience an allergic reaction is reduced in a clinically meaningful way. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
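The risk-assessment machinery is only summarised above; the toy Monte Carlo sketch below illustrates the general approach with entirely invented distributions (serving size, contamination level, and thresholds are placeholders, not the study's inputs): simulate a peanut-protein dose per eating occasion and report how often it exceeds a patient's threshold before and after immunotherapy.

    import numpy as np

    def reaction_risk(threshold_mg, n=200_000, seed=0):
        # Fraction of simulated eating occasions whose peanut-protein dose exceeds the
        # patient's threshold; serving and contamination distributions are invented.
        rng = np.random.default_rng(seed)
        serving_kg = rng.lognormal(mean=np.log(0.05), sigma=0.3, size=n)       # kg of product eaten
        contamination = rng.lognormal(mean=np.log(100.0), sigma=2.0, size=n)   # mg protein per kg product
        dose_mg = serving_kg * contamination
        return np.mean(dose_mg > threshold_mg)

    for thr in (100.0, 300.0, 1000.0):   # baseline vs. post-immunotherapy thresholds
        print(thr, reaction_risk(thr))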
Optical implementation of inner product neural associative memory
NASA Technical Reports Server (NTRS)
Liu, Hua-Kuang (Inventor)
1995-01-01
An optical implementation of an inner-product neural associative memory is realized with a first spatial light modulator for entering an initial two-dimensional N-tuple vector and for entering a thresholded output vector image after each iteration until convergence is reached, and a second spatial light modulator for entering M weighted vectors of inner-product scalars multiplied with each of the M stored vectors, where the inner-product scalars are produced by multiplication of the initial input vector in the first iterative cycle (and thresholded vectors in subsequent iterative cycles) with each of the M stored vectors, and the weighted vectors are produced by multiplication of the scalars with corresponding ones of the stored vectors. A Hughes liquid crystal light valve is used for the dual function of summing the weighted vectors and thresholding the sum vector. The thresholded vector is then entered through the first spatial light modulator for reiteration of the process cycle until convergence is reached.
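A numerical analogue of the recall loop described above may help fix ideas; this is a sketch under the usual bipolar-vector assumptions, not a description of the optical hardware. The input is projected onto each stored vector (the inner-product scalars), the stored vectors weighted by those scalars are summed, and the sum is thresholded and fed back until it stops changing.

    import numpy as np

    def recall(stored, probe, max_iters=20):
        # Inner-product associative memory recall with bipolar (+/-1) vectors.
        # stored: (M, N) array of stored patterns; probe: length-N input vector.
        x = np.sign(np.asarray(probe, float))
        for _ in range(max_iters):
            scalars = stored @ x                    # M inner products
            summed = scalars @ stored               # weighted sum of the stored vectors
            y = np.where(summed >= 0.0, 1.0, -1.0)  # thresholding (the light valve's role)
            if np.array_equal(y, x):                # converged
                break
            x = y
        return x

    rng = np.random.default_rng(3)
    patterns = rng.choice([-1.0, 1.0], size=(3, 64))
    noisy = patterns[0].copy()
    noisy[:8] *= -1.0                               # corrupt 8 elements of the first pattern
    print(np.array_equal(recall(patterns, noisy), patterns[0]))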
NASA Astrophysics Data System (ADS)
Penin, A. A.; Pivovarov, A. A.
2001-02-01
We present an analytical description of top-antitop pair production near threshold in $e^+e^-$ annihilation and $\gamma\gamma$ collisions. The set of basic observables considered includes the total cross sections, the forward-backward asymmetry, and the top quark polarization. The threshold effects relevant for the basic observables are described by three universal functions related to S-wave production, P-wave production, and S-P interference. These functions are computed analytically up to next-to-next-to-leading order in NRQCD. The total $e^+e^- \to t\bar{t}$ cross section near threshold is obtained at next-to-next-to-leading order in closed form, including the contribution due to the axial coupling of the top quark mediated by the Z boson. The effects of the running of the strong coupling constant and of the finite top quark width are taken into account analytically for the P-wave production and the S-P wave interference.
Threshold and non-threshold chemical carcinogens: A survey of the present regulatory landscape.
Bevan, Ruth J; Harrison, Paul T C
2017-08-01
For the proper regulation of a carcinogenic material it is necessary to fully understand its mode of action, and in particular whether it demonstrates a threshold of effect. This paper explores our present understanding of carcinogenicity and the mechanisms underlying the carcinogenic response. The concepts of genotoxic and non-genotoxic and threshold and non-threshold carcinogens are fully described. We provide summary tables of the types of cancer considered to be associated with exposure to a number of carcinogens and the available evidence relating to whether carcinogenicity occurs through a threshold or non-threshold mechanism. In light of these observations we consider how different regulatory bodies approach the question of chemical carcinogenesis, looking in particular at the definitions and methodologies used to derive Occupational Exposure Levels (OELs) for carcinogens. We conclude that unless proper differentiation is made between threshold and non-threshold carcinogens, inappropriate risk management measures may be put in place - and lead also to difficulties in translating carcinogenicity research findings into appropriate health policies. We recommend that clear differentiation between threshold and non-threshold carcinogens should be made by all expert groups and regulatory bodies dealing with carcinogen classification and risk assessment. Copyright © 2017 Elsevier Inc. All rights reserved.
Health technology assessment and haemophilia.
Farrugia, A; O'Mahony, B; Cassar, J
2012-03-01
Although the funding of rare diseases such as haemophilia in developing countries remains a low priority, pressures on the funding of haemophilia treatment are also emerging in developed economies affected by the global economic downturn and the other demands on health care budgets. This is leading advisory bodies and payers alike to explore the tools of Health Technology Assessment (HTA) in deriving recommendations for reimbursement policies. In particular, the use of cost-utility analysis (CUA) in deriving costs per quality-adjusted life year (QALY) for different interventions is being used to rank interventions in order of priority relative to a threshold cost per QALY. In these exercises, rare chronic disorders such as haemophilia emerge as particularly unattractive propositions for reimbursement: the accepted methodology of deriving a CUA for, e.g., the use of prophylaxis in haemophilia leads to a range of costs/QALY that exceeds the willingness-to-pay thresholds of most payers. In this commentary, we review the principles utilized in a recent systematic review of the use of haemophilia products carried out in Sweden as part of an HTA. We suggest that ranking haemophilia-related interventions against the standard interventions of therapeutics and public health in CUA comparisons is inappropriate. Given that haemophilia treatment is a form of blood replacement therapy, we propose that such comparisons should be made with the interventions of mainstream blood transfusion. We suggest that unequivocally effective treatments such as haemophilia therapies should be assessed differently from mainstream interventions, that new methodologies are required for these kinds of diseases, and that evidence of a societal willingness to support people with rare disorders needs to be recognized when reimbursement policies are developed. © 2012 Blackwell Publishing Ltd.
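The ranking this commentary questions rests on the standard cost-utility ratio compared against a willingness-to-pay threshold lambda; under that convention an intervention is considered fundable when

$\mathrm{ICER} = \dfrac{C_{\mathrm{new}} - C_{\mathrm{comparator}}}{\mathrm{QALY}_{\mathrm{new}} - \mathrm{QALY}_{\mathrm{comparator}}} \le \lambda$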
Petterson, Stephen; Burke, Matthew; Phillips, Robert; Teevan, Bridget
2011-05-01
Legislation proposed in 2009 to expand GME set institutional primary care and general surgery production eligibility thresholds at 25% at entry into training. The authors measured institutions' production of primary care physicians and general surgeons on completion of first residency versus two to four years after graduation to inform debate and explore residency expansion and physician workforce implications. Production of primary care physicians and general surgeons was assessed by retrospective analysis of the 2009 American Medical Association Masterfile, which includes physicians' training institution, residency specialty, and year of completion for up to six training experiences. The authors measured production rates for each institution based on physicians completing their first residency during 2005-2007 in family or internal medicine, pediatrics, or general surgery. They then reassessed rates to account for those who completed additional training. They compared these rates with proposed expansion eligibility thresholds and current workforce needs. Of 116,004 physicians completing their first residency, 54,245 (46.8%) were in primary care and general surgery. Of 683 training institutions, 586 met the 25% threshold for expansion eligibility. At two to four years out, only 29,963 physicians (25.8%) remained in primary care or general surgery, and 135 institutions lost eligibility. A 35% threshold eliminated 314 institutions collectively training 93,774 residents (80.8%). Residency expansion thresholds that do not account for production at least two to four years after completion of first residency overestimate eligibility. The overall primary care production rate from GME will not sustain the current physician workforce composition. Copyright © by the Association of American medical Colleges.
40 CFR 98.391 - Reporting threshold.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Reporting threshold. 98.391 Section 98.391 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Suppliers of Petroleum Products § 98.391 Reporting threshold. Any...
40 CFR 98.241 - Reporting threshold.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 22 2012-07-01 2012-07-01 false Reporting threshold. 98.241 Section 98.241 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Petrochemical Production § 98.241 Reporting threshold. You must report...
40 CFR 98.241 - Reporting threshold.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Reporting threshold. 98.241 Section 98.241 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Petrochemical Production § 98.241 Reporting threshold. You must report...
40 CFR 98.391 - Reporting threshold.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 22 2012-07-01 2012-07-01 false Reporting threshold. 98.391 Section 98.391 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Suppliers of Petroleum Products § 98.391 Reporting threshold. Any...
A critique of the use of indicator-species scores for identifying thresholds in species responses
Cuffney, Thomas F.; Qian, Song S.
2013-01-01
Identification of ecological thresholds is important for both theoretical and applied ecology. Recently, Baker and King (2010, King and Baker 2010) proposed a method, threshold indicator analysis (TITAN), to calculate species and community thresholds based on indicator species scores adapted from Dufrêne and Legendre (1997). We tested the ability of TITAN to detect thresholds using models with (broken-stick, disjointed broken-stick, dose-response, step-function, Gaussian) and without (linear) definitive thresholds. TITAN accurately and consistently detected thresholds in step-function models, but not in models characterized by abrupt changes in response slopes or response direction. Threshold detection in TITAN was very sensitive to the distribution of zero values, which caused TITAN to identify thresholds associated with relatively small differences in the distribution of zero values while ignoring thresholds associated with large changes in abundance. Threshold identification and tests of statistical significance were based on the same data permutations, resulting in inflated estimates of statistical significance. Application of bootstrapping to the split-point problem that underlies TITAN led to underestimates of the confidence intervals of thresholds. Bias in the derivation of the z-scores used to identify TITAN thresholds, together with skewness in the distribution of data along the gradient, produced TITAN thresholds that were much more similar to one another than the actual thresholds were. This tendency may account for the synchronicity of thresholds reported in TITAN analyses. The thresholds identified by TITAN represented disparate characteristics of species responses; this, coupled with the inability of TITAN to identify thresholds accurately and consistently, does not support the aggregation of individual species thresholds into a community threshold.
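For readers unfamiliar with the split-point machinery being critiqued, the sketch below is a deliberately simplified stand-in (not TITAN itself, and ignoring the indicator-value weighting, z-scores, and bootstrapping discussed above): it scans candidate change points along the gradient and keeps the split that maximises the difference in a taxon's mean abundance on either side. The critique's concern is precisely what such split statistics respond to, for example the distribution of zero abundances.

    import numpy as np

    def best_split(gradient, abundance):
        # Return the candidate change point maximising the absolute difference in mean
        # abundance between the two sides (a crude stand-in for TITAN's scores).
        order = np.argsort(gradient)
        x, y = np.asarray(gradient)[order], np.asarray(abundance)[order]
        best, best_score = None, -np.inf
        for i in range(5, len(x) - 5):          # keep a few observations on each side
            score = abs(y[i:].mean() - y[:i].mean())
            if score > best_score:
                best, best_score = x[i], score
        return best, best_score

    rng = np.random.default_rng(4)
    grad = np.sort(rng.uniform(0.0, 10.0, 120))
    abund = np.where(grad > 6.0, 8.0, 1.0) + rng.normal(0.0, 1.0, 120)   # step response at 6
    print(best_split(grad, abund))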
40 CFR 98.61 - Reporting threshold.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Reporting threshold. 98.61 Section 98.61 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Aluminum Production § 98.61 Reporting threshold. You must report GHG...
40 CFR 98.281 - Reporting threshold.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 22 2012-07-01 2012-07-01 false Reporting threshold. 98.281 Section 98.281 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Silicon Carbide Production § 98.281 Reporting threshold. You must report...
40 CFR 98.81 - Reporting threshold.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 22 2012-07-01 2012-07-01 false Reporting threshold. 98.81 Section 98.81 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Cement Production § 98.81 Reporting threshold. You must report GHG...
40 CFR 98.81 - Reporting threshold.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Reporting threshold. 98.81 Section 98.81 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Cement Production § 98.81 Reporting threshold. You must report GHG...
40 CFR 98.201 - Reporting threshold.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 22 2012-07-01 2012-07-01 false Reporting threshold. 98.201 Section 98.201 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Magnesium Production § 98.201 Reporting threshold. You must report GHG...
40 CFR 98.111 - Reporting threshold.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 22 2012-07-01 2012-07-01 false Reporting threshold. 98.111 Section 98.111 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Ferroalloy Production § 98.111 Reporting threshold. You must report GHG...
40 CFR 98.181 - Reporting threshold.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 22 2012-07-01 2012-07-01 false Reporting threshold. 98.181 Section 98.181 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Lead Production § 98.181 Reporting threshold. You must report GHG...
40 CFR 98.151 - Reporting threshold.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 22 2012-07-01 2012-07-01 false Reporting threshold. 98.151 Section 98.151 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING HCFC-22 Production and HFC-23 Destruction § 98.151 Reporting threshold...
40 CFR 98.311 - Reporting threshold.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Reporting threshold. 98.311 Section 98.311 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Titanium Dioxide Production § 98.311 Reporting threshold. You must report...
40 CFR 98.171 - Reporting threshold.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Reporting threshold. 98.171 Section 98.171 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Iron and Steel Production § 98.171 Reporting threshold. You must report...
40 CFR 98.111 - Reporting threshold.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Reporting threshold. 98.111 Section 98.111 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Ferroalloy Production § 98.111 Reporting threshold. You must report GHG...
40 CFR 98.311 - Reporting threshold.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 22 2012-07-01 2012-07-01 false Reporting threshold. 98.311 Section 98.311 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Titanium Dioxide Production § 98.311 Reporting threshold. You must report...
40 CFR 98.141 - Reporting threshold.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 22 2012-07-01 2012-07-01 false Reporting threshold. 98.141 Section 98.141 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Glass Production § 98.141 Reporting threshold. You must report GHG...
40 CFR 98.221 - Reporting threshold.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Reporting threshold. 98.221 Section 98.221 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Nitric Acid Production § 98.221 Reporting threshold. You must report GHG...
40 CFR 98.61 - Reporting threshold.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 22 2012-07-01 2012-07-01 false Reporting threshold. 98.61 Section 98.61 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Aluminum Production § 98.61 Reporting threshold. You must report GHG...
40 CFR 98.51 - Reporting threshold.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Reporting threshold. 98.51 Section 98.51 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Adipic Acid Production § 98.51 Reporting threshold. You must report GHG...
40 CFR 98.331 - Reporting threshold.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Reporting threshold. 98.331 Section 98.331 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Zinc Production § 98.331 Reporting threshold. You must report GHG...
40 CFR 98.171 - Reporting threshold.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 22 2012-07-01 2012-07-01 false Reporting threshold. 98.171 Section 98.171 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Iron and Steel Production § 98.171 Reporting threshold. You must report...
40 CFR 98.181 - Reporting threshold.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Reporting threshold. 98.181 Section 98.181 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Lead Production § 98.181 Reporting threshold. You must report GHG...
40 CFR 98.51 - Reporting threshold.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 22 2012-07-01 2012-07-01 false Reporting threshold. 98.51 Section 98.51 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Adipic Acid Production § 98.51 Reporting threshold. You must report GHG...
40 CFR 98.141 - Reporting threshold.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Reporting threshold. 98.141 Section 98.141 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Glass Production § 98.141 Reporting threshold. You must report GHG...
40 CFR 98.151 - Reporting threshold.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Reporting threshold. 98.151 Section 98.151 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING HCFC-22 Production and HFC-23 Destruction § 98.151 Reporting threshold...
40 CFR 98.261 - Reporting threshold.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 22 2012-07-01 2012-07-01 false Reporting threshold. 98.261 Section 98.261 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Phosphoric Acid Production § 98.261 Reporting threshold. You must report...
40 CFR 98.281 - Reporting threshold.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Reporting threshold. 98.281 Section 98.281 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Silicon Carbide Production § 98.281 Reporting threshold. You must report...
40 CFR 98.261 - Reporting threshold.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Reporting threshold. 98.261 Section 98.261 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Phosphoric Acid Production § 98.261 Reporting threshold. You must report...
40 CFR 98.161 - Reporting threshold.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 22 2012-07-01 2012-07-01 false Reporting threshold. 98.161 Section 98.161 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Hydrogen Production § 98.161 Reporting threshold. You must report GHG...
40 CFR 98.221 - Reporting threshold.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 22 2012-07-01 2012-07-01 false Reporting threshold. 98.221 Section 98.221 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Nitric Acid Production § 98.221 Reporting threshold. You must report GHG...
40 CFR 98.331 - Reporting threshold.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 22 2012-07-01 2012-07-01 false Reporting threshold. 98.331 Section 98.331 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Zinc Production § 98.331 Reporting threshold. You must report GHG...
40 CFR 98.161 - Reporting threshold.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Reporting threshold. 98.161 Section 98.161 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Hydrogen Production § 98.161 Reporting threshold. You must report GHG...
A Multinomial Model for Identifying Significant Pure-Tone Threshold Shifts
ERIC Educational Resources Information Center
Schlauch, Robert S.; Carney, Edward
2007-01-01
Purpose: Significant threshold differences on retest for pure-tone audiometry are often evaluated by application of ad hoc rules, such as a shift in a pure-tone average or in 2 adjacent frequencies that exceeds a predefined amount. Rules that are so derived do not consider the probability of observing a particular audiogram. Methods: A general…
Irwin, R John; Irwin, Timothy C
2011-06-01
Making clinical decisions on the basis of diagnostic tests is an essential feature of medical practice and the choice of the decision threshold is therefore crucial. A test's optimal diagnostic threshold is the threshold that maximizes expected utility. It is given by the product of the prior odds of a disease and a measure of the importance of the diagnostic test's sensitivity relative to its specificity. Choosing this threshold is the same as choosing the point on the Receiver Operating Characteristic (ROC) curve whose slope equals this product. We contend that a test's likelihood ratio is the canonical decision variable and contrast diagnostic thresholds based on likelihood ratio with two popular rules of thumb for choosing a threshold. The two rules are appealing because they have clear graphical interpretations, but they yield optimal thresholds only in special cases. The optimal rule can be given similar appeal by presenting indifference curves, each of which shows a set of equally good combinations of sensitivity and specificity. The indifference curve is tangent to the ROC curve at the optimal threshold. Whereas ROC curves show what is feasible, indifference curves show what is desirable. Together they show what should be chosen. Copyright © 2010 European Federation of Internal Medicine. Published by Elsevier B.V. All rights reserved.
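As a sketch of the expected-utility argument summarized above (using generic symbols introduced here, not necessarily the authors' parameterization): with disease prevalence p and utilities U for the four possible test outcomes, maximizing expected utility over the operating point leads to the standard condition that the optimal point is where the slope of the ROC curve equals the product of the odds against disease and a ratio expressing the relative importance of correct negatives versus correct positives,

\[
\left.\frac{d\,\mathrm{TPR}}{d\,\mathrm{FPR}}\right|_{\text{optimal}}
  \;=\; \frac{1-p}{p}\cdot\frac{U_{TN}-U_{FP}}{U_{TP}-U_{FN}} .
\]

Reading the right-hand side as a likelihood-ratio cutoff is what makes the likelihood ratio the natural decision variable discussed in the abstract.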
Delaney, Edward J
2007-11-01
The recent application of the threshold of toxicological concern (TTC) concept to the regulation of pharmaceuticals in the European Union is analyzed. The derivation of TTC and the threshold of regulation that followed it were originally intended to provide makers of food contact materials greater flexibility with their products, while allowing the CFSAN branch of FDA to conserve its resources for more important issues. A reanalysis of the scientific data employed by EMEA regulators to rationalize its 1.5 mcg default genotoxic impurity limit is presented to demonstrate (a) that direct translation of conclusions relevant to food consumption is unduly influenced by many classes of potent carcinogens of historic concern that would be impossible to generate unknowingly as pharmaceutical impurities, and (b) that the majority of reactive chemicals that would be useful to synthetic chemists are among the least potent carcinogens in the underpinning supportive analyses. Evidence is further presented to show that implementation and acceptance of a 1.5 mcg TTC-based total limit on such impurities can be expected to impede pharmaceutical research and development efficiency while providing an insignificant cancer risk-avoidance benefit to patients who require pharmaceutical treatments. The conclusion drawn is that a significantly higher default limit can readily be defended that would be both in keeping with TTC principles and in the best interest of patients.
Traverso, A; Bassoli, Viviana; Cioè, A; Anselmo, Silvia; Ferro, Marta
2010-01-01
Aflatoxins are mycotoxins derived from foodstuffs colonized by fungal species of the genus Aspergillus; they are common food contaminants with immunosuppressive, mutagenic and carcinogenic activity. Aflatoxins are heat-resistant and are thus easily transmitted along the food chain. They are hepatotoxic and have the potential to induce hepatocellular carcinoma. Agri-food industry workers are thus at risk of ingestion as well as transmucosal absorption or inhalation of toxins released during product preparation or processing. The aim was to measure the levels of airborne mycotoxins, particularly aflatoxins, in a laboratory analysing imported foodstuffs for mycotoxin contamination. The protocol used to analyse a batch of shelled peanuts from Vietnam, especially the grinding phase, which is held to be at the highest risk of generating airborne toxins, was assessed at the A.R.PA.L. laboratory (Liguria Region Environmental Protection Agency) of Genoa, Italy, which participates in a European aflatoxin monitoring project. Wet grinding was performed to avoid production of large amounts of dust. Comparison of airborne concentrations before and after grinding with legal thresholds disclosed that the analytical procedures involved negligible aflatoxin levels for operators (environmental burden 0.11 pg/m3). Given the toxicity of aflatoxins, worker protection measures should be consistently adopted and enforced. Threshold limit values for working environments should be introduced in addition to the existing ones for public health.
Cytotoxicity assessment of antibiofouling compounds and by-products in marine bivalve cell cultures.
Domart-Coulon, I; Auzoux-Bordenave, S; Doumenc, D; Khalanski, M
2000-06-01
Short-term primary cell cultures were derived from adult marine bivalve tissues: the heart of oyster Crassostrea gigas and the gill of clam Ruditapes decussatus. These cultures were used as experimental in vitro models to assess the acute cytotoxicity of an organic molluscicide, Mexel-432, used in antibiofouling treatments in industrial cooling water systems. A microplate cell viability assay, based on the enzymatic reduction of tetrazolium dye (MTT) in living bivalve cells, was adapted to test the cytotoxicity of this compound: in both in vitro models, toxicity thresholds of Mexel-432 were compared to those determined in vivo with classic acute toxicity tests. The clam gill cell model was also used to assess the cytotoxicity of by-products of chlorination, a major strategy of biofouling control in the marine environment. The applications and limits of these new in vitro models for monitoring aquatic pollutants were discussed with reference to the standardized Microtox test.
Flow-induced flutter in a wall-bounded elastic sheet
NASA Astrophysics Data System (ADS)
Weidman, M. S.; Argentina, M.; Hosoi, A. E.; Mahadevan, L.
2004-11-01
Inspired by voice production in natural and artificial systems, we consider the flow between a long but finite flexible elastic sheet and a rigid wall close to it. We derive evolution equations for the coupled dynamics of the fluid and solid in two limits corresponding to the viscously dominated and inertially dominated regimes of the flow. In both situations, the inertia of the solid remains important. We show that a long-wavelength instability via a 1:1 resonance mechanism arises in both situations when the flow rate is increased beyond a critical threshold. We also compare the results of our analytical, numerical and scaling calculations with those of simple experiments. Finally, we comment on the rich nonlinear dynamics of these systems, which suggests that at least some aspects of voice and song production may be more a manifestation of physics than of neurophysiology.
NASA Astrophysics Data System (ADS)
Kaertner, Franz X.; Russer, Peter
1990-11-01
The master equation for a dc-pumped degenerate Josephson parametric amplifier is derived. It is shown that the Wigner distribution representation of this master equation can be approximated by a Fokker-Planck equation. By using this equation, the dynamical behavior of this degenerate Josephson amplifier with respect to squeezing of the radiation field is investigated. It is shown that below threshold of parametric oscillation, a squeezed vacuum state can be generated, and above threshold a second bifurcation point exists, where the device generates amplitude squeezed radiation. Basic relations between the achievable amplitude squeezing, the output power, and the operation frequency are derived.
High-energy neutrino fluxes from AGN populations inferred from X-ray surveys
NASA Astrophysics Data System (ADS)
Jacobsen, Idunn B.; Wu, Kinwah; On, Alvina Y. L.; Saxton, Curtis J.
2015-08-01
High-energy neutrinos and photons are complementary messengers, probing violent astrophysical processes and structural evolution of the Universe. X-ray and neutrino observations jointly constrain conditions in active galactic nuclei (AGN) jets: their baryonic and leptonic contents, and particle production efficiency. Testing two standard neutrino production models for local source Cen A (Koers & Tinyakov and Becker & Biermann), we calculate the high-energy neutrino spectra of single AGN sources and derive the flux of high-energy neutrinos expected for the current epoch. Assuming that accretion determines both X-rays and particle creation, our parametric scaling relations predict neutrino yield in various AGN classes. We derive redshift-dependent number densities of each class, from Chandra and Swift/BAT X-ray luminosity functions (Silverman et al. and Ajello et al.). We integrate the neutrino spectrum expected from the cumulative history of AGN (correcting for cosmological and source effects, e.g. jet orientation and beaming). Both emission scenarios yield neutrino fluxes well above limits set by IceCube (by ~4-10⁶× at 1 PeV, depending on the assumed jet models for neutrino production). This implies that: (i) Cen A might not be a typical neutrino source as commonly assumed; (ii) both neutrino production models overestimate the efficiency; (iii) neutrino luminosity scales with accretion power differently among AGN classes and hence does not follow X-ray luminosity universally; (iv) some AGN are neutrino-quiet (e.g. below a power threshold for neutrino production); (v) neutrino and X-ray emission have different duty cycles (e.g. jets alternate between baryonic and leptonic flows); or (vi) some combination of the above.
Threshold current for fireball generation
NASA Astrophysics Data System (ADS)
Dijkhuis, Geert C.
1982-05-01
Fireball generation from a high-intensity circuit breaker arc is interpreted here as a quantum-mechanical phenomenon caused by severe cooling of electrode material evaporating from contact surfaces. According to the proposed mechanism, quantum effects appear in the arc plasma when the radius of one magnetic flux quantum inside solid electrode material has shrunk to one London penetration length. A formula derived for the threshold discharge current preceding fireball generation is found compatible with data reported by Silberg. This formula predicts linear scaling of the threshold current with the circuit breaker's electrode radius and concentration of conduction electrons.
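One way to see the stated linear scaling (an order-of-magnitude sketch using Ampère's law and the London penetration depth; symbols introduced here for illustration, not the paper's actual derivation): requiring that the discharge current at electrode radius r support the field of one flux quantum Φ₀ confined to an area of order λ_L² gives

\[
I_{\mathrm{th}} \;\sim\; \frac{2\,\Phi_0\, r}{\mu_0\, \lambda_L^{2}}
  \;=\; \frac{2\,\Phi_0\, e^{2}}{m_e}\, n_e\, r ,
\qquad
\lambda_L \;=\; \sqrt{\frac{m_e}{\mu_0\, n_e\, e^{2}}},
\]

which is linear in both the electrode radius r and the conduction-electron concentration n_e, consistent with the scaling quoted in the abstract.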
NASA Astrophysics Data System (ADS)
Gómez-Ocampo, E.; Gaxiola-Castro, G.; Durazo, Reginaldo
2017-06-01
A threshold is defined as the point where small changes in an environmental driver produce large responses in the ecosystem. Generalized additive models (GAMs) were used to estimate the thresholds and contribution of key dynamic physical variables in terms of phytoplankton production and variations in biomass in the tropical-subtropical Pacific Ocean off Mexico. The statistical approach used here showed that thresholds were shallower for primary production than for phytoplankton biomass (pycnocline < 68 m and mixed layer < 30 m versus pycnocline < 45 m and mixed layer < 80 m) but were similar for absolute dynamic topography and Ekman pumping (ADT < 59 cm and EkP > 0 cm d⁻¹ versus ADT < 60 cm and EkP > 4 cm d⁻¹). The relatively high productivity on seasonal (spring) and interannual (La Niña 2008) scales was linked to low ADT (45-60 cm) and shallow pycnocline depth (9-68 m) and mixed layer (8-40 m). Statistical estimations from satellite data indicated that the contributions of ocean circulation to phytoplankton variability were 18% (for phytoplankton biomass) and 46% (for phytoplankton production). Although the statistical contribution of models constructed with in situ integrated chlorophyll a and primary production data was lower than the one obtained with satellite data (11%), the fits were better for the former, based on the residual distribution. The results reported here suggest that estimated thresholds may reliably explain the spatial-temporal variations of phytoplankton in the tropical-subtropical Pacific Ocean off the coast of Mexico.
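A minimal sketch of the threshold-from-smooth-fit idea described above, with hypothetical data and a SciPy smoothing spline standing in for the GAM smoother (variable names and numbers are illustrative, not the study's): fit a smooth response of productivity to a physical driver, then read off the driver value where the fitted response changes most rapidly.

    import numpy as np
    from scipy.interpolate import UnivariateSpline

    rng = np.random.default_rng(1)

    # Hypothetical driver (e.g. pycnocline depth, m) and response (e.g. primary production)
    depth = np.sort(rng.uniform(5, 120, 150))
    response = 10.0 / (1.0 + np.exp((depth - 60.0) / 8.0)) + rng.normal(0, 0.5, 150)

    # Smooth fit (stand-in for a GAM smoother) and its first derivative
    spline = UnivariateSpline(depth, response, s=len(depth) * 0.3)
    deriv = spline.derivative()

    grid = np.linspace(depth.min(), depth.max(), 500)
    threshold = grid[np.argmax(np.abs(deriv(grid)))]   # steepest change in the fitted response
    print("approximate threshold depth (m):", round(threshold, 1))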
Flagging optically shallow pixels for improved analysis of ocean color data
NASA Astrophysics Data System (ADS)
McKinna, L. I. W.; Werdell, J.; Knowles, D., Jr.
2016-02-01
Ocean color remote-sensing is routinely used to derive marine geophysical parameters from sensor-observed water-leaving radiances. However, in clear geometrically shallow regions, traditional ocean color algorithms can be confounded by light reflected from the seafloor. Such regions are typically referred to as "optically shallow". When performing spatiotemporal analyses of ocean color datasets, optically shallow features such as coral reefs can lead to unexpected regional biases. Benthic contamination of the water-leaving radiance is dependent on bathymetry, water clarity and seafloor albedo. Thus, a prototype ocean color processing flag called OPTSHAL has been developed that takes all three variables into account. In the method described here, the optical depth of the water column at 547 nm, ζ(547), is predicted from known bathymetry and estimated inherent optical properties. If ζ(547) is less than the pre-defined threshold, a pixel is flagged as optically shallow. Radiative transfer modeling was used to identify the appropriate threshold value of ζ(547) for a generic benthic sand albedo. OPTSHAL has been evaluated within the NASA Ocean Biology Processing Group's L2GEN code. Using MODIS Aqua imagery, OPTSHAL was tested in two regions: (i) the Pedro Bank south-west of Jamaica, and (ii) the Great Barrier Reef, Australia. It is anticipated that OPTSHAL will benefit end-users when quality controlling derived ocean color products. Further, OPTSHAL may prove useful as a mechanism for switching between optically deep and shallow algorithms during ocean color processing.
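A minimal sketch of the flagging rule described above, with illustrative variable names and a placeholder threshold (the operational cutoff and the way inherent optical properties are estimated in L2GEN are not reproduced here): approximate the optical depth at 547 nm as the product of a diffuse-attenuation estimate and the bottom depth, and flag pixels below the cutoff as optically shallow.

    import numpy as np

    def flag_optically_shallow(kd_547, bottom_depth_m, zeta_threshold=3.0):
        """Return a boolean mask of optically shallow pixels.

        kd_547         : per-pixel diffuse attenuation estimate at 547 nm (1/m)
        bottom_depth_m : per-pixel bathymetry (m, positive down)
        zeta_threshold : illustrative cutoff on optical depth zeta(547) = kd * depth
        """
        zeta_547 = kd_547 * bottom_depth_m
        return zeta_547 < zeta_threshold

    # Example with made-up pixels: a clear shallow bank, open ocean, and turbid shallow water
    kd = np.array([0.05, 0.04, 0.30])
    depth = np.array([10.0, 800.0, 10.0])
    print(flag_optically_shallow(kd, depth))   # -> [ True False False]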
USDA-ARS?s Scientific Manuscript database
Degree-days can be used to adjust for seasonal variation in water temperature when planning tilapia fingerling production strategies and are calculated by subtracting a threshold temperature ("biological zero") from the mean daily water temperature; the threshold temperature is the temperature below...
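The degree-day calculation described in this record is simple enough to state directly; a minimal sketch follows (Python, with an illustrative biological-zero temperature, since the record is truncated before giving one).

    def degree_days(daily_mean_temps_c, biological_zero_c=15.0):
        """Sum of (mean daily water temperature - threshold), counting only
        days above the threshold. biological_zero_c is illustrative."""
        return sum(max(t - biological_zero_c, 0.0) for t in daily_mean_temps_c)

    # Example: one warm week of daily mean temperatures (deg C)
    print(degree_days([18.0, 20.5, 22.0, 16.0, 14.0, 19.0, 21.0]))  # 26.5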
Thresholding Based on Maximum Weighted Object Correlation for Rail Defect Detection
NASA Astrophysics Data System (ADS)
Li, Qingyong; Huang, Yaping; Liang, Zhengping; Luo, Siwei
Automatic thresholding is an important technique for rail defect detection, but traditional methods are not well suited to the characteristics of this application. This paper proposes the Maximum Weighted Object Correlation (MWOC) thresholding method, designed around the facts that rail images are unimodal and that the defect proportion is small. MWOC selects a threshold by optimizing the product of object correlation and a weight term that expresses the proportion of thresholded defects. Our experimental results demonstrate that MWOC achieves a misclassification error of 0.85% and outperforms other well-established thresholding methods, including Otsu, maximum correlation thresholding, maximum entropy thresholding and the valley-emphasis method, for the application of rail defect detection.
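A minimal sketch of the selection rule as summarized above (illustrative only: the paper's exact object-correlation and weight terms are not reproduced; here the object term is approximated by between-class separation and the weight by the thresholded-pixel proportion): score each candidate gray level by the product of the two terms and keep the maximizer.

    import numpy as np

    def mwoc_like_threshold(gray_values):
        """Pick the threshold maximizing (object term) * (weight term).
        gray_values: 1-D array of pixel intensities; defects assumed dark."""
        best_t, best_score = None, -np.inf
        for t in range(int(gray_values.min()) + 1, int(gray_values.max())):
            obj = gray_values[gray_values <= t]          # candidate defect pixels
            bkg = gray_values[gray_values > t]
            if obj.size == 0 or bkg.size == 0:
                continue
            separation = abs(obj.mean() - bkg.mean())    # stand-in object term
            weight = obj.size / gray_values.size         # proportion of thresholded pixels
            score = separation * weight
            if score > best_score:
                best_t, best_score = t, score
        return best_t

    # Example: mostly bright rail surface with a small population of dark defect pixels
    rng = np.random.default_rng(2)
    pixels = np.concatenate([rng.normal(180, 10, 9800), rng.normal(60, 10, 200)])
    print("selected threshold:", mwoc_like_threshold(pixels))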
Climate Products and Services to Meet the Challenges of Extreme Events
NASA Astrophysics Data System (ADS)
McCalla, M. R.
2008-12-01
The 2002 Office of the Federal Coordinator for Meteorological Services and Supporting Research (OFCM [1])-sponsored report, Weather Information for Surface Transportation: National Needs Assessment Report, addressed meteorological needs for six core modes of surface transportation: roadway, railway, transit, marine transportation/operations, pipeline, and airport ground operations. The report's goal was to articulate the weather information needs and attendant surface transportation weather products and services for those entities that use, operate, and manage America's surface transportation infrastructure. The report documented weather thresholds and associated impacts which are critical for decision-making in surface transportation. More recently, the 2008 Climate Change Science Program's (CCSP) Synthesis and Assessment Product (SAP) 4.7 entitled, Impacts of Climate Change and Variability on Transportation Systems and Infrastructure: Gulf Coast Study, Phase I, included many of the impacts from the OFCM-sponsored report in Table 1.1 of this SAP [2]. The Intergovernmental Panel on Climate Change (IPCC) reported that since 1950, there has been an increase in the number of heat waves, heavy precipitation events, and areas of drought. Moreover, the IPCC indicated that greater wind speeds could accompany more severe tropical cyclones [3]. Taken together, the OFCM, CCSP, and IPCC reports indicate not only the significance of extreme events, but also the potential increasing significance of many of the weather thresholds and associated impacts which are critical for decision-making in surface transportation. Accordingly, there is a real and urgent need to understand what climate products and services are available now to address the weather thresholds within the surface transportation arena. It is equally urgent to understand what new climate products and services are needed to address these weather thresholds, and articulate what can be done to fill the gap between the existing federal climate products and services and the needed federal climate products and services which will address these weather thresholds. Just as important, as we work to meet the needs, a robust education and outreach program is essential to take full advantage of new products, services and capabilities. To ascertain what climate products and services currently exist to address weather thresholds relative to surface transportation, what climate products and services are needed to address these weather thresholds, and how to bridge the gap between what is available and what is needed, the OFCM surveyed the federal meteorological community. Consistent with the extreme events highlighted in the IPCC report, the OFCM survey categorized the weather thresholds associated with surface transportation into the following extreme event areas: (a) excessive heat, (b) winter precipitation, (c) summer precipitation, (d) high winds, and (e) flooding and coastal inundation. The survey results, the gap analysis, as well as OFCM's planned, follow-on activities with additional categories (i.e., in addition to surface transportation) and weather thresholds will be shared with meeting participants.
[1] The OFCM is an interdepartmental office established in response to Public Law 87-843 with the mission to ensure the effective use of federal meteorological resources by leading the systematic coordination of operational weather and climate requirements, products, services, and supporting research among the federal agencies.
[2] http://www.climatescience.gov/Library/sap/sap4-7/final-report/sap4-7-final-ch1.pdf
[3] http://www.gcrio.org/ipcc/ar4/wg1/faq/ar4wg1faq-3-3.pdf
Production of X(3872) at PANDA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, G. Y.; Ma, J. P.
2008-05-01
The recently discovered X(3872) has many possible interpretations. We study the production of X(3872) with PANDA at GSI for the antiproton-proton collision with two possible interpretations of X(3872). One is as a loosely bound molecule of D mesons, while the other is the 2P charmonium state χc1(2P). Using effective couplings we are able to give numerical predictions for the production near the threshold and the production associated with π⁰. We also study the possible background near the threshold production for X(3872) → J/ψπ⁺π⁻. With the designed luminosity of 1.5 fb⁻¹ per year of PANDA we find that the event number of pp → J/ψπ⁺π⁻ near the threshold is of the order of 10⁶-10⁸. Our study shows that the two interpretations are distinguishable from the line shape of the production.
Yang, Chihae; Barlow, Susan M; Muldoon Jacobs, Kristi L; Vitcheva, Vessela; Boobis, Alan R; Felter, Susan P; Arvidson, Kirk B; Keller, Detlef; Cronin, Mark T D; Enoch, Steven; Worth, Andrew; Hollnagel, Heli M
2017-11-01
A new dataset of cosmetics-related chemicals for the Threshold of Toxicological Concern (TTC) approach has been compiled, comprising 552 chemicals with 219, 40, and 293 chemicals in Cramer Classes I, II, and III, respectively. Data were integrated and curated to create a database of No-/Lowest-Observed-Adverse-Effect Level (NOAEL/LOAEL) values, from which the final COSMOS TTC dataset was developed. Criteria for study inclusion and NOAEL decisions were defined, and rigorous quality control was performed for study details and assignment of Cramer classes. From the final COSMOS TTC dataset, human exposure thresholds of 42 and 7.9 μg/kg-bw/day were derived for Cramer Classes I and III, respectively. The size of Cramer Class II was insufficient for derivation of a TTC value. The COSMOS TTC dataset was then federated with the dataset of Munro and colleagues, previously published in 1996, after updating the latter using the quality control processes for this project. This federated dataset expands the chemical space and provides more robust thresholds. The 966 substances in the federated database comprise 245, 49 and 672 chemicals in Cramer Classes I, II and III, respectively. The corresponding TTC values of 46, 6.2 and 2.3 μg/kg-bw/day are broadly similar to those of the original Munro dataset. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
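For orientation, a TTC value of the kind reported above is typically obtained by taking a low percentile of the NOAEL distribution within a Cramer class and dividing by a safety factor; the sketch below (Python, with made-up NOAEL values and the conventional 5th-percentile, 100-fold-factor choice, which this abstract does not itself spell out) shows the arithmetic only.

    import numpy as np

    def ttc_from_noaels(noaels_mg_per_kg_bw_day, percentile=5, safety_factor=100):
        """Human exposure threshold (ug/kg-bw/day) from a NOAEL distribution.
        The percentile and safety factor are the conventional choices, assumed here."""
        p = np.percentile(noaels_mg_per_kg_bw_day, percentile)    # mg/kg-bw/day
        return p / safety_factor * 1000.0                         # -> ug/kg-bw/day

    # Made-up NOAELs for illustration only
    noaels = np.random.default_rng(3).lognormal(mean=3.0, sigma=1.2, size=300)
    print(round(ttc_from_noaels(noaels), 1), "ug/kg-bw/day")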
ERIC Educational Resources Information Center
Shaw, W. M., Jr.
1990-01-01
These two articles discuss clustering structure in the Cystic Fibrosis Document Collection, which is derived from the National Library of Medicine's MEDLINE file. The exhaustivity of four subject representations and two citation representations is examined, and descriptor-weight thresholds and similarity thresholds are used to compute…
Hinck, Jo E.; Linder, Greg L.; Otton, James K.; Finger, Susan E.; Little, Edward E.; Tillitt, Donald E.
2013-01-01
Chemical data from soil and weathered waste material samples collected from five uranium mines north of the Grand Canyon (three reclaimed, one mined but not reclaimed, and one never mined) were used in a screening-level risk analysis for the Arizona chisel-toothed kangaroo rat (Dipodomys microps leucotis); risks from radiation exposure were not evaluated. Dietary toxicity reference values were used to estimate soil-screening thresholds presenting risk to kangaroo rats. Sensitivity analyses indicated that body weight critically affected outcomes of exposed-dose calculations; juvenile kangaroo rats were more sensitive to the inorganic constituent toxicities than adult kangaroo rats. Species-specific soil-screening thresholds were derived for arsenic (137 mg/kg), cadmium (16 mg/kg), copper (1,461 mg/kg), lead (1,143 mg/kg), nickel (771 mg/kg), thallium (1.3 mg/kg), uranium (1,513 mg/kg), and zinc (731 mg/kg) using toxicity reference values that incorporate expected chronic field exposures. Inorganic contaminants in soils within and near the mine areas generally posed minimal risk to kangaroo rats. Most exceedances of soil thresholds were for arsenic and thallium and were associated with weathered mine wastes.
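A minimal sketch of the screening-threshold arithmetic implied above, with illustrative parameter values (the study's actual toxicity reference values, soil ingestion rates, and body weights are not reproduced here): the soil concentration at which the estimated daily dose reaches the toxicity reference value.

    def soil_screening_threshold(trv_mg_per_kg_bw_day, body_weight_kg,
                                 soil_ingestion_kg_per_day, diet_fraction_from_site=1.0):
        """Soil concentration (mg/kg) at which the estimated dose equals the TRV.
        dose = conc * soil_ingestion * fraction / body_weight  =>  solve for conc."""
        return (trv_mg_per_kg_bw_day * body_weight_kg) / (
            soil_ingestion_kg_per_day * diet_fraction_from_site)

    # Illustrative numbers only (not the study's): a 0.06 kg kangaroo rat
    # incidentally ingesting 0.0001 kg soil/day, with a TRV of 0.2 mg/kg-bw/day
    print(round(soil_screening_threshold(0.2, 0.06, 0.0001), 1), "mg/kg soil")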
Optimal Investment Under Transaction Costs: A Threshold Rebalanced Portfolio Approach
NASA Astrophysics Data System (ADS)
Tunc, Sait; Donmez, Mehmet Ali; Kozat, Suleyman Serdar
2013-06-01
We study optimal investment in a financial market having a finite number of assets from a signal processing perspective. We investigate how an investor should distribute capital over these assets and when he should reallocate the distribution of the funds over these assets to maximize the cumulative wealth over any investment period. In particular, we introduce a portfolio selection algorithm that maximizes the expected cumulative wealth in i.i.d. two-asset discrete-time markets where the market levies proportional transaction costs in buying and selling stocks. We achieve this using "threshold rebalanced portfolios", where trading occurs only if the portfolio breaches certain thresholds. Under the assumption that the relative price sequences have log-normal distribution from the Black-Scholes model, we evaluate the expected wealth under proportional transaction costs and find the threshold rebalanced portfolio that achieves the maximal expected cumulative wealth over any investment period. Our derivations can be readily extended to markets having more than two stocks, where these extensions are pointed out in the paper. As predicted from our derivations, we significantly improve the achieved wealth over portfolio selection algorithms from the literature on historical data sets.
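A minimal sketch of the trading rule described above for two assets with proportional costs (the target weight, band, cost rate, and return paths are illustrative and this is not the authors' optimization): hold the portfolio unless the first asset's weight drifts outside a band around the target, and rebalance back to the target when it does, paying a proportional cost on the traded amount.

    import numpy as np

    def threshold_rebalance(returns_a, returns_b, target=0.5, band=0.1, cost=0.002):
        """Grow wealth over two return sequences, rebalancing to `target` weight in
        asset A only when its weight leaves [target - band, target + band]."""
        wealth, w_a = 1.0, target
        for ra, rb in zip(returns_a, returns_b):
            growth = w_a * (1 + ra) + (1 - w_a) * (1 + rb)
            wealth *= growth
            w_a = w_a * (1 + ra) / growth                 # weight drifts with prices
            if abs(w_a - target) > band:                  # threshold breached: rebalance
                wealth -= wealth * cost * abs(w_a - target)
                w_a = target
        return wealth

    rng = np.random.default_rng(4)
    ra = rng.normal(0.0005, 0.02, 1000)
    rb = rng.normal(0.0003, 0.01, 1000)
    print("final wealth:", round(threshold_rebalance(ra, rb), 3))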
Cockburn, Neil; Kovacs, Michael
2016-01-01
CT Perfusion (CTP) derived cerebral blood flow (CBF) thresholds have been proposed as the optimal parameter for distinguishing the infarct core prior to reperfusion. Previous threshold-derivation studies have been limited by uncertainties introduced by infarct expansion between the acute phase of stroke and follow-up imaging, or DWI lesion reversibility. In this study a model is proposed for determining infarction CBF thresholds at 3 h ischemia time by comparing contemporaneously acquired CTP-derived CBF maps to 18F-FFMZ-PET imaging, with the objective of deriving a CBF threshold for infarction after 3 hours of ischemia. Endothelin-1 (ET-1) was injected into the brain of Duroc-Cross pigs (n = 11) through a burr hole in the skull. CTP images were acquired 10 and 30 minutes post ET-1 injection and then every 30 minutes for 150 minutes. 370 MBq of 18F-FFMZ was injected ~120 minutes post ET-1 injection and PET images were acquired for 25 minutes starting ~155–180 minutes post ET-1 injection. CBF maps from each CTP acquisition were co-registered and converted into a median CBF map. The median CBF map was co-registered to blood volume maps for vessel exclusion, an average CT image for grey/white matter segmentation, and 18F-FFMZ-PET images for infarct delineation. Logistic regression and ROC analysis were performed on infarcted and non-infarcted pixel CBF values for each animal that developed an infarct. Six of the eleven animals developed infarction. The mean CBF value corresponding to the optimal operating point of the ROC curves for the 6 animals was 12.6 ± 2.8 mL·min⁻¹·100 g⁻¹ for infarction after 3 hours of ischemia. The porcine ET-1 model of cerebral ischemia is easier to implement than other large animal models of stroke, and performs similarly as long as CBF is monitored using CTP to prevent reperfusion. PMID:27347877
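A minimal sketch of the per-animal ROC analysis described above, using scikit-learn and simulated pixel values (the study's actual CBF distributions and infarct masks are not reproduced; Youden's J is used here as a simple stand-in for the study's operating-point criterion): label pixels as infarcted or not, compute the ROC curve over candidate CBF cutoffs, and take the cutoff at the chosen operating point.

    import numpy as np
    from sklearn.metrics import roc_curve

    rng = np.random.default_rng(5)

    # Simulated pixel CBF values (mL/min/100 g): infarcted core vs. surviving tissue
    cbf_infarct = rng.normal(9.0, 3.0, 2000)
    cbf_viable = rng.normal(35.0, 10.0, 8000)

    labels = np.concatenate([np.ones_like(cbf_infarct), np.zeros_like(cbf_viable)])
    cbf = np.concatenate([cbf_infarct, cbf_viable])

    # Low CBF predicts infarction, so the classification score is -CBF
    fpr, tpr, thresholds = roc_curve(labels, -cbf)
    best = np.argmax(tpr - fpr)                     # Youden's J operating point
    print("CBF threshold:", round(-thresholds[best], 1), "mL/min/100 g")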
NASA Astrophysics Data System (ADS)
Hanhart, C.; Kaiser, N.
2002-11-01
Based on a counting scheme that explicitly takes into account the large momentum √(Mmπ) characteristic of pion production in nucleon-nucleon collisions, we calculate all diagrams for the reaction NN → NNπ at threshold up to next-to-leading order. At this order there are no free parameters, and the size of the next-to-leading-order contributions is in line with the expectation from power counting. The sum of loop corrections at that order vanishes for the process pp → ppπ⁰ at threshold. The total contribution at next-to-leading order from loop diagrams that include the delta degree of freedom vanishes at threshold in both reaction channels, pp → ppπ⁰ and pp → pnπ⁺.
Zdraljevic, Stefan; Wagner, Drew; Cheng, Kevin; Ruohonen, Laura; Jäntti, Jussi; Penttilä, Merja; Resnekov, Orna
2013-01-01
Organic acids derived from engineered microbes can replace fossil-derived chemicals in many applications. Fungal hosts are preferred for organic acid production because they tolerate lignocellulosic hydrolysates and low pH, allowing economic production and recovery of the free acid. However, cell death caused by cytosolic acidification constrains productivity. Cytosolic acidification affects cells asynchronously, suggesting that there is an underlying cell-to-cell heterogeneity in acid productivity and/or in resistance to toxicity. We used fluorescence microscopy to investigate the relationship between enzyme concentration, cytosolic pH, and viability at the single-cell level in Saccharomyces cerevisiae engineered to synthesize xylonic acid. We found that cultures producing xylonic acid accumulate cells with cytosolic pH below 5 (referred to here as “acidified”). Using live-cell time courses, we found that the probability of acidification was related to the initial levels of xylose dehydrogenase and sharply increased from 0.2 to 0.8 with just a 60% increase in enzyme abundance (Hill coefficient, >6). This “switch-like” relationship likely results from an enzyme level threshold above which the produced acid overwhelms the cell's pH buffering capacity. Consistent with this hypothesis, we showed that expression of xylose dehydrogenase from a chromosomal locus yields ∼20 times fewer acidified cells and ∼2-fold more xylonic acid relative to expression of the enzyme from a plasmid with variable copy number. These results suggest that strategies that further reduce cell-to-cell heterogeneity in enzyme levels could result in additional gains in xylonic acid productivity. Our results demonstrate a generalizable approach that takes advantage of the cell-to-cell variation of a clonal population to uncover causal relationships in the toxicity of engineered pathways. PMID:24038690
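As a concrete reading of the switch-like relationship quoted above, a Hill function with a large coefficient reproduces the kind of jump described (the parameter values below are chosen only to match the quoted rise from about 0.2 to 0.8 over a ~60% increase in enzyme level; they are not fitted to the study's data).

    import numpy as np

    def p_acidified(enzyme_level, k_half=1.0, hill_n=6.0):
        """Probability of cytosolic acidification as a Hill function of enzyme level."""
        x = np.asarray(enzyme_level, dtype=float)
        return x**hill_n / (k_half**hill_n + x**hill_n)

    # With n = 6, the probability rises from ~0.2 to ~0.8 over roughly a 60% increase
    for level in (0.8, 1.26):
        print(level, round(float(p_acidified(level)), 2))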
40 CFR 98.311 - Reporting threshold.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 21 2011-07-01 2011-07-01 false Reporting threshold. 98.311 Section 98.311 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Titanium Dioxide Production § 98.311 Reporting threshold. You must report GHG emissions under this subpart if your...
40 CFR 98.61 - Reporting threshold.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 21 2011-07-01 2011-07-01 false Reporting threshold. 98.61 Section 98.61 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Aluminum Production § 98.61 Reporting threshold. You must report GHG emissions under this subpart if your facility...
40 CFR 98.81 - Reporting threshold.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 22 2013-07-01 2013-07-01 false Reporting threshold. 98.81 Section 98.81 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Cement Production § 98.81 Reporting threshold. You must report GHG emissions under this subpart if your facility...
40 CFR 98.151 - Reporting threshold.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 21 2014-07-01 2014-07-01 false Reporting threshold. 98.151 Section 98.151 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING HCFC-22 Production and HFC-23 Destruction § 98.151 Reporting threshold. You must report GHG emissions under this...
40 CFR 98.381 - Reporting threshold.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 21 2011-07-01 2011-07-01 false Reporting threshold. 98.381 Section 98.381 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Suppliers of Coal-based Liquid Fuels § 98.381 Reporting threshold. Any supplier of coal-to-liquid products who...
40 CFR 98.201 - Reporting threshold.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 21 2011-07-01 2011-07-01 false Reporting threshold. 98.201 Section 98.201 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Magnesium Production § 98.201 Reporting threshold. You must report GHG emissions under this subpart if your...
40 CFR 98.61 - Reporting threshold.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 22 2013-07-01 2013-07-01 false Reporting threshold. 98.61 Section 98.61 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Aluminum Production § 98.61 Reporting threshold. You must report GHG emissions under this subpart if your facility...
40 CFR 98.201 - Reporting threshold.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 21 2014-07-01 2014-07-01 false Reporting threshold. 98.201 Section 98.201 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Magnesium Production § 98.201 Reporting threshold. You must report GHG emissions under this subpart if your...
40 CFR 98.121 - Reporting threshold.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 21 2014-07-01 2014-07-01 false Reporting threshold. 98.121 Section 98.121 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Fluorinated Gas Production § 98.121 Reporting threshold. You must report GHG emissions under this subpart if your...
40 CFR 98.61 - Reporting threshold.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 21 2014-07-01 2014-07-01 false Reporting threshold. 98.61 Section 98.61 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Aluminum Production § 98.61 Reporting threshold. You must report GHG emissions under this subpart if your facility...
40 CFR 98.281 - Reporting threshold.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 21 2014-07-01 2014-07-01 false Reporting threshold. 98.281 Section 98.281 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Silicon Carbide Production § 98.281 Reporting threshold. You must report GHG emissions under this subpart if your...
16 CFR 1061.4 - Threshold requirements for applications for exemption.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 16 Commercial Practices 2 2011-01-01 2011-01-01 false Threshold requirements for applications for exemption. 1061.4 Section 1061.4 Commercial Practices CONSUMER PRODUCT SAFETY COMMISSION GENERAL APPLICATIONS FOR EXEMPTION FROM PREEMPTION § 1061.4 Threshold requirements for applications for exemption. (a) The Commission will consider an...
40 CFR 98.331 - Reporting threshold.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 22 2013-07-01 2013-07-01 false Reporting threshold. 98.331 Section 98.331 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Zinc Production § 98.331 Reporting threshold. You must report GHG emissions under this subpart if your facility...
40 CFR 98.161 - Reporting threshold.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 22 2013-07-01 2013-07-01 false Reporting threshold. 98.161 Section 98.161 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Hydrogen Production § 98.161 Reporting threshold. You must report GHG emissions under this subpart if your facilit...
40 CFR 98.331 - Reporting threshold.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 21 2014-07-01 2014-07-01 false Reporting threshold. 98.331 Section 98.331 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Zinc Production § 98.331 Reporting threshold. You must report GHG emissions under this subpart if your facility...
40 CFR 98.171 - Reporting threshold.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 22 2013-07-01 2013-07-01 false Reporting threshold. 98.171 Section 98.171 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Iron and Steel Production § 98.171 Reporting threshold. You must report GHG emissions under this subpart if your...
40 CFR 98.151 - Reporting threshold.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 22 2013-07-01 2013-07-01 false Reporting threshold. 98.151 Section 98.151 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING HCFC-22 Production and HFC-23 Destruction § 98.151 Reporting threshold. You must report GHG emissions under this...
40 CFR 98.221 - Reporting threshold.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 21 2014-07-01 2014-07-01 false Reporting threshold. 98.221 Section 98.221 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Nitric Acid Production § 98.221 Reporting threshold. You must report GHG emissions under this subpart if your...
40 CFR 98.81 - Reporting threshold.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 21 2011-07-01 2011-07-01 false Reporting threshold. 98.81 Section 98.81 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Cement Production § 98.81 Reporting threshold. You must report GHG emissions under this subpart if your facility...
40 CFR 98.311 - Reporting threshold.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 22 2013-07-01 2013-07-01 false Reporting threshold. 98.311 Section 98.311 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Titanium Dioxide Production § 98.311 Reporting threshold. You must report GHG emissions under this subpart if your...
40 CFR 98.331 - Reporting threshold.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 21 2011-07-01 2011-07-01 false Reporting threshold. 98.331 Section 98.331 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Zinc Production § 98.331 Reporting threshold. You must report GHG emissions under this subpart if your facility...
40 CFR 98.141 - Reporting threshold.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 22 2013-07-01 2013-07-01 false Reporting threshold. 98.141 Section 98.141 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Glass Production § 98.141 Reporting threshold. You must report GHG emissions under this subpart if your facility...
40 CFR 98.181 - Reporting threshold.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 21 2011-07-01 2011-07-01 false Reporting threshold. 98.181 Section 98.181 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Lead Production § 98.181 Reporting threshold. You must report GHG emissions under this subpart if your facility...
40 CFR 98.141 - Reporting threshold.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 21 2014-07-01 2014-07-01 false Reporting threshold. 98.141 Section 98.141 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Glass Production § 98.141 Reporting threshold. You must report GHG emissions under this subpart if your facility...
40 CFR 98.51 - Reporting threshold.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 21 2014-07-01 2014-07-01 false Reporting threshold. 98.51 Section 98.51 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Adipic Acid Production § 98.51 Reporting threshold. You must report GHG emissions under this subpart if your facilit...
40 CFR 98.241 - Reporting threshold.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 21 2014-07-01 2014-07-01 false Reporting threshold. 98.241 Section 98.241 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Petrochemical Production § 98.241 Reporting threshold. You must report GHG emissions under this subpart if your...
16 CFR 1061.4 - Threshold requirements for applications for exemption.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 16 Commercial Practices 2 2014-01-01 2014-01-01 false Threshold requirements for applications for exemption. 1061.4 Section 1061.4 Commercial Practices CONSUMER PRODUCT SAFETY COMMISSION GENERAL APPLICATIONS FOR EXEMPTION FROM PREEMPTION § 1061.4 Threshold requirements for applications for exemption. (a) The Commission will consider an...
40 CFR 98.281 - Reporting threshold.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 22 2013-07-01 2013-07-01 false Reporting threshold. 98.281 Section 98.281 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Silicon Carbide Production § 98.281 Reporting threshold. You must report GHG emissions under this subpart if your...
40 CFR 98.111 - Reporting threshold.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 22 2013-07-01 2013-07-01 false Reporting threshold. 98.111 Section 98.111 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Ferroalloy Production § 98.111 Reporting threshold. You must report GHG emissions under this subpart if your...
40 CFR 98.141 - Reporting threshold.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 21 2011-07-01 2011-07-01 false Reporting threshold. 98.141 Section 98.141 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Glass Production § 98.141 Reporting threshold. You must report GHG emissions under this subpart if your facility...
40 CFR 98.311 - Reporting threshold.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 21 2014-07-01 2014-07-01 false Reporting threshold. 98.311 Section 98.311 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Titanium Dioxide Production § 98.311 Reporting threshold. You must report GHG emissions under this subpart if your...
40 CFR 98.161 - Reporting threshold.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 21 2011-07-01 2011-07-01 false Reporting threshold. 98.161 Section 98.161 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Hydrogen Production § 98.161 Reporting threshold. You must report GHG emissions under this subpart if your facilit...
40 CFR 98.261 - Reporting threshold.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 22 2013-07-01 2013-07-01 false Reporting threshold. 98.261 Section 98.261 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Phosphoric Acid Production § 98.261 Reporting threshold. You must report GHG emissions under this subpart if your...
40 CFR 98.241 - Reporting threshold.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 21 2011-07-01 2011-07-01 false Reporting threshold. 98.241 Section 98.241 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Petrochemical Production § 98.241 Reporting threshold. You must report GHG emissions under this subpart if your...
40 CFR 98.201 - Reporting threshold.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 22 2013-07-01 2013-07-01 false Reporting threshold. 98.201 Section 98.201 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Magnesium Production § 98.201 Reporting threshold. You must report GHG emissions under this subpart if your...
40 CFR 98.181 - Reporting threshold.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 21 2014-07-01 2014-07-01 false Reporting threshold. 98.181 Section 98.181 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Lead Production § 98.181 Reporting threshold. You must report GHG emissions under this subpart if your facility...
40 CFR 98.111 - Reporting threshold.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 21 2011-07-01 2011-07-01 false Reporting threshold. 98.111 Section 98.111 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Ferroalloy Production § 98.111 Reporting threshold. You must report GHG emissions under this subpart if your...
40 CFR 98.381 - Reporting threshold.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 21 2014-07-01 2014-07-01 false Reporting threshold. 98.381 Section 98.381 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Suppliers of Coal-based Liquid Fuels § 98.381 Reporting threshold. Any supplier of coal-to-liquid products who...
40 CFR 98.111 - Reporting threshold.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 21 2014-07-01 2014-07-01 false Reporting threshold. 98.111 Section 98.111 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Ferroalloy Production § 98.111 Reporting threshold. You must report GHG emissions under this subpart if your...
40 CFR 98.221 - Reporting threshold.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 21 2011-07-01 2011-07-01 false Reporting threshold. 98.221 Section 98.221 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Nitric Acid Production § 98.221 Reporting threshold. You must report GHG emissions under this subpart if your...
40 CFR 98.161 - Reporting threshold.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 21 2014-07-01 2014-07-01 false Reporting threshold. 98.161 Section 98.161 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Hydrogen Production § 98.161 Reporting threshold. You must report GHG emissions under this subpart if your facilit...
40 CFR 98.261 - Reporting threshold.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 21 2011-07-01 2011-07-01 false Reporting threshold. 98.261 Section 98.261 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Phosphoric Acid Production § 98.261 Reporting threshold. You must report GHG emissions under this subpart if your...
40 CFR 98.121 - Reporting threshold.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 22 2013-07-01 2013-07-01 false Reporting threshold. 98.121 Section 98.121 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Fluorinated Gas Production § 98.121 Reporting threshold. You must report GHG emissions under this subpart if your...
40 CFR 98.281 - Reporting threshold.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 21 2011-07-01 2011-07-01 false Reporting threshold. 98.281 Section 98.281 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Silicon Carbide Production § 98.281 Reporting threshold. You must report GHG emissions under this subpart if your...
40 CFR 98.221 - Reporting threshold.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 22 2013-07-01 2013-07-01 false Reporting threshold. 98.221 Section 98.221 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Nitric Acid Production § 98.221 Reporting threshold. You must report GHG emissions under this subpart if your...
40 CFR 98.51 - Reporting threshold.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 22 2013-07-01 2013-07-01 false Reporting threshold. 98.51 Section 98.51 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Adipic Acid Production § 98.51 Reporting threshold. You must report GHG emissions under this subpart if your facilit...
16 CFR 1061.4 - Threshold requirements for applications for exemption.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 16 Commercial Practices 2 2012-01-01 2012-01-01 false Threshold requirements for applications for exemption. 1061.4 Section 1061.4 Commercial Practices CONSUMER PRODUCT SAFETY COMMISSION GENERAL APPLICATIONS FOR EXEMPTION FROM PREEMPTION § 1061.4 Threshold requirements for applications for exemption. (a) The Commission will consider an...
40 CFR 98.171 - Reporting threshold.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 21 2011-07-01 2011-07-01 false Reporting threshold. 98.171 Section 98.171 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Iron and Steel Production § 98.171 Reporting threshold. You must report GHG emissions under this subpart if your...
40 CFR 98.51 - Reporting threshold.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 21 2011-07-01 2011-07-01 false Reporting threshold. 98.51 Section 98.51 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Adipic Acid Production § 98.51 Reporting threshold. You must report GHG emissions under this subpart if your facilit...
40 CFR 98.81 - Reporting threshold.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 21 2014-07-01 2014-07-01 false Reporting threshold. 98.81 Section 98.81 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Cement Production § 98.81 Reporting threshold. You must report GHG emissions under this subpart if your facility...
40 CFR 98.171 - Reporting threshold.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 21 2014-07-01 2014-07-01 false Reporting threshold. 98.171 Section 98.171 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Iron and Steel Production § 98.171 Reporting threshold. You must report GHG emissions under this subpart if your...
40 CFR 98.261 - Reporting threshold.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 21 2014-07-01 2014-07-01 false Reporting threshold. 98.261 Section 98.261 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Phosphoric Acid Production § 98.261 Reporting threshold. You must report GHG emissions under this subpart if your...
40 CFR 98.151 - Reporting threshold.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 21 2011-07-01 2011-07-01 false Reporting threshold. 98.151 Section 98.151 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING HCFC-22 Production and HFC-23 Destruction § 98.151 Reporting threshold. You must report GHG emissions under this...
40 CFR 98.381 - Reporting threshold.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 22 2013-07-01 2013-07-01 false Reporting threshold. 98.381 Section 98.381 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Suppliers of Coal-based Liquid Fuels § 98.381 Reporting threshold. Any supplier of coal-to-liquid products who...
40 CFR 98.241 - Reporting threshold.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 22 2013-07-01 2013-07-01 false Reporting threshold. 98.241 Section 98.241 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Petrochemical Production § 98.241 Reporting threshold. You must report GHG emissions under this subpart if your...
40 CFR 98.181 - Reporting threshold.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 22 2013-07-01 2013-07-01 false Reporting threshold. 98.181 Section 98.181 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Lead Production § 98.181 Reporting threshold. You must report GHG emissions under this subpart if your facility...
40 CFR 98.121 - Reporting threshold.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 21 2011-07-01 2011-07-01 false Reporting threshold. 98.121 Section 98.121 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Fluorinated Gas Production § 98.121 Reporting threshold. You must report GHG emissions under this subpart if your...
NASA Astrophysics Data System (ADS)
Zhang, Leiming; Cao, Peiyu; Li, Shenggong; Yu, Guirui; Zhang, Junhui; Li, Yingnian
2016-04-01
To accurately assess the change of phenology and its relationship with ecosystem gross primary productivity (GPP) is one of the key issues in the context of global change studies. In this study, an alpine shrubland meadow in Haibei (HBS) on the Qinghai-Tibetan Plateau and a broad-leaved Korean pine forest in Changbai Mountain (CBM) in Northeastern China were selected. Based on the long-term GPP from eddy flux measurements and the Normalized Difference Vegetation Index (NDVI) from remotely sensed vegetation index data, phenological indicators including the start of growing season (SOS), the end of growing season (EOS), and the growing season length (GSL) since 2003 were derived via multiple methods, and the influences of phenology variation on GPP were then explored. Compared with ground phenology observations of dominant plant species, both GPP- and NDVI-derived SOS and EOS exhibited a similar interannual trend. GPP-derived SOS was quite close to NDVI-derived SOS, but GPP-derived EOS differed significantly from NDVI-derived EOS, thus leading to a significant difference between GPP- and NDVI-derived GSL. Relative to SOS, EOS presented larger differences between the extraction methods, indicating large uncertainties in accurately defining EOS. In general, among the methods used, the threshold methods produced a more satisfactory assessment of phenology change. This study highlights that harmonizing flux measurements, remote sensing and ground monitoring remains a big challenge that needs further consideration in phenology studies, especially for the accurate extraction of EOS. Key words: phenological variation, carbon flux, vegetation index, vegetation growth, interannual variability
Updating Landsat-derived land-cover maps using change detection and masking techniques
NASA Technical Reports Server (NTRS)
Likens, W.; Maw, K.
1982-01-01
The California Integrated Remote Sensing System's San Bernardino County Project was devised to study the utilization of a data base at a number of jurisdictional levels. The present paper discusses the implementation of change-detection and masking techniques in the updating of Landsat-derived land-cover maps. A baseline landcover classification was first created from a 1976 image, then the adjusted 1976 image was compared with a 1979 scene by the techniques of (1) multidate image classification, (2) difference image-distribution tails thresholding, (3) difference image classification, and (4) multi-dimensional chi-square analysis of a difference image. The union of the results of methods 1, 3 and 4 was used to create a mask of possible change areas between 1976 and 1979, which served to limit analysis of the update image and reduce comparison errors in unchanged areas. The techniques of spatial smoothing of change-detection products, and of combining results of difference change-detection algorithms are also shown to improve Landsat change-detection accuracies.
Measurement of π⁻p → ηn from threshold to p_π⁻ = 747 MeV/c
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prakhov, S.; Nefkens, B.M.K.; Clajus, M.
2005-07-01
The differential cross section for η production in the reaction π⁻p → ηn has been measured over the full angular range at seven incident π⁻ beam momenta from threshold to p_π⁻ = 747 MeV/c using the Crystal Ball multiphoton spectrometer. The angular distributions are S-wave dominated. At 10 MeV/c above threshold, a small D-wave contribution appears that interferes with the main S wave. The total η production cross section σ^tot is obtained by integration of dσ/dΩ. Starting at threshold, σ^tot rises rapidly, as expected for S-wave-dominated production. The features of the π⁻p → ηn cross section are strikingly similar to those of the SU(3) flavor-related process K⁻p → ηΛ. Comparison of the π⁻p → ηn reaction is made with η photoproduction.
Threshold for extinction and survival in stochastic tumor immune system
NASA Astrophysics Data System (ADS)
Li, Dongxi; Cheng, Fangjuan
2017-10-01
This paper mainly investigates the stochastic character of tumor growth and extinction in the presence of the immune response of a host organism. Firstly, the mathematical model describing the interaction and competition between the tumor cells and the immune system is established based on Michaelis-Menten enzyme kinetics. Then, the threshold conditions for extinction, weak persistence and stochastic persistence of tumor cells are derived by rigorous theoretical proofs. Finally, stochastic simulations are performed to substantiate and illustrate the conclusions we have derived. The modeling results will be beneficial for understanding the concept of immunoediting and for developing cancer immunotherapy. Besides, our simple theoretical model can help to obtain new insight into the complexity of tumor growth.
NASA Astrophysics Data System (ADS)
Jhang, Hogun
2018-05-01
We show that the threshold condition for the toroidal ion temperature gradient (ITG) mode with an inverted density profile can be derived from a simple physics argument. The key in this picture is that the density inversion reduces the ion compression due to the ITG mode and the electron drift motion mitigates the poloidal potential build-up. This condition reproduces the same result that has been reported from a linear gyrokinetic calculation [T. S. Hahm and W. M. Tang, Phys. Fluids B 1, 1185 (1989)]. The destabilizing role of trapped electrons in toroidal geometry is easily captured in this picture.
Analysis of Critical Mass in Threshold Model of Diffusion
NASA Astrophysics Data System (ADS)
Kim, Jeehong; Hur, Wonchang; Kang, Suk-Ho
2012-04-01
Why does diffusion sometimes show cascade phenomena but at other times is impeded? In addressing this question, we considered a threshold model of diffusion, focusing on the formation of a critical mass, which enables diffusion to be self-sustaining. Performing an agent-based simulation, we found that the diffusion model produces only two outcomes: Almost perfect adoption or relatively few adoptions. In order to explain the difference, we considered the various properties of network structures and found that the manner in which thresholds are arrayed over a network is the most critical factor determining the size of a cascade. On the basis of the results, we derived a threshold arrangement method effective for generation of a critical mass and calculated the size required for perfect adoption.
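The cascade behaviour described above can be reproduced with a compact agent-based sketch: each node adopts once the fraction of its adopting neighbours reaches its individual threshold. This is a hedged illustration only — the network type, threshold distribution and seed size below are placeholder choices, not the paper's settings.

```python
import random
import networkx as nx

def run_cascade(graph, thresholds, seeds):
    """Iterate adoption until no node changes state: a node adopts once the
    fraction of adopting neighbours reaches its individual threshold."""
    adopted = set(seeds)
    changed = True
    while changed:
        changed = False
        for node in graph.nodes():
            if node in adopted:
                continue
            nbrs = list(graph.neighbors(node))
            if not nbrs:
                continue
            frac = sum(n in adopted for n in nbrs) / len(nbrs)
            if frac >= thresholds[node]:
                adopted.add(node)
                changed = True
    return adopted

# Placeholder experiment: vary how thresholds are arrayed over the network
# (e.g. shuffled vs. assigned by degree) to probe when a critical mass forms.
random.seed(1)
g = nx.watts_strogatz_graph(1000, 6, 0.1, seed=1)
thresholds = {v: random.uniform(0.1, 0.5) for v in g.nodes()}
seeds = random.sample(list(g.nodes()), 20)
print("final adopters:", len(run_cascade(g, thresholds, seeds)))
```

Re-running the same experiment with the thresholds assigned differently over the nodes is enough to flip the outcome between the two regimes (near-complete adoption or only a few adoptions) that the paper reports.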
DOE Office of Scientific and Technical Information (OSTI.GOV)
Slipchenko, S. O., E-mail: serghpl@mail.ioffe.ru; Podoskin, A. A.; Pikhtin, N. A.
Threshold conditions for generation of a closed mode in the crystal of the Fabry-Perot semiconductor laser with a quantum-well active region are analyzed. It is found that the main parameters affecting the closed mode lasing threshold for the chosen laser heterostructure are as follows: the optical loss in the passive region, the optical confinement factor of the closed mode in the gain region, and material gain detuning. The relations defining the threshold conditions for closed mode lasing in terms of optical and geometrical characteristics of the semiconductor laser are derived. It is shown that the threshold conditions can be satisfied at a lower material gain in comparison with the Fabry-Perot cavity mode due to zero output loss for the closed mode.
Generalised form of a power law threshold function for rainfall-induced landslides
NASA Astrophysics Data System (ADS)
Cepeda, Jose; Díaz, Manuel Roberto; Nadim, Farrokh; Høeg, Kaare; Elverhøi, Anders
2010-05-01
The following new function is proposed for estimating thresholds for rainfall-triggered landslides: I = α₁·A_n^{α₂}·D^{β}, where I is rainfall intensity in mm/h, D is rainfall duration in h, A_n is the n-hours or n-days antecedent precipitation, and α₁, α₂, β and n are threshold parameters. A threshold model that combines two functions with different durations of antecedent precipitation is also introduced. A storm observation exceeds the threshold when the storm parameters are located at or above the two functions simultaneously. A novel optimisation procedure for estimating the threshold parameters is proposed using Receiver Operating Characteristics (ROC) analysis. The new threshold function and optimisation procedure are applied for estimating thresholds for triggering of debris flows in the Western Metropolitan Area of San Salvador (AMSS), El Salvador, where up to 500 casualties were produced by a single event. The resulting thresholds are I = 2322·A_{7d}^{-1}·D^{-0.43} and I = 28534·A_{150d}^{-1}·D^{-0.43} for debris flows having volumes greater than 3000 m³. Thresholds are also derived for debris flows greater than 200 000 m³ and for hyperconcentrated flows initiating in burned areas caused by forest fires. The new thresholds show an improved performance compared to the traditional formulations, indicated by a reduction in false alarms from 51 to 5 for the 3000 m³ thresholds and from 6 to 0 false alarms for the 200 000 m³ thresholds.
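As a concrete reading of the threshold function, the sketch below evaluates I = α₁·A_n^{α₂}·D^{β} with the debris-flow (>3000 m³) parameters quoted above and applies the combined two-function rule, under which both the 7-day and 150-day antecedent-precipitation thresholds must be exceeded simultaneously. The storm values in the example are illustrative only.

```python
def intensity_threshold(duration_h, antecedent_mm, a1, a2, b):
    """Critical mean rainfall intensity (mm/h) for a storm of given duration
    and n-hour/n-day antecedent precipitation: I = a1 * An**a2 * D**b."""
    return a1 * antecedent_mm**a2 * duration_h**b

def exceeds_combined_threshold(intensity, duration_h, a7d_mm, a150d_mm):
    """Combined model: the observation must lie at or above BOTH functions
    (7-day and 150-day antecedent precipitation) simultaneously."""
    t7 = intensity_threshold(duration_h, a7d_mm, 2322.0, -1.0, -0.43)
    t150 = intensity_threshold(duration_h, a150d_mm, 28534.0, -1.0, -0.43)
    return intensity >= t7 and intensity >= t150

# Example storm: 20 mm/h over 6 h, with 80 mm (7-day) and 600 mm (150-day)
# antecedent precipitation (illustrative values only).
print(exceeds_combined_threshold(20.0, 6.0, 80.0, 600.0))
```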
Higgs boson gluon–fusion production at threshold in N 3LO QCD
Anastasiou, Charalampos; Duhr, Claude; Dulat, Falko; ...
2014-09-02
We present the cross-section for the threshold production of the Higgs boson at hadron-colliders at next-to-next-to-next-to-leading order (N 3LO) in perturbative QCD. Furthermore, we present an analytic expression for the partonic cross-section at threshold and the impact of these corrections on the numerical estimates for the hadronic cross-section at the LHC. With this result we achieve a major milestone towards a complete evaluation of the cross-section at N 3LO which will reduce the theoretical uncertainty in the determination of the strengths of the Higgs boson interactions.
Cloud vertical profiles derived from CALIPSO and CloudSat and a comparison with MODIS derived clouds
NASA Astrophysics Data System (ADS)
Kato, S.; Sun-Mack, S.; Miller, W. F.; Rose, F. G.; Minnis, P.; Wielicki, B. A.; Winker, D. M.; Stephens, G. L.; Charlock, T. P.; Collins, W. D.; Loeb, N. G.; Stackhouse, P. W.; Xu, K.
2008-05-01
CALIPSO and CloudSat from the A-Train provide detailed information on the vertical distribution of clouds and aerosols. The vertical distribution of cloud occurrence is derived from one month of CALIPSO and CloudSat data as part of the effort of merging CALIPSO, CloudSat and MODIS with CERES data. This newly derived cloud profile is compared with the distribution of cloud top height derived from MODIS on Aqua using the cloud algorithms of the CERES project. The cloud base from MODIS is also estimated using an empirical formula based on the cloud top height and optical thickness, which is used in CERES processing. While MODIS detects mid- and low-level clouds over the Arctic in April fairly well when they are the topmost cloud layer, it underestimates high-level clouds. In addition, because the CERES-MODIS cloud algorithm is not able to detect multi-layer clouds and the empirical formula significantly underestimates the depth of high clouds, the occurrence of mid- and low-level clouds is underestimated. This comparison does not consider differences in sensitivity to thin clouds, but we will impose an optical thickness threshold on the CALIPSO-derived clouds for a further comparison. The effect of such differences in the cloud profile on flux computations will also be discussed. In addition, the effect of cloud cover on the top-of-atmosphere flux over the Arctic using CERES SSF and FLASHFLUX products will be discussed.
Spreading dynamics of a SIQRS epidemic model on scale-free networks
NASA Astrophysics Data System (ADS)
Li, Tao; Wang, Yuanmei; Guan, Zhi-Hong
2014-03-01
In order to investigate the influence of the heterogeneity of the underlying networks and of the quarantine strategy on epidemic spreading, a SIQRS epidemic model on scale-free networks is presented. Using mean-field theory, the spreading dynamics of the virus are analyzed. The spreading critical threshold and the equilibria are derived. Theoretical results indicate that the critical threshold value depends significantly on the topology of the underlying networks and on the quarantine rate. The existence of the equilibria is determined by the threshold value. The stability of the disease-free equilibrium and the permanence of the disease are proved. Numerical simulations confirm the analytical results.
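For orientation, the sketch below computes the classical heterogeneous mean-field threshold of the plain SIS model, λ_c = ⟨k⟩/⟨k²⟩, from a degree sequence. The SIQRS threshold derived in the paper generalizes this kind of expression to include the quarantine rate, so the code is only a baseline illustration of why scale-free topology depresses the threshold; the degree sequence used is synthetic.

```python
import numpy as np

def sis_mean_field_threshold(degrees):
    """Baseline heterogeneous mean-field SIS threshold <k>/<k^2>
    computed from an observed or generated degree sequence."""
    k = np.asarray(degrees, dtype=float)
    return k.mean() / (k**2).mean()

# Power-law-like (Zipf) degree sequence: strong degree heterogeneity pushes
# the threshold towards zero, consistent with the topology dependence above.
rng = np.random.default_rng(0)
degrees = np.clip(rng.zipf(2.5, size=10_000), 2, 1000)
print("lambda_c ~", sis_mean_field_threshold(degrees))
```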
Use of LiDAR to define habitat thresholds for forest bird conservation
Garabedian, James E.; Moorman, Christopher E.; Nils Peterson, M.; ...
2017-09-01
Quantifying species-habitat relationships provides guidance for establishment of recovery standards for endangered species, but research on forest bird habitat has been limited by availability of fine-grained forest structure data across broad extents. New tools for collection of data on forest bird response to fine-grained forest structure provide opportunities to evaluate habitat thresholds for forest birds. We used LiDAR-derived estimates of habitat attributes and resource selection to evaluate foraging habitat thresholds for recovery of the federally endangered red-cockaded woodpecker (Leuconotopicus borealis; RCW) on the Savannah River Site, South Carolina.
Toward a generalized theory of epidemic awareness in social networks
NASA Astrophysics Data System (ADS)
Wu, Qingchu; Zhu, Wenfang
We discuss the dynamics of a susceptible-infected-susceptible (SIS) model with local awareness in networks. Individual awareness of the infectious disease is characterized by a general function of the epidemic information in its neighborhood. We build a high-accuracy approximate equation governing the spreading dynamics and derive an approximate epidemic threshold above which the epidemic spreads over the whole network. Our results extend previous work and show that the epidemic threshold depends on the awareness function in terms of one infectious neighbor. Interestingly, when a power-law awareness function is chosen, the epidemic threshold can emerge in infinite networks.
Definition of temperature thresholds: the example of the French heat wave warning system.
Pascal, Mathilde; Wagner, Vérène; Le Tertre, Alain; Laaidi, Karine; Honoré, Cyrille; Bénichou, Françoise; Beaudeau, Pascal
2013-01-01
Heat-related deaths should be somewhat preventable. In France, some prevention measures are activated when minimum and maximum temperatures averaged over three days reach city-specific thresholds. The current thresholds were computed from a descriptive analysis of past heat waves and from local expert judgement. We tested whether a different method would confirm these thresholds. The study was set in the six cities of Paris, Lyon, Marseille, Nantes, Strasbourg and Limoges between 1973 and 2003. For each city, we estimated the excess mortality associated with different temperature thresholds, using a generalised additive model controlling for long-term trends, seasons and days of the week. These models were used to compute the mortality predicted by different percentiles of temperature. The thresholds were chosen as the percentiles associated with a significant excess mortality. In all cities, there was a good correlation between the current thresholds and the thresholds derived from the models, with 0°C to 3°C differences for averaged maximum temperatures. Both sets of thresholds were able to anticipate the main periods of excess mortality during the summers of 1973 to 2003. A simple method relying on descriptive analysis and expert judgement is therefore sufficient to define protective temperature thresholds and to prevent heat wave mortality. As temperatures increase with climate change and adaptation proceeds, more research is required to understand whether and when the thresholds should be modified.
An algorithm to detect fire activity using Meteosat: fine tuning and quality assessment
NASA Astrophysics Data System (ADS)
Amraoui, M.; DaCamara, C. C.; Ermida, S. L.
2012-04-01
Hot spot detection by means of sensors on board geostationary satellites allows studying wildfire activity at hourly and even sub-hourly intervals, an advantage that cannot be met by polar orbiters. Since 1997, the Satellite Application Facility for Land Surface Analysis has been running an operational procedure that detects active fires based on information from Meteosat-8/SEVIRI. This is the so-called Fire Detection and Monitoring (FD&M) product; the procedure takes advantage of the temporal resolution of SEVIRI (one image every 15 min) and relies on information from SEVIRI channels (namely 0.6, 0.8, 3.9, 10.8 and 12.0 μm) together with information on illumination angles. The method is based on heritage from contextual algorithms designed for polar, sun-synchronous instruments, namely NOAA/AVHRR and MODIS on TERRA/AQUA. A potential fire pixel is compared with the neighboring ones and the decision is made based on relative thresholds derived from the pixels in the neighborhood. Generally speaking, the observed fire incidence compares well against hot spots extracted from the global daily active fire product developed by the MODIS Fire Team. However, values of probability of detection (POD) tend to be quite low, a result that may be partly explained by the finer resolution of MODIS. The aim of the present study is to make a systematic assessment of the impacts on POD and False Alarm Ratio (FAR) of the several parameters that are set in the algorithm. Such parameters range from the threshold values of brightness temperature in the IR3.9 and 10.8 channels that are used to select potential fire pixels, up to the extent of the background grid and the thresholds used to statistically characterize the radiometric departures of a potential pixel from the respective background. The impact of different criteria to identify pixels contaminated by clouds, smoke and sun glint is also evaluated. Finally, the advantages that may be brought to the algorithm by adding contextual tests in the time domain are discussed. The study lays the groundwork for the development of improved quality flags that will be integrated in the FD&M product in the near future.
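The sketch below illustrates a generic contextual test of the kind described above: a candidate pixel, pre-selected by an absolute brightness-temperature test, is confirmed as a fire when its 3.9 μm brightness temperature and the 3.9-10.8 μm difference depart sufficiently from the background statistics. The window size, k factors and absolute threshold are placeholders, not the operational FD&M settings.

```python
import numpy as np

def contextual_fire_mask(bt39, bt108, valid, half_win=7, k39=3.0, kdiff=3.5,
                         abs_bt39=310.0):
    """Generic contextual fire test on 2-D brightness-temperature arrays (K).
    `valid` is a boolean mask of background pixels free of cloud/smoke/glint."""
    rows, cols = bt39.shape
    diff = bt39 - bt108
    fires = np.zeros_like(valid, dtype=bool)
    # Candidate pixels selected with an absolute brightness-temperature test.
    candidates = np.argwhere((bt39 > abs_bt39) & valid)
    for r, c in candidates:
        r0, r1 = max(0, r - half_win), min(rows, r + half_win + 1)
        c0, c1 = max(0, c - half_win), min(cols, c + half_win + 1)
        bg = valid[r0:r1, c0:c1].copy()
        bg[r - r0, c - c0] = False          # exclude the candidate itself
        if bg.sum() < 10:                   # not enough background pixels
            continue
        m39, s39 = bt39[r0:r1, c0:c1][bg].mean(), bt39[r0:r1, c0:c1][bg].std()
        md, sd = diff[r0:r1, c0:c1][bg].mean(), diff[r0:r1, c0:c1][bg].std()
        # Relative thresholds derived from the neighbourhood statistics.
        if bt39[r, c] > m39 + k39 * s39 and diff[r, c] > md + kdiff * sd:
            fires[r, c] = True
    return fires
```

Tuning half_win, k39 and kdiff against a reference fire product is exactly the kind of POD/FAR trade-off the study sets out to quantify.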
Atmospheric Science Data Center
2017-10-11
... new inland water class for RCCM calculation and changed threshold and surface classification datasets accordingly. Modified land second ... 06/21/2000 First version of RCCM. Pre-launch threshold values are used. New ancillary files: ...
NASA Technical Reports Server (NTRS)
Iversen, J. D.; White, B. R.; Pollack, J. B.; Greeley, R.
1976-01-01
Results are reported for wind-tunnel experiments performed to determine the threshold friction speed of particles with different densities. Experimentally determined threshold speeds are plotted as a function of particle diameter and in terms of threshold parameter vs particle friction Reynolds number. The curves are compared with those of previous experiments, and an A-B curve is plotted to show differences in threshold speed due to differences in size distributions and particle shapes. Effects of particle diameter are investigated, an expression for threshold speed is derived by considering the equilibrium forces acting on a single particle, and other approximately valid expressions are evaluated. It is shown that the assumption of universality of the A-B curve is in error at very low pressures for small particles and that only predictions which take account of both Reynolds number and effects of interparticle forces yield reasonable agreement with experimental data. Effects of nonerodible surface roughness are examined, and threshold speeds computed with allowance for this factor are compared with experimental values. Threshold friction speeds on Mars are then estimated for a surface pressure of 5 mbar, taking into account all the factors considered.
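The force-balance expression mentioned above reduces, in its simplest Bagnold-type form, to u*t = A·sqrt((ρp − ρ)·g·d/ρ). The sketch below uses a constant coefficient A purely for illustration, whereas the paper stresses that A must account for the friction Reynolds number and interparticle forces, so the Earth and Mars numbers printed here are only order-of-magnitude indications.

```python
import math

def threshold_friction_speed(d_m, rho_particle, rho_fluid, g=9.81, A=0.11):
    """Simplified threshold friction speed (m/s) from the single-particle
    force balance; A is taken constant here (an assumption for illustration)."""
    return A * math.sqrt((rho_particle - rho_fluid) * g * d_m / rho_fluid)

# 100-um quartz-like grains in Earth-like air and in a thin (~5 mbar) CO2
# atmosphere; densities are illustrative values, and Mars gravity is ignored.
print("Earth-like:", round(threshold_friction_speed(100e-6, 2650.0, 1.2), 3), "m/s")
print("Mars-like :", round(threshold_friction_speed(100e-6, 2650.0, 0.015), 3), "m/s")
```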
Image denoising in mixed Poisson-Gaussian noise.
Luisier, Florian; Blu, Thierry; Unser, Michael
2011-03-01
We propose a general methodology (PURE-LET) to design and optimize a wide class of transform-domain thresholding algorithms for denoising images corrupted by mixed Poisson-Gaussian noise. We express the denoising process as a linear expansion of thresholds (LET) that we optimize by relying on a purely data-adaptive unbiased estimate of the mean-squared error (MSE), derived in a non-Bayesian framework (PURE: Poisson-Gaussian unbiased risk estimate). We provide a practical approximation of this theoretical MSE estimate for the tractable optimization of arbitrary transform-domain thresholding. We then propose a pointwise estimator for undecimated filterbank transforms, which consists of subband-adaptive thresholding functions with signal-dependent thresholds that are globally optimized in the image domain. We finally demonstrate the potential of the proposed approach through extensive comparisons with state-of-the-art techniques that are specifically tailored to the estimation of Poisson intensities. We also present denoising results obtained on real images of low-count fluorescence microscopy.
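The LET idea can be illustrated with a toy sketch: the denoised coefficients are written as a weighted sum of a few elementary soft-thresholding functions. In PURE-LET the weights are chosen to minimize the Poisson-Gaussian unbiased risk estimate without access to the clean signal; here, purely for illustration, they are fitted by least squares against a known clean reference, and the thresholds and signal model are placeholders.

```python
import numpy as np

def let_basis(y, thresholds):
    """Stack of soft-thresholded versions of the noisy coefficients y."""
    return np.stack([np.sign(y) * np.maximum(np.abs(y) - t, 0.0)
                     for t in thresholds], axis=-1)

def fit_let_weights(y, clean, thresholds):
    """Oracle least-squares weights (stand-in for the PURE criterion)."""
    basis = let_basis(y, thresholds).reshape(-1, len(thresholds))
    w, *_ = np.linalg.lstsq(basis, clean.ravel(), rcond=None)
    return w

def apply_let(y, w, thresholds):
    return let_basis(y, thresholds) @ w

# Toy example: sparse coefficients corrupted by Gaussian noise.
rng = np.random.default_rng(0)
clean = rng.normal(0.0, 1.0, 4096) * (rng.random(4096) < 0.1)
noisy = clean + 0.1 * rng.normal(size=4096)
thr = (0.05, 0.2, 0.5)
w = fit_let_weights(noisy, clean, thr)
print("weights:", w, " MSE:", np.mean((apply_let(noisy, w, thr) - clean) ** 2))
```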
Li, Yangfan; Li, Yi; Wu, Wei
2016-01-01
The concept of thresholds shows important implications for environmental and resource management. Here we derived potential landscape thresholds which indicated abrupt changes in water quality or the dividing points between exceeding and failing to meet national surface water quality standards for a rapidly urbanizing city on the Eastern Coast in China. The analysis of landscape thresholds was based on regression models linking each of the seven water quality variables to each of the six landscape metrics for this coupled land-water system. We found substantial and accelerating urban sprawl at the suburban areas between 2000 and 2008, and detected significant nonlinear relations between water quality and landscape pattern. This research demonstrated that a simple modeling technique could provide insights on environmental thresholds to support more-informed decision making in land use, water environmental and resilience management. Copyright © 2015 Elsevier Ltd. All rights reserved.
The threshold laws for electron-atom and positron-atom impact ionization
NASA Technical Reports Server (NTRS)
Temkin, A.
1983-01-01
The Coulomb-dipole theory is employed to derive a threshold law for the lowest energy needed for the separation of three particles from one another. The study focuses on an electron impinging on a neutral atom, and the dipole is formed between an inner electron and the nucleus. The analytical dependence of the transition matrix element on energy is reduced to lowest order to obtain the threshold law, with the inner electron providing a shield for the nucleus. Experimental results using the LAMPF accelerator to produce a high energy beam of H- ions, which are then exposed to an optical laser beam to detach the negative H- ion, are discussed. The threshold level is found to be confined to the region defined by the upper bound of the inverse square of the Coulomb-dipole region. Difficulties in exact experimental confirmation of the threshold are considered.
Schumm, Phillip; Scoglio, Caterina; Zhang, Qian; Balcan, Duygu
2015-02-21
Through the characterization of a metapopulation cattle disease model on a directed network having source, transit, and sink nodes, we derive two global epidemic invasion thresholds. The first threshold defines the conditions necessary for an epidemic to successfully spread at the global scale. The second threshold defines the criteria that permit an epidemic to move out of the giant strongly connected component and to invade the populations of the sink nodes. As each sink node represents a final waypoint for cattle before slaughter, the existence of an epidemic among the sink nodes is a serious threat to food security. We find that the relationship between these two thresholds depends on the relative proportions of transit and sink nodes in the system and the distributions of the in-degrees of both node types. These analytic results are verified through numerical realizations of the metapopulation cattle model. Published by Elsevier Ltd.
On the dynamic readout characteristic of nonlinear super-resolution optical storage
NASA Astrophysics Data System (ADS)
Wei, Jingsong
2013-03-01
Researchers have been developing nonlinear super-resolution optical storage for the past twenty years. However, several concerns remain, including (1) the presence of a readout threshold power; (2) the increase of the threshold power as the mark size is reduced; and (3) the increase of the carrier-to-noise ratio (CNR) at the initial stage followed by a decrease as the readout laser power or laser irradiation time increases. The present work calculates and analyzes the super-resolution spot formed by the thin-film masks and the readout threshold power characteristic according to the derived formula, based on the nonlinear saturable absorption characteristic and the threshold of structural change. The obtained theoretical calculations and experimental data address the concerns regarding the dynamic readout threshold characteristic and the dependence of the CNR on laser power and irradiation time. A near-field optical spot scanning experiment further verifies the super-resolution spot formation produced through the nonlinear thin-film masks.
NASA Astrophysics Data System (ADS)
Vembris, Aivars; Zarins, Elmars; Kokars, Valdis
2017-10-01
Organic solid state lasers are thoroughly investigated due to their potential applications in communication, sensors, biomedicine, etc. A low amplified spontaneous emission (ASE) excitation threshold value is essential for further use of the material in devices. Intermolecular interaction limits the achievable molecule density in the matrix. This is the case for the well-known red-light-emitting laser dye 4-(dicyanomethylene)-2-methyl-6-(4-dimethylaminostyryl)-4H-pyran (DCM). The lowest ASE threshold value of the mentioned laser dye could be obtained within the concentration range between 2 and 4 wt%. At higher concentrations the threshold energy increases drastically. In this work the optical and ASE properties of three original DCM derivatives in poly(N-vinylcarbazole) (PVK) at various concentrations will be discussed. One of the derivatives is a modified DCM dye in which the methyl substituents in the electron donor part have been replaced with bulky trityloxyethyl groups (DWK-1). These sterically significant functional groups do not influence electron transitions in the dye but prevent aggregation of the molecules. The chemical structure of the second investigated compound is similar to that of DWK-1, with the methyl group replaced by a tert-butyl substituent (DWK-1TB). The third derivative (DWK-2) consists of two N,N-di(trityloxyethyl)amino electron donor groups. All results were compared with the DCM:PVK system. The photoluminescence quantum yield (PLQY) is up to ten times larger for DWK-1TB than for the DCM systems. Bulky trityloxyethyl groups prevent aggregation of the molecules, thus decreasing the interaction between dyes and the number of non-radiative decays. A red shift of the photoluminescence and the amplified spontaneous emission was observed at higher concentrations due to the solid-state solvation effect. The increase of the investigated dye density in the matrix with a smaller reduction in PLQY resulted in a low ASE threshold energy. The lowest threshold value obtained was around 21 μJ/cm² (2.1 kW/cm²) in DWK-1TB:PVK films.
Rapid Corner Detection Using FPGAs
NASA Technical Reports Server (NTRS)
Morfopoulos, Arin C.; Metz, Brandon C.
2010-01-01
In order to perform precision landings for space missions, a control system must be accurate to within ten meters. Feature detection applied against images taken during descent and correlated against the provided base image is computationally expensive and requires tens of seconds of processing time for just one image, while the goal is to process multiple images per second. To solve this problem, this algorithm takes that processing load from the central processing unit (CPU) and gives it to a reconfigurable field programmable gate array (FPGA), which is able to compute data in parallel at very high clock speeds. The workload of the processor then becomes simpler: an image is read from a camera, transferred into the FPGA, and the results are read back from the FPGA. The Harris Corner Detector uses the determinant and trace to find a corner score, with each step of the computation occurring on independent clock cycles. Essentially, the image is converted into x and y derivative maps. Once three lines of pixel information have been queued up, valid pixel derivatives are clocked into the product and averaging phase of the pipeline. The x and y derivatives are each squared, the product of the ix and iy derivatives is formed, and each value is stored in a WxN size buffer, where W represents the size of the integration window and N is the width of the image. In this particular case, a window size of 5 was chosen, and the image is 640 × 480. Over a WxN size window, a Gaussian weighting is applied (to bring out the stronger corners), and then each value in the entire window is summed and stored. The required components of the equation are in place, and it is just a matter of taking the determinant and trace. It should be noted that the trace is weighted by a constant k, a value that is found empirically to be within 0.04 to 0.15 (and in this implementation is 0.05). The constant k determines the number of corners available to be compared against a threshold sigma to mark a valid corner. After a fixed delay from when the first pixel is clocked in (to fill the pipeline), a score is produced on each successive clock. This score corresponds with an (x,y) location within the image. If the score is higher than the predetermined threshold sigma, then a flag is set high and the location is recorded.
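A software analogue of the pipeline described above is sketched below (Python/NumPy/SciPy rather than FPGA logic): derivative maps, Gaussian-weighted window sums of Ix², Iy² and Ix·Iy, and the score det(M) − k·trace(M)² with k = 0.05. The Gaussian width and detection threshold are placeholders; in practice the threshold sigma is image-dependent.

```python
import numpy as np
from scipy.ndimage import sobel, gaussian_filter

def harris_corners(image, k=0.05, sigma=1.0, score_threshold=1e6):
    """Harris corner score and the locations exceeding a detection threshold."""
    img = image.astype(float)
    ix = sobel(img, axis=1)                 # x derivative map
    iy = sobel(img, axis=0)                 # y derivative map
    # Gaussian-weighted window sums of the derivative products.
    sxx = gaussian_filter(ix * ix, sigma)
    syy = gaussian_filter(iy * iy, sigma)
    sxy = gaussian_filter(ix * iy, sigma)
    det = sxx * syy - sxy * sxy
    trace = sxx + syy
    score = det - k * trace * trace         # Harris response
    return score, np.argwhere(score > score_threshold)   # (row, col) corners

# Usage (hypothetical frame): score, corners = harris_corners(frame)
```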
NASA Technical Reports Server (NTRS)
Griebeler, Elmer L.
2011-01-01
Binary communication through long cables, opto-isolators, isolating transformers, or repeaters can become distorted in characteristic ways. The usual solution is to slow the communication rate, change to a different method, or improve the communication medium. It would help if the characteristic distortions could be accommodated at the receiving end to ease the communication problem. The distortions come from loss of the high-frequency content, which adds slopes to the transitions from ones to zeroes and zeroes to ones. This weakens the definition of the ones and zeroes in the time domain. The other major distortion is the reduction of low frequency, which causes the voltage that defines the ones or zeroes to drift out of recognizable range. This development describes a method for recovering a binary data stream from a signal that has been subjected to a loss of both the higher-frequency content and the low-frequency content that is essential to define the difference between ones and zeroes. The method makes use of the frequency structure of the waveform created by the data stream, and then enhances the characteristics related to the data to reconstruct the binary switching pattern. A major issue is simplicity. The approach taken here is to take the first derivative of the signal and then feed it to a hysteresis switch. This is equivalent in practice to using a non-resonant band-pass filter feeding a Schmitt trigger. Obviously, the derivative signal needs to be offset to halfway between the thresholds of the hysteresis switch, and amplified so that the derivatives reliably exceed the thresholds. A transition from a zero to a one is the most substantial, fastest plus movement of voltage, and therefore will create the largest plus first-derivative pulse. Since the quiet state of the derivative sits between the hysteresis thresholds, the plus pulse exceeds the plus threshold, switching the hysteresis switch plus, which re-establishes the data zero-to-one transition, except now at the logic levels of the receiving circuit. Similarly, a transition from a one to a zero will be the most substantial and fastest minus movement of voltage and therefore will create the largest minus first-derivative pulse. The minus pulse exceeds the minus threshold, switching the hysteresis switch minus, which re-establishes the data one-to-zero transition. This innovation greatly increases tolerance of degradation of the binary pattern of ones and zeroes, and it can reject noise in the form of low frequencies that cause the voltage pattern to drift up or down, as well as higher frequencies that are beyond the recognizable content of the binary transitions.
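A minimal numerical sketch of the recovery scheme described above: the distorted waveform is differentiated and the result drives a hysteresis (Schmitt-trigger style) switch, so that the large positive and negative derivative pulses re-create the 0-to-1 and 1-to-0 transitions while slow drift is ignored. The waveform, distortion and threshold values are illustrative only.

```python
import numpy as np

def recover_bits(signal, pos_thresh, neg_thresh, initial=0):
    """Recover a binary stream: differentiate, then apply hysteresis switching."""
    deriv = np.diff(signal)                  # first derivative of the waveform
    out = np.empty(len(signal), dtype=int)
    state = initial
    out[0] = state
    for i, d in enumerate(deriv, start=1):
        if d > pos_thresh:                   # strong plus pulse: zero -> one
            state = 1
        elif d < neg_thresh:                 # strong minus pulse: one -> zero
            state = 0
        out[i] = state                       # the state is held between pulses
    return out

# Example: a slope-distorted bit pattern with low-frequency drift.
t = np.arange(400)
bits = np.repeat([0, 1, 1, 0, 1, 0, 0, 1], 50)
distorted = np.convolve(bits.astype(float), np.ones(15) / 15, mode="same")
distorted += 0.3 * np.sin(t / 120.0)         # slow baseline drift
print(recover_bits(distorted, 0.02, -0.02)[::50])
```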
A flash flood early warning system based on rainfall thresholds and daily soil moisture indexes
NASA Astrophysics Data System (ADS)
Brigandì, Giuseppina; Tito Aronica, Giuseppe
2015-04-01
The main focus of the paper is to present a flash flood early warning system, developed for the Civil Protection Agency of the Sicily Region, for alerting on extreme hydrometeorological events by using a methodology based on the combined use of rainfall thresholds and soil moisture indexes. As a matter of fact, flash flood warning is a key element in improving the Civil Protection capability to mitigate damage and safeguard the security of people. It is a rather complicated task, particularly in catchments with a flashy response, where even short anticipation times are important and welcome. In this context, some kind of hydrological precursor can be considered to improve the effectiveness of the emergency actions (i.e. early flood warning). It is well known that soil moisture is an important factor in flood formation, because runoff generation is strongly influenced by the antecedent soil moisture conditions of the catchment. The basic idea of the work presented here is to use soil moisture indexes, derived in a continuous form, to define a first alert phase in a flash flood forecasting chain, and then to define a unique rainfall threshold for a given day for the activation of the subsequent alarm phases, derived as a function of the soil moisture conditions at the beginning of the day. Daily soil moisture indexes, representative of the moisture condition of the catchment, were derived by using a parsimonious and simple-to-use approach based on the IHACRES model in a modified form developed by the authors. It is a simple, spatially-lumped rainfall-streamflow model, based on the SCS-CN method and on the unit hydrograph approach, that requires only rainfall, streamflow and air temperature data. It consists of two modules. In the first, a non-linear loss model, based on the SCS-CN method, was used to transform total rainfall into effective rainfall. In the second, a linear convolution of effective rainfall was performed using a total unit hydrograph with a configuration of one parallel channel and reservoir, corresponding to 'quick' and 'slow' components of runoff. In the non-linear model a wetness/soil moisture index, varying from 0 to 1, was derived to define the daily soil moisture catchment conditions and was then linked to a corresponding CN value used as input to derive the corresponding rainfall threshold for a given day. Finally, rainfall thresholds for flash flooding were derived using an Instantaneous Unit Hydrograph based lumped rainfall-runoff model with the SCS-CN routine for net rainfall. Application of the proposed methodology was carried out with reference to a river basin in Sicily, Italy.
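The SCS-CN loss step of the first module can be sketched as follows: a daily wetness index in [0, 1] is mapped to a curve number, and the curve number determines how much of the storm rainfall becomes effective rainfall. The linear mapping between dry and wet CN bounds is an assumption made here for illustration; the calibrated relationship used by the authors may differ.

```python
def curve_number_from_wetness(w, cn_dry=55.0, cn_wet=90.0):
    """Map a wetness index in [0, 1] to a curve number (illustrative bounds)."""
    return cn_dry + w * (cn_wet - cn_dry)

def effective_rainfall_mm(p_mm, cn, ia_ratio=0.2):
    """SCS-CN runoff: Q = (P - Ia)^2 / (P - Ia + S), S = 25400/CN - 254 (mm)."""
    s = 25400.0 / cn - 254.0
    ia = ia_ratio * s                 # initial abstraction
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

# The same 60 mm storm on a dry (w = 0.2) versus a wet (w = 0.9) day:
for w in (0.2, 0.9):
    cn = curve_number_from_wetness(w)
    print(f"w={w:.1f}  CN={cn:.0f}  Q={effective_rainfall_mm(60.0, cn):.1f} mm")
```

The contrast between the two days is the reason the rainfall threshold for a given day has to be conditioned on the soil moisture index at the start of that day.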
New stomatal flux-based critical levels for ozone effects on vegetation
NASA Astrophysics Data System (ADS)
Mills, Gina; Pleijel, Håkan; Braun, Sabine; Büker, Patrick; Bermejo, Victoria; Calvo, Esperanza; Danielsson, Helena; Emberson, Lisa; Fernández, Ignacio González; Grünhage, Ludger; Harmens, Harry; Hayes, Felicity; Karlsson, Per-Erik; Simpson, David
2011-09-01
The critical levels for ozone effects on vegetation have been reviewed and revised by the LRTAP Convention. Eight new or revised critical levels based on the accumulated stomatal flux of ozone (POD_Y, the Phytotoxic Ozone Dose above a threshold flux of Y nmol m⁻² PLA s⁻¹, where PLA is the projected leaf area) have been agreed. For each receptor, data were combined from experiments conducted under naturally fluctuating environmental conditions in 2-4 countries, resulting in linear dose-response relationships with response variables specific to each receptor (r² = 0.49-0.87, p < 0.001 for all). For crops, critical levels were derived for effects on wheat (grain yield, grain mass, and protein yield), potato (tuber yield) and tomato (fruit yield). For forest trees, critical levels were derived for effects on changes in annual increment in whole tree biomass for beech and birch, and Norway spruce. For (semi-)natural vegetation, the critical level for effects on productive and high conservation value perennial grasslands was based on effects on important component species of the genus Trifolium (clover species). These critical levels can be used to assess protection against the damaging effects of ozone on food security, important ecosystem services provided by forest trees (roundwood production, C sequestration, soil stability and flood prevention) and the vitality of pasture.
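Operationally, POD_Y is simply the stomatal ozone flux accumulated above the threshold Y over the relevant period. A minimal sketch is given below; the example threshold Y = 6 and the synthetic flux series are chosen only for illustration.

```python
import numpy as np

def pod_y(stomatal_flux_nmol_m2_s, y_threshold, dt_s=3600.0):
    """POD_Y in mmol m-2 PLA from a series of (hourly) stomatal flux values
    in nmol m-2 PLA s-1: only the flux above the threshold Y is accumulated."""
    flux = np.asarray(stomatal_flux_nmol_m2_s, dtype=float)
    excess = np.maximum(flux - y_threshold, 0.0)
    return excess.sum() * dt_s * 1e-6          # nmol -> mmol

# Synthetic 10-day hourly flux record evaluated with Y = 6 (illustrative only).
rng = np.random.default_rng(1)
flux = np.clip(rng.normal(5.0, 3.0, 240), 0.0, None)
print("POD6 =", round(pod_y(flux, 6.0), 2), "mmol m-2 PLA")
```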
Company, Nuri; Nadal, Anna; Ruiz, Cristina; Pla, Maria
2014-01-01
Synthetic linear antimicrobial peptides with cationic α-helical structures, such as BP100, have potent and specific activities against economically important plant pathogenic bacteria. They are also recognized as valuable therapeutics and preservatives. However, highly active BP100 derivatives are often phytotoxic when expressed at high levels as recombinant peptides in plants. Here we demonstrate that production of recombinant phytotoxic peptides in transgenic plants is possible by strictly limiting transgene expression to certain tissues and conditions, and specifically that minimization of this expression during transformation and regeneration of transgenic plants is essential to obtain viable plant biofactories. On the basis of whole-genome transcriptomic data available online, we identified the Os.hsp82 promoter that fulfilled this requirement and was highly induced in response to heat shock. Using this strategy, we generated transgenic rice lines producing moderate yields of severely phytotoxic BP100 derivatives on exposure to high temperature. In addition, a threshold for gene expression in selected tissues and stages was experimentally established, below which the corresponding promoters should be suitable for driving the expression of recombinant phytotoxic proteins in genetically modified plants. In view of the growing transcriptomics data available, this approach is of interest to assist promoter selection for specific purposes. PMID:25387106
Couillard, Annabelle; Tremey, Emilie; Prefaut, Christian; Varray, Alain; Heraud, Nelly
2016-12-01
To determine and/or adjust exercise training intensity for patients when the cardiopulmonary exercise test is not accessible, the determination of the dyspnoea threshold (defined as the onset of self-perceived breathing discomfort) during the 6-min walk test (6MWT) could be a good alternative. The aim of this study was to evaluate the feasibility and reproducibility of the self-perceived dyspnoea threshold and to determine whether a useful equation to estimate the ventilatory threshold from the self-perceived dyspnoea threshold could be derived. A total of 82 patients were included and performed two 6MWTs, during which they raised a hand to signal the self-perceived dyspnoea threshold. The reproducibility in terms of heart rate (HR) was analysed. On a subsample of patients (n=27), a stepwise regression analysis was carried out to obtain a predictive equation for the HR at the ventilatory threshold measured during a cardiopulmonary exercise test, estimated from the HR at the self-perceived dyspnoea threshold, age and forced expiratory volume in 1 s. Overall, 80% of patients could identify a self-perceived dyspnoea threshold during the 6MWT. The self-perceived dyspnoea threshold was reproducibly expressed in HR (coefficient of variation=2.8%). A stepwise regression analysis enabled estimation of the HR at the ventilatory threshold from the HR at the self-perceived dyspnoea threshold, age and forced expiratory volume in 1 s (adjusted r=0.79, r²=0.63, and relative standard deviation=9.8 bpm). This study shows that a majority of patients with chronic obstructive pulmonary disease can identify a self-perceived dyspnoea threshold during the 6MWT. This HR at the dyspnoea threshold is highly reproducible and enables estimation of the HR at the ventilatory threshold.
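The predictive equation is a multiple linear regression of the form HR_VT ≈ a·HR_DT + b·age + c·FEV1 + d. The published coefficients are not reproduced here, so the sketch below only shows how such a model would be fitted to and applied on measured data; the input arrays are placeholders for real measurements.

```python
import numpy as np

def fit_hr_vt_model(hr_dt, age, fev1, hr_vt):
    """Ordinary least squares for HR_VT ~ HR_DT + age + FEV1 + intercept."""
    X = np.column_stack([hr_dt, age, fev1, np.ones(len(hr_dt))])
    coef, *_ = np.linalg.lstsq(X, hr_vt, rcond=None)
    residual_sd = np.std(hr_vt - X @ coef, ddof=X.shape[1])
    return coef, residual_sd

def predict_hr_vt(coef, hr_dt, age, fev1):
    """Apply the fitted equation to new patients."""
    return coef[0] * hr_dt + coef[1] * age + coef[2] * fev1 + coef[3]

# Usage (placeholder arrays of per-patient measurements):
# coef, sd = fit_hr_vt_model(hr_dt_array, age_array, fev1_array, hr_vt_array)
# hr_vt_estimate = predict_hr_vt(coef, 112.0, 64.0, 1.4)
```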
Simon, Quentin; Thouveny, Nicolas; Bourlès, Didier L; Valet, Jean-Pierre; Bassinot, Franck; Ménabréaz, Lucie; Guillou, Valéry; Choy, Sandrine; Beaufort, Luc
2016-11-01
Geomagnetic dipole moment variations associated with polarity reversals and excursions are expressed by large changes of the cosmogenic nuclide beryllium-10 (¹⁰Be) production rates. Authigenic ¹⁰Be/⁹Be ratios (proxy of atmospheric ¹⁰Be production) from oceanic cores therefore complete the classical information derived from relative paleointensity (RPI) records. This study presents new authigenic ¹⁰Be/⁹Be ratio results obtained from cores MD05-2920 and MD05-2930 collected in the west equatorial Pacific Ocean. The ¹⁰Be/⁹Be ratios from cores MD05-2920, MD05-2930 and MD90-0961 have been stacked and averaged. Variations of the authigenic ¹⁰Be/⁹Be ratio are analyzed and compared with the geomagnetic dipole low series reported from global RPI stacks. The largest ¹⁰Be overproduction episodes are related to dipole field collapses (below a threshold of 2 × 10²² Am²) associated with the Brunhes/Matuyama reversal, the Laschamp (41 ka) excursion, and the Iceland Basin event (190 ka). Other significant ¹⁰Be production peaks are correlated to geomagnetic excursions reported in literature. The record was then calibrated by using absolute dipole moment values drawn from the Geomagia and Pint paleointensity value databases. The ¹⁰Be-derived geomagnetic dipole moment record, independent from sedimentary paleomagnetic data, covers the Brunhes-Matuyama transition and the whole Brunhes Chron. It provides new and complementary data on the amplitude and timing of millennial-scale geomagnetic dipole moment variations and particularly on dipole moment collapses triggering polarity instabilities.
Threshold Laws for Two-Electron Ejection Processes: A Still Controversial Problem in Atomic Physics
NASA Technical Reports Server (NTRS)
Temkin, Aaron
2003-01-01
This talk deals with collision processes of the following kind: (a) an ionizing collision of an electron with a neutral atom, (b) a photon incident on a negative ion resulting in two-electron ejection. In both cases the final state is a positive ion and two outgoing electrons, and in principle both processes should be governed by the same form of threshold law. It is generally conceded that this is one of the most difficult basic problems in nonrelativistic quantum mechanics. The standard treatment (due to Wannier) will be briefly reviewed in terms of the derivation of his well-known threshold law for the yield (Q) of positive ions vs. the excess energy (E): Q_W ∝ E^1.127.... The derivation is a brilliant analysis based on Newton's equations, leading to the dominance of events in which the two electrons emerge on opposite sides of the residual ion with similar energies. In contrast, I will argue on the basis of quantum mechanical ideas that in the threshold limit the more likely outcomes are events in which the electrons emerge with decidedly different energies, leading to a formally different (Coulomb-dipole) threshold law, Q_CD ∝ E·[1 + C sin(α ln E + μ)]/[ln E]². Additional aspects of that approach will be discussed. Some experimental results will be presented, and more incisive predictions involving polarized projectiles and targets will be given.
Amplified spontaneous emission of pyranyliden derivatives in PVK matrix
NASA Astrophysics Data System (ADS)
Vembris, Aivars; Zarinsh, Elmars; Kokars, Valdis
2016-04-01
One of the well-known red-light-emitting laser dyes is 4-(dicyanomethylene)-2-methyl-6-(4-dimethylaminostyryl)-4H-pyran (DCM). Amplified spontaneous emission (ASE) of DCM molecules and their derivatives in polymer or low-molecular-weight matrices has been widely investigated. The main issue for these molecules is aggregation, which limits the doping concentration in the matrix. The lowest ASE threshold values were obtained within the concentration range of 2 to 4 wt%. In this work the ASE properties of two original DCM derivatives in poly(N-vinylcarbazole) (PVK) at various concentrations will be discussed. One of the derivatives is the DCM dye in which the butyl groups in the electron donor part have been replaced with bulky trityloxyethyl groups (DWK-1). These groups do not influence electron transitions in the dye but prevent aggregation of the molecules. The second derivative (DWK-2) consists of two equal donor groups with the attached trityloxyethyl groups. All results were compared with the DCM:PVK system. The photoluminescence quantum yield (PLQY) is almost three times larger than in the DCM systems for DWK-1 concentrations up to 20 wt%. The PLQY saturated at 0.06 at higher DWK-1 concentrations. Bulky trityloxyethyl groups prevent aggregation of the molecules, thus decreasing the interaction between dyes and the number of non-radiative decays. A red shift of the photoluminescence and the amplified spontaneous emission was observed at higher concentrations due to the solid-state solvation effect. The increase of the dye density in the matrix with a smaller loss in PLQY resulted in a low ASE threshold energy. The lowest threshold value obtained was around 29 μJ/cm² in DWK-1:PVK films.
Joint maximum-likelihood magnitudes of presumed underground nuclear test explosions
NASA Astrophysics Data System (ADS)
Peacock, Sheila; Douglas, Alan; Bowers, David
2017-08-01
Body-wave magnitudes (mb) of 606 seismic disturbances caused by presumed underground nuclear test explosions at specific test sites between 1964 and 1996 have been derived from station amplitudes collected by the International Seismological Centre (ISC), by a joint inversion for mb and station-specific magnitude corrections. A maximum-likelihood method was used to reduce the upward bias of network mean magnitudes caused by data censoring, where arrivals at stations that do not report arrivals are assumed to be hidden by the ambient noise at the time. Threshold noise levels at each station were derived from the ISC amplitudes using the method of Kelly and Lacoss, which fits to the observed magnitude-frequency distribution a Gutenberg-Richter exponential decay truncated at low magnitudes by an error function representing the low-magnitude threshold of the station. The joint maximum-likelihood inversion is applied to arrivals from the sites: Semipalatinsk (Kazakhstan) and Novaya Zemlya, former Soviet Union; Singer (Lop Nor), China; Mururoa and Fangataufa, French Polynesia; and Nevada, USA. At sites where eight or more arrivals could be used to derive magnitudes and station terms for 25 or more explosions (Nevada, Semipalatinsk and Mururoa), the resulting magnitudes and station terms were fixed and a second inversion carried out to derive magnitudes for additional explosions with three or more arrivals. 93 more magnitudes were thus derived. During processing for station thresholds, many stations were rejected for sparsity of data, obvious errors in reported amplitude, or great departure of the reported amplitude-frequency distribution from the expected left-truncated exponential decay. Abrupt changes in monthly mean amplitude at a station apparently coincide with changes in recording equipment and/or analysis method at the station.
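The censoring idea can be sketched with a simplified maximum-likelihood estimate of a single network magnitude: reporting stations contribute a Gaussian density term, while silent stations contribute the probability that the signal fell below their noise threshold. Station corrections and the scatter sigma are taken as known here for brevity, whereas the joint inversion in the paper solves for the station terms as well; all numbers in the example are illustrative.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

def network_mb(reported_m, reported_corr, silent_thresh, silent_corr, sigma=0.35):
    """Censored maximum-likelihood network magnitude. Reporting stations give
    Gaussian density terms; silent stations give P(signal below threshold)."""
    reported_m = np.asarray(reported_m, dtype=float)

    def neg_log_like(mb):
        ll = norm.logpdf(reported_m, loc=mb + np.asarray(reported_corr),
                         scale=sigma).sum()
        if len(silent_thresh):
            ll += norm.logcdf(np.asarray(silent_thresh, dtype=float),
                              loc=mb + np.asarray(silent_corr),
                              scale=sigma).sum()
        return -ll

    return minimize_scalar(neg_log_like, bounds=(3.0, 8.0), method="bounded").x

# Three reporting stations plus two silent stations whose noise thresholds lie
# near the signal level pull the estimate below the naive network mean.
print(network_mb([5.6, 5.8, 5.7], [0.0, 0.1, -0.1], [5.9, 6.0], [0.0, 0.0]))
```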
75 FR 3468 - Revised Jurisdictional Thresholds For Section 7A of the Clayton Act
Federal Register 2010, 2011, 2012, 2013, 2014
2010-01-21
... revised thresholds for the Hart-Scott-Rodino Antitrust Improvements Act of 1976 required by the 2000... Hart-Scott-Rodino Antitrust Improvements Act of 1976, Pub. L. 94-435, 90 Stat. 1390 (``the Act... product, in accordance with Section 8(a)(5). The new thresholds, which take effect 30 days after...
16 CFR § 1061.4 - Threshold requirements for applications for exemption.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 16 Commercial Practices 2 2013-01-01 2013-01-01 false Threshold requirements for applications for exemption. § 1061.4 Section § 1061.4 Commercial Practices CONSUMER PRODUCT SAFETY COMMISSION GENERAL APPLICATIONS FOR EXEMPTION FROM PREEMPTION § 1061.4 Threshold requirements for applications for exemption. (a) The Commission will consider an...
Visual analytics of anomaly detection in large data streams
NASA Astrophysics Data System (ADS)
Hao, Ming C.; Dayal, Umeshwar; Keim, Daniel A.; Sharma, Ratnesh K.; Mehta, Abhay
2009-01-01
Most data streams are multi-dimensional, high-speed, and contain massive volumes of continuous information. They are seen in daily applications, such as telephone calls, retail sales, data center performance, and oil production operations. Many analysts want insight into the behavior of this data. They want to catch exceptions in flight to reveal the causes of the anomalies and to take immediate action. To guide the user in quickly finding anomalies in a large data stream, we derive a new automated neighborhood threshold marking technique, called AnomalyMarker. This technique is built on cell-based data streams and user-defined thresholds. We extend the scope of the data points around the threshold to include the surrounding areas. The idea is to define a focus area (marked area) which enables users to (1) visually group the interesting data points related to the anomalies (i.e., problems that occur persistently or occasionally) for observing their behavior; and (2) discover the factors related to the anomaly by visualizing the correlations between the problem attribute and the attributes of the nearby data items from the entire multi-dimensional data stream. Mining results are quickly presented in graphical representations (e.g., tooltips) for the user to zoom into the problem regions. Different algorithms are introduced which try to optimize the size and extent of the anomaly markers. We have successfully applied this technique to detect data stream anomalies in large real-world enterprise server performance and data center energy management applications.
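A minimal illustration of the neighborhood-marking idea described above (flag threshold crossings and mark a band of nearby points as the focus area); function and parameter names are hypothetical, and this is not the AnomalyMarker implementation.
```python
# Sketch of neighbourhood threshold marking on a cell-based stream window:
# cells above the user threshold are anomalies, and cells within a tolerance
# band around the threshold are "marked" as the focus area to inspect with them.
import numpy as np

def mark_neighbourhood(values, threshold, band=0.05):
    """Return (anomalous, marked) boolean masks for a 1-D stream window."""
    values = np.asarray(values, dtype=float)
    anomalous = values > threshold
    # focus area: points within +/- band*threshold of the threshold itself
    marked = np.abs(values - threshold) <= band * abs(threshold)
    return anomalous, marked | anomalous

window = [68, 71, 75, 79, 83, 90, 77, 72]          # e.g. CPU temperature cells
anom, focus = mark_neighbourhood(window, threshold=80, band=0.05)
print("anomalies:", np.flatnonzero(anom), "focus area:", np.flatnonzero(focus))
```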
NASA Astrophysics Data System (ADS)
Bhat, G. S.; Kumar, Shailendra
2015-03-01
The vertical structure of the radar reflectivity factor in active convective clouds that form during the South Asian monsoon season is reported using the 2A25 version 6 data product derived from the precipitation radar measurements on board the Tropical Rainfall Measuring Mission satellite. We define two types of convective cells, namely, cumulonimbus towers (CbTs) and intense convective cells (ICCs). A CbT is defined by a reflectivity threshold of 20 dBZ at 12 km altitude and is at least 9 km thick. ICCs are constructed using reflectivity thresholds at 8 km and 3 km altitudes. Cloud properties reported here are based on a 10 year climatology. It is observed that the frequency of occurrence of CbTs is highest over the foothills of the Himalayas, the plains of northern India and Bangladesh, and lowest over the Arabian Sea and the equatorial Indian Ocean west of 90°E. The regional differences depend on the reference height selected, being small in the case of CbTs and prominent in the 6-13 km height range for ICCs. Land cells are more intense than oceanic ones for convective cells defined using the reflectivity threshold at 3 km, whereas land versus ocean contrasts are not observed in the case of CbTs. Compared to cumulonimbus clouds elsewhere in the tropics, the South Asian counterparts have higher reflectivity values above 11 km altitude.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Eric L.; Davis, Quincy C.; Morse, Michael D.
The abrupt onset of predissociation in the congested electronic spectra of jet-cooled VC, VN, and VS has been observed using resonant two-photon ionization spectroscopy. It is argued that because of the high density of electronic states in these molecules, the predissociation threshold occurs at the thermochemical threshold for the production of separated atoms in their ground electronic states. As a result, the measured threshold represents the bond dissociation energy. Using this method, bond dissociation energies of D0(VC) = 4.1086(25) eV, D0(VN) = 4.9968(20) eV, and D0(VS) = 4.5353(25) eV are obtained. From these values, enthalpies of formation are derived as Δf,0KH°(VC(g)) = 827.0 ± 8 kJ mol⁻¹, Δf,0KH°(VN(g)) = 500.9 ± 8 kJ mol⁻¹, and Δf,0KH°(VS(g)) = 349.3 ± 8 kJ mol⁻¹. Using a thermochemical cycle and the well-known ionization energies of V, VC, and VN, our results also provide D0(V⁺–C) = 3.7242(25) eV and D0(V⁺–N) = 4.6871(20) eV. These values are compared to previous measurements and to computational results. The precision of these bond dissociation energies makes them good candidates for testing computational chemistry methods, particularly those that employ density functional theory.
Constraints on models for the Higgs boson with exotic spin and parity
NASA Astrophysics Data System (ADS)
Johnson, Emily Hannah
The production of a Higgs boson in association with a vector boson at the Tevatron offers a unique opportunity to study models for the Higgs boson with exotic spin J and parity P assignments. At the Tevatron the VH system is produced near threshold. Different J^P assignments of the Higgs boson can be distinguished by examining the behavior of the cross section near threshold. The relatively low backgrounds at the Tevatron compared to the LHC put us in a unique position to study the direct decay of the Higgs boson to fermions. If the Higgs sector is more complex than predicted, studying the spin and parity of the Higgs boson in all decay modes is important. In this Thesis we will examine the WH → ℓνbb̄ production and decay mode using 9.7 fb⁻¹ of data collected by the D0 experiment in an attempt to derive constraints on models containing exotic values for the spin and parity of the Higgs boson. In particular, we will examine models for a Higgs boson with J^P = 0⁻ and J^P = 2⁺. We use a likelihood ratio to quantify the degree to which our data are incompatible with exotic J^P predictions for a range of possible production rates. Assuming the production cross section times branching ratio of the signals in the models considered is equal to the standard model prediction, the WH → ℓνbb̄ mode alone is unable to reject either exotic model considered. We will also discuss the combination of the ZH → ℓℓbb̄, WH → ℓνbb̄, and VH → ννbb̄ production modes at the D0 experiment and with the CDF experiment. When combining all three production modes at the D0 experiment we reject the J^P = 0⁻ and J^P = 2⁺ hypotheses at the 97.6% CL and at the 99.0% CL, respectively, when assuming the signal production cross section times branching ratio is equal to the standard model predicted value. When combining with the CDF experiment we reject the J^P = 0⁻ and J^P = 2⁺ hypotheses with significances of 5.0 standard deviations and 4.9 standard deviations, respectively.
High speed point derivative microseismic detector
Uhl, J.E.; Warpinski, N.R.; Whetten, E.B.
1998-06-30
A high speed microseismic event detector constructed in accordance with the present invention uses a point derivative comb to quickly and accurately detect microseismic events. Compressional and shear waves impinging upon microseismic receiver stations disposed to collect waves are converted into digital data and analyzed using a point derivative comb including assurance of quiet periods prior to declaration of microseismic events. If a sufficient number of quiet periods have passed, the square of a two point derivative of the incoming digital signal is compared to a trip level threshold exceeding the determined noise level to declare a valid trial event. The squaring of the derivative emphasizes the differences between noise and signal, and the valid event is preferably declared when the trip threshold has been exceeded over a temporal comb width to realize a comb over a given time period. Once a trial event has been declared, the event is verified through a spatial comb, which applies the temporal event comb to additional stations. The detector according to the present invention quickly and accurately detects initial compressional waves indicative of a microseismic event which typically exceed the ambient cultural noise level by a small amount, and distinguishes the waves from subsequent larger amplitude shear waves. 9 figs.
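A minimal sketch of the temporal stage described above (squared two-point derivative, a required quiet period, and a temporal comb); parameter values and names are illustrative assumptions, not those of the patent.
```python
# Sketch of a point-derivative comb detector's temporal stage: square the
# two-point derivative, require a run of quiet samples first, then declare a
# trial event when the squared derivative exceeds a trip level a minimum
# number of times within a comb window. All parameters are illustrative.
import numpy as np

def detect_trial_events(signal, trip_level, quiet_len=50, comb_len=20, min_hits=5):
    d2 = np.square(np.diff(signal))                  # squared two-point derivative
    quiet = d2 < trip_level                          # below trip level counts as quiet
    events = []
    i = quiet_len
    while i < len(d2) - comb_len:
        if quiet[i - quiet_len:i].all() and d2[i] > trip_level:
            hits = np.count_nonzero(d2[i:i + comb_len] > trip_level)
            if hits >= min_hits:                     # temporal comb satisfied
                events.append(i)
                i += comb_len                        # skip past this event
                continue
        i += 1
    return events

rng = np.random.default_rng(1)
trace = rng.normal(0, 1, 2000)
trace[1200:1260] += 12.0 * np.sin(np.linspace(0, 12 * np.pi, 60))   # synthetic onset
print(detect_trial_events(trace, trip_level=25.0))
```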
High speed point derivative microseismic detector
Uhl, James Eugene; Warpinski, Norman Raymond; Whetten, Ernest Blayne
1998-01-01
A high speed microseismic event detector constructed in accordance with the present invention uses a point derivative comb to quickly and accurately detect microseismic events. Compressional and shear waves impinging upon microseismic receiver stations disposed to collect waves are converted into digital data and analyzed using a point derivative comb including assurance of quiet periods prior to declaration of microseismic events. If a sufficient number of quiet periods have passed, the square of a two point derivative of the incoming digital signal is compared to a trip level threshold exceeding the determined noise level to declare a valid trial event. The squaring of the derivative emphasizes the differences between noise and signal, and the valid event is preferably declared when the trip threshold has been exceeded over a temporal comb width to realize a comb over a given time period. Once a trial event has been declared, the event is verified through a spatial comb, which applies the temporal event comb to additional stations. The detector according to the present invention quickly and accurately detects initial compressional waves indicative of a microseismic event which typically exceed the ambient cultural noise level by a small amount, and distinguishes the waves from subsequent larger amplitude shear waves.
NASA Astrophysics Data System (ADS)
Su, Y.; Guo, Q.; Collins, B.; Fry, D.; Kelly, M.
2014-12-01
Forest fuel treatments (FFT) are often employed in Sierra Nevada forests (located in California, US) to enhance forest health, regulate stand density, and reduce wildfire risk. However, there have been concerns that FFTs may have negative impacts on certain protected wildlife species. Due to the constraints and protection of resources (e.g., perennial streams, cultural resources, wildlife habitat, etc.), the actual FFT extents are usually different from the planned extents. Identifying the actual extent of treated areas is of primary importance to understand the environmental influence of FFTs. Light detection and ranging (Lidar) is a powerful remote sensing technique that can provide accurate forest structure measurements, and therefore has great potential to monitor forest changes. This study used canopy height model (CHM) and canopy cover (CC) products derived from multi-temporal airborne Lidar data to detect FFTs by an approach combining a pixel-wise thresholding method and an object-of-interest segmentation method. We also investigated forest change following the implementation of landscape-scale FFT projects through the use of the normalized difference vegetation index (NDVI) and standardized principal component analysis (PCA) from multi-temporal high resolution aerial imagery. The same FFT detection routine was applied to the Lidar data and the aerial imagery for the purpose of comparing their capabilities for FFT detection. Our results demonstrated that FFT detection using Lidar-derived CC products produced both the highest total accuracy and the highest kappa coefficient, and was more robust at identifying areas with light FFTs. The accuracy using Lidar-derived CHM products was significantly lower than that obtained using Lidar-derived CC, but still slightly higher than that from aerial imagery. FFT detection results using NDVI and standardized PCA from multi-temporal aerial imagery produced almost identical total accuracy and kappa coefficient. Both methods showed relatively limited capacity to detect light FFT areas, and had a higher false detection rate (untreated areas recognized as treated) compared to the methods using Lidar-derived parameters.
Threshold law for electron-atom impact ionization
NASA Technical Reports Server (NTRS)
Temkin, A.
1982-01-01
A derivation of the explicit form of the threshold law for electron impact ionization of atoms is presented, based on the Coulomb-dipole theory. The important generalization is made of using a dipole function whose moment is the dipole moment formed by an inner electron and the nucleus. The result is a modulated quasi-linear law for the yield of positive ions which applies to positron-atom impact ionization.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dorenbos, G., E-mail: dorenbos@ny.thn.ne.jp
Percolation thresholds for solvent diffusion within hydrated model polymeric membranes are derived from dissipative particle dynamics in combination with Monte Carlo (MC) tracer diffusion calculations. The polymer backbones are composed of hydrophobic A beads to which, at regular intervals, Y-shaped side chains are attached. Each side chain is composed of eight A beads and contains two identical branches that are each terminated with a pendant hydrophilic C bead. Four types of side chains are considered, for which the two branches (each represented as [C], [AC], [AAC], or [AAAC]) split off from the 8th, 6th, 4th, or 2nd A bead, respectively. Water diffusion through the phase-separated water-containing pore networks is deduced from MC tracer diffusion calculations. The percolation threshold for the architectures containing the [C] and [AC] branches is at a water volume fraction of ∼0.07 and 0.08, respectively. These are much lower than those derived earlier for linear architectures of various side chain lengths and side chain distributions. Control of side chain architecture is thus a very interesting design parameter for decreasing the percolation threshold for solvent and proton transport within flexible amphiphilic polymer membranes.
Production of Charmonium at Threshold in Hall A and C at Jefferson Lab
Hafidi, K.; Joosten, S.; Meziani, Z. -E.; ...
2017-05-27
Here we describe two approved experiments in Hall A and Hall C at Jefferson Lab that will investigate the pure gluonic component of the strong interaction of Quantum ChromoDynamics by measuring the elastic J/ψ electro- and photo-production cross section in the threshold region, as well as explore the nature of the recently discovered LHCb charmed pentaquarks.
38 CFR 49.44 - Procurement procedures.
Code of Federal Regulations, 2010 CFR
2010-07-01
... standards. (iv) The specific features of “brand name or equal” descriptions that bidders are required to... threshold, specifies a “brand name” product. (4) The proposed award over the small purchase threshold is to...
Non-abelian factorisation for next-to-leading-power threshold logarithms
NASA Astrophysics Data System (ADS)
Bonocore, D.; Laenen, E.; Magnea, L.; Vernazza, L.; White, C. D.
2016-12-01
Soft and collinear radiation is responsible for large corrections to many hadronic cross sections, near thresholds for the production of heavy final states. There is much interest in extending our understanding of this radiation to next-to-leading power (NLP) in the threshold expansion. In this paper, we generalise a previously proposed all-order NLP factorisation formula to include non-abelian corrections. We define a nonabelian radiative jet function, organising collinear enhancements at NLP, and compute it for quark jets at one loop. We discuss in detail the issue of double counting between soft and collinear regions. Finally, we verify our prescription by reproducing all NLP logarithms in Drell-Yan production up to NNLO, including those associated with double real emission. Our results constitute an important step in the development of a fully general resummation formalism for NLP threshold effects.
Generation of skeletal mechanism by means of projected entropy participation indices
NASA Astrophysics Data System (ADS)
Paolucci, Samuel; Valorani, Mauro; Ciottoli, Pietro Paolo; Galassi, Riccardo Malpica
2017-11-01
When the dynamics of reactive systems develop very-slow and very-fast time scales separated by a range of active time scales, with gaps in the fast/active and slow/active time scales, then it is possible to achieve multi-scale adaptive model reduction along with the integration of the ODEs using the G-Scheme. The scheme assumes that the dynamics is decomposed into active, slow, fast, and invariant subspaces. We derive expressions that establish a direct link between time scales and entropy production by using estimates provided by the G-Scheme. To calculate the contribution to entropy production, we resort to a standard model of a constant pressure, adiabatic, batch reactor, where the mixture temperature of the reactants is initially set above the auto-ignition temperature. Numerical experiments show that the contribution to entropy production of the fast subspace is of the same magnitude as the error threshold chosen for the identification of the decomposition of the tangent space, and the contribution of the slow subspace is generally much smaller than that of the active subspace. The information on entropy production associated with reactions within each subspace is used to define an entropy participation index that is subsequently utilized for model reduction.
NASA Astrophysics Data System (ADS)
Sahlu, Dejene; Moges, Semu; Anagnostou, Emmanouil; Nikolopoulos, Efthymios; Hailu, Dereje; Mei, Yiwen
2017-04-01
Water resources assessment, planning and management in Africa is often constrained by the lack of reliable spatio-temporal rainfall data. Satellite products are steadily growing and offer useful alternative rainfall datasets globally. The aim of this paper is to examine the error characteristics of the main available global satellite precipitation products with a view to improving the reliability of wet season (June to September) and small rainy season rainfall datasets over the Upper Blue Nile Basin. The study utilized six satellite-derived precipitation datasets at 0.25-deg spatial grid size and daily temporal resolution: 1) the near-real-time (3B42_RT) and gauge-adjusted (3B42_V7) products of the Tropical Rainfall Measuring Mission (TRMM) Multi-satellite Precipitation Analysis (TMPA), 2) the gauge-adjusted and unadjusted Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks (PERSIANN) products, and 3) the gauge-adjusted and unadjusted products of the National Oceanic and Atmospheric Administration (NOAA) Climate Prediction Center Morphing technique (CMORPH), over the period 2000 to 2013. The error analysis utilized statistical techniques based on the bias ratio (Bias), correlation coefficient (CC) and root-mean-square error (RMSE). The mean relative error (MRE), CC and RMSE metrics are further examined for six categories of 10th, 25th, 50th, 75th, 90th and 95th percentile rainfall thresholds. The skill of the satellite estimates is evaluated using the categorical error metrics of missed rainfall volume fraction (MRV), falsely detected rainfall volume fraction (FRV), probability of detection (POD) and false alarm ratio (FAR). Results showed that the six satellite-based rainfall products underestimated wet season (June to September) gauge precipitation, with the exception of non-adjusted PERSIANN, which overestimated the initial part of the rainy season (March to May). During the wet season, adjusted CMORPH has a relatively better bias ratio (89%), followed by 3B42_V7 (88%) and adjusted PERSIANN (81%), while the non-adjusted products have relatively lower bias ratios. The CC statistic ranges from 0.34 to 0.43 for the wet season, with the adjusted products having slightly higher values. The initial rainy season has relatively higher CC than the wet season. Results from the categorical error metrics showed that the CMORPH products have the highest POD (91%) and are better at avoiding false detection of rainfall events in the wet season. For the initial rainy season, PERSIANN products have POD below 50%, while the TMPA and CMORPH products are nearly equivalent (63-67%). On the other hand, FAR is below 0.1% for all products in the initial rainy season, while it is higher (10-25%) in the wet season. In terms of the rainfall volume of missed and falsely detected rainfall, CMORPH exhibited a lower MRV (about 4.5%) than the TMPA and PERSIANN products (11-19%) in the wet season. MRV for the initial rainy season was 20% for the TMPA and CMORPH products and above 30% for the PERSIANN products. All products are nearly equivalent in the wet season in terms of FRV (<0.2%). The magnitude of MRE increases with gauge rainfall threshold category, with 3B42_V7 and adjusted CMORPH having lower magnitudes, showing that underestimation of rainfall increases with increasing rainfall magnitude. CC also decreases with gauge rainfall threshold category, with the CMORPH products having slightly higher values. Overall, all satellite products underestimated (overestimated) the lower (higher) quantiles.
We have observed that, among the six satellite rainfall products, the adjusted CMORPH has relatively better potential to improve wet season rainfall estimates, and 3B42_V7 the initial rainy season estimates, in the Upper Blue Nile Basin.
Rico, Andreu; Jacobs, Rianne; Van den Brink, Paul J; Tello, Alfredo
2017-12-01
Estimating antibiotic pollution and antibiotic resistance development risks in environmental compartments is important to design management strategies that advance our stewardship of antibiotics. In this study we propose a modelling approach to estimate the risk of antibiotic resistance development in environmental compartments and demonstrate its application in aquaculture production systems. We modelled exposure concentrations for 12 antibiotics used in Vietnamese Pangasius catfish production using the ERA-AQUA model. Minimum selective concentration (MSC) distributions that characterize the selective pressure of antibiotics on bacterial communities were derived from the European Committee on Antimicrobial Susceptibility Testing (EUCAST) Minimum Inhibitory Concentration dataset. The antibiotic resistance development risk (RDR) for each antibiotic was calculated as the probability that the antibiotic exposure distribution exceeds the MSC distribution representing the bacterial community. RDRs in pond sediments were nearly 100% for all antibiotics. Median RDR values in pond water were high for the majority of the antibiotics, with rifampicin, levofloxacin and ampicillin having highest values. In the effluent mixing area, RDRs were low for most antibiotics, with the exception of amoxicillin, ampicillin and trimethoprim, which presented moderate risks, and rifampicin and levofloxacin, which presented high risks. The RDR provides an efficient means to benchmark multiple antibiotics and treatment regimes in the initial phase of a risk assessment with regards to their potential to develop resistance in different environmental compartments, and can be used to derive resistance threshold concentrations.
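A minimal Monte Carlo sketch of an RDR of this kind, i.e. the probability that a sampled exposure concentration exceeds a sampled MSC; the lognormal shapes and all parameter values are assumptions of the sketch, not the ERA-AQUA/EUCAST-derived distributions.
```python
# Sketch: resistance development risk (RDR) as the probability that an
# exposure-concentration distribution exceeds a minimum-selective-concentration
# (MSC) distribution. Both are assumed lognormal purely for illustration.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
exposure = rng.lognormal(mean=np.log(2.0), sigma=0.8, size=n)   # ug/L, hypothetical
msc      = rng.lognormal(mean=np.log(5.0), sigma=1.0, size=n)   # ug/L, hypothetical

rdr = np.mean(exposure > msc)        # Monte Carlo estimate of P(exposure > MSC)
print(f"RDR = {rdr:.1%}")
```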
Dual processing model of medical decision-making.
Djulbegovic, Benjamin; Hozo, Iztok; Beckstead, Jason; Tsalatsanis, Athanasios; Pauker, Stephen G
2012-09-03
Dual processing theory of human cognition postulates that reasoning and decision-making can be described as a function of both an intuitive, experiential, affective system (system I) and an analytical, deliberative (system II) processing system. To date, no formal descriptive model of medical decision-making based on dual processing theory has been developed. Here we postulate such a model and apply it to a common clinical situation: whether treatment should be administered to a patient who may or may not have a disease. We developed a mathematical model in which we linked a recently proposed descriptive psychological model of cognition with the threshold model of medical decision-making, and we show how this approach can be used to better understand decision-making at the bedside and to explain the widespread variation in treatments observed in clinical practice. We show that a physician's disposition to treat at probability levels higher (or lower) than the prescriptive therapeutic threshold obtained via system II processing is moderated by system I and by the ratio of benefits and harms as evaluated by both systems. Under some conditions, the system I decision maker's threshold may drop dramatically below the expected utility threshold derived by system II; this can explain the overtreatment often seen in contemporary practice. The opposite can also occur, as in situations where empirical evidence is considered unreliable or when the cognitive processes of decision-makers are biased through recent experience: the threshold will then increase relative to the normative value derived via system II using expected utility. This inclination toward higher diagnostic certainty may, in turn, explain the undertreatment also documented in current medical practice. We have developed the first dual processing model of medical decision-making, which has the potential to enrich a field that is still to a large extent dominated by expected utility theory. The model also provides a platform for reconciling two groups of competing dual processing theories (parallel competitive versus default-interventionist theories).
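A minimal numerical illustration of the underlying threshold model (the classical expected-utility treatment threshold, not the full dual-processing extension described above); the benefit and harm values are hypothetical.
```python
# Sketch of the normative (system II) therapeutic threshold from the classical
# threshold model: treat when the probability of disease exceeds
# P_t = harm / (harm + benefit). Numbers are illustrative only.
def treatment_threshold(benefit, harm):
    """benefit: expected utility gain from treating a diseased patient;
    harm: expected utility loss from treating a non-diseased patient."""
    return harm / (harm + benefit)

p_disease = 0.30
p_t = treatment_threshold(benefit=0.8, harm=0.2)   # -> 0.20
print("treat" if p_disease > p_t else "do not treat", f"(threshold {p_t:.2f})")
```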
NASA Astrophysics Data System (ADS)
Brigandì, Giuseppina; Tito Aronica, Giuseppe; Bonaccorso, Brunella; Gueli, Roberto; Basile, Giuseppe
2017-09-01
The main focus of the paper is to present a flood and landslide early warning system, named HEWS (Hydrohazards Early Warning System), specifically developed for the Civil Protection Department of Sicily and based on the combined use of rainfall thresholds, soil moisture modelling and quantitative precipitation forecast (QPF). The warning system refers to the 9 Alert Zones into which Sicily has been divided and is based on a threshold system with three increasing critical levels: ordinary, moderate and high. In this system, for early flood warning, a Soil Moisture Accounting (SMA) model provides daily soil moisture conditions, which allow a specific set of three rainfall thresholds, one for each critical level considered, to be selected for issuing the alert bulletin. Wetness indexes, representative of the soil moisture conditions of a catchment, are calculated using a simple, spatially lumped rainfall-streamflow model based on the SCS-CN method and on the unit hydrograph approach, which requires daily observed and/or predicted rainfall and temperature data as input. For the calibration of this model, daily continuous time series of rainfall, streamflow and air temperature data are used. An event-based lumped rainfall-runoff model has instead been used for the derivation of the rainfall thresholds for each catchment in Sicily with an area larger than 50 km2. In particular, a Kinematic Instantaneous Unit Hydrograph based lumped rainfall-runoff model with the SCS-CN routine for net rainfall was developed for this purpose. For rainfall-induced shallow landslide warning, empirical rainfall thresholds provided by Gariano et al. (2015) have been included in the system. They were derived on an empirical basis from a catalogue of 265 shallow landslides in Sicily in the period 2002-2012. Finally, the Delft-FEWS operational forecasting platform has been applied to link input data, the SMA model and the rainfall threshold models to produce warnings on a daily basis for the entire region.
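As an illustration of the kind of net-rainfall routine referred to above, the sketch below implements the standard SCS-CN relation; the curve number, rainfall depth, and the usual 0.2·S initial abstraction are assumptions of the sketch, not values from the paper.
```python
# Sketch of the standard SCS-CN net-rainfall (runoff) routine:
#   S  = 25400/CN - 254   potential maximum retention (mm)
#   Ia = 0.2 * S          initial abstraction (mm)
#   Q  = (P - Ia)^2 / (P - Ia + S)  for P > Ia, else 0
def scs_cn_runoff(rain_mm, cn):
    s = 25400.0 / cn - 254.0
    ia = 0.2 * s
    if rain_mm <= ia:
        return 0.0
    return (rain_mm - ia) ** 2 / (rain_mm - ia + s)

print(f"net rainfall: {scs_cn_runoff(rain_mm=60.0, cn=75):.1f} mm")   # hypothetical event
```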
Devlin, Michelle; Painting, Suzanne; Best, Mike
2007-01-01
The EU Water Framework Directive recognises that ecological status is supported by the prevailing physico-chemical conditions in each water body. This paper describes an approach to providing guidance on setting thresholds for nutrients taking account of the biological response to nutrient enrichment evident in different types of water. Indices of pressure, state and impact are used to achieve a robust nutrient (nitrogen) threshold by considering each individual index relative to a defined standard, scale or threshold. These indices include winter nitrogen concentrations relative to a predetermined reference value; the potential of the waterbody to support phytoplankton growth (estimated as primary production); and detection of an undesirable disturbance (measured as dissolved oxygen). Proposed reference values are based on a combination of historical records, offshore (limited human influence) nutrient concentrations, literature values and modelled data. Statistical confidence is based on a number of attributes, including distance of confidence limits away from a reference threshold and how well the model is populated with real data. This evidence based approach ensures that nutrient thresholds are based on knowledge of real and measurable biological responses in transitional and coastal waters.
Spectral singularities, threshold gain, and output intensity for a slab laser with mirrors
NASA Astrophysics Data System (ADS)
Doğan, Keremcan; Mostafazadeh, Ali; Sarısaman, Mustafa
2018-05-01
We explore the consequences of the emergence of linear and nonlinear spectral singularities in TE modes of a homogeneous slab of active optical material that is placed between two mirrors. We use the results together with two basic postulates regarding the behavior of laser light emission to derive explicit expressions for the laser threshold condition and output intensity for these modes of the slab and discuss their physical implications. In particular, we reveal the details of the dependence of the threshold gain and output intensity on the position and properties of the mirrors and on the real part of the refractive index of the gain material.
NASA Astrophysics Data System (ADS)
Barbero, Ever J.; Bedard, Antoine Joseph
2018-04-01
Magnetoelectric composites can be produced by embedding magnetostrictive particles in a piezoelectric matrix derived from a piezoelectric powder precursor. Ferrite magnetostrictive particles, if allowed to percolate, can short the potential difference generated in the piezoelectric phase. Modeling a magnetoelectric composite as an aggregate of bi-disperse hard shells, we used molecular dynamics to explore relationships among relative particle size, particle affinity, and electrical percolation, with the goal of maximizing the percolation threshold. It is found that two factors raise the percolation threshold, namely the relative size of magnetostrictive to piezoelectric particles and the affinity between the magnetostrictive and piezoelectric particles.
Artes, Paul H; Iwase, Aiko; Ohno, Yuko; Kitazawa, Yoshiaki; Chauhan, Balwantray C
2002-08-01
To investigate the distributions of threshold estimates with the Swedish Interactive Threshold Algorithms (SITA) Standard, SITA Fast, and the Full Threshold algorithm (Humphrey Field Analyzer; Zeiss-Humphrey Instruments, Dublin, CA) and to compare the pointwise test-retest variability of these strategies. One eye of 49 patients (mean age, 61.6 years; range, 22-81) with glaucoma (Mean Deviation mean, -7.13 dB; range, +1.8 to -23.9 dB) was examined four times with each of the three strategies. The mean and median SITA Standard and SITA Fast threshold estimates were compared with a "best available" estimate of sensitivity (mean results of three Full Threshold tests). Pointwise 90% retest limits (5th and 95th percentiles of retest thresholds) were derived to assess the reproducibility of individual threshold estimates. The differences between the threshold estimates of the SITA and Full Threshold strategies were largest (approximately 3 dB) for midrange sensitivities (approximately 15 dB). The threshold distributions of SITA were considerably different from those of the Full Threshold strategy. The differences remained of similar magnitude when the analysis was repeated on a subset of 20 locations that are examined early during the course of a Full Threshold examination. With sensitivities above 25 dB, both SITA strategies exhibited lower test-retest variability than the Full Threshold strategy. Below 25 dB, the retest intervals of SITA Standard were slightly smaller than those of the Full Threshold strategy, whereas those of SITA Fast were larger. SITA Standard may be superior to the Full Threshold strategy for monitoring patients with visual field loss. The greater test-retest variability of SITA Fast in areas of low sensitivity is likely to offset the benefit of even shorter test durations with this strategy. The sensitivity differences between the SITA and Full Threshold strategies may relate to factors other than reduced fatigue. They are, however, small in comparison to the test-retest variability.
On the renewal risk model under a threshold strategy
NASA Astrophysics Data System (ADS)
Dong, Yinghui; Wang, Guojing; Yuen, Kam C.
2009-08-01
In this paper, we consider the renewal risk process under a threshold dividend payment strategy. For this model, the expected discounted dividend payments and the Gerber-Shiu expected discounted penalty function are investigated. Integral equations, integro-differential equations and some closed form expressions for them are derived. When the claims are exponentially distributed, it is verified that the expected penalty of the deficit at ruin is proportional to the ruin probability.
Pitch perception and production in congenital amusia: Evidence from Cantonese speakers.
Liu, Fang; Chan, Alice H D; Ciocca, Valter; Roquet, Catherine; Peretz, Isabelle; Wong, Patrick C M
2016-07-01
This study investigated pitch perception and production in speech and music in individuals with congenital amusia (a disorder of musical pitch processing) who are native speakers of Cantonese, a tone language with a highly complex tonal system. Sixteen Cantonese-speaking congenital amusics and 16 controls performed a set of lexical tone perception, production, singing, and psychophysical pitch threshold tasks. Their tone production accuracy and singing proficiency were subsequently judged by independent listeners, and subjected to acoustic analyses. Relative to controls, amusics showed impaired discrimination of lexical tones in both speech and non-speech conditions. They also received lower ratings for singing proficiency, producing larger pitch interval deviations and making more pitch interval errors compared to controls. Demonstrating higher pitch direction identification thresholds than controls for both speech syllables and piano tones, amusics nevertheless produced native lexical tones with comparable pitch trajectories and intelligibility as controls. Significant correlations were found between pitch threshold and lexical tone perception, music perception and production, but not between lexical tone perception and production for amusics. These findings provide further evidence that congenital amusia is a domain-general language-independent pitch-processing deficit that is associated with severely impaired music perception and production, mildly impaired speech perception, and largely intact speech production.
Pitch perception and production in congenital amusia: Evidence from Cantonese speakers
Liu, Fang; Chan, Alice H. D.; Ciocca, Valter; Roquet, Catherine; Peretz, Isabelle; Wong, Patrick C. M.
2016-01-01
This study investigated pitch perception and production in speech and music in individuals with congenital amusia (a disorder of musical pitch processing) who are native speakers of Cantonese, a tone language with a highly complex tonal system. Sixteen Cantonese-speaking congenital amusics and 16 controls performed a set of lexical tone perception, production, singing, and psychophysical pitch threshold tasks. Their tone production accuracy and singing proficiency were subsequently judged by independent listeners, and subjected to acoustic analyses. Relative to controls, amusics showed impaired discrimination of lexical tones in both speech and non-speech conditions. They also received lower ratings for singing proficiency, producing larger pitch interval deviations and making more pitch interval errors compared to controls. Demonstrating higher pitch direction identification thresholds than controls for both speech syllables and piano tones, amusics nevertheless produced native lexical tones with comparable pitch trajectories and intelligibility as controls. Significant correlations were found between pitch threshold and lexical tone perception, music perception and production, but not between lexical tone perception and production for amusics. These findings provide further evidence that congenital amusia is a domain-general language-independent pitch-processing deficit that is associated with severely impaired music perception and production, mildly impaired speech perception, and largely intact speech production. PMID:27475178
Final state interactions at the threshold of Higgs boson pair production
NASA Astrophysics Data System (ADS)
Zhang, Zhentao
2015-11-01
We study the effect of final state interactions at the threshold of Higgs boson pair production in the Glashow-Weinberg-Salam model. We consider three major processes of the pair production in the model: lepton pair annihilation, ZZ fusion, and WW fusion. We find that the corrections caused by the effect for these processes are markedly different. According to our results, the effect can cause non-negligible corrections to the cross sections for lepton pair annihilation and small corrections for ZZ fusion, and this effect is negligible for WW fusion.
The effect of radiation on the long term productivity of a plant based CELSS
NASA Technical Reports Server (NTRS)
Thompson, B. G.; Lake, B. H.
1987-01-01
Mutations occur at a higher rate in space than under terrestrial conditions, primarily due to an increase in radiation levels. These mutations may affect the productivity of plants found in a controlled ecological life support system (CELSS). Computer simulations of plants with different ploidies, modes of reproduction, lethality thresholds, viability thresholds and susceptibilities to radiation-induced mutations were performed under space-normal and solar flare conditions. These simulations identified plant characteristics that would enable plants to retain high productivities over time in a CELSS.
NASA Astrophysics Data System (ADS)
Saiz, Gustavo; Goodrick, Iain; Wurster, Christopher; Nelson, Paul N.; Wynn, Jonathan; Bird, Michael
2017-12-01
Understanding the main factors driving fire regimes in grasslands and savannas is critical to better manage their biodiversity and functions. Moreover, improving our knowledge of pyrogenic carbon (PyC) dynamics, including formation, transport and deposition, is fundamental to better understand a significant slow-cycling component of the global carbon cycle, particularly as these ecosystems account for a substantial proportion of the area globally burnt. However, a thorough assessment of past fire regimes in grass-dominated ecosystems is problematic due to challenges in interpreting the charcoal record of sediments. It is therefore critical to adopt appropriate sampling and analytical methods to allow the acquisition of reliable data and information on savanna fire dynamics. This study uses hydrogen pyrolysis (HyPy) to quantify PyC abundance and stable isotope composition (δ13C) in recent sediments across 38 micro-catchments covering a wide range of mixed C3/C4 vegetation in north Queensland, Australia. We exploited the contrasting δ13C values of grasses (i.e. C4; δ13C > -15‰) and woody vegetation (i.e. C3; δ13C < -24‰) to assess the preferential production and transport of grass-derived PyC in savanna ecosystems. Analyses were conducted on bulk and size-fractionated samples to determine the fractions in which PyC preferentially accumulates. Our data show that the δ13C value of PyC in the sediments is decoupled from the δ13C value of total organic carbon, which suggests that a significant component of PyC may be derived from incomplete grass combustion, even when the proportion of C4 grass biomass in the catchment was relatively small. Furthermore, we conducted 16 experimental burns indicating that PyC produced in situ is comminuted to smaller particles, which facilitates the transport of this material and potentially affects its preservation. Savanna fires preferentially burn the grass understory rather than large trees, leading to a bias toward the finer C4-derived PyC in the sedimentary record. This, in turn, provides further evidence for the preferential production and transport of C4-derived PyC in mixed ecosystems where grass and woody vegetation coexist. Moreover, our isotopic approach provides independent validation of findings derived from conventional charcoal counting techniques concerning the appropriateness of adopting a relatively small particle size threshold.
Purification and characterization of recombinant supersweet protein thaumatin II from tomato fruit.
Firsov, Aleksey; Shaloiko, Lyubov; Kozlov, Oleg; Vinokurov, Leonid; Vainstein, Alexander; Dolgov, Sergey
2016-07-01
Thaumatin, a supersweet protein from the African plant katemfe (Thaumatococcus daniellii Benth.), is a promising zero-calorie sweetener for use in the food and pharmaceutical industries. Due to limited natural sources of thaumatin, its production using transgenic plants is an advantageous alternative. We report a simple protocol for purification of recombinant thaumatin II from transgenic tomato. Thaumatin was extracted from ripe tomato fruit in a low-salt buffer and purified on an SP-Sephacryl column. Recombinant thaumatin yield averaged 50 mg/kg fresh fruit. MALDI-MS analysis showed correct processing of thaumatin in tomato plants. The recombinant thaumatin was indistinguishable from the native protein in a taste test. The purified tomato-derived thaumatin had an intrinsic sweetness with a threshold value in taste tests of around 50 nM. These results demonstrate the potential of an expression system based on transgenic tomato plants for production of recombinant thaumatin for the food and pharmaceutical industries.
Sesmero, Juan P
2014-11-01
This study develops a model of crop residue (i.e. stover) supply and derived demand for irrigation water, accounting for non-linear effects of soil organic matter on the soil's water-holding capacity. The model is calibrated for typical conditions in central Nebraska, United States, and identifies potential interactions between water and biofuel policies. The price offered for feedstock by a cost-minimizing plant facing that stover supply response is calculated. Results indicate that as biofuel production volumes increase, soil carbon depletion per unit of biofuel produced decreases. Consumption of groundwater per unit of biofuel produced first decreases and then increases (after a threshold of 363 dam³ of biofuels per year) due to plants' increased reliance on the extensive margin for additional biomass. The analysis reveals a tension between biofuel and water policies. As biofuel production rises, the economic benefits of relaxing water conservation policies (measured by the "shadow price" of water) increase.
NASA Astrophysics Data System (ADS)
Chatterjee, R. S.; Singh, Narendra; Thapa, Shailaja; Sharma, Dravneeta; Kumar, Dheeraj
2017-06-01
The present study proposes land surface temperature (LST) retrieval from satellite-based thermal IR data by a single-channel radiative transfer algorithm using atmospheric correction parameters derived from satellite-based and in-situ data and land surface emissivity (LSE) derived by a hybrid LSE model. For example, atmospheric transmittance (τ) was derived from Terra MODIS spectral radiance in atmospheric window and absorption bands, whereas the atmospheric path radiance and sky radiance were estimated using satellite- and ground-based in-situ solar radiation, geographic location and observation conditions. The hybrid LSE model, which is coupled with ground-based emissivity measurements, is more versatile than previous LSE models and yields improved emissivity values by a knowledge-based approach. It uses NDVI-based and NDVI Threshold method (NDVITHM) based algorithms and field-measured emissivity values. The model is applicable to dense vegetation cover, mixed vegetation cover, and bare earth, including coal-mining-related land surface classes. The study was conducted in a coalfield of India badly affected by coal fire for decades. In a coal fire affected coalfield, LST would provide a precise temperature difference between thermally anomalous coal fire pixels and background pixels to facilitate coal fire detection and monitoring. The derived LST products of the present study were compared with radiant temperature images across some of the prominent coal fire locations in the study area by graphical means and by standard mathematical dispersion coefficients, such as the coefficient of variation, the coefficient of quartile deviation, the coefficient of quartile deviation for the 3rd quartile vs. maximum temperature, and the coefficient of mean deviation (about the median); these indicate a significant increase in the temperature difference among the pixels. The average temperature slope between adjacent pixels, which increases the potential for detecting coal fire pixels against background pixels, is significantly larger in the derived LST products than in the corresponding radiant temperature images.
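For orientation, a common form of the NDVI-threshold emissivity estimate (one ingredient of the hybrid LSE model described above) can be sketched as follows; the soil and vegetation emissivities and NDVI thresholds are typical literature values, not those used in the study.
```python
# Sketch of an NDVI-threshold (NDVI_THM) emissivity estimate: bare soil and
# dense vegetation get fixed emissivities, mixed pixels are weighted by the
# fractional vegetation cover Pv = ((NDVI - NDVI_s)/(NDVI_v - NDVI_s))^2.
import numpy as np

def ndvi_thm_emissivity(ndvi, ndvi_s=0.2, ndvi_v=0.5, eps_s=0.96, eps_v=0.99):
    ndvi = np.asarray(ndvi, dtype=float)
    pv = np.clip((ndvi - ndvi_s) / (ndvi_v - ndvi_s), 0.0, 1.0) ** 2
    eps = eps_v * pv + eps_s * (1.0 - pv)
    eps[ndvi < ndvi_s] = eps_s        # bare soil
    eps[ndvi > ndvi_v] = eps_v        # dense vegetation
    return eps

print(ndvi_thm_emissivity([0.1, 0.35, 0.7]))   # hypothetical pixels
```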
NASA Technical Reports Server (NTRS)
Darzi, Michael; Hooker, Stanford B. (Editor); Firestone, Elaine R. (Editor)
1992-01-01
Methods for detecting and screening cloud contamination from satellite-derived visible and infrared data are reviewed in this document. The methods are applicable to past, present, and future polar orbiting satellite radiometers. Such instruments include the Coastal Zone Color Scanner (CZCS), operational from 1978 through 1986; the Advanced Very High Resolution Radiometer (AVHRR); the Sea-viewing Wide Field-of-view Sensor (SeaWiFS), scheduled for launch in August 1993; and the Moderate Resolution Imaging Spectrometer (MODIS). Constant threshold methods are the least demanding computationally and often provide adequate results. An improvement to these methods is to determine the thresholds dynamically by adjusting them according to the areal and temporal distributions of the surrounding pixels. Spatial coherence methods set thresholds based on the expected spatial variability of the data. Other statistically derived methods and various combinations of basic methods are also reviewed. The complexity of the methods is ultimately limited by the computing resources. Finally, some criteria for evaluating cloud screening methods are discussed.
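A minimal sketch of the dynamic-threshold idea described above (a threshold derived from the statistics of surrounding pixels); the window size, percentile, and offset are illustrative assumptions, not values from any of the reviewed algorithms.
```python
# Sketch of a dynamic cloud threshold: instead of a fixed brightness-temperature
# cutoff, each pixel's threshold is derived from its neighbourhood (a warm
# percentile minus an offset). Parameters are illustrative only.
import numpy as np
from scipy.ndimage import percentile_filter

def dynamic_cloud_mask(bt, window=31, percentile=90, offset_k=4.0):
    """Flag pixels colder than the local warm-percentile minus offset_k kelvin."""
    local_warm = percentile_filter(bt, percentile, size=window)
    return bt < (local_warm - offset_k)

bt = 290.0 + np.random.default_rng(0).normal(0, 1, (100, 100))
bt[40:60, 40:60] -= 20.0              # synthetic cold cloud
mask = dynamic_cloud_mask(bt)
print("cloudy fraction:", mask.mean())
```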
L to H mode transition: Parametric dependencies of the temperature threshold
Bourdelle, C.; Chone, L.; Fedorczak, N.; ...
2015-06-15
The L to H mode transition occurs at a critical power which depends on various parameters, such as the magnetic field, the density, etc. Experimental evidence on various tokamaks (JET, ASDEX-Upgrade, DIII-D, Alcator C-Mod) points towards the existence of a critical temperature characterizing the transition. This criterion for the L-H transition is local and is therefore easier to compare to theoretical approaches. In order to shed light on the mechanisms of the transition, simple theoretical ideas are used to derive a temperature threshold (Tth). They are based on the stabilization of the underlying turbulence by a mean radial electric field shear. The nature of the turbulence varies as the collisionality decreases, from resistive ballooning modes to ion temperature gradient and trapped electron modes. The obtained parametric dependencies of the derived Tth are tested versus magnetic field, density, and effective charge. Furthermore, various robust experimental observations are reproduced; in particular, Tth increases with magnetic field B and increases with density below the density roll-over observed on the power threshold.
78 FR 2675 - Revised Jurisdictional Thresholds of the Clayton Act
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-14
... that no corporation is covered if the competitive sales of either corporation are less than $1,000,000... change in gross national product. The new thresholds, which take effect immediately, are $28,883,000 for...
Unipolar Terminal-Attractor Based Neural Associative Memory with Adaptive Threshold
NASA Technical Reports Server (NTRS)
Liu, Hua-Kuang (Inventor); Barhen, Jacob (Inventor); Farhat, Nabil H. (Inventor); Wu, Chwan-Hwa (Inventor)
1996-01-01
A unipolar terminal-attractor based neural associative memory (TABAM) system with adaptive threshold for perfect convergence is presented. By adaptively setting the threshold values for the dynamic iteration for the unipolar binary neuron states with terminal-attractors for the purpose of reducing the spurious states in a Hopfield neural network for associative memory and using the inner-product approach, perfect convergence and correct retrieval is achieved. Simulation is completed with a small number of stored states (M) and a small number of neurons (N) but a large M/N ratio. An experiment with optical exclusive-OR logic operation using LCTV SLMs shows the feasibility of optoelectronic implementation of the models. A complete inner-product TABAM is implemented using a PC for calculation of adaptive threshold values to achieve a unipolar TABAM (UIT) in the case where there is no crosstalk, and a crosstalk model (CRIT) in the case where crosstalk corrupts the desired state.
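The sketch below illustrates only the general flavour of inner-product associative recall with unipolar (0/1) states and an adaptively chosen threshold; it omits the terminal-attractor dynamics and is not the TABAM formulation itself.
```python
# Sketch: inner-product associative recall with unipolar (0/1) states and a
# simple per-iteration adaptive threshold (midpoint of the internal field).
# This is a toy illustration, not the TABAM equations.
import numpy as np

def recall(stored, probe, iters=10):
    stored = np.asarray(stored, dtype=float)        # M x N matrix of 0/1 patterns
    state = np.asarray(probe, dtype=float).copy()
    for _ in range(iters):
        overlaps = stored @ state                   # inner products with each stored pattern
        field = overlaps @ stored                   # weighted sum of stored patterns
        theta = 0.5 * (field.max() + field.min())   # adaptively chosen threshold
        state = (field > theta).astype(float)
    return state

patterns = np.array([[1, 0, 1, 0, 1, 1, 0, 0],
                     [0, 1, 1, 1, 0, 0, 1, 0]])
noisy = np.array([1, 0, 1, 0, 0, 1, 0, 0])          # corrupted version of pattern 0
print(recall(patterns, noisy))                      # recovers pattern 0
```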
Unipolar terminal-attractor based neural associative memory with adaptive threshold
NASA Technical Reports Server (NTRS)
Liu, Hua-Kuang (Inventor); Barhen, Jacob (Inventor); Farhat, Nabil H. (Inventor); Wu, Chwan-Hwa (Inventor)
1993-01-01
A unipolar terminal-attractor based neural associative memory (TABAM) system with adaptive threshold for perfect convergence is presented. By adaptively setting the threshold values for the dynamic iteration for the unipolar binary neuron states with terminal-attractors for the purpose of reducing the spurious states in a Hopfield neural network for associative memory and using the inner product approach, perfect convergence and correct retrieval is achieved. Simulation is completed with a small number of stored states (M) and a small number of neurons (N) but a large M/N ratio. An experiment with optical exclusive-OR logic operation using LCTV SLMs shows the feasibility of optoelectronic implementation of the models. A complete inner-product TABAM is implemented using a PC for calculation of adaptive threshold values to achieve a unipolar TABAM (UIT) in the case where there is no crosstalk, and a crosstalk model (CRIT) in the case where crosstalk corrupts the desired state.
Potential of solar-simulator-pumped alexandrite lasers
NASA Technical Reports Server (NTRS)
Deyoung, Russell J.
1990-01-01
An attempt was made to pump an alexandrite laser rod using a Tamarak solar simulator and also a tungsten-halogen lamp. A very low-loss optical laser cavity was used to minimize the threshold pumping-power requirement. Lasing was not achieved. The laser threshold optical-power requirement was calculated to be approximately 626 W/sq cm for a gain length of 7.6 cm, whereas the Tamarak simulator produces 1150 W/sq cm over a gain length of 3.3 cm, which is less than the 1442 W/sq cm required to reach laser threshold. The rod was optically pulsed with 200 msec pulses, which allowed the alexandrite rod to operate at near room temperature. The optical intensity-gain-length product to achieve laser threshold should be approximately 35,244 solar constants-cm. In the present setup, this product was 28,111 solar constants-cm.
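The quoted numbers can be cross-checked directly; the sketch below reproduces the intensity-gain-length products, assuming a solar constant of about 0.135 W/sq cm (an assumption of the sketch).
```python
# Cross-check of the intensity-gain-length products quoted in the abstract
# (solar constant taken as ~0.135 W/cm^2, an assumption of this sketch).
solar_const = 0.135                                 # W/cm^2
threshold_intensity, rod_gain_len = 626.0, 7.6      # W/cm^2, cm
simulator_intensity, sim_gain_len = 1150.0, 3.3     # W/cm^2, cm

required = threshold_intensity * rod_gain_len / solar_const       # close to the quoted 35,244 solar-constant-cm
available = simulator_intensity * sim_gain_len / solar_const      # ~28,111 solar-constant-cm
needed_over_sim_len = threshold_intensity * rod_gain_len / sim_gain_len   # ~1442 W/cm^2
print(round(required), round(available), round(needed_over_sim_len))
```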
Fisher classifier and its probability of error estimation
NASA Technical Reports Server (NTRS)
Chittineni, C. B.
1979-01-01
Computationally efficient expressions are derived for estimating the probability of error using the leave-one-out method. The optimal threshold for the classification of patterns projected onto Fisher's direction is derived. A simple generalization of the Fisher classifier to multiple classes is presented. Computational expressions are developed for estimating the probability of error of the multiclass Fisher classifier.
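A brute-force sketch of the ingredients discussed above (the Fisher direction, a projection threshold, and a leave-one-out error estimate) on synthetic data; it does not reproduce the paper's computationally efficient expressions.
```python
# Sketch: two-class Fisher discriminant, projection threshold at the midpoint
# of the projected class means, and a brute-force leave-one-out error estimate.
import numpy as np

def fisher_direction(x0, x1):
    m0, m1 = x0.mean(axis=0), x1.mean(axis=0)
    sw = np.cov(x0, rowvar=False) * (len(x0) - 1) + np.cov(x1, rowvar=False) * (len(x1) - 1)
    return np.linalg.solve(sw, m1 - m0)             # within-class scatter inverse times mean difference

def loo_error(x, y):
    errors = 0
    for i in range(len(x)):
        mask = np.arange(len(x)) != i               # leave sample i out
        xt, yt = x[mask], y[mask]
        w = fisher_direction(xt[yt == 0], xt[yt == 1])
        thr = 0.5 * (xt[yt == 0] @ w).mean() + 0.5 * (xt[yt == 1] @ w).mean()
        pred = int(x[i] @ w > thr)
        errors += pred != y[i]
    return errors / len(x)

rng = np.random.default_rng(0)
x = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y = np.repeat([0, 1], 50)
print("leave-one-out error estimate:", loo_error(x, y))
```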
NASA Astrophysics Data System (ADS)
Prat, O. P.; Nelson, B. R.
2013-12-01
We use a suite of quantitative precipitation estimates (QPEs) derived from satellite, radar, surface observations, and models to derive precipitation characteristics over CONUS for the period 2002-2012. This comparison effort includes the satellite multi-sensor datasets TMPA 3B42, CMORPH, and PERSIANN. The satellite-based QPEs are compared over the concurrent period with the NCEP Stage IV product, a near-real-time product providing precipitation data at the hourly temporal scale on a nominal 4-km grid. In addition, the remotely sensed precipitation datasets are compared with surface observations from the Global Historical Climatology Network (GHCN-Daily) and from PRISM (Parameter-elevation Regressions on Independent Slopes Model), which provides gridded precipitation estimates that are used as a baseline for the multi-sensor QPE comparison. The comparisons are performed at the annual, seasonal, monthly, and daily scales with a focus on selected river basins (Southeastern US, Pacific Northwest, Great Plains). While unconditional annual rain rates show satisfying agreement among all products, the results suggest that satellite QPE datasets exhibit important biases, in particular at higher rain rates (≥4 mm/day). Conversely, on seasonal scales differences between remotely sensed data and ground surface observations can be greater than 50% and up to 90% for low daily accumulation (≤1 mm/day), such as in the Western US (summer) and Central US (winter). The conditional analysis performed using different daily rainfall accumulation thresholds (from low rainfall intensity to intense precipitation) shows that while intense events measured at the ground are infrequent (around 2% for daily accumulation above 2 inches/day), remotely sensed products displayed differences of 20-50% and up to 90-100%. A discussion of the impact of differing spatial and temporal resolutions on the datasets' ability to capture extreme precipitation events is also provided. Furthermore, this work is part of a broader effort to evaluate long-term multi-sensor QPEs with a view to developing Climate Data Records (CDRs) for precipitation.
Roach, Shane M.; Song, Dong; Berger, Theodore W.
2012-01-01
Activity-dependent variation of neuronal thresholds for action potential (AP) generation is one of the key determinants of spike-train temporal-pattern transformations from presynaptic to postsynaptic spike trains. In this study, we model the nonlinear dynamics of the threshold variation during synaptically driven broadband intracellular activity. First, membrane potentials of single CA1 pyramidal cells were recorded under physiologically plausible broadband stimulation conditions. Second, a method was developed to measure AP thresholds from the continuous recordings of membrane potentials. It involves measuring the turning points of APs by analyzing the third-order derivatives of the membrane potentials. Four stimulation paradigms with different temporal patterns were applied to validate this method by comparing the measured AP turning points and the actual AP thresholds estimated with varying stimulation intensities. Results show that the AP turning points provide consistent measurement of the AP thresholds, except for a constant offset. It indicates that 1) the variation of AP turning points represents the nonlinearities of threshold dynamics; and 2) an optimization of the constant offset is required to achieve accurate spike prediction. Third, a nonlinear dynamical third-order Volterra model was built to describe the relations between the threshold dynamics and the AP activities. Results show that the model can predict threshold accurately based on the preceding APs. Finally, the dynamic threshold model was integrated into a previously developed single neuron model and resulted in a 33% improvement in spike prediction. PMID:22156947
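A minimal sketch of the turning-point idea described above (peaks of the third-order derivative of the membrane potential) on a synthetic trace; all parameters and the trace itself are illustrative, not the recorded CA1 data.
```python
# Sketch: estimate AP turning points as peaks in the third-order derivative of
# the membrane potential. The synthetic trace (Gaussian bumps as crude spike
# stand-ins) and all parameter choices are illustrative only.
import numpy as np
from scipy.signal import find_peaks

def ap_turning_points(vm, dt, min_d3=None):
    d3 = np.gradient(np.gradient(np.gradient(vm, dt), dt), dt)   # third derivative
    if min_d3 is None:
        min_d3 = 10.0 * np.std(d3)                 # crude height threshold
    peaks, _ = find_peaks(d3, height=min_d3)
    return peaks, vm[peaks]                        # sample indices and voltages there

dt = 1e-4                                          # 0.1 ms sampling
t = np.arange(0, 0.2, dt)
vm = -65.0 + 0.02 * np.random.default_rng(0).normal(size=t.size)
for t0 in (0.05, 0.12):                            # two synthetic "spikes"
    vm += 80.0 * np.exp(-((t - t0) / 0.001) ** 2)
idx, v_at_turn = ap_turning_points(vm, dt)
print(idx, np.round(v_at_turn, 1))
```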
How to Assess the Value of Medicines?
Simoens, Steven
2010-01-01
This study aims to discuss approaches to assessing the value of medicines. Economic evaluation assesses value by means of the incremental cost-effectiveness ratio (ICER). Health is maximized by selecting medicines with increasing ICERs until the budget is exhausted. The budget size determines the value of the threshold ICER and vice versa. Alternatively, the threshold value can be inferred from pricing/reimbursement decisions, although such values vary between countries. Threshold values derived from the value-of-life literature depend on the technique used. The World Health Organization has proposed a threshold value tied to the national GDP. As decision makers may wish to consider multiple criteria, variable threshold values and weighted ICERs have been suggested. Other approaches (i.e., replacement approach, program budgeting and marginal analysis) have focused on improving resource allocation, rather than maximizing health subject to a budget constraint. Alternatively, the generalized optimization framework and multi-criteria decision analysis make it possible to consider other criteria in addition to value. PMID:21607066
Network-level reproduction number and extinction threshold for vector-borne diseases.
Xue, Ling; Scoglio, Caterina
2015-06-01
The basic reproduction number of deterministic models is an essential quantity to predict whether an epidemic will spread or not. Thresholds for disease extinction contribute crucial knowledge of disease control, elimination, and mitigation of infectious diseases. Relationships between the basic reproduction numbers of two deterministic network-based ordinary differential equation vector-host models and the extinction thresholds of the corresponding stochastic continuous-time Markov chain models are derived under some assumptions. Numerical simulation results for malaria and Rift Valley fever transmission on heterogeneous networks agree with the analytical results even without those assumptions, reinforcing that the relationships may hold in general and posing the mathematical problem of proving their existence. Moreover, numerical simulations show that the basic reproduction number does not monotonically increase or decrease with the extinction threshold. Consistent trends of extinction probability observed through numerical simulations provide novel insights into mitigation strategies to increase the disease extinction probability. Research findings may improve understanding of thresholds for disease persistence in order to control vector-borne diseases.
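For readers unfamiliar with how a basic reproduction number is obtained from such models, the sketch below computes it as the spectral radius of a next-generation matrix F V^-1, the standard construction for deterministic compartmental models. The two-compartment matrices are purely illustrative placeholders and do not reproduce the paper's network vector-host structure.

```python
import numpy as np

def basic_reproduction_number(F, V):
    """Spectral radius of the next-generation matrix F V^-1."""
    ngm = F @ np.linalg.inv(V)
    return float(max(abs(np.linalg.eigvals(ngm))))

# Toy host-vector example: F holds new-infection rates, V holds transition/removal rates
F = np.array([[0.0, 0.6],     # vector -> host transmission
              [0.4, 0.0]])    # host -> vector transmission
V = np.diag([0.2, 0.5])       # host recovery rate, vector mortality rate
print(basic_reproduction_number(F, V))   # R0 of the toy system
```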
Loziuk, Philip L.; Sederoff, Ronald R.; Chiang, Vincent L.; Muddiman, David C.
2014-01-01
Quantitative mass spectrometry has become central to the field of proteomics and metabolomics. Selected reaction monitoring is a widely used method for the absolute quantification of proteins and metabolites. This method renders high specificity using several product ions measured simultaneously. With growing interest in quantification of molecular species in complex biological samples, confident identification and quantitation has been of particular concern. A method to confirm purity or contamination of product ion spectra has become necessary for achieving accurate and precise quantification. Ion abundance ratio assessments were introduced to alleviate some of these issues. Ion abundance ratios are based on the consistent relative abundance (RA) of specific product ions with respect to the total abundance of all product ions. To date, no standardized method of implementing ion abundance ratios has been established. Thresholds by which product ion contamination is confirmed vary widely and are often arbitrary. This study sought to establish criteria by which the relative abundance of product ions can be evaluated in an absolute quantification experiment. These findings suggest that evaluation of the absolute ion abundance for any given transition is necessary in order to effectively implement RA thresholds. Overall, the variation of the RA value was observed to be relatively constant beyond an absolute threshold ion abundance. Finally, these RA values were observed to fluctuate significantly over a 3 year period, suggesting that these values should be assessed as close as possible to the time at which data is collected for quantification. PMID:25154770
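A minimal sketch of the relative-abundance check described above: compute each product ion's RA with respect to the total product-ion abundance and flag transitions whose RA deviates from a reference value by more than a tolerance. The intensities and the 0.15 tolerance are illustrative assumptions, not the criteria established in the study.

```python
import numpy as np

def relative_abundances(product_ion_intensities):
    """RA of each product ion with respect to the summed product-ion abundance."""
    intensities = np.asarray(product_ion_intensities, dtype=float)
    return intensities / intensities.sum()

def flag_contamination(observed_ra, reference_ra, tolerance=0.15):
    """True for transitions whose RA deviates from the reference by more than the tolerance."""
    return np.abs(np.asarray(observed_ra) - np.asarray(reference_ra)) > tolerance

ra_reference = relative_abundances([5.0e5, 3.0e5, 2.0e5])   # from a purified standard
ra_observed = relative_abundances([2.0e5, 3.5e5, 4.5e5])    # from a complex sample
print(flag_contamination(ra_observed, ra_reference))
```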
NASA Astrophysics Data System (ADS)
Zyrianov, M.; Droz-Georget, Th.; Sanov, A.; Reisler, H.
1996-11-01
The photoinitiated unimolecular decomposition of jet-cooled HNCO has been studied following S1(1A″)←S0(1A') excitation near the thresholds of the spin-allowed dissociation channels: (1) H(2S)+NCO(X2Π) and (2) NH(a1Δ)+CO(X1Σ+), which are separated by 4470 cm-1. Photofragment yield spectra of NCO(X2Π) and NH (a1Δ) were obtained in selected regions in the 260-220 nm photolysis range. The NCO(X2Π) yield rises abruptly at 38 380 cm-1 and the spectrum exhibits structures as narrow as 0.8 cm-1 near the threshold. The linewidths increase only slowly with photolysis energy. The jet-cooled absorption spectrum near the channel (1) threshold [D0(H+NCO)] was obtained using two-photon excitation via the S1 state, terminating in a fluorescent product. The absorption spectrum is similar to the NCO yield spectrum, and its intensity does not diminish noticeably above D0(H+NCO), indicating that dissociation near threshold is slow. The NCO product near threshold is cold, as is typical of a barrierless reaction. NH (a1Δ) products appear first at 42 840 cm-1, but their yield is initially very small, as evidenced also by the insignificant decrease in the NCO yield in the threshold region of channel (2). The NH (a1Δ) yield increases faster at higher photolysis energies and the linewidths increase as well. At the channel (2) threshold, the NH (a1Δ) product is generated only in the lowest rotational level, J=2, and rotational excitation increases with photolysis energy. We propose that in the range 260-230 nm, HNCO (S1) undergoes radiationless decay terminating in S0/T1 followed by unimolecular reaction. Decompositions via channels (1) and (2) proceed without significant exit channel barriers. At wavelengths shorter than 230 nm, the participation of an additional, direct pathway cannot be ruled out. The jet-cooled photofragment yield spectra allow the determination, with good accuracy, of thermochemical values relevant to HNCO decomposition. The following heats of formation are recommended: ΔH0f(HNCO)=-27.8±0.4 kcal/mol, and ΔH0f(NCO)=30.3±0.4 kcal/mol. These results are in excellent agreement with recent determinations using different experimental techniques.
Near-threshold NN→dπ reaction in chiral perturbation theory
NASA Astrophysics Data System (ADS)
Gårdestig, A.; Phillips, D. R.; Elster, Ch.
2006-02-01
The near-threshold np→dπ0 cross section is calculated in chiral perturbation theory to next-to-leading order in the expansion parameter √(Mmπ)/Λχ. At this order irreducible pion loops contribute to the relevant pion-production operator. Although their contribution to this operator is finite, considering initial- and final-state distortions produces a linear divergence in its matrix elements. We renormalize this divergence by introducing a counterterm, whose value we choose to reproduce the threshold np→dπ0 cross section measured at TRIUMF. The energy dependence of this cross section is then predicted in chiral perturbation theory, being determined by the production of p-wave pions, and also by energy dependence in the amplitude for the production of s-wave pions. With an appropriate choice of the counterterm, the chiral prediction for this energy dependence converges well.
Staley, Dennis; Kean, Jason W.; Cannon, Susan H.; Schmidt, Kevin M.; Laber, Jayme L.
2012-01-01
Rainfall intensity–duration (ID) thresholds are commonly used to predict the temporal occurrence of debris flows and shallow landslides. Typically, thresholds are subjectively defined as the upper limit of peak rainstorm intensities that do not produce debris flows and landslides, or as the lower limit of peak rainstorm intensities that initiate debris flows and landslides. In addition, peak rainstorm intensities are often used to define thresholds, as data regarding the precise timing of debris flows and associated rainfall intensities are usually not available, and rainfall characteristics are often estimated from distant gauging locations. Here, we attempt to improve the performance of existing threshold-based predictions of post-fire debris-flow occurrence by utilizing data on the precise timing of debris flows relative to rainfall intensity, and develop an objective method to define the threshold intensities. We objectively defined the thresholds by maximizing the number of correct predictions of debris flow occurrence while minimizing the rate of both Type I (false positive) and Type II (false negative) errors. We identified that (1) there were statistically significant differences between peak storm and triggering intensities, (2) the objectively defined threshold model presents a better balance between predictive success, false alarms and failed alarms than previous subjectively defined thresholds, (3) thresholds based on measurements of rainfall intensity over shorter duration (≤60 min) are better predictors of post-fire debris-flow initiation than longer duration thresholds, and (4) the objectively defined thresholds were exceeded prior to the recorded time of debris flow at frequencies similar to or better than subjective thresholds. Our findings highlight the need to better constrain the timing and processes of initiation of landslides and debris flows for future threshold studies. In addition, the methods used to define rainfall thresholds in this study represent a computationally simple means of deriving critical values for other studies of nonlinear phenomena characterized by thresholds.
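A hedged sketch of objective threshold selection in this spirit: scan candidate intensity thresholds and keep the one that maximizes a skill score penalizing both false positives (Type I) and false negatives (Type II). The specific score used here (the true skill statistic) and the synthetic rainfall data are assumptions, not the authors' exact objective function or observations.

```python
import numpy as np

def best_threshold(intensity, debris_flow, candidates):
    """Return the candidate intensity threshold maximizing hit rate minus false-alarm rate."""
    best_thr, best_score = None, -np.inf
    for thr in candidates:
        predicted = intensity >= thr
        tp = np.sum(predicted & debris_flow)
        fn = np.sum(~predicted & debris_flow)
        fp = np.sum(predicted & ~debris_flow)
        tn = np.sum(~predicted & ~debris_flow)
        score = tp / (tp + fn) - fp / (fp + tn)   # true skill statistic
        if score > best_score:
            best_thr, best_score = thr, score
    return best_thr, best_score

# Synthetic 15-min rainfall intensities (mm/h) and debris-flow occurrence
rng = np.random.default_rng(1)
i15 = rng.gamma(2.0, 10.0, size=500)
flows = (i15 + rng.normal(0.0, 8.0, size=500)) > 30.0
print(best_threshold(i15, flows, np.arange(5.0, 60.0, 1.0)))
```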
Very high laser-damage threshold of polymer-derived Si(B)CN-carbon nanotube composite coatings.
Bhandavat, R; Feldman, A; Cromer, C; Lehman, J; Singh, G
2013-04-10
We study the laser irradiance behavior and resulting structural evolution of polymer-derived silicon-boron-carbonitride (Si(B)CN) functionalized multiwall carbon nanotube (MWCNT) composite spray coatings on a copper substrate. We report a damage threshold value of 15 kW cm-2 and an optical absorbance of 0.97 after irradiation. This is an order of magnitude improvement over MWCNT (1.4 kW cm-2, 0.76), SWCNT (0.8 kW cm-2, 0.65) and carbon paint (0.1 kW cm-2, 0.87) coatings previously tested at 10.6 μm (2.5 kW CO2 laser) exposure. Electron microscopy, Raman spectroscopy, and X-ray photoelectron spectroscopy suggest partial oxidation of Si(B)CN forming a stable protective SiO2 phase upon irradiation.
Higher-than-predicted saltation threshold wind speeds on Titan.
Burr, Devon M; Bridges, Nathan T; Marshall, John R; Smith, James K; White, Bruce R; Emery, Joshua P
2015-01-01
Titan, the largest satellite of Saturn, exhibits extensive aeolian, that is, wind-formed, dunes, features previously identified exclusively on Earth, Mars and Venus. Wind tunnel data collected under ambient and planetary-analogue conditions inform our models of aeolian processes on the terrestrial planets. However, the accuracy of these widely used formulations in predicting the threshold wind speeds required to move sand by saltation, or by short bounces, has not been tested under conditions relevant for non-terrestrial planets. Here we derive saltation threshold wind speeds under the thick-atmosphere, low-gravity and low-sediment-density conditions on Titan, using a high-pressure wind tunnel refurbished to simulate the appropriate kinematic viscosity for the near-surface atmosphere of Titan. The experimentally derived saltation threshold wind speeds are higher than those predicted by models based on terrestrial-analogue experiments, indicating the limitations of these models for such extreme conditions. The models can be reconciled with the experimental results by inclusion of the extremely low ratio of particle density to fluid density on Titan. Whereas the density ratio term enables accurate modelling of aeolian entrainment in thick atmospheres, such as those inferred for some extrasolar planets, our results also indicate that for environments with high density ratios, such as in jets on icy satellites or in tenuous atmospheres or exospheres, the correction for low-density-ratio conditions is not required.
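For orientation, a Bagnold-style fluid threshold of the form u*_t = A * sqrt(((rho_p - rho_f)/rho_f) * g * d) can be evaluated for Titan-like conditions, as in the sketch below. The coefficient A, the parameter values, and the absence of the density-ratio correction discussed in the abstract are all illustrative assumptions rather than the authors' fitted model.

```python
import math

def threshold_friction_speed(rho_particle, rho_fluid, g, diameter, A=0.1):
    """Bagnold-style fluid threshold: u*_t = A * sqrt(((rho_p - rho_f)/rho_f) * g * d)."""
    return A * math.sqrt((rho_particle - rho_fluid) / rho_fluid * g * diameter)

# Illustrative Titan-like values: organic sand ~1000 kg/m3, near-surface N2 ~5.3 kg/m3,
# g ~1.35 m/s2, 300-micron grains
print(threshold_friction_speed(1000.0, 5.3, 1.35, 300e-6))   # threshold friction speed, m/s
```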
Threshold regression to accommodate a censored covariate.
Qian, Jing; Chiou, Sy Han; Maye, Jacqueline E; Atem, Folefac; Johnson, Keith A; Betensky, Rebecca A
2018-06-22
In several common study designs, regression modeling is complicated by the presence of censored covariates. Examples of such covariates include maternal age of onset of dementia that may be right censored in an Alzheimer's amyloid imaging study of healthy subjects, metabolite measurements that are subject to limit of detection censoring in a case-control study of cardiovascular disease, and progressive biomarkers whose baseline values are of interest, but are measured post-baseline in longitudinal neuropsychological studies of Alzheimer's disease. We propose threshold regression approaches for linear regression models with a covariate that is subject to random censoring. Threshold regression methods allow for immediate testing of the significance of the effect of a censored covariate. In addition, they provide for unbiased estimation of the regression coefficient of the censored covariate. We derive the asymptotic properties of the resulting estimators under mild regularity conditions. Simulations demonstrate that the proposed estimators have good finite-sample performance, and often offer improved efficiency over existing methods. We also derive a principled method for selection of the threshold. We illustrate the approach in application to an Alzheimer's disease study that investigated brain amyloid levels in older individuals, as measured through positron emission tomography scans, as a function of maternal age of dementia onset, with adjustment for other covariates. We have developed an R package, censCov, for implementation of our method, available at CRAN. © 2018, The International Biometric Society.
High speed point derivative microseismic detector
DOE Office of Scientific and Technical Information (OSTI.GOV)
Uhl, J.E.; Warpinski, N.R.; Whetten, E.B.
A high speed microseismic event detector constructed in accordance with the present invention uses a point derivative comb to quickly and accurately detect microseismic events. Compressional and shear waves impinging upon microseismic receiver stations disposed to collect waves are converted into digital data and analyzed using a point derivative comb including assurance of quiet periods prior to declaration of microseismic events. If a sufficient number of quiet periods have passed, the square of a two point derivative of the incoming digital signal is compared to a trip level threshold exceeding the determined noise level to declare a valid trial event. The squaring of the derivative emphasizes the differences between noise and signal, and the valid event is preferably declared when the trip threshold has been exceeded over a temporal comb width to realize a comb over a given time period. Once a trial event has been declared, the event is verified through a spatial comb, which applies the temporal event comb to additional stations. The detector according to the present invention quickly and accurately detects initial compressional waves indicative of a microseismic event which typically exceed the ambient cultural noise level by a small amount, and distinguishes the waves from subsequent larger amplitude shear waves. 9 figs.
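A schematic sketch of the detection logic described in the record, under stated assumptions: square a two-point derivative of the digitized signal, require a run of quiet samples, and declare a trial event when the squared derivative exceeds a trip level at every tap of a temporal comb. Window lengths, tap spacing, the trip level and the synthetic arrival are illustrative, and the spatial-comb verification across stations is omitted.

```python
import numpy as np

def detect_events(signal, trip_level, quiet_len=200, comb_taps=5, tap_spacing=4):
    """Return sample indices at which a trial microseismic event is declared."""
    d2 = np.diff(np.asarray(signal, dtype=float)) ** 2   # squared two-point derivative
    span = comb_taps * tap_spacing
    events = []
    i = quiet_len
    while i < len(d2) - span:
        quiet = np.all(d2[i - quiet_len:i] < trip_level)            # preceding quiet period
        comb_hit = np.all(d2[i:i + span:tap_spacing] > trip_level)  # exceedance across the comb
        if quiet and comb_hit:
            events.append(i)
            i += span      # skip past the declared event
        else:
            i += 1
    return events

# Synthetic demo: low-level noise with one high-frequency arrival starting at sample 1200
rng = np.random.default_rng(2)
x = rng.normal(0.0, 0.01, 2000)
x[1200:1260] += 0.5 * (-1.0) ** np.arange(60)
print(detect_events(x, trip_level=0.01))   # expected to report the onset near sample 1199
```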
Eigenfunctions and Eigenvalues for a Scalar Riemann-Hilbert Problem Associated to Inverse Scattering
NASA Astrophysics Data System (ADS)
Pelinovsky, Dmitry E.; Sulem, Catherine
A complete set of eigenfunctions is introduced within the Riemann-Hilbert formalism for spectral problems associated to some solvable nonlinear evolution equations. In particular, we consider the time-independent and time-dependent Schrödinger problems which are related to the KdV and KPI equations possessing solitons and lumps, respectively. Non-standard scalar products, orthogonality and completeness relations are derived for these problems. The complete set of eigenfunctions is used for perturbation theory and bifurcation analysis of eigenvalues supported by the potentials under perturbations. We classify two different types of bifurcations of new eigenvalues and analyze their characteristic features. One type corresponds to thresholdless generation of solitons in the KdV equation, while the other predicts a threshold for generation of lumps in the KPI equation.
Machine learning based cloud mask algorithm driven by radiative transfer modeling
NASA Astrophysics Data System (ADS)
Chen, N.; Li, W.; Tanikawa, T.; Hori, M.; Shimada, R.; Stamnes, K. H.
2017-12-01
Cloud detection is a critically important first step required to derive many satellite data products. Traditional threshold-based cloud mask algorithms require a complicated design process and fine tuning for each sensor, and have difficulty over snow/ice covered areas. With the advance of computational power and machine learning techniques, we have developed a new algorithm based on a neural network classifier driven by extensive radiative transfer modeling. Statistical validation results obtained by using collocated CALIOP and MODIS data show that its performance is consistent over different ecosystems and significantly better than the MODIS Cloud Mask (MOD35 C6) during the winter seasons over mid-latitude snow covered areas. Simulations using a reduced number of satellite channels also show satisfactory results, indicating its flexibility to be configured for different sensors.
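The general workflow can be sketched as follows: train a small neural-network classifier on labelled clear/cloudy spectral samples and apply it pixel by pixel. The synthetic training data stand in for the radiative transfer simulations, and the channel count, network size and scikit-learn classifier are assumptions, not the authors' configuration.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Stand-in "radiative transfer" training set: clear and cloudy spectra in 6 channels
rng = np.random.default_rng(42)
n_samples, n_channels = 5000, 6
clear = rng.normal(loc=0.2, scale=0.05, size=(n_samples, n_channels))
cloudy = rng.normal(loc=0.6, scale=0.15, size=(n_samples, n_channels))
X = np.vstack([clear, cloudy])
y = np.r_[np.zeros(n_samples), np.ones(n_samples)]   # 0 = clear, 1 = cloudy

clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=300, random_state=0)
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))

# Applying the trained classifier to a "granule" of pixels
pixels = rng.normal(loc=0.4, scale=0.2, size=(10, n_channels))
print("cloud mask:", clf.predict(pixels).astype(int))
```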
Thouveny, Nicolas; Bourlès, Didier L.; Valet, Jean‐Pierre; Bassinot, Franck; Ménabréaz, Lucie; Guillou, Valéry; Choy, Sandrine; Beaufort, Luc
2016-01-01
Abstract Geomagnetic dipole moment variations associated with polarity reversals and excursions are expressed by large changes of the cosmogenic nuclide beryllium‐10 (10Be) production rates. Authigenic 10Be/9Be ratios (proxy of atmospheric 10Be production) from oceanic cores therefore complete the classical information derived from relative paleointensity (RPI) records. This study presents new authigenic 10Be/9Be ratio results obtained from cores MD05‐2920 and MD05‐2930 collected in the west equatorial Pacific Ocean. Be ratios from cores MD05‐2920, MD05‐2930 and MD90‐0961 have been stacked and averaged. Variations of the authigenic 10Be/9Be ratio are analyzed and compared with the geomagnetic dipole low series reported from global RPI stacks. The largest 10Be overproduction episodes are related to dipole field collapses (below a threshold of 2 × 1022 Am2) associated with the Brunhes/Matuyama reversal, the Laschamp (41 ka) excursion, and the Iceland Basin event (190 ka). Other significant 10Be production peaks are correlated to geomagnetic excursions reported in literature. The record was then calibrated by using absolute dipole moment values drawn from the Geomagia and Pint paleointensity value databases. The 10Be‐derived geomagnetic dipole moment record, independent from sedimentary paleomagnetic data, covers the Brunhes‐Matuyama transition and the whole Brunhes Chron. It provides new and complementary data on the amplitude and timing of millennial‐scale geomagnetic dipole moment variations and particularly on dipole moment collapses triggering polarity instabilities. PMID:28163989
Reconstruction of Sensory Stimuli Encoded with Integrate-and-Fire Neurons with Random Thresholds
Lazar, Aurel A.; Pnevmatikakis, Eftychios A.
2013-01-01
We present a general approach to the reconstruction of sensory stimuli encoded with leaky integrate-and-fire neurons with random thresholds. The stimuli are modeled as elements of a Reproducing Kernel Hilbert Space. The reconstruction is based on finding a stimulus that minimizes a regularized quadratic optimality criterion. We discuss in detail the reconstruction of sensory stimuli modeled as absolutely continuous functions as well as stimuli with absolutely continuous first-order derivatives. Reconstruction results are presented for stimuli encoded with single as well as a population of neurons. Examples are given that demonstrate the performance of the reconstruction algorithms as a function of threshold variability. PMID:24077610
Prompt merger collapse and the maximum mass of neutron stars.
Bauswein, A; Baumgarte, T W; Janka, H-T
2013-09-27
We perform hydrodynamical simulations of neutron-star mergers for a large sample of temperature-dependent nuclear equations of state and determine the threshold mass above which the merger remnant promptly collapses to form a black hole. We find that, depending on the equation of state, the threshold mass is larger than the maximum mass of a nonrotating star in isolation by between 30 and 70 percent. Our simulations also show that the ratio between the threshold mass and maximum mass is tightly correlated with the compactness of the nonrotating maximum-mass configuration. We speculate on how this relation can be used to derive constraints on neutron-star properties from future observations.
Paskiet, Diane; Jenke, Dennis; Ball, Douglas; Houston, Christopher; Norwood, Daniel L; Markovic, Ingrid
2013-01-01
The Product Quality Research Institute (PQRI) is a non-profit consortium of organizations working together to generate and share timely, relevant, and impactful information that advances drug product quality and development. The collaborative activities of PQRI participants have, in the case of orally inhaled and nasal drug products (OINDPs), resulted in comprehensive and widely-accepted recommendations for leachables assessments to help ensure patient safety with respect to this class of packaged drug products. These recommendations, which include scientifically justified safety thresholds for leachables, represent a significant milestone towards establishing standardized approaches for safety qualification of leachables in OINDP. To build on the success of the OINDP effort, PQRI's Parenteral and Ophthalmic Drug Products (PODP) Leachables and Extractables Working Group was formed to extrapolate the OINDP threshold concepts and best practice recommendations to other dosage forms with high concern for interaction with packaging/delivery systems. This article considers the general aspects of leachables and their safety assessment, introduces the PODP Work Plan and initial study Protocol, discusses the laboratory studies being conducted by the PODP Chemistry Team, outlines the strategy being developed by the PODP Toxicology Team for the safety qualification of PODP leachables, and considers the issues associated with application of the safety thresholds, particularly with respect to large-volume parenterals. Lastly, the unique leachables issues associated with biologics are described. The Product Quality Research Institute (PQRI) is a non-profit consortium involving industry organizations, academia, and regulatory agencies that together provide recommendations in support of regulatory guidance to advance drug product quality. The collaborative activities of the PQRI Orally Inhaled and Nasal Drug Products Leachables and Extractables Working Group resulted in a systematic and science-based approach to identify and qualify leachables, including the concept of safety thresholds. Concepts from this widely accepted approach, formally publicized in 2006, are being extrapolated to parenteral and ophthalmic drug products. This article provides an overview of extractables and leachables in drug products and biologics and discusses the PQRI Work Plan and Protocols developed by the PQRI Parenteral and Ophthalmic Drug Products Leachables and Extractables Working Group.
Differential Higgs production at N3LO beyond threshold
NASA Astrophysics Data System (ADS)
Dulat, Falko; Mistlberger, Bernhard; Pelloni, Andrea
2018-01-01
We present several key steps towards the computation of differential Higgs boson cross sections at N3LO in perturbative QCD. Specifically, we work in the framework of Higgs-differential cross sections that allows us to compute precise predictions for realistic LHC observables. We demonstrate how to perform an expansion of the analytic N3LO coefficient functions around the production threshold of the Higgs boson. Our framework allows us to compute to arbitrarily high order in the threshold expansion and we explicitly obtain the first two expansion coefficients in analytic form. Furthermore, we assess the phenomenological viability of threshold expansions for differential distributions. We find that while a few terms in the threshold expansion are sufficient to approximate the exact rapidity distribution well, transverse momentum distributions require a significantly higher number of terms in the expansion to be adequately described. We find that to improve state-of-the-art predictions for the rapidity distribution beyond NNLO even more sub-leading terms in the threshold expansion than presented in this article are required. In addition, we report on an interesting obstacle for the computation of N3LO corrections with LHAPDF parton distribution functions and our solution. We provide files containing the analytic expressions for the partonic cross sections as supplementary material attached to this paper.
76 FR 4349 - Revised Jurisdictional Thresholds for Section 8 of the Clayton Act
Federal Register 2010, 2011, 2012, 2013, 2014
2011-01-25
... that no corporation is covered if the competitive sales of either corporation are less than $1,000,000... change in gross national product. The new thresholds, which take effect immediately, are $26,867,000 for...
75 FR 3469 - Revised Jurisdictional Thresholds For Section 8 of the Clayton Act
Federal Register 2010, 2011, 2012, 2013, 2014
2010-01-21
... that no corporation is covered if the competitive sales of either corporation are less than $1,000,000... change in gross national product. The new thresholds, which take effect immediately, are $25,841,000 for...
Incorporating biological control into IPM decision making
USDA-ARS?s Scientific Manuscript database
Of the many ways biological control can be incorporated into Integrated Pest Management (IPM) programs, natural enemy thresholds are arguably most easily adopted by stakeholders. Integration of natural enemy thresholds into IPM programs requires ecological and cost/benefit crop production data, thr...
Morignat, Eric; Gay, Emilie; Vinard, Jean-Luc; Calavas, Didier; Hénaux, Viviane
2015-07-01
In the context of climate change, the frequency and severity of extreme weather events are expected to increase in temperate regions, and potentially have a severe impact on farmed cattle through production losses or deaths. In this study, we used distributed lag non-linear models to describe and quantify the relationship between a temperature-humidity index (THI) and cattle mortality in 12 areas in France. THI incorporates the effects of both temperature and relative humidity and has already been used to quantify the degree of heat stress on dairy cattle because it reflects the physical stress deriving from extreme conditions better than air temperature alone. Relationships between daily THI and mortality were modeled separately for dairy and beef cattle during the 2003-2006 period. Our general approach was to first determine the shape of the THI-mortality relationship in each area by modeling THI with natural cubic splines. We then modeled each relationship assuming a three-piecewise linear function, to estimate the critical cold and heat THI thresholds for each area, delimiting the thermoneutral zone (i.e. where the risk of death is at its minimum), and the cold and heat effects below and above these thresholds, respectively. Area-specific estimates of the cold or heat effects were then combined in a hierarchical Bayesian model to compute the pooled effects of THI increase or decrease on dairy and beef cattle mortality. A U-shaped relationship, indicating a mortality increase below the cold threshold and above the heat threshold, was found in most of the study areas for dairy and beef cattle. The pooled estimate of the mortality risk associated with a 1°C decrease in THI below the cold threshold was 5.0% for dairy cattle [95% posterior interval: 4.4, 5.5] and 4.4% for beef cattle [2.0, 6.5]. The pooled mortality risk associated with a 1°C increase above the heat threshold was estimated to be 5.6% [5.0, 6.2] for dairy and 4.6% [0.9, 8.7] for beef cattle. Knowing the thermoneutral zone and the temperature effects outside this zone is of primary interest for farmers because it can help determine when to implement appropriate preventive and mitigation measures. Copyright © 2015 Elsevier Inc. All rights reserved.
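The assumed three-piecewise linear shape can be written down compactly: the log relative mortality is flat within the thermoneutral zone and rises linearly below the cold threshold and above the heat threshold. The threshold positions and slopes below are illustrative choices that loosely echo the ~5%-per-degree figures quoted, not the fitted area-specific estimates.

```python
import numpy as np

def log_relative_mortality(thi, cold_thr, heat_thr, cold_slope, heat_slope):
    """Three-piecewise linear log relative risk: flat inside the thermoneutral zone."""
    thi = np.asarray(thi, dtype=float)
    cold_excess = np.clip(cold_thr - thi, 0.0, None)   # degrees below the cold threshold
    heat_excess = np.clip(thi - heat_thr, 0.0, None)   # degrees above the heat threshold
    return cold_slope * cold_excess + heat_slope * heat_excess

# Illustrative thresholds and ~5%-per-degree effects outside the thermoneutral zone
thi = np.arange(-10, 36)
rr = np.exp(log_relative_mortality(thi, cold_thr=2.0, heat_thr=24.0,
                                   cold_slope=np.log(1.05), heat_slope=np.log(1.056)))
print(rr[thi == -10], rr[thi == 10], rr[thi == 35])   # cold side, thermoneutral, heat side
```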
van der Hoek, Yntze; Renfrew, Rosalind; Manne, Lisa L
2013-01-01
Identifying persistence and extinction thresholds in species-habitat relationships is a major focal point of ecological research and conservation. However, one major concern regarding the incorporation of threshold analyses in conservation is the lack of knowledge on the generality and transferability of results across species and regions. We present a multi-region, multi-species approach of modeling threshold responses, which we use to investigate whether threshold effects are similar across species and regions. We modeled local persistence and extinction dynamics of 25 forest-associated breeding birds based on detection/non-detection data, which were derived from repeated breeding bird atlases for the state of Vermont. We did not find threshold responses to be particularly well-supported, with 9 species supporting extinction thresholds and 5 supporting persistence thresholds. This contrasts with a previous study based on breeding bird atlas data from adjacent New York State, which showed that most species support persistence and extinction threshold models (15 and 22 of 25 study species respectively). In addition, species that supported a threshold model in both states had associated average threshold estimates of 61.41% (SE = 6.11, persistence) and 66.45% (SE = 9.15, extinction) in New York, compared to 51.08% (SE = 10.60, persistence) and 73.67% (SE = 5.70, extinction) in Vermont. Across species, thresholds were found at 19.45-87.96% forest cover for persistence and 50.82-91.02% for extinction dynamics. Through an approach that allows for broad-scale comparisons of threshold responses, we show that species vary in their threshold responses with regard to habitat amount, and that differences between even nearby regions can be pronounced. We present both ecological and methodological factors that may contribute to the different model results, but propose that regardless of the reasons behind these differences, our results merit a warning that threshold values cannot simply be transferred across regions or interpreted as clear-cut targets for ecosystem management and conservation.
NASA Technical Reports Server (NTRS)
Ricko, Martina; Adler, Robert F.; Huffman, George J.
2016-01-01
Climatology and variations of recent mean and intense precipitation over a near-global (50 deg. S-50 deg. N) domain on a monthly and annual time scale are analyzed. Data used to derive daily precipitation and to examine the effects of spatial and temporal coverage of intense precipitation are from the current Tropical Rainfall Measuring Mission (TRMM) Multisatellite Precipitation Analysis (TMPA) 3B42 version 7 precipitation product, with high spatial and temporal resolution during 1998-2013. Intense precipitation is defined by several different parameters, such as a 95th percentile threshold of daily precipitation, a mean precipitation that exceeds that percentile, or a fixed threshold of daily precipitation value [e.g., 25 and 50 mm day-1]. All parameters are used to identify the main characteristics of spatial and temporal variation of intense precipitation. High correlations between the examined parameters are observed, especially between climatological monthly mean precipitation and intense precipitation, over both tropical land and ocean. Among the various parameters examined, the one best characterizing intense rainfall is the fraction of daily precipitation greater than or equal to 25 mm day-1, defined as the ratio between the intense precipitation above the used threshold and the mean precipitation. Regions that experience an increase in mean precipitation likely experience a similar increase in intense precipitation, especially during El Nino Southern Oscillation (ENSO) events. Improved knowledge of this intense precipitation regime and its strong connection to mean precipitation given by the fraction parameter can be used for monitoring of intense rainfall and its intensity on a global to regional scale.
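Two of the intense-precipitation parameters described above lend themselves to a short sketch: the 95th percentile of wet-day precipitation and the fraction of total precipitation contributed by days at or above 25 mm/day. The synthetic daily series is a placeholder for TMPA 3B42 grid-cell data, not actual observations.

```python
import numpy as np

def intense_precip_parameters(daily_mm, fixed_threshold=25.0, percentile=95.0):
    """95th percentile of wet-day accumulation and fraction of total precipitation
    contributed by days at or above the fixed threshold."""
    daily_mm = np.asarray(daily_mm, dtype=float)
    wet_days = daily_mm[daily_mm > 0.0]
    p95 = np.percentile(wet_days, percentile)
    fraction = daily_mm[daily_mm >= fixed_threshold].sum() / daily_mm.sum()
    return p95, fraction

# Synthetic 16-year daily series standing in for one grid cell
rng = np.random.default_rng(3)
daily = rng.gamma(shape=0.4, scale=8.0, size=16 * 365)
p95, frac25 = intense_precip_parameters(daily)
print(f"95th percentile: {p95:.1f} mm/day, fraction >= 25 mm/day: {frac25:.2f}")
```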
NASA Astrophysics Data System (ADS)
Weiss, S. B.; Micheli, L.; Flint, L. E.; Flint, A. L.; Thorne, J. H.
2014-12-01
Assessment of climate change resilience, vulnerability, and adaptation options requires downscaling of GCM outputs to local scales, and conversion of temperature and precipitation forcings into hydrologic and ecological responses. Recent work in the San Francisco Bay Area and California demonstrates a practical approach to this process. First, climate futures (GCM x Emissions Scenario) are screened using cluster analysis for seasonal precipitation and temperature, to select a tractable subset of projections that still represent the range of climate projections. Second, monthly climate projections are downscaled to 270m and the Basin Characterization Model (BCM) applied, to generate fine-scale recharge, runoff, actual evapotranspiration (AET), and climatic water deficit (CWD) accounting for soils, bedrock geology, topography, and local climate. Third, annual time-series are used to derive 30-year climatologies and recurrence intervals of extreme events (including multi-year droughts) at the scale of small watersheds and conservation parcels/networks. We take a "scenario-neutral" approach where thresholds are defined for system "failure," such as water supply shortfalls or drought mortality/vegetation transitions, and the time-window for hitting those thresholds is evaluated across all selected climate projections. San Francisco Bay Area examples include drought thresholds (CWD) for specific vegetation-types that identify leading/trailing edges and local refugia, evaluation of hydrologic resources (recharge and runoff) provided by conservation lands, and productivity of rangelands (AET). BCM outputs for multiple futures are becoming available to resource managers through on-line data extraction tools. This approach has wide applicability to numerous resource management issues.
Constraints on the ωπ form factor from analyticity and unitarity
NASA Astrophysics Data System (ADS)
Ananthanarayan, B.; Caprini, Irinel; Kubis, Bastian
2016-05-01
Form factors are important low-energy quantities and an accurate knowledge of these sheds light on the strong interactions. A variety of methods based on general principles have been developed to use information known in different energy regimes to constrain them in regions where experimental information needs to be tested precisely. Here we review our recent work on the electromagnetic ωπ form factor in a model-independent framework known as the method of unitarity bounds, partly motivated by the discrepancies noted recently between the theoretical calculations of the form factor based on dispersion relations and certain experimental data measured from the decay ω → π0γ∗. We have applied a modified dispersive formalism, which uses as input the discontinuity of the ωπ form factor calculated by unitarity below the ωπ threshold and an integral constraint on the square of its modulus above this threshold. The latter constraint was obtained by exploiting unitarity and the positivity of the spectral function of a QCD correlator, computed on the spacelike axis by operator product expansion and perturbative QCD. An alternative constraint is obtained by using data available at higher energies for evaluating an integral of the modulus squared with a suitable weight function. From these conditions we derived upper and lower bounds on the modulus of the ωπ form factor in the region below the ωπ threshold. The results confirm the existence of a disagreement between dispersion theory and experimental data on the ωπ form factor around 0.6 GeV, including those from NA60 published in 2016.
NASA Astrophysics Data System (ADS)
Brigandı, G.; Aronica, G. T.; Basile, G.; Pasotti, L.; Panebianco, M.
2012-04-01
In November 2011 an almost exceptional thunderstorm developed over the north-eastern part of the Sicily Region (Italy), producing local heavy rainfall, mud-debris flows and flash flooding. The storm was concentrated on the Tyrrhenian Sea coast near the city of Barcellona, within the Longano catchment. The main focus of the paper is to present an experimental operative system for alerting extreme hydrometeorological events by using a methodology based on the combined use of rainfall thresholds, soil moisture indexes and quantitative precipitation forecasting. As a matter of fact, shallow landslide and flash flood warning is a key element to improve the Civil Protection achievements to mitigate damages and safeguard the security of people. It is a rather complicated task, particularly in those catchments with flashy response where even brief anticipations are important and welcomed. It is well known how the triggering of shallow landslides is strongly influenced by the initial soil moisture conditions of catchments. Therefore, the early warning system applied here is based on the combined use of rainfall thresholds, derived both for flash floods and for landslides, and soil moisture conditions; the system is composed of several basic components related to antecedent soil moisture conditions, real-time rainfall monitoring and antecedent rainfall. Soil moisture conditions were estimated using an Antecedent Precipitation Index (API), similar to that widely used for defining soil moisture conditions via the Antecedent Moisture Condition (AMC) index. Rainfall thresholds for landslides were derived using historical and statistical analysis. Finally, rainfall thresholds for flash flooding were derived using an Instantaneous Unit Hydrograph based lumped rainfall-runoff model with the SCS-CN routine for net rainfall. After the implementation and calibration of the model, a testing phase was carried out by using real data collected for the November 2011 event in the Longano catchment. Moreover, in order to test the capability of the system to forecast this event, Quantitative Precipitation Forecasts provided by SILAM (Sicily Limited Area Model), a meteorological model run by SIAS (Sicilian Agrometeorological Service) with a forecast horizon of up to 144 hours, have been used to run the system.
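The antecedent soil moisture component can be illustrated with a recursive Antecedent Precipitation Index of the common form API_t = k * API_(t-1) + P_t. The decay constant k and the example rainfall series are illustrative assumptions, not the operational system's calibration.

```python
import numpy as np

def antecedent_precipitation_index(daily_rain_mm, k=0.9, api0=0.0):
    """Recursive API: API_t = k * API_(t-1) + P_t."""
    api = np.empty(len(daily_rain_mm), dtype=float)
    previous = api0
    for i, p in enumerate(daily_rain_mm):
        previous = k * previous + p
        api[i] = previous
    return api

rain = [0, 0, 12, 30, 5, 0, 0, 60, 20, 0]          # daily rainfall, mm
print(np.round(antecedent_precipitation_index(rain), 1))
```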
Yang, Yi; Maxwell, Andrew; Zhang, Xiaowei; Wang, Nan; Perkins, Edward J; Zhang, Chaoyang; Gong, Ping
2013-01-01
Pathway alterations reflected as changes in gene expression regulation and gene interaction can result from cellular exposure to toxicants. Such information is often used to elucidate toxicological modes of action. From a risk assessment perspective, alterations in biological pathways are a rich resource for setting toxicant thresholds, which may be more sensitive and mechanism-informed than traditional toxicity endpoints. Here we developed a novel differential networks (DNs) approach to connect pathway perturbation with toxicity threshold setting. Our DNs approach consists of 6 steps: time-series gene expression data collection, identification of altered genes, gene interaction network reconstruction, differential edge inference, mapping of genes with differential edges to pathways, and establishment of causal relationships between chemical concentration and perturbed pathways. A one-sample Gaussian process model and a linear regression model were used to identify genes that exhibited significant profile changes across an entire time course and between treatments, respectively. Interaction networks of differentially expressed (DE) genes were reconstructed for different treatments using a state space model and then compared to infer differential edges/interactions. DE genes possessing differential edges were mapped to biological pathways in databases such as KEGG pathways. Using the DNs approach, we analyzed a time-series Escherichia coli live cell gene expression dataset consisting of 4 treatments (control, 10, 100, 1000 mg/L naphthenic acids, NAs) and 18 time points. Through comparison of reconstructed networks and construction of differential networks, 80 genes were identified as DE genes with a significant number of differential edges, and 22 KEGG pathways were altered in a concentration-dependent manner. Some of these pathways were perturbed to a degree as high as 70% even at the lowest exposure concentration, implying a high sensitivity of our DNs approach. Findings from this proof-of-concept study suggest that our approach has a great potential in providing a novel and sensitive tool for threshold setting in chemical risk assessment. In future work, we plan to analyze more time-series datasets with a full spectrum of concentrations and sufficient replications per treatment. The pathway alteration-derived thresholds will also be compared with those derived from apical endpoints such as cell growth rate.
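A much-simplified sketch of the differential-edge idea: build a gene-gene association network for each treatment (here with plain correlation and a hard threshold rather than the state space model used by the authors) and flag edges that are present in only one of the two networks. Matrix sizes, the correlation threshold and the toy data are assumptions.

```python
import numpy as np

def adjacency(expression, threshold=0.7):
    """Boolean gene-gene adjacency from absolute pairwise correlation."""
    corr = np.corrcoef(expression, rowvar=False)   # expression: time points x genes
    adj = np.abs(corr) >= threshold
    np.fill_diagonal(adj, False)
    return adj

# Toy time-series expression matrices: 18 time points x 30 genes
rng = np.random.default_rng(11)
control = rng.normal(size=(18, 30))
treated = control + rng.normal(scale=0.8, size=control.shape)
differential_edges = adjacency(control) ^ adjacency(treated)   # edges present in only one network
print("differential edges:", int(differential_edges.sum()) // 2)
```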
Dobie, Robert A; Wojcik, Nancy C
2015-01-01
Objectives: The US Occupational Safety and Health Administration (OSHA) Noise Standard provides the option for employers to apply age corrections to employee audiograms to consider the contribution of ageing when determining whether a standard threshold shift has occurred. Current OSHA age-correction tables are based on 40-year-old data, with small samples and an upper age limit of 60 years. By comparison, recent data (1999–2006) show that hearing thresholds in the US population have improved. Because hearing thresholds have improved, and because older people are increasingly represented in noisy occupations, the OSHA tables no longer represent the current US workforce. This paper presents 2 options for updating the age-correction tables and extending values to age 75 years using recent population-based hearing survey data from the US National Health and Nutrition Examination Survey (NHANES). Both options provide scientifically derived age-correction values that can be easily adopted by OSHA to expand their regulatory guidance to include older workers. Methods: Regression analysis was used to derive new age-correction values using audiometric data from the 1999–2006 US NHANES. Using the NHANES median better-ear thresholds fit to simple polynomial equations, new age-correction values were generated for both men and women for ages 20–75 years. Results: The new age-correction values are presented as 2 options. The preferred option is to replace the current OSHA tables with the values derived from the NHANES median better-ear thresholds for ages 20–75 years. The alternative option is to retain the current OSHA age-correction values up to age 60 years and use the NHANES-based values for ages 61–75 years. Conclusions: Recent NHANES data offer a simple solution to the need for updated, population-based, age-correction tables for OSHA. The options presented here provide scientifically valid and relevant age-correction values which can be easily adopted by OSHA to expand their regulatory guidance to include older workers. PMID:26169804
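The regression step can be sketched in a few lines: fit a simple polynomial to median better-ear thresholds as a function of age and read off age-correction values as differences from a reference age. The synthetic thresholds below are placeholders for the NHANES medians and do not reproduce the published values.

```python
import numpy as np

ages = np.arange(20, 76)
# Synthetic stand-in for NHANES median better-ear thresholds (dB HL) at one audiometric frequency
median_thresholds = 2.0 + 0.1 * (ages - 20) + 0.002 * (ages - 20) ** 2

coeffs = np.polyfit(ages, median_thresholds, deg=3)       # simple polynomial fit
fitted = np.polyval(coeffs, ages)
age_correction = fitted - np.polyval(coeffs, 20)          # correction relative to age 20

print("correction at 60:", round(float(age_correction[ages == 60][0]), 1), "dB")
print("correction at 75:", round(float(age_correction[ages == 75][0]), 1), "dB")
```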
Gene Expression Profiling in Human Lung Cells Exposed to Isoprene-Derived Secondary Organic Aerosol.
Lin, Ying-Hsuan; Arashiro, Maiko; Clapp, Phillip W; Cui, Tianqu; Sexton, Kenneth G; Vizuete, William; Gold, Avram; Jaspers, Ilona; Fry, Rebecca C; Surratt, Jason D
2017-07-18
Secondary organic aerosol (SOA) derived from the photochemical oxidation of isoprene contributes a substantial mass fraction to atmospheric fine particulate matter (PM2.5). The formation of isoprene SOA is influenced largely by anthropogenic emissions through multiphase chemistry of its multigenerational oxidation products. Considering the abundance of isoprene SOA in the troposphere, understanding mechanisms of adverse health effects through inhalation exposure is critical to mitigating its potential impact on public health. In this study, we assessed the effects of isoprene SOA on gene expression in human airway epithelial cells (BEAS-2B) through an air-liquid interface exposure. Gene expression profiling of 84 oxidative stress and 249 inflammation-associated human genes was performed. Our results show that the expression levels of 29 genes were significantly altered upon isoprene SOA exposure under noncytotoxic conditions (p < 0.05), with the majority (22/29) of genes passing a false discovery rate threshold of 0.3. The most significantly affected genes belong to the nuclear factor (erythroid-derived 2)-like 2 (Nrf2) transcription factor network. The Nrf2 function is confirmed through a reporter cell line. Together with detailed characterization of SOA constituents, this study reveals the impact of isoprene SOA exposure on lung responses and highlights the importance of further understanding its potential health outcomes.
Analytical Deriving of the Field Capacity through Soil Bundle Model
NASA Astrophysics Data System (ADS)
Arnone, E.; Viola, F.; Antinoro, C.; Noto, L. V.
2015-12-01
The concept of field capacity as a soil hydraulic parameter is widely used in many hydrological applications. Despite its recurring usage, its definition is not univocal. Traditionally, field capacity has been related to the amount of water that remains in the soil after the excess water has drained away and the downward movement of water has significantly decreased. Quantifying the drainage of excess water may be vague, and several definitions, often subjective, have been proposed. These definitions are based on fixed thresholds either of time, pressure, or flux to which the field capacity condition is associated. The flux-based definition identifies the field capacity as the soil moisture value corresponding to an arbitrary fixed threshold of free drainage flux. Recently, many works have investigated the flux-based definition by varying either the drainage threshold, the geometry setting and mainly the description of the drainage flux. Most of these methods are based on the simulation of the flux through a porous medium by using Darcy's law or Richards' equation. Using the above-mentioned flux-based definition, in this work we propose an alternative analytical approach for deriving the field capacity based on a bundle-of-tubes model. The pore space of a porous medium is conceptualized as a bundle of capillary tubes of given length and different radii, drawn from a known distribution. The drainage from a single capillary tube is given by the analytical solution of the differential equation describing the evolution of the water height within the capillary tube. This equation is based on Poiseuille's law and describes the drainage flux with time as a function of tube radius. The drainage process is then integrated over any portion of soil taking into account the tube radius distribution, which in turn depends on the soil type. This methodology allows the dynamics of the drainage water flux to be derived analytically for any soil type and consequently the soil field capacity to be defined as the flux reaches a given threshold value. The theoretical model also accounts for the tortuosity which characterizes the water pathways in real soils, but neglects the mutual interconnections between voids.
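A schematic numerical counterpart to the analytical derivation (not the authors' solution): each tube drains with a Poiseuille-like rate proportional to r^2, the bundle's drainage flux is summed over a lognormal radius distribution, and the field capacity is read off as the water fraction remaining when that flux first falls below a chosen threshold. All parameter values are illustrative assumptions.

```python
import numpy as np

# Lognormal tube-radius distribution (m) standing in for a soil pore-size distribution
rng = np.random.default_rng(7)
radii = rng.lognormal(mean=np.log(30e-6), sigma=0.6, size=3000)

rate = 5.0e8 * radii ** 2        # per-tube drainage rate constant (1/h), ~ r^2 as in Poiseuille flow
volume = radii ** 2              # per-tube water volume ~ cross-sectional area (arbitrary units)

t = np.linspace(0.0, 200.0, 1001)                                   # hours
water = (volume[None, :] * np.exp(-np.outer(t, rate))).sum(axis=1)  # bundle water content
flux = -np.gradient(water, t)                                       # bundle drainage flux

flux_threshold = 1e-3 * flux[1]              # e.g. 0.1% of the initial drainage flux
idx = int(np.argmax(flux < flux_threshold))  # first time the flux falls below the threshold
print("field capacity (fraction of saturation):", water[idx] / water[0])
```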
A framework for optimizing phytosanitary thresholds in seed systems
USDA-ARS?s Scientific Manuscript database
Seedborne pathogens and pests limit production in many agricultural systems. Quarantine programs help prevent the introduction of exotic pathogens into a country, but few regulations directly apply to reducing the reintroduction and spread of endemic pathogens. Use of phytosanitary thresholds helps ...
NASA Astrophysics Data System (ADS)
Wang, Zhe; Wang, Zhenhui; Cao, Xiaozhong; Tao, Fa
2018-01-01
Clouds are currently observed by both ground-based and satellite remote sensing techniques. Each technique has its own strengths and weaknesses depending on the observation method, instrument performance and the methods used for retrieval. It is important to study synergistic cloud measurements to improve the reliability of the observations and to verify the different techniques. The FY-2 geostationary orbiting meteorological satellites continuously observe the sky over China. Their cloud top temperature product can be processed to retrieve the cloud top height (CTH). The ground-based millimeter wavelength cloud radar can acquire information about the vertical structure of clouds-such as the cloud base height (CBH), CTH and the cloud thickness-and can continuously monitor changes in the vertical profiles of clouds. The CTHs were retrieved using both cloud top temperature data from the FY-2 satellites and the cloud radar reflectivity data for the same time period (June 2015 to May 2016) and the resulting datasets were compared in order to evaluate the accuracy of CTH retrievals using FY-2 satellites. The results show that the concordance rate of cloud detection between the two datasets was 78.1%. Higher consistencies were obtained for thicker clouds with larger echo intensity and for more continuous clouds. The average difference in the CTH between the two techniques was 1.46 km. The difference in CTH between low- and mid-level clouds was less than that for high-level clouds. An attenuation threshold of the cloud radar for rainfall was 0.2 mm/min; a rainfall intensity below this threshold had no effect on the CTH. The satellite CTH can be used to compensate for the attenuation error in the cloud radar data.
Magrì, Damiano; Agostoni, Piergiuseppe; Corrà, Ugo; Passino, Claudio; Scrutinio, Domenico; Perrone-Filardi, Pasquale; Correale, Michele; Cattadori, Gaia; Metra, Marco; Girola, Davide; Piepoli, Massimo F; Iorio, AnnaMaria; Emdin, Michele; Raimondo, Rosa; Re, Federica; Cicoira, Mariantonietta; Belardinelli, Romualdo; Guazzi, Marco; Limongelli, Giuseppe; Clemenza, Francesco; Parati, Gianfranco; Frigerio, Maria; Casenghi, Matteo; Scardovi, Angela B; Ferraironi, Alessandro; Di Lenarda, Andrea; Bussotti, Maurizio; Apostolo, Anna; Paolillo, Stefania; La Gioia, Rocco; Gargiulo, Paola; Palermo, Pietro; Minà, Chiara; Farina, Stefania; Battaia, Elisa; Maruotti, Antonello; Pacileo, Giuseppe; Contini, Mauro; Oliva, Fabrizio; Ricci, Roberto; Sinagra, Gianfranco
2015-08-01
Oxygen uptake at the anaerobic threshold (VO2AT), a submaximal exercise-derived variable, independent of patients' motivation, is a marker of outcome in heart failure (HF). However, previous evidence of VO2AT values paradoxically higher in HF patients with permanent atrial fibrillation (AF) than in those with sinus rhythm (SR) raised uncertainties. We tested the prognostic role of VO2AT in a large cohort of systolic HF patients, focusing on possible differences between SR and AF. Altogether 2976 HF patients (2578 with SR and 398 with AF) were prospectively followed. Besides a clinical examination, each patient underwent a maximal cardiopulmonary exercise test (CPET). The follow-up was analysed for up to 1500 days. Cardiovascular death or urgent cardiac transplantation occurred in 303 patients (250 (9.6%) patients with SR and 53 (13.3%) patients with AF, p = 0.023). In the entire population, multivariate analysis including peak oxygen uptake (VO2) showed a prognostic capacity (C-index) similar to that obtained including VO2AT (0.76 vs 0.72). Also, left ventricular ejection fraction, ventilation vs carbon dioxide production slope, β-blocker and digoxin therapy proved to be significant prognostic indexes. The receiver-operating characteristic (ROC) curves analysis showed that the best predictive VO2AT cut-off for the SR group was 11.7 ml/kg/min, while it was 12.8 ml/kg/min for the AF group. VO2AT, a submaximal CPET-derived parameter, is reliable for long-term cardiovascular mortality prognostication in stable systolic HF. However, different VO2AT cut-off values between SR and AF HF patients should be adopted. © The European Society of Cardiology 2014.
Evaluation of runaway-electron effects on plasma-facing components for NET
NASA Astrophysics Data System (ADS)
Bolt, H.; Calén, H.
1991-03-01
Runaway electrons which are generated during disruptions can cause serious damage to plasma facing components in a next generation device like NET. A study was performed to quantify the response of NET plasma facing components to runaway-electron impact. For the determination of the energy deposition in the component materials, Monte Carlo computations were performed. Since the subsurface metal structures can be strongly heated under runaway-electron impact, damage threshold values for the thermal excursions were derived from the computed results. These damage thresholds are strongly dependent on the materials selection and the component design. For a carbon-molybdenum divertor with 10 and 20 mm carbon armour thickness and 1 degree electron incidence, the damage thresholds are 100 MJ/m² and 220 MJ/m², respectively. The thresholds for a carbon-copper divertor under the same conditions are about 50% lower. On the first wall, damage is anticipated for energy depositions above 180 MJ/m².
Effect of threshold disorder on the quorum percolation model
NASA Astrophysics Data System (ADS)
Monceau, Pascal; Renault, Renaud; Métens, Stéphane; Bottani, Samuel
2016-07-01
We study the modifications induced in the behavior of the quorum percolation model on neural networks with Gaussian in-degree by taking into account an uncorrelated Gaussian variability of the thresholds. We derive a mean-field approach and show its relevance by carrying out explicit Monte Carlo simulations. It turns out that such a disorder shifts the position of the percolation transition, impacts the size of the giant cluster, and can even destroy the transition. Moreover, we highlight the occurrence of disorder-independent fixed points above the quorum critical value. The mean-field approach enables us to interpret these effects in terms of activation probability. A finite-size analysis enables us to show that the order parameter is weakly self-averaging with an exponent independent of the threshold disorder. Last, we show that the effects of the threshold and connectivity disorders cannot be easily discriminated from the measured averaged physical quantities.
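A quorum-percolation-type mean-field iteration of the kind referred to above can be sketched as follows; the self-consistency equation, the discretised Gaussian in-degree and threshold distributions, and all parameter values are assumptions for illustration, not the paper's formulation or data.

```python
import numpy as np
from scipy.stats import binom, norm

# Minimal sketch (illustrative parameters): a node fires if at least theta of
# its k inputs are active, with k drawn from a discretised Gaussian in-degree
# distribution and theta from an independent Gaussian (the threshold disorder).
# The active fraction Phi is iterated through the mean-field relation
#   Phi = f + (1 - f) * E_k E_theta [ P(Binomial(k, Phi) >= theta) ],
# where f is the initially activated fraction.

k_mean, k_std = 50, 10          # Gaussian in-degree (illustrative)
th_mean, th_std = 15, 4         # Gaussian threshold disorder (illustrative)
ks = np.arange(1, 121)
pk = norm.pdf(ks, k_mean, k_std); pk /= pk.sum()
thetas = np.arange(1, 61)
pth = norm.pdf(thetas, th_mean, th_std); pth /= pth.sum()

def activation(phi, f):
    """Right-hand side of the mean-field self-consistency equation."""
    # P(Binomial(k, phi) >= theta) for every (k, theta) pair
    p_ge = 1.0 - binom.cdf(thetas[None, :] - 1, ks[:, None], phi)
    return f + (1.0 - f) * pk @ p_ge @ pth

def fixed_point(f, iters=500):
    phi = f
    for _ in range(iters):
        phi = activation(phi, f)
    return phi

for f in (0.05, 0.10, 0.20, 0.30):
    print(f"initially active f = {f:.2f} -> final active fraction {fixed_point(f):.3f}")
```

Widening th_std in this toy iteration smears the jump in the final active fraction as f is varied, which is the qualitative effect of threshold disorder on the percolation transition discussed above.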
Wetzel, Hermann
2006-01-01
In a large number of mostly retrospective association studies, a statistical relationship between volume and quality of health care has been reported. However, the relevance of these results is frequently limited by methodological shortcomings. In this article, criteria for the evidence and definition of thresholds for volume-outcome relations are proposed, e.g. the specification of relevant outcomes for quality indicators, analysis of volume as a continuous variable with an adequate case-mix and risk adjustment, accounting for cluster effects and considering mathematical models for the derivation of cut-off values. Moreover, volume thresholds are regarded as surrogate parameters for the indirect classification of the quality of care, whose diagnostic validity and effectiveness in improving health care quality need to be evaluated in prospective studies.
Vehicle lift-off modelling and a new rollover detection criterion
NASA Astrophysics Data System (ADS)
Mashadi, Behrooz; Mostaghimi, Hamid
2017-05-01
The modelling and development of a general criterion for the prediction of the rollover threshold is the main purpose of this work. Vehicle dynamics models after wheel lift-off, when the vehicle moves on two wheels, are derived, and the governing equations are used to develop the rollover threshold. These models include the properties of the suspension and steering systems. In order to study the stability of motion, the steady-state solutions of the equations of motion are computed. Based on the stability analyses, a new relation is obtained for the rollover threshold in terms of measurable response parameters. The presented criterion predicts the best time for the prevention of the vehicle rollover by applying a correcting moment. It is shown that the introduced threshold of vehicle rollover is a proper state of vehicle motion that is best for stabilising the vehicle with a low energy requirement.
Mathematical Model of Naive T Cell Division and Survival IL-7 Thresholds.
Reynolds, Joseph; Coles, Mark; Lythe, Grant; Molina-París, Carmen
2013-01-01
We develop a mathematical model of the peripheral naive T cell population to study the change in human naive T cell numbers from birth to adulthood, incorporating thymic output and the availability of interleukin-7 (IL-7). The model is formulated as three ordinary differential equations: two describe T cell numbers, in a resting state and progressing through the cell cycle. The third is introduced to describe changes in IL-7 availability. Thymic output is a decreasing function of time, representative of the thymic atrophy observed in aging humans. Each T cell is assumed to possess two interleukin-7 receptor (IL-7R) signaling thresholds: a survival threshold and a second, higher, proliferation threshold. If the IL-7R signaling strength is below its survival threshold, a cell may undergo apoptosis. When the signaling strength is above the survival threshold, but below the proliferation threshold, the cell survives but does not divide. Signaling strength above the proliferation threshold enables entry into cell cycle. Assuming that individual cell thresholds are log-normally distributed, we derive population-average rates for apoptosis and entry into cell cycle. We have analyzed the adiabatic change in homeostasis as thymic output decreases. With a parameter set representative of a healthy individual, the model predicts a unique equilibrium number of T cells. In a parameter range representative of persistent viral or bacterial infection, where naive T cell cycle progression is impaired, a decrease in thymic output may result in the collapse of the naive T cell repertoire.
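The three-ODE structure with log-normally distributed thresholds described above can be sketched numerically as follows; the functional forms, rate constants and threshold parameters below are assumptions chosen for illustration, not the authors' fitted model.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.stats import lognorm

# Minimal sketch (assumed parameters, not the authors' fits): resting naive
# cells R, cycling cells C, and IL-7 availability I.  Each cell carries
# log-normally distributed survival and proliferation thresholds; averaging
# over those distributions turns a given IL-7R signal strength into
# population-level apoptosis and cycle-entry rates.

theta_s, theta_p, sigma = 1.0, 3.0, 0.5   # median thresholds and log-normal spread (a.u.)
mu_max, p_max = 0.05, 0.2                 # maximal apoptosis / cycle-entry rates (1/day)
div_rate, consume, replenish = 1.0, 0.01, 0.05

def thymic_output(t):
    return 10.0 * np.exp(-t / 5000.0)     # decaying thymic export (cells/day), illustrative

def rhs(t, y):
    R, C, I = y
    signal = I / (1.0 + 1e-4 * (R + C))                              # crude per-cell IL-7 signal
    apoptosis = mu_max * (1.0 - lognorm.cdf(signal, s=sigma, scale=theta_s))  # threshold above signal
    entry = p_max * lognorm.cdf(signal, s=sigma, scale=theta_p)              # threshold below signal
    dR = thymic_output(t) - apoptosis * R - entry * R + 2.0 * div_rate * C
    dC = entry * R - div_rate * C
    dI = replenish - consume * I * (R + C) * 1e-4
    return [dR, dC, dI]

sol = solve_ivp(rhs, (0.0, 20000.0), [100.0, 0.0, 1.0], max_step=50.0)
print("final resting / cycling cells:", sol.y[0, -1], sol.y[1, -1])
```

The log-normal CDFs are what turn individual-cell thresholds into smooth population-average rates, which is the key modelling step emphasised in the abstract.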
The locking and unlocking thresholds for tearing modes in a cylindrical tokamak
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Wenlong; Zhu, Ping, E-mail: pzhu@ustc.edu.cn; Department of Engineering Physics, University of Wisconsin-Madison, Madison, Wisconsin 53706
2016-03-15
The locking and unlocking thresholds for tearing modes are in general different. In this work, the physics origin for this difference is illustrated from theory analysis, and a numerical procedure is developed to find both locking and unlocking thresholds. In particular, a new scaling law for the unlocking threshold that is valid in both weak and strong rotation regimes has been derived from the lowest amplitude of the RMP (resonant magnetic perturbation) allowed for the locked-mode solution. Above the unlocking threshold, the criterion for the phase-flip instability is extended to identify the entire locked-mode states. Two different regimes of the RMP amplitude in terms of the accessibility of the locked-mode states have been found. In the first regime, the locked-mode state may or may not be accessible depending on the initial conditions of an evolving island. In the second regime, the locked-mode state can always be reached regardless of the initial conditions of the tearing mode. The lowest RMP amplitude for the second regime is determined to be the mode-locking threshold. The different characteristics of the two regimes above the unlocking threshold reveal the underlying physics for the gap between the locking and unlocking thresholds and provide an explanation for the closely related and widely observed hysteresis phenomena in island evolution during the sweeping process of the RMP amplitude up and down across that threshold gap.
Liu, Yang; Hoppe, Brenda O; Convertino, Matteo
2018-04-10
Emergency risk communication (ERC) programs that activate when the ambient temperature is expected to cross certain extreme thresholds are widely used to manage relevant public health risks. In practice, however, the effectiveness of these thresholds has rarely been examined. The goal of this study is to test if the activation criteria based on extreme temperature thresholds, both cold and heat, capture elevated health risks for all-cause and cause-specific mortality and morbidity in the Minneapolis-St. Paul Metropolitan Area. A distributed lag nonlinear model (DLNM) combined with a quasi-Poisson generalized linear model is used to derive the exposure-response functions between daily maximum heat index and mortality (1998-2014) and morbidity (emergency department visits; 2007-2014). Specific causes considered include cardiovascular, respiratory, renal diseases, and diabetes. Six extreme temperature thresholds, corresponding to 1st-3rd and 97th-99th percentiles of local exposure history, are examined. All six extreme temperature thresholds capture significantly increased relative risks for all-cause mortality and morbidity. However, the cause-specific analyses reveal heterogeneity. Extreme cold thresholds capture increased mortality and morbidity risks for cardiovascular and respiratory diseases and extreme heat thresholds for renal disease. Percentile-based extreme temperature thresholds are appropriate for initiating ERC targeting the general population. Tailoring ERC by specific causes may protect some but not all individuals with health conditions exacerbated by hazardous ambient temperature exposure. © 2018 Society for Risk Analysis.
Bayesian methods for estimating GEBVs of threshold traits
Wang, C-L; Ding, X-D; Wang, J-Y; Liu, J-F; Fu, W-X; Zhang, Z; Yin, Z-J; Zhang, Q
2013-01-01
Estimation of genomic breeding values is the key step in genomic selection (GS). Many methods have been proposed for continuous traits, but methods for threshold traits are still scarce. Here we introduced the threshold model into the framework of GS and, specifically, we extended the three Bayesian methods BayesA, BayesB and BayesCπ on the basis of the threshold model for estimating genomic breeding values of threshold traits; the extended methods are correspondingly termed BayesTA, BayesTB and BayesTCπ. Computing procedures of the three BayesT methods using a Markov chain Monte Carlo algorithm were derived. A simulation study was performed to investigate the benefit of the presented methods in accuracy of the genomic estimated breeding values (GEBVs) for threshold traits. Factors affecting the performance of the three BayesT methods were addressed. As expected, the three BayesT methods generally performed better than the corresponding normal Bayesian methods, in particular when the number of phenotypic categories was small. In the standard scenario (number of categories=2, incidence=30%, number of quantitative trait loci=50, h2=0.3), the accuracies were improved by 30.4, 2.4, and 5.7 percentage points, respectively. In most scenarios, BayesTB and BayesTCπ generated similar accuracies and both performed better than BayesTA. In conclusion, our work proved that the threshold model fits well for predicting GEBVs of threshold traits, and BayesTCπ is supposed to be the method of choice for GS of threshold traits. PMID:23149458
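The liability-threshold idea underlying these BayesT methods can be illustrated with simulated toy data; the sketch below shows only how a continuous liability built from marker effects is converted into an observed categorical phenotype, not the BayesTA/TB/TCπ samplers themselves, and all sizes and parameters are illustrative.

```python
import numpy as np
from scipy.stats import norm

# Minimal sketch (simulated toy data, not the paper's BayesT samplers):
# a continuous, unobserved liability is the sum of marker effects and noise,
# and the observed categorical phenotype records only which threshold
# interval the liability falls into.

rng = np.random.default_rng(1)
n_ind, n_snp, h2, incidence = 2000, 500, 0.3, 0.30

geno = rng.binomial(2, 0.5, size=(n_ind, n_snp)).astype(float)
geno -= geno.mean(axis=0)                              # centre marker codes
beta = rng.normal(0.0, 1.0, n_snp)
g = geno @ beta
g *= np.sqrt(h2 / g.var())                             # scale genetic variance to h2
liability = g + rng.normal(0.0, np.sqrt(1.0 - h2), n_ind)

threshold = np.quantile(liability, 1.0 - incidence)    # fixes the observed incidence
y = (liability > threshold).astype(int)                # observed 0/1 threshold trait

# On the liability scale the threshold maps back to the incidence via the
# normal quantile, which is how a probit/threshold model links the two scales.
print("observed incidence:", y.mean())
print("implied threshold (probit scale):", norm.ppf(1.0 - y.mean()))
```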
A Universal Threshold for the Assessment of Load and Output Residuals of Strain-Gage Balance Data
NASA Technical Reports Server (NTRS)
Ulbrich, N.; Volden, T.
2017-01-01
A new universal residual threshold for the detection of load and gage output residual outliers of wind tunnel strain-gage balance data was developed. The threshold works with both the Iterative and Non-Iterative Methods that are used in the aerospace testing community to analyze and process balance data. It also supports all known load and gage output formats that are traditionally used to describe balance data. The threshold's definition is based on an empirical electrical constant. First, the constant is used to construct a threshold for the assessment of gage output residuals. Then, the related threshold for the assessment of load residuals is obtained by multiplying the empirical electrical constant with the sum of the absolute values of all first partial derivatives of a given load component. The empirical constant equals 2.5 microV/V for the assessment of balance calibration or check load data residuals. A value of 0.5 microV/V is recommended for the evaluation of repeat point residuals because, by design, the calculation of these residuals removes errors that are associated with the regression analysis of the data itself. Data from a calibration of a six-component force balance is used to illustrate the application of the new threshold definitions to real-world balance calibration data.
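The two threshold definitions described above can be written out in a few lines; the sensitivity values below are made-up placeholders for a hypothetical six-component balance, and only the arithmetic (constant for outputs, constant times the sum of absolute partial derivatives for loads) reflects the abstract.

```python
import numpy as np

# Minimal sketch (made-up numbers): an empirical electrical constant sets the
# gage-output residual threshold directly, and the load residual threshold for
# each load component is that constant times the sum of the absolute first
# partial derivatives of the load with respect to the gage outputs.

c_cal = 2.5      # microV/V, calibration / check-load data
c_rep = 0.5      # microV/V, repeat-point residuals

# Hypothetical partial derivatives dLoad_j / dOutput_i for a 6-component
# balance, in load units per microV/V (placeholder values).
dload_doutput = np.abs(np.array([
    [0.80, 0.02, 0.01, 0.00, 0.03, 0.01],
    [0.03, 0.75, 0.02, 0.01, 0.00, 0.02],
    [0.01, 0.02, 1.10, 0.04, 0.02, 0.00],
    [0.00, 0.01, 0.03, 0.25, 0.01, 0.01],
    [0.02, 0.00, 0.02, 0.01, 0.30, 0.02],
    [0.01, 0.02, 0.00, 0.01, 0.02, 0.28],
]))

output_threshold = c_cal                                  # microV/V, applied per gage
load_thresholds = c_cal * dload_doutput.sum(axis=1)       # one threshold per load component

for j, thr in enumerate(load_thresholds):
    print(f"load component {j + 1}: residual outlier threshold = {thr:.3f} load units")
```

For repeat-point residuals the same sums would simply be multiplied by c_rep instead of c_cal.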
Wang, Boshuo; Aberra, Aman S; Grill, Warren M; Peterchev, Angel V
2018-04-01
We present a theory and computational methods to incorporate transverse polarization of neuronal membranes into the cable equation to account for the secondary electric field generated by the membrane in response to transverse electric fields. The effect of transverse polarization on nonlinear neuronal activation thresholds is quantified and discussed in the context of previous studies using linear membrane models. The response of neuronal membranes to applied electric fields is derived under two time scales and a unified solution of transverse polarization is given for spherical and cylindrical cell geometries. The solution is incorporated into the cable equation re-derived using an asymptotic model that separates the longitudinal and transverse dimensions. Two numerical methods are proposed to implement the modified cable equation. Several common neural stimulation scenarios are tested using two nonlinear membrane models to compare thresholds of the conventional and modified cable equations. The implementations of the modified cable equation incorporating transverse polarization are validated against previous results in the literature. The test cases show that transverse polarization has limited effect on activation thresholds. The transverse field only affects thresholds of unmyelinated axons for short pulses and in low-gradient field distributions, whereas myelinated axons are mostly unaffected. The modified cable equation captures the membrane's behavior on different time scales and models more accurately the coupling between electric fields and neurons. It addresses the limitations of the conventional cable equation and allows sound theoretical interpretations. The implementation provides simple methods that are compatible with current simulation approaches to study the effect of transverse polarization on nonlinear membranes. The minimal influence by transverse polarization on axonal activation thresholds for the nonlinear membrane models indicates that predictions of stronger effects in linear membrane models with a fixed activation threshold are inaccurate. Thus, the conventional cable equation works well for most neuroengineering applications, and the presented modeling approach is well suited to address the exceptions.
Dual processing model of medical decision-making
2012-01-01
Background Dual processing theory of human cognition postulates that reasoning and decision-making can be described as a function of both an intuitive, experiential, affective system (system I) and/or an analytical, deliberative (system II) processing system. To date no formal descriptive model of medical decision-making based on dual processing theory has been developed. Here we postulate such a model and apply it to a common clinical situation: whether treatment should be administered to the patient who may or may not have a disease. Methods We developed a mathematical model in which we linked a recently proposed descriptive psychological model of cognition with the threshold model of medical decision-making and show how this approach can be used to better understand decision-making at the bedside and explain the widespread variation in treatments observed in clinical practice. Results We show that physicians' beliefs about whether to treat at higher (lower) probability levels compared to the prescriptive therapeutic thresholds obtained via system II processing are moderated by system I and the ratio of benefit and harms as evaluated by both system I and II. Under some conditions, the system I decision maker's threshold may dramatically drop below the expected utility threshold derived by system II. This can explain the overtreatment often seen in contemporary practice. The opposite can also occur, as in situations where empirical evidence is considered unreliable, or when cognitive processes of decision-makers are biased through recent experience: the threshold will increase relative to the normative threshold value derived via system II using the expected utility threshold. This inclination toward higher diagnostic certainty may, in turn, explain undertreatment that is also documented in the current medical practice. Conclusions We have developed the first dual processing model of medical decision-making that has potential to enrich the current medical decision-making field, which is still to a large extent dominated by expected utility theory. The model also provides a platform for reconciling two groups of competing dual processing theories (parallel competitive with default-interventionalist theories). PMID:22943520
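For reference, the expected-utility ("system II") treatment threshold mentioned above is conventionally obtained by equating the expected utilities of treating and withholding treatment; a standard sketch of that derivation, with notation assumed here (B is the net benefit of treating the diseased, H the net harm of treating the disease-free), is:

```latex
% Expected utility of treating vs. not treating at disease probability p
% (U_{T,D}: treat the diseased, U_{T,\bar D}: treat the disease-free, etc.)
p\,U_{T,D} + (1-p)\,U_{T,\bar D} \;=\; p\,U_{\bar T,D} + (1-p)\,U_{\bar T,\bar D}
\quad\Longrightarrow\quad
p_t = \frac{H}{B + H},
\qquad B = U_{T,D} - U_{\bar T,D},\quad H = U_{\bar T,\bar D} - U_{T,\bar D}.
```

Under this sketch, system II indicates treatment whenever the estimated disease probability exceeds p_t; the dual-processing model above describes how system I shifts the operative threshold away from this value.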
Derivation of critical rainfall thresholds for landslide in Sicily
NASA Astrophysics Data System (ADS)
Caracciolo, Domenico; Arnone, Elisa; Noto, Leonardo V.
2015-04-01
Rainfall is the primary trigger of shallow landslides that can cause fatalities, damage to properties and economic losses in many areas of the world. For this reason, determining the rainfall amount/intensity responsible for landslide occurrence is important, and may contribute to mitigating the related risk and saving lives. Efforts have been made in different countries to investigate triggering conditions in order to define landslide-triggering rainfall thresholds. The rainfall thresholds are generally described by a power-law relationship between rainfall duration and cumulated event rainfall or rainfall intensity, whose parameters are estimated empirically from the analysis of historical rainfall events that triggered landslides. The aim of this paper is the derivation of critical rainfall thresholds for landslide occurrence in Sicily, southern Italy, by focusing particularly on the role of the antecedent wet conditions. The creation of the appropriate landslide-rainfall database likely represents one of the main efforts in this type of analysis. For this work, historical landslide events that occurred in Sicily from 1919 to 2001 were selected from the archive of the Sistema Informativo sulle Catastrofi Idrogeologiche, developed under the project Aree Vulnerabili Italiane. The corresponding triggering precipitations were screened from the raingauge network in Sicily, maintained by the Osservatorio delle Acque - Agenzia Regionale per i Rifiuti e le Acque. In particular, a detailed analysis was carried out to identify and reconstruct the hourly rainfall events that caused the selected landslides. A bootstrapping statistical technique has been used to determine the uncertainties associated with the threshold parameters. The rainfall thresholds at different exceedance probability levels, from 1% to 10%, were defined in terms of cumulated event rainfall, E, and rainfall duration, D. The role of rainfall prior to the damaging events was taken into account by including in the analysis the rainfall that fell 6, 15 and 30 days before each landslide. The antecedent rainfall turned out to be particularly important in triggering landslides. The rainfall thresholds obtained for Sicily were compared with the regional curves proposed by various authors, confirming good agreement.
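A generic E-D threshold fit with bootstrap uncertainty can be sketched as below; the synthetic triggering events and the quantile-shift fitting recipe are assumptions for illustration, not the rainfall records or the exact procedure used for Sicily.

```python
import numpy as np

# Minimal sketch (synthetic triggering events, illustrative fitting recipe) of a
# cumulated rainfall-duration (E-D) threshold of power-law form E = a * D**b at
# a chosen exceedance probability, with bootstrap uncertainty on the parameters.

rng = np.random.default_rng(42)
n_events = 120
D = 10 ** rng.uniform(0.3, 2.2, n_events)                      # duration (h)
E = 2.0 * D ** 0.45 * 10 ** rng.normal(0.0, 0.15, n_events)    # cumulated rainfall (mm)

def threshold_params(dur, rain, exceed_prob=0.05):
    """Fit log10(E) = log10(a) + b*log10(D), then lower the intercept so that only
    a fraction `exceed_prob` of the triggering events lies below the curve."""
    b, loga = np.polyfit(np.log10(dur), np.log10(rain), 1)
    resid = np.log10(rain) - (loga + b * np.log10(dur))
    return 10 ** (loga + np.quantile(resid, exceed_prob)), b

a_hat, b_hat = threshold_params(D, E)

boot = []
for _ in range(500):                                    # bootstrap the events
    idx = rng.integers(0, n_events, n_events)
    boot.append(threshold_params(D[idx], E[idx]))
boot = np.array(boot)
a_lo, a_hi = np.percentile(boot[:, 0], [2.5, 97.5])
b_lo, b_hi = np.percentile(boot[:, 1], [2.5, 97.5])
print(f"5% threshold: E = {a_hat:.2f} * D^{b_hat:.2f}  "
      f"(a in [{a_lo:.2f}, {a_hi:.2f}], b in [{b_lo:.2f}, {b_hi:.2f}])")
```

Antecedent rainfall could be incorporated in such a sketch by adding the 6-, 15- or 30-day totals as extra predictors before the threshold is lowered to the chosen exceedance probability.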
Higgs boson gluon-fusion production beyond threshold in N3LO QCD
Anastasiou, Charalampos; Duhr, Claude; Dulat, Falko; ...
2015-03-18
In this study, we compute the gluon fusion Higgs boson cross-section at N3LO through the second term in the threshold expansion. This calculation constitutes a major milestone towards the full N3LO cross section. Our result has the best formal accuracy in the threshold expansion currently available, and includes contributions from collinear regions besides subleading corrections from soft and hard regions, as well as certain logarithmically enhanced contributions for general kinematics. We use our results to perform a critical appraisal of the validity of the threshold approximation at N3LO in perturbative QCD.
Evaluation of thresholding techniques for segmenting scaffold images in tissue engineering
NASA Astrophysics Data System (ADS)
Rajagopalan, Srinivasan; Yaszemski, Michael J.; Robb, Richard A.
2004-05-01
Tissue engineering attempts to address the ever widening gap between the demand and supply of organ and tissue transplants using natural and biomimetic scaffolds. The regeneration of specific tissues aided by synthetic materials is dependent on the structural and morphometric properties of the scaffold. These properties can be derived non-destructively using quantitative analysis of high resolution microCT scans of scaffolds. Thresholding of the scanned images into polymeric and porous phase is central to the outcome of the subsequent structural and morphometric analysis. Visual thresholding of scaffolds produced using stochastic processes is inaccurate. Depending on the algorithmic assumptions made, automatic thresholding might also be inaccurate. Hence there is a need to analyze the performance of different techniques and propose alternate ones, if needed. This paper provides a quantitative comparison of different thresholding techniques for segmenting scaffold images. The thresholding algorithms examined include those that exploit spatial information, locally adaptive characteristics, histogram entropy information, histogram shape information, and clustering of gray-level information. The performance of different techniques was evaluated using established criteria, including misclassification error, edge mismatch, relative foreground error, and region non-uniformity. Algorithms that exploit local image characteristics seem to perform much better than those using global information.
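One of the evaluation criteria named above, the misclassification error, together with a clustering-based threshold of the Otsu type, can be illustrated on a synthetic two-phase image; the image, noise level and bin count below are illustrative assumptions, and the Otsu routine is written out directly so the example is self-contained rather than a reproduction of the algorithms compared in the paper.

```python
import numpy as np

# Minimal sketch (synthetic two-phase image, illustrative metrics): compare a
# thresholding result against a known ground truth using the misclassification
# error, with a gray-level clustering (Otsu-style) threshold.

rng = np.random.default_rng(7)
truth = rng.random((128, 128)) < 0.35                  # "polymer" phase ground truth
image = np.where(truth, 160.0, 90.0) + rng.normal(0.0, 20.0, truth.shape)

def otsu_threshold(img, nbins=256):
    """Choose the threshold maximising the between-class variance of gray levels."""
    hist, edges = np.histogram(img, bins=nbins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    w = hist / hist.sum()
    w0 = np.cumsum(w)                                  # class probabilities below threshold
    w1 = 1.0 - w0
    mu0 = np.cumsum(w * centers) / np.where(w0 > 0, w0, 1)
    mu_total = np.sum(w * centers)
    mu1 = (mu_total - np.cumsum(w * centers)) / np.where(w1 > 0, w1, 1)
    between = w0 * w1 * (mu0 - mu1) ** 2
    return centers[np.argmax(between)]

def misclassification_error(segmented, truth):
    """Fraction of pixels assigned to the wrong phase."""
    return np.mean(segmented != truth)

t = otsu_threshold(image)
segmented = image > t
print(f"Otsu threshold = {t:.1f}, misclassification error = "
      f"{misclassification_error(segmented, truth):.4f}")
```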
Detection of dominant runoff generation processes in flood frequency analysis
NASA Astrophysics Data System (ADS)
Iacobellis, Vito; Fiorentino, Mauro; Gioia, Andrea; Manfreda, Salvatore
2010-05-01
The investigation on hydrologic similarity represents one of the most exciting challenges faced by hydrologists in the last few years, in order to reduce uncertainty on flood prediction in ungauged basins (e.g., IAHS Decade on Predictions in Ungauged Basins (PUB) - Sivapalan et al., 2003). In perspective, the identification of dominant runoff generation mechanisms may provide a strategy for catchment classification and the identification of hydrologically homogeneous regions. In this context, we exploited the framework of theoretically derived flood probability distributions, in order to interpret the physical behavior of real basins. Recent developments on theoretically derived distributions have highlighted that in a given basin different runoff processes may coexist and modify or affect the shape of flood distributions. The identification of dominant runoff generation mechanisms represents a key signature of flood distributions, providing insight into hydrologic similarity. Iacobellis and Fiorentino (2000) introduced a novel distribution of flood peak annual maxima, the "IF" distribution, which exploited the variable source area concept, coupled with a runoff threshold having scaling properties. More recently, Gioia et al. (2008) introduced the Two Component-IF (TCIF) distribution, generalizing the IF distribution, based on two different threshold mechanisms, associated respectively with ordinary and extraordinary events. Indeed, ordinary floods are mostly due to rainfall events exceeding a threshold infiltration rate in a small source area, while the so-called outlier events, often responsible for the high skewness of flood distributions, are triggered by severe rainfalls exceeding a threshold storage in a large portion of the basin. Within this scheme, we focused on the application of both models (IF and TCIF) over a considerable number of catchments belonging to different regions of Southern Italy. In particular, we stressed, as a case of strong general interest in the field of statistical hydrology, the role of procedures for parameter estimation and techniques for model selection in the case of nested distributions. References Gioia, A., V. Iacobellis, S. Manfreda, M. Fiorentino, Runoff thresholds in derived flood frequency distributions, Hydrol. Earth Syst. Sci., 12, 1295-1307, 2008. Iacobellis, V., and M. Fiorentino (2000), Derived distribution of floods based on the concept of partial area coverage with a climatic appeal, Water Resour. Res., 36(2), 469-482. Sivapalan, M., Takeuchi, K., Franks, S. W., Gupta, V. K., Karambiri, H., Lakshmi, V., Liang, X., McDonnell, J. J., Mendiondo, E. M., O'Connell, P. E., Oki, T., Pomeroy, J. W., Schertzer, D., Uhlenbrook, S. and Zehe, E.: IAHS Decade on Predictions in Ungauged Basins (PUB), 2003-2012: Shaping an exciting future for the hydrological sciences, Hydrol. Sci. J., 48(6), 857-880, 2003.
NASA Astrophysics Data System (ADS)
Magnani, Federico; Dewar, Roderick C.; Borghetti, Marco
2009-04-01
Leakage (spillover) refers to the unintended negative (positive) consequences of forest carbon (C) management in one area on C storage elsewhere. For example, the local C storage benefit of less intensive harvesting in one area may be offset, partly or completely, by intensified harvesting elsewhere in order to meet global timber demand. We present the results of a theoretical study aimed at identifying the key factors determining leakage and spillover, as a prerequisite for more realistic numerical studies. We use a simple model of C storage in managed forest ecosystems and their wood products to derive approximate analytical expressions for the leakage induced by decreasing the harvesting frequency of existing forest, and the spillover induced by establishing new plantations, assuming a fixed total wood production from local and remote (non-local) forests combined. We find that leakage and spillover depend crucially on the growth rates, wood product lifetimes and woody litter decomposition rates of local and remote forests. In particular, our results reveal critical thresholds for leakage and spillover, beyond which effects of forest management on remote C storage exceed local effects. Order of magnitude estimates of leakage indicate its potential importance at global scales.
A local PDE model of aggregation formation in bacterial colonies
NASA Astrophysics Data System (ADS)
Chavy-Waddy, Paul-Christopher; Kolokolnikov, Theodore
2016-10-01
We study pattern formation in a model of cyanobacteria motion recently proposed by Galante, Wisen, Bhaya and Levy. By taking a continuum limit of their model, we derive a novel fourth-order nonlinear parabolic PDE that governs the behaviour of the model. This PDE is u_t = -u_xx - u_xxxx + α (u_x u_xx / u)_x. We then derive the instability thresholds for the onset of pattern formation. We also compute analytically the spatial profiles of the steady-state aggregation density. These profiles are shown to be of the form sech^p, where the exponent p is related to the parameters of the model. Full numerical simulations give a favorable comparison between the continuum and the underlying discrete system, and show that the aggregation profiles are stable above the critical threshold.
Tluczkiewicz, I; Kühne, R; Ebert, R-U; Batke, M; Schüürmann, G; Mangelsdorf, I; Escher, S E
2016-07-01
The present publication describes an integrative grouping concept to derive threshold values for inhalation exposure. The classification scheme starts with differences in toxicological potency and develops criteria to group compounds into two potency classes, namely toxic (T-group) or low toxic (L-group). The TTC concept for inhalation exposure is based on the TTC RepDose data set, consisting of 296 organic compounds with 608 repeated-dose inhalation studies. Initially, 21 structural features (SFs) were identified as being characteristic for compounds of either high or low NOEC values (Schüürmann et al., 2016). In subsequent analyses these SF groups were further refined by taking into account structural homogeneity, type of toxicological effect observed, differences in absorption, metabolism and mechanism of action (MoA), to better define their structural and toxicological boundaries. Differentiation of a local or systemic mode of action did not improve the classification scheme. Finally, 28 groups were discriminated: 19 T-groups and 9 L-groups. Clearly distinct thresholds were derived for the T- and L-toxicity groups, being 2 × 10(-5) ppm (2 μg/person/day) and 0.05 ppm (4260 μg/person/day), respectively. The derived thresholds and the classification are compared to the initial mainly structure driven grouping (Schüürmann et al., 2016) and to the Cramer classification. Copyright © 2016 Elsevier Inc. All rights reserved.
A Framework for Optimizing Phytosanitary Thresholds in Seed Systems.
Choudhury, Robin Alan; Garrett, Karen A; Klosterman, Steven J; Subbarao, Krishna V; McRoberts, Neil
2017-10-01
Seedborne pathogens and pests limit production in many agricultural systems. Quarantine programs help prevent the introduction of exotic pathogens into a country, but few regulations directly apply to reducing the reintroduction and spread of endemic pathogens. Use of phytosanitary thresholds helps limit the movement of pathogen inoculum through seed, but the costs associated with rejected seed lots can be prohibitive for voluntary implementation of phytosanitary thresholds. In this paper, we outline a framework to optimize thresholds for seedborne pathogens, balancing the cost of rejected seed lots and benefit of reduced inoculum levels. The method requires relatively small amounts of data, and the accuracy and robustness of the analysis improves over time as data accumulate from seed testing. We demonstrate the method first and illustrate it with a case study of seedborne oospores of Peronospora effusa, the causal agent of spinach downy mildew. A seed lot threshold of 0.23 oospores per seed could reduce the overall number of oospores entering the production system by 90% while removing 8% of seed lots destined for distribution. Alternative mitigation strategies may result in lower economic losses to seed producers, but have uncertain efficacy. We discuss future challenges and prospects for implementing this approach.
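The trade-off described above, between inoculum removed and seed lots rejected, can be sketched with synthetic lot data; the lognormal contamination levels and the 90% removal target below are illustrative placeholders, and the 0.23 oospores-per-seed figure in the study comes from its own seed-testing data rather than anything computed here.

```python
import numpy as np

# Minimal sketch (synthetic lot data): choose a per-seed inoculum threshold so
# that rejecting all lots above it removes a target share of the total oospores
# entering the production system, and report how many lots that rejection costs.

rng = np.random.default_rng(3)
n_lots = 400
oospores_per_seed = rng.lognormal(mean=-3.5, sigma=1.8, size=n_lots)   # synthetic lot means

def lots_rejected_for_target(x, target_removed=0.90):
    """Smallest rejection set (most contaminated lots first) removing `target_removed`
    of the total inoculum; returns the implied threshold and rejected fraction."""
    order = np.argsort(x)[::-1]                       # most contaminated lots first
    removed = np.cumsum(x[order]) / x.sum()
    k = np.searchsorted(removed, target_removed) + 1  # number of lots to reject
    return x[order][k - 1], k / x.size

threshold, frac_rejected = lots_rejected_for_target(oospores_per_seed)
print(f"threshold ~ {threshold:.3f} oospores/seed rejects "
      f"{100 * frac_rejected:.1f}% of lots while removing 90% of oospores")
```

As seed-test data accumulate, the synthetic lognormal would simply be replaced by the empirical lot distribution, which is the sense in which the framework's accuracy improves over time.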
Concerns Around Budget Impact Thresholds: Not All Drugs Are the Same.
Ciarametaro, Michael; Abedi, Susan; Sohn, Adam; Ge, Colin Fan; Odedara, Neel; Dubois, Robert
2017-02-01
The use of budget thresholds is a recent development in the United States (e.g., the Institute for Clinical and Economic Review drug assessments). Budget thresholds establish limits that require some type of budgetary action if exceeded. This research focused on the advisability of using product-level budget thresholds as fixed spending caps by examining whether they are likely to improve or worsen market efficiency over status quo. The aim of this study was to determine whether fixed product-level spending caps are advisable for biopharmaceuticals. We systematically examined 5-year, postlaunch revenue for drugs that launched in the United States between 2003 and 2014 using the IMS MIDAS database. For products launched between 2011 and 2014, we used historical revenue as the baseline and trended out 60 months postlaunch based on exponential smoothing. Forecasted fifth-year revenue was compared to analyst reports. Fifth-year revenue was compared against a hypothetical $904 million spending cap to determine the amount of annual spending that might require reallocation. Descriptive statistics of 5-year, postlaunch revenue and annual spending requiring reallocation were calculated. Adhering to a $904 million product-level spending cap requires that approximately one-third of new drug spending be reallocated to other goods and services that have the potential to be less cost-effective due to significant barriers. Fixed product-level spending caps have the potential to reduce market efficiency due to their independence from value and the presence of important operational challenges. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
Spatial patterns of methylmercury risks to common loons and piscivorous fish in Canada.
Depew, David C; Burgess, Neil M; Campbell, Linda M
2013-11-19
Deposition of inorganic mercury (Hg) from the atmosphere remains the principal source of Hg contamination for most aquatic ecosystems. Inorganic Hg is readily converted to toxic methylmercury (MeHg) that bioaccumulates in aquatic food webs and may pose a risk to piscivorous fish and wildlife. We conducted a screening-level risk assessment to evaluate the extent of risk to top aquatic piscivores: the common loon (Gavia immer), walleye (Sander vitreus), and northern pike (Esox lucius). Risk quotients (RQs) were calculated on the basis of a dietary Hg exposure indicator (HgPREY) modeled from over 230,000 observations of fish Hg concentrations at over 1900 locations across Canada and dietary Hg exposure screening benchmarks derived specifically for this assessment. HgPREY exceeded benchmark thresholds related to impaired productivity and behavior in adult loons at 10% and 36% of sites, respectively, and exceeded benchmark thresholds for impaired reproduction and health in fishes at 82% and 73% of sites, respectively. The ecozones of southeastern Canada characterized by extensive forest cover, elevated Hg deposition, and poorly buffered soils had the greatest proportion of RQs > 1.0. Results of this assessment suggest that common loons and piscivorous fishes would likely benefit from reductions in Hg deposition, especially in southeastern Canada.
Testing for a Debt-Threshold Effect on Output Growth.
Lee, Sokbae; Park, Hyunmin; Seo, Myung Hwan; Shin, Youngki
2017-12-01
Using the Reinhart-Rogoff dataset, we find a debt threshold not around 90 per cent but around 30 per cent, above which the median real gross domestic product (GDP) growth falls abruptly. Our work is the first to formally test for threshold effects in the relationship between public debt and median real GDP growth. The null hypothesis of no threshold effect is rejected at the 5 per cent significance level for most cases. While we find no evidence of a threshold around 90 per cent, our findings from the post-war sample suggest that the debt threshold for economic growth may exist around a relatively small debt-to-GDP ratio of 30 per cent. Furthermore, countries with debt-to-GDP ratios above 30 per cent have GDP growth that is 1 percentage point lower at the median.
Canady, Richard; Lane, Richard; Paoli, Greg; Wilson, Margaret; Bialk, Heidi; Hermansky, Steven; Kobielush, Brent; Lee, Ji-Eun; Llewellyn, Craig; Scimeca, Joseph
2013-01-01
Threshold of Toxicological Concern (TTC) decision-support methods present a pragmatic approach to using data from well-characterized chemicals and protective estimates of exposure in a stepwise fashion to inform decisions regarding low-level exposures to chemicals for which few data exist. It is based on structural and functional categorizations of chemicals derived from decades of animal testing with a wide variety of chemicals. Expertise is required to use the TTC methods, and there are situations in which its use is clearly inappropriate or not currently supported. To facilitate proper use of the TTC, this paper describes issues to be considered by risk managers when faced with the situation of an unexpected substance in food. Case studies are provided to illustrate the implementation of these considerations, demonstrating the steps taken in deciding whether it would be appropriate to apply the TTC approach in each case. By appropriately applying the methods, employing the appropriate scientific expertise, and combining use with the conservative assumptions embedded within the derivation of the thresholds, the TTC can realize its potential to protect public health and to contribute to efficient use of resources in food safety risk management. PMID:24090142
Consensus sediment quality guidelines for polycyclic aromatic hydrocarbon mixtures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Swartz, R.C.
1999-04-01
Sediment quality guidelines (SQGs) for polycyclic aromatic hydrocarbons (PAHs) have been derived from a variety of laboratory, field, and theoretical foundations. They include the screening level concentration, effects ranges-low and -median, equilibrium partitioning concentrations, apparent effects threshold, ΣPAH model, and threshold and probable effects levels. The resolution of controversial differences among the PAH SQGs lies in an understanding of the effects of mixtures. Polycyclic aromatic hydrocarbons virtually always occur in field-collected sediment as a complex mixture of covarying compounds. When expressed as a mixture concentration, that is, total PAH (TPAH), the guidelines form three clusters that were intended in their original derivations to represent threshold (TEC = 290 µg/g organic carbon [OC]), median (MEC = 1,800 µg/g OC), and extreme (EEC = 10,000 µg/g OC) effects concentrations. The TEC/MEC/EEC consensus guidelines provide a unifying synthesis of other SQGs, reflect causal rather than correlative effects, account for mixtures, and predict sediment toxicity and benthic community perturbations at sites of PAH contamination. The TEC offers the most useful SQG because PAH mixtures are unlikely to cause adverse effects on benthic ecosystems below the TEC.
A. Dennis Lemly
1997-01-01
This paper describes a method for deriving site-specific water quality criteria for selenium using a two-step process: (1) gather information on selenium residues and biological effects at the site and in down-gradient systems and (2) examine criteria based on the degree of bioaccumulation, the relationship between measured residues and threshold concentrations for...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dance, M; Chera, B; Falchook, A
2015-06-15
Purpose: Validate the consistency of a gradient-based segmentation tool to facilitate accurate delineation of PET/CT-based GTVs in head and neck cancers by comparing against hybrid PET/MR-derived GTV contours. Materials and Methods: A total of 18 head and neck target volumes (10 primary and 8 nodal) were retrospectively contoured using a gradient-based segmentation tool by two observers. Each observer independently contoured each target five times. Inter-observer variability was evaluated via absolute percent differences. Intra-observer variability was examined by percentage uncertainty. All target volumes were also contoured using the SUV percent threshold method. The thresholds were explored case by case so that the derived volume matched the gradient-based volume. Dice similarity coefficients (DSC) were calculated to determine overlap of PET/CT GTVs and PET/MR GTVs. Results: Levene's test showed there was no statistically significant difference of the variances between the observers' gradient-derived contours. However, the absolute difference between the observers' volumes was 10.83%, with a range from 0.39% up to 42.89%. PET-avid regions with qualitatively non-uniform shapes and intensity levels had a higher absolute percent difference near 25%, while regions with uniform shapes and intensity levels had an absolute percent difference of 2% between observers. The average percentage uncertainty between observers was 4.83% and 7%. As the volume of the gradient-derived contours increased, the SUV threshold percent needed to match the volume decreased. Dice coefficients showed good agreement of the PET/CT and PET/MR GTVs with an average DSC value across all volumes at 0.69. Conclusion: Gradient-based segmentation of PET volume showed good consistency in general but can vary considerably for non-uniform target shapes and intensity levels. PET/CT-derived GTV contours stemming from the gradient-based tool show good agreement with the anatomically and metabolically more accurate PET/MR-derived GTV contours, but tumor delineation accuracy can be further improved with the use of PET/MR.
Constraints on Models for the Higgs Boson with Exotic Spin and Parity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Emily Hannah
The production of a Higgs boson in association with a vector boson at the Tevatron offers a unique opportunity to study models for the Higgs boson with exotic spin J and parity P assignments. At the Tevatron the VH system is produced near threshold. Different J^P assignments of the Higgs boson can be distinguished by examining the behavior of the cross section near threshold. The relatively low backgrounds at the Tevatron compared to the LHC put us in a unique position to study the direct decay of the Higgs boson to fermions. If the Higgs sector is more complex than predicted, studying the spin and parity of the Higgs boson in all decay modes is important. In this Thesis we will examine the WH → ℓνbb̄ production and decay mode using 9.7 fb⁻¹ of data collected by the D0 experiment in an attempt to derive constraints on models containing exotic values for the spin and parity of the Higgs boson. In particular, we will examine models for a Higgs boson with J^P = 0⁻ and J^P = 2⁺. We use a likelihood ratio to quantify the degree to which our data are incompatible with exotic J^P predictions for a range of possible production rates. Assuming the production cross section times branching ratio of the signals in the models considered is equal to the standard model prediction, the WH → ℓνbb̄ mode alone is unable to reject either exotic model considered. We will also discuss the combination of the ZH → ℓℓbb̄, WH → ℓνbb̄, and VH → ννbb̄ production modes at the D0 experiment and with the CDF experiment. When combining all three production modes at the D0 experiment we reject the J^P = 0⁻ and J^P = 2⁺ hypotheses at the 97.6% CL and at the 99.0% CL, respectively, when assuming the signal production cross section times branching ratio is equal to the standard model predicted value. When combining with the CDF experiment we reject the J^P = 0⁻ and J^P = 2⁺ hypotheses with significances of 5.0 standard deviations and 4.9 standard deviations, respectively.
Leong, Tora; Rehman, Michaela B.; Pastormerlo, Luigi Emilio; Harrell, Frank E.; Coats, Andrew J. S.; Francis, Darrel P.
2014-01-01
Background Clinicians are sometimes advised to make decisions using thresholds in measured variables, derived from prognostic studies. Objectives We studied why there are conflicting apparently-optimal prognostic thresholds, for example in exercise peak oxygen uptake (pVO2), ejection fraction (EF), and Brain Natriuretic Peptide (BNP) in heart failure (HF). Data Sources and Eligibility Criteria Studies testing pVO2, EF or BNP prognostic thresholds in heart failure, published between 1990 and 2010, listed on Pubmed. Methods First, we examined studies testing pVO2, EF or BNP prognostic thresholds. Second, we created repeated simulations of 1500 patients to identify whether an apparently-optimal prognostic threshold indicates step change in risk. Results 33 studies (8946 patients) tested a pVO2 threshold. 18 found it prognostically significant: the actual reported threshold ranged widely (10–18 ml/kg/min) but was overwhelmingly controlled by the individual study population's mean pVO2 (r = 0.86, p<0.00001). In contrast, the 15 negative publications were testing thresholds 199% further from their means (p = 0.0001). Likewise, of 35 EF studies (10220 patients), the thresholds in the 22 positive reports were strongly determined by study means (r = 0.90, p<0.0001). Similarly, in the 19 positives of 20 BNP studies (9725 patients): r = 0.86 (p<0.0001). Second, survival simulations always discovered a “most significant” threshold, even when there was definitely no step change in mortality. With linear increase in risk, the apparently-optimal threshold was always near the sample mean (r = 0.99, p<0.001). Limitations This study cannot report the best threshold for any of these variables; instead it explains how common clinical research procedures routinely produce false thresholds. Key Findings First, shifting (and/or disappearance) of an apparently-optimal prognostic threshold is strongly determined by studies' average pVO2, EF or BNP. Second, apparently-optimal thresholds always appear, even with no step in prognosis. Conclusions Emphatic therapeutic guidance based on thresholds from observational studies may be ill-founded. We should not assume that optimal thresholds, or any thresholds, exist. PMID:24475020
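The second finding above, that a "most significant" threshold appears even when there is no step change in risk, can be reproduced with a simple simulation; the cohort size of 1500 echoes the description above, but the variable, the linear risk model and the chi-square scan over candidate cut-points are illustrative assumptions rather than the paper's exact procedure.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Minimal sketch (synthetic cohorts): even when mortality risk rises smoothly
# (here linearly) with a measured variable, scanning every candidate cut-point
# and keeping the most significant one always yields an apparently optimal
# threshold, and that threshold tends to sit near the cohort mean.

rng = np.random.default_rng(2024)

def apparently_optimal_threshold(n=1500, mean=14.0, sd=4.0):
    x = rng.normal(mean, sd, n)                               # e.g. a peak-VO2-like variable
    p_death = np.clip(0.6 - 0.03 * (x - mean), 0.05, 0.95)    # linear risk, no step
    died = rng.random(n) < p_death
    best_chi2, best_cut = -np.inf, None
    for cut in np.quantile(x, np.linspace(0.05, 0.95, 91)):   # candidate cut-points
        table = np.array([[np.sum((x < cut) & died), np.sum((x < cut) & ~died)],
                          [np.sum((x >= cut) & died), np.sum((x >= cut) & ~died)]])
        chi2 = chi2_contingency(table)[0]
        if chi2 > best_chi2:
            best_chi2, best_cut = chi2, cut
    return best_cut, x.mean()

cuts = np.array([apparently_optimal_threshold() for _ in range(50)])
print("mean 'optimal' cut-point:", round(cuts[:, 0].mean(), 2),
      " vs cohort mean:", round(cuts[:, 1].mean(), 2))
```

Because the dichotomization is most powerful where the split is balanced, the winning cut-point clusters near the sample mean, which is the mechanism behind the shifting thresholds documented above.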
Coupled channel effects on resonance states of positronic alkali atom
NASA Astrophysics Data System (ADS)
Yamashita, Takuma; Kino, Yasushi
2018-01-01
S-wave Feshbach resonance states belonging to dipole series in positronic alkali atoms (e+Li, e+Na, e+K, e+Rb and e+Cs) are studied by coupled-channel calculations within a three-body model. Resonance energies and widths below a dissociation threshold of alkali-ion and positronium are calculated with a complex scaling method. Extended model potentials that provide positronic pseudo-alkali-atoms are introduced to investigate the relationship between the resonance states and dissociation thresholds based on three-body dynamics. Resonances of the dipole series below a dissociation threshold of alkali-atom and positron would have some associations with atomic energy levels that result in longer resonance lifetimes than the prediction of the analytical law derived from the ion-dipole interaction.
Algorithmic detectability threshold of the stochastic block model
NASA Astrophysics Data System (ADS)
Kawamoto, Tatsuro
2018-03-01
The assumption that the values of model parameters are known or correctly learned, i.e., the Nishimori condition, is one of the requirements for the detectability analysis of the stochastic block model in statistical inference. In practice, however, there is no example demonstrating that we can know the model parameters beforehand, and there is no guarantee that the model parameters can be learned accurately. In this study, we consider the expectation-maximization (EM) algorithm with belief propagation (BP) and derive its algorithmic detectability threshold. Our analysis is not restricted to the community structure but includes general modular structures. Because the algorithm cannot always learn the planted model parameters correctly, the algorithmic detectability threshold is qualitatively different from the one with the Nishimori condition.
Zheng, Lei; Zhang, Yizhang; Yan, Zhenguang; Zhang, Juan; Li, Linlin; Zhu, Yan; Zhang, Yahui; Zheng, Xin; Wu, Jiangyue; Liu, Zhengtao
2017-08-01
Atrazine (ATZ) is an herbicide most commonly used in China and other regions of the world. It is reported to be toxic to aquatic organisms, and frequently occurs at relatively high concentrations. Currently, ATZ has been proved to affect reproduction of aquatic species at much lower levels. So it is controversial to perform ecological risk assessment using predicted no-effect concentrations (PNECs) derived from traditional endpoints, which fail to provide adequate protection to aquatic organisms. In this study, PNECs of ATZ were derived based on six endpoints of survival, growth, behavior, biochemistry, genetics and reproduction. The PNEC derived from reproductive lesion was 0.044 μg ATZ L⁻¹, which was obviously lower than that derived from other endpoints. In addition, a tiered ecological risk assessment was conducted in the Taizi River based on six PNECs derived from six categories of toxicity endpoints. Results of these two methods of ecological risk assessment were consistent with each other, and the risk level of ATZ to aquatic organisms reached its highest when taking reproductive fitness into account. The joint probability indicated that severe ecological risk rooted in reproduction might exist in 93.9% and 99.9% of surface water in the Taizi River when the 5% threshold (HC5) and the 1% threshold (HC1) were set up to protect aquatic organisms, respectively. We hope the present work could provide valuable information to manage and control ATZ pollution. Copyright © 2017 Elsevier Inc. All rights reserved.
The impact of manual threshold selection in medical additive manufacturing.
van Eijnatten, Maureen; Koivisto, Juha; Karhu, Kalle; Forouzanfar, Tymour; Wolff, Jan
2017-04-01
Medical additive manufacturing requires standard tessellation language (STL) models. Such models are commonly derived from computed tomography (CT) images using thresholding. Threshold selection can be performed manually or automatically. The aim of this study was to assess the impact of manual and default threshold selection on the reliability and accuracy of skull STL models using different CT technologies. One female and one male human cadaver head were imaged using multi-detector row CT, dual-energy CT, and two cone-beam CT scanners. Four medical engineers manually thresholded the bony structures on all CT images. The lowest and highest selected mean threshold values and the default threshold value were used to generate skull STL models. Geometric variations between all manually thresholded STL models were calculated. Furthermore, in order to calculate the accuracy of the manually and default thresholded STL models, all STL models were superimposed on an optical scan of the dry female and male skulls ("gold standard"). The intra- and inter-observer variability of the manual threshold selection was good (intra-class correlation coefficients >0.9). All engineers selected grey values closer to soft tissue to compensate for bone voids. Geometric variations between the manually thresholded STL models were 0.13 mm (multi-detector row CT), 0.59 mm (dual-energy CT), and 0.55 mm (cone-beam CT). All STL models demonstrated inaccuracies ranging from -0.8 to +1.1 mm (multi-detector row CT), -0.7 to +2.0 mm (dual-energy CT), and -2.3 to +4.8 mm (cone-beam CT). This study demonstrates that manual threshold selection results in better STL models than default thresholding. The use of dual-energy CT and cone-beam CT technology in its present form does not deliver reliable or accurate STL models for medical additive manufacturing. New approaches are required that are based on pattern recognition and machine learning algorithms.
Chen, Sam Li-Sheng; Hsu, Chen-Yang; Yen, Amy Ming-Fang; Young, Graeme P; Chiu, Sherry Yueh-Hsia; Fann, Jean Ching-Yuan; Lee, Yi-Chia; Chiu, Han-Mo; Chiou, Shu-Ti; Chen, Hsiu-Hsi
2018-06-01
Background: Despite age and sex differences in fecal hemoglobin (f-Hb) concentrations, most fecal immunochemical test (FIT) screening programs use population-average cut-points for test positivity. The impact of age/sex-specific thresholds on FIT accuracy and colonoscopy demand for colorectal cancer screening is unknown. Methods: Using data from 723,113 participants enrolled in a Taiwanese population-based colorectal cancer screening with single FIT between 2004 and 2009, sensitivity and specificity were estimated for various f-Hb thresholds for test positivity. This included estimates based on a "universal" threshold, receiver-operating-characteristic curve-derived threshold, targeted sensitivity, targeted false-positive rate, and a colonoscopy-capacity-adjusted method integrating colonoscopy workload with and without age/sex adjustments. Results: Optimal age/sex-specific thresholds were found to be equal to or lower than the universal 20 μg Hb/g threshold. For older males, a higher threshold (24 μg Hb/g) was identified using a 5% false-positive rate. Importantly, a nonlinear relationship was observed between sensitivity and colonoscopy workload, with workload rising disproportionately to sensitivity at 16 μg Hb/g. At this "colonoscopy-capacity-adjusted" threshold, the test positivity (colonoscopy workload) was 4.67% and sensitivity was 79.5%, compared with a lower 4.0% workload and a lower 78.7% sensitivity using 20 μg Hb/g. When constrained on capacity, age/sex-adjusted estimates were generally lower. However, optimizing age/sex-adjusted thresholds increased colonoscopy demand across models by 17% or greater compared with a universal threshold. Conclusions: Age/sex-specific thresholds improve FIT accuracy with modest increases in colonoscopy demand. Impact: Colonoscopy-capacity-adjusted and age/sex-specific f-Hb thresholds may be useful in optimizing individual screening programs based on detection accuracy, population characteristics, and clinical capacity. Cancer Epidemiol Biomarkers Prev; 27(6); 704-9. ©2018 AACR. ©2018 American Association for Cancer Research.
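The sensitivity-versus-workload trade-off behind such capacity-adjusted thresholds can be illustrated on synthetic data; the lognormal f-Hb distributions and prevalence below are illustrative assumptions, and a real program would estimate these curves separately for each age/sex stratum rather than from a pooled synthetic sample.

```python
import numpy as np

# Minimal sketch (synthetic f-Hb distributions): as the positivity threshold
# rises, the colonoscopy workload (test-positive fraction) falls faster than
# sensitivity, so thresholds can be tuned to available capacity.

rng = np.random.default_rng(11)
n_nocrc, n_crc = 100_000, 500
fhb_nocrc = rng.lognormal(mean=0.0, sigma=1.7, size=n_nocrc)   # µg Hb/g, no cancer
fhb_crc = rng.lognormal(mean=4.0, sigma=1.3, size=n_crc)       # µg Hb/g, cancer

fhb = np.concatenate([fhb_nocrc, fhb_crc])
has_crc = np.concatenate([np.zeros(n_nocrc, bool), np.ones(n_crc, bool)])

for cut in (10, 16, 20, 24):
    positive = fhb >= cut
    sensitivity = np.mean(positive[has_crc])
    workload = np.mean(positive)                    # fraction referred to colonoscopy
    print(f"cut-off {cut:>2} µg Hb/g: sensitivity {100 * sensitivity:4.1f}%, "
          f"positivity (workload) {100 * workload:4.2f}%")
```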
DOE Office of Scientific and Technical Information (OSTI.GOV)
Elhadj, Selim; Yoo, Jae-hyuck; Negres, Raluca A.
The optical damage performance of electrically conductive gallium nitride (GaN) and indium tin oxide (ITO) films is addressed using large area, high power laser beam exposures at 1064 nm sub-bandgap wavelength. Analysis of the laser damage process assumes that onset of damage (threshold) is determined by the absorption and heating of a nanoscale region of a characteristic size reaching a critical temperature. We use this model to rationalize semi-quantitatively the pulse width scaling of the damage threshold from picosecond to nanosecond timescales, along with the pulse width dependence of the damage threshold probability derived by fitting large beam damage density data. Multi-shot exposures were used to address lifetime performance degradation described by an empirical expression based on the single exposure damage model. A damage threshold degradation of at least 50% was observed for both materials. Overall, the GaN films tested had 5-10 × higher optical damage thresholds than the ITO films tested for comparable transmission and electrical conductivity. This route to optically robust, large aperture transparent electrodes and power optoelectronics may thus involve use of next generation widegap semiconductors such as GaN.
Universal phase transition in community detectability under a stochastic block model.
Chen, Pin-Yu; Hero, Alfred O
2015-03-01
We prove the existence of an asymptotic phase-transition threshold on community detectability for the spectral modularity method [M. E. J. Newman, Phys. Rev. E 74, 036104 (2006) and Proc. Natl. Acad. Sci. (USA) 103, 8577 (2006)] under a stochastic block model. The phase transition on community detectability occurs as the intercommunity edge connection probability p grows. This phase transition separates a subcritical regime of small p, where modularity-based community detection successfully identifies the communities, from a supercritical regime of large p where successful community detection is impossible. We show that, as the community sizes become large, the asymptotic phase-transition threshold p* is equal to √[p1p2], where pi(i=1,2) is the within-community edge connection probability. Thus the phase-transition threshold is universal in the sense that it does not depend on the ratio of community sizes. The universal phase-transition phenomenon is validated by simulations for moderately sized communities. Using the derived expression for the phase-transition threshold, we propose an empirical method for estimating this threshold from real-world data.
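For readers who want to see the threshold in action, the sketch below (not the authors' code) draws a two-block stochastic block model and splits it with spectral modularity, i.e., the sign of the leading eigenvector of the modularity matrix. Community sizes and connection probabilities are arbitrary illustrative choices.

```python
# Illustrative check of the p* = sqrt(p1*p2) detectability threshold using a
# two-block stochastic block model and spectral modularity (leading
# eigenvector of B = A - k k^T / 2m). Sizes and probabilities are arbitrary.
import numpy as np

def sbm_two_blocks(n1, n2, p1, p2, p, rng):
    """Adjacency matrix of a two-community stochastic block model."""
    n = n1 + n2
    probs = np.full((n, n), p)
    probs[:n1, :n1] = p1
    probs[n1:, n1:] = p2
    upper = np.triu(rng.random((n, n)) < probs, 1)
    return (upper + upper.T).astype(float)

def spectral_modularity_labels(A):
    """Community labels from the sign of the leading eigenvector of B."""
    k = A.sum(axis=1)
    B = A - np.outer(k, k) / k.sum()
    _, vecs = np.linalg.eigh(B)
    return np.sign(vecs[:, -1])

rng = np.random.default_rng(0)
n1 = n2 = 300
p1, p2 = 0.20, 0.10
p_star = np.sqrt(p1 * p2)          # predicted universal threshold, ~0.141
for p in (0.05, p_star, 0.18):
    A = sbm_two_blocks(n1, n2, p1, p2, p, rng)
    labels = spectral_modularity_labels(A)
    truth = np.r_[np.ones(n1), -np.ones(n2)]
    acc = max(np.mean(labels == truth), np.mean(labels == -truth))
    print(f"p = {p:.3f}  (p* = {p_star:.3f})  label accuracy ~ {acc:.2f}")
```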
Education-Adjusted Normality Thresholds for FDG-PET in the Diagnosis of Alzheimer Disease.
Mainta, Ismini C; Trombella, Sara; Morbelli, Silvia; Frisoni, Giovanni B; Garibotto, Valentina
2018-06-05
A corollary of the reserve hypothesis is that what is regarded as pathological cortical metabolism in patients might vary according to education. The aim of this study is to assess the incremental diagnostic value of education-adjusted over unadjusted thresholds on the diagnostic accuracy of FDG-PET as a biomarker for Alzheimer disease (AD). We compared cortical metabolism in 90 healthy controls and 181 AD patients from the Alzheimer Disease Neuroimaging Initiative (ADNI) database. The AUC of the ROC curve did not differ significantly between the whole group and the higher-education patients or the lower-education subjects. The threshold of wMetaROI values providing 80% sensitivity was lower in higher-education patients and higher in the lower-education patients, compared to the standard threshold derived over the whole AD collective, without, however, significant changes in sensitivity and specificity. These data show that education, as a proxy of reserve, is not a major confounder in the diagnostic accuracy of FDG-PET in AD and the adoption of education-adjusted thresholds is not required in daily practice. © 2018 S. Karger AG, Basel.
Polynomial sequences for bond percolation critical thresholds
Scullard, Christian R.
2011-09-22
In this paper, I compute the inhomogeneous (multi-probability) bond critical surfaces for the (4, 6, 12) and (3^4, 6) lattices using the linearity approximation described in (Scullard and Ziff, J. Stat. Mech. 03021), implemented as a branching process of lattices. I find the estimates for the bond percolation thresholds, p_c(4, 6, 12) = 0.69377849... and p_c(3^4, 6) = 0.43437077..., compared with Parviainen's numerical results of p_c = 0.69373383... and p_c = 0.43430621... . These deviations are of the order 10^-5, as is standard for this method. Deriving thresholds in this way for a given lattice leads to a polynomial with integer coefficients, the root in [0, 1] of which gives the estimate for the bond threshold, and I show how the method can be refined, leading to a series of higher-order polynomials making predictions that likely converge to the exact answer. Finally, I discuss how this fact hints that for certain graphs, such as the kagome lattice, the exact bond threshold may not be the root of any polynomial with integer coefficients.
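The abstract does not reproduce the (4, 6, 12) and (3^4, 6) polynomials, so the sketch below illustrates the general recipe (the root in [0, 1] of an integer-coefficient polynomial) with the classical exactly solved case of the triangular-lattice bond threshold, p^3 - 3p + 1 = 0.

```python
# Illustrative example of a critical polynomial yielding a bond threshold:
# the triangular-lattice bond p_c is the root of p^3 - 3p + 1 in [0, 1],
# exactly equal to 2*sin(pi/18). The paper's own polynomials are not shown
# in the abstract, so this well-known case stands in for them.
import numpy as np

coeffs = [1, 0, -3, 1]                      # p^3 - 3p + 1
roots = np.roots(coeffs)
p_c = [r.real for r in roots if abs(r.imag) < 1e-12 and 0.0 < r.real < 1.0][0]
print(f"p_c (triangular bond) = {p_c:.8f}")            # ~0.34729636
print(f"exact value 2*sin(pi/18) = {2*np.sin(np.pi/18):.8f}")
```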
Sparse image reconstruction for molecular imaging.
Ting, Michael; Raich, Raviv; Hero, Alfred O
2009-06-01
The application that motivates this paper is molecular imaging at the atomic level. When discretized at subatomic distances, the volume is inherently sparse. Noiseless measurements from an imaging technology can be modeled by convolution of the image with the system point spread function (psf). Such is the case with magnetic resonance force microscopy (MRFM), an emerging technology where imaging of an individual tobacco mosaic virus was recently demonstrated with nanometer resolution. We also consider additive white Gaussian noise (AWGN) in the measurements. Many prior works on sparse estimators have focused on the case when H has low coherence; however, the system matrix H in our application is the convolution matrix for the system psf. A typical convolution matrix has high coherence. This paper, therefore, does not assume a low coherence H. A discrete-continuous form of the Laplacian and atom at zero (LAZE) p.d.f. used by Johnstone and Silverman is formulated, and two sparse estimators are derived by maximizing the joint p.d.f. of the observation and image conditioned on the hyperparameters. A thresholding rule that generalizes the hard and soft thresholding rules appears in the course of the derivation. This so-called hybrid thresholding rule, when used in the iterative thresholding framework, gives rise to the hybrid estimator, a generalization of the lasso. Estimates of the hyperparameters for the lasso and hybrid estimator are obtained via Stein's unbiased risk estimate (SURE). A numerical study with a Gaussian psf and two sparse images shows that the hybrid estimator outperforms the lasso.
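As background for the thresholding rules mentioned above, here is a minimal sketch of the standard hard and soft rules that the hybrid rule generalizes; the paper's exact hybrid form is not reproduced, and the threshold value is an arbitrary illustration.

```python
# Minimal sketch of the standard hard and soft thresholding rules; the
# "hybrid" rule of the paper interpolates between such rules but its exact
# form is not given in the abstract. Threshold t is arbitrary here.
import numpy as np

def hard_threshold(x, t):
    """Keep coefficients whose magnitude exceeds t, zero the rest."""
    return np.where(np.abs(x) > t, x, 0.0)

def soft_threshold(x, t):
    """Shrink coefficients toward zero by t (the lasso/ISTA update)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

x = np.array([-2.0, -0.5, 0.1, 0.8, 3.0])
print(hard_threshold(x, 1.0))   # [-2.  0.  0.  0.  3.]
print(soft_threshold(x, 1.0))   # [-1.  0.  0.  0.  2.]
```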
Tsaneva, L
1993-01-01
The results from the investigation of the threshold of discomfort in 385 operators from the firm "Kremikovtsi" are discussed. The most pronounced changes are found in operators whose tonal auditory threshold is raised up to 45 dB and above 50 dB, with high confidence probability. The observed changes in the threshold of discomfort are classified into 3 groups: 1) a raised tonal auditory threshold (up to 30 dB) without a decrease in the threshold of discomfort; 2) a decreased threshold of discomfort (by about 15-20 dB) with a raised tonal auditory threshold (up to 45 dB); 3) a decreased threshold of discomfort against the background of a raised (above 50 dB) tonal auditory threshold. Four figures present audiograms illustrating the state of the tonal auditory threshold, the field of hearing and the threshold of discomfort. The field of hearing of the operators from groups III and IV is narrowed, and in the latter also deformed. This pathophysiological phenomenon is explained by the increased effect of the sound irritation and the presence of a recruitment phenomenon, with possible involvement of the central part of the auditory analyser. It is emphasized that the threshold of discomfort is a sensitive index of each operator's individual norms for speech-sound-noise discomfort. (ABSTRACT TRUNCATED AT 250 WORDS)
Lazy workers are necessary for long-term sustainability in insect societies
Hasegawa, Eisuke; Ishii, Yasunori; Tada, Koichiro; Kobayashi, Kazuya; Yoshimura, Jin
2016-01-01
Optimality theory predicts the maximization of productivity in social insect colonies, but many inactive workers are found in ant colonies. Indeed, the low short-term productivity of ant colonies is often the consequence of high variation among workers in the threshold to respond to task-related stimuli. Why is such an inefficient strategy among colonies maintained by natural selection? Here, we show that inactive workers are necessary for the long-term sustainability of a colony. Our simulation shows that colonies with variable thresholds persist longer than those with invariable thresholds because inactive workers perform the critical function of replacing active workers when they become fatigued. Evidence of the replacement of active workers by inactive workers has been found in ant colonies. Thus, the presence of inactive workers increases the long-term persistence of the colony at the expense of decreasing short-term productivity. Inactive workers may represent a bet-hedging strategy in response to environmental stochasticity. PMID:26880339
Cunningham, K.J.; Carlson, J.I.; Hurley, N.F.
2004-01-01
Vuggy porosity is gas- or fluid-filled openings in rock matrix that are large enough to be seen with the unaided eye. Well-connected vugs can form major conduits for flow of ground water, especially in carbonate rocks. This paper presents a new method for quantification of vuggy porosity calculated from digital borehole images collected from 47 test coreholes that penetrate the karstic Pleistocene limestone of the Biscayne aquifer, southeastern Florida. Basically, the method interprets vugs and background based on the grayscale color of each in digital borehole images and calculates a percentage of vuggy porosity. Development of the method was complicated because environmental conditions created an uneven grayscale contrast in the borehole images that makes it difficult to distinguish vugs from background. The irregular contrast was produced by unbalanced illumination of the borehole wall, which was a result of eccentering of the borehole-image logging tool. Experimentation showed that a simple, single grayscale threshold would not realistically differentiate between the grayscale contrast of vugs and background. Therefore, an equation was developed for an effective subtraction of the changing grayscale contrast, due to uneven illumination, to produce a grayscale threshold that successfully identifies vugs. In the equation, a moving average calculated around the circumference of the borehole and expressed as the background grayscale intensity is defined as a baseline from which to identify a grayscale threshold for vugs. A constant was derived empirically by calibration with vuggy porosity values derived from digital images of slabbed-core samples and used to make the subtraction from the background baseline to derive the vug grayscale threshold as a function of azimuth. The method should be effective in estimating vuggy porosity in any carbonate aquifer. © 2003 Published by Elsevier B.V.
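A hedged sketch of the idea, not the USGS implementation: a circular moving average around the borehole circumference serves as the background baseline, and a constant offset below it defines the vug threshold as a function of azimuth. The window size, the offset, and the assumption that vugs image darker than the background are illustrative choices.

```python
# Hedged sketch of azimuthal-baseline thresholding for vug detection in an
# unwrapped borehole image (rows = depth, cols = azimuth). Not calibrated.
import numpy as np

def vug_mask(image, window=31, offset=40.0):
    """Boolean mask of pixels darker than a locally derived vug threshold."""
    n_az = image.shape[1]
    kernel = np.ones(window) / window
    mask = np.zeros_like(image, dtype=bool)
    for i, row in enumerate(image):
        # circular (wrap-around) moving average around the borehole wall
        padded = np.r_[row[-window:], row, row[:window]]
        baseline = np.convolve(padded, kernel, mode="same")[window:window + n_az]
        threshold = baseline - offset        # vug threshold vs. azimuth
        mask[i] = row < threshold            # assumes vugs are darker
    return mask

rng = np.random.default_rng(1)
img = 150 + 10 * rng.standard_normal((100, 360))
img[40:45, 100:130] -= 80                    # synthetic dark "vug"
print(f"vuggy porosity ~ {vug_mask(img).mean() * 100:.2f} %")
```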
Two-nucleon ¹S₀ amplitude zero in chiral effective field theory
NASA Astrophysics Data System (ADS)
Sánchez, M. Sánchez; Yang, C.-J.; Long, Bingwei; van Kolck, U.
2018-02-01
We present a new rearrangement of short-range interactions in the ¹S₀ nucleon-nucleon channel within chiral effective field theory. This is intended to address the slow convergence of Weinberg's scheme, which we attribute to its failure to reproduce the amplitude zero (scattering momentum ≃340 MeV) at leading order. After the power counting scheme is modified to accommodate the zero at leading order, it includes subleading corrections perturbatively in a way that is consistent with renormalization-group invariance. Systematic improvement is shown at next-to-leading order, and we obtain results that fit empirical phase shifts remarkably well all the way up to the pion-production threshold. An approach in which pions have been integrated out is included, which allows us to derive analytic results that also fit phenomenology surprisingly well.
Black, Robert W; Moran, Patrick W; Frankforter, Jill D
2011-04-01
Many streams within the United States are impaired due to nutrient enrichment, particularly in agricultural settings. The present study examines the response of benthic algal communities in agricultural and minimally disturbed sites from across the western United States to a suite of environmental factors, including nutrients, collected at multiple scales. The first objective was to identify the relative importance of nutrients, habitat and watershed features, and macroinvertebrate trophic structure to explain algal metrics derived from deposition and erosion habitats. The second objective was to determine if thresholds in total nitrogen (TN) and total phosphorus (TP) related to algal metrics could be identified and how these thresholds varied across metrics and habitats. Nutrient concentrations within the agricultural areas were elevated and greater than published threshold values. All algal metrics examined responded to nutrients as hypothesized. Although nutrients typically were the most important variables in explaining the variation in each of the algal metrics, environmental factors operating at multiple scales also were important. Calculated thresholds for TN or TP based on the algal metrics generated from samples collected from erosion and deposition habitats were not significantly different. Little variability in threshold values for each metric for TN and TP was observed. The consistency of the threshold values measured across multiple metrics and habitats suggest that the thresholds identified in this study are ecologically relevant. Additional work to characterize the relationship between algal metrics, physical and chemical features, and nuisance algal growth would be of benefit to the development of nutrient thresholds and criteria.
On the Estimation of the Cost-Effectiveness Threshold: Why, What, How?
Vallejo-Torres, Laura; García-Lorenzo, Borja; Castilla, Iván; Valcárcel-Nazco, Cristina; García-Pérez, Lidia; Linertová, Renata; Polentinos-Castro, Elena; Serrano-Aguilar, Pedro
2016-01-01
Many health care systems claim to incorporate the cost-effectiveness criterion in their investment decisions. Information on the system's willingness to pay per effectiveness unit, normally measured as quality-adjusted life-years (QALYs), however, is not available in most countries. This is partly because of the controversy that remains around the use of a cost-effectiveness threshold, about what the threshold ought to represent, and about the appropriate methodology to arrive at a threshold value. The aim of this article was to identify and critically appraise the conceptual perspectives and methodologies used to date to estimate the cost-effectiveness threshold. We provided an in-depth discussion of different conceptual views and undertook a systematic review of empirical analyses. Identified studies were categorized into the two main conceptual perspectives that argue that the threshold should reflect 1) the value that society places on a QALY and 2) the opportunity cost of investment to the system given budget constraints. These studies showed different underpinning assumptions, strengths, and limitations, which are highlighted and discussed. Furthermore, this review allowed us to compare the cost-effectiveness threshold estimates derived from different types of studies. We found that thresholds based on society's valuation of a QALY are generally larger than thresholds resulting from estimating the opportunity cost to the health care system. This implies that some interventions with positive social net benefits, as informed by individuals' preferences, might not be an appropriate use of resources under fixed budget constraints. Copyright © 2016 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
15 CFR 400.31 - Manufacturing and processing activity; criteria.
Code of Federal Regulations, 2010 CFR
2010-01-01
... consider the contributory effect zone savings have as an incremental part of cost effectiveness programs... criteria—(1) Threshold factors. It is the policy of the Board to authorize zone activity only when it is... and as components of imported products. (2) Economic factors. After its review of threshold factors...
Fitness Load and Exercise Time in Secondary Physical Education Classes.
ERIC Educational Resources Information Center
Li, Xiao Jun; Dunham, Paul, Jr.
1993-01-01
Investigates the effect of secondary school physical education on fitness load: the product of the mean heart rate above threshold (144 bpm) and the time duration of heart rate above that threshold. Highly and moderately skilled students achieved fitness load more frequently than their lower skilled colleagues. (GLR)
No minimum threshold for ozone-induced changes in soybean canopy fluxes
USDA-ARS?s Scientific Manuscript database
Tropospheric ozone concentrations [O3] are increasing at rates that exceed any other pollutant. This highly reactive gas drives reductions in plant productivity and canopy water use while also increasing canopy temperature and sensible heat flux. It is not clear whether a minimum threshold of ozone ...
Wenzel, Tim; Stillhart, Cordula; Kleinebudde, Peter; Szepes, Anikó
2017-08-01
Drug load plays an important role in the development of solid dosage forms, since it can significantly influence both processability and final product properties. The percolation threshold of the active pharmaceutical ingredient (API) corresponds to a critical concentration, above which an abrupt change in drug product characteristics can occur. The objective of this study was to identify the percolation threshold of a poorly water-soluble drug with regard to the dissolution behavior from immediate release tablets. The influence of the API particle size on the percolation threshold was also studied. Formulations with increasing drug loads were manufactured via roll compaction using constant process parameters and subsequent tableting. Drug dissolution was investigated in biorelevant medium. The percolation threshold was estimated via a model dependent and a model independent method based on the dissolution data. The intragranular concentration of mefenamic acid had a significant effect on granules and tablet characteristics, such as particle size distribution, compactibility and tablet disintegration. Increasing the intragranular drug concentration of the tablets resulted in lower dissolution rates. A percolation threshold of approximately 20% v/v could be determined for both particle sizes of the API above which an abrupt decrease of the dissolution rate occurred. However, the increasing drug load had a more pronounced effect on dissolution rate of tablets containing the micronized API, which can be attributed to the high agglomeration tendency of micronized substances during manufacturing steps, such as roll compaction and tableting. Both methods that were applied for the estimation of percolation threshold provided comparable values.
van der Hoek, Yntze; Renfrew, Rosalind; Manne, Lisa L.
2013-01-01
Background Identifying persistence and extinction thresholds in species-habitat relationships is a major focal point of ecological research and conservation. However, one major concern regarding the incorporation of threshold analyses in conservation is the lack of knowledge on the generality and transferability of results across species and regions. We present a multi-region, multi-species approach of modeling threshold responses, which we use to investigate whether threshold effects are similar across species and regions. Methodology/Principal Findings We modeled local persistence and extinction dynamics of 25 forest-associated breeding birds based on detection/non-detection data, which were derived from repeated breeding bird atlases for the state of Vermont. We did not find threshold responses to be particularly well-supported, with 9 species supporting extinction thresholds and 5 supporting persistence thresholds. This contrasts with a previous study based on breeding bird atlas data from adjacent New York State, which showed that most species support persistence and extinction threshold models (15 and 22 of 25 study species respectively). In addition, species that supported a threshold model in both states had associated average threshold estimates of 61.41% (SE = 6.11, persistence) and 66.45% (SE = 9.15, extinction) in New York, compared to 51.08% (SE = 10.60, persistence) and 73.67% (SE = 5.70, extinction) in Vermont. Across species, thresholds were found at 19.45–87.96% forest cover for persistence and 50.82–91.02% for extinction dynamics. Conclusions/Significance Through an approach that allows for broad-scale comparisons of threshold responses, we show that species vary in their threshold responses with regard to habitat amount, and that differences between even nearby regions can be pronounced. We present both ecological and methodological factors that may contribute to the different model results, but propose that regardless of the reasons behind these differences, our results merit a warning that threshold values cannot simply be transferred across regions or interpreted as clear-cut targets for ecosystem management and conservation. PMID:23409106
NASA Astrophysics Data System (ADS)
Knoblauch, S.
2009-04-01
Both the potential water consumption of plants and their ability to withdraw soil water are necessary in order to estimate actual evapotranspiration and to predict irrigation timing and amount. With regard to root water uptake, the threshold value at which plants begin to reduce evapotranspiration is an important parameter. Since transpiration is linearly correlated with dry matter production, dry matter production also begins to decline once the AET/PET quotient falls below 1.0 (de Wit 1958, Tanner & Sinclair 1983). Plants respond to drought with biochemical, physiological and morphological modifications in order to avoid damage, for instance by increasing root water uptake. The objective of the study is to determine threshold values of soil water content and pressure head for different field and vegetable crops from lysimeter measurements and to derive so-called reduction functions. Both parameters, the potential water demand in the individual growth stages and the threshold value of soil water content or pressure head, can be determined with weighable field lysimeters. The threshold value is reached when the evapotranspiration under natural rainfall conditions (AET) drops clearly (to 0.8 PET) below the value under well-watered conditions (PET). The basis for the presented results is the Buttelstedt lysimeter station of the Thuringian State Institute of Agriculture, which consists of two lysimeter cellars, each with two weighable monolithic lysimeters. The lysimeters are 2.5 m deep with a surface area of 2 m² to allow unrestricted root growth and a representative number of plants. The weighing accuracy amounts to 0.05 mm. The percolating water is collected by ceramic suction cups with suction up to 0.3 MPa at a depth of 2.3 m. The soil water content is measured using a neutron probe. One of the two lysimeter cellars represents the well-irrigated part of the field, the other the non-irrigated or deficit-irrigated part. The soil is a Haplic Phaeozem with silt-loam texture developed from loess (water content at wilting point between 0.167 and 0.270 cm³/cm³ and at field capacity (0.03 MPa) between 0.286 and 0.342 cm³/cm³). The mean annual temperature is 8.2°C and the mean annual precipitation is 550 mm. Results are as follows: Winter wheat begins to reduce evapotranspiration when the water content in the root zone to a depth of 2.0 m falls below 25% of the available water holding capacity (AWC), equivalent to a soil water amount of 171 mm. The threshold value of potatoes is 40% of the AWC to a rooting depth of 0.6 m (49 mm of soil water). The corresponding values are 40% of the AWC to a rooting depth of 1.2 m for cabbage, 60% of the AWC to a depth of 1.0 m for cauliflower, and 80% of the AWC to a rooting depth of 0.3 m for onion (90, 50 and 5 mm of soil water, respectively). Nevertheless, onion attains a maximum rooting depth of 0.9 m. The maximum rooting depths of winter wheat, potatoes, cabbage and cauliflower are 2.0, 1.0, 1.5 and 1.5 m. The date on which the threshold is reached differs: for winter wheat and cabbage it occurs just before harvest, and for onion a few days after the 8-leaf stage. However, it is assumed that these values also reflect the influence of weather, particularly the transpiration demand of the atmosphere and the amount of rainfall during earlier growth stages, which can favour the development of adaptation mechanisms. There are thus great differences between the plant species in the root water uptake needed to avoid a decline in biomass production due to drought.
Fujii, Shinya; Schlaug, Gottfried
2013-01-01
Humans have the abilities to perceive, produce, and synchronize with a musical beat, yet there are widespread individual differences. To investigate these abilities and to determine if a dissociation between beat perception and production exists, we developed the Harvard Beat Assessment Test (H-BAT), a new battery that assesses beat perception and production abilities. H-BAT consists of four subtests: (1) music tapping test (MTT), (2) beat saliency test (BST), (3) beat interval test (BIT), and (4) beat finding and interval test (BFIT). MTT measures the degree of tapping synchronization with the beat of music, whereas BST, BIT, and BFIT measure perception and production thresholds via psychophysical adaptive stair-case methods. We administered the H-BAT on thirty individuals and investigated the performance distribution across these individuals in each subtest. There was a wide distribution in individual abilities to tap in synchrony with the beat of music during the MTT. The degree of synchronization consistency was negatively correlated with thresholds in the BST, BIT, and BFIT: a lower degree of synchronization was associated with higher perception and production thresholds. H-BAT can be a useful tool in determining an individual's ability to perceive and produce a beat within a single session.
NASA Technical Reports Server (NTRS)
Chittineni, C. B.
1979-01-01
The problem of estimating label imperfections and the use of the estimation in identifying mislabeled patterns is presented. Expressions for the maximum likelihood estimates of classification errors and a priori probabilities are derived from the classification of a set of labeled patterns. Expressions also are given for the asymptotic variances of probability of correct classification and proportions. Simple models are developed for imperfections in the labels and for classification errors and are used in the formulation of a maximum likelihood estimation scheme. Schemes are presented for the identification of mislabeled patterns in terms of threshold on the discriminant functions for both two-class and multiclass cases. Expressions are derived for the probability that the imperfect label identification scheme will result in a wrong decision and are used in computing thresholds. The results of practical applications of these techniques in the processing of remotely sensed multispectral data are presented.
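As a hedged two-class illustration of the thresholding scheme described above (not Chittineni's exact formulation), the sketch below flags a labeled pattern as possibly mislabeled when the discriminant value for its assigned class falls below a chosen cut-off; the Gaussian class model and the threshold value are assumptions made for the example.

```python
# Hedged two-class illustration: flag a labeled pattern as possibly
# mislabeled when the discriminant (here a Gaussian log-likelihood ratio in
# favour of the assigned label) falls below a threshold. Means, labels and
# the cut-off are invented for demonstration.
import numpy as np

rng = np.random.default_rng(2)
mu = {0: np.array([0.0, 0.0]), 1: np.array([3.0, 3.0])}
X = np.vstack([rng.normal(mu[0], 1.0, (50, 2)), rng.normal(mu[1], 1.0, (50, 2))])
y = np.r_[np.zeros(50, int), np.ones(50, int)]
y[:3] = 1                                     # deliberately corrupt three labels

def discriminant(x, label):
    """Log-likelihood ratio in favour of the assigned label (unit covariance)."""
    other = 1 - label
    return 0.5 * (np.sum((x - mu[other]) ** 2) - np.sum((x - mu[label]) ** 2))

threshold = 0.0                               # illustrative cut-off
scores = np.array([discriminant(x, lab) for x, lab in zip(X, y)])
print("patterns flagged as possibly mislabeled:", np.where(scores < threshold)[0])
```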
Amplitude and polarization asymmetries in a ring laser
NASA Technical Reports Server (NTRS)
Campbell, L. L.; Buholz, N. E.
1971-01-01
Asymmetric amplitude effects between the oppositely directed traveling waves in a He-Ne ring laser are analyzed both theoretically and experimentally. These effects make it possible to detect angular orientations of an inner-cavity bar with respect to the plane of the ring cavity. The amplitude asymmetries occur when a birefringent bar is placed in the three-mirror ring cavity, and an axial magnetic field is applied to the active medium. A simplified theoretical analysis is performed by using a first order perturbation theory to derive an expression for the polarization of the active medium, and a set of self-consistent equations are derived to predict threshold conditions. Polarization asymmetries between the oppositely directed waves are also predicted. Amplitude asymmetries similar in nature to those predicted at threshold occur when the laser is operating in 12-15 free-running modes, and polarization asymmetry occurs simultaneously.
Generation of cyclotron harmonic waves in the ionospheric modification experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Janabi, A.H.A.; Kumar, A.; Sharma, R.P.
1994-02-01
In the present paper, the parametric decay instability of the pump X-mode into an electron Bernstein wave (EBW) near the second harmonic of the electron cyclotron frequency and an ion Bernstein wave (IBW) at different harmonics (ω < nω_ci; n = 2, 3, 4) is examined. Expressions are derived for the homogeneous threshold, growth rate and convective threshold of this instability. Applications of the present investigation to ionospheric modification experiments in the F-layer of the ionosphere, as well as to intense electron cyclotron resonance heating in the upcoming MTX tokamak, are given.
Rainfall estimation for real time flood monitoring using geostationary meteorological satellite data
NASA Astrophysics Data System (ADS)
Veerakachen, Watcharee; Raksapatcharawong, Mongkol
2015-09-01
Rainfall estimation by geostationary meteorological satellite data provides good spatial and temporal resolutions. This is advantageous for real time flood monitoring and warning systems. However, a rainfall estimation algorithm developed in one region needs to be adjusted for another climatic region. This work proposes computationally-efficient rainfall estimation algorithms based on an Infrared Threshold Rainfall (ITR) method calibrated with regional ground truth. Hourly rain gauge data collected from 70 stations around the Chao-Phraya river basin were used for calibration and validation of the algorithms. The algorithm inputs were derived from FY-2E satellite observations consisting of infrared and water vapor imagery. The results were compared with the Global Satellite Mapping of Precipitation (GSMaP) near real time product (GSMaP_NRT) using the probability of detection (POD), root mean square error (RMSE) and linear correlation coefficient (CC) as performance indices. Comparison with the GSMaP_NRT product for real time monitoring purpose shows that hourly rain estimates from the proposed algorithm with the error adjustment technique (ITR_EA) offers higher POD and approximately the same RMSE and CC with less data latency.
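The verification statistics named above are standard; the sketch below shows one way they might be computed between hourly satellite estimates and gauge observations, with an assumed 0.1 mm/h rain/no-rain cut-off that is not necessarily the one used in the study.

```python
# Hedged sketch of the verification metrics (POD, RMSE, linear correlation)
# between satellite rainfall estimates and rain-gauge observations. The
# 0.1 mm/h rain/no-rain cut-off and the data are illustrative only.
import numpy as np

def verify(est, obs, rain_thresh=0.1):
    est, obs = np.asarray(est, float), np.asarray(obs, float)
    hits = np.sum((est >= rain_thresh) & (obs >= rain_thresh))
    misses = np.sum((est < rain_thresh) & (obs >= rain_thresh))
    pod = hits / (hits + misses) if hits + misses else np.nan
    rmse = np.sqrt(np.mean((est - obs) ** 2))
    cc = np.corrcoef(est, obs)[0, 1]
    return pod, rmse, cc

obs = [0.0, 1.2, 4.5, 0.0, 2.3, 0.0, 7.8]
est = [0.0, 0.9, 3.8, 0.4, 1.5, 0.0, 6.2]
pod, rmse, cc = verify(est, obs)
print(f"POD = {pod:.2f}, RMSE = {rmse:.2f} mm/h, CC = {cc:.2f}")
```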
Experimental study of pp{eta} dynamics with WASA-at-COSY
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shah, Neha
2011-10-24
To investigate the interaction of the η-meson with nucleons, its production near the kinematical threshold in proton-proton collisions has been studied with the WASA detector at the COSY storage ring in Juelich, Germany. The data were taken at a beam energy of 1400 MeV (corresponding to an excess energy of Q = 57 MeV). The η-meson was detected via its 3π⁰ decay in the nearly 4π detector and the two protons were measured in the forward direction. The determination of the four-vectors of both protons and the η-meson in the final state allowed the complete kinematical information of the ppη system to be derived. The analysis resulted in 9×10⁶ events of η→3π⁰, giving a total production cross-section of (8.87 ± 0.03_stat ± 2.57_sys) μb. The angular distribution of the η-meson in the center-of-mass frame is anisotropic, and the squared invariant mass distributions for proton-proton and proton-η show deviations from pure phase space.
Deuterium isotope effects in polymerization of benzene under pressure
Cai, Weizhao; Dunuwille, Mihindra; He, Jiangang; ...
2017-04-10
The enormous versatility in the properties of carbon materials depends on the content of the sp² and sp³ covalent bonds. Under compression, if intermolecular distances cross a critical threshold, then unsaturated hydrocarbons gradually transform to saturated carbon polymers. However, the mechanism of polymerization, even for benzene, the simplest aromatic hydrocarbon, is still not understood. We used high-pressure synchrotron X-ray, neutron diffraction, and micro-Raman spectroscopy together with density functional calculations to investigate the isotope effects in benzene isotopologues C₆H₆ and C₆D₆ up to 46.0 GPa. Raman spectra of polymeric products recovered from comparable pressures show the progression of polymerization exhibiting a pronounced kinetic isotope effect. Kinetically retarded reactions in C₆D₆ shed light on the mechanism of polymerization of benzene. Lastly, we find that C₆D₆-derived products recovered from P < 35 GPa actively react with moisture, forming polymers with higher sp³ hydrogen contents. A significant isotopic shift (≥7 GPa) in the persistence of the Bragg reflections of C₆D₆ is observed.
Effects of irregular cerebrospinal fluid production rate in human brain ventricular system
NASA Astrophysics Data System (ADS)
Hadzri, Edi Azali; Shamsudin, Amir Hamzah; Osman, Kahar; Abdul Kadir, Mohammed Rafiq; Aziz, Azian Abd
2012-06-01
Hydrocephalus is an abnormal accumulation of fluid in the ventricles and cavities of the brain. It occurs when the cerebrospinal fluid (CSF) flow or absorption is blocked or when excessive CSF is secreted. The excessive accumulation of CSF results in an abnormal widening of the ventricles. This widening creates potentially harmful pressure on the tissues of the brain. In this study, flow analysis of CSF was conducted on a three-dimensional model of the third ventricle and aqueduct of Sylvius, derived from MRI scans. CSF was modeled as a Newtonian fluid and its flow through the region of interest (ROI) was simulated using EFD.Lab software. Different steady flow rates through the Foramen of Monro, classified as normal and hydrocephalus cases, were modeled to investigate their effects. The results show that, for both normal and hydrocephalus cases, the pressure drop of CSF flow across the third ventricle was linearly proportional to the increase in production rate. In conclusion, a flow rate causing a pressure drop of 5 Pa was found to be the threshold for the initial sign of hydrocephalus.
NASA Astrophysics Data System (ADS)
Pryadko, Leonid P.; Dumer, Ilya; Kovalev, Alexey A.
2015-03-01
We construct a lower (existence) bound for the threshold of scalable quantum computation which is applicable to all stabilizer codes, including degenerate quantum codes with sublinear distance scaling. The threshold is based on enumerating irreducible operators in the normalizer of the code, i.e., those that cannot be decomposed into a product of two such operators with non-overlapping support. For quantum LDPC codes with logarithmic or power-law distances, we get threshold values which are parametrically better than the existing analytical bound based on percolation. The new bound also gives a finite threshold when applied to other families of degenerate quantum codes, e.g., the concatenated codes. This research was supported in part by the NSF Grant PHY-1416578 and by the ARO Grant W911NF-11-1-0027.
Describing temporal variation in reticuloruminal pH using continuous monitoring data.
Denwood, M J; Kleen, J L; Jensen, D B; Jonsson, N N
2018-01-01
Reticuloruminal pH has been linked to subclinical disease in dairy cattle, leading to considerable interest in identifying pH observations below a given threshold. The relatively recent availability of continuously monitored data from pH boluses gives new opportunities for characterizing the normal patterns of pH over time and distinguishing these from abnormal patterns using more sensitive and specific methods than simple thresholds. We fitted a series of statistical models to continuously monitored data from 93 animals on 13 farms to characterize normal variation within and between animals. We used a subset of the data to relate deviations from the normal pattern to the productivity of 24 dairy cows from a single herd. Our findings show substantial variation in pH characteristics between animals, although animals within the same farm tended to show more consistent patterns. There was strong evidence for a predictable diurnal variation in all animals, and up to 70% of the observed variation in pH could be explained using a simple statistical model. For the 24 animals with available production information, there was also a strong association between productivity (as measured by both milk yield and dry matter intake) and deviations from the expected diurnal pattern of pH 2 d before the productivity observation. In contrast, there was no association between productivity and the occurrence of observations below a threshold pH. We conclude that statistical models can be used to account for a substantial proportion of the observed variability in pH and that future work with continuously monitored pH data should focus on deviations from a predictable pattern rather than the frequency of observations below an arbitrary pH threshold. Copyright © 2018 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
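A minimal sketch of the modelling idea, not the authors' statistical model: fit a 24 h sinusoid plus mean to an animal's pH series and inspect the residuals (deviations from the expected diurnal pattern) rather than counting observations below a fixed pH threshold. The synthetic data and sampling interval are illustrative.

```python
# Hedged sketch: least-squares fit of a diurnal (24 h) sinusoid to a
# continuously monitored pH series, then residuals as "deviations from the
# expected pattern". Data are synthetic.
import numpy as np

def fit_diurnal(t_hours, ph):
    """Fit ph ~ a + b*sin(2*pi*t/24) + c*cos(2*pi*t/24) by least squares."""
    w = 2 * np.pi / 24.0
    X = np.column_stack([np.ones_like(t_hours),
                         np.sin(w * t_hours), np.cos(w * t_hours)])
    coef, *_ = np.linalg.lstsq(X, ph, rcond=None)
    return coef, X @ coef

rng = np.random.default_rng(3)
t = np.arange(0, 72, 0.25)                       # three days, 15-min sampling
ph = 6.3 + 0.25 * np.sin(2 * np.pi * t / 24) + 0.05 * rng.standard_normal(t.size)
coef, fitted = fit_diurnal(t, ph)
residuals = ph - fitted
print("fitted mean/sin/cos coefficients:", np.round(coef, 3))
print("largest deviation from diurnal pattern:", round(np.abs(residuals).max(), 3))
```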
Carreon, Leah Y; Glassman, Steven D; Ghogawala, Zoher; Mummaneni, Praveen V; McGirt, Matthew J; Asher, Anthony L
2016-06-01
OBJECTIVE Transforaminal lumbar interbody fusion (TLIF) has become the most commonly used fusion technique for lumbar degenerative disorders. This suggests an expectation of better clinical outcomes with this technique, but this has not been validated consistently. How surgical variables and choice of health utility measures drive the cost-effectiveness of TLIF relative to posterolateral fusion (PSF) has not been established. The authors used health utility values derived from Short Form-6D (SF-6D) and EQ-5D and different cost-effectiveness thresholds to evaluate the relative cost-effectiveness of TLIF compared with PSF. METHODS From the National Neurosurgery Quality and Outcomes Database (N(2)QOD), 101 patients with spondylolisthesis who underwent PSF were propensity matched to patients who underwent TLIF. Health-related quality of life measures and perioperative parameters were compared. Because health utility values derived from the SF-6D and EQ-5D questionnaires have been shown to vary in patients with low-back pain, quality-adjusted life years (QALYs) were derived from both measures. On the basis of these matched cases, a sensitivity analysis for the relative cost per QALY of TLIF versus PSF was performed in a series of cost-assumption models. RESULTS Operative time, blood loss, hospital stay, and 30-day and 90-day readmission rates were similar for the TLIF and PSF groups. Both TLIF and PSF significantly improved back and leg pain, Oswestry Disability Index (ODI) scores, and EQ-5D and SF-6D scores at 3 and 12 months postoperatively. At 12 months postoperatively, patients who had undergone TLIF had greater improvements in mean ODI scores (30.4 vs 21.1, p = 0.001) and mean SF-6D scores (0.16 vs 0.11, p = 0.001) but similar improvements in mean EQ-5D scores (0.25 vs 0.22, p = 0.415) as patients treated with PSF. At a cost per QALY threshold of $100,000 and using SF-6D-based QALYs, the authors found that TLIF would be cost-prohibitive compared with PSF at a surgical cost of $4830 above that of PSF. However, with EQ-5D-based QALYs, TLIF would become cost-prohibitive at an increased surgical cost of $2960 relative to that of PSF. With the 2014 US per capita gross domestic product of $53,042 as a more stringent cost-effectiveness threshold, TLIF would become cost-prohibitive at surgical costs $2562 above that of PSF with SF-6D-based QALYs or at a surgical cost exceeding that of PSF by $1570 with EQ-5D-derived QALYs. CONCLUSIONS As with all cost-effectiveness studies, cost per QALY depended on the measure of health utility selected, durability of the intervention, readmission rates, and the accuracy of the cost assumptions.
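As a hedged back-of-the-envelope check of the cost-effectiveness logic, the maximum surgical cost premium at which TLIF stays cost-effective is roughly the willingness-to-pay threshold times the incremental QALY gain; the sketch below ignores durability, readmission and other cost terms the authors modelled, so it only approximates their published break-even figures.

```python
# Hedged back-of-the-envelope check: maximum surgical cost premium ~
# willingness-to-pay threshold x incremental QALY gain. QALY gains are the
# 12-month score differences quoted above; other modelled cost terms are
# ignored, so results only approximate the published break-even values.
def max_cost_premium(wtp_per_qaly, delta_qaly):
    return wtp_per_qaly * delta_qaly

for wtp in (100_000, 53_042):                        # $/QALY thresholds used above
    for label, dq in (("SF-6D", 0.16 - 0.11), ("EQ-5D", 0.25 - 0.22)):
        print(f"WTP ${wtp:,}/QALY, {label}: premium <= "
              f"${max_cost_premium(wtp, dq):,.0f}")
```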
Icing detection from geostationary satellite data using machine learning approaches
NASA Astrophysics Data System (ADS)
Lee, J.; Ha, S.; Sim, S.; Im, J.
2015-12-01
Icing can cause a significant structural damage to aircraft during flight, resulting in various aviation accidents. Icing studies have been typically performed using two approaches: one is a numerical model-based approach and the other is a remote sensing-based approach. The model based approach diagnoses aircraft icing using numerical atmospheric parameters such as temperature, relative humidity, and vertical thermodynamic structure. This approach tends to over-estimate icing according to the literature. The remote sensing-based approach typically uses meteorological satellite/ground sensor data such as Geostationary Operational Environmental Satellite (GOES) and Dual-Polarization radar data. This approach detects icing areas by applying thresholds to parameters such as liquid water path and cloud optical thickness derived from remote sensing data. In this study, we propose an aircraft icing detection approach which optimizes thresholds for L1B bands and/or Cloud Optical Thickness (COT) from Communication, Ocean and Meteorological Satellite-Meteorological Imager (COMS MI) and newly launched Himawari-8 Advanced Himawari Imager (AHI) over East Asia. The proposed approach uses machine learning algorithms including decision trees (DT) and random forest (RF) for optimizing thresholds of L1B data and/or COT. Pilot Reports (PIREPs) from South Korea and Japan were used as icing reference data. Results show that RF produced a lower false alarm rate (1.5%) and a higher overall accuracy (98.8%) than DT (8.5% and 75.3%), respectively. The RF-based approach was also compared with the existing COMS MI and GOES-R icing mask algorithms. The agreements of the proposed approach with the existing two algorithms were 89.2% and 45.5%, respectively. The lower agreement with the GOES-R algorithm was possibly due to the high uncertainty of the cloud phase product from COMS MI.
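A hedged sketch of the classification setup, with synthetic stand-in features in place of the COMS MI / Himawari-8 AHI channels and PIREP labels used in the study; it assumes scikit-learn and an artificial icing rule purely for demonstration.

```python
# Hedged sketch of random-forest icing detection with synthetic stand-in
# features (an IR brightness temperature and cloud optical thickness) and an
# artificial icing rule. Requires scikit-learn; not the study's data or code.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
n = 2000
bt_11um = rng.normal(250, 10, n)          # stand-in IR brightness temperature (K)
cot = rng.gamma(2.0, 5.0, n)              # stand-in cloud optical thickness
# synthetic rule: icing more likely for cold, optically thick cloud
icing = ((bt_11um < 252) & (cot > 8) & (rng.random(n) < 0.9)).astype(int)

X = np.column_stack([bt_11um, cot])
X_tr, X_te, y_tr, y_te = train_test_split(X, icing, test_size=0.3, random_state=0)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
pred = rf.predict(X_te)
far = np.mean(pred[y_te == 0] == 1)       # false alarm rate on non-icing cases
print(f"overall accuracy = {rf.score(X_te, y_te):.3f}, false alarm rate = {far:.3f}")
```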
Influence of the sediment transport threshold on a river network (Invited)
NASA Astrophysics Data System (ADS)
Devauchelle, O.; Petroff, A.; Seybold, H. F.; Rothman, D.
2010-12-01
In order to transport sediment as bedload, a river must impose a sufficient shear stress on its bed. Conversely, a river far above the threshold for bedload would quickly erode its bed and decrease its slope, thus returning towards the threshold. In 1961, F. M. Henderson first used this hypothesis to derive theoretically Lacey's law (which states that the width of a river scales with the square root of its discharge). His reasoning can be extended to demonstrate that, under similar conditions, the product of the water discharge with the square of its slope is constant (Q S2 = const.), the value of this constant depending on the sediment properties. The steephead ravines of the Florida panhandle, formed by seepage erosion in a homogeneous sand plateau, fall remarkably close to Henderson's equilibrium. Thanks to the uniformity of the sediment and to the steady input of groundwater, the hundreds of streams which drain this landscape are ideal field cases to understand how the quasi-equilibrium hypothesis constrains the network structure. Indeed, both Lacey's equation and the above discharge-slope relation are satisfied in the field. The slope-discharge relation Q S2 = const. is a boundary condition for both the aquifer and the landscape itself, as it relates the flux of water drained by the streams to their longitudinal profile. A direct illustration of this coupling is the shape of the longitudinal profile of rivers in the neighborhood of their springs, which we predict theoretically. The boundary condition Q S2 = const. also holds further downstream, and raises delicate theoretical questions concerning the architecture of the entire network. In particular, we address the limitation of the distance between a spring and the first confluence of a stream.
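A small numerical illustration of the threshold-channel relations quoted above, Lacey's law (width proportional to the square root of discharge) and Q S² = const; the values of the constant and of the width coefficient depend on the sediment and are set to arbitrary illustrative numbers here.

```python
# Hedged numerical illustration of the threshold-channel relations:
# Lacey's law w = c*sqrt(Q) and the slope-discharge condition Q*S**2 = const.
# Both constants are arbitrary stand-ins, not values from the Florida field site.
import numpy as np

QS2_CONST = 1.0e-4          # illustrative value of Q*S^2 (m^3/s); sediment-dependent
LACEY_COEF = 4.8            # illustrative width coefficient in w = c*sqrt(Q)

for Q in (0.01, 0.1, 1.0, 10.0):                 # discharge in m^3/s
    S = np.sqrt(QS2_CONST / Q)                   # equilibrium (threshold) slope
    w = LACEY_COEF * np.sqrt(Q)                  # Lacey width
    print(f"Q = {Q:6.2f} m^3/s  ->  slope S = {S:.4f},  width w = {w:.2f} m")
```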
Total and dissociative photoionization cross sections of N2 from threshold to 107 eV
NASA Technical Reports Server (NTRS)
Samson, James A. R.; Masuoka, T.; Pareek, P. N.; Angel, G. C.
1986-01-01
The absolute cross sections for the production of N(+) and N2(+) were measured from the dissociative ionization threshold of 115 A. In addition, the absolute photoabsorption and photoionization cross sections were tabulated between 114 and 796 A. The ionization efficiencies were also given at several discrete wave lengths between 660 and 790 A. The production of N(+) fragment ions are discussed in terms of the doubly excited N2(+) states with binding energies in the range of 24 to 44 eV.
Total and dissociative photoionization cross sections of N2 from threshold to 107 eV
NASA Technical Reports Server (NTRS)
Samson, James A. R.; Masuoka, T.; Pareek, P. N.; Angel, G. C.
1987-01-01
The absolute cross sections for the production of N(+) and N2(+) have been measured from the dissociative ionization threshold to 115 A. In addition, the absolute photoabsorption and photoionization cross sections are tabulated between 114 and 796 A. The ionization efficiencies are also given at several discrete wavelengths between 660 and 790 A. The production of N(+) fragment ions are discussed in terms of the doubly excited N2(+) states with binding energies in the range 24 to 44 eV.
Gauging the likelihood of stable cavitation from ultrasound contrast agents
NASA Astrophysics Data System (ADS)
Bader, Kenneth B.; Holland, Christy K.
2013-01-01
The mechanical index (MI) was formulated to gauge the likelihood of adverse bioeffects from inertial cavitation. However, the MI formulation did not consider bubble activity from stable cavitation. This type of bubble activity can be readily nucleated from ultrasound contrast agents (UCAs) and has the potential to promote beneficial bioeffects. Here, the presence of stable cavitation is determined numerically by tracking the onset of subharmonic oscillations within a population of bubbles for frequencies up to 7 MHz and peak rarefactional pressures up to 3 MPa. In addition, the acoustic pressure rupture threshold of an UCA population was determined using the Marmottant model. The threshold for subharmonic emissions of optimally sized bubbles was found to be lower than the inertial cavitation threshold for all frequencies studied. The rupture thresholds of optimally sized UCAs were found to be lower than the threshold for subharmonic emissions for either single cycle or steady state acoustic excitations. Because the thresholds of both subharmonic emissions and UCA rupture are linearly dependent on frequency, an index of the form ICAV = Pr/f (where Pr is the peak rarefactional pressure in MPa and f is the frequency in MHz) was derived to gauge the likelihood of subharmonic emissions due to stable cavitation activity nucleated from UCAs.
A ratiometric threshold for determining presence of cancer during fluorescence-guided surgery.
Warram, Jason M; de Boer, Esther; Moore, Lindsay S; Schmalbach, Cecelia E; Withrow, Kirk P; Carroll, William R; Richman, Joshua S; Morlandt, Anthony B; Brandwein-Gensler, Margaret; Rosenthal, Eben L
2015-07-01
Fluorescence-guided imaging to assist in identification of malignant margins has the potential to dramatically improve oncologic surgery. However, a standardized method for quantitative assessment of disease-specific fluorescence has not been investigated. Introduced here is a ratiometric threshold derived from mean fluorescent tissue intensity that can be used to semi-quantitatively delineate tumor from normal tissue. Open-field and a closed-field imaging devices were used to quantify fluorescence in punch biopsy tissues sampled from primary tumors collected during a phase 1 trial evaluating the safety of cetuximab-IRDye800 in patients (n = 11) undergoing surgical intervention for head and neck cancer. Fluorescence ratios were calculated using mean fluorescence intensity (MFI) from punch biopsy normalized by MFI of patient-matched tissues. Ratios were compared to pathological assessment and a ratiometric threshold was established to predict presence of cancer. During open-field imaging using an intraoperative device, the threshold for muscle normalized tumor fluorescence was found to be 2.7, which produced a sensitivity of 90.5% and specificity of 78.6% for delineating disease tissue. The skin-normalized threshold generated greater sensitivity (92.9%) and specificity (81.0%). Successful implementation of a semi-quantitative threshold can provide a scientific methodology for delineating disease from normal tissue during fluorescence-guided resection of cancer. © 2015 Wiley Periodicals, Inc.
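A minimal sketch of the ratiometric decision rule: normalize the biopsy's mean fluorescence intensity by the patient-matched reference tissue and call the sample positive when the ratio exceeds the reported muscle-normalized threshold of 2.7; the example MFI values are invented for illustration.

```python
# Hedged sketch of the ratiometric threshold rule: biopsy MFI normalized by a
# patient-matched reference tissue, compared against the reported
# muscle-normalized threshold of 2.7. Example intensities are invented.
def tumor_positive(biopsy_mfi, reference_mfi, threshold=2.7):
    """Return True when normalized fluorescence exceeds the threshold."""
    return (biopsy_mfi / reference_mfi) > threshold

print(tumor_positive(biopsy_mfi=5400.0, reference_mfi=1500.0))   # ratio 3.6 -> True
print(tumor_positive(biopsy_mfi=2800.0, reference_mfi=1500.0))   # ratio ~1.9 -> False
```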
Assessment of the Anticonvulsant Potency of Ursolic Acid in Seizure Threshold Tests in Mice.
Nieoczym, Dorota; Socała, Katarzyna; Wlaź, Piotr
2018-05-01
Ursolic acid (UA) is a plant derived compound which is also a component of the standard human diet. It possesses a wide range of pharmacological properties, i.e., antioxidant, anti-inflammatory, antimicrobial and antitumor, which have been used in folk medicine for centuries. Moreover, influence of UA on central nervous system-related processes, i.e., pain, anxiety and depression, was proved in experimental studies. UA also revealed anticonvulsant properties in animal models of epilepsy and seizures. The aim of the present study was to investigate the influence of UA on seizure thresholds in three acute seizure models in mice, i.e., the 6 Hz-induced psychomotor seizure threshold test, the maximal electroshock threshold (MEST) test and the timed intravenous pentylenetetrazole (iv PTZ) infusion test. We also examined its effect on the muscular strength (assessed in the grip strength test) and motor coordination (estimated in the chimney test) in mice. UA at doses of 50 and 100 mg/kg significantly increased the seizure thresholds in the 6 Hz and MEST tests. The studied compound did not influence the seizure thresholds in the iv PTZ test. Moreover, UA did not affect the motor coordination and muscular strength in mice. UA displays only a weak anticonvulsant potential which is dependent on the used seizure model.
Liu, Zhihua; Yang, Jian; He, Hong S.
2013-01-01
The relative importance of fuel, topography, and weather on fire spread varies at different spatial scales, but how the relative importance of these controls respond to changing spatial scales is poorly understood. We designed a “moving window” resampling technique that allowed us to quantify the relative importance of controls on fire spread at continuous spatial scales using boosted regression trees methods. This quantification allowed us to identify the threshold value for fire size at which the dominant control switches from fuel at small sizes to weather at large sizes. Topography had a fluctuating effect on fire spread across the spatial scales, explaining 20–30% of relative importance. With increasing fire size, the dominant control switched from bottom-up controls (fuel and topography) to top-down controls (weather). Our analysis suggested that there is a threshold for fire size, above which fires are driven primarily by weather and more likely lead to larger fire size. We suggest that this threshold, which may be ecosystem-specific, can be identified using our “moving window” resampling technique. Although the threshold derived from this analytical method may rely heavily on the sampling technique, our study introduced an easily implemented approach to identify scale thresholds in wildfire regimes. PMID:23383247
Damage threshold from large retinal spot size repetitive-pulse laser exposures.
Lund, Brian J; Lund, David J; Edsall, Peter R
2014-10-01
The retinal damage thresholds for large-spot-size, multiple-pulse exposures to a Q-switched, frequency-doubled Nd:YAG laser (532 nm wavelength, 7 ns pulses) have been measured for 100 μm and 500 μm retinal irradiance diameters. The ED50, expressed as energy per pulse, varies only weakly with the number of pulses, n, for these extended spot sizes. The previously reported threshold for a multiple-pulse exposure at a 900 μm retinal spot size shows the same weak dependence on the number of pulses. The multiple-pulse ED50 for an extended-spot-size exposure does not follow the n dependence exhibited by small-spot-size exposures produced by a collimated beam. Curves derived using probability-summation models provide a better fit to the data.
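One common form of a probability-summation model treats the n pulses as independent trials, so the n-pulse damage probability is P_n(E) = 1 - [1 - P_1(E)]^n. The sketch below is a generic illustration, not the authors' fit; the lognormal single-pulse dose-response and its ED50 and slope are assumed values chosen only to show how the per-pulse ED50 drifts downward slowly with n under this model:

    import math

    def single_pulse_damage_prob(dose, ed50, slope):
        """Lognormal (probit-style) single-pulse dose-response: probability of damage
        for one pulse of the given energy (same units as ed50)."""
        z = slope * math.log(dose / ed50)
        return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

    def multi_pulse_damage_prob(dose, n, ed50, slope):
        """Probability summation: damage occurs if any of n independent pulses damages."""
        p1 = single_pulse_damage_prob(dose, ed50, slope)
        return 1.0 - (1.0 - p1) ** n

    # Hypothetical single-pulse ED50 of 1.0 (arbitrary energy units) and probit slope 8.
    for n in (1, 10, 100):
        # Per-pulse dose at which the n-pulse damage probability crosses 50%,
        # found with a crude bisection search.
        lo, hi = 0.1, 2.0
        for _ in range(60):
            mid = 0.5 * (lo + hi)
            if multi_pulse_damage_prob(mid, n, ed50=1.0, slope=8.0) < 0.5:
                lo = mid
            else:
                hi = mid
        print(f"n = {n:3d}: ED50 per pulse ~ {0.5 * (lo + hi):.3f}")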
Stimulated Raman scattering of sub-millimeter waves in bismuth
NASA Astrophysics Data System (ADS)
Kumar, Pawan; Tripathi, V. K.
2007-12-01
A high-power sub-millimeter wave propagating through bismuth, a semimetal with non-spherical energy surfaces, parametrically excites a space-charge mode and a back-scattered electromagnetic wave. The free-carrier density perturbation associated with the space-charge wave couples with the oscillatory velocity due to the pump to drive the scattered wave. The scattered and pump waves exert a ponderomotive force on electrons and holes, driving the space-charge wave. The collisional damping of the decay waves determines the threshold for the parametric instability. The threshold intensity for a 20 μm wavelength pump turns out to be ~2×10¹² W/cm². Above threshold, the growth rate increases with ω0, attains a maximum around ω0 = 6.5ωp, and falls off thereafter.
NASA Astrophysics Data System (ADS)
Abancó, Clàudia; Hürlimann, Marcel; Moya, José; Berenguer, Marc
2016-10-01
Torrential flows like debris flows or debris floods are fast movements formed by a mixture of water and variable amounts of unsorted solid material. They generally occur in steep torrents and pose a high risk in mountainous areas. Rainfall is their most common triggering factor, and the analysis of the critical rainfall conditions is a fundamental research task. Due to their wide use in warning systems, rainfall thresholds for the triggering of torrential flows are an important outcome of such analysis and are empirically derived using data from past events. In 2009, a monitoring system was installed in the Rebaixader catchment, Central Pyrenees (Spain). Since then, rainfall data for 25 torrential flows ("TRIG rainfalls") were recorded with a 5-min sampling frequency. Another 142 rainfalls that did not trigger torrential flows ("NonTRIG rainfalls") were also collected and analyzed. The goal of this work was threefold: (i) characterize rainfall episodes in the Rebaixader catchment and compare rainfall data that triggered torrential flows with data that did not; (ii) define and test intensity-duration (ID) thresholds using rainfall data measured inside the catchment with different techniques; (iii) analyze how the criterion used for defining the rainfall duration and the spatial variability of rainfall influence the values obtained for the thresholds. The statistical analysis of the rainfall characteristics showed that the parameters that best discriminate the TRIG and NonTRIG rainfalls are the rainfall intensities, the mean rainfall and the total rainfall amount. The antecedent rainfall was not significantly different between TRIG and NonTRIG rainfalls, as can be expected when the source material is very pervious (a sandy glacial soil in the study site). Thresholds were derived from data collected at one rain gauge located inside the catchment. Two different methods were applied to calculate the duration and intensity of rainfall: (i) using the total duration, Dtot, and mean intensity, Imean, of the rainfall event, and (ii) using floating durations, D, and intensities, Ifl, based on the maximum values over floating periods of different duration. The resulting thresholds are considerably different (Imean = 6.20 Dtot^-0.36 and Ifl,90% = 5.49 D^-0.75, respectively), showing a strong dependence on the applied methodology. On the other hand, the definition of the thresholds is affected by several types of uncertainty. Data from both rain gauges and weather radar were used to analyze the uncertainty associated with the spatial variability of the triggering rainfalls. The analysis indicates that precipitation recorded by nearby rain gauges can introduce major uncertainties, especially for convective summer storms. Thus, incorporating radar rainfall can significantly improve the accuracy of the measured triggering rainfall. Finally, thresholds were also derived according to three different criteria for the definition of the duration of the triggering rainfall: (i) the duration until the peak intensity, (ii) the duration until the end of the rainfall, and (iii) the duration until the triggering of the torrential flow. An important contribution of this work is the assessment of the threshold relationships obtained using the third definition of duration. Moreover, important differences are observed in the obtained thresholds, showing that ID relationships are significantly dependent on the applied methodology.
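Both thresholds quoted above are power laws of the form I = a * D^b, so checking whether a given rainfall plots above a threshold is a one-line comparison. The sketch below reuses the two fitted parameter pairs from the abstract; the example storm and the exceedance logic are ours:

    def id_threshold_intensity(duration_h, a, b):
        """Critical intensity (mm/h) from an intensity-duration power law I = a * D**b."""
        return a * duration_h ** b

    def exceeds_threshold(duration_h, intensity_mmh, a, b):
        return intensity_mmh >= id_threshold_intensity(duration_h, a, b)

    # Thresholds quoted in the abstract (total-duration and floating-duration definitions).
    TOTAL = (6.20, -0.36)     # Imean = 6.20 * Dtot**-0.36
    FLOATING = (5.49, -0.75)  # Ifl,90% = 5.49 * D**-0.75

    # A hypothetical 2-hour storm with a mean intensity of 5 mm/h.
    print(exceeds_threshold(2.0, 5.0, *TOTAL))     # above the total-duration threshold?
    print(exceeds_threshold(2.0, 5.0, *FLOATING))  # above the floating-duration threshold?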
NASA Astrophysics Data System (ADS)
Engle, J. W.; Kelsey, C. T.; Bach, H.; Ballard, B. D.; Fassbender, M. E.; John, K. D.; Birnbaum, E. R.; Nortier, F. M.
2012-12-01
In order to ascertain the potential for radioisotope production and materials science studies using the Isotope Production Facility at Los Alamos National Laboratory, a two-pronged investigation has been initiated. The Monte Carlo N-Particle eXtended (MCNPX) code has been used in conjunction with the CINDER'90 burnup code to predict neutron flux energy distributions resulting from routine irradiations and to estimate yields of radioisotopes of interest for hypothetical irradiation conditions. A threshold foil activation experiment is planned to study the neutron flux using measured yields of radioisotopes, quantified by HPGe gamma spectroscopy, from representative nuclear reactions with known thresholds up to 50 MeV.
Threshold and Other Properties of U Particle Production in e⁺e⁻ Annihilation
DOE R&D Accomplishments Database
Perl, M. L.
1976-05-01
The anomalous eμ events produced in e⁺e⁻ annihilation, e⁺ + e⁻ → e^± + μ^∓ + missing energy (1), were explained as the decay products of a pair of U particles produced in the reaction e⁺ + e⁻ → U⁺ + U⁻ (2). New data are presented on the U particles in the energy region just above their production threshold, together with results of a study of the nature of the particles carrying off the missing energy in Eq. (1). While presenting these new results, the present status of knowledge of the anomalous eμ events and their U-particle explanation is briefly reviewed. (JFP)
Ultrasonically triggered ignition at liquid surfaces.
Simon, Lars Hendrik; Meyer, Lennart; Wilkens, Volker; Beyer, Michael
2015-01-01
Ultrasound is considered to be an ignition source according to international standards, which set a threshold value of 1 mW/mm² [1] that is based on theoretical estimations but lacks experimental verification. It is therefore assumed that this threshold includes a large safety margin. At the same time, ultrasound is used in a variety of industrial applications where it can come into contact with explosive atmospheres. However, until now, no explosion accidents have been reported in connection with ultrasound, so it has been unclear whether the current threshold value is reasonable. Within this paper, it is shown that focused ultrasound coupled into a liquid can in fact ignite explosive atmospheres if a specific target positioned at the liquid's surface converts the acoustic energy into a hot spot. Based on ignition tests, the conditions necessary for an ultrasonically triggered explosion could be derived. These conditions show that the current threshold value can be raised significantly.
Security of a semi-quantum protocol where reflections contribute to the secret key
NASA Astrophysics Data System (ADS)
Krawec, Walter O.
2016-05-01
In this paper, we provide a proof of unconditional security for a semi-quantum key distribution protocol introduced in a previous work. This particular protocol demonstrated the possibility of using X basis states to contribute to the raw key of the two users (as opposed to using only direct measurement results) even though a semi-quantum participant cannot directly manipulate such states. In this work, we provide a complete proof of security by deriving a lower bound of the protocol's key rate in the asymptotic scenario. Using this bound, we are able to find an error threshold value such that for all error rates less than this threshold, it is guaranteed that A and B may distill a secure secret key; for error rates larger than this threshold, A and B should abort. We demonstrate that this error threshold compares favorably to several fully quantum protocols. We also comment on some interesting observations about the behavior of this protocol under certain noise scenarios.
Jin, Tian; Yuan, Heliang; Zhao, Na; Qin, Honglei; Sun, Kewen; Ji, Yuanfa
2017-12-04
Frequency-locked detector (FLD) has been widely utilized in the tracking loops of Global Positioning System (GPS) receivers to indicate their locking status. The relation between the FLD and lock status has seldom been discussed, and traditional PLL experience is not suitable for the FLL. In this paper, threshold setting criteria for the frequency-locked detector in a GPS receiver are proposed by analyzing the statistical characteristics of the FLD output. The approximate probability distribution of the frequency-locked detector is theoretically derived using a statistical approach, which reveals the relationship between the probabilities of the frequency-locked detector and the carrier-to-noise ratio (C/N₀) of the received GPS signal. The relationship among mean-time-to-lose-lock (MTLL), detection threshold and lock probability related to C/N₀ can be further discovered by utilizing this probability. Therefore, a theoretical basis for threshold setting criteria in frequency-locked loops for GPS receivers is provided based on mean-time-to-lose-lock analysis.
Inter-comparison of the EUMETSAT H-SAF and NASA PPS precipitation products over Western Europe.
NASA Astrophysics Data System (ADS)
Kidd, Chris; Panegrossi, Giulia; Ringerud, Sarah; Stocker, Erich
2017-04-01
The development of precipitation retrieval techniques utilising passive microwave satellite observations has achieved a good degree of maturity through the use of physically based schemes. The DMSP Special Sensor Microwave Imager/Sounder (SSMIS) has been the mainstay of passive microwave observations over the last 13 years, forming the basis of many satellite precipitation products, including NASA's Precipitation Processing System (PPS) and EUMETSAT's Hydrological Satellite Application Facility (H-SAF). The NASA PPS product utilises the Goddard Profiling (GPROF; currently 2014v2-0) retrieval scheme, which provides a physically consistent retrieval through the use of coincident active/passive microwave retrievals from the Global Precipitation Measurement (GPM) mission core satellite. The GPM combined algorithm retrieves hydrometeor profiles optimized for consistency with both the Dual-frequency Precipitation Radar (DPR) and the GPM Microwave Imager (GMI); these profiles form the basis of the GPROF database, which can be utilized for any constellation radiometer within the framework of a Bayesian retrieval scheme. The H-SAF product (PR-OBS-1 v1.7) is based on a physically based Bayesian technique in which the a priori information is provided by a Cloud Dynamic Radiation Database (CDRD). Meteorological parameter constraints, derived from synthetic dynamical-thermodynamical-hydrological meteorological profile variables, are combined with multi-hydrometeor microphysical profiles and multispectral PMW brightness temperature vectors to form a specialized a priori knowledge database that underpins and guides the algorithm's Bayesian retrieval solver. This paper presents the results of an inter-comparison of the NASA PPS GPROF and EUMETSAT H-SAF PR-OBS-1 products over Western Europe for the period from 1 January 2015 through 31 December 2016. Surface radar data are derived from the UKMO Nimrod European radar product, available at 15 minute/5 km resolution. Initial results show that overall the correlations between the two satellite precipitation products and surface radar precipitation estimates are similar, particularly for cases with extensive precipitation; however, the H-SAF product tends to have poorer correlations in situations where rain is light or limited in extent. Similarly, RMSEs for the GPROF scheme tend to be smaller than those of the H-SAF retrievals. The difference in performance can be traced to the identification of precipitation: the GPROF 2014v2-0 scheme overestimates the occurrence and extent of precipitation, generating a significant amount of light precipitation, whereas the H-SAF scheme has a lower precipitation threshold of about 0.25 mm h⁻¹ while overestimating moderate and higher precipitation intensities.
Diagrammatic approach to meson production in proton-proton collisions near threshold
NASA Astrophysics Data System (ADS)
Kaiser, Norbert
2000-06-01
We evaluate the threshold T-matrices for the reactions pp→ppπ0, pnπ+, ppη, ppω, pΛK+ and pn→pnη in a relativistic Feynman diagram approach. We employ an effective range approximation to take care of the strong S-wave pN and pΛ final-state interactions. We stress that the heavy-baryon formalism is not applicable in the NN system above the π-production threshold due to the large external momentum, |p⃗| ≃ √(M mπ). The magnitudes of the experimental threshold amplitudes extracted from total cross section data, A = (2.7 − 0.3i) fm⁴, B = (2.8 − 1.5i) fm⁴, |C| = 1.32 fm⁴, |Ω| = 0.53 fm⁴, K = √(2|Ks|² + |Kt|²) = 0.38 fm⁴ and |D| = 2.3 fm⁴, can be reproduced by (long-range) one-pion exchange and short-range vector-meson exchanges, with the latter giving the largest contributions. Pion-loop effects in pp→ppπ0 appear to be small. The presented diagrammatic approach requires further tests via studies of angular distributions and polarization observables.
Vernon, John A; Goldberg, Robert; Golec, Joseph
2009-01-01
In this article we describe how reimbursement cost-effectiveness thresholds per unit of health benefit, whether set explicitly or observed implicitly via historical reimbursement decisions, serve as a signal to firms about the commercial viability of their R&D projects (including candidate products for in-licensing). Traditional finance methods for R&D project valuation, such as net present value (NPV) analysis, incorporate information from these payer reimbursement signals to help determine which R&D projects should be continued and which should be terminated (the latter because they yield an NPV < 0). Because the influence these signals have on firm R&D investment decisions is so significant, we argue that it is important for reimbursement thresholds to reflect the economic value of the unit of health benefit being considered for reimbursement. Thresholds set too low (below the economic value of the health benefit) will result in R&D investment levels that are too low relative to the economic value of R&D (on the margin). Similarly, thresholds set too high (above the economic value of the health benefit) will result in inefficiently high levels of R&D spending. The US in particular, which represents approximately half of the global pharmaceutical market (based on sales) and which seems poised to begin undertaking cost-effectiveness analysis in a systematic way, needs to exert caution in setting policies that explicitly or implicitly establish cost-effectiveness reimbursement thresholds for healthcare products and technologies, such as pharmaceuticals.
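As a schematic of how a reimbursement threshold feeds into such an NPV screen (a sketch with assumed numbers, not a model from the article), the price per unit of health benefit implied by the threshold drives the projected revenue stream:

    def npv(cash_flows, discount_rate):
        """Net present value of a list of yearly cash flows (year 0 first)."""
        return sum(cf / (1.0 + discount_rate) ** t for t, cf in enumerate(cash_flows))

    # Hypothetical R&D project: upfront costs, then revenues that scale with the
    # payer's reimbursement threshold (assumed price paid per QALY gained).
    threshold_per_qaly = 50_000      # assumed reimbursement threshold ($/QALY)
    qalys_per_patient = 0.2          # assumed incremental health benefit per patient
    patients_per_year = 10_000
    annual_revenue = threshold_per_qaly * qalys_per_patient * patients_per_year

    cash_flows = [-250e6, -150e6] + [annual_revenue] * 8   # 2 years of R&D, 8 years of sales
    print(f"NPV = ${npv(cash_flows, 0.10)/1e6:.0f}M")      # continue if NPV > 0, else terminate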
Serrao, M; Parisi, L; Pierelli, F; Rossi, P
2001-11-01
To evaluate the contribution of the low-threshold afferents to the production of the cutaneous silent period (CSP) in the upper limbs. The CSP was studied in 10 healthy adults and 4 patients with Friedreich's ataxia. The following neurophysiological aspects were studied: (a) relationship between sensory threshold (ST), sensory action potential (SAP) amplitude and CSP parameters; (b) habituation and recovery cycle of the CSP at different stimulus intensities (2xST and 8xST); (c) pattern of responses in distal and proximal muscles at different stimulus intensities (2xST and 8xST). (a) The CSP occurred at low intensities (1xST and 2xST) and increased abruptly between 3.5xST and 4xST (corresponding to the pain threshold). The SAP amplitude was saturated before CSP saturation. In the patients with Friedreich's ataxia, the CSP appeared only at higher stimulus intensities (6xST-8xST). (b) The CSP evoked at 2xST showed a fast habituation and slow recovery cycle whereas the opposite behaviour was found at 8xST. (c) Low-threshold stimuli induced an inhibitory response restricted to the distal muscles. High-intensity stimulation produced an electromyographic suppression, significantly increasing from proximal to distal muscles. Our findings support the notion that low-threshold afferents participate in the production of the CSP in the upper limbs. The different afferents may activate different central neural networks with separate functional significance.
Loganathan, Tharani; Ng, Chiu-Wan; Lee, Way-Seah; Hutubessy, Raymond C W; Verguet, Stéphane; Jit, Mark
2018-03-01
Cost-effectiveness thresholds (CETs) based on the Commission on Macroeconomics and Health (CMH) are extensively used in low- and middle-income countries (LMICs) lacking locally defined CETs. These thresholds were originally intended for global and regional prioritization and do not reflect local context or affordability at the national level, so their value for informing resource allocation decisions has been questioned. Using these thresholds, rotavirus vaccines are widely regarded as cost-effective interventions in LMICs. However, high vaccine prices remain a barrier to vaccine introduction. This study aims to evaluate the cost-effectiveness, affordability and threshold price of universal rotavirus vaccination at various CETs in Malaysia. The cost-effectiveness of Rotarix and RotaTeq was evaluated using a multi-cohort model. Pan American Health Organization Revolving Fund vaccine prices were used as the tender price, while the recommended retail price for Malaysia was used as the market price. We estimated threshold prices, defined as prices at which vaccination becomes cost-effective, at various CETs reflecting economic theories of human capital, societal willingness-to-pay and marginal productivity. A budget impact analysis compared programmatic costs with the healthcare budget. At tender prices, both vaccines were cost-saving. At market prices, cost-effectiveness differed with the thresholds used: applying the CMH thresholds, Rotarix programmes were cost-effective and RotaTeq programmes were not cost-effective from the healthcare provider's perspective, while both vaccines were cost-effective from the societal perspective. Using other CETs, neither vaccine was cost-effective at market price, from either the healthcare provider's or the societal perspective. At tender and cost-effective prices, rotavirus vaccination would cost approximately 1% and 3% of the public health budget, respectively. Using locally defined thresholds, rotavirus vaccination is cost-effective at vaccine prices in line with international tenders, but not at market prices. Thresholds representing marginal productivity are likely to be lower than those reflecting human capital and individual preference measures, and may be useful in determining affordable vaccine prices.
USDA-ARS's Scientific Manuscript database
Production density in excess of a critical threshold can result in a negative relationship between stocking density and fish production. This study was conducted to evaluate production characteristics of juvenile cobia Rachycentron canadum, reared to market size in production-scale recirculating aq...
Revealing W51C as a Cosmic-Ray source using Fermi-LAT data
Jogler, T.; Funk, S.
2016-01-10
Here, supernova remnants (SNRs) are commonly believed to be the primary sources of Galactic cosmic rays. Despite intensive study of the non-thermal emission of many SNRs, the identification of the accelerated particle type relies heavily on assumptions about ambient-medium parameters that are only loosely constrained. Compelling evidence of hadronic acceleration can be provided by detecting a strong roll-off in the secondary γ-ray spectrum below the π0 production threshold energy of about 135 MeV, the so-called "pion bump." Here we use five years of Fermi Large Area Telescope data to study the spectrum above 60 MeV of the middle-aged SNR W51C. A clear break in the power-law γ-ray spectrum at E_break = 290 ± 20 MeV is detected with 9σ significance, and we show that this break is most likely associated with the π0 production threshold energy. A high-energy break in the γ-ray spectrum at about 2.7 GeV is found with 7.5σ significance. The spectral index at energies beyond this second break is Γ2 = 2.52 (+0.06/−0.07) and closely matches the spectral index derived by the MAGIC Collaboration above 75 GeV. Therefore our analysis provides strong evidence that the γ-ray spectrum of W51C can be explained by a single particle population of protons with a momentum spectrum best described by a broken power law with break momentum p_break ≈ 80 GeV/c. W51C is the third middle-aged SNR that displays compelling evidence for cosmic-ray acceleration and thus strengthens the case for SNRs as the main source of Galactic cosmic rays.
Holzmeier, Fabian; Fischer, Ingo; Kiendl, Benjamin; Krueger, Anke; Bodi, Andras; Hemberger, Patrick
2016-04-07
We report the determination of the absolute photoionization cross section of cyclopropenylidene, c-C3H2, and the heats of formation of the C3H radical and cation derived from the dissociative ionization of the carbene. Vacuum ultraviolet (VUV) synchrotron radiation as provided by the Swiss Light Source and imaging photoelectron photoion coincidence (iPEPICO) were employed. Cyclopropenylidene was generated by pyrolysis of a quadricyclane precursor in a 1:1 ratio with benzene, which enabled us to derive the carbene's near-threshold absolute photoionization cross section from the photoionization yield of the two pyrolysis products and the known cross section of benzene. The cross section at 9.5 eV, for example, was determined to be 4.5 ± 1.4 Mb. Upon dissociative ionization, the carbene decomposes by hydrogen atom loss to the linear isomer of C3H+. The appearance energy for this process was determined to be AE(0 K)(c-C3H2; l-C3H+) = 13.67 ± 0.10 eV. The heats of formation of neutral and cationic C3H were derived from this value via a thermochemical cycle as ΔfH(0 K)(C3H) = 725 ± 25 kJ mol⁻¹ and ΔfH(0 K)(C3H+) = 1604 ± 19 kJ mol⁻¹, using a previously reported ionization energy of C3H.
Valsecchi, Sara; Conti, Daniela; Crebelli, Riccardo; Polesello, Stefano; Rusconi, Marianna; Mazzoni, Michela; Preziosi, Elisabetta; Carere, Mario; Lucentini, Luca; Ferretti, Emanuele; Balzamo, Stefania; Simeone, Maria Gabriella; Aste, Fiorella
2017-02-05
Evidence that significant sources of perfluoroalkyl acids (PFAAs) are present in Northern Italy led the Italian government to establish a Working Group on Environmental Quality Standards (EQS) for PFAAs, in order to include some of them in the list of national specific pollutants for surface water monitoring according to the Water Framework Directive (2000/60/EC). The list of substances included perfluorooctanoate (PFOA) and related short-chain PFAAs such as perfluorobutanoate (PFBA), perfluoropentanoate (PFPeA), perfluorohexanoate (PFHxA) and perfluorobutanesulfonate (PFBS), which is a substitute for perfluorooctanesulfonate. For each of them a dossier collects available data on regulation, physico-chemical properties, emissions and sources, occurrence, and acute and chronic toxicity to aquatic species and mammals, including humans. Quality standards (QS) were derived for the different protection objectives (pelagic and benthic communities, predators via secondary poisoning, and human health via consumption of fishery products and water) according to the European guideline. The lowest QS is finally chosen as the relevant EQS. For PFOA, a QS for biota was derived for protection from secondary poisoning and the corresponding QS for water was back-calculated, giving a freshwater EQS of 0.1 μg L⁻¹. For PFBA, PFPeA, PFHxA and PFBS, threshold limits proposed for drinking waters were adopted as EQS.
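Back-calculating a water quality standard from a biota standard generally amounts to dividing by a bioaccumulation/biomagnification factor; the sketch below follows that generic logic only, and its biota-QS, BCF and BMF numbers are placeholders rather than values from the dossier:

    def qs_water_from_biota(qs_biota_ug_per_kg, bcf_l_per_kg, bmf=1.0):
        """Back-calculate a freshwater quality standard (ug/L) from a biota quality
        standard (ug/kg wet weight) using a bioconcentration factor (L/kg) and an
        optional biomagnification factor."""
        return qs_biota_ug_per_kg / (bcf_l_per_kg * bmf)

    # Placeholder values for illustration only (not the dossier's figures).
    qs_biota = 20.0   # ug/kg wet weight, protective of predators via secondary poisoning
    bcf = 80.0        # L/kg
    print(f"QS_water = {qs_water_from_biota(qs_biota, bcf):.2f} ug/L")   # 0.25 ug/L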
NASA Astrophysics Data System (ADS)
Cobourn, K. M.; Peckham, S. D.
2011-12-01
The vulnerability of agri-environmental systems to ecological threshold events depends on the combined influence of economic factors and natural drivers, such as climate and disturbance. This analysis builds an integrated ecological-economic model to evaluate the behavioral response of agricultural producers to changing and uncertain natural conditions. The model explicitly reflects the effect of producer behavior on the likelihood of a threshold event that threatens the ecological and/or economic sustainability of the agri-environmental system. The foundation of the analysis is a threshold indicator that incorporates the population dynamics of a species that supports economic production and an episodic disturbance regime; in this case, rangeland grass that is grazed by livestock and is subject to wildfire. This ecological indicator is integrated into an economic model in which producers choose grazing intensity given the state of the grass population and a set of economic parameters. We examine two model variants that characterize differing economic circumstances. The first characterizes the optimal grazing regime assuming that the system is managed by a single planner whose objective is to maximize the aggregate long-run returns of producers in the system. The second examines the case in which individual producers choose their own stocking rates in order to maximize their private economic benefit. The results from the first model variant illustrate the difference between an ecological and an economic threshold. Failure to cross an ecological threshold does not necessarily ensure that the system remains economically viable: economic sustainability, defined as the ability of the system to support optimal production into the infinite future, requires that the net growth rate of the supporting population exceed the level required for ecological sustainability by an amount that depends on the market price of livestock and grazing efficiency. The results from the second model variant define the circumstances under which a system that is otherwise ecologically sustainable is driven over a threshold by the actions of economic agents. The difference between the two model solutions identifies bounds between which the viability of livestock production over the long run is uncertain and depends upon the policy setting in which the agri-environmental system operates.
Code of Federal Regulations, 2011 CFR
2011-10-01
... only domestic end products for use outside the United States, and use only domestic construction material for construction to be performed outside the United States, including end products and... simplified acquisition threshold; (2) The end product or particular construction material is— (i) Listed in...
Code of Federal Regulations, 2013 CFR
2013-10-01
... only domestic end products for use outside the United States, and use only domestic construction material for construction to be performed outside the United States, including end products and... simplified acquisition threshold; (2) The end product or particular construction material is— (i) Listed in...
Code of Federal Regulations, 2014 CFR
2014-10-01
... only domestic end products for use outside the United States, and use only domestic construction material for construction to be performed outside the United States, including end products and... simplified acquisition threshold; (2) The end product or particular construction material is— (i) Listed in...
Code of Federal Regulations, 2012 CFR
2012-10-01
... only domestic end products for use outside the United States, and use only domestic construction material for construction to be performed outside the United States, including end products and... simplified acquisition threshold; (2) The end product or particular construction material is— (i) Listed in...
Secondary antiproton production in relativistic plasmas
NASA Technical Reports Server (NTRS)
Dermer, C. D.; Ramaty, R.
1985-01-01
The possibility is investigated that the reported excess low energy antiproton component of the cosmic radiation results from proton-proton (p-p) interactions in relativistic plasmas. Because of both target and projectile motion in such plasmas, the antiproton production threshold in the frame of the plasma is much lower than the threshold of antiproton production in cosmic ray interactions with ambient matter. The spectrum of the resultant antiprotons therefore extends to much lower energy than in the cosmic ray case. The antiproton spectrum is calculated for relativistic thermal plasmas and the spectrum is estimated for relativistic nonthermal plasmas. As possible production sites, matter accreting onto compact objects located in the galaxy is considered. Possible overproduction of gamma rays from associated neutral pion production can be avoided if the site is optically thick to the photons but not to the antiprotons. A possible scenario involves a sufficiently large photon density that the neutral pion gamma rays are absorbed by photon-photon pair production. Escape of the antiprotons to the interstellar medium can be mediated by antineutron production.
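The kinematic point can be made concrete with textbook invariant-mass algebra for p + p → p + p + p + p̄ (our illustration, not a calculation from the paper): the reaction opens at √s = 4 m_p, which requires about 5.6 GeV of projectile kinetic energy on a stationary target but only about 0.94 GeV per proton when target and projectile collide head-on, which is why target motion in a relativistic plasma lowers the effective threshold.

    M_P = 0.9383  # proton rest mass-energy in GeV

    def fixed_target_threshold_kinetic():
        """Projectile kinetic energy (GeV) for p + p -> p + p + p + pbar on a proton
        at rest, from s = 2*m**2 + 2*m*E_lab with the threshold condition sqrt(s) = 4*m."""
        s_threshold = (4.0 * M_P) ** 2
        e_lab = (s_threshold - 2.0 * M_P ** 2) / (2.0 * M_P)
        return e_lab - M_P

    def head_on_threshold_kinetic():
        """Kinetic energy per proton (GeV) for two protons colliding head-on (sqrt(s) = 2*E)."""
        return 2.0 * M_P - M_P

    print(f"stationary target: {fixed_target_threshold_kinetic():.2f} GeV")  # ~5.63 GeV
    print(f"head-on collision: {head_on_threshold_kinetic():.2f} GeV")       # ~0.94 GeV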
NASA Astrophysics Data System (ADS)
Olins, H. C.; Rogers, D.; Scholin, C. A.; Preston, C. J.; Vidoudez, C.; Ussler, W.; Pargett, D.; Jensen, S.; Roman, B.; Birch, J. M.; Girguis, P. R.
2014-12-01
Hydrothermal vents are hotspots of microbial primary productivity, often described as "windows into the subsurface biosphere." High-temperature vents have received the majority of research attention, but cooler diffuse flows are as, if not more, important a source of heat and chemicals to the overlying ocean. We studied patterns of in situ gene expression and co-registered geochemistry in order to 1) describe the diversity and physiological poise of active microbial communities that span thermal and geochemical gradients from active diffuse flow to background vent-field seawater, and 2) determine to what extent seawater or subsurface microbes were active throughout this environment. Analyses of multiple metatranscriptomes from 5 geochemically distinct sites (some from samples preserved in situ) show that proximate diffuse flows had strikingly different transcription profiles. Specifically, caldera background and some diffuse flows were similar, both dominated by seawater-derived Gammaproteobacteria despite having distinct geochemistries. The intra-field community comparison shows evidence of increased primary productivity throughout the entire vent field, and not just at individual diffuse flows. In contrast, a more spatially limited, Epsilonproteobacteria-dominated transcription profile from the most hydrothermally influenced diffuse flow appeared to be driven by the activity of vent-endemic microbes, likely reflecting subsurface microbial activity. We suggest that the microbial activity within many diffuse-flow vents is primarily attributable to seawater-derived Gammaproteobacterial sulfur oxidizers, while in certain other flows vent-endemic Epsilonproteobacteria are most active. These data reveal a diversity in microbial activity at diffuse flows that has not previously been recognized, and reshape our thinking about the relative influence that different microbial communities may have on local processes (such as primary production) and potentially on global biogeochemical cycles.
Bråtane, Bernt Tore; Bastan, Birgul; Fisher, Marc; Bouley, James; Henninger, Nils
2009-07-07
Though diffusion-weighted imaging (DWI) is frequently used to identify the ischemic lesion in focal cerebral ischemia, the understanding of the spatiotemporal evolution patterns observed with different analysis methods remains imprecise. DWI and calculated apparent diffusion coefficient (ADC) maps were serially obtained in rat middle cerebral artery occlusion (MCAO) stroke models: permanent, 90 min, and 180 min temporary MCAO. Lesion volumes were analyzed in a blinded and randomized manner by two investigators using (i) a previously validated ADC threshold, (ii) visual determination of hypointense regions on ADC maps, and (iii) visual determination of hyperintense regions on DWI. Lesion volumes were correlated with 24-hour 2,3,5-triphenyltetrazolium chloride (TTC)-derived infarct volumes. TTC-derived infarct volumes were not significantly different from the ADC- and DWI-derived lesion volumes at the last imaging time points, except for significantly smaller DWI lesions in the pMCAO model (p=0.02). TTC-derived infarct volumes also correlated more strongly with lesion volumes derived from ADC maps at the last imaging time point than with those derived from DWI (p<0.05). Following reperfusion, lesion volumes on the ADC maps were significantly reduced, but no change was observed on DWI. Visually determined lesion volumes on ADC maps and DWI by both investigators correlated significantly with threshold-derived lesion volumes on ADC maps, with the former method demonstrating a stronger correlation. There was also better interrater agreement for ADC map analysis than for DWI analysis. Ischemic lesion determination by ADC was more accurate in final infarct prediction, was rater independent, and provided exclusive information on ischemic lesion reversibility.
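A minimal version of threshold-based lesion delineation simply counts voxels whose ADC falls below a cutoff; the sketch below is ours, and the synthetic ADC map, cutoff value and voxel size are illustrative assumptions rather than the study's validated parameters:

    import numpy as np

    def adc_lesion_volume(adc_map, threshold, voxel_volume_mm3):
        """Ischemic lesion volume (mm^3) as the total volume of voxels whose apparent
        diffusion coefficient falls below the chosen threshold."""
        return float(np.count_nonzero(adc_map < threshold)) * voxel_volume_mm3

    # Synthetic 3-D ADC map (units of 10^-3 mm^2/s) with a small low-ADC "lesion".
    rng = np.random.default_rng(0)
    adc = rng.normal(0.75, 0.05, size=(32, 32, 8))   # normal tissue
    adc[10:16, 10:16, 2:5] = 0.45                    # artificially depressed region
    print(f"{adc_lesion_volume(adc, threshold=0.53, voxel_volume_mm3=0.1):.1f} mm^3")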
On the Appearance of Thresholds in the Dynamical Model of Star Formation
NASA Astrophysics Data System (ADS)
Elmegreen, Bruce G.
2018-02-01
The Kennicutt–Schmidt (KS) relationship between the surface density of the star formation rate (SFR) and the gas surface density has three distinct power laws that may result from one model in which gas collapses at a fixed fraction of the dynamical rate. The power-law slope is 1 when the observed gas has a characteristic density for detection, 1.5 for total gas when the thickness is about constant as in the main disks of galaxies, and 2 for total gas when the thickness is regulated by self-gravity and the velocity dispersion is about constant, as in the outer parts of spirals, dwarf irregulars, and giant molecular clouds. The observed scaling of the star formation efficiency (SFR per unit CO) with the dense gas fraction (HCN/CO) is derived from the KS relationship when one tracer (HCN) is on the linear part and the other (CO) is on the 1.5 part. Observations of a threshold density or column density with a constant SFR per unit gas mass above the threshold are proposed to be selection effects, as are observations of star formation in only the dense parts of clouds. The model allows a derivation of all three KS relations using the probability distribution function of density with no thresholds for star formation. Failed galaxies and systems with sub-KS SFRs are predicted to have gas that is dominated by an equilibrium warm phase where the thermal Jeans length exceeds the Toomre length. A squared relation is predicted for molecular gas-dominated young galaxies.
NASA Astrophysics Data System (ADS)
Härer, Stefan; Bernhardt, Matthias; Siebers, Matthias; Schulz, Karsten
2018-05-01
Knowledge of the current snow cover extent is essential for characterizing energy and moisture fluxes at the Earth's surface. The snow-covered area (SCA) is often estimated using optical satellite information in combination with the normalized-difference snow index (NDSI). The NDSI approach uses a threshold to decide whether a satellite pixel is classified as snow covered or snow free. The spatiotemporal representativeness of the standard threshold of 0.4 is, however, questionable at the local scale. Here, we use local snow cover maps derived from ground-based photography to continuously calibrate the NDSI threshold values (NDSIthr) of Landsat satellite images at two European mountain sites over the period from 2010 to 2015. The Research Catchment Zugspitzplatt (RCZ, Germany) and the Vernagtferner area (VF, Austria) are both located within a single Landsat scene. Nevertheless, the long-term analysis of the NDSIthr demonstrated that the threshold values at the two sites are not correlated (r = 0.17) and differ from the standard threshold of 0.4. For further comparison, a dynamic and locally optimized NDSI threshold was used, as well as another locally optimized literature threshold value (0.7). It was shown that large uncertainties in the prediction of the SCA, of up to 24.1%, exist in satellite snow cover maps when the standard threshold of 0.4 is used, but a newly developed, calibrated quadratic polynomial model which accounts for seasonal threshold dynamics can reduce this error. The model reduces the SCA uncertainties at the calibration site VF by 50% in the evaluation period and was also able to improve the results at RCZ in a significant way. Additionally, a scaling experiment shows that the positive effect of a locally adapted threshold diminishes for pixel sizes of 500 m or larger, underlining the general applicability of the standard threshold at larger scales.
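For reference, the NDSI is a band ratio of green and shortwave-infrared reflectances, and the SCA follows from counting pixels above the chosen threshold. The sketch below is a generic illustration with synthetic reflectances; the band choice and values are assumptions, and band numbering differs between Landsat sensors:

    import numpy as np

    def ndsi(green, swir):
        """Normalized-difference snow index from green and shortwave-infrared reflectances."""
        return (green - swir) / (green + swir + 1e-12)

    def snow_covered_area_fraction(green, swir, threshold=0.4):
        """Fraction of pixels classified as snow for a given NDSI threshold."""
        return float(np.mean(ndsi(green, swir) > threshold))

    # Synthetic reflectances: bright snow, patchy/marginal snow, and snow-free ground.
    green = np.concatenate([np.full(400, 0.80), np.full(200, 0.500), np.full(400, 0.25)])
    swir = np.concatenate([np.full(400, 0.10), np.full(200, 0.145), np.full(400, 0.20)])
    print(snow_covered_area_fraction(green, swir, 0.4))   # standard threshold
    print(snow_covered_area_fraction(green, swir, 0.7))   # a locally optimized alternative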
NASA Astrophysics Data System (ADS)
Ross, C.; Ali, G.; Oswald, C. J.; McMillan, H. K.; Walter, K.
2017-12-01
A hydrologic threshold is a critical point in time when runoff behavior changes rapidly, often in response to the activation of specific storage-driven or intensity-driven processes. Hydrologic thresholds can be viewed as characteristic signatures of hydrosystems, which makes them useful for site comparison as long as their presence (or lack thereof) can be evaluated in a standard manner across a range of environments. While several previous studies have successfully identified thresholds at a variety of individual sites, only a limited number have compared dynamics prevailing at the hillslope versus catchment scale, or distinguished the role of storage versus intensity thresholds. The objective of this study was therefore to examine precipitation input thresholds as well as "precipitation minus evapotranspiration" thresholds in environments with contrasting climatic and geographic characteristics. Historical climate and hydrometric datasets were consolidated for one hillslope site located at the Panola Mountain Research Watershed (Southeastern USA) and catchments located in the HJ Andrews Experimental Forest (Northwestern USA), the Catfish Creek Watershed (Canadian prairies), the Experimental Lakes Area (Canadian boreal ecozone), the Tarrawarra catchment (Australia) and the Mahurangi catchment (New Zealand). Individual precipitation-runoff events were delineated using the newly introduced software HydRun to derive event-specific hydrograph parameters as well as surrogate measures of antecedent moisture conditions and evapotranspiration in an automated and consistent manner. Various hydrograph parameters were then plotted against those surrogate measures to detect and evaluate site-specific threshold dynamics. Preliminary results show that a range of threshold shapes (e.g., "hockey stick", Heaviside and Dirac) was observed across sites. The influence of antecedent precipitation on threshold magnitude and shape also appeared stronger at sites with lower topographic relief and drier climate. Future analyses will focus on the interaction between storage and intensity thresholds in order to evaluate the importance of considering both in comparative hydrological studies.
Marshall, Katie E; Sinclair, Brent J
2018-06-12
Internal ice formation causes wholesale changes in ionic, osmotic and pH homeostasis and energy metabolism, as well as mechanical damage, across a small range of temperatures, and is thus an abiotic stressor that acts at a distinct, physiologically relevant threshold. Insects that experience repeated freeze-thaw cycles over winter will cross this stressor threshold many times over their lifespan. Here we examine the effect of repeatedly crossing the freezing threshold on short-term physiological parameters (metabolic reserves and cryoprotectant concentration) as well as long-term fitness-related performance (survival and egg production) in the freeze-tolerant goldenrod gall fly Eurosta solidaginis. We exposed overwintering prepupae to a series of low temperatures (-10, -15, or -20 °C) with increasing numbers of freezing events (3, 6, or 10) and differing recovery periods between events (1, 5, or 10 days). Repeated freezing increased sorbitol concentration by about 50% relative to a single freezing episode, and prompted prepupae to modify long-chain triacylglycerols to acetylated triacylglycerols. In the long term, repeated freezing did not significantly reduce survival, but did reduce egg production by 9.8% relative to a single freezing event. Exposure temperature did not affect any of these measures, suggesting that threshold-crossing events may be more important to fitness than the intensity of stress in overwintering E. solidaginis.
Załuska, Katarzyna; Kondrat-Wróbel, Maria W; Łuszczki, Jarogniew J
2018-05-01
The coexistence of seizures and arterial hypertension requires an adequate and efficacious treatment involving both protection from seizures and reduction of high arterial blood pressure. Accumulating evidence indicates that some diuretic drugs (with a well-established position in the treatment of arterial hypertension) also possess anticonvulsant properties in various experimental models of epilepsy. The aim of this study was to assess the anticonvulsant potency of 6 commonly used diuretic drugs (i.e., amiloride, ethacrynic acid, furosemide, hydrochlorothiazide, indapamide, and spironolactone) in the maximal electroshock-induced seizure threshold (MEST) test in mice. Doses of the studied diuretics and their corresponding threshold increases were linearly related, allowing for the determination of doses which increase the threshold for electroconvulsions in drug-treated animals by 20% (TID20 values) over the threshold in control animals. Amiloride, hydrochlorothiazide and indapamide administered systemically (intraperitoneally - i.p.) increased the threshold for maximal electroconvulsions in mice, and the experimentally-derived TID20 values in the maximal electroshock seizure threshold test were 30.2 mg/kg for amiloride, 68.2 mg/kg for hydrochlorothiazide and 3.9 mg/kg for indapamide. In contrast, ethacrynic acid (up to 100 mg/kg), furosemide (up to 100 mg/kg) and spironolactone (up to 50 mg/kg) administered i.p. had no significant impact on the threshold for electroconvulsions in mice. The studied diuretics can be arranged with respect to their anticonvulsant potency in the MEST test as follows: indapamide > amiloride > hydrochlorothiazide. No anticonvulsant effects were observed for ethacrynic acid, furosemide or spironolactone in the MEST test in mice.
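Because dose and threshold increase were linearly related, a TID20 can be read off a fitted line; the sketch below illustrates that step with made-up dose-response points, not the study's data:

    import numpy as np

    def tid20(doses_mg_per_kg, percent_threshold_increase):
        """Fit a line to (dose, % threshold increase) points and return the dose
        predicted to raise the electroconvulsive threshold by 20% (TID20)."""
        slope, intercept = np.polyfit(doses_mg_per_kg, percent_threshold_increase, 1)
        return (20.0 - intercept) / slope

    # Hypothetical dose-response points for an illustrative diuretic.
    doses = [10.0, 25.0, 50.0, 75.0]
    increases = [5.0, 14.0, 31.0, 48.0]   # % increase over the control threshold
    print(f"TID20 ~ {tid20(doses, increases):.1f} mg/kg")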
Bakrania, Kishan; Yates, Thomas; Rowlands, Alex V.; Esliger, Dale W.; Bunnewell, Sarah; Sanders, James; Davies, Melanie; Khunti, Kamlesh; Edwardson, Charlotte L.
2016-01-01
Objectives: (1) To develop and internally validate Euclidean Norm Minus One (ENMO) and Mean Amplitude Deviation (MAD) thresholds for separating sedentary behaviours from common light-intensity physical activities using raw acceleration data collected from both hip- and wrist-worn tri-axial accelerometers; and (2) to compare and evaluate the performances of the ENMO and MAD metrics. Methods: Thirty-three adults [mean age (standard deviation (SD)) = 27.4 (5.9) years; mean BMI (SD) = 23.9 (3.7) kg/m²; 20 females (60.6%)] wore four accelerometers: an ActiGraph GT3X+ and a GENEActiv on the right hip, and an ActiGraph GT3X+ and a GENEActiv on the non-dominant wrist. Under laboratory conditions, participants performed 16 different activities (11 sedentary behaviours and 5 light-intensity physical activities) for 5 minutes each. ENMO and MAD were computed from the raw acceleration data, and logistic regression and receiver-operating-characteristic (ROC) analyses were implemented to derive thresholds for activity discrimination. Areas under ROC curves (AUROC) were calculated to summarise performance, and the thresholds were assessed via leave-one-out cross-validation. Results: For both hip and wrist monitor placements, in comparison to the ActiGraph GT3X+ monitors, the ENMO and MAD values derived from the GENEActiv devices were observed to be slightly higher, particularly for the lower-intensity activities. Monitor-specific hip and wrist ENMO and MAD thresholds showed excellent ability to separate sedentary behaviours from motion-based light-intensity physical activities (in general, AUROCs > 0.95), with validation indicating robustness. However, poor classification was observed when attempting to isolate standing still from sedentary behaviours (in general, AUROCs < 0.65). The ENMO and MAD metrics tended to perform similarly across activities and accelerometer brands. Conclusions: Researchers can utilise these robust monitor-specific hip and wrist ENMO and MAD thresholds to accurately separate sedentary behaviours from common motion-based light-intensity physical activities. However, caution should be taken if isolating sedentary behaviours from standing is of particular interest. PMID:27706241
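Both metrics are computed from the tri-axial vector magnitude per epoch: ENMO averages the positive part of the magnitude minus 1 g, and MAD averages the absolute deviation of the magnitude from its epoch mean. The sketch below follows those usual definitions; it is our implementation, and the synthetic signal, epoch length, sampling rate and the cut-point used for classification are assumptions, not the paper's monitor-specific thresholds:

    import numpy as np

    def enmo(ax, ay, az):
        """Euclidean Norm Minus One, averaged over the epoch (accelerations in g)."""
        r = np.sqrt(ax**2 + ay**2 + az**2)
        return float(np.mean(np.maximum(r - 1.0, 0.0)))

    def mad(ax, ay, az):
        """Mean Amplitude Deviation of the vector magnitude over the epoch."""
        r = np.sqrt(ax**2 + ay**2 + az**2)
        return float(np.mean(np.abs(r - np.mean(r))))

    # One 5-s epoch at an assumed 100 Hz: gravity plus a small movement component.
    rng = np.random.default_rng(1)
    t = np.arange(500) / 100.0
    ax = 0.05 * np.sin(2 * np.pi * 1.5 * t) + rng.normal(0, 0.01, t.size)
    ay = rng.normal(0, 0.01, t.size)
    az = 1.0 + 0.05 * np.cos(2 * np.pi * 1.5 * t) + rng.normal(0, 0.01, t.size)

    value_enmo, value_mad = enmo(ax, ay, az), mad(ax, ay, az)
    # Classify against a hypothetical cut-point (the paper derives monitor-specific ones).
    print("sedentary" if value_enmo < 0.03 else "light activity",
          round(value_enmo, 4), round(value_mad, 4))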
Security of six-state quantum key distribution protocol with threshold detectors
Kato, Go; Tamaki, Kiyoshi
2016-01-01
The security of quantum key distribution (QKD) is established by a security proof, and the security proof places assumptions on the devices that constitute a QKD system. Among such assumptions, security proofs of the six-state protocol assume the use of a photon-number-resolving (PNR) detector, and as a result the bit error rate threshold for secure key generation for the six-state protocol is higher than that for the BB84 protocol. Unfortunately, however, this type of detector is technologically demanding compared to the standard threshold detector, and removing the necessity of such a detector enhances the feasibility of implementing the six-state protocol. Here, we develop a security proof for the six-state protocol and show that threshold detectors can be used for the six-state protocol. Importantly, the bit error rate threshold for key generation for the six-state protocol (12.611%) remains almost the same as the one (12.619%) derived from the existing security proofs assuming the use of PNR detectors. This clearly demonstrates the feasibility of the six-state protocol with practical devices. PMID:27443610
Wang, Huiliang; Wei, Peng; Li, Yaoxuan; Han, Jeff; Lee, Hye Ryoung; Naab, Benjamin D.; Liu, Nan; Wang, Chenggong; Adijanto, Eric; Tee, Benjamin C.-K.; Morishita, Satoshi; Li, Qiaochu; Gao, Yongli; Cui, Yi; Bao, Zhenan
2014-01-01
Tuning the threshold voltage of a transistor is crucial for realizing robust digital circuits. For silicon transistors, the threshold voltage can be accurately controlled by doping. However, it remains challenging to tune the threshold voltage of single-wall nanotube (SWNT) thin-film transistors. Here, we report a facile method to controllably n-dope SWNTs using 1H-benzoimidazole derivatives processed via either solution coating or vacuum deposition. The threshold voltages of our polythiophene-sorted SWNT thin-film transistors can be tuned accurately and continuously over a wide range. Photoelectron spectroscopy measurements confirmed that the SWNT Fermi level shifted to the conduction band edge with increasing doping concentration. Using this doping approach, we proceeded to fabricate SWNT complementary inverters by inkjet printing of the dopants. We observed an unprecedented noise margin of 28 V at VDD = 80 V (70% of 1/2VDD) and a gain of 85. Additionally, robust SWNT complementary metal−oxide−semiconductor inverter (noise margin 72% of 1/2VDD) and logic gates with rail-to-rail output voltage swing and subnanowatt power consumption were fabricated onto a highly flexible substrate. PMID:24639537
Elhadj, Selim; Yoo, Jae-hyuck; Negres, Raluca A.; ...
2016-12-19
The optical damage performance of electrically conductive gallium nitride (GaN) and indium tin oxide (ITO) films is addressed using large-area, high-power laser beam exposures at the 1064 nm sub-bandgap wavelength. Analysis of the laser damage process assumes that the onset of damage (threshold) is determined by the absorption and heating of a nanoscale region of a characteristic size reaching a critical temperature. We use this model to rationalize semi-quantitatively the pulse-width scaling of the damage threshold from picosecond to nanosecond timescales, along with the pulse-width dependence of the damage threshold probability derived by fitting large-beam damage density data. Multi-shot exposures were used to address lifetime performance degradation, described by an empirical expression based on the single-exposure damage model. A damage threshold degradation of at least 50% was observed for both materials. Overall, the GaN films tested had 5-10× higher optical damage thresholds than the ITO films tested for comparable transmission and electrical conductivity. This route to optically robust, large-aperture transparent electrodes and power optoelectronics may thus involve the use of next-generation wide-bandgap semiconductors such as GaN.
Use of a Principal Components Analysis for the Generation of Daily Time Series.
NASA Astrophysics Data System (ADS)
Dreveton, Christine; Guillou, Yann
2004-07-01
A new approach for generating daily time series is considered in response to the weather-derivatives market. This approach consists of performing a principal components analysis to create independent variables, the values of which are then generated separately with a random process. Weather derivatives are financial or insurance products that give companies the opportunity to cover themselves against adverse climate conditions. The aim of a generator is to provide a wider range of feasible situations to be used in an assessment of risk. Generation of a temperature time series is required by insurers or bankers for pricing weather options. The provision of conditional probabilities and a good representation of the interannual variance are the main challenges for a generator used for weather derivatives. The generator was developed according to this new approach using a principal components analysis and was applied to the daily average temperature time series of the Paris-Montsouris station in France. The observed dataset was homogenized and the trend was removed so as to represent the present climate correctly. The results obtained with the generator show that it correctly represents the interannual variance of the observed climate; this is the main result of the work, because one of the main shortcomings of other generators is their inability to represent accurately the observed interannual climate variance, a discrepancy that is not acceptable for an application to weather derivatives. The generator was also tested for calculating conditional probabilities: for example, knowledge of the aggregated value of heating degree-days in the middle of the heating season allows one to estimate the probability of reaching a threshold at the end of the heating season. This represents the main application of a climate generator for use with weather derivatives.
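The generation scheme can be sketched as: project the historical daily values onto their principal components, simulate each (approximately independent) component separately, and rotate back to the original variables. The minimal illustration below uses synthetic data and independent Gaussian draws for each component, so it reproduces the covariance structure but not the temporal autocorrelation that the operational generator's per-component random process would also need to capture:

    import numpy as np

    def fit_pca(X):
        """Return the mean, eigenvectors and per-component standard deviations of X (days x variables)."""
        mean = X.mean(axis=0)
        cov = np.cov(X - mean, rowvar=False)
        eigvals, eigvecs = np.linalg.eigh(cov)
        return mean, eigvecs, np.sqrt(np.maximum(eigvals, 0.0))

    def generate(mean, eigvecs, sdevs, n_days, rng):
        """Draw each principal component independently, then rotate back to the original variables."""
        scores = rng.normal(0.0, sdevs, size=(n_days, sdevs.size))
        return scores @ eigvecs.T + mean

    # Synthetic "historical" record of correlated daily temperature anomalies at 3 stations.
    rng = np.random.default_rng(42)
    base = rng.normal(0, 3, size=(3650, 1))
    hist = base + rng.normal(0, 1, size=(3650, 3))

    mean, eigvecs, sdevs = fit_pca(hist)
    sim = generate(mean, eigvecs, sdevs, n_days=3650, rng=rng)
    print(np.round(np.cov(hist, rowvar=False), 2))
    print(np.round(np.cov(sim, rowvar=False), 2))   # simulated covariance should resemble the historical one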
Holographic Associative Memory System Using A Thresholding Microchannel Spatial Light Modulator
NASA Astrophysics Data System (ADS)
Song, Q. W.; Yu, Francis T.
1989-05-01
Experimental implementation of a holographic optical associative memory system using a thresholding microchannel spatial light modulator (MSLM) is presented. The first part of the system is basically a joint transform correlator, in which a liquid crystal light valve is used as a square-law converter for the inner product of the addressing and input memories. The MSLM is used as an active element to recall the associated data. If the device is properly thresholded, the system is capable of improving the quality of the output image.
Jorgensen, David P.; Hanshaw, Maiana N.; Schmidt, Kevin M.; Laber, Jayme L; Staley, Dennis M.; Kean, Jason W.; Restrepo, Pedro J.
2011-01-01
A portable truck-mounted C-band Doppler weather radar was deployed to observe rainfall over the Station Fire burn area near Los Angeles, California, during the winter of 2009/10 to assist with debris-flow warning decisions. The deployments were a component of a joint NOAA–U.S. Geological Survey (USGS) research effort to improve definition of the rainfall conditions that trigger debris flows from steep topography within recent wildfire burn areas. A procedure was implemented to blend various dual-polarized estimators of precipitation (for radar observations taken below the freezing level) using threshold values for differential reflectivity and specific differential phase shift, which improved the accuracy of the rainfall estimates over a specific burn area instrumented with terrestrial tipping-bucket rain gauges. The portable radar outperformed local Weather Surveillance Radar-1988 Doppler (WSR-88D) National Weather Service network radars in detecting rainfall capable of initiating post-fire runoff-generated debris flows. The network radars underestimated hourly precipitation totals by about 50%. Consistent with intensity–duration threshold curves determined from past debris-flow events in burned areas in Southern California, the portable radar-derived rainfall rates exceeded the empirical thresholds over a wider range of storm durations with a higher spatial resolution than local National Weather Service operational radars. Moreover, the truck-mounted C-band radar dual-polarimetric-derived estimates of rainfall intensity provided a better guide to the expected severity of debris-flow events, based on criteria derived from previous events using rain gauge data, than traditional radar-derived rainfall approaches using reflectivity–rainfall relationships for either the portable or operational network WSR-88D radars. Part of the improvement came from siting the radar closer to the burn zone than the WSR-88Ds, but use of the dual-polarimetric variables improved the rainfall estimation by ~12% over the use of traditional Z–R relationships.
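A rough sketch of the kind of dual-polarimetric blending described above is given below; the estimator coefficients and the threshold values on differential reflectivity (Z_DR) and specific differential phase (K_DP) are generic textbook-style placeholders rather than the ones tuned for the burn area.

```python
# Rough sketch of blending dual-polarimetric rain-rate estimators below the
# freezing level using thresholds on Z_DR and K_DP.  Coefficients and
# thresholds are generic placeholders, not those used in the study.
def rain_rate(z_h_dbz, zdr_db, kdp_deg_km):
    z_h = 10 ** (z_h_dbz / 10.0)                      # linear reflectivity
    if kdp_deg_km > 0.3 and zdr_db > 0.5:
        # Heavier rain: K_DP/Z_DR based estimator, least sensitive to calibration.
        return 90.8 * kdp_deg_km ** 0.93 * 10 ** (-0.169 * zdr_db)
    if zdr_db > 0.5:
        # Moderate rain: reflectivity plus differential reflectivity.
        return 6.7e-3 * z_h ** 0.927 * 10 ** (-0.343 * zdr_db)
    # Light rain / default: conventional Z-R relationship.
    return (z_h / 300.0) ** (1.0 / 1.4)

print(rain_rate(45.0, 1.2, 0.8))   # mm/h, illustrative input values
```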
Thresholds for the cost-effectiveness of interventions: alternative approaches.
Marseille, Elliot; Larson, Bruce; Kazi, Dhruv S; Kahn, James G; Rosen, Sydney
2015-02-01
Many countries use the cost-effectiveness thresholds recommended by the World Health Organization's Choosing Interventions that are Cost-Effective project (WHO-CHOICE) when evaluating health interventions. This project sets the threshold for cost-effectiveness as the cost of the intervention per disability-adjusted life-year (DALY) averted less than three times the country's annual gross domestic product (GDP) per capita. Highly cost-effective interventions are defined as meeting a threshold per DALY averted of once the annual GDP per capita. We argue that reliance on these thresholds reduces the value of cost-effectiveness analyses and makes such analyses too blunt to be useful for most decision-making in the field of public health. Use of these thresholds has little theoretical justification, skirts the difficult but necessary ranking of the relative values of locally-applicable interventions and omits any consideration of what is truly affordable. The WHO-CHOICE thresholds set such a low bar for cost-effectiveness that very few interventions with evidence of efficacy can be ruled out. The thresholds have little value in assessing the trade-offs that decision-makers must confront. We present alternative approaches for applying cost-effectiveness criteria to choices in the allocation of health-care resources.
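For reference, the WHO-CHOICE rule the authors critique reduces to a simple comparison against GDP per capita, as in this minimal sketch:

```python
# The WHO-CHOICE decision rule described above, as a one-line comparison.
def who_choice_category(cost_per_daly_averted, gdp_per_capita):
    """Classify an intervention by the WHO-CHOICE thresholds:
    'highly cost-effective' below 1x GDP per capita per DALY averted,
    'cost-effective' below 3x, otherwise 'not cost-effective'."""
    ratio = cost_per_daly_averted / gdp_per_capita
    if ratio <= 1:
        return "highly cost-effective"
    if ratio <= 3:
        return "cost-effective"
    return "not cost-effective"

# Illustrative (made-up) numbers: a $2,400-per-DALY intervention in a country
# with a $1,000 GDP per capita passes the 3x threshold.
print(who_choice_category(2400, 1000))
```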
NASA Astrophysics Data System (ADS)
Chen, Wen-Shiang; Matula, Thomas J.; Brayman, Andrew A.; Crum, Lawrence A.
2003-01-01
Contrast bubble destruction is important in several new diagnostic and therapeutic applications. The pressure threshold of destruction is determined by the shell material, while the propensity of the bubbles to undergo inertial cavitation (IC) depends both on the gas and shell properties of the ultrasound contrast agent (UCA). The ultrasonic fragmentation thresholds of three specific UCAs (Optison, Sonazoid, and biSpheres), each with different shell and gas properties, were determined under various acoustic conditions. The acoustic emissions generated by the agents, or their derivatives, characteristic of IC after fragmentation, were also compared using cumulated broadband-noise emissions (IC ``dose''). Albumin-shelled Optison and surfactant-shelled Sonazoid had low fragmentation thresholds (mean=0.13 and 0.15 MPa at 1.1 MHz, 0.48 and 0.58 MPa at 3.5 MHz, respectively), while polymer-shelled biSpheres had a significantly higher threshold (mean=0.19 and 0.23 MPa at 1.1 MHz, 0.73 and 0.96 MPa for thin- and thick-shell biSpheres at 3.5 MHz, respectively, p<0.01). At comparable initial concentrations, surfactant-shelled Sonazoid produced a much larger IC dose after shell destruction than did either biSpheres or Optison (p<0.01). Thick-shelled biSpheres had the highest fragmentation threshold and produced the lowest IC dose. More than two and five acoustic cycles, respectively, were necessary for the thin- and thick-shell biSpheres to reach a steady-state fragmentation threshold.
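A schematic of the cumulated broadband-noise measure ("IC dose") might look as follows; the synthetic signals, frequency band, and baseline definition are assumptions for illustration, not the study's processing chain.

```python
# Sketch of an inertial-cavitation "dose": broadband noise power above a
# baseline, cumulated over successive ultrasound pulses.
import numpy as np

rng = np.random.default_rng(1)
fs = 50e6                       # assumed sampling rate, Hz
n_pulses, n_samples = 100, 4096

def broadband_power(trace, fs, band=(5e6, 15e6)):
    """Sum of spectral power inside an assumed broadband detection window."""
    spec = np.abs(np.fft.rfft(trace)) ** 2
    freqs = np.fft.rfftfreq(trace.size, 1 / fs)
    sel = (freqs >= band[0]) & (freqs <= band[1])
    return spec[sel].sum()

baseline_trace = rng.normal(0, 0.01, n_samples)          # no-agent reference
baseline = broadband_power(baseline_trace, fs)

ic_dose = 0.0
for _ in range(n_pulses):
    trace = rng.normal(0, 0.05, n_samples)               # stand-in for received echo
    ic_dose += max(broadband_power(trace, fs) - baseline, 0.0)

print("cumulated IC dose (arbitrary units):", ic_dose)
```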
Mahon, Jeffrey L.; Beam, Craig A.; Marcovina, Santica M.; Boulware, David C.; Palmer, Jerry P.; Winter, William E.; Skyler, Jay S.; Krischer, Jeffrey P.
2018-01-01
Background: Detection of below-threshold first-phase insulin release or FPIR (1 + 3 minute insulin concentrations during an intravenous glucose tolerance test [IVGTT]) is important in type 1 diabetes prediction and prevention studies including the TrialNet Oral Insulin Prevention Trial. We assessed whether an insulin immunoenzymometric assay (IEMA) could replace the less practical but current standard of a radioimmunoassay (RIA) for FPIR. Methods: One hundred thirty-three islet autoantibody positive relatives of persons with type 1 diabetes underwent 161 IVGTTs. Insulin concentrations were measured by both assays in 1056 paired samples. A rule classifying FPIR (below-threshold, above-threshold, uncertain) by the IEMA was derived and validated against FPIR by the RIA. Results: The insulin IEMA-based rule accurately classified below- and above-threshold FPIRs by the RIA in 110/161 (68%) IVGTTs, but was uncertain in 51/161 (32%) tests for which FPIR by RIA is needed. An uncertain FPIR by the IEMA was more likely among below-threshold vs above-threshold FPIRs by the RIA (64% [30/47] vs. 18% [21/114], respectively; p < 0.05). Conclusions: An insulin IEMA for FPIR in subjects at risk for type 1 diabetes accurately determined below- and above-threshold FPIRs in 2/3 of tests relative to the current standard of the insulin RIA, but could not reliably classify the remaining FPIRs. TrialNet is limiting the insulin RIA for FPIR to the latter given the practical advantages of the more specific IEMA. PMID:21843518
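Schematically, the three-way IEMA rule (below-threshold, above-threshold, uncertain) behaves like the sketch below; the cut-points are invented placeholders, since the actual rule was derived from and validated against the RIA data within the study.

```python
# Schematic version of a three-way FPIR classification rule for the IEMA:
# clearly below threshold, clearly above, or "uncertain" (defer to the RIA).
# The numeric cut-points are invented placeholders, not the study's rule.
FPIR_THRESHOLD = 100.0      # placeholder FPIR threshold (1 + 3 min insulin sum)
UNCERTAINTY_BAND = 0.20     # placeholder +/-20% band around the threshold

def classify_fpir(fpir_iema):
    lo = FPIR_THRESHOLD * (1 - UNCERTAINTY_BAND)
    hi = FPIR_THRESHOLD * (1 + UNCERTAINTY_BAND)
    if fpir_iema < lo:
        return "below-threshold"
    if fpir_iema > hi:
        return "above-threshold"
    return "uncertain: confirm with RIA"

for value in (60.0, 95.0, 150.0):
    print(value, "->", classify_fpir(value))
```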
Face verification with balanced thresholds.
Yan, Shuicheng; Xu, Dong; Tang, Xiaoou
2007-01-01
The process of face verification is guided by a pre-learned global threshold, which, however, is often inconsistent with class-specific optimal thresholds. It is, hence, beneficial to pursue a balance of the class-specific thresholds in the model-learning stage. In this paper, we present a new dimensionality reduction algorithm tailored to the verification task that ensures threshold balance. This is achieved by the following aspects. First, feasibility is guaranteed by employing an affine transformation matrix, instead of the conventional projection matrix, for dimensionality reduction, and, hence, we call the proposed algorithm threshold balanced transformation (TBT). Then, the affine transformation matrix, constrained as the product of an orthogonal matrix and a diagonal matrix, is optimized to improve the threshold balance and classification capability in an iterative manner. Unlike most algorithms for face verification which are directly transplanted from face identification literature, TBT is specifically designed for face verification and clarifies the intrinsic distinction between these two tasks. Experiments on three benchmark face databases demonstrate that TBT significantly outperforms the state-of-the-art subspace techniques for face verification.
Poly I:C-induced fever elevates threshold for shivering but reduces thermosensitivity in rabbits.
Tøien, Ø; Mercer, J B
1995-05-01
Shivering threshold and thermosensitivity were determined in six conscious rabbits at ambient temperature (Ta) 20 and 10 degrees C before and at six different times after saline injection (0.15 ml iv) and polyriboinosinic-polyribocytidylic acid (poly I:C)-induced fever (5 micrograms/kg iv). Thermosensitivity was calculated by regression of metabolic heat production (M) and hypothalamic temperature (Thypo) during short periods (5-10 min) of square-wave cooling. Heat was extracted with a chronically implanted intravascular heat exchanger. Shivering threshold was calculated as the Thypo at which the thermosensitivity line crossed resting M as measured in afebrile animals at Ta 20 degrees C. There were negligible changes in shivering threshold and thermosensitivity in saline-injected rabbits. In the febrile animals, shivering threshold generally followed the shape of the biphasic fever response. At Ta 20 degrees C, shivering threshold was higher than regulated Thypo during the initial rising phase of fever and was lower during recovery. At Ta 10 degrees C the shivering thresholds were always higher than regulated Thypo except during recovery. Thermosensitivity was reduced by 30-41% during fever.
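The threshold calculation described above (regressing metabolic heat production on hypothalamic temperature during cooling, then solving for the temperature at which the regression line crosses resting heat production) can be written compactly; the data points below are synthetic.

```python
# Shivering threshold as described above: regress metabolic heat production M
# on hypothalamic temperature T_hypo during cooling, then solve for the T_hypo
# at which the regression line crosses resting M.  Data are synthetic.
import numpy as np

t_hypo = np.array([38.6, 38.4, 38.2, 38.0, 37.8, 37.6])   # deg C, during cooling
m = np.array([4.1, 4.6, 5.4, 6.1, 6.9, 7.8])              # W/kg, heat production
m_rest = 4.0                                               # resting M, afebrile, Ta 20 C

slope, intercept = np.polyfit(t_hypo, m, 1)   # slope = thermosensitivity (W/kg per deg C)
threshold = (m_rest - intercept) / slope      # T_hypo where the line crosses resting M

print(f"thermosensitivity = {slope:.2f} W/kg per deg C")
print(f"shivering threshold = {threshold:.2f} deg C")
```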
The impact of cochlear fine structure on hearing thresholds and DPOAE levels
NASA Astrophysics Data System (ADS)
Lee, Jungmee; Long, Glenis; Talmadge, Carrick L.
2004-05-01
Although otoacoustic emissions (OAE) are used as clinical and research tools, the correlation between OAE and behavioral estimates of hearing status is not large. In normal-hearing individuals, the level of OAEs can vary as much as 30 dB when the frequency is changed by less than 5%. These pseudoperiodic variations of OAE level with frequency are known as fine structure. Hearing thresholds measured with high-frequency resolution reveal a similar (up to 15 dB) fine structure. We examine the impact of OAE and threshold fine structures on the prediction of auditory thresholds from OAE levels. Distortion product otoacoustic emissions (DPOAEs) were measured with sweeping primary tones. Psychoacoustic detection thresholds were measured using pure tones, sweep tones, FM tones, and narrow-band noise. Sweep DPOAE and narrow-band threshold measurements provide estimates that are less influenced by cochlear fine structure and should lead to a higher correlation between OAE levels and psychoacoustic thresholds. [Research supported by PSC CUNY, NIDCD, the National Institute on Disability and Rehabilitation Research in the U.S. Department of Education, and the Ministry of Education in Korea.]
Seixas, N S; Kujawa, S G; Norton, S; Sheppard, L; Neitzel, R; Slee, A
2004-11-01
To examine the relations between noise exposure and other risk factors with hearing function as measured by audiometric thresholds and distortion product otoacoustic emissions. A total of 456 subjects were studied (393 apprentices in construction trades and 63 graduate students). Hearing and peripheral auditory function were quantified using standard, automated threshold audiometry, tympanometry, and distortion product otoacoustic emissions (DPOAEs). The analysis addressed relations of noise exposure history and other risk factors with hearing threshold levels (HTLs) and DPOAEs at the baseline test for the cohort. The cohort had a mean age of 27 (7) years. The construction apprentices reported more noise exposure than students in both their occupational and non-occupational exposure histories. A strong effect of age and years of work in construction was observed at 4, 6, and 8 kHz for both HTLs and DPOAEs. Each year of construction work reported prior to baseline was associated with a 0.7 dB increase in HTL or a 0.2 dB decrease in DPOAE amplitude. Overall, there was a very similar pattern of effects between the HTLs and DPOAEs. This analysis shows a relatively good correspondence between the associations of noise exposures and other risk factors with DPOAEs and the associations observed with pure-tone audiometric thresholds in a young adult working population. The results provide further evidence that DPOAEs can be used to assess damage to hearing from a variety of exposures, including noise. Clarifying the advantages of DPOAEs or HTLs in terms of sensitivity to early manifestations of noise insults, or their utility in predicting future hearing loss, will require longitudinal follow-up.
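The per-year associations reported here amount to simple linear adjustments, as in this illustrative sketch (the baseline values are arbitrary):

```python
# The per-year associations reported above expressed as simple linear
# adjustments; baseline values are arbitrary illustrations.
def expected_htl(baseline_htl_db, years_construction):
    """Hearing threshold level at 4-8 kHz: +0.7 dB per year of prior construction work."""
    return baseline_htl_db + 0.7 * years_construction

def expected_dpoae(baseline_dpoae_db, years_construction):
    """DPOAE amplitude: -0.2 dB per year of prior construction work."""
    return baseline_dpoae_db - 0.2 * years_construction

print(expected_htl(5.0, 10), expected_dpoae(10.0, 10))   # after 10 years of work
```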
Rejection Thresholds in Chocolate Milk: Evidence for Segmentation
Harwood, Meriel L.; Ziegler, Gregory R.; Hayes, John E.
2012-01-01
Bitterness is generally considered a negative attribute in food, yet many individuals enjoy some bitterness in products like coffee or chocolate. In chocolate, bitterness arises from naturally occurring alkaloids and phenolics found in cacao. Fermentation and roasting help develop typical chocolate flavor and reduce the intense bitterness of raw cacao by modifying these bitter compounds. As it becomes increasingly common to fortify chocolate with `raw' cacao to increase the amount of healthful phytonutrients, it is important to identify the point at which the concentration of bitter compounds becomes objectionable, even to those who enjoy some bitterness. Classical threshold methods focus on the presence or absence of a sensation rather than acceptability or hedonics. A new alternative, the rejection threshold, was recently described in the literature. Here, we sought to quantify and compare differences in Rejection Thresholds (RjT) and Detection Thresholds (DT) in chocolate milk spiked with a food safe bitterant (sucrose octaacetate). In experiment 1, a series of paired preference tests was used to estimate the RjT for bitterness in chocolate milk. In a new group of participants (experiment 2), we determined the RjT and DT using the forced choice ascending method of limits. In both studies, participants were segmented on the basis of self-declared preference for milk or dark solid chocolate. Based on sigmoid fits of the indifference-preference function, the RjT was ~2.3 times higher for those preferring dark chocolate than the RjT for those preferring milk chocolate in both experiments. In contrast, the DT for both groups was functionally identical, suggesting that differential effects of bitterness on liking of chocolate products are not based on the ability to detect bitterness in these products. PMID:22754143
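A minimal sketch of estimating a rejection threshold from an indifference-preference function is shown below; the synthetic preference data and the 75% criterion (midway between indifference and complete rejection) are assumptions for illustration, not the exact analysis of the paper.

```python
# Sketch of estimating a rejection threshold: fit a sigmoid to the proportion
# of paired-preference trials in which the unspiked control is preferred, as a
# function of log bitterant concentration, then solve for the concentration
# at an assumed 75% criterion.  Data points are synthetic.
import numpy as np
from scipy.optimize import curve_fit

conc = np.array([0.05, 0.1, 0.2, 0.4, 0.8, 1.6])          # bitterant concentration (arbitrary units)
pref_control = np.array([0.52, 0.55, 0.63, 0.74, 0.86, 0.94])

def sigmoid(log_c, log_c50, slope):
    """Indifference-preference function rising from 0.5 (chance) toward 1.0."""
    return 0.5 + 0.5 / (1.0 + np.exp(-(log_c - log_c50) / slope))

popt, _ = curve_fit(sigmoid, np.log(conc), pref_control, p0=[np.log(0.4), 0.5])
log_c50, slope = popt

criterion = 0.75                                           # assumed rejection criterion
log_rjt = log_c50 + slope * np.log((criterion - 0.5) / (1.0 - criterion))
print(f"estimated rejection threshold ~ {np.exp(log_rjt):.2f}")
```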
Loehman, Rachel A.; Elias, Joran; Douglass, Richard J.; Kuenzi, Amy J.; Mills, James N.; Wagoner, Kent
2013-01-01
Deer mice (Peromyscus maniculatus) are the main reservoir host for Sin Nombre virus, the primary etiologic agent of hantavirus pulmonary syndrome in North America. Sequential changes in weather and plant productivity (trophic cascades) have been noted as likely catalysts of deer mouse population irruptions, and monitoring and modeling of these phenomena may allow for development of early-warning systems for disease risk. Relationships among weather variables, satellite-derived vegetation productivity, and deer mouse populations were examined for a grassland site east of the Continental Divide and a sage-steppe site west of the Continental Divide in Montana, USA. We acquired monthly deer mouse population data for mid-1994 through 2007 from long-term study sites maintained for monitoring changes in hantavirus reservoir populations, and we compared these with monthly bioclimatology data from the same period and gross primary productivity data from the Moderate Resolution Imaging Spectroradiometer sensor for 2000–06. We used the Random Forests statistical learning technique to fit a series of predictive models based on temperature, precipitation, and vegetation productivity variables. Although we attempted several iterations of models, including incorporating lag effects and classifying rodent density by seasonal thresholds, our results showed no ability to predict rodent populations using vegetation productivity or weather data. We concluded that trophic cascade connections to rodent population levels may be weaker than originally supposed, may be specific to only certain climatic regions, or may not be detectable using remotely sensed vegetation productivity measures, although weather patterns and vegetation dynamics were positively correlated. PMID:22493110
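The modelling step described (Random Forests fit to weather and vegetation-productivity predictors, including lagged versions, to predict monthly deer mouse density) might be sketched as below; the synthetic data and feature choices are purely illustrative, and, as the authors found, such a model need not have any real predictive skill.

```python
# Sketch of the modelling step: a Random Forest predicting monthly rodent
# density from weather and vegetation-productivity predictors, with lags.
# Data are synthetic and purely illustrative.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n_months = 160
df = pd.DataFrame({
    "temp_c": rng.normal(8, 10, n_months),
    "precip_mm": rng.gamma(2.0, 15.0, n_months),
    "gpp": rng.normal(100, 25, n_months),          # satellite-derived productivity
    "mice_per_ha": rng.poisson(12, n_months).astype(float),
})
# Lagged predictors (e.g. productivity and precipitation 3 and 6 months earlier).
for lag in (3, 6):
    df[f"gpp_lag{lag}"] = df["gpp"].shift(lag)
    df[f"precip_lag{lag}"] = df["precip_mm"].shift(lag)
df = df.dropna()

X = df.drop(columns="mice_per_ha")
y = df["mice_per_ha"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = RandomForestRegressor(n_estimators=500, random_state=0)
model.fit(X_train, y_train)
print("R^2 on held-out months:", round(model.score(X_test, y_test), 3))
```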
Northwest Manufacturing Initiative
2012-03-27
crack growth and threshold stress corrosion cracking evaluation. Threshold stress corrosion cracking was done using the rising step load method with ... Group Technology methods to establish manufacturing cells for production efficiency, to develop internal Lean Champions, and to implement rapid ... different levels, advisory, core, etc. VI. Core steering committee composed of members that have a significant vested interest. Action Item: Draft
Extraction of hadron interactions above inelastic threshold in lattice QCD.
Aoki, Sinya; Ishii, Noriyoshi; Doi, Takumi; Hatsuda, Tetsuo; Ikeda, Yoichi; Inoue, Takashi; Murano, Keiko; Nemura, Hidekatsu; Sasaki, Kenji
2011-01-01
We propose a new method to extract hadron interactions above the inelastic threshold from the Nambu-Bethe-Salpeter amplitude in lattice QCD. We consider scattering processes such as A + B → C + D, where A, B, C, and D are different one-particle states. An extension to cases where particle production occurs during the scattering is also discussed.
Strong Evidence for Nucleon Resonances near 1900 MeV
NASA Astrophysics Data System (ADS)
Anisovich, A. V.; Burkert, V.; Hadžimehmedović, M.; Ireland, D. G.; Klempt, E.; Nikonov, V. A.; Omerović, R.; Osmanović, H.; Sarantsev, A. V.; Stahov, J.; Švarc, A.; Thoma, U.
2017-08-01
Data on the reaction γp → K+Λ from the CLAS experiments are used to derive the leading multipoles, E0+, M1-, E1+, and M1+, from the production threshold to 2180 MeV in 24 slices of the invariant mass. The four multipoles are determined without any constraints. The multipoles are fitted using a multichannel L+P model that allows us to search for singularities and to extract the positions of poles on the complex energy plane in an almost model-independent method. The multipoles are also used as additional constraints in an energy-dependent analysis of a large body of pion and photoinduced reactions within the Bonn-Gatchina partial wave analysis. The study confirms the existence of poles due to nucleon resonances with spin parity JP = 1/2-, 1/2+, and 3/2+ in the region at about 1.9 GeV.