Sample records for deriving thresholds developing

  1. Thresholds of sea-level rise rate and sea-level acceleration rate in a vulnerable coastal wetland

    NASA Astrophysics Data System (ADS)

    Wu, W.; Biber, P.; Bethel, M.

    2017-12-01

    Feedbacks among inundation, sediment trapping, and vegetation productivity help maintain coastal wetlands facing sea-level rise (SLR). However, when the SLR rate exceeds a threshold, coastal wetlands can collapse. Understanding this threshold helps address a key challenge in ecology, the nonlinear response of ecosystems to environmental change, and promotes communication between ecologists and policy makers. We studied the threshold of SLR rate and developed a new threshold of SLR acceleration rate for the sustainability of coastal wetlands, as SLR is likely to accelerate due to enhanced anthropogenic forcing. We developed a mechanistic model to simulate wetland change and derived the SLR thresholds for Grand Bay, MS, a micro-tidal estuary with limited upland freshwater and sediment input in the northern Gulf of Mexico. The new SLR acceleration rate threshold complements the threshold of SLR rate and can help explain the temporal lag before the rapid decline of wetland area becomes evident after the SLR rate threshold is exceeded. Deriving these two thresholds depends on the temporal scale, the interaction of SLR with other environmental factors, and landscape metrics, which have not been fully accounted for before this study. The derived SLR rate thresholds range from 7.3 mm/yr to 11.9 mm/yr. The thresholds of SLR acceleration rate are 3.02×10⁻⁴ m/yr² and 9.62×10⁻⁵ m/yr² for 2050 and 2100, respectively. Based on the thresholds developed, the predicted SLR that would adversely impact the coastal wetlands in Grand Bay by 2100 falls within the likely range of SLR under a high warming scenario (RCP8.5) and beyond the very likely range under a low warming scenario (RCP2.6 or 3), highlighting the need to avoid the high warming scenario if these marshes are to be preserved.

  2. Threshold Capabilities: Threshold Concepts and Knowledge Capability Linked through Variation Theory

    ERIC Educational Resources Information Center

    Baillie, Caroline; Bowden, John A.; Meyer, Jan H. F.

    2013-01-01

    The Threshold Capability Integrated Theoretical Framework (TCITF) is presented as a framework for the design of university curricula, aimed at developing graduates' capability to deal with previously unseen situations in their professional, social, and personal lives. The TCITF is a new theoretical framework derived from, and heavily dependent…

  3. On the expected discounted penalty functions for two classes of risk processes under a threshold dividend strategy

    NASA Astrophysics Data System (ADS)

    Lu, Zhaoyang; Xu, Wei; Sun, Decai; Han, Weiguo

    2009-10-01

    In this paper, the discounted penalty (Gerber-Shiu) functions are developed for a risk model involving two independent classes of insurance risks under a threshold dividend strategy. We assume that the two claim number processes are independent Poisson and generalized Erlang(2) processes, respectively. When the surplus is above the threshold level, dividends are paid at a constant rate that does not exceed the premium rate. Two systems of integro-differential equations for the discounted penalty functions are derived, depending on whether the surplus is above the threshold level. Laplace transforms of the discounted penalty functions when the surplus is below the threshold level are obtained. We also derive, via the Dickson-Hipp operator, a system of renewal equations satisfied by the discounted penalty functions when the initial surplus is above the threshold level. Finally, analytical solutions of the two systems of integro-differential equations are presented.

  4. Determination and validation of soil thresholds for cadmium based on food quality standard and health risk assessment.

    PubMed

    Ding, Changfeng; Ma, Yibing; Li, Xiaogang; Zhang, Taolin; Wang, Xingxiang

    2018-04-01

    Cadmium (Cd) is an environmental toxicant with high rates of soil-plant transfer. It is essential to establish accurate soil thresholds for the implementation of soil management practices. This study takes root vegetables as an example to derive soil thresholds for Cd based on the food quality standard as well as on health risk assessment using species sensitivity distribution (SSD). A soil type-specific bioconcentration factor (BCF, the ratio of the Cd concentration in the plant to that in the soil), generated from soils with a proper Cd concentration gradient, was calculated and applied in the derivation of soil thresholds instead of a generic BCF value to minimize the uncertainty. The sensitivity variations of twelve root vegetable cultivars in accumulating soil Cd and the empirical soil-plant transfer model were investigated and developed in greenhouse experiments. After normalization, the hazardous concentrations at the fifth percentile of the distribution based on added Cd (HC5,add) were calculated from the SSD curves fitted by the Burr Type III distribution. The derived soil thresholds are presented as continuous or scenario criteria depending on the combination of soil pH and organic carbon content. The soil thresholds based on the food quality standard were on average 0.7-fold of those based on health risk assessment, and were further validated as reliable using independent data from a field survey and published articles. The results suggest that deriving soil thresholds for Cd using the SSD method is robust and also applicable to other crops as well as to other trace elements that have the potential to cause health risks. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. A Continuous Threshold Expectile Model.

    PubMed

    Zhang, Feipeng; Li, Qunhua

    2017-12-01

    Expectile regression is a useful tool for exploring the relation between the response and the explanatory variables beyond the conditional mean. A continuous threshold expectile regression is developed for modeling data in which the effect of a covariate on the response variable is linear but varies below and above an unknown threshold in a continuous way. The estimators for the threshold and the regression coefficients are obtained using a grid search approach. The asymptotic properties of all the estimators are derived, and the estimator for the threshold is shown to achieve root-n consistency. A weighted CUSUM-type test statistic is proposed for the existence of a threshold at a given expectile, and its asymptotic properties are derived under both the null and the local alternative models. This test only requires fitting the model under the null hypothesis of no threshold, and is thus computationally more efficient than likelihood-ratio-type tests. Simulation studies show that the proposed estimators and test have desirable finite-sample performance in both homoscedastic and heteroscedastic cases. Application of the proposed method to a Dutch growth dataset and a baseball pitcher salary dataset reveals interesting insights. The proposed method is implemented in the R package cthreshER.
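
    The abstract outlines the estimation strategy (a bent-line effect with a grid search over the threshold) without the algebra. Below is a minimal, hedged sketch of that idea: a single-covariate bent-line model fitted by asymmetric (expectile) least squares, with the threshold chosen by grid search. The function names, the IRLS fitting loop, and the grid choice are illustrative assumptions, not the actual implementation in the R package cthreshER.

    ```python
    import numpy as np

    def expectile_loss(resid, tau):
        """Asymmetric squared loss: weight tau above zero, (1 - tau) below."""
        w = np.where(resid >= 0, tau, 1 - tau)
        return np.sum(w * resid**2)

    def fit_bentline_expectile(x, y, tau=0.5, grid=None):
        """Grid search over the threshold t of the bent-line model
        y = b0 + b1*x + b2*(x - t)_+ , fitted at one expectile level tau by
        iteratively reweighted least squares (illustrative sketch only)."""
        if grid is None:
            grid = np.quantile(x, np.linspace(0.1, 0.9, 41))
        best = (np.inf, None, None)
        for t in grid:
            X = np.column_stack([np.ones_like(x), x, np.maximum(x - t, 0.0)])
            beta = np.zeros(3)
            for _ in range(100):                      # IRLS for the expectile fit
                resid = y - X @ beta
                w = np.where(resid >= 0, tau, 1 - tau)
                beta_new = np.linalg.lstsq(X * w[:, None] ** 0.5,
                                           y * w ** 0.5, rcond=None)[0]
                if np.allclose(beta_new, beta, atol=1e-8):
                    beta = beta_new
                    break
                beta = beta_new
            loss = expectile_loss(y - X @ beta, tau)
            if loss < best[0]:
                best = (loss, t, beta)
        return best[1], best[2]                       # threshold and coefficients
    ```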

  6. Estimating economic thresholds for pest control: an alternative procedure.

    PubMed

    Ramirez, O A; Saunders, J L

    1999-04-01

    An alternative methodology to determine profit-maximizing economic thresholds is developed and illustrated. An optimization problem based on the main biological and economic relations involved in determining a profit-maximizing economic threshold is first advanced. From it, a more manageable model of two nonsimultaneous reduced-form equations is derived, which represents a simpler but conceptually and statistically sound alternative. The model recognizes that yields and pest control costs are a function of the economic threshold used. Higher (less strict) economic thresholds can result in lower yields and, therefore, lower gross income from the sale of the product, but can also be less costly to maintain. The highest possible profits are obtained by using the economic threshold that results in the maximum difference between the gross income and pest control cost functions.
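
    As a rough illustration of the reduced-form idea, the sketch below maximizes profit (gross income minus control cost) over a grid of candidate thresholds. The yield-response and cost curves, units, and coefficients are entirely hypothetical placeholders, not the paper's estimated equations.

    ```python
    import numpy as np

    # Hypothetical reduced-form relations (illustration only): yield falls and
    # control cost also falls as the economic threshold is relaxed.
    def gross_income(threshold, price=200.0):
        yield_t = 5.0 - 0.8 * np.log1p(threshold)    # t/ha, made-up yield response
        return price * yield_t                        # $/ha

    def control_cost(threshold):
        return 120.0 / (1.0 + threshold)              # $/ha, made-up cost curve

    thresholds = np.linspace(0.1, 10.0, 500)          # pests per plant (example units)
    profit = gross_income(thresholds) - control_cost(thresholds)
    best = thresholds[np.argmax(profit)]
    print(f"profit-maximizing economic threshold ≈ {best:.2f} pests/plant")
    ```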

  7. Dual processing model of medical decision-making.

    PubMed

    Djulbegovic, Benjamin; Hozo, Iztok; Beckstead, Jason; Tsalatsanis, Athanasios; Pauker, Stephen G

    2012-09-03

    Dual processing theory of human cognition postulates that reasoning and decision-making can be described as a function of both an intuitive, experiential, affective system (system I) and an analytical, deliberative (system II) processing system. To date, no formal descriptive model of medical decision-making based on dual processing theory has been developed. Here we postulate such a model and apply it to a common clinical situation: whether treatment should be administered to a patient who may or may not have a disease. We developed a mathematical model in which we linked a recently proposed descriptive psychological model of cognition with the threshold model of medical decision-making, and show how this approach can be used to better understand decision-making at the bedside and to explain the widespread variation in treatments observed in clinical practice. We show that physicians' beliefs about whether to treat at probability levels higher (or lower) than the prescriptive therapeutic thresholds obtained via system II processing are moderated by system I and by the ratio of benefits and harms as evaluated by both systems I and II. Under some conditions, the system I decision maker's threshold may drop dramatically below the expected utility threshold derived by system II. This can explain the overtreatment often seen in contemporary practice. The opposite can also occur, as in situations where empirical evidence is considered unreliable or when the cognitive processes of decision makers are biased through recent experience: the threshold will then increase relative to the normative threshold value derived via system II using expected utility. This inclination toward higher diagnostic certainty may, in turn, explain the undertreatment that is also documented in current medical practice. We have developed the first dual processing model of medical decision-making, which has the potential to enrich the current medical decision-making field, still to a large extent dominated by expected utility theory. The model also provides a platform for reconciling two groups of competing dual processing theories (parallel competitive and default-interventionist theories).
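
    The system II (expected-utility) benchmark referred to here is usually the classical treatment-threshold result, in which the probability of disease above which treatment is warranted equals harm divided by harm plus benefit. A minimal sketch of that benchmark, assuming the net-benefit and net-harm quantities are given (it is not the authors' full dual-processing model):

    ```python
    def treatment_threshold(benefit, harm):
        """Classical expected-utility (system II) treatment threshold:
        treat when the probability of disease exceeds harm / (harm + benefit),
        where benefit is the net gain of treating the diseased and harm the
        net loss of treating the non-diseased."""
        return harm / (harm + benefit)

    # Example: benefit 10 times larger than harm -> treat above ~9% probability.
    print(treatment_threshold(benefit=10.0, harm=1.0))   # 0.0909...
    ```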

  8. Model for Predicting Passage of Invasive Fish Species Through Culverts

    NASA Astrophysics Data System (ADS)

    Neary, V.

    2010-12-01

    Conservation efforts to promote or inhibit fish passage include the application of simple fish passage models to determine whether an open channel flow allows passage of a given fish species. Derivations of simple fish passage models for uniform and nonuniform flow conditions are presented. For uniform flow conditions, a model equation is developed that predicts the mean-current velocity threshold in a fishway, or velocity barrier, which causes exhaustion at a given maximum distance of ascent. The derivation of a simple expression for this exhaustion-threshold (ET) passage model is presented using kinematic principles coupled with fatigue curves for threatened and endangered fish species. Mean current velocities at or above the threshold predict failure to pass. Mean current velocities below the threshold predict successful passage. The model is therefore intuitive and easily applied to predict passage or exclusion. The ET model’s simplicity comes with limitations, however, including its application only to uniform flow, which is rarely found in the field. This limitation is addressed by deriving a model that accounts for nonuniform conditions, including backwater profiles and drawdown curves. Comparison of these models with experimental data from volitional swimming studies of fish indicates reasonable performance, but limitations are still present due to the difficulty in predicting fish behavior and passage strategies that can vary among individuals and different fish species.
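
    The abstract does not give the ET equation itself; the sketch below illustrates the kinematic idea under an assumed power-law fatigue curve: distance of ascent equals (swim speed minus current velocity) times time to exhaustion, and the velocity threshold is the smallest mean current at which no available swim speed lets the fish cover the culvert length. The fatigue-curve coefficients, culvert length, and function names are placeholders, not values from the paper.

    ```python
    import numpy as np

    def endurance(swim_speed, a=5000.0, b=-3.0):
        """Hypothetical power-law fatigue curve: time to exhaustion (s)
        at a sustained swim speed (m/s). Coefficients are illustrative."""
        return a * swim_speed**b

    def max_ascent(water_velocity, swim_speeds):
        """Maximum distance of ascent over candidate swim speeds:
        D = (U_s - U_w) * T(U_s), ground speed times time to exhaustion."""
        d = (swim_speeds - water_velocity) * endurance(swim_speeds)
        return max(d.max(), 0.0)

    def velocity_threshold(culvert_length, swim_speeds=np.linspace(0.5, 3.0, 251)):
        """Smallest mean current velocity at which the fish can no longer
        ascend the full culvert length (the exhaustion-threshold idea)."""
        for u_w in np.linspace(0.0, swim_speeds.max(), 1000):
            if max_ascent(u_w, swim_speeds) < culvert_length:
                return u_w
        return None

    print(velocity_threshold(culvert_length=30.0))   # m/s, illustrative only
    ```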

  9. Thresholds for conservation and management: structured decision making as a conceptual framework

    USGS Publications Warehouse

    Nichols, James D.; Eaton, Mitchell J.; Martin, Julien; Edited by Guntenspergen, Glenn R.

    2014-01-01

    Ecological thresholds are values of system state variables at which small changes produce substantial changes in system dynamics. They are frequently incorporated into ecological models used to project system responses to management actions. Utility thresholds are components of management objectives and are values of state or performance variables at which small changes yield substantial changes in the value of the management outcome. Decision thresholds are values of system state variables at which small changes prompt changes in management actions in order to reach specified management objectives. Decision thresholds are derived from the other components of the decision process. We advocate a structured decision making (SDM) approach within which the following components are identified: objectives (possibly including utility thresholds), potential actions, models (possibly including ecological thresholds), monitoring program, and a solution algorithm (which produces decision thresholds). Adaptive resource management (ARM) is described as a special case of SDM developed for recurrent decision problems that are characterized by uncertainty. We believe that SDM in general, and ARM in particular, provide good approaches to conservation and management. Use of SDM and ARM also clarifies the distinct roles of ecological thresholds, utility thresholds, and decision thresholds in informed decision processes.

  10. Fisher classifier and its probability of error estimation

    NASA Technical Reports Server (NTRS)

    Chittineni, C. B.

    1979-01-01

    Computationally efficient expressions are derived for estimating the probability of error using the leave-one-out method. The optimal threshold for the classification of patterns projected onto Fisher's direction is derived. A simple generalization of the Fisher classifier to multiple classes is presented. Computational expressions are developed for estimating the probability of error of the multiclass Fisher classifier.
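
    A minimal two-class sketch of the projection-plus-threshold step, assuming equal priors and a shared within-class scatter so the threshold sits at the midpoint of the projected class means; the paper's leave-one-out error estimation and its exact optimal-threshold expression are not reproduced here.

    ```python
    import numpy as np

    def fisher_direction_and_threshold(X0, X1):
        """Two-class Fisher discriminant: w = Sw^{-1} (m1 - m0). A simple
        threshold is the midpoint of the projected class means (equal priors
        and shared covariance assumed; the paper's optimal threshold may differ)."""
        m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
        Sw = np.cov(X0, rowvar=False) * (len(X0) - 1) + \
             np.cov(X1, rowvar=False) * (len(X1) - 1)
        w = np.linalg.solve(Sw, m1 - m0)
        threshold = 0.5 * (X0 @ w).mean() + 0.5 * (X1 @ w).mean()
        return w, threshold

    rng = np.random.default_rng(0)
    X0 = rng.normal([0, 0], 1.0, size=(200, 2))
    X1 = rng.normal([2, 1], 1.0, size=(200, 2))
    w, t = fisher_direction_and_threshold(X0, X1)
    pred = (np.vstack([X0, X1]) @ w) > t        # classify by projection vs threshold
    ```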

  11. Vehicle lift-off modelling and a new rollover detection criterion

    NASA Astrophysics Data System (ADS)

    Mashadi, Behrooz; Mostaghimi, Hamid

    2017-05-01

    The modelling and development of a general criterion for the prediction of the rollover threshold is the main purpose of this work. Vehicle dynamics models for the phase after wheel lift-off, when the vehicle moves on two wheels, are derived, and the governing equations are used to develop the rollover threshold. These models include the properties of the suspension and steering systems. In order to study the stability of motion, the steady-state solutions of the equations of motion are obtained. Based on the stability analyses, a new relation is derived for the rollover threshold in terms of measurable response parameters. The presented criterion predicts the best time for preventing vehicle rollover by applying a correcting moment. It is shown that the introduced rollover threshold identifies a state of vehicle motion from which the vehicle can be stabilised with a low energy requirement.

  12. Using generalized additive modeling to empirically identify thresholds within the ITERS in relation to toddlers' cognitive development.

    PubMed

    Setodji, Claude Messan; Le, Vi-Nhuan; Schaack, Diana

    2013-04-01

    Research linking high-quality child care programs and children's cognitive development has contributed to the growing popularity of child care quality benchmarking efforts such as quality rating and improvement systems (QRIS). Consequently, there has been an increased interest in and a need for approaches to identifying thresholds, or cutpoints, in the child care quality measures used in these benchmarking efforts that differentiate between different levels of children's cognitive functioning. To date, research has provided little guidance to policymakers as to where these thresholds should be set. Using the Early Childhood Longitudinal Study, Birth Cohort (ECLS-B) data set, this study explores the use of generalized additive modeling (GAM) as a method of identifying thresholds on the Infant/Toddler Environment Rating Scale (ITERS) in relation to toddlers' performance on the Mental Development subscale of the Bayley Scales of Infant Development (the Bayley Mental Development Scale Short Form-Research Edition, or BMDSF-R). The present findings suggest that simple linear models do not always correctly depict the relationships between ITERS scores and BMDSF-R scores and that GAM-derived thresholds were more effective at differentiating among children's performance levels on the BMDSF-R. Additionally, the present findings suggest that there is a minimum threshold on the ITERS that must be exceeded before significant improvements in children's cognitive development can be expected. There may also be a ceiling threshold on the ITERS, such that beyond a certain level, only marginal increases in children's BMDSF-R scores are observed. (PsycINFO Database Record (c) 2013 APA, all rights reserved).
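
    As a rough stand-in for the GAM analysis (without reproducing the authors' model), the sketch below smooths the ITERS to BMDSF-R relation nonparametrically and reads off where the fitted slope first becomes, and later stops being, appreciably positive, i.e. candidate floor and ceiling thresholds. The choice of smoother (LOWESS), the slope tolerance, and the variable names are assumptions.

    ```python
    import numpy as np
    from statsmodels.nonparametric.smoothers_lowess import lowess

    def smooth_thresholds(iters_scores, bmdsf_scores, frac=0.5, slope_eps=0.05):
        """Stand-in for the GAM step: LOWESS-smooth the ITERS -> BMDSF-R relation
        and locate where the fitted slope first exceeds, and last exceeds, a small
        positive tolerance (floor and ceiling thresholds, both in ITERS units)."""
        fit = lowess(bmdsf_scores, iters_scores, frac=frac)   # sorted [x, yhat]
        x, idx = np.unique(fit[:, 0], return_index=True)      # drop tied x values
        yhat = fit[idx, 1]
        slope = np.gradient(yhat, x)
        rising = slope > slope_eps
        if not rising.any():
            return None, None
        floor = x[np.argmax(rising)]                 # first clearly positive slope
        ceiling = x[np.where(rising)[0][-1]]         # last clearly positive slope
        return floor, ceiling
    ```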

  13. Absolute Cerebral Blood Flow Infarction Threshold for 3-Hour Ischemia Time Determined with CT Perfusion and 18F-FFMZ-PET Imaging in a Porcine Model of Cerebral Ischemia

    PubMed Central

    Cockburn, Neil; Kovacs, Michael

    2016-01-01

    CT perfusion (CTP) derived cerebral blood flow (CBF) thresholds have been proposed as the optimal parameter for distinguishing the infarct core prior to reperfusion. Previous threshold-derivation studies have been limited by uncertainties introduced by infarct expansion between the acute phase of stroke and follow-up imaging, or by DWI lesion reversibility. In this study a model is proposed for determining infarction CBF thresholds at a 3 hour ischemia time by comparing contemporaneously acquired CTP-derived CBF maps to 18F-FFMZ-PET imaging, with the objective of deriving a CBF threshold for infarction after 3 hours of ischemia. Endothelin-1 (ET-1) was injected into the brain of Duroc-Cross pigs (n = 11) through a burr hole in the skull. CTP images were acquired 10 and 30 minutes post ET-1 injection and then every 30 minutes for 150 minutes. 370 MBq of 18F-FFMZ was injected ~120 minutes post ET-1 injection and PET images were acquired for 25 minutes starting ~155–180 minutes post ET-1 injection. CBF maps from each CTP acquisition were co-registered and converted into a median CBF map. The median CBF map was co-registered to blood volume maps for vessel exclusion, to an average CT image for grey/white matter segmentation, and to 18F-FFMZ-PET images for infarct delineation. Logistic regression and ROC analysis were performed on infarcted and non-infarcted pixel CBF values for each animal that developed infarct. Six of the eleven animals developed infarction. The mean CBF value corresponding to the optimal operating point of the ROC curves for the 6 animals was 12.6 ± 2.8 mL·min⁻¹·100 g⁻¹ for infarction after 3 hours of ischemia. The porcine ET-1 model of cerebral ischemia is easier to implement than other large animal models of stroke, and performs similarly as long as CBF is monitored using CTP to prevent reperfusion. PMID:27347877
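
    A small sketch of the pixel-level ROC step described above, using the Youden index as one common choice of optimal operating point (the paper may define its operating point differently); the synthetic CBF values are for illustration only.

    ```python
    import numpy as np
    from sklearn.metrics import roc_curve, roc_auc_score

    def cbf_infarction_threshold(cbf_values, infarcted):
        """Pixel-wise ROC analysis: 'infarcted' is 1/0 per pixel, 'cbf_values' the
        median CBF map values. Low CBF indicates infarction, so score with -CBF;
        the threshold at the Youden-index optimum is returned in CBF units."""
        scores = -np.asarray(cbf_values, dtype=float)
        fpr, tpr, thresh = roc_curve(infarcted, scores)
        j = np.argmax(tpr - fpr)                 # Youden index as operating point
        return -thresh[j], roc_auc_score(infarcted, scores)

    # Example with synthetic pixel data (mL/min/100 g, made up).
    rng = np.random.default_rng(1)
    cbf = np.concatenate([rng.normal(10, 3, 500), rng.normal(35, 8, 2000)])
    lab = np.concatenate([np.ones(500), np.zeros(2000)])
    print(cbf_infarction_threshold(cbf, lab))    # (threshold, AUC)
    ```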

  14. Threshold for extinction and survival in stochastic tumor immune system

    NASA Astrophysics Data System (ADS)

    Li, Dongxi; Cheng, Fangjuan

    2017-10-01

    This paper investigates the stochastic character of tumor growth and extinction in the presence of the immune response of a host organism. First, the mathematical model describing the interaction and competition between the tumor cells and the immune system is established based on Michaelis-Menten enzyme kinetics. Then, the threshold conditions for extinction, weak persistence and stochastic persistence of tumor cells are derived through rigorous theoretical proofs. Finally, stochastic simulations are performed to substantiate and illustrate the conclusions we have derived. The modeling results will be beneficial for understanding the concept of immunoediting and for developing cancer immunotherapy. Besides, our simple theoretical model can help to obtain new insight into the complexity of tumor growth.

  15. Calculating the dim light melatonin onset: the impact of threshold and sampling rate.

    PubMed

    Molina, Thomas A; Burgess, Helen J

    2011-10-01

    The dim light melatonin onset (DLMO) is the most reliable circadian phase marker in humans, but the cost of assaying samples is relatively high. Therefore, the authors examined differences between DLMOs calculated from hourly versus half-hourly sampling and differences between DLMOs calculated with two recommended thresholds (a fixed threshold of 3 pg/mL and a variable "3k" threshold equal to the mean plus two standard deviations of the first three low daytime points). The authors calculated these DLMOs from salivary dim light melatonin profiles collected from 122 individuals (64 women) at baseline. DLMOs derived from hourly sampling occurred on average only 6-8 min earlier than the DLMOs derived from half-hourly saliva sampling, and they were highly correlated with each other (r ≥ 0.89, p < .001). However, in up to 19% of cases the DLMO derived from hourly sampling was >30 min from the DLMO derived from half-hourly sampling. The 3 pg/mL threshold produced significantly less variable DLMOs than the 3k threshold. However, the 3k threshold was significantly lower than the 3 pg/mL threshold (p < .001). The DLMOs calculated with the 3k method were significantly earlier (by 22-24 min) than the DLMOs calculated with the 3 pg/mL threshold, regardless of sampling rate. These results suggest that in large research studies and clinical settings, the more affordable and practical option of hourly sampling is adequate for a reasonable estimate of circadian phase. Although the 3 pg/mL fixed threshold is less variable than the 3k threshold, it produces estimates of the DLMO that are further from the initial rise of melatonin.
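
    A minimal sketch of the two DLMO computations described above: linear interpolation of the crossing time for either the fixed 3 pg/mL threshold or the "3k" threshold (mean plus two standard deviations of the first three low daytime samples). The sample data, and the assumption that the first three samples are the low daytime points, are illustrative.

    ```python
    import numpy as np

    def dlmo(times_h, melatonin_pgml, method="fixed"):
        """Dim light melatonin onset: linearly interpolated clock time at which
        the profile first crosses the threshold from below.
        'fixed' uses 3 pg/mL; '3k' uses mean + 2 SD of the first three samples."""
        t = np.asarray(times_h, dtype=float)
        m = np.asarray(melatonin_pgml, dtype=float)
        thr = 3.0 if method == "fixed" else m[:3].mean() + 2 * m[:3].std(ddof=1)
        for i in range(1, len(m)):
            if m[i - 1] < thr <= m[i]:
                # linear interpolation between the two samples bracketing the rise
                frac = (thr - m[i - 1]) / (m[i] - m[i - 1])
                return t[i - 1] + frac * (t[i] - t[i - 1]), thr
        return None, thr

    times = [18, 19, 20, 21, 22, 23]               # clock hours, hourly sampling
    saliva = [0.8, 1.1, 0.9, 2.4, 6.5, 12.0]        # pg/mL, made-up profile
    print(dlmo(times, saliva, "fixed"))             # ~ (21.1 h, 3.0 pg/mL)
    print(dlmo(times, saliva, "3k"))
    ```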

  16. Development and evaluation of consensus-based sediment effect concentrations for polychlorinated biphenyls

    USGS Publications Warehouse

    MacDonald, Donald D.; Dipinto, Lisa M.; Field, Jay; Ingersoll, Christopher G.; Long, Edward R.; Swartz, Richard C.

    2000-01-01

    Sediment-quality guidelines (SQGs) have been published for polychlorinated biphenyls (PCBs) using both empirical and theoretical approaches. Empirically based guidelines have been developed using the screening-level concentration, effects range, effects level, and apparent effects threshold approaches. Theoretically based guidelines have been developed using the equilibrium-partitioning approach. Empirically-based guidelines were classified into three general categories, in accordance with their original narrative intents, and used to develop three consensus-based sediment effect concentrations (SECs) for total PCBs (tPCBs), including a threshold effect concentration, a midrange effect concentration, and an extreme effect concentration. Consensus-based SECs were derived because they estimate the central tendency of the published SQGs and, thus, reconcile the guidance values that have been derived using various approaches. Initially, consensus-based SECs for tPCBs were developed separately for freshwater sediments and for marine and estuarine sediments. Because the respective SECs were statistically similar, the underlying SQGs were subsequently merged and used to formulate more generally applicable SECs. The three consensus-based SECs were then evaluated for reliability using matching sediment chemistry and toxicity data from field studies, dose-response data from spiked-sediment toxicity tests, and SQGs derived from the equilibrium-partitioning approach. The results of this evaluation demonstrated that the consensus-based SECs can accurately predict both the presence and absence of toxicity in field-collected sediments. Importantly, the incidence of toxicity increases incrementally with increasing concentrations of tPCBs. Moreover, the consensus-based SECs are comparable to the chronic toxicity thresholds that have been estimated from dose-response data and equilibrium-partitioning models. Therefore, consensus-based SECs provide a unifying synthesis of existing SQGs, reflect causal rather than correlative effects, and accurately predict sediment toxicity in PCB-contaminated sediments.
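
    The abstract says the consensus-based SECs estimate the central tendency of the published SQGs within each narrative category; one commonly used choice for such a central tendency is the geometric mean, which is assumed in the short sketch below (the guideline values shown are hypothetical, not values from the paper).

    ```python
    import numpy as np

    def consensus_sec(sqg_values):
        """Consensus sediment effect concentration as the geometric mean of the
        published SQGs within one narrative category (an assumed, commonly used
        measure of their central tendency)."""
        v = np.asarray(sqg_values, dtype=float)
        return float(np.exp(np.log(v).mean()))

    # Example: hypothetical threshold-effect SQGs for total PCBs (ug/g dry weight).
    print(consensus_sec([0.022, 0.034, 0.05, 0.08]))
    ```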

  17. Development of an epiphyte indicator of nutrient enrichment ...

    EPA Pesticide Factsheets

    Metrics of epiphyte load on macrophytes were evaluated for use as quantitative biological indicators of nutrient impacts in estuarine waters, based on a review and analysis of the literature on epiphytes and macrophytes, primarily seagrasses, but including some brackish and freshwater rooted macrophyte species. An approach is presented that empirically derives threshold epiphyte loads likely to cause specified levels of decrease in macrophyte response metrics such as biomass, shoot density, percent cover, production and growth. Data from 36 studies of 10 macrophyte species were pooled to derive relationships between epiphyte load and -25% and -50% seagrass response levels, which are proposed as the primary basis for establishing critical threshold values. Given the multiple sources of variability in the response data, threshold ranges based on the range of values falling between the median and the 75th quantile of observations at a given seagrass response level are proposed rather than single, critical point values. Four epiphyte load threshold categories (low, moderate, high, very high) are proposed. Comparison of the epiphyte loads associated with 25% and 50% reductions in light reaching the macrophytes suggests that the threshold ranges are realistic both in terms of the principal mechanism of impact to macrophytes and in terms of the magnitude of the resultant impacts expressed by the macrophytes. Some variability in response levels was observed among

  18. Nonlinear Dynamic Modeling of Neuron Action Potential Threshold During Synaptically Driven Broadband Intracellular Activity

    PubMed Central

    Roach, Shane M.; Song, Dong; Berger, Theodore W.

    2012-01-01

    Activity-dependent variation of neuronal thresholds for action potential (AP) generation is one of the key determinants of spike-train temporal-pattern transformations from presynaptic to postsynaptic spike trains. In this study, we model the nonlinear dynamics of the threshold variation during synaptically driven broadband intracellular activity. First, membrane potentials of single CA1 pyramidal cells were recorded under physiologically plausible broadband stimulation conditions. Second, a method was developed to measure AP thresholds from the continuous recordings of membrane potentials. It involves measuring the turning points of APs by analyzing the third-order derivatives of the membrane potentials. Four stimulation paradigms with different temporal patterns were applied to validate this method by comparing the measured AP turning points and the actual AP thresholds estimated with varying stimulation intensities. Results show that the AP turning points provide consistent measurement of the AP thresholds, except for a constant offset. It indicates that 1) the variation of AP turning points represents the nonlinearities of threshold dynamics; and 2) an optimization of the constant offset is required to achieve accurate spike prediction. Third, a nonlinear dynamical third-order Volterra model was built to describe the relations between the threshold dynamics and the AP activities. Results show that the model can predict threshold accurately based on the preceding APs. Finally, the dynamic threshold model was integrated into a previously developed single neuron model and resulted in a 33% improvement in spike prediction. PMID:22156947
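
    A bare-bones sketch of the turning-point measurement described above: the membrane-potential value at the maximum of the numerically estimated third-order derivative in a short window before each spike peak is taken as the threshold estimate (up to the constant offset the authors optimize separately). The window length and the absence of filtering are simplifying assumptions; spike peaks are assumed to be detected beforehand, e.g. with scipy.signal.find_peaks.

    ```python
    import numpy as np

    def ap_turning_points(v_mV, dt_ms, spike_peaks, search_ms=3.0):
        """For each detected spike peak index, take the membrane-potential value
        at the maximum of the third-order derivative in the short window before
        the peak as the AP turning point (threshold estimate up to an offset)."""
        d3 = np.gradient(np.gradient(np.gradient(v_mV, dt_ms), dt_ms), dt_ms)
        win = int(search_ms / dt_ms)
        thresholds = []
        for p in spike_peaks:
            lo = max(p - win, 0)
            k = lo + np.argmax(d3[lo:p]) if p > lo else p
            thresholds.append(v_mV[k])
        return np.array(thresholds)
    ```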

  19. Dual processing model of medical decision-making

    PubMed Central

    2012-01-01

    Background Dual processing theory of human cognition postulates that reasoning and decision-making can be described as a function of both an intuitive, experiential, affective system (system I) and an analytical, deliberative (system II) processing system. To date, no formal descriptive model of medical decision-making based on dual processing theory has been developed. Here we postulate such a model and apply it to a common clinical situation: whether treatment should be administered to a patient who may or may not have a disease. Methods We developed a mathematical model in which we linked a recently proposed descriptive psychological model of cognition with the threshold model of medical decision-making, and show how this approach can be used to better understand decision-making at the bedside and to explain the widespread variation in treatments observed in clinical practice. Results We show that physicians' beliefs about whether to treat at probability levels higher (or lower) than the prescriptive therapeutic thresholds obtained via system II processing are moderated by system I and by the ratio of benefits and harms as evaluated by both systems I and II. Under some conditions, the system I decision maker's threshold may drop dramatically below the expected utility threshold derived by system II. This can explain the overtreatment often seen in contemporary practice. The opposite can also occur, as in situations where empirical evidence is considered unreliable or when the cognitive processes of decision makers are biased through recent experience: the threshold will then increase relative to the normative threshold value derived via system II using expected utility. This inclination toward higher diagnostic certainty may, in turn, explain the undertreatment that is also documented in current medical practice. Conclusions We have developed the first dual processing model of medical decision-making, which has the potential to enrich the current medical decision-making field, still to a large extent dominated by expected utility theory. The model also provides a platform for reconciling two groups of competing dual processing theories (parallel competitive and default-interventionist theories). PMID:22943520

  20. Flood and landslide warning based on rainfall thresholds and soil moisture indexes: the HEWS (Hydrohazards Early Warning System) for Sicily

    NASA Astrophysics Data System (ADS)

    Brigandì, Giuseppina; Tito Aronica, Giuseppe; Bonaccorso, Brunella; Gueli, Roberto; Basile, Giuseppe

    2017-09-01

    The main focus of the paper is to present a flood and landslide early warning system, named HEWS (Hydrohazards Early Warning System), specifically developed for the Civil Protection Department of Sicily and based on the combined use of rainfall thresholds, soil moisture modelling and quantitative precipitation forecasts (QPF). The warning system refers to 9 different Alert Zones into which Sicily has been divided and is based on a threshold system with three increasing critical levels: ordinary, moderate and high. In this system, for early flood warning, a Soil Moisture Accounting (SMA) model provides daily soil moisture conditions, which allow the selection of a specific set of three rainfall thresholds, one for each critical level considered, to be used for issuing the alert bulletin. Wetness indexes, representative of the soil moisture conditions of a catchment, are calculated using a simple, spatially-lumped rainfall-streamflow model, based on the SCS-CN method and on the unit hydrograph approach, which requires daily observed and/or predicted rainfall and temperature data as input. For the calibration of this model, daily continuous time series of rainfall, streamflow and air temperature data are used. An event-based lumped rainfall-runoff model was instead used for the derivation of the rainfall thresholds for each catchment in Sicily characterised by an area larger than 50 km2. In particular, a Kinematic Instantaneous Unit Hydrograph based lumped rainfall-runoff model with the SCS-CN routine for net rainfall was developed for this purpose. For rainfall-induced shallow landslide warning, empirical rainfall thresholds provided by Gariano et al. (2015) have been included in the system. They were derived on an empirical basis starting from a catalogue of 265 shallow landslides in Sicily in the period 2002-2012. Finally, the Delft-FEWS operational forecasting platform has been applied to link input data, the SMA model and the rainfall threshold models to produce warnings on a daily basis for the entire region.

  1. Spatially Varying Spectral Thresholds for MODIS Cloud Detection

    NASA Technical Reports Server (NTRS)

    Haines, S. L.; Jedlovec, G. J.; Lafontaine, F.

    2004-01-01

    The EOS science team has developed an elaborate global MODIS cloud detection procedure, and the resulting MODIS product (MOD35) is used in the retrieval process of several geophysical parameters to mask out clouds. While the global application of the cloud detection approach appears quite robust, the product has some shortcomings on the regional scale, often over-determining clouds in a variety of settings, particularly at night. This over-determination of clouds can cause a reduction in the spatial coverage of MODIS-derived clear-sky products. To minimize this problem, a new regional cloud detection method for use with MODIS data has been developed at NASA's Global Hydrology and Climate Center (GHCC). The approach is similar to that used by the GHCC for GOES data over the continental United States. Several spatially varying thresholds are applied to MODIS spectral data to produce a set of tests for detecting clouds. The thresholds are valid for each MODIS orbital pass and are derived from 20-day composites of GOES channels with wavelengths similar to those of MODIS. This paper and the accompanying poster introduce the GHCC MODIS cloud mask, provide some examples, and present some preliminary validation.

  2. On the dynamic readout characteristic of nonlinear super-resolution optical storage

    NASA Astrophysics Data System (ADS)

    Wei, Jingsong

    2013-03-01

    Researchers have developed nonlinear super-resolution optical storage over the past twenty years. However, several concerns remain, including (1) the presence of a readout threshold power; (2) the increase of the threshold power as the mark size is reduced; and (3) the initial increase of the carrier-to-noise ratio (CNR) followed by a decrease as the readout laser power or laser irradiation time increases. The present work calculates and analyzes the super-resolution spot formed by thin-film masks and the readout threshold power characteristic according to the derived formula, based on the nonlinear saturable absorption characteristic and the threshold of structural change. The obtained theoretical calculations and experimental data answer the concerns regarding the dynamic readout threshold characteristic and the dependence of the CNR on laser power and irradiation time. A near-field optical spot scanning experiment further verifies the super-resolution spot formation produced through the nonlinear thin-film masks.

  3. Thresholds of Toxicological Concern for cosmetics-related substances: New database, thresholds, and enrichment of chemical space.

    PubMed

    Yang, Chihae; Barlow, Susan M; Muldoon Jacobs, Kristi L; Vitcheva, Vessela; Boobis, Alan R; Felter, Susan P; Arvidson, Kirk B; Keller, Detlef; Cronin, Mark T D; Enoch, Steven; Worth, Andrew; Hollnagel, Heli M

    2017-11-01

    A new dataset of cosmetics-related chemicals for the Threshold of Toxicological Concern (TTC) approach has been compiled, comprising 552 chemicals with 219, 40, and 293 chemicals in Cramer Classes I, II, and III, respectively. Data were integrated and curated to create a database of No-/Lowest-Observed-Adverse-Effect Level (NOAEL/LOAEL) values, from which the final COSMOS TTC dataset was developed. Criteria for study inclusion and NOAEL decisions were defined, and rigorous quality control was performed for study details and assignment of Cramer classes. From the final COSMOS TTC dataset, human exposure thresholds of 42 and 7.9 μg/kg-bw/day were derived for Cramer Classes I and III, respectively. The size of Cramer Class II was insufficient for derivation of a TTC value. The COSMOS TTC dataset was then federated with the dataset of Munro and colleagues, previously published in 1996, after updating the latter using the quality control processes for this project. This federated dataset expands the chemical space and provides more robust thresholds. The 966 substances in the federated database comprise 245, 49 and 672 chemicals in Cramer Classes I, II and III, respectively. The corresponding TTC values of 46, 6.2 and 2.3 μg/kg-bw/day are broadly similar to those of the original Munro dataset. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
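
    For orientation, a generic sketch of the usual TTC-style derivation (5th percentile of the NOAEL distribution divided by a 100-fold uncertainty factor, as in the Munro approach); the COSMOS dataset's curation rules and any distributional fitting are not reproduced here, and the input NOAELs are placeholders.

    ```python
    import numpy as np

    def ttc_ug_per_kg_bw(noaels_mg_per_kg_bw, uncertainty_factor=100.0):
        """Generic TTC-style threshold: empirical 5th percentile of the NOAEL
        distribution for a structural class, divided by a 100-fold uncertainty
        factor, expressed in ug/kg-bw/day."""
        p5 = np.percentile(noaels_mg_per_kg_bw, 5)      # mg/kg-bw/day
        return p5 / uncertainty_factor * 1000.0          # -> ug/kg-bw/day

    # Example with hypothetical NOAELs (mg/kg-bw/day) for one Cramer class.
    print(ttc_ug_per_kg_bw([5.0, 12.0, 30.0, 75.0, 150.0, 400.0, 900.0]))
    ```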

  4. Protective effects of brain-derived neurotrophic factor on the noise-damaged cochlear spiral ganglion.

    PubMed

    Zhai, S-Q; Guo, W; Hu, Y-Y; Yu, N; Chen, Q; Wang, J-Z; Fan, M; Yang, W-Y

    2011-05-01

    To explore the protective effects of brain-derived neurotrophic factor on the noise-damaged cochlear spiral ganglion. Recombinant adenovirus brain-derived neurotrophic factor vector, recombinant adenovirus LacZ and artificial perilymph were prepared. Guinea pigs with audiometric auditory brainstem response thresholds of more than 75 dB SPL, measured seven days after four hours of noise exposure at 135 dB SPL, were divided into three groups. Adenovirus brain-derived neurotrophic factor vector, adenovirus LacZ and perilymph were infused into the cochleae of the three groups, respectively. Eight weeks later, the cochleae were stained immunohistochemically and the spiral ganglion cells counted. The auditory brainstem response threshold recorded before and seven days after noise exposure did not differ significantly between the three groups. However, eight weeks after cochlear perfusion, the group receiving brain-derived neurotrophic factor had a significantly decreased auditory brainstem response threshold and increased spiral ganglion cell count, compared with the adenovirus LacZ and perilymph groups. When administered via cochlear infusion following noise damage, brain-derived neurotrophic factor appears to improve the auditory threshold, and to have a protective effect on the spiral ganglion cells.

  5. A flash flood early warning system based on rainfall thresholds and daily soil moisture indexes

    NASA Astrophysics Data System (ADS)

    Brigandì, Giuseppina; Tito Aronica, Giuseppe

    2015-04-01

    The main focus of the paper is to present a flash flood early warning system, developed for the Civil Protection Agency of the Sicily Region, for alerting on extreme hydrometeorological events by using a methodology based on the combined use of rainfall thresholds and soil moisture indexes. Flash flood warning is in fact a key element in improving the Civil Protection capacity to mitigate damages and safeguard the security of people. It is a rather complicated task, particularly in catchments with flashy response where even brief anticipations are important and welcome. In this context, some kind of hydrological precursor can be considered to improve the effectiveness of emergency actions (i.e. early flood warning). It is well known that soil moisture is an important factor in flood formation, because runoff generation is strongly influenced by the antecedent soil moisture conditions of the catchment. The basic idea of the work presented here is to use soil moisture indexes derived in a continuous form to define a first alert phase in a flash flood forecasting chain, and then to define a unique rainfall threshold for a given day for the activation of the subsequent alarm phases, derived as a function of the soil moisture conditions at the beginning of the day. Daily soil moisture indexes, representative of the moisture condition of the catchment, were derived by using a parsimonious and simple-to-use approach based on the IHACRES model, applied in a modified form developed by the authors. It is a simple, spatially-lumped rainfall-streamflow model, based on the SCS-CN method and on the unit hydrograph approach, that requires only rainfall, streamflow and air temperature data. It consists of two modules. In the first, a non-linear loss model based on the SCS-CN method is used to transform total rainfall into effective rainfall. In the second, a linear convolution of effective rainfall is performed using a total unit hydrograph with a configuration of one parallel channel and reservoir, corresponding to the 'quick' and 'slow' components of runoff. In the non-linear model, a wetness/soil moisture index varying from 0 to 1 is derived to define the daily soil moisture conditions of the catchment and is then linked to a corresponding CN value, used as input to derive the rainfall threshold for that day. Finally, rainfall thresholds for flash flooding were derived using an Instantaneous Unit Hydrograph based lumped rainfall-runoff model with the SCS-CN routine for net rainfall. Application of the proposed methodology was carried out with reference to a river basin in Sicily, Italy.
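
    The SCS-CN loss step referred to in both flood-warning records is standard; a small sketch follows, together with an assumed (purely illustrative) linear mapping of the 0-1 wetness index onto a curve number, since the paper's actual wetness-to-CN relation is not given in the abstract.

    ```python
    def scs_cn_effective_rainfall(p_mm, cn):
        """SCS-CN runoff: S = 25400/CN - 254 (mm); initial abstraction Ia = 0.2*S;
        effective rainfall Q = (P - Ia)^2 / (P - Ia + S) when P > Ia, else 0."""
        s = 25400.0 / cn - 254.0
        ia = 0.2 * s
        return 0.0 if p_mm <= ia else (p_mm - ia) ** 2 / (p_mm - ia + s)

    def cn_from_wetness(w, cn_dry=55.0, cn_wet=85.0):
        """Assumed linear mapping of the 0-1 wetness index onto a CN range
        (illustrative only; the operational relation is catchment-specific)."""
        return cn_dry + w * (cn_wet - cn_dry)

    # Example: 60 mm storm on a moderately wet catchment (wetness index 0.6).
    print(scs_cn_effective_rainfall(60.0, cn_from_wetness(0.6)))   # mm of runoff
    ```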

  6. Influence of the chemical structure on odor qualities and odor thresholds of halogenated guaiacol-derived odorants

    NASA Astrophysics Data System (ADS)

    Juhlke, Florian; Lorber, Katja; Wagenstaller, Maria; Buettner, Andrea

    2017-12-01

    Chlorinated guaiacol derivatives are found in waste water of pulp mills using chlorine in the bleaching process of wood pulp. They can also be detected in fish tissue, possibly causing off-odors. To date, there has been no systematic investigation of the odor properties of halogenated guaiacol derivatives. To close this gap, odor thresholds in air and odor qualities of 14 compounds were determined by gas chromatography-olfactometry. Overall, the investigated compounds elicited smells that are characteristic for guaiacol, namely smoky, sweet, vanilla-like, but also medicinal and plaster-like. Their odor thresholds in air were, however, very low, ranging from 0.00072 to 23 ng/L of air. The lowest thresholds were found for 5-chloro- and 5-bromoguaiacol, followed by 4,5-dichloro- and 6-chloroguaiacol. Moreover, some inter-individual differences in odor threshold values could be observed, with the highest variations having been recorded for the individual values of 5-iodo- and 4-bromoguaiacol.

  7. A systematic review of intervention thresholds based on FRAX : A report prepared for the National Osteoporosis Guideline Group and the International Osteoporosis Foundation

    PubMed Central

    Kanis, John A; Harvey, Nicholas C; Cooper, Cyrus; Johansson, Helena; Odén, Anders; McCloskey, Eugene V

    2016-01-01

    In most assessment guidelines, treatment for osteoporosis is recommended in individuals with prior fragility fractures, especially fractures at the spine and hip. However, for those without prior fractures, the intervention thresholds can be derived using different methods. The aim of this report was to undertake a systematic review of the available information on the use of FRAX® in assessment guidelines, in particular the setting of thresholds and their validation. We identified 120 guidelines or academic papers that incorporated FRAX, of which 38 provided no clear statement on how the fracture probabilities derived are to be used in decision-making in clinical practice. The remainder recommended a fixed intervention threshold (n=58), most commonly as a component of more complex guidance (e.g. bone mineral density (BMD) thresholds), or an age-dependent threshold (n=22). Two guidelines have adopted both age-dependent and fixed thresholds. Fixed probability thresholds have ranged from 4 to 20 % for a major fracture and 1.3-5 % for hip fracture. More than one half (39) of the 58 publications identified utilized a threshold probability of 20 % for a major osteoporotic fracture, many of which also mention a hip fracture probability of 3 % as an alternative intervention threshold. In nearly all instances, no rationale is provided other than that this was the threshold used by the National Osteoporosis Foundation of the US. Where undertaken, fixed probability thresholds have been determined from tests of discrimination (Hong Kong), health economic assessment (US, Switzerland), to match the prevalence of osteoporosis (China) or to align with pre-existing guidelines or reimbursement criteria (Japan, Poland). Age-dependent intervention thresholds, first developed by the National Osteoporosis Guideline Group (NOGG), are based on the rationale that if a woman with a prior fragility fracture is eligible for treatment, then, at any given age, a man or woman with the same fracture probability but in the absence of a previous fracture (i.e. at the ‘fracture threshold’) should also be eligible. Under current NOGG guidelines, based on age-dependent probability thresholds, inequalities in access to therapy arise especially at older ages (≥ 70 years) depending on the presence or absence of a prior fracture. An alternative threshold using a hybrid model reduces this disparity. The use of FRAX (fixed or age-dependent thresholds) as the gateway to assessment identifies individuals at high risk more effectively than the use of BMD. However, the setting of intervention thresholds needs to be country-specific. PMID:27465509

  8. New method for quantification of vuggy porosity from digital optical borehole images as applied to the karstic Pleistocene limestone of the Biscayne aquifer, southeastern Florida

    USGS Publications Warehouse

    Cunningham, K.J.; Carlson, J.I.; Hurley, N.F.

    2004-01-01

    Vuggy porosity is gas- or fluid-filled openings in rock matrix that are large enough to be seen with the unaided eye. Well-connected vugs can form major conduits for flow of ground water, especially in carbonate rocks. This paper presents a new method for quantification of vuggy porosity calculated from digital borehole images collected from 47 test coreholes that penetrate the karstic Pleistocene limestone of the Biscayne aquifer, southeastern Florida. Basically, the method interprets vugs and background based on the grayscale color of each in digital borehole images and calculates a percentage of vuggy porosity. Development of the method was complicated because environmental conditions created an uneven grayscale contrast in the borehole images that makes it difficult to distinguish vugs from background. The irregular contrast was produced by unbalanced illumination of the borehole wall, which was a result of eccentering of the borehole-image logging tool. Experimentation showed that a simple, single grayscale threshold would not realistically differentiate between the grayscale contrast of vugs and background. Therefore, an equation was developed for an effective subtraction of the changing grayscale contrast, due to uneven illumination, to produce a grayscale threshold that successfully identifies vugs. In the equation, a moving average calculated around the circumference of the borehole and expressed as the background grayscale intensity is defined as a baseline from which to identify a grayscale threshold for vugs. A constant was derived empirically by calibration with vuggy porosity values derived from digital images of slabbed-core samples and used to make the subtraction from the background baseline to derive the vug grayscale threshold as a function of azimuth. The method should be effective in estimating vuggy porosity in any carbonate aquifer. © 2003 Published by Elsevier B.V.
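
    A compact sketch of the idea described above: a wrap-around circumferential moving average serves as the background grayscale baseline, an empirically calibrated constant is subtracted to obtain an azimuth-dependent vug threshold, and the fraction of pixels darker than that threshold gives the vuggy porosity. The window length, offset value, and the convention that vugs are darker than background are illustrative assumptions; the paper calibrates its constant against slabbed-core images.

    ```python
    import numpy as np

    def vuggy_porosity(image, window=61, offset=25.0):
        """image: 2-D grayscale borehole image, rows = depth, cols = azimuth.
        Background baseline = circumferential (row-wise, wrap-around) moving
        average; pixels darker than (baseline - offset) are counted as vugs.
        'window' and 'offset' are illustrative placeholders."""
        kernel = np.ones(window) / window

        def row_baseline(row):
            # wrap the row around the borehole circumference before smoothing
            padded = np.r_[row[-window:], row, row[:window]]
            return np.convolve(padded, kernel, mode="same")[window:-window]

        baseline = np.apply_along_axis(row_baseline, 1, image.astype(float))
        vug_mask = image < (baseline - offset)
        return 100.0 * vug_mask.mean(), vug_mask     # percent vuggy porosity, mask
    ```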

  9. Generation of Comprehensive Surrogate Kinetic Models and Validation Databases for Simulating Large Molecular Weight Hydrocarbon Fuels

    DTIC Science & Technology

    2012-10-25

    … Matching the “real fuel combustion property targets” of hydrogen/carbon molar ratio (H/C), derived cetane number (DCN), threshold sooting index (TSI), and average mean molecular weight (MWave) … diffusive soot extinction configurations …

  10. An approach to derive groundwater and stream threshold values for total nitrogen and ensure good ecological status of associated aquatic ecosystems - example from a coastal catchment to a vulnerable Danish estuary.

    NASA Astrophysics Data System (ADS)

    Hinsby, Klaus; Markager, Stiig; Kronvang, Brian; Windolf, Jørgen; Sonnenborg, Torben; Sørensen, Lærke

    2015-04-01

    Nitrate, which typically makes up the major part (roughly >90%) of dissolved inorganic nitrogen in groundwater and surface water, is the pollutant most frequently responsible for European groundwater bodies failing to meet the good status objectives of the European Water Framework Directive, generally assessed by comparing groundwater monitoring data with the nitrate quality standard of the Groundwater Directive (50 mg/l, which is also the WHO drinking water standard). Still, while more than 50% of European surface water bodies do not meet the objective of good ecological status, "only" 25% of groundwater bodies fail the objective of good chemical status according to the river basin management plans reported by the EU member states. However, based on a study of the interactions between groundwater, streams and a Danish estuary, we argue that nitrate threshold values for aerobic groundwater often need to be significantly below the nitrate quality standard to ensure good ecological status of associated surface water bodies, and hence that the chemical status of European groundwater is worse than indicated by the present assessments. Here we suggest a methodology for the derivation of groundwater and stream threshold values for total nitrogen ("nitrate") in a coastal catchment, based on an assessment of the maximum acceptable nitrogen loadings (thresholds) to the associated vulnerable estuary. The applied method uses existing information on agricultural practices and point source emissions in the catchment, together with groundwater and stream quantity and quality monitoring data, all of which feed an integrated groundwater and surface water modelling tool that enables an assessment of total nitrogen loads and of the threshold concentrations required to ensure or restore good ecological status of the investigated estuary. For the catchment of the Horsens estuary in Denmark we estimate the stream and groundwater thresholds for total nitrogen to be about 13 and 27 mg/l (~12 and 25 mg/l of nitrate), respectively. The example shown derives nitrogen threshold concentrations for groundwater and streams in a coastal catchment discharging to a vulnerable estuary in Denmark, but the principles may be applied to large river basins with sub-catchments in several countries, such as the Danube or the Rhine. In this case the relevant countries would need to collaborate on the derivation of nitrogen thresholds based on, e.g., maximum acceptable nitrogen loadings to the Black Sea or the North Sea, and finally agree on thresholds for different parts of the river basin. Phosphorus is another nutrient which frequently causes or contributes to the eutrophication of surface waters. The transport and retention processes of total phosphorus (TP) are more complex than those for nitrate (or, alternatively, total N), and presently we are able to establish TP thresholds for streams but not for groundwater. The derivation of TP thresholds is covered in an accompanying paper by Kronvang et al.

  11. Comparability of children's sedentary time estimates derived from wrist worn GENEActiv and hip worn ActiGraph accelerometer thresholds.

    PubMed

    Boddy, Lynne M; Noonan, Robert J; Kim, Youngwon; Rowlands, Alex V; Welk, Greg J; Knowles, Zoe R; Fairclough, Stuart J

    2018-03-28

    To examine the comparability of children's free-living sedentary time (ST) derived from raw acceleration thresholds for wrist-mounted GENEActiv accelerometer data with ST estimated using the waist-mounted ActiGraph 100 count·min⁻¹ threshold. Secondary data analysis. 108 10-11-year-old children (n=43 boys) from Liverpool, UK wore one ActiGraph GT3X+ and one GENEActiv accelerometer on their right hip and left wrist, respectively, for seven days. Signal vector magnitude (SVM; mg) was calculated using the ENMO approach for GENEActiv data. ST was estimated from hip-worn ActiGraph data, applying the widely used 100 count·min⁻¹ threshold. ROC analysis using 10-fold hold-out cross-validation was conducted to establish a wrist-worn GENEActiv threshold comparable to the hip ActiGraph 100 count·min⁻¹ threshold. GENEActiv data were also classified using three empirical wrist thresholds, and equivalence testing was completed. Analysis indicated that a GENEActiv SVM value of 51 mg demonstrated fair to moderate agreement (Kappa: 0.32-0.41) with the 100 count·min⁻¹ threshold. However, the generated and empirical thresholds for GENEActiv devices were not significantly equivalent to ActiGraph 100 count·min⁻¹. GENEActiv data classified using the 35.6 mg threshold intended for ActiGraph devices generated ST estimates significantly equivalent to the ActiGraph 100 count·min⁻¹. The newly generated and empirical GENEActiv wrist thresholds do not provide equivalent estimates of ST to the ActiGraph 100 count·min⁻¹ approach. More investigation is required to assess the validity of applying ActiGraph cutpoints to GENEActiv data. Future studies are needed to examine the backward compatibility of ST data and to produce a robust method of classifying SVM-derived ST. Copyright © 2018 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
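
    A minimal sketch of the wrist-side computation: ENMO-based signal vector magnitude per epoch from raw tri-axial acceleration in g, followed by classification of epochs below a threshold in mg (e.g., the 51 mg value generated here, or the 35.6 mg empirical cutpoint). The epoch length, sampling-rate handling, and the absence of non-wear detection are simplifications.

    ```python
    import numpy as np

    def enmo_mg(acc_xyz_g, fs_hz, epoch_s=5):
        """ENMO: Euclidean norm of the raw tri-axial signal (in g) minus 1 g,
        negative values truncated to zero, averaged per epoch, reported in mg."""
        norm = np.sqrt((np.asarray(acc_xyz_g, dtype=float) ** 2).sum(axis=1))
        enmo = np.maximum(norm - 1.0, 0.0) * 1000.0      # g -> mg
        n_per = int(fs_hz * epoch_s)                      # samples per epoch
        n = (len(enmo) // n_per) * n_per
        return enmo[:n].reshape(-1, n_per).mean(axis=1)

    def sedentary_minutes(epoch_enmo_mg, threshold_mg=51.0, epoch_s=5):
        """Epochs with mean ENMO below the threshold are classed as sedentary."""
        return (np.asarray(epoch_enmo_mg) < threshold_mg).sum() * epoch_s / 60.0
    ```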

  12. Threshold regression to accommodate a censored covariate.

    PubMed

    Qian, Jing; Chiou, Sy Han; Maye, Jacqueline E; Atem, Folefac; Johnson, Keith A; Betensky, Rebecca A

    2018-06-22

    In several common study designs, regression modeling is complicated by the presence of censored covariates. Examples of such covariates include maternal age of onset of dementia that may be right censored in an Alzheimer's amyloid imaging study of healthy subjects, metabolite measurements that are subject to limit of detection censoring in a case-control study of cardiovascular disease, and progressive biomarkers whose baseline values are of interest, but are measured post-baseline in longitudinal neuropsychological studies of Alzheimer's disease. We propose threshold regression approaches for linear regression models with a covariate that is subject to random censoring. Threshold regression methods allow for immediate testing of the significance of the effect of a censored covariate. In addition, they provide for unbiased estimation of the regression coefficient of the censored covariate. We derive the asymptotic properties of the resulting estimators under mild regularity conditions. Simulations demonstrate that the proposed estimators have good finite-sample performance, and often offer improved efficiency over existing methods. We also derive a principled method for selection of the threshold. We illustrate the approach in application to an Alzheimer's disease study that investigated brain amyloid levels in older individuals, as measured through positron emission tomography scans, as a function of maternal age of dementia onset, with adjustment for other covariates. We have developed an R package, censCov, for implementation of our method, available at CRAN. © 2018, The International Biometric Society.

  13. Maximum likelihood estimation of label imperfections and its use in the identification of mislabeled patterns

    NASA Technical Reports Server (NTRS)

    Chittineni, C. B.

    1979-01-01

    The problem of estimating label imperfections and the use of the estimation in identifying mislabeled patterns is presented. Expressions for the maximum likelihood estimates of classification errors and a priori probabilities are derived from the classification of a set of labeled patterns. Expressions are also given for the asymptotic variances of the probability of correct classification and of the proportions. Simple models are developed for imperfections in the labels and for classification errors and are used in the formulation of a maximum likelihood estimation scheme. Schemes are presented for the identification of mislabeled patterns in terms of thresholds on the discriminant functions for both two-class and multiclass cases. Expressions are derived for the probability that the imperfect label identification scheme will result in a wrong decision and are used in computing thresholds. The results of practical applications of these techniques in the processing of remotely sensed multispectral data are presented.

  14. A Universal Threshold for the Assessment of Load and Output Residuals of Strain-Gage Balance Data

    NASA Technical Reports Server (NTRS)

    Ulbrich, N.; Volden, T.

    2017-01-01

    A new universal residual threshold for the detection of load and gage output residual outliers of wind tunnel strain-gage balance data was developed. The threshold works with both the Iterative and Non-Iterative Methods that are used in the aerospace testing community to analyze and process balance data. It also supports all known load and gage output formats that are traditionally used to describe balance data. The threshold's definition is based on an empirical electrical constant. First, the constant is used to construct a threshold for the assessment of gage output residuals. Then, the related threshold for the assessment of load residuals is obtained by multiplying the empirical electrical constant with the sum of the absolute values of all first partial derivatives of a given load component. The empirical constant equals 2.5 microV/V for the assessment of balance calibration or check load data residuals. A value of 0.5 microV/V is recommended for the evaluation of repeat point residuals because, by design, the calculation of these residuals removes errors that are associated with the regression analysis of the data itself. Data from a calibration of a six-component force balance is used to illustrate the application of the new threshold definitions to real-world balance calibration data.
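
    A minimal sketch of the two threshold definitions, assuming hypothetical partial-derivative values for one load component (in practice these sensitivities come from the balance's regression model):

```python
# Sketch of the two threshold definitions: the output-residual threshold is the
# empirical constant itself, and the load-residual threshold multiplies that
# constant by the sum of |dF/dR_i| over all gage outputs R_i for the load F.
# The derivative values below are hypothetical placeholders.
CONST = 2.5                                    # microV/V, calibration/check-load data
dF_dR = [120.0, 95.0, 4.0, 2.5, 1.0, 0.8]      # |dF/dR_i|, e.g. lbs per microV/V

output_residual_threshold = CONST              # in microV/V
load_residual_threshold = CONST * sum(abs(d) for d in dF_dR)   # in load units
print(output_residual_threshold, load_residual_threshold)
```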

  15. Matrix-calibrated LC-MS/MS quantitation and sensory evaluation of oak Ellagitannins and their transformation products in red wines.

    PubMed

    Stark, Timo; Wollmann, Nadine; Wenker, Kerstin; Lösch, Sofie; Glabasnia, Arne; Hofmann, Thomas

    2010-05-26

    Aimed at investigating the concentrations and taste contribution of the oak-derived ellagitannins castalagin and vescalagin as well as their transformation products acutissimin A/B, epiacutissimin A/B, and beta-1-O-ethylvescalagin in red wine, a highly sensitive and accurate quantification method was developed on the basis of LC-MS/MS-MRM analysis with matrix calibration. Method validation showed good recovery rates ranging from 102.4 ± 5.9% (vescalagin) to 113.7 ± 15.2% (epiacutissimin A). In oak-matured wines, castalagin was found as the predominant ellagitannin, followed by beta-1-O-ethylvescalagin, whereas the flavano-C-ellagitannins (epi)acutissimin A/B were present in significantly lower amounts. In contrast to the high threshold concentration levels (600-1000 micromol/L) and the puckering astringent orosensation induced by flavan-3-ols, all of the ellagitannin derivatives were found to induce a smooth and velvety astringent oral sensation at rather low threshold concentrations ranging from 0.9 to 2.8 micromol/L. Dose/activity considerations demonstrated that, among all the ellagitannins investigated, castalagin exclusively exceeded its threshold concentration in various oak-matured wine samples.

  16. Serum ferritin thresholds for the diagnosis of iron deficiency in pregnancy: a systematic review.

    PubMed

    Daru, J; Allotey, J; Peña-Rosas, J P; Khan, K S

    2017-06-01

    The aim of this review was to understand the landscape of serum ferritin in diagnosing iron deficiency in the aetiology of anaemia in pregnancy. Iron deficiency in pregnancy is a major public health problem leading to the development of anaemia. Reducing the global prevalence of anaemia in women of reproductive age is a 2025 global nutrition target. Bone marrow aspiration is the gold standard test for iron deficiency but requires an invasive procedure; therefore, serum ferritin is the most clinically useful test. We undertook a systematic search of electronic databases and trial registers from inception to January 2016. Studies of iron or micronutrient supplementation in pregnancy with pre-defined serum ferritin thresholds were included. Two independent reviewers selected studies, extracted data and assessed quality. There were 76 relevant studies mainly of observational study design (57%). The most commonly used thresholds of serum ferritin for the diagnosis of iron deficiency were <12 and <15 ng mL⁻¹ (68%). Most primary studies provided no justification for the choice of serum ferritin threshold used, but 25 studies (33%) used thresholds defined by expert consensus in a guideline development process. There were five studies (7%) using a serum ferritin threshold defining iron deficiency derived from primary studies of bone marrow aspiration. Unified international thresholds of iron deficiency for women throughout pregnancy are required for accurate assessments of the global disease burden and for evaluating effectiveness of interventions addressing this problem. © 2017 World Health Organization licensed by Transfusion Medicine published by John Wiley & Sons Ltd on behalf of British Blood Transfusion Society.

  17. Comparison of epicardial adipose tissue radiodensity threshold between contrast and non-contrast enhanced computed tomography scans: A cohort study of derivation and validation.

    PubMed

    Xu, Lingyu; Xu, Yuancheng; Coulden, Richard; Sonnex, Emer; Hrybouski, Stanislau; Paterson, Ian; Butler, Craig

    2018-05-11

    Epicardial adipose tissue (EAT) volume derived from contrast enhanced (CE) computed tomography (CT) scans is not well validated. We aim to establish a reliable threshold to accurately quantify EAT volume from CE datasets. We analyzed EAT volume on paired non-contrast (NC) and CE datasets from 25 patients to derive appropriate Hounsfield (HU) cutpoints to equalize the two EAT volume estimates. The gold standard threshold (-190 HU, -30 HU) was used to assess EAT volume on NC datasets. For CE datasets, EAT volumes were estimated using three previously reported thresholds: (-190 HU, -30 HU), (-190 HU, -15 HU), (-175 HU, -15 HU), and were analyzed by a semi-automated 3D fat analysis software. Subsequently, we applied a threshold correction to (-190 HU, -30 HU) based on mean differences in radiodensity between NC and CE images (ΔEATrd = CE radiodensity - NC radiodensity). We then validated our findings on EAT threshold in 21 additional patients with paired CT datasets. EAT volume from CE datasets using previously published thresholds consistently underestimated EAT volume from the NC dataset standard by a magnitude of 8.2%-19.1%. Using our corrected threshold (-190 HU, -3 HU) in CE datasets yielded statistically identical EAT volume to NC EAT volume in the validation cohort (186.1 ± 80.3 vs. 185.5 ± 80.1 cm³, Δ = 0.6 cm³, 0.3%, p = 0.374). Estimating EAT volume from contrast enhanced CT scans using a corrected threshold of -190 HU, -3 HU provided excellent agreement with EAT volume from non-contrast CT scans using a standard threshold of -190 HU, -30 HU. Copyright © 2018. Published by Elsevier B.V.
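
    A small sketch of the correction step, assuming hypothetical paired EAT radiodensity means; only the upper HU bound is shifted by ΔEATrd, which is what moves -30 HU to roughly -3 HU:

```python
# Sketch of the threshold correction: shift the NC upper bound by the mean
# radiodensity difference between paired CE and NC EAT measurements.
# The paired mean HU values below are hypothetical.
import numpy as np

nc_threshold = (-190.0, -30.0)                 # gold-standard NC window (HU)
ce_hu = np.array([-62.0, -55.0, -70.0])        # mean EAT HU on CE scans (hypothetical)
nc_hu = np.array([-88.0, -83.0, -95.0])        # paired NC means (hypothetical)

delta = float(np.mean(ce_hu - nc_hu))          # ΔEATrd = CE radiodensity − NC radiodensity
ce_threshold = (nc_threshold[0], nc_threshold[1] + delta)
print(ce_threshold)                            # ≈ (-190, -3) when ΔEATrd ≈ +27 HU
```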

  18. Deriving site-specific soil clean-up values for metals and metalloids: Rationale for including protection of soil microbial processes

    PubMed Central

    Kuperman, Roman G; Siciliano, Steven D; Römbke, Jörg; Oorts, Koen

    2014-01-01

    Although it is widely recognized that microorganisms are essential for sustaining soil fertility, structure, nutrient cycling, groundwater purification, and other soil functions, soil microbial toxicity data were excluded from the derivation of Ecological Soil Screening Levels (Eco-SSL) in the United States. Among the reasons for such exclusion were claims that microbial toxicity tests were too difficult to interpret because of the high variability of microbial responses, uncertainty regarding the relevance of the various endpoints, and functional redundancy. Since the release of the first draft of the Eco-SSL Guidance document by the US Environmental Protection Agency in 2003, soil microbial toxicity testing and its use in ecological risk assessments have substantially improved. A wide range of standardized and nonstandardized methods became available for testing chemical toxicity to microbial functions in soil. Regulatory frameworks in the European Union and Australia have successfully incorporated microbial toxicity data into the derivation of soil threshold concentrations for ecological risk assessments. This article provides the 3-part rationale for including soil microbial processes in the development of soil clean-up values (SCVs): 1) presenting a brief overview of relevant test methods for assessing microbial functions in soil, 2) examining data sets for Cu, Ni, Zn, and Mo that incorporated soil microbial toxicity data into regulatory frameworks, and 3) offering recommendations on how to integrate the best available science into the method development for deriving site-specific SCVs that account for bioavailability of metals and metalloids in soil. Although the primary focus of this article is on the development of the approach for deriving SCVs for metals and metalloids in the United States, the recommendations provided in this article may also be applicable in other jurisdictions that aim at developing ecological soil threshold values for protection of microbial processes in contaminated soils. PMID:24376192

  19. Global epidemic invasion thresholds in directed cattle subpopulation networks having source, sink, and transit nodes

    USDA-ARS?s Scientific Manuscript database

    Through the characterization of a metapopulation cattle disease model on a directed network having source, transit, and sink nodes, we derive two global epidemic invasion thresholds. The first threshold defines the conditions necessary for an epidemic to successfully spread at the global scale. The ...

  20. Spatial Distribution of the Threshold Beam Spots of Laser Weapons Simulators

    DTIC Science & Technology

    1993-09-08

    This paper was based on the transmission theory of elliptical Gaussian beam fluxes in deriving some transmission equations for the threshold beam spots of laser weapon simulators, in order to revise and expand the expressions for the threshold beam spots, their maximum range, the extinction

  1. Mathematical Model of Naive T Cell Division and Survival IL-7 Thresholds.

    PubMed

    Reynolds, Joseph; Coles, Mark; Lythe, Grant; Molina-París, Carmen

    2013-01-01

    We develop a mathematical model of the peripheral naive T cell population to study the change in human naive T cell numbers from birth to adulthood, incorporating thymic output and the availability of interleukin-7 (IL-7). The model is formulated as three ordinary differential equations: two describe T cell numbers, in a resting state and progressing through the cell cycle. The third is introduced to describe changes in IL-7 availability. Thymic output is a decreasing function of time, representative of the thymic atrophy observed in aging humans. Each T cell is assumed to possess two interleukin-7 receptor (IL-7R) signaling thresholds: a survival threshold and a second, higher, proliferation threshold. If the IL-7R signaling strength is below its survival threshold, a cell may undergo apoptosis. When the signaling strength is above the survival threshold, but below the proliferation threshold, the cell survives but does not divide. Signaling strength above the proliferation threshold enables entry into cell cycle. Assuming that individual cell thresholds are log-normally distributed, we derive population-average rates for apoptosis and entry into cell cycle. We have analyzed the adiabatic change in homeostasis as thymic output decreases. With a parameter set representative of a healthy individual, the model predicts a unique equilibrium number of T cells. In a parameter range representative of persistent viral or bacterial infection, where naive T cell cycle progression is impaired, a decrease in thymic output may result in the collapse of the naive T cell repertoire.
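
    A brief sketch of how population-average fractions follow from log-normally distributed per-cell thresholds, with hypothetical distribution parameters and signalling strength:

```python
# Sketch: population-average fractions below the survival threshold (at risk of
# apoptosis) and above the proliferation threshold (able to enter cell cycle),
# assuming log-normally distributed per-cell thresholds. Parameters hypothetical.
import numpy as np
from scipy.stats import lognorm

signal = 1.0                              # current IL-7R signalling strength (a.u.)
surv = lognorm(s=0.4, scale=0.6)          # survival-threshold distribution
prolif = lognorm(s=0.4, scale=1.5)        # proliferation-threshold distribution

frac_apoptosis_prone = surv.sf(signal)    # cells whose survival threshold > signal
frac_cycling = prolif.cdf(signal)         # cells whose proliferation threshold < signal
print(frac_apoptosis_prone, frac_cycling)
```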

  2. The locking and unlocking thresholds for tearing modes in a cylindrical tokamak

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Wenlong; Zhu, Ping, E-mail: pzhu@ustc.edu.cn; Department of Engineering Physics, University of Wisconsin-Madison, Madison, Wisconsin 53706

    2016-03-15

    The locking and unlocking thresholds for tearing modes are in general different. In this work, the physics origin for this difference is illustrated from theoretical analysis, and a numerical procedure is developed to find both locking and unlocking thresholds. In particular, a new scaling law for the unlocking threshold that is valid in both weak and strong rotation regimes has been derived from the lowest amplitude of the RMP (resonant magnetic perturbation) allowed for the locked-mode solution. Above the unlocking threshold, the criterion for the phase-flip instability is extended to identify the entire locked-mode states. Two different regimes of the RMP amplitude in terms of the accessibility of the locked-mode states have been found. In the first regime, the locked-mode state may or may not be accessible depending on the initial conditions of an evolving island. In the second regime, the locked-mode state can always be reached regardless of the initial conditions of the tearing mode. The lowest RMP amplitude for the second regime is determined to be the mode-locking threshold. The different characteristics of the two regimes above the unlocking threshold reveal the underlying physics for the gap between the locking and unlocking thresholds and provide an explanation for the closely related and widely observed hysteresis phenomena in island evolution during the sweeping process of the RMP amplitude up and down across that threshold gap.

  3. Differential equation models for sharp threshold dynamics.

    PubMed

    Schramm, Harrison C; Dimitrov, Nedialko B

    2014-01-01

    We develop an extension to differential equation models of dynamical systems to allow us to analyze probabilistic threshold dynamics that fundamentally and globally change system behavior. We apply our novel modeling approach to two cases of interest: a model of infectious disease modified for malware where a detection event drastically changes dynamics by introducing a new class in competition with the original infection; and the Lanchester model of armed conflict, where the loss of a key capability drastically changes the effectiveness of one of the sides. We derive and demonstrate a step-by-step, repeatable method for applying our novel modeling approach to an arbitrary system, and we compare the resulting differential equations to simulations of the system's random progression. Our work leads to a simple and easily implemented method for analyzing probabilistic threshold dynamics using differential equations. Published by Elsevier Inc.
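
    A much-simplified sketch of the idea, assuming an SI-type malware model and an illustrative detection hazard; a random threshold crossing switches the governing equations (the paper's specific models and rates differ):

```python
# Much-simplified sketch of a probabilistic "sharp threshold" switch: an SI
# malware model runs until a random detection event fires (cumulative detection
# hazard crossing an exponential draw), after which removal of infected hosts
# competes with the spread. Rates and the hazard form are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

beta, gamma, hazard_rate = 0.5, 0.8, 0.05
u = -np.log(np.random.default_rng(1).uniform())    # exponential threshold draw

def pre_detection(t, y):
    s, i, h = y                                     # susceptible, infected, cumulative hazard
    return [-beta * s * i, beta * s * i, hazard_rate * i]

def detection(t, y):
    return y[2] - u                                 # zero when the hazard crosses the draw
detection.terminal = True

def post_detection(t, y):
    s, i, r = y                                     # removal now competes with infection
    return [-beta * s * i, beta * s * i - gamma * i, gamma * i]

first = solve_ivp(pre_detection, (0.0, 50.0), [0.99, 0.01, 0.0],
                  events=detection, max_step=0.1)
if first.status == 1:                               # the detection event fired
    second = solve_ivp(post_detection, (first.t[-1], 50.0),
                       [first.y[0, -1], first.y[1, -1], 0.0], max_step=0.1)
    print(f"detection at t ≈ {first.t[-1]:.1f}, final infected ≈ {second.y[1, -1]:.3f}")
```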

  4. Keeping it simple: Monitoring flood extent in large data-poor wetlands using MODIS SWIR data

    NASA Astrophysics Data System (ADS)

    Wolski, Piotr; Murray-Hudson, Mike; Thito, Kgalalelo; Cassidy, Lin

    2017-05-01

    Characterising inundation conditions for flood-pulsed wetlands is a critical first step towards assessment of flood risk as well as towards understanding the hydrological dynamics that underlie their ecology and functioning. In this paper, we develop a series of inundation maps for the Okavango Delta, Botswana, based on thresholding of the SWIR band (b7) of the MODIS MCD43A4 product. We show that in the Okavango Delta, SWIR is superior to other spectral bands or derived indices, and illustrate an innovative way of defining the spectral threshold used to separate inundated from dry land. The threshold is determined dynamically for each scene based on reflectances of training areas capturing end-members of the inundation spectrum. The method provides very good accuracy and is suitable for automated processing.
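
    A minimal sketch of a scene-by-scene threshold rule of this kind, assuming synthetic reflectances and a simple midpoint between wet and dry training-area means (the published method may weight the end-members differently):

```python
# Sketch: choose the SWIR cut separating "inundated" from "dry" per scene as the
# midpoint between the mean reflectance of permanently wet and permanently dry
# training pixels. The scene and training-area locations are synthetic.
import numpy as np

rng = np.random.default_rng(2)
swir = rng.uniform(0.18, 0.45, size=(500, 500))             # dry-land reflectance
swir[:200, :] = rng.uniform(0.02, 0.12, size=(200, 500))    # flooded area (synthetic)

wet_training = swir[:50, :50]       # pixels known to be permanently flooded
dry_training = swir[-50:, -50:]     # pixels known to be permanently dry

threshold = 0.5 * (wet_training.mean() + dry_training.mean())
inundated = swir < threshold        # water absorbs strongly in the SWIR
print(f"threshold = {threshold:.3f}, flooded fraction = {inundated.mean():.2%}")
```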

  5. Inhalation TTC values: A new integrative grouping approach considering structural, toxicological and mechanistic features.

    PubMed

    Tluczkiewicz, I; Kühne, R; Ebert, R-U; Batke, M; Schüürmann, G; Mangelsdorf, I; Escher, S E

    2016-07-01

    The present publication describes an integrative grouping concept to derive threshold values for inhalation exposure. The classification scheme starts with differences in toxicological potency and develops criteria to group compounds into two potency classes, namely toxic (T-group) or low toxic (L-group). The TTC concept for inhalation exposure is based on the TTC RepDose data set, consisting of 296 organic compounds with 608 repeated-dose inhalation studies. Initially, 21 structural features (SFs) were identified as being characteristic for compounds of either high or low NOEC values (Schüürmann et al., 2016). In subsequent analyses these SF groups were further refined by taking into account structural homogeneity, type of toxicological effect observed, differences in absorption, metabolism and mechanism of action (MoA), to better define their structural and toxicological boundaries. Differentiation of a local or systemic mode of action did not improve the classification scheme. Finally, 28 groups were discriminated: 19 T-groups and 9 L-groups. Clearly distinct thresholds were derived for the T- and L-toxicity groups, being 2 × 10⁻⁵ ppm (2 μg/person/day) and 0.05 ppm (4260 μg/person/day), respectively. The derived thresholds and the classification are compared to the initial, mainly structure-driven grouping (Schüürmann et al., 2016) and to the Cramer classification. Copyright © 2016 Elsevier Inc. All rights reserved.

  6. Objective definition of rainfall intensity-duration thresholds for the initiation of post-fire debris flows in southern California

    USGS Publications Warehouse

    Staley, Dennis; Kean, Jason W.; Cannon, Susan H.; Schmidt, Kevin M.; Laber, Jayme L.

    2012-01-01

    Rainfall intensity–duration (ID) thresholds are commonly used to predict the temporal occurrence of debris flows and shallow landslides. Typically, thresholds are subjectively defined as the upper limit of peak rainstorm intensities that do not produce debris flows and landslides, or as the lower limit of peak rainstorm intensities that initiate debris flows and landslides. In addition, peak rainstorm intensities are often used to define thresholds, as data regarding the precise timing of debris flows and associated rainfall intensities are usually not available, and rainfall characteristics are often estimated from distant gauging locations. Here, we attempt to improve the performance of existing threshold-based predictions of post-fire debris-flow occurrence by utilizing data on the precise timing of debris flows relative to rainfall intensity, and develop an objective method to define the threshold intensities. We objectively defined the thresholds by maximizing the number of correct predictions of debris flow occurrence while minimizing the rate of both Type I (false positive) and Type II (false negative) errors. We identified that (1) there were statistically significant differences between peak storm and triggering intensities, (2) the objectively defined threshold model presents a better balance between predictive success, false alarms and failed alarms than previous subjectively defined thresholds, (3) thresholds based on measurements of rainfall intensity over shorter duration (≤60 min) are better predictors of post-fire debris-flow initiation than longer duration thresholds, and (4) the objectively defined thresholds were exceeded prior to the recorded time of debris flow at frequencies similar to or better than subjective thresholds. Our findings highlight the need to better constrain the timing and processes of initiation of landslides and debris flows for future threshold studies. In addition, the methods used to define rainfall thresholds in this study represent a computationally simple means of deriving critical values for other studies of nonlinear phenomena characterized by thresholds.
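
    A small sketch of objective threshold selection on synthetic storm data, scoring candidate thresholds with the true-skill statistic (hit rate minus false-alarm rate) as one way to balance Type I and Type II errors; the variable names and data are illustrative, not the study's actual criterion or records:

```python
# Sketch of objective threshold selection: scan candidate 15-min intensity
# thresholds and keep the one with the best balance of correct predictions,
# false alarms (Type I) and failed alarms (Type II). Data are synthetic.
import numpy as np

rng = np.random.default_rng(3)
i15 = rng.gamma(2.0, 8.0, 400)                        # peak 15-min intensity (mm/h)
flow = rng.uniform(size=400) < 1 / (1 + np.exp(-(i15 - 25) / 4))   # debris flow occurred?

best_t, best_skill = None, -np.inf
for t in np.linspace(i15.min(), i15.max(), 200):
    hits = (flow & (i15 >= t)).sum() / flow.sum()               # true positive rate
    false_alarms = (~flow & (i15 >= t)).sum() / (~flow).sum()   # Type I error rate
    if hits - false_alarms > best_skill:
        best_t, best_skill = t, hits - false_alarms
print(f"objective I15 threshold ≈ {best_t:.1f} mm/h (TSS = {best_skill:.2f})")
```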

  7. Binary-Signal Recovery

    NASA Technical Reports Server (NTRS)

    Griebeler, Elmer L.

    2011-01-01

    Binary communication through long cables, opto-isolators, isolating transformers, or repeaters can become distorted in characteristic ways. The usual solution is to slow the communication rate, change to a different method, or improve the communication media. It would help if the characteristic distortions could be accommodated at the receiving end to ease the communication problem. The distortions come from loss of the high-frequency content, which adds slopes to the transitions from ones to zeroes and zeroes to ones. This weakens the definition of the ones and zeroes in the time domain. The other major distortion is the reduction of low frequency, which causes the voltage that defines the ones or zeroes to drift out of recognizable range. This development describes a method for recovering a binary data stream from a signal that has been subjected to a loss of both higher-frequency content and low-frequency content that is essential to define the difference between ones and zeroes. The method makes use of the frequency structure of the waveform created by the data stream, and then enhances the characteristics related to the data to reconstruct the binary switching pattern. A major issue is simplicity. The approach taken here is to take the first derivative of the signal and then feed it to a hysteresis switch. This is equivalent in practice to using a non-resonant band pass filter feeding a Schmitt trigger. Obviously, the derivative signal needs to be offset to halfway between the thresholds of the hysteresis switch, and amplified so that the derivatives reliably exceed the thresholds. A transition from a zero to a one is the most substantial, fastest plus movement of voltage, and therefore will create the largest plus first derivative pulse. Since the quiet state of the derivative is sitting between the hysteresis thresholds, the plus pulse exceeds the plus threshold, switching the hysteresis switch plus, which re-establishes the data zero to one transition, except now at the logic levels of the receiving circuit. Similarly, a transition from a one to a zero will be the most substantial and fastest minus movement of voltage and therefore will create the largest minus first derivative pulse. The minus pulse exceeds the minus threshold, switching the hysteresis switch minus, which re-establishes the data one to zero transition. This innovation has a large increase in tolerance for the degradation of the binary pattern of ones and zeroes, and can reject the introduction of noise in the form of low frequencies that can cause the voltage pattern to drift up or down, and also higher frequencies that are beyond the recognizable content in the binary transitions.
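
    A minimal sketch of the derivative-plus-hysteresis recovery described above, with an illustrative distorted waveform and hand-picked hysteresis thresholds (all values are assumptions, not the original hardware's parameters):

```python
# Sketch of the recovery scheme: differentiate the distorted waveform, then run
# the derivative through a hysteresis (Schmitt-trigger-like) comparator so the
# large +/- derivative pulses re-create the 0->1 and 1->0 transitions.
import numpy as np

def recover_bits(signal, hi, lo):
    """Return a reconstructed logic waveform from a distorted binary signal."""
    deriv = np.diff(signal, prepend=signal[0])
    out, state = np.empty_like(signal, dtype=int), 0
    for k, d in enumerate(deriv):
        if d > hi:          # strong positive slope: 0 -> 1 transition
            state = 1
        elif d < lo:        # strong negative slope: 1 -> 0 transition
            state = 0
        out[k] = state
    return out

# Distorted example: slowed edges (high-frequency loss) plus baseline drift
# (low-frequency loss).
t = np.arange(2000)
bits = (np.sin(2 * np.pi * t / 400) > 0).astype(float)
distorted = (np.convolve(bits, np.ones(60) / 60, mode="same")
             + 0.3 * np.sin(2 * np.pi * t / 1500))
print(recover_bits(distorted, hi=0.01, lo=-0.01)[::200])
```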

  8. Assessment of the anticonvulsant potency of various benzylamide derivatives in the mouse maximal electroshock-induced seizure threshold model.

    PubMed

    Świąder, Mariusz J; Paruszewski, Ryszard; Łuszczki, Jarogniew J

    2016-04-01

    The aim of this study was to assess the anticonvulsant potency of 6 various benzylamide derivatives [i.e., nicotinic acid benzylamide (Nic-BZA), picolinic acid 2-fluoro-benzylamide (2F-Pic-BZA), picolinic acid benzylamide (Pic-BZA), (RS)-methyl-alanine-benzylamide (Me-Ala-BZA), isonicotinic acid benzylamide (Iso-Nic-BZA), and (R)-N-methyl-proline-benzylamide (Me-Pro-BZA)] in the threshold for maximal electroshock (MEST)-induced seizures in mice. Electroconvulsions (seizure activity) were produced in mice by means of a current (sine-wave, 50 Hz, 500 V, strength from 4 to 18 mA, ear-clip electrodes, 0.2-s stimulus duration, tonic hindlimb extension taken as the endpoint). Nic-BZA, 2F-Pic-BZA, Pic-BZA, Me-Ala-BZA, Iso-Nic-BZA, and Me-Pro-BZA administered systemically (ip) in a dose-dependent manner increased the threshold for maximal electroconvulsions in mice. Linear regression analysis of Nic-BZA, 2F-Pic-BZA, Pic-BZA, Me-Ala-BZA, Iso-Nic-BZA, and Me-Pro-BZA doses and their corresponding threshold increases allowed determining the doses increasing the threshold by 20% (TID20 values) over the threshold in control animals. The experimentally derived TID20 values in the MEST test for Nic-BZA, 2F-Pic-BZA, Pic-BZA, Me-Ala-BZA, Iso-Nic-BZA, and Me-Pro-BZA were 7.45 mg/kg, 7.72 mg/kg, 8.74 mg/kg, 15.11 mg/kg, 21.95 mg/kg and 28.06 mg/kg, respectively. The studied benzylamide derivatives can be arranged with respect to their anticonvulsant potency in the MEST test as follows: Nic-BZA > 2F-Pic-BZA > Pic-BZA > Me-Ala-BZA > Iso-Nic-BZA > Me-Pro-BZA. Copyright © 2015 Institute of Pharmacology, Polish Academy of Sciences. Published by Elsevier Urban & Partner Sp. z o.o. All rights reserved.
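
    A short sketch of a TID20-style calculation, regressing percentage threshold increase on dose and reading off the dose giving a 20% increase; the dose-response numbers are hypothetical, not the published data:

```python
# Sketch of a TID20 calculation: fit a line to (dose, % threshold increase)
# and invert it at a 20% increase. Values below are illustrative only.
import numpy as np

doses = np.array([2.5, 5.0, 10.0, 20.0])                   # mg/kg (hypothetical)
threshold_increase = np.array([6.0, 13.0, 27.0, 55.0])     # % over control (hypothetical)

slope, intercept = np.polyfit(doses, threshold_increase, 1)
tid20 = (20.0 - intercept) / slope
print(f"TID20 ≈ {tid20:.2f} mg/kg")
```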

  9. Strengthening the Validity of Population-Based Suicide Rate Comparisons: An Illustration Using U.S. Military and Civilian Data

    ERIC Educational Resources Information Center

    Eaton, Karen M.; Messer, Stephen C.; Garvey Wilson, Abigail L.; Hoge, Charles W.

    2006-01-01

    The objectives of this study were to generate precise estimates of suicide rates in the military while controlling for factors contributing to rate variability such as demographic differences and classification bias, and to develop a simple methodology for the determination of statistically derived thresholds for detecting significant rate…

  10. Thresholds in chemical respiratory sensitisation.

    PubMed

    Cochrane, Stella A; Arts, Josje H E; Ehnes, Colin; Hindle, Stuart; Hollnagel, Heli M; Poole, Alan; Suto, Hidenori; Kimber, Ian

    2015-07-03

    There is a continuing interest in determining whether it is possible to identify thresholds for chemical allergy. Here allergic sensitisation of the respiratory tract by chemicals is considered in this context. This is an important occupational health problem, being associated with rhinitis and asthma, and in addition provides toxicologists and risk assessors with a number of challenges. In common with all forms of allergic disease, chemical respiratory allergy develops in two phases. In the first (induction) phase, exposure to a chemical allergen (by an appropriate route of exposure) causes immunological priming and sensitisation of the respiratory tract. The second (elicitation) phase is triggered if a sensitised subject is exposed subsequently to the same chemical allergen via inhalation. A secondary immune response will be provoked in the respiratory tract resulting in inflammation and the signs and symptoms of a respiratory hypersensitivity reaction. In this article attention has focused on the identification of threshold values during the acquisition of sensitisation. Current mechanistic understanding of allergy is such that it can be assumed that the development of sensitisation (and also the elicitation of an allergic reaction) is a threshold phenomenon; there will be levels of exposure below which sensitisation will not be acquired. That is, all immune responses, including allergic sensitisation, have a threshold requirement for the availability of antigen/allergen, below which a response will fail to develop. The issue addressed here is whether there are methods available or clinical/epidemiological data that permit the identification of such thresholds. This document briefly reviews relevant human studies of occupational asthma, and experimental models that have been developed (or are being developed) for the identification and characterisation of chemical respiratory allergens. The main conclusion drawn is that although there is evidence that the acquisition of sensitisation to chemical respiratory allergens is a dose-related phenomenon, and that thresholds exist, it is frequently difficult to define accurate numerical values for threshold exposure levels. Nevertheless, based on occupational exposure data it may sometimes be possible to derive levels of exposure in the workplace, which are safe. An additional observation is the lack currently of suitable experimental methods for both routine hazard characterisation and the measurement of thresholds, and that such methods are still some way off. Given the current trajectory of toxicology, and the move towards the use of non-animal (in vitro and/or in silico) methods, there is a need to consider the development of alternative approaches for the identification and characterisation of respiratory sensitisation hazards, and for risk assessment. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  11. Magnetic Photon Splitting: The S-Matrix Formulation in the Landau Representation

    NASA Technical Reports Server (NTRS)

    Baring, Matthew G.

    1999-01-01

    Calculations of reaction rates for the third-order QED process of photon splitting gamma yields gamma.gamma in strong magnetic fields traditionally have employed either the effective Lagrangian method or variants of Schwinger's proper-time technique. Recently, Mentzel, Berg and Wunner [1] presented an alternative derivation via an S-matrix formulation in the Landau representation. Advantages of such a formulation include the ability to compute rates near pair resonances above pair threshold. This paper presents new developments of the Landau representation formalism as applied to photon splitting, providing significant advances beyond the work of [1] by summing over the spin quantum numbers of the electron propagators, and analytically integrating over the component of momentum of the intermediate states that is parallel to the field. The ensuing tractable expressions for the scattering amplitudes are satisfyingly compact, and of an appearance familiar to S-matrix theory applications. Such developments can facilitate numerical computations of splitting considerably both below and above pair threshold. Specializations to two regimes of interest are obtained, namely the limit of highly supercritical fields and the domain where photon energies are far inferior to that for the threshold of single-photon pair creation. In particular, for the first time the low-frequency amplitudes are simply expressed in terms of the Gamma function, its integral and its derivatives. In addition, the equivalence of the asymptotic forms in these two domains to extant results from effective Lagrangian/proper-time formulations is demonstrated.

  12. Threshold resummation for top-pair hadroproduction to next-to-next-to-leading log

    NASA Astrophysics Data System (ADS)

    Czakon, Michal; Mitov, Alexander; Sterman, George

    2009-10-01

    We derive the threshold-resummed total cross section for heavy quark production in hadronic collisions accurate to next-to-next-to-leading logarithm, employing recent advances on soft anomalous dimension matrices for massive pair production in the relevant kinematic limit. We also derive the relation between heavy quark threshold resummations for fixed pair kinematics and the inclusive cross section. As a check of our results, we have verified that they reproduce all poles of the color-averaged q q̄ → t t̄ amplitudes at two loops, noting that the latter are insensitive to the color-antisymmetric terms of the soft anomalous dimension.

  13. Extremely low amplified spontaneous emission threshold and blue electroluminescence from a spin-coated octafluorene neat film

    NASA Astrophysics Data System (ADS)

    Kim, D.-H.; Sandanayaka, A. S. D.; Zhao, L.; Pitrat, D.; Mulatier, J. C.; Matsushima, T.; Andraud, C.; Ribierre, J. C.; Adachi, C.

    2017-01-01

    We report on the photophysical, amplified spontaneous emission (ASE), and electroluminescence properties of a blue-emitting octafluorene derivative in spin-coated films. The neat film shows an extremely low ASE threshold of 90 nJ/cm², which is related to its high photoluminescence quantum yield of 87% and its large radiative decay rate of 1.7 × 10⁹ s⁻¹. Low-threshold organic distributed feedback semiconductor lasers and fluorescent organic light-emitting diodes with a maximum external quantum efficiency as high as 4.4% are then demonstrated, providing evidence that this octafluorene derivative is a promising candidate for organic laser applications.

  14. Towards a unifying basis of auditory thresholds: binaural summation.

    PubMed

    Heil, Peter

    2014-04-01

    Absolute auditory threshold decreases with increasing sound duration, a phenomenon explainable by the assumptions that the sound evokes neural events whose probabilities of occurrence are proportional to the sound's amplitude raised to an exponent of about 3 and that a constant number of events are required for threshold (Heil and Neubauer, Proc Natl Acad Sci USA 100:6151-6156, 2003). Based on this probabilistic model and on the assumption of perfect binaural summation, an equation is derived here that provides an explicit expression of the binaural threshold as a function of the two monaural thresholds, irrespective of whether they are equal or unequal, and of the exponent in the model. For exponents >0, the predicted binaural advantage is largest when the two monaural thresholds are equal and decreases towards zero as the monaural threshold difference increases. This equation is tested and the exponent derived by comparing binaural thresholds with those predicted on the basis of the two monaural thresholds for different values of the exponent. The thresholds, measured in a large sample of human subjects with equal and unequal monaural thresholds and for stimuli with different temporal envelopes, are compatible only with an exponent close to 3. An exponent of 3 predicts a binaural advantage of 2 dB when the two ears are equally sensitive. Thus, listening with two (equally sensitive) ears rather than one has the same effect on absolute threshold as doubling duration. The data suggest that perfect binaural summation occurs at threshold and that peripheral neural signals are governed by an exponent close to 3. They might also shed new light on mechanisms underlying binaural summation of loudness.
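
    A hedged sketch of the binaural prediction under the stated assumptions (event rate proportional to amplitude raised to the exponent k, a fixed number of events N required at threshold, perfect summation across ears); the symbols A_L, A_R and A_b for the monaural and binaural threshold amplitudes are introduced here for illustration:

```latex
% Monaural thresholds satisfy  c_L A_L^k = N  and  c_R A_R^k = N.
% With perfect binaural summation, threshold is reached when the summed event
% count equals the same N:  (c_L + c_R) A_b^k = N,  hence
\[
  A_b = \left( A_L^{-k} + A_R^{-k} \right)^{-1/k}.
\]
% For equally sensitive ears (A_L = A_R) and k = 3 this gives
\[
  A_b = 2^{-1/3} A_L
  \quad\Longrightarrow\quad
  20\log_{10}\!\left(2^{1/3}\right) \approx 2\ \text{dB of binaural advantage.}
\]
```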

  15. Time-Dependent Computed Tomographic Perfusion Thresholds for Patients With Acute Ischemic Stroke.

    PubMed

    d'Esterre, Christopher D; Boesen, Mari E; Ahn, Seong Hwan; Pordeli, Pooneh; Najm, Mohamed; Minhas, Priyanka; Davari, Paniz; Fainardi, Enrico; Rubiera, Marta; Khaw, Alexander V; Zini, Andrea; Frayne, Richard; Hill, Michael D; Demchuk, Andrew M; Sajobi, Tolulope T; Forkert, Nils D; Goyal, Mayank; Lee, Ting Y; Menon, Bijoy K

    2015-12-01

    Among patients with acute ischemic stroke, we determine computed tomographic perfusion (CTP) thresholds associated with follow-up infarction at different stroke onset-to-CTP and CTP-to-reperfusion times. Acute ischemic stroke patients with occlusion on computed tomographic angiography were acutely imaged with CTP. Noncontrast computed tomography and magnetic resonance diffusion-weighted imaging between 24 and 48 hours were used to delineate follow-up infarction. Reperfusion was assessed on conventional angiogram or 4-hour repeat computed tomographic angiography. Tmax, cerebral blood flow, and cerebral blood volume derived from delay-insensitive CTP postprocessing were analyzed using receiver-operator characteristic curves to derive optimal thresholds for combined patient data (pooled analysis) and individual patients (patient-level analysis) based on time from stroke onset-to-CTP and CTP-to-reperfusion. One-way ANOVA and locally weighted scatterplot smoothing regression were used to test whether the derived optimal CTP thresholds differed by time. One hundred and thirty-two patients were included. Tmax thresholds of >16.2 and >15.8 s and absolute cerebral blood flow thresholds of <8.9 and <7.4 mL·min⁻¹·100 g⁻¹ were associated with infarct if reperfused <90 min from CTP with onset <180 min. The discriminative ability of cerebral blood volume was modest. No statistically significant relationship was noted between stroke onset-to-CTP time and the optimal CTP thresholds for all parameters based on discrete or continuous time analysis (P>0.05). A statistically significant relationship existed between CTP-to-reperfusion time and the optimal thresholds for cerebral blood flow (P<0.001; r=0.59 and 0.77 for gray and white matter, respectively) and Tmax (P<0.001; r=-0.68 and -0.60 for gray and white matter, respectively) parameters. Optimal CTP thresholds associated with follow-up infarction depend on time from imaging to reperfusion. © 2015 American Heart Association, Inc.

  16. Level crossings and excess times due to a superposition of uncorrelated exponential pulses

    NASA Astrophysics Data System (ADS)

    Theodorsen, A.; Garcia, O. E.

    2018-01-01

    A well-known stochastic model for intermittent fluctuations in physical systems is investigated. The model is given by a superposition of uncorrelated exponential pulses, and the degree of pulse overlap is interpreted as an intermittency parameter. Expressions for excess time statistics, that is, the rate of level crossings above a given threshold and the average time spent above the threshold, are derived from the joint distribution of the process and its derivative. Limits of both high and low intermittency are investigated and compared to previously known results. In the case of a strongly intermittent process, the distribution of times spent above threshold is obtained analytically. This expression is verified numerically, and the distribution of times above threshold is explored for other intermittency regimes. The numerical simulations compare favorably to known results for the distribution of times above the mean threshold for an Ornstein-Uhlenbeck process. This contribution generalizes the excess time statistics for the stochastic model, which find applications in a wide diversity of natural and technological systems.

  17. A derivation of the stable cavitation threshold accounting for bubble-bubble interactions.

    PubMed

    Guédra, Matthieu; Cornu, Corentin; Inserra, Claude

    2017-09-01

    The subharmonic emission of sound coming from the nonlinear response of a bubble population is the most used indicator for stable cavitation. When driven at twice their resonance frequency, bubbles can exhibit subharmonic spherical oscillations if the acoustic pressure amplitude exceeds a threshold value. Although various theoretical derivations exist for the subharmonic emission by free or coated bubbles, they all rest on the single bubble model. In this paper, we propose an analytical expression of the subharmonic threshold for interacting bubbles in a homogeneous, monodisperse cloud. This theory predicts a shift of the subharmonic resonance frequency and a decrease of the corresponding pressure threshold due to the interactions. For a given sonication frequency, these results show that an optimal value of the interaction strength (i.e. the number density of bubbles) can be found for which the subharmonic threshold is minimum, which is consistent with recently published experiments conducted on ultrasound contrast agents. Copyright © 2017 Elsevier B.V. All rights reserved.

  18. Security of six-state quantum key distribution protocol with threshold detectors

    PubMed Central

    Kato, Go; Tamaki, Kiyoshi

    2016-01-01

    The security of quantum key distribution (QKD) is established by a security proof, and the security proof puts some assumptions on the devices that constitute a QKD system. Among such assumptions, security proofs of the six-state protocol assume the use of a photon number resolving (PNR) detector, and as a result the bit error rate threshold for secure key generation for the six-state protocol is higher than that for the BB84 protocol. Unfortunately, however, this type of detector is demanding in terms of technological level compared to the standard threshold detector, and removing the necessity of such a detector enhances the feasibility of the implementation of the six-state protocol. Here, we develop the security proof for the six-state protocol and show that we can use the threshold detector for the six-state protocol. Importantly, the bit error rate threshold for the key generation for the six-state protocol (12.611%) remains almost the same as the one (12.619%) that is derived from the existing security proofs assuming the use of PNR detectors. This clearly demonstrates feasibility of the six-state protocol with practical devices. PMID:27443610

  19. Differential reconstructed gene interaction networks for deriving toxicity threshold in chemical risk assessment.

    PubMed

    Yang, Yi; Maxwell, Andrew; Zhang, Xiaowei; Wang, Nan; Perkins, Edward J; Zhang, Chaoyang; Gong, Ping

    2013-01-01

    Pathway alterations reflected as changes in gene expression regulation and gene interaction can result from cellular exposure to toxicants. Such information is often used to elucidate toxicological modes of action. From a risk assessment perspective, alterations in biological pathways are a rich resource for setting toxicant thresholds, which may be more sensitive and mechanism-informed than traditional toxicity endpoints. Here we developed a novel differential networks (DNs) approach to connect pathway perturbation with toxicity threshold setting. Our DNs approach consists of 6 steps: time-series gene expression data collection, identification of altered genes, gene interaction network reconstruction, differential edge inference, mapping of genes with differential edges to pathways, and establishment of causal relationships between chemical concentration and perturbed pathways. A one-sample Gaussian process model and a linear regression model were used to identify genes that exhibited significant profile changes across an entire time course and between treatments, respectively. Interaction networks of differentially expressed (DE) genes were reconstructed for different treatments using a state space model and then compared to infer differential edges/interactions. DE genes possessing differential edges were mapped to biological pathways in databases such as KEGG pathways. Using the DNs approach, we analyzed a time-series Escherichia coli live cell gene expression dataset consisting of 4 treatments (control, 10, 100, 1000 mg/L naphthenic acids, NAs) and 18 time points. Through comparison of reconstructed networks and construction of differential networks, 80 genes were identified as DE genes with a significant number of differential edges, and 22 KEGG pathways were altered in a concentration-dependent manner. Some of these pathways were perturbed to a degree as high as 70% even at the lowest exposure concentration, implying a high sensitivity of our DNs approach. Findings from this proof-of-concept study suggest that our approach has a great potential in providing a novel and sensitive tool for threshold setting in chemical risk assessment. In future work, we plan to analyze more time-series datasets with a full spectrum of concentrations and sufficient replications per treatment. The pathway alteration-derived thresholds will also be compared with those derived from apical endpoints such as cell growth rate.
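
    A much-simplified sketch of the differential-edge idea, using thresholded correlation networks in place of the state-space reconstruction described in the abstract; the expression matrices and the correlation cut-off are hypothetical:

```python
# Much-simplified sketch: reconstruct a gene interaction network per treatment
# (here via thresholded |correlation| rather than a state space model) and flag
# edges present in one network but not the other as differential edges.
import numpy as np

def network(expr, cut=0.8):
    """Adjacency matrix from |Pearson correlation| over a time course."""
    corr = np.corrcoef(expr)
    np.fill_diagonal(corr, 0.0)
    return np.abs(corr) > cut

rng = np.random.default_rng(4)
control = rng.normal(size=(50, 18))                  # 50 genes x 18 time points
treated = control + rng.normal(scale=0.8, size=control.shape)

diff_edges = network(control) ^ network(treated)     # edges that changed state
genes_with_diff_edges = np.flatnonzero(diff_edges.any(axis=1))
print(len(genes_with_diff_edges), "genes carry differential edges")
```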

  20. Triangle singularities and XYZ quarkonium peaks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Szczepaniak, Adam P.

    2015-06-01

    We discuss analytical properties of partial waves derived from projection of a 4-legged amplitude with crossed-channel exchanges in the kinematic region of the direct channel that corresponds to the XYZ peaks in charmonium and bottomonium. We show that in general partial waves can develop anomalous branch points in the vicinity of the direct channel physical region. In a specific case, when these branch points lie on the opposite side of the unitary cut they pinch the integration contour in a dispersion relation and if the pinch happens close to threshold, the normal threshold cusp is enhanced. We show that this effect only occurs if masses of resonances in the crossed channel are in a specific, narrow range. We estimate the size of threshold enhancements originating from these anomalous singularities in reactions where the $$Z_c(3900)$$ and the $$Z_b(10610)$$ peaks have been observed.

  1. On thermonuclear ignition criterion at the National Ignition Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheng, Baolian; Kwan, Thomas J. T.; Wang, Yi-Ming

    2014-10-15

    Sustained thermonuclear fusion at the National Ignition Facility remains elusive. Although recent experiments approached or exceeded the anticipated ignition thresholds, the nuclear performance of the laser-driven capsules was well below predictions in terms of energy and neutron production. Such discrepancies between expectations and reality motivate a reassessment of the physics of ignition. We have developed a predictive analytical model from fundamental physics principles. Based on the model, we obtained a general thermonuclear ignition criterion in terms of the areal density and temperature of the hot fuel. This newly derived ignition threshold and its alternative forms explicitly show the minimum requirements of the hot fuel pressure, mass, areal density, and burn fraction for achieving ignition. Comparison of our criterion with existing theories, simulations, and the experimental data shows that our ignition threshold is more stringent than those in the existing literature and that our results are consistent with the experiments.

  2. Hearing parameters in noise exposed industrial workers.

    PubMed

    Celik, O; Yalçin, S; Oztürk, A

    1998-12-01

    This paper presents the results of a study carried out in a group of noise-exposed workers in a hydro-electric power plant. Thus, the main focus of the study is on 130 industrial workers who were exposed to high levels of noise. The control group consisted of 33 subjects with normal hearing. Hearing and acoustic reflex thresholds were obtained from all subjects and the results from age-matched subgroups were compared. The sensorineural hearing losses detected in 71 workers were bilateral, symmetrical, and mainly affected frequencies of 4-6 kHz. In essence, the hearing losses developed within the first 10 years of noise exposure and progressed only slightly in the following years. When acoustic reflex thresholds derived from the study and control groups were compared, a statistically significant difference was found only for the thresholds obtained at 4 kHz (p < 0.0005).

  3. Response of algal metrics to nutrients and physical factors and identification of nutrient thresholds in agricultural streams

    USGS Publications Warehouse

    Black, R.W.; Moran, P.W.; Frankforter, J.D.

    2011-01-01

    Many streams within the United States are impaired due to nutrient enrichment, particularly in agricultural settings. The present study examines the response of benthic algal communities in agricultural and minimally disturbed sites from across the western United States to a suite of environmental factors, including nutrients, collected at multiple scales. The first objective was to identify the relative importance of nutrients, habitat and watershed features, and macroinvertebrate trophic structure to explain algal metrics derived from deposition and erosion habitats. The second objective was to determine if thresholds in total nitrogen (TN) and total phosphorus (TP) related to algal metrics could be identified and how these thresholds varied across metrics and habitats. Nutrient concentrations within the agricultural areas were elevated and greater than published threshold values. All algal metrics examined responded to nutrients as hypothesized. Although nutrients typically were the most important variables in explaining the variation in each of the algal metrics, environmental factors operating at multiple scales also were important. Calculated thresholds for TN or TP based on the algal metrics generated from samples collected from erosion and deposition habitats were not significantly different. Little variability in threshold values for each metric for TN and TP was observed. The consistency of the threshold values measured across multiple metrics and habitats suggest that the thresholds identified in this study are ecologically relevant. Additional work to characterize the relationship between algal metrics, physical and chemical features, and nuisance algal growth would be of benefit to the development of nutrient thresholds and criteria. © 2010 The Author(s).

  4. Response of algal metrics to nutrients and physical factors and identification of nutrient thresholds in agricultural streams.

    PubMed

    Black, Robert W; Moran, Patrick W; Frankforter, Jill D

    2011-04-01

    Many streams within the United States are impaired due to nutrient enrichment, particularly in agricultural settings. The present study examines the response of benthic algal communities in agricultural and minimally disturbed sites from across the western United States to a suite of environmental factors, including nutrients, collected at multiple scales. The first objective was to identify the relative importance of nutrients, habitat and watershed features, and macroinvertebrate trophic structure to explain algal metrics derived from deposition and erosion habitats. The second objective was to determine if thresholds in total nitrogen (TN) and total phosphorus (TP) related to algal metrics could be identified and how these thresholds varied across metrics and habitats. Nutrient concentrations within the agricultural areas were elevated and greater than published threshold values. All algal metrics examined responded to nutrients as hypothesized. Although nutrients typically were the most important variables in explaining the variation in each of the algal metrics, environmental factors operating at multiple scales also were important. Calculated thresholds for TN or TP based on the algal metrics generated from samples collected from erosion and deposition habitats were not significantly different. Little variability in threshold values for each metric for TN and TP was observed. The consistency of the threshold values measured across multiple metrics and habitats suggest that the thresholds identified in this study are ecologically relevant. Additional work to characterize the relationship between algal metrics, physical and chemical features, and nuisance algal growth would be of benefit to the development of nutrient thresholds and criteria.

  5. Threshold detection in an on-off binary communications channel with atmospheric scintillation

    NASA Technical Reports Server (NTRS)

    Webb, W. E.; Marino, J. T., Jr.

    1974-01-01

    The optimum detection threshold in an on-off binary optical communications system operating in the presence of atmospheric turbulence was investigated assuming a Poisson detection process and log-normal scintillation. The dependence of the probability of bit error on log amplitude variance and received signal strength was analyzed, and semi-empirical relationships to predict the optimum detection threshold were derived. On the basis of this analysis a piecewise linear model for an adaptive threshold detection system is presented. Bit error probabilities for non-optimum threshold detection systems were also investigated.
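
    A small sketch of optimum-threshold selection for an on-off Poisson channel with log-normal intensity fading, scanning integer count thresholds for the minimum bit-error probability; the signal, background and scintillation parameters are illustrative assumptions:

```python
# Sketch: average the "on" count distribution over log-normal fading, then pick
# the count threshold minimizing the bit-error probability for equiprobable bits.
import numpy as np
from scipy.stats import poisson

ks = np.arange(0, 200)
mean_signal, mean_noise, sigma_chi = 50.0, 5.0, 0.3        # hypothetical values

# Log-normal intensity fading with unit mean: I = exp(2*chi), chi ~ N(-sigma^2, sigma^2).
chi = np.random.default_rng(5).normal(-sigma_chi**2, sigma_chi, 4000)
pmf_on = poisson.pmf(ks[:, None], mean_signal * np.exp(2 * chi) + mean_noise).mean(axis=1)
pmf_off = poisson.pmf(ks, mean_noise)

# P(error) = 0.5 * P(count <= T-1 | on) + 0.5 * P(count > T-1 | off), scanned over T.
p_err = 0.5 * np.cumsum(pmf_on) + 0.5 * (1 - np.cumsum(pmf_off))
print("optimum threshold:", ks[np.argmin(p_err)] + 1, "counts")
```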

  6. Threshold detection in an on-off binary communications channel with atmospheric scintillation

    NASA Technical Reports Server (NTRS)

    Webb, W. E.

    1975-01-01

    The optimum detection threshold in an on-off binary optical communications system operating in the presence of atmospheric turbulence was investigated assuming a Poisson detection process and log-normal scintillation. The dependence of the probability of bit error on log amplitude variance and received signal strength was analyzed, and semi-empirical relationships to predict the optimum detection threshold were derived. On the basis of this analysis a piecewise linear model for an adaptive threshold detection system is presented. The bit error probabilities for nonoptimum threshold detection systems were also investigated.

  7. 2D modeling based comprehensive analysis of short channel effects in DMG strained VSTB FET

    NASA Astrophysics Data System (ADS)

    Saha, Priyanka; Banerjee, Pritha; Sarkar, Subir Kumar

    2018-06-01

    The paper aims to develop a two-dimensional analytical model of the proposed dual material (DM) Vertical Super Thin Body (VSTB) strained Field Effect Transistor (FET), with a focus on its short-channel behaviour in the nanometer regime. The electrostatic potential across the gate/channel and dielectric wall/channel interfaces is derived by solving the 2D Poisson's equation with the parabolic approximation method, applying appropriate boundary conditions. The threshold voltage is then calculated using the criterion of minimum surface potential, considering both the gate-side and dielectric-wall-side potentials. Performance analysis of the present structure is demonstrated in terms of potential, electric field, threshold voltage characteristics and subthreshold behaviour by varying various device parameters and applied biases. The effect of applying strain in the channel is further explored to establish the superiority of the proposed device over the conventional VSTB FET counterpart. All analytical results are compared with Silvaco ATLAS device-simulated data to substantiate the accuracy of our derived model.

  8. Research on the Development of Green Finance in Shenzhen to Boost the Carbon Trading Market

    NASA Astrophysics Data System (ADS)

    Zhou, Jiping; Xiong, Siqin; Zhou, Yucheng; Zou, Zijian; Ma, Xiaoming

    2017-08-01

    This paper analyses the current development of the Shenzhen carbon trading market and of China's green finance, and makes policy recommendations for promoting the carbon trading market by developing green finance in Shenzhen. Shenzhen should take the lead in driving the localized application of green principles and formulate Shenzhen green bond guidelines as soon as possible, to encourage carbon-trading-related enterprises to finance through green bonds; it should also work to lower the threshold for financial institutions to participate in the carbon trading market, and explore the development of carbon derivatives.

  9. A threshold level of NFATc1 activity facilitates thymocyte differentiation and opposes notch-driven leukaemia development

    PubMed Central

    Klein-Hessling, Stefan; Rudolf, Ronald; Muhammad, Khalid; Knobeloch, Klaus-Peter; Maqbool, Muhammad Ahmad; Cauchy, Pierre; Andrau, Jean-Christophe; Avots, Andris; Talora, Claudio; Ellenrieder, Volker; Screpanti, Isabella; Serfling, Edgar; Patra, Amiya Kumar

    2016-01-01

    NFATc1 plays a critical role in double-negative thymocyte survival and differentiation. However, the signals that regulate Nfatc1 expression are incompletely characterized. Here we show a developmental stage-specific differential expression pattern of Nfatc1 driven by the distal (P1) or proximal (P2) promoters in thymocytes. Whereas preTCR-negative thymocytes exhibit only P2 promoter-derived Nfatc1β expression, preTCR-positive thymocytes express both Nfatc1β and P1 promoter-derived Nfatc1α transcripts. Inducing NFATc1α activity from the P1 promoter in preTCR-negative thymocytes, in addition to the NFATc1β from the P2 promoter, impairs thymocyte development, resulting in severe T-cell lymphopenia. In addition, we show that NFATc1 activity suppresses the B-lineage potential of immature thymocytes and consolidates their differentiation to T cells. Further, in the pTCR-positive DN3 cells, a threshold level of NFATc1 activity is vital in facilitating T-cell differentiation and in preventing Notch3-induced T-acute lymphoblastic leukaemia. Altogether, our results show that NFATc1 activity is crucial in determining the T-cell fate of thymocytes. PMID:27312418

  10. Derivation of critical rainfall thresholds for landslide in Sicily

    NASA Astrophysics Data System (ADS)

    Caracciolo, Domenico; Arnone, Elisa; Noto, Leonardo V.

    2015-04-01

    Rainfall is the primary trigger of shallow landslides that can cause fatalities, damage to properties and economic losses in many areas of the world. For this reason, determining the rainfall amount/intensity responsible for landslide occurrence is important, and may contribute to mitigating the related risk and saving lives. Efforts have been made in different countries to investigate triggering conditions in order to define landslide-triggering rainfall thresholds. Rainfall thresholds are generally described by a power-law relationship between rainfall duration and cumulated (or intensity of) event rainfall, whose parameters are estimated empirically from the analysis of historical rainfall events that triggered landslides. The aim of this paper is the derivation of critical rainfall thresholds for landslide occurrence in Sicily, southern Italy, focusing particularly on the role of antecedent wet conditions. The creation of an appropriate landslide-rainfall database likely represents one of the main efforts in this type of analysis. For this work, historical landslide events that occurred in Sicily from 1919 to 2001 were selected from the archive of the Sistema Informativo sulle Catastrofi Idrogeologiche, developed under the project Aree Vulnerabili Italiane. The corresponding triggering precipitation records were screened from the rain-gauge network in Sicily, maintained by the Osservatorio delle Acque - Agenzia Regionale per i Rifiuti e le Acque. In particular, a detailed analysis was carried out to identify and reconstruct the hourly rainfall events that caused the selected landslides. A bootstrapping statistical technique has been used to determine the uncertainties associated with the threshold parameters. The rainfall thresholds at different exceedance probability levels, from 1% to 10%, were defined in terms of cumulated event rainfall, E, and rainfall duration, D. The role of rainfall prior to the damaging events was taken into account by including in the analysis the rainfall that fell 6, 15 and 30 days before each landslide. The antecedent rainfall turned out to be particularly important in triggering landslides. The rainfall thresholds obtained for Sicily were compared with the regional curves proposed by various authors, showing good agreement.
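
    A brief sketch of fitting a cumulated rainfall-duration (E-D) power-law threshold at a chosen exceedance-probability level, with a bootstrap on the intercept; the triggering-event data below are synthetic, not the Sicilian records:

```python
# Sketch: regress log(E) on log(D) for triggering events, take the 5% residual
# quantile as the threshold offset, and bootstrap that offset for uncertainty.
import numpy as np

rng = np.random.default_rng(6)
duration_h = rng.uniform(2, 120, 150)                            # event duration (h)
rain_mm = 8.0 * duration_h**0.45 * rng.lognormal(0, 0.35, 150)   # cumulated rainfall (mm)

b, log_a = np.polyfit(np.log(duration_h), np.log(rain_mm), 1)
residuals = np.log(rain_mm) - (log_a + b * np.log(duration_h))

offset = np.quantile(residuals, 0.05)       # 5% of triggering events fall below the curve
boot = np.array([np.quantile(rng.choice(residuals, residuals.size), 0.05)
                 for _ in range(1000)])     # bootstrap uncertainty on the offset
print(f"5% threshold: E = {np.exp(log_a + offset):.1f} * D^{b:.2f}")
print(f"intercept 95% CI: [{np.exp(log_a + np.quantile(boot, 0.025)):.1f}, "
      f"{np.exp(log_a + np.quantile(boot, 0.975)):.1f}]")
```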

  11. ANOTHER LOOK AT THE FAST ITERATIVE SHRINKAGE/THRESHOLDING ALGORITHM (FISTA)*

    PubMed Central

    Kim, Donghwan; Fessler, Jeffrey A.

    2017-01-01

    This paper provides a new way of developing the “Fast Iterative Shrinkage/Thresholding Algorithm (FISTA)” [3] that is widely used for minimizing composite convex functions with a nonsmooth term such as the ℓ1 regularizer. In particular, this paper shows that FISTA corresponds to an optimized approach to accelerating the proximal gradient method with respect to a worst-case bound of the cost function. This paper then proposes a new algorithm that is derived by instead optimizing the step coefficients of the proximal gradient method with respect to a worst-case bound of the composite gradient mapping. The proof is based on the worst-case analysis called Performance Estimation Problem in [11]. PMID:29805242
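
    For illustration, the following is a minimal, self-contained Python sketch of the classical FISTA iteration for the l1-regularised least-squares problem min_x 0.5·||Ax − b||² + λ||x||₁ (the algorithm this paper revisits), not the new optimized variant the paper derives; the synthetic problem size and regularisation strength are assumptions.

      import numpy as np

      def soft_threshold(v, t):
          """Proximal operator of t*||.||_1 (component-wise soft thresholding)."""
          return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

      def fista(A, b, lam, n_iter=200):
          L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth gradient
          x = np.zeros(A.shape[1])
          y, t = x.copy(), 1.0
          for _ in range(n_iter):
              grad = A.T @ (A @ y - b)
              x_new = soft_threshold(y - grad / L, lam / L)   # proximal gradient step
              t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
              y = x_new + ((t - 1.0) / t_new) * (x_new - x)   # momentum / extrapolation
              x, t = x_new, t_new
          return x

      # Small synthetic sparse-recovery example
      rng = np.random.default_rng(1)
      A = rng.standard_normal((40, 100))
      x_true = np.zeros(100); x_true[:5] = 3.0
      b = A @ x_true + 0.01 * rng.standard_normal(40)
      x_hat = fista(A, b, lam=0.1)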

  12. Method for extracting long-equivalent wavelength interferometric information

    NASA Technical Reports Server (NTRS)

    Hochberg, Eric B. (Inventor)

    1991-01-01

    A process for extracting long-equivalent wavelength interferometric information from a two-wavelength polychromatic or achromatic interferometer. The process comprises the steps of simultaneously recording a non-linear sum of two different frequency visible light interferograms on a high resolution film and then placing the developed film in an optical train for Fourier transformation, low pass spatial filtering and inverse transformation of the film image to produce low spatial frequency fringes corresponding to a long-equivalent wavelength interferogram. The recorded non-linear sum irradiance derived from the two-wavelength interferometer is obtained by controlling the exposure so that the average interferogram irradiance is set at either the noise level threshold or the saturation level threshold of the film.

  13. Paradigm shift in lead design.

    PubMed

    Irnich, W

    1999-09-01

    During the past 30 years there has been a tremendous development in electrode technology from bulky (90 mm2) to pin-sized (1.0 mm2) electrodes. Simultaneously, impedance has increased from 110 Ohms to >1 kOhms, which has been termed a "paradigm shift" in lead design. If current is responsible for stimulation, why is its impedance a key factor in saving energy? Further, what mechanism is behind this development based on experimental findings and what conclusion can be drawn from it to optimize electrode size? If it is assumed that there is always a layer of nonexcitable tissue between the electrode surface and excitable myocardium and that the electric field (potential gradient) produced by the electrode at this boundary is reaching threshold level, then a formula can be derived for the voltage threshold that completely describes the electrophysiology and electrophysics of a hemispherical electrode. Assuming that the mean chronic threshold for porous steroid-eluting electrodes is 0.6 V with 0.5-ms pulse duration, thickness of nonexcitable tissue can be estimated to be 1.5 mm. Taking into account this measure and the relationship between chronaxie and electrode area, voltage threshold, impedance, and energy as a function of surface area can be calculated. The lowest voltage for 0.5-ms pulse duration is reached with r(o) = 0.5 d, yielding a surface area of 4 mm2 and a voltage threshold of 0.62 V, an impedance of 1 kOhms, and an energy level of 197 nJ. It can be deduced from our findings that a further reduction of surface areas below 1.6 mm2 will not diminish energy threshold substantially, if pulse duration remains at 0.5 ms. Lowest energy is reached with t = chronaxie, yielding an energy level <100 nJ with surface areas < or =1.5 mm2. It is striking to see how well the theoretically derived results correspond to the experimental findings. It is also surprising that the hemispheric model so accurately approximates experimental results with differently shaped electrodes that it can be concluded that electrode shape seems to play a minor role in electrode efficiency. Further energy reduction can only be achieved by reducing the pulse duration to chronaxie. A real paradigm shift will occur only if the fundamentals of electrostimulation in combination with electrophysics are accepted by the pacing community.

  14. Threshold thickness for applying diffusion equation in thin tissue optical imaging

    NASA Astrophysics Data System (ADS)

    Zhang, Yunyao; Zhu, Jingping; Cui, Weiwen; Nie, Wei; Li, Jie; Xu, Zhenghong

    2014-08-01

    We investigated the suitability of the semi-infinite model of the diffusion equation when using diffuse optical imaging (DOI) to image thin tissues with double boundaries. Both diffuse approximation and Monte Carlo methods were applied to simulate light propagation in the thin tissue model with variable optical parameters and tissue thicknesses. A threshold value of the tissue thickness was defined as the minimum thickness in which the semi-infinite model exhibits the same reflected intensity as that from the double-boundary model and was generated as the final result. In contrast to our initial hypothesis that all optical properties would affect the threshold thickness, our results show that only absorption coefficient is the dominant parameter and the others are negligible. The threshold thickness decreases from 1 cm to 4 mm as the absorption coefficient grows from 0.01 mm-1 to 0.2 mm-1. A look-up curve was derived to guide the selection of the appropriate model during the optical diagnosis of thin tissue cancers. These results are useful in guiding the development of the endoscopic DOI for esophageal, cervical and colorectal cancers, among others.

  15. Forecasting the probability of future groundwater levels declining below specified low thresholds in the conterminous U.S.

    USGS Publications Warehouse

    Dudley, Robert W.; Hodgkins, Glenn A.; Dickinson, Jesse

    2017-01-01

    We present a logistic regression approach for forecasting the probability of future groundwater levels declining or maintaining below specific groundwater-level thresholds. We tested our approach on 102 groundwater wells in different climatic regions and aquifers of the United States that are part of the U.S. Geological Survey Groundwater Climate Response Network. We evaluated the importance of current groundwater levels, precipitation, streamflow, seasonal variability, Palmer Drought Severity Index, and atmosphere/ocean indices for developing the logistic regression equations. Several diagnostics of model fit were used to evaluate the regression equations, including testing of autocorrelation of residuals, goodness-of-fit metrics, and bootstrap validation testing. The probabilistic predictions were most successful at wells with high persistence (low month-to-month variability) in their groundwater records and at wells where the groundwater level remained below the defined low threshold for sustained periods (generally three months or longer). The model fit was weakest at wells with strong seasonal variability in levels and with shorter duration low-threshold events. We identified challenges in deriving probabilistic-forecasting models and possible approaches for addressing those challenges.
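
    As a hedged illustration of the general approach (not the USGS implementation), the sketch below fits a logistic regression to forecast the probability that a groundwater level lies below a low threshold one month ahead; the predictors, data, and threshold definition are all synthetic assumptions.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(0)
      n = 240  # hypothetical monthly record length

      # Hypothetical predictors: current level percentile, 3-month precipitation,
      # and harmonic terms for seasonality
      level_now = rng.uniform(0, 1, n)
      precip_3mo = rng.gamma(2.0, 50.0, n)
      month = np.arange(n) % 12
      X = np.column_stack([level_now, precip_3mo,
                           np.sin(2 * np.pi * month / 12),
                           np.cos(2 * np.pi * month / 12)])

      # Hypothetical outcome: 1 if the level one month ahead is below the low threshold
      y = (level_now + 0.001 * (precip_3mo - 100) + 0.1 * rng.standard_normal(n)) < 0.25

      model = LogisticRegression().fit(X, y.astype(int))
      p_below = model.predict_proba(X[-1:])[0, 1]   # forecast probability for the latest month
      print(f"P(level below threshold next month) = {p_below:.2f}")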

  16. Impact of Xanthylium Derivatives on the Color of White Wine.

    PubMed

    Bührle, Franziska; Gohl, Anita; Weber, Fabian

    2017-08-19

    Xanthylium derivatives are yellow to orange pigments formed by the dimerization of flavanols via a glyoxylic acid bridge, the glyoxylic acid being produced by oxidative cleavage of tartaric acid. Although their structure and formation under wine-like conditions are well established, knowledge about their color properties and their occurrence and importance in wine is deficient. Xanthylium cations and their corresponding esters were synthesized in a model wine solution and isolated via high-performance countercurrent chromatography (HPCCC) and solid phase extraction (SPE). A Three-Alternative-Forced-Choice (3-AFC) test was applied to determine the color perception threshold of the isolated compounds in white wine. Their presence and color impact were assessed in 70 different wines (58 white and 12 rosé wines) by UHPLC-DAD-ESI-MSn, and their storage stability in wine was determined. The thresholds in young Riesling wine were 0.57 mg/L (cations), 1.04 mg/L (esters) and 0.67 mg/L (1:1 (w/w) mixture). The low thresholds suggest a possible impact on white wine color, but concentrations in wines were below the threshold. The stability study showed the degradation of the compounds during storage under several conditions. Despite the low perception threshold, xanthylium derivatives might have no direct impact on white wine color, but might play a role in color formation as intermediate products in polymerization and browning.

  17. Optimal thresholds for the estimation of area rain-rate moments by the threshold method

    NASA Technical Reports Server (NTRS)

    Short, David A.; Shimizu, Kunio; Kedem, Benjamin

    1993-01-01

    Optimization of the threshold method, achieved by determination of the threshold that maximizes the correlation between an area-average rain-rate moment and the area coverage of rain rates exceeding the threshold, is demonstrated empirically and theoretically. Empirical results for a sequence of GATE radar snapshots show optimal thresholds of 5 and 27 mm/h for the first and second moments, respectively. Theoretical optimization of the threshold method by the maximum-likelihood approach of Kedem and Pavlopoulos (1991) predicts optimal thresholds near 5 and 26 mm/h for lognormally distributed rain rates with GATE-like parameters. The agreement between theory and observations suggests that the optimal threshold can be understood as arising due to sampling variations, from snapshot to snapshot, of a parent rain-rate distribution. Optimal thresholds for gamma and inverse Gaussian distributions are also derived and compared.
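
    The sketch below illustrates the idea on synthetic data: rain rates in each "snapshot" are drawn from a lognormal distribution whose parameters vary from scene to scene, and the optimal threshold is taken as the one maximizing the correlation between the area-mean rain rate and the fractional area exceeding the threshold; the distribution parameters are hypothetical and only loosely GATE-like.

      import numpy as np

      rng = np.random.default_rng(0)
      n_snapshots, n_pixels = 200, 2000

      # Each snapshot draws from a lognormal whose parameters vary from scene to scene
      mus = rng.normal(0.5, 0.4, n_snapshots)
      sigmas = rng.uniform(0.8, 1.2, n_snapshots)
      fields = [rng.lognormal(m, s, n_pixels) for m, s in zip(mus, sigmas)]

      mean_rain = np.array([f.mean() for f in fields])        # first moment per snapshot

      thresholds = np.arange(0.5, 30.0, 0.5)                  # candidate thresholds (mm/h)
      corrs = []
      for tau in thresholds:
          frac_area = np.array([(f > tau).mean() for f in fields])
          corrs.append(np.corrcoef(mean_rain, frac_area)[0, 1])

      tau_opt = thresholds[int(np.argmax(corrs))]
      print(f"optimal threshold for the first moment ~ {tau_opt:.1f} mm/h (synthetic example)")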

  18. Mechanism of and Threshold Biomechanical Conditions for Falsetto Voice Onset

    PubMed Central

    Deguchi, Shinji

    2011-01-01

    The sound source of a voice is produced by the self-excited oscillation of the vocal folds. In modal voice production, a drastic increase in transglottal pressure after vocal fold closure works as a driving force that develops self-excitation. Another type of vocal fold oscillation with less pronounced glottal closure observed in falsetto voice production has been accounted for by the mucosal wave theory. The classical theory assumes a quasi-steady flow, and the expected driving force onto the vocal folds under wavelike motion is derived from the Bernoulli effect. However, wavelike motion is not always observed during falsetto voice production. More importantly, the application of the quasi-steady assumption to a falsetto voice with a fundamental frequency of several hundred hertz is unsupported by experiments. These considerations suggested that the mechanism of falsetto voice onset may be essentially different from that explained by the mucosal wave theory. In this paper, an alternative mechanism is submitted that explains how self-excitation reminiscent of the falsetto voice could be produced independent of the glottal closure and wavelike motion. This new explanation is derived through analytical procedures by employing only general unsteady equations of motion for flow and solids. The analysis demonstrated that a convective acceleration of a flow induced by rapid wall movement functions as a negative damping force, leading to the self-excitation of the vocal folds. The critical subglottal pressure and volume flow are expressed as functions of vocal fold biomechanical properties, geometry, and voice fundamental frequency. The analytically derived conditions are qualitatively and quantitatively reasonable in view of reported measurement data of the thresholds required for falsetto voice onset. Understanding of the voice onset mechanism and the explicit mathematical descriptions of thresholds would be beneficial for the diagnosis and treatment of voice diseases and the development of artificial vocal folds. PMID:21408178

  19. Evaluation of automated threshold selection methods for accurately sizing microscopic fluorescent cells by image analysis.

    PubMed Central

    Sieracki, M E; Reichenbach, S E; Webb, K L

    1989-01-01

    The accurate measurement of bacterial and protistan cell biomass is necessary for understanding their population and trophic dynamics in nature. Direct measurement of fluorescently stained cells is often the method of choice. The tedium of making such measurements visually on the large numbers of cells required has prompted the use of automatic image analysis for this purpose. Accurate measurements by image analysis require an accurate, reliable method of segmenting the image, that is, distinguishing the brightly fluorescing cells from a dark background. This is commonly done by visually choosing a threshold intensity value which most closely coincides with the outline of the cells as perceived by the operator. Ideally, an automated method based on the cell image characteristics should be used. Since the optical nature of edges in images of light-emitting, microscopic fluorescent objects is different from that of images generated by transmitted or reflected light, it seemed that automatic segmentation of such images may require special considerations. We tested nine automated threshold selection methods using standard fluorescent microspheres ranging in size and fluorescence intensity and fluorochrome-stained samples of cells from cultures of cyanobacteria, flagellates, and ciliates. The methods included several variations based on the maximum intensity gradient of the sphere profile (first derivative), the minimum in the second derivative of the sphere profile, the minimum of the image histogram, and the midpoint intensity. Our results indicated that thresholds determined visually and by first-derivative methods tended to overestimate the threshold, causing an underestimation of microsphere size. The method based on the minimum of the second derivative of the profile yielded the most accurate area estimates for spheres of different sizes and brightnesses and for four of the five cell types tested. A simple model of the optical properties of fluorescing objects and the video acquisition system is described which explains how the second derivative best approximates the position of the edge. PMID:2516431
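
    A minimal sketch of the computation (not the authors' imaging pipeline) is given below: given a one-dimensional intensity profile across the edge of a fluorescing object, candidate thresholds are read off at the maximum-gradient point and at the minimum of the second derivative; the synthetic sigmoid profile is an assumption standing in for a measured sphere profile.

      import numpy as np

      # Synthetic intensity profile across the edge of a bright fluorescent object
      x = np.linspace(-5.0, 5.0, 201)                    # position (arbitrary units)
      profile = 1.0 / (1.0 + np.exp(4.0 * (x - 1.0)))    # bright interior falling to dark background

      d1 = np.gradient(profile, x)                       # first derivative of the profile
      d2 = np.gradient(d1, x)                            # second derivative of the profile

      i_grad = np.argmax(np.abs(d1))                     # maximum-gradient criterion
      i_d2 = np.argmin(d2)                               # minimum of the second derivative

      print(f"threshold from max gradient:          {profile[i_grad]:.3f}")
      print(f"threshold from min second derivative: {profile[i_d2]:.3f}")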

  20. Equilibrium analysis of a yellow Fever dynamical model with vaccination.

    PubMed

    Martorano Raimundo, Silvia; Amaku, Marcos; Massad, Eduardo

    2015-01-01

    We propose an equilibrium analysis of a dynamical model of yellow fever transmission in the presence of a vaccine. The model considers both human and vector populations. We found threshold parameters that affect the development of the disease and the infectious status of the human population in the presence of a vaccine whose protection may wane over time. In particular, we derived a threshold vaccination rate, above which the disease would be eradicated from the human population. We show that if the mortality rate of the mosquitoes is greater than a given threshold, then the disease is naturally (without intervention) eradicated from the population. In contrast, if the mortality rate of the mosquitoes is less than that threshold, then the disease is eradicated from the populations only when the growth rate of the human population is less than another threshold; otherwise, the disease is eradicated only if the reproduction number of the infection after vaccination is less than 1. When this reproduction number is greater than 1, the disease will be eradicated from the human population if the vaccination rate is greater than a given threshold; otherwise, the disease will establish itself among humans, reaching a stable endemic equilibrium. The analysis presented in this paper can be useful both for a better understanding of the disease dynamics and for the planning of vaccination strategies.
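
    Purely as an illustration of the kind of threshold involved (using a far simpler SIR-type model with waning protection than the vector-host model analysed in the paper), the sketch below computes the vaccination rate above which the post-vaccination reproduction number drops below 1; all parameter values are hypothetical.

      def threshold_vaccination_rate(R0, omega, mu):
          """Smallest nu with R0 * (1 - p) < 1, where p = nu / (nu + omega + mu)
          is the steady-state vaccinated fraction of a simple SIR-type model
          with vaccination rate nu, waning rate omega and natural mortality mu."""
          if R0 <= 1.0:
              return 0.0                      # no vaccination needed: infection dies out anyway
          return (omega + mu) * (R0 - 1.0)    # solve R0*(omega+mu)/(nu+omega+mu) = 1 for nu

      # Hypothetical per-year rates: protection waning over ~10 years, life expectancy ~70 years
      print(threshold_vaccination_rate(R0=4.0, omega=1 / 10, mu=1 / 70))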

  1. Threshold Determination for Local Instantaneous Sea Surface Height Derivation with Icebridge Data in Beaufort Sea

    NASA Astrophysics Data System (ADS)

    Zhu, C.; Zhang, S.; Xiao, F.; Li, J.; Yuan, L.; Zhang, Y.; Zhu, T.

    2018-05-01

    The NASA Operation IceBridge (OIB) mission, initiated in 2009, is currently the largest airborne remote sensing programme in polar Earth observation; it collects measurements to bridge the gap between NASA's ICESat and the upcoming ICESat-2 missions. This paper develops an improved method that optimizes the selection of Digital Mapping System (DMS) images and uses an optimal threshold, obtained experimentally in the Beaufort Sea, to calculate the local instantaneous sea surface height in this area. The optimal threshold was determined by comparing manual selection with the lowest Airborne Topographic Mapper (ATM) L1B elevation thresholds of 2%, 1%, 0.5%, 0.2%, 0.1% and 0.05% in sections A, B and C; the means of the mean differences are 0.166 m, 0.124 m, 0.083 m, 0.018 m, 0.002 m and -0.034 m, respectively. Our study shows that the lowest 0.1% of the L1B elevations is the optimal threshold. The optimal threshold and manual selections were also used to calculate the instantaneous sea surface height over images with leads; we find that the improved method agrees more closely with the L1B manual selections. For images without leads, the local instantaneous sea surface height was estimated using linear relationships between distance and the sea surface heights calculated over images with leads.
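
    A hedged sketch of the core step — taking the mean of the lowest fraction of ATM L1B elevations in a local segment as the instantaneous sea surface height, with 0.1% as the threshold found optimal here — is shown below; the elevation values are synthetic, not OIB data.

      import numpy as np

      def local_sea_surface_height(elevations, lowest_fraction=0.001):
          """Mean of the lowest `lowest_fraction` of elevations in a local segment."""
          elev = np.sort(np.asarray(elevations))
          n = max(1, int(round(lowest_fraction * elev.size)))
          return elev[:n].mean()

      rng = np.random.default_rng(0)
      # Synthetic segment: mostly sea-ice freeboard around +0.4 m with a few leads near 0 m
      elev = np.concatenate([0.4 + 0.1 * rng.standard_normal(9990),
                             0.02 * rng.standard_normal(10)])
      print(f"estimated local sea surface height: {local_sea_surface_height(elev):.3f} m")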

  2. Ontogenetic investigation of underwater hearing capabilities in loggerhead sea turtles (Caretta caretta) using a dual testing approach.

    PubMed

    Lavender, Ashley L; Bartol, Soraya M; Bartol, Ian K

    2014-07-15

    Sea turtles reside in different acoustic environments with each life history stage and may have different hearing capacity throughout ontogeny. For this study, two independent yet complementary techniques for hearing assessment, i.e. behavioral and electrophysiological audiometry, were employed to (1) measure hearing in post-hatchling and juvenile loggerhead sea turtles Caretta caretta (19-62 cm straight carapace length) to determine whether these migratory turtles exhibit an ontogenetic shift in underwater auditory detection and (2) evaluate whether hearing frequency range and threshold sensitivity are consistent in behavioral and electrophysiological tests. Behavioral trials first required training turtles to respond to known frequencies, a multi-stage, time-intensive process, and then recording their behavior when they were presented with sound stimuli from an underwater speaker using a two-response forced-choice paradigm. Electrophysiological experiments involved submerging restrained, fully conscious turtles just below the air-water interface and recording auditory evoked potentials (AEPs) when sound stimuli were presented using an underwater speaker. No significant differences in behavior-derived auditory thresholds or AEP-derived auditory thresholds were detected between post-hatchling and juvenile sea turtles. While hearing frequency range (50-1000/1100 Hz) and highest sensitivity (100-400 Hz) were consistent in audiograms pooled by size class for both behavior and AEP experiments, both post-hatchlings and juveniles had significantly higher AEP-derived than behavior-derived auditory thresholds, indicating that behavioral assessment is a more sensitive testing approach. The results from this study suggest that post-hatchling and juvenile loggerhead sea turtles are low-frequency specialists, exhibiting little differences in threshold sensitivity and frequency bandwidth despite residence in acoustically distinct environments throughout ontogeny. © 2014. Published by The Company of Biologists Ltd.

  3. Assessing and Adapting LiDAR-Derived Pit-Free Canopy Height Model Algorithm for Sites with Varying Vegetation Structure

    NASA Astrophysics Data System (ADS)

    Scholl, V.; Hulslander, D.; Goulden, T.; Wasser, L. A.

    2015-12-01

    Spatial and temporal monitoring of vegetation structure is important to the ecological community. Airborne Light Detection and Ranging (LiDAR) systems are used to efficiently survey large forested areas. From LiDAR data, three-dimensional models of forests called canopy height models (CHMs) are generated and used to estimate tree height. A common problem associated with CHMs is data pits, where LiDAR pulses penetrate the top of the canopy, leading to an underestimation of vegetation height. The National Ecological Observatory Network (NEON) currently implements an algorithm to reduce data pit frequency, which requires two height threshold parameters: the increment size and the range ceiling. CHMs are produced at a series of height increments up to a height range ceiling and combined to produce a CHM with reduced pits (referred to as a "pit-free" CHM). The current implementation uses static values for the height increment and ceiling (5 and 15 meters, respectively). To facilitate the generation of accurate pit-free CHMs across diverse NEON sites with varying vegetation structure, the impacts of adjusting the height threshold parameters were investigated through the development of an algorithm that dynamically selects the height increment and ceiling. A series of pit-free CHMs were generated using three height range ceilings and four height increment values for three ecologically different sites. Height threshold parameters were found to change CHM-derived tree heights by up to 36% compared to the original CHMs. The extent of the parameters' influence on modelled tree heights was greater than expected, which will be considered during future CHM data product development at NEON. [Figure: (A) aerial image of Harvard National Forest; (B) standard CHM containing pits, appearing as black speckles; (C) pit-free CHM created with the static algorithm implementation; (D) pit-free CHM created by varying the height threshold ceiling up to 82 m and the increment to 1 m.]
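
    A greatly simplified, hedged sketch of the pit-free idea is given below: partial maximum-height rasters are built from only those LiDAR points above successive height thresholds (increment up to a ceiling) and combined by a pixel-wise maximum so that pits are filled; the synthetic point cloud, grid, and parameter values are assumptions, not the NEON implementation.

      import numpy as np

      def partial_chm(x, y, z, min_height, grid_shape, cell=1.0):
          """Maximum-z raster built only from points with z >= min_height (NaN where empty)."""
          chm = np.full(grid_shape, np.nan)
          keep = z >= min_height
          rows = (y[keep] / cell).astype(int)
          cols = (x[keep] / cell).astype(int)
          for r, c, h in zip(rows, cols, z[keep]):
              if np.isnan(chm[r, c]) or h > chm[r, c]:
                  chm[r, c] = h
          return chm

      def pit_free_chm(x, y, z, grid_shape, increment=5.0, ceiling=15.0):
          thresholds = np.arange(0.0, ceiling + increment, increment)   # e.g. 0, 5, 10, 15 m
          layers = [np.zeros(grid_shape)]                               # bare-ground base layer
          layers += [partial_chm(x, y, z, t, grid_shape) for t in thresholds]
          return np.nanmax(np.stack(layers), axis=0)                    # fill pits layer by layer

      # Synthetic point cloud over a 20 m x 20 m plot with a dome-shaped canopy
      rng = np.random.default_rng(0)
      x, y = rng.uniform(0, 20, 5000), rng.uniform(0, 20, 5000)
      z = np.clip(18.0 - 0.5 * np.hypot(x - 10, y - 10) + rng.normal(0, 2, 5000), 0.0, None)
      chm = pit_free_chm(x, y, z, grid_shape=(20, 20))
      print(f"max modelled canopy height: {chm.max():.1f} m")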

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Yuyu; Smith, Steven J.; Elvidge, Christopher

    Accurate information on urban areas at regional and global scales is important for both the science and policy-making communities. The Defense Meteorological Satellite Program/Operational Linescan System (DMSP/OLS) nighttime stable light data (NTL) provide a potential way to map urban areas and their dynamics economically and in a timely manner. In this study, we developed a cluster-based method to estimate the optimal thresholds and map urban extents from the DMSP/OLS NTL data in five major steps: data preprocessing, urban cluster segmentation, logistic model development, threshold estimation, and urban extent delineation. Unlike previous fixed-threshold methods, which suffer from over- and under-estimation issues, in our method the optimal thresholds are estimated based on cluster size and overall nightlight magnitude in the cluster, and they vary between clusters. Two large countries, the United States and China, with different urbanization patterns were selected to map urban extents using the proposed method. The results indicate that urbanized area occupies about 2% of total land area in the US, ranging from below 0.5% to above 10% at the state level, and less than 1% in China, ranging from below 0.1% to about 5% at the province level, with some municipalities as high as 10%. The derived thresholds and urban extents were evaluated using high-resolution land cover data at the cluster and regional levels. It was found that our method can map urban areas in both countries efficiently and accurately. Compared to previous threshold techniques, our method reduces the over- and under-estimation issues when mapping urban extent over a large area. More importantly, our method shows its potential to map global urban extents and temporal dynamics using the DMSP/OLS NTL data in a timely, cost-effective way.

  5. Detection of dominant runoff generation processes in flood frequency analysis

    NASA Astrophysics Data System (ADS)

    Iacobellis, Vito; Fiorentino, Mauro; Gioia, Andrea; Manfreda, Salvatore

    2010-05-01

    The investigation of hydrologic similarity represents one of the most exciting challenges faced by hydrologists in recent years, aimed at reducing the uncertainty of flood prediction in ungauged basins (e.g., IAHS Decade on Predictions in Ungauged Basins (PUB) - Sivapalan et al., 2003). In perspective, the identification of dominant runoff generation mechanisms may provide a strategy for catchment classification and the identification of hydrologically homogeneous regions. In this context, we exploited the framework of theoretically derived flood probability distributions in order to interpret the physical behavior of real basins. Recent developments on theoretically derived distributions have highlighted that, in a given basin, different runoff processes may coexist and affect the shape of flood distributions. The identification of dominant runoff generation mechanisms represents a key signature of flood distributions, providing insight into hydrologic similarity. Iacobellis and Fiorentino (2000) introduced a novel distribution of flood peak annual maxima, the "IF" distribution, which exploited the variable source area concept coupled with a runoff threshold having scaling properties. More recently, Gioia et al. (2008) introduced the Two Component-IF (TCIF) distribution, generalizing the IF distribution, based on two different threshold mechanisms associated respectively with ordinary and extraordinary events. Indeed, ordinary floods are mostly due to rainfall events exceeding a threshold infiltration rate in a small source area, while the so-called outlier events, often responsible for the high skewness of flood distributions, are triggered by severe rainfalls exceeding a threshold storage in a large portion of the basin. Within this scheme, we focused on the application of both models (IF and TCIF) to a considerable number of catchments belonging to different regions of Southern Italy. In particular, we stressed, as a case of strong general interest in the field of statistical hydrology, the role of procedures for parameter estimation and techniques for model selection in the case of nested distributions. References: Gioia, A., V. Iacobellis, S. Manfreda, and M. Fiorentino, Runoff thresholds in derived flood frequency distributions, Hydrol. Earth Syst. Sci., 12, 1295-1307, 2008. Iacobellis, V., and M. Fiorentino (2000), Derived distribution of floods based on the concept of partial area coverage with a climatic appeal, Water Resour. Res., 36(2), 469-482. Sivapalan, M., Takeuchi, K., Franks, S. W., Gupta, V. K., Karambiri, H., Lakshmi, V., Liang, X., McDonnell, J. J., Mendiondo, E. M., O'Connell, P. E., Oki, T., Pomeroy, J. W., Schertzer, D., Uhlenbrook, S. and Zehe, E.: IAHS Decade on Predictions in Ungauged Basins (PUB), 2003-2012: Shaping an exciting future for the hydrological sciences, Hydrol. Sci. J., 48(6), 857-880, 2003.

  6. The perturbed compound Poisson risk model with constant interest and a threshold dividend strategy

    NASA Astrophysics Data System (ADS)

    Gao, Shan; Liu, Zaiming

    2010-03-01

    In this paper, we consider the compound Poisson risk model perturbed by diffusion with constant interest and a threshold dividend strategy. Integro-differential equations with certain boundary conditions for the moment-generating function and the nth moment of the present value of all dividends until ruin are derived. We also derive integro-differential equations with boundary conditions for the Gerber-Shiu functions. The special case in which the claim size distribution is exponential is considered in some detail.
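
    The paper's contribution is analytical; purely for intuition, the hedged Monte Carlo sketch below simulates a diffusion-perturbed compound Poisson surplus process with constant interest under a threshold dividend strategy (dividends paid at rate d < c whenever the surplus exceeds b) and estimates the finite-time ruin probability and the dividends paid; all parameter values are hypothetical.

      import numpy as np

      def simulate_path(u0, c, r, lam, claim_mean, sigma, b, d, T=50.0, dt=0.01, rng=None):
          """One path of the perturbed surplus process; returns (ruined?, total dividends)."""
          rng = rng or np.random.default_rng()
          u, dividends, t = u0, 0.0, 0.0
          while t < T:
              pay = d if u > b else 0.0                              # threshold dividend strategy
              du = (c - pay + r * u) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
              if rng.random() < lam * dt:                            # Poisson claim arrival in this step
                  du -= rng.exponential(claim_mean)                  # exponential claim size
              u += du
              dividends += pay * dt
              t += dt
              if u < 0.0:
                  return True, dividends                             # ruin occurred
          return False, dividends

      rng = np.random.default_rng(0)
      paths = [simulate_path(u0=10.0, c=1.5, r=0.02, lam=1.0, claim_mean=1.0,
                             sigma=0.5, b=15.0, d=0.5, rng=rng) for _ in range(300)]
      ruin_prob = np.mean([ruined for ruined, _ in paths])
      mean_dividends = np.mean([div for _, div in paths])
      print(f"ruin probability by T=50: {ruin_prob:.2f}; mean dividends paid: {mean_dividends:.2f}")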

  7. CT derived left atrial size identifies left heart disease in suspected pulmonary hypertension: Derivation and validation of predictive thresholds.

    PubMed

    Currie, Benjamin J; Johns, Chris; Chin, Matthew; Charalampopolous, Thanos; Elliot, Charlie A; Garg, Pankaj; Rajaram, Smitha; Hill, Catherine; Wild, Jim W; Condliffe, Robin A; Kiely, David G; Swift, Andy J

    2018-06-01

    Patients with pulmonary hypertension due to left heart disease (PH-LHD) have overlapping clinical features with pulmonary arterial hypertension, making diagnosis reliant on right heart catheterization (RHC). This study aimed to investigate computed tomography pulmonary angiography (CTPA)-derived cardiopulmonary structural metrics, in comparison with magnetic resonance imaging (MRI), for the diagnosis of left heart disease in patients with suspected pulmonary hypertension. Patients with suspected pulmonary hypertension who underwent CTPA, MRI and RHC were identified. Measurements of the cardiac chambers and vessels were recorded from CTPA and MRI. The diagnostic thresholds of individual measurements to detect elevated pulmonary arterial wedge pressure (PAWP) were identified in a derivation cohort (n = 235). Individual CT- and MRI-derived metrics were tested in a validation cohort (n = 211). A total of 446 patients were included, of whom 88 had left heart disease. Left atrial area was a strong predictor of PAWP > 15 mm Hg and PAWP > 18 mm Hg (area under the curve (AUC) 0.854 and 0.873, respectively). Similar accuracy was identified for MRI-derived LA volume (AUC 0.852 and 0.878 for PAWP > 15 and 18 mm Hg, respectively). Left atrial areas of 26.8 cm2 and 30.0 cm2 were the optimal specific thresholds for identification of PAWP > 15 and 18 mm Hg, with sensitivities of 60%/53% and specificities of 89%/94%, respectively, in the validation cohort. CTPA- and MRI-derived left atrial size identifies left heart disease in suspected pulmonary hypertension with high specificity. The proposed diagnostic thresholds for elevated left atrial area on routine CTPA may be useful for indicating the diagnosis of left heart disease in suspected pulmonary hypertension. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
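
    As a hedged illustration of how such imaging thresholds can be derived (not the study's actual analysis), the sketch below computes a ROC curve for a continuous metric against the catheter-based label and selects a high-specificity cut-off; the synthetic left-atrial-area values, prevalence, and 90% specificity target are assumptions.

      import numpy as np
      from sklearn.metrics import roc_auc_score, roc_curve

      rng = np.random.default_rng(0)
      n = 300
      pawp_elevated = rng.random(n) < 0.3                          # reference label from RHC
      # Synthetic left atrial area (cm2): larger on average when PAWP is elevated
      la_area = np.where(pawp_elevated, rng.normal(30, 5, n), rng.normal(22, 4, n))

      auc = roc_auc_score(pawp_elevated, la_area)
      fpr, tpr, thresholds = roc_curve(pawp_elevated, la_area)

      # Choose the smallest threshold that still achieves >= 90% specificity
      spec_ok = (1 - fpr) >= 0.90
      cut = thresholds[spec_ok][np.argmax(tpr[spec_ok])]
      print(f"AUC = {auc:.2f}, high-specificity LA-area threshold ~ {cut:.1f} cm2")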

  8. Automated quantification of myocardial perfusion SPECT using simplified normal limits.

    PubMed

    Slomka, Piotr J; Nishina, Hidetaka; Berman, Daniel S; Akincioglu, Cigdem; Abidov, Aiden; Friedman, John D; Hayes, Sean W; Germano, Guido

    2005-01-01

    To simplify development of normal limits for myocardial perfusion SPECT (MPS), we implemented a quantification scheme in which normal limits are derived without visual scoring of abnormal scans or optimization of regional thresholds. Normal limits were derived from same-day TI-201 rest/Tc-99m-sestamibi stress scans of male (n = 40) and female (n = 40) low-likelihood patients. Defect extent, total perfusion deficit (TPD), and regional perfusion extents were derived by comparison to normal limits in polar-map coordinates. MPS scans from 256 consecutive patients without known coronary artery disease, who underwent coronary angiography, were analyzed. The new method of quantification (TPD) was compared with our previously developed quantification system and visual scoring. The receiver operator characteristic area under the curve for detection of 50% or greater stenoses by TPD (0.88 +/- 0.02) was higher than by visual scoring (0.83 +/- 0.03) ( P = .039) or standard quantification (0.82 +/- 0.03) ( P = .004). For detection of 70% or greater stenoses, it was higher for TPD (0.89 +/- 0.02) than for standard quantification (0.85 +/- 0.02) ( P = .014). Sensitivity and specificity were 93% and 79%, respectively, for TPD; 81% and 85%, respectively, for visual scoring; and 80% and 73%, respectively, for standard quantification. The use of stress mode-specific normal limits did not improve performance. Simplified quantification achieves performance better than or equivalent to visual scoring or quantification based on per-segment visual optimization of abnormality thresholds.

  9. Room-Temperature Low-Threshold Lasing from Monolithically Integrated Nanostructured Porous Silicon Hybrid Microcavities.

    PubMed

    Robbiano, Valentina; Paternò, Giuseppe M; La Mattina, Antonino A; Motti, Silvia G; Lanzani, Guglielmo; Scotognella, Francesco; Barillaro, Giuseppe

    2018-05-22

    Silicon photonics would strongly benefit from monolithically integrated low-threshold silicon-based laser operating at room temperature, representing today the main challenge toward low-cost and power-efficient electronic-photonic integrated circuits. Here we demonstrate low-threshold lasing from fully transparent nanostructured porous silicon (PSi) monolithic microcavities (MCs) infiltrated with a polyfluorene derivative, namely, poly(9,9-di- n-octylfluorenyl-2,7-diyl) (PFO). The PFO-infiltrated PSiMCs support single-mode blue lasing at the resonance wavelength of 466 nm, with a line width of ∼1.3 nm and lasing threshold of 5 nJ (15 μJ/cm 2 ), a value that is at the state of the art of PFO lasers. Furthermore, time-resolved photoluminescence shows a significant shortening (∼57%) of PFO emission lifetime in the PSiMCs, with respect to nonresonant PSi reference structures, confirming a dramatic variation of the radiative decay rate due to a Purcell effect. Our results, given also that blue lasing is a worst case for silicon photonics, are highly appealing for the development of low-cost, low-threshold silicon-based lasers with wavelengths tunable from visible to the near-infrared region by simple infiltration of suitable emitting polymers in monolithically integrated nanostructured PSiMCs.

  10. Second harmonic generation and crystal growth of new chalcone derivatives

    NASA Astrophysics Data System (ADS)

    Patil, P. S.; Dharmaprakash, S. M.; Ramakrishna, K.; Fun, Hoong-Kun; Sai Santosh Kumar, R.; Narayana Rao, D.

    2007-05-01

    We report on the synthesis, crystal structure and optical characterization of chalcone derivatives developed for second-order nonlinear optics. The investigation of a series of five chalcone derivatives with the second harmonic generation powder test according to Kurtz and Perry revealed that these chalcones show efficient second-order nonlinear activity. Among them, high-quality single crystals of 3-Br-4'-methoxychalcone (3BMC) were grown by solvent evaporation solution growth technique. Grown crystals were characterized by X-ray powder diffraction (XRD), laser damage threshold, UV-vis-NIR and refractive index measurement studies. Infrared spectroscopy, thermogravimetric analysis and differential thermal analysis measurements were performed to study the molecular vibration and thermal behavior of 3BMC crystal. Thermal analysis does not show any structural phase transition.

  11. Universality of next-to-leading power threshold effects for colourless final states in hadronic collisions

    NASA Astrophysics Data System (ADS)

    Del Duca, V.; Laenen, E.; Magnea, L.; Vernazza, L.; White, C. D.

    2017-11-01

    We consider the production of an arbitrary number of colour-singlet particles near partonic threshold, and show that next-to-leading order cross sections for this class of processes have a simple universal form at next-to-leading power (NLP) in the energy of the emitted gluon radiation. Our analysis relies on a recently derived factorisation formula for NLP threshold effects at amplitude level, and therefore applies both if the leading-order process is tree-level and if it is loop-induced. It holds for differential distributions as well. The results can furthermore be seen as applications of recently derived next-to-soft theorems for gauge theory amplitudes. We use our universal expression to re-derive known results for the production of up to three Higgs bosons at NLO in the large top mass limit, and for the hadro-production of a pair of electroweak gauge bosons. Finally, we present new analytic results for Higgs boson pair production at NLO and NLP, with exact top-mass dependence.

  12. Species sensitivity distribution evaluation for selenium in fish eggs: considerations for development of a Canadian tissue-based guideline.

    PubMed

    DeForest, David K; Gilron, Guy; Armstrong, Sarah A; Robertson, Erin L

    2012-01-01

    A freshwater Se guideline was developed for consideration based on concentrations in fish eggs or ovaries, with a focus on Canadian species, following the Canadian Council of Ministers of the Environment protocol for developing guideline values. When sufficient toxicity data are available, the protocol recommends deriving guidelines as the 5th percentile of the species sensitivity distribution (SSD). When toxicity data are limited, the protocol recommends a lowest value approach, where the lowest toxicity threshold is divided by a safety factor (e.g., 10). On the basis of a comprehensive review of the current literature and an assessment of the data therein, there are sufficient egg and ovary Se data available for freshwater fish to develop an SSD. For most fish species, Se EC10 values (10% effect concentrations) could be derived, but for some species, only no-observed-effect concentrations and/or lowest-observed-effect concentrations could be identified. The 5th percentile egg and ovary Se concentrations from the SSD were consistently 20 µg/g dry weight (dw) for the best-fitting distributions. In contrast, the lowest value approach using a safety factor of 10 would result in a Se egg and ovary guideline of 2 µg/g dw, which is unrealistically conservative, as this falls within the range of egg and ovary Se concentrations in laboratory control fish and fish collected from reference sites. An egg and ovary Se guideline of 20 µg/g dw should be considered a conservative, broadly applicable guideline, as no species mean toxicity thresholds lower than this value have been identified to date. When concentrations exceed this guideline, site-specific studies with local fish species, conducted using a risk-based approach, may result in higher egg and ovary Se toxicity thresholds. Copyright © 2011 SETAC.
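
    A minimal sketch of the SSD step described above — fitting a log-normal species sensitivity distribution to species-mean toxicity thresholds and reading off its 5th percentile (HC5) — is given below; the toxicity values are hypothetical and are not the selenium dataset assessed in this study.

      import numpy as np
      from scipy import stats

      # Hypothetical species-mean egg/ovary toxicity thresholds (ug/g dry weight)
      species_thresholds = np.array([21.0, 24.5, 26.0, 27.5, 30.0, 33.0, 36.0, 40.0])

      log_vals = np.log10(species_thresholds)
      mu, sigma = log_vals.mean(), log_vals.std(ddof=1)        # log-normal SSD parameters

      hc5 = 10 ** stats.norm.ppf(0.05, loc=mu, scale=sigma)    # 5th percentile of the fitted SSD
      print(f"HC5 ~ {hc5:.1f} ug/g dw (illustrative values only)")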

  13. On the need for a time- and location-dependent estimation of the NDSI threshold value for reducing existing uncertainties in snow cover maps at different scales

    NASA Astrophysics Data System (ADS)

    Härer, Stefan; Bernhardt, Matthias; Siebers, Matthias; Schulz, Karsten

    2018-05-01

    Knowledge of the current snow cover extent is essential for characterizing energy and moisture fluxes at the Earth's surface. The snow-covered area (SCA) is often estimated by using optical satellite information in combination with the normalized-difference snow index (NDSI). The NDSI uses a threshold to decide whether a satellite pixel is classified as snow covered or snow free. The spatiotemporal representativeness of the standard threshold of 0.4 is, however, questionable at the local scale. Here, we use local snow cover maps derived from ground-based photography to continuously calibrate the NDSI threshold values (NDSIthr) of Landsat satellite images at two European mountain sites over the period from 2010 to 2015. The Research Catchment Zugspitzplatt (RCZ, Germany) and the Vernagtferner area (VF, Austria) are both located within a single Landsat scene. Nevertheless, the long-term analysis demonstrated that the NDSIthr at these sites are uncorrelated (r = 0.17) and differ from the standard threshold of 0.4. For further comparison, a dynamic and locally optimized NDSI threshold was used, as well as another locally optimized literature threshold value (0.7). It was shown that large uncertainties in the prediction of the SCA, of up to 24.1%, exist in satellite snow cover maps when the standard threshold of 0.4 is used, but a newly developed calibrated quadratic polynomial model which accounts for seasonal threshold dynamics can reduce this error. The model minimizes the SCA uncertainties at the calibration site VF by 50% in the evaluation period and also significantly improved the results at RCZ. Additionally, a scaling experiment shows that the positive effect of a locally adapted threshold diminishes for pixel sizes of 500 m or larger, underlining the general applicability of the standard threshold at larger scales.
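
    A hedged sketch of the thresholding step is shown below: NDSI = (green − SWIR)/(green + SWIR) is computed per pixel and compared against either the standard 0.4 threshold or a locally calibrated value; the reflectance arrays and the 0.55 "local" threshold are synthetic assumptions, not values from this study.

      import numpy as np

      def snow_map(green, swir, ndsi_threshold=0.4):
          ndsi = (green - swir) / np.clip(green + swir, 1e-6, None)   # avoid divide-by-zero
          return ndsi > ndsi_threshold

      rng = np.random.default_rng(0)
      green = rng.uniform(0.05, 0.9, (100, 100))   # synthetic green-band reflectance
      swir = rng.uniform(0.02, 0.4, (100, 100))    # synthetic SWIR-band reflectance

      sca_standard = snow_map(green, swir, 0.4).mean()   # snow-covered fraction, standard threshold
      sca_local = snow_map(green, swir, 0.55).mean()     # hypothetical locally calibrated threshold
      print(f"SCA with threshold 0.4: {sca_standard:.2%}, with 0.55: {sca_local:.2%}")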

  14. Association between functional antibody against Group B Streptococcus and maternal and infant colonization in a Gambian cohort.

    PubMed

    Le Doare, Kirsty; Faal, Amadou; Jaiteh, Mustapha; Sarfo, Francess; Taylor, Stephen; Warburton, Fiona; Humphries, Holly; Birt, Jessica; Jarju, Sheikh; Darboe, Saffiatou; Clarke, Edward; Antonio, Martin; Foster-Nyarko, Ebenezer; Heath, Paul T; Gorringe, Andrew; Kampmann, Beate

    2017-05-19

    Vertical transmission of Group B Streptococcus (GBS) is a prerequisite for early-onset disease and a consequence of maternal GBS colonization. Disease protection is associated with maternally-derived anti-GBS antibody. Using a novel antibody-mediated C3b/iC3b deposition flow cytometry assay which correlates with opsonic killing we developed a model to assess the impact of maternally-derived functional anti-GBS antibody on infant GBS colonization from birth to day 60-89 of life. Rectovaginal swabs and cord blood (birth) and infant nasopharyngeal/rectal swabs (birth, day 6 and day 60-89) were obtained from 750 mother/infant pairs. Antibody-mediated C3b/iC3b deposition with cord and infant sera was measured by flow cytometry. We established that as maternally-derived anti-GBS functional antibody increases, infant colonization decreases at birth and up to three months of life, the critical time window for the development of GBS disease. Further, we observed a serotype (ST)-dependent threshold above which no infant was colonized at birth. Functional antibody above the upper 95th confidence interval for the geometric mean concentration was associated with absence of infant GBS colonization at birth for STII (p<0.001), STIII (p=0.01) and STV (p<0.001). Increased functional antibody was also associated with clearance of GBS between birth and day 60-89. Higher concentrations of maternally-derived antibody-mediated complement deposition are associated with a decreased risk of GBS colonization in infants up to day 60-89 of life. Our findings are of relevance to establish thresholds for protection following vaccination of pregnant women with future GBS vaccines. Copyright © 2017 The Author(s). Published by Elsevier Ltd.. All rights reserved.

  15. Threshold virus dynamics with impulsive antiretroviral drug effects

    PubMed Central

    Lou, Jie; Lou, Yijun; Wu, Jianhong

    2013-01-01

    The purposes of this paper are twofold: to develop a rigorous approach to analyze the threshold behaviors of nonlinear virus dynamics models with impulsive drug effects and to examine the feasibility of virus clearance following the Manuals of National AIDS Free Antiviral Treatment in China. An impulsive system of differential equations is developed to describe the within-host virus dynamics of both wild-type and drug-resistant strains when a combination of antiretroviral drugs is used to induce instantaneous drug effects at a sequence of dosing times equally spaced while drug concentrations decay exponentially after the dosing time. Threshold parameters are derived using the basic reproduction number of periodic epidemic models, and are used to depict virus clearance/persistence scenarios using the theory of asymptotic periodic systems and the persistence theory of discrete dynamical systems. Numerical simulations using model systems parametrized in terms of the antiretroviral therapy recommended in the aforementioned Manuals illustrate the theoretical threshold virus dynamics, and examine conditions under which the impulsive antiretroviral therapy leads to treatment success. In particular, our results show that only the drug-resistant strain can dominate (the first-line treatment program guided by the Manuals) or both strains may be rapidly eliminated (the second-line treatment program), thus the work indicates the importance of implementing the second-line treatment program as soon as possible. PMID:21987085

  16. Electron-atom spin asymmetry and two-electron photodetachment - Addenda to the Coulomb-dipole threshold law

    NASA Technical Reports Server (NTRS)

    Temkin, A.

    1984-01-01

    Temkin (1982) has derived the ionization threshold law based on a Coulomb-dipole theory of the ionization process. The present investigation is concerned with a reexamination of several aspects of the Coulomb-dipole threshold law. Attention is given to the energy scale of the logarithmic denominator, the spin-asymmetry parameter, and an estimate of alpha and the energy range of validity of the threshold law, taking into account the result of the two-electron photodetachment experiment conducted by Donahue et al. (1984).

  17. Enhanced-Resolution Satellite Microwave Brightness Temperature Records for Mapping Boreal-Arctic Landscape Freeze-Thaw Heterogeneity

    NASA Astrophysics Data System (ADS)

    Kim, Y.; Du, J.; Kimball, J. S.

    2017-12-01

    The landscape freeze-thaw (FT) status derived from satellite microwave remote sensing is closely linked to vegetation phenology and productivity, surface energy exchange, evapotranspiration, snow/ice melt dynamics, and trace gas fluxes over land areas affected by seasonally frozen temperatures. A long-term global satellite microwave Earth System Data Record of daily landscape freeze-thaw status (FT-ESDR) was developed using similarly calibrated 37 GHz, vertically polarized (V-pol) brightness temperatures (Tb) from the SMMR, SSM/I, and SSMIS sensors. The FT-ESDR shows mean annual spatial classification accuracies of 90.3% and 84.3% for PM and AM overpass retrievals relative to surface air temperature (SAT) measurement-based FT estimates from global weather stations. However, the coarse FT-ESDR gridding (25 km) is insufficient to distinguish finer-scale FT heterogeneity. In this study, we tested alternative finer-scale FT estimates derived from two enhanced polar-grid (3.125-km and 6-km resolution), 36.5 GHz V-pol Tb records derived from calibrated AMSR-E and AMSR2 sensor observations. The daily FT estimates are derived using a modified seasonal threshold algorithm that classifies daily Tb variations in relation to grid-cell-wise FT thresholds calibrated using ERA-Interim reanalysis-based SAT, downscaled using a digital terrain map and estimated temperature lapse rates. The resulting polar-grid FT records for a selected study year (2004) show mean annual spatial classification accuracies of 90.1% (84.2%) and 93.1% (85.8%) for the 3.125-km and 6-km Tb retrievals at the PM (AM) overpass, respectively, relative to in situ SAT measurement-based FT estimates from regional weather stations. Areas with enhanced FT accuracy include water-land boundaries and mountainous terrain. Differences in FT patterns and relative accuracy obtained from the enhanced-grid Tb records were attributed to several factors, including different noise contributions from the underlying Tb processing and spatial mismatches between Tb retrievals and SAT-calibrated FT thresholds.
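
    A strongly simplified, hedged sketch of a seasonal threshold calibration is given below: for one grid cell, the Tb threshold is chosen to maximize agreement between Tb-based and air-temperature-based frozen/thawed flags; the synthetic Tb and SAT series, and the calibration by exhaustive search, are assumptions rather than the actual FT-ESDR processing.

      import numpy as np

      def calibrate_ft_threshold(tb, sat, candidates=None):
          """Choose the Tb threshold maximizing agreement with SAT-based freeze flags."""
          if candidates is None:
              candidates = np.linspace(tb.min(), tb.max(), 200)
          frozen_ref = sat < 0.0                                     # reference: air temperature below 0 degC
          accuracies = [np.mean((tb < c) == frozen_ref) for c in candidates]
          return candidates[int(np.argmax(accuracies))]

      rng = np.random.default_rng(0)
      doy = np.arange(365)
      sat = -15 * np.cos(2 * np.pi * doy / 365) + 3 * rng.standard_normal(365)   # synthetic SAT (degC)
      tb = 250 + 1.2 * sat + 2 * rng.standard_normal(365)                        # synthetic 37 GHz Tb (K)

      thr = calibrate_ft_threshold(tb, sat)
      frozen_days = np.sum(tb < thr)
      print(f"calibrated Tb threshold: {thr:.1f} K, frozen days classified: {frozen_days}")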

  18. Investigations on the destruction of ultrasound contrast agents: Fragmentation thresholds, inertial cavitation, and bioeffects

    NASA Astrophysics Data System (ADS)

    Chen, Wen-Shiang

    Ultrasound contrast agents (UCA) have shown great potential in both diagnostic and therapeutic applications recently. To fully explore the possible applications and the safety concerns of using UCA, a complete understanding of the UCA responses to various acoustic fields is necessary. Therefore, we performed a series of experiments and simulations to investigate the various acoustic properties of UCA with different gases and shells. We also investigated the mechanisms of some UCA-enhanced bioeffects including thrombolysis, hemolysis and high-intensity focused ultrasound (HIFU) tumor ablation. Two pressure thresholds were found: the fragmentation threshold and the continuous inertial cavitation (IC) threshold. At the fragmentation threshold, bubbles were destroyed and the released gas dissolved in the surrounding solution at a rate which depended on the bubble's initial size and type of gas. The continuous IC threshold occurred at a higher pressure, where fragments of destroyed UCA (derivative bubbles) underwent violent inertial collapse, with the period of activity depending on acoustic parameters such as frequency, pressure, pulse length, and pulse repetition frequency (PRF). Different UCA had different threshold pressures and demonstrated different magnitudes of IC activity after destruction. The amount of derivative bubbles generated by IC was determined by several acoustic parameters including pressure, pulse length and PRF. For the same acoustic energy delivered, longer pulses generated more bubbles. More IC could be induced if the derivative bubbles survived through the 'off' period of the pulsed ultrasound waves and served as nuclei for subsequent IC. In therapeutic applications, evidence of IC activity was recorded during the hemolysis, thrombolysis, and lesion-formation processes with UCA. Hemolysis and thrombolysis were highly correlated with the presence of ultrasound and UCA, and correlated well with the amount of IC activity. Finally, the 'tadpole-shaped' lesion formed during high-intensity focused ultrasound treatment was the result of bubble formation by boiling.

  19. Deriving freshwater safety thresholds for hexabromocyclododecane and comparison of toxicity of brominated flame retardants.

    PubMed

    Dong, Liang; Zheng, Lei; Yang, Suwen; Yan, Zhenguang; Jin, Weidong; Yan, Yuhong

    2017-05-01

    Hexabromocyclododecane (HBCD) is a brominated flame retardant used throughout the world. It has been detected in various environmental media and has been shown to be toxic to aquatic life. The toxic effects of HBCD on aquatic organisms in Chinese freshwater ecosystems are discussed here. Experiments were conducted with nine types of acute toxicity testing and three types of chronic toxicity testing. After comparing a range of species sensitivity distribution models, the best-fitting Burr Type III model was used to derive the safety thresholds for HBCD. The acute safety threshold and the chronic safety threshold of HBCD for Chinese freshwater organisms were found to be 2.32 mg/L and 0.128 mg/L, respectively. Both values were verified using the methods of the Netherlands and the United States. HBCD was found to be less toxic than other widely used brominated flame retardants. The present results provide valuable information for revision of the water quality standard for HBCD in China. Copyright © 2017 Elsevier Inc. All rights reserved.

  20. Mirror instability near the threshold: Hybrid simulations

    NASA Astrophysics Data System (ADS)

    Hellinger, P.; Trávníček, P.; Passot, T.; Sulem, P.; Kuznetsov, E. A.; Califano, F.

    2007-12-01

    Nonlinear behavior of the mirror instability near the threshold is investigated using 1-D hybrid simulations. The simulations demonstrate the presence of an early phase where quasi-linear effects dominate [ Shapiro and Shevchenko, 1964]. The quasi-linear diffusion is however not the main saturation mechanism. A second phase is observed where the mirror mode is linearly stable (the stability is evaluated using the instantaneous ion distribution function) but where the instability nevertheless continues to develop, leading to nonlinear coherent structures in the form of magnetic humps. This regime is well modeled by a nonlinear equation for the magnetic field evolution, derived from a reductive perturbative expansion of the Vlasov-Maxwell equations [ Kuznetsov et al., 2007] with a phenomenological term which represents local variations of the ion Larmor radius. In contrast with previous models where saturation is due to the cooling of a population of trapped particles, the resulting equation correctly reproduces the development of magnetic humps from an initial noise. References Kuznetsov, E., T. Passot and P. L. Sulem (2007), Dynamical model for nonlinear mirror modes near threshold, Phys. Rev. Lett., 98, 235003. Shapiro, V. D., and V. I. Shevchenko (1964), Sov. JETP, 18, 1109.

  1. Cloud cover over the equatorial eastern Pacific derived from July 1983 International Satellite Cloud Climatology Project data using a hybrid bispectral threshold method

    NASA Technical Reports Server (NTRS)

    Minnis, Patrick; Harrison, Edwin F.; Gibson, Gary G.

    1987-01-01

    A set of visible and IR data obtained with GOES from July 17-31, 1983 is analyzed using a modified version of the hybrid bispectral threshold method developed by Minnis and Harrison (1984). This methodology can be divided into a set of procedures or optional techniques to determine the appropriate clear-sky temperature or IR threshold. The various optional techniques are described; the options are: standard, low-temperature limit, high-reflectance limit, low-reflectance limit, coldest pixel and thermal adjustment limit, IR-only low-cloud temperature limit, IR clear-sky limit, and IR overcast limit. Variations in the cloud parameters and the characteristics and diurnal cycles of trade cumulus and stratocumulus clouds over the eastern equatorial Pacific are examined. It is noted that the new method produces substantial changes in about one third of the cloud amount retrievals, and that low-cloud retrievals are affected most by the new constraints.

  2. Deriving flow directions for coarse-resolution (1-4 km) gridded hydrologic modeling

    NASA Astrophysics Data System (ADS)

    Reed, Seann M.

    2003-09-01

    The National Weather Service Hydrology Laboratory (NWS-HL) is currently testing a grid-based distributed hydrologic model at a resolution (4 km) commensurate with operational, radar-based precipitation products. To implement distributed routing algorithms in this framework, a flow direction must be assigned to each model cell. A new algorithm, referred to as cell outlet tracing with an area threshold (COTAT) has been developed to automatically, accurately, and efficiently assign flow directions to any coarse-resolution grid cells using information from any higher-resolution digital elevation model. Although similar to previously published algorithms, this approach offers some advantages. Use of an area threshold allows more control over the tendency for producing diagonal flow directions. Analyses of results at different output resolutions ranging from 300 m to 4000 m indicate that it is possible to choose an area threshold that will produce minimal differences in average network flow lengths across this range of scales. Flow direction grids at a 4 km resolution have been produced for the conterminous United States.

  3. Intensity Thresholds on Raw Acceleration Data: Euclidean Norm Minus One (ENMO) and Mean Amplitude Deviation (MAD) Approaches

    PubMed Central

    Bakrania, Kishan; Yates, Thomas; Rowlands, Alex V.; Esliger, Dale W.; Bunnewell, Sarah; Sanders, James; Davies, Melanie; Khunti, Kamlesh; Edwardson, Charlotte L.

    2016-01-01

    Objectives (1) To develop and internally-validate Euclidean Norm Minus One (ENMO) and Mean Amplitude Deviation (MAD) thresholds for separating sedentary behaviours from common light-intensity physical activities using raw acceleration data collected from both hip- and wrist-worn tri-axial accelerometers; and (2) to compare and evaluate the performances between the ENMO and MAD metrics. Methods Thirty-three adults [mean age (standard deviation (SD)) = 27.4 (5.9) years; mean BMI (SD) = 23.9 (3.7) kg/m2; 20 females (60.6%)] wore four accelerometers; an ActiGraph GT3X+ and a GENEActiv on the right hip; and an ActiGraph GT3X+ and a GENEActiv on the non-dominant wrist. Under laboratory-conditions, participants performed 16 different activities (11 sedentary behaviours and 5 light-intensity physical activities) for 5 minutes each. ENMO and MAD were computed from the raw acceleration data, and logistic regression and receiver-operating-characteristic (ROC) analyses were implemented to derive thresholds for activity discrimination. Areas under ROC curves (AUROC) were calculated to summarise performances and thresholds were assessed via executing leave-one-out-cross-validations. Results For both hip and wrist monitor placements, in comparison to the ActiGraph GT3X+ monitors, the ENMO and MAD values derived from the GENEActiv devices were observed to be slightly higher, particularly for the lower-intensity activities. Monitor-specific hip and wrist ENMO and MAD thresholds showed excellent ability for separating sedentary behaviours from motion-based light-intensity physical activities (in general, AUROCs >0.95), with validation indicating robustness. However, poor classification was experienced when attempting to isolate standing still from sedentary behaviours (in general, AUROCs <0.65). The ENMO and MAD metrics tended to perform similarly across activities and accelerometer brands. Conclusions Researchers can utilise these robust monitor-specific hip and wrist ENMO and MAD thresholds, in order to accurately separate sedentary behaviours from common motion-based light-intensity physical activities. However, caution should be taken if isolating sedentary behaviours from standing is of particular interest. PMID:27706241
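
    For reference, a hedged sketch of the two metrics is given below: ENMO is the per-sample Euclidean norm of the three axes minus 1 g with negative values truncated to zero, and MAD is the mean absolute deviation of the vector magnitude within an epoch; the synthetic signal, 100 Hz sampling rate, 5 s epochs, and the 0.03 g cut-point are assumptions, not the thresholds derived in the study.

      import numpy as np

      def enmo(acc, fs, epoch_s=5):
          """Per-epoch ENMO (in g) from an (n_samples, 3) array of accelerations."""
          vm = np.linalg.norm(acc, axis=1)
          per_sample = np.maximum(vm - 1.0, 0.0)             # subtract 1 g, truncate negatives
          n = (len(per_sample) // (fs * epoch_s)) * fs * epoch_s
          return per_sample[:n].reshape(-1, fs * epoch_s).mean(axis=1)

      def mad(acc, fs, epoch_s=5):
          """Per-epoch Mean Amplitude Deviation (in g) of the vector magnitude."""
          vm = np.linalg.norm(acc, axis=1)
          n = (len(vm) // (fs * epoch_s)) * fs * epoch_s
          epochs = vm[:n].reshape(-1, fs * epoch_s)
          return np.abs(epochs - epochs.mean(axis=1, keepdims=True)).mean(axis=1)

      rng = np.random.default_rng(0)
      fs = 100                                               # assumed 100 Hz sampling
      t = np.arange(60 * fs) / fs
      acc = np.column_stack([0.05 * np.sin(2 * np.pi * 1.5 * t),
                             0.05 * np.cos(2 * np.pi * 1.5 * t),
                             1.0 + 0.02 * rng.standard_normal(len(t))])  # gravity on the z-axis

      sedentary = enmo(acc, fs) < 0.03                       # hypothetical cut-point, not the paper's
      print(f"epochs classified sedentary: {sedentary.mean():.0%}")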

  4. Intensity Thresholds on Raw Acceleration Data: Euclidean Norm Minus One (ENMO) and Mean Amplitude Deviation (MAD) Approaches.

    PubMed

    Bakrania, Kishan; Yates, Thomas; Rowlands, Alex V; Esliger, Dale W; Bunnewell, Sarah; Sanders, James; Davies, Melanie; Khunti, Kamlesh; Edwardson, Charlotte L

    2016-01-01

    (1) To develop and internally validate Euclidean Norm Minus One (ENMO) and Mean Amplitude Deviation (MAD) thresholds for separating sedentary behaviours from common light-intensity physical activities using raw acceleration data collected from both hip- and wrist-worn tri-axial accelerometers; and (2) to compare and evaluate the performance of the ENMO and MAD metrics. Thirty-three adults [mean age (standard deviation (SD)) = 27.4 (5.9) years; mean BMI (SD) = 23.9 (3.7) kg/m2; 20 females (60.6%)] wore four accelerometers: an ActiGraph GT3X+ and a GENEActiv on the right hip, and an ActiGraph GT3X+ and a GENEActiv on the non-dominant wrist. Under laboratory conditions, participants performed 16 different activities (11 sedentary behaviours and 5 light-intensity physical activities) for 5 minutes each. ENMO and MAD were computed from the raw acceleration data, and logistic regression and receiver-operating-characteristic (ROC) analyses were implemented to derive thresholds for activity discrimination. Areas under ROC curves (AUROC) were calculated to summarise performances, and thresholds were assessed via leave-one-out cross-validation. For both hip and wrist monitor placements, in comparison to the ActiGraph GT3X+ monitors, the ENMO and MAD values derived from the GENEActiv devices were observed to be slightly higher, particularly for the lower-intensity activities. Monitor-specific hip and wrist ENMO and MAD thresholds showed excellent ability for separating sedentary behaviours from motion-based light-intensity physical activities (in general, AUROCs >0.95), with validation indicating robustness. However, poor classification was experienced when attempting to isolate standing still from sedentary behaviours (in general, AUROCs <0.65). The ENMO and MAD metrics tended to perform similarly across activities and accelerometer brands. Researchers can utilise these robust monitor-specific hip and wrist ENMO and MAD thresholds in order to accurately separate sedentary behaviours from common motion-based light-intensity physical activities. However, caution should be taken if isolating sedentary behaviours from standing is of particular interest.

  5. Chaotic Signal Denoising Based on Hierarchical Threshold Synchrosqueezed Wavelet Transform

    NASA Astrophysics Data System (ADS)

    Wang, Wen-Bo; Jing, Yun-yu; Zhao, Yan-chao; Zhang, Lian-Hua; Wang, Xiang-Li

    2017-12-01

    To overcome the shortcomings of the single-threshold synchrosqueezed wavelet transform (SWT) denoising method, an adaptive hierarchical-threshold SWT chaotic signal denoising method is proposed. First, a new SWT threshold function, twice continuously differentiable, is constructed based on Stein's unbiased risk estimation. Then, using the new threshold function, a thresholding procedure based on the minimum mean square error is implemented, and the optimal threshold estimate for each decomposition level in SWT chaotic denoising is obtained. Experimental results on a simulated chaotic signal and measured sunspot signals show that the proposed method filters the noise of the chaotic signal well and recovers the intrinsic chaotic characteristics of the original signal. Compared with the EEMD denoising method and the single-threshold SWT denoising method, the proposed method obtains better denoising results for chaotic signals.
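
    For orientation, a generic Python sketch of the classical SURE-based choice of a soft-threshold level for one level of (noise-normalized) wavelet coefficients; this is standard background on threshold selection by unbiased risk estimation, not the authors' hierarchical, twice-differentiable threshold function:

      import numpy as np

      def sure_threshold(coeffs, sigma=1.0):
          """Pick the soft-threshold value minimizing Stein's unbiased risk estimate,
          SURE(t) = n - 2*#{|w| <= t} + sum(min(|w|, t)^2), with |w| = |coeffs|/sigma."""
          w = np.abs(np.asarray(coeffs, dtype=float)) / sigma
          candidates = np.sort(w)
          risks = [w.size - 2 * np.sum(w <= t) + np.sum(np.minimum(w, t) ** 2) for t in candidates]
          return candidates[int(np.argmin(risks))] * sigma

      def soft_threshold(coeffs, t):
          """Classical soft-thresholding: shrink coefficients toward zero by t."""
          return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)

      # illustrative: one level of sparse coefficients observed in unit-variance noise
      rng = np.random.default_rng(5)
      clean = np.zeros(512); clean[:20] = rng.uniform(3.0, 6.0, 20)
      noisy = clean + rng.standard_normal(512)
      t = sure_threshold(noisy)
      print(t, np.mean((soft_threshold(noisy, t) - clean) ** 2))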

  6. Using machine learning to examine medication adherence thresholds and risk of hospitalization.

    PubMed

    Lo-Ciganic, Wei-Hsuan; Donohue, Julie M; Thorpe, Joshua M; Perera, Subashan; Thorpe, Carolyn T; Marcum, Zachary A; Gellad, Walid F

    2015-08-01

    Quality improvement efforts are frequently tied to patients achieving ≥80% medication adherence. However, there is little empirical evidence that this threshold optimally predicts important health outcomes. To apply machine learning to examine how adherence to oral hypoglycemic medications is associated with avoidance of hospitalizations, and to identify adherence thresholds for optimal discrimination of hospitalization risk. A retrospective cohort study of 33,130 non-dual-eligible Medicaid enrollees with type 2 diabetes. We randomly selected 90% of the cohort (training sample) to develop the prediction algorithm and used the remaining (testing sample) for validation. We applied random survival forests to identify predictors for hospitalization and fit survival trees to empirically derive adherence thresholds that best discriminate hospitalization risk, using the proportion of days covered (PDC). Time to first all-cause and diabetes-related hospitalization. The training and testing samples had similar characteristics (mean age, 48 y; 67% female; mean PDC=0.65). We identified 8 important predictors of all-cause hospitalizations (rank in order): prior hospitalizations/emergency department visit, number of prescriptions, diabetes complications, insulin use, PDC, number of prescribers, Elixhauser index, and eligibility category. The adherence thresholds most discriminating for risk of all-cause hospitalization varied from 46% to 94% according to patient health and medication complexity. PDC was not predictive of hospitalizations in the healthiest or most complex patient subgroups. Adherence thresholds most discriminating of hospitalization risk were not uniformly 80%. Machine-learning approaches may be valuable to identify appropriate patient-specific adherence thresholds for measuring quality of care and targeting nonadherent patients for intervention.
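
    The adherence exposure in this study is the proportion of days covered (PDC); below is a hedged Python sketch of one common way to compute it from fill dates and day supplies (handling of overlapping fills across drugs and of the measurement window may differ from the study's exact definition):

      from datetime import date, timedelta

      def proportion_of_days_covered(fills, period_start, period_end):
          """PDC = (number of distinct days covered by at least one fill) / (days in period).
          `fills` is a list of (fill_date, days_supply) tuples for one medication class."""
          period_days = (period_end - period_start).days + 1
          covered = set()
          for fill_date, days_supply in fills:
              for k in range(days_supply):
                  day = fill_date + timedelta(days=k)
                  if period_start <= day <= period_end:
                      covered.add(day)
          return len(covered) / period_days

      # example: two 30-day fills inside a 90-day window give a PDC of about 0.67
      fills = [(date(2015, 1, 1), 30), (date(2015, 2, 15), 30)]
      print(proportion_of_days_covered(fills, date(2015, 1, 1), date(2015, 3, 31)))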

  7. Health technology assessment and haemophilia.

    PubMed

    Farrugia, A; O'Mahony, B; Cassar, J

    2012-03-01

    Although the funding of rare diseases such as haemophilia in developing countries remains a low priority, pressures on the funding of haemophilia treatment are also emerging in developed economies affected by the global economic downturn and the other demands on health care budgets. This is leading advisory bodies and payers alike to explore the tools of Health Technology Assessment (HTA) in deriving recommendations for reimbursement policies. In particular, cost utility analysis (CUA), which derives costs per quality-adjusted life year (QALY) for different interventions, is being used to rank interventions in order of priority relative to a threshold cost per QALY. In these exercises, rare chronic disorders such as haemophilia emerge as particularly unattractive propositions for reimbursement, as the accepted methodology of deriving a CUA for, for example, the use of prophylaxis in haemophilia leads to a range of costs/QALY that exceeds the willingness-to-pay thresholds of most payers. In this commentary, we review the principles utilized in a recent systematic review of the use of haemophilia products carried out in Sweden as part of an HTA. We suggest that ranking haemophilia-related interventions alongside the standard interventions of therapeutics and public health in CUA comparisons is inappropriate. Given that haemophilia treatment is a form of blood replacement therapy, we propose that such comparisons should be made with the interventions of mainstream blood transfusion. We suggest that unequivocally effective treatments such as haemophilia therapies should be assessed differently from mainstream interventions, that new methodologies are required for these kinds of diseases, and that evidence of a societal willingness to support people with rare disorders needs to be recognized when reimbursement policies are developed. © 2012 Blackwell Publishing Ltd.

  8. The effect of signal duration on the underwater hearing thresholds of two harbor seals (Phoca vitulina) for single tonal signals between 0.2 and 40 kHz.

    PubMed

    Kastelein, Ronald A; Hoek, Lean; Wensveen, Paul J; Terhune, John M; de Jong, Christ A F

    2010-02-01

    The underwater hearing sensitivities of two 2-year-old female harbor seals were quantified in a pool built for acoustic research by using a behavioral psycho-acoustic technique. The animals were trained only to respond when they detected an acoustic signal ("go/no-go" response). Detection thresholds were obtained for pure tone signals (frequencies: 0.2-40 kHz; durations: 0.5-5000 ms, depending on the frequency; 59 frequency-duration combinations). Detection thresholds were quantified by varying the signal amplitude by the 1-up, 1-down staircase method, and were defined as the stimulus levels, resulting in a 50% detection rate. The hearing thresholds of the two seals were similar for all frequencies except for 40 kHz, for which the thresholds differed by, on average, 3.7 dB. There was an inverse relationship between the time constant (tau), derived from an exponential model of temporal integration, and the frequency [log(tau)=2.86-0.94 log(f);tau in ms and f in kHz]. Similarly, the thresholds increased when the pulse was shorter than approximately 780 cycles (independent of the frequency). For pulses shorter than the integration time, the thresholds increased by 9-16 dB per decade reduction in the duration or number of cycles in the pulse. The results of this study suggest that most published hearing thresholds

  9. Overcoming the effects of false positives and threshold bias in graph theoretical analyses of neuroimaging data.

    PubMed

    Drakesmith, M; Caeyenberghs, K; Dutt, A; Lewis, G; David, A S; Jones, D K

    2015-09-01

    Graph theory (GT) is a powerful framework for quantifying topological features of neuroimaging-derived functional and structural networks. However, false positive (FP) connections arise frequently and influence the inferred topology of networks. Thresholding is often used to overcome this problem, but an appropriate threshold often relies on a priori assumptions, which will alter inferred network topologies. Four common network metrics (global efficiency, mean clustering coefficient, mean betweenness and smallworldness) were tested using a model tractography dataset. It was found that all four network metrics were significantly affected even by just one FP. Results also show that thresholding effectively dampens the impact of FPs, but at the expense of adding significant bias to network metrics. In a larger number (n=248) of tractography datasets, statistics were computed across random group permutations for a range of thresholds, revealing that statistics for network metrics varied significantly more than for non-network metrics (i.e., number of streamlines and number of edges). Varying degrees of network atrophy were introduced artificially to half the datasets, to test sensitivity to genuine group differences. For some network metrics, this atrophy was detected as significant (p<0.05, determined using permutation testing) only across a limited range of thresholds. We propose a multi-threshold permutation correction (MTPC) method, based on the cluster-enhanced permutation correction approach, to identify sustained significant effects across clusters of thresholds. This approach minimises requirements to determine a single threshold a priori. We demonstrate improved sensitivity of MTPC-corrected metrics to genuine group effects compared to an existing approach and demonstrate the use of MTPC on a previously published network analysis of tractography data derived from a clinical population. In conclusion, we show that there are large biases and instability induced by thresholding, making statistical comparisons of network metrics difficult. However, by testing for effects across multiple thresholds using MTPC, true group differences can be robustly identified. Copyright © 2015. Published by Elsevier Inc.
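
    To illustrate the multi-threshold idea (without the permutation-correction step of MTPC), a hedged Python sketch that evaluates one network metric over a range of proportional edge-density thresholds; the connectivity matrix, density values, and use of networkx are illustrative assumptions, not the paper's pipeline:

      import numpy as np
      import networkx as nx

      def global_efficiency_across_thresholds(W, densities):
          """Threshold a weighted connectivity matrix W at several edge densities
          (proportion of strongest edges kept) and return global efficiency at each."""
          W = np.array(W, dtype=float)
          np.fill_diagonal(W, 0.0)
          weights = np.sort(W[np.triu_indices_from(W, k=1)])[::-1]   # strongest first
          effs = []
          for p in densities:
              cutoff = weights[max(int(p * len(weights)) - 1, 0)]
              A = (W >= cutoff).astype(int)
              np.fill_diagonal(A, 0)
              effs.append(nx.global_efficiency(nx.from_numpy_array(A)))
          return effs

      # toy example: a random symmetric "connectome" evaluated at 10%-50% edge density
      rng = np.random.default_rng(0)
      W = rng.random((20, 20)); W = (W + W.T) / 2
      print(global_efficiency_across_thresholds(W, [0.1, 0.2, 0.3, 0.4, 0.5]))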

  10. The perturbed Sparre Andersen model with a threshold dividend strategy

    NASA Astrophysics Data System (ADS)

    Gao, Heli; Yin, Chuancun

    2008-10-01

    In this paper, we consider a Sparre Andersen model perturbed by diffusion with generalized Erlang(n)-distributed inter-claim times and a threshold dividend strategy. Integro-differential equations with certain boundary conditions for the moment-generation function and the mth moment of the present value of all dividends until ruin are derived. We also derive integro-differential equations with boundary conditions for the Gerber-Shiu functions. The special case where the inter-claim times are Erlang(2) distributed and the claim size distribution is exponential is considered in some details.

  11. Identifying Thresholds for Ecosystem-Based Management

    PubMed Central

    Samhouri, Jameal F.; Levin, Phillip S.; Ainsworth, Cameron H.

    2010-01-01

    Background One of the greatest obstacles to moving ecosystem-based management (EBM) from concept to practice is the lack of a systematic approach to defining ecosystem-level decision criteria, or reference points that trigger management action. Methodology/Principal Findings To assist resource managers and policymakers in developing EBM decision criteria, we introduce a quantitative, transferable method for identifying utility thresholds. A utility threshold is the level of human-induced pressure (e.g., pollution) at which small changes produce substantial improvements toward the EBM goal of protecting an ecosystem's structural (e.g., diversity) and functional (e.g., resilience) attributes. The analytical approach is based on the detection of nonlinearities in relationships between ecosystem attributes and pressures. We illustrate the method with a hypothetical case study of (1) fishing and (2) nearshore habitat pressure using an empirically-validated marine ecosystem model for British Columbia, Canada, and derive numerical threshold values in terms of the density of two empirically-tractable indicator groups, sablefish and jellyfish. We also describe how to incorporate uncertainty into the estimation of utility thresholds and highlight their value in the context of understanding EBM trade-offs. Conclusions/Significance For any policy scenario, an understanding of utility thresholds provides insight into the amount and type of management intervention required to make significant progress toward improved ecosystem structure and function. The approach outlined in this paper can be applied in the context of single or multiple human-induced pressures, to any marine, freshwater, or terrestrial ecosystem, and should facilitate more effective management. PMID:20126647

  12. Spatial contrast sensitivity vision loss in children with cortical visual impairment.

    PubMed

    Good, William V; Hou, Chuan; Norcia, Anthony M

    2012-11-19

    Although cortical visual impairment (CVI) is the leading cause of bilateral vision impairment in children in Western countries, little is known about the effects of CVI on visual function. The aim of this study was to compare visual evoked potential measures of contrast sensitivity and grating acuity in children with CVI with those of age-matched typically developing controls. The swept parameter visual evoked potential (sVEP) was used to measure contrast sensitivity and grating acuity in 34 children with CVI at 5 months to 5 years of age and in 16 age-matched control children. Contrast thresholds and spatial frequency thresholds (grating acuities) were derived by extrapolating the tuning functions to zero amplitude. These thresholds and maximal suprathreshold response amplitudes were compared between groups. Among 34 children with CVI, 30 had measurable but reduced contrast sensitivity with a median threshold of 10.8% (range 5.0%-30.0% Michelson), and 32 had measurable but reduced grating acuity with median threshold 0.49 logMAR (9.8 c/deg, range 5-14 c/deg). These thresholds were significantly reduced, compared with age-matched control children. In addition, response amplitudes over the entire sweep range for both measures were significantly diminished in children with CVI compared with those of control children. Our results indicate that spatial contrast sensitivity and response amplitudes are strongly affected by CVI. The substantial degree of loss in contrast sensitivity suggests that contrast is a sensitive measure for evaluating vision deficits in patients with CVI.
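
    A hedged Python sketch of the threshold-extrapolation idea, in which a line fitted to the near-threshold portion of the response-amplitude function is extrapolated to zero amplitude; the regression over log contrast and the toy values are illustrative, not the study's exact sVEP algorithm:

      import numpy as np

      def extrapolate_threshold(stim, amplitude):
          """Fit amplitude vs. log10(stimulus) over the descending limb near threshold
          and return the stimulus value at which the fitted line reaches zero."""
          x = np.log10(stim)
          slope, intercept = np.polyfit(x, amplitude, 1)
          return 10 ** (-intercept / slope)

      # toy example: VEP amplitude (microvolts) falling as contrast (%) drops toward threshold
      contrast = np.array([40.0, 20.0, 10.0])
      amplitude = np.array([3.0, 2.0, 1.0])
      print(extrapolate_threshold(contrast, amplitude))   # about 5% contrast in this toy case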

  13. A study on the temperature dependence of the threshold switching characteristics of Ge2Sb2Te5

    NASA Astrophysics Data System (ADS)

    Lee, Suyoun; Jeong, Doo Seok; Jeong, Jeung-hyun; Zhe, Wu; Park, Young-Wook; Ahn, Hyung-Woo; Cheong, Byung-ki

    2010-01-01

    We investigated the temperature dependence of the threshold switching characteristics of a memory-type chalcogenide material, Ge2Sb2Te5. We found that the threshold voltage (Vth) decreased linearly with temperature, implying the existence of a critical conductivity of Ge2Sb2Te5 for its threshold switching. In addition, we investigated the effect of bias voltage and temperature on the delay time (tdel) of the threshold switching of Ge2Sb2Te5 and described the measured relationship by an analytic expression which we derived based on a physical model where thermally activated hopping is a dominant transport mechanism in the material.

  14. Choice of threshold line angle for binary phase-only filters

    NASA Astrophysics Data System (ADS)

    Vijaya Kumar, Bhagavatula; Hendrix, Charles D.

    1993-10-01

    The choice of threshold line angle (TLA) is an important issue in designing Binary Phase-Only Filters (BPOFs). In this paper, we derive expressions that explicitly relate the TLA to correlation peak intensity. We also show some examples that illustrate the effect of choosing the wrong TLA.

  15. Climatic thresholds for concentrations of minerals and heavy metals in Argentinean soybeans

    USDA-ARS?s Scientific Manuscript database

    Mineral undernourishment is of concern throughout the world, and plant-derived foods are considered a major dietary source contributing to adequate daily mineral intake. Soybeans and soy ingredients are consumed daily by humans and animals. In this study, we demonstrate the climate thresholds for op...

  16. Threshold law for positron-atom impact ionisation

    NASA Technical Reports Server (NTRS)

    Temkin, A.

    1982-01-01

    The threshold law for ionisation of atoms by positron impact is adduced in analogy with our approach to electron-atom ionization. It is concluded that the Coulomb-dipole region of the potential gives the essential part of the interaction in both cases and leads to the same kind of result: a modulated linear law. An additional process which enters positron ionization is positronium formation in the continuum, but that will not dominate the threshold yield. The result is in sharp contrast to the positron threshold law as recently derived by Klar on the basis of a Wannier-type analysis.

  17. Statistical properties of light from optical parametric oscillators

    NASA Astrophysics Data System (ADS)

    Vyas, Reeta; Singh, Surendra

    2009-12-01

    Coherence properties of light beams generated by optical parametric oscillators (OPOs) are discussed in the region of threshold. Analytic expressions, that are valid throughout the threshold region, for experimentally measurable quantities such as the mean and variance of photon number fluctuations, squeezing of field quadratures, and photon counting distributions are derived. These expressions describe non-Gaussian fluctuations of light in the region of threshold and reproduce Gaussian fluctuations below and above threshold, thus providing a bridge between below and above threshold regimes of operation. They are used to study the transformation of fluctuation properties of light as the OPOs make a transition from below to above threshold. The results for the OPOs are compared to those for the single-mode and two-mode lasers and their similarities and differences are discussed.

  18. Near-threshold equal-loudness contours for harbor seals (Phoca vitulina) derived from reaction times during underwater audiometry: a preliminary study.

    PubMed

    Kastelein, Ronald A; Wensveen, Paul J; Terhune, John M; de Jong, Christ A F

    2011-01-01

    Equal-loudness functions describe relationships between the frequencies of sounds and their perceived loudness. This pilot study investigated the possibility of deriving equal-loudness contours based on the assumption that sounds of equal perceived loudness elicit equal reaction times (RTs). During a psychoacoustic underwater hearing study, the responses of two young female harbor seals to tonal signals between 0.125 and 100 kHz were filmed. Frame-by-frame analysis was used to quantify RT (the time between the onset of the sound stimulus and the onset of movement of the seal away from the listening station). Near-threshold equal-latency contours, as surrogates for equal-loudness contours, were estimated from RT-level functions fitted to mean RT data. The closer the received sound pressure level was to the 50% detection hearing threshold, the more slowly the animals reacted to the signal (RT range: 188-982 ms). Equal-latency contours were calculated relative to the RTs shown by each seal at sound levels of 0, 10, and 20 dB above the detection threshold at 1 kHz. Fifty percent detection thresholds are obtained with well-trained subjects actively listening for faint familiar sounds. When calculating audibility ranges of sounds for harbor seals in nature, it may be appropriate to consider levels 20 dB above this threshold.

  19. The sensitivity of normal brain and intracranially implanted VX2 tumour to interstitial photodynamic therapy.

    PubMed Central

    Lilge, L.; Olivo, M. C.; Schatz, S. W.; MaGuire, J. A.; Patterson, M. S.; Wilson, B. C.

    1996-01-01

    The applicability and limitations of a photodynamic threshold model, used to describe quantitatively the in vivo response of tissues to photodynamic therapy, are currently being investigated in a variety of normal and malignant tumour tissues. The model states that tissue necrosis occurs when the number of photons absorbed by the photosensitiser per unit tissue volume exceeds a threshold. New Zealand White rabbits were sensitised with porphyrin-based photosensitisers. Normal brain or intracranially implanted VX2 tumours were illuminated via an optical fibre placed into the tissue at craniotomy. The light fluence distribution in the tissue was measured by multiple interstitial optical fibre detectors. The tissue concentration of the photosensitiser was determined post mortem by absorption spectroscopy. The derived photodynamic threshold values for normal brain are significantly lower than for VX2 tumour for all photosensitisers examined. Neuronal damage is evident beyond the zone of frank necrosis. For Photofrin the threshold decreases with time delay between photosensitiser administration and light treatment. No significant difference in threshold is found between Photofrin and haematoporphyrin derivative. The threshold in normal brain (grey matter) is lowest for sensitisation by 5 delta-aminolaevulinic acid. The results confirm the very high sensitivity of normal brain to porphyrin photodynamic therapy and show the importance of in situ light fluence monitoring during photodynamic irradiation. PMID:8562339

  20. Cartilage quantification using contrast-enhanced MRI in the wrist of rheumatoid arthritis: cartilage loss is associated with bone marrow edema.

    PubMed

    Fujimori, Motoshi; Nakamura, Satoko; Hasegawa, Kiminori; Ikeno, Kunihiro; Ichikawa, Shota; Sutherland, Kenneth; Kamishima, Tamotsu

    2017-08-01

    To quantify wrist cartilage using contrast MRI and compare it with the extent of adjacent synovitis and bone marrow edema (BME) in patients with rheumatoid arthritis (RA). 18 patients with RA underwent post-contrast fat-suppressed T1 weighted coronal imaging. Cartilage area at the centre of the scaphoid-capitate (SC) and radius-scaphoid (RS) joints was measured by in-house developed software. We defined cartilage as the pixels with signal intensity between two thresholds (lower: 0.4, 0.5 and 0.6 times the muscle signal; upper: 0.9, 1.0, 1.1, 1.2 and 1.3 times the muscle signal). We investigated the association of cartilage loss with synovitis and BME scores derived from the RA MRI scoring system. Cartilage area was correlated with BME score when the thresholds were adequately set, with the lower threshold at 0.6 times the muscle signal and the upper threshold at 1.2 times the muscle signal, for both the SC (rs = -0.469, p < 0.05) and RS (rs = -0.486, p < 0.05) joints, while it showed no significant correlation with synovitis score at any threshold. Our software can quantify cartilage in the wrist, and BME is associated with cartilage loss in patients with RA. Advances in knowledge: Our software can quantify cartilage using conventional MR images of the wrist. BME is associated with cartilage loss in RA patients.
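
    A minimal Python sketch of the two-threshold pixel classification described, in which cartilage is taken as pixels whose signal lies between lower and upper multiples of the mean muscle signal; the pixel spacing and example image are illustrative assumptions, not parameters from the study:

      import numpy as np

      def cartilage_mask(image, muscle_mean, lower=0.6, upper=1.2):
          """Binary mask of pixels with intensity between lower*muscle and upper*muscle,
          the threshold pair reported above as correlating best with the BME score."""
          return (image >= lower * muscle_mean) & (image <= upper * muscle_mean)

      def cartilage_area_mm2(mask, pixel_spacing_mm=(0.3, 0.3)):
          """Cartilage area as pixel count times in-plane pixel area (spacing is illustrative)."""
          return mask.sum() * pixel_spacing_mm[0] * pixel_spacing_mm[1]

      # usage with a post-contrast slice and the mean signal of a muscle ROI (both illustrative)
      img = np.random.rand(256, 256) * 1000.0
      print(cartilage_area_mm2(cartilage_mask(img, muscle_mean=400.0)))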

  1. The FY 1980 Department of Defense Program for Research, Development, and Acquisition

    DTIC Science & Technology

    1979-02-01

    materiel. Up to a point, superior performance is an offset to this quantitative disadvantage. Lanchester’s theory of warfare derived simplified relations...intermediate ranges. Underground Test. The next scheduled underground test (UGT), MINERS IRON, in FY 1980, will provide engineering and design data on...methods of discriminating between UGTs and earthquakes, and address U.S. capabilities to monitor both the existing Threshold Test Ban Treaty and the

  2. A six-year record of volcanic ash detection with Envisat MIPAS

    NASA Astrophysics Data System (ADS)

    Griessbach, S.; Hoffmann, L.; von Hobe, M.; Müller, R.; Spang, R.

    2012-04-01

    Volcanic ash particles have an impact on the Earth's radiation budget and pose a severe danger to air traffic. Therefore, the ability to detect and characterize volcanic ash layers on a global and altitude-dependent scale is essential. The Michelson Interferometer for Passive Atmospheric Sounding (MIPAS) on-board ESA's Envisat is mainly used for measurements of vertical profiles of atmospheric trace gases. It is also very sensitive to cloud and aerosol particles. We developed a fast, simple, and reliable method to detect volcanic ash using MIPAS spectra. From calculations of volcanic ash and ice particle optical properties, such as extinction coefficients and single scattering albedos, as well as simulated MIPAS radiance spectra, we derived two optimal micro windows at 10.5 and 12.1 μm to detect volcanic ash. The calculations were performed with the JUelich RApid Spectral Simulation Code (JURASSIC), which includes a scattering module. Our method applies two radiance thresholds to detect volcanic ash. The first one is derived from a statistical analysis of six years of measured MIPAS radiances in the selected spectral windows. This statistical threshold accounts only for pure volcanic ash detections. The second threshold is derived from simulations of MIPAS radiances with JURASSIC for a broad range of atmospheric conditions and tangent altitudes for volcanic ash and ice particles. The second threshold allows more volcanic ash detections, because it also accounts for mixtures of ice and volcanic ash particles within the instrument's field of view. With the new method, major eruptions (e.g. from Chaiten, Okmok, Kasatochi, Sarychev, Eyjafjallajökull, Merapi, Grímsvötn, Puyehue-Cordon Caulle, Nabro) as well as several smaller eruptions at mid-latitudes and in polar regions between 2006 and 2011 were clearly identified in the MIPAS data. Trajectory calculations using the Chemical Lagrangian Model of the Stratosphere (CLaMS) are used to locate a volcanic eruption for each detection. In a case study of the 2011 eruption of the Chilean volcano Puyehue-Cordon Caulle we show how the volcanic ash spreads over the complete southern hemisphere mid-latitudes, is diluted, and descends slowly with time. Ash is detected up to two months after the first eruption.

  3. Deriving Snow-Cover Depletion Curves for Different Spatial Scales from Remote Sensing and Snow Telemetry Data

    NASA Technical Reports Server (NTRS)

    Fassnacht, Steven R.; Sexstone, Graham A.; Kashipazha, Amir H.; Lopez-Moreno, Juan Ignacio; Jasinski, Michael F.; Kampf, Stephanie K.; Von Thaden, Benjamin C.

    2015-01-01

    During the melting of a snowpack, snow water equivalent (SWE) can be correlated to snow-covered area (SCA) once snow-free areas appear, which is when SCA begins to decrease below 100%. This amount of SWE is called the threshold SWE. Daily SWE data from snow telemetry stations were related to SCA derived from moderate-resolution imaging spectroradiometer (MODIS) images to produce snow-cover depletion curves. The snow depletion curves were created for an 80,000 sq km domain across southern Wyoming and northern Colorado encompassing 54 snow telemetry stations. Eight yearly snow depletion curves were compared, and it is shown that the slope of each is a function of the amount of snow received. Snow-cover depletion curves were also derived for all the individual stations, for which the threshold SWE could be estimated from peak SWE and the topography around each station. A station's peak SWE was much more important than the main topographic variables, which included location, elevation, slope, and modelled clear-sky solar radiation. The threshold SWE mostly showed inter-annual consistency.

  4. Delineating high-density areas in spatial Poisson fields from strip-transect sampling using indicator geostatistics: application to unexploded ordnance removal.

    PubMed

    Saito, Hirotaka; McKenna, Sean A

    2007-07-01

    An approach for delineating high anomaly density areas within a mixture of two or more spatial Poisson fields based on limited sample data collected along strip transects was developed. All sampled anomalies were transformed to anomaly count data and indicator kriging was used to estimate the probability of exceeding a threshold value derived from the cdf of the background homogeneous Poisson field. The threshold value was determined so that the delineation of high-density areas was optimized. Additionally, a low-pass filter was applied to the transect data to enhance such segmentation. Example calculations were completed using a controlled military model site, in which accurate delineation of clusters of unexploded ordnance (UXO) was required for site cleanup.

  5. Blister formation at subcritical doses in tungsten irradiated by MeV protons

    NASA Astrophysics Data System (ADS)

    Gavish Segev, I.; Yahel, E.; Silverman, I.; Makov, G.

    2017-12-01

    The material response of tungsten to irradiation by MeV protons has been studied experimentally, in particular with respect to bubble and blister formation. Tungsten samples were irradiated by 2.2 MeV protons at the Soreq Applied Research Accelerator Facility (SARAF) to doses of the order of 10¹⁷ protons/cm², which are below the reported critical threshold for blister formation derived from keV-range irradiation studies. Large, well-developed blisters are observed, indicating that for MeV-range protons the critical threshold is at least an order of magnitude lower than the lowest value reported previously. The effects of fluence, flux, and corresponding temperature on the distribution and characteristics of the obtained blisters were studied. FIB cross sections of several blisters revealed their depth and structure.

  6. Development of ecological indicator guilds for land management

    USGS Publications Warehouse

    Krzysik, A.J.; Balbach, H.E.; Duda, J.J.; Emlen, J.M.; Freeman, D.C.; Graham, J.H.; Kovacic, D.A.; Smith, L.M.; Zak, J.C.

    2005-01-01

    Agency land-use must be efficiently and cost-effectively monitored to assess conditions and trends in ecosystem processes and natural resources relevant to mission requirements and legal mandates. Ecological Indicators represent important land management tools for tracking ecological changes and preventing irreversible environmental damage in disturbed landscapes. The overall objective of the research was to develop both individual and integrated sets (i.e., statistically derived guilds) of Ecological Indicators to: quantify habitat conditions and trends, track and monitor ecological changes, provide early warning or threshold detection, and provide guidance for land managers. The derivation of Ecological Indicators was based on statistical criteria, ecosystem relevance, reliability and robustness, economy and ease of use for land managers, multi-scale performance, and stress response criteria. The basis for the development of statistically based Ecological Indicators was the identification of ecosystem metrics that analytically tracked a landscape disturbance gradient.

  7. Simultaneous determination of estrogenic odorant alkylphenols, chlorophenols, and their derivatives in water using online headspace solid phase microextraction coupled with gas chromatography-mass spectrometry.

    PubMed

    Yuan, Su-Fen; Liu, Ze-Hua; Lian, Hai-Xian; Yang, Chuangtao; Lin, Qing; Yin, Hua; Dang, Zhi

    2016-10-01

    A simple online headspace solid-phase microextraction (HS-SPME) method coupled with gas chromatography-mass spectrometry (GC-MS) was developed for simultaneous determination of trace amounts of nine estrogenic odorant alkylphenols and chlorophenols and their derivatives in water samples. The extraction conditions of HS-SPME were optimized, including fiber selection, extraction temperature, extraction time, and salt concentration. Results showed that the divinylbenzene/Carboxen/polydimethylsiloxane (DVB/CAR/PDMS) fiber was the most appropriate of the three selected commercial fibers, and the optimal extraction temperature, time, and salt concentration were 70 °C, 30 min, and 0.25 g/mL, respectively. The developed method was validated and showed good linearity (R² > 0.989), low limits of detection (LOD, 0.002-0.5 μg/L), and excellent recoveries (76-126%) with low relative standard deviation (RSD, 0.7-12.9%). The developed method was finally applied to two surface water samples, and some of these target compounds were detected. All detected compounds were below their odor thresholds, except for 2,4,6-TCAS and 2,4,6-TBAS, whose concentrations were near their odor thresholds. However, in the two surface water samples, these detected compounds contributed a certain amount of estrogenicity, which suggests that more attention should be paid to the issue of estrogenicity rather than to the odor problem.

  8. [Determination of the anaerobic threshold by the rate of ventilation and cardio interval variability].

    PubMed

    Seluianov, V N; Kalinin, E M; Pak, G D; Maevskaia, V I; Konrad, A H

    2011-01-01

    The aim of this work was to develop methods for determining the anaerobic threshold from the rate of ventilation and from cardio interval variability during tests with stepwise increasing load on a cycle ergometer and a treadmill. In the first phase, a method for determining the anaerobic threshold from lung ventilation was developed. 49 highly skilled skiers took part in the experiment; they performed a treadmill ski-walking test with poles in which the slope was gradually increased from 0 to 25 degrees, by one degree every minute. In the second phase, a method for determining the anaerobic threshold from the dynamics of cardio interval variability during the test was developed. This part of the study included 86 athletes of different sports specialties who pedalled on a "Monarch" cycle ergometer. Initial output was 25 W and power was increased by 25 W every 2 min; the pace was kept steady at 75 rev/min. Pulmonary ventilation and oxygen and carbon dioxide content were measured with a COSMED K4 gas analyzer. Arterial blood was sampled from the ear lobe or finger, and blood lactate concentration was determined using an "Akusport" instrument. RR intervals were recorded with a Polar s810i heart rate monitor. As a result, it was shown that the graphical method for determining the onset of the ventilatory anaerobic threshold (VAnP) coincides with a blood lactate accumulation of 3.8 +/- 0.1 mmol/l when testing on the treadmill and 4.1 +/- 0.6 mmol/l on the cycle ergometer. For the relationship between oxygen consumption at VAnP and the dispersion of cardio intervals (SD1), the following regression equation was derived: VO2AnT = 0.35 + 0.01 SD1W + 0.0016 SD1HR + 0.106 SD1(ms), l/min (R = 0.98, estimation error 0.26 l/min, p < 0.001), where W is power (W), HR is heart rate (beats/min), and SD1 is the cardio interval dispersion (ms) at the moment the cardio interval threshold is registered.

  9. Approaches for derivation of environmental quality criteria for substances applied in risk assessment of discharges from offshore drilling operations.

    PubMed

    Altin, Dag; Frost, Tone Karin; Nilssen, Ingunn

    2008-04-01

    In order to achieve the offshore petroleum industry's "zero harm" goal for the environment, the environmental impact factor for drilling discharges was developed as a tool to identify and quantify the environmental risks associated with disposal of drilling discharges to the marine environment. As an initial step in this work, the main categories of substances associated with drilling discharges and assumed to contribute to toxic or non-toxic stress were identified and evaluated for inclusion in the risk assessment. The selection was based on the known toxicological properties of the substances, or the total amount discharged together with their potential for accumulation in the water column or sediments to levels that could be expected to cause toxic or non-toxic stress to the biota. Based on these criteria, three categories of chemicals were identified for risk assessment in the water column and sediments: natural organic substances, metals, and drilling fluid chemicals. Several approaches for deriving environmentally safe threshold concentrations as predicted no-effect concentrations were evaluated in the process. For the water column, consensus was reached on using the species sensitivity distribution approach for metals and the assessment factor approach for natural organic substances and added drilling chemicals. For the sediments, the equilibrium partitioning approach was selected for all three categories of chemicals. The theoretically derived sediment quality criteria were compared to field-derived threshold effect values based on statistical approaches applied to sediment monitoring data from the Norwegian Continental Shelf. The basis for derivation of predicted no-effect concentration values for drilling discharges should be consistent with the principles of environmental risk assessment as described in the Technical Guidance Document on Risk Assessment issued by the European Union.
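
    As an illustration of the species sensitivity distribution approach mentioned for metals, a hedged Python sketch that fits a log-normal SSD to chronic no-effect endpoints and returns the HC5 (the concentration expected to protect 95% of species), optionally divided by an assessment factor; the endpoint values are invented for the example:

      import numpy as np
      from scipy import stats

      def ssd_hc5(endpoints_ug_per_l, assessment_factor=1.0):
          """Fit a log-normal species sensitivity distribution and return the 5th
          percentile (HC5) divided by an assessment factor, as a PNEC-type threshold."""
          log_vals = np.log10(endpoints_ug_per_l)
          mu, sigma = np.mean(log_vals), np.std(log_vals, ddof=1)
          hc5 = 10 ** stats.norm.ppf(0.05, loc=mu, scale=sigma)
          return hc5 / assessment_factor

      # illustrative chronic NOECs for one metal across eight species (ug/L)
      noecs = [12.0, 25.0, 40.0, 55.0, 80.0, 120.0, 200.0, 350.0]
      print(ssd_hc5(noecs))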

  10. Threshold exceedance risk assessment in complex space-time systems

    NASA Astrophysics Data System (ADS)

    Angulo, José M.; Madrid, Ana E.; Romero, José L.

    2015-04-01

    Environmental and health impact risk assessment studies most often involve analysis and characterization of complex spatio-temporal dynamics. Recent developments in this context address, among other objectives, the proper representation of structural heterogeneities, heavy-tailed processes, long-range dependence, intermittency, scaling behavior, etc. Extremal behaviour related to spatial threshold exceedances can be described in terms of geometrical characteristics and distribution patterns of excursion sets, which are the basis for construction of risk-related quantities, such as in the case of the evolutionary study of 'hotspots' and long-term indicators of occurrence of extremal episodes. Derivation of flexible techniques, suitable both for application under general conditions and for the interpretation of singularities, is important in practice. Modern risk theory, a developing discipline motivated by the need to establish solid general mathematical-probabilistic foundations for rigorous definition and characterization of risk measures, has led to the introduction of a variety of classes and families, ranging from some conceptually inspired by specific fields of applications, to some intended to provide generality and flexibility to risk analysts under parametric specifications, etc. Quantile-based risk measures, such as Value-at-Risk (VaR), Average Value-at-Risk (AVaR), and their generalization to spectral measures, are of particular interest for assessment under very general conditions. In this work, we study the application of quantile-based risk measures in the spatio-temporal context in relation to certain geometrical characteristics of spatial threshold exceedance sets. In particular, we establish a closed-form relationship between VaR, AVaR, and the expected value of threshold exceedance areas and excess volumes. Conditional simulation allows, by means of empirical global and local spatial cumulative distributions, the derivation of various statistics of practical interest and the subsequent construction of dynamic risk maps. Further, we study the implementation of static and dynamic spatial deformation under this setup, which is meaningful, among other aspects, for the incorporation of heterogeneities and/or covariate effects, or the consideration of external factors in risk measurement. We illustrate this approach through environmental and health applications. This work is partially supported by grant MTM2012-32666 of the Spanish Ministry of Economy and Competitiveness (co-financed by FEDER).
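
    A hedged Python sketch of the empirical quantile-based risk measures discussed, applied to simulated threshold-exceedance areas: VaR as a quantile of the loss variable and AVaR (expected shortfall) as the mean loss in the tail beyond it; the simulated sample is illustrative only:

      import numpy as np

      def value_at_risk(sample, alpha=0.95):
          """Empirical VaR_alpha: the alpha-quantile of the loss variable."""
          return np.quantile(sample, alpha)

      def average_value_at_risk(sample, alpha=0.95):
          """Empirical AVaR_alpha: the mean of losses at or above VaR_alpha."""
          var = value_at_risk(sample, alpha)
          return sample[sample >= var].mean()

      # illustrative: exceedance areas (km^2) from conditional simulations of a random field
      rng = np.random.default_rng(1)
      areas = rng.lognormal(mean=2.0, sigma=0.8, size=10_000)
      print(value_at_risk(areas), average_value_at_risk(areas))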

  11. Threshold and non-threshold chemical carcinogens: A survey of the present regulatory landscape.

    PubMed

    Bevan, Ruth J; Harrison, Paul T C

    2017-08-01

    For the proper regulation of a carcinogenic material it is necessary to fully understand its mode of action, and in particular whether it demonstrates a threshold of effect. This paper explores our present understanding of carcinogenicity and the mechanisms underlying the carcinogenic response. The concepts of genotoxic and non-genotoxic and threshold and non-threshold carcinogens are fully described. We provide summary tables of the types of cancer considered to be associated with exposure to a number of carcinogens and the available evidence relating to whether carcinogenicity occurs through a threshold or non-threshold mechanism. In light of these observations, we consider how different regulatory bodies approach the question of chemical carcinogenesis, looking in particular at the definitions and methodologies used to derive Occupational Exposure Levels (OELs) for carcinogens. We conclude that unless proper differentiation is made between threshold and non-threshold carcinogens, inappropriate risk management measures may be put in place, which may also lead to difficulties in translating carcinogenicity research findings into appropriate health policies. We recommend that clear differentiation between threshold and non-threshold carcinogens should be made by all expert groups and regulatory bodies dealing with carcinogen classification and risk assessment. Copyright © 2017 Elsevier Inc. All rights reserved.

  12. Insights from internet-based remote intrathoracic impedance monitoring as part of a heart failure disease management program.

    PubMed

    Mullens, Wilfried; Oliveira, Leonardo P J; Verga, Tanya; Wilkoff, Bruce L; Tang, Wai Hong Wilson

    2010-01-01

    Changes in intrathoracic impedance (Z) leading to crossing of a derived fluid index (FI) threshold have been associated with heart failure (HF) hospitalization. The authors developed a remote monitoring program as part of HF disease management and prospectively examined the feasibility and resource utilization of monitoring individuals with an implanted device capable of measuring Z. An HF nurse analyzed all transmitted data daily, as they were routinely uploaded as part of quarterly remote device monitoring, and called the patient if the FI crossed the threshold (arbitrarily defined at 60 Ω) to identify clinically relevant events (CREs) that occurred during this period (e.g., worsening dyspnea or an increase in edema or weight). A total of 400 uploads were completed during the 4-month study period. During this period, 34 patients (18%) had an FI threshold crossing, averaging 0.52 FI threshold crossings per patient-year. Thirty-two of the 34 patients contacted by telephone (94%) with an FI threshold crossing had evidence of CREs during this period. However, only 6 (18%) had HF hospitalizations, 19 (56%) had reported changes in HF therapy, and 13 (38%) reported drug and/or dietary plan nonadherence. The average data analysis time required was 30 min daily when focusing on those with FI threshold crossing, averaging 8 uploads for review per working day and 5 telephone follow-ups per week. Our pilot observations suggested that Internet-based remote monitoring of Z trends from existing device interrogation uploads is feasible as part of a daily routine of HF disease management. © 2010 Wiley Periodicals, Inc.

  13. Comparison of singlet oxygen threshold dose for PDT.

    PubMed

    Zhu, Timothy C; Liu, Baochang; Kim, Michele M; McMillan, Dayton; Liang, Xing; Finlay, Jarod C; Busch, Theresa M

    2014-02-01

    Macroscopic modeling of singlet oxygen (1O2) is of particular interest because it is the major cytotoxic agent causing biological effects for type II photosensitizers during PDT. We have developed a macroscopic model to calculate the reacted singlet oxygen concentration ([1O2]rx) for PDT. An in-vivo RIF tumor mouse model is used to correlate the necrosis depth to the calculation based on explicit PDT dosimetry of light fluence distribution, tissue optical properties, and photosensitizer concentrations. Inputs to the model include 4 photosensitizer-specific photochemical parameters along with the apparent singlet oxygen threshold concentration. Photosensitizer-specific model parameters are determined for several type II photosensitizers (Photofrin, BPD, and HPPH). The singlet oxygen threshold concentration is approximately 0.41-0.56 mM for all three photosensitizers studied, assuming that the fraction of singlet oxygen generated that interacts with the cell is f = 1. In comparison, the value derived from other in-vivo mouse studies is 0.4 mM for mTHPC. However, the singlet oxygen threshold doses were reported to be 7.9 and 12.1 mM for a multicell in-vitro EMT6/Ro spheroid model for mTHPC and Photofrin PDT, respectively. The sensitivity of the threshold singlet oxygen dose for our experiment is examined. The possible influence of vascular vs. apoptotic cell killing mechanisms on the singlet oxygen threshold dose is discussed using BPD with different drug-light intervals (3 h vs. 15 min). The observed discrepancies between different experiments warrant further investigation to explain the cause of the difference.

  14. Comparison of singlet oxygen threshold dose for PDT

    PubMed Central

    Zhu, Timothy C; Liu, Baochang; Kim, Michele M.; McMillan, Dayton; Liang, Xing; Finlay, Jarod C.; Busch, Theresa M.

    2015-01-01

    Macroscopic modeling of singlet oxygen (1O2) is of particular interest because it is the major cytotoxic agent causing biological effects for type II photosensitizers during PDT. We have developed a macroscopic model to calculate the reacted singlet oxygen concentration ([1O2]rx) for PDT. An in-vivo RIF tumor mouse model is used to correlate the necrosis depth to the calculation based on explicit PDT dosimetry of light fluence distribution, tissue optical properties, and photosensitizer concentrations. Inputs to the model include 4 photosensitizer-specific photochemical parameters along with the apparent singlet oxygen threshold concentration. Photosensitizer-specific model parameters are determined for several type II photosensitizers (Photofrin, BPD, and HPPH). The singlet oxygen threshold concentration is approximately 0.41-0.56 mM for all three photosensitizers studied, assuming that the fraction of singlet oxygen generated that interacts with the cell is (f = 1). In comparison, the value derived from other in-vivo mouse studies is 0.4 mM for mTHPC. However, the singlet oxygen threshold doses were reported to be 7.9 and 12.1 mM for a multicell in-vitro EMT6/Ro spheroid model for mTHPC and Photofrin PDT, respectively. The sensitivity of the threshold singlet oxygen dose for our experiment is examined. The possible influence of vascular vs. apoptotic cell killing mechanisms on the singlet oxygen threshold dose is discussed using BPD with different drug-light intervals (3 h vs. 15 min). The observed discrepancies between different experiments warrant further investigation to explain the cause of the difference. PMID:25999651

  15. Controlling Contagion Processes in Activity Driven Networks

    NASA Astrophysics Data System (ADS)

    Liu, Suyu; Perra, Nicola; Karsai, Márton; Vespignani, Alessandro

    2014-03-01

    The vast majority of strategies aimed at controlling contagion processes on networks consider the connectivity pattern of the system either quenched or annealed. However, in the real world, many networks are highly dynamical and evolve, in time, concurrently with the contagion process. Here, we derive an analytical framework for the study of control strategies specifically devised for a class of time-varying networks, namely activity-driven networks. We develop a block variable mean-field approach that allows the derivation of the equations describing the coevolution of the contagion process and the network dynamic. We derive the critical immunization threshold and assess the effectiveness of three different control strategies. Finally, we validate the theoretical picture by simulating numerically the spreading process and control strategies in both synthetic networks and a large-scale, real-world, mobile telephone call data set.

  16. Analytical Computation of the Epidemic Threshold on Temporal Networks

    NASA Astrophysics Data System (ADS)

    Valdano, Eugenio; Ferreri, Luca; Poletto, Chiara; Colizza, Vittoria

    2015-04-01

    The time variation of contacts in a networked system may fundamentally alter the properties of spreading processes and affect the condition for large-scale propagation, as encoded in the epidemic threshold. Despite the great interest in the problem for the physics, applied mathematics, computer science, and epidemiology communities, a full theoretical understanding is still missing and currently limited to the cases where the time-scale separation holds between spreading and network dynamics or to specific temporal network models. We consider a Markov chain description of the susceptible-infectious-susceptible process on an arbitrary temporal network. By adopting a multilayer perspective, we develop a general analytical derivation of the epidemic threshold in terms of the spectral radius of a matrix that encodes both network structure and disease dynamics. The accuracy of the approach is confirmed on a set of temporal models and empirical networks and against numerical results. In addition, we explore how the threshold changes when varying the overall time of observation of the temporal network, so as to provide insights on the optimal time window for data collection of empirical temporal networked systems. Our framework is of both fundamental and practical interest, as it offers novel understanding of the interplay between temporal networks and spreading dynamics.
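
    A hedged Python sketch of the propagator idea for a discrete-time SIS process on a temporal network: with per-contact transmission probability lam and recovery probability mu, the threshold is located where the spectral radius of the product over snapshots of (1 - mu)I + lam*A_t reaches one; the random snapshots below are placeholders, not an empirical network:

      import numpy as np

      def spectral_radius_propagator(snapshots, lam, mu):
          """Spectral radius of prod_t [(1 - mu)I + lam * A_t] for SIS on a temporal network."""
          n = snapshots[0].shape[0]
          P = np.eye(n)
          for A in snapshots:
              P = ((1.0 - mu) * np.eye(n) + lam * A) @ P
          return max(abs(np.linalg.eigvals(P)))

      def critical_lambda(snapshots, mu, lo=0.0, hi=1.0, tol=1e-4):
          """Bisection on lam until the propagator's spectral radius crosses one."""
          while hi - lo > tol:
              mid = 0.5 * (lo + hi)
              if spectral_radius_propagator(snapshots, mid, mu) < 1.0:
                  lo = mid
              else:
                  hi = mid
          return 0.5 * (lo + hi)

      # toy temporal network: 20 random undirected snapshots on 50 nodes (placeholders)
      rng = np.random.default_rng(2)
      snaps = []
      for _ in range(20):
          M = rng.random((50, 50))
          A = ((M + M.T) > 1.6).astype(float)
          np.fill_diagonal(A, 0.0)
          snaps.append(A)
      print(critical_lambda(snaps, mu=0.5))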

  17. Tetramethylammonium for in vivo marking of the cross-sectional area of the scala media in the guinea pig cochlea.

    PubMed

    Salt, A N; DeMott, J

    1992-01-01

    A physiologic technique was developed to measure endolymphatic cross-sectional area in vivo using tetramethylammonium (TMA) as a volume marker. The technique was evaluated in guinea pigs as an animal model. In the method, the cochlea was exposed surgically and TMA was injected into endolymph of the second turn at a constant rate by iontophoresis. The concentration of TMA was monitored during and after the injection using ion-selective electrodes. Cross-section estimates derived from the TMA concentration measurements were compared in normal animals and animals in which endolymphatic hydrops had been induced by ablation of the endolymphatic duct and sac 8 weeks earlier. The method demonstrated a mean increase in cross-sectional area of 258% in the hydropic group. Individually measured area values were compared with action potential threshold shifts and the magnitude of the endocochlear potential (EP). Hydropic animals typically showed an increase in threshold to 2 kHz stimuli and a decrease in EP. However, the degree of threshold shift or EP decrease did not correlate well with the degree of hydrops present.

  18. A critique of the use of indicator-species scores for identifying thresholds in species responses

    USGS Publications Warehouse

    Cuffney, Thomas F.; Qian, Song S.

    2013-01-01

    Identification of ecological thresholds is important both for theoretical and applied ecology. Recently, Baker and King (2010, King and Baker 2010) proposed a method, threshold indicator analysis (TITAN), to calculate species and community thresholds based on indicator species scores adapted from Dufrêne and Legendre (1997). We tested the ability of TITAN to detect thresholds using models with (broken-stick, disjointed broken-stick, dose-response, step-function, Gaussian) and without (linear) definitive thresholds. TITAN accurately and consistently detected thresholds in step-function models, but not in models characterized by abrupt changes in response slopes or response direction. Threshold detection in TITAN was very sensitive to the distribution of 0 values, which caused TITAN to identify thresholds associated with relatively small differences in the distribution of 0 values while ignoring thresholds associated with large changes in abundance. Threshold identification and tests of statistical significance were based on the same data permutations, resulting in inflated estimates of statistical significance. Application of bootstrapping to the split-point problem that underlies TITAN led to underestimates of the confidence intervals of thresholds. Bias in the derivation of the z-scores used to identify TITAN thresholds and skewness in the distribution of data along the gradient produced TITAN thresholds that were much more similar to one another than the actual thresholds were. This tendency may account for the synchronicity of thresholds reported in TITAN analyses. The thresholds identified by TITAN represented disparate characteristics of species responses that, when coupled with the inability of TITAN to identify thresholds accurately and consistently, do not support the aggregation of individual species thresholds into a community threshold.

  19. A Multinomial Model for Identifying Significant Pure-Tone Threshold Shifts

    ERIC Educational Resources Information Center

    Schlauch, Robert S.; Carney, Edward

    2007-01-01

    Purpose: Significant threshold differences on retest for pure-tone audiometry are often evaluated by application of ad hoc rules, such as a shift in a pure-tone average or in 2 adjacent frequencies that exceeds a predefined amount. Rules that are so derived do not consider the probability of observing a particular audiogram. Methods: A general…

  20. Predicting the spatial distribution of Ochlerotatus albifasciatus (Diptera: Culicidae) abundance with NOAA imagery.

    PubMed

    Gleiser, R M; Gorla, D E

    2007-12-01

    Ochlerotatus albifasciatus is a vector of western equine encephalomyelitis in Argentina and a nuisance mosquito affecting beef and dairy production. The objective of this study was to analyze whether environmental proxy data derived from 1-km resolution NOAA-AVHRR images could be useful as a rapid tool for locating areas with higher potential for Oc. albifasciatus activity at a regional scale. Training sites for mosquito abundance categories were 3.3 x 3.3 km polygons over sampling sites. Abundance was classified into two categories according to a proposed threshold for economic losses. Data from channels 1, 2, 4 and 5 were used to calculate five biophysical variables: normalized difference vegetation index (NDVI), land surface temperature, total precipitable water, dew point and vapour saturation deficit. A discriminant analysis correctly classified 100% of the areas predicted to be above or below the economic threshold of 2500 mosquitoes per night of capture. Components of the NDVI, the total precipitable water and the dew point temperature contributed most to the function value. The results suggest that environmental data derived from AVHRR-NOAA could be useful for rapidly identifying areas adequate for mosquito development or activity.
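
    A minimal Python sketch of the vegetation index that underlies one of the five biophysical variables, NDVI computed from AVHRR channel 1 (visible red) and channel 2 (near-infrared) reflectances; the reflectance values stand in for a small training polygon and are illustrative:

      import numpy as np

      def ndvi(red, nir):
          """Normalized Difference Vegetation Index from red and near-infrared reflectance."""
          red = np.asarray(red, dtype=float)
          nir = np.asarray(nir, dtype=float)
          return (nir - red) / (nir + red + 1e-12)   # small constant guards against 0/0

      # illustrative reflectances over a small training polygon
      red = np.array([[0.08, 0.10, 0.09], [0.07, 0.09, 0.11], [0.10, 0.08, 0.09]])
      nir = np.array([[0.35, 0.40, 0.32], [0.30, 0.38, 0.42], [0.36, 0.31, 0.33]])
      print(ndvi(red, nir).mean())                   # polygon-mean NDVI used as a predictor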

  1. Population Dynamics of Belonolaimus longicaudatus in a Cotton Production System

    PubMed Central

    Crow, W. T.; Weingartner, D. P.; McSorley, R.; Dickson, D. W.

    2000-01-01

    Belonolaimus longicaudatus is a recognized pathogen of cotton (Gossypium hirsutum), but insufficient information is available on the population dynamics and economic thresholds of B. longicaudatus in cotton production. In this study, data collected from a field in Florida were used to develop models predicting population increases of B. longicaudatus on cotton and population declines under clean fallow. Population densities of B. longicaudatus increased on cotton, reaching a carrying capacity of 139 nematodes/130 cm³ of soil, but decreased exponentially during periods of bare fallow. The model indicated that population densities should decrease each year of monocropped cotton, if an alternate host is not present between sequential cotton crops. Economic thresholds derived from published damage functions and current prices for cotton and nematicides varied from 2 to 5 B. longicaudatus/130 cm³ of soil, depending on the nematicide used. PMID:19270968
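
    A toy sketch of the two population phases described above: logistic growth toward the reported carrying capacity while cotton is present, and exponential decline during clean fallow. The rate constants and season lengths are illustrative assumptions; only the carrying capacity (139 nematodes/130 cm³ of soil) is taken from the abstract.

```python
# Two-phase population sketch: logistic growth on cotton, exponential
# decline under bare fallow.  Rates and durations are assumed values.
import numpy as np

K = 139.0          # carrying capacity (nematodes per 130 cm^3 soil), from the abstract
r_grow = 0.05      # per-day logistic growth rate (assumed)
r_decline = 0.02   # per-day exponential decline rate under fallow (assumed)

def simulate(p0, days_cotton=150, days_fallow=215):
    p = p0
    trajectory = []
    for _ in range(days_cotton):      # cotton season: logistic growth toward K
        p += r_grow * p * (1.0 - p / K)
        trajectory.append(p)
    for _ in range(days_fallow):      # clean fallow: exponential decline
        p *= np.exp(-r_decline)
        trajectory.append(p)
    return np.array(trajectory)

print("end-of-year density:", simulate(p0=5.0)[-1].round(2))
```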

  2. Changes in ecosystem resilience detected in automated measures of ecosystem metabolism during a whole-lake manipulation

    PubMed Central

    Batt, Ryan D.; Carpenter, Stephen R.; Cole, Jonathan J.; Pace, Michael L.; Johnson, Robert A.

    2013-01-01

    Environmental sensor networks are developing rapidly to assess changes in ecosystems and their services. Some ecosystem changes involve thresholds, and theory suggests that statistical indicators of changing resilience can be detected near thresholds. We examined the capacity of environmental sensors to assess resilience during an experimentally induced transition in a whole-lake manipulation. A trophic cascade was induced in a planktivore-dominated lake by slowly adding piscivorous bass, whereas a nearby bass-dominated lake remained unmanipulated and served as a reference ecosystem during the 4-y experiment. In both the manipulated and reference lakes, automated sensors were used to measure variables related to ecosystem metabolism (dissolved oxygen, pH, and chlorophyll-a concentration) and to estimate gross primary production, respiration, and net ecosystem production. Thresholds were detected in some automated measurements more than a year before the completion of the transition to piscivore dominance. Directly measured variables (dissolved oxygen, pH, and chlorophyll-a concentration) related to ecosystem metabolism were better indicators of the approaching threshold than were the estimates of rates (gross primary production, respiration, and net ecosystem production); this difference was likely a result of the larger uncertainties in the derived rate estimates. Thus, relatively simple characteristics of ecosystems that were observed directly by the sensors were superior indicators of changing resilience. Models linked to thresholds in variables that are directly observed by sensor networks may provide unique opportunities for evaluating resilience in complex ecosystems. PMID:24101479

  3. Invited perspectives: Hydrological perspectives on precipitation intensity-duration thresholds for landslide initiation: proposing hydro-meteorological thresholds

    NASA Astrophysics Data System (ADS)

    Bogaard, Thom; Greco, Roberto

    2018-01-01

    Many shallow landslides and debris flows are precipitation initiated. Therefore, regional landslide hazard assessment is often based on empirically derived precipitation intensity-duration (ID) thresholds and landslide inventories. Generally, two features of precipitation events are plotted and labeled with (shallow) landslide occurrence or non-occurrence. Hereafter, a separation line or zone is drawn, mostly in logarithmic space. The practical background of ID is that often only meteorological information is available when analyzing (non-)occurrence of shallow landslides and, at the same time, it could be that precipitation information is a good proxy for both meteorological trigger and hydrological cause. Although applied in many case studies, this approach suffers from many false positives as well as limited physical process understanding. Some first steps towards a more hydrologically based approach have been proposed in the past, but these efforts received limited follow-up. Therefore, the objective of our paper is to (a) critically analyze the concept of precipitation ID thresholds for shallow landslides and debris flows from a hydro-meteorological point of view and (b) propose a trigger-cause conceptual framework for lumped regional hydro-meteorological hazard assessment based on published examples and associated discussion. We discuss the ID thresholds in relation to return periods of precipitation, soil physics, and slope and catchment water balance. With this paper, we aim to contribute to the development of a stronger conceptual model for regional landslide hazard assessment based on physical process understanding and empirical data.

  4. A threshold dose distribution approach for the study of PDT resistance development: A threshold distribution approach for the study of PDT resistance.

    PubMed

    de Faria, Clara Maria Gonçalves; Inada, Natalia Mayumi; Vollet-Filho, José Dirceu; Bagnato, Vanderlei Salvador

    2018-05-01

    Photodynamic therapy (PDT) is a technique with well-established principles that often demands repeated applications for sequential elimination of tumor cells. An important question concerns how cells surviving one treatment behave in the subsequent one. Threshold dose is a core concept in PDT dosimetry: the minimum amount of energy that must be delivered for cell destruction via PDT. Threshold distribution concepts have been shown to be an important tool for analyzing PDT results in vitro. In this study, we used some of these concepts to demonstrate that subsequent treatments with partial elimination of cells modify the distribution, which represents an increased resistance of the cells to photodynamic action. HepG2 and HepaRG were used as models of tumor and normal liver cells, and a protocol to induce resistance, consisting of repeated PDT sessions with Photogem® as the photosensitizer, was applied to the tumor cells. The response of these cells to PDT was assessed using a standard viability assay, and the dose-response curves were used to derive the threshold distributions. The changes in the distribution revealed that the resistance protocol effectively eliminated the most sensitive cells. Nevertheless, the HepaRG cell line was the most resistant among the cells analyzed, which indicates a specificity in clinical applications that enables the use of high doses and drug concentrations with minimal damage to the surrounding normal tissue. Copyright © 2018 Elsevier B.V. All rights reserved.

  5. Changes in ecosystem resilience detected in automated measures of ecosystem metabolism during a whole-lake manipulation.

    PubMed

    Batt, Ryan D; Carpenter, Stephen R; Cole, Jonathan J; Pace, Michael L; Johnson, Robert A

    2013-10-22

    Environmental sensor networks are developing rapidly to assess changes in ecosystems and their services. Some ecosystem changes involve thresholds, and theory suggests that statistical indicators of changing resilience can be detected near thresholds. We examined the capacity of environmental sensors to assess resilience during an experimentally induced transition in a whole-lake manipulation. A trophic cascade was induced in a planktivore-dominated lake by slowly adding piscivorous bass, whereas a nearby bass-dominated lake remained unmanipulated and served as a reference ecosystem during the 4-y experiment. In both the manipulated and reference lakes, automated sensors were used to measure variables related to ecosystem metabolism (dissolved oxygen, pH, and chlorophyll-a concentration) and to estimate gross primary production, respiration, and net ecosystem production. Thresholds were detected in some automated measurements more than a year before the completion of the transition to piscivore dominance. Directly measured variables (dissolved oxygen, pH, and chlorophyll-a concentration) related to ecosystem metabolism were better indicators of the approaching threshold than were the estimates of rates (gross primary production, respiration, and net ecosystem production); this difference was likely a result of the larger uncertainties in the derived rate estimates. Thus, relatively simple characteristics of ecosystems that were observed directly by the sensors were superior indicators of changing resilience. Models linked to thresholds in variables that are directly observed by sensor networks may provide unique opportunities for evaluating resilience in complex ecosystems.

  6. Conception, fabrication and characterization of a silicon based MEMS inertial switch with a threshold value of 5 g

    NASA Astrophysics Data System (ADS)

    Zhang, Fengtian; Wang, Chao; Yuan, Mingquan; Tang, Bin; Xiong, Zhuang

    2017-12-01

    Most of the MEMS inertial switches developed in recent years are intended for shock and impact sensing with threshold values above 50 g. To meet the requirement of detecting linear acceleration signals at the low-g level, a silicon-based MEMS inertial switch with a threshold value of 5 g was designed, fabricated and characterized. The switch consists of a large proof mass supported by circular spiral springs. An analytical model of the structural stiffness of the proposed switch was derived and verified by finite-element simulation. Fabrication was based on a customized double-buried-layer silicon-on-insulator wafer, encapsulated by glass wafers. Centrifugal and nanoindentation experiments were performed to measure the threshold value and the structural stiffness. The actual threshold values were 0.1-0.3 g lower than the designed value of 5 g owing to dimension loss during non-contact lithography. For the reliability assessment, a series of environmental experiments was conducted and the switches remained operational without excessive errors. However, both the random vibration and the shock tests indicate that metal particles generated during collision of the contact parts might affect contact reliability and long-term stability, so switch contact behavior warrants closer study in future work.

  7. Torque-onset determination: Unintended consequences of the threshold method.

    PubMed

    Dotan, Raffy; Jenkins, Glenn; O'Brien, Thomas D; Hansen, Steve; Falk, Bareket

    2016-12-01

    Compared with visual torque-onset detection (TOD), threshold-based TOD produces an onset bias that increases as torque or rate of torque development (RTD) decreases. The aim was to compare the effects of differential TOD bias on common contractile parameters in two torque-disparate groups. Fifteen boys and 12 men performed maximal, explosive, isometric knee extensions. Torque and EMG were recorded for each contraction. Best contractions were selected by peak torque (MVC) and peak RTD. Visual-TOD-based torque-time traces, electromechanical delays (EMD), and times to peak RTD (tRTD) were compared with corresponding data derived from fixed 4-Nm and relative 5%-MVC thresholds. The 5%-MVC TOD biases were similar for boys and men, but the corresponding 4-Nm-based biases were markedly different (40.3±14.1 vs. 18.4±7.1 ms, respectively; p<0.001). Boys-men EMD differences were most affected, increasing from 5.0 ms (visual) to 26.9 ms (4 Nm; p<0.01). The men's visually based torque kinetics tended to be faster than the boys' (NS), but the 4-Nm-based kinetics erroneously depicted the boys as being much faster to any given %MVC (p<0.001). When comparing contractile properties of dissimilar groups, e.g., children vs. adults, threshold-based TOD methods can misrepresent reality and lead to erroneous conclusions. Relative thresholds (e.g., 5% MVC) still introduce error, but group comparisons are not confounded. Copyright © 2016 Elsevier Ltd. All rights reserved.
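
    The bias mechanism can be illustrated with a short sketch on a synthetic torque trace: a fixed 4-Nm threshold detects onset later when peak torque is smaller, whereas a relative 5%-MVC threshold introduces a similar (if still nonzero) delay in both groups. The torque curve and the two peak-torque values below are invented for illustration only.

```python
# Onset bias illustration on a synthetic torque rise (true onset at t = 0.1 s).
import numpy as np

def detect_onset(torque, time, threshold):
    """Time at which torque first reaches the given threshold."""
    idx = np.argmax(torque >= threshold)
    return time[idx]

time = np.arange(0.0, 1.0, 0.001)                               # s
for mvc in (60.0, 200.0):                                        # child-like vs adult-like peak torque (Nm)
    torque = mvc * np.clip((time - 0.1) / 0.3, 0.0, 1.0) ** 2    # quadratic rise after the true onset
    t_fixed = detect_onset(torque, time, 4.0)                    # fixed 4-Nm threshold
    t_rel = detect_onset(torque, time, 0.05 * mvc)               # relative 5%-MVC threshold
    print(f"MVC {mvc:5.0f} Nm: 4-Nm bias {1000*(t_fixed-0.1):5.1f} ms, "
          f"5%-MVC bias {1000*(t_rel-0.1):5.1f} ms")
```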

  8. Generation of squeezed microwave states by a dc-pumped degenerate parametric Josephson junction oscillator

    NASA Astrophysics Data System (ADS)

    Kaertner, Franz X.; Russer, Peter

    1990-11-01

    The master equation for a dc-pumped degenerate Josephson parametric amplifier is derived. It is shown that the Wigner distribution representation of this master equation can be approximated by a Fokker-Planck equation. By using this equation, the dynamical behavior of this degenerate Josephson amplifier with respect to squeezing of the radiation field is investigated. It is shown that below threshold of parametric oscillation, a squeezed vacuum state can be generated, and above threshold a second bifurcation point exists, where the device generates amplitude squeezed radiation. Basic relations between the achievable amplitude squeezing, the output power, and the operation frequency are derived.

  9. Threshold current for fireball generation

    NASA Astrophysics Data System (ADS)

    Dijkhuis, Geert C.

    1982-05-01

    Fireball generation from a high-intensity circuit breaker arc is interpreted here as a quantum-mechanical phenomenon caused by severe cooling of electrode material evaporating from contact surfaces. According to the proposed mechanism, quantum effects appear in the arc plasma when the radius of one magnetic flux quantum inside solid electrode material has shrunk to one London penetration length. A formula derived for the threshold discharge current preceding fireball generation is found compatible with data reported by Silberg. This formula predicts linear scaling of the threshold current with the circuit breaker's electrode radius and concentration of conduction electrons.

  10. Deriving fuel-based emission factor thresholds to interpret heavy-duty vehicle roadside plume measurements.

    PubMed

    Quiros, David C; Smith, Jeremy D; Ham, Walter A; Robertson, William H; Huai, Tao; Ayala, Alberto; Hu, Shaohua

    2018-04-13

    Remote sensing devices have been used for decades to measure gaseous emissions from individual vehicles at the roadside. Systems have also been developed that entrain diluted exhaust and can also measure particulate matter (PM) emissions. In 2015, the California Air Resources Board (CARB) reported that 8% of in-field diesel particulate filters (DPF) on heavy-duty (HD) vehicles were malfunctioning and emitted about 70% of total diesel PM emissions from the DPF-equipped fleet. A new high-emitter problem in the heavy-duty vehicle fleet had emerged. Roadside exhaust plume measurements reflect a snapshot of real-world operation, typically lasting several seconds. In order to relate roadside plume measurements to laboratory emission tests, we analyzed carbon dioxide (CO2), oxides of nitrogen (NOx), and PM emissions collected from four HD vehicles during several driving cycles on a chassis dynamometer. We examined the fuel-based emission factors corresponding to possible exceedances of emission standards as a function of vehicle power. Our analysis suggests that a typical HD vehicle will exceed the model year (MY) 2010 emission standards (of 0.2 g NOx/bhp-hr and 0.01 g PM/bhp-hr) by three times when fuel-based emission factors are 9.3 g NOx/kg fuel and 0.11 g PM/kg fuel using the roadside plume measurement approach. Reported limits correspond to 99% confidence levels, which were calculated using the detection uncertainty of emissions analyzers, the accuracy of vehicle power calculations, and the actual emissions variability of fixed operational parameters. The PM threshold was determined for acceleration events between 0.47 and 1.4 mph/sec only, and the NOx threshold was derived from measurements where aftertreatment temperature was above 200°C. Anticipating a growing interest in real-world driving emissions, widespread implementation of roadside exhaust plume measurements as a complement to in-use vehicle programs may benefit from expanding this analysis to a larger sample of in-use HD vehicles. Implications: Regulatory agencies, civil society, and the public at large have a growing interest in vehicle emission compliance in the real world. Leveraging roadside plume measurements to identify vehicles with malfunctioning emission control systems is emerging as a viable new and useful method to assess in-use performance. This work proposes fuel-based emission factor thresholds for PM and NOx that signify exceedances of emission standards on a work-specific basis by analyzing real-time emissions in the laboratory. These thresholds could be used to pre-screen vehicles before roadside enforcement inspection or other inquiry, enhance and further develop emission inventories, and potentially develop new requirements for heavy-duty inspection and maintenance (I/M) programs, including but not limited to identifying vehicles for further testing.

  11. Towards developing drought impact functions to advance drought monitoring and early warning

    NASA Astrophysics Data System (ADS)

    Bachmair, Sophie; Stahl, Kerstin; Hannaford, Jamie; Svoboda, Mark

    2015-04-01

    In natural hazard analysis, damage functions (also referred to as vulnerability or susceptibility functions) relate hazard intensity to the negative effects of the hazard event, often expressed as damage ratio or monetary loss. While damage functions for floods and seismic hazards have gained considerable attention, there is little knowledge on how drought intensity translates into ecological and socioeconomic impacts. One reason for this is the multifaceted nature of drought affecting different domains of the hydrological cycle and different sectors of human activity (for example, recognizing meteorological - agricultural - hydrological - socioeconomic drought) leading to a wide range of drought impacts. Moreover, drought impacts are often non-structural and hard to quantify or monetarize (e.g. impaired navigability of streams, bans on domestic water use, increased mortality of aquatic species). Knowledge on the relationship between drought intensity and drought impacts, i.e. negative environmental, economic or social effects experienced under drought conditions, however, is vital to identify critical thresholds for drought impact occurrence. Such information may help to improve drought monitoring and early warning (M&EW), one goal of the international DrIVER project (Drought Impacts: Vulnerability thresholds in monitoring and Early-warning Research). The aim of this study is to test the feasibility of designing "drought impact functions" for case study areas in Europe (Germany and UK) and the United States to derive thresholds meaningful for drought impact occurrence; to account for the multidimensionality of drought impacts, we use the broader term "drought impact function" over "damage function". First steps towards developing empirical drought impact functions are (1) to identify meaningful indicators characterizing the hazard intensity (e.g. indicators expressing a precipitation or streamflow deficit), (2) to identify suitable variables representing impacts, damage, or loss due to drought, and (3) to test different statistical models to link drought intensity with drought impact information to derive meaningful thresholds. While the focus regarding drought impact variables lies on text-based impact reports from the European Drought Impact report Inventory (EDII) and the US Drought Impact Reporter (DIR), the information gain through exploiting other variables such as agricultural yield statistics and remotely sensed vegetation indices is explored. First results reveal interesting insights into the complex relationship between drought indicators and impacts and highlight differences among drought impact variables and geographies. Although a simple intensity threshold evoking specific drought impacts cannot be identified, developing drought impact functions helps to elucidate how drought conditions relate to ecological or socioeconomic impacts. Such knowledge may provide guidance for inferring meaningful triggers for drought M&EW and could have potential for a wide range of drought management applications (for example, building drought scenarios for testing the resilience of drought plans or water supply systems).
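
    As a minimal illustration of an empirical drought impact function, the sketch below links a synthetic drought indicator to reported impact occurrence with a logistic regression and reads off the indicator value at a chosen impact probability. The data, the SPI-like indicator, and the 50% probability level are assumptions for illustration, not DrIVER project results.

```python
# Toy "drought impact function": logistic regression of impact occurrence
# on a drought indicator, then the indicator value at 50% impact probability.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
spi = rng.normal(0.0, 1.0, 300)                       # monthly drought indicator (synthetic)
p_true = 1.0 / (1.0 + np.exp(4.0 * (spi + 1.0)))      # impacts more likely when SPI is low
impact_reported = rng.random(300) < p_true            # synthetic impact reports

model = LogisticRegression().fit(spi.reshape(-1, 1), impact_reported)

# indicator value at which the modelled impact probability crosses 50%
b0, b1 = model.intercept_[0], model.coef_[0, 0]
print("SPI threshold for 50% impact probability:", -b0 / b1)
```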

  12. Subject Indexing and Citation Indexing--Part I: Clustering Structure in the Cystic Fibrosis Document Collection [and] Part II: An Evaluation and Comparison.

    ERIC Educational Resources Information Center

    Shaw, W. M., Jr.

    1990-01-01

    These two articles discuss clustering structure in the Cystic Fibrosis Document Collection, which is derived from the National Library of Medicine's MEDLINE file. The exhaustivity of four subject representations and two citation representations is examined, and descriptor-weight thresholds and similarity thresholds are used to compute…

  13. Derivation of soil screening thresholds to protect chisel-toothed kangaroo rat from uranium mine waste in northern Arizona

    USGS Publications Warehouse

    Hinck, Jo E.; Linder, Greg L.; Otton, James K.; Finger, Susan E.; Little, Edward E.; Tillitt, Donald E.

    2013-01-01

    Chemical data from soil and weathered waste material samples collected from five uranium mines north of the Grand Canyon (three reclaimed, one mined but not reclaimed, and one never mined) were used in a screening-level risk analysis for the Arizona chisel-toothed kangaroo rat (Dipodomys microps leucotis); risks from radiation exposure were not evaluated. Dietary toxicity reference values were used to estimate soil-screening thresholds presenting risk to kangaroo rats. Sensitivity analyses indicated that body weight critically affected outcomes of exposed-dose calculations; juvenile kangaroo rats were more sensitive to the inorganic constituent toxicities than adult kangaroo rats. Species-specific soil-screening thresholds were derived for arsenic (137 mg/kg), cadmium (16 mg/kg), copper (1,461 mg/kg), lead (1,143 mg/kg), nickel (771 mg/kg), thallium (1.3 mg/kg), uranium (1,513 mg/kg), and zinc (731 mg/kg) using toxicity reference values that incorporate expected chronic field exposures. Inorganic contaminants in soils within and near the mine areas generally posed minimal risk to kangaroo rats. Most exceedances of soil thresholds were for arsenic and thallium and were associated with weathered mine wastes.

  14. Optimal Investment Under Transaction Costs: A Threshold Rebalanced Portfolio Approach

    NASA Astrophysics Data System (ADS)

    Tunc, Sait; Donmez, Mehmet Ali; Kozat, Suleyman Serdar

    2013-06-01

    We study optimal investment in a financial market having a finite number of assets from a signal processing perspective. We investigate how an investor should distribute capital over these assets and when the distribution of funds over these assets should be reallocated to maximize the cumulative wealth over any investment period. In particular, we introduce a portfolio selection algorithm that maximizes the expected cumulative wealth in i.i.d. two-asset discrete-time markets where the market levies proportional transaction costs in buying and selling stocks. We achieve this using "threshold rebalanced portfolios", where trading occurs only if the portfolio breaches certain thresholds. Under the assumption that the relative price sequences have a log-normal distribution from the Black-Scholes model, we evaluate the expected wealth under proportional transaction costs and find the threshold rebalanced portfolio that achieves the maximal expected cumulative wealth over any investment period. Our derivations can be readily extended to markets having more than two stocks; these extensions are pointed out in the paper. As predicted by our derivations, we significantly improve the achieved wealth over portfolio selection algorithms from the literature on historical data sets.
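
    A compact sketch of the threshold-rebalancing idea, under assumed log-normal price changes and a proportional cost rate: the two-asset portfolio is traded back to its target weight only when the weight drifts outside a band around the target. The parameter values are illustrative, not the optimized thresholds derived in the paper.

```python
# Two-asset threshold-rebalanced portfolio with proportional transaction costs.
# Price model, cost rate, target weight and band width are assumptions.
import numpy as np

rng = np.random.default_rng(1)
T, target, eps, cost = 1000, 0.5, 0.05, 0.002    # steps, target weight, band, cost rate

rel_price = np.exp(rng.normal(0.0002, 0.01, T))  # log-normal relative price changes; asset 2 is cash

wealth, w = 1.0, target                          # total wealth and weight held in asset 1
for x in rel_price:
    wealth *= w * x + (1.0 - w)                  # one-period portfolio growth
    w = w * x / (w * x + (1.0 - w))              # weight drifts with the price move
    if abs(w - target) > eps:                    # rebalance only when a threshold is breached
        wealth -= cost * abs(w - target) * wealth  # cost proportional to the amount traded
        w = target

print("final wealth:", round(wealth, 4))
```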

  15. Greenhouse evaluation and environmental impact assessment of different urine-derived struvite fertilizers as phosphorus sources for plants.

    PubMed

    Antonini, Samantha; Arias, Maria Alejandra; Eichert, Thomas; Clemens, Joachim

    2012-11-01

    A selection of six urine-derived struvite fertilizers generated by innovative precipitation technologies was assessed for their quality and their effectiveness as phosphorus sources for crops. Struvite purity was influenced by drying techniques and magnesium dosage. In a greenhouse experiment, the urine fertilizers led to biomass yields and phosphorus uptakes comparable to or higher than those induced by a commercial mineral fertilizer. Heavy metal concentrations of the different struvite fertilizers were below the threshold limits specified by the German Fertilizer and Sewage Sludge Regulations. The computed loading rates of heavy metals to agricultural land were also below the threshold limits decreed by the Federal Soil Protection Act. Urine-derived struvite contributed less to heavy metal inputs to farmland than other recycling products or commercial mineral and organic fertilizers. When combined with other soil conditioners, urine-derived struvite is an efficient fertilizer which covers the magnesium and more than half of the phosphorus demand of crops. Copyright © 2012 Elsevier Ltd. All rights reserved.

  16. Principal component analysis of the CT density histogram to generate parametric response maps of COPD

    NASA Astrophysics Data System (ADS)

    Zha, N.; Capaldi, D. P. I.; Pike, D.; McCormack, D. G.; Cunningham, I. A.; Parraga, G.

    2015-03-01

    Pulmonary x-ray computed tomography (CT) may be used to characterize emphysema and airways disease in patients with chronic obstructive pulmonary disease (COPD). One analysis approach, parametric response mapping (PRM), utilizes registered inspiratory and expiratory CT image volumes and CT-density-histogram thresholds, but there is no consensus regarding the threshold values used or their clinical meaning. Principal component analysis (PCA) of the CT density histogram can be exploited to quantify emphysema using data-driven CT-density-histogram thresholds. Thus, the objective of this proof-of-concept demonstration was to develop a PRM approach using PCA-derived thresholds in COPD patients and ex-smokers without airflow limitation. Methods: Fifteen COPD ex-smokers and 5 normal ex-smokers were evaluated. Thoracic CT images were acquired at full inspiration and full expiration, and these images were non-rigidly co-registered. PCA was performed on the CT density histograms, and the components with eigenvalues greater than one were summed. Since the values of the principal component curve correlate directly with the variability in the sample, the maximum and minimum points on the curve were used as threshold values for the PCA-adjusted PRM technique. Results: A significant correlation was determined between conventional and PCA-adjusted PRM with 3He MRI apparent diffusion coefficient (p<0.001), with CT RA950 (p<0.0001), as well as with 3He MRI ventilation defect percent, a measurement of both small airways disease (p=0.049 and p=0.06, respectively) and emphysema (p=0.02). Conclusions: PRM generated using PCA thresholds of the CT density histogram showed significant correlations with CT and 3He MRI measurements of emphysema, but not airways disease.
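
    The threshold-extraction step can be sketched as follows, with random histograms standing in for real lung CT data: PCA is run on the per-subject density histograms, the leading components are summed, and the density bins at the extrema of the summed component curve are taken as thresholds. The component-selection rule below uses the mean eigenvalue as a simple stand-in for the paper's "eigenvalue greater than one" criterion.

```python
# Conceptual sketch: data-driven density thresholds from PCA of CT density
# histograms.  The histograms below are synthetic placeholders.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
hu_bins = np.arange(-1000, -500, 5)                   # CT density bins (HU)
histograms = rng.random((20, hu_bins.size))           # 20 subjects x density histogram
histograms /= histograms.sum(axis=1, keepdims=True)   # normalise each histogram

pca = PCA().fit(histograms)
keep = pca.explained_variance_ > pca.explained_variance_.mean()  # proxy for eigenvalue > 1
component_curve = pca.components_[keep].sum(axis=0)              # summed leading components

thr_low = hu_bins[component_curve.argmin()]
thr_high = hu_bins[component_curve.argmax()]
print("PCA-derived density thresholds (HU):", thr_low, thr_high)
```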

  17. Digital Correlation Microwave Polarimetry: Analysis and Demonstration

    NASA Technical Reports Server (NTRS)

    Piepmeier, J. R.; Gasiewski, A. J.; Krebs, Carolyn A. (Technical Monitor)

    2000-01-01

    The design, analysis, and demonstration of a digital-correlation microwave polarimeter for use in earth remote sensing are presented. We begin with an analysis of three-level digital correlation and develop the correlator transfer function and radiometric sensitivity. A fifth-order polynomial regression is derived for inverting the digital correlation coefficient into the analog statistic. In addition, the effects of quantizer threshold asymmetry and hysteresis are discussed. A two-look unpolarized calibration scheme is developed for identifying correlation offsets. The developed theory and calibration method are verified using a 10.7 GHz and a 37.0 GHz polarimeter. The polarimeters are based upon 1-GS/s three-level digital correlators and measure the first three Stokes parameters. Through experiment, the radiometric sensitivity is shown to approach the theoretical value derived earlier in the paper, and the two-look unpolarized calibration method is successfully compared with results using a polarimetric scheme. Finally, sample data from an aircraft experiment demonstrate that the polarimeter is highly useful for ocean wind-vector measurement.
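
    A small numerical sketch of why an inversion step is needed: quantizing two correlated Gaussian signals to three levels and correlating the quantized streams yields a coefficient that is a nonlinear (here attenuated) function of the analog correlation. The threshold, correlation value and sample size are illustrative; the paper's fifth-order polynomial regression itself is not reproduced.

```python
# Three-level ("1.5-bit") digital correlation of two correlated Gaussian signals.
import numpy as np

rng = np.random.default_rng(7)
rho_true = 0.3
n = 200_000
x = rng.normal(size=n)
y = rho_true * x + np.sqrt(1 - rho_true**2) * rng.normal(size=n)

thr = 0.6                                      # quantizer threshold (in units of sigma)
qx = np.where(x > thr, 1, np.where(x < -thr, -1, 0))
qy = np.where(y > thr, 1, np.where(y < -thr, -1, 0))

rho_digital = np.mean(qx * qy) / np.sqrt(np.mean(qx**2) * np.mean(qy**2))
print("analog rho :", rho_true)
print("digital rho:", rho_digital.round(3))    # distorted; a regression is used to invert it
```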

  18. Mathematical Model Relating Uniaxial Compressive Behavior of Manufactured Sand Mortar to MIP-Derived Pore Structure Parameters

    PubMed Central

    Tian, Zhenghong; Bu, Jingwu

    2014-01-01

    The uniaxial compression response of manufactured sand mortars proportioned with different water-cement and sand-cement ratios is examined. Pore structure parameters such as porosity, threshold diameter, mean diameter, and total amount of macropores, as well as the shape and size of micropores, are quantified using the mercury intrusion porosimetry (MIP) technique. Test results indicate that the strain at peak stress and the compressive strength decreased with increasing sand-cement ratio because there was insufficient binder to coat all of the sand. A compressive stress-strain model for normal concrete was extended to predict the stress-strain relationship of manufactured sand mortar; the extended model agreed well with the experimental data. Furthermore, the stress-strain model constant is found to be influenced by the threshold diameter, mean diameter, and the shape and size of micropores. A mathematical model relating the stress-strain model constants to the relevant pore structure parameters of manufactured sand mortar is developed. PMID:25133257

  19. Dynamical processes and epidemic threshold on nonlinear coupled multiplex networks

    NASA Astrophysics Data System (ADS)

    Gao, Chao; Tang, Shaoting; Li, Weihua; Yang, Yaqian; Zheng, Zhiming

    2018-04-01

    Recently, the interplay between epidemic spreading and awareness diffusion has aroused the interest of many researchers, who have studied models mainly based on linear coupling relations between information and epidemic layers. However, in real-world networks the relation between two layers may be closely correlated with the property of individual nodes and exhibits nonlinear dynamical features. Here we propose a nonlinear coupled information-epidemic model (I-E model) and present a comprehensive analysis in a more generalized scenario where the upload rate differs from node to node, deletion rate varies between susceptible and infected states, and infection rate changes between unaware and aware states. In particular, we develop a theoretical framework of the intra- and inter-layer dynamical processes with a microscopic Markov chain approach (MMCA), and derive an analytic epidemic threshold. Our results suggest that the change of upload and deletion rate has little effect on the diffusion dynamics in the epidemic layer.

  20. Mathematical model relating uniaxial compressive behavior of manufactured sand mortar to MIP-derived pore structure parameters.

    PubMed

    Tian, Zhenghong; Bu, Jingwu

    2014-01-01

    The uniaxial compression response of manufactured sand mortars proportioned with different water-cement and sand-cement ratios is examined. Pore structure parameters such as porosity, threshold diameter, mean diameter, and total amount of macropores, as well as the shape and size of micropores, are quantified using the mercury intrusion porosimetry (MIP) technique. Test results indicate that the strain at peak stress and the compressive strength decreased with increasing sand-cement ratio because there was insufficient binder to coat all of the sand. A compressive stress-strain model for normal concrete was extended to predict the stress-strain relationship of manufactured sand mortar; the extended model agreed well with the experimental data. Furthermore, the stress-strain model constant is found to be influenced by the threshold diameter, mean diameter, and the shape and size of micropores. A mathematical model relating the stress-strain model constants to the relevant pore structure parameters of manufactured sand mortar is developed.

  1. Derivation of the threshold condition for the ion temperature gradient mode with an inverted density profile from a simple physics picture

    NASA Astrophysics Data System (ADS)

    Jhang, Hogun

    2018-05-01

    We show that the threshold condition for the toroidal ion temperature gradient (ITG) mode with an inverted density profile can be derived from a simple physics argument. The key in this picture is that the density inversion reduces the ion compression due to the ITG mode and the electron drift motion mitigates the poloidal potential build-up. This condition reproduces the same result that has been reported from a linear gyrokinetic calculation [T. S. Hahm and W. M. Tang, Phys. Fluids B 1, 1185 (1989)]. The destabilizing role of trapped electrons in toroidal geometry is easily captured in this picture.

  2. The effect of choosing three different C factor formulae derived from NDVI on a fully raster-based erosion modelling

    NASA Astrophysics Data System (ADS)

    Sulistyo, Bambang

    2016-11-01

    The research was aimed at studying the effect of choosing three different C factor formulae derived from NDVI on a fully raster-based erosion modelling with the USLE, using remote sensing data and GIS techniques. All factors affecting erosion were analysed as raster data: the R, K, LS, C and P factors. The monthly R factor was evaluated with the formula developed by Abdurachman. The K factor was determined using the modified formula used by the Ministry of Forestry, based on soil samples taken in the field. The LS factor was derived from a Digital Elevation Model. The three C factors were all derived from NDVI and were developed by Suriyaprasit (non-linear) and by Sulistyo (linear and non-linear). The P factor was derived from the combination of slope data and land-cover classification interpreted from Landsat 7 ETM+. A map of bulk density was also created to convert the erosion unit. To assess model accuracy, the model was validated statistically by comparing Emodel with Eactual; a threshold value of ≥ 0.80 (≥ 80%) was chosen as the acceptance criterion. The results showed that the Emodel for all three C factor formulae had coefficients of correlation > 0.8. Analysis of variance showed a significant difference between Emodel and Eactual when using the C factor formulae developed by Suriyaprasit and by Sulistyo (non-linear). Among the three formulae, only the Emodel using the linear C factor formula of Sulistyo reached the accuracy criterion, at 81.13%, compared with 56.02% for the non-linear Sulistyo formula and 4.70% for the Suriyaprasit formula.
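
    A minimal raster-based USLE sketch, assuming synthetic factor grids: soil loss is computed cell by cell as E = R × K × LS × C × P, with the C factor derived from an NDVI grid. The linear C-NDVI relation below is a generic placeholder, not one of the three formulae compared in the study.

```python
# Fully raster-based USLE sketch with a placeholder C-NDVI relation.
import numpy as np

shape = (100, 100)                       # raster dimensions
rng = np.random.default_rng(3)

R  = rng.uniform(1500, 2500, shape)      # rainfall erosivity
K  = rng.uniform(0.1, 0.4, shape)        # soil erodibility
LS = rng.uniform(0.5, 5.0, shape)        # slope length-steepness factor
P  = rng.uniform(0.2, 1.0, shape)        # conservation practice factor
ndvi = rng.uniform(-0.1, 0.9, shape)

C = np.clip(1.0 - ndvi, 0.0, 1.0)        # placeholder C-NDVI relation (assumed)

E_model = R * K * LS * C * P             # soil loss per cell
print("mean modelled soil loss:", round(float(E_model.mean()), 1))
```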

  3. Analysis of Critical Mass in Threshold Model of Diffusion

    NASA Astrophysics Data System (ADS)

    Kim, Jeehong; Hur, Wonchang; Kang, Suk-Ho

    2012-04-01

    Why does diffusion sometimes show cascade phenomena but is impeded at other times? In addressing this question, we considered a threshold model of diffusion, focusing on the formation of a critical mass, which enables diffusion to be self-sustaining. Performing an agent-based simulation, we found that the diffusion model produces only two outcomes: almost perfect adoption or relatively few adoptions. To explain the difference, we considered various properties of network structures and found that the manner in which thresholds are arrayed over a network is the most critical factor determining the size of a cascade. On the basis of these results, we derived a threshold arrangement method effective for generating a critical mass and calculated the size required for perfect adoption.
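
    The all-or-almost-nothing behaviour described above can be reproduced with a short agent-based sketch: each node adopts once the fraction of adopting neighbours reaches its personal threshold, starting from a small seed. The network model, threshold distribution and seed size below are illustrative assumptions, not the paper's settings.

```python
# Toy agent-based threshold model of diffusion on a random network.
import networkx as nx
import numpy as np

rng = np.random.default_rng(4)
G = nx.erdos_renyi_graph(500, 0.02, seed=4)
thresholds = rng.uniform(0.05, 0.4, 500)                  # per-node adoption thresholds

adopted = np.zeros(500, dtype=bool)
adopted[rng.choice(500, size=10, replace=False)] = True   # initial seed (candidate critical mass)

changed = True
while changed:                                            # iterate until no node changes state
    changed = False
    for n in G.nodes:
        if not adopted[n] and G.degree[n] > 0:
            frac = np.mean([adopted[m] for m in G.neighbors(n)])
            if frac >= thresholds[n]:
                adopted[n] = True
                changed = True

print("final adoption fraction:", adopted.mean())
```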

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Slipchenko, S. O., E-mail: serghpl@mail.ioffe.ru; Podoskin, A. A.; Pikhtin, N. A.

    Threshold conditions for generation of a closed mode in the crystal of a Fabry-Perot semiconductor laser with a quantum-well active region are analyzed. It is found that the main parameters affecting the closed-mode lasing threshold for the chosen laser heterostructure are the optical loss in the passive region, the optical confinement factor of the closed mode in the gain region, and the material gain detuning. The relations defining the threshold conditions for closed-mode lasing in terms of optical and geometrical characteristics of the semiconductor laser are derived. It is shown that the threshold conditions can be satisfied at a lower material gain in comparison with the Fabry-Perot cavity mode, owing to the zero output loss of the closed mode.

  5. Generalised form of a power law threshold function for rainfall-induced landslides

    NASA Astrophysics Data System (ADS)

    Cepeda, Jose; Díaz, Manuel Roberto; Nadim, Farrokh; Høeg, Kaare; Elverhøi, Anders

    2010-05-01

    The following new function is proposed for estimating thresholds for rainfall-triggered landslides: I = α1·An^α2·D^β, where I is rainfall intensity in mm/h, D is rainfall duration in h, An is the n-hour or n-day antecedent precipitation, and α1, α2, β and n are threshold parameters. A threshold model that combines two functions with different durations of antecedent precipitation is also introduced. A storm observation exceeds the threshold when the storm parameters are located at or above the two functions simultaneously. A novel optimisation procedure for estimating the threshold parameters is proposed using Receiver Operating Characteristic (ROC) analysis. The new threshold function and optimisation procedure are applied for estimating thresholds for triggering of debris flows in the Western Metropolitan Area of San Salvador (AMSS), El Salvador, where up to 500 casualties were produced by a single event. The resulting thresholds are I = 2322·A7d^-1·D^-0.43 and I = 28534·A150d^-1·D^-0.43 for debris flows having volumes greater than 3000 m³. Thresholds are also derived for debris flows greater than 200 000 m³ and for hyperconcentrated flows initiating in burned areas caused by forest fires. The new thresholds show an improved performance compared to the traditional formulations, indicated by a reduction in false alarms from 51 to 5 for the 3000 m³ thresholds and from 6 to 0 false alarms for the 200 000 m³ thresholds.
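
    A storm observation can be checked against one branch of the generalised threshold function as in the sketch below, using the quoted 3000 m³ debris-flow parameters (α1 = 2322, 7-day antecedent precipitation with exponent -1, β = -0.43); note that the full model requires both antecedent-duration branches to be exceeded simultaneously. The example storm values are hypothetical.

```python
# Check a storm against the threshold function I = a1 * An**a2 * D**b.
def exceeds_threshold(intensity, duration_h, antecedent_mm, a1=2322.0, a2=-1.0, b=-0.43):
    """True if a storm lies at or above the intensity-duration-antecedent threshold."""
    return intensity >= a1 * antecedent_mm**a2 * duration_h**b

# hypothetical storm: 15 mm/h for 6 h with 120 mm of 7-day antecedent rain
print(exceeds_threshold(intensity=15.0, duration_h=6.0, antecedent_mm=120.0))
```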

  6. Advances in mechanistic understanding of release rate control mechanisms of extended-release hydrophilic matrix tablets.

    PubMed

    Timmins, Peter; Desai, Divyakant; Chen, Wei; Wray, Patrick; Brown, Jonathan; Hanley, Sarah

    2016-08-01

    Approaches to characterizing and developing understanding of the mechanisms that control the release of drugs from hydrophilic matrix tablets are reviewed. Historical context is provided and direct physical characterization methods are described, and recent advances, including the role of percolation thresholds and the application of magnetic resonance and other spectroscopic imaging techniques, are considered. The influence of polymer and dosage-form characteristics is reviewed, and the utility of mathematical modeling is described. Finally, it is proposed how the mechanistic understanding derived from all of these tools can be brought together to develop a robust and reliable hydrophilic matrix extended-release tablet formulation.

  7. 24 CFR 594.7 - Other threshold requirements.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    Title 24, Housing and Urban Development, Section 594.7: Other threshold requirements. In addition, an applicant must meet the following threshold requirements...

  8. Chemical sensing thresholds for mine detection dogs

    NASA Astrophysics Data System (ADS)

    Phelan, James M.; Barnett, James L.

    2002-08-01

    Mine detection dogs have been found to be an effective means of locating buried landmines. The capabilities of the canine olfaction method arise from a complex combination of training and the dog's inherent capacity for odor detection. The purpose of this effort was to explore the detection thresholds of a limited group of dogs that were trained specifically for landmine detection. Soils were contaminated with TNT and 2,4-DNT to develop chemical vapor standards to present to the dogs. The soils contained ultra-trace levels of TNT and DNT, which produce extremely low vapor levels. Three groups of dogs were presented with the headspace vapors from the contaminated soils in the work environment of each dog group. One positive sample was placed among several that contained clean soil, and the location and vapor source (strength, type) were frequently changed. The detection thresholds for the dogs were determined from measured and extrapolated dilutions of soil chemical residues and from estimated soil vapor values using phase-partitioning relationships. The results showed significant variance in dog sensing thresholds: some dogs could sense the lowest levels while others had trouble with even the highest source. The remarkable ultra-trace levels detectable by the dogs are consistent with the ultra-trace chemical residues derived from buried landmines; however, poor performance may go unnoticed without periodic challenge tests at levels consistent with performance requirements.

  9. Modeled summer background concentration nutrients and ...

    EPA Pesticide Factsheets

    We used regression models to predict background concentration of four water quality indicators: total nitrogen (N), total phosphorus (P), chloride, and total suspended solids (TSS), in the mid-continent (USA) great rivers, the Upper Mississippi, the Lower Missouri, and the Ohio. From best-model linear regressions of water quality indicators with land use and other stressor variables, we determined the concentration of the indicators when the land use and stressor variables were all set to zero, i.e., the y-intercept. Except for total P on the Upper Mississippi River and chloride on the Ohio River, we were able to predict background concentration from significant regression models. In every model with more than one predictor variable, the model included at least one variable representing agricultural land use and one variable representing development. Predicted background concentration of total N was the same on the Upper Mississippi and Lower Missouri rivers (350 ug l-1), which was much lower than a published eutrophication threshold and percentile-based thresholds (25th percentile of concentration at all sites in the population) but was similar to a threshold derived from the response of sestonic chlorophyll a to great river total N concentration. Background concentration of total P on the Lower Missouri (53 ug l-1) was also lower than published and percentile-based thresholds. Background TSS concentration was higher on the Lower Missouri (30 mg l-1) than on the other rivers.
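
    The y-intercept approach can be sketched in a few lines, with invented data standing in for the river sites: an indicator is regressed on land-use and stressor variables, and the background concentration is read off as the fitted intercept, i.e., the prediction with all stressors set to zero.

```python
# Background concentration as the intercept of a stressor regression.
# Predictor names and values are made up for illustration.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(8)
n = 60
sites = pd.DataFrame({
    "pct_agriculture": rng.uniform(0, 80, n),
    "pct_developed":   rng.uniform(0, 30, n),
})
sites["total_n_ugL"] = (350 + 8.0 * sites["pct_agriculture"]
                        + 12.0 * sites["pct_developed"] + rng.normal(0, 60, n))

X = sm.add_constant(sites[["pct_agriculture", "pct_developed"]])
fit = sm.OLS(sites["total_n_ugL"], X).fit()
print("estimated background total N (ug/L):", round(fit.params["const"], 1))
```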

  10. Spreading dynamics of a SIQRS epidemic model on scale-free networks

    NASA Astrophysics Data System (ADS)

    Li, Tao; Wang, Yuanmei; Guan, Zhi-Hong

    2014-03-01

    In order to investigate the influence of the heterogeneity of the underlying networks and the quarantine strategy on epidemic spreading, a SIQRS epidemic model on scale-free networks is presented. Using mean-field theory, the spreading dynamics of the virus are analyzed. The critical spreading threshold and the equilibria are derived. Theoretical results indicate that the critical threshold value depends significantly on the topology of the underlying networks and the quarantine rate. The existence of equilibria is determined by the threshold value. The stability of the disease-free equilibrium and the permanence of the disease are proved. Numerical simulations confirm the analytical results.

  11. Use of LiDAR to define habitat thresholds for forest bird conservation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garabedian, James E.; Moorman, Christopher E.; Nils Peterson, M.

    Quantifying species-habitat relationships provides guidance for establishment of recovery standards for endangered species, but research on forest bird habitat has been limited by availability of fine-grained forest structure data across broad extents. New tools for collection of data on forest bird response to fine-grained forest structure provide opportunities to evaluate habitat thresholds for forest birds. We used LiDAR-derived estimates of habitat attributes and resource selection to evaluate foraging habitat thresholds for recovery of the federally endangered red-cockaded woodpecker (Leuconotopicus borealis; RCW) on the Savannah River Site, South Carolina.

  12. Use of LiDAR to define habitat thresholds for forest bird conservation

    DOE PAGES

    Garabedian, James E.; Moorman, Christopher E.; Nils Peterson, M.; ...

    2017-09-01

    Quantifying species-habitat relationships provides guidance for establishment of recovery standards for endangered species, but research on forest bird habitat has been limited by availability of fine-grained forest structure data across broad extents. New tools for collection of data on forest bird response to fine-grained forest structure provide opportunities to evaluate habitat thresholds for forest birds. We used LiDAR-derived estimates of habitat attributes and resource selection to evaluate foraging habitat thresholds for recovery of the federally endangered red-cockaded woodpecker (Leuconotopicus borealis; RCW) on the Savannah River Site, South Carolina.

  13. Toward a generalized theory of epidemic awareness in social networks

    NASA Astrophysics Data System (ADS)

    Wu, Qingchu; Zhu, Wenfang

    We discuss the dynamics of a susceptible-infected-susceptible (SIS) model with local awareness in networks. Individual awareness of the infectious disease is characterized by a general function of the epidemic information in a node's neighborhood. We build a high-accuracy approximate equation governing the spreading dynamics and derive an approximate epidemic threshold above which the epidemic spreads over the whole network. Our results extend previous work and show that the epidemic threshold depends on the awareness function in the case of one infectious neighbor. Interestingly, when a power-law awareness function is chosen, the epidemic threshold can emerge even in infinite networks.

  14. Using multi-date satellite imagery to monitor invasive grass species distribution in post-wildfire landscapes: An iterative, adaptable approach that employs open-source data and software

    USGS Publications Warehouse

    West, Amanda M.; Evangelista, Paul H.; Jarnevich, Catherine S.; Kumar, Sunil; Swallow, Aaron; Luizza, Matthew; Chignell, Steve

    2017-01-01

    Among the most pressing concerns of land managers in post-wildfire landscapes are the establishment and spread of invasive species. Land managers need accurate maps of invasive species cover for targeted management post-disturbance that are easily transferable across space and time. In this study, we sought to develop an iterative, replicable methodology based on limited invasive species occurrence data, freely available remotely sensed data, and open source software to predict the distribution of Bromus tectorum (cheatgrass) in a post-wildfire landscape. We developed four species distribution models using eight spectral indices derived from five months of Landsat 8 Operational Land Imager (OLI) data in 2014. These months corresponded to both cheatgrass growing period and time of field data collection in the study area. The four models were improved using an iterative approach in which a threshold for cover was established, and all models had high sensitivity values when tested on an independent dataset. We also quantified the area at highest risk for invasion in future seasons given 2014 distribution, topographic covariates, and seed dispersal limitations. These models demonstrate the effectiveness of using derived multi-date spectral indices as proxies for species occurrence on the landscape, the importance of selecting thresholds for invasive species cover to evaluate ecological risk in species distribution models, and the applicability of Landsat 8 OLI and the Software for Assisted Habitat Modeling for targeted invasive species management.

  15. Using multi-date satellite imagery to monitor invasive grass species distribution in post-wildfire landscapes: An iterative, adaptable approach that employs open-source data and software

    NASA Astrophysics Data System (ADS)

    West, Amanda M.; Evangelista, Paul H.; Jarnevich, Catherine S.; Kumar, Sunil; Swallow, Aaron; Luizza, Matthew W.; Chignell, Stephen M.

    2017-07-01

    Among the most pressing concerns of land managers in post-wildfire landscapes are the establishment and spread of invasive species. Land managers need accurate maps of invasive species cover for targeted management post-disturbance that are easily transferable across space and time. In this study, we sought to develop an iterative, replicable methodology based on limited invasive species occurrence data, freely available remotely sensed data, and open source software to predict the distribution of Bromus tectorum (cheatgrass) in a post-wildfire landscape. We developed four species distribution models using eight spectral indices derived from five months of Landsat 8 Operational Land Imager (OLI) data in 2014. These months corresponded to both cheatgrass growing period and time of field data collection in the study area. The four models were improved using an iterative approach in which a threshold for cover was established, and all models had high sensitivity values when tested on an independent dataset. We also quantified the area at highest risk for invasion in future seasons given 2014 distribution, topographic covariates, and seed dispersal limitations. These models demonstrate the effectiveness of using derived multi-date spectral indices as proxies for species occurrence on the landscape, the importance of selecting thresholds for invasive species cover to evaluate ecological risk in species distribution models, and the applicability of Landsat 8 OLI and the Software for Assisted Habitat Modeling for targeted invasive species management.

  16. Definition of temperature thresholds: the example of the French heat wave warning system.

    PubMed

    Pascal, Mathilde; Wagner, Vérène; Le Tertre, Alain; Laaidi, Karine; Honoré, Cyrille; Bénichou, Françoise; Beaudeau, Pascal

    2013-01-01

    Heat-related deaths should be somewhat preventable. In France, prevention measures are activated when minimum and maximum temperatures averaged over three days reach city-specific thresholds. The current thresholds were computed from a descriptive analysis of past heat waves and from local expert judgement. We tested whether a different method would confirm these thresholds. The study was set in the six cities of Paris, Lyon, Marseille, Nantes, Strasbourg and Limoges between 1973 and 2003. For each city, we estimated the excess mortality associated with different temperature thresholds, using a generalised additive model controlling for long-term trends, seasons and days of the week. These models were used to compute the mortality predicted by different percentiles of temperature. The thresholds were chosen as the percentiles associated with a significant excess mortality. In all cities, there was a good correlation between the current thresholds and the thresholds derived from the models, with 0°C to 3°C differences for averaged maximum temperatures. Both sets of thresholds were able to anticipate the main periods of excess mortality during the summers of 1973 to 2003. A simple method relying on descriptive analysis and expert judgement is therefore sufficient to define protective temperature thresholds and to prevent heat wave mortality. As temperatures increase with climate change and adaptation proceeds, more research is required to understand if and when thresholds should be modified.
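
    A rough sketch of the percentile-scanning idea, on synthetic data: daily deaths are modelled as a smooth function of the three-day temperature with a Poisson regression, mortality is predicted at successive temperature percentiles, and the lowest percentile whose predicted excess over the median passes a chosen margin is taken as a candidate threshold. The data, the 20% excess margin and the spline settings are assumptions, not the French warning-system configuration.

```python
# Candidate heat-warning threshold from a Poisson regression with a spline
# term and a scan over temperature percentiles (synthetic data).
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 2000
daily = pd.DataFrame({"tmax3d": rng.normal(25, 6, n)})        # 3-day mean max temperature
daily["deaths"] = rng.poisson(30 * np.exp(0.08 * np.maximum(daily["tmax3d"] - 30, 0)))

model = smf.glm("deaths ~ bs(tmax3d, df=4)", data=daily,
                family=sm.families.Poisson()).fit()

baseline = model.predict(pd.DataFrame({"tmax3d": [daily["tmax3d"].median()]}))[0]
for q in np.arange(0.90, 1.0, 0.01):
    t_q = daily["tmax3d"].quantile(q)
    excess = model.predict(pd.DataFrame({"tmax3d": [t_q]}))[0] / baseline - 1.0
    if excess > 0.20:                                 # illustrative excess-mortality margin
        print(f"candidate threshold: {t_q:.1f} C (percentile {q:.2f})")
        break
```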

  17. Development of a precipitation-area curve for warning criteria of short-duration flash flood

    NASA Astrophysics Data System (ADS)

    Bae, Deg-Hyo; Lee, Moon-Hwan; Moon, Sung-Keun

    2018-01-01

    This paper presents quantitative criteria for flash flood warnings that can be used to rapidly assess flash flood occurrence based only on rainfall estimates. The study was conducted for 200 small mountainous sub-catchments of the Han River basin in South Korea, which has recently suffered many flash flood events. The quantitative criteria are calculated based on flash flood guidance (FFG), defined as the depth of rainfall of a given duration required to cause frequent flooding (1-2-year return period) at the outlet of a small stream basin, and estimated using threshold runoff (TR) and antecedent soil moisture conditions in all sub-basins. The soil moisture conditions were estimated for the flooding seasons (July, August and September) over 7 years (2002-2009) using the Sejong University Rainfall Runoff (SURR) model. A ROC (receiver operating characteristic) analysis was used to obtain optimum rainfall values, and a generalized precipitation-area (P-A) curve was developed for flash flood warning thresholds. The threshold function was derived as a P-A curve because a short-duration precipitation threshold is more closely related to basin area than to any other variable. In brief, the P-A curve suggests generalized flash flood warning thresholds of 42, 32 and 20 mm h-1 for sub-basins with areas of 22-40, 40-100 and > 100 km2, respectively. The proposed P-A curve was validated against observed flash flood events in different sub-basins; flash flood occurrences were captured for 9 of 12 events. The result can be used instead of FFG to identify brief flash floods (lasting less than 1 h), and it can provide warning information to decision-makers or citizens that is relatively simple, clear and immediate.
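
    The generalised thresholds quoted above reduce to a simple lookup by basin area, as sketched below; the behaviour at the exact class boundaries and below 22 km² is an assumption, since the abstract does not specify it.

```python
# Lookup of the generalised P-A warning thresholds quoted in the abstract.
def flash_flood_threshold(area_km2):
    """Rainfall-rate warning threshold (mm/h) for a sub-basin of the given area."""
    if 22 <= area_km2 < 40:
        return 42.0
    if 40 <= area_km2 < 100:
        return 32.0
    if area_km2 >= 100:
        return 20.0
    raise ValueError("P-A curve not defined for basins smaller than 22 km2")

print(flash_flood_threshold(65.0))   # -> 32.0 mm/h
```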

  18. Personalizing annual lung cancer screening for patients with chronic obstructive pulmonary disease: A decision analysis.

    PubMed

    Lowry, Kathryn P; Gazelle, G Scott; Gilmore, Michael E; Johanson, Colden; Munshi, Vidit; Choi, Sung Eun; Tramontano, Angela C; Kong, Chung Yin; McMahon, Pamela M

    2015-05-15

    Lung cancer screening with annual chest computed tomography (CT) is recommended for current and former smokers with a ≥30-pack-year smoking history. Patients with chronic obstructive pulmonary disease (COPD) are at increased risk of developing lung cancer and may benefit from screening at lower pack-year thresholds. We used a previously validated simulation model to compare the health benefits of lung cancer screening in current and former smokers ages 55-80 with ≥30 pack-years with hypothetical programs using lower pack-year thresholds for individuals with COPD (≥20, ≥10, and ≥1 pack-years). Calibration targets for COPD prevalence and associated lung cancer risk were derived using the Framingham Offspring Study limited data set. We performed sensitivity analyses to evaluate the stability of results across different rates of adherence to screening, increased competing mortality risk from COPD, and increased surgical ineligibility in individuals with COPD. The primary outcome was projected life expectancy. Programs using lower pack-year thresholds for individuals with COPD yielded the highest life expectancy gains for a given number of screens. Highest life expectancy was achieved when lowering the pack-year threshold to ≥1 pack-year for individuals with COPD, which dominated all other screening strategies. These results were stable across different adherence rates to screening and increases in competing mortality risk for COPD and surgical ineligibility. Current and former smokers with COPD may disproportionately benefit from lung cancer screening. A lower pack-year threshold for screening eligibility may benefit this high-risk patient population. © 2015 American Cancer Society.

  19. The effect of the impactor diameter and temperature on low velocity impact behavior of CFRP laminates

    NASA Astrophysics Data System (ADS)

    Evci, C.; Uyandıran, I.

    2017-02-01

    Impact damage is one of the major concerns that must be taken into account for new aircraft and spacecraft structures, which make ever-growing use of composite materials. Considering the thermal loads encountered at different altitudes, both low and high temperatures can affect the properties and impact behavior of composite materials. This study investigates the effect of temperature and impactor diameter on the impact behavior and damage development in balanced and symmetrical CFRP laminates manufactured by a vacuum bagging process with autoclave cure. An instrumented drop-weight impact testing system was used to perform the low velocity impact tests over a range of temperatures from +60 down to -50 °C. Impact tests at each temperature level were conducted using three different hemispherical impactor diameters varying from 10 to 20 mm. The energy profile method was employed to determine the impact threshold energies for damage evolution. The level of impact damage was determined from the dent depth on the impacted face and the delamination damage detected using the ultrasonic C-scan technique. Test results reveal that the penetration threshold energy, main failure force and delamination area increase with impactor diameter at all temperature levels. No clear influence of temperature on the critical force thresholds could be derived. However, the penetration threshold energy decreased as the temperature was lowered, and the drop was more pronounced at the lowest temperatures. The delamination damage area increased as the temperature decreased from +60 °C to -50 °C.

  20. Saltation threshold on Mars - The effect of interparticle force, surface roughness, and low atmospheric density. [from wind-tunnel experiments

    NASA Technical Reports Server (NTRS)

    Iversen, J. D.; White, B. R.; Pollack, J. B.; Greeley, R.

    1976-01-01

    Results are reported for wind-tunnel experiments performed to determine the threshold friction speed of particles with different densities. Experimentally determined threshold speeds are plotted as a function of particle diameter and in terms of threshold parameter vs particle friction Reynolds number. The curves are compared with those of previous experiments, and an A-B curve is plotted to show differences in threshold speed due to differences in size distributions and particle shapes. Effects of particle diameter are investigated, an expression for threshold speed is derived by considering the equilibrium forces acting on a single particle, and other approximately valid expressions are evaluated. It is shown that the assumption of universality of the A-B curve is in error at very low pressures for small particles and that only predictions which take account of both Reynolds number and effects of interparticle forces yield reasonable agreement with experimental data. Effects of nonerodible surface roughness are examined, and threshold speeds computed with allowance for this factor are compared with experimental values. Threshold friction speeds on Mars are then estimated for a surface pressure of 5 mbar, taking into account all the factors considered.
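
    The abstract notes that a threshold-speed expression follows from the force balance on a single particle. As a rough illustration of that kind of expression, the sketch below evaluates the classic simplified form u*_t = A sqrt((rho_p - rho_f) g d / rho_f); the coefficient A is held fixed here, whereas the paper emphasizes its dependence on friction Reynolds number and interparticle forces, so all numbers are illustrative assumptions rather than the paper's fit.

```python
import math

def threshold_friction_speed(particle_diameter_m, particle_density, fluid_density,
                             g=3.71, A=0.1):
    """Simplified threshold friction speed u*_t = A*sqrt((rho_p - rho_f)*g*d/rho_f)
    from the force balance on a single resting particle. The coefficient A is
    held fixed here, whereas the paper shows it depends on the particle friction
    Reynolds number and on interparticle forces; g defaults to the Martian value
    (3.71 m/s^2). All numbers are illustrative assumptions, not the paper's fit."""
    return A * math.sqrt((particle_density - fluid_density) * g * particle_diameter_m
                         / fluid_density)

# Illustrative Mars-like case: 100-micron grains (density 3000 kg/m^3) in a thin
# CO2 atmosphere of density ~0.015 kg/m^3 (roughly a 5 mbar surface pressure).
print(threshold_friction_speed(100e-6, 3000.0, 0.015))  # ~0.9 m/s with these inputs
```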

  1. Image denoising in mixed Poisson-Gaussian noise.

    PubMed

    Luisier, Florian; Blu, Thierry; Unser, Michael

    2011-03-01

    We propose a general methodology (PURE-LET) to design and optimize a wide class of transform-domain thresholding algorithms for denoising images corrupted by mixed Poisson-Gaussian noise. We express the denoising process as a linear expansion of thresholds (LET) that we optimize by relying on a purely data-adaptive unbiased estimate of the mean-squared error (MSE), derived in a non-Bayesian framework (PURE: Poisson-Gaussian unbiased risk estimate). We provide a practical approximation of this theoretical MSE estimate for the tractable optimization of arbitrary transform-domain thresholding. We then propose a pointwise estimator for undecimated filterbank transforms, which consists of subband-adaptive thresholding functions with signal-dependent thresholds that are globally optimized in the image domain. We finally demonstrate the potential of the proposed approach through extensive comparisons with state-of-the-art techniques that are specifically tailored to the estimation of Poisson intensities. We also present denoising results obtained on real images of low-count fluorescence microscopy.
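
    PURE-LET itself optimizes a linear expansion of thresholds against a Poisson-Gaussian unbiased risk estimate; reproducing that is beyond a short sketch. The example below only illustrates the underlying family of operations, generic transform-domain soft-thresholding on a one-level Haar transform, applied to a signal with synthetic mixed Poisson-Gaussian noise.

```python
import numpy as np

def soft_threshold(x, t):
    """Soft-thresholding: shrink coefficients toward zero by t."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def haar_denoise_1d(signal, threshold):
    """One-level Haar transform, soft-threshold the detail (high-pass)
    coefficients, and reconstruct. A generic transform-domain thresholding
    sketch, not the PURE-LET estimator of the paper."""
    s = np.asarray(signal, dtype=float)
    approx = (s[0::2] + s[1::2]) / np.sqrt(2.0)   # scaling coefficients
    detail = (s[0::2] - s[1::2]) / np.sqrt(2.0)   # wavelet coefficients
    detail = soft_threshold(detail, threshold)
    out = np.empty_like(s)
    out[0::2] = (approx + detail) / np.sqrt(2.0)  # inverse Haar step
    out[1::2] = (approx - detail) / np.sqrt(2.0)
    return out

rng = np.random.default_rng(0)
clean = np.repeat([2.0, 5.0, 1.0, 4.0], 64)                    # piecewise-constant signal
noisy = rng.poisson(clean) + rng.normal(0.0, 0.5, clean.size)  # mixed Poisson-Gaussian noise
print("noisy MSE:   ", np.mean((noisy - clean) ** 2))
print("denoised MSE:", np.mean((haar_denoise_1d(noisy, 1.0) - clean) ** 2))
```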

  2. Threshold and resilience management of coupled urbanization and water environmental system in the rapidly changing coastal region.

    PubMed

    Li, Yangfan; Li, Yi; Wu, Wei

    2016-01-01

    The concept of thresholds shows important implications for environmental and resource management. Here we derived potential landscape thresholds which indicated abrupt changes in water quality or the dividing points between exceeding and failing to meet national surface water quality standards for a rapidly urbanizing city on the Eastern Coast in China. The analysis of landscape thresholds was based on regression models linking each of the seven water quality variables to each of the six landscape metrics for this coupled land-water system. We found substantial and accelerating urban sprawl at the suburban areas between 2000 and 2008, and detected significant nonlinear relations between water quality and landscape pattern. This research demonstrated that a simple modeling technique could provide insights on environmental thresholds to support more-informed decision making in land use, water environmental and resilience management. Copyright © 2015 Elsevier Ltd. All rights reserved.

  3. The threshold laws for electron-atom and positron-atom impact ionization

    NASA Technical Reports Server (NTRS)

    Temkin, A.

    1983-01-01

    The Coulomb-dipole theory is employed to derive a threshold law for the lowest energy needed for the separation of three particles from one another. The study focuses on an electron impinging on a neutral atom, and the dipole is formed between an inner electron and the nucleus. The analytical dependence of the transition matrix element on energy is reduced to lowest order to obtain the threshold law, with the inner electron providing a shield for the nucleus. Experimental results using the LAMPF accelerator to produce a high energy beam of H- ions, which are then exposed to an optical laser beam to detach the negative H- ion, are discussed. The threshold level is found to be confined to the region defined by the upper bound of the inverse square of the Coulomb-dipole region. Difficulties in exact experimental confirmation of the threshold are considered.

  4. Global epidemic invasion thresholds in directed cattle subpopulation networks having source, sink, and transit nodes.

    PubMed

    Schumm, Phillip; Scoglio, Caterina; Zhang, Qian; Balcan, Duygu

    2015-02-21

    Through the characterization of a metapopulation cattle disease model on a directed network having source, transit, and sink nodes, we derive two global epidemic invasion thresholds. The first threshold defines the conditions necessary for an epidemic to successfully spread at the global scale. The second threshold defines the criteria that permit an epidemic to move out of the giant strongly connected component and to invade the populations of the sink nodes. As each sink node represents a final waypoint for cattle before slaughter, the existence of an epidemic among the sink nodes is a serious threat to food security. We find that the relationship between these two thresholds depends on the relative proportions of transit and sink nodes in the system and the distributions of the in-degrees of both node types. These analytic results are verified through numerical realizations of the metapopulation cattle model. Published by Elsevier Ltd.

  5. The challenges of developing a contrast-based video game for treatment of amblyopia

    PubMed Central

    Hussain, Zahra; Astle, Andrew T.; Webb, Ben S.; McGraw, Paul V.

    2014-01-01

    Perceptual learning of visual tasks is emerging as a promising treatment for amblyopia, a developmental disorder of vision characterized by poor monocular visual acuity. The tasks tested thus far span the gamut from basic psychophysical discriminations to visually complex video games. One end of the spectrum offers precise control over stimulus parameters, whilst the other delivers the benefits of motivation and reward that sustain practice over long periods. Here, we combined the advantages of both approaches by developing a video game that trains contrast sensitivity, which in psychophysical experiments, is associated with significant improvements in visual acuity in amblyopia. Target contrast was varied adaptively in the game to derive a contrast threshold for each session. We tested the game on 20 amblyopic subjects (10 children and 10 adults), who played at home using their amblyopic eye for an average of 37 sessions (approximately 11 h). Contrast thresholds from the game improved reliably for adults but not for children. However, logMAR acuity improved for both groups (mean = 1.3 lines; range = 0–3.6 lines). We present the rationale leading to the development of the game and describe the challenges of incorporating psychophysical methods into game-like settings. PMID:25404922

  6. The challenges of developing a contrast-based video game for treatment of amblyopia.

    PubMed

    Hussain, Zahra; Astle, Andrew T; Webb, Ben S; McGraw, Paul V

    2014-01-01

    Perceptual learning of visual tasks is emerging as a promising treatment for amblyopia, a developmental disorder of vision characterized by poor monocular visual acuity. The tasks tested thus far span the gamut from basic psychophysical discriminations to visually complex video games. One end of the spectrum offers precise control over stimulus parameters, whilst the other delivers the benefits of motivation and reward that sustain practice over long periods. Here, we combined the advantages of both approaches by developing a video game that trains contrast sensitivity, which in psychophysical experiments, is associated with significant improvements in visual acuity in amblyopia. Target contrast was varied adaptively in the game to derive a contrast threshold for each session. We tested the game on 20 amblyopic subjects (10 children and 10 adults), who played at home using their amblyopic eye for an average of 37 sessions (approximately 11 h). Contrast thresholds from the game improved reliably for adults but not for children. However, logMAR acuity improved for both groups (mean = 1.3 lines; range = 0-3.6 lines). We present the rationale leading to the development of the game and describe the challenges of incorporating psychophysical methods into game-like settings.
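
    The abstract states only that target contrast was varied adaptively to derive a per-session contrast threshold, without naming the rule. The sketch below shows one common adaptive rule (a 3-down/1-up staircase with a simulated observer) purely to illustrate how a session threshold can be extracted from reversal points; it is not the game's actual algorithm.

```python
import random

def staircase_threshold(true_threshold, start_contrast=0.5, step=0.05,
                        n_trials=80, seed=1):
    """Toy 3-down/1-up adaptive staircase with a simulated observer.
    Contrast decreases after three consecutive correct responses and
    increases after one error; the session threshold is taken as the mean
    contrast at the last few reversal points. Illustrative only."""
    rng = random.Random(seed)
    contrast, correct_run, last_dir, reversals = start_contrast, 0, None, []
    for _ in range(n_trials):
        # Crude simulated observer: mostly correct above its true threshold.
        p_correct = 0.95 if contrast >= true_threshold else 0.55
        if rng.random() < p_correct:
            correct_run += 1
            if correct_run == 3:
                correct_run = 0
                if last_dir == "up":
                    reversals.append(contrast)
                contrast, last_dir = max(contrast - step, 0.01), "down"
        else:
            correct_run = 0
            if last_dir == "down":
                reversals.append(contrast)
            contrast, last_dir = min(contrast + step, 1.0), "up"
    tail = reversals[-6:]
    return sum(tail) / len(tail) if tail else contrast

print(round(staircase_threshold(true_threshold=0.2), 3))
```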

  7. Stimulated emission and optical properties of pyranyliden fragment containing compounds in PVK matrix

    NASA Astrophysics Data System (ADS)

    Vembris, Aivars; Zarins, Elmars; Kokars, Valdis

    2017-10-01

    Organic solid state lasers are thoroughly investigated due to their potential applications in communication, sensors, biomedicine, etc. A low amplified spontaneous emission (ASE) excitation threshold is essential for further use of a material in devices. Intermolecular interaction limits how densely the molecules can be loaded in the matrix. This is the case for the well-known red light emitting laser dye 4-(dicyanomethylene)-2-methyl-6-(4-dimethylaminostyryl)-4H-pyran (DCM): the lowest ASE threshold of this dye is obtained within the concentration range of 2 to 4 wt%, and at higher concentrations the threshold energy increases drastically. In this work the optical and ASE properties of three original DCM derivatives in poly(N-vinylcarbazole) (PVK) at various concentrations are discussed. One of the derivatives is a modified DCM dye in which the methyl substituents in the electron donor part have been replaced with bulky trityloxyethyl groups (DWK-1). These sterically significant functional groups do not influence electron transitions in the dye but prevent aggregation of the molecules. The chemical structure of the second investigated compound is similar to DWK-1, with the methyl group replaced by a tert-butyl substituent (DWK-1TB). The third derivative (DWK-2) consists of two N,N-di(trityloxyethyl)amino electron donor groups. All results were compared with the DCM:PVK system. The photoluminescence quantum yield (PLQY) is up to ten times larger for DWK-1TB than for the DCM systems. Bulky trityloxyethyl groups prevent aggregation of the molecules, decreasing dye-dye interactions and the number of non-radiative decays. A red shift of the photoluminescence and amplified spontaneous emission at higher concentrations was observed due to the solid state solvation effect. The increased dye density in the matrix, with a smaller reduction in PLQY, resulted in low ASE threshold energies. The lowest threshold value, around 21 μJ/cm2 (2.1 kW/cm2), was obtained in DWK-1TB:PVK films.

  8. The self-perception of dyspnoea threshold during the 6-min walk test: a good alternative to estimate the ventilatory threshold in chronic obstructive pulmonary disease.

    PubMed

    Couillard, Annabelle; Tremey, Emilie; Prefaut, Christian; Varray, Alain; Heraud, Nelly

    2016-12-01

    To determine and/or adjust exercise training intensity for patients when the cardiopulmonary exercise test is not accessible, the determination of the dyspnoea threshold (defined as the onset of self-perceived breathing discomfort) during the 6-min walk test (6MWT) could be a good alternative. The aim of this study was to evaluate the feasibility and reproducibility of the self-perceived dyspnoea threshold and to determine whether a useful equation to estimate the ventilatory threshold from the self-perceived dyspnoea threshold could be derived. A total of 82 patients were included and performed two 6MWTs, during which they raised a hand to signal the self-perceived dyspnoea threshold. The reproducibility in terms of heart rate (HR) was analysed. On a subsample of patients (n=27), a stepwise regression analysis was carried out to obtain a predictive equation for HR at the ventilatory threshold (measured during a cardiopulmonary exercise test), estimated from HR at the self-perceived dyspnoea threshold, age and forced expiratory volume in 1 s. Overall, 80% of patients could identify the self-perceived dyspnoea threshold during the 6MWT. The self-perceived dyspnoea threshold was reproducible when expressed in HR (coefficient of variation=2.8%). The stepwise regression analysis enabled estimation of HR at the ventilatory threshold from HR at the self-perceived dyspnoea threshold, age and forced expiratory volume in 1 s (adjusted r=0.79, r=0.63, and relative standard deviation=9.8 bpm). This study shows that a majority of patients with chronic obstructive pulmonary disease can identify a self-perceived dyspnoea threshold during the 6MWT. This HR at the dyspnoea threshold is highly reproducible and enables estimation of the HR at the ventilatory threshold.
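
    The published equation predicts HR at the ventilatory threshold from HR at the self-perceived dyspnoea threshold, age and FEV1, but its coefficients are not reproduced in the abstract. The sketch below fits a regression of that form to synthetic example data to show the mechanics only; none of the numbers are from the study.

```python
import numpy as np

# Hypothetical illustration of the reported model form: HR at the ventilatory
# threshold predicted from HR at the self-perceived dyspnoea threshold, age and
# FEV1. The data and fitted coefficients below are synthetic, NOT the study's.
rng = np.random.default_rng(42)
n = 27
hr_dyspnoea = rng.normal(110, 12, n)   # bpm
age = rng.normal(65, 8, n)             # years
fev1 = rng.normal(1.4, 0.4, n)         # litres
hr_vt = 0.8 * hr_dyspnoea - 0.2 * age + 5.0 * fev1 + 20 + rng.normal(0, 5, n)

X = np.column_stack([np.ones(n), hr_dyspnoea, age, fev1])
coef, *_ = np.linalg.lstsq(X, hr_vt, rcond=None)   # ordinary least squares
predicted = X @ coef
print("coefficients (intercept, HR_dyspnoea, age, FEV1):", np.round(coef, 2))
print("residual SD (bpm):", round(float(np.std(hr_vt - predicted, ddof=4)), 1))
```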

  9. Dynamics of social contagions with memory of nonredundant information

    NASA Astrophysics Data System (ADS)

    Wang, Wei; Tang, Ming; Zhang, Hai-Feng; Lai, Ying-Cheng

    2015-07-01

    A key ingredient in social contagion dynamics is reinforcement, as adopting a certain social behavior requires verification of its credibility and legitimacy. Memory of nonredundant information plays an important role in reinforcement, which so far has eluded theoretical analysis. We first propose a general social contagion model with reinforcement derived from nonredundant information memory. Then, we develop a unified edge-based compartmental theory to analyze this model, and a remarkable agreement with numerics is obtained on some specific models. We use a spreading threshold model as a specific example to understand the memory effect, in which each individual adopts a social behavior only when the cumulative pieces of information that the individual received from his or her neighbors exceeds an adoption threshold. Through analysis and numerical simulations, we find that the memory characteristic markedly affects the dynamics as quantified by the final adoption size. Strikingly, we uncover a transition phenomenon in which the dependence of the final adoption size on some key parameters, such as the transmission probability, can change from being discontinuous to being continuous. The transition can be triggered by proper parameters and structural perturbations to the system, such as decreasing individuals' adoption threshold, increasing initial seed size, or enhancing the network heterogeneity.

  10. Beverton-Holt discrete pest management models with pulsed chemical control and evolution of pesticide resistance

    NASA Astrophysics Data System (ADS)

    Liang, Juhua; Tang, Sanyi; Cheke, Robert A.

    2016-07-01

    Pest resistance to pesticides is usually managed by switching between different types of pesticides. The optimal switching time, which depends on the dynamics of the pest population and on the evolution of the pesticide resistance, is critical. Here we address how the dynamic complexity of the pest population, the development of resistance and the spraying frequency of pulsed chemical control affect optimal switching strategies given different control aims. To do this, we developed novel discrete pest population growth models with both impulsive chemical control and the evolution of pesticide resistance. Strong and weak threshold conditions which guarantee the extinction of the pest population, based on the threshold values of the analytical formula for the optimal switching time, were derived. Further, we addressed switching strategies in the light of chosen economic injury levels. Moreover, the effects of the complex dynamical behaviour of the pest population on the pesticide switching times were also studied. The pesticide application period, the evolution of pesticide resistance and the dynamic complexity of the pest population may result in complex outbreak patterns, with consequent effects on the pesticide switching strategies.

  11. Threshold Laws for Two-Electron Ejection Processes: A Still Controversial Problem in Atomic Physics

    NASA Technical Reports Server (NTRS)

    Temkin, Aaron

    2003-01-01

    This talk deals with collision processes of the following kind: (a) an ionizing collision of an electron with a neutral atom, (b) a photon incident on a negative ion resulting in two-electron ejection. In both cases the final state is a positive ion and two outgoing electrons, and in principle both processes should be governed by the same form of threshold law. It is generally conceded that this is one of the most difficult basic problems in nonrelativistic quantum mechanics. The standard treatment (due to Wannier) will be briefly reviewed in terms of the derivation of his well-known threshold law for the yield (Q) of positive ions vs. the excess energy (E): Q_W ∝ E^1.127.... The derivation is a brilliant analysis based on Newton's equations, leading to the dominance of events in which the two electrons emerge on opposite sides of the residual ion with similar energies. In contrast, I will argue on the basis of quantum mechanical ideas that in the threshold limit the more likely outcomes are events in which the electrons emerge with decidedly different energies, leading to a formally different (Coulomb-dipole) threshold law: Q_CD ∝ E [1 + C sin(α ln E + μ)] / [ln E]^2. Additional aspects of that approach will be discussed. Some experimental results will be presented, and more incisive predictions involving polarized projectiles and targets will be given.

  12. Amplified spontaneous emission of pyranyliden derivatives in PVK matrix

    NASA Astrophysics Data System (ADS)

    Vembris, Aivars; Zarinsh, Elmars; Kokars, Valdis

    2016-04-01

    One of the well-known red light emitting laser dyes is 4-(dicyanomethylene)-2-methyl-6-(4-dimethylaminostyryl)-4H-pyran (DCM). Amplified spontaneous emission (ASE) of DCM molecules and their derivatives in polymer or low molecular weight matrices has been widely investigated. The main issue for these molecules is aggregation, which limits the doping concentration in the matrix; the lowest ASE threshold values are obtained within the concentration range of 2 to 4 wt%. In this work the ASE properties of two original DCM derivatives in poly(N-vinylcarbazole) (PVK) at various concentrations are discussed. One of the derivatives is the DCM dye in which the methyl groups at the electron donor part are replaced with bulky trityloxyethyl groups (DWK-1). These groups do not influence electron transitions in the dye but prevent aggregation of the molecules. The second derivative (DWK-2) consists of two equal donor groups bearing the attached trityloxyethyl groups. All results were compared with the DCM:PVK system. The photoluminescence quantum yield (PLQY) is almost three times larger than for the DCM systems at DWK-1 concentrations up to 20 wt%, and saturates at 0.06 at higher DWK-1 concentrations. Bulky trityloxyethyl groups prevent aggregation of the molecules, decreasing dye-dye interactions and the number of non-radiative decays. A red shift of the photoluminescence and amplified spontaneous emission at higher concentrations was observed due to the solid state solvation effect. The increased dye density in the matrix, with a smaller loss in PLQY, resulted in low ASE threshold energy. The lowest threshold value, around 29 μJ/cm2, was obtained in DWK-1:PVK films.

  13. Joint maximum-likelihood magnitudes of presumed underground nuclear test explosions

    NASA Astrophysics Data System (ADS)

    Peacock, Sheila; Douglas, Alan; Bowers, David

    2017-08-01

    Body-wave magnitudes (mb) of 606 seismic disturbances caused by presumed underground nuclear test explosions at specific test sites between 1964 and 1996 have been derived from station amplitudes collected by the International Seismological Centre (ISC), by a joint inversion for mb and station-specific magnitude corrections. A maximum-likelihood method was used to reduce the upward bias of network mean magnitudes caused by data censoring, where arrivals at stations that do not report arrivals are assumed to be hidden by the ambient noise at the time. Threshold noise levels at each station were derived from the ISC amplitudes using the method of Kelly and Lacoss, which fits to the observed magnitude-frequency distribution a Gutenberg-Richter exponential decay truncated at low magnitudes by an error function representing the low-magnitude threshold of the station. The joint maximum-likelihood inversion is applied to arrivals from the sites: Semipalatinsk (Kazakhstan) and Novaya Zemlya, former Soviet Union; Singer (Lop Nor), China; Mururoa and Fangataufa, French Polynesia; and Nevada, USA. At sites where eight or more arrivals could be used to derive magnitudes and station terms for 25 or more explosions (Nevada, Semipalatinsk and Mururoa), the resulting magnitudes and station terms were fixed and a second inversion carried out to derive magnitudes for additional explosions with three or more arrivals. 93 more magnitudes were thus derived. During processing for station thresholds, many stations were rejected for sparsity of data, obvious errors in reported amplitude, or great departure of the reported amplitude-frequency distribution from the expected left-truncated exponential decay. Abrupt changes in monthly mean amplitude at a station apparently coincide with changes in recording equipment and/or analysis method at the station.
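
    The station-threshold model attributed to Kelly and Lacoss is a Gutenberg-Richter exponential decay left-truncated by an error-function detection term. The sketch below shows one way such a model could be fitted by maximum likelihood to a set of station magnitudes; the data are synthetic and the closed-form normalizer is part of this illustration's assumptions, not the paper's processing code.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def neg_log_likelihood(params, magnitudes):
    """Negative log-likelihood of a Gutenberg-Richter exponential decay
    left-truncated by an error-function detection threshold (the form of
    station magnitude-frequency model attributed to Kelly and Lacoss).
    params = (beta, mu, sigma): decay rate, threshold magnitude, threshold
    width. The closed-form normalizer (1/beta)*exp(-beta*mu + beta^2*sigma^2/2)
    follows from integrating the unnormalized density over all magnitudes."""
    beta, mu, sigma = params
    if beta <= 0 or sigma <= 0:
        return np.inf
    m = np.asarray(magnitudes)
    log_unnorm = -beta * m + norm.logcdf((m - mu) / sigma)
    log_norm = -np.log(beta) - beta * mu + 0.5 * beta**2 * sigma**2
    return -(log_unnorm.sum() - m.size * log_norm)

# Synthetic "station" magnitudes: true G-R decay (beta = 2) thinned by an
# error-function detection probability centred on magnitude 4.2.
rng = np.random.default_rng(0)
true_m = rng.exponential(0.5, 20000) + 3.0
detected = true_m[rng.random(true_m.size) < norm.cdf((true_m - 4.2) / 0.25)]
fit = minimize(neg_log_likelihood, x0=(1.5, 4.0, 0.3), args=(detected,),
               method="Nelder-Mead")
print("estimated (beta, mu, sigma):", np.round(fit.x, 2))
```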

  14. Phytotoxicity and accumulation of chromium in carrot plants and the derivation of soil thresholds for Chinese soils.

    PubMed

    Ding, Changfeng; Li, Xiaogang; Zhang, Taolin; Ma, Yibing; Wang, Xingxiang

    2014-10-01

    Soil environmental quality standards for heavy metals in farmland should be established considering both their effects on crop yield and their accumulation in the edible part. A greenhouse experiment was conducted to investigate the effects of chromium (Cr) on biomass production and Cr accumulation in carrot plants grown in a wide range of soils. The results revealed that carrot yield significantly decreased in 18 of the 20 soils when Cr was added at the level of the soil environmental quality standard of China. The Cr content of carrots grown in the five soils with pH > 8.0 exceeded the maximum allowable level (0.5 mg kg-1) according to the Chinese General Standard for Contaminants in Foods. The relationship between carrot Cr concentration and soil pH was well fitted (R2 = 0.70, P < 0.0001) by a linear-linear segmented regression model. The addition of Cr to soil thus affected carrot yield before it affected food quality. The major soil factors controlling Cr phytotoxicity were identified, and prediction models were developed using path analysis and stepwise multiple linear regression analysis. Soil Cr thresholds that avoid phytotoxicity while ensuring food safety were then derived on the basis of a 10 percent yield reduction. Copyright © 2014 Elsevier Inc. All rights reserved.

  15. Molecular beam mass spectrometer equipped with a catalytic wall reactor for in situ studies in high temperature catalysis research

    NASA Astrophysics Data System (ADS)

    Horn, R.; Ihmann, K.; Ihmann, J.; Jentoft, F. C.; Geske, M.; Taha, A.; Pelzer, K.; Schlögl, R.

    2006-05-01

    A newly developed apparatus combining a molecular beam mass spectrometer and a catalytic wall reactor is described. The setup has been developed for in situ studies of high temperature catalytic reactions (>1000 °C) whose mechanisms involve gas phase reactions in addition to surface reactions. The goal is to identify gas phase radicals by threshold ionization. A tubular reactor, made from the catalytic material, is positioned in a vacuum chamber. Expansion of the gas through a 100 μm sampling orifice in the reactor wall into differentially pumped nozzle, skimmer, and collimator chambers leads to the formation of a molecular beam. A quadrupole mass spectrometer with an electron impact ion source designed for molecular beam inlet and threshold ionization measurements is used as the analyzer. The sampling time from nozzle to detector is estimated to be less than 10 ms, and a detection time resolution of up to 20 ms can be reached. The temperature of the reactor is measured by pyrometry. Besides a detailed description of the setup components and the physical background of the method, this article presents measurements showing the performance of the apparatus. After deriving the shape and width of the energy spread of the ionizing electrons from measurements on N2 and He, we estimated the detection limit in threshold ionization measurements, using binary mixtures of CO in N2, to be in the range of several hundred ppm. Mass spectra and threshold ionization measurements recorded during catalytic partial oxidation of methane at 1250 °C on a Pt catalyst are presented. The detection of CH3• radicals is successfully demonstrated.

  16. Molecular beam mass spectrometer equipped with a catalytic wall reactor for in situ studies in high temperature catalysis research

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Horn, R.; Ihmann, K.; Ihmann, J.

    2006-05-15

    A newly developed apparatus combining a molecular beam mass spectrometer and a catalytic wall reactor is described. The setup has been developed for in situ studies of high temperature catalytic reactions (>1000 °C) whose mechanisms involve gas phase reactions in addition to surface reactions. The goal is to identify gas phase radicals by threshold ionization. A tubular reactor, made from the catalytic material, is positioned in a vacuum chamber. Expansion of the gas through a 100 μm sampling orifice in the reactor wall into differentially pumped nozzle, skimmer, and collimator chambers leads to the formation of a molecular beam. A quadrupole mass spectrometer with an electron impact ion source designed for molecular beam inlet and threshold ionization measurements is used as the analyzer. The sampling time from nozzle to detector is estimated to be less than 10 ms, and a detection time resolution of up to 20 ms can be reached. The temperature of the reactor is measured by pyrometry. Besides a detailed description of the setup components and the physical background of the method, this article presents measurements showing the performance of the apparatus. After deriving the shape and width of the energy spread of the ionizing electrons from measurements on N2 and He, we estimated the detection limit in threshold ionization measurements, using binary mixtures of CO in N2, to be in the range of several hundred ppm. Mass spectra and threshold ionization measurements recorded during catalytic partial oxidation of methane at 1250 °C on a Pt catalyst are presented. The detection of CH3• radicals is successfully demonstrated.

  17. High speed point derivative microseismic detector

    DOEpatents

    Uhl, J.E.; Warpinski, N.R.; Whetten, E.B.

    1998-06-30

    A high speed microseismic event detector constructed in accordance with the present invention uses a point derivative comb to quickly and accurately detect microseismic events. Compressional and shear waves impinging upon microseismic receiver stations disposed to collect waves are converted into digital data and analyzed using a point derivative comb including assurance of quiet periods prior to declaration of microseismic events. If a sufficient number of quiet periods have passed, the square of a two point derivative of the incoming digital signal is compared to a trip level threshold exceeding the determined noise level to declare a valid trial event. The squaring of the derivative emphasizes the differences between noise and signal, and the valid event is preferably declared when the trip threshold has been exceeded over a temporal comb width to realize a comb over a given time period. Once a trial event has been declared, the event is verified through a spatial comb, which applies the temporal event comb to additional stations. The detector according to the present invention quickly and accurately detects initial compressional waves indicative of a microseismic event which typically exceed the ambient cultural noise level by a small amount, and distinguishes the waves from subsequent larger amplitude shear waves. 9 figs.
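
    The core trip test described in the patent, squaring a two-point derivative and requiring both a preceding quiet period and exceedance over a temporal comb, can be sketched as follows. Parameter names and values are illustrative, and the spatial comb that verifies the event at additional stations is omitted.

```python
import numpy as np

def detect_trial_events(signal, trip_level, comb_width=5, quiet_samples=50):
    """Square a two-point derivative of the digitized trace, require a run of
    quiet (sub-threshold) samples beforehand, and declare a trial event when
    the squared derivative exceeds the trip level across a temporal comb.
    Parameter values are illustrative; the spatial comb that verifies the
    event at additional stations is not shown."""
    x = np.asarray(signal, dtype=float)
    deriv_sq = np.diff(x) ** 2                 # squared two-point derivative
    hot = deriv_sq > trip_level
    events, quiet_run, i = [], 0, 0
    while i < hot.size:
        if not hot[i]:
            quiet_run += 1
            i += 1
            continue
        # Trip sample: valid only after enough quiet samples and if the trip
        # level is exceeded over the full comb width.
        if quiet_run >= quiet_samples and hot[i:i + comb_width].all():
            events.append(i)
            i += comb_width
        else:
            i += 1
        quiet_run = 0
    return events

rng = np.random.default_rng(3)
trace = rng.normal(0.0, 1.0, 2000)
trace[1200:1210] += np.linspace(0.0, 80.0, 10)   # synthetic compressional arrival
print(detect_trial_events(trace, trip_level=25.0))
```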

  18. High speed point derivative microseismic detector

    DOEpatents

    Uhl, James Eugene; Warpinski, Norman Raymond; Whetten, Ernest Blayne

    1998-01-01

    A high speed microseismic event detector constructed in accordance with the present invention uses a point derivative comb to quickly and accurately detect microseismic events. Compressional and shear waves impinging upon microseismic receiver stations disposed to collect waves are converted into digital data and analyzed using a point derivative comb including assurance of quiet periods prior to declaration of microseismic events. If a sufficient number of quiet periods have passed, the square of a two point derivative of the incoming digital signal is compared to a trip level threshold exceeding the determined noise level to declare a valid trial event. The squaring of the derivative emphasizes the differences between noise and signal, and the valid event is preferably declared when the trip threshold has been exceeded over a temporal comb width to realize a comb over a given time period. Once a trial event has been declared, the event is verified through a spatial comb, which applies the temporal event comb to additional stations. The detector according to the present invention quickly and accurately detects initial compressional waves indicative of a microseismic event which typically exceed the ambient cultural noise level by a small amount, and distinguishes the waves from subsequent larger amplitude shear waves.

  19. Threshold law for electron-atom impact ionization

    NASA Technical Reports Server (NTRS)

    Temkin, A.

    1982-01-01

    A derivation of the explicit form of the threshold law for electron impact ionization of atoms is presented, based on the Coulomb-dipole theory. The important generalization is made of using a dipole function whose moment is the dipole moment formed by an inner electron and the nucleus. The result is a modulated quasi-linear law for the yield of positive ions which applies to positron-atom impact ionization.

  20. Searching for low percolation thresholds within amphiphilic polymer membranes: The effect of side chain branching

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dorenbos, G., E-mail: dorenbos@ny.thn.ne.jp

    Percolation thresholds for solvent diffusion within hydrated model polymeric membranes are derived from dissipative particle dynamics in combination with Monte Carlo (MC) tracer diffusion calculations. The polymer backbones are composed of hydrophobic A beads to which at regular intervals Y-shaped side chains are attached. Each side chain is composed of eight A beads and contains two identical branches that are each terminated with a pendant hydrophilic C bead. Four types of side chains are considered for which the two branches (each represented as [C], [AC], [AAC], or [AAAC]) are splitting off from the 8th, 6th, 4th, or 2nd A bead, respectively. Water diffusion through the phase separated water containing pore networks is deduced from MC tracer diffusion calculations. The percolation threshold for the architectures containing the [C] and [AC] branches is at a water volume fraction of ∼0.07 and 0.08, respectively. These are much lower than those derived earlier for linear architectures of various side chain length and side chain distributions. Control of side chain architecture is thus a very interesting design parameter to decrease the percolation threshold for solvent and proton transports within flexible amphiphilic polymer membranes.

  1. Identifying Threshold Concepts in the Careers of Educational Developers

    ERIC Educational Resources Information Center

    Timmermans, Julie A.

    2014-01-01

    The purpose of this multiple case study was to identify threshold concepts in the careers of educational developers. Twenty-one common threshold concepts emerged, with one threshold concept common among all participants: Facilitating a change process. The remaining 20 threshold concepts were captured in the following three categories: (1) Ways of…

  2. Increased anesthesia time using 2,2,2-tribromoethanol-chloral hydrate with low impact on mouse psychoacoustics.

    PubMed

    Maheras, Kathleen J; Gow, Alexander

    2013-09-30

    To examine psychoacoustics in mice, we have used 2,2,2-tribromoethanol anesthesia in multiple studies. We find this drug is fast-acting and yields consistent results, providing 25-30 min of anesthesia. Our recent studies in binaural hearing prompted development of a regimen to extend anesthesia time to 1 h. We tested a novel cocktail using 2,2,2-tribromoethanol coupled with low dose chloral hydrate to extend the effective anesthesia time. We have established an intraperitoneal dosing regimen for 2,2,2-tribromoethanol-chloral hydrate anesthesia. To measure efficacy of the drug cocktail, we measured auditory brainstem responses (ABRs) at 10 min intervals to determine the effects on hearing thresholds and wave amplitudes and latencies. This novel drug combination increases effective anesthesia to 1 h. ABR Wave I amplitudes, but not latencies, are marginally suppressed. Additionally, amplitudes of the centrally derived Waves III and V show significant inter-animal variability that is independent of stimulus intensity. These data argue against the systematic suppression of ABRs by the drug cocktail. Using the 2,2,2-tribromoethanol-chloral hydrate combination in psychoacoustic studies has several advantages over other drug cocktails, the most important being preservation of latencies from centrally- and peripherally-derived ABR waves. In addition, hearing thresholds are unchanged and wave amplitudes are not systematically suppressed, although they exhibit greater variability. We demonstrate that 375 mg/kg 2,2,2-tribromoethanol followed after 5 min by 200 mg/kg chloral hydrate provides an anesthesia time of 60 min, has negligible effects on ABR wave latencies and thresholds and non-systematic effects on amplitudes. Copyright © 2013 Elsevier B.V. All rights reserved.

  3. Increased Anesthesia Time Using 2,2,2-tribromoethanol-Chloral Hydrate With Low Impact On Mouse Psychoacoustics

    PubMed Central

    Maheras, Kathleen J.; Gow, Alexander

    2013-01-01

    Background To examine psychoacoustics in mice, we have used 2,2,2-tribromoethanol anesthesia in multiple studies. We find this drug is fast-acting and yields consistent results, providing 30 – 40 min of anesthesia. Our recent studies in binaural hearing prompted development of a regimen to extend anesthesia time to one hour. We tested a novel cocktail using 2,2,2-tribromoethanol coupled with low dose chloral hydrate to extend the effective anesthesia time. New Method We have established an intraperitoneal dosing regimen for 2,2,2-tribromoethanol-chloral hydrate anesthesia. To measure efficacy of the drug cocktail, we measured auditory brainstem responses (ABRs) at 10 min intervals to determine the effects on hearing thresholds and wave amplitudes and latencies. Results This novel drug combination increases effective anesthesia to one hour. ABR Wave I amplitudes, but not latencies, are marginally suppressed. Additionally, amplitudes of the centrally-derived Waves III and V show significant inter-animal variability that is independent of stimulus intensity. These data argue against the systematic suppression of ABRs by the drug cocktail. Comparison with Existing Methods Using the 2,2,2-tribromoethanol-chloral hydrate combination in psychoacoustic studies has several advantages over other drug cocktails, the most important being preservation of latencies from centrally- and peripherally-derived ABR waves. In addition, hearing thresholds are unchanged and wave amplitudes are not systematically suppressed, although they exhibit greater variability. Conclusions We demonstrate that 375 mg/kg 2,2,2-tribromoethanol followed after five min by 200 mg/kg chloral hydrate provides an anesthesia time of 60 min, has negligible effects on ABR wave latencies and thresholds and non-systematic effects on amplitudes. PMID:23856212

  4. On singlet s-wave electron-hydrogen scattering.

    NASA Technical Reports Server (NTRS)

    Madan, R. N.

    1973-01-01

    Discussion of various zeroth-order approximations to s-wave scattering of electrons by hydrogen atoms below the first excitation threshold. The formalism previously developed by the author (1967, 1968) is applied to Feshbach operators to derive integro-differential equations, with the optical-potential set equal to zero, for the singlet and triplet cases. Phase shifts of s-wave scattering are computed in the zeroth-order approximation of the Feshbach operator method and in the static-exchange approximation. It is found that the convergence of numerical computations is faster in the former approximation than in the latter.

  5. Long ligands reinforce biological adhesion under shear flow

    NASA Astrophysics Data System (ADS)

    Belyaev, Aleksey V.

    2018-04-01

    In this work, computer modeling has been used to show that longer ligands allow biological cells (e.g., blood platelets) to withstand stronger flows after their adhesion to solid walls. A mechanistic model of polymer-mediated ligand-receptor adhesion between a microparticle (cell) and a flat wall has been developed. The theoretical threshold between adherent and non-adherent regimes has been derived analytically and confirmed by simulations. These results lead to a deeper understanding of numerous biophysical processes, e.g., arterial thrombosis, and to the design of new biomimetic colloid-polymer systems.

  6. Energy as a witness of multipartite entanglement in chains of arbitrary spins

    NASA Astrophysics Data System (ADS)

    Troiani, F.; Siloi, I.

    2012-09-01

    We develop a general approach for deriving the energy minima of biseparable states in chains of arbitrary spins s, and we report numerical results for spin values s≤5/2 (with N≤8). The minima provide a set of threshold values for exchange energy that allow us to detect different degrees of multipartite entanglement in one-dimensional spin systems. We finally demonstrate that the Heisenberg exchange Hamiltonian of N spins has a nondegenerate N-partite entangled ground state, and it can thus witness such correlations in all finite spin chains.

  7. Using default methodologies to derive an acceptable daily exposure (ADE).

    PubMed

    Faria, Ellen C; Bercu, Joel P; Dolan, David G; Morinello, Eric J; Pecquet, Alison M; Seaman, Christopher; Sehner, Claudia; Weideman, Patricia A

    2016-08-01

    This manuscript discusses the different historical and more recent default approaches that have been used to derive an acceptable daily exposure (ADE). While it is preferable to derive a health-based ADE based on a complete nonclinical and clinical data package, this is not always possible. For instance, for drug candidates in early development there may be no or limited nonclinical or clinical trial data. Alternative approaches that can support decision making with less complete data packages represent a variety of methods that rely on default assumptions or data inputs where chemical-specific data on health effects are lacking. A variety of default approaches are used including those based on certain toxicity estimates, a fraction of the therapeutic dose, cleaning-based limits, the threshold of toxicological concern (TTC), and application of hazard banding tools such as occupational exposure banding (OEB). Each of these default approaches is discussed in this manuscript, including their derivation, application, strengths, and limitations. In order to ensure patient safety when faced with toxicological and clinical data-gaps, default ADE methods should be purposefully as or more protective than ADEs derived from full data packages. Reliance on the subset of default approaches (e.g., TTC or OEB) that are based on toxicological data is preferred over other methods for establishing ADEs in early development while toxicology and clinical data are still being collected. Copyright © 2016. Published by Elsevier Inc.

  8. Computational Modeling of Semiconductor Dynamics at Femtosecond Time Scales

    NASA Technical Reports Server (NTRS)

    Agrawal, Govind P.; Goorjian, Peter M.

    1998-01-01

    The main objective of the Joint-Research Interchange NCC2-5149 was to develop computer codes for accurate simulation of femtosecond pulse propagation in semiconductor lasers and semiconductor amplifiers [1]. The code should take into account all relevant processes such as the interband and intraband carrier relaxation mechanisms and the many-body effects arising from the Coulomb interaction among charge carriers [2]. This objective was fully accomplished. We made use of a previously developed algorithm developed at NASA Ames [3]-[5]. The new algorithm was tested on several problems of practical importance. One such problem was related to the amplification of femtosecond optical pulses in semiconductors. These results were presented in several international conferences over a period of three years. With the help of a postdoctoral fellow, we also investigated the origin of instabilities that can lead to the formation of femtosecond pulses in different kinds of lasers. We analyzed the occurrence of absolute instabilities in lasers that contain a dispersive host material with third-order nonlinearities. Starting from the Maxwell-Bloch equations, we derived general multimode equations to distinguish between convective and absolute instabilities. We find that both self-phase modulation and intensity-dependent absorption can dramatically affect the absolute stability of such lasers. In particular, the self-pulsing threshold (the so-called second laser threshold) can occur at a few times the first laser threshold even in good-cavity lasers for which no self-pulsing occurs in the absence of intensity-dependent absorption. These results were presented in an international conference and published in the form of two papers.

  9. Modelling of surface-water temperature for the estimation of the Czech fishery productivity under the climate change

    NASA Astrophysics Data System (ADS)

    Svobodová, Eva; Trnka, Miroslav; Kopp, Radovan; Mareš, Jan; Dubrovský, Martin; Spurný, Petr; Žalud, Zděněk

    2015-04-01

    Freshwater fish production is significantly correlated with water temperature, which is expected to increase under climate change. This study estimates the change in water temperature in production ponds and its impact on fisheries in the Czech Republic. A calculation of surface-water temperature based on the three-day mean of air temperature was developed and tested in several ponds in the three main fish production areas. The model output was compared with measured data and showed that the lower limit of model validity is a surface-water temperature of 3°C; below this threshold the model loses its predictive competence. For surface-water temperatures above 3°C, the model showed good agreement between observed and modelled values (R = 0.79-0.96). The verified model was applied under climate change conditions determined by the pattern scaling method, with standardised scenarios derived from five global circulation models (MPEH5, CSMK3, IPCM4, GFCM21 and HADGEM). Results were evaluated with regard to thresholds which characterise the water-temperature requirements of the fish species. The thresholds comprised the upper temperature threshold for fish survival and the tolerable number of days in a continuous period above that threshold. Target fish species were Common carp (Cyprinus carpio), Maraene whitefish (Coregonus maraena), Northern whitefish (Coregonus peled) and Rainbow trout (Oncorhynchus mykiss). Results indicated limitations for Czech fish farming in terms of i) an increase in the length of continuous periods with surface-water temperature above the threshold tolerated by a given fish species, ii) an increase in the number of such continuous periods, and iii) an increase in the overall number of days within continuous periods with temperature above the threshold tolerated by a given fish species. ACKNOWLEDGEMENTS: This study was funded by project "Building up a multidisciplinary scientific team focused on drought" No. CZ.1.07/2.3.00/20.0248.
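
    The abstract describes the surface-water temperature model only as being based on the three-day mean of air temperature, with a stated validity limit of 3°C. A minimal sketch of that idea, assuming a simple linear regression form and synthetic pond data, is shown below; the coefficients are not those of the published model.

```python
import numpy as np

def three_day_mean(air_temp):
    """Trailing three-day running mean of daily air temperature (deg C)."""
    t = np.asarray(air_temp, dtype=float)
    return np.convolve(t, np.ones(3) / 3.0, mode="valid")

def fit_water_temp_model(air_temp, water_temp):
    """Fit water_temp ~ a + b * three_day_mean(air_temp) by least squares.
    The linear form and every number here are assumptions for illustration;
    predictions below 3 deg C lie outside the stated range of validity."""
    x = three_day_mean(air_temp)
    y = np.asarray(water_temp, dtype=float)[2:]    # align with the mean window
    b, a = np.polyfit(x, y, 1)
    return a, b

air = np.array([12, 14, 15, 17, 18, 20, 22, 21, 19, 18], dtype=float)
water = 2.0 + 0.9 * air                            # synthetic pond series
a, b = fit_water_temp_model(air, water)
pred = a + b * three_day_mean(air)
print(np.round(pred, 1), "within validity range:", bool((pred >= 3.0).all()))
```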

  10. Spectral singularities, threshold gain, and output intensity for a slab laser with mirrors

    NASA Astrophysics Data System (ADS)

    Doğan, Keremcan; Mostafazadeh, Ali; Sarısaman, Mustafa

    2018-05-01

    We explore the consequences of the emergence of linear and nonlinear spectral singularities in TE modes of a homogeneous slab of active optical material that is placed between two mirrors. We use the results together with two basic postulates regarding the behavior of laser light emission to derive explicit expressions for the laser threshold condition and output intensity for these modes of the slab and discuss their physical implications. In particular, we reveal the details of the dependence of the threshold gain and output intensity on the position and properties of the mirrors and on the real part of the refractive index of the gain material.

  11. Communication: Classical threshold law for ion-neutral-neutral three-body recombination

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pérez-Ríos, Jesús; Greene, Chris H.

    2015-07-28

    A very recently developed method for classical trajectory calculations of three-body collisions [Pérez-Ríos et al., J. Chem. Phys. 140, 044307 (2014)] has been applied to describe ion-neutral-neutral ternary processes at low collision energies: 0.1 mK–10 mK. As a result, a threshold law for the three-body recombination cross section is obtained and corroborated numerically. The derived threshold law predicts the formation of weakly bound dimers, with binding energies comparable to the collision energy of the collisional partners. In this low energy range, the analysis predicts that molecular ions should dominate over molecular neutrals as the most likely products formed.

  12. Electrical percolation threshold of magnetostrictive inclusions in a piezoelectric matrix composite as a function of relative particle size

    NASA Astrophysics Data System (ADS)

    Barbero, Ever J.; Bedard, Antoine Joseph

    2018-04-01

    Magnetoelectric composites can be produced by embedding magnetostrictive particles in a piezoelectric matrix derived from a piezoelectric powder precursor. Ferrite magnetostrictive particles, if allowed to percolate, can short the potential difference generated in the piezoelectric phase. Modeling a magnetoelectric composite as an aggregate of bi-disperse hard shells, molecular dynamics was used to explore relationships among relative particle size, particle affinity, and electrical percolation with the goal of maximizing the percolation threshold. It is found that two factors raise the percolation threshold, namely the relative size of magnetostrictive to piezoelectric particles, and the affinity between the magnetostrictive and piezoelectric particles.

  13. Mapping irrigated areas in Afghanistan over the past decade using MODIS NDVI

    USGS Publications Warehouse

    Pervez, Md Shahriar; Budde, Michael; Rowland, James

    2014-01-01

    Agricultural production capacity contributes to food security in Afghanistan and is largely dependent on irrigated farming, mostly utilizing surface water fed by snowmelt. Because of the high contribution of irrigated crops (> 80%) to total agricultural production, knowing the spatial distribution and year-to-year variability in irrigated areas is imperative to monitoring food security for the country. We used 16-day composites of the Normalized Difference Vegetation Index (NDVI) from the Moderate Resolution Imaging Spectroradiometer (MODIS) sensor to create 23-point time series for each year from 2000 through 2013. Seasonal peak values and time series were used in a threshold-dependent decision tree algorithm to map irrigated areas in Afghanistan for the last 14 years. In the absence of ground reference irrigated area information, we evaluated these maps with the irrigated areas classified from multiple snapshots of the landscape during the growing season from Landsat 5 optical and thermal sensor images. We were able to identify irrigated areas using Landsat imagery by selecting as irrigated those areas with Landsat-derived NDVI greater than 0.30–0.45 (depending on the date of the Landsat image) and surface temperature less than or equal to 310 K (36.9 °C). Due to the availability of Landsat images, we were able to compare with the MODIS-derived maps for four years: 2000, 2009, 2010, and 2011. The irrigated areas derived from Landsat agreed well (r2 = 0.91) with the irrigated areas derived from MODIS, providing confidence in the MODIS NDVI threshold approach. The maps portrayed a highly dynamic irrigated agriculture practice in Afghanistan, where the amount of irrigated area was largely determined by the availability of surface water, especially snowmelt, and varied by as much as 30% between water surplus and water deficit years. During the past 14 years, 2001, 2004, and 2008 showed the lowest levels of irrigated area (~ 1.5 million hectares), attesting to the severe drought conditions in those years, whereas 2009, 2012 and 2013 registered the largest irrigated area (~ 2.5 million hectares) due to record snowpack and snowmelt in the region. The model holds promise for providing near-real-time (by the end of the growing season) estimates of irrigated area, which are beneficial for food security monitoring as well as subsequent decision making for the country. While the model was developed for Afghanistan, it can be adapted with appropriate adjustments in the derived threshold values to map irrigated areas elsewhere.
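
    The Landsat-based evaluation rule quoted in the abstract (NDVI above a date-dependent 0.30-0.45 threshold and surface temperature at or below 310 K) can be expressed as a simple per-pixel test. The sketch below uses a single illustrative NDVI threshold; the full MODIS decision tree of the paper is not reproduced.

```python
import numpy as np

def classify_irrigated(ndvi, surface_temp_k, ndvi_threshold=0.35):
    """Label pixels as irrigated when Landsat-derived NDVI exceeds a
    date-dependent threshold (0.30-0.45 in the paper; a single illustrative
    value is used here) and surface temperature is <= 310 K, following the
    rule described in the abstract."""
    ndvi = np.asarray(ndvi, dtype=float)
    temp = np.asarray(surface_temp_k, dtype=float)
    return (ndvi > ndvi_threshold) & (temp <= 310.0)

ndvi = np.array([[0.10, 0.42], [0.55, 0.38]])
temp = np.array([[315.0, 305.0], [312.0, 300.0]])
print(classify_irrigated(ndvi, temp))
# [[False  True]
#  [False  True]]
```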

  14. Properties of perimetric threshold estimates from Full Threshold, SITA Standard, and SITA Fast strategies.

    PubMed

    Artes, Paul H; Iwase, Aiko; Ohno, Yuko; Kitazawa, Yoshiaki; Chauhan, Balwantray C

    2002-08-01

    To investigate the distributions of threshold estimates with the Swedish Interactive Threshold Algorithms (SITA) Standard, SITA Fast, and the Full Threshold algorithm (Humphrey Field Analyzer; Zeiss-Humphrey Instruments, Dublin, CA) and to compare the pointwise test-retest variability of these strategies. One eye of 49 patients (mean age, 61.6 years; range, 22-81) with glaucoma (Mean Deviation mean, -7.13 dB; range, +1.8 to -23.9 dB) was examined four times with each of the three strategies. The mean and median SITA Standard and SITA Fast threshold estimates were compared with a "best available" estimate of sensitivity (mean results of three Full Threshold tests). Pointwise 90% retest limits (5th and 95th percentiles of retest thresholds) were derived to assess the reproducibility of individual threshold estimates. The differences between the threshold estimates of the SITA and Full Threshold strategies were largest (approximately 3 dB) for midrange sensitivities (approximately 15 dB). The threshold distributions of SITA were considerably different from those of the Full Threshold strategy. The differences remained of similar magnitude when the analysis was repeated on a subset of 20 locations that are examined early during the course of a Full Threshold examination. With sensitivities above 25 dB, both SITA strategies exhibited lower test-retest variability than the Full Threshold strategy. Below 25 dB, the retest intervals of SITA Standard were slightly smaller than those of the Full Threshold strategy, whereas those of SITA Fast were larger. SITA Standard may be superior to the Full Threshold strategy for monitoring patients with visual field loss. The greater test-retest variability of SITA Fast in areas of low sensitivity is likely to offset the benefit of even shorter test durations with this strategy. The sensitivity differences between the SITA and Full Threshold strategies may relate to factors other than reduced fatigue. They are, however, small in comparison to the test-retest variability.
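
    The pointwise 90% retest limits described above are simply the 5th and 95th percentiles of retest thresholds conditioned on the initial estimate. The sketch below computes such limits from paired test-retest data binned by baseline threshold; the data are synthetic and the binning choice is an assumption of this illustration, not the authors' analysis code.

```python
import numpy as np

def retest_limits(baseline, retest, bin_width=2):
    """Pointwise 90% retest limits: for each baseline-threshold bin (dB), the
    5th and 95th percentiles of the retest thresholds that followed it.
    A simple binned sketch of the described analysis, not the authors' code."""
    baseline = np.asarray(baseline, dtype=float)
    retest = np.asarray(retest, dtype=float)
    limits = {}
    for lo in np.arange(0, 40, bin_width):
        mask = (baseline >= lo) & (baseline < lo + bin_width)
        if mask.sum() >= 20:                       # require enough paired points
            p5, p95 = np.percentile(retest[mask], [5, 95])
            limits[(float(lo), float(lo + bin_width))] = (round(p5, 1), round(p95, 1))
    return limits

rng = np.random.default_rng(7)
base = rng.uniform(0, 35, 5000)
# Synthetic retest values whose variability grows as sensitivity falls.
follow = base + rng.normal(0, 1 + 0.15 * (35 - base))
print(retest_limits(base, follow))
```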

  15. On the renewal risk model under a threshold strategy

    NASA Astrophysics Data System (ADS)

    Dong, Yinghui; Wang, Guojing; Yuen, Kam C.

    2009-08-01

    In this paper, we consider the renewal risk process under a threshold dividend payment strategy. For this model, the expected discounted dividend payments and the Gerber-Shiu expected discounted penalty function are investigated. Integral equations, integro-differential equations and some closed form expressions for them are derived. When the claims are exponentially distributed, it is verified that the expected penalty of the deficit at ruin is proportional to the ruin probability.

  16. Rainfall thresholds as a landslide indicator for engineered slopes on the Irish Rail network

    NASA Astrophysics Data System (ADS)

    Martinović, Karlo; Gavin, Kenneth; Reale, Cormac; Mangan, Cathal

    2018-04-01

    Rainfall thresholds express the minimum levels of rainfall that need to be reached or exceeded in order for landslides to occur in a particular area. They are a common tool for expressing the temporal portion of landslide hazard analysis. Numerous rainfall thresholds have been developed for different areas worldwide; however, none of these focus on landslides occurring on the engineered slopes of transport infrastructure networks. This paper uses an empirical method to develop rainfall thresholds for landslides on Irish Rail network earthworks. For comparison, rainfall thresholds are also developed for natural terrain in Ireland. The results show that thresholds involving relatively low rainfall intensities are applicable for Ireland, owing to the specific climate. Furthermore, the comparison shows that rainfall thresholds for engineered slopes are lower than those for landslides occurring on natural terrain. This has severe implications, as it indicates that there is a significant risk involved when using generic weather alerts (developed largely for natural terrain) for infrastructure management, and it showcases the need for developing railway- and road-specific rainfall thresholds for landslides.
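
    The paper derives its thresholds empirically and the abstract does not give a fitted equation, so the sketch below only illustrates the common approach of fitting a power-law intensity-duration threshold I = alpha * D^-beta as a lower envelope to landslide-triggering rainfall; all input values are hypothetical.

```python
import numpy as np

def fit_id_threshold(durations_h, intensities_mm_h, quantile=0.05):
    """Fit a power-law intensity-duration threshold I = alpha * D**(-beta),
    a common empirical form for rainfall thresholds (the paper does not give
    its fitted equation, so this is only a sketch of the approach). The line
    is shifted down to the chosen quantile of the residuals so that most
    landslide-triggering events lie above the threshold."""
    log_d = np.log10(np.asarray(durations_h, dtype=float))
    log_i = np.log10(np.asarray(intensities_mm_h, dtype=float))
    slope, intercept = np.polyfit(log_d, log_i, 1)
    residuals = log_i - (intercept + slope * log_d)
    intercept += np.quantile(residuals, quantile)       # lower-envelope shift
    return 10 ** intercept, -slope                      # alpha, beta

# Hypothetical triggering rainfall (duration in hours, mean intensity in mm/h).
durations = [2, 4, 6, 12, 24, 48, 72]
intensities = [8.0, 5.5, 4.2, 2.8, 1.9, 1.2, 0.9]
alpha, beta = fit_id_threshold(durations, intensities)
print(f"I = {alpha:.2f} * D^-{beta:.2f}")
```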

  17. SeaWiFS Technical Report Series. Volume 7: Cloud screening for polar orbiting visible and infrared (IR) satellite sensors

    NASA Technical Reports Server (NTRS)

    Darzi, Michael; Hooker, Stanford B. (Editor); Firestone, Elaine R. (Editor)

    1992-01-01

    Methods for detecting and screening cloud contamination from satellite-derived visible and infrared data are reviewed in this document. The methods are applicable to past, present, and future polar orbiting satellite radiometers. Such instruments include the Coastal Zone Color Scanner (CZCS), operational from 1978 through 1986; the Advanced Very High Resolution Radiometer (AVHRR); the Sea-viewing Wide Field-of-view Sensor (SeaWiFS), scheduled for launch in August 1993; and the Moderate Resolution Imaging Spectrometer (MODIS). Constant threshold methods are the least demanding computationally, and often provide adequate results. An improvement to these methods is to determine the thresholds dynamically by adjusting them according to the areal and temporal distributions of the surrounding pixels. Spatial coherence methods set thresholds based on the expected spatial variability of the data. Other statistically derived methods and various combinations of basic methods are also reviewed. The complexity of the methods is ultimately limited by the computing resources. Finally, some criteria for evaluating cloud screening methods are discussed.
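
    The sketch below illustrates the dynamic-threshold idea in its simplest form: each pixel's cloud threshold is adjusted according to the distribution of its surrounding pixels rather than held constant. The window size, the offset, and the synthetic scene are assumptions for illustration only, not values from the report.

```python
import numpy as np

def dynamic_cloud_mask(bt, window=15, offset_k=6.0):
    """Flag pixels much colder than their local neighborhood (proxy for cloud)."""
    ny, nx = bt.shape
    half = window // 2
    mask = np.zeros_like(bt, dtype=bool)
    for j in range(ny):
        for i in range(nx):
            box = bt[max(0, j - half):j + half + 1, max(0, i - half):i + half + 1]
            mask[j, i] = bt[j, i] < box.mean() - offset_k   # local threshold, not constant
    return mask

# synthetic warm scene with one cold (cloudy) blob
rng = np.random.default_rng(0)
scene = 290.0 + rng.normal(0, 0.5, (60, 60))
scene[20:30, 20:30] -= 25.0
mask = dynamic_cloud_mask(scene)
print("cloudy pixel flagged:", bool(mask[25, 25]), "| clear pixel flagged:", bool(mask[5, 5]))
```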

  18. L to H mode transition: Parametric dependencies of the temperature threshold

    DOE PAGES

    Bourdelle, C.; Chone, L.; Fedorczak, N.; ...

    2015-06-15

    The L to H mode transition occurs at a critical power which depends on various parameters, such as the magnetic field, the density, etc. Experimental evidence on various tokamaks (JET, ASDEX-Upgrade, DIII-D, Alcator C-Mod) points towards the existence of a critical temperature characterizing the transition. This criterion for the L-H transition is local and is therefore easier to compare with theoretical approaches. In order to shed light on the mechanisms of the transition, simple theoretical ideas are used to derive a temperature threshold (Tth). They are based on the stabilization of the underlying turbulence by a mean radial electric field shear. The nature of the turbulence varies as the collisionality decreases, from resistive ballooning modes to ion temperature gradient and trapped electron modes. The obtained parametric dependencies of the derived Tth are tested versus magnetic field, density, and effective charge. Furthermore, various robust experimental observations are reproduced; in particular, Tth increases with magnetic field B and increases with density below the density roll-over observed on the power threshold.

  19. Real-time detection of faecally contaminated drinking water with tryptophan-like fluorescence: defining threshold values.

    PubMed

    Sorensen, James P R; Baker, Andy; Cumberland, Susan A; Lapworth, Dan J; MacDonald, Alan M; Pedley, Steve; Taylor, Richard G; Ward, Jade S T

    2018-05-01

    We assess the use of fluorescent dissolved organic matter at excitation-emission wavelengths of 280 nm and 360 nm, termed tryptophan-like fluorescence (TLF), as an indicator of faecally contaminated drinking water. A significant logistic regression model was developed using TLF as a predictor of thermotolerant coliforms (TTCs), using data from groundwater- and surface water-derived drinking water sources in India, Malawi, South Africa and Zambia. A TLF threshold of 1.3 ppb dissolved tryptophan was selected to classify TTC contamination. Validation of the TLF threshold indicated a false-negative error rate of 15% and a false-positive error rate of 18%. The threshold was unsuccessful at classifying contaminated sources containing <10 TTC cfu per 100 mL, which we consider the current limit of detection. If only sources above this limit were classified, the false-negative error rate was very low at 4%. TLF intensity was very strongly correlated with TTC concentration (ρs = 0.80). A higher threshold of 6.9 ppb dissolved tryptophan is proposed to indicate heavily contaminated sources (≥100 TTC cfu per 100 mL). Current commercially available fluorimeters are easy to use, suitable for use online and in remote environments, require neither reagents nor consumables, and crucially provide an instantaneous reading. TLF measurements are not appreciably impaired by common interferents, such as pH, turbidity and temperature, within typical natural ranges. The technology is a viable option for the real-time screening of faecally contaminated drinking water globally. Copyright © 2017 Natural Environment Research Council (NERC), as represented by the British Geological Survey (BGS). Published by Elsevier B.V. All rights reserved.
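
    The following sketch shows how a fixed TLF threshold (the 1.3 ppb value quoted above) can be checked against presence/absence of TTCs by computing false-negative and false-positive rates. The sample data are synthetic; the paper itself derived its threshold from a logistic regression on field samples.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
ttc_present = rng.random(n) < 0.4                      # true contamination status
# assume TLF tends to be higher where TTCs are present (illustrative distributions)
tlf = np.where(ttc_present,
               rng.lognormal(mean=1.0, sigma=0.6, size=n),
               rng.lognormal(mean=-0.3, sigma=0.5, size=n))

threshold_ppb = 1.3
flagged = tlf >= threshold_ppb

false_neg = np.mean(~flagged[ttc_present])    # contaminated but not flagged
false_pos = np.mean(flagged[~ttc_present])    # flagged but not contaminated
print(f"false-negative rate: {false_neg:.0%}, false-positive rate: {false_pos:.0%}")
```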

  20. A tutorial on the use of ROC analysis for computer-aided diagnostic systems.

    PubMed

    Scheipers, Ulrich; Perrey, Christian; Siebers, Stefan; Hansen, Christian; Ermert, Helmut

    2005-07-01

    The application of the receiver operating characteristic (ROC) curve for computer-aided diagnostic systems is reviewed. A statistical framework is presented and different methods of evaluating the classification performance of computer-aided diagnostic systems, and, in particular, systems for ultrasonic tissue characterization, are derived. Most classifiers that are used today are dependent on a separation threshold, which can be chosen freely in many cases. The separation threshold separates the range of output values of the classification system into different target groups, thus conducting the actual classification process. In the first part of this paper, threshold specific performance measures, e.g., sensitivity and specificity, are presented. In the second part, a threshold-independent performance measure, the area under the ROC curve, is reviewed. Only the use of separation threshold-independent performance measures provides classification results that are overall representative for computer-aided diagnostic systems. The following text was motivated by the lack of a complete and definite discussion of the underlying subject in available textbooks, references and publications. Most manuscripts published so far address the theme of performance evaluation using ROC analysis in a manner too general to be practical for everyday use in the development of computer-aided diagnostic systems. Nowadays, the user of computer-aided diagnostic systems typically handles huge amounts of numerical data, not always distributed normally. Many assumptions made in more or less theoretical works on ROC analysis are no longer valid for real-life data. The paper aims at closing the gap between theoretical works and real-life data. The review provides the interested scientist with information needed to conduct ROC analysis and to integrate algorithms performing ROC analysis into classification systems while understanding the basic principles of classification.
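
    A minimal sketch of the two kinds of performance measure discussed in the tutorial follows: threshold-specific sensitivity and specificity, and the threshold-independent area under the ROC curve, obtained by sweeping the separation threshold over synthetic classifier outputs.

```python
import numpy as np

rng = np.random.default_rng(1)
labels = np.concatenate([np.ones(300), np.zeros(300)])             # 1 = diseased
scores = np.concatenate([rng.normal(1.0, 1.0, 300), rng.normal(0.0, 1.0, 300)])

thresholds = np.sort(np.unique(scores))[::-1]
sens, spec = [], []
for t in thresholds:
    pred = scores >= t                                              # separation threshold
    sens.append(np.mean(pred[labels == 1]))                         # true-positive rate
    spec.append(np.mean(~pred[labels == 0]))                        # true-negative rate

fpr, tpr = 1.0 - np.array(spec), np.array(sens)
auc = np.trapz(tpr, fpr)                                            # trapezoidal AUC
mid = len(thresholds) // 2
print(f"AUC = {auc:.3f}; sens/spec at the middle threshold: {sens[mid]:.2f}/{spec[mid]:.2f}")
```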

  1. Threshold Capability Development in Intensive Mode Business Units

    ERIC Educational Resources Information Center

    Crispin, Stuart; Hancock, Phil; Male, Sally Amanda; Baillie, Caroline; MacNish, Cara; Leggoe, Jeremy; Ranmuthugala, Dev; Alam, Firoz

    2016-01-01

    Purpose: The purpose of this paper is to explore: student perceptions of threshold concepts and capabilities in postgraduate business education, and the potential impacts of intensive modes of teaching on student understanding of threshold concepts and development of threshold capabilities. Design/Methodology/Approach: The student experience of…

  2. The Development of New Composite Metrics for the Comprehensive Analytic and Visual Assessment of Hypoglycemia Using the Hypo-Triad.

    PubMed

    Thomas, Andreas; Shin, John; Jiang, Boyi; McMahon, Chantal; Kolassa, Ralf; Vigersky, Robert A

    2018-01-01

    Quantifying hypoglycemia has traditionally been limited to using the frequency of hypoglycemic events during a given time interval using data from blood glucose (BG) testing. However, continuous glucose monitoring (CGM) captures three parameters-a Hypo-Triad-unavailable with BG monitoring that can be used to better characterize hypoglycemia: area under the curve (AUC), time (duration of hypoglycemia), and frequency of daily episodes below a specified threshold. We developed two new analytic metrics to enhance the traditional Hypo-Triad of CGM-derived data to more effectively capture the intensity of hypoglycemia (IntHypo) and overall hypoglycemic environment called the "hypoglycemia risk volume" (HypoRV). We reanalyzed the CGM data from the ASPIRE In-Home study, a randomized, controlled trial of a sensor-integrated pump system with a low glucose threshold suspend feature (SIP+TS), using these new metrics and compared them to standard metrics of hypoglycemia. IntHypo and HypoRV provide additional insights into the benefit of a SIP+TS system on glycemic exposure when compared to the standard reporting methods. In addition, the visual display of these parameters provides a unique and intuitive way to understand the impact of a diabetes intervention on a cohort of subjects as well as on individual patients. The IntHypo and HypoRV are new and enhanced ways of analyzing CGM-derived data in diabetes intervention studies which could lead to new insights in diabetes management. They require validation using existing, ongoing, or planned studies to determine whether they are superior to existing metrics.

  3. Generalization of rapidly recurring seizures is suppressed in mice lacking glial cell line-derived neurotrophic factor family receptor alpha2.

    PubMed

    Nanobashvili, A; Kokaia, Z; Lindvall, O

    2003-01-01

    Recent experimental evidence indicates that neurotrophic factors play a role in the pathophysiology of epilepsy. The objective of this study was to explore whether signaling through one of the glial cell line-derived neurotrophic factor family receptors, GFRalpha2, influences the severity of kindling-evoked, rapidly recurring seizures and the subsequent development of permanent hyperexcitability. We applied the rapid kindling model to adult mice, using 40 threshold stimulations delivered with 5-min interval in the ventral hippocampus. Generalized seizures were fewer and developed later in response to kindling stimulations in mice lacking GFRalpha2. However, GFRalpha2 gene deletion did not influence the acquisition of the permanent abnormal excitability as assessed 4 weeks later. In situ hybridization revealed marked and dynamic changes of GFRalpha2 mRNA levels in several forebrain areas following the stimulus-evoked seizures. Our findings provide evidence that signaling through the GFRalpha2 receptor contributes to seizure generalization in rapid kindling.

  4. How to Assess the Value of Medicines?

    PubMed Central

    Simoens, Steven

    2010-01-01

    This study aims to discuss approaches to assessing the value of medicines. Economic evaluation assesses value by means of the incremental cost-effectiveness ratio (ICER). Health is maximized by selecting medicines with increasing ICERs until the budget is exhausted. The budget size determines the value of the threshold ICER and vice versa. Alternatively, the threshold value can be inferred from pricing/reimbursement decisions, although such values vary between countries. Threshold values derived from the value-of-life literature depend on the technique used. The World Health Organization has proposed a threshold value tied to the national GDP. As decision makers may wish to consider multiple criteria, variable threshold values and weighted ICERs have been suggested. Other approaches (i.e., replacement approach, program budgeting and marginal analysis) have focused on improving resource allocation, rather than maximizing health subject to a budget constraint. Alternatively, the generalized optimization framework and multi-criteria decision analysis make it possible to consider other criteria in addition to value. PMID:21607066

  5. How to assess the value of medicines?

    PubMed

    Simoens, Steven

    2010-01-01

    This study aims to discuss approaches to assessing the value of medicines. Economic evaluation assesses value by means of the incremental cost-effectiveness ratio (ICER). Health is maximized by selecting medicines with increasing ICERs until the budget is exhausted. The budget size determines the value of the threshold ICER and vice versa. Alternatively, the threshold value can be inferred from pricing/reimbursement decisions, although such values vary between countries. Threshold values derived from the value-of-life literature depend on the technique used. The World Health Organization has proposed a threshold value tied to the national GDP. As decision makers may wish to consider multiple criteria, variable threshold values and weighted ICERs have been suggested. Other approaches (i.e., replacement approach, program budgeting and marginal analysis) have focused on improving resource allocation, rather than maximizing health subject to a budget constraint. Alternatively, the generalized optimization framework and multi-criteria decision analysis make it possible to consider other criteria in addition to value.

  6. Network-level reproduction number and extinction threshold for vector-borne diseases.

    PubMed

    Xue, Ling; Scoglio, Caterina

    2015-06-01

    The basic reproduction number of deterministic models is an essential quantity to predict whether an epidemic will spread or not. Thresholds for disease extinction contribute crucial knowledge of disease control, elimination, and mitigation of infectious diseases. Relationships between basic reproduction numbers of two deterministic network-based ordinary differential equation vector-host models, and extinction thresholds of corresponding stochastic continuous-time Markov chain models are derived under some assumptions. Numerical simulation results for malaria and Rift Valley fever transmission on heterogeneous networks are in agreement with analytical results without any assumptions, reinforcing that the relationships may always exist and proposing a mathematical problem for proving existence of the relationships in general. Moreover, numerical simulations show that the basic reproduction number does not monotonically increase or decrease with the extinction threshold. Consistent trends of extinction probability observed through numerical simulations provide novel insights into mitigation strategies to increase the disease extinction probability. Research findings may improve understandings of thresholds for disease persistence in order to control vector-borne diseases.

  7. Very high laser-damage threshold of polymer-derived Si(B)CN-carbon nanotube composite coatings.

    PubMed

    Bhandavat, R; Feldman, A; Cromer, C; Lehman, J; Singh, G

    2013-04-10

    We study the laser irradiance behavior and resulting structural evolution of polymer-derived silicon-boron-carbonitride (Si(B)CN) functionalized multiwall carbon nanotube (MWCNT) composite spray coatings on a copper substrate. We report a damage threshold value of 15 kW cm⁻² and an optical absorbance of 0.97 after irradiation. This is an order of magnitude improvement over MWCNT (1.4 kW cm⁻², 0.76), SWCNT (0.8 kW cm⁻², 0.65) and carbon paint (0.1 kW cm⁻², 0.87) coatings previously tested at 10.6 μm (2.5 kW CO2 laser) exposure. Electron microscopy, Raman spectroscopy, and X-ray photoelectron spectroscopy suggest partial oxidation of Si(B)CN forming a stable protective SiO2 phase upon irradiation.

  8. Higher-than-predicted saltation threshold wind speeds on Titan.

    PubMed

    Burr, Devon M; Bridges, Nathan T; Marshall, John R; Smith, James K; White, Bruce R; Emery, Joshua P

    2015-01-01

    Titan, the largest satellite of Saturn, exhibits extensive aeolian, that is, wind-formed, dunes, features previously identified exclusively on Earth, Mars and Venus. Wind tunnel data collected under ambient and planetary-analogue conditions inform our models of aeolian processes on the terrestrial planets. However, the accuracy of these widely used formulations in predicting the threshold wind speeds required to move sand by saltation, or by short bounces, has not been tested under conditions relevant for non-terrestrial planets. Here we derive saltation threshold wind speeds under the thick-atmosphere, low-gravity and low-sediment-density conditions on Titan, using a high-pressure wind tunnel refurbished to simulate the appropriate kinematic viscosity for the near-surface atmosphere of Titan. The experimentally derived saltation threshold wind speeds are higher than those predicted by models based on terrestrial-analogue experiments, indicating the limitations of these models for such extreme conditions. The models can be reconciled with the experimental results by inclusion of the extremely low ratio of particle density to fluid density on Titan. Whereas the density ratio term enables accurate modelling of aeolian entrainment in thick atmospheres, such as those inferred for some extrasolar planets, our results also indicate that for environments with high density ratios, such as in jets on icy satellites or in tenuous atmospheres or exospheres, the correction for low-density-ratio conditions is not required.

  9. High speed point derivative microseismic detector

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Uhl, J.E.; Warpinski, N.R.; Whetten, E.B.

    A high speed microseismic event detector constructed in accordance with the present invention uses a point derivative comb to quickly and accurately detect microseismic events. Compressional and shear waves impinging upon microseismic receiver stations disposed to collect waves are converted into digital data and analyzed using a point derivative comb, including assurance of quiet periods prior to declaration of microseismic events. If a sufficient number of quiet periods have passed, the square of a two point derivative of the incoming digital signal is compared to a trip level threshold exceeding the determined noise level to declare a valid trial event. The squaring of the derivative emphasizes the differences between noise and signal, and the valid event is preferably declared when the trip threshold has been exceeded over a temporal comb width to realize a comb over a given time period. Once a trial event has been declared, the event is verified through a spatial comb, which applies the temporal event comb to additional stations. The detector according to the present invention quickly and accurately detects initial compressional waves indicative of a microseismic event, which typically exceed the ambient cultural noise level by a small amount, and distinguishes these waves from subsequent larger amplitude shear waves. 9 figs.
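
    The sketch below reproduces, in simplified form, the single-station detection logic described in the abstract: a quiet-period requirement, a trip level set above the estimated noise for the squared two-point derivative, and a temporal comb demanding repeated exceedances within a short window. All parameter values and the synthetic trace are illustrative, not those of the patent; the spatial comb across stations is omitted.

```python
import numpy as np

def detect_events(x, trip_factor=50.0, quiet_samples=100, comb_width=20, comb_hits=5):
    """Declare trial events from the squared two-point derivative of a trace."""
    d2 = np.diff(x) ** 2                      # square of the two-point derivative
    trip = trip_factor * np.median(d2)        # trip level above the estimated noise
    events, quiet = [], 0
    for i, v in enumerate(d2):
        if v <= trip:
            quiet += 1
            continue
        # exceedance: only consider it if preceded by a sufficient quiet period,
        # then require enough exceedances within the temporal comb width
        if quiet >= quiet_samples and np.sum(d2[i:i + comb_width] > trip) >= comb_hits:
            events.append(i)                  # temporal comb satisfied -> trial event
        quiet = 0
    return events

# synthetic trace: background noise plus one oscillatory arrival at sample 3000
rng = np.random.default_rng(2)
trace = rng.normal(0.0, 1.0, 5000)
trace[3000:3100] += 15.0 * np.sin(0.8 * np.arange(100))
print(detect_events(trace))                   # should report an index near 3000
```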

  10. Real-time monitoring and short-term forecasting of drought in Norway

    NASA Astrophysics Data System (ADS)

    Kwok Wong, Wai; Hisdal, Hege

    2013-04-01

    Drought is considered to be one of the most costly natural disasters. Drought monitoring and forecasting are thus important for sound water management. In this study hydrological drought characteristics applicable for real-time monitoring and short-term forecasting of drought in Norway were developed. A spatially distributed hydrological model (HBV) implemented in a Web-based GIS framework provides a platform for drought analyses and visualizations. A number of national drought maps can be produced, which is a simple and effective way to communicate drought conditions to decision makers and the public. The HBV model is driven by precipitation and air temperature data. On a daily time step it calculates the water balance for 1 x 1 km2 grid cells characterized by their elevation and land use. Drought duration and areal drought coverage for runoff and subsurface storage (sum of soil moisture and groundwater) were derived. The threshold level method was used to specify drought conditions on a grid cell basis. The daily 10th percentile thresholds were derived from seven-day windows centered on that calendar day from the reference period 1981-2010 (threshold not exceeded 10% of the time). Each individual grid cell was examined to determine if it was below its respective threshold level. Daily drought-stricken areas can then be easily identified when visualized on a map. The drought duration can also be tracked and calculated by a retrospective analysis. Real-time observations from synoptic stations interpolated to a regular grid of 1 km resolution constituted the forcing data for the current situation. 9-day meteorological forecasts were used as input to the HBV model to obtain short-term hydrological drought forecasts. Downscaled precipitation and temperature fields from two different atmospheric models were applied. The first two days of the forecast period adopted the forecasts from Unified Model (UM4) while the following seven days were based on the 9-day forecasts from ECMWF. The approach has been tested and is now available on the Web for operational water management.
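
    A compact sketch of the variable threshold level method described above: for each calendar day, the drought threshold is the 10th percentile of values in a seven-day window over a reference period, and a cell is in drought whenever it falls below that threshold. A single synthetic runoff series stands in for the gridded HBV output, and leap days and spatial aggregation are ignored for brevity.

```python
import numpy as np

rng = np.random.default_rng(3)
years, doy = 30, 365
# synthetic daily runoff with a seasonal cycle (reference period 30 years)
clim = 5 + 3 * np.sin(2 * np.pi * np.arange(doy) / doy)
runoff_ref = clim + rng.gamma(2.0, 1.0, size=(years, doy))

thresholds = np.empty(doy)
for d in range(doy):
    window_days = [(d + k) % doy for k in range(-3, 4)]       # 7-day window
    sample = runoff_ref[:, window_days].ravel()
    thresholds[d] = np.percentile(sample, 10)                 # not exceeded 10% of the time

# apply to a "current year" series
runoff_now = clim + rng.gamma(2.0, 0.6, size=doy)
in_drought = runoff_now < thresholds
print(f"days in drought: {in_drought.sum()} of {doy}")
```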

  11. Reconstruction of Sensory Stimuli Encoded with Integrate-and-Fire Neurons with Random Thresholds

    PubMed Central

    Lazar, Aurel A.; Pnevmatikakis, Eftychios A.

    2013-01-01

    We present a general approach to the reconstruction of sensory stimuli encoded with leaky integrate-and-fire neurons with random thresholds. The stimuli are modeled as elements of a Reproducing Kernel Hilbert Space. The reconstruction is based on finding a stimulus that minimizes a regularized quadratic optimality criterion. We discuss in detail the reconstruction of sensory stimuli modeled as absolutely continuous functions as well as stimuli with absolutely continuous first-order derivatives. Reconstruction results are presented for stimuli encoded with single as well as a population of neurons. Examples are given that demonstrate the performance of the reconstruction algorithms as a function of threshold variability. PMID:24077610

  12. Prompt merger collapse and the maximum mass of neutron stars.

    PubMed

    Bauswein, A; Baumgarte, T W; Janka, H-T

    2013-09-27

    We perform hydrodynamical simulations of neutron-star mergers for a large sample of temperature-dependent nuclear equations of state and determine the threshold mass above which the merger remnant promptly collapses to form a black hole. We find that, depending on the equation of state, the threshold mass is larger than the maximum mass of a nonrotating star in isolation by between 30 and 70 percent. Our simulations also show that the ratio between the threshold mass and maximum mass is tightly correlated with the compactness of the nonrotating maximum-mass configuration. We speculate on how this relation can be used to derive constraints on neutron-star properties from future observations.

  13. A Cloud Mask for AIRS

    NASA Technical Reports Server (NTRS)

    Brubaker, N.; Jedlovec, G. J.

    2004-01-01

    With the preliminary release of AIRS Level 1 and 2 data to the scientific community, there is a growing need for an accurate AIRS cloud mask for data assimilation studies and for producing products derived from cloud-free radiances. The current cloud information provided with the AIRS data is limited or based on simplified threshold tests. A multispectral cloud detection approach has been developed for AIRS that utilizes its hyperspectral capabilities to detect clouds based on specific cloud signatures across the shortwave and longwave infrared window regions. This new AIRS cloud mask has been validated against the existing AIRS Level 2 cloud product and cloud information derived from MODIS. Preliminary results for both day and night applications over the continental U.S. are encouraging. Details of the cloud detection approach and validation results will be presented at the conference.

  14. Wetting in Color: Designing a colorometric indicator for wettability

    NASA Astrophysics Data System (ADS)

    Raymond, Kevin; Burgess, Ian B.; Koay, Natalie; Kolle, Mathias; Loncar, Marko; Aizenberg, Joanna

    2012-02-01

    Colorimetric litmus tests such as pH paper have enjoyed wide commercial success due to their inexpensive production and exceptional ease of use. While such indicators commonly rely on a specific photochemical response to an analyte, we exploit structural color, derived from coherent scattering from wavelength-scale porosity rather than molecular absorption or luminescence, to create a Wetting-in-Color-Kit (WICK). This inexpensive and highly selective colorimetric indicator for organic liquids employs chemically encoded inverse-opal photonic crystals to translate minute differences in liquids' wettability to macroscopically distinct, easy-to-visualize color patterns. The highly symmetric re-entrant inter-pore geometry imparts a highly specific wetting threshold for liquids. We developed surface modification techniques to generate built-in chemistry gradients within the porous network. These let us tailor the wettability threshold to specific liquids across a continuous range. As wetting is a generic fluidic phenomenon, we envision that WICK could be suitable for applications in authentication or identification of unknown liquids across a broad range of industries.

  15. Surface-emitting circular DFB, disk- and ring- Bragg resonator lasers with chirped gratings: a unified theory and comparative study.

    PubMed

    Sun, Xiankai; Yariv, Amnon

    2008-06-09

    We have developed a theory that unifies the analysis of the modal properties of surface-emitting chirped circular grating lasers. This theory is based on solving the resonance conditions which involve two types of reflectivities of chirped circular gratings. This approach is shown to be in agreement with previous derivations which use the characteristic equations. Utilizing this unified analysis, we obtain the modal properties of circular DFB, disk-, and ring- Bragg resonator lasers. We also compare the threshold gain, single mode range, quality factor, emission efficiency, and modal area of these types of circular grating lasers. It is demonstrated that, under similar conditions, disk Bragg resonator lasers have the highest quality factor, the highest emission efficiency, and the smallest modal area, indicating their suitability in low-threshold, high-efficiency, ultracompact laser design, while ring Bragg resonator lasers have a large single mode range, high emission efficiency, and large modal area, indicating their suitability for high-efficiency, large-area, high-power applications.

  16. Minimum Transendothelial Electrical Resistance Thresholds for the Study of Small and Large Molecule Drug Transport in a Human in Vitro Blood-Brain Barrier Model.

    PubMed

    Mantle, Jennifer L; Min, Lie; Lee, Kelvin H

    2016-12-05

    A human cell-based in vitro model that can accurately predict drug penetration into the brain, as well as metrics to assess such in vitro models, are valuable for the development of new therapeutics. Here, human induced pluripotent stem cells (hPSCs) are differentiated into a polarized monolayer that expresses blood-brain barrier (BBB)-specific proteins and has transendothelial electrical resistance (TEER) values greater than 2500 Ω·cm². By assessing the permeabilities of several known drugs, a benchmarking system to evaluate brain permeability of drugs was established. Furthermore, relationships between TEER and permeability to both small and large molecules were established, demonstrating that different minimum TEER thresholds must be achieved to study the brain transport of these two classes of drugs. This work demonstrates that this hPSC-derived BBB model exhibits an in vivo-like phenotype, and the benchmarks established here are useful for assessing functionality of other in vitro BBB models.

  17. Derivation of groundwater threshold values for analysis of impacts predicted at potential carbon sequestration sites

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Last, G. V.; Murray, C. J.; Bott, Y.

    2016-06-01

    The U.S. Department of Energy’s (DOE’s) National Risk Assessment Partnership (NRAP) Project is developing reduced-order models to evaluate potential impacts to groundwater quality due to carbon dioxide (CO2) or brine leakage, should it occur from deep CO2 storage reservoirs. These efforts targeted two classes of aquifer – an unconfined fractured carbonate aquifer based on the Edwards Aquifer in Texas, and a confined alluvium aquifer based on the High Plains Aquifer in Kansas. Hypothetical leakage scenarios focus on wellbores as the most likely conduits from the storage reservoir to an underground source of drinking water (USDW). To facilitate evaluation of potential degradation of the USDWs, threshold values, below which there would be no predicted impacts, were determined for each of these two aquifer systems. These threshold values were calculated using an interwell approach for determining background groundwater concentrations that is an adaptation of methods described in the U.S. Environmental Protection Agency’s Unified Guidance for Statistical Analysis of Groundwater Monitoring Data at RCRA Facilities. Results demonstrate the importance of establishing baseline groundwater quality conditions that capture the spatial and temporal variability of the USDWs prior to CO2 injection and storage.

  18. Point estimation following two-stage adaptive threshold enrichment clinical trials.

    PubMed

    Kimani, Peter K; Todd, Susan; Renfro, Lindsay A; Stallard, Nigel

    2018-05-31

    Recently, several study designs incorporating treatment effect assessment in biomarker-based subpopulations have been proposed. Most statistical methodologies for such designs focus on the control of type I error rate and power. In this paper, we have developed point estimators for clinical trials that use the two-stage adaptive enrichment threshold design. The design consists of two stages, where in stage 1, patients are recruited in the full population. Stage 1 outcome data are then used to perform interim analysis to decide whether the trial continues to stage 2 with the full population or a subpopulation. The subpopulation is defined based on one of the candidate threshold values of a numerical predictive biomarker. To estimate treatment effect in the selected subpopulation, we have derived unbiased estimators, shrinkage estimators, and estimators that estimate bias and subtract it from the naive estimate. We have recommended one of the unbiased estimators. However, since none of the estimators dominated in all simulation scenarios based on both bias and mean squared error, an alternative strategy would be to use a hybrid estimator where the estimator used depends on the subpopulation selected. This would require a simulation study of plausible scenarios before the trial. © 2018 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.

  19. Assessing regional and interspecific variation in threshold responses of forest breeding birds through broad scale analyses.

    PubMed

    van der Hoek, Yntze; Renfrew, Rosalind; Manne, Lisa L

    2013-01-01

    Identifying persistence and extinction thresholds in species-habitat relationships is a major focal point of ecological research and conservation. However, one major concern regarding the incorporation of threshold analyses in conservation is the lack of knowledge on the generality and transferability of results across species and regions. We present a multi-region, multi-species approach of modeling threshold responses, which we use to investigate whether threshold effects are similar across species and regions. We modeled local persistence and extinction dynamics of 25 forest-associated breeding birds based on detection/non-detection data, which were derived from repeated breeding bird atlases for the state of Vermont. We did not find threshold responses to be particularly well-supported, with 9 species supporting extinction thresholds and 5 supporting persistence thresholds. This contrasts with a previous study based on breeding bird atlas data from adjacent New York State, which showed that most species support persistence and extinction threshold models (15 and 22 of 25 study species respectively). In addition, species that supported a threshold model in both states had associated average threshold estimates of 61.41% (SE = 6.11, persistence) and 66.45% (SE = 9.15, extinction) in New York, compared to 51.08% (SE = 10.60, persistence) and 73.67% (SE = 5.70, extinction) in Vermont. Across species, thresholds were found at 19.45-87.96% forest cover for persistence and 50.82-91.02% for extinction dynamics. Through an approach that allows for broad-scale comparisons of threshold responses, we show that species vary in their threshold responses with regard to habitat amount, and that differences between even nearby regions can be pronounced. We present both ecological and methodological factors that may contribute to the different model results, but propose that regardless of the reasons behind these differences, our results merit a warning that threshold values cannot simply be transferred across regions or interpreted as clear-cut targets for ecosystem management and conservation.

  20. An experimental operative system for shallow landslide and flash flood warning based on rainfall thresholds and soil moisture modelling

    NASA Astrophysics Data System (ADS)

    Brigandì, G.; Aronica, G. T.; Basile, G.; Pasotti, L.; Panebianco, M.

    2012-04-01

    In November 2011 an almost exceptional thunderstorm occurred over the north-east part of the Sicily Region (Italy), producing local heavy rainfall, mud-debris flows and flash flooding. The storm was concentrated on the Tyrrhenian coast near the city of Barcellona, within the Longano catchment. The main focus of the paper is to present an experimental operative system for alerting extreme hydrometeorological events by using a methodology based on the combined use of rainfall thresholds, soil moisture indexes and quantitative precipitation forecasting. In fact, shallow landslide and flash flood warning is a key element in improving Civil Protection efforts to mitigate damage and safeguard the security of people. It is a rather complicated task, particularly in those catchments with flashy response where even brief anticipations are important and welcome. It is well known that the triggering of shallow landslides is strongly influenced by the initial soil moisture conditions of catchments. Therefore, the early warning system applied here is based on the combined use of rainfall thresholds, derived both for flash floods and for landslides, and soil moisture conditions; the system is composed of several basic components related to antecedent soil moisture conditions, real-time rainfall monitoring and antecedent rainfall. Soil moisture conditions were estimated using an Antecedent Precipitation Index (API), similar to that widely used for defining soil moisture conditions via the Antecedent Moisture Condition (AMC) index. Rainfall thresholds for landslides were derived using historical and statistical analysis. Finally, rainfall thresholds for flash flooding were derived using an Instantaneous Unit Hydrograph based lumped rainfall-runoff model with the SCS-CN routine for net rainfall. After the implementation and calibration of the model, a testing phase was carried out using real data collected for the November 2011 event in the Longano catchment. Moreover, in order to test the capability of the system to forecast this event, Quantitative Precipitation Forecasts provided by SILAM (Sicily Limited Area Model), a meteorological model run by SIAS (Sicilian Agrometeorological Service) with a forecast horizon of up to 144 hours, were used to run the system.
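
    The sketch below illustrates one plausible form of the antecedent-soil-moisture component: a recursive Antecedent Precipitation Index combined with a rainfall threshold that is lowered when the catchment is wet. The recession coefficient, threshold values and alert rule are illustrative assumptions, not the operational settings of the system described above.

```python
import numpy as np

def api_series(precip, k=0.9):
    """Antecedent Precipitation Index: API_t = k * API_(t-1) + P_t (daily recursion)."""
    api = np.zeros_like(precip, dtype=float)
    for t in range(1, len(precip)):
        api[t] = k * api[t - 1] + precip[t]
    return api

precip = np.array([0, 0, 5, 0, 0, 20, 35, 60, 10, 0], dtype=float)   # mm/day
api = api_series(precip)

API_WET = 40.0           # antecedent wetness above which thresholds are lowered (mm)
RAIN_THRESH_DRY = 50.0   # triggering daily rainfall for dry antecedent conditions (mm)
RAIN_THRESH_WET = 30.0   # lower triggering rainfall when the catchment is wet (mm)

for day, (p, a) in enumerate(zip(precip, api)):
    threshold = RAIN_THRESH_WET if a > API_WET else RAIN_THRESH_DRY
    if p >= threshold:
        print(f"day {day}: alert (rain {p:.0f} mm, API {a:.0f} mm)")
```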

  1. Age correction in monitoring audiometry: method to update OSHA age-correction tables to include older workers

    PubMed Central

    Dobie, Robert A; Wojcik, Nancy C

    2015-01-01

    Objectives The US Occupational Safety and Health Administration (OSHA) Noise Standard provides the option for employers to apply age corrections to employee audiograms to consider the contribution of ageing when determining whether a standard threshold shift has occurred. Current OSHA age-correction tables are based on 40-year-old data, with small samples and an upper age limit of 60 years. By comparison, recent data (1999–2006) show that hearing thresholds in the US population have improved. Because hearing thresholds have improved, and because older people are increasingly represented in noisy occupations, the OSHA tables no longer represent the current US workforce. This paper presents 2 options for updating the age-correction tables and extending values to age 75 years using recent population-based hearing survey data from the US National Health and Nutrition Examination Survey (NHANES). Both options provide scientifically derived age-correction values that can be easily adopted by OSHA to expand their regulatory guidance to include older workers. Methods Regression analysis was used to derive new age-correction values using audiometric data from the 1999–2006 US NHANES. Using the NHANES median, better-ear thresholds fit to simple polynomial equations, new age-correction values were generated for both men and women for ages 20–75 years. Results The new age-correction values are presented as 2 options. The preferred option is to replace the current OSHA tables with the values derived from the NHANES median better-ear thresholds for ages 20–75 years. The alternative option is to retain the current OSHA age-correction values up to age 60 years and use the NHANES-based values for ages 61–75 years. Conclusions Recent NHANES data offer a simple solution to the need for updated, population-based, age-correction tables for OSHA. The options presented here provide scientifically valid and relevant age-correction values which can be easily adopted by OSHA to expand their regulatory guidance to include older workers. PMID:26169804
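
    A minimal sketch of the stated fitting approach is shown below: median better-ear thresholds are fitted to a simple polynomial in age, and age-correction values are tabulated for ages 20-75. The threshold values and the cubic degree are synthetic placeholders, not NHANES results.

```python
import numpy as np

ages = np.array([20, 25, 30, 35, 40, 45, 50, 55, 60, 65, 70, 75])
median_thresh_4khz = np.array([3, 4, 5, 6, 8, 10, 13, 17, 22, 28, 34, 41])  # dB HL, synthetic

coeffs = np.polyfit(ages, median_thresh_4khz, deg=3)    # simple polynomial fit
poly = np.poly1d(coeffs)

# age-correction value = fitted threshold at a given age minus the fitted threshold
# at the youngest tabulated age, so corrections start at 0 dB at age 20
ages_out = np.arange(20, 76)
correction = np.round(poly(ages_out) - poly(20)).astype(int)
for a in (30, 45, 60, 75):
    print(f"age {a}: correction {correction[a - 20]} dB at 4 kHz")
```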

  2. Age correction in monitoring audiometry: method to update OSHA age-correction tables to include older workers.

    PubMed

    Dobie, Robert A; Wojcik, Nancy C

    2015-07-13

    The US Occupational Safety and Health Administration (OSHA) Noise Standard provides the option for employers to apply age corrections to employee audiograms to consider the contribution of ageing when determining whether a standard threshold shift has occurred. Current OSHA age-correction tables are based on 40-year-old data, with small samples and an upper age limit of 60 years. By comparison, recent data (1999-2006) show that hearing thresholds in the US population have improved. Because hearing thresholds have improved, and because older people are increasingly represented in noisy occupations, the OSHA tables no longer represent the current US workforce. This paper presents 2 options for updating the age-correction tables and extending values to age 75 years using recent population-based hearing survey data from the US National Health and Nutrition Examination Survey (NHANES). Both options provide scientifically derived age-correction values that can be easily adopted by OSHA to expand their regulatory guidance to include older workers. Regression analysis was used to derive new age-correction values using audiometric data from the 1999-2006 US NHANES. Using the NHANES median, better-ear thresholds fit to simple polynomial equations, new age-correction values were generated for both men and women for ages 20-75 years. The new age-correction values are presented as 2 options. The preferred option is to replace the current OSHA tables with the values derived from the NHANES median better-ear thresholds for ages 20-75 years. The alternative option is to retain the current OSHA age-correction values up to age 60 years and use the NHANES-based values for ages 61-75 years. Recent NHANES data offer a simple solution to the need for updated, population-based, age-correction tables for OSHA. The options presented here provide scientifically valid and relevant age-correction values which can be easily adopted by OSHA to expand their regulatory guidance to include older workers. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  3. Automated retrieval of cloud and aerosol properties from the ARM Raman lidar, part 1: feature detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thorsen, Tyler J.; Fu, Qiang; Newsom, Rob K.

    A Feature detection and EXtinction retrieval (FEX) algorithm for the Atmospheric Radiation Measurement (ARM) program’s Raman lidar (RL) has been developed. Presented here is part 1 of the FEX algorithm: the detection of features including both clouds and aerosols. The approach of FEX is to use multiple quantities (scattering ratios derived using elastic and nitrogen channel signals from two fields of view, the scattering ratio derived using only the elastic channel, and the total volume depolarization ratio) to identify features using range-dependent detection thresholds. FEX is designed to be context-sensitive, with thresholds determined for each profile by calculating the expected clear-sky signal and noise. The use of multiple quantities provides complementary depictions of cloud and aerosol locations and allows for consistency checks to improve the accuracy of the feature mask. The depolarization ratio is shown to be particularly effective at detecting optically thin features containing non-spherical particles such as cirrus clouds. Improvements over the existing ARM RL cloud mask are shown. The performance of FEX is validated against a collocated micropulse lidar and observations from the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO) satellite over the ARM Darwin, Australia site. While we focus on a specific lidar system, the FEX framework presented here is suitable for other Raman or high spectral resolution lidars.

  4. Analytical Deriving of the Field Capacity through Soil Bundle Model

    NASA Astrophysics Data System (ADS)

    Arnone, E.; Viola, F.; Antinoro, C.; Noto, L. V.

    2015-12-01

    The concept of field capacity as a soil hydraulic parameter is widely used in many hydrological applications. Despite its recurring usage, its definition is not univocal. Traditionally, field capacity has been related to the amount of water that remains in the soil after the excess water has drained away and the downward movement of water has experienced a significant decrease. Quantifying the drainage of excess water may be vague, and several definitions, often subjective, have been proposed. These definitions are based on fixed thresholds, either of time, pressure, or flux, to which the field capacity condition is associated. The flux-based definition identifies the field capacity as the soil moisture value corresponding to an arbitrary fixed threshold of free drainage flux. Recently, many works have investigated the flux-based definition by varying the drainage threshold, the geometry setting and, mainly, the description of the drainage flux. Most of these methods are based on the simulation of the flux through a porous medium using Darcy's law or Richards' equation. Using the above-mentioned flux-based definition, in this work we propose an alternative analytical approach for deriving the field capacity based on a bundle-of-tubes model. The pore space of a porous medium is conceptualized as a bundle of capillary tubes of given length and of different radii, drawn from a known distribution. The drainage from a single capillary tube is given by the analytical solution of the differential equation describing the evolution of the water height within the capillary tube. This equation is based on Poiseuille's law and describes the drainage flux with time as a function of tube radius. The drainage process is then integrated over any portion of soil, taking into account the tube radius distribution, which in turn depends on the soil type. This methodology makes it possible to analytically derive the dynamics of the drainage flux for any soil type and, consequently, to define the soil field capacity as the flux reaches a given threshold value. The theoretical model also accounts for the tortuosity which characterizes the water pathways in real soils, but neglects the mutual interconnections between voids.
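
    A simplified sketch of the bundle-of-tubes construction is given below: each tube drains toward its capillary retention height, the drainage of the whole bundle is integrated over a lognormal radius distribution, and field capacity is read off when the total drainage flux falls below a chosen threshold. The exponential relaxation law and the lumped slowdown factor are simplifications introduced here for illustration, not the analytical Poiseuille-based solution derived in the paper.

```python
import numpy as np

rho, g, mu, gamma = 1000.0, 9.81, 1e-3, 0.072      # water properties (SI units)
L = 0.3                                            # tube length (m)
slowdown = 1e3   # lumped factor (tortuosity, films, exchange with finer pores); illustrative

rng = np.random.default_rng(4)
r = rng.lognormal(mean=np.log(60e-6), sigma=0.5, size=2000)   # tube radii (m)

h_c = np.minimum(2 * gamma / (rho * g * r), L)     # capillary retention height (m)
k = rho * g * r**2 / (8 * mu * L * slowdown)       # drainage rate constant (1/s)
area = np.pi * r**2

t = np.linspace(0, 10 * 86400, 800)                # 10 days
# water height in each tube relaxes from L toward h_c(r)
h = h_c[None, :] + (L - h_c[None, :]) * np.exp(-k[None, :] * t[:, None])
volume = (area * h).sum(axis=1)
theta = volume / (area.sum() * L)                  # water content, fraction of saturation
flux = -np.gradient(volume, t)                     # total drainage flux (m^3/s)

flux_threshold = 0.01 * flux.max()                 # flux-based definition: 1% of peak flux
idx = np.argmax(flux < flux_threshold)
print(f"flux-based field capacity: theta = {theta[idx]:.2f} "
      f"(reached after {t[idx] / 3600:.1f} h)")
```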

  5. Current Knowledge on the Use of Computational Toxicology in Hazard Assessment of Metallic Engineered Nanomaterials.

    PubMed

    Chen, Guangchao; Peijnenburg, Willie; Xiao, Yinlong; Vijver, Martina G

    2017-07-12

    As listed by the European Chemicals Agency, the three elements in evaluating the hazards of engineered nanomaterials (ENMs) include the integration and evaluation of toxicity data, categorization and labeling of ENMs, and derivation of hazard threshold levels for human health and the environment. Assessing the hazards of ENMs solely on the basis of laboratory tests is time-consuming, resource intensive, and constrained by ethical considerations. The adoption of computational toxicology into this task has recently become a priority. Alternative approaches such as (quantitative) structure-activity relationships ((Q)SAR) and read-across are of significant help in predicting nanotoxicity and filling data gaps, and in classifying the hazards of ENMs to individual species. Building on this, the species sensitivity distribution (SSD) approach can serve to establish ENM hazard thresholds that sufficiently protect the ecosystem. This article critically reviews the current knowledge on the development of in silico models for predicting and classifying the hazard of metallic ENMs, and on the development of SSDs for metallic ENMs. Further discussion includes the significance of well-curated experimental datasets and the interpretation of toxicity mechanisms of metallic ENMs based on reported models. An outlook is also given on future directions of research in this frontier.
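
    As an illustration of the SSD step mentioned above, the sketch below fits a log-normal species sensitivity distribution to per-species toxicity endpoints and derives the hazardous concentration for 5% of species (HC5) as a candidate hazard threshold. The endpoint values are synthetic.

```python
import numpy as np

# per-species toxicity endpoints (e.g. EC50 values, mg/L) for a hypothetical metallic ENM
ec50 = np.array([0.8, 1.5, 2.2, 3.0, 4.8, 7.5, 12.0, 20.0, 35.0, 60.0])

log_ec50 = np.log10(ec50)
mu, sigma = log_ec50.mean(), log_ec50.std(ddof=1)   # log-normal SSD parameters

# HC5 = 5th percentile of the fitted SSD (-1.645 is the one-sided 5% normal quantile)
hc5 = 10 ** (mu - 1.645 * sigma)
print(f"HC5 = {hc5:.2f} mg/L (concentration expected to protect ~95% of species)")
```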

  6. Development and validation of a dual sensing scheme to improve accuracy of bradycardia and pause detection in an insertable cardiac monitor.

    PubMed

    Passman, Rod S; Rogers, John D; Sarkar, Shantanu; Reiland, Jerry; Reisfeld, Erin; Koehler, Jodi; Mittal, Suneet

    2017-07-01

    Undersensing of premature ventricular beats and low-amplitude R waves are primary causes for inappropriate bradycardia and pause detections in insertable cardiac monitors (ICMs). The purpose of this study was to develop and validate an enhanced algorithm to reduce inappropriately detected bradycardia and pause episodes. Independent data sets to develop and validate the enhanced algorithm were derived from a database of ICM-detected bradycardia and pause episodes in de-identified patients monitored for unexplained syncope. The original algorithm uses an auto-adjusting sensitivity threshold for R-wave sensing to detect tachycardia and avoid T-wave oversensing. In the enhanced algorithm, a second sensing threshold is used with a long blanking and fixed lower sensitivity threshold, looking for evidence of undersensed signals. Data reported includes percent change in appropriate and inappropriate bradycardia and pause detections as well as changes in episode detection sensitivity and positive predictive value with the enhanced algorithm. The validation data set, from 663 consecutive patients, consisted of 4904 (161 patients) bradycardia and 2582 (133 patients) pause episodes, of which 2976 (61%) and 996 (39%) were appropriately detected bradycardia and pause episodes. The enhanced algorithm reduced inappropriate bradycardia and pause episodes by 95% and 47%, respectively, with 1.7% and 0.6% reduction in appropriate episodes, respectively. The average episode positive predictive value improved by 62% (P < .001) for bradycardia detection and by 26% (P < .001) for pause detection, with an average relative sensitivity of 95% (P < .001) and 99% (P = .5), respectively. The enhanced dual sense algorithm for bradycardia and pause detection in ICMs substantially reduced inappropriate episode detection with a minimal reduction in true episode detection. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  7. In-vivo singlet oxygen threshold doses for PDT

    PubMed Central

    Zhu, Timothy C.; Kim, Michele M.; Liang, Xing; Finlay, Jarod C.; Busch, Theresa M.

    2015-01-01

    Objective Dosimetry of singlet oxygen (1O2) is of particular interest because it is the major cytotoxic agent causing biological effects for type-II photosensitizers during photodynamic therapy (PDT). An in-vivo model to determine the singlet oxygen threshold dose, [1O2]rx,sh, for PDT was developed. Material and methods An in-vivo radiation-induced fibrosarcoma (RIF) tumor mouse model was used to correlate the radius of necrosis to the calculation based on explicit PDT dosimetry of light fluence distribution, tissue optical properties, and photosensitizer concentrations. Inputs to the model include five photosensitizer-specific photochemical parameters along with [1O2]rx,sh. Photosensitizer-specific model parameters were determined for benzoporphyrin derivative monoacid ring A (BPD) and compared with two other type-II photosensitizers, Photofrin® and m-tetrahydroxyphenylchlorin (mTHPC) from the literature. Results The mean values (standard deviation) of the in-vivo [1O2]rx,sh are approximately 0.56 (0.26) and 0.72 (0.21) mM (or 3.6×107 and 4.6×107 singlet oxygen per cell to reduce the cell survival to 1/e) for Photofrin® and BPD, respectively, assuming that the fraction of generated singlet oxygen that interacts with the cell is 1. While the values for the photochemical parameters (ξ, σ, g, β) used for BPD were preliminary and may need further refinement, there is reasonable confidence for the values of the singlet oxygen threshold doses. Discussion In comparison, the [1O2]rx,sh value derived from in-vivo mouse study was reported to be 0.4 mM for mTHPC-PDT. However, the singlet oxygen required per cell is reported to be 9×108 per cell per 1/e fractional kill in an in-vitro mTHPC-PDT study on a rat prostate cancer cell line (MLL cells) and is reported to be 7.9 mM for a multicell in-vitro EMT6/Ro spheroid model for mTHPC-PDT. A theoretical analysis is provided to relate the number of in-vitro singlet oxygen required per cell to reach cell killing of 1/e to in-vivo singlet oxygen threshold dose (in mM). The sensitivity of threshold singlet oxygen dose for our experiment is examined. The possible influence of vascular vs. apoptotic cell killing mechanisms on the singlet oxygen threshold dose is discussed by comparing [1O2]rx,sh for BPD with 3 hr and 15 min drug-light-intervals, with the later being known to have a dominantly vascular effect. Conclusions The experimental results of threshold singlet oxygen concentration in an in-vivo RIF tumor model for Photofrin®, BPD, and mTHPC are about 20 times smaller than those observed in vitro. These results are consistent with knowledge that factors other than singlet oxygen-mediated tumor cell killing can contribute to PDT damage in-vivo. PMID:25927018

  8. In-vivo singlet oxygen threshold doses for PDT.

    PubMed

    Zhu, Timothy C; Kim, Michele M; Liang, Xing; Finlay, Jarod C; Busch, Theresa M

    2015-02-01

    Dosimetry of singlet oxygen (1O2) is of particular interest because it is the major cytotoxic agent causing biological effects for type-II photosensitizers during photodynamic therapy (PDT). An in-vivo model to determine the singlet oxygen threshold dose, [1O2]rx,sh, for PDT was developed. An in-vivo radiation-induced fibrosarcoma (RIF) tumor mouse model was used to correlate the radius of necrosis to the calculation based on explicit PDT dosimetry of light fluence distribution, tissue optical properties, and photosensitizer concentrations. Inputs to the model include five photosensitizer-specific photochemical parameters along with [1O2]rx,sh. Photosensitizer-specific model parameters were determined for benzoporphyrin derivative monoacid ring A (BPD) and compared with two other type-II photosensitizers, Photofrin® and m-tetrahydroxyphenylchlorin (mTHPC), from the literature. The mean values (standard deviation) of the in-vivo [1O2]rx,sh are approximately 0.56 (0.26) and 0.72 (0.21) mM (or 3.6×107 and 4.6×107 singlet oxygen per cell to reduce the cell survival to 1/e) for Photofrin® and BPD, respectively, assuming that the fraction of generated singlet oxygen that interacts with the cell is 1. While the values for the photochemical parameters (ξ, σ, g, β) used for BPD were preliminary and may need further refinement, there is reasonable confidence for the values of the singlet oxygen threshold doses. In comparison, the [1O2]rx,sh value derived from an in-vivo mouse study was reported to be 0.4 mM for mTHPC-PDT. However, the singlet oxygen required per cell is reported to be 9×108 per cell per 1/e fractional kill in an in-vitro mTHPC-PDT study on a rat prostate cancer cell line (MLL cells) and is reported to be 7.9 mM for a multicell in-vitro EMT6/Ro spheroid model for mTHPC-PDT. A theoretical analysis is provided to relate the number of in-vitro singlet oxygen required per cell to reach cell killing of 1/e to the in-vivo singlet oxygen threshold dose (in mM). The sensitivity of the threshold singlet oxygen dose for our experiment is examined. The possible influence of vascular vs. apoptotic cell killing mechanisms on the singlet oxygen threshold dose is discussed by comparing [1O2]rx,sh for BPD with 3 hr and 15 min drug-light intervals, with the latter being known to have a dominantly vascular effect. The experimental results of threshold singlet oxygen concentration in an in-vivo RIF tumor model for Photofrin®, BPD, and mTHPC are about 20 times smaller than those observed in vitro. These results are consistent with knowledge that factors other than singlet oxygen-mediated tumor cell killing can contribute to PDT damage in-vivo.

  9. Posterior error probability in the Mu-2 Sequential Ranging System

    NASA Technical Reports Server (NTRS)

    Coyle, C. W.

    1981-01-01

    An expression is derived for the posterior error probability in the Mu-2 Sequential Ranging System. An algorithm is developed which closely bounds the exact answer and can be implemented in the machine software. A computer simulation is provided to illustrate the improved level of confidence in a ranging acquisition using this figure of merit as compared to that obtained using only the prior probabilities. In a simulation of 20,000 acquisitions with an experimentally determined threshold setting, the algorithm detected 90% of the actual errors and falsely indicated errors on 0.2% of the acquisitions.

  10. Laser Ablation of Poly(methylmethacrylate) Doped with Aromatic Compounds: Laser Intensity Dependence of Absorption Coefficient

    NASA Astrophysics Data System (ADS)

    Wang, Jun; Niino, Hiroyuki; Yabe, Akira

    1999-02-01

    We developed a novel method of obtaining an absorption coefficient which depends on the laser intensity, since a single-photon absorption coefficient of a polymer could not be applied to laser ablation. The relationship between the nonlinear absorption coefficient and the laser intensity was derived from experimental data on transmitted and incident laser intensities. Using the nonlinear absorption coefficient of poly(methylmethacrylate) doped with benzil and pyrene, we succeeded in fitting the experimentally obtained relationship between etch depth and laser intensity, and discussed the energy absorbed by the polymer at the threshold fluence.
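
    The sketch below illustrates the general idea in hedged form: an effective, intensity-dependent absorption coefficient is extracted from measured incident and transmitted intensities via an effective Beer-Lambert relation and then inserted into the usual logarithmic etch-depth expression. The film thickness, data pairs and threshold fluence are illustrative placeholders, not the paper's measurements.

```python
import numpy as np

d_cm = 10e-4                                 # film thickness: 10 micrometres, assumed
# (incident, transmitted) intensities in MW/cm^2, synthetic pairs
I0 = np.array([1.0, 3.0, 10.0, 30.0, 100.0])
It = np.array([0.55, 1.5, 4.0, 9.0, 21.0])

alpha_eff = -np.log(It / I0) / d_cm          # effective absorption coefficient (1/cm)
for i0, a in zip(I0, alpha_eff):
    print(f"I0 = {i0:6.1f} MW/cm^2  ->  alpha_eff = {a:7.0f} 1/cm")

# etch depth per pulse from the Beer-Lambert-type ablation relation
# depth = (1/alpha) * ln(F / F_th), using the alpha obtained at the highest intensity
F, F_th = 2.0, 0.5                           # fluence and threshold fluence (J/cm^2), assumed
depth_um = 1e4 * np.log(F / F_th) / alpha_eff[-1]
print(f"estimated etch depth: {depth_um:.2f} um/pulse")
```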

  11. Evaluation of runaway-electron effects on plasma-facing components for NET

    NASA Astrophysics Data System (ADS)

    Bolt, H.; Calén, H.

    1991-03-01

    Runaway electrons which are generated during disruptions can cause serious damage to plasma facing components in a next generation device like NET. A study was performed to quantify the response of NET plasma facing components to runaway-electron impact. Monte Carlo computations were performed to determine the energy deposition in the component materials. Since the subsurface metal structures can be strongly heated under runaway-electron impact, damage threshold values for the thermal excursions were derived from the computed results. These damage thresholds are strongly dependent on the materials selection and the component design. For a carbon-molybdenum divertor with 10 and 20 mm carbon armour thickness and 1 degree electron incidence, the damage thresholds are 100 MJ/m2 and 220 MJ/m2. The thresholds for a carbon-copper divertor under the same conditions are about 50% lower. On the first wall, damage is anticipated for energy depositions above 180 MJ/m2.

  12. Effect of threshold disorder on the quorum percolation model

    NASA Astrophysics Data System (ADS)

    Monceau, Pascal; Renault, Renaud; Métens, Stéphane; Bottani, Samuel

    2016-07-01

    We study the modifications induced in the behavior of the quorum percolation model on neural networks with Gaussian in-degree by taking into account an uncorrelated Gaussian threshold variability. We derive a mean-field approach and show its relevance by carrying out explicit Monte Carlo simulations. It turns out that such a disorder shifts the position of the percolation transition, impacts the size of the giant cluster, and can even destroy the transition. Moreover, we highlight the occurrence of disorder-independent fixed points above the quorum critical value. The mean-field approach enables us to interpret these effects in terms of activation probability. A finite-size analysis enables us to show that the order parameter is weakly self-averaging with an exponent independent of the threshold disorder. Last, we show that the effects of threshold and connectivity disorders cannot be easily discriminated from the measured averaged physical quantities.

  13. [Definition of low threshold volumes for quality assurance: conceptual and methodological issues involved in the definition and evaluation of thresholds for volume outcome relations in clinical care].

    PubMed

    Wetzel, Hermann

    2006-01-01

    In a large number of mostly retrospective association studies, a statistical relationship between volume and quality of health care has been reported. However, the relevance of these results is frequently limited by methodological shortcomings. In this article, criteria for the evidence and definition of thresholds for volume-outcome relations are proposed, e.g. the specification of relevant outcomes for quality indicators, analysis of volume as a continuous variable with an adequate case-mix and risk adjustment, accounting for cluster effects and considering mathematical models for the derivation of cut-off values. Moreover, volume thresholds are regarded as surrogate parameters for the indirect classification of the quality of care, whose diagnostic validity and effectiveness in improving health care quality need to be evaluated in prospective studies.

  14. Volcano early warning system based on MSG-SEVIRI multispectral data

    NASA Astrophysics Data System (ADS)

    Ganci, Gaetana; Vicari, Annamaria; Del Negro, Ciro

    2010-05-01

    Spaceborne remote sensing of high-temperature volcanic features offers an excellent opportunity to monitor the onset and development of new eruptive activity. In particular, images with lower spatial but higher temporal resolution from meteorological satellites have proved to be a sound instrument for continuous monitoring of volcanic activity, even though the relevant volcanic features are much smaller than the nominal pixel size. The launch of the Spinning Enhanced Visible and Infrared Imager (SEVIRI) in August 2002, onboard the geosynchronous platforms MSG1 and MSG2, has opened a new perspective for near real-time volcano monitoring by providing images at 15-minute intervals. Indeed, in spite of the low spatial resolution (3 km2 at nadir), the high frequency of observations afforded by MSG SEVIRI was recently applied both to forest fire detection and to the monitoring of effusive volcanoes in Europe and Africa. Our Laboratory of Technologies (TecnoLab) at INGV-CT has been developing methods and know-how for the automated acquisition and management of MSG SEVIRI data. To provide a basis for real-time response during eruptive events, we designed and developed the automated system called HOTSAT. Our algorithm takes advantage of both spectral and spatial comparisons. First, an adaptive thresholding procedure, based on the spatial standard deviation computed over the immediate neighborhood of each pixel, detects "potential" hot pixels. Second, these candidate detections are confirmed or rejected as true or false hotspots using further threshold tests derived from the SEVIRI middle-infrared (MIR, 3.9 μm) brightness temperatures, taking into account their statistical behavior. With these procedures, all the computations are based on dynamic thresholds, reducing the number of false alarms due to atmospheric conditions. The algorithm also allows the derivation of radiative power at all "hot" pixels. This is carried out using the MIR radiance method introduced by Wooster et al. [2003] for forest fires, which is based on an approximation of Planck's law as a power law. No assumption is made on the thermal structure of the pixel. The radiant flux, i.e. the fire radiative power, is proportional to the calibrated radiance associated with the hot part of the pixel, computed as the difference between the observed hotspot pixel radiance in the SEVIRI MIR channel and the background radiance that would have been observed at the same location in the absence of thermal anomalies. The HOTSAT early warning system based on SEVIRI multispectral data is now suitable for deployment in an operational volcano-monitoring system. To validate and test the system, some real cases on Mt Etna are presented.
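    A minimal sketch of the first, spatial step described above (an adaptive threshold based on the local standard deviation of brightness temperature) is given below; the window size and the multiplier k are illustrative assumptions rather than the operational HOTSAT settings, and the spectral confirmation step is omitted.

```python
# Minimal sketch of an adaptive spatial hotspot test: flag pixels whose MIR brightness
# temperature exceeds the local mean by k local standard deviations.
# Window size and k are assumed values, not the HOTSAT operational parameters.
import numpy as np
from scipy import ndimage

def candidate_hot_pixels(mir_bt, window=5, k=3.0):
    """mir_bt: 2-D array of MIR brightness temperatures; returns a boolean mask."""
    local_mean = ndimage.uniform_filter(mir_bt, size=window)
    local_sqmean = ndimage.uniform_filter(mir_bt ** 2, size=window)
    local_std = np.sqrt(np.maximum(local_sqmean - local_mean ** 2, 0.0))
    return mir_bt > local_mean + k * local_std
```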

  15. Threshold Evaluation of Emergency Risk Communication for Health Risks Related to Hazardous Ambient Temperature.

    PubMed

    Liu, Yang; Hoppe, Brenda O; Convertino, Matteo

    2018-04-10

    Emergency risk communication (ERC) programs that activate when the ambient temperature is expected to cross certain extreme thresholds are widely used to manage relevant public health risks. In practice, however, the effectiveness of these thresholds has rarely been examined. The goal of this study is to test if the activation criteria based on extreme temperature thresholds, both cold and heat, capture elevated health risks for all-cause and cause-specific mortality and morbidity in the Minneapolis-St. Paul Metropolitan Area. A distributed lag nonlinear model (DLNM) combined with a quasi-Poisson generalized linear model is used to derive the exposure-response functions between daily maximum heat index and mortality (1998-2014) and morbidity (emergency department visits; 2007-2014). Specific causes considered include cardiovascular, respiratory, renal diseases, and diabetes. Six extreme temperature thresholds, corresponding to 1st-3rd and 97th-99th percentiles of local exposure history, are examined. All six extreme temperature thresholds capture significantly increased relative risks for all-cause mortality and morbidity. However, the cause-specific analyses reveal heterogeneity. Extreme cold thresholds capture increased mortality and morbidity risks for cardiovascular and respiratory diseases and extreme heat thresholds for renal disease. Percentile-based extreme temperature thresholds are appropriate for initiating ERC targeting the general population. Tailoring ERC by specific causes may protect some but not all individuals with health conditions exacerbated by hazardous ambient temperature exposure. © 2018 Society for Risk Analysis.
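    As a simplified, hedged illustration of the general modelling idea (not the distributed lag nonlinear model actually used in the study), the sketch below fits a quasi-Poisson regression of daily deaths on an extreme-heat indicator defined by a percentile threshold of the daily maximum heat index; the column names are hypothetical.

```python
# Simplified illustration: quasi-Poisson GLM with a percentile-based extreme-heat
# indicator. This is not the DLNM of the study; 'heat_index' and 'deaths' are
# hypothetical column names.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def extreme_heat_relative_risk(df, percentile=97.0):
    threshold = np.percentile(df["heat_index"], percentile)
    X = pd.DataFrame({
        "const": 1.0,
        "extreme_heat": (df["heat_index"] >= threshold).astype(float),
    })
    # scale="X2" uses the Pearson chi-square estimate, i.e. quasi-Poisson dispersion.
    fit = sm.GLM(df["deaths"], X, family=sm.families.Poisson()).fit(scale="X2")
    return threshold, float(np.exp(fit.params["extreme_heat"]))  # crude relative risk
```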

  16. Bayesian methods for estimating GEBVs of threshold traits

    PubMed Central

    Wang, C-L; Ding, X-D; Wang, J-Y; Liu, J-F; Fu, W-X; Zhang, Z; Yin, Z-J; Zhang, Q

    2013-01-01

    Estimation of genomic breeding values is the key step in genomic selection (GS). Many methods have been proposed for continuous traits, but methods for threshold traits are still scarce. Here we introduced the threshold model into the framework of GS; specifically, we extended the three Bayesian methods BayesA, BayesB and BayesCπ on the basis of the threshold model for estimating genomic breeding values of threshold traits, and the extended methods are correspondingly termed BayesTA, BayesTB and BayesTCπ. Computing procedures of the three BayesT methods using a Markov chain Monte Carlo algorithm were derived. A simulation study was performed to investigate the benefit of the presented methods in terms of accuracy of the genomic estimated breeding values (GEBVs) for threshold traits. Factors affecting the performance of the three BayesT methods were addressed. As expected, the three BayesT methods generally performed better than the corresponding normal Bayesian methods, in particular when the number of phenotypic categories was small. In the standard scenario (number of categories=2, incidence=30%, number of quantitative trait loci=50, h2=0.3), the accuracies were improved by 30.4, 2.4, and 5.7 percentage points, respectively. In most scenarios, BayesTB and BayesTCπ generated similar accuracies and both performed better than BayesTA. In conclusion, our work showed that the threshold model fits well for predicting GEBVs of threshold traits, and BayesTCπ is supposed to be the method of choice for GS of threshold traits. PMID:23149458
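    The liability (threshold) idea underlying these BayesT methods can be sketched as follows: a categorical phenotype is produced by thresholding an unobserved continuous liability built from marker effects. The scaling below is illustrative only and is not the simulation design of the paper.

```python
# Minimal sketch of the liability/threshold model for a binary trait.
# Sample size, marker count, h2 and incidence are illustrative values.
import numpy as np

rng = np.random.default_rng(0)
n, m, h2, incidence = 1000, 200, 0.3, 0.30

X = rng.binomial(2, 0.5, size=(n, m)).astype(float)        # SNP genotypes coded 0/1/2
beta = rng.normal(0.0, np.sqrt(h2 / (0.5 * m)), size=m)    # marker effects, Var(g) ~ h2
g = X @ beta                                               # true breeding values
liability = g + rng.normal(0.0, np.sqrt(1.0 - h2), size=n) # latent liability
threshold = np.quantile(liability, 1.0 - incidence)        # threshold fixing the incidence
y = (liability > threshold).astype(int)                    # observed binary phenotype
print("realized incidence:", y.mean())
```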

  17. 24 CFR 599.301 - Initial determination of threshold requirements.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 24 Housing and Urban Development 3 2011-04-01 2010-04-01 true Initial determination of threshold... Nominating Renewal Communities § 599.301 Initial determination of threshold requirements. (a) Two threshold... meets both of the following thresholds: (1) Eligibility of the nominated area. This threshold is met if...

  18. Modified cable equation incorporating transverse polarization of neuronal membranes for accurate coupling of electric fields.

    PubMed

    Wang, Boshuo; Aberra, Aman S; Grill, Warren M; Peterchev, Angel V

    2018-04-01

    We present a theory and computational methods to incorporate transverse polarization of neuronal membranes into the cable equation to account for the secondary electric field generated by the membrane in response to transverse electric fields. The effect of transverse polarization on nonlinear neuronal activation thresholds is quantified and discussed in the context of previous studies using linear membrane models. The response of neuronal membranes to applied electric fields is derived under two time scales and a unified solution of transverse polarization is given for spherical and cylindrical cell geometries. The solution is incorporated into the cable equation re-derived using an asymptotic model that separates the longitudinal and transverse dimensions. Two numerical methods are proposed to implement the modified cable equation. Several common neural stimulation scenarios are tested using two nonlinear membrane models to compare thresholds of the conventional and modified cable equations. The implementations of the modified cable equation incorporating transverse polarization are validated against previous results in the literature. The test cases show that transverse polarization has a limited effect on activation thresholds. The transverse field only affects thresholds of unmyelinated axons for short pulses and in low-gradient field distributions, whereas myelinated axons are mostly unaffected. The modified cable equation captures the membrane's behavior on different time scales and models more accurately the coupling between electric fields and neurons. It addresses the limitations of the conventional cable equation and allows sound theoretical interpretations. The implementation provides simple methods that are compatible with current simulation approaches to study the effect of transverse polarization on nonlinear membranes. The minimal influence of transverse polarization on axonal activation thresholds for the nonlinear membrane models indicates that predictions of stronger effects in linear membrane models with a fixed activation threshold are inaccurate. Thus, the conventional cable equation works well for most neuroengineering applications, and the presented modeling approach is well suited to address the exceptions.

  19. Novel analyses of long-term data provide a scientific basis for chlorophyll-a thresholds in San Francisco Bay

    NASA Astrophysics Data System (ADS)

    Sutula, Martha; Kudela, Raphael; Hagy, James D.; Harding, Lawrence W.; Senn, David; Cloern, James E.; Bricker, Suzanne; Berg, Gry Mine; Beck, Marcus

    2017-10-01

    San Francisco Bay (SFB), USA, is highly enriched in nitrogen and phosphorus, but has been resistant to the classic symptoms of eutrophication associated with over-production of phytoplankton. Observations in recent years suggest that this resistance may be weakening, shown by: significant increases of chlorophyll-a (chl-a) and decreases of dissolved oxygen (DO), common occurrences of phytoplankton taxa that can form Harmful Algal Blooms (HAB), and algal toxins in water and mussels reaching levels of concern. As a result, managers now ask: what levels of chl-a in SFB constitute tipping points of phytoplankton biomass beyond which water quality will become degraded, requiring significant nutrient reductions to avoid impairments? We analyzed data for DO, phytoplankton species composition, chl-a, and algal toxins to derive quantitative relationships between three indicators (HAB abundance, toxin concentrations, DO) and chl-a. Quantile regressions relating HAB abundance and DO to chl-a were significant, indicating SFB is at increased risk of adverse HAB and low DO levels if chl-a continues to increase. Conditional probability analysis (CPA) showed chl-a of 13 mg m⁻³ as a "protective" threshold below which probabilities for exceeding alert levels for HAB abundance and toxins were reduced. This threshold was similar to the chl-a of 13-16 mg m⁻³ that would meet a SFB-wide 80% saturation Water Quality Criterion (WQC) for DO. Higher "at risk" chl-a thresholds from 25 to 40 mg m⁻³ corresponded to 0.5 probability of exceeding alert levels for HAB abundance, and for DO below a WQC of 5.0 mg L⁻¹ designated for lower South Bay (LSB) and South Bay (SB). We submit these thresholds as a basis to assess the eutrophication status of SFB and to inform nutrient management actions. This approach is transferable to other estuaries to derive chl-a thresholds protective against eutrophication.
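    A rough sketch of one conditional-probability-analysis step is shown below: for candidate chl-a thresholds, estimate the probability that an alert level is exceeded given chl-a is at or above the threshold. Column names and the alert flag are hypothetical, and this is not the exact CPA procedure of the study.

```python
# Sketch of conditional probability analysis over candidate chl-a thresholds.
# 'chla' (mg m-3) and 'hab_alert' (0/1 alert-level exceedance) are hypothetical columns.
import numpy as np
import pandas as pd

def exceedance_probability(df, thresholds):
    rows = []
    for t in thresholds:
        above = df[df["chla"] >= t]
        if len(above) > 0:
            rows.append((t, above["hab_alert"].mean(), len(above)))
    return pd.DataFrame(rows, columns=["chla_threshold", "p_alert_given_above", "n"])

# Usage: exceedance_probability(observations, np.arange(5.0, 45.0, 1.0))
```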

  20. A universal approach to determine footfall timings from kinematics of a single foot marker in hoofed animals

    PubMed Central

    Clayton, Hilary M.

    2015-01-01

    The study of animal movement commonly requires the segmentation of continuous data streams into individual strides. The use of forceplates and foot-mounted accelerometers readily allows the detection of the foot-on and foot-off events that define a stride. However, when relying on optical methods such as motion capture, there is a lack of validated, robust, universally applicable stride event detection methods. To date, no method has been validated for movement on a circle, while algorithms are commonly specific to front/hind limbs or gait. In this study, we aimed to develop and validate kinematic stride segmentation methods applicable to movement on a straight line and a circle at walk and trot, which rely exclusively on a single, dorsal hoof marker. The advantage of such marker placement is its robustness to marker loss and occlusion. Eight horses walked and trotted on a straight line and in a circle over an array of multiple forceplates. Kinetic events were detected based on the vertical force profile and used as the reference values. Kinematic events were detected based on displacement, velocity or acceleration signals of the dorsal hoof marker, depending on the algorithm, using (i) defined thresholds associated with derived movement signals and (ii) specific events in the derived movement signals. Method comparison was performed by calculating limits of agreement, accuracy, between-horse precision and within-horse precision based on differences between kinetic and kinematic events. In addition, we examined the effect of force thresholds ranging from 50 to 150 N on the timings of kinetic events. The two approaches resulted in very good and comparable performance: of the 3,074 processed footfall events, 95% of individual foot-on and foot-off events differed by no more than 26 ms from the kinetic event, with average accuracy between −11 and 10 ms and average within- and between-horse precision ≤8 ms. While the event-based method may be less likely to suffer from scaling effects, on soft ground the threshold-based method may prove more valuable. While we found that use of velocity thresholds for foot-on detection results in biased event estimates for the foot on the inside of the circle at trot, adjusting thresholds for this condition negated the effect. For the final four algorithms, we found no noteworthy bias between conditions or between front- and hind-foot timings. Different force thresholds in the range of 50 to 150 N had the greatest systematic effect on foot-off estimates in the hind limbs (up to 16 ms per condition on average), greater than the effect on foot-on estimates or foot-off estimates in the forelimbs (up to ±7 ms per condition on average). PMID:26157641
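    A much-simplified sketch of the threshold-based idea is given below: stance is assumed whenever the horizontal marker speed drops below a fixed threshold, and foot-on/foot-off are taken as the transitions into and out of stance. The sampling rate and threshold are placeholder values, not the validated settings of the study.

```python
# Simplified threshold-based footfall detection from a single hoof-marker trajectory.
# fs (Hz) and speed_threshold (m/s) are placeholder values, not the study's parameters.
import numpy as np

def detect_footfalls(marker_xy, fs=200.0, speed_threshold=0.15):
    """marker_xy: (n_samples, 2) horizontal marker positions in metres."""
    velocity = np.gradient(marker_xy, 1.0 / fs, axis=0)
    speed = np.linalg.norm(velocity, axis=1)
    stance = speed < speed_threshold                 # True while the foot is on the ground
    edges = np.diff(stance.astype(int))
    foot_on = (np.where(edges == 1)[0] + 1) / fs     # swing -> stance transition times (s)
    foot_off = (np.where(edges == -1)[0] + 1) / fs   # stance -> swing transition times (s)
    return foot_on, foot_off
```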

  1. Evaluation of thresholding techniques for segmenting scaffold images in tissue engineering

    NASA Astrophysics Data System (ADS)

    Rajagopalan, Srinivasan; Yaszemski, Michael J.; Robb, Richard A.

    2004-05-01

    Tissue engineering attempts to address the ever-widening gap between the demand and supply of organ and tissue transplants using natural and biomimetic scaffolds. The regeneration of specific tissues aided by synthetic materials is dependent on the structural and morphometric properties of the scaffold. These properties can be derived non-destructively using quantitative analysis of high-resolution microCT scans of scaffolds. Thresholding of the scanned images into polymeric and porous phases is central to the outcome of the subsequent structural and morphometric analysis. Visual thresholding of scaffolds produced using stochastic processes is inaccurate. Depending on the algorithmic assumptions made, automatic thresholding might also be inaccurate. Hence there is a need to analyze the performance of different techniques and propose alternate ones, if needed. This paper provides a quantitative comparison of different thresholding techniques for segmenting scaffold images. The thresholding algorithms examined include those that exploit spatial information, locally adaptive characteristics, histogram entropy information, histogram shape information, and clustering of gray-level information. The performance of different techniques was evaluated using established criteria, including misclassification error, edge mismatch, relative foreground error, and region non-uniformity. Algorithms that exploit local image characteristics seem to perform much better than those using global information.
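    As a hedged example of one clustering-based thresholding method and one of the evaluation criteria named above, the sketch below binarizes a slice with Otsu's method and scores it by misclassification error against a reference mask; the reference mask is assumed to be available and this is not the paper's full evaluation protocol.

```python
# Sketch: Otsu thresholding of a scaffold microCT slice and misclassification error
# against an assumed ground-truth segmentation.
import numpy as np
from skimage.filters import threshold_otsu

def segment_and_score(image, ground_truth):
    t = threshold_otsu(image)
    prediction = image > t                              # foreground = polymeric phase
    misclassification_error = np.mean(prediction != ground_truth)
    return t, misclassification_error
```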

  2. A Windshear Hazard Index

    NASA Technical Reports Server (NTRS)

    Proctor, Fred H.; Hinton, David A.; Bowles, Roland L.

    2000-01-01

    An aircraft exposed to hazardous low-level windshear may suffer a critical loss of airspeed and altitude, thus endangering its ability to remain airborne. In order to characterize this hazard, a nondimensional index was developed based on aerodynamic principles and an understanding of windshear phenomena. This paper reviews the development and application of the Bowles F-factor, which is now used by onboard sensors for the detection of hazardous windshear. It was developed and tested during NASA/FAA's airborne windshear program and is now required for FAA certification of onboard radar windshear detection systems. Reviewed in this paper are: 1) the definition of windshear and a description of atmospheric phenomena that may cause hazardous windshear, 2) derivation and discussion of the F-factor, 3) development of the F-factor hazard threshold, 4) its testing during field deployments, and 5) its use in accident reconstructions.
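    For reference, the hazard index reviewed here is commonly quoted in a form like the one below; sign conventions and symbols vary between sources, so this should be read as a sketch rather than the paper's exact definition.

```latex
% Commonly quoted form of the windshear hazard index (conventions vary by source):
%   \dot{W}_x : rate of change of the horizontal wind component along the flight path
%   w_h       : vertical wind component,  V : airspeed,  g : gravitational acceleration
F = \frac{\dot{W}_x}{g} - \frac{w_h}{V}
```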

  3. A local PDE model of aggregation formation in bacterial colonies

    NASA Astrophysics Data System (ADS)

    Chavy-Waddy, Paul-Christopher; Kolokolnikov, Theodore

    2016-10-01

    We study pattern formation in a model of cyanobacteria motion recently proposed by Galante, Wisen, Bhaya and Levy. By taking a continuum limit of their model, we derive a novel fourth-order nonlinear parabolic PDE that governs the behaviour of the model. This PDE is $u_t = -u_{xx} - u_{xxxx} + \alpha \left( \frac{u_x u_{xx}}{u} \right)_x$. We then derive the instability thresholds for the onset of pattern formation. We also compute analytically the spatial profiles of the steady-state aggregation density. These profiles are shown to be of the form $\mathrm{sech}^p$, where the exponent $p$ is related to the parameters of the model. Full numerical simulations give a favorable comparison between the continuum and the underlying discrete system, and show that the aggregation profiles are stable above the critical threshold.

  4. Determining the Applicability of Threshold of Toxicological Concern Approaches to Substances Found in Foods

    PubMed Central

    Canady, Richard; Lane, Richard; Paoli, Greg; Wilson, Margaret; Bialk, Heidi; Hermansky, Steven; Kobielush, Brent; Lee, Ji-Eun; Llewellyn, Craig; Scimeca, Joseph

    2013-01-01

    Threshold of Toxicological Concern (TTC) decision-support methods present a pragmatic approach to using data from well-characterized chemicals and protective estimates of exposure in a stepwise fashion to inform decisions regarding low-level exposures to chemicals for which few data exist. It is based on structural and functional categorizations of chemicals derived from decades of animal testing with a wide variety of chemicals. Expertise is required to use the TTC methods, and there are situations in which its use is clearly inappropriate or not currently supported. To facilitate proper use of the TTC, this paper describes issues to be considered by risk managers when faced with the situation of an unexpected substance in food. Case studies are provided to illustrate the implementation of these considerations, demonstrating the steps taken in deciding whether it would be appropriate to apply the TTC approach in each case. By appropriately applying the methods, employing the appropriate scientific expertise, and combining use with the conservative assumptions embedded within the derivation of the thresholds, the TTC can realize its potential to protect public health and to contribute to efficient use of resources in food safety risk management. PMID:24090142

  5. Consensus sediment quality guidelines for polycyclic aromatic hydrocarbon mixtures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Swartz, R.C.

    1999-04-01

    Sediment quality guidelines (SQGs) for polycyclic aromatic hydrocarbons (PAHs) have been derived from a variety of laboratory, field, and theoretical foundations. They include the screening level concentration, effects ranges-low and -median, equilibrium partitioning concentrations, apparent effects threshold, ΣPAH model, and threshold and probable effects levels. The resolution of controversial differences among the PAH SQGs lies in an understanding of the effects of mixtures. Polycyclic aromatic hydrocarbons virtually always occur in field-collected sediment as a complex mixture of covarying compounds. When expressed as a mixture concentration, that is, total PAH (TPAH), the guidelines form three clusters that were intended in their original derivations to represent threshold (TEC = 290 μg/g organic carbon [OC]), median (MEC = 1,800 μg/g OC), and extreme (EEC = 10,000 μg/g OC) effects concentrations. The TEC/MEC/EEC consensus guidelines provide a unifying synthesis of other SQGs, reflect causal rather than correlative effects, account for mixtures, and predict sediment toxicity and benthic community perturbations at sites of PAH contamination. The TEC offers the most useful SQG because PAH mixtures are unlikely to cause adverse effects on benthic ecosystems below the TEC.

  6. Review: A Position Paper on Selenium in Ecotoxicology: A Procedure for Deriving Site-Specific Water Quality Criteria

    Treesearch

    A. Dennis Lemly

    1997-01-01

    This paper describes a method for deriving site-specific water quality criteria for selenium using a two-step process: (1) gather information on selenium residues and biological effects at the site and in down-gradient systems and (2) examine criteria based on the degree of bioaccumulation, the relationship between measured residues and threshold concentrations for...

  7. SU-C-BRA-02: Gradient Based Method of Target Delineation On PET/MR Image of Head and Neck Cancer Patients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dance, M; Chera, B; Falchook, A

    2015-06-15

    Purpose: Validate the consistency of a gradient-based segmentation tool to facilitate accurate delineation of PET/CT-based GTVs in head and neck cancers by comparing against hybrid PET/MR-derived GTV contours. Materials and Methods: A total of 18 head and neck target volumes (10 primary and 8 nodal) were retrospectively contoured using a gradient-based segmentation tool by two observers. Each observer independently contoured each target five times. Inter-observer variability was evaluated via absolute percent differences. Intra-observer variability was examined by percentage uncertainty. All target volumes were also contoured using the SUV percent threshold method. The thresholds were explored case by case so that the derived volume matched the gradient-based volume. Dice similarity coefficients (DSC) were calculated to determine overlap of PET/CT GTVs and PET/MR GTVs. Results: Levene's test showed there was no statistically significant difference in the variances between the observers' gradient-derived contours. However, the absolute difference between the observers' volumes was 10.83%, with a range from 0.39% up to 42.89%. PET-avid regions with qualitatively non-uniform shapes and intensity levels had a higher absolute percent difference, near 25%, while regions with uniform shapes and intensity levels had an absolute percent difference of 2% between observers. The average percentage uncertainty between observers was 4.83% and 7%. As the volume of the gradient-derived contours increased, the SUV threshold percent needed to match the volume decreased. Dice coefficients showed good agreement of the PET/CT and PET/MR GTVs, with an average DSC value across all volumes of 0.69. Conclusion: Gradient-based segmentation of PET volume showed good consistency in general but can vary considerably for non-uniform target shapes and intensity levels. PET/CT-derived GTV contours stemming from the gradient-based tool show good agreement with the anatomically and metabolically more accurate PET/MR-derived GTV contours, but tumor delineation accuracy can be further improved with the use of PET/MR.

  8. Do Optimal Prognostic Thresholds in Continuous Physiological Variables Really Exist? Analysis of Origin of Apparent Thresholds, with Systematic Review for Peak Oxygen Consumption, Ejection Fraction and BNP

    PubMed Central

    Leong, Tora; Rehman, Michaela B.; Pastormerlo, Luigi Emilio; Harrell, Frank E.; Coats, Andrew J. S.; Francis, Darrel P.

    2014-01-01

    Background Clinicians are sometimes advised to make decisions using thresholds in measured variables, derived from prognostic studies. Objectives We studied why there are conflicting apparently-optimal prognostic thresholds, for example in exercise peak oxygen uptake (pVO2), ejection fraction (EF), and Brain Natriuretic Peptide (BNP) in heart failure (HF). Data Sources and Eligibility Criteria Studies testing pVO2, EF or BNP prognostic thresholds in heart failure, published between 1990 and 2010, listed on Pubmed. Methods First, we examined studies testing pVO2, EF or BNP prognostic thresholds. Second, we created repeated simulations of 1500 patients to identify whether an apparently-optimal prognostic threshold indicates step change in risk. Results 33 studies (8946 patients) tested a pVO2 threshold. 18 found it prognostically significant: the actual reported threshold ranged widely (10–18 ml/kg/min) but was overwhelmingly controlled by the individual study population's mean pVO2 (r = 0.86, p<0.00001). In contrast, the 15 negative publications were testing thresholds 199% further from their means (p = 0.0001). Likewise, of 35 EF studies (10220 patients), the thresholds in the 22 positive reports were strongly determined by study means (r = 0.90, p<0.0001). Similarly, in the 19 positives of 20 BNP studies (9725 patients): r = 0.86 (p<0.0001). Second, survival simulations always discovered a “most significant” threshold, even when there was definitely no step change in mortality. With linear increase in risk, the apparently-optimal threshold was always near the sample mean (r = 0.99, p<0.001). Limitations This study cannot report the best threshold for any of these variables; instead it explains how common clinical research procedures routinely produce false thresholds. Key Findings First, shifting (and/or disappearance) of an apparently-optimal prognostic threshold is strongly determined by studies' average pVO2, EF or BNP. Second, apparently-optimal thresholds always appear, even with no step in prognosis. Conclusions Emphatic therapeutic guidance based on thresholds from observational studies may be ill-founded. We should not assume that optimal thresholds, or any thresholds, exist. PMID:24475020

  9. Coupled channel effects on resonance states of positronic alkali atom

    NASA Astrophysics Data System (ADS)

    Yamashita, Takuma; Kino, Yasushi

    2018-01-01

    S-wave Feshbach resonance states belonging to the dipole series in positronic alkali atoms (e+Li, e+Na, e+K, e+Rb and e+Cs) are studied by coupled-channel calculations within a three-body model. Resonance energies and widths below a dissociation threshold of alkali-ion and positronium are calculated with a complex scaling method. Extended model potentials that provide positronic pseudo-alkali-atoms are introduced to investigate the relationship between the resonance states and dissociation thresholds based on three-body dynamics. Resonances of the dipole series below a dissociation threshold of alkali-atom and positron would have some association with atomic energy levels, which results in longer resonance lifetimes than the prediction of the analytical law derived from the ion-dipole interaction.

  10. Algorithmic detectability threshold of the stochastic block model

    NASA Astrophysics Data System (ADS)

    Kawamoto, Tatsuro

    2018-03-01

    The assumption that the values of model parameters are known or correctly learned, i.e., the Nishimori condition, is one of the requirements for the detectability analysis of the stochastic block model in statistical inference. In practice, however, there is no example demonstrating that we can know the model parameters beforehand, and there is no guarantee that the model parameters can be learned accurately. In this study, we consider the expectation-maximization (EM) algorithm with belief propagation (BP) and derive its algorithmic detectability threshold. Our analysis is not restricted to the community structure but includes general modular structures. Because the algorithm cannot always learn the planted model parameters correctly, the algorithmic detectability threshold is qualitatively different from the one with the Nishimori condition.

  11. On the Raman threshold of passive large mode area fibers

    NASA Astrophysics Data System (ADS)

    Jauregui, Cesar; Limpert, Jens; Tünnermann, Andreas

    2011-02-01

    The output power of fiber optic laser systems has been increasing exponentially in recent years. However, non-linear effects, and in particular stimulated Raman scattering (SRS), are threatening to seriously limit the development pace in the near future. SRS can take place anywhere along the laser system; however, it is actually in the passive delivery fiber at the end of the system where SRS is most likely to occur. The common way to combat this problem is to use so-called Large Mode Area (LMA) fibers. However, these fibers are expensive and have a multimode nature that will either reduce the beam quality of the laser output or require a careful excitation of the fundamental mode. Furthermore, the larger the core area, the more complicated it will be to sustain single-mode operation. Therefore, it is becoming increasingly important to be able to determine the minimum core area required in the delivery fiber to avoid SRS. This calculation is usually carried out using the conventional formula for the Raman threshold published by R.G. Smith in 1972: Pth = 16 Aeff / (gR Leff). In this work we demonstrate that this formula and the conclusions derived from it are inaccurate for short (several meters long) LMA fibers. For example, one widespread belief (obtained from this expression) is that there is no dependence of the Raman intensity threshold (Ith = Pth/Aeff) on the mode area. However, our calculations show otherwise. Additionally, we have obtained an improved Raman threshold formula valid for short LMA fibers.
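    The small-signal critical-power estimate discussed above can be evaluated as in the sketch below; the fiber parameters are placeholder values rather than measurements from this work, and the point of the paper is precisely that this estimate becomes inaccurate for short LMA fibers.

```python
# Sketch of Smith's Raman threshold estimate P_th = 16 A_eff / (g_R * L_eff),
# with L_eff accounting for fiber loss. Parameter values below are placeholders.
import numpy as np

def raman_threshold_W(a_eff_m2, g_r_m_per_W, length_m, alpha_db_per_km=0.0):
    alpha = alpha_db_per_km * np.log(10) / 10.0 / 1e3            # dB/km -> 1/m
    l_eff = length_m if alpha == 0 else (1.0 - np.exp(-alpha * length_m)) / alpha
    return 16.0 * a_eff_m2 / (g_r_m_per_W * l_eff)

# Example: ~1000 um^2 effective area, 5 m delivery fiber, negligible loss
print(raman_threshold_W(a_eff_m2=1.0e-9, g_r_m_per_W=1.0e-13, length_m=5.0))  # ~3.2e4 W
```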

  12. Basic Diagnosis and Prediction of Persistent Contrail Occurrence using High-resolution Numerical Weather Analyses/Forecasts and Logistic Regression. Part I: Effects of Random Error

    NASA Technical Reports Server (NTRS)

    Duda, David P.; Minnis, Patrick

    2009-01-01

    Straightforward application of the Schmidt-Appleman contrail formation criteria to diagnose persistent contrail occurrence from numerical weather prediction data is hindered by significant bias errors in the upper tropospheric humidity. Logistic models of contrail occurrence have been proposed to overcome this problem, but basic questions remain about how random measurement error may affect their accuracy. A set of 5000 synthetic contrail observations is created to study the effects of random error in these probabilistic models. The simulated observations are based on distributions of temperature, humidity, and vertical velocity derived from Advanced Regional Prediction System (ARPS) weather analyses. The logistic models created from the simulated observations were evaluated using two common statistical measures of model accuracy, the percent correct (PC) and the Hanssen-Kuipers discriminant (HKD). To convert the probabilistic results of the logistic models into a dichotomous yes/no choice suitable for the statistical measures, two critical probability thresholds are considered. The HKD scores are higher when the climatological frequency of contrail occurrence is used as the critical threshold, while the PC scores are higher when the critical probability threshold is 0.5. For both thresholds, typical random errors in temperature, relative humidity, and vertical velocity are found to be small enough to allow for accurate logistic models of contrail occurrence. The accuracy of the models developed from synthetic data is over 85 percent for both the prediction of contrail occurrence and non-occurrence, although in practice, larger errors would be anticipated.
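    The two verification scores named above can be computed from a 2x2 contingency table of predicted versus observed persistent-contrail occurrence, as in the short sketch below; the example counts are hypothetical.

```python
# Percent correct (PC) and Hanssen-Kuipers discriminant (HKD) from a 2x2 contingency table.
def percent_correct(hits, misses, false_alarms, correct_negatives):
    total = hits + misses + false_alarms + correct_negatives
    return 100.0 * (hits + correct_negatives) / total

def hanssen_kuipers(hits, misses, false_alarms, correct_negatives):
    hit_rate = hits / (hits + misses)
    false_alarm_rate = false_alarms / (false_alarms + correct_negatives)
    return hit_rate - false_alarm_rate

# Hypothetical counts:
print(percent_correct(420, 80, 95, 405), hanssen_kuipers(420, 80, 95, 405))
```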

  13. Derivation of predicted no-effect concentration and ecological risk for atrazine better based on reproductive fitness.

    PubMed

    Zheng, Lei; Zhang, Yizhang; Yan, Zhenguang; Zhang, Juan; Li, Linlin; Zhu, Yan; Zhang, Yahui; Zheng, Xin; Wu, Jiangyue; Liu, Zhengtao

    2017-08-01

    Atrazine (ATZ) is an herbicide most commonly used in China and other regions of the world. It is reported to be toxic to aquatic organisms, and frequently occurs at relatively high concentrations. ATZ has been shown to affect reproduction of aquatic species at much lower levels. It is therefore controversial to perform ecological risk assessment using predicted no-effect concentrations (PNECs) derived from traditional endpoints, which fail to provide adequate protection to aquatic organisms. In this study, PNECs of ATZ were derived based on six endpoints of survival, growth, behavior, biochemistry, genetics and reproduction. The PNEC derived from reproductive lesion was 0.044 μg ATZ L⁻¹, which was obviously lower than that derived from other endpoints. In addition, a tiered ecological risk assessment was conducted in the Taizi River based on six PNECs derived from six categories of toxicity endpoints. Results of these two methods of ecological risk assessment were consistent with each other, and the risk level of ATZ to aquatic organisms was highest when reproductive fitness was taken into account. The joint probability indicated that severe ecological risk rooted in reproduction might exist in 93.9% and 99.9% of surface water in the Taizi River when the 5% threshold (HC5) and 1% threshold (HC1) were set to protect aquatic organisms, respectively. We hope the present work can provide valuable information to manage and control ATZ pollution. Copyright © 2017 Elsevier Inc. All rights reserved.

  14. Visual Cortical Function in Very Low Birth Weight Infants without Retinal or Cerebral Pathology

    PubMed Central

    Hou, Chuan; Norcia, Anthony M.; Madan, Ashima; Tith, Solina; Agarwal, Rashi

    2011-01-01

    Purpose. Preterm infants are at high risk of visual and neural developmental deficits. However, the development of visual cortical function in preterm infants with no retinal or neurologic morbidity has not been well defined. To determine whether premature birth itself alters visual cortical function, swept parameter visual evoked potential (sVEP) responses of healthy preterm infants were compared with those of term infants. Methods. Fifty-two term infants and 58 very low birth weight (VLBW) infants without significant retinopathy of prematurity or neurologic morbidities were enrolled. Recruited VLBW infants were between 26 and 33 weeks of gestational age, with birth weights of less than 1500 g. Spatial frequency, contrast, and vernier offset sweep VEP tuning functions were measured at 5 to 7 months' corrected age. Acuity and contrast thresholds were derived by extrapolating the tuning functions to 0 amplitude. These thresholds and suprathreshold response amplitudes were compared between groups. Results. Preterm infants showed increased thresholds (indicating decreased sensitivity to visual stimuli) and reductions in amplitudes for all three measures. These changes in cortical responsiveness were larger in the <30 weeks' gestational age subgroup than in the ≥30 weeks' gestational age subgroup. Conclusions. Preterm infants with VLBW had measurable and significant changes in cortical responsiveness that were correlated with gestational age. These results suggest that premature birth in the absence of identifiable retinal or neurologic abnormalities has a significant effect on visual cortical sensitivity at 5 to 7 months of corrected age and that gestational age is an important factor in visual development. PMID:22025567

  15. Perceived discrimination and health outcomes: a gender comparison among Asian-Americans nationwide.

    PubMed

    Hahm, Hyeouk Chris; Ozonoff, Al; Gaumond, Jillian; Sue, Stanley

    2010-09-01

    We examined whether similarities and differences exist in the association between perceived discrimination and poor mental and physical health among Asian-American adult women and men. We also tested whether Asian-American women would have a lower perceived discrimination threshold for developing negative health outcomes than Asian-American men. Data were derived from the National Latino and Asian-American Study (2002-2003). A nationally representative sample of Asian-American adults (1,075 women and 972 men) was examined. There were more gender similarities than differences in the strong association between discrimination and health. More prominent gender differences were found for the specific level of discrimination and its potential health effects. Specifically, for both Asian women and men, a high level of perceived discrimination showed stronger associations with mental health than with physical health outcomes. And yet, compared with men, the threshold of discrimination was lower for women in affecting mental and physical health status. The findings underscore that a high level of discrimination was associated with negative mental and physical health outcomes for both women and men. However, women had more negative mental and physical health outcomes when exposed to a lower threshold of discrimination than men. These findings suggest that failing to examine women and men separately in discrimination research may no longer be appropriate among the Asian-American population. Future research should focus attention on the biological, social, and political mechanisms that mitigate the adverse health effects of discrimination in order to develop a more comprehensive approach to eliminate disparities in health. 2010 Jacobs Institute of Women

  16. The impact of manual threshold selection in medical additive manufacturing.

    PubMed

    van Eijnatten, Maureen; Koivisto, Juha; Karhu, Kalle; Forouzanfar, Tymour; Wolff, Jan

    2017-04-01

    Medical additive manufacturing requires standard tessellation language (STL) models. Such models are commonly derived from computed tomography (CT) images using thresholding. Threshold selection can be performed manually or automatically. The aim of this study was to assess the impact of manual and default threshold selection on the reliability and accuracy of skull STL models using different CT technologies. One female and one male human cadaver head were imaged using multi-detector row CT, dual-energy CT, and two cone-beam CT scanners. Four medical engineers manually thresholded the bony structures on all CT images. The lowest and highest selected mean threshold values and the default threshold value were used to generate skull STL models. Geometric variations between all manually thresholded STL models were calculated. Furthermore, in order to calculate the accuracy of the manually and default thresholded STL models, all STL models were superimposed on an optical scan of the dry female and male skulls ("gold standard"). The intra- and inter-observer variability of the manual threshold selection was good (intra-class correlation coefficients >0.9). All engineers selected grey values closer to soft tissue to compensate for bone voids. Geometric variations between the manually thresholded STL models were 0.13 mm (multi-detector row CT), 0.59 mm (dual-energy CT), and 0.55 mm (cone-beam CT). All STL models demonstrated inaccuracies ranging from -0.8 to +1.1 mm (multi-detector row CT), -0.7 to +2.0 mm (dual-energy CT), and -2.3 to +4.8 mm (cone-beam CT). This study demonstrates that manual threshold selection results in better STL models than default thresholding. The use of dual-energy CT and cone-beam CT technology in its present form does not deliver reliable or accurate STL models for medical additive manufacturing. New approaches are required that are based on pattern recognition and machine learning algorithms.

  17. Demand for Colonoscopy in Colorectal Cancer Screening Using a Quantitative Fecal Immunochemical Test and Age/Sex-Specific Thresholds for Test Positivity.

    PubMed

    Chen, Sam Li-Sheng; Hsu, Chen-Yang; Yen, Amy Ming-Fang; Young, Graeme P; Chiu, Sherry Yueh-Hsia; Fann, Jean Ching-Yuan; Lee, Yi-Chia; Chiu, Han-Mo; Chiou, Shu-Ti; Chen, Hsiu-Hsi

    2018-06-01

    Background: Despite age and sex differences in fecal hemoglobin (f-Hb) concentrations, most fecal immunochemical test (FIT) screening programs use population-average cut-points for test positivity. The impact of age/sex-specific threshold on FIT accuracy and colonoscopy demand for colorectal cancer screening are unknown. Methods: Using data from 723,113 participants enrolled in a Taiwanese population-based colorectal cancer screening with single FIT between 2004 and 2009, sensitivity and specificity were estimated for various f-Hb thresholds for test positivity. This included estimates based on a "universal" threshold, receiver-operating-characteristic curve-derived threshold, targeted sensitivity, targeted false-positive rate, and a colonoscopy-capacity-adjusted method integrating colonoscopy workload with and without age/sex adjustments. Results: Optimal age/sex-specific thresholds were found to be equal to or lower than the universal 20 μg Hb/g threshold. For older males, a higher threshold (24 μg Hb/g) was identified using a 5% false-positive rate. Importantly, a nonlinear relationship was observed between sensitivity and colonoscopy workload with workload rising disproportionately to sensitivity at 16 μg Hb/g. At this "colonoscopy-capacity-adjusted" threshold, the test positivity (colonoscopy workload) was 4.67% and sensitivity was 79.5%, compared with a lower 4.0% workload and a lower 78.7% sensitivity using 20 μg Hb/g. When constrained on capacity, age/sex-adjusted estimates were generally lower. However, optimizing age/-sex-adjusted thresholds increased colonoscopy demand across models by 17% or greater compared with a universal threshold. Conclusions: Age/sex-specific thresholds improve FIT accuracy with modest increases in colonoscopy demand. Impact: Colonoscopy-capacity-adjusted and age/sex-specific f-Hb thresholds may be useful in optimizing individual screening programs based on detection accuracy, population characteristics, and clinical capacity. Cancer Epidemiol Biomarkers Prev; 27(6); 704-9. ©2018 AACR . ©2018 American Association for Cancer Research.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elhadj, Selim; Yoo, Jae-hyuck; Negres, Raluca A.

    The optical damage performance of electrically conductive gallium nitride (GaN) and indium tin oxide (ITO) films is addressed using large area, high power laser beam exposures at 1064 nm sub-bandgap wavelength. Analysis of the laser damage process assumes that onset of damage (threshold) is determined by the absorption and heating of a nanoscale region of a characteristic size reaching a critical temperature. We use this model to rationalize semi-quantitatively the pulse width scaling of the damage threshold from picosecond to nanosecond timescales, along with the pulse width dependence of the damage threshold probability derived by fitting large beam damage density data. Multi-shot exposures were used to address lifetime performance degradation described by an empirical expression based on the single exposure damage model. A damage threshold degradation of at least 50% was observed for both materials. Overall, the GaN films tested had 5-10 × higher optical damage thresholds than the ITO films tested for comparable transmission and electrical conductivity. This route to optically robust, large aperture transparent electrodes and power optoelectronics may thus involve use of next generation widegap semiconductors such as GaN.

  19. Universal phase transition in community detectability under a stochastic block model.

    PubMed

    Chen, Pin-Yu; Hero, Alfred O

    2015-03-01

    We prove the existence of an asymptotic phase-transition threshold on community detectability for the spectral modularity method [M. E. J. Newman, Phys. Rev. E 74, 036104 (2006) and Proc. Natl. Acad. Sci. (USA) 103, 8577 (2006)] under a stochastic block model. The phase transition on community detectability occurs as the intercommunity edge connection probability p grows. This phase transition separates a subcritical regime of small p, where modularity-based community detection successfully identifies the communities, from a supercritical regime of large p where successful community detection is impossible. We show that, as the community sizes become large, the asymptotic phase-transition threshold p* is equal to √(p1 p2), where pi (i = 1, 2) is the within-community edge connection probability. Thus the phase-transition threshold is universal in the sense that it does not depend on the ratio of community sizes. The universal phase-transition phenomenon is validated by simulations for moderately sized communities. Using the derived expression for the phase-transition threshold, we propose an empirical method for estimating this threshold from real-world data.

  20. Education-Adjusted Normality Thresholds for FDG-PET in the Diagnosis of Alzheimer Disease.

    PubMed

    Mainta, Ismini C; Trombella, Sara; Morbelli, Silvia; Frisoni, Giovanni B; Garibotto, Valentina

    2018-06-05

    A corollary of the reserve hypothesis is that what is regarded as pathological cortical metabolism in patients might vary according to education. The aim of this study is to assess the incremental diagnostic value of education-adjusted over unadjusted thresholds on the diagnostic accuracy of FDG-PET as a biomarker for Alzheimer disease (AD). We compared cortical metabolism in 90 healthy controls and 181 AD patients from the Alzheimer Disease Neuroimaging Initiative (ADNI) database. The AUC of the ROC curve did not differ significantly between the whole group and the higher-education patients or the lower-education subjects. The threshold of wMetaROI values providing 80% sensitivity was lower in higher-education patients and higher in the lower-education patients, compared to the standard threshold derived over the whole AD collective, without, however, significant changes in sensitivity and specificity. These data show that education, as a proxy of reserve, is not a major confounder in the diagnostic accuracy of FDG-PET in AD and the adoption of education-adjusted thresholds is not required in daily practice. © 2018 S. Karger AG, Basel.

  1. Polynomial sequences for bond percolation critical thresholds

    DOE PAGES

    Scullard, Christian R.

    2011-09-22

    In this paper, I compute the inhomogeneous (multi-probability) bond critical surfaces for the (4, 6, 12) and (3⁴, 6) lattices using the linearity approximation described in (Scullard and Ziff, J. Stat. Mech. 03021), implemented as a branching process of lattices. I find the estimates for the bond percolation thresholds, pc(4, 6, 12) = 0.69377849... and pc(3⁴, 6) = 0.43437077..., compared with Parviainen's numerical results of pc = 0.69373383... and pc = 0.43430621... . These deviations are of the order 10⁻⁵, as is standard for this method. Deriving thresholds in this way for a given lattice leads to a polynomial with integer coefficients, the root in [0, 1] of which gives the estimate for the bond threshold, and I show how the method can be refined, leading to a series of higher-order polynomials making predictions that likely converge to the exact answer. Finally, I discuss how this fact hints that for certain graphs, such as the kagome lattice, the exact bond threshold may not be the root of any polynomial with integer coefficients.

  2. Sparse image reconstruction for molecular imaging.

    PubMed

    Ting, Michael; Raich, Raviv; Hero, Alfred O

    2009-06-01

    The application that motivates this paper is molecular imaging at the atomic level. When discretized at subatomic distances, the volume is inherently sparse. Noiseless measurements from an imaging technology can be modeled by convolution of the image with the system point spread function (psf). Such is the case with magnetic resonance force microscopy (MRFM), an emerging technology where imaging of an individual tobacco mosaic virus was recently demonstrated with nanometer resolution. We also consider additive white Gaussian noise (AWGN) in the measurements. Many prior works on sparse estimators have focused on the case when H has low coherence; however, the system matrix H in our application is the convolution matrix for the system psf. A typical convolution matrix has high coherence. This paper, therefore, does not assume a low-coherence H. A discrete-continuous form of the Laplacian and atom at zero (LAZE) p.d.f. used by Johnstone and Silverman is formulated, and two sparse estimators are derived by maximizing the joint p.d.f. of the observation and image conditioned on the hyperparameters. A thresholding rule that generalizes the hard and soft thresholding rules appears in the course of the derivation. This so-called hybrid thresholding rule, when used in the iterative thresholding framework, gives rise to the hybrid estimator, a generalization of the lasso. Estimates of the hyperparameters for the lasso and hybrid estimator are obtained via Stein's unbiased risk estimate (SURE). A numerical study with a Gaussian psf and two sparse images shows that the hybrid estimator outperforms the lasso.
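    The standard hard and soft thresholding rules that the paper's hybrid rule generalizes, used inside a generic iterative-thresholding loop, can be sketched as below; the hybrid rule itself and the SURE-based hyperparameter selection are defined in the paper and are not reproduced here.

```python
# Hard/soft thresholding rules and a generic iterative-thresholding loop
# (with the soft rule this is the usual ISTA iteration for the lasso).
import numpy as np

def hard_threshold(x, t):
    return np.where(np.abs(x) > t, x, 0.0)

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def iterative_thresholding(H, y, t, step, n_iter=200, rule=soft_threshold):
    """Estimate a sparse x from y = H x + noise; step should satisfy step <= 1/||H||^2."""
    x = np.zeros(H.shape[1])
    for _ in range(n_iter):
        x = rule(x + step * H.T @ (y - H @ x), t)
    return x
```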

  3. The Development of the Text Reception Threshold Test: A Visual Analogue of the Speech Reception Threshold Test

    ERIC Educational Resources Information Center

    Zekveld, Adriana A.; George, Erwin L. J.; Kramer, Sophia E.; Goverts, S. Theo; Houtgast, Tammo

    2007-01-01

    Purpose: In this study, the authors aimed to develop a visual analogue of the widely used Speech Reception Threshold (SRT; R. Plomp & A. M. Mimpen, 1979b) test. The Text Reception Threshold (TRT) test, in which visually presented sentences are masked by a bar pattern, enables the quantification of modality-aspecific variance in speech-in-noise…

  4. Threshold Assessment of Gear Diagnostic Tools on Flight and Test Rig Data

    NASA Technical Reports Server (NTRS)

    Dempsey, Paula J.; Mosher, Marianne; Huff, Edward M.

    2003-01-01

    A method for defining thresholds for vibration-based algorithms that provides the minimum number of false alarms while maintaining sensitivity to gear damage was developed. This analysis focused on two vibration-based gear damage detection algorithms, FM4 and MSA. The method was developed using vibration data collected during surface fatigue tests performed in a spur gearbox rig. The thresholds were defined based on damage progression during tests with damage. The false alarm rates of these thresholds were then evaluated on spur gear tests without damage. Next, the same thresholds were applied to flight data from an OH-58 helicopter transmission. Results showed that thresholds defined in test rigs can be used to define thresholds in flight to correctly classify the transmission operation as normal.

  5. When is Chemical Similarity Significant? The Statistical Distribution of Chemical Similarity Scores and Its Extreme Values

    PubMed Central

    Baldi, Pierre

    2010-01-01

    As repositories of chemical molecules continue to expand and become more open, it becomes increasingly important to develop tools to search them efficiently and assess the statistical significance of chemical similarity scores. Here we develop a general framework for understanding, modeling, predicting, and approximating the distribution of chemical similarity scores and its extreme values in large databases. The framework can be applied to different chemical representations and similarity measures but is demonstrated here using the most common binary fingerprints with the Tanimoto similarity measure. After introducing several probabilistic models of fingerprints, including the Conditional Gaussian Uniform model, we show that the distribution of Tanimoto scores can be approximated by the distribution of the ratio of two correlated Normal random variables associated with the corresponding unions and intersections. This remains true also when the distribution of similarity scores is conditioned on the size of the query molecules in order to derive more fine-grained results and improve chemical retrieval. The corresponding extreme value distributions for the maximum scores are approximated by Weibull distributions. From these various distributions and their analytical forms, Z-scores, E-values, and p-values are derived to assess the significance of similarity scores. In addition, the framework allows one to predict also the value of standard chemical retrieval metrics, such as Sensitivity and Specificity at fixed thresholds, or ROC (Receiver Operating Characteristic) curves at multiple thresholds, and to detect outliers in the form of atypical molecules. Numerous and diverse experiments carried in part with large sets of molecules from the ChemDB show remarkable agreement between theory and empirical results. PMID:20540577
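    A minimal sketch of the Tanimoto similarity on binary fingerprints, together with a Z-score against an empirical background of scores, is given below; the fingerprints and background sample are illustrative, and this is not the paper's analytical approximation of the score distribution.

```python
# Tanimoto similarity of binary fingerprints and a simple empirical Z-score.
import numpy as np

def tanimoto(a, b):
    """a, b: boolean fingerprint vectors of equal length."""
    union = np.sum(a | b)
    return float(np.sum(a & b)) / union if union else 0.0

def z_score(score, background_scores):
    mu, sigma = np.mean(background_scores), np.std(background_scores)
    return (score - mu) / sigma

# Usage: compare a query fingerprint against scores from a random background set of molecules.
```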

  6. A Two-Biomarker Model Predicts Mortality in the Critically Ill with Sepsis.

    PubMed

    Mikacenic, Carmen; Price, Brenda L; Harju-Baker, Susanna; O'Mahony, D Shane; Robinson-Cohen, Cassianne; Radella, Frank; Hahn, William O; Katz, Ronit; Christiani, David C; Himmelfarb, Jonathan; Liles, W Conrad; Wurfel, Mark M

    2017-10-15

    Improving the prospective identification of patients with systemic inflammatory response syndrome (SIRS) and sepsis at low risk for organ dysfunction and death is a major clinical challenge. To develop and validate a multibiomarker-based prediction model for 28-day mortality in critically ill patients with SIRS and sepsis. A derivation cohort (n = 888) and internal test cohort (n = 278) were taken from a prospective study of critically ill intensive care unit (ICU) patients meeting two of four SIRS criteria at an academic medical center for whom plasma was obtained within 24 hours. The validation cohort (n = 759) was taken from a prospective cohort enrolled at another academic medical center ICU for whom plasma was obtained within 48 hours. We measured concentrations of angiopoietin-1, angiopoietin-2, IL-6, IL-8, soluble tumor necrosis factor receptor-1, soluble vascular cell adhesion molecule-1, granulocyte colony-stimulating factor, and soluble Fas. We identified a two-biomarker model in the derivation cohort that predicted mortality (area under the receiver operator characteristic curve [AUC], 0.79; 95% confidence interval [CI], 0.74-0.83). It performed well in the internal test cohort (AUC, 0.75; 95% CI, 0.65-0.85) and the external validation cohort (AUC, 0.77; 95% CI, 0.72-0.83). We determined a model score threshold demonstrating high negative predictive value (0.95) for death. In addition to a low risk of death, patients below this threshold had shorter ICU length of stay, lower incidence of acute kidney injury, acute respiratory distress syndrome, and need for vasopressors. We have developed a simple, robust biomarker-based model that identifies patients with SIRS/sepsis at low risk for death and organ dysfunction.

  7. 24 CFR 954.104 - Performance thresholds.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 24 Housing and Urban Development 4 2011-04-01 2011-04-01 false Performance thresholds. 954.104... DEVELOPMENT INDIAN HOME PROGRAM Applying for Assistance § 954.104 Performance thresholds. Applicants must have... HOME program must have performed adequately. In cases of previously documented deficient performance...

  8. 24 CFR 954.104 - Performance thresholds.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 24 Housing and Urban Development 4 2010-04-01 2010-04-01 false Performance thresholds. 954.104... DEVELOPMENT INDIAN HOME PROGRAM Applying for Assistance § 954.104 Performance thresholds. Applicants must have... HOME program must have performed adequately. In cases of previously documented deficient performance...

  9. Acoustic Reflexes in Normal-Hearing Adults, Typically Developing Children, and Children with Suspected Auditory Processing Disorder: Thresholds, Real-Ear Corrections, and the Role of Static Compliance on Estimates.

    PubMed

    Saxena, Udit; Allan, Chris; Allen, Prudence

    2017-06-01

    Previous studies have suggested elevated reflex thresholds in children with auditory processing disorders (APDs). However, some aspects of the child's ear such as ear canal volume and static compliance of the middle ear could possibly affect the measurements of reflex thresholds and thus impact its interpretation. Sound levels used to elicit reflexes in a child's ear may be higher than predicted by calibration in a standard 2-cc coupler, and lower static compliance could make visualization of very small changes in impedance at threshold difficult. For this purpose, it is important to evaluate threshold data with consideration of differences between children and adults. A set of studies were conducted. The first compared reflex thresholds obtained using standard clinical procedures in children with suspected APD to that of typically developing children and adults to test the replicability of previous studies. The second study examined the impact of ear canal volume on estimates of reflex thresholds by applying real-ear corrections. Lastly, the relationship between static compliance and reflex threshold estimates was explored. The research is a set of case-control studies with a repeated measures design. The first study included data from 20 normal-hearing adults, 28 typically developing children, and 66 children suspected of having an APD. The second study included 28 normal-hearing adults and 30 typically developing children. In the first study, crossed and uncrossed reflex thresholds were measured in 5-dB step size. Reflex thresholds were analyzed using repeated measures analysis of variance (RM-ANOVA). In the second study, uncrossed reflex thresholds, real-ear correction, ear canal volume, and static compliance were measured. Reflex thresholds were measured using a 1-dB step size. The effect of real-ear correction and static compliance on reflex threshold was examined using RM-ANOVA and Pearson correlation coefficient, respectively. Study 1 replicated previous studies showing elevated reflex thresholds in many children with suspected APD when compared to data from adults using standard clinical procedures, especially in the crossed condition. The thresholds measured in children with suspected APD tended to be higher than those measured in the typically developing children. There were no significant differences between the typically developing children and adults. However, when real-ear calibrated stimulus levels were used, it was found that children's thresholds were elicited at higher levels than in the adults. A significant relationship between reflex thresholds and static compliance was found in the adult data, showing a trend for higher thresholds in ears with lower static compliance, but no such relationship was found in the data from the children. This study suggests that reflex measures in children should be adjusted for real-ear-to-coupler differences before interpretation. The data in children with suspected APD support previous studies suggesting abnormalities in reflex thresholds. The lack of correlation between threshold and static compliance estimates in children as was observed in the adults may suggest a nonmechanical explanation for age and clinically related effects. American Academy of Audiology

  10. On the Estimation of the Cost-Effectiveness Threshold: Why, What, How?

    PubMed

    Vallejo-Torres, Laura; García-Lorenzo, Borja; Castilla, Iván; Valcárcel-Nazco, Cristina; García-Pérez, Lidia; Linertová, Renata; Polentinos-Castro, Elena; Serrano-Aguilar, Pedro

    2016-01-01

    Many health care systems claim to incorporate the cost-effectiveness criterion in their investment decisions. Information on the system's willingness to pay per effectiveness unit, normally measured as quality-adjusted life-years (QALYs), however, is not available in most countries. This is partly because of the controversy that remains around the use of a cost-effectiveness threshold, about what the threshold ought to represent, and about the appropriate methodology to arrive at a threshold value. The aim of this article was to identify and critically appraise the conceptual perspectives and methodologies used to date to estimate the cost-effectiveness threshold. We provided an in-depth discussion of different conceptual views and undertook a systematic review of empirical analyses. Identified studies were categorized into the two main conceptual perspectives that argue that the threshold should reflect 1) the value that society places on a QALY and 2) the opportunity cost of investment to the system given budget constraints. These studies showed different underpinning assumptions, strengths, and limitations, which are highlighted and discussed. Furthermore, this review allowed us to compare the cost-effectiveness threshold estimates derived from different types of studies. We found that thresholds based on society's valuation of a QALY are generally larger than thresholds resulting from estimating the opportunity cost to the health care system. This implies that some interventions with positive social net benefits, as informed by individuals' preferences, might not be an appropriate use of resources under fixed budget constraints. Copyright © 2016 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  11. Assessing Regional and Interspecific Variation in Threshold Responses of Forest Breeding Birds through Broad Scale Analyses

    PubMed Central

    van der Hoek, Yntze; Renfrew, Rosalind; Manne, Lisa L.

    2013-01-01

    Background Identifying persistence and extinction thresholds in species-habitat relationships is a major focal point of ecological research and conservation. However, one major concern regarding the incorporation of threshold analyses in conservation is the lack of knowledge on the generality and transferability of results across species and regions. We present a multi-region, multi-species approach of modeling threshold responses, which we use to investigate whether threshold effects are similar across species and regions. Methodology/Principal Findings We modeled local persistence and extinction dynamics of 25 forest-associated breeding birds based on detection/non-detection data, which were derived from repeated breeding bird atlases for the state of Vermont. We did not find threshold responses to be particularly well-supported, with 9 species supporting extinction thresholds and 5 supporting persistence thresholds. This contrasts with a previous study based on breeding bird atlas data from adjacent New York State, which showed that most species support persistence and extinction threshold models (15 and 22 of 25 study species respectively). In addition, species that supported a threshold model in both states had associated average threshold estimates of 61.41% (SE = 6.11, persistence) and 66.45% (SE = 9.15, extinction) in New York, compared to 51.08% (SE = 10.60, persistence) and 73.67% (SE = 5.70, extinction) in Vermont. Across species, thresholds were found at 19.45–87.96% forest cover for persistence and 50.82–91.02% for extinction dynamics. Conclusions/Significance Through an approach that allows for broad-scale comparisons of threshold responses, we show that species vary in their threshold responses with regard to habitat amount, and that differences between even nearby regions can be pronounced. We present both ecological and methodological factors that may contribute to the different model results, but propose that regardless of the reasons behind these differences, our results merit a warning that threshold values cannot simply be transferred across regions or interpreted as clear-cut targets for ecosystem management and conservation. PMID:23409106

  12. Incorporating uncertainty of management costs in sensitivity analyses of matrix population models.

    PubMed

    Salomon, Yacov; McCarthy, Michael A; Taylor, Peter; Wintle, Brendan A

    2013-02-01

    The importance of accounting for economic costs when making environmental-management decisions subject to resource constraints has been increasingly recognized in recent years. In contrast, uncertainty associated with such costs has often been ignored. We developed a method, on the basis of economic theory, that accounts for the uncertainty in population-management decisions. We considered the case where, rather than taking fixed values, model parameters are random variables that represent the situation when parameters are not precisely known. Hence, the outcome is not precisely known either. Instead of maximizing the expected outcome, we maximized the probability of obtaining an outcome above a threshold of acceptability. We derived explicit analytical expressions for the optimal allocation and its associated probability, as a function of the threshold of acceptability, where the model parameters were distributed according to normal and uniform distributions. To illustrate our approach we revisited a previous study that incorporated cost-efficiency analyses in management decisions that were based on perturbation analyses of matrix population models. Incorporating derivations from this study into our framework, we extended the model to address potential uncertainties. We then applied these results to 2 case studies: management of a Koala (Phascolarctos cinereus) population and conservation of an olive ridley sea turtle (Lepidochelys olivacea) population. For low aspirations, that is, when the threshold of acceptability is relatively low, the optimal strategy was obtained by diversifying the allocation of funds. Conversely, for high aspirations, the budget was directed toward management actions with the highest potential effect on the population. The exact optimal allocation was sensitive to the choice of uncertainty model. Our results highlight the importance of accounting for uncertainty when making decisions and suggest that more effort should be placed on understanding the distributional characteristics of such uncertainty. Our approach provides a tool to improve decision making. © 2013 Society for Conservation Biology.
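
    The core idea, maximizing the probability that the management outcome exceeds a threshold of acceptability rather than maximizing the expected outcome, can be sketched for the simple case of a budget split between two actions whose per-unit effects are independent normal random variables. All numbers below are hypothetical, and the grid search stands in for the paper's analytical optima.

```python
import numpy as np
from scipy.stats import norm

def prob_above_threshold(alloc, mu, sigma, threshold):
    """P(sum_i alloc_i * theta_i > threshold) with independent theta_i ~ N(mu_i, sigma_i^2)."""
    mean = np.dot(alloc, mu)
    sd = np.sqrt(np.dot(alloc ** 2, sigma ** 2))
    return 1.0 - norm.cdf((threshold - mean) / sd)

budget = 100.0
mu = np.array([0.8, 1.2])     # hypothetical expected benefit per unit spent
sigma = np.array([0.1, 0.9])  # hypothetical uncertainty per unit spent
fracs = np.linspace(0.0, 1.0, 101)
for threshold in (60.0, 110.0):  # low vs. high aspiration
    probs = [prob_above_threshold(budget * np.array([f, 1.0 - f]), mu, sigma, threshold)
             for f in fracs]
    best = fracs[int(np.argmax(probs))]
    print(f"aspiration {threshold:.0f}: best to put {best:.0%} of the budget in action 1")
```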

  13. Amplitude and polarization asymmetries in a ring laser

    NASA Technical Reports Server (NTRS)

    Campbell, L. L.; Buholz, N. E.

    1971-01-01

    Asymmetric amplitude effects between the oppositely directed traveling waves in a He-Ne ring laser are analyzed both theoretically and experimentally. These effects make it possible to detect angular orientations of an inner-cavity bar with respect to the plane of the ring cavity. The amplitude asymmetries occur when a birefringent bar is placed in the three-mirror ring cavity, and an axial magnetic field is applied to the active medium. A simplified theoretical analysis is performed by using a first order perturbation theory to derive an expression for the polarization of the active medium, and a set of self-consistent equations are derived to predict threshold conditions. Polarization asymmetries between the oppositely directed waves are also predicted. Amplitude asymmetries similar in nature to those predicted at threshold occur when the laser is operating in 12-15 free-running modes, and polarization asymmetry occurs simultaneously.

  14. Generation of cyclotron harmonic waves in the ionospheric modification experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Janabi, A.H.A.; Kumar, A.; Sharma, R.P.

    1994-02-01

    In the present paper, the parametric decay instability of the pump X-mode into an electron Bernstein wave (EBW) near the second harmonic of the electron cyclotron frequency and an ion Bernstein wave (IBW) at different harmonics (ω < nω_ci; n = 2, 3, 4) is examined. Expressions are derived for the homogeneous threshold, growth rate, and convective threshold for this instability. Applications and relevance of the present investigation to ionospheric modification experiments in the F-layer of the ionosphere, as well as to intense electron cyclotron resonance heating in the upcoming MTX tokamak, are given.

  15. A Novel Optical/digital Processing System for Pattern Recognition

    NASA Technical Reports Server (NTRS)

    Boone, Bradley G.; Shukla, Oodaye B.

    1993-01-01

    This paper describes two processing algorithms that can be implemented optically: the Radon transform and angular correlation. These two algorithms can be combined in one optical processor to extract all the basic geometric and amplitude features from objects embedded in video imagery. We show that the internal amplitude structure of objects is recovered by the Radon transform, which is a well-known result, but, in addition, we show simulation results that calculate angular correlation, a simple but unique algorithm that extracts object boundaries from suitably thresholded images, from which length, width, area, aspect ratio, and orientation can be derived. In addition to circumventing scale and rotation distortions, these simulations indicate that the features derived from the angular correlation algorithm are relatively insensitive to tracking shifts and image noise. Some optical architecture concepts, including one based on micro-optical lenslet arrays, have been developed to implement these algorithms. Simulation tests and evaluation using simple synthetic object data will be described, including results of a study that uses object boundaries (derivable from angular correlation) to classify simple objects using a neural network.

  16. Pool desiccation and developmental thresholds in the common frog, Rana temporaria.

    PubMed

    Lind, Martin I; Persbo, Frida; Johansson, Frank

    2008-05-07

    The developmental threshold is the minimum size or condition that a developing organism must have reached in order for a life-history transition to occur. Although developmental thresholds have been observed for many organisms, inter-population variation among natural populations has not been examined. Since isolated populations can be subjected to strong divergent selection, population divergence in developmental thresholds can be predicted if environmental conditions favour fast or slow developmental time in different populations. Amphibian metamorphosis is a well-studied life-history transition, and using a common garden approach we compared the development time and the developmental threshold of metamorphosis in four island populations of the common frog Rana temporaria: two populations originating from islands with only temporary breeding pools and two from islands with permanent pools. As predicted, tadpoles from time-constrained temporary pools had a genetically shorter development time than those from permanent pools. Furthermore, the variation in development time among females from temporary pools was low, consistent with the action of selection on rapid development in this environment. However, there were no clear differences in the developmental thresholds between the populations, indicating that the main response to life in a temporary pool is to shorten the development time.

  17. 24 CFR 1003.302 - Project specific threshold requirements.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 24 Housing and Urban Development 4 2014-04-01 2014-04-01 false Project specific threshold requirements. 1003.302 Section 1003.302 Housing and Urban Development REGULATIONS RELATING TO HOUSING AND URBAN... Purpose Grant Application and Selection Process § 1003.302 Project specific threshold requirements. (a...

  18. Trabecular bone score (TBS): Method and applications.

    PubMed

    Martineau, P; Leslie, W D

    2017-11-01

    Trabecular bone score (TBS) is a texture index derived from standard lumbar spine dual energy X-ray absorptiometry (DXA) images and provides information about the underlying bone independent of the bone mineral density (BMD). Several salient observations have emerged. Numerous studies have examined the relationship between TBS and fracture risk and have shown that lower TBS values are associated with increased risk for major osteoporotic fracture in postmenopausal women and older men, with this result being independent of BMD values and other clinical risk factors. Therefore, despite being derived from standard DXA images, the information contained in TBS is independent and complementary to the information provided by BMD and the FRAX® tool. A procedure to generate TBS-adjusted FRAX probabilities has become available with the resultant predicted fracture risks shown to be more accurate than the standard FRAX tool. With these developments, TBS has emerged as a clinical tool for improved fracture risk prediction and guiding decisions regarding treatment initiation, particularly for patients with FRAX probabilities around an intervention threshold. In this article, we review the development, validation, clinical application, and limitations of TBS. Copyright © 2017 Elsevier Inc. All rights reserved.

  19. Evaluation of quantification methods for real-time PCR minor groove binding hybridization probe assays.

    PubMed

    Durtschi, Jacob D; Stevenson, Jeffery; Hymas, Weston; Voelkerding, Karl V

    2007-02-01

    Real-time PCR data analysis for quantification has been the subject of many studies aimed at the identification of new and improved quantification methods. Several analysis methods have been proposed as superior alternatives to the common variations of the threshold crossing method. Notably, sigmoidal and exponential curve fit methods have been proposed. However, these studies have primarily analyzed real-time PCR with intercalating dyes such as SYBR Green. Clinical real-time PCR assays, in contrast, often employ fluorescent probes whose real-time amplification fluorescence curves differ from those of intercalating dyes. In the current study, we compared four analysis methods related to recent literature: two versions of the threshold crossing method, a second derivative maximum method, and a sigmoidal curve fit method. These methods were applied to a clinically relevant real-time human herpes virus type 6 (HHV6) PCR assay that used a minor groove binding (MGB) Eclipse hybridization probe as well as an Epstein-Barr virus (EBV) PCR assay that used an MGB Pleiades hybridization probe. We found that the crossing threshold method yielded more precise results when analyzing the HHV6 assay, which was characterized by lower signal/noise and less developed amplification curve plateaus. In contrast, the EBV assay, characterized by greater signal/noise and amplification curves with plateau regions similar to those observed with intercalating dyes, gave results with statistically similar precision by all four analysis methods.
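
    For reference, the two most common quantification rules discussed above can be written down in a few lines. The sketch below uses a synthetic sigmoid amplification curve and an arbitrary threshold (not the authors' exact implementations) and returns the fractional crossing-threshold cycle (Ct) and the second-derivative-maximum cycle.

```python
import numpy as np

def ct_threshold_crossing(cycles, fluorescence, threshold):
    """Fractional cycle at which fluorescence first crosses `threshold`
    (linear interpolation between the bracketing cycles)."""
    above = np.flatnonzero(fluorescence >= threshold)
    if above.size == 0 or above[0] == 0:
        return float("nan")
    i = above[0]
    f0, f1 = fluorescence[i - 1], fluorescence[i]
    return cycles[i - 1] + (threshold - f0) / (f1 - f0) * (cycles[i] - cycles[i - 1])

def ct_second_derivative_max(cycles, fluorescence):
    """Cycle at which the numerical second derivative of the curve is largest."""
    d2 = np.gradient(np.gradient(fluorescence, cycles), cycles)
    return cycles[int(np.argmax(d2))]

# Synthetic sigmoid amplification curve for illustration only.
cycles = np.arange(1.0, 41.0)
fluor = 1.0 / (1.0 + np.exp(-(cycles - 25.0) / 1.5))
print(ct_threshold_crossing(cycles, fluor, 0.2), ct_second_derivative_max(cycles, fluor))
```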

  20. A Simplified Approach to Cloud Masking with VIIRS in the S-NPP/JPSS Era

    NASA Technical Reports Server (NTRS)

    Jedlovec, Gary J.; Lafontaine, Frank J.

    2014-01-01

    The quantitative detection of clouds in satellite imagery has a number of important applications in weather analysis. The proper interpretation of satellite imagery for improved situational awareness depends on knowing where the clouds are at all times of the day. Additionally, many products derived from infrared measurements need accurate cloud information to mask out regions where retrieval of geophysical parameters in the atmosphere or on the surface is not possible. Thus, the accurate detection of the presence of clouds in satellite imagery on a global basis is important to product developers and the operational weather community to support their decision-making process. This abstract describes an application of a two-channel bispectral composite threshold (BCT) approach applied to VIIRS imagery. The simplified BCT approach uses only the 10.76 and 3.75 micrometer spectral channels in two spectral tests: a straightforward infrared threshold test with the longwave channel and a shortwave minus longwave channel difference test. The key to the success of this approach, as demonstrated in past applications to GOES and MODIS data, is the generation of temporally and spatially dependent thresholds for the tests from a number of previous days of observations at times similar to the current data. The presentation will provide an overview of the approach and intercomparison results against other satellites, methods, and verification data.
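
    A schematic version of the two tests is shown below. It is not the operational code: the channel units, the source of the composited thresholds, and the form of the difference test (an absolute difference is used here) are assumptions; in the real method the thresholds are built from the preceding days' observations.

```python
import numpy as np

def bct_cloud_mask(bt_1076, bt_375, lw_threshold, diff_threshold):
    """Simplified bispectral composite threshold (BCT) cloud test.

    bt_1076        : 10.76 um brightness temperatures (K)
    bt_375         : 3.75 um brightness temperatures (K)
    lw_threshold   : spatially/temporally varying longwave threshold (K)
    diff_threshold : threshold on the 3.75 - 10.76 um difference (K)
    A pixel is flagged cloudy if either test fires.
    """
    ir_test = bt_1076 < lw_threshold                       # cold longwave pixels
    diff_test = np.abs(bt_375 - bt_1076) > diff_threshold  # spectral difference test
    return ir_test | diff_test
```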

  1. An impact analysis of the application of the threshold of toxicological concern concept to pharmaceuticals.

    PubMed

    Delaney, Edward J

    2007-11-01

    The recent application of the threshold of toxicological concern (TTC) concept to the regulation of pharmaceuticals in the European Union is analyzed. The derivation of TTC and the threshold of regulation that followed it were originally intended to provide makers of food contact materials greater flexibility with their products, while allowing the CFSAN branch of FDA to conserve its resources for more important issues. A reanalysis of the scientific data employed by EMEA regulators to rationalize its 1.5 mcg default genotoxic impurity limit is presented to demonstrate (a) that direct translation of conclusions relevant to food consumption are unduly influenced by many classes of potent carcinogens of historic concern which would be impossible to generate unknowingly as pharmaceutical impurities, and (b) that the majority of reactive chemicals that would be useful to synthetic chemists are among the least potent carcinogens in the underpinning supportive analyses. Evidence is further presented to show that implementation and acceptance of a 1.5 mcg TTC-based total limit on such impurities can be expected to impede pharmaceutical research and development efficiency while providing an insignificant cancer risk-avoidance benefit to patients who require pharmaceutical treatments. The conclusion drawn is that a significantly higher default limit can readily be defended that would be both in keeping with TTC principles and the best interest of patients.

  2. Developing a driving Safety Index using a Delphi stated preference experiment.

    PubMed

    Jamson, Samantha; Wardman, Mark; Batley, Richard; Carsten, Oliver

    2008-03-01

    Whilst empirical evidence is available concerning the effect of some aspects of driving behaviour on safety (e.g. speed choice), there is scant knowledge about safety thresholds, i.e. the point at which behaviour can be considered unsafe. Furthermore, it is almost impossible to ascertain the interaction between various aspects of driving behaviour. For example, how might drivers' lateral control of a vehicle be mediated by their speed choice: are the effects additive, or do they cancel each other out? Complex experimental or observational studies would need to be undertaken to establish the nature of such effects. As an alternative, a Delphi study was undertaken to use expert judgement as a way of deriving a first approximation of these threshold and combinatory effects. Using a stated preference technique, road safety professionals made judgements about drivers' safe or unsafe behaviour. The aim was to understand the relative weightings that are assigned to a number of driver behaviours and thereby to construct a Safety Index. As expected, experts were able to establish thresholds, above (or below) which changes to the behavioural parameters had minimal impact on safety. This provided us with a Safety Index, based on a model that had face validity and a convincing range of values. However, the experts found the task of combining these driver behaviours more difficult, reflecting the elusive nature of safety estimates. Suggestions for future validation of our Safety Index are provided.

  3. Regulation of non-relevant metabolites of plant protection products in drinking and groundwater in the EU: Current status and way forward.

    PubMed

    Laabs, V; Leake, C; Botham, P; Melching-Kollmuß, S

    2015-10-01

    Non-relevant metabolites are defined in the EU regulation for plant protection product authorization and a detailed definition of non-relevant metabolites is given in an EU Commission DG Sanco (now DG SANTE - Health and Food Safety) guidance document. However, in water legislation at EU and member state level non-relevant metabolites of pesticides are either not specifically regulated or diverse threshold values are applied. Based on their inherent properties, non-relevant metabolites should be regulated based on substance-specific and toxicity-based limit values in drinking and groundwater like other anthropogenic chemicals. Yet, if a general limit value for non-relevant metabolites in drinking and groundwater is favored, an application of a Threshold of Toxicological Concern (TTC) concept for Cramer class III compounds leads to a threshold value of 4.5 μg L⁻¹. This general value is exemplarily shown to be protective for non-relevant metabolites, based on individual drinking water limit values derived for a set of 56 non-relevant metabolites. A consistent definition of non-relevant metabolites of plant protection products, as well as their uniform regulation in drinking and groundwater in the EU, is important to achieve legal clarity for all stakeholders and to establish planning security for development of plant protection products for the European market. Copyright © 2015 Elsevier Inc. All rights reserved.

  4. Gauging the likelihood of stable cavitation from ultrasound contrast agents

    NASA Astrophysics Data System (ADS)

    Bader, Kenneth B.; Holland, Christy K.

    2013-01-01

    The mechanical index (MI) was formulated to gauge the likelihood of adverse bioeffects from inertial cavitation. However, the MI formulation did not consider bubble activity from stable cavitation. This type of bubble activity can be readily nucleated from ultrasound contrast agents (UCAs) and has the potential to promote beneficial bioeffects. Here, the presence of stable cavitation is determined numerically by tracking the onset of subharmonic oscillations within a population of bubbles for frequencies up to 7 MHz and peak rarefactional pressures up to 3 MPa. In addition, the acoustic pressure rupture threshold of an UCA population was determined using the Marmottant model. The threshold for subharmonic emissions of optimally sized bubbles was found to be lower than the inertial cavitation threshold for all frequencies studied. The rupture thresholds of optimally sized UCAs were found to be lower than the threshold for subharmonic emissions for either single cycle or steady state acoustic excitations. Because the thresholds of both subharmonic emissions and UCA rupture are linearly dependent on frequency, an index of the form ICAV = Pr/f (where Pr is the peak rarefactional pressure in MPa and f is the frequency in MHz) was derived to gauge the likelihood of subharmonic emissions due to stable cavitation activity nucleated from UCAs.
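
    Both the mechanical index and the proposed stable-cavitation index are simple ratios of the peak rarefactional pressure to (a power of) the frequency; a few lines suffice to compute them, with units as stated in the abstract (MPa and MHz).

```python
def mechanical_index(pr_mpa: float, f_mhz: float) -> float:
    """Mechanical index: peak rarefactional pressure (MPa) / sqrt(center frequency in MHz)."""
    return pr_mpa / f_mhz ** 0.5

def stable_cavitation_index(pr_mpa: float, f_mhz: float) -> float:
    """Index of the form ICAV = Pr / f proposed in the abstract for UCA-nucleated stable cavitation."""
    return pr_mpa / f_mhz

print(mechanical_index(1.0, 2.0), stable_cavitation_index(1.0, 2.0))
```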

  5. A ratiometric threshold for determining presence of cancer during fluorescence-guided surgery.

    PubMed

    Warram, Jason M; de Boer, Esther; Moore, Lindsay S; Schmalbach, Cecelia E; Withrow, Kirk P; Carroll, William R; Richman, Joshua S; Morlandt, Anthony B; Brandwein-Gensler, Margaret; Rosenthal, Eben L

    2015-07-01

    Fluorescence-guided imaging to assist in identification of malignant margins has the potential to dramatically improve oncologic surgery. However, a standardized method for quantitative assessment of disease-specific fluorescence has not been investigated. Introduced here is a ratiometric threshold derived from mean fluorescent tissue intensity that can be used to semi-quantitatively delineate tumor from normal tissue. Open-field and closed-field imaging devices were used to quantify fluorescence in punch biopsy tissues sampled from primary tumors collected during a phase 1 trial evaluating the safety of cetuximab-IRDye800 in patients (n = 11) undergoing surgical intervention for head and neck cancer. Fluorescence ratios were calculated using the mean fluorescence intensity (MFI) of each punch biopsy normalized by the MFI of patient-matched tissues. Ratios were compared to pathological assessment and a ratiometric threshold was established to predict presence of cancer. During open-field imaging using an intraoperative device, the threshold for muscle-normalized tumor fluorescence was found to be 2.7, which produced a sensitivity of 90.5% and specificity of 78.6% for delineating disease tissue. The skin-normalized threshold generated greater sensitivity (92.9%) and specificity (81.0%). Successful implementation of a semi-quantitative threshold can provide a scientific methodology for delineating disease from normal tissue during fluorescence-guided resection of cancer. © 2015 Wiley Periodicals, Inc.
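
    A semi-quantitative classification of this kind reduces to a normalized intensity ratio compared against a fixed cutoff. The sketch below uses hypothetical MFI values (the 2.7 muscle-normalized cutoff is the one reported above) and shows how the resulting sensitivity and specificity are tallied against pathology.

```python
import numpy as np

def fluorescence_ratio(mfi_biopsy, mfi_reference):
    """Biopsy mean fluorescence intensity normalized by patient-matched reference tissue MFI."""
    return np.asarray(mfi_biopsy, dtype=float) / np.asarray(mfi_reference, dtype=float)

def sensitivity_specificity(ratios, is_tumor, threshold=2.7):
    """Sensitivity/specificity of calling 'tumor' when the ratio exceeds `threshold`."""
    pred = np.asarray(ratios) > threshold
    truth = np.asarray(is_tumor, dtype=bool)
    sensitivity = np.sum(pred & truth) / np.sum(truth)
    specificity = np.sum(~pred & ~truth) / np.sum(~truth)
    return sensitivity, specificity

# Hypothetical biopsies: MFI values and pathology calls (1 = tumor, 0 = normal).
ratios = fluorescence_ratio([900, 400, 150, 1200, 220], [300, 300, 100, 350, 110])
print(sensitivity_specificity(ratios, [1, 0, 0, 1, 0]))
```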

  6. Assessment of the Anticonvulsant Potency of Ursolic Acid in Seizure Threshold Tests in Mice.

    PubMed

    Nieoczym, Dorota; Socała, Katarzyna; Wlaź, Piotr

    2018-05-01

    Ursolic acid (UA) is a plant derived compound which is also a component of the standard human diet. It possesses a wide range of pharmacological properties, i.e., antioxidant, anti-inflammatory, antimicrobial and antitumor, which have been used in folk medicine for centuries. Moreover, influence of UA on central nervous system-related processes, i.e., pain, anxiety and depression, was proved in experimental studies. UA also revealed anticonvulsant properties in animal models of epilepsy and seizures. The aim of the present study was to investigate the influence of UA on seizure thresholds in three acute seizure models in mice, i.e., the 6 Hz-induced psychomotor seizure threshold test, the maximal electroshock threshold (MEST) test and the timed intravenous pentylenetetrazole (iv PTZ) infusion test. We also examined its effect on the muscular strength (assessed in the grip strength test) and motor coordination (estimated in the chimney test) in mice. UA at doses of 50 and 100 mg/kg significantly increased the seizure thresholds in the 6 Hz and MEST tests. The studied compound did not influence the seizure thresholds in the iv PTZ test. Moreover, UA did not affect the motor coordination and muscular strength in mice. UA displays only a weak anticonvulsant potential which is dependent on the used seizure model.

  7. Gauging the likelihood of stable cavitation from ultrasound contrast agents.

    PubMed

    Bader, Kenneth B; Holland, Christy K

    2013-01-07

    The mechanical index (MI) was formulated to gauge the likelihood of adverse bioeffects from inertial cavitation. However, the MI formulation did not consider bubble activity from stable cavitation. This type of bubble activity can be readily nucleated from ultrasound contrast agents (UCAs) and has the potential to promote beneficial bioeffects. Here, the presence of stable cavitation is determined numerically by tracking the onset of subharmonic oscillations within a population of bubbles for frequencies up to 7 MHz and peak rarefactional pressures up to 3 MPa. In addition, the acoustic pressure rupture threshold of an UCA population was determined using the Marmottant model. The threshold for subharmonic emissions of optimally sized bubbles was found to be lower than the inertial cavitation threshold for all frequencies studied. The rupture thresholds of optimally sized UCAs were found to be lower than the threshold for subharmonic emissions for either single cycle or steady state acoustic excitations. Because the thresholds of both subharmonic emissions and UCA rupture are linearly dependent on frequency, an index of the form ICAV = Pr/f (where Pr is the peak rarefactional pressure in MPa and f is the frequency in MHz) was derived to gauge the likelihood of subharmonic emissions due to stable cavitation activity nucleated from UCAs.

  8. Identifying the Threshold of Dominant Controls on Fire Spread in a Boreal Forest Landscape of Northeast China

    PubMed Central

    Liu, Zhihua; Yang, Jian; He, Hong S.

    2013-01-01

    The relative importance of fuel, topography, and weather on fire spread varies at different spatial scales, but how the relative importance of these controls respond to changing spatial scales is poorly understood. We designed a “moving window” resampling technique that allowed us to quantify the relative importance of controls on fire spread at continuous spatial scales using boosted regression trees methods. This quantification allowed us to identify the threshold value for fire size at which the dominant control switches from fuel at small sizes to weather at large sizes. Topography had a fluctuating effect on fire spread across the spatial scales, explaining 20–30% of relative importance. With increasing fire size, the dominant control switched from bottom-up controls (fuel and topography) to top-down controls (weather). Our analysis suggested that there is a threshold for fire size, above which fires are driven primarily by weather and more likely lead to larger fire size. We suggest that this threshold, which may be ecosystem-specific, can be identified using our “moving window” resampling technique. Although the threshold derived from this analytical method may rely heavily on the sampling technique, our study introduced an easily implemented approach to identify scale thresholds in wildfire regimes. PMID:23383247
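
    The "moving window" resampling idea can be sketched as follows: sort fires by size, slide a window along the ordering, refit a boosted-tree model inside each window, and record the relative importance of the predictors. The snippet uses scikit-learn's gradient boosting as a stand-in for the boosted regression trees used in the study; the window size, step, and predictor columns are placeholders.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def importance_by_fire_size(X, y, fire_size, window=200, step=50):
    """Relative importance of predictors (e.g. fuel, topography, weather columns of X)
    for windows of fires ordered by size; returns (mean fire size, importances) per window."""
    order = np.argsort(fire_size)
    X, y, fire_size = X[order], y[order], fire_size[order]
    results = []
    for start in range(0, len(y) - window + 1, step):
        rows = slice(start, start + window)
        model = GradientBoostingRegressor(random_state=0).fit(X[rows], y[rows])
        results.append((float(fire_size[rows].mean()), model.feature_importances_))
    return results
```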

  9. Gauging the likelihood of stable cavitation from ultrasound contrast agents

    PubMed Central

    Bader, Kenneth B; Holland, Christy K

    2015-01-01

    The mechanical index (MI) was formulated to gauge the likelihood of adverse bioeffects from inertial cavitation. However, the MI formulation did not consider bubble activity from stable cavitation. This type of bubble activity can be readily nucleated from ultrasound contrast agents (UCAs) and has the potential to promote beneficial bioeffects. Here, the presence of stable cavitation is determined numerically by tracking the onset of subharmonic oscillations within a population of bubbles for frequencies up to 7 MHz and peak rarefactional pressures up to 3 MPa. In addition, the acoustic pressure rupture threshold of an UCA population was determined using the Marmottant model. The threshold for subharmonic emissions of optimally sized bubbles was found to be lower than the inertial cavitation threshold for all frequencies studied. The rupture thresholds of optimally sized UCAs were found to be lower than the threshold for subharmonic emissions for either single cycle or steady state acoustic excitations. Because the thresholds of both subharmonic emissions and UCA rupture are linearly dependent on frequency, an index of the form ICAV = Pr/f (where Pr is the peak rarefactional pressure in MPa and f is the frequency in MHz) was derived to gauge the likelihood of subharmonic emissions due to stable cavitation activity nucleated from UCAs. PMID:23221109

  10. Allogeneic Transplantation of Müller-Derived Retinal Ganglion Cells Improves Retinal Function in a Feline Model of Ganglion Cell Depletion

    PubMed Central

    Becker, Silke; Eastlake, Karen; Jayaram, Hari; Jones, Megan F.; Brown, Robert A.; McLellan, Gillian J.; Charteris, David G.; Khaw, Peng T.

    2016-01-01

    Human Müller glia with stem cell characteristics (hMGSCs) have been shown to improve retinal function upon transplantation into rat models of retinal ganglion cell (RGC) depletion. However, their translational potential may depend upon successful engraftment and improvement of retinal function in experimental models with anatomical and functional features resembling those of the human eye. We investigated the effect of allogeneic transplantation of feline Müller glia with the ability to differentiate into cells expressing RGC markers, following ablation of RGCs by N-methyl-d-aspartate (NMDA). Unlike previous observations in the rat, transplantation of hMGSC-derived RGCs into the feline vitreous formed aggregates and elicited a severe inflammatory response without improving visual function. In contrast, allogeneic transplantation of feline MGSC (fMGSC)-derived RGCs into the vitrectomized eye improved the scotopic threshold response (STR) of the electroretinogram (ERG). Despite causing functional improvement, the cells did not attach onto the retina and formed aggregates on peripheral vitreous remnants, suggesting that vitreous may constitute a barrier for cell attachment onto the retina. This was confirmed by observations that cellular scaffolds of compressed collagen and enriched preparations of fMGSC-derived RGCs facilitated cell attachment. Although cells did not migrate into the RGC layer or the optic nerve, they significantly improved the STR and the photopic negative response of the ERG, indicative of increased RGC function. These results suggest that MGSCs have a neuroprotective ability that promotes partial recovery of impaired RGC function and indicate that cell attachment onto the retina may be necessary for transplanted cells to confer neuroprotection to the retina. Significance Müller glia with stem cell characteristics are present in the adult human retina, but they do not have regenerative ability. These cells, however, have potential for development of cell therapies to treat retinal disease. Using a feline model of retinal ganglion cell (RGC) depletion, cell grafting methods to improve RGC function have been developed. Using cellular scaffolds, allogeneic transplantation of Müller glia-derived RGC promoted cell attachment onto the retina and enhanced retinal function, as judged by improvement of the photopic negative and scotopic threshold responses of the electroretinogram. The results suggest that the improvement of RGC function observed may be ascribed to the neuroprotective ability of these cells and indicate that attachment of the transplanted cells onto the retina is required to promote effective neuroprotection. PMID:26718648

  11. Automated Sargassum Detection for Landsat Imagery

    NASA Astrophysics Data System (ADS)

    McCarthy, S.; Gallegos, S. C.; Armstrong, D.

    2016-02-01

    We implemented a system to automatically detect Sargassum, a floating seaweed, in 30-meter LANDSAT-8 Operational Land Imager (OLI) imagery. Our algorithm for Sargassum detection is an extended form of Hu's approach to derive a floating algae index (FAI) [1]. Hu's algorithm was developed for Moderate Resolution Imaging Spectroradiometer (MODIS) data, but we extended it for use with the OLI bands centered at 655, 865, and 1609 nm, which are comparable to the MODIS bands located at 645, 859, and 1640 nm. We also developed a high resolution true color product to mask cloud pixels in the OLI scene by applying a threshold to top of the atmosphere (TOA) radiances in the red (655 nm), green (561 nm), and blue (443 nm) wavelengths, as well as a method for removing false positive identifications of Sargassum in the imagery. Hu's algorithm derives a FAI for each Sargassum identified pixel. Our algorithm is currently set to only flag the presence of Sargassum in an OLI pixel by classifying any pixel with a FAI > 0.0 as Sargassum. Additionally, our system geo-locates the flagged Sargassum pixels identified in the OLI imagery into the U.S. Navy Global HYCOM model grid. One element of the model grid covers an area 0.125 degrees of latitude by 0.125 degrees of longitude. To resolve the differences in spatial coverage between Landsat and HYCOM, a scheme was developed to calculate the percentage of pixels flagged within the grid element and if above a threshold, it will be flagged as Sargassum. This work is a part of a larger system, sponsored by NASA/Applied Science and Technology Project at J.C. Stennis Space Center, to forecast when and where Sargassum will land on shore. The focus area of this work is currently the Texas coast. Plans call for extending our efforts into the Caribbean. References: [1] Hu, Chuanmin. A novel ocean color index to detect floating algae in the global oceans. Remote Sensing of Environment 113 (2009) 2118-2129.
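
    Hu's floating algae index is the NIR reflectance minus a linear baseline interpolated between the red and shortwave-infrared bands; a sketch using the OLI band centers quoted above and the FAI > 0.0 flagging rule described in the text is given below. The reflectance inputs are illustrative, not real scene data.

```python
import numpy as np

def floating_algae_index(r_red, r_nir, r_swir,
                         lam_red=655.0, lam_nir=865.0, lam_swir=1609.0):
    """FAI following Hu (2009): R_nir minus the red-to-SWIR baseline evaluated at the NIR wavelength."""
    baseline = r_red + (r_swir - r_red) * (lam_nir - lam_red) / (lam_swir - lam_red)
    return r_nir - baseline

def flag_sargassum(fai, threshold=0.0):
    """Flag pixels whose FAI exceeds the threshold used in the text."""
    return np.asarray(fai) > threshold

# Illustrative pixel reflectances (red, NIR, SWIR): a floating-algae-like pixel vs. clear water.
fai = floating_algae_index(np.array([0.03, 0.02]), np.array([0.12, 0.01]), np.array([0.02, 0.005]))
print(fai, flag_sargassum(fai))
```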

  12. Damage threshold from large retinal spot size repetitive-pulse laser exposures.

    PubMed

    Lund, Brian J; Lund, David J; Edsall, Peter R

    2014-10-01

    The retinal damage thresholds for large spot size, multiple-pulse exposures to a Q-switched, frequency doubled Nd:YAG laser (532 nm wavelength, 7 ns pulses) have been measured for 100 μm and 500 μm retinal irradiance diameters. The ED50, expressed as energy per pulse, varies only weakly with the number of pulses, n, for these extended spot sizes. The previously reported threshold for a multiple-pulse exposure for a 900 μm retinal spot size also shows the same weak dependence on the number of pulses. The multiple-pulse ED50 for an extended spot-size exposure does not follow the n dependence exhibited by small spot size exposures produced by a collimated beam. Curves derived by using probability-summation models provide a better fit to the data.

  13. Stimulated Raman scattering of sub-millimeter waves in bismuth

    NASA Astrophysics Data System (ADS)

    Kumar, Pawan; Tripathi, V. K.

    2007-12-01

    A high-power sub-millimeter wave propagating through bismuth, a semimetal with non-spherical energy surfaces, parametrically excites a space-charge mode and a back-scattered electromagnetic wave. The free carrier density perturbation associated with the space-charge wave couples with the oscillatory velocity due to the pump to drive the scattered wave. The scattered and pump waves exert a ponderomotive force on electrons and holes, driving the space-charge wave. The collisional damping of the decay waves determines the threshold for the parametric instability. The threshold intensity for a 20 μm wavelength pump turns out to be ~2×10¹² W/cm². Above the threshold, the growth rate increases with ωo, attains a maximum around ωo = 6.5ωp, and falls off thereafter.

  14. Critical rainfall conditions for the initiation of torrential flows. Results from the Rebaixader catchment (Central Pyrenees)

    NASA Astrophysics Data System (ADS)

    Abancó, Clàudia; Hürlimann, Marcel; Moya, José; Berenguer, Marc

    2016-10-01

    Torrential flows like debris flows or debris floods are fast movements formed by a mix of water and different amounts of unsorted solid material. They generally occur in steep torrents and pose high risk in mountainous areas. Rainfall is their most common triggering factor and the analysis of the critical rainfall conditions is a fundamental research task. Due to their wide use in warning systems, rainfall thresholds for the triggering of torrential flows are an important outcome of such analysis and are empirically derived using data from past events. In 2009, a monitoring system was installed in the Rebaixader catchment, Central Pyrenees (Spain). Since then, rainfall data of 25 torrential flows ("TRIG" rainfalls) were recorded, with a 5-min sampling frequency. Another 142 rainfalls that did not trigger torrential flows ("NonTRIG" rainfalls) were also collected and analyzed. The goal of this work was threefold: (i) characterize rainfall episodes in the Rebaixader catchment and compare rainfall data that triggered torrential flows with data that did not; (ii) define and test Intensity-Duration (ID) thresholds using rainfall data measured inside the catchment with different techniques; (iii) analyze how the criterion used for defining the rainfall duration and the spatial variability of rainfall influence the values obtained for the thresholds. The statistical analysis of the rainfall characteristics showed that the parameters that best discriminate between the TRIG and NonTRIG rainfalls are the rainfall intensities, the mean rainfall and the total rainfall amount. The antecedent rainfall was not significantly different between TRIG and NonTRIG rainfalls, as can be expected when the source material is very pervious (a sandy glacial soil in the study site). Thresholds were derived from data collected at one rain gauge located inside the catchment. Two different methods were applied to calculate the duration and intensity of rainfall: (i) using the total duration, Dtot, and mean intensity, Imean, of the rainfall event, and (ii) using floating durations, D, and intensities, Ifl, based on the maximum values over floating periods of different duration. The resulting thresholds are considerably different (Imean = 6.20 Dtot^-0.36 and Ifl_90% = 5.49 D^-0.75, respectively), showing a strong dependence on the applied methodology. On the other hand, the definition of the thresholds is affected by several types of uncertainties. Data from both rain gauges and weather radar were used to analyze the uncertainty associated with the spatial variability of the triggering rainfalls. The analysis indicates that the precipitation recorded by the nearby rain gauges can introduce major uncertainties, especially for convective summer storms. Thus, incorporating radar rainfall can significantly improve the accuracy of the measured triggering rainfall. Finally, thresholds were also derived according to three different criteria for the definition of the duration of the triggering rainfall: (i) the duration until the peak intensity; (ii) the duration until the end of the rainfall; and (iii) the duration until the triggering of the torrential flow. An important contribution of this work is the assessment of the threshold relationships obtained using the third definition of duration. Moreover, important differences are observed in the obtained thresholds, showing that ID relationships are significantly dependent on the applied methodology.
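
    Once the parameters of an intensity-duration threshold have been fitted, applying it is a one-line power law. The sketch below uses the total-duration threshold reported above; the units (mm/h and hours) are assumptions, since they are not stated in the abstract.

```python
def critical_intensity(duration_h, a=6.20, b=-0.36):
    """Critical mean intensity from an I = a * D**b intensity-duration threshold."""
    return a * duration_h ** b

def exceeds_threshold(mean_intensity, duration_h, a=6.20, b=-0.36):
    """True when an observed rainfall lies above the I-D threshold curve."""
    return mean_intensity > critical_intensity(duration_h, a, b)

# e.g. a 3-hour rainfall with a mean intensity of 5 mm/h
print(critical_intensity(3.0), exceeds_threshold(5.0, 3.0))
```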

  15. Application of Biotic Ligand and Toxic Unit modeling approaches to predict improvements in zooplankton species richness in smelter-damaged lakes near Sudbury, Ontario.

    PubMed

    Khan, Farhan R; Keller, W Bill; Yan, Norman D; Welsh, Paul G; Wood, Chris M; McGeer, James C

    2012-02-07

    Using a 30-year record of biological and water chemistry data collected from seven lakes near smelters in Sudbury (Ontario, Canada) we examined the link between reductions of Cu, Ni, and Zn concentrations and zooplankton species richness. The toxicity of the metal mixtures was assessed using an additive Toxic Unit (TU) approach. Four TU models were developed based on total metal concentrations (TM-TU); free ion concentrations (FI-TU); acute LC50s calculated from the Biotic Ligand Model (BLM-TU); and chronic LC50s (acute LC50s adjusted by metal-specific acute-to-chronic ratios, cBLM-TU). All models significantly correlated reductions in metal concentrations to increased zooplankton species richness over time (p < 0.01) with a rank based on r² values of cBLM-TU > BLM-TU = FI-TU > TM-TU. Lake-wise comparisons within each model showed that the BLM-TU and cBLM-TU models provided the best description of recovery across all seven lakes. These two models were used to calculate thresholds for chemical and biological recovery using data from reference lakes in the same region. A threshold value of TU = 1 derived from the cBLM-TU provided the most accurate description of recovery. Overall, BLM-based TU models that integrate site-specific water chemistry-derived estimates of toxicity offer a useful predictor of biological recovery.
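
    The additive toxic-unit calculation itself is a simple sum of measured concentrations over their effect thresholds, with TU = 1 serving as the recovery threshold identified above. The concentrations and chronic LC50s in the sketch are hypothetical.

```python
def toxic_units(concentrations, effect_thresholds):
    """Additive toxic units: sum over metals of concentration / effect threshold
    (e.g. a chronic LC50 per metal). TU > 1 means the mixture exceeds the threshold."""
    return sum(c / t for c, t in zip(concentrations, effect_thresholds))

# Hypothetical lake concentrations (ug/L) and chronic LC50s (ug/L) for Cu, Ni, Zn.
tu = toxic_units([5.0, 80.0, 20.0], [10.0, 300.0, 100.0])
print(tu, "recovered" if tu < 1.0 else "impaired")
```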

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Agnese, R.; Anderson, A. J.; Aralis, T.

    The SuperCDMS experiment is designed to directly detect WIMPs (Weakly Interacting Massive Particles) that may constitute the dark matter in our galaxy. During its operation at the Soudan Underground Laboratory, germanium detectors were run in the CDMSlite (Cryogenic Dark Matter Search low ionization threshold experiment) mode to gather data sets with sensitivity specifically for WIMPs with masses <10 GeV/c². In this mode, a large detector-bias voltage is applied to amplify the phonon signals produced by drifting charges. This paper presents studies of the experimental noise and its effect on the achievable energy threshold, which is demonstrated to be as low as 56 eVee (electron equivalent energy). The detector biasing configuration is described in detail, with analysis corrections for voltage variations to the level of a few percent. Detailed studies of the electric-field geometry, and the resulting successful development of a fiducial parameter, eliminate poorly measured events, yielding an energy resolution ranging from ~9 eVee at 0 keV to 101 eVee at ~10 keVee. New results are derived for astrophysical uncertainties relevant to the WIMP-search limits, specifically examining how they are affected by variations in the most probable WIMP velocity and the galactic escape velocity. These variations become more important for WIMP masses below 10 GeV/c². Finally, new limits on spin-dependent low-mass WIMP-nucleon interactions are derived, with new parameter space excluded for WIMP masses ≲3 GeV/c².

  17. Brain-derived neurotrophic factor/neurotrophin 3 regulate axon initial segment location and affect neuronal excitability in cultured hippocampal neurons.

    PubMed

    Guo, Yu; Su, Zi-Jun; Chen, Yi-Kun; Chai, Zhen

    2017-07-01

    Plasticity of the axon initial segment (AIS) has aroused great interest in recent years because it regulates action potential initiation and neuronal excitability. AIS plasticity manifests as modulation of ion channels or variation in AIS structure. However, the mechanisms underlying structural plasticity of the AIS are not well understood. Here, we combined immunofluorescence, patch-clamp recordings, and pharmacological methods in cultured hippocampal neurons to investigate the factors participating in AIS structural plasticity during development. With lowered neuronal density, the distance between the AIS and the soma increased, while neuronal excitability decreased, as shown by the increased action potential threshold and current threshold for firing an action potential. This variation in the location of the AIS was associated with cellular secretory substances, including brain-derived neurotrophic factor (BDNF) and neurotrophin 3 (NT3). Indeed, blocking BDNF and NT3 with TrkB-Fc eliminated the effect of conditioned medium collected from high-density cultures on AIS relocation. Elevating the extracellular concentration of BDNF or NT3 promoted movement of the AIS proximally to the soma and increased neuronal excitability. Furthermore, knockdown of neurotrophin receptors TrkB and TrkC caused distal movement of the AIS. Our results demonstrate that BDNF and NT3 regulate AIS location and neuronal excitability. These regulatory functions of neurotrophic factors provide insight into the molecular mechanisms underlying AIS biology. © 2017 International Society for Neurochemistry.

  18. Flagging optically shallow pixels for improved analysis of ocean color data

    NASA Astrophysics Data System (ADS)

    McKinna, L. I. W.; Werdell, J.; Knowles, D., Jr.

    2016-02-01

    Ocean color remote-sensing is routinely used to derive marine geophysical parameters from sensor-observed water-leaving radiances. However, in clear geometrically shallow regions, traditional ocean color algorithms can be confounded by light reflected from the seafloor. Such regions are typically referred to as "optically shallow". When performing spatiotemporal analyses of ocean color datasets, optically shallow features such as coral reefs can lead to unexpected regional biases. Benthic contamination of the water-leaving radiance is dependent on bathymetry, water clarity and seafloor albedo. Thus, a prototype ocean color processing flag called OPTSHAL has been developed that takes all three variables into account. In the method described here, the optical depth of the water column at 547 nm, ζ(547), is predicted from known bathymetry and estimated inherent optical properties. If ζ(547) is less than the pre-defined threshold, a pixel is flagged as optically shallow. Radiative transfer modeling was used to identify the appropriate threshold value of ζ(547) for a generic benthic sand albedo. OPTSHAL has been evaluated within the NASA Ocean Biology Processing Group's L2GEN code. Using MODIS Aqua imagery, OPTSHAL was tested in two regions: (i) the Pedro Bank south-west of Jamaica, and (ii) the Great Barrier Reef, Australia. It is anticipated that OPTSHAL will benefit end-users when quality controlling derived ocean color products. Further, OPTSHAL may prove useful as a mechanism for switching between optically deep and shallow algorithms during ocean color processing.
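
    Conceptually, the flag reduces to comparing a water-column optical depth against a cutoff. The sketch below approximates ζ(547) as a diffuse attenuation coefficient times the bottom depth; the exact formulation and threshold used inside L2GEN are not given in the abstract, so both are assumptions here.

```python
import numpy as np

def optically_shallow_flag(kd_547, bottom_depth_m, zeta_threshold):
    """Flag pixels as optically shallow when the approximate optical depth at 547 nm,
    zeta = Kd(547) * depth, falls below a pre-defined threshold."""
    zeta_547 = np.asarray(kd_547) * np.asarray(bottom_depth_m)
    return zeta_547 < zeta_threshold

# Illustrative pixels: clear shallow reef vs. deeper turbid water (threshold value assumed).
print(optically_shallow_flag([0.06, 0.20], [15.0, 40.0], zeta_threshold=3.0))
```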

  19. Frequency-Locked Detector Threshold Setting Criteria Based on Mean-Time-To-Lose-Lock (MTLL) for GPS Receivers

    PubMed Central

    Zhao, Na; Qin, Honglei; Sun, Kewen; Ji, Yuanfa

    2017-01-01

    The frequency-locked detector (FLD) has been widely utilized in tracking loops of Global Positioning System (GPS) receivers to indicate their locking status. The relation between the FLD output and lock status has seldom been discussed, and traditional PLL experience is not directly applicable to the FLL. In this paper, threshold setting criteria for the frequency-locked detector in a GPS receiver are proposed by analyzing the statistical characteristics of the FLD output. The approximate probability distribution of the frequency-locked detector is theoretically derived using a statistical approach, which reveals the relationship between the probabilities of the frequency-locked detector and the carrier-to-noise ratio (C/N0) of the received GPS signal. The relationship among mean-time-to-lose-lock (MTLL), detection threshold and lock probability related to C/N0 can be further discovered by utilizing this probability. Therefore, a theoretical basis for threshold setting criteria in frequency-locked loops for GPS receivers is provided based on mean-time-to-lose-lock analysis. PMID:29207546

  20. Ultrasonically triggered ignition at liquid surfaces.

    PubMed

    Simon, Lars Hendrik; Meyer, Lennart; Wilkens, Volker; Beyer, Michael

    2015-01-01

    Ultrasound is considered to be an ignition source according to international standards, which set a threshold value of 1 mW/mm² [1] that is based on theoretical estimations but lacks experimental verification. Therefore, it is assumed that this threshold includes a large safety margin. At the same time, ultrasound is used in a variety of industrial applications where it can come into contact with explosive atmospheres. However, until now, no explosion accidents have been reported in connection with ultrasound, so it has been unclear if the current threshold value is reasonable. Within this paper, it is shown that focused ultrasound coupled into a liquid can in fact ignite explosive atmospheres if a specific target positioned at a liquid's surface converts the acoustic energy into a hot spot. Based on ignition tests, conditions could be derived that are necessary for an ultrasonically triggered explosion. These conditions show that the current threshold value can be significantly augmented. Copyright © 2014 Elsevier B.V. All rights reserved.

  1. Security of a semi-quantum protocol where reflections contribute to the secret key

    NASA Astrophysics Data System (ADS)

    Krawec, Walter O.

    2016-05-01

    In this paper, we provide a proof of unconditional security for a semi-quantum key distribution protocol introduced in a previous work. This particular protocol demonstrated the possibility of using X basis states to contribute to the raw key of the two users (as opposed to using only direct measurement results) even though a semi-quantum participant cannot directly manipulate such states. In this work, we provide a complete proof of security by deriving a lower bound of the protocol's key rate in the asymptotic scenario. Using this bound, we are able to find an error threshold value such that for all error rates less than this threshold, it is guaranteed that A and B may distill a secure secret key; for error rates larger than this threshold, A and B should abort. We demonstrate that this error threshold compares favorably to several fully quantum protocols. We also comment on some interesting observations about the behavior of this protocol under certain noise scenarios.

  2. Frequency-Locked Detector Threshold Setting Criteria Based on Mean-Time-To-Lose-Lock (MTLL) for GPS Receivers.

    PubMed

    Jin, Tian; Yuan, Heliang; Zhao, Na; Qin, Honglei; Sun, Kewen; Ji, Yuanfa

    2017-12-04

    Frequency-locked detectors (FLDs) have been widely utilized in the tracking loops of Global Positioning System (GPS) receivers to indicate their locking status, yet the relation between FLD output and lock status has seldom been discussed, and traditional PLL experience does not carry over directly to the FLL. In this paper, threshold setting criteria for the frequency-locked detector in a GPS receiver are proposed by analyzing the statistical characteristics of the FLD output. The approximate probability distribution of the FLD output is theoretically derived using a statistical approach, which reveals the relationship between the FLD probabilities and the carrier-to-noise ratio (C/N₀) of the received GPS signal. The relationship among mean-time-to-lose-lock (MTLL), detection threshold and lock probability related to C/N₀ can then be obtained from this distribution. A theoretical basis for threshold setting criteria in frequency-locked loops for GPS receivers is thereby provided based on mean-time-to-lose-lock analysis.
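
    As a loose illustration of the kind of decision such criteria govern (not the derivation in the record above), the sketch below forms a common FLD statistic from consecutive prompt correlator outputs and compares its running average against a threshold; the particular statistic, the 0.5 threshold, and the synthetic correlator values are assumptions for illustration only.

      import numpy as np

      def fld_statistic(ip, qp):
          # Common FLD output: normalized dot product of consecutive prompt
          # correlator samples (close to +1 when the frequency error is small).
          dot = ip[:-1] * ip[1:] + qp[:-1] * qp[1:]
          mag = (ip[:-1]**2 + qp[:-1]**2 + ip[1:]**2 + qp[1:]**2) / 2.0
          return dot / np.maximum(mag, 1e-12)

      def frequency_lock(ip, qp, threshold=0.5, window=20):
          # Declare lock when the windowed mean of the statistic exceeds the threshold.
          stat = fld_statistic(ip, qp)
          if stat.size < window:
              return False
          return float(np.mean(stat[-window:])) > threshold

      # Hypothetical correlator outputs for a strong, frequency-locked signal
      rng = np.random.default_rng(0)
      ip = 100.0 + rng.normal(0, 5, 200)
      qp = rng.normal(0, 5, 200)
      print(frequency_lock(ip, qp))  # True for this synthetic example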

  3. Analysis of the interrelationship of the pulmonary irritation and elicitation thresholds in rats sensitized with 1,6-hexamethylene diisocyanate (HDI)

    PubMed Central

    Pauluhn, Jürgen

    2015-01-01

    Abstract This paper summarizes a range of experimental data central for developing a science-based approach for hazard identification of monomeric and polymeric aliphatic 1,6-hexamethylene diisocyanate (HDI). The dose–response curve of HDI-induced pulmonary responses in naïve or dermally sensitized rats after one or several inhalation priming exposures was examined in the Brown Norway (BN) rat asthma model. Emphasis was directed to demonstrating the need for, and the difficulty of, selecting an appropriate pulmonary dose when much of the inhaled chemically reactive vapor may be retained, concentration-dependently, in the upper airways of obligate nose-breathing rats. The course taken acknowledges the experimental challenges in identifying an elicitation threshold for HDI monomer near or above the saturated vapor concentration or in the presence of an HDI-polymer aerosol. The inhalation elicitation threshold dose was determined using a fixed concentration (C) × variable exposure duration (t) protocol to improve inhalation dosimetry of the lower airways. Neutrophilic granulocytes (PMN) in bronchoalveolar lavage (BAL) fluid of equally inhalation-primed naïve and dermally sensitized rats were used to define the inhalation elicitation threshold C × t. Sensitized rats elaborated markedly increased PMN relative to equally challenged naïve rats at 5625 mg HDI/m³ × min (75 mg/m³ for 75 min), whereas PMN were essentially indistinguishable at 900 mg HDI/m³ × min. By applying adjustment factors accounting for both inter-species differences in inhalation dosimetry and intra-species susceptibility, the workplace human-equivalent threshold C × t was estimated to be in the range of the current ACGIH TLV® of HDI. Thus, this rat "asthma" model was suitable for demonstrating elicitation thresholds for HDI vapor after one or several inhalation priming exposures and seems suitable for deriving occupational exposure limits (OELs) for diisocyanates in general. PMID:25924102
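
    As a simple worked check of the fixed-concentration × variable-duration dosing described above, the sketch below computes the inhaled C × t dose and compares the two exposure levels mentioned in the record; the helper names are illustrative only.

      def c_times_t(concentration_mg_m3, minutes):
          # Inhalation dose expressed as concentration x time (mg/m3 x min)
          return concentration_mg_m3 * minutes

      elicitation_positive = c_times_t(75, 75)   # 5625 mg/m3 x min, PMN response observed
      elicitation_negative = 900                 # mg/m3 x min, no difference from naive rats
      print(elicitation_positive, elicitation_positive > elicitation_negative)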

  4. Gravity wave control on ESF day-to-day variability: An empirical approach

    NASA Astrophysics Data System (ADS)

    Aswathy, R. P.; Manju, G.

    2017-06-01

    The gravity wave control on the daily variation in nighttime ionization irregularity occurrence is studied using ionosonde data for the period 2002-2007 at the magnetic equatorial location Trivandrum. Recent studies during low solar activity periods have revealed that the seed perturbations must have a threshold amplitude, at a particular altitude, to trigger equatorial spread F (ESF), and that this threshold amplitude undergoes seasonal and solar cycle changes. In the present study, the altitude variation of the threshold seed perturbations is examined for the autumnal equinox of different years. Thereafter, a unique empirical model, incorporating the electrodynamical effects and the gravity wave modulation, is developed. Using the model, the threshold curve for the autumnal equinox season of any year may be delineated if the solar flux index (F10.7) is known. The empirical model is validated using data for high, moderate, and low solar epochs in 2001, 2004, and 1995, respectively. This model has the potential to be developed further, to forecast ESF incidence, if the base height of the ionosphere is in the altitude region where electrodynamics controls the occurrence of ESF. ESF irregularities are harmful for communication and navigation systems, and therefore research is ongoing globally to predict them. In this context, this study is crucial for evolving a methodology to predict communication as well as navigation outages.

    Plain Language Summary: The manifestation of nocturnal ionospheric irregularities at magnetic equatorial regions poses a major hazard for communication and navigation systems. It is therefore essential to arrive at prediction methodologies for these irregularities. The present study puts forth a novel empirical model which, using only the solar flux index, successfully differentiates between days with and without nocturnal ionization irregularity occurrence. The model-derived curve is obtained such that days with and without occurrence of irregularities lie below and above the curve, respectively. The model is validated with data from the years 2001 (high solar activity), 2004 (moderate solar activity), and 1995 (low solar activity), which were not used in the model development. Presently, the model is developed for the autumnal equinox season, but other seasons will be addressed in future work so that the seasonal variability is also incorporated. The model thus holds the potential to be developed into a full-fledged model predicting the occurrence of nocturnal ionospheric irregularities. Globally, concerted efforts are underway to predict these ionospheric irregularities; hence, this study is extremely important from the point of view of predicting communication and navigation outages.

  5. Ischemic lesion volume determination on diffusion weighted images vs. apparent diffusion coefficient maps.

    PubMed

    Bråtane, Bernt Tore; Bastan, Birgul; Fisher, Marc; Bouley, James; Henninger, Nils

    2009-07-07

    Though diffusion weighted imaging (DWI) is frequently used for identifying the ischemic lesion in focal cerebral ischemia, the understanding of spatiotemporal evolution patterns observed with different analysis methods remains imprecise. DWI and calculated apparent diffusion coefficient (ADC) maps were serially obtained in rat stroke models (MCAO): permanent, 90 min, and 180 min temporary MCAO. Lesion volumes were analyzed in a blinded and randomized manner by 2 investigators using (i) a previously validated ADC threshold, (ii) visual determination of hypointense regions on ADC maps, and (iii) visual determination of hyperintense regions on DWI. Lesion volumes were correlated with 24 hour 2,3,5-triphenyltetrazoliumchloride (TTC)-derived infarct volumes. TTC-derived infarct volumes were not significantly different from the ADC and DWI-derived lesion volumes at the last imaging time points except for significantly smaller DWI lesions in the pMCAO model (p=0.02). Volumetric calculation based on TTC-derived infarct also correlated significantly stronger to volumetric calculation based on last imaging time point derived lesions on ADC maps than DWI (p<0.05). Following reperfusion, lesion volumes on the ADC maps significantly reduced but no change was observed on DWI. Visually determined lesion volumes on ADC maps and DWI by both investigators correlated significantly with threshold-derived lesion volumes on ADC maps with the former method demonstrating a stronger correlation. There was also a better interrater agreement for ADC map analysis than for DWI analysis. Ischemic lesion determination by ADC was more accurate in final infarct prediction, rater independent, and provided exclusive information on ischemic lesion reversibility.

  6. Image change detection using paradoxical theory for patient follow-up quantitation and therapy assessment.

    PubMed

    David, Simon; Visvikis, Dimitris; Quellec, Gwénolé; Le Rest, Catherine Cheze; Fernandez, Philippe; Allard, Michèle; Roux, Christian; Hatt, Mathieu

    2012-09-01

    In clinical oncology, positron emission tomography (PET) imaging can be used to assess therapeutic response by quantifying the evolution of semi-quantitative values such as standardized uptake value, early during treatment or after treatment. Current guidelines do not include metabolically active tumor volume (MATV) measurements and derived parameters such as total lesion glycolysis (TLG) to characterize the response to the treatment. To achieve automatic MATV variation estimation during treatment, we propose an approach based on the change detection principle using the recent paradoxical theory, which models imprecision, uncertainty, and conflict between sources. It was applied here simultaneously to pre- and post-treatment PET scans. The proposed method was applied to both simulated and clinical datasets, and its performance was compared to adaptive thresholding applied separately on pre- and post-treatment PET scans. On simulated datasets, the adaptive threshold was associated with significantly higher classification errors than the developed approach. On clinical datasets, the proposed method led to results more consistent with the known partial responder status of these patients. The method requires accurate rigid registration of both scans which can be obtained only in specific body regions and does not explicitly model uptake heterogeneity. In further investigations, the change detection of intra-MATV tracer uptake heterogeneity will be developed by incorporating textural features into the proposed approach.

  7. Limits on light WIMPs with a 1 kg-scale germanium detector at 160 eVee physics threshold at the China Jinping Underground Laboratory

    NASA Astrophysics Data System (ADS)

    Yang, Li-Tao; Li, Hau-Bin; Yue, Qian; Kang, Ke-Jun; Cheng, Jian-Ping; Li, Yuan-Jing; Tsz-King Wong, Henry; Aǧartioǧlu, M.; An, Hai-Peng; Chang, Jian-Ping; Chen, Jing-Han; Chen, Yun-Hua; Deng, Zhi; Du, Qiang; Gong, Hui; He, Li; Hu, Jin-Wei; Hu, Qing-Dong; Huang, Han-Xiong; Jia, Li-Ping; Jiang, Hao; Li, Hong; Li, Jian-Min; Li, Jin; Li, Xia; Li, Xue-Qian; Li, Yu-Lan; Lin, Fong-Kay; Lin, Shin-Ted; Liu, Shu-Kui; Liu, Zhong-Zhi; Ma, Hao; Ma, Jing-Lu; Pan, Hui; Ren, Jie; Ruan, Xi-Chao; Sevda, B.; Sharma, Vivek; Shen, Man-Bin; Singh, Lakhwinder; Singh, Manoj Kumar; Tang, Chang-Jian; Tang, Wei-You; Tian, Yang; Wang, Ji-Min; Wang, Li; Wang, Qing; Wang, Yi; Wu, Shi-Yong; Wu, Yu-Cheng; Xing, Hao-Yang; Xu, Yin; Xue, Tao; Yang, Song-Wei; Yi, Nan; Yu, Chun-Xu; Yu, Hai-Jun; Yue, Jian-Feng; Zeng, Xiong-Hui; Zeng, Ming; Zeng, Zhi; Zhang, Yun-Hua; Zhao, Ming-Gang; Zhao, Wei; Zhou, Ji-Fang; Zhou, Zu-Ying; Zhu, Jing-Jun; Zhu, Zhong-Hua; CDEX Collaboration

    2018-01-01

    We report results of a search for light weakly interacting massive particle (WIMP) dark matter from the CDEX-1 experiment at the China Jinping Underground Laboratory (CJPL). Constraints on WIMP-nucleon spin-independent (SI) and spin-dependent (SD) couplings are derived with a physics threshold of 160 eVee, from an exposure of 737.1 kg-days. The SI and SD limits extend the lower reach of light WIMPs to 2 GeV and improve over our earlier bounds at WIMP mass less than 6 GeV. Supported by the National Key Research and Development Program of China (2017YFA0402200, 2017YFA0402201), the National Natural Science Foundation of China (11175099, 11275107, 11475117, 11475099, 11475092, 11675088), the National Basic Research Program of China (973 Program) (2010CB833006). We thank the support of grants from the Tsinghua University Initiative Scientific Research Program (20121088494, 20151080354) and the Academia Sinica Investigator Award 2011-15, contracts 103-2112-M-001-024 and 104-2112-M-001-038-MY3 from the Ministry of Science and Technology of Taiwan.

  8. Optimizing interconnections to maximize the spectral radius of interdependent networks

    NASA Astrophysics Data System (ADS)

    Chen, Huashan; Zhao, Xiuyan; Liu, Feng; Xu, Shouhuai; Lu, Wenlian

    2017-03-01

    The spectral radius (i.e., the largest eigenvalue) of the adjacency matrices of complex networks is an important quantity that governs the behavior of many dynamic processes on the networks, such as synchronization and epidemics. Studies in the literature focused on bounding this quantity. In this paper, we investigate how to maximize the spectral radius of interdependent networks by optimally linking k internetwork connections (or interconnections for short). We derive formulas for the estimation of the spectral radius of interdependent networks and employ these results to develop a suite of algorithms that are applicable to different parameter regimes. In particular, a simple algorithm is to link the k nodes with the largest k eigenvector centralities in one network to the node in the other network with a certain property related to both networks. We demonstrate the applicability of our algorithms via extensive simulations. We discuss the physical implications of the results, including how the optimal interconnections can more effectively decrease the threshold of epidemic spreading in the susceptible-infected-susceptible model and the threshold of synchronization of coupled Kuramoto oscillators.
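
    The simple algorithm mentioned in this record (linking the k nodes with the largest eigenvector centralities) can be illustrated with a short numpy sketch; the adjacency matrices are hypothetical, and the selection rule for the target node in the second network is simplified to its own top-centrality node rather than the exact criterion of the paper.

      import numpy as np

      def eigenvector_centrality(adj):
          # Principal eigenvector of a symmetric adjacency matrix (entries made non-negative)
          vals, vecs = np.linalg.eigh(adj)
          return np.abs(vecs[:, np.argmax(vals)])

      def top_k_interconnections(adj_a, adj_b, k):
          # Link the k highest-centrality nodes of network A to a single
          # high-centrality node of network B (simplified selection rule).
          c_a = eigenvector_centrality(adj_a)
          c_b = eigenvector_centrality(adj_b)
          sources = np.argsort(c_a)[::-1][:k]
          target = int(np.argmax(c_b))
          return [(int(s), target) for s in sources]

      # Hypothetical 4-node undirected networks
      A = np.array([[0,1,1,0],[1,0,1,1],[1,1,0,0],[0,1,0,0]], dtype=float)
      B = np.array([[0,1,0,0],[1,0,1,1],[0,1,0,1],[0,1,1,0]], dtype=float)
      print(top_k_interconnections(A, B, k=2))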

  9. The effects of suction on the nonlinear stability of the three-dimensional boundary layer above a rotating disc

    NASA Technical Reports Server (NTRS)

    Bassom, Andrew P.; Seddougui, Sharon O.

    1991-01-01

    There exist two types of stationary instability of the flow over a rotating disc, corresponding to the upper branch, inviscid mode and the lower branch mode, which has a triple deck structure, of the neutral stability curve. A theoretical study of the linear problem and an account of the weakly nonlinear properties of the lower branch modes have been undertaken by Hall and MacKerrell, respectively. Motivated by recent reports of experimental sightings of the lower branch mode and an examination of the role of suction on the linear stability properties of the flow, the effects of suction on the nonlinear disturbance described by MacKerrell are studied here. The additional analysis required in order to incorporate suction is relatively straightforward and enables the derivation of an amplitude equation which describes the evolution of the mode. For each value of the suction, a threshold value of the disturbance amplitude is obtained; modes of size greater than this threshold grow without limit as they develop away from the point of neutral stability.

  10. Accurate analytical modeling of junctionless DG-MOSFET by Green's function approach

    NASA Astrophysics Data System (ADS)

    Nandi, Ashutosh; Pandey, Nilesh

    2017-11-01

    An accurate analytical model of the junctionless double gate MOSFET (JL-DG-MOSFET) in the subthreshold regime of operation is developed in this work using a Green's function approach. The approach considers 2-D mixed boundary conditions and multi-zone techniques to provide an exact analytical solution to the 2-D Poisson's equation. The Fourier coefficients are calculated correctly to derive the potential equations that are further used to model the channel current and subthreshold slope of the device. The threshold voltage roll-off is computed from parallel shifts of Ids-Vgs curves between the long-channel and short-channel devices. It is observed that the Green's function approach of solving the 2-D Poisson's equation in both oxide and silicon regions can accurately predict channel potential, subthreshold current (Isub), threshold voltage (Vt) roll-off and subthreshold slope (SS) of both long- and short-channel devices designed with different doping concentrations and higher as well as lower tsi/tox ratio. All the analytical model results are verified through comparisons with TCAD Sentaurus simulation results, and the model matches the TCAD device simulations quite well.

  11. A detection method for X-ray images based on wavelet transforms: the case of the ROSAT PSPC.

    NASA Astrophysics Data System (ADS)

    Damiani, F.; Maggio, A.; Micela, G.; Sciortino, S.

    1996-02-01

    The authors have developed a method based on wavelet transforms (WT) to detect efficiently sources in PSPC X-ray images. The multiscale approach typical of WT can be used to detect sources with a large range of sizes, and to estimate their size and count rate. Significance thresholds for candidate detections (found as local WT maxima) have been derived from a detailed study of the probability distribution of the WT of a locally uniform background. The use of the exposure map allows good detection efficiency to be retained even near PSPC ribs and edges. The algorithm may also be used to get upper limits to the count rate of undetected objects. Simulations of realistic PSPC images containing either pure background or background+sources were used to test the overall algorithm performance, and to assess the frequency of spurious detections (vs. detection threshold) and the algorithm sensitivity. Actual PSPC images of galaxies and star clusters show the algorithm to have good performance even in cases of extended sources and crowded fields.

  12. Cloud Type Classification (cldtype) Value-Added Product

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Flynn, Donna; Shi, Yan; Lim, K-S

    The Cloud Type (cldtype) value-added product (VAP) provides an automated cloud type classification based on macrophysical quantities derived from vertically pointing lidar and radar. Up to 10 layers of clouds are classified into seven cloud types based on predetermined and site-specific thresholds of cloud top, base and thickness. Examples of thresholds for selected U.S. Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) Climate Research Facility sites are provided in Tables 1 and 2. Inputs for the cldtype VAP include lidar and radar cloud boundaries obtained from the Active Remotely Sensed Cloud Location (ARSCL) and Surface Meteorological Systems (MET) data. Rain rates from MET are used to determine when radar signal attenuation precludes accurate cloud detection. Temporal resolution and vertical resolution for cldtype are 1 minute and 30 m respectively and match the resolution of ARSCL. The cldtype classification is an initial step for further categorization of clouds. It was developed for use by the Shallow Cumulus VAP to identify potential periods of interest to the LASSO model and is intended to find clouds of interest for a variety of users.
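
    A minimal sketch of the kind of threshold-based layer classification the cldtype VAP performs is given below; the cloud-type names, numeric boundaries, and function name are invented for illustration and do not reproduce the site-specific thresholds in the product's tables.

      def classify_layer(base_km, top_km):
          # Toy threshold rules on cloud base, top, and thickness (km);
          # the numeric boundaries here are illustrative only.
          thickness = top_km - base_km
          if base_km > 7.0:
              return "cirrus"
          if base_km < 2.0 and thickness < 1.0:
              return "shallow cumulus"
          if base_km < 2.0 and thickness >= 6.0:
              return "deep convection"
          if thickness >= 3.0:
              return "nimbostratus-like"
          return "mid-level"

      layers = [(0.5, 1.2), (1.0, 9.0), (8.5, 10.0)]  # hypothetical (base, top) pairs
      print([classify_layer(b, t) for b, t in layers])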

  13. On the Appearance of Thresholds in the Dynamical Model of Star Formation

    NASA Astrophysics Data System (ADS)

    Elmegreen, Bruce G.

    2018-02-01

    The Kennicutt–Schmidt (KS) relationship between the surface density of the star formation rate (SFR) and the gas surface density has three distinct power laws that may result from one model in which gas collapses at a fixed fraction of the dynamical rate. The power-law slope is 1 when the observed gas has a characteristic density for detection, 1.5 for total gas when the thickness is about constant as in the main disks of galaxies, and 2 for total gas when the thickness is regulated by self-gravity and the velocity dispersion is about constant, as in the outer parts of spirals, dwarf irregulars, and giant molecular clouds. The observed scaling of the star formation efficiency (SFR per unit CO) with the dense gas fraction (HCN/CO) is derived from the KS relationship when one tracer (HCN) is on the linear part and the other (CO) is on the 1.5 part. Observations of a threshold density or column density with a constant SFR per unit gas mass above the threshold are proposed to be selection effects, as are observations of star formation in only the dense parts of clouds. The model allows a derivation of all three KS relations using the probability distribution function of density with no thresholds for star formation. Failed galaxies and systems with sub-KS SFRs are predicted to have gas that is dominated by an equilibrium warm phase where the thermal Jeans length exceeds the Toomre length. A squared relation is predicted for molecular gas-dominated young galaxies.

  14. How does precipitation become runoff? Comparison of hydrologic thresholds across hillslope and catchment scales

    NASA Astrophysics Data System (ADS)

    Ross, C.; Ali, G.; Oswald, C. J.; McMillan, H. K.; Walter, K.

    2017-12-01

    A hydrologic threshold is a critical point in time when runoff behavior rapidly changes, often in response to the activation of specific storage-driven or intensity-driven processes. Hydrologic thresholds can be viewed as characteristic signatures of hydrosystems, which makes them useful for site comparison as long as their presence (or lack thereof) can be evaluated in a standard manner across a range of environments. While several previous studies have successfully identified thresholds at a variety of individual sites, only a limited number have compared dynamics prevailing at the hillslope versus catchment scale, or distinguished the role of storage versus intensity thresholds. The objective of this study was therefore to examine precipitation input thresholds as well as "precipitation minus evapotranspiration" thresholds in environments with contrasted climatic and geographic characteristics. Historical climate and hydrometric datasets were consolidated for one hillslope site located at the Panola Mountain Research Watershed (Southeastern USA) and catchments located in the HJ Andrew's Experimental Forest (Northwestern USA), the Catfish Creek Watershed (Canadian prairies), the Experimental Lakes Area (Canadian boreal ecozone), the Tarrawarra catchment (Australia) and the Mahurangi catchment (New Zealand). Individual precipitation-runoff events were delineated using the newly introduced software HydRun to derive event-specific hydrograph parameters as well as surrogate measures of antecedent moisture conditions and evapotranspiration in an automated and consistent manner. Various hydrograph parameters were then plotted against those surrogate measures to detect and evaluate site-specific threshold dynamics. Preliminary results show that a range of threshold shapes (e.g., "hockey stick", Heaviside and Dirac) were observed across sites. The influence of antecedent precipitation on threshold magnitude and shape also appeared stronger at sites with lower topographic relief and drier climate. Future analyses will focus on the interaction between storage and intensity thresholds in order to evaluate the importance of considering both for comparative hydrological studies.

  15. Stress/strain changes and triggered seismicity at The Geysers, California

    USGS Publications Warehouse

    Gomberg, J.; Davis, S.

    1996-01-01

    The principal results of this study of remotely triggered seismicity in The Geysers geothermal field are the demonstration that triggering (initiation of earthquake failure) depends on a critical strain threshold and that the threshold level increases with decreasing frequency or equivalently, depends on strain rate. This threshold function derives from (1) analyses of dynamic strains associated with surface waves of the triggering earthquakes, (2) statistically measured aftershock zone dimensions, and (3) analytic functional representations of strains associated with power production and tides. The threshold is also consistent with triggering by static strain changes and implies that both static and dynamic strains may cause aftershocks. The observation that triggered seismicity probably occurs in addition to background activity also provides an important constraint on the triggering process. Assuming the physical processes underlying earthquake nucleation to be the same, Gomberg [this issue] discusses seismicity triggered by the MW 7.3 Landers earthquake, its constraints on the variability of triggering thresholds with site, and the implications of time delays between triggering and triggered earthquakes. Our results enable us to reject the hypothesis that dynamic strains simply nudge prestressed faults over a Coulomb failure threshold sooner than they would have otherwise. We interpret the rate-dependent triggering threshold as evidence of several competing processes with different time constants, the faster one(s) facilitating failure and the other(s) inhibiting it. Such competition is a common feature of theories of slip instability. All these results, not surprisingly, imply that to understand earthquake triggering one must consider not only simple failure criteria requiring exceedence of some constant threshold but also the requirements for generating instabilities.

  16. Stress/strain changes and triggered seismicity at The Geysers, California

    NASA Astrophysics Data System (ADS)

    Gomberg, Joan; Davis, Scott

    1996-01-01

    The principal results of this study of remotely triggered seismicity in The Geysers geothermal field are the demonstration that triggering (initiation of earthquake failure) depends on a critical strain threshold and that the threshold level increases with decreasing frequency, or, equivalently, depends on strain rate. This threshold function derives from (1) analyses of dynamic strains associated with surface waves of the triggering earthquakes, (2) statistically measured aftershock zone dimensions, and (3) analytic functional representations of strains associated with power production and tides. The threshold is also consistent with triggering by static strain changes and implies that both static and dynamic strains may cause aftershocks. The observation that triggered seismicity probably occurs in addition to background activity also provides an important constraint on the triggering process. Assuming the physical processes underlying earthquake nucleation to be the same, Gomberg [this issue] discusses seismicity triggered by the MW 7.3 Landers earthquake, its constraints on the variability of triggering thresholds with site, and the implications of time delays between triggering and triggered earthquakes. Our results enable us to reject the hypothesis that dynamic strains simply nudge prestressed faults over a Coulomb failure threshold sooner than they would have otherwise. We interpret the rate-dependent triggering threshold as evidence of several competing processes with different time constants, the faster one(s) facilitating failure and the other(s) inhibiting it. Such competition is a common feature of theories of slip instability. All these results, not surprisingly, imply that to understand earthquake triggering one must consider not only simple failure criteria requiring exceedence of some constant threshold but also the requirements for generating instabilities.

  17. Comparison of the anticonvulsant potency of various diuretic drugs in the maximal electroshock-induced seizure threshold test in mice.

    PubMed

    Załuska, Katarzyna; Kondrat-Wróbel, Maria W; Łuszczki, Jarogniew J

    2018-05-01

    The coexistence of seizures and arterial hypertension requires an adequate and efficacious treatment involving both protection from seizures and reduction of high arterial blood pressure. Accumulating evidence indicates that some diuretic drugs (with a well-established position in the treatment of arterial hypertension) also possess anticonvulsant properties in various experimental models of epilepsy. The aim of this study was to assess the anticonvulsant potency of 6 commonly used diuretic drugs (i.e., amiloride, ethacrynic acid, furosemide, hydrochlorothiazide, indapamide, and spironolactone) in the maximal electroshock-induced seizure threshold (MEST) test in mice. Doses of the studied diuretics and their corresponding threshold increases were linearly related, allowing for the determination of doses which increase the threshold for electroconvulsions in drug-treated animals by 20% (TID20 values) over the threshold in control animals. Amiloride, hydrochlorothiazide and indapamide administered systemically (intraperitoneally - i.p.) increased the threshold for maximal electroconvulsions in mice, and the experimentally-derived TID20 values in the maximal electroshock seizure threshold test were 30.2 mg/kg for amiloride, 68.2 mg/kg for hydrochlorothiazide and 3.9 mg/kg for indapamide. In contrast, ethacrynic acid (up to 100 mg/kg), furosemide (up to 100 mg/kg) and spironolactone (up to 50 mg/kg) administered i.p. had no significant impact on the threshold for electroconvulsions in mice. The studied diuretics can be arranged with respect to their anticonvulsant potency in the MEST test as follows: indapamide > amiloride > hydrochlorothiazide. No anticonvulsant effects were observed for ethacrynic acid, furosemide or spironolactone in the MEST test in mice.
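
    The TID20 values reported above come from a linear dose versus threshold-increase relationship; the short sketch below shows one way such a dose could be interpolated from dose–response points (the example numbers are invented and are not the study's data).

      import numpy as np

      def tid20_from_dose_response(doses_mg_kg, threshold_increase_pct):
          # Fit a line through dose vs. percent-threshold-increase points and
          # solve for the dose giving a 20% increase over control (TID20).
          slope, intercept = np.polyfit(doses_mg_kg, threshold_increase_pct, 1)
          return (20.0 - intercept) / slope

      # Hypothetical dose-response points for an anticonvulsant diuretic
      doses = [10.0, 20.0, 40.0]
      increases = [7.0, 14.0, 27.0]   # percent increase in electroconvulsion threshold
      print(round(tid20_from_dose_response(doses, increases), 1), "mg/kg")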

  18. Novel analyses of long-term data provide a scientific basis for chlorophyll-a thresholds in San Francisco Bay

    USGS Publications Warehouse

    Sutula, Martha; Kudela, Raphael; Hagy, James D.; Harding, Lawrence W.; Senn, David; Cloern, James E.; Bricker, Suzanne B.; Beck, Marcus W.; Berg, Gry Mine

    2017-01-01

    San Francisco Bay (SFB), USA, is highly enriched in nitrogen and phosphorus, but has been resistant to the classic symptoms of eutrophication associated with over-production of phytoplankton. Observations in recent years suggest that this resistance may be weakening, shown by: significant increases of chlorophyll-a (chl-a) and decreases of dissolved oxygen (DO), common occurrences of phytoplankton taxa that can form Harmful Algal Blooms (HAB), and algal toxins in water and mussels reaching levels of concern. As a result, managers now ask: what levels of chl-a in SFB constitute tipping points of phytoplankton biomass beyond which water quality will become degraded, requiring significant nutrient reductions to avoid impairments? We analyzed data for DO, phytoplankton species composition, chl-a, and algal toxins to derive quantitative relationships between three indicators (HAB abundance, toxin concentrations, DO) and chl-a. Quantile regressions relating HAB abundance and DO to chl-a were significant, indicating SFB is at increased risk of adverse HAB and low DO levels if chl-a continues to increase. Conditional probability analysis (CPA) showed chl-a of 13 mg m−3 as a "protective" threshold below which probabilities for exceeding alert levels for HAB abundance and toxins were reduced. This threshold was similar to chl-a of 13–16 mg m−3 that would meet a SFB-wide 80% saturation Water Quality Criterion (WQC) for DO. Higher "at risk" chl-a thresholds from 25 to 40 mg m−3 corresponded to 0.5 probability of exceeding alert levels for HAB abundance, and for DO below a WQC of 5.0 mg L−1 designated for lower South Bay (LSB) and South Bay (SB). We submit these thresholds as a basis to assess eutrophication status of SFB and to inform nutrient management actions. This approach is transferrable to other estuaries to derive chl-a thresholds protective against eutrophication.
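
    Conditional probability analysis of the kind used above to set the 13 mg m−3 protective threshold can be illustrated with a small sketch: for each candidate chl-a threshold, estimate the probability that an alert level is exceeded given that chl-a is above the threshold. The data below are synthetic and the alert model is an assumption, not the study's dataset.

      import numpy as np

      def conditional_exceedance(chl, alert_exceeded, thresholds):
          # P(alert-level exceedance | chl-a > threshold) for each candidate threshold
          out = []
          for t in thresholds:
              above = chl > t
              p = alert_exceeded[above].mean() if above.any() else np.nan
              out.append((t, p))
          return out

      rng = np.random.default_rng(1)
      chl = rng.gamma(shape=2.0, scale=8.0, size=500)              # synthetic chl-a (mg/m3)
      alert = rng.uniform(size=500) < np.clip(chl / 60.0, 0, 1)    # synthetic exceedance flag
      for t, p in conditional_exceedance(chl, alert, [5, 13, 25, 40]):
          print(f"chl-a > {t:>2} mg/m3 -> P(exceedance) = {p:.2f}")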

  19. Economic Evaluation of Dupilumab for the Treatment of Moderate-to-Severe Atopic Dermatitis in Adults.

    PubMed

    Kuznik, Andreas; Bégo-Le-Bagousse, Gaëlle; Eckert, Laurent; Gadkari, Abhijit; Simpson, Eric; Graham, Christopher N; Miles, LaStella; Mastey, Vera; Mahajan, Puneet; Sullivan, Sean D

    2017-12-01

    Dupilumab significantly improves signs and symptoms of atopic dermatitis (AD), including pruritus, symptoms of anxiety and depression, and health-related quality of life versus placebo in adults with moderate-to-severe AD. Since the cost-effectiveness of dupilumab has not been evaluated, the objective of this analysis was to estimate a value-based price range in which dupilumab would be considered cost-effective compared with supportive care (SC) for treatment of moderate-to-severe AD in an adult population. A health economic model was developed to evaluate from the US payer perspective the long-term costs and benefits of dupilumab treatment administered every other week (q2w). Dupilumab q2w was compared with SC; robustness of assumptions and results were tested using sensitivity and scenario analyses. Clinical data were derived from the dupilumab LIBERTY AD SOLO trials; healthcare use and cost data were from health insurance claims histories of adult patients with AD. The annual price of maintenance therapy with dupilumab to be considered cost-effective was estimated for decision thresholds of US$100,000 and $150,000 per quality-adjusted life-year (QALY) gained. In the base case, the annual maintenance price for dupilumab therapy to be considered cost-effective would be $28,770 at a $100,000 per QALY gained threshold, and $39,940 at a $150,000 threshold. Results were generally robust to parameter variations in one-way and probabilistic sensitivity analyses. Dupilumab q2w compared with SC is cost-effective for the treatment of moderate-to-severe AD in US adults at an annual price of maintenance therapy in the range of $29,000-$40,000 at the $100,000-$150,000 per QALY thresholds. Sanofi and Regeneron Pharmaceuticals, Inc.

  20. Implications of climate change on winter road networks in Ontario's Far North and northern Manitoba, Canada, based on climate model projections

    NASA Astrophysics Data System (ADS)

    Hori, Y.; Cheng, V. Y. S.; Gough, W. A.

    2017-12-01

    A network of winter roads in northern Canada connects a number of remote First Nations communities to all-season roads and rails. The extent of the winter road networks depends on the geographic features, socio-economic activities, and the numbers of remote First Nations, so it differs among the provinces. The most extensive winter road networks south of the 60th parallel are located in Ontario and Manitoba, serving 32 and 18 communities respectively. In recent years, a warmer climate has resulted in a shorter winter road season and an increase in unreliable road conditions, thus limiting access among remote communities. This study focused on examining future freezing degree-day (FDD) accumulations during the winter road season at selected locations throughout Ontario's Far North and northern Manitoba using recent climate model projections from the multi-model ensembles of General Circulation Models (GCMs) under the Representative Concentration Pathway (RCP) scenarios. First, the non-parametric Mann-Kendall correlation test and the Theil-Sen method were used to identify any statistically significant trends between FDDs and time for the base period (1981-2010). Second, future climate scenarios were developed for the study areas using statistical downscaling methods. This study also examined the lowest threshold of FDDs for winter road construction in a future period. Our previous study established a lowest threshold of 380 FDDs, derived from the relationship between FDDs and the opening dates of the James Bay Winter Road near the Hudson-James Bay coast. This study therefore applied that threshold as a conservative estimate of the minimum FDDs required, to examine the effects of climate change on the winter road construction period.
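
    The freezing degree-day accumulation underlying the 380 FDD opening threshold is straightforward to compute from daily mean temperatures; the sketch below shows the bookkeeping, with a synthetic temperature series standing in for station data.

      def freezing_degree_days(daily_mean_temps_c):
          # Sum of degrees below 0 C over the season (days above freezing contribute nothing)
          return sum(-t for t in daily_mean_temps_c if t < 0.0)

      def road_construction_possible(daily_mean_temps_c, threshold_fdd=380.0):
          # Conservative opening criterion: accumulated FDDs must reach the threshold
          return freezing_degree_days(daily_mean_temps_c) >= threshold_fdd

      # Synthetic early-winter series: 40 days averaging -12 C
      season = [-12.0] * 40
      print(freezing_degree_days(season), road_construction_possible(season))  # 480.0 True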

  1. The novel antiepileptic drug imepitoin compares favourably to other GABA-mimetic drugs in a seizure threshold model in mice and dogs.

    PubMed

    Löscher, Wolfgang; Hoffmann, Katrin; Twele, Friederike; Potschka, Heidrun; Töllner, Kathrin

    2013-11-01

    Recently, the imidazolinone derivative imepitoin has been approved for treatment of canine epilepsy. Imepitoin acts as a low-affinity partial agonist at the benzodiazepine (BZD) site of the GABAA receptor and is the first compound with such mechanism that has been developed as an antiepileptic drug (AED). This mechanism offers several advantages compared to full agonists, including less severe adverse effects and a lack of tolerance and dependence liability, which has been demonstrated in rodents, dogs, and nonhuman primates. In clinical trials in epileptic dogs, imepitoin was shown to be an effective and safe AED. Recently, seizures in dogs have been proposed as a translational platform for human therapeutic trials on new epilepsy treatments. In the present study, we compared the anticonvulsant efficacy of imepitoin, phenobarbital and the high-affinity partial BZD agonist abecarnil in the timed i.v. pentylenetetrazole (PTZ) seizure threshold test in dogs and, for comparison, in mice. Furthermore, adverse effects of treatments were compared in both species. All drugs dose-dependently increased the PTZ threshold in both species, but anticonvulsant efficacy was higher in dogs than mice. At the doses selected for this study, imepitoin was slightly less potent than phenobarbital in increasing seizure threshold, but markedly more tolerable in both species. Effective doses of imepitoin in the PTZ seizure model were in the same range as those suppressing spontaneous recurrent seizures in epileptic dogs. The study demonstrates that low-affinity partial agonists at the benzodiazepine site of the GABAA receptor, such as imepitoin, offer advantages as a new category of AEDs. Copyright © 2013 Elsevier Ltd. All rights reserved.

  2. A Fully Automated Supraglacial lake area and volume Tracking ("FAST") algorithm: development and application using MODIS imagery of West Greenland

    NASA Astrophysics Data System (ADS)

    Williamson, Andrew; Arnold, Neil; Banwell, Alison; Willis, Ian

    2017-04-01

    Supraglacial lakes (SGLs) on the Greenland Ice Sheet (GrIS) influence ice dynamics if draining rapidly by hydrofracture, which can occur in under 24 hours. MODerate-resolution Imaging Spectroradiometer (MODIS) data are often used to investigate SGLs, including calculating SGL area changes through time, but no existing work presents a method that tracks changes in individual (and total) SGL volume in MODIS imagery over a melt season. Here, we present such a method. First, we tested three automated approaches to derive SGL areas from MODIS imagery by comparing calculated areas for the Paakitsoq and Store Glacier regions in West Greenland with areas derived from Landsat-8 (LS8) images. Second, we applied a physically-based depth-calculation algorithm to the pixels within the SGL boundaries from the best performing method, and validated the resultant depths with those calculated using the same method applied to LS8 imagery. Our results indicated that SGL areas are most accurately generated using dynamic thresholding of MODIS band 1 (red) with a 0.640 threshold value. Calculated SGL area, depth and volume values from MODIS were closely comparable to those derived from LS8. The best performing area- and depth-detection methods were then incorporated into a Fully Automated SGL Tracking ("FAST") algorithm that tracks individual SGLs between successive MODIS images. It identified 43 (Paakitsoq) and 19 (Store Glacier) rapidly draining SGLs during 2014, representing 21% and 15% of the respective total SGL populations, including some clusters of rapidly draining SGLs. We found no relationship between the water volumes contained within these rapidly draining SGLs and the ice thicknesses beneath them, indicating that a critical water volume linearly related to ice thickness cannot explain the incidence of rapid drainage. The FAST algorithm, which we believe to be the most comprehensive SGL tracking algorithm developed to date, has the potential to investigate statistical relationships between SGL areas, volumes and drainage events over wide areas of the GrIS, and over multiple seasons. It could also allow further insights into factors that may trigger rapid SGL drainage.
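
    The area-detection step described in this record (dynamic thresholding of MODIS band 1 with a 0.640 threshold value) can be sketched as follows. The way the local background is formed, the assumption that lakes are darker than surrounding ice in the red band, and the array handling are illustrative simplifications, not the published implementation.

      import numpy as np
      from scipy.ndimage import uniform_filter

      def detect_lakes(band1, background_window=25, factor=0.640):
          # Illustrative dynamic threshold: flag a pixel as lake water when its
          # red-band reflectance falls below `factor` times the local background mean.
          background = uniform_filter(band1.astype(float), size=background_window)
          return band1 < factor * background

      # Synthetic reflectance scene: bright ice (0.8) with a darker lake patch (0.35)
      scene = np.full((100, 100), 0.8)
      scene[40:50, 40:55] = 0.35
      mask = detect_lakes(scene)
      print(int(mask.sum()), "lake pixels flagged")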

  3. Conditional Probability Analysis Approach for Identifying Biological Threshold of Impact for Sedimentation: Application to Freshwater Streams in Oregon Coast Range Ecoregion

    EPA Science Inventory

    A conditional probability analysis (CPA) approach has been developed for identifying biological thresholds of impact for use in the development of geographic-specific water quality criteria for protection of aquatic life. This approach expresses the threshold as the likelihood ...

  4. Identifying Obstacles and Research Gaps of Telemedicine Projects: Approach for a State-of-the-Art Analysis.

    PubMed

    Harst, Lorenz; Timpel, Patrick; Otto, Lena; Wollschlaeger, Bastian; Richter, Peggy; Schlieter, Hannes

    2018-01-01

    This paper presents an approach for an evaluation of finished telemedicine projects using qualitative methods. Telemedicine applications are said to improve the performance of health care systems. While there are countless telemedicine projects, the vast majority never makes the threshold from testing to implementation and diffusion. Projects were collected from German project databases in the area of telemedicine following systematically developed criteria. In a testing phase, ten projects were subject to a qualitative content analysis to identify limitations, need for further research, and lessons learned. Using Mayring's method of inductive category development, six categories of possible future research were derived. Thus, the proposed method is an important contribution to diffusion and translation research regarding telemedicine, as it is applicable to a systematic research of databases.

  5. Tuning the threshold voltage of carbon nanotube transistors by n-type molecular doping for robust and flexible complementary circuits

    PubMed Central

    Wang, Huiliang; Wei, Peng; Li, Yaoxuan; Han, Jeff; Lee, Hye Ryoung; Naab, Benjamin D.; Liu, Nan; Wang, Chenggong; Adijanto, Eric; Tee, Benjamin C.-K.; Morishita, Satoshi; Li, Qiaochu; Gao, Yongli; Cui, Yi; Bao, Zhenan

    2014-01-01

    Tuning the threshold voltage of a transistor is crucial for realizing robust digital circuits. For silicon transistors, the threshold voltage can be accurately controlled by doping. However, it remains challenging to tune the threshold voltage of single-wall nanotube (SWNT) thin-film transistors. Here, we report a facile method to controllably n-dope SWNTs using 1H-benzoimidazole derivatives processed via either solution coating or vacuum deposition. The threshold voltages of our polythiophene-sorted SWNT thin-film transistors can be tuned accurately and continuously over a wide range. Photoelectron spectroscopy measurements confirmed that the SWNT Fermi level shifted to the conduction band edge with increasing doping concentration. Using this doping approach, we proceeded to fabricate SWNT complementary inverters by inkjet printing of the dopants. We observed an unprecedented noise margin of 28 V at VDD = 80 V (70% of 1/2VDD) and a gain of 85. Additionally, robust SWNT complementary metal−oxide−semiconductor inverters (noise margin 72% of 1/2VDD) and logic gates with rail-to-rail output voltage swing and subnanowatt power consumption were fabricated onto a highly flexible substrate. PMID:24639537
Analysis of the laser damage process assumes that onset of damage (threshold) is determined by the absorption and heating of a nanoscale region of a characteristic size reaching a critical temperature. We use this model to rationalize semi-quantitatively the pulse width scaling of the damage threshold from picosecond to nanosecond timescales, along with the pulse width dependence of the damage threshold probability derived by fitting large beam damage densitymore » data. Multi-shot exposures were used to address lifetime performance degradation described by an empirical expression based on the single exposure damage model. A damage threshold degradation of at least 50% was observed for both materials. Overall, the GaN films tested had 5-10 × higher optical damage thresholds than the ITO films tested for comparable transmission and electrical conductivity. This route to optically robust, large aperture transparent electrodes and power optoelectronics may thus involve use of next generation widegap semiconductors such as GaN.« less</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5948827','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5948827"><span>A New Vegetation Segmentation Approach for Cropped Fields Based on Threshold Detection from Hue Histograms</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Hassanein, Mohamed; El-Sheimy, Naser</p> <p>2018-01-01</p> <p>Over the last decade, the use of unmanned aerial vehicle (UAV) technology has evolved significantly in different applications as it provides a special platform capable of combining the benefits of terrestrial and aerial remote sensing. Therefore, such technology has been established as an important source of data collection for different precision agriculture (PA) applications such as crop health monitoring and weed management. Generally, these PA applications depend on performing a vegetation segmentation process as an initial step, which aims to detect the vegetation objects in collected agriculture fields’ images. The main result of the vegetation segmentation process is a binary image, where vegetations are presented in white color and the remaining objects are presented in black. Such process could easily be performed using different vegetation indexes derived from multispectral imagery. Recently, to expand the use of UAV imagery systems for PA applications, it was important to reduce the cost of such systems through using low-cost RGB cameras Thus, developing vegetation segmentation techniques for RGB images is a challenging problem. The proposed paper introduces a new vegetation segmentation methodology for low-cost UAV RGB images, which depends on using Hue color channel. The proposed methodology follows the assumption that the colors in any agriculture field image can be distributed into vegetation and non-vegetations colors. Therefore, four main steps are developed to detect five different threshold values using the hue histogram of the RGB image, these thresholds are capable to discriminate the dominant color, either vegetation or non-vegetation, within the agriculture field image. 
  348. Value of a dual-polarized gap-filling radar in support of southern California post-fire debris-flow warnings

    USGS Publications Warehouse

    Jorgensen, David P.; Hanshaw, Maiana N.; Schmidt, Kevin M.; Laber, Jayme L.; Staley, Dennis M.; Kean, Jason W.; Restrepo, Pedro J.

    2011-01-01

    A portable truck-mounted C-band Doppler weather radar was deployed to observe rainfall over the Station Fire burn area near Los Angeles, California, during the winter of 2009/10 to assist with debris-flow warning decisions. The deployments were a component of a joint NOAA-U.S. Geological Survey (USGS) research effort to improve definition of the rainfall conditions that trigger debris flows from steep topography within recent wildfire burn areas. A procedure was implemented to blend various dual-polarized estimators of precipitation (for radar observations taken below the freezing level) using threshold values for differential reflectivity and specific differential phase shift, which improves the accuracy of the rainfall estimates over a specific burn area sited with terrestrial tipping-bucket rain gauges. The portable radar outperformed local Weather Surveillance Radar-1988 Doppler (WSR-88D) National Weather Service network radars in detecting rainfall capable of initiating post-fire runoff-generated debris flows. The network radars underestimated hourly precipitation totals by about 50%. Consistent with intensity-duration threshold curves determined from past debris-flow events in burned areas in southern California, the portable radar-derived rainfall rates exceeded the empirical thresholds over a wider range of storm durations and with a higher spatial resolution than local National Weather Service operational radars. Moreover, the truck-mounted C-band radar's dual-polarimetric estimates of rainfall intensity provided a better guide to the expected severity of debris-flow events, based on criteria derived from previous events using rain gauge data, than traditional radar-derived rainfall approaches using reflectivity-rainfall relationships for either the portable or the operational network WSR-88D radars. Part of the improvement was due to siting the radar closer to the burn zone than the WSR-88Ds, but use of the dual-polarimetric variables improved the rainfall estimation by ~12% over the use of traditional Z-R relationships.
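Post-fire debris-flow warnings of the kind described in the record above are commonly based on empirical rainfall intensity-duration (I-D) threshold curves of the power-law form I = a * D^b. The sketch below shows only the exceedance check; the coefficients are made-up placeholders, not the operational southern California thresholds.

```python
def id_threshold(duration_h, a=12.0, b=-0.6):
    """Empirical intensity-duration threshold I = a * D**b, in mm/h.
    The coefficients a and b are illustrative placeholders."""
    return a * duration_h ** b

def exceeds_threshold(rain_mm, duration_h, a=12.0, b=-0.6):
    """True if the mean rainfall intensity over the storm duration is above the curve."""
    intensity = rain_mm / duration_h            # mean intensity in mm/h
    return intensity >= id_threshold(duration_h, a, b)

# 18 mm of rain in 0.5 h gives 36 mm/h, versus a threshold of about 18 mm/h here.
print(exceeds_threshold(rain_mm=18.0, duration_h=0.5))   # True
```

A radar that underestimates hourly totals by about 50%, as the network radars did, halves the computed intensity and can therefore miss exceedances of such a curve, which is why the better-calibrated gap-filling radar mattered for warning decisions.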
  349. A comparison of the fragmentation thresholds and inertial cavitation doses of different ultrasound contrast agents

    NASA Astrophysics Data System (ADS)

    Chen, Wen-Shiang; Matula, Thomas J.; Brayman, Andrew A.; Crum, Lawrence A.

    2003-01-01

    Contrast bubble destruction is important in several new diagnostic and therapeutic applications. The pressure threshold of destruction is determined by the shell material, while the propensity of the bubbles to undergo inertial cavitation (IC) depends on both the gas and shell properties of the ultrasound contrast agent (UCA). The ultrasonic fragmentation thresholds of three specific UCAs (Optison, Sonazoid, and biSpheres), each with different shell and gas properties, were determined under various acoustic conditions. The acoustic emissions generated by the agents, or their derivatives, characteristic of IC after fragmentation, were also compared using cumulative broadband-noise emissions (the IC "dose"). Albumin-shelled Optison and surfactant-shelled Sonazoid had low fragmentation thresholds (mean = 0.13 and 0.15 MPa at 1.1 MHz, 0.48 and 0.58 MPa at 3.5 MHz, respectively), while polymer-shelled biSpheres had a significantly higher threshold (mean = 0.19 and 0.23 MPa at 1.1 MHz, 0.73 and 0.96 MPa for thin- and thick-shell biSpheres at 3.5 MHz, respectively, p < 0.01). At comparable initial concentrations, surfactant-shelled Sonazoid produced a much larger IC dose after shell destruction than did either biSpheres or Optison (p < 0.01). Thick-shelled biSpheres had the highest fragmentation threshold and produced the lowest IC dose. More than two and five acoustic cycles, respectively, were necessary for the thin- and thick-shell biSpheres to reach a steady-state fragmentation threshold.

  350. Comparison of two insulin assays for first-phase insulin release in type 1 diabetes prediction and prevention studies

    PubMed Central

    Mahon, Jeffrey L.; Beam, Craig A.; Marcovina, Santica M.; Boulware, David C.; Palmer, Jerry P.; Winter, William E.; Skyler, Jay S.; Krischer, Jeffrey P.

    2018-01-01

    Background: Detection of below-threshold first-phase insulin release, or FPIR (the sum of the 1- and 3-minute insulin concentrations during an intravenous glucose tolerance test [IVGTT]), is important in type 1 diabetes prediction and prevention studies, including the TrialNet Oral Insulin Prevention Trial. We assessed whether an insulin immunoenzymometric assay (IEMA) could replace the less practical but current standard of a radioimmunoassay (RIA) for FPIR.
    Methods: One hundred thirty-three islet autoantibody-positive relatives of persons with type 1 diabetes underwent 161 IVGTTs. Insulin concentrations were measured by both assays in 1056 paired samples. A rule classifying FPIR (below-threshold, above-threshold, uncertain) by the IEMA was derived and validated against FPIR by the RIA. Results: The insulin IEMA-based rule accurately classified below- and above-threshold FPIRs by the RIA in 110/161 (68%) of IVGTTs, but was uncertain in 51/161 (32%) of tests, for which FPIR by RIA is still needed. An uncertain FPIR by the IEMA was more likely among below-threshold than above-threshold FPIRs by the RIA (64% [30/47] vs. 18% [21/114], respectively; p < 0.05). Conclusions: An insulin IEMA for FPIR in subjects at risk for type 1 diabetes accurately determined below- and above-threshold FPIRs in two-thirds of tests relative to the current standard of the insulin RIA, but could not reliably classify the remaining FPIRs. TrialNet is limiting the insulin RIA for FPIR to the latter, given the practical advantages of the more specific IEMA. PMID:21843518

  351. Echocardiography underestimates stroke volume and aortic valve area: implications for patients with small-area low-gradient aortic stenosis

    PubMed

    Chin, Calvin W. L.; Khaw, Hwan J.; Luo, Elton; Tan, Shuwei; White, Audrey C.; Newby, David E.; Dweck, Marc R.

    2014-09-01

    Discordance between a small aortic valve area (AVA; < 1.0 cm²) and a low mean pressure gradient (MPG; < 40 mm Hg) affects a third of patients with moderate or severe aortic stenosis (AS). We hypothesized that this is largely due to inaccurate echocardiographic measurements of the left ventricular outflow tract area (LVOTarea) and stroke volume, alongside inconsistencies in recommended thresholds. One hundred thirty-three patients with mild to severe AS and 33 control individuals underwent comprehensive echocardiography and cardiovascular magnetic resonance imaging (MRI). Stroke volume and LVOTarea were calculated using echocardiography and MRI, and the effects on AVA estimation were assessed. The relationship between AVA and MPG measurements was then modelled with nonlinear regression and consistent thresholds for these parameters calculated. Finally, the effect of these modified AVA measurements and novel thresholds on the number of patients with small-area low-gradient AS was investigated. Compared with MRI, echocardiography underestimated LVOTarea (n = 40; -0.7 cm²; 95% confidence interval [CI], -2.6 to 1.3), stroke volumes (-6.5 mL/m²; 95% CI, -28.9 to 16.0), and consequently AVA (-0.23 cm²; 95% CI, -1.01 to 0.59). Moreover, an AVA of 1.0 cm² corresponded to an MPG of 24 mm Hg based on echocardiographic measurements and 37 mm Hg after correction with MRI-derived stroke volumes. Based on conventional measures, 56 patients had discordant small-area low-gradient AS. Using MRI-derived stroke volumes and the revised thresholds, a 48% reduction in discordance was observed (n = 29). Echocardiography underestimated LVOTarea, stroke volume, and therefore AVA, compared with MRI. The thresholds based on current guidelines were also inconsistent. In combination, these factors explain > 40% of patients with discordant small-area low-gradient AS.
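The AVA values discussed in the record above come from the standard Doppler continuity equation, AVA = A_LVOT x VTI_LVOT / VTI_AV, so any underestimate of the LVOT area propagates directly into the AVA. The sketch below is a worked toy example; the measurements are purely illustrative and are not data from the study.

```python
import math

def aortic_valve_area(lvot_diameter_cm, vti_lvot_cm, vti_av_cm):
    """Continuity-equation AVA in cm^2, with a circular LVOT assumed from its diameter."""
    a_lvot = math.pi * (lvot_diameter_cm / 2.0) ** 2
    return a_lvot * vti_lvot_cm / vti_av_cm

# Hypothetical measurements: same flow velocities, two LVOT sizes.
echo_ava = aortic_valve_area(2.0, 20.0, 65.0)   # echo-style (smaller) LVOT diameter
mri_ava  = aortic_valve_area(2.2, 20.0, 65.0)   # larger, MRI-style LVOT area
print(f"echo AVA = {echo_ava:.2f} cm^2, MRI-corrected AVA = {mri_ava:.2f} cm^2")
```

With these made-up numbers, a 10% smaller LVOT diameter (about 20% less area) moves the computed AVA from roughly 1.17 cm² to 0.97 cm², i.e., across the 1.0 cm² severity threshold, which is the kind of discordance the study quantifies.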
  352. Return period curves for extreme 5-min rainfall amounts at the Barcelona urban network

    NASA Astrophysics Data System (ADS)

    Lana, X.; Casas-Castillo, M. C.; Serra, C.; Rodríguez-Solà, R.; Redaño, A.; Burgueño, A.; Martínez, M. D.

    2018-03-01

    Heavy rainfall episodes are relatively common in the conurbation of Barcelona and neighbouring cities (NE Spain), usually due to storms generated by convective phenomena in summer and by eastern and south-eastern advections in autumn. Prevention of local flood episodes and proper design of urban drainage have to take into account the spread of rainfall intensities rather than a simple evaluation of daily rainfall amounts. The database comes from 5-min rain amounts recorded by tipping-bucket gauges in the Barcelona urban network over the years 1994-2009. From these data, extreme 5-min rain amounts are selected by applying the peaks-over-threshold method, with thresholds derived from both the 95th percentile and the mean excess plot. The return period curves are derived from the statistical distribution of these extremes for every gauge, describing in detail the expected extreme 5-min rain amounts across the urban network. These curves are compared with those derived from annual extreme time series. In this way, areas of Barcelona subject to different levels of flood risk, from the point of view of rainfall intensity, are detected. Additionally, global time trends in extreme 5-min rain amounts are quantified for the whole network and found to be not statistically significant.

  353. Stable and unstable roots of ion temperature gradient driven mode using curvature modified plasma dispersion functions

    NASA Astrophysics Data System (ADS)

    Gültekin, Ö.; Gürcan, Ö. D.

    2018-02-01

    Basic, local kinetic theory of the ion temperature gradient driven (ITG) mode with adiabatic electrons is reconsidered. Standard unstable, purely oscillating, as well as damped solutions of the local dispersion relation are obtained using a bracketing technique based on the argument principle. This method requires computing the plasma dielectric function and its derivatives, which are implemented here using modified plasma dispersion functions with curvature and their derivatives, and it allows bracketing and following the zeros of the plasma dielectric function, which correspond to different roots of the ITG dispersion relation. We provide an open-source implementation of the derivatives of the modified plasma dispersion functions with curvature, which are used in this formulation.
    Studying the local ITG dispersion, we find that near the threshold of instability the unstable branch is rather asymmetric, with oscillating solutions towards lower wave numbers (i.e., drift waves) and damped solutions towards higher wave numbers. This suggests that a process akin to an inverse cascade, coupling to the oscillating branch at lower wave numbers, may play a role in the nonlinear evolution of the ITG near the instability threshold. Also, using the algorithm, the linear wave diffusion is estimated for the marginally stable ITG mode.

  354. The threshold algorithm: Description of the methodology and new developments

    NASA Astrophysics Data System (ADS)

    Neelamraju, Sridhar; Oligschleger, Christina; Schön, J. Christian

    2017-10-01

    Understanding the dynamics of complex systems requires the investigation of their energy landscape. In particular, the flow of probability on such landscapes is a central feature in visualizing the time evolution of complex systems. To obtain such flows, and the concomitant stable states of the systems and the generalized barriers among them, the threshold algorithm has been developed. Here, we describe the methodology of this approach, starting from the fundamental concepts in complex energy landscapes, and present recent new developments: the threshold-minimization algorithm and the molecular dynamics threshold algorithm. For applications of these new algorithms, we draw on landscape studies of three disaccharide molecules: lactose, maltose, and sucrose.

  355. Self-dual random-plaquette gauge model and the quantum toric code

    NASA Astrophysics Data System (ADS)

    Takeda, Koujin; Nishimori, Hidetoshi

    2004-05-01

    We study the four-dimensional Z2 random-plaquette lattice gauge theory as a model of topological quantum memory, the toric code in particular. In this model, the procedure of quantum error correction works properly in the ordered (Higgs) phase, and the phase boundary between the ordered (Higgs) and disordered (confinement) phases gives the accuracy threshold of error correction. Using self-duality of the model in conjunction with the replica method, we show that this model has exactly the same mathematical structure as the two-dimensional random-bond Ising model, which has been studied very extensively.
    This observation enables us to derive a conjecture on the exact location of the multicritical point (accuracy threshold) of the model, pc = 0.889972..., and leads to several nontrivial results, including bounds on the accuracy threshold in three dimensions.

  356. Back-and-forth micromotion of aqueous droplets in a dc electric field

    PubMed

    Kurimura, Tomo; Ichikawa, Masatoshi; Takinoue, Masahiro; Yoshikawa, Kenichi

    2013-10-01

    Recently, it was reported that an aqueous droplet in an oil phase exhibits rhythmic back-and-forth motion under a stationary dc voltage on the order of 100 V. Here, we demonstrate that the threshold voltage for inducing such oscillation can be decreased to the order of 10 V by downsizing the experimental system. Notably, the threshold electric field tends to decrease with a nonlinear scaling relationship as the system is downsized. We derive a simple theoretical model to interpret the system-size dependence of the threshold voltage. The model equation also suggests a unique effect of additional noise, which is characterized qualitatively in an actual experiment as a kind of coherence resonance. Our results should provide insight into the construction of micrometer-sized self-commutating motors and actuators for microfluidic and micromechanical devices.

  357. Pattern formation in a liquid-crystal light valve with feedback, including polarization, saturation, and internal threshold effects

    NASA Astrophysics Data System (ADS)

    Neubecker, R.; Oppo, G.-L.; Thuering, B.; Tschudi, T.

    1995-07-01

    The use of liquid-crystal light valves (LCLVs) as nonlinear elements in diffractive optical systems with feedback leads to the formation of a variety of optical patterns. The spectrum of possible spatial instabilities is shown to be even richer when the LCLV's capability for polarization modulation is utilized and internal threshold and saturation effects are considered. We derive a model for the feedback system based on a realistic description of the LCLV's internal function and its coupling to a polarizer. Thresholds of pattern formation are compared to the common Kerr-type approximation and show transitions involving rolls, squares, hexagons, and tiled patterns.
    Numerical and experimental results confirm our theoretical predictions and reveal how the patterns and their typical length scales can be easily controlled by changing the parameters.

  358. Influence of the hypercycle on the error threshold: a stochastic approach

    PubMed

    García-Tejedor, A.; Sanz-Nuño, J. C.; Olarrea, J.; Javier de la Rubia, F.; Montero, F.

    1988-10-21

    The role of fluctuations on the error threshold of the hypercycle has been studied by a stochastic approach on a very simplified model. For this model, the master equation was derived and its unique steady state calculated. This state implies the extinction of the system. However, the actual time necessary to reach the steady state may be astronomically long, whereas for times of experimental interest the system may remain near some quasi-stationary states. In order to explore this possibility, a Gillespie simulation of the stochastic process has been carried out. These quasi-stationary states correspond to the deterministic steady states of the system. The error threshold shifts towards higher values of the quality factor Q. Moreover, information about the fluctuations around the quasi-stationary states is obtained. The results are discussed in relation to the deterministic states.

  359. Sub-100 fJ and sub-nanosecond thermally driven threshold switching in niobium oxide crosspoint nanodevices

    PubMed

    Pickett, Matthew D.; Williams, R. Stanley

    2012-06-01

    We built and measured the dynamical current-versus-time behavior of nanoscale niobium oxide crosspoint devices which exhibited threshold switching (current-controlled negative differential resistance). The switching speeds of 110 × 110 nm² devices were found to be Δt(ON) = 700 ps and Δt(OFF) = 2.3 ns, while the switching energies were of the order of 100 fJ. We derived a new dynamical model, based on the Joule heating rate of a thermally driven insulator-to-metal phase transition, that accurately reproduced the experimental results, and employed the model to estimate the switching time and energy scaling behavior of such devices down to the 10 nm scale.
    These results indicate that threshold switches could be of practical interest in hybrid CMOS nanoelectronic circuits.

  360. Threshold Concepts in Finance: Student Perspectives

    ERIC Educational Resources Information Center

    Hoadley, Susan; Kyng, Tim; Tickle, Leonie; Wood, Leigh N.

    2015-01-01

    Finance threshold concepts are the essential conceptual knowledge that underpin well-developed financial capabilities and are central to the mastery of finance. In this paper we investigate threshold concepts in finance from the point of view of students, by establishing the extent to which students are aware of threshold concepts identified by...

  361. THRESHOLD LOGIC.

    DTIC Science & Technology

    ...synthesis procedures; a 'best' method is definitely established. (2) 'Symmetry Types for Threshold Logic' is a tutorial exposition including a careful...development of the Goto-Takahasi self-dual type ideas. (3) 'Best Threshold Gate Decisions' reports a comparison, on the 2470 7-argument threshold ...interpretation is shown best. (4) 'Threshold Gate Networks' reviews the previously discussed 2-algorithm in geometric terms, describes our FORTRAN...
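The building block analyzed in the report above is the linear threshold gate, which fires when a weighted sum of binary inputs reaches a threshold. The sketch below is a generic textbook definition of such a gate (the function name and the majority-gate example are ours, not taken from the report).

```python
def threshold_gate(inputs, weights, threshold):
    """Linear threshold gate: output 1 iff sum_i w_i * x_i >= threshold, else 0."""
    weighted_sum = sum(w * x for w, x in zip(weights, inputs))
    return 1 if weighted_sum >= threshold else 0

# A 3-input majority gate: unit weights and a threshold of 2.
for x in [(0, 0, 1), (0, 1, 1), (1, 1, 1)]:
    print(x, "->", threshold_gate(x, (1, 1, 1), 2))   # 0, 1, 1
```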
  362. Structure-Odor Relationship Study on Geraniol, Nerol, and Their Synthesized Oxygenated Derivatives

    PubMed

    Elsharif, Shaimaa Awadain; Buettner, Andrea

    2018-03-14

    Despite being isomers with the same citrus-like, floral odor, geraniol, 1, and nerol, 3, show different odor thresholds. To date, no systematic studies are at hand elucidating the structural features required for their specific odor properties. Therefore, starting from these two basic structures and their corresponding esters, namely geranyl acetate, 2, and neryl acetate, 4, a total of 12 oxygenated compounds were synthesized and characterized with regard to retention indices (RI), mass spectrometric (MS), and nuclear magnetic resonance (NMR) data. All compounds were individually tested for their odor qualities and odor thresholds in air (OT). Geraniol, the E-isomer, with an OT of 14 ng/L, was found to be more potent than its Z-isomer, nerol, which has an OT of 60 ng/L. However, 8-oxoneryl acetate was the most potent derivative within this study, exhibiting an OT of 8.8 ng/L, whereas 8-oxonerol was the least potent with an OT of 493 ng/L. Interestingly, the 8-oxo derivatives smell musty and fatty, whereas the 8-hydroxy derivatives show odor impressions similar to those of 1 and 3. 8-Carboxygeraniol was found to be odorless, whereas its Z-isomer, 8-carboxynerol, showed fatty, waxy, and greasy impressions. Overall, we observed that oxygenation at C-8 affects mainly the odor quality, whereas the E/Z position of the functional group at C-1 affects the odor potency.

  363. A Ratiometric Threshold for Determining Presence of Cancer During Fluorescence-guided Surgery

    PubMed Central

    Warram, Jason M.; de Boer, Esther; Moore, Lindsay S.; Schmalbach, Cecelia E.; Withrow, Kirk P.; Carroll, William R.; Richman, Joshua S.; Morlandt, Anthony B.; Brandwein-Gensler, Margaret; Rosenthal, Eben L.

    2015-01-01

    Background & Objective: Fluorescence-guided imaging to assist in the identification of malignant margins has the potential to dramatically improve oncologic surgery. However, a standardized method for quantitative assessment of disease-specific fluorescence has not been established. Introduced here is a ratiometric threshold derived from mean fluorescent tissue intensity that can be used to semi-quantitatively delineate tumor from normal tissue. Methods: Open-field and closed-field imaging devices were used to quantify fluorescence in punch-biopsy tissues sampled from primary tumors collected during a phase 1 trial evaluating the safety of cetuximab-IRDye800 in patients (n = 11) undergoing surgical intervention for head and neck cancer.
    Fluorescence ratios were calculated using the mean fluorescence intensity (MFI) of each punch biopsy normalized by the MFI of patient-matched tissues. Ratios were compared to pathological assessment, and a ratiometric threshold was established to predict the presence of cancer. Results: During open-field imaging using an intraoperative device, the threshold for muscle-normalized tumor fluorescence was found to be 2.7, which produced a sensitivity of 90.5% and a specificity of 78.6% for delineating diseased tissue. The skin-normalized threshold generated greater sensitivity (92.9%) and specificity (81.0%). Conclusion: Successful implementation of a semi-quantitative threshold can provide a scientific methodology for delineating disease from normal tissue during fluorescence-guided resection of cancer. PMID:26074273

  364. Perfect Detection of Spikes in the Linear Sub-threshold Dynamics of Point Neurons

    PubMed Central

    Krishnan, Jeyashree; Porta Mana, PierGianLuca; Helias, Moritz; Diesmann, Markus; Di Napoli, Edoardo

    2018-01-01

    Spiking neuronal networks are usually simulated with one of three main schemes: the classical time-driven and event-driven schemes, and the more recent hybrid scheme. All three schemes evolve the state of a neuron through a series of checkpoints: equally spaced in the first scheme and determined neuron-wise by spike events in the latter two. The time-driven and hybrid schemes determine whether the membrane potential of a neuron crosses a threshold at the end of the time interval between consecutive checkpoints. Threshold crossing can, however, occur within the interval even if this test is negative, so spikes can be missed. The present work offers an alternative, geometric point of view on neuronal dynamics and derives, implements, and benchmarks a method for perfect retrospective spike detection. This method can be applied to neuron models with affine or linear sub-threshold dynamics. The idea behind the method is to propagate the threshold with a time-inverted dynamics, testing whether the threshold crosses the neuron state to be evolved, rather than vice versa. Algebraically, this translates into a set of inequalities necessary and sufficient for threshold crossing. This test is slower than the imperfect one, but can be optimized in several ways. Comparison confirms earlier results that the imperfect tests rarely miss spikes (fewer than 1 in 10^8 spikes missed) in biologically relevant settings. PMID:29379430
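The toy sketch below illustrates the failure mode motivating the record above: when the sub-threshold trajectory peaks between checkpoints, a test applied only at the checkpoint misses the crossing. It uses a simple closed-form double-exponential membrane response with hypothetical parameters and a brute-force dense scan as the retrospective check; it is not the paper's inequality-based, time-inverted threshold test.

```python
import math

# Toy membrane response to a decaying synaptic current (tau_m != tau_s):
# V(t) = A * (exp(-t / tau_m) - exp(-t / tau_s)), which rises and then decays.
TAU_M, TAU_S, A, V_TH = 10.0, 2.0, 2.0, 1.0   # ms, ms, amplitude, threshold (hypothetical)

def v(t):
    return A * (math.exp(-t / TAU_M) - math.exp(-t / TAU_S))

def endpoint_test(t1):
    """Imperfect check: only inspects the membrane potential at the checkpoint itself."""
    return v(t1) >= V_TH

def dense_test(t0, t1, n=1000):
    """Retrospective check: scan the whole interval finely for a threshold crossing."""
    return any(v(t0 + (t1 - t0) * k / n) >= V_TH for k in range(n + 1))

# One 20 ms checkpoint interval: the peak near t = 4 ms exceeds the threshold,
# but by t = 20 ms the potential has decayed below it, so the endpoint test misses it.
print(endpoint_test(20.0), dense_test(0.0, 20.0))   # False True
```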
  365. Health Effects Assessment for Acenaphthene

    EPA Science Inventory

    Because of the lack of data on the carcinogenicity and threshold toxicity of acenaphthene, risk assessment values cannot be derived. The ambient water quality criterion of 0.2 mg/L is based on organoleptic data, which have no known relationship to potential human health effects. A...

  366. Deriving protection thresholds for threatened and endangered species potentially exposed to pesticides

    EPA Science Inventory

    The Endangered Species Act requires specific and stringent protection of threatened and endangered species and their critical habitat. Therefore, protective methods for risk assessment for such species are needed. Species sensitivity distributions (SSDs) are a common tool used fo...
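One common way to turn single-species toxicity endpoints into a protective threshold, in the spirit of the record above, is to fit a species sensitivity distribution (SSD) and read off its lower tail, typically the HC5 (the concentration expected to affect only 5% of species). The sketch below assumes a log-normal SSD and uses made-up endpoint values; it illustrates the general tool, not the EPA's specific methodology for listed species.

```python
import numpy as np

def hc5_lognormal(endpoints_ug_l):
    """HC5 from a log-normal species sensitivity distribution:
    the 5th percentile of the fitted distribution of log10 toxicity endpoints."""
    logs = np.log10(np.asarray(endpoints_ug_l, dtype=float))
    mu, sigma = logs.mean(), logs.std(ddof=1)
    z05 = -1.6449                          # 5th percentile of the standard normal
    return 10.0 ** (mu + z05 * sigma)

# Hypothetical acute endpoints (ug/L) for eight species.
endpoints = [12.0, 35.0, 58.0, 90.0, 140.0, 260.0, 410.0, 900.0]
print(f"HC5 = {hc5_lognormal(endpoints):.1f} ug/L")
```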
  367. MiR-137-derived polygenic risk: effects on cognitive performance in patients with schizophrenia and controls

    PubMed

    Cosgrove, D.; Harold, D.; Mothersill, O.; Anney, R.; Hill, M. J.; Bray, N. J.; Blokland, G.; Petryshen, T.; Richards, A.; Mantripragada, K.; Owen, M.; O'Donovan, M. C.; Gill, M.; Corvin, A.; Morris, D. W.; Donohoe, G.

    2017-01-24

    Variants at microRNA-137 (MIR137), one of the most strongly associated schizophrenia risk loci identified to date, have been associated with poorer cognitive performance. As microRNA-137 is known to regulate the expression of ~1900 other genes, including several that are independently associated with schizophrenia, we tested whether this gene set was also associated with variation in cognitive performance. Our analysis was based on an empirically derived list of genes whose expression was altered by manipulation of MIR137 expression. This list was cross-referenced with genome-wide schizophrenia association data to construct individual polygenic scores. We then tested, in a sample of 808 patients and 192 controls, whether these risk scores were associated with altered performance on cognitive functions known to be affected in schizophrenia. A subgroup of healthy participants also underwent functional imaging during memory (n = 108) and face processing (n = 83) tasks. Increased polygenic risk within the empirically derived miR-137-regulated gene score was associated with significantly lower performance on intelligence quotient, working memory, and episodic memory. These effects were observed most clearly at a polygenic threshold of P = 0.05, although significant results were observed at all three thresholds analyzed. This association was found independently for the gene set as a whole, excluding the schizophrenia-associated MIR137 SNP itself. Analysis of the spatial working memory fMRI task further suggested that increased risk score (thresholded at P = 10^-5) was significantly associated with increased activation of the right inferior occipital gyrus. In conclusion, these data are consistent with emerging evidence that MIR137-associated risk for schizophrenia may relate to its broader downstream genetic effects.

  368. Estimating the dim light melatonin onset of adolescents within a 6-h sampling window: the impact of sampling rate and threshold method

    PubMed Central

    Crowley, Stephanie J.; Suh, Christina; Molina, Thomas A.; Fogg, Louis F.; Sharkey, Katherine M.; Carskadon, Mary A.

    2016-01-01

    Objective/Background: Circadian rhythm sleep-wake disorders often manifest during the adolescent years.
    Measurement of circadian phase, such as the dim light melatonin onset (DLMO), improves diagnosis and treatment of these disorders, but financial and time costs limit the use of DLMO phase assessments in clinic. The current analysis aims to inform a cost-effective and efficient protocol to measure the DLMO in older adolescents by reducing the number of samples and the total sampling duration. Patients/Methods: A total of 66 healthy adolescents (26 males) aged 14.8 to 17.8 years participated in a study in which sleep was fixed for one week before they came to the laboratory for saliva collection in dim light (< 20 lux). Two partial 6-h salivary melatonin profiles were derived for each participant. Both profiles began 5 h before bedtime and ended 1 h after bedtime, but one profile was derived from samples taken every 30 min (13 samples) and the other from samples taken every 60 min (7 samples). Three standard thresholds (the mean of the first three melatonin values + 2 SDs, 3 pg/mL, and 4 pg/mL) were used to compute the DLMO. Agreement between DLMOs derived from the 30-min and 60-min sampling rates was determined using a Bland-Altman analysis; agreement between sampling-rate DLMOs was defined as ±1 h. Results and Conclusions: Within a 6-h sampling window, 60-min sampling provided DLMO estimates that were within ±1 h of the DLMO from 30-min sampling, but only when an absolute threshold (3 pg/mL or 4 pg/mL) was used to compute the DLMO. Future analyses should be extended to include adolescents with circadian rhythm sleep-wake disorders. PMID:27318227

  369. Thresholds of Principle and Preference: Exploring Procedural Variation in Postgraduate Surgical Education

    PubMed

    Apramian, Tavis; Cristancho, Sayra; Watling, Chris; Ott, Michael; Lingard, Lorelei

    2015-11-01

    Expert physicians develop their own ways of doing things. The influence of such practice variation on clinical learning is insufficiently understood. Our grounded theory study explored how residents make sense of, and behave in relation to, the procedural variations of faculty surgeons. We sampled senior postgraduate surgical residents to construct a theoretical framework for how residents make sense of procedural variations. Using a constructivist grounded theory approach, we used marginal participant observation in the operating room across 56 surgical cases (146 hours), field interviews (38), and formal interviews (6) to develop a theoretical framework for residents' ways of dealing with procedural variations. Data analysis used constant comparison to iteratively refine the framework and data collection until theoretical saturation was reached. The core category of the constructed theory was called thresholds of principle and preference, and it captures how faculty members position some procedural variations as negotiable and others as not. The term thresholding was coined to describe residents' daily experience of spotting, mapping, and negotiating their faculty members' thresholds and defending their own emerging thresholds. Thresholds of principle and preference play a key role in workplace-based medical education.
    Postgraduate medical learners are occupied on a day-to-day level with thresholding and with attempting to make sense of the procedural variations of faculty. Workplace-based teaching and assessment should include an understanding of the integral role of thresholding in shaping learners' development. Future research should explore the nature and impact of thresholding in workplace-based learning beyond the surgical context.

  370. Epidemic spreading with activity-driven awareness diffusion on multiplex network

    PubMed

    Guo, Quantong; Lei, Yanjun; Jiang, Xin; Ma, Yifang; Huo, Guanying; Zheng, Zhiming

    2016-04-01

    There has been growing interest in exploring the interplay between epidemic spreading and human response, since it is natural for people to take various measures when they become aware of epidemics. As a proper way to describe the multiple connections among people in reality, the multiplex network, a set of nodes interacting through multiple sets of edges, has attracted much attention. In this paper, to explore the coupled dynamical processes, a multiplex network with two layers is built. Specifically, the information spreading layer is a time-varying network generated by the activity-driven model, while the contagion layer is a static network. We extend the microscopic Markov chain approach to derive the epidemic threshold of the model. Compared with extensive Monte Carlo simulations, the method shows high accuracy in predicting the epidemic threshold. Besides, taking different spreading models of awareness into consideration, we explore the interplay between epidemic spreading and awareness spreading. The results show that awareness spreading can not only raise the epidemic threshold but also reduce the prevalence of epidemics. When the spreading of awareness is described by a susceptible-infected-susceptible model, there exists a critical value at which the dynamical process on the awareness layer can control the onset of epidemics; when it is a threshold model, the epidemic threshold undergoes an abrupt transition as the local awareness ratio α approaches 0.5. Moreover, we also find that temporal changes in the topology hinder the spread of awareness, which directly affects the epidemic threshold, especially when the awareness layer is a threshold model. Given that the threshold model is widely used for social contagion, this is an important and meaningful result. Our results could also lead to interesting future research about the different time-scales of structural changes in multiplex networks.
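For orientation, the classical single-layer counterpart of the threshold derived in the record above is the mean-field SIS result that the epidemic threshold on a static network is the recovery rate divided by the largest eigenvalue of the adjacency matrix. The sketch below computes only that baseline quantity on an arbitrary toy graph; it does not reproduce the paper's two-layer, awareness-coupled Markov chain analysis.

```python
import numpy as np

def sis_epidemic_threshold(adjacency, recovery_rate=1.0):
    """Mean-field SIS threshold on a static network: beta_c = mu / lambda_max(A)."""
    lam_max = max(np.linalg.eigvals(adjacency).real)
    return recovery_rate / lam_max

# Small undirected toy graph (symmetric adjacency matrix, no self-loops).
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0],
              [1, 1, 0, 1, 1],
              [0, 1, 1, 0, 1],
              [0, 0, 1, 1, 0]], dtype=float)
print(f"beta_c ~ {sis_epidemic_threshold(A):.3f}")
```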
  371. Epidemic spreading with activity-driven awareness diffusion on multiplex network

    NASA Astrophysics Data System (ADS)

    Guo, Quantong; Lei, Yanjun; Jiang, Xin; Ma, Yifang; Huo, Guanying; Zheng, Zhiming

    2016-04-01

    There has been growing interest in exploring the interplay between epidemic spreading and human response, since it is natural for people to take various measures when they become aware of epidemics. As a proper way to describe the multiple connections among people in reality, the multiplex network, a set of nodes interacting through multiple sets of edges, has attracted much attention. In this paper, to explore the coupled dynamical processes, a multiplex network with two layers is built. Specifically, the information spreading layer is a time-varying network generated by the activity-driven model, while the contagion layer is a static network. We extend the microscopic Markov chain approach to derive the epidemic threshold of the model. Compared with extensive Monte Carlo simulations, the method shows high accuracy in predicting the epidemic threshold. Besides, taking different spreading models of awareness into consideration, we explore the interplay between epidemic spreading and awareness spreading. The results show that awareness spreading can not only raise the epidemic threshold but also reduce the prevalence of epidemics. When the spreading of awareness is described by a susceptible-infected-susceptible model, there exists a critical value at which the dynamical process on the awareness layer can control the onset of epidemics; when it is a threshold model, the epidemic threshold undergoes an abrupt transition as the local awareness ratio α approaches 0.5. Moreover, we also find that temporal changes in the topology hinder the spread of awareness, which directly affects the epidemic threshold, especially when the awareness layer is a threshold model. Given that the threshold model is widely used for social contagion, this is an important and meaningful result. Our results could also lead to interesting future research about the different time-scales of structural changes in multiplex networks.

  372. 24 CFR 599.301 - Initial determination of threshold requirements

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 24 Housing and Urban Development 3 2014-04-01 2013-04-01 true Initial determination of threshold requirements.
    599.301 Section 599.301 Housing and Urban Development Regulations Relating to Housing and Urban Development (Continued) OFFICE OF ASSISTANT SECRETARY FOR COMMUNITY PLANNING AND DEVELOPMENT, DEPARTMENT OF HOUSING AND URBAN DEVELOPMENT...

  373. 24 CFR 594.7 - Other threshold requirements

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 24 Housing and Urban Development 3 2013-04-01 2013-04-01 false Other threshold requirements. 594.7 Section 594.7 Housing and Urban Development Regulations Relating to Housing and Urban Development (Continued) OFFICE OF ASSISTANT SECRETARY FOR COMMUNITY PLANNING AND DEVELOPMENT, DEPARTMENT OF HOUSING AND URBAN DEVELOPMENT COMMUNITY FACILITIES JOH...

  374. 24 CFR 594.7 - Other threshold requirements

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 24 Housing and Urban Development 3 2012-04-01 2012-04-01 false Other threshold requirements. 594.7 Section 594.7 Housing and Urban Development Regulations Relating to Housing and Urban Development (Continued) OFFICE OF ASSISTANT SECRETARY FOR COMMUNITY PLANNING AND DEVELOPMENT, DEPARTMENT OF HOUSING AND URBAN DEVELOPMENT COMMUNITY FACILITIES JOH...

  375. 24 CFR 599.301 - Initial determination of threshold requirements

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 24 Housing and Urban Development 3 2013-04-01 2013-04-01 false Initial determination of threshold requirements.
    599.301 Section 599.301 Housing and Urban Development Regulations Relating to Housing and Urban Development (Continued) OFFICE OF ASSISTANT SECRETARY FOR COMMUNITY PLANNING AND DEVELOPMENT, DEPARTMENT OF HOUSING AND URBAN DEVELOPMENT...

  376. 24 CFR 599.301 - Initial determination of threshold requirements

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 24 Housing and Urban Development 3 2010-04-01 2010-04-01 false Initial determination of threshold requirements. 599.301 Section 599.301 Housing and Urban Development Regulations Relating to Housing and Urban Development (Continued) OFFICE OF ASSISTANT SECRETARY FOR COMMUNITY PLANNING AND DEVELOPMENT, DEPARTMENT OF HOUSING AND URBAN DEVELOPMENT...

  377. 24 CFR 594.7 - Other threshold requirements

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 24 Housing and Urban Development 3 2010-04-01 2010-04-01 false Other threshold requirements. 594.7 Section 594.7 Housing and Urban Development Regulations Relating to Housing and Urban Development (Continued) OFFICE OF ASSISTANT SECRETARY FOR COMMUNITY PLANNING AND DEVELOPMENT, DEPARTMENT OF HOUSING AND URBAN DEVELOPMENT COMMUNITY FACILITIES JOH...

  378. 24 CFR 599.301 - Initial determination of threshold requirements

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 24 Housing and Urban Development 3 2012-04-01 2012-04-01 false Initial determination of threshold requirements.
    599.301 Section 599.301 Housing and Urban Development Regulations Relating to Housing and Urban Development (Continued) OFFICE OF ASSISTANT SECRETARY FOR COMMUNITY PLANNING AND DEVELOPMENT, DEPARTMENT OF HOUSING AND URBAN DEVELOPMENT...

  379. 24 CFR 594.7 - Other threshold requirements

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 24 Housing and Urban Development 3 2014-04-01 2013-04-01 true Other threshold requirements. 594.7 Section 594.7 Housing and Urban Development Regulations Relating to Housing and Urban Development (Continued) OFFICE OF ASSISTANT SECRETARY FOR COMMUNITY PLANNING AND DEVELOPMENT, DEPARTMENT OF HOUSING AND URBAN DEVELOPMENT COMMUNITY FACILITIES JOHN...

  380. The diagnosis of sepsis revisited - a challenge for young medical scientists in the 21st century

    PubMed Central

    2014-01-01

    In 1991, a well-meaning consensus group of thought leaders derived a simple definition for sepsis which required the breach of only a few static thresholds. More than 20 years later, this simple definition has calcified to become the gold standard for sepsis protocols and research. Yet sepsis clearly comprises a complex, dynamic, and relational distortion of human life. Given the profound scope of the loss of life worldwide, there is a need to disengage from the simple concepts of the past. There is an acute need to develop 21st century approaches which engage sepsis in its true form, as a complex, dynamic, and relational pattern of death.
    PMID:24383420

  381. Scaling laws for ignition at the National Ignition Facility from first principles

    PubMed

    Cheng, Baolian; Kwan, Thomas J. T.; Wang, Yi-Ming; Batha, Steven H.

    2013-10-01

    We have developed an analytical physics model from fundamental physics principles and used the reduced one-dimensional model to derive a thermonuclear ignition criterion and implosion energy scaling laws applicable to inertial confinement fusion capsules. The scaling laws relate the fuel pressure and the minimum implosion energy required for ignition to the peak implosion velocity and the equation of state of the pusher and the hot fuel. When a specific low-entropy adiabat path is used for the cold fuel, our scaling laws recover the ignition threshold factor dependence on the implosion velocity, but when a high-entropy adiabat path is chosen, the model agrees with recent measurements.

  382. Process Control for Precipitation Prevention in Space Water Recovery Systems

    NASA Technical Reports Server (NTRS)

    Sargusingh, Miriam; Callahan, Michael R.; Muirhead, Dean

    2015-01-01

    The ability to recover and purify water through physicochemical processes is crucial for realizing long-term human space missions, including both planetary habitation and space travel. Because of their robust nature, rotary distillation systems have been actively pursued by NASA as one of the technologies for water recovery from wastewater primarily comprised of human urine. A specific area of interest is the prevention of the formation of solids that could clog fluid lines and damage rotating equipment.
  383. Deriving Predicted No-Effect Concentrations in Diverse Geographies for use in eco-TTC Estimations

    EPA Science Inventory

    Ecological Thresholds for Toxicologic Concern (eco-TTC) employs an assessment of distributions of Predicted No-Effect Concentrations (PNECs) for compounds following chemical grouping. Grouping can be by mode of action, structural fragments, or by chemical functional use. Thus, eco...

  384. The HEL Upper Limit

    NASA Astrophysics Data System (ADS)

    Billingsley, J. P.

    2002-07-01

    A threshold particle velocity, Vf, derived by Professor E.R.
    Fitzgerald for the onset of atomic lattice Disintegration Phenomena (LDP) is shown to exceed and/or compare rather well with the maximum experimental Hugoniot Elastic Limit (HEL) particle (mass) velocities (UpHEL) for selected hard strong mineral/ceramic materials.

  385. Comment on "Scalings for radiation from plasma bubbles" [Phys. Plasmas 17, 056708 (2010)]

    NASA Astrophysics Data System (ADS)

    Corde, S.; Stordeur, A.; Malka, V.

    2011-03-01

    Thomas has recently derived scaling laws for x-ray radiation from electrons accelerated in plasma bubbles, as well as a threshold for the self-injection of background electrons into the bubble [A. G. R. Thomas, Phys. Plasmas 17, 056708 (2010)]. To obtain this threshold, the equations of motion for a test electron are studied within the frame of the bubble model, where the bubble is described by prescribed electromagnetic fields and has a perfectly spherical shape. The author affirms that any elliptical trajectory of the form x'^2/gamma_p^2 + y'^2 = R^2 is a solution of the equations of motion (in the bubble frame), within the approximation p_y'^2/p_x'^2 << 1. In addition, he highlights that his result is different from the work of Kostyukov et al. [Phys. Rev. Lett. 103, 175003 (2009)], and explains the error committed by Kostyukov-Nerush-Pukhov-Seredov (KNPS). In this comment, we show that numerically integrated trajectories, based on the same equations as the analytical work of Thomas, lead to a completely different result for the self-injection threshold, namely the result published by KNPS [Phys. Rev. Lett. 103, 175003 (2009)]. We explain why the analytical analysis of Thomas fails and we provide a discussion based on numerical simulations which shows exactly where the difference arises. We also show that the arguments of Thomas concerning the error of KNPS do not hold, and that their analysis is mathematically correct. Finally, we emphasize that if the KNPS threshold is found not to be verified in PIC (Particle In Cell) simulations or experiments, it is due to a deficiency of the model itself, and not to an error in the mathematical derivation.

  386. Data Assimilation Experiments using Quality Controlled AIRS Version 5 Temperature Soundings

    NASA Technical Reports Server (NTRS)

    Susskind, Joel

    2008-01-01

    The AIRS Science Team Version 5 retrieval algorithm has been finalized and is now operational at the Goddard DAAC in the processing (and reprocessing) of all AIRS data. Version 5 contains accurate case-by-case error estimates for most derived products, which are also used for quality control.
    We have conducted forecast impact experiments assimilating AIRS quality controlled temperature profiles using the NASA GEOS-5 data assimilation system, consisting of the NCEP GSI analysis coupled with the NASA FVGCM. Assimilation of quality controlled temperature profiles resulted in significantly improved forecast skill in both the Northern Hemisphere and Southern Hemisphere Extra-Tropics, compared to that obtained from analyses in which all data used operationally by NCEP except for AIRS data are assimilated. Experiments using different quality control thresholds for assimilation of AIRS temperature retrievals showed that a medium quality control threshold performed better than a tighter threshold, which provided better overall sounding accuracy, or a looser threshold, which provided better spatial coverage of accepted soundings. We are conducting more experiments to further optimize this balance of spatial coverage and sounding accuracy from the data assimilation perspective. In all cases, temperature soundings were assimilated well below cloud level in partially cloudy cases. The positive impact of assimilating AIRS derived atmospheric temperatures all but vanished when only AIRS stratospheric temperatures were assimilated. Forecast skill resulting from assimilation of AIRS radiances uncontaminated by clouds, instead of AIRS temperature soundings, was only slightly better than that resulting from assimilation of only stratospheric AIRS temperatures. This reduction in forecast skill is most likely the result of a significant loss of tropospheric information when only AIRS radiances unaffected by clouds are used in the data assimilation process.

  387. Analysis of parenchymal patterns using conspicuous spatial frequency features in mammograms applied to the BI-RADS density rating scheme

    NASA Astrophysics Data System (ADS)

    Perconti, Philip; Loew, Murray

    2006-03-01

    Automatic classification of the density of breast parenchyma is shown using a measure that is correlated to human observer performance, and compared against the BI-RADS density rating. Increasingly popular in the United States, the Breast Imaging Reporting and Data System (BI-RADS) is used to draw attention to the increased screening difficulty associated with greater breast density; however, the BI-RADS rating scheme is subjective and is not intended as an objective measure of breast density. So, while popular, BI-RADS does not define density classes using a standardized measure, which leads to increased variability among observers. The adaptive thresholding technique is a more quantitative approach for assessing percentage breast density, but considerable reader interaction is required. We calculate an objective density rating that is derived using a measure of local feature salience. Previously, this measure was shown to correlate well with radiologists' localization and discrimination of true positive and true negative regions-of-interest. Using conspicuous spatial frequency features, an objective density rating is obtained and correlated with adaptive thresholding, and the subjectively ascertained BI-RADS density ratings.
    Using 100 cases obtained from the University of South Florida's DDSM database, we show that an automated breast density measure can be derived that is correlated with the interactive thresholding method for continuous percentage breast density, but not with the BI-RADS density rating categories for the selected cases. Comparison between interactive thresholding and the new salience percentage density resulted in a Pearson correlation of 76.7%. Using a four-category scale equivalent to the BI-RADS density categories, a Spearman correlation coefficient of 79.8% was found.

  388. A physical mechanism for the onset of radial electric fields in magnetically confined plasmas

    NASA Astrophysics Data System (ADS)

    Moleti, A.

    1996-04-01

    A simple physical mechanism is described, which could trigger the Low-mode to High-mode (L-H) transition. The instantaneous ion density profile is significantly modified by a sudden temperature increase, because Larmor radii and banana orbit widths are proportional to thermal velocity. The electric fields that are observed in H-mode plasmas could be produced in the radial region where a large second derivative of the density profile exists, either by strong additional heating or by the heat pulse associated with a sawtooth crash. The L-H transition threshold for the time derivative of the ion temperature is of the order of magnitude of the values that are measured in the outer part of the plasma by electron temperature fast diagnostics at sawtooth crashes. This model agrees with the experimental evidence that L-H transitions are often triggered by a sawtooth crash, and the predicted dependence of the threshold on plasma parameters is fairly consistent with available data.

  389. On the performance of energy detection-based CR with SC diversity over IG channel

    NASA Astrophysics Data System (ADS)

    Verma, Pappu Kumar; Soni, Sanjay Kumar; Jain, Priyanka

    2017-12-01

    Cognitive radio (CR) is a viable 5G technology to address the scarcity of the spectrum. Energy detection-based sensing is known to be the simplest method as far as hardware complexity is concerned. In this paper, the performance of the energy detection-based spectrum sensing technique in CR networks over an inverse Gaussian channel with the selection combining diversity technique is analysed. More specifically, accurate analytical expressions for the average detection probability under different detection scenarios, such as a single channel (no diversity) and diversity reception, are derived and evaluated. Further, the detection threshold parameter is optimised by minimising the probability of error over several diversity branches. The results clearly show the significant improvement in the probability of detection when the optimised threshold parameter is applied. The impact of shadowing parameters on the performance of the energy detector is studied in terms of the complementary receiver operating characteristic curve. To verify the correctness of our analysis, the derived analytical expressions are corroborated via exact results and Monte Carlo simulations.
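    As a rough illustration of how a detection threshold trades false alarms against missed detections in energy-detection sensing, the following Monte Carlo sketch sweeps the threshold and keeps the value minimizing the total error probability. It assumes a plain AWGN channel with a Gaussian signal rather than the inverse Gaussian shadowing and selection combining analysed in the record above, and all parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
N, snr, trials = 32, 1.0, 20_000       # samples per sensing slot, linear SNR, Monte Carlo runs

# Energy statistic under H0 (noise only) and H1 (signal + noise), unit noise power.
noise = rng.normal(size=(trials, N))
signal = rng.normal(scale=np.sqrt(snr), size=(trials, N))
e_h0 = np.sum(noise ** 2, axis=1)
e_h1 = np.sum((noise + signal) ** 2, axis=1)

# Sweep the detection threshold and keep the one minimizing total error probability.
thresholds = np.linspace(e_h0.min(), e_h1.max(), 400)
p_fa = np.array([(e_h0 > t).mean() for t in thresholds])   # false-alarm probability
p_md = np.array([(e_h1 <= t).mean() for t in thresholds])  # missed-detection probability
best = np.argmin(p_fa + p_md)
print(f"threshold ~ {thresholds[best]:.1f}: Pfa = {p_fa[best]:.3f}, Pmd = {p_md[best]:.3f}")
```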
  390. Absolute instability of polaron mode in semiconductor magnetoplasma

    NASA Astrophysics Data System (ADS)

    Paliwal, Ayushi; Dubey, Swati; Ghosh, S.

    2018-01-01

    Using coupled mode theory under the hydrodynamic regime, a compact dispersion relation is derived for the polaron mode in semiconductor magnetoplasma. The propagation and amplification characteristics of the wave are explored in detail. The analysis deals with the behaviour of the anomalous threshold and amplification derived from the dispersion relation, as a function of external parameters like doping concentration and applied magnetic field. The results of this investigation are hoped to be useful in understanding electron-longitudinal optical phonon interplay in polar n-type semiconductor plasmas under the influence of coupled collective cyclotron excitations. The best results in terms of smaller threshold and higher gain of the polaron mode could be achieved by choosing moderate doping concentration in the medium at higher magnetic field. For numerical appreciation of the results, relevant data of the III-V n-GaAs compound semiconductor at 77 K are used. The present study provides a qualitative picture of the polaron mode in a magnetized n-type polar semiconductor medium duly shined by a CO2 laser.

  391. Formulating face verification with semidefinite programming.

    PubMed

    Yan, Shuicheng; Liu, Jianzhuang; Tang, Xiaoou; Huang, Thomas S

    2007-11-01

    This paper presents a unified solution to three unsolved problems existing in face verification with subspace learning techniques: selection of the verification threshold, automatic determination of the subspace dimension, and deducing feature fusing weights. In contrast to previous algorithms which search for the projection matrix directly, our new algorithm investigates a similarity metric matrix (SMM). With a certain verification threshold, this matrix is learned by a semidefinite programming approach, along with the constraints that kindred pairs have similarity larger than the threshold and inhomogeneous pairs have similarity smaller than the threshold. Then, the subspace dimension and the feature fusing weights are simultaneously inferred from the singular value decomposition of the derived SMM. In addition, the weighted and tensor extensions are proposed to further improve the algorithmic effectiveness and efficiency, respectively. Essentially, the verification is conducted within an affine subspace in this new algorithm and is, hence, called the affine subspace for verification (ASV). Extensive experiments show that the ASV can achieve encouraging face verification accuracy in comparison to other subspace algorithms, even without the need to explore any parameters.
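    The record above learns a similarity metric matrix by semidefinite programming, with kindred pairs pushed above a verification threshold and inhomogeneous pairs pushed below it. A schematic version of that idea, using soft hinge penalties instead of the paper's exact constraints and synthetic vectors in place of face features, might look as follows (cvxpy is assumed to be available; this is not the authors' formulation).

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(1)
d, b, margin = 8, 0.0, 1.0            # feature dimension, verification threshold, margin

# Toy pairs: kindred pairs are noisy copies of one vector, inhomogeneous pairs are independent.
kindred = [(v := rng.normal(size=d), v + 0.1 * rng.normal(size=d)) for _ in range(20)]
inhomog = [(rng.normal(size=d), rng.normal(size=d)) for _ in range(20)]

M = cp.Variable((d, d), PSD=True)     # similarity metric matrix, constrained PSD
def sim(x, y):
    return cp.sum(cp.multiply(np.outer(x, y), M))   # x^T M y written with cvxpy atoms

# Hinge penalties keep the problem feasible; hard inequality constraints could be used instead.
loss = sum(cp.pos(b + margin - sim(x, y)) for x, y in kindred) + \
       sum(cp.pos(sim(x, y) - (b - margin)) for x, y in inhomog)
cp.Problem(cp.Minimize(loss + 0.1 * cp.norm(M, "fro"))).solve()
print("similarity of one kindred pair after learning:", round(float(sim(*kindred[0]).value), 2))
```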
  392. Estimating the susceptibility of surface water in Texas to nonpoint-source contamination by use of logistic regression modeling

    USGS Publications Warehouse

    Battaglin, William A.; Ulery, Randy L.; Winterstein, Thomas; Welborn, Toby

    2003-01-01

    In the State of Texas, surface water (streams, canals, and reservoirs) and ground water are used as sources of public water supply. Surface-water sources of public water supply are susceptible to contamination from point and nonpoint sources. To help protect sources of drinking water and to aid water managers in designing protective yet cost-effective and risk-mitigated monitoring strategies, the Texas Commission on Environmental Quality and the U.S. Geological Survey developed procedures to assess the susceptibility of public water-supply source waters in Texas to the occurrence of 227 contaminants. One component of the assessments is the determination of susceptibility of surface-water sources to nonpoint-source contamination. To accomplish this, water-quality data at 323 monitoring sites were matched with geographic information system-derived watershed-characteristic data for the watersheds upstream from the sites. Logistic regression models then were developed to estimate the probability that a particular contaminant will exceed a threshold concentration specified by the Texas Commission on Environmental Quality. Logistic regression models were developed for 63 of the 227 contaminants. Of the remaining contaminants, 106 were not modeled because monitoring data were available at less than 10 percent of the monitoring sites; 29 were not modeled because there were less than 15 percent detections of the contaminant in the monitoring data; 27 were not modeled because of the lack of any monitoring data; and 2 were not modeled because threshold values were not specified.
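    A minimal sketch of the modeling step described above: a logistic regression mapping watershed characteristics to the probability that a contaminant exceeds its threshold concentration. The predictors and data here are synthetic stand-ins, not the USGS/TCEQ variables.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

# Hypothetical watershed characteristics for 323 monitoring sites:
# percent agricultural land, percent urban land, mean annual precipitation (mm).
X = np.column_stack([rng.uniform(0, 80, 323),
                     rng.uniform(0, 50, 323),
                     rng.uniform(300, 1400, 323)])
# 1 if the contaminant exceeded its threshold concentration at the site (synthetic rule).
y = (0.03 * X[:, 0] + 0.05 * X[:, 1] + rng.normal(0, 1, 323) > 2.5).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)
new_site = np.array([[40.0, 10.0, 800.0]])
print("P(exceeds threshold) =", model.predict_proba(new_site)[0, 1])
```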
  393. Gradient-driven flux-tube simulations of ion temperature gradient turbulence close to the non-linear threshold

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peeters, A. G.; Rath, F.; Buchholz, R.

    2016-08-15

    It is shown that ion temperature gradient turbulence close to the threshold exhibits a long time behaviour, with smaller heat fluxes at later times. This reduction is connected with the slow growth of long wave length zonal flows, and consequently, the numerical dissipation on these flows must be sufficiently small. Close to the nonlinear threshold for turbulence generation, a relatively small dissipation can maintain a turbulent state with a sizeable heat flux, through the damping of the zonal flow. Lowering the dissipation causes the turbulence, for temperature gradients close to the threshold, to be subdued. The heat flux then does not go smoothly to zero when the threshold is approached from above. Rather, a finite minimum heat flux is obtained below which no fully developed turbulent state exists. The threshold value of the temperature gradient length at which this finite heat flux is obtained is up to 30% larger compared with the threshold value obtained by extrapolating the heat flux to zero, and the cyclone base case is found to be nonlinearly stable. Transport is subdued when a fully developed staircase structure in the E x B shearing rate forms. Just above the threshold, an incomplete staircase develops, and transport is mediated by avalanche structures which propagate through the marginally stable regions.

  394. Low-mass dark matter search with CDMSlite

    DOE PAGES

    Agnese, R.; Anderson, A. J.; Aralis, T.; ...

    2018-01-01

    The SuperCDMS experiment is designed to directly detect WIMPs (Weakly Interacting Massive Particles) that may constitute the dark matter in our galaxy. During its operation at the Soudan Underground Laboratory, germanium detectors were run in the CDMSlite (Cryogenic Dark Matter Search low ionization threshold experiment) mode to gather data sets with sensitivity specifically for WIMPs with masses below 10 GeV/c^2. In this mode, a large detector-bias voltage is applied to amplify the phonon signals produced by drifting charges. This paper presents studies of the experimental noise and its effect on the achievable energy threshold, which is demonstrated to be as low as 56 eVee (electron equivalent energy). The detector biasing configuration is described in detail, with analysis corrections for voltage variations to the level of a few percent. Detailed studies of the electric-field geometry, and the resulting successful development of a fiducial parameter, eliminate poorly measured events, yielding an energy resolution ranging from ~9 eVee at 0 keV to 101 eVee at ~10 keVee. New results are derived for astrophysical uncertainties relevant to the WIMP-search limits, specifically examining how they are affected by variations in the most probable WIMP velocity and the galactic escape velocity. These variations become more important for WIMP masses below 10 GeV/c^2.
    Finally, new limits on spin-dependent low-mass WIMP-nucleon interactions are derived, with new parameter space excluded for WIMP masses of roughly 3 GeV/c^2 and below.

  395. Thresholds of glacier hydrologic change and emergent vulnerabilities in a tropical Andean waterscape

    NASA Astrophysics Data System (ADS)

    Mark, B. G.; Bury, J.; Carey, M.; McKenzie, J. M.; Huh, K. I.; Baraer, M.; Eddy, A.

    2011-12-01

    Over the past 50+ years, dramatic glacier mass loss in the Cordillera Blanca, Peru, has been causing downstream hydrologic transformations, with implications for domestic, agricultural and industrial water resources. Coincidental expansion of social and economic development throughout the Santa River watershed, which drains into the Pacific Ocean, raises concerns about sustaining future water supplies. Hydrologic models predict water shortages decades in the future, but conflicts have already arisen in the watershed due to either real or perceived shortages. Moreover, increased water usage since 1950 suggests resilience to presumed thresholds given a concomitant decrease in supply. Therefore modeled thresholds do not align well with historical realities. Our collaborative research couples multiscalar observations of changes in glacier volume, hydrology, and land usage with social and economic data about perceptions of and responses to environmental change. We also examine various water withdrawal mechanisms and institutions transecting the entire watershed: agriculture, land use, irrigation, hydroelectricity generation, and mining. We quantify glacier volume loss using multi-temporal surface elevation maps of selected valley glaciers based on state-of-the-art laser altimetry (LIDAR), ASTER satellite imagery and aerial photogrammetry spanning 1962 to 2008. Results show glacier surface area loss is between 30% and 86%, while measured volume loss is 2 to 12 times greater than empirically derived scaling relationships predict. Based on historical runoff and glacier data, the upper Santa River watershed is found to be on the descending limb of a conceptual multi-decadal hydrograph. The actual distribution of dry season water supply is illustrated based on a 2011 synoptic survey of Santa River discharge from the coastal effluent to the headwaters. Our results suggest that critical changes in glacier volume and water supply are not perceived or acknowledged consistently amongst different water users, with the largest contrast along a vertical gradient. We argue that societal forces (e.g.
    political-economic circumstances, development trends, institutions, urbanization, cultural values, technological innovation, and social relations) have played vital roles in shaping water demands that have allowed water managers to avert perceived thresholds.

  396. Predicting successful long-term weight loss from short-term weight-loss outcomes: new insights from a dynamic energy balance model (the POUNDS Lost study)

    PubMed Central

    Ivanescu, Andrada E; Martin, Corby K; Heymsfield, Steven B; Marshall, Kaitlyn; Bodrato, Victoria E; Williamson, Donald A; Anton, Stephen D; Sacks, Frank M; Ryan, Donna; Bray, George A

    2015-01-01

    Background: Currently, early weight-loss predictions of long-term weight-loss success rely on fixed percent-weight-loss thresholds. Objective: The objective was to develop thresholds during the first 3 mo of intervention that include the influence of age, sex, baseline weight, percent weight loss, and deviations from expected weight to predict whether a participant is likely to lose 5% or more body weight by year 1. Design: Data consisting of month 1, 2, 3, and 12 treatment weights were obtained from the 2-y Preventing Obesity Using Novel Dietary Strategies (POUNDS Lost) intervention. Logistic regression models that included covariates of age, height, sex, baseline weight, target energy intake, percent weight loss, and deviation of actual weight from expected were developed for months 1, 2, and 3 that predicted the probability of losing <5% of body weight in 1 y. Receiver operating characteristic (ROC) curves, area under the curve (AUC), and thresholds were calculated for each model. The AUC statistic quantified the ROC curve's capacity to classify participants likely to lose <5% of their body weight at the end of 1 y. The models yielding the highest AUC were retained as optimal. For comparison with current practice, ROC curves relying solely on percent weight loss were also calculated. Results: Optimal models for months 1, 2, and 3 yielded ROC curves with AUCs of 0.68 (95% CI: 0.63, 0.74), 0.75 (95% CI: 0.71, 0.81), and 0.79 (95% CI: 0.74, 0.84), respectively. Percent weight loss alone was not better at identifying true positives than random chance (AUC <=0.50). Conclusions: The newly derived models provide a personalized prediction of long-term success from early weight-loss variables. The predictions improve on existing fixed percent-weight-loss thresholds. Future research is needed to explore model application for informing treatment approaches during early intervention. The POUNDS Lost study was registered at clinicaltrials.gov as NCT00072995.

    PMID:25733628
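    The prediction machinery in the record above is an ordinary logistic model evaluated with ROC curves. The sketch below reproduces that workflow on synthetic data, including one common way (Youden's J) to turn fitted probabilities into a decision threshold; the covariates and effect sizes are invented for illustration and are not the study's fitted model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(3)
n = 300

# Hypothetical month-3 predictors: age (y), sex (0/1), baseline weight (kg),
# percent weight loss at month 3, deviation from model-expected weight (kg).
X = np.column_stack([rng.uniform(25, 65, n), rng.integers(0, 2, n),
                     rng.uniform(70, 130, n), rng.uniform(-1, 8, n),
                     rng.normal(0, 2, n)])
# Outcome: 1 = lost less than 5% of body weight at one year (synthetic rule).
y = (0.5 * X[:, 3] + 0.4 * X[:, 4] + rng.normal(0, 1.2, n) < 1.5).astype(int)

clf = LogisticRegression(max_iter=2000).fit(X, y)
prob = clf.predict_proba(X)[:, 1]
print("AUC:", round(roc_auc_score(y, prob), 3))

fpr, tpr, thr = roc_curve(y, prob)
best = np.argmax(tpr - fpr)                 # Youden's J: one way to pick a decision cutoff
print("suggested probability cutoff:", round(thr[best], 3))
```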
  397. Use of the Threshold of Toxicological Concern (TTC) approach for deriving target values for drinking water contaminants.

    PubMed

    Mons, M N; Heringa, M B; van Genderen, J; Puijker, L M; Brand, W; van Leeuwen, C J; Stoks, P; van der Hoek, J P; van der Kooij, D

    2013-03-15

    Ongoing pollution and improving analytical techniques reveal more and more anthropogenic substances in drinking water sources, and incidentally in treated water as well. In fact, complete absence of any trace pollutant in treated drinking water is an illusion as current analytical techniques are capable of detecting very low concentrations. Most of the substances detected lack toxicity data to derive safe levels and have not yet been regulated. Although the concentrations in treated water usually do not have adverse health effects, their presence is still undesired because of customer perception. This leads to the question how sensitive analytical methods need to become for water quality screening, at what levels water suppliers need to take action and how effective treatment methods need to be designed to remove contaminants sufficiently. Therefore, in the Netherlands a clear and consistent approach called 'Drinking Water Quality for the 21st century (Q21)' has been developed within the joint research program of the drinking water companies. Target values for anthropogenic drinking water contaminants were derived by using the recently introduced Threshold of Toxicological Concern (TTC) approach. The target values for individual genotoxic and steroid endocrine chemicals were set at 0.01 ug/L. For all other organic chemicals the target values were set at 0.1 ug/L. The target values for the total sum of genotoxic chemicals, the total sum of steroid hormones and the total sum of all other organic compounds were set at 0.01, 0.01 and 1.0 ug/L, respectively. The Dutch Q21 approach is further supplemented by the standstill-principle and effect-directed testing. The approach is helpful in defining the goals and limits of future treatment process designs and of analytical methods to further improve and ensure the quality of drinking water, without going to unnecessary extents. Copyright © 2013 Elsevier Ltd. All rights reserved.

  398. Macroscopic response in active nonlinear photonic crystals.

    PubMed

    Alagappan, Gandhi; John, Sajeev; Li, Er Ping

    2013-09-15

    We derive macroscopic equations of motion for the slowly varying electric field amplitude in three-dimensional active nonlinear optical nanostructures. We show that the microscopic Maxwell equations and polarization dynamics can be simplified to a macroscopic one-dimensional problem in the direction of group velocity.
    For a three-level active material, we derive the steady-state equations for the normal mode frequency, threshold pumping, nonlinear Bloch mode amplitude, and lasing in photonic crystals. Our analytical results accurately recapture the results of exact numerical methods.

  399. Relations among pure-tone sound stimuli, neural activity, and the loudness sensation

    NASA Technical Reports Server (NTRS)

    Howes, W. L.

    1972-01-01

    Both the physiological and psychological responses to pure-tone sound stimuli are used to derive formulas which: (1) relate the loudness, loudness level, and sound-pressure level of pure tones; (2) apply continuously over most of the acoustic regime, including the loudness threshold; and (3) contain no undetermined coefficients. Some of the formulas are fundamental for calculating the loudness of any sound. Power-law formulas relating the pure-tone sound stimulus, neural activity, and loudness are derived from published data.

  400. Model for Estimating the Threshold Mechanical Stability of Structural Cartilage Grafts Used in Rhinoplasty

    PubMed Central

    Zemek, Allison; Garg, Rohit; Wong, Brian J. F.

    2014-01-01

    Objectives/Hypothesis: Characterizing the mechanical properties of structural cartilage grafts used in rhinoplasty is valuable because softer engineered tissues are more time- and cost-efficient to manufacture. The aim of this study is to quantitatively identify the threshold mechanical stability (e.g., Young's modulus) of columellar, L-strut, and alar cartilage replacement grafts. Study Design: Descriptive, focus group survey. Methods: Ten mechanical phantoms of identical size (5 x 20 x 2.3 mm) and varying stiffness (0.360 to 0.85 MPa in 0.05 MPa increments) were made from urethane. A focus group of experienced rhinoplasty surgeons (n = 25, 5 to 30 years in practice) was asked to arrange the phantoms in order of increasing stiffness. Then, they were asked to identify the minimum acceptable stiffness that would still result in favorable surgical outcomes for three clinical applications: columellar, L-strut, and lateral crural replacement grafts. Available surgeons were tested again after 1 week to evaluate intra-rater consistency. Results: For each surgeon, the threshold stiffness for each clinical application differed from the threshold values derived by logistic regression by no more than 0.05 MPa (accuracy to within 10%). Specific thresholds were 0.56, 0.59, and 0.49 MPa for columellar, L-strut, and alar grafts, respectively. For comparison, human nasal septal cartilage is approximately 0.8 MPa. Conclusions: There was little inter- and intra-rater variation of the identified threshold values for adequate graft stiffness. The identified threshold values will be useful for the design of tissue-engineered or semisynthetic cartilage grafts for use in structural nasal surgery. PMID:20513022
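    The threshold-stiffness estimate in the record above comes from a logistic regression of acceptability judgments on phantom stiffness; the threshold is the stiffness at which the fitted probability of acceptability crosses 0.5. A toy version, with a hypothetical 10-phantom grid and made-up votes rather than the study's data, is sketched below.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical 10-phantom stiffness grid (MPa) and pooled surgeon votes
# (1 = acceptable for an L-strut graft); both are illustrative assumptions.
stiffness = np.round(np.arange(0.40, 0.90, 0.05), 2)            # 0.40, 0.45, ..., 0.85
votes = np.array([0, 0, 0, 0, 1, 1, 1, 1, 1, 1])

clf = LogisticRegression().fit(stiffness.reshape(-1, 1), votes)

# Threshold stiffness = point where the fitted acceptability probability crosses 0.5,
# i.e. where the linear predictor b0 + b1 * x equals zero.
b0, b1 = clf.intercept_[0], clf.coef_[0, 0]
print(f"estimated threshold stiffness ~ {-b0 / b1:.2f} MPa")
```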
  401. Comparison of two insulin assays for first-phase insulin release in type 1 diabetes prediction and prevention studies.

    PubMed

    Mahon, Jeffrey L; Beam, Craig A; Marcovina, Santica M; Boulware, David C; Palmer, Jerry P; Winter, William E; Skyler, Jay S; Krischer, Jeffrey P

    2011-11-20

    Detection of below-threshold first-phase insulin release, or FPIR (1+3 minute insulin concentrations during an intravenous glucose tolerance test [IVGTT]), is important in type 1 diabetes prediction and prevention studies including the TrialNet Oral Insulin Prevention Trial. We assessed whether an insulin immunoenzymometric assay (IEMA) could replace the less practical but current standard of a radioimmunoassay (RIA) for FPIR. One hundred thirty-three islet autoantibody positive relatives of persons with type 1 diabetes underwent 161 IVGTTs. Insulin concentrations were measured by both assays in 1056 paired samples. A rule classifying FPIR (below-threshold, above-threshold, uncertain) by the IEMA was derived and validated against FPIR by the RIA. The insulin IEMA-based rule accurately classified below- and above-threshold FPIRs by the RIA in 110/161 (68%) IVGTTs, but was uncertain in 51/161 (32%) tests, for which FPIR by RIA is needed. An uncertain FPIR by the IEMA was more likely among below-threshold vs above-threshold FPIRs by the RIA (64% [30/47] vs. 18% [21/114], respectively; p<0.05). An insulin IEMA for FPIR in subjects at risk for type 1 diabetes accurately determined below- and above-threshold FPIRs in 2/3 of tests relative to the current standard of the insulin RIA, but could not reliably classify the remaining FPIRs. TrialNet is limiting the insulin RIA for FPIR to the latter given the practical advantages of the more specific IEMA. Copyright © 2011 Elsevier B.V.
    All rights reserved.

  402. Pay Me Now or Pay Me More Later: Start the Development of Active Orbital Debris Removal Now

    NASA Astrophysics Data System (ADS)

    McKnight, D.

    2010-09-01

    The objective of this paper is to examine when the aerospace community should proceed to develop and deploy active debris removal solutions. A two-prong approach is taken to examine both (1) operational hazard thresholds and (2) economic triggers. Research in the paper reinforces work by previous investigators showing that a hazard metric, and an appropriate threshold for that metric that triggers an imperative to implement active debris removal options, is difficult to formulate accurately. A new operational hazard threshold defined by the doubling of the "lethal" debris environment, coupled with the threshold that would affect insurance premiums, is disclosed for the first time. The doubling of the lethal hazard at 850 km and the annual probability of collision in the 650-1000 km region may both occur as early as 2035. A simple static (i.e. no temporal dimension) economic threshold is derived that provides the clearest indicator that active debris removal solutions development and deployment should start immediately. This straightforward observation is based on the fact that it will always be at least an order of magnitude less expensive, quicker to execute, and operationally beneficial to remove mass from orbit as one large (several thousand kilograms) object rather than as the result of tens of thousands of fragments that would be produced from a catastrophic collision. Additionally, the ratio of lethal fragments to trackable objects is only ~1,000x yet there is a need for the collection efficiency to be ~10,000x, so "sweeping" of lethal fragments is not viable. The practicality of large object removal is tempered by the observation that one may have to remove ~10-50x derelict objects to prevent a single collision. This fact forces the imperative that removal needs to start now due to the delays that will be necessary not only to perfect/deploy approaches to debris removal and establish supporting policies/regulations but also because of the time it takes for the actions to reap benefits. Additionally, if the lethal hazard grows faster than anticipated it may be necessary to replace some satellites, execute large object removal, and perform medium debris (i.e. lethal fragments) sweeping operations. The sooner the community starts to remove large derelict objects, the more likely satellite damage will be minimized and the less likely that medium debris sweeping will have to be implemented. While the research is focused on starting debris removal, the ensemble of observations reinforces the need to continue to push for as close to 100% compliance with debris mitigation guidelines as possible. This analysis is unique in its pragmatic application of advanced probability concepts, merging of space hazard assessments with space insurance thresholds, and the use of general risk management concepts on the orbital debris hazard control process.
    It is hoped that this paper provides an impetus for spacefaring organizations to start to actively pursue development and deployment of debris removal solutions and policies.

  403. [The analysis of threshold effect using Empower Stats software].

    PubMed

    Lin, Lin; Chen, Chang-zhong; Yu, Xiao-dan

    2013-11-01

    In many biomedical studies of the influence of factors on an outcome variable, a factor may have no influence, or a positive effect, within a certain range; once a certain threshold value is exceeded, the size and/or direction of the effect changes. This is called a threshold effect. Whether there is a threshold effect of a factor (x) on the outcome variable (y) can be examined by fitting a smooth curve and checking whether a piecewise linear relationship is present. A segmented regression model, a likelihood ratio test (LRT) and bootstrap resampling can then be used to analyse the threshold effect. Empower Stats software, developed by the American company X & Y Solutions Inc, has a threshold effect analysis module. The user can either input a threshold value at which to segment the data, or leave the threshold unspecified and let the software determine the optimal threshold automatically from the data and calculate the threshold confidence interval.
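    A threshold (segmented-regression) analysis of the kind described above can be prototyped with a broken-stick model whose breakpoint is found by grid search, followed by a likelihood-ratio-type comparison against a single straight line; bootstrapping the same procedure would give a confidence interval for the breakpoint. The sketch below uses synthetic data and is not tied to the Empower Stats implementation.

```python
import numpy as np

rng = np.random.default_rng(4)
x = np.sort(rng.uniform(0, 10, 200))
y = np.where(x < 6, 0.2 * x, 0.2 * 6 + 1.5 * (x - 6)) + rng.normal(0, 0.3, 200)

def two_segment_sse(x, y, k):
    """SSE of a continuous broken-stick fit y ~ 1 + x + (x - k)_+ with breakpoint k."""
    X = np.column_stack([np.ones_like(x), x, np.clip(x - k, 0.0, None)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(np.sum((y - X @ beta) ** 2))

candidates = np.linspace(1, 9, 161)
sse = np.array([two_segment_sse(x, y, k) for k in candidates])
k_hat = candidates[np.argmin(sse)]
print(f"estimated threshold (breakpoint): {k_hat:.2f}")          # expect a value near 6

# Likelihood-ratio-type comparison against a single straight line (no threshold):
sse_line = two_segment_sse(x, y, k=x.max())   # hinge term vanishes -> plain linear fit
lrt = len(x) * np.log(sse_line / sse.min())
print(f"LRT-style statistic: {lrt:.1f}")
```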
  404. Controlling marginally detached divertor plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eldon, David; Kolemen, Egemen; Barton, Joseph L.

    A new control system at DIII-D has stabilized the inter-ELM detached divertor plasma state for H-mode in close proximity to the threshold for reattachment, thus demonstrating the ability to maintain detachment with minimal gas puffing. When the same control system was instead ordered to hold the plasma at the threshold (here defined as Te = 5 eV near the divertor target plate), the resulting Te profiles separated into two groups, with one group consistent with marginal detachment and the other with marginal attachment. The plasma dithers between the attached and detached states when the control system attempts to hold at the threshold. The control system is upgraded from the one described in earlier work and handles ELMing plasmas by using real-time D-alpha measurements to remove during-ELM slices from real-time Te measurements derived from divertor Thomson scattering. The difference between measured and requested inter-ELM Te is passed to a PID (proportional-integral-derivative) controller to determine gas puff commands. While some degree of detachment is essential for the health of ITER's divertor, more deeply detached plasmas have greater radiative losses and, at the extreme, confinement degradation, making it desirable to limit detachment to the minimum level needed to protect the target plate. However, the observed bifurcation in plasma conditions at the outer strike point with the ion B x ...

  405. An n-material thresholding method for improving integerness of solutions in topology optimization

    DOE PAGES

    Watts, Seth; Tortorelli, Daniel A.

    2016-04-10

    It is common in solving topology optimization problems to replace an integer-valued characteristic function design field with the material volume fraction field, a real-valued approximation of the design field that permits "fictitious" mixtures of materials during intermediate iterations in the optimization process. This is reasonable so long as one can interpolate properties for such materials and so long as the final design is integer valued. For this purpose, we present a method for smoothly thresholding the volume fractions of an arbitrary number of material phases which specify the design. This method is trivial for two-material design problems, for example, the canonical topology design problem of specifying the presence or absence of a single material within a domain, but it becomes more complex when three or more materials are used, as often occurs in material design problems. We take advantage of the similarity in properties between the volume fractions and the barycentric coordinates on a simplex to derive a thresholding method which is applicable to an arbitrary number of materials. As we show in a sensitivity analysis, this method has smooth derivatives, allowing it to be used in gradient-based optimization algorithms. Finally, we present results, which show synergistic effects when used with Solid Isotropic Material with Penalty and Rational Approximation of Material Properties material interpolation functions, popular methods of ensuring integerness of solutions.
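    One simple way to smoothly push n-material volume fractions toward integer designs, while keeping them on the simplex and differentiable, is a power-and-renormalize map. This is only a stand-in illustration of the general idea, not the thresholding function derived in the record above.

```python
import numpy as np

def sharpen(volume_fractions: np.ndarray, beta: float = 8.0) -> np.ndarray:
    """Smoothly push points on the material simplex toward a vertex.

    Each row holds the volume fractions of n candidate materials (non-negative and
    summing to one, i.e. barycentric coordinates). Raising them to a power and
    renormalizing keeps the result on the simplex, is differentiable for positive
    fractions, and approaches a 0/1 indicator as beta grows.
    """
    p = np.power(volume_fractions, beta)
    return p / p.sum(axis=-1, keepdims=True)

mix = np.array([[0.50, 0.30, 0.20],    # ambiguous three-material mixture
                [0.85, 0.10, 0.05]])   # nearly pure material 0
print(sharpen(mix, beta=1.0))          # unchanged
print(sharpen(mix, beta=8.0))          # close to an integer-valued design
```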
  406. Modelling of extreme rainfall events in Peninsular Malaysia based on annual maximum and partial duration series

    NASA Astrophysics Data System (ADS)

    Zin, Wan Zawiah Wan; Shinyie, Wendy Ling; Jemain, Abdul Aziz

    2015-02-01

    In this study, two series of data for extreme rainfall events are generated based on Annual Maximum and Partial Duration Methods, derived from 102 rain-gauge stations in Peninsular Malaysia from 1982-2012. To determine the optimal threshold for each station, several requirements must be satisfied, and the Adapted Hill estimator is employed for this purpose. A semi-parametric bootstrap is then used to estimate the mean square error (MSE) of the estimator at each threshold and the optimal threshold is selected based on the smallest MSE. The mean annual frequency is also checked to ensure that it lies in the range of one to five and the resulting data are also de-clustered to ensure independence. The two data series are then fitted to the Generalized Extreme Value and Generalized Pareto distributions for the annual maximum and partial duration series, respectively. The parameter estimation methods used are the Maximum Likelihood and the L-moment methods. Two goodness of fit tests are then used to evaluate the best-fitted distribution. The results showed that the Partial Duration series with the Generalized Pareto distribution and Maximum Likelihood parameter estimation provides the best representation for extreme rainfall events in Peninsular Malaysia for the majority of the stations studied. Based on these findings, several return values are also derived and spatial maps are constructed to identify the distribution characteristics of extreme rainfall in Peninsular Malaysia.
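    For the partial-duration (peaks-over-threshold) branch described above, the core computation is a maximum-likelihood Generalized Pareto fit to the excesses over a chosen threshold, from which return levels follow. A compact sketch with synthetic rainfall is shown below; the threshold here is simply a high quantile, whereas the study selects it by minimizing a bootstrap MSE.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
daily_rain = rng.gamma(shape=0.4, scale=12.0, size=30 * 365)   # synthetic daily rainfall (mm)

threshold = np.quantile(daily_rain, 0.98)                  # candidate threshold (high quantile)
excesses = daily_rain[daily_rain > threshold] - threshold  # peaks-over-threshold excesses

# Maximum-likelihood Generalized Pareto fit to the threshold excesses (location fixed at 0).
shape, loc, scale = stats.genpareto.fit(excesses, floc=0)

# T-year return level for a peaks-over-threshold model with lam exceedances per year.
lam = len(excesses) / 30.0
T = 50
return_level = threshold + stats.genpareto.ppf(1 - 1 / (lam * T), shape, loc=0, scale=scale)
print(f"xi = {shape:.2f}, sigma = {scale:.1f}, {T}-year return level ~ {return_level:.0f} mm")
```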
  407. Convergence between DSM-IV-TR and DSM-5 diagnostic models for personality disorder: evaluation of strategies for establishing diagnostic thresholds.

    PubMed

    Morey, Leslie C; Skodol, Andrew E

    2013-05-01

    The Personality and Personality Disorders Work Group for the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5) recommended substantial revisions to the personality disorders (PDs) section of DSM-IV-TR, proposing a hybrid categorical-dimensional model that represented PDs as combinations of core personality dysfunctions and various configurations of maladaptive personality traits. Although the DSM-5 Task Force endorsed the proposal, the Board of Trustees of the American Psychiatric Association (APA) did not, placing the Work Group's model in DSM-5 Section III ("Emerging Measures and Models") with other concepts thought to be in need of additional research. This paper documents the impact of using this alternative model in a national sample of 337 patients as described by clinicians familiar with their cases. In particular, the analyses focus on alternative strategies considered by the Work Group for deriving decision rules, or diagnostic thresholds, with which to assign categorical diagnoses. Results demonstrate that diagnostic rules could be derived that yielded appreciable correspondence between DSM-IV-TR and proposed DSM-5 PD diagnoses, a correspondence greater than that observed in the transition between DSM-III and DSM-III-R PDs. The approach also represents the most comprehensive attempt to date to provide conceptual and empirical justification for the diagnostic thresholds utilized within the DSM PDs.

  408. Estimating economic thresholds for site-specific weed control using manual weed counts and sensor technology: an example based on three winter wheat trials.

    PubMed

    Keller, Martina; Gutjahr, Christoph; Möhring, Jens; Weis, Martin; Sökefeld, Markus; Gerhards, Roland

    2014-02-01

    Precision experimental design uses the natural heterogeneity of agricultural fields and combines sensor technology with linear mixed models to estimate the effect of weeds, soil properties and herbicide on yield. These estimates can be used to derive economic thresholds. Three field trials are presented using the precision experimental design in winter wheat. Weed densities were determined by manual sampling and bi-spectral cameras; yield and soil properties were mapped. Galium aparine, other broad-leaved weeds and Alopecurus myosuroides reduced yield by 17.5, 1.2 and 12.4 kg ha^-1 per plant m^-2 in one trial. The determined thresholds for site-specific weed control with independently applied herbicides were 4, 48 and 12 plants m^-2, respectively. Spring drought reduced the yield effects of weeds considerably in one trial, since water became yield limiting. A negative herbicide effect on the crop was negligible, except in one trial, in which the herbicide mixture tended to reduce yield by 0.6 t ha^-1. Bi-spectral cameras for weed counting were of limited use and still need improvement. Nevertheless, large weed patches were correctly identified. The current paper presents a new approach to conducting field trials and deriving decision rules for weed control in farmers' fields. © 2013 Society of Chemical Industry.
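    An economic threshold of the sort derived above balances the cost of treatment against the value of the yield saved. A back-of-envelope version of that calculation is sketched below; the cost, price, and efficacy figures are hypothetical, and the paper itself derives thresholds from mixed-model yield-loss estimates rather than this simple formula.

```python
# Back-of-envelope economic threshold: treat a site when the value of the yield saved
# exceeds the treatment cost, i.e. density * loss_per_plant * price * efficacy > cost.

def economic_threshold(cost_eur_per_ha: float, loss_kg_per_plant_m2: float,
                       grain_price_eur_per_kg: float, efficacy: float = 0.9) -> float:
    """Weed density (plants per m^2) above which site-specific treatment pays off."""
    return cost_eur_per_ha / (loss_kg_per_plant_m2 * grain_price_eur_per_kg * efficacy)

# Hypothetical figures for Galium aparine: 17.5 kg/ha yield loss per plant/m^2 (from the
# trial above), 30 EUR/ha herbicide plus application, 0.18 EUR/kg wheat, 90% efficacy.
print(economic_threshold(30.0, 17.5, 0.18))   # roughly 10 plants per m^2
```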
  409. Trial-dependent psychometric functions accounting for perceptual learning in 2-AFC discrimination tasks.

    PubMed

    Kattner, Florian; Cochrane, Aaron; Green, C Shawn

    2017-09-01

    The majority of theoretical models of learning consider learning to be a continuous function of experience. However, most perceptual learning studies use thresholds estimated by fitting psychometric functions to independent blocks, sometimes then fitting a parametric function to these block-wise estimated thresholds. Critically, such approaches tend to violate the basic principle that learning is continuous through time (e.g., by aggregating trials into large "blocks" for analysis that each assume stationarity, then fitting learning functions to these aggregated blocks). To address this discrepancy between base theory and analysis practice, here we instead propose fitting a parametric function to thresholds from each individual trial. In particular, we implemented a dynamic psychometric function whose parameters were allowed to change continuously with each trial, thus parameterizing nonstationarity. We fit the resulting continuous time parametric model to data from two different perceptual learning tasks. In nearly every case, the quality of the fits derived from the continuous time parametric model outperformed the fits derived from a nonparametric approach wherein separate psychometric functions were fit to blocks of trials. Because such a continuous trial-dependent model of perceptual learning also offers a number of additional advantages (e.g., the ability to extrapolate beyond the observed data; the ability to estimate performance on individual critical trials), we suggest that this technique would be a useful addition to each psychophysicist's analysis toolkit.
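    A minimal version of a trial-dependent psychometric function can be built by letting the threshold parameter decay smoothly across trials and fitting all parameters at once by maximum likelihood, rather than fitting separate functions to blocks. The sketch below does this for a synthetic 2-AFC data set; the exponential form and parameter values are assumptions, not the authors' model.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(6)
n_trials = 600
trial = np.arange(n_trials)
intensity = rng.uniform(0.1, 3.0, n_trials)        # stimulus levels (arbitrary units)

def p_correct(intensity, trial, a0, a_inf, tau, slope):
    """2-AFC probability correct with a threshold that decays exponentially with practice."""
    thresh = a_inf + (a0 - a_inf) * np.exp(-trial / tau)
    return 0.5 + 0.5 * norm.cdf((intensity - thresh) / slope)   # floor at chance (0.5)

true_params = (2.0, 0.8, 150.0, 0.4)
correct = rng.random(n_trials) < p_correct(intensity, trial, *true_params)

def neg_log_lik(params):
    p = np.clip(p_correct(intensity, trial, *params), 1e-6, 1 - 1e-6)
    return -np.sum(np.where(correct, np.log(p), np.log(1 - p)))

fit = minimize(neg_log_lik, x0=[1.5, 1.0, 100.0, 0.5], method="Nelder-Mead")
print("recovered (a0, a_inf, tau, slope):", np.round(fit.x, 2))
```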
Modeling of Gate Bias Modulation in Carbon Nanotube Field-Effect-Transistors

NASA Technical Reports Server (NTRS)

Yamada, Toshishige; Biegel, Bryan (Technical Monitor)

2002-01-01

The threshold voltages of a carbon nanotube (CNT) field-effect transistor (FET) are derived and compared with those of metal-oxide-semiconductor (MOS) FETs. The CNT channel is so thin that there is no voltage drop perpendicular to the gate electrode plane (the CNT diameter direction), and this makes the CNTFET characteristics quite different from those of MOSFETs. The relation between the voltage and the electrochemical potentials, and the mass action law for electrons and holes, are examined in the context of CNTs, and it is shown that the familiar relations are still valid because of the macroscopic number of states available in the CNTs. This is in sharp contrast to the case of quantum dots. Using these relations, we derive an inversion threshold voltage V_Ti and an accumulation threshold voltage V_Ta as a function of the Fermi level E_F in the channel, where E_F is a measure of channel doping. V_Ti of the CNTFET has a much stronger E_F dependence than that of the MOSFET, while V_Ta of both CNTFETs and MOSFETs depends quite weakly on E_F, with the same functional form. This means that the transition from normally-off to normally-on mode as the doping increases is much sharper in CNTFETs, a property that has to be taken into account in circuit design.

Controlling marginally detached divertor plasmas

DOE PAGES

Eldon, David; Kolemen, Egemen; Barton, Joseph L.; ...

2017-05-04

A new control system at DIII-D has stabilized the inter-ELM detached divertor plasma state for H-mode in close proximity to the threshold for reattachment, thus demonstrating the ability to maintain detachment with minimal gas puffing. When the same control system was instead ordered to hold the plasma at the threshold (here defined as T_e = 5 eV near the divertor target plate), the resulting T_e profiles separated into two groups, one consistent with marginal detachment and the other with marginal attachment. The plasma dithers between the attached and detached states when the control system attempts to hold at the threshold. The control system, upgraded from an earlier version, handles ELMing plasmas by using real-time Dα measurements to remove during-ELM slices from real-time T_e measurements derived from divertor Thomson scattering. The difference between measured and requested inter-ELM T_e is passed to a PID (proportional-integral-derivative) controller to determine gas puff commands. While some degree of detachment is essential for the health of ITER's divertor, more deeply detached plasmas have greater radiative losses and, at the extreme, confinement degradation, making it desirable to limit detachment to the minimum level needed to protect the target plate. However, the observed bifurcation in plasma conditions at the outer strike point with the ion B × …

Large signal-to-noise ratio quantification in MLE for ARARMAX models

NASA Astrophysics Data System (ADS)

Zou, Yiqun; Tang, Xiafei

2014-06-01

It has been shown that closed-loop linear system identification by the indirect method can generally be transferred to open-loop ARARMAX (AutoRegressive AutoRegressive Moving Average with eXogenous input) estimation. For such models, gradient-related optimisation with a large enough signal-to-noise ratio (SNR) can avoid the potential local convergence in maximum likelihood estimation. To ease the application of this condition, the threshold SNR needs to be quantified. In this paper, we build the amplitude coefficient, which is an equivalent of the SNR, and prove the finiteness of the threshold amplitude coefficient within the stability region. The quantification of the threshold is achieved by the minimisation of an elaborately designed multi-variable cost function which unifies all the restrictions on the amplitude coefficient.
The corresponding algorithm, based on two sets of physically realisable system input-output data, details the minimisation and also shows how to use the gradient-related method to estimate ARARMAX parameters when a local minimum is present because the SNR is small. The algorithm is then tested on a theoretical AutoRegressive Moving Average with eXogenous input model for the derivation of the threshold, and on a real gas turbine engine system for model identification. Finally, the graphical validation of the threshold on a two-dimensional plot is discussed.

Tree mortality predicted from drought-induced vascular damage

USGS Publications Warehouse

Anderegg, William R.L.; Flint, Alan L.; Huang, Cho-ying; Flint, Lorraine E.; Berry, Joseph A.; Davis, Frank W.; Sperry, John S.; Field, Christopher B.

2015-01-01

The projected responses of forest ecosystems to warming and drying associated with twenty-first-century climate change vary widely, from resiliency to widespread tree mortality [1-3]. Current vegetation models lack the ability to account for mortality of overstorey trees during extreme drought owing to uncertainties in the mechanisms and thresholds causing mortality [4, 5]. Here we assess the causes of tree mortality, using field measurements of branch hydraulic conductivity during ongoing mortality in Populus tremuloides in the southwestern United States and a detailed plant hydraulics model. We identify a lethal plant water stress threshold that corresponds with a loss of vascular transport capacity from air entry into the xylem. We then use this hydraulic-based threshold to simulate forest dieback during historical drought, and compare predictions against three independent mortality data sets. The hydraulic threshold predicted regional patterns of tree mortality with 75% accuracy, as found in field plots and mortality maps derived from Landsat imagery. In a high-emissions scenario, climate models project that drought stress will exceed the observed mortality threshold in the southwestern United States by the 2050s. Our approach provides a powerful and tractable way of incorporating tree mortality into vegetation models to resolve uncertainty over the fate of forest ecosystems in a changing climate.

New method to evaluate the 7Li(p, n)7Be reaction near threshold

NASA Astrophysics Data System (ADS)

Herrera, María S.; Moreno, Gustavo A.; Kreiner, Andrés J.

2015-04-01

In this work a complete description of the 7Li(p, n)7Be reaction near threshold is given using center-of-mass and relative coordinates. It is shown that this standard approach, not used before in this context, leads to a simple mathematical representation which gives easy access to all relevant quantities in the reaction and allows a precise numerical implementation.
It also allows proton beam-energy spread effects to be included in a simple way. The method, implemented as a C++ code, was validated against both numerical and experimental data, finding good agreement. This tool is also used here to analyze scattered published measurements such as (p, n) cross sections and differential and total neutron yields for thick targets. Using these data we derive a consistent set of parameters to evaluate neutron production near threshold. The sensitivity of the results to data uncertainty and the possibility of incorporating new measurements are also discussed.

Self ordering threshold and superradiant backscattering to slow a fast gas beam in a ring cavity with counter propagating pump

NASA Astrophysics Data System (ADS)

Maes, C.; Asbóth, J. K.; Ritsch, H.

2007-05-01

We study the dynamics of a fast gaseous beam in a high-Q ring cavity counter-propagating a strong pump laser with large detuning from any particle optical resonance. As spontaneous emission is strongly suppressed, the particles can be treated as polarizable point masses forming a dynamic moving mirror. Above a threshold intensity the particles exhibit spatially periodic ordering, enhancing collective coherent backscattering which decelerates the beam. Based on a linear stability analysis in their accelerated rest frame, we derive analytic bounds for the intensity threshold of this self-organization as a function of particle number, average velocity, kinetic temperature, pump detuning and resonator linewidth. The analytical results agree well with time-dependent simulations of the N-particle motion including field damping and spontaneous emission noise.
Our results give conditions which may be easily evaluated for stopping and cooling a fast molecular beam.

Electrical percolation threshold of magnetostrictive inclusions in a piezoelectric matrix under simulated sintering conditions

NASA Astrophysics Data System (ADS)

Bedard, Antoine Joseph; Barbero, Ever J.

2018-03-01

Magnetoelectric (ME) composites can be produced by embedding magnetostrictive H particles in a piezoelectric E matrix derived from a piezoelectric powder precursor. Previously, using a bi-disperse hard-shell model (Barbero and Bedard, Comput Part Mech, 2018, https://doi.org/10.1007/s40571-017-0165-4), it has been shown that the electrical percolation threshold of the conductive H phase can be increased by decreasing the piezoelectric E particle size relative to the H phase particle size, and by increasing the short-range affinity between the E and H particles. This study builds on our previous study by exploring what happens during sintering of the ME composite when either the H or E particles undergo deformation. It was found that deformation of the H particles reduces the percolation threshold, and that deformation of the E particles increases inter-phase H-E mechanical coupling, thus contributing to enhanced ME coupling.

Analysis of surface sputtering on a quantum statistical basis

NASA Technical Reports Server (NTRS)

Wilhelm, H. E.

1975-01-01

Surface sputtering is explained theoretically by means of a three-body sputtering mechanism involving the ion and two surface atoms of the solid. By means of quantum-statistical mechanics, a formula for the sputtering ratio S(E) is derived from first principles. At low ion energies, the theoretical sputtering rate S(E) is proportional to the square of the difference between the incident ion energy and the threshold energy for sputtering of surface atoms, in agreement with experiment. Extrapolation of the theoretical sputtering formula to larger ion energies indicates that S(E) reaches a saturation value and finally decreases at high ion energies. The theoretical sputtering ratios S(E) for wolfram, tantalum, and molybdenum are compared with the corresponding experimental sputtering curves in the low-energy region, from the threshold sputtering energy to 120 eV above the respective threshold energy.
Theory and experiment are shown to be in good agreement.

FUNDAMENTALS OF THRESHOLD LOGIC

DTIC Science & Technology

These notes on threshold logic are intended as intermediary material between a completely geometric, heuristic presentation and the more formal source material available in the literature. Basic definitions and simple properties of threshold functions are developed, followed by a complete treatment …

Derivation of debris flow critical rainfall thresholds from land stability modeling

NASA Astrophysics Data System (ADS)

Papa, M. N.; Medina, V.; Bateman, A.; Ciervo, F.

2012-04-01

The aim of the work is to develop a system capable of providing debris flow warnings in areas where historical event data are not available, as well as in the case of changing environments and climate. For these reasons, critical rainfall threshold curves are derived from mathematical and numerical simulations rather than from the classical empirical rainfall data. The operational use of a distributed model, based on a stability analysis for each grid cell of the basin, is not feasible for warnings because of the long running time required for this kind of model and the frequent lack of detailed information on the spatial distribution of the material properties. Moreover, for the purpose of issuing debris flow warnings it is not necessary to know the distribution of unstable elements along the basin, but only whether a debris flow may affect the vulnerable areas in the valley. The capability of a debris flow to reach the downstream areas depends on many factors linked to the topography, the solid concentration, the rheological properties of the debris mixture and the flow discharge, as well as the occurrence of liquefaction of the sliding mass. For a specific basin, many of these factors may be considered time independent. The most rainfall-dependent factors are the flow discharge and the correlated total debris volume. In the present study, the total volume that is unstable, and therefore available for the flow, is considered the governing factor from which it is possible to assess whether a debris flow will affect the downstream areas. The possible triggering of a debris flow is simulated, in a generic element of the basin, by an infinite slope stability analysis. The groundwater pressure is calculated by superposing the effect of an "antecedent" rainfall and an "event" rainfall. The groundwater pressure response to the antecedent rainfall is used as the initial condition for the time-dependent computation of the groundwater pressure response to the event rainfall. The antecedent rainfall response is estimated under the hypotheses of low intensity and long duration, thus assuming steady-state conditions and slope-parallel groundwater flux.
The short-term response to rainfall is assessed under the hypothesis of vertical infiltration. The simulations are performed in a virtual basin, representative of the one studied, taking into account the uncertainties linked with the definition of the soil characteristics. The approach presented is based on the simulation of a large number of cases covering the entire range of the governing input dynamic variables. For every possible combination of rainfall intensity, duration and antecedent rain, the total debris volume available for the flow is estimated. The resulting database is processed to obtain rainfall threshold curves. When operating in real time, if the observed and forecasted rainfall exceeds a given threshold, the corresponding probability of debris flow occurrence may be estimated.
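A minimal sketch of the per-cell stability test the abstract above relies on: a textbook infinite-slope factor of safety with slope-parallel seepage, where a rising water table drives the factor of safety toward failure. All parameter values are illustrative, not taken from the study.

```python
# Hedged sketch: textbook infinite-slope factor-of-safety check with
# slope-parallel seepage. Parameter values are illustrative placeholders.
import math

def factor_of_safety(c_eff, phi_deg, beta_deg, z, m, gamma=19e3, gamma_w=9.81e3):
    """
    c_eff    effective cohesion [Pa]
    phi_deg  effective friction angle [deg]
    beta_deg slope angle [deg]
    z        depth of the potential slip surface [m]
    m        saturated fraction of z (water table height / z)
    gamma    unit weight of soil [N/m^3]; gamma_w of water
    """
    phi, beta = math.radians(phi_deg), math.radians(beta_deg)
    pore_pressure = m * z * gamma_w * math.cos(beta) ** 2          # slope-parallel flow
    resisting = c_eff + (gamma * z * math.cos(beta) ** 2 - pore_pressure) * math.tan(phi)
    driving = gamma * z * math.sin(beta) * math.cos(beta)
    return resisting / driving

# Rising water table (m -> 1) pushes the cell toward failure (FS -> 1 and below).
for m in (0.0, 0.5, 1.0):
    print(f"m = {m:.1f}  FS = {factor_of_safety(5e3, 32, 35, 1.5, m):.2f}")
```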
24 CFR 1003.302 - Project specific threshold requirements

Code of Federal Regulations, 2012 CFR

2012-04-01

Title 24 - Housing and Urban Development, Volume 4 (2012-04-01). Section 1003.302, Project specific threshold requirements. Regulations Relating to Housing and Urban Development (Continued); Office of Assistant Secretary for Public and Indian Housing, Department of Housing and Urban Development; Community …

24 CFR 1003.302 - Project specific threshold requirements

Code of Federal Regulations, 2013 CFR

2013-04-01

Title 24 - Housing and Urban Development, Volume 4 (2013-04-01). Section 1003.302, Project specific threshold requirements. Regulations Relating to Housing and Urban Development (Continued); Office of Assistant Secretary for Public and Indian Housing, Department of Housing and Urban Development; Community …

24 CFR 1003.302 - Project specific threshold requirements

Code of Federal Regulations, 2010 CFR

2010-04-01

Title 24 - Housing and Urban Development, Volume 4 (2010-04-01). Section 1003.302, Project specific threshold requirements. Regulations Relating to Housing and Urban Development (Continued); Office of Assistant Secretary for Public and Indian Housing, Department of Housing and Urban Development; Community …

24 CFR 1003.302 - Project specific threshold requirements

Code of Federal Regulations, 2011 CFR

2011-04-01

Title 24 - Housing and Urban Development, Volume 4 (2011-04-01). Section 1003.302, Project specific threshold requirements. Regulations Relating to Housing and Urban Development (Continued); Office of Assistant Secretary for Public and Indian Housing, Department of Housing and Urban Development; Community …
Autocrine signal transmission with extracellular ligand degradation

NASA Astrophysics Data System (ADS)

Muratov, C B; Posta, F; Shvartsman, S Y

2009-03-01

Traveling waves of cell signaling in epithelial layers orchestrate a number of important processes in developing and adult tissues. These waves can be mediated by positive-feedback autocrine loops, a mode of cell signaling in which binding of a diffusible extracellular ligand to a cell surface receptor can lead to further ligand release. We formulate and analyze a biophysical model that accounts for ligand-induced ligand release, extracellular ligand diffusion and ligand-receptor interaction. We focus on the case when the main mode of ligand degradation is extracellular and analyze the problem with a sharp-threshold positive-feedback nonlinearity. We derive expressions that link the speed of propagation and other characteristics of traveling waves to the parameters of the biophysical processes, such as diffusion rates and receptor expression level. Analyzing the derived expressions, we found that traveling waves in such systems can exhibit a number of unusual properties, e.g. a non-monotonic dependence of the speed of propagation on ligand diffusivity. Our results for fully developed traveling fronts can be used to analyze wave initiation from localized perturbations, a scenario that frequently arises in in vitro models of epithelial wound healing, and to guide future modeling studies of cell communication in epithelial layers.

Analytical and numerical solutions for heat transfer and effective thermal conductivity of cracked media

NASA Astrophysics Data System (ADS)

Tran, A. B.; Vu, M. N.; Nguyen, S. T.; Dong, T. Q.; Le-Nguyen, K.

2018-02-01

This paper presents analytical solutions to heat transfer problems around a crack and derives an adaptive model for the effective thermal conductivity of cracked materials based on a singular integral equation approach. The potential solution for heat diffusion through two-dimensional cracked media, where a crack filled by air behaves as an insulator to heat flow, is obtained in singular integral equation form. It is demonstrated that the temperature field can be described as a function of the temperature and the rate of heat flow on the boundary and the temperature jump across the cracks. Numerical resolution of this boundary integral equation allows determining the heat conduction and effective thermal conductivity of cracked media.
Moreover, writing this boundary integral equation for an infinite medium embedding a single crack under a far-field condition allows deriving the closed-form solution for the temperature discontinuity on the crack and, in particular, the closed-form solution for the temperature field around the crack. These formulas are then used to establish analytical effective-medium estimates. Finally, the comparison between the developed numerical and analytical solutions allows developing an adaptive model for the effective thermal conductivity of cracked media. This model takes into account both the interaction between cracks and the percolation threshold.

Thresholding functional connectomes by means of mixture modeling

PubMed

Bielczyk, Natalia Z; Walocha, Fabian; Ebel, Patrick W; Haak, Koen V; Llera, Alberto; Buitelaar, Jan K; Glennon, Jeffrey C; Beckmann, Christian F

2018-05-01

Functional connectivity has been shown to be a very promising tool for studying the large-scale functional architecture of the human brain. In network research in fMRI, functional connectivity is considered as a set of pair-wise interactions between the nodes of the network. These interactions are typically operationalized through the full or partial correlation between all pairs of regional time series. Estimating the structure of the latent underlying functional connectome from the set of pair-wise partial correlations remains an open research problem, though. Typically, this thresholding problem is approached by proportional thresholding, or by means of parametric or non-parametric permutation testing across a cohort of subjects at each possible connection. As an alternative, we propose a data-driven thresholding approach for network matrices on the basis of mixture modeling. This approach allows for creating subject-specific sparse connectomes by modeling the full set of partial correlations as a mixture of low correlation values associated with weak or unreliable edges in the connectome and a sparse set of reliable connections. Consequently, we propose an alternative thresholding strategy based on the model fit, using pseudo-false discovery rates derived from the empirical null estimated as part of the mixture distribution. We evaluate the method on synthetic benchmark fMRI datasets where the underlying network structure is known, and demonstrate that it gives improved performance with respect to the alternative methods for thresholding connectomes, given the canonical thresholding levels. We also demonstrate that mixture modeling gives highly reproducible results when applied to the functional connectomes of the visual system derived from the n-back working memory task in the Human Connectome Project. The sparse connectomes obtained from mixture modeling are further discussed in the light of previous knowledge of the functional architecture of the visual system in humans. We also demonstrate that, with use of our method, we are able to extract similar information on the group level as can be achieved with permutation testing, even though these two methods are not equivalent.
We demonstrate that with both of these methods we obtain functional decoupling between the two hemispheres in the higher-order areas of the visual cortex during visual stimulation as compared to the resting state, which is in line with previous studies suggesting lateralization in visual processing. However, as opposed to permutation testing, our approach does not require inference at the cohort level and can be used for creating sparse connectomes at the level of a single subject. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
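A rough illustration of the mixture-model thresholding idea described above, not the authors' implementation: a two-component Gaussian mixture is fit to Fisher-transformed edge weights, and an edge is kept when its posterior probability of belonging to the near-zero "null" component is low (a local-FDR-style rule). The cutoff value and toy data are assumptions.

```python
# Hedged sketch of mixture-model thresholding of a connectivity matrix.
import numpy as np
from sklearn.mixture import GaussianMixture

def threshold_connectome(partial_corr, null_prob_cutoff=0.05):
    """Return a boolean adjacency mask for the off-diagonal edges."""
    iu = np.triu_indices_from(partial_corr, k=1)
    z = np.arctanh(np.clip(partial_corr[iu], -0.999, 0.999)).reshape(-1, 1)  # Fisher z

    gmm = GaussianMixture(n_components=2, random_state=0).fit(z)
    null_comp = np.argmin(np.abs(gmm.means_.ravel()))       # component centred nearest 0
    p_null = gmm.predict_proba(z)[:, null_comp]             # local-FDR-like score

    mask = np.zeros_like(partial_corr, dtype=bool)
    mask[iu] = p_null < null_prob_cutoff
    return mask | mask.T

# Toy example: mostly weak edges plus a small block of genuinely connected nodes.
rng = np.random.default_rng(1)
n = 40
pc = rng.normal(0.0, 0.05, size=(n, n))
pc[:5, :5] += 0.6
pc = (pc + pc.T) / 2
np.fill_diagonal(pc, 0.0)
print("edges kept:", threshold_connectome(pc).sum() // 2)
```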
Can gravity waves significantly impact PSC occurrence in the Antarctic?

NASA Astrophysics Data System (ADS)

McDonald, A. J.; George, S. E.; Woollands, R. M.

2009-11-01

A combination of POAM III aerosol extinction and CHAMP RO temperature measurements is used to examine the role of atmospheric gravity waves in the formation of Antarctic Polar Stratospheric Clouds (PSCs). POAM III aerosol extinction observations and quality flag information are used to identify PSCs with an unsupervised clustering algorithm. A PSC proxy, derived by thresholding Met Office temperature analyses with the PSC Type Ia formation temperature (T_NAT), shows general agreement with the results of the POAM III analysis. However, in June the POAM III observations of PSC are more abundant than expected from temperature threshold crossings in five out of the eight years examined. In addition, September and October PSC occurrence identified using temperature thresholding is often significantly higher than that derived from POAM III, probably because of dehydration and denitrification. Comparison of the Met Office temperature analyses with corresponding CHAMP observations also suggests a small warm bias in the Met Office data in June, but this bias cannot fully explain the differences observed. Analysis of CHAMP data indicates that temperature perturbations associated with gravity waves may partially explain the enhanced PSC incidence observed in June (relative to the Met Office analyses). For this month, approximately 40% of the temperature threshold crossings observed using CHAMP RO data are associated with small-scale perturbations. Examination of the distribution of temperatures relative to T_NAT shows a large proportion of June data to be close to this threshold, potentially enhancing the importance of gravity-wave-induced temperature perturbations. Inspection of the longitudinal structure of PSC occurrence in June 2005 also shows that regions of enhancement are geographically associated with the Antarctic Peninsula, a known mountain wave "hotspot". The latitudinal variation of POAM III observations means that we only observe this region in June-July, and thus the true pattern of enhanced PSC production may continue into later months. The analysis has shown that early in the Antarctic winter, stratospheric background temperatures are close to the T_NAT threshold (and PSC formation), and are thus sensitive to temperature perturbations associated with mountain wave activity near the Antarctic Peninsula (40% of PSC formation). Later in the season, and at latitudes away from the peninsula, temperature perturbations associated with gravity waves contribute to about 15% of the observed PSC (a value which corresponds well with several previous studies). This lower value is likely due to colder background temperatures already achieving the T_NAT threshold unaided. Additionally, there is a reduction in the magnitude of the gravity wave perturbations observed as POAM III samples poleward of the peninsula.

Thresholds of Principle and Preference: Exploring Procedural Variation in Postgraduate Surgical Education

PubMed Central

Apramian, Tavis; Cristancho, Sayra; Watling, Chris; Ott, Michael; Lingard, Lorelei

2017-01-01

Background: Expert physicians develop their own ways of doing things. The influence of such practice variation in clinical learning is insufficiently understood. Our grounded theory study explored how residents make sense of, and behave in relation to, the procedural variations of faculty surgeons. Method: We sampled senior postgraduate surgical residents to construct a theoretical framework for how residents make sense of procedural variations. Using a constructivist grounded theory approach, we used marginal participant observation in the operating room across 56 surgical cases (146 hours), field interviews (38), and formal interviews (6) to develop a theoretical framework for residents' ways of dealing with procedural variations. Data analysis used constant comparison to iteratively refine the framework and data collection until theoretical saturation was reached. Results: The core category of the constructed theory, called thresholds of principle and preference, captured how faculty members position some procedural variations as negotiable and others as not. The term thresholding was coined to describe residents' daily experiences of spotting, mapping, and negotiating their faculty members' thresholds and defending their own emerging thresholds. Conclusions: Thresholds of principle and preference play a key role in workplace-based medical education. Postgraduate medical learners are occupied on a day-to-day level with thresholding and with attempting to make sense of the procedural variations of faculty. Workplace-based teaching and assessment should include an understanding of the integral role of thresholding in shaping learners' development. Future research should explore the nature and impact of thresholding in workplace-based learning beyond the surgical context. PMID: 26505105
Development and validation of a prediction model for measurement variability of lung nodule volumetry in patients with pulmonary metastases

PubMed

Hwang, Eui Jin; Goo, Jin Mo; Kim, Jihye; Park, Sang Joon; Ahn, Soyeon; Park, Chang Min; Shin, Yeong-Gil

2017-08-01

To develop a prediction model for the variability range of lung nodule volumetry and to validate the model for detecting nodule growth. For model development, 50 patients with metastatic nodules were prospectively included. Two consecutive CT scans were performed to assess volumetry for 1,586 nodules. Nodule volume, surface voxel proportion (SVP), attachment proportion (AP) and absolute percentage error (APE) were calculated for each nodule, and quantile regression analyses were performed to model the 95th percentile of APE. For validation, 41 patients who underwent metastasectomy were included. After volumetry of resected nodules, sensitivity and specificity for the diagnosis of metastatic nodules were compared between two different thresholds for nodule growth determination: a uniform 25% volume change threshold and an individualized threshold calculated from the model (estimated 95th percentile APE). SVP and AP were included in the final model: estimated 95th percentile APE = 37.82 · SVP + 48.60 · AP − 10.87. In the validation session, the individualized threshold showed significantly higher sensitivity for the diagnosis of metastatic nodules than the uniform 25% threshold (75.0% vs. 66.0%, P = 0.004). Conclusion: the estimated 95th percentile APE used as an individualized threshold of nodule growth showed greater sensitivity in diagnosing metastatic nodules than a global 25% threshold. • The 95th percentile APE of a particular nodule can be predicted. • The estimated 95th percentile APE can be utilized as an individualized threshold. • More sensitive diagnosis of metastasis can be made with an individualized threshold. • Tailored nodule management can be provided during nodule growth follow-up.
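The regression equation above translates directly into a nodule-specific growth threshold. The coefficients below are quoted from the abstract; treating SVP and AP as fractions in [0, 1] and comparing the observed percent volume change against the predicted 95th-percentile APE are assumptions made here for illustration.

```python
# Sketch of the individualized growth threshold from the abstract above.
# Coefficients are quoted from the abstract; units and decision rule are assumed.

def estimated_95th_percentile_ape(svp, ap):
    """Predicted 95th-percentile absolute percentage error (in %) of volumetry."""
    return 37.82 * svp + 48.60 * ap - 10.87

def nodule_growing(volume_baseline, volume_followup, svp, ap):
    """Call growth when the % volume change exceeds the nodule-specific threshold."""
    percent_change = 100.0 * (volume_followup - volume_baseline) / volume_baseline
    return percent_change > estimated_95th_percentile_ape(svp, ap)

# A mostly solid, unattached nodule gets a lower threshold than a uniform 25% rule.
print(estimated_95th_percentile_ape(svp=0.4, ap=0.1))   # ~9.1 %
print(nodule_growing(100.0, 118.0, svp=0.4, ap=0.1))    # True: 18 % > ~9.1 %
```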
Development and Current Status of the "Cambridge" Loudness Models

PubMed Central

2014-01-01

This article reviews the evolution of a series of models of loudness developed in Cambridge, UK. The first model, applicable to stationary sounds, was based on modifications of the model developed by Zwicker, including the introduction of a filter to allow for the effects of the transfer of sound through the outer and middle ear prior to the calculation of an excitation pattern, and changes in the way that the excitation pattern was calculated. Later, modifications were introduced to the assumed middle-ear transfer function and to the way that specific loudness was calculated from excitation level. These modifications led to a finite calculated loudness at absolute threshold, which made it possible to predict accurately the absolute thresholds of broadband and narrowband sounds, based on the assumption that the absolute threshold corresponds to a fixed small loudness. The model was also modified to give predictions of partial loudness, the loudness of one sound in the presence of another. This allowed predictions of masked thresholds based on the assumption that the masked threshold corresponds to a fixed small partial loudness. Versions of the model for time-varying sounds were developed, which allowed prediction of the masked threshold of any sound in a background of any other sound. More recent extensions incorporate binaural processing to account for the summation of loudness across ears. In parallel, versions of the model for predicting loudness for hearing-impaired ears have been developed and applied to the development of methods for fitting multichannel compression hearing aids. PMID: 25315375

Development of an Integrated Water Resources and Coastal Adaptation Plan for Exeter NH: Phase 1, Vulnerability Assessment

NASA Astrophysics Data System (ADS)

Kirshen, P. H.; Holt-Shannon, M.; Aytur, S.; Becker, M.; Burdick, D.; Jones, S.; Mather, L.; Miller, S.; Roseen, R.; Wake, C. P.

2013-12-01

Situated at the head of a tidal river, divided by tributaries with wide floodplains, and located just upstream of the Great Bay National Estuarine Research Reserve, the Town of Exeter in southeastern NH already faces problems of river and storm water flooding, public health and safety, water quality deterioration, and fisheries and wetland stresses. These challenges will be compounded over the next several decades by expected (and recently observed) changes in climate, including increases in precipitation, temperature, and sea level, impacting not only Exeter but also Great Bay downstream. Land use changes in the town and upstream communities will also cause impacts. Working with this engaged community, a multidisciplinary team is using Community Based Participatory Research combined with water resources modeling to co-develop with the Town an adaptation strategy that includes a mix of approaches, either robust or flexible and progressive, implemented over time and space by public and private entities. The vulnerability assessment launches this process by identifying the impacts on the community under varying land use and climate change scenarios, using community-derived metrics and determining critical thresholds to identify possible time frames of impacts to both human and natural systems.
Impacts and thresholds are determined by simulating the possible changes yearly from the present to 2100.

Auditory Sensitivity and Masking Profiles for the Sea Otter (Enhydra lutris)

PubMed

Ghoul, Asila; Reichmuth, Colleen

2016-01-01

Sea otters are threatened marine mammals that may be negatively impacted by human-generated coastal noise, yet information about sound reception in this species is surprisingly scarce. We investigated amphibious hearing in sea otters by obtaining the first measurements of absolute sensitivity and critical masking ratios. Auditory thresholds were measured in air and underwater from 0.125 to 40 kHz. Critical ratios derived from aerial masked thresholds from 0.25 to 22.6 kHz were also obtained. These data indicate that although sea otters can detect underwater sounds, their hearing appears to be primarily air-adapted and not specialized for detecting signals in background noise.

Human speckle perception threshold for still images from a laser projection system

PubMed

Roelandt, Stijn; Meuret, Youri; Jacobs, An; Willaert, Koen; Janssens, Peter; Thienpont, Hugo; Verschaffelt, Guy

2014-10-06

We study the perception of speckle by human observers in a laser projector, based on a 40-person survey. The speckle contrast is first measured objectively using a well-defined speckle measurement method. We statistically analyse the results of the user quality scores, revealing that speckle perception is influenced not only by the speckle contrast settings of the projector but also, strongly, by the type of image shown. Based on the survey, we derive a speckle contrast threshold at which speckle can be seen, and separately we investigate a speckle disturbance limit that is tolerated by the majority of test persons.
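The speckle contrast referred to above is conventionally defined as the standard deviation of the image intensity divided by its mean; the sketch below shows that definition on synthetic data. It does not reproduce the paper's full measurement procedure (detector aperture, eye-matched optics, and so on), and the averaging example is only an illustration of why projectors can suppress speckle.

```python
# Sketch of the conventional speckle-contrast metric C = sigma_I / mean_I.
import numpy as np

def speckle_contrast(intensity):
    """Speckle contrast of an intensity image or region: std / mean."""
    intensity = np.asarray(intensity, dtype=float)
    return intensity.std() / intensity.mean()

# Fully developed speckle (negative-exponential statistics) has C close to 1;
# averaging N independent patterns lowers it roughly as 1/sqrt(N).
rng = np.random.default_rng(0)
single = rng.exponential(1.0, size=(512, 512))
averaged = np.mean([rng.exponential(1.0, size=(512, 512)) for _ in range(16)], axis=0)
print(round(speckle_contrast(single), 3), round(speckle_contrast(averaged), 3))  # ~1.0, ~0.25
```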
A smooth exit from eternal inflation?

NASA Astrophysics Data System (ADS)

Hawking, S. W.; Hertog, Thomas

2018-04-01

The usual theory of inflation breaks down in eternal inflation. We derive a dual description of eternal inflation in terms of a deformed Euclidean CFT located at the threshold of eternal inflation. The partition function gives the amplitude of different geometries of the threshold surface in the no-boundary state. Its local and global behavior in dual toy models shows that the amplitude is low for surfaces which are not nearly conformal to the round three-sphere and essentially zero for surfaces with negative curvature. Based on this we conjecture that the exit from eternal inflation does not produce an infinite fractal-like multiverse, but is finite and reasonably smooth.

Dynamics analysis of SIR epidemic model with correlation coefficients and clustering coefficient in networks

PubMed

Zhang, Juping; Yang, Chan; Jin, Zhen; Li, Jia

2018-07-14

In this paper, the correlation coefficients between nodes in different states are used as dynamic variables, and we construct SIR epidemic models with correlation coefficients by using the pair approximation method in static networks and dynamic networks, respectively. Taking the clustering coefficient of the network into account, we analytically investigate the existence and local asymptotic stability of each equilibrium of these models and derive threshold values for the prevalence of disease. Additionally, we obtain two equivalent epidemic thresholds in dynamic networks, which are compared with the results of the mean-field equations.Copyright © 2018 Elsevier Ltd. All rights reserved.
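As context for the epidemic thresholds mentioned above, a brute-force stochastic SIR simulation on a static network is one way to check such analytical thresholds empirically. This sketch is not the authors' pair-approximation model; the network, the discrete-time update rule, and all parameter values are illustrative assumptions.

```python
# Hedged sketch: stochastic SIR on a static network as a baseline against
# which analytically derived epidemic thresholds can be checked.
import networkx as nx
import numpy as np

def final_size(graph, beta, gamma, n_seeds=5, rng=None):
    """Fraction ever infected in a discrete-time SIR run (per-contact rate beta, recovery gamma)."""
    rng = rng or np.random.default_rng()
    status = {n: "S" for n in graph}                      # S, I or R
    for seed in rng.choice(list(graph), n_seeds, replace=False):
        status[seed] = "I"
    while any(s == "I" for s in status.values()):
        new_status = dict(status)
        for node, s in status.items():
            if s == "I":
                for nb in graph[node]:
                    if status[nb] == "S" and rng.random() < beta:
                        new_status[nb] = "I"
                if rng.random() < gamma:
                    new_status[node] = "R"
        status = new_status
    return sum(s == "R" for s in status.values()) / graph.number_of_nodes()

# Sweeping beta past the threshold switches outbreaks from minor to large.
g = nx.random_regular_graph(d=6, n=2000, seed=1)
for beta in (0.01, 0.03, 0.05, 0.10):
    sizes = [final_size(g, beta, gamma=0.2, rng=np.random.default_rng(k)) for k in range(5)]
    print(f"beta = {beta:.2f}  mean final size = {np.mean(sizes):.2f}")
```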
Modeling Global Urbanization Supported by Nighttime Light Remote Sensing

NASA Astrophysics Data System (ADS)

Zhou, Y.

2015-12-01

Urbanization, a major driver of global change, profoundly impacts our physical and social world, for example by altering carbon cycling and climate. Understanding these consequences for better scientific insight and effective decision-making requires accurate information on urban extent and its spatial distribution. In this study, we developed a cluster-based method to estimate optimal thresholds and map urban extents from nighttime light remote sensing data, extended this method to the global domain by developing a computational method (parameterization) to estimate the key parameters in the cluster-based method, and built a consistent 20-year global urban map series to evaluate the time-reactive nature of global urbanization (e.g. 2000 in Fig. 1). Supported by urban maps derived from nightlight remote sensing data and socio-economic drivers, we developed an integrated modeling framework to project future urban expansion by integrating a top-down macro-scale statistical model with a bottom-up urban growth model. With the models calibrated and validated using historical data, we explored urban growth at the grid level (1 km) over the next two decades under a number of socio-economic scenarios. The derived spatiotemporal information on historical and potential future urbanization will be of great value, with practical implications for developing adaptation and risk management measures for urban infrastructure, transportation, energy, and water systems when considered together with other factors such as climate variability and change and high-impact weather events.

Study design in high-dimensional classification analysis

PubMed

Sánchez, Brisa N; Wu, Meihua; Song, Peter X K; Wang, Wen

2016-10-01

Advances in high-throughput technology have accelerated the use of hundreds to millions of biomarkers to construct classifiers that partition patients into different clinical conditions. Prior to classifier development in actual studies, a critical need is to determine the sample size required to reach a specified classification precision. We develop a systematic approach for sample size determination in high-dimensional (large p, small n) classification analysis. Our method uses the probability of correct classification (PCC) as the optimization objective function and incorporates the higher criticism thresholding procedure for classifier development. Further, we derive the theoretical bound of the maximal PCC gain from feature augmentation (e.g. when molecular and clinical predictors are combined in classifier development). Our methods are motivated and illustrated by a study using proteomics markers to classify post-kidney-transplantation patients into stable and rejecting classes. © The Author 2016. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
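The higher criticism thresholding named in the abstract above selects features by maximizing a standardized gap between ordered p-values and their expected uniform quantiles. The sketch below follows the usual Donoho-Jin formulation as an illustration only; the cutoff fraction, the toy data, and every function name are assumptions, not the authors' code.

```python
# Hedged sketch of higher-criticism (HC) feature-selection thresholding.
import numpy as np
from scipy.stats import norm

def higher_criticism_threshold(pvalues, alpha0=0.10):
    """Return the p-value cutoff maximizing the HC objective over the smallest alpha0*n p-values."""
    p = np.sort(np.asarray(pvalues))
    n = p.size
    i = np.arange(1, n + 1)
    hc = np.sqrt(n) * (i / n - p) / np.sqrt(p * (1 - p) + 1e-12)
    k_max = max(int(alpha0 * n), 1)
    k_star = np.argmax(hc[:k_max])
    return p[k_star]

# Toy example: 1000 null features plus 20 modestly informative ones.
rng = np.random.default_rng(0)
z = rng.normal(0, 1, 1020)
z[:20] += 3.0                                  # sparse signal
pvals = 2 * norm.sf(np.abs(z))
cutoff = higher_criticism_threshold(pvals)
print(f"HC cutoff = {cutoff:.4g}, features kept = {(pvals <= cutoff).sum()}")
```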
Multisampling suprathreshold perimetry: a comparison with conventional suprathreshold and full-threshold strategies by computer simulation

PubMed

Artes, Paul H; Henson, David B; Harper, Robert; McLeod, David

2003-06-01

To compare a multisampling suprathreshold strategy with conventional suprathreshold and full-threshold strategies in detecting localized visual field defects and in quantifying the area of loss. Probability theory was applied to examine various suprathreshold pass criteria (i.e., the number of stimuli that have to be seen for a test location to be classified as normal). A suprathreshold strategy that requires three seen or three missed stimuli per test location (multisampling suprathreshold) was selected for further investigation. Simulation was used to determine how the multisampling suprathreshold, conventional suprathreshold, and full-threshold strategies detect localized field loss. To determine the systematic error and variability in estimates of loss area, artificial fields were generated with clustered defects (0-25 field locations with 8- and 16-dB loss) and, for each condition, the number of test locations classified as defective (suprathreshold strategies) or with pattern deviation probability less than 5% (full-threshold strategy) was derived from 1000 simulated test results. The full-threshold and multisampling suprathreshold strategies had similar sensitivity to field loss; both detected defects earlier than the conventional suprathreshold strategy. The pattern deviation probability analyses of full-threshold results underestimated the area of field loss, as did conventional suprathreshold perimetry. With multisampling suprathreshold perimetry, the estimates of defect area were less variable and exhibited lower systematic error. Multisampling suprathreshold paradigms may be a powerful alternative to other strategies of visual field testing. Clinical trials are needed to verify these findings.
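The "three seen or three missed" criterion above decides a location as soon as either count reaches three, so the probability of passing can be written as a short negative-binomial sum. The sketch assumes independent presentations with a fixed probability of seeing each stimulus, which is an idealization of the real test.

```python
# Hedged sketch of the "three seen or three missed" pass criterion,
# assuming independent presentations with fixed probability p of seeing each one.
from math import comb

def prob_pass(p, needed=3):
    """Probability that `needed` stimuli are seen before `needed` are missed."""
    # Sum over the number of misses f = 0..needed-1 occurring before the final seen stimulus.
    return sum(comb(needed - 1 + f, f) * p**needed * (1 - p)**f for f in range(needed))

# A healthy location (p ~ 0.95) almost always passes; a damaged one (p ~ 0.3) rarely does.
for p in (0.95, 0.7, 0.5, 0.3):
    print(f"p(seen) = {p:.2f}  ->  P(classified normal) = {prob_pass(p):.3f}")
```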
Classical theory of atomic collisions - The first hundred years

NASA Astrophysics Data System (ADS)

Grujić, Petar V.

2012-05-01

Classical calculations of atomic processes started in 1911 with Rutherford's famous evaluation of the differential cross section for α particles scattered on foil atoms [1]. The success of these calculations was soon overshadowed by the rise of quantum mechanics in 1925 and its triumphant success in describing processes at the atomic and subatomic levels. It was generally recognized that the classical approach should be inadequate, and it was neglected until 1953, when the famous paper by Gregory Wannier appeared, in which the threshold law for the behaviour of the single ionization cross section under electron impact was derived. All later calculations and experimental studies confirmed the law derived by purely classical theory. The next step was taken by Ian Percival and collaborators in the 1960s, who developed a general classical three-body computer code, which was used by many researchers in evaluating various atomic processes such as ionization, excitation, detachment and dissociation. Another approach was pursued by Michal Gryzinski from Warsaw, who started a far-reaching programme for treating atomic particles and processes as purely classical objects [2]. Though often criticized for overestimating the domain of classical theory, the results of his group were able to match many experimental data. The Belgrade group pursued the classical approach using both analytical and numerical calculations, studying a number of atomic collisions, in particular near-threshold processes. The Riga group, led by Modris Gailitis [3], contributed considerably to the field, as did Valentin Ostrovsky and coworkers from Saint Petersburg, who developed powerful analytical methods within purely classical mechanics [4]. We give an overview of these approaches and show some of the remarkable results, which were subsequently confirmed by semiclassical and quantum mechanical calculations, as well as by the experimental evidence. Finally, we discuss the theoretical and epistemological background of the classical calculations and explain why they turned out so successful, despite the essentially quantum nature of atomic and subatomic systems.

Estimating sediment quality thresholds to prevent restrictions on fish consumption: Application to polychlorinated biphenyls and dioxins-furans in the Canadian Great Lakes

PubMed

Bhavsar, Satyendra P; Gewurtz, Sarah B; Helm, Paul A; Labencki, Tanya L; Marvin, Christopher H; Fletcher, Rachael; Hayton, Alan; Reiner, Eric J; Boyd, Duncan

2010-10-01

Sediment quality thresholds (SQTs) are used by a variety of agencies to assess the potential for adverse impacts of sediment-associated contaminants on aquatic biota, typically benthic invertebrates. However, sedimentary contaminants can also result in elevated fish contaminant levels, triggering consumption advisories that are protective of humans. As such, SQTs that would result in fish concentrations below consumption advisory levels should also be considered. To illustrate how this can be addressed, we first calculate biota-sediment accumulation factors (BSAFs) for polychlorinated biphenyls (total PCB) and polychlorinated dioxins-furans (PCDD/Fs) in the Canadian Great Lakes using measured lake sediment and fish tissue concentrations in four fish species, namely lake trout, whitefish, rainbow trout, and channel catfish. Using these BSAFs and the tissue residue values for fish consumption advisories employed by the Ontario Ministry of the Environment (OMOE, Canada), we derive fish-consumption-advisory-based SQTs (fca-SQTs) that are likely to result in fish tissue residues that are safe to eat without restriction. The PCDD/F fca-SQTs ranged from 6 to 128 pg toxic equivalents (TEQ)/g dry weight (dw) and were above the Canadian Council of Ministers of the Environment (CCME) threshold effect level (TEL) of 0.85 pg TEQ/g dw.
In contrast, the total PCB fca-SQTs ranged from 1 to 60 ng/g dw and were generally below the CCME's TEL of 34.1 ng/g and OMOE's lowest effect level (LEL) of 70 ng/g; however, they were consistent with the OMOE's no effect level (NEL) of 10 ng/g. The fca-SQTs derived using the BSAF as well as food chain multiplier (FCM) approach for a smaller scale system (Hamilton Harbour in Lake Ontario) corresponded well with average lakewide Lake Ontario fca-SQTs. This analysis provides approximate sediment concentrations necessary for reducing fish consumption advisories for each of the Canadian Great Lakes and emphasizes the impacts of historical lake sediment contamination on fish advisories. We believe that this approach merits consideration in sediment guideline development. © 2010 SETAC.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/21756773','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/21756773"><span>[The role of brain-derived neurotrophic factor in pain facilitation and spinal mechanism in rat model of bone cancer pain].</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Wang, Li-na; Yang, Jian-ping; Ji, Fu-hai; Wang, Xiu-yun; Zuo, Jian-ling; Xu, Qi-nian; Jia, Xiao-ming; Zhou, Jing; Ren, Chun-guang; Li, Wei</p> <p>2011-05-10</p> <p>To investigate the role of brain-derived neurotrophic factor (BDNF) in pain facilitation and spinal mechanisms in the rat model of bone cancer pain. The bone cancer pain model was developed by inoculated Walker 256 mammary gland carcinoma cells into the tibia medullary cavity. Sixty SD female rats were divided into 5 groups (n = 12 each) randomly; group I: control group (sham operation); group II: model group; group III: control group + anti-BDNF intrathecal (i.t.); group IV: model group + control IgG i.t.; group V: model group + anti-BDNF i.t.. Anti-BDNF or control IgG was injected i.t. during 7 to 9th day. Von-Frey threshold was measured one day before operation and every 2 days after operation. On the 9th day after threshold tested, rats were sacrificed after i.t. injection of either anti-BDNF or control IgG, the lumbar 4-6 spinal cord was removed. The expression of the spinal BDNF and the phosphorylation of extracellular signal-regulated protein kinase 1/2 (p-ERK1/2) were detected by immunohistochemistry assay and Western-Blot. Co-expression pattern of BDNF and p-ERK1/2 were determined by double-labeling immunofluorescence. We demonstrated the coexistence of BDNF and p-ERK1/2 in the spinal cord of rats. From the 7 to 9th day after operation, von-Frey threshold in groups II and IV was significantly lower than that in group I and group V (P < 0.01), group V was remarkly higher than that in group IV (P < 0.01). The spinal BDNF and p-ERK1/2 expression in group II or IV were significantly increased compared with that in group I or V (P < 0.01), intrathecal anti-BDNF was significantly suppressed BDNF and p-ERK1/2 expression (P < 0.01). 
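    The derivation above chains a biota-sediment accumulation factor with a tissue-residue advisory level: the fca-SQT is the sediment concentration at which the predicted fish tissue concentration just reaches the advisory level. A small illustration of that arithmetic, with invented concentrations and without the lipid/organic-carbon normalisation used in the study, is:

        def bsaf(tissue_conc, sediment_conc):
            """Biota-sediment accumulation factor as a simple tissue/sediment ratio."""
            return tissue_conc / sediment_conc

        def fca_sqt(advisory_tissue_level, bsaf_value):
            """Fish-consumption-advisory-based SQT: the sediment concentration expected
            to keep fish tissue at or below the advisory level."""
            return advisory_tissue_level / bsaf_value

        # Hypothetical numbers (ng/g dw) for illustration only.
        b = bsaf(tissue_conc=84.0, sediment_conc=24.0)              # BSAF = 3.5
        print(fca_sqt(advisory_tissue_level=105.0, bsaf_value=b))   # 30.0 ng/g dw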
  442. [The role of brain-derived neurotrophic factor in pain facilitation and spinal mechanism in rat model of bone cancer pain]

    PubMed

    Wang, Li-na; Yang, Jian-ping; Ji, Fu-hai; Wang, Xiu-yun; Zuo, Jian-ling; Xu, Qi-nian; Jia, Xiao-ming; Zhou, Jing; Ren, Chun-guang; Li, Wei

    2011-05-10

    To investigate the role of brain-derived neurotrophic factor (BDNF) in pain facilitation and spinal mechanisms in the rat model of bone cancer pain. The bone cancer pain model was developed by inoculating Walker 256 mammary gland carcinoma cells into the tibia medullary cavity. Sixty SD female rats were divided randomly into 5 groups (n = 12 each): group I, control group (sham operation); group II, model group; group III, control group + anti-BDNF intrathecal (i.t.); group IV, model group + control IgG i.t.; group V, model group + anti-BDNF i.t. Anti-BDNF or control IgG was injected i.t. during days 7 to 9. The von Frey threshold was measured one day before operation and every 2 days after operation. On the 9th day, after threshold testing, rats were sacrificed following i.t. injection of either anti-BDNF or control IgG, and the lumbar 4-6 spinal cord was removed. The expression of spinal BDNF and the phosphorylation of extracellular signal-regulated protein kinase 1/2 (p-ERK1/2) were detected by immunohistochemistry assay and Western blot. The co-expression pattern of BDNF and p-ERK1/2 was determined by double-labeling immunofluorescence. We demonstrated the coexistence of BDNF and p-ERK1/2 in the spinal cord of rats. From the 7th to 9th day after operation, the von Frey threshold in groups II and IV was significantly lower than that in groups I and V (P < 0.01), and that in group V was markedly higher than in group IV (P < 0.01). Spinal BDNF and p-ERK1/2 expression in groups II and IV was significantly increased compared with that in groups I and V (P < 0.01), and intrathecal anti-BDNF significantly suppressed BDNF and p-ERK1/2 expression (P < 0.01). BDNF and p-ERK1/2 coexist in the spinal cord of rats and may be involved in bone cancer pain.

  443. Disease severity index derived from hemolysis evaluation

    NASA Astrophysics Data System (ADS)

    Piskin, Senol; Finol, Ender A.; Pekkan, Kerem; Vascular Biomechanics; Biofluids Laboratory (VBBL) Team; Pediatric Cardiovascular Fluid Mechanics Laboratory Team

    2017-11-01

    Several cardiovascular diseases (CVDs) are characterized by stenosis of the vessel, leaflet malfunction, disturbance of blood flow (vorticity) due to geometric deformation or abnormal growth, and development of jet flow due to ventricle overload. All of these abnormalities are followed by degeneration of the inner wall of the heart and the arteries and by red blood cell damage (hemolysis). In this study, identification and classification of CVDs are performed based on hemolysis evaluation (HE). Two commonly used hemolysis models are implemented in our computational fluid dynamics simulations of the cardiovascular system. The capability of HE for disease diagnosis is investigated. The analysis will be carried out on our CVD templates such as artery stenosis or pulmonary artery hypertension. HE depends mainly on the strain rate, and some computational hemolysis models include a strain threshold below which hemolysis does not take place. In the current study, we investigate the effect of thresholding, in addition to using a pseudo exposure time for steady-state simulations, on the blood damage evaluations. Details of our methodology for HE by post-processing simulation results, without the necessity of re-running the simulations, will be presented. American Heart Association, National Institute of Health, European Research Council, TUBITAK.

  444. An Explosion Aftershock Model with Application to On-Site Inspection

    NASA Astrophysics Data System (ADS)

    Ford, Sean R.; Labak, Peter

    2016-01-01

    An estimate of aftershock activity due to a theoretical underground nuclear explosion is produced using an aftershock rate model. The model is developed with data from the Nevada National Security Site, formerly known as the Nevada Test Site, and the Semipalatinsk Test Site, which we take to represent soft-rock and hard-rock testing environments, respectively. Estimates of expected magnitude and number of aftershocks are calculated using the models for different testing and inspection scenarios. These estimates can help inform the Seismic Aftershock Monitoring System (SAMS) deployment in a potential Comprehensive Test Ban Treaty On-Site Inspection (OSI), by giving the OSI team a probabilistic assessment of potential aftershocks in the Inspection Area (IA). The aftershock assessment, combined with an estimate of the background seismicity in the IA and an empirically derived map of threshold magnitude for the SAMS network, could aid the OSI team in reporting. We apply the hard-rock model to a M5 event and combine it with the very sensitive detection threshold for OSI sensors to show that tens of events per day are expected up to a month after an explosion measured several kilometers away.
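    The abstract does not give the functional form of the aftershock rate model, but forecasts of this kind are commonly written as a Gutenberg-Richter productivity term multiplied by a modified Omori time decay. A generic sketch under that assumption, with placeholder parameter values rather than the study's fits, is:

        import numpy as np

        def expected_aftershocks(m_main, m_detect, days, a=-1.67, b=0.91, c=0.05, p=1.08):
            """Expected number of detectable aftershocks within `days` of the main event,
            from a generic Omori/Gutenberg-Richter rate
                lambda(t; M >= m_detect) = 10**(a + b*(m_main - m_detect)) * (t + c)**(-p).
            Parameter values here are placeholders, not the cited study's fits."""
            t = np.linspace(1e-3, days, 100_000)
            rate = 10.0 ** (a + b * (m_main - m_detect)) * (t + c) ** (-p)
            return float(np.sum(rate) * (t[1] - t[0]))   # simple Riemann-sum integral

        # A magnitude-5 source monitored with a very sensitive local network.
        print(expected_aftershocks(m_main=5.0, m_detect=-2.0, days=30))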
  445. Mouse epileptic seizure detection with multiple EEG features and simple thresholding technique

    NASA Astrophysics Data System (ADS)

    Tieng, Quang M.; Anbazhagan, Ashwin; Chen, Min; Reutens, David C.

    2017-12-01

    Objective. Epilepsy is a common neurological disorder characterized by recurrent, unprovoked seizures. The search for new treatments for seizures and epilepsy relies upon studies in animal models of epilepsy. To capture data on seizures, many applications require prolonged electroencephalography (EEG) with recordings that generate voluminous data. The desire for efficient evaluation of these recordings motivates the development of automated seizure detection algorithms. Approach. A new seizure detection method is proposed, based on multiple features and a simple thresholding technique. The features are derived from chaos theory, information theory and the power spectrum of EEG recordings and optimally exploit both linear and nonlinear characteristics of EEG data. Main result. The proposed method was tested with real EEG data from an experimental mouse model of epilepsy and distinguished seizures from other patterns with high sensitivity and specificity. Significance. The proposed approach introduces two new features: negative logarithm of adaptive correlation integral and power spectral coherence ratio. The combination of these new features with two previously described features, entropy and phase coherence, improved seizure detection accuracy significantly. Negative logarithm of adaptive correlation integral can also be used to compute the duration of automatically detected seizures.
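    The published features (negative logarithm of the adaptive correlation integral, power spectral coherence ratio, entropy, phase coherence) are non-trivial to reproduce here, but the overall scheme of combining several per-epoch features with a simple threshold can be illustrated with simpler stand-in features:

        import numpy as np

        def spectral_entropy(x):
            """Shannon entropy of the normalised power spectrum (a stand-in feature)."""
            psd = np.abs(np.fft.rfft(x)) ** 2
            p = psd / psd.sum()
            return -(p * np.log(p + 1e-12)).sum()

        def line_length(x):
            """Sum of absolute sample-to-sample differences (another stand-in feature)."""
            return np.abs(np.diff(x)).sum()

        def detect_seizure_epochs(eeg, fs, epoch_s=5.0, k=3.0):
            """Flag epochs whose features jointly exceed simple thresholds derived from
            the recording's own feature statistics (mean + k*std). This is an
            illustrative thresholding scheme, not the published algorithm."""
            n = int(epoch_s * fs)
            epochs = [eeg[i:i + n] for i in range(0, len(eeg) - n + 1, n)]
            feats = np.array([[spectral_entropy(e), line_length(e)] for e in epochs])
            thresholds = feats.mean(axis=0) + k * feats.std(axis=0)
            return np.where((feats > thresholds).all(axis=1))[0]   # flagged epoch indices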
  446. An explosion aftershock model with application to on-site inspection

    DOE PAGES

    Ford, Sean R.; Labak, Peter

    2015-02-14

    An estimate of aftershock activity due to a theoretical underground nuclear explosion is produced using an aftershock rate model. The model is developed with data from the Nevada National Security Site, formerly known as the Nevada Test Site, and the Semipalatinsk Test Site, which we take to represent soft-rock and hard-rock testing environments, respectively. Estimates of expected magnitude and number of aftershocks are calculated using the models for different testing and inspection scenarios. These estimates can help inform the Seismic Aftershock Monitoring System (SAMS) deployment in a potential Comprehensive Test Ban Treaty On-Site Inspection (OSI), by giving the OSI team a probabilistic assessment of potential aftershocks in the Inspection Area (IA). The aftershock assessment, combined with an estimate of the background seismicity in the IA and an empirically derived map of threshold magnitude for the SAMS network, could aid the OSI team in reporting. Here, we apply the hard-rock model to a M5 event and combine it with the very sensitive detection threshold for OSI sensors to show that tens of events per day are expected up to a month after an explosion measured several kilometers away.

  447. Estimation of debris flow critical rainfall thresholds by a physically-based model

    NASA Astrophysics Data System (ADS)

    Papa, M. N.; Medina, V.; Ciervo, F.; Bateman, A.

    2012-11-01

    Real time assessment of debris flow hazard is fundamental for setting up warning systems that can mitigate its risk. A convenient method to assess the possible occurrence of a debris flow is the comparison of measured and forecasted rainfall with rainfall threshold curves (RTC). Empirical derivation of the RTC from the analysis of rainfall characteristics of past events is not possible when the database of observed debris flows is poor or when the environment changes with time. For landslide-triggered debris flows, the above limitations may be overcome through the methodology presented here, based on the derivation of RTC from a physically based model. The critical RTC are derived from mathematical and numerical simulations based on the infinite-slope stability model, in which land instability is governed by the increase in groundwater pressure due to rainfall. The effect of rainfall infiltration on landslide occurrence is modelled through a reduced form of the Richards equation. The simulations are performed in a virtual basin, representative of the studied basin, taking into account the uncertainties linked with the definition of the characteristics of the soil. A large number of calculations are performed combining different values of the rainfall characteristics (intensity and duration of event rainfall and intensity of antecedent rainfall). For each combination of rainfall characteristics, the percentage of the basin that is unstable is computed. The obtained database is then elaborated to derive the RTC curves. The methodology is implemented and tested on a small basin of the Amalfi Coast (South Italy).
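    The infinite-slope stability model that underlies such rainfall threshold curves reduces to a factor-of-safety expression in which rising pore pressure destabilises the slope. A compact version with illustrative soil parameters (not the study's calibration) is:

        import numpy as np

        def factor_of_safety(z, slope_deg, psi, c=5_000.0, phi_deg=32.0,
                             gamma_s=19_000.0, gamma_w=9_810.0):
            """Infinite-slope factor of safety for a soil column of depth z (m) on a
            slope (deg) with pressure head psi (m); cohesion c in Pa, unit weights
            in N/m^3. All parameter values are illustrative."""
            beta = np.radians(slope_deg)
            phi = np.radians(phi_deg)
            tau = gamma_s * z * np.sin(beta) * np.cos(beta)              # driving stress
            sigma_eff = gamma_s * z * np.cos(beta) ** 2 - gamma_w * psi  # effective normal stress
            return (c + sigma_eff * np.tan(phi)) / tau

        # Rising pore pressure (psi) lowers the factor of safety; FS < 1 marks
        # predicted instability, which maps a rainfall scenario onto the threshold.
        for psi in (0.0, 0.5, 1.0, 1.5):
            print(psi, round(float(factor_of_safety(z=2.0, slope_deg=35.0, psi=psi)), 2))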
  448. When Benefits Are Difficult to Measure.

    ERIC Educational Resources Information Center

    Birdsall, William C.

    1987-01-01

    It is difficult to apply benefit cost analysis to human service programs. This paper explains "threshold benefit analysis," the derivation of the minimum dollar value which the benefits must attain in order for their value to equal the intervention costs. The method is applied to a mobility training program. (BS)

  449. Crossing the Threshold Mindfully: Exploring Rites of Passage Models in Adventure Therapy

    ERIC Educational Resources Information Center

    Norris, Julian

    2011-01-01

    Rites of passage models, drawing from ethnographic descriptions of ritualized transition, are widespread in adventure therapy programmes. However, critical literature suggests that: (a) contemporary rites of passage models derive from a selective and sometimes misleading use of ethnographic materials, and (b) the appropriation of initiatory…

  450. Development of an anaerobic threshold (HRLT, HRVT) estimation equation using the heart rate threshold (HRT) during the treadmill incremental exercise test

    PubMed Central

    Ham, Joo-ho; Park, Hun-Young; Kim, Youn-ho; Bae, Sang-kon; Ko, Byung-hoon

    2017-01-01

    [Purpose] The purpose of this study was to develop a regression model to estimate the heart rate at the lactate threshold (HRLT) and the heart rate at the ventilatory threshold (HRVT) using the heart rate threshold (HRT), and to test the validity of the regression model. [Methods] We performed a graded exercise test with a treadmill in 220 normal individuals (men: 112, women: 108) aged 20–59 years. HRT, HRLT, and HRVT were measured in all subjects. A regression model was developed to estimate HRLT and HRVT using HRT with 70% of the data (men: 79, women: 76) through randomization (7:3), with the Bernoulli trial. The validity of the regression model developed with the remaining 30% of the data (men: 33, women: 32) was also examined. [Results] Based on the regression coefficient, we found that the independent variable HRT was a significant variable in all regression models. The adjusted R2 of the developed regression models averaged about 70%, and the standard error of estimation of the validity test results was 11 bpm, which is similar to that of the developed model. [Conclusion] These results suggest that HRT is a useful parameter for predicting HRLT and HRVT. PMID:29036765
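    The regression set-up described above (fit HRLT or HRVT on HRT with a roughly 70/30 split, then report the standard error of estimate on the hold-out set) can be sketched with synthetic stand-in data as follows:

        import numpy as np

        rng = np.random.default_rng(1)

        # Synthetic stand-in data; the study used measured HRT, HRLT, HRVT from 220 adults.
        hrt = rng.normal(150, 12, 220)
        hrlt = 0.9 * hrt + 15 + rng.normal(0, 11, 220)   # invented relationship

        # Random 70/30 split, as in the study design.
        idx = rng.permutation(220)
        train, test = idx[:154], idx[154:]

        # Simple linear regression HRLT ~ HRT on the training split.
        slope, intercept = np.polyfit(hrt[train], hrlt[train], 1)
        pred = slope * hrt[test] + intercept

        # Standard error of estimate on the validation split.
        see = np.sqrt(np.sum((hrlt[test] - pred) ** 2) / (len(test) - 2))
        print(f"HRLT ~ {slope:.2f}*HRT + {intercept:.1f}, SEE ~ {see:.1f} bpm")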
  451. Development of an anaerobic threshold (HRLT, HRVT) estimation equation using the heart rate threshold (HRT) during the treadmill incremental exercise test.

    PubMed

    Ham, Joo-Ho; Park, Hun-Young; Kim, Youn-Ho; Bae, Sang-Kon; Ko, Byung-Hoon; Nam, Sang-Seok

    2017-09-30

    The purpose of this study was to develop a regression model to estimate the heart rate at the lactate threshold (HRLT) and the heart rate at the ventilatory threshold (HRVT) using the heart rate threshold (HRT), and to test the validity of the regression model. We performed a graded exercise test with a treadmill in 220 normal individuals (men: 112, women: 108) aged 20-59 years. HRT, HRLT, and HRVT were measured in all subjects. A regression model was developed to estimate HRLT and HRVT using HRT with 70% of the data (men: 79, women: 76) through randomization (7:3), with the Bernoulli trial. The validity of the regression model developed with the remaining 30% of the data (men: 33, women: 32) was also examined. Based on the regression coefficient, we found that the independent variable HRT was a significant variable in all regression models. The adjusted R2 of the developed regression models averaged about 70%, and the standard error of estimation of the validity test results was 11 bpm, which is similar to that of the developed model. These results suggest that HRT is a useful parameter for predicting HRLT and HRVT. ©2017 The Korean Society for Exercise Nutrition

  452. An evaluation of the effect of recent temperature variability on the prediction of coral bleaching events.

    PubMed

    Donner, Simon D

    2011-07-01

    Over the past 30 years, warm thermal disturbances have become commonplace on coral reefs worldwide. These periods of anomalous sea surface temperature (SST) can lead to coral bleaching, a breakdown of the symbiosis between the host coral and the symbiotic dinoflagellates which reside in coral tissue. The onset of bleaching is typically predicted to occur when the SST exceeds a local climatological maximum by 1 °C for a month or more. However, recent evidence suggests that the threshold at which bleaching occurs may depend on thermal history. This study uses global SST data sets (HadISST and NOAA AVHRR) and mass coral bleaching reports (from Reefbase) to examine the effect of historical SST variability on the accuracy of bleaching prediction. Two variability-based bleaching prediction methods are developed from global analysis of seasonal and interannual SST variability. The first method employs a local bleaching threshold derived from the historical variability in maximum annual SST to account for spatial variability in past thermal disturbance frequency. The second method uses a different formula to estimate the local climatological maximum to account for the low seasonality of SST in the tropics. The new prediction methods are tested against the common globally fixed threshold method using the observed bleaching reports. The results find that estimating the bleaching threshold from local historical SST variability delivers the highest predictive power, but also a higher rate of Type I errors. The second method has the lowest predictive power globally, though regional analysis suggests that it may be applicable in equatorial regions. The historical data analysis suggests that the bleaching threshold may have appeared to be constant globally because the magnitude of interannual variability in maximum SST is similar for many of the world's coral reef ecosystems. For example, the results show that an SST anomaly of 1 °C is equivalent to 1.73-2.94 standard deviations of the maximum monthly SST for two-thirds of the world's coral reefs. Coral reefs in the few regions that experience anomalously high interannual SST variability, like the equatorial Pacific, could prove critical to understanding how coral communities acclimate or adapt to frequent and/or severe thermal disturbances.
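    The first method described above sets the local bleaching threshold from the historical variability of the annual SST maximum rather than a fixed +1 °C anomaly. A minimal sketch of that idea, with an illustrative multiplier, is:

        import numpy as np

        def variability_based_threshold(annual_max_sst, z=2.0):
            """Local bleaching threshold from the historical variability of the annual
            maximum SST (climatological max + z standard deviations). The multiplier z
            is illustrative; the abstract reports that a fixed +1 °C anomaly corresponds
            to roughly 1.7-2.9 sigma on most reefs."""
            annual_max_sst = np.asarray(annual_max_sst, dtype=float)
            return annual_max_sst.mean() + z * annual_max_sst.std(ddof=1)

        def predicts_bleaching(monthly_sst, threshold, months=1):
            """Flag a bleaching alert if SST stays above the local threshold for at
            least `months` consecutive months."""
            above = np.asarray(monthly_sst, dtype=float) > threshold
            run = 0
            for is_above in above:
                run = run + 1 if is_above else 0
                if run >= months:
                    return True
            return False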
  453. Application of the "threshold of toxicological concern" to derive tolerable concentrations of "non-relevant metabolites" formed from plant protection products in ground and drinking water.

    PubMed

    Melching-Kollmuss, Stephanie; Dekant, Wolfgang; Kalberlah, Fritz

    2010-03-01

    Limits for tolerable concentrations of ground water metabolites ("non-relevant metabolites" without targeted toxicities and specific classification and labeling) derived from active ingredients (AI) of plant protection products (PPPs) are discussed in the European Union. Risk assessments for "non-relevant metabolites" need to be performed when concentrations are above 0.75 µg/L. Since oral uptake is the only relevant exposure pathway for "non-relevant metabolites", risk assessment approaches as used for other chemicals with predominantly oral exposure in humans are applicable. The concept of "thresholds of toxicological concern" (TTC) defines tolerable dietary intakes for chemicals without toxicity data and is widely applied to chemicals present in food in low concentrations such as flavorings. Based on a statistical evaluation of the results of many toxicity studies and considerations of chemical structures, the TTC concept derives a maximum daily oral intake without concern of 90 µg/person/day for non-genotoxic chemicals, even for those with appreciable toxicity. When using the typical exposure assessment for drinking water contaminants (consumption of 2 L of drinking water/person/day, allocation of 10% of the tolerable daily intake to drinking water), a TTC-based upper concentration limit of 4.5 µg/L for "non-relevant metabolites" in ground/drinking water is delineated. The present publication evaluates whether this value covers all relevant toxicities (repeated-dose, reproductive and developmental, and immune effects). Because an evaluation of specific reproduction toxicity data for chemicals and pharmaceuticals indicated that a value of 1 µg/kg bw/day covers developmental and reproduction toxicity, a TTC value of 60 µg/person/day was assessed to represent a safe value. Based on these reasonable worst-case assumptions, a TTC-derived threshold of 3 µg/L in drinking water is derived. When a non-relevant metabolite is present in concentrations below 3 µg/L, animal testing for toxicity is not considered necessary for a compound-specific risk assessment, since the application of the TTC covers all relevant toxicities to be considered in such an assessment and any health risk resulting from these exposures is very low. (c) 2009 Elsevier Inc. All rights reserved.
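    Both drinking-water limits follow directly from the TTC values, the 2 L/day consumption default and the 10% allocation factor quoted in the abstract; the arithmetic is simply:

        def ttc_drinking_water_limit(ttc_ug_per_person_day, allocation=0.10, water_l_per_day=2.0):
            """Drinking-water concentration limit (µg/L) implied by a TTC value, using
            the defaults quoted in the abstract (10% allocation, 2 L/day)."""
            return ttc_ug_per_person_day * allocation / water_l_per_day

        print(ttc_drinking_water_limit(90.0))   # 4.5 µg/L (general non-genotoxic TTC)
        print(ttc_drinking_water_limit(60.0))   # 3.0 µg/L (TTC covering reproductive toxicity)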
  454. New developments in supra-threshold perimetry.

    PubMed

    Henson, David B; Artes, Paul H

    2002-09-01

    To describe a series of recent enhancements to supra-threshold perimetry. Computer simulations were used to develop an improved algorithm (HEART) for the setting of the supra-threshold test intensity at the beginning of a field test, and to evaluate the relationship between various pass/fail criteria and the test's performance (sensitivity and specificity) and how they compare with modern threshold perimetry. Data were collected in optometric practices to evaluate HEART and to assess how the patient's response times can be analysed to detect false positive response errors in visual field test results. The HEART algorithm shows improved performance (reduced between-eye differences) over current algorithms. A pass/fail criterion of '3 stimuli seen of 3-5 presentations' at each test location reduces test/retest variability and combines high sensitivity and specificity. A large percentage of false positive responses can be detected by comparing their latencies to the average response time of a patient. Optimised supra-threshold visual field tests can perform as well as modern threshold techniques. Such tests may be easier to perform for novice patients, compared with the more demanding threshold tests.

  455. Inter-comparison of weather and circulation type classifications for hydrological drought development

    NASA Astrophysics Data System (ADS)

    Fleig, Anne K.; Tallaksen, Lena M.; Hisdal, Hege; Stahl, Kerstin; Hannah, David M.

    Classifications of weather and circulation patterns are often applied in research seeking to relate atmospheric state to surface environmental phenomena. However, numerous procedures have been applied to define the patterns, thus limiting comparability between studies. The COST733 Action “Harmonisation and Applications of Weather Type Classifications for European regions” tests 73 different weather type classifications (WTC) and their associated weather types (WTs) and compares the WTCs' utility for various applications. The objective of this study is to evaluate the potential of these WTCs for analysis of regional hydrological drought development in north-western Europe. Hydrological drought is defined in terms of a Regional Drought Area Index (RDAI), which is based on deficits derived from daily river flow series. RDAI series (1964-2001) were calculated for four homogeneous regions in Great Britain and two in Denmark. For each region, WTs associated with hydrological drought development were identified based on antecedent and concurrent WT-frequencies for major drought events. The utility of the different WTCs for the study of hydrological drought development was evaluated, and the influence of WTC attributes, i.e. input variables, number of defined WTs and general classification concept, on WTC performance was assessed. The objective Grosswetterlagen (OGWL), the objective Second-Generation Lamb Weather Type Classification (LWT2) with 18 WTs and two implementations of the objective Wetterlagenklassifikation (WLK; with 40 and 28 WTs) outperformed all other WTCs. In general, WTCs with more WTs (⩾27) were found to perform better than WTCs with fewer (⩽18) WTs. The influence of input variables was not consistent across the different classification procedures, and the performance of a WTC was determined primarily by the classification procedure itself. Overall, classification procedures following the relatively simple general classification concept of predefining WTs based on thresholds performed better than those based on more sophisticated classification concepts such as deriving WTs by cluster analysis or artificial neural networks. In particular, PCA-based WTCs with 9 WTs and automated WTCs with a high number of predefined WTs (subjectively and threshold based) performed well. It is suggested that the explicit consideration of the air flow characteristics of meridionality, zonality and cyclonicity in the definition of WTs is a useful feature for a WTC when analysing regional hydrological drought development.

  456. A New Approach to Threshold Attribute Based Signatures

    DTIC Science & Technology

    2011-01-01

    Inspired by developments in attribute based encryption and signatures, there has recently been a spurt of progress in the direction of threshold attribute based signatures (t-ABS). In this work we propose a novel approach to construct threshold attribute based signatures inspired by ring signatures... Threshold attribute based signatures, defined by a (t, n) threshold predicate, ensure that the signer holds at least t out of a specified set of n attributes.

  457. A computational pipeline for the development of multi-marker bio-signature panels and ensemble classifiers

    PubMed Central

    2012-01-01

    Background: Biomarker panels derived separately from genomic and proteomic data and with a variety of computational methods have demonstrated promising classification performance in various diseases. An open question is how to create effective proteo-genomic panels. The framework of ensemble classifiers has been applied successfully in various analytical domains to combine classifiers so that the performance of the ensemble exceeds the performance of individual classifiers. Using blood-based diagnosis of acute renal allograft rejection as a case study, we address the following question in this paper: Can acute rejection classification performance be improved by combining individual genomic and proteomic classifiers in an ensemble? Results: The first part of the paper presents a computational biomarker development pipeline for genomic and proteomic data. The pipeline begins with data acquisition (e.g., from bio-samples to microarray data), quality control, statistical analysis and mining of the data, and finally various forms of validation. The pipeline ensures that the various classifiers to be combined later in an ensemble are diverse and adequate for clinical use. Five mRNA genomic and five proteomic classifiers were developed independently using single time-point blood samples from 11 acute-rejection and 22 non-rejection renal transplant patients. The second part of the paper examines five ensembles ranging in size from two to 10 individual classifiers. Performance of ensembles is characterized by area under the curve (AUC), sensitivity, and specificity, as derived from the probability of acute rejection for individual classifiers in the ensemble in combination with one of two aggregation methods: (1) Average Probability or (2) Vote Threshold. One ensemble demonstrated superior performance and was able to improve sensitivity and AUC beyond the best values observed for any of the individual classifiers in the ensemble, while staying within the range of observed specificity. The Vote Threshold aggregation method achieved improved sensitivity for all 5 ensembles, but typically at the cost of decreased specificity. Conclusion: Proteo-genomic biomarker ensemble classifiers show promise in the diagnosis of acute renal allograft rejection and can improve classification performance beyond that of individual genomic or proteomic classifiers alone. Validation of our results in an international multicenter study is currently underway. PMID:23216969

  458. A computational pipeline for the development of multi-marker bio-signature panels and ensemble classifiers.

    PubMed

    Günther, Oliver P; Chen, Virginia; Freue, Gabriela Cohen; Balshaw, Robert F; Tebbutt, Scott J; Hollander, Zsuzsanna; Takhar, Mandeep; McMaster, W Robert; McManus, Bruce M; Keown, Paul A; Ng, Raymond T

    2012-12-08

    Biomarker panels derived separately from genomic and proteomic data and with a variety of computational methods have demonstrated promising classification performance in various diseases. An open question is how to create effective proteo-genomic panels. The framework of ensemble classifiers has been applied successfully in various analytical domains to combine classifiers so that the performance of the ensemble exceeds the performance of individual classifiers. Using blood-based diagnosis of acute renal allograft rejection as a case study, we address the following question in this paper: Can acute rejection classification performance be improved by combining individual genomic and proteomic classifiers in an ensemble? The first part of the paper presents a computational biomarker development pipeline for genomic and proteomic data. The pipeline begins with data acquisition (e.g., from bio-samples to microarray data), quality control, statistical analysis and mining of the data, and finally various forms of validation. The pipeline ensures that the various classifiers to be combined later in an ensemble are diverse and adequate for clinical use. Five mRNA genomic and five proteomic classifiers were developed independently using single time-point blood samples from 11 acute-rejection and 22 non-rejection renal transplant patients. The second part of the paper examines five ensembles ranging in size from two to 10 individual classifiers. Performance of ensembles is characterized by area under the curve (AUC), sensitivity, and specificity, as derived from the probability of acute rejection for individual classifiers in the ensemble in combination with one of two aggregation methods: (1) Average Probability or (2) Vote Threshold. One ensemble demonstrated superior performance and was able to improve sensitivity and AUC beyond the best values observed for any of the individual classifiers in the ensemble, while staying within the range of observed specificity. The Vote Threshold aggregation method achieved improved sensitivity for all 5 ensembles, but typically at the cost of decreased specificity. Proteo-genomic biomarker ensemble classifiers show promise in the diagnosis of acute renal allograft rejection and can improve classification performance beyond that of individual genomic or proteomic classifiers alone. Validation of our results in an international multicenter study is currently underway.
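    The two aggregation rules named above are easy to state concretely; a sketch with hypothetical per-classifier rejection probabilities (the cut-offs are illustrative, not the study's settings) is:

        import numpy as np

        def average_probability(probs, cutoff=0.5):
            """Call 'rejection' when the mean of the classifiers' probabilities exceeds a cutoff."""
            return probs.mean(axis=0) >= cutoff

        def vote_threshold(probs, prob_cutoff=0.5, min_votes=3):
            """Call 'rejection' when at least `min_votes` classifiers individually vote for it."""
            votes = (probs >= prob_cutoff).sum(axis=0)
            return votes >= min_votes

        # probs[i, j]: probability of acute rejection from classifier i for sample j
        # (hypothetical values for illustration).
        probs = np.array([[0.9, 0.2, 0.6],
                          [0.7, 0.4, 0.3],
                          [0.8, 0.1, 0.7],
                          [0.6, 0.3, 0.2],
                          [0.9, 0.2, 0.8]])
        print(average_probability(probs))   # [ True False  True]
        print(vote_threshold(probs))        # [ True False  True]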
  459. Soil texture and climatic conditions for biocrust growth limitation: a meta analysis

    NASA Astrophysics Data System (ADS)

    Fischer, Thomas; Subbotina, Mariia

    2015-04-01

    Along with afforestation, attempts have been made to combat desertification by managing soil crusts, and it has been reported that recovery rates of biocrusts are dependent on many factors, including the type, severity, and extent of disturbance; structure of the vascular plant community; conditions of adjoining substrates; availability of inoculation material; and climate during and after disturbance (Belnap & Eldridge 2001). Because biological soil crusts are known to be more stable on and to prefer fine substrates (Belnap 2001), the question arises as to how successful crust management practices can be applied to coarser soil. In previous studies we observed similar crust biomasses on finer soils under arid and on coarser soils under temperate conditions. We hypothesized that the higher water holding capacity of finer substrates would favor crust development, and that the amount of silt and clay in the substrate that is required for enhanced crust development would vary with changes in climatic conditions. In a global meta study, climatic and soil texture threshold values promoting BSC growth were derived. While examining literature sources, it became evident that the number of studies that could be incorporated into this meta analysis was inversely related to the number of common environmental parameters they share. We selected annual mean precipitation, mean temperature and the amount of silt and clay as driving variables for crust growth. The response variable was the "relative crust biomass", which was computed per literature source as the ratio between each individual crust biomass value of the given study and the study maximum value reported. We distinguished lichen, green algal, cyanobacterial and moss crusts. To quantify threshold conditions at which crust biomass responded to differences in texture and climate, we (i) determined correlations between bioclimatic variables, (ii) calculated linear models to determine the effect of typical climatic variables together with soil clay content, with study site as a random effect, and (iii) identified threshold values of texture and climatic effects using a regression tree. Three mean annual temperature classes for texture-dependent BSC growth limitation were identified: (1) <9 °C, with a threshold value of 25% silt and clay (limited growth on coarser soils), (2) 9-19 °C, where texture had no influence on relative crust biomass, and (3) >19 °C, at soils with <4 or >17% silt and clay. Because biocrust development is limited under certain climatic and soil texture conditions, it is suggested to consider soil texture for biocrust rehabilitation purposes and in biogeochemical modeling of cryptogamic ground covers. References: Belnap, J. & Eldridge, D. 2001. Disturbance and Recovery of Biological Soil Crusts. In: Belnap, J. & Lange, O. (eds.) Biological Soil Crusts: Structure, Function, and Management, Springer, Berlin. Belnap, J. 2001. Biological Soil Crusts and Wind Erosion. In: Belnap, J. & Lange, O. (eds.) Biological Soil Crusts: Structure, Function, and Management, Springer, Berlin. Fischer, T. & Subbotina, M. 2014. Climatic and soil texture threshold values for cryptogamic cover development: a meta analysis. Biologia 69/11:1520-1530.

  460. Comparison between ABR with click and narrow band chirp stimuli in children.

    PubMed

    Zirn, Stefan; Louza, Julia; Reiman, Viktor; Wittlinger, Natalie; Hempel, John-Martin; Schuster, Maria

    2014-08-01

    Click and chirp-evoked auditory brainstem responses (ABR) are applied for the estimation of hearing thresholds in children. The present study analyzes ABR thresholds across a large sample of children's ears obtained with both methods. The aim was to demonstrate the correlation between both methods using narrow band chirp and click stimuli. Click and chirp evoked ABRs were measured in 253 children aged from 0 to 18 years to determine their individual auditory threshold. The delay-compensated stimuli were narrow band CE chirps with either 2000 Hz or 4000 Hz center frequencies. Measurements were performed consecutively during natural sleep, and under sedation or general anesthesia. Threshold estimation was performed for each measurement by two experienced audiologists. Pearson correlation analysis revealed highly significant correlations (r=0.94) between click and chirp derived thresholds for both 2 kHz and 4 kHz chirps. No considerable differences were observed either between different age ranges or gender. Comparing the thresholds estimated using ABR with click stimuli and chirp stimuli, only 0.8-2% of the 2000 Hz NB-chirp and 0.4-1.2% of the 4000 Hz NB-chirp measurements differed by more than 15 dB for different degrees of hearing loss or normal hearing. The results suggest that either NB-chirp or click ABR is sufficient for threshold estimation. This holds for the chirp frequencies of 2000 Hz and 4000 Hz. The use of either click- or chirp-evoked ABR allows a reduction of recording time in young infants. Nevertheless, to cross-check the results of one of the methods, we recommend measurements with the other method as well. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  461. The validity of activity monitors for measuring sleep in elite athletes.

    PubMed

    Sargent, Charli; Lastella, Michele; Halson, Shona L; Roach, Gregory D

    2016-10-01

    There is a growing interest in monitoring the sleep of elite athletes. Polysomnography is considered the gold standard for measuring sleep, however this technique is impractical if the aim is to collect data simultaneously with multiple athletes over consecutive nights. Activity monitors may be a suitable alternative for monitoring sleep, but these devices have not been validated against polysomnography in a population of elite athletes. Participants (n=16) were endurance-trained cyclists participating in a 6-week training camp. A total of 122 nights of sleep were recorded with polysomnography and activity monitors simultaneously. Agreement, sensitivity, and specificity were calculated from epoch-for-epoch comparisons of polysomnography and activity monitor data. Sleep variables derived from polysomnography and activity monitors were compared using paired t-tests. Activity monitor data were analysed using low, medium, and high sleep-wake thresholds. Epoch-for-epoch comparisons showed good agreement between activity monitors and polysomnography for each sleep-wake threshold (81-90%). Activity monitors were sensitive to sleep (81-92%), but specificity differed depending on the threshold applied (67-82%). Activity monitors underestimated sleep duration (18-90 min) and overestimated wake duration (4-77 min) depending on the threshold applied. Applying the correct sleep-wake threshold is important when using activity monitors to measure the sleep of elite athletes. For example, the default sleep-wake threshold (>40 activity counts=wake) underestimates sleep duration by ∼50 min and overestimates wake duration by ∼40 min. In contrast, sleep-wake thresholds that have a high sensitivity to sleep (>80 activity counts=wake) yield the best combination of agreement, sensitivity, and specificity. Copyright © 2015 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
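    The epoch-for-epoch comparison described above reduces to scoring each epoch as wake when its activity count exceeds a threshold and tabulating agreement, sensitivity and specificity against polysomnography. A sketch with simulated epochs is:

        import numpy as np

        def score_epochs(counts, psg_sleep, wake_threshold=40):
            """Epoch-for-epoch comparison of an activity monitor against PSG. An epoch is
            scored 'wake' when its activity count exceeds the threshold; the 40-count
            default mirrors the device setting quoted in the abstract."""
            act_sleep = counts <= wake_threshold
            agreement = np.mean(act_sleep == psg_sleep)
            sensitivity = np.mean(act_sleep[psg_sleep])     # sleep epochs scored as sleep
            specificity = np.mean(~act_sleep[~psg_sleep])   # wake epochs scored as wake
            return agreement, sensitivity, specificity

        # Hypothetical night: counts per 30-s epoch and PSG-scored sleep (True = sleep).
        rng = np.random.default_rng(2)
        psg_sleep = rng.random(960) < 0.85
        counts = np.where(psg_sleep, rng.poisson(15, 960), rng.poisson(120, 960))
        print(score_epochs(counts, psg_sleep, wake_threshold=40))
        print(score_epochs(counts, psg_sleep, wake_threshold=80))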
With zero costs to returning to a male in the current patch, the optimal strategy accepts males above a quality threshold which is constant whenever one or more males in the patch remain uninspected; this threshold drops when inspecting the last male in the patch, so returns may occur only then and are never to a male in a previously inspected patch. With non-zero within-patch return costs, such a two-threshold rule still performs extremely well, but a more gradual decline in acceptance threshold is optimal. Inability to return at all need not decrease performance by much. The acceptance threshold should also decline if it gets harder to discover the last males in a patch. Optimal strategies become more complex when mean male quality varies systematically between patches or years, and females estimate this in a Bayesian manner through inspecting male qualities. It can then be optimal to switch patch before inspecting all males on a patch, or, exceptionally, to return to an earlier patch. We compare performance of various rules of thumb in these environments and in ones without a patch structure. A two-threshold rule performs excellently, as do various simplifications of it. The best-of-N rule outperforms threshold rules only in non-patchy environments with between-year quality variation. The cutoff rule performs poorly.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/25464323','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/25464323"><span>Acceleration of the herbicide isoproturon degradation in wheat by glycosyltransferases and salicylic acid.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Lu, Yi Chen; Zhang, Shuang; Yang, Hong</p> <p>2015-01-01</p> <p>Isoproturon (IPU) is a herbicide widely used to prevent weeds in cereal production. Due to its extensive use in agriculture, residues of IPU are often detected in soils and crops. Overload of IPU to crops is associated with human health risks. Hence, there is an urgent need to develop an approach to mitigate its accumulation in crops. In this study, the IPU residues and its degradation products in wheat were characterized using ultra performance liquid chromatography-time of fight tandem-mass spectrometer/mass spectrometer (UPLC-TOF-MS/MS). Most detected IPU-derivatives were sugar-conjugated. Degradation and glycosylation of IPU-derivatives could be enhanced by applying salicylic acid (SA). While more sugar-conjugated IPU-derivatives were identified in wheat with SA application, lower levels of IPU were detected, indicating that SA is able to accelerate intracellular IPU catabolism. All structures of IPU-derivatives and sugar-conjugated products were characterized. Comparative data were provided with specific activities and gene expression of certain glucosyltransferases. A pathway with IPU degradation and glucosylation was discussed. Our work indicates that SA-accelerated degradation is practically useful for wheat crops growing in IPU-contaminated soils because such crops with SA application can potentially lower or minimize IPU accumulation in levels below the threshold for adverse effects. Copyright © 2014 Elsevier B.V. 
    All rights reserved.

  465. 24 CFR 92.102 - Participation threshold amount.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 24 Housing and Urban Development 1 2012-04-01 2012-04-01 false Participation threshold amount. 92.102 Section 92.102 Housing and Urban Development Office of the Secretary, Department of Housing and Urban Development HOME INVESTMENT PARTNERSHIPS PROGRAM Consortia; Designation and Revocation of Designation as a Participating Jurisdiction § 92.102...

  466. 24 CFR 92.102 - Participation threshold amount.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 24 Housing and Urban Development 1 2011-04-01 2011-04-01 false Participation threshold amount. 92.102 Section 92.102 Housing and Urban Development Office of the Secretary, Department of Housing and Urban Development HOME INVESTMENT PARTNERSHIPS PROGRAM Consortia; Designation and Revocation of Designation as a Participating Jurisdiction § 92.102...

  467. 24 CFR 92.102 - Participation threshold amount.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 24 Housing and Urban Development 1 2014-04-01 2014-04-01 false Participation threshold amount. 92.102 Section 92.102 Housing and Urban Development Office of the Secretary, Department of Housing and Urban Development HOME INVESTMENT PARTNERSHIPS PROGRAM Consortia; Designation and Revocation of Designation as a Participating Jurisdiction § 92.102...

  468. 24 CFR 92.102 - Participation threshold amount.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 24 Housing and Urban Development 1 2010-04-01 2010-04-01 false Participation threshold amount. 92.102 Section 92.102 Housing and Urban Development Office of the Secretary, Department of Housing and Urban Development HOME INVESTMENT PARTNERSHIPS PROGRAM Consortia; Designation and Revocation of Designation as a Participating Jurisdiction § 92.102...

  469. 24 CFR 92.102 - Participation threshold amount.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 24 Housing and Urban Development 1 2013-04-01 2013-04-01 false Participation threshold amount. 92.102 Section 92.102 Housing and Urban Development Office of the Secretary, Department of Housing and Urban Development HOME INVESTMENT PARTNERSHIPS PROGRAM Consortia; Designation and Revocation of Designation as a Participating Jurisdiction § 92.102...

  470. Long term analysis of wet and dry years in Seoul, Korea

    NASA Astrophysics Data System (ADS)

    Yoo, Chulsang

    2006-03-01

    This study investigated the recurrence characteristics of wet and dry years using the annual precipitation data collected in Seoul, Korea since 1776. More than one half of these 200 years of precipitation data were collected in the Chosun Age using an old Korean rain gauge called Chukwooki. The recurrence characteristics of wet and dry years were investigated for several sets of thresholds, represented by use of the mean and standard deviation (stdv) of the annual precipitation, such as mean±0.5stdv, mean±0.75stdv and mean±1.0stdv. These sets of thresholds were chosen so as to make the occurrence of wet and dry years follow the Poisson distribution. For a given set of thresholds, the wet, dry, and normal years were categorized, and the transition probabilities among those years were then derived and compared. The average return periods were also derived using the stationary probabilities of wet and dry years. This analysis was applied not only to the entire data but also to partial data sets composed of the data before and after the long dry period (lasting about 25 years) around 1900. This was to compare and detect the difference in the recurrence characteristics as well as the difference between the old Chukwooki and the modern flip-bucket style rain gauge data. As a result, the overall recurrence patterns of wet and dry years were found to be very similar. The only obvious difference is the return period of extremely dry years (for the threshold of mean-stdv), which after the long dry period was found to be longer than that before the long dry period (8.03 and 6.77 years, respectively). A similar result could also be found in the occurrence probability (or, the inverse of the return period) of consecutive dry years estimated by applying the Poisson process. That is, for the lowest threshold of mean-stdv, the occurrence probability of consecutive dry years before the long dry period was higher than that after the long dry period. Thus, we may conclude that the possibility of long dry periods has been decreasing recently.
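The classification and return-period estimates in the Seoul study above can be paraphrased in a few lines. The sketch below is not the author's code: the synthetic precipitation series, the choice k = 1.0, and the helper names are illustrative assumptions, and the return period is taken simply as the inverse of the empirical stationary probability of a state.

    import numpy as np

    def classify_years(annual_precip, k=1.0):
        """Label each year wet (+1), normal (0) or dry (-1) using mean +/- k*stdv thresholds."""
        mean, stdv = annual_precip.mean(), annual_precip.std(ddof=1)
        upper, lower = mean + k * stdv, mean - k * stdv
        labels = np.zeros(annual_precip.size, dtype=int)
        labels[annual_precip > upper] = 1    # wet year
        labels[annual_precip < lower] = -1   # dry year
        return labels

    def average_return_period(labels, state):
        """Return period (years), estimated as the inverse of the stationary probability of `state`."""
        p = np.mean(labels == state)
        return np.inf if p == 0 else 1.0 / p

    # Hypothetical usage with synthetic data standing in for the 1776-present Seoul series.
    rng = np.random.default_rng(0)
    precip = rng.normal(1200.0, 300.0, size=230)      # mm/yr, illustrative only
    labels = classify_years(precip, k=1.0)
    print(average_return_period(labels, state=-1))    # return period of extremely dry years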
  471. [Clinical experiences with four newly developed, surface modified stimulation electrodes].

    PubMed

    Winter, U J; Fritsch, J; Liebing, J; Höpp, H W; Hilger, H H

    1993-05-01

    Newly developed pacing electrodes with so-called porous surfaces promise a significantly improved post-operative pacing and sensing threshold. We therefore investigated four newly developed leads (ELA-PMCF-860 n = 10; Biotronik-60/4-DNP n = 10; CPI-4010 n = 10; Intermedics-421-03-Biopore n = 6) connected to two different pacing devices (Intermedics NOVA II, Medtronic PASYS) in 36 patients (18 men, 18 women, age: 69.7 +/- 9.8 years) suffering from symptomatic bradycardia. The individual electrode maturation process was investigated by means of repeated measurements of pacing threshold and electrode impedance in the acute, subacute, and chronic phases, as well as energy consumption and sensing behavior in the chronic phase. However, with the exception of the 4010, the investigated leads showed widely varying values of the pacing threshold, with individual peaks occurring from the second up to the 13th week. All leads had nearly similar chronic pacing thresholds (PMCF 0.13 +/- 0.07; DNP 0.25 +/- 0.18; Biopore 0.15 +/- 0.05; 4010 0.14 +/- 0.05 ms). Impedance measurements revealed higher, but not significantly different, values for the DNP (PMCF 582 +/- 112, DNP 755 +/- 88, Biopore 650 +/- 15, 4010 718 +/- 104 Ohm). Despite differing values for pacing threshold and impedance, the energy consumption in the chronic phase during threshold-adapted but secure stimulation (3 * impulse-width at pacing threshold) was comparable.

  472. A probabilistic approach to assess antibiotic resistance development risks in environmental compartments and its application to an intensive aquaculture production scenario.

    PubMed

    Rico, Andreu; Jacobs, Rianne; Van den Brink, Paul J; Tello, Alfredo

    2017-12-01

    Estimating antibiotic pollution and antibiotic resistance development risks in environmental compartments is important to design management strategies that advance our stewardship of antibiotics. In this study we propose a modelling approach to estimate the risk of antibiotic resistance development in environmental compartments and demonstrate its application in aquaculture production systems. We modelled exposure concentrations for 12 antibiotics used in Vietnamese Pangasius catfish production using the ERA-AQUA model. Minimum selective concentration (MSC) distributions that characterize the selective pressure of antibiotics on bacterial communities were derived from the European Committee on Antimicrobial Susceptibility Testing (EUCAST) Minimum Inhibitory Concentration dataset. The antibiotic resistance development risk (RDR) for each antibiotic was calculated as the probability that the antibiotic exposure distribution exceeds the MSC distribution representing the bacterial community. RDRs in pond sediments were nearly 100% for all antibiotics. Median RDR values in pond water were high for the majority of the antibiotics, with rifampicin, levofloxacin and ampicillin having the highest values. In the effluent mixing area, RDRs were low for most antibiotics, with the exception of amoxicillin, ampicillin and trimethoprim, which presented moderate risks, and rifampicin and levofloxacin, which presented high risks. The RDR provides an efficient means to benchmark multiple antibiotics and treatment regimes in the initial phase of a risk assessment with regard to their potential to develop resistance in different environmental compartments, and can be used to derive resistance threshold concentrations. Copyright © 2017 Elsevier Ltd. All rights reserved.
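The RDR defined in the record above is the probability that a randomly drawn exposure concentration exceeds a randomly drawn MSC. A minimal Monte Carlo sketch of that comparison follows; the lognormal inputs and function names are illustrative assumptions, not the ERA-AQUA or EUCAST-derived distributions used in the study.

    import numpy as np

    def resistance_development_risk(exposure_samples, msc_samples, n_draws=100_000, seed=1):
        """
        Monte Carlo estimate of the probability that a random antibiotic exposure
        concentration exceeds a random minimum selective concentration (MSC).
        """
        rng = np.random.default_rng(seed)
        exp_draw = rng.choice(exposure_samples, size=n_draws, replace=True)
        msc_draw = rng.choice(msc_samples, size=n_draws, replace=True)
        return np.mean(exp_draw > msc_draw)

    # Hypothetical, illustrative inputs (µg/L) standing in for modelled exposure
    # concentrations and an MSC distribution for a bacterial community.
    rng = np.random.default_rng(0)
    exposure = rng.lognormal(mean=1.0, sigma=0.8, size=5_000)
    msc = rng.lognormal(mean=2.0, sigma=0.6, size=5_000)
    print(f"RDR = {resistance_development_risk(exposure, msc):.2%}")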
  473. Improvement of Alternative Crop Phenology Detection Algorithms using MODIS NDVI Time Series Data in US Corn Belt Region

    NASA Astrophysics Data System (ADS)

    Lee, J.; Kang, S.; Seo, B.; Lee, K.

    2017-12-01

    Predicting crop phenology is important for understanding crop development and growth processes and for improving the accuracy of crop models. Remote sensing offers a feasible tool for monitoring spatio-temporal patterns of crop phenology at regional and continental scales. Various methods have been developed to determine the timing of crop phenological stages using spectral vegetation indices (e.g. NDVI and EVI) derived from satellite data. In our study, four alternative detection methods for identifying crop phenological stages (i.e. the emergence and harvesting dates) were compared using high-quality NDVI time series data derived from MODIS. We also investigated factors associated with the crop development rate. Temperature and photoperiod are the two main factors that influence the crop's growth pattern expressed in the VI data; only the effect of temperature on the crop development rate was considered here. The temperature response function of the Wang-Engel (WE) model was used, which simulates crop development using nonlinear response functions that range from zero to one. The analysis was carried out at the state level over 14 years (2003-2016) in Iowa and Illinois, USA, where phenology dates were estimated with the four methods for both corn and soybean. Weekly crop progress reports produced by the USDA NASS were used to validate the temperature-driven phenology detection algorithms. All methods showed substantial uncertainty, but the threshold method showed relatively better agreement with the state-level data for soybean phenology.
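The Wang-Engel temperature response mentioned in the record above maps temperature to a development rate between zero and one. The sketch below shows one common form of that response; the cardinal temperatures are illustrative placeholders, not values taken from the study.

    import numpy as np

    def wang_engel_temperature_response(t, t_min=8.0, t_opt=27.0, t_max=40.0):
        """
        Wang-Engel style temperature response: 0 outside [t_min, t_max],
        rising to 1 at t_opt. The cardinal temperatures here are placeholders.
        """
        t = np.asarray(t, dtype=float)
        alpha = np.log(2.0) / np.log((t_max - t_min) / (t_opt - t_min))
        f = np.zeros_like(t)
        inside = (t >= t_min) & (t <= t_max)
        num = (2.0 * (t[inside] - t_min) ** alpha * (t_opt - t_min) ** alpha
               - (t[inside] - t_min) ** (2.0 * alpha))
        f[inside] = num / (t_opt - t_min) ** (2.0 * alpha)
        return np.clip(f, 0.0, 1.0)

    # Daily mean temperatures (°C) mapped to daily development rates in [0, 1].
    print(wang_engel_temperature_response([5, 15, 27, 35, 41]))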
  474. Threshold concepts in finance: conceptualizing the curriculum

    NASA Astrophysics Data System (ADS)

    Hoadley, Susan; Tickle, Leonie; Wood, Leigh N.; Kyng, Tim

    2015-08-01

    Graduates with well-developed capabilities in finance are invaluable to our society and in increasing demand. Universities face the challenge of designing finance programmes to develop these capabilities and the essential knowledge that underpins them. Our research responds to this challenge by identifying threshold concepts that are central to the mastery of finance and by exploring their potential for informing curriculum design and pedagogical practices to improve student outcomes. In this paper, we report the results of an online survey of finance academics at multiple institutions in Australia, Canada, New Zealand, South Africa and the United Kingdom. The outcomes of our research are recommendations for threshold concepts in finance endorsed by quantitative evidence, as well as a model of the finance curriculum incorporating finance, modelling and statistics threshold concepts. In addition, we draw conclusions about the application of threshold concept theory supported by both quantitative and qualitative evidence. Our methodology and findings have general relevance to the application of threshold concept theory as a means to investigate and inform curriculum design and delivery in higher education.

  475. Deriving an optimal threshold of waist circumference for detecting cardiometabolic risk in sub-Saharan Africa.

    PubMed

    Ekoru, K; Murphy, G A V; Young, E H; Delisle, H; Jerome, C S; Assah, F; Longo-Mbenza, B; Nzambi, J P D; On'Kin, J B K; Buntix, F; Muyer, M C; Christensen, D L; Wesseh, C S; Sabir, A; Okafor, C; Gezawa, I D; Puepet, F; Enang, O; Raimi, T; Ohwovoriole, E; Oladapo, O O; Bovet, P; Mollentze, W; Unwin, N; Gray, W K; Walker, R; Agoudavi, K; Siziya, S; Chifamba, J; Njelekela, M; Fourie, C M; Kruger, S; Schutte, A E; Walsh, C; Gareta, D; Kamali, A; Seeley, J; Norris, S A; Crowther, N J; Pillay, D; Kaleebu, P; Motala, A A; Sandhu, M S

    2017-10-03

    Waist circumference (WC) thresholds derived from western populations continue to be used in sub-Saharan Africa (SSA) despite increasing evidence of ethnic variation in the association between adiposity and cardiometabolic disease and availability of data from African populations. We aimed to derive a SSA-specific optimal WC cut-point for identifying individuals at increased cardiometabolic risk. We used individual level cross-sectional data on 24 181 participants aged ⩾15 years from 17 studies conducted between 1990 and 2014 in eight countries in SSA. Receiver operating characteristic curves were used to derive optimal WC cut-points for detecting the presence of at least two components of metabolic syndrome (MS), excluding WC. The optimal WC cut-point was 81.2 cm (95% CI 78.5-83.8 cm) and 81.0 cm (95% CI 79.2-82.8 cm) for men and women, respectively, with comparable accuracy in men and women. Sensitivity was higher in women (64%, 95% CI 63-65) than in men (53%, 95% CI 51-55), and increased with the prevalence of obesity. Having WC above the derived cut-point was associated with a twofold probability of having at least two components of MS (age-adjusted odds ratio 2.6, 95% CI 2.4-2.9, for men and 2.2, 95% CI 2.0-2.3, for women). The optimal WC cut-point for identifying men at increased cardiometabolic risk is lower (⩾81.2 cm) than current guidelines (⩾94.0 cm) recommend, and similar to that in women in SSA. Prospective studies are needed to confirm these cut-points based on cardiometabolic outcomes. International Journal of Obesity advance online publication, 31 October 2017; doi:10.1038/ijo.2017.240.
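One standard way to pick a cut-point from an ROC analysis such as the one above is to maximize Youden's J over candidate thresholds. The sketch below is a generic illustration on synthetic data, assuming a binary flag for "at least two MS components"; it is not the study's code, and the study may have used a different optimality criterion.

    import numpy as np

    def optimal_cutpoint_youden(values, has_condition):
        """
        Pick the threshold that maximizes Youden's J = sensitivity + specificity - 1,
        scanning every observed value as a candidate cut-point.
        """
        values = np.asarray(values, float)
        y = np.asarray(has_condition, bool)
        best_j, best_cut = -np.inf, None
        for cut in np.unique(values):
            pred = values >= cut
            sens = np.mean(pred[y]) if y.any() else 0.0
            spec = np.mean(~pred[~y]) if (~y).any() else 0.0
            j = sens + spec - 1.0
            if j > best_j:
                best_j, best_cut = j, cut
        return best_cut, best_j

    # Illustrative synthetic data: waist circumference (cm) and a flag for
    # ">= 2 metabolic syndrome components (excluding WC)".
    rng = np.random.default_rng(0)
    wc = np.concatenate([rng.normal(78, 9, 800), rng.normal(88, 10, 400)])
    ms = np.concatenate([np.zeros(800, bool), np.ones(400, bool)])
    print(optimal_cutpoint_youden(wc, ms))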
  476. A Fractional Differential Kinetic Equation and Applications to Modelling Bursts in Turbulent Nonlinear Space Plasmas

    NASA Astrophysics Data System (ADS)

    Watkins, N. W.; Rosenberg, S.; Sanchez, R.; Chapman, S. C.; Credgington, D.

    2008-12-01

    Since the 1960s Mandelbrot has advocated the use of fractals for the description of the non-Euclidean geometry of many aspects of nature. In particular he proposed two kinds of model to capture persistence in time (his Joseph effect, common in hydrology and with fractional Brownian motion as the prototype) and/or proneness to heavy tailed jumps (the Noah effect, typical of economic indices, for which he proposed Lévy flights as an exemplar). Both effects are now well demonstrated in space plasmas, notably in the turbulent solar wind. Models have, however, typically emphasised one of the Noah and Joseph parameters (the Lévy exponent μ and the temporal exponent β) at the other's expense. I will describe recent work in which we studied a simple self-affine stable model, linear fractional stable motion (LFSM), which unifies both effects, and present a recently derived diffusion equation for LFSM. This replaces the second order spatial derivative in the equation of fBm with a fractional derivative of order μ, but retains a diffusion coefficient with a power law time dependence rather than a fractional derivative in time. I will also show work in progress using an LFSM model and simple analytic scaling arguments to study the problem of the area between an LFSM curve and a threshold. This problem relates to the burst size measure introduced by Takalo and Consolini into solar-terrestrial physics and further studied by Freeman et al [PRE, 2000] on solar wind Poynting flux near L1. We test how expressions derived by other authors generalise to the non-Gaussian, constant threshold problem. Ongoing work on extension of these LFSM results to multifractals will also be discussed.

  477. Determination of prospective displacement-based gate threshold for respiratory-gated radiation delivery from retrospective phase-based gate threshold selected at 4D CT simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vedam, S.; Archambault, L.; Starkschall, G.

    2007-11-15

    Four-dimensional (4D) computed tomography (CT) imaging has found increasing importance in the localization of tumor and surrounding normal structures throughout the respiratory cycle. Based on such tumor motion information, it is possible to identify the appropriate phase interval for respiratory gated treatment planning and delivery. Such a gating phase interval is determined retrospectively based on tumor motion from internal tumor displacement. However, respiratory-gated treatment is delivered prospectively based on motion determined predominantly from an external monitor. Therefore, the simulation gate threshold determined from the retrospective phase interval selected for gating at 4D CT simulation may not correspond to the delivery gate threshold that is determined from the prospective external monitor displacement at treatment delivery. The purpose of the present work is to establish a relationship between the thresholds for respiratory gating determined at CT simulation and treatment delivery, respectively. One hundred fifty external respiratory motion traces, from 90 patients, with and without audio-visual biofeedback, are analyzed. Two respiratory phase intervals, 40%-60% and 30%-70%, are chosen for respiratory gating from the 4D CT-derived tumor motion trajectory. From residual tumor displacements within each such gating phase interval, a simulation gate threshold is defined based on (a) the average and (b) the maximum respiratory displacement within the phase interval. The duty cycle for prospective gated delivery is estimated from the proportion of external monitor displacement data points within both the selected phase interval and the simulation gate threshold. The delivery gate threshold is then determined iteratively to match the above determined duty cycle. The magnitude of the difference between such gate thresholds determined at simulation and treatment delivery is quantified in each case. Phantom motion tests yielded coincidence of simulation and delivery gate thresholds to within 0.3%. For patient data analysis, differences between simulation and delivery gate thresholds are reported as a fraction of the total respiratory motion range. For the smaller phase interval, the differences between simulation and delivery gate thresholds are 8±11% and 14±21% with and without audio-visual biofeedback, respectively, when the simulation gate threshold is determined based on the mean respiratory displacement within the 40%-60% gating phase interval. For the longer phase interval, corresponding differences are 4±7% and 8±15% with and without audio-visual biofeedback, respectively. Alternatively, when the simulation gate threshold is determined based on the maximum average respiratory displacement within the gating phase interval, greater differences between simulation and delivery gate thresholds are observed. A relationship between retrospective simulation gate threshold and prospective delivery gate threshold for respiratory gating is established and validated for regular and nonregular respiratory motion. Using this relationship, the delivery gate threshold can be reliably estimated at the time of 4D CT simulation, thereby improving the accuracy and efficiency of respiratory-gated radiation delivery.
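The duty-cycle matching step described in the record above can be paraphrased compactly: the simulation gate threshold fixes a duty cycle within the chosen phase interval, and the delivery threshold is the external-monitor displacement level that reproduces that duty cycle over the whole trace. The sketch below is only an interpretation on a synthetic trace, not the authors' implementation; the names, the phase convention, and the quantile shortcut used in place of explicit iteration are all assumptions.

    import numpy as np

    def delivery_gate_threshold(displacement, phase, phase_lo=0.4, phase_hi=0.6,
                                sim_threshold=None):
        """
        Duty-cycle matching sketch: compute the duty cycle implied by the
        simulation gate threshold inside the gating phase interval, then find
        the displacement level that reproduces that duty cycle overall.
        """
        displacement = np.asarray(displacement, float)
        phase = np.asarray(phase, float)
        in_phase = (phase >= phase_lo) & (phase <= phase_hi)
        if sim_threshold is None:
            # e.g. mean residual displacement within the gating phase interval
            sim_threshold = displacement[in_phase].mean()
        duty_cycle = np.mean(in_phase & (displacement <= sim_threshold))
        # Displacement level below which exactly `duty_cycle` of all samples fall.
        return np.quantile(displacement, duty_cycle), duty_cycle

    # Illustrative synthetic respiratory trace (not patient data):
    # peak inhale at phase 0, end-exhale near phase 0.5.
    t = np.linspace(0, 60, 6000)
    disp = 0.5 * (1 + np.cos(2 * np.pi * t / 4.0))
    ph = (t % 4.0) / 4.0
    print(delivery_gate_threshold(disp, ph))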
  478. Determination of prospective displacement-based gate threshold for respiratory-gated radiation delivery from retrospective phase-based gate threshold selected at 4D CT simulation.

    PubMed

    Vedam, S; Archambault, L; Starkschall, G; Mohan, R; Beddar, S

    2007-11-01

    Four-dimensional (4D) computed tomography (CT) imaging has found increasing importance in the localization of tumor and surrounding normal structures throughout the respiratory cycle. Based on such tumor motion information, it is possible to identify the appropriate phase interval for respiratory gated treatment planning and delivery. Such a gating phase interval is determined retrospectively based on tumor motion from internal tumor displacement. However, respiratory-gated treatment is delivered prospectively based on motion determined predominantly from an external monitor. Therefore, the simulation gate threshold determined from the retrospective phase interval selected for gating at 4D CT simulation may not correspond to the delivery gate threshold that is determined from the prospective external monitor displacement at treatment delivery. The purpose of the present work is to establish a relationship between the thresholds for respiratory gating determined at CT simulation and treatment delivery, respectively. One hundred fifty external respiratory motion traces, from 90 patients, with and without audio-visual biofeedback, are analyzed. Two respiratory phase intervals, 40%-60% and 30%-70%, are chosen for respiratory gating from the 4D CT-derived tumor motion trajectory. From residual tumor displacements within each such gating phase interval, a simulation gate threshold is defined based on (a) the average and (b) the maximum respiratory displacement within the phase interval. The duty cycle for prospective gated delivery is estimated from the proportion of external monitor displacement data points within both the selected phase interval and the simulation gate threshold. The delivery gate threshold is then determined iteratively to match the above determined duty cycle. The magnitude of the difference between such gate thresholds determined at simulation and treatment delivery is quantified in each case. Phantom motion tests yielded coincidence of simulation and delivery gate thresholds to within 0.3%. For patient data analysis, differences between simulation and delivery gate thresholds are reported as a fraction of the total respiratory motion range. For the smaller phase interval, the differences between simulation and delivery gate thresholds are 8 +/- 11% and 14 +/- 21% with and without audio-visual biofeedback, respectively, when the simulation gate threshold is determined based on the mean respiratory displacement within the 40%-60% gating phase interval. For the longer phase interval, corresponding differences are 4 +/- 7% and 8 +/- 15% with and without audio-visual biofeedback, respectively. Alternatively, when the simulation gate threshold is determined based on the maximum average respiratory displacement within the gating phase interval, greater differences between simulation and delivery gate thresholds are observed. A relationship between retrospective simulation gate threshold and prospective delivery gate threshold for respiratory gating is established and validated for regular and nonregular respiratory motion. Using this relationship, the delivery gate threshold can be reliably estimated at the time of 4D CT simulation, thereby improving the accuracy and efficiency of respiratory-gated radiation delivery.

  479. Development of high damage threshold laser-machined apodizers and gain filters for laser applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rambo, Patrick; Schwarz, Jens; Kimmel, Mark

    We have developed high damage threshold filters to modify the spatial profile of a high energy laser beam. The filters are formed by laser ablation of a transmissive window. The ablation sites constitute scattering centers which can be filtered in a subsequent spatial filter. Finally, by creating the filters in dielectric materials, we see an increased laser-induced damage threshold from previous filters created using ‘metal on glass’ lithography.

  480. Development of high damage threshold laser-machined apodizers and gain filters for laser applications

    DOE PAGES

    Rambo, Patrick; Schwarz, Jens; Kimmel, Mark; ...

    2016-09-27

    We have developed high damage threshold filters to modify the spatial profile of a high energy laser beam. The filters are formed by laser ablation of a transmissive window. The ablation sites constitute scattering centers which can be filtered in a subsequent spatial filter.
    Finally, by creating the filters in dielectric materials, we see an increased laser-induced damage threshold from previous filters created using ‘metal on glass’ lithography.

  481. Numerical Calculation of Non-uniform Magnetization Using Experimental Magnetic Field Data

    NASA Astrophysics Data System (ADS)

    Jhun, Bukyoung; Jhun, Youngseok; Kim, Seung-wook; Han, JungHyun

    2018-05-01

    A relation between the distance from the surface of a magnet and the number of cells required for a numerical calculation in order to keep the error below a certain threshold is derived. We also developed a method to obtain the magnetization at each part of the magnet from the experimentally measured magnetic field. This method is applied to three magnets with distinct patterns on magnetic-field-viewing film. Each magnet showed a unique pattern of magnetization. We found that the magnet that shows symmetric magnetization on the magnetic-field-viewing film is not uniformly magnetized. This method can be useful for comparing the magnetization between magnets that yield a typical magnetic field and those that yield an atypical magnetic field.

  482. A toy model for the yield of a tamped fission bomb

    NASA Astrophysics Data System (ADS)

    Reed, B. Cameron

    2018-02-01

    A simple expression is developed for estimating the yield of a tamped fission bomb, that is, a basic nuclear weapon comprising a fissile core jacketed by a surrounding neutron-reflecting tamper.
    This expression is based on modeling the nuclear chain reaction as a geometric progression in combination with a previously published expression for the threshold-criticality condition for such a core. The derivation is especially straightforward, as it requires no knowledge of diffusion theory and should be accessible to students of both physics and policy. The calculation can be set up as a single-page spreadsheet. Application to the Little Boy and Fat Man bombs of World War II gives results in reasonable accord with published yield estimates for these weapons.

  483. Rett Syndrome: Crossing the Threshold to Clinical Translation

    PubMed Central

    Katz, David M.; Bird, Adrian; Coenraads, Monica; Gray, Steven J.; Menon, Debashish U.; Philpot, Benjamin D.; Tarquinio, Daniel C.

    2016-01-01

    Lying at the intersection between neurobiology and epigenetics, Rett syndrome (RTT) has garnered intense interest in recent years, not only from a broad range of academic scientists, but also from the pharmaceutical and biotechnology industries. In addition to the critical need for treatments for this devastating disorder, optimism for developing RTT treatments derives from a unique convergence of factors, including a known monogenic cause, reversibility of symptoms in preclinical models, a strong clinical research infrastructure highlighted by an NIH-funded natural history study, and well-established clinics with significant patient populations. Here, we review recent advances in understanding the biology of RTT, particularly promising preclinical findings, lessons from past clinical trials, and critical elements of trial design for rare disorders. PMID:26830113

  484. Research and Development of Automated Eddy Current Testing for Composite Overwrapped Pressure Vessels

    NASA Technical Reports Server (NTRS)

    Carver, Kyle L.; Saulsberry, Regor L.; Nichols, Charles T.; Spencer, Paul R.; Lucero, Ralph E.

    2012-01-01

    Eddy current testing (ET) was used to scan bare metallic liners used in the fabrication of composite overwrapped pressure vessels (COPVs) for flaws which could result in premature failure of the vessel. The main goal of the project was to make improvements in the areas of scan signal-to-noise ratio, sensitivity of flaw detection, and estimation of flaw dimensions. Scan settings were optimized, resulting in an increased signal-to-noise ratio. Previously undiscovered flaw indications were observed and investigated. Threshold criteria were determined for the system software's flaw report, and estimation of flaw dimensions was brought to an acceptable level of accuracy.
    Computer algorithms were written to import data for filtering, and a numerical derivative filtering algorithm was evaluated.

  485. New Generation of Satellite-Derived Ocean Thermal Structure for the Western North Pacific Typhoon Intensity Forecasting

    DTIC Science & Technology

    2013-10-26

    ...took 35% of error as a threshold to determine whether the parameters derived by the REGWNP are of acceptable accuracy. Fig. 13 shows the applicable...

  486. QCD inequalities for hadron interactions.

    PubMed

    Detmold, William

    2015-06-05

    We derive generalizations of the Weingarten-Witten QCD mass inequalities for particular multihadron systems. For systems of any number of identical pseudoscalar mesons of maximal isospin, these inequalities prove that near-threshold interactions between the constituent mesons must be repulsive and that no bound states can form in these channels. Similar constraints in less symmetric systems are also extracted. These results are compatible with experimental results (where known) and recent lattice QCD calculations, and also lead to a more stringent bound on the nucleon mass than previously derived, m_N ≥ (3/2) m_π.

  487. Large Covariance Estimation by Thresholding Principal Orthogonal Complements

    PubMed Central

    Fan, Jianqing; Liao, Yuan; Mincheva, Martina

    2012-01-01

    This paper deals with the estimation of a high-dimensional covariance with a conditional sparsity structure and fast-diverging eigenvalues. By assuming a sparse error covariance matrix in an approximate factor model, we allow for the presence of some cross-sectional correlation even after taking out common but unobservable factors. We introduce the Principal Orthogonal complEment Thresholding (POET) method to explore such an approximate factor structure with sparsity. The POET estimator includes the sample covariance matrix, the factor-based covariance matrix (Fan, Fan, and Lv, 2008), the thresholding estimator (Bickel and Levina, 2008) and the adaptive thresholding estimator (Cai and Liu, 2011) as specific examples.
    We provide mathematical insights when the factor analysis is approximately the same as the principal component analysis for high-dimensional data. The rates of convergence of the sparse residual covariance matrix and the conditional sparse covariance matrix are studied under various norms. It is shown that the impact of estimating the unknown factors vanishes as the dimensionality increases. The uniform rates of convergence for the unobserved factors and their factor loadings are derived. The asymptotic results are also verified by extensive simulation studies. Finally, a real data application on portfolio allocation is presented. PMID:24348088

  488. Large Covariance Estimation by Thresholding Principal Orthogonal Complements.

    PubMed

    Fan, Jianqing; Liao, Yuan; Mincheva, Martina

    2013-09-01

    This paper deals with the estimation of a high-dimensional covariance with a conditional sparsity structure and fast-diverging eigenvalues. By assuming a sparse error covariance matrix in an approximate factor model, we allow for the presence of some cross-sectional correlation even after taking out common but unobservable factors. We introduce the Principal Orthogonal complEment Thresholding (POET) method to explore such an approximate factor structure with sparsity. The POET estimator includes the sample covariance matrix, the factor-based covariance matrix (Fan, Fan, and Lv, 2008), the thresholding estimator (Bickel and Levina, 2008) and the adaptive thresholding estimator (Cai and Liu, 2011) as specific examples. We provide mathematical insights when the factor analysis is approximately the same as the principal component analysis for high-dimensional data. The rates of convergence of the sparse residual covariance matrix and the conditional sparse covariance matrix are studied under various norms. It is shown that the impact of estimating the unknown factors vanishes as the dimensionality increases. The uniform rates of convergence for the unobserved factors and their factor loadings are derived. The asymptotic results are also verified by extensive simulation studies. Finally, a real data application on portfolio allocation is presented.
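A compact way to see what "thresholding the principal orthogonal complement" means operationally: keep a low-rank piece built from the leading principal components and threshold the entries of the residual covariance. The sketch below is only a stylized illustration with a fixed threshold; the POET estimator in the records above uses an adaptive, entry-dependent threshold with accompanying theory.

    import numpy as np

    def poet_like_covariance(X, n_factors=3, tau=0.1):
        """
        POET-style sketch: low-rank part from the leading eigenvectors of the
        sample covariance, plus a hard-thresholded residual covariance.
        `tau` is a fixed constant here, unlike the adaptive threshold in the paper.
        """
        X = np.asarray(X, float)
        Xc = X - X.mean(axis=0)
        S = Xc.T @ Xc / X.shape[0]                     # sample covariance (p x p)
        vals, vecs = np.linalg.eigh(S)                 # ascending eigenvalues
        idx = np.argsort(vals)[::-1][:n_factors]
        low_rank = (vecs[:, idx] * vals[idx]) @ vecs[:, idx].T
        residual = S - low_rank
        thresholded = np.where(np.abs(residual) >= tau, residual, 0.0)
        np.fill_diagonal(thresholded, np.diag(residual))   # keep the diagonal intact
        return low_rank + thresholded

    # Illustrative use on synthetic factor-structured data (p = 50, K = 3, n = 400).
    rng = np.random.default_rng(0)
    B = rng.normal(size=(50, 3))
    F = rng.normal(size=(400, 3))
    X = F @ B.T + rng.normal(scale=0.5, size=(400, 50))
    Sigma_hat = poet_like_covariance(X, n_factors=3, tau=0.05)
    print(Sigma_hat.shape)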
  489. Kuramoto model with uniformly spaced frequencies: Finite-N asymptotics of the locking threshold.

    PubMed

    Ottino-Löffler, Bertrand; Strogatz, Steven H

    2016-06-01

    We study phase locking in the Kuramoto model of coupled oscillators in the special case where the number of oscillators, N, is large but finite, and the oscillators' natural frequencies are evenly spaced on a given interval. In this case, stable phase-locked solutions are known to exist if and only if the frequency interval is narrower than a certain critical width, called the locking threshold. For infinite N, the exact value of the locking threshold was calculated 30 years ago; however, the leading corrections to it for finite N have remained unsolved analytically. Here we derive an asymptotic formula for the locking threshold when N≫1. The leading correction to the infinite-N result scales like either N^(-3/2) or N^(-1), depending on whether the frequencies are evenly spaced according to a midpoint rule or an end-point rule. These scaling laws agree with numerical results obtained by Pazó [D. Pazó, Phys. Rev. E 72, 046211 (2005)]. Moreover, our analysis yields the exact prefactors in the scaling laws, which also match the numerics.

  490. Photo-Double Ionization: Threshold Law and Low-Energy Behavior

    NASA Technical Reports Server (NTRS)

    Bhatia, A. K.; Temkin, A.

    2007-01-01

    The threshold law for photoejection of two electrons from atoms (PDI) is derived from a modification of the Coulomb-dipole (C-D) theory. The C-D theory applies to two-electron ejection from negative ions (photo-double detachment: PDD). The modification consists of correctly accounting for the fact that in PDI the two escaping electrons see a Coulomb field asymptotically, no matter what their relative distances from the residual ion are. We find, in the contralinear spherically symmetric model, the analytic threshold law Q(E), i.e. the yield of residual ions, to be Q(E) ∝ E + C_W E^(γ_W) + C E^(5/4) sin[(1/2) ln E + φ]/ln E. The first and third terms are beyond the Wannier law. Our threshold law can only be rigorously justified for residual energies <= 10^(-3) eV. Nevertheless, in the present experimental range (0.1-4 eV), the form, even without the second term, can be fitted to experimental results of PDI for He, Li, and Be, in contrast to the Wannier law, which has a larger deviation from the data for Li and Be.

  491. Interplay between the local information based behavioral responses and the epidemic spreading in complex networks.

    PubMed

    Liu, Can; Xie, Jia-Rong; Chen, Han-Shuang; Zhang, Hai-Feng; Tang, Ming

    2015-10-01

    The spreading of an infectious disease can trigger human behavioral responses to the disease, which in turn play a crucial role in the spreading of the epidemic. In this study, to illustrate the impacts of these behavioral responses, a new class of individuals, S(F), is introduced to the classical susceptible-infected-recovered model.
    In the model, the S(F) state represents susceptible individuals who take self-initiated protective measures to lower their probability of being infected; a susceptible individual may move to the S(F) state with a certain response rate when contacting an infectious neighbor. Via the percolation method, theoretical formulas for the epidemic threshold as well as the prevalence of the epidemic are derived. Our finding indicates that, as the response rate increases, the epidemic threshold is enhanced and the prevalence of the epidemic is reduced. The analytical results are also verified by numerical simulations. In addition, we demonstrate that, because the mean field method neglects the dynamic correlations, it yields a wrong result: the epidemic threshold is not related to the response rate, i.e., the additional S(F) state has no impact on the epidemic threshold.
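To make the role of the response rate concrete, here is a toy discrete-time simulation of an SIR model extended with a protected susceptible state S_F on a random contact network. All rates, the network, and the update rule are illustrative assumptions; the paper itself works with a percolation-based analysis rather than this kind of simulation.

    import numpy as np

    def simulate_sir_with_protection(adj, beta=0.2, beta_f=0.05, w=0.3, gamma=0.1,
                                     n_steps=200, seed=0):
        """
        Toy SIR + S_F dynamics: on contact with an infectious neighbor a susceptible
        node may adopt protection (probability w per step) and is then infected with
        the reduced probability beta_f instead of beta. Returns the final epidemic size.
        """
        rng = np.random.default_rng(seed)
        n = adj.shape[0]
        S, SF, I, R = 0, 1, 2, 3
        state = np.full(n, S)
        state[rng.choice(n, size=5, replace=False)] = I      # initial seeds
        for _ in range(n_steps):
            infectious = (state == I).astype(float)
            pressure = adj @ infectious                       # infectious neighbors per node
            for i in np.where(pressure > 0)[0]:
                if state[i] == S and rng.random() < w:        # adopt protection first
                    state[i] = SF
                p_inf = beta if state[i] == S else beta_f if state[i] == SF else 0.0
                if p_inf and rng.random() < 1 - (1 - p_inf) ** pressure[i]:
                    state[i] = I
            recover = (state == I) & (rng.random(n) < gamma)
            state[recover] = R
        return np.mean(state == R)

    # Erdos-Renyi adjacency matrix as a stand-in contact network.
    rng = np.random.default_rng(1)
    n, p = 500, 0.02
    A = (rng.random((n, n)) < p).astype(float)
    A = np.triu(A, 1)
    A = A + A.T
    print(simulate_sir_with_protection(A, w=0.0), simulate_sir_with_protection(A, w=0.5))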
  492. Electrical modulus analysis on the Ni/CCTO/PVDF system near the percolation threshold

    NASA Astrophysics Data System (ADS)

    Yang, Wenhu; Yu, Shuhui; Sun, Rong; Ke, Shanming; Huang, Haitao; Du, Ruxu

    2011-11-01

    A type of Ni/CCTO/PVDF three-phase percolative composite was prepared, in which the filler content (volume fraction) of Ni and CCTO was set at 60 vol%. The dependence of permittivity, electrical modulus and ac conductivity on the concentration of Ni and CCTO fillers near the percolation threshold was investigated in detail. The permittivity of the composites dramatically increased as the Ni content approached 24 vol%. This behavior was attributed to the formation of conductive channels near the percolation threshold. Analysis of the electrical modulus showed that the conductive channels are governed by three relaxation processes induced by the fillers (Ni, CCTO) and the PVDF matrix: the interfacial polarization arising from the interfaces between the fillers (Ni, CCTO) and the PVDF matrix, and the polarization of the CCTO ceramic filler and of the PVDF matrix. The conductivity behaviour with various Ni loadings and temperatures suggested that the transition from an insulating to a conducting state is induced by charge tunnelling between Ni-Ni particles, Ni-CCTO fillers and the Ni-PVDF matrix. These findings demonstrate that the tunnelling conduction in the composite can be attributed to this physical mechanism near the percolation threshold.

  493. Light Diffraction by Large Amplitude Ultrasonic Waves in Liquids

    NASA Technical Reports Server (NTRS)

    Adler, Laszlo; Cantrell, John H.; Yost, William T.

    2016-01-01

    Light diffraction from ultrasound, which can be used to investigate nonlinear acoustic phenomena in liquids, is reported for wave amplitudes larger than those typically reported in the literature. Large amplitude waves result in waveform distortion due to the nonlinearity of the medium, which generates harmonics and produces asymmetries in the light diffraction pattern. For standing waves with amplitudes above a threshold value, subharmonics are generated in addition to the harmonics and produce additional diffraction orders of the incident light. With increasing drive amplitude above the threshold, a cascade of period-doubling subharmonics is generated, terminating in a region characterized by a random, incoherent (chaotic) diffraction pattern. To explain the experimental results a toy model is introduced, which is derived from traveling wave solutions of the nonlinear wave equation corresponding to the fundamental and second harmonic standing waves. The toy model reduces the nonlinear partial differential equation to a mathematically more tractable nonlinear ordinary differential equation. The model predicts the experimentally observed cascade of period-doubling subharmonics terminating in chaos that occurs with increasing drive amplitudes above the threshold value. The calculated threshold amplitude is consistent with the value estimated from the experimental data.

  494. Determination of ReQuest-based symptom thresholds to define symptom relief in GERD clinical studies.

    PubMed

    Stanghellini, Vincenzo; Armstrong, David; Mönnikes, Hubert; Berghöfer, Peter; Gatz, Gudrun; Bardhan, Karna Dev

    2007-01-01

    The growing importance of symptom assessment is evident from the numerous clinical studies on gastroesophageal reflux disease (GERD) assessing treatment-induced symptom relief. However, to date, the a priori selection of criteria defining symptom relief has been arbitrary. The present study was designed to prospectively identify GERD symptom thresholds for the broad spectrum of GERD-related symptoms assessed by the validated reflux questionnaire (ReQuest) and its subscales, ReQuest-GI (gastrointestinal symptoms) and ReQuest-WSO (general well-being, sleep disturbances, other complaints), in individuals without evidence of GERD. In this 4-day evaluation in Germany, 385 individuals without evidence of GERD were included. On the first day, participants completed the ReQuest, the Gastrointestinal Symptom Rating Scale, and the Psychological General Well-Being scale. On the other days, participants filled in the ReQuest only. GERD symptom thresholds were calculated for ReQuest and its subscales, based on the respective 90th percentiles. GERD symptom thresholds were 3.37 for ReQuest, 0.95 for ReQuest-GI, and 2.46 for ReQuest-WSO. Even individuals without evidence of GERD may experience some mild symptoms that are commonly ascribed to GERD. GERD symptom thresholds derived in this study can be used to define global symptom relief in patients with GERD. Copyright 2007 S. Karger AG, Basel.
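The threshold definition in the GERD study above is simply a high percentile of scores observed in people without the disease, which is a one-line computation. The synthetic scores and the gamma distribution below are purely illustrative stand-ins for the study's ReQuest data.

    import numpy as np

    def symptom_threshold(scores, percentile=90.0):
        """Threshold defined as the given percentile of scores from disease-free subjects."""
        return float(np.percentile(np.asarray(scores, float), percentile))

    # Hypothetical daily ReQuest-like scores from 385 GERD-free participants;
    # the published thresholds (3.37 / 0.95 / 2.46) come from the actual study data.
    rng = np.random.default_rng(0)
    control_scores = rng.gamma(shape=1.5, scale=1.2, size=385)
    print(symptom_threshold(control_scores, 90))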
  495. Introducing hydrological information in rainfall intensity-duration thresholds

    NASA Astrophysics Data System (ADS)

    Greco, Roberto; Bogaard, Thom

    2016-04-01

    Regional landslide hazard assessment is mainly based on empirically derived precipitation-intensity-duration (PID) thresholds. Generally, two features of rainfall events are plotted to discriminate between observed occurrence and absence of occurrence of mass movements, and a separation line is then drawn in logarithmic space. Although successfully applied in many case studies, such PID thresholds suffer from many false positives as well as limited physical process insight. One of the main limitations is indeed that they do not include any information about the hydrological processes occurring along the slopes, so that the triggering is related only to rainfall characteristics. In order to introduce such hydrological information in the definition of rainfall thresholds for shallow landslide triggering assessment, this study proposes the introduction of non-dimensional rainfall characteristics. In particular, rain storm depth, intensity and duration are divided by a characteristic infiltration depth, a characteristic infiltration rate and a characteristic duration, respectively. These latter variables depend on the hydraulic properties and on the moisture state of the soil cover at the beginning of the precipitation. The proposed variables are applied to the case of a slope covered with shallow pyroclastic deposits in Cervinara (southern Italy), for which experimental data of hourly rainfall and soil suction were available. Rainfall thresholds defined with the proposed non-dimensional variables perform significantly better than those defined with dimensional variables, either in the intensity-duration plane or in the depth-duration plane.
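The non-dimensionalization step above is easy to sketch: each storm characteristic is divided by a soil- and moisture-dependent characteristic scale, so the same storm is scored differently under wet and dry antecedent conditions. The names and numbers below are hypothetical; the paper derives the characteristic scales from the hydraulic properties and measured suction of the pyroclastic cover.

    from dataclasses import dataclass

    @dataclass
    class SoilState:
        """Characteristic scales; hypothetical values, in the paper they follow from soil properties and antecedent moisture."""
        infiltration_depth_mm: float      # characteristic infiltration depth
        infiltration_rate_mm_h: float     # characteristic infiltration rate
        duration_h: float                 # characteristic duration

    def nondimensional_event(depth_mm, intensity_mm_h, duration_h, soil: SoilState):
        """Divide storm depth, intensity and duration by the characteristic scales."""
        return (depth_mm / soil.infiltration_depth_mm,
                intensity_mm_h / soil.infiltration_rate_mm_h,
                duration_h / soil.duration_h)

    # Wet antecedent conditions shrink the characteristic depth, so the same storm
    # plots higher against a non-dimensional threshold (values are illustrative).
    wet = SoilState(infiltration_depth_mm=60.0, infiltration_rate_mm_h=8.0, duration_h=12.0)
    dry = SoilState(infiltration_depth_mm=150.0, infiltration_rate_mm_h=8.0, duration_h=12.0)
    print(nondimensional_event(90.0, 10.0, 9.0, wet))
    print(nondimensional_event(90.0, 10.0, 9.0, dry))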
  496. Theory of Auditory Thresholds in Primates

    NASA Astrophysics Data System (ADS)

    Harrison, Michael J.

    2001-03-01

    The influence of thermal pressure fluctuations at the tympanic membrane has been previously investigated as a possible determinant of the threshold of hearing in humans (L.J. Sivian and S.D. White, J. Acoust. Soc. Am. IV, 4:288 (1933)). More recent work has focussed more precisely on the relation between statistical mechanics and sensory signal processing by biological means in creatures' brains (W. Bialek, in "Physics of Biological Systems: from molecules to species", H. Flyvberg et al. (Eds), p. 252; Springer 1997). Clinical data on the frequency dependence of hearing thresholds in humans and other primates (W.C. Stebbins, "The Acoustic Sense of Animals", Harvard 1983) have long been available. I have derived an expression for the frequency dependence of hearing thresholds in primates, including humans, by first calculating the frequency dependence of thermal pressure fluctuations at eardrums from damped normal modes excited in model ear canals of given simple geometry. I then show that most of the features of the clinical data are directly related to the frequency dependence of the ratio of thermal noise pressure arising from without to that arising from within the masking bandwidth which signals must dominate in order to be sensed. The higher intensity of threshold signals in primates smaller than humans, which is clinically observed over much but not all of the human auditory spectrum, is shown to arise from their smaller meatus dimensions.

  497. Cavitation and non-cavitation regime for large-scale ultrasonic standing wave particle separation systems--In situ gentle cavitation threshold determination and free radical related oxidation.

    PubMed

    Johansson, Linda; Singh, Tanoj; Leong, Thomas; Mawson, Raymond; McArthur, Sally; Manasseh, Richard; Juliano, Pablo

    2016-01-01

    We here suggest a novel and straightforward approach for liter-scale ultrasound particle manipulation standing wave systems to guide system design in terms of frequency and acoustic power for operating in either cavitation or non-cavitation regimes, using the sonochemiluminescent chemical luminol. We show that this method offers a simple way of in situ determination of the cavitation threshold for a selected separation vessel geometry. Since the pressure field is system specific, the cavitation threshold is system specific (for the threshold parameter range). In this study we discuss cavitation effects and also measure one implication of cavitation for the application of milk fat separation, the degree of milk fat lipid oxidation by headspace volatile measurements. For the evaluated vessel, 2 MHz as opposed to 1 MHz operation enabled operation in non-cavitation or low-cavitation conditions as measured by the luminol intensity threshold method. In all cases the lipid oxidation derived volatiles were below the human sensory detection level. Ultrasound treatment did not significantly influence the oxidative changes in milk for either 1 MHz (dose of 46 kJ/L and 464 kJ/L) or 2 MHz (dose of 37 kJ/L and 373 kJ/L) operation. Copyright © 2015 Elsevier B.V. All rights reserved.

  498. Detection of single-copy functional genes in prokaryotic cells by two-pass TSA-FISH with polynucleotide probes.

    PubMed

    Kawakami, Shuji; Hasegawa, Takuya; Imachi, Hiroyuki; Yamaguchi, Takashi; Harada, Hideki; Ohashi, Akiyoshi; Kubota, Kengo

    2012-02-01

    In situ detection of functional genes with single-cell resolution is currently of interest to microbiologists.
    Here, we developed a two-pass tyramide signal amplification (TSA)-fluorescence in situ hybridization (FISH) protocol with PCR-derived polynucleotide probes for the detection of single-copy genes in prokaryotic cells. The mcrA gene and the apsA gene in methanogens and sulfate-reducing bacteria, respectively, were targeted. The protocol showed bright fluorescence with a good signal-to-noise ratio and achieved a high efficiency of detection (>98%). The discrimination threshold was approximately 82-89% sequence identity. Microorganisms possessing the mcrA or apsA gene in anaerobic sludge samples were successfully detected by two-pass TSA-FISH with polynucleotide probes. The developed protocol is useful for identifying single microbial cells based on functional gene sequences. Copyright © 2011 Elsevier B.V. All rights reserved.

  499. OPTIMIZING THE PRECISION OF TOXICITY THRESHOLD ESTIMATION USING A TWO-STAGE EXPERIMENTAL DESIGN

    EPA Science Inventory

    An important consideration for risk assessment is the existence of a threshold, i.e., the highest toxicant dose where the response is not distinguishable from background. We have developed methodology for finding an experimental design that optimizes the precision of threshold mo...

  500. High-spatial resolution multispectral and panchromatic satellite imagery for mapping perennial desert plants

    NASA Astrophysics Data System (ADS)

    Alsharrah, Saad A.; Bruce, David A.; Bouabid, Rachid; Somenahalli, Sekhar; Corcoran, Paul A.

    2015-10-01

    The use of remote sensing techniques to extract vegetation cover information for the assessment and monitoring of land degradation in arid environments has gained increased interest in recent years. However, such a task can be challenging, especially for medium-spatial resolution satellite sensors, due to soil background effects and the distribution and structure of perennial desert vegetation.
High-spatial resolution multispectral and panchromatic satellite imagery for mapping perennial desert plants

NASA Astrophysics Data System (ADS)

Alsharrah, Saad A.; Bruce, David A.; Bouabid, Rachid; Somenahalli, Sekhar; Corcoran, Paul A.

2015-10-01

The use of remote sensing techniques to extract vegetation cover information for the assessment and monitoring of land degradation in arid environments has gained increased interest in recent years. However, such a task can be challenging, especially for medium-spatial resolution satellite sensors, due to soil background effects and the distribution and structure of perennial desert vegetation. In this study, we utilised Pleiades high-spatial resolution multispectral (2 m) and panchromatic (0.5 m) imagery and focused on mapping small shrubs and low-lying trees using three classification techniques: 1) vegetation index (VI) threshold analysis, 2) pre-built object-oriented image analysis (OBIA), and 3) a newly developed vegetation shadow model (VSM). We evaluated the success of each approach using a root-of-the-sum-of-the-squares (RSS) metric, which incorporated field data as control and three error metrics relating to commission, omission, and percent cover. Results showed that the best-performing VIs returned good vegetation cover estimates at certain thresholds but failed to accurately map the distribution of the desert plants. Using the pre-built IMAGINE Objective OBIA approach, we improved the vegetation distribution mapping accuracy, but at the cost of over-classification, similar to the results of lowering the VI thresholds. We further introduced the VSM, which takes shadow into account to further refine the vegetation cover classification derived from the VIs. The results showed significant improvements in vegetation cover and distribution accuracy compared with the other techniques. We argue that the VSM approach using high-spatial resolution imagery provides a more accurate representation of desert landscape vegetation and should be considered in assessments of desertification.
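Read one way, that RSS evaluation combines the three error terms into a single score against the field-plot control. The sketch below shows that reading; the exact error definitions and all of the numbers are illustrative assumptions, not the authors' results.

```python
import math

def rss_score(commission_error, omission_error, cover_error):
    """Combine three error terms (each expressed as a fraction, 0 = perfect)
    into a single root-sum-of-squares score; lower is better."""
    return math.sqrt(commission_error**2 + omission_error**2 + cover_error**2)

# Illustrative error values for three classifiers, relative to field data.
classifiers = {
    "VI threshold": (0.10, 0.30, 0.12),   # hypothetical: misses scattered shrubs
    "OBIA":         (0.25, 0.08, 0.15),   # hypothetical: over-classifies bare soil
    "VSM":          (0.07, 0.09, 0.05),   # hypothetical: shadow-aware refinement
}
for name, errors in classifiers.items():
    print(f"{name:12s} RSS = {rss_score(*errors):.3f}")
```

A lower score rewards a classifier that keeps all three error types small at once rather than excelling at only one, which matches the motivation for combining commission, omission, and cover error into a single metric.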