Science.gov

Sample records for population-based threshold model

  1. Development of a population-based threshold model of conidial germination for analysing the effects of physiological manipulation on the stress tolerance and infectivity of insect pathogenic fungi.

    PubMed

    Andersen, M; Magan, N; Mead, A; Chandler, D

    2006-09-01

    Entomopathogenic fungi are being used as biocontrol agents of insect pests, but their efficacy can be poor in environments where water availability is reduced. In this study, the potential to improve biocontrol by physiologically manipulating fungal inoculum was investigated. Cultures of Beauveria bassiana, Lecanicillium muscarium, Lecanicillium longisporum, Metarhizium anisopliae and Paecilomyces fumosoroseus were manipulated by growing them under conditions of water stress, which produced conidia with increased concentrations of erythritol. The time-course of germination of conidia at different water activities (water activity, aw) was described using a generalized linear model, and in most cases reducing the water activity of the germination medium delayed the onset of germination without affecting the distribution of germination times. The germination of M. anisopliae, L. muscarium, L. longisporum and P. fumosoroseus was accelerated over a range of aw levels as a result of physiological manipulation. However, the relationship between the effect of physiological manipulation on germination and the osmolyte content of conidia varied according to fungal species. There was a linear relationship between germination rate, expressed as the reciprocal of germination time, and aw of the germination medium, but there was no significant effect of fungal species or physiological manipulation on the aw threshold for germination. In bioassays with M. anisopliae, physiologically manipulated conidia germinated more rapidly on the surface of an insect host, the melon cotton aphid Aphis gossypii, and fungal virulence was increased even when relative humidity was reduced after an initial high period. It is concluded that physiological manipulation may lead to improvements in biocontrol in the field, but choice of fungal species/isolate will be critical. In addition, the population-based threshold model used in this study, which considered germination in terms of physiological
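
    The abstract describes a linear relationship between germination rate (the reciprocal of germination time) and water activity (aw), with an aw threshold below which germination does not occur. A minimal sketch of that threshold estimate, extrapolating a straight-line fit of rate against aw to the zero-rate intercept (the data values and the least-squares fit below are illustrative assumptions, not the study's data or its generalized linear model):

```python
import numpy as np

# Hypothetical germination times (h) observed at several water activities (aw).
aw = np.array([0.96, 0.97, 0.98, 0.99, 1.00])
germination_time_h = np.array([55.0, 34.0, 25.0, 19.0, 16.0])

# Population-based threshold idea: germination rate (1/time) increases roughly
# linearly with aw above a base (threshold) water activity and is zero at it.
rate = 1.0 / germination_time_h

# Ordinary least-squares straight line: rate = slope * aw + intercept.
slope, intercept = np.polyfit(aw, rate, 1)

# The aw threshold is where the fitted rate extrapolates to zero.
aw_threshold = -intercept / slope
print(f"estimated aw threshold for germination: {aw_threshold:.3f}")
```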

  2. Diversity Outbred Mice Identify Population-Based Exposure Thresholds and Genetic Factors that Influence Benzene-Induced Genotoxicity

    PubMed Central

    Gatti, Daniel M.; Morgan, Daniel L.; Kissling, Grace E.; Shockley, Keith R.; Knudsen, Gabriel A.; Shepard, Kim G.; Price, Herman C.; King, Deborah; Witt, Kristine L.; Pedersen, Lars C.; Munger, Steven C.; Svenson, Karen L.; Churchill, Gary A.

    2014-01-01

    Background: Inhalation of benzene at levels below the current exposure limit values leads to hematotoxicity in occupationally exposed workers. Objective: We sought to evaluate Diversity Outbred (DO) mice as a tool for exposure threshold assessment and to identify genetic factors that influence benzene-induced genotoxicity. Methods: We exposed male DO mice to benzene (0, 1, 10, or 100 ppm; 75 mice/exposure group) via inhalation for 28 days (6 hr/day for 5 days/week). The study was repeated using two independent cohorts of 300 animals each. We measured micronuclei frequency in reticulocytes from peripheral blood and bone marrow and applied benchmark concentration modeling to estimate exposure thresholds. We genotyped the mice and performed linkage analysis. Results: We observed a dose-dependent increase in benzene-induced chromosomal damage and estimated a benchmark concentration limit of 0.205 ppm benzene using DO mice. This estimate is an order of magnitude below the value estimated using B6C3F1 mice. We identified a locus on Chr 10 (31.87 Mb) that contained a pair of overexpressed sulfotransferases that were inversely correlated with genotoxicity. Conclusions: The genetically diverse DO mice provided a reproducible response to benzene exposure. The DO mice display interindividual variation in toxicity response and, as such, may more accurately reflect the range of response that is observed in human populations. Studies using DO mice can localize genetic associations with high precision. The identification of sulfotransferases as candidate genes suggests that DO mice may provide additional insight into benzene-induced genotoxicity. Citation: French JE, Gatti DM, Morgan DL, Kissling GE, Shockley KR, Knudsen GA, Shepard KG, Price HC, King D, Witt KL, Pedersen LC, Munger SC, Svenson KL, Churchill GA. 2015. Diversity Outbred mice identify population-based exposure thresholds and genetic factors that influence benzene-induced genotoxicity. Environ Health Perspect 123:237

  3. Threshold models in radiation carcinogenesis

    SciTech Connect

    Hoel, D.G.; Li, P.

    1998-09-01

    Cancer incidence and mortality data from the atomic bomb survivors cohort have been analyzed to allow for the possibility of a threshold dose response. The same dose-response models as used in the original papers were fit to the data. The estimated cancer incidence from the fitted models over-predicted the observed cancer incidence in the lowest exposure group. This is consistent with a threshold or nonlinear dose-response at low doses. Thresholds were added to the dose-response models and the range of possible thresholds is shown for both solid tumor cancers as well as the different leukemia types. This analysis suggests that the A-bomb cancer incidence data agree more with a threshold or nonlinear dose-response model than with a purely linear model, although the linear model is statistically equivalent. This observation is not found with the mortality data. For both the incidence data and the mortality data, the addition of a threshold term significantly improves the fit to the linear or linear-quadratic dose response for both total leukemias and also for the leukemia subtypes of ALL, AML, and CML.
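
    The threshold extension described here sets the excess risk to zero below a threshold dose and lets it rise (linearly or linear-quadratically) above it. A hedged sketch of that functional form and of fitting it by least squares (the dose groups, risk values, and fitting criterion are placeholders; the original analysis fit the actual cohort data with the models of the source papers):

```python
import numpy as np
from scipy.optimize import curve_fit

def excess_risk(dose, tau, beta1, beta2):
    """Linear-quadratic excess risk with a threshold at dose tau."""
    d = np.clip(dose - tau, 0.0, None)   # no excess risk below the threshold
    return beta1 * d + beta2 * d**2

# Hypothetical grouped data: mean dose (Gy) and observed excess relative risk.
dose = np.array([0.005, 0.05, 0.1, 0.2, 0.5, 1.0, 2.0])
err_obs = np.array([0.00, 0.01, 0.03, 0.08, 0.22, 0.48, 1.05])

params, _ = curve_fit(excess_risk, dose, err_obs, p0=[0.05, 0.4, 0.05],
                      bounds=([0, 0, 0], [0.5, np.inf, np.inf]))
tau, beta1, beta2 = params
print(f"estimated threshold dose: {tau:.3f} Gy")
```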

  4. Diversity Outbred Mice Identify Population-Based Exposure Thresholds and Genetic Factors that Influence Benzene-Induced Genotoxicity.

    PubMed

    French, John E; Gatti, Daniel M; Morgan, Daniel L; Kissling, Grace E; Shockley, Keith R; Knudsen, Gabriel A; Shepard, Kim G; Price, Herman C; King, Deborah; Witt, Kristine L; Pedersen, Lars C; Munger, Steven C; Svenson, Karen L; Churchill, Gary A

    2015-03-01

    Inhalation of benzene at levels below the current exposure limit values leads to hematotoxicity in occupationally exposed workers. We sought to evaluate Diversity Outbred (DO) mice as a tool for exposure threshold assessment and to identify genetic factors that influence benzene-induced genotoxicity. We exposed male DO mice to benzene (0, 1, 10, or 100 ppm; 75 mice/exposure group) via inhalation for 28 days (6 hr/day for 5 days/week). The study was repeated using two independent cohorts of 300 animals each. We measured micronuclei frequency in reticulocytes from peripheral blood and bone marrow and applied benchmark concentration modeling to estimate exposure thresholds. We genotyped the mice and performed linkage analysis. We observed a dose-dependent increase in benzene-induced chromosomal damage and estimated a benchmark concentration limit of 0.205 ppm benzene using DO mice. This estimate is an order of magnitude below the value estimated using B6C3F1 mice. We identified a locus on Chr 10 (31.87 Mb) that contained a pair of overexpressed sulfotransferases that were inversely correlated with genotoxicity. The genetically diverse DO mice provided a reproducible response to benzene exposure. The DO mice display interindividual variation in toxicity response and, as such, may more accurately reflect the range of response that is observed in human populations. Studies using DO mice can localize genetic associations with high precision. The identification of sulfotransferases as candidate genes suggests that DO mice may provide additional insight into benzene-induced genotoxicity.
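
    Benchmark concentration (BMC) modeling of the kind described fits a continuous concentration-response curve to the micronucleus data and solves for the exposure giving a predefined benchmark response. A simplified sketch (the exponential model form, the hypothetical frequencies, and the 10%-above-background benchmark are assumptions, not the study's exact choices):

```python
import numpy as np
from scipy.optimize import curve_fit, brentq

def exp_model(conc, background, a, b):
    """Exponential increase in micronucleus frequency with benzene concentration."""
    return background + a * (np.exp(b * conc) - 1.0)

# Hypothetical micronucleated-reticulocyte frequencies (%) per exposure group (ppm).
conc = np.array([0.0, 1.0, 10.0, 100.0])
mn_freq = np.array([0.10, 0.11, 0.16, 0.95])

(background, a, b), _ = curve_fit(exp_model, conc, mn_freq, p0=[0.1, 0.05, 0.02],
                                  bounds=([0.0, 0.0, 0.0], [1.0, 10.0, 0.1]))

# Benchmark response: 10% above the fitted background frequency.
bmr = 1.10 * background
bmc = brentq(lambda c: exp_model(c, background, a, b) - bmr, 1e-6, 100.0)
print(f"benchmark concentration (10% above background): {bmc:.3f} ppm")
```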

  5. Universal Screening for Emotional and Behavioral Problems: Fitting a Population-Based Model

    ERIC Educational Resources Information Center

    Schanding, G. Thomas, Jr.; Nowell, Kerri P.

    2013-01-01

    Schools have begun to adopt a population-based method to conceptualizing assessment and intervention of students; however, little empirical evidence has been gathered to support this shift in service delivery. The present study examined the fit of a population-based model in identifying students' behavioral and emotional functioning using a…

  6. Population based models of cortical drug response: insights from anaesthesia

    PubMed Central

    Bojak, Ingo; Liley, David T. J.

    2008-01-01

    A great explanatory gap lies between the molecular pharmacology of psychoactive agents and the neurophysiological changes they induce, as recorded by neuroimaging modalities. Causally relating the cellular actions of psychoactive compounds to their influence on population activity is experimentally challenging. Recent developments in the dynamical modelling of neural tissue have attempted to span this explanatory gap between microscopic targets and their macroscopic neurophysiological effects via a range of biologically plausible dynamical models of cortical tissue. Such theoretical models allow exploration of neural dynamics, in particular their modification by drug action. The ability to theoretically bridge scales is due to a biologically plausible averaging of cortical tissue properties. In the resulting macroscopic neural field, individual neurons need not be explicitly represented (as in neural networks). The following paper aims to provide a non-technical introduction to the mean field population modelling of drug action and its recent successes in modelling anaesthesia. PMID:19003456

  7. POPULATION-BASED EXPOSURE MODELING FOR AIR POLLUTANTS AT EPA'S NATIONAL EXPOSURE RESEARCH LABORATORY

    EPA Science Inventory

    The US EPA's National Exposure Research Laboratory (NERL) has been developing, applying, and evaluating population-based exposure models to improve our understanding of the variability in personal exposure to air pollutants. Estimates of population variability are needed for E...

  8. Age-stratified thresholds of anti-Müllerian hormone improve prediction of polycystic ovary syndrome over a population-based threshold.

    PubMed

    Dewailly, Didier

    2017-09-26

    In this issue, Quinn et al. report their experience with the diagnostic value of the anti-Müllerian hormone (AMH) assay for the recognition of polycystic ovary syndrome (PCOS). This subject remains very much debated and, in particular, there is no consensus on a specific threshold discriminating PCOS from normal women. One of the reasons, but certainly not the only one, is the heterogeneity of the control groups between the various studies reported to date. This article is protected by copyright. All rights reserved.

  9. Estimating and modeling the cure fraction in population-based cancer survival analysis.

    PubMed

    Lambert, Paul C; Thompson, John R; Weston, Claire L; Dickman, Paul W

    2007-07-01

    In population-based cancer studies, cure is said to occur when the mortality (hazard) rate in the diseased group of individuals returns to the same level as that expected in the general population. The cure fraction (the proportion of patients cured of disease) is of interest to patients and is a useful measure to monitor trends in survival of curable disease. There are 2 main types of cure fraction model, the mixture cure fraction model and the non-mixture cure fraction model, with most previous work concentrating on the mixture cure fraction model. In this paper, we extend the parametric non-mixture cure fraction model to incorporate background mortality, thus providing estimates of the cure fraction in population-based cancer studies. We compare the estimates of relative survival and the cure fraction between the 2 types of model and also investigate the importance of modeling the ancillary parameters in the selected parametric distribution for both types of model.
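
    In this relative-survival setting, all-cause survival factors into the expected (background) survival and a relative-survival term, and the cure fraction π appears either as a mixing proportion or, in the non-mixture form, as the asymptote of the relative survival. A sketch of the two standard formulations (notation is generic, not necessarily the paper's):

```latex
% Relative survival R(t): all-cause survival = expected (background) survival x relative survival
S(t) = S^{*}(t)\,R(t)

% Mixture cure model: a fraction \pi is cured, the remainder follow the survival S_u(t) of the "uncured"
R_{\mathrm{mix}}(t) = \pi + (1-\pi)\,S_u(t)

% Non-mixture cure model: R(t) \to \pi as t \to \infty, with F(t) a proper distribution function
R_{\mathrm{nonmix}}(t) = \pi^{F(t)}
```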

  10. Probabilistic estimation of residential air exchange rates for population-based human exposure modeling

    EPA Science Inventory

    Residential air exchange rates (AERs) are a key determinant in the infiltration of ambient air pollution indoors. Population-based human exposure models using probabilistic approaches to estimate personal exposure to air pollutants have relied on input distributions from AER meas...

  11. Validation of population-based disease simulation models: a review of concepts and methods

    PubMed Central

    2010-01-01

    Background: Computer simulation models are used increasingly to support public health research and policy, but questions about their quality persist. The purpose of this article is to review the principles and methods for validation of population-based disease simulation models. Methods: We developed a comprehensive framework for validating population-based chronic disease simulation models and used this framework in a review of published model validation guidelines. Based on the review, we formulated a set of recommendations for gathering evidence of model credibility. Results: Evidence of model credibility derives from examining: 1) the process of model development, 2) the performance of a model, and 3) the quality of decisions based on the model. Many important issues in model validation are insufficiently addressed by current guidelines. These issues include a detailed evaluation of different data sources, graphical representation of models, computer programming, model calibration, between-model comparisons, sensitivity analysis, and predictive validity. The role of external data in model validation depends on the purpose of the model (e.g., decision analysis versus prediction). More research is needed on the methods of comparing the quality of decisions based on different models. Conclusion: As the role of simulation modeling in population health is increasing and models are becoming more complex, there is a need for further improvements in model validation methodology and common standards for evaluating model credibility. PMID:21087466

  12. Models of population-based analyses for data collected from large extended families.

    PubMed

    Wang, Wenyu; Lee, Elisa T; Howard, Barbara V; Fabsitz, Richard R; Devereux, Richard B; MacCluer, Jean W; Laston, Sandra; Comuzzie, Anthony G; Shara, Nawar M; Welty, Thomas K

    2010-12-01

    Large studies of extended families usually collect valuable phenotypic data that may have scientific value for purposes other than testing genetic hypotheses if the families were not selected in a biased manner. These purposes include assessing population-based associations of diseases with risk factors/covariates and estimating population characteristics such as disease prevalence and incidence. Relatedness among participants, however, violates the traditional assumption of independent observations in these classic analyses. The commonly used adjustment method for relatedness in population-based analyses is to use marginal models, in which clusters (families) are assumed to be independent (unrelated) with a simple and identical covariance (family) structure such as the independent, exchangeable and unstructured covariance structures. However, using these simple covariance structures may not be optimally appropriate for outcomes collected from large extended families, and may under- or over-estimate the variances of estimators and thus lead to uncertainty in inferences. Moreover, the assumption that families are unrelated with an identical family structure in a marginal model may not be satisfied for family studies with large extended families. The aim of this paper is to propose models that combine marginal model approaches with a covariance structure for assessing population-based associations of diseases with their risk factors/covariates and estimating population characteristics for epidemiological studies, while adjusting for the complicated relatedness among outcomes (continuous/categorical, normally/non-normally distributed) collected from large extended families. We also discuss theoretical issues of the proposed models and show that the proposed models and covariance structure are appropriate for and capable of achieving the aim.
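
    The marginal-model adjustment that the abstract builds on can be illustrated with generalized estimating equations, treating each family as a cluster with an exchangeable working covariance. A minimal sketch using statsmodels (the simulated data and variable names are hypothetical; the covariance structures proposed in the paper for large extended pedigrees are more elaborate than the exchangeable structure shown here):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Hypothetical data: 50 families, 6 members each, one risk factor and one outcome.
n_fam, fam_size = 50, 6
family = np.repeat(np.arange(n_fam), fam_size)
risk = rng.normal(size=n_fam * fam_size)
fam_effect = np.repeat(rng.normal(scale=0.5, size=n_fam), fam_size)  # shared family effect
outcome = 1.0 + 0.3 * risk + fam_effect + rng.normal(size=n_fam * fam_size)

df = pd.DataFrame({"outcome": outcome, "risk": risk, "family": family})

# GEE marginal model with an exchangeable working correlation within families.
model = smf.gee("outcome ~ risk", groups="family", data=df,
                cov_struct=sm.cov_struct.Exchangeable(),
                family=sm.families.Gaussian())
result = model.fit()
print(result.summary())
```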

  13. Differential equation models for sharp threshold dynamics.

    PubMed

    Schramm, Harrison C; Dimitrov, Nedialko B

    2014-01-01

    We develop an extension to differential equation models of dynamical systems to allow us to analyze probabilistic threshold dynamics that fundamentally and globally change system behavior. We apply our novel modeling approach to two cases of interest: a model of infectious disease modified for malware where a detection event drastically changes dynamics by introducing a new class in competition with the original infection; and the Lanchester model of armed conflict, where the loss of a key capability drastically changes the effectiveness of one of the sides. We derive and demonstrate a step-by-step, repeatable method for applying our novel modeling approach to an arbitrary system, and we compare the resulting differential equations to simulations of the system's random progression. Our work leads to a simple and easily implemented method for analyzing probabilistic threshold dynamics using differential equations.
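
    The extension described lets a probabilistic threshold event switch the system to new dynamics mid-trajectory. A much-simplified sketch of that mechanism for the malware example: integrate an SIR-like model up to a detection time, then continue with an added removal pathway (the equations, rates, and fixed detection time below are illustrative, not the paper's formulation):

```python
import numpy as np
from scipy.integrate import solve_ivp

beta, gamma, kappa = 0.4, 0.1, 0.6   # infection, recovery, post-detection removal rates

def pre_detection(t, y):
    s, i, r = y
    return [-beta * s * i, beta * s * i - gamma * i, gamma * i]

def post_detection(t, y):
    # After the detection event, infected hosts are also removed at rate kappa.
    s, i, r = y
    return [-beta * s * i, beta * s * i - (gamma + kappa) * i, (gamma + kappa) * i]

t_detect = 15.0                               # threshold/detection event time (illustrative)
sol1 = solve_ivp(pre_detection, (0.0, t_detect), [0.99, 0.01, 0.0])
sol2 = solve_ivp(post_detection, (t_detect, 60.0), sol1.y[:, -1])

print(f"infected fraction at detection: {sol1.y[1, -1]:.3f}")
print(f"final removed fraction:         {sol2.y[2, -1]:.3f}")
```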

  14. Development of a population-based microsimulation model of osteoarthritis in Canada.

    PubMed

    Kopec, J A; Sayre, E C; Flanagan, W M; Fines, P; Cibere, J; Rahman, Md M; Bansback, N J; Anis, A H; Jordan, J M; Sobolev, B; Aghajanian, J; Kang, W; Greidanus, N V; Garbuz, D S; Hawker, G A; Badley, E M

    2010-03-01

    The purpose of the study was to develop a population-based simulation model of osteoarthritis (OA) in Canada that can be used to quantify the future health and economic burden of OA under a range of scenarios for changes in the OA risk factors and treatments. In this article we describe the overall structure of the model, sources of data, derivation of key input parameters for the epidemiological component of the model, and preliminary validation studies. We used the Population Health Model (POHEM) platform to develop a stochastic continuous-time microsimulation model of physician-diagnosed OA. Incidence rates were calibrated to agree with administrative data for the province of British Columbia, Canada. The effect of obesity on OA incidence and the impact of OA on health-related quality of life (HRQL) were modeled using Canadian national surveys. Incidence rates of OA in the model increase approximately linearly with age in both sexes between the ages of 50 and 80 and plateau in the very old. In those aged 50+, the rates are substantially higher in women. At baseline, the overall prevalence of OA is 11.5% (13.6% in women and 9.3% in men). The OA hazard ratios for obesity are 2.0 in women and 1.7 in men. The effect of OA diagnosis on HRQL, as measured by the Health Utilities Index Mark 3 (HUI3), is to reduce it by 0.10 in women and 0.14 in men. We describe the development of the first population-based microsimulation model of OA. Strengths of this model include the use of large population databases to derive the key parameters and the application of modern microsimulation technology. Limitations of the model reflect the limitations of administrative and survey data and gaps in the epidemiological and HRQL literature. Copyright 2009 Osteoarthritis Research Society International. All rights reserved.
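
    A microsimulation of this kind gives each simulated person an age- and sex-specific OA incidence rate, modified by risk factors such as obesity, and accumulates diagnoses over the life course. A toy discrete-time sketch of that structure (the baseline rates and prevalences are placeholders; only the obesity hazard ratios of 2.0 and 1.7 are taken from the abstract):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

sex = rng.integers(0, 2, n)                 # 0 = male, 1 = female
obese = rng.random(n) < 0.25                # hypothetical obesity prevalence
hr_obese = np.where(sex == 1, 2.0, 1.7)     # hazard ratios reported in the abstract

diagnosed = np.zeros(n, dtype=bool)
for age in range(50, 81):                   # simulate ages 50-80 in one-year steps
    base_rate = 0.001 + 0.0004 * (age - 50)          # illustrative, roughly linear in age
    rate = base_rate * np.where(sex == 1, 1.3, 1.0)  # higher incidence in women
    rate = np.where(obese, rate * hr_obese, rate)
    new_cases = (~diagnosed) & (rng.random(n) < rate)
    diagnosed |= new_cases

print(f"simulated OA prevalence at age 80: {diagnosed.mean():.3f}")
```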

  15. Population-based modeling of the progression of apoptosis in mammalian cell culture.

    PubMed

    Meshram, Mukesh; Naderi, Saeideh; McConkey, Brendan; Budman, Hector; Scharer, Jeno; Ingalls, Brian

    2012-05-01

    The production of biopharmaceuticals from mammalian cell culture is hindered by apoptosis, which is the primary cause of cell death in these cultures. As a tool for optimization of culture yield, this study presents a population-based model describing the progression of apoptosis in a monoclonal antibody (mAb)-producing Chinese hamster ovary (CHO) cell culture. Because mAb production does not cease when apoptosis begins, the model was designed to incorporate subpopulations at various stages in the progression of apoptosis. The model was validated against intracellular measurements of caspase activity as well as cell density, nutrient levels, and toxic metabolites. Since the specific details of apoptotic mechanisms have not been elucidated in this cell line, we employed a model comparison analysis that suggests the most plausible pathways of activation. Copyright © 2011 Wiley Periodicals, Inc.

  16. Education and Successful Aging Trajectories: A Longitudinal Population-Based Latent Variable Modelling Analysis.

    PubMed

    Cosco, Theodore D; Stephan, Blossom C M; Brayne, Carol; Muniz, Graciela

    2017-10-11

    As the population ages, interest is increasing in studying aging well. However, more refined means of examining predictors of biopsychosocial conceptualizations of successful aging (SA) are required. Existing evidence of the relationship between early-life education and later-life SA is unclear. The Successful Aging Index (SAI) was mapped onto the Cognitive Function and Aging Study (CFAS), a longitudinal population-based cohort (n = 1,141). SAI scores were examined using growth mixture modelling (GMM) to identify SA trajectories. Unadjusted and adjusted (age, sex, occupational status) ordinal logistic regressions were conducted to examine the association between trajectory membership and education level. GMM identified a three-class model, capturing high, moderate, and low functioning trajectories. Adjusted ordinal logistic regression models indicated that individuals in higher SAI classes were significantly more likely to have higher educational attainment than individuals in the lower SAI classes. These results provide evidence of a life course link between education and SA.

  17. Scalable Entity-Based Modeling of Population-Based Systems, Final LDRD Report

    SciTech Connect

    Cleary, A J; Smith, S G; Vassilevska, T K; Jefferson, D R

    2005-01-27

    The goal of this project has been to develop tools, capabilities and expertise in the modeling of complex population-based systems via scalable entity-based modeling (EBM). Our initial focal application domain has been the dynamics of large populations exposed to disease-causing agents, a topic of interest to the Department of Homeland Security in the context of bioterrorism. In the academic community, discrete simulation technology based on individual entities has shown initial success, but the technology has not been scaled to the problem sizes or computational resources of LLNL. Our developmental emphasis has been on the extension of this technology to parallel computers and maturation of the technology from an academic to a lab setting.

  18. Comprehensive, Population-Based Sensitivity Analysis of a Two-Mass Vocal Fold Model

    PubMed Central

    Robertson, Daniel; Zañartu, Matías; Cook, Douglas

    2016-01-01

    Previous vocal fold modeling studies have generally focused on generating detailed data regarding a narrow subset of possible model configurations. These studies can be interpreted to be the investigation of a single subject under one or more vocal conditions. In this study, a broad population-based sensitivity analysis is employed to examine the behavior of a virtual population of subjects and to identify trends between virtual individuals as opposed to investigating a single subject or model instance. Four different sensitivity analysis techniques were used in accomplishing this task. Influential relationships between model input parameters and model outputs were identified, and an exploration of the model’s parameter space was conducted. Results indicate that the behavior of the selected two-mass model is largely dominated by complex interactions, and that few input-output pairs have a consistent effect on the model. Results from the analysis can be used to increase the efficiency of optimization routines of reduced-order models used to investigate voice abnormalities. Results also demonstrate the types of challenges and difficulties to be expected when applying sensitivity analyses to more complex vocal fold models. Such challenges are discussed and recommendations are made for future studies. PMID:26845452
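
    A population-based sensitivity analysis in this sense samples the model's parameter space broadly and attributes output variance to individual inputs and their interactions. A generic sketch using Sobol indices via the SALib package on a stand-in function (the two-mass vocal fold model is not reproduced here; the parameter names, bounds, and toy_model output are placeholders):

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Placeholder parameter space standing in for two-mass model inputs.
problem = {
    "num_vars": 3,
    "names": ["subglottal_pressure", "mass_ratio", "stiffness"],
    "bounds": [[0.5, 2.0], [0.5, 2.0], [0.5, 2.0]],
}

def toy_model(params):
    """Stand-in scalar output with an interaction term, for illustration only."""
    p, m, k = params.T
    return p * np.sqrt(k / m) + 0.5 * p * k

X = saltelli.sample(problem, 1024)       # quasi-random samples of the parameter space
Y = toy_model(X)
Si = sobol.analyze(problem, Y)

for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
    print(f"{name:>20}  first-order={s1:5.2f}  total-order={st:5.2f}")
```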

  19. Threshold modeling of extreme spatial rainfall

    NASA Astrophysics Data System (ADS)

    Thibaud, E.; Davison, A.

    2013-12-01

    Complex events such as sustained extreme precipitation have major effects on human populations and environmental sustainability, and there is a growing interest in modeling them realistically. For risk assessment based on spatial quantities such as the total amount of rainfall falling over a region, it is necessary to properly model the dependence among extremes over that region, based on data from perhaps only a few sites within it. We propose an approach to spatial modeling of extreme rainfall, based on max-stable processes fitted using partial duration series and a censored threshold likelihood function. The resulting models are coherent with classical extreme-value theory and allow the consistent treatment of spatial dependence of rainfall using ideas related to those of classical geostatistics. The method can be used to produce simulations needed for hydrological models, and in particular for the generation of spatially heterogeneous extreme rainfall fields over catchments. We illustrate the ideas through data from the Val Ferret watershed in the Swiss Alps, based on daily cumulative rainfall totals recorded at 24 stations for four summers, augmented by a longer series from nearby. References: Davison, A. C., Huser, R., Thibaud, E. (2013). Geostatistics of Dependent and Asymptotically Independent Extremes, Mathematical Geosciences, vol. 45, num. 5, p. 511-529, 2013, doi:10.1007/s11004-013-9469-y Thibaud, E., Mutzner, R., Davison A. C. (2013, to appear). Threshold modeling of extreme spatial rainfall, Water Resources Research, doi:10.1002/wrcr.20329

  1. An Activation Threshold Model for Response Inhibition

    PubMed Central

    MacDonald, Hayley J.; McMorland, Angus J. C.; Stinear, Cathy M.; Coxon, James P.; Byblow, Winston D.

    2017-01-01

    Reactive response inhibition (RI) is the cancellation of a prepared response when it is no longer appropriate. Selectivity of RI can be examined by cueing the cancellation of one component of a prepared multi-component response. This substantially delays execution of other components. There is debate regarding whether this response delay is due to a selective neural mechanism. Here we propose a computational activation threshold model (ATM) and test it against a classical “horse-race” model using behavioural and neurophysiological data from partial RI experiments. The models comprise both facilitatory and inhibitory processes that compete upstream of motor output regions. Summary statistics (means and standard deviations) of predicted muscular and neurophysiological data were fit in both models to equivalent experimental measures by minimizing a Pearson Chi-square statistic. The ATM best captured behavioural and neurophysiological dynamics of partial RI. The ATM demonstrated that the observed modulation of corticomotor excitability during partial RI can be explained by nonselective inhibition of the prepared response. The inhibition raised the activation threshold to a level that could not be reached by the original response. This was necessarily followed by an additional phase of facilitation representing a secondary activation process in order to reach the new inhibition threshold and initiate the executed component of the response. The ATM offers a mechanistic description of the neural events underlying RI, in which partial movement cancellation results from a nonselective inhibitory event followed by subsequent initiation of a new response. The ATM provides a framework for considering and exploring the neuroanatomical constraints that underlie RI. PMID:28085907

  2. Threshold modeling of extreme spatial rainfall

    NASA Astrophysics Data System (ADS)

    Thibaud, E.; Mutzner, R.; Davison, A. C.

    2013-08-01

    We propose an approach to spatial modeling of extreme rainfall, based on max-stable processes fitted using partial duration series and a censored threshold likelihood function. The resulting models are coherent with classical extreme-value theory and allow the consistent treatment of spatial dependence of rainfall using ideas related to those of classical geostatistics. We illustrate the ideas through data from the Val Ferret watershed in the Swiss Alps, based on daily cumulative rainfall totals recorded at 24 stations for four summers, augmented by a longer series from nearby. We compare the fits of different statistical models appropriate for spatial extremes, select that best fitting our data, and compare return level estimates for the total daily rainfall over the stations. The method can be used in other situations to produce simulations needed for hydrological models, and in particular, for the generation of spatially heterogeneous extreme rainfall fields over catchments.
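
    The censored threshold likelihood used here generalizes the familiar univariate peaks-over-threshold idea to spatial max-stable processes. As a single-site illustration of the threshold ingredient only (not the spatial fit of the paper), exceedances of daily rainfall over a high threshold can be modelled with a generalized Pareto distribution; the data below are simulated placeholders:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Hypothetical daily rainfall totals (mm) at one station over four summers.
rain = rng.gamma(shape=0.4, scale=8.0, size=4 * 92)

u = np.quantile(rain, 0.95)                    # high threshold
excess = rain[rain > u] - u                    # exceedances above the threshold

# Fit a generalized Pareto distribution to the exceedances (location fixed at 0).
shape, loc, scale = stats.genpareto.fit(excess, floc=0.0)

# 20-summer return level: the value exceeded on average once per 20 summers.
rate = excess.size / rain.size                 # exceedance rate per day
m = 20 * 92                                    # days in 20 summers
ret_level = u + stats.genpareto.ppf(1 - 1 / (m * rate), shape, loc=0.0, scale=scale)
print(f"threshold u = {u:.1f} mm, 20-summer return level = {ret_level:.1f} mm")
```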

  3. Dynamic model of the threshold displacement energy

    NASA Astrophysics Data System (ADS)

    Kupchishin, A. I.; Kupchishin, A. A.

    2017-01-01

    A dynamic (cascade-probability) model for calculating the threshold displacement energy of knocked-out atoms (Ed) was proposed, taking into account the influence of the instability zone (spontaneous recombination). A general expression for Ed was derived as a function of the formation energies of interstitial atoms (Ef) and vacancies (Ei), the energy transfer coefficient α, and the number of interactions i needed to move the atom out of the instability zone. The parameters of primary particles were calculated. Comparison of the calculations with experimental data shows satisfactory agreement.

  4. Solution of an infection model near threshold

    NASA Astrophysics Data System (ADS)

    Kessler, David A.; Shnerb, Nadav M.

    2007-07-01

    We study the susceptible-infected-recovered model of epidemics in the vicinity of the threshold infectivity. We derive the distribution of total outbreak size in the limit of large population size N. This is accomplished by mapping the problem to the first passage time of a random walker subject to a drift that increases linearly with time. We recover the scaling results of Ben-Naim and Krapivsky that the effective maximal size of the outbreak scales as N^(2/3), with the average scaling as N^(1/3), with an explicit form for the scaling function.
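
    The N^(2/3) scaling of the largest outbreaks at threshold can be checked with a direct stochastic simulation of the critical SIR model (R0 = 1). A small Monte Carlo sketch in the chain-binomial (Reed-Frost) style, offered as an illustration of the scaling claim rather than as the authors' analytical random-walk mapping:

```python
import numpy as np

rng = np.random.default_rng(2)

def outbreak_size(N, r0=1.0):
    """Simulate one SIR outbreak in discrete generations; return the total infected."""
    s, infected, total = N - 1, 1, 1
    while infected > 0 and s > 0:
        # Each susceptible escapes all current infectives with probability (1 - r0/N)^infected.
        p_inf = 1.0 - (1.0 - r0 / N) ** infected
        new = rng.binomial(s, p_inf)
        s -= new
        total += new
        infected = new
    return total

for N in (1_000, 10_000, 100_000):
    sizes = np.array([outbreak_size(N) for _ in range(2_000)])
    # At threshold the mean outbreak grows roughly like N^(1/3),
    # while the largest outbreaks grow roughly like N^(2/3).
    print(f"N={N:>6}  mean={sizes.mean():8.1f}  max={int(sizes.max()):8d}")
```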

  5. A Population-based Model of Local Control and Survival Benefit of Radiotherapy for Lung Cancer.

    PubMed

    Shafiq, J; Hanna, T P; Vinod, S K; Delaney, G P; Barton, M B

    2016-10-01

    To estimate the population-based locoregional control and overall survival benefits of radiotherapy for lung cancer if the whole population were treated according to evidence-based guidelines. These estimates were based on a published radiotherapy utilisation (RTU) model that has been used to estimate the demand and planning of radiotherapy services nationally and internationally. The lung cancer RTU model was extended to incorporate an estimate of benefits of radiotherapy alone, and of radiotherapy in conjunction with concurrent chemotherapy (CRT). Benefits were defined as the proportional gains in locoregional control and overall survival from radiotherapy over no radiotherapy for radical indications, and from postoperative radiotherapy over surgery alone for adjuvant indications. A literature review (1990-2015) was conducted to identify benefit estimates of individual radiotherapy indications and summed to estimate the population-based gains for these outcomes. Model robustness was tested through univariate and multivariate sensitivity analyses. If evidence-based radiotherapy recommendations are followed for the whole lung cancer population, the model estimated that radiotherapy alone would result in a gain of 8.3% (95% confidence interval 7.4-9.2%) in 5 year locoregional control, 11.4% (10.8-12.0%) in 2 year overall survival and 4.0% (3.6-4.4%) in 5 year overall survival. For the use of CRT over radiotherapy alone, estimated benefits would be: locoregional control 1.7% (0.8-2.4%), 2 year overall survival 1.7% (0.5-2.8%) and 5 year overall survival 1.2% (0.7-1.9%). The model provided estimates of radiotherapy benefit that could be achieved if treatment guidelines are followed for all cancer patients. These can be used as a benchmark so that the effects of a shortfall in the utilisation of radiotherapy can be better understood and addressed. The model can be adapted to other populations with known epidemiological parameters to ensure the planning of equitable

  6. A threshold model of investor psychology

    NASA Astrophysics Data System (ADS)

    Cross, Rod; Grinfeld, Michael; Lamba, Harbir; Seaman, Tim

    2005-08-01

    We introduce a class of agent-based market models founded upon simple descriptions of investor psychology. Agents are subject to various psychological tensions induced by market conditions and endowed with a minimal ‘personality’. This personality consists of a threshold level for each of the tensions being modeled, and the agent reacts whenever a tension threshold is reached. This paper considers an elementary model including just two such tensions. The first is ‘cowardice’, which is the stress caused by remaining in a minority position with respect to overall market sentiment and leads to herding-type behavior. The second is ‘inaction’, which is the increasing desire to act or re-evaluate one's investment position. There is no inductive learning by agents and they are only coupled via the global market price and overall market sentiment. Even incorporating just these two psychological tensions, important stylized facts of real market data, including fat-tails, excess kurtosis, uncorrelated price returns and clustered volatility over the timescale of a few days are reproduced. By then introducing an additional parameter that amplifies the effect of externally generated market noise during times of extreme market sentiment, long-time volatility correlations can also be recovered.
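
    In the model described, each agent carries a personal threshold for each psychological tension and switches position when a threshold is crossed. A stripped-down sketch of the 'cowardice' (herding) ingredient only, with a simple sentiment-driven price update (the parameters and update rules are my simplifications, not the published specification):

```python
import numpy as np

rng = np.random.default_rng(3)
n_agents, n_steps = 500, 2000

position = rng.choice([-1, 1], size=n_agents)        # +1 long, -1 short
threshold = rng.uniform(1.0, 4.0, size=n_agents)     # personal 'cowardice' thresholds
tension = np.zeros(n_agents)
returns = []

for _ in range(n_steps):
    sentiment = position.mean()                       # overall market sentiment
    # 'Cowardice': tension accumulates while an agent holds a minority position.
    minority = position * np.sign(sentiment) < 0
    tension += np.where(minority, abs(sentiment), 0.0)
    # Agents whose tension reaches their threshold switch sides (herding).
    switch = tension >= threshold
    position[switch] *= -1
    tension[switch] = 0.0
    # Price return driven by the change in sentiment plus external noise.
    ret = 2.0 * (position.mean() - sentiment) + 0.01 * rng.normal()
    returns.append(ret)

returns = np.array(returns)
kurt = ((returns - returns.mean()) ** 4).mean() / returns.var() ** 2 - 3.0
print(f"excess kurtosis of simulated returns: {kurt:.2f}")
```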

  7. A population based statistical model for daily geometric variations in the thorax.

    PubMed

    Szeto, Yenny Z; Witte, Marnix G; van Herk, Marcel; Sonke, Jan-Jakob

    2017-04-01

    To develop a population-based statistical model of the systematic interfraction geometric variations between the planning CT and first treatment week of lung cancer patients for inclusion as an uncertainty term in future probabilistic planning. Deformable image registrations between the planning CT and first week CBCTs of 235 lung cancer patients were used to generate deformation vector fields (DVFs) representing the geometric variations of lung cancer patients. Using a second deformable registration step, the average DVF per patient was mapped to an average patient CT. Subsequently, the dominant modes of systematic geometric variations were extracted using Principal Component Analysis (PCA). For evaluation, a leave-one-out cross-validation was performed. The first three PCA components mainly described cranial-caudal, anterior-posterior, and left-right variations, respectively. Fifty and 112 components were needed to describe 75% and 90% of the variance, respectively. An overall systematic variation of 3.6 mm SD was observed and could be described with an accuracy of about 1.0 mm with the PCA model. A PCA-based model for systematic geometric variations in the thorax was developed, and its accuracy determined. Such a model can serve as a basis for probability-based treatment planning in lung cancer patients. Copyright © 2017 Elsevier B.V. All rights reserved.
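
    The dominant modes of systematic variation are obtained by stacking each patient's mean deformation vector field (after mapping to the average-patient anatomy) into a row vector and applying principal component analysis. A schematic sketch of that bookkeeping step (array sizes and the random stand-in DVFs are assumptions; the deformable registrations themselves are not shown):

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical input: one mean deformation vector field (DVF) per patient, already
# resampled onto a common average-patient grid of shape (nz, ny, nx, 3).
n_patients, nz, ny, nx = 235, 10, 20, 20
dvfs = np.random.default_rng(4).normal(scale=2.0, size=(n_patients, nz, ny, nx, 3))

X = dvfs.reshape(n_patients, -1)            # one row of stacked displacement vectors per patient

pca = PCA()                                 # principal components of the systematic variation
scores = pca.fit_transform(X)

cum_var = np.cumsum(pca.explained_variance_ratio_)
print("components for 75% of variance:", int(np.searchsorted(cum_var, 0.75) + 1))
print("components for 90% of variance:", int(np.searchsorted(cum_var, 0.90) + 1))
```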

  8. Theoretical model for FCGR near the threshold

    NASA Astrophysics Data System (ADS)

    Lanteigne, Jacques; Baïlon, Jean-Paul

    1981-03-01

    A theoretical model for fatigue crack growth rate at low and near threshold stress intensity factor is developed. The crack tip is assumed to be a semicircular notch of radius ρ and incremental crack growth occurs along a distance 4ρ ahead of the crack tip. After analysis of the stress and strain distribution ahead of the crack tip, a relationship between the strain range and the stress intensity range is proposed. It is then assumed that the Manson-Coffin cumulative rule can be applied to a region of length 4ρ from the crack tip, where strain reversal occurs. Finally, a theoretical equation giving the fatigue crack growth rate is obtained and applied to several materials (316L stainless steel, 300M alloy steel, 70-30 α brass, 2618A and 7025 aluminum alloys). It is found that the model can be used to correlate fatigue crack growth rates with the mechanical properties of the materials, and to determine the threshold stress intensity factor, once the crack tip radius ρ is obtained from the previous data.

  9. Toxicogenetics: population-based testing of drug and chemical safety in mouse models.

    PubMed

    Rusyn, Ivan; Gatti, Daniel M; Wiltshire, Timothy; Kleeberger, Steven R; Threadgill, David W

    2010-08-01

    The rapid decline in the cost of dense genotyping is paving the way for new DNA sequence-based laboratory tests to move quickly into clinical practice, and to ultimately help realize the promise of 'personalized' therapies. These advances are based on the growing appreciation of genetics as an important dimension in science and the practice of investigative pharmacology and toxicology. On the clinical side, both the regulators and the pharmaceutical industry hope that the early identification of individuals prone to adverse drug effects will keep advantageous medicines on the market for the benefit of the vast majority of prospective patients. On the environmental health protection side, there is a clear need for better science to define the range and causes of susceptibility to adverse effects of chemicals in the population, so that the appropriate regulatory limits are established. In both cases, most of the research effort is focused on genome-wide association studies in humans where de novo genotyping of each subject is required. At the same time, the power of population-based preclinical safety testing in rodent models (e.g., mouse) remains to be fully exploited. Here, we highlight the approaches available to utilize the knowledge of DNA sequence and genetic diversity of the mouse as a species in mechanistic toxicology research. We posit that appropriate genetically defined mouse models may be combined with the limited data from human studies to not only discover the genetic determinants of susceptibility, but to also understand the molecular underpinnings of toxicity.

  10. Population based model of human embryonic stem cell (hESC) differentiation during endoderm induction.

    PubMed

    Task, Keith; Jaramillo, Maria; Banerjee, Ipsita

    2012-01-01

    The mechanisms by which human embryonic stem cells (hESC) differentiate to endodermal lineage have not been extensively studied. Mathematical models can aid in the identification of mechanistic information. In this work we use a population-based modeling approach to understand the mechanism of endoderm induction in hESC, performed experimentally with exposure to Activin A and Activin A supplemented with growth factors (basic fibroblast growth factor (FGF2) and bone morphogenetic protein 4 (BMP4)). The differentiating cell population is analyzed daily for cellular growth, cell death, and expression of the endoderm proteins Sox17 and CXCR4. The stochastic model starts with a population of undifferentiated cells, wherefrom it evolves in time by assigning each cell a propensity to proliferate, die and differentiate using certain user defined rules. Twelve alternate mechanisms which might describe the observed dynamics were simulated, and an ensemble parameter estimation was performed on each mechanism. A comparison of the quality of agreement of experimental data with simulations for several competing mechanisms led to the identification of one which adequately describes the observed dynamics under both induction conditions. The results indicate that hESC commitment to endoderm occurs through an intermediate mesendoderm germ layer which further differentiates into mesoderm and endoderm, and that during induction proliferation of the endoderm germ layer is promoted. Furthermore, our model suggests that CXCR4 is expressed in mesendoderm and endoderm, but is not expressed in mesoderm. Comparison between the two induction conditions indicates that supplementing Activin A with FGF2 and BMP4 enhances the kinetics of differentiation relative to Activin A alone. This mechanistic information can aid in the derivation of functional, mature cells from their progenitors. While applied to initial endoderm commitment of hESC, the model is general enough to be applicable either to a system of
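
    The population-based model described starts from undifferentiated cells and, at each step, lets every cell proliferate, die, or advance along the lineage according to assigned propensities. A toy discrete-time version of that bookkeeping for the mesendoderm-to-endoderm/mesoderm path (the propensities and branching probability are placeholders, not estimates from the paper):

```python
import numpy as np

rng = np.random.default_rng(5)
states = ["hESC", "mesendoderm", "endoderm", "mesoderm"]
transitions = {"hESC": "mesendoderm", "mesendoderm": "endoderm"}   # simplified lineage path
p_die, p_div, p_diff = 0.02, 0.15, 0.10    # per-cell, per-step propensities (placeholders)
p_meso = 0.3                               # mesendoderm cells branching to mesoderm instead

cells = ["hESC"] * 2_000
for day in range(1, 6):
    next_cells = []
    for state in cells:
        u = rng.random()
        if u < p_die:
            continue                               # cell death
        if u < p_die + p_div:
            next_cells += [state, state]           # proliferation
            continue
        if u < p_die + p_div + p_diff and state in transitions:
            if state == "mesendoderm" and rng.random() < p_meso:
                next_cells.append("mesoderm")      # branch of the lineage
            else:
                next_cells.append(transitions[state])
            continue
        next_cells.append(state)                   # no event this step
    cells = next_cells
    counts = {s: cells.count(s) for s in states}
    print(f"day {day}: {counts}")
```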

  11. Quantitative high-throughput screening for chemical toxicity in a population-based in vitro model.

    PubMed

    Lock, Eric F; Abdo, Nour; Huang, Ruili; Xia, Menghang; Kosyk, Oksana; O'Shea, Shannon H; Zhou, Yi-Hui; Sedykh, Alexander; Tropsha, Alexander; Austin, Christopher P; Tice, Raymond R; Wright, Fred A; Rusyn, Ivan

    2012-04-01

    A shift in toxicity testing from in vivo to in vitro may efficiently prioritize compounds, reveal new mechanisms, and enable predictive modeling. Quantitative high-throughput screening (qHTS) is a major source of data for computational toxicology, and our goal in this study was to aid in the development of predictive in vitro models of chemical-induced toxicity, anchored on interindividual genetic variability. Eighty-one human lymphoblast cell lines from 27 Centre d'Etude du Polymorphisme Humain trios were exposed to 240 chemical substances (12 concentrations, 0.26nM-46.0μM) and evaluated for cytotoxicity and apoptosis. qHTS screening in the genetically defined population produced robust and reproducible results, which allowed for cross-compound, cross-assay, and cross-individual comparisons. Some compounds were cytotoxic to all cell types at similar concentrations, whereas others exhibited interindividual differences in cytotoxicity. Specifically, the qHTS in a population-based human in vitro model system has several unique aspects that are of utility for toxicity testing, chemical prioritization, and high-throughput risk assessment. First, standardized and high-quality concentration-response profiling, with reproducibility confirmed by comparison with previous experiments, enables prioritization of chemicals for variability in interindividual range in cytotoxicity. Second, genome-wide association analysis of cytotoxicity phenotypes allows exploration of the potential genetic determinants of interindividual variability in toxicity. Furthermore, highly significant associations identified through the analysis of population-level correlations between basal gene expression variability and chemical-induced toxicity suggest plausible mode of action hypotheses for follow-up analyses. We conclude that as the improved resolution of genetic profiling can now be matched with high-quality in vitro screening data, the evaluation of the toxicity pathways and the effects of

  12. Toxicogenetics: population-based testing of drug and chemical safety in mouse models

    PubMed Central

    Rusyn, Ivan; Gatti, Daniel M; Wiltshire, Timothy; Kleeberger, Steven R; Threadgill, David W

    2011-01-01

    The rapid decline in the cost of dense genotyping is paving the way for new DNA sequence-based laboratory tests to move quickly into clinical practice, and to ultimately help realize the promise of ‘personalized’ therapies. These advances are based on the growing appreciation of genetics as an important dimension in science and the practice of investigative pharmacology and toxicology. On the clinical side, both the regulators and the pharmaceutical industry hope that the early identification of individuals prone to adverse drug effects will keep advantageous medicines on the market for the benefit of the vast majority of prospective patients. On the environmental health protection side, there is a clear need for better science to define the range and causes of susceptibility to adverse effects of chemicals in the population, so that the appropriate regulatory limits are established. In both cases, most of the research effort is focused on genome-wide association studies in humans where de novo genotyping of each subject is required. At the same time, the power of population-based preclinical safety testing in rodent models (e.g., mouse) remains to be fully exploited. Here, we highlight the approaches available to utilize the knowledge of DNA sequence and genetic diversity of the mouse as a species in mechanistic toxicology research. We posit that appropriate genetically defined mouse models may be combined with the limited data from human studies to not only discover the genetic determinants of susceptibility, but to also understand the molecular underpinnings of toxicity. PMID:20704464

  13. Predicting high-cost pediatric patients: derivation and validation of a population-based model.

    PubMed

    Leininger, Lindsey J; Saloner, Brendan; Wherry, Laura R

    2015-08-01

    Health care administrators often lack feasible methods to prospectively identify new pediatric patients with high health care needs, precluding the ability to proactively target appropriate population health management programs to these children. To develop and validate a predictive model identifying high-cost pediatric patients using parent-reported health (PRH) measures that can be easily collected in clinical and administrative settings. Retrospective cohort study using 2-year panel data from the 2001 to 2011 rounds of the Medical Expenditure Panel Survey. A total of 24,163 children aged 5-17 with family incomes below 400% of the federal poverty line were included in this study. Predictive performance, including the c-statistic, sensitivity, specificity, and predictive values, of multivariate logistic regression models predicting top-decile health care expenditures over a 1-year period. Seven independent domains of PRH measures were tested for predictive capacity relative to basic sociodemographic information: the Children with Special Health Care Needs (CSHCN) Screener; subjectively rated health status; prior year health care utilization; behavioral problems; asthma diagnosis; access to health care; and parental health status and access to care. The CSHCN screener and prior year utilization domains exhibited the highest incremental predictive gains over the baseline model. A model including sociodemographic characteristics, the CSHCN screener, and prior year utilization had a c-statistic of 0.73 (95% confidence interval, 0.70-0.74), surpassing the commonly used threshold to establish sufficient predictive capacity (c-statistic>0.70). The proposed prediction tool, comprising a simple series of PRH measures, accurately stratifies pediatric populations by their risk of incurring high health care costs.
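
    The prediction model here is a multivariate logistic regression whose discrimination is summarized by the c-statistic (area under the ROC curve) for top-decile spending. A generic sketch of that modelling and evaluation step (the simulated predictors stand in for the sociodemographic, CSHCN screener, and prior-year utilization measures; none of the numbers are from the study):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(6)
n = 24_163

# Hypothetical predictors: age, CSHCN screener positive, prior-year visit count.
X = np.column_stack([
    rng.integers(5, 18, n),          # age 5-17
    rng.random(n) < 0.2,             # CSHCN screener positive
    rng.poisson(3, n),               # prior-year utilization
]).astype(float)

# Hypothetical outcome: top-decile expenditure, made to depend on the predictors.
risk = -2.9 + 0.02 * X[:, 0] + 1.4 * X[:, 1] + 0.15 * X[:, 2]
y = rng.random(n) < 1.0 / (1.0 + np.exp(-risk))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

c_stat = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"c-statistic (AUC) on held-out data: {c_stat:.2f}")
```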

  14. Policy evaluation in diabetes prevention and treatment using a population-based macro simulation model: the MICADO model.

    PubMed

    van der Heijden, A A W A; Feenstra, T L; Hoogenveen, R T; Niessen, L W; de Bruijne, M C; Dekker, J M; Baan, C A; Nijpels, G

    2015-12-01

    To test a simulation model, the MICADO model, for estimating the long-term effects of interventions in people with and without diabetes. The MICADO model includes micro- and macrovascular diseases in relation to their risk factors. The strengths of this model are its population scope and the possibility to assess parameter uncertainty using probabilistic sensitivity analyses. Outcomes include incidence and prevalence of complications, quality of life, costs and cost-effectiveness. We externally validated MICADO's estimates of micro- and macrovascular complications in a Dutch cohort with diabetes (n = 498,400) by comparing these estimates with national and international empirical data. For the annual number of people undergoing amputations, MICADO's estimate was 592 (95% interquantile range 291-842), which compared well with the registered number of people with diabetes-related amputations in the Netherlands (728). The incidence of end-stage renal disease estimated using the MICADO model was 247 people (95% interquartile range 120-363), which was also similar to the registered incidence in the Netherlands (277 people). MICADO performed well in the validation of macrovascular outcomes of population-based cohorts, while it had more difficulty in reflecting a highly selected trial population. Validation by comparison with independent empirical data showed that the MICADO model simulates the natural course of diabetes and its micro- and macrovascular complications well. As a population-based model, MICADO can be applied for projections as well as scenario analyses to evaluate the long-term (cost-)effectiveness of population-level interventions targeting diabetes and its complications in the Netherlands or similar countries. © 2015 The Authors. Diabetic Medicine © 2015 Diabetes UK.

  15. Estimating and modelling cure in population-based cancer studies within the framework of flexible parametric survival models

    PubMed Central

    2011-01-01

    Background When the mortality among a cancer patient group returns to the same level as in the general population, that is, the patients no longer experience excess mortality, the patients still alive are considered "statistically cured". Cure models can be used to estimate the cure proportion as well as the survival function of the "uncured". One limitation of parametric cure models is that the functional form of the survival of the "uncured" has to be specified. It can sometimes be hard to find a survival function flexible enough to fit the observed data, for example, when there is high excess hazard within a few months from diagnosis, which is common among older age groups. This has led to the exclusion of older age groups in population-based cancer studies using cure models. Methods Here we have extended the flexible parametric survival model to incorporate cure as a special case to estimate the cure proportion and the survival of the "uncured". Flexible parametric survival models use splines to model the underlying hazard function, and therefore no parametric distribution has to be specified. Results We have compared the fit from standard cure models to our flexible cure model, using data on colon cancer patients in Finland. This new method gives similar results to a standard cure model, when it is reliable, and better fit when the standard cure model gives biased estimates. Conclusions Cure models within the framework of flexible parametric models enables cure modelling when standard models give biased estimates. These flexible cure models enable inclusion of older age groups and can give stage-specific estimates, which is not always possible from parametric cure models. PMID:21696598

  16. Comparison of models used for the determination of odor thresholds

    NASA Astrophysics Data System (ADS)

    Poostchi, E.; Gnyp, A. W.; Pierre, C. C. St.

    Several methods of data analysis used for the evaluation of odor detection thresholds have been examined through application to two samples of n-butanol. Panels of seven to ten people, working with a six-level IITRI ternary forced-choice olfactometer, were presented with initial concentrations of 99.5 and 52.1 ppm n-butanol during three trials. The ranking-plotting and ASTM E-679 methods were applied to the evaluation of discrimination-recognition thresholds of the odorous samples. It was found that single evaluations of detection or discrimination-recognition thresholds by either method were always within ±50% of the mean of six trials. The effects of successful guessing on the magnitudes of detection thresholds were examined in terms of a model based on the principle of maximum likelihood estimation applied to one, two and three trials of panel response. The magnitude of the discrimination threshold obtained by this method always fell between the detection and discrimination-recognition thresholds evaluated by the currently used models. The mean discrimination threshold of n-butanol for six trials was found to be 0.65 ± 0.25 ppm. It appears that the magnitude obtained from one trial with seven panel members would be sufficiently reliable for regulatory purposes when only one field sample is available, since subsequent trials did not produce threshold values better than ±40% of the mean of six tests involving seven and ten panel members exposed to two different initial concentrations.

  17. Analytical threshold voltage model for strained silicon GAA-TFET

    NASA Astrophysics Data System (ADS)

    Kang, Hai-Yan; Hu, Hui-Yong; Wang, Bin

    2016-11-01

    Tunnel field effect transistors (TFETs) are promising devices for low power applications. An analytical threshold voltage model, based on the channel surface potential and electric field obtained by solving the 2D Poisson’s equation, for strained silicon gate-all-around (GAA) TFETs is proposed. The variation of the threshold voltage with device parameters, such as the strain (Ge mole fraction x), gate oxide thickness, gate oxide permittivity, and channel length has also been investigated. The threshold voltage model is extracted using the peak transconductance method and is verified by good agreement with the results obtained from the TCAD simulation. Project supported by the National Natural Science Foundation of China (Grant No. 61474085).

  18. Effect of model uncertainty on failure detection - The threshold selector

    NASA Technical Reports Server (NTRS)

    Emami-Naeini, Abbas; Akhter, Muhammad M.; Rock, Stephen M.

    1988-01-01

    The performance of all failure detection, isolation, and accommodation (DIA) algorithms is influenced by the presence of model uncertainty. A unique framework is presented to incorporate a knowledge of modeling error in the analysis and design of failure detection systems. The tools being used are very similar to those in robust control theory. A concept is introduced called the threshold selector, which is a nonlinear inequality whose solution defines the set of detectable sensor failure signals. The threshold selector represents an innovative tool for analysis and synthesis of DIA algorithms. It identifies the optimal threshold to be used in innovations-based DIA algorithms. The optimal threshold is shown to be a function of the bound on modeling errors, the noise properties, the speed of DIA filters, and the classes of reference and failure signals. The size of the smallest detectable failure is also determined. The results are applied to a multivariable turbofan jet engine example, which demonstrates improvements compared to previous studies.

  20. Uncertainties in the Modelled CO2 Threshold for Antarctic Glaciation

    NASA Technical Reports Server (NTRS)

    Gasson, E.; Lunt, D. J.; DeConto, R.; Goldner, A.; Heinemann, M.; Huber, M.; LeGrande, A. N.; Pollard, D.; Sagoo, N.; Siddall, M.; Winguth, A.; Valdes, P. J.

    2014-01-01

    A frequently cited atmospheric CO2 threshold for the onset of Antarctic glaciation of approximately 780 parts per million by volume is based on the study of DeConto and Pollard (2003) using an ice sheet model and the GENESIS climate model. Proxy records suggest that atmospheric CO2 concentrations passed through this threshold across the Eocene-Oligocene transition approximately 34 million years ago. However, atmospheric CO2 concentrations may have been close to this threshold earlier than this transition, which is used by some to suggest the possibility of Antarctic ice sheets during the Eocene. Here we investigate the climate model dependency of the threshold for Antarctic glaciation by performing offline ice sheet model simulations using the climate from 7 different climate models with Eocene boundary conditions (HadCM3L, CCSM3, CESM1.0, GENESIS, FAMOUS, ECHAM5 and GISS_ER). These climate simulations are sourced from a number of independent studies, and as such the boundary conditions, which are poorly constrained during the Eocene, are not identical between simulations. The results of this study suggest that the atmospheric CO2 threshold for Antarctic glaciation is highly dependent on the climate model used and the climate model configuration. A large discrepancy between the climate model and ice sheet model grids for some simulations leads to a strong sensitivity to the lapse rate parameter.

  1. Uncertainties in the modelled CO2 threshold for Antarctic glaciation

    NASA Astrophysics Data System (ADS)

    Gasson, E.; Lunt, D. J.; DeConto, R.; Goldner, A.; Heinemann, M.; Huber, M.; LeGrande, A. N.; Pollard, D.; Sagoo, N.; Siddall, M.; Winguth, A.; Valdes, P. J.

    2014-03-01

    A frequently cited atmospheric CO2 threshold for the onset of Antarctic glaciation of ~780 ppmv is based on the study of DeConto and Pollard (2003) using an ice sheet model and the GENESIS climate model. Proxy records suggest that atmospheric CO2 concentrations passed through this threshold across the Eocene-Oligocene transition ~34 Ma. However, atmospheric CO2 concentrations may have been close to this threshold earlier than this transition, which is used by some to suggest the possibility of Antarctic ice sheets during the Eocene. Here we investigate the climate model dependency of the threshold for Antarctic glaciation by performing offline ice sheet model simulations using the climate from 7 different climate models with Eocene boundary conditions (HadCM3L, CCSM3, CESM1.0, GENESIS, FAMOUS, ECHAM5 and GISS_ER). These climate simulations are sourced from a number of independent studies, and as such the boundary conditions, which are poorly constrained during the Eocene, are not identical between simulations. The results of this study suggest that the atmospheric CO2 threshold for Antarctic glaciation is highly dependent on the climate model used and the climate model configuration. A large discrepancy between the climate model and ice sheet model grids for some simulations leads to a strong sensitivity to the lapse rate parameter.

  2. Octave-Band Thresholds for Modeled Reverberant Fields

    NASA Technical Reports Server (NTRS)

    Begault, Durand R.; Wenzel, Elizabeth M.; Tran, Laura L.; Anderson, Mark R.; Trejo, Leonard J. (Technical Monitor)

    1998-01-01

    Auditory thresholds for 10 subjects were obtained for speech stimuli in reverberation. The reverberation was produced and manipulated by 3-D audio modeling based on an actual room. The independent variables were octave-band filtering (bypassed, 0.25-2.0 kHz Fc) and reverberation time (0.2-1.1 sec). An ANOVA revealed significant effects (threshold range: -19 to -35 dB re 60 dB SRL).

  4. Simulation of Population-Based Commuter Exposure to NO2 Using Different Air Pollution Models

    PubMed Central

    Ragettli, Martina S.; Tsai, Ming-Yi; Braun-Fahrländer, Charlotte; de Nazelle, Audrey; Schindler, Christian; Ineichen, Alex; Ducret-Stich, Regina E.; Perez, Laura; Probst-Hensch, Nicole; Künzli, Nino; Phuleria, Harish C.

    2014-01-01

    We simulated commuter routes and long-term exposure to traffic-related air pollution during commute in a representative population sample in Basel (Switzerland), and evaluated three air pollution models with different spatial resolution for estimating commute exposures to nitrogen dioxide (NO2) as a marker of long-term exposure to traffic-related air pollution. Our approach includes spatially and temporally resolved data on actual commuter routes, travel modes and three air pollution models. Annual mean NO2 commuter exposures were similar between models. However, we found more within-city and within-subject variability in annual mean (±SD) NO2 commuter exposure with a high resolution dispersion model (40 ± 7 µg m−3, range: 21–61) than with a dispersion model with a lower resolution (39 ± 5 µg m−3; range: 24–51), and a land use regression model (41 ± 5 µg m−3; range: 24–54). Highest median cumulative exposures were calculated along motorized transport and bicycle routes, and the lowest for walking. For estimating commuter exposure within a city and being interested also in small-scale variability between roads, a model with a high resolution is recommended. For larger scale epidemiological health assessment studies, models with a coarser spatial resolution are likely sufficient, especially when study areas include suburban and rural areas. PMID:24823664

  5. Simulation of population-based commuter exposure to NO₂ using different air pollution models.

    PubMed

    Ragettli, Martina S; Tsai, Ming-Yi; Braun-Fahrländer, Charlotte; de Nazelle, Audrey; Schindler, Christian; Ineichen, Alex; Ducret-Stich, Regina E; Perez, Laura; Probst-Hensch, Nicole; Künzli, Nino; Phuleria, Harish C

    2014-05-12

    We simulated commuter routes and long-term exposure to traffic-related air pollution during commute in a representative population sample in Basel (Switzerland), and evaluated three air pollution models with different spatial resolution for estimating commute exposures to nitrogen dioxide (NO2) as a marker of long-term exposure to traffic-related air pollution. Our approach includes spatially and temporally resolved data on actual commuter routes, travel modes and three air pollution models. Annual mean NO2 commuter exposures were similar between models. However, we found more within-city and within-subject variability in annual mean (±SD) NO2 commuter exposure with a high resolution dispersion model (40 ± 7 µg m(-3), range: 21-61) than with a dispersion model with a lower resolution (39 ± 5 µg m(-3); range: 24-51), and a land use regression model (41 ± 5 µg m(-3); range: 24-54). Highest median cumulative exposures were calculated along motorized transport and bicycle routes, and the lowest for walking. For estimating commuter exposure within a city and being interested also in small-scale variability between roads, a model with a high resolution is recommended. For larger scale epidemiological health assessment studies, models with a coarser spatial resolution are likely sufficient, especially when study areas include suburban and rural areas.
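
    The route-based exposure calculation described above can be illustrated with a time-weighted average over commute segments. The sketch below assumes per-segment modelled NO2 concentrations and travel times; all values are hypothetical, and the study's GIS and travel-diary pipeline is not reproduced.

    ```python
    # Time-weighted commute exposure from per-segment concentrations and durations.
    no2_ug_m3 = [38.0, 52.5, 61.0, 44.0]   # modelled NO2 on each route segment (hypothetical)
    minutes   = [ 6.0,  3.5,  2.0,  8.5]   # time spent on each segment (hypothetical)

    total_min = sum(minutes)
    mean_exposure = sum(c * t for c, t in zip(no2_ug_m3, minutes)) / total_min
    cumulative    = sum(c * t / 60.0 for c, t in zip(no2_ug_m3, minutes))

    print(f"time-weighted mean: {mean_exposure:.1f} ug/m3 over {total_min:.0f} min")
    print(f"cumulative exposure: {cumulative:.1f} ug/m3*h")
    ```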

  6. Evaluation of a spatially resolved forest fire smoke model for population-based epidemiologic exposure assessment.

    PubMed

    Yao, Jiayun; Eyamie, Jeff; Henderson, Sarah B

    2016-01-01

    Exposure to forest fire smoke (FFS) is associated with multiple adverse health effects, mostly respiratory. Findings for cardiovascular effects have been inconsistent, possibly related to the limitations of conventional methods to assess FFS exposure. In previous work, we developed an empirical model to estimate smoke-related fine particulate matter (PM2.5) for all populated areas in British Columbia (BC), Canada. Here, we evaluate the utility of our model by comparing epidemiologic associations between modeled and measured PM2.5. For each local health area (LHA), we used Poisson regression to estimate the effects of PM2.5 estimates and measurements on counts of medication dispensations and outpatient physician visits. We then used meta-regression to estimate the overall effects. A 10 μg/m(3) increase in modeled PM2.5 was associated with increased salbutamol dispensations (RR=1.04, 95% CI 1.03-1.06), and physician visits for asthma (1.06, 1.04-1.08), COPD (1.02, 1.00-1.03), lower respiratory infections (1.03, 1.00-1.05), and otitis media (1.05, 1.03-1.07), all comparable to measured PM2.5. Effects on cardiovascular outcomes were only significant using model estimates in all LHAs during extreme fire days. This suggests that the exposure model is a promising tool for increasing the power of epidemiologic studies to detect the health effects of FFS via improved spatial coverage and resolution.
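
    The rate ratios quoted above come from Poisson regressions of health-outcome counts on smoke-related PM2.5. The sketch below shows how such a rate ratio per 10 μg/m3 can be estimated on synthetic daily counts; it is a simplified stand-in for the study's per-area models and meta-regression.

    ```python
    # Poisson regression of daily dispensation counts on PM2.5, reported as a
    # rate ratio (RR) per 10 ug/m3; synthetic data, illustrative only.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n_days = 365
    pm25 = rng.gamma(shape=2.0, scale=5.0, size=n_days)      # daily PM2.5, ug/m3
    lam = 20.0 * 1.04 ** (pm25 / 10.0)                       # assumed true RR of 1.04 per 10
    counts = rng.poisson(lam)

    X = sm.add_constant(pm25 / 10.0)                         # scale to per-10-ug/m3
    fit = sm.GLM(counts, X, family=sm.families.Poisson()).fit()
    rr = np.exp(fit.params[1])
    lo, hi = np.exp(fit.conf_int()[1])
    print(f"RR per 10 ug/m3: {rr:.3f} (95% CI {lo:.3f}-{hi:.3f})")
    ```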

  7. Modeling Associative Recognition: A Comparison of Two-High-Threshold, Two-High-Threshold Signal Detection, and Mixture Distribution Models

    ERIC Educational Resources Information Center

    Macho, Siegfried

    2004-01-01

    A 2-high-threshold signal detection (HTSDT) model, a mixture distribution (SON) model, and 2-high-threshold (HT) models with responses distributed over 1 or several response categories were fit to results of 6 experiments from 2 studies on associative recognition: R. Kelley and J. T. Wixted (2001) and A. P. Yonelinas (1997). HTSDT assumes that…

  9. Evaluation of a spatially resolved forest fire smoke model for population-based epidemiologic exposure assessment

    PubMed Central

    Yao, Jiayun; Eyamie, Jeff; Henderson, Sarah B

    2016-01-01

    Exposure to forest fire smoke (FFS) is associated with multiple adverse health effects, mostly respiratory. Findings for cardiovascular effects have been inconsistent, possibly related to the limitations of conventional methods to assess FFS exposure. In previous work, we developed an empirical model to estimate smoke-related fine particulate matter (PM2.5) for all populated areas in British Columbia (BC), Canada. Here, we evaluate the utility of our model by comparing epidemiologic associations between modeled and measured PM2.5. For each local health area (LHA), we used Poisson regression to estimate the effects of PM2.5 estimates and measurements on counts of medication dispensations and outpatient physician visits. We then used meta-regression to estimate the overall effects. A 10 μg/m3 increase in modeled PM2.5 was associated with increased salbutamol dispensations (RR=1.04, 95% CI 1.03–1.06), and physician visits for asthma (1.06, 1.04–1.08), COPD (1.02, 1.00–1.03), lower respiratory infections (1.03, 1.00–1.05), and otitis media (1.05, 1.03–1.07), all comparable to measured PM2.5. Effects on cardiovascular outcomes were only significant using model estimates in all LHAs during extreme fire days. This suggests that the exposure model is a promising tool for increasing the power of epidemiologic studies to detect the health effects of FFS via improved spatial coverage and resolution. PMID:25294305

  10. Setting conservation management thresholds using a novel participatory modeling approach.

    PubMed

    Addison, P F E; de Bie, K; Rumpff, L

    2015-10-01

    We devised a participatory modeling approach for setting management thresholds that show when management intervention is required to address undesirable ecosystem changes. This approach was designed to be used when management thresholds: must be set for environmental indicators in the face of multiple competing objectives; need to incorporate scientific understanding and value judgments; and will be set by participants with limited modeling experience. We applied our approach to a case study where management thresholds were set for a mat-forming brown alga, Hormosira banksii, in a protected area management context. Participants, including management staff and scientists, were involved in a workshop to test the approach, and set management thresholds to address the threat of trampling by visitors to an intertidal rocky reef. The approach involved trading off the environmental objective, to maintain the condition of intertidal reef communities, with social and economic objectives to ensure management intervention was cost-effective. Ecological scenarios, developed using scenario planning, were a key feature that provided the foundation for where to set management thresholds. The scenarios developed represented declines in percent cover of H. banksii that may occur under increased threatening processes. Participants defined 4 discrete management alternatives to address the threat of trampling and estimated the effect of these alternatives on the objectives under each ecological scenario. A weighted additive model was used to aggregate participants' consequence estimates. Model outputs (decision scores) clearly expressed uncertainty, which can be considered by decision makers and used to inform where to set management thresholds. This approach encourages a proactive form of conservation, where management thresholds and associated actions are defined a priori for ecological indicators, rather than reacting to unexpected ecosystem changes in the future. © 2015 The
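
    The weighted additive aggregation described above can be written in a few lines. The alternatives, objective weights, and consequence scores below are hypothetical placeholders chosen only to show how decision scores are formed; they are not the workshop's elicited values.

    ```python
    # Weighted additive value model: normalized consequence scores for each
    # management alternative are combined with objective weights.
    import numpy as np

    weights = np.array([0.5, 0.3, 0.2])    # reef condition, visitor access, cost (sum to 1)

    # rows = management alternatives, cols = objectives, scores on a 0-1 scale
    scores = np.array([
        [0.9, 0.2, 0.3],   # close the site
        [0.7, 0.6, 0.5],   # restrict access
        [0.5, 0.8, 0.7],   # education / signage only
        [0.2, 1.0, 1.0],   # no intervention
    ])

    decision_scores = scores @ weights
    labels = ["close", "restrict", "educate", "none"]
    print(dict(zip(labels, decision_scores.round(2))))
    print("highest-scoring alternative:", labels[int(np.argmax(decision_scores))])
    ```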

  11. Mathematical model for adaptive evolution of populations based on a complex domain

    PubMed Central

    Ibrahim, Rabha W.; Ahmad, M.Z.; Al-Janaby, Hiba F.

    2015-01-01

    A mutation is ultimately essential for adaptive evolution in all populations. Mutations arise all the time, but most are repaired by enzymes. Further, it is widely considered that the mechanism of evolution is natural selection acting on variation among organisms arising from random changes in their DNA, and the evidence for this is overwhelming. A mutation is an alteration of the structure of a gene, producing a variant form that may be transmitted to succeeding generations, caused by the modification of single base units in DNA, or by the deletion, insertion, or rearrangement of larger units of chromosomes or genes. In this paper, a mathematical model is introduced for this process. The model describes the time and space of the evolution; the spatial component is based on a complex domain. We show that the evolution is distributed according to the hypergeometric function, and the boundedness of the evolution is imposed by utilizing the Koebe function. PMID:26858564

  12. Identifying genetic loci associated with antidepressant drug response with drug-gene interaction models in a population-based study.

    PubMed

    Noordam, Raymond; Direk, Nese; Sitlani, Colleen M; Aarts, Nikkie; Tiemeier, Henning; Hofman, Albert; Uitterlinden, André G; Psaty, Bruce M; Stricker, Bruno H; Visser, Loes E

    2015-03-01

    It has been difficult to identify genes affecting drug response to Selective Serotonin Reuptake Inhibitors (SSRIs). We used multiple cross-sectional assessments of depressive symptoms in a population-based study to identify potential genetic interactions with SSRIs as a model to study genetic variants associated with SSRI response. This study, embedded in the prospective Rotterdam Study, included all successfully genotyped participants with data on depressive symptoms (CES-D scores). We used repeated measurement models to test multiplicative interaction between genetic variants and use of SSRIs on repeated CES-D scores. Besides a genome-wide analysis, we also performed an analysis restricted to genes related to the serotonergic signaling pathway. In 273 of 14,937 assessments of depressive symptoms in 6443 participants, use of an SSRI was recorded. After correction for multiple testing, no plausible loci were identified in the genome-wide analysis. However, among the top 10 independent loci with the lowest p-values, findings within two genes (FSHR and HMGB4) might be of interest. Among 26 genes related to the serotonergic signaling pathway, the rs6108160 polymorphism in the PLCB1 gene reached statistical significance after Bonferroni correction (p-value = 8.1e-5). Also, the widely replicated 102C > T polymorphism in the HTR2A gene showed a statistically significant drug-gene interaction with SSRI use. Therefore, the present study suggests that drug-gene interaction models on (repeated) cross-sectional assessments of depressive symptoms in a population-based study can identify potential loci that may influence SSRI response.
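
    The core of the analysis is a repeated-measures model with a multiplicative SNP-by-SSRI interaction on CES-D scores. The sketch below uses a linear mixed model with a random intercept per participant as a stand-in; the data are synthetic, and the study's exact covariance specification may differ.

    ```python
    # Drug-gene interaction on repeated depressive-symptom scores:
    # cesd ~ snp * ssri with a random intercept per participant (synthetic data).
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(7)
    n_subj, n_visits = 500, 3
    subj = np.repeat(np.arange(n_subj), n_visits)
    snp = np.repeat(rng.binomial(2, 0.3, n_subj), n_visits)    # allele dosage 0/1/2
    ssri = rng.binomial(1, 0.05, n_subj * n_visits)            # SSRI use at each assessment
    cesd = (8 + 1.0 * ssri + 0.2 * snp - 1.5 * snp * ssri
            + rng.normal(0, 4, n_subj * n_visits))

    df = pd.DataFrame(dict(subj=subj, snp=snp, ssri=ssri, cesd=cesd))
    fit = smf.mixedlm("cesd ~ snp * ssri", df, groups=df["subj"]).fit()
    print(fit.summary())          # the snp:ssri row is the interaction of interest
    ```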

  13. On the probability summation model for laser-damage thresholds

    NASA Astrophysics Data System (ADS)

    Clark, Clifton D.; Buffington, Gavin D.

    2016-01-01

    This paper explores the probability summation model in an attempt to provide insight to the model's utility and ultimately its validity. The model is a statistical description of multiple-pulse (MP) damage trends. It computes the probability of n pulses causing damage from knowledge of the single-pulse dose-response curve. Recently, the model has been used to make a connection between the observed n trends in MP damage thresholds for short pulses (<10 μs) and experimental uncertainties, suggesting that the observed trend is an artifact of experimental methods. We will consider the correct application of the model in this case. We also apply this model to the spot-size dependence of short pulse damage thresholds, which has not been done previously. Our results predict that the damage threshold trends with respect to the irradiated area should be similar to the MP damage threshold trends, and that observed spot-size dependence for short pulses seems to display this trend, which cannot be accounted for by the thermal models.
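
    The probability summation model itself is compact: if a single pulse causes damage with probability P1(E), then n independent pulses cause damage with Pn(E) = 1 - (1 - P1(E))^n, which pushes the n-pulse ED50 downward as n grows. The sketch below assumes a log-normal (probit) single-pulse dose-response purely for illustration.

    ```python
    # Probability-summation model: n-pulse damage probability from an assumed
    # log-normal single-pulse dose-response curve.
    import numpy as np
    from scipy.stats import norm
    from scipy.optimize import brentq

    ed50, sigma = 1.0, 0.25                              # assumed single-pulse parameters
    p1 = lambda e: norm.cdf(np.log(e / ed50) / sigma)    # single-pulse damage probability

    def ed50_n(n):
        """Per-pulse dose at which the n-pulse damage probability reaches 50%."""
        return brentq(lambda e: 1.0 - (1.0 - p1(e)) ** n - 0.5, 1e-3, 10.0)

    for n in (1, 10, 100, 1000):
        print(f"n = {n:4d}   ED50 per pulse = {ed50_n(n):.3f}")
    ```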

  14. Cascades in the Threshold Model for varying system sizes

    NASA Astrophysics Data System (ADS)

    Karampourniotis, Panagiotis; Sreenivasan, Sameet; Szymanski, Boleslaw; Korniss, Gyorgy

    2015-03-01

    A classical model in opinion dynamics is the Threshold Model (TM) aiming to model the spread of a new opinion based on the social drive of peer pressure. Under the TM a node adopts a new opinion only when the fraction of its first neighbors possessing that opinion exceeds a pre-assigned threshold. Cascades in the TM depend on multiple parameters, such as the number and selection strategy of the initially active nodes (initiators), and the threshold distribution of the nodes. For a uniform threshold in the network there is a critical fraction of initiators for which a transition from small to large cascades occurs, which for ER graphs is largely independent of the system size. Here, we study the spread contribution of each newly assigned initiator under the TM for different initiator selection strategies for synthetic graphs of various sizes. We observe that for ER graphs when large cascades occur, the spread contribution of the added initiator on the transition point is independent of the system size, while the contribution of the rest of the initiators converges to zero at infinite system size. This property is used for the identification of large transitions for various threshold distributions. Supported in part by ARL NS-CTA, ARO, ONR, and DARPA.
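
    The Threshold Model update rule described above is easy to simulate directly. The sketch below runs TM cascades on an Erdos-Renyi graph with a uniform threshold and randomly chosen initiators; the graph size, mean degree, threshold, and initiator fractions are illustrative, not the study's values.

    ```python
    # Threshold Model cascade on an ER graph: a node adopts once the fraction
    # of its adopted neighbors reaches its threshold.
    import random
    import networkx as nx

    def tm_cascade(g, initiators, threshold=0.5):
        active = set(initiators)
        changed = True
        while changed:
            changed = False
            for node in g.nodes:
                if node in active:
                    continue
                nbrs = list(g.neighbors(node))
                if nbrs and sum(n in active for n in nbrs) / len(nbrs) >= threshold:
                    active.add(node)
                    changed = True
        return active

    random.seed(0)
    g = nx.erdos_renyi_graph(n=2000, p=10 / 2000, seed=0)
    for frac in (0.05, 0.15, 0.25):
        init = random.sample(list(g.nodes), int(frac * g.number_of_nodes()))
        size = len(tm_cascade(g, init)) / g.number_of_nodes()
        print(f"initiator fraction {frac:.2f} -> final cascade fraction {size:.2f}")
    ```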

  15. Associations between five-factor model traits and perceived job strain: a population-based study.

    PubMed

    Törnroos, Maria; Hintsanen, Mirka; Hintsa, Taina; Jokela, Markus; Pulkki-Råback, Laura; Hutri-Kähönen, Nina; Keltikangas-Järvinen, Liisa

    2013-10-01

    This study examined the association between Five-Factor Model personality traits and perceived job strain. The sample consisted of 758 women and 614 men (aged 30-45 years in 2007) participating in the Young Finns study. Personality was assessed with the Neuroticism, Extraversion, Openness, Five-Factor Inventory (NEO-FFI) questionnaire and work stress according to Karasek's demand-control model of job strain. The associations between personality traits and job strain and its components were measured by linear regression analyses where the traits were first entered individually and then simultaneously. The results for the associations between individually entered personality traits showed that high neuroticism, low extraversion, low openness, low conscientiousness, and low agreeableness were associated with high job strain. High neuroticism, high openness, and low agreeableness were related to high demands, whereas high neuroticism, low extraversion, low openness, low conscientiousness, and low agreeableness were associated with low control. In the analyses for the simultaneously entered traits, high neuroticism, low openness, and low conscientiousness were associated with high job strain. In addition, high neuroticism was related to high demands and low control, whereas low extraversion was related to low demands and low control. Low openness and low conscientiousness were also related to low control. This study suggests that personality is related to perceived job strain. Perceptions of work stressors and decision latitude are not only indicators of structural aspects of work but also indicate that there are individual differences in how individuals experience their work environment.

  16. Peristomal Skin Complications Are Common, Expensive, and Difficult to Manage: A Population Based Cost Modeling Study

    PubMed Central

    Meisner, Søren; Lehur, Paul-Antoine; Moran, Brendan; Martins, Lina; Jemec, Gregor Borut Ernst

    2012-01-01

    Background Peristomal skin complications (PSCs) are the most common post-operative complications following creation of a stoma. Living with a stoma is a challenge, not only for the patient and their carers, but also for society as a whole. Due to methodological problems of PSC assessment, the associated health-economic burden of medium- to long-term complications has been poorly described. Aim The aim of the present study was to create a model to estimate treatment costs of PSCs using the standardized assessment Ostomy Skin Tool as a reference. The resultant model was applied to a real-life global data set of stoma patients (n = 3017) to determine the prevalence and financial burden of PSCs. Methods Eleven experienced stoma care nurses were interviewed to get a global understanding of a treatment algorithm that formed the basis of the cost analysis. The estimated costs were based on a seven week treatment period. PSC costs were estimated for five underlying diagnostic categories and three levels of severity. The estimated treatment costs of severe cases of PSCs were increased 2–5 fold for the different diagnostic categories of PSCs compared with mild cases. French unit costs were applied to the global data set. Results The estimated total average cost for a seven week treatment period (including appliances and accessories) was 263€ for those with PSCs (n = 1742) compared to 215€ for those without PSCs (n = 1172). A covariance analysis showed that leakage level had a significant impact on PSC cost from ‘rarely/never’ to ‘always/often’ p<0.00001 and from ‘rarely/never’ to ‘sometimes’ p = 0.0115. Conclusion PSCs are common and troublesome and the consequences are substantial, both for the patient and from a health economic viewpoint. PSCs should be diagnosed and treated at an early stage to prevent long term, debilitating and expensive complications. PMID:22679479

  17. Analysis of amyotrophic lateral sclerosis as a multistep process: a population-based modelling study

    PubMed Central

    Al-Chalabi, Ammar; Calvo, Andrea; Chio, Adriano; Colville, Shuna; Ellis, Cathy M; Hardiman, Orla; Heverin, Mark; Howard, Robin S; Huisman, Mark H B; Keren, Noa; Leigh, P Nigel; Mazzini, Letizia; Mora, Gabriele; Orrell, Richard W; Rooney, James; Scott, Kirsten M; Scotton, William J; Seelen, Meinie; Shaw, Christopher E; Sidle, Katie S; Swingler, Robert; Tsuda, Miho; Veldink, Jan H; Visser, Anne E; van den Berg, Leonard H; Pearce, Neil

    2014-01-01

    Summary Background Amyotrophic lateral sclerosis shares characteristics with some cancers, such as onset being more common in later life, progression usually being rapid, the disease affecting a particular cell type, and showing complex inheritance. We used a model originally applied to cancer epidemiology to investigate the hypothesis that amyotrophic lateral sclerosis is a multistep process. Methods We generated incidence data by age and sex from amyotrophic lateral sclerosis population registers in Ireland (registration dates 1995–2012), the Netherlands (2006–12), Italy (1995–2004), Scotland (1989–98), and England (2002–09), and calculated age and sex-adjusted incidences for each register. We regressed the log of age-specific incidence against the log of age with least squares regression. We did the analyses within each register, and also did a combined analysis, adjusting for register. Findings We identified 6274 cases of amyotrophic lateral sclerosis from a catchment population of about 34 million people. We noted a linear relationship between log incidence and log age in all five registers: England r2=0·95, Ireland r2=0·99, Italy r2=0·95, the Netherlands r2=0·99, and Scotland r2=0·97; overall r2=0·99. All five registers gave similar estimates of the linear slope ranging from 4·5 to 5·1, with overlapping confidence intervals. The combination of all five registers gave an overall slope of 4·8 (95% CI 4·5–5·0), with similar estimates for men (4·6, 4·3–4·9) and women (5·0, 4·5–5·5). Interpretation A linear relationship between the log incidence and log age of onset of amyotrophic lateral sclerosis is consistent with a multistage model of disease. The slope estimate suggests that amyotrophic lateral sclerosis is a six-step process. Identification of these steps could lead to preventive and therapeutic avenues. Funding UK Medical Research Council; UK Economic and Social Research Council; Ireland Health Research Board; The

  18. Analysis of amyotrophic lateral sclerosis as a multistep process: a population-based modelling study.

    PubMed

    Al-Chalabi, Ammar; Calvo, Andrea; Chio, Adriano; Colville, Shuna; Ellis, Cathy M; Hardiman, Orla; Heverin, Mark; Howard, Robin S; Huisman, Mark H B; Keren, Noa; Leigh, P Nigel; Mazzini, Letizia; Mora, Gabriele; Orrell, Richard W; Rooney, James; Scott, Kirsten M; Scotton, William J; Seelen, Meinie; Shaw, Christopher E; Sidle, Katie S; Swingler, Robert; Tsuda, Miho; Veldink, Jan H; Visser, Anne E; van den Berg, Leonard H; Pearce, Neil

    2014-11-01

    Amyotrophic lateral sclerosis shares characteristics with some cancers, such as onset being more common in later life, progression usually being rapid, the disease affecting a particular cell type, and showing complex inheritance. We used a model originally applied to cancer epidemiology to investigate the hypothesis that amyotrophic lateral sclerosis is a multistep process. We generated incidence data by age and sex from amyotrophic lateral sclerosis population registers in Ireland (registration dates 1995-2012), the Netherlands (2006-12), Italy (1995-2004), Scotland (1989-98), and England (2002-09), and calculated age and sex-adjusted incidences for each register. We regressed the log of age-specific incidence against the log of age with least squares regression. We did the analyses within each register, and also did a combined analysis, adjusting for register. We identified 6274 cases of amyotrophic lateral sclerosis from a catchment population of about 34 million people. We noted a linear relationship between log incidence and log age in all five registers: England r2=0·95, Ireland r2=0·99, Italy r2=0·95, the Netherlands r2=0·99, and Scotland r2=0·97; overall r2=0·99. All five registers gave similar estimates of the linear slope ranging from 4·5 to 5·1, with overlapping confidence intervals. The combination of all five registers gave an overall slope of 4·8 (95% CI 4·5-5·0), with similar estimates for men (4·6, 4·3-4·9) and women (5·0, 4·5-5·5). A linear relationship between the log incidence and log age of onset of amyotrophic lateral sclerosis is consistent with a multistage model of disease. The slope estimate suggests that amyotrophic lateral sclerosis is a six-step process. Identification of these steps could lead to preventive and therapeutic avenues. UK Medical Research Council; UK Economic and Social Research Council; Ireland Health Research Board; The Netherlands Organisation for Health Research and Development (ZonMw); the
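
    The regression described above is a straight-line fit of log incidence against log age: under a multistep model with k steps, incidence is proportional to age^(k-1), so the fitted slope estimates k - 1. The sketch below uses synthetic incidence values, not the register data.

    ```python
    # Multistep (Armitage-Doll-type) fit: slope of log incidence vs log age
    # estimates the number of steps minus one (synthetic data).
    import numpy as np

    age_mid = np.array([47.5, 52.5, 57.5, 62.5, 67.5, 72.5])   # age-band midpoints
    k_true = 6                                                  # assumed six-step process
    incidence = 3e-9 * age_mid ** (k_true - 1)                  # per person-year

    slope, _ = np.polyfit(np.log(age_mid), np.log(incidence), 1)
    print(f"log-log slope = {slope:.2f}  ->  estimated number of steps = {slope + 1:.1f}")
    ```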

  19. The threshold of a stochastic SIQS epidemic model

    NASA Astrophysics Data System (ADS)

    Zhang, Xiao-Bing; Huo, Hai-Feng; Xiang, Hong; Shi, Qihong; Li, Dungang

    2017-09-01

    In this paper, we present the threshold of a stochastic SIQS epidemic model which determines the extinction and persistence of the disease. Furthermore, we find that noise can suppress the disease outbreak. Numerical simulations are also carried out to confirm the analytical results.

  20. Modeling the Interactions Between Multiple Crack Closure Mechanisms at Threshold

    NASA Technical Reports Server (NTRS)

    Newman, John A.; Riddell, William T.; Piascik, Robert S.

    2003-01-01

    A fatigue crack closure model is developed that includes interactions between the three closure mechanisms most likely to occur at threshold: plasticity, roughness, and oxide. This model, herein referred to as the CROP model (for Closure, Roughness, Oxide, and Plasticity), also includes the effects of out-of-plane cracking and multi-axial loading. These features make the CROP closure model uniquely suited for, but not limited to, threshold applications. Rough cracks are idealized here as two-dimensional sawtooths, whose geometry induces mixed-mode crack-tip stresses. Continuum mechanics and crack-tip dislocation concepts are combined to relate crack face displacements to crack-tip loads. Geometric criteria are used to determine closure loads from crack-face displacements. Finite element results, used to verify model predictions, provide critical information about the locations where crack closure occurs.

  1. Effect of threshold disorder on the quorum percolation model

    NASA Astrophysics Data System (ADS)

    Monceau, Pascal; Renault, Renaud; Métens, Stéphane; Bottani, Samuel

    2016-07-01

    We study the modifications induced in the behavior of the quorum percolation model on neural networks with Gaussian in-degree by taking into account an uncorrelated Gaussian variability of the thresholds. We derive a mean-field approach and show its relevance by carrying out explicit Monte Carlo simulations. It turns out that such a disorder shifts the position of the percolation transition, impacts the size of the giant cluster, and can even destroy the transition. Moreover, we highlight the occurrence of disorder-independent fixed points above the quorum critical value. The mean-field approach enables us to interpret these effects in terms of activation probability. A finite-size analysis enables us to show that the order parameter is weakly self-averaging, with an exponent independent of the threshold disorder. Last, we show that the effects of threshold and connectivity disorder cannot be easily discriminated from the measured averaged physical quantities.

  2. Performance and Cost-Effectiveness of Computed Tomography Lung Cancer Screening Scenarios in a Population-Based Setting: A Microsimulation Modeling Analysis in Ontario, Canada

    PubMed Central

    ten Haaf, Kevin; Tammemägi, Martin C.; Bondy, Susan J.; van der Aalst, Carlijn M.; Gu, Sumei; de Koning, Harry J.

    2017-01-01

    Background The National Lung Screening Trial (NLST) results indicate that computed tomography (CT) lung cancer screening for current and former smokers with three annual screens can be cost-effective in a trial setting. However, the cost-effectiveness in a population-based setting with >3 screening rounds is uncertain. Therefore, the objective of this study was to estimate the cost-effectiveness of lung cancer screening in a population-based setting in Ontario, Canada, and evaluate the effects of screening eligibility criteria. Methods and Findings This study used microsimulation modeling informed by various data sources, including the Ontario Health Insurance Plan (OHIP), Ontario Cancer Registry, smoking behavior surveys, and the NLST. Persons, born between 1940 and 1969, were examined from a third-party health care payer perspective across a lifetime horizon. Starting in 2015, 576 CT screening scenarios were examined, varying by age to start and end screening, smoking eligibility criteria, and screening interval. Among the examined outcome measures were lung cancer deaths averted, life-years gained, percentage ever screened, costs (in 2015 Canadian dollars), and overdiagnosis. The results of the base-case analysis indicated that annual screening was more cost-effective than biennial screening. Scenarios with eligibility criteria that required as few as 20 pack-years were dominated by scenarios that required higher numbers of accumulated pack-years. In general, scenarios that applied stringent smoking eligibility criteria (i.e., requiring higher levels of accumulated smoking exposure) were more cost-effective than scenarios with less stringent smoking eligibility criteria, with modest differences in life-years gained. Annual screening between ages 55–75 for persons who smoked ≥40 pack-years and who currently smoke or quit ≤10 y ago yielded an incremental cost-effectiveness ratio of $41,136 Canadian dollars ($33,825 in May 1, 2015, United States dollars) per

  3. Performance and Cost-Effectiveness of Computed Tomography Lung Cancer Screening Scenarios in a Population-Based Setting: A Microsimulation Modeling Analysis in Ontario, Canada.

    PubMed

    Ten Haaf, Kevin; Tammemägi, Martin C; Bondy, Susan J; van der Aalst, Carlijn M; Gu, Sumei; McGregor, S Elizabeth; Nicholas, Garth; de Koning, Harry J; Paszat, Lawrence F

    2017-02-01

    The National Lung Screening Trial (NLST) results indicate that computed tomography (CT) lung cancer screening for current and former smokers with three annual screens can be cost-effective in a trial setting. However, the cost-effectiveness in a population-based setting with >3 screening rounds is uncertain. Therefore, the objective of this study was to estimate the cost-effectiveness of lung cancer screening in a population-based setting in Ontario, Canada, and evaluate the effects of screening eligibility criteria. This study used microsimulation modeling informed by various data sources, including the Ontario Health Insurance Plan (OHIP), Ontario Cancer Registry, smoking behavior surveys, and the NLST. Persons, born between 1940 and 1969, were examined from a third-party health care payer perspective across a lifetime horizon. Starting in 2015, 576 CT screening scenarios were examined, varying by age to start and end screening, smoking eligibility criteria, and screening interval. Among the examined outcome measures were lung cancer deaths averted, life-years gained, percentage ever screened, costs (in 2015 Canadian dollars), and overdiagnosis. The results of the base-case analysis indicated that annual screening was more cost-effective than biennial screening. Scenarios with eligibility criteria that required as few as 20 pack-years were dominated by scenarios that required higher numbers of accumulated pack-years. In general, scenarios that applied stringent smoking eligibility criteria (i.e., requiring higher levels of accumulated smoking exposure) were more cost-effective than scenarios with less stringent smoking eligibility criteria, with modest differences in life-years gained. Annual screening between ages 55-75 for persons who smoked ≥40 pack-years and who currently smoke or quit ≤10 y ago yielded an incremental cost-effectiveness ratio of $41,136 Canadian dollars ($33,825 in May 1, 2015, United States dollars) per life-year gained (compared to
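
    The scenario comparisons above reduce, at the reporting stage, to incremental cost-effectiveness ratios between strategies on the efficiency frontier. The sketch below shows that arithmetic with hypothetical per-person costs and life-years; the figures are not the study's microsimulation outputs.

    ```python
    # Incremental cost-effectiveness ratio (ICER) between two strategies,
    # using hypothetical discounted per-person costs and life-years.
    no_screening = {"cost": 1000.0, "life_years": 20.000}
    screening    = {"cost": 1650.0, "life_years": 20.016}

    d_cost = screening["cost"] - no_screening["cost"]
    d_ly   = screening["life_years"] - no_screening["life_years"]
    icer = d_cost / d_ly
    print(f"ICER = {icer:,.0f} dollars per life-year gained")
    ```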

  4. Empirical assessment of a threshold model for sylvatic plague.

    PubMed

    Davis, S; Leirs, H; Viljugrein, H; Stenseth, N Chr; De Bruyn, L; Klassovskiy, N; Ageyev, V; Begon, M

    2007-08-22

    Plague surveillance programmes established in Kazakhstan, Central Asia, during the previous century, have generated large plague archives that have been used to parameterize an abundance threshold model for sylvatic plague in great gerbil (Rhombomys opimus) populations. Here, we assess the model using additional data from the same archives. Throughout the focus, population levels above the threshold were a necessary condition for an epizootic to occur. However, there were large numbers of occasions when an epizootic was not observed even though great gerbils were, and had been, abundant. We examine six hypotheses that could explain the resulting false positive predictions, namely (i) including end-of-outbreak data erroneously lowers the estimated threshold, (ii) too few gerbils were tested, (iii) plague becomes locally extinct, (iv) the abundance of fleas was too low, (v) the climate was unfavourable, and (vi) a high proportion of gerbils were resistant. Of these, separate thresholds, fleas and climate received some support but accounted for few false positives and can be disregarded as serious omissions from the model. Small sample size and local extinction received strong support and can account for most of the false positives. Host resistance received no support here but should be subject to more direct experimental testing.

  5. Radiation-induced aging of PDMS Elastomer TR-55: a summary of constitutive, mesoscale, and population-based models

    SciTech Connect

    Maiti, A; Weisgraber, T. H.; Dinh, L. N.

    2016-11-16

    Filled and cross-linked elastomeric rubbers are versatile network materials with a multitude of applications ranging from artificial organs and biomedical devices to cushions, coatings, adhesives, interconnects, and seismic-isolation, thermal, and electrical barriers. External factors like mechanical stress, temperature fluctuations, or radiation are known to create chemical changes in such materials that can directly affect the molecular weight distribution (MWD) of the polymer between cross-links and alter the structural and mechanical properties. From a materials science point of view it is highly desirable to understand, effect, and manipulate such property changes in a controlled manner. In this report we summarize our modeling efforts on a polysiloxane elastomer TR-55, which is an important component in several of our systems, and representative of a wide class of filled rubber materials. The primary aging driver in this work has been γ-radiation, and a variety of modeling approaches have been employed, including constitutive, mesoscale, and population-based models. The work utilizes diverse experimental data, including mechanical stress-strain and compression set measurements, as well as MWD measurements using multiquantum NMR.

  6. Traveling density wave models for earthquakes and driven threshold systems

    NASA Astrophysics Data System (ADS)

    Rundle, John B.; Klein, W.; Gross, Susanna; Ferguson, C. D.

    1997-07-01

    Driven threshold systems are now used to model sandpiles, earthquakes, magnetic depinning transitions, integrate-and-fire neural networks, and driven foams. We analyze a physically motivated model which has many of the same properties as the hard threshold models, but in which all of the nonequilibrium physics is obtained from a Lyapunov functional. The ideas apply to mean-field systems, and lead to a number of predictions, including scaling exponents and metastable lifetimes for nucleating droplets. The former predictions are supported, for example, by data observed for earthquake fault systems. An interesting consequence of the model is that time appears as a scaling field, leading to temporal scaling laws similar to those observed in nature.

  7. Recognition of Threshold Dose Model: Avoiding Continued Excessive Regulation

    SciTech Connect

    Logan, Stanley E.

    1999-06-06

    The purpose of this work is to examine the relationships between radiation dose-response models and associated regulations. It is concluded that recognition of the validity of a threshold model can be done on the basis of presently known data and that changes in regulations should be started at this time to avoid further unnecessary losses due to continued excessive regulation. As results from new research come in, refinement of interim values proposed in revised regulations can be incorporated.

  8. Applying the effort-reward imbalance model to household and family work: a population-based study of German mothers.

    PubMed

    Sperlich, Stefanie; Peter, Richard; Geyer, Siegfried

    2012-01-06

    This paper reports on results of a newly developed questionnaire for the assessment of effort-reward imbalance (ERI) in unpaid household and family work. Using a cross-sectional population-based survey of German mothers (n = 3129) the dimensional structure of the theoretical ERI model was validated by means of Confirmatory Factor Analysis (CFA). Analyses of Variance were computed to examine relationships between ERI and social factors and health outcomes. CFA revealed good psychometric properties indicating that the subscale 'effort' is based on one latent factor and the subscale 'reward' is composed of four dimensions: 'intrinsic value of family and household work', 'societal esteem', 'recognition from the partner', and 'affection from the child(ren)'. About 19.3% of mothers perceived a lack of reciprocity and 23.8% showed high rates of overcommitment in terms of inability to withdraw from household and family obligations. Socially disadvantaged mothers were at higher risk of ERI, in particular with respect to the perception of low societal esteem. Gender inequality in the division of household and family work and work-family conflict accounted most for ERI in household and family work. Analogous to ERI in paid work we could demonstrate that ERI affects self-rated health, somatic complaints, mental health and, to some extent, hypertension. The newly developed questionnaire demonstrates satisfactory validity and promising results for extending the ERI model to household and family work.

  9. Applying the effort-reward imbalance model to household and family work: a population-based study of German mothers

    PubMed Central

    2012-01-01

    Background This paper reports on results of a newly developed questionnaire for the assessment of effort-reward imbalance (ERI) in unpaid household and family work. Methods Using a cross-sectional population-based survey of German mothers (n = 3129) the dimensional structure of the theoretical ERI model was validated by means of Confirmatory Factor Analysis (CFA). Analyses of Variance were computed to examine relationships between ERI and social factors and health outcomes. Results CFA revealed good psychometric properties indicating that the subscale 'effort' is based on one latent factor and the subscale 'reward' is composed of four dimensions: 'intrinsic value of family and household work', 'societal esteem', 'recognition from the partner', and 'affection from the child(ren)'. About 19.3% of mothers perceived a lack of reciprocity and 23.8% showed high rates of overcommitment in terms of inability to withdraw from household and family obligations. Socially disadvantaged mothers were at higher risk of ERI, in particular with respect to the perception of low societal esteem. Gender inequality in the division of household and family work and work-family conflict accounted most for ERI in household and family work. Analogous to ERI in paid work we could demonstrate that ERI affects self-rated health, somatic complaints, mental health and, to some extent, hypertension. Conclusions The newly developed questionnaire demonstrates satisfactory validity and promising results for extending the ERI model to household and family work. PMID:22221851
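
    Questionnaires of this kind are typically scored as an effort-reward ratio with a correction for unequal item counts. The sketch below shows that generic scoring step; the item counts and responses are hypothetical and do not reproduce the adapted household/family-work instrument.

    ```python
    # Generic effort-reward imbalance (ERI) ratio: effort score divided by
    # reward score, corrected for the different numbers of items.
    effort_items = [3, 4, 2, 4]                        # hypothetical 1-4 responses
    reward_items = [2, 3, 1, 2, 3, 2, 4, 3, 2, 1, 2]   # hypothetical 1-4 responses

    effort, reward = sum(effort_items), sum(reward_items)
    correction = len(effort_items) / len(reward_items)
    eri_ratio = effort / (reward * correction)
    print(f"ERI ratio = {eri_ratio:.2f} ({'imbalance' if eri_ratio > 1 else 'balanced'})")
    ```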

  10. Predictors of the nicotine reinforcement threshold, compensation, and elasticity of demand in a rodent model of nicotine reduction policy*

    PubMed Central

    Grebenstein, Patricia E.; Burroughs, Danielle; Roiko, Samuel A.; Pentel, Paul R.; LeSage, Mark G.

    2015-01-01

    Background The FDA is considering reducing the nicotine content in tobacco products as a population-based strategy to reduce tobacco addiction. Research is needed to determine the threshold level of nicotine needed to maintain smoking and the extent of compensatory smoking that could occur during nicotine reduction. Sources of variability in these measures across sub-populations also need to be identified so that policies can take into account the risks and benefits of nicotine reduction in vulnerable populations. Methods The present study examined these issues in a rodent nicotine self-administration model of nicotine reduction policy to characterize individual differences in nicotine reinforcement thresholds, degree of compensation, and elasticity of demand during progressive reduction of the unit nicotine dose. The ability of individual differences in baseline nicotine intake and nicotine pharmacokinetics to predict responses to dose reduction was also examined. Results Considerable variability in the reinforcement threshold, compensation, and elasticity of demand was evident. High baseline nicotine intake was not correlated with the reinforcement threshold, but predicted less compensation and less elastic demand. Higher nicotine clearance predicted low reinforcement thresholds, greater compensation, and less elastic demand. Less elastic demand also predicted lower reinforcement thresholds. Conclusions These findings suggest that baseline nicotine intake, nicotine clearance, and the essential value of nicotine (i.e. elasticity of demand) moderate the effects of progressive nicotine reduction in rats and warrant further study in humans. They also suggest that smokers with fast nicotine metabolism may be more vulnerable to the risks of nicotine reduction. PMID:25891231
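
    Two of the quantities studied above, compensation and demand elasticity, can be summarized with simple calculations on dose-intake data. The sketch below uses hypothetical self-administration numbers and a crude log-log elasticity; it does not reproduce the paper's behavioral-economic demand-curve model.

    ```python
    # Compensation index and a crude demand-elasticity estimate across
    # progressive unit-dose reduction (hypothetical data).
    import numpy as np

    unit_dose = np.array([60.0, 30.0, 15.0, 7.5, 3.75])   # ug/kg per infusion
    infusions = np.array([20, 34, 52, 60, 38])            # infusions earned per session

    intake = unit_dose * infusions                        # total nicotine per session
    baseline = intake[0]
    expected_no_comp = infusions[0] * unit_dose[1:]       # baseline response rate at reduced dose
    compensation = (intake[1:] - expected_no_comp) / (baseline - expected_no_comp)

    price = 1.0 / unit_dose                               # responses per unit of nicotine
    elasticity = np.polyfit(np.log(price), np.log(intake), 1)[0]

    print("compensation index at reduced doses:", np.round(compensation, 2))
    print(f"overall demand elasticity (log-log slope): {elasticity:.2f}")
    ```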

  11. Recognition of threshold dose model: Avoiding continued excessive regulation

    SciTech Connect

    Logan, S.E.

    1999-09-01

    The purpose of this work is to examine the relationships between radiation dose-response models and associated regulations. The objective of radiation protection regulations is to protect workers and the public from harm resulting from excessive exposure to radiation. The regulations generally stipulate various levels of radiation dose rate to individuals or limit concentrations of radionuclides in releases to water or the atmosphere. The cleanup standards applied in remedial action for contaminated sites limit the concentrations of radionuclides in soil, groundwater, or structures, for release of sites to other uses. The guiding philosophy is that less is better and none is better yet. This has culminated with the concept of as low as reasonably achievable (ALARA). In fact, all regulations currently in place are arbitrarily based on the linear no-threshold hypothesis (LNTH) dose-response relationship. This concept came into use several decades ago and simply assumes that the incidence of health effects observed at a high dose or high dose rate will decrease linearly with dose or dose rate all the way down to zero, with no threshold level. Subsequent data have accumulated and continue to accumulate, demonstrating that there is a threshold level for net damage and, further, that there is a net benefit (radiation hormesis) at levels below the threshold level. It is concluded that recognition of the validity of a threshold model can be done on the basis of presently known data and that changes in regulations should be started at this time to avoid further unnecessary losses due to continued excessive regulation. As results from new research come in, refinement of interim values proposed in revised regulations can be incorporated.

  12. The interplay between cooperativity and diversity in model threshold ensembles.

    PubMed

    Cervera, Javier; Manzanares, José A; Mafe, Salvador

    2014-10-06

    The interplay between cooperativity and diversity is crucial for biological ensembles because single molecule experiments show a significant degree of heterogeneity and also for artificial nanostructures because of the high individual variability characteristic of nanoscale units. We study the cross-effects between cooperativity and diversity in model threshold ensembles composed of individually different units that show a cooperative behaviour. The units are modelled as statistical distributions of parameters (the individual threshold potentials here) characterized by central and width distribution values. The simulations show that the interplay between cooperativity and diversity results in ensemble-averaged responses of interest for the understanding of electrical transduction in cell membranes, the experimental characterization of heterogeneous groups of biomolecules and the development of biologically inspired engineering designs with individually different building blocks. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  13. The interplay between cooperativity and diversity in model threshold ensembles

    PubMed Central

    Cervera, Javier; Manzanares, José A.; Mafe, Salvador

    2014-01-01

    The interplay between cooperativity and diversity is crucial for biological ensembles because single molecule experiments show a significant degree of heterogeneity and also for artificial nanostructures because of the high individual variability characteristic of nanoscale units. We study the cross-effects between cooperativity and diversity in model threshold ensembles composed of individually different units that show a cooperative behaviour. The units are modelled as statistical distributions of parameters (the individual threshold potentials here) characterized by central and width distribution values. The simulations show that the interplay between cooperativity and diversity results in ensemble-averaged responses of interest for the understanding of electrical transduction in cell membranes, the experimental characterization of heterogeneous groups of biomolecules and the development of biologically inspired engineering designs with individually different building blocks. PMID:25142516

  14. Selection Strategies for Social Influence in the Threshold Model

    NASA Astrophysics Data System (ADS)

    Karampourniotis, Panagiotis; Szymanski, Boleslaw; Korniss, Gyorgy

    The ubiquity of online social networks makes the study of social influence extremely significant for its applications to marketing, politics and security. Maximizing the spread of influence by strategically selecting nodes as initiators of a new opinion or trend is a challenging problem. We study the performance of various strategies for selection of large fractions of initiators on a classical social influence model, the Threshold model (TM). Under the TM, a node adopts a new opinion only when the fraction of its first neighbors possessing that opinion exceeds a pre-assigned threshold. The strategies we study are of two kinds: strategies based solely on the initial network structure (Degree-rank, Dominating Sets, PageRank etc.) and strategies that take into account the change of the states of the nodes during the evolution of the cascade, e.g. the greedy algorithm. We find that the performance of these strategies depends largely on both the network structure properties, e.g. the assortativity, and the distribution of the thresholds assigned to the nodes. We conclude that the optimal strategy needs to combine the network specifics and the model specific parameters to identify the most influential spreaders. Supported in part by ARL NS-CTA, ARO, and ONR.

  15. A structured threshold model for mountain pine beetle outbreak.

    PubMed

    Lewis, Mark A; Nelson, William; Xu, Cailin

    2010-04-01

    A vigor-structured model for mountain pine beetle outbreak dynamics within a forest stand is proposed and analyzed. This model explicitly tracks the changing vigor structure in the stand. All model parameters, other than beetle vigor preference, were determined by fitting model components to empirical data. An abrupt threshold for tree mortality to beetle densities allows for model simplification. Based on initial beetle density, model outcomes vary from decimation of the entire stand in a single year, to inability of the beetles to infect any trees. An intermediate outcome involves an initial infestation which subsequently dies out before the entire stand is killed. A model extension is proposed for dynamics of beetle aggregation. This involves a stochastic formulation.

  16. Sri Lankan FRAX model and country-specific intervention thresholds.

    PubMed

    Lekamwasam, Sarath

    2013-01-01

    There is a wide variation in fracture probabilities estimated by Asian FRAX models, although the outputs of South Asian models are concordant. Clinicians can choose either fixed or age-specific intervention thresholds when making treatment decisions in postmenopausal women. Cost-effectiveness of such approach, however, needs to be addressed. This study examined suitable fracture probability intervention thresholds (ITs) for Sri Lanka, based on the Sri Lankan FRAX model. Fracture probabilities were estimated using all Asian FRAX models for a postmenopausal woman of BMI 25 kg/m² and has no clinical risk factors apart from a fragility fracture, and they were compared. Age-specific ITs were estimated based on the Sri Lankan FRAX model using the method followed by the National Osteoporosis Guideline Group in the UK. Using the age-specific ITs as the reference standard, suitable fixed ITs were also estimated. Fracture probabilities estimated by different Asian FRAX models varied widely. Japanese and Taiwan models showed higher fracture probabilities while Chinese, Philippine, and Indonesian models gave lower fracture probabilities. Outputs of the remaining FRAX models were generally similar. Age-specific ITs of major osteoporotic fracture probabilities (MOFP) based on the Sri Lankan FRAX model varied from 2.6 to 18% between 50 and 90 years. ITs of hip fracture probabilities (HFP) varied from 0.4 to 6.5% between 50 and 90 years. In finding fixed ITs, MOFP of 11% and HFP of 3.5% gave the lowest misclassification and highest agreement. The Sri Lankan FRAX model behaves similarly to other Asian FRAX models such as the Indian, Singapore-Indian, Thai, and South Korean models. Clinicians may use either the fixed or age-specific ITs in making therapeutic decisions in postmenopausal women. The economical aspects of such decisions, however, need to be considered.

  17. [Epidemiology of homicides in Cali, Colombia, 1993-1998: six years of a population-based model].

    PubMed

    Concha-Eastman, Alberto; Espitia, Victoria E; Espinosa, Rafael; Guerrero, Rodrigo

    2002-10-01

    The benefits of a population-based surveillance model are discussed, particularly its usefulness for identifying risk factors and the measures that can be applied to prevent and control this form of violence.

  18. Modeling Source Water Threshold Exceedances with Extreme Value Theory

    NASA Astrophysics Data System (ADS)

    Rajagopalan, B.; Samson, C.; Summers, R. S.

    2016-12-01

    Variability in surface water quality, influenced by seasonal and long-term climate changes, can impact drinking water quality and treatment. In particular, temperature and precipitation can impact surface water quality directly or through their influence on streamflow and dilution capacity. Furthermore, they also impact land surface factors, such as soil moisture and vegetation, which can in turn affect surface water quality, in particular, levels of organic matter in surface waters which are of concern. All of these will be exacerbated by anthropogenic climate change. While some source water quality parameters, particularly Total Organic Carbon (TOC) and bromide concentrations, are not directly regulated for drinking water, these parameters are precursors to the formation of disinfection byproducts (DBPs), which are regulated in drinking water distribution systems. These DBPs form when a disinfectant, added to the water to protect public health against microbial pathogens, most commonly chlorine, reacts with dissolved organic matter (DOM), measured as TOC or dissolved organic carbon (DOC), and inorganic precursor materials, such as bromide. Therefore, understanding and modeling the extremes of TOC and bromide concentrations is of critical interest for drinking water utilities. In this study we develop nonstationary extreme value analysis models for threshold exceedances of source water quality parameters, specifically TOC and bromide concentrations. In this framework, the threshold exceedances are modeled with a Generalized Pareto Distribution (GPD) whose parameters vary as a function of climate and land surface variables, thus enabling the model to capture temporal nonstationarity. We apply these models to threshold exceedances of source water TOC and bromide concentrations at two locations with different climates and find very good performance.
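
    The stationary building block of the approach described above is a peaks-over-threshold fit: exceedances over a high threshold are modeled with a Generalized Pareto Distribution. The sketch below fits that stationary core to synthetic TOC data; the study's nonstationary extension, in which the GPD parameters vary with climate and land-surface covariates, is not reproduced here.

    ```python
    # Peaks-over-threshold fit of a Generalized Pareto Distribution to
    # synthetic source-water TOC exceedances.
    import numpy as np
    from scipy.stats import genpareto

    rng = np.random.default_rng(3)
    toc = rng.lognormal(mean=1.0, sigma=0.4, size=2000)     # daily TOC, mg/L (synthetic)

    threshold = np.quantile(toc, 0.95)
    exceedances = toc[toc > threshold] - threshold

    shape, _, scale = genpareto.fit(exceedances, floc=0)    # location fixed at 0
    level_99 = threshold + genpareto.ppf(0.99, shape, loc=0, scale=scale)
    print(f"POT threshold {threshold:.2f} mg/L, GPD shape {shape:.2f}, scale {scale:.2f}")
    print(f"level exceeded by ~1% of exceedances: {level_99:.2f} mg/L")
    ```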

  19. Threshold dynamics of a malaria transmission model in periodic environment

    NASA Astrophysics Data System (ADS)

    Wang, Lei; Teng, Zhidong; Zhang, Tailei

    2013-05-01

    In this paper, we propose a malaria transmission model with periodic environment. The basic reproduction number R0 is computed for the model and it is shown that the disease-free periodic solution of the model is globally asymptotically stable when R0<1, that is, the disease goes extinct when R0<1, while the disease is uniformly persistent and there is at least one positive periodic solution when R0>1. It indicates that R0 is the threshold value determining the extinction and the uniform persistence of the disease. Finally, some examples are given to illustrate the main theoretical results. The numerical simulations show that, when the disease is uniformly persistent, different dynamic behaviors may be found in this model, such as the global attractivity and the chaotic attractor.
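
    The threshold role of R0 can be illustrated with a much simpler toy than the paper's periodic malaria model: a seasonally forced SIS equation for which, in this scalar case, persistence or extinction is governed by the time-averaged transmission rate relative to the recovery rate. The sketch below makes that simplification explicit and is not the authors' model.

        import numpy as np
        from scipy.integrate import solve_ivp

        # Toy seasonally forced SIS model: dI/dt = beta(t) * I * (1 - I) - gamma * I.
        # Here the threshold quantity is mean(beta) / gamma; prevalence dies out below 1
        # and persists above 1. Parameters are illustrative only.
        gamma = 1.0

        def run(beta_mean, years=30):
            beta = lambda t: beta_mean * (1.0 + 0.5 * np.sin(2 * np.pi * t))
            rhs = lambda t, I: beta(t) * I * (1 - I) - gamma * I
            sol = solve_ivp(rhs, (0, years), [0.01], max_step=0.01)
            return sol.y[0, -1]

        for beta_mean in (0.8, 1.2):                     # R0 = beta_mean / gamma
            print(f"R0 = {beta_mean / gamma:.1f}, final prevalence ~ {run(beta_mean):.4f}")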

  20. A threshold model analysis of deafness in Dalmatians.

    PubMed

    Famula, T R; Oberbauer, A M; Sousa, C A

    1996-09-01

    To elucidate the inheritance of deafness in Dalmatian dogs, 825 dogs in 111 litters were evaluated for abnormalities in hearing through the brainstem auditory evoked response (BAER). Recorded along with their quality of hearing (normal, unilaterally deaf, or bilaterally deaf) were the sex, coat color, eye color and the presence or absence of a color patch. The analysis considered deafness an ordered categorical trait in a threshold model. The underlying, unobservable continuous variate of the threshold model was assumed to be a linear function of the sex of the dog, coat color (black or liver and white), color patch (presence or absence), eye color, the deafness phenotype of the parents and a random family effect. Twenty-six percent of dogs were deaf in at least one ear. Eye color, color patch, sex and the hearing status of the parents were all significant contributors to deafness. The heritability of deafness, on the continuous unobservable scale, was 0.21. This value was computed after correction for eye color, color patch, parental hearing status and sex, implying that significant genetic variation exists beyond the contribution of several single loci.

  1. Non-smooth plant disease models with economic thresholds.

    PubMed

    Zhao, Tingting; Xiao, Yanni; Smith, Robert J

    2013-01-01

    In order to control plant diseases and eventually maintain the number of infected plants below an economic threshold, a specific management strategy called the threshold policy is proposed, resulting in Filippov systems. These are a class of piecewise smooth systems of differential equations with a discontinuous right-hand side. The aim of this work is to investigate the global dynamic behavior, including sliding dynamics, of one Filippov plant disease model with a cultural control strategy. We examine a Lotka-Volterra Filippov plant disease model with proportional planting rate, which is studied globally in terms of five types of equilibria. For one type of equilibrium, the global structure is discussed using iterative equations for the initial numbers of plants. For the other four types of equilibria, the bounded global attractor of each type is obtained by constructing appropriate Lyapunov functions. The ideas of constructing Lyapunov functions for Filippov systems, the methods of analyzing such systems and the main results presented here provide scientific support for completing control regimens on plant diseases in integrated disease management. Copyright © 2012 Elsevier Inc. All rights reserved.
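
    A minimal numerical sketch of such a threshold policy is given below: control effort is applied to the infected class only while it exceeds an economic threshold, so the right-hand side is discontinuous (Filippov-type). The model form, parameter values, and threshold are illustrative assumptions, not the system analyzed in the paper.

        import numpy as np
        from scipy.integrate import solve_ivp

        # Threshold ("economic threshold") policy: roguing effort acts on the infected
        # class I only while I > ET, giving a discontinuous right-hand side.
        r, K, beta, ET, effort = 0.5, 200.0, 0.006, 30.0, 1.2    # illustrative values

        def rhs(t, y):
            S, I = y
            control = effort if I > ET else 0.0          # discontinuous control term
            dS = r * S * (1 - (S + I) / K) - beta * S * I
            dI = beta * S * I - control * I
            return [dS, dI]

        sol = solve_ivp(rhs, (0, 100), [150.0, 5.0], max_step=0.05)
        late = sol.t > 20
        print("max infected after the transient:", round(float(sol.y[1][late].max()), 1),
              "(economic threshold ET =", ET, ")")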

  2. Model to Estimate Threshold Mechanical Stability of Lower Lateral Cartilage

    PubMed Central

    Kim, James Hakjune; Hamamoto, Ashley; Kiyohara, Nicole; Wong, Brian J. F.

    2015-01-01

    IMPORTANCE In rhinoplasty, techniques used to alter the shape of the nasal tip often compromise the structural stability of the cartilage framework in the nose. Determining the minimum threshold level of cartilage stiffness required to maintain long-term structural stability is a critical aspect of performing these surgical maneuvers. OBJECTIVE To quantify the minimum threshold mechanical stability (elastic modulus) of lower lateral cartilage (LLC) according to expert opinion. METHODS Five anatomically correct LLC phantoms were made from urethane via a 3-dimensional computer modeling and injection molding process. All 5 had identical geometry but varied in stiffness along the intermediate crural region (0.63–30.6 MPa). DESIGN, SETTING, AND PARTICIPANTS A focus group of experienced rhinoplasty surgeons (n = 33) was surveyed at a regional professional meeting on October 25, 2013. Each survey participant was presented with the 5 phantoms in a random order and asked to arrange the phantoms in order of increasing stiffness based on their sense of touch. They were then asked to select the single phantom from the set that they believed to have the minimum acceptable mechanical stability for LLC to maintain proper form and function. MAIN OUTCOMES AND MEASURES A binary logistic regression was performed to calculate the probability of mechanical acceptability as a function of the elastic modulus of the LLC based on the survey data. A Hosmer-Lemeshow test was performed to measure the goodness of fit between the logistic regression and the survey data. The minimum threshold mechanical stability for LLC was taken at a 50% acceptability rating. RESULTS Phantom 4 was selected most frequently by the participants as having the minimum acceptable stiffness for the LLC intermediate crural region. The minimum threshold mechanical stability for LLC was determined to be 3.65 MPa. The Hosmer-Lemeshow test revealed good fit between the logistic regression and the survey data (χ² = 0.92 with 3 df, P = .82). CONCLUSIONS AND
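
    The 50%-acceptability calculation can be sketched as follows: fit a binomial logistic regression of "acceptable" votes against elastic modulus and solve for the modulus at which the predicted acceptability equals 0.5 (i.e., -b0/b1). The intermediate modulus values and vote counts below are invented for illustration; only the 0.63-30.6 MPa range and the panel size (n = 33) come from the abstract.

        import numpy as np
        from scipy.optimize import minimize

        # Binomial logistic regression and the modulus at 50% predicted acceptability.
        modulus = np.array([0.63, 1.9, 3.7, 10.2, 30.6])       # MPa (middle values assumed)
        acceptable = np.array([2, 9, 18, 30, 32])              # hypothetical votes out of 33
        n = 33

        def neg_log_lik(b):
            p = 1.0 / (1.0 + np.exp(-(b[0] + b[1] * modulus)))
            p = np.clip(p, 1e-9, 1 - 1e-9)
            return -np.sum(acceptable * np.log(p) + (n - acceptable) * np.log(1 - p))

        b0, b1 = minimize(neg_log_lik, x0=[0.0, 0.1], method="Nelder-Mead").x
        print(f"estimated 50% acceptability threshold: {-b0 / b1:.2f} MPa")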

  3. Terrestrial Microgravity Model and Threshold Gravity Simulation using Magnetic Levitation

    NASA Technical Reports Server (NTRS)

    Ramachandran, N.

    2005-01-01

    What is the threshold gravity (minimum gravity level) required for the nominal functioning of the human system? What dosage is required? Do human cell lines behave differently in microgravity in response to an external stimulus? The critical need for such a gravity simulator is emphasized by recent experiments on human epithelial cells and lymphocytes on the Space Shuttle clearly showing that cell growth and function are markedly different from those observed terrestrially. Those differences are also dramatic between cells grown in space and those in Rotating Wall Vessels (RWV), or NASA bioreactor often used to simulate microgravity, indicating that although morphological growth patterns (three dimensional growth) can be successfully simulated using RWVs, cell function performance is not reproduced - a critical difference. If cell function is dramatically affected by gravity off-loading, then cell response to stimuli such as radiation, stress, etc. can be very different from terrestrial cell lines. Yet, we have no good gravity simulator for use in study of these phenomena. This represents a profound shortcoming for countermeasures research. We postulate that we can use magnetic levitation of cells and tissue, through the use of strong magnetic fields and field gradients, as a terrestrial microgravity model to study human cells. Specific objectives of the research are: 1. To develop a tried, tested and benchmarked terrestrial microgravity model for cell culture studies; 2. Gravity threshold determination; 3. Dosage (magnitude and duration) of g-level required for nominal functioning of cells; 4. Comparisons of magnetic levitation model to other models such as RWV, hind limb suspension, etc. and 5. Cellular response to reduced gravity levels of Moon and Mars. The paper will discuss experiments and modeling work to date in support of this project.

  5. Stylized facts from a threshold-based heterogeneous agent model

    NASA Astrophysics Data System (ADS)

    Cross, R.; Grinfeld, M.; Lamba, H.; Seaman, T.

    2007-05-01

    A class of heterogeneous agent models is investigated where investors switch trading position whenever their motivation to do so exceeds some critical threshold. These motivations can be psychological in nature or reflect behaviour suggested by the efficient market hypothesis (EMH). By introducing different propensities into a baseline model that displays EMH behaviour, one can attempt to isolate their effects upon the market dynamics. The simulation results indicate that the introduction of a herding propensity results in excess kurtosis and power-law decay consistent with those observed in actual return distributions, but not in significant long-term volatility correlations. Possible alternatives for introducing such long-term volatility correlations are then identified and discussed.
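
    A generic toy of this kind of threshold-based agent dynamics is sketched below: each agent holds a position, accumulates a "motivation" from idiosyncratic noise plus a herding term, and switches position when the motivation crosses its personal threshold; returns are taken as changes in the aggregate position. The update rule and parameters are illustrative, not the exact specification of the paper's baseline or herding models.

        import numpy as np
        from scipy.stats import kurtosis

        # Toy threshold-switching agent market; reports the excess kurtosis of returns.
        rng = np.random.default_rng(2)
        n_agents, n_steps, herding = 500, 5000, 0.05
        position = rng.choice([-1, 1], size=n_agents)
        threshold = rng.uniform(1.0, 3.0, size=n_agents)     # personal switching thresholds
        motivation = np.zeros(n_agents)
        returns = np.empty(n_steps)

        for t in range(n_steps):
            sentiment = position.mean()
            # agents holding against the crowd build motivation to switch (herding term)
            motivation += rng.normal(0, 0.1, n_agents) - herding * sentiment * position
            flip = np.abs(motivation) > threshold
            position[flip] *= -1
            motivation[flip] = 0.0
            returns[t] = position.mean() - sentiment         # price impact of net switching

        print("excess kurtosis of simulated returns:", round(float(kurtosis(returns)), 2))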

  6. Terrestrial Microgravity Model and Threshold Gravity Simulation Using Magnetic Levitation

    NASA Technical Reports Server (NTRS)

    Ramachandran, N.

    2005-01-01

    What is the threshold gravity (minimum gravity level) required for the nominal functioning of the human system? What dosage is required? Do human cell lines behave differently in microgravity in response to an external stimulus? The critical need for such a gravity simulator is emphasized by recent experiments on human epithelial cells and lymphocytes on the Space Shuttle clearly showing that cell growth and function are markedly different from those observed terrestrially. Those differences are also dramatic between cells grown in space and those in Rotating Wall Vessels (RWV), or NASA bioreactor often used to simulate microgravity, indicating that although morphological growth patterns (three dimensional growth) can be successfully simulated using RWVs, cell function performance is not reproduced - a critical difference. If cell function is dramatically affected by gravity off-loading, then cell response to stimuli such as radiation, stress, etc. can be very different from terrestrial cell lines. Yet, we have no good gravity simulator for use in study of these phenomena. This represents a profound shortcoming for countermeasures research. We postulate that we can use magnetic levitation of cells and tissue, through the use of strong magnetic fields and field gradients, as a terrestrial microgravity model to study human cells. Specific objectives of the research are: 1. To develop a tried, tested and benchmarked terrestrial microgravity model for cell culture studies; 2. Gravity threshold determination; 3. Dosage (magnitude and duration) of g-level required for nominal functioning of cells; 4. Comparisons of magnetic levitation model to other models such as RWV, hind limb suspension, etc. and 5. Cellular response to reduced gravity levels of Moon and Mars.

  8. Semiautomatic bladder segmentation on CBCT using a population-based model for multiple-plan ART of bladder cancer

    NASA Astrophysics Data System (ADS)

    Chai, Xiangfei; van Herk, Marcel; Betgen, Anja; Hulshof, Maarten; Bel, Arjan

    2012-12-01

    The aim of this study is to develop a novel semiautomatic bladder segmentation approach for selecting the appropriate plan from the library of plans for a multiple-plan adaptive radiotherapy (ART) procedure. A population-based statistical bladder model was first built from a training data set (95 bladder contours from 8 patients). This model was then used as a constraint to segment the bladder in an independent validation data set (233 CBCT scans from the remaining 22 patients). All 3D bladder contours were converted into parametric surface representations using spherical harmonic expansion. Principal component analysis (PCA) was applied in the spherical harmonic-based shape parameter space to calculate the major variation of bladder shapes. The number of dominating PCA modes was chosen such that 95% of the total shape variation of the training data set was described. The automatic segmentation started from the bladder contour of the planning CT of each patient, which was modified by changing the weight of each PCA mode. As a result, the segmentation contour was deformed consistently with the training set to best fit the bladder boundary in the localization CBCT image. A cost function was defined to measure the goodness of fit of the segmentation on the localization CBCT image. The segmentation was obtained by minimizing this cost function using a simplex optimizer. After automatic segmentation, a fast manual correction method was provided to correct those bladders (parts) that were poorly segmented. Volume- and distance-based metrics and the accuracy of plan selection from multiple plans were evaluated to quantify the performance of the automatic and semiautomatic segmentation methods. For the training data set, only seven PCA modes were needed to represent 95% of the bladder shape variation. The mean CI overlap and residual error (SD) of automatic bladder segmentation over all of the validation data were 70.5% and 0.39 cm, respectively. The agreement of plan
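
    The fitting idea can be sketched in a few lines: build a PCA shape space from training shape vectors (stand-ins for the spherical-harmonic coefficient vectors), keep enough modes for 95% of the variation, then optimise the mode weights with a simplex (Nelder-Mead) search to minimise a cost function. The quadratic "image cost" below is only a placeholder for the CBCT boundary-fit cost used in the study, and all data are synthetic.

        import numpy as np
        from scipy.optimize import minimize

        # PCA shape space + simplex fit of mode weights; synthetic low-rank shape data.
        rng = np.random.default_rng(3)
        latent = rng.normal(size=(95, 5))                     # low-rank structure in 95 shapes
        basis = rng.normal(size=(5, 60))
        training = latent @ basis + 0.05 * rng.normal(size=(95, 60))

        mean_shape = training.mean(axis=0)
        U, s, Vt = np.linalg.svd(training - mean_shape, full_matrices=False)
        explained = np.cumsum(s**2) / np.sum(s**2)
        n_modes = int(np.searchsorted(explained, 0.95)) + 1   # modes covering 95% variation
        modes = Vt[:n_modes]

        target = mean_shape + 0.1 * rng.normal(size=60)       # stand-in for the new-scan shape

        def cost(weights):
            # placeholder cost: squared distance between the deformed shape and the target
            return np.sum((mean_shape + weights @ modes - target) ** 2)

        fit = minimize(cost, x0=np.zeros(n_modes), method="Nelder-Mead")
        print("modes kept:", n_modes, "| final cost:", round(float(fit.fun), 4))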

  9. Diagnosis of Parkinson’s disease on the basis of clinical–genetic classification: a population-based modelling study

    PubMed Central

    Nalls, Mike A.; McLean, Cory Y.; Rick, Jacqueline; Eberly, Shirley; Hutten, Samantha J.; Gwinn, Katrina; Sutherland, Margaret; Martinez, Maria; Heutink, Peter; Williams, Nigel; Hardy, John; Gasser, Thomas; Brice, Alexis; Price, T. Ryan; Nicolas, Aude; Keller, Margaux F.; Molony, Cliona; Gibbs, J. Raphael; Chen-Plotkin, Alice; Suh, Eunran; Letson, Christopher; Fiandaca, Massimo S.; Mapstone, Mark; Federoff, Howard J.; Noyce, Alastair J; Morris, Huw; Van Deerlin, Vivianna M.; Weintraub, Daniel; Zabetian, Cyrus; Hernandez, Dena G.; Lesage, Suzanne; Mullins, Meghan; Conley, Emily Drabant; Northover, Carrie; Frasier, Mark; Marek, Ken; Day-Williams, Aaron G.; Stone, David J.; Ioannidis, John P. A.; Singleton, Andrew B.

    2015-01-01

    Background Accurate diagnosis and early detection of complex disease has the potential to be of enormous benefit to clinical trialists, patients, and researchers alike. We sought to create a non-invasive, low-cost, and accurate classification model for diagnosing Parkinson’s disease risk to serve as a basis for future disease prediction studies in prospective longitudinal cohorts. Methods We developed a simple disease classifying model within 367 patients with Parkinson’s disease and phenotypically typical imaging data and 165 controls without neurological disease of the Parkinson’s Progression Marker Initiative (PPMI) study. Olfactory function, genetic risk, family history of PD, age and gender were algorithmically selected as significant contributors to our classifying model. This model was developed using the PPMI study then tested in 825 patients with Parkinson’s disease and 261 controls from five independent studies with varying recruitment strategies and designs including the Parkinson’s Disease Biomarkers Program (PDBP), Parkinson’s Associated Risk Study (PARS), 23andMe, Longitudinal and Biomarker Study in PD (LABS-PD), and Morris K. Udall Parkinson’s Disease Research Center of Excellence (Penn-Udall). Findings Our initial model correctly distinguished patients with Parkinson’s disease from controls at an area under the curve (AUC) of 0.923 (95% CI = 0.900 – 0.946) with high sensitivity (0.834, 95% CI = 0.711 – 0.883) and specificity (0.903, 95% CI = 0.824 – 0.946) in PPMI at its optimal AUC threshold (0.655). The model is also well-calibrated with all Hosmer-Lemeshow simulations suggesting that when parsed into random subgroups, the actual data mirrors that of the larger expected data, demonstrating that our model is robust and fits well. Likewise external validation shows excellent classification of PD with AUCs of 0.894 in PDBP, 0.998 in PARS, 0.955 in 23andMe, 0.929 in LABS-PD, and 0.939 in Penn-Udall. Additionally, when our model

  10. Diagnosis of Parkinson's disease on the basis of clinical and genetic classification: a population-based modelling study.

    PubMed

    Nalls, Mike A; McLean, Cory Y; Rick, Jacqueline; Eberly, Shirley; Hutten, Samantha J; Gwinn, Katrina; Sutherland, Margaret; Martinez, Maria; Heutink, Peter; Williams, Nigel M; Hardy, John; Gasser, Thomas; Brice, Alexis; Price, T Ryan; Nicolas, Aude; Keller, Margaux F; Molony, Cliona; Gibbs, J Raphael; Chen-Plotkin, Alice; Suh, Eunran; Letson, Christopher; Fiandaca, Massimo S; Mapstone, Mark; Federoff, Howard J; Noyce, Alastair J; Morris, Huw; Van Deerlin, Vivianna M; Weintraub, Daniel; Zabetian, Cyrus; Hernandez, Dena G; Lesage, Suzanne; Mullins, Meghan; Conley, Emily Drabant; Northover, Carrie A M; Frasier, Mark; Marek, Ken; Day-Williams, Aaron G; Stone, David J; Ioannidis, John P A; Singleton, Andrew B

    2015-10-01

    Accurate diagnosis and early detection of complex diseases, such as Parkinson's disease, has the potential to be of great benefit for researchers and clinical practice. We aimed to create a non-invasive, accurate classification model for the diagnosis of Parkinson's disease, which could serve as a basis for future disease prediction studies in longitudinal cohorts. We developed a model for disease classification using data from the Parkinson's Progression Marker Initiative (PPMI) study for 367 patients with Parkinson's disease and phenotypically typical imaging data and 165 controls without neurological disease. Olfactory function, genetic risk, family history of Parkinson's disease, age, and gender were algorithmically selected by stepwise logistic regression as significant contributors to our classifying model. We then tested the model with data from 825 patients with Parkinson's disease and 261 controls from five independent cohorts with varying recruitment strategies and designs: the Parkinson's Disease Biomarkers Program (PDBP), the Parkinson's Associated Risk Study (PARS), 23andMe, the Longitudinal and Biomarker Study in PD (LABS-PD), and the Morris K Udall Parkinson's Disease Research Center of Excellence cohort (Penn-Udall). Additionally, we used our model to investigate patients who had imaging scans without evidence of dopaminergic deficit (SWEDD). In the population from PPMI, our initial model correctly distinguished patients with Parkinson's disease from controls at an area under the curve (AUC) of 0·923 (95% CI 0·900-0·946) with high sensitivity (0·834, 95% CI 0·711-0·883) and specificity (0·903, 95% CI 0·824-0·946) at its optimum AUC threshold (0·655). All Hosmer-Lemeshow simulations suggested that when parsed into random subgroups, the subgroup data matched that of the overall cohort. External validation showed good classification of Parkinson's disease, with AUCs of 0·894 (95% CI 0·867-0·921) in the PDBP cohort, 0·998 (0·992-1·000
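
    As a hedged sketch of this kind of classifier, the snippet below fits a logistic regression on a few synthetic stand-in features (olfactory score, genetic risk score, family history, age, sex) and reports the area under the ROC curve; none of the data, coefficients, or results correspond to the PPMI analysis.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        # Logistic-regression classifier on synthetic risk features, evaluated by AUC.
        rng = np.random.default_rng(4)
        n = 500
        X = np.column_stack([
            rng.normal(size=n),                 # olfactory score (synthetic)
            rng.normal(size=n),                 # genetic risk score (synthetic)
            rng.integers(0, 2, size=n),         # family history (synthetic)
            rng.normal(65, 10, size=n),         # age (synthetic)
            rng.integers(0, 2, size=n),         # sex (synthetic)
        ])
        logit = -8 + 1.2 * X[:, 0] + 0.8 * X[:, 1] + 0.7 * X[:, 2] + 0.1 * X[:, 3]
        y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

        model = LogisticRegression(max_iter=1000).fit(X, y)
        auc = roc_auc_score(y, model.predict_proba(X)[:, 1])
        print(f"in-sample AUC: {auc:.3f}")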

  11. Use of biotelemetry to define physiology-based deterioration thresholds in a murine cecal ligation and puncture model of sepsis

    PubMed Central

    Lewis, Anthony J.; Yuan, Du; Zhang, Xianghong; Angus, Derek C.; Rosengart, Matthew R.; Seymour, Christopher W.

    2015-01-01

    Objective: Murine models of critical illness are commonly used to test new therapeutic interventions. However, these interventions are often administered at fixed time intervals after the insult, perhaps ignoring the inherent variability in magnitude and temporality of the host response. We propose to use wireless biotelemetry monitoring to define and validate criteria for acute deterioration and generate a physiology-based murine cecal ligation and puncture (CLP) model that is more similar to the conduct of human trials of sepsis. Design: Laboratory and animal research. Setting: University basic science laboratory. Subjects: Male C57BL/6 mice. Interventions: Mice underwent CLP, and an HD-X11 wireless telemetry monitor (DSI) was implanted that enabled continuous, real-time measurement of heart rate, core temperature, and mobility. We performed a population-based analysis to determine threshold criteria that met face validity for acute physiologic deterioration. We assessed construct validity by temporally matching mice that met these acute physiologic deterioration thresholds with mice that had not yet met deterioration threshold. We analyzed matched blood samples for blood gas, inflammatory cytokine concentration, Cystatin C, and alanine aminotransferase. Measurements and Main Results: We observed that a 10% reduction in both heart rate and temperature sustained for >=10 minutes defined acute physiologic deterioration. There was significant variability in the time to reach acute deterioration threshold across mice, ranging from 339 to 529 minutes after CLP. We found adequate construct validity, as mice which met criteria for acute deterioration had significantly worse shock, systemic inflammation (elevated TNFα, p=0.003; IL-6, p=0.01; IL-10, p=0.005), and acute kidney injury when compared to mice that had not yet met acute deterioration criteria. Conclusion: We defined a murine threshold for acute physiologic deterioration after CLP that has adequate face and construct

  12. Use of Biotelemetry to Define Physiology-Based Deterioration Thresholds in a Murine Cecal Ligation and Puncture Model of Sepsis.

    PubMed

    Lewis, Anthony J; Yuan, Du; Zhang, Xianghong; Angus, Derek C; Rosengart, Matthew R; Seymour, Christopher W

    2016-06-01

    Murine models of critical illness are commonly used to test new therapeutic interventions. However, these interventions are often administered at fixed time intervals after the insult, perhaps ignoring the inherent variability in magnitude and temporality of the host response. We propose to use wireless biotelemetry monitoring to define and validate criteria for acute deterioration and generate a physiology-based murine cecal ligation and puncture model that is more similar to the conduct of human trials of sepsis. Laboratory and animal research. University basic science laboratory. Male C57BL/6 mice. Mice underwent cecal ligation and puncture, and an HD-X11 wireless telemetry monitor (Data Sciences International) was implanted that enabled continuous, real-time measurement of heart rate, core temperature, and mobility. We performed a population-based analysis to determine threshold criteria that met face validity for acute physiologic deterioration. We assessed construct validity by temporally matching mice that met these acute physiologic deterioration thresholds with mice that had not yet met deterioration threshold. We analyzed matched blood samples for blood gas, inflammatory cytokine concentration, cystatin C, and alanine aminotransferase. We observed that a 10% reduction in both heart rate and temperature sustained for greater than or equal to 10 minutes defined acute physiologic deterioration. There was significant variability in the time to reach acute deterioration threshold across mice, ranging from 339 to 529 minutes after cecal ligation and puncture. We found adequate construct validity, as mice that met criteria for acute deterioration had significantly worse shock, systemic inflammation (elevated tumor necrosis factor-α, p = 0.003; interleukin-6, p = 0.01; interleukin-10, p = 0.005), and acute kidney injury when compared with mice that had not yet met acute deterioration criteria. We defined a murine threshold for acute physiologic deterioration
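
    The deterioration rule lends itself to a direct sketch: scan the telemetry for the first time both heart rate and core temperature remain at least 10% below their pre-insult baselines for 10 consecutive minutes. The traces, baseline window, and sampling rate below are assumptions for illustration only, not recorded data.

        import numpy as np

        # Detect the first sustained (>=10 min) 10% drop in both heart rate and temperature.
        rng = np.random.default_rng(5)
        minutes = 600                              # synthetic trace at 1 sample per minute
        hr = 600 + rng.normal(0, 10, minutes)      # beats/min, murine-scale baseline (assumed)
        temp = 37.0 + rng.normal(0, 0.1, minutes)  # degrees C
        hr[400:] -= 90                             # injected deterioration after minute 400
        temp[400:] -= 4.5

        baseline_hr, baseline_temp = hr[:60].mean(), temp[:60].mean()
        below = (hr < 0.9 * baseline_hr) & (temp < 0.9 * baseline_temp)

        def first_sustained(mask, window=10):
            run = 0
            for i, flag in enumerate(mask):
                run = run + 1 if flag else 0
                if run >= window:
                    return i - window + 1          # start of the first sustained run
            return None

        print("deterioration threshold met at minute:", first_sustained(below))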

  13. Predictors of the nicotine reinforcement threshold, compensation, and elasticity of demand in a rodent model of nicotine reduction policy.

    PubMed

    Grebenstein, Patricia E; Burroughs, Danielle; Roiko, Samuel A; Pentel, Paul R; LeSage, Mark G

    2015-06-01

    The FDA is considering reducing the nicotine content in tobacco products as a population-based strategy to reduce tobacco addiction. Research is needed to determine the threshold level of nicotine needed to maintain smoking and the extent of compensatory smoking that could occur during nicotine reduction. Sources of variability in these measures across sub-populations also need to be identified so that policies can take into account the risks and benefits of nicotine reduction in vulnerable populations. The present study examined these issues in a rodent nicotine self-administration model of nicotine reduction policy to characterize individual differences in nicotine reinforcement thresholds, degree of compensation, and elasticity of demand during progressive reduction of the unit nicotine dose. The ability of individual differences in baseline nicotine intake and nicotine pharmacokinetics to predict responses to dose reduction was also examined. Considerable variability in the reinforcement threshold, compensation, and elasticity of demand was evident. High baseline nicotine intake was not correlated with the reinforcement threshold, but predicted less compensation and less elastic demand. Higher nicotine clearance predicted low reinforcement thresholds, greater compensation, and less elastic demand. Less elastic demand also predicted lower reinforcement thresholds. These findings suggest that baseline nicotine intake, nicotine clearance, and the essential value of nicotine (i.e. elasticity of demand) moderate the effects of progressive nicotine reduction in rats and warrant further study in humans. They also suggest that smokers with fast nicotine metabolism may be more vulnerable to the risks of nicotine reduction. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
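
    "Elasticity of demand" in this context can be sketched with a simple demand-curve calculation: consumption (infusions earned) is related to unit price (responses required per unit dose), and the slope of log-consumption versus log-price summarizes how sharply intake falls as the effective price of nicotine rises. The numbers below are synthetic, and the log-log slope is a simplified stand-in for the exponential demand-curve models typically fitted in such studies.

        import numpy as np

        # Simple demand-elasticity summary from synthetic self-administration data.
        unit_price = np.array([1, 3, 10, 30, 100], dtype=float)        # responses per unit dose
        consumption = np.array([120, 110, 85, 45, 12], dtype=float)    # infusions (synthetic)

        slope, intercept = np.polyfit(np.log(unit_price), np.log(consumption), 1)
        print(f"overall demand elasticity (log-log slope): {slope:.2f}")
        # values nearer 0 indicate inelastic (persistent) demand; more negative values
        # indicate elastic demand that falls quickly as the effective price rises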

  14. Linear No-Threshold Model VS. Radiation Hormesis

    PubMed Central

    Doss, Mohan

    2013-01-01

    The atomic bomb survivor cancer mortality data have been used in the past to justify the use of the linear no-threshold (LNT) model for estimating the carcinogenic effects of low dose radiation. An analysis of the recently updated atomic bomb survivor cancer mortality dose-response data shows that the data no longer support the LNT model but are consistent with a radiation hormesis model when a correction is applied for a likely bias in the baseline cancer mortality rate. If the validity of the phenomenon of radiation hormesis is confirmed in prospective human pilot studies, and is applied to the wider population, it could result in a considerable reduction in cancers. The idea of using radiation hormesis to prevent cancers was proposed more than three decades ago, but was never investigated in humans to determine its validity because of the dominance of the LNT model and the consequent carcinogenic concerns regarding low dose radiation. Since cancer continues to be a major health problem and the age-adjusted cancer mortality rates have declined by only ∼10% in the past 45 years, it may be prudent to investigate radiation hormesis as an alternative approach to reduce cancers. Prompt action is urged. PMID:24298226

  15. Fatigue models as practical tools: diagnostic accuracy and decision thresholds.

    PubMed

    Raslear, Thomas G; Coplen, Michael

    2004-03-01

    Human fatigue models are increasingly being used in a variety of industrial settings, both civilian and military. Current uses include education, awareness, and analysis of individual or group work schedules. Perhaps the ultimate and potentially most beneficial use of human fatigue models is to diagnose if an individual is sufficiently rested to perform a period of duty safely or effectively. When used in this way, two important questions should be asked: 1) What is the accuracy of the diagnosis for duty-specific performance in this application; and 2) What decision threshold is appropriate for this application (i.e., how "fatigued" does an individual have to be to be considered "not safe"). In the simplest situation, a diagnostic fatigue test must distinguish between two states: "fatigued" and "not fatigued," and the diagnostic decisions are "safe" (or "effective") and "not safe" (or "not effective"). The resulting four decision outcomes include diagnostic errors because diagnostic tests are not perfectly accurate. Moreover, since all outcomes have costs and benefits associated with them that differ between applications, the choice of a decision criterion is extremely important. Signal Detection Theory (SDT) has demonstrated usefulness in measuring the accuracy of diagnostic tests and optimizing diagnostic decisions. This paper describes how SDT can be applied to foster the development of fatigue models as practical diagnostic and decision-making tools. By clarifying the difference between accuracy (or sensitivity) and decision criterion (or bias) in the use of fatigue models as diagnostic and decision-making tools, the SDT framework focuses on such critical issues as duty-specific performance, variability (model and performance), and model sensitivity, efficacy, and utility. As fatigue models become increasingly used in a variety of different applications, it is important that end-users understand the interplay of these factors for their particular application.
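
    The SDT quantities the paper refers to, accuracy (sensitivity) and decision criterion (bias), can be computed from a test's hit and false-alarm rates; the rates below are hypothetical values for a fatigue test that classifies operators as "fatigued" or "not fatigued".

        from scipy.stats import norm

        # Signal Detection Theory summary statistics from hypothetical rates.
        hit_rate = 0.80          # P(test says "not safe" | truly fatigued)  (assumed)
        false_alarm_rate = 0.20  # P(test says "not safe" | not fatigued)    (assumed)

        z_hit = norm.ppf(hit_rate)
        z_fa = norm.ppf(false_alarm_rate)
        d_prime = z_hit - z_fa                      # accuracy of the diagnostic test
        criterion = -0.5 * (z_hit + z_fa)           # bias: >0 conservative, <0 liberal
        print(f"d' = {d_prime:.2f}, criterion c = {criterion:.2f}")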

  16. Population-based local search for protein folding simulation in the MJ energy model and cubic lattices.

    PubMed

    Kapsokalivas, L; Gan, X; Albrecht, A A; Steinhöfel, K

    2009-08-01

    We present experimental results on benchmark problems in 3D cubic lattice structures with the Miyazawa-Jernigan energy function for two local search procedures that utilise the pull-move set: (i) population-based local search (PLS), which traverses the energy landscape with greedy steps towards (potential) local minima followed by upward steps up to a certain level of the objective function; (ii) simulated annealing with a logarithmic cooling schedule (LSA). The parameter settings for PLS are derived from short LSA runs executed in pre-processing, and the procedure utilises tabu lists generated for each member of the population. In terms of the total number of energy function evaluations, both methods perform equally well; however, PLS has the potential to be parallelised, with an expected speed-up in the region of the population size. Furthermore, both methods require a significantly smaller number of function evaluations when compared to Monte Carlo simulations with kink-jump moves.

  17. A Threshold Rule Applied to the Retrieval Decision Model

    ERIC Educational Resources Information Center

    Kraft, Donald H.

    1978-01-01

    A threshold rule is analyzed and compared to the Neyman-Pearson procedure, indicating that the threshold rule provides a necessary but not sufficient measure of the minimal performance of a retrieval system, whereas Neyman-Pearson yields a better apriori decision for retrieval. (Author/MBR)

  18. Wavelet detection of weak far-magnetic signal based on adaptive ARMA model threshold

    NASA Astrophysics Data System (ADS)

    Zhang, Ning; Lin, Chun-sheng; Fang, Shi

    2009-10-01

    Based on the Mallat algorithm, a de-noising algorithm with an adaptive wavelet threshold is applied to the detection of the weak magnetic signal of a distant moving target in a complex magnetic environment. The choice of threshold is the key problem. Using spectrum analysis of the target's magnetic field, a threshold algorithm based on an adaptive ARMA model filter is proposed to improve the wavelet filtering performance. A simulation of this algorithm on measured data is carried out. Compared to Donoho's threshold algorithm, the adaptive ARMA model threshold algorithm significantly improves the capability of weak magnetic signal detection in a complex magnetic environment.
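
    For orientation, a generic wavelet-threshold denoising sketch is given below (Mallat-style decomposition, soft thresholding, reconstruction) using PyWavelets. It applies the common universal threshold from a robust noise estimate rather than the paper's ARMA-model-derived threshold, which is not reproduced here; the test signal is synthetic.

        import numpy as np
        import pywt

        # Wavelet decomposition, soft thresholding of detail coefficients, reconstruction.
        rng = np.random.default_rng(6)
        t = np.linspace(0, 1, 1024)
        clean = 0.5 * np.sin(2 * np.pi * 3 * t) * np.exp(-((t - 0.5) ** 2) / 0.02)  # weak pulse
        noisy = clean + rng.normal(0, 0.2, t.size)

        coeffs = pywt.wavedec(noisy, "db4", level=5)
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745            # noise estimate, finest scale
        thr = sigma * np.sqrt(2 * np.log(noisy.size))             # universal threshold
        denoised_coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
        denoised = pywt.waverec(denoised_coeffs, "db4")[: noisy.size]
        rms = float(np.sqrt(np.mean((denoised - clean) ** 2)))
        print("residual RMS after denoising:", round(rms, 3))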

  19. Phase of care prevalence for prostate cancer in New South Wales, Australia: A population-based modelling study

    PubMed Central

    Luo, Qingwei; Smith, David P.; Clements, Mark S.; Patel, Manish I.; O’Connell, Dianne L.

    2017-01-01

    Objective To develop a method for estimating the future numbers of prostate cancer survivors requiring different levels of care. Design, setting and participants Analysis of population-based cancer registry data for prostate cancer cases (aged 18–84 years) diagnosed in 1996–2007, and a linked dataset with hospital admission data for men with prostate cancer diagnosed during 2005–2007 in New South Wales (NSW), Australia. Methods Cancer registry data (1996–2007) were used to project complete prostate cancer prevalence in NSW, Australia for 2008–2017, and treatment information from hospital records (2005–2007) was used to estimate the inpatient care needs during the first year after diagnosis. The projected complete prevalence was divided into care needs-based groups. We first divided the cohort into two groups based on patient’s age (<75 and 75–84 years). The younger cohort was further divided into initial care and monitoring phases. Cause of death data were used as a proxy for patients requiring last year of life prostate cancer care. Finally, episode data were used to estimate the future number of cases with metastatic progression. Results Of the estimated total of 60,910 men with a previous diagnosis of prostate cancer in 2017, the largest groups will be older patients (52.0%) and younger men who require monitoring (42.5%). If current treatment patterns continue, in the first year post-diagnosis 41% (1380) of patients (<75 years) will have a radical prostatectomy, and 52.6% (1752) will be likely to have either active surveillance, external beam radiotherapy or androgen deprivation therapy. About 3% will require care for subsequent metastases, and 1288 men with prostate cancer are likely to die from the disease in 2017. Conclusions This method extends the application of routinely collected population-based data, and can contribute much to the knowledge of the number of men with prostate cancer and their health care requirements. This could be of

  20. No-Impact Threshold Values for NRAP's Reduced Order Models

    SciTech Connect

    Last, George V.; Murray, Christopher J.; Brown, Christopher F.; Jordan, Preston D.; Sharma, Maneesh

    2013-02-01

    The purpose of this study was to develop methodologies for establishing baseline datasets and statistical protocols for determining statistically significant changes between background concentrations and predicted concentrations that would be used to represent a contamination plume in the Gen II models being developed by NRAP's Groundwater Protection team. The initial effort examined selected portions of two aquifer systems: the urban shallow unconfined aquifer system of the Edwards-Trinity Aquifer System (being used to develop the ROM for carbonate-rock aquifers), and a portion of the High Plains Aquifer (an unconsolidated and semi-consolidated sand and gravel aquifer, being used to develop the ROM for sandstone aquifers). Threshold values were determined for Cd, Pb, As, pH, and TDS that could be used to identify contamination due to predicted impacts from carbon sequestration storage reservoirs, based on recommendations found in the EPA's "Unified Guidance for Statistical Analysis of Groundwater Monitoring Data at RCRA Facilities" (US Environmental Protection Agency 2009). Results from this effort can be used to inform a "no change" scenario with respect to groundwater impacts, rather than the use of an MCL that could be significantly higher than existing concentrations in the aquifer.

  1. Effects of mixing in threshold models of social behavior

    NASA Astrophysics Data System (ADS)

    Akhmetzhanov, Andrei R.; Worden, Lee; Dushoff, Jonathan

    2013-07-01

    We consider the dynamics of an extension of the influential Granovetter model of social behavior, where individuals are affected by their personal preferences and observation of the neighbors’ behavior. Individuals are arranged in a network (usually the square lattice), and each has a state and a fixed threshold for behavior changes. We simulate the system asynchronously by picking a random individual and we either update its state or exchange it with another randomly chosen individual (mixing). We describe the dynamics analytically in the fast-mixing limit by using the mean-field approximation and investigate it mainly numerically in the case of finite mixing. We show that the dynamics converge to a manifold in state space, which determines the possible equilibria, and show how to estimate the projection of this manifold by using simulated trajectories, emitted from different initial points. We show that the effects of considering the network can be decomposed into finite-neighborhood effects, and finite-mixing-rate effects, which have qualitatively similar effects. Both of these effects increase the tendency of the system to move from a less-desired equilibrium to the “ground state.” Our findings can be used to probe shifts in behavioral norms and have implications for the role of information flow in determining when social norms that have become unpopular in particular communities (such as foot binding or female genital cutting) persist or vanish.
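
    A minimal simulation sketch of these dynamics follows: agents on a square lattice carry fixed thresholds; at each asynchronous step a randomly chosen agent either updates its binary state (adopting when the fraction of active neighbours reaches its threshold) or, with some mixing probability, swaps places with another random agent. The update rule, lattice size, and parameters are illustrative simplifications, not the full model of the paper (which also includes personal-preference effects).

        import numpy as np

        # Asynchronous threshold model on a periodic square lattice with random mixing.
        rng = np.random.default_rng(7)
        L, p_mix, steps = 50, 0.1, 100_000
        state = (rng.random((L, L)) < 0.2).astype(int)        # initial adopters
        threshold = rng.random((L, L))                        # fixed per-agent thresholds

        def neighbour_fraction(s, i, j):
            return (s[(i - 1) % L, j] + s[(i + 1) % L, j]
                    + s[i, (j - 1) % L] + s[i, (j + 1) % L]) / 4.0

        for _ in range(steps):
            i, j = rng.integers(L), rng.integers(L)
            if rng.random() < p_mix:                          # mixing: swap two random agents
                a, b = rng.integers(L), rng.integers(L)
                state[i, j], state[a, b] = state[a, b], state[i, j]
                threshold[i, j], threshold[a, b] = threshold[a, b], threshold[i, j]
            else:                                             # state update against own threshold
                state[i, j] = int(neighbour_fraction(state, i, j) >= threshold[i, j])

        print("final adoption fraction:", round(float(state.mean()), 3))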

  2. Effects of mixing in threshold models of social behavior.

    PubMed

    Akhmetzhanov, Andrei R; Worden, Lee; Dushoff, Jonathan

    2013-07-01

    We consider the dynamics of an extension of the influential Granovetter model of social behavior, where individuals are affected by their personal preferences and observation of the neighbors' behavior. Individuals are arranged in a network (usually the square lattice), and each has a state and a fixed threshold for behavior changes. We simulate the system asynchronously by picking a random individual and we either update its state or exchange it with another randomly chosen individual (mixing). We describe the dynamics analytically in the fast-mixing limit by using the mean-field approximation and investigate it mainly numerically in the case of finite mixing. We show that the dynamics converge to a manifold in state space, which determines the possible equilibria, and show how to estimate the projection of this manifold by using simulated trajectories, emitted from different initial points. We show that the effects of considering the network can be decomposed into finite-neighborhood effects, and finite-mixing-rate effects, which have qualitatively similar effects. Both of these effects increase the tendency of the system to move from a less-desired equilibrium to the "ground state." Our findings can be used to probe shifts in behavioral norms and have implications for the role of information flow in determining when social norms that have become unpopular in particular communities (such as foot binding or female genital cutting) persist or vanish.

  3. Extinction thresholds in deterministic and stochastic epidemic models.

    PubMed

    Allen, Linda J S; Lahodny, Glenn E

    2012-01-01

    The basic reproduction number, ℛ0, one of the most well-known thresholds in deterministic epidemic theory, predicts a disease outbreak if ℛ0 > 1. In stochastic epidemic theory, there are also thresholds that predict a major outbreak. In the case of a single infectious group, if ℛ0 > 1 and i infectious individuals are introduced into a susceptible population, then the probability of a major outbreak is approximately 1 - (1/ℛ0)^i. With multiple infectious groups from which the disease could emerge, this result no longer holds. Stochastic thresholds for multiple groups depend on the number of individuals within each group, i_j, j = 1, …, n, and on the probability of disease extinction for each group, q_j. It follows from multitype branching processes that the probability of a major outbreak is approximately 1 - q_1^(i_1) q_2^(i_2) ⋯ q_n^(i_n). In this investigation, we summarize some of the deterministic and stochastic threshold theory, illustrate how to calculate the stochastic thresholds, and derive some new relationships between the deterministic and stochastic thresholds.
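
    The two outbreak-probability approximations are easy to evaluate directly, as in the short sketch below; the numerical values of R0, the extinction probabilities, and the initial infectives are arbitrary illustrations.

        import numpy as np

        # Single-group and multigroup approximations to the probability of a major outbreak.
        R0, i = 2.0, 3
        p_single = 1 - (1 / R0) ** i
        print(f"single group: P(outbreak) ~ {p_single:.3f}")

        q = np.array([0.5, 0.8])      # per-group extinction probabilities (assumed)
        i_j = np.array([2, 1])        # initial infectives introduced in each group (assumed)
        p_multi = 1 - np.prod(q ** i_j)
        print(f"two groups:   P(outbreak) ~ {p_multi:.3f}")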

  4. The Random-Threshold Generalized Unfolding Model and Its Application of Computerized Adaptive Testing

    ERIC Educational Resources Information Center

    Wang, Wen-Chung; Liu, Chen-Wei; Wu, Shiu-Lien

    2013-01-01

    The random-threshold generalized unfolding model (RTGUM) was developed by treating the thresholds in the generalized unfolding model as random effects rather than fixed effects to account for the subjective nature of the selection of categories in Likert items. The parameters of the new model can be estimated with the JAGS (Just Another Gibbs…

  6. Regional differences in population-based cancer survival between six prefectures in Japan: application of relative survival models with funnel plots.

    PubMed

    Ito, Yuri; Ioka, Akiko; Tsukuma, Hideaki; Ajiki, Wakiko; Sugimoto, Tomoyuki; Rachet, Bernard; Coleman, Michel P

    2009-07-01

    We used new methods to examine differences in population-based cancer survival between six prefectures in Japan, after adjustment for age and stage at diagnosis. We applied regression models for relative survival to data from population-based cancer registries covering each prefecture for patients diagnosed with stomach, lung, or breast cancer during 1993-1996. Funnel plots were used to display the excess hazard ratio (EHR) for each prefecture, defined as the excess hazard of death from each cancer within 5 years of diagnosis relative to the mean excess hazard (in excess of national background mortality by age and sex) in all six prefectures combined. The contribution of age and stage to the EHR in each prefecture was assessed from differences in deviance-based R² between the various models. No significant differences were seen between prefectures in 5-year survival from breast cancer. For cancers of the stomach and lung, the EHRs in Osaka prefecture were above the upper 95% control limits. For stomach cancer, the age- and stage-adjusted EHRs in Osaka were 1.29 for men and 1.43 for women, compared with Fukui and Yamagata. Differences in the stage at diagnosis of stomach cancer appeared to explain most of this excess hazard (61.3% for men, 56.8% for women), whereas differences in age at diagnosis explained very little (0.8%, 1.3%). This approach offers the potential to quantify the impact of differences in stage at diagnosis on time trends and regional differences in cancer survival. It underlines the utility of population-based cancer registries for improving cancer control.

  7. Laser thresholds in pulp exposure: a rat animal model

    NASA Astrophysics Data System (ADS)

    White, Joel M.; Goodis, Harold E.; Kudler, Joel J.

    1995-05-01

    Laser technology is now being clinically investigated for the removal of carious enamel and dentin. This study used an animal model to evaluate histological pulpal effects from laser exposure. The molars of 24 Sprague-Dawley rats (n = 264) were exposed to either a pulsed 1.06 micrometer Nd:YAG laser (120 microseconds, 320 micrometer diameter fiber), air rotor drill preparation, or left untreated as controls. The following treatment conditions were investigated: control group (n = 54); high speed drill with carbide bur (n = 39); laser exposure at 50 mJ/p at 10 Hz (n = 27), 100 mJ/p at 10 Hz (n = 66) and 100 mJ/p at 20 Hz (n = 39). A sixth treatment condition was investigated: root surface hypersensitivity, which included incremental laser exposure from 30 to 100 mJ/p at 10 Hz (n = 39). The animals were euthanized either immediately after treatment, at one week, or at one month. The jaws were fixed and bioprepared. Remaining dentin thickness was measured, and ranged from 0.17 +/- 0.04 mm to 0.35 +/- 0.09 mm. The pulp tissue was examined for histologic inflammatory response. No evidence of pulpal involvement or adverse pulpal effects was found at any time period in teeth receiving 50 mJ/p. When histologic samples were compared with controls, all observations were similar. Of the 210 exposed teeth, 2 teeth receiving 100 mJ/p demonstrated abscess formation and were exfoliated. Further, in the rat molar, threshold pulpal effects occurred when the remaining dentin thickness was less than 0.5 mm and the tooth was exposed to 100 mJ/p. The response of rat pulp to laser exposure indicated no histologically measurable response to pulsed laser energy at 50 mJ/p.

  8. A mathematical model for the kinetics of Methanobacterium bryantii M.o.H. considering hydrogen thresholds.

    PubMed

    Karadagli, Fatih; Rittmann, Bruce E

    2007-08-01

    We develop a kinetic model that builds on the foundation of classic Monod kinetics but incorporates new phenomena, such as substrate thresholds and a survival mode, observed in experiments with the H2-oxidizing methanogen Methanobacterium bryantii M.o.H. We apply our model to the experimental data presented in our companion paper on H2 thresholds. The model accurately describes H2 consumption, CH4 generation, biomass growth, substrate thresholds, and the survival state during batch experiments. Methane formation stops when its Gibbs free energy is equal to zero, although this does not interrupt H2 oxidation. The thermodynamic threshold for H2 oxidation occurs when the free energy for oxidizing H2 and transferring electrons to biomass is no longer negative, at approximately 0.4 nM. This threshold is not controlled by the Gibbs free energy equation of methanogenesis from H2 + HCO3-, as we show in our companion paper. Beyond this threshold, the microorganisms shift to a low-maintenance metabolism called "the survival state" in response to extended H2 starvation; this starvation response is another new feature of the kinetic model. A kinetic threshold (or Smin), a natural feature of Monod kinetics, is also captured by the model at an H2 concentration of approximately 2,400 nM. Smin is the minimum substrate concentration needed to maintain a steady-state biomass concentration. Our model will be useful for interpreting threshold results and designing new studies to understand thresholds and their ecological implications.
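
    One simple way to add a substrate threshold to Monod kinetics is to shift the substrate term so that utilisation stops once the concentration reaches the threshold; the sketch below takes that approach with illustrative parameter values. It is not the authors' full model, which additionally includes thermodynamic terms and the survival state.

        import numpy as np
        from scipy.integrate import solve_ivp

        # Threshold-shifted Monod kinetics: utilisation goes to zero as S approaches S_thr.
        q_max, K, Y, b, S_thr = 2.0, 500.0, 0.05, 0.02, 0.4    # illustrative, nM-scale values

        def rhs(t, y):
            S, X = y
            S_eff = max(S - S_thr, 0.0)
            q = q_max * S_eff / (K + S_eff)                    # specific utilisation rate
            return [-q * X, Y * q * X - b * X]                 # substrate, biomass

        sol = solve_ivp(rhs, (0, 200), [2400.0, 1.0], max_step=0.1)
        print("final substrate (nM):", round(float(sol.y[0, -1]), 2),
              "-> consumption stalls near the threshold rather than at zero")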

  9. Nonlinear Dynamic Modeling of Neuron Action Potential Threshold During Synaptically Driven Broadband Intracellular Activity

    PubMed Central

    Roach, Shane M.; Song, Dong; Berger, Theodore W.

    2012-01-01

    Activity-dependent variation of neuronal thresholds for action potential (AP) generation is one of the key determinants of spike-train temporal-pattern transformations from presynaptic to postsynaptic spike trains. In this study, we model the nonlinear dynamics of the threshold variation during synaptically driven broadband intracellular activity. First, membrane potentials of single CA1 pyramidal cells were recorded under physiologically plausible broadband stimulation conditions. Second, a method was developed to measure AP thresholds from the continuous recordings of membrane potentials. It involves measuring the turning points of APs by analyzing the third-order derivatives of the membrane potentials. Four stimulation paradigms with different temporal patterns were applied to validate this method by comparing the measured AP turning points and the actual AP thresholds estimated with varying stimulation intensities. Results show that the AP turning points provide consistent measurement of the AP thresholds, except for a constant offset. It indicates that 1) the variation of AP turning points represents the nonlinearities of threshold dynamics; and 2) an optimization of the constant offset is required to achieve accurate spike prediction. Third, a nonlinear dynamical third-order Volterra model was built to describe the relations between the threshold dynamics and the AP activities. Results show that the model can predict threshold accurately based on the preceding APs. Finally, the dynamic threshold model was integrated into a previously developed single neuron model and resulted in a 33% improvement in spike prediction. PMID:22156947
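
    The turning-point measurement can be sketched numerically: differentiate the membrane potential three times and, for each detected spike, take the threshold estimate at the peak of the third derivative shortly before the spike peak. The voltage trace below is synthetic and the pre-spike window length is an assumption.

        import numpy as np
        from scipy.signal import find_peaks

        # Locate per-spike "turning points" from the third derivative of a synthetic trace.
        dt = 0.0001                                             # 10 kHz sampling (assumed)
        t = np.arange(0, 0.2, dt)
        v = -65 + 2 * np.sin(2 * np.pi * 5 * t)                 # slow subthreshold fluctuation
        for spike_time in (0.05, 0.12, 0.17):                   # add stereotyped spike waveforms
            v += 80 * np.exp(-((t - spike_time) ** 2) / (2 * 0.0005 ** 2))

        d3v = np.gradient(np.gradient(np.gradient(v, dt), dt), dt)
        spike_idx, _ = find_peaks(v, height=-20)                # spike peaks

        for idx in spike_idx:
            window = slice(max(idx - 30, 0), idx)               # 3 ms window before each peak
            turn = window.start + int(np.argmax(d3v[window]))
            print(f"spike at {t[idx] * 1000:.1f} ms, turning point ~ {v[turn]:.1f} mV")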

  10. The application of cure models in the presence of competing risks: a tool for improved risk communication in population-based cancer patient survival.

    PubMed

    Eloranta, Sandra; Lambert, Paul C; Andersson, Therese M-L; Björkholm, Magnus; Dickman, Paul W

    2014-09-01

    Quantifying cancer patient survival from the perspective of cure is clinically relevant. However, most cure models estimate cure assuming no competing causes of death. We use a relative survival framework to demonstrate how flexible parametric cure models can be used in combination with competing-risks theory to incorporate noncancer deaths. Under a model that incorporates statistical cure, we present the probabilities that cancer patients (1) have died from their cancer, (2) have died from other causes, (3) will eventually die from their cancer, or (4) will eventually die from other causes, all as a function of time since diagnosis. We further demonstrate how conditional probabilities can be used to update the prognosis among survivors (eg, at 1 or 5 years after diagnosis) by summarizing the proportion of patients who will not die from their cancer. The proposed method is applied to Swedish population-based data for persons diagnosed with melanoma, colon cancer, or acute myeloid leukemia between 1973 and 2007.

  11. Assessing potential population impact of statin treatment for primary prevention of atherosclerotic cardiovascular diseases in the USA: population-based modelling study

    PubMed Central

    Yang, Quanhe; Zhong, Yuna; Gillespie, Cathleen; Merritt, Robert; Bowman, Barbara; George, Mary G; Flanders, W Dana

    2017-01-01

    Objective: New cholesterol treatment guidelines from the American College of Cardiology/American Heart Association recommend statin treatment for more of the US population to prevent atherosclerotic cardiovascular disease (ASCVD). It is important to assess how the new guidelines may affect population-level health. This study assessed the impact of statin use for primary prevention of ASCVD under the new guidelines. Methods: We used data from 2010 US Multiple Cause Mortality, Third National Health and Nutrition Examination Survey (NHANES III) Linked Mortality File (1988–2006, n=8941) and NHANES 2005–2010 (n=3178) participants 40–75 years of age for the present study. Results: Among 33.0 million adults meeting the new guidelines for primary prevention of ASCVD, 8.8 million were taking statins; 24.2 million, including 7.7 million with diabetes, are eligible for statin treatment. If all those with diabetes used a statin, 2514 (95% CI 592 to 4142) predicted ASCVD deaths would be prevented annually, with 482 (0 to 2239) predicted annual additional cases of myopathy based on randomised clinical trials (RCTs) and 11 801 (9251 to 14 916) based on a population-based study. Among 16.5 million without diabetes, 5425 (1276 to 8935) ASCVD deaths would be prevented annually, with 16 406 (4922 to 26 250) predicted annual additional cases of diabetes and between 1030 (0 to 4791) and 24 302 (19 363 to 30 292) additional cases of myopathy based on RCTs and a population-based study, respectively. Assuming 80% of the eligible population take statins with 80% medication adherence, among those without diabetes the corresponding numbers were 3472 (817 to 5718) deaths, 10 500 (3150 to 16 800) diabetes, 660 (0 to 3066) myopathy (RCTs), and 15 554 (12 392 to 19 387) myopathy (population-based). The estimated total annual cost of statin use ranged from US$1.65 to US$6.5 billion if 100% of the eligible population take statins. Conclusions: This population-based modelling study focused on the impact of statin use on

  12. Development, Calibration, and Validation of a U.S. White Male Population-Based Simulation Model of Esophageal Adenocarcinoma

    PubMed Central

    Hur, Chin; Hayeck, Tristan J.; Yeh, Jennifer M.; Richards, Ethan B.; Spechler, Stuart J.; Gazelle, G. Scott; Kong, Chung Yin

    2010-01-01

    Background The incidence of esophageal adenocarcinoma (EAC) has risen rapidly in the U.S. and western world. The aim of the study was to begin the investigation of this rapid rise by developing, calibrating, and validating a mathematical disease simulation model of EAC using available epidemiologic data. Methods The model represents the natural history of EAC, including the essential biologic health states from normal mucosa to detected cancer. Progression rates between health states were estimated via calibration, which identified distinct parameter sets producing model outputs that fit epidemiologic data; specifically, the prevalence of pre-cancerous lesions and EAC cancer incidence from the published literature and Surveillance, Epidemiology, and End Results (SEER) data. As an illustrative example of a clinical and policy application, the calibrated and validated model retrospectively analyzed the potential benefit of an aspirin chemoprevention program. Results Model outcomes approximated calibration targets; results of the model's fit and validation are presented. Approximately 7,000 cases of EAC could have been prevented over a 30-year period if all white males started aspirin chemoprevention at age 40 in 1965. Conclusions The model serves as the foundation for future analyses to determine a cost-effective screening and management strategy to prevent EAC morbidity and mortality. PMID:20208996

  13. The Best Obesity Indices to Use in a Single Factor Model Indicating Metabolic Syndrome: a Population Based Study.

    PubMed

    Motamed, Nima; Zamani, Farhad; Rabiee, Behnam; Saeedian, Fatemeh Sima; Maadi, Mansooreh; Akhavan-Niaki, Haleh; Asouri, Mohsen

    2016-02-01

    Although metabolic syndrome (MetS) is a major health problem worldwide, there is no universal agreement on its definition. One of the major disagreements concerns how obesity is handled in the definition. This study was conducted to determine which index of obesity is best interrelated with the other components of MetS in a single factor model of MetS. Out of 6140 participants of a cohort study of subjects aged 10-90 years in northern Iran, the baseline data of 5616 participants aged 18-75 were considered. Confirmatory factor analysis was conducted using AMOS software to evaluate a single factor model of MetS in which blood pressure, triglyceride (TG), high density lipoprotein (HDL), fasting blood sugar (FBS) and obesity measures including waist circumference (WC), body mass index (BMI), waist to hip ratio (WHR) and waist to height ratio (WHtR) were used as indicators of metabolic syndrome. Four single factor models, differing from each other only in the obesity index, were evaluated. The models were evaluated separately by sex in all 5616 subjects and in 4931 subjects without diabetes mellitus. All single factor models had appropriate fit indices, with CFI > 0.95, GFI > 0.95 and RMSEA < 0.08 in the non-diabetic population, wherein all models obtained the best fit indices in men and good fit indices in women. In the general population of men, the single factor models built on WHR (Chi-square = 6.9, df = 2, P-value = 0.031, RMSEA = 0.028, CI = 0.007-0.052, CFI = 0.994, GFI = 0.999, AIC = 22.9) and WHtR (Chi-square = 9.97, df = 2, P-value = 0.007, RMSEA = 0.036, CI = 0.016-0.059, CFI = 0.992, GFI = 0.998, AIC = 25.97) fitted the data properly, while in the general population of women the model based on WHR obtained better fit indices (Chi-square = 7.5, df = 2, P-value = 0.023, RMSEA = 0.033, CI = 0.011-0.060, CFI = 0.994, GFI = 0.998, AIC = 23.5). Models based on WHtR obtained better regression weights than WHR. While single

  14. Validation of three BRCA1/2 mutation-carrier probability models Myriad, BRCAPRO and BOADICEA in a population-based series of 183 German families.

    PubMed

    Schneegans, S M; Rosenberger, A; Engel, U; Sander, M; Emons, G; Shoukier, M

    2012-06-01

    Many studies have evaluated the performance of risk assessment models for BRCA1/2 mutation carrier probabilities in different populations, but to our knowledge very few studies have been conducted in the German population so far. In the present study, we validated the performance of three risk calculation models, namely BRCAPRO, Myriad and BOADICEA, in 183 German families who had undergone molecular testing for mutations in BRCA1 and BRCA2 with an indication based on clinical criteria regarding their family history of cancer. The sensitivity and specificity at the conventional threshold of 10% as well as at a threshold of 20% were evaluated. The ability to discriminate between carriers and non-carriers was judged by the area under the receiver operating characteristic curve. We further focused on the performance characteristics of these models in patients carrying large genomic rearrangements, a subtype of mutation that is currently gaining importance. BRCAPRO and BOADICEA performed almost equally well in our patient population, but we found a lack of agreement with Myriad. The results obtained from this study were consistent with previously published results from other populations and racial/ethnic groups. We suggest using model-specific decision thresholds instead of the recommended universal value of 10%. We further suggest integrating the CaGene5 software package, which includes BRCAPRO and Myriad, into the genetic counselling of German families with suspected inherited breast and ovarian cancer, because of the good performance of BRCAPRO and the substantial ease of use of this software.
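
    A short sketch of the validation metrics used in such studies: sensitivity and specificity at the 10% and 20% carrier-probability thresholds, plus the area under the ROC curve computed via the rank-sum identity. The predicted probabilities and mutation statuses below are made up for illustration.

```python
import numpy as np
from scipy.stats import rankdata

# Hypothetical model outputs: predicted carrier probability per family
# and observed BRCA1/2 mutation status (1 = carrier, 0 = non-carrier).
p_hat   = np.array([0.05, 0.08, 0.12, 0.22, 0.35, 0.40, 0.07, 0.18, 0.55, 0.03])
carrier = np.array([0,    0,    1,    0,    1,    1,    0,    1,    1,    0])

def sens_spec(p, y, threshold):
    pred = p >= threshold
    sens = (pred & (y == 1)).sum() / (y == 1).sum()
    spec = (~pred & (y == 0)).sum() / (y == 0).sum()
    return sens, spec

for t in (0.10, 0.20):
    print("threshold", t, "-> sensitivity, specificity:", sens_spec(p_hat, carrier, t))

# AUC via the rank-sum (Mann-Whitney) identity.
ranks = rankdata(p_hat)
n_pos, n_neg = (carrier == 1).sum(), (carrier == 0).sum()
auc = (ranks[carrier == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
print("AUC:", auc)
```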

  15. Error threshold transition in the random-energy model

    NASA Astrophysics Data System (ADS)

    Campos, Paulo R.

    2002-12-01

    We perform a statistical analysis of the error threshold transition in quasispecies evolution on a random-energy fitness landscape. We obtain a precise description of the genealogical properties of the population through extensive numerical simulations. We find a clear phase transition and can distinguish two regimes of evolution: The first, for low mutation rates, is characterized by strong selection, and the second, for high mutation rates, is characterized by quasineutral evolution.
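
    A minimal Wright-Fisher-style sketch of quasispecies evolution on a random-energy (random-fitness) landscape, illustrating how increasing the per-site mutation rate moves the population from a selection-dominated regime (one dominant genotype) towards quasineutral evolution. Genome length, population size and fitness scale are arbitrary choices, not the paper's parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
L, N, generations = 12, 500, 300                       # genome length, population size, time
fitness = np.exp(rng.normal(0.0, 1.0, size=2 ** L))    # random-energy fitness landscape

def to_index(genomes):
    # Interpret each binary genome as an integer index into the fitness table.
    return genomes.dot(1 << np.arange(L))

def run(mu):
    genomes = rng.integers(0, 2, size=(N, L))
    for _ in range(generations):
        w = fitness[to_index(genomes)]
        parents = rng.choice(N, size=N, p=w / w.sum())  # fitness-proportional selection
        genomes = genomes[parents]
        flips = rng.random((N, L)) < mu                 # per-site mutation
        genomes = np.where(flips, 1 - genomes, genomes)
    idx = to_index(genomes)
    # Share of the population sitting on the most common genotype at the end.
    return np.mean(idx == np.bincount(idx, minlength=2 ** L).argmax())

for mu in (0.001, 0.01, 0.05, 0.2):
    print(f"mu = {mu:<5}  dominant-genotype share = {run(mu):.2f}")
```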

  16. External validation of a COPD prediction model using population-based primary care data: a nested case-control study

    PubMed Central

    Nwaru, Bright I; Simpson, Colin R; Sheikh, Aziz; Kotz, Daniel

    2017-01-01

    Emerging models for predicting risk of chronic obstructive pulmonary disease (COPD) require external validation in order to assess their clinical value. We validated a previous model for predicting new-onset COPD in a different database. We randomly drew 38,597 case-control pairs (total N = 77,194) of individuals aged ≥35 years, matched for sex, age, and general practice, from the United Kingdom Clinical Practice Research Datalink database. We assessed the accuracy of the model in discriminating between COPD cases and non-cases by calculating the area under the receiver operating characteristic curve (ROCAUC) for the prediction scores. Analogous to the development model, ever smoking (OR 6.70; 95%CI 6.41–6.99), prior asthma (OR 6.43; 95%CI 5.85–7.07), and higher socioeconomic deprivation (OR 2.90; 95%CI 2.72–3.09 for highest vs. lowest quintile) increased the risk of COPD. The validated prediction scores ranged from 0–5.71 (ROCAUC 0.66; 95%CI 0.65–0.66) for males and 0–5.95 (ROCAUC 0.71; 95%CI 0.70–0.71) for females. We have confirmed that smoking, prior asthma, and socioeconomic deprivation are key risk factors for new-onset COPD. Our model seems externally valid at identifying patients at risk of developing COPD. An impact assessment now needs to be undertaken to assess whether this prediction model can be applied in clinical care settings. PMID:28304375

  17. Application of a dynamic population-based model for evaluation of exposure reduction strategies in the baking industry

    NASA Astrophysics Data System (ADS)

    Meijster, Tim; Warren, Nick; Heederik, Dick; Tielemans, Erik

    2009-02-01

    Recently a dynamic population model was developed that simulates a population of bakery workers longitudinally through time and tracks the development of work-related sensitisation and respiratory symptoms in each worker. Input for this model comes from cross-sectional and longitudinal epidemiological studies, which allowed estimation of exposure-response relationships and disease transition probabilities. This model allows us to study the development of diseases and transitions between disease states over time in relation to determinants of disease, including flour dust and/or allergen exposure. Furthermore, it enables more realistic modelling of the health impact of different intervention strategies at the workplace (e.g. changes in exposure may take several years to impact on ill-health and often occur as a gradual trend). A large dataset of individual full-shift exposure measurements and real-time exposure measurements was used to obtain detailed insight into the effectiveness of control measures and other determinants of exposure. Given this information, a population-wide reduction of the median exposure by 50% was evaluated in this paper.

  18. Partitioning of excess mortality in population-based cancer patient survival studies using flexible parametric survival models

    PubMed Central

    2012-01-01

    Background Relative survival is commonly used for studying survival of cancer patients as it captures both the direct and indirect contribution of a cancer diagnosis on mortality by comparing the observed survival of the patients to the expected survival in a comparable cancer-free population. However, existing methods do not allow estimation of the impact of isolated conditions (e.g., excess cardiovascular mortality) on the total excess mortality. For this purpose we extend flexible parametric survival models for relative survival, which use restricted cubic splines for the baseline cumulative excess hazard and for any time-dependent effects. Methods In the extended model we partition the excess mortality associated with a diagnosis of cancer through estimating a separate baseline excess hazard function for the outcomes under investigation. This is done by incorporating mutually exclusive background mortality rates, stratified by the underlying causes of death reported in the Swedish population, and by introducing cause of death as a time-dependent effect in the extended model. This approach thereby enables modeling of temporal trends in e.g., excess cardiovascular mortality and remaining cancer excess mortality simultaneously. Furthermore, we illustrate how the results from the proposed model can be used to derive crude probabilities of death due to the component parts, i.e., probabilities estimated in the presence of competing causes of death. Results The method is illustrated with examples where the total excess mortality experienced by patients diagnosed with breast cancer is partitioned into excess cardiovascular mortality and remaining cancer excess mortality. Conclusions The proposed method can be used to simultaneously study disease patterns and temporal trends for various causes of cancer-consequent deaths. Such information should be of interest for patients and clinicians as one way of improving prognosis after cancer is through adapting treatment

  19. A new threshold dose-response model including random effects for data from developmental toxicity studies.

    PubMed

    Hunt, Daniel L; Rai, Shesh N

    2005-01-01

    Usually, in teratological dose-finding studies, there are not only threshold effects but also extra variation that cannot be accounted for by the beta-binomial model alone. The beta-binomial model assumes correlation between fetuses in the same litter. The general random-effects (RE) threshold model allows this additional variability, arising from within-litter correlation and between-litter variation, to be modelled together with a threshold. The goal of this research was to investigate a threshold dose-response model with random effects to model the variability that exists between litters of animals in studies of toxic agents. Data from a developmental toxicity study of a toxic agent were analysed using the proposed RE threshold dose-response model, which is an extension of the logit model in form. An approximate likelihood function was used to derive parameter estimates from this model, and tests were performed to determine the significance of the model parameters, in particular the RE parameter. A simulation study was conducted to assess the performance of the RE threshold model in estimating the model parameters. 2005 John Wiley & Sons, Ltd.
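
    A sketch of the likelihood ingredients described above: a logit dose-response in which doses below a threshold contribute only background risk, combined with beta-binomial variation between litters. The data, the way the threshold enters, and the mapping from intra-litter correlation to beta-binomial shape parameters are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np
from scipy.stats import betabinom
from scipy.special import expit

# Litter-level data: dose given to the dam, litter size, number of affected fetuses
# (all made up for illustration).
dose     = np.array([0.0, 0.0, 0.5, 1.0, 2.0, 4.0, 4.0, 8.0])
litter_n = np.array([10,  12,  11,   9,  10,  12,  10,  11])
affected = np.array([ 1,   0,   1,   2,   3,   6,   5,   9])

def neg_log_lik(params):
    """Threshold logit dose-response with beta-binomial litter variation."""
    alpha, beta, tau, rho = params            # intercept, slope, threshold, overdispersion
    effective = np.maximum(dose - tau, 0.0)   # no dose effect below the threshold tau
    p = expit(alpha + beta * effective)       # per-fetus probability of the outcome
    # Convert (mean p, intra-litter correlation rho) to beta-binomial shape parameters.
    a = p * (1 - rho) / rho
    b = (1 - p) * (1 - rho) / rho
    return -betabinom.logpmf(affected, litter_n, a, b).sum()

print(neg_log_lik([-2.5, 0.8, 0.5, 0.1]))     # could be minimised with scipy.optimize
```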

  20. A Threshold Model of Social Support, Adjustment, and Distress after Breast Cancer Treatment

    ERIC Educational Resources Information Center

    Mallinckrodt, Brent; Armer, Jane M.; Heppner, P. Paul

    2012-01-01

    This study examined a threshold model that proposes that social support exhibits a curvilinear association with adjustment and distress, such that support in excess of a critical threshold level has decreasing incremental benefits. Women diagnosed with a first occurrence of breast cancer (N = 154) completed survey measures of perceived support…

  1. Modelling the regulatory system for diabetes mellitus with a threshold window

    NASA Astrophysics Data System (ADS)

    Yang, Jin; Tang, Sanyi; Cheke, Robert A.

    2015-05-01

    Piecewise (or non-smooth) glucose-insulin models with threshold windows for type 1 and type 2 diabetes mellitus are proposed and analyzed with a view to improving understanding of the glucose-insulin regulatory system. For glucose-insulin models with a single threshold, the existence and stability of regular, virtual, pseudo-equilibria and tangent points are addressed. Then the relations between regular equilibria and a pseudo-equilibrium are studied. Furthermore, the sufficient and necessary conditions for the global stability of regular equilibria and the pseudo-equilibrium are provided by using qualitative analysis techniques of non-smooth Filippov dynamic systems. Sliding bifurcations related to boundary node bifurcations were investigated with theoretical and numerical techniques, and insulin clinical therapies are discussed. For glucose-insulin models with a threshold window, the effects of glucose thresholds or the widths of threshold windows on the durations of insulin therapy and glucose infusion were addressed. The duration of the effects of an insulin injection is sensitive to the variation of thresholds. Our results indicate that blood glucose level can be maintained within a normal range using piecewise glucose-insulin models with a single threshold or a threshold window. Moreover, our findings suggest that it is critical to individualise insulin therapy for each patient separately, based on initial blood glucose levels.
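
    A toy piecewise (non-smooth) glucose-insulin system in the spirit of a single-threshold model: an insulin-therapy term switches on only while glucose exceeds the threshold, giving a Filippov-type switching system. The equations and constants are illustrative, not those of the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

G_THRESHOLD = 7.0     # mmol/L: insulin therapy switches on above this glucose level

def rhs(t, y):
    G, I = y                                   # glucose, insulin
    therapy = 1.0 if G > G_THRESHOLD else 0.0  # piecewise control term
    dG = 1.2 - 0.05 * G - 0.10 * G * I         # hepatic input, clearance, insulin action
    dI = 0.03 * G + 0.40 * therapy - 0.30 * I  # secretion, injection, degradation
    return [dG, dI]

sol = solve_ivp(rhs, (0.0, 200.0), [12.0, 0.1], max_step=0.1)
print("final glucose:", sol.y[0, -1], "final insulin:", sol.y[1, -1])
```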

  3. A model measuring therapeutic inertia and the associated factors among diabetes patients: A nationwide population-based study in Taiwan.

    PubMed

    Huang, Li-Ying; Shau, Wen-Yi; Yeh, Hseng-Long; Chen, Tsung-Tai; Hsieh, Jun Yi; Su, Syi; Lai, Mei-Shu

    2015-01-01

    This article presents an analysis of patterns of therapeutic inertia, with the aim of uncovering how variables at the patient level and the healthcare provider level influence the intensification of therapy when it is clinically indicated. A cohort study was conducted on 899,135 HbA1c results from 168,876 adult diabetes patients with poorly controlled HbA1c levels. HbA1c results were used to identify variations in the prescription of hypoglycemic drugs. Logistic regression and hierarchical linear models (HLMs) were used to determine how differences among healthcare providers and patient characteristics influence therapeutic inertia. We estimated that 38.5% of the patients in this study were subject to therapeutic inertia. The odds of cardiologists choosing to intensify therapy were 0.708 times those of endocrinologists. Furthermore, patients in medical centers were 1.077 times more likely to be prescribed intensified treatment than patients in primary clinics. The HLMs presented results similar to those of the logistic model. Overall, we determined that 88.92% of the variation in the application of intensified treatment was at the within-physician level. Reducing therapeutic inertia will likely require educational initiatives aimed at ensuring adherence to clinical practice guidelines in the care of diabetes patients. © 2014, The American College of Clinical Pharmacology.

  4. Development of a population-based cost-effectiveness model of chronic graft-versus-host disease in Spain.

    PubMed

    Crespo, Carlos; Pérez-Simón, José Anton; Rodríguez, José Manuel; Sierra, Jordi; Brosa, Max

    2012-08-01

    Chronic graft-versus-host disease (cGvHD) is the leading cause of late nonrelapse mortality (transplant-related mortality) after hematopoietic stem cell transplant. Given that there are a wide range of treatment options for cGvHD, assessment of the associated costs and efficacy can help clinicians and health care providers allocate health care resources more efficiently. The purpose of this study was to assess the cost-effectiveness of extracorporeal photopheresis (ECP) compared with rituximab (Rmb) and with imatinib (Imt) in patients with cGvHD at 5 years from the perspective of the Spanish National Health System. The model assessed the incremental cost-effectiveness/utility ratio of ECP versus Rmb or Imt for 1000 hypothetical patients by using microsimulation cost-effectiveness techniques. Model probabilities were obtained from the literature. Treatment pathways and adverse events were evaluated taking clinical opinion and published reports into consideration. Local data on costs (2010 Euros) and health care resources utilization were validated by the clinical authors. Probabilistic sensitivity analyses were used to assess the robustness of the model. The greater efficacy of ECP resulted in a gain of 0.011 to 0.024 quality-adjusted life-year in the first year and 0.062 to 0.094 at year 5 compared with Rmb or Imt. The results showed that the higher acquisition cost of ECP versus Imt was compensated for at 9 months by greater efficacy; this higher cost was partially compensated for (€517) by year 5 versus Rmb. After 9 months, ECP was dominant (cheaper and more effective) compared with Imt. The incremental cost-effectiveness ratio of ECP versus Rmb was €29,646 per life-year gained and €24,442 per quality-adjusted life-year gained at year 2.5. Probabilistic sensitivity analysis confirmed the results. The main study limitation was that to assess relative treatment effects, only small studies were available for indirect comparison. ECP as a third-line therapy for
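
    A minimal sketch of the incremental cost-effectiveness arithmetic reported in such analyses. Only the year-5 QALY gain is taken from the abstract; the per-patient 5-year costs are hypothetical placeholders.

```python
# Hypothetical 5-year costs per patient (EUR); only the QALY gain is from the abstract.
cost_ecp, cost_rmb = 60_000.0, 57_500.0
qaly_gain_vs_rmb = 0.094                       # upper end of the year-5 range in the abstract

incremental_cost = cost_ecp - cost_rmb
icer = incremental_cost / qaly_gain_vs_rmb     # cost per QALY gained
print(f"ICER vs rituximab: EUR {icer:,.0f} per QALY gained")
```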

  5. Direct analysis of unphased SNP genotype data in population-based association studies via Bayesian partition modelling of haplotypes.

    PubMed

    Morris, Andrew P

    2005-09-01

    We describe a novel method for assessing the strength of disease association with single nucleotide polymorphisms (SNPs) in a candidate gene or small candidate region, and for estimating the corresponding haplotype relative risks of disease, using unphased genotype data directly. We begin by estimating the relative frequencies of haplotypes consistent with observed SNP genotypes. Under the Bayesian partition model, we specify cluster centres from this set of consistent SNP haplotypes. The remaining haplotypes are then assigned to the cluster with the "nearest" centre, where distance is defined in terms of SNP allele matches. Within a logistic regression modelling framework, each haplotype within a cluster is assigned the same disease risk, reducing the number of parameters required. Uncertainty in phase assignment is addressed by considering all possible haplotype configurations consistent with each unphased genotype, weighted in the logistic regression likelihood by their probabilities, calculated according to the estimated relative haplotype frequencies. We develop a Markov chain Monte Carlo algorithm to sample over the space of haplotype clusters and corresponding disease risks, allowing for covariates that might include environmental risk factors or polygenic effects. Application of the algorithm to SNP genotype data in an 890-kb region flanking the CYP2D6 gene illustrates that we can identify clusters of haplotypes with similar risk of poor drug metaboliser (PDM) phenotype, and can distinguish PDM cases carrying different high-risk variants. Further, the results of a detailed simulation study suggest that we can identify positive evidence of association for moderate relative disease risks with a sample of 1,000 cases and 1,000 controls.
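
    The clustering step described above, assigning each SNP haplotype to the cluster whose centre it matches at the most alleles, can be sketched as follows; the haplotypes and cluster centres are made-up examples.

```python
import numpy as np

# Haplotypes coded 0/1 at each SNP; rows are haplotypes consistent with the genotypes.
haplotypes = np.array([
    [0, 1, 1, 0, 1],
    [0, 1, 0, 0, 1],
    [1, 0, 0, 1, 0],
    [1, 1, 0, 1, 0],
])
# Cluster centres chosen from the set of consistent haplotypes (hypothetical here).
centres = np.array([
    [0, 1, 1, 0, 1],
    [1, 0, 0, 1, 0],
])

# Distance = number of SNP allele mismatches; assign each haplotype to the nearest centre.
mismatches = (haplotypes[:, None, :] != centres[None, :, :]).sum(axis=2)
assignment = mismatches.argmin(axis=1)
print(assignment)        # cluster index for each haplotype, here [0 0 1 1]
```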

  6. Associations of iron metabolism genes with blood manganese levels: a population-based study with validation data from animal models

    PubMed Central

    2011-01-01

    Background Given mounting evidence for adverse effects from excess manganese exposure, it is critical to understand host factors, such as genetics, that affect manganese metabolism. Methods Archived blood samples, collected from 332 Mexican women at delivery, were analyzed for manganese. We evaluated associations of manganese with functional variants in three candidate iron metabolism genes: HFE [hemochromatosis], TF [transferrin], and ALAD [δ-aminolevulinic acid dehydratase]. We used a knockout mouse model to parallel our significant results as a novel method of validating the observed associations between genotype and blood manganese in our epidemiologic data. Results Percentage of participants carrying at least one copy of HFE C282Y, HFE H63D, TF P570S, and ALAD K59N variant alleles was 2.4%, 17.7%, 20.1%, and 6.4%, respectively. Percentage carrying at least one copy of either C282Y or H63D allele in HFE gene was 19.6%. Geometric mean (geometric standard deviation) manganese concentrations were 17.0 (1.5) μg/l. Women with any HFE variant allele had 12% lower blood manganese concentrations than women with no variant alleles (β = -0.12 [95% CI = -0.23 to -0.01]). TF and ALAD variants were not significant predictors of blood manganese. In animal models, Hfe-/- mice displayed a significant reduction in blood manganese compared with Hfe+/+ mice, replicating the altered manganese metabolism found in our human research. Conclusions Our study suggests that genetic variants in iron metabolism genes may contribute to variability in manganese exposure by affecting manganese absorption, distribution, or excretion. Genetic background may be critical to consider in studies that rely on environmental manganese measurements. PMID:22074419

  7. Approaches in methodology for population-based longitudinal study on neuroprotective model for healthy longevity (TUA) among Malaysian Older Adults.

    PubMed

    Shahar, Suzana; Omar, Azahadi; Vanoh, Divya; Hamid, Tengku Aizan; Mukari, Siti Zamratol Mai-Sarah; Din, Normah Che; Rajab, Nor Fadilah; Mohammed, Zainora; Ibrahim, Rahimah; Loo, Won Hui; Meramat, Asheila; Kamaruddin, Mohd Zul Amin; Bagat, Mohamad Fazdillah; Razali, Rosdinom

    2016-12-01

    A number of longitudinal studies on aging have been designed to determine the predictors of healthy longevity, including neuroprotective factors; however, relatively few studies have included a wide range of factors or highlighted the challenges faced during data collection. Thus, the longitudinal study on a neuroprotective model for healthy longevity (LRGS TUA) was designed to prospectively investigate the magnitude of cognitive decline and its risk factors through a comprehensive multidimensional assessment comprising biophysical health, auditory and visual function, nutrition and dietary pattern, and psychosocial aspects. At baseline, subjects were interviewed about their sociodemographic status, health, neuropsychological test performance, psychosocial status and dietary intake. Subjects were also measured for anthropometry and physical function and fitness. Biospecimens including blood, buccal swab, hair and toenail were collected, processed and stored. A subsample was assessed for sensory function, i.e., vision and hearing. During follow-up, at 18 and 36 months, most of the measurements, along with morbidity and mortality outcomes, will be collected. The description of mild cognitive impairment, successful aging and the usual aging process is presented here. A total of 2322 respondents were included in the data analysis at baseline. Most of the respondents were categorized as experiencing usual aging (73%), followed by mild cognitive impairment (16%) and successful aging (11%). The LRGS TUA study is the most comprehensive longitudinal study on aging in Malaysia, and will contribute to the understanding of the aging process and factors associated with healthy aging and the mental well-being of a multiethnic population in Malaysia.

  8. Reversed thresholds in partial credit models: a reason for collapsing categories?

    PubMed

    Wetzel, Eunike; Carstensen, Claus H

    2014-12-01

    When questionnaire data with an ordered polytomous response format are analyzed in the framework of item response theory using the partial credit model or the generalized partial credit model, reversed thresholds may occur. This led to the discussion of whether reversed thresholds violate model assumptions and indicate disordering of the response categories. Adams, Wu, and Wilson showed that reversed thresholds are merely a consequence of low frequencies in the categories concerned and that they do not affect the order of the rating scale. This article applies an empirical approach to elucidate the topic of reversed thresholds using data from the Revised NEO Personality Inventory as well as a simulation study. It is shown that categories differentiate between participants with different trait levels despite reversed thresholds and that category disordering can be analyzed independently of the ordering of the thresholds. Furthermore, we show that reversed thresholds often only occur in subgroups of participants. Thus, researchers should think more carefully about collapsing categories due to reversed thresholds. © The Author(s) 2014.
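
    A short sketch of partial credit model category probabilities for an item whose first two thresholds are reversed, which makes it easy to inspect whether the categories still separate respondents by trait level. The threshold values are made up.

```python
import numpy as np

def pcm_probs(theta, deltas):
    """Partial credit model: P(X = k | theta) for k = 0..len(deltas)."""
    # Cumulative sums of (theta - delta_j); category 0 corresponds to the empty sum (= 0).
    steps = np.concatenate(([0.0], np.cumsum(theta - np.asarray(deltas))))
    expo = np.exp(steps - steps.max())       # numerically stabilised softmax
    return expo / expo.sum()

deltas = [0.5, -0.5, 1.0]                    # note: the first two thresholds are reversed
for theta in (-2.0, 0.0, 2.0):
    print(theta, np.round(pcm_probs(theta, deltas), 3))
```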

  9. Budget Impact Analysis of Switching to Digital Mammography in a Population-Based Breast Cancer Screening Program: A Discrete Event Simulation Model

    PubMed Central

    Comas, Mercè; Arrospide, Arantzazu; Mar, Javier; Sala, Maria; Vilaprinyó, Ester; Hernández, Cristina; Cots, Francesc; Martínez, Juan; Castells, Xavier

    2014-01-01

    Objective To assess the budgetary impact of switching from screen-film mammography to full-field digital mammography in a population-based breast cancer screening program. Methods A discrete-event simulation model was built to reproduce the breast cancer screening process (biennial mammographic screening of women aged 50 to 69 years) combined with the natural history of breast cancer. The simulation started with 100,000 women and, during a 20-year simulation horizon, new women were dynamically entered according to the aging of the Spanish population. Data on screening were obtained from Spanish breast cancer screening programs. Data on the natural history of breast cancer were based on US data adapted to our population. A budget impact analysis comparing digital with screen-film screening mammography was performed in a sample of 2,000 simulation runs. A sensitivity analysis was performed for crucial screening-related parameters. Distinct scenarios for recall and detection rates were compared. Results Statistically significant savings were found for overall costs, treatment costs and the costs of additional tests in the long term. The overall cost saving was 1,115,857€ (95%CI from 932,147 to 1,299,567) in the 10th year and 2,866,124€ (95%CI from 2,492,610 to 3,239,638) in the 20th year, representing 4.5% and 8.1% of the overall cost associated with screen-film mammography. The sensitivity analysis showed net savings in the long term. Conclusions Switching to digital mammography in a population-based breast cancer screening program saves long-term budget expense, in addition to providing technical advantages. Our results were consistent across distinct scenarios representing the different results obtained in European breast cancer screening programs. PMID:24832200

  10. How do physicians decide to treat: an empirical evaluation of the threshold model

    PubMed Central

    2014-01-01

    Background According to the threshold model, when faced with a decision under diagnostic uncertainty, physicians should administer treatment if the probability of disease is above a specified threshold and withhold treatment otherwise. The objectives of the present study are to a) evaluate if physicians act according to a threshold model, b) examine which of the existing threshold models [expected utility theory model (EUT), regret-based threshold model, or dual-processing theory] explains the physicians’ decision-making best. Methods A survey employing realistic clinical treatment vignettes for patients with pulmonary embolism and acute myeloid leukemia was administered to forty-one practicing physicians across different medical specialties. Participants were randomly assigned to the order of presentation of the case vignettes and re-randomized to the order of “high” versus “low” threshold case. The main outcome measure was the proportion of physicians who would or would not prescribe treatment in relation to perceived changes in threshold probability. Results Fewer physicians choose to treat as the benefit/harms ratio decreased (i.e. the threshold increased) and more physicians administered treatment as the benefit/harms ratio increased (and the threshold decreased). When compared to the actual treatment recommendations, we found that the regret model was marginally superior to the EUT model [Odds ratio (OR) = 1.49; 95% confidence interval (CI) 1.00 to 2.23; p = 0.056]. The dual-processing model was statistically significantly superior to both EUT model [OR = 1.75, 95% CI 1.67 to 4.08; p < 0.001] and regret model [OR = 2.61, 95% CI 1.11 to 2.77; p = 0.018]. Conclusions We provide the first empirical evidence that physicians’ decision-making can be explained by the threshold model. Of the threshold models tested, the dual-processing theory of decision-making provides the best explanation for the observed empirical results. PMID
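
    Under the expected utility theory version of the threshold model, the treatment threshold is the disease probability at which the expected harm of treating the disease-free equals the expected benefit of treating the diseased, commonly written P_t = harm / (harm + benefit). A sketch with hypothetical benefit and harm magnitudes:

```python
# Hypothetical (made-up) magnitudes on a common utility scale.
benefit = 0.20   # net benefit of treating a patient who truly has the disease
harm = 0.04      # net harm of treating a patient who does not have the disease

threshold = harm / (harm + benefit)   # expected-utility treatment threshold
p_disease = 0.30                      # physician's estimated probability of disease

print(f"treat above p = {threshold:.2f}; with p = {p_disease}, treat = {p_disease > threshold}")
```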

  11. A phenomenological model on the kink mode threshold varying with the inclination of sheath boundary

    SciTech Connect

    Sun, X.; Intrator, T. P.; Sears, J.; Weber, T.; Liu, M.

    2013-11-15

    In nature and many laboratory plasmas, a magnetic flux tube threaded by current or a flux rope has a footpoint at a boundary. The current-driven kink mode is one of the fundamental ideal magnetohydrodynamic instabilities in plasmas. It has an instability threshold that has been found to depend strongly on boundary conditions (BCs). We provide a theoretical model to explain the transition of this threshold between non-line-tied (NLT) and line-tied (LT) boundary conditions. We evaluate model parameters using experimentally measured plasma data, explicitly verify several kink eigenfunctions, and validate the model predictions for BCs that span the range between NLT and LT. Based on this model, one could estimate the kink threshold given knowledge of the displacement of a flux rope end, or conversely estimate flux rope end motion based on knowledge of its kink stability threshold.

  12. Two-threshold model for scaling laws of noninteracting snow avalanches

    USGS Publications Warehouse

    Faillettaz, J.; Louchet, F.; Grasso, J.-R.

    2004-01-01

    A two-threshold model was proposed for scaling laws of noninteracting snow avalanches. It was found that the sizes of the largest avalanches just preceding the failure of the lattice system were power-law distributed. The proposed model reproduced the range of power-law exponents observed for land, rock or snow avalanches by tuning the maximum value of the ratio of the two failure thresholds. A two-threshold 2D cellular automaton was introduced to study the scaling for gravity-driven systems.

  13. The threshold of a stochastic delayed SIR epidemic model with vaccination

    NASA Astrophysics Data System (ADS)

    Liu, Qun; Jiang, Daqing

    2016-11-01

    In this paper, we study the threshold dynamics of a stochastic delayed SIR epidemic model with vaccination. We obtain sufficient conditions for extinction and persistence in the mean of the epidemic. The threshold between persistence in the mean and extinction of the stochastic system is also obtained. Compared with the corresponding deterministic model, the threshold affected by the white noise is smaller than the basic reproduction number R̄0 of the deterministic system. Results show that time delay has important effects on the persistence and extinction of the epidemic.
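
    A simplified Euler-Maruyama sketch in the spirit of such stochastic SIR models (here without the delay and vaccination terms), illustrating how white noise on the transmission rate can drive towards extinction an epidemic that would persist deterministically. All parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
beta, gamma, mu = 0.5, 0.2, 0.02    # transmission, recovery, demographic rates
sigma = 0.35                        # intensity of white noise on the transmission rate
dt, steps = 0.01, 100_000

S, I = 0.99, 0.01                   # susceptible and infectious fractions
for _ in range(steps):
    dW = rng.normal(0.0, np.sqrt(dt))
    new_inf = beta * S * I * dt + sigma * S * I * dW   # stochastic incidence
    S += mu * (1.0 - S) * dt - new_inf                 # births/deaths and new infections
    I += new_inf - (gamma + mu) * I * dt               # recovery and death
    I = max(I, 0.0)

R0 = beta / (gamma + mu)
print(f"deterministic R0 = {R0:.2f}, final infectious fraction = {I:.4f}")
```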

  14. Modeling spatially-varying landscape change points in species occurrence thresholds

    USGS Publications Warehouse

    Wagner, Tyler; Midway, Stephen R.

    2014-01-01

    Predicting species distributions at scales of regions to continents is often necessary, as large-scale phenomena influence the distributions of spatially structured populations. Land use and land cover are important large-scale drivers of species distributions, and landscapes are known to create species occurrence thresholds, where small changes in a landscape characteristic result in abrupt changes in occurrence. The value of the landscape characteristic at which this change occurs is referred to as a change point. We present a hierarchical Bayesian threshold model (HBTM) that allows for estimating spatially varying parameters, including change points. Our model also allows for modeling estimated parameters in an effort to understand large-scale drivers of variability in land use and land cover on species occurrence thresholds. We use range-wide detection/nondetection data for the eastern brook trout (Salvelinus fontinalis), a stream-dwelling salmonid, to illustrate our HBTM for estimating and modeling spatially varying threshold parameters in species occurrence. We parameterized the model for investigating thresholds in landscape predictor variables that are measured as proportions, and which are therefore restricted to values between 0 and 1. Our HBTM estimated spatially varying thresholds in brook trout occurrence for both the proportion of agricultural and urban land uses. There was relatively little spatial variation in change point estimates, although there was spatial variability in the overall shape of the threshold response and associated uncertainty. In addition, regional mean stream water temperature was correlated with the change point parameters for the proportion of urban land use, with the change point value increasing with increasing mean stream water temperature. We present a framework for quantifying macrosystem variability in spatially varying threshold model parameters in relation to important large-scale drivers such as land use and land cover

  15. The threshold of a stochastic delayed SIR epidemic model with temporary immunity

    NASA Astrophysics Data System (ADS)

    Liu, Qun; Chen, Qingmei; Jiang, Daqing

    2016-05-01

    This paper is concerned with the asymptotic properties of a stochastic delayed SIR epidemic model with temporary immunity. Sufficient conditions for extinction and persistence in the mean of the epidemic are established. The threshold between persistence in the mean and extinction of the epidemic is obtained. Compared with the corresponding deterministic model, the threshold affected by the white noise is smaller than the basic reproduction number R0 of the deterministic system.

  16. The Rasch Rating Model and the Disordered Threshold Controversy

    ERIC Educational Resources Information Center

    Adams, Raymond J.; Wu, Margaret L.; Wilson, Mark

    2012-01-01

    The Rasch rating (or partial credit) model is a widely applied item response model that is used to model ordinal observed variables that are assumed to collectively reflect a common latent variable. In the application of the model there is considerable controversy surrounding the assessment of fit. This controversy is most notable when the set of…

  17. Emerging organisational models of primary healthcare and unmet needs for care: insights from a population-based survey in Quebec province.

    PubMed

    Levesque, Jean-Frédéric; Pineault, Raynald; Hamel, Marjolaine; Roberge, Danièle; Kapetanakis, Costas; Simard, Brigitte; Prud'homme, Alexandre

    2012-07-02

    Reform of primary healthcare (PHC) organisations is underway in Canada. The capacity of various types of PHC organisations to respond to populations' needs remains to be assessed. The main objective of this study was to evaluate the association of PHC affiliation with unmet needs for care. The study was a population-based survey of 9205 randomly selected adults in two regions of Quebec, Canada. Outcomes were self-reported unmet needs for care and identification of the usual source of PHC. Among eligible adults, 18% reported unmet needs for care in the last six months. Reasons reported for unmet needs were: waiting times (59% of cases); unavailability of the usual doctor (42%); impossibility to obtain an appointment (36%); doctors not accepting new patients (31%). Regression models showed that unmet needs decreased with age and were lower among males, the least educated, and the unemployed or retired. Controlling for other factors, unmet needs were higher among the poor and those with worse health status. Having a family doctor was associated with fewer unmet needs. People reporting a usual source of care in the last two years were more likely to report unmet needs for care. There were no differences in unmet needs for care across types of PHC organisations when controlling for affiliation with a family physician. Reform models of primary healthcare consistent with the medical home concept did not differ from other types of organisations in our study. Further research looking at primary healthcare reform models at other levels of implementation should be done.

  18. Threshold voltage roll-off modelling of bilayer graphene field-effect transistors

    NASA Astrophysics Data System (ADS)

    Saeidmanesh, M.; Ismail, Razali; Khaledian, M.; Karimi, H.; Akbari, E.

    2013-12-01

    An analytical model is presented for threshold voltage roll-off of double gate bilayer graphene field-effect transistors. To this end, threshold voltage models of short- and long-channel states have been developed. In the short-channel case, front and back gate potential distributions have been modelled and used. In addition, the tunnelling probability is modelled and its effect is taken into consideration in the potential distribution model. To evaluate the accuracy of the potential model, FlexPDE software is employed with proper boundary conditions and a good agreement is observed. Using the proposed models, the effect of several structural parameters on the threshold voltage and its roll-off are studied at room temperature.

  19. Falling in the elderly: Do statistical models matter for performance criteria of fall prediction? Results from two large population-based studies.

    PubMed

    Kabeshova, Anastasiia; Launay, Cyrille P; Gromov, Vasilii A; Fantino, Bruno; Levinoff, Elise J; Allali, Gilles; Beauchet, Olivier

    2016-01-01

    To compare performance criteria (i.e., sensitivity, specificity, positive predictive value, negative predictive value, area under the receiver operating characteristic curve and accuracy) of linear and non-linear statistical models for fall risk in older community-dwellers. Participants were recruited in two large population-based studies, "Prévention des Chutes, Réseau 4" (PCR4, n=1760, cross-sectional design, retrospective collection of falls) and "Prévention des Chutes Personnes Agées" (PCPA, n=1765, cohort design, prospective collection of falls). Six linear statistical models (i.e., logistic regression, discriminant analysis, Bayes network algorithm, decision tree, random forest, boosted trees), three non-linear statistical models corresponding to artificial neural networks (multilayer perceptron, genetic algorithm and neuroevolution of augmenting topologies [NEAT]) and the adaptive neuro-fuzzy inference system (ANFIS) were used. Falls ≥1 characterizing fallers and falls ≥2 characterizing recurrent fallers were used as outcomes. Data of the studies were analyzed separately and together. NEAT and ANFIS had better performance criteria compared to the other models. The highest performance criteria were reported with NEAT when using the PCR4 database and falls ≥1, and with both NEAT and ANFIS when pooling data together and using falls ≥2. However, sensitivity and specificity were unbalanced. Sensitivity was higher than specificity when identifying fallers, whereas the converse was found when predicting recurrent fallers. Our results showed that NEAT and ANFIS were the non-linear statistical models with the best performance criteria for the prediction of falls, but their sensitivity and specificity were unbalanced, underscoring that the models should be used respectively for the screening of fallers and the diagnosis of recurrent fallers. Copyright © 2015 European Federation of Internal Medicine. Published by Elsevier B.V. All rights reserved.

  20. Product versus additive threshold models for analysis of reproduction outcomes in animal genetics.

    PubMed

    David, I; Bodin, L; Gianola, D; Legarra, A; Manfredi, E; Robert-Granié, C

    2009-08-01

    The phenotypic observation of some reproduction traits (e.g., insemination success, interval from lambing to insemination) is the result of environmental and genetic factors acting on 2 individuals: the male and female involved in a mating couple. In animal genetics, the main approach (called additive model) proposed for studying such traits assumes that the phenotype is linked to a purely additive combination, either on the observed scale for continuous traits or on some underlying scale for discrete traits, of environmental and genetic effects affecting the 2 individuals. Statistical models proposed for studying human fecundability generally consider reproduction outcomes as the product of hypothetical unobservable variables. Taking inspiration from these works, we propose a model (product threshold model) for studying a binary reproduction trait that supposes that the observed phenotype is the product of 2 unobserved phenotypes, 1 for each individual. We developed a Gibbs sampling algorithm for fitting a Bayesian product threshold model including additive genetic effects and showed by simulation that it is feasible and that it provides good estimates of the parameters. We showed that fitting an additive threshold model to data that are simulated under a product threshold model provides biased estimates, especially for individuals with high breeding values. A main advantage of the product threshold model is that, in contrast to the additive model, it provides distinct estimates of fixed effects affecting each of the 2 unobserved phenotypes.
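
    The contrast described above can be illustrated by simulating a binary reproduction outcome either as the product of two separately thresholded latent phenotypes (one per partner) or as a single threshold applied to the sum of the two liabilities. Everything below is a made-up toy simulation, not the authors' Gibbs sampler.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

# Latent liabilities for the male and female of each mating couple.
male = rng.normal(0.0, 1.0, n)
female = rng.normal(0.0, 1.0, n)
tau = 0.0   # common threshold on the latent scale (arbitrary choice)

# Product threshold model: success requires BOTH partners to exceed their threshold.
y_product = ((male > tau) & (female > tau)).astype(int)

# Additive threshold model: a single threshold on the summed liabilities.
y_additive = ((male + female) > tau).astype(int)

print("success rate, product model :", y_product.mean())   # ~0.25
print("success rate, additive model:", y_additive.mean())  # ~0.50
```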

  1. The Threshold Bias Model: A Mathematical Model for the Nomothetic Approach of Suicide

    PubMed Central

    Folly, Walter Sydney Dutra

    2011-01-01

    Background Comparative and predictive analyses of suicide data from different countries are difficult to perform due to varying approaches and the lack of comparative parameters. Methodology/Principal Findings A simple model (the Threshold Bias Model) was tested for comparative and predictive analyses of suicide rates by age. The model comprises a six-parameter distribution that was applied to the USA suicide rates by age for the years 2001 and 2002. Subsequently, linear extrapolations of the parameter values obtained for these years were performed in order to estimate the values corresponding to the year 2003. The calculated distributions agreed reasonably well with the aggregate data. The model was also used to determine the age above which suicide rates become statistically observable in the USA, Brazil and Sri Lanka. Conclusions/Significance The Threshold Bias Model has considerable potential applications in demographic studies of suicide. Moreover, since the model can be used to predict the evolution of suicide rates based on information extracted from past data, it will be of great interest to suicidologists and other researchers in the field of mental health. PMID:21909431

  2. Threshold pion photoproduction in a light-cone quark model

    NASA Astrophysics Data System (ADS)

    Konen, W.; Drechsel, D.

    1991-07-01

    The instantaneous and seagull graphs are calculated for pion photoproduction in a relativistic light-cone model of the nucleon. In both pseudoscalar and pseudovector coupling we find the ratios A(-) : A(0) : A(+) = 1 : (-μ/2) : (-9μ/5) in the nonrelativistic limit. These results correspond to the sum of the seagull and Z-graphs in the nonrelativistic quark model. In pseudovector coupling, the numerical results for realistic model parameters are also close to these values.

  3. Using Fixed Thresholds with Grouped Data in Structural Equation Modeling

    ERIC Educational Resources Information Center

    Koran, Jennifer; Hancock, Gregory R.

    2010-01-01

    Valuable methods have been developed for incorporating ordinal variables into structural equation models using a latent response variable formulation. However, some model parameters, such as the means and variances of latent factors, can be quite difficult to interpret because the latent response variables have an arbitrary metric. This limitation…

  4. A generalized methodology for identification of threshold for HRU delineation in SWAT model

    NASA Astrophysics Data System (ADS)

    M, J.; Sudheer, K.; Chaubey, I.; Raj, C.

    2016-12-01

    The Soil and Water Assessment Tool (SWAT) is a comprehensive distributed hydrological model widely used to support various decisions. Its simulation accuracy depends on the mechanism used to subdivide the watershed. SWAT subdivides the watershed into sub-basins and further into small computing units known as hydrologic response units (HRUs). HRUs are delineated based on unique combinations of land use, soil type, and slope within the sub-watersheds, and they are not spatially defined. Computations in SWAT are carried out at the HRU level and then aggregated to the sub-basin outlet, from which flow is routed through the stream system. Generally, HRUs are delineated using threshold percentages of land use, soil and slope, specified by the modeler to decrease the computation time of the model; the thresholds constrain the minimum area for constructing an HRU. In current HRU delineation practice in SWAT, land use, soil and slope classes within a sub-basin that cover less than the predefined threshold are subsumed into the dominant classes, which introduces some ambiguity into the process simulations through inappropriate representation of the area. The loss of information caused by varying the threshold values, however, depends strongly on the purpose of the study. This research therefore studies the effects of HRU delineation threshold values on SWAT sediment simulations and suggests guidelines for selecting appropriate threshold values with sediment simulation accuracy in mind. A preliminary study was done on an Illinois watershed by assigning different thresholds for land use and soil. A general methodology was proposed for identifying an appropriate threshold for HRU delineation in the SWAT model that considered computational time and accuracy of the simulation
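
    A tiny sketch of the area-reassignment effect of a land-use threshold during HRU delineation: classes covering less than the threshold fraction of a sub-basin are dropped and their area is redistributed to the remaining classes. The sub-basin composition and threshold value are hypothetical.

```python
# Fraction of a sub-basin's area in each land-use class (hypothetical).
land_use = {"agriculture": 0.62, "forest": 0.25, "urban": 0.09, "wetland": 0.04}
threshold = 0.10   # classes below 10% of the sub-basin area are eliminated

kept = {k: v for k, v in land_use.items() if v >= threshold}
scale = 1.0 / sum(kept.values())               # redistribute the dropped area
hrus = {k: v * scale for k, v in kept.items()}

dropped = 1.0 - sum(kept.values())
print("area reassigned to dominant classes:", round(dropped, 2))
print({k: round(v, 3) for k, v in hrus.items()})
```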

  5. Investigating the genetic architecture of conditional strategies using the environmental threshold model

    PubMed Central

    Hazel, Wade N.; Tomkins, Joseph L.

    2015-01-01

    The threshold expression of dichotomous phenotypes that are environmentally cued or induced comprise the vast majority of phenotypic dimorphisms in colour, morphology, behaviour and life history. Modelled as conditional strategies under the framework of evolutionary game theory, the quantitative genetic basis of these traits is a challenge to estimate. The challenge exists firstly because the phenotypic expression of the trait is dichotomous and secondly because the apparent environmental cue is separate from the biological signal pathway that induces the switch between phenotypes. It is the cryptic variation underlying the translation of cue to phenotype that we address here. With a ‘half-sib common environment’ and a ‘family-level split environment’ experiment, we examine the environmental and genetic influences that underlie male dimorphism in the earwig Forficula auricularia. From the conceptual framework of the latent environmental threshold (LET) model, we use pedigree information to dissect the genetic architecture of the threshold expression of forceps length. We investigate for the first time the strength of the correlation between observable and cryptic ‘proximate’ cues. Furthermore, in support of the environmental threshold model, we found no evidence for a genetic correlation between cue and the threshold between phenotypes. Our results show strong correlations between observable and proximate cues and less genetic variation for thresholds than previous studies have suggested. We discuss the importance of generating better estimates of the genetic variation for thresholds when investigating the genetic architecture and heritability of threshold traits. By investigating genetic architecture by means of the LET model, our study supports several key evolutionary ideas related to conditional strategies and improves our understanding of environmentally cued decisions. PMID:26674955

  6. Investigating the genetic architecture of conditional strategies using the environmental threshold model.

    PubMed

    Buzatto, Bruno A; Buoro, Mathieu; Hazel, Wade N; Tomkins, Joseph L

    2015-12-22

    The threshold expression of dichotomous phenotypes that are environmentally cued or induced comprise the vast majority of phenotypic dimorphisms in colour, morphology, behaviour and life history. Modelled as conditional strategies under the framework of evolutionary game theory, the quantitative genetic basis of these traits is a challenge to estimate. The challenge exists firstly because the phenotypic expression of the trait is dichotomous and secondly because the apparent environmental cue is separate from the biological signal pathway that induces the switch between phenotypes. It is the cryptic variation underlying the translation of cue to phenotype that we address here. With a 'half-sib common environment' and a 'family-level split environment' experiment, we examine the environmental and genetic influences that underlie male dimorphism in the earwig Forficula auricularia. From the conceptual framework of the latent environmental threshold (LET) model, we use pedigree information to dissect the genetic architecture of the threshold expression of forceps length. We investigate for the first time the strength of the correlation between observable and cryptic 'proximate' cues. Furthermore, in support of the environmental threshold model, we found no evidence for a genetic correlation between cue and the threshold between phenotypes. Our results show strong correlations between observable and proximate cues and less genetic variation for thresholds than previous studies have suggested. We discuss the importance of generating better estimates of the genetic variation for thresholds when investigating the genetic architecture and heritability of threshold traits. By investigating genetic architecture by means of the LET model, our study supports several key evolutionary ideas related to conditional strategies and improves our understanding of environmentally cued decisions.

  7. A self-assessment predictive model for type 2 diabetes or impaired fasting glycaemia derived from a population-based survey.

    PubMed

    Asadollahi, Khairollah; Asadollahi, Parisa; Azizi, Monire; Abangah, Ghobad

    2017-09-01

    There is no cure for diabetes, and its prevention is of interest to both the public and health policy makers. The aim of this study was to construct a simple scoring system to predict diabetes and to propose a self-assessment predictive model for type 2 diabetes in Iran. This study was part of a comprehensive population-based survey performed in Ilam province during 2011-2012, including 2158 participants aged ≥25 years. All demographic and laboratory results were entered into prepared data sheets and analysed using SPSS 16. By identifying the relative risks of diabetes and IFG, a predictive model was constructed and proposed for these abnormalities. In total, 2158 people (72% female, 60% from urban regions, mean age 45.5±14 years) were investigated; the average height, weight, FBS and waist circumference of participants were 164±8.9 cm, 68.4±12.3 kg, 5.7±2.8 mmol/l (102.6±49.9 mg/dl) and 82.3±14.3 cm, respectively. The prevalence of IFG, diabetes and hyperglycaemia among all participants was 7.8%, 11.8% and 19.6%, respectively. Regression analysis revealed family history of diabetes, place of residence, age, hypertension, daily exercise, marital status, gender, waist size, smoking, and BMI as the most relevant risk factors for diabetes and hyperglycaemia. A self-assessment predictive model was constructed for the general population living in the west of Iran. This is the first self-assessment predictive model for diabetes in Iran. Copyright © 2017 Elsevier B.V. All rights reserved.

  8. Natural History of Dependency in the Elderly: A 24-Year Population-Based Study Using a Longitudinal Item Response Theory Model.

    PubMed

    Edjolo, Arlette; Proust-Lima, Cécile; Delva, Fleur; Dartigues, Jean-François; Pérès, Karine

    2016-02-15

    We aimed to describe the hierarchical structure of Instrumental Activities of Daily Living (IADL) and basic Activities of Daily Living (ADL) and trajectories of dependency before death in an elderly population using item response theory methodology. Data were obtained from a population-based French cohort study, the Personnes Agées QUID (PAQUID) Study, of persons aged ≥65 years at baseline in 1988 who were recruited from 75 randomly selected areas in Gironde and Dordogne. We evaluated IADL and ADL data collected at home every 2-3 years over a 24-year period (1988-2012) for 3,238 deceased participants (43.9% men). We used a longitudinal item response theory model to investigate the item sequence of 11 IADL and ADL combined into a single scale and functional trajectories adjusted for education, sex, and age at death. The findings confirmed the earliest losses in IADL (shopping, transporting, finances) at the partial limitation level, and then an overlapping of concomitant IADL and ADL, with bathing and dressing being the earliest ADL losses, and finally total losses for toileting, continence, eating, and transferring. Functional trajectories were sex-specific, with a benefit of high education that persisted until death in men but was only transient in women. An in-depth understanding of this sequence provides an early warning of functional decline for better adaptation of medical and social care in the elderly.

  9. The relationship between the Five-Factor Model personality traits and peptic ulcer disease in a large population-based adult sample.

    PubMed

    Realo, Anu; Teras, Andero; Kööts-Ausmees, Liisi; Esko, Tõnu; Metspalu, Andres; Allik, Jüri

    2015-12-01

    The current study examined the relationship between the Five-Factor Model personality traits and physician-confirmed peptic ulcer disease (PUD) diagnosis in a large population-based adult sample, controlling for the relevant behavioral and sociodemographic factors. Personality traits were assessed by participants themselves and by knowledgeable informants using the NEO Personality Inventory-3 (NEO PI-3). When controlling for age, sex, education, and cigarette smoking, only one of the five NEO PI-3 domain scales - higher Neuroticism - and two facet scales - lower A1: Trust and higher C1: Competence - made a small, yet significant contribution (p < 0.01) to predicting PUD in logistic regression analyses. In the light of these relatively modest associations, our findings imply that it is certain behavior (such as smoking) and sociodemographic variables (such as age, gender, and education) rather than personality traits that are associated with the diagnosis of PUD at a particular point in time. Further prospective studies with a longitudinal design and multiple assessments would be needed to fully understand if the FFM personality traits serve as risk factors for the development of PUD. © 2015 Scandinavian Psychological Associations and John Wiley & Sons Ltd.

  10. A physics-based model of threshold voltage for amorphous oxide semiconductor thin-film transistors

    NASA Astrophysics Data System (ADS)

    Chen, Chi-Le; Chen, Wei-Feng; Zhou, Lei; Wu, Wei-Jing; Xu, Miao; Wang, Lei; Peng, Jun-Biao

    2016-03-01

    In the application of the Lambert W function, the surface potential for amorphous oxide semiconductor thin-film transistors (AOS TFTs) in the subthreshold region is approximated by an asymptotic equation considering only the tail states, while the surface potential in the above-threshold region is approximated by another asymptotic equation considering only the free carriers. The intersection point between these two asymptotic equations represents the transition from weak accumulation to strong accumulation. Therefore, the gate voltage corresponding to the intersection point is defined as the threshold voltage of AOS TFTs. As a result, an analytical expression for the threshold voltage is derived from this novel definition. It is shown that the threshold voltage obtained by the proposed physics-based model agrees well with that extracted by the conventional linear extrapolation method. Furthermore, we find that the free charge per unit area in the channel starts increasing sharply from the threshold voltage point, where the concentration of free carriers is slightly larger than that of the localized carriers. The proposed model for the threshold voltage of AOS TFTs is not only physically meaningful but also mathematically convenient, so it is expected to be useful for characterizing and modeling AOS TFTs.

  11. Modeling the Threshold Wind Speed for Saltation Initiation over Heterogeneous Sand Beds

    NASA Astrophysics Data System (ADS)

    Turney, F. A.; Martin, R. L.; Kok, J. F.

    2015-12-01

    Initiation of aeolian sediment transport is key to understanding the formation of dunes, emission of dust into the atmosphere, and landscape erosion. Previous models of the threshold wind speed required for saltation initiation have assumed that the particle bed is monodisperse and homogeneous in arrangement, thereby ignoring what is in reality a distribution of particle lifting thresholds, influenced by variability in soil particle sizes and bed geometry. To help overcome this problem, we present a numerical model that determines the distribution of threshold wind speeds required for particle lifting for a given soil size distribution. The model results are evaluated against high frequency wind speed and saltation data from a recent field campaign in Oceano Dunes in Southern California. The results give us insight into the range of lifting thresholds present during incipient sediment transport and the simplifications that are often made to characterize the process. In addition, this study provides a framework for moving beyond the 'fluid threshold' paradigm, which is known to be inaccurate, especially for near-threshold conditions.
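
    A sketch of the basic idea only (not the authors' model): map an assumed grain-size distribution onto a distribution of lifting thresholds using a classical Bagnold-type expression u*_t = A * sqrt(((rho_p - rho_f)/rho_f) * g * d). The coefficient A, the densities, and the lognormal size distribution are all assumptions.

```python
# Distribution of threshold friction velocities implied by a distribution of grain sizes.
import numpy as np

rng = np.random.default_rng(0)
rho_p, rho_f, g, A = 2650.0, 1.2, 9.81, 0.1     # quartz grains in air; A ~ 0.1 (assumed)

# assumed lognormal grain-size distribution (median ~350 um, typical dune sand)
d = rng.lognormal(mean=np.log(350e-6), sigma=0.4, size=100_000)

u_star_t = A * np.sqrt((rho_p - rho_f) / rho_f * g * d)   # threshold friction velocity per grain
print(f"median u*_t = {np.median(u_star_t):.3f} m/s, "
      f"5th-95th pct = {np.percentile(u_star_t, 5):.3f}-{np.percentile(u_star_t, 95):.3f} m/s")
```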

  12. Risk factors and a clinical prediction model for low maternal thyroid function during early pregnancy: two population-based prospective cohort studies.

    PubMed

    Korevaar, Tim I M; Nieboer, Daan; Bisschop, Peter H L T; Goddijn, Mariette; Medici, Marco; Chaker, Layal; de Rijke, Yolanda B; Jaddoe, Vincent W V; Visser, Theo J; Steyerberg, Ewout W; Tiemeier, Henning; Vrijkotte, Tanja G; Peeters, Robin P

    2016-12-01

    Low maternal thyroid function during early pregnancy is associated with various adverse outcomes including impaired neurocognitive development of the offspring, premature delivery and abnormal birthweight. To aid doctors in the risk assessment of thyroid dysfunction during pregnancy, we set out to investigate clinical risk factors and derive a prediction model based on easily obtainable clinical variables. In total, 9767 women during early pregnancy (≤18 weeks) were selected from two population-based prospective cohorts: the Generation R Study (N = 5985) and the ABCD study (N = 3782). We aimed to investigate the association of easily obtainable clinical subject characteristics such as maternal age, BMI, smoking status, ethnicity, parity and gestational age at blood sampling with the risk of low free thyroxine (FT4) and elevated thyroid stimulating hormone (TSH), determined according to the 2.5th-97.5th reference range in TPOAb negative women. BMI, nonsmoking and ethnicity were risk factors for elevated TSH levels; however, the discriminative ability was poor (range c-statistic of 0.57-0.60). Sensitivity analysis showed that addition of TPOAbs to the model yielded a c-statistic of 0.73-0.75. Maternal age, BMI, smoking, parity and gestational age at blood sampling were risk factors for low FT4, which taken together provided adequate discrimination (range c-statistic of 0.72-0.76). Elevated TSH levels depend predominantly on TPOAb levels, and prediction of elevated TSH levels is not possible with clinical characteristics only. In contrast, the validated clinical prediction model for FT4 had high discriminative value to assess the likelihood of low FT4 levels. © 2016 John Wiley & Sons Ltd.
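
    For illustration only, a minimal sketch of this kind of prediction model: a logistic regression of a binary thyroid outcome on easily obtainable covariates, with the c-statistic (ROC AUC) as the measure of discrimination. The covariate names, coefficients, and synthetic data below are assumptions, not the cohort data.

```python
# Logistic-regression prediction model with c-statistic, on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(6)
n = 5000
X = np.column_stack([
    rng.normal(30, 5, n),       # maternal age (years)
    rng.normal(24, 4, n),       # BMI
    rng.integers(0, 2, n),      # smoking (0/1)
    rng.integers(0, 4, n),      # parity
    rng.normal(13, 2, n),       # gestational age at sampling (weeks)
])
# assumed data-generating model, used only to create a synthetic outcome
logit = -5.5 + 0.03 * X[:, 0] + 0.08 * X[:, 1] - 0.4 * X[:, 2]
y = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)
print(f"c-statistic = {roc_auc_score(y, model.predict_proba(X)[:, 1]):.2f}")
```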

  13. Emerging organisational models of primary healthcare and unmet needs for care: insights from a population-based survey in Quebec province

    PubMed Central

    2012-01-01

    Background Reform of primary healthcare (PHC) organisations is underway in Canada. The capacity of various types of PHC organizations to respond to populations’ needs remains to be assessed. The main objective of this study was to evaluate the association of PHC affiliation with unmet needs for care. Methods Population-based survey of 9205 randomly selected adults in two regions of Quebec, Canada. Outcomes Self-reported unmet needs for care and identification of the usual source of PHC. Results Among eligible adults, 18 % reported unmet needs for care in the last six months. Reasons reported for unmet needs were: waiting times (59 % of cases); unavailability of usual doctor (42 %); inability to obtain an appointment (36 %); doctors not accepting new patients (31 %). Regression models showed that unmet needs decreased with age and were lower among males, the least educated, and the unemployed or retired. Controlling for other factors, unmet needs were higher among the poor and those with worse health status. Having a family doctor was associated with fewer unmet needs. People reporting a usual source of care in the last two years were more likely to report unmet need for care. There were no differences in unmet needs for care across types of PHC organisations when controlling for affiliation with a family physician. Conclusion Reform models of primary healthcare consistent with the medical home concept did not differ from other types of organisations in our study. Further research looking at primary healthcare reform models at other levels of implementation should be done. PMID:22748060

  14. Postscript: Parallel Distributed Processing in Localist Models without Thresholds

    ERIC Educational Resources Information Center

    Plaut, David C.; McClelland, James L.

    2010-01-01

    The current authors reply to a response by Bowers on a comment by the current authors on the original article. Bowers (2010) mischaracterizes the goals of parallel distributed processing (PDP) research--explaining performance on cognitive tasks is the primary motivation. More important, his claim that localist models, such as the interactive…

  16. Interactive breast mass segmentation using a convex active contour model with optimal threshold values.

    PubMed

    Acho, Sussan Nkwenti; Rae, William Ian Duncombe

    2016-10-01

    A convex active contour model requires a predefined threshold value to determine the global solution for the best contour in mass segmentation. Fixed thresholds, or manual tuning of threshold values for optimum mass boundary delineation, are impracticable. We present a method to determine an optimized mass-specific threshold value for the convex active contour, derived from the probability matrix of the mass using the particle swarm optimization method. We compared our results with Chan-Vese segmentation and a published global segmentation model on masses detected in direct digital mammograms. The regional term of the convex active contour model maximizes the posterior partitioning probability for binary segmentation. If the probability matrix is binarized at a value T1 obtained with particle swarm optimization, we define the optimal threshold value for the global minimizer of the convex active contour as the mean intensity of all pixels whose probabilities are greater than T1. The mean Jaccard similarity indices were 0.89±0.07 for the proposed/Chan-Vese method and 0.88±0.06 for the proposed/published segmentation model. The mean Euclidean distance between Fourier descriptors of the segmented areas was 0.05±0.03 for the proposed/Chan-Vese method and 0.06±0.04 for the proposed/published segmentation model. This efficient method avoids the problems of initial level set contour placement and contour re-initialization. Moreover, optimum segmentation results are achieved for all masses, improving on the fixed threshold value of 0.5 proposed elsewhere. Copyright © 2016 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
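
    A minimal sketch of the thresholding rule described above, with Otsu's method standing in for the particle swarm optimization that the authors use to obtain T1; the synthetic image and probability matrix are illustrative only.

```python
# Optimal contour threshold = mean intensity of pixels whose probability exceeds T1.
import numpy as np
from skimage.filters import threshold_otsu

def optimal_contour_threshold(image, prob):
    """image: 2-D grey-level array; prob: same-shape posterior probability matrix."""
    t1 = threshold_otsu(prob)      # stand-in for the PSO-derived binarization level T1
    mask = prob > t1
    return image[mask].mean()      # mean intensity of high-probability pixels

# usage with synthetic data
rng = np.random.default_rng(1)
img = rng.normal(100, 20, (128, 128))
prb = np.clip(rng.normal(0.5, 0.25, (128, 128)), 0, 1)
print(f"optimal threshold ~ {optimal_contour_threshold(img, prb):.1f}")
```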

  17. Does Imaging Technology Cause Cancer? Debunking the Linear No-Threshold Model of Radiation Carcinogenesis.

    PubMed

    Siegel, Jeffry A; Welsh, James S

    2016-04-01

    In the past several years, there has been a great deal of attention from the popular media focusing on the alleged carcinogenicity of low-dose radiation exposures received by patients undergoing medical imaging studies such as X-rays, computed tomography scans, and nuclear medicine scintigraphy. The media has based its reporting on the plethora of articles published in the scientific literature that claim that there is "no safe dose" of ionizing radiation, while essentially ignoring all the literature demonstrating the opposite point of view. But this reported "scientific" literature in turn bases its estimates of cancer induction on the linear no-threshold hypothesis of radiation carcinogenesis. The use of the linear no-threshold model has yielded hundreds of articles, all of which predict a definite carcinogenic effect of any dose of radiation, regardless of how small. Therefore, hospitals and professional societies have begun campaigns and policies aiming to reduce the use of certain medical imaging studies based on perceived risk:benefit ratio assumptions. However, as they are essentially all based on the linear no-threshold model of radiation carcinogenesis, the risk:benefit ratio models used to calculate the hazards of radiological imaging studies may be grossly inaccurate if the linear no-threshold hypothesis is wrong. Here, we review the myriad inadequacies of the linear no-threshold model and cast doubt on the various studies based on this overly simplistic model. © The Author(s) 2015.

  18. A profile-aware resist model with variable threshold

    NASA Astrophysics Data System (ADS)

    Moulis, Sylvain; Farys, Vincent; Belledent, Jérôme; Thérèse, Romain; Lan, Song; Zhao, Qian; Feng, Mu; Depre, Laurent; Dover, Russell

    2012-11-01

    The pursuit of ever smaller transistors has pushed technological innovations in the field of lithography. In order to continue following the path of Moore's law, several solutions have been proposed: EUV, e-beam and double patterning lithography. As EUV and e-beam lithography are still not ready for mass production at the 20 nm and 14 nm nodes, double patterning lithography plays an important role for these nodes. In this work, we focus on a Self-Aligned Double-Patterning (SADP) process, which consists of depositing a spacer material on each side of a mandrel exposed during a first lithography step, dividing the pitch in two once transferred into the substrate, and then cutting the unwanted patterns with a second lithography exposure. In the specific case where spacers are deposited directly on the flanks of the resist, it is crucial to control the resist profile, as deviations could induce final CD errors or even spacer collapse. One possibility to prevent these defects from occurring is to predict the profile of the resist at the OPC verification stage. For that, we need an empirical resist model that is able to predict such behaviour. This work is a study of a profile-aware resist model that is calibrated using both atomic force microscopy (AFM) and scanning electron microscopy (SEM) data, both taken using a focus and exposure matrix (FEM).

  19. Estimation of iodine nutrition and thyroid function status in late-gestation pregnant women in the United States: Development and application of a population-based pregnancy model.

    PubMed

    Lumen, A; George, N I

    2017-01-01

    Previously, a deterministic biologically-based dose-response (BBDR) pregnancy model was developed to evaluate moderate thyroid axis disturbances with and without thyroid-active chemical exposure in a near-term pregnant woman and fetus. In the current study, the existing BBDR model was adapted to include a wider functional range of iodine nutrition, including more severe iodine deficiency conditions, and to incorporate empirically the effects of homeostatic mechanisms. The extended model was further developed into a population-based model and was constructed using a Monte Carlo-based probabilistic framework. In order to characterize total (T4) and free (fT4) thyroxine levels for a given iodine status at the population level, the distribution of iodine intake for late-gestation pregnant women in the U.S. was reconstructed using various reverse dosimetry methods and available biomonitoring data. The range of median (mean) iodine intake values resulting from the three different methods of reverse dosimetry tested was 196.5-219.9 μg of iodine/day (228.2-392.9 μg of iodine/day). There was minimal variation in model-predicted maternal serum T4 and fT4 levels from use of the three reconstructed distributions of iodine intake; the range of geometric means for T4 and fT4 was 138-151.7 nmol/L and 7.9-8.7 pmol/L, respectively. The average value of the ratio of the 97.5th percentile to the 2.5th percentile equaled 3.1 and agreed well with similar estimates from recent observations in third-trimester pregnant women in the U.S. In addition, the reconstructed distributions of iodine intake allowed us to estimate nutrient inadequacy for late-gestation pregnant women in the U.S. via the probability approach. The prevalence of iodine inadequacy for third-trimester pregnant women in the U.S. was estimated to be between 21% and 44%. Taken together, the current work provides an improved tool for evaluating iodine nutritional status and the corresponding thyroid function status in

  20. A threshold-based weather model for predicting stripe rust infection in winter wheat

    USDA-ARS?s Scientific Manuscript database

    Wheat stripe rust (WSR) (caused by Puccinia striiformis f. sp. tritici) is a major threat in most wheat growing regions worldwide, with potential to inflict regular yield losses when environmental conditions are favorable. We propose a threshold-based disease-forecasting model using a stepwise modeling...

  1. Study on the threshold of a stochastic SIR epidemic model and its extensions

    NASA Astrophysics Data System (ADS)

    Zhao, Dianli

    2016-09-01

    This paper provides a simple but effective method for estimating the threshold of a class of stochastic epidemic models by use of the nonnegative semimartingale convergence theorem. First, the threshold R0SIR is obtained for the stochastic SIR model with a saturated incidence rate; whether its value is below or above 1 completely determines whether the disease goes extinct or prevails, for any intensity of the white noise. Moreover, when R0SIR > 1, the system is proved to be convergent in time mean. The thresholds of the stochastic SIVS models with or without a saturated incidence rate are then established by the same method. Compared with previously known results in the literature, the results are improved and the method is simpler.
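
    The closed-form threshold itself is not reproduced here; instead, a simple Euler-Maruyama simulation of an SIR model with saturated incidence and multiplicative noise on the transmission term illustrates the extinction-versus-persistence behaviour that such a threshold governs. The parameter values and the exact noise structure are assumptions, not the paper's.

```python
# Euler-Maruyama sketch of a stochastic SIR model with saturated incidence beta*S*I/(1+a*I).
import numpy as np

rng = np.random.default_rng(2)
Lam, mu, beta, gamma, a, sigma = 0.5, 0.05, 0.3, 0.2, 0.1, 0.1
dt, n_steps = 0.01, 100_000
S, I = 8.0, 1.0

for _ in range(n_steps):
    inc = beta * S * I / (1.0 + a * I)            # saturated incidence
    dB = rng.normal(0.0, np.sqrt(dt))
    noise = sigma * S * I / (1.0 + a * I) * dB    # perturbation of the transmission term
    S += (Lam - inc - mu * S) * dt - noise
    I += (inc - (mu + gamma) * I) * dt + noise
    S, I = max(S, 0.0), max(I, 0.0)

print(f"I(t_end) = {I:.4f}  ->  {'persists' if I > 1e-3 else 'extinct'}")
```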

  2. Effects of pump recycling technique on stimulated Brillouin scattering threshold: a theoretical model.

    PubMed

    Al-Asadi, H A; Al-Mansoori, M H; Ajiya, M; Hitam, S; Saripan, M I; Mahdi, M A

    2010-10-11

    We develop a theoretical model that can be used to predict stimulated Brillouin scattering (SBS) threshold in optical fibers that arises through the effect of Brillouin pump recycling technique. Obtained simulation results from our model are in close agreement with our experimental results. The developed model utilizes single mode optical fiber of different lengths as the Brillouin gain media. For 5-km long single mode fiber, the calculated threshold power for SBS is about 16 mW for conventional technique. This value is reduced to about 8 mW when the residual Brillouin pump is recycled at the end of the fiber. The decrement of SBS threshold is due to longer interaction lengths between Brillouin pump and Stokes wave.
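
    For orientation, a back-of-the-envelope sketch using the classic Smith-type estimate P_th ≈ 21 * A_eff / (g_B * L_eff), with L_eff = (1 - exp(-alpha*L)) / alpha. This is not the authors' recycled-pump model, and the fibre parameters below are typical textbook values rather than theirs.

```python
# Order-of-magnitude SBS threshold estimate for standard single-mode fibre.
import numpy as np

A_eff = 80e-12                   # effective area, m^2 (assumed)
g_B = 3e-11                      # Brillouin gain coefficient, m/W (assumed)
alpha = 0.2 / 4.343 / 1000.0     # 0.2 dB/km converted to 1/m

def sbs_threshold(length_m):
    l_eff = (1.0 - np.exp(-alpha * length_m)) / alpha   # effective interaction length
    return 21.0 * A_eff / (g_B * l_eff)

print(f"estimated SBS threshold for 5 km SMF: {1e3 * sbs_threshold(5_000):.1f} mW")
```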

  3. Modeling Soil Quality Thresholds to Ecosystem Recovery at Fort Benning, Georgia, USA

    SciTech Connect

    Garten Jr., C.T.

    2004-03-08

    The objective of this research was to use a simple model of soil C and N dynamics to predict nutrient thresholds to ecosystem recovery on degraded soils at Fort Benning, Georgia, in the southeastern USA. The model calculates aboveground and belowground biomass, soil C inputs and dynamics, soil N stocks and availability, and plant N requirements. A threshold is crossed when predicted soil N supplies fall short of predicted N required to sustain biomass accrual at a specified recovery rate. Four factors were important to development of thresholds to recovery: (1) initial amounts of aboveground biomass, (2) initial soil C stocks (i.e., soil quality), (3) relative recovery rates of biomass, and (4) soil sand content. Thresholds to ecosystem recovery predicted by the model should not be interpreted independent of a specified recovery rate. Initial soil C stocks influenced the predicted patterns of recovery by both old field and forest ecosystems. Forests and old fields on soils with varying sand content had different predicted thresholds to recovery. Soil C stocks at barren sites on Fort Benning generally lie below predicted thresholds to 100% recovery of desired future ecosystem conditions defined on the basis of aboveground biomass (18,000 versus 360 g m^-2 for forests and old fields, respectively). Calculations with the model indicated that reestablishment of vegetation on barren sites to a level below the desired future condition is possible at recovery rates used in the model, but the time to 100% recovery of desired future conditions, without crossing a nutrient threshold, is prolonged by a reduced rate of forest growth. Predicted thresholds to ecosystem recovery were less on soils with more than 70% sand content. The lower thresholds for old field and forest recovery on more sandy soils are apparently due to higher relative rates of net soil N mineralization in more sandy soils. Calculations with the model indicate that a combination of desired future

  4. Medication Adherence Patterns after Hospitalization for Coronary Heart Disease. A Population-Based Study Using Electronic Records and Group-Based Trajectory Models

    PubMed Central

    Librero, Julián; Sanfélix-Gimeno, Gabriel; Peiró, Salvador

    2016-01-01

    Objective To identify adherence patterns over time and their predictors for evidence-based medications used after hospitalization for coronary heart disease (CHD). Patients and Methods We built a population-based retrospective cohort of all patients discharged after hospitalization for CHD from public hospitals in the Valencia region (Spain) during 2008 (n = 7462). From this initial cohort, we created 4 subcohorts with at least one prescription (filled or not) from each therapeutic group (antiplatelet, beta-blockers, ACEI/ARB, statins) within the first 3 months after discharge. Monthly adherence was defined as having ≥24 days covered out of 30, leading to a repeated binary outcome measure. We assessed the membership to trajectory groups of adherence using group-based trajectory models. We also analyzed predictors of the different adherence patterns using multinomial logistic regression. Results We identified a maximum of 5 different adherence patterns: 1) Nearly-always adherent patients; 2) An early gap in adherence with a later recovery; 3) Brief gaps in medication use or occasional users; 4) A slow decline in adherence; and 5) A fast decline. These patterns represented variable proportions of patients, the descending trajectories being more frequent for the beta-blocker and ACEI/ARB cohorts (16% and 17%, respectively) than the antiplatelet and statin cohorts (10% and 8%, respectively). Predictors of poor or intermediate adherence patterns were having a main diagnosis of unstable angina or other forms of CHD vs. AMI in the index hospitalization, being born outside Spain, requiring copayment or being older. Conclusion Distinct adherence patterns over time and their predictors were identified. This may be a useful approach for targeting improvement interventions in patients with poor adherence patterns. PMID:27551748
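
    A minimal sketch of the repeated binary outcome defined above: a follow-up month counts as adherent when at least 24 of its 30 days are covered by dispensed supply. The input values are illustrative; the fitted group-based trajectory model would then be applied to the resulting monthly series.

```python
# Monthly adherence: >= 24 covered days out of 30 -> adherent month (1), else 0.
import numpy as np

def monthly_adherence(days_covered_per_month, threshold=24):
    """days_covered_per_month: days covered (0-30) for each follow-up month."""
    days = np.asarray(days_covered_per_month)
    return (days >= threshold).astype(int)   # repeated binary outcome, one value per month

print(monthly_adherence([30, 30, 25, 20, 10, 0, 28]))  # -> [1 1 1 0 0 0 1]
```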

  5. Evaluation of the FRAX model for hip fracture predictions in the population-based Kuopio Osteoporosis Risk Factor and Prevention Study (OSTPRE).

    PubMed

    Sund, Reijo; Honkanen, Risto; Johansson, Helena; Odén, Anders; McCloskey, Eugene; Kanis, John; Kröger, Heikki

    2014-07-01

    Calibration of the Finnish FRAX model was evaluated using a locally derived population-based cohort of postmenopausal women (n = 13,917). Hip fractures were observed from national register-based data and verified from radiological records. For a subpopulation of 11,182 women, there were enough data to calculate fracture probabilities using the Finnish FRAX tool (without bone mineral density). The 10-year period prevalence of hip fractures in this subpopulation was 0.66 %. The expected numbers of hip fractures were significantly higher than the self-reported ones (O/E ratio 0.46; 95 % CI 0.33-0.63) and tended to be greater than the observed ones (O/E ratio 0.83; 95 % CI 0.65-1.04), and calibration in terms of goodness-of-fit of absolute probabilities was questionable (P = 0.015). Strikingly, the 10-year period prevalence of hip fractures in the whole cohort was higher (0.84 %) than in the women with FRAX measurements (0.66 %). This was mainly the result of the difference between people who had and who had not responded to postal enquiries (0.71 vs. 1.77 %, P < 0.0001). Self-reports failed to capture 38 % of all hip fractures in those who responded and about 45 % of hip fractures in women who had a FRAX estimate. The Finnish FRAX tool seems to provide appropriate discrimination for hip fracture risk, but caution is required in the interpretation of absolute risk, especially if the tool is used for a population that may not represent the general population. Our study also showed that non-respondents had significantly higher hip fracture risk and that the use of purely self-reported hip fractures in calculations results in biased incidence and period prevalence estimates. Such important biases may remain unnoticed if no data from other sources are available.
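
    A sketch of the observed/expected ratio computation of the kind reported above, with an exact (Garwood) Poisson 95% CI for the observed count; the counts used here are made up for illustration.

```python
# Observed/expected ratio with an exact Poisson confidence interval.
from scipy.stats import chi2

def oe_ratio(observed, expected, alpha=0.05):
    lo = chi2.ppf(alpha / 2, 2 * observed) / 2 if observed > 0 else 0.0
    hi = chi2.ppf(1 - alpha / 2, 2 * (observed + 1)) / 2
    return observed / expected, lo / expected, hi / expected

ratio, lo, hi = oe_ratio(observed=40, expected=50.0)   # hypothetical counts
print(f"O/E = {ratio:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```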

  6. Estimated incidence of cardiovascular complications related to type 2 diabetes in Mexico using the UKPDS outcome model and a population-based survey

    PubMed Central

    2011-01-01

    Background To estimate the incidence of complications, life expectancy and diabetes related mortality in the Mexican diabetic population over the next two decades using data from a nation-wide, population based survey and the United Kingdom Prospective Diabetes Study (UKPDS) outcome model. Methods The cohort included all patients with type 2 diabetes evaluated during the National Health and Nutrition Survey (ENSANut) 2006. ENSANut is a probabilistic multistage stratified survey whose aim was to measure the prevalence of chronic diseases. A total of 47,152 households were visited. Results are shown stratified by gender, time since diagnosis (> or ≤ 10 years) and age at the time of diagnosis (> or ≤ 40 years). Results The prevalence of diabetes in our cohort was 14.4%. The predicted 20-year incidences of chronic complications per 1000 individuals are: ischemic heart disease 112, myocardial infarction 260, heart failure 113, stroke 101, and amputation 62. Furthermore, 539 per 1000 patients will have a diabetes-related premature death. The average life expectancy for the diabetic population is 10.9 years (95%CI 10.7-11.2); this decreases to 8.3 years after adjusting for quality of life (95%CI 8.1-8.5). Male sex and cases diagnosed after age 40 have the highest risk for developing at least one major complication during the next 20 years. Conclusions Based on the current clinical profile of Mexican patients with diabetes, the burden of disease related complications will be tremendous over the next two decades. PMID:21214916

  7. Modeling of ablation threshold dependence on pulse duration for dielectrics with ultrashort pulsed laser

    NASA Astrophysics Data System (ADS)

    Sun, Mingying; Zhu, Jianqiang; Lin, Zunqi

    2017-01-01

    We present a numerical model of plasma formation in ultrafast laser ablation on dielectric surfaces. The ablation threshold dependence on pulse duration is predicted with the model, and the numerical results for water agree well with the experimental data for pulse durations from 140 fs to 10 ps. The influences of the parameters and approximations of photo- and avalanche-ionization on the ablation threshold prediction are analyzed in detail for various pulse lengths. The calculated ablation threshold is strongly dependent on the electron collision time for all pulse durations. For pulses shorter than 1 ps, the complete photoionization model is preferred over the multiphoton ionization approximations. The transition time of inverse bremsstrahlung absorption needs to be considered when pulses are shorter than 5 ps, and it also keeps the avalanche ionization (AI) coefficient consistent with that in multiple rate equations (MREs) for pulses shorter than 300 fs. The threshold electron density for AI is only crucial for longer pulses. It is reasonable to ignore the recombination loss for pulses shorter than 100 fs. In addition to thermal transport and hydrodynamics, neglecting the threshold density for AI and recombination could also contribute to the disagreements between the numerical and the experimental results for longer pulses.

  8. Effect of otologic drill noise on ABR thresholds in a guinea pig model.

    PubMed

    Suits, G W; Brummett, R E; Nunley, J

    1993-10-01

    The noise generated by the otologic drill has been implicated as a cause of sensorineural hearing loss after ear surgery. However, clinical studies on this subject are contradictory and difficult to interpret. Therefore a guinea pig model was used to study whether the level of noise generated by the otologic drill can cause threshold shifts in the auditory brainstem response (ABR). The source noise was a recording obtained during a human cadaver mastoidectomy using a microphone and an accelerometer. Ten female Topeka-strain guinea pigs were exposed to the recorded drill noise for a period of 55 minutes. Exposure included both air-conducted energy from a speaker and bone-conducted energy from a bone vibrator applied directly to the skull. ABR threshold measurements were taken pre-exposure (baseline), immediately after exposure, and at weekly intervals thereafter for 3 weeks. Three control animals were subjected to the same procedure without the sound exposure. A significant threshold shift (p < 0.0001) was seen for each frequency tested (2, 4, 8, 16, 20, and 32 kHz) immediately after exposure to noise in all experimental animals. Thresholds returned to baseline within 3 weeks. We conclude that the level of noise generated by the otologic drill in mastoid surgery can cause a temporary threshold shift in this guinea pig model.

  9. Thresholds in vegetation responses to drought: Implications for rainfall-runoff modeling

    NASA Astrophysics Data System (ADS)

    Tague, C.; Dugger, A. L.

    2011-12-01

    While threshold behavior is often associated with soil and subsurface runoff generation, dynamic vegetation responses to water stress may be an important contributor to threshold type behavior in rainfall runoff models. Vegetation water loss varies with vegetation type and biomass, and transpiration dynamics in many settings are regulated by stomatal function. In water limited environments the timing and frequency of stomatal closure varies from year to year as a function of water stress. Stomatal closure and associated fine time scale (hourly to weekly) plant transpiration may appear as threshold (on/off) behavior. Total seasonal to annual plant water use, however, typically shows a continuous relationship with atmospheric conditions and soil moisture. Thus while short-time scale behavior may demonstrate non-linear, threshold type behavior, continuous relationships at slightly longer time scales can be used to capture the role of vegetation mediated water loss and its associated impact on storage and runoff. Many rainfall runoff models rely on these types of relationships. However these relationships may change if water stress influences vegetation structure as it does in drought conditions. Forest dieback under drought is a dramatic example of a threshold event, and one that is expected to occur with increasing frequency under a warmer climate. Less dramatic but still important are changes in leaf and root biomass in response to drought. We demonstrate these effects using a coupled ecosystem carbon cycling and hydrology model and show that by accounting for drought driven changes in vegetation dynamics we improve our ability to capture inter-annual variation in streamflow for a semi-arid watershed in New Mexico. We also use the model to predict spatial patterns of more catastrophic vegetation dieback with moisture stress and show that we can accurately capture the spatial pattern of ponderosa pine dieback during an early-2000s drought in New Mexico. We use these

  10. Analysis and modeling of zero-threshold voltage native devices with industry standard BSIM6 model

    NASA Astrophysics Data System (ADS)

    Gupta, Chetan; Agarwal, Harshit; Lin, Y. K.; Ito, Akira; Hu, Chenming; Singh Chauhan, Yogesh

    2017-04-01

    In this paper, we present the modeling of zero-threshold-voltage (V_TH) bulk MOSFETs, also called native devices, using the enhanced BSIM6 model. The devices under study show abnormally high leakage current in weak inversion, leading to a degraded subthreshold slope. The reasons for this abnormal behavior are identified using technology computer-aided design (TCAD) simulations. Since the zero-V_TH transistors have quite low doping, the depletion layer from the drain may extend up to the source (at some non-zero value of V_DS), which leads to the punch-through phenomenon. This source-drain leakage current adds to the main channel current, causing the unexpected current characteristics in these devices. TCAD simulations show that, as the channel length (L_eff) and channel doping (N_SUB) increase, the source-drain leakage due to punch-through decreases. We propose a model to capture the source-drain leakage in these devices. The model incorporates dependencies on gate, drain, and body biases as well as on channel length and channel doping. The proposed model is validated against measured data from production-level devices over various bias conditions and channel lengths.

  11. The relation between a microscopic threshold-force model and macroscopic models of adhesion

    NASA Astrophysics Data System (ADS)

    Hulikal, Srivatsan; Bhattacharya, Kaushik; Lapusta, Nadia

    2017-06-01

    This paper continues our recent work on the relationship between discrete contact interactions at the microscopic scale and continuum contact interactions at the macroscopic scale (Hulikal et al., J. Mech. Phys. Solids 76, 144-161, 2015). The focus of this work is on adhesion. We show that a collection of a large number of discrete elements governed by a threshold-force based model at the microscopic scale collectively gives rise to continuum fracture mechanics at the macroscopic scale. A key step is the introduction of an efficient numerical method that enables the computation of a large number of discrete contacts. Finally, while this work focuses on scaling laws, the methodology introduced in this paper can also be used to study rough-surface adhesion.

  13. Hillslope threshold response to rainfall: (2) development and use of a macroscale model

    Treesearch

    Chris B. Graham; Jeffrey J. McDonnell

    2010-01-01

    Hillslope hydrological response to precipitation is extremely complex and poorly modeled. One possible approach for reducing the complexity of hillslope response and its mathematical parameterization is to look for macroscale hydrological behavior. Hillslope threshold response to storm precipitation is one such macroscale behavior observed at field sites across the...

  14. Delayed thresholds and heavy-flavor production in the dual parton model

    SciTech Connect

    Capella, A.; Sukhatme, U.; Tan, C.; Tran Thanh Van, J.

    1987-07-01

    It is shown that the two-chain structure of the cut Pomeron in the dual parton model for low-p_T multiparticle production provides a natural explanation for the phenomenon of delayed thresholds for heavy-flavor production in proton-proton collisions.

  15. The threshold of a stochastic SIVS epidemic model with nonlinear saturated incidence

    NASA Astrophysics Data System (ADS)

    Zhao, Dianli; Zhang, Tiansi; Yuan, Sanling

    2016-02-01

    A stochastic version of the SIS epidemic model with vaccination (SIVS) is studied. When the noise is small, the threshold parameter is identified, which determines the extinction and persistence of the epidemic. Besides, the results show that large noise will suppress the epidemic from prevailing regardless of the saturated incidence. The results are illustrated by computer simulations.

  16. Coherence Threshold and the Continuity of Processing: The RI-Val Model of Comprehension

    ERIC Educational Resources Information Center

    O'Brien, Edward J.; Cook, Anne E.

    2016-01-01

    Common to all models of reading comprehension is the assumption that a reader's level of comprehension is heavily influenced by their standards of coherence (van den Broek, Risden, & Husbye-Hartman, 1995). Our discussion focuses on a subcomponent of the readers' standards of coherence: the coherence threshold. We situate this discussion within…

  17. Using participatory agent-based models to measure flood managers' decision thresholds in extreme event response

    NASA Astrophysics Data System (ADS)

    Metzger, A.; Douglass, E.; Gray, S. G.

    2016-12-01

    Extreme flooding impacts to coastal cities are not only a function of storm characteristics, but are heavily influenced by decision-making and preparedness in event-level response. While recent advances in climate and hydrological modeling make it possible to predict the influence of climate change on storm and flooding patterns, flood managers still face a great deal of uncertainty related to adapting organizational responses and decision thresholds to these changing conditions. Some decision thresholds related to mitigation of extreme flood impacts are well-understood and defined by organizational protocol, but others are difficult to quantify due to reliance on contextual expert knowledge, experience, and complexity of information necessary to make certain decisions. Our research attempts to address this issue by demonstrating participatory modeling methods designed to help flood managers (1) better understand and parameterize local decision thresholds in extreme flood management situations, (2) collectively learn about scaling management decision thresholds to future local flooding scenarios and (3) identify effective strategies for adapting flood mitigation actions and organizational response to climate change-intensified flooding. Our agent-based system dynamic models rely on expert knowledge from local flood managers and sophisticated, climate change-informed hydrological models to simulate current and future flood scenarios. Local flood managers interact with these models by receiving dynamic information and making management decisions as a flood scenario progresses, allowing parametrization of decision thresholds under different scenarios. Flooding impacts are calculated in each iteration as a means of discussing effectiveness of responses and prioritizing response alternatives. We discuss the findings of this participatory modeling and educational process from a case study of Boston, MA, and discuss transferability of these methods to other types

  19. Local Bifurcations and Optimal Theory in a Delayed Predator-Prey Model with Threshold Prey Harvesting

    NASA Astrophysics Data System (ADS)

    Tankam, Israel; Tchinda Mouofo, Plaire; Mendy, Abdoulaye; Lam, Mountaga; Tewa, Jean Jules; Bowong, Samuel

    2015-06-01

    We investigate the effects of time delay and piecewise-linear threshold policy harvesting in a delayed predator-prey model. This is the first time that a Holling type III response function and this threshold policy harvesting have been combined with time delay. The trajectories of our delayed system are bounded; the stability of each equilibrium is analyzed with and without delay; local bifurcations such as saddle-node and Hopf bifurcations occur; optimal harvesting is also investigated. Numerical simulations are provided to illustrate each result.

  20. Threshold Values for Identification of Contamination Predicted by Reduced-Order Models

    SciTech Connect

    Last, George V.; Murray, Christopher J.; Bott, Yi-Ju; Brown, Christopher F.

    2014-12-31

    The U.S. Department of Energy’s (DOE’s) National Risk Assessment Partnership (NRAP) Project is developing reduced-order models to evaluate potential impacts on underground sources of drinking water (USDWs) if CO2 or brine leaks from deep CO2 storage reservoirs. Threshold values, below which there would be no predicted impacts, were determined for portions of two aquifer systems. These threshold values were calculated using an interwell approach for determining background groundwater concentrations that is an adaptation of methods described in the U.S. Environmental Protection Agency’s Unified Guidance for Statistical Analysis of Groundwater Monitoring Data at RCRA Facilities.
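
    Illustration only, not the NRAP procedure: one common interwell way to turn background data into a screening threshold is a normal-theory upper prediction limit for the next single measurement, UPL = mean + t(0.95, n-1) * s * sqrt(1 + 1/n). The background values below are hypothetical.

```python
# Upper prediction limit as a background-based screening threshold.
import numpy as np
from scipy.stats import t

def upper_prediction_limit(background, conf=0.95):
    x = np.asarray(background, dtype=float)
    n = x.size
    return x.mean() + t.ppf(conf, n - 1) * x.std(ddof=1) * np.sqrt(1.0 + 1.0 / n)

# hypothetical background TDS concentrations (mg/L) from upgradient wells
bkg = [410, 455, 430, 470, 445, 420, 438, 462]
print(f"threshold (95% UPL) = {upper_prediction_limit(bkg):.0f} mg/L")
```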

  1. Determination of validation threshold for coordinate measuring methods using a metrological compatibility model

    NASA Astrophysics Data System (ADS)

    Gromczak, Kamila; Gąska, Adam; Kowalski, Marek; Ostrowska, Ksenia; Sładek, Jerzy; Gruza, Maciej; Gąska, Piotr

    2017-01-01

    The following paper presents a practical approach to the validation process of coordinate measuring methods at an accredited laboratory, using a statistical model of metrological compatibility. The statistical analysis of measurement results obtained using a highly accurate system was intended to determine the permissible validation threshold values. The threshold value constitutes the primary criterion for the acceptance or rejection of the validated method, and depends on both the differences between measurement results with corresponding uncertainties and the individual correlation coefficient. The article specifies and explains the types of measuring methods that were subject to validation and defines the criterion value governing their acceptance or rejection in the validation process.
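
    A sketch of the metrological-compatibility check that the abstract builds on: two results x1 ± u1 and x2 ± u2 (with correlation r) are compatible when the normalized difference |x1 - x2| / sqrt(u1^2 + u2^2 - 2*r*u1*u2) does not exceed a chosen multiple k, which plays the role of the validation threshold. The numbers below are illustrative.

```python
# Normalized difference of two correlated measurement results vs. a threshold k.
import math

def compatibility_ratio(x1, u1, x2, u2, r=0.0):
    u_diff = math.sqrt(u1 ** 2 + u2 ** 2 - 2.0 * r * u1 * u2)   # uncertainty of the difference
    return abs(x1 - x2) / u_diff

ratio = compatibility_ratio(x1=25.013, u1=0.004, x2=25.006, u2=0.005, r=0.3)
print(f"normalized difference = {ratio:.2f}  ->  {'accept' if ratio <= 2.0 else 'reject'} at k = 2")
```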

  2. Threshold Values for Identification of Contamination Predicted by Reduced-Order Models

    DOE PAGES

    Last, George V.; Murray, Christopher J.; Bott, Yi-Ju; ...

    2014-12-31

    The U.S. Department of Energy’s (DOE’s) National Risk Assessment Partnership (NRAP) Project is developing reduced-order models to evaluate potential impacts on underground sources of drinking water (USDWs) if CO2 or brine leaks from deep CO2 storage reservoirs. Threshold values, below which there would be no predicted impacts, were determined for portions of two aquifer systems. These threshold values were calculated using an interwell approach for determining background groundwater concentrations that is an adaptation of methods described in the U.S. Environmental Protection Agency’s Unified Guidance for Statistical Analysis of Groundwater Monitoring Data at RCRA Facilities.

  3. Epidemic threshold of the susceptible-infected-susceptible model on complex networks

    NASA Astrophysics Data System (ADS)

    Lee, Hyun Keun; Shim, Pyoung-Seop; Noh, Jae Dong

    2013-06-01

    We demonstrate that the susceptible-infected-susceptible (SIS) model on complex networks can have an inactive Griffiths phase characterized by a slow relaxation dynamics. It contrasts with the mean-field theoretical prediction that the SIS model on complex networks is active at any nonzero infection rate. The dynamic fluctuation of infected nodes, ignored in the mean field approach, is responsible for the inactive phase. It is proposed that the question whether the epidemic threshold of the SIS model on complex networks is zero or not can be resolved by the percolation threshold in a model where nodes are occupied in degree-descending order. Our arguments are supported by the numerical studies on scale-free network models.
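
    A sketch of the auxiliary percolation construction mentioned above: occupy nodes in degree-descending order and record the occupation fraction at which a macroscopic cluster first appears. The choice of network, the 1% "macroscopic" cutoff, and the implementation details are assumptions.

```python
# Occupation fraction at which a giant cluster emerges under degree-descending occupation.
import networkx as nx

def degree_descending_percolation(g, giant_fraction=0.01):
    order = sorted(g.nodes, key=g.degree, reverse=True)   # high-degree nodes first
    occupied = set()
    for i, node in enumerate(order, start=1):
        occupied.add(node)
        sub = g.subgraph(occupied)
        giant = max((len(c) for c in nx.connected_components(sub)), default=0)
        if giant >= giant_fraction * g.number_of_nodes():
            return i / g.number_of_nodes()                # occupation fraction at emergence
    return 1.0

g = nx.barabasi_albert_graph(2000, 3, seed=0)             # stand-in scale-free network
print(f"occupation fraction at emergence of a giant cluster: "
      f"{degree_descending_percolation(g):.3f}")
```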

  4. Predicting the epidemic threshold of the susceptible-infected-recovered model

    PubMed Central

    Wang, Wei; Liu, Quan-Hui; Zhong, Lin-Feng; Tang, Ming; Gao, Hui; Stanley, H. Eugene

    2016-01-01

    Researchers have developed several theoretical methods for predicting epidemic thresholds, including the mean-field like (MFL) method, the quenched mean-field (QMF) method, and the dynamical message passing (DMP) method. When these methods are applied to predict epidemic threshold they often produce differing results and their relative levels of accuracy are still unknown. We systematically analyze these two issues—relationships among differing results and levels of accuracy—by studying the susceptible-infected-recovered (SIR) model on uncorrelated configuration networks and a group of 56 real-world networks. In uncorrelated configuration networks the MFL and DMP methods yield identical predictions that are larger and more accurate than the prediction generated by the QMF method. As for the 56 real-world networks, the epidemic threshold obtained by the DMP method is more likely to reach the accurate epidemic threshold because it incorporates full network topology information and some dynamical correlations. We find that in most of the networks with positive degree-degree correlations, an eigenvector localized on the high k-core nodes, or a high level of clustering, the epidemic threshold predicted by the MFL method, which uses the degree distribution as the only input information, performs better than the other two methods. PMID:27091705
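
    For two of the methods named above, the standard formulas can be evaluated directly on a network: an MFL-type threshold lambda_c = <k>/(<k^2> - <k>), and the QMF threshold lambda_c = 1/Lambda_max of the adjacency matrix. The sketch below uses a synthetic configuration-model network as a stand-in for the networks used in the study.

```python
# MFL- and QMF-type epidemic-threshold predictions on a configuration-model network.
import numpy as np
import networkx as nx

rng = np.random.default_rng(3)
deg = np.minimum(rng.zipf(2.7, 2000), 100)     # heavy-tailed degree sequence, tail capped
if deg.sum() % 2:                              # configuration model needs an even degree sum
    deg[0] += 1
g = nx.configuration_model(deg.tolist(), seed=3)
g = nx.Graph(g)                                # collapse parallel edges
g.remove_edges_from(nx.selfloop_edges(g))

k = np.array([d for _, d in g.degree()], dtype=float)
mfl = k.mean() / (np.mean(k ** 2) - k.mean())              # <k> / (<k^2> - <k>)

lam_max = np.linalg.eigvalsh(nx.to_numpy_array(g)).max()
qmf = 1.0 / lam_max                                        # inverse largest adjacency eigenvalue

print(f"MFL threshold ~ {mfl:.4f},  QMF threshold ~ {qmf:.4f}")
```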

  5. Predicting the epidemic threshold of the susceptible-infected-recovered model

    NASA Astrophysics Data System (ADS)

    Wang, Wei; Liu, Quan-Hui; Zhong, Lin-Feng; Tang, Ming; Gao, Hui; Stanley, H. Eugene

    2016-04-01

    Researchers have developed several theoretical methods for predicting epidemic thresholds, including the mean-field like (MFL) method, the quenched mean-field (QMF) method, and the dynamical message passing (DMP) method. When these methods are applied to predict epidemic threshold they often produce differing results and their relative levels of accuracy are still unknown. We systematically analyze these two issues—relationships among differing results and levels of accuracy—by studying the susceptible-infected-recovered (SIR) model on uncorrelated configuration networks and a group of 56 real-world networks. In uncorrelated configuration networks the MFL and DMP methods yield identical predictions that are larger and more accurate than the prediction generated by the QMF method. As for the 56 real-world networks, the epidemic threshold obtained by the DMP method is more likely to reach the accurate epidemic threshold because it incorporates full network topology information and some dynamical correlations. We find that in most of the networks with positive degree-degree correlations, an eigenvector localized on the high k-core nodes, or a high level of clustering, the epidemic threshold predicted by the MFL method, which uses the degree distribution as the only input information, performs better than the other two methods.

  6. Cost-Effectiveness of Orthogeriatric and Fracture Liaison Service Models of Care for Hip Fracture Patients: A Population-Based Study.

    PubMed

    Leal, Jose; Gray, Alastair M; Hawley, Samuel; Prieto-Alhambra, Daniel; Delmestri, Antonella; Arden, Nigel K; Cooper, Cyrus; Javaid, M Kassim; Judge, Andrew

    2017-02-01

    Fracture liaison services are recommended as a model of best practice for organizing patient care and secondary fracture prevention for hip fracture patients, although variation exists in how such services are structured. There is considerable uncertainty as to which model is most cost-effective and should therefore be mandated. This study evaluated the cost-effectiveness of orthogeriatric (OG)- and nurse-led fracture liaison service (FLS) models of post-hip fracture care compared with usual care. Analyses were conducted from a health care and personal social services payer perspective, using a Markov model to estimate the lifetime impact of the models of care. The base-case population consisted of men and women aged 83 years with a hip fracture. The risk and costs of hip and non-hip fractures were derived from large primary and hospital care data sets in the UK. Utilities were informed by a meta-regression of 32 studies. In the base-case analysis, the orthogeriatric-led service was the most effective and cost-effective model of care at a threshold of £30,000 per quality-adjusted life year (QALY) gained. For women aged 83 years, the OG-led service was the most cost-effective at £22,709/QALY. If only health care costs are considered, OG-led service was cost-effective at £12,860/QALY and £14,525/QALY for women and men aged 83 years, respectively. Irrespective of how patients were stratified in terms of their age, sex, and Charlson comorbidity score at index hip fracture, our results suggest that introducing an orthogeriatrician-led or a nurse-led FLS is cost-effective when compared with usual care. Although considerable uncertainty remains concerning which of the models of care should be preferred, introducing an orthogeriatrician-led service seems to be the most cost-effective service to pursue. © 2016 American Society for Bone and Mineral Research.

  7. Video object segmentation via adaptive threshold based on background model diversity

    NASA Astrophysics Data System (ADS)

    Boubekeur, Mohamed Bachir; Luo, SenLin; Labidi, Hocine; Benlefki, Tarek

    2015-03-01

    Background subtraction can be framed as a classification process applied to the incoming frames of a video stream, taking into account in some cases temporal information, in other cases spatial consistency, and in recent years both. The classification has usually relied on a fixed threshold value. In this paper, a framework for background subtraction and moving object detection based on an adaptive threshold measure and a short/long frame-differencing procedure is proposed. The framework explores an adaptive threshold based on mean squared differences for a sampled background model. In addition, an intuitive update policy which is neither conservative nor blind is presented. The algorithm succeeds in extracting the moving foreground and isolating an accurate background.
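
    A minimal sketch of the idea, assuming a per-pixel sample of recent background frames: the decision threshold is scaled by the local mean squared difference of that sample, so noisier regions get a larger threshold. The constants and the synthetic data are illustrative, not the paper's.

```python
# Per-pixel adaptive threshold driven by the diversity of a sampled background model.
import numpy as np

def adaptive_threshold_mask(frame, bg_samples, k=4.0, floor=10.0):
    """frame: HxW array; bg_samples: NxHxW stack of recent background frames."""
    bg_mean = bg_samples.mean(axis=0)
    msd = ((bg_samples - bg_mean) ** 2).mean(axis=0)      # per-pixel mean squared difference
    thresh = np.maximum(k * np.sqrt(msd), floor)          # larger threshold where the model is noisier
    return np.abs(frame - bg_mean) > thresh               # foreground mask

rng = np.random.default_rng(4)
bg = rng.normal(120, 5, (20, 240, 320))
cur = bg.mean(axis=0).copy()
cur[100:140, 150:200] += 60                               # synthetic moving object
print(f"foreground pixels detected: {adaptive_threshold_mask(cur, bg).sum()}")
```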

  8. Threshold dose model: It's time to line up our ducks

    SciTech Connect

    Logan, S.E.

    1997-12-01

    Evidence has accumulated that discredits the linear no-threshold (LNT) relationship for low-level radiation exposure and provides the basis on which to adopt a threshold model. This paper examines steps that need to be taken to bring about the change. Some of the evidence supporting the threshold concept comes from incidents such as the atom bombs over Japan and the Chernobyl accident. Other evidence concerns extended exposure such as that with shipyard workers, radiation treatment, and high-background-radiation geographic regions. Not only does the evidence show an absence of net health damage at doses below some level, but radiation hormesis or net benefits are shown to occur at low levels of radiation dose.

  9. Research of adaptive threshold model and its application in iris tracking

    NASA Astrophysics Data System (ADS)

    Zhao, Qijie; Tu, Dawei; Wang, Rensan; Gao, Daming

    2005-02-01

    The relationship between the gray value of pixels and macro-information in the image has been analyzed with methods from statistical mechanics. After simulation and curve fitting of the experimental data by statistical regression, an adaptive threshold model relating average gray value to image threshold is proposed in terms of Boltzmann statistics. In addition, the image characteristics around the eye region and the state of the eyeball have been analyzed, and an algorithm is proposed to extract the eye features and locate their position in the image; a further algorithm finds the iris characteristic line and then determines the coordinates of the iris center. Finally, considering different head gestures, head positions, and eye-opening states, experiments were carried out with the adaptive threshold model and the designed algorithms in an eye-gaze input human-computer interaction (HCI) system. The experimental results show that the algorithms can be applied widely in different cases and that real-time iris tracking can be performed with the adaptive threshold model and algorithms.
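
    A sketch of fitting a Boltzmann-type sigmoid T(g) = T2 + (T1 - T2)/(1 + exp((g - g0)/dg)) relating the mean gray value g of an image to its segmentation threshold T, in the spirit of the abstract; the calibration points below are fabricated for illustration.

```python
# Boltzmann sigmoid fit of segmentation threshold vs. mean gray value.
import numpy as np
from scipy.optimize import curve_fit

def boltzmann(g, t1, t2, g0, dg):
    return t2 + (t1 - t2) / (1.0 + np.exp((g - g0) / dg))

mean_gray = np.array([40, 60, 80, 100, 120, 140, 160, 180], dtype=float)
threshold = np.array([55, 62, 74, 95, 118, 132, 140, 144], dtype=float)   # made-up calibration data

popt, _ = curve_fit(boltzmann, mean_gray, threshold, p0=[50, 150, 110, 20])
print("fitted (T1, T2, g0, dg):", np.round(popt, 1))
print("predicted threshold at mean gray 130:", round(boltzmann(130, *popt), 1))
```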

  10. Modeling jointly low, moderate, and heavy rainfall intensities without a threshold selection

    NASA Astrophysics Data System (ADS)

    Naveau, Philippe; Huser, Raphael; Ribereau, Pierre; Hannart, Alexis

    2016-04-01

    In statistics, extreme events are often defined as excesses above a given large threshold. This definition allows hydrologists and flood planners to apply Extreme-Value Theory (EVT) to their time series of interest. Even in the stationary univariate context, this approach has at least two main drawbacks. First, working with excesses implies that a lot of observations (those below the chosen threshold) are completely disregarded. The range of precipitation is artificially chopped into two pieces, namely large intensities and the rest, which necessarily imposes different statistical models for each piece. Second, this strategy raises a nontrivial and very practical difficulty: how to choose the optimal threshold which correctly discriminates between low and heavy rainfall intensities. To address these issues, we propose a statistical model in which EVT results apply not only to heavy, but also to low precipitation amounts (zeros excluded). Our model is in compliance with EVT on both ends of the spectrum and allows a smooth transition between the two tails, while keeping a low number of parameters. In terms of inference, we have implemented and tested two classical methods of estimation: likelihood maximization and probability weighted moments. Last but not least, there is no need to choose a threshold to define low and high excesses. The performance and flexibility of this approach are illustrated on simulated data and on hourly precipitation recorded in Lyon, France.
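
    One simple member of an extended-GPD family of this kind has distribution function F(x) = [H_{sigma,xi}(x)]^kappa, with H a generalized Pareto CDF; kappa shapes the low-intensity end while xi preserves the GPD upper tail. The sketch below simulates from this form by inverse transform; whether this matches the paper's exact parametrization is an assumption, as are the parameter values.

```python
# Inverse-transform sampling from F(x) = [H_{sigma,xi}(x)]**kappa, H = GPD CDF.
import numpy as np

def egpd_sample(n, kappa, sigma, xi, rng):
    u = rng.uniform(size=n)
    p = u ** (1.0 / kappa)                                  # invert the power transform G(v) = v**kappa
    return (sigma / xi) * ((1.0 - p) ** (-xi) - 1.0)        # GPD quantile function

rng = np.random.default_rng(5)
x = egpd_sample(50_000, kappa=0.8, sigma=2.0, xi=0.15, rng=rng)
print(f"mean {x.mean():.2f} mm/h, 99th percentile {np.percentile(x, 99):.1f} mm/h")
```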

  11. A predictive model for risk of prehypertension and hypertension and expected benefit after population-based life-style modification (KCIS No. 24).

    PubMed

    Tseng, Chuen-Den; Yen, Amy Ming-Fang; Chiu, Sherry Yueh-Hsia; Chen, Li-Sheng; Chen, Hsiu-Hsi; Chang, Shu-Hui

    2012-02-01

    Few reports have identified and quantified the significant risk factors responsible for the multistate natural course of progression to hypertension and regression of prehypertension to normal, which provides the baseline risks needed to estimate the expected benefit of population-based life-style modification. Data used for estimating the clinical parameters governing the temporal natural course of hypertension were derived from 42,027 participants attending screening annually between 1999 and 2002. Information on transition history between normal, prehypertension, stage 1 and stage 2 hypertension between screens was collected to compute multistep composite risk scores in the absence of an intervention program. The expected benefits of risk reduction in prehypertension and hypertension under different intervention programs, obtained by modifying the related risk factors from abnormal to normal ranges, were then estimated. The majority of risk factors play a more marked role in prehypertension and stage 1 hypertension than in stage 2 hypertension. The greater the number of risk factors included in the intervention programs, the lower the mean risk score that is expected to be achieved. The 5-year predicted cumulative risk for stage 2 hypertension decreased from 23.6% in the absence of an intervention program to 14% with the provision of the "six-component intervention" in men. The results were similar for women. Multiple risk factors responsible for multistep transitions between prehypertension and hypertension were identified using population-based screening data to derive multistep composite risk scores, which are useful for estimating the expected benefit of reducing hypertension risk through population-based life-style modification.

  12. Mutation-selection dynamics and error threshold in an evolutionary model for Turing machines.

    PubMed

    Musso, Fabio; Feverati, Giovanni

    2012-01-01

    We investigate the mutation-selection dynamics for an evolutionary computation model based on Turing machines. The use of Turing machines allows for very simple mechanisms of code growth and code activation/inactivation through point mutations. To any value of the point mutation probability corresponds a maximum amount of active code that can be maintained by selection, and the Turing machines that reach it are said to be at the error threshold. Simulations with our model show that the Turing machine population evolves toward the error threshold. Mathematical descriptions of the model point out that this behaviour is due more to the mutation-selection dynamics than to the intrinsic nature of the Turing machines. This indicates that the result is much more general than the model considered here and could play a role also in biological evolution.

  13. Detecting Departure From Additivity Along a Fixed-Ratio Mixture Ray With a Piecewise Model for Dose and Interaction Thresholds

    PubMed Central

    Gennings, Chris; Wagner, Elizabeth D.; Simmons, Jane Ellen; Plewa, Michael J.

    2010-01-01

    For mixtures of many chemicals, a ray design based on a relevant, fixed mixing ratio is useful for detecting departure from additivity. Methods for detecting departure involve modeling the response as a function of total dose along the ray. For mixtures with many components, the interaction may be dose dependent. Therefore, we have developed the use of a three-segment model containing both a dose threshold and an interaction threshold. Prior to the dose threshold, the response is that of background; between the dose threshold and the interaction threshold, an additive relationship exists; the model allows for departure from additivity beyond the interaction threshold. With such a model, we can conduct a hypothesis test of additivity, as well as a test for a region of additivity. The methods are illustrated with cytotoxicity data that arise when Chinese hamster ovary cells are exposed to a mixture of nine haloacetic acids. PMID:21359103
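
    A minimal numerical sketch of the three-segment mean function described above: flat background below the dose threshold, an additive (linear, in this toy version) segment between the dose threshold and the interaction threshold, and a different slope beyond it. Parameter names and values are illustrative.

        import numpy as np

        def three_segment_mean(dose, bg, dose_thr, inter_thr, slope_add, slope_dep):
            """Piecewise mean response along a fixed-ratio mixture ray."""
            dose = np.asarray(dose, dtype=float)
            y = np.full_like(dose, bg)                              # background below the dose threshold
            mid = (dose > dose_thr) & (dose <= inter_thr)
            y[mid] = bg + slope_add * (dose[mid] - dose_thr)        # additive region
            hi = dose > inter_thr
            y_at_inter = bg + slope_add * (inter_thr - dose_thr)
            y[hi] = y_at_inter + slope_dep * (dose[hi] - inter_thr) # departure from additivity
            return y

        doses = np.linspace(0, 10, 11)
        print(three_segment_mean(doses, bg=1.0, dose_thr=2.0, inter_thr=6.0,
                                 slope_add=0.5, slope_dep=1.5))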

  14. Historical human exposure to perfluoroalkyl acids in the United States and Australia reconstructed from biomonitoring data using population-based pharmacokinetic modelling.

    PubMed

    Gomis, Melissa I; Vestergren, Robin; MacLeod, Matthew; Mueller, Jochen F; Cousins, Ian T

    2017-08-14

    Perfluorooctanoic acid (PFOA), perfluorooctanesulfonic acid (PFOS) and perfluorohexanesulfonic acid (PFHxS) are found in the blood of humans and wildlife worldwide. Since the beginning of the 21st century, a downward trend in the human body burden, especially for PFOS and PFOA, has been observed while there is no clear temporal trend in wildlife. The inconsistency between the concentration decline in human serum and in wildlife could be indicative of a historical exposure pathway for humans linked to consumer products that has been reduced or eliminated. In this study, we reconstruct the past human exposure trends in two different regions, USA and Australia, by inferring the historical intake from cross-sectional biomonitoring data of PFOS, PFOA and PFHxS using a population-based pharmacokinetic model. For PFOS in the USA, the reconstructed daily intake peaked at 4.5 ng/kg-bw/day between 1988 and 1999 while in Australia it peaked at 4.0 ng/kg-bw/day between 1984 and 1996. For PFOA in the USA and Australia, the peak reconstructed daily intake was 1.1 ng/kg-bw/day in 1995 and 3.6 ng/kg-bw/day in 1992, respectively, and started to decline in 2000 and 1995, respectively. The model could not be satisfactorily fitted to the biomonitoring data for PFHxS within reasonable boundaries for its intrinsic elimination half-life, and thus reconstructing intakes of PFHxS was not possible. Our results indicate that humans experienced similar exposure levels and trends to PFOS and PFOA in the USA and Australia. Our findings support the hypothesis that near-field consumer product exposure pathways were likely dominant prior to the phase-out in industrialized countries. The intrinsic elimination half-life, which represents elimination processes that are common for all humans, and elimination processes unique to women (i.e., menstruation, cord-blood transfer and breastfeeding) were also investigated. The intrinsic elimination half-lives for PFOS and PFOA derived from model fitting for men
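
    A minimal sketch of the forward half of such an approach: a one-compartment pharmacokinetic model that maps an assumed intake time series to serum concentration via a volume of distribution and an intrinsic elimination half-life. The parameter values and intake scenario are illustrative assumptions; real intake reconstruction would invert this model against cross-sectional biomonitoring data.

        import numpy as np

        def serum_concentration(intake_ng_per_kg_day, half_life_years, v_d_ml_per_kg=170.0):
            """Forward one-compartment model: dC/dt = I/V_d - k_e * C, stepped with an
            Euler step of one year. Returns serum concentration in ng/mL for each year."""
            k_e = np.log(2.0) / half_life_years          # first-order elimination rate (1/yr)
            days_per_year = 365.25
            conc = 0.0
            out = []
            for intake in intake_ng_per_kg_day:          # ng/kg-bw/day, one value per year
                dose_per_year = intake * days_per_year / v_d_ml_per_kg   # ng/mL added per year
                conc = conc + dose_per_year - k_e * conc
                out.append(conc)
            return np.array(out)

        # Hypothetical PFOS-like scenario: constant intake for 15 years, then a phase-out
        intakes = [4.5] * 15 + [1.0] * 10
        print(serum_concentration(intakes, half_life_years=5.0).round(1))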

  15. Threshold Dynamics in Stochastic SIRS Epidemic Models with Nonlinear Incidence and Vaccination.

    PubMed

    Wang, Lei; Teng, Zhidong; Tang, Tingting; Li, Zhiming

    2017-01-01

    In this paper, the dynamical behaviors for a stochastic SIRS epidemic model with nonlinear incidence and vaccination are investigated. In the models, the disease transmission coefficient and the removal rates are all affected by noise. Some new basic properties of the models are found. Applying these properties, we establish a series of new threshold conditions on the stochastically exponential extinction, stochastic persistence, and permanence in the mean of the disease with probability one for the models. Furthermore, we obtain a sufficient condition on the existence of unique stationary distribution for the model. Finally, a series of numerical examples are introduced to illustrate our main theoretical results and some conjectures are further proposed.
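
    A minimal Euler-Maruyama sketch of an SIRS model in which the transmission coefficient is perturbed by white noise, in the spirit of the abstract; the incidence form, the vaccination term and all parameter values are assumptions made here for illustration only.

        import numpy as np

        rng = np.random.default_rng(1)

        def stochastic_sirs(beta=0.4, gamma=0.1, mu=0.02, delta=0.05, vacc=0.1,
                            sigma=0.05, dt=0.01, t_end=500.0):
            """SIRS with vaccination of susceptibles and noise on the transmission rate."""
            n_steps = int(t_end / dt)
            S, I, R = 0.9, 0.1, 0.0
            for _ in range(n_steps):
                dW = rng.normal(0.0, np.sqrt(dt))
                infection = beta * S * I * dt + sigma * S * I * dW    # noisy incidence
                S += (mu - mu * S - vacc * S + delta * R) * dt - infection
                I += infection - (gamma + mu) * I * dt
                R += (gamma * I + vacc * S - (mu + delta) * R) * dt
                S, I, R = max(S, 0.0), max(I, 0.0), max(R, 0.0)
            return S, I, R

        print(stochastic_sirs())          # endemic-looking state for these parameters
        print(stochastic_sirs(beta=0.08)) # infection decays toward extinction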

  16. Simulation of Healing Threshold in Strain-Induced Inflammation through a Discrete Informatics Model.

    PubMed

    Ibrahim, Israr; Oruganti, Sanjay Venkata; Pidaparti, Ramana

    2017-02-15

    Respiratory diseases such as asthma and acute respiratory distress syndrome, as well as acute lung injury, involve inflammation at the cellular level. The inflammation process is very complex and is characterized by the emergence of cytokines along with other changes in cellular processes. Due to the complexity of the various constituents that make up the inflammation dynamics, it is necessary to develop models that can complement experiments to fully understand inflammatory diseases. In this study, we developed a discrete informatics model based on a cellular automata (CA) approach to investigate the influence of the elastic field (stretch/strain) on the dynamics of inflammation and to account for probabilistic adaptation based on statistical interpretation of existing experimental data. Our simulation model investigated the effects of low, medium and high strain conditions on inflammation dynamics. Results suggest that the model is able to indicate the threshold of innate healing of tissue as a response to the strain experienced by the tissue. When strain is under the threshold, the tissue is still capable of adapting its structure to heal the damaged part. However, there exists a strain threshold beyond which the healing capability breaks down. The results obtained demonstrate that the developed discrete informatics-based CA model is capable of modeling and giving insights into inflammation dynamics parameters under various mechanical strain/stretch environments.

  17. A Model of Threshold Behavior Reveals Rescue Mechanisms of Bystander Proteins in Conformational Diseases

    PubMed Central

    Sandefur, Conner I.; Schnell, Santiago

    2011-01-01

    Conformational diseases result from the failure of a specific protein to fold into its correct functional state. The misfolded proteins can lead to the toxic aggregation of proteins. Protein misfolding in conformational diseases often displays a threshold behavior characterized by a sudden shift from nontoxic to toxic levels of misfolded proteins. In some conformational diseases, evidence suggests that misfolded proteins interact with bystander proteins (unfolded and native folded proteins), eliciting a misfolded phenotype. These bystander isomers would follow their normal physiological pathways in the absence of misfolded proteins. In this article, we present a general mechanism of bystander and misfolded protein interaction, which we have used to investigate how the threshold behavior in protein misfolding is triggered in conformational diseases. Using a continuous flow reactor model of the endoplasmic reticulum, we found that slight changes in the bystander protein residence time in the endoplasmic reticulum or in the ratio of basal misfolded to bystander protein inflow rates can trigger the threshold behavior in protein misfolding. Our analysis reveals three mechanisms to rescue bystander proteins in conformational diseases. The results of our model can now help direct experiments to understand the threshold behavior and develop therapeutic strategies targeting the modulation of conformational diseases. PMID:21504722

  18. A preliminary threshold model of parasitism in the Cockle Cerastoderma edule using delayed exchange of stability

    NASA Astrophysics Data System (ADS)

    O'Grady, E. A.; Culloty, S. C.; Kelly, T. C.; O'Callaghan, M. J. A.; Rachinskii, D.

    2015-02-01

    Thresholds occur, and play an important role, in the dynamics of many biological communities. In this paper, we model a persistence type threshold which has been shown experimentally to exist in hyperparasitised flukes in the cockle, a shellfish. Our model consists of a periodically driven slow-fast host-parasite system of equations for a slow flukes population (host) and a fast Unikaryon hyperparasite population (parasite). The model exhibits two branches of the critical curve crossing in a transcritical bifurcation scenario. We discuss two thresholds due to immediate and delayed exchange of stability effects; and we derive algebraic relationships for parameters of the periodic solution in the limit of the infinite ratio of the time scales. Flukes, which are the host species in our model, parasitise cockles and in turn are hyperparasitised by the microsporidian Unikaryon legeri; the life cycle of flukes includes several life stages and a number of different hosts. That is, the flukes-hyperparasite system in a cockle is, naturally, part of a larger estuarine ecosystem of interacting species involving parasites, shellfish and birds which prey on shellfish. A population dynamics model which accounts for one system of such multi-species interactions and includes the fluke-hyperparasite model in a cockle as a subsystem is presented. We provide evidence that the threshold effect we observed in the flukes-hyperparasite subsystem remains apparent in the multi-species system. Assuming that flukes damage cockles, and taking into account that the hyperparasite is detrimental to flukes, it is natural to suggest that the hyperparasitism may support the abundance of cockles and, thereby, the persistence of the estuarine ecosystem, including shellfish and birds. We confirm the possibility of the existence of this scenario in our model, at least partially, by removing the hyperparasite and demonstrating that this may result in a substantial drop in cockle numbers. The result

  19. Threshold Graph Limits and Random Threshold Graphs

    PubMed Central

    Diaconis, Persi; Holmes, Susan; Janson, Svante

    2010-01-01

    We study the limit theory of large threshold graphs and apply this to a variety of models for random threshold graphs. The results give a nice set of examples for the emerging theory of graph limits. PMID:20811581

  20. A probabilistic Poisson-based model accounts for an extensive set of absolute auditory threshold measurements.

    PubMed

    Heil, Peter; Matysiak, Artur; Neubauer, Heinrich

    2017-09-01

    Thresholds for detecting sounds in quiet decrease with increasing sound duration in every species studied. The neural mechanisms underlying this trade-off, often referred to as temporal integration, are not fully understood. Here, we probe the human auditory system with a large set of tone stimuli differing in duration, shape of the temporal amplitude envelope, duration of silent gaps between bursts, and frequency. Duration was varied by varying the plateau duration of plateau-burst (PB) stimuli, the duration of the onsets and offsets of onset-offset (OO) stimuli, and the number of identical bursts of multiple-burst (MB) stimuli. Absolute thresholds for a large number of ears (>230) were measured using a 3-interval-3-alternative forced choice (3I-3AFC) procedure. Thresholds decreased with increasing sound duration in a manner that depended on the temporal envelope. Most commonly, thresholds for MB stimuli were highest followed by thresholds for OO and PB stimuli of corresponding durations. Differences in the thresholds for MB and OO stimuli and in the thresholds for MB and PB stimuli, however, varied widely across ears, were negative in some ears, and were tightly correlated. We show that the variation and correlation of MB-OO and MB-PB threshold differences are linked to threshold microstructure, which affects the relative detectability of the sidebands of the MB stimuli and affects estimates of the bandwidth of auditory filters. We also found that thresholds for MB stimuli increased with increasing duration of the silent gaps between bursts. We propose a new model and show that it accurately accounts for our results and does so considerably better than a leaky-integrator-of-intensity model and a probabilistic model proposed by others. Our model is based on the assumption that sensory events are generated by a Poisson point process with a low rate in the absence of stimulation and higher, time-varying rates in the presence of stimulation. A subject in a 3I-3AFC
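
    A minimal Monte-Carlo sketch of the core idea: sensory events are drawn from a Poisson process with a spontaneous rate in every interval and an additional stimulus-driven rate in one interval, and the observer picks the interval with the most events. The rates and the max-count decision rule are illustrative assumptions, not the fitted model of the paper.

        import numpy as np

        rng = np.random.default_rng(2)

        def percent_correct_3afc(stim_rate, base_rate=2.0, duration=0.3, n_trials=20000):
            """Proportion correct in a 3-interval 3-alternative forced-choice task when the
            observer chooses the interval with the largest Poisson event count (ties broken at random)."""
            lam_noise = base_rate * duration
            lam_signal = (base_rate + stim_rate) * duration
            correct = 0
            for _ in range(n_trials):
                counts = rng.poisson([lam_signal, lam_noise, lam_noise])
                winners = np.flatnonzero(counts == counts.max())
                correct += rng.choice(winners) == 0
            return correct / n_trials

        for rate in (0.0, 5.0, 10.0, 20.0):       # stimulus-driven event rate (events/s)
            print(f"rate {rate:5.1f} -> P(correct) = {percent_correct_3afc(rate):.3f}")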

  1. Using a combined population-based and kinetic modelling approach to assess timescales and durations of magma migration activities prior to the 1669 flank eruption of Mt. Etna

    NASA Astrophysics Data System (ADS)

    Kahl, M.; Morgan, D. J.; Viccaro, M.; Dingwell, D. B.

    2015-12-01

    The March-July eruption of Mt. Etna in 1669 is ranked as one of the most destructive and voluminous eruptions of Etna volcano in historical times. To assess threats from future eruptions, a better understanding of how and over what timescales magma moved underground prior to and during the 1669 eruption is required. We present a combined population based and kinetic modelling approach [1-2] applied to 185 olivine crystals that erupted during the 1669 eruption. By means of this approach we provide, for the first time, a dynamic picture of magma mixing and magma migration activity prior to and during the 1669 flank eruption of Etna volcano. Following the work of [3] we have studied 10 basaltic lava samples (five SET1 and five SET2 samples) that were erupted from different fissures that opened between 950 and 700 m a.s.l. Following previous work [1-2] we were able to classify different populations of olivine based on their overall core and rim compositional record and the prevalent zoning type (i.e. normal vs. reverse). The core plateau compositions of the SET1 and SET2 olivines range from Fo70 up to Fo83 with a single peak at Fo75-76. The rims differ significantly and can be distinguished into two different groups. Olivine rims from the SET1 samples are generally more evolved and range from Fo50 to Fo64 with a maximum at Fo55-57. SET2 olivine rims vary between Fo65-75 with a peak at Fo69. SET1 and SET2 olivines display normal zonation with cores at Fo75-76 and diverging rim records (Fo55-57 and Fo65-75). The diverging core and rim compositions recorded in the SET1 and SET2 olivines can be attributed to magma evolution possibly in three different magmatic environments (MEs): M1 (=Fo75-76), M2 (=Fo69) and M3 (=Fo55-57) with magma transfer and mixing amongst them. The MEs established in this study differ slightly from those identified in previous works [1-2]. We note the relative lack of olivines with Fo-rich core and rim compositions indicating a major mafic magma

  2. Quasi-3D modeling of surface potential and threshold voltage of Triple Metal Quadruple Gate MOSFETs

    NASA Astrophysics Data System (ADS)

    Gupta, Santosh Kumar; Shah, Mihir Kumar P.

    2017-01-01

    In this paper we present an electrostatic model of the 3D Triple Metal Quadruple Gate (TMQG) MOSFET with rectangular cross-section, based on a quasi-3D method. The analytical equations for channel potential and characteristic length have been derived by decomposing the TMQG into two perpendicular 2D cross-sections (triple metal double gate, TMDG), and the effective characteristic length of the TMQG is found using the equivalent number of gates (ENG) method. For each TMDG, the 2D Poisson equation is solved by parabolic approximation with proper boundary conditions to calculate the channel potential. The threshold voltage expression is developed using the inversion carrier charge sheet density method. The developed models for channel potential and threshold voltage are validated against numerical simulations of the TMQG. The developed model provides design guidelines for TMQG devices with improved behaviour with respect to hot-carrier effects (HCEs) and short-channel effects (SCEs).

  3. Threshold parameters for a model of epidemic spread among households and workplaces

    PubMed Central

    Pellis, L.; Ferguson, N. M.; Fraser, C.

    2009-01-01

    The basic reproduction number R 0 is one of the most important concepts in modern infectious disease epidemiology. However, for more realistic and more complex models than those assuming homogeneous mixing in the population, other threshold quantities can be defined that are sometimes more useful and easily derived in terms of model parameters. In this paper, we present a model for the spread of a permanently immunizing infection in a population socially structured into households and workplaces/schools, and we propose and discuss a new household-to-household reproduction number R H for it. We show how R H overcomes some of the limitations of a previously proposed threshold parameter, and we highlight its relationship with the effort required to control an epidemic when interventions are targeted at randomly selected households. PMID:19324683

  4. Frequency analysis of tick quotes on foreign currency markets and the double-threshold agent model

    NASA Astrophysics Data System (ADS)

    Sato, Aki-Hiro

    2006-09-01

    Power spectrum densities for the number of tick quotes per minute (market activity) on three currency markets (USD/JPY, EUR/USD, and JPY/EUR) are analyzed for the period from January 2000 to December 2000. We find some peaks on the power spectrum densities at periods of a few minutes. We develop the double-threshold agent model and confirm that the corresponding periodicity can be observed in the activity of this model even though market participants perceive common periodic information that is weaker than their decision-making thresholds. The model is simulated numerically and investigated theoretically using a mean-field approximation. We propose the hypothesis that the periodicities found in the power spectrum densities arise from the nonlinearity and diversity of market participants.

  5. The Translation Invariant Massive Nelson Model: III. Asymptotic Completeness Below the Two-Boson Threshold

    NASA Astrophysics Data System (ADS)

    Dybalski, Wojciech; Møller, Jacob Schach

    2015-11-01

    We show asymptotic completeness of two-body scattering for a class of translation invariant models describing a single quantum particle (the electron) linearly coupled to a massive scalar field (bosons). Our proof is based on a recently established Mourre estimate for these models. In contrast to previous approaches, it requires no number cutoff, no restriction on the particle-field coupling strength, and no restriction on the magnitude of total momentum. Energy, however, is restricted by the two-boson threshold, admitting only scattering of a dressed electron and a single asymptotic boson. The class of models we consider include the UV-cutoff Nelson and polaron models.

  6. Implementation of Fixed-point Neuron Models with Threshold, Ramp and Sigmoid Activation Functions

    NASA Astrophysics Data System (ADS)

    Zhang, Lei

    2017-07-01

    This paper presents the hardware implementation of single-neuron models with three types of activation functions using fixed-point data format on Field Programmable Gate Arrays (FPGA). The activation function defines the transfer behavior of a neuron model and consequently of the Artificial Neural Network (ANN) constructed from it. This paper compares single-neuron models designed with bipolar ramp, threshold and sigmoid activation functions. It is also demonstrated that the FPGA hardware implementation performance can be significantly improved by using a 16-bit fixed-point data format instead of a 32-bit floating-point data format for the neuron model with sigmoid activation function.
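
    A small sketch of the three activation functions and of the accuracy cost of a 16-bit fixed-point representation. The Q4.12 word format chosen below is an assumption for illustration; the paper's exact format is not specified in the abstract.

        import numpy as np

        FRAC_BITS = 12          # assumed Q4.12 fixed point: 1 sign, 3 integer, 12 fractional bits

        def to_fixed(x):
            return np.clip(np.round(x * (1 << FRAC_BITS)), -(1 << 15), (1 << 15) - 1).astype(np.int16)

        def from_fixed(q):
            return q.astype(np.float64) / (1 << FRAC_BITS)

        def threshold(x):            # hard threshold (bipolar)
            return np.where(x >= 0.0, 1.0, -1.0)

        def bipolar_ramp(x):         # saturating linear between -1 and 1
            return np.clip(x, -1.0, 1.0)

        def sigmoid(x):
            return 1.0 / (1.0 + np.exp(-x))

        x = np.linspace(-4, 4, 9)
        x_q = from_fixed(to_fixed(x))                       # quantised input
        for name, f in [("threshold", threshold), ("ramp", bipolar_ramp), ("sigmoid", sigmoid)]:
            err = np.max(np.abs(f(x) - from_fixed(to_fixed(f(x_q)))))
            print(f"{name:9s} max |float - fixed| = {err:.4f}")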

  7. Threshold voltage model of junctionless cylindrical surrounding gate MOSFETs including fringing field effects

    NASA Astrophysics Data System (ADS)

    Gupta, Santosh Kumar

    2015-12-01

    A 2D analytical model of the body center potential (BCP) in short-channel junctionless Cylindrical Surrounding Gate (JLCSG) MOSFETs is developed using evanescent mode analysis (EMA). This model also incorporates the gate-bias-dependent inner and outer fringing capacitances due to the gate-source/drain fringing fields. The developed model provides results in good agreement with simulated results for variations of different physical parameters of the JLCSG MOSFET, viz. gate length, channel radius, doping concentration, and oxide thickness. Using the BCP, an analytical model for the threshold voltage has been derived and validated against results obtained from a 3D device simulator.

  8. Computational model of collective nest selection by ants with heterogeneous acceptance thresholds

    PubMed Central

    Masuda, Naoki; O'shea-Wheller, Thomas A.; Doran, Carolina; Franks, Nigel R.

    2015-01-01

    Collective decision-making is a characteristic of societies ranging from ants to humans. The ant Temnothorax albipennis is known to use quorum sensing to collectively decide on a new home; emigration to a new nest site occurs when the number of ants favouring the new site becomes quorate. There are several possible mechanisms by which ant colonies can select the best nest site among alternatives based on a quorum mechanism. In this study, we use computational models to examine the implications of heterogeneous acceptance thresholds across individual ants in collective nest choice behaviour. We take a minimalist approach to develop a differential equation model and a corresponding non-spatial agent-based model. We show, consistent with existing empirical evidence, that heterogeneity in acceptance thresholds is a viable mechanism for efficient nest choice behaviour. In particular, we show that the proposed models show speed–accuracy trade-offs and speed–cohesion trade-offs when we vary the number of scouts or the quorum threshold. PMID:26543578
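
    A minimal agent-based sketch of the idea: each scout has its own acceptance threshold, accepts a site whose quality exceeds that threshold, and the colony emigrates once the number of scouts committed to one site reaches a quorum. The recruitment rules and parameter values are illustrative assumptions, not the published model.

        import numpy as np

        rng = np.random.default_rng(3)

        def nest_choice(qualities, n_scouts=100, quorum=20, thr_mean=0.5, thr_sd=0.15, max_steps=2000):
            """Return (chosen site, steps to quorum); heterogeneous thresholds drawn per scout."""
            thresholds = rng.normal(thr_mean, thr_sd, size=n_scouts)
            committed = np.full(n_scouts, -1)                     # -1 = uncommitted
            counts = np.zeros(len(qualities), dtype=int)
            for step in range(1, max_steps + 1):
                scout = rng.integers(n_scouts)
                site = rng.integers(len(qualities))               # scout assesses a random site
                if committed[scout] == -1 and qualities[site] > thresholds[scout]:
                    committed[scout] = site
                    counts[site] += 1
                    if counts[site] >= quorum:
                        return site, step
            return int(np.argmax(counts)), max_steps

        # Two candidate nests: mediocre (0.45) versus good (0.75)
        wins = [nest_choice([0.45, 0.75])[0] for _ in range(200)]
        print("fraction choosing the better nest:", np.mean(np.array(wins) == 1))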

  9. Gauge threshold corrections for {N}=2 heterotic local models with flux, and mock modular forms

    NASA Astrophysics Data System (ADS)

    Carlevaro, Luca; Israël, Dan

    2013-03-01

    We determine threshold corrections to the gauge couplings in local models of {N}=2 smooth heterotic compactifications with torsion, given by the direct product of a warped Eguchi-Hanson space and a two-torus, together with a line bundle. Using the worldsheet CFT description previously found and by suitably regularising the infinite target space volume divergence, we show that threshold corrections to the various gauge factors are governed by the non-holomorphic completion of the Appell-Lerch sum. While its holomorphic mock-modular component captures the contribution of states that localise on the blown-up two-cycle, the non-holomorphic correction originates from non-localised bulk states. We infer from this analysis universality properties for {N}=2 heterotic local models with flux, based on target space modular invariance and the presence of such non-localised states. We finally determine the explicit dependence of these one-loop gauge threshold corrections on the moduli of the two-torus, and by S-duality we extract the corresponding string-loop and E1-instanton corrections to the Kähler potential and gauge kinetic functions of the dual type I model. In both cases, the presence of non-localised bulk states brings about novel perturbative and non-perturbative corrections, some features of which can be interpreted in the light of analogous corrections to the effective theory in compact models.

  10. Computational model of collective nest selection by ants with heterogeneous acceptance thresholds.

    PubMed

    Masuda, Naoki; O'shea-Wheller, Thomas A; Doran, Carolina; Franks, Nigel R

    2015-06-01

    Collective decision-making is a characteristic of societies ranging from ants to humans. The ant Temnothorax albipennis is known to use quorum sensing to collectively decide on a new home; emigration to a new nest site occurs when the number of ants favouring the new site becomes quorate. There are several possible mechanisms by which ant colonies can select the best nest site among alternatives based on a quorum mechanism. In this study, we use computational models to examine the implications of heterogeneous acceptance thresholds across individual ants in collective nest choice behaviour. We take a minimalist approach to develop a differential equation model and a corresponding non-spatial agent-based model. We show, consistent with existing empirical evidence, that heterogeneity in acceptance thresholds is a viable mechanism for efficient nest choice behaviour. In particular, we show that the proposed models show speed-accuracy trade-offs and speed-cohesion trade-offs when we vary the number of scouts or the quorum threshold.

  11. Two-threshold model for scaling laws of noninteracting snow avalanches.

    PubMed

    Faillettaz, Jerome; Louchet, Francois; Grasso, Jean-Robert

    2004-11-12

    The sizes of snow slab failure that trigger snow avalanches are power-law distributed. Such a power-law probability distribution function has also been proposed to characterize different landslide types. In order to understand this scaling for gravity-driven systems, we introduce a two-threshold 2D cellular automaton, in which failure occurs irreversibly. Taking snow slab avalanches as a model system, we find that the sizes of the largest avalanches just preceding the lattice system breakdown are power-law distributed. By tuning the maximum value of the ratio of the two failure thresholds our model reproduces the range of power-law exponents observed for land, rock, or snow avalanches. We suggest this control parameter represents the material cohesion anisotropy.

  12. Global threshold dynamics of an SIVS model with waning vaccine-induced immunity and nonlinear incidence.

    PubMed

    Yang, Junyuan; Martcheva, Maia; Wang, Lin

    2015-10-01

    Vaccination is the most effective method of preventing the spread of infectious diseases. For many diseases, vaccine-induced immunity is not life long and the duration of immunity is not always fixed. In this paper, we propose an SIVS model taking the waning of vaccine-induced immunity and general nonlinear incidence into consideration. Our analysis shows that the model exhibits global threshold dynamics in the sense that if the basic reproduction number is less than 1, then the disease-free equilibrium is globally asymptotically stable implying the disease dies out; while if the basic reproduction number is larger than 1, then the endemic equilibrium is globally asymptotically stable indicating that the disease persists. This global threshold result indicates that if the vaccination coverage rate is below a critical value, then the disease always persists and only if the vaccination coverage rate is above the critical value, the disease can be eradicated.

  13. Modeling of Surface Thermodynamics and Damage Thresholds in the IR and THz Regime

    DTIC Science & Technology

    2007-01-01

    Air Force Research Lab, Human Effectiveness Directorate, Optical Branch, 2624 Louis Bauer Drive, San Antonio, TX, United States ... thermal damage sustained by the tissue, and can also determine damage thresholds for total optical power delivered to the tissue. Currently, the surface ... effect on both temperature response and damage predictions. Current configuration abilities allow us to model a multi-layer material of infinite

  14. Threshold for chaos and thermalization in the one-dimensional mean-field bose-hubbard model.

    PubMed

    Cassidy, Amy C; Mason, Douglas; Dunjko, Vanja; Olshanii, Maxim

    2009-01-16

    We study the threshold for chaos and its relation to thermalization in the 1D mean-field Bose-Hubbard model, which, in particular, describes atoms in optical lattices. We identify the threshold for chaos, which is finite in the thermodynamic limit, and show that it is indeed a precursor of thermalization. Far above the threshold, the state of the system after relaxation is governed by the usual laws of statistical mechanics.

  15. Integrating physiological threshold experiments with climate modeling to project mangrove species' range expansion.

    PubMed

    Cavanaugh, Kyle C; Parker, John D; Cook-Patton, Susan C; Feller, Ilka C; Williams, A Park; Kellner, James R

    2015-05-01

    Predictions of climate-related shifts in species ranges have largely been based on correlative models. Due to limitations of these models, there is a need for more integration of experimental approaches when studying impacts of climate change on species distributions. Here, we used controlled experiments to identify physiological thresholds that control poleward range limits of three species of mangroves found in North America. We found that all three species exhibited a threshold response to extreme cold, but freeze tolerance thresholds varied among species. From these experiments, we developed a climate metric, freeze degree days (FDD), which incorporates both the intensity and the frequency of freezes. When included in distribution models, FDD accurately predicted mangrove presence/absence. Using 28 years of satellite imagery, we linked FDD to observed changes in mangrove abundance in Florida, further exemplifying the importance of extreme cold. We then used downscaled climate projections of FDD to project that these range limits will move northward by 2.2-3.2 km yr⁻¹ over the next 50 years. © 2014 John Wiley & Sons Ltd.
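
    The abstract does not give the exact formula for freeze degree days; a plausible reading, sketched below, accumulates the degree-days by which daily minimum temperature falls below 0 °C, thereby capturing both how often and how hard it freezes. Treat this definition as an assumption.

        import numpy as np

        def freeze_degree_days(daily_min_temp_c, freeze_point_c=0.0):
            """Sum of (freeze_point - Tmin) over all days with Tmin below the freeze point."""
            t = np.asarray(daily_min_temp_c, dtype=float)
            deficits = np.clip(freeze_point_c - t, 0.0, None)
            return deficits.sum()

        # Hypothetical winter: mostly mild with two freeze events of different severity
        temps = [6, 4, 3, -1, -4, 2, 5, 7, -2, 1, 8, 9]
        print(f"FDD = {freeze_degree_days(temps):.1f} degree-days over {sum(t < 0 for t in temps)} freeze days")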

  16. Genetic evaluation of calf and heifer survival in Iranian Holstein cattle using linear and threshold models.

    PubMed

    Forutan, M; Ansari Mahyari, S; Sargolzaei, M

    2015-02-01

    Calf and heifer survival are important traits in dairy cattle affecting profitability. This study was carried out to estimate genetic parameters of survival traits in female calves at different age periods, until nearly the first calving. Records of 49,583 female calves born between 1998 and 2009 were considered in five age periods: days 1-30, 31-180, 181-365, 366-760 and the full period (days 1-760). Genetic components were estimated based on linear and threshold sire models and linear animal models. The models included both fixed effects (month of birth, dam's parity number, calving ease and twin/single) and random effects (herd-year, genetic effect of sire or animal, and residual). Rates of death were 2.21, 3.37, 1.97, 4.14 and 12.4% for the above periods, respectively. Heritability estimates were very low, ranging from 0.48 to 3.04%, 0.62 to 3.51% and 0.50 to 4.24% for the linear sire model, linear animal model and threshold sire model, respectively. Rank correlations between random effects of sires obtained with linear and threshold sire models and with linear animal and sire models were 0.82-0.95 and 0.61-0.83, respectively. The estimated genetic correlations between the five different periods were moderate and only significant for days 31-180 and 181-365 (r(g) = 0.59), 31-180 and 366-760 (r(g) = 0.52), and 181-365 and 366-760 (r(g) = 0.42). The low genetic correlations in the current study suggest that survival at different periods may be affected by the same genes with different expression or by different genes. Even though the additive genetic variations of survival traits were small, it might be possible to improve these traits by traditional or genomic selection.
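
    One practical link between the linear (observed-scale) and threshold (liability-scale) analyses is the classical Dempster-Lerner transformation. The sketch below applies it to incidences in the range reported above, purely as an illustration of why heritabilities of rare binary traits look so small on the observed scale; the inputs are example values, not the study's estimates.

        from math import exp, pi, sqrt
        from statistics import NormalDist

        def liability_h2(h2_observed, incidence):
            """Dempster-Lerner conversion of an observed-scale heritability for a binary
            trait to the underlying liability scale."""
            nd = NormalDist()
            threshold = nd.inv_cdf(1.0 - incidence)          # truncation point on the liability scale
            z = exp(-threshold ** 2 / 2.0) / sqrt(2.0 * pi)  # standard normal density at the threshold
            return h2_observed * incidence * (1.0 - incidence) / z ** 2

        # e.g. 2.2% mortality in days 1-30 with an observed-scale heritability of 1% (0.01)
        print(f"liability-scale h2 ~ {liability_h2(0.01, 0.022):.3f}")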

  17. Threshold Models for Genome-Enabled Prediction of Ordinal Categorical Traits in Plant Breeding

    PubMed Central

    Montesinos-López, Osval A.; Montesinos-López, Abelardo; Pérez-Rodríguez, Paulino; de los Campos, Gustavo; Eskridge, Kent; Crossa, José

    2014-01-01

    Categorical scores for disease susceptibility or resistance often are recorded in plant breeding. The aim of this study was to introduce genomic models for analyzing ordinal characters and to assess the predictive ability of genomic predictions for ordered categorical phenotypes using a threshold model counterpart of the Genomic Best Linear Unbiased Predictor (i.e., TGBLUP). The threshold model was used to relate a hypothetical underlying scale to the outward categorical response. We present an empirical application where a total of nine models, five without interaction and four with genomic × environment interaction (G×E) and genomic additive × additive × environment interaction (G×G×E), were used. We assessed the proposed models using data consisting of 278 maize lines genotyped with 46,347 single-nucleotide polymorphisms and evaluated for disease resistance [with ordinal scores from 1 (no disease) to 5 (complete infection)] in three environments (Colombia, Zimbabwe, and Mexico). Models with G×E captured a sizeable proportion of the total variability, which indicates the importance of introducing interaction to improve prediction accuracy. Relative to models based on main effects only, the models that included G×E achieved 9–14% gains in prediction accuracy; adding additive × additive interactions did not increase prediction accuracy consistently across locations. PMID:25538102

  18. Modeling aeolian sediment transport thresholds on physically rough Martian surfaces: A shear stress partitioning approach

    NASA Astrophysics Data System (ADS)

    Gillies, John A.; Nickling, William G.; King, James; Lancaster, Nicholas

    2010-09-01

    This paper explores the effect that large roughness elements (0.30 m × 0.26 m × 0.36 m) may have on entrainment of sediment by Martian winds using a shear stress partitioning approach based on a model developed by Raupach et al. (Raupach, M.R., Gillette, D.A., Leys, J.F., 1993. The effect of roughness elements on wind erosion threshold. Journal of Geophysical Research 98(D2), 3023-3029). This model predicts the shear stress partitioning ratio defined as the percent reduction in shear stress on the intervening surface between the roughness elements as compared to the surface in the absence of those elements. This ratio is based on knowledge of the geometric properties of the roughness elements, the characteristic drag coefficients of the elements and the surface, and the assumed effect these elements have on the spatial distribution of the mean and maximum shear stresses. On Mars, unlike on Earth, the shear stress partitioning caused by roughness can be non-linear in that the drag coefficients for the surface as well as for the roughness itself show Reynolds number dependencies for the reported range of Martian wind speeds. The shear stress partitioning model of Raupach et al. is used to evaluate how conditions of the Martian atmosphere will affect the threshold shear stress ratio for Martian surfaces over a range of values of roughness density. Using, as an example, a 125 µm diameter particle with an estimated threshold shear stress on Mars of ≈ 0.06 N m⁻² (shear velocity, u* ≈ 2 m s⁻¹ on a smooth surface), we evaluate the effect of roughness density on the threshold shear stress ratio for this diameter particle. In general, on Mars higher regional shear stresses are required to initiate particle entrainment for surfaces that have the same physical roughness as defined by the roughness density term (λ) compared with terrestrial surfaces mainly because of the low Martian atmospheric density.
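
    A minimal sketch of the Raupach et al. (1993) partitioning relation referred to above: the ratio of the threshold friction velocity on a bare surface to that on the rough surface, R_t = [(1 - m*sigma*lambda)(1 + m*beta*lambda)]^(-1/2), where beta is the ratio of element to surface drag coefficients, sigma the basal-to-frontal area ratio of the elements, lambda the roughness density, and m accounts for the difference between mean and maximum surface stress. The parameter values below are generic placeholders, not the Mars-specific, Reynolds-number-dependent values of the paper.

        import numpy as np

        def threshold_ratio(lam, beta=100.0, sigma=1.0, m=0.5):
            """Raupach et al. (1993) threshold friction velocity ratio R_t = u*t(bare)/u*t(rough).
            Smaller R_t means the intervening surface is more sheltered by the roughness."""
            return ((1.0 - m * sigma * lam) * (1.0 + m * beta * lam)) ** -0.5

        for lam in (0.001, 0.01, 0.05, 0.1):
            print(f"lambda = {lam:5.3f}  ->  R_t = {threshold_ratio(lam):.3f}")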

  19. An Active Contour Model Based on Adaptive Threshold for Extraction of Cerebral Vascular Structures.

    PubMed

    Wang, Jiaxin; Zhao, Shifeng; Liu, Zifeng; Tian, Yun; Duan, Fuqing; Pan, Yutong

    2016-01-01

    Cerebral vessel segmentation is essential and helpful for clinical diagnosis and the related research. However, automatic segmentation of brain vessels remains challenging because of the variable vessel shape and the high complexity of vessel geometry. This study proposes a new active contour model (ACM) implemented by the level-set method for segmenting vessels from TOF-MRA data. The energy function of the new model, combining both region intensity and boundary information, is composed of two region terms, one boundary term and one penalty term. The global threshold representing the lower gray boundary of the target object by maximum intensity projection (MIP) is defined in the first region term, and it is used to guide the segmentation of the thick vessels. In the second term, a dynamic intensity threshold is employed to extract the tiny vessels. The boundary term is used to drive the contours to evolve towards the boundaries with high gradients. The penalty term is used to avoid reinitialization of the level-set function. Experimental results on 10 clinical brain data sets demonstrate that our method is not only able to achieve a better Dice Similarity Coefficient than the global-threshold-based method and the localized hybrid level-set method but also able to extract whole cerebral vessel trees, including the thin vessels.

  20. Detection and Modeling of High-Dimensional Thresholds for Fault Detection and Diagnosis

    NASA Technical Reports Server (NTRS)

    He, Yuning

    2015-01-01

    Many Fault Detection and Diagnosis (FDD) systems use discrete models for detection and reasoning. To obtain categorical values like "oil pressure too high", analog sensor values need to be discretized using a suitable threshold. Time series of analog and discrete sensor readings are processed and discretized as they come in. This task is usually performed by the "wrapper code" of the FDD system, together with signal preprocessing and filtering. In practice, selecting the right threshold is very difficult, because it heavily influences the quality of diagnosis. If a threshold causes the alarm to trigger even in nominal situations, false alarms will be the consequence. On the other hand, if the threshold setting does not trigger in case of an off-nominal condition, important alarms might be missed, potentially causing hazardous situations. In this paper, we describe in detail the underlying statistical modeling techniques and algorithm, as well as the Bayesian method for selecting the most likely shape and its parameters. Our approach is illustrated by several examples from the aerospace domain.
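
    The abstract's point that threshold choice trades false alarms against missed detections can be illustrated by a simple sweep over candidate thresholds on labelled nominal and off-nominal sensor values. This is a naive stand-in for illustration only, not the Bayesian shape-fitting method the paper describes; all data and the cost weighting are made up.

        import numpy as np

        rng = np.random.default_rng(4)

        # Hypothetical labelled oil-pressure readings (arbitrary units)
        nominal = rng.normal(50.0, 5.0, size=1000)       # healthy operation
        off_nominal = rng.normal(68.0, 6.0, size=200)    # degraded operation ("too high")

        candidates = np.linspace(40.0, 80.0, 201)
        false_alarm = np.array([(nominal > t).mean() for t in candidates])
        missed = np.array([(off_nominal <= t).mean() for t in candidates])
        cost = false_alarm + 2.0 * missed                # assumed: missed detections cost twice as much
        best_idx = np.argmin(cost)
        print(f"chosen threshold = {candidates[best_idx]:.1f}  "
              f"(FA = {false_alarm[best_idx]:.3f}, miss = {missed[best_idx]:.3f})")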

  1. An Active Contour Model Based on Adaptive Threshold for Extraction of Cerebral Vascular Structures

    PubMed Central

    Wang, Jiaxin; Zhao, Shifeng; Liu, Zifeng; Duan, Fuqing; Pan, Yutong

    2016-01-01

    Cerebral vessel segmentation is essential and helpful for clinical diagnosis and the related research. However, automatic segmentation of brain vessels remains challenging because of the variable vessel shape and the high complexity of vessel geometry. This study proposes a new active contour model (ACM) implemented by the level-set method for segmenting vessels from TOF-MRA data. The energy function of the new model, combining both region intensity and boundary information, is composed of two region terms, one boundary term and one penalty term. The global threshold representing the lower gray boundary of the target object by maximum intensity projection (MIP) is defined in the first region term, and it is used to guide the segmentation of the thick vessels. In the second term, a dynamic intensity threshold is employed to extract the tiny vessels. The boundary term is used to drive the contours to evolve towards the boundaries with high gradients. The penalty term is used to avoid reinitialization of the level-set function. Experimental results on 10 clinical brain data sets demonstrate that our method is not only able to achieve a better Dice Similarity Coefficient than the global-threshold-based method and the localized hybrid level-set method but also able to extract whole cerebral vessel trees, including the thin vessels. PMID:27597878

  2. Model-independent constraints on hadronic form factors with above-threshold poles

    NASA Astrophysics Data System (ADS)

    Caprini, Irinel; Grinstein, Benjamín; Lebed, Richard F.

    2017-08-01

    Model-independent constraints on hadronic form factors, in particular those describing exclusive semileptonic decays, can be derived from the knowledge of field correlators calculated in perturbative QCD, using analyticity and unitarity. The location of poles corresponding to below-threshold resonances, i.e., stable states that cannot decay into a pair of hadrons from the crossed channel of the form factor, must be known a priori, and their effect, accounted for through the use of Blaschke factors, is to reduce the strength of the constraints in the semileptonic region. By contrast, above-threshold resonances appear as poles on unphysical Riemann sheets, and their presence does not affect the original model-independent constraints. We discuss the possibility that the above-threshold poles can provide indirect information on the form factors on the first Riemann sheet, either through information from their residues or by constraining the discontinuity function. The bounds on form factors can be improved by imposing, in an exact way, the additional information in the extremal problem. The semileptonic K →π ℓν and D →π ℓν decays are considered as illustrations.

  3. Empirical scalings and modeling of error field penetration thresholds in tokamaks

    NASA Astrophysics Data System (ADS)

    Schaefer, C.; Lanctot, M. J.; Meneghini, O.; Smith, S. P.; Logan, N. C.; Haskey, S.

    2016-10-01

    Recent experiments in several tokamaks show that applied n=2 fields can lead to disruptive n=1 locked modes at field thresholds similar to those found for n=1 fields. This has important implications for the allowable size of error fields in next-step devices. In order to extrapolate field thresholds to ITER, an error field database (EFDB) is being developed under the OMFIT integrated modeling framework. The initial phase of development involves analysis of the applied 3D field, detection of island onset, characterization of island structure, reconstruction of the plasma equilibrium, determination of measurable plasma parameters at the relevant rational surfaces, and archiving in a dedicated MDSplus tree. The EFDB is both an extension of previous data assembly efforts and a means of documenting the parametric dependencies of error field penetration thresholds for a variety of tokamaks, across different plasma regimes, and for arbitrary applied field configurations. Through analysis of available data, empirical scalings for n=1 and n=2 fields are resolved. The trends are compared to functional dependencies predicted by drift-MHD models. Work supported by the US Department of Energy under the Science Undergraduate Laboratory Internship (SULI) program, DE-FC02-04ER54698 and DE-AC52-07NA27344.

  4. Reentry Near the Percolation Threshold in a Heterogeneous Discrete Model for Cardiac Tissue

    NASA Astrophysics Data System (ADS)

    Alonso, Sergio; Bär, Markus

    2013-04-01

    Arrhythmias in cardiac tissue are related to irregular electrical wave propagation in the heart. Cardiac tissue is formed by a discrete cell network, which is often heterogeneous. A localized region with a fraction of nonconducting links surrounded by homogeneous conducting tissue can become a source of reentry and ectopic beats. Extensive simulations in a discrete model of cardiac tissue show that a wave crossing a heterogeneous region of cardiac tissue can disintegrate into irregular patterns, provided the fraction of nonconducting links is close to the percolation threshold of the cell network. The dependence of the reentry probability on this fraction, the system size, and the degree of excitability can be inferred from the size distribution of nonconducting clusters near the percolation threshold.
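
    A minimal sketch of the kind of diagnostic mentioned above: generate a lattice in which each site is nonconducting with probability p, label the connected nonconducting clusters, and inspect their size distribution as p approaches the site-percolation threshold (about 0.593 on a square lattice). This is a generic percolation illustration, not the cardiac model itself.

        import numpy as np
        from scipy import ndimage

        rng = np.random.default_rng(5)

        def cluster_sizes(p, n=200):
            """Sizes of connected clusters of nonconducting sites on an n x n square lattice."""
            nonconducting = rng.random((n, n)) < p
            labels, n_clusters = ndimage.label(nonconducting)      # 4-connectivity by default
            return np.bincount(labels.ravel())[1:]                 # drop the background count

        for p in (0.40, 0.55, 0.59):
            sizes = cluster_sizes(p)
            print(f"p = {p:.2f}: {sizes.size} clusters, largest = {sizes.max()}")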

  5. Above-threshold numerical modeling of high-index-contrast photonic-crystal quantum cascade lasers

    NASA Astrophysics Data System (ADS)

    Napartovich, A. P.; Elkin, N. N.; Vysotsky, D. V.; Kirch, J.; Sigler, C.; Botez, D.; Mawst, L. J.; Belyanin, A.

    2015-03-01

    Three-dimensional above-threshold analyses of high-index-contrast (HC) photonic-crystal (PC) quantum-cascade-laser array (QCLA) structures, for operation at watt-range CW powers in a single spatial mode, have been performed. Three-element HC-PC structures are formed by alternating active-antiguided and passive-guided regions along with respective metal-electrode spatial profiling. The 3-D numerical code takes into account absorption and edge-radiation losses. Rigrod's approximation is used for the gain. The specific feature of the QCLA is that only the transverse component of the magnetic field sees the gain. Results of above-threshold laser modeling in various approximate versions of the laser-cavity description are compared with the results of linear, full-vectorial modeling using the COMSOL package. Additionally, modal gains for several higher-order optical modes, on a "frozen gain background" produced by the fundamental mode, are computed by the Arnoldi algorithm. The gain spatial-hole burning effect results in growth of the competing modes' gain with drive current. Approaching the lasing threshold for a competing higher-order mode sets a limit on the single-mode operation range. The modal structure and stability are studied over a wide range of inter-element widths. Numerical analyses predict that the proper choice of construction parameters ensures stable single-mode operation at high drive levels above threshold. The output power from a single-mode operated QCLA at a wavelength of 4.7 μm is predicted to be available at multi-watt levels, although this power may be restricted by thermal effects.

  6. Electric Field Model of Transcranial Electric Stimulation in Nonhuman Primates: Correspondence to Individual Motor Threshold.

    PubMed

    Lee, Won Hee; Lisanby, Sarah H; Laine, Andrew F; Peterchev, Angel V

    2015-09-01

    To develop a pipeline for realistic head models of nonhuman primates (NHPs) for simulations of noninvasive brain stimulation, and use these models together with empirical threshold measurements to demonstrate that the models capture individual anatomical variability. Based on structural MRI data, we created models of the electric field (E-field) induced by right unilateral (RUL) electroconvulsive therapy (ECT) in four rhesus macaques. Individual motor threshold (MT) was measured with transcranial electric stimulation (TES) administered through the RUL electrodes in the same subjects. The interindividual anatomical differences resulted in 57% variation in median E-field strength in the brain at fixed stimulus current amplitude. Individualization of the stimulus current by MT reduced the E-field variation in the target motor area by 27%. There was significant correlation between the measured MT and the ratio of simulated electrode current and E-field strength (r(2) = 0.95, p = 0.026). Exploratory analysis revealed significant correlations of this ratio with anatomical parameters including the superior electrode-to-cortex distance, vertex-to-cortex distance, and brain volume (r(2) > 0.96, p < 0.02). The neural activation threshold was estimated to be 0.45 ± 0.07 V/cm for 0.2-ms stimulus pulse width. These results suggest that our individual-specific NHP E-field models appropriately capture individual anatomical variability relevant to the dosing of TES/ECT. These findings are exploratory due to the small number of subjects. This study can contribute insight into NHP studies of ECT and other brain stimulation interventions, help link the results to clinical studies, and ultimately lead to more rational brain stimulation dosing paradigms.

  7. Electric Field Model of Transcranial Electric Stimulation in Nonhuman Primates: Correspondence to Individual Motor Threshold

    PubMed Central

    Lee, Won Hee; Lisanby, Sarah H.; Laine, Andrew F.

    2015-01-01

    Objective To develop a pipeline for realistic head models of nonhuman primates (NHPs) for simulations of noninvasive brain stimulation, and use these models together with empirical threshold measurements to demonstrate that the models capture individual anatomical variability. Methods Based on structural MRI data, we created models of the electric field (E-field) induced by right unilateral (RUL) electroconvulsive therapy (ECT) in four rhesus macaques. Individual motor threshold (MT) was measured with transcranial electric stimulation (TES) administered through the RUL electrodes in the same subjects. Results The interindividual anatomical differences resulted in 57% variation in median E-field strength in the brain at fixed stimulus current amplitude. Individualization of the stimulus current by MT reduced the E-field variation in the target motor area by 27%. There was significant correlation between the measured MT and the ratio of simulated electrode current and E-field strength (r2 = 0.95, p = 0.026). Exploratory analysis revealed significant correlations of this ratio with anatomical parameters including the superior electrode-to-cortex distance, vertex-to-cortex distance, and brain volume (r2 > 0.96, p < 0.02). The neural activation threshold was estimated to be 0.45 ± 0.07 V/cm for 0.2 ms stimulus pulse width. Conclusion These results suggest that our individual-specific NHP E-field models appropriately capture individual anatomical variability relevant to the dosing of TES/ECT. These findings are exploratory due to the small number of subjects. Significance This work can contribute insight into NHP studies of ECT and other brain stimulation interventions, help link the results to clinical studies, and ultimately lead to more rational brain stimulation dosing paradigms. PMID:25910001

  8. Two-dimensional threshold voltage model of a nanoscale silicon-on-insulator tunneling field-effect transistor

    NASA Astrophysics Data System (ADS)

    Li, Yu-Chen; Zhang, He-Ming; Zhang, Yu-Ming; Hu, Hui-Yong; Wang, Bin; Lou, Yong-Le; Zhou, Chun-Yu

    2013-03-01

    The tunneling field-effect transistor (TFET) is a potential candidate for the post-CMOS era. In this paper, a threshold voltage model is developed for this new kind of device. First, two-dimensional (2D) models are used to describe the distributions of potential and electric field in the channel and two depletion regions. Then based on the physical definition of threshold voltage for the nanoscale TFET, the threshold voltage model is developed. The accuracy of the proposed model is verified by comparing the calculated results with the 2D device simulation data. It has been demonstrated that the effects of varying the device parameters can easily be investigated using the model presented in this paper. This threshold voltage model provides a valuable reference to TFET device design, simulation, and fabrication.

  9. A model to predict threshold concentrations for toxic effects of chlorinated benzenes in sediment

    SciTech Connect

    Fuchsman, P.C.; Duda, D.J.; Barber, T.R.

    1999-09-01

    A probabilistic model was developed to predict effects threshold concentrations for chlorinated benzenes in sediment. Based on published quantitative structure-activity relationships relating the toxicity of chlorinated benzenes to the degree of chlorination, congeners with the same number of chlorine substitutions were considered toxicologically equivalent. Hexachlorobenzene was excluded from the assessment based on a lack of aquatic toxicity at the water solubility limit. The equilibrium partitioning approach was applied in a probabilistic analysis to derive predicted effects thresholds (PETs) for each chlorinated benzene group, with model input distributions defined by published log Kow values and aquatic toxicity data extracted from the published literature. The probabilistic distributions of PETs generally increased with chlorination, with 20th percentile values ranging from 3.2 mg/kg (at 1% organic carbon) for chlorobenzene to 67 mg/kg (at 1% organic carbon) for tetrachlorobenzene congeners. The toxicity of total chlorinated benzenes in sediment can be assessed by applying the PETs in a toxic index model, based on the assumption that multiple chlorinated benzene congeners will show approximately additive toxicity, as characteristic of nonpolar narcotic toxicants. The 20th percentile PET values are one to two orders of magnitude higher than published screening-level guidelines, suggesting that the screening-level guidelines will provide overly conservative assessments in most cases. Relevant spiked sediment toxicity data are very limited but seem consistent with the probabilistic model; additional testing could be conducted to confirm the model's predictions.
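
    A minimal sketch of the deterministic core of the equilibrium-partitioning step: a sediment effect threshold (per unit organic carbon) is obtained by multiplying an aqueous effect concentration by an organic-carbon partition coefficient. Here Koc is crudely approximated by Kow and all numbers are placeholders, not the study's probabilistic inputs.

        def sediment_effect_threshold(aqueous_effect_mg_per_l, log_kow):
            """Equilibrium partitioning: C_sed (mg/kg OC) = C_water (mg/L) * Koc (L/kg OC),
            with the rough assumption Koc ~= Kow."""
            koc = 10.0 ** log_kow            # L/kg organic carbon (assumed equal to Kow)
            return aqueous_effect_mg_per_l * koc

        # Hypothetical chlorobenzene-like values: aqueous effect level 1 mg/L, log Kow ~= 2.8
        pet_mg_per_kg_oc = sediment_effect_threshold(1.0, 2.8)
        print(f"predicted effect threshold ~ {pet_mg_per_kg_oc:.0f} mg/kg organic carbon")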

  10. Threshold Dynamics in Stochastic SIRS Epidemic Models with Nonlinear Incidence and Vaccination

    PubMed Central

    Wang, Lei; Tang, Tingting

    2017-01-01

    In this paper, the dynamical behaviors for a stochastic SIRS epidemic model with nonlinear incidence and vaccination are investigated. In the models, the disease transmission coefficient and the removal rates are all affected by noise. Some new basic properties of the models are found. Applying these properties, we establish a series of new threshold conditions on the stochastically exponential extinction, stochastic persistence, and permanence in the mean of the disease with probability one for the models. Furthermore, we obtain a sufficient condition on the existence of unique stationary distribution for the model. Finally, a series of numerical examples are introduced to illustrate our main theoretical results and some conjectures are further proposed. PMID:28194223

  11. A Modified Mechanical Threshold Stress Constitutive Model for Austenitic Stainless Steels

    NASA Astrophysics Data System (ADS)

    Prasad, K. Sajun; Gupta, Amit Kumar; Singh, Yashjeet; Singh, Swadesh Kumar

    2016-12-01

    This paper presents a modified mechanical threshold stress (m-MTS) constitutive model. The m-MTS model incorporates variable athermal and dynamic strain aging (DSA) components to accurately predict the flow stress behavior of austenitic stainless steels (ASS)-316 and 304. Under strain rate variations between 0.01 and 0.0001 s⁻¹, uniaxial tensile tests were conducted at temperatures ranging from 50 to 650 °C to evaluate the material constants of constitutive models. The test results revealed the high dependence of flow stress on strain, strain rate and temperature. In addition, it was observed that DSA occurred at elevated temperatures and very low strain rates, causing an increase in flow stress. While the original MTS model is capable of predicting the flow stress behavior for ASS, statistical parameters point out the inefficiency of the model when compared to other models such as the Johnson-Cook model, the modified Zerilli-Armstrong (m-ZA) model, and modified Arrhenius-type equations (m-Arr). Therefore, in order to accurately model both the DSA and non-DSA regimes, the original MTS model was modified by incorporating variable athermal and DSA components. The suitability of the m-MTS model was assessed by comparing the statistical parameters. It was observed that the m-MTS model was highly accurate for the DSA regime when compared to the existing models. However, models like m-ZA and m-Arr showed better results for the non-DSA regime.

  12. Hydrodynamic Lyapunov modes and strong stochasticity threshold in Fermi-Pasta-Ulam models.

    PubMed

    Yang, Hong-Liu; Radons, Günter

    2006-06-01

    The existence of a strong stochasticity threshold (SST) has been detected in many Hamiltonian lattice systems, including the Fermi-Pasta-Ulam (FPU) model, which is characterized by a crossover of the system dynamics from weak to strong chaos with increasing energy density epsilon. Correspondingly, the relaxation time to energy equipartition and the largest Lyapunov exponent exhibit different scaling behavior in the regimes below and beyond the threshold value. In this paper, we attempt to go one step further in this direction to explore further changes in the energy density dependence of other Lyapunov exponents and of hydrodynamic Lyapunov modes (HLMs). In particular, we find that for the FPU-beta and FPU-alpha(beta) models the scalings of the energy density dependence of all Lyapunov exponents experience a similar change at the SST as that of the largest Lyapunov exponent. In addition, the threshold values of the crossover of all Lyapunov exponents are nearly identical. These facts lend support to the point of view that the crossover in the system dynamics at the SST manifests a global change in the geometric structure of phase space. They also partially answer the question of why the simple assumption that the ambient manifold representing the system dynamics is quasi-isotropic works quite well in the analytical calculation of the largest Lyapunov exponent. Furthermore, the FPU-beta model is used as an example to show that HLMs exist in Hamiltonian lattice models with continuous symmetries. Some measures are defined to indicate the significance of HLMs. Numerical simulations demonstrate that there is a smooth transition in the energy density dependence of these variables corresponding to the crossover in Lyapunov exponents at the SST. In particular, our numerical results indicate that strong chaos is essential for the appearance of HLMs and those modes become more significant with increasing degree of chaoticity.

  13. Genetic parameters for hoof health traits estimated with linear and threshold models using alternative cohorts.

    PubMed

    Malchiodi, F; Koeck, A; Mason, S; Christen, A M; Kelton, D F; Schenkel, F S; Miglior, F

    2017-04-01

    A national genetic evaluation program for hoof health could be achieved by using hoof lesion data collected directly by hoof trimmers. However, not all cows in the herds during the trimming period are always presented to the hoof trimmer. This preselection process may not be completely random, leading to erroneous estimations of the prevalence of hoof lesions in the herd and inaccuracies in the genetic evaluation. The main objective of this study was to estimate genetic parameters for individual hoof lesions in Canadian Holsteins by using an alternative cohort to consider all cows in the herd during the period of the hoof trimming sessions, including those that were not examined by the trimmer over the entire lactation. A second objective was to compare the estimated heritabilities and breeding values for resistance to hoof lesions obtained with threshold and linear models. Data were recorded by 23 hoof trimmers serving 521 herds located in Alberta, British Columbia, and Ontario. A total of 73,559 hoof-trimming records from 53,654 cows were collected between 2009 and 2012. Hoof lesions included in the analysis were digital dermatitis, interdigital dermatitis, interdigital hyperplasia, sole hemorrhage, sole ulcer, toe ulcer, and white line disease. All variables were analyzed as binary traits, as the presence or the absence of the lesions, using a threshold and a linear animal model. Two different cohorts were created: Cohort 1, which included only cows presented to hoof trimmers, and Cohort 2, which included all cows present in the herd at the time of hoof trimmer visit. Using a threshold model, heritabilities on the observed scale ranged from 0.01 to 0.08 for Cohort 1 and from 0.01 to 0.06 for Cohort 2. Heritabilities estimated with the linear model ranged from 0.01 to 0.07 for Cohort 1 and from 0.01 to 0.05 for Cohort 2. Despite a low heritability, the distribution of the sire breeding values showed large and exploitable variation among sires. Higher breeding

  14. Computationally Efficient Implementation of a Novel Algorithm for the General Unified Threshold Model of Survival (GUTS)

    PubMed Central

    Albert, Carlo; Vogel, Sören

    2016-01-01

    The General Unified Threshold model of Survival (GUTS) provides a consistent mathematical framework for survival analysis. However, the calibration of GUTS models is computationally challenging. We present a novel algorithm and its fast implementation in our R package, GUTS, that help to overcome these challenges. We show a step-by-step application example consisting of model calibration and uncertainty estimation as well as making probabilistic predictions and validating the model with new data. Using self-defined wrapper functions, we show how to produce informative text printouts and plots without effort, for the inexperienced as well as the advanced user. The complete ready-to-run script is available as supplemental material. We expect that our software facilitates novel re-analysis of existing survival data as well as asking new research questions in a wide range of sciences. In particular the ability to quickly quantify stressor thresholds in conjunction with dynamic compensating processes, and their uncertainty, is an improvement that complements current survival analysis methods. PMID:27340823
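
    As background, one commonly used reduced variant of the framework (often labelled GUTS-RED-SD) couples a scaled damage state to a hazard rate with an explicit threshold; in the usual notation,

        \frac{dD}{dt} = k_d\left(C_w(t) - D(t)\right), \qquad
        h(t) = k_k \max\{0,\, D(t) - z\} + h_b, \qquad
        S(t) = \exp\!\left(-\int_0^t h(\tau)\,d\tau\right),

    where C_w(t) is the external concentration, k_d the dominant rate constant, z the threshold, k_k the killing rate and h_b the background hazard. Calibration then amounts to estimating (k_d, z, k_k, h_b), with their joint uncertainty, from observed survival over time; this sketch is generic and not necessarily the exact parameterization implemented in the package.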

  15. A Probabilistic Model for Estimating the Depth and Threshold Temperature of C-fiber Nociceptors

    NASA Astrophysics Data System (ADS)

    Dezhdar, Tara; Moshourab, Rabih A.; Fründ, Ingo; Lewin, Gary R.; Schmuker, Michael

    2015-12-01

    The subjective experience of thermal pain follows the detection and encoding of noxious stimuli by primary afferent neurons called nociceptors. However, nociceptor morphology has been hard to access and the mechanisms of signal transduction remain unresolved. In order to understand how heat transducers in nociceptors are activated in vivo, it is important to estimate the temperatures that directly activate the skin-embedded nociceptor membrane. Hence, the nociceptor’s temperature threshold must be estimated, which in turn will depend on the depth at which transduction happens in the skin. Since the temperature at the receptor cannot be accessed experimentally, such an estimation can currently only be achieved through modeling. However, the current state-of-the-art model to estimate temperature at the receptor suffers from the fact that it cannot account for the natural stochastic variability of neuronal responses. We improve this model using a probabilistic approach which accounts for uncertainties and potential noise in the system. Using a data set of 24 C-fibers recorded in vitro, we show that, even without detailed knowledge of the bio-thermal properties of the system, the probabilistic model that we propose here is capable of providing estimates of threshold and depth in cases where the classical method fails.

  16. A Probabilistic Model for Estimating the Depth and Threshold Temperature of C-fiber Nociceptors

    PubMed Central

    Dezhdar, Tara; Moshourab, Rabih A.; Fründ, Ingo; Lewin, Gary R.; Schmuker, Michael

    2015-01-01

    The subjective experience of thermal pain follows the detection and encoding of noxious stimuli by primary afferent neurons called nociceptors. However, nociceptor morphology has been hard to access and the mechanisms of signal transduction remain unresolved. In order to understand how heat transducers in nociceptors are activated in vivo, it is important to estimate the temperatures that directly activate the skin-embedded nociceptor membrane. Hence, the nociceptor’s temperature threshold must be estimated, which in turn will depend on the depth at which transduction happens in the skin. Since the temperature at the receptor cannot be accessed experimentally, such an estimation can currently only be achieved through modeling. However, the current state-of-the-art model to estimate temperature at the receptor suffers from the fact that it cannot account for the natural stochastic variability of neuronal responses. We improve this model using a probabilistic approach which accounts for uncertainties and potential noise in the system. Using a data set of 24 C-fibers recorded in vitro, we show that, even without detailed knowledge of the bio-thermal properties of the system, the probabilistic model that we propose here is capable of providing estimates of threshold and depth in cases where the classical method fails. PMID:26638830

  17. Spinodals, scaling, and ergodicity in a threshold model with long-range stress transfer.

    PubMed

    Ferguson, C D; Klein, W; Rundle, J B

    1999-08-01

    We present both theoretical and numerical analyses of a cellular automaton version of a slider-block model or threshold model that includes long-range interactions. Theoretically we develop a coarse-grained description in the mean-field (infinite range) limit and discuss the relevance of the metastable state, limit of stability (spinodal), and nucleation to the phenomenology of the model. We also simulate the model and confirm the relevance of the theory for systems with long- but finite-range interactions. Results of particular interest include the existence of Gutenberg-Richter-like scaling consistent with that found on real earthquake fault systems, the association of large events with nucleation near the spinodal, and the result that such systems can be described, in the mean-field limit, with techniques appropriate to systems in equilibrium.

  18. A continuous damage random thresholds model for simulating the fracture behavior of nacre.

    PubMed

    Nukala, Phani K V V; Simunovic, Srdan

    2005-10-01

    This study investigates the fracture properties of nacre using a discrete lattice model based on continuous damage random threshold fuse network. The discrete lattice topology of the model is based on nacre's unique brick and mortar microarchitecture. The mechanical behavior of each of the bonds in the discrete lattice model is governed by the characteristic modular damage evolution of the organic matrix and the mineral bridges between the aragonite platelets. The numerical results obtained using this simple discrete lattice model are in very good agreement with the previously obtained experimental results, such as nacre's stiffness, tensile strength, and work of fracture. The analysis indicates that nacre's superior toughness is a direct consequence of ductility (maximum shear strain) of the organic matrix in terms of repeated unfolding of protein molecules, and its fracture strength is a result of its ordered brick and mortar architecture with significant overlap of the platelets, and shear strength of the organic matrix.

  19. Evaluating intercepts from demographic models to understand resource limitation and resource thresholds

    USGS Publications Warehouse

    Reynolds-Hogland, M. J.; Hogland, J.S.; Mitchell, M.S.

    2008-01-01

    Understanding resource limitation is critical to effective management and conservation of wild populations; however, it is difficult to quantify, partly because resource limitation is a dynamic process. Specifically, a resource that is limiting at one time may become non-limiting at another time, depending upon changes in its availability and changes in the availability of other resources. Methods for understanding resource limitation, therefore, must consider the dynamic effects of resources on demography. We present approaches for interpreting results of demographic modeling beyond analyzing model rankings, model weights, slope estimates, and model averaging. We demonstrate how interpretation of y-intercepts, odds ratios, and rates of change can yield insights into resource limitation as a dynamic process, assuming logistic regression is used to link estimates of resources with estimates of demography. In addition, we show how x-intercepts can be evaluated with respect to odds ratios to understand resource thresholds. © 2007 Elsevier B.V. All rights reserved.
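
    To make the x-intercept idea concrete, the sketch below (Python, synthetic data; the simulated resource-survival relationship and all parameter values are hypothetical) fits a logistic regression of a binary demographic outcome on a resource covariate and reads off the odds ratio and the resource level at which the odds equal 1.

        # Hypothetical sketch: reading a resource "threshold" off a fitted logistic
        # regression that links a resource covariate to a demographic rate (e.g. survival).
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        resource = rng.uniform(0, 10, 500)                    # hypothetical resource availability
        p_true = 1 / (1 + np.exp(-(-3.0 + 0.8 * resource)))   # assumed underlying relationship
        survived = rng.binomial(1, p_true)

        model = LogisticRegression().fit(resource.reshape(-1, 1), survived)
        b0, b1 = model.intercept_[0], model.coef_[0, 0]

        odds_ratio = np.exp(b1)       # multiplicative change in the odds per unit of resource
        x_intercept = -b0 / b1        # resource level at which the odds equal 1 (p = 0.5)
        print(f"odds ratio per unit resource: {odds_ratio:.2f}")
        print(f"resource level where odds = 1: {x_intercept:.2f}")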

  20. On the underlying assumptions of threshold Boolean networks as a model for genetic regulatory network behavior

    PubMed Central

    Tran, Van; McCall, Matthew N.; McMurray, Helene R.; Almudevar, Anthony

    2013-01-01

    Boolean networks (BoN) are relatively simple and interpretable models of gene regulatory networks. Specifying these models with fewer parameters while retaining their ability to describe complex regulatory relationships is an ongoing methodological challenge. Additionally, extending these models to incorporate variable gene decay rates, asynchronous gene response, and synergistic regulation while maintaining their Markovian nature increases the applicability of these models to genetic regulatory networks (GRN). We explore a previously-proposed class of BoNs characterized by linear threshold functions, which we refer to as threshold Boolean networks (TBN). Compared to traditional BoNs with unconstrained transition functions, these models require far fewer parameters and offer a more direct interpretation. However, the functional form of a TBN does result in a reduction in the regulatory relationships which can be modeled. We show that TBNs can be readily extended to permit self-degradation, with explicitly modeled degradation rates. We note that the introduction of variable degradation compromises the Markovian property fundamental to BoN models but show that a simple state augmentation procedure restores their Markovian nature. Next, we study the effect of assumptions regarding self-degradation on the set of possible steady states. Our findings are captured in two theorems relating self-degradation and regulatory feedback to the steady state behavior of a TBN. Finally, we explore assumptions of synchronous gene response and asynergistic regulation and show that TBNs can be easily extended to relax these assumptions. Applying our methods to the budding yeast cell-cycle network revealed that although the network is complex, its steady state is simplified by the presence of self-degradation and lack of purely positive regulatory cycles. PMID:24376454
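
    A minimal sketch of the linear-threshold update that defines a TBN, under the common convention that a node switches on when its weighted input exceeds its threshold and holds its previous state on a tie (the weight matrix and thresholds below are illustrative, not taken from the paper):

        # Threshold Boolean network update: x_i(t+1) depends on sum_j w[i, j] * x_j(t).
        import numpy as np

        W = np.array([[ 0,  1,  0],    # w[i, j]: influence of gene j on gene i
                      [-1,  0,  1],
                      [ 1, -1,  0]])
        theta = np.zeros(3)            # per-node thresholds

        def step(x, W, theta):
            s = W @ x
            return np.where(s > theta, 1, np.where(s < theta, 0, x))  # hold state on ties

        x = np.array([1, 0, 0])
        for _ in range(5):             # iterate the synchronous dynamics for a few steps
            x = step(x, W, theta)
            print(x)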

  1. Modeling the residual effects and threshold saturation of training: a case study of Olympic swimmers.

    PubMed

    Hellard, Philippe; Avalos, Marta; Millet, Gregoire; Lacoste, Lucien; Barale, Frederic; Chatard, Jean-Claude

    2005-02-01

    The aim of this study was to model the residual effects of training on the swimming performance and to compare a model that includes threshold saturation (MM) with the Banister model (BM). Seven Olympic swimmers were studied over a period of 4 +/- 2 years. For 3 training loads (low-intensity w(LIT), high-intensity w(HIT), and strength training w(ST)), 3 residual training effects were determined: short-term (STE) during the taper phase (i.e., 3 weeks before the performance [weeks 0, 1, and 2]), intermediate-term (ITE) during the intensity phase (weeks 3, 4, and 5), and long-term (LTE) during the volume phase (weeks 6, 7, and 8). ITE and LTE were positive for w(HIT) and w(LIT), respectively (p < 0.05). Low-intensity training load during taper was related to performances by a parabolic relationship (p < 0.05). Different quality measures indicated that MM compares favorably with BM. Identifying individual training thresholds may help individualize the distribution of training loads.
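
    For context, the Banister impulse-response model referred to above predicts performance from daily training loads convolved with fitness and fatigue kernels; in a common discrete form,

        \hat{p}(t) = p_0 + k_1 \sum_{s < t} w(s)\, e^{-(t-s)/\tau_1} \;-\; k_2 \sum_{s < t} w(s)\, e^{-(t-s)/\tau_2},

    where w(s) is the training load on day s, k_1 and k_2 are fitness and fatigue gains, and \tau_1 and \tau_2 are their time constants. The MM variant compared above adds threshold saturation to this dose-response, which is what makes individual training thresholds identifiable.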

  2. Modeling of surface thermodynamics and damage thresholds in the IR and THz regime

    NASA Astrophysics Data System (ADS)

    Clark, C. D., III; Thomas, Robert J.; Maseberg, Paul D. S.; Buffington, Gavin D.; Irvin, Lance J.; Stolarski, Jacob; Rockwell, Benjamin A.

    2007-02-01

    The Air Force Research Lab has developed a configurable, two-dimensional, thermal model to predict laser-tissue interactions, and to aid in predictive studies for safe exposure limits. The model employs a finite-difference, time-dependent method to solve the two-dimensional cylindrical heat equation (radial and axial) in a biological system construct. Tissues are represented as multi-layer structures, with optical and thermal properties defined for each layer and assumed homogeneous throughout the layer. Multiple methods for computing the source term for the heat equation have been implemented, including simple linear absorption definitions and full beam propagation through finite-difference methods. The model predicts the occurrence of thermal damage sustained by the tissue, and can also determine damage thresholds for total optical power delivered to the tissue. Currently, the surface boundary conditions incorporate energy loss through free convection, surface radiation, and evaporative cooling. Implementing these boundary conditions is critical for correctly calculating the surface temperature of the tissue, and, therefore, damage thresholds. We present an analysis of the interplay between surface boundary conditions, ambient conditions, and blood perfusion within tissues.
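
    The governing equation being discretized is, in a generic axisymmetric bioheat form (the exact source and perfusion terms in this particular model may differ),

        \rho c\,\frac{\partial T}{\partial t} = \frac{1}{r}\frac{\partial}{\partial r}\!\left( k\, r\, \frac{\partial T}{\partial r} \right) + \frac{\partial}{\partial z}\!\left( k\, \frac{\partial T}{\partial z} \right) + S(r,z,t) + \rho_b c_b \omega_b \left(T_a - T\right),

    where S is the absorbed laser power density, the last term is Pennes-type blood perfusion, and the surface boundary condition balances conduction against the convective, radiative and evaporative losses mentioned above.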

  3. Load redistribution rules for progressive failure in shallow landslides: Threshold mechanical models

    NASA Astrophysics Data System (ADS)

    Fan, Linfeng; Lehmann, Peter; Or, Dani

    2017-01-01

    Rainfall-induced landslides are often preceded by progressive failures that culminate in abrupt mass release. Local failure progression is captured by a landslide hydro-mechanical triggering model that represents the soil mantle as interacting columns linked by tensile and compressive mechanical "bonds." Mechanical bonds may fail at a prescribed threshold leaving a modeling challenge of how to redistribute their load to neighboring intact soil columns. We employed an elastic spring-block model to analytically derive redistribution rules defined by the stiffness ratio of compressive to tensile bonds. These linear-elastic rules were generalized to real soil using measurable Young's modulus and Poisson's ratio. Results indicate that "local" failure characteristics of ductile-like soils (e.g., clay) are reproduced by low stiffness ratios, whereas "global" failure of brittle sandy soils corresponds to large stiffness ratios. Systematic analyses yield guidelines for selecting load redistribution rules for failure of geological materials and mass-movement phenomena represented by discrete threshold-mechanics.

  4. Modeling the residual effects and threshold saturation of training: a case study of Olympic swimmers

    PubMed Central

    Hellard, Philippe; Avalos, Marta; Millet, Grégoire; Lacoste, Lucien; Barale, Frédéric; Chatard, Jean-Claude

    2005-01-01

    The aim of this study was to model the residual effects of training on swimming performance and to compare a model including threshold saturation (MM) to the Banister model (BM). Seven Olympic swimmers were studied over a period of 4 ± 2 years. For three training loads (low-intensity wLIT, high-intensity wHIT and strength training wST), three residual training effects were determined: short-term (STE) during the taper phase, i.e. three weeks before the performance (weeks 0, −1, −2), intermediate-term (ITE) during the intensity phase (weeks −3, −4 and −5) and long-term (LTE) during the volume phase (weeks −6, −7, −8). ITE and LTE were positive for wHIT and wLIT, respectively (P < 0.05). wLIT during taper was related to performances by a parabolic relationship (P < 0.05). Different quality measures indicated that MM compares favorably with BM. Identifying individual training thresholds may help individualize the distribution of training loads. PMID:15705048

  5. Effect of microgravity on visual contrast threshold during STS Shuttle missions: Visual Function Tester-Model 2 (VFT-2)

    NASA Technical Reports Server (NTRS)

    Oneal, Melvin R.; Task, H. Lee; Genco, Louis V.

    1992-01-01

    Viewgraphs on the effect of microgravity on visual contrast threshold during STS Shuttle missions are presented. The purpose, methods, and results are discussed. The Visual Function Tester-Model 2 (VFT-2) is used.

  6. Threshold conditions for integrated pest management models with pesticides that have residual effects.

    PubMed

    Tang, Sanyi; Liang, Juhua; Tan, Yuanshun; Cheke, Robert A

    2013-01-01

    Impulsive differential equations (hybrid dynamical systems) can provide a natural description of pulse-like actions such as when a pesticide kills a pest instantly. However, pesticides may have long-term residual effects, with some remaining active against pests for several weeks, months or years. Therefore, a more realistic method for modelling chemical control in such cases is to use continuous or piecewise-continuous periodic functions which affect growth rates. How to evaluate the effects of the duration of the pesticide residual effectiveness on successful pest control is key to the implementation of integrated pest management (IPM) in practice. To address these questions in detail, we have modelled IPM including residual effects of pesticides in terms of fixed pulse-type actions. The stability threshold conditions for pest eradication are given. Moreover, effects of the killing efficiency rate and the decay rate of the pesticide on the pest and on its natural enemies, the duration of residual effectiveness, the number of pesticide applications and the number of natural enemy releases on the threshold conditions are investigated with regard to the extent of depression or resurgence resulting from pulses of pesticide applications and predator releases. Latin Hypercube Sampling/Partial Rank Correlation uncertainty and sensitivity analysis techniques are employed to investigate the key control parameters which are most significantly related to threshold values. The findings combined with Volterra's principle confirm that when the pesticide has a strong effect on the natural enemies, repeated use of the same pesticide can result in target pest resurgence. The results also indicate that there exists an optimal number of pesticide applications which can suppress the pest most effectively, and this may help in the design of an optimal control strategy.
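
    A deliberately simplified sketch of the residual-effect idea, not the authors' impulsive model: pest growth with a kill rate that decays exponentially after each periodic spray, so the duration of residual effectiveness is set by the decay rate. All parameter values are hypothetical.

        # Pest density under a pesticide whose kill rate decays after each application.
        import numpy as np
        from scipy.integrate import solve_ivp

        r, K = 0.1, 100.0        # pest growth rate and carrying capacity (hypothetical)
        m0, delta = 0.4, 0.05    # initial kill rate and residue decay rate (hypothetical)
        T = 30.0                 # spraying period in days

        def rhs(t, y):
            x = y[0]
            t_since_spray = t % T                          # time since the last pulse
            kill = m0 * np.exp(-delta * t_since_spray)     # decaying residual effect
            return [r * x * (1 - x / K) - kill * x]

        sol = solve_ivp(rhs, (0, 180), [20.0], max_step=0.1)
        print("pest density after 180 days:", round(sol.y[0, -1], 2))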

  7. Genetic analysis of the temperament of Nellore cattle using linear and threshold models.

    PubMed

    Lucena, C R S; Neves, H H R; Carvalheiro, R; Oliveira, J A; Queiroz, S A

    2015-03-01

    Temperament is an important trait for the management and welfare of animals and for reducing accidents involving people who work with cattle. The present study aimed to estimate the genetic parameters related to the temperament score (T) and weaning weight (WW) of Nellore cattle, reared in a beef cattle breeding program in Brazil. Data were analyzed using two different two-trait statistical models, both considering WW and T: (1) a linear-linear model in which variance components (VCs) were estimated using restricted maximum likelihood; and (2) a linear-threshold model in which VCs were estimated via Bayesian inference. WW was included in the analyses of T to minimize any possible effects of sequential selection and to allow for estimation of the genetic correlation between these two traits. The heritability estimates for T were 0.21 ± 0.003 (model 1) and 0.26 (model 2, with a 95% credibility interval (95% CI) of 0.21 to 0.32). The estimated genetic correlations between WW and T were of a moderate magnitude: -0.33 ± 0.01 (model 1) and -0.34 (95% CI: -0.40 to -0.28; model 2). The genetic correlations between the estimated breeding values (EBVs) obtained for the animals based on the two models were high (>0.92). The use of different models had little influence on the classification of animals based on EBVs or the accuracy of the EBVs.
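
    When heritabilities of a binary or categorical trait from a threshold (liability) model and a linear model are set side by side, as in this study and the hoof-lesion study above, the standard liability-to-observed-scale conversion (Dempster and Lerner) is

        h^2_{\mathrm{obs}} = h^2_{\mathrm{liab}} \cdot \frac{z^2}{p\,(1-p)},

    where p is the incidence of the affected category and z is the height of the standard normal density at the corresponding threshold; this is quoted here as the usual conversion, not necessarily the one applied by the authors.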

  8. Response style analysis with threshold and multi-process IRT models: A review and tutorial.

    PubMed

    Böckenholt, Ulf; Meiser, Thorsten

    2017-02-01

    Two different item response theory model frameworks have been proposed for the assessment and control of response styles in rating data. According to one framework, response styles can be assessed by analysing threshold parameters in Rasch models for ordinal data and in mixture-distribution extensions of such models. A different framework is provided by multi-process item response tree models, which can be used to disentangle response processes that are related to the substantive traits and response tendencies elicited by the response scale. In this tutorial, the two approaches are reviewed, illustrated with an empirical data set of the two-dimensional 'Personal Need for Structure' construct, and compared in terms of multiple criteria. Mplus is used as a software framework for (mixed) polytomous Rasch models and item response tree models as well as for demonstrating how parsimonious model variants can be specified to test assumptions on the structure of response styles and attitude strength. Although both frameworks are shown to account for response styles, they differ on the quantitative criteria of model selection, practical aspects of model estimation, and conceptual issues of representing response styles as continuous and multidimensional sources of individual differences in psychological assessment. © 2017 The British Psychological Society.

  9. Experimental confirmation of the polygyny threshold model for red-winged blackbirds.

    PubMed Central

    Pribil, S.; Searcy, W. A.

    2001-01-01

    The polygyny threshold model assumes that polygynous mating is costly to females and proposes that females pay the cost of polygyny only when compensated by obtaining a superior territory or male. We present, to the authors' knowledge, the first experimental field test to demonstrate that females trade mating status against territory quality as proposed by this hypothesis. Previous work has shown that female red-winged blackbirds (Agelaius phoeniceus) in Ontario prefer settling with unmated males and that this preference is adaptive because polygynous mating status lowers female reproductive success. Other evidence suggests that nesting over water increases the reproductive success of female red-winged blackbirds. Here we describe an experiment in which females were given choices between two adjacent territories, one owned by an unmated male without any over-water nesting sites and the other by an already-mated male with over-water sites. Females overwhelmingly preferred the already-mated males, demonstrating that superior territory quality can reverse preferences based on mating status and supporting the polygyny threshold model as the explanation for polygyny in this population. PMID:11487413

  10. Linear-no-threshold is a radiation-protection standard rather than a mechanistic effect model.

    PubMed

    Breckow, Joachim

    2006-03-01

    The linear-no-threshold (LNT) controversy covers much more than the mere question of whether or not "the LNT hypothesis is valid". It is shown that one cannot expect to find only one, or even a single definitive, dose-effect relationship. Each element within the biological reaction chain that is affected by ionizing radiation contributes in a specific way to the final biological endpoint of interest, and the resulting dose-response relationship represents the superposition of all these effects. To date, there is neither a closed and clear picture of the entirety of radiation action for doses below some 10 mSv, nor clear epidemiological evidence of an increased risk of stochastic effects in this dose range. On the other hand, radiation protection demands quantitative risk estimates as well as practicable dose concepts. In this respect, the LNT concept is preferred over any alternative concept. However, the LNT concept does not necessarily mean that the mechanism of cancer induction is intrinsically linear; it could hold even if the underlying multi-step mechanisms act in a non-linear way, in which case it would express a certain "attenuation" of non-linearities. Favouring LNT over threshold, hyper-linear, or sub-linear models for radiation-protection purposes on the one hand, while preferring one of these models (e.g. for a specific effect) on biological grounds for scientific purposes on the other, is not a contradiction.

  11. History-Based Response Threshold Model for Division of Labor in Multi-Agent Systems

    PubMed Central

    Lee, Wonki; Kim, DaeEun

    2017-01-01

    Dynamic task allocation is a necessity in a group of robots. Each member should decide its own task such that it is most commensurate with its current state in the overall system. In this work, the response threshold model is applied to a dynamic foraging task. Each robot employs a task switching function based on the local task demand obtained from the surrounding environment, and no communication occurs between the robots. Each individual member has a constant-sized task demand history that reflects the global demand. In addition, it has response threshold values for all of the tasks and manages the task switching process depending on the stimuli of the task demands. The robot then determines the task to be executed to regulate the overall division of labor. This task selection induces a specialized tendency for performing a specific task and regulates the division of labor. In particular, maintaining a history of the task demands is very effective for the dynamic foraging task. Various experiments are performed using a simulation with multiple robots, and the results show that the proposed algorithm is more effective as compared to the conventional model. PMID:28555031
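
    A sketch of a response-threshold task-selection rule of the kind described above, assuming the classical sigmoid response function of swarm models; the history length, thresholds and demand values are illustrative only.

        # Each agent keeps a bounded history of local task demands and switches tasks
        # stochastically according to a response-threshold function.
        import random
        from collections import deque

        def response_probability(stimulus, threshold, n=2):
            return stimulus**n / (stimulus**n + threshold**n)

        thresholds = {"forage": 0.4, "rest": 0.6}                   # per-task thresholds
        history = {task: deque(maxlen=10) for task in thresholds}   # constant-sized demand history

        def choose_task(local_demands):
            for task, demand in local_demands.items():
                history[task].append(demand)
            weights = {task: response_probability(sum(h) / len(h), thresholds[task])
                       for task, h in history.items()}
            tasks, w = zip(*weights.items())
            return random.choices(tasks, weights=w, k=1)[0]

        print(choose_task({"forage": 0.8, "rest": 0.2}))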

  12. Influence of priors in Bayesian estimation of genetic parameters for multivariate threshold models using Gibbs sampling

    PubMed Central

    Stock, Kathrin Friederike; Distl, Ottmar; Hoeschele, Ina

    2007-01-01

    Simulated data were used to investigate the influence of the choice of priors on estimation of genetic parameters in multivariate threshold models using Gibbs sampling. We simulated additive values, residuals and fixed effects for one continuous trait and liabilities of four binary traits, and QTL effects for one of the liabilities. Within each of four replicates six different datasets were generated which resembled different practical scenarios in horses with respect to number and distribution of animals with trait records and availability of QTL information. (Co)Variance components were estimated using a Bayesian threshold animal model via Gibbs sampling. The Gibbs sampler was implemented with both a flat and a proper prior for the genetic covariance matrix. Convergence problems were encountered in > 50% of flat prior analyses, with indications of potential or near posterior impropriety between about round 10 000 and 100 000. Terminations due to non-positive definite genetic covariance matrix occurred in flat prior analyses of the smallest datasets. Use of a proper prior resulted in improved mixing and convergence of the Gibbs chain. In order to avoid (near) impropriety of posteriors and extremely poorly mixing Gibbs chains, a proper prior should be used for the genetic covariance matrix when implementing the Gibbs sampler. PMID:17306197

  13. Threshold fluctuations in an N sodium channel model of the node of Ranvier.

    PubMed Central

    Rubinstein, J T

    1995-01-01

    Computer simulations of stochastic single-channel open-close kinetics are applied to an N sodium channel model of a node of Ranvier. Up to 32,000 voltage-gated sodium channels have been simulated with modified amphibian sodium channel kinetics. Poststimulus time histograms are obtained with 1000 monophasic pulse stimuli, and measurements are made of changes in the relative spread of threshold (RS) with changes in the model parameters. RS is found to be invariant with pulse durations from 100 microseconds to 3 ms. RS is approximately inversely proportional to the square root of N. It decreases with increasing temperature and is dependent on passive electrical properties of the membrane as well as the single-channel conductance. The simulated RS and its independence of pulse duration are consistent with experimental results from the literature. Thus, the microscopic fluctuations of single, voltage-sensitive sodium channels in the amphibian peripheral node of Ranvier are sufficient to account for the macroscopic fluctuations of threshold to electrical stimulation. PMID:7756544
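
    Relative spread is conventionally defined (after Verveen) as the standard deviation of the stimulus-threshold distribution divided by its mean,

        RS = \frac{\sigma_\theta}{\bar{\theta}},

    so the scaling reported above corresponds to RS \propto 1/\sqrt{N}: threshold fluctuations shrink with the square root of the number of sodium channels at the node.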

  14. Wireless peripheral nerve stimulation increases pain threshold in two neuropathic rat models.

    PubMed

    Rosellini, Will; Casavant, Reema; Engineer, Navzer; Beall, Patrick; Pierce, David; Jain, Ravi; Dougherty, Patrick M

    2012-06-01

    Neurostimulation approaches including spinal cord and peripheral nerve stimulation are typically used to treat intractable chronic pain in individuals who are refractory to pain medications. Our earlier studies have shown that a voltage-controlled capacitive discharge (VCCD) method of nerve stimulation is able to selectively recruit activity in large myelinated nerve fibers. In this study, we were able to wirelessly activate the sciatic nerve using the VCCD waveform. The purpose of this study was to determine whether this waveform can effectively improve two of the most troublesome pain symptoms experienced by patients with chronic neuropathic pain: mechanical and cold hyperalgesia. Neuropathic mechanical hyperalgesia was reproduced using the Spinal Nerve Ligation (SNL) rat model whereas cold allodynia was reproduced using the Chronic Constriction Injury (CCI) model in male rats. Von Frey and cold plate tests were used to evaluate paw withdrawal threshold and latency to withdrawal before and after stimulation in experimental and control rats. Paw withdrawal threshold increased significantly compared to post-lesion baseline after VCCD stimulation in SNL rats. We also observed a significant improvement in cold allodynia in the active implant CCI rats after stimulation. These results suggest that VCCD stimulation using a wireless microstimulator may be effective in the treatment of neuropathic pain. Copyright © 2012 Elsevier Inc. All rights reserved.

  15. Experimental confirmation of the polygyny threshold model for red-winged blackbirds.

    PubMed

    Pribil, S; Searcy, W A

    2001-08-07

    The polygyny threshold model assumes that polygynous mating is costly to females and proposes that females pay the cost of polygyny only when compensated by obtaining a superior territory or male. We present, to the authors' knowledge, the first experimental field test to demonstrate that females trade mating status against territory quality as proposed by this hypothesis. Previous work has shown that female red-winged blackbirds (Agelaius phoeniceus) in Ontario prefer settling with unmated males and that this preference is adaptive because polygynous mating status lowers female reproductive success. Other evidence suggests that nesting over water increases the reproductive success of female red-winged blackbirds. Here we describe an experiment in which females were given choices between two adjacent territories, one owned by an unmated male without any over-water nesting sites and the other by an already-mated male with over-water sites. Females overwhelmingly preferred the already-mated males, demonstrating that superior territory quality can reverse preferences based on mating status and supporting the polygyny threshold model as the explanation for polygyny in this population.

  16. National evaluation for calving ease, gestation length and birth weight by linear and threshold model methodologies.

    PubMed

    Lee, Deukhwan; Misztal, Ignacy; Bertrand, J Keith; Rekaya, Romdhane

    2002-01-01

    Data included 393,097 calving ease, 129,520 gestation length, and 412,484 birth weight records on 412,484 Gelbvieh cattle. Additionally, pedigrees were available on 72,123 animals. Included in the models were effects of sex and age of dam, treated as fixed, as well as direct, maternal genetic and permanent environmental effects and effects of contemporary group (herd-year-season), treated as random. In all analyses, birth weight and gestation length were treated as continuous traits. Calving ease (CE) was treated either as a continuous trait in a mixed linear model (LM), or as a categorical trait in linear-threshold models (LTM). Solutions in TM obtained by empirical Bayes (TMEB) and Monte Carlo (TMMC) methodologies were compared with those by LM. Due to the computational cost, only 10,000 samples were obtained for TMMC. For calving ease, correlations between LM and TMEB were 0.86 and 0.78 for direct and maternal genetic effects, respectively. The same correlations but between TMEB and TMMC were 1.00 and 0.98, respectively. The correlations between LM and TMMC were 0.85 and 0.75, respectively. The correlations for the linear traits were above 0.97 between LM and TMEB but as low as 0.91 between LM and TMMC, suggesting insufficient convergence of TMMC. Computing time required was about 2 hrs, 5 hrs, and 6 days for LM, TMEB and TMMC, respectively, and memory requirements were 169, 171, and 445 megabytes, respectively. Bayesian implementation of the threshold model is simple, can be extended to multiple categorical traits, and allows easy calculation of accuracies; however, computing time is prohibitively long for large models.

  17. Temperature thresholds and degree-day model for Marmara gulosa (Lepidoptera: Gracillariidae).

    PubMed

    O'Neal, M J; Headrick, D H; Montez, Gregory H; Grafton-Cardwell, E E

    2011-08-01

    The developmental thresholds for Marmara gulosa Guillén & Davis (Lepidoptera: Gracillariidae) were investigated in the laboratory by using 17, 21, 25, 29, and 33 degrees C. The lowest mortality occurred in cohorts exposed to 25 and 29 degrees C. Other temperatures caused >10% mortality primarily in egg and first and second instar sap-feeding larvae. Linear regression analysis approximated the lower developmental threshold at 12.2 degrees C. High mortality and slow developmental rate at 33 degrees C indicate the upper developmental threshold is near this temperature. The degree-day (DD) model indicated that a generation requires an accumulation of 322 DD for development from egg to adult emergence. Average daily temperatures in the San Joaquin Valley could produce up to seven generations of M. gulosa per year. Field studies documented two, five, and three overlapping generations of M. gulosa in walnuts (Juglans regia L.; Juglandaceae), pummelos (Citrus maxima (Burm.) Merr.; Rutaceae), and oranges (Citrus sinensis (L.) Osbeck; Rutaceae), for a total of seven observed peelminer generations. Degree-day units between generations averaged 375 DD for larvae infesting walnut twigs; however, availability of green wood probably affected timing of infestations. Degree-day units between larval generations averaged 322 for pummelos and 309 for oranges, confirming the laboratory estimation. First infestation of citrus occurred in June in pummelo fruit and August in orange fruit when fruit neared 60 mm in diameter. Fruit size and degree-day units could be used as management tools to more precisely time insecticide treatments to target the egg stage and prevent rind damage to citrus. Degree-day units also could be used to more precisely time natural enemy releases to target larval instars that are preferred for oviposition.
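
    The degree-day bookkeeping implied above can be sketched with the simple daily-averaging method; the temperature series below is hypothetical, while the 12.2 °C base and the 322 DD generation requirement come from the abstract.

        # Accumulate degree-days above the lower developmental threshold and count generations.
        LOWER_THRESHOLD = 12.2   # deg C, lower developmental threshold
        GENERATION_DD = 322.0    # degree-days required from egg to adult emergence

        def daily_degree_days(t_min, t_max, base=LOWER_THRESHOLD):
            return max(0.0, (t_min + t_max) / 2.0 - base)

        daily_temps = [(14, 26), (15, 28), (13, 24)] * 60   # hypothetical (Tmin, Tmax) series
        accumulated, generations = 0.0, 0
        for t_min, t_max in daily_temps:
            accumulated += daily_degree_days(t_min, t_max)
            if accumulated >= GENERATION_DD:
                generations += 1
                accumulated -= GENERATION_DD
        print("predicted generations:", generations)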

  18. Future streamflow droughts in glacierized catchments: the impact of dynamic glacier modelling and changing thresholds

    NASA Astrophysics Data System (ADS)

    Van Tiel, Marit; Van Loon, Anne; Wanders, Niko; Vis, Marc; Teuling, Ryan; Stahl, Kerstin

    2017-04-01

    In glacierized catchments, snowpack and glaciers function as an important storage of water, and hydrographs of highly glacierized catchments in mid- and high latitudes thus show a clear seasonality with low flows in winter and high flows in summer. Due to ongoing climate change we expect this type of storage capacity to decrease, with resultant consequences for the discharge regime. In this study we focus on streamflow droughts, here defined as below-average water availability specifically in the high-flow season, and on which methods are most suitable for characterizing future streamflow droughts as regimes change. Two glacierized catchments, Nigardsbreen (Norway) and Wolverine (Alaska), are used as case studies, and streamflow droughts are compared between two periods, 1975-2004 and 2071-2100. Streamflow is simulated with the HBV light model, calibrated on observed discharge and seasonal glacier mass balances, for two climate change scenarios (RCP 4.5 & RCP 8.5). Studies of future streamflow drought have often applied the same variable threshold of the past to the future, but in regions where a regime shift is expected this method yields severe "droughts" in the historic high-flow period. We applied the new alternative transient variable threshold, a threshold that adapts to the changing hydrological regime and is thus better able to cope with this issue, but which has never been thoroughly tested in glacierized catchments. As the glacier area representation in the hydrological modelling can also influence the modelled discharge and the derived streamflow droughts, in this study we evaluated both the difference between the historical variable threshold (HVT) and the transient variable threshold (TVT) and two different glacier area conceptualisations (constant area (C) and dynamical area (D)), resulting in four scenarios: HVT-C, HVT-D, TVT-C and TVT-D. Results show a drastic decrease in the number of droughts in the HVT-C scenario due to increased glacier melt. The deficit

  19. Catastrophic shifts and lethal thresholds in a propagating front model of unstable tumor progression

    NASA Astrophysics Data System (ADS)

    Amor, Daniel R.; Solé, Ricard V.

    2014-08-01

    Unstable dynamics characterizes the evolution of most solid tumors. Because of an increased failure of maintaining genome integrity, a cumulative increase in the levels of gene mutation and loss is observed. Previous work suggests that instability thresholds to cancer progression exist, defining phase transition phenomena separating tumor-winning scenarios from tumor extinction or coexistence phases. Here we present an integral equation approach to the quasispecies dynamics of unstable cancer. The model exhibits two main phases, characterized by either the success or failure of cancer tissue. Moreover, the model predicts that tumor failure can be due to either a reduced selective advantage over healthy cells or excessive instability. We also derive an approximate, analytical solution that predicts the front speed of aggressive tumor populations on the instability space.

  20. Caregiving appraisal and interventions based on the progressively lowered stress threshold model.

    PubMed

    Stolley, Jacqueline M; Reed, David; Buckwalter, K C

    2002-01-01

    The purpose of this article is to describe the impact of a theoretically driven, psychoeducational intervention based on the Progressively Lowered Stress Threshold (PLST) model on caregiving appraisal among community-based caregivers of persons with Alzheimer's disease and related disorders. A total of 241 subjects completed the year-long study in four sites in Iowa, Minnesota, Indiana, and Arizona. Caregiving appraisal was measured using the four factors of the Philadelphia Geriatric Center Caregiving Appraisal Scale: mastery, burden, satisfaction, and impact. Analysis of trends over time showed that the intervention positively affected impact, burden, and satisfaction but had no effect on mastery when measured against the comparison group. The PLST model was influential in increasing positive appraisal and decreasing negative appraisal of the caregiving situation.

  1. Probabilistic transport models for plasma transport in the presence of critical thresholds: Beyond the diffusive paradigm

    NASA Astrophysics Data System (ADS)

    Sánchez, R.; van Milligen, B. Ph.; Carreras, B. A.

    2005-05-01

    It is argued that the modeling of plasma transport in tokamaks may benefit greatly from extending the usual local paradigm to accommodate scale-free transport mechanisms. This can be done by combining Lévy distributions and a nonlinear threshold condition within the continuous time random walk concept. The advantages of this nonlocal, nonlinear extension are illustrated by constructing a simple particle density transport model that, as a result of these ideas, spontaneously exhibits much of nondiffusive phenomenology routinely observed in tokamaks. The fluid limit of the system shows that the kind of equations that are appropriate to capture these dynamics are based on fractional differential operators. In them, effective diffusivities and pinch velocities are found that are dynamically set by the system in response to the specific characteristics of the fueling source and external perturbations. This fact suggests some dramatic consequences for the extrapolation of these transport properties to larger size systems.

  2. Catastrophic shifts and lethal thresholds in a propagating front model of unstable tumor progression.

    PubMed

    Amor, Daniel R; Solé, Ricard V

    2014-08-01

    Unstable dynamics characterizes the evolution of most solid tumors. Because of an increased failure of maintaining genome integrity, a cumulative increase in the levels of gene mutation and loss is observed. Previous work suggests that instability thresholds to cancer progression exist, defining phase transition phenomena separating tumor-winning scenarios from tumor extinction or coexistence phases. Here we present an integral equation approach to the quasispecies dynamics of unstable cancer. The model exhibits two main phases, characterized by either the success or failure of cancer tissue. Moreover, the model predicts that tumor failure can be due to either a reduced selective advantage over healthy cells or excessive instability. We also derive an approximate, analytical solution that predicts the front speed of aggressive tumor populations on the instability space.

  3. Binary threshold networks as a natural null model for biological networks

    NASA Astrophysics Data System (ADS)

    Rybarsch, Matthias; Bornholdt, Stefan

    2012-08-01

    Spin models of neural networks and genetic networks are considered elegant as they are accessible to statistical mechanics tools for spin glasses and magnetic systems. However, the conventional choice of variables in spin systems may cause problems in some models when parameter choices are unrealistic from a biological perspective. Obviously, this may limit the role of a model as a template model for biological systems. Perhaps less obviously, also ensembles of random networks are affected and may exhibit different critical properties. We consider here a prototypical network model that is biologically plausible in its local mechanisms. We study a discrete dynamical network with two characteristic properties: Nodes with binary states 0 and 1, and a modified threshold function with Θ0(0)=0. We explore the critical properties of random networks of such nodes and find a critical connectivity Kc=2.0 with activity vanishing at the critical point. Finally, we observe that the present model allows a more natural implementation of recent models of budding yeast and fission yeast cell-cycle control networks.

  4. The minimal SUSY B - L model: simultaneous Wilson lines and string thresholds

    NASA Astrophysics Data System (ADS)

    Deen, Rehan; Ovrut, Burt A.; Purves, Austin

    2016-07-01

    In previous work, we presented a statistical scan over the soft supersymmetry breaking parameters of the minimal SUSY B - L model. For specificity of calculation, unification of the gauge parameters was enforced by allowing the two Z_3 × Z_3 Wilson lines to have mass scales separated by approximately an order of magnitude. This introduced an additional "left-right" sector below the unification scale. In this paper, for three important reasons, we modify our previous analysis by demanding that the mass scales of the two Wilson lines be simultaneous and equal to an "average unification" mass ⟨M_U⟩. The present analysis is 1) more "natural" than the previous calculations, which were only valid in a very specific region of the Calabi-Yau moduli space, 2) the theory is conceptually simpler in that the left-right sector has been removed and 3) in the present analysis the lack of gauge unification is due to threshold effects — particularly heavy string thresholds, which we calculate statistically in detail. As in our previous work, the theory is renormalization group evolved from ⟨M_U⟩ to the electroweak scale — being subjected, sequentially, to the requirement of radiative B - L and electroweak symmetry breaking, the present experimental lower bounds on the B - L vector boson and sparticle masses, as well as the lightest neutral Higgs mass of ~125 GeV. The subspace of soft supersymmetry breaking masses that satisfies all such constraints is presented and shown to be substantial.

  5. Numerical modeling of rainfall thresholds for shallow landsliding in the Seattle, Washington, area

    USGS Publications Warehouse

    Godt, Jonathan W.; McKenna, Jonathan P.

    2008-01-01

    The temporal forecasting of landslide hazard has typically relied on empirical relations between rainfall characteristics and landslide occurrence to identify conditions that may cause shallow landslides. Here, we describe an alternate, deterministic approach to define rainfall thresholds for landslide occurrence in the Seattle, Washington, area. This approach combines an infinite slope-stability model with a variably saturated flow model to determine the rainfall intensity and duration that leads to shallow failure of hillside colluvium. We examine the influence of variation in particle-size distribution on the unsaturated hydraulic properties of the colluvium by performing capillary-rise tests on glacial outwash sand and three experimental soils with increasing amounts of fine-grained material. Observations of pore-water response to rainfall collected as part of a program to monitor the near-surface hydrology of steep coastal bluffs along Puget Sound were used to test the numerical model results and in an inverse modeling procedure to determine the in situ hydraulic properties. Modeling results are given in terms of a destabilizing rainfall intensity and duration, and comparisons with empirical observations of landslide occurrence and triggering rainfall indicate that the modeling approach may be useful for forecasting landslide occurrence.
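
    The slope-stability half of such a coupled model is typically an infinite-slope factor of safety evaluated at depth Z, in a form similar to

        FS(Z,t) = \frac{\tan\phi'}{\tan\beta} + \frac{c' - \psi(Z,t)\,\gamma_w \tan\phi'}{\gamma_s\, Z\, \sin\beta\, \cos\beta},

    where \beta is the slope angle, \phi' and c' are the soil friction angle and cohesion, \gamma_s and \gamma_w are the unit weights of soil and water, and \psi(Z,t) is the pressure head supplied by the variably saturated flow model. Failure is predicted when infiltration raises \psi enough that FS drops below 1, and the rainfall intensity and duration needed to reach that state define the threshold.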

  6. Thresholds in Atmosphere-Soil Moisture Interactions: Results from Climate Model Studies

    NASA Technical Reports Server (NTRS)

    Oglesby, Robert J.; Marshall, Susan; Erickson, David J., III; Roads, John O.; Robertson, Franklin R.; Arnold, James E. (Technical Monitor)

    2001-01-01

    The potential predictability of the effects of warm season soil moisture anomalies over the central U.S. has been investigated using a series of GCM (Global Climate Model) experiments with the NCAR (National Center for Atmospheric Research) CCM3 (Community Climate Model version 3)/LSM (Land Surface Model). Three different types of experiments have been made, all starting in either March (representing precursor conditions) or June (conditions at the onset of the warm season): (1) 'anomaly' runs with large, exaggerated initial soil moisture reductions, aimed at evaluating the physical mechanisms by which soil moisture can affect the atmosphere; (2) 'predictability' runs aimed at evaluating whether typical soil moisture initial anomalies (indicative of year-to-year variability) can have a significant effect, and if so, for how long; (3) 'threshold' runs aimed at evaluating if a soil moisture anomaly must be of a specific size (i.e., a threshold crossed) before a significant impact on the atmosphere is seen. The 'anomaly' runs show a large, long-lasting response in soil moisture and also quantities such as surface temperature, sea level pressure, and precipitation; effects persist for at least a year. The 'predictability' runs, on the other hand, show very little impact of the initial soil moisture anomalies on the subsequent evolution of soil moisture and other atmospheric parameters; internal variability is most important, with the initial state of the atmosphere (representing remote effects such as SST anomalies) playing a more minor role. The 'threshold' runs, devised to help resolve the dichotomy in 'anomaly' and 'predictability' results, suggest that, at least in CCM3/LSM, the vertical profile of soil moisture is the most important factor, and that deep soil zone anomalies exert a more powerful, long-lasting effect than do anomalies in the near surface soil zone. We therefore suggest that soil moisture feedbacks may be more important in explaining prolonged

  8. In-hospital mortality after traumatic brain injury surgery: a nationwide population-based comparison of mortality predictors used in artificial neural network and logistic regression models.

    PubMed

    Shi, Hon-Yi; Hwang, Shiuh-Lin; Lee, King-Teh; Lin, Chih-Lung

    2013-04-01

    Most reports compare artificial neural network (ANN) models and logistic regression models in only a single data set, and the essential issue of internal validity (reproducibility) of the models has not been adequately addressed. This study proposes to validate the use of the ANN model for predicting in-hospital mortality after traumatic brain injury (TBI) surgery and to compare the predictive accuracy of ANN with that of the logistic regression model. The authors of this study retrospectively analyzed 16,956 patients with TBI nationwide who were surgically treated in Taiwan between 1998 and 2009. For each of 1,000 pairs of ANN and logistic regression models, the area under the receiver operating characteristic curve (AUC), Hosmer-Lemeshow statistics, and accuracy rate were calculated and compared using paired t-tests. A global sensitivity analysis was also performed to assess the relative importance of input parameters in the ANN model and to rank the variables in order of importance. The ANN model outperformed the logistic regression model in terms of accuracy in 95.15% of cases, in terms of Hosmer-Lemeshow statistics in 43.68% of cases, and in terms of the AUC in 89.14% of cases. The global sensitivity analysis of in-hospital mortality also showed that the most influential (sensitive) parameters in the ANN model were surgeon volume followed by hospital volume, Charlson comorbidity index score, length of stay, sex, and age. This work supports the continued use of ANNs for predictive modeling of neurosurgery outcomes. However, further studies are needed to confirm the clinical efficacy of the proposed model.
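
    A hedged sketch of this kind of ANN-versus-logistic-regression comparison on synthetic data (not the Taiwanese TBI registry); model settings are illustrative.

        # Compare discrimination (AUC) of a logistic regression and a small neural network.
        from sklearn.datasets import make_classification
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import MLPClassifier

        X, y = make_classification(n_samples=5000, n_features=8, weights=[0.95], random_state=0)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

        lr = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
        ann = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X_tr, y_tr)

        for name, model in [("logistic regression", lr), ("ANN", ann)]:
            auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
            print(f"{name}: AUC = {auc:.3f}")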

  9. Model-based detection of synthetic bat echolocation calls using an energy threshold detector for initialization.

    PubMed

    Skowronski, Mark D; Fenton, M Brock

    2008-05-01

    Detection of echolocation calls is fundamental to quantitative analysis of bat acoustic signals. Automated methods of detection reduce the subjectivity of hand labeling of calls and speed up the detection process in an accurate and repeatable manner. A model-based detector was initialized using a baseline energy threshold detector, removing the need for hand labels to train the model, and shown to be superior to the baseline detector using synthetic calls in two experiments: (1) an artificial environment and (2) a field playback setting. Synthetic calls using a piecewise exponential frequency modulation function from five hypothetical species were employed to control the signal-to-noise ratio (SNR) in each experiment and to provide an absolute ground truth to judge detector performance. The model-based detector outperformed the baseline detector by 2.5 dB SNR in the artificial environment and 1.5 dB SNR in the field playback setting. Atmospheric absorption was measured for the synthetic calls, and the 1.5 dB improvement increased the effective detection radius by between 1 and 7 m, depending on the species. The results demonstrate that hand labels are not necessary for training detection models and that model-based detectors significantly increase the range of detection for a recording system.
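
    The baseline energy-threshold detection step mentioned above can be sketched as flagging short frames whose energy exceeds a noise-floor estimate by a fixed margin; the frame length, margin and synthetic signal here are illustrative choices, not the paper's settings.

        # Flag frames whose energy rises a fixed number of dB above the median (noise floor).
        import numpy as np

        def energy_threshold_detect(signal, fs, frame_ms=1.0, margin_db=12.0):
            frame = max(1, int(fs * frame_ms / 1000))
            n = len(signal) // frame
            frames = signal[:n * frame].reshape(n, frame)
            energy_db = 10 * np.log10(np.mean(frames**2, axis=1) + 1e-12)
            noise_floor = np.median(energy_db)                     # robust noise estimate
            hits = np.flatnonzero(energy_db > noise_floor + margin_db)
            return hits * frame / fs                               # onset times in seconds

        fs = 250_000                                               # hypothetical sampling rate (Hz)
        t = np.arange(0, 0.2, 1 / fs)
        noise = np.random.default_rng(0).normal(0, 0.01, t.size)
        call = 0.2 * np.sin(2 * np.pi * 40_000 * t) * ((t > 0.100) & (t < 0.105))
        print(energy_threshold_detect(noise + call, fs))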

  10. Mouse models of cystathionine β-synthase deficiency reveal significant threshold effects of hyperhomocysteinemia

    PubMed Central

    Gupta, Sapna; Kühnisch, Jirko; Mustafa, Aladdin; Lhotak, Sarka; Schlachterman, Alexander; Slifker, Michael J.; Klein-Szanto, Andres; High, Katherine A.; Austin, Richard C.; Kruger, Warren D.

    2009-01-01

    Untreated cystathionine β-synthase (CBS) deficiency in humans is characterized by extremely elevated plasma total homocysteine (tHcy>200 μM), with thrombosis as the major cause of morbidity. Treatment with vitamins and diet leads to a dramatic reduction in thrombotic events, even though patients often still have severe elevations in tHcy (>80 μM). To understand the difference between extreme and severe hyperhomocysteinemia, we have examined two mouse models of CBS deficiency: Tg-hCBS Cbs−/− mice, with a mean serum tHcy of 169 μM, and Tg-I278T Cbs−/− mice, with a mean tHcy of 296 μM. Only Tg-I278T Cbs−/− animals exhibited strong biological phenotypes, including facial alopecia, osteoporosis, endoplasmic reticulum (ER) stress in the liver and kidney, and a 20% reduction in mean survival time. Metabolic profiling of serum and liver reveals that Tg-I278T Cbs−/− mice have significantly elevated levels of free oxidized homocysteine but not protein-bound homocysteine in serum and elevation of all forms of homocysteine and S-adenosylhomocysteine in the liver compared to Tg-hCBS Cbs−/− mice. RNA profiling of livers indicates that Tg-I278T Cbs−/− and Tg-hCBS Cbs−/− mice have unique gene signatures, with minimal overlap. Our results indicate that there is a clear pathogenic threshold effect for tHcy and bring into question the idea that mild elevations in tHcy are directly pathogenic. Gupta, S., Kühnisch, J., Mustafa, A., Lhotak, S., Schlachterman, A., Slifker, M. J., Klein-Szanto, A., High, K. A., Austin, R. C., Kruger, W. D. Mouse models of cystathionine β-synthase deficiency reveal significant threshold effects of hyperhomocysteinemia. PMID:18987302

  11. Effect of resiniferatoxin on the noxious heat threshold temperature in the rat: a novel heat allodynia model sensitive to analgesics

    PubMed Central

    Almási, Róbert; Pethö, Gábor; Bölcskei, Kata; Szolcsányi, János

    2003-01-01

    An increasing-temperature hot plate (ITHP) was introduced to measure the noxious heat threshold (45.3±0.3°C) of unrestrained rats, which was reproducible upon repeated determinations at intervals of 5 or 30 min or 1 day. Morphine, diclofenac and paracetamol caused an elevation of the noxious heat threshold following i.p. pretreatment, the minimum effective doses being 3, 10 and 200 mg kg−1, respectively. Unilateral intraplantar injection of the VR1 receptor agonist resiniferatoxin (RTX, 0.048 nmol) induced a profound drop of heat threshold to the innocuous range with a maximal effect (8–10°C drop) 5 min after RTX administration. This heat allodynia was inhibited by pretreatment with morphine, diclofenac and paracetamol, the minimum effective doses being 1, 1 and 100 mg kg−1 i.p., respectively. The long-term sensory desensitizing effect of RTX was examined by bilateral intraplantar injection (0.048 nmol per paw) which produced, after an initial threshold drop, an elevation (up to 2.9±0.5°C) of heat threshold lasting for 5 days. The VR1 receptor antagonist iodo-resiniferatoxin (I-RTX, 0.05 nmol intraplantarly) inhibited by 51% the heat threshold-lowering effect of intraplantar RTX but not α,β-methylene-ATP (0.3 μmol per paw). I-RTX (0.1 or 1 nmol per paw) failed to alter the heat threshold either acutely (5–60 min) or on the long-term (5 days). The heat threshold of VR1 receptor knockout mice was not different from that of wild-type animals (45.6±0.5 vs 45.2±0.4°C). In conclusion, the RTX-induced drop of heat threshold measured by the ITHP is a novel heat allodynia model exhibiting a high sensitivity to analgesics. PMID:12746222

  12. Comparisons of Transport and Dispersion Model Predictions of the European Tracer Experiment: Area-Based and Population-Based Measures of Effectiveness

    DTIC Science & Technology

    2004-10-01

    [No abstract available. The indexed text consists of front-matter fragments (lists of figures and tables) referencing contours of 3-hour average concentration observations and SCIPUFF (Model 121) predictions for 36 to 75 hours after the release, the ATMES II participants for which IDA obtained predictions, and the top-ranked model and rankings of SCIPUFF and ARAC based on measures of effectiveness (MOE).]

  13. Electrodynamic model of the field effect transistor application for THz/subTHz radiation detection: Subthreshold and above threshold operation

    SciTech Connect

    Dobrovolsky, V.

    2014-10-21

    This work develops an electrodynamic model of field effect transistor (FET) operation for THz/sub-THz radiation detection. It is based on the solution of Maxwell's equations in the gate dielectric, an expression for the channel current that accounts for both drift and diffusion components, and the current continuity equation. For the regimes below and above threshold at strong inversion, the response voltage, responsivity, wave impedance, and power of ohmic loss in the gate and channel have been found, and the electrical noise equivalent power (ENEP) has been estimated. The responsivity below threshold is orders of magnitude higher, and the ENEP orders of magnitude lower, than the corresponding values above threshold. Below threshold, the electromagnetic field in the gate oxide is identical to the field of plane waves in free space. In contrast, at strong inversion the charging of the gate capacitance through the channel resistance determines the electric field in the oxide.

  14. Comparing Population-based Risk-stratification Model Performance Using Demographic, Diagnosis and Medication Data Extracted From Outpatient Electronic Health Records Versus Administrative Claims.

    PubMed

    Kharrazi, Hadi; Chi, Winnie; Chang, Hsien-Yen; Richards, Thomas M; Gallagher, Jason M; Knudson, Susan M; Weiner, Jonathan P

    2017-08-01

    There is an increasing demand for electronic health record (EHR)-based risk stratification and predictive modeling tools at the population level. This trend is partly due to increased value-based payment policies and the increasing availability of EHRs at the provider level. Risk stratification models, however, have been traditionally derived from claims or encounter systems. This study evaluates the challenges and opportunities of using EHR data instead of or in addition to administrative claims for risk stratification. This study used the structured EHR records and administrative claims of 85,581 patients receiving outpatient care at a large integrated provider system. Common data elements for risk stratification (ie, age, sex, diagnosis, and medication) were extracted from outpatient EHR records and administrative claims. The performance of a validated risk-stratification model was assessed using data extracted from claims alone, EHR alone, and claims and EHR combined. EHR-derived metrics overlapped considerably with administrative claims (eg, number of chronic conditions). The accuracy of the model, when using EHR data alone, was acceptable with an area under the curve of ∼0.81 for hospitalization and ∼0.85 for identifying top 1% utilizers using the concurrent model. However, when using EHR data alone, the predictive model explained a lower amount of variation in utilization-based outcomes compared with administrative claims. The results show a promising performance of models predicting cost and hospitalization using outpatient EHR's diagnosis and medication data. More research is needed to evaluate the benefits of other EHR data types (eg, lab values and vital signs) for risk stratification.

  15. Modeling habitat split: landscape and life history traits determine amphibian extinction thresholds.

    PubMed

    Fonseca, Carlos Roberto; Coutinho, Renato M; Azevedo, Franciane; Berbert, Juliana M; Corso, Gilberto; Kraenkel, Roberto A

    2013-01-01

    Habitat split is a major force behind the worldwide decline of amphibian populations, causing community change in richness and species composition. In fragmented landscapes, natural remnants, the terrestrial habitat of the adults, are frequently separated from streams, the aquatic habitat of the larvae. An important question is how this landscape configuration affects population levels and if it can drive species to extinction locally. Here, we put forward the first theoretical model on habitat split which is particularly concerned with how split distance - the distance between the two required habitats - affects population size and persistence in isolated fragments. Our diffusive model shows that habitat split alone is able to generate extinction thresholds. Fragments occurring between the aquatic habitat and a given critical split distance are expected to hold viable populations, while fragments located farther away are expected to be unoccupied. Species with higher reproductive success and higher diffusion rate of post-metamorphic young are expected to have farther critical split distances. Furthermore, the model indicates that negative effects of habitat split are poorly compensated by positive effects of fragment size. The habitat split model improves our understanding about spatially structured populations and has relevant implications for landscape design for conservation. It puts on a firm theoretical basis the relation between habitat split and the decline of amphibian populations.

  16. Modeling Habitat Split: Landscape and Life History Traits Determine Amphibian Extinction Thresholds

    PubMed Central

    Fonseca, Carlos Roberto; Coutinho, Renato M.; Azevedo, Franciane; Berbert, Juliana M.; Corso, Gilberto; Kraenkel, Roberto A.

    2013-01-01

    Habitat split is a major force behind the worldwide decline of amphibian populations, causing community change in richness and species composition. In fragmented landscapes, natural remnants, the terrestrial habitat of the adults, are frequently separated from streams, the aquatic habitat of the larvae. An important question is how this landscape configuration affects population levels and if it can drive species to extinction locally. Here, we put forward the first theoretical model on habitat split which is particularly concerned with how split distance – the distance between the two required habitats – affects population size and persistence in isolated fragments. Our diffusive model shows that habitat split alone is able to generate extinction thresholds. Fragments occurring between the aquatic habitat and a given critical split distance are expected to hold viable populations, while fragments located farther away are expected to be unoccupied. Species with higher reproductive success and higher diffusion rate of post-metamorphic young are expected to have farther critical split distances. Furthermore, the model indicates that negative effects of habitat split are poorly compensated by positive effects of fragment size. The habitat split model improves our understanding about spatially structured populations and has relevant implications for landscape design for conservation. It puts on a firm theoretical basis the relation between habitat split and the decline of amphibian populations. PMID:23818967

  17. History, development, and future of the progressively lowered stress threshold: a conceptual model for dementia care.

    PubMed

    Smith, Marianne; Gerdner, Linda A; Hall, Geri R; Buckwalter, Kathleen C

    2004-10-01

    Behavioral symptoms associated with dementia are a major concern for the person who experiences them and for caregivers who supervise, support, and assist them. The knowledge and skill of formal and informal caregivers affects the quality of care they can provide and their ability to cope with the challenges of caregiving. Nurses are in an excellent position to provide training to empower caregivers with the knowledge and skills necessary to reduce and better manage behaviors. This article reviews advances in geriatric nursing theory, practice, and research based on the Progressively Lowered Stress Threshold (PLST) model that are designed to promote more adaptive and functional behavior in older adults with advancing dementia. For more than 17 years, the model has been used to train caregivers in homes, adult day programs, nursing homes, and acute care hospitals and has served as the theoretical basis for in-home and institutional studies. Care planning principles and key elements of interventions that flow from the model are set forth, and outcomes from numerous research projects using the PLST model are presented.

  18. Evaluating the Number of Stages in Development of Squamous Cell and Adenocarcinomas across Cancer Sites Using Human Population-Based Cancer Modeling

    PubMed Central

    Kravchenko, Julia; Akushevich, Igor; Abernethy, Amy P.; Lyerly, H. Kim

    2012-01-01

    Background Adenocarcinomas (ACs) and squamous cell carcinomas (SCCs) differ by clinical and molecular characteristics. We evaluated the characteristics of carcinogenesis by modeling the age patterns of incidence rates of ACs and SCCs of various organs to test whether these characteristics differed between cancer subtypes. Methodology/Principal Findings Histotype-specific incidence rates of 14 ACs and 12 SCCs from the SEER Registry (1973–2003) were analyzed by fitting several biologically motivated models to observed age patterns. A frailty model with the Weibull baseline was applied to each age pattern to provide the best fit for the majority of cancers. For each cancer, model parameters describing the underlying mechanisms of carcinogenesis including the number of stages occurring during an individual’s life and leading to cancer (m-stages) were estimated. For sensitivity analysis, the age-period-cohort model was incorporated into the carcinogenesis model to test the stability of the estimates. For the majority of studied cancers, the numbers of m-stages were similar within each group (i.e., AC and SCC). When cancers of the same organs were compared (i.e., lung, esophagus, and cervix uteri), the number of m-stages was more strongly associated with the AC/SCC subtype than with the organ: 9.79±0.09, 9.93±0.19 and 8.80±0.10 for lung, esophagus, and cervical ACs, compared to 11.41±0.10, 12.86±0.34 and 12.01±0.51 for SCCs of the respective organs (p<0.05 between subtypes). Most SCCs had more than ten m-stages while ACs had fewer than ten m-stages. The sensitivity analyses of the model parameters demonstrated the stability of the obtained estimates. Conclusions/Significance A model containing parameters capable of representing the number of stages of cancer development occurring during an individual's life was applied to the large population data on incidence of ACs and SCCs. The model revealed that the number of m-stages differed by cancer subtype being more

  19. Pneumococcal meningitis threshold model: a potential tool to assess infectious risk of new or existing inner ear surgical interventions

    PubMed Central

    Wei, Benjamin P.C.; Shepherd, Robert K.; Robins-Browne, Roy M.; Clark, Graeme M.; O'Leary, Stephen J.

    2007-01-01

    Hypothesis A minimal threshold of S. pneumoniae is required to induce meningitis in healthy animals for intraperitoneal (hematogenous), middle ear and inner ear inoculations and this threshold may be altered by recent inner ear surgery. Background There has been an increase in the number of reported cases of cochlear implant-related pneumococcal meningitis since 2002. The pathogenesis of pneumococcal meningitis is complex and not completely understood. The bacteria can reach the central nervous system (CNS) from the upper respiratory tract mucosa via either hematogenous route or via the inner ear. The establishment of a threshold model for all potential routes of infection to the CNS in animals without cochlear implantation is an important first step to help us understand the pathogenesis of the disease in animals with cochlear implantation. Methods 54 otologically normal, adult Hooded Wistar rats (27 receiving cochleostomy and 27 controls) were inoculated with different amounts of bacterial counts via three different routes (intraperitoneal, middle ear and inner ear). Rats were monitored over 5 days for signs of meningitis. Blood, CSF and middle ear swabs were taken for bacterial culture and brains and cochleae were examined for signs of infection. Results The threshold of bacterial counts required to induce meningitis is lowest in rats receiving direct inner ear inoculation compared to both intraperitoneal and middle ear inoculation. There is no change in threshold between the group of rats with cochleostomy and the control (Fisher exact test; p < 0.05). Conclusion A minimal threshold of bacteria is required to induce meningitis in healthy animals and is different for three different routes of infection (intraperitoneal, middle ear and inner ear). Cochleostomy performed 4 weeks prior to the inoculation did not reduce the threshold of bacteria required for meningitis in all three infectious routes. This threshold model will also serve as a valuable tool, assisting

  20. Sedimentary selenium as a causal factor for adverse biological effects: Toxicity thresholds and stream modeling

    SciTech Connect

    Van Derveer, W.; Canton, S.

    1995-12-31

    Selenium (Se) in the aquatic environment exhibits a strong association with particulate organic matter and as a result, measurements of waterborne concentration can be an unreliable predictor of bioaccumulation and adverse effects. Particulate-bound Se, typically measured as sedimentary Se, has been repeatedly implicated as a causal factor for Se bioaccumulation and subsequent potential for reproductive failures in fish and/or birds at sites receiving coal-fired power plant and refinery effluents as well as irrigation drainage. In fact, the premise that adverse biological effects are largely induced by sedimentary Se satisfies all of Hill's criteria for a causal association. Despite these findings, most efforts to control Se continue to focus on waterborne concentrations because sedimentary toxicity thresholds are largely unknown. Sedimentary Se and associated biological effects data from studies of Se-bearing industrial effluent and irrigation drainage were compiled to initiate development of biological effects thresholds. The probability of adverse effects on fish or birds appears to be low up to a sedimentary Se concentration of about 2.8 µg/g dry weight and high at 6.4 µg/g dry weight (10th and 50th percentile of effects data, respectively). In addition, a preliminary regression model was derived for predicting dissolved to sedimentary Se transfer in streams as an interactive function of site-specific sedimentary organic carbon content (R² = 0.870, p < 0.001) based on irrigation drainage studies in Colorado. This dissolved Se interaction with sedimentary organic carbon provides a possible explanation for the variable biological response to waterborne Se: organic-rich sites are predisposed to greater Se bioaccumulation and subsequent biological effects than organic-poor sites.

  1. Decision tree model for predicting long-term outcomes in children with out-of-hospital cardiac arrest: a nationwide, population-based observational study

    PubMed Central

    2014-01-01

    Introduction At hospital arrival, early prognostication for children after out-of-hospital cardiac arrest (OHCA) might help clinicians formulate strategies, particularly in the emergency department. In this study, we aimed to develop a simple and generally applicable bedside tool for predicting outcomes in children after cardiac arrest. Methods We analyzed data of 5,379 children who had undergone OHCA. The data were extracted from a prospectively recorded, nationwide, Utstein-style Japanese database. The primary endpoint was survival with favorable neurological outcome (Cerebral Performance Category (CPC) scale categories 1 and 2) at 1 month after OHCA. We developed a decision tree prediction model by using data from a 2-year period (2008 to 2009, n = 3,693), and the data were validated using external data from 2010 (n = 1,686). Results Recursive partitioning analysis for 11 predictors in the development cohort indicated that the best single predictor for CPC 1 and 2 at 1 month was the prehospital return of spontaneous circulation (ROSC). The next predictor for children with prehospital ROSC was an initial shockable rhythm. For children without prehospital ROSC, the next best predictor was a witnessed arrest. Use of a simple decision tree prediction model permitted stratification into four outcome prediction groups: good (prehospital ROSC and initial shockable rhythm), moderately good (prehospital ROSC and initial nonshockable rhythm), poor (prehospital non-ROSC and witnessed arrest) and very poor (prehospital non-ROSC and unwitnessed arrest). By using this model, we identified patient groups ranging from 0.2% to 66.2% for 1-month CPC 1 and 2 probabilities. The validated decision tree prediction model demonstrated a sensitivity of 69.7% (95% confidence interval (CI) = 58.7% to 78.9%), a specificity of 95.2% (95% CI = 94.1% to 96.2%) and an area under the receiver operating characteristic curve of 0.88 (95% CI = 0.87 to 0.90) for predicting 1-month
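
    A hedged sketch of a shallow decision tree of the kind described: recursive partitioning on a few Utstein-style predictors stratifies cases into outcome groups. The variable names, the simulated outcome probabilities, and the tree depth are illustrative assumptions, not the study's data.

      import numpy as np
      import pandas as pd
      from sklearn.tree import DecisionTreeClassifier, export_text

      rng = np.random.default_rng(0)
      n = 3000
      df = pd.DataFrame({
          "prehospital_rosc": rng.integers(0, 2, n),
          "shockable_rhythm": rng.integers(0, 2, n),
          "witnessed_arrest": rng.integers(0, 2, n),
      })
      # Simulated outcome loosely mimicking the reported group probabilities
      p_good = (0.002
                + 0.05 * df.witnessed_arrest * (1 - df.prehospital_rosc)
                + df.prehospital_rosc * (0.30 + 0.36 * df.shockable_rhythm))
      y = rng.random(n) < p_good

      tree = DecisionTreeClassifier(max_depth=2).fit(df, y)
      print(export_text(tree, feature_names=list(df.columns)))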

  2. Modelling the Force of Infection for Hepatitis A in an Urban Population-Based Survey: A Comparison of Transmission Patterns in Brazilian Macro-Regions

    PubMed Central

    Ximenes, Ricardo Arraes de Alencar; Martelli, Celina Maria Turchi; Amaku, Marcos; Sartori, Ana Marli C.; de Soárez, Patricia Coelho; Novaes, Hillegonda Maria Dutilh; Pereira, Leila Maria Moreira Beltrão; Moreira, Regina Célia; Figueiredo, Gerusa Maria; de Azevedo, Raymundo Soares

    2014-01-01

    Background This study aimed to identify the transmission pattern of hepatitis A (HA) infection based on a primary dataset from the Brazilian National Hepatitis Survey in a pre-vaccination context. The national survey conducted in urban areas disclosed two epidemiological scenarios with low and intermediate HA endemicity. Methods A catalytic model of HA transmission was built based on a national seroprevalence survey (2005 to 2009). The seroprevalence data from 7,062 individuals aged 5–69 years from all the Brazilian macro-regions were included. We built up three models: a fully homogeneous mixing model with a constant contact pattern; a highly assortative model; and a highly assortative model with an additional component accounting for contacts with infected food/water. Curves of prevalence, force of infection (FOI) and the number of new infections with 99% confidence intervals (CIs) were compared between the intermediate (North, Northeast, Midwest and Federal District) and low (South and Southeast) endemicity areas. A contour plot was also constructed. Results The anti-HAV IgG seroprevalence was 68.8% (95% CI, 64.8%–72.5%) and 33.7% (95% CI, 32.4%–35.1%) for the intermediate and low endemicity areas, respectively, according to the field data analysis. The models showed that a higher force of infection was identified in the 10- to 19-year-old age cohort (∼9,000 infected individuals per year per 100,000 susceptible persons) in the intermediate endemicity area, whereas a higher force of infection occurred in the 15- to 29-year-old age cohort (∼6,000 infected individuals per year per 100,000 susceptible persons) for the other macro-regions. Conclusion Our findings support the shift of Brazil toward intermediate and low endemicity levels with the shift of the risk of infection to older age groups. These estimates of HA force of infection stratified by age and endemicity levels are useful information to characterize the pre-vaccination scenario in Brazil.
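
    A minimal sketch of the simplest (constant force of infection) catalytic model behind analyses like this one: with a constant FOI lambda, expected seroprevalence at age a is P(a) = 1 - exp(-lambda * a), and lambda can be fit to age-stratified seroprevalence. The age bands and seroprevalence values below are illustrative assumptions; the study's assortative and food/water-contact models are not reproduced here.

      import numpy as np
      from scipy.optimize import curve_fit

      def catalytic(age, lam):
          """Expected seroprevalence by age under a constant force of infection."""
          return 1.0 - np.exp(-lam * age)

      ages = np.array([7, 12, 17, 25, 35, 50])                    # band mid-points
      seroprev = np.array([0.20, 0.35, 0.45, 0.60, 0.72, 0.85])   # assumed survey data

      (lam_hat,), _ = curve_fit(catalytic, ages, seroprev, p0=[0.03])
      print(f"estimated force of infection: {lam_hat:.3f} per susceptible per year")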

  3. Multiple-threshold models for genetic influences on age of onset for Alzheimer disease: findings in Swedish twins.

    PubMed

    Pedersen, N L; Posner, S F; Gatz, M

    2001-12-08

    Twin studies of dementia have typically used relatively simple 2 x 2 contingency tables with one threshold to estimate the relative importance of genetic variance for liability to disease. These designs are inadequate for addressing issues of age at onset, censoring of data, and distinguishing shared environmental effects from age effects. Meyer and Breitner [1998: Am J Med Genet 81:92-97] applied a multiple-threshold model to the NAS-NRC Twin Panel (average age of onset, 63.5 years) and report that additive genetic effects and shared environmental effects account for 37% and 35% of the variation, respectively, in age of onset for Alzheimer disease. We apply a modified version of their model to the Study of Dementia in Swedish Twins (average age of onset, 75 years) and find that genetic effects account for 57%-78% of the variance, whereas shared environmental effects are of no importance. Heritability is lower when thresholds are freely estimated rather than fixed to the population prevalences. We interpret the findings to suggest that models with free thresholds confound influences on longevity with influences for the disease. Multiple-threshold models, however, do not confound age effects with shared environmental influences. Copyright 2001 Wiley-Liss, Inc.
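
    A minimal sketch of the liability-threshold idea behind such twin models: disease liability is assumed standard normal, so a prevalence K maps to a threshold t = Phi^{-1}(1 - K); fixing t to the population prevalence is the "fixed threshold" option contrasted with freely estimated thresholds above. The prevalence value is an illustrative assumption.

      from scipy.stats import norm

      K = 0.05                     # assumed lifetime prevalence
      t = norm.ppf(1 - K)          # threshold on the standard-normal liability scale
      print(f"liability threshold: {t:.2f} SD above the mean")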

  4. Hydrodynamic Lyapunov modes and strong stochasticity threshold in the dynamic XY model: an alternative scenario.

    PubMed

    Yang, Hong-Liu; Radons, Günter

    2008-01-01

    Crossover from weak to strong chaos in high-dimensional Hamiltonian systems at the strong stochasticity threshold (SST) was anticipated to indicate a global transition in the geometric structure of phase space. Our recent study of Fermi-Pasta-Ulam models showed that corresponding to this transition the energy density dependence of all Lyapunov exponents is identical apart from a scaling factor. The current investigation of the dynamic XY model discovers an alternative scenario for the energy dependence of the system dynamics at SSTs. Though similar in tendency, the Lyapunov exponents now show individually different energy dependencies except in the near-harmonic regime. Such a finding restricts the use of indices such as the largest Lyapunov exponent and the Ricci curvatures to characterize the global transition in the dynamics of high-dimensional Hamiltonian systems. These observations are consistent with our conjecture that the quasi-isotropy assumption works well only when parametric resonances are the dominant sources of dynamical instabilities. Moreover, numerical simulations demonstrate the existence of hydrodynamical Lyapunov modes (HLMs) in the dynamic XY model and show that corresponding to the crossover in the Lyapunov exponents there is also a smooth transition in the energy density dependence of significance measures of HLMs. In particular, our numerical results confirm that strong chaos is essential for the appearance of HLMs.

  5. Global and local threshold in a metapopulational SEIR model with quarantine

    NASA Astrophysics Data System (ADS)

    Gomes, Marcelo F. C.; Rossi, Luca; Pastore Y Piontti, Ana; Vespignani, Alessandro

    2013-03-01

    Diseases that can be transmitted before the onset of symptoms pose a challenging threat to healthcare, since it is hard to track spreaders and implement quarantine measures. More precisely, one of the main concerns regarding pandemic spreading of diseases is the prediction, and eventually the control, of local outbreaks that will trigger a global invasion of a particular disease. We present a metapopulation disease spreading model with transmission from both symptomatic and asymptomatic agents and analyze the role of quarantine measures and mobility processes between subpopulations. We show that, depending on the disease parameters, it is possible to separate in the parameter space the local and global thresholds and study the system behavior as a function of the fraction of asymptomatic transmissions. This means that it is possible to have a range of parameter values where, although we do not achieve local control of the outbreak, it is possible to control the global spread of the disease. We validate the analytic picture with a data-driven model that integrates commuting, air traffic flow and detailed information about population size and structure worldwide.
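
    A minimal single-population sketch (not the full metapopulation model) of SEIR-type dynamics with asymptomatic transmission and quarantine: a fraction p_a of infections is asymptomatic, and a fraction q of symptomatic cases is quarantined and removed from transmission. All parameter values are illustrative assumptions.

      import numpy as np
      from scipy.integrate import solve_ivp

      beta, sigma, gamma = 0.6, 1 / 3, 1 / 5   # transmission, incubation, recovery rates
      p_a, q = 0.4, 0.5                        # asymptomatic fraction, quarantined fraction

      def seir(t, y):
          S, E, Ia, Is, R = y
          lam = beta * (Ia + (1 - q) * Is)     # quarantined cases do not transmit
          return [-lam * S,
                  lam * S - sigma * E,
                  p_a * sigma * E - gamma * Ia,
                  (1 - p_a) * sigma * E - gamma * Is,
                  gamma * (Ia + Is)]

      y0 = [0.999, 0.001, 0.0, 0.0, 0.0]
      sol = solve_ivp(seir, (0, 300), y0)
      print(f"final epidemic size: {1 - sol.y[0, -1]:.2%}")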

  6. Construction of a prediction model for type 2 diabetes mellitus in the Japanese population based on 11 genes with strong evidence of the association.

    PubMed

    Miyake, Kazuaki; Yang, Woosung; Hara, Kazuo; Yasuda, Kazuki; Horikawa, Yukio; Osawa, Haruhiko; Furuta, Hiroto; Ng, Maggie C Y; Hirota, Yushi; Mori, Hiroyuki; Ido, Keisuke; Yamagata, Kazuya; Hinokio, Yoshinori; Oka, Yoshitomo; Iwasaki, Naoko; Iwamoto, Yasuhiko; Yamada, Yuichiro; Seino, Yutaka; Maegawa, Hiroshi; Kashiwagi, Atsunori; Wang, He-Yao; Tanahashi, Toshihito; Nakamura, Naoto; Takeda, Jun; Maeda, Eiichi; Yamamoto, Ken; Tokunaga, Katsushi; Ma, Ronald C W; So, Wing-Yee; Chan, Juliana C N; Kamatani, Naoyuki; Makino, Hideichi; Nanjo, Kishio; Kadowaki, Takashi; Kasuga, Masato

    2009-04-01

    Prediction of the disease status is one of the most important objectives of genetic studies. To select the genes with strong evidence of the association with type 2 diabetes mellitus, we validated the associations of the seven candidate loci extracted in our earlier study by genotyping the samples in two independent sample panels. However, except for KCNQ1, the association of none of the remaining seven loci was replicated. We then selected 11 genes, KCNQ1, TCF7L2, CDKAL1, CDKN2A/B, IGF2BP2, SLC30A8, HHEX, GCKR, HNF1B, KCNJ11 and PPARG, whose associations with diabetes have already been reported and replicated either in the literature or in this study in the Japanese population. As no evidence of the gene-gene interaction for any pair of the 11 loci was shown, we constructed a prediction model for the disease using the logistic regression analysis by incorporating the number of the risk alleles for the 11 genes, as well as age, sex and body mass index as independent variables. Cumulative risk assessment showed that the addition of one risk allele resulted in an average increase in the odds for the disease of 1.29 (95% CI=1.25-1.33, P=5.4 x 10(-53)). The area under the receiver operating characteristic curve, an estimate of the power of the prediction model, was 0.72, thereby indicating that our prediction model for type 2 diabetes may not be so useful but has some value. Incorporation of data from additional risk loci is most likely to increase the predictive power.
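
    A hedged sketch of the kind of model described: a logistic regression on the count of risk alleles plus age, sex, and BMI. The simulated genotypes, covariates, and the per-allele effect (odds ratio near 1.29) are illustrative assumptions used only to show the model form.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(1)
      n = 5000
      risk_alleles = rng.binomial(2, 0.3, size=(n, 11)).sum(axis=1)  # 11 loci
      age = rng.normal(55, 10, n)
      sex = rng.integers(0, 2, n)
      bmi = rng.normal(24, 3, n)

      # Simulate disease status with a per-allele log-odds of log(1.29)
      logit = -6 + np.log(1.29) * risk_alleles + 0.03 * age + 0.2 * sex + 0.08 * bmi
      y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

      X = sm.add_constant(np.column_stack([risk_alleles, age, sex, bmi]))
      fit = sm.Logit(y, X).fit(disp=False)
      print(np.exp(fit.params[1]))  # estimated odds ratio per additional risk allele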

  7. Adaptive Thresholds

    SciTech Connect

    Bremer, P. -T.

    2014-08-26

    ADAPT is a topological analysis code that allows the computation of local thresholds, in particular relevance-based thresholds, for features defined in scalar fields. The initial target application is vortex detection, but the software is more generally applicable to all threshold-based feature definitions.

  8. Contributions of adaptation currents to dynamic spike threshold on slow timescales: Biophysical insights from conductance-based models

    NASA Astrophysics Data System (ADS)

    Yi, Guosheng; Wang, Jiang; Wei, Xile; Deng, Bin; Li, Huiyan; Che, Yanqiu

    2017-06-01

    Spike-frequency adaptation (SFA) mediated by various adaptation currents, such as voltage-gated K+ current (IM), Ca2+-gated K+ current (IAHP), or Na+-activated K+ current (IKNa), exists in many types of neurons, which has been shown to effectively shape their information transmission properties on slow timescales. Here we use conductance-based models to investigate how the activation of three adaptation currents regulates the threshold voltage for action potential (AP) initiation during the course of SFA. It is observed that the spike threshold gets depolarized and the rate of membrane depolarization (dV/dt) preceding AP is reduced as adaptation currents reduce firing rate. It is indicated that the presence of inhibitory adaptation currents enables the neuron to generate a dynamic threshold inversely correlated with preceding dV/dt on slower timescales than fast dynamics of AP generation. By analyzing the interactions of ionic currents at subthreshold potentials, we find that the activation of adaptation currents increases the outward level of net membrane current prior to AP initiation, which antagonizes inward Na+ to result in a depolarized threshold and lower dV/dt from one AP to the next. Our simulations demonstrate that the threshold dynamics on slow timescales is a secondary effect caused by the activation of adaptation currents. These findings have provided a biophysical interpretation of the relationship between adaptation currents and spike threshold.
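
    A minimal adaptive integrate-and-fire sketch (much simpler than the conductance-based models above) showing spike-frequency adaptation: an adaptation current w is incremented at each spike and decays slowly, so interspike intervals lengthen over time. Units are mV, ms, pA, nS, pF; all parameter values are illustrative assumptions.

      import numpy as np

      dt, T = 0.1, 500.0                            # time step and duration (ms)
      C, gL, EL, VT, Vreset = 200.0, 10.0, -70.0, -50.0, -65.0
      tau_w, b, I = 200.0, 30.0, 500.0              # adaptation decay, increment, input

      V, w, spikes = EL, 0.0, []
      for step in range(int(T / dt)):
          V += dt * (-gL * (V - EL) - w + I) / C    # leaky membrane with adaptation
          w += dt * (-w / tau_w)                    # slow decay of adaptation current
          if V >= VT:                               # threshold crossing: spike and adapt
              spikes.append(step * dt)
              V = Vreset
              w += b
      print(f"{len(spikes)} spikes; first ISI {spikes[1] - spikes[0]:.1f} ms, "
            f"last ISI {spikes[-1] - spikes[-2]:.1f} ms")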

  9. A Comment on a Threshold Rule Applied to the Retrieval Decision Model. Technical Note.

    ERIC Educational Resources Information Center

    Kraft, Donald H.

    The retrieval decision problem is considered from the viewpoint of a decision theory approach. A threshold rule based on earlier rules for indexing decisions is considered and analyzed for retrieval decisions as a measure of retrieval performance. The threshold rule is seen as a good descriptive design measure of what a reasonable retrieval system…

  10. Solving Cordelia's Dilemma: Threshold Concepts within a Punctuated Model of Learning

    ERIC Educational Resources Information Center

    Kinchin, Ian M.

    2010-01-01

    The consideration of threshold concepts is offered in the context of biological education as a theoretical framework that may have utility in the teaching and learning of biology at all levels. Threshold concepts may provide a mechanism to explain the observed punctuated nature of conceptual change. This perspective raises the profile of periods…

  11. A Truncated-Probit Item Response Model for Estimating Psychophysical Thresholds

    ERIC Educational Resources Information Center

    Morey, Richard D.; Rouder, Jeffrey N.; Speckman, Paul L.

    2009-01-01

    Human abilities in perceptual domains have conventionally been described with reference to a threshold that may be defined as the maximum amount of stimulation which leads to baseline performance. Traditional psychometric links, such as the probit, logit, and "t", are incompatible with a threshold as there are no true scores corresponding to…

  13. Multistate transitional models for measuring adherence to breast cancer screening: A population-based longitudinal cohort study with over two million women.

    PubMed

    Sutradhar, R; Gu, S; Paszat, L F

    2017-06-01

    Objective Prior work on the disparities among women in breast cancer screening adherence has been methodologically limited. This longitudinal study determines and examines the factors associated with becoming adherent. Methods In a cohort of Canadian women aged 50-74, a three-state transitional model was used to examine adherence to screening for breast cancer. The proportion of time spent being non-adherent with screening was calculated for each woman during her observation window. Using age as the time scale, a relative rate multivariable regression was implemented under the three-state transitional model, to examine the association between covariates (all time-varying) and the rate of becoming adherent. Results The cohort consisted of 2,537,960 women with a median follow-up of 8.46 years. Nearly 31% of women were continually up-to-date with breast screening. Once a woman was non-adherent, the rate of becoming adherent was higher among longer term residents (relative rate = 1.289, 95% confidence interval 1.275-1.302), those from wealthier neighbourhoods, and those who had an identifiable primary care provider who was female or had graduated in Canada. Conclusion Individual and physician-level characteristics play an important role in a woman's adherence to screening. This work improves the quality of evidence regarding disparities among women in adherence to breast cancer screening and provides a novel methodological foundation to investigate adherence for other types of screening, including cervix and colorectal cancer screening.

  14. Predicting the threshold of pulse-train electrical stimuli using a stochastic auditory nerve model: the effects of stimulus noise.

    PubMed

    Xu, Yifang; Collins, Leslie M

    2004-04-01

    The incorporation of low levels of noise into an electrical stimulus has been shown to improve auditory thresholds in some human subjects (Zeng et al., 2000). In this paper, thresholds for noise-modulated pulse-train stimuli are predicted utilizing a stochastic neural-behavioral model of ensemble fiber responses to bi-phasic stimuli. The neural refractory effect is described using a Markov model for a noise-free pulse-train stimulus and a closed-form solution for the steady-state neural response is provided. For noise-modulated pulse-train stimuli, a recursive method using the conditional probability is utilized to track the neural responses to each successive pulse. A neural spike count rule has been presented for both threshold and intensity discrimination under the assumption that auditory perception occurs via integration over a relatively long time period (Bruce et al., 1999). An alternative approach originates from the hypothesis of the multilook model (Viemeister and Wakefield, 1991), which argues that auditory perception is based on several shorter time integrations and may suggest an NofM model for prediction of pulse-train threshold. This motivates analyzing the neural response to each individual pulse within a pulse train, which is considered to be the brief look. A logarithmic rule is hypothesized for pulse-train threshold. Predictions from the multilook model are shown to match trends in psychophysical data for noise-free stimuli that are not always matched by the long-time integration rule. Theoretical predictions indicate that threshold decreases as noise variance increases. Theoretical models of the neural response to pulse-train stimuli not only reduce calculational overhead but also facilitate utilization of signal detection theory and are easily extended to multichannel psychophysical tasks.

  15. Multivariate threshold model analysis of clinical mastitis in multiparous norwegian dairy cattle.

    PubMed

    Heringstad, B; Chang, Y M; Gianola, D; Klemetsdal, G

    2004-09-01

    A Bayesian multivariate threshold model was fitted to clinical mastitis (CM) records from 372,227 daughters of 2411 Norwegian Dairy Cattle (NRF) sires. All cases of veterinary-treated CM occurring from 30 d before first calving to culling or 300 d after third calving were included. Lactations were divided into 4 intervals: -30 to 0 d, 1 to 30 d, 31 to 120 d, and 121 to 300 d after calving. Within each interval, absence or presence of CM was scored as "0" or "1" based on the CM episodes. A 12-variate (3 lactations x 4 intervals) threshold model was used, assuming that CM was a different trait in each interval. Residuals were assumed correlated within lactation but independent between lactations. The model for liability to CM had interval-specific effects of month-year of calving, age at calving (first lactation), or calving interval (second and third lactations), herd-5-yr-period, sire of the cow, plus a residual. Posterior mean of heritability of liability to CM was 0.09 and 0.05 in the first and last intervals, respectively, and between 0.06 and 0.07 for other intervals. Posterior means of genetic correlations of liability to CM between intervals ranged from 0.24 (between intervals 1 and 12) to 0.73 (between intervals 1 and 2), suggesting interval-specific genetic control of resistance to mastitis. Residual correlations ranged from 0.08 to 0.17 for adjacent intervals, and between -0.01 and 0.03 for nonadjacent intervals. Trends of mean sire posterior means by birth year of daughters were used to assess genetic change. The 12 traits showed similar trends, with little or no genetic change from 1976 to 1986, and genetic improvement in resistance to mastitis thereafter. Annual genetic change was larger for intervals in first lactation when compared with second or third lactation. Within lactation, genetic change was larger for intervals early in lactation, and more so in the first lactation. This reflects that selection against mastitis in NRF has emphasized mainly CM

  16. Exploration of lagged relationships between mastitis and milk yield in dairy cows using a Bayesian structural equation Gaussian-threshold model

    PubMed Central

    Wu, Xiao-Lin; Heringstad, Bjørg; Gianola, Daniel

    2008-01-01

    A Gaussian-threshold model is described under the general framework of structural equation models for inferring simultaneous and recursive relationships between binary and Gaussian characters, and estimating genetic parameters. Relationships between clinical mastitis (CM) and test-day milk yield (MY) in first-lactation Norwegian Red cows were examined using a recursive Gaussian-threshold model. For comparison, the data were also analyzed using a standard Gaussian-threshold, a multivariate linear model, and a recursive multivariate linear model. The first 180 days of lactation were arbitrarily divided into three periods of equal length, in order to investigate how these relationships evolve in the course of lactation. The recursive model showed negative within-period effects from (liability to) CM to test-day MY in all three lactation periods, and positive between-period effects from test-day MY to (liability to) CM in the following period. Estimates of recursive effects and of genetic parameters were time-dependent. The results suggested unfavorable effects of production on liability to mastitis, and dynamic relationships between mastitis and test-day MY in the course of lactation. Fitting recursive effects had little influence on the estimation of genetic parameters. However, some differences were found in the estimates of heritability, genetic, and residual correlations, using different types of models (Gaussian-threshold vs. multivariate linear). PMID:18558070

  17. How patch size and refuge availability change interaction strength and population dynamics: a combined individual- and population-based modeling experiment.

    PubMed

    Li, Yuanheng; Brose, Ulrich; Meyer, Katrin; Rall, Björn C

    2017-01-01

    Knowledge on how functional responses (a measurement of feeding interaction strength) are affected by patch size and habitat complexity (represented by refuge availability) is crucial for understanding food-web stability and subsequently biodiversity. Due to their laborious character, it is almost impossible to carry out systematic empirical experiments on functional responses across wide gradients of patch sizes and refuge availabilities. Here we overcame this issue by using an individual-based model (IBM) to simulate feeding experiments. The model is based on empirically measured traits such as body-mass dependent speed and capture success. We simulated these experiments in patches ranging from sizes of petri dishes to natural patches in the field. Moreover, we varied the refuge availability within the patch independently of patch size, allowing for independent analyses of both variables. The maximum feeding rate (the maximum number of prey a predator can consume in a given time frame) is independent of patch size and refuge availability, as it is the physiological upper limit of feeding rates. Moreover, the results of these simulations revealed that a type III functional response, which is known to have a stabilizing effect on population dynamics, fitted the data best. The half saturation density (the prey density where a predator consumes half of its maximum feeding rate) increased with refuge availability but was only marginally influenced by patch size. Subsequently, we investigated how patch size and refuge availability influenced stability and coexistence of predator-prey systems. Following common practice, we used an allometric scaled Rosenzweig-MacArthur predator-prey model based on results from our in silico IBM experiments. The results suggested that densities of both populations are nearly constant across the range of patch sizes simulated, resulting from the constant interaction strength across the patch sizes. However, constant densities with
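
    For reference, a small sketch of the Holling type III functional response mentioned above, in one common parameterization: feeding rate F(N) = Fmax * N^2 / (N0^2 + N^2), where Fmax is the maximum feeding rate and N0 the half-saturation density. The parameter values and prey densities are illustrative assumptions.

      import numpy as np

      def type_iii(N, f_max, n_half):
          """Holling type III functional response (sigmoid in prey density N)."""
          return f_max * N ** 2 / (n_half ** 2 + N ** 2)

      prey_density = np.array([1.0, 5.0, 10.0, 20.0, 50.0, 100.0])
      print(type_iii(prey_density, f_max=8.0, n_half=20.0))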

  18. How patch size and refuge availability change interaction strength and population dynamics: a combined individual- and population-based modeling experiment

    PubMed Central

    Brose, Ulrich; Meyer, Katrin

    2017-01-01

    Knowledge on how functional responses (a measurement of feeding interaction strength) are affected by patch size and habitat complexity (represented by refuge availability) is crucial for understanding food-web stability and subsequently biodiversity. Due to their laborious character, it is almost impossible to carry out systematic empirical experiments on functional responses across wide gradients of patch sizes and refuge availabilities. Here we overcame this issue by using an individual-based model (IBM) to simulate feeding experiments. The model is based on empirically measured traits such as body-mass dependent speed and capture success. We simulated these experiments in patches ranging from sizes of petri dishes to natural patches in the field. Moreover, we varied the refuge availability within the patch independently of patch size, allowing for independent analyses of both variables. The maximum feeding rate (the maximum number of prey a predator can consume in a given time frame) is independent of patch size and refuge availability, as it is the physiological upper limit of feeding rates. Moreover, the results of these simulations revealed that a type III functional response, which is known to have a stabilizing effect on population dynamics, fitted the data best. The half saturation density (the prey density where a predator consumes half of its maximum feeding rate) increased with refuge availability but was only marginally influenced by patch size. Subsequently, we investigated how patch size and refuge availability influenced stability and coexistence of predator-prey systems. Following common practice, we used an allometric scaled Rosenzweig–MacArthur predator-prey model based on results from our in silico IBM experiments. The results suggested that densities of both populations are nearly constant across the range of patch sizes simulated, resulting from the constant interaction strength across the patch sizes. However, constant densities with

  19. Personality traits of the five-factor model are associated with effort-reward imbalance at work: a population-based study.

    PubMed

    Törnroos, Maria; Hintsanen, Mirka; Hintsa, Taina; Jokela, Markus; Pulkki-Råback, Laura; Kivimäki, Mika; Hutri-Kähönen, Nina; Keltikangas-Järvinen, Liisa

    2012-07-01

    This study examined the association between personality traits and work stress. The sample comprised 757 women and 613 men (aged 30 to 45 years in 2007) participating in the Young Finns study. Personality was assessed with the NEO-FFI questionnaire and work stress according to Siegrist's effort-reward imbalance (ERI) model. High neuroticism, low extraversion, and low agreeableness were associated with high ERI. Low conscientiousness was associated with high ERI in men. No association was found between openness and ERI. High neuroticism, high extraversion, and low agreeableness were associated with high effort and low neuroticism, high extraversion, and high agreeableness with high rewards. High conscientiousness was associated with high effort, and in women, with high rewards. High openness was associated with high effort. This study suggests that personality traits may predispose to and protect from work stress.

  20. Unified analytical threshold voltage model for non-uniformly doped dual metal gate fully depleted silicon-on-insulator MOSFETs

    NASA Astrophysics Data System (ADS)

    Rao, Rathnamala; Katti, Guruprasad; Havaldar, Dnyanesh S.; DasGupta, Nandita; DasGupta, Amitava

    2009-03-01

    The paper describes the unified analytical threshold voltage model for non-uniformly doped, dual metal gate (DMG) fully depleted silicon-on-insulator (FDSOI) MOSFETs based on the solution of 2D Poisson's equation. 2D Poisson's equation is solved analytically for appropriate boundary conditions using separation of variables technique. The solution is then extended to obtain the threshold voltage of the FDSOI MOSFET. The model is able to handle any kind of non-uniform doping, viz. vertical, lateral as well as laterally asymmetric channel (LAC) profile in the SOI film in addition to the DMG structure. The analytical results are validated with the numerical simulations using the device simulator MEDICI.

  1. Myeloid conditional deletion and transgenic models reveal a threshold for the neutrophil survival factor Serpinb1.

    PubMed

    Burgener, Sabrina S; Baumann, Mathias; Basilico, Paola; Remold-O'Donnell, Eileen; Touw, Ivo P; Benarafa, Charaf

    2016-09-01

    Serpinb1 is an inhibitor of neutrophil granule serine proteases cathepsin G, proteinase-3 and elastase. One of its core physiological functions is to protect neutrophils from granule protease-mediated cell death. Mice lacking Serpinb1a (Sb1a-/-), its mouse ortholog, have reduced bone marrow neutrophil numbers due to cell death mediated by cathepsin G and the mice show increased susceptibility to lung infections. Here, we show that conditional deletion of Serpinb1a using the Lyz2-cre and Cebpa-cre knock-in mice effectively leads to recombination-mediated deletion in neutrophils but protein-null neutrophils were only obtained using the latter recombinase-expressing strain. Absence of Serpinb1a protein in neutrophils caused neutropenia and increased granule permeabilization-induced cell death. We then generated transgenic mice expressing human Serpinb1 in neutrophils under the human MRP8 (S100A8) promoter. Serpinb1a expression levels in founder lines correlated positively with increased neutrophil survival when crossed with Sb1a-/- mice, which had their defective neutrophil phenotype rescued in the higher expressing transgenic line. Using new conditional and transgenic mouse models, our study demonstrates the presence of a relatively low Serpinb1a protein threshold in neutrophils that is required for sustained survival. These models will also be helpful in delineating recently described functions of Serpinb1 in metabolism and cancer.

  2. Partitioning into hazard subregions for regional peaks-over-threshold modeling of heavy precipitation

    NASA Astrophysics Data System (ADS)

    Carreau, J.; Naveau, P.; Neppel, L.

    2017-05-01

    The French Mediterranean is subject to intense precipitation events occurring mostly in autumn. These can potentially cause flash floods, the main natural danger in the area. The distribution of these events follows specific spatial patterns, i.e., some sites are more likely to be affected than others. The peaks-over-threshold approach consists in modeling extremes, such as heavy precipitation, by the generalized Pareto (GP) distribution. The shape parameter of the GP controls the probability of extreme events and can be related to the hazard level of a given site. When interpolating across a region, the shape parameter should reproduce the observed spatial patterns of the probability of heavy precipitation. However, the shape parameter estimators have high uncertainty which might hide the underlying spatial variability. As a compromise, we choose to let the shape parameter vary in a moderate fashion. More precisely, we assume that the region of interest can be partitioned into subregions with constant hazard level. We formalize the model as a conditional mixture of GP distributions. We develop a two-step inference strategy based on probability weighted moments and put forward a cross-validation procedure to select the number of subregions. A synthetic data study reveals that the inference strategy is consistent and not very sensitive to the selected number of subregions. An application on daily precipitation data from the French Mediterranean shows that the conditional mixture of GPs outperforms two interpolation approaches (with constant or smoothly varying shape parameter).
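
    A minimal peaks-over-threshold sketch: daily values above a high threshold u are reduced to excesses and fit with a generalized Pareto distribution, whose shape parameter controls the tail heaviness discussed above. The simulated rainfall, the 95th-percentile threshold, and maximum-likelihood fitting (rather than the paper's probability-weighted-moment, mixture-based inference) are illustrative assumptions.

      import numpy as np
      from scipy.stats import genpareto

      rng = np.random.default_rng(2)
      daily_precip = rng.gamma(shape=0.4, scale=8.0, size=20_000)   # toy rainfall (mm)

      u = np.quantile(daily_precip, 0.95)               # high threshold
      excesses = daily_precip[daily_precip > u] - u
      xi, _, scale = genpareto.fit(excesses, floc=0)    # xi is the GP shape parameter

      # Level exceeded, on average, once per 100 threshold exceedances
      print(u + genpareto.ppf(1 - 1 / 100, xi, loc=0, scale=scale))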

  3. Overcoming pain thresholds with multilevel models-an example using quantitative sensory testing (QST) data.

    PubMed

    Hirschfeld, Gerrit; Blankenburg, Markus R; Süß, Moritz; Zernikow, Boris

    2015-01-01

    The assessment of somatosensory function is a cornerstone of research and clinical practice in neurology. Recent initiatives have developed novel protocols for quantitative sensory testing (QST). Application of these methods led to intriguing findings, such as the presence of lower pain thresholds in healthy children compared to healthy adolescents. In this article, we (re-) introduce the basic concepts of signal detection theory (SDT) as a method to investigate such differences in somatosensory function in detail. SDT describes participants' responses according to two parameters, sensitivity and response-bias. Sensitivity refers to individuals' ability to discriminate between painful and non-painful stimulations. Response-bias refers to individuals' criterion for giving a "painful" response. We describe how multilevel models can be used to estimate these parameters and to overcome central critiques of these methods. To provide an example we apply these methods to data from the mechanical pain sensitivity test of the QST protocol. The results show that adolescents are more sensitive to mechanical pain and contradict the idea that younger children simply use more lenient criteria to report pain. Overall, we hope that the wider use of multilevel modeling to describe somatosensory functioning may advance neurology research and practice.
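
    A minimal single-participant signal-detection sketch (without the multilevel structure advocated above): sensitivity d' and criterion c are computed from hit and false-alarm rates for "painful" responses. The response counts are illustrative assumptions.

      from scipy.stats import norm

      hits, misses = 18, 6              # responses to stimuli above the pain threshold
      false_alarms, corr_rej = 4, 20    # responses to stimuli below it

      hr = hits / (hits + misses)
      far = false_alarms / (false_alarms + corr_rej)
      d_prime = norm.ppf(hr) - norm.ppf(far)               # sensitivity
      criterion = -0.5 * (norm.ppf(hr) + norm.ppf(far))    # response bias
      print(f"d' = {d_prime:.2f}, c = {criterion:.2f}")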

  4. A study of jet fuel sooting tendency using the threshold sooting index (TSI) model

    SciTech Connect

    Yang, Yi; Boehman, Andre L.; Santoro, Robert J.

    2007-04-15

    Fuel composition can have a significant effect on soot formation during gas turbine combustion. Consequently, this paper contains a comprehensive review of the relationship between fuel hydrocarbon composition and soot formation in gas turbine combustors. Two levels of correlation are identified. First, lumped fuel composition parameters such as hydrogen content and smoke point, which are conventionally used to represent fuel sooting tendency, are correlated with soot formation in practical combustors. Second, detailed fuel hydrocarbon composition is correlated with these lumped parameters. The two-level correlation makes it possible to predict soot formation in practical combustors from basic fuel composition data. Threshold sooting index (TSI), which correlates linearly with the ratio of fuel molecular weight and smoke point in a diffusion flame, is proposed as a new lumped parameter for sooting tendency correlation. It is found that the TSI model correlates excellently with hydrocarbon compositions over a wide range of fuel samples. Also, in predicting soot formation in actual combustors, the TSI model produces the best results overall in comparison with other previously reported correlating parameters, including hydrogen content, smoke point, and composite predictors containing more than one parameter. (author)
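
    A worked illustration of the linear form of the threshold sooting index noted above, TSI = a * (MW / SP) + b, where MW is the fuel molecular weight, SP the smoke point, and a, b apparatus-specific constants fixed by two reference fuels. The reference assignments and the test-fuel values below are illustrative assumptions, not the paper's calibration.

      def tsi(mw, smoke_point_mm, a, b):
          """Threshold sooting index from molecular weight and smoke point."""
          return a * mw / smoke_point_mm + b

      # Calibrate a and b from two assumed reference fuels (MW, smoke point, assigned TSI)
      mw1, sp1, tsi1 = 114.2, 42.8, 2.0      # assumed low-sooting reference
      mw2, sp2, tsi2 = 128.2, 7.4, 100.0     # assumed high-sooting reference
      a = (tsi2 - tsi1) / (mw2 / sp2 - mw1 / sp1)
      b = tsi1 - a * mw1 / sp1

      print(tsi(100.2, 22.0, a, b))          # TSI of a hypothetical test fuel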

  5. Vacation model for Markov machine repair problem with two heterogeneous unreliable servers and threshold recovery

    NASA Astrophysics Data System (ADS)

    Jain, Madhu; Meena, Rakesh Kumar

    2017-06-01

    A Markov model of a multi-component machining system comprising two unreliable heterogeneous servers and mixed standby support has been studied. The repair of broken-down machines is carried out on the basis of a bi-level threshold policy for the activation of the servers. A server returns to render repair service when the pre-specified workload of failed machines has built up. The first (second) repairman turns on only when a workload of N1 (N2) failed machines has accumulated in the system. Both servers may go on vacation when all the machines are in good condition and there are no pending repair jobs for the repairmen. The Runge-Kutta method is implemented to solve the set of governing equations used to formulate the Markov model. Various system metrics, including the mean queue length, machine availability, and throughput, are derived to determine the performance of the machining system. To demonstrate the computational tractability of the present investigation, a numerical illustration is provided. A cost function is also constructed to determine the optimal repair rate of the server by minimizing the expected cost incurred on the system. A hybrid soft computing method is used to develop an adaptive neuro-fuzzy inference system (ANFIS). The validation of the numerical results obtained by the Runge-Kutta approach is also supported by computational results generated by ANFIS.

  6. Zero-Inflated Models for Identifying Relationships Between Body Mass Index and Gastroesophageal Reflux Symptoms: A Nationwide Population-Based Study in China.

    PubMed

    Xu, Qin; Zhang, Wei; Zhang, Tianyi; Zhang, Ruijie; Zhao, Yanfang; Zhang, Yuan; Guo, Yibin; Wang, Rui; Ma, Xiuqiang; He, Jia

    2016-07-01

    That obesity leads to gastroesophageal reflux is a widespread notion. However, scientific evidence for this association is limited, with no rigorous epidemiological approach conducted to address this question. This study examined the relationship between body mass index (BMI) and gastroesophageal reflux symptoms in a large population-representative sample from China. We performed a cross-sectional study in an age- and gender-stratified random sample of the population of five central regions in China. Participants aged 18-80 years completed a general information questionnaire and a Chinese version of the Reflux Disease Questionnaire. A zero-inflated Poisson regression model estimated the relationship between body mass index and gastroesophageal reflux symptoms. Overall, 16,091 (89.4 %) of the 18,000 eligible participants responded; 638 (3.97 %) and 1738 (10.81 %) experienced at least weekly heartburn and weekly acid regurgitation, respectively. After adjusting for potential risk factors, in the zero-inflated part the frequency [odds ratio (OR) 0.66, 95 % confidence interval (95 % CI) 0.50-0.86, p = 0.002] and severity (OR 0.66, 95 % CI 0.50-0.88, p = 0.004) of heartburn in obese participants differed significantly from those in normal-weight participants. In the Poisson part, overweight (OR 1.10, 95 % CI 1.01-1.21, p = 0.038) and obesity (OR 1.19, 95 % CI 1.04-1.37, p = 0.013) were significantly associated with the frequency of acid regurgitation. BMI was strongly and positively related to the frequency and severity of gastroesophageal reflux symptoms. Additionally, gender exerted strong specific effects on the relationship between BMI and gastroesophageal reflux symptoms. The severity and frequency of heartburn were positively correlated with obesity; this relationship was distinct in male participants only.
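
    The zero-inflated Poisson model combines a logit part for "structural" zeros with a Poisson count part. As a self-contained sketch (hypothetical data and a single covariate standing in for BMI; not the study's fully adjusted model), the likelihood can be written out and maximized directly:

        import numpy as np
        from scipy.optimize import minimize
        from scipy.special import gammaln

        def zip_negloglik(params, y, x):
            """Negative log-likelihood of a zero-inflated Poisson model with a logit
            zero-inflation part and a log-linear Poisson part, both driven by one
            covariate x (e.g. BMI)."""
            a0, a1, b0, b1 = params
            pi = 1.0 / (1.0 + np.exp(-(a0 + a1 * x)))       # P(structural zero)
            pi = np.clip(pi, 1e-10, 1.0 - 1e-10)
            lam = np.exp(b0 + b1 * x)                        # Poisson mean
            log_pois = -lam + y * np.log(lam) - gammaln(y + 1)
            ll = np.where(y == 0,
                          np.log(pi + (1.0 - pi) * np.exp(-lam)),
                          np.log(1.0 - pi) + log_pois)
            return -ll.sum()

        rng = np.random.default_rng(0)
        bmi = rng.normal(24.0, 3.0, size=500)                # hypothetical covariate
        y = rng.poisson(np.exp(-2.0 + 0.08 * bmi))           # hypothetical symptom counts
        y[rng.random(500) < 0.6] = 0                         # excess zeros
        fit = minimize(zip_negloglik, x0=[0.0, 0.0, 0.0, 0.0],
                       args=(y, bmi), method="Nelder-Mead")
        print(fit.x)                                         # [a0, a1, b0, b1]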

  7. T Lymphocyte Activation Threshold and Membrane Reorganization Perturbations in Unique Culture Model

    NASA Technical Reports Server (NTRS)

    Adams, C. L.; Sams, C. F.

    2000-01-01

    Quantitative activation thresholds and cellular membrane reorganization are mechanisms by which resting T cells modulate their response to activating stimuli. Here we demonstrate perturbations of these cellular processes in a unique culture system that non-invasively inhibits T lymphocyte activation. During clinorotation, the T cell activation threshold is increased 5-fold. This increased threshold involves a mechanism independent of TCR triggering. Recruitment of lipid rafts to the activation site is impaired during clinorotation but does occur with increased stimulation. This study describes a situation in which an individual cell senses a change in its physical environment and alters its cell biological behavior.

  8. Coherence thresholds in models of language change and evolution: The effects of noise, dynamics, and network of interactions

    NASA Astrophysics Data System (ADS)

    Tavares, J. M.; Telo da Gama, M. M.; Nunes, A.

    2008-04-01

    A simple model of language evolution proposed by Komarova, Niyogi, and Nowak is characterized by a payoff in communicative function and by an error in learning that measure the accuracy in language acquisition. The time scale for language change is generational, and the model’s equations in the mean-field approximation are a particular case of the replicator-mutator equations of evolutionary dynamics. In well-mixed populations, this model exhibits a critical coherence threshold; i.e., a minimal accuracy in the learning process is required to maintain linguistic coherence. In this work, we analyze in detail the effects of different fitness-based dynamics driving linguistic coherence and of the network of interactions on the nature of the coherence threshold by performing numerical simulations and theoretical analyses of three different models of language change in finite populations with two types of structure: fully connected networks and regular random graphs. We find that although the threshold of the original replicator-mutator evolutionary model is robust with respect to the structure of the network of contacts, the coherence threshold of related fitness-driven models may be strongly affected by this feature.

  9. Improving Landslide Susceptibility Modeling Using an Empirical Threshold Scheme for Excluding Landslide Deposition

    NASA Astrophysics Data System (ADS)

    Tsai, F.; Lai, J. S.; Chiang, S. H.

    2015-12-01

    Landslides are frequently triggered by typhoons and earthquakes in Taiwan, causing serious economic losses and human casualties. Remotely sensed images and geo-spatial data consisting of land-cover and environmental information have been widely used for producing landslide inventories and causative factors for slope stability analysis. Landslide susceptibility, on the other hand, can represent the spatial likelihood of landslide occurrence and is an important basis for landslide risk assessment. As multi-temporal satellite images become popular and affordable, they are commonly used to generate landslide inventories for subsequent analysis. However, it is usually difficult to distinguish different landslide sub-regions (scarp, debris flow, deposition, etc.) directly from remote sensing imagery. Consequently, landslide extents delineated by image-based visual interpretation or automatic detection may contain many deposition areas, which can reduce the fidelity of the landslide susceptibility model. This study developed an empirical thresholding scheme based on terrain characteristics for eliminating depositions from detected landslide areas to improve landslide susceptibility modeling. In this study, a Bayesian network classifier is utilized to build a landslide susceptibility model and to predict subsequent rainfall-induced shallow landslides in the Shimen reservoir watershed located in northern Taiwan. Eleven causative factors are considered, including terrain slope, aspect, curvature, elevation, geology, land-use, NDVI, soil, and distance to fault, river, and road. Landslide areas detected using satellite images acquired before and after eight typhoons between 2004 and 2008 are collected as the main inventory for training and verification. In the analysis, previous landslide events are used as training data to predict the samples of the next event. The results are then compared with recorded landslide areas in the inventory to evaluate the accuracy. Experimental results

  10. Evaluation of landslide reactivation: A modified rainfall threshold model based on historical records of rainfall and landslides

    NASA Astrophysics Data System (ADS)

    Floris, Mario; Bozzano, Francesca

    2008-02-01

    This study proposes a modification of the conventional threshold model for assessing the probability of rainfall-induced landslide reactivation. The modification is based on the consideration that exceedance of a pre-determined rainfall threshold is a necessary but not sufficient condition to reactivate a landslide. The proposed method calculates the probability of reactivation as a function of the probability of exceedance of a pre-determined rainfall threshold, as well as the probability of occurrence of a landslide after such exceedance. The data for the calculation were obtained from historical records of landslides and rainfall. The method was applied to two complex landslides ("San Donato" and "La Salsa") involving fine-grained debris in the southern section of the Apennine foredeep. The minimum rainfall threshold triggering landslide reactivation on the two slopes was determined by examining rainfall patterns during the 180 days preceding the slide events. For the San Donato and La Salsa landslides, the minimum triggering threshold consists of rainfall events lasting 15 days, with cumulated rainfall exceeding 150 and 180 mm, respectively. Based on hydrological and statistical analyses, the annual probabilities of exceeding the thresholds were estimated to be 0.38 and 0.25, respectively. During the period from 1950 to 1987, the minimum threshold was exceeded 14 times, and four reactivations occurred at San Donato; whereas, the threshold was exceeded 10 times and three reactivations occurred at La Salsa. Hence, the probabilities of landsliding after exceedance of the minimum rainfall threshold are 4/14 and 3/10, respectively. Finally, annual reactivation probabilities were calculated to be 0.11 and 0.08, respectively. The reliability of the minimum rainfall threshold was tested by: i) simulating variations in the stress-strain behavior of the slopes as a result of fluctuations in the water table from normal to extreme values; and ii) analyzing the results of
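
    The arithmetic behind the reported annual probabilities is simply the product of the annual threshold-exceedance probability and the empirical probability of sliding given exceedance; the snippet below reproduces the figures quoted in this abstract.

        # Annual reactivation probability
        #   = P(threshold exceeded in a year) * P(landslide | threshold exceeded)
        sites = {
            "San Donato": {"p_exceed": 0.38, "slides": 4, "exceedances": 14},
            "La Salsa":   {"p_exceed": 0.25, "slides": 3, "exceedances": 10},
        }
        for name, s in sites.items():
            p_given = s["slides"] / s["exceedances"]
            print(name, s["p_exceed"] * p_given)   # ~0.109 and 0.075, reported as 0.11 and 0.08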

  11. Electro-Thermal Model of Threshold Switching in TaOx-Based Devices.

    PubMed

    Goodwill, Jonathan M; Sharma, Abhishek A; Li, Dasheng; Bain, James A; Skowronski, Marek

    2017-04-05

    Pulsed and quasi-static current-voltage (I-V) characteristics of threshold switching in TiN/TaOx/TiN crossbar devices were measured as a function of stage temperature (200-495 K) and oxygen flow during the deposition of TaOx. A comparison of the pulsed and quasi-static characteristics in the high resistance part of the I-V revealed that Joule self-heating significantly affected the current and was a likely source of negative differential resistance (NDR) and thermal runaway. The experimental quasi-static I-V's were simulated using a finite element electro-thermal model that coupled current and heat flow and incorporated an external circuit with an appropriate load resistor. The simulation reproduced the experimental I-V including the OFF-state at low currents and the volatile NDR region. In the NDR region, the simulation predicted spontaneous current constriction forming a small-diameter hot conducting filament with a radius of 250 nm in a 6 μm diameter device.

  12. [Automatic detection of exudates in retinal images based on threshold moving average models].

    PubMed

    Wisaeng, K; Hiransakolwong, N; Pothiruk, E

    2015-01-01

    Since exudate diagnostic procedures require the attention of an expert ophthalmologist as well as regular monitoring of the disease, the workload of expert ophthalmologists will eventually exceed current screening capacity. Retinal imaging technology offers a potential solution for screening at scale. In this paper, a fast and robust automatic detection of exudates based on moving average histogram models of the fuzzy image was applied, from which an improved histogram was derived. After segmentation of the exudate candidates, the true exudates were pruned using a Sobel edge detector and automatic Otsu thresholding, resulting in accurate localization of the exudates in digital retinal images. To compare the performance of exudate detection methods, we constructed a large database of digital retinal images. The method was trained on a set of 200 retinal images and tested on a completely independent set of 1220 retinal images. Results show that the exudate detection method achieves an overall sensitivity, specificity, and accuracy of 90.42%, 94.60%, and 93.69%, respectively.
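
    As a simplified sketch of the pruning step only (the fuzzy-image moving-average histogram stage is omitted, and the image here is synthetic rather than a fundus photograph), candidate regions can be filtered with Otsu thresholding and a Sobel edge map using scikit-image:

        import numpy as np
        from skimage.filters import sobel, threshold_otsu

        def prune_candidates(green_channel, candidate_mask):
            """Keep candidate exudate pixels that are bright (above the Otsu threshold)
            or lie in regions of strong gradient (Sobel edges)."""
            bright = green_channel > threshold_otsu(green_channel)
            edges = sobel(green_channel)
            strong_edges = edges > threshold_otsu(edges)
            return candidate_mask & (bright | strong_edges)

        # Synthetic example: one bright blob on a dark, slightly noisy background
        img = np.zeros((64, 64))
        img[20:30, 20:30] = 1.0
        img += 0.05 * np.random.default_rng(1).random(img.shape)
        candidates = img > 0.2
        print(prune_candidates(img, candidates).sum())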

  13. High-precision percolation thresholds and Potts-model critical manifolds from graph polynomials

    NASA Astrophysics Data System (ADS)

    Jacobsen, Jesper Lykke

    2014-04-01

    The critical curves of the q-state Potts model can be determined exactly for regular two-dimensional lattices G that are of the three-terminal type. This comprises the square, triangular, hexagonal and bow-tie lattices. Jacobsen and Scullard have defined a graph polynomial P_B(q, v) that gives access to the critical manifold for general lattices. It depends on a finite repeating part of the lattice, called the basis B, and its real roots in the temperature variable v = e^K - 1 provide increasingly accurate approximations to the critical manifolds upon increasing the size of B. Using transfer matrix techniques, these authors computed P_B(q, v) for large bases (up to 243 edges), obtaining determinations of the ferromagnetic critical point v_c > 0 for the (4, 8^2), kagome, and (3, 12^2) lattices to a precision (of the order 10^-8) slightly superior to that of the best available Monte Carlo simulations. In this paper we describe a more efficient transfer matrix approach to the computation of P_B(q, v) that relies on a formulation within the periodic Temperley-Lieb algebra. This makes possible computations for substantially larger bases (up to 882 edges), and the precision on v_c is hence taken to the range 10^-13. We further show that a large variety of regular lattices can be cast in a form suitable for this approach. This includes all Archimedean lattices, their duals and their medials. For all these lattices we tabulate high-precision estimates of the bond percolation thresholds p_c and Potts critical points v_c. We also trace and discuss the full Potts critical manifold in the (q, v) plane, paying special attention to the antiferromagnetic region v < 0. Finally, we adapt the technique to site percolation as well, and compute the polynomials P_B(p) for certain Archimedean and dual lattices (those having only cubic and quartic vertices), using very large bases (up to 243 vertices). This produces the site percolation thresholds p_c to a precision of the order of 10^-9.
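
    The computational core of the method is locating the relevant real root of the basis polynomial in the temperature variable v. As a worked example (using the square lattice, whose exact critical curve v^2 = q corresponds to the smallest possible basis; polynomials from larger bases are handled the same way), the q -> 1 root maps to the familiar bond percolation threshold p_c = 1/2:

        import numpy as np

        def critical_v(poly_coeffs_in_v):
            """Smallest positive real root of a basis polynomial P_B(q, v),
            given as coefficients in v (highest degree first)."""
            roots = np.roots(poly_coeffs_in_v)
            real_roots = roots[np.isreal(roots)].real
            return real_roots[real_roots > 0].min()

        q = 1.0                              # q -> 1 is the bond-percolation limit
        v_c = critical_v([1.0, 0.0, -q])     # square lattice: v**2 - q = 0, so v_c = 1
        p_c = v_c / (1.0 + v_c)              # map v = p/(1-p) back to a probability
        print(p_c)                           # 0.5, the exact square-lattice bond threshold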

  14. Noisy threshold in neuronal models: connections with the noisy leaky integrate-and-fire model.

    PubMed

    Dumont, G; Henry, J; Tarniceriu, C O

    2016-12-01

    Providing an analytical treatment to the stochastic feature of neurons' dynamics is one of the current biggest challenges in mathematical biology. The noisy leaky integrate-and-fire model and its associated Fokker-Planck equation are probably the most popular way to deal with neural variability. Another well-known formalism is the escape-rate model: a model giving the probability that a neuron fires at a certain time knowing the time elapsed since its last action potential. This model leads to a so-called age-structured system, a partial differential equation with non-local boundary condition famous in the field of population dynamics, where the age of a neuron is the amount of time passed by since its previous spike. In this theoretical paper, we investigate the mathematical connection between the two formalisms. We shall derive an integral transform of the solution to the age-structured model into the solution of the Fokker-Planck equation. This integral transform highlights the link between the two stochastic processes. As far as we know, an explicit mathematical correspondence between the two solutions has not been introduced until now.

  15. Numerical modeling for investigating the optical breakdown threshold of laser-induced air plasmas at different laser characteristics

    NASA Astrophysics Data System (ADS)

    Hamam, Kholoud A.; Gaabour, Laila H.; Gamal, Yosr E. E. D.

    2017-07-01

    In this work, we report a numerical investigation of two sets of experimental measurements that were previously carried out to study the breakdown threshold dependence on laser characteristics (wavelength, pulse width, and spot size) in the breakdown of laboratory air at different pressures. The study aimed to inspect the significance of the physical mechanisms in air breakdown as related to the applied experimental conditions. In doing so, we adopted a simple theoretical formulation relying on the numerical solution of a rate equation that describes the growth of electron density due to the joint effect of multi-photon and avalanche ionization processes given in our earlier work [Gaabour et al., J. Mod. Phys. 3, 1683-1691 (2012)]. Here, the rate equation is adapted to include the effect of electron loss due to attachment processes. This equation is then solved numerically using the Runge-Kutta fourth order technique. The influence of electron gain and loss processes on the breakdown threshold is studied by calculating the breakdown threshold intensity and RMS electric field for atmospheric air using different laser parameters (wavelength, pulse width, and focal length of the focusing lens), corresponding to the experimental conditions given by Tambay and Thareja [J. Appl. Phys. 70(5), 2890 (1991)]. To validate the model, a comparison is made between those calculated thresholds and the experimentally measured ones. Moreover, the effective contribution of each of the considered physical processes to the breakdown phenomenon is examined by studying the effect of laser wavelength and spot diameter on the threshold intensities, as well as on the temporal variation of the electron density. The correlation between the threshold intensity and gas pressure is tested in relation to the measurements of Tambay et al. [Pramana-J. Phys. 37(2), 163 (1991)]. Calculations are also carried out to depict the impact of pulse width on the threshold intensity.
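
    A minimal sketch of this kind of calculation (the rate coefficients below are placeholders, not the values used in the paper) integrates the electron-density rate equation dn/dt = W_mpi + (nu_i - nu_a) n with an explicit Runge-Kutta solver and declares breakdown once n exceeds a chosen critical density:

        from scipy.integrate import solve_ivp

        # Placeholder coefficients (illustrative only, not the paper's values)
        W_MPI = 1.0e3     # multiphoton ionization source, electrons cm^-3 s^-1
        NU_ION = 2.0e9    # avalanche (cascade) ionization frequency, s^-1
        NU_ATT = 5.0e8    # attachment loss frequency, s^-1

        def dndt(t, n):
            """Electron density growth: multiphoton source + avalanche gain - attachment loss."""
            return W_MPI + (NU_ION - NU_ATT) * n

        sol = solve_ivp(dndt, t_span=(0.0, 10e-9), y0=[1.0], method="RK45", max_step=5e-10)
        print(f"electron density after 10 ns: {sol.y[0, -1]:.3e} cm^-3")
        # Breakdown is typically declared once n exceeds a critical density; the threshold
        # intensity is the lowest intensity for which that criterion is met within the pulse.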

  16. Modeling direction discrimination thresholds for yaw rotations around an earth-vertical axis for arbitrary motion profiles.

    PubMed

    Soyka, Florian; Giordano, Paolo Robuffo; Barnett-Cowan, Michael; Bülthoff, Heinrich H

    2012-07-01

    Understanding the dynamics of vestibular perception is important, for example, for improving the realism of motion simulation and virtual reality environments or for diagnosing patients suffering from vestibular problems. Previous research has found a dependence of direction discrimination thresholds for rotational motions on the period length (inverse frequency) of a transient (single cycle) sinusoidal acceleration stimulus. However, self-motion is seldom purely sinusoidal, and up to now, no models have been proposed that take into account non-sinusoidal stimuli for rotational motions. In this work, the influence of both the period length and the specific time course of an inertial stimulus is investigated. Thresholds for three acceleration profile shapes (triangular, sinusoidal, and trapezoidal) were measured for three period lengths (0.3, 1.4, and 6.7 s) in ten participants. A two-alternative forced-choice discrimination task was used where participants had to judge if a yaw rotation around an earth-vertical axis was leftward or rightward. The peak velocity of the stimulus was varied, and the threshold was defined as the stimulus yielding 75 % correct answers. In accordance with previous research, thresholds decreased with shortening period length (from ~2 deg/s for 6.7 s to ~0.8 deg/s for 0.3 s). The peak velocity was the determining factor for discrimination: Different profiles with the same period length have similar velocity thresholds. These measurements were used to fit a novel model based on a description of the firing rate of semi-circular canal neurons. In accordance with previous research, the estimates of the model parameters suggest that velocity storage does not influence perceptual thresholds.
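
    A minimal sketch of how such a threshold is extracted (simulated proportions below, not the study's data): fit a cumulative Gaussian to the proportion of "rightward" judgments versus signed peak velocity, and read off the velocity at which responses are 75 % correct.

        import numpy as np
        from scipy.optimize import curve_fit
        from scipy.stats import norm

        def psychometric(v, bias, sigma):
            """Probability of a 'rightward' response for signed peak velocity v (deg/s)."""
            return norm.cdf(v, loc=bias, scale=sigma)

        velocities = np.array([-3.0, -2.0, -1.0, -0.5, 0.5, 1.0, 2.0, 3.0])   # hypothetical
        p_right = np.array([0.02, 0.10, 0.27, 0.40, 0.63, 0.74, 0.92, 0.98])  # hypothetical

        (bias, sigma), _ = curve_fit(psychometric, velocities, p_right, p0=[0.0, 1.0])
        threshold_75 = sigma * norm.ppf(0.75)   # velocity at which 75 % of answers are correct
        print(f"bias = {bias:.2f} deg/s, 75% threshold = {threshold_75:.2f} deg/s")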

  17. Predicting Bed Grain Size in Threshold Channels Using Lidar Digital Elevation Models

    NASA Astrophysics Data System (ADS)

    Snyder, N. P.; Nesheim, A. O.; Wilkins, B. C.; Edmonds, D. A.

    2011-12-01

    Over the past 20 years, researchers have developed GIS-based algorithms to extract channel networks and measure longitudinal profiles from digital elevation models (DEMs), and have used these to study stream morphology in relation to tectonics, climate and ecology. The accuracy of stream elevations from traditional DEMs (10-50 m pixels) is typically limited by the contour interval (3-20 m) of the rasterized topographic map source. This is a particularly severe limitation in low-relief watersheds, where 3 m of channel elevation change may occur over several km. Lidar DEMs (~1 m pixels) allow researchers to resolve channel elevation changes of ~0.5 m, enabling reach-scale calculations of gradient, which is the most important parameter for understanding channel processes at that scale. Lidar DEMs have the additional advantage of allowing users to make estimates of channel width. We present a process-based model that predicts median bed grain size in threshold gravel-bed channels from lidar slope and width measurements using the Shields and Manning equations. We compare these predictions to field grain size measurements in segments of three Maine rivers. Like many paraglacial rivers, these have longitudinal profiles characterized by relatively steep (gradient >0.002) and flat (gradient <0.0005) segments, with length scales of several km. This heterogeneity corresponds to strong variations in channel form, sediment supply, bed grain size, and aquatic habitat characteristics. The model correctly predicts bed sediment size within a factor of two in ~70% of the study sites. The model works best in single-thread channels with relatively low sediment supply, and poorly in depositional, multi-thread and/or fine (median grain size <20 mm) reaches. We evaluate the river morphology (using field and lidar measurements) in the context of the Parker et al. (2007) hydraulic geometry relations for single-thread gravel-bed rivers, and find correspondence in the locations where both
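
    A minimal sketch of the underlying calculation (the discharge, Manning roughness, and critical Shields number below are assumed values, not the study's calibration): estimate flow depth from Manning's equation using lidar-derived slope and width, convert to bed shear stress, and invert the Shields criterion for the median grain size.

        RHO_W, RHO_S, G = 1000.0, 2650.0, 9.81   # water/sediment density (kg/m^3), gravity (m/s^2)

        def predict_d50(slope, width_m, discharge_m3s, manning_n=0.035, shields_crit=0.045):
            """Median grain size (m) for a threshold channel.

            Depth from Manning's equation for a wide rectangular channel (hydraulic
            radius ~ depth); bed shear stress tau = rho*g*h*S is then equated to the
            critical Shields stress for D50.
            """
            depth = (manning_n * discharge_m3s / (width_m * slope ** 0.5)) ** 0.6
            tau = RHO_W * G * depth * slope
            return tau / (shields_crit * (RHO_S - RHO_W) * G)

        # Hypothetical reach: slope 0.003, width 25 m, channel-forming discharge 60 m^3/s
        print(f"predicted D50 = {1000.0 * predict_d50(0.003, 25.0, 60.0):.0f} mm")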

  18. Use of a threshold animal model to estimate calving ease and stillbirth (co)variance components for US Holsteins

    USDA-ARS?s Scientific Manuscript database

    (Co)variance components for calving ease and stillbirth in US Holsteins were estimated using a single-trait threshold animal model and two different sets of data edits. Six sets of approximately 250,000 records each were created by randomly selecting herd codes without replacement from the data used...

  19. Transfer model of lead in soil-carrot (Daucus carota L.) system and food safety thresholds in soil.

    PubMed

    Ding, Changfeng; Li, Xiaogang; Zhang, Taolin; Wang, Xingxiang

    2015-09-01

    Reliable empirical models describing lead (Pb) transfer in soil-plant systems are needed to improve soil environmental quality standards. A greenhouse experiment was conducted to develop soil-plant transfer models to predict Pb concentrations in carrot (Daucus carota L.). Soil thresholds for food safety were then derived inversely using the prediction model in view of the maximum allowable limit for Pb in food. The two most important soil properties influencing the carrot Pb uptake factor (the ratio of the Pb concentration in carrot to that in soil) were soil pH and cation exchange capacity (CEC), as revealed by path analysis. Stepwise multiple linear regression models were based on soil properties and the pseudo-total (aqua regia) or extractable (0.01 M CaCl2 and 0.005 M diethylenetriamine pentaacetic acid) soil Pb concentrations. Carrot Pb contents were best explained by the pseudo-total soil Pb concentrations in combination with soil pH and CEC, with the percentage of variation explained being up to 93%. The derived soil thresholds based on added Pb (total soil Pb with the geogenic background part subtracted) have the advantage of better applicability to soils with high natural background Pb levels. Validation of the thresholds against data from field trials and literature studies indicated that the proposed thresholds are reasonable and reliable. © 2015 SETAC.
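
    A sketch of the inversion step only, under an assumed model form with made-up coefficients (the published regression coefficients and the added-Pb background correction are not reproduced here): given a prediction model for carrot Pb, solve it backwards at the food-safety limit to obtain a soil threshold.

        import math

        def soil_pb_threshold(pb_food_limit, soil_ph, cec,
                              b0=-1.5, b1=0.9, b2=-0.15, b3=-0.3):
            """Invert an assumed soil-plant transfer model into a soil Pb threshold (mg/kg).

            Assumed form (illustrative placeholder coefficients):
                log10(Pb_carrot) = b0 + b1*log10(Pb_soil) + b2*pH + b3*log10(CEC)
            Solving for Pb_soil at the maximum allowable Pb in carrot gives the threshold.
            """
            log_pb_soil = (math.log10(pb_food_limit) - b0 - b2 * soil_ph
                           - b3 * math.log10(cec)) / b1
            return 10.0 ** log_pb_soil

        # Example: food limit 0.1 mg/kg, soil pH 6.5, CEC 15 cmol(+)/kg
        print(f"illustrative soil threshold: {soil_pb_threshold(0.1, 6.5, 15.0):.0f} mg/kg")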

  20. SEMICONDUCTOR DEVICES: Modeling and discussion of threshold voltage for a multi-floating gate FET pH sensor

    NASA Astrophysics Data System (ADS)

    Zhaoxia, Shi; Dazhong, Zhu

    2009-11-01

    Research into new pH sensors fabricated by the standard CMOS process is currently a hot topic. The new pH-sensing multi-floating gate field effect transistor is found to have a very large threshold voltage, which differs from that of the normal ion-sensitive field effect transistor. After analyzing all the interface layers of the structure, a new sensing model based on Gauss's theorem and the charge neutrality principle is developed in this paper. According to the model, the charge trapped on the multi-floating gate during processing and the thickness of the sensitive layer are the main causes of the large threshold voltage. From this model, it is also found that removing the charge on the multi-floating gate is an effective way to decrease the threshold voltage. Test results for three different standard pH buffer solutions confirm the model and point the way to solving the large-threshold-voltage problem.

  1. Extension of the Mechanical Threshold Stress Model to Static and Dynamic Strain Aging: Application to AA5754-O

    NASA Astrophysics Data System (ADS)

    Feng, Yu; Mandal, Sudipto; Gockel, Brian; Rollett, Anthony D.

    2017-08-01

    Based on the mechanical threshold stress model and the visco-plastic self-consistent algorithm, a modified constitutive model is developed to model static strain aging and dynamic strain aging for application to a non-heat treatable aluminum alloy, AA5754-O. The implementation is based on a combination of the evolution of dislocation density and the effect of solutes on both mobile dislocations and forest dislocations. Using this model, the stress-strain behavior of AA5754-O is simulated in multi-path, multi-temperature, and variable strain rate tensile tests. The low temperature and strain rate sensitivities of the modified mechanical threshold stress model in the dynamic strain aging regime are successfully accounted for. The results show quantitative agreement with experimental data from multiple sources.

  2. Modelling single shot damage thresholds of multilayer optics for high-intensity short-wavelength radiation sources.

    PubMed

    Loch, R A; Sobierajski, R; Louis, E; Bosgra, J; Bijkerk, F

    2012-12-17

    The single shot damage thresholds of multilayer optics for high-intensity short-wavelength radiation sources are theoretically investigated, using a model developed on the basis of experimental data obtained at the FLASH and LCLS free electron lasers. We compare the radiation hardness of commonly used multilayer optics and propose new material combinations selected for a high damage threshold. Our study demonstrates that the damage thresholds of multilayer optics can vary over a large range of incidence fluences and can be as high as several hundred mJ/cm^2. This strongly suggests that multilayer mirrors are serious candidates for damage-resistant optics. In particular, multilayer optics based on Li2O spacers are very promising for use in current and future short-wavelength radiation sources.

  3. Irreversible mean-field model of the critical behavior of charge-density waves below the threshold for sliding

    NASA Astrophysics Data System (ADS)

    Sornette, Didier

    1993-05-01

    A mean-field (MF) model of the critical behavior of charge-density waves below the threshold for sliding is proposed, which replaces the combined effect of the pinning force and of the forces exerted by the neighbors on a given particle n by an effective force threshold X_n. It allows one to rationalize the numerical results of Middleton and Fisher [Phys. Rev. Lett. 66 (1991) 92] on the divergence of the polarization and of the largest correlation length and of Pla and Nori [Phys. Rev. Lett. 67 (1991) 919] on the distribution D(d) of sliding bursts of size d, measured in narrow intervals of driving fields E at a finite distance below the threshold E_c.

  4. Modeling extreme drought impacts on terrestrial ecosystems when thresholds are exceeded

    NASA Astrophysics Data System (ADS)

    Holm, J. A.; Rammig, A.; Smith, B.; Medvigy, D.; Lichstein, J. W.; Dukes, J. S.; Allen, C. D.; Beier, C.; Larsen, K. S.; Ficken, C. D.; Pockman, W.; Anderegg, W.; Luo, Y.

    2016-12-01

    impacts of extreme droughts while increased temperature exacerbates mortality. This study highlighted a number of questions about our current understanding of EEs and their corresponding thresholds and tipping points, and provides an analysis of confidence in model representation and accuracy of processes related to EEs.

  5. Genetic parameters for direct and maternal calving ease in Walloon dairy cattle based on linear and threshold models.

    PubMed

    Vanderick, S; Troch, T; Gillon, A; Glorieux, G; Gengler, N

    2014-12-01

    Calving ease scores from Holstein dairy cattle in the Walloon Region of Belgium were analysed using univariate linear and threshold animal models. Variance components and derived genetic parameters were estimated from a data set including 33,155 calving records. Included in the models were season, herd and sex of calf × age of dam classes × group of calvings interaction as fixed effects, herd × year of calving, maternal permanent environment and animal direct and maternal additive genetic as random effects. Models were fitted with the genetic correlation between direct and maternal additive genetic effects either estimated or constrained to zero. Direct heritability for calving ease was approximately 8% with linear models and approximately 12% with threshold models. Maternal heritabilities were approximately 2 and 4%, respectively. Genetic correlation between direct and maternal additive effects was found to be not significantly different from zero. Models were compared in terms of goodness of fit and predictive ability. Criteria of comparison such as mean squared error, correlation between observed and predicted calving ease scores as well as between estimated breeding values were estimated from 85,118 calving records. The results provided few differences between linear and threshold models even though correlations between estimated breeding values from subsets of data for sires with progeny from linear model were 17 and 23% greater for direct and maternal genetic effects, respectively, than from threshold model. For the purpose of genetic evaluation for calving ease in Walloon Holstein dairy cattle, the linear animal model without covariance between direct and maternal additive effects was found to be the best choice. © 2014 Blackwell Verlag GmbH.

  6. Using generalized additive modeling to empirically identify thresholds within the ITERS in relation to toddlers' cognitive development.

    PubMed

    Setodji, Claude Messan; Le, Vi-Nhuan; Schaack, Diana

    2013-04-01

    Research linking high-quality child care programs and children's cognitive development has contributed to the growing popularity of child care quality benchmarking efforts such as quality rating and improvement systems (QRIS). Consequently, there has been an increased interest in and a need for approaches to identifying thresholds, or cutpoints, in the child care quality measures used in these benchmarking efforts that differentiate between different levels of children's cognitive functioning. To date, research has provided little guidance to policymakers as to where these thresholds should be set. Using the Early Childhood Longitudinal Study, Birth Cohort (ECLS-B) data set, this study explores the use of generalized additive modeling (GAM) as a method of identifying thresholds on the Infant/Toddler Environment Rating Scale (ITERS) in relation to toddlers' performance on the Mental Development subscale of the Bayley Scales of Infant Development (the Bayley Mental Development Scale Short Form-Research Edition, or BMDSF-R). The present findings suggest that simple linear models do not always correctly depict the relationships between ITERS scores and BMDSF-R scores and that GAM-derived thresholds were more effective at differentiating among children's performance levels on the BMDSF-R. Additionally, the present findings suggest that there is a minimum threshold on the ITERS that must be exceeded before significant improvements in children's cognitive development can be expected. There may also be a ceiling threshold on the ITERS, such that beyond a certain level, only marginal increases in children's BMDSF-R scores are observed. (PsycINFO Database Record (c) 2013 APA, all rights reserved).

  7. Evaluation of the pentylenetetrazole seizure threshold test in epileptic mice as surrogate model for drug testing against pharmacoresistant seizures.

    PubMed

    Töllner, Kathrin; Twele, Friederike; Löscher, Wolfgang

    2016-04-01

    Resistance to antiepileptic drugs (AEDs) is a major problem in epilepsy therapy, so that development of more effective AEDs is an unmet clinical need. Several rat and mouse models of epilepsy with spontaneous difficult-to-treat seizures exist, but because testing of antiseizure drug efficacy is extremely laborious in such models, they are only rarely used in the development of novel AEDs. Recently, the use of acute seizure tests in epileptic rats or mice has been proposed as a novel strategy for evaluating novel AEDs for increased antiseizure efficacy. In the present study, we compared the effects of five AEDs (valproate, phenobarbital, diazepam, lamotrigine, levetiracetam) on the pentylenetetrazole (PTZ) seizure threshold in mice that were made epileptic by pilocarpine. Experiments were started 6 weeks after a pilocarpine-induced status epilepticus. At this time, control seizure threshold was significantly lower in epileptic than in nonepileptic animals. Unexpectedly, only one AED (valproate) was less effective to increase seizure threshold in epileptic vs. nonepileptic mice, and this difference was restricted to doses of 200 and 300 mg/kg, whereas the difference disappeared at 400mg/kg. All other AEDs exerted similar seizure threshold increases in epileptic and nonepileptic mice. Thus, induction of acute seizures with PTZ in mice pretreated with pilocarpine does not provide an effective and valuable surrogate method to screen drugs for antiseizure efficacy in a model of difficult-to-treat chronic epilepsy as previously suggested from experiments with this approach in rats. Copyright © 2016 Elsevier Inc. All rights reserved.

  8. Do we need a dynamic snow depth threshold when comparing hydrological models with remote sensing products in mountain catchments?

    NASA Astrophysics Data System (ADS)

    Engel, Michael; Bertoldi, Giacomo; Notarnicola, Claudia; Comiti, Francesco

    2017-04-01

    To assess the performance of simulated snow cover of hydrological models, it is common practice to compare simulated data with observed ones derived from satellite images such as MODIS. However, technical and methodological limitations such as data availability of MODIS products, its spatial resolution or difficulties in finding appropriate parameterisations of the model need to be solved previously. Another important assumption usually made is the threshold of minimum simulated snow depth, generally set to 10 mm of snow depth, to respect the MODIS detection thresholds for snow cover. But is such a constant threshold appropriate for complex alpine terrain? How important is the impact of different snow depth thresholds on the spatial and temporal distribution of the pixel-based overall accuracy (OA)? To address this aspect, we compared the snow covered area (SCA) simulated by the GEOtop 2.0 snow model to the daily composite 250 m EURAC MODIS SCA in the upper Saldur basin (61 km2, Eastern Italian Alps) during the period October 2011 - October 2013. Initially, we calibrated the snow model against snow depths and snow water equivalents at point scale, taken from measurements at different meteorological stations. We applied different snow depth thresholds (0 mm, 10 mm, 50 mm, and 100 mm) to obtain the simulated snow cover and assessed the changes in OA both in time (during the entire evaluation period, accumulation and melting season) and space (entire catchment and specific areas of topographic characteristics such as elevation, slope, aspect, landcover, and roughness). Results show remarkable spatial and temporal differences in OA with respect to different snow depth thresholds. Inaccuracies of simulated and observed SCA during the accumulation season September to November 2012 were located in areas with north-west aspect, slopes of 30° or little elevation differences at sub-pixel scale (-0.25 to 0 m). We obtained best agreements with MODIS SCA for a snow depth
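
    The comparison itself reduces to binarizing the simulated snow depth at each candidate threshold and scoring agreement with the binary MODIS map pixel by pixel; a minimal sketch with synthetic arrays (not the GEOtop/EURAC data):

        import numpy as np

        def overall_accuracy(sim_depth_mm, modis_snow, thresholds=(0, 10, 50, 100)):
            """Pixel-based overall accuracy of simulated snow cover for several minimum
            snow-depth thresholds (mm), against a binary MODIS SCA map (1 = snow)."""
            valid = ~np.isnan(sim_depth_mm) & (modis_snow >= 0)   # mask no-data pixels
            return {thr: float(np.mean((sim_depth_mm[valid] > thr) == (modis_snow[valid] == 1)))
                    for thr in thresholds}

        rng = np.random.default_rng(2)
        depth = rng.gamma(shape=1.5, scale=40.0, size=(100, 100))               # synthetic depths, mm
        modis = ((depth + rng.normal(0.0, 30.0, depth.shape)) > 20).astype(int)  # synthetic SCA
        print(overall_accuracy(depth, modis))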

  9. Implementation and assessment of the mechanical-threshold-stress model using the EPIC2 and PINON computer codes

    SciTech Connect

    Maudlin, P.J.; Davidson, R.F.; Henninger, R.J.

    1990-09-01

    A flow-stress constitutive model based on dislocation mechanics has been implemented in the EPIC2 and PINON continuum mechanics codes. This model provides a better understanding of the plastic deformation process for ductile materials by using an internal state variable called the mechanical threshold stress. This kinematic quantity tracks the evolution of the material's microstructure along some arbitrary strain, strain-rate, and temperature-dependent path using a differential form that balances dislocation generation and recovery processes. Given a value for the mechanical threshold stress, the flow stress is determined using either a thermal-activation-controlled or a drag-controlled kinetics relationship. We evaluated the performance of the Mechanical Threshold Stress (MTS) model in terms of accuracy and computational resources through a series of assessment problems chosen to exercise the model over a large range of strain rates and strains. Our calculations indicate that the more complicated MTS model is reasonable in terms of computational resources when compared with other models in common hydrocode use. In terms of accuracy, these simulations show that the MTS model is superior for problems containing mostly normal strain with shear strains less than 0.2 but perhaps not as accurate for problems that contain large amounts of shear strain. 29 refs., 33 figs., 9 tabs.
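
    A minimal sketch of the thermal-activation-controlled kinetics relationship, reduced to a single threshold-stress component with a fixed shear modulus (all material constants below are placeholders, not the EPIC2/PINON parameterization):

        import numpy as np

        BOLTZMANN = 1.380649e-23   # J/K

        def flow_stress(sigma_hat, temperature_k, strain_rate,
                        sigma_a=40e6, mu=45e9, b=2.86e-10,
                        g0=1.6, edot0=1e7, p=0.5, q=1.5):
            """Flow stress (Pa) from a mechanical threshold stress sigma_hat (Pa) using an
            Arrhenius-type thermal-activation scaling factor s(strain_rate, T)."""
            arg = (BOLTZMANN * temperature_k / (g0 * mu * b ** 3)) * np.log(edot0 / strain_rate)
            s = (1.0 - arg ** (1.0 / q)) ** (1.0 / p)
            return sigma_a + s * sigma_hat

        # Example: threshold stress of 500 MPa at room temperature, quasi-static rate
        print(f"{flow_stress(500e6, 295.0, 1e-3) / 1e6:.0f} MPa")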

  10. Polygenic threshold model with sex dimorphism in adolescent idiopathic scoliosis: the Carter effect.

    PubMed

    Kruse, Lisa M; Buchan, Jillian G; Gurnett, Christina A; Dobbs, Matthew B

    2012-08-15

    Adolescent idiopathic scoliosis occurs between two and ten times more frequently in females than in males. The exact cause of this sex discrepancy is unknown, but it may represent a difference in susceptibility to the deformity. If this difference is attributable to genetic factors, then males with adolescent idiopathic scoliosis would need to inherit a greater number of susceptibility genes compared with females to develop the deformity. Males would also be more likely to transmit the disease to their children and to have siblings with adolescent idiopathic scoliosis. Such a phenomenon is known as the Carter effect, and the presence of such an effect would support a multifactorial threshold model of inheritance. One hundred and forty multiplex families in which more than one individual was affected with adolescent idiopathic scoliosis were studied. These families contained 1616 individuals, including 474 individuals with adolescent idiopathic scoliosis and 1142 unaffected relatives. The rates of transmission from the 122 affected mothers and from the twenty-eight affected fathers were calculated, and the prevalence among siblings was determined in the nuclear families of affected individuals. The prevalence of adolescent idiopathic scoliosis in these multiplex families was lowest in sons of affected mothers (36%, thirty-eight of 105) and highest in daughters of affected fathers (85%, twenty-two of twenty-six). Affected fathers transmitted adolescent idiopathic scoliosis to 80% (thirty-seven) of forty-six children, whereas affected mothers transmitted it to 56% (133) of 239 children (p < 0.001). Siblings of affected males also had a significantly higher prevalence of adolescent idiopathic scoliosis (55%, sixty-one of 110) compared with siblings of affected females (45%, 206 of 462) (p = 0.04). This study demonstrates the presence of the Carter effect in adolescent idiopathic scoliosis. This pattern can be explained by polygenic inheritance of adolescent idiopathic

  11. A threshold regression model to predict return to work after traumatic limb injury.

    PubMed

    Hou, Wen-Hsuan; Chuang, Hung-Yi; Lee, Mei-Ling Ting

    2016-02-01

    The study aims to examine the severity of initial impairment and recovery rate of return-to-work (RTW) predictors among workers with traumatic limb injury. This 2-year prospective cohort study recruited 1124 workers with traumatic limb injury during the first 2 weeks of hospital admission. Baseline data were obtained by questionnaire and chart review. Patient follow-up occurred at 1, 3, 6, 12, 18, and 24 months post injury. The primary outcome was the time of first RTW. The impact of potential predictors on initial impairment and rate of recovery towards RTW was estimated by threshold regression (TR). A total of 846 (75.27%) participants returned to work during the follow-up period. Our model revealed that the initial impairment level in elderly workers and lower limb injuries were 33% and 35% greater than their counterparts, respectively. Workers with >12 years of education, part-time job, and moderate and higher self-efficacy were less impaired at initial injury compared with their counterparts. In terms of the rate of recovery leading to RTW, workers with older age, part-time jobs, lower limbs, or combined injuries had a significantly slower recovery rate, while workers with 9-12 years of education and >12 years of education had a significantly faster recovery rate. Our study provides researchers and clinicians with evidence to understand the baseline impairment and rate of recovery towards RTW by explaining the predictors of RTW among workers with traumatic limb injuries. Copyright © 2015 Elsevier Ltd. All rights reserved.

  12. Threshold Digraphs

    PubMed Central

    Cloteaux, Brian; LaMar, M. Drew; Moseman, Elizabeth; Shook, James

    2014-01-01

    A digraph whose degree sequence has a unique vertex labeled realization is called threshold. In this paper we present several characterizations of threshold digraphs and their degree sequences, and show these characterizations to be equivalent. Using this result, we obtain a new, short proof of the Fulkerson-Chen theorem on degree sequences of general digraphs. PMID:26601029

  13. Adaptive optics for reduced threshold energy in femtosecond laser induced optical breakdown in water based eye model

    NASA Astrophysics Data System (ADS)

    Hansen, Anja; Krueger, Alexander; Ripken, Tammo

    2013-03-01

    In ophthalmic microsurgery tissue dissection is achieved using femtosecond laser pulses to create an optical breakdown. For vitreo-retinal applications the irradiance distribution in the focal volume is distorted by the anterior components of the eye causing a raised threshold energy for breakdown. In this work, an adaptive optics system enables spatial beam shaping for compensation of aberrations and investigation of wave front influence on optical breakdown. An eye model was designed to allow for aberration correction as well as detection of optical breakdown. The eye model consists of an achromatic lens for modeling the eye's refractive power, a water chamber for modeling the tissue properties, and a PTFE sample for modeling the retina's scattering properties. Aberration correction was performed using a deformable mirror in combination with a Hartmann-Shack-sensor. The influence of an adaptive optics aberration correction on the pulse energy required for photodisruption was investigated using transmission measurements for determination of the breakdown threshold and video imaging of the focal region for study of the gas bubble dynamics. The threshold energy is considerably reduced when correcting for the aberrations of the system and the model eye. Also, a raise in irradiance at constant pulse energy was shown for the aberration corrected case. The reduced pulse energy lowers the potential risk of collateral damage which is especially important for retinal safety. This offers new possibilities for vitreo-retinal surgery using femtosecond laser pulses.

  14. Representation of Vegetation and Other Nonerodible Elements in Aeolian Shear Stress Partitioning Models for Predicting Transport Threshold

    NASA Technical Reports Server (NTRS)

    King, James; Nickling, William G.; Gillies, John A.

    2005-01-01

    The presence of nonerodible elements is well understood to be a reducing factor for soil erosion by wind, but the limits of its protection of the surface and erosion threshold prediction are complicated by the varying geometry, spatial organization, and density of the elements. The predictive capabilities of the most recent models for estimating wind driven particle fluxes are reduced because of the poor representation of the effectiveness of vegetation to reduce wind erosion. Two approaches have been taken to account for roughness effects on sediment transport thresholds. Marticorena and Bergametti (1995) in their dust emission model parameterize the effect of roughness on threshold with the assumption that there is a relationship between roughness density and the aerodynamic roughness length of a surface. Raupach et al. (1993) offer a different approach based on physical modeling of wake development behind individual roughness elements and the partition of the surface stress and the total stress over a roughened surface. A comparison between the models shows the partitioning approach to be a good framework to explain the effect of roughness on entrainment of sediment by wind. Both models provided very good agreement for wind tunnel experiments using solid objects on a nonerodible surface. However, the Marticorena and Bergametti (1995) approach displays a scaling dependency when the difference between the roughness length of the surface and the overall roughness length is too great, while the Raupach et al. (1993) model's predictions perform better owing to the incorporation of the roughness geometry and the alterations to the flow they can cause.
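
    The Raupach et al. (1993) partition leads to a compact expression for the ratio of the bare-surface to the roughened-surface threshold friction velocity; a minimal sketch follows (the beta, sigma, and m values are illustrative, not fitted):

        def threshold_friction_velocity_ratio(lam, beta=90.0, sigma=1.0, m=0.5):
            """Raupach et al. (1993) shear-stress partition:
            R_t = [(1 - m*sigma*lam) * (1 + m*beta*lam)]**(-1/2).

            lam   : roughness density (frontal area index) of the nonerodible elements
            beta  : ratio of element to surface drag coefficients
            sigma : basal-to-frontal area ratio of the elements
            m     : accounts for spatial variability of the surface shear stress
            """
            return ((1.0 - m * sigma * lam) * (1.0 + m * beta * lam)) ** -0.5

        for lam in (0.01, 0.05, 0.1):
            print(lam, round(threshold_friction_velocity_ratio(lam), 3))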

  15. Discrimination thresholds of normal and anomalous trichromats: Model of senescent changes in ocular media density on the Cambridge Colour Test.

    PubMed

    Shinomori, Keizo; Panorgias, Athanasios; Werner, John S

    2016-03-01

    Age-related changes in chromatic discrimination along dichromatic confusion lines were measured with the Cambridge Colour Test (CCT). One hundred and sixty-two individuals (16 to 88 years old) with normal Rayleigh matches were the major focus of this paper. An additional 32 anomalous trichromats classified by their Rayleigh matches were also tested. All subjects were screened to rule out abnormalities of the anterior and posterior segments. Thresholds on all three chromatic vectors measured with the CCT showed age-related increases. Protan and deutan vector thresholds increased linearly with age while the tritan vector threshold was described with a bilinear model. Analysis and modeling demonstrated that the nominal vectors of the CCT are shifted by senescent changes in ocular media density, and a method for correcting the CCT vectors is demonstrated. A correction for these shifts indicates that classification among individuals of different ages is unaffected. New vector thresholds for elderly observers and for all age groups are suggested based on calculated tolerance limits.

  17. Predator harvesting in stage dependent predation models: insights from a threshold management policy.

    PubMed

    Costa, Michel Iskin da Silveira

    2008-11-01

    Stage dependent predation may give rise to the hydra effect--the increase of predator density at equilibrium as its mortality rate is raised. Management strategies that adjust predator harvest rates or quotas based on responses of populations to past changes in capture rates may eventually lead to a catastrophic collapse of predator species. A proposed threshold management policy avoids the hydra effect and its subsequent danger of predator extinction. Suggestions to extend the application of threshold policies in areas such as intermediate disturbance hypothesis, density-trait mediated interactions and non-optimal anti-predatory behavior are put forward.

  18. A population-based Habitable Zone perspective

    NASA Astrophysics Data System (ADS)

    Zsom, Andras

    2015-08-01

    What can we tell about exoplanet habitability if currently only the stellar properties, planet radius, and the incoming stellar flux are known? The Habitable Zone (HZ) is the region around stars where planets can harbor liquid water on their surfaces. The HZ is traditionally conceived as a sharp region around the star because it is calculated for one planet with specific properties, e.g., Earth-like or desert planets, or rocky planets with H2 atmospheres. Such a planet-specific approach is limiting because the planets' atmospheric and geophysical properties, which influence the surface climate and the presence of liquid water, are currently unknown but expected to be diverse. A statistical HZ description is outlined which does not select one specific planet type. Instead, the atmospheric and surface properties of exoplanets are treated as random variables and a continuous range of planet scenarios are considered. Various probability density functions are assigned to each observationally unconstrained random variable, and a combination of Monte Carlo sampling and climate modeling is used to generate synthetic exoplanet populations with known surface climates. Then, the properties of the liquid-water-bearing subpopulation are analyzed. Given our current observational knowledge of small exoplanets, the HZ takes the form of a weakly-constrained but smooth probability function. The model shows that the HZ has an inner edge: it is unlikely that planets receiving two to three times more stellar radiation than Earth can harbor liquid water. But a clear outer edge is not seen: a planet that receives a fraction of Earth's stellar radiation (1-10%) can be habitable, if the greenhouse effect of the atmosphere is strong enough. The main benefit of the population-based approach is that it will be refined over time as new data on exoplanets and their atmospheres become available.
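
    A toy version of the population-based idea (a zero-dimensional energy balance with assumed albedo and greenhouse-warming distributions, far simpler than the climate modeling described above) already reproduces the qualitative result of a fairly sharp inner edge and a diffuse outer edge:

        import numpy as np

        SIGMA_SB = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

        def liquid_water_fraction(stellar_flux_wm2, n=100_000, seed=0):
            """Fraction of a synthetic planet population with 273 K < T_surface < 373 K.

            T_eq = (S*(1 - A) / (4*sigma))**0.25 plus a randomly drawn greenhouse warming;
            the albedo and greenhouse distributions are assumptions, not fitted priors.
            """
            rng = np.random.default_rng(seed)
            albedo = rng.uniform(0.05, 0.7, n)
            greenhouse_k = rng.exponential(30.0, n)
            t_eq = (stellar_flux_wm2 * (1.0 - albedo) / (4.0 * SIGMA_SB)) ** 0.25
            t_surf = t_eq + greenhouse_k
            return float(np.mean((t_surf > 273.0) & (t_surf < 373.0)))

        for flux_rel_earth in (0.1, 0.5, 1.0, 2.0, 3.0):
            print(flux_rel_earth, round(liquid_water_fraction(flux_rel_earth * 1361.0), 3))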

  19. PT -breaking threshold in spatially asymmetric Aubry-André and Harper models: Hidden symmetry and topological states

    NASA Astrophysics Data System (ADS)

    Harter, Andrew K.; Lee, Tony E.; Joglekar, Yogesh N.

    2016-06-01

    Aubry-André-Harper lattice models, characterized by a reflection-asymmetric sinusoidally varying nearest-neighbor tunneling profile, are well known for their topological properties. We consider the fate of such models in the presence of balanced gain and loss potentials ±i γ located at reflection-symmetric sites. We predict that these models have a finite PT -breaking threshold only for specific locations of the gain-loss potential and uncover a hidden symmetry that is instrumental to the finite threshold strength. We also show that the topological edge states remain robust in the PT -symmetry-broken phase. Our predictions substantially broaden the possible experimental realizations of a PT -symmetric system.

  20. Bayesian Threshold Estimation

    ERIC Educational Resources Information Center

    Gustafson, S. C.; Costello, C. S.; Like, E. C.; Pierce, S. J.; Shenoy, K. N.

    2009-01-01

    Bayesian estimation of a threshold time (hereafter simply threshold) for the receipt of impulse signals is accomplished given the following: 1) data, consisting of the number of impulses received in a time interval from zero to one and the time of the largest time impulse; 2) a model, consisting of a uniform probability density of impulse time…

  2. Discontinuous non-equilibrium phase transition in a threshold Schloegl model for autocatalysis: Generic two-phase coexistence and metastability.

    PubMed

    Wang, Chi-Jen; Liu, Da-Jiang; Evans, James W

    2015-04-28

    Threshold versions of Schloegl's model on a lattice, which involve autocatalytic creation and spontaneous annihilation of particles, can provide a simple prototype for discontinuous non-equilibrium phase transitions. These models are equivalent to so-called threshold contact processes. A discontinuous transition between populated and vacuum states can occur when selecting a threshold of N ≥ 2 for the minimum number, N, of neighboring particles enabling autocatalytic creation at an empty site. Fundamental open questions remain given the lack of a thermodynamic framework for analysis. For a square lattice with N = 2, we show that phase coexistence occurs not at a unique value but for a finite range of particle annihilation rate (the natural control parameter). This generic two-phase coexistence also persists when perturbing the model to allow spontaneous particle creation. Such behavior contrasts with both the Gibbs phase rule for thermodynamic systems and previous analysis of this model. We find metastability near the transition corresponding to a non-zero effective line tension, again in contrast to previously suggested critical behavior. Mean-field type analysis, extended to treat spatially heterogeneous states, further elucidates model behavior.
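
    A minimal simulation sketch of the N = 2 threshold contact process on a periodic square lattice (the lattice size, run length, and random-sequential update scheme are illustrative choices; the apparent transition point depends on them):

        import numpy as np

        def simulate_threshold_cp(p_annihilate, L=32, sweeps=200, seed=0):
            """N = 2 threshold contact process on an L x L periodic square lattice.

            Occupied sites empty with relative rate p_annihilate; an empty site becomes
            occupied (relative rate 1) only if at least two of its four neighbours are
            occupied. Returns the final particle density, starting from the populated state.
            """
            rng = np.random.default_rng(seed)
            lat = np.ones((L, L), dtype=np.int8)
            p_kill = p_annihilate / (1.0 + p_annihilate)
            for _ in range(sweeps * L * L):
                i, j = rng.integers(L, size=2)
                if lat[i, j] == 1:
                    if rng.random() < p_kill:
                        lat[i, j] = 0
                else:
                    nn = (lat[(i + 1) % L, j] + lat[(i - 1) % L, j]
                          + lat[i, (j + 1) % L] + lat[i, (j - 1) % L])
                    if nn >= 2 and rng.random() < 1.0 - p_kill:
                        lat[i, j] = 1
            return lat.mean()

        # Scanning the annihilation rate should reveal the discontinuous populated-to-vacuum
        # transition (its apparent location depends on system size and run length).
        for p in (0.05, 0.10, 0.15):
            print(p, simulate_threshold_cp(p))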

  6. A Multinomial Model for Identifying Significant Pure-Tone Threshold Shifts

    ERIC Educational Resources Information Center

    Schlauch, Robert S.; Carney, Edward

    2007-01-01

    Purpose: Significant threshold differences on retest for pure-tone audiometry are often evaluated by application of ad hoc rules, such as a shift in a pure-tone average or in 2 adjacent frequencies that exceeds a predefined amount. Rules that are so derived do not consider the probability of observing a particular audiogram. Methods: A general…
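
    The point about quantifying the chance of flagging an unchanged audiogram can be illustrated with a small enumeration. The sketch below assumes, purely for illustration, K independent test frequencies, each with probability q of showing a criterion-exceeding retest shift by chance, and evaluates the false-positive rate of the ad hoc "two adjacent frequencies" rule; q, K and the rule are placeholders, not the article's model.

        from itertools import product

        q, K = 0.05, 6   # hypothetical per-frequency chance-shift probability and frequency count

        def rule_fires(pattern):
            # ad hoc rule: criterion shifts at two adjacent frequencies
            return any(a and b for a, b in zip(pattern, pattern[1:]))

        false_positive_rate = 0.0
        for pattern in product((0, 1), repeat=K):
            prob = 1.0
            for bit in pattern:
                prob *= q if bit else 1.0 - q
            if rule_fires(pattern):
                false_positive_rate += prob

        print(f"probability the rule flags an unchanged audiogram: {false_positive_rate:.4f}")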

  7. "Getting Stuck" in Analogue Electronics: Threshold Concepts as an Explanatory Model

    ERIC Educational Resources Information Center

    Harlow, A.; Scott, J.; Peter, M.; Cowie, B.

    2011-01-01

    Could the challenge of mastering threshold concepts be a potential factor that influences a student's decision to continue in electronics engineering? This was the question that led to a collaborative research project between educational researchers and the Faculty of Engineering in a New Zealand university. This paper deals exclusively with the…

  8. Single-Event Upset (SEU) model verification and threshold determination using heavy ions in a bipolar static RAM

    NASA Technical Reports Server (NTRS)

    Zoutendyk, J. A.; Smith, L. S.; Soli, G. A.; Thieberger, P.; Wegner, H. E.

    1985-01-01

    Single-Event Upset (SEU) response of a bipolar low-power Schottky-diode-clamped TTL static RAM has been observed using Br ions in the 100-240 MeV energy range and O ions in the 20-100 MeV range. These data complete the experimental verification of circuit-simulation SEU modeling for this device. The threshold for onset of SEU has been observed by the variation of energy, ion species and angle of incidence. The results obtained from the computer circuit-simulation modeling and experimental model verification demonstrate a viable methodology for modeling SEU in bipolar integrated circuits.

  10. Ethics in population-based genetic research.

    PubMed

    DeCamp, Matthew; Sugarman, Jeremy

    2004-01-01

    Population-based genetic research, including genetic epidemiology, shows tremendous potential to elucidate the role of genes as causal factors in complex and common human diseases. Like all research with human subjects, full realization of these benefits requires careful attention to its ethical conduct, establishing an appropriate balance between individual protections and the advancement of scientific and medical knowledge. This article reviews the growing literature on genetics research and ethics to describe some of the fundamental ethical issues in population-based genetics research, including research design, recruitment and informed consent, and dealing with research results. Its focus is on areas where consensus is forming and where future work is needed.

  11. The frequency response function and sinusoidal threshold properties of the Hodgkin-Huxley model of action potential encoding.

    PubMed

    French, A S

    1984-01-01

    The behavior of the space-clamped Hodgkin-Huxley model has been studied using band-limited white noise (0-50 Hz) as the input membrane current and taking the output as a point process in time given by the peaks of the action potentials. The frequency response and coherence functions were measured by use of the Fourier transform and digital filtering of the spike train. The results obtained are in good agreement with those already published for the simple integrator and leaky integrator models of neuronal encoding, as well as the earlier studies on the response of the Hodgkin-Huxley model to steady currents. In addition, the threshold of the model to sinusoidal membrane currents has been measured as a function of frequency over the range of 0.1-100 Hz. This shows a relatively constant level up to 2 Hz and then a clear minimum at 60 Hz, in agreement with measured thresholds of squid axons. These results are discussed in terms of the possible contributions of action potential encoding mechanisms to the frequency responses and sinusoidal thresholds which have been measured for rapidly adapting receptors.

  12. A modelling study of locomotion-induced hyperpolarization of voltage threshold in cat lumbar motoneurones

    PubMed Central

    Dai, Yue; Jones, Kelvin E; Fedirchuk, Brent; McCrea, David A; Jordan, Larry M

    2002-01-01

    During fictive locomotion the excitability of adult cat lumbar motoneurones is increased by a reduction (a mean hyperpolarization of ≈6.0 mV) of voltage threshold (Vth) for action potential (AP) initiation that is accompanied by only small changes in AP height and width. Further examination of the experimental data in the present study confirms that Vth lowering is present to a similar degree in both the hyperpolarized and depolarized portions of the locomotor step cycle. This indicates that Vth reduction is a modulation of motoneurone membrane currents throughout the locomotor state rather than being related to the phasic synaptic input within the locomotor cycle. Potential ionic mechanisms of this locomotor-state-dependent increase in excitability were examined using three five-compartment models of the motoneurone innervating slow, fast fatigue resistant and fast fatigable muscle fibres. Passive and active membrane conductances were set to produce input resistance, rheobase, afterhyperpolarization (AHP) and membrane time constant values similar to those measured in adult cat motoneurones in non-locomoting conditions. The parameters of 10 membrane conductances were then individually altered in an attempt to replicate the hyperpolarization of Vth that occurs in decerebrate cats during fictive locomotion. The goal was to find conductance changes that could produce a greater than 3 mV hyperpolarization of Vth with only small changes in AP height (< 3 mV) and width (< 1.2 ms). Vth reduction without large changes in AP shape could be produced either by increasing fast sodium current or by reducing delayed rectifier potassium current. The most effective Vth reductions were achieved by either increasing the conductance of fast sodium channels or by hyperpolarizing the voltage dependency of their activation. These changes were particularly effective when localized to the initial segment. Reducing the conductance of delayed rectifier channels or depolarizing their

  13. Validation of a thermal threshold nociceptive model in bearded dragons (Pogona vitticeps).

    PubMed

    Couture, Émilie L; Monteiro, Beatriz P; Aymen, Jessica; Troncy, Eric; Steagall, Paulo V

    2017-05-01

    To validate a thermal threshold (TT) nociceptive model in bearded dragons (Pogona vitticeps) and to document TT changes after administration of morphine. A two-part randomized, blinded, controlled, experimental study. Five adult bearded dragons (242-396 g). A TT device delivered a ramped nociceptive stimulus (0.6 °C second(-1)) to the medial thigh until a response (leg kick/escape behavior) was observed or maximum (cut-off) temperature of 62 °C was reached. In phase I, period 1, six TT readings were determined at 20 minute intervals for evaluation of repeatability. Two of these readings were randomly assigned to be sham to assess specificity of the behavioral response. The same experiment was repeated 2 weeks later (period 2) to test reproducibility. In phase II, animals were administered either intramuscular morphine (10 mg kg(-1)) or saline 0.9%. TTs (maximum 68 °C) were determined before and 2, 4, 8, 12 and 24 hours after treatment administration. Data were analyzed using one-way anova (temporal changes and repeatability) and paired t tests (reproducibility and treatment comparisons) using Bonferroni correction (p < 0.05). Mean TT values were 57.4 ± 3.8 °C and 57.3 ± 4.3 °C for periods 1 and 2, respectively. Data were repeatable within each period (p = 0.83 and p = 0.07, respectively). Reproducibility between periods was remarkable (p = 0.86). False-positive responses during sham testing were 10%. TTs were significantly increased after morphine administration at 2, 4 and 8 hours compared with baseline, and at 2 and 4 hours compared with saline 0.9%. The highest TT was 67.7 ± 0.7 °C at 4 hours after morphine administration. Testing was repeatable, reproducible and well tolerated in bearded dragons. TT nociceptive testing detected morphine administration and may be suitable for studying opioid regimens in bearded dragons. Copyright © 2017 Association of Veterinary Anaesthetists and American College of Veterinary Anesthesia and

  14. Analytical threshold voltage modeling of ion-implanted strained-Si double-material double-gate (DMDG) MOSFETs

    NASA Astrophysics Data System (ADS)

    Goel, Ekta; Singh, Balraj; Kumar, Sanjay; Singh, Kunal; Jit, Satyabrata

    2017-04-01

    A two-dimensional threshold voltage model of ion-implanted strained-Si double-material double-gate MOSFETs has been developed based on the solution of the two-dimensional Poisson's equation in the channel region using the parabolic approximation method. Novelty of the proposed device structure lies in the amalgamation of the advantages of both the strained-Si channel and double-material double-gate structure with a vertical Gaussian-like doping profile. The effects of different device parameters (such as device channel length, gate length ratios, germanium mole fraction) and doping parameters (such as projected range, straggle parameter) on threshold voltage of the proposed structure have been investigated. It is observed that the subthreshold performance of the device can be improved by simply controlling the doping parameters while maintaining other device parameters constant. The modeling results show a good agreement with the numerical simulation data obtained by using ATLAS™, a 2D device simulator from SILVACO.
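
    For readers unfamiliar with the parabolic approximation mentioned above, the generic form of the step is sketched below in LaTeX; the symbols, the doping term and the strong-inversion condition are textbook-style placeholders rather than the paper's exact formulation.

        % 2D Poisson's equation in the depleted p-type channel and the parabolic ansatz
        % for the potential along the film depth y at position x along the channel:
        \[
          \frac{\partial^{2}\phi(x,y)}{\partial x^{2}} +
          \frac{\partial^{2}\phi(x,y)}{\partial y^{2}} =
          \frac{q\,N_{A}(y)}{\varepsilon_{\mathrm{Si}}},
          \qquad
          \phi(x,y) \approx c_{0}(x) + c_{1}(x)\,y + c_{2}(x)\,y^{2}.
        \]
        % The coefficients c_i(x) follow from the oxide boundary conditions at the two gates;
        % the threshold voltage is then read off as the gate bias at which the minimum surface
        % potential along the channel reaches the strong-inversion value (about twice the Fermi potential).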

  16. Cross-matching: a modified cross-correlation underlying threshold energy model and match-based depth perception

    PubMed Central

    Doi, Takahiro; Fujita, Ichiro

    2014-01-01

    Three-dimensional visual perception requires correct matching of images projected to the left and right eyes. The matching process is faced with an ambiguity: part of one eye's image can be matched to multiple parts of the other eye's image. This stereo correspondence problem is complicated for random-dot stereograms (RDSs), because dots with an identical appearance produce numerous potential matches. Despite such complexity, human subjects can perceive a coherent depth structure. A coherent solution to the correspondence problem does not exist for anticorrelated RDSs (aRDSs), in which luminance contrast is reversed in one eye. Neurons in the visual cortex reduce disparity selectivity for aRDSs progressively along the visual processing hierarchy. A disparity-energy model followed by threshold nonlinearity (threshold energy model) can account for this reduction, providing a possible mechanism for the neural matching process. However, the essential computation underlying the threshold energy model is not clear. Here, we propose that a nonlinear modification of cross-correlation, which we term “cross-matching,” represents the essence of the threshold energy model. We placed half-wave rectification within the cross-correlation of the left-eye and right-eye images. The disparity tuning derived from cross-matching was attenuated for aRDSs. We simulated a psychometric curve as a function of graded anticorrelation (graded mixture of aRDS and normal RDS); this simulated curve reproduced the match-based psychometric function observed in human near/far discrimination. The dot density was 25% for both simulation and observation. We predicted that as the dot density increased, the performance for aRDSs should decrease below chance (i.e., reversed depth), and the level of anticorrelation that nullifies depth perception should also decrease. We suggest that cross-matching serves as a simple computation underlying the match-based disparity signals in stereoscopic depth

  17. Combining physiological threshold knowledge to species distribution models is key to improving forecasts of the future niche for macroalgae.

    PubMed

    Martínez, Brezo; Arenas, Francisco; Trilla, Alba; Viejo, Rosa M; Carreño, Francisco

    2015-04-01

    Species distribution models (SDM) are a useful tool for predicting species range shifts in response to global warming. However, they do not explore the mechanisms underlying biological processes, making it difficult to predict shifts outside the environmental gradient where the model was trained. In this study, we combine correlative SDMs and knowledge on physiological limits to provide more robust predictions. The thermal thresholds obtained in growth and survival experiments were used as proxies of the fundamental niches of two foundational marine macrophytes. The geographic projections of these species' distributions obtained using these thresholds and existing SDMs were similar in areas where the species are either absent-rare or frequent and where their potential and realized niches match, reaching consensus predictions. The cold-temperate foundational seaweed Himanthalia elongata was predicted to become extinct at its southern limit in northern Spain in response to global warming, whereas the occupancy of southern-lusitanic Bifurcaria bifurcata was expected to increase. Combined approaches such as this one may also highlight geographic areas where models disagree potentially due to biotic factors. Physiological thresholds alone tended to over-predict species prevalence, as they cannot identify absences in climatic conditions within the species' range of physiological tolerance or at the optima. Although SDMs tended to have higher sensitivity than threshold models, they may include regressions that do not reflect causal mechanisms, constraining their predictive power. We present a simple example of how combining correlative and mechanistic knowledge provides a rapid way to gain insight into a species' niche resulting in consistent predictions and highlighting potential sources of uncertainty in forecasted responses to climate change.
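
    A minimal sketch of the kind of combination argued for above is shown below, assuming a correlative-SDM suitability layer, a projected temperature layer and an experimentally derived survival threshold; all names and numbers are illustrative placeholders, and the real analysis uses full SDMs and growth/survival experiments rather than a single cutoff.

        import numpy as np

        suitability = np.array([0.82, 0.64, 0.55, 0.31, 0.12])   # hypothetical SDM output per cell
        summer_sst = np.array([14.0, 17.5, 19.0, 21.5, 24.0])    # projected temperature per cell (deg C)

        lethal_sst = 22.0        # illustrative physiological survival threshold
        presence_cutoff = 0.5    # illustrative SDM occupancy cutoff

        sdm_presence = suitability >= presence_cutoff
        within_tolerance = summer_sst <= lethal_sst

        consensus = sdm_presence & within_tolerance      # both approaches predict persistence
        disagreement = sdm_presence ^ within_tolerance   # cells flagged for closer scrutiny

        print("consensus presence:", consensus)
        print("cells where the two approaches disagree:", disagreement)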

  18. Cross-matching: a modified cross-correlation underlying threshold energy model and match-based depth perception.

    PubMed

    Doi, Takahiro; Fujita, Ichiro

    2014-01-01

    Three-dimensional visual perception requires correct matching of images projected to the left and right eyes. The matching process is faced with an ambiguity: part of one eye's image can be matched to multiple parts of the other eye's image. This stereo correspondence problem is complicated for random-dot stereograms (RDSs), because dots with an identical appearance produce numerous potential matches. Despite such complexity, human subjects can perceive a coherent depth structure. A coherent solution to the correspondence problem does not exist for anticorrelated RDSs (aRDSs), in which luminance contrast is reversed in one eye. Neurons in the visual cortex reduce disparity selectivity for aRDSs progressively along the visual processing hierarchy. A disparity-energy model followed by threshold nonlinearity (threshold energy model) can account for this reduction, providing a possible mechanism for the neural matching process. However, the essential computation underlying the threshold energy model is not clear. Here, we propose that a nonlinear modification of cross-correlation, which we term "cross-matching," represents the essence of the threshold energy model. We placed half-wave rectification within the cross-correlation of the left-eye and right-eye images. The disparity tuning derived from cross-matching was attenuated for aRDSs. We simulated a psychometric curve as a function of graded anticorrelation (graded mixture of aRDS and normal RDS); this simulated curve reproduced the match-based psychometric function observed in human near/far discrimination. The dot density was 25% for both simulation and observation. We predicted that as the dot density increased, the performance for aRDSs should decrease below chance (i.e., reversed depth), and the level of anticorrelation that nullifies depth perception should also decrease. We suggest that cross-matching serves as a simple computation underlying the match-based disparity signals in stereoscopic depth perception.

  19. Revisiting the Economic Injury Level and Economic Threshold Model for Potato Leafhopper (Hemiptera: Cicadellidae) in Alfalfa.

    PubMed

    Chasen, Elissa M; Undersander, Dan J; Cullen, Eileen M

    2015-08-01

    The economic injury level for potato leafhopper, Empoasca fabae (Harris), in alfalfa (Medicago sativa L.) was developed over 30 yr ago. In response to increasing market value of alfalfa, farmers and consultants are interested in reducing the economic threshold for potato leafhopper in alfalfa. To address this question, caged field trials were established on two consecutive potato leafhopper susceptible crops in 2013. Field cages were infested with a range of potato leafhopper densities to create a linear regression of alfalfa yield response. The slopes, or yield loss per insect, for the linear regressions of both trials were used to calculate an economic injury level for a range of current alfalfa market values and control costs. This yield-loss relationship is the first quantification that could be used to help assess whether the economic threshold should be lowered, given the increased market value of alfalfa.
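
    The economic injury level arithmetic referred to above follows a standard form; the snippet below is a hedged illustration with made-up numbers (they are not the study's estimates), using the regression slope as the yield loss per insect.

        # Illustrative economic injury level (EIL) calculation; all values are placeholders.
        control_cost = 15.0              # $ per acre for an insecticide application
        crop_value = 200.0               # $ per ton of alfalfa
        yield_loss_per_insect = 0.01     # tons per acre lost per leafhopper (regression slope)

        eil = control_cost / (crop_value * yield_loss_per_insect)   # leafhoppers per sample unit
        economic_threshold = 0.75 * eil  # common practice: act somewhat below the EIL
        print(f"EIL ~ {eil:.1f} leafhoppers; economic threshold ~ {economic_threshold:.1f}")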

  20. Model-Independent Determination of the Compositeness of Near-Threshold Quasibound States

    NASA Astrophysics Data System (ADS)

    Kamiya, Yuki; Hyodo, Tetsuo

    We study the compositeness of near-threshold states to clarify the internal structure of exotic hadron candidates. Within the framework of effective field theory, we extend Weinberg's weak-binding relation to include the nearby CDD (Castillejo-Dalitz-Dyson) pole contribution with the help of the Padé approximant. Finally, using the extended relation, we conclude that the CDD pole contribution to the Λ(1405) baryon in the K̄N amplitude is negligible.
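
    For orientation, one commonly quoted form of the weak-binding relation that the authors extend is sketched below in LaTeX; conventions for the correction term vary between papers, so this is a schematic rather than the exact expression used in the record above.

        \[
          a_{0} \;=\; R\left\{ \frac{2X}{1+X}
                + \mathcal{O}\!\left(\frac{R_{\mathrm{typ}}}{R}\right) \right\},
          \qquad R \equiv \frac{1}{\sqrt{2\mu B}} ,
        \]
        % a_0: scattering length, B: binding energy of the quasibound state, \mu: reduced mass,
        % X: compositeness (weight of the composite component), R_typ: typical range of the interaction.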

  1. Holes in the Bathtub: Water Table Dependent Services and Threshold Behavior in an Economic Model of Groundwater Extraction

    NASA Astrophysics Data System (ADS)

    Kirk-lawlor, N. E.; Edwards, E. C.

    2012-12-01

    In many groundwater systems, the height of the water table must be above certain thresholds for some types of surface flow to exist. Examples of flows that depend on water table elevation include groundwater baseflow to river systems, groundwater flow to wetland systems, and flow to springs. Meeting many of the goals of sustainable water resource management requires maintaining these flows at certain rates. Water resource management decisions invariably involve weighing tradeoffs between different possible usage regimes and the economic consequences of potential management choices are an important factor in these tradeoffs. Policies based on sustainability may have a social cost from forgoing present income. This loss of income may be worth bearing, but should be well understood and carefully considered. Traditionally, the economic theory of groundwater exploitation has relied on the assumption of a single-cell or "bathtub" aquifer model, which offers a simple means to examine complex interactions between water user and hydrologic system behavior. However, such a model assumes a closed system and does not allow for the simulation of groundwater outflows that depend on water table elevation (e.g. baseflow, springs, wetlands), even though those outflows have value. We modify the traditional single-cell aquifer model by allowing for outflows when the water table is above certain threshold elevations. These thresholds behave similarly to holes in a bathtub, where the outflow is a positive function of the height of the water table above the threshold and the outflow is lost when the water table drops below the threshold. We find important economic consequences to this representation of the groundwater system. The economic value of services provided by threshold-dependent outflows (including non-market value), such as ecosystem services, can be incorporated. The value of services provided by these flows may warrant maintaining the water table at higher levels than would

  2. Changing the Risk Paradigms Can be Good for Our Health: J-Shaped, Linear and Threshold Dose-Response Models.

    PubMed

    Ricci, P F; Straja, S R; Cox, A L

    2012-01-01

    Both the linear-at-low-doses, no-threshold (LNT) and the threshold (S-shaped) dose-response models lead to no benefit from low exposure. We propose three new models that allow and include, but do not require - unlike LNT and S-shaped models - this strong assumption. We also provide the means to calculate benefits associated with bi-phasic biological behaviors, when they occur, and propose three hormetic (phasic) models - the J-shaped, inverse J-shaped and min-max - together with a method for calculating the direct benefits associated with the J and inverse J-shaped models. The J-shaped and min-max models for mutagens and carcinogenic agents include an experimentally justified repair stage for toxic and carcinogenic damage. We link these to stochastic transition models for cancer and show how abrupt transitions in cancer hazard rates, as functions of exposure concentrations and durations, can emerge naturally in large cell populations even when the rates of cell-level events increase smoothly (e.g., proportionally) with concentration. In this very general family of models, J-shaped dose-response curves emerge. These results are universal, i.e., independent of specific biological details represented by the stochastic transition networks. Thus, using them suggests a more complete and realistic way to assess risks at low doses or dose-rates.

  3. Application of physiologically-based toxicokinetic modelling in oral-to-dermal extrapolation of threshold doses of cosmetic ingredients.

    PubMed

    Gajewska, M; Worth, A; Urani, C; Briesen, H; Schramm, K-W

    2014-06-16

    The application of physiologically based toxicokinetic (PBTK) modelling in route-to-route (RtR) extrapolation of three cosmetic ingredients: coumarin, hydroquinone and caffeine is shown in this study. In particular, the oral no-observed-adverse-effect-level (NOAEL) doses of these chemicals are extrapolated to their corresponding dermal values by comparing the internal concentrations resulting from oral and dermal exposure scenarios. The PBTK model structure has been constructed to give a good simulation performance of biochemical processes within the human body. The model parameters are calibrated based on oral and dermal experimental data for the Caucasian population available in the literature. Particular attention is given to modelling the absorption stage (skin and gastrointestinal tract) in the form of several sub-compartments. This gives better model prediction results when compared to those of a PBTK model with a simpler structure of the absorption barrier. In addition, the role of quantitative structure-property relationships (QSPRs) in predicting skin penetration is evaluated for the three substances with a view to incorporating QSPR-predicted penetration parameters in the PBTK model when experimental values are lacking. Finally, PBTK modelling is used, first to extrapolate oral NOAEL doses derived from rat studies to humans, and then to simulate internal systemic/liver concentrations - Area Under Curve (AUC) and peak concentration - resulting from specified dermal and oral exposure conditions. Based on these simulations, AUC-based dermal thresholds for the three case study compounds are derived and compared with the experimentally obtained oral threshold (NOAEL) values.

  4. Knickpoint Generation and Persistence Following Base-Level Fall: An Examination of Erosional Thresholds in Sediment Flux Dependent Erosion Models

    NASA Astrophysics Data System (ADS)

    Crosby, B. T.; Whipple, K. X.; Gasparini, N. M.; Wobus, C. W.

    2005-12-01

    Non-lithologic knickpoints, or discrete convexities in longitudinal river profiles, are commonly considered to be the mobile, upstream extent of a transient incisional signal. Downstream of the knickpoint, the landscape is responding to a recent change in base level, uplift rate or climatic condition, while upstream of the knickpoint, the landscape retains its relict form, relatively ignorant of the transient signal. Though this model of knickpoint mobility and their capacity to communicate incisional signals throughout basins works well with standard formulations of the stream power erosion model, recently developed sediment flux dependent erosion models contain explicit thresholds that limit the upstream extent of knickpoint-mediated fluvial adjustment. Sediment flux dependent erosion models fail to communicate incisional signals at small drainage areas as sediment and water discharges are insufficient to effectively erode the bed. As well, if knickpoint slopes increase beyond a threshold value, sediment impacts against the bed become too infrequent and too oblique to continue knickpoint propagation by fluvial mechanisms. This threshold in fluvial erosion could lead to the stagnation of incisional signals and the generation of hanging valleys. This theoretical expectation aligns with our observation that in numerous actively incising landscapes around the world, relict low drainage area basins are often found elevated high above and disconnected from the mainstem by extremely over-steepened channel reaches often composed of one or more near-vertical steps. In order to better understand how river networks respond during transient pulses of incision, we employ a numerical landscape evolution model (CHILD) to test the sensitivity of three different sediment flux dependent erosion models to different base-level fall scenarios. This technique allows us to observe the propagation of the signal throughout a fluvial network composed of tributaries of variable

  5. Application of Johnson et al.'s speciation threshold model to apparent colonization times of island biotas.

    PubMed

    Ricklefs, Robert E; Bermingham, Eldredge

    2004-08-01

    Understanding patterns of diversity can be furthered by analysis of the dynamics of colonization, speciation, and extinction on islands using historical information provided by molecular phylogeography. The land birds of the Lesser Antilles are one of the most thoroughly described regional faunas in this context. In an analysis of colonization times, Ricklefs and Bermingham (2001) found that the cumulative distribution of lineages with respect to increasing time since colonization exhibits a striking change in slope at a genetic distance of about 2% mitochondrial DNA sequence divergence (about one million years). They further showed how this heterogeneity could be explained by either an abrupt increase in colonization rates or a mass extinction event. Cherry et al. (2002), referring to a model developed by Johnson et al. (2000), argued instead that the pattern resulted from a speciation threshold for reproductive isolation of island populations from their continental source populations. Prior to this threshold, genetic divergence is slowed by migration from the source, and species of varying age accumulate at a low genetic distance. After the threshold is reached, source and island populations diverge more rapidly, creating heterogeneity in the distribution of apparent ages of island taxa. We simulated Johnson et al.'s speciation-threshold model, incorporating genetic divergence at rate k and fixation at rate M of genes that have migrated between the source and the island population. Fixation resets the divergence clock to zero. The speciation-threshold model fits the distribution of divergence times of Lesser Antillean birds well with biologically plausible parameter estimates. Application of the model to the Hawaiian avifauna, which does not exhibit marked heterogeneity of genetic divergence, and the West Indian herpetofauna, which does, required unreasonably high migration-fixation rates, several orders of magnitude greater than the colonization rate. However

  6. Heritability of Autism Spectrum Disorder in a UK Population-Based Twin Sample

    PubMed Central

    Colvert, Emma; Tick, Beata; McEwen, Fiona; Stewart, Catherine; Curran, Sarah R.; Woodhouse, Emma; Gillan, Nicola; Hallett, Victoria; Lietz, Stephanie; Garnett, Tracy; Ronald, Angelica; Plomin, Robert; Rijsdijk, Frühling; Happé, Francesca; Bolton, Patrick

    2016-01-01

    IMPORTANCE Most evidence to date highlights the importance of genetic influences on the liability to autism and related traits. However, most of these findings are derived from clinically ascertained samples, possibly missing individuals with subtler manifestations, and obtained estimates may not be representative of the population. OBJECTIVES To establish the relative contributions of genetic and environmental factors in liability to autism spectrum disorder (ASD) and a broader autism phenotype in a large population-based twin sample and to ascertain the genetic/environmental relationship between dimensional trait measures and categorical diagnostic constructs of ASD. DESIGN, SETTING, AND PARTICIPANTS We used data from the population-based cohort Twins Early Development Study, which included all twin pairs born in England and Wales from January 1, 1994, through December 31, 1996. We performed joint continuous-ordinal liability threshold model fitting using the full information maximum likelihood method to estimate genetic and environmental parameters of covariance. Twin pairs underwent the following assessments: the Childhood Autism Spectrum Test (CAST) (6423 pairs; mean age, 7.9 years), the Development and Well-being Assessment (DAWBA) (359 pairs; mean age, 10.3 years), the Autism Diagnostic Observation Schedule (ADOS) (203 pairs; mean age, 13.2 years), the Autism Diagnostic Interview–Revised (ADI-R) (205 pairs; mean age, 13.2 years), and a best-estimate diagnosis (207 pairs). MAIN OUTCOMES AND MEASURES Participants underwent screening using a population-based measure of autistic traits (CAST assessment), structured diagnostic assessments (DAWBA, ADI-R, and ADOS), and a best-estimate diagnosis. RESULTS On all ASD measures, correlations among monozygotic twins (range, 0.77-0.99) were significantly higher than those for dizygotic twins (range, 0.22-0.65), giving heritability estimates of 56% to 95%. The covariance of CAST and ASD diagnostic status (DAWBA, ADOS

  7. Porcine skin visible lesion thresholds for near-infrared lasers including modeling at two pulse durations and spot sizes.

    PubMed

    Cain, C P; Polhamus, G D; Roach, W P; Stolarski, D J; Schuster, K J; Stockton, K L; Rockwell, B A; Chen, Bo; Welch, A J

    2006-01-01

    With the advent of such systems as the airborne laser and advanced tactical laser, high-energy lasers that use 1315-nm wavelengths in the near-infrared band will soon present a new laser safety challenge to armed forces and civilian populations. Experiments in nonhuman primates using this wavelength have demonstrated a range of ocular injuries, including corneal, lenticular, and retinal lesions as a function of pulse duration. American National Standards Institute (ANSI) laser safety standards have traditionally been based on experimental data, and there is scant data for this wavelength. We are reporting minimum visible lesion (MVL) threshold measurements using a porcine skin model for two different pulse durations and spot sizes for this wavelength. We also compare our measurements to results from our model based on the heat transfer equation and rate process equation, together with actual temperature measurements on the skin surface using a high-speed infrared camera. Our MVL-ED50 thresholds for long pulses (350 μs) at 24-h postexposure are measured to be 99 and 83 J cm(-2) for spot sizes of 0.7 and 1.3 mm diam, respectively. Q-switched laser pulses of 50 ns have a lower threshold of 11 J cm(-2) for a 5-mm-diam top-hat laser pulse.

  8. Porcine skin visible lesion thresholds for near-infrared lasers including modeling at two pulse durations and spot sizes

    NASA Astrophysics Data System (ADS)

    Cain, Clarence P.; Polhamus, Garrett D.; Roach, William P.; Stolarski, David J.; Schuster, Kurt J.; Stockton, Kevin; Rockwell, Benjamin A.; Chen, Bo; Welch, Ashley J.

    2006-07-01

    With the advent of such systems as the airborne laser and advanced tactical laser, high-energy lasers that use 1315-nm wavelengths in the near-infrared band will soon present a new laser safety challenge to armed forces and civilian populations. Experiments in nonhuman primates using this wavelength have demonstrated a range of ocular injuries, including corneal, lenticular, and retinal lesions as a function of pulse duration. American National Standards Institute (ANSI) laser safety standards have traditionally been based on experimental data, and there is scant data for this wavelength. We are reporting minimum visible lesion (MVL) threshold measurements using a porcine skin model for two different pulse durations and spot sizes for this wavelength. We also compare our measurements to results from our model based on the heat transfer equation and rate process equation, together with actual temperature measurements on the skin surface using a high-speed infrared camera. Our MVL-ED50 thresholds for long pulses (350 µs) at 24-h postexposure are measured to be 99 and 83 Jcm-2 for spot sizes of 0.7 and 1.3 mm diam, respectively. Q-switched laser pulses of 50 ns have a lower threshold of 11 Jcm-2 for a 5-mm-diam top-hat laser pulse.
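
    The "rate process equation" invoked in the two records above is usually an Arrhenius damage integral driven by the heat-transfer solution; a generic form is sketched below in LaTeX, with the caveat that the coefficients are tissue-specific fit parameters and the exact formulation used by the authors may differ.

        \[
          \Omega(\tau) \;=\; \int_{0}^{\tau} A \,
          \exp\!\left( -\frac{E_{a}}{R\,T(t)} \right) \mathrm{d}t ,
        \]
        % T(t): local skin temperature history from the heat-transfer model, A: frequency factor,
        % E_a: activation energy, R: gas constant; Omega = 1 is conventionally taken as the
        % threshold for a minimum visible lesion.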

  9. Determination of navigation FDI thresholds using a Markov model. [Failure Detection and Identification in triplex inertial platform systems for Shuttle entry

    NASA Technical Reports Server (NTRS)

    Walker, B. K.; Gai, E.

    1978-01-01

    A method for determining time-varying Failure Detection and Identification (FDI) thresholds for single sample decision functions is described in the context of a triplex system of inertial platforms. A cost function consisting of the probability of vehicle loss due to FDI decision errors is minimized. A discrete Markov model is constructed from which this cost can be determined as a function of the decision thresholds employed to detect and identify the first and second failures. Optimal thresholds are determined through the use of parameter optimization techniques. The application of this approach to threshold determination is illustrated for the Space Shuttle's inertial measurement instruments.
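
    The cost trade-off that drives the threshold choice can be illustrated with a deliberately simplified, static stand-in: assume a scalar Gaussian decision function with unit-variance noise and a known failure signature, and pick the threshold minimizing a weighted sum of false-alarm and missed-detection probabilities. The Markov model in the paper propagates such error probabilities through failure states over the whole entry trajectory; the numbers and cost weights below are hypothetical.

        import numpy as np
        from math import erf

        def std_normal_cdf(z):
            return 0.5 * (1.0 + erf(z / np.sqrt(2.0)))

        sigma, mu_fail = 1.0, 3.0              # noise level and mean shift caused by a failure
        c_false_alarm, c_missed = 1.0, 5.0     # relative costs of the two decision errors

        thresholds = np.linspace(0.5, 5.0, 200)
        p_false_alarm = np.array([1.0 - std_normal_cdf(t / sigma) for t in thresholds])
        p_missed = np.array([std_normal_cdf((t - mu_fail) / sigma) for t in thresholds])
        cost = c_false_alarm * p_false_alarm + c_missed * p_missed

        print(f"cost-minimizing threshold: {thresholds[int(np.argmin(cost))]:.2f}")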

  10. Spatiotemporal and Spatial Threshold Models for Relating UV Exposures and Skin Cancer in the Central United States

    PubMed Central

    Hatfield, Laura A.; Hoffbeck, Richard W.; Alexander, Bruce H.; Carlin, Bradley P.

    2009-01-01

    The exact mechanisms relating exposure to ultraviolet (UV) radiation and elevated risk of skin cancer remain the subject of debate. For example, there is disagreement on whether the main risk factor is duration of the exposure, its intensity, or some combination of both. There is also uncertainty regarding the form of the dose-response curve, with many authors believing only exposures exceeding a given (but unknown) threshold are important. In this paper we explore methods to estimate such thresholds using hierarchical spatial logistic models based on a sample of a cohort of x-ray technologists for whom we have self-reports of time spent in the sun and numbers of blistering sunburns in childhood. A preliminary goal is to explore the temporal pattern of UV exposure and its gradient. Changes here would imply that identical exposure self-reports from different calendar years may correspond to differing cancer risks. PMID:20161236

  11. Spatiotemporal and Spatial Threshold Models for Relating UV Exposures and Skin Cancer in the Central United States.

    PubMed

    Hatfield, Laura A; Hoffbeck, Richard W; Alexander, Bruce H; Carlin, Bradley P

    2009-06-15

    The exact mechanisms relating exposure to ultraviolet (UV) radiation and elevated risk of skin cancer remain the subject of debate. For example, there is disagreement on whether the main risk factor is duration of the exposure, its intensity, or some combination of both. There is also uncertainty regarding the form of the dose-response curve, with many authors believing only exposures exceeding a given (but unknown) threshold are important. In this paper we explore methods to estimate such thresholds using hierarchical spatial logistic models based on a sample of a cohort of x-ray technologists for whom we have self-reports of time spent in the sun and numbers of blistering sunburns in childhood. A preliminary goal is to explore the temporal pattern of UV exposure and its gradient. Changes here would imply that identical exposure self-reports from different calendar years may correspond to differing cancer risks.
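
    A crude, non-spatial stand-in for the threshold idea in these two records is a logistic model with a hinge term, profiled over candidate thresholds; the sketch below uses simulated data and maximum likelihood rather than the hierarchical spatial Bayesian machinery of the paper, and every parameter value is a placeholder.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(1)

        # Simulated exposure-outcome data: cumulative UV exposure and a binary outcome.
        dose = rng.uniform(0.0, 10.0, 500)
        true_tau = 4.0
        p_true = 1.0 / (1.0 + np.exp(-(-2.0 + 0.8 * np.maximum(dose - true_tau, 0.0))))
        y = (rng.random(500) < p_true).astype(float)

        def neg_loglik(beta, tau):
            # logistic regression with a hinge covariate: logit P = b0 + b1 * max(dose - tau, 0)
            eta = beta[0] + beta[1] * np.maximum(dose - tau, 0.0)
            p = np.clip(1.0 / (1.0 + np.exp(-eta)), 1e-9, 1.0 - 1e-9)
            return -np.sum(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))

        taus = np.linspace(1.0, 8.0, 29)
        profile = [minimize(neg_loglik, x0=[0.0, 0.1], args=(t,), method="Nelder-Mead").fun
                   for t in taus]
        print("profile-likelihood threshold estimate:", taus[int(np.argmin(profile))])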

  12. Modeling of Beams’ Multiple-Contact Mode with an Application in the Design of a High-g Threshold Microaccelerometer

    PubMed Central

    Li, Kai; Chen, Wenyuan; Zhang, Weiping

    2011-01-01

    Beam’s multiple-contact mode, characterized by multiple and discrete contact regions, non-uniform stoppers’ heights, irregular contact sequence, seesaw-like effect, indirect interaction between different stoppers, and complex coupling relationship between loads and deformation is studied. A novel analysis method and a novel high speed calculation model are developed for multiple-contact mode under mechanical load and electrostatic load, without limitations on stopper height and distribution, providing the beam has stepped or curved shape. Accurate values of deflection, contact load, contact region and so on are obtained directly, with a subsequent validation by CoventorWare. A new concept design of high-g threshold microaccelerometer based on multiple-contact mode is presented, featuring multiple acceleration thresholds of one sensitive component and consequently small sensor size. PMID:22163897

  13. Modeling on oxide dependent 2DEG sheet charge density and threshold voltage in AlGaN/GaN MOSHEMT

    NASA Astrophysics Data System (ADS)

    Panda, J.; Jena, K.; Swain, R.; Lenka, T. R.

    2016-04-01

    We have developed a physics based analytical model for the calculation of threshold voltage, two dimensional electron gas (2DEG) density and surface potential for AlGaN/GaN metal oxide semiconductor high electron mobility transistors (MOSHEMT). The developed model includes important parameters like polarization charge density at oxide/AlGaN and AlGaN/GaN interfaces, interfacial defect oxide charges and donor charges at the surface of the AlGaN barrier. The effects of two different gate oxides (Al2O3 and HfO2) are compared for the performance evaluation of the proposed MOSHEMT. The MOSHEMTs with Al2O3 dielectric have an advantage of significant increase in 2DEG up to 1.2 × 10^13 cm^-2 with an increase in oxide thickness up to 10 nm as compared to HfO2 dielectric MOSHEMT. The surface potential for the HfO2 based device decreases from 2 to -1.6 eV within 10 nm of oxide thickness whereas for the Al2O3 based device a sharp transition of surface potential occurs from 2.8 to -8.3 eV. The variation in oxide thickness and gate metal work function of the proposed MOSHEMT shifts the threshold voltage from negative to positive, realizing enhancement-mode operation. Further, to validate the model, the device is simulated in Silvaco Technology Computer Aided Design (TCAD), showing good agreement with the proposed model results. The developed calculations of the proposed model can be used to develop a complete physics based 2DEG sheet charge density and threshold voltage model for GaN MOSHEMT devices for performance analysis.

  14. A threshold-voltage model for small-scaled GaAs nMOSFET with stacked high-k gate dielectric

    NASA Astrophysics Data System (ADS)

    Chaowen, Liu; Jingping, Xu; Lu, Liu; Hanhan, Lu; Yuan, Huang

    2016-02-01

    A threshold-voltage model for a stacked high-k gate dielectric GaAs MOSFET is established by solving a two-dimensional Poisson's equation in channel and considering the short-channel, DIBL and quantum effects. The simulated results are in good agreement with the Silvaco TCAD data, confirming the correctness and validity of the model. Using the model, impacts of structural and physical parameters of the stack high-k gate dielectric on the threshold-voltage shift and the temperature characteristics of the threshold voltage are investigated. The results show that the stacked gate dielectric structure can effectively suppress the fringing-field and DIBL effects and improve the threshold and temperature characteristics, and on the other hand, the influence of temperature on the threshold voltage is overestimated if the quantum effect is ignored. Project supported by the National Natural Science Foundation of China (No. 61176100).

  15. Low-frequency Raman scattering in model disordered solids: percolators above threshold

    NASA Astrophysics Data System (ADS)

    Pilla, O.; Viliani, G.; Dell'Anna, R.; Ruocco, G.

    1997-02-01

    The Raman coupling coefficients of site- and bond-percolators at concentrations higher than the percolation threshold are computed for two scattering mechanisms: bond polarizability (BPOL) and dipole-induced-dipole (DID). The results show that DID does not follow a scaling law at low frequency, while in the case of BPOL the situation is less clear. The numerically computed frequency dependence in the case of BPOL, which can be considered a good scattering mechanism for a wide class of real glasses, is in semiquantitative agreement with experimental results.

  16. Establishing a rainfall threshold for flash flood warnings in China's mountainous areas based on a distributed hydrological model

    NASA Astrophysics Data System (ADS)

    Miao, Qinghua; Yang, Dawen; Yang, Hanbo; Li, Zhe

    2016-10-01

    Flash flooding is one of the most common natural hazards in China, particularly in mountainous areas, and usually causes heavy damage and casualties. However, the forecasting of flash flooding in mountainous regions remains challenging because of the short response time and limited monitoring capacity. This paper aims to establish a strategy for flash flood warnings in mountainous ungauged catchments across humid, semi-humid and semi-arid regions of China. First, we implement a geomorphology-based hydrological model (GBHM) in four mountainous catchments with drainage areas that range from 493 to 1601 km^2. The results show that the GBHM can simulate flash floods appropriately in these four study catchments. We propose a method to determine the rainfall threshold for flood warning by using frequency analysis and binary classification based on long-term GBHM simulations that are forced by historical rainfall data to create a practically easy and straightforward approach for flash flood forecasting in ungauged mountainous catchments with drainage areas from tens to hundreds of square kilometers. The results show that the rainfall threshold value decreases significantly with increasing antecedent soil moisture in humid regions, while this value decreases slightly with increasing soil moisture in semi-humid and semi-arid regions. We also find that accumulative rainfall over a certain time span (or rainfall over a long time span) is an appropriate threshold for flash flood warnings in humid regions because the runoff is dominated by excess saturation. However, the rainfall intensity (or rainfall over a short time span) is more suitable in semi-humid and semi-arid regions because excess infiltration dominates the runoff in these regions. We conduct a comprehensive evaluation of the rainfall threshold and find that the proposed method produces reasonably accurate flash flood warnings in the study catchments. An evaluation of the performance at uncalibrated interior points
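
    The threshold-setting step described above can be caricatured as a binary-classification sweep over long-term simulation output; the snippet below uses synthetic data as a stand-in for the GBHM-driven events, scores each candidate accumulated-rainfall threshold with the critical success index, and ignores the antecedent-soil-moisture dependence that the paper emphasises. All values are illustrative.

        import numpy as np

        rng = np.random.default_rng(2)

        # Synthetic stand-in for long-term simulation output: per-event accumulated rainfall (mm)
        # and whether the hydrological model produced a discharge above flood stage.
        rain = rng.gamma(2.0, 20.0, 5000)
        flood = (rain + rng.normal(0.0, 15.0, 5000)) > 80.0

        def critical_success_index(threshold):
            warn = rain >= threshold
            hits = np.sum(warn & flood)
            misses = np.sum(~warn & flood)
            false_alarms = np.sum(warn & ~flood)
            return hits / max(hits + misses + false_alarms, 1)

        candidates = np.arange(20.0, 150.0, 1.0)
        best = candidates[int(np.argmax([critical_success_index(t) for t in candidates]))]
        print(f"rainfall threshold maximizing CSI: {best:.0f} mm")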

  17. Determination of threshold conditions for a non-linear stochastic partnership model for heterosexually transmitted diseases with stages.

    PubMed

    Gallop, Robert J; Mode, Charles J; Sleeman, Candace K

    2002-01-01

    When comparing the performance of a stochastic model of an epidemic at two points in a parameter space, a threshold is said to have been crossed when at one point an epidemic develops with positive probability; while at the other there is a tendency for an epidemic to become extinct. The approach used to find thresholds in this paper was to embed a system of ordinary non-linear differential equations in a stochastic process, accommodating the formation and dissolution of marital partnerships in a heterosexual population, extra-marital sexual contacts, and diseases such as HIV/AIDS with stages. A symbolic representation of the Jacobian matrix of this system was derived. To determine whether this matrix was stable or non-stable at a particular parameter point, the Jacobian was evaluated at a disease-free equilibrium and its eigenvalues were computed. The stability or non-stability of the matrix was then determined by checking if all real parts of the eigenvalues were negative. By writing software to repeat this process for a selected set of points in the parameter space, it was possible to develop search engines for finding points in the parameter space where thresholds were crossed. The results of a set of Monte Carlo simulation experiments were reported which suggest that, by combining the stochastic and deterministic paradigms within a single formulation, it was possible to obtain more informative interpretations of simulation experiments than if attention were confined solely to either paradigm.
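
    The stability check at the heart of the threshold search described above reduces to an eigenvalue test, as in the small sketch below; the 3x3 matrix is a hypothetical Jacobian evaluated at a disease-free equilibrium, not one derived from the paper's partnership model.

        import numpy as np

        def epidemic_can_take_off(jacobian_at_dfe):
            """True if the disease-free equilibrium is unstable, i.e. some eigenvalue
            of the Jacobian has a positive real part at this parameter point."""
            return float(np.max(np.linalg.eigvals(jacobian_at_dfe).real)) > 0.0

        J = np.array([[-0.3, 0.1, 0.0],
                      [0.2, -0.5, 0.4],
                      [0.0, 0.6, -0.2]])
        print("threshold crossed (epidemic possible):", epidemic_can_take_off(J))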

  18. Cavitation thresholds of contrast agents in an in vitro human clot model exposed to 120-kHz ultrasound.

    PubMed

    Gruber, Matthew J; Bader, Kenneth B; Holland, Christy K

    2014-02-01

    Ultrasound contrast agents (UCAs) can be employed to nucleate cavitation to achieve desired bioeffects, such as thrombolysis, in therapeutic ultrasound applications. Effective methods of enhancing thrombolysis with ultrasound have been examined at low frequencies (<1 MHz) and low amplitudes (<0.5 MPa). The objective of this study was to determine cavitation thresholds for two UCAs exposed to 120-kHz ultrasound. A commercial ultrasound contrast agent (Definity(®)) and echogenic liposomes were investigated to determine the acoustic pressure threshold for ultraharmonic (UH) and broadband (BB) generation using an in vitro flow model perfused with human plasma. Cavitation emissions were detected using two passive receivers over a narrow frequency bandwidth (540-900 kHz) and a broad frequency bandwidth (0.54-1.74 MHz). UH and BB cavitation thresholds occurred at the same acoustic pressure (0.3 ± 0.1 MPa, peak to peak) and were found to depend on the sensitivity of the cavitation detector but not on the nucleating contrast agent or ultrasound duty cycle.

  19. Ground-water vulnerability to nitrate contamination at multiple thresholds in the mid-Atlantic region using spatial probability models

    USGS Publications Warehouse

    Greene, Earl A.; LaMotte, Andrew E.; Cullinan, Kerri-Ann

    2005-01-01

    The U.S. Geological Survey, in cooperation with the U.S. Environmental Protection Agency?s Regional Vulnerability Assessment Program, has developed a set of statistical tools to support regional-scale, ground-water quality and vulnerability assessments. The Regional Vulnerability Assessment Program?s goals are to develop and demonstrate approaches to comprehensive, regional-scale assessments that effectively inform managers and decision-makers as to the magnitude, extent, distribution, and uncertainty of current and anticipated environmental risks. The U.S. Geological Survey is developing and exploring the use of statistical probability models to characterize the relation between ground-water quality and geographic factors in the Mid-Atlantic Region. Available water-quality data obtained from U.S. Geological Survey National Water-Quality Assessment Program studies conducted in the Mid-Atlantic Region were used in association with geographic data (land cover, geology, soils, and others) to develop logistic-regression equations that use explanatory variables to predict the presence of a selected water-quality parameter exceeding a specified management concentration threshold. The resulting logistic-regression equations were transformed to determine the probability, P(X), of a water-quality parameter exceeding a specified management threshold. Additional statistical procedures modified by the U.S. Geological Survey were used to compare the observed values to model-predicted values at each sample point. In addition, procedures to evaluate the confidence of the model predictions and estimate the uncertainty of the probability value were developed and applied. The resulting logistic-regression models were applied to the Mid-Atlantic Region to predict the spatial probability of nitrate concentrations exceeding specified management thresholds. These thresholds are usually set or established by regulators or managers at National or local levels. At management thresholds of

  20. EDGE2D-EIRENE modelling of near SOL E r: possible impact on the H-mode power threshold

    NASA Astrophysics Data System (ADS)

    Chankin, A. V.; Delabie, E.; Corrigan, G.; Harting, D.; Maggi, C. F.; Meyer, H.; Contributors, JET

    2017-04-01

    Recent EDGE2D-EIRENE simulations of JET plasmas showed a significant difference between radial electric field (E r) profiles across the separatrix in two divertor configurations, with the outer strike point on the horizontal target (HT) and vertical target (VT) (Chankin et al 2016 Nucl. Mater. Energy, doi: 10.1016/j.nme.2016.10.004). Under conditions (input power, plasma density) where the HT plasma went into the H-mode, a large positive E r spike in the near scrape-off layer (SOL) was seen in the code output, leading to a very large E × B shear across the separatrix over a narrow region of a fraction of a cm width. No such E r feature was obtained in the code solution for the VT configuration, where the H-mode power threshold was found to be twice as high as in the HT configuration. It was hypothesised that the large E × B shear across the separatrix in the HT configuration could be responsible for the turbulence suppression leading to an earlier (at lower input power) L–H transition compared to the VT configuration. In the present work these ideas are extended to cover some other experimental observations on the H-mode power threshold variation with parameters which typically are not included in the multi-machine H-mode power threshold scalings, namely: ion mass dependence (isotope H–D–T exchange), dependence on the ion ∇B drift direction, and dependence on the wall material composition (ITER-like wall versus carbon wall in JET). In all these cases EDGE2D-EIRENE modelling shows larger positive E r spikes in the near SOL under conditions where the H-mode power threshold is lower, at least in the HT configuration.

  1. [Tremendous Human, Social, and Economic Losses Caused by Obstinate Application of the Failed Linear No-threshold Model].

    PubMed

    Sutou, Shizuyo

    2015-01-01

    The linear no-threshold model (LNT) was recommended in 1956, with abandonment of the traditional threshold dose-response for genetic risk assessment. Adoption of LNT by the International Commission on Radiological Protection (ICRP) became the standard for radiation regulation worldwide. The ICRP recommends a dose limit of 1 mSv/year for the public, which is too low and which terrorizes innocent people. Indeed, LNT arose mainly from the lifespan survivor study (LSS) of atomic bomb survivors. The LSS, which asserts linear dose-response and no threshold, is challenged mainly on three points. 1) Radiation doses were underestimated by half because of disregard for major residual radiation, resulting in cancer risk overestimation. 2) The dose and dose-rate effectiveness factor (DDREF) of 2 is used, but the actual DDREF is estimated as 16, resulting in cancer risk overestimation by several times. 3) Adaptive response (hormesis) is observed in leukemia and solid cancer cases, consistently contradicting the linearity of LNT. Drastic reduction of cancer risk moves the dose-response curve close to the control line, allowing the setting of a threshold. Living organisms have been evolving for 3.8 billion years under radiation exposure, naturally acquiring various defense mechanisms such as DNA repair mechanisms, apoptosis, and immune response. The failure of LNT lies in the neglect of carcinogenesis and these biological mechanisms. Obstinate application of LNT continues to cause tremendous human, social, and economic losses. The 60-year-old LNT must be rejected to establish a new scientific knowledge-based system.

  2. Double Photoionization Near Threshold

    NASA Technical Reports Server (NTRS)

    Wehlitz, Ralf

    2007-01-01

    The threshold region of the double-photoionization cross section is of particular interest because both ejected electrons move slowly in the Coulomb field of the residual ion. Near threshold both electrons have time to interact with each other and with the residual ion. Also, different theoretical models compete to describe the double-photoionization cross section in the threshold region. We have investigated that cross section for lithium and beryllium and have analyzed our data with respect to the latest results in the Coulomb-dipole theory. We find that our data support the idea of a Coulomb-dipole interaction.

  3. Modeling of damage generation mechanisms in silicon at energies below the displacement threshold

    SciTech Connect

    Santos, Ivan; Marques, Luis A.; Pelaz, Lourdes

    2006-11-01

    We have used molecular dynamics simulation techniques to study the generation of damage in Si within the low-energy deposition regime. We have demonstrated that energy transfers below the displacement threshold can produce a significant amount of damage, usually neglected in traditional radiation damage calculations. The formation of amorphous pockets agrees with the thermal spike concept of local melting. However, we have found that the order-disorder transition is not instantaneous, but it requires some time to reach the appropriate kinetic-potential energy redistribution for melting. The competition between the rate of this energy redistribution and the energy diffusion to the surrounding atoms determines the amount of damage generated by a given deposited energy. Our findings explain the diverse damage morphology produced by ions of different masses.

  4. Applications of threshold models and the weighted bootstrap for Hungarian precipitation data

    NASA Astrophysics Data System (ADS)

    Varga, László; Rakonczai, Pál; Zempléni, András

    2016-05-01

    This paper presents applications of the peaks-over-threshold methodology for both the univariate and the recently introduced bivariate case, combined with a novel bootstrap approach. We compare the proposed bootstrap methods to the more traditional profile likelihood. We have investigated 63 years of the European Climate Assessment daily precipitation data for five Hungarian grid points, first separately for the summer and winter months, then aiming at the detection of possible changes by investigating 20-year moving windows. We show that significant changes can be observed both in the univariate and the bivariate cases, the most recent period being the most dangerous in several cases, as some return values have increased substantially. We illustrate these effects by bivariate coverage regions.
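    As a minimal illustration of the univariate peaks-over-threshold step (not the authors' bivariate model or weighted bootstrap), the sketch below fits a generalized Pareto distribution to synthetic daily precipitation exceedances and derives a return level; the data, the threshold choice and the parameter values are assumptions for demonstration only.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(0)
precip = rng.gamma(shape=0.6, scale=8.0, size=63 * 365)   # synthetic daily rainfall (mm)

threshold = np.quantile(precip, 0.95)            # an ad hoc high threshold (assumption)
excesses = precip[precip > threshold] - threshold
zeta = excesses.size / precip.size               # empirical exceedance rate

# Fit the generalized Pareto distribution to the excesses (location fixed at 0)
shape, _, scale = genpareto.fit(excesses, floc=0)

def return_level(T_years, obs_per_year=365):
    """Standard POT return-level formula for a T-year event."""
    m = T_years * obs_per_year
    return threshold + genpareto.ppf(1 - 1.0 / (m * zeta), shape, loc=0, scale=scale)

print(f"threshold = {threshold:.1f} mm, 20-year return level = {return_level(20):.1f} mm")
```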

  5. Fire in a Changing Climate: Stochastic versus Threshold-constrained Ignitions in a Dynamic Global Vegetation Model

    NASA Astrophysics Data System (ADS)

    Sheehan, T.; Bachelet, D. M.; Ferschweiler, K.

    2015-12-01

    The MC2 dynamic global vegetation model fire module simulates fire occurrence, area burned, and fire impacts including mortality, biomass burned, and nitrogen volatilization. Fire occurrence is based on fuel load levels and vegetation-specific thresholds for three calculated fire weather indices: fine fuel moisture code (FFMC) for the moisture content of fine fuels; build-up index (BUI) for the total amount of fuel available for combustion; and energy release component (ERC) for the total energy available to fire. Ignitions are assumed (i.e. the probability of an ignition source is 1). The model is run with gridded inputs and the fraction of each grid cell burned is limited by a vegetation-specific fire return period (FRP) and the number of years since the last fire occurred in the grid cell. One consequence of assumed ignitions and the FRP constraint is that similar fire behavior can take place over large areas with identical vegetation type. In regions where thresholds are often exceeded, fires occur frequently (annually in some instances) with a very low fraction of a cell burned. In areas where fire is infrequent, a single hot, dry climate event can result in intense fire over a large region. Both cases can potentially result in large areas with uniform vegetation type and age. To better reflect realistic fire occurrence, we have developed a stochastic fire occurrence model that: a) uses a map of relative ignition probability and a multiplier to alter overall ignition occurrence; b) adjusts the original fixed fire thresholds with ignition success probabilities based on fire weather indices; and c) calculates spread by using a probability based on slope and wind direction. A Monte Carlo method is used with all three algorithms to determine occurrence. The new stochastic ignition approach yields more variety in fire intensity, a smaller annual total of cells burned, and patchier vegetation.
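    The stochastic ignition idea can be sketched schematically as below (this is not MC2 source code): each cell needs both an ignition-source draw against the relative ignition-probability map and an ignition-success draw whose probability rises with a fire-weather index. The logistic form and all parameter values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

def ignition_success_prob(erc, erc_mid=60.0, steepness=0.1):
    """Probability that an ignition takes hold, rising smoothly with ERC (assumed logistic)."""
    return 1.0 / (1.0 + np.exp(-steepness * (erc - erc_mid)))

def stochastic_ignitions(ignition_prob_map, erc_map, ignition_multiplier=1.0):
    """Return a boolean grid of cells that ignite this time step (Monte Carlo)."""
    source = rng.random(ignition_prob_map.shape) < ignition_multiplier * ignition_prob_map
    success = rng.random(erc_map.shape) < ignition_success_prob(erc_map)
    return source & success

# toy 4x4 landscape
ign_map = np.full((4, 4), 0.2)          # relative ignition probability
erc_map = rng.uniform(30, 90, (4, 4))   # energy release component
print(stochastic_ignitions(ign_map, erc_map))
```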

  6. A continuum model with a percolation threshold and tunneling-assisted interfacial conductivity for carbon nanotube-based nanocomposites

    SciTech Connect

    Wang, Yang; Weng, George J.; Meguid, Shaker A.; Hamouda, Abdel Magid

    2014-05-21

    A continuum model that possesses several desirable features of the electrical conduction process in carbon-nanotube (CNT) based nanocomposites is developed. Three basic elements are included: (i) percolation threshold, (ii) interface effects, and (iii) tunneling-assisted interfacial conductivity. We approach the first one through the selection of an effective medium theory. We approach the second one by the introduction of a diminishing layer of interface with an interfacial conductivity to build a 'thinly coated' CNT. The third one is introduced through the observation that interface conductivity can be enhanced by electron tunneling which in turn can be facilitated with the formation of CNT networks. We treat this last issue in a continuum fashion by taking the network formation as a statistical process that can be represented by Cauchy's probability density function. The outcome is a simple and yet widely useful model that can simultaneously capture all these fundamental characteristics. It is demonstrated that, without considering the interface effect, the predicted conductivity would be too high, and that, without accounting for the additional contribution from the tunneling-assisted interfacial conductivity, the predicted conductivity beyond the percolation threshold would be too low. It is with the consideration of all three elements that the theory can fully account for the experimentally measured data. We further use the developed model to demonstrate that, despite the anisotropy of the intrinsic CNT conductivity, it is its axial component along the CNT direction that dominates the overall conductivity. This theory is also proved that, even with a totally insulating matrix, it is still capable of delivering non-zero conductivity beyond the percolation threshold.

  7. Threshold driven response of permafrost in Northern Eurasia to climate and environmental change: from conceptual model to quantitative assessment

    NASA Astrophysics Data System (ADS)

    Anisimov, Oleg; Kokorev, Vasiliy; Reneva, Svetlana; Shiklomanov, Nikolai

    2010-05-01

    Numerous efforts have been made to assess the environmental impacts of changing climate in permafrost regions using mathematical models. Despite the significant improvements in representation of individual sub-systems, such as permafrost, vegetation, snow and hydrology, even the most comprehensive models do not replicate the coupled non-linear interactions between them that lead to threshold-driven changes. Observations indicate that ecosystems may change dramatically, rapidly, and often irreversibly, reaching a fundamentally different state once they pass a critical threshold. The key to understanding permafrost threshold phenomena is interaction with other environmental factors that are very likely to change in response to climate warming. One such factor is vegetation. Vegetation control over the thermal state of underlying ground is two-fold. Firstly, canopies have different albedo that affects the radiation balance at the soil surface. Secondly, depending on biome composition the vegetation canopy may have different thermal conductivity that governs the heat fluxes between soil and atmosphere. There are clear indications based on ground observations and remote sensing that vegetation has already changed in response to climatic warming, consistent with the results of manipulations at experimental plots that involve artificial warming and CO2 fertilization. Under sustained warming lower vegetation (mosses, lichens) is gradually replaced by shrubs. Mosses have a high thermal insulating effect in summer, which is why their retreat enhances permafrost warming. Taller shrubs accumulate snow that further warms permafrost in winter. Permafrost remains unchanged as long as the responding vegetation intercepts and mitigates the climate change signal. Beyond a certain threshold, enhanced abundance and growth of taller vegetation leads to abrupt permafrost changes. Changes in hydrology, i.e. soil wetting or drying, may have similar effect on permafrost. Wetting increases soil

  8. An experimental operative system for shallow landslide and flash flood warning based on rainfall thresholds and soil moisture modelling

    NASA Astrophysics Data System (ADS)

    Brigandı, G.; Aronica, G. T.; Basile, G.; Pasotti, L.; Panebianco, M.

    2012-04-01

    In November 2011 an almost exceptional thunderstorm occurred over the north-east part of the Sicily Region (Italy), producing local heavy rainfall, mud-debris flows and flash flooding. The storm was concentrated on the Tyrrhenian sea coast near the city of Barcellona within the Longano catchment. The main focus of the paper is to present an experimental operative system for alerting on extreme hydrometeorological events by using a methodology based on the combined use of rainfall thresholds, soil moisture indexes and quantitative precipitation forecasting. Shallow landslide and flash flood warning is, in fact, a key element in improving Civil Protection efforts to mitigate damage and safeguard the security of people. It is a rather complicated task, particularly in those catchments with flashy response where even brief anticipation times are important and welcome. It is well known that the triggering of shallow landslides is strongly influenced by the initial soil moisture conditions of catchments. Therefore, the early warning system applied here is based on the combined use of rainfall thresholds, derived both for flash floods and for landslides, and soil moisture conditions; the system is composed of several basic components related to antecedent soil moisture conditions, real-time rainfall monitoring and antecedent rainfall. Soil moisture conditions were estimated using an Antecedent Precipitation Index (API), similar to that widely used for defining soil moisture conditions via the Antecedent Moisture Condition (AMC) index. Rainfall thresholds for landslides were derived using historical and statistical analysis. Finally, rainfall thresholds for flash flooding were derived using an Instantaneous Unit Hydrograph based lumped rainfall-runoff model with the SCS-CN routine for net rainfall. After the implementation and calibration of the model, a testing phase was carried out by using real data collected for the November 2011 event in the Longano catchment. Moreover, in
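    A minimal sketch of an Antecedent Precipitation Index of the kind used above is given below; the recursion API_t = k·API_{t-1} + P_t is standard, but the decay constant and the wetness threshold shown are illustrative assumptions rather than the paper's calibrated values.

```python
def antecedent_precipitation_index(daily_rainfall_mm, k=0.9, api0=0.0):
    """Return the API series for a sequence of daily rainfall totals (mm)."""
    api, series = api0, []
    for p in daily_rainfall_mm:
        api = k * api + p          # yesterday's wetness decays, today's rain adds
        series.append(api)
    return series

rain = [0, 0, 12.5, 3.0, 0, 0, 25.0, 40.0, 5.0, 0]
api = antecedent_precipitation_index(rain)
wet = [a > 30.0 for a in api]      # assumed wetness threshold, for illustration only
print([round(a, 1) for a in api])
print(wet)
```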

  9. Numerical modeling of gun experiments with impact velocities less than SDT threshold: Thermal explosion initiated by friction heat

    NASA Astrophysics Data System (ADS)

    Barfield, W. D.

    1982-01-01

    One and two dimensional calculations were made to model thermal explosion ignited by friction heat, hypothesized as an initiation mechanism for the unknown XDT phenomenon that is responsible for detonations observed in gun experiments with impact velocities less than threshold for shock to detonation transition. Preliminary results suggest that friction induced thermal explosion would be quenched by cooling associated with side rarefactions after penetrating only a thin layer of the propellant. Other effects would be expected to increase the calculated heating rates or speed up the friction induced thermal explosion. For this reason, friction cannot be ruled out as an initiation mechanism on the basis of the results described.

  10. [Threshold value for reimbursement of costs of new drugs: cost-effectiveness research and modelling are essential links].

    PubMed

    Frederix, Geert W J; Hövels, Anke M; Severens, Johan L; Raaijmakers, Jan A M; Schellens, Jan H M

    2015-01-01

    There is increasing discussion in the Netherlands about the introduction of a threshold value for the costs per extra year of life when reimbursing costs of new drugs. The Medicines Committee ('Commissie Geneesmiddelen'), a division of the Netherlands National Healthcare Institute ('Zorginstituut Nederland'), advises on reimbursement of costs of new drugs. This advice is based upon the determination of the therapeutic value of the drug and the results of economic evaluations. Mathematical models that predict future costs and effectiveness are often used in economic evaluations; these models can vary greatly in transparency and quality due to author assumptions. Standardisation of cost-effectiveness models is one solution to overcome the unwanted variation in quality. Discussions about the introduction of a threshold value can only be meaningful if all involved are adequately informed and if cost-effectiveness research, and in particular economic evaluations, are of high quality. Collaboration and discussion between medical specialists, patients or patient organisations, health economists and policy makers, both in development of methods and in standardisation, are essential to improve the quality of decision making.

  11. Computational modeling of glucose transport in pancreatic β-cells identifies metabolic thresholds and therapeutic targets in diabetes.

    PubMed

    Luni, Camilla; Marth, Jamey D; Doyle, Francis J

    2012-01-01

    Pancreatic β-cell dysfunction is a diagnostic criterion of Type 2 diabetes and includes defects in glucose transport and insulin secretion. In healthy individuals, β-cells maintain plasma glucose concentrations within a narrow range in concert with insulin action among multiple tissues. Postprandial elevations in blood glucose facilitate glucose uptake into β-cells by diffusion through glucose transporters residing at the plasma membrane. Glucose transport is essential for glycolysis and glucose-stimulated insulin secretion. In human Type 2 diabetes and in the mouse model of obesity-associated diabetes, a marked deficiency of β-cell glucose transporters and glucose uptake occurs with the loss of glucose-stimulated insulin secretion. Recent studies have shown that the preservation of glucose transport in β-cells maintains normal insulin secretion and blocks the development of obesity-associated diabetes. To further elucidate the underlying mechanisms, we have constructed a computational model of human β-cell glucose transport in health and in Type 2 diabetes, and present a systems analysis based on experimental results from human and animal studies. Our findings identify a metabolic threshold or "tipping point" whereby diminished glucose transport across the plasma membrane of β-cells limits intracellular glucose-6-phosphate production by glucokinase. This metabolic threshold is crossed in Type 2 diabetes and results in β-cell dysfunction including the loss of glucose stimulated insulin secretion. Our model further discriminates among molecular control points in this pathway wherein maximal therapeutic intervention is achieved.
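    Although the published model is more detailed, the tipping-point logic can be illustrated with a minimal steady-state sketch: facilitated glucose transport (a Michaelis-Menten carrier) feeds glucokinase (Hill kinetics), and when transporter capacity falls, the intracellular glucose level that balances the two fluxes collapses, and with it glucose-6-phosphate production. All kinetic values below are illustrative assumptions, not the study's parameters.

```python
from scipy.optimize import brentq

def transport_flux(g_out, g_in, vmax_t, km_t=17.0):
    """Net facilitated-diffusion flux through a symmetric carrier (arbitrary units)."""
    return vmax_t * (g_out / (km_t + g_out) - g_in / (km_t + g_in))

def glucokinase_flux(g_in, vmax_gk=1.0, s05=8.0, n=1.7):
    """Glucokinase (Hill) flux as a function of intracellular glucose (mM)."""
    return vmax_gk * g_in**n / (s05**n + g_in**n)

def steady_state_gk_flux(g_out, vmax_t):
    """Find the intracellular glucose where transport balances phosphorylation."""
    f = lambda g_in: transport_flux(g_out, g_in, vmax_t) - glucokinase_flux(g_in)
    g_in = brentq(f, 1e-9, g_out)      # the balance point lies between 0 and g_out
    return glucokinase_flux(g_in)

for vmax_t in (10.0, 2.0, 0.5):        # healthy -> progressively reduced transport capacity
    print(f"transporter Vmax {vmax_t:5.1f}: GK flux {steady_state_gk_flux(10.0, vmax_t):.3f}")
```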

  12. Multi-host model and threshold of intermediate host Oncomelania snail density for eliminating schistosomiasis transmission in China

    PubMed Central

    Zhou, Yi-Biao; Chen, Yue; Liang, Song; Song, Xiu-Xia; Chen, Geng-Xin; He, Zhong; Cai, Bin; Yihuo, Wu-Li; He, Zong-Gui; Jiang, Qing-Wu

    2016-01-01

    Schistosomiasis remains a serious public health issue in many tropical countries, with more than 700 million people at risk of infection. In China, a national integrated control strategy, aiming at blocking its transmission, has been carried out throughout endemic areas since 2005. A longitudinal study was conducted to determine the effects of different intervention measures on the transmission dynamics of S. japonicum in three study areas and the data were analyzed using a multi-host model. The multi-host model was also used to estimate the threshold of Oncomelania snail density for interrupting schistosomiasis transmission based on the longitudinal data as well as data from the national surveillance system for schistosomiasis. The data showed a continuous decline in the risk of human infection and the multi-host model fit the data well. The 25th, 50th and 75th percentiles, and the mean of estimated thresholds of Oncomelania snail density below which the schistosomiasis transmission cannot be sustained were 0.006, 0.009, 0.028 and 0.020 snails/0.11 m2, respectively. The study results could help develop specific strategies of schistosomiasis control and elimination tailored to the local situation for each endemic area. PMID:27535177

  13. Linking neocortical, cognitive, and genetic variability in autism with alterations of brain plasticity: the Trigger-Threshold-Target model.

    PubMed

    Mottron, Laurent; Belleville, Sylvie; Rouleau, Guy A; Collignon, Olivier

    2014-11-01

    The phenotype of autism involves heterogeneous adaptive traits (strengths vs. disabilities), different domains of alterations (social vs. non-social), and various associated genetic conditions (syndromic vs. nonsyndromic autism). Three observations suggest that alterations in experience-dependent plasticity are an etiological factor in autism: (1) the main cognitive domains enhanced in autism are controlled by the most plastic cortical brain regions, the multimodal association cortices; (2) autism and sensory deprivation share several features of cortical and functional reorganization; and (3) genetic mutations and/or environmental insults involved in autism all appear to affect developmental synaptic plasticity, and mostly lead to its upregulation. We present the Trigger-Threshold-Target (TTT) model of autism to organize these findings. In this model, genetic mutations trigger brain reorganization in individuals with a low plasticity threshold, mostly within regions sensitive to cortical reallocations. These changes account for the cognitive enhancements and reduced social expertise associated with autism. Enhanced but normal plasticity may underlie non-syndromic autism, whereas syndromic autism may occur when a triggering mutation or event produces an altered plastic reaction, also resulting in intellectual disability and dysmorphism in addition to autism. Differences in the target of brain reorganization (perceptual vs. language regions) account for the main autistic subgroups. In light of this model, future research should investigate how individual and sex-related differences in synaptic/regional brain plasticity influence the occurrence of autism.

  14. Modeling electrical stimulation of retinal ganglion cell with optimizing additive noises for reducing threshold and energy consumption.

    PubMed

    Wu, Jing; Jin, Menghua; Qiao, Qingli

    2017-03-27

    Epiretinal prostheses are one class of device for the treatment of blindness; they target retinal ganglion cells (RGCs) with electrodes on the retinal surface. The stimulating current of an epiretinal prosthesis is an important factor that influences the safety threshold and visual perception. Stochastic resonance (SR) can be used to enhance the detection and transmission of subthreshold stimuli in neurons. Here, it was assumed that SR was a potential way to improve the performance of epiretinal prostheses. The effect of noise on the response of RGCs to electrical stimulation and on the energy of the stimulating current was studied based on an RGC model. The RGC was modeled as a multi-compartment model consisting of dendrites and their branches, soma and axon. To evoke SR, a subthreshold signal, a series of bipolar rectangular pulse sequences, plus stochastic biphasic pulse sequences as noise, was used as a stimulus to the model. The SR-type behavior in the model was characterized by a "power norm" measure. To decrease the energy consumption of the stimulation waveform, the stochastic biphasic pulse sequences were only added to the cathode and anode phases of the subthreshold pulse, and the noise parameters were optimized by using a genetic algorithm (GA). When a certain intensity of noise is added to the subthreshold signal, the RGC model can fire. As the RMS amplitude of the noise increased, more spikes were elicited and the power-norm curve took on an inverted-U shape. A larger pulse width of the stochastic biphasic pulse sequences resulted in a higher power norm. The energy consumption and charge of the single bipolar rectangular pulse without noise at threshold level were 468.18 pJ and 15.30 nC; after adding noise with optimized parameters to the subthreshold signal, they became 314.8174 pJ and 11.9281 nC, reductions of 32.8 and 22.0%, respectively. SR exists in the RGC model and can enhance the representation of the subthreshold signal by the RGC model. Adding the stochastic biphasic
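    The stochastic-resonance mechanism can be illustrated with a deliberately simplified stand-in for the multi-compartment RGC model: a leaky integrate-and-fire unit driven by a subthreshold bipolar pulse train plus stochastic biphasic pulses as noise. Without noise no spikes occur; with noise, spikes appear, and the number falling inside the stimulus pulses indicates how strongly the subthreshold signal is represented. All values are assumptions for demonstration only.

```python
import numpy as np

def lif_with_noise(noise_amp, seed=0, dt=0.1, t_end=2000.0):
    rng = np.random.default_rng(seed)
    tau, v_th = 10.0, 1.0                        # membrane time constant (ms), normalized threshold
    n = int(t_end / dt)
    t = np.arange(n) * dt
    phase = t % 50.0                             # one bipolar pulse every 50 ms
    signal = np.where(phase < 1.0, 0.7, np.where(phase < 2.0, -0.7, 0.0))  # subthreshold
    noise = noise_amp * rng.choice([-1.0, 0.0, 1.0], size=n, p=[0.05, 0.90, 0.05])
    v, spikes, locked = 0.0, 0, 0
    for i in range(n):
        v += dt * (-v / tau + signal[i] + noise[i])
        if v >= v_th:
            spikes += 1
            locked += int(phase[i] < 2.0)        # spike within the stimulus pulse window
            v = 0.0                              # reset after a spike
    return spikes, locked

for amp in (0.0, 1.0, 2.0, 6.0):
    s, l = lif_with_noise(amp)
    print(f"noise amplitude {amp:>3}: {s:4d} spikes, {l} inside pulse windows")
```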

  15. Population bases and the 2011 Census.

    PubMed

    Smallwood, Steve

    2011-01-01

    In an increasingly complex society there are a number of different population definitions that can be relevant for users, beyond the standard definition used in counting the population. This article describes the enumeration base for the 2011 Census and how alternative population outputs may be produced. It provides a background as to how the questions on the questionnaire were decided upon and how population bases can be constructed from the Census. Similarities and differences between the information collected across the three UK Censuses (England and Wales, Scotland and Northern Ireland) are discussed. Finally, issues around estimating the population on alternative bases are presented.

  16. A threshold of mechanical strain intensity for the direct activation of osteoblast function exists in a murine maxilla loading model.

    PubMed

    Suzuki, Natsuki; Aoki, Kazuhiro; Marcián, Petr; Borák, Libor; Wakabayashi, Noriyuki

    2016-10-01

    The response to the mechanical loading of bone tissue has been extensively investigated; however, precisely how much strain intensity is necessary to promote bone formation remains unclear. Combination studies utilizing histomorphometric and numerical analyses were performed using the established murine maxilla loading model to clarify the threshold of mechanical strain needed to accelerate bone formation activity. For 7 days, 191 kPa loading stimulation for 30 min/day was applied to C57BL/6J mice. Two regions of interest, the AWAY region (away from the loading site) and the NEAR region (near the loading site), were determined. The inflammatory score increased in the NEAR region, but not in the AWAY region. A strain intensity map obtained from [Formula: see text] images was superimposed onto the images of the bone formation inhibitor, sclerostin-positive cell localization. The number of sclerostin-positive cells significantly decreased after mechanical loading of more than [Formula: see text] in the AWAY region, but not in the NEAR region. The mineral apposition rate, which shows the bone formation ability of osteoblasts, was accelerated at the site of surface strain intensity, namely around [Formula: see text], but not at the site of lower surface strain intensity, which was around [Formula: see text] in the AWAY region, thus suggesting the existence of a strain intensity threshold for promoting bone formation. Taken together, our data suggest that a threshold of mechanical strain intensity for the direct activation of osteoblast function and the reduction of sclerostin exists in a murine maxilla loading model in the non-inflammatory region.

  17. Development of a threshold model to predict germination of Populus tomentosa seeds after harvest and storage under ambient condition.

    PubMed

    Wang, Wei-Qing; Cheng, Hong-Yan; Song, Song-Quan

    2013-01-01

    Effects of temperature, storage time and their combination on germination of aspen (Populus tomentosa) seeds were investigated. Aspen seeds were germinated at 5 to 30°C at 5°C intervals after storage for a period of time under 28°C and 75% relative humidity. The effect of temperature on aspen seed germination could not be effectively described by the thermal time (TT) model, which underestimated the germination rate at 5°C and poorly predicted the time courses of germination at 10, 20, 25 and 30°C. A modified TT model (MTT) which assumed a two-phased linear relationship between germination rate and temperature was more accurate in predicting the germination rate and percentage and had a higher likelihood of being correct than the TT model. The maximum lifetime threshold (MLT) model accurately described the effect of storage time on seed germination across all the germination temperatures. An aging thermal time (ATT) model combining both the TT and MLT models was developed to describe the effect of both temperature and storage time on seed germination. When the ATT model was applied to germination data across all the temperatures and storage times, it produced a relatively poor fit. Adjusting the ATT model to separately fit germination data at low and high temperatures in the suboptimal range increased the model's accuracy for predicting seed germination. Both the MLT and ATT models indicate that germination of aspen seeds has distinct physiological responses to temperature within a suboptimal range.
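    To make the thermal time logic concrete, a minimal sketch of the TT model and a two-phase MTT-style variant is given below: germination rate (the reciprocal of time to 50% germination) is linear in temperature above a base temperature, and the two-phase form allows a different thermal-time constant below a breakpoint. The base, breakpoint and thermal-time constants are illustrative assumptions, not the fitted values for Populus tomentosa.

```python
def tt_rate(T, T_b=0.0, theta=50.0):
    """TT model: germination rate = (T - T_b) / theta (per day), zero at or below T_b."""
    return max(T - T_b, 0.0) / theta

def mtt_rate(T, T_b=0.0, T_break=10.0, theta_low=80.0, theta_high=50.0):
    """Two-phase variant: different thermal-time constants below and above T_break."""
    if T <= T_b:
        return 0.0
    if T <= T_break:
        return (T - T_b) / theta_low
    # continue from the rate reached at T_break with the high-temperature slope
    return (T_break - T_b) / theta_low + (T - T_break) / theta_high

for T in (5, 10, 15, 20, 25, 30):
    r_tt, r_mtt = tt_rate(T), mtt_rate(T)
    print(f"T={T:2d}°C  TT: t50={1 / r_tt:5.1f} d   two-phase: t50={1 / r_mtt:5.1f} d")
```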

  18. Dynamically Sliding Threshold Model Reproduces the Initial-Strength Dependence of Spike-Timing Dependent Synaptic Plasticity

    NASA Astrophysics Data System (ADS)

    Kurashige, Hiroki; Sakai, Yutaka

    2007-11-01

    It has been considered that the amount of calcium elevation in a synaptic spine determines whether the synapse is potentiated or depressed. However, it has been pointed out that simple application of this principle cannot reproduce the properties of spike-timing-dependent plasticity (STDP). To solve the problem, we present a possible mechanism using a dynamically sliding threshold determined as the linear summation of calcium elevations induced by single pre- and post-synaptic spikes. We demonstrate that the model can reproduce the timing dependence of biological STDP. In addition, we find that the model can reproduce the dependence of biological STDP on the initial synaptic strength, which is found to be asymmetric for synaptic potentiation and depression, whereas neither an explicit initial-strength dependence nor an asymmetric mechanism is incorporated into the model.

  19. Cost-effectiveness and budget impact analysis of a population-based screening program for colorectal cancer.

    PubMed

    Pil, L; Fobelets, M; Putman, K; Trybou, J; Annemans, L

    2016-07-01

    Colorectal cancer (CRC) is one of the leading causes of cancer mortality in Belgium. In Flanders (Belgium), a population-based screening program with a biennial immunochemical faecal occult blood test (iFOBT) in women and men aged 56-74 has been organised since 2013. This study assessed the cost-effectiveness and budget impact of the colorectal population-based screening program in Flanders (Belgium). A health economic model was constructed, consisting of a decision tree simulating the screening process and a Markov model, with a time horizon of 20 years, simulating natural progression. Predicted mortality and incidence, total costs, and quality-adjusted life-years (QALYs) with and without the screening program were calculated in order to determine the incremental cost-effectiveness ratio (ICER) of CRC screening. Deterministic and probabilistic sensitivity analyses were conducted, taking into account uncertainty of the model parameters. Mortality and incidence were predicted to decrease over 20 years. The colorectal screening program in Flanders is found to be cost-effective with an ICER of €1,681/QALY (95% CI -1317 to 6601) in males and €4,484/QALY (95% CI -3254 to 18,163) in females. The probability of being cost-effective given a threshold of €35,000/QALY was 100% and 97.3%, respectively. The budget impact analysis showed the extra cost for the health care payer to be limited. This health economic analysis has shown that despite the possible adverse effects of screening and the extra costs for the health care payer and the patient, the population-based screening program for CRC in Flanders is cost-effective and should therefore be maintained. Copyright © 2016 European Federation of Internal Medicine. Published by Elsevier B.V. All rights reserved.
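    As a minimal illustration of the decision quantity reported above, the sketch below computes an incremental cost-effectiveness ratio for screening versus no screening and compares it with a willingness-to-pay threshold; the per-person costs and QALYs are placeholders, not outputs of the study's decision tree and Markov model.

```python
def icer(cost_new, qaly_new, cost_old, qaly_old):
    """Incremental cost per QALY gained."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

cost_screen, qaly_screen = 1_250.0, 14.32   # per-person discounted values (assumed)
cost_none, qaly_none = 1_100.0, 14.23

ratio = icer(cost_screen, qaly_screen, cost_none, qaly_none)
threshold = 35_000.0                         # €/QALY willingness-to-pay threshold
verdict = "cost-effective" if ratio < threshold else "not cost-effective"
print(f"ICER = €{ratio:,.0f}/QALY -> {verdict}")
```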

  20. Using hierarchical cluster models to systematically identify groups of jobs with similar occupational questionnaire response patterns to assist rule-based expert exposure assessment in population-based studies.

    PubMed

    Friesen, Melissa C; Shortreed, Susan M; Wheeler, David C; Burstyn, Igor; Vermeulen, Roel; Pronk, Anjoeka; Colt, Joanne S; Baris, Dalsu; Karagas, Margaret R; Schwenn, Molly; Johnson, Alison; Armenti, Karla R; Silverman, Debra T; Yu, Kai

    2015-05-01

    Rule-based expert exposure assessment based on questionnaire response patterns in population-based studies improves the transparency of the decisions. The number of unique response patterns, however, can be nearly equal to the number of jobs. An expert may reduce the number of patterns that need assessment using expert opinion, but each expert may identify different patterns of responses that identify an exposure scenario. Here, hierarchical clustering methods are proposed as a systematic data reduction step to reproducibly identify similar questionnaire response patterns prior to obtaining expert estimates. As a proof-of-concept, we used hierarchical clustering methods to identify groups of jobs (clusters) with similar responses to diesel exhaust-related questions and then evaluated whether the jobs within a cluster had similar (previously assessed) estimates of occupational diesel exhaust exposure. Using the New England Bladder Cancer Study as a case study, we applied hierarchical cluster models to the diesel-related variables extracted from the occupational history and job- and industry-specific questionnaires (modules). Cluster models were separately developed for two subsets: (i) 5395 jobs with ≥1 variable extracted from the occupational history indicating a potential diesel exposure scenario, but without a module with diesel-related questions; and (ii) 5929 jobs with both occupational history and module responses to diesel-relevant questions. For each subset, we varied the numbers of clusters extracted from the cluster tree developed for each model from 100 to 1000 groups of jobs. Using previously made estimates of the probability (ordinal), intensity (µg m(-3) respirable elemental carbon), and frequency (hours per week) of occupational exposure to diesel exhaust, we examined the similarity of the exposure estimates for jobs within the same cluster in two ways. First, the clusters' homogeneity (defined as >75% with the same estimate) was examined compared
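    The data-reduction step can be sketched as follows (this is not the study's code): jobs are rows of binary answers to diesel-related questions, agglomerative clustering groups similar response patterns, and the tree is cut at a chosen number of clusters before expert review. The synthetic data, the Jaccard distance and the average linkage are assumptions for illustration.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

rng = np.random.default_rng(1)
n_jobs, n_questions = 200, 12
responses = rng.integers(0, 2, size=(n_jobs, n_questions)).astype(bool)  # yes/no answers

dist = pdist(responses, metric="jaccard")     # distance between response patterns
tree = linkage(dist, method="average")        # agglomerative cluster tree

n_clusters = 20                               # number of groups handed to the expert
labels = fcluster(tree, t=n_clusters, criterion="maxclust")

print("cluster sizes:", np.bincount(labels)[1:])
```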

  1. Using Hierarchical Cluster Models to Systematically Identify Groups of Jobs With Similar Occupational Questionnaire Response Patterns to Assist Rule-Based Expert Exposure Assessment in Population-Based Studies

    PubMed Central

    Friesen, Melissa C.; Shortreed, Susan M.; Wheeler, David C.; Burstyn, Igor; Vermeulen, Roel; Pronk, Anjoeka; Colt, Joanne S.; Baris, Dalsu; Karagas, Margaret R.; Schwenn, Molly; Johnson, Alison; Armenti, Karla R.; Silverman, Debra T.; Yu, Kai

    2015-01-01

    Objectives: Rule-based expert exposure assessment based on questionnaire response patterns in population-based studies improves the transparency of the decisions. The number of unique response patterns, however, can be nearly equal to the number of jobs. An expert may reduce the number of patterns that need assessment using expert opinion, but each expert may identify different patterns of responses that identify an exposure scenario. Here, hierarchical clustering methods are proposed as a systematic data reduction step to reproducibly identify similar questionnaire response patterns prior to obtaining expert estimates. As a proof-of-concept, we used hierarchical clustering methods to identify groups of jobs (clusters) with similar responses to diesel exhaust-related questions and then evaluated whether the jobs within a cluster had similar (previously assessed) estimates of occupational diesel exhaust exposure. Methods: Using the New England Bladder Cancer Study as a case study, we applied hierarchical cluster models to the diesel-related variables extracted from the occupational history and job- and industry-specific questionnaires (modules). Cluster models were separately developed for two subsets: (i) 5395 jobs with ≥1 variable extracted from the occupational history indicating a potential diesel exposure scenario, but without a module with diesel-related questions; and (ii) 5929 jobs with both occupational history and module responses to diesel-relevant questions. For each subset, we varied the numbers of clusters extracted from the cluster tree developed for each model from 100 to 1000 groups of jobs. Using previously made estimates of the probability (ordinal), intensity (µg m−3 respirable elemental carbon), and frequency (hours per week) of occupational exposure to diesel exhaust, we examined the similarity of the exposure estimates for jobs within the same cluster in two ways. First, the clusters’ homogeneity (defined as >75% with the same estimate

  2. Modeling the calcium spike as a threshold triggered fixed waveform for synchronous inputs in the fluctuation regime

    PubMed Central

    Chua, Yansong; Morrison, Abigail; Helias, Moritz

    2015-01-01

    Modeling the layer 5 pyramidal neuron as a system of three connected isopotential compartments, the soma, proximal, and distal compartment, with calcium spike dynamics in the distal compartment following first order kinetics, we are able to reproduce in-vitro experimental results which demonstrate the involvement of calcium spikes in action potentials generation. To explore how calcium spikes affect the neuronal output in-vivo, we emulate in-vivo like conditions by embedding the neuron model in a regime of low background fluctuations with occasional large synchronous inputs. In such a regime, a full calcium spike is only triggered by the synchronous events in a threshold like manner and has a stereotypical waveform. Hence, in such a regime, we are able to replace the calcium dynamics with a simpler threshold triggered current of fixed waveform, which is amenable to analytical treatment. We obtain analytically the mean somatic membrane potential excursion due to a calcium spike being triggered while in the fluctuating regime. Our analytical form that accounts for the covariance between conductances and the membrane potential shows a better agreement with simulation results than a naive first order approximation. PMID:26283954

  3. Phonation threshold pressure and the elastic shear modulus: comparison of two-mass model calculations with experiments.

    PubMed

    Fulcher, Lewis P; Scherer, Ronald C; Waddle, John M

    2012-10-01

    Ishizaka and Flanagan's classic two-mass model of vocal fold motion is applied to small oscillations where the equations become linear and the aerodynamic driving force is described by an effective stiffness. The solution of these equations includes an analytic formula for the two eigenfrequencies; this shows that conjugate imaginary parts of the frequencies emerge beyond eigenvalue synchronization and that one of the imaginary parts becomes zero at a pressure signaling the instability associated with the onset of threshold. Using recent measurements by Fulcher et al. of intraglottal pressure distributions [J. Acoust. Soc. Am. 129, 1548-1553 (2011).] to inform the behavior of the entrance loss coefficients, an analytic formula for threshold pressure is derived. It fits most of the measurements Chan and Titze reported for their 2006 physical model of the vocal fold mucosa. Two sectors of the mass-stiffness parameter space are used to produce these fits. One is based on a rescaling of the typical glottal parameters of the original Ishizaka and Flanagan work. The second requires setting two of the spring constants equal and should be closer to the experimental conditions. In both cases, values of the elastic shear modulus are calculated from the spring constants.

  4. Modelling of capacitance and threshold voltage for ultrathin normally-off AlGaN /GaN MOSHEMT

    NASA Astrophysics Data System (ADS)

    Swain, R.; Jena, K.; Lenka, T. R.

    2017-01-01

    A compact quantitative model based on oxide semiconductor interface density of states (DOS) is proposed for Al0.25Ga0.75N/GaN metal oxide semiconductor high electron mobility transistor (MOSHEMT). Mathematical expressions for surface potential, sheet charge concentration, gate capacitance and threshold voltage have been derived. The gate capacitance behaviour is studied in terms of capacitance-voltage (CV) characteristics. Similarly, the predicted threshold voltage (VT) is analysed by varying barrier thickness and oxide thickness. The positive VT obtained for a very thin 3 nm AlGaN barrier layer enables the enhancement mode operation of the MOSHEMT. These devices, along with depletion mode devices, are basic constituents of cascode configuration in power electronic circuits. The expressions developed are used in conventional long-channel HEMT drain current equation and evaluated to obtain different DC characteristics. The obtained results are compared with experimental data taken from literature which show good agreement and hence endorse the proposed model.

  5. Population-based incidence of macular holes.

    PubMed

    McCannel, Colin A; Ensminger, Jennifer L; Diehl, Nancy N; Hodge, David N

    2009-07-01

    To determine the incidence of full-thickness macular holes in Olmsted County, Minnesota. Population-based retrospective chart review (cross-sectional study). Ninety-four eyes of 85 patients who were residents of Olmsted County, Minnesota. A population-based retrospective chart review was performed for all diagnoses of macular hole between 1992 and 2002 among residents of Olmsted County, Minnesota. Yearly incidence rates for each given age and sex group were determined by dividing the number of cases within that group by the estimated total Olmsted County resident population of the group for that given year. Documented clinical diagnosis of a macular hole. Idiopathic macular holes occur at an age- and sex-adjusted incidence of 7.8 persons and 8.69 eyes per 100,000 population per year in Olmsted County, Minnesota. The female-to-male ratio was determined to be 3.3 to 1, and bilateral idiopathic macular holes occurred in 11.7% of patients and accounted for 20.9% of the affected eyes. This study uniquely determined the incidence of macular holes in a predominantly Caucasian population.

  6. Population-Based Smoking Cessation Strategies

    PubMed Central

    2010-01-01

    Executive Summary. Objective: The objective of this report was to provide the Ministry of Health Promotion (MHP) with a summary of existing evidence-based reviews of the clinical and economic outcomes of population-based smoking cessation strategies. Background: Tobacco use is the leading cause of preventable disease and death in Ontario, linked to approximately 13,000 avoidable premature deaths annually – the vast majority of these are attributable to cancer, cardiovascular disease, and chronic obstructive lung disease. (1) In Ontario, tobacco related health care costs amount to $6.1 billion annually, or about $502 per person (including non-smokers) and account for 1.4% of the provincial domestic product. (2) In 2007, there were approximately 1.7 to 1.9 million smokers in Ontario with two-thirds of these intending to quit in the next six months and one-third wanting to quit within 30 days. (3) In 2007/2008, Ontario invested $15 million in cessation programs, services and training. (4) In June 2009, the Ministry of Health Promotion (MHP) requested that MAS provide a summary of the evidence base surrounding population-based smoking cessation strategies. Project Scope: The MAS and the MHP agreed that the project would consist of a clinical and economic summary of the evidence surrounding nine population-based strategies for smoking cessation, including: mass media interventions; telephone counselling; post-secondary smoking cessation programs (colleges/universities); community-wide stop-smoking contests (i.e. Quit and Win); community interventions; physician advice to quit; nursing interventions for smoking cessation; hospital-based interventions for smoking cessation; and pharmacotherapies for smoking cessation, specifically nicotine replacement therapies, antidepressants, anxiolytic drugs, opioid antagonists, clonidine, and nicotine receptor partial agonists. Reviews examining interventions for Cut Down to Quit (CDTQ) or harm reduction were not included in this review. In addition

  7. Threshold of coexistence and critical behavior of a predator-prey stochastic model in a fractal landscape

    NASA Astrophysics Data System (ADS)

    Argolo, C.; Barros, P.; Tomé, T.; Arashiro, E.; Gleria, Iram; Lyra, M. L.

    2016-08-01

    We investigate a stochastic lattice model describing a predator-prey system in a fractal scale-free landscape, mimicked by the fractal Sierpinski carpet. We determine the threshold of species coexistence, that is, the critical phase boundary related to the transition between an active state, where both species coexist, and an absorbing state, where one of the species is extinct. We show that the predators must live longer in order to persist in a fractal habitat. We further performed a finite-size scaling analysis in the vicinity of the absorbing-state phase transition to compute a set of stationary and dynamical critical exponents. Our results indicate that the transition belongs to the directed percolation universality class exhibited by the usual contact process model on the same fractal landscape.

  8. Phonation thresholds as a function of laryngeal size in a two-mass model of the vocal folds

    NASA Astrophysics Data System (ADS)

    Lucero, Jorge C.; Koenig, Laura L.

    2005-11-01

    This letter analyzes the oscillation onset-offset conditions of the vocal folds as a function of laryngeal size. A version of the two-mass model of the vocal folds is used, coupled to a two-tube approximation of the vocal tract in configuration for the vowel /a/. The standard male configurations of the laryngeal and vocal tract models are used as reference, and their dimensions are scaled using a single factor. Simulations of the vocal fold oscillation and oral output are produced for varying values of the scaling factor. The results show that the oscillation threshold conditions become more restricted for smaller laryngeal sizes, such as those appropriate for females and children.

  9. Effect of Canagliflozin on Renal Threshold for Glucose, Glycemia, and Body Weight in Normal and Diabetic Animal Models

    PubMed Central

    Liang, Yin; Arakawa, Kenji; Ueta, Kiichiro; Matsushita, Yasuaki; Kuriyama, Chiaki; Martin, Tonya; Du, Fuyong; Liu, Yi; Xu, June; Conway, Bruce; Conway, Jamie; Polidori, David; Ways, Kirk; Demarest, Keith

    2012-01-01

    Background Canagliflozin is a sodium glucose co-transporter (SGLT) 2 inhibitor in clinical development for the treatment of type 2 diabetes mellitus (T2DM). Methods 14C-alpha-methylglucoside uptake in Chinese hamster ovary-K cells expressing human, rat, or mouse SGLT2 or SGLT1; 3H-2-deoxy-d-glucose uptake in L6 myoblasts; and 2-electrode voltage clamp recording of oocytes expressing human SGLT3 were analyzed. Graded glucose infusions were performed to determine rate of urinary glucose excretion (UGE) at different blood glucose (BG) concentrations and the renal threshold for glucose excretion (RTG) in vehicle or canagliflozin-treated Zucker diabetic fatty (ZDF) rats. This study aimed to characterize the pharmacodynamic effects of canagliflozin in vitro and in preclinical models of T2DM and obesity. Results Treatment with canagliflozin 1 mg/kg lowered RTG from 415±12 mg/dl to 94±10 mg/dl in ZDF rats while maintaining a threshold relationship between BG and UGE with virtually no UGE observed when BG was below RTG. Canagliflozin dose-dependently decreased BG concentrations in db/db mice treated acutely. In ZDF rats treated for 4 weeks, canagliflozin decreased glycated hemoglobin (HbA1c) and improved measures of insulin secretion. In obese animal models, canagliflozin increased UGE and decreased BG, body weight gain, epididymal fat, liver weight, and the respiratory exchange ratio. Conclusions Canagliflozin lowered RTG and increased UGE, improved glycemic control and beta-cell function in rodent models of T2DM, and reduced body weight gain in rodent models of obesity. PMID:22355316

  10. Effect of canagliflozin on renal threshold for glucose, glycemia, and body weight in normal and diabetic animal models.

    PubMed

    Liang, Yin; Arakawa, Kenji; Ueta, Kiichiro; Matsushita, Yasuaki; Kuriyama, Chiaki; Martin, Tonya; Du, Fuyong; Liu, Yi; Xu, June; Conway, Bruce; Conway, Jamie; Polidori, David; Ways, Kirk; Demarest, Keith

    2012-01-01

    Canagliflozin is a sodium glucose co-transporter (SGLT) 2 inhibitor in clinical development for the treatment of type 2 diabetes mellitus (T2DM). (14)C-alpha-methylglucoside uptake in Chinese hamster ovary-K cells expressing human, rat, or mouse SGLT2 or SGLT1; (3)H-2-deoxy-d-glucose uptake in L6 myoblasts; and 2-electrode voltage clamp recording of oocytes expressing human SGLT3 were analyzed. Graded glucose infusions were performed to determine rate of urinary glucose excretion (UGE) at different blood glucose (BG) concentrations and the renal threshold for glucose excretion (RT(G)) in vehicle or canagliflozin-treated Zucker diabetic fatty (ZDF) rats. This study aimed to characterize the pharmacodynamic effects of canagliflozin in vitro and in preclinical models of T2DM and obesity. Treatment with canagliflozin 1 mg/kg lowered RT(G) from 415±12 mg/dl to 94±10 mg/dl in ZDF rats while maintaining a threshold relationship between BG and UGE with virtually no UGE observed when BG was below RT(G). Canagliflozin dose-dependently decreased BG concentrations in db/db mice treated acutely. In ZDF rats treated for 4 weeks, canagliflozin decreased glycated hemoglobin (HbA1c) and improved measures of insulin secretion. In obese animal models, canagliflozin increased UGE and decreased BG, body weight gain, epididymal fat, liver weight, and the respiratory exchange ratio. Canagliflozin lowered RT(G) and increased UGE, improved glycemic control and beta-cell function in rodent models of T2DM, and reduced body weight gain in rodent models of obesity.

  11. Modeling and control of threshold voltage based on pull-in characteristic for micro self-locked switch

    NASA Astrophysics Data System (ADS)

    Deng, Jufeng; Hao, Yongping; Liu, Shuangjie

    2017-09-01

    Micro self-locked switches (MSSs), where the execution voltage corresponds to the output signal, are efficient and convenient platforms for sensor applications. The proper functioning of these sensing devices requires driving accurate displacement under the execution voltage. In this work, we show how to control the actuating properties of MSSs. The switch comprises microstructures of various shapes with dimensions from 3.5 to 180 μm, which are optimized to encode a desired manufacturing deviation by means of a mathematical model of the threshold voltage. Compared with the pull-in voltage, the threshold voltage makes it easier to control the pull-in instability point through theoretical analysis. With the help of advanced manufacturing technology, the switch was fabricated in accordance with the proposed control method, and the corresponding experiments validated its improved performance. In addition, the experiments show that the manufacturing technology is advanced and feasible, and that the switch's high resilience and stable self-locking function enable instantaneous sensing.

  12. Threshold-like complexation of conjugated polymers with small molecule acceptors in solution within the neighbor-effect model.

    PubMed

    Sosorev, Andrey Yu; Parashchuk, Olga D; Zapunidi, Sergey A; Kashtanov, Grigoriy S; Golovnin, Ilya V; Kommanaboyina, Srikanth; Perepichka, Igor F; Paraschuk, Dmitry Yu

    2016-02-14

    In some donor-acceptor blends based on conjugated polymers, a pronounced charge-transfer complex (CTC) forms in the electronic ground state. In contrast to small-molecule donor-acceptor blends, the CTC concentration in polymer:acceptor solution can increase with the acceptor content in a threshold-like way. This threshold-like behavior was earlier attributed to the neighbor effect (NE) in the polymer complexation, i.e., next CTCs are preferentially formed near the existing ones; however, the NE origin is unknown. To address the factors affecting the NE, we record the optical absorption data for blends of the most studied conjugated polymers, poly(2-methoxy-5-(2-ethylhexyloxy)-1,4-phenylenevinylene) (MEH-PPV) and poly(3-hexylthiophene) (P3HT), with electron acceptors of the fluorene series, 1,8-dinitro-9,10-anthraquinone, and 7,7,8,8-tetracyanoquinodimethane in different solvents, and then analyze the data within the NE model. We have found that the NE depends on the polymer and acceptor molecular skeletons and solvent, while it does not depend on the acceptor electron affinity and polymer concentration. We conclude that the NE operates within a single macromolecule and stems from planarization of the polymer chain involved in the CTC with an acceptor molecule; as a result, the probability of further complexation with the next acceptor molecules at the adjacent repeat units increases. The steric and electronic microscopic mechanisms of NE are discussed.

  13. Crossing the Threshold Mindfully: Exploring Rites of Passage Models in Adventure Therapy

    ERIC Educational Resources Information Center

    Norris, Julian

    2011-01-01

    Rites of passage models, drawing from ethnographic descriptions of ritualized transition, are widespread in adventure therapy programmes. However, critical literature suggests that: (a) contemporary rites of passage models derive from a selective and sometimes misleading use of ethnographic materials, and (b) the appropriation of initiatory…

  14. Crossing the Threshold Mindfully: Exploring Rites of Passage Models in Adventure Therapy

    ERIC Educational Resources Information Center

    Norris, Julian

    2011-01-01

    Rites of passage models, drawing from ethnographic descriptions of ritualized transition, are widespread in adventure therapy programmes. However, critical literature suggests that: (a) contemporary rites of passage models derive from a selective and sometimes misleading use of ethnographic materials, and (b) the appropriation of initiatory…

  15. Evaluating critical uncertainty thresholds in a spatial model of forest pest invasion risk

    Treesearch

    Frank H. Koch; Denys Yemshanov; Daniel W. McKenney; William D. Smith

    2009-01-01

    Pest risk maps can provide useful decision support in invasive species management, but most do not adequately consider the uncertainty associated with predicted risk values. This study explores how increased uncertainty in a risk model’s numeric assumptions might affect the resultant risk map. We used a spatial stochastic model, integrating components for...

  16. Measuring Attitudes with a Threshold Model Drawing on a Traditional Scaling Concept.

    ERIC Educational Resources Information Center

    Rost, Jurgen

    1988-01-01

    A generalized Rasch model is presented for measuring attitudes; it is based on the concepts of Thurstone's method of successive intervals. Benefits of the model are illustrated with a study of students' (N=4,035 fifth through ninth graders) interest in physics. (SLD)

  17. Hydrodynamics of sediment threshold

    NASA Astrophysics Data System (ADS)

    Ali, Sk Zeeshan; Dey, Subhasish

    2016-07-01

    A novel hydrodynamic model for the threshold of cohesionless sediment particle motion under a steady unidirectional streamflow is presented. The hydrodynamic forces (drag and lift) acting on a solitary sediment particle resting over a closely packed bed formed by the identical sediment particles are the primary motivating forces. The drag force comprises the form drag and the form-induced drag. The lift force includes the Saffman lift, Magnus lift, centrifugal lift, and turbulent lift. The points of action of the force system are appropriately obtained, for the first time, from the basics of micro-mechanics. The sediment threshold is envisioned as the rolling mode, which is the plausible mode to initiate a particle motion on the bed. The moment balance of the force system on the solitary particle about the pivoting point of rolling yields the governing equation. The conditions of sediment threshold under the hydraulically smooth, transitional, and rough flow regimes are examined. The effects of velocity fluctuations are addressed by applying the statistical theory of turbulence. This study shows that for a hindrance coefficient of 0.3, the threshold curve (threshold Shields parameter versus shear Reynolds number) has an excellent agreement with the experimental data of uniform sediments. However, most of the experimental data are bounded by the upper and lower limiting threshold curves, corresponding to the hindrance coefficients of 0.2 and 0.4, respectively. The threshold curve of this study is compared with those of previous researchers. The present model also agrees satisfactorily with the experimental data of nonuniform sediments.
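    For orientation, the quantities that the threshold curve relates can be computed from a standard empirical fit (the Soulsby-Whitehouse approximation, used here only as a stand-in for the authors' theoretical curve): the threshold Shields parameter and the corresponding shear Reynolds number for quartz grains in water, with fluid and sediment properties assumed typical.

```python
import math

rho, rho_s = 1000.0, 2650.0      # water and quartz density (kg/m^3)
nu, g = 1.0e-6, 9.81             # kinematic viscosity (m^2/s), gravity (m/s^2)

def critical_shields(d):
    """Soulsby-Whitehouse empirical fit for the threshold Shields parameter."""
    d_star = d * ((rho_s / rho - 1.0) * g / nu**2) ** (1.0 / 3.0)
    return 0.30 / (1.0 + 1.2 * d_star) + 0.055 * (1.0 - math.exp(-0.020 * d_star))

def shear_reynolds(d, theta_cr):
    """Shear Reynolds number u* d / nu evaluated at the threshold condition."""
    u_star = math.sqrt(theta_cr * (rho_s / rho - 1.0) * g * d)
    return u_star * d / nu

for d_mm in (0.1, 0.5, 2.0, 10.0):
    d = d_mm / 1000.0
    th = critical_shields(d)
    print(f"d = {d_mm:4.1f} mm  theta_cr = {th:.3f}  R* = {shear_reynolds(d, th):8.1f}")
```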

  18. The nature of psychological thresholds.

    PubMed

    Rouder, Jeffrey N; Morey, Richard D

    2009-07-01

    Following G. T. Fechner (1966), thresholds have been conceptualized as the amount of intensity needed to transition between mental states, such as between states of unconsciousness and consciousness. With the advent of the theory of signal detection, however, discrete-state theory and the corresponding notion of threshold have been discounted. Consequently, phenomena such as subliminal priming and perception have a reduced theoretical basis. The authors propose a process-neutral definition of threshold that allows for graded perception and activation throughout the system. Thresholds correspond to maximum stimulus intensities such that the distribution of mental states does not differ from that when an appropriate baseline stimulus is presented. In practice, thresholds are maximum intensities such that the probability distribution on behavioral events does not differ from that from baseline. These thresholds, which the authors call task thresholds, may be estimated with modified item response psychometric measurement models. Copyright (c) 2009 APA, all rights reserved.

  19. Development of a Threshold Model to Predict Germination of Populus tomentosa Seeds after Harvest and Storage under Ambient Condition

    PubMed Central

    Wang, Wei-Qing; Cheng, Hong-Yan; Song, Song-Quan

    2013-01-01

    Effects of temperature, storage time and their combination on germination of aspen (Populus tomentosa) seeds were investigated. Aspen seeds were germinated at 5 to 30°C at 5°C intervals after storage for a period of time under 28°C and 75% relative humidity. The effect of temperature on aspen seed germination could not be effectively described by the thermal time (TT) model, which underestimated the germination rate at 5°C and poorly predicted the time courses of germination at 10, 20, 25 and 30°C. A modified TT model (MTT) which assumed a two-phased linear relationship between germination rate and temperature was more accurate in predicting the germination rate and percentage and had a higher likelihood of being correct than the TT model. The maximum lifetime threshold (MLT) model accurately described the effect of storage time on seed germination across all the germination temperatures. An aging thermal time (ATT) model combining both the TT and MLT models was developed to describe the effect of both temperature and storage time on seed germination. When the ATT model was applied to germination data across all the temperatures and storage times, it produced a relatively poor fit. Adjusting the ATT model to separately fit germination data at low and high temperatures in the suboptimal range increased the model's accuracy for predicting seed germination. Both the MLT and ATT models indicate that germination of aspen seeds has distinct physiological responses to temperature within a suboptimal range. PMID:23658654

  20. Combining regional estimation and historical floods: A multivariate semiparametric peaks-over-threshold model with censored data

    NASA Astrophysics Data System (ADS)

    Sabourin, Anne; Renard, Benjamin

    2015-12-01

    The estimation of extreme flood quantiles is challenging due to the relative scarcity of extreme data compared to typical target return periods. Several approaches have been developed over the years to face this challenge, including regional estimation and the use of historical flood data. This paper investigates the combination of both approaches using a multivariate peaks-over-threshold model that allows estimating altogether the intersite dependence structure and the marginal distributions at each site. The joint distribution of extremes at several sites is constructed using a semiparametric Dirichlet Mixture model. The existence of partially missing and censored observations (historical data) is accounted for within a data augmentation scheme. This model is applied to a case study involving four catchments in Southern France, for which historical data are available since 1604. The comparison of marginal estimates from four versions of the model (with or without regionalizing the shape parameter; using or ignoring historical floods) highlights significant differences in terms of return level estimates. Moreover, the availability of historical data on several nearby catchments allows investigating the asymptotic dependence properties of extreme floods. Catchments display a significant amount of asymptotic dependence, calling for adapted multivariate statistical models.

  1. Deep sub-threshold Ξ and Λ production in nuclear collisions with the UrQMD transport model

    NASA Astrophysics Data System (ADS)

    Graef, G.; Steinheimer, J.; Li, F.; Bleicher, M.

    2014-12-01

    We present results on deep sub-threshold hyperon production in nuclear collisions, with the UrQMD transport model. Introducing anti-kaon+baryon and hyperon + hyperon strangeness exchange reactions we obtain a good description of experimental data on single strange hadron production in Ar+KCl reactions at Elab=1.76 A GeV. We find that the hyperon strangeness exchange is the dominant process contributing to the Ξ- yield; however, our study remains short of explaining the Ξ-/Λ ratio measured with the HADES experiment. We also discuss possible reasons for the discrepancy with previous studies and the experimental results, finding that many details of the transport simulation may have significant effects on the final Ξ- yield.

  2. Effect of rescattering potential on the high-energy above-threshold ionization of a model-H atom

    NASA Astrophysics Data System (ADS)

    Chen, J.-H.; Wang, G.-L.; Zhang, Z.-R.; Zhao, S.-F.

    2017-01-01

    The high-energy above-threshold ionization of a model-H atom (with a 1s state and the same binding energy as the H atom) in a few-cycle laser pulse is investigated by using the improved strong-field approximation (ISFA), where the spherical shell potential is used as the rescattering potential. The results obtained from numerically solving the time-dependent Schrödinger equation (TDSE) are regarded as the benchmark results. Our results show that the energy distributions in the high-energy region obtained from ISFA calculations using the spherical shell potential may either match or be better than those from ISFA using the Yukawa potential and the zero-range potential for laser wavelengths of 800 and 1200 nm. In addition, the influence of the rescattering potential on the probability density at different ejection angles is also discussed in this paper.

  3. Dcx reexpression reduces subcortical band heterotopia and seizure threshold in an animal model of neuronal migration disorder.

    PubMed

    Manent, Jean-Bernard; Wang, Yu; Chang, Yoonjeung; Paramasivam, Murugan; LoTurco, Joseph J

    2009-01-01

    Disorders of neuronal migration can lead to malformations of the cerebral neocortex that greatly increase the risk of seizures. It remains untested whether malformations caused by disorders in neuronal migration can be reduced by reactivating cellular migration and whether such repair can decrease seizure risk. Here we show, in a rat model of subcortical band heterotopia (SBH) generated by in utero RNA interference of the Dcx gene, that aberrantly positioned neurons can be stimulated to migrate by reexpressing Dcx after birth. Restarting migration in this way both reduces neocortical malformations and restores neuronal patterning. We further find that the capacity to reduce SBH continues into early postnatal development. Moreover, intervention after birth reduces the convulsant-induced seizure threshold to a level similar to that in malformation-free controls. These results suggest that disorders of neuronal migration may be eventually treatable by reengaging developmental programs both to reduce the size of cortical malformations and to reduce seizure risk.

  4. The threshold value of homeostasis model assessment for insulin resistance in Qazvin Metabolic Diseases Study (QMDS): assessment of metabolic syndrome.

    PubMed

    Ziaee, Amir; Esmailzadehha, Neda; Oveisi, Sonia; Ghorbani, Azam; Ghanei, Laleh

    2015-01-01

    The homeostasis model assessment of insulin resistance (HOMA-IR) is a useful model for application in large epidemiologic studies. The aim of this study was to determine the HOMA cut-off values to identify insulin resistance (IR) and metabolic syndrome (MS) in Qazvin, central Iran. Overall, 480 men and 502 women aged 20-72 yr participated in this cross-sectional study from September 2010 to April 2011. The diagnostic criteria proposed by the National Cholesterol Education Program Third Adult Treatment Panel (ATP III), the International Diabetes Federation (IDF) and the new Joint Interim Societies (JIS) were applied to define MS. The lower limit of the top quintile of HOMA values in normal subjects was considered as the threshold of IR. The receiver operating characteristic (ROC) curves of HOMA for MS diagnosis were depicted. The optimal cut point to determine MS was assessed by the maximum Youden index and the shortest distance from the point (0, 1) on the ROC curve. The threshold of HOMA for IR was 2.48. Fifty-one percent of the subjects were insulin resistant. The cut point for diagnosis of JIS-, IDF-, ATP III- and Persian IDF-defined MS was 2.92, 2.91, 2.49 and 3.21, respectively. Sensitivity and specificity of ATP III-defined MS to diagnose IR were 33.95% and 84.78%, of IDF-defined MS were 39.13% and 81.29%, of JIS-defined MS were 43.77% and 78.11%, and of Persian IDF-defined MS were 27.32% and 88.76%, in that order. The high prevalence of IR in the present study warns about the future burden of type 2 diabetes. Only the ATP III criteria introduced a more specific cut point for putative manifestations of IR.
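
    As a quick illustration (nothing from the paper is used beyond the 2.48 cut-off quoted above), the sketch below computes the usual HOMA1-IR index and applies that insulin-resistance cut-off; the glucose and insulin inputs are made-up example values.

      # Sketch of the standard HOMA1-IR calculation with the study's reported
      # insulin-resistance cut-off of 2.48; example inputs are assumptions.
      def homa_ir(fasting_glucose_mmol_l, fasting_insulin_uU_ml):
          """HOMA1-IR = glucose (mmol/L) x insulin (microU/mL) / 22.5."""
          return fasting_glucose_mmol_l * fasting_insulin_uU_ml / 22.5

      def insulin_resistant(glucose, insulin, cutoff=2.48):
          return homa_ir(glucose, insulin) >= cutoff

      print(round(homa_ir(5.5, 11.0), 2))      # ~2.69
      print(insulin_resistant(5.5, 11.0))      # True at this cut-off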

  5. Towards an Epistemically Neutral Curriculum Model for Vocational Education: From Competencies to Threshold Concepts and Practices

    ERIC Educational Resources Information Center

    Hodge, Steven; Atkins, Liz; Simons, Michele

    2016-01-01

    Debate about the benefits and problems with competency-based training (CBT) has not paid sufficient attention to the fact that the model satisfies a unique, contemporary demand for cross-occupational curriculum. The adoption of CBT in the UK and Australia, along with at least some of its problems, can be understood in terms of this demand. We…

  6. Identifying Atomic Structure as a Threshold Concept: Student Mental Models and Troublesomeness

    ERIC Educational Resources Information Center

    Park, Eun Jung; Light, Gregory

    2009-01-01

    Atomic theory or the nature of matter is a principal concept in science and science education. This has, however, been complicated by the difficulty students have in learning the concept and the subsequent construction of many alternative models. To understand better the conceptual barriers to learning atomic structure, this study explores the…

  8. Performance of the SWEEP model affected by estimates of threshold friction velocity

    USDA-ARS?s Scientific Manuscript database

    The Wind Erosion Prediction System (WEPS) is a process-based model and needs to be verified under a broad range of climatic, soil, and management conditions. Occasional failure of the WEPS erosion submodel (Single-event Wind Erosion Evaluation Program or SWEEP) to simulate erosion in the Columbia Pl...

  9. Identifying and assessing critical uncertainty thresholds in a forest pest risk model

    Treesearch

    Frank H. Koch; Denys Yemshanov

    2015-01-01

    Pest risk maps can provide helpful decision support for invasive alien species management, but often fail to address adequately the uncertainty associated with their predicted risk values. This chapter explores how increased uncertainty in a risk model's numeric assumptions (i.e. its principal parameters) might affect the resulting risk map. We used a spatial...

  10. Threshold effects in nonlinear models with an application to the social capital-retirement-health relationship.

    PubMed

    Gannon, Brenda; Harris, David; Harris, Mark

    2014-09-01

    This paper considers the relationship between social capital and health in the years before, at and after retirement. This adds to the current literature that only investigates this relationship in either the population as a whole or two subpopulations, pre-retirement and post-retirement. We now investigate whether there are additional subpopulations in the years leading up to and following retirement. We take an information criteria approach to select the optimal model of subpopulations from a full range of potential models. This approach is similar to that proposed for linear models. Our contribution is to show how this may also be applied to nonlinear models and without the need for estimating subsequent subpopulations conditional on previous fixed subpopulations. Our main finding is that the association of social capital with health diminishes at retirement, and this decreases further 10 years after retirement. We find a strong positive significant association of social capital with health, although this turns negative after 20 years, indicating potential unobserved heterogeneity. The types of social capital may differ in later years (e.g., less volunteering) and hence overall social capital may have less of an influence on health in later years.

  12. Optimising threshold levels for information transmission in binary threshold networks: Independent multiplicative noise on each threshold

    NASA Astrophysics Data System (ADS)

    Zhou, Bingchang; McDonnell, Mark D.

    2015-02-01

    The problem of optimising the threshold levels in a multilevel threshold system subject to multiplicative Gaussian and uniform noise is considered. Similar to previous results for additive noise, we find a bifurcation phenomenon in the optimal threshold values, as the noise intensity changes. This occurs when the number of threshold units is greater than one. We also study the optimal thresholds for combined additive and multiplicative Gaussian noise, and find that all threshold levels need to be identical to optimise the system when the additive noise intensity is constant. However, this identical value is not equal to the signal mean, unlike the case of additive noise. When the multiplicative noise intensity is instead held constant, the optimal threshold levels are not all identical for small additive noise intensity but are all equal to zero for large additive noise intensity. The model and our results are potentially relevant for sensor network design and understanding neurobiological sensory neurons such as in the peripheral auditory system.
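
    To make the setting concrete, the sketch below simulates a small summing network of binary threshold units whose inputs carry independent multiplicative Gaussian noise and estimates the transmitted information by Monte Carlo; the signal alphabet, noise level, thresholds, and trial count are illustrative assumptions rather than the settings analysed in the paper.

      # Monte-Carlo sketch: N binary threshold units, each driven by the signal
      # times an independent Gaussian factor (multiplicative noise); the network
      # output is the count of units above threshold, and I(X; Y) is estimated
      # from the empirical joint distribution. All parameters are illustrative.
      import numpy as np

      rng = np.random.default_rng(0)

      def mutual_information(thresholds, x_values, p_x, sigma_mult=0.3, n_trials=20000):
          thresholds = np.asarray(thresholds, dtype=float)
          N = thresholds.size
          joint = np.zeros((len(x_values), N + 1))          # joint P(X, Y)
          for i, x in enumerate(x_values):
              eta = rng.normal(1.0, sigma_mult, size=(n_trials, N))   # multiplicative noise
              y = np.sum(x * eta > thresholds, axis=1)                # summed binary outputs
              joint[i] = p_x[i] * np.bincount(y, minlength=N + 1) / n_trials
          p_y = joint.sum(axis=0)
          p_x_marg = joint.sum(axis=1)
          with np.errstate(divide="ignore", invalid="ignore"):
              terms = joint * np.log2(joint / (p_x_marg[:, None] * p_y[None, :]))
          return float(np.nansum(terms))

      # Example: an equiprobable two-level signal and four identical thresholds.
      print(mutual_information([0.5] * 4, x_values=[0.0, 1.0], p_x=[0.5, 0.5]))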

  13. Bayesian approach to color-difference models based on threshold and constant-stimuli methods.

    PubMed

    Brusola, Fernando; Tortajada, Ignacio; Lengua, Ismael; Jordá, Begoña; Peris, Guillermo

    2015-06-15

    An alternative approach based on statistical Bayesian inference is presented to deal with the development of color-difference models and the precision of parameter estimation. The approach was applied to simulated data and real data, the latter published by selected authors involved with the development of color-difference formulae using traditional methods. Our results show very good agreement between the Bayesian and classical approaches. Among other benefits, our proposed methodology allows one to determine the marginal posterior distribution of each random individual parameter of the color-difference model. In this manner, it is possible to analyze the effect of individual parameters on the statistical significance calculation of a color-difference equation.

  14. The Limits of Moral Principle: An Ends, Means, and Role Spheres Model of the Ethical Threshold.

    DTIC Science & Technology

    1986-09-01

    one's point of view. In either case, society will benefit. The model can be included in educational curriculum, can be incorporated into quantitative... development grew from an effort to perform a case study of the recent space shuttle Challenger accident. In order to perform a review of ethical... natural law was developed. It was decided to redirect effort from a case study of the shuttle accident to a more complete development and analysis of the

  15. An advanced stochastic model for threshold crossing studies of rotor blade vibrations.

    NASA Technical Reports Server (NTRS)

    Gaonkar, G. H.; Hohenemser, K. H.

    1972-01-01

    A stochastic model to analyze turbulence-excited rotor blade vibrations, previously described by Gaonkar et al. (1971), is generalized to include nonuniformity of the atmospheric turbulence velocity across the rotor disk in the longitudinal direction. The results of the presented analysis suggest that the nonuniformity of the vertical turbulence over the rotor disk is of little influence on the random blade flapping response, at least as far as longitudinal nonuniformity is concerned.

  16. Effects of alpha stopping power modelling on the ignition threshold in a directly-driven inertial confinement fusion capsule

    DOE PAGES

    Temporal, Mauro; Canaud, Benoit; Cayzac, Witold; ...

    2017-05-25

    The alpha-particle energy deposition mechanism modifies the ignition conditions of the thermonuclear Deuterium-Tritium fusion reactions, and constitutes a key issue in achieving high gain in Inertial Confinement Fusion implosions. One-dimensional hydrodynamic calculations have been performed with the code Multi-IFE to simulate the implosion of a capsule directly irradiated by a laser beam. The diffusion approximation for the alpha energy deposition has been used to optimize three laser profiles corresponding to different implosion velocities. A Monte-Carlo package has been included in Multi-IFE to calculate the alpha energy transport, and in this case the energy deposition uses both the LP and the BPS stopping power models. Homothetic transformations that maintain a constant implosion velocity have been used to map out the transition region between marginally-igniting and high-gain configurations. Furthermore, the results provided by the two models have been compared and it is found that, close to the ignition threshold, in order to produce the same fusion energy, the calculations performed with the BPS model require about 10% more invested energy with respect to the LP model.

  17. Effects of alpha stopping power modelling on the ignition threshold in a directly-driven inertial confinement fusion capsule

    NASA Astrophysics Data System (ADS)

    Temporal, Mauro; Canaud, Benoit; Cayzac, Witold; Ramis, Rafael; Singleton, Robert L.

    2017-05-01

    The alpha-particle energy deposition mechanism modifies the ignition conditions of the thermonuclear Deuterium-Tritium fusion reactions, and constitutes a key issue in achieving high gain in Inertial Confinement Fusion implosions. One-dimensional hydrodynamic calculations have been performed with the code Multi-IFE [R. Ramis, J. Meyer-ter-Vehn, Comput. Phys. Commun. 203, 226 (2016)] to simulate the implosion of a capsule directly irradiated by a laser beam. The diffusion approximation for the alpha energy deposition has been used to optimize three laser profiles corresponding to different implosion velocities. A Monte-Carlo package has been included in Multi-IFE to calculate the alpha energy transport, and in this case the energy deposition uses both the LP [C.K. Li, R.D. Petrasso, Phys. Rev. Lett. 70, 3059 (1993)] and the BPS [L.S. Brown, D.L. Preston, R.L. Singleton Jr., Phys. Rep. 410, 237 (2005)] stopping power models. Homothetic transformations that maintain a constant implosion velocity have been used to map out the transition region between marginally-igniting and high-gain configurations. The results provided by the two models have been compared and it is found that - close to the ignition threshold - in order to produce the same fusion energy, the calculations performed with the BPS model require about 10% more invested energy with respect to the LP model.

  18. Societal costs and effects of implementing population-based mammography screening in Greenland.

    PubMed

    Christensen, Maria Klitgaard; Niclasen, Birgit V; Iburg, Kim Moesgaard

    2017-01-01

    With a low breast cancer incidence and low population density, Greenland is geographically and organisationally challenged in implementing a cost effective breast cancer screening programme where a large proportion of the Greenlandic women will have to travel far to attend. The aim of this paper is to evaluate the cost effectiveness and cost utility of different strategies for implementing population-based breast cancer screening in Greenland. Two strategies were evaluated: centralised screening in the capital Nuuk and decentralised screening in the five municipal regions of Greenland. A cost effectiveness and cost utility analysis were performed from a societal perspective to estimate the costs per year of life saved and per QALY gained. Two accommodation models for the women's attendance were examined: accommodation in ordinary hotels or in patient hotels. The hotel model was less costly than the patient hotel model, regardless of screening strategy. The decentralised strategy was more cost effective than the centralised strategy, resulting in 0.5 million DKK per year of life saved (YLS) and 4.1 million DKK per quality-adjusted life year (QALY) gained within the hotel model. These ratios are significantly higher than findings from other countries. The sensitivity analysis showed a substantial gap between the most and least favourable model assumptions. The investigated strategies were all estimated to be extremely costly, mostly due to high transportation and accommodation costs and loss of productivity, and none would be accepted as cost-effective per YLS/QALY gained within a conventional threshold level. The least expensive strategy was regional screening with hotel accommodation.
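
    The ratios quoted above are incremental cost-effectiveness ratios; the sketch below shows the arithmetic with placeholder numbers (the study's actual costs and effects are not reproduced here).

      # Incremental cost-effectiveness ratio: extra cost per extra unit of health
      # effect (YLS or QALY). The numbers are placeholders, not the study's data.
      def icer(cost_new, cost_comparator, effect_new, effect_comparator):
          return (cost_new - cost_comparator) / (effect_new - effect_comparator)

      cost_screening, cost_no_screening = 30_000_000, 5_000_000   # assumed DKK
      qalys_screening, qalys_no_screening = 106.0, 100.0          # assumed QALYs
      print(icer(cost_screening, cost_no_screening,
                 qalys_screening, qalys_no_screening))            # DKK per QALY gained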

  19. Elaborating on Threshold Concepts

    ERIC Educational Resources Information Center

    Rountree, Janet; Robins, Anthony; Rountree, Nathan

    2013-01-01

    We propose an expanded definition of Threshold Concepts (TCs) that requires the successful acquisition and internalisation not only of knowledge, but also its practical elaboration in the domains of applied strategies and mental models. This richer definition allows us to clarify the relationship between TCs and Fundamental Ideas, and to account…

  20. Population-based study on infant mortality.

    PubMed

    Lima, Jaqueline Costa; Mingarelli, Alexandre Marchezoni; Segri, Neuber José; Zavala, Arturo Alejandro Zavala; Takano, Olga Akiko

    2017-03-01

    Although Brazil has reduced social, economic and health indicators disparities in the last decade, intra- and inter-regional differences in child mortality rates (CMR) persist in regions such as the state capital of Mato Grosso. This population-based study aimed to investigate factors associated with child mortality in five cohorts of live births (LB) of mothers living in Cuiabá (MT), Brazil, 2006-2010, through probabilistic linkage in 47,018 LB. We used hierarchical logistic regression analysis. Of the 617 child deaths, 48% occurred in the early neonatal period. CMR ranged from 14.6 to 12.0 deaths per thousand LB. The following remained independently associated with death: mothers without companion (OR = 1.32); low number of prenatal consultations (OR = 1.65); low birthweight (OR = 4.83); prematurity (OR = 3.05); Apgar ≤ 7 at the first minute (OR = 3.19); Apgar ≤ 7 at the fifth minute (OR = 4.95); congenital malformations (OR = 14.91) and male gender (OR = 1.26). CMR has declined in Cuiabá, however, there is need to guide public healthcare policies in the prenatal and perinatal period to reduce early neonatal mortality and further studies to identify the causes of preventable deaths.

  1. Severe neonatal hypernatraemia: a population based study.

    PubMed

    Oddie, Sam Joseph; Craven, Vanessa; Deakin, Kathryn; Westman, Janette; Scally, Andrew

    2013-09-01

    To describe incidence, presentation, treatment and short term outcomes of severe neonatal hypernatraemia (SNH, sodium ≥160 mmol/l). Prospective, population based surveillance study over 13 months using the British Paediatric Surveillance Unit. Cases were >33 weeks gestation at birth, fed breast or formula milk and <28 days of age at presentation. Of 62 cases of SNH reported (7, 95% CI 5.4 to 9.0 per 100 000 live births), 61 mothers had intended to achieve exclusive breast feeding. Infants presented at median day 6 (range 2-17) with median weight loss of 19.5% (range 8.9-30.9). 12 had jaundice and 57 weight loss as a presenting feature. 58 presented with weight loss ≥15%. 25 babies had not stooled in the 24 h prior to admission. Serum sodium fell by median 12.9 mmol/l per 24 h (range 0-30). No baby died, had seizures or coma or was treated with dialysis or a central line. At discharge, babies had regained 11% of initial birth weight after a median admission of 5 (range 2-14) days. 10 were exclusively breast fed on discharge from hospital. Neonatal hypernatraemia at this level, in this population, is strongly associated with weight loss. It occurs almost exclusively after attempts to initiate breast feeding, occurs uncommonly and does not appear to be associated with serious short term morbidities, beyond admission to hospital.

  2. Collaborations in Population-Based Health Research

    PubMed Central

    Lieu, Tracy A.; Hinrichsen, Virginia L.; Moreira, Andrea; Platt, Richard

    2011-01-01

    The HMO Research Network (HMORN) is a consortium of 16 health care systems with integrated research centers. Approximately 475 people participated in its 17th annual conference, hosted by the Department of Population Medicine, Harvard Pilgrim Health Care Institute and Harvard Medical School. The theme, “Collaborations in Population-Based Health Research,” reflected the network’s emphasis on collaborative studies both among its members and with external investigators. Plenary talks highlighted the initial phase of the HMORN’s work to establish the NIH-HMO Collaboratory, opportunities for public health collaborations, the work of early career investigators, and the state of the network. Platform and poster presentations showcased a broad spectrum of innovative public domain research in areas including disease epidemiology and treatment, health economics, and information technology. Special interest group sessions and ancillary meetings provided venues for informal conversation and structured work among ongoing groups, including networks in cancer, cardiovascular diseases, lung diseases, medical product safety, and mental health. PMID:22090515

  3. Computational Modeling of Interventions and Protective Thresholds to Prevent Disease Transmission in Deploying Populations

    PubMed Central

    2014-01-01

    Military personnel are deployed abroad for missions ranging from humanitarian relief efforts to combat actions; delay or interruption in these activities due to disease transmission can cause operational disruptions, significant economic loss, and stressed or exceeded military medical resources. Deployed troops function in environments favorable to the rapid and efficient transmission of many viruses particularly when levels of protection are suboptimal. When immunity among deployed military populations is low, the risk of vaccine-preventable disease outbreaks increases, impacting troop readiness and achievement of mission objectives. However, targeted vaccination and the optimization of preexisting immunity among deployed populations can decrease the threat of outbreaks among deployed troops. Here we describe methods for the computational modeling of disease transmission to explore how preexisting immunity compares with vaccination at the time of deployment as a means of preventing outbreaks and protecting troops and mission objectives during extended military deployment actions. These methods are illustrated with five modeling case studies for separate diseases common in many parts of the world, to show different approaches required in varying epidemiological settings. PMID:25009579

  4. Computational modeling of interventions and protective thresholds to prevent disease transmission in deploying populations.

    PubMed

    Burgess, Colleen; Peace, Angela; Everett, Rebecca; Allegri, Buena; Garman, Patrick

    2014-01-01

    Military personnel are deployed abroad for missions ranging from humanitarian relief efforts to combat actions; delay or interruption in these activities due to disease transmission can cause operational disruptions, significant economic loss, and stressed or exceeded military medical resources. Deployed troops function in environments favorable to the rapid and efficient transmission of many viruses particularly when levels of protection are suboptimal. When immunity among deployed military populations is low, the risk of vaccine-preventable disease outbreaks increases, impacting troop readiness and achievement of mission objectives. However, targeted vaccination and the optimization of preexisting immunity among deployed populations can decrease the threat of outbreaks among deployed troops. Here we describe methods for the computational modeling of disease transmission to explore how preexisting immunity compares with vaccination at the time of deployment as a means of preventing outbreaks and protecting troops and mission objectives during extended military deployment actions. These methods are illustrated with five modeling case studies for separate diseases common in many parts of the world, to show different approaches required in varying epidemiological settings.
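
    As a minimal illustration of the kind of protective threshold these models explore (not one of the paper's five case-study models), the sketch below uses the textbook herd-immunity relation to ask what vaccination coverage would be needed before deployment; the reproduction numbers and vaccine efficacy are assumptions.

      # Textbook protective-threshold arithmetic, shown only for orientation:
      # the fraction of a deploying population that must be immune, and the
      # vaccination coverage required when the vaccine is imperfect.
      def critical_immune_fraction(r0):
          """Immune fraction needed to block sustained transmission: 1 - 1/R0."""
          return 1.0 - 1.0 / r0

      def critical_coverage(r0, vaccine_efficacy):
          """Coverage needed when only a fraction of recipients are protected."""
          return critical_immune_fraction(r0) / vaccine_efficacy

      for r0 in (1.5, 5.0, 15.0):        # assumed reproduction numbers
          print(r0, round(critical_coverage(r0, vaccine_efficacy=0.9), 2))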

  5. New Concepts in Hypertension Management: A Population-Based Perspective.

    PubMed

    Milani, Richard V; Lavie, Carl J; Wilt, Jonathan K; Bober, Robert M; Ventura, Hector O

    Hypertension (HTN) is the most common chronic disease in the U.S., and the standard model of office-based care delivery has yielded suboptimal outcomes, with approximately 50% of affected patients not achieving blood pressure (BP) control. Poor population-level BP control has been primarily attributed to therapeutic inertia and low patient engagement. New models of care delivery utilizing patient-generated health data, comprehensive assessment of social health determinants, computerized algorithms generating tailored interventions, frequent communication and reporting, and non-physician providers organized as an integrated practice unit, have the potential to transform population-based HTN control. This review will highlight the importance of these elements and construct the rationale for a reengineered model of care delivery for populations with HTN. Copyright © 2016 Elsevier Inc. All rights reserved.

  6. Pattern of trauma determines the threshold for epileptic activity in a model of cortical deafferentation

    PubMed Central

    Volman, Vladislav; Bazhenov, Maxim; Sejnowski, Terrence J.

    2011-01-01

    Epileptic activity often occurs in the cortex after a latent period after head trauma; this delay has been attributed to the destabilizing influence of homeostatic synaptic scaling and changes in intrinsic properties. However, the impact of the spatial organization of cortical trauma on epileptogenesis is poorly understood. We addressed this question by analyzing the dynamics of a large-scale biophysically realistic cortical network model subjected to different patterns of trauma. Our results suggest that the spatial pattern of trauma can greatly affect the propensity for developing posttraumatic epileptic activity. For the same fraction of lesioned neurons, spatially compact trauma resulted in stronger posttraumatic elevation of paroxysmal activity than spatially diffuse trauma. In the case of very severe trauma, diffuse distribution of a small number of surviving intact neurons alleviated posttraumatic epileptogenesis. We suggest that clinical evaluation of the severity of brain trauma should take into account the spatial pattern of the injured cortex. PMID:21896754

  7. Genetic control of malaria parasite transmission: threshold levels for infection in an avian model system.

    PubMed

    Jasinskiene, Nijole; Coleman, Judy; Ashikyan, Aurora; Salampessy, Michael; Marinotti, Osvaldo; James, Anthony A

    2007-06-01

    Genetic strategies for controlling malaria transmission based on engineering pathogen resistance in Anopheles mosquitoes are being tested in a number of animal models. A key component is the effector molecule and the efficiency with which it reduces parasite transmission. Single-chain antibodies (scFvs) that bind the circumsporozoite protein of the avian parasite, Plasmodium gallinaceum, can reduce mean intensities of sporozoite infection of salivary glands by two to four orders of magnitude in transgenic Aedes aegypti. Significantly, mosquitoes with as few as 20 sporozoites in their salivary glands are infectious for a vertebrate host, Gallus gallus. Although scFvs hold promise as effector molecules, they will have to reduce mean intensities of infection to zero to prevent parasite transmission and disease. We conclude that similar endpoints must be reached with human pathogens if we are to expect an effect on disease transmission.

  8. Growth and recovery of temporary threshold shift at 3 kHz in bottlenose dolphins: experimental data and mathematical models.

    PubMed

    Finneran, James J; Carder, Donald A; Schlundt, Carolyn E; Dear, Randall L

    2010-05-01

    Measurements of temporary threshold shift (TTS) in marine mammals have become important components in developing safe exposure guidelines for animals exposed to intense human-generated underwater noise; however, existing marine mammal TTS data are somewhat limited in that they have typically induced small amounts of TTS. This paper presents experimental data for the growth and recovery of larger amounts of TTS (up to 23 dB) in two bottlenose dolphins (Tursiops truncatus). Exposures consisted of 3-kHz tones with durations from 4 to 128 s and sound pressure levels from 100 to 200 dB re 1 μPa. The resulting TTS data were combined with existing data from two additional dolphins to develop mathematical models for the growth and recovery of TTS. TTS growth was modeled as the product of functions of exposure duration and sound pressure level. TTS recovery was modeled using a double exponential function of the TTS at 4-min post-exposure and the recovery time.
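
    To indicate the shape of such models (the fitted functions and constants from the paper are not reproduced), the sketch below uses placeholder forms: a growth term that is the product of a sound-pressure-level function and an exposure-duration function, and a double-exponential recovery starting from the shift measured 4 minutes after exposure.

      # Placeholder functional forms only; the constants below are assumptions,
      # not the parameters fitted to the dolphin data.
      import math

      def tts_at_4min(spl_db, duration_s, spl_ref=190.0, k_spl=0.5, k_dur=4.0):
          """Assumed growth model: product of an SPL term and a log-duration term."""
          level_term = max(0.0, k_spl * (spl_db - spl_ref))
          duration_term = k_dur * math.log10(max(duration_s, 1.0))
          return level_term * duration_term / 10.0

      def tts_recovery(tts_4min, t_after_4min, a=0.7, tau1=10.0, tau2=200.0):
          """Assumed double-exponential decay, with time counted from the 4-min measurement."""
          return tts_4min * (a * math.exp(-t_after_4min / tau1)
                             + (1 - a) * math.exp(-t_after_4min / tau2))

      shift = tts_at_4min(spl_db=200.0, duration_s=64.0)
      for t in (0, 30, 120, 1440):
          print(t, round(tts_recovery(shift, t), 1))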

  9. Modeling the impact of spinal cord stimulation paddle lead position on impedance, stimulation threshold, and activation region.

    PubMed

    Xiaoyi Min; Kent, Alexander R

    2015-08-01

    The effectiveness of spinal cord stimulation (SCS) for chronic pain treatment depends on selection of appropriate stimulation settings, which can be especially challenging following posture change or SCS lead migration. The objective of this work was to investigate the feasibility of using SCS lead impedance for determining the location of a SCS lead and for detecting lead migration, as well as the impact of axial movement and rotation of the St. Jude Medical PENTA™ paddle in the dorsal-ventral or medial-lateral directions on dorsal column (DC) stimulation thresholds and neural activation regions. We used a two-stage computational model, including a finite element method model of field potentials in the spinal cord during stimulation, coupled to a biophysical cable model of mammalian, myelinated nerve fibers to calculate tissue impedance and nerve fiber activation within the DC. We found that SCS lead impedance was highly sensitive to the distance between the lead and cerebrospinal fluid (CSF) layer. In addition, among all the lead positions studied, medial-lateral movement resulted in the most substantial changes to SC activation regions. These results suggest that impedance can be used for detecting paddle position and lead migration, and therefore for guiding SCS programming.

  10. Finger Vein Segmentation from Infrared Images Based on a Modified Separable Mumford Shah Model and Local Entropy Thresholding.

    PubMed

    Vlachos, Marios; Dermatas, Evangelos

    2015-01-01

    A novel method for finger vein pattern extraction from infrared images is presented. This method involves four steps: preprocessing which performs local normalization of the image intensity, image enhancement, image segmentation, and finally postprocessing for image cleaning. In the image enhancement step, an image which will be both smooth and similar to the original is sought. The enhanced image is obtained by minimizing the objective function of a modified separable Mumford Shah Model. Since this minimization procedure is computationally intensive for large images, a local application of the Mumford Shah Model in small window neighborhoods is proposed. The finger veins are located in concave nonsmooth regions and, so, in order to distinguish them from the other tissue parts, all the differences between the smooth neighborhoods, obtained by the local application of the model, and the corresponding windows of the original image are added. After that, the veins in the enhanced image have been sufficiently emphasized. Thus, after image enhancement, an accurate segmentation can be obtained readily by a local entropy thresholding method. Finally, the resulting binary image may suffer from some misclassifications and, so, a postprocessing step is performed in order to extract a robust finger vein pattern.
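
    As an indication of what entropy-based thresholding does in the segmentation step, the sketch below implements a generic global maximum-entropy (Kapur-style) histogram threshold; the paper's own method is applied locally and may differ in detail, and the 8-bit image range is an assumption.

      # Generic maximum-entropy histogram threshold (Kapur-style), as a simple
      # stand-in for the entropy-based thresholding step; assumes an 8-bit image.
      import numpy as np

      def max_entropy_threshold(image, n_bins=256):
          hist, _ = np.histogram(image.ravel(), bins=n_bins, range=(0, n_bins))
          p = hist.astype(float) / hist.sum()
          best_t, best_h = 0, -np.inf
          for t in range(1, n_bins - 1):
              p0, p1 = p[:t].sum(), p[t:].sum()
              if p0 == 0 or p1 == 0:
                  continue
              q0, q1 = p[:t] / p0, p[t:] / p1
              h = -(q0[q0 > 0] * np.log(q0[q0 > 0])).sum() \
                  - (q1[q1 > 0] * np.log(q1[q1 > 0])).sum()
              if h > best_h:
                  best_t, best_h = t, h
          return best_t

      # Usage with a hypothetical array: binary = enhanced > max_entropy_threshold(enhanced)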

  11. Quasi-two-dimensional threshold voltage model for junctionless cylindrical surrounding gate metal-oxide-semiconductor field-effect transistor with dual-material gate

    NASA Astrophysics Data System (ADS)

    Li, Cong; Zhuang, Yi-Qi; Zhang, Li; Jin, Gang

    2014-01-01

    Based on the quasi-two-dimensional (2D) solution of Poisson's equation in two continuous channel regions, an analytical threshold voltage model for short-channel junctionless dual-material cylindrical surrounding-gate (JLDMCSG) metal-oxide-semiconductor field-effect transistor (MOSFET) is developed. Using the derived model, channel potential distribution, horizontal electrical field distribution, and threshold voltage roll-off of JLDMCSG MOSFET are investigated. Compared with junctionless single-material CSG (JLSGCSG) MOSFET, JLDMCSG MOSFET can effectively suppress short-channel effects and simultaneously improve carrier transport efficiency. It is also revealed that threshold voltage roll-off of JLDMCSG can be significantly reduced by adopting both a small oxide thickness and a small silicon channel radius. The model is verified by comparing its calculated results with that obtained from three-dimensional (3D) numerical device simulator ISE.

  12. Drought Risk Modeling for Thermoelectric Power Plants Siting using an Excess Over Threshold Approach

    SciTech Connect

    Bekera, Behailu B; Francis, Royce A; Omitaomu, Olufemi A

    2014-01-01

    Water availability is among the most important elements of thermoelectric power plant site selection and evaluation criteria. With increased variability and changes in hydrologic statistical stationarity, one concern is the increased occurrence of extreme drought events that may be attributable to climatic changes. As hydrological systems are altered, operators of thermoelectric power plants need to ensure a reliable supply of water for cooling and generation requirements. The effects of climate change are expected to influence hydrological systems at multiple scales, possibly leading to reduced efficiency of thermoelectric power plants. In this paper, we model drought characteristics from a thermoelectric system's operational and regulatory perspective. A systematic approach to characterise a stream environment in relation to extreme drought occurrence, duration and deficit-volume is proposed and demonstrated. This approach can potentially enhance early stage decisions in identifying candidate sites for a thermoelectric power plant application and allow investigation and assessment of varying degrees of drought risk during more advanced stages of the siting process.

  13. Reduced seizure threshold and altered network oscillatory properties in a mouse model of Rett syndrome.

    PubMed

    McLeod, F; Ganley, R; Williams, L; Selfridge, J; Bird, A; Cobb, S R

    2013-02-12

    Rett syndrome (RTT) is a disorder with a pronounced neurological phenotype and is caused mainly by mutations in the X-linked gene MECP2. A common feature of RTT is an abnormal electroencephalography and a propensity for seizures. In the current study we aimed to assess brain network excitability and seizure propensity in a mouse model of RTT. Mice in which Mecp2 expression was silenced (Mecp2(stop/y)) showed a higher seizure score (mean=6 ± 0.8 compared to 4±0.2 in wild-type [WT]) and more rapid seizure onset (median onset=10 min in Mecp2(stop/y) and 32 min in WT) when challenged with the convulsant drug kainic acid (25mg/kg). Hippocampal slices from Mecp2(stop/y) brain displayed no spontaneous field potential activities under control conditions but showed higher power gamma frequency field potential oscillations compared to WT in response to kainic acid (400 nM) in vitro. Brain slices challenged with the GABA(A)-receptor antagonist bicuculline (0.1-10 μM) and the potassium channel blocker 4-aminopyridine (1-50 μM) also revealed differences between genotypes with hippocampal circuits from Mecp2(stop/y) mouse slices showing enhanced epileptiform burst duration and frequency. In contrast to these network level findings, single cell analysis of pyramidal cells by whole-cell patch clamp recording revealed no detectable differences in synaptic or biophysical properties between methyl-CpG-binding protein 2 (MeCP2)-containing and MeCP2-deficient neurons. These data support the proposal that loss of MeCP2 alters network level excitability in the brain to promote epileptogenesis. Copyright © 2012 IBRO. Published by Elsevier Ltd. All rights reserved.

  14. Threshold dose for peripheral neuropathy following intraoperative radiotherapy (IORT) in a large animal model

    SciTech Connect

    Kinsella, T.J.; DeLuca, A.M.; Barnes, M.; Anderson, W.; Terrill, R.; Sindelar, W.F. )

    1991-04-01

    Radiation injury to peripheral nerve is a dose-limiting toxicity in the clinical application of intraoperative radiotherapy, particularly for pelvic and retroperitoneal tumors. Intraoperative radiotherapy-related peripheral neuropathy in humans receiving doses of 20-25 Gy is manifested as a mixed motor-sensory deficit beginning 6-9 months following treatment. In a previous experimental study of intraoperative radiotherapy-related neuropathy of the lumbro-sacral plexus, an approximate inverse linear relationship was reported between the intraoperative dose (20-75 Gy range) and the time to onset of hind limb paresis (1-12 mos following intraoperative radiotherapy). The principal histological lesion in irradiated nerve was loss of large nerve fibers and perineural fibrosis without significant vascular injury. Similar histological changes in irradiated nerves were found in humans. To assess peripheral nerve injury to lower doses of intraoperative radiotherapy in this same large animal model, groups of four adult American Foxhounds received doses of 10, 15, or 20 Gy to the right lumbro-sacral plexus and sciatic nerve using 9 MeV electrons. The left lumbro-sacral plexus and sciatic nerve were excluded from the intraoperative field to allow each animal to serve as its own control. Following treatment, a complete neurological exam, electromyogram, and nerve conduction studies were performed monthly for 1 year. Monthly neurological exams were performed in years 2 and 3 whereas electromyogram and nerve conduction studies were performed every 3 months during this follow-up period. With follow-up of greater than or equal to 42 months, no dog receiving 10 or 15 Gy IORT shows any clinical or laboratory evidence of peripheral nerve injury. However, all four dogs receiving 20 Gy developed right hind limb paresis at 8, 9, 9, and 12 mos following intraoperative radiotherapy.

  15. Using the product threshold model for estimating separately the effect of temperature on male and female fertility.

    PubMed

    Tusell, L; David, I; Bodin, L; Legarra, A; Rafel, O; López-Bejar, M; Piles, M

    2011-12-01

    Animals under environmental thermal stress conditions have reduced fertility due to impairment of some mechanisms involved in their reproductive performance that are different in males and females. As a consequence, the most sensitive periods of time and the magnitude of effect of temperature on fertility can differ between sexes. The objective of this study was to estimate separately the effect of temperature in different periods around the insemination time on male and on female fertility by using the product threshold model. This model assumes that an observed reproduction outcome is the result of the product of 2 unobserved variables corresponding to the unobserved fertilities of the 2 individuals involved in the mating. A total of 7,625 AI records from rabbits belonging to a line selected for growth rate and indoor daily temperature records were used. The average maximum daily temperature and the proportion of days in which the maximum temperature was greater than 25°C were used as temperature descriptors. These descriptors were calculated for several periods around the day of AI. In the case of males, 4 periods of time covered different stages of the spermatogenesis, the transit through the epididymus of the sperm, and the day of AI. For females, 5 periods of time covered the phases of preovulatory follicular maturation including day of AI and ovulation, fertilization and peri-implantational stage of the embryos, embryonic and early fetal periods of gestation, and finally, late gestation until birth. The effect of the different temperature descriptors was estimated in the corresponding male and female liabilities in a set of threshold product models. The temperature of the day of AI seems to be the most relevant temperature descriptor affecting male fertility because greater temperature records on the day of AI caused a decrease in male fertility (-6% in male fertility rate with respect to thermoneutrality). Departures from the thermal zone in temperature
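
    The core idea of the product threshold model described above is that an insemination succeeds only if both an unobserved male trial and an unobserved female trial succeed; the sketch below illustrates this with probit link functions and arbitrary liability values (not estimates from the rabbit data).

      # Minimal sketch of the product threshold idea: the observed outcome is
      # the product of two unobserved Bernoulli outcomes, one per sex, each
      # governed by a probit (threshold) model. Liability values are arbitrary.
      from math import erf, sqrt

      def probit(liability):
          """P(standard-normal latent variable plus liability exceeds zero)."""
          return 0.5 * (1.0 + erf(liability / sqrt(2.0)))

      def fertility_probability(male_liability, female_liability):
          return probit(male_liability) * probit(female_liability)

      # e.g. a heat-stressed male (lower liability) versus an unstressed one
      print(round(fertility_probability(0.2, 1.0), 3))   # ~0.487
      print(round(fertility_probability(0.6, 1.0), 3))   # ~0.611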

  16. Oral Sex and HPV: Population Based Indications.

    PubMed

    Mishra, Anupam; Verma, Veerendra

    2015-03-01

    Human pappilloma virus (HPV) is well established in etiology of uterine cervical cancers, but its role in head and neck cancer is strongly suggested through many epidemiological and laboratory studies. Although HPV-16 induced oropharyngeal cancer is a distinct molecular entity, its role at other sub-sites (oral cavity, larynx, nasopharynx, hypopharynx) is less well established. Oral sex is supposedly the most commonly practiced unnatural sex across the globe and may prove to be a potential transmitting link between cancers of the uterine cervix and the oropharynx in males particularly in those 10-15% non-smokers. In India with the second largest population (higher population density than China) the oral sex is likely to be a common 'recreation-tool' amongst the majority (poor) and with the concurrent highly prevalent bad cervical/oral hygiene the HPV is likely to synergize other carcinogens. Hence in accordance (or coincidently), in India the cervical cancer happens to be the commonest cancer amongst females while oral/oropharyngeal cancer amongst males. Oral sex as a link between these two cancer types, can largely be argued considering a poor level of evidence in the existing literature. The modern world has even commercialized oral sex in the form of flavored condoms. The inadequate world literature currently is of a low level of evidence to conclude such a relationship because no such specific prospective study has been carried out and also due to wide (and unpredictable) variety of sexual practices, such a relationship can only be speculated. This article briefly reviews the existing literature on various modes and population based indications for HPV to be implicated in head and neck cancer with reference to oral sexual practice.

  17. Limitations to the robustness of binormal ROC curves: effects of model misspecification and location of decision thresholds on bias, precision, size and power.

    PubMed

    Walsh, S J

    1997-03-30

    This paper concerns robustness of the binormal assumption for inferences that pertain to the area under an ROC curve. I applied the binormal model to rating method data sets sampled from bilogistic curves and observed small biases in area estimates. Bias increased as the range of decision thresholds decreased. The variance of area estimates also increased as the range of decision thresholds decreased. Together, minor bias and inflated variance substantially altered the size and power of statistical tests that compared areas under bilogistic ROC curves. I repeated the simulations by applying the binormal assumption to data sampled from binormal curves. Biases in area estimates were minimal for the binormal data, but the variance of area estimates was again higher when the range of decision thresholds was narrow. The size of tests that compared areas did not vary from the chosen significance level. Power fell, however, when the variance of area estimates was inflated. I conclude that inferences derived from the binormal assumption are sensitive to model misspecification and to the location of decision thresholds. A narrow span of decision thresholds increases the variability of area estimates and reduces the power of area comparisons. Model misspecification produces bias that alters test size and can exaggerate the loss of power that accompanies increased variability.
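
    For orientation, the binormal model referred to above parameterises the ROC curve by an intercept a and slope b on normal-deviate axes, and the area under the curve then has a closed form; the sketch below shows that relationship with illustrative parameter values.

      # Binormal ROC sketch: TPR = Phi(a + b * Phi^{-1}(FPR)) and
      # AUC = Phi(a / sqrt(1 + b^2)). Parameter values are illustrative only.
      from math import erf, sqrt
      from statistics import NormalDist

      def normal_cdf(z):
          return 0.5 * (1.0 + erf(z / sqrt(2.0)))

      def binormal_auc(a, b):
          return normal_cdf(a / sqrt(1.0 + b * b))

      def binormal_tpr(fpr, a, b):
          """True-positive rate of the binormal ROC at a given false-positive rate."""
          return normal_cdf(a + b * NormalDist().inv_cdf(fpr))

      print(round(binormal_auc(1.2, 1.0), 3))       # ~0.802
      print(round(binormal_tpr(0.1, 1.2, 1.0), 3))  # operating point at FPR = 0.1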

  18. Elaborating on threshold concepts

    NASA Astrophysics Data System (ADS)

    Rountree, Janet; Robins, Anthony; Rountree, Nathan

    2013-09-01

    We propose an expanded definition of Threshold Concepts (TCs) that requires the successful acquisition and internalisation not only of knowledge, but also its practical elaboration in the domains of applied strategies and mental models. This richer definition allows us to clarify the relationship between TCs and Fundamental Ideas, and to account for both the important and the problematic characteristics of TCs in terms of the Knowledge/Strategies/Mental Models Framework defined in previous work.

  19. Genetic threshold hypothesis of neocortical spike-and-wave discharges in the rat: an animal model of petit mal epilepsy.

    PubMed

    Vadász, C; Carpi, D; Jando, G; Kandel, A; Urioste, R; Horváth, Z; Pierre, E; Vadi, D; Fleischer, A; Buzsáki, G

    1995-02-27

    Neocortical high-voltage spike-and-wave discharges (HVS) in the rat are an animal model of petit mal epilepsy. Genetic analysis of total duration of HVS (s/12 hr) in reciprocal F1 and F2 hybrids of F344 and BN rats indicated that the phenotypic variability of HVS cannot be explained by a simple, monogenic Mendelian model. Biometrical analysis suggested the presence of additive, dominance, and sex-linked-epistatic effects, buffering maternal influence, and heterosis. High correlation was observed between average duration (s/episode) and frequency of occurrence of spike-and-wave episodes (n/12 hr) in parental and segregating generations, indicating that common genes affect both duration and frequency of the spike-and-wave pattern. We propose that both genetic and developmental-environmental factors control an underlying quantitative variable, which, above a certain threshold level, precipitates HVS discharges. These findings, together with the recent availability of rat DNA markers for total genome mapping, pave the way to the identification of genes that control the susceptibility of the brain to spike-and-wave discharges.

  20. Climate change in a Point-Over-Threshold model: an example on ocean-wave-storm hazard in NE Spain

    NASA Astrophysics Data System (ADS)

    Tolosana-Delgado, R.; Ortego, M. I.; Egozcue, J. J.; Sánchez-Arcilla, A.

    2009-09-01

    Climatic change is a problem of general concern. When dealing with hazardous events such as wind-storms, heavy rainfall or ocean-wave storms this concern is even more serious. Climate change might imply an increase of human and material losses, and it is worth devoting efforts to detect it. Hazard assessment of such events is often carried out with a point-over-threshold (POT) model. Time-occurrence of events is assumed to be Poisson distributed, and the magnitude of each event is modelled as an arbitrary random variable, whose upper tail is described by a Generalized Pareto Distribution (GPD). Independence between this magnitude and occurrence in time is assumed, as well as independence from event to event. The GPD models excesses over a threshold. If X is the magnitude of an event and x0 a value of the support of X, the excess over the threshold x0 is Y = X - x0, conditioned to X > x0. Therefore, the support of Y is (a segment of) the positive real line. The GPD model has a scale and a shape parameter. The scale parameter of the distribution is β > 0. The shape parameter, ξ, is real-valued, and it defines three different sub-families of distributions. GPD distributions with ξ < 0 have limited support (y_sup = -β/ξ). For values ξ > 0, distributions have infinite heavy tails (y_sup = +∞), and for ξ = 0 we obtain the exponential distribution, which has an infinite support but a well-behaved tail. The GPD distribution function is F_Y(y|β,ξ) = 1 - (1 + ξy/β)^(-1/ξ), for 0 ≤ y < y_sup and ξ ≠ 0, with exponential limit form F_Y(y|β,ξ=0) = 1 - exp(-y/β), where 0 ≤ y < ∞. Scarcity of data arises as an additional difficulty, as hazardous events are fortunately rare. A Bayesian approach seems thus quite a needed step to estimate the GPD parameters in a way that their uncertainty is adequately propagated to the hazard parameters, such as return periods. This Bayesian perspective allows us to transparently include necessary conditions on the parameters for our
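
    For readers who want to see the POT ingredients in code, the sketch below fits a GPD to synthetic excesses by maximum likelihood with scipy (the paper itself uses a Bayesian estimation, which is not reproduced here); the data, threshold choice, and level of interest are all assumptions.

      # Peaks-over-threshold sketch on synthetic data: pick a high threshold,
      # fit a GPD to the excesses, and evaluate an exceedance probability.
      # This is a plain maximum-likelihood fit, not the paper's Bayesian scheme.
      import numpy as np
      from scipy.stats import genpareto

      rng = np.random.default_rng(1)
      magnitudes = rng.gumbel(loc=4.0, scale=1.0, size=2000)   # synthetic event magnitudes

      x0 = np.quantile(magnitudes, 0.95)                # chosen threshold
      excesses = magnitudes[magnitudes > x0] - x0

      # scipy's shape parameter c plays the role of xi; loc is fixed at 0 for excesses
      xi, _, beta = genpareto.fit(excesses, floc=0)

      y = 3.0                                           # excess level of interest
      print(xi, beta, genpareto.sf(y, xi, loc=0, scale=beta))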

  1. Genetic threshold hypothesis of neocortical spike-and-wave discharges in the rat: An animal model of petit mal epilepsy

    SciTech Connect

    Vadasz, C.; Fleischer, A.; Carpi, D.; Jando, G.

    1995-02-27

    Neocortical high-voltage spike-and-wave discharges (HVS) in the rat are an animal model of petit mal epilepsy. Genetic analysis of total duration of HVS (s/12 hr) in reciprocal F1 and F2 hybrids of F344 and BN rats indicated that the phenotypic variability of HVS cannot be explained by a simple, monogenic Mendelian model. Biometrical analysis suggested the presence of additive, dominance, and sex-linked-epistatic effects, buffering maternal influence, and heterosis. High correlation was observed between average duration (s/episode) and frequency of occurrence of spike-and-wave episodes (n/12 hr) in parental and segregating generations, indicating that common genes affect both duration and frequency of the spike-and-wave pattern. We propose that both genetic and developmental-environmental factors control an underlying quantitative variable, which, above a certain threshold level, precipitates HVS discharges. These findings, together with the recent availability of rat DNA markers for total genome mapping, pave the way to the identification of genes that control the susceptibility of the brain to spike-and-wave discharges. 67 refs., 3 figs., 5 tabs.

  2. Learning foraging thresholds for lizards

    SciTech Connect

    Goldberg, L.A.; Hart, W.E.; Wilson, D.B.

    1996-01-12

    This work gives a proof of convergence for a randomized learning algorithm that describes how anoles (lizards found in the Caribbean) learn a foraging threshold distance. This model assumes that an anole will pursue a prey item if and only if it is within this threshold of the anole's perch. This learning algorithm was proposed by the biologist Roughgarden and his colleagues. They experimentally confirmed that this algorithm quickly converges to the foraging threshold that is predicted by optimal foraging theory. Our analysis provides an analytic confirmation that the learning algorithm converges to this optimal foraging threshold with high probability.

  3. Towards theory integration: Threshold model as a link between signal detection theory, fast-and-frugal trees and evidence accumulation theory.

    PubMed

    Hozo, Iztok; Djulbegovic, Benjamin; Luan, Shenghua; Tsalatsanis, Athanasios; Gigerenzer, Gerd

    2017-02-01

    Theories of decision making are divided between those aiming to help decision makers in the real, 'large' world and those who study decisions in idealized 'small' world settings. For the most part, these large- and small-world decision theories remain disconnected. We linked the small-world decision-theoretic concepts of signal detection theory (SDT) and evidence accumulation theory (EAT) to the threshold model and the large world of heuristic decision making that relies on fast-and-frugal decision trees (FFT). We connected these large- and small-world theories by demonstrating that seemingly different decision-making concepts are actually equivalent. In doing so, we were able (1) to link the threshold model to EAT and FFT, thereby creating decision criteria that take into account both the classification accuracy of FFT and the consequences built into the threshold model; (2) to demonstrate how threshold criteria can be used as a strategy for optimal selection of cues when constructing FFT; and (3) to show that the compensatory strategy expressed in the threshold model can be linked to a non-compensatory FFT approach to decision making. We also showed how construction and performance of FFT depend on having reliable information - the results were highly sensitive to the estimates of benefits and harms of health interventions. We illustrate the practical usefulness of our analysis by describing an FFT we developed for prescribing statins for primary prevention of cardiovascular disease. By linking SDT and EAT to the compensatory threshold model and to non-compensatory heuristic decision making (FFT), we showed how these two decision strategies are ultimately linked within a broader theoretical framework and thereby respond to calls for integrating decision theory paradigms. © 2015 The Authors. Journal of Evaluation in Clinical Practice published by John Wiley & Sons, Ltd.
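
    As a concrete reminder of the threshold-model decision rule being linked to SDT and FFT above, the sketch below computes the classic action threshold from a benefit/harm trade-off and applies it to an estimated probability; the benefit and harm values are illustrative, not those used for the statin FFT.

      # Classic threshold-model rule: act when the estimated probability of the
      # outcome exceeds P_t = harm / (harm + benefit). Values are illustrative.
      def action_threshold(benefit, harm):
          """Threshold probability P_t = 1 / (1 + benefit/harm) = harm / (harm + benefit)."""
          return harm / (harm + benefit)

      def decide(p_outcome, benefit, harm):
          return "act" if p_outcome >= action_threshold(benefit, harm) else "do not act"

      # e.g. acting benefits true positives 9 times more than it harms false positives
      print(action_threshold(benefit=9.0, harm=1.0))        # 0.1
      print(decide(p_outcome=0.15, benefit=9.0, harm=1.0))  # "act"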

  4. Numerical Analysis of Threshold between Laser-Supported Detonation and Combustion Wave Using Thermal Non-Equilibrium and Multi-Charged Ionization Model

    NASA Astrophysics Data System (ADS)

    Shiraishi, Hiroyuki; Kumagai, Yuya

    Laser-supported detonation (LSD), one type of laser-supported plasma (LSP), is an important phenomenon because it can generate high pressures and temperatures through laser absorption. In this study, using a thermal non-equilibrium model, we numerically simulate LSPs, which are categorized as either LSDs or laser-supported combustion waves (LSCs). For the analysis, a two-temperature (heavy-particle and electron temperature) model is used, because the electronic mode is excited first during laser absorption and a thermal non-equilibrium state arises easily. In the numerical analysis of LSDs, the laser absorption model is particularly important. Therefore, a multi-charged ionization model is introduced to evaluate precisely the propagation and the structure transition of LSD waves in the proximity of the LSC-LSD threshold. With the new model, the transition of the LSD structure near the threshold, indicated by the ionization delay length, is reproduced more realistically.
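
    An illustrative sketch (not the authors' fluid model) of why a two-temperature description matters: laser energy is deposited into the electron mode first and relaxes to the heavy particles over a finite exchange time, so the two temperatures separate during absorption. The heating rate, heat capacities, and exchange time below are arbitrary illustrative constants:

        def relax_two_temperature(t_end=5e-6, dt=1e-9, q_laser=5.0e12,
                                  c_e=1.0e3, c_h=5.0e3, tau_exch=1.0e-6):
            # T_e, T_h: electron and heavy-particle temperatures [K]
            # q_laser:  volumetric heating of the electron mode [arb. units / s]
            # c_e, c_h: effective heat capacities of the two modes [arb. units / K]
            # tau_exch: electron to heavy-particle energy exchange time [s]
            T_e, T_h = 300.0, 300.0
            for _ in range(int(t_end / dt)):
                exchange = (T_e - T_h) / tau_exch
                T_e += dt * (q_laser / c_e - exchange)   # electrons heated, then cooled by exchange
                T_h += dt * (c_e / c_h) * exchange       # heavy particles heated only by exchange
            return T_e, T_h

        T_e, T_h = relax_two_temperature()
        print(f"after 5 microseconds of heating: T_e = {T_e:.0f} K, T_h = {T_h:.0f} K")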

  5. A semi-analytic power balance model for low (L) to high (H) mode transition power threshold

    SciTech Connect

    Singh, R.; Jhang, Hogun; Kaw, P. K.; Diamond, P. H.; Nordman, H.; Bourdelle, C.

    2014-06-15

    We present a semi-analytic model for the low (L) to high (H) mode transition power threshold (P_th). Two main assumptions are made in our study. First, high poloidal mode number drift resistive ballooning modes (high-m DRBM) are assumed to be the dominant turbulence driver in a narrow edge region near the last closed flux surface. Second, the pre-transition edge profile and the turbulent diffusivity in this narrow edge region are assumed to follow turbulent equipartition. An edge power balance relation is derived by calculating the power flux dissipated through turbulent conduction and convection and through radiation in the edge region. P_th is obtained by imposing the turbulence quench rule due to sheared E × B rotation. Evaluation of P_th shows good agreement with experimental results from existing machines. The increase of P_th at low density (i.e., the existence of a roll-over density in the P_th versus density dependence) is shown to originate from the scale length of the density profile being longer than that of the temperature profile.

  6. Lowered threshold energy for femtosecond laser induced optical breakdown in a water based eye model by aberration correction with adaptive optics

    PubMed Central

    Hansen, Anja; Géneaux, Romain; Günther, Axel; Krüger, Alexander; Ripken, Tammo

    2013-01-01

    In femtosecond laser ophthalmic surgery, tissue dissection is achieved by photodisruption based on laser induced optical breakdown. In order to minimize collateral damage to the eye, laser surgery systems should be optimized towards the lowest possible energy threshold for photodisruption. However, optical aberrations of the eye and the laser system distort the irradiance distribution from an ideal profile, which causes a rise in breakdown threshold energy even if great care is taken to minimize the aberrations of the system during design and alignment. In this study we used a water chamber with an achromatic focusing lens and a scattering sample as an eye model and determined the breakdown threshold in single pulse plasma transmission loss measurements. Due to aberrations, the precise lower limit for the breakdown threshold irradiance in water is still unknown. Here we show that the threshold energy can be substantially reduced when using adaptive optics to improve the irradiance distribution by spatial beam shaping. We found that for initial aberrations with a root-mean-square wave front error of only one third of the wavelength, the threshold energy can still be reduced by a factor of three if the aberrations are corrected to the diffraction limit by adaptive optics. The transmitted pulse energy is reduced by 17% at twice the threshold. Furthermore, the gas bubble motions after breakdown for pulse trains at a 5 kilohertz repetition rate show a more transverse direction in the corrected case, compared to the more spherical distribution without correction. Our results demonstrate how both the applied and the transmitted pulse energy could be reduced during ophthalmic surgery when correcting for aberrations. As a consequence, the risk of retinal damage by transmitted energy and the extent of collateral damage to the focal volume could be minimized accordingly when using adaptive optics in fs-laser surgery. PMID:23761849
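
    A back-of-the-envelope sketch of the relationship discussed above: for a fixed breakdown threshold irradiance, the pulse energy needed to reach it scales roughly inversely with the Strehl ratio, estimated here with the extended Marechal approximation (reliable only for small RMS wavefront errors). All numbers are illustrative assumptions and are not taken from the study, which measured a factor-of-three reduction at one third of a wavelength of RMS error:

        import math

        def strehl_marechal(rms_error_waves):
            # Extended Marechal approximation: S ~ exp(-(2*pi*sigma/lambda)^2),
            # valid only for small RMS wavefront errors.
            return math.exp(-(2.0 * math.pi * rms_error_waves) ** 2)

        def threshold_energy(i_threshold, pulse_duration, spot_radius, strehl):
            # Energy needed so the aberration-degraded peak irradiance of a Gaussian
            # focus (flat-top temporal profile assumed) reaches the breakdown
            # threshold irradiance.
            peak_area = math.pi * spot_radius ** 2 / 2.0
            return i_threshold * pulse_duration * peak_area / strehl

        I_TH = 1.0e17   # assumed breakdown threshold irradiance in water [W/m^2]
        TAU = 300e-15   # assumed pulse duration [s]
        W0 = 2.5e-6     # assumed focal spot radius [m]

        for rms in (0.05, 0.10, 0.20):
            s = strehl_marechal(rms)
            e_th = threshold_energy(I_TH, TAU, W0, s)
            print(f"RMS error {rms:.2f} waves -> Strehl {s:.2f}, "
                  f"threshold energy {e_th * 1e9:.0f} nJ")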

  7. Lowered threshold energy for femtosecond laser induced optical breakdown in a water based eye model by aberration correction with adaptive optics.

    PubMed

    Hansen, Anja; Géneaux, Romain; Günther, Axel; Krüger, Alexander; Ripken, Tammo

    2013-06-01

    In femtosecond laser ophthalmic surgery, tissue dissection is achieved by photodisruption based on laser induced optical breakdown. In order to minimize collateral damage to the eye, laser surgery systems should be optimized towards the lowest possible energy threshold for photodisruption. However, optical aberrations of the eye and the laser system distort the irradiance distribution from an ideal profile, which causes a rise in breakdown threshold energy even if great care is taken to minimize the aberrations of the system during design and alignment. In this study we used a water chamber with an achromatic focusing lens and a scattering sample as an eye model and determined the breakdown threshold in single pulse plasma transmission loss measurements. Due to aberrations, the precise lower limit for the breakdown threshold irradiance in water is still unknown. Here we show that the threshold energy can be substantially reduced when using adaptive optics to improve the irradiance distribution by spatial beam shaping. We found that for initial aberrations with a root-mean-square wave front error of only one third of the wavelength, the threshold energy can still be reduced by a factor of three if the aberrations are corrected to the diffraction limit by adaptive optics. The transmitted pulse energy is reduced by 17% at twice the threshold. Furthermore, the gas bubble motions after breakdown for pulse trains at a 5 kilohertz repetition rate show a more transverse direction in the corrected case, compared to the more spherical distribution without correction. Our results demonstrate how both the applied and the transmitted pulse energy could be reduced during ophthalmic surgery when correcting for aberrations. As a consequence, the risk of retinal damage by transmitted energy and the extent of collateral damage to the focal volume could be minimized accordingly when using adaptive optics in fs-laser surgery.

  8. The integral pulse frequency modulation model with time-varying threshold: application to heart rate variability analysis during exercise stress testing.

    PubMed

    Bailón, Raquel; Laouini, Ghailen; Grao, César; Orini, Michele; Laguna, Pablo; Meste, Olivier

    2011-03-01

    In this paper, an approach to heart rate variability analysis during exercise stress testing is proposed, based on the integral pulse frequency modulation (IPFM) model, in which a time-varying threshold is included to account for the nonstationary mean heart rate. The proposed technique allows the estimation of the autonomic nervous system (ANS) modulating signal using the methods derived for the IPFM model with constant threshold plus a correction, which is shown to be needed to take the time-varying mean heart rate into account. In simulations, this technique allows the estimation of the ANS modulation of the heart from the beat occurrence time series with lower errors than the IPFM model with constant threshold (1.1% ± 1.3% versus 15.0% ± 14.9%). On an exercise stress testing database, the ANS modulation estimated by the proposed technique is closer to physiology than that obtained from the IPFM model with constant threshold, which tends to overestimate the ANS modulation during recovery and underestimate it during the initial rest period.
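
    A schematic sketch of the model class discussed above: in an IPFM-type generator, the integral of (1 + m(t)), with m(t) the autonomic modulating signal, accumulates until it reaches the threshold, which here decreases over time to mimic the rising mean heart rate of an exercise test. This is not the authors' exact formulation or estimation method; m(t) and the threshold trend are invented for illustration:

        import numpy as np

        def ipfm_beats(duration=120.0, fs=1000.0):
            t = np.arange(0.0, duration, 1.0 / fs)
            m = 0.05 * np.sin(2 * np.pi * 0.25 * t)   # respiratory-like modulation
            T = 1.0 - 0.4 * (t / duration)            # mean heart period: 1.0 s -> 0.6 s
            beats, integral = [], 0.0
            for i in range(len(t)):
                integral += (1.0 + m[i]) / fs         # accumulate (1 + m(t)) dt
                if integral >= T[i]:                  # time-varying threshold reached
                    beats.append(t[i])
                    integral -= T[i]                  # reset for the next beat
            return np.array(beats)

        beats = ipfm_beats()
        rr = np.diff(beats)
        print(f"{len(beats)} beats; first RR {rr[0]:.3f} s, last RR {rr[-1]:.3f} s")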

  9. An Analytical Threshold Voltage Model of Fully Depleted (FD) Recessed-Source/Drain (Re-S/D) SOI MOSFETs with Back-Gate Control

    NASA Astrophysics Data System (ADS)

    Saramekala, Gopi Krishna; Tiwari, Pramod Kumar

    2016-10-01

    This paper presents an analytical threshold voltage model for back-gated, fully depleted (FD), recessed-source/drain (Re-S/D) silicon-on-insulator (SOI) metal-oxide-semiconductor field-effect transistors (MOSFETs). Analytical surface potential models have been developed at the front and back surfaces of the channel by solving the two-dimensional (2-D) Poisson's equation in the channel region with appropriate boundary conditions, assuming a parabolic potential profile in the transverse direction of the channel. The strong inversion criterion is applied to the front surface potential as well as to the back one in order to obtain separate threshold voltages for the front and back channels of the device. The device threshold voltage is taken to be associated with the surface that offers the lower threshold voltage. The developed model was analyzed extensively for a variety of device geometry parameters, such as the oxide and silicon channel thicknesses, the thickness of the source/drain extension in the buried oxide, and the applied bias voltages with back-gate control. The proposed model has been validated by comparing the analytical results with numerical simulation data obtained from ATLAS™, a 2-D device simulator from SILVACO.
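
    The device-specific 2-D model is not reproduced here, but the extraction step it relies on can be illustrated generically: apply the strong-inversion criterion (surface potential equal to twice the Fermi potential) to an analytical surface-potential relation and read off the gate voltage. The sketch below does this for the textbook long-channel relation Vg = Vfb + psi_s + gamma*sqrt(psi_s); all parameter values are illustrative:

        import math

        K_B_T_OVER_Q = 0.0259   # thermal voltage at 300 K [V]
        N_I = 1.0e10            # intrinsic carrier concentration of Si [cm^-3]
        EPS_SI = 1.04e-12       # permittivity of silicon [F/cm]
        EPS_OX = 3.45e-13       # permittivity of SiO2 [F/cm]
        Q_E = 1.602e-19         # elementary charge [C]

        def threshold_voltage(na_cm3=5e17, tox_cm=2e-7, vfb=-0.9):
            phi_f = K_B_T_OVER_Q * math.log(na_cm3 / N_I)         # Fermi potential
            cox = EPS_OX / tox_cm                                  # oxide capacitance [F/cm^2]
            gamma = math.sqrt(2.0 * Q_E * EPS_SI * na_cm3) / cox   # body-effect coefficient
            psi_s = 2.0 * phi_f                                    # strong-inversion criterion
            return vfb + psi_s + gamma * math.sqrt(psi_s)

        print(f"illustrative long-channel V_th = {threshold_voltage():.2f} V")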

  10. A threshold voltage model of short-channel fully-depleted recessed-source/drain (Re-S/D) UTB SOI MOSFETs including substrate induced surface potential effects
